\section{Introduction} Geomstats \cite{miolane2020geomstats} is an open-source Python package for statistics and learning on manifolds. Geomstats allows users to analyze complex data that belong to manifolds equipped with various geometric structures, such as Riemannian metrics. This type of data arises in many applications: in computer vision, the manifold of 3D rotations models movements of articulated objects like the human spine or robotic arms \cite{Arsigny:PHD:2006}; and in biomedical imaging, biological shapes are studied as elements of shape manifolds \cite{Dryden1998,Younes2012Spaces}. The manifolds implemented in Geomstats come equipped with Riemannian metrics that allow users to compute distances and geodesics, among other operations. Geomstats also provides statistical learning algorithms that are compatible with the Riemannian structures, \textit{i.e.}, that can be used in combination with any of the implemented Riemannian manifolds. These algorithms are geometric generalizations of common estimation, clustering, dimension reduction, classification and regression methods to nonlinear manifolds. Probability distributions are a type of complex data often encountered in applications: in text classification, multinomial distributions are used to represent documents by indicating word frequencies \cite{lebanon2012learning}; in medical imaging, multivariate normal distributions are used to model diffusion tensor images \cite{lenglet2006}. Many more examples of applications can be found in the rest of this paper. Spaces of probability distributions possess a nonlinear structure that can be captured by two main geometric representations: one provided by optimal transport and one arising from information geometry \cite{amari2016information}. In optimal transport, probability distributions are seen as elements of an infinite-dimensional manifold equipped with the Otto-Wasserstein metric \cite{otto2001geometry,ambrosio2013user}. 
By contrast, information geometry gives a finite-dimensional manifold representation of parametric families of distributions.\footnote{There also exists a non-parametric version that can be defined on the infinite-dimensional space of probability distributions \cite{friedrich1991fisher}, which we do not consider here.} Specifically, information geometry represents the probability distributions of a given parametric family by their parameter space, on which the Fisher information is used to define a Riemannian metric \textemdash the so-called \textit{Fisher-Rao metric} or \textit{Fisher information metric} \cite{rao1945}. This metric is a powerful tool to compare and analyze probability distributions inside a given parametric family. It is invariant to diffeomorphic changes of parametrization, and it is the only metric invariant with respect to sufficient statistics, as proved by Cencov \cite{cencov1982}. Most importantly, the Fisher-Rao metric comes with Riemannian geometric tools such as geodesics, geodesic distance and intrinsic means, which give an intrinsic way to interpolate between, compare and average probability distributions inside a given parametric family. By construction, geodesics and means for the Fisher-Rao metric never leave the parametric family of distributions, contrary to their Wasserstein-metric counterparts. These intrinsic computations can then serve as building blocks to apply learning algorithms to parametric probability distributions. The geometries of several parametric families have been studied in the literature, and some relate to well-known Riemannian structures: the Fisher-Rao geometry of univariate normal distributions is hyperbolic \cite{atkinson1981}; the Fisher-Rao geometry of multinomial distributions is spherical \cite{kass1989geometry}; and the Fisher-Rao geometry of multivariate normal distributions with fixed mean coincides with the affine-invariant metric on the space of symmetric positive definite matrices \cite{pennec2006riemannian}. 
\subsubsection*{Contributions.} Computational tools for optimal transport have been proposed, in Python in particular \cite{flamary2021pot}. However, to the best of our knowledge, there exists no wide-ranging open-source Python implementation of parametric information geometry, although a recent implementation exists in Julia \cite{arutjunjan2021}. To fill this gap, this paper presents a module of Geomstats that implements the Fisher-Rao geometries of standard parametric families of probability distributions. Each parametric family of distributions is implemented through its Fisher-Rao manifold with associated exponential and logarithm maps, geodesic distance and geodesics. These manifolds are compatible with the statistical learning algorithms of Geomstats' learning module, which can therefore be applied to probability distribution data. As in the rest of Geomstats, the implementation is object-oriented and extensively unit-tested. All operations are vectorized for batch computation, and support is provided for several execution backends, namely NumPy, Autograd, PyTorch and TensorFlow. \subsubsection*{Outline.} The rest of the paper is organized as follows. Section~\ref{sec:module} provides the necessary background in Riemannian geometry and introduces the structure of Geomstats' information geometry module, \textit{i.e.}, the Python classes used to define a Fisher-Rao geometry. Section~\ref{sec:catalogue} details the geometries of the parametric families implemented in the module, along with code illustrations and examples of real-world use cases from the literature. Section~\ref{sec:application} presents an application of the information geometry tools of Geomstats to geometric learning on probability distributions. Altogether, the proposed information geometry module represents the first comprehensive implementation of parametric information geometry in Python. 
\section{Information geometry module of geomstats}\label{sec:module} This section describes the design of the \codeobj{information\_geometry} module and its integration into Geomstats. The proposed module implements a Riemannian manifold structure for common parametric families of probability distributions, such as normal distributions, using the object-oriented architecture shown in Fig.~\ref{fig:architecture}. The Riemannian manifold structure is encoded by two Python classes: one for the parameter manifold of the family of distributions and one for the Fisher-Rao metric on this manifold. For example, in the case of normal distributions, these Python classes are called \codeobj{NormalDistributions} and \codeobj{NormalMetric}. They inherit from more general Python classes, in particular the \codeobj{Manifold}, \codeobj{Connection} and \codeobj{RiemannianMetric} classes. These are abstract classes that define structure but cannot be instantiated, contrary to their child classes \codeobj{NormalDistributions} and \codeobj{NormalMetric}. They also inherit from the \codeobj{InformationManifoldMixin} mixin and the \codeobj{FisherRaoMetric} class, respectively: these are Python structures specific to the information geometry module. This section details this architecture along with some theoretical background. For more details on Riemannian geometry, we refer the interested reader to a standard textbook such as \cite{do1992riemannian}. \begin{figure} \centering \centerline{ \includegraphics[scale=.45]{geomstats_ig_paper.pdf}} \smallskip \caption{Architecture of the information geometry module of \codeobj{Geomstats}. The \codeobj{InformationManifoldMixin} Python mixin and the \codeobj{FisherRaoMetric} Python class implement the building blocks of parametric information geometry. The most common parametric families of distributions are implemented as the Python classes represented in colors, which inherit from the \codeobj{InformationManifoldMixin} mixin. 
They are equipped with their respective Riemannian metrics, which themselves inherit from the \codeobj{FisherRaoMetric} class. The abstract (ABC) Python classes \codeobj{Manifold}, \codeobj{OpenSet}, \codeobj{LevelSet}, \codeobj{Connection}, \codeobj{RiemannianMetric} provide the tools of Riemannian geometry needed to compute on the information manifolds.} \label{fig:architecture} \end{figure} \subsection{Manifold} The \codeobj{Manifold} abstract class implements the structure of a \textit{manifold}, \textit{i.e.}, a space that locally resembles a vector space, without necessarily having its global flat structure. Manifolds of dimension $d$ can be defined in an abstract way, \textit{i.e.}, without considering their embedding in an ambient space, by ``gluing'' together small pieces of Euclidean space $\mathbb R^d$ using charts. We only consider smooth manifolds, for which the transition from one chart to another is smooth. In addition, submanifolds of a larger Euclidean space $\mathbb R^N$ can be defined locally in various ways: \textit{e.g.}, using a parametrization, an implicit function, or as the graph of a function \cite{guigui2023}. The simplest examples of manifolds are Euclidean spaces $\mathbb R^d$, or more generally finite-dimensional vector spaces, open sets of vector spaces (for which a single chart, the identity, suffices) and level sets of functions (defined globally by one implicit function). These important cases are implemented in the abstract classes \codeobj{VectorSpace}, \codeobj{OpenSet} and \codeobj{LevelSet}, which are child classes of \codeobj{Manifold} as shown in Figure~\ref{fig:architecture}. A $d$-dimensional manifold $M$ admits at each point $x\in M$ a \textit{tangent space} $T_xM$, which is a $d$-dimensional vector space. For open sets of $\mathbb R^d$, it can be identified with $\mathbb R^d$ itself. 
The classes that inherit from \codeobj{Manifold} contain methods that allow users to verify that an input is a point belonging to the manifold via the \codeobj{belongs()} method, or that an input is a tangent vector to the manifold at a given base point via the \codeobj{is\_tangent()} method (see Figure~\ref{fig:architecture}). \subsection{Connection} The \codeobj{Connection} class implements the structure of an affine connection, a geometric tool that generalizes straight lines, addition and subtraction to nonlinear manifolds. To this end, a connection allows us to take derivatives of vector fields, \textit{i.e.}, mappings $V:M\rightarrow TM$ that associate to each point $p$ a tangent vector $V(p)\in T_pM$. Precisely, an \textit{affine connection} is an operator $\nabla$ that acts on pairs of vector fields $(U,V) \mapsto \nabla_UV$ according to the following rules: for any vector fields $U,V,W$ and differentiable function $f$, \begin{align*} \nabla_{fU+V} W &= f\nabla_UW + \nabla_VW,\\ \nabla_U(fV+W) &= U(f)V + f\nabla_UV + \nabla_UW, \end{align*} where $U(f)$ denotes the action of the vector field $U$ on the differentiable function $f$. The derivative induced by the connection $\nabla$ is referred to as the \textit{covariant derivative}. \subsubsection*{Geodesics} If $\gamma(t)$ is a curve on $M$, its velocity $\dot\gamma(t)$ is a vector field along $\gamma$, \textit{i.e.}, $\dot\gamma(t)\in T_{\gamma(t)}M$ for all $t$. The acceleration of the curve is then the covariant derivative of this velocity field with respect to the affine connection $\nabla$. A curve $\gamma$ of zero acceleration, \begin{equation} \label{general_geod_eq} \nabla_{\dot\gamma}\dot\gamma=0, \end{equation} is called a \textit{$\nabla$-geodesic}. Geodesics are the manifold counterparts of straight lines in vector spaces. 
Equation~\eqref{general_geod_eq} translates into a system of ordinary differential equations (ODEs) for the coordinates of the geodesic $\gamma=(\gamma_1,\hdots,\gamma_d)$ \begin{equation}\label{geod_eq} \ddot\gamma_k+\sum_{i,j=1}^d\Gamma_{ij}^k(\gamma)\dot\gamma_i\dot\gamma_j=0, \quad k=1,\hdots,d, \end{equation} where the coefficients $\Gamma_{ij}^k$ are the \textit{Christoffel symbols} that define the affine connection in local coordinates. In the \codeobj{Connection} class, Equation~\eqref{geod_eq} is implemented in the \codeobj{geodesic\_equation()} method and the Christoffel symbols are implemented in the \codeobj{christoffels()} method (see Figure~\ref{fig:architecture}). \subsubsection*{Exp and Log maps} Existence results for solutions of ODEs allow us to define geodesics starting at a point $x$ with velocity $v\in T_xM$ for times $t$ in a neighborhood of zero, or equivalently for all times $t\in[0,1]$ but for tangent vectors $v$ of small norm. The \textit{exponential map} at $x\in M$ associates to any $v\in T_xM$ of sufficiently small norm the end point $\gamma(1)$ of the geodesic $\gamma$ starting from $x$ with velocity $v$: $$\exp_x(v)=\gamma(1), \quad \text{where} \begin{cases} \gamma \text{ is a geodesic}, \\ \gamma(0) = x, \, \dot\gamma(0) = v. \end{cases}$$ If $B$ is a small ball of the tangent space $T_xM$ centered at $0$ on which $\exp_x$ is defined, then $\exp_x$ is a diffeomorphism from $B$ onto its image and its inverse $\log_x \equiv \exp_{x}^{-1}$ defines the \textit{logarithm map}, which associates to any point $y$ the velocity $v\in T_xM$ necessary to reach $y$ when departing from $x$: $$\log_{x}(y)=v \quad\text{where}\quad \exp_x(v)=y.$$ The exponential and logarithm maps can be seen as generalizations of the Euclidean addition and subtraction to nonlinear manifolds. 
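To make the exponential map concrete, the following self-contained NumPy sketch (independent of the Geomstats API) integrates the geodesic equation~\eqref{geod_eq} with a fourth-order Runge-Kutta scheme. As a working example we use the Fisher-Rao metric of univariate normal distributions, $ds^2 = (dm^2 + 2d\sigma^2)/\sigma^2$ (detailed in Section~\ref{sec:catalogue}), whose nonzero Christoffel symbols, computed by hand, are $\Gamma^m_{m\sigma} = -1/\sigma$, $\Gamma^\sigma_{mm} = 1/(2\sigma)$ and $\Gamma^\sigma_{\sigma\sigma} = -1/\sigma$.

```python
import numpy as np

def acceleration(state):
    """Right-hand side of the first-order system equivalent to the
    geodesic equation, for the univariate normal Fisher-Rao metric.

    Christoffel symbols (hand-computed): Gamma^m_{m sigma} = -1/sigma,
    Gamma^sigma_{mm} = 1/(2 sigma), Gamma^sigma_{sigma sigma} = -1/sigma.
    """
    m, sigma, dm, dsigma = state
    ddm = 2.0 * dm * dsigma / sigma
    ddsigma = dsigma**2 / sigma - dm**2 / (2.0 * sigma)
    return np.array([dm, dsigma, ddm, ddsigma])

def exp_map(point, vector, n_steps=1000):
    """Approximate exp_point(vector) by integrating the geodesic ODE
    up to time 1 with a classical Runge-Kutta 4 scheme."""
    state = np.array([point[0], point[1], vector[0], vector[1]], dtype=float)
    h = 1.0 / n_steps
    for _ in range(n_steps):
        k1 = acceleration(state)
        k2 = acceleration(state + h / 2 * k1)
        k3 = acceleration(state + h / 2 * k2)
        k4 = acceleration(state + h * k3)
        state = state + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return state[:2]

# Vertical geodesic from (m, sigma) = (0, 1) with velocity (0, 1):
# the ODE reduces to sigma'' = sigma'^2 / sigma, solved by sigma(t) = e^t,
# so exp_{(0,1)}((0,1)) is approximately (0, e).
end = exp_map((0.0, 1.0), (0.0, 1.0))
```

On this vertical geodesic the integration can be checked against the exact solution, which is one of the unit tests one would naturally write for such a routine.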
Both maps are implemented in the \codeobj{exp()} and \codeobj{log()} methods of the \codeobj{Connection} class, which further allow us to obtain other tools such as \codeobj{parallel\_transport()} (see Figure~\ref{fig:architecture}). We refer to \cite{guigui2023} for additional details on the \codeobj{Connection} class. \subsection{Riemannian metric} Just as an abstract Python class encodes the structure of manifolds, the abstract class \codeobj{RiemannianMetric} encodes the structure of Riemannian metrics. A \textit{Riemannian metric} is a collection of inner products $(\langle\cdot,\cdot\rangle_p)_{p\in M}$ defined on the tangent spaces of a manifold $M$, that depend on the base point $p\in M$ and vary smoothly with respect to it. \subsubsection*{Levi-Civita Connection} A Riemannian metric is associated with a unique affine connection, called the \textit{Levi-Civita connection}, which is the only affine connection that is symmetric and compatible with the metric, \textit{i.e.}, that verifies \begin{align*} UV-VU &= \nabla_UV - \nabla_VU,\\ U\langle V,W\rangle &= \langle \nabla_UV, W\rangle + \langle V,\nabla_UW\rangle, \end{align*} for all vector fields $U,V,W$. The geodesics of a Riemannian manifold are those of its Levi-Civita connection. The class \codeobj{RiemannianMetric} is therefore a child class of \codeobj{Connection} and inherits all its methods, including \codeobj{geodesic()}, \codeobj{exp()} and \codeobj{log()}. The class \codeobj{RiemannianMetric} overrides the \codeobj{christoffels()} method of the \codeobj{Connection} class and computes the Christoffel symbols using derivatives of the metric. By the compatibility property, geodesics have velocity of constant norm, \textit{i.e.}, they are parametrized by arc length. 
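To illustrate how Christoffel symbols can be recovered from the metric alone, here is a standalone NumPy sketch (a simplified stand-in, not the Geomstats implementation) that approximates the metric derivatives by central finite differences and applies the coordinate formula $\Gamma^k_{ij} = \frac{1}{2}\sum_l g^{kl}\left(\partial_i g_{lj} + \partial_j g_{li} - \partial_l g_{ij}\right)$. It is checked on the Fisher-Rao metric of univariate normal distributions, $ds^2 = (dm^2 + 2d\sigma^2)/\sigma^2$.

```python
import numpy as np

def metric_matrix(theta):
    """Fisher-Rao metric matrix of univariate normals, theta = (m, sigma)."""
    m, sigma = theta
    return np.diag([1.0 / sigma**2, 2.0 / sigma**2])

def christoffels(theta, eps=1e-6):
    """Gamma^k_{ij} = 1/2 g^{kl} (d_i g_{lj} + d_j g_{li} - d_l g_{ij}),
    with metric derivatives approximated by central differences."""
    theta = np.asarray(theta, dtype=float)
    dim = theta.size
    g_inv = np.linalg.inv(metric_matrix(theta))
    dg = np.zeros((dim, dim, dim))  # dg[l, i, j] = d_l g_{ij}
    for l in range(dim):
        step = np.zeros(dim)
        step[l] = eps
        dg[l] = (metric_matrix(theta + step)
                 - metric_matrix(theta - step)) / (2 * eps)
    gamma = np.zeros((dim, dim, dim))  # gamma[k, i, j] = Gamma^k_{ij}
    for k in range(dim):
        for i in range(dim):
            for j in range(dim):
                gamma[k, i, j] = 0.5 * sum(
                    g_inv[k, l] * (dg[i, l, j] + dg[j, l, i] - dg[l, i, j])
                    for l in range(dim))
    return gamma

# Hand-computed values at sigma = 2: Gamma^m_{m sigma} = -1/sigma = -0.5,
# Gamma^sigma_{mm} = 1/(2 sigma) = 0.25, Gamma^sigma_{sigma sigma} = -0.5.
gamma = christoffels((0.0, 2.0))
```

In practice Geomstats relies on automatic differentiation rather than finite differences, but the coordinate formula being implemented is the same.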
\subsubsection*{Geodesic Distance} The \codeobj{dist()} method implements the geodesic distance induced by the Riemannian metric, defined between two points $x,y\in M$ as the length of the shortest curve linking them, where the length of a (piecewise) smooth curve $\gamma:[0,1]\rightarrow M$ is computed by integrating the norm of its velocity $$d(x,y)=\inf_{\gamma;\gamma(0)=x, \gamma(1)=y}L(\gamma),\quad \text{where}\quad L(\gamma)=\int_0^1 || \dot\gamma(t) ||_{\gamma(t)} dt,$$ using the norm induced by the Riemannian metric. In a Riemannian manifold, geodesics extend another property of straight lines: they are locally length-minimizing. In a geodesically complete manifold, any pair of points can be linked by a minimizing geodesic, not necessarily unique, and \codeobj{dist()} can be computed using the $\codeobj{log}$ map: $$\forall x,y \in M,\quad d(x, y) = ||\log_x(y)||_x.$$ \subsubsection*{Curvatures} Finally, different notions of curvature are implemented, including the \codeobj{riemann\_curvature()} tensor and \codeobj{sectional\_curvature()}, among others (see Figure~\ref{fig:architecture}). The Riemann curvature tensor is defined from the connection, namely for any vector fields $U,V,W$ as $R(U,V)W = \nabla_{[U,V]}W+\nabla_V\nabla_UW-\nabla_U\nabla_VW$. Sectional curvature at $x\in M$ is a generalization of the Gauss curvature of a surface in $\mathbb R^3$. It is defined for any two-dimensional subspace $\sigma(u,v) \subset T_xM$ spanned by tangent vectors $u,v$, as $$K_{\sigma(u,v)}(x)=\frac{\langle R(u, v)v,u\rangle}{\langle u,u\rangle \langle v,v\rangle-\langle u,v\rangle^2}.$$ It yields important information on the behavior of geodesics, since a geodesically complete and simply connected manifold with everywhere nonpositive sectional curvature (a \textit{Hadamard manifold}) is globally diffeomorphic to $\mathbb R^d$ through the exponential map. 
Consequently, nonpositively curved spaces share some of the nice properties of Euclidean spaces: any two points can be joined by a unique minimizing geodesic, the length of which gives the geodesic distance. \subsection{Information manifold} The proposed \codeobj{information\_geometry} module is integrated into the differential geometry structures implemented in Geomstats. The module contains child classes of \codeobj{Manifold} that represent parametric families of probability distributions, and child classes of \codeobj{RiemannianMetric} that define the Fisher information metric on these manifolds. The combination of two such classes defines what we call an \textit{information manifold}, which is indicated by inheritance from the \codeobj{InformationManifoldMixin} mixin shown in Figure~\ref{fig:architecture}. \subsubsection*{Parameter Manifolds} Specifically, consider a family of probability distributions on a space $\mathcal X$, typically $\mathcal X=\mathbb R^n$ for some integer $n$. Assume that the distributions in the family are absolutely continuous with respect to a reference measure $\lambda$ (such as the Lebesgue measure on $\mathbb R^n$) with densities $$f(x|\theta), \quad x\in\mathcal X, \theta\in\Theta,$$ with respect to $\lambda$, where $\theta$ is a parameter belonging to an open subset $\Theta$ of $\mathbb R^d$. This parametric family is then represented by the \textit{parameter manifold $\Theta$}. The \codeobj{information\_geometry} module implements this manifold as a child class of one of the abstract classes \codeobj{OpenSet} and \codeobj{LevelSet}, which are themselves children of \codeobj{Manifold}. Most of the parameter manifolds are implemented as child classes of \codeobj{OpenSet}, as shown in Figure~\ref{fig:architecture}. Other parameter manifolds are implemented more easily with another class. 
This is the case of \codeobj{CategoricalDistributions}, which inherits from \codeobj{LevelSet} since its parameter space is the interior of the simplex. \subsubsection*{Information Manifolds} Parameter manifolds also inherit from the mixin class \codeobj{InformationManifoldMixin}, which turns them into \textit{information manifolds}. First, this mixin endows them with specific methods such as \codeobj{sample()}, which returns a sample of the distribution associated with a given parameter $\theta \in \Theta$, or \codeobj{point\_to\_pdf()}, which returns the probability density function (or probability mass function) associated with a given parameter $\theta \in \Theta$ (see Figure~\ref{fig:architecture}). For example, to randomly generate a categorical distribution on a space of $5$ outcomes, we instantiate an object of the class \codeobj{CategoricalDistributions} with dimension $4$ using \codeobj{manifold = CategoricalDistributions(4)} and define \codeobj{parameter = manifold.random\_point()}. Then, in order to sample from this distribution, one uses \codeobj{samples = manifold.sample(parameter, n\_samples=10)}. Second, the \codeobj{InformationManifoldMixin} endows the parameter manifolds with a Riemannian metric defined using the Fisher information, called the \textit{Fisher-Rao metric} and implemented in the \codeobj{FisherRaoMetric} class shown in Figure~\ref{fig:architecture}. The Fisher information is a notion from statistical inference that measures the quantity of information on the parameter $\theta$ contained in an observation with density $f(\cdot|\theta)$. 
It is defined, under certain regularity conditions \cite{lehmann2006theory}, as \begin{equation} \label{eq:fisherinfo} I(\theta)=-\mathbb E_\theta\left[\mathrm{Hess}_\theta\left(\log f(X|\theta)\right)\right], \end{equation} where $\mathrm{Hess}_\theta$ denotes the Hessian with respect to $\theta$ and $\mathbb E_\theta$ is the expectation taken with respect to the random variable $X$ with density $f(\cdot|\theta)$. If this $d$-by-$d$ matrix is everywhere positive definite, it provides a Riemannian metric on $\Theta$, called the Fisher-Rao metric, where the inner product between two tangent vectors $u,v$ at $\theta\in\Theta$ is defined by \begin{equation} \label{eq:FRmetric} \langle u,v\rangle_{\theta}=u^\top I(\theta) v. \end{equation} Here the tangent vectors $u, v$ are simply vectors of $\mathbb R^d$ since $\Theta$ is an open subset of $\mathbb R^d$. In the sequel, we will describe the Fisher-Rao metric of different parametric statistical families by providing the expression of the infinitesimal length element $$ds^2= \langle d\theta, d\theta\rangle_\theta = d\theta^\top I(\theta)\,d\theta.$$ The metric matrix $I$ is implemented using automatic differentiation in the \codeobj{FisherRaoMetric} class. This allows users to obtain the Fisher-Rao metric of any parametric family of probability distributions whose probability density function is known. For example, a user can compute the Fisher-Rao metric of the normal distributions with the syntax given below, which uses automatic differentiation behind the scenes. \begin{minted}{python}
class MyInformationManifold(InformationManifoldMixin):
    def __init__(self):
        self.dim = 2

    def point_to_pdf(self, point):
        means = point[..., 0]
        stds = point[..., 1]

        def pdf(x):
            constant = 1. / gs.sqrt(2 * gs.pi * stds**2)
            return constant * gs.exp(
                -((x - means) ** 2) / (2 * stds**2))

        return pdf

metric = FisherRaoMetric(
    information_manifold=MyInformationManifold(),
    support=(-10, 10))
\end{minted} The user can then access the Fisher-Rao metric matrix $I(\theta)$ at $\theta = (1., 1.)$ with the code below. \begin{minted}{python}
print(metric.metric_matrix(gs.array([1., 1.])))
>>> array([[1.00000000e+00, 1.11022302e-16],
           [1.11022302e-16, 2.00000000e+00]])
\end{minted} We recognize here the metric matrix of the Fisher-Rao metric on the univariate normal distributions. For convenience, the Fisher-Rao metrics of well-known parameter manifolds are already implemented in classes such as \codeobj{NormalMetric}, \codeobj{GammaMetric}, \codeobj{CategoricalMetric}, etc., as shown in Figure~\ref{fig:architecture}. These classes implement the closed forms of the Fisher-Rao metric when these are known. The corresponding parameter manifolds in the classes \codeobj{NormalDistributions}, \codeobj{GammaDistributions}, \codeobj{CategoricalDistributions}, etc., are equipped with their Fisher-Rao metric, which is accessible as an attribute called \codeobj{metric}. For example, the Fisher-Rao metric on the categorical distributions on a support of cardinality $5$ is found in the \codeobj{metric} attribute of the class of categorical distributions, \textit{i.e.}, \codeobj{metric = CategoricalDistributions(4).metric}. Its methods allow users to compute exponential and logarithm maps and geodesics using \texttt{metric.exp()}, \texttt{metric.log()} and \texttt{metric.geodesic()}, together with the various notions of curvature. \section{Information manifolds implemented in Geomstats}\label{sec:catalogue} This section details the tools of information geometry that we implement in each of the information manifold classes. As such, this section also provides a comprehensive review of the field of computational information geometry and its main applications. 
Each subsection further showcases code snippets using each information manifold, to demonstrate the diversity of use cases of the proposed \codeobj{information\_geometry} module. \subsection{One-dimensional parametric families} \subsubsection{Main results} The information geometry of one-dimensional information manifolds is simple: there is no curvature, the parameter manifold $\Theta$ is always diffeomorphic to $\mathbb R$, and there is only one path from one point to another in $\Theta$. However, the parametrization of this path can vary and leads to different interpolations between the probability distribution functions, as seen in Figure~\ref{fig:1Dcomparison}. \begin{figure}[h!] \centering \includegraphics[scale=.4]{interpolation_exp_1.png} \caption{Comparison between affine (left) and geodesic (right) interpolations between pdfs of exponential distributions of parameter $\lambda_0=0.1$ (black) and $\lambda_1=2$ (blue).} \label{fig:1Dcomparison} \end{figure} The Fisher-Rao geodesic distances are given in closed form for the Poisson, exponential, binomial (and Bernoulli) distributions in \cite{atkinson1981}. We also compute it for the geometric distribution (see the appendix). The results are summarized in Table~\ref{tab:dim1} and implemented in the \codeobj{dist()} methods of the corresponding metric classes. \begin{table}[h!] \resizebox{\textwidth}{!}{% \begin{tabular}{||c c c||} \hline Distribution & P.d.f. (or P.m.f.) 
& Geodesic distance \\ [0.5ex] \hline\hline Poisson (mean $\lambda$) & $\forall k \in \mathbb{N},\, P(k|\lambda) = \frac{\lambda^k}{k!} e^{-\lambda}, \, \lambda > 0$ & $d(\lambda_1, \lambda_2) = 2 |\sqrt{\lambda_1} - \sqrt{\lambda_2}|$ \\ [0.4ex] \hline Exponential (mean $\frac{1}{\lambda}$) & $\forall x \geq 0, f(x|\lambda) = \lambda e^{-\lambda x}, \, \lambda > 0$ & $d(\lambda_1, \lambda_2) = |\log \frac{\lambda_1}{\lambda_2}|$ \\ [0.4ex] \hline Binomial (known index $n$) & $\forall k \in \{0,...,n\},\, P(k|p) = \left(\begin{matrix} n \\ k \end{matrix} \right) p^k (1-p)^{n-k}, \, 0<p<1$ & $d(p_1, p_2) = 2 \sqrt{n} |\sin^{-1}(\sqrt{p_1}) - \sin^{-1}(\sqrt{p_2})|$ \\ [0.4ex] \hline Bernoulli ($1$-binomial) & $\forall k \in \{0,1\},\, P(k|p) = p^k (1-p)^{1-k}, \, 0<p<1$ & $d(p_1, p_2) = 2 |\sin^{-1}(\sqrt{p_1}) - \sin^{-1}(\sqrt{p_2})|$ \\ [0.4ex] \hline Geometric & $\forall k \in \mathbb{N^*}, \, P(k|p) = (1-p)^{k-1}p, \, 0<p<1$ & $d(p_1, p_2) = 2|\tanh^{-1}(\sqrt{1-p_1}) - \tanh^{-1}(\sqrt{1-p_2}) |$ \\ \hline \end{tabular}} \smallskip \caption{Fisher-Rao distance for one-dimensional parametric families of probability distributions implemented in the information geometry module. P.d.f. means probability density function and P.m.f. means probability mass function. These formulas are implemented in the \codeobj{dist()} methods in the metric Python classes of Figure~\ref{fig:architecture}.} \label{tab:dim1} \end{table} \subsubsection{\codeobj{Geomstats} example} The following code snippet shows how to compute the middle of the geodesic between points $p_1=.4$ and $p_2=.7$ on the one-dimensional $5$-binomial manifold. 
\begin{minted}{python}
import geomstats.backend as gs
from geomstats.information_geometry.binomial import BinomialDistributions

manifold = BinomialDistributions(5)
point_a = .4
point_b = .7
geodesic = manifold.metric.geodesic(
    initial_point=point_a, end_point=point_b)
middle = geodesic(.5)
print(middle)
>>> 0.5550055679356352
\end{minted} The geodesic midpoint of $p_1=.4$ and $p_2=.7$ on the $5$-binomial manifold is roughly $p=.555$, a little higher than the Euclidean midpoint ($0.55$)! \subsection{Multinomial and categorical distributions} \subsubsection{Main results} \textit{Multinomial distributions} model the results of an experiment with a finite number $k$ of outcomes, repeated $n$ times. When there is no repetition ($n=1$), the distribution is called a \textit{categorical distribution}. Here the number of repetitions $n$ is always fixed. The parameter $\theta$ of the parameter manifold encodes the probabilities of the different outcomes. The parameter manifold $\Theta$ is therefore the interior of the $(k-1)$-dimensional simplex $\Theta = \Delta_{k-1} = \{\theta \in \mathbb{R}^k: \forall i, \theta_i> 0, \theta_1+\hdots+\theta_k = 1 \}$. \begin{definition}[Probability mass function of the multinomial distribution] Given $k, n\in \mathbb{N^*}$ and $\theta=(\theta_1, ..., \theta_k) \in \Delta_{k-1}$, the p.m.f. of the $n$-multinomial distribution of parameter $\theta$ is $$p(x=(x_1,\hdots,x_k)|\theta) = \frac{n!}{x_1!\hdots x_k!} \theta_1^{x_1}\hdots\theta_k^{x_k},$$ where $x_i\in\{0,\hdots,n\}$ for all $i=1,\hdots,k$ and $x_1+\hdots+x_k=n$. \end{definition} The Fisher-Rao geometry on the parameter manifold $\Delta_{k-1}$ is well known, see for example \cite{kass1989geometry}. We summarize the geometry with the following propositions. 
\begin{proposition}[Fisher-Rao metric on the multinomial manifold] The Fisher-Rao metric on the parameter manifold $\Theta = \Delta_{k-1}$ of $n$-multinomial distributions is given by $$ds^2 = n\left(\frac{d\theta_1^2}{\theta_1} + ... + \frac{d\theta_k^2}{\theta_k} \right).$$ \end{proposition} Thus, one can see that the Fisher-Rao metric on the parameter manifold $\Theta = \Delta_{k-1}$ of multinomial distributions can be obtained as the pullback of the Euclidean metric on the positive $(k-1)$-sphere of radius $2\sqrt{n}$, $S_{k-1}^+ = \{\theta \in \mathbb{R}^k: \forall i, \theta_i> 0, \sum_{i=1}^k \theta_i^2 = 4n \}$, by the diffeomorphism $$R : \theta \mapsto R(\theta) = (2\sqrt{n\theta_1}, ..., 2\sqrt{n\theta_k}).$$ Therefore, the distance between two given parameters is the spherical distance between their images by the transformation $R$, and the curvature of the parameter manifold is that of the $(k-1)$-sphere of radius $2\sqrt{n}$. \begin{proposition}[Geodesic distance on the multinomial manifold] The geodesic distance between two parameters $\theta^1, \theta^2 \in \Delta_{k-1}$ has the following analytic expression: $$d(\theta^1, \theta^2) = 2\sqrt{n} \arccos{\left(\sum_{i=1}^k \sqrt{\theta_i^1 \theta_i^2} \right)}.$$ \end{proposition} \begin{proposition}[Curvature of the multinomial manifold] The Fisher-Rao manifold of multinomial distributions has constant sectional curvature $K=\frac{1}{4n}$, the curvature of a sphere of radius $2\sqrt{n}$. \end{proposition} \begin{figure} \centering \includegraphics[scale=.3]{categorical_2.png} \captionsetup{singlelinecheck=off} \caption[]{Information geometry of the 3-Categorical manifold implemented in the Python class \codeobj{CategoricalDistributions}. 
The orange geodesic ball is of radius 0.7 and centered on the red point $(0.1, 0.58, 0.32)$; the blue geodesic ball is of radius 0.3 and centered on the green point $(0.74, 0.21, 0.05)$.} \label{fig:categorical} \end{figure} We implement the p.m.f., Fisher-Rao metric, geodesic distance, and curvatures in the Python classes \codeobj{MultinomialDistributions} and \codeobj{MultinomialMetric} of the \codeobj{information\_geometry} module. \subsubsection{Applications} The Fisher-Rao geometry of multinomial distributions has been used in the literature, \textit{e.g.}, to formulate concepts in evolutionary game theory \cite{harper2009information} and to classify documents after a term-frequency representation in the simplex \cite{lebanon2012learning}. \subsubsection{\codeobj{Geomstats} example} This example shows how we use the \codeobj{information\_geometry} module to compute on the $6$-categorical manifold, \textit{i.e.}, the $5$-dimensional manifold of categorical distributions with $k=6$ outcomes. The following code snippet computes the geodesic distances between a given point on the $6$-categorical manifold and the vertices of the simplex $\Delta_5$. 
\begin{minted}{python}
import geomstats.backend as gs
from geomstats.information_geometry.categorical import CategoricalDistributions

manifold = CategoricalDistributions(dim=5)
point_a = gs.array([.1, .2, .1, .3, .15, .15])
point_b = gs.array([.25, .25, .1, .05, .05, .3])
vertices = list(gs.eye(6))
distances_a = [manifold.metric.dist(point_a, vertex) for vertex in vertices]
distances_b = [manifold.metric.dist(point_b, vertex) for vertex in vertices]
print(f"distances_a = {[float(str(distance)[:5]) for distance in distances_a]}")
print(f"distances_b = {[float(str(distance)[:5]) for distance in distances_b]}")
>>> distances_a = [2.498, 2.214, 2.498, 1.982, 2.346, 2.346]
>>> distances_b = [2.094, 2.094, 2.498, 2.69, 2.69, 1.982]
closest_a = vertices[gs.argmin(distances_a)]
closest_b = vertices[gs.argmin(distances_b)]
print(f"closest vertex to {point_a} is {closest_a}")
print(f"closest vertex to {point_b} is {closest_b}")
>>> closest vertex to [0.1 0.2 0.1 0.3 0.15 0.15] is [0. 0. 0. 1. 0. 0.]
>>> closest vertex to [0.25 0.25 0.1 0.05 0.05 0.3 ] is [0. 0. 0. 0. 0. 1.]
\end{minted} This result confirms the intuition that the vertex of the simplex closest, in terms of the Fisher-Rao geodesic distance, to a given categorical distribution is the one corresponding to its mode. Indeed, denoting by $e_i = (\delta_{ij})_j$, $i=1,\hdots,6$ the vertices of the simplex, we see that for all $i\in\{1,\hdots, 6\}$ and $\theta \in \Delta_5$, $d(\theta, e_i) = 2\arccos({\sqrt{\theta_i}})$ is minimal when $i$ matches the mode of the distribution. \subsection{Normal distributions} Normal distributions are ubiquitous in probability theory and statistics, especially via the central limit theorem. They are a very widely used modeling tool in practice, and they provide one of the first nontrivial Fisher-Rao geometries studied in the literature. \subsubsection{Main results} Let us start by reviewing the univariate normal model. 
\begin{definition}[Probability density function of the univariate normal distribution] The p.d.f. of the normal distribution of mean $m\in\mathbb R$ and variance $\sigma^2\in\mathbb R_+^*$ is $$f(x|\theta)=\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-m)^2}{2\sigma^2}\right).$$ \end{definition} It has been well known since the 1980s \cite{atkinson1981} that the corresponding Fisher-Rao metric with respect to $\theta=(m,\sigma)$ defines hyperbolic geometry on the parameter manifold $\Theta = \mathbb R\times\mathbb R_+^*$. \begin{proposition}[Fisher-Rao metric for the univariate normal manifold] The Fisher-Rao metric on the parameter manifold $\Theta = \mathbb{R} \times \mathbb{R_+^*}$ of normal distributions is $$ds^2=\frac{dm^2+2d\sigma^2}{\sigma^2}.$$ \end{proposition} Indeed, using the change of variables $m\mapsto m/\sqrt{2}$, we retrieve a multiple of the Poincaré metric $ds^2=2(dx^2+dy^2)/y^2$ on the upper half-plane $\{(x,y): \, x\in\mathbb R, y>0\}$, a model of two-dimensional hyperbolic geometry. Thus, closed-form expressions are known for the geodesics, which are either vertical segments or portions of half-circles orthogonal to the $m$-axis. The same is true for the distance. \begin{proposition}[Geodesic distance on the univariate normal manifold \cite{skovgaard1984}] The geodesic distance between normal distributions of parameters $(m_1, \sigma_1)$ and $(m_2, \sigma_2)$ in $\mathbb{R} \times \mathbb{R_+^*}$ is given by $$ d((m_1,\sigma_1),(m_2,\sigma_2))=\sqrt{2}\cosh^{-1}\left( \frac{(m_1-m_2)^2/2 + \sigma_1^2+\sigma_2^2}{2\sigma_1\sigma_2} \right).$$ \end{proposition} The curvature is the same as that of the $2$-Poincaré metric, and rescaling the Poincaré metric by a factor $2$ implies dividing the sectional curvature by the same factor. The manifold of univariate normal distributions therefore has constant negative curvature, and since it is simply connected and geodesically complete, we get the following result.
\begin{proposition}[Curvature of the univariate normal manifold \cite{skovgaard1984}.] The Fisher-Rao manifold of normal distributions has constant sectional curvature $K=-1/2$. In particular, any two normal distributions can be linked by a unique geodesic, the length of which gives the Fisher-Rao distance. \end{proposition} We implement the p.d.f., Fisher-Rao metric, geodesics, geodesic distance, and curvatures in the Python classes \codeobj{NormalDistributions} and \codeobj{NormalMetric} of the \codeobj{information\_geometry} module. Figure~\ref{fig:univariate_normals} shows 2 geodesics, 2 geodesic spheres, and 1 geodesic grid on the information manifold of univariate normal distributions. \begin{figure}[h!] \centering \centerline{ \includegraphics[scale=0.5]{normal_w_labels.png}} \captionsetup{singlelinecheck=off} \caption[]{Information geometry of the manifold of normal distributions implemented in the Python class \codeobj{NormalDistributions}. Up-left: two geodesics of length 1 departing from two random points A and B; Bottom-left: geodesic grid between A and B. Right: two geodesic spheres of radius 1 centered on A and B. } \label{fig:univariate_normals} \end{figure} We now turn to the multivariate case. \begin{definition}[Probability density function of multivariate normal distributions] In higher dimensions $p \geq 2$, the p.d.f. of the normal distribution of mean $m\in\mathbb R^p$ and covariance matrix $\Sigma \in S_p(\mathbb{R})^+$ is $$f(x|\theta)=\frac{1}{\sqrt{(2\pi)^p|\Sigma|}}\exp\left(-\frac{1}{2}(x-m)^\top\Sigma^{-1}(x-m)\right).$$ \end{definition} The Fisher-Rao geometry of multivariate normal distributions was first studied in the early 1980s \cite{sato1979geometrical,atkinson1981,skovgaard1984}. In general, no closed-form expressions are known for either the distance or the geodesics associated to the Fisher information metric in the multivariate case.
However, analytic expressions for these quantities are known for some particular submanifolds, and can be found, \textit{e.g.}, in the review paper \cite{pinele2019fisher}. The first of these particular cases corresponds to multivariate distributions with diagonal covariances. \begin{proposition}[Multivariate normal distributions with diagonal covariance matrices \cite{skovgaard1984}] The submanifold of Gaussian distributions with mean $m=(m_1,\hdots,m_p)$ and diagonal covariance matrix $\Sigma = \mathrm{diag}(\sigma_1^2, \hdots, \sigma_p^2)$ can be identified with the product manifold $(\mathbb R\times\mathbb R_+^*)^p=\{(m_1,\sigma_1,\hdots, m_p,\sigma_p): \, m_i\in\mathbb R, \sigma_i>0\}$, on which the Fisher-Rao metric is the product metric $$ds^2 = \sum_{i=1}^p \frac{dm_i^2 + 2d\sigma_i^2}{\sigma_i^2}.$$ The induced geodesic distance between distributions of means $m_j=(m_{ji})_{1\leq i\leq p}$ and covariance matrices $\Sigma_j=\mathrm{diag}(\sigma_{j1}^2,\hdots,\sigma_{jp}^2)$, $j=1, 2$, is given by $$d_p((m_1,\Sigma_1),(m_2,\Sigma_2))=\sqrt{\sum_{i=1}^p d^2((m_{1i},\, \sigma_{1i}), (m_{2i},\sigma_{2i}))},$$ where $d$ is the geodesic distance on the space of univariate normal distributions. \end{proposition} The second particular case where the geometry is explicit corresponds to multivariate normal distributions with fixed mean. In this case, the parameter space is the space of symmetric positive definite matrices and the Fisher-Rao metric coincides with the affine-invariant metric \cite{pennec2006riemannian}. Note that even though the parameter with respect to which the Fisher information is computed differs between the different submanifolds of the multivariate normal distributions, this does not affect the distance, which is invariant with respect to diffeomorphic changes of parametrization.
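As a concrete illustration, the product distance of the diagonal-covariance case can be assembled from the univariate closed form in a few lines of standalone code. The following minimal sketch uses only NumPy and is independent of the Geomstats implementation; the function names are ours, not part of the Geomstats API.
\begin{minted}{python}
import numpy as np

def univariate_normal_dist(m1, s1, m2, s2):
    """Fisher-Rao distance between N(m1, s1^2) and N(m2, s2^2),
    using the hyperbolic closed form of the univariate case."""
    arg = ((m1 - m2) ** 2 / 2 + s1 ** 2 + s2 ** 2) / (2 * s1 * s2)
    return np.sqrt(2) * np.arccosh(arg)

def diagonal_normal_dist(means_1, sigmas_1, means_2, sigmas_2):
    """Product distance between Gaussians with diagonal covariances:
    square root of the sum of squared componentwise univariate distances."""
    squared = [univariate_normal_dist(a, sa, b, sb) ** 2
               for a, sa, b, sb in zip(means_1, sigmas_1, means_2, sigmas_2)]
    return np.sqrt(sum(squared))

print(diagonal_normal_dist([1.0, 0.0], [1.0, 2.0], [4.0, 0.0], [1.0, 2.0]))
\end{minted}
Note that the argument of \codeobj{np.arccosh} is always at least $1$ by the arithmetic-geometric mean inequality, so the formula is well defined for any pair of parameters.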
\begin{proposition}[Multivariate normal distributions with fixed mean \cite{atkinson1981}] Let $\mathbf{m} \in \mathbb{R}^p.$ The geodesic distance between Gaussian distributions with fixed mean $\mathbf{m}$ and covariance matrices $\Sigma_1$, $\Sigma_2$ is $$d(\Sigma_1,\Sigma_2)=\sqrt{\frac{1}{2}\sum_{i=1}^p \log(\lambda_i)^2},$$ where the $\lambda_i$ are the eigenvalues of $\left(\Sigma_1\right)^{-1} \Sigma_2$. \end{proposition} The sectional curvature in the fixed-mean case is negative, although not constant \cite{lenglet2006}. We implement the information geometry of the normal distributions reviewed here within the Python classes \codeobj{NormalDistributions} and \codeobj{NormalMetric} shown in Figure~\ref{fig:architecture}. \subsubsection{Applications} The Fisher-Rao geometry of normal distributions has proved very useful in the field of diffusion tensor imaging \cite{lenglet2006} and more generally in image analysis, \textit{e.g.}, for detection \cite{maybank2004}, mathematical morphology \cite{angulo2014morphological} and segmentation \cite{verdoolaege2011geodesics, strapasson2016clustering}. We refer the interested reader to the review paper \cite{pinele2019fisher} and the references therein. \subsubsection{\codeobj{Geomstats} example} This example shows how users can leverage the proposed \codeobj{information\_geometry} module to get intuition on the Fisher-Rao geometry of normal distributions. Specifically, we compute the geodesics and geodesic distance between two normal distributions with the same variance and different means $m_1=1, m_2=4$, for two different values $\sigma^2=1, \sigma'^2=4$ of the common variance.
\begin{minted}{python}
import matplotlib.pyplot as plt

import geomstats.backend as gs
from geomstats.information_geometry.normal import NormalDistributions

manifold = NormalDistributions()
point_a = gs.array([1., 1.])
point_b = gs.array([4., 1.])
point_c = gs.array([1., 2.])
point_d = gs.array([4., 2.])
print(manifold.metric.dist(point_a, point_b))
print(manifold.metric.dist(point_c, point_d))
>>> 2.38952643457422
>>> 1.3862943611198915
times = gs.linspace(0, 1, 100)
geod_ab = manifold.metric.geodesic(initial_point=point_a, end_point=point_b)(times)
geod_cd = manifold.metric.geodesic(initial_point=point_c, end_point=point_d)(times)
max_variance_ab = geod_ab[gs.argmax(geod_ab[:, 1])]
max_variance_cd = geod_cd[gs.argmax(geod_cd[:, 1])]
plt.plot(*gs.transpose(geod_ab))
plt.scatter(*point_a, color='g')
plt.scatter(*point_b, color='g')
plt.scatter(*max_variance_ab, color='r')
plt.plot(*gs.transpose(geod_cd))
plt.scatter(*point_c, color='g')
plt.scatter(*point_d, color='g')
plt.scatter(*max_variance_cd, color='r')
plt.ylim([0., 3.])
plt.show()
\end{minted}
The two geodesics generated by this code snippet yield the two curves in Figure~\ref{fig:normal_geod}. We see that the higher the variance, the smaller the distance. As pointed out in \cite{costa2015fisher}, this result reflects the fact that the p.d.f.s overlap more when the variance increases. On each geodesic, we observe that the point of maximum variance corresponds to the geodesic's midpoint. \begin{figure} \includegraphics[width=0.5\linewidth]{normal_geodesics.png} \caption{Geodesics in the manifold of normal distributions. When the variance of the normal distributions at the extremities (green points) increases, the geodesic becomes shorter.
Variance increases along the geodesic and reaches a maximum in the middle (red points).} \label{fig:normal_geod} \end{figure} \subsection{Gamma distributions} Gamma distributions form a $2$-parameter family of distributions defined on the positive half-line, and are used to model the time between independent events that occur at a constant average rate. They have been widely used to model right-skewed data, such as cancer rates \cite{shinmoto2015diffusion}, insurance claims \cite{semenikhine2018multiplicative}, and rainfall \cite{husak2007use}. \subsubsection{Main results} Standard Gamma distributions are supported on $\mathbb{R_+^*}$ and constitute one of the prime examples of information geometry, notably for the variety of parametrizations they have been endowed with \cite{lauritzen1987statistical,burbea2002some,arwini2008}. \begin{definition}[Probability density function for Gamma distributions in natural coordinates] In natural coordinates, given $(\nu, \kappa) \in \left(\mathbb{R_+^*}\right)^2$, the p.d.f. of the two-parameter Gamma distribution of rate $\nu$ and shape $\kappa$ is: $$\forall x > 0,\, f(x | \nu, \kappa) = \frac{\nu^{\kappa}}{\Gamma(\kappa)} x^{\kappa-1} e^{-\nu x} \text{, where $\Gamma$ is the Gamma function}.$$ \end{definition} \begin{proposition}[Fisher-Rao metric for the Gamma manifold in natural coordinates \cite{arwini2008}] The Fisher-Rao metric on the Gamma manifold $\Theta = \left(\mathbb{R_+^*}\right)^2$ is $$ds^2 = \frac{\kappa}{\nu^2} d\nu^2 - 2\frac{d\nu d\kappa}{\nu} + \psi'(\kappa) d\kappa^2 ,$$ where $\psi$ is the digamma function, i.e. $\psi = \frac{\Gamma'}{\Gamma}$. \end{proposition} However, the fact that this metric is not diagonal in the natural parametrization encourages one to consider the manifold under a different set of coordinates: getting rid of the middle term in $d\nu d\kappa$ greatly simplifies the geometry.
\begin{proposition}[Fisher-Rao metric for the Gamma manifold in $(\gamma, \kappa)$ coordinates \cite{arwini2008}] The change of variable $(\gamma, \kappa) = (\frac{\kappa}{\nu}, \kappa)$ gives the following expression of the Fisher-Rao metric: $$ds^2 = \frac{\kappa}{\gamma^2} d\gamma^2 + \left(\psi'(\kappa) - \frac{1}{\kappa}\right) d\kappa^2.$$ \end{proposition} Both parametrizations $(\gamma, \kappa)$ and $(\kappa, \gamma)$ can be found in the literature. The use of $(\kappa, \gamma)$ is standard in information geometry and it is the one we use to implement the Gamma manifold. This yields the following expression of the p.d.f. \begin{definition}[Probability density function for Gamma distributions in $(\kappa, \gamma)$ coordinates] The p.d.f. of the two-parameter Gamma distribution of parameters $\gamma$, $\kappa$ is: $$\forall x > 0,\, f(x | \gamma, \kappa) = \frac{\kappa^{\kappa}}{\gamma^\kappa\Gamma(\kappa)} x^{\kappa-1} e^{-\frac{\kappa x}{\gamma}}.$$ \end{definition} \begin{proposition}[Geodesic equations on the Gamma manifold \cite{arwini2008}] The associated geodesic equations are: $$ \begin{cases} \ddot{\gamma} = \frac{\dot{\gamma}^2}{\gamma} - \frac{\dot{\gamma} \dot{\kappa}}{\kappa} \\ \ddot{\kappa} = \frac{\kappa \dot{\gamma}^2}{2 \gamma^2 (\kappa \psi'(\kappa)-1)} - \frac{(\psi''(\kappa)\kappa^2 + 1) \dot{\kappa}^2}{2 \kappa (\kappa \psi'(\kappa)-1)}. \end{cases} $$ \end{proposition} No closed-form expressions are known for either the distance or the geodesics associated to the Fisher information geometry with respect to $(\gamma, \kappa)$. Yet, our information geometry module is able to compute both numerically by leveraging the automatic differentiation computations available in the parent Python class of the \codeobj{FisherRaoMetric}. Figure~\ref{fig:gamma_geodesics} shows 3 geodesics, 2 geodesic spheres, and a geodesic grid for the Gamma manifold.
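In the absence of a closed form, the geodesic equations above can also be integrated numerically outside of Geomstats. The following minimal sketch uses SciPy's \codeobj{solve\_ivp} to shoot a geodesic from an initial point and velocity; the function and variable names are ours, not part of the Geomstats API. Note that $\kappa\psi'(\kappa)-1>0$ for all $\kappa>0$, so the denominators never vanish. In particular, shooting with a horizontal initial velocity ($\dot\gamma=0$) keeps $\gamma$ constant along the curve, as can be read off the first equation.
\begin{minted}{python}
from scipy.integrate import solve_ivp
from scipy.special import polygamma

def gamma_geodesic_rhs(t, state):
    """Right-hand side of the geodesic equations in (gamma, kappa) coordinates."""
    gamma, kappa, dgamma, dkappa = state
    psi1 = polygamma(1, kappa)  # psi'(kappa), the trigamma function
    psi2 = polygamma(2, kappa)  # psi''(kappa)
    ddgamma = dgamma ** 2 / gamma - dgamma * dkappa / kappa
    ddkappa = (kappa * dgamma ** 2 / (2 * gamma ** 2 * (kappa * psi1 - 1))
               - (psi2 * kappa ** 2 + 1) * dkappa ** 2
               / (2 * kappa * (kappa * psi1 - 1)))
    return [dgamma, dkappa, ddgamma, ddkappa]

# Shoot a geodesic from (gamma, kappa) = (1, 2) with horizontal initial
# velocity (dgamma = 0): gamma should stay constant along the curve.
sol = solve_ivp(gamma_geodesic_rhs, (0.0, 1.0), [1.0, 2.0, 0.0, 0.5],
                rtol=1e-10, atol=1e-12)
print(sol.y[0, -1])  # final value of gamma, numerically still 1
\end{minted}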
Running code from the information geometry module shows that some geodesics are horizontal (with $\gamma$ constant), which is notable. This can also be seen directly from the geodesic equation $\ddot{\gamma} = \dot{\gamma} \left(\frac{\dot{\gamma}}{\gamma} - \frac{ \dot{\kappa}}{\kappa} \right)$: a geodesic with a horizontal initial direction ($\dot\gamma = 0$) will stay horizontal. \begin{figure}[h!] \centering \centerline{ \includegraphics[scale=.4]{gamma.png}} \captionsetup{singlelinecheck=off} \caption[]{Information geometry of the manifold of Gamma distributions implemented in the Python class \codeobj{GammaDistributions}. Up-left: three geodesics of length 1 departing from three points A (red), B (green) and C (magenta, with $\gamma$ constant). Bottom-left: geodesic grid between A and B. Right: two geodesic spheres of radius 1 centered on A and B. } \label{fig:gamma_geodesics} \end{figure} There is a closed-form expression of the geodesic distance in the manifold of Gamma distributions with fixed $\kappa$, which is therefore a one-dimensional manifold. \begin{proposition}[Geodesic distance on the Gamma manifold with fixed $\kappa$.] The geodesic distance $d$ on the Gamma manifold, for a fixed $\kappa$, is given in the $(\kappa, \gamma)$ parameterization by: $$\forall \gamma_1, \gamma_2 >0,\, d(\gamma_1, \gamma_2) = \sqrt{\kappa} \left|\log \frac{\gamma_1}{\gamma_2}\right|,$$ or, in the $(\kappa, \nu)$ parameterization, by: $$\forall \nu_1, \nu_2 >0,\, d(\nu_1, \nu_2) = \sqrt{\kappa} \left|\log \frac{\nu_1}{\nu_2} \right|.$$ \end{proposition} This result, proved in the appendix, was expected, at least for integer values of $\kappa$. Consider one Gamma process as the sum of $\kappa$ i.i.d. exponential processes. Because the processes are independent, the Fisher information for the Gamma distribution is $\kappa$ times as large as that of the exponential distribution.
Consequently, the length of a geodesic on the Gamma manifold scales by a factor of $\sqrt{\kappa}$ compared to the exponential case. The sectional curvature of the Gamma manifold, which is plotted in Figure~\ref{fig:gamma_curvature}, is everywhere negative, bounded, and depends only on the $\kappa$ parameter. Since the manifold is also simply connected and geodesically complete, the following result holds. \begin{proposition}[Curvature of the Gamma manifold \cite{burbea2002some}] The sectional curvature of the Gamma manifold at each point $(\gamma,\, \kappa) \in \left(\mathbb{R_+^*}\right)^2$ verifies $$-\frac{1}{2} < K(\gamma,\kappa)=K(\kappa)=\frac{\psi'(\kappa) + \kappa \psi''(\kappa)}{4(-1 + \kappa \psi'(\kappa))^2} < -\frac{1}{4}.$$ In particular, any two Gamma distributions can be linked by a unique geodesic in the parameter space, the length of which gives the Fisher-Rao distance. \end{proposition} \begin{figure}[h!] \centering \includegraphics[scale=.5]{gamma_sectional_curvature_1.png} \includegraphics[scale=.5]{beta_sectional_curvature_1.png} \caption{Sectional curvature of the Fisher-Rao manifolds of gamma (left) and beta (right) distributions.} \label{fig:gamma_curvature} \end{figure} We implement the information geometry of the Gamma distributions reviewed here within the Python classes \codeobj{GammaDistributions} and \codeobj{GammaMetric} shown in Figure~\ref{fig:architecture}. Let us mention that the Fisher-Rao geometry of generalized Gamma distributions has also been studied in the literature \cite{chen2013riemannian, abbad2017rao, rebbah2019}, and will be the object of a future implementation in the proposed information geometry module. \subsubsection{Applications} The information geometry of both the standard Gamma and the generalized Gamma manifolds has been used in the literature. Most often, the goal is to implement a ``natural'' (geodesic) distance between distributions.
In that aspect, geometric reasoning on Gamma distributions finds purposes in many fields, ranging from performance improvement of classification methods in medical imaging \cite{rebbah2019} to texture retrieval \cite{abbad2017rao}. \subsubsection{\codeobj{Geomstats} example} In the following example, we compute the sectional curvature of the Gamma manifold at a given point. The sectional curvature is computed for the subspace spanned by two tangent vectors, but since the Gamma manifold is two-dimensional, the result does not depend on the chosen vectors.
\begin{minted}{python}
import geomstats.backend as gs
from geomstats.information_geometry.gamma import GammaDistributions

dim = 2
manifold = GammaDistributions()
point = gs.array([1., 2.])
vec_a = manifold.to_tangent(gs.random.rand(dim))
vec_b = manifold.to_tangent(gs.random.rand(dim))
vec_c = manifold.to_tangent(gs.random.rand(dim))
print(manifold.metric.sectional_curvature(vec_a, vec_b, point))
print(manifold.metric.sectional_curvature(vec_a, vec_c, point))
>>> -0.45630369144018423
>>> -0.4563036914401915
\end{minted}
A comprehensive example using the information geometry of the Gamma manifold in the context of traffic optimization in São Paulo can be found in \href{https://notebooks.gesis.org/binder/jupyter/user/geomstats-geomstats-yo5d6iiw/notebooks/notebooks/18_real_world_applications__sao_paulo_traffic_optimization.ipynb}{this notebook}. \subsection{Beta and Dirichlet distributions} \subsubsection{Main results} Beta distributions form a 2-parameter family of probability measures defined on the unit interval and are often used to define a probability distribution on probabilities. In Bayesian statistics, the beta distribution is the conjugate prior to the binomial distribution, meaning that if the prior on the probability of success in a binomial experiment belongs to the family of beta distributions, then so does the posterior distribution.
This allows users to estimate the distribution of the probability of success by iteratively updating the parameters of the beta prior. Beta and Dirichlet distributions are defined as follows: \begin{definition}[Probability density function of Beta distributions] The p.d.f. of beta distributions is parameterized by two shape parameters $\alpha,\beta>0$ and given by: $$f(x|\alpha, \beta)=\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}x^{\alpha-1}(1-x)^{\beta-1}, \quad \forall x\in[0, 1].$$ \end{definition} Figure~\ref{fig:beta_manifold} shows examples of p.d.f.s of beta distributions, which can take a wide variety of shapes. The distribution has a unique mode in $]0,1[$ when $\alpha,\beta>1$, and a mode in $0$ or $1$ otherwise. \begin{figure} \centering \includegraphics[scale=.4]{beta_pdf.png} \caption{P.d.f.s of the beta distributions plotted in Geomstats example \ref{ex:beta}. The Fisher-Rao geodesic distance between the parameters of the blue and green distributions is smaller than the one between the blue and orange, while the converse is true for the Euclidean distance.} \label{fig:beta_manifold} \end{figure} Beta distributions can be seen as a sub-family of the Dirichlet distributions, defined on the $(n-1)$-dimensional probability simplex $\Delta_{n-1}$ of $n$-tuples composed of non-negative components that sum up to one. Similarly to the beta distribution, the Dirichlet distribution is used in Bayesian statistics as the conjugate prior to the multinomial distribution. It is a multivariate generalization of the beta distribution in the sense that if $X$ is a random variable following a beta distribution of parameters $\alpha_1,\alpha_2$, then $(X,1-X)$ follows a Dirichlet distribution with the same parameters on $\Delta_1$. \begin{definition}[Probability density function of Dirichlet distributions] The p.d.f.
of Dirichlet distributions is parametrized by $n$ positive reals $\alpha_1,\hdots,\alpha_n>0$ and given by: $$f(x|\alpha_1,\hdots,\alpha_n)=\frac{\Gamma(\sum_{i=1}^n\alpha_i)}{\prod_{i=1}^n\Gamma(\alpha_i)}\prod_{i=1}^n{x_i}^{\alpha_i-1}, \quad \forall (x_1,\hdots,x_n)\in\Delta_{n-1}.$$ \end{definition} \begin{proposition}[The Fisher-Rao metric on the Dirichlet manifold \cite{lebrigant2021fisher}] The Fisher-Rao metric on the parameter manifold $\Theta=(\mathbb R_+^*)^n$ of Dirichlet distributions is $$ds^2=\sum_{i=1}^n\psi'(\alpha_i)d\alpha_i^2-\psi'(\bar\alpha)d\bar\alpha^2,$$ where $\bar\alpha=\sum_{i=1}^n\alpha_i$. \end{proposition} No closed forms are known for the geodesics of the beta and Dirichlet manifolds. Therefore, our \codeobj{information\_geometry} module solves the geodesic equations numerically. Figure~\ref{fig:beta_geodesic} shows 3 geodesics, 2 geodesic spheres and 1 geodesic grid for the beta manifold, and Figure~\ref{fig:dirichlet_geodesic} shows geodesic spheres in the $3$-Dirichlet manifold. In the beta manifold, the oval shape of the geodesic spheres suggests that the cost to go from one point to another is smaller along the lines of equation $\alpha_2/\alpha_1=\text{cst}$. This seems natural since these are the lines of constant distribution mean. \begin{figure}[h!] \centering \centerline{ \includegraphics[scale=.4]{beta.png}} \captionsetup{singlelinecheck=off} \caption[]{Information geometry of the Beta manifold implemented in \codeobj{BetaDistributions}. Up-left: three geodesics of length 1 departing from three points A (red), B (green) and C (magenta, with $\frac{\alpha}{\beta}$ constant). Bottom-left: geodesic grid between A and B. Right: two geodesic spheres of unit radius centered on A and B. } \label{fig:beta_geodesic} \end{figure} \begin{figure}[h!]
\centering \raisebox{.4cm}{ \includegraphics[scale=.38]{beta_manifold.png}} \includegraphics[scale=.4]{dirichlet_1.png} \caption{Left: rays of four geodesic spheres in the beta manifold, the oval shape of which suggests that the cost to go from one beta distribution to another is smaller along the lines of equation $\alpha_2/\alpha_1=\text{cst}$. This seems natural since these are the lines of constant distribution mean. Right: geodesic spheres of unit radius in the 3-Dirichlet manifold. } \label{fig:dirichlet_geodesic} \end{figure} The Dirichlet manifold is isometric to a hypersurface in flat $(n+1)$-dimensional Minkowski space through the transformation $$(x_1,\hdots,x_n)\mapsto (\eta(x_1),\hdots,\eta(x_n),\eta(x_1+\hdots+x_n)),$$ where $\eta'(x)=\sqrt{\psi'(x)}$. This allows us to show the following result on the curvature, which is plotted in Figure~\ref{fig:gamma_curvature} for dimension $2$. \begin{proposition}[\cite{lebrigant2021fisher}] The parameter manifold of Dirichlet distributions endowed with the Fisher-Rao metric is simply connected, geodesically complete and has everywhere negative sectional curvature. In particular, any two Dirichlet distributions can be linked by a unique geodesic, the length of which gives the Fisher-Rao distance. \end{proposition} \begin{figure} \centering \includegraphics[trim=100 0 100 0, scale=.4]{beta_clust_riem_4.png} \includegraphics[trim=100 0 100 20, scale=.4]{beta_clust_eucl_4.png} \caption{Results of K-means clustering of the beta distributions of Geomstats example \ref{ex:beta} using the Fisher-Rao metric (upper row) and the Euclidean distance (lower row), shown in terms of parameters (left column) and p.d.f.s (right column). Contrary to the Euclidean distance, the Fisher-Rao metric regroups the distributions with the same mean, i.e.
with parameters aligned on a straight line through the origin, and inside a group of same mean, it regroups the p.d.f.s with similar shapes.} \label{fig:beta_clustering} \end{figure} The classes \codeobj{BetaDistributions}, \codeobj{DirichletDistributions}, and \codeobj{DirichletMetric} implement the geometries described here. We note that \codeobj{BetaDistributions} inherits from \codeobj{DirichletDistributions} and thus inherits the computations coming from its Fisher-Rao metric, as shown in Figure~\ref{fig:architecture}. \subsubsection{Applications} The Fisher-Rao geometry of beta distributions has received less attention in the literature than the previously described families, although it has been used in \cite{lebrigant2021classifying} to classify histograms of medical data. \subsubsection{\codeobj{Geomstats} example}\label{ex:beta} The following example compares the Fisher-Rao distance with the Euclidean distance between the beta distributions shown in Figure~\ref{fig:beta_manifold}. The Euclidean distance between the beta distributions with p.d.f.s shown in blue and green is much larger than the one between the blue and orange. This does not seem satisfactory when considering the differences in mean and mass overlap. By contrast, the blue distribution is closer to the green than to the orange distribution according to the Fisher-Rao metric.
\begin{minted}{python}
import matplotlib.pyplot as plt

import geomstats.backend as gs
from geomstats.information_geometry.beta import BetaDistributions

manifold = BetaDistributions()
point_a = gs.array([1., 10.])
point_b = gs.array([10., 1.])
point_c = gs.array([10., 100.])

# Plot pdfs
samples = gs.linspace(0., 1., 100)
points = gs.stack([point_a, point_b, point_c])
pdfs = manifold.point_to_pdf(points)(samples)
plt.plot(samples, pdfs)
plt.show()

# Euclidean distances
print(gs.linalg.norm(point_a - point_b))
print(gs.linalg.norm(point_a - point_c))
>>> 12.73
>>> 90.45

# Fisher-Rao distances
print(manifold.metric.dist(point_a, point_b))
print(manifold.metric.dist(point_a, point_c))
>>> 4.16
>>> 1.76
\end{minted}
More generally, beta distributions with the same mean are close for the Fisher-Rao metric. Indeed, the oval shape of the geodesic balls shown in Figure~\ref{fig:dirichlet_geodesic} suggests that the cost to go from one point to another is smaller along the lines of equation $\alpha_2/\alpha_1=\text{cst}$, which are the lines of constant distribution mean. The next example performs K-means clustering, using either the Euclidean distance or the Fisher-Rao distance. We consider a set of beta distributions whose means take only two distinct values, which translates into the alignment of the parameters on two straight lines going through the origin, see Figure~\ref{fig:beta_clustering}. The clustering based on the Fisher-Rao metric (top row of the figure) distinguishes these two classes, and can further separate the distributions according to the shape of their p.d.f. The Euclidean distance on the other hand (bottom row of the figure) does not distinguish between the two different means.
\begin{minted}{python}
import geomstats.backend as gs
from geomstats.geometry.euclidean import Euclidean
from geomstats.information_geometry.beta import BetaDistributions
from geomstats.learning.kmeans import RiemannianKMeans

# Data
values = gs.array([1/i for i in range(1, 6)] + [i for i in range(2, 10)])
factor = 5
cluster_1 = gs.stack((values, factor * values)).T
cluster_2 = gs.stack((factor * values, values)).T
points = gs.vstack((cluster_1, cluster_2))
n_points = points.shape[0]
n_clusters = 4

# KMeans with the Euclidean distance
r2 = Euclidean(dim=2)
kmeans = RiemannianKMeans(metric=r2.metric, n_clusters=n_clusters, verbose=1)
centroids_eucl = kmeans.fit(points)
labels_eucl = kmeans.predict(points)

# KMeans with the Fisher-Rao distance
beta = BetaDistributions()
kmeans = RiemannianKMeans(metric=beta.metric, n_clusters=n_clusters, verbose=1)
centroids_riem = kmeans.fit(points)
labels_riem = kmeans.predict(points)
\end{minted}
\section{Application to text classification}\label{sec:application} This section presents a comprehensive use case of the proposed Geomstats module \codeobj{information\_geometry} for text classification using the information manifold of Dirichlet distributions. We use the Latent Dirichlet Allocation (LDA) model to represent documents in the parameter manifold of Dirichlet distributions. LDA is a generative model for text, where each document is seen as a random mixture of topics, and each topic as a categorical distribution over words \cite{blei2003latent}. Specifically, consider a corpus with several documents composed of words from a dictionary of size $V$, and $K$ topics represented by a $K\times V$ matrix $\beta$ whose $i$-th row $\beta_{i\bullet}$ gives the discrete probability distribution of the $i$-th topic over the vocabulary. Given a Dirichlet parameter $\alpha\in(\mathbb R_+^*)^K$, each document of $N$ words is generated as follows.
First, we sample mixing coefficients $\theta=(\theta_1,\hdots,\theta_K) \sim \mathrm{Dirichlet}(\alpha)$. Next, in order to generate each word, we sample a topic $i$ from $\mathrm{Categorical}(\theta)$. Finally, we sample a word from $\mathrm{Categorical}(\beta_{i\bullet})$. In other words, for each document the following two steps are iterated for $n=1,\hdots,N$: \begin{enumerate} \item select a topic $z_n$ according to $\mathbb P(z_n=i | \theta)=\theta_i$, $1\leq i\leq K$ \item select a word $w_n$ according to $\mathbb P(w_n=j|z_n,\beta)=\beta_{ij}$, $1\leq j\leq V$. \end{enumerate} Here ``$z_n=i$'' means that the $i^{th}$ topic is selected among the $K$ possible topics, and this is encoded as a vector of size $K$ full of zeros except for a $1$ in $i^{th}$ position. Similarly, ``$w_n=j$'' means that the $j^{th}$ word of the dictionary is selected and is encoded by a vector of size $V$ full of zeros except for a $1$ in $j^{th}$ position. The Dirichlet parameter $\alpha\in(\mathbb R_+^*)^K$ and the word-topic distributions $\beta \in \mathbb R^{K\times V}$ are the parameters of the model, which need to be estimated from data. Unfortunately, the likelihood of the LDA model cannot be computed and therefore cannot be maximized directly to estimate these parameters. In the seminal paper \cite{blei2003latent}, the authors introduce variational parameters that are document-specific, as well as a lower bound of the likelihood that involves these parameters. This bound can serve as a substitute for the true likelihood when estimating the parameters $\alpha$ and $\beta$. In binary classification experiments, the authors use the variational Dirichlet parameters to represent documents of the Reuters-21578 dataset and perform Euclidean support vector machine (SVM) classification in this low-dimensional representation space. Here we also use the parameter space of Dirichlet distributions to represent documents.
However, we use the Fisher-Rao metric instead of the Euclidean metric for comparison. We extract 140 documents from the 20Newsgroups dataset, a collection of news articles labeled according to their main topic. We select documents from 4 different classes: 'alt.atheism', 'comp.graphics', 'comp.os.ms-windows.misc', 'soc.religion.christian'. We then perform LDA on the obtained corpus, estimate the corresponding variational Dirichlet parameters on a space of $K=10$ topics, and use these to represent the documents in the $10$-dimensional parameter manifold of Dirichlet distributions. The pairwise distances between these parameters, regrouped by classes, for the Euclidean distance and the Fisher-Rao geodesic distance are shown in Figure~\ref{fig:text_distances}. While the 4-class structure does not appear clearly, 2 classes \textemdash one corresponding to religion and the other to computers \textemdash appear more distinctly with the Fisher-Rao metric than with the Euclidean metric. We use these distance matrices to perform $K$-nearest neighbors classification ($K=10$) after splitting the dataset into training and testing sets, and show the evolution of the classification error with respect to the percentage of data chosen for the training set in Figure~\ref{fig:text_knn}. We observe that the classification error is consistently lower for the Fisher-Rao metric than for the Euclidean metric. \begin{figure} \centering \includegraphics[scale=0.24]{text_eucl_distances} \includegraphics[scale=0.24]{text_riem_distances} \caption{Distance matrices between the variational Dirichlet parameters of 140 documents from 4 classes of the 20NewsGroup dataset, for the Euclidean distance (left) and the Fisher-Rao geodesic distance (right).
The indices are regrouped by classes, which are 'alt.atheism', 'comp.graphics', 'comp.os.ms-windows.misc', 'soc.religion.christian'.} \label{fig:text_distances} \end{figure} \begin{figure} \centering \includegraphics[scale=0.4]{text_knn.png} \caption{Classification error of the $k$-nearest neighbors algorithm applied to 140 documents from 4 classes of the 20Newsgroups dataset, using the Euclidean distance and the Fisher-Rao geodesic distance, plotted with respect to the percentage of the data chosen for the training set.} \label{fig:text_knn} \end{figure} \section*{Conclusion} In this paper, we presented a Python implementation of information geometry integrated into the Geomstats software. We showed that our module \codeobj{information\_geometry} contains the essential building blocks to perform statistics and machine learning on probability distribution data. As we have described the formulas and mathematical structures implemented in our module, we have also reviewed the main analytical results of the field and the main areas of applications. We also demonstrated a clear use case of information geometry for text classification, where the geometry of the probability space helps improve the data analysis. We hope that our implementation will inspire researchers to use, and contribute to, information geometry with the Geomstats library. \section*{Appendix} \subsection{Proof of geodesic distance for geometric distributions} A geometric distribution of parameter $p \in (0,1)$ has a p.m.f.
: $$\forall k \geq 1,\, P(k|p) = f(k|p) = (1-p)^{k-1}p.$$ Then, for $0<p<1$, as $\frac{\partial^2 \log f}{\partial p^2} = \frac{1-k}{(1-p)^2} - \frac{1}{p^2}$, we have: $$I(p) = - \mathbb{E}_{p}\left[\frac{\partial^2 \log f(X)}{\partial p^2} \right] = \frac{1}{p^2} + \frac{\mathbb{E}(X) -1}{(1-p)^2} =\frac{1}{p^2} + \frac{1}{p(1-p)} = \frac{1}{p^2(1-p)}$$ Then, with $ds$ the infinitesimal distance on the geometric manifold, we get: $$ds^2 = \frac{1}{p^2(1-p)} dp^2.$$ Therefore the distance between $p_1$ and $p_2 \geq p_1$ is given by: $$d(p_1, p_2) = \int_{p_1}^{p_2} \frac{1}{p} \frac{1}{\sqrt{1-p}}dp.$$With the change of variable $u = \sqrt{p}$, we obtain: $$d(p_1, p_2) = 2 \int_{\sqrt{p_1}}^{\sqrt{p_2}}\frac{du}{u \sqrt{1 - u^2}} = 2 \left[-\tanh^{-1}\left(\sqrt{1-u^2}\right)\right]_{\sqrt{p_1}}^{\sqrt{p_2}}.$$Finally: $$d(p_1, p_2) = 2\left(\tanh^{-1}\left(\sqrt{1-p_1}\right) - \tanh^{-1}\left(\sqrt{1-p_2}\right)\right).$$ \subsection{Proof of geodesic distance on the Gamma manifold with fixed $\kappa$} From the Fisher information matrix obtained in Section 3.4.1.2, we derive: $$ds^2 = \frac{\kappa}{\gamma^2} d\gamma^2, $$and then for $\gamma_1 \leq \gamma_2$: $$d(\gamma_1, \gamma_2) = \sqrt{\kappa} \int_{\gamma_1}^{\gamma_2} \frac{d\gamma}{\gamma} = \sqrt{\kappa} \log\left(\frac{\gamma_2}{\gamma_1}\right).$$ \bibliographystyle{plain}
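The closed-form distances derived above can be checked numerically; a minimal NumPy sketch with arbitrarily chosen parameter values (the closed form for the geometric distance matches the corrected sign convention, $p_1 \leq p_2$):

```python
import numpy as np

def geometric_fisher_rao(p1, p2):
    """Closed-form Fisher-Rao distance between Geometric(p1) and
    Geometric(p2), from the appendix derivation."""
    return 2.0 * abs(np.arctanh(np.sqrt(1.0 - p1)) - np.arctanh(np.sqrt(1.0 - p2)))

def geometric_fisher_rao_numeric(p1, p2, n=200_000):
    """Trapezoidal quadrature of ds = dp / (p * sqrt(1 - p))."""
    p = np.linspace(p1, p2, n)
    f = 1.0 / (p * np.sqrt(1.0 - p))
    return np.sum((f[1:] + f[:-1]) / 2.0) * (p[1] - p[0])

def gamma_fixed_kappa(gamma1, gamma2, kappa):
    """Closed-form distance on the Gamma manifold with fixed kappa."""
    return np.sqrt(kappa) * abs(np.log(gamma2 / gamma1))

closed = geometric_fisher_rao(0.2, 0.5)
numeric = geometric_fisher_rao_numeric(0.2, 0.5)
# the quadrature agrees with the closed form to high accuracy
```

The values $p_1=0.2$, $p_2=0.5$ are arbitrary test points; any pair in $(0,1)$ works.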
Sembawang Secondary School is a co-educational government secondary school located in Sembawang, Singapore. History In the late 1990s, the housing estate of Sembawang began to expand. To meet the needs of the community, Sembawang Secondary School was established in January 1999. As the construction of the new premises was underway, the pioneering batch of Secondary 1 classes and school staff were housed temporarily at Woodlands Ring Secondary School. On 25 August 2001, the school was finally declared open by Education Minister Tony Tan. In 2008, the school celebrated its tenth anniversary. Lim Wee Kiak, a Member of Parliament for Sembawang GRC, joined the school in unveiling a celebratory sculpture erected at the school's foyer. Principals Awards In total, three teachers from Sembawang Secondary School have been conferred the President's Awards for Teachers, which recognises their passion and perseverance in teaching.
\section{Introduction} Research in autonomous vehicles has attracted a lot of interest from researchers around the world. With the rise of electric vehicles over the past few years, autonomous navigation and path planning have become an inherent feature of these vehicles. In the presence of traffic, these vehicles should not only reach their destination but also follow traffic rules, prevent accidents, detect various traffic signs, and handle reckless drivers and rogue vehicles. To be able to perform the aforementioned tasks, the autonomous vehicle must have the ability to predict the motion of its surrounding vehicles. This will enable the vehicle to make necessary decisions at the right time. Anticipating traffic scenarios is thus a major functionality of autonomous vehicles in order to navigate safely amidst their human counterparts. This is a very challenging problem due to the unpredictable nature of traffic agents. Their behaviour is often determined by multiple latent variables that cannot be estimated beforehand in new and unknown environments, such as the mental state and driving experiences of human drivers, road and weather conditions, the destination of each vehicle in the traffic, and reckless behaviour of traffic agents that involves overtaking, abrupt lane changing without indication, etc. Many recent state$\shortminus$of$\shortminus$the$\shortminus$art deep learning models have utilized \textit{Long Short-Term Memory} (LSTM) networks\cite{hochreiter1997long} and \textit{Gated Recurrent Units} (GRUs)\cite{cho2014learning} for the trajectory prediction problem. One technique that is utilized by many approaches is that of an encoder-decoder architecture. In these approaches, the spatio-temporal context from the vehicle trajectories is extracted and then a recurrent neural network (RNN) based decoder is used to predict the future trajectories.
While they have been successful in regressing the future trajectories of traffic agents over a certain time horizon, they are heavily dependent on computational resources due to their complex architecture and require a lot of training time. This paper will attempt to address all the aforementioned problems by adapting a unique recurrent neural network called the Memory Neuron Network\cite{sastry1994memory}. The Memory Neuron Network is an extension of the traditional neural network with the addition of memory elements to each neuron in the network, which are capable of storing temporal information. This network has a simple architecture, and requires fewer computational resources than the currently available state$\shortminus$of$\shortminus$the$\shortminus$art deep learning methods. The performance of the proposed model is evaluated on the publicly available NGSIM US-101 dataset. Although the NGSIM dataset provides comprehensive data on real traffic agents, it does not contain sufficient data on reckless and rogue traffic agents. To address this situation, a synthetic dataset is generated using the \texttt{CARLA} simulator\cite{Dosovitskiy17} that contains the trajectories of multiple heterogeneous rogue traffic agents. As the proposed model is computationally less intensive, it allows for deployment onto all the rogue vehicles present in the real$\shortminus$time traffic simulation alongside an additional 80 normal cars. To summarize, our main contributions are as follows: \begin{itemize} \item A novel model is proposed that uses a recurrent neural network $\shortminus$ the Memory Neuron Network $\shortminus$ for the problem of spatio$\shortminus$temporal look$\shortminus$ahead trajectory prediction. \item The proposed model is evaluated on the publicly available US$\shortminus$101 dataset, and the RMSE is reported along with that of several state$\shortminus$of$\shortminus$the$\shortminus$art methods.
\item To evaluate the performance of our model with respect to reckless drivers, rogue vehicles are simulated on the \texttt{CARLA} simulator and their trajectories are recorded. The model is then implemented in a real$\shortminus$time simulation on each rogue vehicle with a look$\shortminus$ahead horizon of $5s$, demonstrating the robustness and the computational efficiency of the proposed model. \end{itemize} \section{Related Work} This section sets out to explore some of the various methods currently present in the literature to address the motion prediction problem. The existing literature can be broadly classified into three parts, which are discussed below. \subsection{Mechanics-based methods} In these approaches, vehicles are mathematically modelled using Newtonian laws of translation and rotation. Once the model is formed, an Unscented Kalman Filter (UKF) is used to estimate the states of the vehicles. \cite{xie2017vehicle} propose an Interactive Multiple Model Trajectory Prediction (IMMTP) method which combines physics-based and manoeuvre-based predictive models. \cite{veeraraghavan2006deterministic} use a deterministic sampling approach in the UKF process for a robust estimate of target trajectories. These models work well in certain scenarios and over short prediction horizons. However, these approaches tend to linearize the obtained models and hence are unable to capture the inherent non-linear characteristics in a generic traffic scenario. Another issue with these approaches is that the parameters of the mathematical model, such as the dimensions of the vehicle, its braking coefficients, steering torque, etc., must be set and tuned in real$\shortminus$time as soon as a vehicle is detected in the vicinity. This may not be feasible when the other agent's model is unknown. A detailed study on these methods can be found in \cite{schubert2008comparison}.
\subsection{Human behavior-based models} These techniques attempt to build a mathematical formulation of human behavior and utilize it as a model for the driving process. \cite{li2016human} apply the \textit{theory of planned behavior} to model the driver behavior, and develop a driver model that accounts for various human aspects such as driving experiences, emotions, age, gender, etc. \cite{ferreira2013gender} and \cite{puterman2014markov} apply control theory and Markov Decision Processes (MDPs) to model human behaviors specifically for the navigation process in a single lane. To extend the analysis to multi$\shortminus$lane junctions, Hidden Markov Models are proposed to model human behaviors in \cite{zou2006modeling}. Statistical models have been proposed in \cite{boyraz2009driver}, \cite{dapzol2005driver} and \cite{ziebart2009human} to predict driving manoeuvres and behaviors. These methods work best when knowledge of the human behaviors and their analysis is available beforehand. However, in the case of new and unknown environments these models fail to provide reliable predictions. \subsection{Deep learning methods} These methods use a spatial encoder to process the raw trajectory data, and then use recurrent neural networks to estimate the future trajectories. To extract the spatial context from the trajectories, \cite{gupta2018social}, \cite{vemula2018social} and \cite{qi2017pointnet} use a sequential point$\shortminus$based representation. Occupancy grids are another popular representation for the spatial context. These approaches model trajectories as a $2D$ sequence, which can be unstructured at times due to the missing temporal information. Extraction of the temporal context is normally done using RNNs. \cite{subhrajit2020bayesian} propose a Bayesian fuzzy model to accurately estimate the temporal dependencies.
\cite{li2019grip} and \cite{nikhil2018convolutional} also use Convolutional Neural Networks to encode the temporal context. To unify the spatial and temporal contexts, \cite{he2020ust} follows a simple and effective approach, where both contexts are encoded together using a Multi-Layer Perceptron, which drastically improves the prediction performance. \cite{messaoud2020attention} use an RNN$\shortminus$based encoder-decoder along with the attention mechanism of \cite{bahdanau2014neural} to model the spatio-temporal context. For predicting future trajectories, different variants of RNNs have been used. \cite{altche2017lstm} use a standard LSTM network for trajectory prediction on highways. \cite{bhattacharyya2018multi} and \cite{si2019agen} use Imitation Learning along with Generative Adversarial Networks to predict future trajectories. \cite{deo2018convolutional} use LSTMs along with Convolutional Neural Networks with social pooling layers and generate a multi-modal Gaussian model for trajectory prediction. While these approaches have helped in improving the performance, they require heavy computational resources. This can make them quite hard to implement in real$\shortminus$time scenarios. \section{Trajectory Prediction Framework} Fig. \ref{MNN} shows the proposed model for trajectory prediction. The figure shows a \textit{trajectory database}, which contains the change$\shortminus$in$\shortminus$trajectory samples for the multiple vehicles present in the dataset, and the Memory Neuron Network, which is shown as a black box. At every time instant $t$, the trajectory database provides the change in the $(x, y)$ coordinates for a particular vehicle, and the network estimates the next change in position of the vehicle. The initial values provided by the trajectory database are fed to the network multiple times sequentially, so that the predicted values reach a steady$\shortminus$state.
Once the steady$\shortminus$state is achieved, the network then receives consecutive input values from the trajectory database. \begin{figure} \begin{tikzpicture}[x=1.5cm, y=1.5cm, >=stealth] \draw (0.6, 3.5) rectangle (3.1, 5.0) node[pos=.5]{\makecell[l]{Trajectory \\ \ Database: \\ $\Delta x_1, \Delta x_2, ...$ \\ $\Delta y_1, \Delta y_2, ...$}}; \draw (3.1, 4.6) -- (3.7, 4.6) node[above, midway]{$\Delta x_{t}$}; \draw (3.1, 4.0) -- (3.7, 4.0) node[above, midway]{$\Delta y_{t}$}; \draw [->](3.7, 4.6) -- (4.2, 3.72); \draw [->](3.7, 4.0) -- (4.07, 3.6); \draw [fill= black!10] (1.2, 2.2) rectangle (2.9, 3.2) node[pos=.5]{\makecell[l]{Memory \\ Neuron \\ Network}}; \draw (2.9, 2.9) -- (3.7, 2.9) node[above, midway]{$\Delta \hat{x}_{t}$}; \draw (2.9, 2.4) -- (3.7, 2.4) node[above, midway]{$\Delta \hat{y}_{t}$}; \draw [->](3.7, 2.9) -- (4.07, 3.4); \draw [->](3.7, 2.4) -- (4.2, 3.27); \draw [->](4.5, 3.5) -- (5.1, 3.5) node[above, midway]{$e_t$}; \draw (3.0, 2.9) -- (3.0, 2.0); \draw (3.2, 2.4) -- (3.2, 1.8); \draw (3.0, 2.0) -- (2.4, 2.0); \draw (3.2, 1.8) -- (2.4, 1.8); \draw (1.9, 1.7) rectangle (2.4, 2.1) node[pos=.5]{\makecell[l]{$z^{\shortminus 1}$}}; \draw (1.9, 2.0) -- (1.0, 2.0); \draw (1.9, 1.8) -- (0.8, 1.8); \draw [->](1.0, 2.0) -- (1.0, 2.9) -- (1.2, 2.9) node at (0.8, 3.1) {$\Delta \hat{x}_{t\shortminus 1}$}; \draw [->](0.8, 1.8) -- (0.8, 2.4) -- (1.2, 2.4) node at (0.65, 2.55) {$\Delta \hat{y}_{t\shortminus 1}$}; \node at (3.0, 2.9)[circle, fill, inner sep=0.9pt]{}; \node at (3.2, 2.4)[circle, fill, inner sep=0.9pt]{}; \draw [fill=gray!50](4.3, 3.5) circle [radius=0.25] node{\makecell[l]{$\sum$}}; \end{tikzpicture} \centering \caption{Spatio-temporal lookahead model} \label{MNN} \end{figure} \begin{figure} \begin{tikzpicture}[x=1.5cm, y=1.5cm, >=stealth] \draw (0.2,0) -- (4.2, 0); \draw (0.2, 0) -- (0.2, -3); \draw (4.2, 0) -- (4.2, -3); \draw [dashed] (0.2, -1.1) -- (4.2, -1.1); \draw [dashed] (0.2, -1.9) -- (4.2, -1.9); \draw [dashed] (0.2, -0.2) -- 
(4.2, -0.2); \draw [fill=black!50] (1.5, -1.75) rectangle (2.5, -1.25); \draw [fill=black!10] (0.2, -1.75) rectangle (0.9, -1.25); \draw [fill=black!10] (3.5, -1.75) rectangle (4.2, -1.25); \draw [fill=black!10] (0.9, -0.9) rectangle (1.9, -0.4); \draw [fill=black!10] (2.7, -0.9) rectangle (3.7, -0.4); \draw [fill=black!10] (2.5, -2.6) rectangle (3.5, -2.1); \draw [fill=black!10] (0.5, -2.6) rectangle (1.5, -2.1); \draw [dashed] (0.2, -2.8) -- (4.2, -2.8); \draw (0.2, -3) -- (4.2, -3); \draw [->] (2.0,-1.5) -- (3.0,-1.5) node at (3.0, -1.35) {$y_t$}; \draw [->] (2.0, -1.5) -- (2.0, -2.1) node at (2.15, -2.1) {$x_t$}; \node at (2.0, -1.5)[circle, fill, inner sep=1.0pt]{}; \end{tikzpicture} \centering \caption{The coordinate system is shown for a particular ego vehicle in a multi-lane traffic environment. The y-axis is along the longitudinal direction and the x-axis is perpendicular to it.} \label{CS} \end{figure} \subsection{Problem Formulation} The coordinate system used for formulating the trajectory prediction problem is shown in Fig. \ref{CS}. It shows the ego vehicle (filled rectangle) and the non$\shortminus$ego vehicles surrounding it (hollow rectangles). The location of the vehicle is measured at its centre of mass in the local coordinate frame instead of the global coordinate frame (GPS data). The ego vehicle is assumed to be equipped with sensors that can measure the position and velocity of the surrounding non$\shortminus$ego vehicles in the local coordinate frame . In this manner, it is possible to obtain the track histories of the non$\shortminus$ego vehicles present in the vicinity of the ego vehicle. The inherent uncertainties of the sensors only provide an approximate estimate of the position and velocities of the surrounding vehicles. As a result, it is challenging to predict the future trajectories of these vehicles using simple kinematic equations. 
Thus, as followed in \cite{kumar2006identification}, a \textit{data-driven} model is developed that can relate the past track histories of the vehicles to their future trajectories. As the values of the trajectory data can change drastically when driving from one point to another over long periods of time, the difference between consecutive $(x, y)$ coordinates is taken: \begin{equation} \Delta \mathbf{x}_t = \mathbf{x}_t \shortminus \mathbf{x}_{t\shortminus 1} \label{state_diff} \end{equation} where $\mathbf{x}_t = (x_{_t}, y_{_t})$ are the local coordinates of a vehicle at time instant $t$. As the datasets are generated by sampling data points uniformly in time, the differences between trajectory samples will be bounded within a certain limit, ensuring network stability and improved performance. The trajectory prediction problem is then posed as a \textit{system identification} problem, with the state of the system given by $\Delta \mathbf{x}_t$. Assuming this system is \textit{observable}, from \cite{leontaritis1985input} the state of the system can be formulated as: \begin{equation} \Delta \mathbf{x}_t = F(\Delta\mathbf{x}_{t\shortminus 1}, \Delta\mathbf{x}_{t\shortminus 2}, \hdots) \label{formulation} \end{equation} where $F(.)$ is an unknown nonlinear function of the previous states. The goal of the network is to predict the next change in coordinates ($\Delta \hat{\mathbf{x}}_t$) of the vehicle at time $t$ such that the cost function $J$ is minimized at every time step. Here $J$ is given by \begin{align} J = \left\Vert{\Delta \mathbf{x}_{t} \shortminus \Delta \hat{\mathbf{x}}_{t}}\right\Vert_{_2} \end{align} where $\left\Vert . \right\Vert_{_2}$ represents the $L^2$ norm.
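A minimal NumPy illustration of the differential trajectory samples of Eq.~\eqref{state_diff} and the cost $J$; the trajectory values and the hypothetical network output below are made up:

```python
import numpy as np

# toy trajectory: (x, y) positions sampled uniformly in time (made-up values)
traj = np.array([[0.0, 0.0], [0.4, 0.1], [0.9, 0.3], [1.5, 0.6]])

# differential trajectory samples: dx_t = x_t - x_{t-1}
diffs = np.diff(traj, axis=0)

# cost J at one time step: L2 norm between actual and predicted difference
pred = np.array([0.45, 0.15])          # hypothetical network output
J = np.linalg.norm(diffs[0] - pred)
```

Bounding the inputs via differencing keeps them in a narrow range regardless of where on the road network the vehicle is.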
\begin{figure} \begin{tikzpicture}[x=1.5cm, y=1.5cm, >=stealth] \node [every neuron](nn-demo) at (0.8, 4.2) {}; \draw [<-] (nn-demo) -- ++(-0.5, 0); \draw [->] (nn-demo) -- ++(0.5, 0); \draw (0.65, 3.6) rectangle (0.95, 3.9) node[pos=.5]{$z^{\shortminus 1}$}; \node [neuron memory](mnn-demo) at (0.8, 3.3) {}; \draw [->] (mnn-demo) -- ++(0.5, 0); \draw (0.65, 2.75) rectangle (0.95, 3.05) node[pos=.5]{$z^{\shortminus 1}$}; \draw [->] (nn-demo.east) .. controls +(left:-4mm) and +(right:4mm) .. (0.95, 3.75) node at (1.3, 3.85) {\scriptsize $\alpha_1^i$}; \draw [->] (0.65, 3.75) .. controls +(left:4mm) and +(right:-4mm) .. (mnn-demo.west); \draw [->] (mnn-demo.east) .. controls +(left:-4mm) and +(right:4mm) .. (0.95, 2.9) node at (1.4, 2.95) {\scriptsize $1\shortminus\alpha_1^i$}; \draw [->] (0.65, 2.9) .. controls +(left:4mm) and +(right:-4mm) .. (mnn-demo.west); \draw [dashed] (0.12, 2.7) -- (1.65, 2.7) -- (1.65, 4.5) -- (0.12, 4.5) -- cycle; \draw [dashed] (0.12, 2.7) -- (0.7, 1.8); \draw [dashed] (1.65, 2.7) -- (1.3, 1.8); \node [every neuron](nn-legend) at (3.0, 4.0) {}; \node [neuron memory](mnn-legend) at (3.0, 3.4) {}; \node at (4.2, 4.0) {Network neuron}; \node at (4.2, 3.4) {Memory neuron}; \draw (2.5, 4.4) -- (5.2, 4.4) -- (5.2, 3.0) -- (2.5, 3.0) -- cycle; \foreach \i in {1, 2} \node [every neuron/.try](input-\i) at (1.0,2.5-\i) {}; \foreach \i in {1, 2} \node [neuron memory/.try](input-memory-\i) at (1.0, 2.0-\i) {}; \foreach \l [count=\i] in {x, y} \draw [<-] (input-\i) -- ++(-1, 0) node [above, midway] {$\Delta\hat{\l}_{t\shortminus 1}$}; \foreach \i in {1, 2} \draw[->] (input-\i) -- (input-memory-\i); \foreach \m [count=\i] in {1, memory, 3, memory, missing, 6, memory} \node [every neuron/.try, neuron \m/.try](hidden-\m-\i) at (2.4, 2.8-\i/2) {}; \foreach \i\j in {1/2, 3/4, 6/7} \draw[->] (hidden-\i-\i) -- (hidden-memory-\j); \foreach \i in {1, 2} \node [every neuron/.try](output-\i) at (3.8,2.5-\i) {}; \foreach \i in {1, 2} \node [neuron 
memory/.try](output-memory-\i) at (3.8, 2.0-\i) {}; \foreach \i in {1, 2} \draw[->] (output-\i) -- (output-memory-\i); \foreach \i in {1, 2} \foreach \j in {1, 3, 6} \draw [->] (input-\i) -- (hidden-\j-\j); \foreach \i in {1, 2} \foreach \j in {1, 3, 6} \draw [->] (input-memory-\i) -- (hidden-\j-\j); \foreach \i in {1, 3, 6} \foreach \j in {1, 2} \draw [->] (hidden-\i-\i) -- (output-\j); \foreach \i in {2, 4, 7} \foreach \j in {1, 2} \draw [->] (hidden-memory-\i) -- (output-\j); \foreach \i in {1, 2} \draw [->] (output-memory-\i.west) .. controls +(left:6mm) and +(right:-6mm) .. (output-\i.west); \foreach \l [count=\i] in {x, y} \draw [->] (output-\i) -- ++(1, 0) node [above, midway] {$\Delta\hat{\l}_{t}$}; \node at (1.6, 2.0) {\scriptsize $n_1^i$}; \node at (1.6, 1.7) {\scriptsize $v_1^i$}; \draw [->] (1.8, 1.95) .. controls +(left:1mm) and +(right:-3mm) .. (2.0, 2.7); \draw [->] (1.85, 1.78) .. controls +(left:1mm) and +(right:-2mm) .. (2.0, 2.4); \node at (2.17, 2.75) {\scriptsize $w_{11}^i$}; \node at (2.06, 2.53) {\scriptsize $f_{11}^i$}; \draw [->] (3.4, 0.09) .. controls +(left:2mm) and +(right:-3mm) .. (3.5, -0.4); \node at (3.6, -0.4) {\scriptsize $\beta_2^L$}; \draw [dashed] (0.7, 0.8) -- (1.3, 0.8) -- (1.3, 1.8) -- (0.7, 1.8) -- cycle; \end{tikzpicture} \centering \caption{The memory neuron network is fully connected with 6 hidden neurons. Every neuron has a memory neuron associated with it. Initially, the network is trained with zero inputs so that the weights stabilize to some equilibrium point, before providing the actual data.} \label{NN} \end{figure} \subsection{Network Architecture}\label{NA} The network architecture is shown in Fig. \ref{NN}. The figure shows some of the network parameters that provides clarity on understanding the functioning of the network. The Memory Neuron Network consists of fully connected \textit{network neurons} (large open circles) and its associated \textit{memory neurons} (small filled circles). 
There are weights associated with both the connections of network neurons and memory neurons. Both these weights are updated during backpropagation. To describe the functioning of the network, let $\Delta\hat{\mathbf{x}}_{t\shortminus1} = (\Delta\hat{x}_{t\shortminus1}, \Delta\hat{y}_{t\shortminus1})$ be the inputs to the network. The net output $n_j^{h}(t)$ of the $j^{th}$ network neuron in the hidden layer $h$ can be calculated as: \begin{align} m_j^h(t) = \sum_{k=1}^{2} w_{kj}^i n_k^i(t) + \sum_{k=1}^{2} f_{kj}^iv_k^i(t)\label{eq:1} \\ n_j^h(t) = g^h\left( m_j^h(t) \right), \ \ \ \ 1 \leq j \leq 6 \end{align} where, \begin{itemize} \item $w_{kj}^i$ is the weight of the connection from the $k^{th}$ network neuron in the input layer $i$ to the $j^{th}$ network neuron of the hidden layer $h$. \item $n_k^i(t)$ is the output of the $k^{th}$ network neuron in the input layer $i$. In our case, $n_1^i(t) = \Delta\hat{x}_{t\shortminus1}$ and $n_2^i(t) = \Delta\hat{y}_{t\shortminus1}$. \item $f_{kj}^i$ is the weight of the connection from the memory neuron corresponding to the $k^{th}$ network neuron in the input layer $i$ to the $j^{th}$ network neuron of the hidden layer $h$. \item $v_k^i(t)$ is the output of the memory neuron of the $k^{th}$ network neuron in the input layer $i$. \item $g^h(.) = \tanh(.)$ is the activation function of the network neurons present in the hidden layer. \end{itemize} The output of the memory neuron corresponding to the $j^{th}$ network neuron in the layer $l$ is given by: \begin{align} v_j^l(t) = \alpha_j^ln_j^l(t\shortminus1) + (1\shortminus\alpha_j^l)v_j^l(t\shortminus1), \ l \in \{ i, h, L \} \end{align} where $\alpha_j^l$ is the weight of the connection from the $j^{th}$ network neuron in layer $l$ to its corresponding memory neuron.
The net output $n_j^L(t)$ of the $j^{th}$ network neuron in the last layer $L$ is calculated as: \begin{align} m_j^L(t) = \sum_{k=1}^6 w_{kj}^hn_k^h(t) + \sum_{k=1}^6 f_{kj}^hv_k^h(t) + \beta_j^Lv_j^L(t) \\ n_j^L(t) = g^L\left( m_j^L(t) \right), \ \ \ \ 1 \leq j \leq 2 \label{eq:2} \end{align} where, \begin{itemize} \item $\beta_j^L$ is the weight of the connection from the memory neuron to its corresponding $j^{th}$ network neuron in the last layer $L$. \item $v_j^L(t)$ is the output of the memory neuron corresponding to the $j^{th}$ network neuron in the last layer $L$. \item $g^L(.)$ is a linear activation function with unit slope for the network neurons in the output layer $L$. \item $n_j^L(t)$ is the output of the $j^{th}$ network neuron in the last layer $L$. In our case, $n_1^L(t) = \Delta\hat{x}_{t}$ and $n_2^L(t) = \Delta\hat{y}_{t}$. \end{itemize} To ensure the stability of the network dynamics, the following condition is imposed: $0 \leq \alpha_j^l, \beta_j^L \leq 1$. The backpropagation algorithm is used to update all the weights of the network corresponding to both the network neurons as well as the memory neurons. The following squared error function is used for backpropagation: \begin{align} e(t) = \sum_{j=1}^{2} (n_j^L(t) - d_j(t))^2 \label{error_calc} \end{align} where $d_j(t)$ is the desired teaching signal that is derived from the trajectory database. In our case, $d_1(t) = \Delta x_t$ and $d_2(t) = \Delta y_t$. 
At update time $t = \tau$, the weights are updated using the following rules: \begin{align} w_{kj}^l(\tau+1) = w_{kj}^l(\tau) - \eta e_j^{l+1}(\tau)n_k^l(\tau), \ \ l \in \{i, h\} \label{update:1} \\ f_{kj}^l(\tau+1) = f_{kj}^l(\tau) - \eta e_j^{l+1}(\tau)v_k^l(\tau), \ \ l \in \{i, h\} \end{align} where $\eta$ is the learning rate for the weights of the network, and \begin{align} e_j^L(\tau) = \left( n_j^L(\tau) - d_j(\tau)\right), \ \ 1 \leq j \leq 2 \\ e_j^h(\tau) = \left(g^h\right)'\left(m_j^h(\tau)\right)\sum_{p=1}^{2}e_p^L(\tau)w_{jp}^h(\tau), \ \ 1 \leq j \leq 6 \end{align} The various memory coefficients are updated using the following equations: \begin{align} \alpha_j^l(\tau+1) = \alpha_j^l(\tau) - \eta'\frac{\partial e}{\partial v_j^l}(\tau)\frac{\partial v_j^l}{\partial \alpha_j^l}(\tau) \\ \beta^L_j(\tau + 1) = \beta_j^L(\tau) - \eta'e_j^L(\tau)v_j^L(\tau) \ \ \ \ \ \end{align} where $\eta'$ is the learning rate for updating the memory coefficients, and \begin{align} \frac{\partial e}{\partial v_j^h}(\tau) = \sum_{s=1}^{N_{l+1}}f^h_{js}(\tau)e_s^L(\tau) \\ \frac{\partial v_j^l}{\partial \alpha_j^l}(\tau) = n_j^l(\tau \shortminus 1) \shortminus v_j^l(\tau \shortminus 1) \label{update:2} \end{align} where $N_{l+1}$ is the number of network neurons in the layer next to $l$. The memory coefficients are hard$\shortminus$limited to $\left[ 0, 1 \right]$ if they happen to fall outside the range. For a detailed discussion on the functioning of the network and additional details, please refer to \cite{sastry1994memory}. A crucial requirement in system identification problems is to determine how many previous inputs and outputs are to be fed back to the model to capture the generic nonlinear input-output mapping of the model. The presence of the memory neurons ensures that this requirement is optimally learnt during the learning process.
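The feedforward and memory-neuron equations above can be condensed into a short NumPy sketch. Only the forward pass is shown; the backpropagation updates of the weights and memory coefficients are omitted, and the initial weight values are arbitrary and for illustration only:

```python
import numpy as np

class MemoryNeuronNetwork:
    """Forward-pass-only sketch of the Memory Neuron Network: 2 input,
    6 hidden and 2 output network neurons, one memory neuron per
    network neuron.  Training is not implemented here."""

    def __init__(self, n_in=2, n_hid=6, n_out=2, seed=0):
        rng = np.random.default_rng(seed)
        self.Wi = rng.normal(scale=0.1, size=(n_in, n_hid))   # w_{kj}^i
        self.Fi = rng.normal(scale=0.1, size=(n_in, n_hid))   # f_{kj}^i
        self.Wh = rng.normal(scale=0.1, size=(n_hid, n_out))  # w_{kj}^h
        self.Fh = rng.normal(scale=0.1, size=(n_hid, n_out))  # f_{kj}^h
        self.beta = np.zeros(n_out)                           # beta_j^L
        # memory coefficients alpha, initialized to zero as in Algorithm 1
        self.a_in, self.a_hid, self.a_out = (np.zeros(n_in),
                                             np.zeros(n_hid),
                                             np.zeros(n_out))
        # memory neuron states v_j^l(t) and previous neuron outputs n_j^l(t-1)
        self.v_in, self.v_hid, self.v_out = (np.zeros(n_in),
                                             np.zeros(n_hid),
                                             np.zeros(n_out))
        self.n_in_prev = np.zeros(n_in)
        self.n_hid_prev = np.zeros(n_hid)
        self.n_out_prev = np.zeros(n_out)

    def step(self, x):
        """One forward pass for input (dx_{t-1}, dy_{t-1})."""
        # memory neuron dynamics: v(t) = a * n(t-1) + (1 - a) * v(t-1)
        self.v_in = self.a_in * self.n_in_prev + (1 - self.a_in) * self.v_in
        self.v_hid = self.a_hid * self.n_hid_prev + (1 - self.a_hid) * self.v_hid
        self.v_out = self.a_out * self.n_out_prev + (1 - self.a_out) * self.v_out
        # hidden layer: tanh of weighted inputs plus weighted input memories
        h = np.tanh(x @ self.Wi + self.v_in @ self.Fi)
        # linear output layer, including the output-layer memory term
        y = h @ self.Wh + self.v_hid @ self.Fh + self.beta * self.v_out
        self.n_in_prev, self.n_hid_prev, self.n_out_prev = x, h, y
        return y

mnn = MemoryNeuronNetwork()
pred = mnn.step(np.array([0.1, -0.2]))   # predicted (dx_t, dy_t)
```

With the memory coefficients at zero, the network behaves as a static feedforward map; the coefficients only acquire temporal depth once trained.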
Note that the output of the network depends on the previous inputs as well as its own outputs due to the presence of memory neurons in the output layer. Thus, the estimated next state of the system $\Delta \hat{\mathbf{x}}_t$ is given by: \begin{align} \Delta \hat{\mathbf{x}}_{t} = \hat{F}(\Delta \hat{\mathbf{x}}_{t\shortminus 1}, \Delta \hat{\mathbf{x}}_{t\shortminus 2}, \hdots) \label{PIPO} \end{align} where $\hat{F}(.)$ is the nonlinear transformation represented by the Memory Neuron Network. The predicted samples $\Delta\hat{\mathbf{x}}_t$ depend on the previous inputs due to the presence of memory neurons in the input and hidden layers, and on their own previous outputs due to the presence of memory neurons in the output layer. Thus, the spatio$\shortminus$temporal look$\shortminus$ahead model represented by Fig. \ref{MNN} is known as a parallel identification model \cite{narendra1991identification}. \begin{table*} \caption{Root Mean Square Error (RMSE) values (in meters) are reported over a prediction horizon of $5s$ for the NGSIM dataset.} \label{Table_RMSE} \begin{tabularx}{\textwidth}{@{}l*{10}{C}c@{}} \toprule Time & CV & CV-GMM\cite{deo2018would} & GAIL-GRU\cite{kuefler2017imitating} & LSTM & MATF\cite{zhao2019multi} & CS-LSTM\cite{deo2018convolutional} & S-LSTM\cite{alahi2016social} & UST\cite{he2020ust} & UST-180\cite{he2020ust} & MNN \\ \midrule $1s$ & $0.73$ & $0.66$ & $0.69$ & $0.68$ & $0.67$ & $0.61$ & $0.65$ & $0.58$ & $0.56$ & $\mathbf{0.36}$ \\ $2s$ & $1.78$ & $1.56$ & $1.56$ & $1.65$ & $1.51$ & $1.27$ & $1.31$ & $1.20$ & $1.15$ & $\mathbf{0.85}$ \\ $3s$ & $3.13$ & $2.75$ & $2.75$ & $2.91$ & $2.51$ & $2.09$ & $2.16$ & $1.96$ & $1.82$ & $\mathbf{1.38}$ \\ $4s$ & $4.78$ & $4.24$ & $4.24$ & $4.46$ & $3.71$ & $3.10$ & $3.25$ & $2.92$ & $2.58$ & $\mathbf{1.92}$ \\ $5s$ & $6.68$ & $5.99$ & $5.99$ & $6.27$ & $5.12$ & $4.37$ & $4.55$ & $4.12$ & $3.45$ & $\mathbf{2.74}$ \\ \bottomrule \end{tabularx} \end{table*} \subsection{Training and
Implementation Details}\label{TID} \begin{algorithm} \SetKwInOut{KwIn}{Input} \SetKwInOut{KwOut}{Output} \SetKwInOut{KwInit}{Initialize} \KwIn{A list $\mathcal{D} = [d_i]$, $i=1, 2, \cdots, n$, where each element is a set of differential trajectory data $d_i = \left\{\Delta\mathbf{x}_t^{(i)}\right\} = \left\{(\Delta x_t^{(i)}, \Delta y_t^{(i)})\right\}_{t=1}^T$ for vehicle $i$, learning rates $\eta, \eta'$, \textit{epochs}\;} \KwOut{Trained memory neuron model for trajectory prediction\;} \KwInit{Initialize the weights of the network arbitrarily, except the memory coefficients which are initialized to zero.\;} \ForEach{$d_i \in \mathcal{D}$}{ \For{$e \leftarrow 0$ \KwTo \textit{epochs}}{ \ForEach{$\Delta\mathbf{x}_t \in d_i$}{ Compute the output of the network using feedforward equations \eqref{eq:1} - \eqref{eq:2};\\ Compute the error for backpropagation using equation \eqref{error_calc};\\ Update all the weights and the memory coefficients using equations \eqref{update:1} - \eqref{update:2}; } } } \caption{Training pseudocode} \label{MNN_algorithm} \end{algorithm} The trajectory database consists of differences between consecutive trajectory samples, as given by equation \eqref{state_diff}. During the learning process, at every time step $t$ the network receives the previous state information $\Delta \mathbf{x}_{t\shortminus 1}$, and predicts the estimated next state $\Delta \hat{\mathbf{x}}_t$. The actual state of the system $\Delta \mathbf{x}_t$ is then used as a \textit{teaching signal}, to backpropagate the squared error $\left\Vert{\Delta \mathbf{x}_t \shortminus \Delta \hat{\mathbf{x}}_t}\right\Vert_{_2}^2$ and update both the weights associated with the network neurons and the memory neurons. The network consists of six neurons in the hidden layer, with $\tanh(.)$ as its activation function, and a \textit{linear} activation function in the output layer.
The range of the activation function is adjusted according to the range of the state values of the system, to avoid clipping during the prediction phase. Its slope is also adjusted to provide a linear relationship with unit slope about the origin. The entire trajectory data is taken for a vehicle, and the difference between consecutive trajectory samples is calculated and stored in the trajectory database for every vehicle. These will be referred to as \textit{differential trajectory samples}. Each sample is then presented to the network sequentially and the network is trained using backpropagation. One epoch is said to be completed when the last sample in the set of differential trajectory samples is presented and learnt. This procedure is repeated for 100,000 epochs, for multiple vehicle trajectories. The learning rates for both types of weights are chosen to be $4\times 10^{-6}$. Algorithm \ref{MNN_algorithm} summarizes the training procedure. The entire model is implemented in \texttt{Python} using the \texttt{NumPy} library\cite{harris2020array}. \section{Performance Evaluation} In this section, the proposed model is evaluated on two datasets, and the performance is compared quantitatively with several state$\shortminus$of$\shortminus$the$\shortminus$art techniques by employing the RMSE metric. \subsection{Datasets} For evaluating the performance of the proposed model, the following datasets are used: \begin{enumerate} \item[(a)]\textit{NGSIM US-101\cite{colyar2007us}}: The Next Generation Simulation (NGSIM) US$\shortminus 101$ dataset consists of trajectory data sampled at $10$Hz, over a span of $45$ minutes. The trajectory data is reported in both global and local coordinate frames. These trajectories are recorded from a fixed bird's eye view, and cover varying traffic conditions.
The experimental setup of \cite{deo2018convolutional} is followed, where $3s$ of trajectory history is used to predict the estimated trajectories over a horizon of the next $5s$ during the testing phase. \item[(b)]\textit{Synthetic Dataset:} To evaluate the prediction of rogue-vehicle trajectories, trajectories for 20 different rogue vehicles are generated using the \texttt{CARLA} simulator. The rogue vehicles are made to skip traffic lights randomly and move in a zig$\shortminus$zag fashion within the lane while traveling at dangerously high velocity. They can also change lanes abruptly without any indication. The trajectory data is sampled at the higher rate of $20$Hz, over a $1\shortminus$minute duration, in order to capture abrupt changes in the trajectories of the rogue vehicles. The same procedure of using $3s$ of trajectory history to predict the estimated trajectories over the horizon of the next $5s$ during the testing phase is followed. \end{enumerate} \subsection{Evaluation metric} During the prediction phase, the differential trajectory samples from the trajectory database are provided to the network for a duration of $3s$; for the next $5s$, the input to the network is its own previous outputs. The predicted values of the network are cumulatively summed with the starting actual trajectory values of each vehicle over the $5s$ duration to generate the predicted actual trajectory of the vehicle.
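The prediction procedure just described (warm up on the observed differential samples, run closed-loop so that previous outputs become the next inputs, then cumulatively sum the predicted differentials onto the starting position) can be sketched as follows. This is a minimal illustration under assumed names: `model` stands for any trained one-step predictor, such as the memory neuron network.

```python
import numpy as np

def predict_trajectory(model, history_diffs, start_pos, horizon_steps):
    """Closed-loop trajectory prediction from differential samples.

    model         : callable mapping the previous differential state to the next
    history_diffs : iterable of observed differential samples (the 3 s history)
    start_pos     : absolute (x, y) position at the start of the horizon
    horizon_steps : number of future steps to predict (the 5 s horizon)
    """
    # warm up on the observed history; the last output seeds the closed loop
    out = None
    for d in history_diffs:
        out = model(d)
    # closed loop: the network's previous output is its next input
    preds = []
    for _ in range(horizon_steps):
        preds.append(out)
        out = model(out)
    preds = np.asarray(preds)
    # absolute trajectory = start position + running sum of differentials
    return start_pos + np.cumsum(preds, axis=0)
```

With an identity model and a constant differential of $(1, 0.5)$ per step, the predicted trajectory is a straight line, which matches the cumulative-sum reconstruction described above.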
In order to compare the results of the proposed model quantitatively, the root mean squared error (RMSE) metric is used over all future time steps $T_{_H}$ and number of vehicles $N$: \begin{equation} \text{RMSE} = \frac{\sum_{n=1}^N\sqrt{\frac{\sum_{t=1}^{T_{_H}} \left\Vert \mathbf{x}_t^{(n)} - \hat{\mathbf{x}}_t^{(n)} \right\Vert^2}{T_{_H}}}}{N} \end{equation} \begin{figure*} \centering \includegraphics[width=16cm,height=7cm]{CARLA1}\hfill \\[\bigskipamount]\hfill \\ \centering \includegraphics[width=16cm,height=4cm]{CARLA2}\hfill \caption{Simulating trajectory prediction on \texttt{CARLA} for two rogue vehicles. The trained model is deployed on each of the rogue vehicles present in the simulation, so that the other vehicles in the traffic get a $5s$ look$\shortminus$ahead of every rogue vehicle. This way, they can plan protective measures to avoid any collision with them. The predicted trajectories for the future $5s$ are shown frame-by-frame as green dotted lines, and the actual trajectories given by the planner are shown for $10s$ as red dotted lines. The top figure shows a car traveling at a roundabout; the bottom figure shows trajectory prediction at a junction.} \label{CARLA_MNN} \end{figure*} \subsection{Results} The performance of the Memory Neuron Network is reported along with several state-of-the-art algorithms tested on the NGSIM US-101 dataset in Table \ref{Table_RMSE}. The table lists the RMSE for look$\shortminus$ahead durations of $1s$ to $5s$ for 9 algorithms, which have been reproduced from \cite{he2020ust}. It is evident that the Memory Neuron Network outperforms all the other algorithms. Our results improve by $35\%$ for the $1s$ prediction horizon, and by about $20\%$ for the $5s$ prediction horizon, when compared to \cite{he2020ust}. Further, the rise in the RMSE values from the $1s$ horizon to the $5s$ horizon is far smaller for our proposed model than for the other algorithms.
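As a minimal sketch, the RMSE metric defined above (the per-vehicle RMSE over the horizon $T_{_H}$, averaged over the $N$ vehicles) can be computed directly from arrays of actual and predicted positions; the array layout here is an assumption for illustration.

```python
import numpy as np

def rmse(x, x_hat):
    """RMSE as in the equation: per-vehicle RMSE averaged over vehicles.

    x, x_hat : arrays of shape (N, T_H, 2) holding actual and predicted
               (x, y) positions for N vehicles over T_H future time steps.
    """
    sq_err = np.sum((x - x_hat) ** 2, axis=-1)        # ||x_t - x_hat_t||^2 per step
    per_vehicle = np.sqrt(np.mean(sq_err, axis=-1))   # sqrt of the time average
    return np.mean(per_vehicle)                       # average over the N vehicles
```

Note that this averages the square roots over vehicles, as written in the equation, rather than taking a single square root over the pooled errors.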
From this analysis, it can also be concluded that the proposed model is relatively more stable than the existing algorithms. This superior performance can be attributed to the fact that the memory neurons remember not only their own past values, but also the past values of all the other memory neurons in their preceding layers. This makes the Memory Neuron Network \textit{globally recurrent}, as compared to LSTM networks, which are locally recurrent. To test its robustness, the trained model is deployed in a real-time simulation with 100 cars, of which about $20\%$ are rogue vehicles.\footnote{A detailed video demonstration can be found \href{https://www.youtube.com/watch?v=54DSHSfTy74}{here}.} The simulation is carried out using the \texttt{C++ APIs} provided by \texttt{CARLA}'\texttt{s} unreal environment, and consists of mixed vehicles, ranging from small cars to heavy trucks. Only the feedforward part of the trained network is implemented in each rogue vehicle's trajectory planner. The future trajectories of all the rogue vehicles are predicted based on their current location and their $3s$ past track histories. The predicted trajectories are shown for two different rogue vehicles as frame-by-frame snapshots in Fig. \ref{CARLA_MNN}. It can be observed from Fig. \ref{CARLA_MNN} that there is minimal error between the predicted trajectories and the actual future trajectories when the vehicle is travelling along a near$\shortminus$straight path. The bottom left figure shows the predicted trajectories at the beginning of a left-turn manoeuvre. There is a relatively higher error in this scenario, as the model cannot anticipate the radius of curvature of the turn, since it has no prior knowledge of the map or of the dimensions of the roads and junctions present in it.
This should not be concerning, as the predicted trajectory has the same structure as the actual future trajectory, and thus it can still be inferred that the vehicle is going to take a left$\shortminus$turn. \section{Conclusions and future work} This paper presents a trajectory prediction model that uses a novel recurrent neural network as its base model. The trajectory prediction problem is posed as a system identification problem, where the Memory Neuron Network learns the input-output relationship between the past trajectory samples and the future predicted trajectory samples. The proposed model outperforms all the state$\shortminus$of$\shortminus$the$\shortminus$art algorithms currently available, and is also efficient: it requires fewer resources during training and is computationally faster owing to its simpler architecture. The proposed model achieves an RMSE that is about $20\%$ lower than that reported by the current state$\shortminus$of$\shortminus$the$\shortminus$art algorithms for a $5s$ look$\shortminus$ahead prediction. The robustness of the proposed model is also verified by deploying it in the \texttt{CARLA} simulator for each rogue vehicle. While the model performs very well on relatively straight paths, it is less accurate at junctions, as it is not aware of the structure of the map. In future work, the model will be improved in this regard by adding features related to the roads and junctions present in the map during the training process. \section{Acknowledgments} The authors would like to thank Dr. Shirin Dora and Dr. Chandan Gautam for their valuable suggestions and comments, and would also like to acknowledge the Wipro$\shortminus$IISc Research Innovation Network (WIRIN) for their financial support. \bibliographystyle{IEEEtran}
NATYA – a simple formula for success On Vijayadasami day (Oct 22, 2015), considered auspicious for Vidhyarambham (initiation of learning, of all kinds, including performing arts) in Bharat, Swamy was invited to be one of the chief guests for a classical dance event. While Swamy doesn't have any claim to be associated with dance in any way (he at least actively listens to many kinds of music, including the classical, and even sings & plays percussion instruments, though not formally trained in either), he chose to accept the invite as a mark of respect for the tremendous amount of effort that goes into learning an art or craft (or even sport for that matter, which Swamy has witnessed up-close-and-personal as he's been part of the quest to raise Jr as a sports – Chess – champion) by all beings involved – the learner (usually an eager child), the learned (the teacher, trainer, coach or Guru, who trains many eager children), the learning enablers (usually parents and a few near & dear) and the learning itself, which is an excruciating, arduous & invariably long process that inevitably consumes a significant part of the childhood. But on any day, Swamy would gladly vote for children spending time learning a performing art, craft or sport, instead of wasting long hours pointlessly in rote learning. The event Swamy was invited to was a Salangai Puja, to mark the elevation of learners of classical dance from beginner to intermediate level, by letting them wear the salangai on their legs (around the ankle) for the first time, after a few years of learning the basics of classical dance – Bharatanatyam, in this case. The learners will continue to learn for a few more years under the Guru before they can do the Arangetram, i.e., first (typically public) solo performance, announcing to the world that s/he is ready to perform the dance independently anywhere.
Four children – Adhira, Shailaja, Shivadarshini & Srujana – from Siva Natyalaya (the popular dance school of Mrs. Bhuvanadevi Karthikeyan located at Madambakkam, near the famed ancient Shiva temple) were elevated by their Guru to the next level on Vijayadasami & Swamy was one of the chief guests (the other two – Mrs. M.R.Lakshmi & Mrs. Krishnakumari Karthikeyan – were eminent persons from music & dance arenas respectively). This blog post is an excerpt of Swamy's speech at the salangai puja (not his first public speech but certainly the first at a performing arts or cultural event). Swamy's speech at Siva Natyalaya's Bharatanatyam Salangai Puja (சலங்கை பூஜை) event in Chennai on 22-Oct-2015 (the speech was in Tamizh தமிழ், which Swamy is quite conversant with, as it happens to be his mother tongue தாய்மொழி) Namaskaram. After seeing my profession on the invitation (in chaste தமிழ், no less), you must've been wondering about what I really do for a living and what relationship do I've, if any, with dance! பிரகாஷ் ராமஸ்வாமி – முழுமை வழிகாட்டி, மனித உள் ஆற்றல் கண்டறிதல் மற்றும் பேணி வளர்த்தல் வல்லுநர் I too have wondered the same way about the dance part, as unlike the other two chief guests here – one from the classical music world (learning from none other than the great Semmangudi himself) & the other from the classical dance world (whose grandfather learned from the legendary Balasaraswathi), I've nothing to do with dance. Music has been a part of my Life but dance not so much. But as I also write (blogs now, books soon), it's natural for me to ponder about things and wonder about connections between seemingly unrelated things. When I thought about me & dance, it dawned upon me that there indeed is a powerful connect. And that is Adiyogi (Swamy pointed to the bronze Nataraja statue on the stage that was decked up with flowers for the auspicious occasion). 
As someone on the spiritual path for about 6 years now through Isha yoga, we're deeply reverential to Adiyogi. Shiva is considered the first yogi in the yogic culture. Incidentally, he also happens to be the Lord of Dance, i.e. Nataraja. His famed dance is the Tandava – in Ananda & Rudra forms – and he's also known as Thandavakkon in Tamizh because of this. So, now you know the connection between Swamy and dance (the audience chuckle). Once that became clear, I've thought about what to say in this forum, where eminent persons from music & dance arena are there and children are performing the traditional Bharatanatyam dance beautifully with able guidance from their Guru & the accompanying musicians. Since children are familiar with formulas in many subjects they learn at school such as maths, physics & chemistry, it would be appropriate to give them a simple formula for achieving success in their quest to become eminent classical dancers. And that simple formula is NATYA. Very easy to remember (as the term itself means dance) and hopefully to follow & practice as well. N is Nada. Nada means sound. The universe that we're part of is said to have originated from sound – the primordial sound AUM. Recently, NASA scientists have managed to make the sound of the Sun audible and it incredibly sounds like AUM (which isn't surprising for our ancient culture as the many sages & enlightened beings have figured it out many centuries ago). All Creation is said to be just sound and the creator is also known as Nada Brahma. For dance too, the basis is sound. We've seen how beautifully the word Pankajam in a song was depicted as blossoming of lotus by the children. When we hear the word (sound) and see the depiction through gestures or mudras (movement or expression), we can connect with it easily and can comprehend better. What's critical in this relationship is the alignment – between sound and movement in the case of dance.
It could be something else for other types of Life pursuits. Alignment with a source, principle, goal or purpose. If your action is in alignment with a purpose, the result will be beautiful and the experience will be joyful. A is Agnostic. In these turbulent times of religious intolerance, agnostic is a very sensitive word. But agnostic doesn't just mean non-believer. It also means someone who is not biased – towards good or bad, like or dislike, progressive or regressive and so on. Shiva is also known as Aghori, someone for whom nothing is ghori or horrible. He accepts everything as it is. The people surrounding him are always depicted as demented beings, but he loves them the same way he would love any other being. He's as comfortable being in the graveyard smeared in ash, wearing skeletons as he's in his magnificent Somasundareswara form (worshipped in the famous Madurai Meenakshi temple, in Swamy's home town). That's why he symbolises Life the way it is. As a learner of dance (or anything for that matter), you shouldn't be biased towards one or the other. I like this adavu or I don't like that jathi. This song is more beautiful to perform than that song. I prefer that accompanying person to this one. Your focus should remain just the dance itself (performance or action). And you'll just accept everything else around it as it is. Nothing is good or bad, no likes or dislikes. Everything just the way it is. That's being agnostic. T is Transcendence. Loosely translated (as Swamy was speaking in Tamizh – a far deeper, more delightful & meaningful classical language), it is elevating oneself from the present state to a state that's many levels above or beyond. In the yogic path, it is achieving the state of Samadhi – an eternal state of bliss, which enlightened beings are said to be in, all the time.
In dance (or any performance or action), Transcendence is not just for the dancer (to go from a learner to a spectacular performer) but for the spectators too (going from hmmm to wow). You should reach a level of performance where there's no stage, no accompanying artists, no spectators, not even the dancer but just the dance. That kind of a dance would bring tears flooding from the eyes of anyone blessed to be present. Tandava certainly would've done that to the sages and celestial beings who may've witnessed it, as Tandava is the dance of transcendence. Both at cosmic and atomic level as creation is also known as the dance of Shiva. Y is Yearning. Not earning, which unfortunately is the focus of many star performers in any performing art today. Without yearning for something, it's not possible to achieve the state of transcendence. The sages who've become enlightened have performed yoga or tapas for many years. The saptarishis have spent not one or two but 84 years in preparing themselves to receive the grace of Adiyogi, who after observing their diligent preparation has become the benevolent Adi Guru (first Guru or Master), as Dakshinamurthi, to transmit his yogic knowledge to them, several thousands of years ago. As a dancer (or anyone performing a purposeful action), you must have immense interest and expend tremendous effort to be someone that people line up to see, not because you follow the principles of dance to the T but because being part of your performance lets them transcend to a different level. That's how you'll be known as a Balasaraswathi, Rukmini Devi, Padma Subramaniam or Alarmelvalli. In classical music, Bho Shambho belongs to Maharajapuram Santhanam. One can't think beyond Madurai Mani Iyer when it comes to rendering Ka Vaa Vaa. Suprabatham is synonymous with MS. The same holds true for Kanda Sashti Kavacham and Sulamangalam sisters. Such astonishing levels of achievement are possible for you too, if you truly yearn for it.
A is Absolute. Adiyogi is depicted either as an ascetic in a blissed out state of meditation or in an ecstatic state of tandava. He's not partially in either state but absolute in both states. Whenever you perform dance (or any other purposeful action), your intent, focus and intensity have to be absolute. Such a state is not possible if you are performing for others. The performance or action will naturally flow from within and encompass all those who are touched by the performance or action in some way, only when it is absolute. If every moment of our Life is absolute, there's no other choice for us but to be joyful. So, that's the simple formula of NATYA… Nada – action in alignment with a purpose. Agnostic – accepting Life the way it is, without any kind of bias. Transcendence – elevating oneself & others from the mundane routine to an eternal state of bliss. Yearning – immense interest and effort to achieve peak performance. Absolute – intent, focus & intensity to achieve transcendence. Nataraja's (Shiva in the form of Lord of Dance) dance Tandava is said to depict or embody the 5 thatvas – Srishti (ஆக்கல் அல்லது படைத்தல்), Sthiti (காத்தல் அல்லது பொறுத்தல்), Samhara (அழித்தல் அல்லது மாற்றல்), Tirobhava (மறைத்தல் அல்லது மாயை விளையாடல்) & Anugraha (அருளல் அல்லது விடுவித்தல்). It's easy to see that the NATYA formula is directly associated with the 5 Tandava thatvas (தாண்டவ தத்துவம்). But if I start explaining that now, the organisers may open their Netrikkan (third eye), as we're running out of time. So, let me conclude by requesting you to use the NATYA formula for success in your pursuit of Dance. May Nataraja's Grace be with you for a purposeful Life of dance overflowing with Joy. The following is an extension to the speech (not shared at the event due to paucity of time), connecting the dots between NATYA & Tandava. Nada = Srishti (creation or Shiva as the source of creation) – Life of creation (humans inclusive) in alignment with the creator.
Agnostic = Sthiti (existence or Shiva as the preserver of creation) – Accepting Life the way it is, without any personal bias. Transcendence = Samhara (transformation or Shiva as the transformer of creation) – Elevating oneself (& others in one's presence) from the mundane routine to an eternal state of bliss. Yearning = Tirobhava (maya or Shiva as the master who plays with the ignorance of his creation) – Having immense interest and willingness to expend any amount of effort to know the Truth or purpose of Life. Absolute = Anugraha (enlightenment or Shiva as the Guru who benevolently enables one to achieve and remain in the state of eternal bliss) – Having absolute clarity of intent/purpose, total focus and burning intensity to achieve self-realization. As there is Nada Yoga (yoga of sound), there is also Natya Yoga (yoga of dance or movement). Dance, when performed as an offering to the ultimate (Shiva as Nataraja) and not as a mere performance, has the potential to let the performer transcend to the state of bliss. Just the way being in the mere presence of a realized being (like Sadhguru) can elevate a seeker to a different state of Life experience (beyond the known physical, limited to the five senses), being present during the performance of a dancer whose dance is an offering to the Lord of Dance can and will let the participants transcend to a different level of experience – akin to being present during Nataraja's Tandava itself. May Adiyogi's grace be with you to immerse yourself in and experience the joyful dance of Life!
The Witness! A more than decade long quest – against odds, obviously – crossed a significant milestone this week, when Akash. PC. Iyer, aka Swamy Jr., a professional Chess player of international repute, ranked 2nd in the TNEA sports category overall, secured a seat at the prestigious CEG campus of Anna University. While heaving a huge sigh of relief, Swamy couldn't help ponder about the challenging journey itself (which started amidst a number of raised eyebrows and wide open mouths 11 years ago, with only TeamAK – Mr & Mrs Swamy, Jr & Coach SA Krishna – being true believers of this improbable quest, which is all set to continue, with Jr. aiming to become a GrandMaster before he graduates) and the many travellers of varied hue (including dark gray), size (of ego) and shape (of ambition, subversion or submission) who kept joining, trespassing, leaving, misdirecting or helping & guiding along the way. His pondering led him to an interesting destination (at least, an interim one) called UG (coincidental with what Jr.
is going to pursue at the university, i.e., UG), where he got astounding insights about why in the enchanting Game of Life, things are the way they were and we still are the way we were! The conversation between Lord Krishna and Uddhava, known as Uddhava Gita, is quite possibly the most insightful conversation on Life, Karma, Human nature, the need for & purpose of Guru (or Coach, in case of material Life pursuits) and the ultimate Truth, aka Self-realization. Uddhava was a favorite devotee of Lord Krishna and served him as a charioteer his entire life. Despite being a charioteer's charioteer (Lord Krishna was Arjuna's charioteer during the epic Mahabharata war), he never demanded anything and served his Master with utmost faith. He is no different from millions of humans who slog daily for an employer (individual or organization) with acquired skill, available knowledge and utmost commitment, expecting just to be paid for whatever they do (job or service), and not much more. After the Mahabharata war, Krishna insisted that Uddhava ask for boons (wishes) that he can grant & fulfill as his Master. This is akin to a supervisor or boss asking a loyal employee to request for a raise (in pay or grade or both) or release (to pursue one's own interest – if that's different from what one does for a living). Instead of asking for boons or wishing for wealth, Uddhava sought to have a conversation with Lord Krishna based on a few questions he had & the Lord readily agreed. The all-seeing, all-knowing & all-being Lord Krishna also declared that this conversation shall be regarded as "Uddhava Gita". This is not too different from a mentoring conversation that helps a Mentee gain information, insight and wisdom from the Mentor. Here is a part of the conversation between Uddhava and Lord Krishna. 
Aside from the not-so-obvious spiritual context, this conversation is assured to offer deep insights into the human psyche, our wayward way of Life full of quirks and incorrigible acts and desolation leading to utter despair in many aspects. Though all the situations in our Life are a result of our own thoughts and actions, we usually neither acknowledge that (which could lead to introspection resulting in possible ways out of the quagmire) nor accept that (which will lead to moving ahead, instead of being stuck in the self-pit(y) forever) and end up blaming every unexpected outcome, that's far from desired, on others – including, but certainly not limited to, gigantic planets far off from ours! Uddhava: Who is the real friend? Krishna: A person who stands by his close ones in need, even without being called. That pretty much means you gotta be around the near and dear, anytime and all the time. Not necessarily being physically present (an instantaneous human assumption – obviously incorrect like any assumption) but available nevertheless. Also, the expectation is for you to stand by, i.e., support them like a rock, come what may. There goes 'Pride & Prejudice', thrown out of the umpteenth floor, sea view balcony! Uddhava: Excellent! For all the years, the Pandavas treated you as their beloved friend, as 'apathbandhava'. But why did you not stop the gambling game? Why did you let the Pandavas lose, even as you had every power in your hand to turn that situation and stop this whole war? What forced you to not act? Krishna: The rule of this world is, one who has viveka (true wisdom or awareness) wins the game. In this case Duryodhana had wisdom, so he won the game. Life's a game. Unfortunately, like any game we have an opportunity to play, we misunderstand, mishandle and miss the fun in this one too. And like in any game, even the best players can commit the worst mistakes (just in terms of playing).
This is exemplified by Dharmaputra's compulsive gambling (demonstrating a shockingly obvious lack of any kind of wisdom) that led to him losing everything, blindsided to the point where there was nothing else to pledge, and resulting in pawning his (& his 4 brothers') wife. Considered to be one of the wisest humans (of that time), Yudishtra doesn't appear to be the smartest (not just at that time). His choice(s) led the Pandavas to a life of misery, suffering and even loss of identity (for the last one year of their life in exile). Uddhava was astounded beyond belief and asked: How is that even possible? Weren't the Pandavas on the side of Truth? How can Duryodhana be right? Krishna: Duryodhana was not completely aware of the dice game, its tricks and rules. He made a smart call to seek uncle Shakuni's help to play the game. Now that is viveka. Yudhishtra, on the other hand, not only did not invite me to the game but also informed his people to not let me know about the game. The Pandavas even prayed to not seek help and bet everything against the odds. Had they invited me, and had I played against Shakuni, the game and its outcome would have been entirely different. Perhaps we would not even have fought the war, as you said. The joy of any game isn't in winning it (nor is the sorrow in losing it), but in playing it – wholeheartedly, with utmost vigor and absolute involvement. Knowing how to play certainly helps (Dhuryodhana may not have known how to play well, but certainly seems to have known how to win – using uncle Shakuni, who was not just an exponent of the game of dice, but apparently also with a personal agenda to bring both sides down), but how you play it determines the state you're in, and will be in. And one must be open to seeking and getting as much help as possible to play. Most top sportspersons have a Coach, who typically is a former professional player him/herself, offering deeper insights and course corrections that may not be obvious to the player.
Many top business leaders have a Mentor or Coach too, not because they are incapable or don't believe in themselves, but because objective insights, guidance and direction from someone else are respected for the value they bring to accelerate their own growth. Uddhava: Maybe. But if they are your devotees, was it not your responsibility to help them in their need and not wait until they request you to do so? Do you really need prayers or an invitation to help your devotees? Krishna: This entire creation runs on karma. Everyone has to abide by it and accept what comes out of their acts in this creation. I don't act on it. I am the 'witness.' I stay close with my creation all the time and observe. This is the law for the creator. Most humans relish success and believe it's self-created. But when things don't go the way they expect and the prospect of failure taunts them, the blame games and finger pointing start in a flurry. 'The launch wasn't done at an auspicious time'… 'Didn't have a capable team'… 'Partner cheated and led to the failure'… 'No one's willing to help'… 'Been there, done that'… et al. Whenever we play the game (which invariably has rules & procedures), we can either be a player, referee, organiser, mediaperson or spectator. Or all of them. While not believing the illusion that we're actually any of them. Uddhava: If you observe, then how can you let all the evil things prevail in this world? How can people do bad things, how can there be murders, killings, injustice and all the wrong doings that we come across while you are standing right next to it, watching it happen? Krishna: I am entitled to be the 'witness'. I am therefore always present. But in the middle of admiring this creation, people, out of ignorance or attachment, forget my existence before them and start doing things.
In case of Yudhishtra, he knew I was aware of the game, he knew he wasn't capable of winning the game, he knew it was bad to bet everything in the game, but he still went ahead, without inviting me. Though I am always present with them as 'witness', people forget my presence and act out of delusion. This adds to karmic cycle and they are entitled to continue in that cycle. Should they choose to act differently, realise my existence and notice that I am ever present, their actions shall differ and hence the results will be different as well. They will then be able to break the karmic cycle and move to self-realisation. Fact is, each one of us is just another happening in this vast creation. But because of our inherent stupidity (despite being endowed with a grossly overrated 6th sense), we firmly believe we know things and are better at doing them than anyone else. At some point in our miserable life, our ego becomes so bloated that it won't let us seek help even when we know clearly that things are about to go horrendously wrong. Yet, we fail to learn from our failures, and simply take the easy way out of blaming them on other factors or people and continue to slide down the self-destructive path. Thinking, imagining, dreaming only about I-me-myself. Non-stop for 24×365. Lifetime after lifetime! Grace is the way to break out of this tragic trajectory and enjoy the escape into eternal nothingness, which essentially is everything! And a Guru is someone who can engulf one in boundless Grace and enable the sail – not necessarily smooth – towards Oneness or Allness*! This classic conversation between Krishna & Uddhava eloquently (and needless to say, eminently) captures the big dilemma most of the people suffer, while trying to comprehend the absurdity of existence such as what is creation, creator and the purpose of Life! 
The rut that has set in the lives of humans, by doing the same things the same way forever, is inevitably binding them to the seemingly never ending karmic cycle, lifetime after lifetime. The rut is so deeply ingrained in our lifeline, we don't even know what we do is so repetitive, replete with actions that we'll laugh at ourselves, if only we had the sense (where is that extra 6th, when one needs it dearly) to understand and accept the pointlessness of all things we think we know (& invariably do), but obviously don't have a clue about. If and when we realize that, we may just end up exclaiming Holy cow, mo(o)ral of the Life story! Interestingly, a Guru (and even a Coach) also plays the role of a Witness (don't get jittery – there is no comparison with the Lord himself, but just a reference to the role he plays). Their presence is always there, but they'll never interfere nor influence the actions taken by the Sishya (disciple) or Coachee (client) directly. They pretty much know all about Life but would still let the seekers explore & experience Life for themselves, and in the process become a better being, doing things better & living a better Life, joyfully. May Grace be with you for a purposeful Life – with less 'i' & more 'why' & 'why not' – overflowing with Joy – not a result of mere wealth or mirth, but of selfless sharing & willful caring!
Along Came A Spider! On the occasion of yet another unique date this year, i.e., 11-12-13, instead of the usual SwamyWay of writing a post about or around that (here are his posts published on 11-11-11 and 12-12-12), Swamy decided to delight his loyal readers by offering them the opportunity to choose what he would write this week. Three options were provided in both Facebook and Google+, along with a hint as to how each of those posts might be. The response was quite overwhelming (keep in mind Swamy is a compulsive optimist 😉) and indicated how busy blog readers are these days, especially during weekends! One can reflect on the poojyameva jayathey response in three different ways – normal human, sub-normal human & abnormal human. First one is philosophical. Normal humans hate making choices, with the apathy getting amplified if that choice has to be made for others. And if at all they are left with no choice but to make a choice, they invariably resent whatever choice is made. Choice is temporary but suffering is permanent seems to be their philosophical conclusion.
With a little over 80% of our race in this category (how else do we fit in the ubiquitous 80-20 model, which is supposed to be applicable to all and sundry), is it any surprise that books on philosophy and self-improvement – especially "how to" and "for dummies" type – are not just published regularly but also outsell every other kind in publication (most are immediately left to make friends with dust bunnies at massive bookshelves hidden in the corners of bedrooms, but Swamy politely brushed that aside as a topic for a future blog post, considering his bookshelves also have thousands of them stacked up to the ceiling and his own first book will get added to that list – in 2014)! Second one is existential. Sub-normal humans don't give a damn about most things in Life anyway and making choices probably figures below the bottom of their list. They are the Lone Wolf McQuade type that mostly keep to themselves in any place, avoid eye contact with others (or stare so intensely that you'll end up avoiding eye contact with them) and appear always to be in a contemplative mood (you may try ruffling a coiled snake at your own peril). "How does it matter if I make a choice or not?", "What difference is it going to make to me or you?", "Who cares about anyone or anything anyway?" and such are part of their human existential examination. It shouldn't be a shock to you if close to 20% of the world population belongs to this category, because if you're not here, then you've to be part of the 80%. Just in case you're aware that you neither belong here nor there (fret not for there is a possibility that you're not just another Trishanku but may be someone blessed with that rare skill of seeing the Third Side of a Coin), please move on to the next paragraph. Third one is spiritual.
Abnormal humans absolutely love to make bold and radical choices in Life (like choosing a professional sports career for their only child and doggedly pursuing it for decades – there will naturally be a book on this enchanting penance by Swamily, when the time comes – or walking away from a role that brought name and fame not once, but once more), time and again, irrespective of whether it is for themselves or others and completely irreverent about the consequences of their choices. "Life finds a way… I'm neither a doer, nor the done, but just a mere tool used by the magnificent mechanism of Life… This too shall pass…" seem to be their spiritual submission. You're free to conclude which category you belong to, simply because you'll anyway be upset if someone categorizes you as one of the three. Anyway, now that we conclusively know how absolutely open you are to reading (Swamy's a believer) or ignoring (most likely the stark reality) any of the three choices, Swamy chose the one that sounded like a quintessential Swamystery title for this week's post. And like many Swamy posts from the past (hmm… that has a nice ring to it, a la, blast from the past!), this one too was triggered by a real life incident. One fine day two weeks ago (all the days are always fine, it's only we who make them good, bad or ugly), when Swamy was on his way back home after completing few errands, a tiny being decided to hop a ride on his significant half's scooter (the best mode of transport in an Indian city, especially when one plans to buy stuff at multiple outlets, all located in a crowded marketplace). Not that Swamy's royal steed, the Classic 500, isn't enthusiastic about such rides (with a long overdue service done recently and the addition of an upgraded exhaust, the red bull is now raring to go – care to hop along?) 
but the fact remains that Scooters have room to spare – both within and out – for carrying stuff, while their more macho brethren, the motorcycles, which are known as bikes in India, not to be confused with bicycles that are known as bikes elsewhere, which in turn are just called cycles here! Of course there are Scooters and there are scooters (here's a hilarious read about that other kind)! As soon as he started revving up the scooter, he noticed it on top of the instrument cluster on the handlebar. Despite the wind and movement, it didn't budge and sat (stood?) still. Swamy was in a dilemma within about 300 meters of the start of that ride (which probably was the equivalent of 30 kilometers for that being, considering its size), with a stream of questions flowing briskly in his mind like the wind that caressed his salt 'n pepper beard (his hair too is of similar shade, but was hidden within the helmet, which he has always worn from the time he started riding – well, certainly not the same helmet though). Just like the way he shares anything that he gets to know, here's the opportunity for you too to swim in that swirling stream! The Where ? – Is location really important? Where did that being come from? Does it really belong to that place or has it hopped numerous other rides to get there? Where does that being think it was going, hopping this ride? Or does it really bother to think about such trivial things as staying or moving? The Who ? – Who really decides where one belongs to?? Who made the choice for that being to hop on this scooter – the creature, the rider, the scooter or someone else? Is that the same as who makes the choices for whatever happens in every moment of our lives? The How ? – How does that someone become the Who that decides the Where??? How did that creature decide on who would give it a ride on that particular day to wherever it wanted to go? Or is it beyond such pointless pondering to happily hop on and go wherever any ride takes one to?
The Why ? – Why does that How occur to that particular Who that decided the Where???? Why did that creature hop on that scooter exactly at that time? Why does any being stay or move? What does staying offer – stability? And what does moving offer – change? Is one better than the other? Are both important or just irrelevant? The When ? – When exactly does that Why sprout leading to the How, Who and Where????? When exactly does a move happen? Is there a destined time for anything to happen in anyone's Life? Is 'this moment inevitable?' If so, isn't every single moment of our lives inevitable? Then what about those who don't move or change at all? The Which ? – Which is the tipping point when that When really happens, resulting in the Why, How, Who and Where?????? Which is important or critical – the journey or the destination? Does any being have a choice to choose its destination (that creature would've ended up somewhere on the road or at the parking lot of Swamy's home at the end of that journey, but it didn't) or is the only choice in Life available to any being just to be a part of the Journey? The What ? – What is the reason for us pondering forever the Which, When, Why, How, Who and Where for whatever happens in Life??????? What happened to that creature, other than what exactly happened at that point in time? What else could have possibly happened, had it not hopped on Swamy's scooter? What's the point in asking all these W ?s Looking at it from the other side of the mirror (if that thought hasn't ever occurred to you till now, do watch Mirrors or The Sorcerer's Apprentice to know), is there really a point in pondering about things, especially after they occur (which is very typical of humans, who even have invented tools to do such pondering, such as 5-Why)?
If you are fully aware of things, events or actions, as and when they happen, then you belong to an elite category called realized Masters, who just know the answers for all the questions – even those that haven't been asked yet – in which case you obviously have no need to read or ponder about this, or anything else for that matter, as you just know, and be! Since that's as rare as us sighting a Unicorn (or UFO, if you happen to be a nerd – probably a proud one too) near your home in broad daylight, just keep reading. Now, where were we? Heck, does it even matter! The way this blog post has written itself (Swamy has said it many times in the past – including in this post about his posts – that once he sits down to write a blog post, it pretty much writes itself), Swamy does not think this one's for the faint hearted anyway. At the cost of sounding haughty, this one's more bang in the middle of the endless beginning of JK territory than Swamystery, which is a pleasant surprise for himself, for that's a path he hasn't treaded consciously till now! So, go on and ponder about the probable (it's tempting to say possible, but who knows what is, until it really becomes possible) answers for each of those questions. If nothing else, at least your brain cells will get that much needed exercise to reignite the electricity that could lead to some marvelous ideas in 2014. Isn't that worth dreaming about and pursuing (it would sound magnificent, if you try saying that like Morpheus), considering that for most, 2013 would have just been another year such as 2012 or many more that preceded it? Even when FINE isn't fine, it's perfectly fine if you lose a few strands of hair (or whatever is left of what used to be there) while on that quest, for what you may discover within might well be worth it! Ahem, that's not exactly what Swamy planned to pen, err… type, when that tiny 8-legged creature nonchalantly hopped a ride on his scooter on that beautiful day.
But is it any wonder, fortunately for him and unfortunately for you dear reader, that's exactly what has transpired, as intricate and complex as a spider web, when Along Came a Spider! P.S.: Swamy let that spider out somewhere around that 300m mark from where it hopped on for a ride, near a tree, well away from the traffic. It's a small world and time is relative, so one can never be sure enough to say, never the twain shall meet, again!
extern zend_class_entry *phalcon_image_adapter_imagick_ce;

ZEPHIR_INIT_CLASS(Phalcon_Image_Adapter_Imagick);

PHP_METHOD(Phalcon_Image_Adapter_Imagick, check);
PHP_METHOD(Phalcon_Image_Adapter_Imagick, __construct);
PHP_METHOD(Phalcon_Image_Adapter_Imagick, _resize);
PHP_METHOD(Phalcon_Image_Adapter_Imagick, _liquidRescale);
PHP_METHOD(Phalcon_Image_Adapter_Imagick, _crop);
PHP_METHOD(Phalcon_Image_Adapter_Imagick, _rotate);
PHP_METHOD(Phalcon_Image_Adapter_Imagick, _flip);
PHP_METHOD(Phalcon_Image_Adapter_Imagick, _sharpen);
PHP_METHOD(Phalcon_Image_Adapter_Imagick, _reflection);
PHP_METHOD(Phalcon_Image_Adapter_Imagick, _watermark);
PHP_METHOD(Phalcon_Image_Adapter_Imagick, _text);
PHP_METHOD(Phalcon_Image_Adapter_Imagick, _mask);
PHP_METHOD(Phalcon_Image_Adapter_Imagick, _background);
PHP_METHOD(Phalcon_Image_Adapter_Imagick, _blur);
PHP_METHOD(Phalcon_Image_Adapter_Imagick, _pixelate);
PHP_METHOD(Phalcon_Image_Adapter_Imagick, _save);
PHP_METHOD(Phalcon_Image_Adapter_Imagick, _render);
PHP_METHOD(Phalcon_Image_Adapter_Imagick, __destruct);
PHP_METHOD(Phalcon_Image_Adapter_Imagick, getInternalImInstance);
PHP_METHOD(Phalcon_Image_Adapter_Imagick, setResourceLimit);

ZEND_BEGIN_ARG_INFO_EX(arginfo_phalcon_image_adapter_imagick___construct, 0, 0, 1)
	ZEND_ARG_INFO(0, file)
	ZEND_ARG_INFO(0, width)
	ZEND_ARG_INFO(0, height)
ZEND_END_ARG_INFO()

ZEND_BEGIN_ARG_INFO_EX(arginfo_phalcon_image_adapter_imagick__resize, 0, 0, 2)
	ZEND_ARG_INFO(0, width)
	ZEND_ARG_INFO(0, height)
ZEND_END_ARG_INFO()

ZEND_BEGIN_ARG_INFO_EX(arginfo_phalcon_image_adapter_imagick__liquidrescale, 0, 0, 4)
	ZEND_ARG_INFO(0, width)
	ZEND_ARG_INFO(0, height)
	ZEND_ARG_INFO(0, deltaX)
	ZEND_ARG_INFO(0, rigidity)
ZEND_END_ARG_INFO()

ZEND_BEGIN_ARG_INFO_EX(arginfo_phalcon_image_adapter_imagick__crop, 0, 0, 4)
	ZEND_ARG_INFO(0, width)
	ZEND_ARG_INFO(0, height)
	ZEND_ARG_INFO(0, offsetX)
	ZEND_ARG_INFO(0, offsetY)
ZEND_END_ARG_INFO()

ZEND_BEGIN_ARG_INFO_EX(arginfo_phalcon_image_adapter_imagick__rotate, 0, 0, 1)
	ZEND_ARG_INFO(0, degrees)
ZEND_END_ARG_INFO()

ZEND_BEGIN_ARG_INFO_EX(arginfo_phalcon_image_adapter_imagick__flip, 0, 0, 1)
	ZEND_ARG_INFO(0, direction)
ZEND_END_ARG_INFO()

ZEND_BEGIN_ARG_INFO_EX(arginfo_phalcon_image_adapter_imagick__sharpen, 0, 0, 1)
	ZEND_ARG_INFO(0, amount)
ZEND_END_ARG_INFO()

ZEND_BEGIN_ARG_INFO_EX(arginfo_phalcon_image_adapter_imagick__reflection, 0, 0, 3)
	ZEND_ARG_INFO(0, height)
	ZEND_ARG_INFO(0, opacity)
	ZEND_ARG_INFO(0, fadeIn)
ZEND_END_ARG_INFO()

ZEND_BEGIN_ARG_INFO_EX(arginfo_phalcon_image_adapter_imagick__watermark, 0, 0, 4)
	ZEND_ARG_OBJ_INFO(0, image, Phalcon\\Image\\Adapter, 0)
	ZEND_ARG_INFO(0, offsetX)
	ZEND_ARG_INFO(0, offsetY)
	ZEND_ARG_INFO(0, opacity)
ZEND_END_ARG_INFO()

ZEND_BEGIN_ARG_INFO_EX(arginfo_phalcon_image_adapter_imagick__text, 0, 0, 9)
	ZEND_ARG_INFO(0, text)
	ZEND_ARG_INFO(0, offsetX)
	ZEND_ARG_INFO(0, offsetY)
	ZEND_ARG_INFO(0, opacity)
	ZEND_ARG_INFO(0, r)
	ZEND_ARG_INFO(0, g)
	ZEND_ARG_INFO(0, b)
	ZEND_ARG_INFO(0, size)
	ZEND_ARG_INFO(0, fontfile)
ZEND_END_ARG_INFO()

ZEND_BEGIN_ARG_INFO_EX(arginfo_phalcon_image_adapter_imagick__mask, 0, 0, 1)
	ZEND_ARG_OBJ_INFO(0, image, Phalcon\\Image\\Adapter, 0)
ZEND_END_ARG_INFO()

ZEND_BEGIN_ARG_INFO_EX(arginfo_phalcon_image_adapter_imagick__background, 0, 0, 4)
	ZEND_ARG_INFO(0, r)
	ZEND_ARG_INFO(0, g)
	ZEND_ARG_INFO(0, b)
	ZEND_ARG_INFO(0, opacity)
ZEND_END_ARG_INFO()

ZEND_BEGIN_ARG_INFO_EX(arginfo_phalcon_image_adapter_imagick__blur, 0, 0, 1)
	ZEND_ARG_INFO(0, radius)
ZEND_END_ARG_INFO()

ZEND_BEGIN_ARG_INFO_EX(arginfo_phalcon_image_adapter_imagick__pixelate, 0, 0, 1)
	ZEND_ARG_INFO(0, amount)
ZEND_END_ARG_INFO()

ZEND_BEGIN_ARG_INFO_EX(arginfo_phalcon_image_adapter_imagick__save, 0, 0, 2)
	ZEND_ARG_INFO(0, file)
	ZEND_ARG_INFO(0, quality)
ZEND_END_ARG_INFO()

ZEND_BEGIN_ARG_INFO_EX(arginfo_phalcon_image_adapter_imagick__render, 0, 0, 2)
	ZEND_ARG_INFO(0, extension)
	ZEND_ARG_INFO(0, quality)
ZEND_END_ARG_INFO()

ZEND_BEGIN_ARG_INFO_EX(arginfo_phalcon_image_adapter_imagick_setresourcelimit, 0, 0, 2)
	ZEND_ARG_INFO(0, type)
	ZEND_ARG_INFO(0, limit)
ZEND_END_ARG_INFO()

ZEPHIR_INIT_FUNCS(phalcon_image_adapter_imagick_method_entry) {
	PHP_ME(Phalcon_Image_Adapter_Imagick, check, NULL, ZEND_ACC_PUBLIC|ZEND_ACC_STATIC)
	PHP_ME(Phalcon_Image_Adapter_Imagick, __construct, arginfo_phalcon_image_adapter_imagick___construct, ZEND_ACC_PUBLIC|ZEND_ACC_CTOR)
	PHP_ME(Phalcon_Image_Adapter_Imagick, _resize, arginfo_phalcon_image_adapter_imagick__resize, ZEND_ACC_PROTECTED)
	PHP_ME(Phalcon_Image_Adapter_Imagick, _liquidRescale, arginfo_phalcon_image_adapter_imagick__liquidrescale, ZEND_ACC_PROTECTED)
	PHP_ME(Phalcon_Image_Adapter_Imagick, _crop, arginfo_phalcon_image_adapter_imagick__crop, ZEND_ACC_PROTECTED)
	PHP_ME(Phalcon_Image_Adapter_Imagick, _rotate, arginfo_phalcon_image_adapter_imagick__rotate, ZEND_ACC_PROTECTED)
	PHP_ME(Phalcon_Image_Adapter_Imagick, _flip, arginfo_phalcon_image_adapter_imagick__flip, ZEND_ACC_PROTECTED)
	PHP_ME(Phalcon_Image_Adapter_Imagick, _sharpen, arginfo_phalcon_image_adapter_imagick__sharpen, ZEND_ACC_PROTECTED)
	PHP_ME(Phalcon_Image_Adapter_Imagick, _reflection, arginfo_phalcon_image_adapter_imagick__reflection, ZEND_ACC_PROTECTED)
	PHP_ME(Phalcon_Image_Adapter_Imagick, _watermark, arginfo_phalcon_image_adapter_imagick__watermark, ZEND_ACC_PROTECTED)
	PHP_ME(Phalcon_Image_Adapter_Imagick, _text, arginfo_phalcon_image_adapter_imagick__text, ZEND_ACC_PROTECTED)
	PHP_ME(Phalcon_Image_Adapter_Imagick, _mask, arginfo_phalcon_image_adapter_imagick__mask, ZEND_ACC_PROTECTED)
	PHP_ME(Phalcon_Image_Adapter_Imagick, _background, arginfo_phalcon_image_adapter_imagick__background, ZEND_ACC_PROTECTED)
	PHP_ME(Phalcon_Image_Adapter_Imagick, _blur, arginfo_phalcon_image_adapter_imagick__blur, ZEND_ACC_PROTECTED)
	PHP_ME(Phalcon_Image_Adapter_Imagick, _pixelate, arginfo_phalcon_image_adapter_imagick__pixelate, ZEND_ACC_PROTECTED)
	PHP_ME(Phalcon_Image_Adapter_Imagick, _save, arginfo_phalcon_image_adapter_imagick__save, ZEND_ACC_PROTECTED)
	PHP_ME(Phalcon_Image_Adapter_Imagick, _render, arginfo_phalcon_image_adapter_imagick__render, ZEND_ACC_PROTECTED)
	PHP_ME(Phalcon_Image_Adapter_Imagick, __destruct, NULL, ZEND_ACC_PUBLIC|ZEND_ACC_DTOR)
	PHP_ME(Phalcon_Image_Adapter_Imagick, getInternalImInstance, NULL, ZEND_ACC_PUBLIC)
	PHP_ME(Phalcon_Image_Adapter_Imagick, setResourceLimit, arginfo_phalcon_image_adapter_imagick_setresourcelimit, ZEND_ACC_PUBLIC)
	PHP_FE_END
};
HIGH-SCHOOL 9NEWS Prep Rally Honor Roll: Top Plays of the Week (5/14/19) Watch the five play of the week nominees, then vote for your favorite. Author: Taylor Temby Published: 5:01 PM MDT May 14, 2019 Updated: 5:13 PM MDT May 14, 2019 DENVER — The high school spring sports season is rapidly coming to a close, with three more state championships taking place this week. But first - here are the nominees for this week's Prep Rally Honor Roll. We zigged and zagged our way into the number five play. Steamboat Springs senior Lucy Shimek led the Sailors with four points in their girls' lacrosse playoff game with Mullen -- including one goal that required her to fake out her defender multiple times en route to the net. Number four came out of nowhere. Off the corner kick, Chatfield's Hannah Peterson came surging into the box, before redirecting the ball with her head to the back of the net. Our number three play this week was a stretch -- in net. Boulder girls' soccer was able to keep its playoff game scoreless with Columbine in their post-season showdown, thanks to a full-extension save by goalkeeper Alex Smith. Number two - is a walk-off! Golden junior Ben McLaughlin went yard in the bottom of the 8th inning, which eventually gave the Demons the 1-0 win over rival Wheat Ridge. (Also, shout out to Jefferson County School District Athletics for sending in the play!) This week's top play, however, was a no-look, behind-the-back beauty of a score from Kent Denver boys' lacrosse. It was one of three goals from Michael Bowler in their playoff win over Lakewood High School. We've ranked them, now we want to hear from you! VOTE for the play you think should be placed in the top spot. The poll will be closed Saturday morning, with the winner being announced during the 9NEWS Sunday morning Prep Rally. If you can't see the poll below, visit bit.ly/2VkXS6N.
Jens Naessens, born in Deinze, is a Belgian football player. He currently plays for Lierse Kempenzonen as a striker. Career At the age of six, Jens Naessens joined KMSK Deinze. In 2001, he moved to the "Futurosport" academy of Excelsior Mouscron, which he left three years later for the youth academy of FC Bruges. After two seasons with the Bruges side, he moved to SV Zulte Waregem in 2006, where he completed his training. In 2010, he was promoted to the club's professional squad. He played his first match against Saint-Trond. A month later, he started for the first time against Westerlo. The following week, he scored his first league goal against Courtrai. During the "play-offs 2", he received more playing time and established himself as a potential starter in the Flemish attack. Jens Naessens began the 2011-2012 season in the starting lineup. He played almost all of the league matches and scored six goals. He was then called up for the first time to the national under-21 team for a 2013 European Under-21 Championship qualifier against England. Shortly after coming on, he scored the equalizer for the "Diablotins", who eventually won 2-1. He also played the last three matches of the qualifying phase, but Belgium was eliminated. In April 2011, he extended his contract with Zulte Waregem until June 2015. The following season, despite the arrival of new foreign strikers such as Frédéric Gounongbe and Ivan Lendrić, he kept the confidence of coach Francky Dury and his place in the starting lineup. The club fought for the title until the final matchday but ultimately finished runner-up behind Anderlecht. Qualified for the third qualifying round of the Champions League, the club was eliminated by PSV Eindhoven and dropped into the Europa League.
In that competition, Jens Naessens played six matches and scored one goal, the one that secured qualification for the group stage in the return leg at APOEL Nicosia, a few minutes from the end of the match. Statistics Honours Zulte Waregem Belgian Cup Winner: 2017 Notes and references External links Born April 1991 Born in Deinze Belgian footballer SV Zulte Waregem player KV Malines player Royal Antwerp FC player KVC Westerlo player KSV Roulers player Lierse Kempenzonen player
Pseudoplatyura campbelli is a species of two-winged fly (Diptera) first described by Tonnoir in 1927. Pseudoplatyura campbelli belongs to the genus Pseudoplatyura and the family Keroplatidae (platthornsmyggor). No subspecies are listed in the Catalogue of Life. Sources
Zhiyuan may refer to:
Chinese cruiser Zhiyuan (致遠), an imperial Chinese cruiser which sank during the First Sino-Japanese War (1894)
Historical eras
Zhiyuan (至元, 1264–1294), era under Kublai Khan, Mongol emperor
Zhiyuan (至元, 1335–1340), era under Toghon Temür, Mongol emperor
\section{Introduction} Lepton colliders, such as the Large Electron-Positron collider {\sc lep} which ran from 1989-2000 at {\sc{cern}}, provide an optimal environment for precision studies in high energy physics. Lacking the complications of strongly interacting initial states, which plague hadron colliders, {\sc lep} has been able to provide extremely accurate measurements of standard model quantities such as the $Z$-boson mass, and its results tightly constrain beyond-the-standard model physics. The precision {\sc lep} data is also used for QCD studies, for example to determine the strong coupling constant $\alpha_s$. With the variation of $\alpha_s$ known to 4 loops, one should be able to confirm in great detail the running of the coupling, or use it to establish a discrepancy which might indicate new physics. Even at fixed center-of-mass energy, differential distributions for event shapes, such as thrust, probe several energy scales and are extremely sensitive to the running coupling. Moreover, event shape variables are designed to be infrared safe, so that they can be calculated in perturbation theory, and the theoretical predictions should be correspondingly clean. Nevertheless, extractions of $\alpha_s$ from event shapes at {\sc lep} have until now been limited by theoretical uncertainty from unknown higher order terms in the perturbative expansion. One difficulty in achieving an accurate theoretical prediction from QCD has been the complexity of the relevant fixed-order calculations. Indeed, while the next-to-leading-order (NLO) results for event shapes have been known since 1980~\cite{Ellis:1980wv}, the relevant next-to-next-to-leading order (NNLO) calculations were completed only in 2007 \cite{GehrmannDeRidder:2007bj,GehrmannDeRidder:2007hr}. In addition to the loop integrals, the subtraction of soft and collinear divergences in the real emission diagrams presented a major complication.
In fact, this is the first calculation where a subtraction scheme has been successfully implemented at NNLO \cite{GehrmannDeRidder:2005cm}. However, even with these new results at hand, the corresponding extraction of $\alpha_s$ continues to be limited by perturbative uncertainty. The result of~\cite{Dissertori:2007xa} was $\alpha_s (m_Z) = 0.1240 \pm 0.0033$, with a perturbative uncertainty of 0.0029. This NNLO result for the strong coupling constant comes out lower than at NLO, but $2 \sigma$ higher than the PDG average $\alpha_s (m_Z) = 0.1176 \pm 0.0020$ \cite{Yao:2006px}. Actually, the most precise values of $\alpha_s$ are currently determined not from {\sc lep} but at low energies using lattice simulations \cite{Mason:2005zx} and $\tau$-decays \cite{Davier:2005xq}. An extensive review of $\alpha_s$ determinations is given in \cite{Bethke:2006ac}, new determinations since its publication include \cite{Blumlein:2006be, Brambilla:2007cz}. To further reduce the theoretical uncertainty of event shape calculations, it is important to resum the dominant perturbative contributions to all orders in $\alpha_s$. To see this, consider thrust, which is defined as \begin{equation} T = \max_{\mathbf{n}} \frac{\sum_i | \mathbf{p}_i \cdot \mathbf{n} |}{\sum_i | \mathbf{\mathbf{p}}_i |}\,, \end{equation} where the sum is over all momentum 3-vectors $\mathbf{p}_i$ in the event, and the maximum is over all unit 3-vectors ${\mathbf{n}}$. In the endpoint region, $T \rightarrow 1$ or $\tau = (1 - T) \rightarrow 0$, no fixed-order calculation could possibly describe the full distribution due to the appearance of large logarithms. For example, at leading order in perturbation theory the thrust distribution has the form \begin{equation} \frac{1}{\sigma_0} \frac{\mathrm{d}\sigma}{\mathrm{d}\tau} =\delta(\tau)+\frac{2\alpha_s}{3\pi} \left[\frac{-4\ln\tau-3}{\tau} + \dots\right] \,, \end{equation} where the ellipsis denotes terms that are regular in the limit $\tau\to 0$. 
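The thrust definition above can be made concrete with a small numerical sketch (not part of the original analysis). It uses the fact that for a finite set of momenta the maximizing axis $\mathbf{n}$ lies along $\sum_i \epsilon_i \mathbf{p}_i$ for some sign assignment $\epsilon_i = \pm 1$, so a brute-force scan over sign combinations evaluates $T$ exactly, though it is only feasible for low multiplicities:

```python
import itertools
import math
import numpy as np

def thrust(momenta):
    """Thrust T = max_n sum_i |p_i . n| / sum_i |p_i|.
    The optimal axis lies along sum_i eps_i p_i for some signs
    eps_i = +-1, so scanning all 2^N sign assignments is exact."""
    p = np.asarray(momenta, dtype=float)
    norm = np.linalg.norm(p, axis=1).sum()
    best = 0.0
    for signs in itertools.product([1.0, -1.0], repeat=len(p)):
        axis = (np.array(signs)[:, None] * p).sum(axis=0)
        best = max(best, np.linalg.norm(axis))
    return best / norm

# Two back-to-back particles: pencil-like event, T = 1 (tau = 0)
print(thrust([[0, 0, 1], [0, 0, -1]]))  # 1.0

# Symmetric three-jet ("Mercedes") event: T = 2/3
mercedes = [[1.0, 0.0, 0.0],
            [-0.5,  math.sqrt(3) / 2, 0.0],
            [-0.5, -math.sqrt(3) / 2, 0.0]]
print(round(thrust(mercedes), 6))  # 0.666667
```

The two limiting cases bracket the physical range: back-to-back configurations give $\tau = 0$, where the large logarithms discussed below appear, while more spherical events push $T$ toward smaller values.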
Upon integration over the endpoint region, one finds \begin{equation} R(\tau)=\int_0^\tau \! \mathrm{d}\tau' \frac{1}{\sigma_0} \frac{\mathrm{d}\sigma}{\mathrm{d}\tau'} =1+ \frac{2\alpha_s}{3\pi} \left[-2\ln^2\tau-3\ln\tau + \dots\right] \,. \end{equation} Double logarithmic terms of the form $\alpha^n_s \ln^{2n}\tau$ arise from regions of phase space where the quarks or gluons are soft or collinear. For small enough $\tau$, higher order terms are just as important as lower order ones and the standard perturbative expansion breaks down. Resummation refers to summing a series of contributions of the form $\alpha_s^n \ln^{m} \tau$ for the integral $R(\tau)$ or $\alpha_s^n (\ln^{m-1} \tau)/\tau$ for the differential distribution. Leading logarithmic (LL) accuracy is achieved by summing the tower of logarithms with $m=2n$, next-to-leading logarithmic accuracy (NLL) also sums the terms with $m=2n-1$. Resummation at N$^k$LL accuracy provides all logarithmic terms with $2n \geq m \geq 2n-2k+1$, as detailed in Section \ref{sec:scet}. The first resummation of event shapes was done by Catani, Trentadue, Turnock and Webber (CTTW) in~\cite{Catani:1992ua}. Their approach was to define jet functions $J_C (p^2)$ as the probability for finding a jet of invariant mass $p^2$ in the event. These can be calculated to NLL by summing probabilities for successive emissions using the Altarelli-Parisi splitting functions. Each term in the series that is resummed corresponds to an additional semi-classical emission. The splitting functions only account for collinear emissions; to include soft emission, it is common either to impose some kind of angular ordering constraint to simulate soft coherence effects, or to use more sophisticated probability functions, such as Catani-Seymour dipoles~\cite{Catani:1996vz}. Except for \cite{deFlorian:2004mp}, none of these approaches has led to a resummation for event shapes beyond NLL.
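The relation between the differential distribution and $R(\tau)$ can be checked numerically. A minimal sketch: integrate the singular terms from a cutoff $\epsilon$ (the $\tau' \to 0$ divergence cancels against the virtual $\delta(\tau)$ contribution, which is why the closed form below only involves logarithms of the limits) and compare with the antiderivative $-2\ln^2\tau - 3\ln\tau$:

```python
import math

def lo_singular(t):
    """Singular part of the LO thrust distribution, i.e. the
    coefficient of 2*alpha_s/(3*pi): (-4 ln t - 3)/t."""
    return (-4.0 * math.log(t) - 3.0) / t

def R_logs(t):
    """Antiderivative of lo_singular: -2 ln^2 t - 3 ln t."""
    return -2.0 * math.log(t) ** 2 - 3.0 * math.log(t)

def midpoint(f, a, b, n=200000):
    # simple midpoint-rule quadrature
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

eps, tau = 1e-4, 0.1
numeric = midpoint(lo_singular, eps, tau)
closed = R_logs(tau) - R_logs(eps)
print(numeric, closed)  # both close to 138.33
```

The size of this number for such moderate values of $\tau$ illustrates why the logarithmic towers must be resummed rather than truncated at fixed order.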
The approach to resummation of event shapes~\cite{Schwartz:2007ib} based on Soft-Collinear Effective Theory (SCET)~\cite{Bauer:2000yr,Bauer:2001yt,Beneke:2002ph} contrasts sharply with the semi-classical CTTW treatment. The most important conceptual difference is that effective field theory works with amplitudes, at the operator level, instead of probabilities at the level of a differential cross-section. Consequently, the resummation comes not from the exponentially decreasing probability for multiple emissions, but from a solution to renormalization group (RG) equations. The starting point for the effective field theory approach is the factorization formula for thrust in the 2-jet region, \begin{equation} \frac{1}{\sigma_0} \frac{\mathrm{d} \sigma_{2}}{\mathrm{d} \tau} = H(Q^2, \mu) \int \! \mathrm{d} p^2_L \mathrm{d} p_R^2 \mathrm{d} k \,J (p^2_L, \mu) \,J (p^2_R, \mu)\, S_T (k, \mu) \delta (\tau - \frac{p^2_L+p^2_R}{Q^2}-\frac{k}{Q}) \label{scetfact}\,, \end{equation} where $H (Q^2, \mu)$ is the hard function, $J(p^2, \mu)$ the jet function, and $S_T (k, \mu)$ is the soft function for thrust. $Q$ refers to the center-of-mass energy of the collision, $\mu$ is an arbitrary renormalization scale, and the Born-level cross section $\sigma_0$ appears for normalization. A similar factorization formula was derived to study top quark jets in~\cite{Fleming:2007qr}, and then transformed into this form to study event shapes in~\cite{Schwartz:2007ib}. Factorization properties of event shape variables were also studied in~\cite{Berger:2003gr,Berger:2003pk}. The expression (\ref{scetfact}) is valid to all orders in perturbation theory up to terms which are power suppressed in the two-jet region $\tau\rightarrow 0$, \begin{equation} \frac{\mathrm{d} \sigma}{\mathrm{d} \tau} = \frac{\mathrm{d} \sigma_2}{\mathrm{d} \tau} \Big[1+{\mathcal O}(\tau)\Big]\,.
\end{equation} The key to the factorization theorem is that near maximum thrust, $\tau$ reduces to the sum of hemisphere masses \begin{equation} \tau \to \frac{M_L^2+M_R^2}{Q^2}=\frac{p_L^2+p_R^2+k Q}{Q^2}\,, \end{equation} where the two hemispheres are defined by the thrust axis $\mathbf{n}$. Here, $p_L^2 (p_R^2)$ is the invariant mass of the energetic particles in the left (right) jet and $k Q$ is the increase of the invariant mass on the two sides due to soft emissions. A more detailed interpretation of this formula can be found in~\cite{Schwartz:2007ib,Fleming:2007qr,Fleming:2007xt}. The factorization theorem (\ref{scetfact}) makes it evident that the thrust distribution involves three different scales in the endpoint region. First of all, there are virtual effects arising in the production of the quark anti-quark pair at the hard scale $\mu_h\sim Q$ which are encoded in the hard function $H(Q^2,\mu)$. A second relevant scale is associated with the invariant mass of the two back-to-back jets, $\mu_j^2\sim p_L^2+p_R^2 \sim \tau Q^2$. In addition to these two external scales, a third, lower seesaw scale is encoded in the soft function $\mu_s \sim k\sim \tau Q \sim \mu_j^2/\mu_h$. The effective theory treatment separates the effects associated with these three scales and makes transparent that a larger range of scales, and consequently a larger range of $\alpha_s (\mu)$ is being probed than is evident in either the fixed-order calculation or in the traditional NLL resummation. Large logarithms are avoided using RG evolution in the effective theory. Each of the three functions $H$, $J$ and $S$ is evaluated at its characteristic scale, and then evolved to a common scale $\mu$. Solving the differential RG equations resums logarithms of the scale ratios. In Section \ref{sec:scet}, we provide the definitions of $H$, $J$ and $S$ in SCET. These functions can be calculated directly in SCET, or one can rewrite them in terms of matrix elements of QCD operators. 
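To illustrate the range of couplings probed by the three scales, one can evaluate a one-loop running coupling at $\mu_h$, $\mu_j$ and $\mu_s$. This is only a sketch: the numerical inputs ($Q = m_Z$, $\tau = 0.1$, $\alpha_s(m_Z) = 0.118$, fixed $n_f = 5$, no threshold matching) are illustrative assumptions, not values taken from the fit in this paper:

```python
import math

def alpha_s_1loop(mu, alpha_ref=0.118, mu_ref=91.1876, nf=5):
    """One-loop running coupling (illustration only):
    1/alpha(mu) = 1/alpha(mu_ref) + b0/(2 pi) ln(mu/mu_ref),
    with b0 = 11 - 2 nf / 3 and nf held fixed."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return 1.0 / (1.0 / alpha_ref + b0 / (2.0 * math.pi) * math.log(mu / mu_ref))

Q, tau = 91.1876, 0.1           # illustrative LEP1 values
mu_h = Q                        # hard scale ~ Q
mu_j = math.sqrt(tau) * Q       # jet scale ~ sqrt(tau) Q
mu_s = tau * Q                  # soft scale ~ tau Q
for name, mu in [("hard", mu_h), ("jet", mu_j), ("soft", mu_s)]:
    print(f"{name}: mu = {mu:6.2f} GeV, alpha_s = {alpha_s_1loop(mu):.4f}")
```

Even for moderate $\tau$ the soft scale sits an order of magnitude below $Q$, so the coupling grows substantially between the hard and soft functions; this is the sense in which the effective theory probes a larger range of $\alpha_s(\mu)$ than a fixed-order calculation at $\mu \sim Q$.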
For practical calculations, the definitions in QCD are often more suitable since the QCD Feynman rules are simpler. The hard and jet functions appear in other processes and are known to two-loop order \cite{Becher:2006qw, Becher:2006mr,Becher:2007ty}. Their RG equations have been solved in closed form and the relevant anomalous dimensions are known at three-loop order \cite{Becher:2006nr}. With the hard and jet functions known, the only missing ingredient to resum the thrust distribution to next-to-next-to-next-to-leading logarithmic (N$^3$LL) accuracy is the soft function $S$. Its one-loop expression was given in \cite{Schwartz:2007ib} and in Section \ref{sec:match} we determine the soft function to two loops. Its logarithmic part is obtained using RG invariance of the thrust distribution and the remaining constant piece by a numerical procedure. After plugging the solutions back into the factorization theorem (\ref{scetfact}), we obtain the result for the resummed distribution valid to N$^3$LL. Next, we expand the effective theory result to fixed order in $\alpha_s$. The logarithmically enhanced terms which are determined in the effective theory dominate the thrust distribution. This is especially pronounced at NNLO: color structure by color structure, we find that the logarithmic terms are an excellent approximation of the full fixed-order result. This close agreement also provides an independent check on the NNLO calculation. After comparing the full fixed-order result to the logarithmic terms, we add the small difference between the two to our resummed result. By this matching procedure, we obtain a resummed result which is also correct to NNLO in fixed-order perturbation theory. In Section \ref{sec:fit}, we fit the resummed matched calculation to {\sc{aleph}} and {\sc{opal}} data. We find a relatively small perturbative uncertainty on $\alpha_s$ compared to previous event shape fits of the same {\sc lep} data.
In fact, the final statistical, systematic, perturbative and hadronization uncertainties end up being quite similar, all around 1\%. At this point, we have the least handle on hadronization effects, and these and other power corrections are explored in Section \ref{sec:NP}. The conclusion contains a brief discussion of how the various uncertainties might be further reduced. \section{Resummation of thrust in effective field theory \label{sec:scet}} The large logarithms in the thrust distribution dominate near the endpoint, $\tau \rightarrow 0$. This region of phase space corresponds to configurations with two back-to-back light jets. In this situation, the vector and axial-vector currents relevant to the production of the $q\bar q$-pair are mapped onto the two-jet operators in SCET~\cite{Bauer:2002nz} \begin{equation} \mathcal{O}_2 = \bar\chi_{\bar{n}} \Gamma \chi_n\,, \end{equation} where $\Gamma=\gamma^\mu$ or $\Gamma=\gamma^\mu \gamma_5$ for vector or axial-vector currents respectively. Here, $n$ is a light-like 4-vector aligned with the thrust axis and the composite fields $\chi_n$ and $\chi_{\bar{n}}$ are the collinear quark fields in the $n$- and $\bar n$-directions, multiplied by light-like Wilson lines~\cite{Hill:2002vw}. The first step in the effective field theory calculation is matching to full QCD. This is done by calculating matrix elements in SCET and in QCD and adjusting the Wilson coefficients in the effective theory so that the matrix elements agree. Performing the matching on-shell, one finds that the relevant matching coefficient for the vector operator is given by the on-shell vector quark form factor. In a scheme with an anti-commuting $\gamma_5$, the Wilson coefficients of the vector and axial-vector operators are identical. Neglecting electro-weak corrections, the use of such a scheme is consistent in the endpoint region $\tau\to0$. 
In this region, the two energetic quarks produced directly by the current always appear in the final state, so that the $\gamma_5$ matrices from the axial currents always appear in a single trace formed by the cut fermion loop. After normalizing to the tree-level cross section, the hard function $H(Q^2,\mu)$ is given by the absolute value squared of the time-like on-shell form factor. Using the known two-loop result for the on-shell QCD form factor \cite{Matsuura:1987wt,Matsuura:1988sm,Gehrmann:2005pd,Moch:2005id}, the hard function at two loops was derived in \cite{Becher:2006mr}. It satisfies the RG equation \cite{Becher:2006nr} \begin{eqnarray}\label{Hrge} \frac{\mathrm{d}}{\mathrm{d}\ln\mu}\,H(Q^2,\mu) &=& \left[ 2\Gamma_{\rm cusp}(\alpha_s) \ln\frac{Q^2}{\mu^2} + 2\gamma^H(\alpha_s) \right] H(Q^2,\mu) \,, \end{eqnarray} whose solution can be written as \begin{equation} H(Q^2, \mu) = H(Q^2, \mu_h) \exp \left[ 4 S (\mu_h, \mu) - 2A_H (\mu_h, \mu) \right]\, \left( \frac{Q^2}{\mu_h^2} \right)^{-2A_\Gamma(\mu_h,\mu)}\,. \end{equation} Here, \begin{equation}\label{Sfun} S (\nu, \mu) = - \int_{\alpha_s (\nu)}^{\alpha_s (\mu)} \mathrm{d} \alpha \frac{\Gamma_{\mathrm{cusp}} (\alpha)}{\beta (\alpha)} \int_{\alpha_s (\nu)}^{\alpha} \frac{\mathrm{d} \alpha'}{\beta (\alpha')} \end{equation} and \begin{equation}\label{Afun} A_H (\nu, \mu) = - \int_{\alpha_s (\nu)}^{\alpha_s (\mu)} \mathrm{d} \alpha \frac{\gamma^H (\alpha)}{\beta (\alpha)}\,. \end{equation} The function $A_\Gamma(\nu,\mu)$ is defined as $A_H (\nu, \mu)$, but with $\gamma^H$ replaced by $\Gamma_{\rm cusp}$. The solutions of the RG equations for the jet and soft function given below involve functions $A_J (\nu,\mu)$, $A_S (\nu, \mu)$ which are obtained from (\ref{Afun}) by substituting $\gamma_J, \gamma_S$ for $\gamma_H$.
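As a purely illustrative numerical check (not part of the original analysis), the double integral (\ref{Sfun}) can be evaluated directly and compared with its leading-order closed form, obtained by inserting the one-loop $\beta$-function and cusp anomalous dimension. The sketch below assumes one-loop running with $n_f=5$ and a hypothetical reference value $\alpha_s(91.2\,{\rm GeV})=0.118$.

```python
import math

# Illustration only: leading-order evaluation of the RG function S(nu, mu)
# defined by the double integral (Sfun), assuming one-loop running with
# nf = 5 and a hypothetical reference value alpha_s(91.2 GeV) = 0.118.
# Conventions used here:
#   beta(alpha)       = -beta0 * alpha^2 / (2*pi)
#   Gamma_cusp(alpha) = Gamma0 * alpha / (4*pi),  Gamma0 = 4*CF
CF, nf = 4.0 / 3.0, 5
beta0 = 11.0 - 2.0 * nf / 3.0
Gamma0 = 4.0 * CF

def alpha_s(mu, a_ref=0.118, mu_ref=91.2):
    """One-loop running coupling."""
    return a_ref / (1.0 + a_ref * beta0 / (2.0 * math.pi) * math.log(mu / mu_ref))

def S_closed(nu, mu):
    """LO closed form: S = pi*Gamma0/beta0^2 * (1 - 1/r - ln r)/alpha_s(nu)."""
    r = alpha_s(mu) / alpha_s(nu)
    return math.pi * Gamma0 / beta0**2 * (1.0 - 1.0 / r - math.log(r)) / alpha_s(nu)

def S_numeric(nu, mu, n=4000):
    """Direct trapezoid-rule evaluation of the double integral defining S."""
    a_nu, a_mu = alpha_s(nu), alpha_s(mu)
    beta = lambda a: -beta0 * a * a / (2.0 * math.pi)
    inner = lambda a: 2.0 * math.pi / beta0 * (1.0 / a - 1.0 / a_nu)  # int da'/beta
    f = lambda a: Gamma0 * a / (4.0 * math.pi) / beta(a) * inner(a)
    h = (a_mu - a_nu) / n
    return -h * (0.5 * (f(a_nu) + f(a_mu)) + sum(f(a_nu + i * h) for i in range(1, n)))

print(S_closed(91.2, 9.12), S_numeric(91.2, 9.12))
```

The two evaluations agree to the accuracy of the quadrature, and the exponent is negative when evolving downward in scale, as expected for a Sudakov suppression.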
It is straightforward to expand $S (\nu, \mu)$ and $A_H (\nu, \mu)$ perturbatively in $\alpha_s (\nu)$ and $\alpha_s (\mu)$ given the expansions of $\Gamma_{\mathrm{cusp}} (\alpha)$ and $\gamma^H (\alpha)$. The explicit expansions can be found in~\cite{Becher:2006mr}. In SCET the jet function is given by the imaginary part of the collinear quark propagator, \begin{equation} J (p^2, \mu) = \frac{1}{( \bar{n} \cdot p)} \frac{1}{\pi} \mathrm{Im} \left[ i \int \mathrm{d}^4x\, e^{- i p x} \langle 0 | {\bf T} \left\{ \bar\chi_n(x)\,\frac{{\bar n}\!\!\!/}{2}\, {\chi}_{n} (0) \right\} |0 \rangle \right] =\delta(p^2) + {\cal O}(\alpha_s) \, \end{equation} and thus vanishes for $p^2<0$. The jet function was calculated at one loop in~\cite{Bauer:2003pi} and at two loops in~\cite{Becher:2006qw}. To evaluate the function perturbatively, it is convenient to rewrite the collinear quark propagator in terms of QCD fields. One finds that the jet function is obtained from the quark propagator in light-cone gauge. The jet function satisfies an RG equation which is non-local in $p^2$~\cite{Becher:2006qw}, \begin{equation} \frac{\mathrm{d} J (p^2, \mu)}{\mathrm{d} \ln \mu} = \left[ - 2 \Gamma_{\mathrm{cusp}} \ln \frac{p^2}{\mu^2} - 2 \gamma_J \right] J (p^2, \mu) + 2 \Gamma_{\mathrm{cusp}} \int_0^{p^2} \mathrm{d} q^2 \frac{J (p^2, \mu) - J (q^2, \mu)}{p^2 - q^2}\,. \end{equation} From the divergent part of the form factor at three loops~\cite{Moch:2005id} and the NNLO Altarelli-Parisi splitting functions~\cite{Moch:2004pa} the jet anomalous dimension $\gamma_J$ was derived at three loops in~\cite{Becher:2006mr} and is given in Appendix \ref{sec:anomalous}. Although the RG equation is non-local in $p^2$, it is local in $\mu$ and can be solved using Laplace transform techniques.
The result is~\cite{Becher:2006mr} \begin{equation}\label{jetsol} J (p^2, \mu) = \exp \left[ - 4 S (\mu_j, \mu) + 2 A_J (\mu_j, \mu) \right] \widetilde{j} (\partial_{\eta_j},\mu_j) \frac{1}{p^2} \left( \frac{p^2}{\mu_j^2} \right)^{\eta_j} \frac{e^{- \gamma_E \eta_j}}{\Gamma (\eta_j)} \,, \end{equation} where $\eta_j = 2 A_{\Gamma} (\mu_j, \mu)$. The function $\widetilde j(L,\mu)$ is the Laplace transform of the jet function. Its definition and explicit form are given in Appendix \ref{sec:Hjs}. To any given order in perturbation theory, $\widetilde j(L,\mu)$ is a polynomial in the variable $L$ so that the derivatives with respect to $\eta_j$ in (\ref{jetsol}) can be performed explicitly. The thrust soft function is defined as a matrix element of Wilson lines along the directions of the energetic quarks, \begin{equation} S_T (k) = \sum_X \left| \left\langle X|Y_n^{\dag} Y_{\bar{n}} |0 \right\rangle \right|^2 \delta (k - n \cdot p_{X_n} - \bar{n} \cdot p_{X_{\bar{n}}})\,, \end{equation} where \begin{equation}\label{eq:Sn} Y_n = {\rm\bf P}\,\exp\left( ig\int_{-\infty}^0\!\mathrm{d} t\,n\cdot A_{s}(t n) \right)\,. \end{equation} This Wilson line describes the eikonal interactions of soft gluons with the fast-moving quark, and $p_{X_n}$ ($p_{X_{\bar n}}$) is the sum of the momenta of the soft particles in the $n$-hemisphere (${\bar n}$-hemisphere). The variable $k$ measures the change in the invariant mass due to soft emissions from the two jets. At leading power, the mass in the $n$-hemisphere is given by \begin{eqnarray} M_n^2 &=& (p_n+p_{X_n})^2 \approx p_n^2+ Q\,( n \cdot p_{X_n}) \,, \end{eqnarray} where $p_n$ denotes the total collinear momentum in the hemisphere. Note that the soft function vanishes for negative argument. Like the jet function, the soft function can be calculated order-by-order in perturbation theory.
The one-loop soft function was derived in~\cite{Schwartz:2007ib} from results of~\cite{Korchemsky:1993uz}; it was also calculated directly in SCET~\cite{Fleming:2007xt}. The two-loop soft function will be determined below. The factorization theorem (\ref{scetfact}) and the fact that the thrust distribution is independent of the renormalization scale $\mu$ implies that the soft function fulfills the RG equation \begin{equation}\label{Srge} \frac{{\rm d}S_T(k,\mu)}{{\rm d}\ln\mu} = \left[ 4\Gamma_{\rm cusp}(\alpha_s)\,\ln\frac{k}{\mu} - 2\gamma^S(\alpha_s) \right] S_T(k,\mu) \mbox{}-4\Gamma_{\rm cusp}(\alpha_s) \int_0^{k}\!dk'\, \frac{S_T(k,\mu)-S_T(k',\mu)}% {k-k'} \,, \end{equation} and that, to all orders, \begin{equation}\label{gw} \gamma^S = \gamma^H- 2 \gamma^J \,. \end{equation} This relation was checked to one loop in~\cite{Schwartz:2007ib} (with a different convention for $\gamma_H$), and here we use it to determine the two- and three-loop soft anomalous dimensions. Similar to (\ref{jetsol}), the solution for the soft function is \begin{equation}\label{softsol} S_T(k, \mu) = \exp \left[ 4 S (\mu_s, \mu) + 2 A_S (\mu_s, \mu) \right] \widetilde{s}_T (\partial_{\eta_s}) \frac{1}{k} \left( \frac{k}{\mu_s} \right)^{\eta_s} \frac{e^{- \gamma_E \eta_s}}{\Gamma \left( \eta_s \right)} \,, \end{equation} with $\eta_s = - 4 A_{\Gamma} (\mu_s, \mu)$. From the linearity of $A_S$ in $\gamma_S$ it also follows that $A_S = A_H - 2 A_J$. The convolution integrals in (\ref{scetfact}) can be done analytically once the solutions (\ref{jetsol}) and (\ref{softsol}) are put back into the factorization theorem. 
The thrust distribution becomes \begin{multline}\label{complicated} \frac{1}{\sigma_0} \frac{{\rm d} \sigma_2}{{\rm d} \tau} = \exp \left[ 4 S (\mu_h, \mu) - 2 A_H (\mu_h, \mu) - 8 S (\mu_j, \mu) + 4 A_J (\mu_j, \mu) + 4 S (\mu_s, \mu) + 2 A_S (\mu_s, \mu) \right] \\ \times \left(\frac{Q^2}{\mu_h^2}\right)^{-2A_\Gamma(\mu_h,\mu)} H(Q^2,\mu_h) \left[ \widetilde{j} (\partial_{2 \eta_j},\mu_j)\right]^2 \widetilde{s}_T (\partial_{\eta_s},\mu_s) \left[ \frac{1}{\tau} \left( \frac{\tau Q^2}{\mu_j^2} \right)^{2 \eta_j} \left( \frac{\tau Q}{\mu_s} \right)^{\eta_s} \frac{e^{- \gamma_E (2 \eta_j + \eta_s)}}{\Gamma (2 \eta_j + \eta_s)} \right] \,. \end{multline} Using the relations, \begin{align} A_\Gamma(\mu_1,\mu_2) +A_\Gamma(\mu_2,\mu_3) &= A_\Gamma(\mu_1,\mu_3)\,, \\ S (\mu_1, \mu_2) + S(\mu_2, \mu_3) &= S (\mu_1, \mu_3) + \ln \frac{\mu_1}{\mu_2} A_{\Gamma} (\mu_2, \mu_3) \,, \nonumber \end{align} and \begin{equation} f (\partial_{\eta}) X^{\eta} = X^{\eta} f (\ln X + \partial_{\eta})\,, \end{equation} the expression (\ref{complicated}) simplifies to \begin{multline} \label{scetdist} \frac{1}{\sigma_0} \frac{\mathrm{d}\sigma_2}{\mathrm{d}\tau}= \exp\left[ 4S(\mu_h,\mu_j)+4S(\mu_s,\mu_j)-2A_H(\mu_h,\mu_s)+4A_J(\mu_j,\mu_s)\right] \left(\frac{Q^2}{\mu_h^2}\right)^{-2A_\Gamma(\mu_h,\mu_j)} \\ \times H(Q^2,\mu_h)\, \left[\widetilde j\Big( \ln\frac{\mu_s Q}{\mu_j^2}+\partial_\eta,\mu_j\Big)\right]^2\, \widetilde s_T\Big(\partial_\eta,\mu_s\Big) \frac{1}{\tau} \left(\frac{\tau Q}{\mu_s} \right)^{\eta} \frac{e^{-\gamma_E \eta}}{\Gamma(\eta)}\,, \end{multline} with $\eta = 4 A_{\Gamma} (\mu_j, \mu_s)$. From this final result we can read off the canonical relations among the hard, jet, and soft matching scales and the physical scales $Q$ and $p \sim \sqrt{\tau} Q$: \begin{equation}\label{canonical} \mu_h = Q\,, \hspace{1em} \mu_j = \sqrt{\tau} Q\,, \hspace{1em} \mu_s = \tau Q\,. \end{equation} Note that the arbitrary reference scale $\mu$ has dropped out completely. 
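The operator identity used in this simplification can be spot-checked numerically for a simple choice of $f$. The sketch below (with arbitrary, hypothetical values of $X$, $\eta$ and the coefficients) compares a finite-difference evaluation of the left-hand side with the explicit right-hand side for the linear function $f(L)=a+bL$ acting on $X^\eta g(\eta)$ with test function $g(\eta)=\eta^2$.

```python
import math

# Numerical spot check (arbitrary hypothetical values) of the identity
#   f(d/d_eta) [X^eta g(eta)] = X^eta f(ln X + d/d_eta) g(eta)
# for f(L) = a + b*L and the test function g(eta) = eta^2.
X, eta, a, b = 2.7, 0.4, 1.3, -0.8
g = lambda e: e * e
F = lambda e: X**e * g(e)                     # X^eta g(eta)

h = 1e-5                                      # central difference for d/d_eta
lhs = a * F(eta) + b * (F(eta + h) - F(eta - h)) / (2.0 * h)

gprime = 2.0 * eta                            # g'(eta)
rhs = X**eta * (a * g(eta) + b * (math.log(X) * g(eta) + gprime))
print(lhs, rhs)
```

Since $\widetilde j$ and $\widetilde s_T$ are polynomials in their first argument at any fixed order, the same manipulation applies term by term in the exact expression.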
For the $\alpha_s$ fits, we need the differential thrust distribution integrated over each bin. The integral of the thrust distribution can be evaluated analytically, since the derivatives with respect to $\eta$ in (\ref{scetdist}) commute with the integration over $\tau$. The resulting expression is \begin{multline} R_2(\tau) = \int_0^{\tau} \frac{1}{\sigma_0} \frac{\mathrm{d} \sigma_2}{\mathrm{d} \tau'} \mathrm{d} \tau'= \exp \left[4 S (\mu_h, \mu_j) + 4 S (\mu_s, \mu_j) - 2 A_H (\mu_h, \mu_s) + 4 A_J (\mu_j, \mu_s) \right] \\ \times \left(\frac{Q^2}{\mu_h^2}\right)^{-2A_\Gamma(\mu_h,\mu_j)} H(Q^2,\mu_h) \left[ \widetilde{j} (\ln \frac{\mu_s Q}{\mu_j^2} + \partial_{\eta},\mu_j) \right]^2 \widetilde{s}_T (\partial_{\eta},\mu_s) \left[ \left( \frac{\tau Q}{\mu_s} \right)^{\eta} \frac{e^{- \gamma_E \eta}}{\Gamma (\eta + 1)} \right] \,. \label{R2eq} \end{multline} Note that the integral is performed for fixed $\mu_j$ and $\mu_s$, that is, before setting them to their canonical $\tau$-dependent values. In this way, large logarithms are removed in the observable of interest, not for some intermediate expression. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{order} & \multirow{2}{*}{$\Gamma_{\rm cusp}$} & \multirow{2}{*}{$\gamma^{H/J/S}$} & \multirow{2}{*}{$H$, $\widetilde j$, $\widetilde s_T$} & \multirow{2}{*}{$\beta$} & fixed-order & logarithmic \\ & & & & & matching & accuracy \\ \hline {1$^{\mathrm{st}}$} order & 2-loop & 1-loop & tree & 2-loop& -- & NLL \\ \hline {2$^{\mathrm{nd}}$} order & 3-loop & 2-loop & 1-loop & 3-loop& LO & NNLL \\ \hline {3$^{\mathrm{rd}}$} order & 4-loop & 3-loop & 2-loop & 4-loop& NLO & N$^3$LL \\ \hline {4$^{\mathrm{th}}$} order & 4-loop & 3-loop & 3-loop & 4-loop& NNLO & N$^3$LL \\ \hline \end{tabular} \end{center} \caption{Definition of orders in perturbation theory} \label{tab:ords} \end{table} Different definitions of logarithmic accuracy are commonly used in the literature. 
Before proceeding further, we now show which logarithms are included at a given order in our calculation. We use renormalization-group improved perturbation theory, in which logarithms of scales are eliminated in favor of coupling constants at different scales, which are counted as small parameters of the same order \begin{equation} \ln \frac{\mu}{\nu} = \int_{\alpha_s(\nu)}^{\alpha_s(\mu)} \frac{d\alpha}{\beta(\alpha)} = \frac{2\pi}{\beta_0}\left(\frac{1}{\alpha_s(\mu)}-\frac{1}{\alpha_s(\nu)}\right) +\dots\,. \end{equation} The expansion of the Sudakov exponent (\ref{Sfun}) then takes the form \begin{equation} \label{RGI} S(\nu,\mu)= \frac{1}{\alpha_s(\nu)} f_1(r) + f_2(r) + \alpha_s(\nu) f_3(r) + \alpha_s(\nu)^2 f_4(r)+ \dots \end{equation} where $r=\alpha_s(\mu)/\alpha_s(\nu)$. The explicit expressions for the functions $f_1$ to $f_4$ needed for our calculation are given in \cite{Becher:2006mr}. The leading-order term $\alpha_s^0$ in renormalization group improved perturbation theory involves the functions $f_1$ and $f_2$, which depend on the one- and two-loop cusp anomalous dimensions. To make contact with the literature, we can expand $\alpha_s(\mu)$ around fixed coupling $\alpha_s\equiv\alpha_s(\nu)$. The result takes the form \begin{equation} \label{LOG} S(\nu,\mu)= L\, g_1(\alpha_s L) + g_2(\alpha_s L) + \alpha_s g_3(\alpha_s L) +\alpha_s^2 g_4(\alpha_s L) + \dots\,, \end{equation} with $L=\ln(\mu/\nu)$. LL resummations include only $g_1$, NLL also $g_2$ and so forth. When rewriting (\ref{RGI}) in the form (\ref{LOG}), the expansion of $f_i$ contributes to the functions $g_j$ with $j\geq i$ so that there is a one-to-one correspondence between the order in renormalization group improved perturbation theory and the standard logarithmic accuracy. Note that the higher-order terms in (\ref{RGI}) and (\ref{LOG}) are suppressed by explicit factors of $\alpha_s$.
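At one loop, the relation between $\ln(\mu/\nu)$ and the inverse couplings holds without the higher-order terms indicated by the dots, which is easy to verify numerically (the reference values below are hypothetical and serve only as an illustration):

```python
import math

# One-loop check (hypothetical reference values) of
#   ln(mu/nu) = (2*pi/beta0) * (1/alpha_s(mu) - 1/alpha_s(nu)),
# the relation used to count logarithms in RG-improved perturbation theory.
beta0 = 11.0 - 2.0 * 5 / 3.0                  # nf = 5

def alpha_s(mu, a_ref=0.118, mu_ref=91.2):
    """One-loop running coupling with an assumed reference value."""
    return a_ref / (1.0 + a_ref * beta0 / (2.0 * math.pi) * math.log(mu / mu_ref))

mu, nu = 10.0, 91.2
lhs = math.log(mu / nu)
rhs = 2.0 * math.pi / beta0 * (1.0 / alpha_s(mu) - 1.0 / alpha_s(nu))
print(lhs, rhs)
```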
The missing pieces in the integral $R_2(\tau)$ at N$^3$LL are suppressed by $\alpha_s^3$ so that the missing logarithms are $\alpha^3\times\alpha^{n} \ln^{2n}\tau \equiv \alpha^k \ln^{2k-6}\tau$ for the default scale choice. In particular, at order $\alpha^3$ the N$^3$LL result includes everything except for the constant term in $R_2(\tau)$ which does not contribute to the thrust distribution. In Table \ref{tab:ords}, we list the ingredients to obtain (\ref{R2eq}) to a given accuracy. The necessary anomalous dimensions and the results for the functions $H$, $\widetilde{j}$ and $\widetilde{s}_T$ are provided in Appendix \ref{sec:anomalous}. Everything in the table except for the four-loop cusp anomalous dimension and the constant part of the two-loop soft function is known. We estimate the former using the Pad\'e approximation $\Gamma_4=\Gamma_3^2/\Gamma_2$~\cite{Moch:2005ba} and determine the latter numerically in the next section. Rather than specifying both the accuracy of the resummation and the order of the fixed-order matching, we will in the following simply refer to the definitions of {1$^{\mathrm{st}}$}, {2$^{\mathrm{nd}}$}, {3$^{\mathrm{rd}}$} and {4$^{\mathrm{th}}$} order as given in Table \ref{tab:ords}. Note that the difference between {3$^{\mathrm{rd}}$} and {4$^{\mathrm{th}}$} order, as we have defined them, is only the inclusion of NNLO matching corrections, but the logarithmic accuracy stays the same. \section{Resummation vs. fixed order \label{sec:match}} \begin{figure}[t!] \begin{center} \includegraphics[width=0.95\textwidth]{plSingularVsFull} \end{center} \vspace*{-1.2cm} \caption{A comparison of the full fixed-order calculations and the fixed-order expansion of the resummed distributions from the effective field theory.
The light-red areas in the NNLO histogram are an estimate of the statistical uncertainty.} \label{fig:ABC} \end{figure} In this section, we compare the resummed expression, valid in the endpoint region $\tau \to 0$, to the fixed-order expression, which is valid away from the endpoint. The resummed expression, when expanded to fixed order, must reproduce the $\tau = 0$ singularities of the fixed-order calculation. This observation can be used to extract numerically the constant part of the two-loop soft function. Then, by including the difference between the expanded resummed expression and the fixed-order expression, we derive the final matched distribution. The fixed-order thrust distribution has been calculated to leading order analytically and to NLO and NNLO numerically. For the scale choice $\mu=Q$, the result is usually written in the form \begin{equation}\label{fixeddist} \frac{1}{\sigma_0} \frac{\mathrm{d} \sigma}{\mathrm{d} \tau} = \delta (\tau) + \left( \frac{\alpha_s}{2 \pi} \right) A (\tau) + \left( \frac{\alpha_s}{2 \pi} \right)^2 B (\tau) + \left( \frac{\alpha_s}{2 \pi} \right)^3 C (\tau) + \cdots\,, \end{equation} where we have suppressed the argument of the coupling constant, $\alpha_s \equiv \alpha_s (Q)$. Throughout the following analysis, we use an analytical form for $A (\tau)$, a numerical calculation of $B (\tau)$ using the program {\sc{event2}}~\cite{Catani:1996jh} with $10^{10}$ events and a numerical calculation of $C (\tau)$ that was generously provided by the authors of~\cite{GehrmannDeRidder:2007bj}. A value of $y_0=10^{-5}$ for the infrared cut-off was used in the calculation of the NNLO histograms, see \cite{GehrmannDeRidder:2007jk}. The resummed differential thrust distribution in the effective theory is given in Eq. (\ref{scetdist}). To compare with the fixed-order results (\ref{fixeddist}), we set all scales equal, $\mu_h=\mu_j=\mu_s=Q$.
Doing so switches off the resummation: all evolution factors, such as $S (\mu_h, \mu_j)$ and $A_H(\mu_h,\mu_s)$, vanish in the limit of equal scales. Before taking the limit $\eta =4A_\Gamma(\mu_j,\mu_s)\rightarrow 0$, we expand the kernel in (\ref{scetdist}) using the relation \begin{equation} \frac{1}{\tau^{1-\eta}}= \frac{1}{\eta} \delta(\tau) +\sum_{n=0}^{\infty} \frac{\eta^n}{n!} \left[\frac{\ln^n\tau}{\tau}\right]_+\,, \end{equation} and evaluate the derivatives with respect to $\eta$ using the explicit expressions for $\widetilde{j}$ and $\widetilde{s}_T$. The result is a sum of distributions \begin{equation} \frac{1}{\sigma_0} \frac{\mathrm{d} \sigma_2}{\mathrm{d} \tau} = \delta (\tau) D_{\delta} + \left( \frac{\alpha_s}{2 \pi} \right) \left[ D_A (\tau) \right]_+ + \left( \frac{\alpha_s}{2 \pi} \right)^2 \left[ D_B (\tau) \right]_+ + \left( \frac{\alpha_s}{2 \pi} \right)^3 \left[ D_C (\tau) \right]_+ + \cdots\,. \label{foscet} \end{equation} The coefficients $D_{\delta}$, $D_A$, $D_B$ and $D_C$ are given in Appendix~\ref{sec:singular}. Away from $\tau = 0$, the $\delta$-function terms can be dropped and the plus-distributions reduce to their argument functions, $[D_X (\tau)]_+ =D_X (\tau)$. Since the effective field theory resums the large logarithms of the fixed-order distribution, there should not be any $1/\tau$ singularities in $A$, $B$, or $C$ which are not reproduced in $D_A$, $D_B$ and $D_C$ respectively. This was shown analytically for the $A$ function in~\cite{Schwartz:2007ib}. It is demonstrated numerically for $A$, $B$ and $C$ in Figure \ref{fig:ABC}. In fact, the figure shows that even at moderate $\tau$, the thrust distribution is dominated by the singular terms. Note that the lowest three bins of the numerical result for $C$ are above the effective theory prediction. This is due to numerical difficulties in the fixed-order code used to evaluate $C$ and will be explored in more detail below. \begin{figure}[t!]
\psfrag{x}[]{\small $1-T$} \psfrag{y}[]{{\small $(1-T)$}{\large $\frac{1}{\sigma}$}{\Large$\frac{\mathrm{d} \sigma}{\mathrm{d} T}$}} \begin{center} \includegraphics[width=0.6\textwidth]{nlocols.eps} \end{center} \vspace*{-0.7cm} \caption{Color structures used in NLO comparison with fixed order.} \label{fig:NLOcols} \end{figure}% The SCET expression (\ref{scetdist}) for the thrust distribution is valid as $\tau \rightarrow 0$, that is, in the 2-jet region. One could perform resummation also for terms which are power suppressed in this limit, by including operators with additional fields or derivatives into the effective theory~\cite{Bauer:2006mk,Bauer:2006qp}. However, since these terms are power suppressed it is sufficient to include them at fixed order. To do so, we simply subtract the singular terms from the fixed-order expression. The remainder is \begin{multline} r(\tau)\equiv \frac{1}{\sigma_0}\left( \frac{\mathrm{d} \sigma}{\mathrm{d} \tau} - \frac{\mathrm{d} \sigma_2}{\mathrm{d} \tau}\right) \\ = \left( \frac{\alpha_s}{2 \pi} \right) \left[ A (\tau) - D_A (\tau) \right] + \left( \frac{\alpha_s}{2 \pi} \right)^2 \left[ B (\tau) - D_B (\tau) \right] + \left( \frac{\alpha_s}{2 \pi} \right)^3 \left[ C (\tau) - D_C (\tau) \right] + \cdots \,.\label{scet3} \end{multline} Including the matching contribution, the thrust distribution becomes \begin{equation} \frac{1}{\sigma_0}\frac{\mathrm{d} \sigma}{\mathrm{d} \tau} = \frac{1}{\sigma_0}\frac{\mathrm{d} \sigma_2}{\mathrm{d} \tau} + r(\tau) \,. \label{scet23} \end{equation} With the inclusion of $r(\tau)$, our result not only resums the thrust distribution to N$^3$LL, but it is also correct to NNLO in fixed-order perturbation theory. \begin{figure}[t!] \begin{center} \psfrag{y}[]{$c_2^S$} \includegraphics[width=0.6\textwidth]{softplot.eps} \end{center} \vspace*{-0.8cm} \caption{Extraction of the two-loop constants in the soft function.
The points correspond to the value of an infrared cutoff applied to the fixed-order calculation. The lines are interpolations among the points from $\tau=0.001$ to $\tau=0.003$ extrapolated to $\tau=0$ to extract the constants. From top to bottom, the curves are the $C_F^2, C_A$ and $n_f$ color factors.} \label{fig:softNLO} \end{figure} Now let us turn to the two-loop soft function. Its RG equation together with the anomalous dimensions determine the logarithmic part of ${\widetilde s}_T$, but the constant part \begin{equation} {\widetilde s}_T(0,\mu) = 1 + C_F\frac{\alpha_s}{4\pi} \left( -\pi ^2 \right) + C_F \left( \frac{\alpha_s}{4\pi} \right)^2 \left[ C_F\,c_{2,C_F}^S + C_A\, c_{2,C_A}^S + T_F \,n_f \, c_{2,n_f}^S \right] \end{equation} cannot be obtained in this way. We will determine the constant from the requirement that the integral over the thrust distribution reproduces the total hadronic cross section \begin{equation} \frac{\sigma_{\rm had}}{\sigma_0} = 1 + \frac{\alpha_s}{4 \pi} \left[ 3 C_F \right] + \left( \frac{\alpha_s}{4 \pi} \right)^2 \left[ C_F C_A \left( \frac{123}{2} - 44 \zeta_3 \right) + C_F T_F n_f \left( - 22 + 16 \zeta_3 \right) - C_F^2 \frac{3}{2} \right]\, . \label{sigtot} \end{equation} Plugging (\ref{foscet}) and (\ref{scet3}) into ($\ref{scet23}$) we find \begin{equation} \frac{\sigma_{\rm had}}{\sigma_0} = D_{\delta} + \int_0^1 d \tau\, r(\tau) = 1 + \frac{\alpha_s}{\pi} + \left( \frac{\alpha_s}{4 \pi} \right)^2 \left\{ 317.5 + c_{2}^S + 4\int_0^1 \left[ B (\tau) - D_B (\tau) \right] d \tau \right\}\,, \end{equation} where 317.5 comes from setting $n_f=5$ in $D_\delta$ (see Appendix C). Since we know separately the color structures for $B$ (numerically) and $D_B$ (analytically), as shown in Figure \ref{fig:NLOcols}, we can perform this integral numerically and then extract $c_2^S$ by comparing to (\ref{sigtot}). 
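Since $B(\tau)$ is only available as a binned numerical distribution, the integral over $B(\tau)-D_B(\tau)$ has to be regulated by an infrared cutoff and extrapolated to zero cutoff. A schematic version of this extrapolation step, with invented data points in place of the actual {\sc event2} output, is:

```python
# Schematic version of the cutoff extrapolation: given cumulative integrals
# I(tau0) = int_{tau0}^1 [B(tau) - D_B(tau)] dtau at several small cutoffs,
# fit a straight line and read off the intercept at tau0 = 0.  The points
# below are synthetic (exactly linear, intercept -60.0), not EVENT2 output.
def extrapolate_to_zero(points):
    """Least-squares straight line through (tau0, I) pairs; returns the
    intercept at tau0 = 0."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return (sy - slope * sx) / n

pts = [(t, -60.0 - 150.0 * t) for t in (0.0010, 0.0015, 0.0020, 0.0025, 0.0030)]
print(extrapolate_to_zero(pts))
```

In the actual analysis the cutoff points come from dropping the lowest bins of the numerical distribution, and the fit uncertainty of the intercept contributes to the quoted errors on the constants.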
Although the difference $B (\tau) - D_B (\tau)$ is integrable as $\tau \rightarrow 0$, both of these functions are separately divergent. To have numerically stable results, we impose an infrared cutoff $\tau_0$ on the integral and interpolate to $\tau_0 = 0$. We do this in discrete steps by dropping the lowest bins in the $B (\tau)$ distribution which was generated with the {\sc{event2}} program. The convergence and interpolation are shown in Figure \ref{fig:softNLO}. We find \begin{align}\label{softConst} c_{2,C_F}^S &= 58 \pm 2\,, & c_{2,C_A}^S & = - 60 \pm 1\,, & c_{2,n_f}^S & = 43 \pm 1\,. \end{align} These constants were explored previously in \cite{Catani:1992ua}. Lacking the form of the divergences near $\tau = 0$, these authors had to fit for the shape of the curve as well as the constants, leading to results with much poorer accuracy. A comparison with the results of \cite{Catani:1992ua} is given in Appendix \ref{sec:singular}. \begin{figure}[t!] \psfrag{N2}[]{\small $N^2$} \psfrag{N0}[]{\small $N^0$} \psfrag{Nm}[]{\small $1/N^{2}$} \psfrag{Nnf}[]{\small $n_f N$} \psfrag{NfNm}[]{\small $n_f/N$} \psfrag{Nf2}[]{\small $n_f^{2}$} \psfrag{sig}[t]{{\small $\phantom{aa}10^{-2}(1-T)$}{\Large $\frac{1}{\sigma}$}{\Large$\frac{\mathrm{d}\sigma}{\mathrm{d} T}$}} \psfrag{log}[b]{\small $\phantom{abc}1-T$} \begin{center} \includegraphics[width=0.87\textwidth]{colorstructuresThrustAlt.eps} \end{center}\vspace*{-0.8cm} \caption{\label{fig:NNLOcols} Contributions of different color structures to the three-loop coefficients of the thrust distribution. The plots show a comparison of our result for the singular terms encoded in $D_C$ (blue lines) with the numerical evaluation of the full coefficient $C$ (red histograms) \cite{GehrmannDeRidder:2007jk}. The light-red areas are an estimate of the statistical uncertainty.} \end{figure} \begin{figure}[t!]
\begin{center} \psfrag{N2}[l]{\small $N^2$} \psfrag{N0}[l]{\small $N^0$} \psfrag{Nm}[l]{\small $1/N^{2}$} \psfrag{Nnf}[l]{\small $n_f N$} \psfrag{NfNm}[l]{\small $\phantom{a}n_f/N$} \psfrag{Nf2}[l]{\small $n_f^{2}$} \psfrag{sig}[B]{{\small $\phantom{ac}10^{-3}(1-T)$}{\Large $\frac{1}{\sigma}$}{\Large$\frac{\mathrm{d}\sigma}{\mathrm{d} T}$}} \psfrag{log}[B]{\small $\phantom{ab}-\ln(1-T)$} \includegraphics[width=0.90\textwidth]{colorstructuresThrustLogAlt} \end{center}\vspace*{-0.8cm} \caption{\label{fig:NNLOcolslog} Contributions to the three-loop coefficients of the thrust distribution. The plots show a comparison of our result for the singular terms (blue lines) with the numerical evaluation of the full result (red histograms) \cite{GehrmannDeRidder:2007jk}. The dotted, dashed and solid lines correspond to an infrared cut-off $y_0=10^{-5}$, $10^{-6}$ and $10^{-7}$, see \cite{GehrmannDeRidder:2007jk}. The light-red areas are an estimate of the statistical uncertainty.} \end{figure} We now have all the necessary perturbative input at hand to evaluate the thrust distribution and to extract $\alpha_s$. Before doing so, we compare the recent NNLO fixed-order results in detail to the singular terms predicted using the effective theory. In Figure \ref{fig:NNLOcols} the contributions of the six color structures which appear at $\alpha_s^3$ to $C (\tau)$ and $D_C (\tau)$ are plotted. The color structure of the NNLO coefficient $C$ has the form $C = C_F (N^2 C_1 + C_2+ 1/N^{2} C_3 + N n_f C_4+ n_f/N C_5+ n_f^2 C_6 ) $ and the plot shows the six parts, with the prefactors evaluated for $N=3$ colors and $n_f=5$ quark flavors. The figure shows that the singular terms (blue lines) are a good approximation to the full result (red histograms) for each color structure. What is surprising is that they seem to agree well almost everywhere. One consequence of this is that the matching to the NNLO fixed-order distributions will have a small effect.
The dominance of the logarithmically enhanced terms, even at moderate $\tau$, strongly suggests that resummation would indeed lead to a significant improvement in perturbative accuracy. The close agreement also provides a verification of the fixed-order result. Because the same numerical code is used for many other NNLO observables, such an independent check is certainly welcome. As we observed earlier, the lowest three bins of the NNLO fixed-order result of \cite{GehrmannDeRidder:2007jk} are higher than the singular terms obtained with the effective theory, see Figure $\ref{fig:ABC}$. The excess at small $\tau$ seen in Figure $\ref{fig:ABC}$ is barely noticeable in Figure \ref{fig:NNLOcols}, because we have multiplied the distributions by $\tau$ which de-emphasizes the small-$\tau$ region. To analyze this region in detail, we plot the distribution as a function of $\ln\tau$ in Figure \ref{fig:NNLOcolslog}. For very small $\tau$, the full result should reduce to the singular terms derived in the effective theory. However, this region is very challenging for the numerical integration. The numerical results are shown in red in Figure \ref{fig:NNLOcolslog} and the light-red bands are the statistical uncertainty from the numerical NNLO calculation. The three red lines correspond to different values of an infrared cutoff, which is imposed when generating events \cite{GehrmannDeRidder:2007jk}. The agreement is good, except for the two leading color structures. The authors of~\cite{GehrmannDeRidder:2007jk} are aware of the problem \cite{Gehrmann}. For the extraction of $\alpha_s$, the region of very small $\tau$ will not be used, so these numerical difficulties are not critical for present purposes. \section{$\alpha_s$ extraction and error analysis \label{sec:fit}} \begin{figure}[t] \begin{center} \includegraphics[width=\textwidth]{convergence} \end{center} \vspace{-0.8cm} \caption{Convergence of resummed and fixed-order distributions. 
{\sc{aleph}} data (red) and {\sc{opal}} data (blue) at $91.2$ GeV are included for reference. All plots have $\alpha_s(m_Z)=0.1168$.} \label{fig:sfconv} \end{figure} In this section we now use our result for the thrust distribution to determine $\alpha_s$, using {\sc lep} data from {\sc{aleph}} \cite{Heister:2003aj} and {\sc{opal}} \cite{Abbiendi:2004qz}. Before performing the fit, let us compare the perturbative expansion with and without resummation. The result at $Q=91.2$ GeV is shown in Figure \ref{fig:sfconv} side-by-side with the fixed-order expression. We use the same value $\alpha_s (m_Z) = 0.1168$ for both plots and have set the scales $\mu_h$, $\mu_j$ and $\mu_s$ to their canonical values (\ref{canonical}). For reference, we also show the {\sc{aleph}} and {\sc{opal}} data. The curves for the fixed-order calculation correspond to the standard LO, NLO, NNLO series; for the effective field theory calculation, the orders are defined in~Table~\ref{tab:ords}. It is quite striking how much faster the resummed distribution converges. In fact, it is hard to even distinguish the higher order curves after resummation, except in the region of very small $\tau$, where the distribution peaks. The peak region is affected by non-perturbative effects, as will be discussed in the next section, but it will not be used in the extraction of $\alpha_s$. The region relevant for the $\alpha_s$ extraction is shown in the lower two plots. The value of $\alpha_s (m_Z) = 0.1168$ we use in the plots corresponds to the best fit value in the range $0.1<\tau<0.24$ for the {\sc aleph} data set. However, the plot makes it evident that the extracted $\alpha_s$ value will not change much beyond first order. A fit to the NNLO fixed-order prediction gives $\alpha_s(m_Z) = 0.1275$. The {\sc{aleph}} and {\sc{opal}} collaborations have published analyses of the {\sc lep} 1 and higher energy {\sc lep} 2 thrust distributions. 
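The structure of the $\chi^2$ comparison behind such a determination of $\alpha_s$ can be illustrated with a linearized toy model (all numbers below are invented; the actual fit uses the binned resummed distribution and the published statistical errors):

```python
# Toy version of a one-parameter chi^2 fit (all numbers invented): with the
# theory prediction linearized per bin as theory_i = kernel_i * alpha, the
# chi^2 built from statistical errors is quadratic in alpha and its minimum
# can be written in closed form.
def chi2(alpha, data, err, kernel):
    """chi^2 for a given value of the fit parameter alpha."""
    return sum(((d - k * alpha) / e) ** 2 for d, e, k in zip(data, err, kernel))

def best_fit(data, err, kernel):
    """Analytic minimum of the quadratic chi^2 in alpha."""
    num = sum(d * k / e**2 for d, e, k in zip(data, err, kernel))
    den = sum((k / e) ** 2 for e, k in zip(err, kernel))
    return num / den

kernel = [8.0, 6.0, 4.5, 3.0]            # hypothetical per-bin sensitivities
data = [k * 0.1168 for k in kernel]      # fake noiseless "measurements"
err = [0.01, 0.01, 0.02, 0.02]           # per-bin statistical errors
print(best_fit(data, err, kernel), chi2(0.1168, data, err, kernel))
```

With noiseless fake data the minimum reproduces the input value exactly; in the real fit the curvature of $\chi^2$ around the minimum supplies the statistical error on $\alpha_s$.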
To fit $\alpha_s$ we calculate the thrust distribution integrated over each bin measured in the experiments. The resummed contribution in a given bin is obtained as $R_2(\tau_R) - R_2(\tau_L)$ using Eq.~\eqref{R2eq} for the bin with $\tau_L < \tau < \tau_R$. For the matching contribution, we integrate analytically the $D_A(\tau), D_B(\tau)$ and $D_C(\tau)$ functions and subtract them from the analytic integral of $A (\tau)$ and the appropriately binned numerical distributions $B (\tau)$ and $C (\tau)$. A problem we encounter when trying to extract $\alpha_s$ is that the experiments have published statistical, systematic, and hadronization uncertainties for each bin, but have not made the bin-by-bin correlations public. Without this information, we proceed with a conservative approach to error estimates: to extract the default value of $\alpha_s$, we perform a $\chi^2$-fit to the data including only statistical uncertainties. We then use the systematic and hadronization errors on $\alpha_s$ obtained in previous fits to {\sc{aleph}} \cite{Dissertori:2007xa} and {\sc{opal}} \cite{Abbiendi:2004qz} data. In these papers fits to $\alpha_s$ were performed which included the correlation information. To be able to use their values, we perform our fits using exactly the same fit ranges as used in these papers. This is not entirely optimal, since the experimental systematic error will depend somewhat on the theoretical model used in the fit. \begin{figure}[t] \psfrag{x}[l]{$\!\!\!\!\!\frac{\mathrm{total\: exp.\: uncertainty}}{\mathrm{data}}$} \psfrag{y}[l]{$\!\!\!\!\!\frac{\mathrm{stat.\:\: uncertainty}}{\mathrm{data}}$} \psfrag{z}[l]{$\!\!\!\!\!\frac{\mathrm{fit-data}}{\mathrm{data}}$} \begin{center} \includegraphics[width=0.7\textwidth]{chisboth.eps} \end{center} \vspace*{-0.8cm} \caption{Relative error for best fit to {\sc aleph} data at $91.2$ GeV. 
The inner green band includes only statistical uncertainty, while the outer yellow band includes statistical, systematic and hadronization uncertainties. The solid line is fit to $0.1 < 1-T < 0.24$ giving $\alpha_s(m_Z)=0.1168$ while the dashed line is fit from $0.08 < 1-T < 0.3$ giving $\alpha_s(m_Z) = 0.1171$. The smaller fit range is used for the error analysis because it has been previously studied in~\cite{Dissertori:2007xa}.} \label{fig:chis} \end{figure} Our resummed calculation is valid in a wider range of $\tau$ than the predictions used in \cite{Dissertori:2007xa, Abbiendi:2004qz}, so one could use data closer to the peak, where the statistics are higher and resummation is more important. In a future analysis, the fit range could be optimized to minimize the total error after folding in the proper correlations. In Figure \ref{fig:chis}, we plot the relative statistical and total experimental uncertainty as a function of $\tau$ and compare to the best fit result. We find that the extracted value is fairly insensitive to the fit range. In fact, going from the standard range (solid line) to the larger region (dashed lines) changes the best-fit value of $\alpha_s (m_Z)$ by less than 0.3\%, from 0.1168 to 0.1171. \begin{figure}[t!] \psfrag{y}[]{{\scriptsize $(1-T)$} {\small $\frac{1}{\sigma}\frac{\mathrm{d} \sigma}{\mathrm{d} T}$}} \psfrag{x}[]{\scriptsize $1-T$} \begin{center} \begin{tabular}{ll} \includegraphics[width=0.47\textwidth]{Evary4zoomT} & \includegraphics[width=0.47\textwidth]{Hvary4zoomT} \\[6pt] \includegraphics[width=0.47\textwidth]{Jvary4zoomT} & \includegraphics[width=0.47\textwidth]{Svary4zoomT} \\[6pt] \includegraphics[width=0.47\textwidth]{Cvary4zoomT} & \includegraphics[width=0.47\textwidth]{Bvary4zoomT} \end{tabular} \end{center} \vspace{-0.6cm} \caption{Perturbative uncertainty at $Q=91.2\,{\rm GeV}$. The first four panels show the variation of the matching scale, the hard scale, the jet scale, and the soft scale. 
Each of the scales is varied separately by a factor of two around the default value. The last two panels show the effect of simultaneously varying the jet- and soft scales, see text. The {\sc lep} 1 {\sc aleph} data is included for reference. All plots have $\alpha_s(m_Z)=0.1168$.} \label{fig:MHJSBC} \end{figure} Next, we consider the perturbative theoretical uncertainty. In the effective field theory analysis, four scales appear: the hard scale $\mu_h \sim Q$, the jet scale $\mu_j \sim \sqrt{\tau} Q$, the soft scale $\mu_s \sim \tau Q$, and the scale $\mu_m$ at which the matching corrections are added. In the matching corrections the physics associated with the hard, jet and soft scales has not been factorized, so it is not obvious which value of $\mu_m$ should be chosen. We follow standard fixed-order practice and choose $\mu_m=Q$ as the default value. Our result is independent of these scales to the order of the calculation: the change in the result due to scale variation can thus be used to estimate the size of unknown higher order terms, of $O(\alpha_s^4)$ for our final result. We show the results of varying each of the four scales up and down by a factor of 2 in the first four panels of Figure \ref{fig:MHJSBC}. The results converge nicely, with the dominant uncertainty coming from the soft scale variation. This is expected, as the soft scale probes the lowest energies and therefore the largest values of $\alpha_s$. In fact, it is a critical advantage of the effective theory that the soft scale can be probed explicitly -- the fixed-order calculation has access to only one scale and assuming $\mu\sim Q$ may therefore underestimate the perturbative uncertainty. From the first panel in Figure \ref{fig:MHJSBC} it is clear that the extraction of $\alpha_s$ is almost completely insensitive to the scale at which the fixed order calculation comes in. Again, this is in contrast to a pure fixed-order result. 
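The bookkeeping behind these single-scale variations is simple enough to sketch in a few lines of Python. This is an illustration only: `predict` is a hypothetical stand-in for the full resummed, matched prediction, which is not reproduced here.

```python
import math

def scale_sets(Q, tau):
    """Canonical scales and their independent factor-of-two variations.

    Returns the canonical point first, then each of the four scales
    varied up and down by a factor of two with the others held fixed
    (1 + 4*2 = 9 scale choices in total)."""
    canonical = {"mu_h": Q, "mu_j": math.sqrt(tau) * Q,
                 "mu_s": tau * Q, "mu_m": Q}
    sets = [dict(canonical)]
    for name in canonical:
        for factor in (0.5, 2.0):
            varied = dict(canonical)
            varied[name] = factor * canonical[name]
            sets.append(varied)
    return sets

def envelope(predict, Q, tau):
    """Min/max of a prediction over the nine scale choices; predict is a
    placeholder for the resummed, matched distribution at this tau."""
    values = [predict(tau, **s) for s in scale_sets(Q, tau)]
    return min(values), max(values)
```

Each of the first four panels of the figure corresponds to the up/down pair of a single scale; an envelope over all nine choices would combine them.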
The matching scale variation is so small because the matching correction itself is small, as we saw in Figures \ref{fig:ABC}, \ref{fig:NLOcols}, \ref{fig:NNLOcols}, and \ref{fig:NNLOcolslog}. Figure \ref{fig:MHJSBC} shows the effect of varying the jet and soft scales separately by factors of two: $\frac{1}{2} \sqrt{\tau} Q < \mu_j < 2 \sqrt{\tau} Q$ and $\frac{1}{2} \tau Q < \mu_s < 2 \tau Q$. While a factor of two may seem reasonable for a fixed-order calculation (although as we have already observed, the thrust distribution probes scales $\tau Q \ll Q$), from the effective field theory point of view it makes little sense to vary the soft and jet scales separately. In doing so, one can easily have $\mu_j < \mu_s$ or $\mu_h < \mu_j$, which is completely unphysical. Instead, for the error analysis we will use two coordinated variations. First, a correlated variation holding $\mu_j / \mu_s$ fixed: \begin{equation} \mu_j \rightarrow c \sqrt{\tau} Q, \hspace{1em} \mu_s \rightarrow c \tau Q, \hspace{1em} \text{$\frac{1}{2} < c < 2$}\,. \end{equation} This probes the upper and lower limits on $\mu_j$ and $\mu_s$, but avoids the unphysical region. Second, an anti-correlated variation, holding $\mu_j^2 / (Q \mu_s)$ fixed: \begin{equation} \mu_j^2 \rightarrow a Q^2 \tau, \hspace{1em} \mu_s \rightarrow a Q \tau, \hspace{1em} \frac{1}{\sqrt{2}} < a < \sqrt{2}\,. \end{equation} This is independent of the correlated mode but again avoids having $\mu_j < \mu_s$. The uncertainty resulting from these two variations is shown in the last two panels of Figure \ref{fig:MHJSBC}. \begin{figure} \begin{minipage}[]{.9\textwidth} \begin{center} \epsfig{file=bandplot.eps, scale=1.3} \end{center} \end{minipage} \begin{minipage}[]{.9\textwidth} \begin{center} \epsfig{file=padeerr.eps,scale =1.3} \end{center} \end{minipage} \caption{Uncertainty bands for various scale variations. The band in the first panel is determined entirely by scale variations.
The second panel shows an alternative way of estimating the perturbative uncertainty using an educated guess of the uncalculated higher order coefficients, as described in the text. } \label{fig:uncbands} \end{figure} To estimate the total perturbative uncertainty on the extracted value of $\alpha_s$, we use the uncertainty band technique proposed in \cite{Jones:2003yv} and adopted both by {\sc{aleph}} \cite{Heister:2003aj} and {\sc{opal}} \cite{Abbiendi:2004qz} as well as in the recent fit of NNLO results to {\sc{aleph}} data \cite{Dissertori:2007xa}. The result is shown in Figure~\ref{fig:uncbands}. In short, the theoretical uncertainty is determined as follows: one first calculates $\alpha_s(m_Z)$ using a least-squares fit to the data with all scales at their canonical values and without including any theoretical uncertainty in the $\chi^2$-function. Then each scale is varied separately, holding $\alpha_s (m_Z)$ fixed to its best-fit value. These produce the curves in Figure \ref{fig:uncbands}. Next, the uncertainty band, the yellow region in Figure \ref{fig:uncbands}, is defined as the envelope of all these variations. Finally, the scales are returned to their canonical values, and the maximal and minimal values of $\alpha_s$ are determined which allow the prediction to remain within the uncertainty band. An important feature of this approach is that the data enters only in the determination of the best fit $\alpha_s$ and the fit region; the perturbative uncertainty is determined purely from within the theoretical calculation. Separating the theoretical and experimental errors in this way makes it much easier to average $\alpha_s$ results obtained from different data sets, since they suffer from the same theoretical uncertainty. \begin{table}[t!] 
\begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Q & 91.2 & 133 & 161 & 172 & 183 & 189 & 200 & 206 & AVG\\ \hline \hline \multirow{2}{*}{fit range} & 0.1 & 0.06 & 0.06& 0.04 & 0.06 & 0.04 & 0.04 & 0.04 & \multirow{2}{*}{--}\\ & 0.24 & 0.25 & 0.25 & 0.2 & 0.25 &0.2 & 0.2 & 0.2 & \\ \hline $\chi^2$/d.o.f.\ & 32.5/13 & 7.7/4 & 3.3/4 & 10.3/4 & 3.6/4 & 0.9/4 & 24.6/4 & 4.0/4 & -- \\ stat.\ err.\ & 0.0001 & 0.0037 & 0.0070 & 0.0080 & 0.0043 & 0.0022 & 0.0023 & 0.0023 & 0.0010 \\ syst.\ err.\ & 0.0008 & 0.0010 & 0.0010 & 0.0010 & 0.0011 & 0.0010 & 0.0010 & 0.0010 & 0.0010\\ hadr.\ err.\ & 0.0019 & 0.0014 & 0.0012 & 0.0012 & 0.0011 & 0.0011 & 0.0010 & 0.0010 & 0.0012\\ pert.\ err.\ & $^{+0.0013}_{-0.0017}$ & $^{+0.0012}_{-0.0016}$ & $^{+0.0015}_{-0.0020}$ & $^{+0.0006}_{-0.0009}$ & $^{+0.0010}_{-0.0013}$ & $^{+0.0011}_{-0.0015}$ & $^{+0.0010}_{-0.0014}$ & $^{+0.0009}_{-0.0012}$ & 0.0012 \\ tot.\ err.\ & 0.0026 & 0.0043 & 0.0074 & 0.0082 & 0.0047 & 0.0030 & 0.0030 & 0.0029 & 0.0022 \\ \hline (Pad\'e $\times$ 2) & 0.0004 & 0.0004 & 0.0003 & 0.0004 & 0.0004 & 0.0004 & 0.0004 & 0.0003 & -- \\ \hline {$\alpha_s(m_Z)$} & 0.1168 & 0.1183 & 0.1263 & 0.1059 & 0.1160 & 0.1203 & 0.1175 & 0.1140 &0.1168 \\ \hline \hline {\sc pythia} & 0.1152 & 0.1164 & 0.1248 & 0.1028 & 0.1146 & 0.1177 & 0.1151 & 0.1119 & 0.1146 \\ {\sc ariadne} & 0.1169 & 0.1181 & 0.1264 & 0.1047 & 0.1164 & 0.1197 & 0.1170 & 0.1135 & 0.1164 \\ \hline \end{tabular} \caption{Best fit to {\sc aleph} data. \label{tab:alephresults} The row labeled (Pad\'e $\times$ 2) is an alternative measure of perturbative uncertainty as described in the text. It is not combined into the total error. The rows labeled {\sc pythia} and {\sc ariadne} give the value of $\alpha_s$ after correcting for hadronization and quark masses using {\sc pythia} or {\sc ariadne}.} \end{center} \end{table} The purpose of scale variations is to estimate the effect that a higher order perturbative calculation would have on a distribution. 
This is justified by arguing that any scale variation can be compensated by terms at one order higher in $\alpha_s$, so it should give a reasonable estimate of these higher order terms. However, as we have seen, the amount by which we vary the scales is arbitrary, and the traditional factor of 2 in the variation is both problematic for the jet and soft scales and seems to overestimate the uncertainty. The distribution determined by the effective theory at one higher order depends on only a handful of numbers: the beta-function coefficient $\beta_4$, the anomalous dimensions $\Gamma_4, \gamma^H_3, \gamma^J_3$ and the constants in the hard, jet and soft functions, $c^H_3, c^J_3, c^S_3$, see Appendix \ref{sec:anomalous} for the subscript conventions. Thus in the effective field theory, there is a straightforward way to estimate the effect of higher orders: one simply varies these coefficients. For example, we can estimate their size using a Pad\'e approximation: $\Gamma_{n+1}=\pm c \frac{\Gamma_n^2}{\Gamma_{n-1}}$. This should reasonably span likely values for what a higher order perturbative calculation would provide. We show the variations corresponding to $c=2$ and $c=5$ in the bottom panel of Figure~\ref{fig:uncbands}, which are labeled Pad\'e $\times$ 2 and Pad\'e $\times$ 5, respectively. In each case we scan over the signs for the various coefficients to find the largest variations. Even for $c=2$, the fifth-order coefficients come out quite large, for example, $\Gamma_4 \approx \pm 2 \times 10^4$ and $\beta_4 \approx \pm 3\times 10^5$. Nevertheless, the uncertainty is still significantly smaller than what we found using scale variation. We find that Pad\'e $\times$ 2 gives $\delta \alpha_s(m_Z) \sim 0.0003$, in contrast to errors around $\delta \alpha_s(m_Z)\sim 0.0012$ from the scale variations. Although the higher order constants are unknown, one might try to estimate them in more sophisticated ways, for example, by computing the dominant diagrams.
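The Pad\'e-style extrapolation $\Gamma_{n+1}=\pm c\,\Gamma_n^2/\Gamma_{n-1}$ amounts to a one-line estimate. A minimal sketch follows; the coefficient values in it are invented for illustration and are not the actual anomalous-dimension coefficients.

```python
def pade_span(series, c=2.0):
    """Span (-|est|, +|est|) for the next coefficient of a perturbative
    series Gamma_0, ..., Gamma_n via Gamma_{n+1} ~ +/- c * Gamma_n**2 / Gamma_{n-1}.
    The sign is unknown, so both signs are scanned in the error analysis."""
    if len(series) < 2:
        raise ValueError("need at least two known coefficients")
    est = abs(c * series[-1] ** 2 / series[-2])
    return (-est, est)

# Illustrative only: made-up coefficients with roughly geometric growth,
# NOT the true anomalous-dimension values.
gammas = [4.0, 40.0, 450.0]
lo, hi = pade_span(gammas, c=2.0)
```

In the analysis described above, one such span is produced for each unknown constant ($\beta_4$, $\Gamma_4$, $\gamma^H_3$, \dots) and the signs are scanned to find the largest variation of the distribution.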
In the end, we will not use this new method for the final error estimates, but we present the resulting uncertainties in Tables \ref{tab:alephresults} and \ref{tab:opalresults} for completeness. They include a scale variation in the matching correction because this is independent of the resummed distribution. For each of the energies in the {\sc{aleph}} and {\sc{opal}} data sets, we perform a least-squares fit using the experimental statistical errors. The statistical uncertainty on $\alpha_s$ is calculated from the variation in $\chi^2$, the perturbative uncertainty is calculated using the uncertainty band (with scale variations), and systematic uncertainties are taken from~\cite{Dissertori:2007xa} and~\cite{Abbiendi:2004qz}, as discussed above. We include the non-perturbative hadronization uncertainties from these papers, but do not include the corresponding hadronization corrections. We will discuss hadronization and other power-suppressed effects in detail in the next section. The fit results are given in Tables~\ref{tab:alephresults}~and~\ref{tab:opalresults}. \begin{table}[t!] \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Q & 91.2 & 133 & 177 & 197 & AVG\\ \hline \hline fit range & 0.05-0.3 & 0.05-0.3 & 0.05-0.3 & 0.05-0.3 & --\\ \hline $\chi^2$/d.o.f. & 149.9/5 & 17.0/5 & 1.7/5 & 18.3/5 & -- \\ stat.\ err.& 0.0001 & 0.0038 & 0.0033 & 0.0014 & 0.0014\\ syst.\ err. & 0.0011 & 0.0054 & 0.0028 & 0.0013 & 0.0013 \\ hadr.\ err. & 0.0031 & 0.0024 & 0.0021 & 0.0019 & 0.0019 \\ pert.\ err. & $^{+0.0014}_{-0.0018}$ & $^{+0.0011}_{-0.0015}$ & $^{+0.0009}_{-0.0013}$ & $^{+0.0011}_{-0.0014}$ & 0.0013 \\ tot.\ err. 
& 0.0037 & 0.0072 & 0.0049 & 0.0030 & 0.0030\\ \hline (Pad\'e $\times$ 2) & 0.0004 & 0.0003 & 0.0003 & 0.0003 & -- \\ \hline $\alpha_s(m_Z)$ & 0.1189 & 0.1165 & 0.1153 & 0.1189 & 0.1189 \\ \hline \hline {\sc pythia}& 0.1143 & 0.1142 & 0.1134 & 0.1173 & 0.1173 \\ {\sc ariadne} & 0.1163 & 0.1160 & 0.1151 & 0.1189 & 0.1189 \\ \hline \end{tabular} \caption{Best fit to {\sc opal} data. \label{tab:opalresults}} \end{center} \end{table} To combine the results from different energies, we compute a weighted average, $\bar\alpha_s = \sum_i w_i \alpha_s^{(i)}$. The weights $w_i$ are determined by minimizing the uncertainty $\bar \sigma^2 =\sum_{ij} w_i\, w_j \,{\rm cov}(i,j)$. Since we do not know the exact correlations, we set \begin{equation} {\rm cov}(i,j) = \left(\sigma_{\rm stat}^{(i)}\right)^2 \delta_{i,j} + \sigma_{\rm sys}^{(i)} \sigma_{\rm sys}^{(j)} + \sigma_{\rm hadr}^{(i)} \sigma_{\rm hadr}^{(j)} +\sigma_{\rm pert}^{(i)} \sigma_{\rm pert}^{(j)} \,. \end{equation} That is, we assume uncorrelated statistical errors and 100\% correlation for the systematic, hadronic and perturbative uncertainties at different energies. Because the correlated uncertainties are dominant, naively minimizing the uncertainty can in some cases lead to negative weights. This happens when combining the {\sc opal} results in the above way. We eliminate these solutions by imposing $w_i>0$, after which the best value from {\sc opal} is obtained by assigning 100\% weight to the highest-energy measurement, which has the smallest systematic uncertainty. The result obtained after combining the {\sc{aleph}} and {\sc{opal}} results individually is given in the last column of Tables \ref{tab:alephresults} and \ref{tab:opalresults}. Finally, we combine the {\sc{aleph}} and {\sc{opal}} results into an overall average.
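The minimum-variance combination described above can be made concrete in a short, self-contained sketch (plain Python with a small Gauss-Jordan solver; the fallback branch mimics the $w_i>0$ prescription used for the {\sc opal} combination). In practice the inputs would be the per-energy uncertainties of Tables \ref{tab:alephresults} and \ref{tab:opalresults}.

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def combine(stat, syst, hadr, pert):
    """Minimum-variance weights for results with uncorrelated statistical
    errors and fully correlated systematic/hadronization/perturbative
    errors.  If the unconstrained minimiser produces a negative weight,
    fall back to 100% weight on the smallest-systematics measurement."""
    n = len(stat)
    cov = [[(stat[i] ** 2 if i == j else 0.0)
            + syst[i] * syst[j] + hadr[i] * hadr[j] + pert[i] * pert[j]
            for j in range(n)] for i in range(n)]
    x = solve(cov, [1.0] * n)            # unconstrained minimiser ~ C^{-1} 1
    w = [xi / sum(x) for xi in x]
    if min(w) < 0.0:                     # impose w_i > 0
        best = min(range(n), key=lambda i: syst[i])
        w = [1.0 if i == best else 0.0 for i in range(n)]
    return w
```

The combined uncertainty is then $\bar\sigma^2 = \sum_{ij} w_i w_j\,{\rm cov}(i,j)$, evaluated with the returned weights.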
In this case, we assume that the systematic uncertainties are completely correlated between the individual energy results from each experiment, but neglect correlations between the systematic uncertainties of the two experiments. For the hadronization and perturbative error, we assume 100\% correlation. Proceeding in this way, we find \begin{align} \alpha_s (m_Z) &= 0.1172 \pm 0.0010 (\mathrm{stat}) \pm 0.0008 (\mathrm{sys}) \pm 0.0012 (\mathrm{had}) \pm 0.0012 (\mathrm{pert}) \nonumber \\ & = 0.1172 \pm 0.0022\,. \end{align} This result is close to the PDG world average $\alpha_s (m_Z) = 0.1176 \pm 0.0020$ and has similar uncertainties. Our calculation does not include hadronization corrections and neglects quark masses. If we estimate their effect using {\sc{pythia}}, the central value shifts to $\alpha_s (m_Z) = 0.1150$, while correcting with {\sc{ariadne}} gives $\alpha_s (m_Z) = 0.1168$. The difference we find between {\sc{pythia}} and {\sc{ariadne}} is larger than the hadronization uncertainty in our average, which is based on {\sc{aleph}} and {\sc{opal}} studies. Correcting our higher order perturbative result with a tuned leading-order Monte Carlo shower is problematic, so this difference should be interpreted with caution. Various issues associated with hadronization corrections will be discussed in detail in the next section. \begin{figure}[t!] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.48\textwidth]{alephconverge} & \includegraphics[width=0.48\textwidth]{opalconverge} \end{tabular} \end{center} \vspace{-0.8cm} \caption{Best fit values for $\alpha_s(m_Z)$. From right to left the lines are the total error bars at each energy for first order, second order, third order and fourth order, as defined in the text. The bands are weighted averages with errors combined from all energies.} \label{fig:fitresults} \end{figure} It is interesting to repeat the fit order by order.
This is done in Table \ref{tab:fitorder} and displayed graphically in Figure~\ref{fig:fitresults}. The figure shows that the results found at different energies are consistent and illustrates the reduction of the uncertainty when including higher order terms. \begin{table}[t!] \begin{center} \begin{tabular}{|c|c|c|c||c|c|c|} \hline \multicolumn{7}{|c|} {\sc{aleph}} \\ \hline & \multicolumn{3}{|c||}{{\sc lep} 1 + {\sc lep} 2 } & \multicolumn{3}{c|}{{\sc lep} 1 } \\ \hline order & $\alpha_s$ & tot.\ err. & pert.\ err. & $\alpha_s$ & tot.\ err. & pert.\ err. \\ \hline {1$^{\mathrm{st}}$} order & 0.1142 & 0.0297 & 0.0296 & 0.1142 & 0.0297 & 0.0296 \\ {2$^{\mathrm{nd}}$} order & 0.1152 & 0.0068 & 0.0064 & 0.1166 & 0.0071 & 0.0068 \\ {3$^{\mathrm{rd}}$} order & 0.1164 & 0.0033 & 0.0027 & 0.1166 & 0.0037 & 0.0031 \\ {4$^{\mathrm{th}}$} order & 0.1168 & 0.0022 & 0.0012 & 0.1168 & 0.0026 & 0.0015 \\ \hline \multicolumn{7}{c}{} \\ \hline \multicolumn{7}{|c|} {\sc{opal}} \\ \hline & \multicolumn{3}{|c||}{{\sc lep} 1 + {\sc lep} 2 } & \multicolumn{3}{c|}{{\sc lep} 1 } \\ \hline order & $\alpha_s$ & tot.\ err. & pert.\ err. & $\alpha_s$ & tot.\ err. & pert.\ err. \\ \hline {1$^{\mathrm{st}}$} order & 0.1190 & 0.0305 & 0.0304 & 0.1190 & 0.0305 & 0.0304 \\ {2$^{\mathrm{nd}}$} order & 0.1198 & 0.0076 & 0.0070 & 0.1205 & 0.0081 & 0.0074 \\ {3$^{\mathrm{rd}}$} order & 0.1194 & 0.0040 & 0.0029 & 0.1194 & 0.0047 & 0.0034 \\ {4$^{\mathrm{th}}$} order & 0.1189 & 0.0030 & 0.0013 & 0.1189 & 0.0037 & 0.0016 \\ \hline \end{tabular} \caption{Best fit values and uncertainties at different orders, as defined in Table \ref{tab:ords}.} \label{tab:fitorder} \end{center} \end{table} \section{Non-perturbative effects and quark mass corrections \label{sec:NP}} Let us now turn to two power-suppressed effects which we have so far neglected in our analysis. The first is hadronization: the effective theory calculation corresponds to a parton-level distribution, while the experiment measures hadrons.
Second, we have neglected quark masses in our calculation. Because thrust is an infrared-safe observable, both corrections are expected to be small; however, they may not be negligible. Most of the previous determinations of $\alpha_s$ have used Monte Carlo event generators to correct the parton-level predictions for hadronization effects and estimate the hadronic uncertainty by comparing the output of different generators. In particular, {\sc{aleph}} \cite{Heister:2003aj}, {\sc{opal}} \cite{Abbiendi:2004qz} and the recent NNLO analysis \cite{Dissertori:2007xa} all use {\sc{pythia}} to obtain their default hadronization corrections and then compare to {\sc{herwig}} and {\sc{ariadne}} to obtain the associated uncertainty. It turns out that the largest differences generally occur between {\sc pythia} and {\sc ariadne} \cite{Abbiendi:2004qz}, even though {\sc ariadne} uses {\sc pythia} to calculate hadronization. We include in Tables~\ref{tab:alephresults}~and~\ref{tab:opalresults} the best fit values of $\alpha_s$ obtained after correcting the data bin-by-bin for hadronization and $b$- and $c$-quark masses using {\sc pythia} {\tt v.6.409}~\cite{Sjostrand:2006za}, with default parameters, and with {\sc ariadne} {\tt v.4.12}~\cite{Lonnblad:1992tz}, using the {\sc aleph} tune. Correcting with {\sc ariadne} has quite a small effect on the values of $\alpha_s$. Moreover, {\sc ariadne} always gives a larger value of $\alpha_s$ than {\sc pythia}. If the central values are taken after the {\sc ariadne} corrections, they agree quite closely with a fit to the parton-level distributions, that is, without any hadronization. In addition, we used the new {\sc sherpa} dipole shower \cite{Winter:2007ye} for hadronization and find results similar to {\sc ariadne}. Relying on the Monte-Carlo generators for hadronization is clearly not ideal, since they have been tuned to the same {\sc lep} data we are trying to reproduce!
The situation is especially problematic when trying to correct our resummed distribution. The Monte Carlo generators are all based on the parton-shower approximation, which only sums the leading Sudakov double logarithms and part of the next-to-leading logarithms. In contrast, our distribution is correct to N$^3$LL and to NNLO in fixed-order perturbation theory. By tuning to data, part of the missing higher order perturbative corrections gets absorbed into the hadronization model. An obvious way to avoid this problem would be to include the higher order corrections into the Monte Carlo codes, but needless to say, no such generator yet exists (although, see~\cite{Bauer:2006mk,Bauer:2006qp} for an approach to improving generators based on the same effective field theory ideas we are using). As shown in Figure \ref{fig:pythia}, {\sc{pythia}} agrees with the {\sc{aleph}} data better than our 4th-order resummed and matched theoretical calculation. How is this possible in a leading-log shower with leading-order matrix elements? The answer is that part of what is being tuned to data in the Monte Carlo program is not just the hadronization model but also some kind of unfaithful imitation of subleading-log resummation. This is demonstrated in Figure \ref{fig:pythia}, where {\sc{pythia}} is run at the parton and hadron level and compared to the 1st-order and 4th-order resummed matched distributions in the effective field theory. Even at the parton level, {\sc{pythia}} agrees better with the 4th-order result than with the 1st-order one. Moreover, the hadronization corrections provide something like a shift in the distribution, but cannot explain the structure of the peak region, which really should be determined by subleading-order resummation. To demonstrate the danger of trusting a tuned Monte Carlo generator, we run the same event generator at $Q = 1$ TeV, and compare again to the theoretical calculations, see Figure \ref{fig:pythia}.
Now {\sc{pythia}} looks like the leading-order event generator that it is, and the hadronization corrections are small, but {\sc pythia} undershoots the more accurate 4th-order theoretical prediction. At high energies this difference is harder to absorb into non-perturbative effects, precisely because the hadronization corrections are small. One consequence is that these Monte Carlo generators may be underestimating backgrounds at an ILC by 30\%, and perhaps by a similar magnitude at the LHC as well. An alternative to correcting the theoretical distribution with a Monte-Carlo transfer matrix is to explicitly include a theoretical model of non-perturbative corrections and then use data to determine its parameters. The non-perturbative effects are suppressed by the center-of-mass energy and will scale as a power of $\Lambda_{\mathrm{NP}}/Q$, with $\Lambda_{\mathrm{NP}}\sim 1$ GeV a scale characteristic of strong-interaction effects. The effective theory analysis shows that since scales lower than $Q$ appear in the perturbative expansion, there will in fact be power corrections suppressed by the lowest scale, in this case the soft scale $\tau Q$, which go as a power of $\Lambda_{\rm NP}/(\tau Q)$. For completely inclusive processes, first-order power corrections are absent, but one should not expect the leading power to be absent for thrust. \begin{figure} \psfrag{x}[]{\small $\alpha_s(m_Z)$} \psfrag{y}[]{\small $\Lambda_{\mathrm{NP}}$ (GeV)} \begin{center} \includegraphics[width=0.8\textwidth]{contsflip.eps} \end{center} \vspace{-0.4cm} \caption{Contours at 95\% confidence level for a fit to the {\sc opal} data of $\alpha_s$ and a non-perturbative shift parameter $\Lambda_{\rm{NP}}$.} \label{fig:NPconts} \end{figure} The non-perturbative effects will be most important in the soft region for small $\tau$.
The corrections can be parameterized by a non-perturbative shape function which is convolved with the perturbative soft function \cite{Korchemsky:1998ev,Korchemsky:1999kt} \begin{equation} S (k, \mu) \rightarrow \int \mathrm{d} k' S (k - k', \mu) S_{\mathrm{NP}} (k', \mu)\,. \end{equation} One can then parametrize $S_{\mathrm{NP}}(k)$ with a few-parameter family of distributions \cite{Korchemsky:2000kp}. For example, a common model is $S_{\mathrm{NP}} (k) = \delta (k - \Lambda_{\mathrm{NP}})$, which leads to an overall shift in the thrust distribution. Figure \ref{fig:NPconts} shows the result of a simultaneous fit to $\Lambda_{\mathrm{NP}}$ and $\alpha_s$ for the {\sc opal} data. From this rough analysis one can see that the fit to {\sc lep} data has trouble distinguishing the effect of raising the shift parameter from that of increasing the coupling -- both variations increase the theoretical prediction in all bins where the fit to data is performed. Much of the evidence for a shift in event shape distributions has come from comparing data with calculations done at NLO or with resummation at NLL \cite{MovillaFernandez:2001ed}. It would be very interesting to reconsider these analyses including information from NNLO and with N$^3$LL resummation. To extract the soft shape function, a detailed analysis including lower-energy data should be performed. At lower energies the effect of the power corrections will be more pronounced, so that the parameters of the shape function can be determined and then used in the extraction of $\alpha_s$ from higher energy data. The high-statistics {\sc{jade}} data with energies from $Q=22$--$44$ GeV might be particularly suitable for such an analysis \cite{MovillaFernandez:1997fr}. In our Monte-Carlo studies, we find that quark-mass effects at {\sc lep} 1 are of order 1\%. They tend to increase $\alpha_s$, while hadronization effects lower the central value.
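The convolution and the one-parameter shift model above can be illustrated with a toy discretization (a sketch only, not the analysis performed in the text): with $S_{\mathrm{NP}}(k)=\delta(k-\Lambda_{\mathrm{NP}})$ the convolution reduces to a translation of the soft-function argument.

```python
def convolve_shift(S, Lambda_NP):
    """Delta-function model: S_NP(k) = delta(k - Lambda_NP) turns the
    convolution into a shift, S(k) -> S(k - Lambda_NP)."""
    return lambda k: S(k - Lambda_NP)

def convolve_grid(S_vals, SNP_vals, dk):
    """Discretized convolution int dk' S(k - k') S_NP(k') on a uniform
    grid of spacing dk; both input lists start at k = 0."""
    out = []
    for i in range(len(S_vals)):
        acc = 0.0
        for j in range(i + 1):
            acc += S_vals[i - j] * SNP_vals[j] * dk
        out.append(acc)
    return out
```

A narrow $S_{\mathrm{NP}}$ peaked at a single grid point reproduces the shifted soft function bin by bin.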
In fixed-order perturbation theory, the quark-mass effects have been evaluated at NLO~\cite{Bernreuther:1997jn,Brandenburg:1997pu,Nason:1997nw,Rodrigo:1997gy}. Using the factorization theorem for the production of massive quark jets \cite{Fleming:2007qr} and the recent two-loop result for the massive jet-function \cite{Jain:2008gb}, it would be possible to perform the resummation also for the $b$-quark contribution. Since the quark mass corrections are small in the region where we extract $\alpha_s$, a fixed-order treatment might be sufficient. Additional issues involved in matching the perturbative soft and non-perturbative shape functions were discussed recently in~\cite{Hoang:2007vb}. Since neither Monte-Carlo hadronization corrections nor a simple non-perturbative shift model are satisfactory, we conclude that the best option at this point is to fit the parton-level distribution. To estimate the hadronization uncertainties, we simply take the errors from previous studies of the {\sc{aleph}} and {\sc{opal}} data. Numerically this is essentially equivalent to using {\sc ariadne} to calculate the hadronization and quark-mass corrections and the difference to {\sc pythia} as an estimate of the resulting uncertainty, as can be seen in Tables~\ref{tab:alephresults}~and~\ref{tab:opalresults}. With the increased perturbative precision of our result, it would be important to get better control over hadronization effects and to have a more reliable way to assess the associated uncertainty. As we discussed above, this can be achieved with a dedicated shape-function analysis involving also lower-energy data. \begin{figure}[t!]
\begin{center} \begin{tabular}{cc} \includegraphics[width=0.48\textwidth]{scetvspyth92G} & \includegraphics[width=0.48\textwidth]{scetvspyth1T} \end{tabular} \end{center} \vspace{-0.6cm} \caption{Comparison between theoretical predictions in effective field theory at first order and fourth order, as defined in Table \ref{tab:ords}, and {\sc{pythia}} at the parton and hadron level. {\sc{aleph}} data is included in the first panel.} \label{fig:pythia} \end{figure} \section{Conclusions} We have resummed the leading logarithmic corrections to the thrust distribution to N$^3$LL. Our calculation is based on an all-order factorization theorem for the thrust distribution in the two-jet region $T\to 1$. The traditional method for resummation of event shapes is limited to NLL. The present paper goes beyond this not only by one but by two orders in logarithmic accuracy. The factorization theorem, obtained using Soft-Collinear Effective Theory, separates the contributions associated with different energy scales in a transparent way. Those associated with higher energy scales are absorbed into Wilson coefficients. Solving the renormalization-group equations resums large perturbative logarithms of scale ratios. An advantage of the effective theory treatment is that the factorization theorem is derived at the operator level. The different building blocks in the factorization theorem are given by operator matrix elements and appear in a variety of other processes. With the exception of the two-loop constant in the soft function, all the necessary ingredients of the factorization theorem were known to N$^3$LL accuracy from resummations of other processes. We have determined the missing two-loop constant numerically using effective field theory and an NLO fixed-order event generator.
Comparing to fixed-order results, we found that the logarithmically enhanced pieces, determined by a few constants in the effective theory, account for the bulk of the fixed-order results, even away from the endpoint $T\to 1$. Of particular interest is the comparison at NNLO. The necessary fixed-order calculation has been completed only recently and has so far not been independently checked. The close agreement with the logarithmic contributions we derive provides a non-trivial check on both calculations. Once matched to the full fixed-order result, our result is valid not only to N$^3$LL accuracy, but also to NNLO in fixed-order perturbation theory. Matching improves our result away from the endpoint region, but numerically the matching corrections are small, in particular at NNLO. Our result is the most precise calculation of an event shape to date, and we have used it to perform a precision determination of $\alpha_s$ using {\sc{aleph}} and {\sc{opal}} data. Our final combined result is \begin{align} \alpha_s (m_Z) &= 0.1172 \pm 0.0010 (\mathrm{stat}) \pm 0.0008 (\mathrm{sys}) \pm 0.0012 (\mathrm{had}) \pm 0.0012 (\mathrm{pert}) \nonumber \\ & = 0.1172 \pm 0.0022 \nonumber\,. \end{align} Unfortunately, we had to combine different data sets with the conservative assumption that systematic errors are completely correlated. We also had to use the fit regions selected by the experiments, which are not optimized for our higher order calculation. An improved error analysis would involve information from the collaborations about correlations which is not publicly available. With the resummed calculation, the perturbative uncertainty is finally smaller than the other uncertainties at each energy, in contrast to earlier results where the perturbative error was dominant. With the reduction in perturbative uncertainty, the hadronization error has become a relatively large contribution to the total error.
To reduce it, one could parameterize the non-perturbative effects with a shape function, and then extract this shape function from data at {\sc lep} and lower energy experiments. In addition, one could account explicitly for quark mass effects which should help reduce the systematic errors. Even though the perturbative error is greatly reduced by including resummation, the technique used to estimate this error may be unduly conservative. We have followed the standard procedure and used a collection of scale variations to estimate terms higher order in $\alpha_s$. An alternative method, which we have suggested here, is to attempt a more sophisticated guess at the effects that a higher order calculation might have. At one higher order the resummed distribution is known up to a handful of numbers, such as higher-loop anomalous dimensions. So we can extrapolate an approximation to these numbers and use that directly. This procedure results in smaller and perhaps more realistic perturbative errors, although we have not used the errors derived this way for the final results. The effective theory can also be used to study other event shapes. For example, heavy- and light-jet masses can be obtained with minimal modifications from the formulae given here~\cite{Schwartz:2007ib,Fleming:2007qr}. These observables involve the same hard and jet functions as (\ref{scetfact}) and the necessary soft function can be determined in the same way as we did here for thrust. The factorization theorem for a wider class of event shapes, including jet-broadening and the $C$-parameter was derived recently \cite{Bauer:2008dt}. Its form is the same as (\ref{scetfact}), except that it involves different jet-functions which depend on the variable under consideration. To reach the same accuracy we have achieved here, additional perturbative calculations will thus be necessary. We could also try to use the same techniques to calculate precision observables in a hadronic environment. 
Many of the necessary ingredients have already been understood from the threshold resummation for inclusive processes such as deep-inelastic scattering and Drell-Yan production. Despite the complication of hadronic initial states, a precision calculation of jet-observables relevant for the LHC seems feasible. Considering the discrepancy we found between {\sc pythia} and the fourth-order effective theory prediction for the thrust distribution at 1 TeV (see Figure~\ref{fig:pythia}), having a systematically improvable way to perform resummations might be vital for the LHC. In addition, given the size of the logarithmic corrections found here, it is likely that many fixed-order calculations can be improved using methods of effective field theory. \subsection*{Acknowledgements} The authors would like to thank Keith Ellis, Thomas Gehrmann, Carlo Oleari, Hasko Stenzel, Morris Swartz, Jan Winter and Giulia Zanderighi. M.D.S.\ is supported in part by the National Science Foundation under grant NSF-PHY-0401513 and by the Johns Hopkins Theoretical Interdisciplinary Physics and Astronomy Center. The research of T.B.\ was supported by the Department of Energy under Grant DE-AC02-76CH03000. Fermilab is operated by the Fermi Research Alliance under contract with the U.S.\ Department of Energy. 
\begin{appendix} \section{Anomalous dimensions\label{sec:anomalous}} The QCD beta-function satisfies \begin{align}\label{eqbeta} \frac{\mathrm{d}\alpha_s(\mu)}{\mathrm{d}\ln \mu} &= \beta (\alpha_s(\mu)) \,,\\ \beta (\alpha) &=- 2 \alpha\, \left[ \left( \frac{\alpha}{4 \pi} \right) \beta_0 + \left( \frac{\alpha}{4 \pi} \right)^2 \beta_1 + \left( \frac{\alpha}{4 \pi} \right)^3 \beta_2 + \cdots \right]\,, \end{align} with \begin{align} \beta_0 &=\frac{11}{3} C_A - \frac{4}{3} T_F n_f \,, \nonumber \\ \beta_1 &= \frac{34}{3} C_A^2 - \frac{20}{3} C_A T_F n_f - 4 C_F T_F n_f \,,\\ \beta_2 &= \frac{325}{54} n_f^2 - \frac{5033}{18} n_f + \frac{2857}{2}\,, \nonumber \\ \beta_3 &= \frac{1093}{729} n_f^3 +\left(\frac{50065}{162} + \frac{6472}{81}\zeta_3\right) n_f^2 +\left(-\frac{1078361}{162} - \frac{6508}{27} \zeta_3 \right) n_f + 3564 \zeta_3 + \frac{149753}{6} \nonumber\,, \end{align} where we have written $\beta_0$ and $\beta_1$ in terms of the Casimir invariants $C_F=\frac{4}{3}$, $C_A=3$ and $T_F=\frac{1}{2}$, but have evaluated $\beta_2$ and $\beta_3$ for $N=3$ colors. The RG equation (\ref{eqbeta}) has a solution in terms of $L = \ln \frac{\mu^2}{\Lambda^2}$ \begin{multline} \alpha_s (\mu) = \frac{4 \pi}{\beta_0} \left[ \frac{1}{L} - \frac{\beta_1}{\beta_0^2 L^2} \ln L + \frac{\beta_1^2}{\beta_0^4 L^3} (\ln^2 L - \ln L - 1) + \frac{\beta_2}{\beta_0^3 L^3} \right. \\ \left. + \frac{\beta_1^3}{\beta_0^6 L^4} \left( - \ln^3 L + \frac{5}{2} \ln^2 L + 2 \ln L - \frac{1}{2} \right) - 3 \frac{\beta_1 \beta_2}{\beta_0^5 L^4} \ln L + \frac{\beta_3}{2 \beta_0^4 L^4} \frac{}{} \right]\,. 
\end{multline} It is sometimes also useful to work with the perturbative expansion of $\alpha_s (\mu)$ in terms of $\alpha_s$ at a fixed renormalization scale, $\mu_R$: \begin{multline} \alpha_s (\mu) = \alpha_s(\mu_R) - \frac{\alpha_s^2(\mu_R)}{2 \pi} \beta_0 \ln \frac{\mu}{\mu_R} + \frac{\alpha_s^3(\mu_R)}{8 \pi^2} \left( - \beta_1 \ln \frac{\mu}{\mu_R} + 2 \beta_0^2 \ln^2 \frac{\mu}{\mu_R} \right) \\ + \frac{\alpha_s^4(\mu_R)}{32 \pi^3} \left( - \beta_2 \ln \frac{\mu}{\mu_R} + 5 \beta_0 \beta_1 \ln^2 \frac{\mu}{\mu_R} - 4 \beta_0^3 \ln^3 \frac{\mu}{\mu_R} \right) + \cdots \,. \end{multline} We write the perturbative expansion of the anomalous dimensions as \begin{eqnarray} \Gamma_{\mathrm{cusp}} (\alpha) &=& \left( \frac{\alpha}{4 \pi} \right) \Gamma_0 + \left( \frac{\alpha}{4 \pi} \right)^2 \Gamma_1 + \left( \frac{\alpha}{4 \pi} \right)^3 \Gamma_2 + \cdots \,,\nonumber\\ \gamma_H (\alpha) &=& \left( \frac{\alpha}{4 \pi} \right) \gamma_0^H + \left( \frac{\alpha}{4 \pi} \right)^2 \gamma_1^H + \left( \frac{\alpha}{4 \pi} \right)^3 \gamma_2^H + \cdots\,, \nonumber\\ \gamma_J (\alpha) &=& \left( \frac{\alpha}{4 \pi} \right) \gamma_0^J + \left( \frac{\alpha}{4 \pi} \right)^2 \gamma_1^J + \left( \frac{\alpha}{4 \pi} \right)^3 \gamma_2^J + \cdots \,,\nonumber\\ \gamma^S(\alpha) &=& \gamma^H(\alpha)-2\gamma^J(\alpha) \,. \end{eqnarray} The exact anomalous dimensions are known to order $\alpha_s^3$. The anomalous dimensions for the hard function are \begin{align} \gamma_0^H &= - 6 C_F\,, \nonumber \\ \gamma_1^H &= C_F^2 (- 3 + 4 \pi^2 - 48 \zeta_3) + C_F C_A \left( - \frac{961}{27} - \frac{11 \pi^2}{3} + 52 \zeta_3 \right) + C_F T_F n_f \left( \frac{260}{27} + \frac{4 \pi^2}{3} \right) \,, \nonumber \\ \gamma_2^H &= - 1.856 n_f^2 + 259.3 n_f - 1499 \,.
\end{align} For the jet function \begin{align} \gamma_0^J &= - 3 C_F \,, \nonumber \\ \gamma_1^J &= C_F^2 \left(- \frac{3}{2} + 2 \pi^2 - 24 \zeta_3\right) + C_F C_A \left( - \frac{1769}{54} - \frac{11 \pi^2}{9} + 40 \zeta_3 \right) + C_F T_F n_f \left( \frac{242}{27} + \frac{4 \pi^2}{9} \right)\,, \nonumber \\ \gamma_2^J &= - 0.7255 n_f^2 + 85.35 n_f - 203.8\, . \end{align} For the soft function \begin{align} \gamma_0^S &= \gamma_0^H - 2 \gamma_0^J \,,& \gamma_1^S &= \gamma_1^H - 2 \gamma_1^J \,,& \gamma_2^S &= \gamma_2^H - 2 \gamma_2^J\, . \hspace*{5cm}\phantom{a} \end{align} For the cusp anomalous dimension \begin{align} \Gamma_0 & = 4 C_F\,,\hspace*{13cm}\phantom{a} \nonumber\\ \Gamma_1 &= 4 C_F \left[ C_A \left( \frac{67}{9} - \frac{\pi^2}{3} \right) - \frac{20}{9} T_F n_f \right] \,, \nonumber \\ \Gamma_2 &= - 0.7901 n_f^2 - 183.2 n_f + 1175 \,,& \nonumber\\ \Gamma_3 &\approx \frac{\Gamma_2^2}{\Gamma_1}\,. \end{align} Analytical expressions for the three-loop terms $\gamma_2^H$, $\gamma_2^J$ and $\Gamma_2$ can be found in~\cite{Becher:2006mr}. The $\alpha_s^4$ part of the cusp anomalous dimension is not known and we estimate it using a Pad\'e approximation. The same approximation works well at $\alpha_s^3$ and in any case our results are very insensitive to the value of $\Gamma_3$.
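As a standalone illustration of the formulas in this appendix (not part of the analysis in the paper), the sketch below evaluates the four-loop solution for $\alpha_s(\mu)$ and the Pad\'e estimate $\Gamma_3 \approx \Gamma_2^2/\Gamma_1$ numerically for $n_f=5$; the value of $\Lambda$ is illustrative, not a fit result, and the two-loop cusp term uses the standard expression.

```python
import math

# Illustrative numerical sketch: QCD beta-function coefficients for
# n_f = 5 active flavors, and the four-loop analytic solution for
# alpha_s(mu) in terms of L = ln(mu^2/Lambda^2).
CF, CA, TF = 4.0 / 3.0, 3.0, 0.5
nf = 5
z3 = 1.2020569031595943  # zeta(3)

beta0 = 11.0 / 3.0 * CA - 4.0 / 3.0 * TF * nf
beta1 = 34.0 / 3.0 * CA**2 - 20.0 / 3.0 * CA * TF * nf - 4.0 * CF * TF * nf
beta2 = 325.0 / 54.0 * nf**2 - 5033.0 / 18.0 * nf + 2857.0 / 2.0
beta3 = (1093.0 / 729.0 * nf**3
         + (50065.0 / 162.0 + 6472.0 / 81.0 * z3) * nf**2
         + (-1078361.0 / 162.0 - 6508.0 / 27.0 * z3) * nf
         + 3564.0 * z3 + 149753.0 / 6.0)

def alpha_s(mu, Lam=0.21):
    """Four-loop alpha_s(mu); Lam (in GeV) is an illustrative value."""
    L = math.log(mu**2 / Lam**2)
    lnL = math.log(L)
    return (4.0 * math.pi / beta0) * (
        1.0 / L
        - beta1 / (beta0**2 * L**2) * lnL
        + beta1**2 / (beta0**4 * L**3) * (lnL**2 - lnL - 1.0)
        + beta2 / (beta0**3 * L**3)
        + beta1**3 / (beta0**6 * L**4) * (-lnL**3 + 2.5 * lnL**2 + 2.0 * lnL - 0.5)
        - 3.0 * beta1 * beta2 / (beta0**5 * L**4) * lnL
        + beta3 / (2.0 * beta0**4 * L**4))

# Pade estimate of the unknown four-loop cusp anomalous dimension,
# Gamma_3 ~ Gamma_2^2 / Gamma_1, using the numerical fits quoted above.
Gamma1 = 4.0 * CF * (CA * (67.0 / 9.0 - math.pi**2 / 3.0) - 20.0 / 9.0 * TF * nf)
Gamma2 = -0.7901 * nf**2 - 183.2 * nf + 1175.0
Gamma3_pade = Gamma2**2 / Gamma1

print(alpha_s(91.19), Gamma3_pade)
```

With $\Lambda \simeq 0.21$ GeV this reproduces a coupling of the expected size at $\mu = m_Z$ and shows the logarithmic decrease toward higher scales.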
\section{Hard, jet and soft function\label{sec:Hjs}} The hard function can be written as \begin{equation} H(Q^2,\mu) = h(\ln\frac{Q^2}{\mu^2},\mu)\,, \label{hardfdef} \end{equation} where to three-loop order \begin{eqnarray}\label{hardfun} h(L,\mu)&=&1+\left(\frac{\alpha_s}{4 \pi }\right) \bigg[-\frac{1}{2} \Gamma_0 L^2-\gamma^H_0 L+c^H_1\bigg] +\left(\frac{\alpha_s}{4 \pi }\right)^2 \bigg[\frac{1}{8} \Gamma _0^2 L^4+\bigg(\frac{\beta_0 \Gamma_0}{6}+\frac{\gamma^H_0 \Gamma_0}{2}\bigg) L^3 \nonumber \\ &&\hspace{1cm} +\bigg(\frac{(\gamma^H_0)^2}{2}+\frac{\beta _0 \gamma^H_0}{2}-\frac{\Gamma_1}{2}\bigg) L^2-\gamma^H_1 L+c^H_1 \bigg(-\frac{L^2 \Gamma _0}{2}+L \left(-\beta _0-\gamma^H _0\right)\bigg)+c^H_2\bigg] \nonumber\\ && +\left(\frac{\alpha _s}{4 \pi }\right)^3 \bigg[-\frac{1}{48} \Gamma _0^3 L^6+\bigg(-\frac{1}{12} \beta _0 \Gamma _0^2-\frac{1}{8} \gamma^H _0 \Gamma _0^2\bigg) L^5\nonumber\\ &&\hspace{2cm} +\bigg(-\frac{1}{12} \Gamma _0 \beta _0^2-\frac{5}{12} \gamma^H _0 \Gamma _0 \beta _0 -\frac{1}{4} (\gamma_0^H)^2 \Gamma _0+\frac{\Gamma _0 \Gamma _1}{4}\bigg) L^4 \nonumber\\ &&\hspace{2cm} +\bigg(-\frac{(\gamma^H_0)^3}{6}-\frac{1}{2} \beta _0 (\gamma^H_0)^2-\frac{1}{3} \beta _0^2 \gamma^H_0+\frac{\Gamma _1 \gamma^H_0}{2}+\frac{\beta _1 \Gamma _0}{6}+\frac{\gamma^H_1 \Gamma _0}{2}+\frac{\beta _0 \Gamma _1}{3}\bigg) L^3 \nonumber\\ &&\hspace{2cm} +\bigg(\frac{\beta _1 \gamma^H_0}{2}+\gamma^H_1 \gamma^H_0+\beta _0 \gamma^H_1-\frac{\Gamma _2}{2}\bigg) L^2-\gamma^H_2 L \nonumber \\ && \hspace{2cm} +c^H_1 \bigg\{\frac{1}{8} \Gamma _0^2 L^4+\bigg(\frac{2 \beta _0 \Gamma _0}{3}+\frac{\gamma^H_0 \Gamma _0}{2}\bigg) L^3 +\bigg(\beta _0^2+\frac{3 \gamma^H_0 \beta _0}{2}+\frac{(\gamma^H_0)^2}{2}-\frac{\Gamma _1}{2}\bigg) L^2 \nonumber\\ &&\hspace{3cm} +\left(-\beta _1-\gamma^H_1\right) L\bigg\} +c^H_2 \bigg\{L \left(-2 \beta _0-\gamma^H_0\right)-\frac{L^2 \Gamma _0}{2}\bigg\}+c^H_3\bigg]\,. 
\end{eqnarray} The three-loop constant $c_3^H$ is not yet known but only contributes to the $\delta(\tau)$ part of the thrust distribution. The values of the lower order constants are \begin{align} c_1^H & = C_F \left(-16+\frac{7 \pi ^2}{3}\right) \,,\nonumber \\ c_2^H &= C_F^2 \left(\frac{511}{4}-\frac{83 \pi ^2}{3}+\frac{67 \pi ^4}{30}-60 \zeta_3 \right) + C_F C_A \left(-\frac{51157}{324}+\frac{1061 \pi ^2}{54}-\frac{8 \pi ^4}{45}+\frac{626 \zeta_3}{9} \right) \nonumber \\ & \hspace{1cm} + C_F T_F n_f \left( \frac{4085}{81}-\frac{182 \pi ^2}{27}+\frac{8 \zeta_3}{9} \right) \,. \end{align} The expression for $H(Q^2,\mu)$ is obtained by solving the RG equation (\ref{Hrge}) order by order in $\alpha_s$. The RG equations for the Laplace-transformed jet and soft functions have the same form, so that their explicit expressions follow from the above result by simple substitution rules. Defining the Laplace transform of the cross section as \begin{equation}\label{laplace} \widetilde t(\nu) = \int_0^\infty\!\mathrm{d}\tau\,e^{-\nu \tau}\, \frac{1}{\sigma_0} \frac{\mathrm{d}\sigma}{\mathrm{d}\tau} \,, \end{equation} the cross section factors into the hard function times the product of the Laplace transforms of the jet and soft functions: \begin{equation}\label{factLaplace} \widetilde t(\nu)= H(Q^2,\mu)\, \left[\widetilde j\Big(\ln\frac{ s Q^2}{\mu^2},\mu\Big)\right]^2\, \widetilde s_T\Big( \ln\frac{s Q}{\mu},\mu\Big)\,, \qquad s\equiv \frac{1}{e^{\gamma_E} \nu}. \end{equation} The Laplace transforms $\widetilde j$ and $\widetilde s_T$ of the jet and soft functions are defined as in (\ref{laplace}).
After writing these as functions of a logarithm of the argument, the RG equations simplify to \begin{eqnarray}\label{gammaV} \label{jtildeevol} \frac{\mathrm{d}}{\mathrm{d}\ln\mu}\,\widetilde j\Big(\ln\frac{s Q^2}{\mu^2},\mu \Big) &=& \left[- 2\Gamma_{\rm cusp}(\alpha_s)\,\ln\frac{s Q^2}{\mu^2} - 2\gamma^J(\alpha_s) \right] {\widetilde j}\Big(\ln\frac{s Q^2}{\mu^2},\mu \Big) \,, \\ \label{stildeevol} \frac{\mathrm{d}}{\mathrm{d}\ln\mu}\,\widetilde s_T\Big(\ln\frac{s Q}{\mu},\mu \Big) & = & \left[ 4\Gamma_{\rm cusp}(\alpha_s)\,\ln\frac{s Q}{\mu} - 2\gamma^S(\alpha_s) \right] {\widetilde s}_T\Big(\ln\frac{s Q}{\mu},\mu \Big) \,. \end{eqnarray} Comparing with the RG equation (\ref{Hrge}) for the hard function and with the definition (\ref{hardfdef}), one sees that the expression for the jet function $\widetilde j(L,\mu)$ is obtained from (\ref{hardfun}) by simple substitutions: \begin{eqnarray} \widetilde{j}(L,\mu) &=& h(L,\mu) \quad \mathrm{with} \quad \gamma^H \to -\gamma^J,\,c^H \to c^J, \,\,\mathrm{and}\,\, \Gamma_{\rm cusp} \to -\Gamma_{\rm cusp}\,,\\ \widetilde{s}_T(L,\mu) &=& h(2 L,\mu) \quad \mathrm{with} \quad \gamma^H \to -\gamma^S \,\,\mathrm{and}\,\, c^H \to c^S \,. \end{eqnarray} The constants for the jet and soft functions are \begin{align} c_1^J &= C_F \left( 7 - \frac{2 \pi^2}{3} \right) \nonumber \,,\\ c_2^J &= C_F^2 \left( \frac{205}{8} - \frac{97 \pi^2}{12} + \frac{61 \pi^4}{90} - 6 \zeta_3 \right) + C_F C_A \left( \frac{53129}{648} - \frac{155 \pi^2}{36} - \frac{37 \pi^4}{180} - 18 \zeta_3 \right) \nonumber\\ &\hspace{1cm} + C_F T_F n_f \left( - \frac{4057}{162} + \frac{13 \pi^2}{9} \right)\,,\\ c_1^S &= - C_F \pi^2 \,, \nonumber\\ c_2^S &= C_F^2 c_{2_{C_F}}^S + C_F C_A c_{2_{C_A}}^S + C_F T_F\,n_f\, c_{2_{n_f}}^S \,.\nonumber \end{align} The constants $c_{2_{C_F}}^S$, $c_{2_{C_A}}^S$ and $c_{2_{n_f}}^S$ are extracted numerically as explained in Section \ref{sec:match}.
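As a standalone cross-check of these substitution rules (illustrative only; $n_f=5$, with $Q$, $\nu$ and $\mu$ chosen arbitrarily), the sketch below assembles the one-loop hard, jet and soft coefficients and verifies that the explicit $\mu$-dependence cancels in the $O(\alpha_s)$ term of the Laplace-space product (\ref{factLaplace}) at fixed coupling.

```python
import math

# One-loop expansion coefficients from Appendices A and B.
CF = 4.0 / 3.0
Gamma0 = 4.0 * CF
gH0, gJ0 = -6.0 * CF, -3.0 * CF
gS0 = gH0 - 2.0 * gJ0                       # gamma_0^S = gamma_0^H - 2 gamma_0^J = 0
c1H = CF * (-16.0 + 7.0 * math.pi**2 / 3.0)
c1J = CF * (7.0 - 2.0 * math.pi**2 / 3.0)
c1S = -CF * math.pi**2

# O(alpha_s) coefficients of h(L), j~(L) and s~_T(L).  The jet and soft
# coefficients follow from the hard one via the substitution rules above.
def h1(L):                                  # hard: h = 1 + (as/4pi) h1(L) + ...
    return -0.5 * Gamma0 * L**2 - gH0 * L + c1H

def j1(L):                                  # gamma_H -> -gamma_J, Gamma -> -Gamma
    return +0.5 * Gamma0 * L**2 + gJ0 * L + c1J

def s1(L):                                  # s~_T(L) = h(2L) with gamma_H -> -gamma_S
    return -0.5 * Gamma0 * (2.0 * L)**2 + gS0 * (2.0 * L) + c1S

def t1(Q, nu, mu):
    """O(alpha_s) coefficient of t~(nu) = H j~^2 s~_T at fixed coupling."""
    s = 1.0 / (math.exp(0.5772156649015329) * nu)   # s = 1/(e^gammaE nu)
    return (h1(math.log(Q**2 / mu**2))
            + 2.0 * j1(math.log(s * Q**2 / mu**2))
            + s1(math.log(s * Q / mu)))

# At strict O(alpha_s) the explicit mu-dependence must cancel between the
# three factors: evaluating at two very different scales gives one number.
print(t1(91.19, 20.0, 5.0), t1(91.19, 20.0, 91.19))
```

The cancellation works because the logarithms combine as $2L_h - 4L_j + 4L_s = 0$ and $\gamma_0^S = \gamma_0^H - 2\gamma_0^J$, leaving only a $\mu$-independent function of $s$.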
We found \begin{align}\label{softConst} c_{2_{C_F}}^S &= 58\pm2, & c_{2_{C_A}}^S &= - 60\pm 1, & c_{2_{n_f}}^S &= 43 \pm 1 \,. \end{align} \section{Singular terms in the thrust distribution\label{sec:singular}} The fixed-order distributions can be written in terms of delta functions and plus distributions: \begin{equation} D (\tau) = \delta (\tau)\, D_{\delta} (\tau) + \left( \frac{\alpha_s}{2 \pi} \right) \left[ D_A (\tau) \right]_{+} + \left( \frac{\alpha_s}{2 \pi} \right)^2 \left[ D_B (\tau) \right]_{+} + \left( \frac{\alpha_s}{2 \pi} \right)^3 \left[ D_C (\tau) \right]_{+} \, . \end{equation} The delta-function terms are known to $\alpha_s^2$ accuracy \begin{align}\label{deltaterms} D_{\delta} &= 1 + \left( \frac{\alpha_s}{4 \pi} \right)C_F \left( - 2 + \frac{2 \pi^2}{3} \right) + \left( \frac{\alpha_s}{4 \pi} \right)^2 \left\{ C_F^2 \left( 4 - \frac{3 \pi^2}{2} + \frac{\pi^4}{18} - 24 \zeta_3 + c_{2_{C_F}}^S \right) \right.\quad \\ & \left. + C_A C_F \left( \frac{493}{81} + \frac{85 \pi^2}{6} - \frac{73 \pi^4}{90} + \frac{566 \zeta_3}{9} + c_{2_{C_A}}^S \right) + C_F T_F n_f \left( \frac{28}{81} - \frac{14 \pi^2}{3} - \frac{88 \zeta_3}{9} + c_{2_{n_f}}^S \right) \right\} \,. \nonumber \end{align} Our results allow us to derive all plus-distribution terms to $\alpha_s^3$. We find { \begin{align} D_A (\tau) &= \frac{1}{\tau}\left\{C_F \left[- 4 \ln \tau -3\right]\right\}\, , \nonumber \\ D_B (\tau) &= \frac{1}{\tau} \left\{ C_F^2 \left[ 8 \ln^3 \tau + 18 \ln^2 \tau + (13 - 4 \pi^2) \ln \tau + \frac{9}{4} - 2 \pi^2 + 4 \zeta_3 \right] \right. \\ & \hspace{1cm} \left. + C_F T_F n_f \left[ - 4 \ln^2 \tau + \frac{22}{9} \ln \tau + 5 \right] \right. \nonumber \\ &\left.
\hspace{1cm}+ C_F C_A \left[ 11 \ln^2 \tau + (- \frac{169}{18} + \frac{2 \pi^2}{3}) \ln \tau - \frac{57}{4} + 6 \zeta_3 \right] \right\} \, ,\nonumber \\ D_C (\tau) &= \frac{1}{\tau}\bigg\{ C_F^3 \bigg[-8 \ln^5\tau-30 \ln^4\tau +\ln^3\tau \left(-44+\frac{40 \pi ^2}{3}\right)+\ln^2\tau \big(-88 \zeta _3+24 \pi ^2\nonumber \\ & -27\big) + \ln\tau \left(-c_{2_{C_F}}^S-96 \zeta _3-\frac{53 \pi ^4}{90}+\frac{79 \pi ^2}{6}-\frac{17}{2}\right) +16 \pi ^2 \zeta _3-39 \zeta _3-132 \zeta _5\nonumber \\ &+\frac{19 \pi ^4}{120} +\frac{5}{8} \pi ^2-\frac{47}{8}-\frac{3}{4} c_{2_{C_F}}^S \bigg] +C_F^2 n_f T_F \bigg[\frac{40 \ln^4\tau}{3}+\frac{56 \ln^3\tau}{9}+\ln^2\tau \left(-43-\frac{28 \pi ^2}{3}\right) \nonumber \\ & +\ln\tau \left(- c_{2_{n_f}}^S+\frac{664 \zeta _3}{9}+\frac{164 \pi ^2}{27}-\frac{1495}{81}\right)+\frac{274 \zeta _3}{9}-\frac{31 \pi ^4}{45}+\frac{56 \pi ^2}{9}+\frac{1511}{108} \nonumber \\ & + \frac{2}{3}c_{2_{C_F}}^S-\frac{3}{4} c_{2_{n_f}}^S \bigg] + C_F n_f^2 T_F^2 \bigg[-\frac{112 \ln^3\tau}{27}+\frac{68 \ln^2\tau}{9}+\ln\tau \left(\frac{140}{81}+\frac{16 \pi ^2}{27}\right) \nonumber\\ &-\frac{176 \zeta _3}{27}-\frac{64 \pi ^2}{81}-\frac{3598}{243}+\frac{2}{3} c_{2_{n_f}}^S \bigg] + C_F C_A^2 \bigg[-\frac{847 \ln^3\tau}{27}+\ln^2\tau \left(\frac{3197}{36}-\frac{11 \pi ^2}{3}\right) \nonumber \\ &+\ln\tau \left(22 \zeta _3-\frac{11 \pi ^4}{45}+\frac{85 \pi ^2}{9}-\frac{11323}{324}\right)-10 \zeta _5+\frac{361 \zeta _3}{27}+\frac{541 \pi ^4}{540}-\frac{739 \pi ^2}{81}\nonumber \\ & -\frac{77099}{486}-\frac{11}{6} c_{2_{C_A}}^S \bigg] + C_F^2 C_A \bigg[-\frac{110 \ln^4\tau}{3}+\ln^3\tau \left(-\frac{58}{9}-\frac{8 \pi ^2}{3}\right) +\ln^2\tau \left(-36 \zeta _3 \nonumber \right. 
\\ & \left.+\frac{68 \pi ^2}{3}+\frac{467}{4}\right)+\ln\tau \left(-\frac{2870 \zeta _3}{9}+\frac{173 \pi ^4}{90}-\frac{625 \pi ^2}{27}+\frac{29663}{324}- c_{2_{C_A}}^S\right)-30 \zeta _5 \nonumber \\ & -\frac{1861 \zeta _3}{18}+\frac{973 \pi ^4}{360}-\frac{317 \pi ^2}{18}-\frac{49}{27}-\frac{11}{6} c_{2_{C_F}}^S-\frac{3}{4} c_{2_{C_A}}^S\bigg] +C_A C_F n_f T_F \bigg[\frac{616 \ln^3 \tau}{27} \nonumber \\ &+\ln^2\tau \left(\frac{4 \pi ^2}{3}-\frac{512}{9}\right)+\ln\tau \left(8 \zeta _3-\frac{128 \pi ^2}{27}+\frac{673}{81}\right)+\frac{608 \zeta_3}{27}-\frac{10 \pi ^4}{27}+\frac{430 \pi ^2}{81} \nonumber \\ & +\frac{24844}{243} -\frac{11}{6} c_{2_{n_f}}^S+\frac{2}{3} c_{2_{C_A}}^S \bigg] \bigg\}\,. \nonumber \end{align}} The numerical values of $c_{2_{C_F}}^S$, $c_{2_{C_A}}^S$ and $c_{2_{n_f}}^S$ were given in (\ref{softConst}). \begin{table}[t!] \begin{center} \begin{tabular}{|c|c|c|c|}\hline & $\alpha_s$ & $\alpha_s^2$ & $\alpha_s^3$ \\ \hline \multirow{2}{*}{LL} & $G_{12}$ & $G_{23}$ & $G_{34}$ \\ & -2.667 & -10.22 & -45.72\\ \hline \multirow{2}{*}{NLL} & $G_{11}$ & $G_{22}$ & $G_{33}$ \\ & 4 & -24.94 & -285.1 \\ \hline \multirow{2}{*}{N$^2$LL} & $C_1$ & $G_{21}$ & $G_{32}$ \\ & 1.053 & 21.82 & -230.7 \\ \hline \multirow{2}{*}{N$^3$LL} & & $C_2$ & $G_{31}$ \\ & -- & $73.\pm 2. $ & $293. \pm 24.$ \\ \hline \end{tabular} \end{center} \caption{Numerical values for the expansion coefficients of $R(\tau)$ as defined in (\ref{Rexpand}). \label{GijNumerical}} \end{table} To compare with the existing literature and for the reader's convenience, we also quote the third-order result for the quantity $R(\tau)$, which is often written in the form \begin{equation}\label{Rexpand} R(\tau) =\int_0^\tau \mathrm{d}\tau'\, \frac{1}{\sigma_{\rm had}} \frac{\mathrm{d}\sigma}{\mathrm{d}\tau'} = \left(1+\sum_{k=1}^{\infty} C_k \left(\frac{\alpha_s}{2\pi}\right)^k\right) \exp\left[ \sum_{i=1}^\infty \sum_{j=0}^{i+1} \left(\frac{\alpha_s}{2\pi}\right)^i \ln^j\frac{1}{\tau} G_{ij}\right] \,.
\end{equation} We normalize here to the total hadronic cross section $\sigma_{\rm had}$ given in (\ref{sigtot}) instead of the Born cross section $\sigma_0$. Our result provides the normalization of $R(\tau)$ to second order \begin{align}\label{Cicoeff} C_1 =& C_F \left(-\frac{5}{2}+\frac{\pi ^2}{3}\right) \, ,\nonumber \\ C_2 =& C_F^2 \left(-6 \zeta (3)+\frac{\pi ^4}{72}-\frac{7 \pi^2}{8}+\frac{41}{8}+\frac{c_{2_{C_F}}^S}{4} \right) +C_A C_F \left( \frac{481 \zeta (3)}{18}-\frac{73 \pi^4}{360}+\frac{85 \pi ^2}{24}-\frac{8977}{648} \right. \nonumber \\ & \left.+\frac{c_{2_{C_A}}^S}{4} \right) + C_F n_f T_F \left(-\frac{58 \zeta (3)}{9}-\frac{7 \pi^2}{6}+\frac{905}{162}+\frac{c_{2_{n_f}}^S}{4} \right)\,, \end{align} and determines all logarithmic terms up to $\alpha_s^3$: \begin{align}\label{Gijcoeff} G_{12} &= -2 C_F \, ,\;\;\; G_{11} = 3 C_F \, ,\nonumber\\ G_{23} &= C_F \left(n_f T_F \frac{4 }{3}-C_A \frac{11 }{3}\right) ,\;\;\; G_{22} = C_F \left(-C_F\frac{4 \pi ^2 }{3}+n_f T_F\frac{11 }{9}+C_A \left(-\frac{169}{36}+\frac{\pi ^2}{3}\right) \right) \, , \nonumber\\ G_{21}& = C_F \left(C_F \left(-4 \zeta _3+\pi ^2+\frac{3}{4}\right)-5 n_f T_F+C_A \left(\frac{57}{4}-6 \zeta _3\right)\right)\, , \nonumber\\ G_{34}& = C_F \Bigg(-C_A^2\frac{847 }{108}+ C_A n_f T_F\frac{154}{27} -n_f^2 T_F^2 \frac{28}{27} \Bigg) \, , \nonumber \\ G_{33}& = C_F \Bigg(C_A^2 \left(-\frac{3197}{108}+\frac{11 \pi ^2}{9}\right) +n_f T_F C_A \left(\frac{512}{27}-\frac{4 \pi ^2}{9}\right) - n_f^2 T_F^2\frac{68}{27} + \nonumber\\ & C_F n_f T_F \left(2+\frac{8 \pi ^2}{3}\right)- C_F C_A \frac{22 \pi ^2}{3}+C_F^2 \frac{64}{3} \zeta _3\Bigg) \, ,\nonumber\\ G_{32}& = C_F \Bigg(C_A^2 \left(11 \zeta _3-\frac{11 \pi ^4}{90}+\frac{85 \pi ^2}{18}-\frac{11323}{648}\right) +C_A n_f T_F \left(4 \zeta _3-\frac{64 \pi ^2}{27}+\frac{673}{162}\right) \nonumber \\ & +n_f^2 T_F^2 \left(\frac{70}{81}+\frac{8 \pi ^2}{27}\right) +C_F^2 \left(\frac{8 \pi ^4}{45}-48 \zeta _3\right)+ C_F C_A \left(-110 \zeta 
_3+\frac{4 \pi ^4}{9}-\frac{70 \pi ^2}{27}+\frac{11}{8}\right) \nonumber\\ & + C_F n_f T_F \left(32 \zeta _3+\frac{8 \pi ^2}{27}+\frac{43}{6}\right)\Bigg) \, ,\nonumber\\ G_{31}& = C_F \Bigg( C_F^2 \left(-\frac{44}{3} \pi ^2 \zeta _3+53 \zeta _3+132 \zeta _5-\frac{8 \pi ^4}{15}+\frac{5 \pi ^2}{4}+\frac{29}{8}\right) \nonumber\\ & + C_F n_f T_F \left(-\frac{2}{3} c_{2_{C_F}}^S-\frac{208 \zeta _3}{9}+\frac{31 \pi ^4}{45}-\frac{19 \pi ^2}{18}-\frac{77}{4}\right) \nonumber\\ &+C_F C_A \left(\frac{11 c_{2_{C_F}}^S}{6}+2 \pi ^2 \zeta _3+\frac{452 \zeta _3}{9}+30 \zeta _5-\frac{377 \pi ^4}{180}+\frac{161 \pi ^2}{72}+\frac{23}{2} \right) \nonumber\\ &+C_A^2 \left(\frac{11}{6} c_{2_{C_A}}^S-\frac{361 \zeta _3}{27}+10 \zeta _5-\frac{541 \pi ^4}{540}+\frac{739 \pi ^2}{81}+\frac{77099}{486}\right) \nonumber\\ & +C_A n_f T_F \left(\frac{11 c_{2_{n_f}}^S}{6}-\frac{2}{3} c_{2_{C_A}}^S-\frac{608 \zeta _3}{27}+\frac{10 \pi ^4}{27}-\frac{430 \pi ^2}{81}-\frac{24844}{243}\right) \nonumber\\ &+n_f^2 T_F^2 \left(-\frac{2}{3} c_{2_{n_f}}^S+\frac{176 \zeta _3}{27}+\frac{64 \pi ^2}{81}+\frac{3598}{243}\right) \Bigg) \, . \end{align} The numerical values of the above coefficients are listed in Table \ref{GijNumerical}. The NLL coefficients up to $O(\alpha_s^3)$ were given in \cite{Catani:1992ua} and we completely agree with their results. In the same reference the remaining $\alpha_s^2$ coefficients were determined using a fit to the numerical fixed-order distribution, obtaining $C_2=34\pm 22$ and $G_{21}=30\pm 10$. Our analytical result agrees with the extracted value of $G_{21}$, but our value of $C_2$ is about a factor of two larger. This disagreement is perhaps not that surprising, given that \cite{Catani:1992ua} had to extract $C_2$ and $G_{21}$ numerically using a simultaneous fit to both quantities at small $\tau$, where the result is dominated by the contribution from the logarithmic term proportional to $G_{21}$.
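The analytic coefficients above can be checked directly against Table \ref{GijNumerical}; the following standalone sketch (illustrative only, $n_f=5$) evaluates a few of them numerically.

```python
import math

# Evaluate some of the analytic G_ij coefficients of (Rexpand) for
# n_f = 5 and compare with the numerical values quoted in the table.
CF, CA, TF, nf = 4.0 / 3.0, 3.0, 0.5, 5
z3 = 1.2020569031595943  # zeta(3)
pi2 = math.pi**2

G12 = -2.0 * CF
G11 = 3.0 * CF
G23 = CF * (nf * TF * 4.0 / 3.0 - CA * 11.0 / 3.0)
G22 = CF * (-CF * 4.0 * pi2 / 3.0 + nf * TF * 11.0 / 9.0
            + CA * (-169.0 / 36.0 + pi2 / 3.0))
G34 = CF * (-CA**2 * 847.0 / 108.0 + CA * nf * TF * 154.0 / 27.0
            - nf**2 * TF**2 * 28.0 / 27.0)
G33 = CF * (CA**2 * (-3197.0 / 108.0 + 11.0 * pi2 / 9.0)
            + nf * TF * CA * (512.0 / 27.0 - 4.0 * pi2 / 9.0)
            - nf**2 * TF**2 * 68.0 / 27.0
            + CF * nf * TF * (2.0 + 8.0 * pi2 / 3.0)
            - CF * CA * 22.0 * pi2 / 3.0
            + CF**2 * 64.0 / 3.0 * z3)

for name, val in [("G12", G12), ("G11", G11), ("G23", G23),
                  ("G22", G22), ("G34", G34), ("G33", G33)]:
    print(f"{name} = {val:8.3f}")
```

The printed values reproduce the LL and NLL entries of the table.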
Since we have the analytical result for $G_{21}$, we are able to extract $C_2$ with much higher precision. \end{appendix} \newpage
{ "redpajama_set_name": "RedPajamaArXiv" }
1,123
Summer is never complete without a chic bikini set, and Sofia's high-waist style is on-point! No need to fuss over your look because Sofia's basic tee and jeans combo will do the trick! If a dainty tropical dress doesn't make you think of summer, maybe this will! Sporty, comfy, but still lady-like... Seriously, how does Sofia do it? Off-shoulder tops can make you feel and look fresh in this unforgiving heat! A comfy crop top will save your life this summer! Sofia makes it pop with a bit of layering. A striped top and a denim jumper are your essentials for a laid-back weekend! Keep it simple and cool by breaking your denim-on-denim style with a stripey cropped top. A polished top and sporty bottoms make for a fashion-forward look! When it's scorching outside, showing skin is in! Sofia's feminine dress is the perfect daytime look! We just love how smart she is with her color choices. Sofia nails this muted palette perfectly! Sleeveless top, breezy pants, and statement sneakers... We've got your next OOTD covered! No time to put in that extra effort? A shirt dress and pool slides make for a no-brainer look! If you're into that black-on-black aesthetic, switch things up by adding pops of color! Who wouldn't love to get summer style inspo from an it-girl who's an expert in strutting her stuff? Sofia Andres knows how to endure the summer heat wave while looking chic and trendy that we can't help but keep our eyes glued to her every Instagram post. Her stellar style makes her our celebrity BFF of choice (if only we can share outfits with her!) and we've rounded up her latest OOTDs so you can steal the look and go from city girl to balmy summer gal stat!
{ "redpajama_set_name": "RedPajamaC4" }
9,954
György Gordon Bajnai (* 5. březen 1968, Szeged) je maďarský ekonom, politik a bývalý ministr Národního rozvoje a hospodářství. V letech 2009 – 2010 byl sedmým premiérem Maďarské republiky. Biografie Gordon Bajnai se narodil dne 5. březen 1968 v Szegedu v tehdejší Maďarské lidové republice. Vystudoval ekonomii na Budapesti Corvinus Egyetem - Ústav mezinárodních vztahů. Kariéra Od 1. července 2007 do 30. dubna 2008 byl v čele Ministerstva pro místní samosprávu a regionální rozvoj. Dne 1. května 2008 převzal post ministra Národního rozvoje a hospodářství. Předseda vlády Po rezignaci premiéra Ference Gyurcsányho, který odstoupil z důvodu umožnit tak sestavení nové vlády s novým předsedou a pomoci tak k prosazení důležitých reforem a škrtům ve státním rozpočtu, byly jako vhodnými kandidáty na post předsedy vlády navrženi György Surányi, Ferenc Glatz a András Vértes. Ovšem dne 26. března 2009 György Surányi oznámil, že se o post ministerského předsedy ucházet nebude. Jelikož již 24. března uvedl, že se funkce premiéra ujme jen tehdy, když bude mít podporu všech parlamentních stran, a tu nemá. Odmítnutí György Surányiho donutilo MSZP hledat další vhodné kandidáty na post premiéra. Dne 29. března 2009 navrhla dalšího kandidáta, a to ministra národního rozvoje a hospodářství - Gordona Bajnaie. Koaliční strana SZDSZ následně jeho kandidaturu o den později podpořila. Gordon Bajnai měl dvě podmínky pro přijetí nominace - získání podpory opozičního SZDSZ a písemnou dohodu mezi MSZP a SZDSZ, že budou podporovat jím vedený kabinet i v době, "kdy budou muset být přijímána obtížná rozhodnutí". Chce sestavit vládu odborníků, kteří by dokázali řešit tíživou hospodářskou situaci země. Bajnai upozornil, že splnění jeho programu vyžaduje oběti, odříkání a dotkne se každé maďarské rodiny. "Bude to bolet, ale bude to mít výsledky". Do úřadu premiéra byl zvolen dne 14. dubna 2009, hlasovalo pro něj 204 z 385 poslanců ze MSZP a SZDSZ. 
Bajnai rozhodl jít příkladem a nechává si vyplácet jako plat jen jeden forint měsíčně. Podle majetkového přiznání by měl za měsíc vydělat 1,4 milionu forintů (140 tisíc korun). Premiér slíbil, že od června bude posílat celou sumu zpět do státní kasy. Toto gesto je v souladu s tvrdým úsporným balíčkem. Například v rámci úsporných opatřen chce šéfům státních podniků stanovit platový strop na dva miliony forintů. Jeho ministři si již snížili platy o celých 15 procent. Navzdory těmto krokům Maďaři jeho vládě nevěří. Dne 24. srpna 2009 se Gordon Bajnai sešel s českým premiérem Janem Fischerem. Na společném jednání se mimo jiné shodli na společné strategii v romské problematice. Maďarsko proto předkládá návrh, aby země Visegrádské čtyřky vypracovaly společnou středoevropskou strategii pro integraci Romů do společnosti. O nástupci Gordona Bajnaje ve funkci premiéra se rozhodlo v parlamentních volbách 2010. S velkým náskokem je vyhrála pravicová strana Fidesz, která nyní v parlamentu drží dvoutřetinovou ústavní většinu. Premiérem byl dne 29. května 2010 zvolen Viktor Orbán, který již premiérem byl v letech 1998–2002. Odkazy Reference Související články Maďarsko Seznam premiérů Maďarska Externí odkazy Meh.hu - Bajnai Gordon Bajnai Gordon - Életrajz Muži Premiéři Maďarska Maďarští politici Narození v roce 1968 Žijící lidé Narození 5. května Narození v Segedíně
{ "redpajama_set_name": "RedPajamaWikipedia" }
5,234
I am pleased to welcome you at my websites and I am sure that you will find what you are looking for at my site. My name is Michael Murr and I am born in 1974 in Bavaria, Germany. The first time, I got contact to the internet, was in 1995. and since 1996 I am running this website. The constantly enhancement of the content as well as the technology you can see in website history. Since the beginning of time the railway is fascinating me, so that is why it is a essential part of my page. In addition to the description of interesting railway lines, there is also information about Munichs and Augsburgs Public Transportation Association and more. Kumaris, the living goddesses of Nepal meanwhile have become the second essential topic of my homepage. Incidentally I got a news article about this topic in October 2008, which led me to further research, so that the topic fascinates me till this day. Do you like my page? Then I would be happy about a like on my Facebook page.
{ "redpajama_set_name": "RedPajamaC4" }
2,515
David and Goliath. Spartans and Persians. North vs. South. Good vs. evil. Epic battles have always given us hope, inspired our actions and captured our imaginations. This Saturday, a battle of epic proportions will take place, and although it might not capture our imaginations, it definitely will capture our taste buds. At the Farm to Food Truck Challenge III on Saturday at the SoCo Farmers' Market in Costa Mesa, six top gourmet food trucks will compete for the title of "Top Chop Truck." The title will be given to the chef who can create the best dish using only locally grown, farm-fresh ingredients found at the Farmers' Market, which is certified. The goal is to promote healthy eating and support of locally grown foods. Top local food judges include Mona Shah-Anderson, O.C restaurant publicist; chef Andrew Gruel, winner of Farm2FTII and owner of SlapFish; chef Santanna Salas from Haven Gastropub; Kim Glen of the city of Costa Mesa; and Priscilla Willis of She's Cookin'. Food blogger Bobby Navarro will act as emcee. The competition begins at noon Saturday, and the chefs will only have 60 minutes to whip up their best culinary creation. This is a wonderful chance to check out an event that pulls together the community and really brings our current food truck obsession to life. Come by the Farmer's Market for some great food, live music, fresh ingredients and an opportunity to learn, firsthand, about the farm-to-fork mentality. A food drive will also take place, and attendees are encouraged to bring canned goods or make a small donation. Any donation can be used to purchase food directly from the farmers. SoCo Farmers' Market is at 3315 Hyland Ave., Costa Mesa. There has never been a better time to get into classical music. Not only has the Philharmonic Society of Orange County booked Grammy-winning acts like Yo-Yo Ma and the Parker Quartet, but it is also offering affordable three-concert subscriptions. 
There are six different concert subscriptions to choose from: the All-Stars, Beethoven, British Orchestras, Barclay, Recital and Saturday mini-series. All mini-series will range from $90 to $950, and you can sit back in Orange County's finest venues and listen to the complex sounds of classical music. We recommend the Beethoven mini-series. The exploration of Beethoven's late works will be curated by Philharmonic Society President Dean Corey as he gives context and introduction to Beethoven's greatest works, including his Seventh and Ninth symphonies. As a bonus, the St. Lawrence String Quartet will perform Beethoven's late string quartets. If you act fast, you can save 20% when you use the promotion code MINI through July 15. For more information, visit http://www.philharmonicsociety.org. Movie soundtracks are such a subtle yet important part of the cinema experience. The Huntington Beach Symphony Orchestra is presenting "Music from Movies We Love" at the Huntington Beach Central Library at 3 p.m. Sunday. Songs from "My Fair Lady," "The King and I," "Oliver!" and "The Sound of Music" will be featured. The Huntington Beach Central Library is at 7111 Talbert Ave. The Crow Bar and Kitchen in Corona del Mar just announced its new menu by James Beard-nominated chef John Cuevas. The menu is a pleasure to read. It not only features Crow Bar's famous burgers but also gives history of the importance of local farms, and the beer list is hard to put down. For example, Stone's More Black than Brown IPA is described as "deep brown, a bit hazy with tan foam, a powerful hop blend. Modest body with lingering bitter, drying end hints of roasted malt/chocolate." After reading that, how can you not want to wash down your BBQ pork belly sliders with that tasty IPA? As opposed to a place like the Counter, which trusts the diner to make good menu decisions, the Crow Bar invites you to trust Cuevas. 
His signature crow burger, made with five ounces of their proprietary blend of all-natural Angus and prime beef, topped with taleggio and Gorgonzola, housemade ketchup and wild rocket on a rosemary bun, is a solid menu choice. It's even better when paired with the new wild arugula salad with candied bacon, a poached ranch egg, sherry dressing and pecorino romano. Whatever you try on Crow Bar's new menu, it will not disappoint. After a long day at the beach this summer, give the new items a try and kick back with some familiar favorites. The Crow Bar and Kitchen is at 2325 E. Coast Hwy., Corona del Mar. For more information, call (949) 673-2747 or visit crowburgerkitchen.com.

Where's the Party? on East 17th Street in Costa Mesa will be hosting its first-ever technique class on "How to Host a Shrimp Boil" from 4 to 6 p.m. Saturday. Just in time for summer, the gals at Where's the Party? will show you how to treat your family and friends to a traditional Southern shrimp boil. From the decor to the beverages to the shrimp to the invitations to the ever-so-charming Southern drawl, they will teach you how to host a great summer gathering. This is their first-ever "do-it-yourself" class, but they plan on hosting many more this summer. It's the perfect opportunity to support a local business and learn the secrets to throwing a wonderful party from the masters themselves. Where's the Party? has been in business for more than 20 years and has always been the premier source for all your event-planning needs. The shrimp boil lesson this Saturday is free, but there is limited seating, so RSVP with an email sent to sales@wheresthepartyoc.com if you want to attend. Visit their website at http://www.wheresthepartyoc.com or walk into their fabulous shop at 270 E. 17th St., Costa Mesa. They always have something new going on. The shop is open 10 a.m. to 6 p.m. Monday through Friday, and 9 a.m. to 5 p.m. Saturday. For more information, call (949) 722-1803.
### How do I define (and work with) a set of matrices?

Suppose $a,b,c$ are matrices, and I want to define the set $S = \{a, b, c\}$. What is the proper syntax for this? More generally, how do I define a set by specifying its elements (regardless of their nature)? I am only finding documentation for sets of numbers.

I'm also interested in defining functions that take sets like $S$ as input. For instance, if I had a matrix $d$, I want to be able to define a set like $S' = \{da, db, dc\}$, but I want to be able to do this via the Sage equivalent of $f(S) := \{ms \mid s \in S\}$.
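In Sage itself (hedged: recalled from the Sage API, not verified against the docs), one would typically call `set_immutable()` on each matrix so it becomes hashable, then write `S = Set([a, b, c])` and build the image set with a comprehension such as `Set([d * s for s in S])`. The underlying idea can be sketched in plain Python, representing each matrix as a hashable tuple of row tuples:

```python
# Plain-Python sketch (not Sage): matrices as tuples of row tuples,
# which are hashable and can therefore be elements of a set.
def mat(rows):
    return tuple(tuple(r) for r in rows)

def matmul(x, y):
    # Naive matrix product for tuple-of-tuples matrices.
    n, k, m = len(x), len(y), len(y[0])
    return tuple(
        tuple(sum(x[i][t] * y[t][j] for t in range(k)) for j in range(m))
        for i in range(n)
    )

a = mat([[1, 0], [0, 1]])
b = mat([[0, 1], [1, 0]])
c = mat([[2, 0], [0, 2]])
d = mat([[1, 1], [0, 1]])

S = {a, b, c}                        # the set S = {a, b, c}
S_prime = {matmul(d, s) for s in S}  # f(S) = {d*s : s in S}
```

The set comprehension on the last line is the $f(S) := \{ms \mid s \in S\}$ construction the question asks for; since Sage is built on Python, the same comprehension syntax carries over.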
Q: Cannot MATCH WHERE clause on numeric - have to convert with TOSTRING()

I have a database of Tweets. If I return a Tweet as follows:

MATCH (t:Tweet) WHERE ID(t) = 337314
RETURN TOSTRING(t.id), ID(t) AS id, t.text AS text

I get:

657066610235154432 337314 THE WEEK THAT WAS Happenings in #climatechange #politics, #policy & #science for @climaterealitya #reality Website: https://t.co/abnpfO4blb

If I then use the property t.id to match the Tweet, this fails (no rows returned):

MATCH (t:Tweet) WHERE t.id = 657066610235154432
RETURN TOSTRING(t.id), ID(t) AS id, t.text AS text, t.alpha_updated

If I use TOSTRING() to convert t.id to a string, then the node is returned:

MATCH (t:Tweet) WHERE TOSTRING(t.id) = '657066610235154432'
RETURN TOSTRING(t.id), ID(t) AS id, t.text AS text

Here is a screenshot of this in neo4j: http://www.screencast.com/t/EtMSlKdZPv

This is happening on approx. 6k of the Tweets from a database of several million. They are in an almost continuous block of ID values, so I suspect a corrupt index during the period when these nodes were being added. Any help understanding and resolving this issue would be appreciated.

A: It seems your DB is sometimes storing a string value for the id property. Here is a console that contains 2 nodes with the id property: one node uses a string value, and the other uses an integer. The console shows that a TOSTRING() query matches both nodes. However, if you use this query, you will successfully get (only) the node that stored an integer value:

MATCH (t:Tweet) WHERE t.id = 657066610235154432
RETURN t.id, ID(t) AS id, t.text AS text;

Therefore, you need to go through your DB and re-set all properties that have a string value with the corresponding integer value.
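The cleanup the answer recommends could be done directly in Cypher (a hedged sketch, assuming a Neo4j version with `toInteger()`: something like `MATCH (t:Tweet) WHERE t.id =~ '\d+' SET t.id = toInteger(t.id)` would coerce only the string-valued ids, since the regex match is null for integer properties). The type-mismatch symptom and the repair pass can also be illustrated in plain Python, with dicts standing in for Tweet nodes:

```python
# Toy illustration (no Neo4j required): dicts stand in for Tweet nodes,
# one with the id stored as an integer and one as a string.
records = [
    {"id": 657066610235154432, "text": "stored as integer"},
    {"id": "657066610235154432", "text": "stored as string"},
]

# An integer comparison only matches the correctly typed record,
# mirroring the "No Rows" behaviour seen in the question.
hits = [r for r in records if r["id"] == 657066610235154432]
assert len(hits) == 1

# Repair pass: coerce any purely numeric string ids back to integers.
for r in records:
    if isinstance(r["id"], str) and r["id"].isdigit():
        r["id"] = int(r["id"])

# Now every id is an integer and a numeric comparison matches both.
assert all(isinstance(r["id"], int) for r in records)
```

The same "detect the wrong type, coerce, re-store" pattern applies whether the pass runs as a Cypher statement or through a driver script.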
Hostile environment policy has failed, causing poverty and racism amongst migrants instead of achieving its goals
by Charlotte Rubin

A study by the Institute for Public Policy Research (IPPR) has found that the hostile environment policy, introduced by Theresa May in 2012 in an effort to deter irregular migrants from staying in the UK, has fostered racism and discrimination, contributed to pushing many people into destitution, and erroneously affected people with the legal right to live and work in the UK. The hostile environment's key objective has always been to make life for those living in the UK without immigration status so difficult that they ultimately decide to leave. In order to achieve this, measures under the hostile environment make it harder for individuals without status to rent a house, find a job, get a driving licence or even simply open a bank account, in the hope that by making these basic services harder to access, they would voluntarily leave and irregular migration numbers would decline. As voluntary returns and departures from the UK have dropped since 2014 (after the hostile environment came into force), the IPPR's report found that the policy not only fails to meet that goal, but has also endangered and complicated the lives of migrants in the UK in various ways. Firstly, for those without immigration status and with little to no financial support from the state, finding work is essential to ensuring some financial security and avoiding destitution. By forcing employers to check employees' "right to work" and criminalising work without immigration status, the hostile environment pushes migrants without status into the shadow economy and cash-in-hand jobs (especially if they are not allowed to open a bank account). This makes them vulnerable to exploitation and modern slavery if they manage to find work, and to destitution if they don't.
The risk of destitution and impoverishment is exacerbated by the restrictions on access to benefits and healthcare. The report specifically mentions malnutrition, cramped and substandard accommodation, and mental ill-health among undocumented migrant families unable to access public funds. The problems do not stop there. The hostile environment, it turns out, impacts not only its target population, namely individuals without immigration status, but also many individuals with legal immigration status. As such, the report shows that the policy fosters ethnic and racial bias, as home and work raids are often targeted at specific nationalities on the basis that they are "believed to be removable." Unsurprisingly, these people are often people of colour and of ethnic minority background. Similarly, the right to rent checks have been found discriminatory and biased against people of ethnic minority backgrounds, because they make landlords more suspicious of "removable-looking" people, whatever that may mean, and therefore disadvantage tenants of ethnic minority backgrounds who might very well be British nationals or people with leave to remain. Recently, the hostile environment has been under heavy scrutiny. In March, the Wendy Williams Windrush review was laid before Parliament. The report overtly criticised the workings of the Home Office's hostile environment, exposing how thousands of legal UK residents were classified as illegal immigrants and denied the right to work, rent property, and access healthcare and benefits during the Windrush scandal. In April, the Court of Appeal affirmed that the immigration checks landlords are required to carry out to ensure that tenants have the right to rent are discriminatory, but fell short of ruling that the discrimination was severe enough to render it unlawful. The case is currently being appealed.
The IPPR report warns that a significant proportion of EU citizens will miss the EUSS application deadline of 30 June 2021, leaving them unable to access benefits and many public services and stripping them of their immigration status altogether. Despite the mounting warnings and criticism, the Home Secretary confirmed in May that EU citizens who fail to apply for status under the EU Settlement Scheme in time will be unlawful residents and fall subject to all hostile environment policies currently in place. For all these reasons and many more, the report is unequivocal in its condemnation of the policy, stating that "restrictions on access to benefits can force people without immigration status into destitution. There is evidence of malnutrition, cramped and substandard accommodation, and mental ill-health among undocumented migrant families unable to access public funds … The hostile environment does not appear to be working for anyone: for migrants, for the Home Office, or for the wider public." If you need assistance you can contact us here, call us on 020 8142 8211 or send us a question on WhatsApp.

Tags: Law and Policy, Hostile Environment

The impact of COVID-19 on the immigration system: the good, the bad and the recommendations
by Charlotte Rubin

COVID-19 makes it difficult, if not impossible, to operate a normal immigration system. Travel restrictions make entering or leaving the UK a complex process, implementing ordinary work or income requirements for visas can undermine public health messages, and to make matters worse, the Home Office itself has been heavily impacted by the government-imposed lockdown, as its staffing levels have suffered and its workload is constantly changing. It is therefore not surprising that numerous changes aimed at ensuring that the UK's immigration and visa systems continue to function properly have been announced in the past few months.
Last week, a cross-party Home Affairs Select Committee published its report on the Home Office response to the impact of COVID-19 on the immigration and visa systems. In the report, the Committee welcomes the government decision to scrap the immigration health surcharge for all NHS and social care workers, calling it "a recognition of the contribution made by the front-line workers fighting COVID-19." However, it is said not to go far enough. Committee Chair Yvette Cooper MP said: "It is very welcome that the Government has agreed to waive the Immigration Health Surcharge and extend the bereavement scheme for NHS and social care workers. However, most care workers and low-paid NHS support staff are still excluded from receiving the free one-year visa extension granted to clinical staff, and as a result could be facing costs of hundreds or thousands of pounds this summer." The Committee therefore recommends opening free visa extensions to the same range of employees as the immigration health surcharge waiver. It also recommends simplifying, and lowering the cost of, paths to British citizenship and permanent residency for those health and social care workers who risked their lives during the pandemic. "Excluding the care workers who hold dying residents' hands, the cleaners who scrub the door handles and floors of the COVID-19 wards, or the porters who take patients to intensive care is just wrong. The Government must ensure that all measures of support for NHS and care workers apply to all frontline staff equally, irrespective of grade or job title." The Committee also evaluated visa extensions for non-NHS staff. When announcing the Home Office policy change which allowed all visas due to expire before 31 July 2020 to be extended, the Home Secretary said that "nobody will be punished for circumstances outside of their control".
To make good on that promise, the Committee recommends that the Home Office implement automatic, blanket visa extensions instead of making individuals apply for them via email, to ensure that individuals do not overstay their visa unintentionally. Highlighting a concern which lawyers and immigration experts flagged up immediately after the visa extensions were made public, the report reiterates there is currently no legal basis for any of these extensions. Individuals relying on government policy announcements (which can be changed at any given time and lack legal foundation) need legal reassurance that their extension is lawful and valid and that they can continue to live and work in the UK. The Committee therefore recommends that the Home Office implements a statutory instrument (a form of secondary legislation) to clarify the legal basis of both the extension of leave for all individuals who are unable to leave the country before the expiry of their current visa, and for the automatic extension of leave offered to NHS staff. Analysing the financial impact of the coronavirus on the visa system as a whole, the report acknowledges the disruption and economic impact of COVID-19, recognising that many individuals have lost their jobs or seen their income significantly reduced through no fault of their own. It is within this context that the Committee recommends adapting visa requirements such as the Minimum Income requirement to take loss of income due to COVID-19 into account when evaluating applications. In order to ensure public health and safety for all, the Government is also urged to lift the No Recourse to Public Funds (NRPF) conditions, which caused turmoil a few weeks ago when it seemed like the PM was not aware of the policy's existence. 
The Committee Chair said the government "needs to make sure that these exceptional Covid-19 circumstances aren't pushing families into desperate hardship because of the NRPF rules which prevent them getting the urgent support they need." Last but not least, the Home Affairs Committee evaluated the impact of the coronavirus on the EU Settlement Scheme (EUSS), calling upon the Home Office to step up its efforts to identify vulnerable persons who may not have applied to the EUSS yet. The report shows that COVID-19 has exacerbated the underlying problems of the EUSS. One of those problems is the lack of information on how the Home Office will approach late applications (applications made after the deadline of 30 June 2021). The Home Affairs Committee recommendations include a clarification of what support will be provided to assist vulnerable individuals in applying, especially children in care, given the low application rate for that particular group. At the minimum, it is said local authorities should increase their work to identify EU children in care who have not yet applied to the scheme, but ideally, more comprehensive measures should be implemented. The Committee therefore recommends that the Home Office grant automatic Settled Status to all children in care and care leavers, without requiring them to explicitly apply. The Committee also calls on the Home Office to clarify the legal position of those with pre-settled status. During the pandemic, people with pre-settled status have questioned whether they are able to access all public funds, specifically whether they can get benefits, or whether those rights are reserved for people with indefinite leave to remain only. To sum up, just like many experts in the area, the Committee is willing to cut the Home Office some slack in these unprecedented times.
It is appreciated that going through the normal routes to introduce new policies is made complicated by circumstances outside of the government's control. However, it is in times like these that guidance needs to be clear, unambiguous, and publicly available, so that practitioners know the law, visa holders feel secure, and the Home Office acts legally to address the issues we face.

Tags: Covid-19, Law and Policy

Statement of Changes to the Immigration Rules and EUSS quarterly statistics out today
by Charlotte Rubin

An eventful day in the immigration world, as the Home Office released a Statement of Changes to the Immigration Rules, as well as its most recent set of EU Settlement Scheme quarterly statistics. The Statement of Changes to the Immigration Rules carries some good news. For one, it confirms that durable partners who are victims of domestic violence will be eligible for status under the EUSS. This is in line with other government initiatives to tackle domestic abuse in the UK. In the same vein, any family member within scope of the EUSS whose family relationship with an EEA citizen breaks down is now eligible for status under the EUSS. Previously, only ex-spouses and ex-civil partners of EEA citizens could apply to retain a right of residence after divorce or breakdown of a relationship. Additionally, for family members of the people of Northern Ireland, the proposed changes extend the EUSS to dual Irish/British citizens, allowing eligible family members of the people of Northern Ireland to apply for UK immigration status under the Scheme on the same terms as the family members of Irish citizens in the UK. Prior to this change, family members of Northern Irish people could not access the EUSS – under the new rules, they are able to do so on the same basis as those of citizens of the Republic of Ireland. These are welcome changes which broaden the applicability of the EUSS. It comes as no surprise, then, that the government considers the EUSS a great success.
Today's EUSS press release boasts that, with over a year until the application deadline, currently set at 30 June 2021, the scheme has received almost 3.5 million applications, making it the biggest of its kind in British history. 3.1 million of those applications have been concluded, of which 58% were granted settled status, 41% pre-settled status and 1% had other outcomes. Other outcomes include 640 refused, 23,740 withdrawn or void and 10,030 invalid applications. Most EUSS applications are made online, and are relatively straightforward. But the online service is not available to everyone. The EUSS sets out that applicants must send in paper applications if they don't have biometric ID documents, or if they are applying on the basis of a derivative right to reside. The latter includes people who are not EU, EEA or Swiss citizens but are applying under the scheme as the family member of a British citizen they lived with in the EU/EEA/Switzerland, the family member of an EU/EEA/Swiss citizen who has become a British citizen, the primary carer of a British, EU, EEA or Swiss citizen, the child in education of an EU, EEA or Swiss citizen who used to live and work in the UK, or such a child's primary carer. Immigration lawyers and front-field workers were looking forward to this release of quarterly statistics, as the Home Office had promised in March to integrate paper applications into the statistics, something it had not previously been able to do. Despite this promise, there is still no information about the paper routes to be found in the newly released statistics. The reason given for failing to deliver on this promise is the COVID-19 pandemic, as the statistics state that it was the Home Office's "intention to develop electronic integration of the two systems to provide a more complete account of all applications received for the quarterly publication in May 2020, but due to the impacts of Covid-19, this has not been possible."
The Home Office have also temporarily stopped accepting ID documents by post, which delays the processing of paper applications. Nevertheless, the statistics reaffirm that the deadline to apply to the EUSS will not be extended. Paper applications are amongst the most complex applications under the EUSS, and often represent the most vulnerable individuals in society. As a consequence of the pandemic, charities and outreach projects which assist vulnerable applicants with their applications are unable to operate. As such, the people least likely to apply to the EUSS on time (those without ID), and whose applications are most affected by the pandemic (as they have to submit ID documents), are quite literally being left out in the cold: they cannot currently apply, their applications are excluded from the statistics, and there is reduced community assistance available. The Home Office is working hard to overcome obstacles and delays caused by the pandemic and resume normal operation. It is only logical that it should take the same approach towards applicants dealing with hindrances on their side of the process.

In brief, other, non-EUSS-related changes to the Immigration Rules include:

- Changes to the new Start Up and Innovator visa categories, tightening the requirements that endorsing bodies have to take into account when giving their endorsement.
- A change to student visas (Tier 4), whereby all applicants who apply under Appendix W and are sponsored for their studies in the UK by a government or international scholarship agency now have to obtain written consent from the relevant organisation.
- Fine-tuning of the new Global Talent visa, as the Rules merge the old Exceptional Talent visa with this new category, with minor amendments made at the request of the endorsing bodies.
- Changes to the Representative of an Overseas Business visa category, restricting its scope.
Representative of an Overseas Business visa holders are employees of overseas businesses which do not have a presenceence in the UK, sent to establish a branch or wholly owned subsidiary of the overseas business in the UK. The changes include that the overseas business must be active, trading and intending to maintain its principal place of business outside the UK; that applicants must have the skills, experience, knowledge and authority to represent the overseas business in the UK; and that applicants must be senior employees of the overseas business. There are also some amendments and clarifications regarding family life, including that if an individual is granted leave as a fiancé(e) or proposed civil partner, this automatically enables the marriage or civil partnership to take place in the UK, as well as clarification that the spent period for applicants under the family rules who have been convicted and sentenced to imprisonment for between 12 months and four years is 10 years. Read the full explanatory note here.

Tags: Statement of Changes, Law and Policy, EU Settlement Scheme

What happens to new-born babies when birth registrations are suspended?
by Charlotte Rubin

On Wednesday morning, Prime Minister Boris Johnson and his fiancée Carrie Symonds welcomed a healthy baby boy to this world. The birth of the PM's son brings some uplifting news in difficult times, as the PM comes out of a tough personal recovery from coronavirus whilst facing a daunting national crisis in the weeks and months to come. But the PM might not be out of the woods yet. COVID-19 might impact the Prime Minister on a personal level yet again – not by infection this time, but in relation to his new-born son. In the UK, there is no central government authority to register births. Instead, this has to be done in the area the child was born. Ever since all local authorities closed their offices on 23 March, birth registration appointments have no longer been carried out.
Parents of new-born babies in the UK are therefore unable to register their child as normally required, with potentially unduly harsh consequences. The general rule is that parents need to register the birth of a child with their local authority within 42 days of birth. If they fail to do so, they risk a fine or some other form of reprimand. Fortunately, this rule has been relaxed due to the coronavirus outbreak: government guidance states that no action will be taken against parents who fail to meet the deadline due to no fault of their own. In addition, parents can exceptionally make claims for child benefits and/or universal credit prior to obtaining official birth certificates. These are welcome changes, but they are not enough. In order to issue ID cards and travel documents, embassies have to see the birth certificates of children born in the UK. As ID cards are currently not being issued, parents cannot obtain passports or ID cards for their new-borns. In other words, the suspension on issuing birth certificates contributes to citizens ending up without identification and travel documents. For non-British citizens, these concerns are exacerbated even further. In a global pandemic, emergency situations are not rare occurrences. Yet, because new-borns cannot get IDs under the current circumstances, parents cannot travel abroad in those emergencies unless they leave their new-born child behind. Not only are all non-British parents unable to travel with their children should they need to do so, they also face additional challenges when applying for immigration status in the UK. EU citizens, specifically, will find that applying to the EU Settlement Scheme without a form of ID is a complicated endeavour. 
When asked to clarify these pressing issues, a Home Office official wrote that his office will evaluate on a "case by case basis" any application where a parent is unable to obtain an identity document for their child from an EU27 embassy due to circumstances beyond their control. Concerning the EU Settlement Scheme, the Home Office employee reiterated that the deadline to apply under the scheme is not before 30 June 2021, and, assuming that local authorities will resume their functions soon enough, parents therefore have plenty of time to apply before then, should they be unable to do so now. The case-by-case evaluation proposed by the Home Office is at its discretion and therefore does not offer a solution to the structural consequences of suspending birth registrations. In theory, this chaos affects everyone in the same way. One cannot help but wonder whether the PM will face similar obstacles when registering the birth of his son. Might that prompt the Home Office to find a temporary solution to avoid more citizens, British and European alike, ending up without IDs?

Tags: Law and Policy, EU Settlement Scheme, Free Movement, Borders

New government guidance on points-based system comes at a tactless time
by Charlotte Rubin

Every day at 8PM, millions of people across the country clap for our healthcare workers, an initiative which has been encouraged by the government. Meanwhile, as coronavirus numbers soar to almost a thousand deaths a day in the UK, the Home Office has published updated guidance for employers on navigating working visas once the new points-based immigration system comes into force on 1 January 2021. While encouraging these signs of solidarity, the government is thus detailing the ins and outs of an immigration system which will likely stop many of the people we clap for from coming to work in the UK once it becomes law.
The new guidance lays out that all workers will have to be sufficiently qualified (at the minimum, they must have A-level equivalence) and speak sufficient English in order to get a visa. Highly skilled workers are the only ones who can come to the UK without a job offer. To do so, they need an endorsement from a relevant competent body in order to obtain a Global Talent Visa. Any other individual who wants to come work in the UK will need to have a job offer from an approved sponsor. To become an approved sponsor, employers who want to recruit migrant workers will need to take active steps. They will have to check that their business is eligible, and choose which type of workers they are looking to hire: skilled workers with long-term job offers, or temporary workers. Employers will then have to put in place a framework within their business to deal with the sponsorship process, apply online and pay an application fee ranging from £536 to £1,476, depending on the type of business. The whole process usually takes about 8 weeks. Once they become an approved sponsor, they can recruit people without UK residency to fill their job openings. If an individual then receives a job offer from an approved sponsor, they will need to meet a minimum income threshold on top of the language and skill requirements. The general minimum salary threshold is set at £25,600. For some jobs, the threshold may be higher, if the Home Office estimates that it is a higher paid occupation. If an individual does not meet the income threshold, they may still be eligible for a visa if they can demonstrate that they have a job offer in a specific shortage occupation or a PhD relevant to the job. For these occupations, the income threshold is lowered to £20,480. The list of shortage occupations, which includes doctors and nurses, is published by the Migration Advisory Committee.
Concerning lower-skilled workers, the guidance explicitly reiterates that "there will NOT be an immigration route specifically for those who do not meet the skills or salary threshold for the skilled worker route." The skill level for different jobs can be found in Appendix J of the Immigration Rules. Considering that the average health care worker in the UK makes £19,080 a year, the timing of this publication seems peculiar to say the least. As our Director suggests, how does it make sense for the Home Office to state that care workers, nurses, hospital porters, cleaners, logistics personnel, postal workers, etc. will not be able to apply for a visa under the new immigration system in the midst of the Covid-19 crisis? It is hard to imagine that the Home Office has a valid reason for needlessly doubling down on an immigration policy which fails to take care of the workers who, in times of crisis, put everything at risk to take care of us. Tags: Covid-19, Law and Policy, Brexit From low-skilled to key workers: will the COVID-19 pandemic soften post-Brexit immigration policy? by Charlotte Rubin Just a month ago, when the government introduced its new points-based immigration system, a lot of workers in the health, food production, and transport industries were considered unskilled workers, and unwelcome in post-Brexit Britain. The basics of the proposed points-based system are clear. If a worker does not have a secondary school diploma, does not speak English, or their salary falls below £25,600, the door to the UK is closed to them. As it turns out, a lot of these "low-skilled" workers are now considered essential in the fight to manage, control and survive the coronavirus crisis. In the current circumstances, they have been put under additional strain.
The trend to bulk buy has put staff in supermarkets and grocery stores under significant pressure, with one employee writing that he and his co-workers have been working long days on their feet, anticipating the next few weeks to be "a nightmare," and advising against panic buying. There is no reason to bulk buy: there are no food shortages anywhere in Europe, and supermarkets are staying open throughout nation-wide lockdowns as they are part of a (small) group of essential businesses which are exempt from the new rules. But this may soon change. Agricultural workers from eastern Europe usually fill the majority of jobs on farms. The combination of Brexit caps on seasonal workers with strict coronavirus travel restrictions has slowed recruitment in agriculture, and the EU labour force is simply not coming through. UK farmers find themselves in a crisis and could face a shortage of 80,000 labourers this summer if the Government fails to intervene. These fruit-picking jobs need to be filled by British workers, or fruit and vegetables will be left unpicked and food stocks could be put in danger. Jobs now classified as "key workers" include NHS staff, social workers, the police and military, and those working in food distribution, energy, utilities and transportation. In other words, the people sustaining essential businesses are, by extension, deemed essential workers, as they help feed and care for a country at a standstill. Only a few weeks ago, Johnson's government described these people and the jobs they filled as "low skilled", stating that the government "intends to create a high wage, high-skill, high productivity economy." If anything, the COVID-19 pandemic highlights the stark dissonance between this government's policy on who is key in keeping the economy running and the truth about who is actually keeping the country together. It proves that "low-skilled" labour does not equate to low-value labour.
Recognising these workers as "key" or "essential" is a step towards recognising that they form the backbone of our society and without them, British civilisation would have already collapsed. The question remains whether this will be reflected in immigration policy when all of this blows over, and the pandemic finally dies down. Tags: Law and Policy, Brexit At what cost do we take back control? The new points-based system explained by Charlotte Rubin The United Kingdom (UK) left the European Union (EU) on 31 January 2020. Since then, the government has been rolling out changes to the immigration system, adapting it to a world without free movement to and from Europe. Today, the government finally revealed its plan for post-Brexit economic migration in Britain. At its core is the idea of "taking back control," a slogan which won the 2016 Brexit referendum, implemented through the end of free movement, a new visa system for EU and third-party nationals alike and a focus on "skilled migrants" to reduce overall immigration. A transition… Under the current immigration rules, EU citizens do not need a visa to work and live in the UK because they benefit from freedom of movement. Those from outside the EU have to meet certain requirements such as English language skills, sponsorship by a company and a salary threshold in order to apply for a visa. There is a cap of 21,000 on the number of visas awarded per year. Following the new plan, freedom of movement with the EU will end, and EU nationals will be subject to the same exact rules as non-EU nationals. As such, people coming to the UK from any country in the world for the purpose of work or study, other than some short-term business visitors and short-term students, will have to obtain a visa for which they will pay a fee. In addition, employers will have to pay an Immigration Skills surcharge on their migrant employees, and migrants from in and outside of the EU will have to pay an Immigration Health Surcharge. 
The only group unaffected by the new rules are Irish nationals, who the government states will be able to enter and exit the UK the same way they always have. … to an Australian points-based system? Freedom of movement will be replaced with what the government calls a points-based system, supposedly modelled after the Australian immigration system, which allows economic migrants to settle if they can demonstrate that they have a blend of skills and qualifications adding up to enough points. The selling point of a true points-based system is its flexibility, as it allows migrants to mix and match from a list of characteristics to reach the necessary threshold, and then settle in the host country without having to meet any mandatory requirements, such as an employment sponsorship, as one needs in the US for example. The government proposals released today, however, fail to offer that flexibility, which probably explains the complete absence of the term 'Australia-style' system. The plan requires all economic migrants wanting to come to the UK to fulfil three essential requirements, which are worth 50 points altogether. In addition to that, individuals will have to score another 20 points based on their salary expectations to reach 70 points overall and be eligible to apply for a visa. The minimum salary threshold to reach 70 points automatically is set at £25,600. If the applicant earns less than that required minimum salary threshold, but no less than £20,480, they may still be able to reach 70 points by demonstrating that they have a job offer in a specific shortage occupation such as nursing, or that they have a PhD relevant to the job. The policy paper specifically states that there will be no regional concessions to different parts of the UK, nor will there be a dedicated route for self-employed people. The three essential requirements are knowledge of the English language, a job offer from an approved sponsor, and a job at the appropriate skill level.
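The arithmetic of the proposal is simple enough to sketch as a toy scoring function. This is only an illustration of the rules as described above: the function and parameter names are invented, and the individual weights within the mandatory 50 points are not modelled.

```python
GENERAL_SALARY_THRESHOLD = 25_600
ABSOLUTE_SALARY_FLOOR = 20_480

def points(speaks_english, sponsored_job_offer, appropriate_skill_level,
           salary, shortage_occupation=False, relevant_phd=False):
    """Toy sketch of the proposed 70-point test."""
    score = 0
    # The three essential requirements are worth 50 points altogether.
    if speaks_english and sponsored_job_offer and appropriate_skill_level:
        score += 50
    # The remaining 20 points come from salary, with tradeable substitutes.
    if salary >= GENERAL_SALARY_THRESHOLD:
        score += 20
    elif salary >= ABSOLUTE_SALARY_FLOOR and (shortage_occupation or relevant_phd):
        score += 20
    return score

def eligible(**kwargs):
    return points(**kwargs) >= 70

# A nurse on £21,000 fails the general threshold but qualifies via the
# shortage occupation route; the same salary outside a shortage job fails.
print(eligible(speaks_english=True, sponsored_job_offer=True,
               appropriate_skill_level=True, salary=21_000,
               shortage_occupation=True))   # True
print(eligible(speaks_english=True, sponsored_job_offer=True,
               appropriate_skill_level=True, salary=21_000))  # False
```

Note that failing any one of the mandatory requirements zeroes out the 50 points, so no combination of other attributes can compensate, which is exactly why critics say this is not a true points-based system.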
These mandatory requirements differentiate the system from its Australian counterpart; the plan is therefore not a true points-based system. The job offer requirement especially flies in the face of the Australian analogy: in Australia, the largest share of new economic permanent resident visas each year is awarded to individuals without a job offer, who make up for it with other skills or abilities from the list. (Un)skilled workers For highly-skilled workers, the government laid out its extended Global Talent visa route on the day Britain left the EU. Through this scheme, the most highly skilled, who can achieve the required level of points, will be able to enter the UK without a job offer if they are endorsed by a relevant and competent body. For now, this forms the only exception to the job offer requirement, although the policy plan promises to roll out a broader unsponsored route within the points-based system to run alongside the employer-led system in the future. The appropriate skill level under the points-based system is set at the equivalent to A-levels. Anyone who does not meet that level will not be able to apply, as it is one of the mandatory requirements. Additionally, the plan explicitly states that there will be no general low-skilled or temporary work route, '…shifting the focus of [the UK] economy away from a reliance on cheap labour from Europe…', leaving immense labour shortages in specific industries. The list of low-skilled occupations includes waiters, waitresses, elementary agriculture workers and fishery workers. The report unhelpfully states 'Employers will need to adjust.' Special arrangements are put in place for certain groups, such as scientists, graduates and NHS workers, to fill the gap, but these arrangements are unlikely to resolve the immense labour shortage created.
The cap for the agricultural sector, for example, is increasing to 10,000 places per year for seasonal workers who harvest the fields, but remains far below the National Farmers' Union's (NFU) demands for 70,000 temporary visas in 2021. Nothing is mentioned of other groups likely to get caught up in the low-skilled workers group, such as care home workers, waiters, cleaners or domestic workers. This drew immediate criticism from people in the affected industries; the hospitality sector, for instance, famously relies on an EU national workforce, with Pret A Manger reporting that only one in 50 job applicants was a British national in 2018. The newly released plan indicates a major overhaul in the UK's approach to economic migration. It does not, however, affect students, family migration, or asylum law. Notably, none of these changes will take effect immediately. The transitional period, in which EU nationals are still free to exercise their free movement rights in the same way they were when the UK was still a part of the EU, is set to end on 31 December 2020. The proposed changes will then come into force on 1 January 2021. Even then, they will not take effect retroactively. As such, they will not affect the millions of EU citizens already living in the UK, and the job market is not going to change overnight. They will, however, change the composition of who comes and stays in the UK in the future. But for the 2016 Brexit voters, that future may be too far away to offer satisfaction. Tags: EU Citizens, Points Based System, Law and Policy "Teachers tax" for EU nationals: fake news or facts? by Charlotte Rubin Earlier this month, it was reported that EU citizens face a "teachers tax" of £4,345 over 5 years if they want to come to teach in the UK after Brexit. Although not factually incorrect, this statement does not reflect the law – or the reality – of teachers working in the UK. There is no such thing as a "teachers' tax."
There is simply an immigration system already in place which, as a consequence of the Brexit vote, will apply to anyone who does not fall under the umbrella of exemptions to that system. In other words, after Brexit, EU citizens will fall under the same immigration regime as third party (non-EU) nationals. Curbing immigration by ending free movement in this way was one of the Leave campaign's main selling points, and largely how it won the 2016 referendum. Effectively, the end of free movement means that everyone, including EU nationals, will need to apply for a visa if they want to enter and live in the UK post-Brexit. The Johnson government has drawn up a plan of what this would look like. Needless to say, under this plan, getting a visa costs money. The Tier 2 visa, which is the working visa for which teachers would have to apply if the rules stay as they are now, costs £1,220 if it is a permit for longer than 3 years. In addition to that, the government has stated that any non-British nationals will be liable to pay a yearly NHS immigration surcharge, which all non-EU migrants already pay today. The price of the immigration surcharge is set to go up to £625 a year. If you add up 5 years' worth of immigration surcharge with the visa fees, it will cost at least £4,345 to live and work in the UK for five years after Brexit, explaining the figure that The Independent alludes to. Some groups of special workers will have different requirements. The main group of workers with guaranteed special status is NHS workers. The Tory manifesto promises to alleviate the burden of immigration for EU workers with NHS job offers by offering cheaper visa fees and fast-track entry. It is their attempt to ensure that the NHS survives Brexit, labour shortages are filled and employment targets met. It is not unimaginable that if the government recognises a labour shortage and reliance on Europe for the NHS, it may do so for other fields and professions as well.
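The £4,345 figure can be checked with a quick back-of-the-envelope sum. It works out exactly if one assumes a yearly surcharge of £625 on top of the £1,220 visa fee; these inputs are taken from the figures quoted in the press at the time, not from any official fee schedule.

```python
VISA_FEE = 1_220          # Tier 2 fee for a permit longer than 3 years
SURCHARGE_PER_YEAR = 625  # assumed yearly NHS immigration surcharge
YEARS = 5

total = VISA_FEE + YEARS * SURCHARGE_PER_YEAR
print(total)  # 4345
```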
In short, unless the government implements a special exemption for teachers, which may be a good idea considering the labour shortage in the teaching profession, then yes, they too, like all non-British nationals in the UK, will have to pay for immigration services, and the cost of these applications is not to be underestimated. But it is not a tax on teachers, as the Independent article seems to imply. Rather, it is simply the price tag which comes attached to the UK immigration system, which, after Brexit, will apply to EU and non-EU nationals alike. Tags: Brexit What does immigration policy look like under the newly-elected Conservative government? by Charlotte Rubin Last week's general election means the Conservative Party now has a clear majority in government to fulfil the many promises they made in their manifesto, including major overhauls to immigration policy. Not only did Boris Johnson vow to get Brexit done by the New Year, but his party also plans to put EU nationals on the same level as third party nationals once free movement law ends. This in and of itself is a radical approach to immigration law, and will have major consequences for EU citizens in the UK. After Brexit, once EU nationals are levelled with third party nationals, the Conservatives want to introduce what they call a points-based immigration system, which they claim to model on the Australian visa system. The plan, broadly, is to introduce three visa categories after Brexit, for which anyone who moves to the UK will have to apply, and which replace existing categories. The first is the "Exceptional Talent/Contribution" category, and includes the entrepreneur and investor visa. These visas are geared towards "highly educated migrants who have received world-leading awards or otherwise demonstrated exceptional talent, sponsored entrepreneurs setting up a new business or investors." These people will not require a job offer and will receive fast-track entry to the UK.
This category is not dissimilar from the current Tier 1 visa category, albeit with some minor changes. The second category is for skilled workers, and to some extent, is a rebrand of the current Tier 2 category. The vast majority of these visas would require a job offer, in line with how work visas are allocated to third party nationals now. The skilled workers category is the only way for workers who meet the criteria of the points-based system and have a confirmed job offer to get limited leave to remain. It will effectively require all non-British nationals to prove that they have a job offer as well as reach the amount of points required under the points-based system. Needless to say, implementing this will constitute the most significant change compared to free movement law, which is currently in force, as it requires EU nationals to comply with visa requirements. This will have a massive impact on fields such as hospitality, where EU nationals make up more than half of the workforce, and the NHS. The Conservative party propose to make up for that potential labour shortage by introducing fast-track entry and reduced fees for certain special types of work, such as an NHS-specific visa. The general idea behind a points-based system is that people are scored on their personal attributes such as language skills, education, age and work experience. If their score hits the minimum required, they can acquire a visa. Crucially, there is no one fixed way to score enough points; a plethora of work experience can make up for older age, and excellent language skills might make up for a lack of formal education. As long as an individual's different attributes add up to enough points, they will be granted a visa. The key point about points-based systems is not that they are inherently liberal or progressive; whether a system is liberal will depend on how points are awarded.
Rather, the key feature is their flexibility and the ability to get enough points through any combination of characteristics. That is how the Australian points-based system works. Contrastingly, the UK immigration system today is based on mandatory requirements. This is a system where applicants need to tick all the boxes in order to be granted a visa. For example, an applicant will need to prove their language skills, have a certain amount in savings, show that they have a job offer AND show that they will be making a minimum salary. If the individual lacks one of those requirements, the visa will be refused; it is that simple. The issue with the Tories' proposals is that they want the best of both worlds. They want to introduce points-based characteristics, but keep the mandatory requirement of a job offer, combining mandatory requirements with points-based elements. Essentially, they want a points-based system where, after making the points-based selection, they can cherry-pick who is granted a visa and who is not. As such, although they like to call it a points-based system, it is not really points-based, and it is certainly not as simple or easy to navigate as portrayed by the Party. The third category is the "sector-specific rules-based" category, which will be made up of specific temporary schemes such as for low-skilled labour, youth mobility or short-term visits. These visas will be time-limited and will not provide a path to settlement. They are how the government will attempt to match the demand for workers in specific sectors with enough visas to supply that demand. Supposedly, these visas will replace the free movement of labour with state planning. Deciding which markets need workers will be outsourced from the Home Office to the Migration Advisory Committee (MAC). This means that the MAC would react to gaps in the economy, flag them up, and the government will then create a temporary visa category to fill the gap.
These will be revised on an ongoing basis based on expert advice from the MAC. In other words, the temporary visas will be reactive in nature. If this sounds difficult, that's because it is. The economy adapts to reality more quickly than the law, and new policy takes months, if not years, to come into force. By the time a new visa category actually opens, the gap in the job market it was trying to fill may well have been resolved by market forces. As an attentive reader may notice, the only migrants mentioned in the Conservative policy proposals are economic immigrants. The manifesto does not mention changes to other areas of the current immigration regime. It retains the status quo of Theresa May's controversial hostile environment policies, fails to tackle legal aid cuts, and does not propose any change to the clear human rights violation of indefinite detention, for example. Additionally, the manifesto indicates an attack on judicial review. Since the removal and erosion of appeal rights in the 2014 Immigration Act, judicial review is now often the only recourse to justice for many people who have been wronged by the immigration system. Reforming judicial review, and limiting its scope, removes another layer of checks and balances on Home Office powers, suggesting that not only labour rights, but also human rights, are set to be qualified and watered down after Brexit and once this government starts rolling out policy. Tags: Brexit, Law and Policy A week before the election: Comparing manifestos by Charlotte Rubin When New Labour came to power in 1997, just 3% of the public cited immigration as a key issue. By the time of the EU referendum in 2016, that figure was 48%. As a consequence, migration has become a key issue in political campaigns on all sides of the spectrum.
For years, MPs have relied on strong rhetoric about migration: in setting ambitious "net migration" targets, in instituting the hostile environment and, finally, in their approach to Brexit. In reality, harsh numerical targets have often not been met, and promises have failed to materialise. As evidenced by the three major party manifestos before the election of 12 December, immigration remains a hot topic. To help you make an informed decision, we take a look at the manifestos of the Liberal Democrats, Labour and the ruling Conservative party, and what they intend to do about an immigration system that desperately needs reform. One major issue on which the three parties have outlined a clear and very different strategy is Brexit. The Liberal Democrats, staunch Remainers from the very beginning, still promise that if they are elected, they will revoke Article 50, end Brexit and save freedom of movement for EEA nationals. The Labour Party backs a second referendum, promising that if they win, they will negotiate a new deal within three months, and present it to the people alongside an option to remain in the Union within six months – this time, as a legally binding referendum. The Tories remain committed to Brexit no matter what it may cost and promise to deliver it by January, based on Boris Johnson's deal. In a post-Brexit Britain, the Conservative Party Manifesto sets out that the EU Settlement Scheme (EUSS) will remain as it is, and that in the future EU nationals will be treated exactly the same as other foreign nationals. As such, people coming into the country from the EU will only be able to access unemployment, housing, and child benefits after five years, in the way non-EEA migrants currently do. They will also have to pay an NHS health surcharge to access public health services, the price of which the Tories promise to increase to reflect the full cost of use. The only care that will still be free under a Tory government is emergency care for those in need.
Labour, on the other hand, have a different approach. They propose to end the uncertainty of the EUSS by making it a declaratory scheme instead of an application process. A declaratory scheme would essentially establish that the rights one has now are carried through for one's lifetime. Residents can then obtain physical evidence of their lawful lifetime residence right by asking for it. Lobbying groups such as the3million have endorsed such a declaratory scheme, arguing that it ends the uncertainty of the EUSS, shields against the hostile environment policies, and guarantees favourable treatment of UK citizens living abroad in return. The Liberal Democrats, then, have no proposals in place should Brexit go ahead. Their view is that they will do anything to stop it from happening; even if they do not win the election, the party says they will back a second referendum and campaign to remain. On immigration policy, both Labour and the Liberal Democrats promise to end the hostile environment, decriminalise illegal working, and end indefinite detention. The Liberal Democrats openly advocate for a 28-day time limit on detention, and for any decision to detain an individual for longer than 72 hours to be approved by the courts. This position was recommended to Parliament by the Joint Committee on Human Rights in their 16th report of the 2017-2019 session. Additionally, the LibDems want to close seven of the nine detention centres currently open in the UK, whereas Labour promises to close two of them, and to use the immediate savings towards a fund of £20 million to support the survivors of modern slavery, human trafficking and domestic violence. All parties promise support for victims of the Windrush scandal, with the Conservative party offering to build a memorial for the Windrush generation.
In the same symbolic vein, the Tories have moved away from their rhetoric of "reducing net migration", although their manifesto still states that they will "keep the numbers down." They propose to do this by instituting a points-based system not unlike the one in Australia. The points-based system would be based on three pillars: education, English language skills, and criminality. The Tories promise to make decisions on who comes to this country on the basis of the skills they have and the contribution they can make to the country – not where they come from. The visa system, under the points-based system, would be rebooted, with many old visa routes being brought back to life, such as the post-study visa extension, the NHS visa, and the new start-up visa. The Tories also promise entry and exit checks, emphasising that the British people will be able to take back control of their borders. The Liberal Democrats propose the most radical reforms to the immigration system as a whole. Not only do they promise to break down existing barriers as well as add new routes to permanent status – they also propose to remove the immigration exemption from the Data Protection Act as well as separate enforcement and border control from decision-making. The former measure protects data privacy by establishing a firewall to prevent public agencies from sharing personal information with the Home Office for the purposes of immigration enforcement. The latter would prevent perverse factors from playing a role in decision-making by taking policymaking out of the Home Office altogether. Instead, the Liberal Democrats want to establish a new arm's-length, non-political agency to take over the processing of applications, thus increasing the separation of powers. As such, they would move policymaking on work permits and student visas out of the Home Office and into the Departments for Business and Education respectively.
They would also move asylum policymaking from the Home Office to the Department for International Development and establish a dedicated unit to improve the speed and quality of decision-making. This may seem like a welcome development for those who have said that the Home Office needs to change its approach to asylum from the ground up, but the Institute for Government report was equivocal about the benefits of such separation. It could muddy accountability by splitting up decision-making, and case management could be difficult where individuals and families do not fit neatly into one category. And finally, the Liberal Democrats, like Labour, will seek to reduce the fee for registering a child as a British citizen from £1,012 to the cost of administration – something that we've advocated for ourselves. Labour, then, says the Tories have required landlords, teachers and medical staff to work as unpaid immigration officers by creating a hostile environment instead of setting up effective border controls. A Labour government will therefore review the border controls to make them more effective. They also promise to scrap the 2014 Immigration Act passed by the then-Conservative government, restore legal aid cuts, end the deportation of family members of people entitled to be here, and end the minimum income requirements which separate families. They focus on cooperation with Europe, and especially France, to resume rescue missions in the Mediterranean and end the horrific camps and homelessness which the current immigration regime has led to. Similarly to the Liberal Democrats, Labour will allow asylum seekers to work whilst awaiting a decision on their status, and decriminalise illegal working. All three parties claim to be advocating for humane, fair and compassionate immigration regimes. It is now up to the voters to show whose programme is most convincing. Tags: Election 2019
Q: Bulk creation of objects that contain multiple foreign keys in Django

I have the following models in my Django app:

    class Ratings(models.Model):
        field1 = models.ForeignKey(Table1)
        field2 = models.ForeignKey(Table2)
        field3 = models.ForeignKey(Table3)
        field4 = models.ForeignKey(Table4)
        field5 = models.ForeignKey(Table5)

    class Values(models.Model):
        grade = models.ForeignKey(Grade)
        rating = models.ForeignKey(Ratings, related_name='ratings')
        value = models.TextField(blank=True, null=True)

I am using the following code to create instances of the Ratings model:

    rating_obj = Ratings(
        field1_id=id1,
        field2_id=id2,
        field3_id=id3,
        field4_id=id4,
        field5_id=id5
    )
    rating_obj.save()

The above method works, but it is way too slow. I have to create around 30 instances of this model. I can't use bulk_create as I need the reference to rating_obj for creating Values objects. I have already tried using raw SQL for insertion, but to no avail. Can someone suggest a better way of achieving the same?

P.S. Table1, Table2, etc. already have db_index=True on their primary keys.

A: How about multi-threading using threading? This will create the Ratings objects concurrently.

    import threading

    def createRatings(id1, id2, id3, id4, id5):
        return Ratings(
            field1_id=id1,
            field2_id=id2,
            field3_id=id3,
            field4_id=id4,
            field5_id=id5
        ).save()

    for ids in idsList:
        threading.Thread(
            target=createRatings,
            args=(ids[0], ids[1], ids[2], ids[3], ids[4])
        ).start()

Source: https://pymotw.com/2/threading/

A: @Cole I could not try your answer. What worked for me was @transaction.atomic at the top of the function. It disables any savepoints in between operations and delays the commit to the end of the function. This allows for faster performance if you have a lot of operations in your view.
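Since the questioner is open to raw SQL, one further option is worth sketching: do all the parent inserts inside a single transaction (one commit instead of thirty), collect the primary keys as they are generated, and batch the dependent rows in one statement. The sketch below uses the standard-library sqlite3 module rather than the Django ORM, and the table and column names are invented for illustration. In Django itself the analogue would be wrapping the loop in transaction.atomic and using Values.objects.bulk_create for the child rows; note also that on PostgreSQL, bulk_create sets primary keys on the objects it returns, which removes the original obstacle entirely.

```python
import sqlite3

def create_ratings_with_values(conn, ratings_rows, grades):
    """Insert all parent rows in one transaction, keep their new primary
    keys, then batch-insert the dependent rows with executemany."""
    cur = conn.cursor()
    rating_ids = []
    for row in ratings_rows:
        cur.execute(
            "INSERT INTO ratings (f1, f2, f3, f4, f5) VALUES (?, ?, ?, ?, ?)",
            row,
        )
        rating_ids.append(cur.lastrowid)  # PK is known right after the insert
    # One child row per (grade, rating) pair, inserted in a single batch.
    value_rows = [(grade, rid, "") for rid in rating_ids for grade in grades]
    cur.executemany(
        "INSERT INTO ratings_values (grade, rating_id, value) VALUES (?, ?, ?)",
        value_rows,
    )
    conn.commit()  # a single commit instead of one per object
    return rating_ids

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ratings (id INTEGER PRIMARY KEY, f1, f2, f3, f4, f5)")
conn.execute(
    "CREATE TABLE ratings_values "
    "(id INTEGER PRIMARY KEY, grade, rating_id, value)"
)
ids = create_ratings_with_values(conn, [(1, 2, 3, 4, 5)] * 30, grades=[1, 2, 3])
print(len(ids))  # 30
```

The speed-up comes from the same place as @transaction.atomic in the accepted workaround: the per-object commit, not the INSERT itself, is what dominates the runtime.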
Leray degeneration for smooth projective morphisms and formality of families of compact Kähler manifolds

Let $\pi \colon X \to S$ be a smooth projective morphism of algebraic varieties, say over $\mathbf C$. By Deligne's argument ("Théorème de Lefschetz...", 1968), there is for each $i$ an injection $$\newcommand{\Q}{\mathbf{Q}} R^i \pi_\ast\Q[-i] \hookrightarrow R\pi_\ast\Q,$$ such that the direct sum of all these gives a quasi-isomorphism between $R\pi_\ast\Q$ and its cohomology.

Question: Can this construction be made compatible with cup-product? That is, can one choose these injections so that the diagram $$\begin{matrix} R^i\pi_\ast\Q[-i] \otimes R^j\pi_\ast\Q[-j] & \to & R^{i+j}\pi_\ast\Q[-i-j] \\ \downarrow & & \downarrow \\ R\pi_\ast\Q \otimes R\pi_\ast\Q & \to & R\pi_\ast\Q \end{matrix}$$ commutes? If not, can one write down an obstruction?

The question is motivated by the fact that the fibers are compact Kähler manifolds, hence formal by Deligne-Griffiths-Morgan-Sullivan. So on each fiber $X_s$, we have a quasi-isomorphism with the cohomology when both are considered as dg algebras. Hence an affirmative answer would be a version of DGMS's result which is moreover valid in families.

Comments:

In general, it is not true that this construction can be made compatible with cup products. I had thought about this question many years ago and, if I remember correctly, there are obstructions even when the relative dimension of $\pi$ is one. See the recent paper of Voisin "Chow rings and decomposition theorems..." for more information. – ulrich, Sep 8 '12

@ulrich Thanks for the reference! Voisin's paper is really interesting and settles the question authoritatively. If you re-post your comment as an answer I'll accept it. – Dan Petersen, Sep 8 '12
{"url":"http:\/\/www.imomath.com\/index.php?options=320&lmm=0","text":"# Number Theory\n\n 1. (2 p.) The square $$\\begin{array}{|c|c|c|} \\hline x&20&151 \\\\\\hline 38 & & \\\\ \\hline & & \\\\ \\hline\\end{array}$$ is magic, i.e. in each cell there is a number so that the sums of each row and column and of the two main diagonals are all equal. Find $$x$$.\n\n 2. (12 p.) Find the sum of all positive integers of the form $$n = 2^a3^b$$ $$(a, b \\geq 0)$$ such that $$n^6$$ does not divide $$6^n$$.\n\n 3. (29 p.) Let $$\\tau (n)$$ denote the number of positive divisors of $$n$$, including 1 and $$n$$. Define $$S(n)$$ by $$S(n)=\\tau(1)+ \\tau(2) + \\dots + \\tau(n)$$. Let $$a$$ denote the number of positive integers $$n \\leq 2008$$ with $$S(n)$$ odd, and let $$b$$ denote the number of positive integers $$n \\leq 2008$$ with $$S(n)$$ even. Find $$|a-b|$$.\n\n 4. (36 p.) Let $$a,b,c$$ and $$d$$ be positive real numbers such that $$a^2+b^2-c^2-d^2=0$$ and $$a^2-b^2-c^2+d^2=\\frac {56}{53}(bc+ad)$$, Let $$M$$ be the maximum possible value of $$\\frac {ab+cd}{bc+ad}$$ ,If $$M$$ can be expressed as $$\\frac {m}{n}$$,$$(m,n)=1$$ then find $$100m+n$$\n\n 5. (19 p.) Let $$0 < a < b < c < d$$ be integers such that $$a$$, $$b$$, $$c$$ is an arithmetic progression, $$b$$, $$c$$, $$d$$ is a geometric progression, and $$d - a = 30$$. 
Find $$a + b + c + d$$.\n\n2005-2018 IMOmath.com | imomath\"at\"gmail.com | Math rendered by MathJax","date":"2018-05-20 10:18:24","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9775792956352234, \"perplexity\": 88.75756145469798}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-22\/segments\/1526794863277.18\/warc\/CC-MAIN-20180520092830-20180520112830-00023.warc.gz\"}"}
Q: jQuery: Subtract value from another value?

I'm trying to subtract numeric values from a total value using jQuery. However, when I subtract numbers I get a - before my final value! So, let's say I have 300 as total and I take 4.99 from it, I get a number like this: -296 instead of 296. I have created this JSFIDDLE so you can see the issue. Could someone please advise on this issue?

A: Your equation is incorrect. You need to update your equation to the following:

    var finaltotal23 = parseInt(inputval23) - parseInt(price23);
    // inputVal23 is the value of the input element, i.e. 300
    // price23 is the data attribute of the button, i.e. 4.99

Please find the updated fiddle at http://jsfiddle.net/57k08t7p/1/
{ "redpajama_set_name": "RedPajamaStackExchange" }
7,173
""" Define a region of interest (ROI) in color, magnitude, and direction space. The ROI is divided into 3 regions: 1) The 'target' region: The region occuping the likelihood-scale healpix pixel over which the likelihood is evaluated. (Size controlled by 'nside_likelihood') 2) The 'interior' region: The region where objects are included into the likelihood fit. 3) The 'annulus' region: The region where the background is fit. """ import numpy as np import healpy as hp import ugali.utils.binning import ugali.utils.projector import ugali.utils.skymap from ugali.utils.logger import logger from ugali.utils.config import Config from ugali.utils.healpix import query_disc, ang2pix, pix2ang, ang2vec ############################################################ # ADW: Should really write some "PixelSet" object that contains the pixels for each region... class PixelRegion(np.ndarray): # https://docs.scipy.org/doc/numpy-1.13.0/user/basics.subclassing.html def __new__(cls, nside, pixels): # Input array is an already formed ndarray instance # We first cast to be our class type obj = np.asarray(pixels).view(cls) # add the new attribute to the created instance obj._nside = nside obj._pix = pixels obj._lon,obj._lat = pix2ang(nside,pixels) # Finally, we must return the newly created object: return obj def __array_finalize__(self, obj): # see InfoArray.__array_finalize__ for comments if obj is None: return # NOTE: the _nside, _pix etc. 
attributes don't get set on # slicing because we are worried it will be too slow @property def lon(self): return self._lon @property def lat(self): return self._lat @property def nside(self): return self._nside @property def pix(self): return self._pix class ROI(object): def __init__(self, config, lon, lat): self.config = Config(config) self.lon = lon self.lat = lat self.projector = ugali.utils.projector.Projector(self.lon, self.lat) self.vec = vec = ang2vec(self.lon, self.lat) self.pix = ang2pix(self.config['coords']['nside_likelihood'],self.lon,self.lat) # Pixels from the entire ROI disk pix = query_disc(self.config['coords']['nside_pixel'], vec, self.config['coords']['roi_radius']) self.pixels = PixelRegion(self.config['coords']['nside_pixel'],pix) # Pixels in the interior region pix = query_disc(self.config['coords']['nside_pixel'], vec, self.config['coords']['roi_radius_interior']) self.pixels_interior = PixelRegion(self.config['coords']['nside_pixel'],pix) # Pixels in the outer annulus pix = query_disc(self.config['coords']['nside_pixel'], vec, self.config['coords']['roi_radius_annulus']) pix = np.setdiff1d(self.pixels, pix) self.pixels_annulus = PixelRegion(self.config['coords']['nside_pixel'],pix) # Pixels within target healpix region pix = ugali.utils.skymap.subpixel(self.pix,self.config['coords']['nside_likelihood'], self.config['coords']['nside_pixel']) self.pixels_target = PixelRegion(self.config['coords']['nside_pixel'],pix) # Boolean arrays for selecting given pixels # (Careful, this works because pixels are pre-sorted by query_disc before in1d) self.pixel_interior_cut = np.in1d(self.pixels, self.pixels_interior) self.pixel_annulus_cut = np.in1d(self.pixels, self.pixels_annulus) # Some pixel properties self.area_pixel = hp.nside2pixarea(self.config.params['coords']['nside_pixel'],degrees=True) # deg^2 self.max_pixrad = np.degrees(hp.max_pixrad(self.config['coords']['nside_pixel'])) # deg # ADW: These are really bin edges, should be careful and 
consistent # It would be cleaner to separate the CMD from ROI self.bins_mag = np.linspace(self.config.params['mag']['min'], self.config.params['mag']['max'], self.config.params['mag']['n_bins'] + 1) self.bins_color = np.linspace(self.config.params['color']['min'], self.config.params['color']['max'], self.config.params['color']['n_bins'] + 1) self.centers_mag = ugali.utils.binning.centers(self.bins_mag) self.centers_color = ugali.utils.binning.centers(self.bins_color) self.delta_mag = self.bins_mag[1] - self.bins_mag[0] self.delta_color = self.bins_color[1] - self.bins_color[0] def plot(self, value=None, pixel=None): """ Plot the ROI """ # DEPRECATED: ADW 2021-07-15 DeprecationWarning("'roi.plot' is deprecated and will be removed.") import ugali.utils.plotting map_roi = np.array(hp.UNSEEN \ * np.ones(hp.nside2npix(self.config.params['coords']['nside_pixel']))) if value is None: #map_roi[self.pixels] = ugali.utils.projector.angsep(self.lon, self.lat, self.centers_lon, self.centers_lat) map_roi[self.pixels] = 1 map_roi[self.pixels_annulus] = 0 map_roi[self.pixels_target] = 2 elif value is not None and pixel is None: map_roi[self.pixels] = value elif value is not None and pixel is not None: map_roi[pixel] = value else: logger.error("Can't parse input") ugali.utils.plotting.zoomedHealpixMap('Region of Interest', map_roi, self.lon, self.lat, self.config.params['coords']['roi_radius']) # ADW: Maybe these should be associated with the PixelRegion objects def inPixels(self,lon,lat,pixels): """ Function for testing if coordintes in set of ROI pixels. 
""" nside = self.config.params['coords']['nside_pixel'] return ugali.utils.healpix.in_pixels(lon,lat,pixels,nside) def inROI(self,lon,lat): return self.inPixels(lon,lat,self.pixels) def inAnnulus(self,lon,lat): return self.inPixels(lon,lat,self.pixels_annulus) def inInterior(self,lon,lat): return self.inPixels(lon,lat,self.pixels_interior) def inTarget(self,lon,lat): return self.inPixels(lon,lat,self.pixels_target) def indexPixels(self,lon,lat,pixels): nside = self.config.params['coords']['nside_pixel'] return ugali.utils.healpix.index_pixels(lon,lat,pixels,nside) def indexROI(self,lon,lat): return self.indexPixels(lon,lat,self.pixels) def indexAnnulus(self,lon,lat): return self.indexPixels(lon,lat,self.pixels_annulus) def indexInterior(self,lon,lat): return self.indexPixels(lon,lat,self.pixels_interior) def indexTarget(self,lon,lat): return self.indexPixels(lon,lat,self.pixels_target) def getCatalogPixels(self): """ Return the catalog pixels spanned by this ROI. """ filenames = self.config.getFilenames() nside_catalog = self.config.params['coords']['nside_catalog'] nside_pixel = self.config.params['coords']['nside_pixel'] # All possible catalog pixels spanned by the ROI superpix = ugali.utils.skymap.superpixel(self.pixels,nside_pixel,nside_catalog) superpix = np.unique(superpix) # Only catalog pixels that exist in catalog files pixels = np.intersect1d(superpix, filenames['pix'].compressed()) return pixels ############################################################
{ "redpajama_set_name": "RedPajamaGithub" }
360
Stop what you're doing and go see 'Annihilation'

Images courtesy Paramount Pictures.

10/10

Annihilation is a soaring triumph of science-fiction adventure. It is jaw-clenching, deliriously beautiful and overwhelmingly weird.

Ex-military cellular biology teacher Lena (Natalie Portman) is in mourning for her husband Kane (Oscar Isaac), who has been away for a full year on a classified mission and is presumed dead. He suddenly reappears in their home, but something is clearly wrong, and the couple is apprehended by a security force. Lena learns that her husband was inside the Shimmer, an alien field situated in Blackwater National Park in the Florida Panhandle. The field has been expanding for three years and will soon envelop major cities. Kane, now barely clinging to life, is the only thing that has ever entered it and come back. Desperate to save her husband, Lena joins an expedition to find the field's source. As the group goes deeper into the Shimmer, they also go deeper into themselves. The all-women expedition, which is the first to be comprised of scientists instead of soldiers, is on a suicide mission, and each of them has a reason they don't mind dying. The film is framed as Lena's debriefing after she returns to the Southern Reach.

Annihilation is maybe one of the most intense films ever made from start to completely unforgettable finish, both a deeply satisfying escapist thriller and a disturbing character study. Where so much of modern escapist cinema makes fantastic technology and apocalyptic power feel like nothing out of the ordinary — Marvel! — Annihilation makes every scene compelling and dreadful, whether it's a simple slasher scene with a mutated bear or something as mundane as watching cells divide through a microscope. Writer/director Alex Garland draws absolutely as much as he can out of every scene.
The film is shockingly well-shot and detail-oriented, to the point that a shot of Lena taking a sip of water brings you to the edge of your seat. Annihilation is beautiful to the point that it would be an incredible experience to watch with earplugs in, but the sound is so dominant that it would be equally astonishing through a blindfold — especially in the film's electrifying climax.

Like Garland's previous feature, Ex Machina, Annihilation is saturated with thematic imagery — in this case, the image of a cell. Its menagerie of themes is introduced softly, dancing on the back of viewers' minds: cancer and other cell mutations, infidelity, suicide, duality and replication. There's no treatise or statement on what it all means, just repetition and association with a creeping Lovecraftian horror as viewers, like Lena and her team, come up with more questions than answers.

This lack of clarity is probably the reason the film didn't do well in a test screening last year, which led Paramount to an in-studio conflict that was resolved with an awful compromise. Reportedly, David Ellison wanted the masterful ending re-shot, but Scott Rudin, who held final cut, wouldn't budge. What eventually ended up happening was Paramount selling the international distribution rights to Netflix and only putting Annihilation into theaters in North America and China. While the first-of-its-kind release pattern is fascinating and deserves emotionally detached study, it's a travesty for this film. This film was designed to be seen in one uninterrupted sitting on the big screen in true surround sound. It will surely still be compelling on Netflix, but that's just not the end it deserves to come to.

Annihilation is a stunning sensory experience and it is an exclusive privilege to see it in theaters. Go see it as many times as you can before it cycles out — which it will quickly.

Leopold Knopp is a UNT graduate and managing editor of The Lewisville Texan Journal.
If you liked this post, you can donate to Reel Entropy here. Like Reel Entropy on Facebook and reach out to me at reelentropy@gmail.com.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
3,353
It's true, we're lucky. Monguelfo is near Plan de Corones, Val di Casies is our neighbour and Lake Braies is just a stone's throw away. In summer the Dolomites offer spectacular scenery and for those of you who appreciate beauty and respect it, Christof welcomes you! Whether you like it slow, medium or fast, we've got you covered... walk, cycle or borrow one of our E-bikes. The important thing is to have fun. And you will! CHRISTOF'S GOT ALL THE ANSWERS.
{ "redpajama_set_name": "RedPajamaC4" }
5,321
package ca.ualberta.cs.travelexpenses;

import java.util.ArrayList;
import java.util.Collection;

public class ExpenseList {

    protected ArrayList<Expenses> expenseList;

    public ExpenseList() {
        expenseList = new ArrayList<Expenses>();
    }

    public Collection<Expenses> getExpense() {
        return expenseList;
    }

    public void addExpense(Expenses expenses) {
        expenseList.add(expenses);
    }

    public void deleteExpense(Expenses expenses) {
        expenseList.remove(expenses);
    }

    public boolean contains(Expenses expenses) {
        return expenseList.contains(expenses);
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
1,217
{"url":"https:\/\/blog.math.toronto.edu\/GraduateBlog\/2012\/02\/27\/departmental-phd-thesis-exam-david-li-bland\/","text":"## DEPARTMENTAL PHD THESIS EXAM \u2013 David Li-Bland\n\nMonday, March 5, 2012, 1:10 p.m.,\u00a0 2:10 p.m., in BA 6183, 40 St. George Street\n\nPhD Candidate: David Li-Bland\n\nThesis Title: $\\mathcal{LA}$-Courant Algebroids and their Applications\n(http:\/\/www.math.toronto.edu\/dbland\/thesis.html)\n\nAbstract:\n\nIn this thesis we develop the notion of $\\mathcal{LA}$-Courant algebroids, the infinitesimal analogue of multiplicative Courant algebroids. Specific applications include the integration of $q$-Poisson $(\\mathfrak{d},\\mathfrak{g})$-structures, and the reduction of Courant algebroids. We also introduce the notion of pseudo-Dirac structures, (possibly non-Lagrangian) subbundles $W\\subseteq {\\mathbb{E}}$ of a Courant algebroid such that the Courant bracket endows $W$ naturally with the structure of a Lie algebroid. Specific examples of pseudo-Dirac structures arise in the theory of $q$-Poisson $(\\mathfrak{d},\\mathfrak{g})$-structures.\n\n\u2014\u2013\nEveryone welcome. 
Coffee will be served in the math lounge before the exam.","date":"2022-05-24 12:49:01","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.816988468170166, \"perplexity\": 1968.905556220231}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-21\/segments\/1652662572800.59\/warc\/CC-MAIN-20220524110236-20220524140236-00134.warc.gz\"}"}
\section{Introduction}\label{sec:intro} The Quantum Approximate Optimization Algorithm (QAOA)~\cite{QAOA_2014} is designed to find good approximate solutions to the problem of maximizing a cost function defined on bit strings. It can be viewed as a sequence of quantum walks on the binary hypercube interspersed with cost function-dependent unitaries, with the goal of improving the cost function beyond its expected value in the initial state. The standard QAOA starts in the quantum state which is the uniform superposition of all classical bit strings. It is natural to imagine running a classical algorithm to produce a good string, then initializing the QAOA in the associated computational basis state and looking for improvement. Variations of this idea have been explored in existing literature under the name \emph{warm-start QAOA}. First we perform numerical experiments on the QAOA, focusing on two optimization problems: MaxCut and Maximum Independent Set. In every instance we examine, when the QAOA is initialized with a good classical string, there is little to no improvement of the cost function from its original value. The consistency of our results suggests there is an underlying theoretical reason why the QAOA with a warm start gets stuck. This paper aims to explain this non-intuitive phenomenon. In the special case of low-depth QAOA at small angles, we show analytically why most strings have zero improvement. We also give a more general statistical argument for why these warm starts see little improvement under constant-depth QAOA. The statistical argument relates the local distribution of a warm-start string to the corresponding thermal distribution, and shows that the more ``locally thermal'' a bit string appears, the less improvement the string sees under the QAOA. We argue and then numerically confirm that strings generated by simulated annealing are locally thermal. 
We also study small instances of the Sherrington-Kirkpatrick model and again see that warm starts generically get stuck. Here the relation to our statistical arguments is less clear. Our results apply to what we refer to as the standard QAOA, where the unitary operators comprising the QAOA do not depend explicitly on the initial string, though the parameters do. We also restrict our focus to warm starts consisting of a single classical string, rather than using more general states. This contrasts with the two approaches to warm starts in \cite{Egger_2021}. In the first approach, the initial state is a product state encoding an unrounded semidefinite programming relaxation. Then, the QAOA unitaries are modified so that the warm-start QAOA converges to the optimal solution as the depth approaches infinity, as does the standard QAOA. In the second approach, the initial state is a classical string representing the rounded semidefinite program relaxation, and the unitaries are tailored to the initial state, though it is not guaranteed to converge to the solution as the depth approaches infinity. Another study~\cite{Tate_2020} also encodes a relaxed semidefinite program in the initial state, but does not modify the standard QAOA unitaries. Our results that a single-string warm start will get stuck on the standard QAOA do not necessarily apply to these alternative, and perhaps more interesting, approaches. The remainder of the paper is organized as follows. In Section~\ref{sec:definitions}, we review the relevant definitions for the standard QAOA and formally define what we call the single-string warm-start QAOA. In Section~\ref{sec:num}, we present our numerical evidence for the performance of this warm-start QAOA on the MaxCut problem. We numerically analyze the problem both on small graphs ($12$ vertices) at high depths and also large graphs ($300$ vertices) at low depths. 
In Section~\ref{sec:small_angles} we present a necessary and sufficient condition for the improvement of the $3$-parameter warm-start QAOA when restricted to small angles $\beta_1,\beta_2\ll 1$. In Section~\ref{sec:thermal_1} we present a statistical argument that a uniformly chosen random string cannot be improved by the warm-start QAOA. In Section~\ref{sec:thermal_2} we specialize the statistical argument to any warm string that satisfies a ``local thermality'' property about the distribution of substrings on graph neighborhoods. We then present heuristics and numerical evidence that this local thermality holds in the case of MaxCut. In Section~\ref{sec:uniform} we comment on what happens when the QAOA is warm-started not on a single string, but on a superposition of strings with good cost value, and see that the superposition is unhelpful. In Sections~\ref{sec:MIS} and~\ref{sec:SK} we present an analysis of our warm-start QAOA as applied to the Maximum Independent Set and Sherrington-Kirkpatrick problems, respectively. In these sections, we give additional numerical evidence that our warm-start QAOA also generically fails to improve solutions for these two problems. Finally, in Section~\ref{sec:conclusion} we conclude with a summary of the results and a discussion. \section{Standard and single-string warm-start QAOA}\label{sec:definitions} \subsection{A quick review of the standard QAOA} The QAOA is designed to search for approximate solutions to combinatorial optimization problems. We focus on graph-based problems where the cost function can be written as \begin{equation} \label{eq:cost_function} C(z) = \sum_{\langle i, j\rangle\in E}C_{\langle i, j\rangle}(z_i, z_j), \end{equation} where $\langle i, j \rangle$ ranges over all edges of a graph $G = (V, E)$. We primarily restrict ourselves to the case where $C_{\langle i, j \rangle}$ is the same function for all $\langle i, j \rangle$. 
This includes problems such as unweighted MaxCut and, with a suitable cost function, Maximum Independent Set on $d$-regular graphs. (The Sherrington-Kirkpatrick model discussed in Section~\ref{sec:SK} is an exception.) Usually, the standard QAOA starts on the equal superposition of all bit strings on $|V|=n$ bits, \begin{equation} \ket{s} = \frac{1}{\sqrt{2^n}}\sum_z \ket{z_1 \dots z_n}. \end{equation} The QAOA applies a sequence of $2p$ unitaries depending on the $2p$ parameters $\boldsymbol{\gamma}$ and $\boldsymbol{\beta}$, producing \begin{equation} \ket{\boldsymbol{\gamma}, \boldsymbol{\beta}} = e^{-i\beta_p B} e^{-i\gamma_p C}\dots e^{-i\beta_1 B}e^{-i\gamma_1 C} \ket{s},\label{eq:standard_QAOA_unitary} \end{equation} where $B$ is the sum of the single-qubit $X$ operators, \begin{equation} B = \sum_{i}X_i. \end{equation} The goal is to find the parameters $\boldsymbol{\gamma}, \boldsymbol{\beta}$ to make $\bra{\boldsymbol{\gamma}, \boldsymbol{\beta}} C\ket{\boldsymbol{\gamma}, \boldsymbol{\beta}}$ as large as possible. In practice, as $p$ is increased, the optimized $\bra{\boldsymbol{\gamma}, \boldsymbol{\beta}} C\ket{\boldsymbol{\gamma}, \boldsymbol{\beta}}$ increases compared to its initial value of $\bra{s}C\ket{s}$, which is the mean value of $C(z)$ over all bit strings $z$. In this paper we focus mainly on MaxCut. The MaxCut cost function operator is \begin{align} C_\mathrm{MC}=\frac{1}{2}\sum_{\langle i,j\rangle \in E}(1-Z_i Z_j), \end{align} which counts the number of cut edges. The cut fraction is defined as the number of cut edges divided by the total number of edges. It will often be convenient to consider the simpler cost function operator defined by \begin{align} C_Z = -\sum_{\langle i,j\rangle \in E}Z_i Z_j. \end{align} The MaxCut cost can be written in terms of $C_Z$ as \begin{align} C_\mathrm{MC} = \frac{1}{2}(m + C_Z), \end{align} where $m=|E|$ is the number of edges in the graph. 
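As a quick illustrative aside (our own sketch, not code from the paper), the identity $C_\mathrm{MC} = \frac{1}{2}(m + C_Z)$ can be checked by brute force on a small graph; all function names below are ours:

```python
import itertools

def maxcut_cost(edges, z):
    """C_MC(z): the number of cut edges, for spins z_i in {+1, -1}."""
    return sum((1 - z[i] * z[j]) // 2 for i, j in edges)

def cz_cost(edges, z):
    """C_Z(z) = -sum over edges <i,j> of z_i * z_j."""
    return -sum(z[i] * z[j] for i, j in edges)

# Triangle graph: every string is checked against C_MC = (m + C_Z)/2.
edges = [(0, 1), (1, 2), (0, 2)]
m = len(edges)
for z in itertools.product([+1, -1], repeat=3):
    assert maxcut_cost(edges, z) == (m + cz_cost(edges, z)) // 2

best = max(maxcut_cost(edges, z) for z in itertools.product([+1, -1], repeat=3))
print(best)  # the maximum cut of a triangle is 2 of its 3 edges
```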
The aim is to maximize $C_\mathrm{MC}$, which is equivalent to maximizing $C_Z$. Note that when $C_Z$ is positive, the cut fraction is larger than~$1/2$. \subsection{The standard QAOA starting from a good classical string} In this paper, we focus on the standard QAOA starting at a computational basis state $\ket{w}$, where $w$ is a good string produced by running a classical algorithm. Note that when we start in $\ket{w}$, the first unitary $e^{-i\gamma_1 C}$ in equation~\eqref{eq:standard_QAOA_unitary} only introduces a phase, so effectively the state becomes \begin{equation} \ket{\boldsymbol{\gamma}, \boldsymbol{\beta}, w} = U(\boldsymbol{\gamma},\boldsymbol{\beta})|w\rangle = e^{-i\beta_k B} e^{-i\gamma_{k-1} C}\dots e^{-i\gamma_1 C}e^{-i\beta_1 B} \ket{w},\label{eq:warm_start_QAOA_unitary} \end{equation} which depends on the parameters $\beta_1,\dots, \beta_k$ and $\gamma_1,\dots, \gamma_{k-1}$. Note that the number of parameters is now $2k-1$, which is odd. In keeping with the convention of the standard QAOA, where $2p$ is the number of parameters, we define the QAOA starting on a classical string with a fractional value of $p$ so that $2p = 2k-1$. In this case, $p$ takes on values $p= 1/2,\ 3/2,\ 5/2,\ \cdots$. For example, at $p=3/2$ we have the three-parameter unitary producing the state \begin{equation} \ket{\boldsymbol{\gamma}, \boldsymbol{\beta}, w} = e^{-i\beta_2 B} e^{-i\gamma_1 C}e^{-i\beta_1 B} \ket{w}. \end{equation} Given $w$, the goal is to maximize $\bra{\boldsymbol{\gamma}, \boldsymbol{\beta}, w} C\ket{\boldsymbol{\gamma}, \boldsymbol{\beta}, w}$. One might hope that by starting in a good classical string and finding optimal parameters, $\bra{\boldsymbol{\gamma}, \boldsymbol{\beta}, w} C\ket{\boldsymbol{\gamma}, \boldsymbol{\beta}, w}$ will be larger than $\bra{w} C\ket{w}$. Note that after optimizing, the parameters in equation~\eqref{eq:warm_start_QAOA_unitary} depend on $w$.
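For small instances, the three-parameter ($p=3/2$) warm-start circuit can be simulated directly with a dense statevector. The following numpy sketch is our own illustration (all names are assumptions, not from the paper), using that $e^{-i\beta B}$ factors into single-qubit $X$-rotations and that $e^{-i\gamma C}$ is diagonal:

```python
import numpy as np

def apply_rx_all(state, beta, n):
    """Apply e^{-i beta B} = prod_i e^{-i beta X_i} to an n-qubit statevector."""
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])
    for q in range(n):
        psi = state.reshape(2**q, 2, 2**(n - q - 1))
        state = np.einsum('ab,ibj->iaj', rx, psi).reshape(-1)
    return state

def cost_diagonal(edges, n):
    """C_MC as a diagonal over all 2^n strings (bit 0 = most significant)."""
    c = np.zeros(2**n)
    for idx in range(2**n):
        bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
        c[idx] = sum(bits[i] != bits[j] for i, j in edges)
    return c

def warm_start_expectation(edges, n, w_index, beta1, gamma1, beta2):
    """Expected cut value of e^{-i beta2 B} e^{-i gamma1 C} e^{-i beta1 B} |w>."""
    c = cost_diagonal(edges, n)
    state = np.zeros(2**n, dtype=complex)
    state[w_index] = 1.0                      # computational basis state |w>
    state = apply_rx_all(state, beta1, n)     # e^{-i beta1 B}
    state = state * np.exp(-1j * gamma1 * c)  # e^{-i gamma1 C} is diagonal
    state = apply_rx_all(state, beta2, n)     # e^{-i beta2 B}
    return float(np.real(np.vdot(state, c * state)))

# Sanity check: at beta1 = gamma1 = beta2 = 0 the circuit is the identity,
# so the expectation equals the cut value of w itself.
triangle = [(0, 1), (1, 2), (0, 2)]
print(warm_start_expectation(triangle, 3, 0b001, 0.0, 0.0, 0.0))  # 2.0
```

Scanning `(beta1, gamma1, beta2)` with any off-the-shelf optimizer then reproduces, in miniature, the kind of warm-start experiment described in the numerical sections.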
One way to see this is to imagine variationally optimizing the QAOA: first, pick initial parameters $\boldsymbol{\gamma}, \boldsymbol{\beta}$, then run the quantum circuit to produce the state~\eqref{eq:warm_start_QAOA_unitary}, and then evaluate the cost function~\eqref{eq:cost_function} on this state. Next, change the parameters and repeat this process, attempting to improve. As a result, the optimized parameters depend on the initial choice of $w$. However, note that we are not modifying the operators $B$ and $C$ to depend on $w$. At a fixed depth $p$ let us define for each string $w$ the optimal parameters \begin{align} (\boldsymbol{\gamma}_w,\boldsymbol{\beta}_w) = \argmax_{(\boldsymbol{\gamma},\boldsymbol{\beta})}\,\langle w|U^\dagger(\boldsymbol{\gamma},\boldsymbol{\beta})CU(\boldsymbol{\gamma},\boldsymbol{\beta})|w\rangle. \end{align} We then define \begin{align} \label{eq:U_w} U_w = U(\boldsymbol{\gamma}_w,\boldsymbol{\beta}_w) \end{align} as the optimal QAOA unitary for the string $w$. \subsection{Cost function as a sum over neighborhoods} \label{subsec:nbhd_sums} We now review how the expected cost function may be organized as a sum over subgraphs, as in~\cite{QAOA_2014}. This analysis will be instructive for classically simulating the QAOA on large systems at constant depth in Section~\ref{subsec:large_num}. It will also be critical for the statistical argument in Section~\ref{sec:thermal}. We consider the warm-start QAOA with initial classical string $w$ and expected cost \begin{align} \langle w|U^\dagger C U |w\rangle =\sum_{\langle i,j\rangle \in E}\langle w|U^\dagger C_{\langle i,j\rangle}U|w \rangle \end{align} for some QAOA unitary $U=U(\boldsymbol{\gamma},\boldsymbol{\beta})$ acting as in equation~\eqref{eq:warm_start_QAOA_unitary}. Consider the regime of large graphs and constant $p$. 
Note the operator $U^\dagger C_{\langle i,j\rangle}U$ only acts non-trivially on vertices within graph distance $r$ of the edge $\langle i,j\rangle$, where the radius $r$ is the number of applications of $e^{-i \gamma C}$ in the QAOA circuit, namely $r=p-\frac{1}{2}$ using our conventions above. We refer to these subgraphs as ``edge neighborhoods.'' For the warm-start QAOA with $2p$ parameters and initial classical string $w$, let $w_{\langle i,j\rangle}$ refer to the subset of bits in the neighborhood of edge $\langle i,j\rangle$ with radius $r=p-\frac{1}{2}$. Because $U^\dagger C_{\langle i,j\rangle}U$ only acts non-trivially on the edge neighborhood, we can write \begin{align} \langle w|U^\dagger C U|w\rangle &=\sum_{\langle i,j\rangle \in E}\langle w|U^\dagger C_{\langle i,j\rangle}U|w \rangle. \\ &=\sum_{\langle i,j\rangle \in E}\langle w_{\langle i,j\rangle}|U^\dagger C_{\langle i,j\rangle}U|w_{\langle i,j\rangle}\rangle. \end{align} For constant $p$ on a bounded-degree graph, this sum can be handled with low computational cost: each term can be computed using only the Hilbert space of the qubits in the edge neighborhood. It can be further simplified by realizing that, on a bounded-degree graph, there are only a fixed number (that is not growing with $n$) of neighborhood types of radius $p-\frac{1}{2}$. We can therefore break the sum over edges into a sum over the different neighborhood types. This is what allows one to analyze the performance of the low-depth QAOA at arbitrarily large sizes, although a quantum computer is ultimately required to produce a string with that expected cost. This trick also allows us to do efficient numerical computations on hundreds of qubits in Section~\ref{subsec:large_num}. \section{Numerical experiments}\label{sec:num} We begin by presenting numerical experiments that consistently show that running the standard QAOA starting from a good string does not lead to improvement. 
In this section we present data for the MaxCut problem for small graphs (12 vertices) and also for larger graphs (300 vertices) at lower depth. The data we present is typical; we did numerical experiments for various graphs of different sizes and at many QAOA depths, and never saw appreciable improvements. Later in Section~\ref{sec:MIS} we look at another problem, Maximum Independent Set, on graphs of 16 vertices and obtain similar results. In Section~\ref{sec:SK} we also look at the Sherrington-Kirkpatrick~(SK) model, which is fully connected and has random couplings, and once again obtain similar results, albeit with rare exceptions discussed in Appendix~\ref{sec:magic_angle}. \subsection{MaxCut at small system sizes\label{subsec:small_num}} We first present data for an instance of MaxCut on a 12-vertex, 3-regular graph. A summary of the numerical experiments is displayed in Tables~\ref{tab:warm_start_maxcut_12} and~\ref{tab:standard_maxcut_12}. For this instance, the largest cut value is 16 with degeneracy $2$, compared to the mean cut value of 9 obtained by averaging over all $2^{12}$ strings. We start by looking at all of the 516 strings with a cut value of 11. For each warm start we optimize the QAOA with 3, 5, 7, and 9 parameters. This is done by generating a random initial guess for the parameters, running a built-in MATLAB optimizer, and repeating the maximization for 40 random initial guesses. Note that most strings do not improve, and when there is improvement, it is not by much. At $p=3/2$, a string improved from its initial cut size if and only if a small angle condition was satisfied; see Section~\ref{sec:small_angles}, equation~\eqref{eq:sam_iff}. We then look at all 126 classical strings with a cut value of 13, which is closer to the optimal cut value. Here even fewer strings show improvement, and again the improvements are small. We find that the QAOA never improves when initialized in near-optimal strings with a cut value of 15.
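A scaled-down version of this experiment can be reproduced with exact statevector simulation. The following sketch is our own illustration, not the paper's MATLAB code: we assume the 8-vertex, 3-regular cube graph in place of the 12-vertex instance, and use SciPy's Nelder-Mead optimizer with a handful of random restarts.

```python
# Scaled-down warm-start QAOA experiment: p = 3/2 (3 parameters) for MaxCut on
# the 3-regular cube graph Q3 (8 vertices, 12 edges) -- an illustrative
# stand-in for the paper's 12-vertex instance.
import itertools
import numpy as np
from scipy.optimize import minimize

n = 8
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 5), (5, 6), (6, 7), (7, 4),
         (0, 4), (1, 5), (2, 6), (3, 7)]

# Cut size C_MC(z) for every computational-basis string, as a diagonal vector.
strings = list(itertools.product([1, -1], repeat=n))
cut = np.array([sum((1 - z[a] * z[b]) // 2 for a, b in edges) for z in strings])

def apply_B(psi, beta):
    """Apply e^{-i beta B}, B = sum_i X_i, as a product of 1-qubit rotations."""
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])
    psi = psi.reshape([2] * n)
    for q in range(n):
        psi = np.moveaxis(np.tensordot(rx, psi, axes=([1], [q])), 0, q)
    return psi.reshape(-1)

def expected_cut(params, w_index):
    """<w| U^dag C U |w> for U = e^{-i b2 B} e^{-i g C} e^{-i b1 B}."""
    b1, g, b2 = params
    psi = np.zeros(2 ** n, dtype=complex)
    psi[w_index] = 1.0
    psi = apply_B(psi, b1)
    psi = np.exp(-1j * g * cut) * psi      # e^{-i gamma C} is diagonal
    psi = apply_B(psi, b2)
    return float(np.sum(cut * np.abs(psi) ** 2))

# Warm start on a string with cut size 9 (the mean over all strings is 6 here).
w_index = int(np.argmax(cut == 9))
rng = np.random.default_rng(0)
best = cut[w_index]                        # beta = 0 always attains C_MC(w)
for _ in range(10):
    res = minimize(lambda x: -expected_cut(x, w_index),
                   rng.uniform(-np.pi, np.pi, size=3), method="Nelder-Mead")
    best = max(best, -res.fun)
print(f"C_MC(w) = {cut[w_index]}, optimized p=3/2 expectation = {best:.3f}")
```

The graph, seed, and restart count are our choices for brevity; any improvement found this way is bounded by the maximum cut of 12.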
One may wonder if our search for improvement is sufficiently exhaustive. Because we work at 12 bits, the computational resources required are modest, and we are able to explore many examples; in some examples, we ensure that the optimal parameters we find are genuine global optima. We explore the QAOA up to $p=9/2$ ($9$ parameters), where the light-cone of each edge wraps around the graph many times. We expect that the failure to find improvement at very low bit number will continue at high bit number. In Section~\ref{subsec:large_num}, we present evidence that this belief is correct at low depth. Before discussing large system size, we comment on the performance of the standard QAOA starting in an equal superposition of all strings on the MaxCut instance described above. The performance data is summarized in Table~\ref{tab:standard_maxcut_12}. Note the steady increase in the expected value of the cost function as $p$ increases. The QAOA starting from an equal superposition of all strings improves significantly from its initial value, and there seems to be no obstacle to its performance as the depth increases. \begin{table}[t!]
\begin{center} \begin{tabular}{r|cccc} \multicolumn{5}{c}{\textsc{Warm start at $C_{\text{MC}}(w) = 11$ with 516 classical strings $w$}}\\ \hline $p$ & $3/2$ & $5/2$ & $7/2$ & $9/2$ \\ Number of strings improved & 56 & 72 & 112 & 164 \\ Mean cost of improved strings & 11.03 & 11.05 & 11.09 & 11.10\\ Largest cost of improved strings & 11.04 & 11.18 & 11.29 & 11.42 \\ \multicolumn{5}{c}{\rule{0pt}{2ex}} \\ \multicolumn{5}{c}{\textsc{Warm start at $C_{\text{MC}}(w)= 13$ with 126 classical strings $w$}}\\ \hline $p$ & $3/2$ & $5/2$ & $7/2$ & $9/2$ \\ Number of strings improved & 2 & 2 & 2 & 6 \\ Mean cost of improved strings & 13.05 & 13.12 & 13.14 & 13.06 \\ Largest cost of improved strings & 13.05 & 13.12 & 13.14 & 13.15 \\ \multicolumn{5}{c}{\rule{0pt}{2ex}} \\ \multicolumn{5}{c}{\textsc{Warm start at $C_{\text{MC}}(w)= 15$ with 10 classical strings $w$}}\\ \hline $p$ & $3/2$ & $5/2$ & $7/2$ & $9/2$ \\ Number of strings improved & 0 & 0 & 0 & 0 \\ \end{tabular} \caption{\textit{MaxCut. QAOA improvement from good initial classical strings on a 12-vertex, 3-regular graph with a maximum cut size of $16$. The average cut size is $9$ and the number of parameters is $2p$. When warm starting at cut size $11$, the number of strings which improve increases steadily with depth, although all improvements are small. At a cut size of $13$, the number of strings that improve increases only at $p=9/2$. All improvements come in pairs because of the bit flip symmetry of MaxCut. At cut size $15$ we see zero improvements.}\label{tab:warm_start_maxcut_12}} \begin{tabular}{r|cccccc} \multicolumn{7}{c}{\rule{0pt}{2ex}} \\ \multicolumn{7}{c}{\textsc{Standard QAOA}}\\ \hline $p$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ \\ Expected cut size & 12.15 & 13.43 & 14.28 & 14.86 & 15.20 & 15.41 \\ \end{tabular} \caption{\textit{MaxCut.
Performance of the Standard QAOA for the same instance as Table~\ref{tab:warm_start_maxcut_12}, starting in the uniform superposition where the expected value of $C_{\text{MC}}$ is $9$. The number of parameters is $2p$. Note the steady improvement with depth.}\label{tab:standard_maxcut_12}} \end{center} \end{table} \subsection{MaxCut at large system sizes and low depths\label{subsec:large_num}} Next, we look at MaxCut on large $3$-regular graphs. Here we present data for a random instance with $n=300$ vertices which is representative of our explorations. This instance has a largest cut size of $415$. We prepare good classical strings using two different classical algorithms: simulated annealing and the Goemans-Williamson algorithm~\cite{GW_algorithm}. The QAOA at $p=3/2$ and $p=5/2$ involves local neighborhoods with no more than $6$ or $14$ qubits, respectively, which allows us to compute the cost function locally on each edge neighborhood and then sum the cost functions over all edges (as previewed in Section \ref{subsec:nbhd_sums}). Therefore, there is no exponential overhead in the number of qubits. Simulated annealing is a classical algorithm which samples from a thermal distribution by stochastically updating a string with a probability that depends on the chosen temperature. We use a simulated annealing algorithm with a Metropolis-Hastings update rule that changes the string by up to two vertices, and we do $1000n$ updates. For temperatures of $1.25$ and $1.75$, we generate 1000 good classical strings with average cut size $310.6$ and $287.8$, respectively. The Goemans-Williamson algorithm uses semidefinite programming to generate a good classical cut. This algorithm is guaranteed to achieve a MaxCut approximation ratio (the number of cut edges output by the algorithm divided by the maximum number of edges that can be cut) of $0.87856$ on any graph. 
The average cut value of the 1000 classical strings we generate is $398.3$, corresponding to an approximation ratio of $0.96.$ We simulate the QAOA starting on each classical string and look for optimal parameters. The parameters are initialized randomly and then the cost function is locally optimized using a steepest ascent algorithm, and this procedure is repeated at least 70 times for each string. For $p=3/2$ and $p=5/2$ we see no improvement on any of the initial strings generated by simulated annealing for both temperatures. We also find no improvement for any of the 1000 good classical strings generated by Goemans-Williamson. At $p=3/2$, the small angle condition is never satisfied for any of the initial strings (see Section~\ref{sec:small_angles}), which confirms that improvement is impossible at small angles for all of the strings. Because fewer strings improve for large system sizes than for the smaller system sizes studied in Section~\ref{subsec:small_num}, the results here suggest that the fraction of strings which improve decreases as the system size grows. \section{Small angles and low depth}\label{sec:small_angles} When improvements are seen in the numerical experiments, they are always small. One explanation for this could be that the optimal $\beta$'s, which drive the mixing, are also small. Therefore, it is logical to examine when improvements are possible for small $\beta$'s. In this section, we derive necessary and sufficient conditions [see equation~\eqref{eq:sam_iff}] for small angle improvements at $p=3/2$ for MaxCut. We note that in all of our MaxCut numerical experiments at $p=3/2$, every improvement seen corresponds to one of the conditions in~\eqref{eq:sam_iff} being satisfied. Consistent with our intuition, improvement is possible only if it is possible for small $\beta$'s. 
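Both conditions in equation~\eqref{eq:sam_iff} are trivial to check classically for any candidate warm start. The sketch below is our own illustration (the 3-regular circulant instance and all function names are our choices); it uses the MaxCut identity $\delta_i = w_i\sum_{k\sim i}w_k$, which follows from the definition $\delta_i = (C_i - C(w))/2$ given below.

```python
# Checking the p = 3/2 small-angle improvement conditions for MaxCut on a
# 3-regular graph, where delta_i = (C_i - C(w))/2 = w_i * (sum of neighbors' w_k).
import numpy as np

# Illustrative 3-regular instance: a 12-cycle plus antipodal chords.
n = 12
edges = [(i, (i + 1) % n) for i in range(n)] + [(i, i + n // 2) for i in range(n // 2)]

def cz(w):                                 # C_Z(w) = -sum_{<j,k>} w_j w_k
    return -sum(w[j] * w[k] for j, k in edges)

def deltas(w):
    d = np.zeros(n, dtype=int)
    for j, k in edges:
        d[j] += w[j] * w[k]
        d[k] += w[j] * w[k]
    return d

def small_angle_improvable(w):
    d = deltas(w)
    return d[np.abs(d) == 1].sum() > 0 or (d ** 3).sum() > 0

def local_search(w):
    """Greedily flip bits with delta_i > 0 until w is a bit-flip local maximum."""
    w = w.copy()
    while True:
        d = deltas(w)
        i = int(np.argmax(d))
        if d[i] <= 0:
            return w
        w[i] = -w[i]

rng = np.random.default_rng(0)
w = rng.choice([-1, 1], size=n)
w_loc = local_search(w)                    # at a local max all delta_i <= 0,
print(small_angle_improvable(w_loc))       # so neither condition can hold
```

Flipping bit $i$ changes $C_Z$ by $2\delta_i$, so the greedy search terminates at a bit-flip local maximum, where both conditions necessarily fail.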
\subsection{Quantum walk starting on a classical string ($p=1/2$)} As a warm-up, we look at the $p=1/2$ QAOA, $e^{-i\beta B}|w\rangle$, which can be viewed as a quantum walk starting on the state $\ket{w}.$ We compare the expected cost, \begin{align} \langle w |e^{i\beta B} Ce^{-i\beta B}|w\rangle \label{eq:sam_main}, \end{align} to the starting cost $C(w)$. We start by looking at small $\beta$ and write equation~\eqref{eq:sam_main} up to order $\beta^2$ as \begin{align} \langle w|(1+i\beta B -\beta^2B^2/2)C(1-i\beta B -\beta^2B^2/2)|w\rangle. \end{align} The terms of order $\beta$ cancel, and the terms of order $\beta^2$ are \begin{align} \beta^2\langle w| BCB|w\rangle - \beta^2 \langle w|B^2C|w\rangle = \beta^2\sum_{i}(C_i-C(w)),\label{eq:p=1/2_cost} \end{align} where $C_i = \langle w|X_iCX_i|w\rangle$ is the cost of the string $w$ with bit $i$ flipped, and where $\langle w|C|w\rangle = C(w)$. The $B^2$ term introduces a factor of $n$, which we replaced by a sum over $i$. The zeroth-order term is $C(w)$, so the change in expected cost due to the quantum walk for small $\beta$ is given by expression~\eqref{eq:p=1/2_cost}. Note that this formula applies to \emph{any} cost function. Now, suppose $w$ is a good string. Then we might expect that most local changes will decrease the cost function so that \eqref{eq:p=1/2_cost} is negative, indicating that the QAOA at $p=1/2$ and small $\beta$ will not make improvements. For example, if $w$ is a local maximum with respect to bit flips, then improvement is impossible. For the case of MaxCut, with $C=C_Z$, expression~\eqref{eq:p=1/2_cost} is given by \begin{align} \beta^2\sum_{i}(C_i-C(w)) = -\beta^2\sum_i\sum_{\langle j,k\rangle \in E}\left((\Delta_iw)_j(\Delta_iw)_k - w_jw_k\right), \end{align} where $\Delta_i$ operates on the string $w$ by flipping bit $i$, which negates $w_i$. 
For each term in the sum, we have \begin{align} (\Delta_iw)_j(\Delta_iw)_k - w_jw_k = \begin{cases}0 & i\neq j,k,\\ -2w_jw_k & i=j \text{ or }i=k.\end{cases} \end{align} Therefore for MaxCut we get that the change in expected cost is \begin{align} \beta^2\sum_{i}(C_i-C(w)) = 4\beta^2\sum_{\langle j,k\rangle \in E}w_jw_k=-4\beta^2C_Z(w).\label{eq:delta_sum} \end{align} Suppose $w$ is a good starting string, meaning that $C(w)=C_Z(w)>0$ or equivalently the cut fraction is greater than $1/2$. Then for any small $\beta \ne 0$, the change will be negative and the best strategy is to set $\beta = 0$. Therefore for small $\beta$, no improvement is possible via quantum walk starting at any $w$ with cut fraction larger than 1/2. For the case of MaxCut and weighted MaxCut problems like the Sherrington-Kirkpatrick model, we can in fact show that the $p=1/2$ QAOA does not improve on the initial cost starting at a $w$ with $C_Z(w)>0$ for any choice of $\beta$ (large or small). The expected cost after evolving under $B$ is given by \begin{align} \bra{w}e^{i\beta B}C_Ze^{-i\beta B}\ket{w} &= -\sum_{\langle i, j \rangle}\bra{w} e^{i\beta B} Z_i Z_j e^{-i\beta B}\ket{w} \nonumber \\ &= -\sum_{\langle i, j \rangle}\bra{w}(e^{i\beta X_i} Z_i e^{-i\beta X_i}) (e^{i\beta X_j} Z_j e^{-i\beta X_j})\ket{w} \nonumber \\ &=-\sum_{\langle i, j \rangle}\bra{w} (\cos(2\beta)Z_i +\sin(2\beta) Y_i )( \cos(2\beta)Z_j +\sin(2\beta) Y_j ) \ket{w}\nonumber \\ &=\cos^2(2\beta) C_Z(w)\label{eq:maxcut_one_parameter}, \end{align} where we use the fact that the expectations of $Z_i Y_j$, $Y_i Z_j$, and $Y_i Y_j$ are zero on bit strings. Therefore, if $w$ is a good starting string, meaning that $C_Z(w)$ is positive (corresponding to a cut fraction of more than 1/2), then $\beta=0$ is optimal and the optimized expected cost remains $C_Z(w)$. 
On the other hand, if $C_Z(w)$ is negative then the expected cost can be improved by taking $\beta=\pi/4$ and rotating to the uniform superposition over all strings weighted by phases depending on $\ket{w}$, which gives a resulting cut fraction of 1/2. In summary: a bad starting string can quantum walk to mediocrity, but a good string cannot be improved. \subsection{Warm starting at $p=3/2$ for MaxCut on $3$-regular graphs} \subsubsection{Statement of result} We now look at the $p=3/2$ QAOA warm starting in $|w\rangle$ with both $\beta$'s small. We focus on MaxCut for 3-regular graphs. We first state a necessary and sufficient condition for improvement here, and then present the derivation in Section~\ref{subsubsec:proof}. First, let us define the quantity \begin{align} \delta_i = (C_i - C(w))/2,\label{eq:definition_of_delta_i} \end{align} where $C_i$ is the cost of the string $w$ with the $i$th bit flipped. The value of $\delta_i$ is a measure of how local bit flips change the cost function. For MaxCut on 3-regular graphs, $|\delta_i| = 1$ or $3$. At $p=3/2$, we find that improvement of the cost is possible for small $\beta$'s if and only if \begin{align} \sum_{|\delta_i|=1}\delta_i >0\qquad\text{or}\qquad \sum_i\delta_i^3 >0. \label{eq:sam_iff} \end{align} If $w$ is a local maximum with respect to bit flips, then all of the $\delta_i$ are negative and neither condition can be met. More generally, we expect that neither of these two conditions can be met for good strings, and we now argue why. We will make use of the identity \begin{align} \sum_{i}\delta_i = -2C_Z(w),\label{eq:delta_sum2} \end{align} which is a restatement of equation~\eqref{eq:delta_sum}. Because $C_Z(w) > 0$ for a good starting string, the $\delta_i$ are more often negative than positive, meaning that $\sum_i\delta_i^3 > 0$ is unlikely, as is $\sum_{|\delta_i|=1}\delta_i > 0$.
As the goodness of the starting string increases there will be fewer positive $\delta_i$, so the conditions are even less likely to be met. This is consistent with the numerics on small MaxCut instances at $p=3/2$ in Section~\ref{subsec:small_num}, where improvement becomes less likely as the quality of the initial string increases. We note that in all of our numerical experiments, every improvement seen corresponds to one of the conditions in~\eqref{eq:sam_iff} being satisfied. In other words, improvement was seen only when it was possible for small $\beta$'s. Further, we expect that as $n$ increases, fluctuations that allow the conditions to be met will become even more rare. This is consistent with the findings in Section~\ref{subsec:large_num}, where the condition is never met and correspondingly improvement is never seen for thousands of initial strings on large MaxCut instances. \subsubsection{Proof}\label{subsubsec:proof} We now turn to deriving the conditions in~\eqref{eq:sam_iff}. The reader uninterested in the details can skip to the next section. For a general cost function $C$, the expected cost produced by the $p=3/2$ warm-started QAOA is \begin{align} &\langle w|e^{i\beta_1B}e^{i\gamma C}e^{i\beta_2B}Ce^{-i\beta_2B}e^{-i\gamma C}e^{-i\beta_1 B}|w\rangle \nonumber\\ &\approx \langle w|(1+i\beta_1 B - \beta_1^2B^2/2)e^{i\gamma C}(1+i\beta_2 B - \beta_2^2B^2/2)C(1-i\beta_2 B - \beta_2^2B^2/2)e^{-i\gamma C}(1-i\beta_1 B - \beta_1^2B^2/2)|w\rangle, \end{align} where we expand to order $\beta^2$ on the right-hand side. The zeroth-order term is $C(w)$ and the higher-order terms are the change from this value.
Terms of order $\beta$ cancel, and the terms of order $\beta^2$ are \begin{align} &-\beta_1\beta_2\langle w|Be^{i\gamma C}BCe^{-i\gamma C}|w\rangle + \mathrm{c.c.} + \beta_1\beta_2\langle w|Be^{i\gamma C}CBe^{-i\gamma C}|w\rangle + \mathrm{c.c.} \nonumber\\ &- \frac{\beta_1^2}{2}\langle w|B^2e^{i\gamma C}Ce^{-i\gamma C}|w\rangle + \mathrm{c.c.} - \frac{\beta_2^2}{2}\langle w|e^{i\gamma C}B^2Ce^{-i\gamma C}|w\rangle + \mathrm{c.c.} \nonumber\\ &+ \beta_1^2\langle w|Be^{i\gamma C}Ce^{-i\gamma C}B|w\rangle + \beta_2^2\langle w|e^{i\gamma C}BCBe^{-i\gamma C}|w\rangle \nonumber \\ \nonumber \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad = -2\beta_1\beta_2C(w)\sum_i\cos[\gamma (C_i-C(w))] + 2\beta_1\beta_2\sum_i C_i\cos[\gamma(C_i-C(w))] \nonumber \\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad -\beta_1^2nC(w) - \beta_2^2 n C(w) + \beta_1^2\sum_i C_i + \beta_2^2\sum_i C_i. \end{align} We can replace each factor of $n$ with a sum over $i$, and using equation~\eqref{eq:definition_of_delta_i} this becomes \begin{align} 2(\beta_1^2 + \beta_2^2)\sum_i\delta_i + 4\beta_1\beta_2\sum_i\delta_i\cos2\gamma\delta_i = 2(\beta_1+\beta_2)^2\sum_i\delta_i - 8\beta_1\beta_2\sum_i\delta_i\sin^2\gamma\delta_i. \end{align} Therefore we have improvement if we can find parameters for which \begin{align} 2(\beta_1+\beta_2)^2\sum_i\delta_i - 8\beta_1\beta_2\sum_i\delta_i\sin^2\gamma\delta_i >0.\label{eq:sam_3/2} \end{align} Note that condition~\eqref{eq:sam_3/2} is valid for any cost function. Now we apply it to the MaxCut cost $C_Z = -\sum_{\langle j,k\rangle}Z_jZ_k$ and restrict our attention to $3$-regular graphs. For this case, $|\delta_i| = 1$ or $3$. We want conditions on $\{\delta_i\}$ under which the inequality~\eqref{eq:sam_3/2} can be met for small $\beta_1,\beta_2$, and some $\gamma$. Let us split the analysis into two cases, depending on whether $\sum_{i}\delta_i\sin^2(\gamma\delta_i)$ is positive or negative. 
If this quantity is zero, we get the $p=1/2$ condition, which is never satisfied for a warm start. If $\sum_{i}\delta_i\sin^2(\gamma\delta_i)$ is positive, then we can always meet~\eqref{eq:sam_3/2}, for example, by taking $\beta_2 = -\beta_1$. So we seek conditions on $\{\delta_i\}$ that guarantee that $\sum_{i}\delta_i\sin^2(\gamma\delta_i)$ is positive for some $\gamma$. Now \begin{align} \sum_i\delta_i\sin^2(\gamma\delta_i) &= \sum_{|\delta_i|=1}\delta_i\sin^2\gamma + \sum_{|\delta_i|=3}\delta_i\sin^2(3\gamma)\\ &=\sum_{|\delta_i|=1}\delta_i\sin^2\gamma + \sum_{|\delta_i|=3}\delta_i(3\sin\gamma - 4\sin^3\gamma)^2\\ &= \sin^2\gamma \bigg[\sum_{|\delta_i|=1}\delta_i +(3-4\sin^2\gamma)^2\sum_{|\delta_i|=3}\delta_i\bigg].\label{eq:3/2_negative} \end{align} We have that $0\le (3-4\sin^2\gamma)^2\le 9$, so we see that the term in brackets ranges from the first sum to the first sum plus $9$ times the second sum as $\gamma$ varies. This means that~\eqref{eq:3/2_negative} is positive for some $\gamma$ if and only if \begin{align} \qquad \sum_{|\delta_i|=1}\delta_i > 0\qquad\text{or}\qquad\sum_{|\delta_i|=1}\delta_i + 9\sum_{|\delta_i|=3}\delta_i >0 . \end{align} The second condition is exactly equal to $\sum_i\delta_i^3 >0$. Therefore $\sum_{i}\delta_i\sin^2(\gamma\delta_i)$ can be positive if and only if \begin{align} \sum_{|\delta_i|=1}\delta_i >0\qquad\text{or}\qquad \sum_i\delta_i^3 >0. \end{align} Now let us examine the case where $\sum_{i}\delta_i\sin^2(\gamma\delta_i)$ is negative. In this case, we can succeed in satisfying ~\eqref{eq:sam_3/2} as long as \begin{align} \sum_i\delta_i > \frac{4\beta_1\beta_2}{(\beta_1+\beta_2)^2}\sum_{i}\delta_i\sin^2(\gamma\delta_i) \end{align} for some $\beta_1,\beta_2$. 
The right-hand side is minimized at $\beta_1=\beta_2$, so this condition can hold if and only if \begin{align} \sum_i\delta_i > \sum_i \delta_i\sin^2(\gamma\delta_i), \end{align} or equivalently, if for some $\gamma$, \begin{align} \sum_i\delta_i\cos^2(\gamma\delta_i) > 0. \end{align} Repeating now the analysis done for the $\sum_{i}\delta_i\sin^2(\gamma\delta_i) > 0$ case, we obtain the same conditions for success. In summary, at $p=3/2$ improvement of the cost is possible for small $\beta$'s if and only if \begin{align} \sum_{|\delta_i|=1}\delta_i >0\qquad\text{or}\qquad \sum_i\delta_i^3 >0 \end{align} which proves \eqref{eq:sam_iff}. \section{Statistical argument for getting stuck\label{sec:thermal}} \subsection{Summary of statistical arguments} \label{sec:thermal_summ} In this section we develop a statistical argument to upper bound the progress made by the QAOA starting from a good string. The bound applies to the QAOA at constant depth on large bounded-degree graphs. As a warm-up, we argue in Section~\ref{sec:thermal_1} that the QAOA makes little progress when starting from a string chosen uniformly at random. Of course, such strings are not generally ``good'' strings, but this case illuminates the style of argument. Then in Section~\ref{sec:thermal_2} we extend the argument to strings we describe as ``locally thermal.'' In particular, for any classical string $w$, we define a ``thermality coefficient'' $\varepsilon_w$ in equation~\eqref{eq:local_therm} that can be used to rigorously upper bound the improvement of the QAOA acting on $|w\rangle$. The upper bound is given in~\eqref{eq:cbar0}. We sketch why we expect $\varepsilon_w$ to be small for typical good strings in Section~\ref{sec:why_small}. We then confirm this expectation with numerical experiments in Section~\ref{sec:thermal_num}, where we generate good strings for MaxCut using simulated annealing. 
We find that these strings are locally thermal to good approximation, i.e., they all have small thermality coefficient $\varepsilon_w$. By the upper bound in \eqref{eq:cbar0}, these strings can only have limited improvement under constant-depth QAOA. We now present an informal summary of the ideas in Section~\ref{sec:thermal_2}, where we will show that the thermality coefficient $\varepsilon_w$ of the warm-start string $w$ limits its improvement under the QAOA. The argument relies on the locality and the uniformity of the operators appearing in the QAOA circuit. Recall that we consider cost functions that are a sum of terms, one for each edge of a graph, as in equation~\eqref{eq:cost_function}. Let $U$ be the QAOA operator from equation~\eqref{eq:warm_start_QAOA_unitary}, where $p= 1/2,\ 3/2,\ \cdots$. Then $U$ contains $p-1/2$ exponentials that depend on $C$. The QAOA operator for a given edge is $U^\dagger C_{\langle i,j\rangle} U$, which acts non-trivially only on the qubits in a subgraph centered at $\langle i,j \rangle$ of maximal radius $p-1/2$. By uniformity, we mean that the reduced operator on any subgraph depends only on the isomorphism type of the underlying subgraph (i.e., the reduced operators on different subgraphs are the same when the subgraphs are isomorphic). Note that the operators of the standard QAOA satisfy this uniformity property as long as the $C_{\langle i,j\rangle}$ do not depend on the edge $\langle i,j \rangle$. Our statistical argument also relies on bounded degree, so it does not apply directly to problems like the Sherrington-Kirkpatrick model in Section~\ref{sec:SK}. Because we assume the cost function terms $C_{\langle i,j\rangle}$ do not depend on the edge $\langle i,j\rangle$, we can denote a generic edge term by $C_{\textrm{edge}} \equiv C_{\langle i,j\rangle}$.
For large, random $d$-regular graphs, nearly all constant-sized neighborhoods are isomorphic trees; we assume for simplicity, in this summary section, that all neighborhoods are such trees, providing a correction later in Section~\ref{sec:thermal_2}. In this case, we can simplify the expected cost function for a QAOA unitary $U$ applied to a string $w$ as \begin{align} \frac{1}{m}\langle w|U^\dagger C U|w\rangle &=\frac{1}{m}\sum_{\langle i,j\rangle \in E}\langle w|U^\dagger C_{\langle i,j\rangle}U|w \rangle \\ &=\frac{1}{m}\sum_{\langle i,j\rangle \in E}\langle w_{\langle i,j\rangle}|U^\dagger C_{\langle i,j\rangle}U|w_{\langle i,j\rangle}\rangle \label{eq:sum_nbhd_1} \\ &= \Tr_{\textrm{tree}}(U^\dagger C_{\textrm{edge}} U \rhowtree), \label{eq:sum_nbhd_2} \end{align} where we define \begin{align} \label{eq:rhowtree} \rhowtree = \frac{1}{m} \sum_{\langle i,j\rangle \in E}|w_{\langle i,j\rangle}\rangle\langle w_{\langle i,j\rangle}| \end{align} and where $w_{\langle i,j\rangle}$ refers to the subset of bits of $w$ in the tree neighborhood of edge $\langle i,j\rangle$ with radius $r=p-\frac{1}{2}$. In passing from \eqref{eq:sum_nbhd_1} to \eqref{eq:sum_nbhd_2}, we implicitly consider the terms $U^\dagger C_{\langle i,j\rangle}U = U^\dagger C_{\textrm{edge}} U$ to be acting on a single Hilbert space, associated to a single tree neighborhood, by identifying the tree neighborhoods of different edges. Then $\rhowtree$ is a density matrix on this single $\textrm{tree}$ Hilbert space, and the trace in \eqref{eq:sum_nbhd_2} refers to this Hilbert space as well. For a fixed string $w$, the ensemble $\rhowtree$ captures what the string $w$ looks like locally on tree neighborhoods, averaged over the entire graph. For the warm starts we primarily consider, the initial state for the quantum circuit is the state $|w\rangle$ for a classical string $w$.
We then act with the $2p$-parameter unitary $U(\boldsymbol{\gamma},\boldsymbol{\beta})$ with the goal of maximizing $\langle w|U^\dagger(\boldsymbol{\gamma},\boldsymbol{\beta})CU(\boldsymbol{\gamma},\boldsymbol{\beta})|w\rangle$. In general, the choice of parameters $\boldsymbol{\gamma},\boldsymbol{\beta}$ will depend on $|w\rangle$. We call $U_w$ the $2p$-parameter QAOA unitary that does the best job acting on the initial state $|w\rangle$, as in \eqref{eq:U_w}. So for each $w$ the optimal unitary $U_w$ is generally different. Our goal is to show that $\langle w|U_w^\dagger CU_w|w\rangle$ cannot yield much improvement over the original cost function $\langle w|C|w\rangle$ for a typical good initial string $w$. For the purpose of the statistical argument, we consider the task of maximizing $C$ as the equivalent task of minimizing the energy of the quantum Hamiltonian $H=-C$. We are interested in the performance of the QAOA on ``typical'' good strings, especially those that might be generated by a classical algorithm. One natural notion of typicality is to sample the string from a thermal ensemble at fixed temperature. The thermal ensemble, or Gibbs state, at some inverse temperature $\beta$ is given by \begin{align} \rho_{\beta} = \frac{1}{\mathrm{Tr}(e^{-\beta H })}e^{-\beta H}. \end{align} Consider two strings $w$ and $w'$ drawn from the thermal ensemble $\rho_\beta$. While the strings may look totally different on any particular neighborhood, one might expect that $\rhowtree$ and $\rho_{w',\textrm{tree}}$ are nonetheless similar: both strings look the same locally when averaged over neighborhoods. In that case, typical strings from the thermal ensemble should all have the same expected cost function under $U$, since the expected cost is completely determined by $\rhowtree$. In particular, the cost function should behave as if the thermal state $\rho_\beta$ were itself the input to the QAOA.
But a key property of the thermal state is that it has the minimum expected energy of all states with the same entropy. That is, fixing some $\beta>0$, we have \begin{align} \label{eq:min_energy_fixed_entropy} \rho_\beta = \argmin_{\rho \, :\, S(\rho) = S(\rho_\beta) } \mathrm{Tr}(H\rho). \end{align} In particular, since unitaries preserve entropy, we have \begin{align} \label{eq:min_energy} \min_{U}\mathrm{Tr}( HU \rho_\beta U^\dagger) = \mathrm{Tr}(H\rho_\beta). \end{align} This means that no unitary applied to $\rho_\beta$ can lower the energy. Suppose $\rhowtree$ is well-approximated by the thermal ensemble $\rho_\beta$ when the latter is considered locally and averaged over neighborhoods. Then $w$ gives the same expectation for the cost-function as the associated $\rho_\beta$ under any QAOA unitary, so no such unitary applied to $w$ can reduce the energy. We call such strings $w$ ``locally thermal,'' and we quantify this property with the ``thermality coefficient'' $\varepsilon_w$ defined in \eqref{eq:local_therm}. Locally thermal strings may be viewed as exhibiting a form of ergodicity: the average over local neighborhoods for the fixed string looks like the thermal average over different global strings. We will conclude that for any classical string $w$ which is close to locally thermal, as measured by small $\varepsilon_w$, the cost function after applying an optimized constant-depth QAOA cannot be much better than the energy of $\rho_\beta$. While we formalize this argument in Section~\ref{sec:thermal_2}, we begin in Section~\ref{sec:thermal_1} with uniformly random strings, which may be viewed as the special case of the argument at infinite temperature, that is $\beta=0$. \subsection{Starting from a uniformly random string \label{sec:thermal_1}} \subsubsection{Statement of result} As a warm-up to Section \ref{sec:thermal_2}, we first consider the case of starting the QAOA with a classical bit string chosen uniformly at random. 
To streamline the presentation, we restrict ourselves to the $p=3/2$ QAOA on triangle-free $d$-regular graphs, although the argument can be generalized to any constant depth, and to general bounded-degree graphs. Let $G = (V,E)$ be a triangle-free $d$-regular graph with $|V|=n$ vertices and $|E|=m=nd/2$ edges. As in \eqref{eq:cost_function}, we aim to maximize a general cost function of the form \begin{equation} C(z) = \sum_{\langle i, j\rangle\in E}C_{\langle i, j\rangle}(z_i, z_j)\label{eq:general_cost_function} \end{equation} where $C_{\langle i, j \rangle}$ is the same function for all $\langle i, j \rangle$, normalized so that $|C_{\langle i, j \rangle}| \leq 1$. In what follows, it will be convenient to define the average value per edge of the cost function~\eqref{eq:general_cost_function}, given by \begin{align} \overline{c} \equiv \frac{1}{2^n}\sum_{z\in\{-1,1\}^n}\frac{1}{m} C(z).\label{eq:avg_cost_per_edge} \end{align} Note that cost functions of the form~\eqref{eq:general_cost_function} exhibit a concentration property. Using the fact that $C$ is a sum of terms most of which are uncorrelated, one can show that for any $\varepsilon>0$, \begin{align} \Pr_w\left(\left|\frac{1}{m} C(w) - \bar{c} \right| \ge \frac{1}{m^{1/2 - \varepsilon}}\right) \longrightarrow 0,\quad\text{as }m\rightarrow\infty. \label{eq:controverial_equation} \end{align} This implies that in the limit of large graph size $m$, almost all strings have cost function values extremely close to the average value $\overline{c}$. We will prove that typical strings cannot be improved by constant-depth QAOA. In particular, we show that for any $\varepsilon>0$, \begin{align} \Pr_w\left(\frac{1}{m}\langle w|U_w^\dagger CU_w |w\rangle - \frac{1}{m}C(w) \ge \frac{1}{m^{1/2-\varepsilon}}\right) \rightarrow 0,\qquad\text{as }m\rightarrow\infty,\label{eq:no_improvement_typical} \end{align} where $U_w$ is the optimal QAOA unitary for each $w$.
This means that for nearly all initial strings, the constant-depth QAOA can only improve the cost function (per edge) by an amount that vanishes in the limit of large graphs. While this is a satisfying result in its own right, it actually says very little about genuine warm starts. Recall that by ``warm start,'' we refer to strings that have been selected as potentially good starting points for optimization because their cost function value per edge $C(w)/m$ is substantially larger than the average $\bar{c}$. As such, warm-start strings are statistically atypical. Indeed, equation~\eqref{eq:controverial_equation} implies that warm starts form a vanishingly small fraction of all strings. Since the warm-start strings are the ones we are really interested in, we study in Section~\ref{sec:thermal_2} ensembles of strings whose typical members have large cost value per edge $C(w)/m$. We again argue that such strings cannot be improved by constant-depth QAOA. \subsubsection{Proof} We now show equation~\eqref{eq:no_improvement_typical}. We can use the observations of Section~\ref{subsec:nbhd_sums} to reorganize the QAOA expectation: at $p=3/2$, the QAOA involves only a small tree around each edge consisting of $2d$ vertices. That is, we can write the expected value of the cost operator (normalized by the number of edges) as \begin{align} \frac{1}{m}\langle w|U^\dagger_w C U_w|w\rangle &= \frac{1}{m}\sum_{\langle i,j\rangle \in E}\langle w|U_w^\dagger C_{\langle i,j\rangle}U_w|w\rangle,\label{eq:cost_edge_decomp} \end{align} where each matrix element on the right-hand side only depends on the $2d$ bits of $w$ which lie on the tree whose central edge is $\langle i,j\rangle$. We call the restriction of $w$ to these bits $w_{\langle i,j\rangle}$.
So we can write \begin{align} \frac{1}{m}\langle w|U^\dagger_w C U_w|w\rangle &=\frac{1}{m}\sum_{\langle i,j\rangle \in E}\langle w_{\langle i,j\rangle}|U_w^\dagger C_{\langle i,j\rangle}U_w|w_{\langle i,j\rangle}\rangle, \end{align} where each $w_{\langle i,j\rangle}$ is a bit string on $2d$ bits. Summing over the possible values of such bit strings, we can rearrange the sum as \begin{align} \label{eq:edge_to_nbhd} \frac{1}{m}\langle w|U^\dagger_w C U_w|w\rangle &= \frac{1}{m}\sum_{\ell \in \{-1,1\}^{2d}}\#_w(\ell)\langle \ell|U_w^\dagger C_\mathrm{edge} U_w|\ell\rangle, \end{align} where $\#_w(\ell)$ is the number of times the $2d$-bit string $\ell$ appears within the collection of $w_{\langle i,j\rangle}$, and where $C_{\mathrm{edge}}$ denotes a generic edge term, as described in Section~\ref{sec:thermal_summ} (for MaxCut, $C_{\mathrm{edge}}=-Z_a Z_b$, where $\langle a,b\rangle$ labels the central edge of a tree neighborhood). We consider $U^\dagger_wC_{\mathrm{edge}}U_w$ as an operator on the $2d$-qubit Hilbert space associated with the $2d$ vertices in the radius-1 tree neighborhood of an edge. Now we consider the case where the string $w\in \{-1,1\}^{n}$ is chosen uniformly at random. We can think of $\#_w(\ell)$ as a random variable whose average value is $m/2^{2d}$. We conveniently partition the sum as \begin{align} \label{eq:correction} \frac{1}{m}\langle w|U^\dagger_w C U_w|w\rangle &= \frac{1}{2^{2d}}\sum_{\ell \in \{-1,1\}^{2d}}\langle \ell|U_w^\dagger C_\mathrm{edge} U_w|\ell\rangle + \frac{1}{m}\sum_{\ell \in \{-1,1\}^{2d}}\left(\#_w(\ell) - \frac{m}{2^{2d}}\right)\langle \ell|U_w^\dagger C_\mathrm{edge} U_w|\ell\rangle. 
\end{align} The first term of the right-hand side of equation~\eqref{eq:correction} turns out to be a simple constant, \begin{align} \frac{1}{2^{2d}} \sum_{\ell \in \{-1,1\}^{2d}}\langle \ell|U_w^\dagger C_\mathrm{edge} U_w|\ell\rangle &= \frac{1}{2^{2d}} \sum_{\ell \in \{-1,1\}^{2d}}\mathrm{Tr}_{\textrm{tree}}\left(U_w^\dagger C_\mathrm{edge} U_w |\ell\rangle\langle \ell|\right)\\ &= \frac{1}{2^{2d}}\mathrm{Tr}_{\textrm{tree}}\left(U_w^\dagger C_\mathrm{edge} U_w I\right)\\ &= \frac{1}{2^{2d}} \mathrm{Tr}_{\textrm{tree}}(C_{\mathrm{edge}})\\ &= \frac{1}{ 2^n} \sum_{z \in \{-1,1\}^n } \frac{1}{m} \langle z | C | z \rangle \\ &= \bar{c}, \end{align} where $\overline{c}$ is the average cost value per edge as defined in equation~\eqref{eq:avg_cost_per_edge}, and where $\Tr_{\textrm{tree}}$ refers to the trace over the Hilbert space of an abstract tree neighborhood. The second term in \eqref{eq:correction} is then the deviation of the optimized cost function $\frac{1}{m}\langle w|U^\dagger_w C U_w|w\rangle$ from the averaged cost function $\bar{c}$. We want to upper bound this deviation, which we denote by $\chi$, defined as \begin{align} \label{eq:chi_def} \chi & = \frac{1}{m}\langle w|U^\dagger_w C U_w|w\rangle - \bar{c} \\ & = \frac{1}{m}\sum_{\ell \in \{-1,1\}^{2d}}\left(\#_w(\ell) - \frac{m}{2^{2d}}\right)\langle \ell|U^\dagger_w C_\mathrm{edge}U_w|\ell\rangle. \end{align} We can bound $\chi^2$ using the Cauchy-Schwarz inequality as \begin{align} \chi^2 &\le \frac{1}{m^2}\bigg[\sum_{\ell \in \{-1,1\}^{2d}}\left(\#_w(\ell) - \frac{m}{2^{2d}}\right)^2\bigg]\bigg[\sum_{\ell \in \{-1,1\}^{2d}}\left|\langle \ell| U_w^\dagger C_{\mathrm{edge}} U_w|\ell\rangle\right|^2\bigg]\\ &\le \frac{1}{m^2}2^{2d}\sum_{\ell \in \{-1,1\}^{2d}} \left(\#_w(\ell) - \frac{m}{2^{2d}}\right)^2, \end{align} where we use the fact that $|C_\mathrm{edge}|\le 1$ in the second inequality. 
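To make the bookkeeping behind \eqref{eq:edge_to_nbhd} and \eqref{eq:correction} concrete, here is a classical sketch (ours), taking $U_w$ to be the identity: on a cycle ($d=2$) the radius-1 neighborhood of edge $\langle i,i+1\rangle$ consists of the $2d=4$ vertices $i-1,i,i+1,i+2$, and weighting the diagonal edge term by the counts $\#_w(\ell)$ recovers $C(w)/m$ exactly.

```python
from collections import Counter
import random

# Classical illustration (our own sketch) of the neighborhood bookkeeping,
# with U_w = identity: on a cycle (d = 2) the radius-1 neighborhood of edge
# <i, i+1> consists of the 2d = 4 vertices i-1, i, i+1, i+2.
random.seed(1)
n = 240                       # m = n edges on a cycle
w = [random.choice((-1, 1)) for _ in range(n)]

def neighborhood(i):
    """Restriction w_<i,j> of w to the tree around edge <i, i+1>."""
    return tuple(w[(i + k) % n] for k in (-1, 0, 1, 2))

counts = Counter(neighborhood(i) for i in range(n))   # this is #_w(l)

# MaxCut edge term evaluated on a 4-bit pattern: C_edge = -z_b z_c acting on
# the central edge of the neighborhood.
edge_cost = lambda l: -l[1] * l[2]

lhs = sum(-w[i] * w[(i + 1) % n] for i in range(n)) / n        # C(w)/m
rhs = sum(cnt * edge_cost(l) for l, cnt in counts.items()) / n
assert abs(lhs - rhs) < 1e-12
assert sum(counts.values()) == n      # the counts partition the m edges
```

The counts $\#_w(\ell)$ fluctuate around their mean $m/2^{2d}$, and it is exactly these fluctuations that the variance bound below controls.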
For each $\ell \in \{-1,1\}^{2d}$, we can imagine $\#_w(\ell)$ as a sum of indicator random variables $B_{\langle i,j\rangle}$. Each $B_{\langle i,j\rangle}$ is $1$ if $w_{\langle i,j\rangle} = \ell$, where we recall that $w_{\langle i,j\rangle}$ is the restriction of $w$ to the edge neighborhood centered at edge $\langle i,j\rangle$, and $0$ otherwise: \begin{align} \#_w(\ell) = \sum_{\langle i,j\rangle \in E}B_{\langle i,j\rangle}. \end{align} The mean of each $B_{\langle i,j\rangle}$ is $2^{-2d}$ because the string $w$ is chosen uniformly at random. Note $B_{\langle i,j\rangle}$ and $B_{\langle i',j'\rangle}$ are independent when the tree neighborhoods of the corresponding edges do not overlap. For each $B_{\langle i,j\rangle}$, there are at most $D = 2d^3$ distinct edge neighborhoods with which it has non-trivial overlap. Averaging over the $w$ we get \begin{align} \Ex_w\left[\left(\#_w(\ell) - \frac{m}{2^{2d}}\right)^2\right] &= \sum_{\langle i,j\rangle \in E}\sum_{\langle i',j'\rangle \in E}\Ex_w\left[\left(B_{\langle i,j\rangle} - \frac{1}{2^{2d}}\right)\left(B_{\langle i',j'\rangle} - \frac{1}{2^{2d}}\right)\right]\\ &\le mD\frac{1}{2^{2d}}\left(1 - \frac{1}{2^{2d}}\right), \end{align} because there are at most $mD$ pairs of overlapping neighborhoods in the double sum (the other terms vanish), and for these nonzero terms, by Cauchy-Schwarz, the covariance $\Ex_w\left[\cdots\right]$ is upper bounded by the variance $\mu(1-\mu)$ of the indicator random variables $B_{\langle i,j\rangle}$ with mean $\mu=2^{-2d}$. Altogether, $\Ex_w[\chi^2]$ is bounded above as \begin{align} \Ex_w\left[\chi^2\right] &\le \frac{1}{m^2}2^{2d}mD\frac{1}{2^{2d}}\left(1 - \frac{1}{2^{2d}}\right) \le \frac{D}{m}. \end{align} By the Chebyshev inequality, for any $a > 0$, we have \begin{align} \Pr_w\left(\left|\chi\right| \ge a\right) \le \frac{\Ex_w[\chi^2]}{a^2} \le \frac{D}{ma^2}. 
\end{align} Choosing $a=m^{-1/2+\varepsilon}$ for any $\varepsilon > 0$, and plugging in the definition of $\chi$ in equation~\eqref{eq:chi_def}, we get \begin{align} \Pr_w\left(\left|\frac{1}{m} \langle w|U^\dagger_w C U_w|w\rangle - \bar{c} \right| \ge \frac{1}{m^{1/2 - \varepsilon}}\right) \longrightarrow 0,\quad\text{as }m\rightarrow\infty.\label{eq:chi_zero} \end{align} It follows that as the number of edges $m$ tends to infinity, the probability of any non-negligible deviation from $\bar{c}$ tends to zero. Equation~\eqref{eq:chi_zero}, combined with equation~\eqref{eq:controverial_equation}, finally yields the bound \begin{align} \Pr_w\left(\frac{1}{m}\langle w|U_w^\dagger CU_w |w\rangle - \frac{1}{m}C(w) \ge \frac{1}{m^{1/2-\varepsilon}}\right) \rightarrow 0,\qquad\text{as }m\rightarrow\infty. \end{align} We conclude that for nearly all initial strings, the constant-depth QAOA can only improve the cost function (per edge) by an amount that vanishes in the limit of large graphs. \subsection{Starting from a typical good string} \label{sec:thermal_2} \subsubsection{Statement of result} We have seen why the QAOA gets stuck when starting with a typical string chosen uniformly at random. However, we are interested in the performance of the QAOA when starting with a typical \emph{good} string, e.g., a string produced by a classical algorithm such as simulated annealing. In this section, we see that the QAOA also gets stuck when starting with one of these good strings. Throughout this paper success is characterized by a high value of the cost function $C$ which often just counts the number of satisfied clauses. Here we are going to make a thermodynamic argument where conventionally one seeks a low energy sample coming from a low-temperature ensemble. Accordingly, in this section we will apply our arguments to the Hamiltonian $H = -C$, whose value we try to minimize. 
Once again, we work in the setting of a $d$-regular graph $G=(V,E)$, with $|V|=n$, $|E|=m=nd/2$, and a cost function of the form \begin{equation} C(z) = \sum_{\langle i, j\rangle\in E}C_{\langle i, j\rangle}(z_i, z_j), \end{equation} normalized so that $|C_{\langle i, j \rangle}| \leq 1$. Typically, for a large random $d$-regular graph, almost all of the local neighborhoods are trees. Fixing a neighborhood radius $r$ (corresponding to the QAOA with $p=r+\frac{1}{2}$), we define the fraction $\delta$ of edges whose neighborhoods are \textit{not} trees as \begin{align} \label{eq:tree_frac} \delta = 1-\frac{|E_T|}{m} , \end{align} where $E_T$ denotes the set of edges whose neighborhood of radius $r$ is a tree. Our argument will be useful in the case that $\delta$ is small. For a random $d$-regular graph, considering $n \to \infty$ with $d$ and $r$ fixed, it turns out that $\delta = O\left(1/n\right)$~(\cite{janson_random_regular}, Theorem 9.5) with probability $1-o(1)$. So typically $\delta$ is small. Before introducing our main result below in \eqref{eq:cbar0}, we need to introduce two ensembles associated with the warm-start string $w$. For any classical string $w\in \{-1,1\}^n$ on $G$ we define its \emph{local ensemble} by \begin{align} \label{eq:local_ensemble} \rhowtree = \frac{1}{|E_T|}\sum_{\langle i,j\rangle \in E_T}|w_{\langle i,j\rangle}\rangle\langle w_{\langle i,j\rangle}|, \end{align} where $w_{\langle i,j\rangle}$ denotes the restriction of $w$ onto the local tree neighborhood $T_{\langle i,j\rangle}$, and $T_{\langle i,j\rangle}$ denotes the tree neighborhood centered around edge $\langle i,j\rangle \in E_T$. Each tree neighborhood has a fixed number of bits, so $w_{\langle i,j \rangle}$ is a string of that length, and $\rhowtree$ is a density matrix over the corresponding number of qubits. 
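The fraction $\delta$ in \eqref{eq:tree_frac} can be computed directly: collect each radius-$r$ ball by breadth-first expansion and test tree-ness via $|E|=|V|-1$, since the ball is connected by construction. A small sketch (ours; cycle graphs are used purely for illustration):

```python
# A sketch (our own) of computing the non-tree fraction delta of
# eq. (tree_frac): for each edge, collect the radius-r neighborhood and test
# tree-ness via |edges| == |vertices| - 1 (the ball is connected by
# construction).
def radius_r_ball(adj, edge, r):
    """Vertices within distance r of either endpoint of `edge`."""
    seen, frontier = set(edge), set(edge)
    for _ in range(r):
        frontier = {u for v in frontier for u in adj[v]} - seen
        seen |= frontier
    return seen

def delta(adj, r):
    edges = {(min(u, v), max(u, v)) for u in adj for v in adj[u]}
    non_tree = 0
    for e in edges:
        ball = radius_r_ball(adj, e, r)
        sub_edges = sum(1 for (u, v) in edges if u in ball and v in ball)
        if sub_edges != len(ball) - 1:   # connected + |E| = |V|-1  <=>  tree
            non_tree += 1
    return non_tree / len(edges)

cycle = lambda n: {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
# On C_8 every radius-2 neighborhood is a path (a tree); on C_6 each
# neighborhood wraps all the way around and contains the 6-cycle.
assert delta(cycle(8), r=2) == 0.0
assert delta(cycle(6), r=2) == 1.0
```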
Let \begin{align} \label{eq:rhobeta_defn} \rho_{\beta} = \frac{1}{\mathrm{Tr}\big(e^{-\beta(- C)}\big)}e^{-\beta(- C)} \end{align} be the global thermal density matrix with respect to the Hamiltonian $H=-C$, at some inverse temperature $\beta$. We will consider its reduced density onto the tree neighborhood $T_{\langle i,j\rangle}$, denoted $\rhobetaij$. Because all the tree neighborhoods $T_{\langle i,j\rangle}$ are isomorphic subgraphs, we can identify their associated Hilbert spaces. That is, we consider each reduced operator $\rhobetaij$ to act on the same abstract Hilbert space (this is the same Hilbert space implicit in the definition of $\rhowtree$). With this identification, we can define the average over neighborhoods, \begin{align} \label{eq:rhobetatree} \rhobetatree = \frac{1}{|E_T|}\sum_{\langle i,j\rangle \in E_T} \rhobetaij. \end{align} Again, both $\rhowtree$ and $\rhobetatree$ act on the Hilbert space of a single tree neighborhood: although their definitions involve the sum of terms associated to different neighborhoods, these neighborhoods are all isomorphic trees, and we implicitly identify the neighborhoods when summing the terms. Our main result in this section is an upper bound on how much the cost function of a warm-start string $w$ can improve under the QAOA at fixed depth. The upper bound depends on a quantity $\varepsilon_w$, which we call the ``thermality coefficient'', that measures the distance between the ensembles $\rhowtree$ and $\rhobetatree$: \begin{align} \label{eq:local_therm} \varepsilon_w = \| \rhobetatree- \rhowtree\|_1, \end{align} where $\beta$ is the inverse temperature chosen so that \begin{align} \mathrm{Tr}(C\rho_\beta) = C(w) \end{align} and where \begin{align} \|\rho-\sigma\|_1 \equiv \mathrm{Tr}\Big(\sqrt{(\rho-\sigma)^\dagger(\rho-\sigma)}\Big) \end{align} denotes the trace norm. 
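For intuition, $\varepsilon_w$ can be computed exactly on a tiny instance. The sketch below (ours) takes MaxCut on a $10$-cycle with radius-$1$ neighborhoods and a fixed $\beta$, omitting the matching of $\beta$ to $C(w)$; both ensembles are diagonal in the computational basis, so the trace norm reduces to an $L^1$ distance over $4$-bit neighborhood patterns.

```python
from collections import Counter
from itertools import product
from math import exp

# Our own toy computation of the thermality coefficient eps_w for MaxCut on a
# 10-cycle (H = -C with C(z) = sum_i -z_i z_{i+1}), neighborhood radius r = 1,
# at a fixed beta (the matching of beta to C(w) is omitted in this sketch).
n, beta = 10, 0.4
C = lambda z: sum(-z[i] * z[(i + 1) % n] for i in range(n))
nbhd = lambda z, i: tuple(z[(i + k) % n] for k in (-1, 0, 1, 2))

# rho_{beta,tree}: thermal average of the neighborhood marginal, built by
# exact enumeration of all 2^n strings.
weights = {z: exp(beta * C(z)) for z in product((-1, 1), repeat=n)}
Z = sum(weights.values())
rho_beta_tree = Counter()
for z, wt in weights.items():
    for i in range(n):
        rho_beta_tree[nbhd(z, i)] += wt / (Z * n)

def eps(w):
    """eps_w = || rho_{beta,tree} - rho_{w,tree} ||_1 for a single string w."""
    rho_w = Counter(nbhd(w, i) for i in range(n))
    return sum(abs(rho_beta_tree[l] - rho_w[l] / n)
               for l in set(rho_beta_tree) | set(rho_w))

w = (1, -1, 1, -1, 1, 1, -1, 1, -1, -1)
assert 0.0 <= eps(w) <= 2.0
```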
In the present case, both $\rhobetatree$ and $\rhowtree$ are diagonal in the computational basis, i.e., they are classical probability distributions over strings, in which case $\varepsilon_w$ is also equal to twice the total variation distance. When $\varepsilon_w$ is small, we refer to $w$ as being ``locally thermal.'' Note that $\varepsilon_w$ depends on the QAOA depth $p$ which defines the neighborhood size. The main result of this section is \begin{align} \label{eq:cbar0} \frac{1}{m}\langle w|U^\dagger_w C U_w|w\rangle \leq c(w) +2\varepsilon_w + 4\delta , \end{align} where $c(w)= C(w)/m$ is the cut fraction of the original string $w$. We know that $\delta$ is small for random graphs, so the heart of the result is the dependence on $\varepsilon_w$. So if $w$ is locally thermal, i.e., if $\varepsilon_w$ is small, then $w$ can only improve by a small amount. In Sections~\ref{sec:why_small} and~\ref{sec:thermal_num}, we present evidence that $\varepsilon_w$ is small for typical strings $w$ of a given cut fraction. \subsubsection{Proof}\label{subsubsection:thermal_proof} We proceed to prove \eqref{eq:cbar0}. Recalling equation~\eqref{eq:cost_edge_decomp} and the observations of Section~\ref{subsec:nbhd_sums}, we have \begin{align} \frac{1}{m}\langle w|U^\dagger_w C U_w|w\rangle &= \frac{1}{m}\sum_{\langle i,j\rangle \in E}\langle w|U_w^\dagger C_{\langle i,j\rangle}U_w|w\rangle\\ &\leq \frac{1}{m} \sum_{\langle i,j\rangle \in E_T}\langle w|U_w^\dagger C_{\langle i,j\rangle}U_w|w\rangle + \delta \\ &= \frac{1}{m}\sum_{\langle i,j\rangle \in E_T}\langle w_{\langle i,j\rangle}|U_w^\dagger C_{\langle i,j\rangle}U_w|w_{\langle i,j\rangle}\rangle + \delta \\ &= \frac{1}{m}\Tr_{\textrm{tree}}\big(U_w^\dagger C_{\mathrm{edge}}U_w \sum_{\langle i,j\rangle \in E_T}|w_{\langle i,j\rangle}\rangle\langle w_{\langle i,j\rangle}|\big) + \delta \\ &=\frac{|E_T|}{m} \Tr_{\textrm{tree}}\big(U_w^\dagger C_{\mathrm{edge}}U_w\rhowtree\big) + \delta . 
\end{align} The second line drops the contribution from edges whose neighborhoods are not trees. The $\Tr_{\textrm{tree}}$ refers to the trace over the Hilbert space associated to an abstract tree neighborhood. The final line uses the definition of $\rhowtree$ in~\eqref{eq:local_ensemble}. Next, we replace $\rhowtree$ with $\rhobetatree$, picking up an error proportional to $\varepsilon_w$, which measures the local thermality of $w$. We use the property of the trace norm that \begin{align} \Tr(A^\dagger B) \le \|A\|_\infty\cdot \|B\|_1, \end{align} where $\norm{A}_\infty$ is the operator norm (the largest singular value). Recall that we assume $C$ is normalized so that $|C_{\langle i, j \rangle}| \leq 1$, or equivalently, $||C_{\textrm{edge}}||_{\infty} \leq 1$. Then we find \begin{align} \frac{1}{m}\langle w|U^\dagger_w C U_w|w\rangle \leq \frac{|E_T|}{m} \Tr_{\textrm{tree}}\left(U_w^\dagger C_{\mathrm{edge}}U_w \rhobetatree \right) + \varepsilon_w + \delta. \end{align} Now we substitute \eqref{eq:rhobetatree}, replace the partial trace $\mathrm{Tr}_{G\backslash T_{\langle i,j\rangle}}(\rho_{\beta})$ with the full thermal operator $\rho_\beta$, and re-sum the cost Hamiltonian $C$: \begin{align} \frac{1}{m}\langle w|U^\dagger_w C U_w|w\rangle &\leq \frac{1}{m}\sum_{\langle i,j\rangle\in E_T}\Tr_{\textrm{tree}}\left(U_w^\dagger C_{\langle i,j\rangle}U_w\mathrm{Tr}_{G\backslash T_{\langle i,j\rangle}}(\rho_{\beta})\right) + \varepsilon_w + \delta\\ & \leq \frac{1}{m}\sum_{\langle i,j\rangle\in E}\mathrm{Tr}\left(U_w^\dagger C_{\langle i,j\rangle}U_w\rho_\beta\right) + \varepsilon_w + 2\delta\\ &=\frac{1}{m}\mathrm{Tr}\left(U_w^\dagger CU_w\rho_\beta\right) + \varepsilon_w + 2\delta. 
\end{align} Finally, we use the principle of minimum energy in thermodynamics, as encoded in \eqref{eq:min_energy} and repeated here for convenience: \begin{align} \min_{U}\mathrm{Tr}\left(HU\left[\frac{e^{-\beta H}}{\mathrm{Tr}(e^{-\beta H})}\right]U^\dagger\right) = \frac{\mathrm{Tr}(He^{-\beta H})}{\mathrm{Tr}(e^{-\beta H})}. \end{align} Applying this equation to the Hamiltonian $H=-C$ with the thermal state $\rho_\beta$ of \eqref{eq:rhobeta_defn}, we get \begin{align} \frac{1}{m}\langle w|U^\dagger_w C U_w|w\rangle & \leq \frac{1}{m}\mathrm{Tr}\left(U_w^\dagger CU_w\rho_\beta\right) + \varepsilon_w + 2\delta \\ & \leq \frac{1}{m}\mathrm{Tr}(C\rho_\beta) +\varepsilon_w + 2\delta \\ & \leq \frac{|E_T|}{m}\mathrm{Tr}_{\textrm{tree}}(C_{\textrm{edge}}\rhobetatree) +\varepsilon_w + 3\delta \\ & \leq \frac{|E_T|}{m}\mathrm{Tr}_{\textrm{tree}}(C_{\textrm{edge}}\rhowtree) +2\varepsilon_w + 3\delta \\ & \leq \frac{1}{m}\langle w|C|w\rangle +2\varepsilon_w + 4\delta \end{align} or equivalently, \begin{align} \frac{1}{m}\langle w|U^\dagger_w C U_w|w\rangle \leq c(w) +2\varepsilon_w + 4\delta \end{align} where $c(w)$ is the cut fraction of the original string $w$, reproducing the desired result of \eqref{eq:cbar0}. Since $\delta=O(1/n)$ is vanishingly small for large graphs, we conclude that the improvement is controlled by $\varepsilon_w$, which measures how close $w$ is to being locally thermal. \subsubsection{When are initial strings locally thermal?} \label{sec:why_small} Next we ask about the values of $\varepsilon_w$ for typical good strings $w$ drawn from the thermal ensemble: how close is $\rhowtree$ to $\rhobetatree$? 
While $\rhowtree$ is not immediately guaranteed to be close to $\rhobetatree$ for all $w$, the relation holds on average over thermally sampled strings $w$, i.e., \begin{align} \label{eq:sampledrhobeta} \operatorname*{\mathbb{E}}_{w \sim \rho_\beta} [\rhowtree] = \rhobetatree, \end{align} where $\operatorname*{\mathbb{E}}_{w\sim\rho_\beta}$ denotes expectation with respect to samples of $w$ from the thermal distribution. This follows by commuting the thermal average with the average over neighborhoods: \begin{align} \mathop{\mathbb{E}}_{w \sim \rho_\beta} [\rhowtree] & = \frac{1}{Z} \sum_w e^{-\beta(-C(w))} \rhowtree \\ & = \frac{1}{Z} \sum_w e^{-\beta(-C(w))} \frac{1}{|E_T|} \sum_{\langle i,j\rangle \in E_T}|w_{\langle i,j\rangle}\rangle\langle w_{\langle i,j\rangle}| \\ & = \frac{1}{|E_T|} \sum_{\langle i,j\rangle \in E_T} \frac{1}{Z} \sum_w e^{-\beta(-C(w))} |w_{\langle i,j\rangle}\rangle\langle w_{\langle i,j\rangle}| \\ & = \frac{1}{|E_T|}\sum_{\langle i,j\rangle \in E_T} \rhobetaij \\ & = \rhobetatree, \end{align} where $Z=\Tr\big(e^{-\beta (-C)}\big)$. In going to the second and fourth lines above, we use the definitions of $\rhowtree$ and $\rhobetatree$ in~\eqref{eq:local_ensemble} and~\eqref{eq:rhobetatree}, respectively. Since the average of $\rhowtree$ over $w$ matches $\rhobetatree$, to show that $\rhowtree$ is close to $\rhobetatree$ for typical $w$, it suffices to show that $\rhowtree$ has small variance with respect to choice of $w$. Treating $w$ as a random variable distributed according to the thermal ensemble, $\rhowtree$ defined in \eqref{eq:local_ensemble} becomes an average over random variables $|w_{\langle i,j\rangle}\rangle\langle w_{\langle i,j\rangle}|$ defined on each neighborhood. If these variables have small correlation across different neighborhoods, then $\rhowtree$ has small variance. 
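The smallness of this variance, and hence of $\varepsilon_w$, can be checked exactly on tiny instances. The sketch below (ours) computes the exact thermal average of $\varepsilon_w$ for MaxCut on cycles with radius-$1$ neighborhoods and confirms that it shrinks as the graph grows.

```python
from collections import Counter
from itertools import product
from math import exp

# Deterministic check (our own toy version of the argument above): the exact
# thermal average of eps_w = ||rho_{beta,tree} - rho_{w,tree}||_1 shrinks as
# the cycle grows. MaxCut on a cycle, radius-1 neighborhoods, fixed beta.
def mean_eps(n, beta=0.3):
    C = lambda z: sum(-z[i] * z[(i + 1) % n] for i in range(n))
    nbhd = lambda z, i: tuple(z[(i + k) % n] for k in (-1, 0, 1, 2))
    weights = {z: exp(beta * C(z)) for z in product((-1, 1), repeat=n)}
    Z = sum(weights.values())
    rho_beta = Counter()
    for z, wt in weights.items():
        for i in range(n):
            rho_beta[nbhd(z, i)] += wt / (Z * n)
    total = 0.0
    for z, wt in weights.items():
        rho_w = Counter(nbhd(z, i) for i in range(n))
        total += (wt / Z) * sum(abs(rho_beta[l] - rho_w[l] / n)
                                for l in rho_beta)
    return total

e8, e12 = mean_eps(8), mean_eps(12)
assert e12 < e8      # consistent with eps_w shrinking like m^{-1/2}
```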
So $\varepsilon_w$ is small for typical strings sampled from the thermal distribution, assuming the thermal distribution has only small spatial correlations across the graph. At fixed temperature, we therefore expect the variance of $\rhowtree$ to shrink with graph size as $m^{-1}$, and $\varepsilon_w \propto m^{-1/2}$. This is what we find for the numerical instances in Section~\ref{sec:thermal_num}. When is it reasonable to assume that $\rho_\beta$ has small spatial correlations across the graph? At infinite temperature, the case examined in Section~\ref{sec:thermal_1}, the correlation is zero, so $\varepsilon_w \propto m^{-1/2}$, consistent with our earlier conclusion. In the high-temperature regime (where the strings are ``good,'' i.e., better than random guesses, but not too good), we similarly expect $\rho_\beta$ to have small correlations. At low temperatures, depending on the specific combinatorial problem, there may be thermal phase transitions and the development of long-range correlations. However, since we are interested in warm starts which can be efficiently generated by classical algorithms, this low-temperature regime may be less relevant in practice. \subsubsection{Numerical evidence for the thermality of typical good strings\label{sec:thermal_num}} In Section~\ref{subsubsection:thermal_proof}, we proved that when using a warm start with a classical string $w$, the cost function cannot improve much when $w$ is ``locally thermal.'' More precisely, we showed that the improvement of the cost function per edge is controlled by the thermality coefficient \begin{align} \varepsilon_w = \| \rhobetatree- \rhowtree\|_1. \end{align} When $w$ is locally thermal, meaning $\varepsilon_w$ is small, the average of $w$ over local neighborhoods looks like the thermal average over different global strings. In this section, we present numerical evidence that, for a given cost function, typical good strings $w$ are indeed approximately locally thermal.
Moreover, it appears that the magnitude of the thermality coefficient diminishes with larger graph sizes. In our investigations below, we only perform numerical experiments for MaxCut, and we only analyze typical good strings associated with one fixed temperature. We leave the analysis of other problems besides MaxCut to future investigations. We study MaxCut on random 3-regular graphs of $n=10000$ to $50000$ vertices. The local thermality of typical strings is expected to hold with high accuracy only in the limit of large graphs because it involves the concentration of averages over neighborhoods. Fixing a temperature $T=1.75$, corresponding to a cut fraction of $\sim 0.64$, we use simulated annealing to sample classical strings $w$ from the thermal distribution $\rho_\beta$ of \eqref{eq:rhobeta_defn}. That is, we sample strings $w$ with probabilities proportional to $ e^{-\beta(-C(w))}$. We consider $k=10$ samples, $\{w_\alpha\}$ for $\alpha=1,\ldots,k$. For each string $w_\alpha$, it is straightforward to compute the associated density matrix $\rho_{w_\alpha,\textrm{tree}}$ of $\eqref{eq:local_ensemble}$. We consider tree neighborhoods of radius $r=2$, i.e., all vertices within distance $2$ of a given edge $\langle i,j\rangle$. For $d=3$, these have 14 vertices, so $\rho_{w_\alpha,\textrm{tree}}$ lives on a Hilbert space of 14 qubits. To verify that the sampled $w_\alpha$ are locally thermal, ideally we would calculate $\norm{\rho_{w_\alpha,\textrm{tree}} - \rhobetatree}_1 = \varepsilon_w$. Unfortunately, for these large graphs with $\geq 10000$ vertices, direct computation of $\rho_\beta$ and $\rhobetatree$ is impossible in practice. To proceed, we restate \eqref{eq:sampledrhobeta}, which says that \begin{align} \label{eq:sampledrhobeta2} \operatorname*{\mathbb{E}}_{w \sim \rho_\beta} [\rhowtree] = \rhobetatree, \end{align} where the expectation is with respect to samples of $w$ from the thermal distribution. 
That is, the average of $\rhowtree$ over thermal samples matches $\rhobetatree$. In fact we find that for our numerically generated samples $w_\alpha$, all of the associated $\rho_{w_\alpha,\textrm{tree}}$ are approximately equal. Thus even with our few $k=10$ samples, the LHS of \eqref{eq:sampledrhobeta2} is already well-approximated by this empirical sample average, and we consider the average deviation \begin{align} \label{eq:avg_deviation} \mathcal{E} = \frac{1}{k} \sum_{\alpha=1}^k \norm{\rho_{w_\alpha,\textrm{tree}} - \overline{\rho_{w,\textrm{tree}}}}_1, \end{align} where \begin{align} \overline{\rho_{w,\textrm{tree}}} = \frac{1}{k} \sum_{\alpha=1}^k \rho_{w_\alpha,\textrm{tree}} . \end{align} The value of $\mathcal{E}$ measures the average deviation of $\rho_{w_\alpha,\textrm{tree}}$ over samples. When $\mathcal{E}$ is small, it follows that typical samples $w$ are approximately locally thermal. We plot the average deviation $\mathcal{E}$ as a function of the number of vertices $n$ in Figure \ref{fig:nbhd_counting}, with $10$ samples for each $n$. The fit line shows a fit with $\mathcal{E} \propto n^{-1/2}$. The figure suggests that typical strings $w$ at this temperature $T=1.75$ are approximately locally thermal with $\varepsilon_w$ shrinking as $n^{-1/2}$. This behavior matches the estimate in Section~\ref{sec:why_small}. \begin{figure} \centering \includegraphics[width=0.5\linewidth]{nbhd_counting_fig.pdf} \caption{\textit{Log-log plot of the average sample deviation $\mathcal{E}$ from \eqref{eq:avg_deviation} as a function of the number of vertices $n$. The fit line shows a one-parameter fit with $\mathcal{E} = c n^{-1/2}$ for the parameter $c$.}} \label{fig:nbhd_counting} \end{figure} \subsection{Starting from a bad string} A bad string is one whose cost function per edge $C(w)/m$ is less than the average $\bar{c}$. We now explain that, in contrast to good strings, they can all be improved by the lowest depth QAOA. 
However, this does not serve as a counterexample to our claims. Start with the state $|w\rangle$ for any $w$ and apply the one-parameter $p=1/2$ QAOA with $\beta=\pi/4$. This rotates the state into a uniform superposition over all computational basis states, each with an amplitude of $2^{-\frac{n}{2}}$ times a phase that depends on $w$. In this new state the expected value of the cost function operator $C$ is the mean of the cost function over all strings. In particular, all bad strings improve. This is consistent with \eqref{eq:chi_zero} because bad strings form a vanishingly small fraction of all strings as the graph size goes to infinity. What about the thermal argument of Section~\ref{sec:thermal_2}? At an inverse temperature $\beta$ the probability of finding a string $w$ with cost $C(w)$ is proportional to $e^{\beta C(w)}$. This implies that for $\beta$ positive, when sampling from this distribution the expected value of $C$ is above the mean of $C$. To produce a thermal distribution of bad strings therefore requires a negative temperature. However, the arguments in the previous sections require positive temperature when using the minimum energy identity in \eqref{eq:min_energy_fixed_entropy}. So these arguments do not apply when starting with a bad string. \section{Starting from superpositions}\label{sec:uniform} The argument of Section~\ref{sec:thermal_2} may be generalized to initial states $|v\rangle$ beyond classical strings.
In analogy to $\rhowtree$, we define the density matrix \begin{align} \label{eq:local_ensemble_v} \rho_{|v\rangle,\textrm{tree}} = \frac{1}{|E_T|}\sum_{\langle i,j\rangle \in E_T} \mathrm{Tr}_{G\backslash T_{\langle i,j\rangle}} |v\rangle\langle v| , \end{align} and the corresponding thermality coefficient \begin{align} \label{eq:local_therm_v} \varepsilon_{|v\rangle} = \| \rhobetatree- \rho_{|v\rangle,\textrm{tree}} \|_1 , \end{align} with $\beta$ chosen so that $ \mathrm{Tr}(C\rho_\beta) =\langle v | C | v \rangle$. Then we reach a conclusion identical to the original result in equation~\eqref{eq:cbar0}, but with $w$ replaced by $|v\rangle$. Naturally we first ask about the usual initial state for the ordinary QAOA without a warm start, namely the uniform superposition over classical strings: $|s\rangle = |+\rangle^{\otimes n}$. In practice, the ordinary QAOA improves the cost function, so the statistical argument should not be applicable in this case. The inverse temperature associated to $|s\rangle$ is $\beta=0$, so $\rho_\beta$ is maximally mixed. Then we find that $\varepsilon_{|s\rangle}= 2 - 2/2^k$, where $k$ is the number of bits in the local tree neighborhood. Indeed, we could not have obtained a small value for $\varepsilon_{\ket{s}}$ because the ordinary QAOA exhibits substantial improvement for the initial state $|s\rangle$. What about starting in a uniform superposition of basis states with the same good value of the cost function? It may be difficult to use a classical algorithm to produce such a superposition, but here we simply explore what happens if one has such a state in hand. We randomly sample a $16$-vertex 3-regular graph whose largest cut value is $22$. The mean over all strings of the MaxCut cost function is $12$. We begin with the uniform superposition of all strings whose cost is exactly $12$. The overlap (amplitude squared) of this normalized state with the uniform superposition over all strings $|s\rangle$ is $0.162$.
At $p=3/2$, optimal parameters take the cost function to $12.064$. At $p=5/2$ and $7/2$, the improvement is to $12.173$ and $12.246$, respectively. This is not much improvement and should be contrasted with the usual QAOA starting state $|s\rangle$, where the expected value of the cost is $12$ and where the QAOA makes steady good progress as the depth increases. For warm-start superpositions at higher values of the cost, such as $13$ up to $18$, we see no improvements at all for $p=3/2,\ 5/2$, and $7/2$. For a superposition $|v\rangle$ over a constant number (with respect to $n$) of randomly chosen strings $w$ at fixed cost, we expect the thermality coefficient $\varepsilon_{|v\rangle}$ to be small, so the statistical argument of Section~\ref{sec:thermal} applies. However, for a uniform superposition $|v\rangle$ over \textit{all} strings at fixed cost, the thermality coefficient $\varepsilon_{|v\rangle}$ need not be small, and the statistical argument may not apply. Explaining the apparent failure of the warm-start QAOA for this initial state remains an open question. \section{Maximum Independent Set}\label{sec:MIS} In this section, we show using another example that the QAOA gets stuck starting on a good string. We look at the Maximum Independent Set problem: given a graph $G=(V,E)$, the goal is to find a maximum size subset of vertices which have no edge connecting any pair of them. Formally, we want to find a string of bits $b$ such that the Hamming weight $W(b) = \sum_i b_i$ is maximized, subject to the constraint that $\sum_{\langle i,j\rangle \in E}b_ib_j = 0$. Note that in this section we are using bits $b_i$ that are $\{0,1\}$-valued, with corresponding quantum operators $\hat{b}_i = (I-Z_i)/2$. 
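As a minimal illustration (ours) of the constrained formulation, a brute-force search on a $6$-cycle, whose largest independent set has size $3$:

```python
from itertools import product

# Brute-force illustration (our own sketch) of the problem statement:
# maximize the Hamming weight W(b) subject to sum_{<i,j>} b_i b_j = 0,
# here on a 6-cycle, whose maximum independent set has size 3.
n = 6
edges = [(i, (i + 1) % n) for i in range(n)]

best = max(sum(b) for b in product((0, 1), repeat=n)
           if all(b[i] * b[j] == 0 for i, j in edges))
assert best == 3
```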
To make this problem amenable to the usual QAOA, where any bit string is a valid input, we use the relaxed cost function \begin{align} C(b) = \sum_{i\in V}b_i - \sum_{\langle i,j\rangle \in E}b_ib_j \equiv W(b)-K(b), \label{eq:MIS_C} \end{align} which we aim to maximize. Given a bit string $b$ with cost $C(b)$, we can prune it, fixing violations $b_ib_j = 1$ one at a time by setting either $b_i$ or $b_j$ equal to 0. Fixing one violation decreases $W$ by $1$ but also decreases $K$ by at least $1$, so $C$ does not decrease. Continuing to prune until there are no violations, we obtain an independent set of size at least $C(b)$. This also shows that the maximum of $C$ is the size of the largest independent set. Note that for a $d$-regular graph we can write the cost function as \begin{align} C(b) = W(b)-K(b) = \sum_{\langle j,k\rangle \in E}\left[\frac{1}{d}(b_j+b_k) - b_j b_k\right], \end{align} which shows that this cost function is a sum of terms which act the same way on each edge, satisfying the uniformity assumption required for our statistical argument in Section~\ref{sec:thermal}. We now analyze the performance of the $p=1/2$ QAOA on a $d$-regular graph starting from a string $w$. Recall that in this section the bits $w_i$ are $\{0,1\}$-valued. Using \eqref{eq:MIS_C}, for all $\beta$, we have \begin{align} \bra{w}e^{i\beta B}Ce^{-i\beta B}\ket{w} &= \sum_{i\in V}\bra{w} e^{i\beta X_i} \hat{b}_i e^{-i\beta X_i}\ket{w}- \sum_{\langle i, j \rangle\in E}\bra{w} e^{i\beta X_i} \hat{b}_i e^{-i\beta X_i}e^{i\beta X_j} \hat{b}_j e^{-i\beta X_j}\ket{w}\\ &= \sum_{i\in V}\bra{w} \left(\sin^2\beta +[1-2\sin^2\beta]\hat{b}_i\right)\ket{w}\nonumber\\ &\quad -\sum_{\langle ij \rangle\in E}\bra{w} \left(\sin^2\beta +[1-2\sin^2\beta]\hat{b}_i\right)\left(\sin^2\beta +[1-2\sin^2\beta]\hat{b}_j\right) \ket{w} \\ &= W(w)-K(w) +\frac{d}{2}\left[4W(w)-n \right]\sin^4\beta+[n-W(w)(d+2)]\sin^2\beta +K(w) \sin^2(2\beta) \label{eq:mis_one_parameter}. 
\end{align} Let us now assume that we start with a string $w$ which corresponds to a valid independent set, meaning that $K(w)=0$. The corresponding cost function value is then just $W(w)$, the Hamming weight of the initial string $w$. We can optimize over $\beta$ noting that we have a quadratic function of $\sin^2\beta$. The optimum occurs when \begin{align} \sin^2\beta &=\frac{1}{d}\frac{n-W(w)(d+2)}{n-4W(w)}.\label{eq:optimal_beta_MIS_1/2} \end{align} Substituting this value back into equation~\eqref{eq:mis_one_parameter} to get the optimal value of the cost function, we find \begin{align} \bra{w}e^{i\beta B}Ce^{-i\beta B}\ket{w} &= W(w)+\frac{1}{2d} \frac{(n-W(w)(d+2))^2}{n-4W(w)}. \label{eq:mis_1/2_beta_new_optimum} \end{align} For improvement we need the second term in~\eqref{eq:mis_1/2_beta_new_optimum} to be positive. This requires $W<n/4$. Since $\sin^2\beta$ is also positive, we require the stronger condition $W<n/(d+2)$. Therefore, improvement can only be made if $C(w) = W(w) < n/(d+2)$. Now, suppose that our starting string is an independent set obtained by running a standard greedy algorithm which achieves an independent set of size at least $n/(d+1)$. Then $K = 0$ and $W\ge n/(d+1)$, so the condition is not met, and no improvement is possible for $p=1/2$. Note that starting with the empty set, that is with $W=0$, the $p=1/2$ QAOA will get to an independent set of size $n/(2d)$. The previous discussion was for the $p=1/2$ (1-parameter) QAOA starting on a good string. We now present numerical evidence for higher $p$. We randomly sample a $16$ vertex $3$-regular graph whose largest independent set size is $7$, with degeneracy $3$. The average over all strings of the cost $C(b)$ is $2$, which is therefore also the expected value of $C$ in the quantum state $|s \rangle$, where the usual QAOA starts. In Table~\ref{tab:standard_mis_16} we show how the usual QAOA performs when starting with the state $|s \rangle$. 
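The closed form \eqref{eq:mis_one_parameter} can be checked against direct statevector simulation on a small instance. The sketch below (ours) uses a $6$-cycle ($d=2$, triangle-free) and an independent-set starting string:

```python
import numpy as np

# Check (our own sketch) of the closed form for <w|e^{i b B} C e^{-i b B}|w>
# against direct statevector simulation. Instance: a 6-cycle, d = 2,
# triangle-free; bits are {0,1}-valued as in the text.
n, d = 6, 2
edges = [(i, (i + 1) % n) for i in range(n)]

def bits(z):
    return [(z >> i) & 1 for i in range(n)]

# Diagonal of C(b) = W(b) - K(b) over the 2^n computational basis states.
C_diag = np.array([sum(b) - sum(b[i] * b[j] for i, j in edges)
                   for b in map(bits, range(2 ** n))], dtype=float)

def expected_cost(w, beta):
    """<w|e^{i beta B} C e^{-i beta B}|w> with B = sum_i X_i, by simulation."""
    R = np.array([[np.cos(beta), -1j * np.sin(beta)],
                  [-1j * np.sin(beta), np.cos(beta)]])   # exp(-i beta X)
    psi = np.zeros(2 ** n, dtype=complex)
    psi[sum(wi << i for i, wi in enumerate(w))] = 1.0
    psi = psi.reshape([2] * n)
    for q in range(n):              # apply the same rotation to every qubit
        psi = np.moveaxis(np.tensordot(R, np.moveaxis(psi, q, 0), axes=1), 0, q)
    psi = psi.reshape(-1)
    return float(np.real(np.vdot(psi, C_diag * psi)))

def closed_form(w, beta):
    W, K = sum(w), sum(w[i] * w[j] for i, j in edges)
    s = np.sin(beta) ** 2
    return (W - K + (d / 2) * (4 * W - n) * s ** 2
            + (n - W * (d + 2)) * s + K * np.sin(2 * beta) ** 2)

w = [1, 0, 0, 1, 0, 0]              # an independent set: W = 2, K = 0
for beta in (0.3, 0.7, 1.1):
    assert abs(expected_cost(w, beta) - closed_form(w, beta)) < 1e-10
```

For $W=1 < n/(d+2)$, \eqref{eq:optimal_beta_MIS_1/2} gives $\sin^2\beta = 1/2$, and the simulated cost at $\beta=\pi/4$ improves from $1$ to $3/2$, in agreement with \eqref{eq:mis_1/2_beta_new_optimum}.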
Note that as $p$ gets bigger, performance improves, steadily approaching the largest independent set possible which has size $7$. \begin{table}[t!] \begin{center} \begin{tabular}{r|cccc} \multicolumn{5}{c}{\textsc{Warm start at $C_{\text{MIS}}(w) = 4$ with 371 independent sets $w$}}\\ \hline $p$ & $3/2$ & $5/2$ & $7/2$ & $9/2$ \\ Number of strings improved & 20 & 101 & 283 & 368 \\ Mean cost of improved strings & 4.06 & 4.06 & 4.11 & 4.23 \\ Largest cost of improved strings & 4.10 & 4.27 & 4.47 & 4.64 \\ \multicolumn{5}{c}{\rule{0pt}{2ex}} \\ \multicolumn{5}{c}{\textsc{Warm start at $C_{\text{MIS}}(w)= 5$ with 230 independent sets $w$}}\\ \hline $p$ & $3/2$ & $5/2$ & $7/2$ & $9/2$ \\ Number of strings improved & 0 & 0 & 1 & 1 \\ Mean cost of improved strings & $-$ & $-$ & 5.002 & 5.002 \\ Largest cost of improved strings & $-$ & $-$ & 5.002 & 5.002 \\ \end{tabular}\\ \caption{\textit{Maximum Independent Set. QAOA improvement from good initial classical strings (corresponding to valid independent sets) on a 16 vertex, 3-regular graph with a largest independent set size of 7. The number of parameters is $2p$.} \label{tab:warm_start_mis_16}} \begin{tabular}{r|cccc} \multicolumn{5}{c}{\rule{0pt}{2ex}} \\ \multicolumn{5}{c}{\textsc{Standard QAOA}}\\ \hline $p$ & $1$ & $2$ & $3$ & $4$ \\ Expected cost & 4.40 & 5.14 & 5.61 & 5.86\\ \end{tabular} \end{center} \caption{\textit{Maximum Independent Set. Performance of the standard QAOA for the same instance as Table~\ref{tab:warm_start_mis_16}, starting in the uniform superposition where the expected value of $C_{\text{MIS}}$ is $2$. The number of parameters is $2p$. Note that there is no indication that the standard QAOA gets stuck.} \label{tab:standard_mis_16}} \end{table} We now look at what happens when starting with a good string. We only look at starting strings which are valid independent sets, which means that $K(w)=0$.
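The restriction to $K(w)=0$ strings connects back to the pruning argument at the start of this section. As an illustrative Python sketch (our own naming, not the paper's code), pruning never decreases the relaxed cost and always outputs a valid independent set of size at least $C(b)$:

```python
def relaxed_cost(bits, edges):
    # C(b) = W(b) - K(b): Hamming weight minus number of violated edges
    W = sum(bits)
    K = sum(bits[i] * bits[j] for i, j in edges)
    return W - K

def prune(bits, edges):
    """Fix violations b_i b_j = 1 one at a time by zeroing one endpoint.

    Each fix lowers W by 1 and K by at least 1, so C never decreases;
    the result is an independent set of size >= C of the input string."""
    bits = list(bits)
    while True:
        violated = [(i, j) for i, j in edges if bits[i] == 1 and bits[j] == 1]
        if not violated:
            return bits
        bits[violated[0][0]] = 0
```

Running `prune` on the all-ones string of any graph, for instance, terminates with an independent set whose size is at least the relaxed cost of the input.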
Table~\ref{tab:warm_start_mis_16} shows data for starting the QAOA with strings of cost $4$ and $5$. Note that there is either no improvement or very small improvement, similar to what we found in the MaxCut example. \section{The Sherrington-Kirkpatrick model}\label{sec:SK} So far, we have looked at MaxCut and Maximum Independent Set on bounded-degree graphs. Both problems involve local cost functions composed of identical edge terms, and we expected their behavior starting from good classical strings to be similar. This expectation was met. But the Sherrington-Kirkpatrick (SK) model is different in that it is defined on the complete graph, and on each edge there is a coupling with a random sign: \begin{align} C_{\text{SK}} = \sum_{i<j}J_{i,j}Z_i Z_j, \end{align} where each $J_{i,j}$ is $+1$ or $-1$ with probability $1/2$. Parisi discovered that, for any typical set of $J$'s, the maximum energy divided by $n^{\frac{3}{2}}$ converges, in the limit as $n$ goes to infinity, to a computable constant approximately equal to $0.763166$. It is known that the QAOA starting in the uniform superposition $|s\rangle$ makes good progress toward the optimum as the depth increases~\cite{SK_2019}. Here it is not clear that our thermal arguments for warm starts getting stuck apply. We are on a complete graph, and each edge sees a different neighborhood because of the varying $J$'s. Therefore, we look numerically at a random instance generated at $14$ bits. The lowest energy is $-33$ and the highest is $+35$. The performance of the usual QAOA starting on the state $|s\rangle$ is shown in Table~\ref{tab:standard_sk_14}. Turning to warm starts in Table~\ref{tab:warm_start_sk_14}, we look at $50$ randomly chosen starting strings with energy $11$. We see zero or little improvement out to depth $p=7/2$ for all strings besides four notable exceptions, which we will discuss below. At an energy of $17$, there are no unusual improvements and the number of strings which improve is fewer than at $11$.
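For concreteness, here is a small Python sketch of the SK cost (our own code; we take spins $z_i \in \{+1,-1\}$ and sum each pair once, which is an assumed convention). It also checks the invariance of the cost under a global spin flip, since each term contains two spins:

```python
import random

def sk_cost(z, J):
    """C_SK(z) = sum over pairs i < j of J[i][j] * z_i * z_j,
    with spins z_i in {+1, -1} and couplings J[i][j] in {+1, -1}."""
    n = len(z)
    return sum(J[i][j] * z[i] * z[j]
               for i in range(n) for j in range(i + 1, n))

# Random instance: each coupling is +1 or -1 with probability 1/2.
random.seed(0)
n = 14
J = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        J[i][j] = random.choice([-1, 1])

z = [random.choice([-1, 1]) for _ in range(n)]
# Global flip z -> -z leaves every term J_ij * z_i * z_j unchanged.
assert sk_cost(z, J) == sk_cost([-s for s in z], J)
```

This global spin-flip symmetry means each cost value is realized by pairs of strings related by complementation.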
Aside from the exceptions which we explain next, warm starts in the SK model also get stuck as they do in our other examples. \begin{table}[t!] \begin{center} \begin{tabular}{r|ccc} \multicolumn{4}{c}{\textsc{Warm start at $C_{\text{SK}}(w) = 11$ }}\\ \multicolumn{4}{c}{746 classical strings $w$, 50 samples, 4 exceptions removed}\\ \hline $p$ & $3/2$ & $5/2$ & $7/2$ \\ Number of strings improved & 9 & 17 & 28 \\ Mean cost of improved strings & $11.03$ & $11.23$ & $11.26$\\ Largest cost of improved strings & $11.05$ & $12.02$ & $12.96$ \\ \multicolumn{4}{c}{\rule{0pt}{2ex}} \\ \multicolumn{4}{c}{\textsc{Warm start at $C_{\text{SK}}(w)= 17$}}\\ \multicolumn{4}{c}{262 classical strings $w$, 50 samples}\\ \hline $p$ & $3/2$ & $5/2$ & $7/2$ \\ Number of strings improved & 1 & 4 & 5 \\ Mean cost of improved strings & 17.003 & 17.18 & 17.28 \\ Largest cost of improved strings & 17.003 & 17.39 & 17.95 \\ \end{tabular}\\ \caption{\textit{Sherrington-Kirkpatrick Model. QAOA improvement from good initial classical strings on a 14-vertex instance of the SK model with lowest energy $-33$ and highest energy $+35$. The number of parameters is $2p$. At energy $11$, improvement was possible using magic angles on four classical strings, two of which went to $19$ and two to $27$.}\label{tab:warm_start_sk_14}} \begin{tabular}{r|cccc} \multicolumn{5}{c}{\rule{0pt}{2ex}} \\ \multicolumn{5}{c}{\textsc{Standard QAOA}}\\ \hline $p$ & $1$ & $2$ & $3$ & $4$ \\ Expected cost & 15.05 & 20.40 & 24.27 & 26.19 \\ \end{tabular} \end{center} \caption{\textit{Sherrington-Kirkpatrick Model. Performance of the standard QAOA for the same instance as Table~\ref{tab:warm_start_sk_14}, starting in the uniform superposition where the expected value of $C_{\text{SK}}$ is zero. The number of parameters is $2p$. Note the steady improvement with depth. } \label{tab:standard_sk_14}} \end{table} We now discuss the exceptions. 
For the SK model on an even number of sites, with any set of couplings $\{J_{i,j}\}$, there are parameters at $p=3/2$ which create a cat state starting on any computational basis state. The ``magic angle'' unitary which creates the cat state is given by: \begin{align} U_{\text{magic}} = e^{-i \frac{\pi}{4} B} e^{i \frac{\pi}{4} C_{\text{SK}}} e^{i\frac{\pi}{4} B}. \end{align} We show in Appendix~\ref{sec:magic_angle} that this unitary takes any initial string to a cat state of two strings given (up to global phase) by \begin{align} U_{\text{magic}} \ket{w} = \frac{1}{\sqrt{2}}\left(\ket{w'}+e^{i\theta} \ket{-w'}\right), \end{align} where $\theta$ is a known phase, and where \begin{align} \ket{w'} = \prod_{J_{i, j} = -1}(-iY_i Y_j)\ket{w}. \end{align} In other words, evolving with the QAOA unitary at these parameters takes $\ket{w}$ to a cat state involving $\ket{w'}$ and $\ket{-w'}$, where $w'$ is obtained from $w$ by flipping the $i$th bit if the product over $j$ of $J_{i,j}$ is $-1$. For a typical $\{J_{i,j}\}$ this makes it seem that $C(w')$ is essentially random, not different in distribution from $C(w'')$ where $w''$ is chosen at random. If $w$ is a good string, it is unlikely that $w'$ will be better. On rare occasions, this transformation can improve the cost function. However, improvement becomes less likely for better initial strings. We see this in our numerics: when starting from strings with energy $17$, none of the 50 warm start samples improve under the magic angle unitary, whereas four do improve starting at the lower energy of $11$. \section{Conclusion}\label{sec:conclusion} In this paper we looked at the QAOA starting in a quantum state associated with a single bit string $w$. The main case of interest is when the string $w$ is the output of a good classical algorithm. Through repeated applications of the circuit to $|w\rangle$, the QAOA parameters can be optimized for any starting string $w$, resulting in a $w$-dependent unitary $U_w$.
We began this project by doing numerical experiments: warm-starting the QAOA on low bit-number examples of MaxCut. The results are unvarying. For almost all good initial strings, we see zero improvement from the starting cost value. In other cases, we see only very small improvements, even with $p$ large enough so that the QAOA operator ``sees'' the whole graph. At $p=3/2$ and $5/2$ we warm start large instances (300 bits) of MaxCut and see no improvements at all in our samples. We provide some explanations for these consistent observations. First, we find conditions under which small but nonzero improvement is possible for MaxCut on 3-regular graphs at $p=3/2$, with the QAOA parameters small. These conditions become increasingly difficult to meet for larger instances. While the analysis in this case covers only small QAOA parameters, we nevertheless find that all the warm-start improvements observed in our numerical experiments met these conditions. This suggests that improvement is generically possible only when it is possible at small angles. Further, we show that on average over \emph{all} starting strings, no improvement is possible in the limit of large problem instances, even if the QAOA parameters are chosen optimally for each starting string. Since it is also the case that almost all strings have cost $C(w)$ very close to the average of $C$, this argument really shows that generic strings with a typical cost value cannot be improved by the QAOA. While this result formally says little about strings with high cost function, it morally supports the idea that the QAOA cannot be successfully warm-started from a single string: if the QAOA fails to improve even average strings, it would be surprising if it could improve good strings. Toward our interest in actual warm starts, we next focused on the case of good strings, that is, those with large cost values. We show that when these strings are ``locally thermal,'' they cannot be significantly improved. 
We argue that good strings produced by an algorithm such as simulated annealing would be locally thermal. For MaxCut at large sizes, we observe numerically that typical good strings are indeed locally thermal to good approximation, thus guaranteeing the failure of warm-start QAOA for MaxCut at constant depth. All of the arguments presented in this paper work only in restricted circumstances. For example, all of our statistics-based arguments in Section~\ref{sec:thermal} can only be expected to apply in the limit of large graph sizes. In contrast, our numerical experiments appear much more robust; even at small system sizes ($\le 16$ qubits) we universally observe zero or minuscule improvements on single-string warm starts. Moreover, this behavior persists even at QAOA depths which are large relative to those system sizes, and depths at which the standard QAOA is seen to reach a near-optimal solution for these small sizes. We also observe the same phenomenon numerically when we warm start on a superposition of many high-cost strings, or when examining the Sherrington-Kirkpatrick model, both problems for which we have very limited understanding even at large system sizes. All of this evidence suggests that there are deeper underlying mechanisms responsible for the failure of these kinds of warm starts. We leave as future work the determination, and perhaps circumvention, of these hidden mechanisms. The above results stand in stark contrast to the usual QAOA, which starts in the quantum state $|s\rangle$ which is the uniform superposition over all possible bit string states. The failure of the single-string warm-start QAOA is particularly striking when compared to the conventional QAOA. While the reason is not fully understood, there is something about the uniform superposition which allows the usual QAOA to overtake its warm-start counterpart and make consistent progress toward the optimum. \textbf{Acknowledgments} M.C. 
acknowledges support from DOE CSG award fellowship (DESC0020347). D.R. acknowledges support from NTT (Grant AGMT DTD 9/24/20). E.T. acknowledges funding received from DARPA 134371-5113608, and DOD grant award KK2014. M.C. and E.F. thank Boaz Barak, Beatrice Nash, and Leo Zhou for early discussions of these ideas. D.R. and E.T. thank Aram Harrow for discussion. \printbibliography
\section{Introduction} As is well known, smooth SU(N) gauge fields in a 4-dimensional compact differentiable manifold M have an associated integer topological charge \begin{equation} Q = - \frac{1}{32 \pi^2} \int_M d^4x \epsilon_{\mu\nu\rho\sigma} \hbox{tr}\left[F_{\mu\nu}F_{\rho\sigma}\right] \end{equation} where $F_{\mu\nu}$ is the field strength tensor of the gauge potential. This is not merely a mathematical curiosity; it plays a fundamental role in the understanding of the $U_A(1)$ problem through the anomaly \cite{Anomaly1,Anomaly2,THooft}, and the Witten-Veneziano formula \cite{Witten,Veneziano}. It is also crucial for the investigation of the $\theta$ vacuum in QCD and the strong CP problem \cite{Axion1}, and therefore for the current experimental searches for axions \cite{Axion2,Axion3}. The issue of obtaining a theoretically sound and practical definition for the topological charge of lattice gauge fields is an old one. Several definitions exist, each with its own advantages and disadvantages. Some such definitions are purely gluonic, essentially transcribing the continuum definition to the lattice, whereas others take advantage of the index theorem and compute the charge as the index of a conveniently chosen fermionic operator. In \cite{Adams1}, Adams introduced a new definition of the index of a staggered Dirac operator, based on the spectral flow of a related hermitian operator. Some numerical results were obtained there for synthetic configurations in the $2D$ U(1) model. The purpose of this paper is to study systematically Adams' definition in 4D (quenched) QCD (preliminary results were presented in \cite{latt11}). \section{Definition of the topological charge} The basic observation of Adams in \cite{Adams1} is that when considering the spectral flow definition of the topological charge in the continuum, there is some freedom in the choice of the relevant hermitian operator.
By a suitable choice, we can construct an operator which, when implemented on the lattice with a staggered Dirac operator, has all the required properties. In the continuum, for a given gauge field, one usually considers the spectral flow of the hermitian operator \begin{equation} H(m) = \gamma_5 \left(D - m\right) \label{H1} \end{equation} as a function of $m$. Because of the key property that \begin{equation} H(m)^2 = D^\dagger D + m^2, \label{keyproperty} \end{equation} if we trace the flow of eigenvalues $\left\{\lambda(m)\right\}$ of $H$, the ones corresponding to the zero modes of $D$, and only those, will change sign at the origin $m = 0$, each with a slope $\pm 1$ which depends on the chirality of the corresponding mode. This gives us the index of $D$, and through the index theorem, the topological charge $Q$ of the corresponding gauge configuration. On the lattice, we can substitute in (\ref{H1}) $D$ by the discretized Wilson Dirac operator, \begin{equation} H_W(m) = \gamma_5 \left(D_W - m\right). \end{equation} Now the index can be obtained similarly to the continuum, by counting the number of eigenvalues of $H(m)$ that change sign close to the origin $m = 0$, taking into account the slope of such crossings. If we try to do the same with the lattice staggered Dirac operator $D_{st}$, we realize that the procedure does not work anymore, as the corresponding $H$ fails to be hermitian. The key innovation in Adams \cite{Adams1} is the realization that one can use a different $H$ in the continuum to accomplish the same task, namely \begin{equation} H(m) = iD - m \gamma_5. \label{H2} \end{equation} This operator is hermitian and verifies (\ref{keyproperty}), and therefore its spectral flow also gives the index of $D$.
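As a minimal illustration (a toy $2\times 2$ example of our own, not a lattice operator), one can check numerically that with $D$ anti-hermitian and anticommuting with $\gamma_5$, the operator $H(m) = iD - m\gamma_5$ is hermitian and satisfies the key property $H(m)^2 = D^\dagger D + m^2$:

```python
import numpy as np

# Toy 2x2 illustration (not a lattice operator): gamma5 = diag(1, -1)
# and an anti-hermitian D that anticommutes with gamma5.
gamma5 = np.diag([1.0, -1.0]).astype(complex)
a = 0.7 + 0.2j
D = np.array([[0, a], [-np.conj(a), 0]])

for m in (0.0, 0.5, 1.3):
    H = 1j * D - m * gamma5
    assert np.allclose(H, H.conj().T)            # H(m) is hermitian
    # key property: H(m)^2 = D^dagger D + m^2
    assert np.allclose(H @ H, D.conj().T @ D + m ** 2 * np.eye(2))

# For a zero mode of D (set a = 0), H(m) = -m * gamma5: the two
# eigenvalue flows are -m and +m, crossing zero at m = 0 with slopes
# determined by the chirality of the mode.
```

The assertions pass for any $m$, reflecting that hermiticity and the key property hold identically, not just at special mass values.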
But now we can substitute in (\ref{H2}) $D$ by the lattice staggered discretization, \begin{equation} H_{st}(m) = iD_{st} - m \Gamma_5, \label{Hst} \end{equation} where $D_{st}$ is the massless staggered Dirac operator and $\Gamma_5$ is the taste-singlet staggered $\gamma_5$ \cite{Golterman}. This operator is hermitian, and we can study its spectral flow, $\lambda(m)$. The would-be zero modes of $D_{st}$ are identified with the eigenmodes for which the corresponding eigenvalue flow $\lambda(m)$ crosses zero at low values of $m$, and the chirality of any such mode equals (with our conventions) the sign of the slope of the crossing \cite{Adams1}. Any staggered discretization of the Dirac operator can be used, in principle, to implement (\ref{Hst}). We have chosen to work with the unimproved, 1-link staggered Dirac operator \cite{KS}, and with the highly improved HISQ discretization \cite{hisq}. In each case we have calculated, using standard numerical algorithms, the smallest (in absolute value) 20 eigenvalues of $H_{st}(m)$ for enough values of $m$ to allow us to determine unambiguously the cuts with the x axis. To compare with previous work, we have also calculated the low-lying modes of the HISQ Dirac operator at $m = 0$, and identified the would-be zero modes with the high taste-singlet chirality ones \cite{top1,top2}. \section{2D U(1)} We started by studying the behavior of the spectral flow of $H_{st}(m)$ corresponding to the 1-link staggered Dirac operator in 2D U(1) lattice gauge theory; our purpose was twofold: testing our numerical framework in a less demanding setting, and working in a theory with a simple geometrical definition of the topological charge (even if not immune from problems arising from dislocations \cite{Teper}).
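For reference, the simple geometrical definition just mentioned is short enough to sketch in Python (our own illustration; the index conventions are an assumption): each plaquette angle is reduced to $[-\pi,\pi)$, and the reduced angles are summed.

```python
import numpy as np

def geometric_charge(theta_0, theta_1):
    """Geometric topological charge of a 2D U(1) lattice gauge field.

    theta_0[n0, n1] and theta_1[n0, n1] are the link angles in the two
    directions on a periodic lattice (our index convention).  Each
    plaquette angle is reduced to [-pi, pi) before summing."""
    plaq = (theta_0
            + np.roll(theta_1, -1, axis=0)
            - np.roll(theta_0, -1, axis=1)
            - theta_1)
    plaq = (plaq + np.pi) % (2 * np.pi) - np.pi
    return plaq.sum() / (2 * np.pi)
```

Because the unreduced plaquette sum telescopes to zero on a periodic lattice, the reduced sum is $2\pi$ times an integer, so `geometric_charge` returns an integer up to floating-point error.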
To this end we can either construct gauge field configurations with an assigned topological charge \cite{instanton}, or generate realistic field configurations from a canonical ensemble, selecting those with the charge $Q$ we are interested in. We used the first possibility in the earlier stage of our tests and then we moved to the other option: the results shown in what follows are from this second phase\footnote{See \cite{Durr1} for related work in the unquenched case.}. We have generated a large ensemble of quenched configurations at different values of $\beta$, varying from 4 to 9, in order to cover the scaling region for the lattice size we consider (L=60). Then we select among this set of configurations subsets of fixed charge $Q$, for which we computed the spectral flow. The spectrum of (\ref{Hst}) has the exact symmetry $\lambda(m) \leftrightarrow - \lambda(-m)$, therefore we only need to calculate the flow for, say, $m > 0$. An equal number of crossings, with identical slopes, will be present for $m < 0$. In Fig. \ref{fig_2d_1} we show the spectral flow for a gauge configuration with a charge $Q = -2$, corresponding to a $12^2$ lattice and a coupling constant $\beta = 4.0$. We plot the lowest (in absolute value) 20 eigenvalues of $H(m)$ in the range $(-3, 3)$. We can see two crossings with negative slopes for $m < 0$, and the symmetric ones for $m > 0$\footnote{Staggered fermions in $D$ dimensions have a taste degeneracy of $2^{D/2}$, and therefore the number of crossings in the continuum gets multiplied by that factor in the lattice.}. \begin{figure}[h] \includegraphics[scale=1.3,angle=0]{fig_2d_1.eps} \caption{Spectral flow of the 2D 1link Dirac operator for a gauge configuration with $Q = -2$. \label{fig_2d_1}} \end{figure} In Fig. \ref{fig_2d_2} we show the spectral flows corresponding to several gauge configurations in a larger volume, $60^2$, and at several values of the coupling. From now on we only plot the flow for $m > 0$.
For clarity, in most of the figures we plot the variable \begin{equation} \tilde{\lambda} = \mathrm{sgn}(\lambda) \sqrt{|\lambda|} \log(|\lambda|) \end{equation} versus $\log(m)$. As we can see, in every case we have the expected number of crossings. \begin{figure}[h] \includegraphics[scale=1.3,angle=0]{fig_2d_2.eps} \caption{Comparison of the spectral flow of the 2D 1link Dirac operator for several gauge configurations. \label{fig_2d_2}} \end{figure} \section{4D Quenched QCD} For our numerical calculations in 4D we have used three ensembles of tree-level Symanzik and tadpole improved quenched QCD at three values of the coupling constant $\beta$ (5.0, 4.8 and 4.6), corresponding respectively to lattice spacings of approximately 0.077, 0.093 and 0.125 fm \cite{top2}, with lattice volumes of $20^4$, $16^4$ and $12^4$ respectively. These ensembles are thus approximately matched in physical volume, $\approx 1.5~\textrm{fm}^4$. In Figs. \ref{fig_4d_beta46} and \ref{fig_4d_beta50} we show the spectral flow for several representative configurations from the coarsest and finest ensembles, and in each case for $H_{st}$ built from both the 1-link and the HISQ Dirac operators. As before, we calculate the first 20 eigenvalues in absolute value. The topological charges were also calculated by counting the number of eigenvectors of the HISQ Dirac operator with high chirality \cite{top2}; in each case the two definitions agree, as do the spectral flows obtained from the two operators. \begin{figure}[h] \includegraphics[scale=1.3,angle=0]{fig_4d_beta46.eps} \caption{Spectral flow for several configurations at $\beta = 4.6$ (coarse ensemble) and for various values of the topological charge. The left and right panels correspond to the spectral flow calculated on the same configuration but with a different operator.
\label{fig_4d_beta46}} \end{figure} \begin{figure}[h] \includegraphics[scale=1.3,angle=0]{fig_4d_beta50.eps} \caption{Spectral flow for several configurations at $\beta = 5.0$ (fine ensemble) and for various values of the topological charge. The left and right panels correspond to the spectral flow calculated on the same configuration but with a different operator. \label{fig_4d_beta50}} \end{figure} We can appreciate in the figures that the cuts of the spectral flow with the x axis are closer to the origin $m = 0$ for the HISQ than for the 1-link operators, and also get closer as we go to smaller lattice spacings. This is consistent with the expectation that in the continuum limit the cuts should move to the origin, and that the improved Dirac operator is closer to the continuum than its unimproved counterpart. In order to make a more quantitative statement, we have computed a histogram (normalized to area one) of the cuts for the three different ensembles and both operators, which is shown in Fig. \ref{fig_histograms}. \begin{figure*}[h] \includegraphics[scale=.45,angle=0]{histograms.eps} \caption{Normalized histograms of the distribution of cuts of the spectral flow with the zero x axis for the three ensembles. In each individual figure we have plotted both the 1link result (broad distribution) and the HISQ result (narrow distribution) on the same scale for comparison. Scales are different for the different figures. \label{fig_histograms}} \end{figure*} We can see clearly the large differences between both operators, and how the distribution of cuts moves towards zero as we decrease the lattice spacing. In order for the identification of the topological charge through the spectral flow to be unambiguous, it is necessary that there is a clear separation between cuts close and far away from the origin. To test whether this is the case we have calculated a few flows up to very large values of the parameter $m$. We show in Fig. 
\ref{fig_4d_long} a representative result corresponding to a fine configuration, for both the unimproved and the HISQ cases. We can see that there is a very clear separation between cuts close to the origin and other possible cuts, which are very far away from the origin for both operators. \begin{figure}[h] \includegraphics[scale=1.2,angle=0]{fig_4d_long.eps} \caption{Spectral flow for very large values of m for a typical configuration in the fine ensemble, both for the 1link and the HISQ staggered operators. \label{fig_4d_long}} \end{figure} We have also calculated the spectral flow corresponding to the Wilson Dirac operator for a few configurations, in order to compare the results with the staggered case. We have chosen to do the comparison with the flow corresponding to the 1-link operator, which is unimproved, and therefore in a sense a closer relative to the Wilson one. In Figs. \ref{fig_4d_wilson_beta46} and \ref{fig_4d_wilson_beta50} we show both the Wilson and the 1-link staggered flow for two configurations corresponding to the coarsest and finest ensembles, both with $Q = -1$. As we can see we get consistent results in both cases. On the other hand, the computer time needed for the calculation is considerably less in the staggered case, possibly due to the better conditioning of the staggered Dirac operator. \begin{figure}[h] \includegraphics[scale=1.3,angle=0]{fig_4d_beta46_wilson.eps} \caption {Comparison of the Wilson and the staggered flow for the same configuration of charge $Q = -1$ in the coarse ensemble. The lower figures are a detailed view of the crossings with the x axis. \label{fig_4d_wilson_beta46}} \end{figure} \begin{figure}[h] \includegraphics[scale=1.3,angle=0]{fig_4d_beta50_wilson.eps} \caption {Comparison of the Wilson and the staggered flow for the same configuration of charge $Q = -1$ in the fine ensemble. The lower figures are a detailed view of the crossings with the x axis. 
\label{fig_4d_wilson_beta50}} \end{figure} \section{Conclusions and Outlook} We have presented clear numerical evidence that Adams' definition of the topological charge using the staggered Dirac operator works as expected also for realistic (quenched) $SU(3)$ gauge fields. The crossings near and far away from the origin are very well separated, and therefore the topological charge of a configuration is unambiguously defined, even in cases which would be ambiguous using other definitions. For most configurations we have seen that the charges measured by the number of high taste-singlet chirality modes and by the spectral flow agree. We have also seen the expected differences in the position of the cuts between the 1-link and the HISQ operators, as well as the clear move towards zero of the cuts as we decrease the lattice spacing. As future work, it would be interesting to repeat this study in full QCD ensembles, including the effect of sea quarks. Inspired by this construction, it is possible to define an overlap operator starting with a staggered kernel, instead of the usual Wilson one \cite{Adams2}, producing a chiral operator representing two tastes of fermions. A similar construction can be carried out to further reduce the degeneracy and produce a one-flavor overlap operator \cite{Hoelbling}. The question is whether this construction has all the required properties, and is furthermore numerically advantageous compared with the usual overlap construction. Results are presented in \cite{deForcrand1,deForcrand2,Durr2,Adams3}. \begin{acknowledgments} We thank Alistair Hart for generating the configurations. This work was funded by an INFN-MICINN collaboration (under grant AIC-D-2011-0663), MICINN (under grants FPA2009-09638, FPA2008-10732 and FPA2012-35453, cofinanced by the EU through FEDER funds), DGIID-DGA (grant 2007-E24/2), and by the EU under ITN-STRONGnet (PITN-GA-2009-238353). E. Follana was supported on the MICINN Ram\'on y Cajal program, and A.
Vaquero was supported by MICINN through the FPU program. E. Follana acknowledges financial support from the Laboratori Nazionali del Gran Sasso during several research visits where part of this work was carried out. \end{acknowledgments}
We believe that a company is only as good as its people. Atlantic Development & Investments, Inc. built its business with this philosophy firmly in mind. We work with the top professionals, the finest resource managers and the best account managers. Our team centers around professionals with a wealth of experience in the development of affordable housing. That edge sets Atlantic Development & Investments, Inc. apart from the crowd. Mark D. Breen Mark Breen is the President and Chief Executive Officer of Atlantic Development & Investments, Inc. Mark has more than 35 years of experience in the real estate industry and has been the developer and owner of affordable multi-family properties for more than 25 years. Mark is the 2011 recipient of the Greater Phoenix Urban League's Whitney M. Young Jr. Individual Award in Excellence. Jessica B. Raymond Jessica Raymond is the Executive Vice President of Atlantic Development & Investments, Inc. Her responsibilities are primarily focused on acquisitions and deal structuring, including review of various program regulations. Jessica holds a PhD in Social Policy from Trinity College Dublin and is a National Association of Homebuilder's Housing Credit Certified Professional. She enjoys camping and is a member of the National Association of Professional Women. "In all our dealings with Mr. Breen, we found him to be an honorable, dedicated, hardworking person who lived up to all our expectations. He always responded promptly to our inquiries and certainly proved to be a man whose word could be relied on even in the absence of a written agreement." — Robert C. Zampano. Former Senior United States District Judge
(Golden, Colo) – MRIGlobal is pleased to announce that the Energy Systems Integration Facility at the National Renewable Energy Laboratory (NREL) has been awarded R&D Magazine's Laboratory of the Year. MRIGlobal is one of two partners in the Alliance for Sustainable Energy LLC, which manages the laboratory for the U.S. Department of Energy. MRIGlobal has managed and operated NREL since 1977, when the laboratory came into existence as the Solar Energy Research Institute. NREL is the nation's only laboratory dedicated to renewable energy and energy efficiency research and development. Completed in April 2013, the Energy Systems Integration Facility is the first center designed to help public and private sector researchers scale up promising clean energy technologies and test how they interact with each other and the grid at utility scale. With 182,500 square feet, the $135 million facility houses more than 15 experimental laboratories and a high-performance computing data center with an innovative warm-water cooling system. The general contractor for the project was the Kansas City-based J.E. Dunn Construction Co. "We've known that industry was eager for a place like ESIF, which allows utility companies and investors to see technology working in real time and on a large scale," said Dan Arvizu, NREL Director and Executive Vice President, MRIGlobal. The annual international competition receives entries from the best new and renovated laboratories. Projects are judged by a blue-ribbon panel of laboratory architects, engineers, equipment manufacturers, researchers and editors of R&D Magazine and Laboratory Design Newsletter. This is the second Lab of the Year Recognition for NREL. In 2008, NREL was recognized by R&D magazine with a special award for the Science & Technology Facility. The 71,000-square-foot laboratory was the first federal laboratory building to achieve the highest Leadership in Energy and Environmental Design (LEED) Green Building rating from the U.S. 
Green Building Council and marked the beginning of a series of LEED Platinum, high-performance buildings at NREL. ESIF project team members will be honored during the Laboratory Design Conference to be held April 2-4 at the Westin Waltham Hotel, Waltham, Mass.
About HMB 10 Most Haunted Places – Elk Grove, CA By Paul Roberts Deanna Jaxine Stinson at Laguna Creek Wetlands Note: See the clump of Eucalyptus trees behind Deanna. The old settlers of the area planted those trees there, imported from Australia. The trees served as good windbreaks for their parked caravans. There was an 1850 cholera outbreak in this area and a little girl that died of cholera is sometimes heard whimpering in this area of the trees. A floating mist or the sighting of a blonde little girl lying in the grass dying has been seen by high school students. She is known as THE WIMPERING GIRL. You can see the 10 Most Haunted Places in Elk Grove at this photo slide: https://www.youtube.com/watch?v=bz7AWlD3jA0&feature=youtu.be 1. Lee/Wightman Historical Site At this site, there was a blacksmith shop; utility/smokehouse built in 1870, tenant house built in 1907; Barn Granary/ Pump House / Harvester Shed built in 1917; Storage Shed. Everything is now destroyed and the only thing standing is the Historical Sign Post that explains the history of the site. It's located off Big Horn Blvd. Cross Streets: Big Horn Blvd / Anchor Bay. ACTIVITY: There is a blue orb that is constantly seen at this location. More haunting activity information about this site can be found here: http://forums.jazmaonline.com/topic.asp?TOPIC_ID=2981 2. Laguna Creek Wetlands. Located at Francesca. Walking trail will lead you into the wetlands. At the wetlands, you have a cryptid, a giant salamander, read here: http://forums.jazmaonline.com/topic.asp?TOPIC_ID=1583 // Besides a giant salamander, there are ghostly sightings at the wetlands, find out the paranormal activity at the wetlands here: https://sacramentopress.com/2011/10/16/the-floating-white-mist-of-the-elk-grove-laguna-creek-wetlands/ 3. Jack E. Hill Park. The most popular ghosts at this park are the CHINESE PHANTOM TRIO.
To learn more about the hauntings of this park, see the article at this link: http://forums.jazmaonline.com/topic.asp?TOPIC_ID=8109

4. Merwin F. Rose Park. In 2006, two men stole a car in Elk Grove. The police were chasing them and they ditched the car. The driver was apprehended, while the passenger kept running. It was about 2 a.m. when the suspect jumped over my backyard fence and into a neighboring home across Frye Creek. The police cornered him at this home. The suspect picked up a ceramic plant container and threw it at a police officer, and the officer shot him dead. I woke up to the gunshots and found two police officers in my backyard with their flashlights on. The suspect and his friend used to hang out at Merwin F. Rose Park. Three people in my neighborhood, including my wife Deanna Jaxine Stinson, have seen the ghost of this suspect at Merwin F. Rose Park at dusk. He wears a hoodie. GHOST IDENTIFIED: THE HOODIE MAN.

5. Railroad Tracks on Big Horn Blvd (dead-end road section). A MAN ON HORSE is sometimes seen trotting along the railroad tracks. Here is one report, from Dana Mesler: "Around 8 pm in July of 2002, I was walking my dog Brownie and I saw a cowboy-looking guy sitting on top of his horse and trotting along the railroad tracks. I looked down at my dog, then I looked up and he was gone. I had to have seen a ghost!"

6. Elk Grove Library. The OLD LADY OF THE ELK GROVE LIBRARY is sometimes seen browsing through the books. She wears a gray floppy dress and has straggly gray hair. Don Robinson saw this old lady, approached her and said: "That's a good UFO book!" The old lady frowned at Don and disappeared in front of him. Don didn't mention the sighting for 5 years, afraid people would think he was crazy.

7. Town of Franklin, off Franklin Blvd. The town of Franklin has been absorbed by Elk Grove.
Besides the town of Franklin, we have four very interesting cemeteries in Elk Grove: Hilltop Cemetery; Elk Grove Cosumnes Cemetery; East Lawn Elk Grove Memorial; and Franklin Cemetery. Here are some stories I wrote about the hauntings in the town of Franklin, and there are many hauntings: http://forums.jazmaonline.com/topic.asp?TOPIC_ID=7150 http://forums.jazmaonline.com/topic.asp?TOPIC_ID=619

While the ghost of GUNSLINGER CALVIN COLT haunts the town of Franklin, you also have four cemeteries to check out. THE LADY IN THE BLUE GOWN haunts Elk Grove Cosumnes Cemetery; she is seen floating around the cemetery looking for something. At Hilltop Cemetery, the ghostly image of a German Shepherd is seen sniffing the ground and then simply disappearing. The story is that a groundskeeper used to bring his dog, Leon, to the cemetery while he cut the grass and trimmed the bushes, and Leon loved to roam around the cemetery on his own. When Leon died, he went back to the cemetery he loved so much. GHOST DOG IDENTIFIED: LEON. Franklin Cemetery has a notable member of the Lewis & Clark Expedition buried there. At East Lawn Elk Grove Memorial, many ghost hunters have captured strange shadow anomalies in their photographs. Three psychics have detected my mother, Rosemarie Roberts, hanging around. My mother has been sighted at my own home, and my dad saw her walking down the hallway of her former home in Greenhaven. My mother was familiar with this cemetery, so it would seem to me that she would hang around to see who is visiting her grave. My mother claimed that she had psychic abilities.

8. Love Junky Boutique (old building). PIG TAILS is a little girl who is sometimes seen looking out of one of the windows of this old building, and she has been seen behind the building playing marbles. She is a little girl with brown hair, done up in pigtails.
Three people at the former Elk Grove Brewery claim that they have seen her, and that she will vanish in front of their eyes.

9. Lola Lounge. Building established in 1885; it used to be a general store, restaurant and bar. It was also known as the Elk Grove Brewery, and I conducted an investigation there; here is the story:

10. School of Rock. This used to be the building of the Independent Order of Odd Fellows – Elk Grove. The first meeting of the IOOF was on May 2, 1888. The original building burned down on July 7, 1892. Michael Jay Terrence says that he was once on one of the top floors and saw three men who looked like they had melted skin. Possible victims of the fire in 1892? GHOSTS IDENTIFIED: THREE BURNED MEN.

Paul Dale Roberts, HPI Esoteric Detective aka The Demon Warrior
Halo Paranormal Investigations (HPI International)
Bitcoin, Donald Trump, Parler and NIO: 5 Things You Must Know
January 11, 2021

Here are five things you must know for Monday, Jan. 11:

1. Stock Futures Point to a Weaker Wall Street Open

Stock futures declined Monday following Wall Street's record highs on Friday, which were fueled by hopes that President-elect Joe Biden will support an economy racked by the coronavirus pandemic with more fiscal stimulus. Contracts linked to the Dow Jones Industrial Average fell 206 points, S&P 500 futures declined 21 points and Nasdaq futures were down 53 points.

Stocks finished at record highs Friday as investors assessed expectations for more economic stimulus following a weaker-than-expected U.S. jobs report. The U.S. lost 140,000 jobs in December, putting an end to seven months of employment growth. For the week, the Dow rose 1.6% (the blue-chip index's fourth straight weekly gain), the S&P 500 added 1.8% and the Nasdaq gained 2.4%.

Biden said he would lay out his proposals for further fiscal support on Thursday. Stock market gains also have been driven by optimism over the rollout of coronavirus vaccines, even as infections continue to surge and scientists discover new variants of the virus.

Meanwhile, Donald Trump begins his final full week in office as calls grow for him to step down following the storming of the Capitol Building last week by his supporters, and as Democratic lawmakers move closer to starting a second impeachment of the president.

2. This Week's Economic Calendar

The U.S. economic calendar on Monday is bare, but reports such as the Consumer Price Index, the Producer Price Index, Jobless Claims, Retail Sales and Consumer Sentiment will be released later in the week.
Companies issuing earnings reports this week include JPMorgan Chase (JPM), Citigroup (C), Wells Fargo (WFC), BlackRock (BLK), PNC Financial (PNC), Albertsons (ACI), KB Home (KBH), IHS Markit (INFO), Aphria (APHA), Delta Air Lines (DAL), Charles Schwab (SCHW) and Taiwan Semiconductor (TSM). JPMorgan Chase and Wells Fargo are holdings in Jim Cramer's Action Alerts PLUS member club.

3. Apple and Amazon Cut Off Parler

Just days after Twitter (TWTR) permanently suspended President Donald Trump's account, Apple (AAPL), Amazon.com (AMZN), Alphabet's (GOOGL) Google and others have cracked down on what they feel is dangerous talk that encourages violence. Apple over the weekend pulled Parler, a blogging and social networking service, from its app store, while Amazon stopped web hosting for the service. Parler bills itself as a privacy- and free-speech-focused service, but it has also become a favorite of right-wing commentators and was allowing talk that encourages "illegal activity," according to the tech companies.

"We have always supported diverse points of view being represented on the App Store, but there is no place on our platform for threats of violence and illegal activity. Parler has not taken adequate measures to address the proliferation of these threats to people's safety. We have suspended Parler from the App Store until they resolve these issues," said Apple in a statement obtained by TheStreet. Apple said it was pulling the app from its app store until Parler makes changes that address "direct threats of violence and calls to incite lawless action."
Amazon knocked Parler off Amazon Web Services, saying it "cannot provide services to a customer that is unable to effectively identify and remove content that encourages or incites violence against others." Apple, Amazon and Alphabet are holdings in Jim Cramer's Action Alerts PLUS member club.

4. Bitcoin Suffers Worst Two-Day Drop Since March

Bitcoin suffered its worst two-day drop since March, sliding as much as 21% over Sunday and Monday to as low as $32,389. According to Bloomberg, that's the biggest two-day slide since the coronavirus pandemic hit global markets last year. Bitcoin hit a record high of nearly $42,000 on Friday amid continued interest in the digital currency as a hedge against inflation and an alternative to the falling dollar. The price of the world's largest cryptocurrency has more than quadrupled in the past year. Early Monday, bitcoin traded at $34,623, down 13.53%, according to composite prices compiled by Bloomberg.

"Time to take some money off the table," said Scott Minerd, chief investment officer with Guggenheim Investments, in a tweet. "Bitcoin's parabolic rise is unsustainable in the near term." On the same day in December that bitcoin hit $20,000, Minerd told Bloomberg that the Federal Reserve's "rampant money printing" ultimately should push bitcoin to $400,000.

5. NIO Unveils New Electric ET7 Sedan

NIO (NIO), the Chinese electric vehicle maker, unveiled its first sedan, the all-electric ET7, which will go up against Tesla's (TSLA) best-selling Model 3. NIO's ET7 will start at 448,000 yuan ($69,100), compared with 265,740 yuan for the entry-level, China-built Tesla Model 3, of which Tesla sold 120,000 in China last year. The ET7 sedan will be available in early 2022 and, NIO said, has a claimed driving range of 621 miles.
Tesla has said the range for the standard Model 3 is 263 miles; the long-range version has a claimed range of 353 miles. NIO also launched a bigger, 150 kilowatt-hour battery pack and an upgraded autopilot system.
The Grúzino estate near Chúdovo, Russia, was built by a team of neoclassical architects under the direction of Vasili Stásov for Count Alekséi Arakchéyev in the 1810s. The count received Grúzino as an imperial gift from Tsar Paul I of Russia when he was appointed Commandant of St. Petersburg, although the area was disputed between the governors of the Nóvgorod and Tver governorates. Using the labor of his serfs, Arakchéyev managed to turn Grúzino into one of the most advanced estates of its time. The celebrated sculptor Iván Martos contributed a statue of Emperor Paul. Two months after Arakchéyev's death, Nicholas I of Russia granted the estate to the Nóvgorod Cadet Corps. Although it was officially claimed that the buildings were destroyed by the Nazis during World War II, some researchers state that they were dismantled during Stalin's industrialization drive in the 1930s. The lion statues that once adorned the portico were moved to the Nóvgorod Kremlin and remain as a vestige of this neoclassical ensemble.
Q: IC decoupling capacitor design recommendations

I use 100 nF decoupling capacitors for low-speed ICs (< 1 MHz) and I need some recommendations to verify my routing. My layout uses two layers:

- Top (red) with Vcc
- Bottom (green) with GND

And I have this IC as an example: The Vcc layer is connected to my cap, and the cap is connected to the IC at pin 28. So the IC is connected directly to the cap and not to the plane. Here is the question: should I do the same for the ground connection, or is it fine to connect the cap as shown in the image? The "bad" thing with DIP packages is the long routing distance between GND and Vcc (pin 14 and pin 28, for example).

The next question is how to handle decoupling capacitors for ICs like the FT232RL, which has more GND pins than Vcc pins. Should I connect all the GNDs and route them to the GND of the cap, or can I connect them directly to GND? What is a smart way to do it?

Update: added the changes.

A: Considering the small green traces leaving pins 15 to 21 on the south side, you have probably completely cut the ground-plane connection to pin 22 and pin 14. It looks like the ground connection from the capacitor to pin 28 runs (way) south of pins 15 to 28, then heads north, west of the IC (or it runs way east of the IC). In any case, with such a long trace, I doubt the capacitor functions as a decoupling capacitor.

I would recommend deleting all the Vcc polygons/planes, swapping the mentioned bottom traces (originating from pins 15-21) to the top, and applying a decent (uninterrupted) ground plane on the bottom layer. Only when necessary (when traces have to cross other traces), use a via to swap a trace to the bottom side, but keep those bottom traces as short as possible.
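A rough way to quantify why trace length matters even for sub-MHz logic is to model the capacitor plus its routing as a series R-L-C: every extra bit of loop inductance raises the impedance the IC sees and lowers the self-resonant frequency. The sketch below illustrates this with stdlib Python; the ~10 nH ESL and ~20 mΩ ESR figures are illustrative assumptions, not measured values for any particular part.

```python
from math import pi, sqrt

def cap_impedance(f_hz: float, c_farad: float, esl_henry: float, esr_ohm: float) -> float:
    """|Z| of a decoupling capacitor modeled as a series R-L-C."""
    x_c = 1.0 / (2.0 * pi * f_hz * c_farad)   # capacitive reactance
    x_l = 2.0 * pi * f_hz * esl_henry         # inductive reactance of package + traces
    return sqrt(esr_ohm ** 2 + (x_l - x_c) ** 2)

def self_resonant_freq(c_farad: float, esl_henry: float) -> float:
    """Above this frequency the 'capacitor' looks inductive."""
    return 1.0 / (2.0 * pi * sqrt(esl_henry * c_farad))

# 100 nF cap; assume ~10 nH total loop inductance and ~20 mOhm ESR
print(cap_impedance(1e6, 100e-9, 10e-9, 0.02))  # ~1.53 ohm at 1 MHz
print(self_resonant_freq(100e-9, 10e-9))        # ~5.03e6 Hz
```

Since the self-resonant frequency scales as 1/√L, halving the loop inductance (shorter, wider traces and a solid ground-plane return) raises it by about 1.4x, which is one way to read the answer's insistence on an uninterrupted ground plane.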
## Brazilian Journal of Probability and Statistics

### Necessary and sufficient conditions for the convergence of the consistent maximal displacement of the branching random walk

Bastien Mallein

#### Abstract

Consider a supercritical branching random walk on the real line. The consistent maximal displacement is the smallest of the distances between the trajectories followed by individuals at the $n$th generation and the boundary of the process. Fang and Zeitouni, and Faraud, Hu and Shi proved that under some integrability conditions, the consistent maximal displacement grows almost surely at rate $\lambda^{*}n^{1/3}$ for some explicit constant $\lambda^{*}$. We obtain here a necessary and sufficient condition for this asymptotic behaviour to hold.

#### Article information

Source: Braz. J. Probab. Stat., Volume 33, Number 2 (2019), 356-373.

Dates: Accepted January 2018. First available in Project Euclid: 4 March 2019.

Permanent link: https://projecteuclid.org/euclid.bjps/1551690038

Digital Object Identifier: doi:10.1214/18-BJPS391

Mathematical Reviews number (MathSciNet): MR3919027

Zentralblatt MATH identifier: 07057451

#### Citation

Mallein, Bastien. Necessary and sufficient conditions for the convergence of the consistent maximal displacement of the branching random walk. Braz. J. Probab. Stat. 33 (2019), no. 2, 356-373. doi:10.1214/18-BJPS391. https://projecteuclid.org/euclid.bjps/1551690038

#### References

- Addario-Berry, L. and Reed, B. (2009). Minima in branching random walks. Annals of Probability 37, 1044-1079.
- Aïdékon, E. (2013). Convergence in law of the minimum of a branching random walk. Annals of Probability 41, 1362-1426.
- Alsmeyer, G. and Iksanov, A. (2009). A log-type moment result for perpetuities and its application to martingales in supercritical branching random walks. Electronic Journal of Probability 14, 289-312.
- Araman, V. F. and Glynn, P. W. (2006). Tail asymptotics for the maximum of perturbed random walk. The Annals of Applied Probability 16, 1411-1431.
- Athreya, K. B. and Ney, P. E. (2004). Branching Processes. Mineola, NY: Dover Publications, Inc.
- Bérard, J. and Gouéré, J.-B. (2011). Survival probability of the branching random walk killed below a linear boundary. Electronic Journal of Probability 16, 396-418.
- Biggins, J. D. (1976). The first- and last-birth problems for a multitype age-dependent branching process. Advances in Applied Probability 8, 446-459.
- Biggins, J. D. and Kyprianou, A. E. (2005). Fixed points of the smoothing transform: The boundary case. Electronic Journal of Probability 10, 609-631.
- Chen, X. (2015a). A necessary and sufficient condition for the nontrivial limit of the derivative martingale in a branching random walk. Advances in Applied Probability 47, 741-760.
- Chen, X. (2015b). Scaling limit of the path leading to the leftmost particle in a branching random walk. Theory of Probability and Its Applications 59, 567-589.
- Fang, M. and Zeitouni, O. (2010). Consistent minimal displacement of branching random walks. Electronic Communications in Probability 15, 106-118.
- Faraud, G., Hu, Y. and Shi, Z. (2012). Almost sure convergence for stochastically biased random walks on trees. Probability Theory and Related Fields 154, 621-660.
- Hu, Y. and Shi, Z. (2009). Minimal position and critical martingale convergence in branching random walks, and directed polymers on disordered trees. Annals of Probability 37, 742-789.
- Itô, K. and McKean, H. P. Jr. (1974). Diffusion Processes and Their Sample Paths. Die Grundlehren der mathematischen Wissenschaften, Band 125. Berlin-New York: Springer.
- Kahane, J.-P. and Peyrière, J. (1976). Sur certaines martingales de Benoit Mandelbrot. Advances in Mathematics 22, 131-145.
- Lyons, R. (1997). A simple path to Biggins' martingale convergence for branching random walk. In Classical and Modern Branching Processes (Minneapolis, MN, 1994), IMA Vol. Math. Appl. 84, 217-221. New York: Springer.
- Lyons, R., Pemantle, R. and Peres, Y. (1995). Conceptual proofs of $L\log L$ criteria for mean behavior of branching processes. Annals of Probability 23, 1125-1138.
- Mallein, B. (2017). Branching random walk with selection at critical rate. Bernoulli 23, 1784-1821.
- Mallein, B. (2018). $N$-branching random walk with $\alpha$-stable spine. Theory of Probability and Its Applications 62, 295-318.
- Mogul'skiĭ, A. A. (1974). Small deviations in the space of trajectories. Teoriâ Veroâtnostej i Ee Primeneniâ 19, 755-765.
- Peyrière, J. (1974). Turbulence et dimension de Hausdorff. Comptes Rendus des Séances de l'Académie des Sciences, Série 1, Mathématique 278, 567-569.
- Roberts, M. I. (2015). Fine asymptotics for the consistent maximal displacement of branching Brownian motion. Electronic Journal of Probability 20, 26.
- Skorohod, A. V. (1957). Limit theorems for stochastic processes with independent increments. Teoriâ Veroâtnostej i Ee Primeneniâ 2, 145-177.
Q: Probability of Bookings in a Hotel

A hotel has ten rooms, and on average 7 are booked per night. The website enables twelve bookings to be made on a given night. What is the probability they'll not be able to accommodate all guests on a given night? State any assumptions you make.

My initial thought was Poisson: P(11) + P(12) guests show up on a night, which is 7.2%, the assumption being that all bookings are independent. Any help appreciated.

A: Here is how you can use the Poisson distribution: $$\mathbf P(X=k)=\frac{\lambda^k}{k!}e^{-\lambda}$$ where $X$ is the random variable for the number of guests in the hotel. Note that $\lambda=7$. The answer is therefore: $$\mathbf P(X\geq 11)=1-\sum_{k=0}^{10} \frac{\lambda^k}{k!} e^{-\lambda}$$ because we know that the hotel cannot accommodate more than $10$ guests.
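Both figures in the thread can be checked numerically with a stdlib-only sketch (the function names below are ours). The asker's 7.2% is P(11) + P(12), i.e. the model where arrivals are capped at the 12 accepted bookings; the answer's untruncated tail P(X ≥ 11) comes out slightly higher.

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) for X ~ Poisson(lam)."""
    return lam ** k / factorial(k) * exp(-lam)

def poisson_sf(k: int, lam: float) -> float:
    """P(X >= k), via the complement of the CDF up to k - 1."""
    return 1.0 - sum(poisson_pmf(i, lam) for i in range(k))

# Hotel: 10 rooms, lambda = 7 bookings per night on average.
p_tail = poisson_sf(11, 7.0)                             # answer's model
p_capped = poisson_pmf(11, 7.0) + poisson_pmf(12, 7.0)   # asker's model
print(f"P(X >= 11)     = {p_tail:.4f}")    # ~0.0985
print(f"P(11) + P(12)  = {p_capped:.4f}")  # ~0.0715, the 7.2% in the question
```

So whether the overflow probability is about 7.2% or 9.9% depends on whether you truncate the guest count at the 12 bookings the website allows.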
A young friend, much younger and hipper than I, once tried to explain to me what a "meme" was. I couldn't understand what the hell he was saying. Now I knew what a meme was! Anyways, I'd been making custom "Dark Shadows" memes for this guy without even knowing I was doing it! Looking back, some of them were pretty OK. So I thought I'd share them. Like I said, we "Dark Shadows" nerds are the worst. As a palate cleanser, here is a video of hilarious "Dark Shadows" bloopers. Forty-eight minutes and 50 seconds' worth!
# Ph0wn 2018 - Wanna drink? Move your arm!

An ssh access to a Lego Mindstorms platform (ev3dev) is given to control the three motors of a robot arm. The goal is to open the fridge door.

### Description

Ph0wn aliens have abducted my coke and put it in the fridge. But, now, I am thirsty. Open the door with the robotic arm, and you will get a free coke … and a flag!

Fortunately, the aliens have left a few instructions in case they would format their internal memory.

You first must request access to the robot by giving your IP address, one team at a time. Then you can connect to the robot on ph0wn2018-robot with IP address 10.210.17.146

Once connected, you must make a program to move the arm and open the door. The door must rotate 180 degrees to get the can. Knocking down the fridge is prohibited (you must get the can without destroying the fridge, you vandals!)

The flag is NOT in the robot system, it is printed on the coke can.

Oh, by the way, a very powerful lazer beam will strike your own computer if you were to erase the content of the system, or if you get the can without using the robotic arm.

Last but not least, they have noted the login/password on a post-it just next to the printed manual:

Login: robot

The flag has the usual format.

Author: ludoze

### Details

Points: 500 (intermediate)

### Solution

The robotic arm was made with Lego:

When connecting to the server, we had an indication about the service used:

```
$ ssh robot@10.210.17.146
[ev3dev ASCII-art banner]
Debian jessie on LEGO MINDSTORMS EV3!

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
```

According to the documentation, the motors are located in the /sys/class/tacho-motor/ folder. Three motors were present: motor0, motor1 and motor3. It is possible to list all the available commands and parameters:

```
$ ls /sys/class/tacho-motor/motor0/
command        duty_cycle     position      speed      stop_actions
commands       duty_cycle_sp  position_sp   speed_pid  subsystem
count_per_rot  hold_pid       power         speed_sp   time_sp
device         max_speed      ramp_down_sp  state      uevent

$ cat /sys/class/tacho-motor/motor0/commands
run-forever run-to-abs-pos run-to-rel-pos run-timed run-direct stop reset
```

Basically we were interested in the speed and the movement of each motor. We were able to move them with the following commands:

```
$ export M3=/sys/class/tacho-motor/motor3
$ cat $M3/position_sp
0
$ cat $M3/speed_sp
500
$ echo 100 > $M3/speed_sp
$ echo 50 > $M3/position_sp
$ echo run-to-abs-pos > $M3/command
$ cat $M3/position_sp
50
```

After playing a bit with the three motors, we were able to find that motor0 controls angular movement, motor1 controls vertical movement, and motor3 is the pinch.

Finally it was only a matter of time to find the proper sequence to open the fridge door, and the flag was stuck to the can inside:

Written on December 14, 2018
## CryptoDB

### Christian Matt

#### Publications

**2019, EUROCRYPT.** In this work, we introduce and construct D-restricted Functional Encryption (FE) for any constant $D \ge 3$, based only on the SXDH assumption over bilinear groups. This generalizes the notion of 3-restricted FE recently introduced and constructed by Ananth et al. (ePrint 2018) in the generic bilinear group model.

A $D=(d+2)$-restricted FE scheme is a secret key FE scheme that allows an encryptor to efficiently encrypt a message of the form $M=(\varvec{x},\varvec{y},\varvec{z})$. Here, $\varvec{x}\in \mathbb{F}_{\mathbf{p}}^{d\times n}$ and $\varvec{y},\varvec{z}\in \mathbb{F}_{\mathbf{p}}^n$. Function keys can be issued for a function $f=\varSigma_{\varvec{I}=(i_1,..,i_d,j,k)}\ c_{\varvec{I}}\cdot \varvec{x}[1,i_1] \cdots \varvec{x}[d,i_d] \cdot \varvec{y}[j]\cdot \varvec{z}[k]$ where the coefficients $c_{\varvec{I}}\in \mathbb{F}_{\mathbf{p}}$. Knowing the function key and the ciphertext, one can learn $f(\varvec{x},\varvec{y},\varvec{z})$, if this value is bounded in absolute value by some polynomial in the security parameter and $n$. The security requirement is that the ciphertext hides $\varvec{y}$ and $\varvec{z}$, although it is not required to hide $\varvec{x}$. Thus $\varvec{x}$ can be seen as a public attribute.

D-restricted FE allows for useful evaluation of constant-degree polynomials, while only requiring the SXDH assumption over bilinear groups. As such, it is a powerful tool for leveraging hardness that exists in constant-degree expanding families of polynomials over $\mathbb{R}$. In particular, we build upon the work of Ananth et al. to show how to build indistinguishability obfuscation ($i\mathcal{O}$) assuming only SXDH over bilinear groups, LWE, and assumptions relating to weak pseudorandom properties of constant-degree expanding polynomials over $\mathbb{R}$.

**2019, CRYPTO.** The existence of secure indistinguishability obfuscators ($i\mathcal{O}$) has far-reaching implications, significantly expanding the scope of problems amenable to cryptographic study. All known approaches to constructing $i\mathcal{O}$ rely on $d$-linear maps. While secure bilinear maps are well established in cryptographic literature, the security of candidates for $d>2$ is poorly understood.

We propose a new approach to constructing $i\mathcal{O}$ for general circuits. Unlike all previously known realizations of $i\mathcal{O}$, we avoid the use of $d$-linear maps of degree $d \ge 3$.

At the heart of our approach is the assumption that a new weak pseudorandom object exists. We consider two related variants of these objects, which we call perturbation resilient generator ($\varDelta$RG) and pseudo flawed-smudging generator ($\mathrm{PFG}$), respectively. At a high level, both objects are polynomially expanding functions whose outputs partially hide (or smudge) small noise vectors when added to them. We further require that they are computable by a family of degree-3 polynomials over $\mathbb{Z}$. We show how they can be used to construct functional encryption schemes with weak security guarantees. Finally, we use novel amplification techniques to obtain full security.

As a result, we obtain $i\mathcal{O}$ for general circuits assuming: subexponentially secure LWE; bilinear maps; $\mathrm{poly}(\lambda)$-secure 3-block-local PRGs; and $\varDelta$RGs or $\mathrm{PFG}$s.

Further entries (titles not listed):

- 2017, ASIACRYPT
- 2015, EPRINT
- 2015, EPRINT
- 2015, EPRINT
- 2015, ASIACRYPT
When solving most optimization problems on paths (e.g. shortest, longest path) a cycle is either useless or makes the optimum not exist (allowing infinitely long or short paths). Thus there's generally no point in allowing cycles.

Let us start by observing that after the $k$-th iteration of the main loop, the Bellman-Ford algorithm has computed minimal-weight paths (or the weight of such a path, if we do not store the predecessors) of length at most $k$ from the starting vertex $s$ to every other vertex of our graph $G$ (if such paths exist). To prove this, we can use induction: ...

The MST indeed addresses conditions 1 and 3 but not condition 2. The solution of the global problem (as shown by your example) is not the MST but still a tree. Let's call $T$ the solution for the input graph $G$. Let's also call $T_i$ the solution for the problem $G_i$, which is the subgraph of $G$ containing vertices with range index lower than or equal to $i$ (I ...

My current strategy is: first enumerate all elementary cycles. I use the Hawick algorithm, which is already implemented in the C++ Boost library. Then sum the weights of the edges in each cycle and check whether the sum is negative.

As indicated by the comments, this is a minimum spanning tree problem, which can be solved efficiently by Edmonds' algorithm (or the Chu–Liu/Edmonds algorithm).
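The invariant stated in the Bellman-Ford answer above (after the $k$-th pass, minimal weights over paths of at most $k$ edges are known) also yields a much cheaper negative-cycle test than enumerating all elementary cycles: run one extra relaxation pass after the usual $n-1$ and check whether any distance still improves. A minimal sketch in Python; the function name and edge-list representation are illustrative, not from any of the answers:

```python
def bellman_ford(n, edges, source):
    """Bellman-Ford over an edge list [(u, v, w), ...] on vertices 0..n-1.

    Invariant: after the k-th pass, dist[v] holds the minimum weight over
    paths from `source` to v using at most k edges.  One extra pass then
    detects a negative cycle reachable from the source: if any edge still
    relaxes after n-1 passes, no finite optimum exists.
    """
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):              # n-1 passes suffice for simple paths
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:                 # early exit: distances already stable
            break
    has_negative_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, has_negative_cycle
```

For example, on the edge list [(0, 1, 1), (1, 2, -3), (2, 1, 1)] the cycle 1 → 2 → 1 has weight -2, and the extra pass reports it.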
/*
 * Tower.h
 *
 *  Created on: May 19, 2013
 *      Author: planke
 */

#ifndef TOWER_H_
#define TOWER_H_

#include "Unit.h"

class Tower: public Unit {
public:
	Tower();
	virtual ~Tower();

	void onDie();
	virtual Unit* clone();
	virtual bool isImmuneToEffect(Effect* effect) {
		return true;
	}

private:
	virtual void fillMoveAnimationPictures();
	virtual void fillIdleAnimationPictures();
	virtual void fillAttackAnimationPictures();
};

#endif /* TOWER_H_ */
7 lessons from the Holocaust BY IRWIN COTLER, AISH— Whenever I write on the Holocaust – the Shoah – I do so with a certain degree of humility, and not without a deep sense of pain. For I am reminded of what my parents taught me while still a young boy — the profundity and pain of which I realized only years later — that there are things in Jewish history that are too terrible to be believed, but not too terrible to have happened; that Oswiencim, Majdanek, Dachau, Treblinka — these are beyond vocabulary. Words may ease the pain, but they may also dwarf the tragedy. For the Holocaust was uniquely evil in its genocidal singularity, where biology was inescapably destiny, a war against the Jews in which, as Nobel Peace Laureate Elie Wiesel put it, "not all victims were Jews, but all Jews were victims." But while the Holocaust was "uniquely unique" as Holocaust scholar Yehuda Bauer put it, there are important universal lessons to be acted upon. Indeed, I write at an important moment of remembrance and reminder, of witness and warning: on the 66th anniversary of the liberation of the surviving remnants of "Planet Auschwitz" — the most horrific laboratory of mass murder in history; on the 66th anniversary of the disappearance of Raoul Wallenberg – Canada's first honorary citizen – whom the UN called the greatest humanitarian of the 20th Century, and who showed that one person could confront evil, resist and prevail, and thereby transform history; in the aftermath of the 65th anniversary of the UN, which as former UN Secretary-General Kofi Annan said, "emerged from the ashes of the Holocaust"; and as he reminded us, "a UN that fails to be at the forefront of the fight against anti-Semitism and other forms of racism, denies its history and undermines its future"; on the occasion of the 65th anniversary of the Nuremberg Principles, which became the forerunner of international humanitarian and criminal law, reminding us also of the double entendre of Nuremberg — the Nuremberg 
of jackboots as well as the Nuremberg of judgments; on the fifth anniversary of the International Day of Commemoration in Memory of the Victims of the Holocaust. And so, on this International Day of Holocaust Remembrance — on the eve also of the 60th anniversary of the coming into effect of the Genocide Convention — the "Never Again" Convention — we have to ask ourselves, what have we learned and what must we do? Lesson 1: The Importance of Holocaust Remembrance – The Responsibility of Memory The first lesson is the importance of Zachor, of the duty of remembrance itself. For as we remember the six million Jewish victims of the Shoah — defamed, demonized and dehumanized, as prologue or justification for genocide — we have to understand that the mass murder of six million Jews and millions of non-Jews is not a matter of abstract statistics. For unto each person there is a name — unto each person, there is an identity. Each person is a universe. As our sages tell us: "whoever saves a single life, it is as if he or she has saved an entire universe." Just as whoever has killed a single person, it is as if they have killed an entire universe. And so the abiding imperative — that we are each, wherever we are, the guarantors of each other's destiny. Lesson 2: The Danger of State-Sanctioned Incitement to Hatred and Genocide — The Responsibility to Prevent The enduring lesson of the Holocaust is that the genocide of European Jewry succeeded not only because of the industry of death and the technology of terror, but because of the state-sanctioned ideology of hate. This teaching of contempt, this demonizing of the other, this is where it all began. As the Canadian courts affirmed in upholding the constitutionality of anti-hate legislation, "the Holocaust did not begin in the gas chambers — it began with words". These, as the Courts put it, are the chilling facts of history. These are the catastrophic effects of racism. 
The Holocaust did not begin in the gas chambers — it began with words. As the UN marks the commemoration of the Holocaust, we are witnessing yet again, a state-sanctioned incitement to hate and genocide, whose epicentre is Ahmadinejad's Iran. Let there be no mistake about it. Iran has already committed the crime of incitement to genocide prohibited under the Genocide Convention. Yet not one state party to the Genocide Convention has undertaken its mandated legal obligation to hold Ahmadinejad's Iran to account. Lesson 3: The Danger of Silence, The Consequences of Indifference — The Responsibility to Protect The genocide of European Jewry succeeded not only because of the state-sanctioned culture of hate and industry of death, but because of crimes of indifference, because of conspiracies of silence. We have already witnessed an appalling indifference and inaction in our own day which took us down the road to the unspeakable — the genocide in Rwanda — unspeakable because this genocide was preventable. No one can say that we did not know. We knew, but we did not act, just as we knew and did not act to stop the genocide by attrition in Darfur. Indifference and inaction always mean coming down on the side of the victimizer, never on the side of the victim. Indifference in the face of evil is acquiescence with evil itself. Lesson 4: Combating Mass Atrocity and the Culture of Impunity — The Responsibility to Bring War Criminals to Justice If the 20th Century — symbolized by the Holocaust — was the age of atrocity, it was also the age of impunity. Few of the perpetrators were brought to justice; and so, just as there must be no sanctuary for hate, no refuge for bigotry, there must be no base or sanctuary for these enemies of humankind. Yet those indicted for war crimes and crimes against humanity – such as President Al-Bashir of Sudan – continue to be welcomed in international fora. 
Lesson 5: The Trahison des Clercs — The Responsibility to Talk Truth to Power The Holocaust was made possible, not only because of the "bureaucratization of genocide", as Robert Lifton put it, but because of the trahison des clercs — the complicity of the elites — physicians, church leaders, judges, lawyers, engineers, architects, educators, and the like. Indeed, one only has to read Gerhard Muller's book on "Hitler's Justice" to appreciate the complicity and criminality of judges and lawyers; or to read Robert-Jan van Pelt's book on the architecture of Auschwitz, to be appalled by the minute involvement of engineers and architects in the design of death camps, and so on. Holocaust crimes, then, were also the crimes of the Nuremberg elites. As Elie Wiesel put it, "Cold-blooded murder and culture did not exclude each other. If the Holocaust proved anything, it is that a person can both love poems and kill children". Lesson 6: Holocaust Remembrance — The Responsibility to Educate In acting upon the International Holocaust Remembrance Day, states should commit themselves to implementing the Declaration of the Stockholm International Forum on the Holocaust, which concluded: "We share a commitment to encourage the study of the Holocaust in all its dimensions… a commitment to commemorate the victims of the Holocaust and to honor those who stood against it… a commitment to throw light on the still obscured shadows of the Holocaust… a commitment to plant the seeds of a better future amidst the soil of a bitter past… a commitment… to remember the victims who perished, respect the survivors still with us, and reaffirm humanity's common aspiration for mutual understanding and justice." Lesson 7: The Vulnerability of the Powerless — The Protection of the Vulnerable as the Test of a Just Society The genocide of European Jewry occurred not only because of the vulnerability of the powerless, but also because of the powerlessness of the vulnerable. 
It is not surprising that the triage of Nazi racial hygiene — the Sterilization Laws, the Nuremberg Race Laws, the Euthanasia Program — targeted those "whose lives were not worth living"; and it is not unrevealing, as Professor Henry Friedlander points out in his work on "The Origins of Genocide", that the first group targeted for killing were the Jewish disabled — the whole anchored in the science of death, the medicalization of ethnic cleansing, the sanitizing even of the vocabulary of destruction. And so it is our responsibility as citoyens du monde to give voice to the voiceless, as we seek to empower the powerless — be they the disabled, the poor, the refugee, the elderly, the women victims of violence, the vulnerable child — the most vulnerable of the vulnerable. We remember – and we trust – that never again will we be silent or indifferent in the face of evil. May this International Day of Holocaust Remembrance be not only an act of remembrance, but a remembrance to act. Category: Antisemitism | By The Jerusalem Connection | May 1, 2011 Comment: We may speak the great words but the crimes of the Holocaust go on—the Copts in Egypt —Christians who are forced to live in dwellings that are not fit for rats, who dig through the trash for a living, have no food, their meat killed by the Muslims, their young girls stolen and put into slavery for Muslim men (if you can call them that), THEIR CHURCHES BURNT…we speak the great words but we still don't act…Our Country looks the other way…We vowed we would never forget…Our schools don't teach the truth, my so-called friends say they agree with the Muslims…They are for the people who want to KILL ME…My Jewish friends want me DEAD! We have learned NOTHING AT ALL…THE HOLOCAUST IS STILL ALIVE AND GOING STRONG! MAY G-D SAVE US!
Art & life style | Net Worth | Published On: Thu, Sep 21st, 2017 | By Fox News Point Dan Bilzerian Net Worth: How Much Is Bilzerian's Total Net Worth? Dan Bilzerian Net Worth in 2018: $150 million Dan Bilzerian is well known as one of the best professional poker players in the world; he is also an actor and a stuntman. His father helped him build his net worth, and according to different sources it now stands at $150 million. The American professional poker player Dan Brandon Bilzerian was born in Tampa, Florida to Paul Bilzerian and Terri Steffen on December 7, 1980. He started his career in 2000 by joining the Navy SEAL training program, but he was dropped because, according to officials, he had failed a shooting-range test. He later took charge of his father's business, and now he is known as a millionaire. Bilzerian started his career as a poker player after being dropped from the Navy program. He took part in the World Series of Poker Main Event in 2009, where he placed 180th in the world. Bluff Magazine has named him one of the funniest poker players on Twitter, and in recent days he won a single match from which he earned about $10.8 million. In 2011, Tom Goldstein raced Dan Bilzerian for a wager at Las Vegas Motor Speedway, but in the end the poker player won the $385,000. He has appeared in numerous movies, from which he has earned a sufficient amount, and to date he has suffered two heart attacks.
The best movies of the stuntman Dan are: Olympus Has Fallen, Lone Survivor, The Other Woman, The Equalizer, Cat Run 2, Extraction and War Dogs. He now works as a star as well as an executive producer of the TV series "Blitz's Real Hollywood Stories", and his total net worth is $150 million.
\section{Introduction} Wormholes are hypothetical tunnels which connect two different points of the Universe. The term `wormhole' was first used by Misner and Wheeler \cite{MW1}. Einstein and Rosen described the structure of the wormhole mathematically \cite{rER}, whereas the study of wormholes became popular after the work of Morris and Thorne \cite{r2}. In $4D$ space-time, the line element of static spherically symmetric wormholes is given by \cite{r2, r1, Y1} \begin{equation} \label{eq1} ds^2=-e^{2\phi(r)}dt^2+\left[1-\frac{b(r)}{r}\right]^{-1}dr^2+r^2(d\theta^2+\sin^2\theta d\phi^2), \end{equation} where $\phi(r)$ is the gravitational redshift function and $b(r)$ is the shape function. The radial co-ordinate `$r$' decreases from infinity to a minimum value $r_0$, where $b(r_0)=r_0$, and then increases from $r_0$ back to infinity. The formation of an event horizon, identified as a surface of infinite redshift, must be avoided for a traversable wormhole \cite{r3}; hence $\phi(r)$ must be finite everywhere for traversability. The shape function $b(r)$ must satisfy the following conditions for obtaining wormhole solutions and for flaring out from the throat \cite{r4,r5,1}: \begin{eqnarray} b(r_0)&=&r_0,\label{eqc2}\\ \frac{b}{r}&\leq&1,\label{eqc3}\\ \frac{b-b^\prime r}{b^2}&>&0.\label{eqc4} \end{eqnarray} Also, for the asymptotic flatness property we must have $\frac{b(r)}{r}\rightarrow 0$ and $\phi (r)\rightarrow \phi_0 $ (constant) as $r \rightarrow \infty$ \cite{r2, Y1}. Modified gravity theories offer a possible explanation of the accelerated expansion of the Universe, and the $f(R,T)$ gravity model is one of them. In $f(R,T)$ gravity theory, the Einstein-Hilbert action is modified by replacing the Ricci scalar $R$ by $f(R,T)$, an arbitrary function of $R$ and the trace $T$ of the energy-momentum tensor \cite{fRT1}. Many problems have been analyzed in the context of this gravity theory.
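As a quick numerical illustration (not part of the original derivation), conditions (\ref{eqc2})--(\ref{eqc4}) and the asymptotic flatness requirement can be checked for a sample shape function such as $b(r)=r_0e^{1-r/r_0}$, which appears later in the text; the throat radius, step sizes, and test range below are chosen purely for illustration:

```python
import math

R0 = 1.5  # sample throat radius, as in the figures later in the text

def b(r, r0=R0):
    """Sample shape function b(r) = r0*exp(1 - r/r0) (case I later in the text)."""
    return r0 * math.exp(1.0 - r / r0)

def b_prime(r, r0=R0, h=1.0e-6):
    """Central finite-difference approximation of b'(r)."""
    return (b(r + h, r0) - b(r - h, r0)) / (2.0 * h)

# Throat condition: b(r0) = r0
assert abs(b(R0) - R0) < 1e-9

# b/r <= 1 and the flare-out condition (b - b'r)/b^2 > 0 for r >= r0
for k in range(1, 500):
    r = R0 + 0.01 * k
    assert b(r) / r <= 1.0
    assert (b(r) - b_prime(r) * r) / b(r) ** 2 > 0.0

# Asymptotic flatness: b(r)/r -> 0 as r -> infinity
assert b(1.0e3) / 1.0e3 < 1e-12
```

The same checks apply verbatim to any other candidate shape function.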
In the background of $f(R,T)$ gravity, the dynamical instability of celestial compact systems has been estimated \cite{fRT2}. Different models of $f(R,T)$ gravity theory have been studied with observational constraints in Lyra geometry \cite{fRT3}. Godani \cite{fRT4} determined the deceleration and Hubble parameters in terms of redshift, and also estimated the age of the Universe using various supernovae data in the background of this gravity theory. Tiwari et al. \cite{fRT5} studied the deceleration, Hubble and jerk parameters in an LRS Bianchi type-I cosmological model in $f(R,T)$ gravity. Bianchi type-III cosmological models have been studied in the presence of a cosmological constant in this scenario \cite{fRT6}. Various other topics have also been studied: $f(R,T)$ gravity with an interaction between dark energy and dark matter \cite{fRT7}; with the restriction of matter conservation \cite{fRT8}; and the energy conditions for a perfect fluid \cite{fRT9}. The study of wormholes in $f(R,T)$ gravity theory is not new. Charged wormholes were studied in \cite{WH1}, and wormhole solutions were obtained by an analytical approach in \cite{1}. Sahoo et al. obtained the shape function using the relation $p_r=\omega \rho$, where $p_r$ is the radial pressure and $\rho $ is the energy density; they also investigated the energy conditions \cite{2}. Mandal et al. studied the geometrical behavior of wormholes in the anisotropic and isotropic cases considering the shape function $b(r)=r_0^me^{r_0-r}r^{1-m}$ \cite{3}. In \cite{4}, wormhole solutions were obtained by considering different relations between the radial and transverse pressures, and in \cite{5} wormhole solutions were obtained by considering the energy density $\rho$ as a function of the Ricci scalar and its derivative with respect to the radial coordinate. Noether symmetry has also been applied to obtain wormhole solutions \cite{6}.
The simplest form of $f(R,T)$ is $f(R,T)=R+\lambda T$, which has been used in various papers \cite{1, 2, 3, 4, 8, 15, 16, 17, 18, 19}. \par Harko et al. \cite{harko2013} found that it is the extra curvature terms of $f(R)$ gravity which support the wormhole geometries while the matter satisfies all the energy conditions. Capozziello et al. \cite{Cap2012} discussed the possibility of the existence of wormholes in hybrid metric-Palatini gravity by exploring general conditions for violating the null energy condition at the throat. They also studied some particular examples to support their investigation, using the redshift function, the potential, shape functions, etc. Many authors have worked on the existence of wormholes and energy conditions in various interesting scenarios \cite{Bej2017}-\cite{Ghashti}. Alvarenga et al. \cite{Alva} tested particular $f(R,T)$ gravity models which satisfy the energy conditions (worked out via the Raychaudhuri equation for the expansion) and found stable power-law and de Sitter solutions for some values of the input parameters. Also, many researchers \cite{you2017a}-\cite{MSetall} demonstrated the inhomogeneity factors of the matter density for self-gravitating celestial stars evolving in the background of $f(R,T)$ gravity with imperfect fluid configurations. \par Recently, generating functions have become an important tool for finding wormhole solutions \cite{20, 21, 22}, in Einstein gravity as well as in modified gravity theory. Herrera et al. \cite{20} discovered that there are two generating functions which describe all static spherically symmetric anisotropic fluid solutions. Using this notion, Rahaman et al. \cite{21} found that the generating function associated with the redshift function is always positive and decreasing in nature in the context of Einstein gravity. The second generating function plays an important role in checking the violation of the null energy condition, as it relates to the matter distribution.
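As a numerical preview of the construction developed in section \ref{sec3} below (this sketch is illustrative and not part of the original text): given a pair of generating functions $G(r)$ and $H(r)$, the function $v(r)=1-b(r)/r$ satisfies a first-order linear ODE with the throat condition $v(r_0)=0$, and the shape function follows by integration. The trial pair $G=1/(2r)$, $H=-(1/r^3)(1+1/r^{13})$ and the closed-form $b(r)$ used for comparison are the ones appearing in section \ref{sec3}:

```python
def wormhole_b(G, dG, H, r0, r_end, n=20000):
    """Integrate the generating-function ODE from the text,
        H + 1/r^2 = 2 v (-G' - 2G^2 + 3G/r - 1/r^2) + v' G,
    for v(r) = 1 - b(r)/r with throat condition v(r0) = 0, using a
    classical RK4 step, and return b(r_end) = r_end*(1 - v(r_end))."""
    def dv(r, v):
        K = -dG(r) - 2.0 * G(r) ** 2 + 3.0 * G(r) / r - 1.0 / r ** 2
        return ((H(r) + 1.0 / r ** 2) - 2.0 * v * K) / G(r)
    h = (r_end - r0) / n
    r, v = r0, 0.0
    for _ in range(n):
        k1 = dv(r, v)
        k2 = dv(r + 0.5 * h, v + 0.5 * h * k1)
        k3 = dv(r + 0.5 * h, v + 0.5 * h * k2)
        k4 = dv(r + h, v + h * k3)
        v += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        r += h
    return r_end * (1.0 - v)

# Trial pair of generating functions from the text:
G = lambda r: 1.0 / (2.0 * r)
dG = lambda r: -1.0 / (2.0 * r ** 2)
H = lambda r: -(1.0 / r ** 3) * (1.0 + 1.0 / r ** 13)

r0 = 1.5
# Closed-form shape function obtained in the text for this pair:
b_exact = lambda r: r - (1.0 / (2.0 * r)) * (
    2.0 * (r ** 2 - r0 ** 2) - 4.0 * (r - r0)
    + (1.0 / 3.0) * (1.0 / r ** 12 - 1.0 / r0 ** 12))

b_num = wormhole_b(G, dG, H, r0, 3.0)
assert abs(b_num - b_exact(3.0)) < 1e-5
```

Integrating the ODE directly and comparing with the closed form is a useful independent check on any analytically derived shape function.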
The main motivation of this paper is to check the prescription provided by Rahaman et al. \cite{21} in the $f(R,T)$ gravity scenario. In this paper, the necessary field equations of $f(R,T)$ gravity are discussed in section \ref{sec2}. In section \ref{sec3}, the method of obtaining wormhole generating functions is shown, and wormhole solutions are also obtained in this section. Some new generating functions are presented in section \ref{secG}. Energy conditions are described and examined in section \ref{sec5}. In section \ref{sec6}, wormhole embedding diagrams are studied. The paper ends with a brief discussion in section \ref{sec7}. \section{Field equations in $f(R,T)$ gravity}\label{sec2} In $f(R,T)$ gravity theory the action is given by \cite{fRT1,1,4}: \begin{equation}\label{action} S=\frac{1}{16\pi} \int \left[f(R,T)+L_m\right]\sqrt{-g}d^4x, \end{equation} where $f(R,T)$ is an arbitrary function of the Ricci scalar $R$ and the trace $T$ of the energy-momentum tensor. In equation $(\ref{action})$, $g$ is the metric determinant, $L_m$ is the matter Lagrangian density, and units with $c=G=1$ are used. The energy-momentum tensor is given by, \begin{equation}\label{T} T_{ij}=-\frac{2}{\sqrt{-g}}\left[\frac{\partial (\sqrt{-g}L_m)}{\partial g^{ij}}-\frac{\partial }{\partial x^k} \frac{\partial (\sqrt{-g}L_m)}{\partial (\frac{\partial g^{ij}}{\partial x^k})} \right]. \end{equation} Considering $L_m=-P$ (where $P$ is the total pressure), the energy-momentum tensor reduces to \cite{1,5}, \begin{equation} T_{ij}=(\rho+p_t)u_i u_j +p_tg_{ij}+(p_r-p_t)\xi_i \xi_j, \end{equation} where $\xi_i$ is a space-like vector orthogonal to $u_i$ such that $u^iu_i=-1$ and $\xi^i \xi_i=1$; $\rho$ is the energy density, $P=\frac{p_r+2p_t}{3}$, and $p_r$, $p_t$ are the radial and transverse pressures, respectively.
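A tiny numerical sanity check of the trace convention $T=\rho-3P$ used below (illustrative only; the diagonal mixed-component form is an assumption about the sign convention, chosen so that the trace matches the convention adopted in the text):

```python
# Illustrative check (not from the paper): with mixed components
# T^mu_nu = diag(rho, -p_r, -p_t, -p_t) -- one common sign convention
# for the anisotropic tensor above -- the trace reproduces T = rho - 3P.
rho, p_r, p_t = 0.8, 0.1, 0.3          # arbitrary sample values
T_mixed = [rho, -p_r, -p_t, -p_t]      # T^t_t, T^r_r, T^theta_theta, T^phi_phi
trace_T = sum(T_mixed)
P = (p_r + 2.0 * p_t) / 3.0            # total (isotropic) pressure
assert abs(trace_T - (rho - 3.0 * P)) < 1e-12
```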
By varying the action ($\ref{action}$) with respect to the metric $g_{ij}$ we get \cite{1}, \begin{equation}\label{FE} f_R R_{\mu \nu} -\frac{1}{2}fg_{\mu \nu} +(g_{\mu \nu} \Box -\nabla_\mu \nabla_\nu)f_R=8\pi T_{\mu \nu} +f_T(T_{\mu \nu }+Pg_{\mu \nu}), \end{equation} where $f_R\equiv \frac{\partial f}{\partial R}$, $R_{\mu \nu}$ is the Ricci tensor, $\Box$ is the d'Alembert operator, $\nabla_\mu$ is the covariant derivative and $f_T\equiv \frac{\partial f}{\partial T}$. The covariant derivative of the energy-momentum tensor then reduces to \cite{1} (considering $T=\rho-3P$), \begin{equation} \nabla^\mu T_{\mu \nu}=-\frac{f_T}{f_T+8\pi}\left[(T_{\mu \nu}+Pg_{\mu \nu})\nabla^\mu \ln f_T+\frac{1}{2} g_{\mu \nu} \nabla^{\mu } (\rho -P)\right]. \end{equation} Considering $f(R,T)=R+2\lambda T$ (where $\lambda$ is a constant) \cite{1,2,4}, the Einstein tensor $G_{\mu \nu}$ becomes \cite{1}, \begin{equation}\label{ET} G_{\mu \nu}=8\pi T_{\mu \nu}+2\lambda \left[T_{\mu \nu} +(\rho -P)g_{\mu \nu}\right]. \end{equation} Hence, the field equations are given by, \begin{eqnarray}\label{f1} &&\frac{b^\prime}{r^2}=8\pi \rho+2\lambda \left( 2\rho -\frac{p_r+2p_t}{3}\right) , \\\label{f2} &&\frac{1}{r} \left[\frac{b}{r^2}+2\phi^\prime (\frac{b}{r}-1)\right]=-8\pi p_r+2\lambda \left( \rho -\frac{4p_r+2p_t}{3} \right), \\\label{f3}\nonumber &&\frac{1}{2r} \left[\frac{1}{r}(\phi^\prime b+b^\prime-\frac{b}{r})+2(\phi^{\prime \prime} -(\phi^{\prime })^2)b-\phi^{\prime}(2-b^\prime)\right]-\phi^{\prime \prime}-(\phi^{\prime })^2\\ &&~~~~~~~~~=-8\pi p_t+2\lambda \left( \rho -\frac{p_r+5p_t}{3} \right).
\end{eqnarray} Now, solving equations (\ref{f1})--(\ref{f3}) we get, \begin{eqnarray}\label{rho} \rho&=&\frac{6B(2\pi+\lambda)+\lambda(2D+C)}{6(5\lambda^2+16\lambda\pi+16\pi^2)},\\\label{rp} p_r&=&\frac{(6D-12C+3B)\lambda^2+\lambda\pi(8D-44C+12B)-48\pi^2C}{6(5\lambda^2+16\lambda\pi+16\pi^2)(\lambda+4\pi)},\\\label{tp} p_t&=&\frac{(-9D+3C+3B)\lambda^2+\lambda\pi(4C-40D+12B)-48\pi^2D}{6(5\lambda^2+16\lambda\pi+16\pi^2)(\lambda+4\pi)}, \end{eqnarray} where \begin{eqnarray} B&=&\frac{b^\prime}{r^2},~ C=\frac{1}{r} \left[\frac{b}{r^2}+2\phi^\prime (\frac{b}{r}-1)\right]\text{and}\nonumber\\ D&=&\frac{1}{2r} \left[\frac{1}{r}(\phi^\prime b+b^\prime-\frac{b}{r})+2(\phi^{\prime \prime} -(\phi^{\prime })^2)b-\phi^{\prime}(2-b^\prime)\right]-\phi^{\prime \prime}-(\phi^{\prime })^2.\nonumber \end{eqnarray} \section{Obtaining generating functions and a new shape function corresponding to them} \label{sec3} A mechanism to obtain all static spherically symmetric solutions for locally anisotropic fluids was shown by Herrera et al. \cite{20}. To generate all possible solutions, this mechanism needs two types of functions, known as wormhole generating functions. Using equations (\ref{rp}) and (\ref{tp}) we obtain the result, \begin{equation} 2(\lambda+4\pi)[p_r-p_t]=\left(1-\frac{b}{r}\right)\left(\frac{\phi^\prime}{r}-\phi^{\prime\prime}-{\phi^\prime}^2+\frac{1}{r^2}\right)+\frac{1}{2}\left(\frac{b^\prime}{r}-\frac{b}{r^2}\right)\left(\phi^\prime+\frac{1}{r}\right)-\frac{1}{r^2}. \end{equation} Now, we define a function $H(r)$ by \begin{equation}\label{hr} H(r)=2(\lambda+4\pi)[p_r-p_t]=\left(1-\frac{b}{r}\right)\left(\frac{\phi^\prime}{r}-\phi^{\prime\prime}-{\phi^\prime}^2+\frac{1}{r^2}\right)+\frac{1}{2}\left(\frac{b^\prime}{r}-\frac{b}{r^2}\right)\left(\phi^\prime+\frac{1}{r}\right)-\frac{1}{r^2}. \end{equation} Let us introduce a new variable function $G(r)$ by \begin{equation}\label{G} e^{\phi(r)}=\exp\left(\int\left(2G(r)-\frac{1}{r}\right)dr\right).
\end{equation} Then we have $\phi^\prime=2G(r)-\frac{1}{r}$, {\it i.e.,} $G(r)=\frac{1}{2}(\phi^\prime+\frac{1}{r})$. Considering $1-\frac{b}{r}=v(r)$, equation (\ref{hr}) reduces to \begin{equation}\label{id} \left(H(r)+\frac{1}{r^2}\right)=2v(r)\left[-G^\prime-2G^2+3\frac{G}{r}-\frac{1}{r^2}\right]+v^\prime(r)G(r). \end{equation} After solving the differential equation (\ref{id}) for the variable $v$, we get the solution \begin{equation} v(r)=\frac{G^2}{r^6}e^{\int\left(\frac{2}{Gr^2}+4G\right)dr}\times\left[\int\Big\{\frac{r^6}{G^3}\left(H(r)+\frac{1}{r^2}\right)e^{-\int\left(\frac{2}{Gr^2}+4G\right)dr}\Big\}dr+C_1\right], \end{equation} where $C_1$ is an arbitrary constant. Hence we obtain $b(r)$ as follows: \begin{equation}\label{b} b(r)=r-\frac{G^2}{r^5}e^{\int\left(\frac{2}{Gr^2}+4G\right)dr}\times\left[\int\Big\{\frac{r^6}{G^3}\left(H(r)+\frac{1}{r^2}\right)e^{-\int\left(\frac{2}{Gr^2}+4G\right)dr}\Big\}dr+C_1\right]. \end{equation} From equation (\ref{b}), it is clear that a shape function can be obtained by choosing two functions $G$ and $H$, provided the result satisfies all the other conditions for a shape function. \par Now, we will obtain a new shape function by assuming the generating functions $G=1/r$ and $H=-r$. Putting the values of $G$, $H$ in (\ref{b}) and using the throat condition $b(r_0)=r_0$, we get \begin{equation}\label{obs} b(r)=r-\frac{1}{r^6}\left(\frac{1}{17}(r_0^{17}-r^{17})+\frac{1}{14}(r^{14}-r_0^{14})\right)^6, \end{equation} and from equation (\ref{G}) we obtain $\phi(r)=\ln r$. \begin{figure}[htb!]
\centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.9\linewidth]{fext1.eps} \centering (a) \end{minipage}\begin{minipage}{.45\textwidth} \centering \includegraphics[width=.9\linewidth]{shape12.eps} \centering (b) \end{minipage} \caption{Behavior of $\left(1-\frac{b(r)}{r}\right)$ (a) and $\frac{b-b'r}{b^2}$ (b) versus `$r$' for the obtained new shape function (\ref{obs}) with $r_0=1.5$.}\label{figure1} \end{figure} \begin{figure}[htb!] \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.8\linewidth]{asymp1.eps} \end{minipage} \caption{Behavior of $\frac{b(r)}{r}$ versus `$r$' for the obtained new shape function (\ref{obs}) with $r_0=1.5$.}\label{figure2} \end{figure} Figure (\ref{figure1}) clearly shows that the new shape function obeys all the required conditions for a shape function. This type of wormhole is not asymptotically flat (see figure (\ref{figure2})), and we will have to use junction conditions. \par Again, we will obtain another new shape function by assuming the generating functions $G=\dfrac{1}{2r}$ and $H=-\dfrac{1}{r^3}\left(1+\dfrac{1}{r^{13}}\right)$. Putting the values of $G$, $H$ in (\ref{b}) and using the throat condition $b(r_0)=r_0$, we get \begin{equation}\label{obs2} b(r)=r-\frac{1}{2r}\left\{2(r^2-r_0^2)-4(r-r_0)+\frac{1}{3}\left(\frac{1}{r^{12}}-\frac{1}{r_0^{12}}\right)\right\} \end{equation} and from equation (\ref{G}) we obtain $\phi(r)=\text{Constant}$ (say, $\phi_0$). Figure (\ref{figure11}) clearly shows that the new shape function obeys all the required conditions for a shape function. Since the redshift function is finite and $\frac{b(r)}{r}\rightarrow 0$ as $r\rightarrow\infty$ (see figure (\ref{figure12})), this represents an asymptotically flat wormhole. \begin{figure}[htb!]
\centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.9\linewidth]{shape113.eps} \centering (a) \end{minipage}\begin{minipage}{.45\textwidth} \centering \includegraphics[width=.9\linewidth]{shape112.eps} \centering (b) \end{minipage} \caption{Behavior of $\left(1-\frac{b(r)}{r}\right)$ (a) and $\frac{b-b'r}{b^2}$ (b) versus `$r$' for the obtained new shape function (\ref{obs2}) with $r_0=1.5$.}\label{figure11} \end{figure} \begin{figure}[htb!] \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.9\linewidth]{shape111.eps} \end{minipage} \caption{Behavior of $\frac{b(r)}{r}$ versus `$r$' for the obtained new shape function (\ref{obs2}) with $r_0=1.5$.}\label{figure12} \end{figure} \begin{figure}[!] \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{G1.eps} \centering (a) \end{minipage}\begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{H1.eps} \centering (b) \end{minipage} \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{G2.eps} \centering (c) \end{minipage}\begin{minipage}{.45\textwidth} \centering \includegraphics[width=.7\linewidth]{H2.eps} \centering (d) \end{minipage} \caption{Behavior of the assumed generating functions $G(r)$ ((a) and (c)) and $H(r)$ ((b) and (d)).} \label{figgen} \end{figure} \section{Generating functions corresponding to known shape and redshift functions}\label{secG} In this section, we are interested in finding some generating functions corresponding to known redshift and shape functions which are used in the literature. \subsection{$\phi(r)=j\ln(\frac{r}{r_0})$, $j$ is an arbitrary real number.} Here we will consider the redshift function $\phi(r)=j\ln(\frac{r}{r_0})$\cite{rd1}. Using equation (\ref{G}), we have found the generating function \begin{equation} G(r)=\frac{1}{2r}(1+j).
\end{equation} \subsection{$\phi(r)=e^{-\frac{r_0}{r}}$} Next, we consider the redshift function $\phi(r)=e^{-\frac{r_0}{r}}$ \cite{rp}. This redshift function obeys the characteristics of a wormhole. For this model we obtain the generating function \begin{equation} G(r)=\frac{r_0e^{-r_0/r}}{r^2}+\frac{1}{2r}. \end{equation} \subsection{$\phi(r)=\ln\sqrt{1+\frac{\gamma^2}{r^2}}$, $\gamma$ is an arbitrary constant} Here we use the redshift function $\phi(r)=\ln\sqrt{1+\frac{\gamma^2}{r^2}}$ \cite{rp}. For this model we obtain the generating function \begin{equation} G(r)=-\frac{1}{2}\left(1+\frac{\gamma^2}{r^2}\right)^{-1}+\frac{1}{2r}. \end{equation} We have summarized the new generating functions $G(r)$ corresponding to these redshift functions in table (\ref{Table:T1}). \begin{table}[!htb] \centering \caption{Generating function $G(r)$ corresponding to redshift functions:} \begin{tabular}{|c|c|}\hline {\bfseries Redshift functions $\phi(r)$} & {\bfseries Generating function $G(r)$} \\ \hline \text{$\phi(r)=j\ln(\frac{r}{r_0})$, $j$ is an } & \text{$G(r)=\frac{1}{2r}(1+j)$ } \\ \text{ arbitrary real number} & \text{} \\ \hline $\phi(r)=e^{-\frac{r_0}{r}}$ & $G(r)=\frac{r_0e^{-r_0/r}}{r^2}+\frac{1}{2r}$ \\ \hline $\phi(r)=\ln\sqrt{1+\frac{\gamma^2}{r^2}}$ & $G(r)=-\frac{1}{2}\left(1+\frac{\gamma^2}{r^2}\right)^{-1}+\frac{1}{2r}$ \\ $\gamma$ is an arbitrary constant & \\ \hline \end{tabular} \label{Table:T1} \end{table} \par To obtain the generating function $H(r)$, we will consider three cases with different shape functions: $(1)~ b(r)=r_0e^{1-\frac{r}{r_0}}$, $(2)~ b(r)=r\frac{\ln(r+1)}{\ln(r_0+1)}$ and $(3)~ b(r)=r_0\frac{a^r}{a^{r_0}}$, $a\in(0, 1)$, for each of the above redshift functions (A), (B) and (C), respectively; $${\bf I}.~ b(r)=r_0e^{1-\frac{r}{r_0}}~ \&~ \phi(r)=j\ln\left(\frac{r}{r_0}\right) $$ Let us consider the shape function $b(r)=r_0e^{1-\frac{r}{r_0}}$ \cite{b1}; using equation (\ref{hr}) we get,
\begin{eqnarray}\label{h1} H(r)&=&\frac{1}{r^2}(2j-j^2)+\frac{r_0^2e^{1-\frac{r_0}{r}}}{2r^4}(1+j)-\frac{r_0e^{1-\frac{r_0}{r}}}{2r^3}(3j-2j^2+3). \end{eqnarray} $${\bf II}.~b(r)=r\frac{\ln(r+1)}{\ln(r_0+1)}~\&~ \phi(r)=j\ln\left(\frac{r}{r_0}\right)$$ Here, we consider the shape function $b(r)=r\frac{\ln(r+1)}{\ln(r_0+1)}$ \cite{b2}. For this shape function, we obtain the generating function $H(r)$ as \begin{eqnarray} H(r)&=&\frac{1}{r^2}\left(1-\frac{\ln(r+1)}{\ln(r_0+1)}\right)(2j-j^2+1)+\frac{1}{2r\ln(r+1)\ln(r_0+1)}(1+j)-\frac{1}{r^2}. \end{eqnarray} $${\bf III}.~ b(r)=r_0\frac{a^r}{a^{r_0}}, ~a\in(0, 1)~ \&~ \phi(r)=j\ln\left(\frac{r}{r_0}\right)$$ The form of the shape function $b(r)=r_0\frac{a^r}{a^{r_0}}$ \cite{b3} gives a wormhole solution provided $a\in(0,1)$. For the above shape function we get $H(r)$ as follows: \begin{eqnarray} H(r)&=&\frac{j}{r^2}(2-j)-\frac{r_0a^{r-r_0}}{2r^3}(3+5j-2j^2)+\frac{r_0a^{r-r_0}\ln a }{2r^2}(j+1). \end{eqnarray} \begin{figure}[!] \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{G11.eps} \centering (a) \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{H11.eps} \centering (b) \end{minipage} \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{H12.eps} \centering (c) \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{H13.eps} \centering (d) \end{minipage} \caption{Diagram of the generating functions $G(r)$ (a) for the redshift function $\phi(r)=j\ln(\frac{r}{r_0})$, and $H(r)$ ((b), (c) and (d) for the shape functions 1, 2 and 3 respectively, with the same redshift function) when $r_0=0.5$, $j=-3$ and $a=0.5$.} \label{fig3} \end{figure} $${\bf IV}.~b(r)=r_0e^{1-\frac{r}{r_0}} ~\&~ \phi(r)=e^{\frac{-r_0}{r}}$$ Using the combination of $b(r)=r_0e^{1-\frac{r}{r_0}}$ and $\phi(r)=e^{\frac{-r_0}{r}}$, we obtain the generating function $H(r)$ (from equation (\ref{hr})) as follows
\begin{eqnarray}\nonumber H(r)&=&\left(1-\frac{r_0e^{-\frac{r_0}{r}}}{r}\right)\left\{\frac{e^{-\frac{r_0}{r}}}{r^4}(3r_0r-r_0^2)-\frac{r_0^2e^{-\frac{2r_0}{r}}}{r^4}+\frac{1}{r^2}\right\}\\ &~&+\frac{e^{1-\frac{r_0}{r}}}{2r^5}(r_0^2-rr_0)(r_0e^{-\frac{r_0}{r}}+r)-\frac{1}{r^2}. \end{eqnarray} $${\bf V}. ~b(r)=r\frac{\ln(r+1)}{\ln(r_0+1)} ~\&~ \phi(r)=e^{\frac{-r_0}{r}}$$ For $b(r)=r\frac{\ln(r+1)}{\ln(r_0+1)}$, equation (\ref{hr}) gives \begin{eqnarray}\nonumber H(r)&=&\left(1-\frac{\ln(r+1)}{\ln(r_0+1)}\right)\left\{\frac{e^{-\frac{r_0}{r}}}{r^4}(3r_0r-r_0^2)-\frac{r_0^2e^{-\frac{2r_0}{r}}}{r^4}+\frac{1}{r^2}\right\}\\ &&+\frac{1}{2r(r+1)\ln(r+1)}(r_0e^{-\frac{r_0}{r}}+r)-\frac{1}{r^2}. \end{eqnarray} $${\bf VI}.~b(r)=r_0\frac{a^r}{a^{r_0}}, ~a\in(0, 1) ~\&~ \phi(r)=e^{\frac{-r_0}{r}}$$ Here, we consider the form $b(r)=r_0\frac{a^r}{a^{r_0}}$ and using equation (\ref{hr}) we get \begin{eqnarray}\nonumber H(r)&=&\left(1-\frac{r_0a^{r-r_0}}{r}\right)\left\{\frac{e^{-\frac{r_0}{r}}}{r^4}(3r_0r-r_0^2)-\frac{r_0^2e^{-\frac{2r_0}{r}}}{r^4}+\frac{1}{r^2}\right\}\\ &&+\frac{r_0a^{r-r_0}}{2r^4}(r\ln a-1)(r_0e^{-\frac{r_0}{r}}+r)-\frac{1}{r^2}. \end{eqnarray} \begin{figure}[!] \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{G22.eps} \centering (a) \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{H21.eps} \centering (b) \end{minipage} \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{H22.eps} \centering (c) \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{H23.eps} \centering (d) \end{minipage} \caption{Diagram of the generating functions $G(r)$ (a) for the redshift function $\phi(r)=e^{\frac{-r_0}{r}}$ and $H(r)$ ((b), (c) and (d) for the shape functions 1, 2 and 3 respectively, with the same redshift function) when $r_0=0.5$ and $a=0.5$.} \label{fig4} \end{figure} $${\bf VII}.
~b(r)=r_0e^{1-\frac{r}{r_0}} ~\&~ \phi(r)=\ln\sqrt{1+\frac{\gamma^2}{r^2}}$$ The above choice of $b(r)$ and $\phi(r)$ gives \begin{eqnarray}\nonumber H(r)&=&\left(1-\frac{r_0e^{1-\frac{r_0}{r}}}{r}\right)\left\{-\frac{4\gamma^2}{r^4}\left(1+\frac{\gamma^2}{r^2}\right)^{-1}+\frac{\gamma^2}{r^6}\left(1+\frac{\gamma^2}{r^2}\right)^{-2}+\frac{1}{r^2}\right\}\\ &&+\frac{e^{1-\frac{r_0}{r}}(r_0^2-r_0r)}{r^3}\left\{-\frac{\gamma^2}{2r^3}\left(1+\frac{\gamma^2}{r^2}\right)^{-1}+\frac{1}{2r}\right\}-\frac{1}{r^2}. \end{eqnarray} $${\bf VIII}.~ b(r)=r\frac{\ln(r+1)}{\ln(r_0+1)}~\&~ \phi(r)=\ln\sqrt{1+\frac{\gamma^2}{r^2}}$$ Considering $b(r)=r\frac{\ln(r+1)}{\ln(r_0+1)}$ and $\phi(r)=\ln\sqrt{1+\frac{\gamma^2}{r^2}}$, from equation (\ref{hr}) we get \begin{eqnarray}\nonumber H(r)&=&\left(1-\frac{\ln(r+1)}{\ln(r_0+1)}\right)\left\{-\frac{4\gamma^2}{r^4}\left(1+\frac{\gamma^2}{r^2}\right)^{-1}+\frac{\gamma^2}{r^6}\left(1+\frac{\gamma^2}{r^2}\right)^{-2}+\frac{1}{r^2}\right\}\\ &&+\frac{1}{(r+1)\ln(r_0+1)}\left\{-\frac{\gamma^2}{2r^3}\left(1+\frac{\gamma^2}{r^2}\right)^{-1}+\frac{1}{2r}\right\}-\frac{1}{r^2}. \end{eqnarray} $${\bf IX}. ~b(r)=r_0\frac{a^r}{a^{r_0}},~ a\in(0, 1)~ \& ~\phi(r)=\ln\sqrt{1+\frac{\gamma^2}{r^2}}$$ Now, we consider the pair $b(r)=r_0\frac{a^r}{a^{r_0}}$ and $\phi(r)=\ln\sqrt{1+\frac{\gamma^2}{r^2}}$. Using these, from equation (\ref{hr}) we get \begin{eqnarray}\nonumber H(r)&=&\left(1-\frac{r_0a^{r-r_0}}{r}\right)\left\{-\frac{4\gamma^2}{r^4}\left(1+\frac{\gamma^2}{r^2}\right)^{-1}+\frac{\gamma^2}{r^6}\left(1+\frac{\gamma^2}{r^2}\right)^{-2}+\frac{1}{r^2}\right\}\\ &&+\frac{r_0a^{r-r_0}(rr_0\ln(a)-1)}{r^2}\left\{-\frac{\gamma^2}{2r^3}\left(1+\frac{\gamma^2}{r^2}\right)^{-1}+\frac{1}{2r}\right\}-\frac{1}{r^2}. \end{eqnarray} \begin{figure}[!]
\centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{G33.eps} \centering (a) \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{H31.eps} \centering (b) \end{minipage} \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{H32.eps} \centering (c) \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{H33.eps} \centering (d) \end{minipage} \caption{Diagram of the generating functions $G(r)$ (a) for the redshift function $\phi(r)=\ln\sqrt{1+\frac{\gamma^2}{r^2}}$ and $H(r)$ ((b), (c) and (d) for the shape functions 1, 2 and 3 respectively, with the same redshift function) when $r_0=0.5$, $\gamma=1$ and $a=0.5$.} \label{fig5} \end{figure} \begin{figure} \begin{minipage}{.35\textwidth} \includegraphics[width=.6\linewidth]{b1.eps} (a) \end{minipage} \begin{minipage}{.35\textwidth} \centering \includegraphics[width=.6\linewidth]{b2.eps} \centering (b) \end{minipage} \begin{minipage}{.35\textwidth} \centering \includegraphics[width=.6\linewidth]{b3.eps} \centering (c) \end{minipage} \caption{Variation of $\frac{b(r)}{r}$ ((a) for $b(r)=r_0e^{1-\frac{r}{r_0}},~r_0=1$, (b) for $b(r)=r\frac{\ln(r+1)}{\ln(r_0+1)},~r_0=1$ and (c) for $b(r)=r_0\frac{a^r}{a^{r_0}}$, $a=0.5$, $r_0=0.5$) with the radial co-ordinate `$r$'.} \label{figss} \end{figure} \begin{figure}[htb!]
\begin{minipage}{.35\textwidth} \includegraphics[width=.6\linewidth]{phia.eps} (a) \end{minipage} \begin{minipage}{.35\textwidth} \centering \includegraphics[width=.6\linewidth]{phib.eps} \centering (b) \end{minipage} \begin{minipage}{.35\textwidth} \centering \includegraphics[width=.6\linewidth]{phic.eps} \centering (c) \end{minipage} \caption{Variation of $\phi(r)$ ((a) for $\phi(r)=j\ln(\frac{r}{r_0}),~ j=-3,~c=0.5$, (b) for $\phi(r)=e^{-\frac{r_0}{r}}$ and (c) for $\phi(r)=\ln\sqrt{1+\frac{\gamma^2}{r^2}},~\gamma=3,~r_0=0.5$) with the radial co-ordinate `$r$'.} \label{figphis} \end{figure} \section{Validation of Energy conditions} \label{sec5} In this section, we continue our discussion with the validation of the energy conditions and make some regional plots to check their validity. In this work, we consider the null energy condition (NEC), weak energy condition (WEC), strong energy condition (SEC) and dominant energy condition (DEC) to examine the wormholes. Mathematically, the above energy conditions can be written as NEC: $T_{\alpha\beta}K^\alpha K^\beta\geq0$, WEC: $T_{\alpha\beta}V^\alpha V^\beta\geq0$, SEC: $\left(T_{\alpha\beta}-\frac{1}{2}Tg_{\alpha\beta}\right)V^\alpha V^\beta\geq0$, DEC: $-T_\beta^\alpha V^\beta$ is future directed. Here $V^\alpha$ is a unit time-like vector while $K^\alpha$ is a null vector. So for an anisotropic fluid the above energy conditions are given as \cite{r4}: $(i)$ NEC: $\rho+p_r\geq0, ~\rho+p_t\geq0$, $(ii)$ WEC: $\rho\geq0, ~\rho+p_r\geq0, ~\rho+p_t\geq0$, $(iii)$ SEC: $\rho+p_r\geq0,~ \rho+p_t\geq0,~ \rho+p_r+2p_t\geq0$, $(iv)$ DEC: $\rho\geq0,~\rho-|p_r|\geq0,~ \rho-|p_t|\geq0$. We have examined the energy conditions for all wormhole solutions in figures (\ref{fig6})-(\ref{fig15}). \begin{figure}[!]
\centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{combo1s11.eps} \centering (a) \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{combo2s11.eps} \centering (b) \end{minipage} \caption{Behavior of $\rho+p_r,~ \rho+p_t,~ \rho+p_r+2p_t$ (a) and $\rho,~\rho-|p_r|,~\rho-|p_t|$ (b), plotted for the obtained new shape function (\ref{obs}) with $\phi(r)=\ln r$ when $\lambda=1$ and $r_0=1.5$.} \label{fig6} \end{figure} \begin{figure}[!] \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{combo1s111.eps} \centering (a) \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{combo2s111.eps} \centering (b) \end{minipage} \caption{Behavior of $\rho+p_r,~ \rho+p_t,~ \rho+p_r+2p_t$ (a) and $\rho,~\rho-|p_r|,~\rho-|p_t|$ (b), plotted for the obtained new shape function (\ref{obs2}) with $\phi(r)=\text{Const.}$ when $\lambda=1$ and $r_0=1.5$.} \label{fig66} \end{figure} \begin{figure}[!] \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{E111.eps} \centering (a) \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{E112.eps} \centering (b) \end{minipage} \caption{Behavior of $\rho+p_r,~ \rho+p_t,~ \rho+p_r+2p_t$ (a) and $\rho,~\rho-|p_r|,~\rho-|p_t|$ (b), plotted for the redshift function $\phi=j\ln\left(\frac{r}{r_0}\right)$ and shape function $b(r)=r_0e^{1-\frac{r}{r_0}}$ with the numerical values $j=2.2$, $r_0=1.1$ and $\lambda=0.9$.}\label{fig7} \end{figure} \begin{figure}[!]
\centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{E121.eps} \centering (a) \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{E122.eps} \centering (b) \end{minipage} \caption{Behavior of $\rho+p_r,~ \rho+p_t,~ \rho+p_r+2p_t$ (a) and $\rho,~\rho-|p_r|,~\rho-|p_t|$ (b), plotted for the redshift function $\phi=j\ln\left(\frac{r}{r_0}\right)$ and shape function $b(r)=r\frac{\ln(r+1)}{\ln(r_0+1)}$ with the numerical values $j=0.6$, $r_0=0.5$ and $\lambda=100$.}\label{fig8} \end{figure} \begin{figure}[!] \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{E131.eps} \centering (a) \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{E132.eps} \centering (b) \end{minipage} \caption{Behavior of $\rho+p_r,~ \rho+p_t,~ \rho+p_r+2p_t$ (a) and $\rho,~\rho-|p_r|,~\rho-|p_t|$ (b), plotted for the redshift function $\phi=j\ln\left(\frac{r}{r_0}\right)$ and shape function $b(r)=r_0\frac{a^r}{a^{r_0}}$ with the numerical values $j=1.6$, $r_0=1.2$, $a=0.8$ and $\lambda=5$.}\label{fig9} \end{figure} \begin{figure}[!] \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{E211.eps} \centering (a) \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{E212.eps} \centering (b) \end{minipage} \caption{Behavior of $\rho+p_r,~ \rho+p_t,~ \rho+p_r+2p_t$ (a) and $\rho,~\rho-|p_r|,~\rho-|p_t|$ (b), plotted for the redshift function $\phi=e^{-\frac{r_0}{r}}$ and shape function $b(r)=r_0e^{1-\frac{r}{r_0}}$ with the numerical values $r_0=0.5$ and $\lambda=0.8$.}\label{fig10} \end{figure} \begin{figure}[!]
\centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{E221.eps} \centering (a) \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{E222.eps} \centering (b) \end{minipage} \caption{Behavior of $\rho+p_r,~ \rho+p_t,~ \rho+p_r+2p_t$ (a) and $\rho,~\rho-|p_r|,~\rho-|p_t|$ (b), plotted for the redshift function $\phi=e^{-\frac{r_0}{r}}$ and shape function $b(r)=r\frac{\ln(r+1)}{\ln(r_0+1)}$ with the numerical values $r_0=0.5$ and $\lambda=100$.}\label{fig11} \end{figure} \begin{figure}[!] \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{E231.eps} \centering (a) \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{E232.eps} \centering (b) \end{minipage} \caption{Behavior of $\rho+p_r,~ \rho+p_t,~ \rho+p_r+2p_t$ (a) and $\rho,~\rho-|p_r|,~\rho-|p_t|$ (b), plotted for the redshift function $\phi=e^{-\frac{r_0}{r}}$ and shape function $b(r)=r_0\frac{a^r}{a^{r_0}}$ with the numerical values $a=0.95$, $r_0=1.2$ and $\lambda=5$.}\label{fig12} \end{figure} \begin{figure}[!] \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{E311.eps} \centering (a) \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{E312.eps} \centering (b) \end{minipage} \caption{Behavior of $\rho+p_r,~ \rho+p_t,~ \rho+p_r+2p_t$ (a) and $\rho,~\rho-|p_r|,~\rho-|p_t|$ (b), plotted for the redshift function $\phi=\ln\left(\sqrt{1+\frac{\gamma^2}{r^2}}\right)$ and shape function $b(r)=r_0e^{1-\frac{r}{r_0}}$ with the numerical values $\gamma=0.2$, $r_0=0.1$ and $\lambda=1$.}\label{fig13} \end{figure} \begin{figure}[!]
\centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{E321.eps} \centering (a) \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{E322.eps} \centering (b) \end{minipage} \caption{Behavior of $\rho+p_r,~ \rho+p_t,~ \rho+p_r+2p_t$ (a) and $\rho,~\rho-|p_r|,~\rho-|p_t|$ (b), plotted for the redshift function $\phi=\ln\left(\sqrt{1+\frac{\gamma^2}{r^2}}\right)$ and shape function $b(r)=r\frac{\ln(r+1)}{\ln(r_0+1)}$ with the numerical values $\gamma=0.8$, $r_0=0.5$ and $\lambda=1$.}\label{fig14} \end{figure} \begin{figure}[htb] \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{E331.eps} \centering (a) \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.6\linewidth]{E332.eps} \centering (b) \end{minipage} \caption{Behavior of $\rho+p_r,~ \rho+p_t,~ \rho+p_r+2p_t$ (a) and $\rho,~\rho-|p_r|,~\rho-|p_t|$ (b), plotted for the redshift function $\phi=\ln\left(\sqrt{1+\frac{\gamma^2}{r^2}}\right)$ and shape function $b(r)=r_0\frac{a^r}{a^{r_0}}$ with the numerical values $\gamma=0.8$, $r_0=1.5$, $a=0.9$ and $\lambda=5$.}\label{fig15} \end{figure} \section{Embedding diagrams} \label{sec6} One may use embedding diagrams to visualize a wormhole and extract some useful information for the choice of the shape function $b(r)$. In order to produce embeddings of two-dimensional space slices (or hypersurfaces) of the wormhole in $\mathbb{R}^3$, we make the restriction $\theta=\pi/2$. Here, we consider a fixed moment of time, $t=$ constant, and the wormhole metric reduces to \begin{equation}\label{eq24} ds^2=\left(1-\frac{b(r)}{r}\right)^{-1}dr^2+r^2d\phi^2. \end{equation} In the embedding space we introduce cylindrical coordinates $z$, $r$ and $\phi$. Then the Euclidean metric of the embedding space has the form \cite{r2} \begin{equation} ds^2=dz^2+dr^2+r^2d\phi^2.
\end{equation} The embedded surface will be axially symmetric, and hence can be described by the single function $z=z(r)$. On that surface the line element can be written as \begin{equation}\label{eq26} ds^2=\Bigg[1+\left(\frac{dz}{dr}\right)^2\Bigg]dr^2+r^2d\phi^2. \end{equation} Now, comparing equation (\ref{eq24}) and equation (\ref{eq26}), we obtain the expression for the embedding function as \begin{equation} z(r)=\pm\int_{r_0}^{r}\left(\frac{r}{b(r)}-1\right)^{-\frac{1}{2}}dr. \end{equation} \begin{figure}[htb] \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.7\linewidth]{Emb1.eps} \centering (a) \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.7\linewidth]{Embasymp.eps} \centering (b) \end{minipage} \caption{Embedding diagram (a) for the obtained new shape function (\ref{obs}) with $r_0=1.5$, and (b) for the obtained new shape function (\ref{obs2}) when $r_0=1.5$.} \label{fig16} \end{figure} \begin{figure}[htb] \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.7\linewidth]{Emb3.eps} \centering (a) \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.7\linewidth]{Emb4.eps} \centering (b) \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.7\linewidth]{Emb5.eps} \centering (c) \end{minipage} \caption{Embedding diagram (a) for the shape function 1 with $r_0=0.5$, (b) for the shape function 2 with $r_0=0.5$ and (c) for the shape function 3 when $r_0=0.5$, $a=0.8$.} \label{fig17} \end{figure} The graphical representation of the embedding function is shown in figures (\ref{fig16})-(\ref{fig17}) for the discussed wormhole geometries.
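The embedding profile can also be generated numerically. Below is a minimal Python sketch (illustrative only; it assumes shape function 1, $b(r)=r_0e^{1-r/r_0}$, and a plain trapezoidal rule). The integrand has an integrable $1/\sqrt{r-r_0}$ singularity at the throat, so the grid starts a small distance above $r_0$:

```python
import numpy as np

def b(r, r0=0.5):
    # Shape function 1 (assumed here for illustration): b(r) = r0 * exp(1 - r/r0)
    return r0 * np.exp(1.0 - r / r0)

def embedding_z(r_max, r0=0.5, eps=1e-6, n=20000):
    # Trapezoidal approximation of z(r_max) = int_{r0}^{r_max} (r/b(r) - 1)^(-1/2) dr,
    # starting at r0 + eps to avoid the integrable singularity at the throat.
    r = np.linspace(r0 + eps, r_max, n)
    f = 1.0 / np.sqrt(r / b(r, r0) - 1.0)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))

# z(r) increases monotonically away from the throat
print(embedding_z(1.0), embedding_z(2.0))
```

Taking the $\pm$ branches of this profile and revolving them about the $z$-axis reproduces the familiar flaring embedding surface.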
\section{Results and Discussions} \label{sec7} \begin{table}[htb] \centering \caption{Range of the radial coordinate `$r$' where the energy conditions are satisfied:}\label{T2} \begin{tabular}{|c|c|c|c|c|c|} \hline $\phi(r)$&$b(r)$& NEC &WEC&SEC&DEC\\ \hline \multirow{3}{*}{$\phi(r)=j\ln\left(\frac{r}{r_0}\right)$} & {$r_0e^{1-\frac{r}{r_0}}$} & {(1.5, 10)}&(3.25, 10)&{(1.5, 10)} &$\times$\\ \cline{2-6} & {$\frac{r\ln(r+1)}{\ln(r_0+1)}$} &{($r_0$, 1.75)}&{($r_0$, 1.75)}&{($r_0$, 1)}&{($r_0$, 1.75)}\\ \cline{2-6} & {$\frac{r_0a^r}{a^{r_0}}$}&{(2, 10)}&{(2, 10)}&{(2, 10)}&{$\times$} \\ \hline \multirow{3}{*}{$\phi(r)=e^{-\frac{r_0}{r}}$} & {$r_0e^{1-\frac{r}{r_0}}$} & {(1.25, 10)}&(2, 10)&{(1.25, 10)} &(4, 10)\\ \cline{2-6} & {$\frac{r\ln(r+1)}{\ln(r_0+1)}$} &{($r_0$, 10)}&{($r_0$, 10)}&{($r_0$, 10)}&{($r_0$, 10)}\\ \cline{2-6} & {$\frac{r_0a^r}{a^{r_0}}$}&{(3.5, 10)}&{(3.5, 10)}&{(3.5, 10)}&{(6, 10)} \\ \hline \multirow{3}{*}{$\phi(r)=\ln\sqrt{1+\frac{\gamma^2}{r^2}}$} & {$r_0e^{1-\frac{r}{r_0}}$} & {(2, 10)}&(2, 10)&{(2, 10)} &(1.5, 10)\\ \cline{2-6} & {$\frac{r\ln(r+1)}{\ln(r_0+1)}$} &{($r_0$, 10)}&{($r_0$, 10)}&{$\times$}&{($r_0$, 10)}\\ \cline{2-6} & {$\frac{r_0a^r}{a^{r_0}}$}&{(3.75, 10)}&{(3.75, 10)}&{(3.75, 10)}&{(3.75, 10)} \\ \hline \end{tabular} \end{table} In 4-$D$ spacetime, new shape functions for wormholes were obtained in the present article. We did this by choosing pairs of generating functions and then examining the corresponding energy conditions. In these cases, all the energy conditions can be satisfied in at least a small region near the wormhole throat. From figure (\ref{fig6}), it is clear that all the energy conditions are satisfied in the region $r\in(r_0,2)$ for the obtained shape function (\ref{obs}). Also, for the asymptotically flat wormhole, figure (\ref{fig66}) shows that all the energy conditions are satisfied in the region $r\in(5,10)$ for the obtained shape function (\ref{obs2}).
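The pointwise inequalities $(i)$-$(iv)$ quoted in Section \ref{sec5} are simple to check mechanically. A minimal sketch (illustrative only; in practice $\rho$, $p_r$ and $p_t$ would come from the field equations for a chosen $b(r)$, $\phi(r)$ and $\lambda$):

```python
def energy_conditions(rho, p_r, p_t):
    # Pointwise energy conditions for an anisotropic fluid,
    # following inequalities (i)-(iv) in the text.
    nec = rho + p_r >= 0 and rho + p_t >= 0
    wec = rho >= 0 and nec
    sec = nec and rho + p_r + 2 * p_t >= 0
    dec = rho >= 0 and rho - abs(p_r) >= 0 and rho - abs(p_t) >= 0
    return {"NEC": nec, "WEC": wec, "SEC": sec, "DEC": dec}

print(energy_conditions(1.0, -0.5, 0.2))
```

Scanning such a check over a grid of `$r$' values is how validity ranges of the kind reported in table (\ref{T2}) can be read off.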
\par Table (\ref{T2}) shows that in most cases the wormholes satisfy all the energy conditions in a region of `$r$' (see figures (\ref{fig8}), (\ref{fig10})-(\ref{fig13}) and (\ref{fig15})). In some cases, $\rho$ is negative in the neighbourhood of $r_0$ (see figures (\ref{fig7}), (\ref{fig9}), (\ref{fig10}) and (\ref{fig13})), so the existence of these traversable wormholes requires exotic matter. From figures (\ref{fig7}) and (\ref{fig8}) we can conclude that two types of wormhole solutions satisfy all the energy conditions except the DEC for some particular choices of parameters, while one type does not satisfy the SEC (see figure (\ref{fig14})). So from the above discussion we can conclude that some of the presented solutions violate the energy conditions and most of them are not asymptotically flat (see figures (\ref{figss}) and (\ref{figphis})). \par Figures (\ref{figgen})--(\ref{fig5}) show the behaviors of all the generating functions $G(r)$ and $H(r)$. In \cite{21}, the authors observed that, in Einstein gravity, the generating function $G(r)$ related to the redshift function is always positive and decreasing in `$r$', while the second generating function $H(r)$ is always negative and increasing. However, this observation regarding the generating functions does not hold in $f(R,T)$ gravity (see figures \ref{fig3}(b), \ref{fig3}(c), \ref{fig3}(d), \ref{fig4}(d), \ref{fig5}(a) and \ref{fig5}(c)). If we choose $\lambda=0$ in the form of $f(R, T)$, then we recover the same $H(r)$ as in Einstein gravity. Hence, comparing with the work of Rahaman et al. \cite{21}, we can conclude that the generating functions depend on the underlying gravity theory. \par In the present work, the following algorithm can be used to obtain a shape function from generating functions: First, two generating functions $G(r)$ and $H(r)$ have to be chosen. Secondly, from equation (\ref{G}) the redshift function is obtained using $G(r)$ (provided the integral part is integrable).
The shape function $b(r)$ is found by using equation (\ref{b}) (provided the integral exists) and the throat condition $b(r_0)=r_0$ (to fix the integration constant). Finally, the conditions (\ref{eqc2})-(\ref{eqc4}) have to be verified for the obtained $b(r)$. If all the conditions are satisfied, then $b(r)$ can be termed a shape function.
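As a concrete illustration of the final verification step, the asymptotically flat shape function (\ref{obs2}) can be checked numerically: the throat condition $b(r_0)=r_0$, the flare-out requirement via a finite-difference estimate of $b'(r_0)<1$, and the asymptotic behavior $b(r)/r\rightarrow 0$ (the step size and tolerances below are illustrative choices):

```python
def b(r, r0=1.5):
    # Shape function (obs2):
    # b(r) = r - (1/(2r)) * {2(r^2 - r0^2) - 4(r - r0) + (1/3)(r^-12 - r0^-12)}
    bracket = 2.0 * (r**2 - r0**2) - 4.0 * (r - r0) + (r**-12 - r0**-12) / 3.0
    return r - bracket / (2.0 * r)

r0, h = 1.5, 1e-6
db = (b(r0 + h) - b(r0 - h)) / (2.0 * h)  # central-difference estimate of b'(r0)

print("b(r0) - r0 =", b(r0) - r0)       # throat condition: should vanish
print("b'(r0) =", db)                   # flare-out needs b'(r0) < 1
print("b(10^4)/10^4 =", b(1e4) / 1e4)   # small value indicates asymptotic flatness
```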
When Cummins M-Series diesel engines go down, depend on Diesel Parts Direct to help you get them up and running quickly. Not only do we have an expansive inventory, but many parts for ISM and QSM engines are in stock and ready to ship. So, whether you need a water pump kit for an ISM, or a QSM turbocharger, you can get your equipment up and running in no time. In addition to inventory, Diesel Parts Direct has the expertise you need to ensure proper fitment for L10 and M11, as well as ISM and QSM engines. We have over 40 years of experience supporting Cummins diesel engines, so we know parts. No matter if you're working in mining, agriculture or some other application, you can rest assured that your parts will be ready to install as soon as they hit the door. Think Diesel Parts Direct when you need the right Cummins M Series parts delivered promptly. We carry a large assortment of new and remanufactured parts for L10, M11, ISM, and QSM diesel engines. For more information on ISM/QSM engines, please visit our Cummins engine specifications page.
Q: How to get list of video files in a specific folder in android? How to get list of video files stored in a specific folder using MediaStore? Currently I'm using this code: package com.example.videolisttest; import android.app.Activity; import android.content.Context; import android.database.Cursor; import android.os.Bundle; import android.provider.MediaStore; import android.view.View; import android.view.ViewGroup; import android.widget.AdapterView; import android.widget.BaseAdapter; import android.widget.ListView; import android.widget.TextView; import android.widget.AdapterView.OnItemClickListener; import android.widget.Toast; public class MainActivity extends Activity { private Cursor videocursor; private int video_column_index; ListView videolist; int count; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); init_phone_video_grid(); } private void init_phone_video_grid() { System.gc(); String[] proj = { MediaStore.Video.Media._ID, MediaStore.Video.Media.DATA, MediaStore.Video.Media.DISPLAY_NAME, MediaStore.Video.Media.SIZE }; videocursor = managedQuery(MediaStore.Video.Media.EXTERNAL_CONTENT_URI, proj, null, null, null); count = videocursor.getCount(); videolist = (ListView) findViewById(R.id.PhoneVideoList); videolist.setAdapter(new VideoAdapter(getApplicationContext())); videolist.setOnItemClickListener(videogridlistener); } private OnItemClickListener videogridlistener = new OnItemClickListener() { public void onItemClick(AdapterView parent, View v, int position, long id) { System.gc(); video_column_index = videocursor .getColumnIndexOrThrow(MediaStore.Video.Media.DATA); videocursor.moveToPosition(position); String filename = videocursor.getString(video_column_index); /* Intent intent = new Intent(MainActivity.this, ViewVideo.class); intent.putExtra("videofilename", filename); startActivity(intent);*/ 
Toast.makeText(getApplicationContext(), filename, Toast.LENGTH_SHORT).show(); } }; public class VideoAdapter extends BaseAdapter { private Context vContext; public VideoAdapter(Context c) { vContext = c; } public int getCount() { return count; } public Object getItem(int position) { return position; } public long getItemId(int position) { return position; } public View getView(int position, View convertView, ViewGroup parent) { System.gc(); TextView tv = new TextView(vContext.getApplicationContext()); String id = null; if (convertView == null) { video_column_index = videocursor .getColumnIndexOrThrow(MediaStore.Video.Media.DISPLAY_NAME); videocursor.moveToPosition(position); id = videocursor.getString(video_column_index); video_column_index = videocursor .getColumnIndexOrThrow(MediaStore.Video.Media.SIZE); videocursor.moveToPosition(position); id += " Size(KB):" + videocursor.getString(video_column_index); tv.setText(id); } else tv = (TextView) convertView; return tv; } } } But this method scans the whole sdcard. I want to scan a specific folder. I don't want to use the FileFilter method. Please share any method using MediaStore.
A: private List<String> path_vid = new ArrayList<>(); public void searchVid(File dir) { String pattern = ".mp4"; // Get the list of files in that folder final File listFile[] = dir.listFiles(); if (listFile != null) { for (int i = 0; i < listFile.length; i++) { if (listFile[i].isDirectory()) { searchVid(listFile[i]); // recurse into subfolders } else { if (listFile[i].getName().endsWith(pattern)) { // Add the path of the video to the list path_vid.add(listFile[i].getAbsolutePath()); } } } } } This function is recursive and collects the paths of the matching videos into a list. The videos are searched from a specified folder; if you want to search the whole sdCard, use: File sdCard = new File(Environment.getExternalStorageDirectory().getAbsolutePath()); //For example: //File vidsFolder= new File(Environment.getExternalStorageDirectory().getAbsolutePath()+"/Videos"); searchVid(sdCard); if(path_vid.size()>0){ //Convert list into array String[] array = path_vid.toArray(new String[path_vid.size()]); //Create Adapter ArrayAdapter<String> adapter = new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, array); //Set adapter to videolist videolist.setAdapter(adapter); }else{ //No vids found finish(); } A: LOAD ALL VIDEOS ON YOUR DEVICE String[] projection = {MediaStore.Video.Media._ID, MediaStore.Video.Media.DISPLAY_NAME, MediaStore.Video.Media.DURATION}; String sortOrder = MediaStore.Video.Media.DATE_ADDED + " DESC"; Cursor cursor = getApplication().getContentResolver().query(MediaStore.Video.Media.EXTERNAL_CONTENT_URI, projection, null, null, sortOrder); LOAD SPECIFIC FOLDER VIDEOS ONLY String selection=MediaStore.Video.Media.DATA +" like?"; String[] selectionArgs=new String[]{"%Videoder%"}; Cursor cursor = managedQuery(MediaStore.Video.Media.EXTERNAL_CONTENT_URI, projection,selection,selectionArgs, MediaStore.Video.Media.DATE_TAKEN + " DESC"); A: You can use this code to get videos from a specific folder: String selection=MediaStore.Video.Media.DATA +" like?"; String[] selectionArgs=new String[]{"%FolderName%"};
videocursor = managedQuery(MediaStore.Video.Media.EXTERNAL_CONTENT_URI, parameters, selection, selectionArgs, MediaStore.Video.Media.DATE_TAKEN + " DESC"); It works for me like a charm. A: If you know the specific folder, use: .getExternalFilesDir A: @RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN) public void getVideos() { ContentResolver contentResolver = getContentResolver(); Uri uri = MediaStore.Video.Media.EXTERNAL_CONTENT_URI; String selection = MediaStore.Video.Media.DATA +" like?"; String[] selectionArgs = new String[]{"%/" + "Download" + "/%"}; Cursor cursor = contentResolver.query(uri, null, selection, selectionArgs , null); //looping through all rows and adding to list if (cursor != null && cursor.moveToFirst()) { do { String title = cursor.getString(cursor.getColumnIndex(MediaStore.Video.Media.TITLE)); String duration = cursor.getString(cursor.getColumnIndex(MediaStore.Video.Media.DURATION)); String data = cursor.getString(cursor.getColumnIndex(MediaStore.Video.Media.DATA)); VideoModel videoModel = new VideoModel(); videoModel.setVideoTitle(title); videoModel.setVideoUri(Uri.parse(data)); videoModel.setVideoDuration(timeConversion(Long.parseLong(duration))); videoArrayList.add(videoModel); } while (cursor.moveToNext()); } VideoAdapter adapter = new VideoAdapter(this, videoArrayList); recyclerView.setAdapter(adapter); adapter.setOnItemClickListener(new VideoAdapter.OnItemClickListener() { @Override public void onItemClick(int pos, View v) { Intent intent = new Intent(getApplicationContext(), VideoPlayActivity.class); intent.putExtra("pos", pos); startActivity(intent); } }); } It works perfectly for me. Full Code https://github.com/PSOjha/PlayVideoFromSDCardFolder
Art gallery paying tribute to Tulsa Race Massacre victims vandalized in Manhattan

Jim Dolan has more on the act of vandalism that took place at the gallery dedicated to victims of the Tulsa Race Massacre.

SOHO, Manhattan (WABC) -- Police are investigating an act of vandalism overnight outside an art gallery that is honoring the victims of the Tulsa Race Massacre.

The Soho gallery is currently featuring Black artists in a tribute to mark 100 years since the massacre. In 1921, a white mob descended on a prosperous Black neighborhood known as Black Wall Street. Homes and businesses were destroyed and hundreds of Black Americans were killed over two days of bloodshed.

The vandalism outside the gallery on 26 Mercer Street included white paint scribbled over the Black Wall Street Gallery sign at the entrance of the exhibit. Curated by gallery owner Dr. Ricco Wright, a fourth-generation Tulsan, 21 Piece Salute honors those who lost their lives and livelihoods in the 1921 Tulsa Race Massacre and celebrates Black entrepreneurship.

"There's a price for social justice and apparently this is it," Wright said.

It is believed the vandalism happened sometime between 11 p.m. Sunday and 7 a.m. Monday, even though the sign has been up for weeks. Wright doesn't believe the timing was a coincidence.

"I don't believe in coincidences. In fact, I think this is very deliberate and intentional. We come here, open up on Thursday, and on the very date on which the massacre happened 100 years ago, this happens overnight," Wright said.

Wright believes the anniversary and the vandalism at the gallery filled with the art of Black artists may be a good time for New Yorkers to reflect on their own history.

"Racism and slavery have been a part of New York's history and New Yorkers just haven't known, so this just illuminates that fact," Wright said.
The NYPD Hate Crimes task force was notified and is investigating. The exhibition runs from May 27 to June 19.

Copyright © 2023 WABC-TV. All Rights Reserved.
The Internet Speculative Fiction Database (ISFDB) is a database of bibliographic information on genres considered speculative fiction, including science fiction and related genres such as fantasy, alternate history, and horror fiction. The ISFDB is a volunteer effort, with the database being open for moderated editing and user contributions, and a wiki that allows the database editors to coordinate with each other. The site has catalogued 2,002,324 story titles from 232,816 authors. The code for the site has been used in books and tutorials as examples of database schema and organizing content. The ISFDB database and code are available under Creative Commons licensing. The site won the Wooden Rocket Award in the Best Directory Site category in 2005.

Purpose

The ISFDB database indexes speculative fiction (science fiction, fantasy, horror, and alternate history) authors, novels, short fiction, essays, publishers, awards, and magazines in print, electronic, and audio formats. It supports author pseudonyms, series, and cover art plus interior illustration credits, which are combined into integrated author, artist, and publisher bibliographies with brief biographical data. An ongoing effort is verification of publication contents and secondary bibliographic sources against the database, with the goals being data accuracy and to improve the coverage of speculative fiction to 100 percent.

History

Several speculative fiction author bibliographies were posted to the USENET newsgroup rec.arts.sf.written from 1984 to 1994 by Jerry Boyajian, Gregory J. E. Rawlins and John Wenn. A more or less standard bibliographic format was developed for these postings. Many of these bibliographies can still be found at The Linköping Science Fiction Archive. In 1993, a searchable database of awards information was developed by Al von Ruff. In 1994, John R. R. Leavitt created the Speculative Fiction Clearing House (SFCH).
In late 1994, he asked for help in displaying awards information, and von Ruff offered his database tools. Leavitt declined, because he wanted code that could interact with other aspects of the site. In 1995, Al von Ruff and "Ahasuerus" (a prolific contributor to rec.arts.sf.written) started to construct the ISFDB, based on experience with the SFCH and the bibliographic format finalized by John Wenn. The first version of ISFDB went live on 8 September 1995, and a URL was published in January 1996.

The ISFDB was first located at an ISP in Champaign, Illinois, but it suffered from constrained resources in disk space and database support, which limited its growth. In October 1997 the ISFDB moved to SF Site, a major SF portal and review site. Due to the rising costs of remaining with SF Site, the ISFDB moved to its own domain in December 2002, but it was shut down by the hosting ISP due to high resource usage. In February 2003, it began to be hosted by The Cushing Library Science Fiction and Fantasy Research Collection and Institute for Scientific Computation at Texas A&M University. The ISFDB moved to a commercial hosting service in 2008. On 27 February 2005, the database and the underlying code became available under Creative Commons licensing.

ISFDB was originally edited by a limited number of people, principally Al von Ruff and Ahasuerus. Editing was opened in 2006 to the general public on an open content basis, with changed content being approved by one of a limited number of moderators in an attempt to protect the accuracy of the database.

In late 2022, the ISFDB was publicly criticized for its refusal to update its record of an author's name after a name change. The record remained uncorrected for more than a year, with an ISFDB moderator deploying transphobic talking points at one point, in spite of the fact that maintaining a trans author's deadname violates best practices and recommendations from various professional organizations.
Awards and reception

In 1998, Cory Doctorow wrote in Science Fiction Age that "[T]he best all-round guide to things science-fictional remains the Internet Speculative Fiction Database". In April 2009, Zenkat wrote that "it is widely considered one of the most authoritative sources about Science Fiction, Fantasy, and Horror literature available on the Internet". ISFDB was the winner of the 2005 Wooden Rocket Award in the Best Directory Site category.

Ken Irwin reviewed the site for Reference Reviews in 2006, praising "the scalable level of detail available for particular authors and titles" while also pointing out "usability improvements" needed at that time. He concludes by calling it "a tremendous asset to researchers and fans of speculative fiction", stating that no other online bibliographies have "the breadth, depth, and sophistication of this database". On Tor.com, James Davis Nicoll described the site as "the single best [SFF] bibliographical resource there is". Gabriel McKee, author of The Gospel According to Science Fiction, described the site as an "indispensable [source] of information in putting this project together", and the site was described as "invaluable" by Andrew Milner and J. R. Burgmann in their book, Science Fiction and Climate Change. The Chicon 8 committee gave a special committee award to ISFDB during their opening ceremonies on 1 September 2022.

As a real-world example of a non-trivial database, the schema and MySQL files from ISFDB have been used in a number of tutorials. Schema and data from the site were used throughout Chapter 9 of the book Rails For Java Developers. It was also used in a series of tutorials by Lucid Imagination on Solr, an enterprise search platform. Quantcast estimates that ISFDB is visited by over 67,400 people monthly. The database contains 2,002,324 unique story titles from 232,816 authors.
Reap Investments vs Lakha Investments (Pty) Ltd (35/2018) [2018] SZSC 44 (07 November 2018)

Summary: Civil procedure: application that appeal deemed to have been abandoned; no record of appeal filed; no application for extension of time; appeal deemed abandoned and dismissed; costs awarded on the ordinary scale.

IN THE SUPREME COURT OF ESWATINI
Civil Appeal Case No. 35/2018

In the matter between:
REAP INVESTMENTS (PTY) LTD Appellant
LAKHA INVESTMENTS (PTY) LTD Respondent

Neutral Citation: Reap Investments vs Lakha Investments (Pty) Ltd (35/2018) [2018] SZSC 44 (07/11/2018)
Coram: J. M. CURRIE AJA, S.J.K. MATSEBULA AND M.J. MANZINI
Heard: 19 October 2018
Delivered: 7th November 2018

CURRIE AJA

BRIEF BACKGROUND FACTS AND SEQUENCE

[1] The Respondent/Appellant had lodged an application in the Magistrate's Court for ejectment of the Applicant from House No. 12 of Lot 784 Matsapha, cancellation of the lease and payment of arrear rentals. The Applicant's goods had been attached in terms of the landlord's hypothec. The matter was argued and judgment handed down. The Applicant appealed to the High Court to have the judgment set aside. Despite having been served with an Order to appear in court the Appellant failed to appear. Justice T. Mlangeni issued an ex tempore order and the Appeal was granted. On the 14th June 2018 the Respondent noted an appeal. On the 20th June 2018 a written judgment was issued by the High Court.
Despite the lapse of four months no record of appeal has since been filed. The Applicant has filed an application claiming: (a) That the Notice of Appeal dated 14th June 2018 by the Respondent be and is hereby deemed to have been abandoned in terms of Rule 30 (4). (b) Costs of suit. (c) Such further and/or alternative relief. [2] The Application was served on the Respondent's attorneys on the 2nd October 2018. [3] No answering affidavit was filed before the date of hearing but at the hearing the respondent sought leave to hand up its Answering Affidavit from the bar, which it did. This should not be seen as a precedent as this Court does not accept any documents from the bar and it was only accepted so as to finalize the matter in the interests of speedy justice. APPLICANT'S AFFIDAVIT IN SUPPORT OF THE APPLICATION AND THE ARGUMENT BY COUNSEL FOR THE APPELLANT. [4] The Applicant contends that the appeal has not been pursued any further despite the lapse of some 4 (four) months. The Applicant has an order that permits the release of its goods and costs that it cannot execute without the appeal being finalized or deemed to be abandoned. Its valuable goods remain attached and same are deteriorating. [5] In terms of Rule 8 of the Court of Appeal Rules an Appellant is required to note its Appeal within 4 (four) weeks from date of judgment. [6] In terms of Rule 30 (1) the Appellant is required to lodge a record of proceedings with the Registrar for certification within two (2) months from date of noting the appeal. If it does not it is entitled to utilize the mechanism of Rule 16 (1), which provides that an application may be brought for an extension of time. [7] The Respondent was obliged to file the record of appeal by the 14th August 2018 or move an application for extension of time in terms of Rule 16 (1). It has failed to do either. 
[8] As an appeal has been lodged which stays execution of a judgment of the court a quo, the Applicant is unable to execute its judgment. The Applicant is prejudiced by the inaction of the Respondent, as its goods remain attached, thus preventing the Applicant from continuing with its business, whilst its goods continue to deteriorate.

OPPOSING AFFIDAVIT AND ARGUMENT BY COUNSEL FOR THE RESPONDENT

[9] The Respondent was served with the present Notice of Application in terms of Rule 30(4) on the 2nd October 2018 and the Respondent filed Notice of Intention to Oppose on the same day, but no Answering Affidavit was filed and the Respondent sought leave to hand same from the bar, which it did. (See paragraph [3] above).

[10] The Respondent contends that it is desirous of pursuing the appeal to finality and sets out various reasons why it was not aware of the judgment of the court a quo and the ex tempore order. However, no reason is given as to why the record was not filed within 2 months in terms of Rule 30 (1), although the Respondent alleges that it has good and substantial reasons for not submitting the record timeously. Most surprisingly, the Respondent contends that it is entitled, in terms of Rule 16 (2), to apply for an extension of time, setting forth good and substantial reasons for the application, but it has failed to apply for such extension in terms of the said rule, nor did it apply for condonation in terms of Rule 17.

FINDINGS OF THIS COURT

[11] The relevant provisions of Rule 30 of the Rules of this Court provide that:

"30. (1) The Appellant shall prepare the Record of Appeal in accordance with sub-rules (5) and (6) hereof and shall within two months of the date of noting of the Appeal lodge a copy thereof with the Registrar of the High Court for certification as correct.

30.
(4) Subject to Rule 16 (1), if an Appellant fails to note an Appeal or to submit or resubmit the Record of Certification within the time provided by this Rule, the Appeal shall be deemed to have been abandoned. [12] Rule 16 of the Rules of this Court provides as follows: "Rule 16 (1) The Judge President or any Judge of Appeal designated by him may on application extend any time prescribed by these rules: provided that the Judge President or such Judge of appeal may if he thinks fit refer the Application to the Court of Appeal for decision. Rule 16 (2) An Application for extension shall be supported by an Affidavit setting forth good and substantial reasons for the Application and where the Application is for leave to Appeal the Affidavit shall contain grounds of Appeal which prima facie show good cause for leave to be granted." "Rule 17 The Court of Appeal may on application and for sufficient cause shown, excuse any party from compliance with any of these Rules and may give such directions in matters of practice and procedure as it considers just and expedient." [14] These Rules are clear and unambiguous and set out the obligations of a party who is obliged to submit a Record of Appeal in the fashion set out in Rule 30 and to bring Applications as set out in Rules 16 and/or 17 above. [15] The relevant case law relating to the activities referred to above can be referred to as follows: In Dr. Sifiso Barrow v. Dr Priscilla Dlamini and the University of Swaziland (09/2014) [2015] SZSC09 (09/12/2015) the Court at 16 stated "It has repeatedly been held by this Court, almost ad nauseam, that as soon as a litigant or his Counsel becomes aware that compliance with the Rules will not be possible, it requires to be dealt with forthwith, without any delay." 
In Unitrans Swaziland Limited v Inyatsi Construction Limited, Civil Appeal Case 9 of 1996, the Court held at paragraph 19 that:

"The Courts have often held that whenever a prospective Appellant realizes that he has not complied with a Rule of Court, he should, apart from remedying his fault immediately, also apply for condonation without delay."

The same Court also referred, with approval, to Commissioner for Inland Revenue v Burger 1956 (A), in which Centlivres CJ said at 449-G that:

"…whenever an Appellant realizes that he has not complied with the Rule of Court he should, without delay, apply for condonation."

[16] As was said in Kombayi v Berkhout 1988 (1) ZLR 53 (S) at 56 by Korsah JA: "Although this Court is reluctant to visit the errors of a legal practitioner on his client, to whom no blame attaches, so as to deprive him of a re-hearing, error on the part of a legal practitioner is not by itself a sufficient reason for condonation of a delay in all cases. (As Steyn CJ observed in Saloojee & Anor NNO v Minister of Community Development 1952 (2) SA 135 (A) at 141C):

[17] In the present matter it is clear that:

(1) The Respondent has flagrantly disregarded the rules of this Court. No Application has been brought in terms of Rule 16 to the present time, let alone without delay, despite having been served with the application for Abandonment on the 2nd October 2018. It is therefore astonishing that the Appellant recognizes in its Answering Affidavit that it is entitled in terms of Rule 16 to bring an application but fails to do so. The Appellant knew that it was out of time but simply disregarded the provisions of the Rules.

(2) No full, detailed and accurate account of the causes of delay and the effect thereof has been put before the Court.
(3) The Appellant through its Counsel conceded that the Appellant knew that it was out of time and not in compliance with the provisions of Rules 30 and 31 and, despite that, no application to this Court was brought in terms of Rule 16.

(4) Accordingly the Appellant must dismally fail the test relating to the giving of detailed and acceptable reasons for delay and non-compliance with the Rules.

(5) Save for stating that four months has not elapsed since the Respondent noted its Appeal, all the Respondent states is that in terms of Rule 16 (1) an application for extension of time may be made and that Rule 16 (2) provides that same shall be supported by an affidavit setting forth good and substantial grounds for the application. Having said that, it fails to bring such an application before or at the hearing for the abandonment of the appeal.

[18] Under those circumstances this Court has not been persuaded that the Appeal is not deemed to have been abandoned in terms of Rule 30 (4). The Appeal is deemed to have been abandoned in terms of Rule 30 (4) and the Judgment of the court a quo is confirmed. Costs on the party and party scale are awarded to the Appellant.

J. M. CURRIE, ACTING JUSTICE OF APPEAL
S. J. K. MATSEBULA
J. M. MANZINI

For the Appellant: W. Maseko
For the Respondent: L. Magongo
American Spirituals Voice Competition
Dr. J LanYe' -- Creator/Organizer/Coordinator

AMERICAN NEGRO SPIRITUALS

No other music has shaped and influenced the very fabric of American life more than Spirituals. From their humble roots in the eighteenth century, this glorious music has evolved into American classical music, while being the catalyst for numerous other American secular and sacred music styles.

The early black settlers were indentured servants, free blacks and slaves who had come mainly from the west coast countries on the continent of Africa. In spite of unfathomable circumstances, slaves managed to keep their spirits up by creating music. At first they made vocal sounds with body movements which were called "cries and hollers," to express their yearning for freedom. Later, they used the Bible to learn to read, and sang Biblical stories, mainly from the Old Testament, to show how God would deliver them out of bondage. These valiant people described situations with wonderful haunting melodies, poignant harmonies and subtle rhythms. The music, sometimes called "Sorrow Songs," became better known as "Negro Spirituals." Slaves vocalized clever escape routes which were disguised in resplendent poetic texts laced with double and sometimes triple entendre. This helped launch the slave escape initiatives, routes later known as the "Underground Railroad."

In 1871, a small group of music students from a fledgling school for blacks formed a vocal ensemble to give concerts and raise money for the school. That school, today's Fisk University, is famous because those students toured the USA and parts of Europe giving exquisite concerts of European classical music and introducing audiences to Negro Spirituals. They were called "The Fisk Jubilee Singers."

In 1892, the Czech composer Antonín Dvořák heard the black singer/composer H. T. Burleigh sing Spirituals and said America should adopt these songs as its national music.
Here in the 21st century we continue to celebrate the music which helped destroy slavery. We hail the indigenous art songs which have helped define our country. These musical jewels must take their rightful place on the pedestal of American music.

Dr. J LanYe'
Originally Published: March 19, 2014 6 a.m. Alta Vista Garden Club President Kathleen Madeda and the Community Beautification Committee presented a check in the amount of $500 to Ted Ihrman, superintendent of the Pioneers' Home in Prescott. This donation will go towards the Secret Garden renovation fund at the home. Those in the photo, from left to right, are Dianne Murphy, Ann Krsiean, Sharon Yessen, Kathleen Madeda, Ted Ihrman, Carol Westfall, Julie Lessard, and Toni Ristich. For more information about the Alta Vista Garden Club, please visit www.altavistagardenclub.org.
A Peruvian Brazilian (Portuguese: peruano-brasileiro) is a Brazilian of full or partial Peruvian ancestry, or a Peruvian resident in Brazil. The country has approximately fifty thousand Peruvian Brazilians.
## image height

Use appropriate sign conventions for image distance and height. (a) A 2.20 cm high insect is 1.13 m from a 145 mm focal-length lens.

What is the image distance? (mm)
How high is the image? (cm)

If I am trying to find the image height, I did -0.166/1.13 = hi/0.022; I solved hi to be -0.15, but it is wrong. Anyone see where I went wrong?
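The thin-lens question above can be checked numerically. Here is a minimal sketch (the variable names are mine) using the thin-lens equation 1/f = 1/d_o + 1/d_i and the magnification relation h_i/h_o = -d_i/d_o:

```python
# Thin-lens check: a 2.20 cm tall object, 1.13 m from a 145 mm focal-length lens.
f = 0.145    # focal length, m
d_o = 1.13   # object distance, m
h_o = 0.022  # object height, m

# Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the image distance d_i
d_i = 1.0 / (1.0 / f - 1.0 / d_o)

# Magnification m = -d_i / d_o; image height h_i = m * h_o
m = -d_i / d_o
h_i = m * h_o

print(f"image distance: {d_i * 1000:.0f} mm")  # about 166 mm
print(f"magnification:  {m:.4f}")              # about -0.147
print(f"image height:   {h_i * 100:.2f} cm")   # about -0.32 cm (real, inverted image)
```

With these numbers the magnification comes out near -0.147, which suggests the asker's -0.15 is the magnification, not the image height; multiplying by the 2.20 cm object height gives an image height of roughly -0.32 cm.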
<!DOCTYPE html>
<title>Canvas tests - 2d.text.draw.kern.*</title>
<link rel="stylesheet" href="../frame.css">
<p><a href="index.html">[index]</a>
<h1><a href="index.2d.html">2d</a>.<a href="index.2d.text.html">text</a>.<a href="index.2d.text.draw.html">draw</a>.kern.*</h1>
<p>
<iframe width="200" height="220" src="framed.2d.text.draw.kern.consistent.html">(iframe fallback)</iframe><!-- -->
Q: Add "View Product" button below add to cart button in WooCommerce archives pages

Most of the articles on the internet are about how to remove or replace the "view product" or "read more" button. I couldn't find anything about having both buttons work together. I am interested in having both buttons working in parallel (at the same time). The first button displayed should be "View product" (opening on the same page), with "Add to Cart" underneath. At the moment, my store only displays the Add to cart button. I am using the Storefront theme (+ a custom child theme). Would anyone be so kind and tell me how to do this?

A: Use this custom function hooked to the woocommerce_after_shop_loop_item action hook to add your custom button linked to the product (except for variable and grouped product types):

add_action( 'woocommerce_after_shop_loop_item', 'add_a_custom_button', 5 );
function add_a_custom_button() {
    global $product;

    // Not for variable and grouped products, which don't have an "add to cart" button
    if ( $product->is_type('variable') || $product->is_type('grouped') )
        return;

    // Output the custom button linked to the product
    echo '<div style="margin-bottom:10px;">
        <a class="button custom-button" href="' . esc_attr( $product->get_permalink() ) . '">' . __('View product') . '</a>
    </div>';
}

Code goes in the functions.php file of your active child theme (or active theme).
Tested and still perfectly works on WooCommerce 3.7.x (with the latest Storefront theme).

Embedding your styles (related to author comments):

add_action( 'wp_head', 'custom_button_styles', 9999 );
function custom_button_styles() {
    if ( is_shop() || is_product_category() || is_product_tag() ):
    // The styles
    ?>
    <style>
    .button.custom-button {
        background-color: white !important;
        color: black !important;
        border: 2px solid #4CAF50 !important;
    }
    .button.custom-button:hover {
        background-color: black !important;
        color: white !important;
        border: 2px solid black !important;
    }
    </style>
    <?php
    endif;
}

Code goes in the functions.php file of your active child theme (or theme), or in any plugin file. Tested and works.
## **Contents**

Author's Note
Introduction
Chapter One The Battle for Survival
Chapter Two The Invention of Childhood (Or Why It Hurts to Have a Baby)
Chapter Three Learning Machines
Chapter Four Tangled Webs—the Moral Primate
Chapter Five The Everywhere Ape
Chapter Six Cousin Creatures
Chapter Seven Beauties in the Beast
Chapter Eight The Voice Inside Your Head
Epilogue: The Next Human
Acknowledgments
Notes
Bibliography
Footnotes
A Note on the Author
By the Same Author
Plate Section

For Cyn. My compass and my Gibraltar.

## **Author's Note**

Despite its academic-sounding name, a good deal of brawling often goes on within the field of paleoanthropology. That it explores the deep past and counts on bits of ossified bone grudgingly revealed or scraped out of the earth doesn't help the inexactness of the science, or the disagreements it generates. Although all researchers in the field work hard to bring the objectivity of the scientific method to their work, its nature involves a lot of guesswork. So while one scientist or group of scientists may think that the unearthed fossils of a particular creature demand that it be classified as a new species, others might feel just as strongly that it is simply a new example of a species that has already been discovered. Some scientists find good reason to have created the classification _Homo antecessor_, for example. Others, just as reputable, and just as thorough in their thinking, argue no such species ever existed. No one really knows. The evidence is too sparse and too random. We are making up these names as a convenient way of organizing the chaos of discovery over the past 180 years. It's not as though the creatures themselves went by the nomenclatures we have made up. Nor can we comprehend what we don't know. We can never say if we have discovered the fragmented evidence of 80 percent of our direct ancestors and cousin human species, or 1 percent.
Too often, being human, we may give the impression we understand more than we do, or that we have just about figured it all out. We haven't, as you will see.

One of the reasons this book is relevant is because the human family tree, or more precisely, our very limited view of it, has changed so much in just the past five years. Advances in genetics, innovations in radiocarbon dating, together with plain old scientific creativity and elbow grease have greatly improved our guesswork and helped flesh out the discoveries we have made. There would be no hope, for example, of having even the remotest idea that a wisdom tooth and the end of a pinkie finger found in a Siberian cave three years ago belonged to an entirely new species of human (scientists call them Denisovans) with whom we and Neanderthals may share a common ancestor. This paltry evidence even revealed we mated with them! Nor would we have learned that billions of humans (including, very possibly, you) have Neanderthal blood running in their veins. But we now know these astonishing things are true, even as they have turned assumptions once taken as gospel entirely on their heads.

Still, despite these advances and the exciting discoveries they have made possible, the illumination of our past is a little like trying to find a set of car keys in the Sahara with a flashlight. I bring this up now to clarify a point: we don't know exactly how many other human species have evolved over the past 7 million years—27 or 2700. We likely never will. But I have tried to arrive at an arguable and acceptable number that makes the larger point that, despite the disagreements that take place within the field, the story of how we came to be is a good deal more intriguing and complicated than we thought even a few years ago. And that makes the story even better.
## Introduction

Over the past 180 years we have so far managed to stumble across, unearth, and otherwise bring to light evidence that twenty-seven separate human (_hominin_, to use the up-to-date scientific term) species have evolved on planet Earth. As you may have noticed, twenty-six of them are now no longer with us, done in by their environments, predators, disease, or the unfortunate shortcomings of their DNA. The lone survivor is a peculiar, and peculiarly successful, upright walking primate that calls itself, a little self-importantly, _Homo sapiens sapiens_, the wise, wise one. In most circles, we simply call them you and me.

Of all the varieties of humans who have come and struggled and wandered and evolved, why are we the only one still standing? Couldn't more than one version have survived and coexisted with us in a world as big as ours? Lions and tigers, panthers and mountain lions, coexist. Gorillas, orangutans, bonobos, and chimpanzees do as well (if barely). Two kinds of elephants and multiple versions of dolphins, finches, sharks, bears, and beetles inhabit the planet. Yet only one kind of human. Why?

Most of us believe that we alone survived because we never had any company in the first place. According to this thinking, we evolved serially, from a single procession of gifted ancestors, each replacing the previous model once evolution had gotten around to getting it right. And so we moved step by step (Aristotle called this the "Great Chain of Being"), improving from the primal and incompetent to the modern and perfectly honed. Given that view, it would be impossible for us to have any contemporaries. Who else could have existed, except our direct, and extinguished, antecedents? And where else could it all lead, except to us, the final, perfect result?

This turns out to be entirely wrong.
Of the twenty-seven human species that have so far been discovered (and we are likely yet to discover far more), a considerable number of them lived side by side. They competed, sometimes they may have mated, more than once one variety likely did others in either by murdering them outright or simply outcompeting them for limited resources. We are still scrounging and scraping for the answers, but learning more all the time. If we hope to place our arrival on the scene in any sort of perspective, it's a good idea to remember that every species on Earth, and every species that has ever lived on Earth (by some estimates thirty billion of them), enjoyed a long and checkered past. Each came from somewhere quite different from where it ended up, usually by a circuitous, and startling, route. It's difficult to imagine, for example, that the blue whales that now swim the world's oceans, great leviathan submarines that they are, were once furry, hoofed animals that roamed the plains south of the Himalayas fifty-three million years ago. Or that chickens and ostriches are the improbable descendants of dinosaurs. Or that horses were once small-brained little mammals not much taller than your average cat with a long tail to match. And the Pekinese lapdogs that grace the couches of so many homes around the world can trace their beginnings to the lithe and lethal gray wolves of northern Eurasia. The point is, behind every living thing lies a captivating tale of how the forces of nature and chance transformed them, step by genetic step, into the creatures they are today. We are no exception. You and I have also come to the present by a circuitous and startling route, and once we were quite different from the way we are now. Theories about our ancestry have been amended often because new discoveries about how we came into existence keep emerging; several times, in fact, while this book was being written. 
But however it played out in the details, we know this: for every variety of human that has come and gone, including those we think we have identified as our direct predecessors, it has been a punishing seven million years. Survival has always been a full-time job, and the slipperiest of goals. (It still is for most humans on the planet. More than four billion people—nearly two thirds of the human race—subsist each day on less than two dollars.) But luckily, for you and me at least, while evolution's turbulent dance rendered the last line of non–_Homo sapiens_ DNA obsolete eleven thousand years ago, it allowed ours to continue until finally, of all the many human species that had once existed, we found ourselves the last ape standing. Not that we should rejoice at the demise of those others. We owe a lot to the fellow humans that came before us—hairier, taller, shorter, angrier, clumsier, faster, stronger, dumber, tougher—because every one of us is the happy recipient of all the more successful traits that our ancestors acquired in their brawl to keep themselves up and running. If today we were to make the face-to-face acquaintance of an _Australopithecus afarensis_ or _Homo ergaster_ or _Paranthropus robustus_, what would we see? Intelligence, fear, and curiosity is my guess, for starters. And they would see the same in us because we truly are kindred spirits. This has ensured that many of the deft genetic strategies that made those now departed human species once possible still remain encoded in the DNA that you and I have hauled with us out of the womb and blithely carry around each day into our personal worlds. The millions of these creatures who came and strove and passed through the incomprehensibly long epochs between their time on earth and the here and now are, after all, us, or at least some of them were. We are a marvelous and intriguing amalgamation of those seven million years of evolutionary experimentation and tomfoolery.
If not for the hard planks of human behavior those others long ago laid down, we would be naught. So you can thank the lines of primates that in eons past found their way into Africa's savannas, then to Arabia and the steppes of Asia, the mountain forests of Europe, and the damp archipelagoes of the Pacific, for genetic innovations like your big toe, your ample brain, language, music, and opposable thumbs, not to mention a good deal of your personal likes and dislikes, fascinations, sexual proclivities, desires, temper, charm and good looks. Human love, greed, heroics, envy, and violence all trace the threads of their origins back to the deoxyribonucleic acid of the humans who came before us. Some might wonder what sense it makes to rummage through the leavings of the past seven million years to try to piece together the story of our peculiar emergence. The payoff is that it's the best way to understand why we do the startling, astonishing, sometimes sublime and sometimes horrible things we do. We owe it to ourselves to unravel the riddles of our evolution because we, more than any other animal, _can_. If we don't, we stand no chance of comprehending who we really are as individuals or as a species. And only by understanding can we hope to solve the problems we create. To not understand how we came into this universe damns us to remain mystified by our mistakes, and unable to build a future that is not simply human, but also humane. This in itself, however, fails to answer the nagging question of why our particular branch in the human family managed to find its way to the present when so many others were shown the evolutionary door. Plenty of other human species had a good run; many considerably longer than ours. Some were bigger, some were stronger and faster, some even had heftier brains, yet none of those traits was good enough to see them through to the present. 
What events, what forces, twists, and evolutionary legerdemain made creatures like you and me and the other seven billion of us who currently walk the earth possible? Somehow only we, against all odds, and despite the brutal, capricious ways of nature, managed to survive... so far. Why? Our story begins once upon a time, a long time ago...

Chip Walter
Pittsburgh, Pennsylvania
June 2012

## [**Chapter One The Battle for Survival**](ch00_fm06_toc.html#ch00_fm06_toc)

_DNA neither cares nor knows. DNA just is. And we dance to its music_. —Richard Dawkins

The universe houses, by our best count, one hundred billion galaxies. In one sector, a not very remarkable galaxy in the shape of a Frisbee with a bulge at the center spins pinwheel style through the immense void. One hundred billion suns reside in the Milky Way, each annihilating—at varying levels of violence—uncounted trillions of hydrogen molecules into helium. Along the edge of this disk, where the star clusters begin to thin out, sits the sun that we wake up to every morning. By some cosmic calculus that science has yet to decipher, the planet we call home came to rest at just the right distance away from that star, and with just the right makeup of atmosphere, gravity, and chemistry to have made an immense variety of living things possible. Our universe has been around roughly fifteen billion years; our sun, perhaps six; Earth has made four billion circuits around its mother star, and the life upon it has been ardently evolving during 3.8 billion of those journeys, give or take a few hundred million years. For the vast majority of that time anything alive on Earth was no larger than a single cell. Had we been around to see this life, we would have missed it since it is invisible to the naked eye. But of course, if it hadn't come first, we, and everything else alive on Earth, would have been out of the question.
No matter how hard we try, we cannot begin to fathom the changes, iterations, and alterations our planet has undergone since it first came to rest, collided and molten, in its current orbit. Our minds aren't built to handle numbers that large or experiences that alien. And in this book we won't try. Instead we will focus on a hiccup in that Brobdingnagian history, but a crucial hiccup, especially from your particular point of view—the past seven million years. Because that is when the first humans came into existence. Compared with the other smaller, rocky planets in our solar system, Earth has always been especially capricious. During its lifetime it has been hot and molten, usually wet, sometimes cold, at other times seared. Occasionally large sectors have been encased in ice; other epochs have seen it covered with gargantuan swamps and impenetrable rain forests inhabited by insects larger than a Saint Bernard. Deserts have advanced and retreated like marauding armies, while whole seas and oceans have spilled this way and that. Its landmasses have a way of wandering around like pancakes on a buttered griddle, so that the global map we find familiar today was entirely different a billion years ago and has spent most of the time in between resolutely rearranging itself, often with interesting results for the creatures trying to survive its geologic schemes. Charles Darwin and Alfred Russel Wallace figured out by the mid-nineteenth century that these incessant alterations explain why Earth has spawned so many varieties of life. Random revisions in the DNA of living things, together with erratic modifications in environments, have a way of causing, over time, whole new and astonishing forms of life to emerge. 
Darwin, who after years of thought and hand-wringing finally got around to writing _On the Origin of Species_, called this "descent by means of natural selection," which was to say that creatures change randomly (by mechanisms unknown to him; he and everyone else in 1859 were ignorant of mutating genes or spiraling ladders of DNA) and either endure in their environment, or not. If the mutations that shift their traits help the organism to survive, he surmised, then it breeds more offspring and the species continues, passing the new traits on. If not—and this has been the case with 99.99 percent of all of the life that has ever evolved—then the life form is, as scientists like to say, "selected out." Darwin figured that different environments favored different mutations, and over time—incomprehensibly long periods of time—different organisms diverged like the branches of a tree, each shoot putting increasing distance between the others around it until, eventually, you find yourself with varieties of life emerging as different as a paramecium and Marilyn Monroe. So, though we may have started out with single prokaryotic cells and stromatolitic mats of algae nearly four thousand million years ago, eventually the world found itself brimming, in stages, with lungfish, slime mold, velociraptors, dodo birds, salmon and clown fish, dung beetles, ichthyosaurs, angler fish, army ants, bighorn sheep, and—after almost all of the time that ever was had passed—human beings—peculiarly complicated animals, with big brains, keen eyes, gregarious natures, nimble hands, and more self-aware than any other creature to have ever come down the evolutionary pike. Of the twenty-seven species of humans that we have so far found that once walked Earth, ours is the most favored line for the simple reason that, so far at least, we have avoided the genetic trash bin.
Given evolution's haphazard ways, we might just as well have ended up a blowhole-breathing water mammal, a round-eyed, nocturnal marsupial, or a sticky-tongued anteater obsessed with poking its wobbling proboscis into the nearest nest of formicidae. We could even have _become_ the formicidae for that matter. Or we might have become extinct. But as it turns out—and lucky for us—we emerged from the jungles of Africa, came to stand upright, clustered ourselves into close-knit packs, gave up our front paws for hands, grew thumbs, took up reformed, meat-eating diets, developed tools, and, in a remarkably short time—as evolutionary events go—rearranged the world right down to its molecules and right up to its climate. Today we are even manipulating the DNA that makes us possible in the first place—a case of evolution evolving new ways to evolve. (Think about that for a moment.) We did not pop out of the jungles of Africa in our current form like Athena from the head of Zeus, big-brained, tool-laden, and ready for modern life. We came in stages, part of a vast and jumbled experiment largely driven by our home planet's fickle ways. Between six and seven million years ago Africa's rain forests began to shrink slowly. Earth was a different place from today, but not radically so. If there were time-traveling, Earth-orbiting satellites to provide a global view of our planet then, it would look pretty much the same as the one we see on the Weather Channel today. India would be in place mostly, though still slowly plowing into Asia, creating the Himalayas. Australia would be roughly located where we find it now. The Mediterranean would be a touch larger. Partially submerged, the boot of Italy would not look quite so bootlike, and the Bosporus strait along with sectors of the Middle East would be inundated, though soon enough the closure of the strait at Gibraltar would transform the Mediterranean into a vast plain of salt flats, marshes, and brackish lakes.
These geologic alterations were unfolding because the planet was warming up, thinning its ice caps and making land scarcer and Earth more watery. Ironically, the world was becoming much like the one scientists now speculate global warming is creating. In looking back at our origins, it seems we are catching a glimpse of our future. Climate, however, is complex. Weather systems veer and fluctuate. Tectonic plates beneath the Indian Ocean were shifting and sloshing whole seas. As the planet generally warmed, some parts of the world became wetter, and more tropical, while others grew drier. Among these were northeast and north-central Africa, where grasslands were gradually transforming themselves into desert, and rain forests were breaking up into semiwooded savannas. Here, a new kind of primate was evolving, probably several. Primates that were not, purely speaking, any longer creatures of the jungle. Scientists peg the emergence of Earth's first human about seven million years ago largely because around that time the fossil evidence, sparse as it is, points to a primate splitting from the last common ancestor we shared with today's chimpanzees. There is no precise method for fixing dates of these kinds. Paleoanthropology, with its reliance on the chance discovery of ancient bones and the sediments in which they lie, is replete with perplexity and, as sciences go, is far from exact. In fact, the likelihood of any ancient bone even becoming a fossil is so vanishingly small that it's just short of miraculous any discoveries are made at all. If you hoped, for example, that some part of you might be discovered fully fossilized in the distant future, there would be no chance of its happening unless you dropped dead in a layer of soft sediment that takes an impression of your body, or into some place lacking the oxygen that so enthusiastically decomposes every molecule of us when we bite the dust. A peat bog or shallow, muddy river would be a good place. 
From there you would have to hope that the tectonic shenanigans of the planet, lashings of wind and water and climate, the shifting courses of rivers, or the encroachments of deserts or glaciers wouldn't toss or shuttle your bones from their resting place to some location less hospitable to your preservation. Assuming they didn't, then at least some of the solid parts of your remains would have to be replaced molecule by molecule with other dissolved solids that leave behind a stone replica of your formerly carbon-made skeleton. Then finally, if all of this happens just so, you must count on the wind or rain or the instinct of an exceedingly lucky paleoanthropologist to reveal what is left of you to him or to her. The chances of your being preserved in this way are, by some estimates, one in a billion. The likelihood of this small part of you then actually being found is so small, it can't accurately be calculated. Add to this that many of our earliest ancestors met their fate in forests or jungles where decomposition happens rapidly and without leaving a trace, and you can see why the fossil record we rely upon to unlock our origins is not only tiny, but serendipitously skewed. At best we have been left with random clues that provide only the sketchiest images of the deep past. In fact, whole lines of primeval relatives were almost certainly long ago obliterated and now lie beyond discovery. We do have tools other than fossils that can help divulge our ancestry. The science of genetics is still fledgling, but it provides ways to explore the past by providing a kind of clock that allows scientists to estimate when certain branches of our family tree made off in different directions. (See the sidebar "Genetic Time Machines," page 76.) Yet the best genetic evidence is currently so foggy that it places the time we and chimpanzees shared a common ancestor somewhere between four and seven million years ago, rather a loose estimate. 
So neither the fossil record nor genetic science can provide anything very detailed about the precise time of our emergence. Still, we have to start somewhere. It sometimes shocks people to learn that at least twenty-six other human species once lived on earth. It further shocks them that many of them lived side by side. The point is there was not, as we often think, an orderly march of ape-men that led from chimp to you and me. One reason science has tentatively settled on seven million years as the birth date of the human species is that the oldest fossil that might reasonably lay claim to being human was found in Chad at various times between July 2001 and March 2002 (he was unearthed piecemeal). His discoverer, a student named Ahounta Djimdoumalbaye, called him _Sahelanthropus tchadensis_—Sahel man, after the part of Africa south of the Sahara where he was found. Not much remained of this particular primate—a skull, four lower-jaw fragments, and a few teeth, but because the fossils indicated his head was positioned much like ours is, in line with his torso rather than at a forty-five-degree angle like a knuckle-walking gorilla, some paleoanthropologists speculate he (or she) walked upright. They see this as a reason to consider him (or her) an early human. All we know for certain is that _tchadensis_ was either one of the last ancestors humans shared with other great forest apes or was one of the first humans to have evolved. Or _tchadensis_ might be an evolutionary dead end. The best we can say is, the bones left behind were found in sediments that tell us _tchadensis_ walked the earth about seven million years ago, and so that is where we shall begin. When compared with the billions of years it has taken to make a universe or its suns and planets, seven million years may appear minute, but to those of us who aren't stars, comets, oceans or mountain ranges it remains a very, very long time.
We are used to measuring time in hours and days, months and years, perhaps generations when forced to push the envelope. Epochs and eons bend the mind and are as incomprehensible as light-year-measured galactic distances or quantum calculations computed in qubits. To help wrap our minds around these numbers, imagine that we could squeeze the seven million years that have passed between the arrival of _Sahelanthropus tchadensis_ and the present into a single year's calendar, and then plot the arrival—and in some cases the departure—of every known human species from January to December. Let's call this the Human Evolutionary Calendar or HEC. If we look at it this way, _tchadensis_ arrives January 1. Lucy, the famous upright walking member of a line of savanna apes known as _Australopithecus afarensis_, who lived about 3.3 million years ago, appears July 15. Neanderthals don't show up until near Thanksgiving, November 19, and we _Homo sapiens sapiens_ finally reveal ourselves near the winter solstice, December 21, a little more than a week before the end of the year. Looking at this timeline, you can't help but conclude the human species seems to have gotten off to a slow start, at least based on the current sketchy evidence. Following _tchadensis_ nothing at all happens for more than a million years, then a creature researchers call _Orrorin tugenensis_ (Millennium Man) finally appears just before the spring equinox—on March 8. Like _tchadensis_, _tugenensis_ didn't leave much for us to inspect—two jaw fragments and three molars. Later finds turned up a right arm bone and a small piece of thigh—altogether enough information for paleontologists to conclude that _Orrorin_ was almost certainly human, and lived about 5.65 to 6.2 million years ago, mostly in wet grasslands and fairly thick forests that eventually became the Tugen Hills of modern Kenya. Thus the name _tugenensis_.
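The calendar compression behind the HEC is plain proportional arithmetic, and readers who want to place other fossils on it can sketch it in a few lines. The snippet below is my own illustration, not the author's method; the function name and the choice of a 365-day year are assumptions, and the computed dates land within a few days of the figures given in the text:

```python
from datetime import date, timedelta

SPAN_MY = 7.0  # the book compresses seven million years into one calendar year

def hec_date(mya):
    """Map a date in 'millions of years ago' onto the Human Evolutionary
    Calendar: 7.0 Mya lands on January 1, the present on December 31."""
    days_elapsed = (SPAN_MY - mya) / SPAN_MY * 364  # 364 steps span a 365-day year
    return date(2001, 1, 1) + timedelta(days=round(days_elapsed))  # 2001 is not a leap year

# Sahelanthropus tchadensis, ~7 Mya
print(hec_date(7.0).strftime("%B %d"))  # January 01
# Lucy (Australopithecus afarensis), ~3.3 Mya -> mid-July,
# within a few days of the book's July 15
print(hec_date(3.3).strftime("%B %d"))
```

Plugging in 0.8 Mya yields November 19, the book's date for the Neanderthals' debut, which suggests the author pegged their appearance at roughly 800,000 years ago.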
Whether he walked upright all the time or even part of the time is debated, but if he spent his days between grasslands and jungle, he may have done a bit of both, walking on all fours in the forest and upright now and again in and among the trees and grasslands he called home. As we move into spring, not one but three new and indisputably human species arrive. On March 18 two emerge from the mists of time: _Ardipithecus ramidus_ and _Ardipithecus kadabba_; then on May 20, _Australopithecus anamensis_. These were all distinct species, yet all three bear a stronger resemblance to today's chimpanzees than to us, and all three probably walked upright sometimes, and at other times on all fours. By summer in the HEC, signs emerge that the human experiment was gathering momentum. Multiple species begin to appear and overlap. Recalling their names is a little like trying to follow the characters in a Russian novel, but bear with me. (We can thank the brilliant zoologist Carl Linnaeus for the long and respected tradition of assigning elongated, Latin names to all living things.) On July 4, _Kenyanthropus platyops_ arrives; ten days later, _Australopithecus afarensis_ (Lucy); then in August, _Paranthropus aethiopicus_ and _Australopithecus garhi_; and in mid-October, _Paranthropus robustus_ (sometimes known as _Paranthropus crassidens_) joins the ranks of humans that have walked the planet. These creatures, each of whom found their way in and out of time and the plains and forests of Africa, arose and departed subject to the cantankerous whims of evolution. When we compress time this way, it's easy to forget that some of these species lived for hundreds of thousands of years.
All of them were intelligent, with brains that ranged from the size of today's chimps, 350 cubic centimeters (cc), to as large as 500 cc, still a quarter to a third the size of the brain modern humans carry around, but enormous and enormously complex when compared with those of most other mammals. Something strange and intriguing was afoot across Africa's sprawling lands. Like an Olympian god, the continent's changing climate was forcing the emergence of multiple kinds of humans, all of them descended from jungle primates similar to those that still live in Africa's rain forests today (albeit in ever-dwindling numbers). In time the selective pressure that different environments exerted coupled with random genetic changes resulted in new varieties of humans that emerged all over the continent. _Aethiopicus_ arose along the banks of Lake Turkana in Kenya and the Omo River basin of Ethiopia. Lucy and her kind roamed as far north as the Gulf of Aden and as far south as the ancient volcanoes of modern Tanzania, while _Australopithecus africanus_ lived thousands of miles south, not far from Johannesburg, South Africa. A later addition to the human family, dubbed _Australopithecus sediba_, was recently discovered in South Africa as well. The partial skeletons of a young boy and an adult female, who lived between 1.78 and 1.95 million years ago (between mid and late October), were scraped from the dust. Depending on where they lived, all of these species dealt with surroundings that ranged from densely wooded and fairly wet to dry, open grassland. As Africa's jungles retreated toward the center of the continent, troops of apes must have been left scattered over hundreds of thousands of square miles to adapt or die. They had no tools, only the randomly provided equipment their genes conferred upon them, all better fit for life in the jungle than the environment they now faced.
Where once the rain forest provided them with ready supplies of fruits and berries that delivered plenty of energy and nutrients, they now found themselves dealing with savannas where less food was spread out over larger areas, inhabited by growing numbers of predators each exceedingly focused on making meals of them. Life was, in the immortal words of Thomas Hobbes, "poor, nasty, brutish, and short." Everything was more dangerous, and staying alive demanded more energy, mobility, toughness, and cunning. Wherever they lived and however they survived, all the hominin primates that emerged during the summer in the Human Evolutionary Calendar were players in a grand, African experiment now three million years in the making. The world was testing them, harshly, and the forces of evolution were remorselessly molding them into a new kind of ape. While the random forces of evolution endowed each with different genetic attributes that helped them all survive, every one of them seems to have developed one predominant trait: for the first time in the incomprehensibly long story of evolution on earth, species had emerged that walked upright on their hind legs. Because we do this so effortlessly every day, it may escape you how exceptionally strange this mode of transportation was four million years ago among living mammals, or any other animal for that matter. But strange it was. Yet precisely because it was so peculiar, it set in motion a string of evolutionary events that eventually enabled you and me to come into existence. We are so steeped in technology, so used to controlling our environment, that we forget that the only way the vast majority of living things can hope to survive in a changing world is to come by the right genetic mutation at just the perfect time, something that happens entirely by accident. Serendipity is the enemy and the ally of every species. 
It might provide you with the claws needed to bring down prey or the speed required to escape another's claws. Or it might not, in which case you are doomed to be "selected out," unfit for your new habitat and relegated to the genetic landfill. For living creatures of all kinds, and that included our ancestors living during the balmy summer months of the HEC, there are no evolutionary shortcuts, no quick technological fixes, no ways to take charge and change the rules of the game with an invention. But sometimes you get lucky. If you stand back and look at the sprawling landscape of life's evolution on Earth, it's easier to pick out big trends, and that can help clarify a mystery or two. For example, when similar creatures find themselves in similar situations, they sometimes develop nearly identical traits, but by entirely separate evolutionary paths. Take seals, dolphins, and whales. All of them were former land mammals, but each developed fins. They didn't, and couldn't, inherit these traits from one another because they were distinct species that evolved independently. But because living in water seems to favor creatures that grow fins of some kind, each shares this trait. Scientists call this convergent evolution. Something like this seems to have happened with several lines of savanna apes beginning around four million years ago. While they all descended from jungle cousins who walked on all fours, many forsook knuckle-walking. And it makes evolutionary sense that they did. In the jungle, food is never far away: plenty of low-hanging fruit. Wild gorillas, for example, travel only about a third of a mile on average every day, sometimes only a few hundred feet. Everything they could ask for is close by. On the savanna, though, life was profoundly different. Beneath the hot equatorial sun, temperatures would often have risen into triple digits (Fahrenheit). Food was scarce and rarely at hand.
So while walking upright in the thick underbrush of a tropical rain forest would have done nothing to improve your chances of living a longer life in the jungle—in fact, it might shorten it—perambulating on your hind legs in open grasslands provided several advantages. You gained better visual command of your world, which is useful if you are on the daily menu of ancient jackals, hyenas, and the lion-size, saber-toothed cats called megantereons. Traveling on two feet is also more efficient than scrambling along on four. Studies reveal that knuckle-walking chimpanzees burn up to 35 percent more energy than we humans do as we stroll blithely down the street. Making your way around the broad, hot grasslands of the Pliocene epoch on your knuckles and hind legs looking for food, watching out for predators and taking care of your young, would have been slow, tiring, and ultimately deadly. That is presumably why upright walking became the preferred form of navigation for all savanna apes no matter what locality they called home. Those that failed to come by this trait were wiped out. Precisely how ancient humans such as Lucy, _aethiopicus_, and _Australopithecus africanus_ eventually pulled off the physical tricks necessary to stand upright remains a mystery, but they did, and one reason they did is thanks to a genetic trait common to all apes—a big toe. For some time zoologists have known that early in gestation the big toe of gorillas, chimps, and bonobos is not bent, thumblike, but is straight, similar to ours. But as they continue their development the big toe departs from the other four so that by the time they are born it has become thumblike, making it easy for their feet to grasp, stand on, or hang from limbs. But what if one of the descendants of a jungle ape found itself living in sparse forests and open savannas? And what if one of those apes was born with a big toe that never became thumblike, but instead remained straight, a freak genetic deformity?
Deformities, autoimmune diseases, even mental illnesses are often the result of genetic mutations. Somehow a gene is reshuffled, a hormone misfires, or a genetic switch is delayed. Even DNA makes mistakes. In fact, evolution depends upon it. We humans are sometimes born with an extra digit, webbed fingers, shortened legs. But one creature's deformity can become another's salvation. Every living thing on Earth is, one way or another, an amalgamation of genetic gaffes. Imagine, then, that some primates were born "deformed" with a straighter toe preserved from their time in the womb rather than a more opposable one like all other normal primates. What sort of life could such a creature look forward to? In the jungle, not a promising one. Unable to effortlessly grasp tree branches like his fellow apes, he would struggle mightily to keep up with the troop, die swiftly, and his genetic predilection for straight toes would bite the dust with him. But in a partially wooded savanna, where the grasslands were expansive and the forest broken and less dense, an ape with a straight big toe would be lucky indeed. That deformity would enable him to stand and walk upright. Without a straight big toe our current brand of walking would be impossible. With every step we take, our big toes support 30 percent of our weight, and they make the upright running, jumping, and rapid shifts in direction we excel at possible. Other complicated anatomical changes had to have taken place before our style of two-footed walking came into existence, and eons passed before those modifications were completed, but the transformation arguably began with a straight big toe. Such an odd foot would have made any savanna ape better at standing upright and able to walk for longer periods on his hind legs. In time he would have found himself endowed with a birth defect that would eventually prove a lifesaver.
Moving in this odd, vertical way didn't mean that ancient humans entirely forsook swinging in trees or walking on their knuckles. Lucy, who provided paleoanthropologists with one of the most complete skeletons of an early ancestor, appears to have been a hybrid, clearly capable of walking on her knuckles if she felt like it, and outfitted with shoulders and arms that were nicely adapted to swinging through and climbing trees. Yet, the architecture of her pelvis, the tilt of her head, and the shape of her foot tell us that upright walking was her preferred way of getting around; so preferred that her footsteps in wet mud or sand would have looked almost exactly like yours or mine.

_Australopithecus afarensis_

Walking upright was one evolutionary trait our predecessors shared as they marched resolutely to the present, but not the only one. Another was in play that would also have enormous ramifications: their brains were growing larger. Not immensely larger, but the difference is measurable. While a chimp's brain is about 350 cc, these grassland primates' brains ran from 450 cc to 500 cc, roughly a 30 to 45 percent increase. The big question is why. The traditional scientific answer to this question is that a bigger brain is a better brain, so evolution's forces tended to favor smarter animals. That is true enough, but it still doesn't explain the mechanism that was causing the growth. What was forcing the issue? Why were larger brains evolving at all? Strange as it may seem, starvation might be the answer. When an animal is having a chronically difficult time filling its belly, something intriguing happens in its body at a molecular level. Aging slows down, and cells don't die as quickly as they do when food is available. Contrary to what you might think, a cell's health in this situation doesn't deteriorate. It improves. The body, sensing deprivation, seems to call all hands on deck, husbands its energy, and prepares for the worst.
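The chimp-versus-grassland-primate comparison above is simple arithmetic and can be checked directly. A minimal sketch using only the cubic-centimeter figures given in the text (the function name and structure are mine, added for illustration):

```python
# Percentage increase in brain volume relative to a chimp baseline.
# Figures from the text: chimp ~350 cc; early grassland primates 450-500 cc.

def percent_increase(baseline_cc: float, new_cc: float) -> float:
    """Return the percentage growth from baseline_cc to new_cc."""
    return (new_cc - baseline_cc) / baseline_cc * 100

CHIMP_CC = 350
low = percent_increase(CHIMP_CC, 450)   # the smaller grassland brains
high = percent_increase(CHIMP_CC, 500)  # the larger grassland brains
print(f"450 cc is a {low:.0f}% increase; 500 cc is a {high:.0f}% increase")
```

Running the numbers puts the increase at about 29 percent on the low end and 43 percent on the high end.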
In a sense each cell grows tougher and more wary. This is thanks largely to a class of proteins called sirtuins, which some scientists suspect reduce the rate of cell growth. Numerous studies show that reducing the normal diets of creatures as different as fruit flies, mice, rats, and dogs by 35 to 40 percent will increase life span as much as 30 percent. (Scientists can't ethically perform these sorts of experiments on humans, but all indications are that the same holds true for us.) When food is scarce, fertility also drops and animals mate less frequently, an additional way of slowing down the cycle of life. While the deprivation makes life horrible for the creatures enduring it, from an evolutionary point of view it carries with it the quality of pure genius. Nutritional penury not only extends the life of an animal, but fewer offspring improve the chances of the whole species remaining in the evolutionary sweepstakes. Fewer offspring also place less stress on already overburdened food resources. The whole process of living decides, it seems, to bide its time until the storm passes. Cell growth on every level slows, with one key and remarkable exception: brain–cell growth increases. There the cells last longer, _and_ they begin to make new versions of themselves faster, or at least the neurotrophins generated by the hypothalamus, which are the precursors of new brain cells, do. Not only that, other experiments show that food deprivation increases an appetite–stimulating peptide called ghrelin, which enables synapses to transform themselves by some molecular magic into cortical neurons. You could say the body and the brain strike a bargain. To compensate for the aggressive growth of new neurons, the rest of the anatomy fasts, stretching scarce nutritional resources that it then redirects to the brain. Or put another way, the body slows down aging and accelerates intelligence.
This means that 3.5 million years ago, by the time Lucy and her contemporaries were desperately scavenging at the margins of an unpredictable land, the chronic deprivation they were facing was accelerating the growth of their brains. So our ancestors had two assets going for them. Upright walking made them more mobile and efficient, able to cover more ground and better equipped to evade the predators that were evolving along with them. Their larger brains meanwhile made them more capable of adapting to dangerous situations on the fly, more adept scavengers, and better at collaborating successfully with one another. All good in these strange and dangerous environments. That they survived despite their desperate circumstances proves that the combination of the two adaptations was succeeding. But there was now a new challenge: the two trends were on a collision course and bound, in time, to make survival impossible. Something had to give.

## **Chapter Two The Invention of Childhood (Or Why It Hurts to Have a Baby)**

_My mother groaned, my father wept, into the dangerous world I leapt_. —William Blake

_The human birth canal inlet is larger transversely than it is anteroposteriorly (front to back) because bipedal efficiency favors a shorter anteroposterior distance between a line that passes through both hip joints and the sacrum... This size relationship, along with a twisted birth canal shape, makes human parturition mechanically difficult_. —Robert G. Franciscus, "When Did the Modern Pattern of Childbirth Arise?," _Proceedings of the National Academy of Sciences_

Two and a half million years ago, around the end of August in the Human Evolutionary Calendar, primates like _Kenyanthropus platyops, Australopithecus afarensis_, and _Australopithecus africanus_ begin to disappear from the fossil record. They may not actually have disappeared, but the evidence of them does.
Either way, their evolutionary run was apparently nearing an end. However, with their disappearance a new, rich wave of human species began to crest and break on Africa's broad and windy plains. In the space of one million years, nine new varieties of humans emerged. Step back and look at the aggregated remains scientists have labored to pick out of the hills, valleys, and ancient lake beds of Africa like so many needles from an incomprehensibly huge haystack, and these species give you the impression that the savanna apes that had been struggling so long to survive were finally getting the hang of living in their new environment, fanning out in more directions, deepening the peculiar evolutionary experiment we call humanity. And it _was_ an experiment, make no mistake, because not all branches of the human family were evolving along the same lines. More precisely, species were striding down two distinctly different roads—one that included smaller, slimmer, so–called gracile apes, and another that embraced bigger, thicker humans with large jaws and teeth, known in the world of paleoanthropology as robust apes. Each approach had its advantages. But in the long run, only one would succeed. The members of the robust branch of the human family first showed their simian faces in late August among the flooded grasslands along the Omo River in southern Ethiopia and the western shores of Lake Turkana in northern Kenya. Scientists call this specimen _Paranthropus aethiopicus_, and he is perplexing because he combines so many contradictory characteristics. His bone structure seems to say that he more often than not walked on all fours among the elephants, saber–toothed cats, and hyenas with which he passed his days. Yet he lived in wet, open grasslands munching on tubers and roots with his big, flat teeth and ample jaws, rather than in wooded areas where you might think knuckle–walking would make more sense.
Despite his chimplike anatomy and relatively small brain (no more than 450 cc in adulthood), he may have been the first to pull off the astounding trick of fashioning the first stone tools, preceding even the famous feats of "Handyman" (_Homo habilis_), who followed him and is generally considered the inventor of the first Paleolithic technology. (Scientists are debating anew who should get credit for this remarkable advance.) Whatever _aethiopicus_ accomplished, more like him were to follow. Later in the calendar year—the middle of October—two other _Paranthropus_ species, _boisei_ and _robustus_ (also known as _crassidens_ in the ever–changing argot of paleoanthropology), arrived, also generously jawed, large headed, and big of tooth, like _aethiopicus_. _Paranthropus_ humans represent an evolutionary "strategy" that modified the behaviors of jungle apes, but didn't leap dangerously far from them. Of the two routes down which evolution was walking earth's humans, this was the safer, more conservative one. Like their predecessors in the rain forests, troops of robust apes roamed from location to location, gathering what food they could find in the thinning forests, bush, and grasslands where they lived. Because of the sorts of foods they ate, _Paranthropus_ possessed heads that sported thick, sagittal crests like the ones you see on the silver–backed gorillas at your local zoo, though they were more chimp–size than gorilla–size. The crests are a stout, jagged line of bone that runs from the top of the forehead to the back of the neck like the metal rim of a medieval helmet. Anchored to these were thick ropes of muscle that ran to their massive jaws and dense necks so that the broad, square rows of teeth in their mouths could crush the cement–hard shells of the nuts they consumed, pulverize bark and seedy berries, crunch the exoskeletons of large insects, or masticate the bones of an unfortunate small animal they might have been lucky enough to snatch up.
Beneath these crests the brains of _boisei_ and _crassidens_ had expanded roughly a third during the four million years that had passed since the first human emerged from Africa's rain forests. They were undoubtedly resourceful and even more socially bonded than the apes from which they had descended, mostly thanks to the menace that surrounded them. Danger breeds reliance and cooperation. Day–to–day living would have been unimaginably harsh: a life of slow migration, eating to gather the strength to move forward and moving forward to gather more food to eat. Despite its hardships, however, this was by no means an unsuccessful evolutionary path. By current accounts, _boisei_ roamed the plains of Africa for a million years, foraging the foods at hand and getting along, if not famously, then well enough. If we measure success by how long species survive, we _Homo sapiens_ amount to little more than rookies still wet behind the ears. We have been in the game of life a scant two hundred thousand years. _Boisei_ held sway on the Horn of Africa five times as long before exiting the gene pool. If we become this lucky, we will someday be dating our letters July 12, 802013. The other path plotted by the combinations of genes, environment, and random chance was the one taken by members of a branch of the human family paleoanthropologists like to call gracile. This includes _Australopithecus garhi_, a creature who, along with _aethiopicus_, made his debut on the Horn of Africa about 2.5 to 3 million years ago. There is some slim evidence that _garhi_ may also have fashioned simple stone tools, but as in the case of _aethiopicus_ that's a controversial and unresolved theory. At best, _garhi_ may have used crude stone hammers to break open bones to get at the marrow inside, or sharp flint to scrape and hack meat away from a bone left behind by larger predators. But even these uses of rock mark a colossal technological advance.
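The longevity comparison in this passage can be verified with a few lines of arithmetic. A small sketch, using the text's figures and assuming the letter-dating quip is counted from a present-day baseline of 2013 (an assumption; the text gives only the end date):

```python
# Boisei's tenure versus Homo sapiens' tenure so far, per the text,
# and the calendar year in which we would match boisei's record run.
BOISEI_RUN_YEARS = 1_000_000   # boisei roamed Africa for a million years
SAPIENS_SO_FAR = 200_000       # our species is about 200,000 years old
BASELINE_YEAR = 2013           # assumed present-day baseline

ratio = BOISEI_RUN_YEARS // SAPIENS_SO_FAR
matching_year = BASELINE_YEAR + (BOISEI_RUN_YEARS - SAPIENS_SO_FAR)
print(ratio)          # boisei lasted five times as long as we have so far
print(matching_year)  # the year we would equal boisei's run
```

The result matches the passage: a factor of five, and a matching date in the year 802013.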
About 1.9 million years ago another gracile human, dubbed _Homo rudolfensis_, appeared along the shores of Lake Rudolph, now known as Lake Turkana, a long body of water that runs in the shape of an index finger from southern Ethiopia into the western heart of Kenya. _Homo habilis_ and _Homo ergaster_ soon followed, both slim and light–boned, both also passing their time in East Africa. In 1991 scientists scrounging among rocks near Dmanisi, Georgia, west of the Caspian Sea, unearthed the remains of still another species of gracile human from this epoch—_Homo georgicus_. While he remained simian in his looks, his face was flatter, a step closer to ours. Like _Homo habilis, georgicus_ was a lean toolmaker, but with a considerably more advanced case of wanderlust. He lived in a river valley more than twenty-five hundred miles north of the grasslands where _Homo habilis_ passed his days. He may be an indicator that other species, so far unknown, also strode beyond the borderlands of the Dark Continent, settling who knows where, still awaiting discovery. Although all of these species came upon the world clustered, like a posse, information about the majority of them is sketchy. Drawing any conclusions about them is a little like drawing conclusions about a long–lost family relative who headed off to the merchant marine or the French Foreign Legion. The best we have in most cases is a few battered bones that offer scant insights into the creatures' lifestyles or appearance. _Georgicus_, for example, has seen fit to provide us with three skulls—one with jaws attached, one with a solitary jawbone, and one missing its jaws altogether. Nor did he leave anything much in the way of teeth, let alone whole limbs or vertebrae. _Homo rudolfensis_ has bequeathed a similarly ungenerous array of jaws and skulls, and a scattering of other fragments that may not belong to the species.
What we know of _ergaster_ (the Workman) is based on a bundle of six or so skulls, jawbones, and a few teeth, several of which don't much resemble one another, creating some lively academic brawls about exactly which species is which. The stinginess of these creatures makes them mysterious, even among our ancestors, humans who have steadfastly held the cards of their pasts close to their primeval vests. Of all these slender primates, however, one has been a little less secretive—_Homo habilis_, otherwise famously known as Handyman, long thought to be our direct ancestor and the first toolmaking primate. We have been able to infer a little more about the life of _habilis_ only because we have been lucky enough to have stumbled across more parts of his body than his other contemporaries—several skulls, a hand bone complete with fingers, and multiple leg and foot bones that can't conclusively be connected with the skulls, but at least provide some clues about the creature's size and gait. Together the evidence tells us that _habilis_, though slight in stature, walked upright all the time and possessed considerably larger brains than the first ancient humans, as spacious as 950 cc, depending on which skull you inspect. The shapes of their heads and jaws indicate that unlike their robust cousins, they didn't care much for nuts, bark, and berries, but had developed an appetite for meat, and the protein it provided, which may account for their larger brains. (See sidebar "Big Guts vs. Big Brains," p. 21.) Nor did they sport great sagittal crests, or huge, square teeth made for grinding. Their teeth were better at tearing. Chances are they hunted small game in packs, not unlike the way chimpanzees sometimes do. And they helped themselves to savanna carrion and whatever other more adept and deadly predators left behind in the way of their prey's remains.

_Big Guts vs. Big Brains_

Cows, as we all learned in grade school, have four stomachs.
They do because it requires a lot of work to extract enough nutrients from grass to transform it into beef and milk. The same was true of our early savanna–roaming ancestors, at least some of them. Subsisting on a diet of nuts, roots, thistles, berries, and other plants required long intestines and strong stomachs if they hoped to squeeze enough nutrients from them to stay alive. As the climate changed in Africa and the savannas became broader and drier, the old jungle ways of gathering low–hanging fruit from nearby trees and not moving very far from day to day simply didn't work. Fruit and foliage became increasingly rare, and these humans had to cover more distance to gather it, which required still more energy. Ultimately that was not a sustainable survival strategy. But if you could get your hands on some meat! Then you were instantly rewarded with much more nutritional bang for your hunting–and–gathering buck. That is precisely what the gracile lines of savanna humans did. But this choice paid an additional, unexpected dividend. A diet of meat of any kind (even dining on termites and small rodents) made larger brains possible, and less cowlike intestinal tracts necessary. This is something paleoanthropologist Leslie Aiello dubbed the Expensive Tissue Hypothesis when she first came up with the idea in the early 1990s. What this meant, and what fossil finds reveal, is that as our ancestors began to consume more meat, their bodies could redirect the energy those complex intestinal tracts demanded to the business of constructing larger brains. It was a close question two million years ago which approach might work best. Both experiments were tried, and for hundreds of thousands of years both worked. Ultimately, though, larger brains turned out to be a more effective survival tool than longer intestines, something the fossil record bears out.
While australopithecines and the robust members of the human family were relatively small brained, often not much more cerebrally endowed than a chimpanzee, _Homo ergaster_'s brain size ballooned to 900 cc or so. After a run of more than one million years, the last of the robust humans finally made their exit 1.2 million years ago. This evolutionary path had other ramifications as well. We aren't as strong as our primate cousins—chimps, gorillas, orangutans—for example. We seem to have exchanged brawn for brains. Richard Wrangham has argued that mastering fire and cooking made meat and other foods of all kinds easier to digest, increasing the protein we could consume and reducing the need for longer intestinal tracts even further. In time bigger brains delivered better weapons, and more strategic ways of hunting. And that likely led to bigger game, more meat, more protein, more cerebral horsepower. The result? Over the past two million years, the brains of the gracile line of humans nearly doubled their size. The scattered fossils of both of these sides of the human family tell us that evolution was putting a series of unstated questions on the table 1.5 million years ago: Which approach is best? Gracile or robust? A steady diet of tubers, nuts, and berries? Or a sparse, starvation diet of scavenged carrion along with whatever else could be scraped from nature's table? A serviceable brain with a cast–iron stomach, or a great brain with a simpler, less sturdy digestive system? If you were a betting primate, you couldn't be blamed for putting your money on the robust branch of the family. At the time, they looked to be winning the battle.
They were strong and durable and had adapted the jungle ways of their antecedents to the savanna exceedingly well, meandering through flooded grasslands and clusters of forests, sometimes upright, sometimes on all fours, consuming, if not jungle fruit like their more gorilla–like ancestors, then the next–closest foods that give the term _high fiber_ a whole new meaning. Their stomachs had to be large and their intestines long to digest these foods, and their eating would itself have required liberal funds of energy. In some ways they were consuming so they would have the energy to consume. According to one theory, this explains why their brains did not grow as large and as fast as their gracile cousins'. Hardworking stomachs can't afford to redirect energy to cerebral growth. But their arrested development may have saved them and been the secret to the success of their million–year run. Gracile apes on the other hand looked less likely to succeed. They were smarter—given their increased brain size they had to be—but their diet was unpredictable. They used less energy because they walked upright all the time, but they had to make do with whatever else their smaller, less sturdy stomachs could handle. The robust approach was stable. The gracile approach was risky. Sometimes, however, risk pays off. A high–stakes wager placed on _Paranthropus_ would have paid nothing, yet against all odds, several underdog gracile species remained in the hunt. Good news for us because it was from one of these lines that you and I descended. Still, trouble loomed. Just as it appeared gracile apes were succeeding, finally brainy and efficient enough to outfox the rough treatment their savanna environment was dishing out, the self–same adaptations that were saving them—an upright gait and bigger brains—were also aligning to become the agents of their doom. 
Striding on two legs efficiently—not waddling the way a chimp or gorilla does when it walks upright—requires, among other adaptations, a fundamental rearrangement of pelvic architecture. An upright stride narrows the hips, and for females, narrowing the hips narrows the birth canal, and a slimmer birth canal makes for increasingly snug trips for newborns out of the womb. Despite the many advantages that upright walking delivered, it creates problems when one is simultaneously evolving bigger brains and larger heads, which was precisely what our gracile ancestors were up to. Yet, since both adaptations were working, what could be done? Each was an evolutionary blessing, yet both were on a collision course. Something would have to give. Lucky for us, the forces of evolution worked out an exceedingly clever solution: gracile humans began to bring their children into the world early. We know this because you and I, being extreme versions of gracile apes, are the living, breathing proof. If you, for example, were to be born as physically mature and as ready to take on the world as a gorilla newborn, you would have to spend not nine months in the womb, but twenty, and that would clearly be unacceptable to your mother. Or, looked at from a gorilla's point of view, we humans are born eleven months "premature." We do not reach full term, which makes us fetal apes. Of course if we didn't make our departure from the womb ahead of schedule, we wouldn't be born at all because our heads, after nearly two years in the womb, would be far too large to make an exit. We would be, literally, unbearable. It's impossible to overstate the colossal impact this turn of events had on our evolution, but it requires some context to fully appreciate what it means.
Our habit of being born early is part of a larger, stranger phenomenon that scientists call _neoteny_, a term that covers a lot of evolutionary sins at the same time it explains so much of what makes us the unique, even bizarre creatures we are. The dictionary defines _neoteny_ as "the retention of juvenile features in the adult animal." The term comes from two Greek words, _neos_, meaning "new" (in the sense of "juvenile"), and _teinein_, meaning to "extend." In our case it meant that our ancestors, rather remarkably, passed along to us a way to stretch youth farther into life. The question is, why, and how, did it happen? When faced with resolute obstacles, evolution—always in the service of survival—has a marvelous way of selecting astonishingly diverse solutions cooked up entirely by random chance. This is how the planet has found itself with the unearthly–looking aye–aye of Madagascar, Borneo's clownish proboscis monkey, the squashed and unappetizing blobfish of Tasmania, and the rapier–nosed narwhals of the arctic seas. It also helps explain the bizarre mating rituals of porcupines, and male anglerfish, not to mention the torturous eating habits of ichneumon wasps. Each of these creatures is a living testament to the marvelous, if accidental, creativity natural selection conjures, again and again. But as remarkable as these evolutionary banks and turns have been, neoteny can count itself as one of the strangest, and we _Homo sapiens_ are by far the most dramatic and extreme example. The term _neoteny_ was coined by Julius Kollmann—a groundbreaking German embryologist and a contemporary of Charles Darwin's. Kollmann had nothing like human beings in mind when he created the term.
He conceived it to describe the retention of larval features in the Mexican axolotl ( _Ambystoma mexicanum_ ), and other species of salamanders like the mud puppy ( _Necturus maculosus_ ) and the olm ( _Proteus_ ), all of which refuse in their lives to fully grow up and out of their larval stage, even in their adulthood. They mature normally and sexually, but all within the body of their youth. This would be a little bit like a two–year–old boy behaving in every way like a fully grown, sexually mature twenty-five–year–old. In humans, neoteny isn't quite that pronounced (probably a good thing), but it is nevertheless remarkable, and remarkably odd, if you are willing to circle around and look at it fresh. The idea of neoteny predates even Darwin and was explored as far back as 1836, when Étienne Geoffroy Saint–Hilaire, a French scientific prodigy and compatriot of Napoléon's, first pointed out how astonishing it was that the young orangutans that had recently arrived from Asia at the Paris zoo resembled "the childlike and gracious features of man." In the twentieth century a handful of other scientists and evolutionary thinkers adopted Kollmann's term and Geoffroy's sentiments when they began applying the idea of neoteny to humans, observing that infant apes bore a striking resemblance to adult humans especially in the shapes of their faces and heads. Naturally this raised a few questions: Was this simply a coincidence? Why would we resemble baby apes? And did this have anything to do with our own evolution? A professor of anatomy in Amsterdam named Louis Bolk became nearly obsessed with those questions. Between 1915 and 1929 he penned six detailed scientific papers and one entire pamphlet on the subject with the ambitious title _Das Problem der Menschwerdung_ ("On the Problem of Anthropogenesis"). 
He argued that a surprisingly high number of human physical traits "have all one feature in common, they are fetal conditions [seen in apes] that have become permanent [in adult humans]." In one paper Bolk even enumerated twenty-five specific fetal or juvenile features that disappear in apes as they grow to adulthood, but persist in humans right up to death. The flatter faces and high foreheads that we and infant chimps share, for example. Our lack of body hair compared with chimpanzees and gorillas (fetal apes have little body hair). The form of our ears, the absence of large brow ridges over our eyes, a skull that sits facing forward on our necks, a straight rather than thumblike big toe, and the large size of our heads compared with the rest of our bodies. The list is long and Bolk's observations were absolutely accurate. You can find every one of these traits in fetal, infant, or toddling apes, and all modern human adults. No less than evolutionary biologist Stephen Jay Gould agreed with Bolk in his own landmark book, _Ontogeny and Phylogeny_ (though he didn't agree with the elder scientist's reasons for coming to those conclusions, which were tainted with racism and convoluted views of evolution). Gould called our peculiar brand of neoteny one of the most important twists in all the turns that human evolution has taken. Given its dictionary definition, you might think that neoteny is simply a matter of a species holding on to as many youthful traits of an ancestor as long into adulthood as possible (a little like Joan Rivers or Cher). But it's not that simple. Undeniably, in some ways we are childlike versions of our pongid ancestors, but in others our maturity is accelerated, rather than stunted. For example, while our faces and heads may not change as radically as an ape's as we enter adulthood, our bodies still continue to grow and change. We don't retain the three–foot stature of a two–year–old toddler.
In fact at an average (worldwide) male height of five feet nine inches, give or take a few centimeters, we are among the largest gracile apes to have ever evolved. Nor is our sexual maturity slowed, though it is delayed compared with other human species (including Neanderthals, as we will see soon). And our brain development is anything but arrested. In fact, just the opposite. As I said, complicated. The different ways some parts of us seem to accelerate and mature while others bide their time or halt altogether has generated a flock of terms related to _neoteny—paedomorphosis, heterochrony, progenesis, hypermorphosis_ , and _recapitulation_. The debate is ongoing about what exactly _neoteny_ and the rest of all of these labels truly mean. In the end, however, it comes down to this—each represents an evolution of evolution itself, an exceptional and rare combination of adaptations that changed our ancestors so fundamentally that it led to an ape (us) capable of changing the very planet that brought it into existence. Put another way, it changed everything. Mostly we think of Darwin's "descent by natural selection" as a chance transformation of newly arrived mutations—usually physical—into an asset rather than a liability, which is then passed along to the next generation. So paws become fins in mammals that have taken to the sea. The spindly arms of certain dinosaurs evolve into the wings of today's birds. The ballasting bladders of ancient fish become the predecessors of land animals' lungs. All of that is true. But what neoteny (and paedomorphosis and all the rest) illustrate is that the forces of evolution don't simply play with physical attributes, they play with time, too, or more accurately they can shift the times when genes are expressed and hormones flow, which not only alters looks but behavior, with fascinating results. Evolution manages this by not affecting solely _what_ traits it reveals, but _when_ it reveals them. 
It moves abilities, physical features, and behaviors forward or backward, or stops them altogether by altering the expression of genes that affect developmental hormones. It plays with time like a boulevard–game master plays a shell game with walnut shells and peas. So in us, our big toe remains straight throughout our lives rather than crooking thumblike before birth as it does for chimps and gorillas. We remain relatively hairless, like fetal apes. Our jaws stay square and our foreheads flat throughout our lives rather than sloping backward as we leave our early years behind. And instead of decelerating brain growth after birth like orangutans, chimps, and gorillas, the genes that control the number and interconnections of our neurons act as though we are still in the womb, and the neurons continue to fervently multiply. Put another way, after birth, processes that were once _pre_natal in our ancestors become _post_natal in us. By being born "early," our youth is amplified and elongated, and it continues to stretch out across our lives into the extended childhood that makes us so different from the other primates that preceded us. We see it in the fossil record. Almost without exception, the dusty bones scientists have unearthed and fitted together reveal that the faces of gracile primates such as _habilis, rudolfensis_, and _ergaster_, while still plenty simian, grew step by step to increasingly resemble us. Their snouts were flattening, their foreheads were growing higher and less sloped, their chins stronger. Features that once existed only in fetal forest apes, like straight big toes and heads that rested upright on shoulders, now not only existed in youth but also persisted into adulthood. Exactly how all of this unfolded on the wild and sprawling plains of Africa isn't clear precisely, but there can be no doubt that it did. We stand as the indisputable proof. All of the evidence emphatically points to our direct, gracile ape ancestors steadily extending their youth.
They were inventing childhood. But most important, to us at least, in the inventing they were becoming more adept at avoiding extinction's sharp and remorseless scythe. And the main reason that was happening was because the childhood that was evolving enabled the development of a remarkably flexible brain. That is where the grand story of our evolution made an extraordinary turn. The clustered neurons that together compose the brains of all primates grow at a rate before birth that even the most objective laboratory researcher could only call exuberant, maybe even scary. Within a month of gestation primate brain cells are blooming by the thousands per _second_. But for most species that growth slows markedly after birth. The brain of a monkey fetus, for example, arrives on its birthday with 70 percent of its cerebral development already behind it, and the remaining 30 percent is finished off in the next six months. A chimpanzee completes all of its brain growth within twelve months of birth. You and I, however, came into the world with a brain that weighed a mere 23 percent of what it would become in adulthood. Over the first three years of your life it tripled in size, continued to grow for three more years until age six, underwent massive rewiring again in adolescence, and finally completed most, but not all, of its development by the time you reached your second decade (assuming that as you read this you _have_ reached your second decade). Being born so "young," you might conclude our brains arrive comparatively underdeveloped at birth, but that is not the case. Despite our early arrival we still come into the world bigheaded, even compared with our more mature cousin primates. At birth the brains of apes constitute 9 percent of their total body weight, hefty by the standards of most mammals. 
We, however, weigh in at a strapping 12 percent, which makes our brain 1.33 times larger than an infant ape's, relatively speaking, despite our abbreviated sojourn in the womb. In other words, even arriving in our early, fetal state, with less than a quarter of our brain development under our belts, we are still born with remarkably large brains. Keep in mind that this approach to brain development is so extraordinarily strange and rare that it is unique in nature. And dangerous. If an engineer were planning the optimum size of a brain at birth, it would clearly be illogical to bring newborns into the world this cerebrally incomplete. Too fragile, and too likely to fail. Far more practical to do all the work in the safety of a mother's body. But evolution doesn't plan. It simply modifies randomly and moves forward. And in this case, remember, remaining in the womb full term was out of the question. For us it was be born early, or don't be born. As much as we might like to know the answer, exactly when it became necessary for our ancestors to exit the birth canal "younger" is frankly impossible to say. Since we _Homo sapiens_ are the only human species still walking the planet since Africa's retreating jungles orphaned the rain-forest apes that preceded us, and since the skeletal remains of those who came before us are rare and difficult to decipher, we simply haven't yet gathered enough clues to know precisely when an early birth became unavoidable. There are, however, a few theories. Some scientists believe earlier births would have begun when the adult brain of some predecessor or another reached 850 cc. Anthropologist Robert D. Martin calls this the "cerebral Rubicon," a line that, once crossed, would have required that some sort of longer, human-style childhood become part of that creature's life. If that's true, that narrows the candidates to those human species living between 1.8 and 2 million years ago—species like _Homo rudolfensis_ or _Homo ergaster_. 
Until recently scientists felt _Homo habilis_ (Handyman) was the best candidate, but new evidence has caused some realignment of the human family tree. For decades the common wisdom had it that we descended from _Homo habilis_ by way of _Homo erectus_, which in turn evolved into what paleoanthropologists call "anatomically modern humans" (AMH), our kind. But new fossil finds now indicate that _erectus_ and _habilis_ were East African contemporaries for nearly a half million years, making it rather difficult for one to have descended from the other. Furthermore, _ergaster_ and _rudolfensis_, which were often tossed in with _Homo erectus_, are now more often considered to be their own separate species. This means that in the ever-shifting drama (and nomenclature) of human evolution, Handyman now represents an evolutionary dead end and _Homo erectus_ may turn out to be not one species, but many, with only one particular representative leading directly to us, if that. Whatever the case, around this time, when humans began to grow adult brains about three-quarters the size that ours are today, the offspring of upright-walking humans may have been forced to arrive prematurely as the fit between head and pelvis grew increasingly tight. Who, the question then becomes, were the people from whom we directly descended, and where can we suppose they lived? Some history might be in order. Thirty-five million years ago the northeast corner of Africa was being carried on the back of a tectonic plate determined to make its way eastward toward Asia, while the rest of the continent was steadfastly refusing to go along. One consequence of this dogged parting of the ways was the emergence of the Arabian Sea and peninsula (with, coincidentally, all its oil beneath it). Another was the formation of a long and immense lake in East Africa made possible by the three substantial rivers that drained into it from the surrounding mountains. 
Two million years ago, the evidence of this great African rift, and the lake it created, was still all around. The ruptured land had left behind dozens of volcanoes, smoldering ominously and erupting unpredictably. One even rose defiantly out of the great lake itself, a disdainful sentinel that stood unfazed by the storms that howled when the seasons changed or the dust devils that spun along its flanks in the hot summer months. If you check a map of Africa today, you will notice the slender imprint of this lake we now call Turkana (formerly known as Lake Rudolf). It is still vast, a long, liquid gem that lies on the breast of East Africa, most of it in northern Kenya with just its upper nose nudging the highlands of southern Ethiopia. Today Lake Turkana is not as hospitable as it was earlier in its life. The rivers that once drained it are gone, so evaporation is the only exit for Turkana's waters. That has turned it a splendid jade color and made it the world's largest alkaline lake. These days the land that surrounds it is mostly dry, harsh, and remote. However, 1.8 million years ago it was an exceedingly fine place to set up housekeeping. Life of every kind thrived along Turkana's shores in the early days of the Pleistocene epoch, despite the occasional ferocity of the weather and the ominous belching of its volcanoes. Crocodiles bathed in its warm waters; _Deinotherium_, an ancient version of the elephant, and both black and white rhinoceroses grazed among the grasslands. Hyenas yelped and hooted, scavenging what they could and hunting flamingos that fed in the shallows, while the grandcousins of lions, tigers, and panthers harvested dinner from herds of an early, three-toed horse called _Hipparion_. The lake, the streams and the rivers that fed it, and the variability of the weather made the area a kind of smorgasbord of biomes—grasslands, desert, verdant shorelines, clusters of forest and thick scrub. 
The bones of the extinct beasts that lie by the millions in the layers of volcanic ash beyond the shores of Lake Turkana today attest to its ancient popularity. The existence of a habitat this lush and hospitable wasn't lost on our ancestors any more than it was on the elephants, tigers, and antelope that roamed its valleys. In fact it was so well liked that _Homo ergaster_, _Homo habilis_, and _Homo rudolfensis_ were all ranging among its eastern and northern shores 1.8 million years ago, sharing the benefits of the basin with their robust cousin _Paranthropus boisei_. As many as a million years earlier, _Paranthropus aethiopicus_ came and went along the northwestern fringe of the lake, and half a million years before that the flat-faced one, _Kenyanthropus platyops_, braved Turkana's winds and watched its volcanoes rumble and spew.

_Homo ergaster_

Despite decades of sweltering work, paleoanthropologists have yet to categorically determine which of these humans who trod the shores of Turkana led directly to us, but it is possible to make an informed guess, at least based on the limited evidence scientists have to work with. We already know _Homo habilis_ is out of the question, an evolutionary dead end unrelated to _Homo erectus_. _Homo rudolfensis_ is also unlikely because he bears such a strong resemblance to _Paranthropus boisei_ and his robust ancestors. He may have been a bridge species of some sort. _Boisei_ himself would seem not to qualify given that he wasn't gracile (we are) and possessed the smallest brain of the group, the largest jaws, and the most apelike features. That leaves _Homo ergaster_, "the worker" (_ergaster_ derives from the Greek word ἐργαστήρ, meaning "workman"), formerly considered an example of _Homo erectus_. Truthfully, _ergaster_ wouldn't seem to be a promising candidate for a direct ancestor either, except for one remarkable fossil find that has been, after some heated debate, assigned to the _ergaster_ line. 
In the scientific literature he is known as Turkana (or sometimes Nariokotome) Boy because Kamoya Kimeu, a paleoanthropologist who was working at the time with Richard Leakey, came across him on the western shore of Lake Turkana. His discovery first stunned his fellow anthropologists and then the world with the completeness of what he had found. In a scientific field where scraping up a tooth or a jaw fragment, or a wrecked piece of tibia, can be cause for wild jubilation, Kimeu and his colleagues uncovered not only a skull, but a rib cage, a complete backbone, pelvis, and legs, right down to the ankles. There, in the brittle detritus of the Dark Continent, lay the nearly complete remains of a boy who had lived 1.5 million years ago and died in the swamps of the lake somewhere between the ages of seven and fifteen. It was nothing short of remarkable. You may have noticed the wide range of the boy's possible age. There's a reason for that. Despite being among the most studied fossils in the annals of paleoanthropology, scientists cannot seem to universally agree on the age of their owner, a mystery that brings us back to the issue of long and lengthening childhoods. The boy's age is elusive because we have only two living examples of primates that we can use as benchmarks to determine his age when he died—forest apes and us. But Turkana Boy is neither. With an adult brain that would likely have been about 880 cc, he falls almost midway between the two extremes. Take away half the mass of his brain, and it would be about the size of a chimpanzee's. Add the same amount and he would be within the range of most modern humans. When scientists first inspected the boy's fossilized teeth, they immediately realized he was, in fact, a boy because several of them had not yet entirely arrived. In his lower jaw a few permanent incisors, canines, and molars had formed, but not all of them were fully grown. 
In his upper jaw he still had his baby, or milk, canines and no third molar. If a dentist were looking into a mouth like that today, she would conclude she was dealing with an eleven–year–old. But if the mouth belonged to a chimpanzee, seven would be a better guess. Teeth represent one type of clue scientists use to help estimate the age of a skeleton (or more precisely, the skeleton's former owner) when he died. Another is growth plates. Long bones like those in our arms and legs don't fuse permanently with the joints attached to them until they are fully grown. The state of growth plates reliably predicts age. Turkana Boy's long leg bones were still growing and had not yet fused, particularly at the hips, although one of his shoulder and elbow joints was fusing. Given the state of his growth plates, researchers concluded the boy could have been as young as eleven or as old as fifteen the day he met his untimely end, _if_ he was human. Or a mere seven if he was a chimpanzee. A final feature that helps determine age is height. Nariokotome Boy's thighbone is seventeen inches long, which would have made him roughly five feet three inches tall, about the size of an average fifteen–year–old _Homo sapiens_ , or a full–grown chimpanzee. Compared with other fossil primates, australopithecines or even his Turkana contemporaries like _Homo habilis_ and _rudolfensis_ , for example, Nariokotome was tall, and depending on his exact age, he might have grown considerably taller, had he survived. So how old _was_ the "working" boy? Viewed from either end of the spectrum, none of the clues about his age have made much sense to the teams of scientists who have labored over them. Each was out of sync with the other. Some life events were happening too soon, some too late, none strictly adhering to the growth schedules of either modern humans or forest apes. 
Still, the skeleton's desynchronized features strongly suggested that the relatives of this denizen of Lake Turkana were almost certainly being born "younger," elongating their childhoods and postponing their adolescence. Apes may be adolescents at age seven and humans at age eleven, but this creature fell somewhere in between. If the Rubicon theory is correct, and an adult brain of 850 cc marked the time when newborns begin to struggle to successfully make their journey through the birth canal, _ergaster_ children were likely already coming into the world earlier than the rain–forest primates that preceded them five million years earlier. On the other hand, Turkana Boy was not being born as "young" as we are. His large brain, as large as any other in the human world at that time, and his slim hips, optimized for upright walking and running, reinforce the evidence. He must have been born "premature" or he wouldn't have been born at all. But if he was being born earlier, how much earlier? Suppose the brain of a fully grown Turkana Boy was 60 percent the size of our brain today. (We have to suppose because we have no adult _ergaster_ cranium to consult.) And let's assume _ergaster_ children would have come into the world after fourteen months of gestation, approximately 30 percent sooner than a chimp. This isn't as drastically different as the eleven–month disparity between other primates and us modern humans, but it would have represented the beginning of a significant human childhood, and it would have begun to upend the daily lives of our ancestors in almost every way. Why? First, there would have been more death in a world where, unfortunately, death was no stranger. Many "early borns" would have died after birth, unable, unlike today's chimps and gorillas, to quickly fend for themselves. 
Because gorilla and chimp newborns are more physically mature than human newborns, they often help pull themselves out of the birth canal and quickly crawl into their mother's arms or up onto her back. It's unlikely that _ergaster_ infants were capable of this. Of all primates, human newborns are by far the most helpless. When we arrive, we are utterly incapable of walking or crawling. We can't see well or even hold our heads up. Without immediate and almost constant care, we would certainly die within a day or two. Though these "preemies" were not likely as defenseless at birth as we are, they would have been far less physically developed than their jungle or even early savanna predecessors. But even if the newborns didn't die in childbirth, their mothers might have, their narrow pelvises unable to handle what scientists call the expanding "encephalization quotients" of their babies. To compensate, _ergaster_ newborns may have begun to turn in the birth canal so that they were born faceup, a revolutionary event in human birth. Unlike other primates, our upright posture makes it necessary for babies to rotate like a screw so they emerge faceup. If they came out with their faces looking at their mother's rump as chimp and gorilla infants do, their backs would snap during birth. The job of bringing a child into the world would not only have become more complicated, but imagine life for the mothers of these offspring, assuming both survived the ordeal of birth. They were already living a precarious existence in a menacing world—open grasslands or at best thick brush with occasional clusters of forest. Predators such as striped hyenas and the scythe-toothed _Homotherium_ had appetites and needs, too. There was no such thing as a campfire to keep predators at bay. Fire had yet to be mastered. 
When night fell, it was black and total with nothing more than the puny illumination provided by the long spine of the Milky Way, a fickle moon, or an occasional wildfire in the distance sparked suddenly and inexplicably by lightning or an ill-tempered volcano. And the big cats of the savanna like to hunt when the sun has set. Not only were these new human infants more helpless than ever, but their neurons were proliferating outside the womb at the same white-hot rate they once did inside. Rapidly growing brains demand serious nutrition. Studies show that children five and younger use 40 to 85 percent of their resting metabolic rate to maintain their brains. Adults, by comparison, use 16 to 25 percent. Even for _ergaster_ children, a lack of food in the first few years of life would often have led to premature death. Nariokotome Boy might have been undernourished himself. His ancient teeth reveal he was suffering from an abscess. His immune system may not have been strong enough to defeat the infection, and with no antibiotics to be had, scientists theorize, blood poisoning abbreviated his life. He was probably not the first among his kind to die this way. In every way, early borns would have made life on the savanna more difficult, more dangerous, and more unpredictable for their parents and other members of the troop. So why should evolution opt for larger brains and earlier births? And how did it manage to make a success of it? Difficult question to answer. Looking back on the scarce orts of information science has so far gathered together, premature birth doesn't make an ounce of evolutionary sense. Not on the surface. Darwinian adaptations succeed for one reason—they help ensure the continuation of the species. That means if your kind misplaces the habit of living long enough to have sex successfully, extinction will swiftly follow. 
Since this is the ultimate fate of 99.9 percent of all life on earth, it is difficult to fathom how the mountains of challenge that early-arriving newborns heaped on the backs of their gracile ape parents could possibly help them successfully struggle to stay even a single step ahead of the grim reaper. It certainly wouldn't seem to make much sense to lengthen the time between birth and sex. Keeping that time as brief as possible has immense advantages, after all. It's a powerful way to maximize the number of newborns either by having large numbers of them at once or by having them often, or both. Dogs, for example, often enter the world in bundles of five or six at a time, are weaned by six weeks, and are ready to mate as early as six months. They aren't puppies long, and once they are done breast-feeding, they are soon prepared to fend for themselves. For mice the process is even more compressed. The result is that mothers bear more children with every birth, do it more often, and those offspring are quickly ready to mate and repeat the cycle. All of this accelerates the proliferation of the species _and_ improves its chances of survival. We humans, however, wait an average of nineteen years before bearing our first child. Why? If shortening the time between being born and bearing as many offspring as often as possible works so well for other mammals, for what reasons would evolution twist itself backward with Africa's struggling troops of savanna apes? Why bring increasingly defenseless infants into the world? Why expose their parents to greater danger to feed and protect them? Why insert this extra, unprecedented cycle of growth, this thing we call childhood, into a life—a time when we rely utterly on other adults to take care of us? And what advantage is there in taking nearly two decades to bring the first of the next generation into the fold? 
In his landmark book _Ontogeny and Phylogeny_, Stephen Jay Gould spends considerable time discussing two types of environments that drive different varieties of evolutionary selection. One he calls _r_ selection, which takes place in environments that provide plenty of space and food and little competition. A kind of animal Valhalla. The other is called _K_ selection, environments where space and resources are scarce and competition is nasty and formidable. _r_ selection (Gould points out many studies that back this up) encourages species to have plenty of offspring as quickly as possible (think rabbits, ants, or bacteria) to take advantage of the lavish resources at hand. But _K_-style environments require species to slow down, create fewer offspring, and take more time doing it because it reduces stress on the environment and the competition among those trying to survive in it. By random chance, evolution begins to favor producing fewer competitors within a species, since surplus offspring would only die off from lack of resources. By reducing death and lengthening life, in particular early life, _K_ selection also provides species extra time to develop in ways that make them more adaptable. In our case, as Gould put it, _K_ selection made us "an order of mammals distinguished by their propensity for repeated single births, intense parental care, long life spans, late maturation, and a high degree of socialization." Today you and I stand as the poster children for _K_-strategy evolution. Yet, while the simple fact that we are walking around today provides conspicuous proof that _K_ strategies can succeed, it still fails to explain _why_ they succeed. It is possible that they didn't, at least not all the time. Multiple species arguably walked down this Darwinian road and were snuffed out. 
Several—about whom we may never know a thing—were surely done in over time by the unrelenting pressures of protecting their helpless infants, braving their environment to get them more food, or becoming dinner themselves for some salivating savanna cat. Is this what wiped out _Australopithecus garhi_? Does this explain the demise of _Homo habilis_ or _rudolfensis_? So far the sparse, silent, and petrified clues that the fossil record has left us aren't parting with those secrets. They are stingy that way. We do know this: around a million years ago or so—early November in the Human Evolutionary Calendar—the robust primates had met their end, and so had many gracile species, but a handful continued and even flourished. Already some had departed Africa and had begun fanning out east to Asia and the far Pacific. The cerebral Rubicon had been crossed and there was no going back. This meant that evolution's forces had opted, in the case of our direct ancestors, for bigger and better brains rather than more sex and more offspring as a survival strategy. And, against all odds, it was working—a profound evolutionary shift. Over time, in the crucible of the hot African savanna, far away in time from the Eden of rain forests, an exchange was made—reproductive agility for mental agility. If bringing a child into the world "younger" was what it took, fine. If expending more time and energy on being a parent was necessary to ensure that a creature with a bigger, sharper brain would survive, then so be it. If evolving an entirely new phase of life that created the planet's first children was required, then it had to happen. The imponderable forces of evolution had made a bet that delivered not greater speed or ferocity, not greater endurance or strength, but greater intelligence, or put in flat Darwinian terms, greater adaptability. 
Because that is what larger, more complex brains deliver—a cerebral suppleness that makes it possible to adjust to circumstances on the fly, a reliance not so much on genes as on cleverness. It is strange to think that events could well have gone another way. Earth might today be a planet of seven continents and seven seas and not a single city. A place where bison and elephants and tigers roam unheeded and unharmed, and troops of bright, robust primates live throughout Africa, maybe even as far away as Europe and Asia, with not a single car or skyscraper or spaceship to be found. Not even fire or clothing. Who can say? But as it happened, childhood evolved, and despite some very long odds, our species found its way into existence.

## Chapter Three: Learning Machines

_It is easier to build strong children than to repair broken men_. —Frederick Douglass

_Boy, n.: a noise with dirt on it_. —Anonymous

_Give me the child until he is seven, and I will give you the man_. —Jesuit aphorism

Youth, the French writer François, Duc de La Rochefoucauld, once observed, "is a perpetual intoxication; it is a fever of the mind." Ralph Waldo Emerson was more blunt: "A child is a curly, dimpled lunatic." We have all witnessed a toddler or two in action (usually and most memorably our own), and it is a sight to see. The average two-year-old is thirty inches tall and twenty-eight pounds of pure, cerebral appetite determined without plan or guile to snatch up absolutely everything knowable from the world. She, or he, is, indisputably, the most ravenous, and most successful, learning machine yet devised in the universe. It may not seem so on the surface, but toddlers accomplish prodigious amounts of work (all cleverly disguised as play) as they bowl and bawl their way through each day. 
By throwing a ball (or food), playing in the mud, a pool, or a sandbox, by attempting headlong runs and taking sudden tumbles, or swinging on swings or off "monkey" bars, the young are fervently familiarizing themselves with Newton's laws of motion, Galileo's insights into gravity, and Archimedes' buoyancy principle, all without the burden of a single formula or mathematical term. When a toddler smiles, cries, grimaces, gurgles, giggles, spits, bites, or hits; when she breaks free of mom or dad for a wild dash down the sidewalk; throws whatever he can grab for the sheer joy of it; dances spontaneous jigs or engages in other diabolical antics—she is learning what is socially acceptable and what is not, what is scary, what works in the way of communication and what fails and when. Food, and much that isn't food, is tasted, licked, and baptized with slobber to investigate its texture, shape, and taste. Yet no artifice or logic is behind the tasting. It's just another form of exploration. Objects, living or not, are bounced, swatted, hugged, flailed, closely inspected, all in a fervent effort to comprehend their nature. Unbounded and unstoppable greed for knowledge is the best way to put it. Acquiring language is another big job in childhood. Babbling, squeals, and other noises are, as the best linguists have so far been able to ascertain, ways of figuring out the language that the other bigger, parental creatures speak around the toddlers whether it is Swahili, German, or Hindi. Later, early conversations are short, generally. "Here! Mama! Dadda! Mine! No! Please. Want!" Often, in times of acute frustration, communication is inarticulate, loud, and punctuated with acrobatic body language. In time, however, and amazingly, vocabularies grow, syntax improves, and full sentences are expressed, all with hardly an ounce of formal instruction. The acquisition of language is one of the great miracles in nature. At age one few children can say even a single word. 
At eighteen months they begin to learn one new word roughly every two hours they are awake. By age four they can hold remarkably insightful conversations, and by adolescence they have gathered tens of thousands of words into their vocabulary at a rate of ten to fifteen a day and often use them with lethal effect! And nearly every word was acquired simply by their listening to, and talking with, the people around them. Children do these apparently lunatic and astonishing things for a reason. Nature has wired their brains for survival by driving them to swallow the world up as fast as they possibly can. Pulling off this feat is easier said than done. However, if we hope to comprehend how cerebral connections this complex take place, it might first be useful to step back and consider why brains exist at all, and how we eventually came by the particular brand we have.

* * *

By general agreement the first brain in nature belonged to a creature scientists today call planaria, known more commonly to you and me as the lowly flatworm. Flatworms are metazoans and therefore wouldn't seem to be very brainy. But intelligence is a relative thing, and planaria, when they first emerged more than seven hundred million years ago, were the geniuses of their time, creatures of unparalleled intelligence blessed with an entirely new kind of sensory cell capable of extracting marvelously valuable bits of information from their environment. Unlike many of their contemporaries, planaria were unusually sensitive to light, possessed rudimentary sets of eyes, and responded to, rather than ignored, changes in temperature—all radical innovations in their time. Even today they remain expert at sensing food, and then making their way with uncanny determination to it, while other metazoans (corals, for example) generally take a more leisurely approach to their cuisine, waiting for it to find them rather than the other way around. 
_Planaria—the Einstein of the Day_

Among the cellular innovations that made an ancient flatworm's brain possible was a protoneuron called a ganglion cell. These are clustered in the head of the worm and then connected to twin nerve cords that run in parallel down the length of its body so that certain experiences sensed alongside it can be transmitted to the flatworm's brain for some metazoan cogitation. All the brains that evolution has so far contrived rest on this tiny foundation. So for the best ideas you had today you can thank the determined metazoan that looks something like a squished noodle. The purpose of brains generally is to organize the waves of sensory phenomena that nature's cerebrally gifted creatures experience. Their job is to filter the world's chaos effectively enough to avoid, for as long as possible, the disagreeable experience of death. A direct correlation exists between survival and how well a brain maps the world around it. The more accurately it can correlate, the more likely it will survive danger, discover rewards, and keep its owner among the living. At the heart of every brain are its neurons, the specialized cells that make possible our brand of thinking, feeling, seeing, moving, and nearly everything else important to us. There are over 150 different kinds of neurons, making them the most diverse cell type in the human body. To support their greedy habit of consuming large quantities of energy, they are surrounded by clusters of glial cells, which serve as doting nannies busily shuttling nutrients and oxygen to them while fetching away debris and generally working to keep the neurons fresh and firing. Each of us carries roughly a hundred billion neurons clustered jellylike inside our skulls (coincidentally about the same number as stars cosmologists believe populate the Milky Way galaxy). Every one of them is supported by ten to fifty indulgent glial cells. 
This makes our brains a remarkable and mysterious place still well beyond the comprehension of the thing itself (a fascinating irony), but the cerebral cortex of a growing human child is more remarkable still. Only four weeks after a human sperm and egg successfully find one another, when we are still embryos no larger than a quarter, clusters of neurons that will eventually become our brain are replicating at the rate of 250,000 every minute, furious by any standard. Around this time, a bumpy neural tube that looks suspiciously similar to a glowworm has begun to take shape. Over the next several weeks four buds within the tube will begin developing into key areas of the brain: the olfactory forebrain and limbic system—the seat of many of our primal emotions; the visual and auditory midbrain, which governs sight, hearing and speech; the brain stem, which controls autonomic bodily functions such as breathing and heartbeat; and the spinal cord, the trunk line for brain–body communication. Two weeks later a fifth cluster of neurons begins to blossom into the frontal, parietal, occipital, and temporal lobes of the cerebral cortex, where so many exclusively human brain functions reside. The brain constructs itself this way, with neurons ebulliently proliferating, and then, like the rest of the cells in the embryonic body, they march off to undertake their genetically preordained duties. During this process and throughout our lives, every cell in the body communicates. It is in all cells' DNA, not to mention our best interests, to reach out and touch one another, mostly by exchanging proteins and hormones. But neurons are especially talented communicators. This is because whenever the biological dice fell in such a way that they came into existence, they began to evolve specialized connectors—dendrites and axons—that vastly improved their exchange of information compared to other cells in the body. 
Before brains came along, primitive protoneurons communicated by secreting hormones and electrical currents in no particular direction, mumbling their messages to the other cells and protoneurons in their vicinity, and not getting terribly quick results, at least compared with our current models. With the invention of dendrites and axons, however, they could form elegant, smart clusters that shared at high speed the information each of them held with the others nearby. (Planaria were among the first to accomplish this.) The emergence of high–speed, if exceedingly minute, communications cables meant that any creature fortunate enough to inherit them could more fully and rapidly sense the world it inhabited—light and dark, food, danger, pain and pleasure—then react to it all in a blink. Not only that, the cables could link different sectors of the brain the way highways connect cities. This meant the brain could not only improve contact with the world, but also stay in better touch with itself, not a trivial matter as brains grew larger. (This turns out to be important to consciousness, but we will visit that subject later.) Dendrites generally conduct signals coming into a brain cell while axons do the opposite. Dendrites (also known as dendrons) are so eager to make contact that they extend treelike in multiple directions and can place one neuron in touch with thousands of its neighbors. Axons aren't nearly as obliging as dendrites, but can still make uncounted connections as they transmit signals outward when a neuron is stimulated and reaches what is known as a threshold point, a moment that is vitally important when it comes to thinking, feeling, and sensing. At that instant an electrical impulse bolts down the axon at 270 miles an hour. 
When it reaches the end of the axon, a tiny pouch of chemicals bursts, sending neurotransmitters across a synaptic gap like party confetti, where they embrace the receptor sites of the next neuron like a long–lost relative and then pass along their message. Your brain is capable of making one quadrillion (that's a 1 followed by fifteen zeros) connections like these. Even as you read the words in front of you, impulses are flaring out and back at high speed, a three–dimensional, electrochemical storm tirelessly at work conjuring your thoughts, assessing your feelings, ensuring your body operates according to plan, and generating your personal version of reality. It's a busy place. While neurons multiply at blistering rates before we are born, the business of building the brain continues even more earnestly after we enter the world. By strict decree, the twenty-five thousand genes—the "structural genome"—each of us inherits in fifty–fifty doses from our parents resolutely continue the construction of our own wetware, and its underlying neuronal infrastructure, complete with our specific talents and predispositions. Just as some of us may inherit stocky bodies and others long, slim ones, our parents can also issue brains that incline us to be gregarious or shy, a leader more than a follower, mathematically, musically, or verbally predisposed. This part of us is a genetic crapshoot, and we have no control over it. Nevertheless, more than other forms of life, even other primates, we can be thankful that we are not immutably linked to our genetic directives. In us they are editable, able to be altered by our personal experience and environment, a phenomenon that explains why each of us is not a clone of the other, not even in the case of identical twins, who carry precise copies of their sibling's DNA. It is impossible to overemphasize the impact this new ability had on human evolution and has each day on your life and mine.
The farther down the evolutionary chain creatures fall, the less complex their brains are as a rule, and the less they are shaped by their personal experience, which is another way of saying that their day–to–day actions are largely, if not entirely, governed by their genes, rather than by anything we might call a "self." Moths, for example, are drawn to candle flames because they are genetically programmed to navigate by the light of the moon. Not having much of a brain, they have been known to mistake a flame for the moon and get incinerated for their trouble. This happens not simply because their brain is small, but because it is also hardwired by its genes and not readily able to learn from experience. For hundreds of millions of years genes were a perfectly effective, if plodding and random, way of adapting to changes in environment, but it wasn't efficient. It took a long time for evolution to get around to building a brain that could think, even a little, for itself. But once it did, those animals blessed with one tended to survive longer than those that weren't. Brains are more resourceful than trial–by–error genetics. They map the world in real time and increase the chances that you will make a lifesaving decision on the spot rather than a deadly, DNA–dictated one that isn't even aware you _are_ on the spot. Not that the influence of genes versus brains is either/or. All creatures endowed with a brain lie along a continuum of cerebral, and therefore behavioral, flexibility. There are no hard boundaries. But the _degree_ of that hardwiring in many ways marks the difference between, say, a flatworm, and us.
The impact that the outside world can have on our brains during our childhood explains how seven billion of us can be walking the planet every day, each a thoroughly unique universe unto ourselves, distinct in personality, experience, thought, and emotion; yet similar enough that we can (more or less) relate to one another and be counted as members of the same species. What has been far less clear, and a slippery problem for scientists, has been exactly how the genetic commands we inherit from our parents are bent by the unique relationships and events in our lives. It turns out several forces are at work. Very hard at work. In the first three years of life the human cerebral cortex triples in size. This is like nothing else in nature. Yet it isn't simply the growth of neurons that makes the human brain so powerful. It is also the way it feverishly links them up. Why should this matter? Think of the brain as a miniature, though considerably more complex, Internet, compressed in size and time. Each neuron is like a computer sitting on a lap or desk somewhere. Computers today are powerful, like neurons, and can by themselves accomplish a great deal. I am writing this book on one right now. But connect neurons or computers to one another, and they become amplified and add up to far more than the sum of their parts. When my computer links to the Internet, it enables me to research information I use in the book, share passages I am writing with others in a blink, and gather opinions, thoughts, and insights by engaging in any number of conversations. I can instantly track down specific bits of information I need or download facts, maps, images, even whole books and movies. By branching out and communicating in all directions, my computer becomes, in many ways, all the computers it can touch. 
Now multiply this by millions of sites from Facebook to the Library of Congress, billions of Web pages, and innumerable other computers, and you begin to get a feel for the benefits of interconnecting neurons in the brain. There is power in communication. The pathways between neurons begin to radiate almost the moment nerve cells undertake their growth in the fetal brain. Yet while the proliferation of neurons begins to slow at age three, the branching of pathways between them continues more urgently than ever. So urgently that a thirty–six–month–old child's brain is twice as active as a normal adult's, with trillions of dendrites and axons making contact, jabbering and listening and tightening the collaborative party that makes the human mind possible. One neuron can be directly linked to as many as fifteen thousand other nerve cells, generating more connections within the brain than there are electrons and protons in every heavenly body within every one of the hundred billion galaxies in the universe. That's a lot of communication, and it is all happening between your ears. The culprits behind this mad construction project, the forces that create and shape these connections, are the boisterous circus of the outside world with all of its smells and sensations, sounds, touches, social interactions, and dangers. In attempting to make sense of the world it lives in, the brain creates its connective architecture by smelting and hammering out a massive, riotous explosion of wetware, which is shaped by a child's sensory conversation with the world. The trillions of connections that blossom physically and chemically represent every new, frightening, exhilarating, or surprising experience children come across, which in the case of children is almost everything. For a toddler, novelty runs riot in life.
Since even big brains can't predict the future, this is nature's way of attempting to prepare for all flavors of trouble (and pleasure) yet to come; an all–out effort to create synaptic antennae that can better sense what may be, or could be, and use whatever tools and information are at hand to the best possible advantage. If music is part of your life, then neuronal pathways and structures begin to fan out to better handle, at first, listening to music, and then later making it. The same holds true for language, physical dexterity, sight, and social cues. Everything from the mundane to the sublime is shaped in the brain by the events around us. You will have realized by now that this pretty much renders the old nature–versus–nurture debate irrelevant. The trillions of connections our brains make in childhood help explain why we are neither purely a product of our genes nor altogether the result of our personal experience, but both. Nevertheless, this does not represent the whole picture. The brain is like an onion. Peel back one mysterious layer and it only reveals another: a recently discovered parallel genetic system, for example, that works within each of us, and profoundly affects the person we become. This system is related to the genome, but it is not the genome. It's something else, equally fascinating, called the epigenome. The long and spiraled strands of DNA that vibrate within the cells of all living things dictate whether they are plant or animal, have feet or wings, lungs or gills, and explain why you and I are tall or short, blond or brunette, Asian or black, even human as opposed to a planarian. But as if that weren't impressive enough, there is still more to our DNA. It is wrapped around proteins called histones. This two–leveled structure—the histones and the DNA—constitutes the epigenome.
Scientists are a long way from fathoming the many–layered mysteries of epigenetics, but they know that when this structure is tightly coiled around inactive genes, it renders them utterly silent and unreadable; when it relaxes its grip, the genes become more accessible and therefore more expressible. How exactly these genes are expressed depends on our personal experiences and the environment in which we live, physically, socially, and emotionally. Specific experiences can deeply affect different brain circuits during developmental stages that go by the self–descriptive term _sensitive periods_. Cells in different parts of the brain that affect sight, language, and hearing are sensitive at different times and for differing lengths of time in life, particularly childhood. How deeply these epigenetic changes run shapes the circuitry in our brains, which in turn shapes how we behave and who we are. Once a sensitive period passes, particular circuits grow set in their ways and then lie beyond the reach of new experience. _How the Epigenome Changes Your Brain_ So while the codes in our DNA are set for life depending on what our parents pass along to us, we still have plenty of room to deviate from the precise commands of those genes. Thanks to the epigenome, the events and the physical and psychological environments in which we live during our childhoods can modify the expression of some genes that affect brain development. Some of these amendments can be temporary; others can change us for the rest of our lives. Study after study, for example, has found that children exposed to high stress are more likely to suffer mental illnesses later in life, including generalized anxiety and serious depression. High childhood stress has also been shown to modify how a person later handles adversity in adolescence and adulthood.
When we are frightened, our adrenal glands release adrenaline, which focuses our attention, increases our heart rate, and prepares our bodies to either fight or flee, handy reactions when your life is on the line. But chronic fear and stress—the kind that continues relentlessly—can corrode us because intense, ongoing awareness of the fight–or–flight kind wears us out. In children an epigenome exposed to constant stress tends to make those children more sensitive to even minimal stress throughout their lives, and more likely to feel anxious when others might not feel the least bit nervous. Poor nutrition or toxic substances can affect epigenomes related to brain development during childhood in ways that blunt brain function later. Together these forces can gang up to have a kind of psychological domino effect that spills into our physical health to make us more susceptible to ailments like asthma, hypertension, heart disease, and diabetes. On the other hand, positive experiences—warmth, stability, security, love, and the joy that comes from play—can create equally powerful, but entirely positive, results. Your genes write the basic blueprint of what is personally possible, or impossible. They set the boundaries of who you are physically, psychologically, socially, and intellectually, but your epigenome etches the finer details of your personality—the ways you handle others, your fears, your joys, your intellectual and emotional prowess, personal talents, confidence, proclivities for optimism or pessimism, and your annoying (not to mention altogether charming) quirks. It influences whether, when, and how your personal set of genes builds the capacity for thought, emotional control, and a whole bushel of other future skills. Exactly what route the timing and depth of its effect takes depends on the infinitely complex molecular interactions that constitute your world and your "self." No matter what, the result is that you come out of it all as unique as a snowflake.
In case the connection has eluded you, it's our neotenous nature, our long childhoods, that makes our epigenome so inclined to the influences of our personal experience during the first seven years of life. Because we are born early and since we have extended our brain development well beyond the womb, neuronal networks that in other animals would never have been susceptible to change remain open and flexible, like the branches of a sapling. Although other primates enjoy these "sensitive periods," too, they pass rapidly, and their circuits become "hardwired" by age one, leaving them far less touched by the experiences of their youth. This epigenetic difference helps explain how chimpanzees, remarkable as they are, can have 99 percent of our DNA, but nothing like the same level of intellect, creativity, or complexity. As productive and interesting as all the goofy openness and flexibility of our toddlerhood is, it also creates a problem. It's not sustainable. Unless we hope to be a race of primates suffering from terminal cases of attention deficit disorder, the time eventually comes for our lively cerebral growth to be curbed. It's the biological equivalent of fishing or cutting bait. We can't afford to record, some way or another, every experience throughout our lives. The costs are too high. Our minds would grow so flexible they would become floppy, and so cluttered they would be incapable of focus. Besides, not every new experience is useful (sitting in traffic, for example). There are also physical limits to how big a brain can grow, though we _Homo sapiens_ have certainly pushed the boundaries. Finally, brains, being greedy organs, devour immense quantities of energy for their size, especially in childhood. A growing toddler's cerebral appetite gobbles up as much as 85 percent of all the energy that its body requires each day. Over a lifetime that would be insupportable. 
No, at some point the brain must make some tough biological choices by locking into and holding on to the influences the epigenome has expressed, while it somehow brings the sweeping array of connections it generates during childhood under control. In the case of the epigenome, this process is relatively simple, if anything that happens in the human brain can be called simple. Despite the wild partying your youthful cerebral cortex undertakes, your genes are still ultimately in charge and dictate when different areas must calm down and mature. Genes decide when the sensitive periods of different cerebral circuits end, and when they end, that's that. It's interesting that it happens this way, almost as if each sector were a different brain, each with different genetic rules, which just happen to be expressed within the same skull (which in many ways is precisely the case, since different parts of the brain evolved at different times). Neural circuits that analyze color, shape, and motion, for example, mature in the visual cortex long before higher–level functions develop to comprehend facial expressions, or the shape and meaning of frequently used objects, a glass, a fork, or a toy, for example. For its part the auditory cortex first learns to recognize simple sounds, then later comprehends the meaning of those noises as words in strings of language that in turn help us make decisions or digest a great novel. The same process is true of other areas that handle physical and cognitive capabilities. Once these parts of the brain mature, the chances of changing them drop precipitously. Not that this is the absolute end of the epigenome's work. Throughout life, some matured areas will keep interconnecting to record revisions, mistakes, and the additions of knowledge that make us smarter, even wiser. 
But generally, it is during your childhood that the brain takes what the world has to offer and makes a bet that what its circuits have recorded is representative enough to handle what life will bring in the future. Following our childhoods, memory becomes the most effective way for us to change our behavior. (In a way, memory is evolution's way of allowing you and me to remain in a permanently "sensitive period," always open to change, regardless of age.) Controlling the other way that personal experience changes us—that would be the rampant connections our young brains make—is also related to the epigenome, but different from closing down sensitive periods. Remember, many of the pathways are created based on what we are exposed to as we grow, from music and language to sports and social interactions. You might assume that together they confer immense evolutionary benefits. They do, up to a point. But again there are limits to what we can handle. Too many connections make for a cluttered cerebrum. The solution to this overabundance is a kind of intracranial evolutionary competition. Just as the organisms in an ecosystem are "selected out" if they can't find a niche where they can make their living, connections in our brains that are not used much after they are formed die out as well, unmasked as extravagances that have no place in the neurological ecosystems of our personal experience. You can look at the first, rapid interconnections the brain makes based on experience as something like a matrix of dirt paths branching from your neurons, tentative explorations of this or that destination. The more often you undergo an experience—listen to music, catch a ball, hear a language, or are subjected to scary or stressful situations—the more often the path is walked and the more grooved it becomes. Paths may even become, metaphorically, paved or built into interstate highways, autobahns of thought and experience, because they are traveled so often.
The highways remain for life, but in time, if the dirt paths and the back roads aren't traveled much, they disappear from lack of use. Only the well–traveled roads survive. The synaptic routes we build, or not, deeply affect our perception and sense of reality. One of the most dramatic examples of this is the experience in the 1950s of anthropologist Colin Turnbull, who was researching the BaMbuti Pygmies, who live in the dense Ituri forests of central Africa. During his research he and a BaMbuti tribesman that he had come to know named Kenge traveled to another part of Africa characterized by broad plains, as opposed to the dense jungle Kenge had grown up in. One day the two men were standing overlooking the expansive grasslands. Kenge pointed his finger at a herd of water buffalo and asked Turnbull, "What insects are those?" At first Turnbull couldn't figure out what Kenge meant, then realized that he was referring to the buffalo. To this man, who had never before seen so much distance between anything, the buffalo appeared not small because they were far away, but small because, like an insect, they were, well, simply small. Turnbull realized that because the BaMbuti grow up in dense forests they never develop the ability to "see" or comprehend distance. When Turnbull told Kenge that the insects were buffalo, Kenge roared with laughter and told Turnbull not to tell such stupid lies. Given the world he grew up in, seeing things that were far away was a visual extravagance for Kenge, and so those connections to the visual cortex were pared away or, perhaps, never made at all. If you were blindfolded between the ages of three and five, the same biology would be at work for you except in this case you would grow up entirely and forever blind.
Not because your eyes are incapable of sight, but because the synaptic connections your brain made for sight before you were blindfolded would have been pruned away from lack of use by age five, the time when the visual cortex "hardwires" itself. The pathways would never have become paved highways because they were never used. Once that part of the brain locks in, there is no known way to restore sight. The pathways are gone. Children who suffer from amblyopia (lazy eye) often end up blind in their weak eye for the same reason. If the eye is rarely used, its connections to the visual cortex atrophy and die, even though the eye itself is perfectly functional. The most universal example of how our brains discard unused synaptic connections is language. Within five months or so of birth we all become capable of babbling every one of the sounds required to speak any of the sixty–three hundred languages humans utter throughout the world, and probably many that have long been extinct. Up to about age seven, the last year of childhood, children rapidly learn to speak in whatever tongue they are exposed to. If they encounter several, they will adopt them all with ease because to their brain the separate languages aren't separate at all, they are one; it's just that they have more words and rules. Acquiring fluency in different languages later in life becomes more difficult because the neural circuits that help us master the sounds, the accents, and the grammar of those languages were never formed or have largely evaporated from lack of use. Even if you do learn a new language and get the grammar and the vocabulary right, it is nearly impossible to drop the accents you bring with you from your mother tongue. Henry Kissinger, for example, who has been speaking English since his teens, still speaks it with a strong German accent because English was not his first language. He didn't learn it until age fifteen, when his family migrated to the United States. 
If the cerebral Rubicon theory is accurate, humans like _Homo ergaster_ began being born "early" one million years ago. Their brains were now roughly three quarters the size of ours, so they weren't arriving in the world at a fetal stage as delicate as yours or mine, but the increased size of their heads was pushing them out of the womb sooner, and extending their childhoods. This meant they exited the womb uncompleted, a work in progress, hormonally primed to grow vast new farms of neurons and synapses, an amalgamation of their parents' genetic donations, but editable, more than any other creature up to that time, by their personal experience and the forces of the environment they faced. There is no way to overestimate how important this was to our evolution. This was the birth of human childhood itself, and the beginning of the wild and complicated processes that explain how you or I can be born in Fargo, North Dakota, learn to speak fluent French in Paris, develop wit like Woody Allen's, or become as reclusive as Howard Hughes, all while still plumbing the intricacies of subjects as wildly different as calculus, Mozart, and baseball. This began the trend that has, in many ways, made children of all of us for the entire course of our lives, neurologically nimble enough that we can keep learning, changing, and overriding the primal commands of our DNA. As we age, our brains may lose some of their youthful pliability, but they also become more stable, deeper, and broader. Or as anthropologist Ashley Montagu put it, "Our uniqueness lies in always remaining in a state of development." If we stand back and gaze thoughtfully at the whole vista of human evolution, it grows clearer that the longer a childhood lasts, the more individualized the creatures that experience it become. It is the foundation of the thing we call our personalities, the unique attributes that make you, you and me, me.
Without it, we would be far more similar to one another, and far less quirky and creative and charming. Our childhoods bestow upon us the great variety of interests and personalities and talents that the seven billion of us display all around the world every day, from Barack Obama to Lady Gaga to Itzhak Perlman. This diversity led, slowly, to a new line of "early born" humans living along the shores of Lake Turkana who were developing an unparalleled ability to adapt to the world around them. Something different was afoot: a species that was becoming, to borrow the phrase of Jacob Bronowski, "not a figure in the landscape, but a shaper of the landscape." Improvements, however, have a way of creating new challenges. Now that a series of surprising and unintended events had taken human evolution into entirely new territory, our ancestors found themselves caught in a strange, runaway feedback loop that would favor the arrival of increasingly large–brained, increasingly intelligent, and increasingly helpless babies. All good, you would think. Except that every one of them would require more care and long periods of time to grow up. That would shake the social lives of Africa's gracile primates to the core and lead to yet another profoundly important twist in our evolutionary story. ## **Chapter Four: Tangled Webs—The Moral Primate** _Morality, like art, means drawing a line someplace_. —Oscar Wilde In 2005 England found itself mesmerized by the gruesome murder of fifty–seven–year–old businessman Kenneth Iddon. Each Sunday, Mr. Iddon would drive to nearby Deanwood Golf Club to play snooker with his friends, then return home around midnight. On February 1, 2004, before he could get out of the car in his driveway, prosecutors said three men bludgeoned him, dragged him into his garage, repeatedly stabbed him and finally killed him by severing his carotid artery. It all happened while his wife and stepson were in the family house nearby.
Others in the suburban neighborhood later reported they heard cries for help, yet Lynda, Mr. Iddon's wife, and Lee Shergold, her thirty–one–year–old son by a previous marriage, said they never heard a thing. They denied hearing any cries for help, the local prosecutor charged, because both Mrs. Iddon and her son had hired the three men who killed Mr. Iddon. They wanted his money, the prosecutor said, all of it, not simply what Lynda Iddon might get in a divorce settlement. The irony was that when Mr. Iddon's will was read, he had left nothing to his wife. His entire fortune was bequeathed to his twenty-two–year–old daughter, Gemma. Neither Lynda nor Lee inherited a dime. It's an old human story. Greed, hatred, envy, and violence. We have, in case you haven't read today's newspaper, been known to commit acts we call immoral. We find them abhorrent and disturbing, yet, given our immense numbers, we actually show our ugly sides relatively little. One of the reasons we call attention to the terrible things we do, and are horrified by them, is because the majority of us don't do them. We are the only animals that even struggle with the idea of morality because we are the only truly ethical animal. Our moral tendencies are apparently so thoroughly wired into our psyche that they even reveal themselves in young children. For years psychologists—from Sigmund Freud to Jean Piaget to Lawrence Kohlberg—denied that infants or toddlers could have any sense of right or wrong. The traditional view has long been that babies arrive without the slimmest grasp of empathy, fairness, or other similarly moral sentiments. But recent experiments show otherwise. At Yale University, psychologists Paul Bloom, Karen Wynn, and Kiley Hamlin placed infants between the ages of five and twelve months in front of a simple morality play where three puppets were throwing a ball. As the babies watched, one puppet rolled the ball to another puppet on the right, then that puppet promptly rolled it back.
Next the center puppet rolled the ball to a third puppet on the left, who took the ball and, instead of rolling it back, ran off with it. The infant audiences didn't cotton to this sort of behavior. Later, when they were presented with the two puppets to which the ball had been rolled, each with a pile of treats, the infants were asked to take a treat from one of them. Invariably they took a treat from the "naughty" puppet that had absconded with the ball. One one–year–old went so far as to smack the offending puppet on the head, raising the question, is violence a proper response to an immoral act?! The primal depths to which our sense of morality runs, and the murkiness of what we consider to be a moral or an immoral act, were brilliantly illuminated in a thought experiment that British philosopher Philippa Foot conceived over thirty years ago. (American philosophers Judith Jarvis Thomson, Peter Unger, and Frances Kamm later expanded upon Foot's experiment.) It asks that you imagine you are standing on a bridge overlooking tracks down which an out–of–control train is hurtling. As your eyes follow the route of the train, you are horrified to find five people tied to the track. But, you are told, you can save the five doomed people if you flip a switch that will direct the train onto a second fork. The only problem is that there is also another single person bound to _those_ tracks. Now what do you do? The vast majority of those who take this test, or variations of it, don't hesitate to say they would flip the switch and sacrifice one person to save five. It's not a perfect situation, but at least, the thinking usually goes, five people are being saved even if one must be sacrificed to spare them. (What would you do?) Several years later, Judith Jarvis Thomson raised the experiment's stakes by offering an alternative scenario.
This time the train is headed toward our doomed fivesome, but the only way you can save them is by throwing a heavy object in front of the oncoming train. As it happens a large person is standing with you on the bridge. Should you push the person over the railing to save the same five people? Could you? Deciding what to do in that second situation turns out to be a lot more difficult than in the original scenario. But why? The outcome is precisely the same: one human life sacrificed to save five. But this time it's personal. It's one thing to flip a switch; the logic is obvious and you can act by remote control. But it's quite another thing to look a fellow human in the eyes and then personally push him to his death below. Not that most people think all of these issues through coolly and logically before they answer. The reaction is visceral, primal. If we see the rudiments of morality in examples such as these, it's not difficult to imagine how primeval versions developed among the tribes of early humans who had begun a million years ago to make their way out of Africa and take the human species, for the first time, global. In many ways their situation was similar to another classic thought experiment that emerged in the 1950s out of what computer scientists call game theory. The problem is called the Prisoner's Dilemma and is based on the work of two mathematicians, Merrill Flood and Melvin Dresher, at the RAND Corporation. (Much later Albert W. Tucker added some formal touches to the game.) As much as we might like to think that our sense of fair play traces its roots to human kindness and altruism, game theory illustrates that deep down even the best behaviors stand on a practical foundation, a form of enlightened self–interest, at best. For our purposes the game goes like this. Jack and Joe are arrested by the police and charged with robbing a bank. 
It's a given that both men are pretty reprehensible and more concerned about their personal freedom than they are about one another. The problem is the authorities don't have sufficient evidence to convict either one. So they separate them and offer each an identical deal: You can testify against your partner, and if he doesn't testify against you, you'll go free and he'll go to jail for ten years. If you both remain silent, then you'll each serve a short sentence. If you both testify against one another, you will each serve five years. And if you refuse to testify at all, but your partner testifies against you, you will serve the full sentence. (It's a fair bet, by the way, that the British police investigating the Iddon murder used exactly this strategy with the murderers, several of whom did eventually turn on their fellow conspirators.) Scientists have found that if the game is played once, six players out of ten choose to testify against their partner. We shouldn't be too surprised that most people will rat out their partner because by testifying this one time, the best thing that happens is you walk away. The worst is that your partner testifies too and you each serve five years. If the game is played again and again, however, and the players can exact revenge on one another or reward good behavior, which is more the way life is, then the players gain enough feedback that they learn how their counterparts behave, and in time something interesting happens. Each player begins to cooperate with the other because each realizes that watching out only for himself (and choosing to turn in his partner) may result in his partner's punishing him in the next round of the game. What happens? They both begin to choose to _not_ testify, which results in both getting off with a slap on the wrist. A sort of morality emerges. Players begin to realize that if they treat others as they would like to be treated—the Golden Rule—life won't be perfect, but it will, on balance, be pretty good.
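The repeated version of Jack and Joe's predicament is simple enough to simulate. The sketch below is a minimal illustration, not a formal model: the payoffs come from the deal described above (mutual silence earns the short sentence, assumed here to be one year; mutual betrayal costs five years each; a lone betrayer walks free while his silent partner serves ten), and the strategy and function names are illustrative. The mirror-your-partner strategy is the one game theorists call tit for tat.

```python
# Iterated Prisoner's Dilemma, using the sentences from the deal in the
# text. The one-year "short sentence" is an assumption; the other three
# payoffs are as described.
SENTENCES = {  # (my move, partner's move) -> years I serve
    ("silent", "silent"): 1,
    ("silent", "testify"): 10,
    ("testify", "silent"): 0,
    ("testify", "testify"): 5,
}

def always_testify(my_history, their_history):
    """A player who always rats out his partner."""
    return "testify"

def tit_for_tat(my_history, their_history):
    """Cooperate first, then mirror the partner's previous move."""
    return their_history[-1] if their_history else "silent"

def play(strategy_a, strategy_b, rounds):
    """Play the game repeatedly; return total years served by each player."""
    hist_a, hist_b = [], []
    years_a = years_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        years_a += SENTENCES[(move_a, move_b)]
        years_b += SENTENCES[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return years_a, years_b

# One-shot logic favors betrayal, but over many rounds two mirroring
# players settle into mutual silence and serve far less total time
# than two habitual betrayers.
print(play(always_testify, always_testify, 10))  # (50, 50)
print(play(tit_for_tat, tit_for_tat, 10))        # (10, 10)
```

Run over many rounds, the slap-on-the-wrist outcome the text describes emerges on its own: each mirroring player serves a year per round, while two habitual betrayers pile up five years per round.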
What all of this clearly reveals is that we are, and have been for some time, moral animals. But where does our morality come from? Why did it even evolve? Other animals (with the exception of some of our cousin primates) don't struggle with morality. Why should we? The reason is that we are so shamelessly social. By the end of 2011, Facebook, the current darling of the Internet, claimed 750 million active subscribers, who together pass an average of 700 billion minutes a month digitally engaged with one another. Since its emergence in 1993, the World Wide Web has rocketed from zero Web sites to 45 million, and counting. Last year, uncounted billions of us worldwide were busily talking, incessantly texting, or otherwise interacting with one another on more than five billion cell phones. These statistics aren't simply impressive examples of our ingenuity; they represent monuments to our primal need to connect with one another. The convivial interactions of ants, termites, and certain kinds of algae notwithstanding, we humans are indisputably the most socially complex species to have ever emerged on planet earth. We cannot bear to be disconnected. For a human, the worst kind of torture is solitary confinement, a punishment that can lead to depression, hallucinations, and madness. We can't seem to help keeping constantly in touch, literally or metaphorically, always reaching out, laughing, crying, gossiping; talking at, with, or about; watching, gaping, glancing; listening in, listening to. Even in ignoring one another we are tacitly bonding by acknowledging that others are around us to turn our nose up at. Ugly as they are, hatred, jealousy, envy, rage, discrimination, even murder, could not exist if we weren't, first and above all, bound to one another inextricably.
It is possible, I suppose, that in some parallel universe each of us could be like Dickens's Ebenezer Scrooge, "as solitary as an oyster," but if that were the case, not only would love, marriage, business, and cities be out of the question, so would Super Bowls, World Cups, global trade, finance, symphonies, and the rest of human civilization with them. We have built the world that we have built, either in cooperation or competition with one another, but _we_ have built it. The currency of our connectedness is communication, which is so mystifying that legions of scientists still labor ceaselessly to unravel its complexities. We communicate using language, but also by tapping uncounted libraries of nonverbal behaviors—laughter, tears, body language, and facial expressions, not to mention painting, mathematics, sculpture, music, and dance in all of their variability across all the cultures of today, yesterday, and futures to come. Each of them represents a handful of the unending inventions we press into service to express to one another what we think, feel, explore, want or wish for, fear, hate, and love. All of life is linked, from amoebic protozoa to the invisible nitrogen–fixing microbes that make the enormity of sequoia trees possible. We aren't alone in that. Of the one hundred quadrillion cells that each of us carries through the day, for example, only 10 percent belong to us. The rest are outsiders, the microbial flora and fauna that live in our stomachs and organs and dine out on the surfaces of our bodies. Yet without those trillions of hardworking microbial committees, not a single one of us could make it through the day. We need one another. Global ecosystems likewise require connection and communication.
Symbiosis and competition make the world, quite literally, go around, because without the interactions of life, from the oceans' microscopic phytoplankton to rumbling migrations of wildebeests, our world might just as easily be as dead, phlegmatic, and torridly hot as Venus, or as cold and parched as Mars. In short, life and communication can't be separated. But in our case the affliction is unusually deep and complex. We can trace these tendencies back to the early mammals that began to gather evolutionary momentum after the dinosaurs were wiped out sixty–five million years ago. A cerebral innovation that mammals brought into the world was the brain's limbic system, the seat of our emotions. Lots of mammals are social and live in packs, prides, and herds, but our particular primate line traces its roots to mammals that evolved into monkeylike creatures that mostly stuck to the jungle, but also eventually ranged out into the savanna, where they lived in small clusters. From these, and in a relative blink, multiple species of primates arose between twenty-five and five million years ago. Since every other human species that has ever managed to make its way into existence is now extinct, except for us, our closest living primate relatives are chimpanzees, bonobos, and gorillas, all of them exceedingly sociable. This tells us that when our earliest human ancestors found themselves stranded on Africa's expanding savannas, they were already inveterately communal. After all, shortly beforehand they all shared a common ancestor. The best we can figure, they lived in troops of twenty to fifty, possibly more, sometimes less, traveled together, shared food, fears, sex, and other dangers and amusements. So tight were their communities that there would have been zero chance of any member of the troop running into another and not recognizing him or her immediately. 
(Maybe this is why studies reveal we have trouble handling more than twelve to fifteen truly personal relationships, Facebook notwithstanding.) The only strangers these creatures would have encountered would have hailed from other troops or even other species, and those encounters would likely have been as strange as you or me running into a Sioux warrior from 1825 at the local mall. As our ancestors were left to wander their new, more dangerous grassland environment, the ties that bound them would have grown tighter than ever. The aphorisms "misery loves company" and "there is safety in numbers" might arguably have found their origins here. On the savanna there were more predators, but fewer places to hide, less food, and less water than the jungle provided, and even more competition from other troops given the dearth of resources in their new home. Disease and injury surely did in more than one man, woman, or child as the troop wandered from place to place among the volcanoes, and along the shores of lakes like Turkana, every loss threatening to rub the clan down to a size that made it impossible to survive the next disease or environmental blow. Now add to all of this the pressures early–born children brought to the mix. You find yourself awakening every day bound to your fellow creatures working to survive, raising children, forging friendships and alliances, scrounging for food, and communicating as much as your brain and body will allow. You have no other choice because if you don't, you will die. But (there is always a but) living in such a tight community also means competing with the selfsame creatures you rely upon for mates and status and resources. Tricky situation, because it requires balancing what you want for yourself with everyone else's needs. It means simultaneously taking care of number one _and_ watching out for those around you. This is the central paradox of the human condition—balancing, constantly, two seemingly opposite needs.
We see the evidence of this continuing dilemma every day from sibling rivalries to office politics, from international trade to military balances of power. Every day's headlines and news reports are dramatic testaments to our struggles to act morally. Robbery, terrorism, murder, heroism, stock–market crashes, war, charity, law, international aid, trade, and political intrigue are all examples of our attempts, and failures, to deal fairly and ethically with one another, writ large. For the bands of our predecessors struggling to survive on Africa's plains, however, this was new territory, and it required the development of some kind of moral code. Think back for a minute on the Prisoner's Dilemma. Existence on the savanna among small troops of hominins would have been remarkably similar to Jack and Joe's situation after their arrest. If you were a member of the troop, it would make no sense to repeatedly abuse those around you even if you had the power to do it. If you did, you would quickly find yourself persona non grata, shunned by the troop, or, worse, dead. On the other hand, what about _your_ needs? You ignore those at your peril, too. You require a mate, food, safety, and personal power just as much as anyone else. To deny these could result in your death, too. A dicey dilemma. We are clearly and painfully still struggling with these issues, but over the long haul evolutionary forces encouraged our ancestors to cooperate enough with one another that they managed to make it to the twenty-first century. Like Jack and Joe, experience taught our ancestors that on balance cooperators tended to stay alive long enough to have babies and pass their genes along. Cooperators wouldn't have always gotten everything their way, but they also wouldn't have been tossed out of the group to fend for themselves, a sure death sentence given the harsh realities of life a million years ago. 
Success in such a tight community depended increasingly upon how deftly you navigated and balanced your relationships with your peers. Accomplishing that, however, required an even more powerful brain than the one nature had already bestowed on struggling predecessors. In the 1990s a Liverpudlian psychologist named Robin Dunbar conducted research that illustrated a correlation between the size of an ape's brain and the size of the troop in which he lived. The larger the group, the larger the brain. He argued that bigger troops drove the evolution of larger brains because every new addition to the group ratcheted up the number of direct and indirect relationships each member had to keep track of. Juggling more relationships required a corresponding boost in intelligence. Evolution would have favored smarter, larger–brained members of the troop because they would have been better equipped to track the growing social relationships between fellow primates. Something similar was happening among our direct ancestors on the plains of Africa a million years ago, with an important additional ingredient. The driving force behind the evolutionary change wasn't merely the size of the group; it was the complexity of the relationships inside it. Our ancestors were much smarter than Dunbar's primates, and the dynamics of their relationships would have been more complicated. After all, at that time they were the smartest creatures on earth. Greater intelligence is a multiplier of complexity because it increases the number of _factors_ in relationships. It adds more variability, more motives, more intrigue and nuance, and, in turn, drives up the advantages of possessing the additional neuronal firepower needed to constantly calibrate exactly why people are acting the way they are, and more particularly why they are acting toward _you_ the way they are. Human relationships are dynamic and fluid. They change constantly. 
Rarely do we unquestionably love, or entirely distrust, the people in our lives. Mostly our relationships slide along a continuum in a never–ending exchange of interpersonal, emotional, and mental calculations. The social lives of our ancestors may not have reached the Machiavellian proportions of the Soviet politburo, the court intrigues of Henry VIII, or even the office politics of _Mad Men_, but, generation by generation, you can be sure they were getting increasingly complicated. And that would have required the introduction of a new and powerful behavior: deception. Or more precisely, as you will see, our ability to detect deception. At this point in the evolution of life on earth, deception was clearly far from new. Prevarication is an essential part of existence and has been for far longer than our kind has been around. Venus flytraps pose as beautiful flowers to lure their quarry to their doom. A leopard's spots or a chameleon's changing colors dupe prey and predator alike. Young spider monkeys have been known to fake predator calls so they can scatter their elders who are dining on recently found food, then pilfer the goods before others in the troop are any the wiser. The prize for natural deception might have to go to a particular shallow–water anglerfish (there are many species) that looks remarkably similar to a rock encrusted with sponges and algae. At the end of its head extends a thin spine that supports a piece of itself that would be the envy of every avid reader of _Field & Stream_ magazine. It looks exactly like a small living creature right down to the pigment along its flanks and "eyes" at the top of its faux head. The anglerfish even wiggles the bait so that it seems to be swimming along just like any number of other fish in the sea. When a hungry fellow fish arrives to take the bait, the angler gulps it down before it has even realized it is the hunted, and not the hunter.
There is, however, a difference between these deceptions and the human variety. The human sort that was shaping up a million years ago was conscious, which is to say planned and driven not purely by genetics. In these ancestors we begin to see the evolution of chicanery in the service of self–interest at a level never before seen, the deliberate, premeditated variety. In some ways cheating of this sort was inevitable. It is the flip side of the primal moral code that was evolving at the same time. As early humans found ways to cooperate and trust one another—which was absolutely necessary if they hoped to survive—wasn't it equally inevitable that deception would also emerge? It was, after all, a powerful way to serve personal ends without having to deal with the overt danger of direct confrontation inside the troop—a perfectly understandable, even brilliant, adaptation when you consider the circumstances. Deception was an accommodation, a kind of compromise, except that only one party was in on the secret. If you can cheat and get away with it, you're riding on the backs of others to your benefit (and their detriment) without anyone's knowing it or becoming even the slightest bit upset about it. Not a bad ploy, if you can get away with it. Of course over the long haul getting away with it would have to fail, because if it succeeded indefinitely, the spread of bad behavior would unravel the success that sustained the group, something like the way a too–successful parasite will kill off its host (and itself if it succeeds). Ultimately, the bad behavior has to stop, or at least be controlled. If, among a small band of _Homo ergaster_ , for example, food was stolen, personal hoarding got out of hand, slackers consistently failed to pull their weight, or mates continually cheated on one another and refused to protect and care for their families, the group's social fabric, and the trust that kept it woven, would fly apart. No one would win. 
So in the arms race of ever–improving minds, detecting bad behavior would have been an extremely important skill for our ancestors to develop—an antidote to deception. And it turns out they did develop it, at least according to evolutionary psychologists Elsa Ermer, Leda Cosmides, and John Tooby. We all engage in what scientists call social exchange. We agree to do something for someone in exchange for his or her doing something in return for us, either now or in the future. We do this because on some level we believe that the exchange works to our benefit. So does the other person. "You scratch my back, and I'll scratch yours." Everything from family relationships to world economies rests on this fundamental human behavior. And for our ancestors, it would have been essential to their common survival. But what happens when you scratch someone's back and he or she doesn't scratch your back back? According to tests Ermer, Cosmides, and Tooby conducted with everyone from hunter–gatherers in the Amazon to university students in Europe, Asia, and the United States, we humans have unerring radar for sniffing out those who cheat the system; a kind of social immune system that finds and exposes freeloaders. Not that this radar is perfect in all matters of deception. The tests indicated we are not all that skillful at unmasking trickery, infidelity, or accidental cheating, but when it comes to the scratch–my–back–and–I'll–scratch–yours variety, we are extraordinarily talented. Uncovering any physical evidence of this special ability to expose cheaters among the dust and bones of our long–lost predecessors is, unfortunately, impossible. There are, to paleoanthropologists' everlasting sorrow, no fossils of behaviors.
But in another study, cognitive scientist Valerie Stone at the University of Denver did find a different kind of physical evidence, this within the human brain, which indicates our ability to suss out social–exchange cheaters is wired somehow into the wetware between our ears, a little like the ability to learn language. At the heart of Stone's investigation is R.M., a man who, in a bicycle accident, had damaged a rare combination of areas in his brain—his orbitofrontal cortex, temporal pole, and amygdala. R.M.'s accident was tragic for him, but fortuitous for science because all three of these areas are crucial to social intelligence, particularly in making inferences about others' thoughts or feelings based on, say, an angry tone of voice, a scowl, a smile, or a person's body language. Stone devised a test for R.M. to see if particular kinds of if–this–then–that statements were more difficult for him to understand than other kinds. She asked him to analyze three different types. One, for example, dealt with precautions. "If you work with toxic chemicals, you have to wear a safety mask." Others involved descriptive rules. "If a person suffers from arthritis, then that person must be over forty years old." A third kind of problem dealt with social, scratch–my–back–I'll–scratch–yours contracts. "Before you go canoeing on the lake, you first have to clean your bunkhouse." R.M. had a difficult time correctly answering the social–contract questions, like the one about the bunkhouse. The difference between getting those correct compared with correctly answering the precautionary questions ("If you work with toxic chemicals, you have to wear a safety mask") was a whopping 31 percentage points. Stone concluded that uncovering cheaters was so crucial to survival that evolution favored neural wiring optimized for understanding when someone was not living up to his or her promises. As luck would have it, R.M. had injured exactly the parts of the brain involved in this wiring.
You might think if we were this good at spotting cheaters, we would be equally talented at detecting other kinds of deceptions. But that doesn't seem to be the case. A few years ago, two psychologists, Charles Bond and Bella DePaulo, wondered exactly how sharp we were when it came to catching others in the act of pulling the wool over our eyes. Rather than conduct their own study, they organized a study of studies, analyzing documents from 206 other research projects focused on various sorts of human deception and our ability to discover it. They pored over no fewer than 4,435 individuals' attempts to dupe 24,483 others and found that the dupers were unmasked by the dupees only 54 percent of the time, or just a little better than you or I would do if we flipped a coin. It turns out that one of the reasons we aren't better at spotting lies is that we have learned to be almost (but not quite) as good at hiding the truth from one another as we are at uncovering it. It's not that we are horrendously inept at calling out the equivocators among us; we have just learned to improve our lying and fakery. In the ongoing arms race between deceivers and truth seekers, the competition is so close that it's resulted in a kind of Mexican standoff. According to the research of one of the true pioneers in the field of kinesics, or body language, psychologist Paul Ekman, this has resulted in several intriguing insights about the way we behave in one another's presence. Sigmund Freud famously wrote in 1905, "No mortal can keep a secret. If his lips are silent, he chatters with his fingertips; betrayal oozes out of him at every pore." Ekman and his research collaborators found that the great Austrian psychoanalyst was right, our bodies can often subvert our best attempts to deceive, but not in the most obvious ways and rarely in the ways we read about so often in popular magazines.
For example, we are outstandingly skilled at hiding the truth verbally, a little less good at hiding it by controlling our facial expressions and hands, and least effective of all at hiding the ways our legs and feet can reveal our fabrications. The parts of us over which we have the most conscious control are the parts we've become particularly good at masking. Bond and DePaulo offer plenty of additional reasons why our rate of sniffing out deception is hardly better than our ability to accurately predict a coin toss. For one thing, we are fundamentally trusting creatures, predisposed to believe those we deal with because it's rare that our dealings with them result in a catastrophic or dangerous lie. (If that were the case, we would all be far more paranoid, which would create its own set of unsavory difficulties.) Many of those we spend most of our time with tell us plenty of harmless fibs. How good we look that day, for example, or how funny a joke is; that they were late for a meeting because they had trouble starting their car, or the dog ate the weekly report—that sort of thing. Even if we don't believe everything we hear (or are pretty sure that others don't believe everything we say), this variety of truth bending isn't damaging, and sometimes it's even constructive. So our tendency to miss untruths might also be a matter of motivation because we aren't generally dealing with a world–class con artist who is hiding a dangerous whopper that puts our lives on the line. Every day is filled with rationalizations, self–deceptions, white lies, and all varieties of other spin. The competition that required liars to outfox their dupes, and then dupes to figure out the deception strategies of good liars, and so on, almost certainly contributes to one of the neatest tricks the human mind is capable of—imagining it is not the mind it is, but someone else's.
If you happened long ago to be engaged in either side of this liar–dupe battle, as our ancestors surely were, one of the best weapons you could possibly devise would be the ability to shift your point of view and imagine yourself in the shoes of the person who might be lying to you (or the shoes of the person you are trying to deceive). This ability would allow you not only to imagine the situation from the other person's viewpoint, but also to look at yourself from the outside and, perhaps, spot flaws in your own dissembling techniques. This is the psychological equivalent of placing two mirrors face–to–face, creating an escalating infinity of images, except in this case you can create an infinity of viewpoints that shift back and forth reacting one to the other. (This recursive ability turns out to be crucial to human consciousness, as we will later see.) Novelist and screenwriter William Goldman beautifully illustrated this contest when he wrote a scene for his charming, comic send–up of the classic fairy tale, _The Princess Bride_. A Machiavellian (and hunchbacked) master of deception and intrigue named Vizzini agrees to face off with the book's masked, Robin Hood–esque hero in a battle of wits. At stake is the book's beautiful, but flinty, heroine. The two men sit, each with a goblet of wine in front of him. One of the two goblets is deadly, tainted with a poison called iocane. Under the rules of the battle, the masked hero already knows which goblet is poisoned because he put the iocane in it, but Vizzini alone gets to choose which goblet they each must drink from. If he can calculate which goblet has the poison, he will choose not to drink it, killing his rival. The scene unfolds like this... "Your guess," he [the masked hero] said. "Where is the poison?" " _Guess_?" Vizzini cried. "I don't guess. I think. I ponder. I deduce. Then I decide. But I never guess." "The battle of wits has begun," said the man in black.
"It ends when you decide and we drink and we see who is right and who is dead."... "It is all so simple," said the hunchback. "All I have to do is deduce, from what I know of you, the way your mind works. Are you the kind of man who would put the wine in his own glass, or the glass of his enemy?" Vizzini then goes on to logically slice and dice the situation, not to mention the psychological makeup of his nemesis, with each deduction making the hero increasingly nervous until at last Vizzini comes to his conclusion. "I have already learned everything from you," said the Sicilian. "I know where the poison is." "Only a genius could have deduced as much." "How fortunate for me that I happen to be one," says the hunchback, growing more and more amused... "Never go against a Sicilian when death is on the line." He was quite cheery until the iocane powder took effect. The man in black stepped quickly over the corpse. How did our masked hero win the battle of wits? How could he be so sure his deception wouldn't be found out, leaving him to drink the poisoned wine? Here's how: He had spent two years building up an immunity to iocane. It didn't matter which goblet Vizzini drank from. Both were poisoned! And with that move, he raised the evolutionary stakes. Unfortunately for Vizzini, he fell one step behind in the arms race and was selected out. Goldman's little story encapsulates the ongoing battle our ancestors found themselves fighting. In dealing with their increasingly complicated relationships, those early humans who became more skilled at climbing inside the minds of the others around them would more often win the battle of wits. They would also enjoy a decided evolutionary edge because they would excel at practicing deception as well. This makes us a tricky species indeed. Psychologists call this unique human ability to hop back and forth between our own point of view and someone else's Theory of Mind or ToM. 
Uncovering deceit isn't the only time we use it (though it's certainly a helpful application). We employ ToM almost every waking moment we are interacting with others or thinking about interacting with them. It is, if you examine it closely enough, the foundation upon which all human social commerce is built. It enables us to empathize, anticipate, and outfox. We exercise it when we talk with one another, or about one another. It's in play when we lie awake in bed wondering why our spouse or girlfriend or boyfriend did this or said that, or the boss gave us a hard time about the quarterly report, or even why he put his arm around our shoulder and said, "Atkinson, helluva job!" What, we wonder, did he really mean by that? To put it bluntly, we are a species incessantly thinking about what everyone else around us is thinking. Yet ToM has even broader applications and effects because it provides us with a remarkable talent for running infinite numbers of what–if scenarios, all simply by firing the neurons in our own heads. You can imagine one of our ancestors wondering, "What if the leopard jumps out of that tree? What if I get caught wooing this female? What if I come back with some meat and give it to Woog? Will that get me a little troop cred? Is it worth the trouble?" What–iffing not only allows you to step into someone else's place, it gives you the magical power to step into the future and prepare for what might happen next. Or to create parallel universes where you can run multiple scenarios about taking this action or that one, then weighing them to see which might lead to a better outcome; something we call, among other things, imagination. As I write these words, my mind is what–iffing furiously about which are the best scenarios to run by you to make the points I want to make. Being able to say to ourselves, "If this, then that," builds the infrastructure of human creativity (more on this a bit later in the book).
Scenario building is pure make–believe, a return to that time in our childhoods when we used to say, "Let's pretend..." It is a way to create and explore possibilities that don't exist in the real world, but live completely in the universe of our minds, and nowhere else. A remarkable thing. _Misfired Mind Reading_ Mind reading, and the abilities that make it possible, can also misfire. (A lot of evolutionary innovations do.) It can make us chronic worriers, stuck in endless loops of stomach–churning scenario building, erecting realities that aren't real at all while we suffer through them as if they were; percolating endlessly on this or that possibility and applying it to bosses, significant others, children, and just about every decision we make. We may be the proud and mighty scenario–building animal, but we also invented nail–biting, hand–wringing, and acid reflux. Sometimes imagining what someone else thinks can be absolutely paralyzing—how your mother, or the vicar, or even another version of yourself might view your first sexual encounter, for example. Whatever the uses to which we personally put the mind–reading/scenario–building powers that our ancestors developed, this much is beyond debate: no brain in nature had ever before seen its like. This is, neurologically speaking, inconceivably difficult to pull off. It demands billions of neurons and requires that the newest _and_ most ancient parts of the brain be wired deeply to one another. Valerie Stone's experiments with R.M. illustrated this. R.M., remember, had damaged his amygdala, whose evolutionary roots are reptilian; the temporal pole, which is part of the limbic/mammalian brain; and the orbitofrontal cortex, which is among the newest cerebral additions to have evolved. 
Our ancestors were becoming chimeras, of sorts, creatures built out of the spare parts of both ancient and modern evolutionary mutations, an amalgamated animal, both ancient and new, self–aware, yet driven by unconscious, subterranean impulses. In a phrase, we were becoming really complicated. We can't know for certain when the rudimentary ability to climb inside another's mind evolved. Such abilities are almost certainly not the result of a lone adaptation. More likely they resulted from scores of suites of adaptations that surely took an immense amount of time to emerge. One point two million years ago the robust human lines had seen their last days. It was a good run, but the evolutionary path followed by the gracile apes, unlikely as its success was, had won out. Yet, who would have predicted it? Not even a what–iffing creature. Larger brains forced earlier births, earlier births lengthened and complicated childhoods that created minds increasingly shaped by personal experience, which in turn made the mind more creative and adaptable. Brains over brawn. And as if this wasn't messy enough, now longer childhoods were producing people who were genetically similar, but behaviorally unique; every troop was loaded with highly complicated individuals, each with her own talents, psychological baggage, foibles, and agendas. Yet they bonded, despite their individual needs and selfish competitions. An odd, astonishing species, or group of species, if ever there was one. A mix this complicated would still seem doomed to failure. How do you weigh and balance all of these competing needs, and manage the increasing complexity of your own motives, let alone the motives of those around you, while simultaneously avoiding alienating the allies you need? Depending on the situation, did "might make right," was it better to be conciliatory, or was deception the best path? It all had to be worked out, and apparently it was, otherwise you and I would not be here.
Out of this complexity, these competing needs, a moral ape was born, made possible by the early childhood that had shaped our gracile ancestors. They had managed to find strength in numbers, and a workable code of conduct. It may not have been perfect, but they were successful enough that they had begun to take the species, several species actually, global. They had not only become a moral ape, they had evolved with an irrepressible case of wanderlust.

## Chapter Five: The Everywhere Ape

_Now my own suspicion is that the Universe is not only queerer than we suppose, but queerer than we can suppose_. —J. B. S. Haldane

Our species is the most itinerant and restless animal on the planet. That's a simple fact. You will find polar bears on the ice sheets of the Arctic, silverback gorillas in the mountains of central Africa, reindeer in northern Europe, tigers in India, and penguins in the Antarctic, but you will find humans in all of those places and more. We are the only mammal that inhabits all seven continents, and it doesn't matter to us how hot, how high, how humid, or how frigid the geographies are in which we live. We have even found our way, God knows how, to thousands of remote islands around the planet that amount to no more than oceanbound flecks of dirt that your eye could easily lose looking at a decent–size map—Easter Island, for example, whose nearest inhabited neighbor is more than a thousand water–soaked miles away. We are everywhere. But we weren't always so. At one time we were almost nowhere. How we went from a few locations to many makes a fascinating story. It also says a lot about who we are. At the tip of South Africa where the Indian and Atlantic Oceans meet lie shores of basalt rock that look out on an expanse of cold and turbulent water that doesn't see another shoreline until it meets the ice cliffs of the Antarctic more than a thousand bracing, windblown miles away.
If ever there was a place you could call the end of the earth, this is it. Seventy thousand years ago, a few hundred human beings lived here; anatomically modern humans or AMH, as anthropologists like to call them. They were like us in every way it seems, except for the technologies they used to survive. They were bereft of cell phones and SUVs but looked like us and carried around the same evolutionary and psychological baggage we do. In those days, they were also the last remaining members of our species, a tiny enclave of humanity twisting precariously at the end of an evolutionary thread, rubbing elbows with extinction. One hundred and twenty thousand years earlier this species, one that would later name itself _Homo sapiens_ , had come into existence, a new branch of the human family, split off from an earlier primate that had arisen on the Horn of Africa, where so many other varieties of humans had emerged. This particular tribe, the one that lived along Africa's southern shore, were gracile, built for running, and clever hunters. Because of their high foreheads, prominent chins, and brains weighing more than three pounds, triple the size of those of the first upright walking primates from which they had descended, they looked far less apelike than their predecessors, though you could certainly see the family resemblance. They were inventive, too. Not only did they use fire, they controlled it, cooking food with it and applying it like a tool to harden and shape an impressive assortment of other cleverly fashioned gadgets—knives and axes more advanced than any used before. They may have been at the ends of the earth, but this was the Silicon Valley of its time, a hotbed of innovation. They had also developed an extremely powerful way to communicate—words. Fortunately for these last survivors, the land was Eden–like. Not tropical, but temperate and sustaining. 
What it lacked in the big game that walked the northern savannas, it made up for with lush stores of fruit, nuts, and beans, and an inexhaustible supply of protein–rich seafood. Life must have looked very good. After all, deprived of CNN and the Weather Channel, they had no way of knowing they were the last representatives of their species, nor that much of the world and the continent beyond their small slice of paradise had been under climatological assault for thousands of years. A harsh and unrelenting ice age had already wiped out others like them farther north. Europe, Asia, North America, and the Mediterranean had been buried for millennia beneath uncounted miles of snow, howling winds, and frozen seas. Oceans of water were now locked in enormous ice sheets, leaving seas more than 225 feet shallower, and the rest of Africa chilled and bone–dry. This was the apocalypse. It was possible that they were not entirely alone. Tiny pockets of other modern humans may have survived the ice epoch in the north and west of Africa, but no one can say with certainty. No, this was probably it. Just a few hundred people dug in, the current crop of an extended family who had colonized the area as many as fourteen thousand years earlier. One catastrophic event, a plague, a typhoon, or a freeze, and that would have been the end of _Homo sapiens_. And none of the seven billion of us who exist today would ever have been the wiser; in fact we would not have "been" at all. We came that close to being snuffed out. That, at least, is how paleoanthropologist Curtis Marean sees it. It's a sobering thought, the idea that we were closer to extinction than today's mountain gorillas, and not much better off than India's dwindling prides of tigers. Plenty of scientists dispute Marean's scenario. It wouldn't be paleoanthropology if they didn't. 
Our past is a messy business, and today's efforts to understand how we came into existence, based largely on the ossified leavings found in the world's dust and rock, have been something like a blind man trying to describe the details of a football stadium by feeling his way through it. If we didn't have ourselves around to inspect, we would know more about _Homo habilis_ and Neanderthals than we know about _Homo sapiens_. You would think that, being among the most recently arrived branches of the human family, we would be knee deep in the evidence of our own existence, but that's not the case. Outside of Africa, fossils of early _Homo sapiens_ are nearly nonexistent. Thankfully, we have been learning to read the path of our evolution in our DNA (see sidebar "Genetic Time Machines," page 76), and that, together with some meager findings in the fossil record, has illuminated the story of our emergence at least a bit. The story goes something like this. Between 160,000 and 200,000 years ago the first anatomically modern humans emerged, probably near Ethiopia. (But there is anything but universal agreement on this.) Among these was a woman, now called the matrilineal "Eve," the "mother" of the human race, though that term is a little misleading. Eve wasn't herself the first modern human, and unlike the biblical Eve, she wasn't the only woman alive two hundred thousand years ago. She was, however, the sole woman alive then that still has descendants today. Other modern human women lived during her time and before it, but she is the one to whom every living human today is related. So it's more accurate to say she is our "most recent common ancestor," at least when looking at mitochondrial DNA as a marker. _Genetic Time Machines_ When it comes to DNA, the only certainty is change. It's restless.
As DNA alters, so do genes, and when genes mutate and unwittingly express new traits, their accumulated mistakes eventually result in entirely new species—by some estimates, thirty billion separate forms of life over the past 3.8 billion years. Despite the messy nature of genetic mutations, they create markers whose rates of change are startlingly predictable. These signposts enable scientists to calculate, with reasonable, but far from perfect, accuracy where in the evolutionary picture your particular branch of the family tree diverged from other branches. Two primary kinds of DNA allow scientists to pull off this neat trick. One is the DNA of organelles that live within each of our cells, called mitochondria. Groups of mitochondria exist within each of the fifty trillion cells that make you and me possible. In an evolutionary partnership agreed to some two billion years ago, some single–celled bacteria took up residence in other single cells, but refused to give up their DNA in the bargain. The relationship has remained unbroken ever since. Today, in exchange for the protection and nutrition they receive living within other cells, mitochondria create the chemical energy needed to power nearly every plant and animal on earth, including us. The second kind of DNA is the nuclear variety, the sort that belongs directly to you and me and within whose cells those mitochondrial guests live. It is now possible to take a fossil of our ancestors, closely scan the DNA trapped within (usually mitochondrial because there is more of that than the nuclear variety), and, if the information is robust enough, compare it with samples of our DNA and see how different the two are. Then by comparing the markers—the average rate of mutations over time—we can estimate how deep in the past the two genomes were once identical and when they went their separate ways.
This is a little bit like standing on the limb of a tree and pacing off the distance between the branch you are standing on and the one from which it sprouted. Each pace provides an indication of how long ago you and other tree limbs separated. The ancestor that all humans share going back to _Sahelanthropus tchadensis_ (see The Human Evolutionary Calendar, page 7) would be represented by the tree's trunk. Each divergence, each limb, represents a new human species— _Homo habilis, Ardipithecus ramidus, Homo rudolfensis_ , and all the rest. Some lead to new branches, some don't. You can also imagine the mutations themselves as the landscape through which a kind of time machine can travel with the genetic markers as mileposts that indicate how far back or forward in time you have journeyed. Whichever metaphor you choose, this is how scientists can compare our DNA with a Neanderthal's and conclude that we parted ways from a common ancestor— _Homo heidelbergensis_ —200,000 to 250,000 years ago. Or how they have come to discover that Neanderthals and Denisovans both shared a bed with ancestors of ours whose offspring eventually made their way to Europe, Asia, and New Guinea, even though, especially in the case of the Denisovans, we have almost no fossils to inspect. A variation on this same technique (more often this time looking at nuclear DNA) makes it possible for scientists to track down the patterns and timing of our own global wanderings—when one group remained in central Africa, for example, but another headed north. Or when some members of that tribe made their way west into Europe and others branched off to Asia and the east. This is because our DNA has mutated as we have traveled the world, though not enough in the past 190,000 years to have sprouted an entirely new species. These mutations indicate where we lived, and when.
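The pacing-off metaphor above boils down to simple molecular-clock arithmetic: divide the fraction of sites that differ by twice the mutation rate (twice, because changes accumulate along both lineages after the split). A minimal sketch in Python, in which the mutation rate and sequence figures are invented round numbers chosen purely for illustration, not measured values:

```python
def divergence_time_years(num_differences, sequence_length, subs_per_site_per_year):
    """Estimate how long ago two DNA sequences were identical.

    Mutations accumulate independently along BOTH lineages after they
    split, hence the factor of two in the denominator.
    """
    per_site_divergence = num_differences / sequence_length
    return per_site_divergence / (2 * subs_per_site_per_year)

# Hypothetical example: 40 differing sites in a 16,000-site mitochondrial
# region, with an assumed rate of 5e-9 substitutions per site per year.
t = divergence_time_years(40, 16_000, 5e-9)
print(round(t))  # -> 250000
```

With these made-up inputs the estimate lands at a quarter of a million years, the same order of magnitude as the human–Neanderthal split described above; real analyses also have to correct for multiple mutations hitting the same site, which this toy version ignores.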
Thanks to the genetic records all creatures carry within them, and thanks to the ability of computers to compare them, we are developing a clearer, if not pristine, picture of how much we have in common with our fellow humans, when we parted ways with them, and how we have, ourselves, managed to make our way from a couple of pockets in Africa to nearly every spit of land earth has to offer. If Marean's theory is correct, the first "moderns" that arose in the Ethiopian plateaus must have spread out west and south during a population explosion shortly before the punishing ice age, known today by the memorable meteorological term Marine Isotope Stage 6 (MIS6), began to take its devastating toll. This climatic shift sabotaged life everywhere, as we will see, and may have been further boosted by the largest known volcanic eruption in two million years, on Sumatra, Indonesia, which blasted ash into the stratosphere, causing a "volcanic winter" that rapidly accelerated the cooling of earth. (See "Killer Explosion?" sidebar, page 80.) Other genetic studies indicate that sometime between one hundred thousand and eighty thousand years ago, three lines of _Homo sapiens_ made off in separate directions from East Africa. One headed south and became the ancestors of today's Central African Pygmies as well as the Khoisan (Capoid) peoples of South Africa. A second genetic group migrated to West Africa, but also departed the continent by way of the Arabian Peninsula. Many West Africans are descended from this branch, and so are many African–Americans and South Americans who, millennia later, were transported across the Atlantic in slave ships. The third branch remained on the Horn of Africa, but some of its members branched northwest and north. From these migrants descended the people who today live along the Nile Valley, the Sahara, and the Niger River, which flows through, of all places, Timbuktu in Mali into the Gulf of Guinea.
Some of these people also found their way out of Africa. Ten percent of today's Middle Easterners have the blood of this third group running in their veins. Given these apparently enthusiastic migrations, you might think that as a species we were finally off and running, but there was that wintry climate that was setting in. By seventy thousand years ago it was in full, frigid swing and had begun to systematically rub out life everywhere on the planet. (We are living right now in what scientists call a slim "interglacial" period of this ice epoch, a bit of information that is itself chilling.) Genetic studies confirm that during this time _Homo sapiens_ underwent what scientists call a "bottleneck event." That is to say, we had been worn down to something like ten thousand total adult members, a troop or tribe here or there, scraping out a living, probably along ocean shorelines and receding lake beds. Ice ages rarely result in cold weather in Africa. Instead they parch the land, turn rivers into dry wadis, evaporate lakes, and wipe out the sustenance each provides. During some of these periods, the Nile itself was reduced to swamp and muck. Even today the continent is filled with ancient lake beds scarred by desiccated mud cracks that testify to exactly how arid the landscape had become. Whichever humans survived the first waves of these droughts, they had tools, but little else, and when water disappeared, so did the other animals, nuts, tubers, and fruit that supported them. Being at the top of the food chain did them little good once the chain itself was demolished. Dramatic as the scenario is, it's unlikely that the small tribe at Marean's Pinnacle Point represented the very last bastion of Earth's _Homo sapiens_. More likely they were among dozens of tribes that the changing climate squeezed into small pockets throughout the continent, each winnowed down until its members must have wondered daily how much longer they might make it.
By this time early forms of trade had undoubtedly developed, but the increasing isolation would have made it more difficult to stay in touch, share resources, or help one another out. Eventually, however, the climate relented. For three million years—an appalling length of time to us, yet less than one thousandth of the planet's life—Earth had been undergoing some of the most erratic climate fluctuations it had ever seen, shifting from cold to warm and dry and wet every few thousand years. To make matters worse, for three hundred thousand years Earth's orbit around the sun had been elongating. That led to even deeper and more frequent climatic swings. But finally, fifty thousand years ago, this particular climatic pendulum began to swing in the opposite direction, and just as the ice had once relentlessly crept from the polar caps to endanger the species at lower latitudes, it now casually reversed itself, and Africa grew warmer and wetter. The sparse pockets of the human family, like Marean's survivors living at the tip of Africa, and elsewhere here and there, again found themselves blossoming and fanning out. Isolated tribes, separated by heat and desert and their own reduced numbers, began to flow back into one another, setting the stage for a remarkable migration that changed the world. _Killer Explosion?_ Seventy millennia in the past, long before the pharaohs of Egypt ruled the Nile, even three hundred centuries before the cave painters of Lascaux began doing their remarkable work, the most powerful volcanic explosion to rock the planet in two million years shattered the island of Sumatra, Indonesia, in an area now known as Lake Toba, and nearly wiped out every _Homo sapiens_ on Earth. Or at least it may have. The explosion of rock, ash, and hot magma was so violent it's difficult to find the words to characterize its power. Scientists have coined multisyllabic terms like megacolossal and supereruptive. 
It was twice as powerful as the largest eruption in recorded human history, which took place in 1815 at Mount Tambora in Indonesia. Historians called the twelve months that followed it the "year without a summer" because the globe–circling debris from the eruption so severely cooled the planet. It is precisely this kind of climatic effect that makes the Toba explosion so interesting. The evidence indicates that it spewed between twelve hundred and eighteen hundred cubic miles of the planet into the sky. Some scientists believe that together with an ice age that was already in the making, Toba may have accelerated cooling and drying worldwide, and driven global temperatures down as much as 27° F. This, in turn, dropped mountain snow and tree lines by nine thousand feet, plunged the planet into a six–to–ten–year volcanic winter and possibly an additional one thousand–year cooling episode. As you might imagine this would have made life for the human species that were alive at the time even tougher than it already was, especially if they were living west and downwind of the eruption. The immediate effect would have been to drop uncounted cubic tons of choking volcanic ash on everything for thousands of miles around. Studies show that an ash layer a half foot thick draped all of south Asia, and quickly blanketed the Indian Ocean, and the Arabian and South China seas as well. A layer of ash this thick would have decimated plant and small animal life on land and sea for years, catastrophically rattling the food chain and every creature that relied on it for survival throughout Asia and into Africa. Recent fossil and genetic studies suggest that the populations of gorillas, chimpanzees, orangutans, and even cheetahs and tigers dropped to near extinction levels. Neanderthals in Europe and west Asia were apparently spared the direct effects of the volcanic fallout. 
_Homo erectus_ , living at the time in east Asia (and possibly Australia), and _Homo floresiensis_ , the "hobbits" who lived close by, seem to have escaped because they were upwind of the debris. They may all, however, have suffered at the frigid hands of the explosion's longer–term effects. The humans who seem to have been hit hardest by the remarkable eruption were our ancestors, pockets of _Homo sapiens_ scattered throughout Africa. As the debris spread, some scientists believe the eruption's cooling effects nearly wiped us out, a genetic coup de grâce that would have made this book, and you and me, entirely impossible. It's unlikely that Toba by itself can explain the sudden whittling of our ancestors around this time in prehistory, but it certainly didn't help. Except in one surprising way. By isolating _Homo sapiens_ settlements and placing even more survival pressure on them, it may have led to hardier and more adaptable men and women. There could be something to that hypothesis because while other primate species seem to have slowly rebounded, _Homo sapiens_ not only bounced back, its population exploded and began to move quickly into Asia, Europe, and the remainder of the planet. Mitochondrial genetic studies tell us that around this time one small group of modern humans living in Ethiopia or the Sudan, armed with an assortment of high–tech tools—bone and ivory hand axes, long spears, and fire–hardened stone knives mostly—headed northeast, then over the Red Sea into Yemen on the Arabian Peninsula. Undoubtedly, being human and curious, several waves of our direct ancestors ventured from their mother continent into the Mideast during this time. Modern human migration wasn't likely one solitary foray northward. During cold oscillations, seas everywhere would have grown shallower, including the Red Sea and the Gulf of Aden. Right where these meet at a place with the dramatic name the Gate of Grief (Bab el Mandeb), the continents of Asia and Africa nearly kiss.
Even today the distance between them is slender: no more than twenty miles of seawater divides the huge continents. But during frigid climatic swings the Red Sea sometimes dropped more than 210 feet, narrowing the straits by several more miles, nearly attaching Africa and Asia like two continent–size Siamese twins. Though there is no evidence the seaway ever completely dried out, climatologists believe that a chain of small islands sometimes emerged between the immense landmasses. The shorelines of those islands would have made excellent places to fish and eat before moving northeast to the next island. In time, the traveling tribes inevitably made their way, perhaps in small boats or on rafts, island by island, to the underbelly of Asia. This feat, however long it ultimately took, liberated our kind from the boundaries of Africa to head off to every continent and landmass the planet had to offer. We weren't the first to find a way to Asia, but this migration, or series of them, would, in the astonishingly short space of fifty thousand years, utterly revamp an entire planet. Once on the mainland of Asia, modern humans began to ripple outward. Some bent east, hugging the shorelines of Yemen, Oman, Iran, and Pakistan as they headed toward India, and others migrated north through Mesopotamia (Iraq). There, this second group split again, some working their way from Turkey through the Danube corridor, others sticking closer to the Mediterranean coast as they headed toward Greece and the boot of Italy. The splitting branches of humans grew like a bush. The fossil and genetic evidence tells us that within five millennia our kind had settled the ancient continents of Sunda and Sahul, landmasses that exist today as the oceanbound islands of Indonesia and New Guinea and the continent of Australia. Forty–five thousand years ago, however, the Indian Ocean was shallower, and these exposed shelves of land were separated by straits no more than sixty miles at their widest. 
This means that inside of ten thousand years, wandering _Homo sapiens_ doggedly walked from the eastern Sahara to the plateaus and mountains of western Australia. Meanwhile, other branches of our kind that had radiated into Mesopotamia and moved west spent the next fifteen thousand years settling much of Europe, as far away as Spain and well north of the Alps. Still others expanded north and east into Asia, across the highlands of Tibet and the steppes of Russia until they had nearly reached the top of the world to cross the land bridge between Russia and Alaska. From there they made for Canada, into North America and then southeast to the Meadowcroft settlement just outside Pittsburgh, Pennsylvania, sixteen thousand years ago, and finally into Central and South America to become, someday, the Incans, Mayans, Aztecs, Hopis, and scores of others. The settlers of Meadowcroft predated European explorers by a mere 155 centuries. The upshot of all of this restless meandering was that after millions of years of evolution, in the tiny space of fifty millennia, the newest addition to the human family tree had wandered into and settled two Americas, all of Europe, Asia, Africa, Southeast Asia, even Japan. Only Micronesia and the remote islands scattered across the Pacific, places such as Tahiti and Hawaii, remained uninhabited. It took another two thousand years, as civilizations in Egypt, Mesopotamia, China, and India were rising, for small groups of dauntless explorers to travel across thousands of miles of open sea to populate, for reasons we have yet to fathom, those tiny specks of land. Though the time scales we are talking about—mere tens of thousands of years, rather than hundreds of thousands or millions—seem relatively short, they still dwarf the length of recorded human history. Civilizations in India, China, and Egypt have come and gone within thousands of years. Alexander's empire disappeared within a few generations.
Even Rome ruled most of Europe, parts of the Middle East, and North Africa for fewer than one thousand years. Ninety–five percent of the time that our particular brand of humanity has existed remains mysterious and almost entirely unrecorded. Nevertheless a great deal happened and a great deal of territory was covered. Despite the mountains of research (and years of heated debate) that focus on the departure from Africa of our species, it's important to remember that we were not the first humans to forsake the continent for other parts of the planet. But before we get too deeply into the specific and considerable wandering of earlier humans, it might be best to clarify some recent, and surprising, rearrangements of our family tree. For decades paleoanthropologists generally agreed that _Homo erectus_ was a single species from which we evolved directly, and creatures we now know as _Homo ergaster_ were assigned to the _erectus_ line. But it now seems, at least by the lights of an increasing number of researchers, that _ergaster_ represents a species of its own, and the primates we have been calling _erectus_ are actually a potpourri of many human species whose bones simply haven't turned up in great enough numbers to warrant their own names or branch on the family tree. Paleoanthropological sentiment now seems to be leaning toward our being descended, a few species removed, from _ergaster_ , while _erectus_ , and other humans who left Africa and wandered east, became one or several other species that eventually died out. Whether or not we are descended directly from them, the species we collectively refer to as _Homo erectus_ rose up and began to fan out to the mid and far east in small, tight brigades as long ago as 1.9 million years. Very adventurous. To place some temporal perspective on this, consider that this is one million seven hundred thousand years before we _Homo sapiens_ even showed our face.
Or put another way, eight hundred times the twenty centuries that have passed since Augustus Caesar ruled the Roman Empire. _Homo erectus_ , and the cousin species they left behind in Africa, were wickedly smart for their time and armed with the greatest evolutionary advancements of their day. Not bigger fangs, sharper claws, or stronger bodies, but a knack for thinking on their feet, working together, and adapting on the fly. As a group they were tall, slim hipped, and built for long–distance running in the unforgiving sun of the equatorial lands where they arose. Their elongated limbs exposed the maximum amount of body surface to the air to help keep them cool. They were probably nearly hairless by this time, able to perspire much the way we do (chimpanzees have about half the sweat glands we do, mostly on their hands and feet, which aren't covered with hair), and outfitted with a complex network of blood vessels in their rather large heads, which helped efficiently vent heat and thereby avoid death by heatstroke. They were more adept toolmakers than _Homo habilis_ before them and carried, by all accounts, an entirely new kind of technology with them everywhere, the same way today we tote around cell phones—the Acheulean hand ax, which was something like a Paleolithic Swiss Army knife. Soon they would tame fire. Before long, wave after wave of _Homo erectus_ were settling parts of Arabia, China, India, even Indonesia, though they apparently never made it as far as Australia. Perhaps they weren't quite gifted enough to develop the advanced seafaring skills they needed to make it that far, or maybe the erratic climate prevented them. Or perhaps many tried, but none succeeded, or maybe they did succeed but their remains have yet to be found. Whatever the case, they wandered and meandered and migrated hither and yon with a far less sophisticated toolkit than _Homo sapiens_ did when they exited Africa 1.7 million years later. But migrate they did.
One branch of the _Homo habilis_ family (or perhaps an early version of _Homo erectus_ ) managed to travel all the way to Dmanisi in the Republic of Georgia 1.78 million years ago. Once again the weather helped. Between 1.9 and 1.7 million years ago Earth's climate was enjoying a respite between glacial periods, and these humans might have made their way up what would have then been a lush Nile River valley and then east across the narrow straits of Suez to the Arabian Peninsula before heading north of the Black Sea. Another branch apparently walked through a "green Sahara" filled with tall grass, brimming with wildlife, then made their way to current–day Algeria to settle at a site called Aïn Hanech. Something more than simply human wandering was afoot here. The lands where these creatures were settling not only stretched from Indonesia to North Africa, but represented a widening variety of environments. They ran the gamut from marshes and streams to seacoasts and wooded mountainsides. The creatures were not only putting more distance between themselves and their home continent, but between themselves and the dictates of their genes. They were using their brains and creativity to adapt. While the other great apes had for millions of years been retreating to increasingly smaller forests, sticking to familiar environments where they were comfortable and for which they were genetically suited, these ancient humans, armed with their tools, clothing, and that magical thing called fire, were adapting their new environments to them, not the other way around. Between 1.3 million and a million years ago, entirely new species began advancing into the cold climates of Europe, trudging as far north as the British Isles, from (scientists suspect) northwest Africa across the Straits of Gibraltar.
These were a species paleoanthropologists call _Homo antecessor_ , a toolmaking cave dweller with a brain three quarters the size of ours and a more human and less simian face whose remains were first discovered at a railroad cut in Sierra de Atapuerca, Spain. Another species descended from _antecessor_ , known as _Homo heidelbergensis_ , may also have made his way to this same area. More on him soon. But in either case each represented newly minted humans, most likely descended from _Homo ergaster_. Interesting evolutionary events had been unfolding on the plains of Africa for the past several hundred thousand years, making the story of our own emergence both more interesting and more murky. While brigades of various hominin explorers were fanning out in every possible direction, the root species (including _ergaster_ and _erectus_ ) were continuing to diversify back on the home continent as well. Around seven hundred thousand years ago, an altogether new, and crucially important, creature called _Homo heidelbergensis_ stepped out of the mists of time. The remarkable thing about _heidelbergensis_ , so named because the first specimen was found near Heidelberg, Germany, is that it is the species from which both we and Neanderthals descended. That news has utterly rearranged the human family tree. Until recently it was thought that we could not possibly share anything much, especially our ancestry, with these tough, burly creatures who passed into extinction some twenty-five thousand years ago. We were, according to common wisdom, directly descended from _Homo erectus_. Yet, it turns out, if not for _heidelbergensis_ , neither we nor Neanderthals would ever have walked the earth. The creatures, the people, who later evolved into _Homo sapiens_ and Neanderthals began to part ways, genetically speaking, from _heidelbergensis_ almost as soon as _heidelbergensis_ itself emerged.
Some members of the species remained on Africa's horn (and are sometimes referred to as _Homo rhodesiensis_ ), but others, with a more extreme case of wanderlust, moved northwest across a new green Sahara to Gibraltar and then into Europe, following in the footsteps of _Homo antecessor_. The archaeological evidence suggests that these nomads became the first humans to build shelters, probably of rock and wood, and hunted big game, like Irish elk, mammoths, and European lions, with long wooden spears. These inventions served them well in the colder climates they were dealing with throughout Europe, especially when glacial ice descended. They had large brains—1100 to 1400 cc, as large as ours—and by this time were easily the brightest of the planet's primates. Even more than _antecessor_ , the shape of the outer and inner ears of these people indicates that they could make fine differentiations between sounds, a trait that leads some scientists to speculate they used sophisticated speech of some kind. Dental wear on teeth on the right side of their mouths means they may have been using their mouths as a "third hand," clenching tough food, tools, or clothing on their right as they worked on them. This could mean they were right–handed, and right–handedness is associated with language and lateralization of the brain. Thin soup as scientific theories go, but something worth chewing on. The original _heidelbergensis_ was, it seems, thick boned, huskier and stronger than _erectus_ , who was taller and slimmer. Not that at six feet he was short, but with a frame that easily supported two hundred or more pounds, he was built like a bouncer, or college fullback, a trait that mystifies scientists somewhat since most African primates tended to be long of limb, the better to expose more of their body to the air, a form of natural air–conditioning.
_Homo heidelbergensis_

The European branch of _heidelbergensis_ maintained and built upon these traits as they evolved into Neanderthals. The colder climate favored thicker, stockier creatures that exposed less of their bodies to the air. (Inuit people and the native residents of Siberia also show these same traits as ways to conserve heat.) The strength and endurance these bodies were apparently blessed with were certainly assets as they dealt with a punishing climate. Bulkier, stronger, physically tougher individuals would also have been favored when it came to hunting big game. As far as we know, these people and the Neanderthals that followed them did not throw their wooden spears when they hunted. Instead they used them to repeatedly jab their prey at close range, a dangerous way to shop for dinner. You can imagine that this not only took courage and strength, but a body that could survive being tossed around by a wounded and enraged lion, mammoth, or woolly rhinoceros and still bounce back. Paleoanthropologists have found evidence all over the world of the beatings Neanderthals withstood. Skeletons found from the Middle East to Western Europe have revealed ugly injuries to their ribs, spine, lower and upper legs, and skull. What's more, these injuries usually healed and there are no signs of infection. More than once scientists have noted the injuries resemble the kinds of hammerings that rodeo riders sustain from big, bucking animals. Except in the case of Neanderthals they weren't riding bulls or horses, they were hopping on the backs of woolly rhinos, aurochs, or elk to jam their long spears in one killing blow through their backs behind their necks. From time to time, of course, the animals they hunted might not have taken kindly to this. Despite the beatings, Neanderthals survived over the next half million years and spread throughout Europe, following the retreating glaciers north when temperatures moderated, and heading south when the glaciers returned.
In time they became the dominant primates in Europe and settled it from the British Isles to the shores of the Black Sea. For the African branch of the family, life was challenging, too, but for entirely different reasons. Continuing increases in climate fluctuation meant surviving waves of crippling droughts. But their large brains, their tough bodies, and their increasingly strong social structure saw them through. In the end, both the African and European branches of humanity outlasted several climatic swings, until finally, around two hundred thousand years ago, they had completed their transformation into two entirely different, but enviably advanced, species—the first _Homo sapiens_ and the first Neanderthals. If you hold the fossilized skull of a Neanderthal in your hand and closely inspect it, you might find it difficult to believe we share a common ancestor, but time, climate, and random chance are powerful change agents. Their brow ridges were thick, their heads longer, shaped more like a watermelon than a cantaloupe, like ours. Their chins were recessed, or more accurately the middle part of their face protruded more than yours and mine and looked more muzzlelike around the mouth and nose, which was large and fleshy and well rigged for warming the cold northern air they breathed. And they were stouter, bulkier, barrel–chested. We were slimmer than they were, but not so much because _Homo sapiens_ in Africa had grown more gracile over time; we simply didn't accentuate the robust traits we had inherited from _heidelbergensis_ the way Neanderthals did. In fact, when the two species later met in Europe six hundred thousand years or so after the original _heidelbergensis_ branches split, _Homo sapiens_ were probably on average taller, if not stronger, than their Neanderthal cousins. Climate made Neanderthals even huskier than their big–bodied ancestors.
While their collarbones were long, their broad shoulders curved inward around a chest that was both broad and deep as if to better husband their body heat. Their fingers, which must have nearly always been exposed to the cold, grew stubbier and rounded at the tips, an antidote to frostbite. Their big upper bodies balanced on a pair of bowed thighs above Brobdingnagian knees and shortened shins. But this did not mean they walked hunched over, apelike. They didn't. Like us they stood fully upright and could walk and run just as well as we do. They were simply a human species optimized for the cold, remarkably strong and outrageously intelligent. And given their longevity, astute and wise in the unforgiving ways of survival. By the time we and Neanderthals had emerged, at least four (and probably more) intelligent, self–aware human species were still living on planet Earth. (See sidebar "The Newest Members of the Human Family," page 90.) Each was colonizing settlements spread sparsely from Britain to Indonesia, and from the Balkans to the southern tip of Africa. We do know that _Homo antecessor_ and _heidelbergensis_ , and their precursors _ergaster_ and _habilis_ , had by now gone the way of the dinosaur, but _erectus_ , or some version of it, still roamed Asia while _Homo sapiens_ made its itinerant way around Africa, and Neanderthals ruled Europe and west Asia. There were no census takers fifty thousand years ago, so we don't know how many humans were living on the planet, counting members of every species, though genetic studies may soon illuminate this; a few hundred thousand, perhaps, certainly less than a million. The generally accepted view is that we _Homo sapiens_ bided our time in Africa until we launched a concerted worldwide migration beyond the Dark Continent beginning about this time. This is called, not surprisingly, the Out of Africa theory. 
According to this hypothesis _Homo sapiens_ displaced and then eventually _re_placed all other human species that had arisen over the long epochs that preceded the post–African travels of _their_ ancestors, whoever they might have been. _The Newest Members of the Human Family_ As I was writing this book, various teams of scientists around the world announced the discovery of four entirely new species of humans, an indication of exactly how quickly the field, and the human family tree that reflects it, is changing. (See The Human Family Tree, page 12.) Three of these were discovered the old–fashioned way—fossilized bones stubbornly excavated from their hiding places in the ground. Of those three, two lived some time ago— _Australopithecus sediba_ and _Ardipithecus kadabba_ —species that roamed Africa two and four and a half million years ago, respectively. From these remains paleoanthropologists have been able to develop some fairly deep insights into these creatures' anatomies and lifestyles. Based on four partial skeletons found in South Africa, _sediba_ illustrated an emerging theme in paleoanthropology: there was a good deal more variation in ancestral humans than previously thought, and therefore lots of room to debate where they fall in the family tree. _Sediba_ seems to have combined some old australopithecine traits and some traits of early _Homo_ species. Its brain wasn't terribly large (about 450 cc), but hand, pelvis, and leg bones indicate it may have been an early tool user and well on the way to walking upright more often than not. Yet fossilized plants found with some specimens tell scientists that _sediba_ lived in forested areas as well as open ones and often ate fruits like their chimpanzee cousins. Researchers read these clues in different ways. Some argue that _sediba_ was a precursor to the _Homo_ species of humans that followed.
Others doubt this is possible because that branch had already sprouted on the human family tree a half million years earlier with the emergence of _Homo rudolfensis_. _Ardipithecus kadabba_ is an ancestral human and lived as many as three and a half million years before _sediba_ in Ethiopia. (Some debate this age and set it one million seven hundred thousand years earlier.) He is ancient enough that his big toe was still designed for grasping tree branches, though other aspects of his anatomy indicate he moved on two feet on open ground. His brain was about the size of a modern bonobo's at 300 to 350 cc, but his smaller incisors indicate, at least to some paleoanthropologists, that he and his fellow creatures were more socially cooperative than chimps. Male chimps have large incisors often used when battling for the attentions of the troop's females. This makes sense if he was spending more time in the more dangerous open grasslands where he would have to rely more on others in the troop for survival. The third and fourth species come from a different time and different parts of the world than the first two. Both dramatically reinforce the emerging reality that our direct ancestors coexisted with a variety of other extremely sophisticated humans throughout the world until very recently. Just a few years ago an assertion of this kind would have been considered heresy in the field of human evolution. Each of these species lived when we _Homo sapiens_ did, and DNA evidence indicates that at least one also mated with us, and with Neanderthals. Human species embraced one another, it seems, in more than a metaphorical way when they had the chance. Of these two the most recent discovery was announced in March of 2012, and because of its novelty remains controversial. The fossils haven't yet acquired a scientific classification.
Instead researchers call their find the Red Deer Cave people, humans, but not likely _Homo sapiens_ , that lived in south central China, north of Vietnam, as recently as 11,500 years ago. That these people were setting up camp not long before _Homo sapiens_ had made their shattering transition from hunting and gathering to agriculture is one of the aspects of this discovery that has anthropologists both giddy and astounded. The astonishment, however, only begins there. The fossils reveal that these people looked a little bit like us but also like more ancient humans. They have our rounded brain cases, less sloped than Neanderthals', but still retain thick, simian–style brow ridges. Like ours their faces were flat and tucked under their brain, but their chin, though the jaw juts forward, isn't squared off like ours. And strangest of all, scans of their brain cases indicate that they had modern frontal lobes, but archaic parietal lobes, which sit farther back in our brain. It makes one wonder if their reality was different from ours, and if it was, how? So where did these remarkable people come from? Scientists have speculated along three lines: They might have been descended from a group of _Homo sapiens_ that departed Africa earlier than generally thought and survived and evolved in isolation. They may truly be an entirely different human species, like Neanderthals, people who evolved from an earlier branch of the human family tree, _Homo heidelbergensis_ or _Homo erectus_ , perhaps. Or they could be hybrids: _Homo sapiens_ that mated with archaic humans who were also living in south China, something that might help explain their unusual mix of features. The fourth and perhaps the most intriguing species recently discovered left behind almost no evidence of its existence; no clues about how it looked, what tools it used, or where it came from; hardly even a bone. Like the Red Deer Cave people it has also not yet been assigned a scientific name.
Instead researchers refer to this species as the Denisovans because the two tiny fossils they _did_ leave behind—a wisdom tooth and the tip of a pinkie finger—were found in Denisova Cave in the remote Altai Mountains of Siberia. You could hardly imagine more meager leavings. Yet, after scanning the mitochondrial DNA within these tiny specimens, scientists at the Max Planck Institute for Evolutionary Anthropology managed to decode the creature's entire genome. And when they had, they realized that the juvenile to whom these paltry fossils had once belonged represented an entirely new human species that had hunted and settled in these mountains forty thousand years ago. Amazingly, Neanderthals, _Homo sapiens_ , and Denisovans each lived in the very same cave, though probably not at the same time. The DNA analysis also revealed that the peoples who became _Homo sapiens_ , Neanderthals, and Denisovans all shared a common ancestor a million years earlier. It's not yet known exactly which species that was, possibly _Homo ergaster_. It turns out that we share a genetic link with Denisovans in another remarkable way. In analyzing Denisovan DNA the scientific team compared it with living humans from six groups: the !Kung people of South Africa, Nigerians, the French, Papua New Guineans, Pacific Bougainville Islanders, and the Han Chinese. They were electrified when they found that between 4 percent and 6 percent of the genomes of the people of Papua New Guinea and Bougainville Island contain Denisovan DNA. Scientists surmise the genes were introduced to the islands when the hybrid descendants of _Homo sapiens_ and Denisovans migrated into Southeast Asia and later Melanesia. There is even some evidence that these descendants made their way to Australia and the Philippines. It's difficult not to be transfixed by these discoveries when you really take the time to think about them.
Like Neanderthals and _Homo floresiensis_ , they are species that fought and struggled and lived sophisticated lives for tens, even hundreds of thousands of years alongside our direct ancestors on the same planet we inhabit today. And if that isn't astounding enough, some even mated with our kind, contributing forever to our DNA. Were these aberrations or the norm? How many more species and hybrids might we find now that DNA analysis has opened so many genetic doors? If the Out of Africa theory is true (and there's little debate that it is, though it's becoming clear it wasn't quite this simple), most of the different varieties of humans, given their nomadic ways, must have crossed paths from time to time as they wandered into the edges of one another's territories. There is evidence of this in the rocky hills of Galilee, not far from Nazareth, the birthplace of Jesus Christ. In 1929, in caves that pock the hills of Qafzeh, Israel, two scientists found an ancient burial ground and, remarkably, the bodies of eleven anatomically modern humans. At first scientists thought the bones were no more than fifty thousand years old, but later, improved dating technology revealed that they were nearly twice that age, making them the oldest modern human fossils to be found outside Africa. As researchers continued rummaging through the site, they realized the bodies retained some of the more archaic features of their ancestors, but that they were culturally advanced. The ornamental shells and red, yellow, and black ocher paints they left behind indicated as much. So did the hearth and the burials of the bodies themselves, one of which included a mother and her child. Their tools, however, weren't as advanced as later _Homo sapiens_ '. The odd thing was that their tools instead resembled Neanderthal implements, yet they themselves were not Neanderthal.
The best guess is that somehow they, or earlier generations, had crossed paths with their northern cousins and borrowed some of their technology because it was better than their own. For all we know, these were early _Homo sapiens_ explorers, the Marco Polos and Vasco da Gamas of their day, wandering the Arabian Peninsula while less adventurous _Homo sapiens_ tribes remained on the home continent. By all accounts, their expedition wasn't terribly successful. The skeletons of red and fallow deer, small animals, aurochs, and some seafood shells show they gave colonizing the area a game effort, but their excursions never made it beyond the hills of Qafzeh and barely beyond the borderlands of Africa. There is no evidence that any of their kind ever made it north of this sector of the Middle East, not this far back in human prehistory. Maybe the explorers retreated home across the Red Sea straits, played out and tired; or maybe those eleven buried were put to rest by a last few survivors, or maybe they hung on for years in a small group like a prehistoric version of the settlers at Plymouth Rock or Jamestown, until disaster or disease at last carried them off. No one knows. In the same area, paleoanthropologists recently discovered that Neanderthal explorers likewise found their way south into Galilee thirty thousand years later (December 26 or so in the HEC), but they came from the north rather than the south. Did the fair–haired, bulky colonists run into lithe, dark–skinned people from across the Arabian straits? If so, did the Neanderthals do them in or run them off the peninsula back to Africa? "At that point," says paleoanthropologist Nicholas J. Conard of the University of Tübingen in Germany, "the two species are on pretty equal footing." 
The tools of both _Homo sapiens_ and Neanderthals would have been about equally advanced, and given what the Neanderthals had been facing in the wilds and weather of Europe the past 130,000 years, they would have been an extremely tough breed. Modern humans may not have been their match, not yet. Or perhaps they mated and their offspring dissolved into the continent and disappeared from the map. Either way, it seems that for another twenty thousand years or so _Homo sapiens_ ceded Asia to their barrel–chested cousins. Whatever happened, we do know that Neanderthals and _Homo sapiens_ eventually encountered one another in Europe sometime after our long–lost ancestors finally made their big push out of the Dark Continent. But what about the East and the _erectus_ bands that had begun heading off toward India and China and Southeast Asia two million years earlier? What became of them and their descendants, and did ours make contact with them? Scientists haven't uncovered direct fossil evidence of even a single meeting—no burial sites, artifacts, or bones—but in 2004 a team that included biologist Dale Clayton and anthropologist Alan Rogers, both working at the University of Utah, proved that our ancestors did indisputably have a close encounter with another human species in the Far East sometime around twenty-five thousand years ago. How could they know if there was no fossil evidence? Head lice. Like every other living thing on Earth, head lice have DNA. And like humans or finches or predatory big cats, different species of lice have different DNA. Anytime we find head lice on ourselves—outbreaks among schoolchildren are more common than parents would prefer—we find two kinds that are rarely separated. Despite nearly always being in one another's company, however, each initially evolved separately while dining on two different species of early humans. One of those species led to us. The other is extinct.
For those two species of lice to coexist today, both had to have come into close contact sometime in the past. By studying their DNA and then time–stamping the evolution of both strains, the Utah study concluded that at least one meeting took place sometime between thirty thousand and twenty-five thousand years ago in Asia. "We've discovered the 'smoking louse,'" Clayton wryly observed. "The record of our past is written in our parasites," added Rogers. What makes this discovery especially surprising, aside from its creative use of parasites to track human behavior, is that most paleoanthropologists believe that _Homo erectus_ met his end seventy thousand years ago, long before this encounter could possibly have taken place. Nevertheless it's difficult to dispute the evidence. Parasites reflect the evolution of their hosts. They rely on them for their livelihood after all, and their fortunes and survival are inextricably bound. So some direct descendant of _Homo erectus_ must have survived forty–five thousand years longer than previously believed. Whoever this species was, the genetic history of the head lice that colonized it shows that it split into two species around 1.18 million years ago, about the same time that _Homo erectus_ and our direct ancestors in Africa, possibly _Homo ergaster_ , parted ways. That explains why the lice themselves also parted company and eventually evolved into two species in the first place. The lice reveal something else fascinating (who knew the little buggers could be so informative?). The _Homo sapiens_ strain corroborates evidence that our direct ancestors had been reduced to extremely small numbers between one hundred thousand and fifty thousand years ago before rebounding and rapidly expanding, with their head lice, to colonize the rest of the world. 
This supports mitochondrial genetic evidence that our kind nearly met an early and tragic (at least for us) end around seventy thousand years ago before recovering to spend the next fifty thousand years becoming the planet's dominant species. Strangely enough, the archaic lice, the ones that made their homes on the heads of the species no longer with us, show not an iota of evidence that they went through either a similar bottleneck or population explosion. During those years, when _Homo sapiens_ had nearly been rubbed out, perhaps by the Olympian–scale eruption at Lake Toba in Indonesia, these other humans were apparently getting along just fine. One theory is that they were safely upwind of the explosion and didn't feel the immediate, violent effects, though this doesn't explain how they managed to survive the subsequent global volcanic winter some scientists feel resulted from the gargantuan blast. All indications are that this line of humanity did just fine, at least until they crossed paths again with the _Homo sapiens_ descendants of the species that they had split off from more than a million years earlier. It is not as crazy as it might once have been thought that a more modern descendant of _Homo erectus_ was still alive as recently as twenty-five thousand years ago. The more scientists examine the past, the more surprises they find. They found a particularly big one when the remnants of an entirely new human species that no one had had the slightest inkling had ever existed came to light in 2004 at Liang Bua, a cave on the island of Flores, 388 miles east of Java in Indonesia. After much debate and head scratching, most paleoanthropologists agreed that _Homo floresiensis_ , as these remarkable creatures came to be known, was a bright, toolmaking human. The big surprise, beyond the discovery that these people existed at all, was their startlingly Lilliputian stature. The press and even astounded scientists took to calling them "hobbits."
One three–foot–three–inch–tall adult female skeleton that was discovered turned out to be even shorter than Lucy. Their brain size, at 420 cc, was also not much larger than that of Lucy, a hominin who had walked the earth more than 3 million years earlier. Yet these creatures could control fire, make sophisticated tools, and hunt game, though it's still an open question as to whether they could speak or used any advanced language. How, scientists have wondered, could a species with a brain less than one third the size of ours pull off these sorts of impressive feats? Our best evidence indicates that the Flores hobbits lived between ninety–five thousand and seventeen thousand years ago, the descendants of earlier _Homo erectus_ settlers who were eventually reduced in size by an odd evolutionary phenomenon scientists call island dwarfing. Island dwarfing happens when natural forces cause species to shrink in size over time in isolated locations, presumably because resources are severely limited. The theory is that in a kind of ecological bargain, animals grow smaller rather than starve. By reducing their size, both resources and diversity are preserved, and life goes on with predator, prey, and the entire ecological niche surviving in a sort of pygmy state. Dwarfing can have other advantages under these circumstances. It's easier to stay warm or cool when you are smaller, which saves energy and requires less food. On Flores, in addition to the hobbits themselves, scientists have found examples of a small, elephant–like creature called _Stegodon_ , an animal the hobbits apparently hunted with some enthusiasm.
Because of _Homo floresiensis_ ' size, especially the size of its brain, scientists have enjoyed some spirited debate about whether it came to the island in the form of a lean and tall _Homo erectus_ (remains of _erectus_ have been found on nearby Java), then shrank over time due to island dwarfing, or whether it may have been the descendant of smaller, Lucy–size creatures who came out of Africa before _erectus_ and then made their way somehow to the islands of Indonesia. Could a smaller, less intelligent species such as _Homo habilis_ or _Australopithecus afarensis_ have made the ten–thousand–mile journey by land to Flores without the benefit of fairly advanced tools? It would be a remarkable feat. Their brains were considerably smaller and considerably less sophisticated than all varieties of _Homo erectus_. It seems a stretch that such wanderers would have evolved to develop the sort of technology scientists found on the island without the benefit of their brains' growing larger and more complex beforehand. It's more likely that somehow their brains had advanced to the sophisticated wiring of _Homo erectus_ , at least, and then grown mysteriously smaller while not giving up the advantages of that wiring. In other words the brain grew tinier, but its complex architecture remained intact, like the perfectly replicated miniatures of homes and furniture you might see in a history museum. The current consensus is that the last hobbit departed about seventeen thousand years ago, but some have speculated they may have lived on. Anthropologist Gregory Forth has hypothesized that Flores hobbits might be the source of stories among local tribes about the Ebu Gogo, small, hairy cave dwellers who supposedly spoke a strange language and were reportedly seen by Portuguese explorers who came to the islands in the early 1600s. 
Henry Gee, a senior editor at _Nature_ magazine, has even opined that species like _Homo floresiensis_ might still exist in the unexplored tropical forests of Indonesia. It makes you wonder how many other human species we may find as we comb through the planet. Could small pockets of _erectus_ descendants have managed to survive in remote areas throughout Asia, or even made their way to North America? Could there be something to the sightings of yeti in the Himalayas or Bigfoot in the American West after all? The point is that almost anything is proving to be possible when it comes to human evolution, even hobbits, and if they nearly survived until the first great agricultural civilizations began to gain a toehold, then could the descendants of _Homo erectus_ , whatever we might call them, have remained abroad for our ancestors to meet as they trekked through Asia on their way to Indonesia and Australia? Possibly a larger, more evolved version of _Homo floresiensis_ had survived Toba and the ice ages that battered the Neanderthals in Europe and reduced _Homo sapiens_ to a few clans hanging on by a wispy thread in a drought–ridden Africa. It would have been no mean feat to survive that ice age, but maybe in Southeast Asia, on the ancient continent of Sundaland, life was less deadly than in other parts of the world. It could even be that the second brand of head lice we carry around with us today is a gift from the hobbits themselves, the Denisovans, or the newly discovered Red Deer Cave people of China. For now we can only speculate, but that we met these people—whoever they were—and that they so generously shared their parasites with us indicates that our encounter was of the close kind. Tight quarters are generally required when divvying up lice. Unfortunately, there is no way to decipher exactly what kind of close encounters they were. Possibly we killed the people and took their clothing, and the bugs came in the bargain.
Murder on a large scale _has_, unfortunately, been known to take place when a new, powerful group of humans finds less technologically advanced people. We don't have to look any further than the wrecked civilizations of the Incas in South America, the Maya in Mesoamerica, the Aborigines in Australia, and the Native Americans in the United States for proof. It is also possible we simply colonized the same space and outcompeted them for limited resources with better hunting strategies, better tools and weapons, and more elaborate cooperation. Or maybe we mated with them, either forcibly or affectionately, or both. We may even have run across them when they had reached the end of their evolutionary rope, and their parting gifts to humanity were a bloodthirsty bug and a few hunting grounds. Probably, whoever they were, they were not as cerebrally gifted as the _Homo sapiens_ they crossed paths with. But that doesn't mean they weren't bright. They were certainly far more intelligent than today's chimpanzee or gorilla, which are devilishly clever in their own right. If they were directly descended from _Homo erectus_, they may have lacked advanced language. _Homo erectus_ is unlikely to have mastered the spoken word, though he may have used complex gestures or other vocalizations to communicate. Speech and language are not always the same thing, as the thousands who speak American Sign Language can attest. It's difficult to imagine how we could ever decipher how these people communicated. The business of unlocking the past without the benefit of a working time machine makes science uncertain, especially when it deals with the spoken word. Any encounter between our kind and this other branch of the human family, each of whom had been traveling quite different evolutionary roads for nearly two million years, must have boggled both of their minds when it finally took place.
You might compare the meetings to those between the civilizations of the Old and New Worlds five hundred years ago, even if the comparison isn't altogether accurate. Francisco Pizarro's clashes with South America's Incas, or the Iroquois's crossing paths with early French traders who were exploring northeastern America, or Captain James Cook's legendary encounters with the people of Polynesia, all brought together cultures that were radically different, and the meetings were fraught with misunderstanding, often tragic (Cook eventually met his end when he was hacked to death by the natives of Hawaii, who had come to realize he and his men were not the gods they originally thought they were). But at least the meetings were between two groups that were the same species! Their cultural experience was different, but their intelligence was the same. They both used language, they had each developed tools, and they had the same brains, genetics, and anatomies. Nor, on the other hand, would the meetings have been anything like early recorded human encounters with Africa's apes. There would have been no mistaking even a friendly chimp for a member of the human race, never mind that we share nearly 99 percent of our DNA. When our direct ancestors came face–to–face with these other humans twenty-five thousand years ago, would they have seen them as equals, as an enemy, as nothing more than an interesting, or terrifying, animal? Would their cultures have been even remotely the same after two million years of genetic divergence? _Homo erectus_, we know, had tamed fire, like _Homo sapiens_, but their way of communicating must have been radically different. More different than that of a British naval captain and a Hawaiian chief. Had they developed music or art? Surely they were social. _Homo erectus_ had, after all, evolved from the same gregarious stock we had, but how well organized were they, how complex was their society? Did they festoon or paint themselves? How did they dress?
Had they developed religion or superstition to explain the world? Did they even care to explain it? Was there something about the chemistry or structure of their brains that made their reality fundamentally different from ours? It's not a given that our kind would have dominated this other species when they did meet. A chimpanzee, despite its diminutive size, is strong enough that it can, rather literally, tear one of us limb from limb, if it chooses. And these people may not have been diminutive. Based on earlier fossils, it is entirely possible that they were faster, bigger, and stronger. _Homo erectus_ men could easily reach heights in excess of six feet and could likely outrun our kind. (The same may have been true of the Red Deer Cave people, though we don't yet know enough.) _Homo erectus_ was a species that had been around in one form or another for nearly two million years, the longest run any human species has ever enjoyed based on the current, if sparse, information we have. The world had tested them again and again, and they had passed the test. When these people first spied the strange, globe–headed, square–jawed creatures with their throwing spears and fire–hardened tools, it must have been as shocking to them as having aliens from Tralfamadore beam down from the sky and show up in Times Square, a race of aliens with superior technology who had come seemingly out of nowhere. How would these people have explained one another to themselves? It's fascinating to speculate on all of this, but, unfortunately, speculate is all we can do because, so far, like a crime without a clue, there is no archaeological evidence of the meetings. There are only the parasites. But what a shattering event that meeting must have been. Our encounters with the ape–men of south Asia were not, however, unique. 
Twenty-five thousand years earlier, and half a world away, we came face–to–face with another branch of the human family tree, the native Neanderthals of Europe and west Asia. This time we were more closely related, and of similar intelligence. Here, thankfully, we have a little more hard evidence that can shed a bit of additional light on the nature of their astonishing encounters.

## Chapter Six: Cousin Creatures

_Ne·an·der·thal (nē–ănʹdər–thôlʹ, –tôlʹ, nā–änʹdər–tälʹ) also Ne·an·der·tal (–tôlʹ, –tälʹ)—Someone who is big and stupid and thinks physical strength is more important than culture or intelligence._

—Macmillan Dictionary

Maybe it's because they aren't around any longer to defend themselves, but Neanderthals are among the most maligned species paleoanthropologists have ever taken to studying, and they have been studied since before there was any such thing as a paleoanthropologist. The first Neanderthal fossils to attract serious attention were found in 1856, a full three years before a nervous Charles Darwin had finally gotten around to sharing his provocative theories about natural selection with the publication of _On the Origin of Species_. The unearthing of the skull, torso, and legs that limestone workers near Düsseldorf in western Germany had shoveled onto a hillside made their long–deceased owner the first acknowledged representative of a prehistoric human species ever. Rather a big deal. Not that anyone realized this when the quarry's owner first examined the bones. Like the workers, he assumed these were the remains of a cave bear. Others speculated that they were what was left of a Mongolian Cossack who had failed to keep up with his fellow soldiers a few decades earlier when Russians were desperately fighting off Napoléon's army.
Luckily, rather than being tossed aside, and into oblivion, the fossils found their way to a local schoolteacher named Johann Carl Fuhlrott, who recognized immediately that they were human and got them into the hands of Hermann Schaaffhausen, an eminent anatomist of the day. After nearly a year's careful study, Schaaffhausen presented the bones to the rest of the scientific world and pronounced that they belonged to a savage member of a "very ancient human race." Not everyone agreed. This was, after all, a time when many Europeans still held fast to the conclusion the Church of Ireland's Archbishop James Ussher had come to in 1650. God, he said, had completed the world's creation at precisely twelve o'clock P.M., October 23, 4004 B.C. The undisputed expert on human anatomy at the time, Rudolf Virchow, reckoned that because of the skeleton's unusual shape and the heavily ridged brow, these were the bones of a rickets–ridden, cave–dwelling hermit who had met an untimely death at the site sometime in the past, but not the deep, dark past. That might have been the end of the whole discussion, but then in 1863, the highly respected British biologist Thomas Henry Huxley (of the remarkable Huxley family, which also produced Leonard, Aldous, and Andrew Huxley, among others) published his landmark book, _Evidence as to Man's Place in Nature_. Huxley was a devoted adherent of Darwin's theories, so devoted that in some circles he was known as Darwin's Bulldog. Being a bulldog, he made the argument that Neanderthals preceded modern humans somewhere down the line in our inexorable march from ape–like ancestors to our present form. In other words, he was an earlier version of you and me.

_Homo neanderthalensis_

Ultimately Darwin's and Huxley's views won out, at least generally and at least in the scientific world.
Then in 1908, decades after the original discovery was made in Germany, France's leading biological anthropologist, Marcellin Boule, saddled the world with a damagingly inaccurate view of Neanderthal when he got hold of another set of bones that had been found in a rock shelter in La Chapelle–aux–Saints in southwestern France. Boule studiously scrutinized the remains, but missed that the person to whom they belonged had suffered from chronic arthritis and a disease that had cruelly twisted the man's spine. So when he rebuilt the crippled Neanderthal's anatomy, the image he created was of an apish, bowed, and stoop–shouldered creature who became the prototypical caricature of the caveman that most of us still carry around in our minds—dim–witted, brutish, and slow, something along the lines of a Harry Potter troll. His conclusion: Neanderthals were not our ancestors but an evolutionary dead end, which, oddly enough, turned out to be about right, but for all the wrong reasons. Insights into Neanderthals and their world have altered considerably within the past decade as new fossils have been discovered, and scientists have applied genetic technology in creative ways to plumb exactly who these remarkable people were. It's now clear that though they lived under brutal and stupefying circumstances during their nearly two hundred thousand years in Europe and Asia, they were themselves neither brutal nor stupid. In fact their brains were slightly larger than ours are today, and their accomplishments, when placed in the context of the challenges they faced in their daily lives, were nothing short of astonishing. Two hundred millennia is a long time, and Neanderthals were by all accounts a busy species throughout. Although the best evidence is that their worldwide population never reached into six figures, they still managed to range thousands of miles in all directions. The bones of over four hundred Neanderthals have been unearthed during the past hundred years.
They reveal that at one time or another these people lived as far west as the Iberian Peninsula and as far east as the Altai Mountains in southern Siberia. When the weather grew colder, they traveled south to the Arabian Peninsula and Gibraltar, and when glaciers receded, they receded with them up to the mountain ranges of northern Europe. There is no evidence that they ever ventured into Africa, which makes sense. Their bodies were optimized for cold weather, and over the past two hundred millennia there was plenty of that in Europe and their haunts in Asia. Neanderthals' physical adaptations to the cold are among the reasons we think of them as brutish. On thick necks they carried large heads to hold their big brains (one fossil cranium indicates a brain of over 1700 cc, about 300 cc larger than your brain or mine). Their jaws were big with long rows of square teeth, but their chins were small, almost as though the middle part of their face had been pulled out slightly around their nose and upper lip. (From their point of view, it would have looked as though ours had been pushed in and flattened.) The thick brow ridge that ran over their eyes gave them a brooding, almost sinister look, even if their heads were topped, as some scientists have speculated, with mounds of red or blond hair. Their hair color and their fairer, possibly freckled skin were an evolutionary accommodation to living farther north than the _Homo sapiens_ from the warmer climates of the south. Dark skin evolved in equatorial environments to protect against intense sunlight, at the cost of reducing the amount of vitamin D we synthesize; light skin increases that synthesis, a good thing in lands where sunshine is in short supply for half the year. The selective pressures of cold, northern climates also endowed Neanderthals with big, rounded shoulders and thick–barreled chests that would shame a professional fullback. Even their noses helped them survive frigid temperatures.
They were enormous and fleshy and rigged with expanded nasal membranes that warmed and moistened the cold, dry air they breathed. Above all they were strong, much stronger than we are today, with slightly foreshortened arms and thighs that reduced the amount of skin they exposed to the air. Their hands were large and far more powerful than their _Homo sapiens_ cousins', and their forearms were thick and roped with muscle, at least if the anatomy of the fossil bones that ran from their wrists to their elbows is any indication. Despite their rounded shoulders and foreshortened legs, the fossils scientists currently have in hand indicate they were not shorter than the _Homo sapiens_ of their time, though they were shorter than their direct ancestor, _Homo heidelbergensis_, who stood six feet tall, and the slender _Homo erectus_, creatures who were as well optimized for running and hot climates as Neanderthals were for battling big game and cold weather. What they lacked in height they made up for in bulk, which may, in an odd way, have contributed to their undoing. To stay warm and maintain their enormous strength, some scientists have theorized, they required up to 350 calories more a day than their _Homo sapiens_ counterparts. Today 350 calories might not seem like much, nothing more than an extra muffin at Starbucks, but fifty thousand years ago that much extra food would have been exceedingly difficult to come by day in and day out. It's tough to find more persevering creatures than Neanderthals. They survived the most punishing climate Europe could dish out for a length of time that dwarfs all of the history we have so far recorded hundreds of times over. They were clever, fierce, and successful hunters who could bring down deer, bear, bison, and mammoths.
One site that dates back 125,000 years reveals that a group of Neanderthals living in a cave at La Cotte de Saint Brelade drove mammoths and rhinoceroses over a nearby cliff, butchered the dead or writhing animals on the spot, and then hauled the choicest cuts into their nearby caves before any hungry predators could get to them. Efforts like that took brains and cooperation and sophisticated communication. Their culture was advanced and their social structure tight and fair, otherwise they would never have survived as long as they did. The evidence from Shanidar Cave in Iraq indicates they began to bury their dead before we _Homo sapiens_ did, going as far back as one hundred thousand years. Long ago in a ceremony we can only imagine, fellow Neanderthals gently laid the body of a man to rest in a shallow grave, positioned fetal–like, as though he were sleeping. He had had a rough life. Multiple broken bones, degenerative joint disease, a withered arm, and an eye that was probably blinded all attest to that. Yet the pollen and the ancient remnants of evergreen boughs that investigators found lying below and around him indicate that this man was loved and important to those who saw him off to death or, in their minds perhaps, to a new kind of life. The same arthritic man that Marcellin Boule had maligned in 1908 as stooped and apish had also clearly been cared for by his fellow tribesmen. He was not young when he met his end, but forty to fifty years old, ancient by Neanderthal standards. Walking must have been agonizing given the state of his bones. He died with no more than two teeth, which would have made eating the normal, rough Neanderthal diet nearly impossible. Yet this man's fellow tribesmen must have carried and fed him specialized foods for years, otherwise he would never have lived to such a ripe age. This gives us a peek into what the Neanderthal mind may have been like, but only a peek.
Behaviors like these tell us that Neanderthals probably felt the loss of death, mourned those close to them who had met their end, and, by extension, understood there was something more to life than the day–to–day problems it presented. They, like our ancestors, must have wondered what follows death. It's strange to think that a creature we have always seen as a club-wielding brute was more softhearted than we are. Again and again Neanderthal fossils reveal that these people took immense punishment—yet their wounds often healed, which means that their comrades did not leave them behind even if they were severely injured, but instead kept them in the clan and nursed them back to health. It's not surprising that they were injured. The long hunting spears Neanderthals routinely used weren't the sort that could be thrown from a distance. (Most anthropologists hold that the Cro–Magnon people invented spear throwing.) Neanderthals instead thrust their long weapons directly into bison or woolly rhinoceros at close range, probably by ambushing them, jumping on their backs, then jamming the spear between their shoulder blades. This was nothing like going to the local grocery. (The hair on woolly mammoths and rhinoceroses could be several inches thick and acted almost like armored plating.) If the thrust wasn't made instantly and accurately, being tossed like a rag doll and then gored would have been a very likely alternative outcome. No wonder their bodies resemble the battered torsos and limbs of broncobusters. Personal sorrow aside, for Neanderthals every life lost must have been disastrous. Given their sparse populations, they didn't have many people to spare, nor, so far as we can tell, did they often live more than thirty years or so. Their productive years were pretty limited.
Despite being spread out all across Europe and well into western Asia, genetic information gleaned from a handful of bones indicates that the total population of adult Neanderthals at most reached seventy thousand, and during the last forty thousand years of their existence probably dwindled to ten thousand, until finally they departed for good. Dispersed as they were across tens of thousands of square miles, their clans couldn't have been large, probably smaller than the bands of Native Americans that later roamed the western plains of North America for thousands of years. Even meeting other clans must have been rare, and that would have left small groups, hardly more than extended families really, twelve, maybe as many as twenty-five people, fending entirely for themselves for long periods. Between injury, harsh weather, disease, and malnutrition, the whittling of their kind might have been slow, but it was, by all indications, also inexorable, and ultimately lethal. Despite their rarity, Neanderthals survived a remarkably long time during a period when the climate was both harsh and unpredictable, fluctuating wildly sometimes within a generation or two, thanks, in part, to multiple volcanic eruptions around the planet. This longevity raises a big question, one paleoanthropologists debate passionately: exactly how complex was Neanderthal culture, and how much did it have to do with their long–term survival? At one end of the spectrum, some feel that they weren't much more advanced than the brutes Boule imagined in the early twentieth century—bereft of religion, language, much clothing, and any symbolic thought. Others speculate that they were as advanced as we were, or nearly so, with full command of some kind of language, poignant self–awareness, symbolic thoughts, and a rich social culture. Where they fell along this spectrum probably has a lot to do with language.
Without complex language, it's difficult to share and preserve ideas, whether they involve ceremonies, technologies, survival strategies, or relating what Aunt Marge has been up to. The ability to export an original thought from one mind to other minds has enormous advantages. Not only do good ideas spread rapidly this way, to the benefit of everyone who learns them, but it also increases the chances of an idea's remaining in the broader culture because more minds have glommed on to it. And once glommed onto, there is always the chance that someone else will improve it. That is one of the ways cultures form to begin with. Were the Neanderthals capable of this? Maybe. Steven Mithen, an archaeologist at the University of Reading in England, believes that early humans going back millions of years slowly developed a sense of rhythm, which was later combined with musical sounds that themselves became ways to communicate—soothing their children, winning mates, or motivating themselves. Later _Homo erectus_, and later still, he argues, Neanderthals, combined these primal musical skills with gesture and a kind of speech to develop a complex communication system he calls _hmmmm_ for "holistic, multi–modal, manipulative, and musical." It's not implausible. We humans are the only mammal, or primate for that matter, that can tap our feet in time to a rhythm. Powerful selective forces must have been behind the evolution of rhythm for it to become such a unique skill. Our speech is loaded with pauses, starts, and tonal inflections that in the hands of a first–rate orator have a powerful musical quality. Language without tone and inflection is flat, like that of a bad B–movie robot, devoid of feeling and, as a result, of much of its intent and meaning. It's the music in our voices, something that scientists call prosody, that gives human language so much of its emotion, humor, and irony.
It imbues speech with multiple levels of meaning, many of which we simply "get" without consciously realizing it, another indication that it evolved before words themselves. Music is marvelously powerful. Think of the effect a national anthem, a favorite pop song, the climactic close to a Beethoven symphony, or just singing a song with friends can have. (How else can you explain karaoke?) It's not difficult to see how primal chants might have combined with dance and early rituals to become more complex, and more precise, ways to share feelings, emotions, and ideas once we had evolved brains capable of inventing them, and minds large enough to have need of them. Neanderthals and _Homo sapiens_ , remember, split from a common ancestor and went their separate ways for nearly two hundred thousand years before crossing paths again in Europe. It's quite possible that they wrought sophisticated, but entirely different, methods of communicating the thoughts on their considerable minds. Both we and Neanderthals carry the FOXP2 gene in our chromosomes, a snippet of DNA key to the development of speech (but not _the_ language gene as some have characterized it; there is no language gene). Maybe both species built on a foundation of musical beats and sounds put to use by their common ancestor _Homo heidelbergensis_ , but then, when they parted, evolved different ways to share their thinking. This happens with the color of fur and the shapes of appendages. Why not communication? We know the direction we _Homo sapiens_ ultimately took. We combined sound symbols—words—with a certain musicality—prosody—that created a commanding way to both conceptualize thoughts in our minds and then share them with others. This was one of the greatest innovations in all of nature, and it supercharged the growth of human culture. 
Mithen imagines Neanderthals took a different path and evolved a complex combination of iconic gestures (think of the "crazy" gesture we use, an index finger twirling beside our head), songlike sounds to express emotions (more complex versions of the cooing and keening sounds we make), outright song and highly expressive dance movements (à la ballet and Broadway), all in concert to communicate on levels so intricate that they are beyond what we can even imagine. These weren't muddled, caveman efforts to ape our _Homo sapiens_ language, according to Mithen. He believes and makes a compelling argument that Neanderthals were musical and gestural virtuosos compared with us and the other human species that came before them. While we specialized in using our brains and vocal gifts as ways to deliver packets of symbols made of sound, Neanderthals evolved hyperrefined senses of sound, movement, and emotion. One reason they may have evolved this way of communicating is because the physical structure of their skulls and throats developed differently from ours, partly because they were adapted to cold climates, partly by chance. Our heads and necks surround our vocal tracts, and our uniquely shaped skulls house a long, descended tongue uniquely gifted at forming vowels as in _see, saw_ , and _sue_. Anthropologist Robert McCarthy believes Neanderthals simply couldn't make these sounds. To explore his theory, he created a synthesized computer model based on reconstructed Neanderthal vocal tracts developed by linguist Philip Lieberman from Brown University. The model pronounces the letter _e_ the way a Neanderthal might have. The result is a sound that is never heard in our speech, a vowel that doesn't quite sound like an _e_ or an _a_ or an _i_ , but something in between. McCarthy says this shows that Neanderthals could not speak what are known as "quantal" vowels. 
For us, these vowels provide subtle cues that help speakers with different–size vocal tracts understand one another because they enable a hearer to tune his ear in just such a way that he perceives the sound as it is meant to be heard, a little the way a radio tunes in to the right frequency of a particular channel. With quantal vowels it's not simply the way we _say_ the vowel, but also how we _hear_ it. We learn how to tune in, apparently, when we begin babbling as babies. In fact, this may be one of the primary reasons we babble in infancy at all. We are not simply learning to make language, but learning how to listen to it, too. You might not think this could matter much, losing a few vowel sounds. But McCarthy and Lieberman argue that if Neanderthals were unable to attune their voices and hearing to quantal vowels, it would have been impossible for them to distinguish between, for example, the words _bit_ and _beat_. Instead the Neanderthal _e_ would have been substituted in both. The effect is that they would have had fewer vowels and therefore far fewer words to express ideas, and this would have encouraged our cousin humans to instead build on the more ancient _hmmmm_ approach that Mithen imagines. After all, why would they have created words like _bit_ and _beat_ if they had little use for them in the first place? They would have had no experience of quantal vowels and no more reason to use them than we would have to re-create the alien speech of a Martian whose body and throat are shaped altogether differently from ours. If McCarthy and Mithen are correct, perhaps Neanderthals compensated for a shortage of vowels with an abundance of tones. Maybe the vocabulary, as in Chinese, was based more on inflection and context, less on diphthongs, vowels, and consonants and the symbols these combinations of sounds could represent. Neanderthals might also have failed to develop the verbal palette we did because their social world was smaller.
Our communication is what makes social interaction rich, and that makes our own mental and emotional lives more elaborate. A less elastic way to express ideas might equate to fewer nuanced ideas emerging from the minds of our big, northern brethren. Could Michelangelo have hoped to paint the richly colored moment of creation on the ceiling of the Sistine Chapel if his palette consisted only of gray, black, and white with a little red? Maybe. We _are_ talking Michelangelo. But it would have been an entirely different image. Symbolic language, and more specifically the spoken word, also makes logic more likely. It doesn't only enable us to store and communicate ideas and thoughts; it shapes and refines the sorts of ideas and thoughts we have in the first place. Without more refined language, maybe the Neanderthals' worldview was less logical, and more dreamlike, almost surrealistic. When we dream, the dream always makes sense within the context of the dream, even though when we recall dreams after we awake, we know that flying, and time travel, and the different versions of ourselves and others in the dream aren't possible in the "real" world. Did the daily awakened life of Neanderthals possess more of this kind of ephemeral, nearly surrealistic quality? We ourselves touch on the mystical, the surrealistic, and the metaphysical in meditation, religious trances, and hypnosis. And don't dance and music sometimes bring people to a mystical, trancelike state? If we can come to these states, perhaps Neanderthals did, too, and then some, given the way they perceived the world. It makes you wonder what Neanderthal dreams might have been like. And it makes you wonder how accurate our "reality" is. Not that any of this means that Neanderthals were less adept at the undertakings necessary to their survival.
Mithen believes that they had plenty of "domain specific" intelligence characterized by the tools they made, the hunts they organized, and the food they prepared, but there is little evidence so far that they wove ideas into a broad culture filled with story and myth. Their small numbers might have hindered their proclivities and talents. There couldn't have been much cross–pollination of ideas among separate groups. Mithen even wonders if they developed hmmmm "dialects" specific to their own clans. Each group would have been like an out–of–the–way island, rarely found. Given their dialects and the rarity of chance meetings, technical and social progress would have been stunted. In the long run that would have made the rise of sophisticated culture difficult. This may explain why the more closely anthropologists have explored Neanderthal culture, the more they have noticed something odd. For all of their dogged courage and resilience, they didn't make much technological progress during their two–hundred–thousand–year run in Europe. The Mousterian tools and cultural artifacts they crafted and left behind show remarkably little innovation considering how long they were around. The craftsmanship is first–rate, and the design of the tools and the methods used to fashion them were clearly passed along quite precisely, but given their intelligence, you would have expected more novelty, more originality. Their greatest technical breakthroughs seem to have come _after_ they first crossed paths with the Cro–Magnon people. This could be coincidence, or it could illustrate what they might have accomplished if they had been able to better share ideas among themselves. Or it could mean their sparse numbers and the limitations of their language hampered the two species' ability to interact once they finally and fatefully met. Imagine this encounter, and its shattering effect. Each group must have gazed at the other in bewildered amazement.
In an instant they would have seen that these creatures resembled them, but were clearly not one of them. Why didn't they communicate in the same way or even make the same sounds? This wasn't simply a different tribe that dressed in unfamiliar apparel, spoke an indecipherable language, or carried odd weapons. This was another creature altogether, perhaps a god or an animal or something in between. To the Cro–Magnon (see sidebar p. 114), the large–muscled, beetle–browed white people with their fiery hair must have seemed alien, and possibly dangerous. To the broad–backed Neanderthals, the slim creatures with their baby faces and rounded skulls might have looked slight, childlike, and at first glance weak. But the Neanderthals may have sensed danger, too, in the sophisticated weapons the strangers carried, and in the alien precision of their communication. Chances are the Neanderthals had seen the handiwork of those weapons long before they met face–to–face the creatures who had fashioned them. The evidence of such efficient killing must have had a chilling effect. The big and primal question—the mastodon in the room, so to speak—that had to have entered both of their collective minds was, whoever they are, can they be trusted? Are they a friend or an enemy? For twenty-five thousand years, nearly three times longer than we have been recording our own history, _Homo sapiens_ and Neanderthals shared the same part of the world. Over time, and as the Cro–Magnon people wandered deeper into Europe, the species must have met again and again. Did they cooperate or wage war or simply do their level best to ignore one another while each worked desperately to stay beyond death's long reach? 
_Who Were the Cro–Magnon People?_

The term _Cro–Magnon_ can be a little confusing because it originates from a French cave, Abri de Cro–Magnon, in southwestern France, where the first fossils were found, but actually refers to the dark–skinned people whose ancestors began migrating from Africa around fifty thousand years ago. These were the earliest _Homo sapiens_ to reach Western Europe, and the people who first encountered and then coexisted with Neanderthals in places we now know as France and Spain beginning some forty thousand years ago. As you might imagine, the Cro–Magnon were a tough group. They were strong, heavily muscled, and smart, with a brain that, at 1,600 cc, was larger than ours is today. Their tall foreheads and square jaws made them the first humans (as far as we know) to bring these neotenic features with them into their adulthood, the physical hallmarks of modern _Homo sapiens_. They were clearly successful. Their genes are evident in people living today from Europe to Central Asia, North Africa, Polynesia, and both American continents. In short, nearly all of us. Their weaponry was advanced and included the invention of bone spear throwers that held their spears as they launched them at prey (and likely one another on occasion) with a force and accuracy that made them the most lethal hunters on earth. They also excelled in fashioning extremely sharp flint knife blades and spearheads. They even developed techniques for straightening their spears to make their flight more true. They liked to decorate their weapons, too, but discoveries of these small examples of their flair for the artistic were only a small indication of their ingenuity. The world learned of the true depth of their creative talents in 1940 when four curious teenage boys, and their dog, Robot, stumbled upon arrays of mysterious paintings on the walls of the Lascaux caves in France's Dordogne region. 
The artwork is nothing short of jaw–dropping, as beautiful and haunting as anything a modern artist could possibly conjure, and an indication that in these people, modern human behavior had irrevocably touched the world. Why the paintings were created is unknowable. They may have been religious, or a way to enter a spirit world, or simply the doodlings and artistry of generations of ancient but extraordinary humans who were expressing themselves in ways other humans never had. Hundreds of caves have now been discovered around the world filled with the imaginings of these and other ancient humans, all of them powerful illustrations of the playfulness and creativity, the childlike side of us, that distinguishes our species. As thoroughly as the archaeological record has been pored over, it has yielded nothing more than the skimpiest portrait of how the two species lived, let alone how they may have interacted. _Homo sapiens_, we know, had set up trading relationships with one another thousands of years earlier, which increased cooperation and improved their chances of survival. But because both species were itinerant, neither had yet established villages or cities, though there were settlements, favored places that clans and bands regularly returned to over long periods. Neanderthal and early Cro–Magnon existence may not have been terribly different from the way Native Americans lived on the plains of North America as recently as the nineteenth century—moving with the seasons, following the large animals they fed on, hunkering down in the winter against the elements, in and near caves that provided warmth and shelter, and then moving again when the weather grew a little kinder. Generation after generation they likely lived this way, bending with the climate, following the herds of mammoth, elk, and deer that provided them with food, clothing, and bones for tools, many of the raw materials they relied on for their existence. 
Life was a short, harsh cycle of perhaps thirty to thirty–five winters and summers of close cooperation, family feuds, and occasional encounters with other humans, then death, at which time the next generation took on the fight. In some ways it isn't all that different from our existence today. Life was shorter, it's true, and tougher, and the technologies were different, but the same general pattern applied. They too sought love, enjoyment, and friendship and searched for ways to express themselves, just as we do. They were, after all, human as well. Just a different variety of human. All of this only makes it more tempting to wonder what happened during those long and wintry twenty-five millennia when our kind and Neanderthals coexisted. Why did Neanderthals fail to survive? It's a vexing mystery. Every species runs its course. We know that. And the Neanderthals had made an immensely successful run. They roamed the steppes and mountain forests of Europe and western Asia through three ice ages, and their close cousin _Homo heidelbergensis_ had survived a full two hundred thousand years before departing. During most of their time the Neanderthals were the dominant primate species north of Africa. But as the last glacial age began, ever so slightly, to wane, perhaps for the Neanderthal people their time for departure had simply come just as it had come for so many others before them. If that was the case, the arrival of modern humans couldn't have helped their situation whatever the intentions of _Homo sapiens_. The Cro–Magnon were moving into the Neanderthals' ecological niche and were proving to be better survivors. Some have speculated that we systematically wiped out our long–lost cousins as we came across them. When we met, the theory goes, if hunting or choice settlements and locations were at stake, the Cro–Magnon, with their superior weapons, and possibly their superior planning, killed or enslaved whoever got in their way, including Neanderthals. 
(They might have done the same to their own kind. We still do today.) It wouldn't have been an all–out war in the sense that armies were assembled and clashed, but the damage done to the Neanderthals would have been relentless, with one settlement, tribe, or clan after another falling to the new intruders. There's not much evidence for warfare or murder in the fossil record, however. We haven't found ancient killing fields, strewn with the hacked and broken bones of the two species; no sites where dead _Homo sapiens_ lie next to the skeletons of Neanderthals. The first evidence of a violent Neanderthal death was discovered in the Shanidar Cave in northeastern Iraq. The man was about forty years old when he met his end. Scientists found evidence of a wound from a spear, or some sort of sharp object, in his rib cage. Based on the nature of the wound, Steven Churchill, an anthropologist at Duke University, suspects a light spear thrown by a Cro–Magnon enemy inflicted it. It doesn't seem to be the result of a thrust by a knife or a long Neanderthal spear, the kind they favored when hunting. It's a theory, but a long way from a certainty. If this one man was murdered in this cave, or nearby, where were the other victims? It's just as likely that the man died from a wound suffered while hunting, or maybe another Neanderthal did him in. Other findings have been a little more conclusive, and considerably more gruesome. Paleontologist Fernando Ramirez Rozzi found something rare in the human fossil world, a cave called Les Rois in southwestern France that housed the bones of a modern human and a Neanderthal child lying together, positive proof that the two species came face–to–face. Unfortunately the jawbone of the child shows the same sort of markings paleontologists see on the bones of butchered reindeer skulls. The unappetizing conclusion is the child was made a meal of. 
Hints that other Neanderthals met a similar, cannibalized fate have been found at a site called Moula–Guercy near France's Rhône River. Except in this case those who dined on their fellow humans were themselves Neanderthals. Perhaps it was a violent ritual, or the spoils of war, or maybe some who had died of starvation became the sustenance for those who survived. Not a cheery thought, but a world this harsh would inevitably require harsh choices. If there were violent meetings, then this is the extent of the evidence we have for now. Others, if they exist, have yet to reveal themselves. Maybe, somewhere in Europe, in a remote mountain forest or beneath a broad river rerouted by the last glaciers, lie the bones of prehistoric warriors who fell to the invaders from the southern seas. So far, though, no battlefields, and no warriors, have been found. A second theory that could explain the disappearance of the Neanderthals is that the Cro–Magnon simply outcompeted them for resources, food, and land, not unlike the way we outcompete nearly every other species living wherever we show up. The thinking is that we didn't kill them hand to hand, but we exterminated them in a war of attrition, by taking over the best habitats and hunting grounds, killing game faster than they could, and in larger numbers. Slowly, over thousands of years, the already sparse Neanderthal population retreated to pockets where it became increasingly difficult for them to survive. (We are doing this today to the gorillas and chimpanzees of Africa and the orangutans of Southeast Asia.) This might further have crippled Neanderthals' ability to band together, weakening them still more, until in the end each of the dwindling clans died away. There is some evidence for this. Neanderthals did become progressively rarer as Europe withdrew into the coldest phase of the last ice age. 
Leslie Aiello of University College London suggests that Neanderthals, as adapted as they were to chilly climates, couldn't survive temperatures below 0° F (−18° C). Their clothing and technology simply weren't up to it thirty thousand years ago. As temperatures dropped and a new ice age descended, warm pockets of land would have become increasingly hard to find. If the Neanderthals retreated to them, they may have been trapped and died as even these locations grew too cold. Or they may have sought them only to find that the new creatures from the south had beat them to it, leaving them bereft of their favorite settlements and with no choice but to settle for places that, in the end, couldn't sustain them. It might have happened this way. But Europe and Asia are immense territories, and it's difficult to imagine that there wouldn't have been enough resources to go around. The Neanderthal range covered tens of thousands of square miles. Genetic studies indicate the entire Neanderthal population rarely numbered more than seventy thousand people spread from the Iberian Peninsula and the south of England clear to the plains of western Asia beyond the Caspian Sea. While each band probably needed several square miles of land to sustain them, much of the land was rich with food and resources and herds of large animals from mammoths and woolly hippopotamuses to deer, bison, and aurochs. Even if the combined numbers of both Neanderthals and modern humans reached into the hundreds of thousands, there would seem to have been plenty of space, food, and resources, and much of Europe, even at the height of the last ice age, would have been temperate enough to accommodate any variety of human—Neanderthal or not. The frigid weather would certainly have battered Neanderthals trapped in cold areas, but why wouldn't those already living in the more temperate climates of southern Italy, Spain, France, and the Mideast have survived? 
Maybe because it was more complicated than any one of these scenarios. Maybe the mysterious people from the south brought new diseases or parasites with them or forced radical cultural changes that Neanderthals simply couldn't adjust to. After all, white men from Europe destroyed the cultures and ways of an entire continent of native North Americans, scores of individual tribes numbering in the hundreds of thousands, and they did it inside of four hundred years. This wasn't simply a matter of brute slaughter. The blunt impact of a different kind of culture can also do considerable damage. Could immigrants from Africa have wreaked the same kind of havoc on the Neanderthal natives of Europe, except in this case taking twenty-five thousand rather than four hundred years? It's possible. Stephen Kuhn and Mary Stiner at the University of Arizona suspect that modern humans arrived in Europe with cultures that divided labor within their tribes in a way that was safer for pregnant women, mothers, and children by keeping them focused mostly on collecting vegetables, fruits, and nuts, while men concentrated on hunting large animals. Based on their research, Kuhn and Stiner believe Neanderthals divided their labor among the sexes differently or, more accurately, didn't divide it at all. Men and women both undertook the deadly work of bringing down big game, and that meant that women who were killed in hunts would not survive to bear more children. The teens and adolescents lost in hunts would further have depleted the clan. Though the Cro–Magnon approach to dividing labor didn't mean they attacked Neanderthals, it would have had an impact nevertheless because eventually more Cro–Magnon women would have survived to bear more children than their Neanderthal counterparts, growing their population while Neanderthals struggled to keep pace with replacing the members they were losing. 
Even if the Neanderthal people were tougher, over thousands of years the competitive difference could have completely rearranged the population balance, just as divergent social approaches shifted the balance between whites and Native Americans. This may help explain why the Neanderthal population never really took off, even when ice ages relented. Their mortality rate was simply too high, and they were spread out too thinly. It may also help explain why their culture and technology remained doggedly unchanged for two hundred millennia. It's terribly difficult, even within a clan, to pass along new ideas and innovations when members rarely lived past thirty or thirty–five years, and others were being wiped out in the prime of their childbearing years. Who knows how many Neanderthal Galileos or Einsteins died suddenly in the hunt and took their genius and inventions to the grave with them? It's nearly impossible to build anything but the most rudimentary traditions when innovation is rare and life passes so quickly. In this scenario, Neanderthals found themselves fighting, millennium after millennium, a pulverizing war of attrition. In the end, extinction was the only possible outcome. One last theory about the demise of Neanderthals is particularly tantalizing: If we killed them at all, we killed them with kindness. We neither murdered them nor outcompeted them. We mated with them and, in time, simply folded them into our species until they disappeared, reuniting the two branches of the human family that had parted ways in Africa two hundred and fifty thousand years earlier when small groups of restive _Homo heidelbergensis_ headed across North Africa and into Europe. It's fascinating to consider the possibility that we and another kind of human together bred a new version of the species. Whether this happened was, only a few years ago, one of the great controversies in paleoanthropology. But now there is persuasive evidence that something like it did. 
In 1952 the remains of an adult woman were found lying on the floor of the Pestera Muierii cave in Romania—a leg bone, a cranium, a shoulder blade, and a few other fragments. The people who discovered these bones didn't think much of them. How old could they be, after all, if they were simply lying there on the ground for anyone to kick around? As a result, soon after their discovery they were squirreled away in a researcher's drawer where for more than half a century they lay undisturbed and forgotten. Eventually, however, a team of scientists that included Erik Trinkaus at Washington University in the United States and two Romanian anthropologists, Andrei Soficaru and Adrian Dobos, rediscovered the bones and gave them a closer look, and when they did, they were stunned. Radiocarbon dating revealed the woman hadn't lived recently at all, but last walked Earth thirty thousand years ago. The other startling discovery was that the fossils exhibited features that were clearly Cro–Magnon–like, but also distinctly Neanderthal. The back of the woman's head, for example, protruded with an occipital bun, a distinct Neanderthal trait. Her chin was also larger and her brow more sloped than a modern human's. The woman's shoulder blade was narrow, not as broad as a modern human's. Was she simply a rugged–looking modern human, or, as one scientist wryly put it, proof that moderns "were up to no good with Neanderthal women behind boulders on the tundra?" Other similar finds made recently throughout Europe keep boggling the minds of scientists who study this question. In another cave in France researchers have unearthed not bones, but tools that date back thirty–five thousand years. Their location indicates that for at least a full millennium both Cro–Magnon and Neanderthals coexisted in this place. 
If they could live together, and if they could communicate and cooperate, isn't it likely that at least a few crossed the species line and, in a prehistoric foreshadowing of Romeo and Juliet, mated? Then there is the mysterious skeleton of a young boy unearthed in Portugal that is 24,500 years old. While conventional wisdom has it that the last Neanderthals died out thirty thousand years ago, the large size of this boy's jaw and front teeth, his foreshortened legs, and broad chest have caused Trinkaus and others to wonder if he, too, might not be a hybrid. Though his chin is Neanderthal in size, it is also square, more like ours, and his lower arms were shorter and smaller than you might expect if he were _Homo sapiens_. Strangely enough, this part of Portugal is among the last places in Europe where Neanderthals lived before they disappeared from the fossil record. Was this boy simply among the last of his kind, archaeological proof that Neanderthals were finally and inevitably swallowed, genetically or otherwise, into the rising tide of modern humans spreading across the planet? Until recently, the only evidence of interbreeding was perplexing finds like these, smoking guns that indicated we and Neanderthals had mated, but nothing irrefutable. Then in 2010 a scientific consortium headed by the Max Planck Institute for Evolutionary Anthropology completed its historic analysis of the Neanderthal genome, accomplishing for our burly cousin species what we had done for ourselves seven years earlier. The analyzed DNA was extracted from three Neanderthal bones discovered at the Vindija Cave in Croatia not far from the Adriatic seacoast. To decipher the tantalizing possibility that we and Neanderthals may have produced common offspring in the deep past, the team compared the Neanderthal DNA with the genomes of five people of different lineages from around the world—French, Han Chinese, Papuans from New Guinea, and the Yoruba and San people of Africa. 
The San are, genetically, very close to the first modern humans to have evolved in Africa. What dumbfounded the project's investigators, and the rest of the scientific world, was that all the genetic samples taken, except for the Yoruba and San people of Africa, contained 1 to 4 percent Neanderthal DNA. In other words, most of the human race from Europe to the islands of Southeast Asia (and probably farther) is part Neanderthal! That Africans seem not to share any Neanderthal blood indicates that these two families mated after the wave of _Homo sapiens_ departed Africa, but before their descendants headed into Europe and Asia. According to the researchers, this would have been somewhere between eighty thousand and fifty thousand years ago. Was that the only time modern humans and Neanderthals bred? The research team isn't saying, but right now they can only base their conclusions on the research in hand. This will disappoint those who believe that Neanderthals and modern humans melded during those twenty-five thousand years of cohabitation in Europe into a single species whose recombined genes, shaped by separate evolutionary pressures, created a new kind of human. But it doesn't rule the possibility out. There simply isn't enough information on the scientific table right now to say. That we mated still doesn't conclusively solve the mystery of how the hardy, quiet people of the North met their end. Was it murder, competition, or love? Does it have to be one or the other? Nature, evolution, and human relations are all chaotic and unpredictable, as much as we might like them otherwise. When Europeans colonized North and South America, they sometimes befriended the natives, sometimes brutally exterminated them, sometimes raped their women, and sometimes fell in love and raised families. Were Neanderthals so different from Cro–Magnon that sex was out of the question? If the Max Planck findings are accurate, clearly not. 
Both species were human, and the drive to procreate is strong and primal. Humans, after all, have been known to have sex with other primates, even other animals. Surely both species found enough common ground over twenty-five thousand years to bed down together during those frigid European winters. One of the inescapable lessons of evolution is, if anything can happen, it probably will. Whatever ultimately transpired between our two species, events eventually rendered Neanderthals first endangered and finally extinct. For two hundred thousand years they were punished by a cold climate that kept their numbers small and made it difficult to develop stable trade, and to share and amplify the advances of their scattered clan cultures. Even with complex forms of communication, no matter how different from ours, they would have been hard–pressed to build a broad and increasingly sophisticated culture. They seem to have had their work cut out for them simply maintaining the status quo. So, in time, they disappeared, presumably one band at a time. Sometimes, perhaps, at the hands of one another. Sometimes to disease or famine. Sometimes to climate change, undoubtedly a particularly destructive culprit, especially if their technologies couldn't keep up as modern humans began to usurp their favorite places to live, killing game before they could and depleting limited resources. Sometimes the Neanderthal may have battled for those places and lost or moved on, as Native Americans did until nothing but the poorest lands were left upon which to scrape out a life. Or sometimes they may have compromised and cooperated and befriended their clever, slim competitors, until there was no difference between the two, and they had disappeared into the gene pool of their cousin species, leaving a few bones and tools and a bit of their DNA in us as a legacy of their more prosperous days on earth. 
Precisely how the end came is impossible to say, but inevitably, somewhere, sometime, the last Neanderthal passed from this earth. Current theories hold that surviving bands may have retreated south during the last ice age to the Iberian Peninsula, holing up on Gibraltar, a last outpost at the toe of Western Europe. Gibraltar protrudes like a great snaggled tooth from the southern tip of Spain no more than fourteen miles from the north–facing coast of Africa. For one hundred thousand years Neanderthals had been revisiting and living in a great cave, today known as Gorham's, that sits at the base of the peninsula's massive promontory rocks as they hover hawklike over the Mediterranean. If the radiocarbon dating of nuggets of charcoal found in the cave is accurate, the Neanderthals stoked their last fires here twenty-four thousand years ago, much more recently than scientists once thought possible. Gibraltar was different during the last ice age than it is today. Populated with deer and ibex and rabbits in the craggy hills above the cave's colossal and vaulted ceilings, it made a perfect refuge for a vanishing species. With sea level in the Strait of Gibraltar 240 to 360 feet lower than it is today, the cave would have gazed like a cyclopean eye on sprawling plains and marshlands that stretched west across the great bay rather than the blue sea that surrounds the land now. The bones left behind tell us the cave dwellers dined sumptuously on tortoises, fish, and other marine life. It had long been a good place to be, and now, it seems, it was the _last_ place to be. For the Neanderthal, only Africa itself could take them farther south. To the north the Cro–Magnon had been moving into their territory for millennia, and beginning six thousand years earlier the climate had grown fiercely colder, driving clans south until finally only this one last settlement remained, clinging to the underbelly of the continent. 
Gibraltar, surrounded by water and so far south, would have remained temperate even in the face of the descending glaciers, up to a point. By twenty-four thousand years ago, as the ice age tightened its frigid grip, even Gibraltar grew arid, the marshlands died off, and the game with it. Each day would have become a little harder, and finally, impossible. Someone had to be the last Neanderthal, an individual like you and me. He, or she, didn't know it, but whittled down and alone, that person's end was more than a single death. It was the passing of an entire species shaped and hammered in evolution's crucible for hundreds of thousands of years. What were those last hours like? I would like to think they were spent sitting on a Gibraltar precipice, high above the shallow Mediterranean, looking west as the sun descended into the Spanish mountains. Maybe in those last moments the pale light faded on the Neanderthal's sloped and beetled brow while that strange and fiery orange ball slipped mysteriously away, and with it the last Neanderthal mind, with its last, unique Neanderthal thoughts. After two hundred thousand years, a time that mangles and boggles our ken, extinction had at last come. But why the Neanderthal and not us? To say it was because we were smarter somehow, or better communicators, or more social, strategic, or creative begs the deeper question, what happened that we developed those gifts and not them? What made the difference? As usual, the answer isn't simple, but it's fascinating, and once again it is linked to our childhood. When teeth are formed, they are built one layer of enamel at a time, each deposited on top of the other. This leaves a pattern, if you inspect them closely enough, that resembles the growth rings of a tree; the more enamel that is laid down, the wider the spaces, or perikymata, become. 
You wouldn't think that ancient teeth could possibly have much to say about human evolution, but, as it turns out, they speak volumes and provide a terrifically convenient way to measure how quickly different primates, including our human relatives, grew up. Even if a scientist has nothing more than a single, ancient molar or bicuspid to work with—which, in paleoanthropology, is often the case—it's remarkable how illuminating a tooth can be. By inspecting the speed with which the perikymata of even very different primates were laid down, it is possible to compare how "old" they were, relative to one another, when they reached adulthood, or puberty, or even when they were likely weaned, all depending on when the tooth stopped growing. In this way a tooth can act as a rock–solid biological clock set against other biological life events that can help shed some much–needed light on whether our ancestors matured more quickly, or less quickly, than we did. You and I, we already know, take eighteen to twenty years to reach physical adulthood, whereas chimpanzees and gorillas reach adulthood by age eleven or twelve, in nearly half the time. The main reason for this is our extended childhood. The point is, our rates of growth over our lifetime are different from those of other primates. We cut our first permanent molars around age six, but chimps lose their baby teeth around the age of three and a half. The earliest humans, such as _afarensis_, developed at the same rate as chimpanzees, and so did their teeth. But later, with the arrival of _Homo habilis_ and _Homo erectus_, childhood lasted longer and growth slowed. A _Homo erectus_ child cut her first molar between ages four and four and a half. All of these findings pooled together from studying the teeth of precursors from around the world readjusted the age of the most famous youngster in all of anthropology, the Nariokotome (or Turkana) Boy (see chapter 2, "The Invention of Childhood"). 
Originally scientists pegged his time of death at age twelve, but now the consensus is that he was closer to eight even though he was already an impressive five feet three inches tall. He was hitting his adolescent growth spurt (the one that drives parents crazy when they are trying to keep their kids in reasonably fitted clothes) even though by our lights he should still have been a little boy. (He was, therefore, about halfway between us and chimpanzees in his growth rate.) All of this tells us that earlier in our evolution our ancestors grew up faster, which means their childhoods were shorter, which further means that they had less time to learn before they began to get set in their adult ways. You would think that by the time Neanderthals arrived on the scene, the speedier growth rates of Nariokotome Boy would have evolved out of us, giving way to rates similar to ours today. After all, we and Neanderthals evolved from the same common stock and came into existence at about the same time. We were roughly the same physical size and so were our brains. For some time it looked exactly this way, then in 2001 Alan Walker, one of the team who had originally discovered Nariokotome Boy, found that Neanderthals didn't attain modern growth rates like _Homo sapiens_ until about 120,000 years ago, 80,000 years after they first arrived. Or so he thought. But soon this conclusion was proven wrong when a Harvard researcher named Tanya Smith and her colleagues, after the careful inspection of many teeth, concluded that not only did Neanderthals not lengthen their lives or their childhoods as much or as early as Walker had thought, they had actually shortened them, reversing a trend at least seven million years in the making! Smith says Neanderthals reached full maturity by age fifteen, three to five years earlier than us and not terribly different from the pace that Nariokotome Boy was on, nearly a million and a half years earlier. 
On the other hand, human fossils unearthed in Morocco indicate that we _Homo sapiens_ reached current growth rates as early as 160,000 years ago. What, however, would cause a childhood-lengthening trend that had been in the works for so long to reverse itself in Neanderthals? The same force that causes all evolutionary trends to turn and twist—the need to survive. Neanderthals, you might recall, had a rough time of it fighting cold climates and hunting enormous animals at close quarters, among other challenges. Their population, even when the climate grew warmer, never took off, which meant that from the first moment they emerged, they were, essentially, an endangered species. It's true they were enormously strong and as tough as any creature that ever walked the planet, yet they didn't live long. Because they were so quickly snuffed out, and because they congregated in small groups, evolution apparently began to favor Neanderthal children who grew up faster, could bear children sooner, and reached adult size and strength as rapidly as possible to replace the older members of the troop who passed on so quickly. This would have two immense and not terribly favorable long-term effects. First, it meant Neanderthal children spent less time playing, learning, and developing socially and creatively in early life. The effect—less personal adaptability and creativity. They had less time to develop unique personalities and talents. Second, it meant fewer mentors who could pass valuable knowledge along to younger members of the clan. Neanderthals became so focused on their short-term need to survive that they were unable to develop the more complex skills that saved us _Homo sapiens_ over the long haul. From an evolutionary point of view, however, what other route could they have taken?
Neanderthal mothers could not suddenly begin to have litters of offspring like a cat or a pig to compensate for their kind's high mortality rate, and given their sparse numbers and tiny tribes, they simply didn't have the "bench strength" that a larger population supplies. They were caught in an evolutionary catch-22, and accelerating their childhoods was the best Darwinian solution at hand. For two hundred thousand years, it worked. And then it didn't. We _Homo sapiens_ were luckier. Though we had swung precariously close to extinction ourselves fifty thousand years earlier, the climatic forces behind our near-demise struck quickly, then reversed. Despite our near-death experience, there wasn't enough time for a genetic solution à la the Neanderthals, so when the climate recovered, so did we, in a hurry. DNA analysis shows we rapidly fanned out into Europe and Asia, and all points beyond. The main reason we could was that we had already maximized the lengths of our childhoods, which now poised these strange, slender savanna apes with their youthful looks, big brains, and enormous personal diversity to change the world in profound and startling ways. Which brings us to the next part of our story.

## Chapter Seven: Beauties in the Beast

_I cannot imagine, even in our most primitive time, the emergence of talented painters to make cave paintings without there having been, near at hand, equally creative people making song. It is, like speech, a dominant aspect of human biology._ —Lewis Thomas, _Lives of a Cell_

You may not find it particularly attractive to perforate your upper lip and then slip a large metal and bamboo ring called a pelele into it to force your lip two inches beyond your nose, but women in the Makololo tribe of south-central Africa did it in the nineteenth century, and the men loved it, even when a smile sent the ringed lip flipping up to cover the eyes of the woman who wore it.
In 1860, when a British explorer asked the Makololo chief, "Why do women wear these things?" the chief, in stunned disbelief, answered, "For beauty! They are the only beautiful things women have; men have beards, women have none. What kind of a person would she be without pelele? She would not be a woman at all with a mouth like a man, but no beard." It is difficult to overestimate the power of the visual cues that drive human behavior, including those devoted to the arts of seduction. "Savages at the present day deck themselves with plumes, necklaces, armlets, ear-rings, etc. They paint themselves in the most diversified manner," Charles Darwin wrote in 1871. He devoted an entire chapter in _The Descent of Man_ to exhaustively detailing the wild and alien ways people all over the world embellished themselves to attract and impress the opposite sex. The natives of Malaysia painted their teeth black because it was shameful to have white teeth "like those of a dog." Some Arabs believed that no beauty could be perfect until the cheeks or temples "have been gashed." The Botocudos of Brazil placed a four-inch disk of wood in their lower lip, and the women of Tibet elongated their necks by placing metal ringlets one on top of the other until their heads almost appeared to hover magically above their shoulders. Most of the time, Darwin observed, women's adornments focused on enhancing their beauty. Men concentrated on making themselves physically attractive, too, but mostly they favored adornments designed to strike terror into their enemies during battle because a fierce warrior is often attractive to the opposite sex. So they embellished themselves with paint of all kinds or, like the Maori of New Zealand, with remarkably detailed facial tattoos. The women of some African tribes found a star stamped on a man's forehead and chin absolutely irresistible. Darwin's anecdotes aren't the only testament to how important we consider our appearance to be.
Human beings focus on appearance all the time, everywhere. In 2011 the cosmetics industry induced men and women worldwide to separate their wallets from $12.5 billion. And in 2010, Americans, without help from anyone else in the world, spent $50 billion on jewelry. Darwin enumerated example after enthralling example of this aspect of human behavior because he was trying to make _the_ central point of _The Descent of Man_—species work to guarantee their survival in two ways. First, by outflanking disease, parasites, predators, foul weather, and all the other countless dangers of their environment. And second, by having sex. Only by finding willing mates with whom to get on with the business of bringing new offspring into the world, he pointed out, can any species hope to survive. This process he called "sexual selection." The two strategies are intimately bound. Survival serves no purpose without sex, and sex, of course, is impossible without survival. Naturally, the first step to being sexually selected is getting the attention of the opposite sex in the first place. You can't mate if you can't manage to be irresistible. To go unnoticed is to go unloved, and unrequited love in nature is a sure path to extinction, at least for you and the DNA you personally have to offer the gene pool. This means few goals in life are more important—from an evolutionary point of view—than successfully landing at least one sexual partner. This has caused the forces of natural selection, given their collective knack for conjuring strange genetic fabrications in the interest of survival, to cook up some extravagant ways to advertise just how alluring the members of various species can be to one another. Peacock feathers are surely the most celebrated, but there are also the colossal antlers of the (now extinct) Irish elk, the elaborate songs that the red-eyed vireo sings to charm females, the thick manes of lions, and the vibrantly colored bottoms and faces of male mandrills.
Even the bright colors and fragrances of flowers are a kind of sexual allurement because they attract bees that then "impregnate" other flowers by proxy.

_A Hen Is an Egg's Way of Making Another Egg_

Being self-aware as we are, we tend to think that the drive to survive is a conscious thing, and so we assume that this awareness of our own mortality makes us want to remain living. But every form of life—the lowliest protozoan, deep-sea tube worm, or hardy lichen clinging to a windswept Antarctic rock—fights every day, ferociously, to remain among the living. Lizards, spiders, gazelles, and lions all apply themselves ardently to the quotidian labor of making it to the next day, yet not one of them is contemplating its mortality. The drive to live is instinctual, primal, and unconscious, even in us. But where, and this is the central question, does the instinct come from? You couldn't be blamed if you assumed it comes from the individual living thing itself, but again, so much of life doesn't have the cerebral horsepower to even know that death is possible. So something else must be at work, and it is. Long ago, packets of molecules with the remarkable ability to continually make copies of themselves evolved. Scientist and author Richard Dawkins likes to call these "survival machines." In time these evolved into what we now call DNA, the long ladders of linked nucleotides that contain the instructions that make you and me, and every other living thing on the planet, rather improbably and astoundingly possible. To better do their work, the earliest DNA replicators inevitably stumbled across ways to better multiply. The very first cells are an outstanding example of a major leap forward. They not only supplied a membrane as a protective wall between them and the cruel protean world, but they discovered ways to ingest food and turn it into power, the better to make even more copies.
Sex was another innovation—a better way to make both more and more diverse survival machines. In time cells joined together to form increasingly complex replicators, until, following 3.8 billion years of trial and error, they took on millions of outrageously complex forms. One recent, and altogether serendipitous, result is you. The British poet Samuel Butler once observed, "A hen is only an egg's way of making another egg." Not the other way around. When looked at this way, it turns out you and I (and every other living thing on earth) are not so much focused on surviving because we personally want to avoid death or even desire to create more versions of ourselves. Instead we are a kind of elaborate tool in the unconscious service of the DNA swimming around inside us, determined (if strings of molecules can be determined) to make more copies of itself. Think about that. We are hosts to a kind of virus that controls our fundamental behavior in a way that ensures more copies of that "virus" will be made because that is what that virus does—it replicates. And the better "tricks" it can find that improve its duplication, the better it does its job. We, in case it escaped you, are one of the "tricks." What makes so many of these evolutionary traits intriguing is how extravagant they are. They don't seem to serve any practical purpose, not at first glance. In fact they can sometimes get in the way of survival because they require enormous strength, extra stores of nutrition, or draw the attention of predators. Biologists call these costly attention getters "fitness indicators" because they are billboards that mostly male animals use to advertise to females the fabulous genes they are toting around. Extravagant dances, battles with other males, wildly orchestrated warbling, gargantuan antlers, luminous bottoms, thick manes—all of these are powerful, but costly, signals to the opposite sex. Their sole purpose is to prove "I am the man!"
Like other animals, we humans have evolved an impressive variety of fitness indicators too. Modern human women, for example, have breasts much larger than those of any other primate, yet their expanded size serves no apparent practical purpose. Female gorillas and chimpanzees have small breasts and nurse their offspring just fine. But for humans, full, round breasts subconsciously signal health and fertility. (The original meaning of the word _buxom_ was healthy and easygoing, not large-breasted.) The same is true for rounder rumps and a clearly defined hourglass figure. Several studies have revealed that men of nearly every culture are attracted to women whose waists are about 70 percent of the size of their hips. Other studies have shown that a certain amount of fat on the backsides and hips of women is a universal signal of fertility. Because of the subtle messages they send, over time evolution favored women with these traits for the simple reason that they were terrifically accurate indicators of health. Their offspring then tended to survive and go on to mate and pass the health-enhancing genes along. Women seek out fitness indicators in men, too. They find slim hips and broad shoulders attractive because physical strength sends subliminal messages that such a man is not only a fertile source of first-rate DNA, but athletic enough to survive the dangers of the world and bring home the bacon. The importance of fitness indicators has even driven the way we look. The human face is one of the best advertisers of health in nature, which is why we are so tuned in to beauty and handsomeness. We love symmetry in the countenances of others, not to mention bright smiles, white teeth, smooth skin, and thick hair. Many of us assume that we develop our attraction to these traits because we learn it. That's partly true.
Fashion trends and hairstyles can affect what we consider beautiful, as Darwin's research illustrated, but our tastes in physical beauty are almost entirely primal and subconscious, which is to say, they are not learned. Psychologist Judith Langlois at the University of Texas, Austin, for example, has found these tendencies are so deep that even infants prefer comely caregivers to unlovely ones. She and her research team figured this out by gathering together the odd combination of sixty babies, one woman, and an expert mask-maker. For the experiment the team asked the mask-maker to fit two masks to a female caretaker—one that made her look pretty, the other not so pretty. This woman was a stranger to all the babies, and the masks were extremely realistic, a skin over the caregiver's real skin that smiled or frowned and moved seamlessly no matter how she expressed herself. To ensure that the woman didn't act differently depending on what sort of mask she was wearing—something that might subtly have affected the babies' behavior—the caregiver herself was never allowed to know whether the mask she was wearing made her appear handsome or unsightly. Only the babies knew. Once she was properly disguised, the woman then began to play with each of the sixty babies in turn. Their playing was tightly scripted to keep the experience for each child consistent. Every play date was captured on videotape, and lo and behold, the study revealed that the infants, according to Langlois, "more frequently avoided the woman when she was unattractive than when she was attractive, and they showed more negative emotion and distress in the unattractive than in the attractive condition. Furthermore, boys (but not girls) approached the female stranger more often in the attractive rather than in the unattractive condition, perhaps foreshadowing the types of interactions that may later occur at parties and other social situations when the boys are older!"
Other studies reinforce the primal depth of our preference for beauty in one another. College students have been shown to prefer cuter babies to less cute ones, even when they initially said all babies look alike, and mothers have even been shown to act more attentively and affectionately toward firstborns who were considered attractive than to those who weren't. Grade-school children who are good-looking are treated better by their peers than their less attractive counterparts, and another Langlois experiment illustrated that babies no older than six months of age looked longer at pictures of attractive adults, no matter what their race or ethnic background. None of these experiments means that any of this behavior makes sense. In fact it is proof that it doesn't because the world is, regrettably, filled with attractive people who are neither kind, nor trustworthy, nor particularly intelligent, all useful traits in a human. Nothing about beauty makes it innately good or bad, and we have thankfully evolved the mental capacity to understand that. Nevertheless, we have a difficult time resisting the primal impulses that cause us to prefer physical attractiveness because it has proven over time to be a spectacularly strong indicator of a personal gene pool that endows its owner with a better chance of making it from one day to the next. It may not be as useful an indicator today as it once was, but millions of years of evolution create habits that are wickedly difficult to shake. Why does any of this matter? Because that childhood-extending phenomenon we call neoteny and our universal preference for beauty are profoundly bound to one another, even if it isn't immediately obvious. Together they help explain why the countenance you sleepily gaze at in the mirror each morning looks more like an infant ape's than a full-grown one's. Remember Konrad Lorenz's "innate releasing mechanism"?
In addition to that observation, a surfeit of other studies reveals that infant faces, especially smiling ones, create a "pleasure response" in adults. If that's true, then our more apelike ancestors may have begun to prefer mates who retained more youthful traits into their adulthood—higher foreheads, larger skulls and eyes, flatter faces, and stronger chins. Females that grew up by genetic happenstance looking more childlike would have found themselves with more enthusiastic suitors than other women who looked less childlike. That increased the chances that those baby-faced traits would be passed along to both female and male children, leading to still more neotenic looks in all of us. But in addition to triggering caretaking and pleasure responses, youth is, as we know, also a fitness indicator. It goes hand in hand with health, strength, and fertility, giving members of the opposite sex still more reasons to prefer mates who retained their youthful looks beyond childhood. The process may have been long and slow, but over hundreds of thousands of generations the simian appearance that had once defined our human ancestors morphed from sloped brows, protruding snouts, and receded chins into more childlike traits. We can see exactly this transformation in the faces of our ancestors, species by species, as we march from the deep past toward the present. By the time _Homo sapiens_ had emerged two hundred thousand years ago, our youthful looks had pretty much reached their current state.

_A Preference for Youth Is Still Shaping Our Evolution_

If more proof is needed of our preference for youth in potential mates, a study performed by scientists in Scotland, Japan, and South Africa seems to have supplied it. You may not find it terribly surprising that the research uncovered that men prefer women whose faces look more feminine, which is to say youthful; but it also turns out that women preferred _men_ whose faces looked more feminine, or boyish.
For the study, scientists digitally created an average but attractive version of two faces for each sex, one Caucasian and one Asian, four "average" faces in all. They then digitally modified each face to create two versions, one slightly more masculine, the other slightly more feminine and childlike. The changes are subtle, but the male versions of the faces sport slightly heavier eyebrows, a hint of shaved beard, squarer jaws, and pupils that stand a bit farther apart than female pupils, something that tricks the eye into thinking that the male faces in the study were larger than their feminine counterparts (they weren't). When forced to rate the faces they found most attractive, members of both sexes, old and young, Asian and Caucasian, said they preferred the more feminine versions. In addition, when prompted to rate the faces on something more than attractiveness, such as trustworthiness, warmth, cooperativeness, and the likelihood to be a good parent, again the more feminine faces were preferred, although youthful looks didn't seem to make the study's participants feel that feminine faces were either more or less intelligent. If we have such a universal preference for feminine, youthful looks, then why don't men and women today, after millions of years of evolution, look essentially identical? Because some other factors are involved. A man's bigger body, larger muscles, and broader shoulders can also indicate a good protector and provider. Those traits require more testosterone, and more testosterone causes changes in a man's face you don't see in a woman's: a beard, thicker eyebrows, broader jaws, and a bigger head, for example. So, while male and female _Homo sapiens_ look more like one another than any other humans, and certainly more than full-grown great apes, we don't look identical. But in time we might because, clearly, even today, we remain genetically predisposed to find younger, more childlike faces attractive.
The slow realignment of our looks over millions of years may have caused us to appear more childlike, but as we evolved and became more self-aware, apparently even this failed to make us attractive enough, because for at least the past fifty thousand years we have creatively and enthusiastically taken matters into our own hands, modifying our looks without waiting for genes and evolution to get around to the job. You and I can't take much credit for the blue eyes or blond hair, the long, thin bodies or the round, stout ones, that our parents passed along to us, for the simple reason that they are nothing more than a genetic toss of the dice. But the extravagant measures we take to enhance our appearance that Darwin studied so exhaustively illustrate, pretty dramatically sometimes, something that we do that other animals don't, even other primates. We imaginatively elaborate our appearance, which may be one of the key behaviors that separate us from the rest of the animal world. But what is even more intriguing is that we don't simply tinker with our looks; we change our behaviors, too. We don't simply try to _look_ sexy, we try to _act_ sexy, and that, as a species, has taken us into entirely new territory. Some of these modifications have obvious animal analogues, but with distinctly human twists. For example, why roar like a lion when you can show up on a date with a Porsche Carrera or a Harley-Davidson Night Rod Special? We not only use clothing to protect us from the elements, but also to improve our looks and make statements about status, power, and confidence. These behavioral elaborations even help to explain our affection for what sociologist and economist Thorstein Veblen termed "conspicuous consumption" in his landmark 1899 book _The Theory of the Leisure Class_. Possessions—the latest smartphone, the most fashionable house, the biggest diamond, the hottest dress, the richest fabrics—are all human-made fitness indicators.
You might say, well, this all seems fairly banal, finding ways to make yourself more attractive to the opposite sex, and I am willing to agree. But you can make a powerful argument that efforts to enhance and amplify ourselves to impress potential sexual partners laid the foundation for far more creative endeavors, undertakings that have made much of modern human culture possible—song, art, invention, wit, storytelling, and humor. It could be argued that the foundations of human creativity and culture can trace their roots to our early efforts to consciously make ourselves more irresistible. Psychologist Geoffrey Miller has argued that just as shapely bodies and symmetrical, youthful faces signal physical fitness, creativity itself is a sign of mental fitness, something that has enormous value to a potential mate, and therefore a trait that evolution would "encourage." Of course the organ that is the engine of all of this creativity is your brain. It may have evolved to make sense of the world you live in, but among us _Homo sapiens_ it has become extraordinarily effective in generating all sorts of appealing behaviors and countless personal decisions that make you cooler, sexier, and downright captivating. It enables you to be witty, conjure startling ideas, master the piano, dance better, sing beautifully, or become a more stable and loyal partner. It's a kind of universal machine that can turn itself to nearly any goal, including the sexual capitulation of the opposite sex. Once our brains found themselves self-aware, Miller argues, they emerged as nature's ultimate indicators of fitness. Just as genes can deliver vibrant feathers or neon colors, brains bent themselves to the work of upping our desirability in a million different ways. He calls this the "healthy brain theory." The idea that creative behavior makes us sexier isn't brand-new.
The old master Darwin, keeping in mind the antics of prancing and warbling birds, speculated in _The Descent of Man_ that humans used both dance and song to win the hearts of potential mates. "I conclude that musical notes and rhythm were first acquired by the male and female progenitors of mankind for the sake of charming the opposite sex. Thus musical tones became associated with some of the strongest passions an animal is capable of feeling... We can thus understand how it is that music, dancing, song and poetry are very ancient arts." In another part of the book he writes, "As neither the enjoyment nor the capacity of producing musical notes are faculties of the least use to man in reference to his daily habits of life, they must be ranked among the most mysterious with which he is endowed... Whether or not the half-human progenitors of man possessed... the capacity of producing, and therefore no doubt appreciating, musical notes, we know that man possessed these faculties at a very remote period." In other words, there don't seem to be many practical reasons why talents like music, dancing, and other arts evolved, but they did, so there must have been some powerful selective forces at work to bring them into existence, winning over the opposite sex, for example. Zoologists have an oddly charming name they use to describe the singing or dancing or fighting that animals do to gain the attention of potential mates. They call it lekking. It's a way of strutting your stuff, letting the creatures you are wooing—not to mention any competitors that happen to be nearby—know just how fit and cool you are. When we stand around at a party and talk, the human version of lekking is rampant and intricate. We show off the way we look and dress, revealing subtly (or not) the clothing or jewelry we wear or the gadgets we have on hand. But the real action is in how we behave. Are we funny, insightful, charming, articulate, and quick-witted?
If we are, we are advertising a first-rate mind. The more talent and creativity we bring to the party, the more likely we are to be noticed. Being outstanding is a good thing when vying for the attention of others. We cultivate these behaviors in subtle and complex ways that even we aren't consciously aware of. Researchers have found that women, for example, laugh more when they are in the company of men. This isn't because men are exceptionally funny, but because (subconsciously) women are encouraging men to lek so they can gather information and observe what the man has to offer. The more she laughs, the more he shares and reveals. And the more he reveals, the better she can judge what he offers in ideas, values, talent, and personality. If she likes what she sees, she may eventually offer him the benefit of her company. If not, the laughter stops and she moves on. This probably also explains the results of a 2005 study that indicated that women are attracted to men who make them laugh while men are attracted to women who find their jokes funny. A recent study of 425 British men and women indicated that artists, poets, and other creative "types" had two to three more sexual partners than the average Brit who participated in the study. Whatever else you might conclude about bohemian lifestyles, it seems that creativity has its attractions. Another study has found that professional dancers (and their parents) share two specific genes associated with a predisposition for being good social communicators. The theory here is that dance and song were primal ways that our ancestors bonded, prepared for battle, or celebrated, and that creative dancers not only boasted great rhythm, but great social skills, which together made them especially attractive. This would make dancing both a way to show off physical fitness _and_ a healthy brain, a kind of evolutionary twofer. Could it be that charm, creativity, and rhythm all go hand in hand?
We can speculate, but the truth is it has been a struggle for scientists to take behaviors such as art, sculpture, storytelling, and music seriously because each seems, from an evolutionary point of view, so impractical. They also resist cold analysis because they are hopelessly subjective. Mostly the field of evolutionary psychology has concluded that music, song, dance, and art are best explained as accidental by-products of other forces that created the extravagant human brain. Nothing more than evolutionary filigree. But again, Geoffrey Miller begs to differ. He argues that our elaborate human behaviors evolved for the same reasons peacock feathers did, or the rainbow colors on mandrill snouts—they represent powerful personal marketing that lets the opposite sex know how extraordinarily fit the brains of their owners are, which in turn makes them great potential mates. "The healthy brain theory," he says, "proposes that our minds are clusters of fitness indicators: persuasive salesmen like art, music, and humor, that do their best work in courtship, where the most important deals are made." I believe that Miller is correct, but I also believe that advertising our cerebral fitness is good for more than landing mates, as crucially important as that is. In fact, creativity of all kinds may trump sex as _the_ most central force in human relationships because, beyond sex and sexual selection, survival is also, ultimately, about power over your environment. And fit brains not only demonstrate power, they generate it. In 1975, Amotz Zahavi, a biologist at Tel Aviv University, conceived a theory that was fascinating because on the surface it was so counterintuitive. He thought it might explain some of the exceedingly impractical traits and behaviors we see in nature that seem to hamper animals rather than help them. Why, he asked, would peacock feathers evolve when they weigh so much and their colors risk attracting the attention of predators?
Or why, when an impala senses a lion nearby, does it bound straight up in the air (something called stotting), wasting valuable seconds before it sprints in the opposite direction? Why do bowerbirds create intricate and ostentatious nests for their mates that include everything from seashells to rifle shells when a simple bundle of woven grass would do the job just as well? To answer these questions he conceived the "handicap principle." Zahavi already knew some of the traits and behaviors could be explained as ways to win mates. But he also knew they help establish status. The peacock isn't simply saying, "See my remarkable feathers." He's also saying, "And have you noticed how strong I must be to get off the ground and fly with these enormous things weighing me down?" The point for potential mates is clear—I'm handsome _and_ strong. But the same message is simultaneously sent to predators and other peacock competitors: "Don't mess with me. I'm top dog. I know it. You know it. So let's all just take our place in the pecking order and move on." In the same way, an impala's pogo-stick bound before it sets out to escape from a predator may waste time and energy, but it also tells a stalking lion, "As you can see, I'm pretty healthy and rather quick. You might want to think twice before taking the time to chase me." As often as not, the lion does a quick and primal cost-benefit analysis, walks away, and looks for a less challenging meal elsewhere. These are survival strategies, pure and simple. The point is, even seemingly inefficient traits and behaviors have their purposes, though they might not be immediately obvious. It's not always about sexual selection. Sometimes the traits make you attractive, sometimes they represent a sophisticated way to survive, sometimes they help reinforce status, and sometimes it's all of the above.
If ever there was an example of an organ that was costly, yet delivered an enormous payoff, the human brain is it—the ultimate peacock's feathers. It devours enormous amounts of energy (far more than any other organ in the body), is outrageously complex, and subject to breaking down (with disastrous results). Yet what powerful messages it can send about its owner, and its owner's fitness! This makes the human brain the most elaborate example of the handicap principle in all of nature, an extravagance that expends enormous amounts of energy illustrating how extraordinary its owner is by conjuring the most surprising and creative things it can itself conceive. How else can you explain Beethoven's Ninth, Picasso's _Guernica_, and sculptures from Michelangelo's _Moses_ to the great and intricate Buddha of Kamakura, Japan? Why Fred Astaire, Kabuki Theater, James Joyce, Cirque du Soleil, Steve Jobs, Gregorian chant, and _Avatar_? In short, how do you explain all the seemingly impractical yet ubiquitous examples of human creativity and inventiveness? Because the brain is invisible, unlike peacock feathers, it reveals its fitness by generating behaviors that are extra–ordinary, surprising, and impressive. To be surprising means to be different and unexpected, again, out–standing. To be impressive the behaviors have to be something others find difficult to do. The two together define creativity. The scale of human invention is broad and deep. It can encompass everything from the merely pleasing to stunning genius. When you think about it, the brain's capacity for generating captivating insights and behaviors is what makes each of us the unique people we are. We use it to fabricate the traits that define us—our wit, our charm, our drive, our insight, our humor and intelligence, our talent and interests. Some of us have been blessed with truly extraordinary gifts—Shakespeare, the ultimate storyteller; Leonardo, the ultimate imagineer; Einstein, the ultimate problem solver.
The rest of us stake our ground somewhere between profound genius and a good one–liner. Why is this need and appreciation for creativity so deeply plaited into us? Because the advantage of a brain that can do surprising, remarkable, or outrageously pleasing things is that it gets attention, or rather its owner does, and that attention can be translated into fame, influence, goodwill, leadership, sex, and, in modern society, money. Look at the people we admire or reward across all cultures. Dancers, singers, thinkers, comedians, actors, political leaders, entrepreneurs, and businesspeople, even an occasional scientist or journalist. (I am not including athletes here because we don't reward them for their intelligence, though their intelligence may certainly contribute to their success.) All of these people display unusually fit brains because they are both inventive and able to effectively communicate their inventiveness. Whatever else we may think of them, we have to at least agree that they are not boring or predictable. They stand out, and in standing out, they aggregate the most important human commodity of all—power. We often think of power as a bad thing, possibly because it can be abused with depressing effect. But in nature acquiring power is crucial to survival. All living things seek it because without it they will die. Plants may acquire it in the form of nutrients from the soil and the sun. A silverback gorilla or bighorn sheep may acquire it with raw strength. With most animals power flows to them in direct proportion to how well the genes they inherited match their environment. Penguins would be powerless in the tropics, and Komodo dragons would be equally helpless in the Arctic. Cheetahs maintain power with speed, wildebeests in numbers, and condors with flight. But we humans apply our brains, not simply our genes, to acquiring power, and because we are so genial, we seek it not only to survive our physical environment, but our social one, too.
Survival in a social context isn't quite as literal as it is in a physical one. If you don't survive physically, you die. If you don't survive socially, it means you don't matter, and that is, in its own way, also deadly. Mattering is itself relative because in today's world we can live in a wide variety of social circles. We can't all matter as much as those examples I mentioned earlier, Aristotle or Confucius or Einstein, Leonardo da Vinci and Shakespeare—people whose creativity made an indelible mark on human history. But we can matter to our city or officemates or family or Facebook friends—the modern equivalents of the tribe—and that is important because how we stand with our tribe deeply affects how we feel about ourselves. Today we can even have multiple tribes to choose from, and the World Wide Web allows us to create instant new tribes to whom we can display our cerebral fitness. The important thing is that we matter, to someone. Because if we don't, the alternative is chronic depression, or worse. Creativity isn't the only way we strive to matter and gain power, but it's the most functional, sensible way. It doesn't require greed or jealousy, envy or outright violence, all of which can be highly effective, if immensely damaging, methods for gaining power. But these don't reveal a fit brain. Creativity does. It is the most impressive way to earn the attention of others. And thankfully, over the long haul, it works; otherwise it would long ago have been swept from the index of our behaviors. We would be without art, music, and dance; there would be no pyramids at Giza, no Taj Mahal, no Brahms, Voltaire, Goethe, Yeats; only brutality and violence, and therefore, very likely, no humans. The idea that the foundations of human civilization are largely an unintended consequence of complex brains wired to draw attention to their owners is both paradoxical and startling. Brains did not evolve to be creative; they are creative by the accident of evolution.
And in becoming so, the exciting and innovative sideshow that bubbled up from our primal need to matter to the opposite sex, our competitors, loved ones, and everyone else in our tribe eventually took center stage. Now, after thousands of years of our brains' showing off, we find ourselves enmeshed in this massively complicated, rich, and remarkable thing called human culture, sometimes revealing the evil in us, sometimes the divine, but always surprising and innovative because we have become utterly incapable of living without originality. There is no getting around the conclusion that creativity, though it may once have been evolutionary filigree, has become _the_ force that defines our species, and the behavior that separates us from all other living things. As creative as we are, we haven't yet solved the elusive question of when, or how, we managed to get this way. It's not as though evolution one day snapped its fingers and we were smitten. The cerebral infrastructure that makes such a thing possible has been long in the making. Nevertheless, evidence of human creativity in the sense we are talking about has been scarce until quite recently, if you can consider anything within the past seventy thousand years recent. It's true that tools and other technologies had been around for millions of years, and they require creativity, but they are not examples of self–expression or symbolic thinking the way a piece of sculpture, a painting, language, or a song is. The timing of this matters because creative self–expression of this kind only became possible when our brains reached a certain critical, but as yet undefined, level. Its emergence marks a watershed event in human evolution, arguably _the_ watershed event. Most paleoanthropologists agree, for the time being at least, that _Homo sapiens_ emerged 195,000 years ago. By this they mean creatures that were anatomically modern—they looked like us.
The oldest _Homo sapiens_ fossils were found in Ethiopia in 1961, but sadly no trace of symbolic thinking was found with them, no tangible demonstrations of brain fitness. This has created the underlying suspicion among scientists that though these people looked like us, they may not have _acted_ altogether like us. They made tools that were incrementally better than the tools of those who came before them. They certainly lived rich and complicated social lives. But all the fossil and genetic evidence indicates that mostly they still roamed the same grasslands in East Africa, hunting game and struggling to survive, as so many of their ancestors before them. For over a hundred thousand years the first of our kind lived this way, resembling you and me physically, and perhaps in many ways emotionally, but apparently not mentally. It was as if the brain had reached regulation size, but hadn't yet completed all the wiring and biological alchemy needed to summon up a mind that saw the world quite the way we do. This has been a gnarly problem for scientists because you cannot fathom the minds of creatures with whom you haven't the luxury of sitting down and talking. Around seventy–two thousand years ago, on December 27 in the Human Evolutionary Calendar, we begin to see the evidence of a change in what might have been a hotbed of rapid human intellectual development—those coastal cave communities of South Africa where, according to Curtis Marean, small _Homo sapiens_ communities found themselves within a gnat's eyelash of total annihilation. At Blombos Cave the evidence tells us that a small handful of _Homo sapiens_ were decorating tiny nodules of hematite, a kind of iron rock, with geometric designs, cross–hatchings that may have represented some kind of symbol, still indecipherable to us. In the same cave, but later in time, scientists have also unearthed perforated ornamental shell beads, arguably the first evidence of human–made jewelry.
These discoveries were made in the 1990s and early 2000s, but then in 2010 a team of paleoanthropologists reported finding nearly three hundred fragments of decorated ostrich eggs in the Diepkloof Rock Shelter, another South African cave complex. Each shell is sixty thousand years old, and each was painstakingly etched with precise crosshatched designs, proof, the team believes, that the people who made the markings considered them important symbols. If this theory is correct, the cross–hatchings found on rocks twelve thousand years older may be more than meaningless doodles, as some scientists suspected when they were first found. Did they contain some secret message? Words, perhaps? Or calculations? An early form of sheet music, maybe? Or someone's grocery tab? Their significance remains elusive, but enticing. Despite these clues, and some scattered signs that Neanderthals in Europe had attained a semblance of symbolic thinking, the evidence for creativity of the indisputably modern human variety doesn't begin to appear until around forty thousand years ago, and by then the evidence is both stunning and global. By this time _Homo sapiens_ had made their way out of Africa for good and were busily populating Europe, east and south Asia, and making their way through Indonesia clear to northern Australia. There on the rock walls of Australian caves, ancient humans began to paint symbolic figures and animals, having improved, perhaps, on the creative habits of their ancestors from Africa who had found and used ocher or cryptically symbolized their feelings and insights on the shells of ostrich eggs. Afterward more proof of symbolic thinking begins to surface. Archaeologists have found small but remarkable sculptures, sometimes of penises, but more often of large–breasted, pregnant women carved by talented artists, beginning thirty–five thousand years ago. 
They call these Venus figurines because they seem to be talismans of fertility, a trait undeniably crucial to a species that certainly found strength in numbers, but whose life spans rarely reached beyond their thirties. Most of the objects are small and portable, custom–made, perhaps, for magically connecting with the mysterious forces of nature. From Western Europe to Siberia anthropologists have found these small sculptures, and along with them figurines of chimeras—half–human, half–animal—all astonishing indications of a mind unlike anything the powers of life had produced in the long course of its 3.8 billion years of existence. Creatures that could not only imagine other worlds, beings, and forces, but express their imaginings, in the hope, somehow, that they could tap the strength of those mysteries. Some of the most breathtaking art was created by the Leonardos and Michelangelos of their time deep in caves in Lascaux and Chauvet, France, and Altamira, Spain, as the last great ice age began to release its frigid grip on Europe. These images would be the envy of art galleries around the world today, or Madison Avenue marketeers—rich, vibrant, and ingenious. You can almost see them move and ripple in the flickering firelight that once illuminated the cave walls as the Cro–Magnon artists stood with their palettes of primordial paints and dyes, dabbing the walls, extracting the beasts from their minds and applying their images to the rock. What powerful magic this must have been to the painter and those who witnessed the work. How could any creature imagine such things and then make them appear right before your very eyes? What hidden powers could enable a living thing to consciously and purposefully create beauty out of nothing more than the popping of the synapses in his head?
So far more than 150 caves have been found in Western Europe, primal cathedrals where the walls have been saturated with the conjurings of artist humans showing off the startling fitness of their brains. We can only imagine how revered people like these must have been, made powerful because from their fingers flowed the symbols of the beasts that fed and clothed and killed these itinerant hunters. How out–standing they must have seemed. The purpose of these paintings remains a mystery. Colored footprints of both children and adults that show up on the floors of some caves signify, for some, that rites of passage were performed here as boys made their transition to manhood, or girls became capable of bearing children. Other caves seem to have been a kind of play school for ancient human children dabbling in the art of art. Some have wondered if the images became a way to control the creatures they depicted, or to draw out and drink in their predatory strength. Maybe these were the theaters of their day, where great stories were told of heroes and their exploits, or a place where men hunted, virtually, in a kind of primeval video game, imagining with their paintings the ways they would bring down prey when, at last, the long and punishing winters ended. Strangely, the cave paintings almost never depict a human form, and when they do, the figures are sticklike, as if humans are minor players in a larger drama. Are these the remnants of a creeping epidemic of human creativity, isolated breakouts of beauty? Are they examples of a new kind of mind, self–aware, curious, and brimming with ideas and emotions, that had no choice but to express itself for the pure joy of it, like a child playing with crayons, or a graffiti artist saying, "I'm here! And I matter." These settings, perhaps because they are encased in rock and filled with the ghostly work of their artists, feel sacred and magical.
It's easy to imagine ceremonies of some kind taking place within the bowels of the earth accompanied by chants and primeval music. Archaeologists have found drumsticks, flutes, and a prehistoric instrument called a bull–roarer near the caves of Lascaux. You can hear the rocky acoustics amplifying the chants and music, the drumsticks beating out a steady rhythm accompanied by the eerie thrumming of the bull–roarer, a sound like the breathing of some great sleeping beast, all combining to make a powerful and ancient symphony that moved and bonded the new kind of primates who listened. Music may be the most ancient of human arts. Chanting and dancing were arguably practiced by tribes of _Homo erectus_ over a million and a half years ago, and later by _Homo heidelbergensis_, the common ancestor of both _Homo sapiens_ and Neanderthals, seven hundred thousand years ago. Thirty–five thousand years in the past, dancing and music had likely become much more complex than the varieties our more ancient predecessors practiced, a way to entertain and express personal feelings as well as to bond and celebrate. The importance of dance and music in the human psyche is probably best illustrated by a single startling fact. We are the only primates that can tap our foot or move our body in time with a specific rhythm. It's wired into us, but not into our chimp or gorilla cousins, which tells us that it is a trait that, like language, big toes, and toolmaking, evolved sometime over the past seven million years. It's difficult to explain why _Homo sapiens_ took more than a hundred millennia to show off the creativity that stands as the irrefutable proof that the stock from which you and I sprang had truly arrived, but that hasn't stopped it from being passionately debated. Some paleoanthropologists argue that an explosion in _Homo sapiens_ population seventy thousand years ago eventually generated competition that in turn encouraged innovation.
Others believe that there was no "big bang," no sudden blossoming of human creativity and symbolic thinking at all. Instead we are simply seeing the slow and aggregated results of gradual human progress that finally left behind enough proof in the fossil record that it existed. Others have argued that as the human race grew, creative ideas that had once been conceived but later lost were now picked up and passed along more easily. More of us were around to ensure that great ideas were absorbed, reused, and built upon rather than wiped out when the innovator passed away. Another possibility exists. Stanford paleoanthropologist Richard Klein holds that the catalyst for human creativity didn't happen outside in the real world, but inside our heads—a genetic mutation, or series of them, that transformed the way our brains functioned so that symbolic thought and the creativity it makes possible erupted from our ancestors' minds like Athena from the head of Zeus. Somewhere, somehow, he believes, the wiring or the chemistry of the brain changed, perhaps subtly, and crossed an invisible threshold that made it possible for us to attach complex meaning to otherwise meaningless pictures, objects, or sounds. Images could represent gods; beads and shells could represent value; shapes could stand in for ideas that anyone who saw them would mutually, and immediately, understand. Sounds could become symbols for words, and symbols could be built into the grammar and syntax that make language the remarkable thing it is. Once this happened, says Klein, "humanity was transformed from a relatively rare and insignificant large mammal to something like a geologic force." The mechanisms for this change are unknown. It could be random genetic mutation, or, as University of Cape Town archaeologist John Parkington theorizes, a new kind of diet. 
Parkington believes it is not a coincidence that the early humans in South Africa who were making jewelry from seashells were also eating large amounts of seafood out of those very shells, and that food was providing the fatty acids that we today know are crucial to brain health and function. The new sources of food, he believes, combined with a more modern cerebral architecture than that of earlier humans, made these _Homo sapiens_ "cognitively aware, faster–wired, faster–brained, smarter," and their seashell jewelry, art, and technical advances stand as the proof. There is evidence that the chemistry of the modern human brain, especially the prefrontal cortex, the most recently evolved part of us, operates differently from that of other primates. When scientists in Shanghai, China, compared one hundred chemicals in the brains of humans, chimpanzees, and rhesus macaques, they found that the levels of twenty–four of them were drastically higher in the human prefrontal cortex. It would be interesting to know how these levels would compare to those in the brains of Neanderthals, _Homo ergaster_, or even _Homo sapiens_ who lived more than seventy–five thousand years ago, but, of course, none of those specimens exist. Would we find that somehow the brain had leaped chemically forward, allowing us to cross some unknown hormonal Rubicon? The findings indicate that when it comes to glutamate, the main excitatory neurotransmitter in our brain, we modern humans are in a league all our own, constantly burning vast reservoirs of it compared with other primates. This may reinforce Parkington's theory that something has made us "faster–brained." As it happens, our penchant for inventiveness is also linked to our species–wide predilection for youthfulness. That shouldn't surprise us. When you look at creativity in action, it bears a close resemblance to a child at play.
One of its hallmarks is that concepts, thoughts, words, or objects that don't normally go together are joined in novel ways and result in something that is useful or arresting or jaw–droppingly beautiful. When these coalitions come together in a eureka! moment, something that once seemed improbable now stands, right there, real and complete. For children nearly everything in the world is new, and so almost any combination of unfamiliar experiences can result in those moments of discovery. Since so much is unfamiliar in a child's experience there is enormous room for learning. But as we grow older and experience more, the space for true innovation narrows, and the stakes rise. The creative bar becomes trickier to reach. Startling is tougher to come across. Still, we humans manage to do it every day, day after day. And the reason we do is that, of all the apes, we are the most childlike. By shifting the time when genes express themselves, and by rearranging brain and hormonal chemistry, neoteny not only transformed the way we look, but the way we act. Cognitive scientist Elizabeth Bates wrote about the power of neoteny and its ability to generate powerful change in 1979, but at the time she didn't connect it with creativity; she associated it with another benchmark event in human evolution, language. She (and others) believes that a human "language acquisition device" evolved, like nearly everything else in life, by recombining a variety of preexisting capacities into a new configuration. Human language, she argued, was built on the shoulders of "various cognitive and social components that evolved initially in the service of completely different functions... [and] that at some point in history, these 'old parts' reached a new quantitative level that permitted qualitatively new interactions, including the emergence of symbols."
Put another way, neoteny helped shift the growth patterns of one or more capacities our ancestors already possessed for interacting with one another and commandeered them for new uses. If neoteny played a central role in the emergence of language, could it also have played an earlier role in the ingenuity that symbolic thought requires? It's possible. The timing of the expression of certain genes, including genes that control brain growth, made and makes our long childhoods. It extends the time our brains are pliable and able to bend to our personal experience. But because human neoteny is _so_ extreme, it has done even more than that. While it acts most powerfully during our childhood and makes childhood possible, it also extends childlike behavior throughout the long course of our lives. Even in old age, we are more childlike than other primates are in their youth. The brain flexes and muses and creates right up until the end. "We don't stop playing because we grow old," the aging playwright George Bernard Shaw once mused, "we grow old because we stop playing." This means we are not only children longer, we are childlike longer, and that has made us by far the most creative and adaptable creatures ever. "We are not a computer that follows routines laid down at birth," Jacob Bronowski once observed. "If we are any kind of machine, then we are a learning machine." This is why child's play and creativity are so deeply linked. _Play_ has multiple meanings depending on whether you are an anthropologist, psychologist, parent, or child, but among its hallmarks are the simple joys of pushing boundaries, expanding limits, randomly galumphing around to see what happens just for kicks. Even long–faced philosopher Martin Buber had to admit, "Play is the exultation of the possible." At the heart of playing is the strange phenomenon of curiosity. You really can't have one without the other. 
One theory about curiosity is that we are all born "infovores," that we crave new knowledge and experience in something like the way we crave food. It's a kind of mental and emotional hunger that requires ongoing feeding and satisfaction. Old knowledge doesn't satisfy our curiosity because it's familiar; we have "eaten" it before. So how do we know when something is new? Because it surprises us, because it's different from what we are used to, fresh. Every creature has an evolved talent for identifying what is surprising or out of the ordinary for one simple reason: it's central to survival. Those that fail to tune in to the change around them, those that aren't sensitive to surprise, soon join the legions of species no longer with us. It's a talent that reaches back hundreds of millions of years. For modern humans like you and me this makes curiosity a way to gather new information that has survival benefits, but also a process for gathering the building blocks out of which we assemble entirely new experiences and new forms of knowledge. One of the behaviors that makes us different is our affection for playing around randomly, joining this with that or that with another thing with no particular reason except to create more surprises that satisfy our curiosity, which in turn results in still newer experiences, new inventions and insights. Innovation and originality are by–products of our lifelong, childlike love of goofing off! In some ways, play resembles evolution itself, randomly introducing unpredicted and unpredictable innovations the way random mutation reshapes DNA. When you think about it, adaptation in nature is a kind of learning. Something different comes into the world, and living things adjust genetically. The adjustment is serendipitous, not conscious, but it happens. Play does something similar. It randomly introduces new experiences to our minds, again and again. We encounter novelty, and when we find it useful or enticing, we make it ours.
It literally changes our minds, and therefore us. And since not one of us learns quite the same things, since each of us plays in different ways and is surprised by different experiences, your changes of mind are different from mine, which makes each of us unique. Our view of the world is not entirely distinct, but distinct enough that we ourselves become new and surprising additions to it. This also means that you and I can learn from one another by sharing our differences, a little like the way two parents' different chromosomes combine to create a genetically unique child. By acquiring new experiences and then sharing them, ideas and originality become sticky and spread from mind to mind. No matter how long we live, we can't seem to root the child out of us entirely, joyful in its experimentation, never satisfied, hungry for knowledge, and eager to show it off. When you look at us this way—a lifelong child, with a mind itching to play, and famished for surprise—you can see how the power for creating originality out of random experience, and the ability to share those experiences, could have taken us from a mere ten thousand or so primates scrambling back from the abyss of extinction seventy–five thousand years ago to seven billion creatures who have not only populated every corner of the planet, but managed to rocket away from it a few times to orbit and land elsewhere in the solar system.
By connecting the surprising experiences and ideas we spawn or stumble across, and then sharing them with one another, we have been able to construct great edifices of new knowledge—Pythagoras's geometry, Newton's and Leibniz's calculus, the wheel, clocks and longbows, the Saturn V rocket and the silicon chip and balalaikas, silk paintings, the telescope, money, sailing ships and steam engines, kissing and language, music of all kinds and toys of every imaginable stripe, chess, baseball, sculpture, and van Gogh's _Starry Night_ —all of it out of the combined, interlocked, unique imaginings of millions of minds shaped by billions of surprises shared in trillions of exchanges to create the chaotic, astonishing, tumultuous stew we call human culture. In this sense, we are a race of continually startled, and startling, creatures. Once the adaptable nature of such a pliable human brain had been sufficiently honed and wired to make all the improbable internal links needed to connect "new" into still newer creative acts, human culture was guaranteed to evolve at exponential speed. However it all happened exactly, clearly something radically different was emerging in the brains and minds of _Homo sapiens_ from Europe to Africa to Australia between seventy–five thousand and forty–five thousand years ago. Some sort of cerebral critical mass was frothing. Neoteny had created a nimble, pliant brain that remained flexible throughout life and generated both unique people and unique ideas. We had evolved into born learners, genetically encouraged to seek out and, by some strange neuronal alchemy, devour surprise and transform it into knowledge. This may be why we, and not Neanderthals, are still around today to wonder where we came from. It may explain why you are at this moment gazing at a page of symbols I have typed that your mind, rather astonishingly and without much seeming effort, translates into thoughts you can understand. 
Neanderthals lived faster than we did and they died younger, and possibly therein lies the reason we remain and they don't. Though they, too, were neotenic and time had also been genetically rearranged for them so that they were born earlier and remained young longer than today's chimpanzees, gorillas, and orangutans, their childhoods were not as long as ours. This gave their brains less time to shape their personal experience, their ideas, and their personalities before they began to grow more rigid. And growing more rigid, they may have been a less childlike species, less prone to experiment. That would have made them less adaptable. Perhaps this was also true of the Denisovans, and the Red Deer Cave people of south China, even the "hobbits" of Indonesia. Their minds may have been as sharp, but not as plastic, as those of the _Homo sapiens_ who had recently migrated out of Africa. Perhaps they all became more set in their ways sooner; more adult, you might say. Neanderthal tools, and the little we know of their rituals, indicate they were on the cusp of our brand of symbolic thought, but some pieces, we don't know how many, didn't quite fall into place in a way that allowed them to remain among us today. If their language was songlike, as Mithen has theorized, they might have expressed the emotions they were feeling more than the explanations of why they were feeling them. There might have been more passion, less logic, or maybe less of a balance between the two. We can imagine them as a bright, lyric, almost mystical species, but not a fully symbolic one. Perhaps they lived in a kind of surreal, Daliesque world, less self–aware, not altogether capable of encapsulating the ephemera of the new thoughts their minds conjured into carvings, sculpture, patterns, or images made from strokes of paint. Because that is what symbols do: they translate thoughts and ideas into tight little packages of meaning for delivery from one mind to another. It is miraculous, really.
Maybe the Neanderthals weren't radically different from us, or less intelligent; they may simply not have been able to play their way into symbol making as complex as the kind we stumbled upon. In particular, perhaps they couldn't play their way into the most shattering gift of all—spoken language as we know it today, complete with the bells and whistles of grammar and syntax. Perhaps. Our youthfulness, our propensity for playing with, and juggling and shuffling, surprising experiences and insights continually and in more startling incarnations must have cried out for an invention as elegant as language. When you step back from it, language is something like a piano. Using nothing more than a piano's eighty–eight keys, a player can express an infinite number of songs, and infinite variations on those songs. With language we can express an infinite variety of thoughts, feelings, ideas, and insights. Before modern language, our ancestors may have been capable of gesture, art, and song with which to bundle and share the flickerings of their minds, but imagine how modern language must have supercharged human creativity and the culture that was assembled out of it. The thing is, while language connected us to one another more closely than ever, it also enabled us to pull off another remarkable feat: it made us aware that we are aware. It may also have made madness possible.

## Chapter Eight: The Voice Inside Your Head

_I am a strange loop_. —Douglas Hofstadter

If you could shrink down to the size of a molecule and slip into your brain, you would find yourself flying among billions of neurons along great highways of dendrites and axons with streams of chemicals splashing across synaptic gaps and firestorms of electricity arcing all around. At this scale, the real estate of your mind would be vast, planetary in its dimensions, as you rode your molecule–size vehicle.
Everywhere commands that make it possible for you to walk, breathe, see, smell, speak, reflect, and imagine would be at work. Witnessing the weather of your thoughts and feelings like this would be extraordinary, but even from this vantage point, or maybe because of it, you could never imagine that all of the impulses and chemistry blowing up and down the intricate infrastructure around you could possibly _be_ you. Yet it is. You are assembled from these nonstop, chaotic processes; the rolled–up, aggregated chemistry and biology through which you are zipping. Stupefying but true. The bewildering mystery of how this happens is what Douglas Hofstadter hoped to resolve when he penned his landmark book _Gödel, Escher, Bach_. He wanted to figure out, he wrote, "What is a self, and how can a self come out of stuff that is as selfless as a stone or a puddle? What is an 'I'?" It's an easy question to ask, and we've been asking it for as long as we have been around. The answer, though, is just a bit tougher to come by than the asking. But we can try. You may have noticed from time to time that when you think, you find that you are talking to yourself; not necessarily out loud, but in your mind. Nearly every waking moment we describe what is going on in our minds to ourselves, like a sports announcer calling a game, remarking on what we see, commenting on our own insights, planning our lives, polling ourselves on what we feel, wondering why this and how that? "God, I'm edgy this morning. This coffee is delicious! Hmmm, rain, better grab the umbrella. That's an interesting choice in a hat, if you are insane! Don't forget to get the oil changed and pick up the milk. You know you really have to get better at remembering people's names." Our reflections run the gamut from the mundane to the ethereal, occasionally the sublime, but they almost never stop, until we fall asleep. 
When you think about how you think about yourself, you are experiencing what psychologists call metaconsciousness, the ability to be aware that you are aware. Though we take it for granted, this capability requires language, and the interesting thing about that is that language requires using symbols that we sound out in our minds so we can understand what we are saying to ourselves. That we can do this is remarkable enough, but doesn't conversing as we do with ourselves make you wonder, if _you_ are doing the talking, then whom are you talking _to_? Or, if you are listening, then who, precisely, is talking? Are we one person or two? Or many? Where does that voice we call "thought" come from? Who is the voice in your head, and how did it get there? In the 1970s the Princeton psychologist and philosopher Julian Jaynes wrote a fascinating, bestselling book with the rather opaque title _The Origin of Consciousness in the Breakdown of the Bicameral Mind_. Jaynes's insights as a philosopher and a psychologist remain respected (he passed away in 1997), but the book was highly controversial. He speculated that consciousness of the kind I just described is an extremely recent evolutionary development. Between 10,000 B.C. and 1000 B.C., he argued, modern humans thought that the voice they heard in their head wasn't their own, but the voice of a chieftain or demon or god, some very real being outside their own minds. In other words, they didn't talk to themselves the way we do; they instead believed they were listening to another all–knowing being who was observing them and their thoughts. Jaynes called this kind of mind bicameral, or two–chambered: one chamber that listened and one that spoke, but neither of which was aware that it was part of the same brain. "[For bicameral humans], volition came as a voice that was in the nature of a neurological command," he wrote, "in which the command and the action were not separated, in which to hear was to obey."
As evidence he points to the statues and idols that ancient cultures in Egypt, Sumeria, and Mesoamerica created as the physical symbols of the gods and chiefs that spoke to bicameral humans; the voices. This is the only possible explanation, he argued, for why they were built, and, having been built, why they exerted the enormous influence they clearly did over their cultures. Jaynes also maintained that the people who lived in these ancient societies did not in any way believe others truly died. They only moved on to another world and then, having arrived, spoke from that world directly to those left behind. And from that world the gods also spoke, commanding the creation of great temples and the elaborate rituals for their benefit because, after all, they _were_ running the show. The first laws, Jaynes explains, such as Hammurabi's Code and the Ten Commandments, were, as Hammurabi and Moses both said, rules passed directly to them by God. Is Jaynes right? Were we once incapable of thinking for ourselves? Or more precisely, was there a time in our evolution when we were unaware that we _were_ thinking for ourselves? It's impossible to know, absolutely, what was going on in the minds of humans living in ancient Sumeria, Egypt, or the Yucatán thousands of years ago, but we do know that the human brain, even today, sometimes struggles to identify the speaker within us as our "self." Schizophrenics hear a voice, or sometimes multiple voices, that they do not recognize as belonging to them. They come from "others" speaking from the outside. Yet brain scan studies illustrate that the voices are, in truth, being generated inside their own heads. The experience of schizophrenics, and Jaynes's theory, raise the possibility that somehow the human brain came up with a trick that helped it talk to, and control, itself. At some point it found a way to transform those external voices into internal ones. If that's true, though, then how did we manage it?
The answer begins with our brain's unique ability to create symbols and weave them together in outrageously complex ways. Other animals can't invoke symbols, but they can associate a single symbol or event with an experience they hold in their brain. Your dog, Fido, for example, may recognize the sound (but not the meaning) of the word _walk_ and associate it with something he likes to do at the end of a leash with you each evening, but that's the extent of the connection between the two. The _sound_ you make when you say "walk" brings to Fido's mind a specific experience, and so when he hears you make that noise, he runs for the door and waits. Scientists call this an iconic relationship between an experience and its external representation. If you move a little farther along the evolutionary chain, you will find that primates possess more sophisticated symbolic capabilities. Take the case of two remarkable chimpanzees, named Sherman and Austin, at the Language Research Center at Georgia State University. Both were trained to associate specific symbolic pictures, or lexigrams, with certain events. The lexigrams were imprinted on a series of buttons in the laboratory where they trained. When they pressed a lexigram, they might find themselves rewarded with a goody, like a banana, or banana juice. Pretty quickly the lexigram came to stand for the reward in the minds of Austin and Sherman, not unlike the way the sound _walk_ became linked to a good time outside for Fido. Once the two chimps figured out the one–to–one relationship between a specific image and its reward, the researchers decided to present them with a new challenge. This time they were required to use two buttons in combination, like a verb and a noun, to receive a treat. One kind of lexigram represented "give" or "deliver," and the others represented a specific kind of food. So to receive a banana, Sherman and Austin had to hit the lexigram for "give" and then the correct one for "banana." 
This took some doing because there were multiple combinations of foods and commands, or verbs and nouns. But after some intense training, the chimps got the hang of the new system. However, researchers had still more in store for the two hard–working chimps. Once they had absorbed the rules of the two–icon system, they were next provided a different alphabet of rewards and verb symbols that they had to use to receive treats. The question now became, could Sherman and Austin transfer the command–reward system they had learned to entirely new lexigrams on their own? After some trial and error, they again rose intrepidly to the challenge and learned the new system. The big insight here is that they had comprehended an underlying organizing principle that made it easier to master the new lexigram vocabulary, an ability called an indexical symbolic relationship, one in which an animal can transfer a particular way of thinking to different situations. It represents a huge leap from iconic thinking because iconic symbolic relationships merely require one–on–one memorization. If chimpanzees can manage this today, it's likely our direct ancestors going back millions of years could master something like it as well. So what makes us different from them? Our special ability is that we can not only make iconic and indexical connections between meaning and experience like Fido and Austin and Sherman, but we can also weave indexes of symbols into much more intricate systems of entirely new symbols, and we can do it in an almost infinite variety of ways. For example, you not only see the letter _e_ in the words you are reading on this page and associate a sound with that letter (an iconic relationship), but your mind effortlessly combines many _e_ 's and other letters into words, each of which has greater meaning than the individual sounds of the letters. And then you can pile together the words into sentences that have greater meaning than any one word, and so on.
You also simultaneously understand the context of the letters. An _e_ in one place can represent one sound, in another it can be silent (in English, anyhow). Words can also change their meaning depending on the context, just as letters do. The interrogative "Turn right, right here, right?" uses the same word in one sentence three times, yet each time it has a different meaning because its context is different. Your mind understands this because it also grasps the underlying rules of the English language. It grasps them even if it can't entirely explain them. Chimps may eventually and painstakingly be able to comprehend a simple system of language like "(You) throw ball"—subject, verb, and object. But there will never be the ghost of a chance that any chimp, even if he is the Shakespeare of _Pan troglodytes_ , will comprehend the intricacies of the apparently simple question "Turn right, right here, right?" One of the reasons he can't is that all human language is recursive, which is another way of saying that it can embed concepts within concepts. Just as letters are embedded inside words, strings of words can be embedded within sentences to make them more meaningful. Take the sentence "John, devilishly handsome as he was, refused the title King of the Prom, even though he secretly believed he deserved it." The ideas that John was devilishly handsome, that he secretly believed he deserved the title, and that the title was King of the Prom all invest the basic sentence "John refused the title" with much deeper meaning and a lot of useful information. They tell us about John, his motives, why he has the feelings he has, and provide some insight not only into how he looks, but how he looks at himself. Yet all of these additional nuggets of thought sit nicely nested, one inside the other. Unlike iconic and indexical meaning, this ability is richly symbolic, and unique to our kind of brain. 
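The recursive embedding described above can be made concrete with a toy sketch of my own (it is not from the book): a classic "House That Jack Built" chain, where each relative clause nests inside the one before it, to any depth.

```python
# Illustrative only: recursion in language, where each clause embeds
# the rest of the chain inside itself.

def build(noun_verb_pairs, core="the house that Jack built"):
    """Recursively embed each (noun, verb) pair around the core phrase."""
    if not noun_verb_pairs:
        return core  # innermost clause: nothing left to embed
    noun, verb = noun_verb_pairs[0]
    # The remainder of the chain becomes a clause nested inside this one.
    return f"the {noun} that {verb} " + build(noun_verb_pairs[1:], core)

sentence = "This is " + build([("cat", "chased"), ("rat", "ate"), ("malt", "lay in")])
# → "This is the cat that chased the rat that ate the malt that lay in
#    the house that Jack built"
```

Each call to `build` is a sentence living inside another sentence, which is exactly the concept-within-concept nesting that, the author argues, no chimpanzee grammar achieves.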
Language isn't the only symbolic system we have created that taps our special ability to recursively weave bundles of symbols into mosaics of meaning. We do it in mathematics with numbers and variables that can be built into proofs and theorems and formulas. In music we stack notes elaborately to construct melodies, then fashion melodies into themes and songs and symphonies, and even thread in harmonies with still other melodies, and finally, for good measure, add words to the melodies to create everything from pop hits to operas and Broadway shows. Paintings are collaborations of symbols, too. Different colors of paint are situated together to represent real objects or feelings or ideas so that together they fashion works that have broader meaning than each of the dabs or drops of color themselves. Georges Seurat's pointillist paintings are a perfect example—hundreds of thousands of individual dots of different colors that together bring an image alive. Without our ability to gather together an alphabet of symbols and connect them in elaborate, nested patterns, there would be no _Hamlet_ or _Faust_ or _Moby–Dick_ , no laws of thermodynamics, no science, music, architecture, no Kabuki theater, sculpture, Renaissance art, or anything else that has made the great, expansive construction project that we call human culture possible. Every iota of it is built on the unique and powerful talent of your _Homo sapiens_ brain to mysteriously direct the molecular machinations of its own neurons to manufacture symbols and then share them with the other symbol–recognizing creatures around you. This enables minds to meld, hearts to bond, and ideas to be shared and bent and shaped by many other minds. And the brain does this without really comprehending how it does it, something like the way a basketball player drives to a basket and deftly deposits the ball into it without a moment's reflection. 
We are able to embed symbols within symbols this way, and create intricate and outrageously complex thought structures, because our brain can take a concept, idea, or goal and set it aside temporarily while we shift our attention and work on something else. Scientists use two symbols to describe this earth–shattering capability: _working memory_. In its simplest form, working memory is something like taking a call on your cell phone, starting a conversation, and then asking the caller to hang on while you take a second call. You can then begin that conversation without losing sight that you have another conversation–in–waiting because you have filed away the first phone call as a symbol, a kind of "object" the brain can retrieve from a folder or drawer. It's almost as though it is a physical thing. This holds true for nearly anything we can think or imagine, from goals to concepts to worries. What's more, these "objects" we put on hold can have multiple concepts that live within _them_. We don't have to remember each of the individual pieces of information that reside within what we have set aside, we just have to be able to recall the big idea. And when we do, everything else comes along for the ride. If you are envisioning the Taj Mahal, you don't have to log and file away every detail while you shift your attention to preparing lunch. You simply prepare your meal, then reach back into your mind and pluck up the concept "Taj Mahal," and all of your thinking related to it returns like a nicely nested matryoshka doll, a file folder of the mind. It's not clear where, precisely, this talent lies in our brain. Like nearly everything else cerebral, it almost certainly doesn't reside in one place. The brain, like recursion itself, is nested and networked, woven together. 
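A playful sketch of the working-memory idea above (my analogy, not the author's): ideas parked as "objects" on a stack, where recalling the big idea brings all of its nested detail back with it.

```python
# Illustrative only: working memory as a stack of set-aside "objects",
# each of which can contain nested concepts of its own.

working_memory = []

def set_aside(idea):
    """Park an idea, with everything nested inside it, and move on."""
    working_memory.append(idea)

def recall():
    """Pluck the most recent idea back up; its nested detail comes along."""
    return working_memory.pop()

set_aside({"call": "first caller", "topic": "weekend plans"})
set_aside({"vision": "Taj Mahal", "details": ["dome", "minarets", "gardens"]})

taj = recall()         # the Taj Mahal concept returns, details and all
first_call = recall()  # the parked phone call resurfaces intact
```

The point of the sketch is the chapter's "matryoshka doll" claim: we store and retrieve the wrapper, not each individual piece of information inside it.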
But fMRI studies have found that humans and other primates have a region called the frontal operculum, which activates when they process indexical kinds of information, but only humans have the much more recently evolved Broca's area, which handles language and grammar and syntax, recursive symbolic systems. Broca's area is part of the human prefrontal cortex, a sector of the brain that sits directly behind our foreheads. It is one of the reasons we find ourselves with the large, childlike heads we have, and why our foreheads don't slope back as much as our cousin primates. The human prefrontal cortex (PFC) has, in evolutionary terms, sprinted toward its present state compared with other advances in brainware. While our brains have tripled in size over the past six to seven million years, our PFC has increased sixfold. When it comes to symbolic thinking, this is where the action is. The action is here because the PFC has evolved as the brain's chief executive officer. It polices, as much as they can be policed, the primal, impulsive activities of our minds. The PFC inhibits anger, fear, hunger, sexual attraction, and other strong, but ancient drives. Many of these capabilities emerged as our ancestors became increasingly social and more reliant on one another for survival. Evolution would have favored individuals with brains that were better at controlling purely selfish impulses and favored those that took a longer–term view of situations. When you live in groups, after all, it may not pay to act solely in your short–term self–interest since you may need the help of others in the future. So today perhaps you share your food so that another day food may be shared with you. Not only does the prefrontal cortex act as an executive this way, it allows us to think ahead by taking symbolized ideas, concepts, and memories and cobbling them together into scenarios that are completely nonexistent except in the brain itself, or, put another way, imagined.
By recalling information held in long–term memory, packaging that with new information, setting these newly organized symbols aside in working memory so they can percolate while we move forward with other goals and ideas, we advance through the day, prioritizing, organizing, imagining, worrying, creating. Sometimes the work is mundane, like figuring out how to get showered, make phone calls, answer e–mail, and catch the subway in more or less the correct order so we don't show up at an appointment late, unbathed and misinformed. Sometimes the work is profound and results in the special theory of relativity. You never know. It doesn't take much to imagine that the special abilities of the prefrontal cortex, whenever they finally came together, made our species dramatically different, outrageous really. Able to represent our thinking symbolically, then to embed these symbols inside one another, we became beings able to efficiently create, organize, and recall enormous amounts of complex information for still more revision. In many ways it produced in us the same sort of ability that digital compression algorithms make possible. JPEG images and MP3 sound files, which make showing off the family photos on your iPad or listening to your favorite music on your phone possible, are the results of compression algorithms. What makes them useful is that they are not perfect reproductions of the images and sounds they represent, but the formulas that create the algorithms excel at extracting just enough of the _right_ information to re–create a close facsimile that requires far less information and memory than the original. The copy is similar enough that most of us can't tell the difference, yet it's far less information intensive, and therefore more efficient. Symbols do the same thing. We don't remember everything. Just what we need to remember. 
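The lossy-compression analogy can be sketched in a few lines (a deliberately loose illustration of my own; real JPEG and MP3 codecs are far more sophisticated): throw away precision, keep just enough of the right information to rebuild a close facsimile.

```python
# Illustrative only: lossy compression by quantization. We store coarse
# codes instead of exact values, then reconstruct an approximation.

def compress(samples, step=10):
    """Quantize samples to a coarse grid: fewer distinct values to store."""
    return [round(s / step) for s in samples]

def decompress(codes, step=10):
    """Rebuild an approximation of the original from the coarse codes."""
    return [c * step for c in codes]

original = [3, 12, 19, 27, 98, 101]
codes = compress(original)      # [0, 1, 2, 3, 10, 10]
restored = decompress(codes)    # [0, 10, 20, 30, 100, 100]
# Every restored value is within step/2 of the original: not a perfect
# copy, but a close facsimile built from far less information.
```

The trade the chapter describes is visible in the numbers: the restored list is wrong in every position, yet never by more than half a quantization step, which is the sense in which symbols, too, remember "just what we need to remember."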
As helpful as all of this symbolization is in the business of expressing what you feel and think to others, it also makes it possible for you to do something else rather amazing: explain what you feel and think to yourself. In fact, it makes your "self" possible, and that may be the most stunning illusion our brains have pulled off yet over the past fifty thousand years—maybe the most stunning ever. The ability to create the ultimate symbol: you. Which takes us back to the question we asked at the beginning of this chapter: Who are "you," anyway? When you are thinking, and talking, to yourself, the you that you are speaking to is a symbol. Like a reflection in a mirror, it is a representation made possible because your brain can generate symbols. Just as your mind symbolically represents the other people in your life, it also uses this trick to represent a version of you, which makes possible an enormously powerful force in your life, this second you, who is diligently and deeply influencing your every feeling, thought, and choice. But to exert this influence, your brain pulls off still another astonishing feat. It changes itself physically. The symbolic "you" alters the real "you." Study after study has shown that the generation of our own thoughts and memories transforms the chemical and physical structures of our brains, in real time. Mind–boggling as we might find this, we shouldn't be horribly shocked. If our brains are the prime drivers of our behaviors, then when we think, feel, imagine, or change our minds in any way, our brain _must_ also change. How can it not? Our actions, feelings, and thoughts are simply the dynamic reflections of the brain's physical, chemical, and electrical states. If you doubt that your brain dictates your reality, just drink a few shots of tequila. Your reality changes because your brain chemistry has been remixed. 
The same is true, if more subtly, when you whip up your brain chemistry by worrying or recalling a warm memory or losing your temper because that guy in the truck cut you off in traffic this morning. In this way the brain changes itself, commands itself, reacts to itself, reshapes itself. It somehow bootstraps self–awareness and self–determination and simultaneously generates a symbolic self to be aware of and to command (as opposed to a god or a demon who dictates orders). This is a little like a box of cake ingredients purposefully opening and mixing themselves, then hopping into an oven to bake. This means that you and I (and every one of the other seven billion humans currently alive on planet Earth) are, as Douglas Hofstadter might put it, "a strange loop," a supreme example of recursion, a matryoshka doll of selves. To understand why we have come to operate this way, think of social interaction as a kind of rapidly changing ecosystem made up of a mix of personalities that requires constant adaptation to the shifting agendas, relationships, alliances, and power struggles within the group. In the highly social and very bright species that preceded us, part of the battle for individuals would have been to keep motives and relationships straight in their own minds. Those among our ancestors who could successfully track and recall the behaviors of their friends and enemies would have excelled, survived, and passed their genes along. To manage this, they must have learned to symbolize different personalities. Maybe Goog tended to be aggressive; Targ, helpful and friendly; Moop, well organized and smart. This would have helped them "slot" others into organizational categories so they could deal with them in ways they saw fit, depending on their own personalities. Since these relationships only matter insofar as they are connected to you, along the way it would have been impossible not to eventually apply the same index to you.
We became to ourselves another person in our social ecology. As evolution continually favored smarter and increasingly self–aware creatures from _Homo erectus_ to _ergaster_ to _heidelbergensis_ , Neanderthals, Denisovans, and _Homo sapiens_ eventually emerged. Both we and Neanderthals developed large brains and complex prefrontal cortices, but we developed in different parts of the world, under entirely different circumstances, split from a common ancestor. We both may have developed spoken language, but very different kinds. We were both self–aware and capable of symbolizing, but to what extent remains unclear. Neanderthals may never have developed a highly complex and fully symbolic inner world, and _Homo sapiens_ may not have pulled off this level of cerebral legerdemain themselves until fifty thousand years ago, maybe later. Perhaps then the prefrontal cortex reached a plateau where it could not only fully symbolize others, but manage the one last thing that made us so profoundly different from all other primates and humans that had come before us or even grown up with us: symbolize ourselves. With that, everything changed, radically. Because when we could fully symbolize ourselves, it meant that we could also begin to embed our symbolic selves among all the other symbols around us. We could begin, entirely inside our minds, to imagine what we would do before we did it. We could guide our behaviors, or at least conceive of guiding our behaviors, the way a chess player moves the pieces on a chessboard. By creating a symbol of ourselves, we became conscious and self–aware, capable of purposefully planning our behavior. We could imagine. That, in itself, represents a remarkable leap, but it made still another leap possible. The moment we consciously act on a scenario that we have imagined, it means we have taken control of our behavior. We have consciously made choices and acted on them. With the invention of a symbolic "you," intention and free will were born.
Or at least the convincing illusions of them. So the voice in your head that is talking to you? It's you. But the person that is listening isn't, not precisely. It's a symbol you have created. An image of you, a virtual version, like an avatar in a computer game, except the computer game is your inner mental life, the place where you map your actions and make your choices before transforming them into reality. The "you" you talk to is a simulation. Looked at this way, our "selves" are, quite literally, a figment of our imaginations, the ultimate illusion, but an extremely useful one because this illusion has enabled us to take a hand in the control of our fates, at least more than any other creature ever has. We are not only an animal that can explore a life not yet lived, and dream of a future we desire, we can also take hold of those dreams and make them come true. Out of a chaotic flux of random events in nature that have no agenda and are utterly incapable of making any plans, we have evolved into a planning, agenda–making, dream–conjuring creature. We are the first survival machines to also become living, breathing imagination machines. If you compare us with other animals, our ability to create symbols turns out to be a kind of superpower, like being able to fly or peer through rock with X-ray eyes. It is super because it has transformed us into a world–changing, supremely adaptable überbeing. Not simply "a figure in the landscape," as Jacob Bronowski put it, "but the shaper of the landscape... the ubiquitous animal." This isn't human arrogance speaking, declaring that the greatest advance in all of evolution has been the sudden emergence of our kind. This takes nothing away from the remarkable abilities of other animals.
It simply states an irresistible fact, as sure as blue whales are gargantuan, cheetahs are swift, and grunion dance on the beach under full moons: we alone have developed this superpower that lets us make symbols, and that has made us, bar none, the most adaptable creature planet Earth has yet witnessed. The strange thing is, this virtual version, this symbol of our "selves," not only reflects on itself (and selves within selves like the reflections of two facing mirrors), but also reflects on the mind that makes it possible in the first place. That can sometimes make our mental lives even more complicated than they already are. Superpowers, it seems, often bring difficult trade–offs in tow. Like mental illness, for example. When I was a child, I once asked my mother why our new Chrysler New Yorker didn't have power windows, when they were all the rage in the newest cars. "The more fancy the gadgets," she answered, "the more there is to break down. We would only have to fix it later." The makers of intricate technologies like cars, computers, and spaceships have inevitably found that when the engineering of anything reaches a certain level of complexity, it is much more difficult to maintain than simpler systems. A straw rarely breaks down. Nor do paperweights, generally. Space shuttles, on the other hand, were designed with thousands of "redundant systems" because so many functions could go haywire. Yet when it comes to complexity, a shuttle holds not even the dimmest candle to the human brain. The brains you and I carry into adulthood have an estimated 100,000,000,000 (one hundred billion) neurons, each connected to as many as a thousand other neurons. This is sophistication of the incomprehensible variety. The human brain is so convoluted in its wiring, genetics, and neurochemistry that it is a wonder that so many of them function so well.
Of course those that evolved and didn't work up to snuff quickly resulted in the death of their owner and were swiftly tossed from the gene pool. Still, the modern human brain will, and does, often go sideways. We see it in the number of people who suffer from chronic depression, not to mention more dramatic and damaging conditions such as bipolar disorder, schizophrenia, autism, obsession, compulsion, attention deficit, and multiple personality disorder. And that's just some of the labels we use to describe mental illness. The more we come to understand the human brain, the more we discover what can go wrong with it. Arguably, no human brain is really "normal." Mental illnesses like these are uniquely human because they are linked to uniquely human capabilities like language, symbolization, and working memory. Neurologists and psychologists have been especially curious about two mental illnesses—schizophrenia and autism—and what each has to say about the way we evolved. Schizophrenia is a mental disorder characterized by trouble differentiating between reality and imaginary worlds and experiences. Schizophrenics can suffer from extreme paranoia, delusions, disorganized speech and thinking. In severe cases they often hear voices and many times carry on conversations with the voices because to them the voices aren't the voice most of us identify as our "self," but belong to someone else, usually unseen. This is something like Julian Jaynes's bicameral mind. Those who speak are distinct, often conflicting, even abusive, and it can be maddening, or, occasionally, captivating. One schizophrenic reported waking up to hear two Israeli generals debating battle strategy, a subject he had never before in his life contemplated. "It was a fascinating experience," he recalled. It's an insight into the power of the human brain that someone can have that much detailed knowledge about a subject as arcane and complex as military strategy, yet not consciously realize it.
It makes you wonder what treasures of information lie untapped within each of our minds. Schizophrenics suffering from acute paranoia often convince themselves that they are being hunted or persecuted. These fears can become part of an elaborate imaginary world they live in, a world that is as absolutely real to them as our daily lives are to the rest of us. The richness of these worlds and scenarios is a further testament to the power of human creativity. We on the outside see this as madness, and it is excruciating for those who suffer through such horrible conflicts and feelings, but it is possible to see how our minds can go to places like these. After all, doesn't each of us hear a voice that often sends us conflicting messages? The only difference is that we identify that voice as our own, not as belonging to unexpected, intruding, and ethereal companions who pop into our consciousness unbidden and unannounced. And don't we all live in imaginary worlds of our own making—in the tomorrows that we plan; the lives that we lay out; the conversations we imagine having with friends or enemies? Every piece of fiction ever written is an elaborate imagining manufactured out of the symbols in the mind of its author, no less labyrinthine in its way than the delusions of schizophrenia. The line between normalcy and madness may be finer than any of us would like to believe. Autism is not usually as dramatic or debilitating as schizophrenia can be, but it, too, provides a glimpse into the mysteries of the creative spirit. Like schizophrenia, autism runs along a spectrum from mild to severe, and some of its underlying symptoms are similar: difficulty socializing with others, a tendency to become obsessed with specific behaviors, sometimes self-injury or the need for repetitive rituals that might involve entertainment, food, or dress.
In about one case out of ten, autistic people develop remarkable talents, but are otherwise incapable of leading what the rest of us might consider a normal life. Researchers sometimes refer to them as autistic savants. The movie _Rain Man_ was based on real-life autistic Bill Sackter and another savant, Kim Peek, who, for reasons not entirely clear, was blessed with an astounding memory, yet struggled with some of life's most basic undertakings. Sackter passed away in 1983, and Peek died of a heart attack in 2009, but both were remarkable people. Sackter was also a model for Charlie Gordon in the novel _Flowers for Algernon_ and the movie it inspired about a mentally retarded man who becomes a genius before returning to his earlier state. Peek could read thousands of pages of facts and trivia, then much later recall with almost perfect accuracy the information on those pages, say, the weather on December 14, 1964, or Roberto Clemente's batting average in 1967. Other autistic savants have been genetically bestowed with extraordinary talents as musicians, painters, mathematicians, sculptors, even writers. Sometimes the talents are wide-ranging and accompanied by high intelligence, as in the case of Matt Savage, who at age six taught himself to read piano music and went on to study both jazz and classical piano at the New England Conservatory of Music. In between he also somehow found time to win a statewide geography bee, compose many of his own pieces, and release nine albums while touring the world and appearing with an impressive list of jazz greats. Alonzo Clemons, on the other hand, has an IQ of 50, the result of a severe brain injury as a child. Strangely, though, Alonzo developed a talent for creating marvelously accurate animal sculptures out of clay, even if he had only caught a glimpse of the animal or seen a photo or drawing of it in two dimensions. His works have sold for tens of thousands of dollars.
When looking at Clemons's works, it's difficult not to think of the fluid, breathtaking artwork in the caves of Altamira and Lascaux. Were these the works of a Cro-Magnon savant, someone seemingly endowed with magical talents, and magical ways of representing the world? Seth F. Henriett is another savant blessed with a high IQ like Matt Savage, and a marvelously broad array of talents. Though Henriett suffered from severe social problems and autoimmune disorders early in her life, she also revealed aptitudes for music and art. She played flute at the age of seven and contrabass at the age of eleven. By age thirteen her abstract and surrealistic paintings were gaining attention. Shortly afterward she wrote two books about her experience as an autistic and won several international writing competitions with her stories, essays, and poems. Despite these staggering skills, each of these people has difficulty relating to others. They shun making eye contact or being touched; they often prefer to be alone and struggle with even basic personal interaction. Yet don't all of us exhibit a quirk or two, or three? Phobias, preferences, habits, interests, even obsessions? Various experts have speculated that some well-known people in history were autistic to some extent or another, including Lewis Carroll, Charles Darwin, Emily Dickinson, Thomas Jefferson, Isaac Newton, and Wolfgang Mozart. How much emptier would human civilization be had it been bereft of the genius of these minds? On the other hand, 90 percent of autistics do not become savants, though a significant number are highly functional. Again the affliction is not binary, one or the other, all on or all off. You and I might have smidgens of autism and not realize it, especially if you happen to be a man. Scientists have sometimes described autism as an extreme version of the male brain. And in truth, of all the world's autistics, only one fifth are female.
This may be because women have more axons and dendrites, which are the pathways in the brain that enable it to work as a unit. Men's brains have more neurons. In effect, this makes male brains less networked than women's, but outfitted with more processing power, largely focused, it seems, on spatial and temporal capabilities. This doesn't make one sex smarter or more talented than the other, simply different. It also helps explain, at least according to some scientists, why men are sometimes less socially tuned in than females, and why women are superior, generally, at reading social cues. The issue with autistics isn't so much with the number or the function of their neurons, but that they suffer from a dearth of connections between them. What would cause this scarcity? Neoteny, or more accurately the processes that make neoteny possible. Remember, complexity creates more opportunity for something to go wrong. During the crucial first three years that follow birth, when the brain triples in size and personal experience so strongly shapes the billions of pathways between neurons in the brain, cerebral development may mysteriously go awry in people who grow up autistic. Connections may be delayed, accelerated, or stunted. Studies show that different sectors of the brain develop in ways that keep them more separated from one another than normal, like islands in a sea, out of touch and segregated. Yet, some modules may become overwired, which could help explain remarkable feats of memory, mathematics, music, or art and views of the world so different from those of the rest of us. The downside, of course, is that the segregations also make it more difficult to be socially sensitive and tuned in to other people's nonverbal communications—their smiles, tones of voice, body language—the little things we unconsciously and effortlessly do that grease the skids of human relationships. 
These deficits undermine Theory of Mind, leaving a brain largely incapable of symbolizing its owner as a self to itself, let alone symbolizing others. Rather than symbolization going rampant and boisterous as it does in schizophrenia, in autism it is reduced, stunted, and balkanized, with the manifold human genius for socialization sometimes being abandoned in exchange for a single, condensed, but breathtaking talent. Why, if evolution so ruthlessly discards traits and behaviors that undercut a living thing's ability to survive and mate, have mental illnesses like these and others survived? Can they serve a purpose? Or did they once? In a study published in _Nature_ in 2007, researchers led by Bernard Crespi and Steve Dorus analyzed human DNA from populations around the world as well as primate genomes dating back to the shared ancestor of both humans and chimpanzees to get a handle on what genes led to schizophrenia, why they evolved, and why the illness is still among us. They were astonished to find that of seventy-six gene variations known to be strongly related to schizophrenia, twenty-eight showed sturdy evidence that they were favored by natural selection when compared with other genes, even those associated with the most severe forms of schizophrenia. In other words, the genes weren't randomly repeated accidents; the forces of evolution were actively selecting them and passing them on. Why? It could be that they are bound to other genetic talents that are extremely important to human survival, like speech and creativity, for example. One current theory is that schizophrenia is a "disorder of language" that represents a trade-off some _Homo sapiens_ made in exchange for the remarkable gift of speech and consciousness the rest of us enjoy. Says Crespi, "You can think of schizophrenics as paying the price of all the cognitive and language skills that humans have." That may explain why 1 percent of the human race suffers from some form of schizophrenia.
Multiple theories connect schizophrenia and autism to the evolution of our ability to symbolize others and model their behavior in our minds; and our unique talent for symbolizing ourselves by using systems like language to talk to ourselves, imagine what others are thinking and intend, and envision events that haven't yet happened and never may. In each case, the origins of both illnesses may be the result of brain development in childhood that misfires. Since the prefrontal cortex is, ultimately, a consequence of neoteny, the precise timing that the development of a modern human brain requires may be the source of both disorders. Some scientists have speculated that in schizophrenics, neoteny is retarded, or processes it sets in motion aren't completed. It's interesting that in most schizophrenics severe symptoms don't show themselves until around eighteen years of age, or older, when the brain has placed the majority of its design in order. In the case of autism the complex cerebral structures and relationships that make Theory of Mind, language, and symbolization possible could all be affected early in development. We already know the brains of autistic children grow significantly faster and larger than normal between the ages of one and sixteen months and remain larger until ages three to four. Researchers have also found that children with autism develop 67 percent more neurons in their prefrontal cortex and have heavier brains for their age compared to typically developing children. It's almost as if connections made prior to birth and early in life arrive before they can be properly deployed. With both illnesses something like a genetic wrench seems to have been thrown into the complex developmental processes that construct the foundations of a human mind during those long childhoods that neoteny has made possible.
The timing and expression of genes that catalyze the cerebral alchemy of human behavior falter somehow, and once they do, it changes the brain in ways that aren't easily repaired, at least not based on what we know today. Still there is a larger point to all of this. Mental illness, a state in which the mind is unable to get a solid handle on what the rest of us generally agree is "real," could not exist until nature first created a brain that could model the thing we call reality in the first place. That means it takes a human mind to suffer mentally. Cats, dogs, and other primates may endure depression or grow sad, they may develop lifelong fears and strong addictions, but they don't hear voices, imagine alternate realities, or suffer from an inability to speak or empathize. And they don't because they never enjoyed those capabilities in the first place and never will. Further advances in genetics and brain imaging may reveal exactly how mental illnesses like these work and in the process expose to us some of the slick tricks the brain plays to create the illusions of self and reality. These advances already hint that the borders between reality and delusion are slim. Or more accurately, that reality _is_ a delusion, just an extremely useful one. In some ways the brain is like the Wizard of Oz, standing hidden behind a curtain, spinning the wheels and operating the levers that create the illusory symbols that make our "I" and our reality possible. All of this happens because of the elegant physical, pharmacological, and electrical interactions of the three pounds or so of wetware you currently tote around in your skull.
These trillions of cerebral interactions know nothing of jealousy, love, passion, creativity, or sadness, and yet out of them emerge the threaded experiences we perceive as ourselves living a life connected, to varying degrees, to all the other humans that we encounter every day, day in and day out, until the brain that makes it all possible finally ceases to function. Once the human brain materialized in the form that we now know, outfitted with its genius for creating and shuffling the symbols that make language, imaginary worlds, and, above all, that phenomenon Hofstadter calls the "anatomically invisible, terribly murky thing called I," creatures emerged that could dream, act on their dreams, and share them with the other "I's" around them. And that changed the world. Our special talent isn't simply that we can conjure symbols or even weave elaborate, illusory tapestries of them, but that we can share these with one another, roping together both our "selves" and our imaginings, linking uncounted minds into rambunctious networks where thoughts and insights, feelings and emotions, breed still more ideas to be further shared. Creativity is contagious this way, and once alight, it must have gone off like fireworks. This has made every human a kind of neuron in a vast brain of humans, jabbering and bristling with creativity, pooling, pulling, and bonding ideas into that elaborate, rambling edifice we have come to call human civilization. In this way _memes_ have traveled along the transit lines of our relationships, some of them finding their way to reality, others running aground for lack of interest or use, selected out and gone extinct as surely as dodos, dinosaurs, and the flightless crake. The wheel is a great example of a meme. So are the arch and the soufflé and a catchy tune like "I've Got a Lovely Bunch of Coconuts," or plumbing, sanitation, myths, and the Pythagorean theorem.
Once upon a time, someone conjured up a large, circular thing that helped move heavy loads when paired with other similar circular things, and the idea stuck and was shared—wheel! As memes spread, they mutate and combine with other memes and snap together in our social worlds as neatly as genes do in the molecular one. They breed only because we humans have the ability to both conceive and duplicate them, using all of the symbols we so ardently exchange. The voice we had begun to hear in our heads—perhaps fifty thousand years ago—was a harbinger and a catalyst, the first step we needed to take before we could attempt to direct our own lives, fashion memes, and then transform them (for better or worse) into reality. At first the sharing must have been slow. It takes time for ideas to be passed along and built upon in a world where only a few tens of thousands of symbol-making creatures live. Still, compared with the geologic and genetic measures of time that preceded it, these changes came swiftly, and they gathered speed exponentially. Inside of forty thousand years, agriculture and animal domestication were widely adopted, then came settlements, villages, and towns. Cities in Mesopotamia and the Middle East arose a mere nine thousand years ago. Despite wars, famine, disease, and natural disasters, we have surged forward since, inventing science, a global economy, vast communication systems that exchange thick streams of media, immensely complex governments and businesses, all of them, in their way, ceaselessly shuttling the proliferating agglomerations of human thinking around the globe every day—a humming and titanic network consisting of seven billion symbol-makers busily exchanging their symbols. Thanks to us they move from one mind to another as surely as genes move and mutate from one person to another.
Was the emergence of a brain—shaped in childhood and capable of symbolizing its owner—the final piece in the human puzzle, the last brick that completed the construction of what we today call truly "human"? Was this the evolutionary act that made civilization possible? We will never know for certain because we weren't around at the moment of modern human awakening. Trying to figure out how that white light of the first symbolic insight came together is a lot like reverse engineering some alien engine we have found in a desert, fully operational, but without a manual. My guess is we will never fully comprehend how we turned the corner to become the human beings we are today. The brain itself may be the issue. Maybe the mind that it makes possible will always find itself just short of grasping how it creates its illusions, or why. Too much is at work in the unconscious, too much unavailable mystery. It doesn't mean we can't try, though, the way physicists have tried to approach absolute zero. By definition it is impossible to get to a place where there is nothing, but we can keep working to get closer. As with the quest to reach absolute zero, maybe we can only ride upon the illusions it conjures and see where they lead. At least until a new kind of human evolves.

## **Epilogue: The Next Human**

_The need is not really for more brains, the need is now for a gentler, a more tolerant people than those who won for us against the ice, the tiger and the bear. The hand that hefted the ax, out of some old blind allegiance to the past, fondles the machine gun as lovingly. It is a habit man will have to break to survive, but the roots go very deep_. —Loren Eiseley, _The Immense Journey_

In the Bay of Naples, not far from the shadow of Vesuvius, swim two seemingly unremarkable creatures, a common sea slug and a jellyfish called a medusa.
The jellyfish, researchers know, carelessly bob through the upper waters of the bay and after birth quickly mature into full-grown, elegant adults. The sea slug's larvae also contentedly ride the water's currents, apparently happy to live the lives that such snails generally do. You wouldn't think that these creatures could possibly have anything to do with one another, but it turns out they are intimately and strangely connected. Marine biologists first saw this connection when they noticed that full-grown versions of the snails had a small, vestigial parasite attached near their mouth. Nothing you would immediately notice. But when they did notice it and looked further into the whole affair, they made their remarkable discovery. It seems that as the slug larvae bob through the bay, they often become entangled in the tentacles of the medusa and are then swallowed up into its umbrella-shaped body. At this point you would assume that the larvae would soon be done in as nice morsels by the predator jellyfish, but that is not how it goes. Instead, and astonishingly, it is the snails that begin to dine, voraciously—first on the radial canals of the jellyfish, then on the borders of the rim, and finally on the tentacles themselves, until the medusa disappears altogether and is replaced by a rather large slug, with the small bud of a parasite attached to its skin right near its mouth. Lewis Thomas, the fine physician, researcher, and essayist, told this story in his wonderful 1970s book _The Medusa and the Snail_ to illustrate how peculiar and connected life on earth is. It certainly does that, but I'm recounting it here because in the eccentric relationship of these two creatures lies the echo of what the future holds in store for the human race. Usually at this point in a book like this, the inevitable—and heavily loaded—question arises, what next? Where will human evolution take us now after our long and astonishing adventures? Are we still evolving?
And if we are, what will the next act look like? Can we expect ever-enlarging brains to cram themselves behind alienlike foreheads? Or will our noggins contract to the size of a walnut, shrunk down by media overload and pharmaceuticals (the dimensions of the human brain have diminished 10 percent over the past thirty thousand years)? Or perhaps we will grow weak, fat, and small of limb, vaguely resembling Jabba the Hutt, while simultaneously sprouting an extra digit or two to better handle all of the texting we do? It might even be possible, as one scientist has speculated, that we will diverge into two subspecies, one fit and beautiful and the other overweight and slovenly, a kind of real-world version of the Eloi and Morlocks in H. G. Wells's _Time Machine_, except without the cannibalism and enslavement, we hope.i Evolution, as the past four billion years have repeatedly illustrated, holds an endless supply of tricks up its long and ancient sleeve. Anything is possible, given enough millennia. Inevitably the forces of natural selection will require us to branch out into differentiated versions of our current selves, like so many Galápagos finches... assuming, that is, that we have enough time to leave our evolution to our genes. We won't, though, and none of these scenarios will come to pass. Instead, we will come to an end, and rather soon. We may be the last apes standing, but we won't be standing for long. A startling thought, this, but all of the gears and levers of evolution indicate that when we became the symbolic creature, an animal capable of ardently transforming fired synapses into decisions, choices, art, and invention, we simultaneously caught ourselves in our own cross-hairs. Because with these deft and purposeful powers, we also devised a new kind of evolution, the cultural variety, driven by creativity and invention.
So began a long string of social, cultural, and technological leaps unencumbered by old biological apparatuses such as proteins and molecules. At first glance you might think that this would be a boon to our kind. How better to better our lot than with fire and wheels, steam engines, automobiles, fast food, satellites, computers, cell phones, and robots, not to mention mathematics, money, art, and literature, each conspicuously designed to reduce work and improve the quality of our lives? But it turns out not to be that simple. Improvements sometimes have unintended consequences. With the execution of every bright new idea it seems we find ourselves instantly in need of still newer solutions that only seem to make the world more kerfuffled. We are ginning up so much change, fashioning thingamabobs, weaponry, pollutants, and complexity in general, so swiftly, that as creatures genetically bred to a planet quite recently bereft of technical and cultural convolutions, we are having an exceedingly difficult time keeping up, even though we are the agents of the very change that is throttling us. The consequence of our incessant innovating is that it has led us inevitably, paradoxically, irrevocably, to invent a world for which we are altogether ill fit. We have become medusae to our own snails, devouring ourselves nearly out of existence. The irony of this is Shakespearean in its depth and breadth. In ourselves we may finally have met our match: an evolutionary force to which even we cannot adapt. We are undoing ourselves because the old baggage of our evolution impels us to. We already know that every animal wants power over its environment and does its level best to gain it. Our DNA demands survival. It is just that the neoteny that has made us the Swiss Army knife of creatures, and the last ape standing, has only amplified, not replaced, the primal drives of the animals we once were.
Fear, rage, and appetites that cry for instant gratification are still very much with us. That combination of our powers of invention and our ancient needs will, I suspect, soon carry us off from the grand emporium of living things. The best evidence that we are growing ragged at the hands of the Brave New World we have busily been rolling off the assembly line is the growing numbers of us who freely admit to being thoroughly stressed. A recent study reported that the United States is "a nation at a critical crossroads when it comes to stress and health."j Americans are caught in a vicious cycle: managing stress in unhealthy ways while assembling insurmountable barriers that prevent them from revising their behavior to undo the damage they are inflicting on themselves. As a result, 68 percent of the population is overweight. Almost 34 percent are obese. (This is rarely a problem in hunter-gatherer cultures.) Three in ten Americans say they are depressed, with depression most prevalent between the ages of forty-five and sixty-five. Forty-two percent report being irritable or angry, and 39 percent nervous or anxious. Gen Xers and so-called Millennials admit to being more stressed about personal relationships than even their baby-boomer parents. It's so bad that the results of our anxieties have found their way into dental offices, where dentists now spend far more of their time treating patients for jaw pain, receding gums, and worn teeth than they did thirty years ago. Why? Because we are tense and anxious, grinding our teeth down to nubs as we sleep. Stress, as the experience of lab rats everywhere has repeatedly testified, is a sign that a living thing is growing increasingly unfit for the world in which it lives, and as Darwin and Alfred Russel Wallace astutely observed more than 150 years ago, when a living thing and its environment are no longer a good match, something has to give, and it is _always_ the living thing. How are we handling our stress?
Not too well. Rather than relaxing or getting more exercise when pressures mount, studies show that we instead skip meals, spend more time online or in front of the TV, then overeat and lie awake at night perfectly prepared to enter the next day bleary-eyed, short-tempered, and exhausted. What triggers this behavior? Those old primal drives and appetites we struggle so mightily to ignore. Which returns us to the question, what next? Our demise doesn't have to be a Terminator-style annihilation that leaves the world emptied of all humans, postapocalyptic cities stark and decaying with the smashed remains of our cultural accomplishments. It may be more of a butterfly-like metamorphosis, a transformation in which we step over the Rubicon of our old selves and emerge as a new creature built on our own backs without ever realizing, at least early on, that we are no longer the species we thought we were. Did the first Neanderthal know that he, or she, was no longer _Homo heidelbergensis?_ Those passages are made gradually. Perhaps we will simply morph into _Cyber sapiens_,k a new human, infinitely more intelligent than you or I are, perhaps more socially adept, or at least able to juggle large tribes of friends, acquaintances, and business associates with the skill of a circus performer. A creature more capable of keeping up with the change it generates. To handle the challenges of time shortages and long distances, _Cyber sapiens_ may even be able to bilocate or split off multiple, digital versions of themselves, each of whom can blithely live separate lives and then periodically rejoin their various digital selves so that they become a supersize version of a single person. Imagine being able, unlike Robert Frost's traveler in his poem "The Road Not Taken," to choose both paths, each with a separate version of yourself. It makes you wonder if something essential in us might disappear should such possibilities come to pass.
But then, perhaps, that is what will make the new species new. A whole group of _Homo sapiens_ are already contemplating what the next version of us might be like. They call themselves transhumanists, anticipating a time when future anthropologists will have looked back on us as a species that had a nice run, but didn't make it all the way to the future present. Transhumanists foresee a time when beings will emerge who will literally be part biology and part machine. In this I suspect they are right, the logical next step in a long trend. We are already part and parcel of our technologies after all. When was the last time you checked your cell phone or simply walked to work, hunter-gatherer style? We have long been coevolving with our tools. It's just that now the lines between humans and machines, reality and virtuality, biology and technology, seem to have become especially blurry and will soon twitch and blink away completely. Transhumanists predict that by melding molecule-size nanomachines with old-fashioned, carbon-made DNA the next humans might not only speed up their minds and multiply their "selves," but boost their speed, strength, and creativity, conceiving and inventing hyper-intelligently while they range the world, the solar system, and, in time, the galaxy. In the not-distant future we may trade in the blood that biological evolution has so cunningly crafted over hundreds of millions of years for artificial hemoglobin. We may exchange our current brand of neurons for nanomanufactured digital varieties, find ways to remake our bodies so that we are forever fresh and beautiful, and do away with disease so that death itself finally takes a holiday. The terms _male_ and _female_ may even become passé. To put it simply, a lack of biological constraint may become _the_ defining trait of the next human.
There could be a downside to these sorts of alterations, I suppose, should we find ourselves with what amounts to superhuman powers, but still burdened by our primal luggage. Our newfound capabilities might become more than we can handle. Will we evolve into some version of comic-book heroes and villains, clashing mythically and with terrible consequences? Powers like these give the term _cutting edge_ a new and lethal meaning. And what of those who don't have access to all of the fresh, amplifying technologies? Should we guard against a world of super-haves and super-have-nots? It is these sides of the equation I wonder about most. Given evolution's trajectory, short of another asteroid collision or global cataclysm, we will almost certainly become augmented versions of our current models. That has been the trend for seven million years. Apes increasingly endowed with more intelligence, and more tools, becoming simultaneously wiser and more lethal. The question now is, can we survive ourselves? Can we even manage to become the next human? It's a close question. I'm counting on the child in us to bail us out, the part that loves to meander and play, go down blind alleys, fancy the impossible, and wonder why. It is the impractical, flexible part we can't afford to lose in the transition because it makes us free in ways that no other animal can be—fallible and supple and inventive. It's the part that has gotten us this far. Maybe it will work for the next human, too.

## **Acknowledgments**

Sitting here at my desk on a warm morning in 2012, it's easy to think of book writing as an entirely private undertaking. Lots of time spent tapping away all by yourself, wrestling with sentences that refuse to make sense, wrangling obstinate phrases; lots of reading, library excursions, and Web-crawling, too, for obscure facts; and a fair amount of frantic note-jotting punctuated with mindless gazings out the window. Occasionally some solitary head-banging was known to go on.
But mostly the sequestered nature of writing is an illusion. A book like _Last Ape Standing_ could never find its way into the world without the help and support of battalions of people. For starters, I've been privileged over the years to sit in long conversations with scores of scientists and some of the finest minds I've ever come across—Michael Gazzaniga, Gerald Edelman, Hans Moravec, Ray Kurzweil, Michael McElroy, and the late and remarkable Lynn Margulis, to name a handful. Those conversations provided me with perspectives for this book that I would never have developed without the benefit of their hospitality, capacious intellects, and experience. "But look at it this way..." is a phrase I heard often from all of them as they gently helped me out of the small seat of my limited perspective. Then there are the hundreds of books, articles, and scientific papers I communed with for this project. Each represented years of work and research on the part of their authors, nicely condensed explorations of the corners of human evolution and behavior that I couldn't hope to cover in a thousand lifetimes. Whether it was global climate, human genetics, evolutionary psychology, anatomy, or history, the work of these researchers and writers provided the essential vitamins and minerals of the pages that follow. I may not know all of them personally, but I am deeply indebted to each of them. Jen Szymanski and Frank Harris also deserve my special thanks. Frank for his excellent eye and artwork; Jen for her good nature, unfailing attention to detail, and unshakable reliability. Books also don't happen without a publisher and an editor who believe in the book and are willing to accept that an idea can be transformed into something people will want to buy and read. I will be forever and deeply grateful for the insight and generous spirit of George Gibson at Walker/Bloomsbury. In baseball there are "player" managers. 
George is a "writer's" publisher, always encouraging, never negative, a fine example of the best that human evolution makes possible. This book's editor, Jacqueline Johnson, is perhaps the calmest person I have ever met, and no matter what sentences might be flailing around under my hand, what participles might be dangling, what questions I might throw her way, or what deadlines I bent and mangled, she remained as imperturbable as Kilimanjaro. And once again I owe my agent, Peter Sawyer, my deep gratitude for his excellent advice and insights on nearly everything, and for hanging in my corner no matter how cockamamy the ideas are that I bring to his sage ear. Mostly, though, I owe my gratitude to my family. My daughters, Molly and Hannah, who have, their entire lives, put up with this thing I do, which Molly, when she was two years old, once described to someone as "hitting buttons." Their smiles and laughter and company place even the toughest days at the desk in perspective. My stepchildren, Steven and Ann, have gamely learned how strange it is to have a writer in the house, and still they haven't punished me for it or given me up for mad. Above all, though, I thank Cyndy, my incomparable wife and the best human on earth, for her relentless patience, encouragement, and love. How mercilessly I have sometimes bent her beautiful ear, yet we remain married. ## **Notes** ### **INTRODUCTION** . Until recently, paleoanthropologists referred to the subfamily of hominoids that consisted of humans and their ancestors as hominids, but even the convoluted argot of science sometimes changes. Today _hominid_ refers to all great apes, including gorillas and chimpanzees, but _hominin_ refers specifically to ancient and modern humans who split off from a common chimp ancestor seven million years ago, or thereabouts. These include all of the Homo species ( _Homo sapiens, H. ergaster, H. rudolfensis_ , for example), the australopithecines ( _Australopithicus africanus, A. 
boisei_ , etc.), and other ancient forms such as _Paranthropus_ and _Ardipithecus_. The important point is that we are the last surviving hominin on earth. . During the writing of this book, two new modern human species were discovered and two more ancient species. For more on these, read the sidebars on pages 90–93, "The Newest Member of the Human Family." ### **1: THE BATTLE FOR SURVIVAL** . Some scientists have speculated that _tchadensis_ and others like him at this time in prehistory could be the offspring of early humans and chimps who mated and brought hybrid "humanzees" into the world in the same way mating female horses and male donkeys conceive mules. Since such a hybrid wouldn't have been able to produce children of its own, the chances of this rare fossil surviving until the present are exceedingly slim, but in the world of paleoanthropology nearly anything is proving to be possible. For more, read "Human, Chimp Ancestors May Have Mated, DNA Suggests," _National Geographic News_ , May 17, 2006, <http://news.nationalgeographic.com/news/2006/05/humans-chimps.html>. . See chapter 1 of _Thumbs, Toes, and Tears: And Other Traits That Make Us Human_. . For more on this, read "Unlocking the Secrets of Longevity Genes," _Scientific American_ , December 2006. . Additional information on this interesting theory can be found in "How Dietary Restriction Catalyzed the Evolution of the Human Brain," _Medical Hypotheses_ , February 19, 2007. ### **2: THE INVENTION OF CHILDHOOD (OR WHY IT HURTS TO HAVE A BABY)** . This is an apt analogue for the situation our species faces today... our own intelligence has put us in a precarious situation that we, too, may not survive. See the epilogue, "The Next Human." . In one of his more whimsical essays written more than thirty years ago, evolutionary theorist Stephen Jay Gould depicted Mickey Mouse as a perfect example of neoteny in action.
The older Mickey got, Gould pointed out, the younger (and cuter) his animators made him look. As he aged, Mickey acquired greater youth. Broadly speaking this is precisely what happened to the line of humans who eventually led to you and me. . See L. Bolk, "On the Problem of Anthropogenesis," _Proc. Section Sciences Kon. Akad. Wetens_. (Amsterdam) 29 (1926): 465–75. . "In neoteny rates of development slow down and juvenile stages of ancestors become adult features of descendants. Many central features of our anatomy link us with the fetal and juvenile stages of [nonhuman] primates." Gould, _Ontogeny and Phylogeny_ , 1977, 333. . From Barry Bogin, "Evolutionary Hypotheses for Human Childhood," _Yearbook of Physical Anthropology_ (1997), 70. "In Shea's view, a variety of heterochronic processes are responsible for human evolution. The others may be hypermorphosis, acceleration (defined as an increase in the rate of growth or development), and hypomorphosis (defined as a delay in growth with no delay in the age at maturation)... None of these acting as a single process can produce the human adult size and shape from the human infant size and shape. The same holds true for acceleration and hypomorphosis. In agreement with Schultz, Shea states that 'we [humans] have extended all of our life history periods, not merely the embryonic or juvenile ones' (pp. 84–5). Humans have also altered rates of growth from those found in other primates and possible ancestors. To accomplish all this required, in Shea's view, several genetic changes or adjustments during human evolution. Since the hormones that regulate growth and development are, virtually, direct products of DNA activity, Shea proposes that the best place to look for evidence of the evolution of ontogeny is in the action of the endocrine system.
According to Shea and others (e.g., Bogin, 1988) differences in endocrine action between humans and other primates negate neoteny or hypermorphosis as unitary processes and instead argue for a multiprocess model for human evolution." . Around this time, our ancestors may also have begun to lose their hair, another neotenic trait, although loss of hair almost certainly also helped to avoid overheating on Africa's scorching savannas. . Martin, "Human Brain Evolution in an Ecological Context" (fifty–second James Arthur Lecture, American Museum of Natural History, New York, 1983). . The Pleistocene epoch lasted from about 2.5 million years to 11,700 years ago and includes Earth's recent period of repeated ice ages. The Pleistocene is the first epoch of the Quaternary Period or sixth epoch of the Cenozoic Era. The end of the Pleistocene corresponds with the end of the last glacial period, the one that immediately preceded the blossoming of recorded human history. It also corresponds with the end of the Paleolithic age used in archaeology. . Another reason that children need a special high–energy diet is the rapid growth of their brain. In their research in 1992, Leonard and Robertson estimated that due to this accelerated growth, "a human child under the age of 5 years uses 40–85 percent of resting metabolism to maintain his/her brain [adults use 16–25 percent]. Therefore, the consequences of even a small caloric debt in a child are enormous given the ratio of energy distribution between brain and body." . Bogin, "Evolutionary Hypotheses for Human Childhood," 81. . See Gould, _Ontogeny and Phylogeny_ , 290–94. . In traditional hunter–gatherer and horticultural societies, studies have found that even without the advantages of modern medicine and sanitation, people manage to raise about 50 percent of their children to adulthood. Monkeys and apes have a success rate between 14 and 36 percent. 
That means out of every hundred infants born, humans raise at least fourteen more successfully. Over evolutionary time that has made an enormous difference. Even in protected reserves, chimpanzees and gorillas are essentially at zero population growth and their worldwide numbers are dropping. Humans, however, have grown from small clans numbering in the thousands two hundred thousand years ago to seven billion people that live in every conceivable earthly environment, with more coming all the time. The evolutionary "strategy" of a long, if dangerous, human childhood has clearly succeeded, at least for us, for now. ### **3: LEARNING MACHINES** . For more detail see <http://users.ecs.soton.ac.uk/harnad/Papers/Py104/pinker.langacq.html>—Steven Pinker's exploration of human language and its evolution. . Planaria can pass along their personal experience to other flatworms in the oddest way. Untrained flatworms that eat the ground-up brains of other planaria that have been trained to perform specific tasks will quickly exhibit the same knowledge that the dead flatworms acquired in life. R. Joseph, _The Naked Neuron_ , 15. . Only four weeks into gestation the first brain cells begin to form at the astonishing rate of 250,000 every minute. Billions of neurons will forge links with billions of other neurons, and eventually trillions upon trillions of connections will be made between cells. . Research over the past ten years has illustrated exactly how cognitive, emotional, and social capacities are physically connected to behaviors that can affect us throughout our entire lives. Toxic stress damages developing brain architecture, which can lead to lifelong problems in learning, behavior, and physical and mental health. Scientists now know that chronic, unrelenting stress in early childhood, caused by extreme poverty, repeated abuse, or severe maternal depression, for example, can be toxic to the developing brain. 
On the other hand, so–called positive stress (moderate, short-lived physiological response to uncomfortable experience) is important and necessary to healthy development. Without the buffering protection of adult support, toxic stress can be built into the body largely through epigenetic processes. For more information, see "The Science of Early Childhood Development" and the Working Paper series from the National Scientific Council on the Developing Child. . www.developingchild.harvard.edu/content/publications.html. For more detailed information, see the bibliography, National Scientific Council on the Developing Child, "Children's Emotional Development is Built into the Architecture of Their Brains," 2006. . For more on the debate and latest information on exactly how much DNA we have in common with chimpanzees visit <http://news.nationalgeographic.com/news/2002/09/0924_020924_dnachimp_2.html>. . Scientists have found that neurons overproliferate a second time in our lives, just before puberty, much as they do in the first thirty-six months of life. However, the activity takes place in the prefrontal cortex, not the entire brain. It is almost as if the evolution of the prefrontal cortex required a kind of "second childhood." Connections made during this time that aren't used over the long term are also eventually pruned back. . See "A Comparison of Atropine and Patching Treatments for Moderate Amblyopia by Patient Age, Cause of Amblyopia, Depth of Amblyopia, and Other Factors," _Ophthalmology_ 110 (8) (August 2003): 1632–37; discussion, 1637–38, and L. Kiorpes and J. A. Movshon, "Amblyopia: A Developmental Disorder of the Central Visual Pathways," _Cold Spring Harbor Symposia on Quantitative Biology_ 61:39–48, for more information about blindness, the visual cortex, and amblyopia. ### **4: TANGLED WEBS—THE MORAL PRIMATE** . 
The same question struck an anthropologist at the University of Southern California, Christopher Boehm, several years ago, so he surveyed fifty previously completed studies of small, nonliterate tribes and bands that live around the world. He wondered if the way these primal communities handled the complexities of ethics, fair play, and morality might offer some insight into the basics of those behaviors in the rest of us. The popular view of nonliterate societies is that they are more prone to violence or war, but Boehm's research revealed they almost always developed, independently of one another, an egalitarian approach to life; one in which they struggled conscientiously to weigh self–interest and common interest. For example, if a bully acted like a silverback, alpha–male gorilla and attempted to dominate the group, the group responded by shaming, ostracizing, or, in extreme cases, killing the perpetrator to ensure individual rights were protected. . For a summary of Dunbar's theories, see pages 122–23 of _Thumbs, Toes, and Tears: And Other Traits That Make Us Human_. . For more on this case, see Valerie E. Stone et al., "Selective Impairment of Reasoning About Social Exchange in a Patient with Bilateral Limbic System Damage," _Proceedings of the National Academy of Sciences of the United States of America_ 99.17 (2002): 11531–36. . It is probably not a coincidence that the neuroimaging studies of people who suffer from different forms of autism find that activity in the orbitofrontal cortex, superior temporal sulcus (STS), and the amygdala is low or nonexistent compared with others who don't have autism. As the experience of R.M. illustrated, these areas of the brain are crucial to social interactions most of us take for granted. Autistics are missing many of the functioning brain structures that allow them to "read" minds. Autistics struggle with grasping the intentions of others, or even comprehending that others have states of mind different from theirs. 
Depending on how severe the autism is, empathy, sympathy, deception, even joking, are out of the question because they all require seeing life, however briefly, from a point of view other than one's own. This means that the scenario building that comes so naturally to most of us is hard for them. Though scientists don't yet understand why, these newer and more ancient parts of the brain seemingly have been shut down or struggle to communicate with one another. ### **5: THE EVERYWHERE APE** . Curtis W. Marean, "When the Sea Saved Humanity," _Scientific American_ 303.2 (2010): 54–61. . Explore National Geographic's fascinating Genographic Project for details of our past migrations at <https://genographic.nationalgeographic.com/genographic/lan/en/atlas.html>. There is not universal agreement on these conclusions, but the information nevertheless provides fascinating insights into how we evolved and came to spread across an entire planet. Another excellent site to visit is <http://www.bradshawfoundation.com/journey>. . The most recent matrilineal common ancestor shared by all living human beings, also known as Mitochondrial Eve, lived roughly 120–150 millennia ago around East Africa. This is about the same time as _Homo sapiens idaltu_. A study of African genetic diversity headed by Dr. Sarah Tishkoff found that Africa's San people express the greatest genetic diversity among the 113 distinct populations sampled in the research, making them one of fourteen "ancestral population clusters." . Dramatic climate fluctuations began 356,000 years ago according to researchers at the Smithsonian Institution and continued until about 50,000 years ago due to the elongated orbit of Earth around the sun. During this time, Africa often grew dry and the planet cold. For more see <http://humanorigins.si.edu/evidence/human-evolution-timeline-interactive>. . Read M. 
Lozano et al., "Right–Handedness of _Homo heidelbergensis_ from Sima de los Huesos (Atapuerca, Spain) 500,000 Years Ago," _Evolution and Human Behavior_ 30.5 (2009): 369–76, and <http://www.newscientist.com/article/dn17184-ancient-teeth-hint-that-righthandedness-is-nothing-new.html> for more details on handedness and brain lateralization at this point in human evolution. . See <http://humanorigins.si.edu/evidence/genetics/ancient-dna-and-neanderthals> for more insights into Neanderthal range. . See <http://news.bbc.co.uk/2/hi/science/nature/3948165.stm>. . For more details on this fascinating theory read "Genetic Analysis of Lice Supports Direct Contact Between Modern and Archaic Humans" at <http://www.plosbiology.org/article/info:doi/10.1371/journal.pbio.0020340>. ### **6: COUSIN CREATURES** . Neanderthal skulls were first discovered in Engis, Belgium (1829), by Philippe–Charles Schmerling and in Forbes' Quarry, Gibraltar (1848), both prior to the specimen discovered in the Neander Valley in Erkrath near Düsseldorf in August 1856. At the time, no one was quite sure what they were. Later they were identified as Neanderthals. If they had initially been identified and investigated further, the species might have been named Gibraltarians or Engiseans rather than Neanderthals. . Lighter, straighter hair is often a by–product of lighter, fairer skin. . Scientists have speculated that one of the reasons it is so difficult to find fossils of _Homo sapiens_ from the same period is that they hadn't yet begun to bury their dead even though Neanderthals had. . It's difficult to know how many Native Americans lived in the continental United States before the arrival of white men, but it couldn't have been many more than tens of thousands. In 1823 President James Monroe reported the "Chayenes" to be "a tribe of three thousand two hundred and fifty souls, dwelling and hunting on a river of the same name, a western tributary of the Missouri, a little above the Great Bend." 
Ten years later, Catlin, the famous painter of Native Americans, reported, "The Shiennes are a small tribe of about three thousand in number, living neighbors to the Sioux on the west of them, between the Black Hills and the Rocky Mountains." In 1822 the population of the two divisions of the Sioux was estimated at nearly thirteen thousand. . You can listen to the sound of the Neanderthal's _e_ at <http://www.fau.edu/explore/media/FAU-neanderthal.wav>. It's fascinating. ### **7: BEAUTIES IN THE BEAST** . Recounted in Darwin's _Descent of Man_ , chap. 19. . See _Thumbs, Toes, and Tears: And Other Traits That Make Us Human_ for a more detailed exploration of why women evolved large breasts and other insights into the attractions between men and women. . J. H. Langlois, L. Kalakanis, A. J. Rubenstein, A. Larson, M. Hallam, and M. Smoot, "Maxims or Myths of Beauty? A Meta–analytic and Theoretical Review." _Psychological Bulletin_ 126 (2000): 390–423. Also see <http://homepage.psy.utexas.edu/homepage/group/langloislab/facialattract.html>. . _Descent of Man_ , chap. 19. . Ibid. . At all of these sites researchers found piles of seashells. Together with the much–older evidence from the cave at Pinnacle Point, the shells suggest that seafood may have served as a nutritional trigger at a crucial point in human history, providing the fatty acids that modern humans needed to make an already large and intricate brain faster and smarter. Stanford University paleoanthropologist Richard Klein has long argued that a genetic mutation at roughly this point in human history sparked a sudden increase in brainpower, perhaps linked to the onset of speech. . E. Bates, with L. Benigni, I. Bretherton, L. Camaioni, and V. Volterra, _The Emergence of Symbols: Cognition and Communication in Infancy_. New York: Academic Press, 1979. 
Note that the term Bates used in the passage, _heterochrony_ , which is defined as a developmental change in the timing of events leading to changes in a living thing's size and shape, is often used interchangeably with _neoteny_. . For more on the exponential rate of change in evolution of all kinds from the universe to human culture, explore Ray Kurzweil's concept of the Law of Accelerating Returns, defined in his book _The Age of Spiritual Machines: When Computers Exceed Human Intelligence_. . These mutations may also have kicked in the ultimate symbolic ability and the most extreme proof that the human brain had evolved to a point where its owners had become self–aware—modern, human language and speech. ### **8: THE VOICE INSIDE YOUR HEAD** . See Belinda R. Lennox, S. Bert, G. Park, Peter B. Jones, and Peter G. Morris, "Spatial and Temporal Mapping of Neural Activity Associated with Auditory Hallucinations," _Lancet_ 353 (February 2, 1999), <http://www.bmu.psychiatry.cam.ac.uk/PUBLICATION_STORE/lennox99spa.pdf>. . This story was related in comments online following an article in _Scientific American_ entitled "It's No Delusion: Evolution May Favor Schizophrenia Genes" at <http://www.scientificamerican.com/article.cfm?id=evolution-may-favor-schizophrenia-genes>. . Eighty percent of diagnosed autistics are men based on research in C. J. Newschaffer, L. A. Croen, J. Daniels, et al., "The Epidemiology of Autism Spectrum Disorders," _Annual Review of Public Health_ 28 (2007): 235–58. doi:10.1146/annurev.publhealth.28.021406.144007. PMID 17367287. ## **Bibliography** Ackerman, Jennifer. "The Downside of Upright." ngm.nationalgeographic.com, July 1, 2006, 1–2. <http://ngm.nationalgeographic.com/2006/07/bipedal-body/ackerman-text>. Akst, Jef. "Ancient Humans More Diverse?" the-scientist.com, 2010, 1–3. <http://classic.the-scientist.com/blog/display/56279/>. Amen-Ra, Nūn. 
"How Dietary Restriction Catalyzed the Evolution of the Human Brain: An Exposition of the Nutritional Neurotrophic Neoteny Theory." _Medical Hypotheses_ 69.5 (2007): 1147–53. "Anthropologist's Studies of Childbirth Bring New Focus on Women in Evolution." www.sciencedaily.com, February 25, 2009. <http://www.sciencedaily.com/releases/2009/02/090217173043.htm?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+sciencedaily+%28ScienceDaily%3A+Latest+Science+News%29>. Bahn, Paul, consulting ed. _Written in Bones: How Human Remains Unlock the Secrets of the Dead_. Toronto, Ontario: Quintet Publishing, 2003. Baker, T. J., and J. Bichsel. "Personality Predictors of Intelligence: Differences Between Young and Cognitively Healthy Older Adults." _Personality and Individual Differences_ 41.5 (2006): 861–71. Banks, William E., Francesco d'Errico, A. Townsend Peterson, Masa Kageyama, Adriana Sima, and Maria-Fernanda Sánchez-Goñi. "Neanderthal Extinction by Competitive Exclusion." _PLoS ONE_ 3 (12) (2008): e3972. doi:10.1371/journal.pone.0003972. Bates, E. "Competition, Variation, and Language Learning." In _Mechanisms of Language Acquisition_, edited by Brian MacWhinney, 157–93. Hillsdale, NJ: Lawrence Erlbaum Associates, 1987. Belmonte, Matthew K., et al. "Autism and Abnormal Development of Brain Connectivity." _Journal of Neuroscience_ 24.42 (2004): 9228–31. Biederman, I., and E. Vessel. "Perceptual Pleasure and the Brain: A Novel Theory Explains Why the Brain Craves Information and Seeks It Through the Senses." _American Scientist_ 94.3 (2006): 247–53. Bloom, Paul. "The Moral Life of Babies." www.nytimes.com, 2010. <http://www.nytimes.com/2010/05/09/magazine/09babies-t.html?pagewanted=all>. Boehm, Christopher. "Political Primates | Greater Good." greatergood.berkeley.edu, December 1, 2008. <http://greatergood.berkeley.edu/article/item/political_primates/>. Bogin, B. A. "Evolutionary Hypotheses for Human Childhood." 
_Yearbook of Physical Anthropology_ 40 (1997): 63–89. Bond, Charles F., and Bella M. DePaulo. "Accuracy of Deception Judgments." _Personality and Social Psychology Review_ 10.3 (2006): 214–34. "Brain Network Related to Intelligence Identified." www.sciencedaily.com, September 9, 2007. <http://www.sciencedaily.com/releases/2007/09/070911092117.htm>. Briggs, Adrian W., et al. "Targeted Retrieval and Analysis of Five Neandertal mtDNA Genomes." _Science_ 325.5938 (2009): 318–21. Brockman, John. "Science of Happiness: A Talk with Daniel Gilbert." www.edge.org, May 22, 2006. <http://www.edge.org/3rd_culture/gilbert06/gilbert06_index.html>. Brotherson, S. "Understanding Brain Development in Young Children." _Bright Beginnings_ 4 (2005). Brown, Kyle S., et al. "Fire as an Engineering Tool of Early Modern Humans." _Science_ 325.5942 (2009): 859–62. Brüne, Martin. "Neoteny, Psychiatric Disorders and the Social Brain: Hypotheses on Heterochrony and the Modularity of the Mind." _Anthropology & Medicine_ 7.3 (2000): 301–18. ———. "Schizophrenia: An Evolutionary Enigma?" _Neuroscience and Biobehavioral Reviews_ 28.1 (2004): 41–53. Callaway, Ewen. "Neanderthals Speak Out After 30,000 Years." www.newscientist.com, April 15, 2008. <http://www.newscientist.com/article/dn13672-neanderthals-speak-out-after-30000-years.html>. Carroll, Sean B. "Genetics and the Making of _Homo sapiens_." _Nature_ 422.6934 (2003): 849–57. Chick, Garry. "What Is Play For?" Keynote address, Association for the Study of Play, St. Petersburg, FL, February 1998. Cohen, A. S., et al. "Paleoclimate and Human Evolution Workshop." _Eos, Transactions, American Geophysical Union_ 87.16 (2006): 161. "A Comparison of Atropine and Patching Treatments for Moderate Amblyopia by Patient Age, Cause of Amblyopia, Depth of Amblyopia, and Other Factors." _Ophthalmology_ 110 (8) (August 2003): 1632–37; discussion, 1637–38. Cosmides, L., H. 
C. Barrett, and J. Tooby. "Colloquium Paper: Adaptive Specializations, Social Exchange, and the Evolution of Human Intelligence." _Proceedings of the National Academy of Sciences of the United States of America_ 107, supplement 2 (2010): 9007–14. Courchesne, Eric E. "Brain Development in Autism: Early Overgrowth Followed by Premature Arrest of Growth." _Developmental Disabilities Research Reviews_ 10.2 (2004): 106–11. Cowley, Geoffrey. "Biology of Beauty." www.thedailybeast.com/newsweek.html, June 2, 1996, 3. <http://www.thedailybeast.com/newsweek/1996/06/02/the-biology-of-beauty.html>. "Daniel Dennett's Theory of Consciousness: The Intentional Stance and Multiple Drafts." <http://www.consciousentities.com>. Accessed April 6, 2011. Darwin, Charles. _The Descent of Man and Selection in Relation to Sex_. Norwalk, CT: Heritage Press, 1972. ———. _The Origin of Species_. Hardback ed. New York: Barnes and Noble, 2008. Dawkins, Richard. _The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe Without Design_. Trade paperback ed. New York: W. W. Norton, 2006. ———. _The Selfish Gene_. 30th anniversary ed. New York: Oxford University Press, 2009. Dawson, Geraldine G., et al. "Defining the Broader Phenotype of Autism: Genetic, Brain, and Behavioral Perspectives." _Development and Psychopathology_ 14.3 (2002): 581–611. Deacon, Terrence. _The Symbolic Species: The Co-Evolution of Language and the Brain_. Trade paperback. New York: W. W. Norton, 1998. Dean, Brian. "Is Schizophrenia the Price of Human Central Nervous System Complexity?" _Australian and New Zealand Journal of Psychiatry_ 43.1 (2009): 13–24. Dean, C. C., et al. "Growth Processes in Teeth Distinguish Modern Humans from _Homo erectus_ and Earlier Hominins." _Nature_ 414.6864 (2001): 628–31. Dean, Christopher. "Growing Up Slowly 160,000 Years Ago." _Proceedings of the National Academy of Sciences of the United States of America_ 104.15 (2007): 6093–94. De Waal, Frans B. 
M., _Chimpanzee Politics: Power and Sex Among Apes_. 25th anniversary ed. Baltimore: Johns Hopkins University Press, 2007. ———. "Do Humans Alone Feel Your Pain?" chronicle.com, October 26, 2011. <http://chronicle.com/article/Do-Humans-Alone-Feel-Your/26238/>. ———. "Morality and the Social Instincts: Continuity with the Other Primates." _Tanner Lectures on Human Values_ , 2003. DiCicco-Bloom, Emanuel, et al. "The Developmental Neurobiology of Autism Spectrum Disorder." _Journal of Neuroscience_ 26.26 (2006): 6897–6906. "DNA Evidence Tells of Human Migration." www.sciencedaily.com, February 24, 2010. <http://www.sciencedaily.com/releases/2010/02/100222121618.htm>. Doyle-Burr, Nora. "New Human Species Discovered? How China Fossils Could Redefine 'Human,'" _Christian Science Monitor_ , 2012. Dreifus, Claudia. "A Conversation with Philip G. Zimbardo; Finding Hope in Knowing the Universal Capacity for Evil." _New York Times_ , April 3, 2007. <http://www.nytimes.com/2007/04/03/science/03conv.html>. Dyson, Freeman. _Disturbing the Universe_. New York: Basic Books, 1979. Eiseley, Loren. _The Immense Journey_. Paperback. New York: Vintage Books, 1977. ———. _The Unexpected Universe_. Trade paperback. New York: Harcourt Brace Jovanovich, 1985. Enard, Wolfgang, et al. "Molecular Evolution of FOXP2, a Gene Involved in Speech and Language." _Nature_ 418.6900 (2002): 869–72. Ermer, E., et al. "Cheater Detection Mechanism." _Encyclopedia of Social Psychology_ (2007): 138–40. Fabre, Virginie V., Silvana S. Condemi, and Anna A. Degioanni. "Genetic Evidence of Geographical Groups Among Neanderthals." _PLoS ONE_ 4 (4) (January 1, 2009): e5151. doi:10.1371/journal.pone.0005151. Fagan, Brian. _Cro-Magnon: How the Ice Age Gave Birth to the First Modern Humans_. New York: Bloomsbury Press, 2010. Fagan, J. F., III. "New Evidence for the Prediction of Intelligence from Infancy." _Infant Mental Health Journal_ 3.4 (1982): 219–28. Falk, Dean. 
"New Information About Albert Einstein's Brain." www.frontiersin.org/evolutionary_neuroscience 1 (2009): 3. <http://www.frontiersin.org/evolutionary_neuroscience/10.3389/neuro.18.003.2009/abstract>. ———. "Prelinguistic Evolution in Early Hominins: Whence Motherese?" _Behavioral and Brain Sciences_ 27.4 (2004): 491–503. "Fossil from Last Common Ancestor of Neanderthals and Humans Found in Europe, 1.2 Million Years Old." _Science Daily_ , April 4, 2008. Accessed March 17, 2011. Frankfurt, Harry G. _On Bullshit_. Princeton, NJ: Princeton University Press, 2005. Friedman, Danielle. "Parent Like a Caveman." www.thedailybeast.com, October 10, 2010. <http://www.thedailybeast.com/articles/2010/10/11/hunter-gatherer-parents-better-than-todays-moms-and-dads.html>. Fu, X., et al. "Rapid Metabolic Evolution in Human Prefrontal Cortex." _Proceedings of the National Academy of Sciences of the United States of America_ 108.15 (2011): 6181–86. Furnham, Adrian, and Emma Reeves. "The Relative Influence of Facial Neoteny and Waist-to-Hip Ratio on Judgements of Female Attractiveness and Fecundity." _Psychology, Health & Medicine_ 11.2 (2006): 129–41. Genographic Project. National Geographic Society. <https://genographic.nationalgeographic.com/genographic/lan/en/atlas.html>. Ghose, Tia. "Bugs Hold Clues to Human Origins." the-scientist.com, January 22, 2009. <http://classic.the-scientist.com/blog/display/55350/>. Accessed March 3, 2011. Godfrey, L. R., and M. R. Sutherland. "Paradox of Peramorphic Paedomorphosis: Heterochrony and Human Evolution." _American Journal of Physical Anthropology_ 99.1 (1996): 17–42. Golovanova, Liubov Vitaliena, et al. "Significance of Ecological Factors in the Middle to Upper Paleolithic Transition." _Current Anthropology_ 51.5 (2010): 655–91. Gopnik, A. "How Babies Think." _Scientific American_ 303.1 (2010): 76–81. Gopnik, A., et al. 
"Causal Learning Mechanisms in Very Young Children: Two-, Three-, and Four-Year-Olds Infer Causal Relations from Patterns of Variation and Covariation." _Developmental Psychology_ 37.5 (2001): 620–29. Gould, Stephen Jay. _Ontogeny and Phylogeny_. Cambridge, MA: Harvard University Press, 1977. ———. _The Panda's Thumb: More Reflections in Natural History_. Trade paperback. New York: W. W. Norton, 1992. Grafton, Scott, et al. "Brain Scans Go Legal." _Scientific American_, November 29, 2006, 84. Grant, Richard P. "Creative Madness." _Scientist_ 24.8 (2010): 23–25. Green, Richard E., et al. "A Draft Sequence of the Neanderthal Genome." _Science_ 328.5979 (2010): 710–22. Greenwood, Veronique. "Truth or Lies: A New Study Raises the Question of Whether Being Honest Is a Conscious Decision at All." seedmagazine.com, August 17, 2009. <http://seedmagazine.com/content/article/truth_or_lies/>. Griskevicius, Vladas, et al. "Blatant Benevolence and Conspicuous Consumption: When Romantic Motives Elicit Strategic Costly Signals." _Journal of Personality and Social Psychology_ 93.1 (2007): 85–102. Gugliotta, Guy. "The Great Human Migration." www.smithsonianmag.com, July 2008, 1–5. <http://www.smithsonianmag.com/history-archaeology/human-migration.html>. Gunz, P., F. L. Bookstein, et al. "Early Modern Human Diversity Suggests Subdivided Population Structure and a Complex Out-of-Africa Scenario." _Proceedings of the National Academy of Sciences_ 106.15 (2009): 6094. Gunz, Philipp, Simon Neubauer, Bruno Maureille, and Jean-Jacques Hublin. "Brain Development After Birth Differs Between Neanderthals and Modern Humans." _Current Biology_ 20.21 (2010): R921–22. ———. "Enlarged Image: Brain Development After Birth Differs Between Neanderthals and Modern Humans" (supplement to the reference above). _Current Biology_ 20.21 (November 9, 2010): R921–22. doi:10.1016/j.cub.2010.10.018. Hadhazy, A. "Think Twice: How the Gut's 'Second Brain' Influences Mood and Well-Being." 
_Scientific American_, 2010. <http://www.scientificamerican.com/article.cfm>. Haidt, Jonathan. "The New Synthesis in Moral Psychology." _Science_ 316.5827 (2007): 998–1002. Harcourt, Alexander H., and Kelly J. Stewart. _Gorilla Society: Conflict, Compromise and Cooperation Between the Sexes_. Chicago: University of Chicago Press, 2007. Hattori, Kanetoshi. "Two Origins of Language Evolution: Unilateral Gestural Language and Bilateral Vocal Language, Hypotheses from IQ Test Data." _Mankind Quarterly_ 39.4 (1999): 399–436. Hauser, M., et al. "A Dissociation Between Moral Judgments and Justifications." _Mind & Language_ 22.1 (2007): 1–21. Hazlett, Heather Cody, et al. "Magnetic Resonance Imaging and Head Circumference Study of Brain Size in Autism: Birth Through Age 2 Years." _Archives of General Psychiatry_ 62.12 (2005): 1366–76. Henshilwood, Christopher S., et al. "A 100,000-Year-Old Ochre-Processing Workshop at Blombos Cave, South Africa." _Science_ 334.6053 (2011): 219–22. Hill, Jason, et al. "Similar Patterns of Cortical Expansion During Human Development and Evolution." _Proceedings of the National Academy of Sciences of the United States of America_ 107.29 (2010): 13135–40. Hofstadter, Douglas R. _Gödel, Escher, Bach: An Eternal Golden Braid_. 20th anniversary ed. New York: Basic Books, 1999. "How Long Is a Child a Child? Human Developmental Patterns Emerged More Than 160,000 Years Ago." www.sciencedaily.com, March 14, 2007. <http://www.sciencedaily.com/releases/2007/03/070313110614.htm>. Hubel, D. H., and T. N. Wiesel. "Binocular Interaction in Striate Cortex of Kittens Reared with Artificial Squint." _Journal of Neurophysiology_ 28 (1965): 1041–59. ———. "Receptive Fields and Functional Architecture of Monkey Striate Cortex." _Journal of Physiology_ (London) 195 (1968): 215–43. ———. "Receptive Fields, Binocular Interaction, and Functional Architecture in the Cat's Visual Cortex." _Journal of Physiology_ (London) 160 (1962): 106–54. Irvine, William B. 
_On Desire: Why We Want What We Want_. New York: Oxford University Press, 2006. Jaynes, Julian. _The Origin of Consciousness in the Breakdown of the Bicameral Mind_. Boston: Houghton Mifflin, 1976. Joseph, R. _The Naked Neuron_. New York: Plenum Press, 1993. Jung, Carl G., ed. _Man and His Symbols_. New York: Anchor Books, 1964. Kelley, Jay, and Gary T. Schwartz. "Dental Development and Life History in Living African and Asian Apes." _Proceedings of the National Academy of Sciences of the United States of America_ 107.3 (2010): 1035–40. "Key Brain Regulatory Gene Shows Evolution in Humans." www.sciencedaily.com, December 12, 2005. <http://www.sciencedaily.com/releases/2005/12/051212120211.htm>. Kiorpes, L., and J. A. Movshon. "Amblyopia: A Developmental Disorder of the Central Visual Pathways." _Cold Spring Harbor Symposia on Quantitative Biology_ 61: 39–48. ———. "Behavioral Analysis of Visual Development." In _Development of Sensory Systems in Mammals_, edited by J. R. Coleman, 125–54. New York: Wiley, 1990. Kiorpes, Lynne, Daniel C. Kiper, Lawrence P. O'Keefe, James R. Cavanaugh, and J. Anthony Movshon. "Neuronal Correlates of Amblyopia in the Visual Cortex of Macaque Monkeys with Experimental Strabismus and Anisometropia." _Journal of Neuroscience_ 18.16 (August 15, 1998): 6411–24. Konner, Melvin. _The Evolution of Childhood_. Cambridge, MA: Belknap Press of the Harvard University Press, 2010. Krasnow, Max M., et al. "Cognitive Adaptations for Gathering-Related Navigation in Humans." _Evolution and Human Behavior_ 32.1 (2011): 1–12. Krause, Johannes J., et al. "The Derived FOXP2 Variant of Modern Humans Was Shared with Neanderthals." _Current Biology_ 17.21 (2007): 1908–12. Kubicek, Stefan. "Infographic: Epigenetics—a Primer." _Scientist_ 25.3 (2011): 32. Kurtén, Björn. _Dance of the Tiger: A Novel of the Ice Age_. 3rd ed. New York: Berkley Books, 1982. Lambert, David, and the Diagram Group. _The Field Guide to Early Man_. New York: Facts on File, 1987. 
Langlois, J. H., L. Kalakanis, A. J. Rubenstein, A. Larson, M. Hallam, and M. Smoot. "Maxims or Myths of Beauty? A Meta-analytic and Theoretical Review." _Psychological Bulletin_ 126 (2000): 390–423. Langlois, Judith. "The Question of Beauty." beautymatters.blogspot.com, February 4, 2000. Accessed April 1, 2011. "Last Humans on Earth Survived in Ice Age Sheltering Garden of Eden, Claim Scientists." _Daily Mail_, July 27, 2010. <http://www.dailymail.co.uk/sciencetech/article-1297765/.html>. Lennox, Belinda R., S. B. G. Park, Peter B. Jones, and Peter G. Morris. "Spatial and Temporal Mapping of Neural Activity Associated with Auditory Hallucinations." _Lancet_ 353 (February 2, 1999). Leonard, W. R., and M. L. Robertson. "Evolutionary Perspectives on Human Nutrition: The Influence of Brain and Body Size on Diet and Metabolism." _American Journal of Human Biology_ 4 (1992): 179–95. Leslie, Mitchell. "Suddenly Smarter." _Stanford Magazine_, July 1, 2002, 1–11. Leutwyler, Kristin. "First Gene for Schizophrenia Discovered." _Scientific American_, March 20, 2001. Lieberman, Philip P. "On the Nature and Evolution of the Neural Bases of Human Language." _American Journal of Physical Anthropology_, supplement 35 (2002): 36–62. "Long Legs Are More Efficient, According to New Math Model." www.sciencedaily.com, March 19, 2007, 1–2. <http://www.sciencedaily.com/releases/2007/03/070312091455.htm>. Lozano, M., et al. "Right-Handedness of _Homo heidelbergensis_ from Sima De Los Huesos (Atapuerca, Spain) 500,000 Years Ago." _Evolution and Human Behavior_ 30.5 (2009): 369–76. Maestripieri, Dario. _Machiavellian Intelligence: How Rhesus Macaques and Humans Have Conquered the World_. Chicago: University of Chicago Press, 2007. Manica, Andrea, et al. "The Effect of Ancient Population Bottlenecks on Human Phenotypic Variation." _Nature_ 448.7151 (2007): 346–48. "Man's Earliest Direct Ancestors Looked More Apelike Than Previously Believed." www.sciencedaily.com, March 27, 2007, 1–2. 
<http://www.sciencedaily.com/releases/2007/03/070324133018.htm>. Accessed August 20, 2010. Marean, Curtis W. "When the Sea Saved Humanity." _Scientific American_ 303.2 (2010): 54–61. Miller, Earl, and Jonathan Cohen. "An Integrative Theory of Prefrontal Cortex Function." _Annual Review of Neuroscience_ 24 (2001). Miller, Geoffrey. _The Mating Mind: How Sexual Choice Shaped the Evolution of Human Nature_. New York: Anchor Books, 2001. Mithen, Steven. _The Singing Neanderthals: The Origins of Music, Language_ , _Mind and Body_. Cambridge, MA: Harvard University Press, 2006. "Modern Humans, Arrival in South Asia May Have Led to Demise of Indigenous Populations." www.sciencedaily.com, November 7, 2005. <http://www.sciencedaily.com/releases/2005/11/051107080321.htm>. "Modern Man Found to Be Generally Monogamous, Moderately Polygamous." www.sciencedaily.com, March 3, 2010. <http://www.sciencedaily.com/releases/2010/03/100302112018.htm>. Morris, Desmond. _The Naked Ape_. First American ed. 3rd printing. New York: McGraw-Hill, 1967. Murray, Elisabeth A. "The Amygdala, Reward and Emotion." _Trends in Cognitive Sciences_ 11.11 (2007): 489–97. National Scientific Council on the Developing Child. "Young Children Develop in an Environment of Relationships: Working Paper No. 1." 2004, 1–12. ———. "Children's Emotional Development Is Built into the Architecture of Their Brains: Working Paper No. 2." 2006, 1–16. ———. "Early Exposure to Toxic Substances Damages Brain Architecture: Working Paper No. 4." 2006, 1–20. ———. "The Timing and Quality of Early Experiences Combine to Shape Brain Architecture: Working Paper No. 5." 2008, 1–12. ———. "Early Experiences Can Alter Gene Expression and Affect Long-Term Development: Working Paper No. 10." 2010, 1–12. "Neanderthal Children Grew Up Fast." www.sciencedaily.com, December 5, 2007. <http://www.sciencedaily.com/releases/2007/12/071204100409.htm>. "Neanderthals Speak Again After 30,000 Years." www.sciencedaily.com, April 21, 2008. 
<http://www.sciencedaily.com/releases/2008/04/080421154426.htm>. Neill, David. "Cortical Evolution and Human Behavior." _Brain Research Bulletin_ 74 (2007): 191–205. Nettle, Daniel, and Helen Clegg. "Schizotypy, Creativity and Mating Success in Humans." _Proceedings of the Royal Society B_ 273.1586 (2006): 611–15. "New Kenyan Fossils Challenge Established Views on Early Evolution of Our Genus Homo." www.sciencedaily.com, August 13, 2007. <http://www.sciencedaily.com/releases/2007/08/070813093132.htm>. Newschaffer, C. J., L. A. Croen, J. Daniels, et al. "The Epidemiology of Autism Spectrum Disorders." _Annual Review of Public Health_ 28 (2007): 235–58. doi:10.1146/annurev.publhealth.28.021406.144007. PMID: 17367287. Nieder, Andreas. "Prefrontal Cortex and the Evolution of Symbolic Reference." _Current Opinion in Neurobiology_ 19.1 (2009): 99–108. NIMH. "Teenage Brain: A Work in Progress" (fact sheet). wwwapps.nimh.nih.gov/index.shtml, July 18, 2011. <http://wwwapps.nimh.nih.gov/health/publications/teenage-brain-a-work-in-progress.shtml>. Oakley, Barbara. "What a Tangled Web We Weave." the-scientist.com, April 10, 2009, 3. <http://classic.the-scientist.com/news/display/55610/>. Olivieri, Anna, et al. "The mtDNA Legacy of the Levantine Early Upper Paleolithic in Africa." _Science_ 314.5806 (2006): 1767–70. Pacchioli, David. "Moral Brain." _Research, University of Pennsylvania_ (2006): 5. Patel, Aniruddh D. _Music, Language, and the Brain_. New York: Oxford University Press, 2008. Paus, T., et al. "Structural Maturation of Neural Pathways in Children and Adolescents: In Vivo Study." _Science_ 283 (March 19, 1999): 1908. Penin, Xavier, Christine Berge, and Michel Baylac. "Ontogenetic Study of the Skull in Modern Humans and the Common Chimpanzees: Neotenic Hypothesis Reconsidered with a Tridimensional Procrustes Analysis." _American Journal of Physical Anthropology_ 118.1 (2002): 50–62. Perrett, D. I., K. J. Lee, I. Penton-Voak, D. Rowland, S. Yoshikawa, D. M. Burt, S. P. 
Henzi, D. L. Castles, and S. Akamatsu. "Effects of Sexual Dimorphism on Facial Attractiveness." _Nature_ 394.6696 (1998): 884–87. Pontzer, Herman H. "Predicting the Energy Cost of Terrestrial Locomotion: A Test of the LiMb Model in Humans and Quadrupeds." _Journal of Experimental Biology_ 210, pt. 3 (2007): 484–94. Potts, Richard, and Christopher Sloan. _What Does It Mean to Be Human?_ Washington, DC: National Geographic, 2010. Reed, David L., et al. "Genetic Analysis of Lice Supports Direct Contact Between Modern and Archaic Humans." _PLoS Biology_ 2.11 (2004): e340. Reich, David D., et al. "Genetic History of an Archaic Hominin Group from Denisova Cave in Siberia." _Nature_ 468.7327 (2010): 1053–60. Riel-Salvatore, Julien. "A Niche Construction Perspective on the Middle–Upper Paleolithic Transition in Italy." _Journal of Archaeological Method and Theory_ 17.4 (2010): 323–55. ———. "What Is a 'Transitional' Industry? The Uluzzian of Southern Italy as a Case Study." _Sourcebook of Paleolithic Transitions_ (2009): 377–96. Rightmire, G. Philip. "Human Evolution in the Middle Pleistocene: The Role of _Homo heidelbergensis_." _Evolutionary Anthropology_ (2011): 1–10. Rincon, Paul. "Neanderthals' 'Last Rock Refuge.'" www.bbc.com, September 13, 2006. <http://news.bbc.co.uk/2/hi/science/nature/5343266.stm>. ———. "Neanderthals 'Not Close Family.'" www.bbc.com, January 27, 2004. <http://news.bbc.co.uk/2/hi/science/nature/3431609.stm>. Rosen, Jeffrey. "The Brain on the Stand." _New York Times Magazine_, March 11, 2007, 46–84. Rosenberg, K. R., and W. R. Trevathan. "The Evolution of Human Birth." _Scientific American_ 285.5 (2001): 72–77. Rozzi, Fernando V. Ramirez, and José Maria Bermudez De Castro. "Surprisingly Rapid Growth in Neanderthals." _Nature_ 428.6986 (2004): 936–39. Sawyer, G. J., and Viktor Deak. _The Last Human_. New Haven, CT: Yale University Press, 2007. "Schizophrenia: Costly By-Product of Human Brain Evolution?" 
_Science Daily_, August 5, 2008. <http://www.sciencedaily.com/releases/2008/08/080804222910.htm>. Sell, A., J. Tooby, and L. Cosmides. "Formidability and the Logic of Human Anger." _Proceedings of the National Academy of Sciences of the United States of America_ 106.35 (2009): 15073–78. Sell, Aaron A., et al. "Human Adaptations for the Visual Assessment of Strength and Fighting Ability from the Body and Face." _Proceedings of the Royal Society B_ 276.1656 (2009): 575–84. "Neanderthal Body Art Hints at Ancient Language." _New Scientist_, March 29, 2011. <http://www.newscientist.com/article/mg19726494.600-neanderthal-body-art-hints-at-ancient-language.html>. Silberman, S. "Don't Even Think About Lying: How Brain Scans Are Reinventing the Science of Lie Detection." _Wired_ 14.1 (2006): 142. Sinclair, David A., and Lenny Guarente. "Unlocking the Secrets of Longevity Genes." _Scientific American_ 294.3 (2006): 48–51, 54–57. Singer, Emily. "An Innate Ability to Smell Scams." _Los Angeles Times_, August 19, 2002. <http://articles.latimes.com/2002/aug/19/science/sci-cheat19>. Slimak, L., et al. "Late Mousterian Persistence near the Arctic Circle." _Science_ 332.6031 (2011): 841–45. Smith, Tanya M., et al. "Earliest Evidence of Modern Human Life History in North African Early _Homo sapiens_." _Proceedings of the National Academy of Sciences of the United States of America_ 104.15 (2007): 6128–33. Smith, Tanya M., et al. "Dental Evidence for Ontogenetic Differences Between Modern Humans and Neanderthals." _Proceedings of the National Academy of Sciences of the United States of America_ 107.49 (2010): 20923–28. Sockol, Michael D., David A. Raichlen, and Herman H. Pontzer. "Chimpanzee Locomotor Energetics and the Origin of Human Bipedalism." _Proceedings of the National Academy of Sciences of the United States of America_ 104.30 (2007): 12265–69. Sparks, B. F., et al. 
"Brain Structural Abnormalities in Young Children with Autism Spectrum Disorder." _Neurology_ 59.2 (2002): 184–92. Stone, Valerie E., et al. "Selective Impairment of Reasoning About Social Exchange in a Patient with Bilateral Limbic System Damage." _Proceedings of the National Academy of Sciences of the United States of America_ 99.17 (2002): 11531–36. "Study Identifies Energy Efficiency as Reason for Evolution of Upright Walking." _Science Daily_ , July 17, 2007. <http://www.sciencedaily.com/releases/2007/07/070716191140.htm>. "Supervolcano Eruption—in Sumatra—Deforested India 73,000 Years Ago." _Science Daily_ , November 24, 2009. <http://www.sciencedaily.com/releases/2009/11/091123142739.htm>. Swaminathan, Nikhil. "It's No Delusion: Evolution May Favor Schizophrenia Genes." _Scientific American_ , September 6, 2007. ———. "White Matter Matters in Schizophrenia." _Scientific American_ , April 24, 2011. Tattersall, I. "Once We Were Not Alone." _Scientific American_ 282.1 (2000): 56–62. Texier, Pierre-Jean, et al. "A Howiesons Poort Tradition of Engraving Ostrich Eggshell Containers Dated to 60,000 Years Ago at Diepkloof Rock Shelter, South Africa." _Proceedings of the National Academy of Sciences of the United States of America_ 107.14 (2010): 6180–85. Thomas, Lewis. _The Medusa and the Snail_. New York: Viking Press, 1979. "Three Neanderthal Sub-Groups Confirmed." _Science Daily_ , April 15, 2009. <http://www.sciencedaily.com/releases/2009/04/090415075150.htm>. "Toba Catastrophe Theory." _Science Daily_ , n.d. <http://www.sciencedaily.com/articles/t/toba_catastrophe_theory.htm>. Accessed March 9, 2011. Tooby, J., and L. Cosmides. "Groups in Mind: The Coalitional Roots of War and Morality." _Human Morality & Sociality: Evolutionary & Comparative Perspectives_. New York: Palgrave Macmillan, 2010. Tooby, J., and I. DeVore. "The Reconstruction of Hominid Behavioral Evolution Through Strategic Modeling." 
In _The Evolution of Human Behavior: Primate Models_, edited by Warren G. Kinsey, 183–237. Albany: State University of New York Press, 1987. Tzedakis, P. C., K. A. Hughen, I. Cacho, and K. Harvati. "Placing Late Neanderthals in a Climatic Context." _Nature_ 449.7159 (September 13, 2007): 206–8. doi:10.1038/nature06117. Van Wyhe, John. _The Darwin Experience: The Story of the Man and His Theory of Evolution_. Washington, DC: National Geographic, 2008. Volk, T., and J. Atkinson. "Is Child Death the Crucible of Human Evolution?" _Journal of Social, Evolutionary and Cultural Psychology_ 2 (2008): 247–60. Vrba, E. S. "Climate, Heterochrony, and Human Evolution." _Journal of Anthropological Research_ (1996): 1–28. Wade, Nicholas. "Scientist Finds the Beginnings of Morality in Primate Behavior." _New York Times_, March 20, 2007. <http://www.nytimes.com/2007/03/20/science/20moral.html?_r=1&pagewanted=all>. ———. "Signs of Neanderthals Mating with Humans." _New York Times_, May 6, 2010. <http://www.nytimes.com/2010/05/07/science/07neanderthal.html>. ———. "Tools Suggest Earlier Human Exit from Africa." _New York Times_, January 28, 2011. <http://www.nytimes.com/2011/01/28/science/28africa.html?pagewanted=all>. Walter, Chip. _Thumbs, Toes, and Tears: And Other Traits That Make Us Human_. New York: Walker, 2006. Weaver, Timothy D., and Jean-Jacques Hublin. "Neanderthal Birth Canal Shape and the Evolution of Human Childbirth." _Proceedings of the National Academy of Sciences of the United States of America_ 106.20 (May 19, 2009): 8151–56. doi:10.1073/pnas.0812554106. Wesson, Kenneth. "Neuroplasticity." _Brain World_, August 26, 2010. <http://brainworldmagazine.com/neuroplasticity>. "What Does It Mean to Be Human?" Smithsonian Institution, 2010. <http://humanorigins.si.edu>. "Why Humans Walk on Two Legs." _Science Daily_, July 7, 2007. <http://www.sciencedaily.com/releases/2007/07/070720111226.htm>. "Why Music?" _Economist_, December 18, 2008. <http://www.economist.com/node/12795510>. 
"Why We Are, as We Are." _Economist_ , December 18, 2008. <http://www.economist.com/node/12795581>. Wills, Christopher. _The Runaway Brain: The Evolution of Human Uniqueness_. New York: HarperCollins Publishers, 1993. Wilson, David Sloan. _Evolution for Everyone: How Darwin's Theory Can Change the Way We Think About Our Lives_. New York: Bantam Dell, 2007. Wilson, Edward O. _On Human Nature_. Trade paperback. Cambridge, MA: Harvard University Press, 1978. Wong, K. "Who Were the Neanderthals?" _Scientific American_ 289 (2003): 28–37. Zak, Paul J. "The Neurobiology of Trust." _Scientific American_ 298.6 (2008): 88–92, 95. Zilhão, et al. "Symbolic Use of Marine Shells and Mineral Pigments by Iberian Neanderthals." _Proceedings of the National Academy of Sciences of the United States of America_ 107.3 (2010): 1023–28. Zimmer, Carl. "Siberian Fossils Were Neanderthals' Eastern Cousins, DNA Reveals." _New York Times_ , December 23, 2010. <http://www.nytimes.com/2010/12/23/science/23ancestor.html>. Zipursky, Lawrence S. "Driving Self-Recognition." _American Scientist_ 24.11: 40–48. ## Footnotes a Ants. b Many more human species may have existed at this time, but the farther back in time you go, the more likely those creatures lived in rain forests, and the less likely conditions were optimal for creating fossils. c See pages 37–38 of _Thumbs, Toes, and Tears: And Other Traits That Make Us Human_ for Bolk's complete list. d Life evolved soon after Earth itself came into existence some four billion years ago. The first cells were prokaryotic. The best guesses for the time when eukaryotes (cells with mitochondria) evolved range from just below 2.0 billion years to around 3.5 billion years before the present. The early fossil record for single–celled organisms, as you might expect, is sparse, so it's tough to set the exact date of this remarkable bargain. 
e When the ancient Carthaginian explorer Hanno the Navigator came across a group of what he called savage men and hairy women in West Africa twenty-five hundred years ago, he wasn't sure if they were human, but the difference between them and him was clearly large. His interpreters called the creatures _Gorillae_, from which we later derived the term _gorilla_. It's possible that's exactly what Hanno had encountered. f Aurochs were a type of now-extinct, large, wild cattle that inhabited Europe, Asia, and North Africa. They survived in Europe until 1627, when the last recorded member of the species, a female, died in the Jaktorów Forest, in Poland. g The word is still out on Denisovans and the Red Deer Cave people. Even _Homo floresiensis_. Denisovans appear to also have descended from _Homo heidelbergensis_. h A term coined by evolutionary biologist Richard Dawkins in his book _The Selfish Gene_. i This is the hypothesis of evolutionary theorist Oliver Curry of the London School of Economics. j Conducted by the American Psychological Association, 2010. k A term coined in my previous book _Thumbs, Toes, and Tears: And Other Traits That Make Us Human_. ## A Note on the Author Chip Walter is founder of the popular website AllThingHuman.net, a former CNN bureau chief, and feature film screenwriter. He has written and produced several award-winning science documentaries for PBS, in collaboration with the National Academy of Sciences, including programs for the Emmy Award–winning series _Planet Earth_ and _Infinite Voyage_. Chip's science writing has embraced a broad spectrum of fields and topics. He is the author of _Space Age_, the companion volume to the PBS series of the same title; _I'm Working on That_, written with William Shatner; and _Thumbs, Toes, and Tears—And Other Traits That Make Us Human_. His books have been published in six languages. 
Chip's articles have also been featured in the _Economist_ , _Scientific American_ , _Scientific American Mind_ , _Slate_ , the _Washington Post_ , the _Boston Globe_ , _Discover_ magazine, and many other publications and websites. He is currently an adjunct professor at Carnegie Mellon University's School of Computer Science and Entertainment Technology Center. He lives in Pittsburgh with his wife, Cyndy, and their children Molly, Steven, Hannah, and Annie. ## By the Same Author _Thumbs, Toes, and Tears: And Other Traits That Make Us Human_ _I'm Working on That_ , written with William Shatner _Space Age_ ## Plate Section **_Paranthropus aethiopicus_** This creature was among three species of "robust" humans, some of whom roamed the plains of Africa for as many as a million years. They might have outcompeted the line of primates that eventually led to us, but our direct ancestors took an odd evolutionary turn that lengthened our childhoods and profoundly changed human evolution. (See Chapter 2: "The Invention of Childhood.") _Original artwork by Sergio Pérez._ **Homotherium—Big Cat of the Ancient Savanna** Life on Africa's ancient savannas had to have been terrifying. The humans who roamed and foraged there between 5 million and 1.5 million years ago probably spent a good deal of their time avoiding big cats like this one, a precursor of today's lions, panthers, and tigers. The danger they presented further bonded early humans, making cooperation among them more important than ever, one reason we are so social today. (See Chapter 2: "The Invention of Childhood.") _Homotherium © 2005 Mark Hallet._ **Lake Turkana—An Evolutionary Garden of Eden?** Today Lake Turkana is the world's largest alkaline lake, but millions of years ago it was the garden spot of Africa and home to ancient human species of all kinds, including the line that likely led to us. 
(See Chapter 3: "Learning Machines.") _Photo credit: Yannick Garcin._ **The Boy Who Changed Our View of Human Evolution** Also known as Nariokotome Boy, this young man met his end 1.5 million years ago. Luckily, and remarkably, most of his skeleton survived, making him one of the most important paleoanthropological finds ever. His teeth and bones have illuminated the mysterious evolution of our long childhoods and the crucial role it played in our survival. (See Chapter 2: "The Invention of Childhood.") _Photo credit: Look Sciences/Photo Researchers._ **The Ancient Continents of Sunda and Sahul** Fifty thousand years ago waves of modern humans began making their way out of Africa, scattering in all directions. Some tribes wandered to Australia, more than ten thousand miles away. They were able to make that journey because forty-five thousand years ago a swing to frigid climate locked oceans of water in earth's polar caps and dropped global sea levels. That created these immense continents in the Indian and Pacific oceans which are today submerged: Sunda and Sahul. Across these landmasses (with some short hops by sea) early humans made their way to the plateaus and mountains of western Australia, the ancestors of today's Australian Aborigines. (See Chapter 5: "The Everywhere Ape.") _Based on original artwork by Maximilian Dörrbecker._ **The Scattering of the Human Race** Once they had departed Africa, modern humans headed off to every corner of the planet—the Middle East, Europe, Asia, the Far East, the South Pacific, Australia, and the Americas. Among the last continents to be reached? Antarctica, in the nineteenth century. Remote Pacific islands were probably populated about the time the first Pharaohs ruled Egypt. (See Chapter 5: "The Everywhere Ape.") _Original artwork by Altaileopard, Wikimedia commons._ **Gorham Cave** Twenty-five thousand years ago the last Neanderthals may have lived, and died, in this cathedral-like cave. 
(See Chapter 6: "Cousin Creatures.") _Original photo provided by Gibmetal77, Wikimedia commons._ **Our Closest Cousin?** We now know the Neanderthal people of Europe and west Asia were remarkably intelligent and tough. This reconstruction illustrates that their large skulls, thick, ropey muscles, and expansive noses, optimized for warming cold air, helped them survive frigid temperatures and a punishing lifestyle. (See Chapter 6: "Cousin Creatures.") _Original artwork by Cicero Moraes, Wikimedia commons._ **Final Days of the Neanderthal** Did the last Neanderthal sit on the great snaggle-toothed Rock of Gibraltar and watch her (or his) final sunset? (See Chapter 6: "Cousin Creatures.") _Original photo provided by RedCoat, Wikimedia commons._ If we could compress the emergence of all of the humans we so far know of who evolved over the past seven million years into the space of twelve months, it would look something like this. Many more species probably came and went that we haven't yet discovered. (See Chapter 1: "The Battle for Survival.") _Artwork and graph by Frank Harris, 2012._ **Prehistoric Genius** Long ago a Cro-Magnon artist painted this breathtaking image deep in the Altamira caves of Spain. Today it would be the envy of art galleries around the world, or Madison Avenue marketeers—rich, vibrant, and ingenious. You can almost see the image ripple in the ancient firelight that once illuminated it. Around this time in human history there was a global blossoming of creativity. Was the wellspring of that creativity our long childhood? (See Chapter 7: "Beauties in the Beast.") _Photo credit: akg-images._ **One Reason Why We Resemble Baby Apes** The effect of youthful (more feminine) faces on members of the opposite sex illustrates that even today both men and women find their counterparts more attractive if they look more childlike. 
For this experiment, scientists digitally created an "average," but attractive, version of two faces for each sex, one Caucasian and one Asian, four "average" faces in all. The researchers then digitally modified each face to create two versions, one slightly more masculine, the other slightly more feminine and childlike. (See Chapter 7: "Beauties in the Beast.") _Reprinted by permission, Macmillan Publishers Ltd.:_ Nature _394, "Effects of Sexual Dimorphism on Facial Attractiveness," pp 884–87, August 27, 1998._ **Our Preference for Childlike Looks Persists Today** The male versions in the study sport slightly heavier eyebrows, a hint of shaved beard, squarer jaws, and pupils that stand a bit farther apart than female pupils. This creates the illusion that the male faces are larger than the women's (they aren't). Ancient preferences like these help explain why we look, even in adulthood, more like baby apes than fully grown ones. (See Chapter 7: "Beauties in the Beast.") _Reprinted by permission, Macmillan Publishers Ltd.:_ Nature _394, "Effects of Sexual Dimorphism on Facial Attractiveness," pp 884–87, August 27, 1998._ **The Black Box We Call the Human Brain** The human brain is an amalgamation of ancient and newly evolved "mini brains," each with its own functions, cobbled together by the demands of evolution. Together they create the behavior we call human: complex, mysterious, playful, and unpredictable. Can the mind that the human brain makes possible comprehend itself? (See Chapter 8: "The Voice Inside Your Head.") _Original artwork by permission: Patric Hagmann et al., Wikimedia commons._ **The Red Deer Cave People** Recently scientists stunned the world with the discovery of a mysterious people exhibiting both ancient and modern features who lived in southern China as recently as eleven thousand years ago, just as Homo sapiens were inventing agriculture. 
Are they somehow related to us, Neanderthals and the newly discovered Denisovan people, or are they an entirely separate branch of the human family tree? Recent discoveries have rapidly rearranged old assumptions. More changes will likely come. (See Chapter 6: "Cousin Creatures.") _Original artwork by Peter Schouten._ Copyright © 2013 by William J. (Chip) Walter Jr. First published in the United States of America in 2013 by Walker Books, an imprint of Bloomsbury Publishing, Inc. This electronic edition published in January 2013 www.bloomsbury.com For information about permission to reproduce selections from his book, write to Permissions, Walker BFYR, 175 Fifth Avenue, New York, New York 10010 All rights reserved You may not copy, distribute, transmit, reproduce or otherwise make available this publication (or any part of it) in any form, or by any means (including without limitation electronic, digital, optical, mechanical, photocopying, printing, recording or otherwise), without the prior written permission of the publisher. Any person who does any unauthorised act in relation to this publication may be liable to criminal prosecution and civil claims for damages. Black and white image credits. Images 1, 2, 3, 7, 10, 11, and insets within 5, 6, and 12: Frank Harris. Image 4 inset of _Paranthropus aethiopicus_ : Sergio Pérez. Image 9: based on an image provided by the National Institute of Health, 2010. Image 8: based on a drawing by T. L. Lentz, originally published in _Primitive Nervous Systems_ (New Haven: Yale University Press, 1968). Published by Walker Publishing Company, Inc., New York A Division of Bloomsbury Publishing LIBRARY OF CONGRESS CATALOGING-IN-PUBLICATION DATA HAS BEEN APPLIED FOR. eISBN: 978-0-8027-7891-8 (e-book) Visit www.bloomsbury.com to find out more about our authors and their books You will find extracts, author interviews, author events and you can sign up for newsletters to be the first to hear about our latest releases and special offers
Today some young women in the ward had a 4H show at the Lehi Rodeo grounds, so Bryson and I stopped by to watch for a while. All ready for the show! While I visited with the girl's family, Bryson followed all of the bigger kids around under the bleachers like a puppy dog and had an absolute blast!
\section{Introduction} Continuous innovations in measurement technologies have enabled the collection of high-dimensional data of various objects. This trend has created new research areas such as bioinformatics that apply knowledge and techniques from information science to the natural sciences in order to efficiently extract information from data. The importance of techniques for efficient information extraction has been growing more than ever in various fields of science and engineering \cite{Cyber05,fourth_paradigm,E-science}. Compressed sensing (CS), which is a framework for signal processing that is currently under development, is a successful resource that has produced many such techniques \cite{Donoho06compressedsensing,Candes08anintroduction}. In general, CS aims to realize high-performance signal processing by exploiting the prior knowledge of objective signals, particularly their {\em sparsity}. That is, CS utilizes the fact that real-world signals can typically be represented by a small combination of elemental components. In a standard scenario, CS utilizes this property to enable the recovery of various signals from much fewer samples of linear measurements than required by the Nyquist--Shannon theorem \cite{Donoho06,DonohoTanner09,CandesTao05,CandesTao06,KWT09,DMM09,Gangli10,krzakala,RFG12}. Besides the standard scenario, the concept of CS is now spreading in various directions. For instance, in a remote sensing situation where a sensor transmits data to a data center placed at a far distance, the amount of data to be sent through a narrowband communication channel may be the biggest hindrance to efficient signal processing. As a practical solution to such a difficulty, 1-bit CS is a proposed scheme for recovering sparse signals by using only the sign information of the measurement results \cite{BB_CISS08,yingying,yingying2}. Another variation can be utilized when multiple signals are observed in a distributed manner. 
In such cases, the signal recovery performance can also be enhanced by exploiting information on correlations among the signals. This is referred to as distributed CS \cite{Duarte05distributedcompressed,Baron09,shiraki15}. In this letter, we explore the possibilities and limitations of another variation of CS, which we call {\em online CS}. In this scheme, to minimize the computation and memory costs as much as possible, measured data are used for signal recovery only once and discarded after that. This approach to information processing is promising when devices of low computational capability are used for signal recovery; such situations can arise in sensor networks, multi-agent systems, micro-devices, and so on. It is also advantageous when the signal source is time-variant. Historically, online information processing was actively investigated by the physics community more than two decades ago in the context of learning by neural networks \cite{kinouchi92,biehl94,kabashima94,saad95,opper96,vandenbroeck96,vicente98,Saad98}. However, the utility of sparsity was not fully recognized at that time, so the potential capability of online CS remains an open question. To clarify this issue, we focus on the performance obtained when Bayesian inference, which is guaranteed to yield the optimal performance when signal recovery is carried out in an offline (batch) manner, is instead performed in an online manner. \paragraph{Problem setup.} As a general scenario, we consider a situation where an $N$-dimensional signal $\bm{x}^0=(x_{i}^0) \in \mathbb{R}^N$ is sequentially measured by taking the inner product $u^{0,t}=\bm{\Phi^t}\cdot \bm{x}^0$ for a random measurement vector $\bm{\Phi^t}=(\Phi_{i}^t)\in \mathbb{R}^N$.
Here, we assumed that each component of $\bm{x^0}$ is independently generated from an identical sparse distribution $\phi(x)=(1-\rho)\delta(x)+\rho f(x)$ and that each component of $\bm{\Phi}^t$ independently follows a distribution of zero mean and variance $N^{-1}$, where $0<\rho<1$, $i=1,2,\ldots, N$ and $f(x)$ is a density function that does not have a finite mass at the origin. The index $t=1,2,\ldots$ counts the number of measurements. For each measurement, the output $y^t$, which may be a continuous or discrete variable, is sampled from a conditional distribution $P(y^t|u^{0,t})$. Here, our goal is to accurately recover $\bm{x}^0$ based on the knowledge of $D^t=\{(\bm{\Phi}^1,y^1),(\bm{\Phi}^2,y^2),\ldots,(\bm{\Phi}^t,y^t)\}$ and the functional forms of $\phi(x)$ and $P(y|u)$ while minimizing the necessary computational cost. \paragraph{Bayesian signal recovery and online algorithm.} Let $\hat{\bm{x}}(D^t)$ denote the estimate of $\bm{x}^0$ by an arbitrary recovery scheme. The standard measure of the accuracy of $\hat{\bm{x}}(D^t)$ is the mean square error (MSE) $\mathsf{mse} = N^{-1} \left [ ||\hat{\bm{x}}(D^t) -\bm{x}^0 ||^2 \right ]_{D^t,\bm{x}^0}$, where $\left [ \cdots \right ]_X$ generally indicates the average operation with respect to $X$. Fortunately, under the current setup, Bayes' theorem, which is given by \begin{eqnarray} P(\bm{x}|D^t)=\frac{\prod_{\mu=1}^t P(y^\mu|u^\mu) \prod_{i=1}^N \phi(x_i)} {\int d\bm{x} \prod_{\mu =1}^t P(y^\mu |u^\mu) \prod_{i=1}^N \phi(x_i)}, \label{Bayes} \end{eqnarray} guarantees that $\mathsf{mse}$ is minimized by the minimum mean square error estimator $\hat{\bm{x}}^{\rm mmse}(D^t) =\int d \bm{x} \bm{x} P(\bm{x}|D^t)$. However, evaluating $\hat{\bm{x}}^{\rm mmse}(D^t)$ exactly is, unfortunately, computationally difficult. 
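The generative setup above can be sketched in a few lines. This is a minimal illustration with toy values (the sizes, the Gaussian slab $f$, and the additive-Gaussian channel are assumptions for concreteness; the letter keeps $f$ and $P(y|u)$ general):

```python
import numpy as np

rng = np.random.default_rng(0)
N, rho, sigma_n = 1000, 0.1, 0.1   # toy values, not prescribed by the letter

# Sparse source: each x_i is 0 with probability 1-rho, else drawn from f = N(0, 1)
x0 = np.where(rng.random(N) < rho, rng.standard_normal(N), 0.0)

def measure(x):
    """One sequential measurement (Phi^t, y^t): Phi^t has i.i.d. entries of
    zero mean and variance 1/N, and y^t ~ P(y|u) with u = Phi^t . x."""
    phi = rng.standard_normal(x.size) / np.sqrt(x.size)
    u = float(phi @ x)
    y = u + sigma_n * rng.standard_normal()   # AWGN as one choice of P(y|u)
    return phi, y

phi1, y1 = measure(x0)   # first element (Phi^1, y^1) of the data set D^t
```

Exact evaluation of the minimum mean square error estimator over this model is the computational bottleneck addressed next.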
To practically resolve this difficulty, we introduce the following two devices: \begin{itemize} \item {\bf Online update:} We rewrite (\ref{Bayes}) in the form of a recursion formula: \begin{eqnarray} P(\bm{x}|D^{t+1})=\frac{P(y^{t+1}|u^{t+1}) P(\bm{x}|D^t)} {\int d \bm{x} P(y^{t+1}|u^{t+1}) P(\bm{x}|D^t)}. \label{recursion} \end{eqnarray} \item {\bf Mean field approximation:} To make the necessary computation tractable, we approximate $P(\bm{x}|D^t)$ by a factorized distribution of the exponential family \cite{Amari07} as \begin{eqnarray} P(\bm{x}|D^t) \simeq \prod_{i=1}^N \left ( \frac{e^{-a_i^tx_i^2/2 + h_i^t x_i} \phi(x_i)} {Z(a_i^t,h_i^t)} \right) \label{natural_param} \end{eqnarray} while utilizing the set of natural parameters $\{(a_i^t,h_i^t)\}$, where $Z(a_i^t,h_i^t)=\int dx_i e^{-a_i^t x_i^2/2 + h_i^t x_i} \phi(x_i) $. \end{itemize} Introducing online computation to the Bayesian inference based on conversion from (\ref{Bayes}) to (\ref{recursion}) has also been proposed in earlier studies on the learning of neural networks \cite{opper98,winther98}. On the other hand, the parameterization of (\ref{natural_param}) for the approximate tractable distribution may not have been popular for the online learning of neural networks. When the prior distribution of $\bm{x}$ is smooth, which is typically the case in neural network models, the posterior distribution is expected to asymptotically approach a Gaussian distribution. This means that, at least in the asymptotic region of $\alpha =t/N \gg 1$, the posterior distribution can be closely approximated as $\displaystyle P(\bm{x}|D^t) \propto \exp \left (-(\bm{x}-\bm{m}^t)^{\rm T} (C^t)^{-1} (\bm{x}-\bm{m}^t)/2 \right)$ by employing the mean $\bm{m}^t=\int d\bm{x} \bm{x} P(\bm{x}|D^t)$ and covariance $C^t= \int d\bm{x} (\bm{x}\bm{x}^{\rm T})P(\bm{x}|D^t) - \bm{m}\bm{m}^{\rm t}$ as parameters, where $\rm T$ denotes the matrix transpose. 
Supposing this property, earlier studies derived update rules directly for $\bm{m}^t$ and $C^t$. However, in the current case, the strong singularity of the prior $\phi(x)$, which originates from the component of $\delta(x)$, prevents $P(\bm{x}|D^t)$ from converging to a Gaussian distribution, even for $\alpha \gg 1$. To overcome this inconvenience, we derived update rules for $\{(a_i^t, h_i^t)\}$ based on the expression of (\ref{natural_param}) and computed the means and variances as $m_i^t=(\partial/\partial h_i^t)\ln Z(a_i^{t},h_i^{t})$ and $v_i^t=(\partial^2/(\partial h_i^t)^2)\ln Z(a_i^{t},h_i^{t})$, respectively. Let $\{(a_i^t, h_i^t)\}$ be given; therefore, $\{(v_i^t,m_i^t)\}$ is also provided. The update rule of $(a_i^t,h_i^t)\to (a_i^{t+1},h_i^{t+1})$ is derived by inserting the expression of (\ref{natural_param}) to the right-hand side of (\ref{recursion}) and integrating the resultant expression with respect to $\bm{x}$ except for $x_i$. In the integration, we approximate $u_{\backslash i}^{t+1} =\sum_{j \ne i}\Phi_j^{t+1} x_j$ as a Gaussian random variable whose mean and variance are $\Delta_{\backslash i}^{t+1} =\sum_{j \ne i} \Phi_j^{t+1} m_j^{t}$ and $\chi_{\backslash i}^{t+1}= \sum_{j \ne i} (\Phi_j^{t+1})^2 v_j^{t} \simeq N^{-1} \sum_{j \ne i} v_j^{t}$, respectively. This is supported by the assumption for the distribution of the measurement vectors $\bm{\Phi}^t$. By employing this Gaussian approximation to evaluate the integral and expanding the resultant expression up to the second order in $\Phi_i^{t+1} x_i$, we can obtain the online signal recovery algorithm as follows: \begin{widetext} \begin{equation} \left \{ \begin{array}{l} \displaystyle a_i^{t+1} = a_i^t - (\Phi_i^{t+1})^2 \frac{\partial^2}{(\partial \Delta^{t+1})^{2}} \ln \! \int \! \mathcal Dz P(y^{t+1}| \Delta^{t+1} + \sqrt{\chi^{t+1}} \,z) , \cr \displaystyle h_i^{t+1} = h_i^t + \Phi_{i}^{t+1} \frac{\partial}{\partial \Delta^{t+1}} \ln \! \int \! 
\mathcal Dz P(y^{t+1}| \Delta^{t+1} + \sqrt{\chi^{t+1}} \,z) - m_i^t(\Phi_i^{t+1})^2 \frac{\partial^2}{(\partial \Delta^{t+1})^{2}} \ln \! \int \! \mathcal Dz P(y^{t+1}| \Delta^{t+1} + \sqrt{\chi^{t+1}} \,z) , \end{array} \right . \label{eq:general-micro_dynamics} \end{equation} \end{widetext} where $\Delta^t = \sum_{i=1}^N\Phi_i^{t}m_i^{t-1}$, $\chi^t = \sum_{i=1}^N (\Phi_i^{t})^2v_i^{t-1}$ and $\mathcal Dz=dz\exp(-z^2/2)/\sqrt{2\pi}$ represents the Gaussian measure. Note that the necessary cost of computation for performing (\ref{eq:general-micro_dynamics}) is $O(N)$ per update. This means that the {\em total} computational cost for the recovery when using $t=\alpha N$ measurements is $O(N^2)$, which is comparable to the cost {\em per update} of existing fast offline signal recovery algorithms \cite{DMM09,krzakala,rangan10}. \paragraph{Macroscopic analysis.} Because $\bm{\Phi}^t$ and $y^t$ are random variables, (\ref{eq:general-micro_dynamics}) constitutes a pair of stochastic difference equations. However, because $\Phi_i^t\sim O(N^{-1/2})$, the difference with each update becomes infinitesimally small as $N$ grows. 
This property makes it possible to reduce (\ref{eq:general-micro_dynamics}) to a set of ordinary differential equations \begin{widetext} \begin{equation} \left \{ \begin{array}{l} \displaystyle \frac{d \hat{Q}}{d \alpha} =- \mathop{\rm Tr}_y \int \mathcal D v \int \mathcal D u P\Big (y \Big | \frac{m}{\sqrt{q}} \, v + \sqrt{Q_0-\frac{m^2}{q}} \, u \Big) \frac{\partial^2}{(\partial \sqrt{q} v)^2} \ln \int \mathcal Ds P(y| \sqrt{q} \, v + \sqrt{Q-q} \, u), \cr % \displaystyle \frac{d \hat{q}}{d \alpha} =\mathop{\rm Tr}_y \int \mathcal D v \int \mathcal Du P\Big (y \Big | \frac{m}{\sqrt{q}} \, v + \sqrt{Q_0-\frac{m^2}{q}} \, u \Big) \left (\frac{\partial}{\partial \sqrt{q} v} \ln \int \mathcal Ds P(y| \sqrt{q}\, v + \sqrt{Q-q} \, u) \right)^2, \cr % \displaystyle \frac{d \hat{m}}{d \alpha} = \mathop{\rm Tr}_y \int \mathcal D v \left (\frac{\partial}{\partial (m v /\sqrt{q})} \int \mathcal Du P\Big (y \Big | \frac{m}{\sqrt{q}} \,v + \sqrt{Q_0-\frac{m^2}{q}} \, u \Big) \right) \left (\frac{\partial}{\partial \sqrt{q} v} \ln \int \mathcal Du P(y| \sqrt{q} \, v + \sqrt{Q-q} \, u) \right), \end{array} \right . \label{eq:macrodynamics} \end{equation} \end{widetext} in the limit of $N, t \to \infty$ but keeping $\alpha=t/N$ finite, where $Q_0=\int dx\phi(x)x^2$, $\mathop{\rm Tr}_y$ denotes the integration or summation with respect to $y$, and $q$, $m$, and $Q$ are evaluated as $q=\int dx^0 \phi(x^0) \mathcal D z \left \langle x \right \rangle^2$, $m=\int dx^0 \phi(x^0) \mathcal D z x^0 \left \langle x \right \rangle$, and $Q=q+\int dx^0 \phi(x^0) \mathcal D z \partial \left \langle x \right \rangle/\partial(\sqrt{\hat{q}}z)$ using $\left \langle x \right \rangle =(\partial/\partial(\sqrt{\hat{q}}z)) \ln Z(\hat{Q},\sqrt{\hat{q}}z+\hat{m}x^0)$. Two issues are of note here.
First, replacing $(d\hat{Q}/d\alpha, d\hat{q}/d \alpha, d\hat{m}/d \alpha)$ with $(\hat{Q}/\alpha, \hat{q}/\alpha, \hat{m}/\alpha)$ in (\ref{eq:macrodynamics}) yields the exact equation of state for the Bayesian {\em offline} signal recovery, which is derived by the replica or cavity method \cite{krzakala,yingying2}. This implies that the differences in the macroscopic descriptions---i.e., the use of differential instead of algebraic equations---characterize the fundamental limitations on the achievable performance of the online method (\ref{eq:general-micro_dynamics}) compared to the offline method. Second, similar to the Bayes optimal case for the offline recovery, the equation of state (\ref{eq:macrodynamics}) allows a solution with $\hat{Q}=\hat{q}=\hat{m}$, $Q=Q_0$, and $q=m$. Focusing on the solution of this type simplifies (\ref{eq:macrodynamics}) to \begin{eqnarray} &&\frac{d \hat{q}}{d \alpha} =\mathop{\rm Tr}_y \int \mathcal D v \int \mathcal Du P (y | \sqrt{q} \, v + \sqrt{Q_0-q} \, u) \cr && \ \ \ \ \times \left (\frac{\partial}{\partial \sqrt{q} v} \ln \int \mathcal Du P(y| \sqrt{q}\,v+ \sqrt{Q_0-q} \, u) \right)^2, \label{eq:Nishimori} \end{eqnarray} where $q=\int dx^0 \phi(x^0) \mathcal D z \left (\partial/\partial (\sqrt{\hat{q}} z) \ln Z(\hat{q},\sqrt{\hat{q}} z+\hat{q} x^0) \right)^2$. Because numerical computations indicate that this solution is the unique attractor of (\ref{eq:macrodynamics}), we examined the performance of the online algorithm by utilizing (\ref{eq:Nishimori}). \paragraph{Examples.} We tested the developed methodologies on two representative scenarios of CS. The first is the standard CS, which is characterized by $P(y|u)=(2 \pi \sigma_n^2)^{-1/2}\exp \left (-(y-u)^2/(2 \sigma_n^2) \right)$. The other is the 1-bit CS, which is modeled by $P(y|u) = \int \mathcal Dz \Theta \left (yu+\sigma_n z \right)$. Here, $y \in \{+1,-1\}$, and $\Theta(x)=1$ for $x \ge 0$ and $0$ otherwise.
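For the standard-CS channel just defined, the Gaussian integrals in (\ref{eq:general-micro_dynamics}) are available in closed form: $\int \mathcal Dz\, P(y|\Delta+\sqrt{\chi} z)$ is a Gaussian density with mean $\Delta$ and variance $\sigma_n^2+\chi$, so its first and second log-derivatives are $(y-\Delta)/(\sigma_n^2+\chi)$ and $-1/(\sigma_n^2+\chi)$. The sketch below implements the resulting online recovery with a Bernoulli--Gaussian prior; all sizes and parameters are toy values chosen for illustration, not those of the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (assumed for illustration)
N = 500             # signal dimension
alpha = 3.0         # measurement ratio t/N
rho, s2 = 0.1, 1.0  # prior: (1-rho) delta(x) + rho N(0, s2)
sigma_n2 = 0.01     # measurement noise variance

x0 = np.where(rng.random(N) < rho, np.sqrt(s2) * rng.standard_normal(N), 0.0)

def posterior_moments(a, h):
    """Mean m_i and variance v_i of p(x) ~ exp(-a x^2/2 + h x) phi(x)
    for the Bernoulli-Gaussian prior phi."""
    prec = a + 1.0 / s2              # precision of the tilted Gaussian slab
    mu, var = h / prec, 1.0 / prec
    # log-weights of the slab vs. spike components of the tilted prior
    log_slab = np.log(rho) - 0.5 * np.log1p(a * s2) + 0.5 * h**2 / prec
    log_spike = np.log(1.0 - rho)
    w = 1.0 / (1.0 + np.exp(np.clip(log_spike - log_slab, -500.0, 500.0)))
    m = w * mu
    v = w * (var + mu**2) - m**2
    return m, v

a, h = np.zeros(N), np.zeros(N)
m, v = posterior_moments(a, h)

for t in range(int(alpha * N)):
    phi = rng.standard_normal(N) / np.sqrt(N)
    y = phi @ x0 + np.sqrt(sigma_n2) * rng.standard_normal()
    delta, chi = phi @ m, phi**2 @ v
    g = (y - delta) / (sigma_n2 + chi)   # d/dDelta of the log evidence term
    gp = -1.0 / (sigma_n2 + chi)         # d^2/dDelta^2 of the same quantity
    a -= phi**2 * gp                     # first line of the update (4)
    h += phi * g - m * phi**2 * gp       # second line of the update (4)
    m, v = posterior_moments(a, h)

mse = float(np.mean((m - x0) ** 2))
print(mse)  # well below the prior variance rho*s2 = 0.1
```

Each pass of the loop costs $O(N)$, matching the cost quoted for (\ref{eq:general-micro_dynamics}); only the natural parameters $(a_i, h_i)$ are retained between measurements.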
For practical relevance, we considered situations where each measurement was degraded by Gaussian noise of zero mean and variance $\sigma_n^2$ for both cases. However, by setting $\sigma_n = 0$, we can also evaluate the performance of a noiseless setup. For the generative model of sparse signals, we considered the case of the Bernoulli--Gaussian prior $\phi(x)=(1-\rho)\delta(x)+\rho(2 \pi \sigma^2)^{-1/2} \exp \left (-x^2/(2 \sigma^2) \right) $, which means that $Q_0=\rho \sigma^2$. Fig. \ref{theory_vs_experiments} compares $\mathsf{mse}$ from the experimental results obtained with (\ref{eq:general-micro_dynamics}) and the theoretical predictions. The experimental results represent averages over 1000 samples, while the theoretical predictions were evaluated by solving (\ref{eq:Nishimori}) with the use of the Runge--Kutta method. With the exception of noiseless standard CS, where numerical accuracy becomes an issue because of the extremely small values of $\mathsf{mse}$, the experimental data extrapolated to $N \to \infty$ exhibited excellent agreement with the theoretical predictions. Note that the finite-$N$ data were monotonically biased higher for smaller $N$ and larger $\alpha$. \begin{figure*}[t] \centering \includegraphics[width=0.90\textwidth]{main_plot.eps} \caption{Comparison between $\mathsf{mse}$ from experimental results and theoretical predictions for $\rho = 0.1$. The crosses correspond to $N=200, 500, 1000, 2000$, and $4000$ (in descending order), and the white circles are the extrapolations of the data to $N\to\infty$ by quadratic fitting. The curves represent the theoretical performances of the online (continuous) and batch (dashed) reconstructions. The disagreement between the experimental and theoretical results in the noiseless standard CS case was due to the limited numerical accuracy of the computational environment used in this study.
Also in this case, batch reconstruction achieves $\mathsf{mse}=0$ for $\alpha$ larger than $\alpha_{\rm c}(\rho)<1$.\label{theory_vs_experiments}} \end{figure*} For noiseless standard CS, the offline reconstruction achieves $\mathsf{mse}=0$ when $\alpha$ is greater than a certain critical ratio $0 < \alpha_{\rm c}(\rho) < 1$ \cite{krzakala}. On the other hand, the analysis based on (\ref{eq:Nishimori}) indicated that $\mathsf{mse} \simeq O\left (\exp(-\alpha /\rho) \right) $ holds for large $\alpha$, which means that perfect recovery is unfortunately impossible with (\ref{eq:general-micro_dynamics}). However, this result may still promote the use of the online algorithm for very sparse signals with $0 < \rho\ll 1$ where $\exp(-\alpha /\rho)$ becomes negligible. For noiseless 1-bit CS, the result of \cite{yingying2} meant that $\mathsf{mse} \simeq (Q_0/2) \left (\frac{\rho}{K\alpha} \right)^2$ was asymptotically achieved by the offline method, where $K=0.3603\ldots$. On the other hand, (\ref{eq:Nishimori}) yielded the asymptotic form $\mathsf{mse} \simeq 2 Q_0 \left (\frac{\rho}{K\alpha} \right)^2$ for $\alpha \gg 1$. This indicates that online recovery can save computation and memory costs considerably while sacrificing $\mathsf{mse}$ by only a factor $4$ asymptotically. These results may imply that there are fundamental gaps in the asymptotically achievable performance limit depending on the allowable computational costs in the absence of noise. However, this is not the case when Gaussian measurement noise is present, for which $P(y|u)$ becomes differentiable with respect to $u$. This property guarantees that $\hat{q} \simeq I \alpha$ asymptotically holds for both the online and offline methods, which yields a universal expression for the asymptotic MSE: $\mathsf{mse} \simeq 2\rho/\hat{q}=2 \rho/(I\alpha)$, where \begin{eqnarray} I =\mathop{\rm Tr}_{y} \!\! \int \!\! 
\mathcal D v P\left (y \Big |\sqrt{Q_0} v\right) \left (\frac{\partial}{\partial \sqrt{Q_0}v}\ln P\left (y \Big |\sqrt{Q_0} v\right) \right)^2 \label{eq:FisherInformation} \end{eqnarray} represents the Fisher information of the measurement model $P(y|u)$ averaged over the generation of $\bm{\Phi}^t$. This underscores the potential utility of the online algorithm and indicates that a performance similar to that of the offline method can be asymptotically achieved with a significant reduction in computational costs. \paragraph{Summary and discussion.} We developed an online algorithm to perform Bayesian inference on the signal recovery problem of CS. The algorithm can be carried out with $O(N)$ computational and memory costs per update, which are considerably less than those of the offline algorithm. From the algorithm, we also derived ordinary differential equations with respect to macroscopic variables that were utilized for the performance analysis. Our analysis indicated that the online algorithm can asymptotically achieve the same MSE as the offline algorithm with a significant reduction of computational costs in the presence of Gaussian measurement noise, while there may exist certain fundamental gaps in the achievable performance depending on usable computational resources in the absence of noise. Numerical experiments based on the standard and 1-bit scenarios supported our analysis. Here, we assumed that correct knowledge about the prior distribution of signals and the measurement model is provided in order to evaluate the potential ability of the online algorithm. However, such information is not necessarily available in practical situations. Incorporating the idea of online inference into situations lacking correct prior knowledge is an important and challenging future task. PVR acknowledges FAPESP for supporting his stay at the Tokyo Institute of Technology under Grant No. 2014/22258-2. This work was partially supported by JSPS KAKENHI No. 25120013 (YK).
16 prominent Australians join Friends of Artsakh to take network to over 70 members

Siranush Ghazanchyan | August 20, 2021, 15:41

Federal Senators Andrew Bragg and Janet Rice lead a group of 16 prominent additions to the ever-growing cohort of the Australian Friends of Artsakh, helping mark the two-year anniversary of the network established to support the right to self-determination of the Armenian Republic of Artsakh, reported the Armenian National Committee of Australia. The Australian Friends of Artsakh was launched in August 2019, when the Armenian National Committee of Australia (ANC-AU) and Mr. Kaylar Michaelian – the Permanent Representative of the Republic of Artsakh in Australia – hosted a delegation from the Republic of Artsakh led by then-Foreign Minister, His Excellency Masis Mayilian, and the Hon. Davit Ishkhanyan MP. The network, established under the #MOVINGMOUNTAINS catchphrase, has grown from 40 inaugural signatories to now number over 70 prominent Australians, who have all lent their names in support of the basic human rights and principles of self-determination of the indigenous Armenian population of the Republic of Artsakh, which is currently under majority occupation by the petro-dictatorship of Azerbaijan. The latest additions to the group include Federal Senators Andrew Bragg and Janet Rice, New South Wales Parliamentarians Damien Tudehope MLC, Janelle Saffin MP, James Griffin MP, Mark Coure MP and Anna Watson MP, as well as South Australian legislators Joe Szakacs MP, Sam Duluk MP, Zoe Bettison MP, Andrea Michaels MP, Irene Pnevmatikos MLC and Tammy Franks MLC – the South Australian additions follow the Parliament of South Australia recognising the right to self-determination of the Republic of Artsakh during Azerbaijan's latest invasion of the indigenous Armenian homeland in 2020.
In addition, recent signatories include former New South Wales Opposition Leader John Dowd QC, City of Ryde Councillor Peter Kim and prominent former broadcast journalist John Mangos. ANC-AU Executive Director Haig Kayserian welcomed the new additions to the group as a signal that the Armenian-Australian community will never give up on the rights of the heroic people of the Republic of Artsakh. "Following their egregious attacks on the Republic of Artsakh last year, Azerbaijan has not only occupied 70 per cent of historical Armenian lands, but has also broken international laws to desecrate Armenian cultural and religious sites while illegally detaining over 150 Armenian prisoners of war to use as political bargaining chips," said Kayserian. "These actions, as well as dictator Ilham Aliyev's open and hostile Armenophobic attitudes to the Armenian people, make it an even greater priority for the Armenian-Australian community to build allies for the Republic of Artsakh," Kayserian added. "These new additions exemplify the Australian spirit and history when it comes to upholding human rights and justice, and that Australia, like in the cases of Kosovo and East Timor, will not tolerate the endangerment of indigenous peoples seeking self-determination."
Archambaud V the Pious (d. 1096) was lord of Bourbon from 1095, the son of Archambaud IV de Bourbon and Ermengarde (Philippine) of Auvergne (or, by some accounts, a certain Béliarde). He continued his father's conflict with the priory of Souvigny, which was dependent on the abbey of Cluny. In the autumn of 1095 he convened an assembly of his vassals and neighbouring feudal lords to settle the dispute; among those invited were Bernard de Villars and Aymon, seneschal de Bourbon. Pope Urban II, who stopped at Souvigny in November 1095 in the course of his journey through France, undertook to resolve the conflict. The monks wished to free themselves from the jurisdiction of the lords of Bourbon; Archambaud, on the contrary, wanted to enlarge his seigneurial rights. On 13 November a papal bull was issued that confirmed the monastery's liberties and limited the lord's rights. Urban II made Archambaud swear on his father's tomb that he would observe the agreement, and gave Bourbon the kiss of peace. After the pontiff's departure, Archambaud resumed hostilities, and the question of the priory of Souvigny was subsequently discussed at the Council of Clermont. Archambaud V apparently died in 1096 and was succeeded by his young son Archambaud VI. The name of Archambaud V's wife is unknown; she remarried, to Alard Guillebaud, lord of Château-Meillant.

Literature: Achaintre N.-L. Histoire généalogique et chronologique de la maison royale de Bourbon. T. I. — P.: Mansut fils, 1825; Germain R. Les sires de Bourbon et le pouvoir : de la seigneurie à la principauté // Actes des congrès de la Société des historiens médiévistes de l'enseignement supérieur public. № 23, 1992, pp. 195—210; Paulot L. Un Pape français Urbain II. — P.: Lecoffre, 1903.

External links: SIRES de BOURBON.

Categories: Lords of Bourbon; 11th-century European rulers; First House of Bourbon.
Q: Debye temperature for gold (Physics Forums thread, Jun 28, 2015)

1. The problem statement, all variables and given/known data

Calculate the Debye temperature for gold.

2. Relevant equations

$$\Theta_D = \hbar \frac{v_s}{k_B} \sqrt[3]{6\pi^2 \frac{N}{V}}$$

Speed of sound in gold: $$v_s = 3240\ \mathrm{m/s}$$

3. The attempt at a solution

I used the equation for $\Theta_D$, and for the concentration I used the atom density that I calculated for gold, 5.9×10^28 m^-3. I also checked this atom density for gold on the internet and found the same value. However, I got that the Debye temperature for gold is 375.52 K, while the tabulated value for gold is 165 K.

Can someone please tell me where I made a mistake in my calculation and these formulas?

A: (SteamKing, Staff Emeritus) We can't tell you where you made a mistake in your calculations unless you post them.
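The poster's arithmetic can be checked directly. A minimal sketch using the thread's numbers (the ~165 K tabulated value is likely obtained with a sound velocity averaged over the one longitudinal and two transverse phonon branches, whereas 3240 m/s is gold's longitudinal speed, which would explain the discrepancy):

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34   # reduced Planck constant, J s
k_B = 1.380649e-23       # Boltzmann constant, J/K

v_s = 3240.0             # speed of sound in gold, m/s (value used in the thread)
n = 5.9e28               # atomic number density of gold, m^-3 (thread's value)

# Theta_D = (hbar * v_s / k_B) * (6 pi^2 n)^(1/3)
theta_D = hbar * v_s / k_B * (6.0 * math.pi**2 * n) ** (1.0 / 3.0)
print(theta_D)   # about 375.5 K, essentially the poster's 375.52 K
```

So the formula was applied correctly; the input sound velocity, not the algebra, is what differs from the tabulated Debye temperature.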
Q: The User does not exist or is not unique - Content Deployment (MOSS 2007) I have an Incremental Content Deployment scheduled from Stage to Production. It is now throwing the error "The User does not exist or is not unique". I have checked the logs as well as the Event Viewer; they show only the above error message. How do I find which user is causing the problem? A: You can read the Manifest.xml file, which contains all deployed objects; it may help you determine which of the objects (including users) broke. The manifest file is only completed at the end of the Export phase. See more details here A: I figured out the problem. One of our team members deleted the users from the Production site, and I was not able to add the users back from Stage because the account was disabled in AD. We have now enabled the account and added the user back in Stage, and the Content Deployment ran successfully.
Q: Capturing reference variable by copy in C++0x lambda

According to the answers and comments for this question, when a reference variable is captured by value, the lambda object should make a copy of the referenced object, not the reference itself. However, GCC doesn't seem to do this. Using the following test:

#include <stddef.h>
#include <stdlib.h>  // for EXIT_SUCCESS
#include <iostream>

using std::cout;
using std::endl;

int main(int argc, char** argv)
{
    int i = 10;
    int& ir = i;

    [=] {
        cout << "value capture" << endl
             << "i: " << i << endl
             << "ir: " << ir << endl
             << "&i: " << &i << endl
             << "&ir: " << &ir << endl << endl;
    }();

    [&] {
        cout << "reference capture" << endl
             << "i: " << i << endl
             << "ir: " << ir << endl
             << "&i: " << &i << endl
             << "&ir: " << &ir << endl << endl;
    }();

    return EXIT_SUCCESS;
}

Compiling with GCC 4.5.1, using -std=c++0x, and running gives the following output:

value capture
i: 10
ir: -226727748
&i: 0x7ffff27c68a0
&ir: 0x7ffff27c68a4

reference capture
i: 10
ir: 10
&i: 0x7ffff27c68bc
&ir: 0x7ffff27c68bc

When captured by copy, ir just references junk data. But it correctly references i when captured by reference. Is this a bug in GCC? If so, does anyone know if a later version fixes it? What is the correct behavior?

EDIT

If the first lambda function is changed to

[i, ir] {
    cout << "explicit value capture" << endl
         << "i: " << i << endl
         << "ir: " << ir << endl
         << "&i: " << &i << endl
         << "&ir: " << &ir << endl << endl;
}();

then the output looks correct:

explicit value capture
i: 10
ir: 10
&i: 0x7fff0a5b5790
&ir: 0x7fff0a5b5794

This looks more and more like a bug.

A: This has just been fixed in gcc-4.7 trunk and the gcc-4.6 branch. These should be available in gcc-4.7.0 (a while from now - still in stage 1) and gcc-4.6.2 (alas, 4.6.1 just came out). But the intrepid could wait for the next snapshots or get a subversion copy. See audit trail for details.
A: Compiling with VS 2010 gives:

value capture
i: 10
ir: 10
&i: 0012FE74
&ir: 0012FE78

reference capture
i: 10
ir: 10
&i: 0012FF60
&ir: 0012FF60

Looks like a bug to me.
Q: Unable to start an Intent activity inside onClick

The startActivity() inside the new method is not getting called. I tried inside the onClick() method; up to the i.putExtra() call it executes perfectly.

public class First_Fragment extends Fragment {
    View myView;
    EditText figText;
    Button figButton;
    String TAG = "com.myapplication.siva.navigation_drawer";

    @Nullable
    @Override
    public View onCreateView(LayoutInflater inflater, @Nullable ViewGroup container, @Nullable Bundle savedInstanceState) {
        myView = inflater.inflate(R.layout.first_layout, container, false);
        figText = (EditText) myView.findViewById(R.id.figText);
        figButton = (Button) myView.findViewById(R.id.figButton);
        Log.i(TAG, "Going inside 1");
        figButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                newMethod();
            }
        });
        return myView;
    }

    public void newMethod() {
        Log.i(TAG, "Going inside 2");
        String id = figText.getText().toString();
        Log.i(TAG, "Going inside 3");
        Intent i = new Intent(getActivity(), webView.class);
        Log.i(TAG, "Going inside 4");
        i.putExtra("sivMessage", id);
        Log.i(TAG, "Going inside 5");
        startActivity(i);
    }
}

The webView class code is as follows:
import android.support.v7.app.ActionBarActivity;
import android.os.Bundle;
import android.view.View;
import android.webkit.WebSettings;
import android.webkit.WebView;
import android.widget.TextView;

public class webView extends ActionBarActivity {
    WebView figWeb;
    String TAG = "com.myapplication.siva.sasfig";
    long num;
    String abc;
    TextView regNo;

    public webView() {
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.web_view);
        getSupportActionBar().hide();
        regNo = (TextView) findViewById(R.id.regNo);
        figWeb = ((WebView) findViewById(R.id.figWeb));
        WebSettings webSettings = figWeb.getSettings();
        webSettings.setBuiltInZoomControls(true);
        webSettings.setSupportZoom(true);
        webSettings.setJavaScriptEnabled(true);
        Bundle man = getIntent().getExtras();
        if (man == null) {
            return;
        }
        String sivMessage = man.getString("sivMessage");
        num = Long.parseLong(sivMessage);
        regNo.setText(sivMessage);
        figWeb.loadUrl("http://192.6.18/memberaccess1.asp?id=" + num);
    }

    public void onPlus(View view) {
        num += 1;
        abc = "" + num;
        regNo.setText(abc);
        figWeb.loadUrl("http://192.6.18/memberaccess1.asp?id=" + num);
    }

    public void onMinus(View view) {
        num -= 1;
        abc = "" + num;
        regNo.setText(abc);
        figWeb.loadUrl("http://192.6.18/memberaccess1.asp?id=" + num);
    }
}

Logcat for the code is given below (this is an edited log created by me):

09-26 16:51:43.984 17879-17879/com.myapplication.siva.navigation_drawer I/com.myapplication.siva.navigation_drawer﹕ Going inside 1
09-26 16:51:43.984 17879-17879/com.myapplication.siva.navigation_drawer I/com.myapplication.siva.navigation_drawer﹕ Going inside 2
09-26 16:51:43.984 17879-17879/com.myapplication.siva.navigation_drawer I/com.myapplication.siva.navigation_drawer﹕ Going inside 3
09-26 16:51:43.991 17879-17879/com.myapplication.siva.navigation_drawer I/com.myapplication.siva.navigation_drawer﹕ Going inside 4

A: Can you please do like below:

getActivity().startActivity(i);

Hope this will help you.
A: Try using getActivity().getBaseContext():

    Intent i = new Intent(getActivity().getBaseContext(), webView.class);
    startActivity(i);

A: You may want to try passing the Context to newMethod, i.e.:

    public void onClick(View v) {
        newMethod(getActivity());
    }

    public void newMethod(Context ctx) {
        Log.i(TAG, "Going inside 2");
        String id = figText.getText().toString();
        Log.i(TAG, "Going inside 3");
        Intent i = new Intent(ctx, webView.class);
        Log.i(TAG, "Going inside 4");
        i.putExtra("sivMessage", id);
        Log.i(TAG, "Going inside 5");
        startActivity(i);
    }

Alternatively, you can try to place the dialog inside a new Runnable().

Update, based on your comment: this problem can happen when mixing getActivity() from "android.app.Fragment" and "android.support.v4.app.Fragment". If you are using "android.support.v4.app.Fragment", you need to check that you aren't calling getActivity() from "android.app.Fragment", or vice versa. SRC: https://stackoverflow.com/a/15908255/797495

A: Try this:

    public void newMethod() {
        Log.i(TAG, "Going inside 2");
        String id = figText.getText().toString();
        Log.i(TAG, "Going inside 3");
        // The following line is edited.
        Intent i = new Intent(getApplicationContext(), webView.class);
        Log.i(TAG, "Going inside 4");
        i.putExtra("sivMessage", id);
        Log.i(TAG, "Going inside 5");
        startActivity(i);
    }

Your mistake was in this line; it should be:

    Intent i = new Intent(getApplicationContext(), webView.class);
Q: How to change number of spaces depending on digits in integer?

I'm looking to print out the values in an array with line numbers at the start of each line. I'd like to change the spacing to make sure the values line up vertically regardless of the size of the array.

    @lines.each_with_index {|l,i|
      newtext = l[:text]
      space = ' ' * @lines.size.to_s.size
      cur_space = i.to_s.size
      (1..cur_space).each { space.chop! }
      puts "\##{i}#{space} #{newtext}"
    }

Output

    #0  blah
    #1  blah
    #2  blah
    #3  blah
    #4  blah
    #5  blah
    #6  blah
    #7  blah
    #8  blah
    #9  blah
    #10 blah
    #11 blah
    #12 blah
    #13 blah

This works, but I was wondering if there is a better way to do it?

A: You are looking for String#ljust:

    index_size = @lines.size.pred.to_s.size
    @lines.each_with_index do |line, index|
      puts "##{index.to_s.ljust(index_size)} #{line[:text]}"
    end

A: You can use tabs:

    2.2.4 :001 > class Tabs
    2.2.4 :002?>   def self.do_it
    2.2.4 :003?>     (1..13).each do |number|
    2.2.4 :004 >       puts "##{number}\tblah"
    2.2.4 :005?>     end
    2.2.4 :006?>   end
    2.2.4 :007?> end
     => :do_it
    2.2.4 :008 > Tabs.do_it
    #1 blah
    #2 blah
    #3 blah
    #4 blah
    #5 blah
    #6 blah
    #7 blah
    #8 blah
    #9 blah
    #10 blah
    #11 blah
    #12 blah
    #13 blah

A: You could also use format with "%-3d":

    - : Left-justify the result of this conversion.
    d : Convert argument as a decimal number.

    n = 14
    width = (n-1).to_s.size
    pattern = "#%-#{width}d %s"
    n.times do |i|
      puts format(pattern, i, 'blah')
    end

It outputs:

    #0  blah
    #1  blah
    #2  blah
    #3  blah
    #4  blah
    #5  blah
    #6  blah
    #7  blah
    #8  blah
    #9  blah
    #10 blah
    #11 blah
    #12 blah
    #13 blah
Washington State Governor's Office

Inslee announces rollbacks to some activities to slow COVID-19 exposure

Modifications related to weddings and funerals, restaurants, bars, and fitness and entertainment centers.

Gov. Jay Inslee and Secretary of Health John Wiesman today announced changes to "Safe Start," Washington's phased approach to reopening. The changes target activities that data have shown provide a higher risk of COVID-19 exposure. It has been four months since the governor announced the state's "Stay Home, Stay Healthy" order. Since then, cases in Washington have risen from 2,000 to almost 50,000, and deaths have increased from 110 to nearly 1,500. To combat the rising numbers, the governor and secretary are changing guidance and regulations around restaurants, bars, and fitness centers, as well as weddings and funerals. The changes will also affect family entertainment centers, movie theaters and card rooms. "We do not take these steps lightly. We know every prohibition is a challenge for individuals and business owners," Inslee said during a press conference Thursday. "But we know that if we fail to act, we expose people and businesses to even greater risk down the line." Under the new guidance, ceremonies will remain permitted, but receptions are prohibited. Ceremonies must adhere to current guidance; for all phases, maximum indoor occupancy is 20%, or up to 30 people, whichever is less, as long as social distancing can be observed. The changes will take effect in two weeks, on Aug. 6, providing a grace period for weddings and funerals previously scheduled to take place or readjust their plans. Restaurant guidance will now require parties to be members of the same household in order to dine indoors. Outdoor dining and take-away remains available for small parties from different households. Table size for dine-in in Phase 3 will be reduced to five individuals and occupancy reduced from 75% to 50%.
Restaurants must also close gaming and social areas, such as pool tables, dart boards and video games. Bars will be closed for indoor service, but can continue outdoor service. Alcohol service inside of restaurants must end by 10 p.m. These regulations take effect in one week, on July 30. The number of individuals allowed to use fitness centers and other physical health venues at a given time will also be reduced. In Phase 2, only five individuals — not including staff — will be allowed for indoor fitness services at a time. This includes gyms, fitness studios, and indoor pools, ice rinks, volleyball courts, and tennis facilities. These are limited to small group instruction or private training. Fitness center occupancy in Phase 3 will be reduced to 25%. All group fitness classes are limited to no more than 10, not including the instructors. The changes are effective July 30.

Entertainment regulations

Indoor family entertainment and recreation centers — like mini golf, bowling alleys, and arcades — are prohibited from opening, as well as indoor card rooms. Indoor movie theater occupancy will be reduced from 50% to 25% in Phase 3. Read the full guidance memo here. In addition to those changes, Wiesman announced an expansion of his face coverings order that will go into effect Saturday, July 25. The expansion will require face coverings in all common spaces, such as elevators, hallways and shared spaces in apartment buildings, university housing and hotels, as well as congregate settings such as nursing homes. "We're losing the momentum we had during the early months of this response," Wiesman said. "Looking ahead to the fall and hopes of schools reopening, we must dig back in to regain control. Fewer, shorter, and safer interactions are crucial. Staying home is still safest but if you go out, keep it quick, keep your distance from others, and wear your face covering."
Eviction moratorium extension

Inslee also announced an extension of the state's eviction moratorium to Oct. 15. Details on the extension will be released in the coming days. In addition to the moratorium, the extension also directs Governor's Office staff to convene an informal work group of landlords and tenants to discuss potential changes to the order in the short-term and long-term as the pandemic progresses, including the prospect of rent increases. "I know we are all tired of how long this emergency has gone on, and the pain it has inflicted in our households and our communities," Inslee said. "But we all remain steadfast in our refusal to allow COVID-19 to overwhelm our society, and we will lean on each other to get the job done. This is not the easy thing to do, but it is the right thing to do. These prohibitions are part of our approach, but they only supplement what we really need, which is for individuals to continue to make safe decisions and adhere to healthy practices."
Browse 283 photos of Folding Doors. Find ideas and inspiration for Folding Doors to add to your own home. By EJ Interior Design, Eugenia Jesberg.
Our sliding doors permit light to travel through your space, while still providing a free, no-obligation price quote from one of our Raydoor Design Consultants.
LaCantina's Interior and Exterior Folding and Bifold Doors create an was and remains today, a pioneer in designing and manufacturing folding door systems.
White; Espresso; Cherry; Primed; Stain Grade Maple. More Options Available.
Impact Plus Mir-Mel Primed Mirror Trim Solid MDF Interior Closet Bi-fold Door.
Designer's Image 24" W x 80" H Pine Unfinished Half Louvered 2-Leaf Bi-Fold Door.
Mastercraft® Pine Full-Louvered Prehung Interior Door. $136.99.
Interior doors are utilitarian, but can also be glamorous and beautiful. Explore the wealth If you've got the floor space, sliding bi-fold glass doors are trending.
Mar 13, 2008 Design options include flush, panel, louvered, and glass-insert models. Pros The least expensive option, folding doors are DIY friendly.
May 16, 2013 Bi-folding doors are an on trend way of increasing the width of an interior whilst maintaining a consistent look to the design between rooms.
Interior Glass Doors, Interior Sliding Doors, Sliding Windows, Interior Windows, Sliding Partition. Stainless steel glass folding door fitting or glass door accessories.
Browse inspirational photos of modern doors and entryways. Explore sliding, swing, and folding door ideas for the exterior and interior of your home.
Unfinished. More Options Available. Krosswood Doors 24 in. x 80 in. Rustic Knotty Alder 2-Panel Arch Top Solid Core Unfinished Wood Interior Bi-Fold Door.
Results 1 - 24 of 215. Shop through a wide selection of Interior & Closet Folding Doors at Amazon.com. Free shipping and free returns on Prime eligible items.
D. Field expansion
time limit per test: 1 second
memory limit per test: 256 megabytes
input: standard input
output: standard output

In one of the games Arkady is fond of, the game process happens on a rectangular field. In the game process Arkady can buy extensions for his field, each extension enlarging one of the field sizes by a particular number of times. Formally, there are n extensions, the i-th of them multiplying the width or the length (by Arkady's choice) by ai. Each extension can't be used more than once, and the extensions can be used in any order.

Now Arkady's field has size h × w. He wants to enlarge it so that it is possible to place a rectangle of size a × b on it (along the width or along the length, with sides parallel to the field sides). Find the minimum number of extensions needed to reach Arkady's goal.

Input

The first line contains five integers a, b, h, w and n (1 ≤ a, b, h, w, n ≤ 100 000) — the sizes of the rectangle that needs to be placed, the initial sizes of the field and the number of available extensions.

The second line contains n integers a1, a2, ..., an (2 ≤ ai ≤ 100 000), where ai equals the integer a side is multiplied by when the i-th extension is applied.

Output

Print the minimum number of extensions needed to reach Arkady's goal. If it is not possible to place the rectangle on the field with all extensions, print -1. If the rectangle can be placed on the initial field, print 0.

Examples

Input
3 3 2 4 4
2 5 4 10
Output
1

Input
3 3 3 3 5
2 3 5 4 2
Output
0

Input
5 5 1 2 3
2 2 3
Output
-1

Input
3 4 1 1 3
2 3 2
Output
3

Note

In the first example it is enough to use any of the extensions available. For example, we can enlarge h 5 times using the second extension. Then h becomes equal to 10 and it is now possible to place the rectangle on the field.
Semey Engineering Launches New Armoured Fire Fighting Vehicle By Michelle Witte in Nation on 1 April 2015 ASTANA – The Semey Engineering company, which produces combat and armoured vehicles, has created a new, unique, armoured vehicle for use in fighting fires, Kazakh TV reported on March 24. Armored fire engine BMP. Photograph: Army Recognition "It's a Kazakh vehicle," said Deputy Director of Semey Engineering Ramil Bayazitov, as quoted in the Kazakh TV report. "We have the rights to it. The development is ours." The machine is designed to allow firefighters to combat fires without leaving the vehicle, and has a video surveillance system that will allow them to see what they are doing. The new vehicle, described as very agile and manoeuvrable, was in development for about a year before its launch, Kazakh TV reported. Bayazitov said that the demand for machines of this type is very high in the armed forces, and that the vehicle was recently tested at Kazakhstan's Ministry of Defence. Semey Engineering, a subsidiary of Kazakh Engineering, is primarily engaged in the maintenance, repair, overhaul and upgrade of armoured vehicles, according to an Army Recognition website report from May 2014, which said the Kazakh government had ordered $49 million-worth of equipment from the firm since 2010. The company's website describes it as the only specialised company in Kazakhstan engaged in this type of work. In November 2014, a Janes.com report announced that the firm intended to expand into the armoured vehicle manufacturing market by 2021. Managing Director of Kazakhstan Engineering Nikolai Pospelov said the company had created a design bureau to begin working on the technical documentation required for armoured vehicle production, Janes and other sources reported. Semey Engineering is a regular participant in Kazakhstan's biennial KADEX Defence Technology Expo.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
3,381
Q: Text Extraction behavior is confusing Working on storm crawler 1.13. I am using the below config for text extraction. Little confused with the way how it will works. - MAIN[role="main"] - DIV[id="content--news"] - DIV[id="content--person"] - ARTICLE Whenever the crawler started, will it check all the tags included in the config or else it will skip the remaining tags if the first match is found. A: see JAVADOC The first matching inclusion pattern is used or the whole document if no expressions are configured or no match has been found. The code is pretty straight forward.
{ "redpajama_set_name": "RedPajamaStackExchange" }
3
"WAR CRIMES" WAR CRIMES X SECURITY COUNCIL MEETING (202) GENERAL ASSEMBLY MEETING (100) STAKEOUT (21) SECURITY COUNCIL STAKEOUT (13) PRESS CONFERENCE (12) Unifeed (596) English 15-Dec-2022 OHCHR / HUMAN RIGHTS UKRAINE Speaking during a Human Rights Council meeting on the situation in Ukraine, the UN High Commissioner for Human Rights said, "My visit to Ukraine ended a week ago. But the horrors, suffering and the daily toll that this war is having on people in the country remain with me. The deaths. The lives uprooted. The families ripped apart. With more than 18 million people in Ukraine in need of humanitarian aid". UNTV CH Various 12-Dec-2022 International Residual Mechanism for Criminal Tribunals- Security Council, 9217th Meeting Mechanism for criminal tribunals one of United Nations' 'best investments', its president tells Security Council, highlighting significant progress. UKRAINE / TÜRK PRESSER Speaking to reporters at the end of his official visit to Ukraine, UN High Commissioner for Human Rights Volker Türk said, "the scale of civilian casualties, as well as the significant damage and destruction to civilian objects - including hospitals and schools - is shocking." OHCHR Threats to International Peace and Security- Security Council, 9206th Meeting Significant progress made gathering evidence on ISIL/Da'esh crimes in Iraq, but domestic laws needed, investigating head tells Security Council. UN / IRAQ UN senior official Christian Ritscher said, "one of our key goals is to support Iraq in playing a leading role in holding ISIL members accountable for international crimes," adding that "UNITAD is committed to building capacities of the judiciary in Iraq to work towards fair and just trials in accordance with applicable standards of the United Nations." 
UNIFEED Various 23-Nov-2022 Maintenance of Peace and Security of Ukraine- Security Council, 9202nd Meeting Senior official condemns Russian Federation's missile strikes against Ukraine's critical infrastructure, as Security Council holds emergency meeting on attacks. Maintenance of Peace and Security of Ukraine- Security Council, 9195th Meeting Senior official warns Security Council of 'catastrophic spillover', as death, destruction in Ukraine conflict keeps growing. The Situation in the Central African Republic- Security Council, 9190th Meeting Security Council extends mandate of Multidimensional Stabilization Mission in Central African Republic for one year, adopting Resolution 2659 (2022). CAMBODIA / ASEAN-UN SUMMIT Secretary-General Antonio Guterres said, "There is a growing risk that the global economy will be divided into two parts, led by the two biggest economies – the United States and China." UNIFEED The Situation in Libya- Security Council, 9187th meeting People of Libya need to be ensured justice, not 'an abstract idea', International Criminal Court prosecutor says, briefing Security Council from embattled country.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
7,923
Q: Why does my shell script give the error: "declare: not found"? Here is a simple example showing that using declare in a script the script will not run, while sourcing the script will: $ cat /tmp/new #! /bin/sh declare -i hello $ chmod a+rwx /tmp/new $ /tmp/new /tmp/new: 3: declare: not found $ source /tmp/new $ I wonder why directly running the script doesn't work, while sourcing it does? How can I make the first one work? Thanks! A: declare is a bash and zsh extension. On your system, /bin/sh is neither bash nor zsh (it's probably ash), so declare isn't available. You can use typeset instead of declare; they're synonyms, but typeset also works in ksh. In ash, there's no equivalent to typeset -i or most other uses of the typeset built-in. You don't actually need typeset -i to declare an integer variable; all it does is allow a few syntactic shortcuts like hello=2+2 for hello=$((2+2)). A: declare probably doesn't exist in the shell defined by your shebang - #! /bin/sh. Try #!/bin/bash instead. The reason why sourcing it worked is that you were already in a shell that supports declare. Sourcing it didn't open a new shell thus didn't use the shebang that doesn't know declare. A: You may also try the 2 other equivalent versions to Bash's declare, which are: typeset and local. They all work with -i (for integer). Also see my test script. PS. declare is not yet implemented in the Android (Mksh). A: declare is a builtin function and it's not available with /bin/sh, only with bash or zsh (and maybe other shells). The syntax may differ from one shell to another. You must choose your sheebang (#!) accordingly: if the script needs to be run by bash, the first line must be #!/bin/bash or #!/usr/bin/env bash
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,260
Hallerhoek is een buurtschap in de gemeente Twenterand in de Nederlandse provincie Overijssel. Het ligt in het westen van de gemeente, twee kilometer ten zuidwesten van Den Ham, richting Egede. Twenterand Buurtschap in Overijssel
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,859
Obama's Eulogy For Kennedy MJ has been busy lately, but can't let much more time pass without posting a tribute to a flawed yet genuine hero, the late Sen. Edward M. Kennedy. Here is President Barack Obama's eulogy to, arguably, the most effective lawmaker of our era. RIP, Ted. -- MJ Labels: Barack Obama, Edward Kennedy, eulogy From Joe's Vault: 'Great' American Health Care System Isn't Cutting It On Life Span Since health stats seem to be a hot topic now, it looks like a good time to dredge up a post from a couple of years ago. This was published on this blog Aug. 12, 2007. This just in -- the U.S. is now ranked 42nd among the world's nations in life span. How can this be happening in a country that spends so much on medicine, the most worldwide per capita? It's a paradox: When it comes to insurance, less isn't more; but when it comes to medication, less can indeed be more. And, we need news media that will actually report on the problem rather than essentially shill for the medical/drug establishment. To get the stats out of the way, this is from the Associated Press report: Countries that surpass the U.S. include Japan and most of Europe, as well as Jordan, Guam and the Cayman Islands. ... A baby born in the United States in 2004 will live an average of 77.9 years. That life expectancy ranks 42nd, down from 11th two decades earlier, according to international numbers provided by the Census Bureau and domestic numbers from the National Center for Health Statistics. Andorra, a tiny country ... between France and Spain, had the longest life expectancy, at 83.5 years ... It was followed by Japan, Macau, San Marino and Singapore. ... Researchers said several factors have contributed to the United States falling behind other industrialized nations. A major one is that 45 million Americans lack health insurance, while Canada and many European countries have universal health care, they say. OK, so far, so good. 
At least someone is observing that the number of uninsured Americans may have a lot to do with this. But wait, there's more. This Mainstream Media report lapses into whitewash and absurdity. But "it's not as simple as saying we don't have national health insurance," said Sam Harper, an epidemiologist at McGill University in Montreal. "It's not that easy." Among the other factors: • Adults in the United States have one of the highest obesity rates in the world. Nearly a third of U.S. adults 20 years and older are obese, while about two-thirds are overweight, according to the National Center for Health Statistics. "The U.S. has the resources that allow people to get fat and lazy," said Paul Terry, an assistant professor of epidemiology at Emory University in Atlanta. "We have the luxury of choosing a bad lifestyle as opposed to having one imposed on us by hard times." • Racial disparities. Black Americans have an average life expectancy of 73.3 years, five years shorter than white Americans. Black American males have a life expectancy of 69.8 years, slightly longer than the averages for Iran and Syria and slightly shorter than in Nicaragua and Morocco. • A relatively high percentage of babies born in the U.S. die before their first birthday, compared with other industrialized nations. Forty countries, including Cuba, Taiwan and most of Europe had lower infant mortality rates than the U.S. in 2004. The U.S. rate was 6.8 deaths for every 1,000 live births. It was 13.7 for Black Americans, the same as Saudi Arabia. "It really reflects the social conditions in which African American women grow up and have children," said Dr. Marie C. McCormick, professor of maternal and child health at the Harvard School of Public Health. "We haven't done anything to eliminate those disparities." Most of the above displays an astonishing lack of critical thinking by this MSM reporter, or perhaps by editors who got hold of the piece later. 
The story attempts to drive some wedge between the absence of universal coverage in the U.S. and (1) racial disparities, and (2) infant mortality. A national health insurance system would do a vast amount to address these two problems. Our current system is the precise reason why many minorities do not or cannot get adequate care, when they are either old or newborn. It's the lack of insurance, stupid. The passage points out that Cuba and most European countries have lower infant mortality rates than the U.S. Guess what those countries have that we don't. Obesity is certainly a problem in America, and one for which individuals can largely be blamed. Or can they? As decades of my life have passed, I have witnessed a socially irresponsible advertising culture that graduated from making people into two-pack-a-day cigarette addicts into junk-food junkies who wash it all down with sugary soft drinks. If one ate a steady diet of what one sees every day on TV ads, billboards, and in the urban sprawl of any given U.S. city, it's the superhighway to diabetes and heart disease. A thing I find quite revealing and disturbing is that although the Japanese smoke twice as much as Americans -- they light up the way we did in the '60s, back when my childhood senses were ablaze with TV cigarette commercials -- they don't have nearly as much heart disease as we do, and they're living longer than us. A simple observation is that they don't have quite the same advertising culture as we do, and so they're more likely to eat fish, tofu and veggies than a bacon cheeseburger. A decent diet can actually compensate some for other kinds of vices. Something else to consider is that, for the poor in America, a good diet is actually hard to afford. It's cheap for our poor and working class to consume a lot of starch and sugar. Even the simplest staple items like rice and pasta -- not good for diabetics -- are much cheaper than the more healthful choices. 
We've had a reversal of roles between rich and poor in modern America: In the bad old days, the poor were skinny because they went hungry, and the rich were plump because they had all they could eat. Now the poor eat, but it's the wrong foods, sold cheap. The rich can afford the sauteed vegetables and the catch of the day. But, I'm recalling that Emory University professor's remarks about Americans being so soft, not having a tough lifestyle imposed on them by adversity. This seems like an absurd contradiction as well. During hard times, people have trouble eating -- at all. Good food, or bad. And life spans were much shorter then. Something tells me the professor hasn't missed many meals. Now for an unintended consequence of living in an "affluent" society -- affluent for some, anyway. The U.S. is the most overmedicated nation ever. Our "health care system" is largely driven by the pharmaceutical companies' greed, and they are hooking people on meds every day with the same foresight and scruples as the corner dope dealer. Statin drugs are being pushed as though half the adult population should be on them. They may do a lot for people with severe cholesterol problems, but they can have very serious side effects. I have known a number of people who have given them up, despite warnings, because they complained that they always felt like they had the flu. My mother passed out and had to be hospitalized after three days on Zocor. I took Lipitor for three days, and I think my supervisor at work suspected that I was drunk. I have been hospitalized twice in recent years after having adverse reactions to medications. Doctors who aren't into this dope craze describe patients coming to them looking pale and wan. 
And wait, there's more, from a site called Health and DNA:

ADRs are the fourth to sixth greatest killer in the US, with more than 100,000 deaths per year and 2.2 million serious adverse reactions per year, according to a 1998 Journal of the American Medical Association report (JAMA 279:1200, 1998). This study is a meta-analysis of 39 research reports published from 1966 to 1996.

21.3% of the 548 most recently FDA-approved medications were subsequently withdrawn from the market or given a black box warning. JAMA 287:2215, 2002.

The GAO reports that 51% of new drugs have serious, undetected adverse effects at the time of approval.

Of the best-selling prescription drugs, 148 can cause depression, 133 hallucinations or psychoses, 105 constipation, 76 dementia, 27 insomnia and 36 parkinsonism. "Worst Pills Best Pills: A Consumers Guide to Avoiding Drug-Induced Death or Illness," third edition, 1999.

I know from the experience of being overmedicated that it's hard some days just to get out of bed under those conditions, let alone get one's regular exercise for general health and weight control. I have yet to see Michael Moore's Sicko, but I anticipate seeing it this week. It shouldn't be hard for him to win me over. This "health care system," coupled with a predatory advertising culture, looks likely to make either my generation or the next one the first to have a lower life expectancy than our parents had. As my fellow baby boomers age and become more dependent on this broken system to get decent and well-considered care, this is clearly one of the crucial battles that Americans must win.

Posted by Manifesto Joe at 1:23 PM
Labels: health care, life expectancy, statistics

Health Care And Infant Mortality: When It Comes To A Population's Health, Statistics Don't Lie

A commenter on my previous post brought up something often raised in defense of America's indefensible health care status quo. That is the alleged wait time for specialists in Canada.
The commenter mentioned, in particular, OB-GYN wait times alleged to be 10 to 12 months. I rather doubt that as something typical, although I have heard from less biased sources that wait times for Canadian specialists can be up to six months. But regarding OB-GYN wait times, statistics about infant mortality should be revealing. Here are some, courtesy of the United Nations World Population Prospects report, 2006 revision:

Countries with "socialized" medicine:
Japan: 3.2 per 1,000 live births
Sweden: 3.2 per 1,000 live births
Norway: 3.3
France: 4.2
Germany: 4.3
Denmark: 4.4
Australia: 4.4
U.K.: 4.8
Canada: 4.8
Cuba (those godless commies!): 5.1

And then we have:
United States: 6.3

The U.S. ranks below Brunei (5.5), Cyprus (5.9), and New Caledonia (6.1). I guess now we know why so many long-suffering Canadians, and so many other people in countries with "socialized" medicine, are clamoring to trade in their government-run systems for the private U.S. monopoly/oligopoly model. Ours is clearly so superior.

Labels: health care, infant mortality, socialized medicine

Health Care: When Americans Voted For Obama And Democrats, What Were They Voting For?

A recent (Aug. 3-6) nationwide telephone poll by the Marist Institute of Public Opinion found that 45 percent of Americans disapprove of President Obama's handling of health care reform, while 43 percent approved. Here's a link. Obama won the popular vote last year with 53 percent. This raises the question: What did the "swing" voters, the independents, think they were voting for when they cast ballots for Obama and other Democrats? And, as for those who are wavering on the issue, did they not expect yet another disinformation barrage, perhaps even worse than the "Harry and Louise" campaign of 1993-94?
There are many interest groups -- insurers, pharmaceutical companies, stockholders, doctors and other health care "professionals" -- who have a big stake in the status quo and fear they will lose money if a public option comes to pass. Wasn't a propaganda flood, financed by the vested interests, expected? My disapproval of Obama's handling of this would be purely idealistic: I favor a single-payer model similar to the ones in Canada and Germany. Even the English, whose National Health Service is often vilified here, are getting highly pissed and coming to the defense of the NHS. Here's another link. No less than renowned scientist Stephen Hawking has asserted that, contrary to U.S. right-wing demagoguery, the NHS saved his life from the longtime ravages of ALS. (Yet another link.) The idealist in me says, if "socialism" is what they have in Norway and Sweden, where do I sign up? They live longer than Americans do; their babies don't die nearly as often; and their middle class doesn't have to live in fear of bankruptcy if someone comes down with a catastrophic illness. You don't even have to look at Scandinavian countries with hyperactive welfare states. Australia, perhaps the closest nation to the U.S. ideologically, has something resembling "socialized" medicine. Among other developed countries in the world, you don't see any of them battling to trade in their government-run programs for the U.S. "Land of the Fee" model, do you? Their way looks much more like the right path to me. But -- living in America, and especially in Texas, made me stop being an idealist long ago. It's clear by now that Obama and his "allies" are going to have a hard time just putting over a public option, so single-payer will have to wait, probably until after I'm dead. Back to the American public -- that 45 percent who disapprove. I'd wager that few of them ever had to actually USE their underperforming health insurance to battle a chronic illness. 
I have allergies so severe that I depend on multiple medications just to live, and the co-payments and deductibles eat up much of my income. This was not something I did to myself; I was born with allergies. Let's talk about "death panels." I have the impression that among that 45 percent, their answer for me would be, "Just die and get out of the way." Such people protest loudly at being called Nazilike, but it sounds a lot like eugenics to me. That 45 percent probably never had to battle cancer, as my wife did in 1992, on one of those swell 80-20 private insurance plans, with a deductible. That experience hurled us into debt that we've never gotten out of to this day. Bankruptcy was actually on the table for us a couple of years ago -- fortunately, we found an alternative. But we are like many other middle-class Americans in that, even with both of us insured, another major illness could very well plunge us into so much debt that bankruptcy would be certain. And, the insurance companies would have the complete prerogative to drop either or both of us. That 45 percent: I shudder to think that there are so many profoundly misguided Americans. One can go back to relatively recent history and witness the fickleness of the electorate. In 1992, Clinton pulled off a plurality win, and the Democrats took both houses of Congress solidly. Two years later, the Republicans and their "Contract With America" turned the tables utterly, though not for long. Back a bit further, in 1974 the Republican Party seemed repudiated and in tatters after Watergate. It only took six years for the Reagan right wing to turn that completely around. I hope that Obama can stand his ground, and that the electorate doesn't prove to be that fickle in 2010. If my insurance company decides to kick me to the curb, I'd like that public option, or at least something that keeps me from going broke. Postscript: CNN reports that at 1:03 p.m. 
Mountain Standard Time, at a presidential event in Phoenix, a reporter saw a man in the anti-Obama camp carrying an AR-15 semiautomatic rifle. (Arizona has an open-carry law.) Are those mean-spirited lib'ruls gonna call him a Nazi? Labels: disinformation, health care, Obama

A Classical Change Of Pace: The Most Magnificent Howard Hanson (1896-1981)

Mr. Hanson was one of the most underrated classical/symphonic composers of his time, mainly because he did not follow the trends of his era, between the world wars. The American concert hall was filled with much Stravinsky and Copland during that time. I have absolutely no problem with either of them. But Mr. Hanson, a Scandinavian-American from the Midwest, was a bit neglected because of his unabashed romanticism. Some film buffs might recognize this as the piece that was played at the end of the 1979 Ridley Scott film Alien. By the way, this symphony, No. 2, "Romantic," debuted in Boston in 1930. This one has moved me to tears. -- MJ 1993 ISSMA State Champion Carmel High School Symphony Orchestra performing Hanson's Symphony No. 2, 3rd Movement. Conducted by Thomas Dick at Valparaiso, IN. Labels: Howard Hanson, romantic, Symphony No. 2

Palin Steps Up To Lead Nazilike Campaign Against Health Care Reform

It's a departure from historical norms, but nobody ever demonstrated that a fascist movement couldn't be led by a pretty middle-aged woman. (She's not my type, but Sarah Palin has what could be described as generic good looks.) And it's happening -- I don't think she can ever legitimately vie for high elective office again, but sundry Brown Shirt types, jackbooted thugs and brainwashed hillbillies may be lining up behind Sarah Palin. Maybe this is why Palin resigned her Alaska governorship. It's kind of hard to lead a Nazilike movement when you've got a full-time job in Juneau. True, she's not as articulate (or bellicose) as Limbaugh. Hell, she's not even Father Coughlin.
But Klondike Hottie seems to speak for a lot of manure-headed people out there, and now I think I dig where she aims to go. Dean Baker of Truthout is on the money when he blames what's left of our "news media" for a lot of this. Palin was given a huge pass when she publicly conjured up fictions about a "death panel" and rationing of health care while denouncing Obama administration policy plans. Traditionally, media are supposed to function as a "truth squad" and shoot down pig dung like this. Here's a link to Baker's article. Palin apparently has enough gutter political savvy to see a movement in the making. Nazilike hooligans have been showing up at "town hall" meetings to intimidate anyone who favors health care reform, up to and including the Congress creatures themselves. There have been some scenes that recall the fascist thuggery of Italy in the 1920s and Germany in the 1930s. On Friday, Justin Rubin of MoveOn recorded a few nuggets that show a sampling of what's been happening to members of Congress, and continues, from coast to coast: -- Last night in Tampa, Florida, a town hall meeting erupted into violence, with the police being called to break up fist fights and shoving matches. -- A Texas Democrat was shouted down by right-wing hecklers, many of whom admitted they didn't even live in his district. -- One North Carolina representative announced he wouldn't be holding any town-hall meetings after his office began receiving death threats. -- And in Maryland, protesters hung a Democratic congressman in effigy to oppose health-care reform. The Associated Press filed a more detailed account of the right-wing marauding in a Saturday story. Here's a link to that one. Back to Sarah Palin, I realize that Klondike Hottie has plenty of competition for leadership of the GOP's gaping primate squad. Limbaugh has been whipping them up for 20 years, and Sean Hannity and Glenn Beck are most decidedly in the running. 
But Klondike Hottie has them beaten in one crucial area -- sex appeal. To me, her appeal is very limited -- she reminds me too much of those Young Republican college women who always seemed to belong to a sorority that was called "Betas" for short. But, Bubba apparently digs it. And he may be willing to misdirect some of his testosterone toward neo-fascist asskicking, if she gives the word. Stay tuned. Labels: health care, Sarah Palin, thugs

"Birthers," Wingnuts Who Think Obama Was Born In Kenya, Make Me Ashamed To Be From The South

This has become the Whitewater of the Obama era. Despite evidence and still more evidence that Barack Hussein Obama was born Aug. 4, 1961, in Hawaii (a belated happy birthday to a fellow Leo, Mr. President), a large number of Americans, almost all Republicans, don't believe that he's an American citizen. And they are heavily concentrated among white Republicans in the South. This reminds me a lot of Whitewater in the '90s and the "questions" that dogged Bill Clinton throughout his eight-year presidency. I was not quite as intolerant of right-wing kooks back then as I am now. I suffered through a litany of conspiracy theories from acquaintances. Bill and Hillary were not only dogged by the specter of an investment in which they lost money. Bill Clinton, I was told by these crazies, was directly involved in over 100 murders back in Arkansas, during his governorship. We went through various "Troopergates," "Tailgates," and so forth. A special prosecutor was appointed, and spent over $40 million to eventually find out that Clinton apparently got a couple of blowjobs from a White House intern and lied about them. And then he wasted a lot of time and money getting Clinton impeached, and without a conviction, over the likes of that. Now we have the same raving lunatics sizing up Barack Hussein Obama. Clinton was indeed a wheeler-dealer and a Falstaffian figure who had a hard time keeping his pants zipped.
Obama admits that he sampled a few drugs while he was in his twenties, mostly pot. Now it's a big deal for him to have a beer, and there's much ado about his trouble giving up his smokes. Other than that, he's a model father and husband, a regular Mr. Cleanhands. So, the rubber-room refugees zero in on this citizenship horseshit. And they're mostly, though not exclusively, Republicans in the South. To wit, a recent column from usnews.com: According to a new poll from Research 2000 (commissioned by Daily Kos), a majority of Southerners either believe that Barack Obama was not born in the United States (23 percent) or are not sure (30 percent). Only 47 percent of Southern respondents believe Obama was born in the USA. By contrast, 93 percent of Northeastern[er]s said yes, he was born here, 90 percent of Midwesterners did and 87 percent of Westerners. Here's a link to the whole column. I have been argumentative and contrarian quite often with people who characterize Southerners as stupid. Why? I have encountered my share of dumbass Jersey-talking Yankees, and my one visit to Southern California was quite an eye-opener. There, I learned that there's just a marginal difference between a philistine who sips Chardonnay and wears designer jeans and a philistine who chugs Miller High Life and got his or her jeans at Wal-Mart. But after seeing the results of this poll, I don't know if I can muster a defense for fellow Southerners anymore. Forrest Gump was only partially right -- stupid is not merely what stupid does. It's also what it says. Words can be poison, and this notion is utterly toxic. I stand ashamed to be a native Southerner, with deep roots in states other than Texas (Alabama, North and South Carolina, Arkansas). I hope I can consider myself an exception, but we seem to have de-evolved from the rest of Western culture. Labels: Barack Obama, birthers, South
<!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-US" lang="en-US"> <head> <meta http-equiv="content-type" content="text/html; charset=utf-8" /> <title>Zencoder Dropzone</title> <!-- jQuery Include --> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script> <!-- Filepicker.io Include --> <script type="text/javascript" src="//api.filepicker.io/v1/filepicker.js"></script> <style type='text/css'> html, body { margin: 0; padding: 0; background-color: #1F333E; color: #fff; height: 100%; font-family: 'HelveticaNeue-UltraLight','Helvetica Neue UltraLight','Helvetica Neue','Open-Light',sans-serif; } #content { height: 100%; width: 100%; } /* Basic styles */ .hidden { display: none } h1 { font-weight: 300 } /* We need this structure for completely centered text */ .container { height: 100%; width: 100%; display: table; } .text { display: table-cell; vertical-align: middle; text-align: center; } /* Setting height/width to 100% will cause scrollbars, so absolutely position to the edges */ .full { position: absolute; left: 0; right: 0; top: 0; bottom: 0; } /* Setup */ #setup { /* Nothing special here yet */ } #setup input { width: 80%; height: 50px; text-align: center; font-size: 40px; color: #1E323D; background-color: #609FC1; border: 3px solid #fff; } #setup input:focus { outline: none; } /* Styling placeholder text still takes vendor prefixes :( */ ::-webkit-input-placeholder { color: #47758F; } :-moz-placeholder { /* Firefox 18- */ color: #47758F; } ::-moz-placeholder { /* Firefox 19+ */ color: #47758F; } :-ms-input-placeholder { color: #47758F; } /* Drop Zone */ #dropZone { /* Nothing special here yet */ } .normal { border: 3px dashed #fff; background-color: #47758F; } .over { border: 3px solid #fff; background-color: #609FC1; } </style> <script type="text/javascript"> $(function(){ // Get the API key from the initial form field and throw it in a variable for use later var apiKey; $('#api-key-form').submit(function(e){ 
e.preventDefault(); apiKey = $('#api-key').val(); // Once the API key is set, show the drop zone $('#setup').fadeOut(function(){ $('#dropZone').fadeIn(); }); }); // Set your Filepicker.io API key filepicker.setKey('AACRnG3VDSwyJfHqeN06ez'); // We'll be referencing these elements a few times, // so we might as well put them in vars. var $dz = $('#dropZone'); var $dzResult = $('#localDropResult'); // Set up our Filepicker.io Drop Pane. filepicker.makeDropPane($dz, { multiple: false, dragEnter: function() { $dz.find('h1').text('Drop to upload'); $dz.removeClass('normal').addClass('over'); }, dragLeave: function() { $dz.find('h1').text('Drop files here'); $dz.removeClass('over').addClass('normal'); }, onSuccess: function(fpfiles) { $dz.find('h1').text('File uploaded. Encoding...'); createJob(apiKey, fpfiles[0].url, $dz); }, onError: function(type, message) { $dzResult.text('('+type+') '+ message); }, onProgress: function(percentage) { $dz.find('h1').text('Uploading ('+percentage+'%)'); } }); }); // Send the create request to Zencoder function createJob(apiKey, file, $element) { var request = { input: file }; // Let's use $.ajax instead of $.post so we can specify custom headers. 
$.ajax({ url: 'https://app.zencoder.com/api/v2/jobs', type: 'POST', data: JSON.stringify(request), headers: { "Zencoder-Api-Key": apiKey }, dataType: 'json', success: function(data) { console.log(data); // Once the file is uploaded, start polling Zencoder for progress pollZencoder(apiKey, data.id, $element); }, error: function(data) { console.log(data); } }); } // Poll the Zencoder API for progress function pollZencoder(apiKey, jobId, $element) { $.ajax({ url: 'https://app.zencoder.com/api/v2/jobs/'+ jobId +'/progress.json', type: 'GET', headers: { "Zencoder-Api-Key": apiKey }, success: function(data) { if (data.state != 'finished') { console.log(data); // We don't want to update progress while the job is still queued if (data.state != 'waiting') { $element.find('h1').text('Encoding ('+ data.progress +'%)'); } // Since the job isn't finished, wait 3 seconds and poll again setTimeout(function() { pollZencoder(apiKey, jobId, $element) }, 3000); } else { // Job is finished, so let the user know. $element.find('h1').html('Finished. <a href="https://app.zencoder.com/jobs/'+ jobId +'">View Job</a>') } }, error: function(data) { console.log(data); } }); } </script> </head> <body> <div id="content"> <!-- Get the user's API key before anything else --> <div id="setup" class="full"> <div class="container"> <div class="text"> <form id="api-key-form"> <input type="text" id="api-key" placeholder="Zencoder API Key"><br /> <p>Press enter when done.</p> </form> </div> </div> </div> <!-- Drop zone needs to be hidden by default. We'll unhide it when the API key is put in --> <div id="dropZone" class="full normal hidden"> <div class="container"> <div class="text"> <h1>Drop files here</h1> </div> </div> </div> </div> </body> </html>
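For reference, the create-job call above can also be reproduced without jQuery. The sketch below is a minimal plain-JavaScript version (Node 18+ or a modern browser, where `fetch` is built in); the endpoint and the `Zencoder-Api-Key` header are taken directly from the page's own `createJob` function, while `buildJobRequest` and `sendJob` are hypothetical helper names introduced here for illustration.

```javascript
// Assemble the same request that createJob() above sends via $.ajax.
// Pure data, no network side effects, so it is easy to inspect and test.
function buildJobRequest(apiKey, fileUrl) {
  return {
    url: 'https://app.zencoder.com/api/v2/jobs',
    method: 'POST',
    headers: {
      'Zencoder-Api-Key': apiKey,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ input: fileUrl })
  };
}

// Send the request and return the parsed JSON response,
// whose `id` field is what pollZencoder() polls on.
async function sendJob(apiKey, fileUrl) {
  const req = buildJobRequest(apiKey, fileUrl);
  const res = await fetch(req.url, {
    method: req.method,
    headers: req.headers,
    body: req.body
  });
  return res.json();
}
```

As with the jQuery version, note that this sends the API key from the client; for anything beyond a personal drop-zone tool, the request would normally be proxied through a server so the key is not exposed.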
Q: How would I use a .xib file to make image placement fit iPhone 5 and 4 screens?

I'm needing to use a .xib file to make sure that my images on my screen match both screen sizes. I would post images, but unfortunately my access is still limited on Stack Overflow.

A: I believe you need to do it programmatically, as given below. Add the following define statement to your ProjectName-Prefix.pch file so it's available all over the application:

    #define IS_IPHONE_5 (fabs((double)[[UIScreen mainScreen] bounds].size.height-(double)568)<DBL_EPSILON)

and then in the ViewController you can do something like:

    if (IS_IPHONE_5) {
        // Set image(s) specific to iPhone 5 on your image views
    } else {
        // This is iPhone 4 or older, so set appropriate images for the 3.5" screen size
    }
\section{Introduction} \gdef\@thefnmark{}\@footnotetext{\hspace*{-0.6cm}All authors were supported by Engineering and Physical Sciences Research Council grant EP/R010560/1.\\2010 Mathematics Subject Classification. Primary 37F10; Secondary 30D05.} Let $f$ be a transcendental entire function. We consider the iterates of $f$, which we denote by $f^n$, $n \geq 1$. The complex plane is divided into two sets: the Fatou set, $F(f)$, where the iterates $(f^n)$ form a normal family in a neighbourhood of every point, and its complement, the Julia set $J(f)$. An introduction to the theory of iteration of transcendental entire and meromorphic functions can be found in \cite{bergweiler93}. The Fatou set is open and consists of connected components, which are called \textit{Fatou components}. Fatou components can be periodic, preperiodic or wandering domains. A Fatou component $U$ is called a \textit{wandering domain} if $f^n(U) \cap f^m(U)= \emptyset,$ for all $n\neq m$. Although Sullivan showed in \cite{sullivan} that rational maps have no wandering domains, transcendental entire functions can have wandering domains. The first example of such a function was given by Baker \cite{Baker76} who proved that a {\color{red} certain} entire function given by an infinite product has a multiply connected wandering domain. Several examples of simply connected wandering domains have been constructed since then (see, for example,~\cite[p. 104]{Herman}, \cite[p. 414]{sullivan}, \cite[p. 564, p. 567]{Baker-wd}, \cite[p. 222]{Devaney-entire}, \cite[Examples 1 and 2]{pathex}, \cite{fagella-henriksen}). In \cite{BRS} the authors gave a complete description of the dynamical behaviour in multiply connected wandering domains. Recently, in \cite{BEFRS} the authors gave a detailed classification of simply connected wandering domains in terms of the hyperbolic distance between orbits of points and in terms of convergence to the boundary. 
More specifically, they classified simply connected wandering domains into \textit{contracting}, \textit{semi-contracting} and \textit{eventually isometric} depending on whether, for almost all pairs of points in the wandering domain, the hyperbolic distances between the orbits of these points, tend to 0, decrease but do not tend to 0, or are eventually constant, respectively. In terms of convergence to the boundary, the orbits of all points \textit{stay away} from the boundary, come arbitrarily close to the boundary but do not converge to it (\textit{bungee}), or \textit{converge} to the boundary. These two classifications give nine possible types of simply connected wandering domains. Using a new technique, based on approximation theory, they show that all of these nine possible types are indeed realisable. All the examples constructed in \cite{BEFRS} were {\it escaping} wandering domains. Hence it is natural to ask whether there exist {\it oscillating} wandering domains of all nine types. (It remains a major open question as to whether it is possible to have wandering domains of bounded orbit.) We first recall that {\color{red} in oscillating wandering domains the iterates of~$f$ have finite limit points, as well as $\infty$, so it is impossible for the orbit of a point in such a wandering domain to stay away from the boundary.} Thus three of the nine possible types are not realisable. In this paper we show that the remaining six possible types of oscillating wandering domains are all realisable. The first transcendental entire function with oscillating wandering domains was given by Eremenko and Lyubich in \cite{pathex}; this was also the first application of approximation theory in complex dynamics. The authors used sequences of discs and half-annuli and a model function which was constant on the half-annuli and a translation on the discs. 
This model function was approximated on the closure of every disc and half-annulus by a transcendental entire function using an extended version of Runge's approximation theorem. {\color{red} Their technique did not show, however, whether their wandering domains are bounded, nor did it give information on the degree of the entire function on each of the wandering domains.} Motivated by the construction in \cite{pathex}, we adapt the new techniques from \cite{BEFRS} to construct bounded oscillating wandering domains, which, moreover, have the property that the degree of~$f$ on each of the wandering domains is equal to that of our model map. We then use this technique to construct the six {\color{red} possible} types of such wandering domains. We prove our main result, Theorem~\ref{thm:main construction}, in Section~3. It is worth pointing out that, in order for the wandering domains to be oscillating, the set-up needs to be much more complicated than that used for escaping wandering domains in \cite{BEFRS}. Although some of the building blocks of our proof are similar to those used in the analogous result for escaping wandering domains, the proof here requires several additional techniques. In particular, great care has to be taken over the accumulating errors in the approximation, as each of the discs $D_n$ on which the approximation takes place contains infinitely many domains in the orbit of the wandering domain. {\color{red} Throughout, $D(z,r)$ denotes the open disc with centre~$z$ and radius~$r$.}
For $n \geq 0$, let \[ D_n=D(9n,\alpha_n),\] \[ \Delta_n=D(a_n,\alpha_{n}) \;\mbox{ and } \Delta'_n=D(a_n,2\alpha_{n}), \mbox{ where } a_n=9n+4\alpha_{n},\] and \[ G_n=D(\kappa_n, 1) \;\mbox{ and }\;\; G_n'= D(\kappa_n,5/4), \mbox{ where } \kappa_n=a_n+3. \] We consider the function $$ \phi(z) = \begin{cases} z+9,\;\;\mbox{ if } z\in \overline{D_n}, \; n \geq 0,\\ \frac{z-a_n}{\alpha_n}+\kappa_n, \;\; \mbox{ if } z \in \overline{\Delta_n'},\; n \geq 0,\\ \alpha_{n+1} b_n({z-\kappa_n})+4\alpha_{n+1}, \;\;\mbox{ if }z \in \overline{G_n'},\; n \geq 0, \end{cases}$$ and the sets \[ V_m= D(\zeta_m, \rho_m) = {\color{red} \phi^m(\Delta_0)=} \begin{cases} \Delta_n, \; \mbox{ if } m=\ell_n -1, n \geq 0,\\ G_n,\; \mbox{ if } m=\ell_n, n \geq 0,\\ D(9k + 4\alpha_{n+1},\alpha_{n+1}) \subset D_k, \; \mbox{ if } m = \ell_n + k+1, 0 \leq k \leq n, \end{cases} \] where $(\ell_n)$ is defined by $\ell_0=1 \mbox{ and } \ell_{n+1}=\ell_n+n+3, \; n \geq 0.$ \begin{figure}[hbt!]\label{Fig1} \centering \begin{tikzpicture}[scale=2] \draw[black] (-4.7,0) circle [radius=0.6]; \draw[black] (-3.1,0) circle [radius=0.6]; \draw[black] (-1.5,0) circle [radius=0.6]; \draw[gray](-4.3,0) circle [radius=0.15]; \draw[gray](-4.6,0) circle [radius=0.06]; \draw[black] (-0.5,0) circle [radius=0.15]; \draw[black] (0,0) circle [radius=0.15]; \draw[black] (1, 0) circle [radius=0.6]; \draw [->] (-2,0.2) to [out=130,in=60] (-4.35,0); \draw [->] (-4.35,-0.15) to [out=295,in=230] (0,-0.2); \draw [->] (-2.65,-0.15) to [out=310,in=230] (-2,-0.1); \draw [->] (0.05,-0.1) to [out=310,in=230] (0.55,-0.1); \draw [->] (0.7,0.2) to [out=140,in=50] (-4.55,0.1); \draw[fill] (-4.7,0) circle [radius=0.01]; \draw[fill] (-1.5,0) circle [radius=0.01]; \draw[fill] (-3.1,0) circle [radius=0.01]; \draw[fill] (1,0) circle [radius=0.01]; \node at (-4.7,-0.1) {$0$}; \node at (-3.1,-0.1) {$a_0=4$}; \node at (-1.5,-0.1) {$\kappa_0=7$}; \node at (1.05,-0.1) {$\kappa_1=a_1+3$}; \node at (-4.74,-0.75) {$D_0$}; \node at (-0.5,-0.3) 
{$D_1$}; \node at (0,0.25) {$\Delta_1=V_3$}; \node at (-4.3,0.25) {$V_2$}; \node at (-4.65,0.15) {$V_5$}; \node at (-3.1,0.35) {$\Delta_0=V_0$}; \node at (-1.45,0.75) {$G_0=V_{\ell_0}=V_1$}; \node at (0.95,0.75) {$G_1=V_{\ell_1}=V_4$}; \end{tikzpicture} \vspace*{-0.5cm} \caption{The action of the {\color{red} model function $\phi$, discussed further in Section~3.}} \end{figure} For a suitable choice of $(\alpha_n)$, there exists a transcendental entire function~$f$ having an orbit of bounded, simply connected, oscillating, wandering domains $U_m$ such that, for $m,n\geq 0$, \begin{itemize} \item[(i)] $\overline{D(\zeta_m,r_m)} \subset U_m \subset D(\zeta_m, R_m)$, where $0<r_m<\rho_m<R_m$, and $r_m\sim \rho_m$ and $R_m \sim \rho_m$ as $m \to \infty$; \item[(ii)] $|f(z)-\phi(z)|\leq \varepsilon_m$ on $\overline{D(\zeta_m, R_m)}$, where $\varepsilon_0 \leq 1/24$ and $\varepsilon_{\ell_n+k} = \frac{\alpha_{n+1}^2}{2^{k+1}}$, for $0\leq k \le n+2$; \item[(iii)] $f(9n)=\phi(9n)=9(n+1)$ and $f'(9n)=\phi'(9n)= 1$; \item[(iv)] $f:U_{m} \to U_{m+1}$ has degree $q_{m},$ where $q_{\ell_n}=d_n$, and $q_m=1$ otherwise. \end{itemize} Finally, if $z, z' \in U_0$ and there exists $N \in {\mathbb N}$ such that $f^{\ell_N}(z), f^{\ell_N}(z') \in \overline{D(\kappa_N,r_{\ell_N})}$, then, for $n \geq N$, we have \begin{equation}\label{eqtn:double inequality} k_n\operatorname{dist}_{G_n}(f^{\ell_n}(z), f^{\ell_n}(z')) \leq \operatorname{dist}_{U_{\ell_n}}(f^{\ell_n}(z), f^{\ell_n}(z'))\leq K_n \operatorname{dist}_{G_n}(f^{\ell_n}(z), f^{\ell_n}(z')), \end{equation} where $0<k_n<1<K_n$ with $k_n,K_n \to 1$ as $n \to \infty$. \end{thm} \begin{rem} If the Blaschke products $b_n$ are real-symmetric for each $n \geq 0$, then $f$ can be taken to be real-symmetric; see Remark~\ref{rem2}. \end{rem} In Section 5 we use Theorem \ref{thm:main construction} to construct all six types of oscillating wandering domains, proving the following result. 
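As a quick aside (this closed form is not used in what follows, but it may help the reader track the return times to the discs $G_n$), the recursion $\ell_0=1$, $\ell_{n+1}=\ell_n+n+3$ in Theorem~\ref{thm:main construction} can be solved explicitly:
\[
\ell_n \;=\; 1+\sum_{k=0}^{n-1}(k+3) \;=\; \frac{n^2+5n+2}{2}, \qquad n\ge 0,
\]
so that $\ell_0=1$, $\ell_1=4$, $\ell_2=8$ and $\ell_3=13$, consistent with the labelling of $V_1=G_0$ and $V_4=G_1$ in Figure 1.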
This requires several preliminary results concerning Blaschke products which we prove in Section 4. \begin{thm}\label{realizable} For each of the six possible types of simply connected oscillating wandering domains, there exists a transcendental entire function with a bounded, simply connected oscillating wandering domain of that type. \end{thm} Oscillating wandering domains for functions in the Eremenko-Lyubich class $\mathcal{B}$ have been constructed, first by Bishop in \cite{bishop15}, using the novel technique of quasiconformal folding, and more recently in \cite{FJL} and \cite{marshi}. It would be interesting to see whether their methods can be adapted to classify the resulting wandering domains as one of the six possible types described above. \subsection*{Acknowledgments} We would like to thank Anna Miriam Benini, Chris Bishop, Nuria Fagella and Lasse Rempe for inspiring discussions. \section{Preliminary results for Theorem \ref{thm:main construction}} In this section we give some existing results which are used in the proof of Theorem \ref{thm:main construction}. The following theorem, which is \cite[Theorem D]{BEFRS}, plays a key role in the proof. \begin{thm}\label{WDexist} Let $f$ be a transcendental entire function and suppose that there exist Jordan curves $\gamma_n$ and $\Gamma_n$, $n\ge 0$, compact sets $L_k$, $k\ge 0$, and a bounded domain $D$ such that \begin{itemize} \item[\rm(a)] $\Gamma_n$ surrounds $\gamma_n$, for $n \geq 0$; \item[\rm(b)] the sets $\Gamma_n$, $n \geq 0$, $L_k$, $k \geq 0$, and $\overline D$ all lie exterior to each other; \item[\rm (c)] $\gamma_{n+1}$ surrounds $f(\gamma_n)$, for $n \geq 0$; \item[\rm (d)] $f(\Gamma_n)$ surrounds $\Gamma_{n+1}$, for $n \geq 0$; \item[\rm (e)] $f(\overline D \cup \bigcup_{k\ge 0} L_k)\subset D$; \item[\rm (f)] there exists $n_k \to \infty$ such that $$ \max\{\operatorname{dist}(z,L_{k}): z \in \Gamma_{n_k}\} = o(\operatorname{dist}(\gamma_{n_k}, \Gamma_{n_k}))\;\text{ as}\; k \to \infty. 
$$ \end{itemize} Then there exists an orbit of simply connected wandering domains $U_n$ such that $\overline{\operatorname{int} \gamma_n} \subset U_n \subset \operatorname{int}\Gamma_n$, for $n \geq 0$. Moreover, if there exists $z_n \in \operatorname{int}\gamma_n$ such that both $f(\gamma_n)$ and $f(\Gamma_n)$ wind $d_n$ times around $f(z_n),$ then $f:U_n \to U_{n+1}$ has degree $d_n$, for $n \geq 0$. \end{thm} In order to obtain the transcendental entire function with the required properties, we consider an analytic function which is our model function and then apply the following result which is an extension of the well-known Runge's approximation theorem and was the Main Lemma in~\cite{pathex}. \begin{lem}\label{Runge E-L} Let $(E_n)$ be a sequence of compact subsets of ${\mathbb C}$ with the following properties: \begin{itemize} \item[(i)] ${\mathbb C} \setminus E_n$ is connected, for $n \geq 0$; \item[(ii)] $E_n \cap E_m = \emptyset$, for $n\neq m$; \item[(iii)] $\min\{|z|: z\in E_n\} \to \infty$ as $n \to \infty$. \end{itemize} Suppose $\psi$ is holomorphic on $E =\bigcup_{n=0}^{\infty} E_n$. For $n \geq 0$, let $\varepsilon_n>0$ and let $z_{n} \in E_n$. Then there exists an entire function $f$ satisfying, for $n \geq 0$, \begin{equation} |f(z)-\psi(z)|<\varepsilon_n, \quad \text{for } z\in E_n; \end{equation} \begin{equation} f(z_{n})=\psi(z_{n}), \quad f'(z_{n}) = \psi'(z_{n}). \end{equation} \end{lem} \begin{rem}\label{rem2} We note that if the sets $E_n$ are each real-symmetric (that is, $\overline{E_n}=E_n$), the function~$\psi$ is real-symmetric in~$E$ (that is, $\overline{\psi(\overline{z})}=\psi (z)$, for $z \in E$), and the points $z_{n}, n\geq 0$, are all real, then we can take the entire function~$f$ to be real-symmetric on ${\mathbb C}$. Indeed, if~$f$ satisfies the conclusions of Lemma \ref{Runge E-L}, then $g(z)=\frac{1}{2}(f(z)+\overline{f(\overline{z})})$ is real-symmetric and entire, and satisfies the conclusions of Lemma \ref{Runge E-L}. 
\end{rem} We also need the following result, which is a version of \cite[Lemma 2]{pathex}. \begin{lem} \label{lem:EL2} Let $g$ be an analytic function in the disc $\{z:|z|<R\}$ such that $g(0)=g'(0)=0$ and $|g(z)|<\epsilon R$ for $|z|<R$ and some $\epsilon< 1/4$. Then $$|g(z)| \leq \frac{\epsilon}{R}|z|^2,\;\text{ for }|z| <R.$$ \end{lem} Finally, we need the following lemma about hyperbolic distances in discs, which is \cite[Lemma 5.2]{BEFRS}. \begin{lem} \label{lem:hyp estimate} Suppose that $0<s<r<1<R$ and set \[ c(s,R)= \frac{1-s^2}{R-s^2/R},\quad D_r=D(0,r) \quad\text{and}\quad D_R=D(0,R). \] If $|z|,|w|\leq s$, then \begin{equation} \label{hyp est 1} \operatorname{dist}_{D_R}(z,w)= \operatorname{dist}_{\mathbb{D}}({z}/{R},{w}/{R})\geq c(s,R)\operatorname{dist}_{\mathbb{D}}(z,w), \end{equation} and \begin{equation} \label{hyp est 2} \operatorname{dist}_{D_r}(z,w)= \operatorname{dist}_{\mathbb{D}}({z}/{r},{w}/{r})\leq \frac{1}{c(s/r,1/r)}\operatorname{dist}_{\mathbb{D}}(z,w). \end{equation} Also, $0<c(s,R)<1$ and if the variables~$s$,~$r$ and~$R$ satisfy in addition \begin{equation}\label{srR} 1-r=o(1-s)\;\text{ as } s\to 1\quad\text{and}\quad R-1=O(1-r)\;\text{ as } r\to 1, \end{equation} then \begin{equation} \label{cR} c(s,R)\to 1\;\text{ as}\;s \to 1, \end{equation} and \begin{equation}\label{cr} c\left(s/r,1/r \right) \to 1\;\text{ as}\;s \to 1. \end{equation} \end{lem} \section{Proof of Theorem \ref{thm:main construction}} In this section, we prove our construction result. We consider the sets $V_m = \phi^m(\Delta_0)$, where $\Delta_0 = D(4,1)$, as defined in the statement of Theorem~\ref{thm:main construction} and construct a function~$f$ which is sufficiently close to $\phi$ in parts of the plane in order to ensure that $f$ has a {\color{red} bounded} wandering domain~$U$ with $f^m(U)$ close to $V_m$, {\color{red} for $m\ge 0$}, in the sense that the Hausdorff distance between $U_m$ and $V_m$ tends to $0$ as $m \to \infty$. 
{\color{red}\subsection*{The sets $V_m$}} We begin by noting that it follows from the definition of $\phi$ and the fact that $\alpha_{m+1}/\alpha_m \leq 1/6$, for $m \geq 0$, that, for each $n \geq 0$, \[ \phi(\Delta_n)= G_n, \] \[ \phi^2(\Delta_n) = D(4\alpha_{n+1}, \alpha_{n+1}) \subset D(0,\alpha_0) = D_0, \] so, for $0 \leq k \leq n$, \[ \phi^{k+2}(\Delta_n) = D(9k + 4\alpha_{n+1}, \alpha_{n+1}) \subset D(9k, \alpha_k) = D_k, \] and \[ \phi^{n+3}(\Delta_n) = D(9(n+1) + 4\alpha_{n+1}, \alpha_{n+1}) = \Delta_{n+1}. \] {\color{red} We obtain the following properties of $V_m$, as stated in Theorem~\ref{thm:main construction}, and illustrated in Figure 1: \begin{equation}\label{V} V_m = D(\zeta_m,\rho_m)=\phi^m(\Delta_0) = \begin{cases} \Delta_n, \;\; \mbox{ if } m=\ell_n -1, \; n \geq 0,\\ G_n,\;\; \mbox{ if } m=\ell_n, \; n \geq 0,\\ D(9k + 4\alpha_{n+1},\alpha_{n+1}) \subset D_k, \;\; \mbox{ if } m = \ell_n + k+1, \; 0 \leq k \leq n, \end{cases} \end{equation} where $(\ell_n)$ is defined by $\ell_0 = 1 \mbox{ and } \ell_{n+1} = \ell_{n} + n+3,$ for $n\ge 0$.} {\color{red} In words}, if $V_m \subset D_0$, then $\phi$ repeatedly translates $V_m$ to the right by 9 until the translated image lands on $\Delta_{n}$, for some $n \in {\mathbb N}$, at which point $\phi$ maps the disc $\Delta_{n}$ onto $G_n$ and then maps $G_n$ into $D_0$ (see Figure 1). \subsection*{Construction of the circles $\gamma_n$ and $\Gamma_n$} We now give an inductive definition of the values $r_m$ and $R_m$ described in Theorem~\ref{thm:main construction}, part~(i), and define $\alpha_n$ inductively at the same time. We will choose these values in such a way that, if we define \[ \gamma_m = \{z: |z-\zeta_m| = r_m\} \quad\mbox{and}\quad \Gamma_m = \{z: |z-\zeta_m| = R_m\}, \] then, for $m \geq 0$, \begin{equation}\label{propa} \gamma_{m+1}\;\text{ surrounds}\;\phi(\gamma_m), \end{equation} and \begin{equation}\label{propb} \phi(\Gamma_m)\;\text{ surrounds}\; \Gamma_{m+1}. 
\end{equation} Further, we choose these values in such a way that we are able to use Lemma~\ref{Runge E-L} and Lemma~\ref{lem:EL2} to approximate the map $\phi$ by an entire function $f$ such that $\phi$ can be replaced by $f$ in~\eqref{propa} and~\eqref{propb}. This in turn allows us to apply Theorem~\ref{WDexist} to deduce that $f$ has wandering domains with the required properties. Our construction uses the Blaschke products $b_n$ which, for $n \geq 0$, we write as \[ b_n(z) = e^{i \theta_n} \prod_{j=1}^{d_n}\frac{z+p_{n,j}}{1+\overline{p_{n,j}}z}, \] where the points $p_{n,j} \in \mathbb{D}=\{z:|z|<1\}$ are not necessarily distinct, and $\theta_n \in [0, 2\pi)$. We also use the maps defined by \begin{equation}\label{Bdef} B_n(z) = b_n(z-\kappa_n), \mbox{ for } n\geq 0, \end{equation} {\color{red} noting that $B_n$ has degree~$d_n$ and maps $G_n$ to $D_0={\mathbb D}$.} First take \begin{equation}\label{r0} {\color{red} r_0 \in (5/6,1) \mbox{ and } R_0 \in (1,7/6),} \end{equation} and {\color{red} recall that} $\alpha_0 = 1$. We then choose $r_1$ such that \begin{equation}\label{r1} 0<1- r_1 \leq \min\left\{\frac{1- r_0}{2}, \operatorname{dist}(\phi(\gamma_0), \partial G_0)^2\right\} \end{equation} and such that \[ B_0(\gamma_1)\; \text{winds exactly}\;d_0\;\text{times round}\;D(0,1/2), \] so \[ \phi(\gamma_1)\; \text{winds exactly}\;d_0\;\text{times round}\;D(\zeta_2,\rho_2/2), \] and choose $R_1$ such that \begin{equation} \label{R1} 0< R_1 - 1 \leq \min \left\{\frac{R_0-1}{2}, {\color{red} \operatorname{dist}(\phi(\Gamma_0), \partial G_0)}, \frac{1}{\max_j\{|p_{0,j}|\}}- 1\right\}. \end{equation} Now assume that, for some $n \geq 0$, $\alpha_{k}$ has been chosen for $0 \leq k \leq n$, and $r_m$ and $R_m$ have been chosen for $0\leq m\leq \ell_{n}$. (Note that $\ell_0 = 1$ and we have already specified $\alpha_0$, $r_0$, $R_0$, $r_1$ and $R_1$.) We shall give a rule for choosing $\alpha_{n+1}$ and also for choosing $r_m$ and $R_m$ for $\ell_{n} + 1 \leq m \leq \ell_{n+1}$.
There are three different cases depending on the value of $m$.\\ {\bf Case 1} \;First we consider the case when $m = \ell_{n} + 1$ (and so $V_m \subset D_0$). We also specify $\alpha_{n+1}$ as part of this case. We begin by choosing $c_{n+1}$, $C_{n+1}$ to be circles centred at 0, lying in the interior and exterior of $D_0$ respectively, such that \begin{equation}\label{eta} \operatorname{dist}(c_{n+1}, \partial D_0) \leq \min\left\{ \frac{\rho_{\ell_{n}}-r_{\ell_{n}}}{6}, \frac{1}{2} \operatorname{dist}(B_{n}(\gamma_{\ell_{n}}), \partial D_0)\right\} \end{equation} and \begin{equation}\label{H} \operatorname{dist}(C_{n+1}, \partial D_0) \leq \min \left\{\frac{R_{\ell_{n}}-\rho_{\ell_{n}}}{6}, \operatorname{dist}(c_{n+1}, \partial D_0), \frac{1}{2} \operatorname{dist}(B_{n}(\Gamma_{\ell_{n}}), \partial D_0)\right\}. \end{equation} We set \begin{equation} \label{eq:errors Lem2} \alpha_{n+1} = \operatorname{dist}(C_{n+1},\partial D_0) \end{equation} and note, {\color{red} using the fact that $\phi(z)=\alpha_{n+1}B_n(z)+4\alpha_{n+1}$, for $z \in G'_n$,} that $V_{\ell_n+1}=\phi(V_{\ell_n})=D(4\alpha_{n+1},\alpha_{n+1})$, so $\rho_{\ell_n+1} = \alpha_{n+1}$. We then set \begin{equation}\label{r-2a} r_{\ell_{n}+1}= \rho_{\ell_{n}+1}- \alpha^2_{n+1} \end{equation} and \begin{equation}\label{R-2a} R_{\ell_{n}+1}= \rho_{\ell_{n}+1}+ \alpha^2_{n+1}. 
\end{equation} Note that, together with~\eqref{eta}, \eqref{eq:errors Lem2} and \eqref{H}, these definitions imply that \begin{equation}\label{r-2} \rho_{\ell_{n}+1}- r_{\ell_{n}+1} = \alpha^2_{n+1}\leq \alpha_{n+1} \operatorname{dist}(c_{n+1}, \partial D_0) \leq \frac{1}{2}\operatorname{dist}(\phi(\gamma_{\ell_{n}}), \partial V_{\ell_n +1}), \end{equation} \begin{equation}\label{R-2} R_{\ell_{n}+1} - \rho_{\ell_{n}+1} = \alpha^2_{n+1} = \alpha_{n+1} \operatorname{dist}(C_{n+1}, \partial D_0) \leq \frac{1}{2}\operatorname{dist}(\phi(\Gamma_{\ell_{n}}), \partial V_{\ell_n +1}), \end{equation} {\color{red} and hence \begin{equation}\label{R-r} R_{\ell_{n}+1} - r_{\ell_{n}+1} =2\alpha_{n+1}^2 \le \min\{\operatorname{dist}(\phi(\gamma_{\ell_{n}}), \partial V_{\ell_n +1}),\operatorname{dist}(\phi(\Gamma_{\ell_{n}}), \partial V_{\ell_n +1})\}. \end{equation}} {\bf Case 2}\; We now consider the cases when $ m = \ell_{n} + k + 1$, for $1 \leq k \leq n+1$. Then {\color{red} \[ V_{\ell_{n} + k + 1} = \phi^k(V_{\ell_n+1}) \subset D_{k}, \text{ for } 1\le k\le n, \] and \[ V_{\ell_{n}+n+2} = V_{\ell_{n+1}-1} = D(9(n+1)+4\alpha_{n+1},\alpha_{n+1})=\Delta_{n+1}. \]} In all these cases, we simply choose $r_m$ and $R_m$ to satisfy \begin{equation}\label{r-3} \rho_m-r_{m}= \frac{\rho_{m-1}-r_{m-1}}{2}, \end{equation} and \begin{equation}\label{R-3} R_m-\rho_m= \frac{R_{m-1}-\rho_{m-1}}{2}. \end{equation} {\bf Case 3} \;Finally, we consider the case when $m = \ell_{n+1} = \ell_n + n+3$, so $V_m = G_{n+1}$. 
In this case, we choose $r_{\ell_{n+1}}$ and $R_{\ell_{n+1}}$ so that \begin{equation} \label{r-4} 0< \rho_{\ell_{n+1}}-r_{\ell_{n+1}} \leq \min\left\{ \frac{\rho_{{\ell_{n+1}}-1}-r_{{\ell_{n+1}}-1}}{2}, \operatorname{dist}(\phi(\gamma_{\ell_{n+1}-1}),\partial G_{n+1})^2\right\}; \end{equation} \begin{equation} \label{degree-4a} {\color{red} B_{n+1}(\gamma_{{\ell_{n+1}}})}\; \text{winds exactly}\;d_{n+1}\;\text{times round}\;D(0,1/2); \end{equation} \begin{equation} \label{R-4} 0<R_{\ell_{n+1}} - \rho_{\ell_{n+1}} \leq \min \left\{\frac{R_{{\ell_{n+1}}-1}-\rho_{{\ell_{n+1}}-1}}{2}, \operatorname{dist}({\color{red} \phi(\Gamma_{\ell_{n+1}-1})}, \partial G_{n+1}), \frac{1}{\max_j\{|p_{n+1,j}|\}}- 1\right\}. \end{equation} This inductive process defines the values $r_m$ and $R_m$, and hence the circles $\gamma_m$ and $\Gamma_m$, for $m \geq 0$. Note that it follows from \eqref{H},~\eqref{eq:errors Lem2}, \eqref{R-3}, {\color{red} \eqref{R-2a}} and \eqref{R-4} that \[ \alpha_{n+1} \leq \frac{R_{\ell_{n}} - \rho_{\ell_{n}}}{6} < \frac{R_{\ell_{n-1}+1} - \rho_{\ell_{n-1}+1}}{6} = \frac{\alpha^2_{n}}{6} < \frac{\alpha_{n}}{6},\;\text{ for } n\ge 1. \] Moreover, it follows from the definition of $\phi$ together with \eqref{Bdef} and \eqref{V} that we have \begin{equation}\label{phidelta} \phi(z) = \alpha_{n+1}B_n(z) + 4\alpha_{n+1} = \rho_{\ell_{n}+1}B_n(z) + {\color{red} \zeta_{\ell_{n}+1}}, \mbox{ for } z \in G_n, n\geq 0. \end{equation} So~\eqref{degree-4a} implies that, for $n \geq 0$, \begin{equation} \label{degree-4} \phi(\gamma_{{\ell_{n+1}}})\; \text{winds exactly}\;d_{n+1}\;\text{times round}\;D(\zeta_{\ell_{n+1}+1},\rho_{\ell_{n+1}+1}/2).
\end{equation} We also note that it follows from~\eqref{R-2},~\eqref{R-3} and \eqref{V} that, for $m = \ell_{n}+k + 1$, where $0 \leq k \leq n+1$, we have {\color{red} $R_m - \rho_m \le R_{\ell_n+1}- \rho_{\ell_n+1} < \alpha_{n+1} = \rho_m$.} So, for $m = \ell_{n}+k + 1$, where $0 \leq k \leq n$, we have \begin{equation}\label{V'} V_m' = D(\zeta_m,R_m) \subset D(\zeta_m,2\rho_m) = D(9k + 4\alpha_{n+1},2 \alpha_{n+1}) \subset D(9k, 6\alpha_{n+1}) \subset D_{k}, \end{equation} and {\color{red} \begin{equation}\label{Delta'} V'_{\ell_{n}+n+2} = V'_{\ell_{n+1}-1} = D(\zeta_{\ell_{n+1}-1}, R_{\ell_{n+1}-1}) \subset \Delta_{n+1}', \end{equation} by the definitions of $D(\zeta_m,\rho_m)$ and $\Delta_{n}'$ in the statement of Theorem~\ref{thm:main construction}.} It then follows from~\eqref{R-4} and~\eqref{V} that $R_{\ell_{n+1}} - \rho_{\ell_{n+1}} < \alpha_{n+1}$ and so \begin{equation}\label{G'} V'_{\ell_{n+1}} = D(\zeta_{\ell_{n+1}}, R_{\ell_{n+1}}) \subset G_{n+1}'. \end{equation} Also, $\phi$ is analytic in $V'_{\ell_{n+1}}$ by~\eqref{phidelta} together with the last condition in~\eqref{R-4}. It follows from~\eqref{r-2} that, for $n \geq 0$, \begin{equation} \gamma_{\ell_n + 1} \mbox{ surrounds } \phi(\gamma_{\ell_n}), \end{equation} and from~\eqref{R-2} that, for $n \geq 0$, \begin{equation} \phi(\Gamma_{\ell_n}) \mbox{ surrounds } \Gamma_{\ell_n + 1}. \end{equation} Thus \eqref{propa} and \eqref{propb} hold when $m = \ell_n$, where $n \geq 0$. Also, if $m = \ell_n+k+1$, where $n \geq 0$, $0 \leq k \leq n+1$, then $\phi$ is a translation on $\gamma_m$ and $\Gamma_m$, by~\eqref{V'} {\color{red} and the definition of~$\phi$}. Since, by~\eqref{r-3}, we have \[ \rho_{m+1}-r_{m+1} \leq \frac{\rho_{m}-r_{m}}{2} \] and, by~\eqref{R-3}, \[ R_{m+1}-\rho_{m+1} \leq \frac{R_m-\rho_{m}}{2}, \] it follows that \eqref{propa} and \eqref{propb} hold for these values of~$m$ too.
Finally, it follows from~\eqref{Delta'} that, on $\gamma_{\ell_{n+1}-1}$ and $\Gamma_{\ell_{n+1}-1}$, {\color{red} $n\ge 0$}, the function $\phi$ is a scaling by a factor of $1/\alpha_{n+1} > 1$ followed by a translation, and so it follows from~\eqref{r-4} and~\eqref{R-4} that \eqref{propa} and \eqref{propb} hold in this case too. We note that the sets $\overline{V_m'}$ are disjoint since, if $V'_{\ell_n+k+1}, V'_{\ell_{n+1}+k+1} \subset D_k$, for some $n \geq 0$, {\color{red} $0\le k\leq n$}, then $V'_{\ell_n+k+1} \subset D(9k + 4\alpha_{n+1}, 2\alpha_{n+1})$ and $V'_{\ell_{n+1}+k+1} \subset D(9k + 4\alpha_{n+2}, 2\alpha_{n+2})$, and \[ D(9k + 4\alpha_{n+1}, 2\alpha_{n+1}) \cap D(9k + 4\alpha_{n+2}, 2\alpha_{n+2}) = \emptyset, \] since \[4\alpha_{n+2} + 2\alpha_{n+2} = 6\alpha_{n+2} \leq \alpha_{n+1} < 4\alpha_{n+1} - 2\alpha_{n+1}. \] \subsection*{Construction of the function $f$} Our aim now is to use Lemma \ref{Runge E-L} and Lemma~\ref{lem:EL2} to approximate the map $\phi$ by a single entire function~$f$ such that, for $m \geq 0$, $\gamma_{m+1}$ surrounds $f(\gamma_m)$ and $f(\Gamma_m)$ surrounds $\Gamma_{m+1}$. We also require~$f$ to map certain curves $L_n$ near $\overline{G_n}$ in such a way that we can apply Theorem~\ref{WDexist}. We define $L_n$, for $n\geq 0$, to be the circular arc \begin{equation}\label{eq:reefs} L_n:=\{z:|z-a_n|=R_{\ell_n}+\delta_{\ell_n}^2/2,\;|\operatorname{arg}(z-a_n)| \leq \pi - \delta_{\ell_n}^2\}, \end{equation} where $\delta_m= R_m-r_m \to 0$ as $m \to \infty$; see Figure 2. \begin{figure}[hbt!] 
\centering \begin{tikzpicture}[scale=2.2] \draw[blue] (2.2,0) circle [radius=0.11]; \draw (-1.5,0) circle [radius=0.5]; \draw[blue] (-1.5,0) circle [radius=0.7]; \draw[fill=lime] (-4.7,0) circle [radius=0.6]; \draw[fill=lime] (-3.1,0) circle [radius=0.6]; \draw[fill=lime] (-1.5,0) circle [radius=0.6]; \draw[fill=lime] (0.4,0) circle [radius=0.17]; \draw[gray](-4.3,0) circle [radius=0.15]; \draw[gray](-4.6,0) circle [radius=0.06]; \draw[fill=lime] (0.9,0) circle [radius=0.17]; \draw[fill=lime] (2,0) circle [radius=0.6]; \draw [->] (-1.9,0.2) to [out=130,in=50] (-4.3,0.2); \draw [->] (1.8,0.2)to [out=140,in=50] (-4.6,0.08); \draw [->] (-2.7,-0.15) to [out=310,in=220] (-1.9,-0.15); \draw [->] (0.9,-0.15) to [out=310,in=220] (1.6,-0.15); \draw[fill] (-3.1,0) circle [radius=0.01]; \draw[fill] (-0.3,0) circle [radius=0.01]; \draw[fill] (-0.2,0) circle [radius=0.01]; \draw[fill] (-4.7,0) circle [radius=0.01]; \draw[fill] (-0.1,0) circle [radius=0.01]; \draw[fill] (0.9,0) circle [radius=0.01]; \draw[fill] (2,0) circle [radius=0.01]; \draw[fill] (-1.5,0) circle [radius=0.01]; \draw[blue] (-1.5,0) circle [radius=0.3]; \draw[blue] (-3.1,0) circle [radius=0.72]; \draw[blue] (-3.1,0) circle [radius=0.27]; \draw[blue] (0.9,0) circle [radius=0.22]; \draw[blue] (0.9,0) circle [radius=0.12]; \draw[blue] (2,0) circle [radius=0.65]; \draw[blue] (2,0) circle [radius=0.55]; \draw[red] (-0.7,0) arc (0:172:0.8); \draw[red] (-1.5+0.8*cos{188},0.8*sin{188}) arc(188:360:0.8); \draw[red] (2.7,0) arc (0:176:0.7); \draw[red] (2+0.7*cos{188},0.7*sin{188}) arc(188:360:0.7); \node at (-3.1,-0.1) {$4$}; \node at (-1.5,-0.1) {$7$}; \node at (-4.7,-0.1) {$0$}; \node at (2,-0.06) {$\kappa_n=a_n+3$}; \node at (0.9,-0.05) {$a_n$}; \node at (1.2,-0.45) {$\phi$}; \node at (-4.4,-0.75) {$D_0$}; \node at (-3.1,0.4) {$\Delta_0=V_0$}; \node at (-1.4,0.9) {$L_0$}; \node at (2.1,0.3) {$G_n=V_{\ell_n}$}; \node at (2.1,0.8) {$L_n$}; \node at (-1.5,0.4) {$G_0=V_1$}; \node at (0.4,-0.3) {$D_n$}; \node at (0.9,0.3) 
{$\Delta_n$}; \node at (-2.3,-0.5) {$\phi$}; \node at (-3.1,0.9) {$\phi$}; \node at (-1.3,1.6) {$\phi$}; \end{tikzpicture} \caption{Sketch of the setup of Theorem \ref{thm:main construction}, {\color{red} showing the location of the circles $\gamma_n$ and $\Gamma_n$ (in blue), and the arcs~$L_n$ (in red).}} \end{figure} We also define approximation error quantities $\varepsilon_m$, for $m \geq 0$, by \begin{equation}\label{eq:error} \varepsilon_m = \min \{\tfrac{1}{4}\operatorname{dist}(\phi(\gamma_{m}), \partial V_{m+1}), \tfrac{1}{4} \operatorname{dist}(\phi(\Gamma_{m}), \partial V_{m+1}), \tfrac14\delta_{m+1}\}>0. \end{equation} We now show that these errors have the upper bounds stated in part~(ii) of our theorem. First, it follows from \eqref{r0}, \eqref{r1} and \eqref{R1} that \[ {\color{red} \varepsilon_0 \leq (R_1 - r_1)/4 \leq (R_0 - r_0)/8 < 1/24.} \] {\color{red} Next we note that it follows from~\eqref{R-r} that, for $n \geq 0$, \[ \varepsilon_{\ell_n} = \delta_{\ell_n+1}/4=(R_{\ell_n +1} - r_{\ell_n +1})/4 = 2\alpha_{n+1}^2/4 = \alpha_{n+1}^2/2. \] It then follows from (\ref{r-3}), (\ref{R-3}), (\ref{r-4}) and (\ref{R-4}) that, for $0\leq k \leq n+1, \; n\geq 0$, \begin{equation}\label{eq:error-translation} \varepsilon_{\ell_n+k+1} = (R_{\ell_n +k + 2} - r_{\ell_n + k +2})/4 = (R_{\ell_n +1} - r_{\ell_n +1})/2^{k+3} = \alpha_{n+1}^2/2^{k+2}. 
\end{equation} Thus \begin{equation}\label{eps} \varepsilon_{\ell_n+k} = \alpha_{n+1}^2/2^{k+1},\; \mbox{ for } 0\leq k \leq n+2, \; n\geq 0, \end{equation} as required for part~(ii).} Since $\phi$ is analytic in each set $\overline{V'_{\ell_{n+1}}}$, for $n \geq 0$, it follows from Lemma \ref{Runge E-L} that there exists an entire function~$f$ such that, for $n \geq 0$, \begin{equation}\label{approx1} |f(z)-\phi(z)| < \varepsilon_{\ell_n + n+1}, \text{\ \ for } z \in \overline{D_n}, \end{equation} \begin{equation} \label{approxnew} |f(z)-\phi(z)| < \varepsilon_{\ell_n-1}, \text{\ \ for } z \in \overline{\Delta_n'}, \end{equation} \begin{equation}\label{approx2} |f(z)-\phi(z)| < \varepsilon_{\ell_n}, \text{\ \ for } z \in \overline{G_n'}, \end{equation} \begin{equation} \label{approx5} f(9n)=9({n+1}), \end{equation} \begin{equation}\label{approx6} f'(9n)= 1, \end{equation} and such that \begin{equation}\label{approx3} |f(z)+4| \leq 1/2, \text{\ \ for } z\in \overline{D(-4,1)} \cup \bigcup_{n\geq 0} L_n. \end{equation} It follows from \eqref{approx1}, {\color{red} \eqref{approx5} and \eqref{approx6}} that for each $k \geq 0$ we can apply Lemma \ref{lem:EL2} in the disc $D_k = D(9k, \alpha_k)$, with $g(z)= f(z)-\phi (z)$, $R=\alpha_k$ and associated constant $\epsilon = \varepsilon_{\ell_k + k+1}/\alpha_{k}$. Note that the conditions of Lemma~\ref{lem:EL2} are satisfied since it follows from~\eqref{eps} that \[ \epsilon = \varepsilon_{\ell_k + k+1}/\alpha_{k} = \frac{\alpha_{k+1}^2}{\alpha_{k}2^{k+2}} < \frac{\alpha_{k+1}}{6}< 1/4, \mbox{ for } k\geq 0. \] So, by Lemma~ \ref{lem:EL2}, for all $z \in D_k$, $k \geq 0$, we have \begin{equation}\label{fDk} |f(z)-\phi(z)| \leq \frac{\varepsilon_{\ell_k + k+1}}{\alpha^2_{k}}|z-9k|^2. \end{equation} We will now show that this implies that, for each $m \geq 0$, \begin{equation} \label{eq:all errors} |f(z)-\phi(z)|< \varepsilon_m,\;\text{ for } z \in \overline{V_m'}. 
\end{equation} First we note that \eqref{eq:all errors} follows from~\eqref{Delta'} and~\eqref{G'} together with~\eqref{approxnew} and~\eqref{approx2} when $m = \ell_n$ or $m = \ell_n-1$, for some $n \geq 0$. {\color{red} Other values of~$m$} are of the form $m = \ell_n + k + 1$, for some $n \geq 0$, $0 \leq k \leq n$, and it follows from~\eqref{V'} that, in this case, \[ V'_{m} \subset D(9k, 6\alpha_{n+1}) \subset D_{k}.\] Therefore, by~\eqref{fDk}, \eqref{eps} and using the fact that $\alpha_{k+1} \leq \alpha_{k}/6$, we have, for $z \in \overline{V'_{\ell_n + k+1}}$, $n \geq 0$ and $0 \leq k \leq n$, \begin{eqnarray*} |f(z)-\phi(z)| & \leq & \frac{\varepsilon_{\ell_{k} + k +1}}{\alpha^2_{k}}(6\alpha_{n+1})^2\\ & = & \frac{\alpha_{k+1}^2}{2^{k+2} \alpha^2_{k}} 36 \alpha_{n+1}^2\\ & \leq &\frac{\alpha_{n+1}^2}{2^{k+2}} = \varepsilon_{\ell_{n} + k+1}. \end{eqnarray*} Thus~\eqref{eq:all errors} holds for all $m \geq 0$. It now follows from (\ref{propa}), (\ref{propb}), (\ref{eq:error}) and (\ref{eq:all errors}) that, for $m \geq 0$, \begin{equation}\label{propB} \gamma_{m+1}\;\text{surrounds}\; f(\gamma_m); \end{equation} \begin{equation}\label{propA} f(\Gamma_m)\;\text{surrounds}\;\Gamma_{m+1}. \end{equation} We now apply Theorem \ref{WDexist} to the Jordan curves $\gamma_m, \Gamma_m$, $m\geq 0$, the compact curves $L_n$, $n \geq 0$, and the bounded domain $D = D(-4, 1)$, noting that these sets satisfy the required hypotheses by construction and by~\eqref{propB}, \eqref{propA}, {\color{red} \eqref{eq:reefs}} and \eqref{approx3}. Part~(i) of Theorem~\ref{thm:main construction} now follows from Theorem \ref{WDexist}, part~(ii) follows from~\eqref{eq:all errors} together with the upper bounds for the errors that we obtained earlier, and part~(iii) follows from (\ref{approx5}) and (\ref{approx6}). Next we outline the proof of part~(iv).
The fact that $f:U_{\ell_{n+1}} \to U_{\ell_{n+1}+1}$ has degree $d_{n+1}$ follows from the final statement of Theorem \ref{WDexist}, since (\ref{degree-4}), (\ref{eq:error}), and (\ref{approx2}) together imply that $f(\gamma_{\ell_{n+1}})$ and $f(\Gamma_{\ell_{n+1}})$ both wind exactly $d_{n+1}$ times round the disc $D(\zeta_{\ell_{n+1}+1}, \rho_{\ell_{n+1}+1}/2)$; for the details of this argument see the proof of \cite[Theorem 5.3]{BEFRS}. Since $\phi$ is univalent in all other cases, the same argument applies to show that $f:U_m \to U_{m+1}$ is univalent for all other values of~$m$. {\color{red} To complete the proof of Theorem~\ref{thm:main construction}, we note that} the double inequality that compares the hyperbolic distances in $U_{\ell_n}$ between points of two orbits under $f$ with the corresponding hyperbolic distances in the discs $G_n$ follows by applying Lemma \ref{lem:hyp estimate} with \[ s=1-\tfrac{3}{4} \operatorname{dist}(\phi(\gamma_{\ell_n-1}), \partial G_n), \quad r=r_{\ell_n}\quad \text{and}\quad R=R_{\ell_n}, \] and noting that $f^{\ell_{n+1} - \ell_n}(\overline{D(\kappa_n,r_{\ell_n})}) \subset D(\kappa_{n+1},r_{\ell_{n+1}})$; we omit the details, which are similar to those given in the proof of the final statement of \cite[Theorem 5.3]{BEFRS}. \section{Preliminary results for Theorem \ref{realizable}} In this section we prove some results which we use in order to construct our examples. In particular, we obtain estimates on the orbits of points in a wandering domain $U$ of a transcendental entire function $f$ obtained by applying Theorem~\ref{thm:main construction} with specific Blaschke products $b_n$. Our first result is used repeatedly in our constructions and gives estimates on the distances between orbits under the function $f$ and under the model function $\phi$.
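The constructions in this section repeatedly track real orbits of quadratic Blaschke products of the form $b(z)=\left(\frac{z+a}{1+az}\right)^2$. As a numerical sanity check of the scaling inequality \eqref{b-ineq} of Lemma~\ref{lem:b-scaling} below, the following Python sketch (ours, illustrative only; the grid sizes are arbitrary) tests the inequality on a grid of values of $r$ and $x$ for $a=1/3$:

```python
# Numerical check (illustrative only) of the inequality (b-ineq) for
# b(z) = ((z + 1/3)/(1 + z/3))^2; grid sizes are arbitrary choices of ours.
def b(x):
    return ((x + 1/3) / (1 + x / 3)) ** 2

ok = True
for i in range(1, 200):
    r = i / 200                        # r ranges over (0, 1)
    br, b2r = b(r), b(b(r))
    ok = ok and (0 < r < br < 1)       # r < b(r) < 1
    bound = (b2r - br) / (br - r)      # right-hand side of (b-ineq)
    for j in range(1, 400):
        x = br * j / 400               # x ranges over (0, b(r))
        if abs(x - r) < 1e-12:
            continue                   # avoid the removable point x = r
        ok = ok and abs((b(x) - br) / (x - r)) < bound
print(ok)
```

This only checks plausibility on a finite grid, of course; the lemma itself is proved below via the Schwarzian derivative.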
\begin{lem}\label{orberr} Let $f$ be a transcendental entire function with a wandering domain $U$ arising from applying Theorem~\ref{thm:main construction} with the Blaschke products $b_n$. Then, using the notation of Theorem~\ref{thm:main construction}, (a) if $z,z' \in U_{\ell_n}$, for some $n \geq 0$, we have \[ |f^{n+3}(z) - \phi^{n+3}(z')| \leq \alpha_{n+1} + |b_n(z - \kappa_n) - b_n(z'-\kappa_n)|; \] (b) and hence, if {\color{red} $z, z' \in \overline{D(\zeta_0,r_0)}\subset U_0$, we have} \[ |f^{\ell_{n+1}}(z) - \phi^{\ell_{n+1}}(z')| \leq \alpha_{n+1} + |b_n(f^{\ell_n}(z) - \kappa_n) - b_n(\phi^{\ell_{n}}(z')-\kappa_n)|. \] \end{lem} \begin{proof} To prove part~(a), we begin by considering the case that $z=z'$. We first use induction to show that, if $z \in U_{\ell_n}$, for some $n \geq 0$, then \begin{equation}\label{ersum} |f^m(z) - \phi^m(z)| \leq \sum_{k=0}^{m-1}\varepsilon_{\ell_n +k}, \;\mbox{ for } 1 \leq m \leq n+2. \end{equation} We note that, for $m=1$, this holds by Theorem~\ref{thm:main construction} part~(ii). Now assume that~\eqref{ersum} holds for some $m$, $1 \leq m < n+2$. We have \begin{equation}\label{erm} |f^{m+1}(z) - \phi^{m+1}(z)| \leq |f(f^m(z)) - \phi(f^m(z))| + |\phi(f^m(z)) - \phi^{m+1}(z)|. \end{equation} Since $z \in U_{\ell_n}$, we have $f^{m}(z) \in U_{\ell_n+m} \subset D(\zeta_{\ell_n+m},R_{\ell_n+m})$ and $\phi^m(z) \in D(\zeta_{\ell_n+m},R_{\ell_n+m})$. Also, $\phi$ is a translation on $D(\zeta_{\ell_n+m},R_{\ell_n+m})$ and so, together with {\color{red} Theorem~\ref{thm:main construction}} part~(ii), we can deduce from~\eqref{erm} that \[ |f^{m+1}(z) - \phi^{m+1}(z)| \leq \varepsilon_{\ell_n +m} + |f^m(z) - \phi^m(z)| \leq \sum_{k=0}^{m}\varepsilon_{\ell_n +k}. \] Thus~\eqref{ersum} holds as claimed. 
Next, we note that {\color{red} for $z\in U_{\ell_n}$ we have} $f^{n+2}(z), \phi^{n+2}(z) \in \Delta'_n$, on which $\phi$ is a scaling by a factor of $1/\alpha_{n+1}$ followed by a translation and so, by~\eqref{ersum} and Theorem~\ref{thm:main construction} part~(ii), \begin{eqnarray*} |f^{n+3}(z) - \phi^{n+3}(z)| & \leq & |f(f^{n+2}(z)) - \phi(f^{n+2}(z))| + |\phi(f^{n+2}(z)) - \phi^{n+3}(z)| \\ &\leq & \varepsilon_{\ell_n +n+2} + \frac{1}{\alpha_{n+1}} \sum_{k=0}^{n+1}\varepsilon_{\ell_n +k} \\ &\leq & \frac{\alpha_{n+1}}{2^{n+3}} + {\color{red} \alpha_{n+1}}\sum_{k=0}^{n+1} \frac{1}{2^{k+1}}\\ & < & {\color{red}\alpha_{n+1}}. \end{eqnarray*} This shows that \begin{equation}\label{estz} |f^{n+3}(z) - \phi^{n+3}(z)| \leq \alpha_{n+1}, \end{equation} which is the result of part~(a) in the case that $z=z'$. We now use this fact to prove {\color{red} part~(a) in general}. If $z,z' \in U_{\ell_n}$, for some $n \geq 0$, then it follows from~\eqref{estz} and the definition of $\phi$ that \begin{eqnarray*} |f^{n+3}(z) - \phi^{n+3}(z')| & \leq & |f^{n+3}(z) - \phi^{n+3}(z)| + |\phi^{n+3}(z) - \phi^{n+3}(z')| \\ & \leq & \alpha_{n+1} + |b_n(z - \kappa_n) - b_n(z'-\kappa_n)|. \end{eqnarray*} This completes the proof of part~(a). Now we suppose that {\color{red} $z, z' \in \overline{D(\zeta_0,r_0)}$}. It follows from Theorem~\ref{thm:main construction} part~(i) that {\color{red} $z \in U_0$} and hence $f^{\ell_n}(z) \in U_{\ell_n}$, for $n \geq 0$. It also follows from~\eqref{propa} in the proof of Theorem~\ref{thm:main construction} that {\color{red} $\phi^{\ell_n}(z') \in D(\zeta_{\ell_n},r_{\ell_n})$} and hence, by Theorem~\ref{thm:main construction} part~(i), that $\phi^{\ell_n}(z') \in U_{\ell_n}$, for $n \geq 0$. So part~(b) follows from part~(a) by replacing $z$ and $z'$ by $f^{\ell_n}(z)$ and $\phi^{\ell_{n}}(z')$ respectively.
\end{proof} Our next result gives a {\color{red} delicate property of a Blaschke product that appears in one of our examples.} We use this property in the proof of {\color{red} Lemma~\ref{lem:a2} part~(a).} \begin{lem}\label{lem:b-scaling} Let $b(z) =\left(\frac{z+1/3}{1+z/3}\right)^2$ and suppose that $0<r<1$. Then $0<r<b(r)<1$ and \begin{equation}\label{b-ineq} \left|\frac{b(x)-b(r)}{x-r}\right| < \frac{b^2(r)-b(r)}{b(r)-r}, \quad \text{for } 0<x< b(r). \end{equation} \end{lem} \begin{proof} Our proof is based on a useful relationship between the cross-ratio of four points $a<b<c<d$ on ${\mathbb R}$, defined as \[ (a,b,c,d)=\frac{(b-a)(d-c)}{(d-a)(c-b)}, \] and the Schwarzian derivative of a real function~$f$, defined as \[ Sf = \frac{f'''}{f'}-\frac32\left(\frac{f''}{f'}\right)^2. \] It is well-known that if $f$ is monotonic on an interval $I$ and $Sf<0$ on $I$, then \begin{equation}\label{Sf-neg} (f(a),f(b),f(c),f(d))<(a,b,c,d), \quad \text{whenever } a,b,c,d\in I, \; a<b<c<d. \end{equation} See, for example, de Melo and van Strien \cite[Section~1]{deMelovanStrien} for a good account of the relationship between functions with negative Schwarzian and the cross-ratio, including a proof of the above fact. Other key properties (also mentioned in \cite{deMelovanStrien}) are that M\"obius maps have zero Schwarzian and the composition rule for Schwarzians is \[ S(g\circ f)(x) = Sg(f(x))(f'(x))^2+Sf(x). \] Since the Schwarzian derivative of a M\"{o}bius map is zero on its domain in ${\mathbb R}$, it follows immediately from this composition rule that the function~$b$ has negative Schwarzian on the interval $(-3,\infty)$. It is straightforward to check that $1$ is a fixed point of the function~$b$ and that $r<b(r)<1$, for $r \in (0,1)$. Note also that~$b$ is increasing on $(-1/3, \infty)$ and convex on $(-3,1)$. We first prove \eqref{b-ineq} in the case when $r<x<b(r)$, by considering the four points $r,x,b(r),1$.
Since $b$ is increasing on $(-1/3,\infty)$ and has negative Schwarzian there, we deduce that \[ \frac{(b(x)-b(r))(1-b^2(r))}{(1-b(r))(b^2(r)-b(x))}<\frac{(x-r)(1-b(r))}{(1-r)(b(r)-x)}. \] Since $b$ is convex on $(0,1)$ we deduce that \[ \frac{1-b(r)}{1-r}<\frac{1-b^2(r)}{1-b(r)}. \] We deduce from the previous two inequalities that \[ \frac{b(x)-b(r)}{b^2(r)-b(x)}<\frac{(1-b(r))^2}{(1-r)(1-b^2(r))} \frac{x-r}{b(r)-x}<\frac{x-r}{b(r)-x}, \] and hence (by taking reciprocals and adding 1 to both sides) that \[ \frac{b(x)-b(r)}{b^2(r)-b(r)}<\frac{x-r}{b(r)-r}. \] This proves \eqref{b-ineq} in the case when $r<x<b(r)$. For the case when $0<x<r$, similar reasoning can be used with the points $x,r, b(r), 1$, to deduce that \[ \frac{b(r)-b(x)}{b^2(r)-b(r)} < \frac{r-x}{b(r)-r}. \] This completes the proof of Lemma~\ref{lem:b-scaling}. \end{proof} The following lemma describes {\color{red} dynamical} properties of the transcendental entire functions arising from Theorem \ref{thm:main construction} when using specific Blaschke products of a certain form. Two of our examples will be constructed using these Blaschke products. The proof of this result takes several pages. \begin{lem} \label{lem:a2} Let $b(z)=\left(\frac{z+a}{1+az}\right)^2$, where $a \in[1/3,1)$, and let~$f$ be an entire function arising by applying Theorem~\ref{thm:main construction} with $b_n = b$, for $n \geq 0$. \begin{itemize} \item[(a)] If $a=1/3$ then there exist {\color{red} $x, y \in U_0\cap {\mathbb R}$}, $N\in{\mathbb N}$ and $c>0$, with $f^n(x)\ne f^n(y)$ for $n\ge 0$, such that \[ f^{\ell_{N}}(x)=\kappa_N, \quad\text{and}\quad\kappa_{n}+1-f^{\ell_{n}}(x)\sim \frac{c}{n^{1/2}}\;\text{ as }n\to\infty\] and \[ f^{\ell_{N}}(y) = \kappa_N + 1/9, \quad\text{and}\quad\kappa_{n}+1-f^{\ell_{n}}(y) \sim \frac{c}{(n+1)^{1/2}}\;\text{ as }n\to\infty, \] and moreover \[f^{\ell_n}(y)-f^{\ell_n}(x)= \frac{O(1)}{n^{3/2}} \;\text{ as}\;n \to \infty. 
\] \item[(b)] If $a=1/2$ then there exist $x, y \in U_0\cap {\mathbb R}$ and $N\in{\mathbb N}$, with $f^n(x)\ne f^n(y)$ for $n\ge 0$, such that \[ f^{\ell_{N}}(x)= \kappa_N \quad\text{and}\quad\kappa_{n}+1-f^{\ell_{n}}(x) = c \lambda^n(1+\eta_n),\quad\text{for }n\ge N, \] and \[ f^{\ell_{N}}(y)= \kappa_N + 1/4 \quad\text{and}\quad\kappa_{n}+1-f^{\ell_{n}}(y) = c \lambda^{n+1}(1+\xi_n),\quad\text{for } n \ge N, \] where $c>0$, $\lambda =2/3$ and $\max\{|\eta_n|,|\xi_n|\}\le 1/10$, for $n \ge N$. \end{itemize} \end{lem} \begin{proof} First we observe that in both parts it is sufficient to prove the stated results about the behaviours of $f^{\ell_n-\ell_N}(x_N)$ and $f^{\ell_n-\ell_N}(y_N)$ when $x_N,y_N\in U_{\ell_N}$ for some particular positive integer~$N$. (a) Recall that, by the analysis of the behaviour in~${\mathbb D}$ of the iterates of~$b$ near its parabolic fixed point~1 (see \cite[Lemma 6.2 (c)]{BEFRS}, for example), there are positive constants~$c$ and~$d$ such that \begin{equation}\label{1-rn} 1-b^n(0)\sim\frac{c}{n^{1/2}}\;\text{ as } n\to \infty \end{equation} and \begin{equation}\label{rn+1-rn} b^{n+1}(0)-b^n(0)\sim \frac{d}{n^{3/2}}\;\text{ as } n\to\infty. \end{equation} Therefore, we can choose $N$ so large that \begin{equation}\label{m-large1} \frac{d}{2n^{3/2}} < b^{n+1}(0)-b^{n}(0)<\frac{2d}{n^{3/2}},\quad \text{for }n\ge N, \end{equation} and also such that \begin{equation}\label{m-large2} \frac{4}{d\,6^{n}}\le \frac{1}{10},\quad \text{for } n\ge N. \end{equation} {\color{red} We then take $r_n=b^{n-N}(0)$, for $n\ge N$, and define \begin{equation}\label{xn-def} x_N=\kappa_N\quad\text{and}\quad x_{n+1}=f^{n+3}(x_{n}),\quad\text{for }n\ge N, \end{equation} and \[ x'_n:=\kappa_{n}+r_n\in G_n\cap {\mathbb R}, \quad \text{for } n \geq N. \] It follows from the definition of~$\phi$ that \[ x'_{n+1}=\phi^{n+3}(x'_{n})=\kappa_{n+1}+b(x'_n-\kappa_n),\quad\text{for }n\ge N.
\] } We use Lemma~\ref{lem:b-scaling} to show that the orbit of $x_N=\kappa_N$ under $f$ closely follows that of~$x_N$ under~$\phi$. More precisely, we shall show that \begin{equation}\label{f-phi-est1} |x_{n}-x'_{n}|< \frac{1}{10}(r_{n+1}-r_n),\quad\text{for } n \ge N+1. \end{equation} Note that it follows from \eqref{xn-def} that $x_n \in U_{\ell_n}\cap {\mathbb R}$, since $x'_N=x_N\in U_{\ell_N}\cap {\mathbb R}$ and~$f$ is a real entire function; moreover, by~\eqref{propa} and Theorem~\ref{thm:main construction} part~(i), we have $x'_n=\phi^{\ell_n-\ell_N}(x'_N)\in D(\zeta_{\ell_n},r_{\ell_n})\subset U_{\ell_n}$, so that $x'_n\in U_{\ell_n}\cap{\mathbb R}$, for $n\ge N$. We shall prove \eqref{f-phi-est1} by using induction to show that \begin{equation}\label{induction} |x_n-x'_n|\le (r_{n+1}-r_n)\sum_{k=N+1}^{n}\frac{1}{6^k(r_{k+1}-r_k)},\quad\text{for } n \ge N+1. \end{equation} Before proving \eqref{induction}, we show that it implies \eqref{f-phi-est1}. Using~\eqref{m-large1} and \eqref{m-large2}, it follows that, for $n\ge N+1$, \begin{align*} \frac{|x_n-x'_n|}{r_{n+1}-r_n} &\le\sum_{k=N+1}^{n}\frac{1}{6^k(r_{k+1}-r_k)}\\ &\le \sum_{k=N+1}^{n}\frac{2(k-N)^{3/2}}{d6^k}\\ &= \frac{2}{d\,6^{N+1}}\sum_{j=0}^{n-N-1}\frac{(j+1)^{3/2}}{6^{j}}\\ &\le \frac{4}{d\,6^{N+1}}<\frac{1}{10}, \end{align*} since the sum in the penultimate expression is dominated by the geometric series $1+1/2+1/4+\cdots$. Thus \eqref{f-phi-est1} holds. To start the proof of \eqref{induction}, we have \[|x_{N+1}-x'_{N+1}|=|f^{N+3}(x_N)-\phi^{N+3}(x_N)|\le \frac{1}{6^{N+1}},\] by Theorem~\ref{thm:main construction}\,(iv), since $x_N\in U_{\ell_N}$. Now we assume that \eqref{induction} holds for some~$n\ge N+1$ and deduce that it holds for $n+1$. Note that, whenever \eqref{f-phi-est1} holds (and so whenever \eqref{induction} holds), we have $x_n \in [\kappa_n,\kappa_n+1)$, by the definition of $x'_n$.
By the definition of $\phi$, Lemma~\ref{orberr} part~(a), Lemma~\ref{lem:b-scaling} and Theorem~\ref{thm:main construction}~(iv), we have \begin{align*} |x_{n+1}-x'_{n+1}| &\le \alpha_{n+1}+|b(x_{n}-\kappa_{n})-b(x'_{n}-\kappa_{n})|\\ &\le \frac{1}{6^{n+1}} +|x_{n}-x'_{n}|\left(\frac{b^2(x'_{n}-\kappa_{n})-b(x'_{n}-\kappa_{n})}{b(x'_{n}-\kappa_{n})-(x'_{n}-\kappa_{n})}\right)\\ &= \frac{1}{6^{n+1}}+|x_{n}-x'_{n}|\left(\frac{r_{n+2}-r_{n+1}}{r_{n+1}-r_n}\right)\\ &\le \frac{1}{6^{n+1}}+(r_{n+2}-r_{n+1})\sum_{k=N+1}^{n}\frac{1}{6^k(r_{k+1}-r_k)}\\ &=(r_{n+2}-r_{n+1})\sum_{k=N+1}^{n+1}\frac{1}{6^k(r_{k+1}-r_k)}. \end{align*} This proves \eqref{induction}, so \eqref{f-phi-est1} holds. {\color{red} Next, we define \[ y_N=\kappa_N+b(0)=\kappa_N+1/9\quad\text{and}\quad y_{n+1}=f^{n+3}(y_{n}),\quad\text{for }n\ge N, \] and \[ y'_n := \kappa_n+ r_{n+1}\in G_n\cap{\mathbb R},\quad \text{for } n\ge N, \] with the same value of~$N$ used earlier. Then \begin{equation}\label{ynxn} y'_n-x'_n=r_{n+1}-r_n=b^{n+1-N}(0)-b^{n-N}(0),\;\text{ for } n\ge N, \end{equation} and \[ y'_{n+1}=\phi^{n+3}(y'_{n})=b(y'_n-\kappa_n),\quad\text{for }n\ge N. \] } Reasoning as above we obtain \begin{equation}\label{f-phi-est2} |y_{n}-y'_{n}|\le \frac{1}{10}(r_{n+2}-r_{n+1}),\quad\text{for } n \ge N+1. \end{equation} Combining \eqref{1-rn} and \eqref{rn+1-rn} with \eqref{f-phi-est1} and \eqref{f-phi-est2}, we obtain \[ \kappa_{n}+1-x_n\sim \frac{c}{n^{1/2}}\;\text{ as }n\to\infty \] and \begin{eqnarray} |y_n-x_n| &\leq& |y_n-y'_n|+|y'_n-x'_n|+|x'_n-x_n|\nonumber \\ &\leq& \frac{1}{10}(r_{n+2}-r_{n+1})+(r_{n+1}-r_n)+ \frac{1}{10}(r_{n+1}-r_n) \nonumber \\ &=& \frac{O(1)}{n^{3/2}} \; \text{ as}\;\; n \to \infty, \nonumber \end{eqnarray} which gives the required result by taking $x,y \in U_0$ such that $f^{\ell_N}(x) =x_N = \kappa_N$ and $f^{\ell_N}(y)=y_N = \kappa_N + b(0) = \kappa_N + 1/9$. 
{\color{red} Note that $y_n \ne x_n$ for $n\ge N$, by \eqref{f-phi-est1}, \eqref{ynxn} and \eqref{f-phi-est2}, so we deduce that $f^n(x)\ne f^n(y)$ for $n\ge 0$.} (b) The proof of part~(b) is similar to that of part~(a), and we outline the argument briefly. As in part~(a), we take $r_n=b^{n-N}(0)$, for $n\ge N$, for some sufficiently large $N\in {\mathbb N}$ to be specified later in the proof, and put {\color{red} \begin{equation}\label{xn-def2} x_N=\kappa_N\quad\text{and}\quad x_{n+1}=f^{n+3}(x_{n}),\quad\text{for }n\ge N, \end{equation} and \[ x'_n:=\kappa_{n}+r_n\in G_{n}\cap{\mathbb R}, \quad \text{for } n \ge N, \] so once again \[ x'_{n+1}=\phi^{n+3}(x'_{n})=b(x'_n-\kappa_n),\quad\text{for }n\ge N. \]} Now note that the function~$b$ has fixed point~1 with multiplier $\lambda=2/3$. It follows that \begin{equation}\label{local-b-est} 1-r_{n+1}=\lambda(1-r_n)(1+O(1-r_n))\;\text{ as } n\to\infty, \end{equation} so, for some constant $c>0$, \begin{equation}\label{lambda-est} \kappa_{n}+1-x'_n=1-r_n \sim c\lambda^n\;\text{ as } n\to\infty. \end{equation} Also, since~$b$ is univalent in the disc $\{z:|z-1|<1\}$ (or by a direct calculation), we have \[ |b'(z)|\leq \lambda(1+C|z-1|),\:\text{ for } |z-1|<1/2, \] where $C$ is a positive constant, and so \begin{equation}\label{b-deriv-est} |b(w)-b(z)|\le \lambda (1+C|z-1|)|w-z|,\quad\text{for } |w-z|<1/4, |z-1|<1/4. \end{equation} As in part~(a), we show that the orbit of $x_N$ under~$f$ closely follows that of $x_N$ under $\phi$. To be precise, we claim that for~$N$ sufficiently large we have \begin{equation}\label{f-phi-est1-b} |x_{n}-x'_{n}|\le \frac{c}{10}\lambda^{n},\;\text{ for } n\ge N.
\end{equation} Indeed, for $n\ge N$, we have \begin{align*} |x_{n+1}-x'_{n+1}| &\le \alpha_{n+1}+|b(x_{n}-\kappa_{n})-b(x'_{n}-\kappa_{n})|\\ &\le \frac{1}{6^{n+1}}+\lambda(1+C(1-r_{n}))|x_{n}-x'_{n}|, \end{align*} by Lemma~\ref{orberr} part~(a), Theorem~\ref{thm:main construction}\,(iv), \eqref{lambda-est} and \eqref{b-deriv-est}, provided that~$N$ is sufficiently large. Since $x_N=x'_N=\kappa_N$, it follows easily by induction that, for $n\ge N+1$, we have \[ \delta_n\le \left(\prod_{k=N+1}^{\infty}(1+C(1-r_k))\right)\left(\sum_{k=N+1}^{n}\frac{1}{(6\lambda)^{k}}\right),\;\text{ where }\delta_n=\frac{|x_{n}-x'_{n}|}{\lambda^n}, \] and \eqref{f-phi-est1-b} easily follows by \eqref{lambda-est} and by taking~$N$ sufficiently large. {\color{red} We obtain the first estimate in part~(b) by taking $x \in U_0$ such that $f^{\ell_N}(x) =x_N = \kappa_N$. The second estimate follows by a similar argument but this time we use an orbit under~$f$ whose subsequence passing through $U_{\ell_n}$, $n\ge N$, closely follows the sequence $y'_n := \kappa_n+ r_{n+1}$, $n\ge N$, by taking $y \in U_0$ such that $f^{\ell_N}(y)=y_N = \kappa_N + b(0) = \kappa_N + 1/4$. The proof that $f^n(x)\ne f^n(y)$ for $n\ge 0$ uses \eqref{f-phi-est1-b} and is similar to that in part~(a).} \end{proof} Finally {\color{red} in this section}, we give {\color{red} several} estimates for a Blaschke product used in {\color{red} another} of our examples. \begin{lem}\label{lem:semi} For $n \geq 0$, let $b_n (z)= \tilde{\mu_n}(\mu_n (z)^2),$ where \[ \mu_n(z)= \frac{z+s_n}{1+s_nz} \mbox{ and } \tilde{\mu_n}(z)=\frac{z-s_n^2}{1-s_n^2z}, \] and let \[ \lambda_n = \frac{2s_n}{1+s_n^2}, \] where $s_n \in (0,1)$.
Then, for $n\geq 0$, {\color{red} \begin{equation}\label{semi1} \lambda_n\,x \leq \left(\frac{x+ \lambda_n}{1+ \lambda_nx}\right)x= b_n(x) \leq x, \;\mbox{ for }0<x <1, \end{equation} and \begin{equation}\label{semi2} \lambda_n (y-x) \le b_n(y)-b_n(x)\le \frac{2}{1+\lambda_n}(y-x) , \;\mbox{ for } 0\leq x<y<1. \end{equation} } \end{lem} \begin{proof} For $x \in (0,1)$ and $n \geq 0$, we have \begin{eqnarray*} b_n(x)&=& \frac {\left(\frac{x+s_n}{1+s_nx}\right)^2-s_n^2}{1- s_n^2\left(\frac{x+s_n}{1+s_nx}\right)^2} \\ &=& \frac{(x+s_n)^2-s_n^2(1+s_nx)^2}{(1+s_nx)^2-s_n^2(x+s_n)^2} \\ &=& \frac{x^2+2s_nx-2s_n^3x-s_n^4x^2}{1+2s_nx-2s_n^3x-s_n^4} \\ & = & \left(\frac{(1-s_n^4)x + (1-s_n^2)2s_n}{1-s_n^4 + (1-s_n^2)2s_nx}\right)x\\ & = & \left(\frac{(1+s_n^2)x + 2s_n}{1+s_n^2 + 2s_nx}\right)x\\ &=& \left(\frac{x+ \lambda_n}{1+ \lambda_nx}\right)x. \end{eqnarray*} Since \[a \leq \frac{x+a}{1+ax}\leq 1,\] for $x,a \in [0,1],$ the first estimate, \eqref{semi1}, follows. For \eqref{semi2}, we deduce from the expression for $b_n$ obtained above that, for $0\leq x<y<1$ and $n\geq 0$, \begin{eqnarray} b_n(y)-b_n(x)&=& y \left(\frac{y+\lambda_n}{1+\lambda_ny}\right) - x \left(\frac{x+\lambda_n}{1+\lambda_nx}\right) \nonumber \\ &=& (y-x) \frac{y+x+ \lambda_n(1+xy)}{(1+\lambda_ny)(1+\lambda_n x)}, \nonumber \end{eqnarray} {\color{red} and the conclusion then easily follows from the facts that $0<\lambda_n<1$, for $n \geq 0$, and $0\leq x<y<1$.} \end{proof} \section{Proof of Theorem \ref{realizable}} In this section we construct six examples of bounded oscillating wandering domains, based on the two classifications of simply connected wandering domains given in~\cite{BEFRS}. First, in terms of hyperbolic distances between orbits of points, simply connected wandering domains are classified as follows (see~\cite[Theorem A]{BEFRS}).
\begin{thm}[First classification theorem] \label{thm:Theorem A introduction} Let $U$ be a simply connected wandering domain of a transcendental entire function $f$ and let $U_n$ be the Fatou component containing $f^n(U)$, for $n \in {\mathbb N}$. Define the countable set of pairs \[ E=\{(z,z')\in U\times U : f^k(z)=f^k(z') \text{\ for some $k\in{\mathbb N}$}\}. \] Then, exactly one of the following holds. \begin{itemize} \item[\rm(1)] $\operatorname{dist}_{U_n}(f^n(z), f^n(z'))\underset{n\to\infty}{\longrightarrow} c(z,z')= 0 $ for all $z,z'\in U$, and we say that $U$ is {\em (hyperbolically) contracting}; \item [\rm(2)] $\operatorname{dist}_{U_n}(f^n(z), f^n(z'))\underset{n\to\infty}{\longrightarrow} c(z,z') >0$ and $\operatorname{dist}_{U_n}(f^n(z), f^n(z')) \neq c(z,z')$ for all $(z,z')\in (U \times U) \setminus E$, $n \in {\mathbb N}$, and we say that $U$ is {\em (hyperbolically) semi-contracting}; or \item[\rm(3)] there exists $N>0$ such that for all $n\geq N$, $\operatorname{dist}_{U_n}(f^n(z), f^n(z')) = c(z,z') >0$ for all $(z,z') \in (U \times U) \setminus E$, and we say that $U$ is {\em (hyperbolically) eventually isometric}. \end{itemize} \end{thm} Next, in terms of convergence of orbits to the boundary there are again three types of simply connected wandering domains (see~\cite[Theorem C]{BEFRS}), though only the latter two are realisable for oscillating wandering domains as explained in the Introduction. \begin{thm}[Second classification theorem]\label{ThmC} Let $U$ be a simply connected wandering domain of a transcendental entire function $f$ and let $U_n$ be the Fatou component containing $f^n(U)$, for $n \in {\mathbb N}$. 
Then exactly one of the following holds: \begin{itemize} \item[\rm(a)] $\liminf_{n\to\infty} \operatorname{dist}(f^{n}(z),\partial U_{n})>0$ for all $z\in U$, that is, all orbits stay away from the boundary; \item[\rm(b)] there exists a subsequence $n_k\to \infty$ for which $\operatorname{dist}(f^{n_k}(z),\partial U_{n_k})\to 0$ for all $z\in U$, while for a different subsequence $m_k\to\infty$ we have that \[\liminf_{k \to \infty} \operatorname{dist}(f^{m_k}(z),\partial U_{m_k})>0, \quad\text{for }z\in U;\] \item[\rm(c)] $\operatorname{dist}(f^{n}(z),\partial U_{n})\to 0$ for all $z\in U$, that is, all orbits converge to the boundary. \end{itemize} \end{thm} Each of the examples in this section is constructed by applying Theorem~\ref{thm:main construction} with an appropriate choice of the Blaschke products $b_n$. We make repeated use of the following two results. \begin{lem}\label{lem51} Let $f$ be a transcendental entire function with an orbit of wandering domains $(U_n)$ arising from applying Theorem~\ref{thm:main construction} with the Blaschke products $(b_n)_{n \geq 0}$ and suppose that there exist $s,t \in U_0$, $N \in {\mathbb N}$ with \[ f^{\ell_N}(s), f^{\ell_N}(t) \in \overline{D(\kappa_N, r_{\ell_N})}, \] where the sequences $(\ell_n)$, $(\kappa_n)$ and $(r_n)$ are as defined in Theorem~\ref{thm:main construction}. \begin{itemize} \item[\rm(a)] If $\operatorname{dist}_{G_n}(f^{\ell_n}(s), f^{\ell_n}(t))\underset{n\to\infty}{\longrightarrow} 0$ and $f^{\ell_n}(s) \neq f^{\ell_n}(t)$, for $n \geq 0$, then $U_0$ is contracting; \item[\rm(b)] If $\liminf_{n \to \infty} \operatorname{dist}_{G_n}(f^{\ell_n}(s), f^{\ell_n}(t))>0$ and $f:U_n \to U_{n+1}$ has degree greater than 1 for infinitely many $n \in {\mathbb N}$, then $U_0$ is semi-contracting. 
\end{itemize} \end{lem} \begin{proof} (a) In this case it follows from the last part of Theorem~\ref{thm:main construction} that \[ \operatorname{dist}_{U_n}(f^{\ell_n}(s), f^{\ell_n}(t))\underset{n\to\infty}{\longrightarrow} 0. \] It now follows from Theorem~\ref{thm:Theorem A introduction} that the only possibility is for $U_0$ to be contracting. (b) In this case it follows from the last part of Theorem~\ref{thm:main construction} that \[ \liminf_{n \to \infty} \operatorname{dist}_{U_n}(f^{\ell_n}(s), f^{\ell_n}(t)) > 0 \] and so $U_0$ is not contracting. Since $f:U_n \to U_{n+1}$ has degree greater than 1 for infinitely many $n \in {\mathbb N}$, we know that $U_0$ is not eventually isometric, and so it follows from Theorem~\ref{thm:Theorem A introduction} that $U_0$ is semi-contracting. \end{proof} \begin{lem}\label{lem52} Let $f$ be a transcendental entire function with an orbit of wandering domains $(U_n)$ arising from applying Theorem~\ref{thm:main construction} with the Blaschke products $(b_n)_{n \geq 0}$ and let $s \in U_0 $ with \[ f^{\ell_n}(s) \in G_n, \; \mbox{ for } n \geq 0, \] where the sets $G_n$ and the sequence $(\ell_n)$, $n \geq 0$ are as defined in Theorem~\ref{thm:main construction}. \begin{itemize} \item[\rm(a)] {\color{red} If} $\liminf_{n \to \infty} \operatorname{dist}(f^{\ell_n}(s),\partial G_{n})>0$, then orbits of points in $U_0$ behave as described in Theorem~\ref{ThmC} part~(b). \item[\rm(b)] If $\operatorname{dist}(f^{\ell_n}(s),\partial G_{n}) \to 0$ as $n \to \infty$, then orbits of points in $U_0$ behave as described in Theorem~\ref{ThmC} part~(c). 
\end{itemize} \end{lem} \begin{proof} We begin by noting that it follows from Theorem~\ref{thm:main construction} that \begin{equation}\label{Vm} \phi^m(\Delta_0) = D(\zeta_m, \rho_m) = \begin{cases} \Delta_n = D(a_n,\alpha_n), \; \mbox{ if } m=\ell_n -1, \mbox{ where } n \geq 0,\\ G_n = D(\kappa_n,1), \; \mbox{ if } m=\ell_n, \mbox{ where } n \geq 0,\\ D(9k + 4\alpha_{n+1},\alpha_{n+1}) \subset D_k, \; \mbox{ if } m = \ell_n + k+1, {\color{red} \mbox{ where } 0 \leq k \leq n.} \end{cases} \end{equation} Since we know from Theorem~\ref{thm:main construction} part~(i) that the wandering domains $U_m$ are approximated increasingly well by the sets $\phi^m(\Delta_0)$ as $m \to \infty$, it follows that $\operatorname{diam} U_m \to 0$ as $m \to \infty$ for $m \neq \ell_n$, $n \geq 0$. So, if $s \in U_0$, then \begin{equation}\label{notln} \operatorname{dist}(f^{m}(s),\partial U_m) \to 0\;\text{ as } m \to \infty, \; m \neq \ell_n, \; n \geq 0. \end{equation} (a) In this case it follows from~\eqref{Vm} together with Theorem~\ref{thm:main construction} part~(i) that \[ \liminf_{n \to \infty} \operatorname{dist}(f^{\ell_n}(s),\partial U_{\ell_n})>0. \] Together with~\eqref{notln}, this implies that orbits of points in {\color{red} $U_0$} behave as described in Theorem~\ref{ThmC} part~(b). (b) In this case it follows from~\eqref{Vm} together with Theorem~\ref{thm:main construction} part~(i) that \[ \operatorname{dist}(f^{\ell_n}(s),\partial U_{\ell_n}) \to 0 \;\mbox{ as } n \to \infty. \] Together with~\eqref{notln}, this implies that orbits of points in $U_0$ behave as described in Theorem~\ref{ThmC} part~(c). \end{proof} In some of the examples we make use of the following estimate for the hyperbolic distance in the unit disc.
\begin{obs}\label{obs:hd} For $r,s \in (0,1)$ with $r<s$ we have that \[\operatorname{dist}_{{\mathbb D}}(r,s) = \int _{r}^{s} \frac{2dt}{1-t^2}, \] and so \begin{equation}\label{hypd} \log \frac{1-r}{1-s}= \int _{r}^{s} \frac{dt}{1-t} \leq \operatorname{dist}_{{\mathbb D}}(r,s) \leq \int _{r}^{s} \frac{2dt}{1-t} =2\log \frac{1-r}{1-s}. \end{equation} \end{obs} We now give the examples that together prove Theorem \ref{realizable}. Examples 1, 2 and 3, which follow, correspond to the three cases of Theorem~\ref{thm:Theorem A introduction}. Within each example we give two functions, corresponding to the two realisable cases of Theorem~\ref{ThmC}. \begin{ex}[\bf Two contracting wandering domains] For each of the cases (b) and (c) of Theorem~\ref{ThmC}, there exists a transcendental entire function $f$ having a sequence of bounded, simply connected, oscillating contracting wandering domains $(U_n)$ with the stated behaviour. \end{ex} \begin{proof} {\bf First example} We construct an oscillating contracting wandering domain {\color{red} $U_0$} with the behaviour described in Theorem~\ref{ThmC} part~(b) by applying Theorem~\ref{thm:main construction} with $b_n(z)=z^2$, for $n \geq 0$. We begin by considering the orbits of points in the disc {\color{red} $D(4,1/12) \subset D(4,r_0)$} under iteration by $\phi$. We note that, if $z \in D(\kappa_n, R_{\ell_n})$, for some $n \geq 0$, then \[ |\phi^{n+3}(z) - \kappa_{n+1}| = |b_n(z-\kappa_n)| = |z-\kappa_n|^2. \] So, if $z \in D(4,1/12)$, then, for $n\ge 0$, we have \begin{equation}\label{phin} |\phi^{\ell_{n}}(z) - \kappa_{n}| = |\phi(z) - \kappa_0|^{2^{n}} = |z-4|^{2^{n}}\le (1/12)^{2^n} \to 0, \;\mbox{ as } n\to \infty.
\end{equation} {\color{red} Next we claim that, if $z \in D(4,1/12)$, then \begin{equation}\label{ex1} |f^{\ell_n}(z) - \phi^{\ell_n}(z)| \leq \sum_{i=1}^n \frac{\alpha_i}{3^{n-i}} + \frac{|f(z) - \phi(z)|}{3^n}, \;\mbox{ for } n \geq 0, \end{equation} and \begin{equation}\label{ex0} |f^{\ell_n}(z) - \kappa_n| \leq \frac14, \;\mbox{ for } n \geq 0. \end{equation} } {\color{red} We prove~\eqref{ex0} and~\eqref{ex1} together using induction.} First, we note that they are true when $n=0$, since {\color{red} if $z\in D(4,1/12)$, then $|f^{\ell_0}(z)-\phi^{\ell_0}(z)| = |f(z)-\phi(z)|\le \varepsilon_0 \le 1/24$, by Theorem~\ref{thm:main construction} part~(ii)}, and $|\phi^{\ell_0}(z)-\kappa_0|=|\phi(z)-\kappa_0|\le 1/12$ by \eqref{phin}. {\color{red} Next, we suppose that~\eqref{ex1} and~\eqref{ex0} hold} for $n=m \ge 0$. It follows from these two estimates and Lemma~\ref{orberr} part~(b) together with~\eqref{phin}, that if $z\in D(4,1/12)$, then \begin{eqnarray*} |f^{\ell_{m+1}}(z) - \phi^{\ell_{m+1}}(z)| & {\color{red} \le} & \alpha_{m+1} + |b_m(f^{\ell_m}(z) - \kappa_m) - b_m(\phi^{\ell_m}(z) - \kappa_m)| \\ & = & \alpha_{m+1} + |(f^{\ell_m}(z) - \kappa_m)^2 - (\phi^{\ell_m}(z) - \kappa_m)^2| \\ &\leq & {\color{red} \alpha_{m+1} + |f^{\ell_m}(z) - \phi^{\ell_m}(z)|\, \left(|f^{\ell_m}(z)-\kappa_m| + |\phi^{\ell_m}(z) - \kappa_m|\right)}\\ & \leq & \alpha_{m+1} + \frac13 |f^{\ell_m}(z) - \phi^{\ell_m}(z)|\\ & \leq & \alpha_{m+1} + \frac13\left( \sum_{i=1}^m \frac{\alpha_i}{3^{m-i}} + \frac{|f(z) - \phi(z)|}{3^m}\right)\\ & = & \sum_{i=1}^{m+1} \frac{\alpha_i}{3^{m+1-i}} + \frac{|f(z) - \phi(z)|}{3^{m+1}}, \end{eqnarray*} which gives \eqref{ex1} with $n=m+1$, and also \begin{align*} |f^{\ell_{m+1}}(z)-\kappa_{m+1}|&\le |f^{\ell_{m+1}}(z) - \phi^{\ell_{m+1}}(z)| + |\phi^{\ell_{m+1}}(z)-\kappa_{m+1}|\\ &\le \sum_{i=1}^{m+1} \frac{\alpha_i}{3^{m+1-i}} + \frac{|f(z) - \phi(z)|}{3^{m+1}} + \left(\frac{1}{12}\right)^{2^{m+1}}\\ &\le \sum_{i=1}^{m+1} \frac{1}{3^{m+1-i}6^i} +
\frac{1}{12}\,\frac{1}{3^{m+1}} + \left(\frac{1}{12}\right)^{2^{m+1}}\\ &\le \frac{1}{6} + \frac{1}{12}\,\frac{1}{3^{m+1}} + \left(\frac{1}{12}\right)^{2^{m+1}}< \frac14, \end{align*} which gives \eqref{ex0} with $n=m+1$. Since {\color{red} $\alpha_n \leq 1/6^n$} for $n \geq 0$, it follows from \eqref{ex1} that if $z \in D(4,1/12)$, then \[ |f^{\ell_n}(z) - \phi^{\ell_n}(z)| \to 0 \;\mbox{ as } n \to \infty. \] Together with~\eqref{phin}, this implies that, if $z \in D(4,1/12)$, then \begin{equation}\label{fphi} |f^{\ell_n}(z) - \kappa_n| \to 0 \;\mbox{ as } n \to \infty. \end{equation} We now use~\eqref{fphi} together with Lemma~\ref{lem51} and Lemma~\ref{lem52} to show that the wandering domain {\color{red} ${U_0}$} has the required properties. First we take $s,t \in D(4,1/12)$ such that $f(s),f(t) \in D(\kappa_0, r_1)$ with $f^{\ell_n}(s) \neq f^{\ell_n}(t)$, for $n \geq 0$. Since $G_n = D(\kappa_n,1)$, for $n \geq 0$, it follows from~\eqref{fphi} that \[ \operatorname{dist}_{G_n}(f^{\ell_n}(s), f^{\ell_n}(t)) \to 0 \;\text{ as } n\to \infty, \] and hence, by Lemma~\ref{lem51} {\color{red} part~(a)}, $U_0$ is contracting. Also, it follows from~\eqref{fphi} that \[ \lim_{n \to \infty} \operatorname{dist}(f^{\ell_n}(s),\partial G_{n})=1 \] and hence, by Lemma~\ref{lem52} part~(a), orbits of points in $U_0$ behave as described in Theorem~\ref{ThmC} part~(b). {\bf Second example} We construct an oscillating contracting wandering domain $U_0$ with the behaviour described in Theorem~\ref{ThmC} part~(c) by applying Theorem~\ref{thm:main construction} with $b_n(z)= \left(\frac{z+1/3}{1+z/3}\right)^2$, for $n \geq 0$. Let $x,y \in U_0$ be as in Lemma \ref{lem:a2} part~(a).
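For the reader's convenience, a short computation confirms that this Blaschke product has the parabolic fixed point at~$1$ underlying Lemma~\ref{lem:a2} part~(a). Writing $b(z)=\mu(z)^2$ with $\mu(z)=\frac{z+1/3}{1+z/3}$, we have

```latex
\[
\mu(1)=1, \qquad \mu'(z)=\frac{1-1/9}{(1+z/3)^2}, \qquad
\mu'(1)=\frac{8/9}{16/9}=\frac{1}{2},
\]
\[
b(1)=\mu(1)^2=1 \quad\text{and}\quad b'(1)=2\,\mu(1)\,\mu'(1)=1,
\]
```

so $1$ is a fixed point of $b$ with multiplier~$1$; also $b(0)=(1/3)^2=1/9$, the value appearing in the proof of Lemma~\ref{lem:a2} part~(a).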
Since $G_n = D(\kappa_n,1)$, for $n \geq 0$, it follows from Lemma \ref{lem:a2} part~(a) and {\color{red} the hyperbolic metric estimate}~\eqref{hypd} that \begin{align*} \operatorname{dist}_{G_{n}}(f^{\ell_{n}}(x),f^{\ell_{n}}(y))&\le 2\log\frac{\kappa_{n}+1-f^{\ell_{n}}(x)}{\kappa_{n}+1-f^{\ell_{n}}(y)}\\ &= 2\log \left(1+ \frac{f^{\ell_{n}}(y)-f^{\ell_{n}}(x)}{\kappa_n+1-f^{\ell_{n}}(y)}\right)\\ &\sim 2\log\left(1+ \frac{O(1)/n^{3/2}}{c/(n+1)^{1/2}} \right)\\ &= \frac{O(1)}{n}\;\text{ as }n\to \infty. \end{align*} It now follows from Lemma~\ref{lem51} part~(a) that $U_0$ is contracting. We also know from Lemma~\ref{lem:a2} part~(a) that \[ \operatorname{dist}(f^{\ell_n}(x),\partial G_n) \to 0 \; \mbox{ as } n \to \infty, \] and so it follows from Lemma~\ref{lem52} part~(b) that orbits of points in $U_0$ behave as described in Theorem~\ref{ThmC} part~(c). \end{proof} \begin{ex}[\bf Two semi-contracting wandering domains] For each of the cases (b) and (c) of Theorem~\ref{ThmC}, there exists a transcendental entire function $f$ having a sequence of bounded, simply connected, oscillating semi-contracting wandering domains $(U_n)$ with the stated behaviour. \end{ex} \begin{proof} {\bf First example} We construct an oscillating semi-contracting wandering domain with the behaviour described in Theorem~\ref{ThmC} part~(b) by applying Theorem~\ref{thm:main construction} with $b_n (z)= \tilde{\mu_n}(\mu_n (z)^2)$, for $n \geq 0$, where \[ \mu_n(z)= \frac{z+s_n}{1+s_nz} \; \mbox{ and } \; \tilde{\mu_n}(z)=\frac{z-s_n^2}{1-s_n^2z}. \] {\color{red} We shall use the estimates for $b_n$ obtained in Lemma~\ref{lem:semi}, and once again put} \[ \lambda_n = \frac{2s_n}{1+s_n^2},\quad\text{for } n\ge 0. \] We now choose $s_n \in (0,1)$ with $s_n \to 1$ as $n\to \infty$ {\color{red} so quickly} that \begin{equation}\label{lambdan} \prod_{j=0}^{\infty}\lambda_{j} \geq 8/9\; \mbox{ and }\; \prod_{j=0}^{\infty}\frac{2}{1+\lambda_{j}} \leq 4/3.
\end{equation} We first consider the orbit of the point $4$ under iteration by $f$, noting that \begin{equation}\label{philn4} \phi^{\ell_n}(4) = \kappa_n, \;\mbox{ for } n \geq 0. \end{equation} {\color{red} It follows from \eqref{philn4},} Lemma~\ref{orberr} part~(b) and \eqref{semi1} in Lemma~\ref{lem:semi} that, for $n \geq 0$, \begin{eqnarray*} {\color{red} |f^{\ell_{n+1}}(4) - \kappa_{n+1}|} & = &|f^{\ell_{n+1}}(4) - \phi^{\ell_{n+1}}(4)|\\ &\leq & \alpha_{n+1} + |b_n(f^{\ell_n}(4) - \kappa_n) - b_n(\phi^{\ell_n}(4) - \kappa_n)|\\ & = & \alpha_{n+1} + |b_n(f^{\ell_n}(4) - \kappa_n)|\\ & \leq & \alpha_{n+1} + |f^{\ell_n}(4) - \kappa_n|. \end{eqnarray*} Together with Theorem~\ref{thm:main construction} part~(ii) and \eqref{philn4}, this implies that, for $n \geq 0$, \begin{equation}\label{f4} |f^{\ell_n}(4) - \kappa_n| \leq |f(4) - \kappa_0| + \sum_{i=1}^{n-1} \alpha_i = |f(4) - \phi(4)| + \sum_{i=1}^{n-1} \alpha_i \leq {\color{red} \frac{1}{24}} + \sum_{i=1}^{n-1} \frac{1}{6^i} < \frac{1}{4}. \end{equation} Now we consider the orbit of {\color{red} $19/4=4\tfrac34$} under $f$. {\color{red} Once again, we} begin by considering the orbit under $\phi$. We claim that \begin{equation}\label{92} \phi^{\ell_n}(19/4) - \kappa_n \geq \frac{3}{4} \prod_{i=0}^{n-1} \lambda_i. \end{equation} We prove~\eqref{92} by induction, first noting that it holds for $n=0$, since $\phi^{\ell_0}(19/4)=\phi(19/4)=\kappa_0 + 3/4$. Next suppose that~\eqref{92} holds for $n=m$. Then, by \eqref{semi1}, \begin{eqnarray*} \phi^{\ell_{m+1}}(19/4) - \kappa_{m+1} & = & b_m(\phi^{\ell_{m}}(19/4) - \kappa_{m}) \\ & \geq & \lambda_m (\phi^{\ell_{m}}(19/4) - \kappa_{m})\\ & \geq & \frac{3}{4} \lambda_m \prod_{i=0}^{m-1} \lambda_i = \frac{3}{4}\prod_{i=0}^{m} \lambda_i. \end{eqnarray*} Thus~\eqref{92} holds for $n=m+1$ and hence, by induction, for all $n \geq 0$.
Together with~\eqref{lambdan}, this implies that \begin{equation}\label{194} \phi^{\ell_n}(19/4) - \kappa_n \geq \frac{2}{3}, \mbox{ for } n \geq 0. \end{equation} Next we claim that, for $n \geq 0$, \begin{equation}\label{f92} |f^{\ell_n}(19/4) - \phi^{\ell_n}(19/4)| \leq \sum_{i=1}^{n} \alpha_{i} \prod_{j=i}^{n-1}\frac{2}{1 + \lambda_j} + \prod_{i=0}^{n-1}\frac{2}{1 + \lambda_i} |f(19/4) - \phi(19/4)|. \end{equation} We prove~\eqref{f92} by induction, noting that it holds for $n=0$. Next suppose that~\eqref{f92} holds for $n=m\ge 0$. Then it follows from Lemma~\ref{orberr} part~(b) and \eqref{semi2} that \begin{eqnarray*} |f^{\ell_{m+1}}(19/4) - \phi^{\ell_{m+1}}(19/4)| & \leq & \alpha_{m+1} + |b_m(f^{\ell_m}(19/4) - \kappa_m) - b_m(\phi^{\ell_m}(19/4) - \kappa_m)|\\ & \leq & \alpha_{m+1} + \frac{2}{1 + \lambda_m}|f^{\ell_{m}}(19/4) - \phi^{\ell_{m}}(19/4)|\\ & \leq & \alpha_{m+1} + \frac{2}{1 + \lambda_m}\left( \sum_{i=1}^{m} \alpha_{i} \prod_{j=i}^{m-1}\frac{2}{1 + \lambda_j} + \prod_{i=0}^{m-1}\frac{2}{1 + \lambda_i} |f(19/4) - \phi(19/4)|\right)\\ & = & \sum_{i=1}^{m+1} \alpha_{i} \prod_{j=i}^{m}\frac{2}{1 + \lambda_j} + \prod_{i=0}^{m}\frac{2}{1 + \lambda_i} |f(19/4) - \phi(19/4)|. \end{eqnarray*} Thus~\eqref{f92} holds for $n=m+1$ and hence, by induction, for all $n \geq 0$. It now follows from Theorem~\ref{thm:main construction} part~(ii) and~\eqref{lambdan} that \begin{equation}\label{f92e} |f^{\ell_n}(19/4) - \phi^{\ell_n}(19/4)| \leq \prod_{j=0}^{\infty}\frac{2}{1 + \lambda_j} \left( {\color{red} \sum_{i=1}^{\infty}\frac{1}{6^i} + \frac{1}{24}} \right) \le \frac{1}{4} \prod_{j=0}^{\infty}\frac{2}{1 + \lambda_j} \leq \frac{1}{3}.
\end{equation} It follows from~\eqref{194}, \eqref{f92e} and \eqref{f4} that, for $n \geq 0$, \begin{eqnarray*} |f^{\ell_n}(19/4) - f^{\ell_n}(4)| & = & |\phi^{\ell_n}(19/4) - \kappa_n + f^{\ell_n}(19/4) - \phi^{\ell_n}(19/4) + \kappa_n - f^{\ell_n}(4)|\\ & \geq & |\phi^{\ell_n}(19/4) - \kappa_n| - |f^{\ell_n}(19/4) - \phi^{\ell_n}(19/4)| - |f^{\ell_n}(4) - \kappa_n|\\ & \geq & \frac{2}{3} - \frac{1}{3} - \frac{1}{4} = \frac{1}{12}. \end{eqnarray*} Since $G_n = D(\kappa_n,1)$, for $n \geq 0$, together with~\eqref{f4} this implies that \[ \liminf_{n \to \infty} \operatorname{dist}_{G_n}(f^{\ell_n}(19/4), f^{\ell_n}(4)) > 0. \] Also, it follows from Theorem~\ref{thm:main construction} part~(iv) that $f:U_{\ell_n} \to U_{\ell_n+1}$ has degree greater than 1, for $n\ge0$. So, by Lemma~\ref{lem51} part~(b), $U_0$ is semi-contracting. Finally, it follows from~\eqref{f4} that \[ \liminf_{n \to \infty} \operatorname{dist}(f^{\ell_n}(4), \partial G_n) > 0 \] and so, by Lemma~\ref{lem52} part~(a), orbits of points in $U_0$ behave as described in Theorem~\ref{ThmC} part~(b). {\bf Second example} We construct an oscillating semi-contracting wandering domain $U_0$ with the behaviour described in Theorem~\ref{ThmC} part~(c) by applying Theorem~\ref{thm:main construction} with $b_n(z)=\left(\frac{z+1/2}{1+z/2}\right)^2$, for $n \geq 0$. Let $x,y \in U_0$ be as in Lemma \ref{lem:a2} part~(b). Since $G_n = D(\kappa_n,1)$, for $n \geq 0$, it follows from Lemma~\ref{lem:a2} part~(b) and {\color{red} the hyperbolic metric estimate}~\eqref{hypd} that {\color{red} for $n$ sufficiently large we have} \begin{align*} \operatorname{dist}_{G_{n}}(f^{\ell_{n}}(x),f^{\ell_{n}}(y))&\geq \log\frac{\kappa_{n}+1-f^{\ell_{n}}(x)}{\kappa_{n}+1-f^{\ell_{n}}(y)}\\ &\geq \log \frac{c\lambda^n(1-1/10)}{c\lambda^{n+1}(1+1/10)}\\ &= \log \frac{9}{11\lambda}=\log \frac{27}{22}{\color{red} >0}, \end{align*} {\color{red} recalling that $\lambda=2/3$}.
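For completeness, the multiplier $\lambda=2/3$ can be checked directly. Writing $b(z)=\mu(z)^2$ with $\mu(z)=\frac{z+1/2}{1+z/2}$, we have

```latex
\[
\mu(1)=1, \qquad \mu'(z)=\frac{1-1/4}{(1+z/2)^2}, \qquad
\mu'(1)=\frac{3/4}{9/4}=\frac{1}{3},
\]
\[
b(1)=1 \quad\text{and}\quad b'(1)=2\,\mu(1)\,\mu'(1)=\frac{2}{3}=\lambda,
\]
```

so $1$ is an attracting fixed point of $b$ with multiplier $2/3$; moreover, $b(0)=(1/2)^2=1/4$, which matches the value appearing in Lemma~\ref{lem:a2} part~(b).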
Also, it follows from Theorem~\ref{thm:main construction} that $f: U_{\ell_n} \to U_{\ell_n+1}$ has degree greater than 1, for $n\ge 0$, so $U_0$ is semi-contracting by Lemma~\ref{lem51} part~(b). Finally, we know from Lemma~\ref{lem:a2} part~(b) that \[ \operatorname{dist}(f^{\ell_n}(x),\partial G_n) \to 0 \;\mbox{ as } n \to \infty \] and so, by Lemma~\ref{lem52} part~(b), orbits of points in $U_0$ behave as described in Theorem~\ref{ThmC} part~(c). \end{proof} \begin{ex}[\bf Two eventually isometric wandering domains] For each of the cases (b) and (c) of Theorem~\ref{ThmC}, there exists a transcendental entire function $f$ having a sequence of bounded, simply connected, oscillating eventually isometric wandering domains $(U_n)$ with the stated behaviour. \end{ex} \begin{proof} {\bf First example} We construct an eventually isometric wandering domain $U_0$ with the behaviour described in Theorem~\ref{ThmC} part~(b) by applying Theorem~\ref{thm:main construction} with $b_n(z) = z$, for $n \geq 0$. Since $b_n$ is univalent, for $n \geq 0$, it follows from Theorem \ref{thm:main construction} part~(iv) that $f:U_m \to U_{m+1}$ is also univalent, for $m \geq 0$. Thus $U_0$ is eventually isometric. We now consider the orbit of $4$ under iteration by $f$. We claim that, for $n \geq 0$, \begin{equation}\label{4orb} |f^{\ell_n}(4)-\phi^{\ell_n}(4)| \leq \sum_{i=1}^{n} \alpha_i + |f(4)-\phi(4)|. \end{equation} We prove~\eqref{4orb} by induction, noting that it is true for $n=0$. Next, suppose that~\eqref{4orb} holds for $n=m\ge 0$. Then it follows from Lemma~\ref{orberr} part~(b) that \begin{eqnarray*} |f^{\ell_{m+1}}(4) - \phi^{\ell_{m+1}}(4)| & \le & \alpha_{m+1} + |b_m(f^{\ell_m}(4) - \kappa_m) - b_m(\phi^{\ell_m}(4) - \kappa_m)| \\ & = & \alpha_{m+1} + |f^{\ell_m}(4) - \phi^{\ell_m}(4)| \\ & \le & \alpha_{m+1} + \sum_{i=1}^{m} \alpha_i + |f(4)-\phi(4)|\\ & = & \sum_{i=1}^{m+1} \alpha_i + |f(4)-\phi(4)|.
\end{eqnarray*} Thus~\eqref{4orb} holds for $n=m+1$ and hence, by induction, for all $n \geq 0$. Since $\alpha_n \leq 1/6^n$ for $n \geq 0$, and $|f(4) - \phi(4)| \leq 1/24$, by Theorem~\ref{thm:main construction} part~(ii), it follows from~\eqref{4orb} that \[ |f^{\ell_{n}}(4) - \phi^{\ell_{n}}(4)| \leq 1/2, \;\mbox{ for } n \geq 0. \] Since $\phi^{\ell_{n}}(4) = \kappa_n$ and $G_n = D(\kappa_n,1)$, for $n \geq 0$, it follows from Lemma~\ref{lem52} part~(a) that orbits of points in $U_0$ behave as described in Theorem~\ref{ThmC} part~(b).\\ {\bf Second example} We construct an eventually isometric wandering domain $U_0$ with the behaviour described in Theorem~\ref{ThmC} part~(c) by applying Theorem~\ref{thm:main construction} with $b_n(z)= b(z) = \frac{z+5/6}{1+5 z/6}$, for $n \geq 0$. Since $b_n$ is univalent, for $n \geq 0$, it follows from Theorem \ref{thm:main construction} part~(iv) that $f:U_m \to U_{m+1}$ is also univalent, for $m \geq 0$. Thus $U_0$ is eventually isometric. We now consider the orbit of 4 under iteration by $\phi$, noting that \[ \phi^{\ell_n}(4) = \kappa_n + b^n(0), \;\mbox{ for } n\geq 0. \] The Blaschke product $b$ has an attracting fixed point at 1 and we have \begin{equation} b^n(0) \to 1 \mbox{ as } n \to \infty \;\mbox{ and }\; b^n(0) \geq 5/6, \mbox{ for } n \in {\mathbb N}, \end{equation} and so \begin{equation}\label{bn0} \operatorname{dist}(\phi^{\ell_n}(4),\partial G_n) \to 0 \mbox{ as } n \to \infty, \;\mbox{ and } \; \phi^{\ell_n}(4) - \kappa_n \geq 5/6,\; \mbox{ for } n \in {\mathbb N}. \end{equation} {\color{red} We also note that} if $0 \le z_1, z_2 <1$, then \begin{equation}\label{z1z2} |b(z_1) - b(z_2)| = \left| \frac{11(z_1-z_2)}{(6+5z_1)(6+5z_2)} \right| \leq \frac{11 |z_1-z_2|}{36}. \end{equation} Next we take a point $x \in D(4,r_0)$ such that $f(x) = \kappa_0$, which is possible by Theorem~\ref{thm:main construction} part~(i), and consider the orbit of~$x$ under iteration by $f$. 
We claim that, for $n \geq 0$, \begin{equation}\label{sorb} |f^{\ell_n}(x)-\phi^{\ell_n}(4)| \leq \sum_{i=1}^{n} \alpha_i \left( \frac{11}{36} \right)^{n-i} \leq \frac{1}{2^n}. \end{equation} We prove~\eqref{sorb} by induction. First, we note that it is true if $n=0$, {\color{red} since $f(x)-\phi(4)=0$}. Next, suppose that~\eqref{sorb} holds for $n=m\ge 0$. Then it follows from~\eqref{bn0} and~\eqref{sorb} that $f^{\ell_{m}}(x) - \kappa_m > 0$ and so it follows from Lemma~\ref{orberr} part~(b) together with~\eqref{z1z2} that, {\color{red} for $m \ge 0$,} \begin{eqnarray*} |f^{\ell_{m+1}}(x) - \phi^{\ell_{m+1}}(4)| & \le & \alpha_{m+1} + |b_m(f^{\ell_m}(x) - \kappa_m) - b_m(\phi^{\ell_m}(4) - \kappa_m)| \\ & \leq & \alpha_{m+1} + \frac{11|f^{\ell_m}(x) - \phi^{\ell_m}(4)| }{36}\\ & \leq & \alpha_{m+1} + \sum_{i=1}^{m} \alpha_i \left( \frac{11}{36} \right)^{m+1-i}\\ & = & \sum_{i=1}^{m+1} \alpha_i \left( \frac{11}{36} \right)^{m+1-i}\\ & \leq & \sum_{i=1}^{m+1} \frac{1}{6^i}\left( \frac{11}{36} \right)^{m+1-i} \leq{\color{red} \frac{m+1}{3^{m+1}} }\leq \frac{1}{2^{m+1}}. \end{eqnarray*} Thus~\eqref{sorb} holds for $n=m+1$ and hence, by induction, for all $n \geq 0$. It follows from~\eqref{sorb} together with~\eqref{bn0} that \[ \operatorname{dist}(f^{\ell_n}(x),\partial G_n) \to 0 \;\mbox{ as } n \to \infty \] and so, by Lemma~\ref{lem52} part~(b), orbits of points in $U_0$ behave as described in Theorem~\ref{ThmC} part~(c). \end{proof} \bibliographystyle{alpha}
\section{Introduction} \label{Sec.1} The problem of quantizing gravity is a long-standing one and many conceptually different approaches have been used to tackle it during the last almost five decades. Among the elements considered on the path towards quantum gravity are higher derivatives and the role they play in the ultraviolet (UV) regime, where classical and quantum singularities show up. Motivation for the introduction of curvature-squared terms in the action arises already at the semiclassical level, from the observation that the renormalization of quantum field theory on a curved background requires such higher-derivative terms~\cite{UtDW} (see also~\cite{book,ReviewSh} for a review). Furthermore, even though general relativity (GR) is not perturbatively renormalizable, its fourth-order counterpart is~\cite{Stelle77}. Increasing the number of derivatives in the action can make the theory even more regular. For example, in the local theories with more than four derivatives it is possible to achieve superrenormalizability~\cite{AsoreyLopezShapiro}. Indeed, the models with six derivatives have divergences only up to three loops, while in those with more than ten derivatives only one-loop divergences remain. Moreover, in such models the $\beta$-functions are exact and gauge-independent. The benefits that higher derivatives bring to renormalization, however, come together with a serious drawback regarding unitarity. Although it is possible to associate the new degrees of freedom of the theory with positive-norm states in the Hilbert space, some of them may carry negative energy~\cite{Stelle77,AsoreyLopezShapiro}. These so-called ghost states introduce instabilities in the theory, with the possibility of a boundless vacuum decay via the emission of an arbitrary amount of energy in the form of gravitons.
In such a scenario it makes sense to study classical and quantum aspects of models which can offer insight into how to deal with, \textit{e.g.}, the tension between renormalizability and unitarity, or the most appropriate form of treating (or avoiding) ghosts and related instabilities%
\footnote{See, for example,~\cite{waveF1,waveF2,waveF3,waveF4,ABS-large} and references therein for a discussion on some of the proposals and the difficulties they face.}. In this regard, two models have been the subject of interesting investigations in recent years. The first one we mention is the Lee-Wick gravity~\cite{ModestoShapiro16,Modesto16}---see, \textit{e.g.},~\cite{ABS-large,Seesaw,Modesto-LWBH,Lens-LWBH,Modesto:2017hzl} for further developments and applications. This theory is defined by the Einstein-Hilbert action enlarged by curvature-squared terms which contain polynomial functions of the d'Alembert operator, such as $R_{\mu\nu} F_1(\Box)R^{\mu\nu}$ and $R F_2(\Box)R$. A general action of this type can be called polynomial higher-derivative gravity and was introduced in~\cite{AsoreyLopezShapiro}; the Lee-Wick gravity assumes, furthermore, that the polynomials $F_{i}$ are such that all the massive poles of the propagator which correspond to ghost modes are complex. Hence, the physical spectrum of the theory contains the usual massless graviton and, possibly, a healthy massive scalar particle (as the lightest scalar excitation is not a ghost~\cite{AsoreyLopezShapiro}). The pairs of complex conjugate massive modes are understood as virtual ones only and should decay to healthy particles. It was claimed that the presence of those complex poles does not violate the unitarity of the $S$-matrix if the Lee-Wick quantization prescription is used~\cite{ModestoShapiro16,Modesto16}. Therefore, this could be a form of restoring unitarity, weakening the tension between renormalizability and unitarity.
Another proposal for dealing with the problem of ghosts is to avoid them, at least at tree level, by replacing the polynomials $F_i$ of the action by nonpolynomial functions of the d'Alembertian, which makes the theory nonlocal~\cite{Tseytlin95,Tomboulis,Modesto12,Maz12} (see also the earlier works~\cite{Krasnikov,Kuzmin}). It is possible to choose these functions in such a manner that the theory propagator contains only the graviton pole%
\footnote{In the works~\cite{Tomboulis,Modesto12,Maz12,Krasnikov,Kuzmin} the nonlocality is introduced by the use of different types of functions, which may have particularities in what concerns the renormalizability properties of the model. For further considerations on quantum and formal classical aspects in nonlocal field theories see, \textit{e.g.},~\cite{ref0,ref1,ref2,ref3,ref4,ref5,ref6,ref7,ref8,ref9,ref10,ref11,ref12,ref13,ref14,ref15,ref16,ref17,ref18,ref19} and references therein.}, at $k^2=0$. Owing to the absence of ghosts, this theory is sometimes called ghost-free gravity. As pointed out in~\cite{CountGhost}, however, quantum corrections may prompt the emergence of an infinite number of complex ghost poles. Therefore, the study of the Lee-Wick gravity may also be useful for the better understanding and development of nonlocal UV extensions of GR. The present work revisits two topics that have previously been investigated in the context of local and nonlocal higher-derivative gravity models, namely, the nonrelativistic limit~\cite{ABS-large,Maz12,Newton-MNS,Newton-BLG,Quandt-Schmidt,Accioly-17,Head-On,EKM16} and the collapse of small mass spherical shells~\cite{Frolov:Exp,Frolov:Poly}. Our focus is on general polynomial gravity, with special attention given to the case of complex poles---Lee-Wick gravity---and also, for the sake of completeness, higher-order (degenerate) poles.
In this sense, the results presented here both generalize and refine previous considerations on the aforementioned topics, as we describe in what follows. The presence of higher derivatives in the gravitational action tends to ameliorate both classical and quantum divergences. The former can be seen, \textit{e.g.}, in the Newtonian potential and in the outcome of gravitational collapse. The latter is related, as mentioned before, to the (super)renormalizability of the theory. It has been known since 1977 that fourth-derivative gravity is renormalizable and has finite nonrelativistic potentials~\cite{Stelle77,Stelle78}. This relation was recently extended to superrenormalizable higher-order gravity theories with real poles, which were shown to have a finite potential too~\cite{Newton-MNS}. On the other hand, the introduction of higher derivatives only in the $R^2$-sector of the theory results in a nonrenormalizable model with a divergent (modified) Newtonian potential~\cite{Quandt-Schmidt}. These examples of simultaneous occurrence of classical and quantum singularities raised the question of whether there is a fundamental relation between them~\cite{CountGhost,Newton-MNS,Accioly13_riddle}. A negative answer to this conjecture was given in~\cite{Newton-BLG}, where it was shown that the Newtonian singularity is canceled in all the polynomial gravity theories with at least one massive mode in each sector, which included Lee-Wick and also some nonrenormalizable models. Nevertheless, the proof of the finiteness of the potential carried out in~\cite{Newton-BLG} was based on calculating only the terms which give divergent contributions to the potential, and on the demonstration of an algebraic relation between the poles of the propagator of the theory.
In Sec.~\ref{Sec.2} of the present work we derive the expression for the weak-field metric potentials to all orders in $r$---including the case of degenerate poles---and obtain an alternative verification of the cancellation of the Newtonian singularity. This simpler demonstration is based on partial fraction decomposition and on the use of the heat kernel method for deriving gravitational potentials introduced in~\cite{Frolov:Poly}. Having the expression of the linearized metric for a pointlike source in a general local higher-derivative gravity, it is possible to go beyond the analysis of the finiteness of the potential and discuss the regularity of the curvature invariants. This is also carried out in Sec.~\ref{Sec.2}, where we show that the metric is regular if and only if the model contains more than four derivatives in both the scalar and tensor sectors. This includes local superrenormalizable models and a wide class of Lee-Wick gravities. Following the aforementioned parallel between quantum and classical singularities~\cite{CountGhost,Newton-MNS,Newton-BLG,Accioly13_riddle}, one can say that GR is nonrenormalizable and has a divergent Newtonian potential, fourth-order gravity is renormalizable and has a finite gravitational potential (but its curvature invariants diverge), and the higher-order gravities which are superrenormalizable have a completely regular nonrelativistic limit, {\it i.e.}, the metric potentials and the curvatures have no singularities. In Sec.~\ref{Sec.3}, the static solution found in the preceding section is used to obtain the metric associated with a nonspinning gyraton, which is an approximation to an ultrarelativistic massive particle without angular momentum~\cite{Aich-Sexl,Frolov:2005in,Frolov:2005zq}. The procedure comprises applying a boost to the nonrelativistic metric and then taking the Penrose limit (see, {\it e.g.},~\cite{Frolov:book}). This turns out to be an intermediate step to the analysis of the collapsing null shells.
The field generated by a nonspinning gyraton was derived in the context of the nonlocal ghost-free gravity in~\cite{Frolov:Exp}, and for the polynomial gravity with simple poles in~\cite{Frolov:Poly}. In the present work we show that this metric has the same small-distance behavior in all nontrivial polynomial gravity theories. To conclude this section, some particular explicit examples are presented for the cases of complex and degenerate poles. The collapse of small mass shells is analyzed in Secs.~\ref{Sec.4} and~\ref{Sec.5}, which discuss, respectively, the case of an infinitesimally thin shell and that of a shell with finite thickness. By small mass we mean that we work only with linearized equations for the gravitational field, in agreement with what was developed in the previous sections. The interest in this scenario is the possibility of formation of mini black holes, \textit{e.g.}, owing to the collision of ultrarelativistic particles~\cite{Head-On}. The formalism we follow was introduced in detail in~\cite{Frolov:Exp}, where it was applied to the ghost-free gravity. It was later generalized in~\cite{Frolov:Poly}, where the case of polynomial models with simple poles was considered. Our extension of the latter work to general polynomial models verifies the conclusion that there exists a mass gap for the formation of mini black holes. The presence of a mass gap is typical of higher-derivative gravity models, as has been known since the 1980s~\cite{Frolov:Weyl}, and means that a black hole can only be formed if its mass is larger than a certain value. This is in contrast to what happens in GR, where any mass can become a black hole, provided it is concentrated in a sufficiently small region. Also, in Secs.~\ref{Sec.4} and~\ref{Sec.5} we discuss the emergence of singularities during the collapse of null shells within general polynomial gravities by analyzing the Kretschmann scalar $R_{\mu\nu\alpha\beta}^2$.
In particular, in Sec.~\ref{Sec.5} we show that the Kretschmann scalar for a collapsing thick null shell is regular for all models with more than four derivatives in the spin-2 sector. This completely characterizes the class of models for which $R_{\mu\nu\alpha\beta}^2$ can have the logarithmic singularities found in~\cite{Frolov:Poly}. Further discussion concerning similarities between local and nonlocal higher-derivative gravity and extensions to the full nonlinear regime are carried out in Sec.~\ref{Sec.6}, where we also draw our conclusions. Our sign conventions are $\,\eta_{\mu\nu} = \,\mbox{diag}\,(-,+,+,+)\,$ for the Minkowski spacetime metric and $\,{R^{\alpha}}_{\beta\mu\nu} \,=\, \partial_\mu \Gamma^{\alpha}_{\beta\nu} - \dots\,$, for the Riemann tensor. The Ricci tensor is defined by $\,R_{\mu\nu} \,=\,{R^{\alpha}}_{\mu\alpha\nu}\,$. Also, we use spatial distance and mass definitions such that $c\,=\,\hbar\,=\,1$. \section{Newtonian limit} \label{Sec.2} In the static weak-field approximation we consider metric fluctuations around Minkowski spacetime \begin{eqnarray} \n{mli} g_{\mu\nu} \,=\, \eta_{\mu\nu} + h_{\mu\nu} \, \end{eqnarray} and work with the equations of motion at the linear level. The only relevant terms in the action which contribute to the linearized field equations are those of second order in the perturbation $h_{\mu\nu}$. Consequently, in the Newtonian limit a general higher-derivative gravity model can be reduced to the action \begin{eqnarray} \n{act} S_{grav} = \frac{1}{4 \kappa} \int d^4 x \sqrt{-g} \, \Big\{ 2 R + R_{\mu\nu} \, F_1 (\Box) \, R^{\mu\nu} +\, R F_2(\Box) \, R \Big\}\,, \end{eqnarray} where $\kappa = 8 \pi G$ and $F_1$ and $F_2$ are functions of the d'Alembert operator.
If $F_1$ and $F_2$ are nonzero polynomial functions, not necessarily of the same degree, we say it is a polynomial higher-derivative model\footnote{Note that the case of trivial polynomials,\textit{ i.e.}, $F_i=\text{const.}\neq 0$, reduces to fourth-order theories; while the choice $F_1=F_2=0$ recovers GR.}. Otherwise, we say the theory is nonlocal. Let us note that the term $R_{\mu\nu\alpha\beta} F_3 (\Box) R^{\mu\nu\alpha\beta}$ is irrelevant for our purposes since, by means of the Bianchi identities and integrations by parts, one can prove that (see, \textit{e.g.},~\cite{AsoreyLopezShapiro}) \begin{eqnarray} \n{gb} \int d^4 x \sqrt{-g} \,\Big\{ R_{\mu\nu\alpha\beta}F_3 (\Box) R^{\mu\nu\alpha\beta} -4R_{\mu\nu}F_3 (\Box) R^{\mu\nu} + RF_3 (\Box) R\Big\} \,=\, O (R^3) \,=\,O(h^3) \,. \end{eqnarray} Hence, the effect of such Riemann-squared term can be reproduced, at the linear level, by a redefinition of the functions $F_1$ and $F_2$. Performing the expansion \eq{mli}, the bilinear form of the action \eq{act} reads \cite{Maz12} \begin{eqnarray} \n{bili} S^{(2)}_{grav} &=& \frac{1}{4\kappa} \int d^4 x \,\bigg[ \frac{1}{2} h_{\mu\nu} \, a(\Box) \, \Box h^{\mu\nu} -\, \frac{1}{2} \, h \, c(\Box) \, \Box h + h \, c(\Box) \, \partial_\mu \partial_\nu h^{\mu\nu} \nonumber \\ && - \, h^\rho _\nu \, a(\Box) \, \partial_\rho \partial_\mu h^{\mu \nu} + \frac{1}{2} \, h^{\mu\nu} \, \left[ a(\Box) - c(\Box) \right] \, \frac{1}{ \Box} \, \partial_\mu \partial_\nu \partial_\rho \partial_\omega h^{\rho \omega} \bigg]\,, \end{eqnarray} where we introduced the condensate notations \begin{eqnarray} \label{eq NN1} a(\Box) &=& 1 + \frac12\, F_1(\Box) \, \Box \, , \\ \label{eq NN2} c(\Box) &=& 1 - 2 F_2(\Box) \, \Box - \frac12\,F_1(\Box)\, \Box \,. 
\end{eqnarray} The variational principle then yields the field equations, \begin{eqnarray} \n{lieq} & a (\Box) \, (\Box h_{\mu\nu} - \partial_\rho \partial_\mu h^\rho_\nu - \partial_\rho \partial_\nu h^\rho_\mu) + \, c(\Box) \, (\eta_{\mu\nu} \partial_\rho \partial_\omega h^{\rho\omega} - \eta_{\mu\nu} \Box h + \partial_\mu \partial_\nu h) & \nonumber \\ & + \, \left[a(\Box) - c(\Box) \right] \, \dfrac{1}{\Box} \, \partial_\mu \partial_\nu \partial_\rho \partial_\omega h^{\rho\omega} = \, - 2 \kappa \, T_{\mu\nu},& \end{eqnarray} where $T_{\mu\nu}$ is the energy-momentum tensor sourcing the field. As far as we are interested in a pointlike source, we assume \begin{eqnarray} T_{\mu\nu} \,=\, \rho\, \delta_\mu^0 \, \delta_\nu^0 \,, \label{Tmn-punti} \end{eqnarray} where $\rho \,=\, m \, \delta^3 ({\bf r})\,$ is the mass density. In this case the metric can be written in the isotropic form \begin{eqnarray} \label{m-New} ds^2 \,=\, - (1+ 2 \varphi) dt^2 + (1 - 2 \psi) (dx^2+dy^2+dz^2) \,. \end{eqnarray} Here $\varphi = \varphi(r)\,$ and $\psi = \psi(r)\,$ are the Newtonian potentials and $r \,=\, \sqrt{x^2+y^2+z^2}\,$. The metric potentials can be obtained by solving \begin{eqnarray} \label{met-2} [a(\Delta)-c(\Delta)] \Delta \varphi + 2 c (\Delta) \Delta \psi &=& \kappa \rho \,, \\ \label{eqmet-1} [a (\Delta)- 3 c(\Delta)] [\Delta \varphi - 2 \Delta \psi] &=& \kappa \rho \, , \end{eqnarray} which are, respectively, the $00$-component and the trace of the equations of motion~\eq{lieq}. Moreover, the substitution $\Box \mapsto \Delta$ was implemented, as the metric is static. Instead of solving the system above directly for $\varphi$ and $\psi$, it is more convenient to work with their linear combination in the form of \begin{eqnarray} \label{Chi-Ome-Def} \chi \equiv \varphi + \psi \, \qquad \text{and} \qquad \, \omega \equiv \varphi - 2\psi \, . 
\end{eqnarray} Once the equations are solved for $\chi$ and $\omega$ it is straightforward to obtain the original metric potentials via \begin{eqnarray} \label{pot_ori} \varphi = \frac{1}{3} (2 \chi + \omega) \, , \qquad \psi = \frac{1}{3} (\chi - \omega) \, . \end{eqnarray} The reason for working with $\chi$ and $\omega$ is threefold: first, the field equations for these new potentials have a simple structure in terms of the functions $a$ and $c$. In fact, Eqs.~\eqref{met-2} and~\eqref{eqmet-1} are equivalent to \begin{eqnarray} \label{eqx} && a (\Delta) \Delta \chi \, = \, \kappa \rho \, , \\ && b(\Delta) \Delta \omega \, = \, - \kappa \rho / 2 \, , \label{eqxB} \end{eqnarray} where the function $b(z)$ is defined by% \footnote{The multiplicative factor $1/2$ was introduced in order to have $b(0)=a(0)=1$. With this choice the discussion carried out in the Appendix~A applies directly to both $a$ and $b$, simplifying the considerations of Secs.~\ref{Sec.2.2} and~\ref{Sec.2.3}.} \begin{eqnarray} \label{b-poly} b(\Delta) \equiv \frac{1}{2} \left[ 3 c(\Delta)- a (\Delta) \right] . \end{eqnarray} Second, the functions $a$ and $b$ above correspond precisely to the terms which appear in the propagator associated to the theory~\eqref{act}~\cite{VanNieuwenhuizen:1973fi}, \begin{eqnarray} G(k) \,=\, \frac{1}{k^2a(-k^2)} \, P^{(2)} - \frac{1}{2k^2b(-k^2)} \, P^{(0-s)} \,, \end{eqnarray} where $P^{(2)}$ and $P^{(0-s)}$ are, respectively, the spin-$2$ and spin-$0$ projection operators~(see, \textit{e.g.},~\cite{book}; tensorial indices and the terms which are gauge-dependent were omitted for simplicity). Indeed, the roots of the equations $a(-k^2)=0$ and $b(-k^2)=0$ determine the massive poles of the propagator and, therefore, the (massive) spectrum of the model. In this spirit, Eq.~\eqref{Chi-Ome-Def} splits the metric potentials into the contributions owed to the spin-2 modes (through $\chi$) and to the scalar modes (via $\omega$). 
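As an elementary cross-check of Eqs.~\eqref{eq NN1}, \eqref{eq NN2} and~\eqref{b-poly}, the following sketch (Python; the coefficient-list representation of $F_1$, $F_2$ and the numerical values are choices made here for illustration) evaluates $a$, $c$ and $b = (3c-a)/2$ for constant $F_1$, $F_2$ (fourth-order gravity) and locates the corresponding massive poles:

```python
# Illustrative sketch: F1 and F2 are encoded as coefficient lists in powers of
# the d'Alembertian z (a representation chosen here).  Constant values give
# fourth-order gravity; the sign of F2 is picked so that no sector is tachyonic.

def poly_eval(coeffs, z):
    return sum(ck * z ** k for k, ck in enumerate(coeffs))

def a_of(F1, z):                     # a = 1 + F1(z) z / 2
    return 1.0 + 0.5 * poly_eval(F1, z) * z

def c_of(F1, F2, z):                 # c = 1 - 2 F2(z) z - F1(z) z / 2
    return 1.0 - 2.0 * poly_eval(F2, z) * z - 0.5 * poly_eval(F1, z) * z

def b_of(F1, F2, z):                 # b = (3c - a) / 2
    return 0.5 * (3.0 * c_of(F1, F2, z) - a_of(F1, z))

F1, F2 = [0.5], [-0.5]               # illustrative constants

# Normalization a(0) = b(0) = 1 used throughout the text:
assert a_of(F1, 0.0) == 1.0 and b_of(F1, F2, 0.0) == 1.0
# Spin-2 pole: a(-xi) = 0 at xi = m^2 = 2/F1 = 4 (here z stands for -xi):
assert abs(a_of(F1, -4.0)) < 1e-12
# Scalar pole: b(z) = 1 - (F1 + 3 F2) z vanishes at xi = m'^2 = 1:
assert abs(b_of(F1, F2, -1.0)) < 1e-12
```

With polynomial (rather than constant) coefficient lists the same functions produce the higher-order $a$ and $b$ discussed below.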
Based on this relation between the roots of the equations $a(-k^2)=b(-k^2)=0$ and the poles of the propagator, throughout the present work we shall refer to these quantities as either ``roots'' or ``poles''. The third motivation for working with the special combination in the form $\chi=\varphi+\psi$ is that the potential $\chi$ turns out to be the relevant one for the collapse of the spherical null shell (see discussion in~Sec.~\ref{Sec.3} and in~\cite{Frolov:Poly}). The situation resembles what occurs in the light bending~\cite{Bend-R2}. Qualitatively, this happens because in the ultrarelativistic limit the interaction between particles and the gravitational field is similar to that of photons. \subsection{Heat kernel solution} \label{Sec.2.1} Equations~\eqref{eqx} and~\eqref{eqxB} have the very same structure, the only difference being the operator function. From now on we assume that $F_1$ and $F_2$ are polynomial functions, as our interest in this work is on higher-derivative polynomial gravity. Then, $a$, $b$ and $c$ are also polynomials, but with different coefficients, and the equations for $\chi$ and $\omega$ are essentially the same. Therefore, we explicitly work out the solution for~\eqref{eqx} and, \textit{mutatis mutandis}, write down the solution for~\eqref{eqxB}. The solution for $\chi$ can be easily found by means of the heat kernel approach, based on the Laplace transformation, as carried out in~\cite{Frolov:Poly}. Indeed, introducing the Green's function for~\eq{eqx} via \begin{eqnarray} \hat{H} \cdot \hat{G} \,=\, \hat{1} \,, \end{eqnarray} where \begin{eqnarray} \label{H} \hat{H} \,=\, a(\Delta) \Delta \,, \end{eqnarray} we have the integral solution \begin{eqnarray} \label{cGF} \chi (x) \,=\, 8 \pi G \int d^3 x' G(x,x') \, \rho (x') \,. 
\end{eqnarray} Let us now assume that the inverse $\hat{H}^{-1}(\Delta)$ of the operator~\eq{H} can be written as the Laplace transform of some function $f(s)$, that is, \begin{eqnarray} \label{H-1} H^{-1}(-\xi) \,=\, \int_0^\infty ds \, f(s)\, e^{-s \xi} \,. \end{eqnarray} Then, the $x$-representation of the Green's function $\hat{G}$ reduces to \begin{eqnarray} \n{Gxx} G(x,x') \,=\, \int_0^\infty ds f(s) \, \langle x \,| \, e^{s \Delta} \,|\,x' \rangle \, , \end{eqnarray} where \begin{eqnarray} \langle x \,| \, e^{s \Delta} \,|\,x' \rangle \,=\, K(|x-x'|;s) \,=\, \frac{e^{-|x-x'|^2/4s}}{(4\pi s)^{3/2}} \, \end{eqnarray} is the heat kernel of the Laplacian. By choosing $x=r$ \ and \ $x'=0\,$, formula \eq{cGF} simplifies to \begin{eqnarray} \label{cHK} \chi (r) \,=\, 8 \pi G m \int_0^\infty ds\, f(s)\, K(r;s) \,. \end{eqnarray} Particularizing for the higher-derivative model \eq{act}, according to the fundamental theorem of algebra we can write the polynomial $a(-\xi)$ in the factored form\footnote{The factors $m_i^{-2}$ must be introduced because Eq.~\eqref{eq NN1} requires $a(0) = 1 $. Analogous factors must be introduced for the polynomial $b(-\xi)$, as $b(0)=1$ by definition.} \begin{eqnarray} \label{poly_a} a(-\xi) = \prod_{i=1}^{N} \left(\frac{ m_i^2 + \xi }{m_i^2} \right)^{\alpha_i} \, , \end{eqnarray} where $\xi = - m_i^2$ (with $i\in \lbrace 1,2,...,N\rbrace$) is a root of the equation $\, a(-\xi) = 0\, $ and $\alpha_i$ is its multiplicity. Notice that if $\mathcal{N}$ is the degree of $a(\Delta)$---\textit{i.e.}, if there are $2(\mathcal{N}+1)$ derivatives in the spin-2 sector---then $\sum_{i=1}^N \alpha_i = \mathcal{N}$. With the focus on general polynomial models, we shall not make any initial restriction on the complex or real nature of the quantities $m_i^2$, nor on their multiplicity. 
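For the simplest nontrivial case, $a(-\xi) = 1 + \xi/m^2$ (fourth-order gravity), one finds $f(s) = -1 + e^{-s m^2}$ and the integral~\eqref{cHK} has the closed form $\chi(r) = -\frac{2Gm}{r}\left(1 - e^{-mr}\right)$. The following sketch (Python; illustrative units $G = m = 1$, and the quadrature parameters are choices made here) checks this numerically:

```python
import math

# Numerical check of Eq. (cHK) for a(-xi) = 1 + xi/m^2, for which
# f(s) = -1 + exp(-s m^2) and chi(r) = -(2 G m_src / r)(1 - exp(-m r)).
# Illustrative units G = m = m_src = 1; s_max and n are quadrature choices.

def heat_kernel(r, s):
    return math.exp(-r * r / (4.0 * s)) / (4.0 * math.pi * s) ** 1.5

def chi_numeric(r, m2=1.0, s_max=200.0, n=40000):
    h = s_max / n
    total = 0.0
    for k in range(n + 1):                    # composite Simpson rule
        s = max(k * h, 1e-12)                 # integrand -> 0 as s -> 0+
        w = 1.0 if k in (0, n) else (4.0 if k % 2 else 2.0)
        total += w * (-1.0 + math.exp(-s * m2)) * heat_kernel(r, s)
    integral = total * h / 3.0
    # analytic tail of the slowly decaying -K(r;s) piece beyond s_max:
    integral -= 2.0 / ((4.0 * math.pi) ** 1.5 * math.sqrt(s_max))
    return 8.0 * math.pi * integral           # prefactor 8 pi G m_src

def chi_closed(r, m=1.0):
    return -(2.0 / r) * (1.0 - math.exp(-m * r))

assert abs(chi_numeric(0.7) - chi_closed(0.7)) < 1e-3
```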
The function $f(s)$ for the general higher-derivative gravity can be promptly obtained by substituting~\eqref{poly_a} into~\eqref{H-1} and inverting the Laplace transform using expansion in partial fractions~\cite{Grad}. The result is \begin{eqnarray} \label{f(s)} f(s) = -1 + \sum_{i=1}^N \sum_{j=1}^{\alpha_i} \, A_{i,j} \, s^{j-1} \, e^{- s m_i^2} \, , \end{eqnarray} where the coefficients $A_{i,j}$ are obtained from the comparison with $H^{-1}(-\xi)$ in terms of its partial fraction decomposition, namely, \begin{eqnarray} \label{Aij} A_{i,j} = \frac{-1}{(\alpha_i - j)! (j - 1)!} \, \frac{d^{\alpha_i -j}}{d\xi^{\alpha_i-j}} \frac{\left( \xi + m_i^2\right)^{\alpha_i}}{\xi a(-\xi)} \Bigg|_{\xi = - m_i^2} \, . \end{eqnarray} Also, for compactness of notation, it is useful to define the symbol $A_{i,j}$ for $j > \alpha_i$ by setting $A_{i,j> \alpha_i}\equiv 0$. The potential $\chi$ can thus be evaluated by substituting~\eqref{f(s)} into~\eqref{cHK}, which gives \begin{eqnarray} \label{Pot.Interm} \chi(r) = -\frac{2Gm}{r} + \frac{G m}{\sqrt{\pi}} \sum_{i=1}^N \sum_{j=1}^{\alpha_i} \, A_{i,j} \int_0^\infty ds\, s^{j-\frac{5}{2}} \,e^{- (s m_i^2 + r^2 /4s)} \, , \end{eqnarray} where we assume that $\Re m_i^2 > 0$ for the integrals to converge. Under the change of variables $\, s m_i^2 \mapsto s \,$ each of the above integrals becomes \begin{eqnarray} \label{Integral} I_i = \int_0^\infty ds\, s^{j-\frac{5}{2}} \,e^{- (s m_i^2 + r^2 /4s)} = (m_i^2)^{\frac{3}{2}-j} \int_\Gamma ds\, s^{j-\frac{5}{2}} \,e^{- (s + m_i^2 r^2 /4s)} \, , \end{eqnarray} with the last integral being carried out along the line $\Gamma=\lbrace w \in \mathbb{C}: w=m_i^2 t, \, t \in \mathbb{R}^+\rbrace$. In the case of a real root $m_i^2$ the integration remains along the positive real axis, while for complex roots the integration line undergoes a rotation in the complex plane, but its points still satisfy $\Re w > 0$. 
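For nondegenerate poles the coefficients~\eqref{Aij} reduce to $A_{i,1} = \prod_{j\neq i} m_j^2/(m_j^2 - m_i^2)$, and Eq.~\eqref{f(s)} is equivalent, via the Laplace transform, to the partial-fraction identity $-1/[\xi\, a(-\xi)] = -1/\xi + \sum_i A_{i,1}/(\xi + m_i^2)$. A quick numerical sketch (Python; the pole values are illustrative, with the complex pair mimicking a Lee-Wick spectrum):

```python
import random

# Check of the partial-fraction decomposition behind Eqs. (f(s))-(Aij) for
# simple poles: -1/(xi a(-xi)) = -1/xi + sum_i A_{i,1}/(xi + m_i^2), with
# A_{i,1} = prod_{j != i} m_j^2 / (m_j^2 - m_i^2).  Illustrative pole values.

m2 = [1.0, 2.0 + 1.5j, 2.0 - 1.5j]        # m_i^2: one real + a conjugate pair

def a_nd(xi):                             # a(-xi) = prod_i (m_i^2 + xi)/m_i^2
    out = 1.0
    for m in m2:
        out *= (m + xi) / m
    return out

def A1(i):                                # coefficient A_{i,1}
    out = 1.0
    for j, m in enumerate(m2):
        if j != i:
            out *= m / (m - m2[i])
    return out

random.seed(1)
for _ in range(20):
    xi = complex(random.uniform(0.1, 5.0), random.uniform(-2.0, 2.0))
    lhs = -1.0 / (xi * a_nd(xi))
    rhs = -1.0 / xi + sum(A1(i) / (xi + m2[i]) for i in range(len(m2)))
    assert abs(lhs - rhs) < 1e-10
```

Note that conjugate poles produce conjugate coefficients, which is what guarantees the reality of the potential below.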
However, the integrand $h(s)$ on the \textit{r.h.s.} of~\eqref{Integral} is an analytic function with only a removable singularity at the origin, and it vanishes for $\vert s \vert \rightarrow \infty$. Therefore, the integral of $h(s)$ along the oriented contour $\Gamma_\varrho = [0,\varrho] \cup C_\varrho \cup \lbrace w \in \mathbb{C}: w= m_i^2 (\varrho- t), \, t \in (0,\varrho] \rbrace$, where $C_\varrho$ is the circular arc of radius $\varrho$ connecting the points $w_1=\varrho$ and $w_2=m_i^2 \varrho$, vanishes. Taking the limit $\varrho \rightarrow \infty$, it follows that $\int_0^{\infty} h(s) ds = \int_{\Gamma} h(s) ds$. We conclude that even in the presence of complex roots $m_i^2$ it is possible to perform the integration along the positive real axis. Then, \begin{eqnarray} I_i = (m_i^2)^{\frac{3}{2}-j} \int_0^\infty ds\, s^{j-\frac{5}{2}} \,e^{- (s + m_i^2 r^2 /4s)} = 2 \left( \frac{r}{2 m_i} \right)^{j-\frac{3}{2}} K_{j-\frac{3}{2}}(m_i r ) \, , \end{eqnarray} where we chose the square root of $m_i^2$ with positive real part and recognized in the integral a representation of the modified Bessel function of the second kind $K_\nu$~\cite{Grad}. Hence, the potential $\chi$ is given by \begin{eqnarray} \label{Pot.General} \chi(r) = -\frac{2Gm}{r} + \frac{2G m}{\sqrt{\pi}} \sum_{i=1}^N \sum_{j=1}^{\alpha_i} \, A_{i,j} \, \left( \frac{r}{2 m_i} \right)^{j-\frac{3}{2}} \, K_{j-\frac{3}{2}}(m_i r) \, . \end{eqnarray} In deriving this result it was assumed that $\Re m_i^2 > 0$ and $\Re m_i > 0$. The last assumption is physically justified by the requirement that the potential decays to zero at large distances, and by the absence of tachyons in the model. The former assumption, however, is related to the heat kernel method used to solve~\eqref{eqx} and the premise that the operator $\hat{H}^{-1}$ has the form of~\eqref{H-1}.
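Since $j$ is an integer, only half-integer orders of $K_\nu$ occur, and these are elementary functions: $K_{\pm 1/2}(z) = \sqrt{\pi/2z}\, e^{-z}$ and $K_{3/2}(z) = \sqrt{\pi/2z}\, e^{-z}(1+1/z)$. The integral identity above can therefore be checked by direct quadrature; a sketch (Python; $m$, $r$ and the quadrature parameters are illustrative choices made here):

```python
import math

# Numerical check of the identity used in Eq. (Integral):
#   int_0^inf s^(j-5/2) exp(-(s + m^2 r^2 / 4s)) ds
#     = 2 (m r / 2)^(j-3/2) K_{j-3/2}(m r),
# using the elementary closed forms of K_{+-1/2} and K_{3/2}.

def K_half(nu, z):
    pref = math.sqrt(math.pi / (2.0 * z)) * math.exp(-z)
    if abs(abs(nu) - 0.5) < 1e-12:
        return pref                       # K_{1/2} = K_{-1/2}
    if abs(nu - 1.5) < 1e-12:
        return pref * (1.0 + 1.0 / z)     # K_{3/2}
    raise ValueError(nu)

def integral_numeric(j, m, r, s_max=60.0, n=60000):
    c = (m * r) ** 2 / 4.0
    h = s_max / n
    total = 0.0
    for k in range(1, n + 1):             # integrand -> 0 as s -> 0+
        s = k * h
        w = 1.0 if k == n else (4.0 if k % 2 else 2.0)
        total += w * s ** (j - 2.5) * math.exp(-(s + c / s))
    return total * h / 3.0

m, r = 1.3, 0.8                           # illustrative values
for j in (1, 2, 3):
    closed = 2.0 * (m * r / 2.0) ** (j - 1.5) * K_half(j - 1.5, m * r)
    assert abs(integral_numeric(j, m, r) - closed) < 1e-4
```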
Actually, the solution~\eqref{Pot.General} also holds for the cases in which the polynomial $a(-\xi)$ has roots with $\vert\Im m_i\vert > \Re m_i > 0$, as the Bessel functions provide the analytical continuation of each term in~\eqref{Pot.Interm} viewed as function of an arbitrary $m_i^2$ with $\Re m_i > 0$. We point out that it is possible to obtain the potential~\eqref{Pot.General}, even though with a longer calculation, directly for the general case of $\vert\arg m_i\vert < \pi/2$ by means of the Fourier transform method and using Basset's representation of the modified Bessel functions~\cite{Wat}. The case of GR ($a \equiv 1$) is a trivial example of the previous formulas, as $f_{\text{GR}}(s) = -1$ and $\chi_{\text{GR}}(r)=-2Gmr^{-1}$. Another direct example is if $\, a(-\xi)=0 \,$ has only nondegenerate (ND) roots. Then $\alpha_i = 1$ for all $i$, and $f(s)$ boils down to~\cite{Frolov:Poly} \begin{eqnarray} \label{f_ND} f_\text{ND}(s) = -1 + \sum_{i=1}^{N} e^{- s m_i^2} \prod_{j \neq i} \frac{m_j^2}{m_j^2 - m_i^2} \, , \end{eqnarray} while the potential is given by~\cite{Frolov:Poly} \begin{eqnarray} \label{Pot.NG} \chi_\text{ND}(r) & = & -\frac{2Gm}{r} \bigg[ 1 - \sum_{i=1}^N \, e^{-m_i r} \prod_{j \neq i} \frac{m_j^2}{m_j^2 - m_i^2} \bigg] \, . \end{eqnarray} Since the only assumption in finding the solution for $\chi$ was that it satisfied Eq.~\eqref{eqx}, one can write down the solution for $\omega$ which satisfies~\eqref{eqxB}. Let $\mathcal{N}^\prime$ be the degree of the polynomial $b(-\xi)$ and let $-m_i^{\prime 2}$ (with $i \in \lbrace 1,2,...,N^\prime\rbrace$) be the roots of the equation $b(-\xi)=0$, each of them with multiplicity $\alpha^\prime_i$. Then, the formula for $\omega(r)$ can be obtained by simply making the substitution $(\chi,A_{i,j},a,m,N,m_i,\alpha_i) \mapsto (\omega,A_{i,j}^\prime,b,-\frac{m}{2},N^\prime,m_i^\prime,\alpha_i^\prime)$ in Eqs.~\eqref{Aij} and~\eqref{Pot.General}. 
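For two real spin-2 masses, Eq.~\eqref{Pot.NG} can be evaluated directly; the following sketch (Python; illustrative masses, units $G = m = 1$) shows that $\chi$ tends to the finite limit $-2\sum_i A_{i,1} m_i$ as $r \to 0$:

```python
import math

# Eq. (Pot.NG) with two real poles m_i^2 (illustrative values); G = m_src = 1.

m2_list = [1.0, 3.0]

def chi_ND(r):
    bracket = 1.0
    for i, mi2 in enumerate(m2_list):
        term = math.exp(-math.sqrt(mi2) * r)
        for j, mj2 in enumerate(m2_list):
            if j != i:
                term *= mj2 / (mj2 - mi2)   # A_{i,1} product
        bracket -= term
    return -2.0 * bracket / r

# Finite limit at the origin: chi(0+) = -2 sum_i A_{i,1} m_i,
# here with A_1 = 3/2 and A_2 = -1/2.
limit = -2.0 * (1.5 * 1.0 - 0.5 * math.sqrt(3.0))
assert abs(chi_ND(1e-6) - limit) < 1e-3
assert abs(chi_ND(1e-6) - chi_ND(1e-5)) < 1e-3   # no 1/r blow-up
```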
In view of~\eqref{pot_ori}, the modified Newtonian potential $\varphi$ for a general higher-derivative polynomial gravity is given by \begin{eqnarray} \label{Phi-Gen} \varphi(r) &=& -\frac{Gm}{r} + \frac{4}{3} \frac{G m}{\sqrt{\pi}} \sum_{i=1}^N \sum_{j=1}^{\alpha_i} \, A_{i,j} \, \left( \frac{r}{2 m_i} \right)^{j-\frac{3}{2}} K_{j-\frac{3}{2}}(m_i r) \nonumber \\ && - \, \frac{1}{3} \frac{G m}{\sqrt{\pi}} \sum_{i=1}^{N^\prime} \sum_{j=1}^{\alpha_i^\prime} \, A_{i,j}^\prime \, \left( \frac{r}{2 m_i^\prime} \right)^{j-\frac{3}{2}} K_{j-\frac{3}{2}}(m_i^\prime r) \, , \end{eqnarray} while $\psi$ reads \begin{eqnarray} \label{Psi-Gen} \psi(r) &=& -\frac{Gm}{r} + \frac{2}{3} \frac{G m}{\sqrt{\pi}} \sum_{i=1}^N \sum_{j=1}^{\alpha_i} \, A_{i,j} \, \left( \frac{r}{2 m_i} \right)^{j-\frac{3}{2}} K_{j-\frac{3}{2}}(m_i r) \nonumber \\ && + \, \frac{1}{3} \frac{G m}{\sqrt{\pi}} \sum_{i=1}^{N^\prime} \sum_{j=1}^{\alpha_i^\prime} \, A_{i,j}^\prime \, \left( \frac{r}{2 m_i^\prime} \right)^{j-\frac{3}{2}} K_{j-\frac{3}{2}}(m_i^\prime r) \, . \end{eqnarray} As noted before, the quantities $m_i$ are the masses of the extra degrees of freedom with spin-$2$, while $m_i^\prime$ are related to the scalar ones. Moreover, the potentials are real despite the possibility of complex poles in the propagator. The cancellation of the imaginary part takes place because $K_n(\bar{z}) = \overline{K_n(z)}$ for $n \in \mathbb{R}$, and $A_{\bar{i},j} = \overline{A_{i,j}}$, where the subscript index $\bar{i}$ refers to the complex pole conjugate to $m_i^2$. The general potential~\eqref{Phi-Gen} generalizes previous considerations found in the literature which took into account real massive poles only in the scalar sector~\cite{Quandt-Schmidt}, or simple real poles~\cite{Newton-MNS} and simple complex poles~\cite{Newton-BLG,Frolov:Poly} in scalar and tensor sectors. 
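The reality of the potentials in the Lee-Wick case can be made explicit with the nondegenerate formula~\eqref{Pot.NG}; the following sketch (Python; an illustrative complex-conjugate pair of poles) checks that the imaginary parts cancel and that the potential stays finite at the origin:

```python
import cmath

# Eq. (Pot.NG) for a Lee-Wick pair of complex-conjugate poles m_1^2 = conj(m_2^2)
# (illustrative values, G = m_src = 1): the resulting chi(r) is real.

m2 = [2.0 + 1.0j, 2.0 - 1.0j]

def chi_LW(r):
    total = 1.0 + 0.0j
    for i, mi2 in enumerate(m2):
        term = cmath.exp(-cmath.sqrt(mi2) * r)   # principal root, Re m_i > 0
        for j, mj2 in enumerate(m2):
            if j != i:
                term *= mj2 / (mj2 - mi2)        # A_{i,1}, conjugate in pairs
        total -= term
    return -2.0 * total / r

for r in (0.3, 1.0, 3.0):
    assert abs(chi_LW(r).imag) < 1e-12           # potential is real
assert abs(chi_LW(1e-6).imag) < 1e-6 and abs(chi_LW(1e-6)) < 10.0   # finite
```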
As noticed in~\cite{ABS-large,Newton-BLG,Quandt-Schmidt}, it is possible to obtain the potential for the case of degenerate poles by considering limits of the potential with only simple poles. This procedure may be ambiguous, however, when applied to poles with $\alpha_i > 2$. The formula~\eqref{Phi-Gen} clarifies the situation, as it explicitly allows for arbitrary multiplicity. \subsection{Finiteness of the metric potentials} \label{Sec.2.2} If both $\chi$ and $\omega$ are finite, so are the metric potentials $\varphi$ and $\psi$. As noticed in~\cite{Frolov:Poly}, if the roots of $a(-\xi)=0$ are all simple, then $\chi$ is finite. In what follows we use the general formula~\eqref{Pot.General} to show that $\chi$ is finite for an arbitrary nontrivial polynomial $a$ of the form~\eqref{eq NN1}. Using the similarity between the solution for $\chi$ and $\omega$, it then follows that these properties are valid also for $\omega$ defined by a nontrivial $b$ given by~\eqref{b-poly}. As a conclusion, if $a$ and $b$ have degree of at least one, then the potentials $\varphi$ and $\psi$ are finite at $r=0$. This can be viewed as an explicit verification of the result obtained in~\cite{Newton-BLG}, where only the terms of order $r^{-1}$ were evaluated and the presence of degenerate poles was dealt with by the procedure of taking limits. To this end, let us rewrite~\eqref{Pot.General} separating the terms for which $j > 3/2$: \begin{eqnarray} \label{Pot.Gen-alt} \chi(r)= -\frac{2Gm}{r} + \frac{2G m}{\sqrt{\pi}} \sum_{i=1}^N \Bigg[ A_{i,1} \sqrt{\frac{2 m_i}{r}} K_{-\frac{1}{2}}(m_i r) + \sum_{j=2}^{\alpha_i} A_{i,j} \left( \frac{r}{2 m_i} \right)^{j-\frac{3}{2}} K_{j-\frac{3}{2}}(m_i r) \Bigg] , \end{eqnarray} where the summation over $j \geq 2$ is considered only if $\alpha_i > 1$. For $j \geq 2$ and small $r$ the functions $K_{j-\frac{3}{2}}(m_i r)$ behave like $r^{-j+3/2}$. Hence, all the terms with $j \geq 2$ are finite at $r=0$. 
It remains to check whether the terms with $j=1$ cancel the Newtonian singularity. Since \begin{eqnarray} K_{\pm \frac{1}{2}}(z)= \sqrt{\frac{\pi}{2z}}e^{-z} \, , \end{eqnarray} the terms with $j=1$ have the form \begin{eqnarray} \frac{2G m}{\sqrt{\pi}} \sum_{i=1}^N A_{i,1} \sqrt{\frac{2 m_i}{r}} K_{-\frac{1}{2}}(m_i r) = \frac{2G m}{r} \sum_{i=1}^N A_{i,1} e^{-m_i r} . \nonumber \end{eqnarray} Therefore, the potential~\eqref{Pot.General} can be written as \begin{eqnarray} \label{Pot.r=0} \chi(r) = \frac{2Gm}{r} \left[ -1 + \sum_{i=1}^N A_{i,1} \right] + \chi_0 + \chi_1 r + \, O(r^2) \, , \end{eqnarray} where $\chi_0$ and $\chi_1$ are constants. Using the identity \begin{eqnarray} \sum_i A_{i,1} = 1 \end{eqnarray} (see Eq.~\eqref{SumAi1} of the Appendix~A), it follows that the Newtonian singularity at $r=0$ is canceled by the higher-derivative correction terms, even in the presence of complex and/or degenerate poles. The same reasoning holds for the potential $\omega$, and therefore the metric potentials $\varphi$ and $\psi$ are finite, verifying the result of~\cite{Newton-BLG}. The condition for the cancellation of the singularity of the potential is the presence of at least one massive mode in the spin-$2$ and in the spin-$0$ sectors. For example, if $F_1=0$ but $F_2 \neq 0$ then $\omega$ is finite but $\varphi$ and $\psi$ are not~\cite{Quandt-Schmidt}. \subsection{Regularity of the curvature invariants} \label{Sec.2.3} As is well known, the finiteness of the potential is not enough to guarantee the regularity of the solution, as the curvature can still be singular. For a general metric in the form~\eqref{m-New}, {\it e.g.}, the Kretschmann invariant \begin{eqnarray} R_{\mu\nu\alpha\beta}^2 \,=\, 4 (\varphi ''^2 +2\psi ''^2 ) +\frac{16}{r} \, \psi ' \psi '' +\frac{8}{r^2}( \varphi '^2 +3 \psi '^2 ) \, , \end{eqnarray} clearly diverges if $\varphi^\prime(0)$ and $\psi^\prime(0)$ are not zero.
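The identity $\sum_i A_{i,1} = 1$ used above also covers degenerate roots. For instance, for the double pole $a(-\xi) = (1+\xi/m^2)^2$, partial fractions give $A_{1,1}=1$ and $A_{1,2}=m^2$, so the coefficient of the $1/r$ term in~\eqref{Pot.r=0} vanishes. A quick numerical sketch of the underlying decomposition (Python; the value of $m^2$ is illustrative):

```python
import random

# Degenerate example: a(-xi) = (1 + xi/m^2)^2 has a single double root, for
# which partial fractions give A_{1,1} = 1 and A_{1,2} = m^2, so that
#   -1/(xi a(-xi)) = -1/xi + A_{1,1}/(xi + m^2) + A_{1,2}/(xi + m^2)^2.

m2 = 2.5                                  # illustrative m^2
A11, A12 = 1.0, m2

random.seed(2)
for _ in range(100):
    xi = random.uniform(0.1, 10.0)
    lhs = -1.0 / (xi * (1.0 + xi / m2) ** 2)
    rhs = -1.0 / xi + A11 / (xi + m2) + A12 / (xi + m2) ** 2
    assert abs(lhs - rhs) < 1e-12

# The 1/r coefficient of chi is proportional to (-1 + sum_i A_{i,1}) = 0:
assert -1.0 + A11 == 0.0
```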
In order to find more rigorously the conditions for having regular curvature invariants, let us assume that both metric potentials are finite and write \begin{eqnarray} \varphi(r) &=& \varphi_0 + \varphi_1 r + \varphi_2 r^2 + \varphi_3 r^3 + O(r^4) \,, \\ \psi(r) &=& \psi_0 + \psi_1 r + \psi_2 r^2 + \psi_3 r^3 + O(r^4) \,. \end{eqnarray} In terms of $\varphi_n$ and $\psi_n$ the Kretschmann scalar reads \begin{eqnarray} R_{\mu\nu\alpha\beta}^2 = \frac{8 (\varphi_1^2+3 \psi_1^2)}{r^2} +\frac{32 (\varphi_1 \varphi_2 + 4 \psi_1 \psi_2)}{r} + 48 \left( \varphi_2^2 +4 \psi_2^2 + \varphi_1 \varphi_3 +5 \psi_1 \psi_3 \right) +O(r) . \end{eqnarray} Therefore, the invariant $R_{\mu\nu\alpha\beta}^2$ is regular if, and only if, $\varphi_1 = \psi_1 = 0$~\cite{Frolov:Poly,Buoninfante:2018b}. Actually, this is the same condition for the regularity of the set of curvature invariants\footnote{Note, however, that the invariants $R$ and $C_{\mu\nu\alpha\beta}^2$ can be regular independently of the others as they depend, respectively, only on the scalar and on the tensor sectors.}: \begin{eqnarray} R_{\mu\nu}^2 &=& \frac{2(3 \varphi_1^2-6 \varphi_1 \psi_1 +11 \psi_1^2)}{r^2} + \frac{32 (\varphi_1 \varphi_2 - \varphi_2 \psi_1 -\varphi_1 \psi_2+ 4 \psi_1 \psi_2)}{r} \nonumber \\ && + \, 12 \big[4 \varphi_2^2 +16 \psi_2^2 -8 \varphi_2 \psi_2 +5 \varphi_1 (\varphi_3-\psi_3) + \psi_1( 21 \psi_3 -5 \varphi_3) \big] +{ O}(r) \,, \end{eqnarray} \begin{eqnarray} R &=& -\frac{4 \omega_1}{r} -12 \omega_2 + { O}(r) \,, \\ C_{\mu\nu\alpha\beta}^2 &=& \frac{4 \chi_1^2}{3 r^2} - 8 \chi_1 \chi_3 +{ O}(r) \,, \label{Weyl} \end{eqnarray} where $C_{\mu\nu\alpha\beta}$ is the Weyl tensor and \begin{eqnarray} \chi_n = \varphi_n + \psi_n \,, \quad \omega_n = \varphi_n -2 \psi_n \,, \quad n \in \mathbb{N} \,. \end{eqnarray} In this spirit, one may be tempted to ask whether the condition $\varphi_1 = \psi_1 = 0$ is recurrent in higher-derivative gravity models. 
For example, there is a large class of non-local gravities that satisfy this condition when coupled to a $\delta$-source~\cite{Head-On,Buoninfante:2018b,Buoninfante:2018a} (see also~\cite{BreTib2} for more general non-local theories). Concerning local models, the ones with only fourth derivatives do not satisfy this condition~\cite{Stelle78,Stelle15PRL,Stelle15PRD}; however, it holds for the sixth-order gravity with a pair of complex poles~\cite{Modesto-LWBH}, and in~\cite{Holdom} general considerations were presented supporting the conjecture that for theories with more than four derivatives one has $\varphi_1 = \psi_1 = 0$. Here we give a more direct answer to this question by explicitly showing which polynomial gravity models fulfil the conditions for having a regular metric in the linear regime. To this end, let us extend to order $r$ the calculations of Sec.~\ref{Sec.2.2}. Using the general expression for the potential~\eqref{Pot.Gen-alt} and the series expansion of the modified Bessel functions for $j \geq 2$~\cite{Grad}, \begin{eqnarray} K_{j-\frac{3}{2}}(m_i r) = \sqrt{\pi} \, e^{-m_i r}\sum_{k=0}^{j-2} \frac{(j+k-2)!}{k!(j-k-2)!(2m_i r)^{k+\frac{1}{2}}} \, , \nonumber \end{eqnarray} it is not difficult to verify that the terms which contribute to order $r$ yield \begin{eqnarray} \chi_1 = 2 G m \sum_{i=1}^N \Bigg\lbrace \frac{A_{i,1} m_i^2}{2} - \frac{A_{i,2}}{2} + \sum_{j=3}^\mathcal{N} \frac{A_{i,j}}{(4m_i^2)^{j-2}} \left[ \frac{(2j-5)!}{(j-3)!} - \frac{(2j-4)!}{2(j-2)!} \right] \Bigg\rbrace \, . \end{eqnarray} But the term inside the summation over $j\geq 3$ is \begin{eqnarray} \frac{(2j-5)!\left[ 2(j-2) - (2j-4)\right] }{2(j-2)!} = 0 \, . \end{eqnarray} Thus, \begin{eqnarray} \chi_1 = Gm (S_1 - S_2) \, , \end{eqnarray} where we define \begin{eqnarray} \label{S_def} S_1 = \sum_{i=1}^N A_{i,1} m_i^2 \, , \qquad S_2 = \sum_{i=1}^N A_{i,2} \, .
\end{eqnarray} In the Appendix~A we show that if the polynomial $a(-\xi)$ is of degree $\mathcal{N} > 1$, then $S_1 = S_2$ (see Eq.~\eqref{AppS1S2})---recall that $2(\mathcal{N} + 1)$ is the number of derivatives in the spin-2 sector of the action. It follows that for theories of order higher than four, the non-relativistic potential $\chi$ is not only finite, but it is also regular, \textit{i.e.}, $\chi_1=0$. On the other hand, for the case of $\mathcal{N} = 1$ with a root at $\xi=-m_1^2$ one has the trivial result $A_{1,1} = 1$ and $S_2=0$, which gives $\chi_1 = Gmm_1^2$. This reasoning can be immediately extended to the potential $\omega$, for which $\omega_1 = -\frac{1}{2} Gmm_1^{\prime 2}$ if the polynomial $b(-\xi)$ is of order $\mathcal{N}^\prime = 1$, otherwise $\omega_1 = 0$. We conclude that the condition for the regularity of the curvature invariants\footnote{The calculation of the curvatures in this section was carried out using the GRTensor program (for analogous expressions in other parametrizations see, \textit{e.g.},~\cite{Frolov:Poly,Buoninfante:2018b,Buoninfante:2018a}). It is also possible to verify that under these conditions all individual components of the curvature tensors remain finite~\cite{BreTib3}.} is the presence of at least two massive modes (or one degenerate pole) in each of the spin-2 and the spin-0 sectors---which is equivalent to having $a$ and $b$ of degree higher than one\footnote{We point out that the effect of the regularization of the curvature can be viewed also in the polynomial theories as a regularization of the source in the Poisson equation for the metric potentials~\cite{BreTib2}.}. In other words, all higher-derivative theories defined by nonconstant polynomials $F_2$ and $F_1\neq - 3 F_2$ are regular in the Newtonian limit. In particular, this holds for the superrenormalizable local higher-derivative gravity models, including Lee-Wick models.
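The statement $\chi_1 = Gm\,(S_1 - S_2) = 0$ for $\mathcal{N} > 1$ can also be spot-checked numerically. The sketch below (illustrative only; the masses are arbitrary sample values) uses the simple-pole coefficients $A_{i,1}$ of the nondegenerate case, for which $S_2 = 0$, and compares the slope of $\chi$ near the origin with the fourth-order case.

```python
import math

# Nondegenerate model with three simple poles (arbitrary sample masses)
m = [1.0, 2.0, 3.0]
A = []
for i in range(len(m)):
    p = 1.0
    for j in range(len(m)):
        if j != i:
            p *= m[j] ** 2 / (m[j] ** 2 - m[i] ** 2)
    A.append(p)

S1 = sum(Ai * mi ** 2 for Ai, mi in zip(A, m))  # S_1 = sum_i A_{i,1} m_i^2
S2 = 0.0                                        # S_2 = 0 when all poles are simple
assert abs(S1 - S2) < 1e-12                     # S_1 = S_2  =>  chi_1 = Gm(S_1 - S_2) = 0

def chi_over_2Gm(r, A, m):
    return (-1.0 + sum(Ai * math.exp(-mi * r) for Ai, mi in zip(A, m))) / r

# Slope of chi/(2Gm) near r = 0 vanishes for this N > 1 model ...
slope = (chi_over_2Gm(2e-4, A, m) - chi_over_2Gm(1e-4, A, m)) / 1e-4
assert abs(slope) < 1e-3

# ... but not in fourth-order gravity (single pole, m_1 = 1): there chi_1/(2Gm) = m_1^2/2
slope4 = (chi_over_2Gm(2e-4, [1.0], [1.0]) - chi_over_2Gm(1e-4, [1.0], [1.0])) / 1e-4
assert abs(slope4 - 0.5) < 1e-3
```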
In this context, the only possibilities for having a singular solution for a point source in the Newtonian limit are to have $F_1(\Box) = \text{\textit{const.}}$ or $F_1(\Box)=-3F_2(\Box)$. In the first case the spin-2 sector contains the massless pole corresponding to the graviton and, possibly, one massive (ghost) particle. In terms of the definition of $\pi$-regularity~\cite{Frolov:Poly}, which means that $\pi_1 = 0$ for a metric potential $\pi(r)$, we can say that such a solution is not $\chi$-regular, but it could be $\omega$-regular provided that $F_2(\Box) \sim \Box^p$ with $p \geq 1$. For the second case, \textit{i.e.}, if $F_1(\Box)=-3F_2(\Box)$, the solution is not $\omega$-regular. Of course, for the solution to be regular it must be both $\chi$- and $\omega$-regular. In particular, Stelle's fourth-order gravity is not regular in the Newtonian limit when coupled to a $\delta$-source~\cite{Stelle78,Holdom,Stelle15PRL,Stelle15PRD}, even though it can be $\varphi$-regular for particular choices of parameters, namely, if $m_1^\prime=2m_1$. \subsubsection{Small-$r$ conformally flat solutions} \label{2.3.1} In view of Eq.~\eqref{Weyl}, it follows that a $\chi$-regular solution yields $C_{\mu\nu\alpha\beta}^2 = 0$ at $r=0$. The components of the Weyl tensor read \begin{eqnarray} C_{trtr} &=& \frac{1}{3} \left(\chi '' - \frac{\chi '}{r} \right) \,, \\ C_{t\theta t\theta} &=& C_{r\theta r \theta} \,= \frac{C_{t\phi t\phi}}{\sin^2 \theta} =\, \frac{C_{r\phi r \phi}}{\sin^2 \theta} \,= - \frac12 \, r^2 C_{trtr} \,, \\ C_{\theta\phi\theta\phi} & = & - r^4 \sin^2 \theta \, C_{trtr} \,. \end{eqnarray} As one can see, most of them do not contain terms with powers of $r^{-1}$, which implies that if the potentials are finite at the origin, the same is true for the corresponding components. The exception is the only independent component, $C_{trtr}$. In fact, \begin{eqnarray} C_{trtr} = - \frac{\chi_1}{3r} + { O} (r) \,.
\end{eqnarray} Thus, we conclude that this component is finite at $r=0$ only for $\chi$-regular theories. In such a case, $\chi_1 = 0$ and the components of the Weyl tensor tend to zero as $r \rightarrow 0$, which means that the metric is approximately conformally flat near the origin. This situation also holds in non-local higher-derivative gravity~\cite{Buoninfante:2018b,Buoninfante:2018a}. \section{Ultrarelativistic limit} \label{Sec.3} Up to this point we restricted considerations to the Newtonian limit. In the following sections the weak-field potential $\chi$ will be used to discuss the emergence of a singularity in the collapse of null shells. As a first step towards the gravitational field of a collapsing shell, we shall obtain the field associated with an ultrarelativistic point particle, which may be done by the following procedure. First, we apply a Lorentz transformation to the metric~\eq{m-New}, which yields the metric of a moving object with velocity $\beta$. Thereafter, we take the limit $\, \beta \to 1\,$ while keeping the relativistic mass of the object fixed (Penrose limit),~\textit{i.e.}, \begin{eqnarray} \label{Pen} \lim_{\gamma \to \infty} ( \gamma m ) \,=\, M \, , \end{eqnarray} where $M$ is the mass of the ultrarelativistic particle and $\, \gamma = (1-\beta^2)^{-1/2} \,$ is the Lorentz factor. The resulting metric corresponds to a nonspinning gyraton~\cite{Frolov:book}. In order to apply this scheme to the solution found in the previous section, let us rewrite the metric~\eq{m-New} in the form \begin{eqnarray} \label{ds2} ds^2 \,=\, ds_0^2 + dh^2 \,, \end{eqnarray} where \begin{eqnarray} ds_0^2 \,=\, - dt^2 + dx^2 + dy^2 + dz^2 \end{eqnarray} is the flat spacetime metric and \begin{eqnarray} dh^2 \,=\, - 2 \,[\varphi dt^2 + \psi (dx^2 + dy^2 + dz^2) ] \end{eqnarray} is the perturbation. Now, consider a boost in the $x$-direction, \begin{eqnarray} \n{tbo} t = \gamma \left(t' - \beta\, x' \right) , \qquad x = \gamma \left(x' - \beta\, t' \right) .
\end{eqnarray} Introducing the null coordinates $v=t'+x'$ \ and \ $u=t'-x'$, Eqs.~\eq{tbo} read \begin{eqnarray} \n{tuv} t &=& \frac{\gamma}{2} \left[\left(1-\beta\right)v + \left(1+\beta\right)u \right] , \\ \n{xuv} x &=& \frac{\gamma}{2} \left[\left(1-\beta\right)v - \left(1+\beta\right)u \right] . \end{eqnarray} Therefore, after applying the boost to the metric~\eqref{ds2} one gets \begin{eqnarray} \n{fuv} ds_0^2 \,=\, - 2 du dv + dy^2 + dz^2 \end{eqnarray} and \begin{eqnarray} d h^2 = - \frac{\gamma^2\left(\varphi+\psi\right)}{2} \left[\left(1-\beta\right)^2 dv^2 + \left(1+\beta\right)^2 du^2 \right] -\left(\varphi-\psi\right) \, du\, dv -2 \psi \,(dy^2 + dz^2) \,. \end{eqnarray} In the limit $\beta \to 1$ the form of the flat metric~\eq{fuv} remains unchanged, while the perturbation goes to \begin{eqnarray} \label{gyr} d h^2 = \Phi \, du^2 \,, \qquad \mbox{where} \qquad \Phi = - 2\, \lim_{\gamma \to \infty} ( \gamma^2 \chi ) \,. \end{eqnarray} This shows, as mentioned before, that the dominant contribution in the ultrarelativistic limit comes from the special combination $\chi = \varphi+\psi$ of the metric potentials. Owing to this fact, in this section and in Secs.~\ref{Sec.4} and~\ref{Sec.5} we restrict considerations to the spin-2 sector of the theory. In this spirit, when we refer to, \textit{e.g.}, ``models with more than four derivatives'' it must be understood that these derivatives are in the spin-2 sector. The function $\Phi$ can be evaluated through~\eqref{gyr} by combining Eqs.~\eqref{cHK} and~\eqref{Pen} and recalling that \begin{eqnarray} \lim_{\gamma \to \infty} \frac{\gamma e^{-\gamma^2 u^2 /4s}}{\sqrt{4\pi s}} \,=\, \delta(u) \, .
\end{eqnarray} Indeed, taking into account that $r^2 = \gamma^2 u^2 + y^2 + z^2$ after the boost, it follows that \begin{eqnarray} \Phi = -4 G \lim_{\gamma \to \infty} ( \gamma m ) \int_0^\infty \frac{ds}{s}\, f(s) \, e^{-(y^2+z^2) /4s} \lim_{\gamma \to \infty} \frac{\gamma e^{-\gamma^2 u^2 /4s}}{\sqrt{4\pi s}} \, , \end{eqnarray} which can be written as \begin{eqnarray} \Phi \,=\, -4 G M \, F(y^2+z^2) \, \delta(u) \,, \end{eqnarray} where we defined the function $F\colon\mathbb{R}\rightarrow\mathbb{R}$ via \begin{eqnarray} \label{Fdiv} F(z) \,=\, \int_0^\infty \frac{ds}{s}\, f(s) \,e^{-z /4s} \,. \end{eqnarray} The integral~\eq{Fdiv} typically has an infrared divergence, owing to the massless nature of the graviton. To overcome this problem one can introduce an infrared cutoff $\Omega$ for large $s$. Any change in the cutoff parameter can be absorbed into a redefinition of the coordinates. In other words, this ambiguity just reflects the freedom in the gauge choice. Quantities with classical physical meaning, such as the curvature tensors, do not depend on $\Omega$ (for a more detailed exposition see, \textit{e.g.}, \cite{Frolov:Exp}). For example, $f(s) = -1$ in the case of GR, so that \begin{eqnarray} F_\Omega^{GR}(z) \,=\, - \int_0^{\Omega^2} \frac{ds}{s}\, e^{-z /4s} \, = - E_1\left( \frac{z}{4\Omega^2} \right) \, . \end{eqnarray} Here $E_1(z)$ is the exponential integral function. As $\Omega$ is an arbitrarily large cutoff, we assume $z \ll \Omega^2$ and write \begin{eqnarray} \label{F_GR} F_\Omega^{GR}(z) \,\approx\, \gamma + \,\mbox{ln}\, \left( \frac{z}{\Omega^2}\right) \, , \end{eqnarray} where $\gamma$ is the Euler-Mascheroni constant, terms of order $z/\Omega^{2}$ and higher were discarded, and the constant factor inside the logarithm was absorbed into the arbitrary cutoff $\Omega$.
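The small-argument behavior used here can be reproduced numerically. The sketch below (an illustrative aside, not part of the derivation) evaluates $E_1$ through the substitution $u = x\,e^t$ in its defining integral, $E_1(x) = \int_x^\infty e^{-u} u^{-1}\, du$, and compares it with $\gamma + \ln x$:

```python
import math

EULER_GAMMA = 0.5772156649015329

def E1(x, tmax=40.0, n=40000):
    # E_1(x) = ∫_x^∞ e^{-u}/u du ; with u = x e^t this becomes ∫_0^∞ exp(-x e^t) dt
    # (composite Simpson rule; n must be even)
    h = tmax / n
    s = 0.0
    for k in range(n + 1):
        t = k * h
        w = 1.0 if k in (0, n) else (4.0 if k % 2 else 2.0)
        s += w * math.exp(-x * math.exp(t))
    return s * h / 3.0

# For small x:  -E_1(x) = γ + ln x + O(x), the logarithmic profile of F_Ω^GR
x = 1e-3
assert abs(-E1(x) - (EULER_GAMMA + math.log(x))) < 2e-3
```

The residual of order $x$ is exactly the "terms of order $z/\Omega^2$" discarded in Eq.~\eqref{F_GR}.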
For the general higher-derivative model~\eqref{act} the function $f(s)$ is given by Eq.~\eqref{f(s)}, which yields \begin{eqnarray} F_\Omega(z) = - E_1\left( \frac{z}{4\Omega^2} \right) + \sum_{i=1}^N \sum_{j=1}^{\alpha_i} A_{i,j} \int_0^\infty ds\, s^{j-2} \,e^{- (s m_i^2 + z /4s)} \, . \end{eqnarray} By applying the same arguments used in Sec.~\ref{Sec.2.1} it is possible to express the function $F_\Omega$ in terms of modified Bessel functions of the second kind, \begin{eqnarray} \label{F_geral} F_\Omega(z) = - E_1\left( \frac{z}{4\Omega^2} \right) + 2 \sum_{i=1}^N \sum_{j=1}^{\alpha_i} A_{i,j} \left( \frac{\sqrt{z}}{2 m_i} \right)^{j-1} K_{j-1}(m_i \sqrt{z} ) . \end{eqnarray} Before we present some explicit calculations for the cases of complex and degenerate poles, let us show a general property of the function in Eq.~\eqref{F_geral}. On the one hand, Eq.~\eqref{F_GR} shows that in GR $F_\Omega^{GR}(z) \sim \,\mbox{ln}\, z$ diverges as $z \rightarrow 0$. On the other hand, in~\cite{Frolov:Poly} it was shown that this divergence does not occur in the case of polynomial gravity with simple poles, because the leading terms of $F_\Omega(z)$ for small $z$ are linear in $z$ or of the type $z \,\mbox{ln}\, z$. Now we prove that this feature is present also in the general polynomial theory. Indeed, for small arguments the modified Bessel functions of the second kind $K_n(z)$ ($n \in \mathbb{N}$) can be expanded as \begin{eqnarray} K_0(z) & = & - \,\mbox{ln}\, z + \frac{1}{4} z^2 (1 - \gamma + \,\mbox{ln}\, 2) - \frac{1}{4} z^2 \,\mbox{ln}\, z + c_0 + O(z^{4}) \, , \\ K_1(z) & = & \frac{1}{z} + \frac{z}{2} \left( \,\mbox{ln}\, z + \gamma - \frac{1}{2} - \,\mbox{ln}\, 2 \right) + O(z^{3}) \, , \\ K_n(z) & = & \frac{(n-1)!}{2} \left( \frac{2}{z} \right)^{n} - \frac{(n-2)!}{2} \left( \frac{2}{z} \right)^{n-2} + c_n + O(z^{-n+4}) \, , \quad \text{for} \quad n \geq 2 \, , \end{eqnarray} where $c_i$ are constants and $\gamma$ is the Euler-Mascheroni constant.
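These expansions can be verified numerically from the integral representation $K_\nu(z) = \int_0^\infty e^{-z\cosh t}\cosh(\nu t)\,dt$; the sketch below is illustrative only (simple Simpson quadrature, sample argument $z = 0.01$), with $c_0 = \ln 2 - \gamma$:

```python
import math

EULER_GAMMA = 0.5772156649015329

def besselK(nu, z, tmax=20.0, n=20000):
    # K_nu(z) = ∫_0^∞ e^{-z cosh t} cosh(nu t) dt  (composite Simpson rule)
    h = tmax / n
    s = 0.0
    for k in range(n + 1):
        t = k * h
        w = 1.0 if k in (0, n) else (4.0 if k % 2 else 2.0)
        s += w * math.exp(-z * math.cosh(t)) * math.cosh(nu * t)
    return s * h / 3.0

z = 0.01
# K_0(z) = -ln z + c_0 + O(z^2 ln z), with c_0 = ln 2 - γ
assert abs(besselK(0, z) + math.log(z) - (math.log(2) - EULER_GAMMA)) < 1e-3
# K_1(z) = 1/z + (z/2)(ln z + γ - 1/2 - ln 2) + O(z^3)
corr = (z / 2) * (math.log(z) + EULER_GAMMA - 0.5 - math.log(2))
assert abs(besselK(1, z) - 1.0 / z - corr) < 1e-4
# K_2(z): the leading term (1/2)(2/z)^2 = 2/z^2 dominates for small z
assert abs(besselK(2, z) * z ** 2 / 2 - 1.0) < 0.01
```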
Substituting these expressions in~\eqref{F_geral} and using~\eqref{SumAi1} it follows that ($c^\prime$ is a new constant) \begin{eqnarray} \label{F_expand1} F(z) = - \frac{z}{4} \left[ ( \,\mbox{ln}\, z + 2 \gamma - 2 \,\mbox{ln}\, 2 - 1)(S_1 - S_2) - S_1 + S \right] + c^\prime + O(z^2) \, . \end{eqnarray} The constants $S_n$ are defined just like in~\eqref{S_def}, while $S$ is given by \begin{eqnarray} \label{S_def2} S = S_1^\prime - S_2^\prime + P_3 \, , \qquad S_n^\prime = \sum_{i=1}^N A_{i,n} (m_i^2)^{2-n} \,\mbox{ln}\, m_i^2 \, , \qquad P_3 = \sum_{i=1}^N \sum_{j=3}^{\mathcal{N}} \frac{(j-3)!}{ (m_i^2)^{j-2}} A_{i,j} \, . \end{eqnarray} Note that in any higher-derivative gravity model the singular term $\,\mbox{ln}\, z$ which appears in GR (see~\eqref{F_GR}) is canceled by a specific combination of the contributions of each massive mode through $K_0(m_i \sqrt{z})$. This is a direct consequence of the cancellation of the Newtonian singularity discussed in Sec.~\ref{Sec.2.2} and in Refs.~\cite{Newton-MNS,Newton-BLG}. Also, while the constant $S_1^\prime$ is nontrivial for all higher-derivative polynomial models, the quantities $S_2$ and $S_2^\prime$ only appear if there is at least one pole with multiplicity equal to or larger than 2, and $P_3$ is relevant only for models with at least one pole for which $\alpha_i \geq 3$---this justifies our choice for the subscript labels. \subsection{Particular cases and examples} To close this section let us consider some examples of the diversity of scenarios which occur in higher-derivative gravity. In particular, we present explicit calculations for the sixth-order gravity, which is the simplest model that admits complex or degenerate real poles. We shall return to these examples in the next section, when analyzing the gravitational field of collapsing null shells. \subsubsection{4th-order gravity} \label{Ex.4th} There is only one possible scenario: the equation $a(-\xi)=0$ has one real simple root at $\xi = - m_1^2$.
Therefore, $S_1 = m_1^2$ and $S= m_1^2 \,\mbox{ln}\, m_1^2$, so that \begin{eqnarray} \label{F_4th} F(z) &=& c^\prime - \frac{z}{4} \left( \,\mbox{ln}\, z + 2 \gamma - 2 \,\mbox{ln}\, 2 - 2 + \,\mbox{ln}\, m_1^2 \right) m_1^2 + O(z^2) \, . \end{eqnarray} As the other examples show, and in consonance with the discussion in Sec.~\ref{Sec.2.3}, this is the only case in which the small-$z$ expansion of $F(z)$ contains the term $z \,\mbox{ln}\, z$. \subsubsection{Models with more than four derivatives} \label{Ex.More} For any model of order higher than four there is the identity $S_1 = S_2$ (see Eq.~\eqref{AppS1S2} of the Appendix~A). Hence, Eq.~\eqref{F_expand1} can be cast in a very simple form: \begin{eqnarray} \label{F_expand2} F(z) = c^\prime - \frac{z}{4} \left( S - S_2 \right) + O(z^2) \, . \end{eqnarray} This result is both a generalization and a simplification of the analogous expression derived in~\cite{Frolov:Poly}, as it accounts for the possibility of degenerate poles and also rules out the terms of the type $z\,\mbox{ln}\, z$. \subsubsection{Nondegenerate models} \label{Ex.Non-Deg} The case of nondegenerate roots was investigated in Ref.~\cite{Frolov:Poly}. Here we show that our general considerations correctly reproduce this particular case. If all the roots of $\,a(-\xi)=0\,$ are simple, then $\,\alpha_i = 1 \, \forall \, i \,$ and the general expression~\eqref{F_geral} for $F(z)$ reduces to~\cite{Frolov:Poly} \begin{eqnarray} F(z) = - E_1\left( \frac{z}{4\Omega^2} \right) + 2 \sum_{i=1}^\mathcal{N} K_{0}(m_i \sqrt{z} ) \prod_{j \neq i} \frac{m_j^2}{m_j^2 - m_i^2} . \end{eqnarray} Now, let us assume that $\mathcal{N} > 1$ (the case of $\mathcal{N} = 1$ was discussed in the Example~\ref{Ex.4th}). Inasmuch as all the roots are nondegenerate, it follows that $S_2=S_2^\prime=P_3=0$, whence $S=S_1^\prime$. Therefore, for small $z$ the function $F(z)$ behaves like \begin{eqnarray} F(z) = c^\prime - \frac{z}{4} S_1^\prime + O(z^2) \, . 
\end{eqnarray} \subsubsection{Maximally degenerate models} We say the higher-derivative model of order $\mathcal{N} > 1$ is maximally degenerate if the equation $a(-\xi) = 0$ has only one root at $\xi=-m_1^2$, with multiplicity $\mathcal{N}$. In such a case, the following relations are valid: \begin{eqnarray} S_1^\prime = S_2^\prime \, , \qquad S_2 = m_1^2 \, , \qquad S = P_3 = \left\{ \begin{array}{l l} 0 \, , & \text{if $\mathcal{N} = 2$,}\\ m_1^2 \sum_{j=3}^{\mathcal{N}} [(j-1)(j-2)]^{-1} \, , & \text{if $\mathcal{N} > 2$,}\\ \end{array} \right . \end{eqnarray} Thus, for small $z$ the function $F(z)$ can be written as \begin{eqnarray} F(z) = c^\prime + \frac{z}{4} ( m_1^2 - P_3 ) + O(z^2) \, . \end{eqnarray} \subsubsection{6th-order gravity with simple poles} For a pair of simple poles $m_1^2$ and $m_2^2$, Eqs.~\eqref{S_def} and~\eqref{S_def2} yield $S_2 = 0$ and $S_1^\prime \neq 0$. If these poles are simple and real the function $f(s)$ is given by \begin{eqnarray} f(s) = - 1 + \frac{m_2^2}{m_2^2 - m_1^2} \, e^{- m_1^2 s} + \, \frac{m_1^2}{m_1^2 - m_2^2} \, e^{- m_2^2 s} \,, \end{eqnarray} which yields, for small $z$, \begin{eqnarray} \label{F_SR} F(z) = c^\prime + \, \frac{m_1^2 m_2^2 \,\mbox{ln}\,\left( \frac{m_1}{m_2} \right)}{2(m_1^2 - m_2^2)} z \, + O(z^2,z^2 \,\mbox{ln}\, z) . \end{eqnarray} In the case of two conjugate complex roots with $m_1=\alpha + i \beta$ and $m_2=\alpha - i \beta$, it follows \begin{eqnarray} f(s) &=& -1 + \Big[ \cos(2\alpha\beta s) + \frac{\alpha^2 - \beta^2}{2\alpha\beta} \sin(2\alpha\beta s)\Big] e^{-s(\alpha^2-\beta^2)} \end{eqnarray} and \begin{eqnarray} \label{F_SC} F(z) = c^\prime + \frac{(\alpha^2+\beta^2)^2 }{4\alpha\beta}\arctan\left( \frac{\beta}{\alpha} \right) z + O(z^2,z^2\,\mbox{ln}\, z) . \end{eqnarray} \subsubsection{6th-order gravity with degenerate poles} \label{Sec.6th-degen.} For degenerate real poles $m_1^2=m_2^2$ we have \begin{eqnarray} f(s) = -1 + e^{-m_1^2 s} \left( 1 + m_1^2 s \right) . 
\end{eqnarray} As this is the particular case of the $\mathcal{N} = 2$ maximally degenerate model, one has $S_1^\prime = S_2^\prime = m_1^2 \,\mbox{ln}\, m_1^2$, which gives $S=0$ and \begin{eqnarray} F(z) & = & \,\mbox{ln}\, \left(\frac{z}{\Omega^2} \right) + 2 K_0 \left( m_1 \sqrt{z} \right) + m_1 \sqrt{z} \, K_1 \left( m_1 \sqrt{z} \right) \nonumber \\ & = & c^\prime \, + \, \frac{z}{4} \,m_1^2 \, + \, O(z^2,z^2\,\mbox{ln}\, z) \, . \label{F_DR} \end{eqnarray} We note that Eq.~\eqref{F_DR} can be obtained from the analogous equations for simple poles by taking the limit $m_2 \rightarrow m_1$ in~\eqref{F_SR}---or the limit $\beta \rightarrow 0$ in~\eqref{F_SC}. While this procedure of taking the limit is simple to carry out in the case of two roots (see, \textit{e.g.},~\cite{ABS-large} for more examples), the situation might not be so clear if one is to consider a higher-order root. In such a case it is preferable to work with the general formula~\eqref{F_geral}, or~\eqref{F_expand2}, as discussed in Sec.~\ref{Sec.2}. \section{Thin null shell collapse} \label{Sec.4} In this section we analyze the collapse of a null shell and the formation of mini black holes. Following Refs.~\cite{Frolov:Exp,Frolov:Poly}, we first consider a shell with vanishing thickness. For this case the Kretschmann curvature invariant is still singular, but this singularity is a consequence of the nonphysical approximation of an infinitesimally thin shell. The field associated with a thin null shell (or $\delta$-shell) can be obtained, at the linearized level, by superposing an infinite number of gyratons, spherically distributed, all passing through a given point $O$~\cite{Frolov:Exp}, which we take as the origin of the coordinate system. This point is the vertex of the null cone representing the shell, so that for $t<0$ the shell is collapsing towards the apex $O$ and for $t>0$ it expands again after the collapse.
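As a numerical aside on the limits quoted above (illustrative only; $m_1 = 1.3$ is an arbitrary sample value), one can confirm that the linear coefficients of Eqs.~\eqref{F_SR} and~\eqref{F_SC} both tend to the degenerate value $m_1^2/4$ of Eq.~\eqref{F_DR}:

```python
import math

def coef_simple_real(m1, m2):
    # linear coefficient of F(z) for simple real poles, Eq. (F_SR)
    return m1 ** 2 * m2 ** 2 * math.log(m1 / m2) / (2 * (m1 ** 2 - m2 ** 2))

def coef_simple_complex(alpha, beta):
    # linear coefficient for complex-conjugate poles m = alpha ± i beta, Eq. (F_SC)
    return (alpha ** 2 + beta ** 2) ** 2 / (4 * alpha * beta) * math.atan(beta / alpha)

m1 = 1.3
degenerate = m1 ** 2 / 4  # coefficient of z in the degenerate case, Eq. (F_DR)

# m2 -> m1 limit of the simple-real-pole formula reproduces the degenerate result
assert abs(coef_simple_real(m1, m1 * (1 + 1e-6)) - degenerate) < 1e-5
# beta -> 0 limit of the complex-pole formula (with alpha -> m1) does as well
assert abs(coef_simple_complex(m1, 1e-6) - degenerate) < 1e-5
```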
It can be shown that, outside the shell, the averaged metric perturbation $\langle dh^2\rangle$ resulting from this distribution of nonspinning gyratons is given by (see~\cite{Frolov:Exp} for a detailed derivation of this result) \begin{eqnarray} \label{dhr} \langle dh^2\rangle = \frac{-2GM F(r^2-t^2)}{ r} \bigg[ \bigg( dt-\frac{t}{ r}dr \bigg)^2 + \frac{r^2-t^2}{ 2}d\Omega^2 \bigg] \,, \qquad r \geqslant |t|\,, \end{eqnarray} where we use spherical coordinates, so that $d\Omega^2 = d\theta^2 + \sin^2 \theta \, d\phi^2$ is the metric of the unit sphere and \begin{eqnarray} \label{Met_Thin} ds^2 = -dt^2 + dr^2 + r^2d\Omega^2 + \langle dh^2\rangle \end{eqnarray} is the complete metric. Here $F(z)$ is defined by~\eqref{Fdiv}, as given by the metric~\eqref{gyr} associated with a single gyraton. \subsection{Apparent horizon} The formation of black holes is closely related to the invariant \begin{eqnarray} g \equiv (\nabla \varrho)^2=\frac{1}{4 f} \, g^{\mu\nu} \, \nabla_{\mu} f \, \nabla_\nu f \,, \end{eqnarray} where $f = \varrho^2 \equiv g_{\theta\theta}$. Indeed, the points for which $g = 0$ correspond to an apparent horizon~\cite{FrolovNovikov}. If it happens that $g(t,r)$ is strictly positive then the collapsing shell generates no apparent horizon. For the general metric~\eqref{Met_Thin} the invariant $g$ is given by~\cite{Frolov:Poly} \begin{eqnarray} g = 1 - \frac{2GM}{r} \, q(r^2 - t^2) \,, \end{eqnarray} where \begin{eqnarray} q(z) \equiv z\, \frac{dF}{dz}(z)\, . \end{eqnarray} If there is a positive constant $C$ such that \begin{eqnarray} \frac{\vert q(r^2 - t^2) \vert}{r} < C \, , \end{eqnarray} then $g$ is positive everywhere provided that $M < (2GC)^{-1}$. Therefore, in order to show the existence of a mass gap for the formation of mini black holes one should verify that the function $r^{-1}q(r^2 - t^2)$ is bounded. In~\cite{Frolov:Poly} it was shown that for nondegenerate models there is a mass gap.
In what follows we extend this result to the general polynomial model. For $F(z)$ given by Eq.~\eqref{F_geral} we have \begin{eqnarray} q(z) &=& 1 - \sqrt{z} \, \sum_{i=1}^N A_{i,1}\, m_i \, K_1(m_i \sqrt{z}) \nonumber \\ && +\, 2 \sum_{i=1}^N \sum_{j=2}^{\alpha_i} A_{i,j} \, \left( \frac{\sqrt{z}}{2 m_i} \right) ^{j-1} \Big[ (j-1) K_{j-1}(m_i \sqrt{z}) - \frac{m_i \sqrt{z}}{2} K_j(m_i \sqrt{z}) \Big]. \end{eqnarray} As a finite sum of continuous functions defined for all $z \in \mathbb{R}^+$, $q(z)$ is also continuous. Hence, if $q(z)$ has any singularity it can only take place for large or small $z$. The former divergence does not occur, because the functions $K_{j}(z)$ decay exponentially as $|z| \rightarrow \infty$, in such a way that $q(z) \rightarrow 1$ as $z \rightarrow \infty$. On the other hand, assuming $\mathcal{N}>1$, for small arguments one has \begin{eqnarray} \label{q-small} q(z) = - \frac{z}{4} \, \left( S - S_2 \right) \, + \, O(z^2) \, , \end{eqnarray} whence $q(z) \rightarrow 0$ as $z \rightarrow 0$. Since both asymptotic limits are finite, it follows that $q(z)$ is bounded. Now let us analyze the function $r^{-1}q(r^2 - t^2)$. The function $r^{-1}$ is continuous; it vanishes for large $r$ and diverges only as $r \rightarrow 0$. In this regime, however, the vanishing of $q(r^2 - t^2)$ dominates over the divergence of $r^{-1}$, since $|t| < r$ outside the shell implies $r^2 - t^2 < r^2$. Thus, \begin{eqnarray} \lim_{r\rightarrow 0} \frac{\vert q(r^2 - t^2) \vert}{r} = 0 \, . \end{eqnarray} A similar analysis applies to the case $\mathcal{N} = 1$, with the same result~\cite{Frolov:Poly}. We conclude that $r^{-1}q(r^2 - t^2)$ is bounded for general polynomial gravity models, which implies the existence of a mass gap for the formation of mini black holes.
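To make the bound quantitative for the sixth-order model with a degenerate pole, worked out in the example below (cf. Fig.~\ref{Fig1}), one can evaluate the profile $V(\varv)$ with a simple quadrature for the Bessel functions. This is a rough numerical sketch, illustrative only:

```python
import math

def besselK(nu, z, tmax=10.0, n=4000):
    # K_nu(z) = ∫_0^∞ e^{-z cosh t} cosh(nu t) dt  (composite Simpson rule)
    h = tmax / n
    s = 0.0
    for k in range(n + 1):
        t = k * h
        w = 1.0 if k in (0, n) else (4.0 if k % 2 else 2.0)
        s += w * math.exp(-z * math.cosh(t)) * math.cosh(nu * t)
    return s * h / 3.0

def V(v):
    # V(v) = 1/v - K_1(v)/2 - (v/4)[K_0(v) + K_2(v)]
    k0, k1 = besselK(0, v), besselK(1, v)
    k2 = k0 + 2.0 * k1 / v  # recurrence K_2(z) = K_0(z) + (2/z) K_1(z)
    return 1.0 / v - k1 / 2.0 - (v / 4.0) * (k0 + k2)

# Scan for the maximum; the text quotes max V ≈ 0.249 at v ≈ 2.324 (Fig. 1)
grid = [1.0 + 0.05 * k for k in range(81)]  # v in [1, 5]
vmax = max(grid, key=V)
assert abs(V(vmax) - 0.249) < 2e-3
assert abs(vmax - 2.324) < 0.1
assert V(1.0) > 0 and V(5.0) > 0
```

The resulting bound $2GMm_1\beta V \lesssim 0.5\,GMm_1$ then reproduces the mass-gap estimate given below.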
The size of the gap depends on the scale $\lambda = \max_i \lbrace m_i^{-1} \rbrace$ defined by the massive excitations of the model; such a scale could be affected by a gravitational seesaw-like mechanism as discussed in~\cite{Seesaw} (see also~\cite{ABS-large,MG14} for experimental bounds on $\lambda$). To give an example of an explicit calculation, consider the sixth-order gravity with degenerate poles discussed in Sec.~\ref{Sec.6th-degen.}, for which \begin{eqnarray} q(z) = 1 - \frac{m_1 \sqrt{z}}{2} K_1(m_1 \sqrt{z}) - \frac{m_1^2 z}{4} [K_0(m_1 \sqrt{z}) + K_2(m_1 \sqrt{z})] \, . \end{eqnarray} Following~\cite{Frolov:Poly} we put $\beta^2 \equiv 1 - t^2 r^{-2}$ and $\varv \equiv m_1 \beta r$, so that $r^{-1}q(r^2 - t^2) = m_1 \beta V(\varv)$, with \begin{eqnarray} V(\varv) = \frac{1}{\varv} - \frac{1}{2} K_1(\varv) - \frac{\varv}{4} \left[K_0(\varv) + K_2(\varv)\right] \, . \end{eqnarray} The function $V(\varv)$ is positive and reaches its maximum of about 0.249 at $\varv \approx 2.324$ (see Fig.~\ref{Fig1}). Thus, \begin{eqnarray} \frac{2GM}{r} q(r^2 - t^2) = 2GM m_1 \beta V(\varv) \lesssim 0.5 \, GM m_1 , \end{eqnarray} as outside the shell the parameter $\beta$ ranges in the interval $(0,1)$. Therefore, if $M \lesssim 2 (G m_1)^{-1}$ the collapse does not result in a black hole. \begin{figure} [h] \centering \includegraphics[scale=0.75]{Fig1.eps} \caption{Graph of $V(\varv)$ for the sixth-order gravity with degenerate roots.} \label{Fig1} \end{figure} \subsection{Kretschmann scalar} Even though there is a mass gap for the mini black hole formation in the general higher-derivative gravity, the Kretschmann invariant is not regular at $r=0$.
Indeed, for a metric of the form~\eqref{Met_Thin} it is given by~\cite{Frolov:Exp,Frolov:Poly} \begin{eqnarray} R_{\mu\nu\alpha\beta}^2 = \frac{48 G^2 M^2}{r^6} \, Q(r^2 - t^2) \, , \quad \text{where} \quad Q(z) \equiv 2 z^2 q'^2 - 2 z q q' + q^2 \, \end{eqnarray} and primes denote differentiation with respect to the argument $z$. For $\mathcal{N} > 1$ and small arguments, $q(z)$ is given by~\eqref{q-small}, yielding \begin{eqnarray} \label{KretThinGen} R_{\mu\nu\alpha\beta}^2 \approx \frac{3 G^2 M^2 \left( S - S_2 \right)^2 \beta^4}{r^2} \, , \end{eqnarray} where $\beta^2 \equiv 1 - t^2 r^{-2}$ ranges between 0 and 1 outside the shell. As the collapse proceeds, the Kretschmann scalar diverges\footnote{Note, however, that the divergence is weaker than in GR, for which $R_{\mu\nu\alpha\beta}^2 \sim r^{-6}$.} for $r\rightarrow 0$. This very same behavior occurs in the nonlocal ghost-free gravity~\cite{Frolov:Exp}, and it was previously verified to occur also in the particular case of polynomial models with simple poles~\cite{Frolov:Poly}. Actually, in view of these results on similar models it is natural to expect the nonregularity of the Kretschmann invariant, as in these cases the function $F(z)$ has the same linear dependence on $z$ for small arguments. As pointed out in~\cite{Frolov:Exp,Frolov:Poly} this singularity of $R_{\mu\nu\alpha\beta}^2$ is associated with the nonphysical assumption of an infinitesimally thin shell. The physical imploding shell must have finite thickness, which tends to regularize the curvature (see Sec.~\ref{Sec.5}). It is also instructive to recall that for the fourth-order gravity, \textit{i.e.}, $\mathcal{N} = 1$, one gets \begin{eqnarray} q(z) = - \frac{z}{4} (\,\mbox{ln}\, z + q_1 )m_1^2 + O(z^2) \, , \end{eqnarray} around $z=0$, where $q_1 \equiv 2 \gamma - 2 \,\mbox{ln}\, 2 - 1 + \,\mbox{ln}\, m_1^2$.
This gives~\cite{Frolov:Poly} \begin{eqnarray} \label{KretThin4th} R_{\mu\nu\alpha\beta}^2 \approx \frac{3 G^2 M^2 m_1^4\beta^4}{r^2} \Big\lbrace q_1^2 + 2(1 + q_1 )\left[ 1+\,\mbox{ln}\, (\beta^2 r^2)\right] + \left[ \,\mbox{ln}\, (\beta^2 r^2)\right] ^2 \Big\rbrace \, , \end{eqnarray} which diverges more rapidly (cf. Eq.~\eqref{KretThinGen}) as the collapse proceeds and $r\rightarrow 0$. \section{Thick null shell collapse} \label{Sec.5} In the linear regime one can build the metric associated with a thick null shell by superposing a set of $\delta$-shells collapsing to the same spatial point $O$, which we take as origin of the coordinate system. Of course, there are infinitely many ways of distributing the total energy of the shell throughout its thickness. Since our goal is to show that a nonsingular source regularizes the Kretschmann scalar in the polynomial gravity (and ameliorates the divergence for the fourth-order model), we choose the simplest profile by assuming that the density $\rho(t)$ at $r=0$ remains constant during the collapse, vanishing before and after it. Such a definition of the energy flux passing at $O$ suffices to determine the density profile of the shell, inasmuch as each element of the fluid moves at the speed of light and no self-interaction is considered inside the shell. Therefore, for a shell with total mass $M$ and thickness (or duration) $\tau$, \begin{eqnarray} \rho(t) = \left\{ \begin{array}{l l} 0 \, , & \text{if $\,|t|>\tau/2\,$,}\\ M/\tau \, , & \text{if $\,-\tau/2<t<\tau/2\,$,}\\ \end{array} \right . \end{eqnarray} where we set $t=0$ as the moment when half of the total mass crosses $O$. The corresponding metric perturbation can be obtained by averaging the metric~\eqref{dhr} of the thin null shells with respect to the density $\rho$~\cite{Frolov:Exp}, \begin{eqnarray} \n{llrr} \langle \langle dh^2\rangle \rangle (t,r) \,=\, \int dt' \rho(t') \langle dh^2 \rangle(t-t',r) \,.
\end{eqnarray} The collapse of a thick null shell defines specific spacetime domains (see, \textit{e.g.},~\cite{Frolov:Exp}). In the present work we restrict considerations to the domain $I$ near $t = r = 0$, where (and when) the shell attains its highest density---favoring the mini black hole formation and the emergence of singularities. This domain is characterized by the intersection of the ingoing and outgoing fluxes of null fluid, and it is formally defined as the locus of the spacetime points for which $r + \vert t \vert < \tau/2$. Moreover, the metric is stationary inside $I$, for the energy density is constant there. Taking into account that only the $\delta$-layers which cross $O$ at times $t^\prime \in (t-r,t+r)$ contribute to the field inside this domain, it is not difficult to verify that Eq.~\eq{llrr} yields~\cite{Frolov:Exp} \begin{eqnarray} \label{Met_Thick} \langle \langle dh^2\rangle \rangle = -\frac{2G M}{\tau r}\left[ J_0 dt^2 + J_2 \frac{dr^2}{r^2} + \frac{1}{2} \left( J_0 r^2 -J_2 \right) d\Omega^2 \right] , \quad r + \vert t \vert < \frac{\tau}{2} \, , \end{eqnarray} where we defined \begin{eqnarray} \label{J-def} J_n(r) \equiv \int_{-r}^{r} dx \ x^n\ F(r^2-x^2). \end{eqnarray} Particularizing this solution to gravity models with six or more derivatives in the action, we substitute the expression~\eqref{F_expand2} for $F(z)$ around $z=0$. It follows that \begin{eqnarray} \label{J0J2} J_0(r) = 2 c^\prime r - \frac{r^3}{3} (S - S_2) + O(r^5) \, , \qquad J_2(r) = \frac{2 c^\prime r^3}{3} - \frac{r^5}{15} (S - S_2) + O(r^7) \, . \end{eqnarray} The Kretschmann scalar associated with this solution is \begin{eqnarray} R_{\mu\nu\alpha\beta}^2 \, = \, \frac{32 G^2 M^2 \left( S - S_2 \right)^2 }{3 \tau^2} \, + \, O(r^2) \ , \end{eqnarray} which is regular at $r=0$, as anticipated. It is worth mentioning that the nonsingularity of the source is not enough, by itself, to guarantee the regularity of the curvature.
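The small-$r$ expansions of $J_0$ and $J_2$ quoted above can be checked directly against the definition~\eqref{J-def}. The following is a minimal numerical sketch, assuming only a linear truncation $F(z) \approx c' + F_1 z$ with arbitrary constants $c'$ and $F_1$; the resulting closed forms $J_0 = 2c'r + \tfrac{4}{3}F_1 r^3$ and $J_2 = \tfrac{2}{3}c'r^3 + \tfrac{4}{15}F_1 r^5$ reproduce Eq.~\eqref{J0J2} upon the identification $F_1 = -(S-S_2)/4$ (an identification inferred here, not quoted from the text):

```python
# Numerical check of J_n(r) = \int_{-r}^{r} x^n F(r^2 - x^2) dx for an
# assumed linear profile F(z) = c' + F1*z (truncation of the small-z series).
# Exact integration gives J0 = 2 c' r + (4/3) F1 r^3 and
# J2 = (2/3) c' r^3 + (4/15) F1 r^5.

def J(n, r, F, steps=20001):
    # composite Simpson rule on [-r, r]; 'steps' must be odd
    h = 2 * r / (steps - 1)
    total = 0.0
    for i in range(steps):
        x = -r + i * h
        w = 1 if i in (0, steps - 1) else (4 if i % 2 == 1 else 2)
        total += w * x**n * F(r * r - x * x)
    return total * h / 3

cp, F1, r = 0.7, -0.3, 0.05          # arbitrary test values
F = lambda z: cp + F1 * z

J0_exact = 2 * cp * r + (4.0 / 3.0) * F1 * r**3
J2_exact = (2.0 / 3.0) * cp * r**3 + (4.0 / 15.0) * F1 * r**5

assert abs(J(0, r, F) - J0_exact) < 1e-9
assert abs(J(2, r, F) - J2_exact) < 1e-9
print("J0 and J2 match the quoted small-r coefficients")
```

With $F_1 = -(S-S_2)/4$, the $r^3$ coefficient of $J_0$ becomes $-(S-S_2)/3$ and the $r^5$ coefficient of $J_2$ becomes $-(S-S_2)/15$, as in Eq.~\eqref{J0J2}.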
In fact, $F(z) \sim \,\mbox{ln}\, z$ in GR, which gives $R_{\mu\nu\alpha\beta}^2 \sim r^{-4}$ for the collapsing thick null shell. Also, the presence of the term $z \,\mbox{ln}\, z$ in the small-$z$ expansion of $F(z)$ could yield logarithmic divergences in the Kretschmann scalar. Such a singularity was considered in~\cite{Frolov:Poly} as a possibility for general higher-derivative polynomial gravity (see Eq.~\eqref{F_expand1}). Nonetheless, it only occurs for the models with four derivatives in the spin-2 sector, since for nontrivial polynomial theories there is the relation $S_1 = S_2$, which regularizes the potential $\chi$. Explicitly, the Kretschmann scalar for a collapsing thick null shell in the fourth-derivative gravity follows from~\eqref{F_4th} and reads~\cite{Frolov:Poly} \begin{eqnarray} R_{\mu\nu\alpha\beta}^2 = \frac{32 G^2 M^2 m_1^4 }{27 \tau^2} \left[ 5 + 9c^2 + 36 c \,\mbox{ln}\, r + 36 (\,\mbox{ln}\, r)^2\right] + O(r^2) \ , \end{eqnarray} with $c \equiv 2 \gamma - 2 + \,\mbox{ln}\, m_1^2$. The origin of this singularity can be traced back to the nonrelativistic limit. Indeed, in~\cite{Frolov:Poly} it was shown that, for polynomial theories with simple poles, the nonregularity of the potential $\chi$ implies a singular Kretschmann scalar for the collapsing thick null shell. We have seen that the divergences are softened when a $\delta$-shell is replaced by a thick shell. It is therefore natural to expect the existence of a mass gap for the formation of mini black holes for a collapsing thick null shell too. For the sake of completeness, we calculate the invariant $g(r)$ on the domain $I$ for the solution~\eqref{J0J2}, which reads \begin{eqnarray} g(r) \, = \, 1 + \frac{2 G M (S - S_2) r^2}{3\tau} \, + \, O(r^4) \, . \end{eqnarray} Since $r < \tau$ on $I$, it follows that \begin{eqnarray} \frac{2 G M \vert S - S_2 \vert r^2}{3\tau} < \frac{2 G M \vert S - S_2 \vert \tau}{3} \, .
\end{eqnarray} Hence, for a given $\tau$ it is also possible to avoid the existence of an apparent horizon inside $I$, provided that the mass $M$ is sufficiently small. \section{Summary and discussion} \label{Sec.6} Let us summarize the results obtained. We derived the solutions for the Newtonian potentials associated with a pointlike mass in a general polynomial higher-derivative gravity, \textit{i.e.}, allowing the presence of complex and degenerate poles (of arbitrary order) in the propagator. This includes the classes of (super)renormalizable theories and Lee-Wick gravity models. It was verified, in agreement with~\cite{Newton-BLG}, that the metric potentials remain finite at $r=0$ provided that there is at least one massive mode in each of the spin-$2$ and spin-$0$ sectors. This is not a sufficient condition, however, to ensure the regularity of the solution, because there can be singularities in the curvatures. Indeed, it has been known since the 1970s that Stelle's fourth-order gravity possesses curvature singularities in the linear regime~\cite{Stelle78}. On the other hand, there was evidence that such singularities would be regularized in the models which contain more than four derivatives in the action~\cite{Modesto-LWBH,Holdom}. Using the expressions~\eqref{Phi-Gen} and~\eqref{Psi-Gen} derived in Sec.~\ref{Sec.2} for the nonrelativistic potentials $\varphi$ and $\psi$, we showed explicitly that in a generic polynomial gravity with more than four derivatives in both the scalar and spin-2 sectors the curvatures remain finite at the origin. This result completely characterizes the class of local higher-derivative gravities which have a regular Newtonian limit. In the ensuing part of the paper we considered the dynamical process of the spherically symmetric collapse of null shells in linearized higher-derivative polynomial gravities. Here we generalized the discussion carried out in~\cite{Frolov:Poly} to include the possibility of degenerate poles.
If one allows the shell to have a certain thickness, then the Kretschmann invariant remains finite during the collapse provided that the model has at least six derivatives in its spin-2 sector. This observation on the regularity of the metric of the thick shell is a refinement of the result derived in~\cite{Frolov:Poly}. Indeed, the logarithmic divergences of the Kretschmann scalar which in principle could occur in polynomial theories are actually ruled out in most cases, owing to a specific algebraic relation between the poles of the propagator. Only in the fourth-order gravity are these logarithmic divergences possible. Finally, we have shown that, as in the case of polynomial gravities with simple poles in the propagator~\cite{Frolov:Poly}, there exists a mass gap for mini black hole formation also in the models with higher-order poles. With the results obtained in the present work it is possible to observe some similarities between the nonlocal (ghost-free) higher-derivative gravity and the local (polynomial) models with more than four derivatives. First, in both theories there is the cancellation of the Newtonian singularity of the metric potentials associated with a $\delta$-source \cite{Tseytlin95,Modesto12,Maz12,Newton-MNS,Newton-BLG,EKM16}. Second, it is known that in the nonrelativistic limit there is a class of nonlocal gravities that have a regular solution for the field generated by a pointlike source \cite{Head-On,Buoninfante:2018b,Buoninfante:2018a}. Our results show that in a generic polynomial higher-derivative gravity with more than four derivatives in each sector the Newtonian limit is regular too. (Actually, using the description of effective sources presented in~\cite{BreTib2} it is possible to deduce the regularity of some nonlocal theories from the comparison with a sequence of sources associated with the local models.) A third similarity is the regularity of the metric of the collapsing shell.
In fact, if a thin shell is considered, nonpolynomial and nontrivial polynomial theories have a Kretschmann scalar which diverges quadratically for small $r$~\cite{Frolov:Exp,Frolov:Poly}. This is, however, a consequence of the nonphysical assumption of an infinitesimally thin shell. If the shell has some thickness, then in both theories the Kretschmann invariant remains finite during the collapse. This happens, again, because the leading term in the expansion of Eq.~\eqref{F_DR} around $z=0$ is the linear one, just as occurs in the nonlocal ghost-free gravity (see~\cite{Frolov:Exp,Frolov:Poly}). Only in the fourth-order gravity are divergences in the Kretschmann invariant possible, a situation analogous to what happens in the Newtonian limit. Moreover, in both theories there is a mass gap for mini black hole formation. Indeed, this feature is present in any higher-derivative model with an arbitrary number of derivatives in the spin-2 sector~\cite{Frolov:Exp,Frolov:Poly,Frolov:Weyl}, since in these theories there is a new mass scale. These four connections between polynomial and ghost-free gravity theories support the view that the nonlocal models may be considered as the limit of a theory with an infinite number of complex poles hidden at infinity~\cite{CountGhost}. In this sense, it is useful to notice that many good regularity properties of the nonlocal gravity~\cite{Modesto12,Buoninfante:2018b,Buoninfante:2018a} can be achieved without losing locality at the classical level, and may be common to models with at least six derivatives. Further discussion of the similarities between local and nonlocal models is carried out in the parallel work~\cite{BreTib2}. All the results mentioned above have been obtained in the linear approximation. The most interesting question is whether there can be nonsingular solutions in the full nonlinear regime of polynomial gravity theories.
The first step in this direction was taken within the fourth-order gravity in Ref.~\cite{Stelle78}, where the asymptotic analysis of the static field equations near the origin was carried out via the Frobenius technique. The existence of three families of solutions was shown: a set of nonsingular solutions, and two sets of singular ones---one of them containing the Schwarzschild solution. The presence of the Schwarzschild solution is expected because, by means of the Gauss-Bonnet relation \begin{eqnarray} \n{2gb2} \int d^4 x \sqrt{-g}\, E \,=\, \mbox{total derivative}, \end{eqnarray} where $E = R_{\mu\nu\alpha\beta}^2 - 4 R_{\mu\nu}^2 + R^2$, it is possible to completely remove the Riemann-squared term from the action, and it is clear that any vacuum solution of the Einstein equations ($R_{\mu\nu} = 0$) is also a solution of the fourth-order gravity~\cite{Stelle78,Frolov:2009qu}. Nonetheless, in this model the Schwarzschild solution is not coupled to a positive-definite matter source~\cite{Stelle78}. More recently, some new aspects of the nonlinear static spherically symmetric solutions in fourth-order gravity were considered in~\cite{Holdom,Stelle15PRL,Stelle15PRD} by means of numerical methods. In particular, these works studied what happens when the asymptotic solutions in the strong-field regime near $r=0$ are linked with the weak-field solution at large $r$ in the form of a combination of Newton and Yukawa potentials---such a potential is a particular case of our general result, Eqs.~\eqref{Phi-Gen} and \eqref{Psi-Gen}. In summary, the result is that for a $\delta$-like source the solution has no horizon and falls into a timelike singularity at $r=0$. Actually, the presence of the singularity in this solution is expected in view of the fact that $R_{\mu\nu\alpha\beta}^2$ diverges already at the linear level.
Moreover, the absence of a horizon in the full fourth-order model is guaranteed by a general theorem~\cite{Stelle15PRL,Stelle15PRD,Nelson:2010ig}; only the particular theories where the $R^2$-term is excluded from the action could have horizons. As concerns theories with more than four derivatives, in Ref.~\cite{Holdom} the asymptotic solutions near $r=0$ were studied by the Frobenius series expansion method in models with up to $10$ derivatives in the action. It was shown that there are no Schwarzschild-like solutions, or other singular ones. Only the nonsingular solutions remain in the static spherically symmetric case for sixth- and higher-order theories\footnote{We point out, however, that the method based on the expansion in Frobenius series around $r=0$ is not sufficient to rule out the existence of singularities, as there may be solutions with a violent singularity which does not admit such a representation at the origin. Also, it is possible to have solutions with singularities at a finite radius.}. The nonexistence of the exact Schwarzschild solution is due to the absence of the Gauss-Bonnet relation for the higher-order terms. The analogous relation~\eqref{gb} is insufficient to eliminate the effect of the Riemann-squared terms in the nonlinear regime, since $O(R^3)$ structures still remain. Also, the nature of these nonsingular solutions implies that the complete solutions with large-$r$ behavior given by Eqs.~\eqref{Phi-Gen} and \eqref{Psi-Gen} must have no horizon or an even number of horizons. Another interesting result of Ref.~\cite{Holdom} is the necessity of theories with six or more derivatives for the possible elimination of the de~Sitter-like horizons. The results of the present work, in light of~\cite{Holdom}, provide further motivation for the investigation of the spherically symmetric static solutions in the full nonlinear regime for the polynomial theories with more than four derivatives.
It would also be interesting to know whether in these theories there is some kind of no-horizon theorem, and we expect to revisit this issue in the future. Should the answer be positive, the complicated numerical search for solutions might be simplified.
Abraham (de) Fabert d'Esternay, born in Metz on and died in Sedan on , was a French military commander. A lieutenant general in 1650, he received the baton of Marshal of France in 1658. He was also governor of the former principality of Sedan from 1642 to 1662, distinguished himself during the Thirty Years' War, and besieged Stenay in 1654. Family His great-grandfather Isaïe Fabert, lord of Xonville in the former canton of Gorze, lived in Strasbourg around the middle of the century; his grandfather, Dominique Fabert — Mangin, after the traditional Lorraine diminutive — had been a licensed printer in Strasbourg before settling in Metz after a brief stay in Nancy (as director of the ducal printing house of Nancy). As for the marshal's father, Abraham Fabert, first of the name, he had been since 1595 the sworn printer of the city of Metz, commissioner of artillery in the government of Metz, and five times master-alderman of Metz. He was ennobled by Henri IV for having come to his aid during his march on Paris. His daughter Claude married the Marquis de Caylus and was the mother of Charles de Caylus (1669-1754), bishop of Auxerre from 1704 until his death. Biography Abraham Fabert entered on a military career at fourteen. A page of the Duke of Épernon, he joined the Gardes regiment as a cadet in 1613, where he made himself noticed by his valor; in 1618 he was made a captain of infantry. He distinguished himself in 1627 as major of the Rambures regiment at the siege of La Rochelle. In 1629 he displayed the greatest valor in the attack on the Pas de Suze, which Louis XIII was besieging in person, directed the siege of Chivas in Savoy, and completely defeated the army of Prince Thomas, which was seeking to relieve the place. On , during the assault on the hornwork of Privas, Abraham Fabert rushed forward at the head of the forlorn hope, planted his ladder at the foot of the wall, was the first onto the rampart, drove back the enemy with sword blows, and held firm until the officers and soldiers, spurred by his example, joined him.
He was soon followed there by the whole regiment, which established itself solidly. In 1630, back in Piedmont with the Rambures regiment, he arrived before Exilles. After reconnoitering the outworks of the fort, Abraham Fabert slipped alone into the ditch, approached the enclosure of the keep, and planned his attack. The next day, with a small detachment, he drove a trench up to the keep, placed two cannons in battery, and forced the garrison to capitulate. He then advanced at the head of a few companies toward La Tour-Carbonnières, carried the bridge at Mafrée that separated him from it, and forced that post, too, to sound the surrender. At the battle of Veillane, the army's rearguard was attacked at the pass of the mountain of Saint-Michel. Abraham Fabert, with twenty men, held off 400 Savoyards; meanwhile, Charles de Rambures came down with the regiment from the height where he was posted and fell upon the enemy with such vigor that its ranks, thrown back upon one another, were put into the greatest disorder: the rout was complete. The regiment then took part in the siege of Saluces, where Fabert received two bullets through his hat. He had dared to go out in broad daylight, under a hail of bullets, to reconnoiter the approaches of the place. The king, full of admiration for such brilliant and useful bravery, set aside in Fabert's favor the regulation he himself had made, and gave him a company, whose command was at that time incompatible with the duties of sergeant-major. In 1636, at the siege of Saverne, Fabert mounted the breach at the third assault and seized a neighboring house. He defended it for more than an hour, but the besieged having set it on fire, he was forced to jump into the ditch, where he received several wounds. After his recovery, Fabert was rewarded with a company in the Picardie regiment.
Promoted (by Richelieu) to captain in the Gardes-Françaises regiment, he distinguished himself again in a host of actions, notably at the siege of Arras in 1640, at the battle of La Marfée in 1641, and at the sieges of Collioure and Perpignan in 1642, after which he was given the government of Sedan. He then served with the army of Catalonia in 1645-46, and afterwards in Italy, at the captures of Piombino and Portolongone, with the rank of maréchal de camp. In 1650 came further honors: in May he was created Marquis de Fabert (at Larrey — written La Ré — and Cérilly), and lieutenant general on 20 October; he received command of the troops stationed in Flanders, then, in 1652, the inspection of the crown's towns on the Meuse. In 1654 he directed, under the eyes of Louis XIV, the siege of Stenay, and forced that place to capitulate by applying the new rules of attack; "it was at this siege," writes Pinard, "that parallels and trench cavaliers were seen for the first time." In 1655 he also acquired the estate of Esternay (Père Anselme calls him Marquis d'Esternay). Fabert received the baton of Marshal of France on : he was the first soldier from a family of recent nobility, risen so to speak from the ranks, to be raised to that dignity. A delegation came expressly from Metz to bring him the congratulations of his native city: the man who delivered the speech, the grand archdeacon of Metz, was none other than Bossuet. Here is how Louis XIV himself, in the document drawn up for the occasion, summed up the career and qualities of the new marshal, after recalling his successes at Liège and the taking of Stenay: In December 1661 he declined the collar of the Orders of the King, out of modesty, or for fear of being ill received by his peers, given his modest origins. In compensation, the king gave him the domain of Sézanne in Brie.
On he was stricken with pneumonia and, as early as , he himself asked for the last sacraments in the town of Sedan, of which he was governor and where death took him the next day. Des Courtils relays some singular rumors: "He had a taste for the occult sciences. At his death his body was not found; it was said the devil had carried it off!" His remains were laid in the convent of the Irish Capuchins, of which today only the chapel survives, which housed the tombs of the marshal's family. The bodies were removed from it during the Revolution. He had married Claude Richard de Clevant, by whom he had three sons and three daughters. a) His son Louis, colonel of the Lorraine regiment, was killed by the Turks at the siege of Candia on 23 June 1669, aged about 18, unmarried; b and c) two other sons died young. Of his three daughters: d) Anne married Louis de Comminges-Vervins, then (2) Claude-François de Mérode-Trélon. e) Claude married Charles-Henri de Tubières de Grimoard-Pestels de Lévis-Caylus. f) Angélique married Claude Brulart de Genlis (whence Marie-Anne Brulart), then (2) François 3, Marquis de Harcourt-Beuvron, a widower with one son (Henri), who became the 1st Duke and Marshal d'Harcourt in 1703. Marie-Anne and Henri married in 1687 and had: François, 2nd marshal-duke d'Harcourt in 1746, and Anne-Pierre, 3rd marshal-duke after his brother, in 1775. Tributes In Metz, the Lycée Fabert, the Rue Fabert, and a bronze statue erected on the Place d'Armes pay tribute to the marshal. The Rue Fabert in Paris runs along the Esplanade des Invalides. Coat of arms Notes and references Appendices Bibliography Fadi El Hage, Abraham Fabert. Du clientélisme au maréchalat (1599-1662), Paris, L'Harmattan, 2016, 177 p. "Abraham de Fabert, mareschal de France", in Charles Perrault, Les Hommes illustres qui ont paru en France pendant ce siècle, chez Antoine Dezallier, 1700, vol. 2. His Life was written by Gatien de Courtilz de Sandras, 1697, and by Joseph de Labarre, 1752.
Le maréchal Fabert, Théophile Ménard (Just-Jean-Étienne Roy), Alfred Mame et fils, éditeurs, 1869. Le maréchal Fabert (Le soldat, le réformateur, l'homme), Desclée de Brouwer, 1933. External links Marshal of France appointed under the reign of Louis XIV Siege of La Rochelle (1627-1628) Born October 1599 Born in Metz Died May 1662 Soldier from Metz Died in Sedan Died at 62 Died in the province of Champagne
Q: Pass a number to an external JS file using addEventListener

Trying to pass a value entered by a user to a JavaScript file using addEventListener. I have the HTML

<label><input type="text" pattern="[0-9]{5}" maxlength="5" id="x"/> </label>
<button type="submit" id="btn"> Hit!</button>
<script src="functions.js"></script>

and in functions.js:

var x = document.getElementById("btn");
x.addEventListener("click", resolve);
var y = document.getElementById("x").value;
console.log(x);
console.log(y);

function resolve() {
  if (y >= 20000) {
    alert("yes greater than 20000");
  } else {
    alert("nnn oo good");
  }
}

but the JavaScript file is not getting the value.
\section{Introduction} \cite{Block_Marschak1960} introduce ``random utility models'', showing in many cases their equivalence with ``random ordering models''. In particular, the Multiple Choice Model (MCM) predicts stochastic choices from latent probability distributions over strict rankings; all sets of alternatives are choice sets, and the subject selects one alternative in the choice set\footnote{Other random utility models restrict choice sets, for instance to two-element sets. In economics, the term ``random utility model'' refers to models based on probability distributions over strict rankings, that is irreflexive linear orderings. In psychology, relations of another type often replace rankings (see for instance the references in \citealp{Davis-Stober_Doignon_Fiorini_Glineur_Regenwetter2018}).} (for a precise definition, see Section~\ref{SEC_MCP}). A complete characterization of the MCM is a remarkable result due to \cite{Falmagne1978}: the predictions of the MCM form the Multiple Choice Polytope (MCP), for which Falmagne obtains an affine description---that is, a system of affine inequalities whose solution set is the MCP. In economics, since \cite{Marschak1960} and \cite{Block_Marschak1960}, the MCM has been used in many different contexts. In discrete choice analysis, economists often use the MCM to describe unknown data generating process of stochastic choice, for instance over transportation methods, schools, and products (although in practice, they frequently make use of parametric models such as the mixed logit model, \citealp{McFadden2001}). 
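As a concrete illustration of the MCM (with purely hypothetical numbers), a probability distribution over the six strict rankings of three alternatives induces, for every choice set $A$ and every $x \in A$, the probability that $x$ is ranked first among the elements of $A$; these predictions sum to one over each choice set. A minimal sketch, where a ranking is written best-first:

```python
from itertools import permutations, combinations
from fractions import Fraction

alternatives = ("a", "b", "c")
rankings = list(permutations(alternatives))   # the 6 strict rankings, best-first

# hypothetical latent distribution over rankings (must sum to 1)
prob = {rk: Fraction(1, 6) for rk in rankings}
prob[("a", "b", "c")] = Fraction(1, 3)
prob[("c", "b", "a")] = Fraction(0)
assert sum(prob.values()) == 1

def choice_prob(x, A):
    """MCM prediction: total mass of rankings whose best element of A is x."""
    best = lambda rk: next(y for y in rk if y in A)
    return sum(p for rk, p in prob.items() if best(rk) == x)

# the predictions sum to 1 over every choice set with at least two elements
for size in (2, 3):
    for A in combinations(alternatives, size):
        assert sum(choice_prob(x, A) for x in A) == 1

print(choice_prob("a", ("a", "b")))   # prints 2/3
```

Collecting the numbers $p(x, A)$ for all $x$ and $A$ gives one point of the MCP; Falmagne's theorem characterizes exactly which such collections arise this way.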
Interest in the MCM is exemplified by \cite{McFadden_Richter1970,McFadden_Richter1990}\footnote{McFadden and Richter establish another characterization of the model (a more involved one than Falmagne's one).}, \cite{Barbera_Pattanaik1986}\footnote{Barbera and Pattanaik obtain a proof similar to Falmagne's one.} and \cite{Monderer1992}\footnote{Monderer derives another proof from a result of \citealp{Weber1988} in game theory, namely a characterization of random order values.}. In psychology, several papers refer to Falmagne Theorem, for instance \cite{Regenwetter_Marley_Grofman2002}, \cite{Suck2002b}, \cite{Fiorini2004}, \cite{Suck2016}. Recently, \cite{Kellen_Winiger_Dunn_Singmann2021} use the MCM in signal detection theory. In both psychology and economics, and also in operations research, another setup, in which the only choice sets are binary, is the object of many publications: see \cite{Fishburn1992} for a classical survey, and \cite{Marti_Reinelt2011} for a more recent overview. For example, \cite{Fishburn_Falmagne1989} provide necessary conditions for binary choice probabilities to be induced by a probability distribution on rankings. They also show that no finite set of simple necessary conditions is sufficient for inducement when the alternative set is finite but can be arbitrarily large. Today, finding a manageable characterization of the binary choice polytope appears to be out of reach in view of a related NP-hard problem (see for instance \citealp{Charon_Hudry2010}, Problem~5 and Theorem~7). For the MCP, \cite{Fiorini2004} provides an alternative proof of Falmagne Theorem, which is enlightening: he starts with a change of space coordinates or, in another interpretation, he works on the image of the MCP by a well-chosen affine transformation. Next he shows that in the new viewpoint the vertices of the MCP are (the characteristic vectors of) all paths from the source to the sink in a special network. Hence, the MCP is the flow polytope of the network.
A characterization of the MCP by a system of affine inequalities then follows from the fundamental theorem on network flows (\citealp{Gallai1958} and \citealp{Ford_Fulkerson1962}). In Economics, \cite{Chambers_Masatlioglu_Turansick2021} apply Fiorini's technique to study a ``correlated random utility model''. However, not much is known about the geometric structure of the MCP other than its facets \citep{Suck2002}. We characterize the adjacency of vertices and the adjacency of facets. As a matter of fact, our characterizations hold for the flow polytope of any acyclic network (the MCP being a particular case). So they are also valid for the three flow polytopes built in \cite{Davis-Stober_Doignon_Fiorini_Glineur_Regenwetter2018} to get extended formulations of the weak order polytope, interval order polytope and semiorder polytope\footnote{We refer the reader to the last paper (and its references) for the terminology. Note that the mastery of the adjacencies on the four extended formulations should be useful in the design of optimization algorithms, particularly for the statistical tests evoked in \cite{Davis-Stober_Doignon_Fiorini_Glineur_Regenwetter2018}.} (see Figure~\ref{scheme_of_polytopes}). In Economics, \cite{Turansick2022}, in his Theorem~2 on the identifiability in the MCM (see \citealp{Fishburn1998}, for previous results), introduces a condition on two vertices of the MCP which we show to be equivalent to their non-adjacency (see Subsection~\ref{subs_identifiability}). To check whether the mixed logit model can approximate the MCM, \cite{Chang_Narita_Saito2022} use the fact that a convex combination between two adjacent vertices of the MCP is a prediction of the MCM that is uniquely represented. Thus a characterization of vertex adjacency can be useful. Fishburn published papers on the linear ordering polytope, notably \cite{Fishburn_Falmagne1989} and \cite{Fishburn1992}, and also on the weak order polytope, \cite{Fiorini_Fishburn2004}. 
He has also introduced the concept of an interval order \citep{Fishburn1970a} as an extension of the one of a semiorder \citep{Luce1956}. We dedicate our contribution to the memory of Peter Fishburn, whose influence on the fields addressed in this paper remains so strong. \begin{figure}[ht!] \begin{center} \begin{tikzpicture}[xscale=0.7,yscale=0.8,every text node part/.style={align=center}] \draw [thick,decorate,decoration={brace,amplitude=15pt,raise=4ex}] (-2.1,3) -- (13.2,3) node[midway,yshift=16mm]{flow polytopes}; \draw [thick,decorate,decoration={brace,amplitude=9pt,raise=4ex}] (12.7,3.7) -- (12.7,-1) node[midway,xshift=17mm,rotate=-90]{extended\\ formulations}; \node[rectangle] at (0,3) {$\Pmc{\cal C}=\Flo{\cal C}$}; \node[rectangle] at (0,0) {linear \\ order \\ polytope}; \node[rectangle] at (4,3) {$\Fwo{\cal C}$}; \node[rectangle] at (4,0) {weak \\ order \\ polytope}; \node[rectangle] at (8,3) {$\Fio{\cal C}$}; \node[rectangle] at (8,0) {interval \\ order \\ polytope}; \node[rectangle] at (12,3) {$\Fso{\cal C}$}; \node[rectangle] at (12,0) {semi-\\ order \\ polytope}; \draw[thick,->] (0,2.3) to (0,1.3); \draw[thick,->] (4,2.3) to (4,1.3); \draw[thick,->] (8,2.3) to (8,1.3); \draw[thick,->] (12,2.3) to (12,1.3); \end{tikzpicture} \end{center} \caption{A scheme of the various polytopes mentioned in the paper. Here $\Pmc{\cal C}$ designates the Multiple Choice Polytope MCP on the alternative set ${\cal C}$ (Section~\ref{SEC_MCP}), and ${\cal F}(D)$ designates the flow polytope of the network $D$ (see Sections~\ref {SE_MCP_flows} and \ref{SE_other} for the four specific networks). \label{scheme_of_polytopes}} \end{figure} \section{Basic Definitions and Results} \label{SEC_Basic} \subsection{Polytopes}\label{sub_Polytopes} A \textsl{polytope} ${\cal P}$ in $\mathbb R{}^d$ is the convex hull of some finite subset of $\mathbb R{}^d$, say ${\cal P}=\text{conv}(V)$ with $V \subset \mathbb R{}^d$, $V$ finite. 
A \textsl{face} $F$ of the polytope ${\cal P}$ is any subset $F$ of ${\cal P}$ equal to ${\cal P}$, or for which there exists an (affine) hyperplane $H$ which satisfies ${\cal P} \cap H = F$ and is \textsl{valid} for ${\cal P}$, that is, ${\cal P} \subseteq H^+$ with $H^+$ a closed side of $H$. If $H^+=\{p \in \mathbb R{}^d {\;\vrule height9pt width1pt depth1.5pt\;} \alpha(p)\ge r\}$ for a linear form $\alpha$ on $\mathbb R{}^d$ and a real number $r$, the inequality $\alpha(x)\ge r$ \textsl{defines} the face $F$. A \textsl{vertex} of ${\cal P}$ is a point $p$ such that $\{p\}$ is a face of ${\cal P}$. An \textsl{edge} is a segment which forms a face. A \textsl{facet} of ${\cal P}$ is a proper\footnote{Recall that $A$ is a \textsl{proper} subset of $B$ when $A \subset B$ (strict inclusion).}, maximal face of ${\cal P}$. For our polytope ${\cal P}=\text{conv}(V)$, all vertices belong to $V$ (but points in $V$ are not necessarily vertices). Even more, the vertices form the single, inclusion-minimal subset $V$ such that ${\cal P}=\text{conv}(V)$. Any face is the convex hull of the vertices it contains. A \textsl{simplex} is a polytope whose vertices are affinely independent points. Each polytope ${\cal P}$ in $\mathbb R{}^d$ is the set of solutions of a (finite) system ${\cal S}$ of affine equations and affine inequalities on $\mathbb R{}^d$. Under the restriction that the solution set is bounded, the converse does hold. The system ${\cal S}$ then forms an \textsl{affine description} of the polytope. Suppose now that ${\cal S}$ is an affine description with a minimum number of (in)equalities. If any inequality in ${\cal S}$ is satisfied with equality on the whole polytope ${\cal P}$, we replace the inequality sign with an equality sign. Then the number of equalities in ${\cal S}$ equals the codimension of ${\cal P}$ (that is, $d-\dim({\cal P})$, where $\dim$ always means the affine dimension). Moreover, there is in ${\cal S}$ one inequality per facet of ${\cal P}$.
When $\dim {\cal P} < d$, the affine inequality for a given facet can be chosen among infinitely many ones. For more details (especially proofs) on polytopes, see for instance \cite{Korte_Vygen2008}, \cite{Schrijver2003}, \cite{Ziegler1998}. \subsection{Directed graphs} A \textsl{directed graph} $G$ is a pair $(N,A)$, where $N$ is a finite set of nodes\footnote{We reserve the word ``vertex'' for polytopes. In only a few other occasions when speaking of directed graphs, we depart from the exposition of \cite{Bang-Jensen_Gutin2001}.} and $A$ is a set of arcs, each arc being a pair of distinct nodes (the definition excludes loops as well as parallel arcs). For any arc $a=(u,v)$, we call $u$ the \textsl{tail} and $v$ the \textsl{head} of the arc $a$. Let $G=(N,A)$ be a directed graph. A \textsl{walk} in $G$ is a finite sequence $(u_1,v_1)$, $(u_2,v_2)$, \dots, $(u_k,v_k)$ of arcs with $k \ge 1$, $v_{i-1}=u_i$ for $i=2$, $3$, \dots, $k$. The latter walk \textsl{starts} at its \textsl{initial node} $u_1$ and \textsl{ends} at its \textsl{terminal node} $v_k$; it is \textsl{from} $u_1$ to $v_k$. It \textsl{passes} through its \textsl{internal nodes} $u_2$, $u_3$, \dots, $u_k$. The walk is a \textsl{path} when its nodes are pairwise distinct. A \textsl{cycle} in $G$ is defined like a path, except that $u_1=v_k$ is required. A directed graph is \textsl{acyclic} if it does not possess any cycle. In an acyclic graph $(N,A)$, any walk is a path because any acyclic graph has a so-called \textsl{topological sort}, that is, a linear ordering $L$ of its nodes such that for any arc $(u,v)$ there holds $u >_L v$. Although paths are by definition sequences of arcs, we often treat them as sets of arcs (for instance when we say that a path includes another one). In an acyclic graph, the set of arcs in a path determines the path (as a sequence of these arcs) in a unique way.
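These conventions are easy to operate with in code. The following sketch (in Python; the function names are ours, chosen for this illustration) computes a topological sort by Kahn's algorithm and recovers, in an acyclic graph, the unique path determined by a set of arcs:

```python
from collections import defaultdict, deque

def topological_sort(nodes, arcs):
    """Kahn's algorithm: list the nodes so that every arc goes from an
    earlier to a later node (in the paper's notation, earlier elements
    are larger under >_L); raise an error if the graph has a cycle."""
    succ = defaultdict(list)
    indeg = {v: 0 for v in nodes}
    for u, v in arcs:
        succ[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in nodes if indeg[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for w in succ[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    if len(order) != len(nodes):
        raise ValueError("the directed graph has a cycle")
    return order

def path_from_arc_set(arcs):
    """In an acyclic graph, the arc set of a path determines the path:
    sort the arcs by the position of their tails in a topological sort."""
    nodes = {n for arc in arcs for n in arc}
    pos = {u: i for i, u in enumerate(topological_sort(nodes, arcs))}
    return sorted(arcs, key=lambda arc: pos[arc[0]])

# The arcs of a path, given in scrambled order, are re-sequenced:
assert path_from_arc_set([('u', 'v'), ('s', 'u'), ('v', 't')]) \
    == [('s', 'u'), ('u', 'v'), ('v', 't')]
```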
Any set $B$ of arcs from $A$ (for example, $B$ is the set of arcs in a path) has its \textsl{characteristic vector} $\chi^B$ in $\mathbb{R}^{A}$: for any arc $a$ in $A$, we set $\chi^B(a) = 1$ if $a \in B$ and $\chi^B(a) = 0$ if $a \in A \setminus B$. For a point $x$ in $\mathbb{R}^{A}$ and $B \subseteq A$, define the number \begin{equation} x(B) := \sum_{a \in B} x(a). \end{equation} For each node $v$, we denote the sets of arcs with either head or tail $v$ by $\delta^-(v)$ and $\delta^+(v)$, respectively: \begin{eqnarray*} \delta^-(v) &:= &\{a \in A {\;\vrule height9pt width1pt depth1.5pt\;} \exists u \in N : a = (u,v)\},\\ \delta^+(v) &:= &\{a \in A {\;\vrule height9pt width1pt depth1.5pt\;} \exists w \in N : a = (v,w)\}, \end{eqnarray*} and define the \textsl{in-degree} and \textsl{out-degree} of $v$ by \begin{eqnarray*} d^-(v) &:= &|\delta^-(v)|,\\ d^+(v) &:= &|\delta^+(v)|. \end{eqnarray*} \subsection{Network Flows} A \textsl{network} $D = (N,A,s,t)$ is\footnote{Here we follow \cite{Korte_Vygen2008} and depart from \cite{Bang-Jensen_Gutin2001}. Notice however that we set no cost, no capacity on the arcs and especially that we postulate acyclicity of the graph.} an acyclic, directed graph $(N,A)$ in which two special nodes are designated as the \textsl{source} $s$ and the \textsl{sink} $t$. An \textsl{$s$--$t$ path} is a path starting at $s$ and ending at $t$. There are reasons to consider only acyclic networks $D$, rather than more general networks allowing for cycles. First, the results often take an interesting, simpler form (also, we do not have extensions of all the results presented here to general networks). Second, in the applications we have in view, the network happens to be acyclic (as in Sections~\ref{SE_MCP_flows} and \ref{SE_other}). Consider a network $D = (N,A,s,t)$ for the rest of the subsection.
A \textsl{flow (of value~$1$)} of $D$ is a point\footnote{In the literature, flows are often denoted by the letter $\Phi$; we prefer to use the letter $x$ because we view flows as particular points in the space $\mathbb R{}^A$. When writing the coordinate of the point $x$ w.r.t.\ an arc $(u,v)$, we abbreviate $x((u,v))$ into $x(u,v)$.} $x$ from $\mathbb{R}^{A}$, associating a nonnegative number $x(a)$ to each arc $a$ of the network, such that the outflow $x(\delta^{+}(v))$ equals the inflow $x(\delta^{-}(v))$ at each node $v$ distinct from the source $s$ and the sink $t$, and at the source~$s$ the outflow $x(\delta^{+}(s))$ equals $1$ plus the inflow $x(\delta^{-}(s))$. All flows of $D$ form a polytope in $\mathbb R{}^A$, because by their definition they are the solutions of the following system of affine (in)equalities on $\mathbb R{}^A$ \begin{equation}\label{EQ_conservation} \left\{ \begin{array}{rcl@{\qquad}l} x(\delta^{+}(v)) - x(\delta^{-}(v)) &= &0, &\forall v \in N \setminus \{s,t\},\\ x(\delta^{+}(s)) - x(\delta^{-}(s)) &= &1,\\ x(a) &\geqslant &0, &\forall a \in A, \end{array} \right. \end{equation} and they form a bounded set because for any flow $x$ and any $a$ in $A$ there holds $0 \le x(a) \le 1$ (the latter inequality follows for instance from Theorem~\ref{thm_flow_decomposition} below, or directly by proving, for any topological sort $L$ of the acyclic directed graph $(N,A)$ and any node $w$ in $N$, that the sum of the $x(u,v)$'s with $u >_L w \ge_L v$ equals $0$ or $1$---which is easily done by induction along the nodes $w$ in $L$). \begin{definition}\label{de_flow_polytope} The \textsl{(value\nobreak\hspace{.3em plus .08333em} $1$-) flow polytope} ${\cal F}(D)$ of a network $D$ consists of all flows of $D$, in other words of all points $x$ in $\mathbb R{}^A$ that satisfy the system in \eqref{EQ_conservation}. The latter system\footnote{In Section~\ref{se_Facets} we will remove repeated inequalities from the canonical description.
Note that the canonical description is an affine description, but not necessarily one of minimum size (as shown by Example~\ref{ex_a_network}).} is the \textsl{canonical (affine) description} of the flow polytope ${\cal F}(D)$. \end{definition} For any flow in ${\cal F}(D)$, the net inflow at $t$ equals $1$; in other words, the flow polytope moreover satisfies \begin{equation}\label{EQ_at_t} x(\delta^{+}(t)) - x(\delta^{-}(t)) \;=\;-1. \end{equation} This is derived from the equations in \eqref{EQ_conservation} together with \begin{equation}\label{EQ_sum_of_conservations} \left( \sum_{v\in N}\; x(\delta^{+}(v)) \right) - \left( \sum_{v\in N}\; x(\delta^{-}(v)) \right) \;= \;0. \end{equation} The latter equation holds because for any $a\in A$, the term $x(a)$ appears once in each of the two summations. There can be superfluous inequalities in the canonical description of ${\cal F}(D)$. If for some node $v$ we have $\delta^-(v)=\{(u,v)\}$ and $\delta^{+}(v)=\{(v,w)\}$, the conservation law at $v$ implies $x(u,v)=x(v,w)$ for any $x$ in ${\cal F}(D)$, and so we may keep only one of the two inequalities $x(u,v)\ge0$ and $x(v,w)\ge0$. Equation~\eqref{EQ_minimum} displays a minimum affine description of the polytope ${\cal F}(D)$. The next statement is the particular case for acyclic networks of the Flow Decomposition Theorem due to \cite{Gallai1958} and \cite{Ford_Fulkerson1962} (see also, for instance, \citealp{Korte_Vygen2008}, page\nobreak\hspace{.3em plus .08333em} 169). \begin{theorem}\label{thm_flow_decomposition} Consider a network $D=(N,A,s,t)$. Any flow $x$ of $D$ equals a convex combination of the characteristic vectors $\chi^P$ of the $s$--$t$ paths $P$ of $D$. \end{theorem} Because the converse of Theorem\nobreak\hspace{.3em plus .08333em} \ref{thm_flow_decomposition} also holds (as easily seen), and the $\chi^P$ are $0$--$1$ points, we derive a geometric reformulation.
\begin{theorem}\label{thm_geometric_reformulation} For any network $D=(N,A,s,t)$, the vertices of the flow polytope ${\cal F}(D)$ are exactly the characteristic vectors $\chi^P$ of all the $s$--$t$ paths $P$ of $D$. \end{theorem} \begin{figure}[ht] \begin{center} ~\hfill \begin{tikzpicture}[xscale=1,yscale=0.7,baseline=10mm] \tikzstyle{vertex}=[circle,draw,fill=white, scale=0.3] \node (s) at (0,0) [vertex,label=left:$s$] {}; \node (t) at (0,3) [vertex,label=left:$t$] {}; \node (u) at (1,0.7) [vertex,label=below:{$u$}] {}; \node (v) at (2,1.5) [vertex,label=right:{$v$}] {}; \node (w) at (1,2.3) [vertex,label=above:{$w$}] {}; \node (v1) at (2,0) [vertex,label=right:{$v_1$}] {}; \node (v2) at (2,3) [vertex,label=right:{$v_2$}] {}; \draw[->-=.7] (s) -- (t); \draw[->-=.7] (s) -- (u); \draw[->-=.7] (u) -- (v); \draw[->-=.7] (u) -- (w); \draw[->-=.7] (v) -- (w); \draw[->-=.7] (v1) -- (u); \draw[->-=.7] (v1) -- (v); \draw[->-=.7] (w) -- (v2); \draw[->-=.7] (v) -- (v2); \draw[->-=.7] (w) -- (t); \end{tikzpicture} \hfill $ \begin{array}{c@{\quad}|@{\quad}c@{\quad}c@{\quad}c} (s,t) & 1 & 0 & 0\\ (s,u) & 0 & 1 & 1\\ (u,v) & 0 & 0 & 1\\ (u,w) & 0 & 1 & 0\\ (v,w) & 0 & 0 & 1\\ (w,t) & 0 & 1 & 1\\ \text{any other arc} & 0 & 0 & 0 \end{array} $ \hfill~ \end{center} \caption{A network $D$ together with the ten coordinates (in columns) of the three vertices of the flow polytope ${\cal F}(D)$ (see Example~\ref{ex_a_network}). \label{fig_first_example} } \end{figure} \begin{example}\label{ex_a_network} Figure~\ref{fig_first_example} displays a network $D$. As $D$ has three $s$--$t$ paths, the flow polytope ${\cal F}(D)$ has three vertices (the characteristic vectors of the paths). The three columns contain the coordinates of the three vertices, respectively for the $s$--$t$ paths $(s,t)$, next $(s,u), (u,w), (w,t)$, and finally $(s,u), (u,v), (v,w), (w,t)$. The flow polytope ${\cal F}(D)$ is a convex triangle lying in a space of dimension $10$. 
Its canonical description is formed of six affine equalities and ten affine inequalities (so it is not a minimum-size affine description). \end{example} Many manuals on combinatorial optimization quote Theorem~\ref{thm_flow_decomposition}, which plays an important role in many applications. However, they do not say much about the geometric structure of the flow polytope $\mathcal F(D)$ of a network\nobreak\hspace{.3em plus .08333em} $D$. We collect some related information in subsequent sections. Note that for each arc $a$ in $A$, the inequality $x(a)\ge0$ defines a face of the flow polytope $\mathcal F(D)$ (as explained in Subsection~\ref{SEC_Basic}), whose vertices are the (characteristic vectors of the) $s$--$t$ paths avoiding~$a$; the latter property will often be used in the sequel. Proposition~\ref{PROP_FDI} characterizes the arcs for which the face is a facet. There are many variants of the flow polytope $\mathcal F(D)$: when each arc of the network comes with a maximum capacity (see for instance \citealp{Korte_Vygen2008}); for flows not satisfying the conservation law \citep{Borgwardt_De-Loera_Finhold2018}; or under restrictions on the $s$--$t$ paths \citep{Stephan2009}; etc. In the introduction, we mentioned that the MCP can be seen as a flow polytope. This result, due to \cite{Fiorini2004}, is explained in the next section. In Section~\ref{SE_other} we exhibit three other networks, whose flow polytopes play a role for the random utility models based on weak orders, interval orders, and semiorders, respectively. \section{The Multiple Choice Polytope and Falmagne Theorem} \label{SEC_MCP} Let ${\cal L}{\cal O}_{{\cal C}}$ be the collection of all linear orderings of the alternative set ${\cal C}$. Let moreover $\Lambda({\cal L}{\cal O}_{{\cal C}})$ be the collection of all probability distributions on ${\cal L}{\cal O}_{{\cal C}}$.
We also set \begin{equation}\label{Eq_E} E \;:=\; \{\, (i,S) {\;\vrule height9pt width1pt depth1.5pt\;} i \in S \in 2^{\cal C} \,\}. \end{equation} For each distribution $Pr$ in $\Lambda({\cal L}{\cal O}_{{\cal C}})$, the Multiple Choice Model (MCM) predicts\footnote{We use classical terminology related to probabilistic models, see for instance \cite{Doignon_Heller_Stefanutti2018}.} the various multiple choice probabilities $p(i,S)$ for $(i,S)\in E$ as \begin{equation}\label{EQ_p_iS_repeated} p(i,S) \;:=\; \sum\;\{\; Pr(L) \;{\;\vrule height9pt width1pt depth1.5pt\;}\; L\in{\cal L}{\cal O}_{\cal C} \;\text{and}\; \forall j \in S \setminus \{i\}:~i >_L j \;\}. \end{equation} We see the $p(i,S)$ as the coordinates of a point $p$ in $\mathbb R{}^E$. So the MCM is captured by the surjective mapping \begin{equation}\label{EQ_captured_by_f} f:~\Lambda({\cal L}{\cal O}_{{\cal C}}) \to \mathbb R{}^E:~ Pr \mapsto p. \end{equation} We extend $f$ to the mapping \begin{equation}\label{EQ_f_bar} \bar f:~\mathbb R{}^{{\cal L}{\cal O}_{{\cal C}}} \to \mathbb R{}^E:~ t \mapsto p \end{equation} by setting for $(i,S)\in E$ \begin{equation}\label{EQ_t} p(i,S) \;:=\; \sum\;\{\; t(L) \;{\;\vrule height9pt width1pt depth1.5pt\;}\; L\in{\cal L}{\cal O}_{\cal C} \;\text{and}\; \forall j \in S \setminus \{i\}:~i >_L j \;\}. \end{equation} Then $\bar f$ is a linear mapping (each coordinate of $\bar f(t)$ is a sum of coordinates of $t$). The set of points predicted by the MCM is equal to $f(\Lambda({\cal L}{\cal O}_{{\cal C}}))$, and also to $\bar f(\Lambda({\cal L}{\cal O}_{{\cal C}}))$. Because $\Lambda({\cal L}{\cal O}_{{\cal C}})$ is a simplex and $\bar f$ is a linear mapping, the predicted points form a convex polytope, which we call the \textsl{multiple choice polytope} (MCP) and denote as $\Pmc {\cal C}$.
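The forward map $f$ of \eqref{EQ_captured_by_f} is straightforward to implement. The small sketch below (in Python; the function name and the data encodings are ours, chosen for this illustration) computes every $p(i,S)$ for a distribution $Pr$ on the linear orderings of a three-element set ${\cal C}$:

```python
from itertools import combinations, permutations
from fractions import Fraction

def choice_probabilities(C, Pr):
    """Forward map f of the MCM: from a probability distribution Pr on
    linear orders (tuples listing C from best to worst), compute all
    multiple choice probabilities p(i, S) for i in S, S a nonempty
    subset of C (subsets encoded as sorted tuples)."""
    p = {}
    for k in range(1, len(C) + 1):
        for S in combinations(sorted(C), k):
            for i in S:
                # i is chosen from S by the orders ranking i above S \ {i}
                p[i, S] = sum(prob for L, prob in Pr.items()
                              if min(L.index(j) for j in S) == L.index(i))
    return p

# Uniform distribution over the six linear orders of C = {a, b, c}.
C = {'a', 'b', 'c'}
Pr = {L: Fraction(1, 6) for L in permutations(sorted(C))}
p = choice_probabilities(C, Pr)
assert p['a', ('a', 'b', 'c')] == Fraction(1, 3)            # by symmetry
assert sum(p[i, ('a', 'b', 'c')] for i in ('a', 'b', 'c')) == 1
```

The last assertion checks that the choice probabilities from the full set sum to one, as they must for any distribution $Pr$.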
In summary \begin{equation}\label{EQ_summary_f_brown} \begin{array}{c@{\qquad}c@{\qquad}c} \mathbb R{}^{{\cal L}{\cal O}_{{\cal C}}} & \stackrel{\bar f}{\longrightarrow} & \mathbb R{}^E \\ \cup & & \cup\\ \Lambda({\cal L}{\cal O}_{{\cal C}}) & \stackrel{f}{\longrightarrow} & \Pmc {\cal C} \\ \rotatebox[origin=c]{90}{$\in$} & & \rotatebox[origin=c]{90}{$\in$}\\ Pr & \stackrel{f}{\longmapsto} & p \end{array} \end{equation} Now for the probability distribution $Pr^L$ concentrated on the linear ordering $L$ of ${\cal C}$, denote by $p^L := f(Pr^L) = \bar f(Pr^L)$ the predicted point in $\Pmc {\cal C}$. The various $Pr^L$ are the vertices of the simplex $\Lambda({\cal L}{\cal O}_{\cal C})$. For $(i,S)\in E$, we have $p^L(i,S)$ equal to $1$ when $i >_L j$ for all $j \in S \setminus \{i\}$, and $0$ otherwise. The polytope $\Pmc {\cal C}$ is the convex hull of the images $p^L$ of the vertices $Pr^L$ of the simplex $\Lambda({\cal L}{\cal O}_{\cal C})$. Because the images $p^L$ have coordinates $0$ or $1$, they are the vertices of $\Pmc {\cal C}$. We reformulate the problem of characterizing the MCM as the problem of finding an affine description for the convex polytope MCP. As we saw in the introduction, \cite{Falmagne1978} proves that the MCP is exactly the solution set of the system of (his generalized) Block--Marschak inequalities. Moreover, \cite{Fiorini2004} provides another proof of Falmagne Theorem by viewing the MCP as a flow polytope. Let us explain this. For $i\in{\cal C}$ and $L \in {\cal L}{\cal O}_{{\cal C}}$, the \textsl{beginning set} $L^-(i)$ and the \textsl{ending set $L(i)$} are respectively \begin{align}\label{Eq_infty} L^-(i) \;&:=\; \{j\in{\cal C} {\;\vrule height9pt width1pt depth1.5pt\;} j \ge_L i \}\\ L(i) \;&:=\; \{ j\in{\cal C} {\;\vrule height9pt width1pt depth1.5pt\;} i \ge_L j \}.
\end{align} In the present paragraph, we consider a fixed distribution $Pr$ on ${\cal L}{\cal O}_{{\cal C}}$, predicting the point $p=f(Pr)$ in $\Pmc {\cal C}$. We moreover define for $i \in T \in 2^{\cal C}$ \begin{equation}\label{EQ_q_iT} q(i,T) \;:=\; \sum\;\{\; Pr(L) \;{\;\vrule height9pt width1pt depth1.5pt\;}\; L\in{\cal L}{\cal O}_{\cal C} \;\text{and}\; T = L(i) \;\}. \end{equation} Because, whenever $i$ is ranked first in $S$ by some linear order $L$, there is exactly one superset $T$ of $S$ with $T = L(i)$, there holds \begin{equation}\label{EQ_p_from_q} p(i,S) \;=\; \sum_{T\in 2^{\cal C}:\; T \supseteq S} q(i,T). \end{equation} From the previous equation there follows \begin{equation}\label{EQ_Mobius} q(i,T) \;=\; \sum_{S\in 2^{\cal C}:\; S \supseteq T} (-1)^{|S\setminus T|} \; p(i,S), \end{equation} by an application of the M\"obius inversion to the partially ordered set $(\{S\in 2^{\cal C}{\;\vrule height9pt width1pt depth1.5pt\;} i\in S\},\subseteq)$ (see for example \citealp{vanLint_Wilson2001}). By its definition in Equation~\eqref{EQ_q_iT}, $q(i,T)$ is nonnegative on $\Pmc {\cal C}$; therefore for all pairs $(i,T)$ in $E$ and $p$ in $\Pmc {\cal C}$ \begin{equation}\label{EQ_BM} \sum_{S\in 2^{\cal C}:\; S \supseteq T} (-1)^{|S\setminus T|} \; p(i,S) \;\ge\; 0. \end{equation} For $|T|=2$, \cite{Block_Marschak1960} prove that the last inequality holds for the MCM, and \cite{Falmagne1978} extends the result to all $T$'s. Just above, we followed \cite{Fiorini2004} to derive the validity of \eqref{EQ_BM} for $\Pmc {\cal C}$. Falmagne Theorem states that the system on $\mathbb R{}^E$ formed by all these affine inequalities, for $(i,T)\in E$, together with the obvious equations for $S$ in $2^{\cal C}$ \begin{equation}\label{EQ_obvious} \sum_{i \in S} p(i,S) \;=\; 1 \end{equation} has $\Pmc {\cal C}$ as solution set. Next comes a summary of Fiorini's proof.
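Before that summary, note that the inversion \eqref{EQ_Mobius} and the nonnegativity \eqref{EQ_BM} are easily verified numerically on small examples. The sketch below (in Python; names and encodings are ours) computes the $q(i,T)$ by M\"obius inversion from the $p(i,S)$ predicted by the uniform distribution, and checks that all generalized Block--Marschak expressions are nonnegative:

```python
from itertools import combinations, permutations
from fractions import Fraction

def mobius_q(p, C, i, T):
    """q(i, T) recovered from the p(i, S) by Mobius inversion: an
    alternating sum over the supersets S of T (subsets as sorted tuples)."""
    rest = [j for j in sorted(C) if j not in T]
    total = Fraction(0)
    for k in range(len(rest) + 1):
        for extra in combinations(rest, k):    # |S \ T| = k
            S = tuple(sorted(set(T) | set(extra)))
            total += (-1) ** k * p[i, S]
    return total

# p(i, S) predicted by the uniform distribution on the orders of C.
C = ('a', 'b', 'c')
orders = list(permutations(C))
p = {(i, S): Fraction(sum(1 for L in orders
                          if min(L.index(j) for j in S) == L.index(i)),
                      len(orders))
     for k in range(1, 4) for S in combinations(C, k) for i in S}

# q(i, T) is the probability that T is the ending set L(i); each value
# is a generalized Block-Marschak expression and must be nonnegative.
assert mobius_q(p, C, 'a', ('a',)) == Fraction(1, 3)   # a ranked last
assert all(mobius_q(p, C, i, S) >= 0 for (i, S) in p)
```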
Consider the network $\Dlo {\cal C} = (2^{\cal C},\prec,\varnothing{},{\cal C})$ where the nodes are the subsets of ${\cal C}$, the arcs are the covering pairs of the inclusion relation on $2^{\cal C}$ (that is, all pairs $(T\setminus\{i\},T)$ for $i \in T \in 2^{\cal C}$), the source is the empty set $\varnothing{}$, and the sink is ${\cal C}$. Denote by $\Flo {\cal C}$ the flow polytope of the network $\Dlo {\cal C}$, which lies in the space $\mathbb R{}^A$ for $A=\prec$. Define now a mapping $\rho$ by \begin{equation}\label{EQ_f} \rho:~ \mathbb R{}^E \to \mathbb R{}^A:~p \mapsto r, \end{equation} where for $(T\setminus\{i\},T)$ in $A$ we set \begin{equation}\label{EQ_r_q} r(T\setminus\{i\},T) \;:=\; q(i,T) \end{equation} with $q(i,T)$ as in \eqref{EQ_Mobius}. Note that $\rho$ is a linear mapping (each coordinate of $\rho(p)$ is a linear combination of coordinates of $p$). Moreover, $\rho$ has an inverse equal to the mapping \begin{equation}\label{EQ_g} \sigma:~ \mathbb R{}^A \to \mathbb R{}^E:~r \mapsto p, \end{equation} with $p(i,S)$ given by a rewriting of \eqref{EQ_p_from_q}: \begin{equation}\label{EQ_p_r} p(i,S) \;=\; \sum_{T\in 2^{\cal C}:\; S \subseteq T} r(T\setminus\{i\},T). \end{equation} The mapping $\rho$ induces a bijection from the vertices of the multiple choice polytope $\Pmc {\cal C}$ to the vertices of the flow polytope $\Flo {\cal C}$: for any order $L$ with \begin{equation}\label{EQ_>_L} i_1 \quad>_L\quad i_2 \quad>_L\quad \dots \quad>_L\quad i_n \end{equation} $\rho$ maps the vertex $p^L$ of $\Pmc {\cal C}$ onto the vertex of $\Flo {\cal C}$ which is the characteristic vector of the $s$--$t$ path \begin{equation}\label{EQ_path_for_L} (\varnothing{},\{i_1\}),\quad (\{i_1\},\{i_1,i_2\}),\quad \dots,\quad (\{i_1,i_2,\dots,i_{n-1}\},{\cal C}) \end{equation} (so the beginning sets of $L$ are the nodes on the $\varnothing{}$--${\cal C}$ path, in the same order). 
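To make the correspondence concrete, the following few lines of Python (a sketch with our own naming) build, from a linear order given as a tuple listing ${\cal C}$ from best to worst, the sequence of arcs of the associated $\varnothing$--${\cal C}$ path in $\Dlo {\cal C}$:

```python
def path_of_order(L):
    """Arcs of the s--t path of the network D_LO associated with a linear
    order L, given as a tuple listing the alternatives from best to worst:
    the nodes visited are the beginning sets of L, from the empty set
    (the source) up to the full set C (the sink)."""
    arcs, beginning = [], frozenset()
    for i in L:
        arcs.append((beginning, beginning | {i}))
        beginning = beginning | {i}
    return arcs

# For the order a > b > c, the path visits {}, {a}, {a,b}, {a,b,c}.
path = path_of_order(('a', 'b', 'c'))
assert path[0] == (frozenset(), frozenset({'a'}))
assert path[-1] == (frozenset({'a', 'b'}), frozenset({'a', 'b', 'c'}))
```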
Consequently, the invertible linear mapping $\rho$ from $\mathbb R{}^E$ to $\mathbb R{}^A$ (where $A=\prec$) transforms the multiple choice polytope $\Pmc {\cal C}$ into the flow polytope $\Flo {\cal C}$. Falmagne Theorem now follows at once from Theorem~\ref{thm_geometric_reformulation}\footnote{\cite{Fiorini2004} rather refers to the total unimodularity of a certain matrix.} for the particular network $(2^{\cal C},\prec,\varnothing{},{\cal C})$. The proof in \cite{Fiorini2004} shows the interest of flow polytopes for solving formal problems appearing in mathematical psychology. More flow polytopes play a central role in \cite{Davis-Stober_Doignon_Fiorini_Glineur_Regenwetter2018} (see our Section~\ref{SE_other}). Very recently, flow polytopes have made their appearance in theoretical economics papers: for instance, \cite{Turansick2022} uses them to analyze the identification of the multiple choice model. Also, \cite{Chang_Narita_Saito2022} refers in a proof to the adjacency of vertices on the multiple choice polytope. In the next section we characterize the adjacency on any flow polytope, thus covering the adjacency on the multiple choice polytope as a particular case. \section{Adjacency of Vertices on a Flow Polytope} \label{se_Adjacency_of_Vertices} In this section and the next three ones, we consider the flow polytope $\mathcal F(D)$ of a network $D=(N,A,s,t)$. We may assume that $D$ has at least one $s$--$t$ path, because otherwise $\mathcal F(D)$ is empty. A characterization of the adjacency of vertices on a flow polytope is the object of Proposition~\ref{PROP_path_adj} below. By Theorem~\ref{thm_geometric_reformulation}, the vertices of ${\cal F}(D)$ are the characteristic vectors $\chi^P$ of the $s$--$t$ paths $P$ of $D$. \begin{lemma}\label{lem_smallest_face} Let $\chi^{P_1}$, $\chi^{P_2}$, \dots, $\chi^{P_k}$ be vertices of the flow polytope ${\cal F}(D)$, that is, the characteristic vectors of $s$--$t$ paths $P_1$, $P_2$, \dots, $P_k$ of the network $D$.
The vertices of the smallest face of ${\cal F}(D)$ containing $\chi^{P_1}$, $\chi^{P_2}$, \dots, $\chi^{P_k}$ are exactly the vertices $\chi^R$ for $R$ an $s$--$t$ path such that $R \subseteq P_1 \cup P_2 \cup \dots \cup P_k$. \end{lemma} \begin{proof} Let $U:=P_1 \cup P_2 \cup \dots \cup P_k$, and $F$ be the face of ${\cal F}(D)$ defined by the inequality \begin{equation}\label{EQ_smallest_face} \sum_{a\in A\setminus U} x(a) \;\ge\; 0. \end{equation} Any vertex of ${\cal F}(D)$ equals $\chi^P$ for some $s$--$t$ path $P$; this vertex $\chi^P$ belongs to $F$ if and only if $a \notin P$ for each $a \in A\setminus U$ (so that the coordinate $x(a)$ takes value $0$ at $\chi^P$), that is, if and only if $P \subseteq U$. It remains to prove that the face $F$ is the smallest face of ${\cal F}(D)$ containing $\chi^{P_1}$, $\chi^{P_2}$, \dots, $\chi^{P_k}$. Let $G$ be any facet of ${\cal F}(D)$; thus $G$ is defined by the inequality $x(b)\ge0$ for some arc $b$ of $D$. If $G$ contains $\chi^{P_1}$, $\chi^{P_2}$, \dots, $\chi^{P_k}$ then $b \in A \setminus U$. Therefore $F \subseteq G$ (because if \eqref{EQ_smallest_face} is satisfied with equality at some point $x$ of ${\cal F}(D)$, then $x(b)=0$). Hence any facet containing $\chi^{P_1}$, $\chi^{P_2}$, \dots, $\chi^{P_k}$ includes $F$. Thus $F$ is the smallest face of ${\cal F}(D)$ containing $\chi^{P_1}$, $\chi^{P_2}$, \dots, $\chi^{P_k}$. \end{proof} \begin{proposition}\label{PROP_path_adj} Let $P$ and $Q$ be two $s$--$t$ paths of a network $D=(N,A,s,t)$. The vertices $\chi^P$ and $\chi^Q$ of $\mathcal F(D)$ are adjacent if and only if \begin{quote} \textrm{($*$)}~whenever $P$ and $Q$ pass through a common internal node $v$, then $P$ and $Q$ coincide either before $v$ or after $v$.
\end{quote} \end{proposition} \begin{proof} By Lemma\nobreak\hspace{.3em plus .08333em} \ref{lem_smallest_face}, a vertex $\chi^R$ of $\mathcal F(D)$ (for some $s$--$t$ path $R$) belongs to the smallest face containing $\chi^P$ and $\chi^Q$ if and only if $R \subseteq P \cup Q$. If $P$ and $Q$ do not satisfy~($*$) for some common internal node\nobreak\hspace{.3em plus .08333em} $v$, we form a walk $R$ from $s$ to $t$ by following $P$ from $s$ to $v$, next $Q$ from $v$ to $t$. Because of acyclicity, $R$ must be an $s$--$t$ path, and so the vertex $\chi^R$ belongs to the smallest face containing $\chi^P$ and $\chi^Q$. Because $\chi^R$ differs from both $\chi^P$ and $\chi^Q$, the two latter vertices are nonadjacent. Conversely, assume that ($*$) holds. We prove that the smallest face of $\mathcal F(D)$ containing the vertices $\chi^P$ and $\chi^Q$ does not contain any further vertex. Proceeding by contradiction, assume such a third vertex $\chi^R$ does exist. Then $R$ is an $s$--$t$ path such that $R \subseteq P \cup Q$ and $R\neq P, Q$. Now let $(u,u')$ be the first arc of $R$ which lies outside $P$ or outside $Q$. Assume $(u,u')\notin Q$, and thus $(u,u')\in P$ (otherwise, exchange the notations $P$, $Q$). Because $R\neq P$, there must be a first arc $(v,v')$ in $R$ after $(u,u')$ such that $(v,v')\notin P$. So $(v,v')\in Q$ in view of $R \subseteq P \cup Q$. Then the node $v$ shows that Condition\nobreak\hspace{.3em plus .08333em} ($*$) does not hold, a contradiction. \end{proof} \begin{remark}\label{REM_NP_MT} In the notation of the second paragraph of the proof above, we can create a second $s$--$t$ path $S$ by following $Q$ from $s$ to $v$, next $P$ from $v$ to $t$. We have then $(\chi^P + \chi^Q)/2 = (\chi^R + \chi^S)/2$ because the equality holds for each coordinate $x(a)$, where $a\in A$. 
Consequently, the flow polytope\nobreak\hspace{.3em plus .08333em} $\mathcal F(D)$ is a \textsl{combinatorial polytope} in the sense of \cite{Naddef_Pulleyblank1981}: it is a $0/1$-polytope in which for any pair of nonadjacent vertices, there is another pair of vertices having the same midpoint as the first pair. As a matter of fact, the last assertion follows also from \cite{Matsui_Tamura1995}. Any flow polytope ${\cal F}(D)$ is an \textsl{equality constraint polytope}, that is, its set of vertices is the set of $0$--$1$ points satisfying a given system of affine equations (in our case, the equalities in the canonical description of ${\cal F}(D)$). It is thus also a polytope satisfying Properties~A and B of Matsui and Tamura. Consequently all the findings of Matsui and Tamura hold for ${\cal F}(D)$, for instance those about linear optimization, or the fact that ${\cal F}(D)$ is a combinatorial polytope. However, the results we present on flow polytopes (in particular on the MCP) differ in that they refer to $s$--$t$ paths and thus require the networks from which the polytopes are built. 
\end{remark} \begin{figure}[ht] \begin{center} \begin{tikzpicture}[scale=1] \tikzstyle{vertex}=[circle,draw,fill=white, scale=0.3] \node (s) at (0,0) [vertex,label=left:$s$] {}; \node (t) at (0,8) [vertex,label=left:$t$] {}; \node (u_1) at (-1,1) [vertex,label=left:{$u_1$}] {}; \node (v_1) at ( 1,1) [vertex] {}; \node (w_1) at ( 0,2) [vertex,label=left:{$w_1$}] {}; \node (u_2) at (-1,3) [vertex,label=left:{$u_2$}] {}; \node (v_2) at ( 1,3) [vertex] {}; \node (w_2) at ( 0,4) [vertex,label=left:{$w_2$}] {}; \node (w) at ( 0,6) [vertex,label=left:{$w_{d-1}$}] {}; \node (u_d) at (-1,7) [vertex,label=left:{$u_d$}] {}; \node (v_d) at ( 1,7) [vertex] {}; \draw[->-=.7] (s) -- (u_1); \draw[->-=.7] (u_1) -- (w_1); \draw[->-=.7] (w_1) -- (u_2); \draw[->-=.7] (u_2) -- (w_2); \draw[->-=.7] (w) -- (u_d); \draw[->-=.7] (u_d) -- (t); \node at (0,5) {$\vdots$}; \draw[->-=.7] (s) -- (v_1); \draw[->-=.7] (v_1) -- (w_1); \draw[->-=.7] (w_1) -- (v_2); \draw[->-=.7] (v_2) -- (w_2); \draw[->-=.7] (w) -- (v_d); \draw[->-=.7] (v_d) -- (t); \end{tikzpicture} \end{center} \caption{\label{fig_towards_a_cube}A network for Example~\ref{EX_TOWARDS_A_CUBE}, for each natural number $d$ with $d\ge1$.} \end{figure} \begin{example}\label{EX_TOWARDS_A_CUBE} For the network $D$ in Figure~\ref{fig_towards_a_cube}, it is an exercise to check that the flow polytope ${\cal F}(D)$ is a $d$-dimensional $0/1$-cube (the vertices of ${\cal F}(D)$ are completely specified by the values, $0$ or $1$, of the coordinates $x(u_1,w_1)$, $x(u_2,w_2)$, \dots, $x(u_{d-1},w_{d-1})$, and $x(u_d,t)$). As announced in Remark~\ref{REM_NP_MT}, it is indeed a combinatorial polytope. Moreover, the diameter of (the graph of) the flow polytope equals~$d$. \end{example} \section{The Dimension of a Flow Polytope} \label{se_Dimension} Consider again the flow polytope $\mathcal F(D)$ of a network $D=(N,A,s,t)$, assuming that $D$ has at least one $s$--$t$ path.
Let $\widetilde A$ denote the subset of $A$ formed by all arcs of $D$ that belong to at least one $s$--$t$ path, and let $\widetilde N$ be the subset of $N$ formed by all nodes of $D$ that appear on at least one arc in $\widetilde A$. The network $\widetilde D = (\widetilde N, \widetilde A, s, t)$ is called the \textsl{reduced network} of $D$, or the \textsl{reduction} of $D$ (for an illustration, see Figure~\ref{fig_corridors}). For any node $u$ of $\widetilde N$, denote by $\widetilde\delta^-(u)$, resp.\ $\widetilde\delta^+(u)$, the set of arcs in $\widetilde A$ with head, resp.\ tail $u$. By Theorem\nobreak\hspace{.3em plus .08333em} \ref{thm_flow_decomposition}, the flow polytope $\mathcal F(D)$ satisfies $x(a)=0$ for any arc in $A \setminus \widetilde A$. Thus the flow polytopes ${\cal F}(\widetilde D)$ and $\mathcal F(D)$ are essentially the same polytope (they become equal when we naturally identify the space $\mathbb R{}^{\widetilde A}$ with the linear subspace of the space $\mathbb R{}^A$ specified by $x(a)=0$ for all $a\in A \setminus \widetilde A$). A network $D$ is \textsl{reduced} if $D=\widetilde D$.
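Computing the reduction is a matter of two graph traversals: an arc $(u,v)$ lies on some $s$--$t$ path exactly when $u$ is reachable from $s$ and $t$ is reachable from $v$. Here is a sketch in Python (the function names are ours), run on the nonreduced network of Figure~\ref{fig_corridors}:

```python
from collections import defaultdict

def reduction(nodes, arcs, s, t):
    """Reduced network of an acyclic network: keep exactly the arcs that
    lie on at least one s--t path, that is, the arcs (u, v) with u
    reachable from s and t reachable from v; keep the nodes they use."""
    def reachable(start, succ):
        seen, stack = {start}, [start]
        while stack:
            for w in succ[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen
    succ, pred = defaultdict(list), defaultdict(list)
    for u, v in arcs:
        succ[u].append(v)
        pred[v].append(u)
    from_s = reachable(s, succ)   # nodes reachable from the source
    to_t = reachable(t, pred)     # nodes from which the sink is reachable
    red_arcs = {(u, v) for (u, v) in arcs if u in from_s and v in to_t}
    red_nodes = {s, t} | {n for arc in red_arcs for n in arc}
    return red_nodes, red_arcs

# The nonreduced network on the left of the figure: v1 and v2 disappear.
nodes = {'s', 't', 'u', 'v', 'w', 'v1', 'v2'}
arcs = {('s', 't'), ('s', 'u'), ('u', 'v'), ('u', 'w'), ('v', 'w'),
        ('v1', 'u'), ('v1', 'v'), ('w', 'v2'), ('v', 'v2'), ('w', 't')}
red_nodes, red_arcs = reduction(nodes, arcs, 's', 't')
assert red_nodes == {'s', 't', 'u', 'v', 'w'}
assert len(red_arcs) == 6
```

For this example the reduction has $6$ arcs and $5$ nodes; Proposition~\ref{prop_dim} below then gives dimension $6-5+1=2$, matching the triangle of Example~\ref{ex_a_network}.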
\begin{figure}[ht] \begin{center} ~\hfill \begin{tikzpicture}[xscale=1,yscale=0.7] \tikzstyle{vertex}=[circle,draw,fill=white, scale=0.3] \node (s) at (0,0) [vertex,label=left:$s$] {}; \node (t) at (0,3) [vertex,label=left:$t$] {}; \node (u) at (1,0.7) [vertex,label=below:{$u$}] {}; \node (v) at (2,1.5) [vertex,label=right:{$v$}] {}; \node (w) at (1,2.3) [vertex,label=above:{$w$}] {}; \node (v1) at (2,0) [vertex,label=right:{$v_1$}] {}; \node (v2) at (2,3) [vertex,label=right:{$v_2$}] {}; \draw[->-=.7] (s) -- (t); \draw[->-=.7] (s) -- (u); \draw[->-=.7] (u) -- (v); \draw[->-=.7] (u) -- (w); \draw[->-=.7] (v) -- (w); \draw[->-=.7] (v1) -- (u); \draw[->-=.7] (v1) -- (v); \draw[->-=.7] (w) -- (v2); \draw[->-=.7] (v) -- (v2); \draw[->-=.7] (w) -- (t); \end{tikzpicture} \hfill \begin{tikzpicture}[xscale=1,yscale=0.7] \tikzstyle{vertex}=[circle,draw,fill=white, scale=0.3] \node (s) at (0,0) [vertex,label=left:$s$] {}; \node (t) at (0,3) [vertex,label=left:$t$] {}; \node (u) at (1,0.7) [vertex,label=below:{$u$}] {}; \node (v) at (2,1.5) [vertex,label=right:{$v$}] {}; \node (w) at (1,2.3) [vertex,label=above:{$w$}] {}; \draw[->-=.7] (s) -- (t); \draw[->-=.7] (s) -- (u); \draw[->-=.7] (u) -- (v); \draw[->-=.7] (u) -- (w); \draw[->-=.7] (v) -- (w); \draw[->-=.7] (w) -- (t); \end{tikzpicture} \hfill~ \end{center} \caption{On the left, a nonreduced network; on the right, its reduction. \label{fig_corridors} } \end{figure} \begin{proposition}\label{prop_dim} Suppose the network $D = (N,A,s,t)$ has at least one $s$--$t$ path, and let $\widetilde D = (\widetilde N,\widetilde A,s,t)$ be its reduced network. Then the dimension of the flow polytope $\mathcal F(D)$ equals $|\widetilde A| - |\widetilde N| + 1$.
\end{proposition} \begin{proof} As we saw in the paragraph before the statement, we may identify $\mathcal F(D)$ with ${\cal F}(\widetilde D)$, a polytope lying in $\mathbb R{}^{\widetilde A}$. By definition, ${\cal F}(\widetilde D)$ is the solution set of the system on $\mathbb R{}^{\widetilde A}$ \begin{equation}\label{EQ_conservation_tilde} \left\{\begin{array}{rcl@{\quad}l} x(\widetilde \delta^{+}(v)) - x(\widetilde \delta^{-}(v)) &= &0, &\forall v \in \widetilde N \setminus \{s,t\},\\ x(\widetilde \delta^{+}(s)) - x(\widetilde \delta^{-}(s)) &= &1,\\ x(a) &\geqslant &0, &\forall a \in \widetilde A. \end{array} \right. \end{equation} Hence ${\cal F}(\widetilde D)$ lies in the subspace of $\mathbb R{}^{\widetilde A}$ defined by the $|\widetilde N|-1$ affine equations in \eqref{EQ_conservation_tilde}. We first show that the subspace has dimension at most $|\widetilde A| - (|\widetilde N|-1)$ by establishing that the $|\widetilde N|-1$ affine equations are independent. It suffices to exhibit, for each of the equalities in \eqref{EQ_conservation_tilde}, a point in $\mathbb R{}^{\widetilde A}$ which satisfies all equalities in \eqref{EQ_conservation_tilde} but the one considered. Let first $v$ be a node in $\widetilde N\setminus\{s,t\}$. Take any path $U$ in $(\widetilde N, \widetilde A,s,t)$ from $s$ to $v$ (such a path exists because $v$ is on some $s$--$t$ path). The characteristic vector $\chi^U$ satisfies all inequalities in \eqref{EQ_conservation_tilde} as well as all equalities but the one for $v$. Second, assume $v=s$. The null vector in $\mathbb R{}^{\widetilde A}$ does the job. From the previous paragraph, $\dim {\cal F}(\widetilde D) \le |\widetilde A| - (|\widetilde N|-1)$. To prove the opposite inequality, we show the existence of $1 + |\widetilde A| - (|\widetilde N|-1)$ affinely independent vertices in ${\cal F}(\widetilde D)$ (Remark\nobreak\hspace{.3em plus .08333em} \ref{rem_alternate_argument} below provides an alternate argument).
Because the reduced network $\widetilde D$ is acyclic, it admits a topological sort $L$ of its nodes, say \begin{equation} u_1 \quad>_L\quad u_2 \quad>_L\quad \dots \quad>_L\quad u_m, \end{equation} with $u >_L v$ for any arc $(u,v)$ in $\widetilde A$ and $m=|\widetilde N|$ (necessarily $u_1=s$ and $u_m=t$ in view of the definition of $\widetilde D$). Now for each node $u$ distinct from $u_1$, paint in green one arbitrarily chosen arc in $\widetilde A$ with head $u$. Thus $|\widetilde N| - 1$ arcs were just painted in green; paint in blue all the other arcs. Form a first $s$--$t$ path $P_G$ using only green arcs. This path is uniquely determined: its last arc is the green arc $(u_k,u_m)$ with head $u_m$ (for some unique $k$), the arc before $(u_k,u_m)$ is the green arc with head $u_k$, etc. Next, for any of the $|\widetilde A| - (|\widetilde N|-1)$ blue arcs, say $(u,v)$, form an $s$--$t$ path by first following green arcs from $s$ to $u$ (there is only one suitable sequence of green arcs), then following the blue arc $(u,v)$, and finally following arcs (green or blue) from $v$ to $t$ (such arcs do exist because $v$ is on some $s$--$t$ path). The characteristic vectors of the resulting $s$--$t$ paths, in number $1 + |\widetilde A| - (|\widetilde N| - 1)$, are affinely independent, as we next show. Build as follows a list $M$ of the $|\widetilde A| - |\widetilde N| + 2$ $s$--$t$ paths we just constructed: $M$ collects first, in any order, all the $s$--$t$ paths formed for the blue arcs with tail $u_1$ (if any); next, in any order, the $s$--$t$ paths formed for the blue arcs with tail $u_2$ (if any); \dots; the $s$--$t$ paths formed for the blue arcs with tail $u_{m-1}$ (if any); finally, the last item in the list $M$ is the $s$--$t$ path $P_G$ consisting only of green arcs. Then the characteristic vector of any $s$--$t$ path $P$ in $M$ distinct from $P_G$ is affinely independent from the characteristic vectors of all the $s$--$t$ paths listed in $M$ after $P$.
Indeed, if $P$ were formed for the blue arc $(u,v)$, then $(u,v)$ belongs to $P$ but not to any of the $s$--$t$ paths listed after $P$ in $M$. Thus the characteristic vector $\chi^P$ satisfies $x(u,v)\neq 0$ while all the characteristic vectors of the $s$--$t$ paths after $P$ in $M$ satisfy $x(u,v) = 0$. \end{proof} \begin{remark}\label{rem_alternate_argument} The proof of the second inequality can be replaced with a call to Theorem~5.6 of \cite{Schrijver2003}. Because no inequality $x(a)\ge0$, for $a\in \widetilde A$, is satisfied with equality by ${\cal F}(\widetilde D)$, the dimension of ${\cal F}(\widetilde D)$ equals $|\widetilde A|$ (the dimension of the space in which ${\cal F}(\widetilde D)$ lies) minus the rank of the matrix of coefficients of the variables in the affine equations in \eqref{EQ_conservation_tilde}. From the first half of the proof, we know that the rank equals $|\widetilde N|-1$. \end{remark} \section{The Facets of a Flow Polytope} \label{se_Facets} We now aim at recognizing the facets of the flow polytope $\mathcal F(D)$ of a network $D=(N,A,s,t)$. In view of the canonical description of $\mathcal F(D)$ in~\eqref{EQ_conservation}, any facet is necessarily defined by an inequality $x(a)\ge0$ for some arc $a$ in $\widetilde A$ (remember from Section~\ref{se_Dimension} that for $b \in A \setminus \widetilde A$, the flow polytope $\mathcal F(D)$ satisfies $x(b)=0$). Proposition~\ref{PROP_FDI} below characterizes the arcs $a$ such that $x(a)\ge0$ defines a facet of $\mathcal F(D)$, referring to the notions of `corridors' and `good arcs' (see Example~\ref{ex_corridors} and Figure~\ref{fig_corridors} for an illustration). For a node $u$ in the network $D=(N,A,s,t)$, set $\widetilde d^-(u) = |\widetilde\delta^-(u)|$ and $\widetilde d^+(u) = |\widetilde\delta^+(u)|$.
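As a quick numerical illustration of Proposition~\ref{prop_dim} (an informal Python sketch of ours, not part of the formal development; the helper names `st_paths` and `affine_dim` are our own), one can enumerate the $s$--$t$ paths of the reduced network on the right of Figure~\ref{fig_corridors} and compute the affine dimension of their characteristic vectors:

```python
from fractions import Fraction

def st_paths(arcs, s, t):
    """All s--t paths of an acyclic network, each path as a tuple of arcs."""
    out = {}
    for u, v in arcs:
        out.setdefault(u, []).append(v)
    paths, stack = [], [(s, [])]
    while stack:
        u, path = stack.pop()
        if u == t:
            paths.append(tuple(path))
            continue
        for v in out.get(u, []):
            stack.append((v, path + [(u, v)]))
    return paths

def affine_dim(points):
    """Dimension of the affine hull of the points: the linear rank of the
    differences with the first point (exact rational Gaussian elimination)."""
    rows = [[Fraction(x - y) for x, y in zip(p, points[0])] for p in points[1:]]
    rank = 0
    for col in range(len(points[0])):
        pivot = next((r for r in rows if r[col] != 0), None)
        if pivot is None:
            continue
        rows = [[x - (r[col] / pivot[col]) * y for x, y in zip(r, pivot)]
                for r in rows if r is not pivot]
        rank += 1
    return rank

# Reduced network on the right of the figure: 5 nodes, 6 arcs.
arcs = [("s", "t"), ("s", "u"), ("u", "v"), ("u", "w"), ("v", "w"), ("w", "t")]
nodes = {x for arc in arcs for x in arc}
vertices = [tuple(int(a in p) for a in arcs) for p in st_paths(arcs, "s", "t")]
assert affine_dim(vertices) == len(arcs) - len(nodes) + 1  # both equal 2
```

The three paths $s\,t$, $s\,u\,w\,t$ and $s\,u\,v\,w\,t$ give three affinely independent vertices, matching $|\widetilde A|-|\widetilde N|+1 = 6-5+1 = 2$.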
\begin{definition}\label{DEF_corridor} A \textsl{corridor} of the network $D$ is a path of the reduced network $\widetilde D = (\widetilde N, \widetilde A, s, t)$ \begin{equation}\label{EQ_corridor} (u_1,u_2), \quad (u_2,u_3), \quad \dots,\quad (u_{m-1},u_m) \end{equation} such that \begin{gather} \widetilde d^-(u_2) = \widetilde d^+(u_2) = \widetilde d^-(u_3) = \widetilde d^+(u_3) = \dots = \widetilde d^-(u_{m-1}) = \widetilde d^+(u_{m-1}) = 1 \end{gather} which is maximal (w.r.t.\ the inclusion of arc sets) for this property, that is \begin{equation}\label{eq_maximality} \big(\widetilde d^-(u_1) \neq1 \text{ or } \widetilde d^+(u_1)\neq1\big) \;\;\text{and}\;\; \big(\widetilde d^-(u_m) \neq1 \text{ or } \widetilde d^+(u_m) \neq1\big). \end{equation} The corridor in \eqref{EQ_corridor} is \textsl{good} when $\widetilde d^+(u_1) \ge 2$ and $\widetilde d^-(u_m) \ge 2$. An arc is \textsl{good} if it belongs to some good corridor. We call arcs or corridors \textsl{bad} if they are not good. \end{definition} \begin{example}\label{ex_corridors} The network $D$ on the left in Figure~\ref{fig_corridors} is not reduced. Its reduction $\widetilde D$ is on the right. Both networks have three good corridors, namely \begin{equation} (s,t),\qquad (u,w),\qquad \text{and} \qquad (u,v), \quad (v,w), \end{equation} and two bad corridors, namely \begin{equation} (s,u)\qquad \text{and} \qquad (w,t). \end{equation} \end{example} Definition~\ref{DEF_corridor} implies that no arc in $A \setminus \widetilde A$ belongs to any corridor, while each arc $a$ in $\widetilde A$ belongs to a unique corridor (sometimes reduced to itself), which we denote as $\mathop{\mathrm{cor}}(a)$. In other words, the corridors of the network $D=(N,A,s,t)$ form a partition of $\widetilde A$. Moreover, if an $s$--$t$ path contains any arc of some corridor, then it includes the whole corridor.
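To make Definition~\ref{DEF_corridor} concrete, here is an informal Python sketch (ours, not part of the formal development; the helpers `corridors` and `is_good` are hypothetical names) that partitions the arcs of the reduced network of Figure~\ref{fig_corridors} into corridors and classifies them as in Example~\ref{ex_corridors}:

```python
from collections import Counter

def corridors(arcs):
    """Partition the arcs of a reduced network into corridors: maximal paths
    whose internal nodes all have in-degree 1 and out-degree 1."""
    indeg = Counter(v for _, v in arcs)
    outdeg = Counter(u for u, _ in arcs)
    nodes = {x for arc in arcs for x in arc}
    interior = {x for x in nodes if indeg[x] == 1 and outdeg[x] == 1}
    succ = {u: v for u, v in arcs if u in interior}  # unique out-neighbour
    result = []
    for u, v in arcs:
        if u in interior:
            continue  # this arc continues a corridor started at an earlier arc
        corridor = [(u, v)]
        while v in interior:
            corridor.append((v, succ[v]))
            v = succ[v]
        result.append(corridor)
    return result

def is_good(corridor, arcs):
    """A corridor is good when its first node has out-degree at least 2
    and its last node has in-degree at least 2."""
    indeg = Counter(v for _, v in arcs)
    outdeg = Counter(u for u, _ in arcs)
    return outdeg[corridor[0][0]] >= 2 and indeg[corridor[-1][1]] >= 2

# Reduced network on the right of the figure.
arcs = [("s", "t"), ("s", "u"), ("u", "v"), ("u", "w"), ("v", "w"), ("w", "t")]
cors = corridors(arcs)
good = [c for c in cors if is_good(c, arcs)]
bad = [c for c in cors if not is_good(c, arcs)]
```

The sketch recovers the three good corridors $[(s,t)]$, $[(u,w)]$, $[(u,v),(v,w)]$ and the two bad ones $[(s,u)]$, $[(w,t)]$ listed in Example~\ref{ex_corridors}.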
For the corridor in \eqref{EQ_corridor}, the flow polytope satisfies \begin{equation}\label{EQ_equalities} x(u_1,u_2) \;=\; x(u_2,u_3) \;=\; \dots \;=\; x(u_{m-1},u_m) \end{equation} (because of the conservation law at nodes $u_2$, $u_3$, \dots, $u_{m-1}$). In the canonical description of ${\cal F}(D)$, from all the inequalities $x(u_{i-1},u_i)\ge0$ for $i=2$, $3$, \dots, $m$, we keep only one, namely $x(u_1,u_2)\ge0$. \begin{lemma}\label{lem_degree_1} Let $D = (N,A,s,t)$ be a network, and $(u,v)$ be an arc in $\widetilde A$ satisfying at least one of the two following conditions: \begin{enumerate}[\quad\rm(i)~] \item $\widetilde d^-(u) \neq1 \quad\text{and}\quad \widetilde d^+(u)=1$; \item $\widetilde d^-(v) = 1 \quad\text{and}\quad \widetilde d^+(v) \neq 1$. \end{enumerate} Then the face $F$ of the flow polytope $\mathcal F(D)$ defined by the inequality $x(u,v)\ge0$ cannot be a facet of $\mathcal F(D)$. \end{lemma} \begin{proof} We consider only Assumption~(ii), the proof under Assumption~(i) being similar. A priori, there are three cases for $v$. If $v=t$, then we have for each point $x$ of $\mathcal F(D)$ (because the net inflow at~$t$ equals~$1$, see Equation~\eqref{EQ_at_t}) \begin{equation} x(u,v) \;=\; 1 + \sum\{x(t,w) \mid (t,w)\in \delta^+(t)\}. \end{equation} Even if there is no term in the summation, the last equation implies that $x(u,v)=0$ is impossible, so $F$ is the empty face. For the empty set to be a facet of $\mathcal F(D)$, it must be that $D$ has a single $s$--$t$ path. This contradicts (ii). The case $v=s$ is impossible because of the acyclicity of $D$ (remember that $(u,v)\in\widetilde A$ means that $(u,v)$ belongs to some $s$--$t$ path). Letting now $v\neq s$, $t$, we prove that $F$ cannot be a facet.
From the present assumptions $(u,v)\in\widetilde A$, $v \neq t$, and $\widetilde d^+(v) \neq 1$, we derive $\widetilde d^+(v) \ge 2$. For any flow $x$ in $\mathcal F(D)$, the conservation law at $v$ gives \begin{equation}\label{eq_in_proof} x(u,v) \;=\; \sum\{x(v,w) \mid (v,w)\in\widetilde \delta^+(v)\}. \end{equation} Hence $x(u,v)=0$ if and only if $x(v,w)=0$ for all $(v,w) \in \widetilde \delta^+(v)$. Thus the face defined by $x(u,v)\ge0$ is the intersection of the faces defined by $x(v,w)\ge0$, for $(v,w)\in\widetilde \delta^+(v)$, each of the latter faces being proper because $\widetilde \delta^+(v) \subseteq \widetilde A$. Moreover, at least two such faces must differ because any $s$--$t$ path $P$ containing $(u,v)$ contains exactly one arc $(v,w)$ in $\widetilde \delta^+(v)$, hence the vertex $\chi^P$ satisfies $x(v,w)\neq0$ and also $x(v,w')=0$ for $(v,w')\in \widetilde \delta^+(v)\setminus\{(v,w)\}$. We conclude that $F$ cannot be a facet. \end{proof} \begin{lemma}\label{lem_AequalB} Let $D=(N,A,s,t)$ be a network. For the two arcs $a$ and $b$ of $\widetilde A$, assume that both inequalities $x(a)\ge0$ and $x(b)\ge0$ on $\mathbb R{}^A$ define facets $F_a$ and $F_b$ of ${\cal F}(D)$ respectively. Then $F_a=F_b$ if and only if $a$ and $b$ belong to the same corridor. \end{lemma} \begin{proof} If $\mathop{\mathrm{cor}}(a)=\mathop{\mathrm{cor}}(b)$, then $x(a)=x(b)$ for any flow $x$ in ${\cal F}(D)$ and so $F_a=F_b$. To prove the converse, assume $F_a=F_b$. Because an empty polytope has no facet, $D$ must have at least one $s$--$t$ path. If $D$ has a single $s$--$t$ path, $a$ and $b$ certainly belong to the unique corridor of $D$. Assume from now on that $D$ has at least two $s$--$t$ paths. There exists some $s$--$t$ path $P$ containing the arc $a$ (because the facet $F_a$ must exclude some vertex of $\mathcal F(D)$).
Because $F_a$ and $F_b$ avoid exactly the same vertices, $P$ must also contain $b$; say that $a$ comes before $b$ in $P$ (otherwise relabel $a$ and $b$). Now $\mathop{\mathrm{cor}}(a)$ and $\mathop{\mathrm{cor}}(b)$ are subsets of $P$. If they differ, we derive a contradiction as follows. The last node $v$ on $\mathop{\mathrm{cor}}(a)$ must then come along $P$ before $\mathop{\mathrm{cor}}(b)$ (here $v$ can be the head of $a$ and/or the tail of $b$). We have $\widetilde d^-(v) \ge 2$ or $\widetilde d^+(v) \ge 2$. If $\widetilde d^-(v) \ge 2$, there exists some arc $(u,v)$ in $\widetilde \delta^-(v)$ not in $\mathop{\mathrm{cor}}(a)$. The arc $(u,v)$ is in some $s$--$t$ path $Q$. Following $Q$ from $s$ to $v$, and next $P$ from $v$ to $t$, we get an $s$--$t$ path $R$ (in view of the acyclicity of $D$). As $R$ excludes the arc $a$ but contains the arc $b$, the vertex $\chi^R$ is in $F_a$ but not in $F_b$, a contradiction. If $\widetilde d^-(v) < 2$, then $\widetilde d^-(v) = 1$ and $\widetilde d^+(v) \ge 2$. Let $u$ be this time the node preceding $v$ on $P$. Then $x(u,v)\ge0$ also defines the facet $F_a$ (because the arcs $(u,v)$ and $a$ belong to the same corridor). By Lemma~\ref{lem_degree_1}(ii), $F_a$ cannot be a facet, a contradiction. \end{proof} \begin{remark}\label{rem_one_network} In the proof of sufficiency in Lemma~\ref{lem_AequalB} (from right to left) we do not need the assumption that $F_a$ and $F_b$ are facets; it suffices that they be faces. On the contrary, the necessity part (left to right) of Lemma~\ref{lem_AequalB} does not remain true if we replace `facet' by `face' in the statement. This is shown by the arcs $(s,u)$ and $(w,t)$ in the network $D$ displayed in Figure~\ref{fig_corridors}. Here the flow polytope $\mathcal F(D)$ has three vertices.
Its three facets are respectively defined by the inequalities $x(s,t)\ge0$, $x(u,w)\ge0$, $x(u,v)\ge0$ (or $x(v,w)\ge0$). Both inequalities $x(s,u)\ge0$ and $x(w,t)\ge0$ define the same $0$-dimensional face; however, they are in distinct corridors. \end{remark} \begin{proposition}\label{PROP_FDI} Given an arc $a$ in the network $D= (N,A,s,t)$, the inequality $x(a) \ge 0$ defines a facet of the flow polytope $\mathcal F(D)$ if and only if the arc $a$ belongs to $\widetilde A$ and moreover either the network $D$ has a single $s$--$t$ path, or the arc~$a$ is good. \end{proposition} \begin{proof} When $a$ belongs to some $s$--$t$ path, we assume that the successive arcs in $\mathop{\mathrm{cor}}(a)$ (the corridor containing $a$) are \begin{equation}\label{EQ_corridor_bis} (u_1,u_2), \quad (u_2,u_3),\quad \dots,\quad (u_{m-1},u_m). \end{equation} For all arcs $b$ in $\mathop{\mathrm{cor}}(a)$ the polytope $\mathcal F(D)$ satisfies $x(a)=x(b)$ (as in \eqref{EQ_equalities}). Therefore, in the canonical description of ${\cal F}(D)$, we keep only one of the inequalities $x(b)\ge0$ for $b\in\mathop{\mathrm{cor}}(a)$, namely $x(a)\ge0$. To prove sufficiency, first note that if $D$ has a single $s$--$t$ path, then $\mathcal F(D)$ has only one point and moreover $x(a)\ge0$ defines the empty face, which is here a facet of $\mathcal F(D)$. Now suppose that the arc $a$ is good, which in the notation of \eqref{EQ_corridor_bis} means $\widetilde d^+(u_1) \ge 2$ and $\widetilde d^-(u_m) \ge 2$. To show that the inequality $x(a) \ge 0$ defines a facet, it suffices to exhibit some point $y$ of $\mathbb{R}^{A}$ that satisfies all the affine equations and inequalities of the canonical description of $\mathcal F(D)$ except for the inequality $x(a) \ge 0$. Take some arc $(u,u_m)$ in $\widetilde \delta^-(u_m)\setminus\{(u_{m-1},u_m)\}$.
Thus there exists some $s$--$t$ path containing $(u,u_m)$, and so also a path $M$ starting at $s$ with last arc $(u,u_m)$. Now take some arc $(u_1,v)$ in $\widetilde \delta^+(u_1)\setminus\{(u_1,u_2)\}$. There exists some $s$--$t$ path containing $(u_1,v)$, and so a path $P$ with first arc $(u_1,v)$ and ending at $t$. Set $C:=\mathop{\mathrm{cor}}(a)$. The point $y=\chi^M + \chi^P - \chi^C$ in $\mathbb{R}^{A}$ has the desired property (even if $M$ and $P$ pass through some common nodes and/or share some arcs). To prove necessity, assume that the inequality $x(a)\ge0$ defines a facet. First note that $a$ must belong to some $s$--$t$ path; otherwise the facet defined by $x(a) \ge 0$ would contain all vertices of $\mathcal F(D)$. Hence $a\in \widetilde A$. Assume further that the arc $a$ is bad. Then for its corridor $\mathop{\mathrm{cor}}(a)$ written as in \eqref{EQ_corridor_bis}, there holds $\widetilde d^+(u_1) = 1$ or $\widetilde d^-(u_m) = 1$. In the first case, we must also have $\widetilde d^-(u_1) \neq 1$ (by \eqref{eq_maximality}), and so a contradiction follows from Lemma~\ref{lem_degree_1}(i). In the second case, we have $\widetilde d^+(u_m) \neq 1$, and a contradiction follows from Lemma~\ref{lem_degree_1}(ii). \end{proof} \begin{corollary}\label{COR_number_facets} The number of facets of the flow polytope $\mathcal F(D)$ of a network $D$ equals the number of good corridors of $D$. \end{corollary} \begin{proof} This follows at once from Proposition~\ref{PROP_FDI} and Lemma~\ref{lem_AequalB}. \end{proof} From Proposition~\ref{prop_dim} and the proof of Proposition~\ref{PROP_FDI} we derive a minimum-size affine description of ${\cal F}(D)$. Let $B$ be a subset of $A$ which is a \textsl{transversal} of the collection of corridors, that is, $B$ contains exactly one arc from each corridor.
The system \begin{equation}\label{EQ_minimum} \left\{\begin{array}{rcl@{\quad}l} x(a) &=& 0, &\forall a \in A \setminus\widetilde A,\\ x(\widetilde \delta^{+}(v)) - x(\widetilde \delta^{-}(v)) &= &0, &\forall v \in \widetilde N \setminus \{s,t\},\\ x(\widetilde \delta^{+}(s)) - x(\widetilde \delta^{-}(s)) &= &1,\\ x(b) &\geqslant &0, &\forall b \in B \end{array} \right. \end{equation} is an affine description of ${\cal F}(D)$ having minimum size. \section{The Adjacency of Facets of a Flow Polytope} \label{se_Adjacency_of_Facets} By definition, two facets of a polytope are \textsl{adjacent} if their intersection is a face of dimension equal to the dimension of the polytope minus $2$. See Figure~\ref{FIG_gray_area} for an illustration of the next characterization of (non-)adjacency of facets of a flow polytope. \def\ratiox{1} \def\ratioy{1} \begin{figure}[ht] \begin{center} ~\hfill \begin{tikzpicture}[xscale=\ratiox,yscale=\ratioy] \tikzstyle{vertex}=[circle,draw,fill=white, scale=0.3] \node[vertex,label=left:$v~$] (v) at (0,0) {}; \node[vertex] (ah) at (-1,1.5) {}; \node[vertex] (bh) at (1.5,1.7){}; \draw[->-=.6,double] (v) to node[left]{$\mathop{\mathrm{cor}}(a)~$} (ah); \draw[->-=.6,double] (v) to node[right]{$~\mathop{\mathrm{cor}}(b)$} (bh); \draw[->-=.6] (-0.5,-1) node[vertex,label=left:$u~$]{} to (v); \draw[->-=.6] ( 0.5,-1) node[vertex,label=right:$~u'$]{} to (v); \draw[ultra thick] ($(v) +(-2mm,0)$) arc (180:0:2mm); \end{tikzpicture} \hfill \begin{tikzpicture}[xscale=\ratiox,yscale=\ratioy] \tikzstyle{vertex}=[circle,draw,fill=white, scale=0.3] \node[vertex,label=left:$v~$] (v) at (0,0) {}; \node[vertex] (u) at (0,-1.5) {}; \node[vertex] (ah) at (-1,1.5) {}; \node[vertex] (bh) at (1.5,1.7){}; \draw[->-=.6,double] (v) to node[left]{$\mathop{\mathrm{cor}}(a)~$} (ah); \draw[->-=.6,double] (v) to node[right]{$~\mathop{\mathrm{cor}}(b)$} (bh); \draw[->-=.6,double] (u) to node[left]{$\mathop{\mathrm{cor}}(u,v)~$} (v) ; \draw[->-=.7] 
(-0.5,-2.5)node[vertex,label=left:$~$]{} to (u); \draw[->-=.7] ( 0.5,-2.5)node[vertex,label=right:$~$]{} to (u); \draw[ultra thick] ($(v) +(-2mm,0)$) arc (180:0:2mm); \end{tikzpicture} \hfill~ \end{center} \kern2mm \begin{center} ~\hfil \begin{tikzpicture}[xscale=\ratiox,yscale=\ratioy] \tikzstyle{vertex}=[circle,draw,fill=white, scale=0.3] \node[vertex,label=left:$u~$] (u) at (0,0) {}; \node[vertex] (at) at (-1,-1.5) {}; \node[vertex] (bt) at (1.5,-1.7){}; \draw[->-=.6,double] (at) to node[left]{$\mathop{\mathrm{cor}}(a)~$} (u); \draw[->-=.6,double] (bt) to node[right]{$~\mathop{\mathrm{cor}}(b)$} (u); \draw[->-=.6] (u) to (-0.5,1) node[vertex,label=left:$v~$]{}; \draw[->-=.6] (u) to ( 0.5,1) node[vertex,label=right:$~v'$]{}; \draw[ultra thick] ($(u) +(-2mm,0)$) arc (180:360:2mm); \end{tikzpicture} \hfill \begin{tikzpicture}[xscale=\ratiox,yscale=\ratioy] \tikzstyle{vertex}=[circle,draw,fill=white, scale=0.3] \node[vertex,label=left:$u~$] (u) at (0,0) {}; \node[vertex] (v) at (0,1.5){}; \node[vertex] (at) at (-1,-1.5) {}; \node[vertex] (bt) at (1.5,-1.7){}; \draw[->-=.6,double] (at) to node[left]{$\mathop{\mathrm{cor}}(a)~$} (u); \draw[->-=.6,double] (bt) to node[right]{$~\mathop{\mathrm{cor}}(b)$} (u); \draw[->-=.6,double] (u) to node[left]{$\mathop{\mathrm{cor}}(u,v)~$} (v); \draw[->-=.6] (v) to (-0.5,2.5) node[vertex,label=left:$~$]{}; \draw[->-=.6] (v) to ( 0.5,2.5)node[vertex,label=right:$~$]{}; \draw[ultra thick] ($(u) +(-2mm,0)$) arc (180:360:2mm); \end{tikzpicture} \hfill~ \end{center} \caption{ \label{FIG_gray_area} An illustration of Proposition~\ref{PRO_non_adjacency_of_facets}: on top, Condition~(i) with the half-circle indicating $\widetilde d^+(v)=2$; on bottom, Condition~(ii) with the half-circle indicating $\widetilde d^-(u)=2$.
} \end{figure} \begin{proposition}\label{PRO_non_adjacency_of_facets} For two good arcs $a$ and $b$ in a network $D=(N,A,s,t)$, let $F_a$ and $F_b$ be the facets of the flow polytope ${\cal F}(D)$ respectively defined by $x(a)\ge0$ and $x(b)\ge0$. The facets $F_a$ and $F_b$ are \underline{not} adjacent if and only if at least one of the two following conditions holds: \begin{enumerate}[\qquad\rm(i)] \item the corridors $\mathop{\mathrm{cor}}(a)$ and $\mathop{\mathrm{cor}}(b)$ have the same initial node, say $v$, with $\widetilde d^+(v)=2$, and \begin{enumerate}[\qquad\rm(1)] \item either $\widetilde d^-(v)\ge2$, \item or $\widetilde\delta^-(v)=\{(u,v)\}$ and the initial node of $\mathop{\mathrm{cor}}(u,v)$ has in-degree at least~$2$; \end{enumerate} \item the corridors $\mathop{\mathrm{cor}}(a)$ and $\mathop{\mathrm{cor}}(b)$ have the same terminal node, say $u$, with $\widetilde d^-(u)=2$, and \begin{enumerate}[\qquad\rm(1)] \item either $\widetilde d^+(u)\ge2$, \item or $\widetilde\delta^+(u)=\{(u,v)\}$ and the final node of $\mathop{\mathrm{cor}}(u,v)$ has out-degree at least~$2$. \end{enumerate} \end{enumerate} \end{proposition} \begin{proof} (Necessity). For any polytope, two of its facets $F$ and $G$ are \underline{not} adjacent if and only if there exists some facet $K$ such that $F \cap G \subseteq K$ with $K$ distinct from $F$ and $G$. In view of Proposition~\ref{PROP_FDI}, nonadjacency of the given facets $F_a$ and $F_b$ of ${\cal F}(D)$ implies the existence of some good arc $c$ for which the facet $F_c$ defined by the inequality $x(c)\ge0$ includes $F_a \cap F_b $ and is distinct from $F_a$ and $F_b$ (note that $F_a\neq F_b$ implies that the network has more than one $s$--$t$ path). Then by Lemma~\ref{lem_AequalB} $\mathop{\mathrm{cor}}(c) \neq \mathop{\mathrm{cor}}(a)$, $\mathop{\mathrm{cor}}(c) \neq\mathop{\mathrm{cor}}(b)$. 
All vertices of the face $F_a \cap F_b$ are vertices of $F_c$, equivalently all $s$--$t$ paths containing $c$ also contain $a$ or $b$. Take some $s$--$t$ path $P$ containing $c$ (there exists such a $P$ because $F_c\neq{\cal F}(D)$). Say that $P$ contains $a$ (if $P$ does not contain $a$, exchange the notation $a$ and $b$), then $P$ includes $\mathop{\mathrm{cor}}(a)$. In $P$, the arc $a$ comes either after the arc $c$ or before $c$. Treating only the second case, we will derive (ii) (in a similar way, the first case leads to (i)). Let $u$ be the final node of $\mathop{\mathrm{cor}}(a)$, and $v$ be the final node of $\mathop{\mathrm{cor}}(b)$. We first prove $u=v$. Because $a$ is good, there exists some arc $(u',u)$ in $\widetilde A$ outside $\mathop{\mathrm{cor}}(a)$, thus also outside $P$. Take an $s$--$t$ path $Q$ containing the arc $(u',u)$. Following $Q$ from $s$ to $u$, next $P$ from $u$ to $t$, we get an $s$--$t$ path $R$ containing $c$ which avoids $a$ and passes through $u$. Then $R$ must contain $b$, thus $R$ includes $\mathop{\mathrm{cor}}(b)$. Now if $u\neq v$, we derive a contradiction in each of the two remaining possible positions of $v$ in $R$ with respect to $\mathop{\mathrm{cor}}(c)$: \begin{enumerate}[\rm(a)~] \item $v$ comes in $R$ after the last node of $\mathop{\mathrm{cor}}(c)$. Then the initial node $v_1$ of $\mathop{\mathrm{cor}}(b)$ comes on $R$ at or after the last node of $\mathop{\mathrm{cor}}(c)$. Because the arc~$b$ is good, there is some arc $(v_1,w)$ in $\widetilde\delta^+(v_1)$ outside $\mathop{\mathrm{cor}}(b)$. Following $R$ from $s$ to $v_1$, next $(v_1,w)$, finally some path from $w$ to $t$, we obtain an $s$--$t$ path containing $c$ but neither $a$ nor $b$, a contradiction. \item $v$ comes in $R$ before or at the initial node of $\mathop{\mathrm{cor}}(c)$. Because the arc $b$ is good, there is an arc $(v',v)$ outside $\mathop{\mathrm{cor}}(b)$, thus an $s$--$t$ path containing $(v',v)$.
Following this last path from $s$ to $v$, next $R$ from $v$ to $t$, we get an $s$--$t$ path $S$ containing $c$ but not $b$. If $S$ happens to avoid $a$, we have a contradiction. If $S$ contains $a$, then $a$ must be before $b$ on $S$ and we can then similarly build an $s$--$t$ path $S'$ containing $c$ but neither $b$ nor $a$, reaching the same contradiction. \end{enumerate} We have thus proved $u=v$. In view of $\mathop{\mathrm{cor}}(a)\neq\mathop{\mathrm{cor}}(b)$, there holds $\widetilde d^-(u)\ge2$. If $\widetilde d^-(u)>2$ were true, there would exist some arc $(w,u)$ outside $\mathop{\mathrm{cor}}(a) \cup \mathop{\mathrm{cor}}(b)$. Following some $s$--$t$ path from $s$ to $w$, next $(w,u)$ and finally the part after $u$ of the path $R$ (as above), we form an $s$--$t$ path containing $c$ but neither $b$ nor $a$, a contradiction. Thus $\widetilde d^-(u)=2$. Next, assuming that (1) does not hold, we prove (2), still referring to the arc~$c$ and the $s$--$t$ path~$R$ met in the previous paragraph. Note $|\widetilde\delta^+(u)|\ge1$ because of the arc $c$. Now if $\widetilde\delta^+(u)=\{(u,v)\}$, then $\mathop{\mathrm{cor}}(u,v)$ is on the $s$--$t$ path $R$ and entirely before the arc~$c$ (we cannot have $\mathop{\mathrm{cor}}(u,v)=\mathop{\mathrm{cor}}(c)$: because of the assumption $\widetilde\delta^+(u)=\{(u,v)\}$, this corridor would start at the node $u$ of out-degree $1$, while $c$ is a good arc). Let $w$ be the final node of $\mathop{\mathrm{cor}}(u,v)$. If $w$ had out-degree less than $2$, then $w$ would have in-degree at least $2$ (by the definition of $\mathop{\mathrm{cor}}(u,v)$). Any arc $(w',w)$ in $\widetilde A \setminus \mathop{\mathrm{cor}}(u,v)$ is on some $s$--$t$ path. Following the latter from $s$ to $w$, then $R$ to $t$, we get an $s$--$t$ path containing $c$ but avoiding both $a$ and $b$: a contradiction. \medskip (Sufficiency).
For any polytope, two of its facets $F$ and $G$ are \underline{not} adjacent if and only if there exists some proper face $K$ such that $F \cap G \subseteq K$ and moreover $K \not\subseteq F$ and $K \not\subseteq G$ (indeed, any facet including $K$ is a facet which includes $F \cap G$ and differs from $F$ and $G$). Assuming (ii) (assuming (i) leads to similar arguments), either (1) or (2) holds: (1)~If $\widetilde d^+(u)\ge2$, let $(u,v)$ and $(u,v')$ be arcs in $\widetilde\delta^+(u)$. For the face $K$ defined by $x(u,v)\ge0$, we have $F_a \cap F_b \subseteq K$ (because in view of $\widetilde d^-(u)=2$, any $s$--$t$ path containing $(u,v)$ contains $a$ or $b$). Moreover $K \not\subseteq F_a$ (an $s$--$t$ path including $\mathop{\mathrm{cor}}(a)$ and containing $(u,v')$ gives a vertex in $K$ but not in $F_a$), and similarly $K \not\subseteq F_b$. Thus the facets $F_a$ and $F_b$ are not adjacent. (2)~If $\widetilde\delta^+(u)=\{(u,v)\}$, let $w$ be the final node of $\mathop{\mathrm{cor}}(u,v)$. By assumption, $\widetilde d^+(w)\ge2$, so let $(w,z)$, $(w,z')$ be two arcs in $\widetilde\delta^+(w)$. Letting $K$ be the face defined by $x(w,z)\ge0$, we conclude as in the previous paragraph that the facets $F_a$ and $F_b$ are not adjacent. \end{proof} \begin{remark}\label{REM_many_networks} For many networks $D$, the facets of the flow polytope ${\cal F}(D)$ are pairwise adjacent: it suffices that the network has no node of in- or out-degree equal to $2$. \end{remark} \section{Consequences for the Multiple Choice Polytope} \label{SE_MCP_flows} We saw in Section~\ref{SEC_MCP} that the multiple choice polytope $\Pmc {\cal C}$ is affinely isomorphic to the flow polytope $\Flo{\cal C}$ of the network $\Dlo{\cal C}=(2^{\cal C},\prec,\varnothing{},{\cal C})$; we keep this notation here, with $n:=|{\cal C}|$. By Proposition~\ref{prop_dim}, the dimension of both $\Flo{\cal C}$ and $\Pmc {\cal C}$ equals $2^{n-1}\,(n-2)+1$.
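This dimension can be checked by brute force on small instances. The following informal Python sketch (ours, not part of the formal development; `lo_network` is a hypothetical helper encoding $\Dlo{\cal C}$ on $C=\{0,\dots,n-1\}$) builds the Boolean-lattice network and verifies $|A|-|N|+1 = 2^{n-1}(n-2)+1$; here every arc lies on a maximal chain from $\varnothing$ to ${\cal C}$, so the network is already reduced and Proposition~\ref{prop_dim} applies directly:

```python
from itertools import combinations

def lo_network(n):
    """Nodes are all subsets of C = {0,...,n-1}; arcs are the covering
    pairs (T - {i}, T) for i in T."""
    C = range(n)
    nodes = [frozenset(S) for k in range(n + 1) for S in combinations(C, k)]
    arcs = [(T - {i}, T) for T in nodes for i in T]
    return nodes, arcs

for n in (3, 4, 5, 6):
    nodes, arcs = lo_network(n)
    # |A| = n * 2^(n-1) covering pairs, |N| = 2^n subsets
    assert len(arcs) - len(nodes) + 1 == 2 ** (n - 1) * (n - 2) + 1
```

For instance $n=3$ gives $12 - 8 + 1 = 5$, the dimension of the $5$-dimensional simplex mentioned below.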
Proposition 4 of \cite{Chang_Narita_Saito2022} also implies this result. The vertices of the multiple choice polytope $\Pmc {\cal C}$ are the points $p^L$, where $L$ is a linear ordering of the set ${\cal C}$ of alternatives. The linear mapping (as in \eqref{EQ_f}) \begin{equation}\label{EQ_f_rep} \rho:~ \mathbb R{}^E \to \mathbb R{}^A:~p \mapsto r, \qquad\text{with } r(T\setminus\{i\},T) :=\; p(i,T) \end{equation} maps the vertex $p^L$ of $\Pmc {\cal C}$ onto the vertex $\chi^P$ of $\Flo{\cal C}$, where if $L$ is given by \begin{equation}\label{EQ_>_L_rep} i_1 \quad>_L\quad i_2 \quad>_L\quad \dots \quad>_L\quad i_n \end{equation} then $P$ is the $\varnothing{}$--${\cal C}$ path \begin{equation}\label{EQ_path_for_L_rep} (\varnothing{},\{i_1\}),\quad (\{i_1\},\{i_1,i_2\}),\quad \dots,\quad (\{i_1,i_2,\dots,i_{n-1}\},{\cal C}). \end{equation} To determine when two vertices of $\Pmc {\cal C}$ are adjacent, we instead look at their images by $\rho$ in $\Flo{\cal C}$. Proposition~\ref{PROP_path_adj} states when two vertices of any flow polytope are adjacent. Its particularization to $\Flo{\cal C}$ translates as follows to the MCP: \begin{proposition}\label{PROP_MCP_vertex_adj} For any two linear orderings $L_1$ and $L_2$ of ${\cal C}$, the vertices $p^{L_1}$ and $p^{L_2}$ of $\Pmc {\cal C}$ are adjacent if and only if \begin{quote} whenever a nontrivial\footnote{Recall that $A$ is a \textsl{nontrivial} subset of $B$ when $\varnothing{} \neq A \subset B$.} subset $S$ of ${\cal C}$ is a beginning set of both $L_1$ and $L_2$,\quad then $L_1$ and $L_2$ coincide on $S$ or on ${\cal C}\setminus S$. \end{quote} \end{proposition} For $|{\cal C}| =2,3$, the graph of the flow polytope $\Flo{\cal C}$ has diameter $1$ (the polytope is a segment and a $5$-dimensional simplex, respectively). \begin{corollary} For $|{\cal C}| \ge4$, the diameter of the graph of the flow polytope $\Flo{\cal C}$ equals $2$. \end{corollary} \begin{proof} Again, we work on the flow polytope $\Flo{\cal C}$.
Given two $\varnothing{}$--${\cal C}$ paths $P$ and $Q$, we show the existence of a $\varnothing{}$--${\cal C}$ path $R$ such that the vertex $\chi^R$ is adjacent to both vertices $\chi^P$ and $\chi^Q$. If $(\varnothing{},\{i_1\})$ and $(\varnothing{},\{j_1\})$ are the two first arcs on respectively $P$ and $Q$, we consider two cases. If $i_1=j_1$, we let $R$ be any $\varnothing{}$--${\cal C}$ path with last arc $({\cal C}\setminus\{i_1\},{\cal C})$. If $i_1\neq j_1$, we let $R$ be any $\varnothing{}$--${\cal C}$ path with last two arcs $({\cal C}\setminus\{i_1,j_1\},{\cal C}\setminus\{i_1\})$ and $({\cal C}\setminus\{i_1\},{\cal C})$. Then the only node on $R$, distinct from both $\varnothing{}$ and ${\cal C}$, that can also lie on $P$ or $Q$ is ${\cal C}\setminus\{i_1\}$ (every node on $R$ other than ${\cal C}\setminus\{i_1\}$ and ${\cal C}$ contains neither $i_1$ nor $j_1$, while the nontrivial nodes on $P$ all contain $i_1$ and those on $Q$ all contain $j_1$); and for the common beginning set ${\cal C}\setminus\{i_1\}$, any two linear orderings trivially coincide on its complement, the singleton $\{i_1\}$. By Proposition~\ref{PROP_MCP_vertex_adj}, $\chi^R$ is adjacent to both $\chi^P$ and $\chi^Q$. \end{proof} \medskip We now turn to the adjacency of facets of the MCP, and again reason on the flow polytope $\Flo{\cal C}$. By Proposition~\ref{PROP_FDI}, a facet of the latter polytope is defined by an inequality $x(a)\ge0$ where $a$ is a good arc in the network $\Dlo{\cal C}$ (as soon as $|{\cal C}| \ge 3$, all corridors consist of a single arc, hence distinct good arcs define distinct facets). For the network~$\Dlo{\cal C}$, the arc $a=(T\setminus\{i\},T)$ is good if and only if $2 \le |T| \le |{\cal C}|-1$. We deduce that an inequality as in \eqref{EQ_BM}, that is for $(i,T)\in E$ (or $i\in T \in 2^{\cal C}$) \begin{equation}\label{EQ_BM_rep} \sum_{S\in 2^{\cal C}:\; S \supseteq T} (-1)^{|S\setminus T|} \; p(i,S) \;\ge\; 0, \end{equation} defines a facet of $\Pmc {\cal C}$ if and only if $2 \le |T| \le |{\cal C}|-1$ (\citealp{Suck1995}, unpublished, and \citealp{Fiorini2004}).
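The facet description just obtained lends itself to a brute-force check. In the following informal Python sketch (ours, not part of the formal development; `good_arc_count` is a hypothetical helper), a facet of $\Flo{\cal C}$ is encoded by a pair $(i,T)$ with $i\in T$ and $2\le|T|\le n-1$, and the count is compared with the closed form $2\,n\,(2^{n-2}-1)$:

```python
from itertools import combinations

def good_arc_count(n):
    """Count the good arcs (T - {i}, T) of the network D_lo(C) for |C| = n >= 3:
    exactly the pairs i in T with 2 <= |T| <= n - 1, each of which defines
    one facet of the flow polytope F_lo(C)."""
    pairs = [(i, T) for k in range(2, n)
             for T in combinations(range(n), k) for i in T]
    return len(pairs)

for n in (3, 4, 5, 6):
    # sum over k of k * binomial(n, k), for k = 2, ..., n-1
    assert good_arc_count(n) == 2 * n * (2 ** (n - 2) - 1)
```

For instance $n=4$ yields $24$ facets.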
We derive from Proposition~\ref{PRO_non_adjacency_of_facets}: \begin{proposition}\label{PROP_MCP_facet_adj} Assume $|{\cal C}|\ge 4$. Consider the two facets of $\Pmc {\cal C}$ defined by inequalities as in \eqref{EQ_BM_rep}, for the two distinct pairs $(i,T)$ and $(i',T')$ in $E$ with $2 \le |T|,|T'| \le |{\cal C}|-1$. The two facets are adjacent if and only if neither of the two following cases occurs: \begin{enumerate}[\qquad\rm(i)] \item $T={\cal C}\setminus\{i'\}$ and $T'={\cal C}\setminus\{i\}$; \item $T=T'=\{i,i'\}$. \end{enumerate} \end{proposition} For $n:=|{\cal C}|\ge4$, it readily follows that the adjacency graph on the collection of facets of $\Pmc {\cal C}$ is the complete graph on $2\,n\,(2^{n-2}-1)$ nodes minus $n\,(n-1)$ pairwise disjoint edges; thus the graph has diameter~$2$. For $n\le3$, the graph is complete. \subsection{Identifiability in the MCM} \label{subs_identifiability} It is well known that the MCM is not identifiable (see \citealp{Falmagne1978}; \citealp{Fishburn1998} collects several results and references). In terms of \eqref{EQ_captured_by_f}, it means the existence of at least one predicted point $p$ in $\Pmc{\cal C}$ for which there exists more than one point $Pr$ in $\Lambda({\cal L}{\cal O}_{\cal C})$ such that $f(Pr)=p$; in this situation, we say that the point $p$ is \textsl{non-identifying}, and the points $Pr$ are \textsl{non-identified}\footnote{As in \cite{Doignon_Heller_Stefanutti2018}, the term ``non-identifiable'' is commonly used in both cases, but we prefer to reserve it to qualify the model.}. Proposition~1 in \cite{McClellon2015b} states that all points in the relative interior of $\Pmc{\cal C}$ are non-identifying. Theorem~2 in \cite{Turansick2022} characterizes as follows the non-identified points in $\Lambda({\cal L}{\cal O}_{\cal C})$, in terms of beginning sets of linear orderings (beginning sets were defined in \eqref{Eq_infty}).
\begin{proposition}[\citealp{Turansick2022}]\label{prop_Turansick} In the MCM, the distribution $Pr$ on ${\cal L}{\cal O}_{\cal C}$ is identified if and only if there is \underline{no} pair of linear orderings $L$, $L'$ of ${\cal C}$ such that \begin{enumerate}[\quad\rm(1)] \item $Pr(L)>0$ and $Pr(L')>0$; \item there exist alternatives $i$, $j$, $k$ with \begin{enumerate}[\qquad\rm(a)] \item $i >_L k$,\quad $j >_L k$,\quad $i >_{L'} k$,\quad and\quad $j >_{L'} k$; \item $i \neq j$; \item $L^-(k) \neq L'^-(k)$; \item $L^-(i) = L'^-(j)$. \end{enumerate} \end{enumerate} \end{proposition} Here is a geometric interpretation of Condition~(2) from Proposition~\ref{prop_Turansick}. Recall that $Pr^L$ designates the distribution on ${\cal L}{\cal O}_{{\cal C}}$ that is concentrated on the linear ordering $L$; in other terms, $Pr^L$ is a vertex of the simplex $\Lambda({\cal L}{\cal O}_{{\cal C}})$. Moreover, the vertices of the polytope $\Pmc{\cal C}$ are the images by $f$ of the vertices of $\Lambda({\cal L}{\cal O}_{{\cal C}})$; we set $p^L=f(Pr^L)$. \begin{proposition} The three following conditions on two linear orderings $L$ and $L'$ of ${\cal C}$ are equivalent: \begin{enumerate}[\quad\rm(A)] \item $L$ and $L'$ satisfy Conditions~{\rm(2)} in Proposition~\ref{prop_Turansick}; \item there exists a nontrivial subset $U$ of ${\cal C}$ such that \begin{enumerate} \item[\qquad($\alpha$)] $U$ is a beginning set of both $L$ and $L'$, \quad and \item[\qquad($\beta$)] $L$ and $L'$ do not coincide on $U$ nor on ${\cal C}\setminus U$; \end{enumerate} \item the vertices $p^L$ and $p^{L'}$ of $\Pmc{\cal C}$ are \underline{not} adjacent. \end{enumerate} \end{proposition} \begin{proof} \noindent(A) $\Rightarrow$ (B) Letting $U=L^-(i)$, we prove that $U$ satisfies ($\alpha$) and ($\beta$). Necessarily $i\in U$, and because $L^-(i) = L'^-(j)$, also $j \in U$. 
Moreover, $i$ and $j$ being distinct and also the smallest elements in $U$ for respectively the orderings $L$ and $L'$, the two orderings do not coincide on $U$. Next, because by (a) we have $k\notin U$, (c) implies that $L$ and $L'$ do not coincide on ${\cal C}\setminus U$. \medskip \noindent(B) $\Rightarrow$ (A) Among all the nontrivial subsets $U$ of ${\cal C}$ satisfying ($\alpha$) and ($\beta$), take the minimum one w.r.t.\ set inclusion. Then $U=L^-(i)=L'^-(j)$ for some $i$, $j$ in ${\cal C}$; moreover by the minimality requirement, $i\neq j$. Because $L$ and $L'$ do not coincide on ${\cal C}\setminus U$, there must be some alternative $k$ in ${\cal C}\setminus U$ which is ranked differently by $L$ and $L'$. The alternatives $i$, $j$ and $k$ ``do the job''. \medskip \noindent The equivalence of (B) and (C) is the object of Proposition~\ref{PROP_MCP_vertex_adj}. \end{proof} Thus Turansick's result (here Proposition~\ref{prop_Turansick}) states in a hidden way that the distribution $Pr$ on ${\cal L}{\cal O}_{{\cal C}}$ is identified if and only if for any two linear orderings $L$ and $M$ of~${\cal C}$ \begin{align*} & Pr(L)>0 \;\land Pr(M)>0 \quad\implies\\ &\qquad\qquad \text{ the vertices } p^L \text{ and } p^M \text{ of } \Pmc{\cal C} \text{ are adjacent}. \end{align*} In a future project, we intend to search for a more efficient characterization of adjacency. \section{Consequences for some other particular Flow Polytopes} \label{SE_other} The multiple choice polytope appears in \cite{Davis-Stober_Doignon_Fiorini_Glineur_Regenwetter2018} as an `extended formulation' for the `linear order polytope' (we refer the reader to this paper for the definitions of technical terms used only in the present section). Three more flow polytopes appear there, also as extended formulations, this time for the `weak order polytope', the `interval order polytope' and the `semiorder polytope'.
We provide characterizations of the adjacency of vertices and of facets for the three flow polytopes. \subsection{An extended formulation for the weak order polytope} Consider the network $\Dwo{\cal C}=(2^{\cal C},\subset,\varnothing{},{\cal C})$, where the arcs are pairs $(S,T)$ of subsets of ${\cal C}$ with $S \subset T$. The $\varnothing{}$--${\cal C}$ path $P$ given by (where $S_0=\varnothing{}$ and $S_k={\cal C}$) \begin{equation} (S_0, S_1),\quad (S_1, S_2), \quad \dots,\quad (S_{k-1}, S_k) \end{equation} derives from exactly one weak order on ${\cal C}$ (a \textsl{weak order} is a binary relation which is transitive and complete), namely the weak order $W$ whose equivalence classes are \begin{equation}\label{Eq_char_wo} S_1\setminus S_0 \quad\succ_W \quad S_2\setminus S_1 \quad\succ_W \quad \cdots\quad \succ_W \quad S_k\setminus S_{k-1}. \end{equation} A \textsl{beginning set} of a weak order $V$ on ${\cal C}$ is any subset $S$ of ${\cal C}$ such that $i\in S$ and $j \ge_V i$ imply $j\in S$ (this extends the definition given in \eqref{Eq_infty} for linear orders). The weak order $W$ characterized in \eqref{Eq_char_wo} is the weak order whose beginning sets are \begin{equation} S_0, \quad S_1,\quad S_2,\quad \cdots,\quad S_k. \end{equation} We say that the vertex $\chi^P$ of the flow polytope $\Fwo{\cal C}$ corresponding to the $\varnothing{}$--${\cal C}$ path $P$ also corresponds to the weak order $W$.
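Since the $\varnothing{}$--${\cal C}$ paths of $\Dwo{\cal C}$ are exactly the chains $\varnothing{}=S_0\subset S_1\subset\cdots\subset S_k={\cal C}$, the vertices of $\Fwo{\cal C}$ are counted by the ordered set partitions of ${\cal C}$, that is, by the Fubini numbers. A few lines of Python (a sketch of our own) confirm the count for small ground sets, giving in particular $3$ vertices when $|{\cal C}|=2$.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def num_paths(m):
    # Number of chains emptyset = S_0 < S_1 < ... < S_k = C for |C| = m:
    # choose the k >= 1 elements of S_1, then recurse on the m - k others.
    if m == 0:
        return 1
    return sum(comb(m, k) * num_paths(m - k) for k in range(1, m + 1))

# Ordered set partitions (weak orders) are counted by the Fubini numbers.
assert [num_paths(m) for m in range(6)] == [1, 1, 3, 13, 75, 541]
```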
\begin{figure}[ht] \begin{center} \begin{tikzpicture}[scale=1] \tikzstyle{vertex}=[circle,draw,fill=white, scale=0.3] \node (s) at (0,0) [vertex,label=left:$\varnothing{}$] {}; \node (t) at (0,2) [vertex,label=left:${\cal C}$] {}; \node (u) at (-1,1) [vertex,label=left:{$\{1\}$}] {}; \node (v) at ( 1,1) [vertex,label=right:{$\{2\}$}] {}; \draw[->-=.7] (s) -- (u); \draw[->-=.7] (s) -- (v); \draw[->-=.7] (u) -- (t); \draw[->-=.7] (v) -- (t); \draw[->-=.7] (s) -- (t); \end{tikzpicture} \end{center} \caption{\label{fig_weak_order}The network in Example~\ref{ex_weak_order}.} \end{figure} \begin{example}\label{ex_weak_order} For ${\cal C}=\{1,2\}$, the network $\Dwo{\cal C}=(2^{\cal C},\subset,\varnothing{},{\cal C})$ is displayed in Figure~\ref{fig_weak_order}. The flow polytope $\Fwo{\cal C}$ is a triangle. \end{example} Note that for $|{\cal C}|\ge3$, all corridors of the network $(2^{\cal C},\subset,\varnothing{},{\cal C})$ have size~$1$. \begin{proposition}\label{prop_WO_vertex_adj} Assume $|{\cal C}|\ge3$. The two vertices of the flow polytope $\Fwo{\cal C}$ corresponding to the two weak orders $W_1$ and $W_2$ of ${\cal C}$ are adjacent if and only if, whenever a nontrivial subset $S$ of ${\cal C}$ is a beginning set of both $W_1$ and $W_2$, the weak orders $W_1$ and $W_2$ coincide on $S$ or on ${\cal C}\setminus S$. \end{proposition} \begin{corollary} When $|{\cal C}|\ge3$, the diameter of the flow polytope $\Fwo{\cal C}$ is at most~$2$. \end{corollary} \begin{proof} The weak order ${\cal C}\times {\cal C}$ (with ${\cal C}$ as its single equivalence class) produces a vertex of $\Fwo{\cal C}$ which is adjacent to all other vertices. \end{proof} \begin{proposition}\label{prop_WO_facet_adj} Assume $|{\cal C}|\ge3$. An inequality $x(a)\ge0$, for $a=(S,T)$ with $S \subset T \subseteq {\cal C}$, defines a facet of the flow polytope $\Fwo{\cal C}$ if and only if $\varnothing{} \neq S$ and $T \neq {\cal C}$. Any two facets of $\Fwo{\cal C}$ are adjacent.
\end{proposition} More terminology is needed to describe the next two flow polytopes. To keep the length of this paper (hopefully) acceptable, we state our results without repeating all definitions from \cite{Davis-Stober_Doignon_Fiorini_Glineur_Regenwetter2018}. \subsection{An extended formulation for the interval order polytope} For any set ${\cal C}$ of $n$ alternatives, the network $\Dio {\cal C} = (N,A,s,t)$ is defined as follows (see Figure~\ref{FIG_int_order} for $|{\cal C}|=2$): \begin{eqnarray*} N & := & \{(X,Y) {\;\vrule height9pt width1pt depth1.5pt\;} Y \subseteq X \subseteq {\cal C}\},\\[1mm] A & := & \left\{ ((X,Y),(Z,T))\in N \times N \;\vrule height23pt width1pt depth14pt \, \begin{array}{l} X \subseteq Z,\; Y \subseteq T,\;\text{and}\\ \begin{array}{rl} \text{either~}&|Z| = |X| + 1,\; |T| = |Y|\\ \text{or~}&|Z| = |X|,\; |T| = |Y| + 1 \end{array} \end{array} \kern-2mm \right\},\\ s & := & (\varnothing{},\varnothing{}),\\ t & := & ({\cal C},{\cal C}). \end{eqnarray*} The flow polytope $\Fio {\cal C}$ is an extended formulation of the interval order polytope (the vertices of the last polytope are the characteristic vectors of the interval orders on ${\cal C}$), see \cite{Davis-Stober_Doignon_Fiorini_Glineur_Regenwetter2018}. The numbers of nodes and arcs in the network $\Dio {\cal C}$ are respectively, for $n:=|{\cal C}|$, \begin{equation} |N| = 3^n \quad \textrm{and} \quad |A| = 2 \, n \, 3^{n-1} \end{equation} (several $(\varnothing{},\varnothing{})$--$({\cal C},{\cal C})$ paths encode the same interval order). 
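The counts $|N|=3^n$ and $|A|=2\,n\,3^{n-1}$ are easily confirmed by brute force; the sketch below (our own encoding of the nodes of $\Dio{\cal C}$ as pairs of frozen sets) enumerates the network for small $n$.

```python
from itertools import combinations

def subsets(s):
    s = sorted(s)
    return [frozenset(c) for k in range(len(s) + 1)
            for c in combinations(s, k)]

def build_Dio(C):
    # Nodes are the pairs (X, Y) with Y <= X <= C; an arc either adds one
    # element to X (keeping Y) or moves one element of X \ Y into Y.
    nodes = [(X, Y) for X in subsets(C) for Y in subsets(X)]
    arcs = []
    for (X, Y) in nodes:
        for i in C - X:          # |Z| = |X| + 1 and T = Y
            arcs.append(((X, Y), (X | {i}, Y)))
        for j in X - Y:          # Z = X and |T| = |Y| + 1
            arcs.append(((X, Y), (X, Y | {j})))
    return nodes, arcs

for n in range(1, 6):
    C = frozenset(range(n))
    nodes, arcs = build_Dio(C)
    assert len(nodes) == 3 ** n
    assert len(arcs) == 2 * n * 3 ** (n - 1)
```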
\begin{figure}[ht] \begin{center} \begin{tikzpicture}[scale=1] \tikzstyle{vertex}=[circle,draw,fill=white,scale=0.3] \scriptsize \node (t) at (0,4) [vertex,label=above:{$(\{1,2\},\{1,2\})$}] {}; \node (n12v1) at (-1,3) [vertex,label=left:{$(\{1,2\},\{1\})$}] {}; \node (n12v2) at ( 1,3) [vertex,label=right:{$(\{1,2\},\{2\})$}] {}; \node (n1v1) at (-2,2) [vertex,label=left:{$(\{1\},\{1\})$}] {}; \node (n12ve) at ( 0,2) [vertex] {}; \node (n2v2) at ( 2,2) [vertex,label=right:{$(\{2\},\{2\})$}] {}; \node (n1ve) at (-1,1) [vertex,label=left:{$(\{1\},\varnothing{})$}] {}; \node (n2ve) at ( 1,1) [vertex,label=right:{$(\{2\},\varnothing{})$}] {}; \node (s) at (0,0) [vertex,label=below:{$(\varnothing{},\varnothing{})$}] {}; \draw[->-=.7] (n12v1) -- (t); \draw[->-=.7] (n12v2) -- (t); \draw[->-=.7] (n1v1) -- (n12v1); \draw[->-=.7] (n12ve) -- (n12v1); \draw[->-=.7] (n12ve) -- (n12v2); \draw[->-=.7] (n2v2) -- (n12v2); \draw[->-=.7] (n2ve) -- (n12ve); \draw[->-=.7] (n2ve) -- (n2v2); \draw[->-=.7] (n1ve) -- (n1v1); \draw[->-=.7] (n1ve) -- (n12ve); \draw[->-=.7] (s) -- (n1ve); \draw[->-=.7] (s) -- (n2ve); \end{tikzpicture} \end{center} \caption{\label{FIG_int_order}The network $\Dio {\cal C}$ used in the investigation of interval orders, for $|{\cal C}|=2$. The label of the central node is $(\{1,2\},\varnothing{})$.} \end{figure} When $|{\cal C}|\ge3$, all corridors of the network $\Dio {\cal C}$ have size~$1$. For the adjacency of vertices, we cannot tell more than the characterization in Proposition~\ref{PROP_path_adj} (note that the vertices of $\Fio {\cal C}$ do not have a simple interpretation while the vertices of $\Flo {\cal C}$ and $\Fwo {\cal C}$ exactly correspond to linear orders and weak orders on ${\cal C}$ respectively; see \citealp{Davis-Stober_Doignon_Fiorini_Glineur_Regenwetter2018}, for more details on $\Fio {\cal C}$). For the facets we have: \begin{proposition}\label{prop_IO_facet_adj} Let $a$ be any arc in $\Dio {\cal C}$, with $|{\cal C}|\ge3$. 
The inequality $x(a)\ge0$ defines a facet $F_a$ of the flow polytope~$\Fio{\cal C}$ if and only if the arc~$a$ is good, equivalently $a$ is not of any of the four forms, for some $i \in{\cal C}$, \begin{gather*} (\, (\varnothing{},\varnothing{}),\, (\{i\},\varnothing{}) \,), \qquad (\, (\{i\},\varnothing{}),\, (\{i\},\{i\}) \,),\\ (\, ({\cal C}\setminus\{i\},\, {\cal C}\setminus\{i\} \,), (\, {\cal C},\, {\cal C}\setminus\{i\} \,)),\qquad (\, ({\cal C},{\cal C}\setminus\{i\}),\, ({\cal C},{\cal C}) \,). \end{gather*} If the two arcs $a$ and $b$ of $\Dio {\cal C}$ are good, then the two facets $F_a$ and $F_b$ are \underline{not} adjacent if and only if $\{a,b\}$ is, for some distinct alternatives $i$ and $j$, one of the six pairs of arcs shown in Figure~\ref{fig_6_pairs}. \end{proposition} \begin{figure}[ht] \begin{center} \begin{tikzpicture}[scale=1] \tikzstyle{vertex}=[circle,draw,fill=white,scale=0.3] \scriptsize \node[vertex,label=above:{$({\cal C},{\cal C}\setminus\{i\})$}] (a)at(-1,3) {}; \node[vertex,label=above:{$({\cal C},{\cal C}\setminus\{j\})$}] (b) at (1,3) {}; \node (g) at (0,2)[vertex,label=below:{$({\cal C},{\cal C}\setminus\{i,j\})$}] {}; \draw[->-=.7] (g) -> (a); \draw[->-=.7] (g) -> (b); \node (c) at (3,3)[vertex,label=above:{$({\cal C},{\cal C}\setminus\{i,j\})$}] {}; \node (d) at (5,3)[vertex,label=above:{$({\cal C}\setminus\{i\},{\cal C}\setminus\{i\})$}] {}; \node (h) at (4,2)[vertex,label=below:{$({\cal C}\setminus\{i\},{\cal C}\setminus\{i,j\})$}] {}; \draw[->-=.7] (h) -> (c); \draw[->-=.7] (h) -> (d); \node (e) at (7.5,3)[vertex,label=above:{$({\cal C}\setminus\{i\},{\cal C}\setminus\{i,j\})$}] {}; \node (f) at (10.1,3)[vertex,label=above:{$({\cal C}\setminus\{j\},{\cal C}\setminus\{i,j\})$}] {}; \node (i) at (8.8,2)[vertex,label=below:{$({\cal C}\setminus\{i,j\},{\cal C}\setminus\{i,j\})$}] {}; \draw[->-=.7] (i) -> (e); \draw[->-=.7] (i) -> (f); \node (j) at (0,0)[vertex,label=above:{$(\{i,j\},\varnothing{})$}] {}; \node (m) at 
(-1,-1)[vertex,label=below:{$(\{i\},\varnothing{})$}] {}; \node (n) at (1,-1)[vertex,label=below:{$(\{j\},\varnothing{})$}] {}; \draw[->-=.7] (m) -> (j); \draw[->-=.7] (n) -> (j); \node (k) at (4,0)[vertex,label=above:{$(\{i,j\},\{j\})$}] {}; \node (o) at (3,-1)[vertex,label=below:{$(\{j\},\{j\})$}] {}; \node (p) at (5,-1)[vertex,label=below:{$(\{i,j\},\varnothing{})$}] {}; \draw[->-=.7] (o) -> (k); \draw[->-=.7] (p) -> (k); \node (l) at (8,0)[vertex,label=above:{$(\{i,j\},\{i,j\})$}] {}; \node (q) at(7,-1)[vertex,label=below:{$(\{i,j\},\{i\})$}] {}; \node (r) at (9,-1)[vertex,label=below:{$(\{i,j\},\{j\})$}] {}; \draw[->-=.7] (q) -> (l); \draw[->-=.7] (r) -> (l); \end{tikzpicture} \end{center} \caption{\label{fig_6_pairs}The six types of pairs of arcs producing pairs of nonadjacent facets of $\Fio {\cal C}$.} \end{figure} \begin{proof} By Proposition~\ref{PROP_FDI} and because the network $\Dio {\cal C}$ has more than one $(\varnothing{},\varnothing{})$--$({\cal C},{\cal C})$ path, $x(a)\ge 0$ defines a facet if and only if the arc $a$ is good. When $|{\cal C}|\ge 3$, any corridor is formed of a single arc. Note that a node $(X,Y)$ has in-degree $|X|$ and out-degree $|{\cal C}\setminus Y|$. Hence the in-degree of any node $v$ in $\Dio {\cal C}$ is at least $2$ except when $v$ equals $(\varnothing{},\varnothing{})$, $(\{i\},\varnothing{})$, or $(\{i\},\{i\})$ for some alternative $i$ (here again we need $|{\cal C}|\ge3$, as testified by Figure~\ref{FIG_int_order}). Similarly, the out-degree of any node $w$ in $\Dio {\cal C}$ is at least $2$ except when $w$ equals $({\cal C}\setminus\{j\},{\cal C}\setminus\{j\})$, $({\cal C},{\cal C}\setminus\{j\})$ or $({\cal C},{\cal C})$ for some alternative $j$. It follows that the only bad arcs are those mentioned in the statement. Now suppose that the two arcs $a$ and $b$ are good.
Referring to Proposition~\ref{PRO_non_adjacency_of_facets}, we see that the facets $F_a$ and $F_b$ are \underline{not} adjacent exactly if either $a$ and $b$ have the same initial node, say $u$, with $d^+(u)=2$, or $a$ and $b$ have the same terminal node, say $v$, with $d^-(v)=2$ (here the cases (2) in Proposition~\ref{PRO_non_adjacency_of_facets} cannot occur in view of $|{\cal C}|\ge3$). When $|{\cal C}|\ge3$, the latter happens exactly for any of the six types of arcs displayed in Figure~\ref{fig_6_pairs}. \end{proof} \subsection{An extended formulation for the semiorder polytope} \cite{Davis-Stober_Doignon_Fiorini_Glineur_Regenwetter2018} introduce still another network $\Dso {\cal C} = (N,A,\linebreak s, t)$ with $n:=|{\cal C}|$, whose flow polytope makes an extended formulation of the `semiorder polytope'. The definition of $\Dso {\cal C}$ goes as follows, where $L + i$ means that we append alternative $i$ at the end of the linear ordering $L$ of some subset of ${\cal C}$ excluding $i$. Moreover $L -j$ denotes the removal of $j$ from the ground set of the linear order $L$. As a convention, the only linear ordering of the empty set is $L=\varnothing{}$. \begin{eqnarray*} N &=& \{(X,Y,L) {\;\vrule height9pt width1pt depth1.5pt\;} {\cal C} \supseteq X \supseteq Y,\; L\textrm{ linear ordering of } X\setminus Y\};\\[2mm] A &=& \{\big((X,Y,L),\, (Z,T,M) \big) \in N^2 {\;\vrule height9pt width1pt depth1.5pt\;} \\ &&\text{either for some } i\in{\cal C}\setminus X:\quad \left\{\begin{array}{lcl} Z &=& X \cup \{i\},\\ T &=& Y,\\ M &=& L + i, \end{array}\right.\\ && \text{or for the alternative $j$ in $X \setminus Y$ which is the first one in $L$}:\\ && \phantom{\text{either for some } i\in{\cal C}\setminus X:\quad} \left\{\begin{array}{lcl} Z &=& X,\\ T &=& Y \cup \{j\},\\ M &=& L - j; \end{array}\right.\\ s &=& (\varnothing{},\varnothing{},\varnothing{});\\ t &=& ({\cal C},{\cal C},\varnothing{}). 
\end{eqnarray*} Each $(\varnothing{},\varnothing{},\varnothing{})$--$({\cal C},{\cal C},\varnothing{})$ path is a sequence of $2\,n$ arcs (here, again, $n:=|{\cal C}|$). See Figure~\ref{FIG_semiorder} for $\Dso {\cal C}$ when $n=2$. \begin{figure}[ht] \begin{center} \begin{tikzpicture}[scale=1] \tikzstyle{vertex}=[circle,draw,fill=white,scale=0.3] \scriptsize \node (t) at (0,4) [vertex,label=above:{$(\{1,2\},\{1,2\},\varnothing{})$}] {}; \node (n12v1) at (-1,3) [vertex,label=left:{$(\{1,2\},\{1\},L)$}] {}; \node (n12v2) at ( 1,3) [vertex,label=right:{$(\{1,2\},\{2\},L)$}] {}; \node (n1v1) at (-2,2) [vertex,label=left:{$(\{1\},\{1\},\varnothing{})$}] {}; \node (n12veA) at (-0.5,2) [vertex] {}; \node (n12veB) at ( 0.5,2) [vertex] {}; \node (n2v2) at ( 2,2) [vertex,label=right:{$(\{2\},\{2\},\varnothing{})$}] {}; \node (n1ve) at (-1,1) [vertex,label=left:{$(\{1\},\varnothing{},L)$}] {}; \node (n2ve) at ( 1,1) [vertex,label=right:{$(\{2\},\varnothing{},L)$}] {}; \node (s) at (0,0) [vertex,label=below:{$(\varnothing{},\varnothing{},\varnothing{})$}] {}; \draw[->-=.7] (n12v1) -- (t); \draw[->-=.7] (n12v2) -- (t); \draw[->-=.7] (n1v1) -- (n12v1); \draw[->-=.7] (n12veA) -- (n12v1); \draw[->-=.7] (n12veB) -- (n12v2); \draw[->-=.7] (n2v2) -- (n12v2); \draw[->-=.5] (n2ve) -- (n12veB); \draw[->-=.7] (n2ve) -- (n2v2); \draw[->-=.7] (n1ve) -- (n1v1); \draw[->-=.5] (n1ve) -- (n12veA); \draw[->-=.7] (s) -- (n1ve); \draw[->-=.7] (s) -- (n2ve); \end{tikzpicture} \end{center} \caption{\label{FIG_semiorder}The network $\Dso {\cal C}$ used in the investigation of semiorders, for $|{\cal C}|=2$. When $|X \setminus Y| \le 1$, the linear ordering of $X \setminus Y$ is obvious; we simply write $\varnothing{}$ or $L$ for it. 
The labels of the central nodes are $(\{1,2\},\varnothing{},1 <_L 2)$ and $(\{1,2\},\varnothing{}, 2 <_L 1)$ respectively.} \end{figure} \begin{lemma}\label{LEM_SO_degrees} For any node $(X,Y,L)$ in the network $\Dso {\cal C}$, \begin{align} \widetilde d^+(X,Y,L) \;&=\: \begin{cases} n-|X| &\text{ if } X=Y,\\ n-|X|+1 \quad&\text{ if } X \supset Y; \end{cases} \\ \widetilde d^-(X,Y,L) \;&=\: \begin{cases} |Y| &\text{ if } X = Y,\\ |Y|+1 \quad&\text{ if } X \supset Y. \end{cases} \end{align} \end{lemma} \begin{proof} The formula for $\widetilde d^+$ derives from the definition of the arcs with tail $(X,Y,L)$. To derive the formula for $\widetilde d^-$, rewrite the definition as follows. For two nodes $(Z,T,M)$ and $(X,Y,L)$, the pair $\big((Z,T,M),\, (X,Y,L) \big)$ is an arc if and only if \begin{equation} \text{for the alternative $i$ in $X\setminus Y$ which is the last one in $L$}:\; \left\{\begin{array}{lcl} Z &=& X \setminus \{i\},\\ T &=& Y,\\ M &=& L - i, \end{array}\right. \end{equation} or \begin{equation} \text{for some $j$ in $Y$}: \left\{\begin{array}{lcl} Z &=& X,\\ T &=& Y \setminus \{j\},\\ M &=& j + L. \end{array}\right. \end{equation} \end{proof} Here again, as for the interval order case, there is nothing more we can say about the adjacency of vertices beyond Proposition~\ref{PROP_path_adj}. We thus turn to the adjacency of facets. \begin{proposition}\label{pro_SO_good} Assume $|{\cal C}|\ge3$. All corridors of $\Dso {\cal C}$ consist of either one arc or two arcs. The corridors of size~$2$ have central nodes of the form $({\cal C},\varnothing{},L)$, for some linear ordering $L$ of ${\cal C}$; both of their arcs are good.
An arc of $\Dso {\cal C}$ is good if and only if it is not of any of the following types: \begin{align} \label{eq_alpha}\tag{$\alpha$} ((X,\varnothing{},L),(X\cup\{i\},\varnothing{},L+i)) &\qquad\text{where } X\subset{\cal C},\, i\in {\cal C}\setminus X; \\ \label{eq_beta}\tag{$\beta$} (({\cal C}\setminus\{i\},{\cal C}\setminus\{i\},\varnothing{}),({\cal C},{\cal C}\setminus\{i\},L)) &\qquad \text{where } i\in{\cal C};\\ \label{eq_gamma}\tag{$\gamma$} ((X,\varnothing{},L),(X,\{j\},L-j)) &\qquad \text{where } j\in X \subseteq {\cal C};\\ \label{eq_delta}\tag{$\delta$} (({\cal C},Y,L),({\cal C},Y\cup\{j\},L-j)) &\qquad\text{where } Y \subset {\cal C},\, j \in {\cal C}\setminus Y. \end{align} \end{proposition} \begin{proof} By Lemma~\ref{LEM_SO_degrees}, the only nodes of $\Dso {\cal C}$ having both in- and out-degree $1$ are the $({\cal C},\varnothing{},L)$'s with $L$ any linear ordering of ${\cal C}$. So the corridors are of size $1$ or $2$, and the corridors of size $2$ have $({\cal C},\varnothing{},L)$ as their middle nodes. Note moreover that each arc in a corridor of size $2$ is good, because the terminal node $({\cal C},\{j\},L-j)$ of the corridor (with $j$ the first element in $L$) has in-degree at least~$2$, and the initial node $({\cal C}\setminus\{i\},\varnothing{},L-i)$ of the corridor (with $i$ the last element in $L$) has out-degree at least~$2$. According to the definition of $\Dso {\cal C}$, there are two types of arcs, which we now review for badness: \medskip \noindent$\triangleright$~ If the arc $((X,Y,L),(X\cup\{i\},Y,L+i))$ is bad (where $i\in{\cal C}\setminus X$), then $d^-((X\cup\{i\},Y,L+i)) = 1$ or $d^+((X,Y,L)) = 1$. By Lemma~\ref{LEM_SO_degrees}, in the first case, ($X\cup\{i\}=Y$ and $|Y|=1$) or ($Y=\varnothing{}$ and $X\cup\{i\} \supset Y$). The first eventuality being impossible because by assumption $X \supseteq Y$, we get \eqref{eq_alpha}.
In the second case, again by Lemma~\ref{LEM_SO_degrees} and with $n:=|{\cal C}|$, we have ($X=Y$ and $|X|=n-1$) or ($X={\cal C} \supset Y$). The second eventuality being impossible (because we need $i$ in ${\cal C}\setminus X$), we get \eqref{eq_beta}. \medskip \noindent$\triangleright$~ If the arc $((X,Y,L),(X,Y\cup\{j\},L-j))$ is bad (where $j \in X \setminus Y$ is the first element in the linear ordering $L$ of $X \setminus Y$), then $d^-((X,Y\cup\{j\},L-j)) = 1$ or $d^+((X,Y,L)) = 1$. By Lemma~\ref{LEM_SO_degrees}, in the first case, ($X=Y\cup\{j\}$ and $|Y\cup\{j\}|=1$) or ($X \supset Y \cup \{j\}$ and $Y = \varnothing{}$), so we get \eqref{eq_gamma}. In the second case, ($X=Y$ and $|X|=n-1$) or ($X \supset Y$ and $|X|=n$). The first eventuality being impossible (in view of $j\in X \setminus Y$), we get \eqref{eq_delta}. \end{proof} \begin{proposition} Assume $n:=|{\cal C}|\ge3$. Take the two facets of $\Fso {\cal C}$ defined by the inequalities $x(a)\ge0$ and $x(b)\ge0$, where $a$ and $b$ are two good arcs. The two facets are \underline{not} adjacent if and only if the corridors $\mathop{\mathrm{cor}}(a)$ and $\mathop{\mathrm{cor}}(b)$ \begin{enumerate}[\quad\rm(i)] \item have the same tail of the form either $(X,X,\varnothing{})$ with $|X|=n-2 \ge 2$, or $(X,Y,L)$ with $|X|=n-1$ and $X\neq Y$, \item or they have the same head of the form either $(X,X,\varnothing{})$ with $|X|=2$ and $n\ge4$, or $(X,Y,L)$ with $|Y|=1$ and $X\neq Y$. \end{enumerate} \end{proposition} \begin{proof} Refer to Proposition~\ref{PRO_non_adjacency_of_facets} and Proposition~\ref{pro_SO_good}. \end{proof}
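The degree formulas of Lemma~\ref{LEM_SO_degrees} and the description of the size-$2$ corridors in Proposition~\ref{pro_SO_good} can be double-checked by enumerating $\Dso {\cal C}$ for a small ground set; the Python sketch below (with a node encoding of our own) does so for $|{\cal C}|=3$.

```python
from itertools import combinations, permutations

def build_Dso(C):
    # Nodes (X, Y, L): Y <= X <= C and L a linear ordering (tuple) of X \ Y.
    nodes = [(frozenset(X), frozenset(Y), L)
             for kx in range(len(C) + 1) for X in combinations(C, kx)
             for ky in range(kx + 1) for Y in combinations(X, ky)
             for L in permutations(set(X) - set(Y))]
    arcs = []
    for (X, Y, L) in nodes:
        for i in C - X:                      # append i at the end of L
            arcs.append(((X, Y, L), (X | {i}, Y, L + (i,))))
        if L:                                # move the first element of L into Y
            arcs.append(((X, Y, L), (X, Y | {L[0]}, L[1:])))
    return nodes, arcs

C = frozenset(range(3))
n = len(C)
nodes, arcs = build_Dso(C)
indeg = {v: 0 for v in nodes}
outdeg = {v: 0 for v in nodes}
for u, v in arcs:
    outdeg[u] += 1
    indeg[v] += 1

for (X, Y, L) in nodes:
    extra = 0 if X == Y else 1
    assert outdeg[(X, Y, L)] == n - len(X) + extra   # Lemma, d^+
    assert indeg[(X, Y, L)] == len(Y) + extra        # Lemma, d^-

# Middle nodes of size-2 corridors are exactly the (C, emptyset, L):
corridor_middles = {v for v in nodes if indeg[v] == outdeg[v] == 1}
assert corridor_middles == {(C, frozenset(), L) for L in permutations(C)}
```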
The Financial Conduct Authority (FCA) is a quasi-governmental agency in the United Kingdom, formed as one of the successors to the Financial Services Authority (FSA). It regulates financial firms providing services to consumers and maintains the integrity of the UK's financial markets, focusing on the regulation of conduct by both retail and wholesale financial services firms. Like its predecessor the FSA, the FCA is structured as a company limited by guarantee. Unlike offshore companies, FSA-regulated brokers cannot spend clients' money to cover their expenses: they must keep all client funds in segregated accounts with a bank approved by the FSA. If an FSA-regulated broker goes bankrupt, its clients are covered by the Financial Services Compensation Scheme (FSCS). Forex traders are entitled to receive 100% of the first £30,000 plus 90% of the next £20,000, but no more than £48,000 in total. The authority has significant powers, including the power to regulate conduct related to the marketing of financial products. It is able to specify minimum standards and to place requirements on products, and it has the power to investigate organisations and individuals. In addition, the FCA is able to ban financial products for up to a year while considering an indefinite ban; it has the power to instruct firms to immediately retract or modify promotions which it finds to be misleading, and to publish such decisions.
CRX, an initialism formed of the three letters C, R and X, may refer to: Honda CR-X (where CR-X stands for Civic Renaissance eXperimental), an automobile from the Japanese manufacturer Honda. CRX is also a code that may refer to other entities.
\section{Introduction\label{sec:Introduction}} Deception has always garnered attention in popular culture, from the deception that planted a seed of anguish in Shakespeare's Macbeth to the deception that drew viewers to the more contemporary television series \emph{Lie to Me}. Our human experience seems to be permeated by deception, which may even be engrained into human beings via evolutionary factors \cite{key-30,key-29}. Yet humans are famously bad at detecting deception \cite{key-14,key-26}. An impressive body of research aims to improve these rates, especially in interpersonal situations. Many investigations involve leading subjects to experience an event or recall a piece of information and then asking them to lie about it \cite{key-31,key-14,key-15}. Researchers have shown that some techniques can aid in detecting lies - such as asking a suspect to recall events in reverse order \cite{key-14}, asking her to maintain eye contact \cite{key-15}, asking unexpected questions or strategically using evidence \cite{key-32}. Clearly, detecting interpersonal deception is still an active area of research. While understanding interpersonal deception is difficult, studying deception in cyberspace has its set of unique challenges. In cyberspace, information can lack permanence, typical cues to deception found in physical space can be missing, and it can be difficult to impute responsibility \cite{key-18}. Consider, for example, the problem of identifying deceptive opinion spam in online markets. Deceptive opinion spam consists of comments made about products or services by actors posing as customers, when they are actually representing the interests of the company concerned or its competitors. The research challenge is to separate comments made by genuine customers from those made by self-interested actors posing as customers. This is difficult for humans to do unaided; two out of three human judges in \cite{key-20} failed to perform significantly better than chance. 
To solve this problem, the authors of \cite{key-20} make use of approaches including a tool called the \emph{Linguistic Inquiry Word Count}, an approach based on the frequency distribution of part-of-speech tags, and a third approach which uses a classification based on \emph{n}-grams. This highlights the importance of an interdisciplinary approach to studying deception, especially in cyberspace. Although an interdisciplinary approach to studying deception offers important insights, the challenge remains of putting it to work in a quantitative framework. In behavioral deception experiments, for instance, the incentives to lie are also often poorly controlled, in the sense that subjects may simply be instructed to lie or to tell the truth \cite{key-19}. This precludes a natural setting in which subjects could make free choices. These studies also cannot make precise mathematical predictions about the effect of deception or deception-detecting techniques \cite{key-19}. Understanding deception in a quantitative framework could help to give results rigor and predictability. To achieve this rigor and predictability, we analyze deception through the framework of game theory. This framework allows making quantitative, verifiable predictions, and enables the study of situations involving free choice (the option to deceive or not to deceive) and well-defined incentives \cite{key-19}. Specifically, the area of incomplete information games allows modeling the information asymmetry that forms part and parcel of deception. In a signaling game, a sender observes a piece of private information and communicates a message to a receiver, who chooses an action. The receiver's best action depends on his belief about the private information of the sender. But the sender may use strategies in which he conveys or does not convey this private information.
It is natural to make connections between the signaling game terminology of pooling, separating, and partially-separating equilibria and deceptive, truthful, and partially-truthful behavior. Thus, game theory provides a suitable framework for studying deception. Beyond analyzing equilibria, we also want to design solutions that control the environment in which deception takes place. This calls for the reverse game theory perspective of \emph{mechanism design}. In mechanism design, exogenous factors are manipulated in order to design the outcome of a game. In signaling games, these solutions might seek to obtain target utilities or a desired level of information communication. If the deceiver in the signaling game has the role of an adversary - for problems in security or privacy, for example - a defender often wants to design methods to limit the amount of deception. But defenders may also use deception to their advantage. In this case, it is the adversary who may try to implement mechanisms to mitigate the effects of the deception. A more general mechanism design perspective for signaling games could consider other ways of manipulating the environment, such as feedback and observation (Fig. \ref{fig:A-general-framework}). \begin{figure} \begin{centering} \includegraphics[width=0.8\columnwidth]{generalMechDesign} \par\end{centering} \protect\caption{\label{fig:A-general-framework}A general framework for mechanism design. Manipulating the environment in which deception takes place in a signaling game could include adding additional blocks as well as manipulating exogenous parameters of the game. In general, type $m$ can be manipulated by input from a \emph{controller} before reaching the sender. The controller can rely on an \emph{observer} to estimate unknown states. 
In this paper, we specifically study the role of a \emph{detector}, which compares type to message and emits evidence for deception.} \end{figure} In this paper, we study deception in two different frameworks. The first framework is a typical game of costless communication between a sender and receiver known as \emph{cheap-talk}. In the second framework, we add the element of deception detection, forming a game of \emph{cheap-talk with evidence}. This latter model includes a move by nature after the action of the sender, which yields evidence for deception with some probability. In order to provide a concrete example, we consider a specific use of deception for defense, and the employment of antideceptive techniques by an attacker. In this scenario, a defender uses honeypots disguised as normal systems to protect a network, and an adversary implements honeypot detection in order to strike back against this deception. We give an example of how an adversary might obtain evidence for deception through a timing classification known as \emph{fuzzy benchmarking}. Finally, we show how network defenders need to bolster their capabilities in order to maintain the same results in the face of honeypot detection. This mechanism design approach reverses the mappings from adversary power to evidence detection and from evidence detection to game outcome. Although we apply it to a specific research problem, our approach is quite general and can be used in deceptive interactions both in interpersonal settings and in cyber security.
Our main contributions include 1) developing a model for signaling games with deception detection, and analyzing how this model includes traditional signaling games and complete information games as special cases, 2) demonstrating that the ability to detect deception causes pure strategy equilibria to disappear under certain conditions, and 3) showing that deception detection by an adversary could actually increase the utility obtained by a network defender. These results have specific implications for network defense through honeypot deployment, but can be applied to a large class of strategic interactions involving deception in both physical space and cyberspace. The rest of the paper proceeds as follows. Section \ref{sec:Cheap-Talk-Signaling-Games} reviews cheap-talk signaling games and the solution concept of perfect Bayesian Nash equilibrium. We use this framework to analyze the honeypot scenario in Section \ref{sec:AnalysisNoEv}. Section \ref{sec:Cheap-Talk-Ev} adds the element of deception detection to the signaling game. We describe an example of how this detection might be implemented in Section \ref{sec:Deception-Detection-Example}. Then we analyze the resulting game in Section \ref{sec:AnalysisEv}. In Section \ref{sec:Mechanism-Design}, we discuss a case study in which a network defender must adapt in order to respond to the advent of honeypot detection. We review related work in Section \ref{sec:Related-Work}, and conclude the paper in Section \ref{sec:Discussion}. \section{Cheap-Talk Signaling Games\label{sec:Cheap-Talk-Signaling-Games}} In this section, we review the concept of signaling games, a class of two-player, dynamic, incomplete information games. The information asymmetry and dynamic nature of these games capture the essence of deception, and the notions of separating, pooling, and partially-separating equilibria can be related to truthful, deceptive, and partially-truthful behavior.
\subsection{Game Model} Our model consists of a signaling game in which the types, messages, and actions are taken from discrete sets with two elements. Call this two-player, incomplete information game $\mathcal{G}$. In $\mathcal{G}$, a sender, $S$, observes a type $m\in M=\left\{ 0,1\right\} $ drawn with probabilities $p\left(0\right)$ and $p\left(1\right)=1-p\left(0\right)$. He then sends a message, $n\in N=\left\{ 0,1\right\} $, to the receiver, $R$. After observing the message (but not the type), $R$ plays an action $y\in Y=\left\{ 0,1\right\} $. The flow of information between sender and receiver is depicted in Fig. \ref{fig:Block-diagram-signal}. Let $u^{S}\left(y,m\right)$ and $u^{R}\left(y,m\right)$ be the utility obtained by $S$ and $R$, respectively, when the type is $m$ and the receiver plays action $y$. Notice that the utilities are not directly dependent on the message, $n$; hence the description of this model as a ``cheap-talk'' game. \begin{figure} \begin{centering} \includegraphics[width=0.67\columnwidth]{signalBlockDiagram} \par\end{centering} \protect\caption{\label{fig:Block-diagram-signal}Block diagram of a signaling game with two discrete types, messages, and actions.} \end{figure} The sender's strategy consists of playing a message $n$, after observing a type $m$, with probability $\sigma_{S}\left(n\,|\, m\right)$. The receiver's strategy consists of playing an action $y$, after observing a message $n$, with probability $\sigma_{R}\left(y\,|\, n\right)$. Denote the sets of all such strategies as $\Gamma^{S}$ and $\Gamma^{R}$.
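Because both strategies are conditional probability tables over binary sets, the game is easy to prototype. The Python sketch below simulates one play of $\mathcal{G}$; the prior and the (truthful) strategies are hypothetical values chosen for illustration, not quantities from the paper:

```python
import random

# Hypothetical cheap-talk game G with binary type, message, and action.
p = {0: 0.5, 1: 0.5}                      # prior over types m (assumed values)
# sigma_S[m][n]: probability the sender reports message n given type m
sigma_S = {0: {0: 1.0, 1: 0.0},           # e.g., a truthful sender strategy
           1: {0: 0.0, 1: 1.0}}
# sigma_R[n][y]: probability the receiver plays action y given message n
sigma_R = {0: {0: 1.0, 1: 0.0},
           1: {0: 0.0, 1: 1.0}}

def sample(dist, rng):
    """Draw a key of `dist` according to its probabilities."""
    r, acc = rng.random(), 0.0
    for k, q in dist.items():
        acc += q
        if r <= acc:
            return k
    return k                               # guard against float rounding

def play_round(rng):
    m = sample(p, rng)                     # nature draws the type
    n = sample(sigma_S[m], rng)            # sender signals
    y = sample(sigma_R[n], rng)            # receiver acts on the message only
    return m, n, y

rng = random.Random(0)
m, n, y = play_round(rng)
```

With the deterministic truthful strategies above, the message always equals the type and the action always equals the message.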
Define expected utilities for the sender and receiver as $U^{S}:\,\Gamma^{S}\times\Gamma^{R}\rightarrow\mathbb{R}$ and $U^{R}:\,\Gamma^{S}\times\Gamma^{R}\rightarrow\mathbb{R}$, such that $U^{S}\left(\sigma_{S},\sigma_{R}\right)$ and $U^{R}\left(\sigma_{S},\sigma_{R}\right)$ are the expected utilities for the sender and receiver, respectively, when the sender and receiver play according to the strategy profile $\left(\sigma_{S},\sigma_{R}\right)$. Finally, define $\tilde{U}^{S}:\,\Gamma^{S}\times\Gamma^{R}\times M\to\mathbb{R}$ and $\tilde{U}^{R}:\,\Gamma^{R}\times M\times N\to\mathbb{R}$ such that $\tilde{U}^{S}\left(\sigma_{S},\sigma_{R},m\right)$ gives the expected utility for $S$ for playing $\sigma_{S}$ when $R$ plays $\sigma_{R}$ and the type is $m$, and $\tilde{U}^{R}\left(\sigma_{R},m,n\right)$ gives the expected utility for $R$ for playing $\sigma_{R}$ when the type is $m$ and she observes message $n$. \subsection{Perfect Bayesian Nash Equilibrium} We now review the concept of Perfect Bayesian Nash equilibrium, the natural extension of subgame perfection to games of incomplete information. 
A Perfect Bayesian Nash equilibrium (see \cite{key-16}) of signaling game $\mathcal{G}$ is a strategy profile $\left(\sigma_{S},\sigma_{R}\right)$ and posterior beliefs $\mu_{R}(m\,|\, n)$ of the receiver about the sender such that \begin{equation} \forall m\in M,\,\sigma_{S}\in\underset{\bar{\sigma}_{S}\in\Gamma^{S}}{\arg\max\,}\tilde{U}^{S}\left(\bar{\sigma}_{S},\sigma_{R},m\right),\label{eq:PBE1} \end{equation} \begin{equation} \forall n\in N,\,\sigma_{R}\in\underset{\bar{\sigma}_{R}\in\Gamma^{R}}{\arg\max}\,\underset{\bar{m}\in M}{\sum}\mu_{R}\left(\bar{m}\,|\, n\right)\tilde{U}^{R}\left(\bar{\sigma}_{R},\bar{m},n\right), \end{equation} \begin{equation} \mu_{R}\left(m\,|\, n\right)=\begin{cases} \frac{\sigma_{S}\left(n\,|\, m\right)p\left(m\right)}{\underset{\bar{m}\in M}{\sum}\sigma_{S}\left(n\,|\,\bar{m}\right)p\left(\bar{m}\right)}, & \text{if }\underset{\bar{m}\in M}{\sum}\sigma_{S}\left(n\,|\,\bar{m}\right)p\left(\bar{m}\right)>0\\ \text{any distribution on }M, & \text{if }\underset{\bar{m}\in M}{\sum}\sigma_{S}\left(n\,|\,\bar{m}\right)p\left(\bar{m}\right)=0 \end{cases}.\label{eq:def1-beliefUp} \end{equation} Eq. \ref{eq:PBE1} requires $S$ to maximize his expected utility against the strategy played by $R$, for all types $m$. The second equation requires that, for all messages $n$, $R$ maximizes his expected utility against the strategy played by $S$ given his beliefs. Finally, Eq. \ref{eq:def1-beliefUp} requires the beliefs of $R$ about the type to be consistent with the strategy played by $S$, using Bayes' Law to update his prior belief according to $S$'s strategy. \section{Analysis of Deceptive Conflict Using Signaling Games\label{sec:AnalysisNoEv}} In this section, we describe an example of deception in cyber security using signaling games. These types of models have been used, for instance, in \cite{key-17,key-27,key-5,key-25}.
We give results here primarily in order to show how the results change after we add the factor of evidence emission in Section \ref{sec:AnalysisEv}. Consider a game $\mathcal{G}_{honey}$, in which a defender uses honeypots to protect a network of computers. We consider a model and parameters from \cite{key-17}, with some adaptations. In this game, the ratio of normal systems to honeypots is considered fixed. Based on this ratio, nature assigns a \emph{type} - normal system or honeypot - to each system in the network. The sender is the network defender, who can choose to reveal the type of each system or to disguise the systems. He can disguise honeypots as normal systems and disguise normal systems as honeypots. The \emph{message} is thus the network defender's portrayal of the system. The receiver in this game is the attacker, who observes the defender's portrayal of the system but not the actual type of the system. He forms a \emph{belief} about the actual type of the system given the sender's message, and then chooses an \emph{action}: attack or withdraw\footnote{In the model description in \cite{key-17}, the attacker also has an option to condition his attack on testing the system. We omit this option, because we will consider the option to test the system through a different approach in the signaling game with evidence emission in Section \ref{sec:AnalysisEv}.}. Table \ref{tab:Parameters-GHoney} gives the parameters of $\mathcal{G}_{honey}$, and the extensive form of $\mathcal{G}_{honey}$ is given in Fig. \ref{fig:ExtFormGHoney}. We have used the game theory software \emph{Gambit} \cite{key-6} for this illustration, as well as for simulating the results of games later in the paper. \begin{table*} \protect\caption{\label{tab:Parameters-GHoney}Parameters of $\mathcal{G}_{honey}$. M.S.
signifies Mixed Strategy} \centering{ \begin{tabular}{|c|c|} \hline Parameter Symbol & Meaning\tabularnewline \hline \hline $S$ & Network defender\tabularnewline \hline $R$ & Network attacker\tabularnewline \hline $m\in\left\{ 0,1\right\} $ & Type of system ($0$: normal; $1$: honeypot)\tabularnewline \hline $n\in\left\{ 0,1\right\} $ & Defender description of system ($0$: normal; $1$: honeypot)\tabularnewline \hline $y\in\left\{ 0,1\right\} $ & Attacker action ($0$: withdraw; $1$: attack)\tabularnewline \hline $p(m)$ & Prior probability of type $m$\tabularnewline \hline $\sigma_{S}\left(n\,|\, m\right)$ & Sender MS prob. of describing type $m$ as $n$ \tabularnewline \hline $\sigma_{R}\left(y\,|\, n\right)$ & Receiver MS prob. of action $y$ given description $n$ \tabularnewline \hline $v_{o}$ & Defender benefit of observing attack on honeypot\tabularnewline \hline $v_{g}$ & Defender benefit of avoiding attack on normal system\tabularnewline \hline $-c_{c}$ & Defender cost of normal system being compromised \tabularnewline \hline $v_{a}$ & Attacker benefit of compromising normal system\tabularnewline \hline $-c_{a}$ & Attacker cost of attack on any type of system\tabularnewline \hline $-c_{o}$ & Attacker additional cost of attacking honeypot\tabularnewline \hline \end{tabular} \end{table*} \begin{figure} \begin{centering} \includegraphics[width=0.6\columnwidth]{honeyNoEv_edit} \par\end{centering} \protect\caption{\label{fig:ExtFormGHoney}Extensive form of $\mathcal{G}_{honey}$, a game in which defender $S$ chooses whether to disguise systems in a network of computers, and an attacker $R$ attempts to gain by compromising normal systems while withdrawing from honeypots. Note that the type $m$ is determined by a chance move.} \end{figure} In order to characterize the equilibria of $\mathcal{G}_{honey}$, define two constants: $\mathcal{CB}_{0}^{R}$ and $\mathcal{CB}_{1}^{R}$.
Let $\mathcal{CB}_{0}^{R}$ give the relative benefit to $R$ for playing attack ($y=1$) compared to playing withdraw ($y=0$) when the system is a normal system ($m=0$), and let $\mathcal{CB}_{1}^{R}$ give the relative benefit to $R$ for playing withdraw compared to playing attack when the system is a honeypot ($m=1$). These constants are defined by Eq. \ref{eq:CB0} and Eq. \ref{eq:CB1}. \begin{equation} \mathcal{CB}_{0}^{R}\triangleq u^{R}\left(1,0\right)-u^{R}\left(0,0\right)\label{eq:CB0} \end{equation} \begin{equation} \mathcal{CB}_{1}^{R}\triangleq u^{R}\left(0,1\right)-u^{R}\left(1,1\right)\label{eq:CB1} \end{equation} We now find the pure-strategy separating and pooling equilibria of $\mathcal{G}_{honey}$. \begin{thm} The equilibria of $\mathcal{G}_{honey}$ differ in form in three parameter regions: \end{thm} \begin{itemize} \item Attack-favorable: $p\left(0\right)\mathcal{CB}_{0}^{R}>\left(1-p\left(0\right)\right)\mathcal{CB}_{1}^{R}$ \item Defend-favorable: \textbf{$p\left(0\right)\mathcal{CB}_{0}^{R}<\left(1-p\left(0\right)\right)\mathcal{CB}_{1}^{R}$} \item Neither-favorable: \textbf{$p\left(0\right)\mathcal{CB}_{0}^{R}=\left(1-p\left(0\right)\right)\mathcal{CB}_{1}^{R}$} \end{itemize} In attack-favorable, $p\left(0\right)\mathcal{CB}_{0}^{R}>\left(1-p\left(0\right)\right)\mathcal{CB}_{1}^{R}$, meaning loosely that the relative benefit to the receiver for attacking normal systems is greater than the relative loss to the receiver for attacking honeypots. In defend-favorable, \textbf{$p\left(0\right)\mathcal{CB}_{0}^{R}<\left(1-p\left(0\right)\right)\mathcal{CB}_{1}^{R}$}, meaning that the relative loss for attacking honeypots is greater than the relative benefit from attacking normal systems. In neither-favorable, \textbf{$p\left(0\right)\mathcal{CB}_{0}^{R}=\left(1-p\left(0\right)\right)\mathcal{CB}_{1}^{R}$}. We omit analysis of the neither-favorable region because it only arises with exact equality in the game parameters. 
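The region boundary can be checked numerically from the attacker's utilities. The sketch below uses hypothetical values for the payoff parameters $v_a$, $c_a$, and $c_o$ (chosen for illustration, not taken from the paper) and classifies the prior $p\left(0\right)$ into the three regions:

```python
# Sketch of the parameter-region test for G_honey, using the constants
# CB0 = u_R(1,0) - u_R(0,0) and CB1 = u_R(0,1) - u_R(1,1).
v_a, c_a, c_o = 10.0, 2.0, 6.0             # assumed attacker payoffs

def u_R(y, m):
    """Receiver utility: attack pays off on normal systems, is costly on honeypots."""
    if y == 0:
        return 0.0                          # withdrawing yields nothing either way
    return (v_a - c_a) if m == 0 else (-c_a - c_o)

CB0 = u_R(1, 0) - u_R(0, 0)                 # gain from attacking a normal system
CB1 = u_R(0, 1) - u_R(1, 1)                 # loss avoided by withdrawing from a honeypot

def region(p0):
    """Classify the prior p(0) into the three parameter regions."""
    lhs, rhs = p0 * CB0, (1 - p0) * CB1
    if lhs > rhs:
        return "attack-favorable"
    if lhs < rhs:
        return "defend-favorable"
    return "neither-favorable"
```

With these payoffs, raising the fraction of honeypots in the network (lowering $p\left(0\right)$) moves the game from the attack-favorable to the defend-favorable region.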
\subsection{Separating Equilibria} In separating equilibria, the sender plays different pure strategies for each type that he observes. Thus, he completely reveals the truth. The attacker $R$ in $\mathcal{G}_{honey}$ wants to attack normal systems but withdraw from honeypots. The defender $S$ wants the opposite: that the attacker attack honeypots and withdraw from normal systems. Thus, Theorem \ref{thm:No-separating-equilibria} should come as no surprise. \begin{thm} \label{thm:No-separating-equilibria}No separating equilibria exist in $\mathcal{G}_{honey}$. \end{thm} \subsection{Pooling Equilibria} In pooling equilibria, the sender plays the same strategies for each type. This is deceptive behavior because the sender's messages do not convey the type that he observes. The receiver relies only on prior beliefs about the distribution of types in order to choose his action. Theorem \ref{thm:pooling-noEv-attack} gives the pooling equilibria of $\mathcal{G}_{honey}$ in the attack-favorable region. 
\begin{thm} \label{thm:pooling-noEv-attack}$\mathcal{G}_{honey}$ supports the following pure strategy pooling equilibria in the attack-favorable parameter region: \begin{equation} \forall m\in M,\,\sigma_{S}\left(1\,|\, m\right)=1, \end{equation} \begin{equation} \forall n\in N,\,\sigma_{R}\left(1\,|\, n\right)=1, \end{equation} \begin{equation} \mu_{R}\left(1\,|\,0\right)\leq\frac{\mathcal{CB}_{0}^{R}}{\mathcal{CB}_{0}^{R}+\mathcal{CB}_{1}^{R}};\;\mu_{R}\left(1\,|\,1\right)=p\left(1\right), \end{equation} and \begin{equation} \forall m\in M,\,\sigma_{S}\left(1\,|\, m\right)=0, \end{equation} \begin{equation} \forall n\in N,\,\sigma_{R}\left(1\,|\, n\right)=1, \end{equation} \begin{equation} \mu_{R}\left(1\,|\,0\right)=p\left(1\right);\;\mu_{R}\left(1\,|\,1\right)\leq\frac{\mathcal{CB}_{0}^{R}}{\mathcal{CB}_{0}^{R}+\mathcal{CB}_{1}^{R}}, \end{equation} both with expected utilities given by \begin{equation} U^{S}\left(\sigma_{S},\sigma_{R}\right)=u^{S}\left(1,1\right)-p\left(0\right)\left(u^{S}\left(1,1\right)-u^{S}\left(1,0\right)\right), \end{equation} \begin{equation} U^{R}\left(\sigma_{S},\sigma_{R}\right)=u^{R}\left(1,1\right)-p\left(0\right)\left(u^{R}\left(1,1\right)-u^{R}\left(1,0\right)\right). \end{equation} \end{thm} Similarly, Theorem \ref{thm:pooling-noEv-defend} gives the pooling equilibria of $\mathcal{G}_{honey}$ in the defend-favorable region. 
\begin{thm} \label{thm:pooling-noEv-defend}$\mathcal{G}_{honey}$ supports the following pure strategy pooling equilibria in the defend-favorable parameter region: \begin{equation} \forall m\in M,\,\sigma_{S}\left(1\,|\, m\right)=1, \end{equation} \begin{equation} \forall n\in N,\,\sigma_{R}\left(1\,|\, n\right)=0, \end{equation} \begin{equation} \mu_{R}\left(1\,|\,0\right)\geq\frac{\mathcal{CB}_{0}^{R}}{\mathcal{CB}_{0}^{R}+\mathcal{CB}_{1}^{R}};\;\mu_{R}\left(1\,|\,1\right)=p\left(1\right), \end{equation} and \begin{equation} \forall m\in M,\,\sigma_{S}\left(1\,|\, m\right)=0, \end{equation} \begin{equation} \forall n\in N,\,\sigma_{R}\left(1\,|\, n\right)=0, \end{equation} \begin{equation} \mu_{R}\left(1\,|\,0\right)=p\left(1\right);\;\mu_{R}\left(1\,|\,1\right)\geq\frac{\mathcal{CB}_{0}^{R}}{\mathcal{CB}_{0}^{R}+\mathcal{CB}_{1}^{R}}, \end{equation} both with expected utilities given by \begin{equation} U^{S}\left(\sigma_{S},\sigma_{R}\right)=p\left(0\right)\left(u^{S}\left(0,0\right)-u^{S}\left(0,1\right)\right)+u^{S}\left(0,1\right), \end{equation} \begin{equation} U^{R}\left(\sigma_{S},\sigma_{R}\right)=p\left(0\right)\left(u^{R}\left(0,0\right)-u^{R}\left(0,1\right)\right)+u^{R}\left(0,1\right). \end{equation} \end{thm} In both cases, it is irrelevant whether the defender always sends $1$ or always sends $0$ (always describes systems as honeypots or always describes systems as normal systems); the effect is that the attacker ignores the description. In the attack-favorable region, the attacker always attacks. In the defend-favorable region, the attacker always withdraws. \subsection{Discussion of $\mathcal{G}_{honey}$ Equilibria} We will discuss these equilibria more when we compare them with the equilibria of the game with evidence emission. Still, we note one aspect of the equilibria here. 
At $p\left(0\right)\mathcal{CB}_{0}^{R}=\left(1-p\left(0\right)\right)\mathcal{CB}_{1}^{R}$, the expected utility is continuous for the receiver, but not for the sender. As shown in Fig. \ref{fig:Expected-Utilities-NoEv}, the sender's (network defender's) utility sharply improves if he transitions from $p\left(0\right)\mathcal{CB}_{0}^{R}>\left(1-p\left(0\right)\right)\mathcal{CB}_{1}^{R}$ to $p\left(0\right)\mathcal{CB}_{0}^{R}<\left(1-p\left(0\right)\right)\mathcal{CB}_{1}^{R}$, \emph{i.e.}, from having $40\%$ honeypots to having $41\%$ honeypots. This is an obvious mechanism design consideration. We will analyze this case further in the section on mechanism design. \begin{figure} \begin{centering} \includegraphics[width=0.6\columnwidth]{noEvUtilVsP_edit} \par\end{centering} \protect\caption{\label{fig:Expected-Utilities-NoEv}Expected utilities versus fraction of normal systems in the network.} \end{figure} \section{Cheap-Talk Signaling Games with Evidence\label{sec:Cheap-Talk-Ev}} In Section \ref{sec:AnalysisNoEv}, we used a typical signaling game to model deception in cyberspace (in $\mathcal{G}_{honey}$). In this section, we add to this game the possibility that the sender gives away evidence of deception. In a standard signaling game, the receiver's belief about the type is based only on the messages that the sender communicates and on his prior belief. In many deceptive interactions, however, there is some probability that the sender gives off evidence of deceptive behavior. In this case, the receiver's beliefs about the sender's private information may be updated both based upon the message of the sender and by evidence of deception. \subsection{Game Model \label{sub:ProtocolEvEmiss}} Let $\mathcal{G}^{evidence}$ denote a signaling game with belief updating based both on the sender's message and on evidence of deception.
This game consists of five steps, in which step 3 is new: \begin{enumerate} \item Sender, $S$, observes type, $m\in M=\left\{ 0,1\right\} $. \item $S$ communicates a message, $n\in N=\left\{ 0,1\right\} $, chosen according to a strategy $\sigma_{S}\left(n\,|\, m\right)\in\Gamma^{S}=\Delta N$ based on the type $m$ that he observes. \item \emph{$S$ emits evidence, $e\in E=\left\{ 0,1\right\} $, with probability $\lambda\left(e\,|\, m,n\right)$. Signal $e=1$ represents evidence of deception and $e=0$ represents no evidence of deception.} \item Receiver $R$ responds with an action, $y\in Y=\left\{ 0,1\right\} $, chosen according to a strategy $\sigma_{R}\left(y\,|\, n,e\right)\in\Gamma^{R}=\Delta Y$ based on the message $n$ that he receives and the evidence $e$ that he observes. \item $S$ and $R$ receive $u^{S}\left(y,m\right)$ and $u^{R}\left(y,m\right)$, respectively. \end{enumerate} Evidence $e$ is another signal that is available to $R$, in addition to the message $n$. This signal could come, \emph{e.g.}, from a \emph{detector}, which generates evidence with a probability that is a function of $m$ and $n$. The detector implements the function $\lambda\left(e\,|\, m,n\right)$. We depict this view of the signaling game with evidence emission in Fig. \ref{fig:BlockWithEvidence}. We assume that $\lambda\left(e\,|\, m,n\right)$ is common knowledge to both the sender and receiver. Since evidence is emitted with some probability, we model this as a move by a ``chance'' player, just as we model the random selection of the type at the beginning of the game as a move by a chance player. The outcome of the new chance move will be used by $R$, together with his observation of $S$'s action, to formulate his belief about the type $m$. We describe this belief updating in the next section.
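The protocol above can be sketched as a short simulation. The prior, strategies, and detector probabilities below are illustrative assumptions, not values from the paper:

```python
import random

# One simulated round of the protocol above. The detector lam(e | m, n)
# and all strategy numbers are illustrative assumptions.
p = {0: 0.6, 1: 0.4}                                    # prior over types
sigma_S = {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: 1.0}}    # a pooling sender

def lam(e, m, n):
    """Detector: evidence e=1 is more likely when the message misreports the type."""
    p_evidence = 0.8 if m != n else 0.1                 # assumed detection rates
    return p_evidence if e == 1 else 1 - p_evidence

def sample(dist, rng):
    r, acc = rng.random(), 0.0
    for k, q in dist.items():
        acc += q
        if r <= acc:
            return k
    return k

def play_round(sigma_R, rng):
    m = sample(p, rng)                                  # 1. nature draws the type
    n = sample(sigma_S[m], rng)                         # 2. sender signals
    e = 1 if rng.random() < lam(1, m, n) else 0         # 3. detector emits evidence
    y = sample(sigma_R[n][e], rng)                      # 4. receiver acts on (n, e)
    return m, n, e, y                                   # 5. payoffs u_S, u_R follow

# A receiver who ignores both message and evidence (uniform play), for illustration.
sigma_R = {n: {e: {0: 0.5, 1: 0.5} for e in (0, 1)} for n in (0, 1)}
rng = random.Random(1)
m, n, e, y = play_round(sigma_R, rng)
```

Note that the receiver's strategy now conditions on the pair $(n, e)$ rather than on the message alone.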
\begin{figure} \begin{centering} \includegraphics[width=0.67\columnwidth]{signalBlockDiagramEvidence} \par\end{centering} \protect\caption{\label{fig:BlockWithEvidence}Block diagram of a signaling game with evidence emission.} \end{figure} \subsection{Two-step Bayesian Updating} Bayesian updating is a two-step process, in which the receiver first updates his belief about the type based on the observed message of the sender, and then updates his belief a second time based on the evidence emitted. The following steps formulate the update process. \begin{enumerate} \item $R$ observes $S$'s message. He computes belief $\mu_{R}\left(m\,|\, n\right)$ based on the prior likelihoods $p\left(m\right)$ of each type and $S$'s message $n$ according to Eq. \ref{eq:def1-beliefUp}, which we rewrite here in Eq. \ref{eq:firstStepUpdateBelief-1}. \begin{equation} \mu_{R}\left(m\,|\, n\right)=\begin{cases} \frac{\sigma_{S}\left(n\,|\, m\right)p\left(m\right)}{\underset{\bar{m}\in M}{\sum}\sigma_{S}\left(n\,|\,\bar{m}\right)p\left(\bar{m}\right)}, & \text{if }\underset{\bar{m}\in M}{\sum}\sigma_{S}\left(n\,|\,\bar{m}\right)p\left(\bar{m}\right)>0\\ \text{any distribution on }M, & \text{if }\underset{\bar{m}\in M}{\sum}\sigma_{S}\left(n\,|\,\bar{m}\right)p\left(\bar{m}\right)=0 \end{cases}\label{eq:firstStepUpdateBelief-1} \end{equation} \item $R$ computes a new belief based on the evidence emitted. The prior belief in this second step is given by $\mu_{R}\left(m\,|\, n\right)$ obtained in the first step. The conditional probability of emitting evidence $e$ when the type is $m$ and the sender communicates message $n$ is $\lambda\left(e\,|\, m,n\right)$.
Thus, the receiver updates his belief in this second step according to \begin{equation} \mu_{R}\left(m\,|\, n,e\right)=\begin{cases} \frac{\lambda\left(e\,|\, m,n\right)\mu_{R}\left(m\,|\, n\right)}{\underset{\bar{m}\in M}{\sum}\lambda\left(e\,|\,\bar{m},n\right)\mu_{R}\left(\bar{m}\,|\, n\right)}, & \text{if }\underset{\bar{m}\in M}{\sum}\lambda\left(e\,|\,\bar{m},n\right)\mu_{R}\left(\bar{m}\,|\, n\right)>0\\ \text{any distribution on }M, & \text{if }\underset{\bar{m}\in M}{\sum}\lambda\left(e\,|\,\bar{m},n\right)\mu_{R}\left(\bar{m}\,|\, n\right)=0 \end{cases}.\label{eq:secondStepUpdateBelief-1} \end{equation} \end{enumerate} Having formulated the belief updating rules, we now give the conditions for a Perfect Bayesian Nash equilibrium in our signaling game with evidence emission. \subsection{Perfect Bayesian Nash Equilibrium in Signaling Game with Evidence} The conditions for a Perfect Bayesian Nash equilibrium of our augmented game are the same as those for the original signaling game, except that the belief update includes the use of emitted evidence. Here, however, we must also define a new utility function for $R$ that takes expectation conditional upon $e$ in addition to $n$. Define this utility function by $\hat{U}^{R}:\,\Gamma^{R}\times M\times N\times E\to\mathbb{R}$ such that $\hat{U}^{R}\left(\sigma_{R},m,n,e\right)$ gives the expected utility for $R$ for playing $\sigma_{R}$ when the type is $m$ and she observes message $n$ and evidence $e$. \begin{defn} A perfect Bayesian Nash equilibrium of the game $\mathcal{G}^{evidence}$ is a strategy profile $\left(\sigma_{S},\sigma_{R}\right)$ and posterior beliefs $\mu_{R}(m\,|\, n,e)$, such that the system given by Eq. \ref{eq:senderMaxEv} through Eq. \ref{eq:beliefStep2Thm} is simultaneously satisfied.
\begin{equation} \forall m\in M,\,\sigma_{S}\in\underset{\bar{\sigma}_{S}\in\Gamma^{S}}{\arg\max\,}\tilde{U}^{S}\left(\bar{\sigma}_{S},\sigma_{R},m\right)\label{eq:senderMaxEv} \end{equation} \begin{equation} \forall n\in N,\,\forall e\in E,\,\sigma_{R}\in\underset{\bar{\sigma}_{R}\in\Gamma^{R}}{\arg\max\,}\underset{\bar{m}\in M}{\sum}\mu_{R}(\bar{m}\,|\, n,e)\hat{U}^{R}\left(\bar{\sigma}_{R},\bar{m},n,e\right)\label{eq:receiverMaxEv} \end{equation} \begin{equation} \forall n\in N,\,\mu_{R}\left(m\,|\, n\right)=\begin{cases} \frac{\sigma_{S}\left(n\,|\, m\right)p\left(m\right)}{\underset{\bar{m}\in M}{\sum}\sigma_{S}\left(n\,|\,\bar{m}\right)p\left(\bar{m}\right)}, & \text{if }\underset{\bar{m}\in M}{\sum}\sigma_{S}\left(n\,|\,\bar{m}\right)p\left(\bar{m}\right)>0\\ \text{any distribution on }M, & \text{if }\underset{\bar{m}\in M}{\sum}\sigma_{S}\left(n\,|\,\bar{m}\right)p\left(\bar{m}\right)=0 \end{cases}\label{eq:beliefStep1Thm} \end{equation} \begin{equation} \begin{array}{c} \forall n\in N,\,\forall e\in E,\\ \mu_{R}\left(m\,|\, n,e\right)= \end{array}\begin{cases} \frac{\lambda\left(e\,|\, m,n\right)\mu_{R}\left(m\,|\, n\right)}{\underset{\bar{m}\in M}{\sum}\lambda\left(e\,|\,\bar{m},n\right)\mu_{R}\left(\bar{m}\,|\, n\right)}, & \text{if }\underset{\bar{m}\in M}{\sum}\lambda\left(e\,|\,\bar{m},n\right)\mu_{R}\left(\bar{m}\,|\, n\right)>0\\ \text{any distribution on }M, & \text{if }\underset{\bar{m}\in M}{\sum}\lambda\left(e\,|\,\bar{m},n\right)\mu_{R}\left(\bar{m}\,|\, n\right)=0 \end{cases}\label{eq:beliefStep2Thm} \end{equation} \end{defn} Again, the first two conditions require the sender and receiver to maximize their expected utilities. The third and fourth equations require belief consistency in terms of Bayes' Law. \section{Deception Detection Example in Network Defense\label{sec:Deception-Detection-Example}} Consider again our example of deception in cyberspace in which a defender protects a network of computer systems using honeypots.
The defender has the ability to disguise normal systems as honeypots and honeypots as normal systems. In Section \ref{sec:AnalysisNoEv}, we modeled this deception as if it were possible for the defender to disguise the systems without any evidence of deception. In reality, attackers may try to detect honeypots. For example, \emph{send-safe.com}'s ``Honeypot Hunter'' \cite{key-22} checks lists of HTTPS and SOCKS proxies and outputs text files of valid proxies, failed proxies, and honeypots. It performs a set of tests which include opening a false mail server on the local system to test the proxy connection, connecting to the proxy port, and attempting to proxy back to its false mail server \cite{key-21}. Another approach to detecting honeypots is based on timing. The authors of \cite{key-23} used a process termed \emph{fuzzy benchmarking} in order to classify systems as real machines or virtual machines, which could be used, \emph{e.g.}, as honeypots. In this process, the authors run a set of instructions which yield different timing results on different host hardware architectures in order to learn more about the hardware of the host system. Then, they run a loop of control-modifying CPU instructions (read and write control register 3, which induces a translation lookaside buffer flush) that results in increased run-time on a virtual machine compared to a real machine. The degree to which the run-times differ between the real and virtual machines depends on the number of sensitive instructions in the loop. The goal is to run enough sensitive instructions to make the divergence in run-time - even in the presence of internet noise - large enough to reliably classify the system using a timing threshold. They do not identify limits to the number of sensitive instructions to run, but we can imagine that the honeypot detector might itself want to go undetected by the honeypot and so might want to limit the number of instructions.
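To make this concrete, the sketch below models the two run-time populations as Gaussians (an assumption made here for illustration only; the cited work reports empirical timings, not this model) and computes the detection and false-positive rates induced by a threshold $t_{d}$:

```python
import math

# Rough numeric sketch of the timing-threshold idea. All distribution
# parameters and the threshold are hypothetical.
mu0, sigma0 = 100.0, 10.0     # assumed run-time distribution, normal system (m=0)
mu1, sigma1 = 160.0, 10.0     # assumed run-time distribution, virtual machine (m=1)
t_d = 130.0                   # classification threshold

def normal_cdf(t, mu, sigma):
    """P(T <= t) for a Gaussian run-time T, via the error function."""
    return 0.5 * (1 + math.erf((t - mu) / (sigma * math.sqrt(2))))

# A system is flagged as a virtual machine when its run-time exceeds t_d.
delta = 1 - normal_cdf(t_d, mu0, sigma0)    # false-positive rate on normal systems
epsilon = 1 - normal_cdf(t_d, mu1, sigma1)  # detection rate on virtual machines
```

Moving $t_{d}$ trades off the two error rates, and adding sensitive instructions to the loop widens the gap between the two distributions, improving both rates at once.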
Although they do not recount the statistical details, such an approach could result in a classification problem that can be solved correctly only with some probability. In Fig. \ref{fig:ClassificationFuzzy}, $t$ represents the execution time of the fuzzy benchmarking code. The curve $f_{0}\left(t\right)$ represents the probability density function for execution time for normal systems ($m=0$), and the curve $f_{1}\left(t\right)$ represents the probability density function for execution time for virtual machines ($m=1$). The execution time $t_{d}$ represents a threshold time used to classify the system under test. Let $AR_{i}$, $i\in\left\{ 1,2,3,4\right\} $ denote the area under regions $R_{1}$ through $R_{4}$. We have defined $\lambda\left(e\,|\, m,n\right)$ to be the likelihood with which a system of type $m$ represented as a system of type $n$ gives off evidence for deception $e$ (where $e=1$ represents evidence for deception and $e=0$ represents evidence for truth-telling). A virtual machine disguised as a normal system may give off evidence for deception, in this case in terms of the run-time of fuzzy benchmarking code. We would then have that \begin{equation} \begin{array}{c} \lambda\left(1\,|\,1,0\right)=AR_{3}+AR_{4}\\ \lambda\left(0\,|\,1,0\right)=AR_{2}=1-\left(AR_{3}+AR_{4}\right) \end{array}. \end{equation} If the system under test were actually a normal system, then the same test could result in some likelihood of a false-positive result for deception. Then, we would have \begin{equation} \begin{array}{c} \lambda\left(1\,|\,0,0\right)=AR_{3}\\ \lambda\left(0\,|\,0,0\right)=AR_{1}+AR_{2}=1-\left(AR_{3}\right) \end{array}.
\end{equation} \begin{figure} \begin{centering} \includegraphics[width=0.67\columnwidth]{timingDetection_edit} \par\end{centering} \protect\caption{\label{fig:ClassificationFuzzy}Classification of systems as normal or virtual (\emph{e.g.}, a honeypot) based on run-time for a set of control-modifying CPU instructions (based on fuzzy benchmarking in \cite{key-23}).} \end{figure} Let us assume that the likelihood with which one type of system masquerading as another can be successfully detected is the same regardless of whether it is a honeypot that is disguised as a normal system or it is a normal system that is disguised as a honeypot. Denote this probability as $\epsilon\in\left[0,1\right]$. Let $\delta\in\left[0,1\right]$ be defined as the likelihood of falsely detecting deception\footnote{Note that we assume that $\epsilon$ and $\delta$ are common knowledge; the defender also knows the power of the adversary.}. These probabilities are given by \begin{equation} \epsilon=\lambda\left(1\,|\, m,n\right),\, m\neq n,\label{eq:epsilon} \end{equation} \begin{equation} \delta=\lambda\left(1\,|\, m,n\right),\, m=n.\label{eq:delta} \end{equation} In \cite{key-23}, the authors tune the number of instructions for the CPU to run in order to sufficiently differentiate normal systems and honeypots. In this case, $\epsilon$ and $\delta$ may relate to the number of instructions that the detector asks the CPU to run. In general, though, the factors which influence $\epsilon$ and $\delta$ could vary. Powerful attackers will have relatively high $\epsilon$ and low $\delta$ compared to less powerful attackers. Next, we study this network defense example using our model of signaling games with evidence. \section{Analysis of Network Defense using Signaling Games with Evidence\label{sec:AnalysisEv}} Figure \ref{fig:Extensive-formGHEv} depicts the extensive form of the signaling game with evidence for our network defense problem. Call this game $\mathcal{G}_{honey}^{evidence}$.
(See \cite{key-17} for a more detailed explanation of the meaning of the parameters.) In the extremes of $\epsilon$ and $\delta$, we will see that the game degenerates into simpler types of games. \begin{figure} \begin{centering} \includegraphics[width=0.75\columnwidth]{honeyEv_edit} \par\end{centering} \protect\caption{\label{fig:Extensive-formGHEv}Extensive form depiction of $\mathcal{G}_{honey}^{evidence}$. Note that the type $m$ and the evidence $e$ are both determined by chance moves.} \end{figure} First, because $R$ updates his belief based on evidence emission in a Bayesian manner, any situation in which $\delta=\epsilon$ will render the evidence useless. The condition $\delta=\epsilon$ would arise from an attacker completely powerless to detect deception. This is indicated in Fig. \ref{fig:Degenerate-cases} by the region \emph{game without evidence}, which we term $\mathcal{R}_{Weak}$ to indicate an attacker with weak detection capability. Second, on the other extreme, we have the condition $\epsilon=1,\,\delta=0$, which indicates that the attacker can always detect deception and never registers false positives. Denote this region $\mathcal{R}_{Omnipotent}$ to indicate an attacker with omnipotent detection capability. $\mathcal{R}_{Omnipotent}$ degenerates into a \emph{complete information game} in which both $S$ and $R$ are able to observe the type $m$. Third, we have a condition in which the attacker's detection capability is such that \emph{evidence guarantees deception }(when $\delta=0$ but $\epsilon$ is not necessarily $1$) and a condition in which the attacker's power is such that \emph{no evidence guarantees truth-telling} (when $\epsilon=1$ but $\delta$ is not necessarily $0$). We can term these two regions $\mathcal{R}_{Conservative}$ and $\mathcal{R}_{Aggressive}$, because the attacker never detects a false positive in $\mathcal{R}_{Conservative}$ and never misses a sign for deception in $\mathcal{R}_{Aggressive}$. 
Finally, we have the region $\mathcal{R}_{Intermediate}$ in which the attacker's detection capability is powerful enough that he correctly detects deception at a greater rate than he registers false positives, but does not achieve $\delta=0$ or $\epsilon=1$. We list these attacker conditions in Table \ref{degenerateCases}\footnote{We have defined these degenerate cases only for the case in which $\epsilon\geq\delta$ - \emph{i.e.}, evidence for deception is more likely to be emitted when the sender lies than when he tells the truth. Mathematically, the equilibria of the game are actually symmetric around the diagonal $\epsilon=\delta$ in Fig. \ref{fig:Degenerate-cases}. This can be explained intuitively by considering the evidence emitted to be ``evidence for truth-revelation'' in the upper-left corner. In interpersonal deception, evidence for truth-revelation could correlate with, \emph{e.g.}, the amount of spatial detail in a subject's account of an event.}. Let us examine the equilibria of $\mathcal{G}_{honey}^{evidence}$ in these different cases.
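The five capability regions can be summarized as a small classifier. This is our sketch, mirroring the conditions of the table; the string labels are ours.

```python
def detection_region(epsilon, delta):
    """Classify the attacker's detection capability,
    assuming the convention epsilon >= delta."""
    assert 0 <= delta <= epsilon <= 1
    if epsilon == delta:
        return "Weak"            # game without evidence
    if epsilon == 1 and delta == 0:
        return "Omnipotent"      # complete information game
    if delta == 0:
        return "Conservative"    # evidence guarantees deception
    if epsilon == 1:
        return "Aggressive"      # no evidence guarantees truth-telling
    return "Intermediate"        # 0 < delta < epsilon < 1
```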
\begin{figure} \begin{centering} \includegraphics[width=0.6\columnwidth]{effectOfEvidence} \par\end{centering} \protect\caption{\label{fig:Degenerate-cases}Degenerate cases of $\mathcal{G}_{honey}^{evidence}$} \end{figure} \begin{table} \protect\caption{\label{degenerateCases}Attacker capabilities for degenerate cases of $\mathcal{G}_{honey}^{evidence}$} \centering{ \begin{tabular}{|c|c|c|} \hline Name of Region & Description of Region & Parameter Values\tabularnewline \hline \hline $\mathcal{R}_{Weak}$ & Game without evidence & $\delta=\epsilon$\tabularnewline \hline $\mathcal{R}_{Omnipotent}$ & Complete information game & $\epsilon=1,\,\delta=0$\tabularnewline \hline $\mathcal{R}_{Conservative}$ & Evidence guarantees deception & $\delta=0$ \tabularnewline \hline $\mathcal{R}_{Aggressive}$ & No evidence guarantees truth-telling & $\epsilon=1$ \tabularnewline \hline $\mathcal{R}_{Intermediate}$ & No guarantees & $0<\delta<\epsilon<1$\tabularnewline \hline \end{tabular} \end{table} \subsection{Equilibria for $\mathcal{R}_{Weak}$} The equilibria for $\mathcal{R}_{Weak}$ are given by our analysis of the game without evidence ($\mathcal{G}_{honey}$) in Section \ref{sec:AnalysisNoEv}. Recall that a separating equilibrium was not sustainable, while pooling equilibria did exist. Also, the equilibrium solutions fell into two different parameter regions. The sender's utility was discontinuous at the interface between parameter regions, creating an optimal proportion of normal systems that could be included in a network while still deterring attacks. \subsection{Equilibria for $\mathcal{R}_{Omnipotent}$} For $\mathcal{R}_{Omnipotent}$, the attacker knows with certainty the type of system (normal or honeypot) that he is facing. If the evidence indicates that the system is a normal system, then he attacks. If the evidence indicates that the system is a honeypot, then he withdraws. The defender's description is unable to disguise the type of the system.
Theorem \ref{thm:pooling-Ev-Omni} gives the equilibrium strategies and utilities. \begin{thm} \label{thm:pooling-Ev-Omni}$\mathcal{G}_{honey}^{evidence}$, under adversary capabilities $\mathcal{R}_{Omnipotent}$ supports the following equilibria: \begin{equation} \sigma_{S}\left(m\,|\, n\right)\in\Gamma^{S} \end{equation} \begin{equation} \sigma_{R}\left(1\,|\, n,e\right)=\begin{cases} n, & e=1\\ 1-n, & e=0 \end{cases},\,\forall n\in N, \end{equation} \begin{equation} \mu_{R}\left(1\,|\, n,e\right)=\begin{cases} 1-n, & e=1\\ n, & e=0 \end{cases},\,\forall n\in N, \end{equation} with expected utilities given by \begin{equation} U^{S}\left(\sigma_{S},\sigma_{R}\right)=p\left(0\right)\left(u^{S}\left(1,0\right)-u^{S}\left(0,1\right)\right)+u^{S}\left(0,1\right), \end{equation} \begin{equation} U^{R}\left(\sigma_{S},\sigma_{R}\right)=p\left(0\right)\left(u^{R}\left(1,0\right)-u^{R}\left(0,1\right)\right)+u^{R}\left(0,1\right). \end{equation} \end{thm} Similarly to $\mathcal{R}_{Weak}$, in $\mathcal{R}_{Omnipotent}$ the expected utilities for $S$ and $R$ are the same regardless of the equilibrium strategy chosen (although the equilibrium strategy profiles are not as interesting here because of the singular role of evidence). Next, we analyze the equilibria in the non-degenerate cases, $\mathcal{R}_{Conservative}$, $\mathcal{R}_{Aggressive}$, and $\mathcal{R}_{Intermediate}$ , by numerically solving for equilibria under selected parameter settings. \subsection{Equilibria for $\mathcal{R}_{Conservative}$ , $\mathcal{R}_{Aggressive}$, and $\mathcal{R}_{Intermediate}$ } In Section \ref{sec:AnalysisNoEv}, we found analytical solutions for the equilibria of a signaling game in which the receiver does not have the capability to detect deception. In this section, we give results concerning signaling games in which the receiver does have the capability to detect deception, using illustrative examples rather than an analytical solution. 
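The closed-form utilities in Theorem \ref{thm:pooling-Ev-Omni} are straightforward to evaluate numerically. The sketch below is ours; it assumes the payoff maps are indexed as $u(a, m)$ with action $a$ and type $m$, our reading of the theorem's notation.

```python
def omnipotent_expected_utilities(p0, u_S, u_R):
    """Expected utilities under R_Omnipotent.

    p0       -- prior probability p(0) that a system is normal
    u_S, u_R -- dicts mapping (action, type) -> payoff; only the
                on-path entries (1, 0) and (0, 1) are needed, since an
                omnipotent receiver attacks normal systems (type 0)
                and withdraws from honeypots (type 1).
    """
    US = p0 * (u_S[(1, 0)] - u_S[(0, 1)]) + u_S[(0, 1)]
    UR = p0 * (u_R[(1, 0)] - u_R[(0, 1)]) + u_R[(0, 1)]
    return US, UR
```

With the sample parameters introduced below (attacked normal system: $-10$ for the sender, $+15$ for the attacker; a sender payoff of $0$ for an unattacked honeypot is our assumption), $p(0)=0.6$ gives $U^{S}=-6$ and $U^{R}=9$.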
To study equilibria under the three non-degenerate cases, we choose a set of parameters for the attacker and defender utilities (Table \ref{tab:Sample-parameters-which}). In this model (from \cite{key-17}), the defender gains utility from maintaining normal systems that are not attacked in the network, and also from observing attacks on honeypots. The defender incurs a loss if a normal system is attacked. The attacker, on the other hand, gains only from attacking a normal system; he incurs losses if he attacks a honeypot. \begin{table} \protect\caption{\label{tab:Sample-parameters-which}Sample parameters which describe $\mathcal{G}_{honey}^{evidence}$} \centering{ \begin{tabular}{|c|c|} \hline Parameter Symbol & Value\tabularnewline \hline \hline $v_{o}$, sender utility from observing attack on honeypot & $5$\tabularnewline \hline $v_{g}$, sender utility from normal system surviving & $1$\tabularnewline \hline $-c_{C}$, sender cost for compromised normal system & $-10$\tabularnewline \hline $-c_{o}-c_{a}$, cost to the attacker for attacking a honeypot & $-22$\tabularnewline \hline $0$, utility for attacker for withdrawing from any system & $0$\tabularnewline \hline $v_{a}-c_{a}$, benefit of attacker for compromising normal system & $15$\tabularnewline \hline \end{tabular} \end{table} Based on these parameters, we can find the equilibrium utilities at each terminal node of Fig. \ref{fig:Extensive-formGHEv}. We study examples in the attacker capability regions of $\mathcal{R}_{Conservative}$, $\mathcal{R}_{Aggressive}$, and $\mathcal{R}_{Intermediate}$\footnote{The values of $\epsilon$ and $\delta$ are constrained by Table \ref{degenerateCases}. Where the values are not set by the region, we choose them arbitrarily.
Specifically, we choose for $\mathcal{R}_{Weak}$, $\epsilon=0,\,\delta=0$; for $\mathcal{R}_{Intermediate}$, $\epsilon=0.8,\,\delta=0.5$; for $\mathcal{R}_{Conservative}$, $\epsilon=0.8,\,\delta=0$; for $\mathcal{R}_{Aggressive}$, $\epsilon=1,\,\delta=0.5$; and for $\mathcal{R}_{Omnipotent}$, $\epsilon=1.0,\,\delta=0$.}. For each of these attacker capabilities, we look for equilibria in pure strategies under three different selected values for the percentage of normal systems (compared to honeypots) that make up a network. For the high case, we set the ratio of normal systems to total systems to be $p\left(0\right)=0.9$. Denote this case \emph{normal-saturated}. For the medium case, we set $p\left(0\right)=0.6$. Denote this case \emph{non-saturated}. Finally, label the low case, in which $p\left(0\right)=0.2$, \emph{honeypot-saturated}. For comparison, we also include the equilibria under the same game with no evidence emission (which corresponds to $\mathcal{R}_{Weak}$), and the equilibria under the same game with evidence that has a true-positive rate of $1.0$ and a false-positive rate of $0$ (which corresponds to $\mathcal{R}_{Omnipotent}$). In Table \ref{tab:Equilibria-for-Selected}, we list whether each parameter set yields pure strategy equilibria.
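Such a search can be carried out by brute force over the finite pure-strategy profiles. The sketch below is ours, not the authors' code: it builds expected utilities from the sample parameters of Table \ref{tab:Sample-parameters-which}, assumes a sender payoff of $0$ for an unattacked honeypot (not listed in the table), and checks only for strictly profitable unilateral deviations in ex-ante expected utility - a Bayesian-Nash style check, weaker than the perfect Bayesian equilibria the paper studies.

```python
from itertools import product

def find_pure_equilibria(p0, epsilon, delta,
                         v_o=5, v_g=1, c_C=10,
                         hp_attack_cost=22, normal_gain=15):
    """Brute-force search for pure-strategy profiles with no strictly
    profitable unilateral deviation.  Types: m = 0 normal, m = 1 honeypot."""
    types, msgs, acts, ev = (0, 1), (0, 1), (0, 1), (0, 1)
    p_type = {0: p0, 1: 1 - p0}

    def u_S(m, a):                         # sender (defender) payoff
        if m == 0:
            return -c_C if a == 1 else v_g
        return v_o if a == 1 else 0.0      # unattacked honeypot: assumed 0

    def u_R(m, a):                         # receiver (attacker) payoff
        if a == 0:
            return 0.0
        return normal_gain if m == 0 else -hp_attack_cost

    def p_ev(e, m, n):
        q = epsilon if m != n else delta   # P(e = 1 | m, n)
        return q if e == 1 else 1.0 - q

    def expected(s, r):
        # s: tuple with s[m] = message; r: dict with r[(n, e)] = action
        US = UR = 0.0
        for m in types:
            n = s[m]
            for e in ev:
                a = r[(n, e)]
                w = p_type[m] * p_ev(e, m, n)
                US += w * u_S(m, a)
                UR += w * u_R(m, a)
        return US, UR

    sender_strats = list(product(msgs, repeat=2))
    recv_strats = [dict(zip(list(product(msgs, ev)), choice))
                   for choice in product(acts, repeat=4)]
    equilibria = []
    for s in sender_strats:
        for r in recv_strats:
            US, UR = expected(s, r)
            if any(expected(s2, r)[0] > US + 1e-9 for s2 in sender_strats):
                continue
            if any(expected(s, r2)[1] > UR + 1e-9 for r2 in recv_strats):
                continue
            equilibria.append((s, r))
    return equilibria
```

On the sample parameters this check confirms, for instance, that pure profiles survive in the normal-saturated $\mathcal{R}_{Weak}$ case ($p(0)=0.9$, $\epsilon=\delta=0$) and in the $\mathcal{R}_{Omnipotent}$ case ($\epsilon=1$, $\delta=0$).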
\begin{table*} \protect\caption{\label{tab:Equilibria-for-Selected}Equilibria for Selected Parameter Values in $\mathcal{R}_{Conservative}$, $\mathcal{R}_{Aggressive}$, and $\mathcal{R}_{Intermediate}$, when the percentage of normal systems in the network is high, medium, and low.} \centering{ \begin{tabular}{|c|c|c|c|} \hline {\small{}Saturation} & {\small{}$\mathcal{R}_{Weak}$} & {\small{}$\mathcal{R}_{Intermediate}$, $\mathcal{R}_{Conservative}$, $\mathcal{R}_{Aggressive}$} & {\small{}$\mathcal{R}_{Omnipotent}$}\tabularnewline \hline \emph{\small{}Normal} & {\small{}Yes} & {\small{}Yes} & {\small{}Yes}\tabularnewline \hline \emph{\small{}None} & {\small{}Yes} & {\small{}None} & {\small{}Yes}\tabularnewline \hline \emph{\small{}Honeypot} & {\small{}Yes} & {\small{}Yes} & {\small{}Yes}\tabularnewline \hline \end{tabular} \end{table*} For adversary detection capabilities represented by $\mathcal{R}_{Weak}$, we have a standard signaling game, and thus the well-known result that a (pooling) equilibrium always exists. In $\mathcal{R}_{Omnipotent}$, the deception detection is foolproof, and thus the receiver knows the type with certainty. We are left with a complete information game. Essentially, the type merely determines which Stackelberg game the sender and receiver play. Because pure strategy equilibria always exist in Stackelberg games, $\mathcal{R}_{Omnipotent}$ also always has pure-strategy equilibria. The rather unintuitive result comes from $\mathcal{R}_{Intermediate}$, $\mathcal{R}_{Conservative}$, and $\mathcal{R}_{Aggressive}$. In these ranges, the receiver's ability to detect deception falls somewhere between no capability ($\mathcal{R}_{Weak}$) and perfect capability ($\mathcal{R}_{Omnipotent}$). The extreme regions exhibit pure-strategy equilibria, but the intermediate regions may not. Specifically, they appear to fail to support pure-strategy equilibria when the ratio of honeypots within the network does not fall close to either $1$ or $0$.
In Section \ref{sec:Mechanism-Design} on mechanism design, we will see that this region plays an important role in the comparison of network defense - and deceptive interactions in general - with and without the technology for detecting deception. \section{Mechanism Design for Detecting or Leveraging Deception\label{sec:Mechanism-Design}} In this section, we discuss design considerations for a defender who is protecting a network of computers using honeypots. In order to do this, we choose a particular case study, and analyze how the network defender can best set parameters to achieve his goals. We also discuss the scenario from the point of view of the attacker. Specifically, we examine how the defender can set the exogenous properties of the interaction in 1) the case in which honeypots cannot be detected, and 2) the case in which the attacker has implemented a method for detecting honeypots. Then, we discuss the difference between these two situations. \subsection{Attacker Incapable of Honeypot Detection} First, consider the case in which the attacker does not have the ability to detect honeypots, \emph{i.e.}, $\mathcal{G}_{honey}$. The parameters which determine the attacker and defender utilities are set according to Table \ref{tab:Sample-parameters-which}. The attacker's utility as a function of the fraction of normal systems in the network is given by the red (circular) data points in Fig. \ref{fig:uSAttacker1}. We can distinguish two parameter regions. When the proportion of honeypots in the network is greater than approximately $40\%$ (\emph{i.e.}, $p\left(0\right)<60\%$), the attacker is completely deterred. Because of the high likelihood that he will encounter a honeypot if he attacks, he chooses to withdraw from all systems. As the proportion of normal systems increases beyond $p\left(0\right)=60\%$, he switches to attacking all systems.
He attacks regardless of the sender's signal, because in the pooling equilibrium, the signal does not convey any information about the type to the receiver. In this domain, as the proportion of normal systems increases, the expected utility of the attacker increases. For this case in which the attacker cannot detect honeypots, the defender's expected utility as a function of $p\left(0\right)$ is given by the red (circular) data points in Fig. \ref{fig:uDefender}. We have noted that, in the domain $p\left(0\right)<60\%$, the attacker always withdraws. In this domain, it is actually beneficial for the defender to have as close as possible to the transition density of $60\%$ normal systems, because he gains more utility from normal systems that are not attacked than from honeypots that are not attacked. But if the defender increases the proportion of normal systems beyond $60\%$, he incurs a sudden drop in utility, because the attacker switches from never attacking to always attacking. Thus, if the defender has the capability to design his network with any number of honeypots, he faces an optimization in which he wants as close as possible to $40\%$ of the systems to be honeypots\footnote{At this limit, the defender's utility has a jump, but the attacker's does not. It costs very little extra for the attacker to switch to always attacking as $p\left(0\right)$ approaches the transition density. Therefore, the defender should be wary of a ``malicious'' attacker who might decide to incur a small extra utility cost in order to inflict a large utility cost on the defender. A more complete analysis of this idea could be pursued with multiple types of attackers.}.
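The $\approx60\%$ transition density can be recovered directly from the attacker's indifference condition under pooling: he attacks when the expected gain $p(0)\cdot15$ exceeds the expected loss $\left(1-p(0)\right)\cdot22$. A quick back-of-the-envelope check (ours, using the values of Table \ref{tab:Sample-parameters-which}, not the paper's full equilibrium computation):

```python
def attacker_transition_density(gain_normal=15, loss_honeypot=22):
    """Fraction p(0) of normal systems at which an attacker who cannot
    condition on evidence is indifferent between attacking all systems
    and withdrawing: p * gain = (1 - p) * loss."""
    return loss_honeypot / (gain_normal + loss_honeypot)

# 22 / 37 is approximately 0.595, consistent with the ~60% transition.
```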
\begin{figure} \begin{centering} \includegraphics[width=0.6\columnwidth]{EvNoEvUtilVsPAttacker_edit} \par\end{centering} \protect\caption{\label{fig:uSAttacker1}Expected utility for the attacker in games of $\mathcal{G}_{honey}$ and $\mathcal{G}_{honey}^{evidence}$ as a function of the fraction $p\left(0\right)$ of normal systems in the network.} \begin{centering} \includegraphics[width=0.6\columnwidth]{EvNoEvUtilVsPDefender_edit} \par\end{centering} \protect\caption{\label{fig:uDefender}Expected utility for the defender in games of $\mathcal{G}_{honey}$ and $\mathcal{G}_{honey}^{evidence}$ as a function of the fraction $p\left(0\right)$ of normal systems in the network.} \end{figure} \subsection{Attacker Capable of Honeypot Detection} Consider now how the network defense is affected if the attacker gains some ability to detect deception. This game takes the form of $\mathcal{G}_{honey}^{evidence}$. Recall that, in this form, a chance move has been added after the sender's action. The chance move determines whether the receiver observes evidence that the sender is being deceptive. For Fig. \ref{fig:uSAttacker1} and Fig. \ref{fig:uDefender}, we have set the detection rates at $\epsilon=0.8$ and $\delta=0.5$. These fall within the attacker capability range $\mathcal{R}_{Intermediate}$. Observing evidence does not guarantee deception; neither does a lack of evidence guarantee truth-revelation. In the blue (cross) data points in Fig. \ref{fig:uSAttacker1}, we see that, at the extremes of $p\left(0\right)$, the utility of the attacker is unaffected by the ability to detect deception according to probabilities $\epsilon$ and $\delta$. The low ranges of $p\left(0\right)$, as described in Table \ref{tab:Equilibria-for-Selected}, correspond to the \emph{honeypot-saturated} region. In this region, honeypots predominate to such an extent that the attacker is completely deterred from attacking.
Note that, compared to the data points for the case without deception detection, the minimum proportion of honeypots which incentivizes the attacker to uniformly withdraw has increased. Thus, for instance, a $p\left(0\right)$ of approximately $0.50$ incentivizes an attacker without deception detection capabilities to withdraw from all systems, but does not incentivize an attacker with deception detection capabilities to withdraw. At $p\left(0\right)=0.50$, the advent of honeypot-detection abilities causes the defender's utility to drop from $0.5$ to approximately $-2$. At the other end of the $p\left(0\right)$ axis, we see that a high-enough $p\left(0\right)$ causes the utilities to again be unaffected by the ability to detect deception. This is because the proportion of normal systems is so high that the receiver's best strategy is to attack constantly (regardless of whether he observes evidence for deception). In the middle (\emph{non-saturated}) region of $p\left(0\right)$, the attacker's strategy is no longer to solely attack or solely withdraw. This causes the ``cutting the corner'' behavior of the attacker's utility in Fig. \ref{fig:uSAttacker1}. This conditional strategy also induces the middle region for the defender's utility in Fig. \ref{fig:uDefender}. Intuitively, we might expect that the attacker's ability to detect deception could only decrease the defender's utility. But the middle (\emph{non-saturated}) range of $p\left(0\right)$ shows that this is not the case. Indeed, from approximately $p\left(0\right)=0.6$ to $p\left(0\right)=0.7$, the defender actually benefits from the attacker's ability to detect deception! The attacker himself always benefits from the ability to detect deception. Thus, there is an interesting region of $p\left(0\right)$ for which the ability of the attacker to detect deception results in a mutual benefit. Finally, we can examine the effect of evidence as it becomes more powerful in the green (triangle) points in Fig.
\ref{fig:uSAttacker1} and Fig. \ref{fig:uDefender}. These equilibria were obtained for $\epsilon=0.9$ and $\delta=0.3$. This more powerful detection capability broadens the middle parameter domain in which the attacker bases his strategy partly upon evidence. Indeed, in the omnipotent detector case, the plots for both attacker and defender consist of straight lines from their utilities at $p\left(0\right)=0$ to their utilities at $p\left(0\right)=1$. Because the attacker with an omnipotent detector is able to discern the type of the system completely, his utility grows in proportion to the fraction of normal systems, which he uniformly attacks. He withdraws uniformly from honeypots. \section{Related Work\label{sec:Related-Work}} Deception has become a critical research area, and several works have studied problems similar to ours. Alpcan et al. \cite{key-27} discuss how to combine sensing technologies within a network with game theory in order to design intrusion detection systems. They study two models. The first is a cooperative game, in which the contribution of different sensors towards detecting an intrusion determines the coalitions of sensors whose threat values will be used in computing the threat level. In the second model, they include the attacker, who determines which subsystems to attack. This model is a dynamic (imperfect) information game, meaning that as moves place the game in various information sets, players learn about the history of moves. Unlike our model, it is a complete information game, meaning that both players know the utility functions of the other player. Farhang et al. study a multiple-period, information-asymmetric attacker-defender game involving deception \cite{key-5}. In their model, the sender type - benign or malicious - is known to the receiver only with an initial probability, and that probability is updated in a Bayesian manner over the course of multiple interactions. In \cite{key-25}, Zhuang et al.
study deception in multiple-period signaling games, but their paper also involves resource allocation. The paper has interesting insights into the advantage to a defender of maintaining secrecy. Similar to our work, they consider an example of defensive use of deception. In both \cite{key-5} and \cite{key-25}, however, players update beliefs only through repeated interactions, whereas one of the players in our model incorporates a mechanism for deception detection. We have drawn most extensively from the work of Carroll and Grosu \cite{key-17}, who study the strategic use of honeypots for network defense in a signaling game. The parameters of our attacker and defender utilities come from \cite{key-17}, and the basic structure of our signaling game is adapted from that work. In \cite{key-17}, the type of a particular system is chosen randomly from the distribution of normal systems and honeypots. Then the sender chooses how to describe the system (as a normal system or as a honeypot), which may be truthful or deceptive. For the receiver's move, he may choose to attack, to withdraw, or to condition his attack on testing the system. In this way, honeypot detection is included in the model. Honeypot detection adds a cost to the attacker regardless of whether the system being tested is a normal system or a honeypot, but mitigates the cost of an attack being observed in the case that the system is a honeypot. In our paper, we enrich the representation of honeypot testing by making its effect on utility endogenous. We model the outcome of this testing as an additional move by nature after the sender's move. This models detection as a technique that may not always succeed, and to which both the sender and receiver can adapt their equilibrium strategies. \section{Discussion\label{sec:Discussion}} In this paper, we have investigated the ways in which the outcomes of a strategic, deceptive interaction are affected by the advent of deception-detecting technology.
We have studied this problem using a version of a signaling game in which deception may be detected with some probability. We have modeled the detection of deception as a chance move that occurs after the sender selects a message based on the type that he observes. For the cases in which evidence is trivial or omnipotent, we have given the analytical equilibrium outcome, and for cases in which evidence has partial power, we have presented numerical results. Throughout the paper, we have used the example of honeypot implementation in network defense. In this context, the technology of detecting honeypots has played the role of a malicious use of anti-deception. This has served as a general example to show how equilibrium utilities and strategies can change in games involving deception when the agent being deceived gains some detection ability. Our first contribution is the model we have presented for signaling games with deception detection. We also show how special cases of this model cause the game to degenerate into a traditional signaling game or into a complete information game. Our model is quite general, and could easily be applied to strategic interactions in interpersonal deception such as border control, international negotiation, advertising and sales, and suspect interviewing. Our second contribution is the numerical demonstration showing that pure-strategy equilibria are not supported under this model when the distribution of types is in a middle range but are supported when the distribution is close to either extreme. Finally, we show that it is possible that the ability of a receiver to detect deception could actually increase the utility of a possibly-deceptive sender. These results have concrete implications for network defense through honeypot deployment. More importantly, they are also general enough to apply to the large and critical body of strategic interactions that involve deception.
/* * spiral_reference_line_smoother.h */ #ifndef MODULES_PLANNING_REFERENCE_LINE_SPIRAL_REFERENCE_LINE_SMOOTHER_H_ #define MODULES_PLANNING_REFERENCE_LINE_SPIRAL_REFERENCE_LINE_SMOOTHER_H_ #include <vector> #include "Eigen/Dense" #include "modules/planning/proto/planning.pb.h" #include "modules/planning/reference_line/reference_line.h" #include "modules/planning/reference_line/reference_point.h" namespace apollo { namespace planning { class SpiralReferenceLineSmoother { public: SpiralReferenceLineSmoother() = default; virtual ~SpiralReferenceLineSmoother() = default; void set_max_point_deviation(const double d); bool Smooth(const ReferenceLine& raw_reference_line, ReferenceLine* const smoothed_reference_line) const; private: bool Smooth(std::vector<Eigen::Vector2d> point2d, std::vector<common::PathPoint>* ptr_smoothed_point2d) const; std::vector<common::PathPoint> to_path_points( const double start_x, const double start_y, const double start_s, const double theta0, const double kappa0, const double dkappa0, const double theta1, const double kappa1, const double dkappa1, const double delta_s, const double resolution) const; common::PathPoint to_path_point(const double x, const double y, const double s, const double theta, const double kappa, const double dkappa) const; double max_point_deviation_ = 0.0; }; } // namespace planning } // namespace apollo #endif // MODULES_PLANNING_REFERENCE_LINE_SPIRAL_REFERENCE_LINE_SMOOTHER_H_
Mycophycias is a genus of fungi described by Kohlm. and Brigitte Volkmann-Kohlmeyer. Mycophycias belongs to the class Dothideomycetes, the division Ascomycota (the sac fungi), and the kingdom Fungi. The genus contains only the species Mycophycias ascophylli.
License ============================ The MIT License (MIT) Copyright (c) 2015 Kai-Ting (Danil) Ko Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
About
============================
Ecommerce website sample integrated with the PayPal API, the Shippo API, and Google reCAPTCHA, with a jQuery and Bootstrap frontend. It utilizes Spring, HikariCP, and DataNucleus for backend PostgreSQL RDBMS integration.

Please note this is a sample; it is by no means a complete solution with respect to security.

Compile
============================
On the project git repo level, type
```
gradle build
```

Execute
============================
Copy the built controller.war into a Tomcat 7+ container engine.

The following environment variables need to be present:
```
GOOGLE_RECAPTCHA_API_ENDPOINT
GOOGLE_RECAPTCHA_API_SECRET
PAYPAL_API_SERVICE_ENDPOINT
PAYPAL_API_CLIENT_ID
PAYPAL_API_CLIENT_SECRET
SHIPPO_CLIENT_API_KEY
```
PostgreSQL 9.2 or later needs to be present with the following values, or one will need to change the default settings in application.properties within: WAR/classes/application.properties

A sample Docker container that fulfills this can be built with
```
docker run -p 127.0.0.1:5432:5432 -d --name postgresql stackbrew/postgres:latest
```
Q: Send one AJAX call after the other with jQuery I'm having problems sending an AJAX call after the other while looping and array of objects. The code is the following var def = true; $.each(urls, function(index, val) { var postResult = $.Deferred(); link = 'http://myurl.com' + val.aslug + '/' + val.slug; $.when(def).then(function(){ $.post('https://graph.facebook.com ', {id : link, scrape : 'true' }, function(data, textStatus, xhr) { }).always(function(){ postResult.resolve(5); }); }); def = postResult; }); The problem is the first and the second call they are ok. But the third call and the following are the same as 2nd call :/ I think the each is not changing to the next object A: The easiest way in this case will be: move variable link assignment inside of the "when-then" construction. var def = true; $.each(urls, function(index, val) { var postResult = $.Deferred(); $.when(def).then(function(){ var link = 'http://myurl.com' + val.aslug + '/' + val.slug; $.post('https://graph.facebook.com ', {id : link, scrape : 'true' }, function(data, textStatus, xhr) { }).always(function(){ postResult.resolve(5); }); }); def = postResult; }); You got this issue because of JS asynchronous ability. $.each() - loop simply doesn't wait when your "deferred piece of code" will be run. It loops thru your array and creates a queue of tasks which will be executed. Also you may consider to use $.AJAX with option async:False, instead A: Your instinct is correct - the biggest problem is that you don't want to create a function within a loop. By the time $.when resolves for the second time, your link variable inside that closure will no longer reference the correct values. That's why you are getting a bunch of calls to the same url. 
In addition, to create a sequence of promises, you are going to want to use something like this: https://github.com/kriskowal/q#sequences (not sure if jQuery promises has a comparable feature) Example code: // not sure if this is necessary // haven't used jQuery promises much var result = $.when(true); mapUrlsToPosts(urls).forEach(function(f) { result = result.then(f); }); function mapUrlsToPosts(urls) { return urls.map(function(url) { // create a function that when executed will make a request to the desired url return post.bind(null, url); }); } function post(val) { var link = 'http://myurl.com' + val.aslug + '/' + val.slug, postResult = $.Deferred(), params = { link: link, scrape: 'true' }; $.post('https://graph.facebook.com ', params, function(data, textStatus, xhr) { }).always(function(){ postResult.resolve(5); }); return postResult; } EDIT: By the way, all this can be made much simpler if you don't need to wait for the previous request to finish before making the next. Most of the complexity here comes from queueing the requests.
window.onfontsready = function (fontNames, onReady, options, fIterator, finishedCount) { // Ensure options is defined to prevent access errors options = options || 0; // Combine assignment for compression fIterator = finishedCount = 0; for (; fIterator < fontNames.length; fIterator++) { window.onfontready(fontNames[fIterator], function () { // fIterator was not counted down here because some font // detections might operate synchronously // Prefix increment allows inline increment with comparison if (++finishedCount >= fontNames.length) { // All fonts have been loaded and parsed onReady(); } }, { // The timeoutAfter is included here to force onfontready to // shutdown the detection elements in the event of a timeout // The onTimeout callback is handled below timeoutAfter: options.timeoutAfter, sampleText: options.sampleText instanceof Array ? options.sampleText[fIterator] : options.sampleText, generic: options.generic instanceof Array ? options.generic[fIterator] : options.generic }); } if (options.timeoutAfter && options.onTimeout) { setTimeout(function () { // Prevent timeout call if already finished if (finishedCount < fontNames.length) { // All comparisions to NaN are false, preventing onReady call // Combine two operations, setting finishedCount value first options.onTimeout(finishedCount = NaN); } }, options.timeoutAfter); } };
{"url":"http:\/\/mathoverflow.net\/questions\/115264\/multiplication-of-cauchy-and-dedekind-real-numbers","text":"# Multiplication of Cauchy and Dedekind real numbers\n\nIn Michael Dummett's book \"Elements of Intuitionism\", the product of real numbers is defined as follow:\n\n$x\\cdot y= \\{ \\langle r_n\\rangle \\cdot \\langle s_n\\rangle$ | $\\langle r_n\\rangle\\in x , \\langle s_n\\rangle\\in y \\}$, where $\\langle r_n\\rangle ,\\langle s_n\\rangle$ are Cauchy sequences of rational numbers, and $\\langle r_n\\rangle \\cdot \\langle s_n\\rangle=\\langle r_n\\cdot s_n\\rangle$.\n\nThis definition is valid iff we can prove intuitionistically $\\{ \\langle r_n\\rangle \\cdot \\langle s_n\\rangle$ | $\\langle r_n\\rangle\\in x , \\langle s_n\\rangle\\in y \\}$ is indeed a real number, i.e. $\\{ \\langle r_n\\rangle \\cdot \\langle s_n\\rangle$ | $\\langle r_n\\rangle\\in x , \\langle s_n\\rangle\\in y \\}$ is closed under \"equivalent\" (we say that $\\langle r_n\\rangle$ is equivalent to $\\langle s_n\\rangle$ if for any natural number $k$, we can find a natural number $n$ such that $|r_m-s_m|<2^{-k}$ for all $m>n$ ). This is to say, we must prove the following proposition intuitionistically:\n\nLet $\\langle r_n\\rangle , \\langle s_n\\rangle , \\langle t_n\\rangle$ be Cauchy sequences of rational numbers, and $\\langle t_n\\rangle$ is equivalent to $\\langle r_n\\rangle \\cdot \\langle s_n\\rangle$. Then we can construct two Cauchy sequences $\\langle r_n'\\rangle , \\langle s_n'\\rangle$ of rational numbers such that\n(1)\u2003 $\\langle r_n'\\rangle$ is equivalent to $\\langle r_n\\rangle$;\n(2)\u2003 $\\langle s_n'\\rangle$ is equivalent to $\\langle s_n\\rangle$;\n(3)\u2003 $\\langle t_n\\rangle =\\langle r_n'\\rangle \\cdot \\langle s_n'\\rangle$.\n\nBut Michael Dummett doesn't justify his definition, and I find it's very difficult to prove the above proposition intuitionistically. 
Could you help me?

The classical proof of the above proposition uses the fact "$\langle r_n\rangle$ is equivalent to 0 or not", but this fact isn't valid intuitionistically. – Set Dec 3 '12 at 8:55

I don't have Dummett's book, but are you sure you are reading the definition correctly? Isn't it simply defined as the equivalence class of $\langle r_n\cdot s_n\rangle$, which would make it closed under equivalence by definition? What is really needed to ensure the function is well-defined is that $\langle r_n\cdot s_n\rangle$ is equivalent to $\langle r'_n\cdot s'_n\rangle$ whenever $\langle r_n\rangle$ is equivalent to $\langle r'_n\rangle$ and $\langle s_n\rangle$ is equivalent to $\langle s'_n\rangle$, but this should be straightforward to prove. – Emil Jeřábek Dec 3 '12 at 15:40

@Jeřábek Yes, I'm sure it's defined as above in Dummett's book (first edition, 1977). But I don't know whether he justifies his definition in the second edition (2000) of the book. I'm also surprised that he uses this definition. I consulted other books about constructive analysis, such as Bishop's "Foundations of Constructive Analysis" and Troelstra's "Principles of Intuitionism"; the definitions in these books are unlike Dummett's, but rather, as you say, define the product as the equivalence class of $\langle r_n s_n\rangle$. – Set Dec 3 '12 at 17:02

Whilst the definition of addition of Cauchy or Dedekind real numbers is "obvious", multiplication is rather more tricky. Unfortunately, most accounts, including [RD], leave it as an "exercise for the reader", without even giving a hint about what the problem is, so the questioner is right to ask about this.
The difficulty is intrinsic to multiplication: the only difference in the intuitionistic setting is that we must do the job properly, instead of bodging it by treating positive, zero and negative numbers separately.

The point is that, if you want to achieve precision $\epsilon$ in the product of two numbers, one of which is bounded by $B$, then the other must be given within $\epsilon/B$.

[MD] is not quoted verbatim in the Question, but it is close enough, whilst the accounts in [AH] and [AT] are essentially the same.

[TD] gives a more general account of uniform continuity, taking explicit account of the modulus of convergence of Cauchy sequences (the function that says how far down the sequence you have to go to get a desired accuracy). This is needed elsewhere in constructive analysis.

[BB] has by far the clearest treatment that I have seen of the arithmetic of Cauchy reals, building the modulus into the definition. (I admire this book for its "can do" attitude, not dwelling on the counterexamples.) It gives the explicit (but snappy) proof of correctness for multiplication.

[BT] defines multiplication for Dedekind reals and proves correctness. It shows how Dedekind reals are the limiting case of intervals and also considers "back-to-front" (Kaucher) intervals, which are related to existential quantification just as ordinary intervals are related to universal quantification.

[JC] defines multiplication in a completely novel fashion for Conway (surreal) numbers.
This is adapted to multiplication of real numbers in a topos in [PJ].

Since [MD,AH,AT] do not give the explicit answer to the Question, here it is.

As above, $\langle r_n\rangle$ is a Cauchy sequence if $\forall k.\exists n.\forall m.|r_{n+m}-r_n|\lt 2^{-k}$.

We write $\alpha(k)$ for such an $n$ for each given $k$; this is the modulus of convergence.

In particular, with $k=0$, for any Cauchy sequence $\langle r_n\rangle$ there are integers $N=\alpha(0)$ and $K\gt\log_2(|r_N|+1)$ such that $\forall m.-2^{K}\lt r_N-1\lt r_{N+m}\lt r_N+1\lt 2^{K}$.

Let $M$, $L$ and $\beta$ be the corresponding integers and modulus for the Cauchy sequence $\langle s_n\rangle$.

Now, given $h$, let $k\geq h+L+1$, $l\geq h+K+1$, $n\geq\alpha(k)$ and $n\geq\beta(l)$. Then, for all $m$,
$$\begin{eqnarray} |r_{n+m} s_{n+m} - r_n s_n| &\leq& |r_{n+m}| |s_{n+m} - s_n| + |r_{n+m} - r_n| |s_n| \\ &\lt& 2^K 2^{-l} + 2^{-k} 2^L \leq 2^{-h}. \end{eqnarray}$$
Hence $\langle r_n s_n\rangle$ is a Cauchy sequence with modulus $\gamma(h)=\max(\alpha(h+L+1),\beta(h+K+1))$.

We can avoid considering equivalence of sequences explicitly, by observing that two Cauchy sequences are equivalent iff they are both subsequences of the same Cauchy sequence. Rationals are represented by constant Cauchy sequences and the new operation for them agrees with multiplication of rationals.

This argument amounts to saying that the new operation is continuous with respect to the Euclidean topology. Also, the rationals are dense amongst Cauchy reals.
Hence the new operation is the unique continuous extension and it follows that it obeys the usual algebraic laws for multiplication.

[BT] Andrej Bauer and Paul Taylor, The Dedekind Reals in Abstract Stone Duality, Mathematical Structures in Computer Science, 19 (2009) 757–838.

[BB] Errett Bishop and Douglas Bridges, Foundations of Constructive Analysis, Grundlehren der mathematischen Wissenschaften, Springer-Verlag, 1985.

[JC] John Horton Conway, On Numbers and Games, Number 6 in London Mathematical Society Monographs, Academic Press, 1976. Revised edition, 2001, published by A K Peters, Ltd.

[RD] Richard Dedekind, Stetigkeit und irrationale Zahlen, Braunschweig, 1872. Reprinted in [DW], pages 315–334; English translation, Continuity and Irrational Numbers, in [DE].

[DE] Richard Dedekind, Essays on the Theory of Numbers, Open Court, 1901; English translations by Wooster Woodruff Beman; republished by Dover, 1963.

[DW] Richard Dedekind, Gesammelte mathematische Werke, volume 3, Vieweg, Braunschweig, 1932; edited by Robert Fricke, Emmy Noether and Oystein Ore; republished by Chelsea, New York, 1969.

[MD] Michael Dummett, Elements of Intuitionism, Oxford University Press, 2000.

[AH] Arend Heyting, Intuitionism, an Introduction, Studies in Logic and the Foundations of Mathematics, North-Holland, 1956.
Third edition, 1971.

[PJ] Peter Johnstone, Topos Theory, London Mathematical Society Monographs 10, Academic Press, 1977.

[AT] Anne Troelstra, Principles of Intuitionism, Lectures presented at the Summer Conference on Intuitionism and Proof Theory (1968) at SUNY at Buffalo, NY, Lecture Notes in Mathematics 95, Springer-Verlag, 1969.

[TD] Anne Sjerp Troelstra and Dirk van Dalen, Constructivism in Mathematics, an Introduction, Number 121 and 123 in Studies in Logic and the Foundations of Mathematics, North-Holland, 1988.

If you know of other explicit accounts of multiplication for Cauchy or Dedekind reals then please give the references in comments below.
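The modulus bookkeeping in the answer above can be made concrete with a small sketch. All names here are illustrative, and ordinary floating point stands in for rational arithmetic, so this only demonstrates the indexing $\gamma(h)=\max(\alpha(h+L+1),\beta(h+K+1))$, not exact constructive computation.

```javascript
// A "Cauchy real" here is { seq: n -> approximation, modulus: k -> index },
// with |seq(n+m) - seq(n)| < 2^-k whenever n >= modulus(k).

// Exponent bound: an integer K with 2^K > |r_N| + 1, where N = modulus(0).
function expBound(x) {
  return Math.ceil(Math.log2(Math.abs(x.seq(x.modulus(0))) + 1)) + 1;
}

// Product with modulus gamma(h) = max(alpha(h+L+1), beta(h+K+1)).
function multiply(x, y) {
  var K = expBound(x); // bound on |x_n| past x.modulus(0)
  var L = expBound(y); // bound on |y_n|
  return {
    seq: function (n) { return x.seq(n) * y.seq(n); },
    modulus: function (h) {
      return Math.max(x.modulus(h + L + 1), y.modulus(h + K + 1));
    }
  };
}

// Example: sqrt(2) as a Cauchy real via the Babylonian iteration.
var sqrt2 = {
  seq: function (n) {
    var r = 1;
    for (var i = 0; i <= n; i++) { r = (r + 2 / r) / 2; }
    return r;
  },
  modulus: function (k) { return k + 2; } // crude but sufficient: quadratic convergence
};

var two = multiply(sqrt2, sqrt2);
var n = two.modulus(10); // index guaranteeing accuracy 2^-10
console.log(Math.abs(two.seq(n) - 2) < Math.pow(2, -10)); // → true
```

The point of the sketch is that the product's modulus is computed from the factors' moduli and the two exponent bounds alone, exactly as in the displayed estimate.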
\section{Introduction} The periodically forced SIR model \begin{equation}\label{S}S'=\mu(1-S) -\beta(t) SI,\end{equation} \begin{equation}\label{I}I'=\beta(t) SI-(\gamma+\mu) I,\end{equation} \begin{equation}\label{R}R'=\gamma I -\mu R,\end{equation} and variants of it, are extensively used to model seasonally recurrent diseases \cite{aron,bolker,dietz,earn,grassly,greenman,jodar,katriel,keeling,keeling1,kuznetsov,london,olinky}. Here $S,I,R$ are the fractions of the population which are susceptible, infective, and recovered, $\mu$ denotes the birth and death rate, $\gamma$ the recovery rate, and $\beta(t)$, which we assume is a positive continuous $T$-periodic function, is the seasonally-dependent transmission rate (so that $T$ is the yearly period). When simulating this model numerically (see section \ref{numerical}), it is observed that: (i) If ${\cal{R}}_0\leq 1$, where $${\cal{R}}_0=\frac{\bar{\beta}}{\gamma+\mu},$$ $$\bar{\beta}=\frac{1}{T}\int_0^T \beta(t)dt,$$ then all solutions tend to the disease-free equilibrium $S_0=1, I_0=0, R_0=0$. This fact can be rigorously proved, see \cite{ma}. (ii) If ${\cal{R}}_0> 1$, then, depending on the values of the parameters, one observes convergence to $T$-periodic orbits, or to $nT$-periodic orbits with $n>1$ (subharmonics), or chaotic behavior. A fundamental question, which is addressed here, is the {\it{existence}} of a $T$-periodic solution of the system. We demand, of course, that the components $S(t),I(t),R(t)$ of this solution be positive. Obviously when ${\cal{R}}_0\leq 1$, a positive periodic solution cannot exist, because such a solution would not converge to the disease-free equilibrium. We will prove, however, that \begin{theorem}\label{main} Whenever ${\cal{R}}_0>1$, there exists at least one $T$-periodic solution $(S(t),I(t),R(t))$ of (\ref{S})-(\ref{R}), all of whose components are positive.
\end{theorem} Thus, when $T$-periodic behavior is not observed in simulations, this is not due to the fact that such a solution does not exist, but rather to the fact that all $T$-periodic solutions are unstable. Despite the fact that the existence of a $T$-periodic solution of the $T$-periodically forced SIR model is a fundamental issue, the only paper in the literature of which we are aware to have dealt with this question is the recent paper of J\'odar, Villanueva and Arenas \cite{jodar}. They treated a more general system than we do here (including loss of immunity and allowing other coefficients besides $\beta(t)$ to be $T$-periodic). Restricting their existence result to the case of the SIR model (\ref{S})-(\ref{R}), they proved, using Mawhin's continuation theorem, that a $T$-periodic solution exists whenever the condition \begin{equation}\label{jodar}\min_{t\in\Real}\beta(t)>\gamma+\mu\end{equation} holds. Note that the condition (\ref{jodar}) implies that $\bar{\beta}>\gamma+\mu$, that is ${\cal{R}}_0>1$, but it is a stronger condition. Theorem \ref{main} uses only the condition ${\cal{R}}_0>1$, so that together with the fact noted above, that when ${\cal{R}}_0\leq 1$ a $T$-periodic solution does {\it{not}} exist, we have that ${\cal{R}}_0>1$ is a {\it{necessary and sufficient}} condition for the existence of a $T$-periodic solution with positive components. Our technique for proving Theorem \ref{main} relies on nonlinear functional analysis, for which we refer to the textbooks \cite{brown,zeidler}. Reformulating the problem as one of solving an equation in an infinite-dimensional space of periodic functions, we define a homotopy between the periodically forced problem and the autonomous problem in which $\beta(t)$ is replaced by the mean $\bar{\beta}$. The autonomous problem has an endemic equilibrium, which is a trivial periodic solution. We then employ Leray-Schauder degree theory to continue this solution along the homotopy.
The challenge here lies in the fact that we always have a trivial periodic solution, given by the disease-free equilibrium, which lies on the boundary of the relevant domain $D$ in the functional space; this requires us to construct a smaller domain $U\subset D$ excluding the trivial solution, and to show that the conditions for applying the Leray-Schauder theory hold for the domain $U$. We note that our proof of Theorem \ref{main} is easily extended to give the same result for the SIRS model, which includes loss of immunity: \begin{equation}\label{S5}S'=\alpha R+\mu(1-S) -\beta(t) SI,\end{equation} \begin{equation}\label{I5}I'=\beta(t) SI-(\gamma+\mu) I,\end{equation} \begin{equation}\label{R5}R'=\gamma I -(\mu+\alpha) R.\end{equation} We present the proof for the SIR model ($\alpha=0$) in order to avoid notational clutter. In section \ref{proof} we prove Theorem \ref{main}. In section \ref{numerical} we discuss how to obtain the $T$-periodic solution numerically, which cannot be done by direct numerical simulation when it is unstable, and present some results obtained by using the Galerkin method. Finally, in section \ref{discussion} we mention some other works providing rigorous mathematical results on forced SIR models, beyond numerical simulation. \section{Proof of the Theorem} \label{proof} Since $S(t),I(t),R(t)$ are fractions of the population we have $S(t)+I(t)+R(t)=1$ for all $t$; note that by adding the equations (\ref{S})-(\ref{R}) we have $(S(t)+I(t)+R(t))'=0$. Since $R$ does not appear in (\ref{S}),(\ref{I}), the equation (\ref{R}) can be ignored, and it suffices to prove the existence of a periodic solution of (\ref{S}),(\ref{I}) satisfying \begin{equation}\label{pos}S(t)>0,\;I(t)>0,\;S(t)+I(t)<1,\;\;\forall t,\end{equation} where the third condition is equivalent to $R(t)=1-I(t)-S(t)>0$.
We decompose $\beta(t)$ as $$\beta(t)=\bar{\beta}+\beta_0(t),\;\;\int_0^T \beta_0(t)dt=0.$$ Setting, for $\lambda\in [0,1]$, \begin{equation}\label{deco}\beta_{\lambda}(t)=\bar{\beta}+\lambda\beta_0(t),\end{equation} we consider the system \begin{equation}\label{S1}S'=\mu(1-S) -\beta_{\lambda}(t) SI\end{equation} \begin{equation}\label{I1}I'=\beta_{\lambda}(t) SI-(\gamma+\mu) I,\end{equation} which is a homotopy between the unforced system corresponding to $\lambda=0$, in which $\beta_{\lambda}(t)\equiv\bar{\beta}$, and our system (\ref{S}),(\ref{I}), which corresponds to $\lambda=1$. For $\lambda=0$, (\ref{S1}),(\ref{I1}) has exactly two periodic solutions, which are constant, given by \begin{equation}\label{I00}S_0=1,\;\;I_0=0,\end{equation} \begin{equation}\label{I0}S^*=\frac{\gamma+\mu}{\bar{\beta}},\;\;I^*=\mu\Big(\frac{1}{\gamma+\mu}-\frac{1}{\bar{\beta}}\Big). \end{equation} We note that $(S_0,I_0)$ (the disease-free equilibrium) is in fact a (trivial) periodic solution of (\ref{S1})-(\ref{I1}) for {\it{all}} $\lambda$. Our aim is to continue the solution $(S^*,I^*)$ with respect to $\lambda$ in order to prove the existence of a periodic solution for $\lambda=1$. To this end, we now reformulate the problem in a functional-analytic setting, which will enable us to employ degree theory.
We rewrite (\ref{S1}),(\ref{I1}) as \begin{equation}\label{S2}S'+\mu S=\mu -\beta_{\lambda}(t) SI,\end{equation} \begin{equation}\label{I2}I'+(\gamma+\mu) I=\beta_{\lambda}(t) SI.\end{equation} Let $X,Y$ be the Banach spaces $$X=\{ (S,I)\;|\; S,I \in C^1(\Real),\; S(t+T)=S(t),\; I(t+T)=I(t)\},$$ $$Y=\{ (S,I)\;|\; S,I \in C^0(\Real),\; S(t+T)=S(t),\; I(t+T)=I(t)\}.$$ Define the linear operator $L:X\rightarrow Y$ by $$L(S,I)=(S'+\mu S,I'+(\gamma+\mu) I)$$ and the nonlinear operators $N_\lambda:Y\rightarrow Y$ by $$N_\lambda(S,I)=(\mu -\beta_{\lambda}(t) SI,\beta_{\lambda}(t) SI).$$ Then the periodic problem for (\ref{S2})-(\ref{I2}) can be rewritten as \begin{equation}\label{re}L(S,I)=N_{\lambda}(S,I).\end{equation} It is easy to check that $L$ is invertible, that is, the equations $S'+\mu S=f$ and $I'+(\gamma+\mu)I=g$ have unique $C^1$ $T$-periodic solutions $S,I$ for any $f,g\in Y$, and the mapping $L^{-1}:Y\rightarrow X$ given by $L^{-1}(f,g) =(S,I)$ is bounded. We can thus rewrite (\ref{re}) as \begin{equation}\label{re1}F_{\lambda}(S,I)=0,\end{equation} where $F_{\lambda}:Y\rightarrow Y$ is given by \begin{equation}\label{fr}F_\lambda(S,I)=(S,I)-L^{-1}\circ N_\lambda(S,I).\end{equation} Since $L^{-1}:Y\rightarrow X$ is bounded, and since, by the Arzela-Ascoli Theorem, $X$ is compactly embedded in $Y$, we can consider $L^{-1}$ as a compact operator from $Y$ to itself, and since $N_\lambda:Y\rightarrow Y$ is continuous, $L^{-1}\circ N_\lambda$ is compact as an operator from $Y$ to itself. We therefore consider (\ref{re1}) in the space $Y$, and we note that any solution in $Y$ will in fact be in $X$, hence a classical solution of (\ref{S2}),(\ref{I2}). Since $F_{\lambda}$ is a compact perturbation of the identity on $Y$, Leray-Schauder theory is applicable.
Since we want our solution to satisfy (\ref{pos}), we want to solve (\ref{re1}) in the subset $D\subset Y$ given by $$D=\{ (S,I)\in Y\;|\; S(t)>0,\; I(t)>0,\; S(t)+I(t)<1\}.$$ Note that for $\lambda=0$ the solution $(S^*,I^*)$ given by (\ref{I0}) lies in $D$. Our aim is to continue this solution in $\lambda$ up to $\lambda=1$. We recall that the Leray-Schauder degree theory (see e.g. \cite{brown,zeidler}) implies that, given a bounded open set $U\subset Y$, the existence of a solution $(S,I)$ of (\ref{re1}) for all $\lambda\in [0,1]$ will be assured if the following conditions hold: (I) $(S^*,I^*)\in U$, (II) $\deg(F_0,(S^*,I^*))\neq 0$, (III) $F_\lambda(S,I)\neq 0$ for all $(S,I)\in \partial U$, $\lambda\in [0,1]$. The most obvious choice for $U$ would be $U=D$. However, this will not do, since $(S_0,I_0)$ (given by (\ref{I00})) satisfies $(S_0,I_0)\in \partial D$ and $F_\lambda(S_0,I_0)=0$, so that (III) does not hold. To satisfy (III) we will need to choose $U$ so as to exclude $(S_0,I_0)$ from its boundary. We take $U$ to be the open subset of $D$ given by \begin{equation}\label{U}U=\{ (S,I)\in D\;|\; \min_{t\in\Real}S(t)<\delta\},\end{equation} where $\delta\in (0,1)$ is fixed. Note that $(S_0,I_0)\not\in \bar{U}$. We will show below that $U$ satisfies (I)-(III) if $\delta$ is chosen so that $\delta\in ({\cal{R}}_0^{-1},1)$. We first show that $(S_0,I_0)$ is the {\it{only}} solution of (\ref{re1}) on $\partial D$. \begin{lemma}\label{only} If $(S,I)\in\partial D$ is a solution of (\ref{re1}) for some $\lambda\in [0,1]$, then $(S,I)=(S_0,I_0)$, as given by (\ref{I00}). \end{lemma} \begin{proof} Assume that $(S,I)\in \partial D$ is a solution of (\ref{S2}),(\ref{I2}). Note that $(S,I)\in \partial D$ if and only if \begin{equation}\label{cl}S(t)\geq 0,\; I(t)\geq 0,\; S(t)+I(t)\leq 1,\end{equation} and at least one of the following conditions holds: (i) There exists $t_0\in \Real$ so that $I(t_0)=0$. (ii) There exists $t_0\in \Real$ so that $S(t_0)=0$.
(iii) There exists $t_0\in \Real$ so that $S(t_0)+I(t_0)=1$. We now consider each of these three cases: (1) Assume (i) holds. Let $\tilde{S}$ be the solution of $$\tilde{S}'=\mu(1-\tilde{S}),\;\; \tilde{S}(t_0)=S(t_0),$$ and let $\tilde{I}(t)=0$. Then $\tilde{S},\tilde{I}$ is a solution of the initial-value problem (\ref{S2}),(\ref{I2}) with initial condition $$\tilde{S}(t_0)=S(t_0),\;\tilde{I}(t_0)=0.$$ By uniqueness of the solution for the initial-value problem, we conclude that $S=\tilde{S}$, $I=0$. Thus $S$ satisfies $S'=\mu(1-S)$, and since the only periodic solution of this equation is $S=1$, we conclude that $(S,I)=(1,0)$, as we wanted to prove. (2) Assume now that (ii) holds. Then from (\ref{S2}) we get $S'(t_0)=\mu$. But this implies that $S(t)<0$ for $t<t_0$ sufficiently close to $t_0$, which contradicts (\ref{cl}). Thus this case is impossible. (3) Assume now that (iii) holds. Moreover, since we have already proven the result in the case that (i) holds, we may assume that $I(t)>0$ for all $t$. Adding (\ref{S1}) and (\ref{I1}) we get $$(S+I)'(t_0)=\mu(1-S(t_0)-I(t_0))-\gamma I(t_0)=-\gamma I(t_0)<0.$$ Therefore we conclude that $S(t)+I(t)>1$ for $t<t_0$ sufficiently close to $t_0$, contradicting (\ref{cl}). Therefore this case is impossible. \end{proof} We can now show that $U$, defined by (\ref{U}), satisfies (III). \begin{lemma}Assume ${\cal{R}}_0>1$. If $\frac{1}{{\cal{R}}_0}<\delta<1$ then, for any $\lambda\in [0,1]$, there are no solutions $(S,I)$ of (\ref{re1}) with $(S,I)\in\partial U$. \end{lemma} \begin{proof} Suppose $(S,I)\in\partial U$. Then either $(S,I)\in \partial D$, or $(S,I)\in D$ and \begin{equation}\label{bc} \min_{t\in\Real}S(t)=\delta. \end{equation} In the first case, Lemma \ref{only} and the fact that $(S_0,I_0)\not\in\partial U$ imply that $(S,I)$ is not a solution of (\ref{re1}). We therefore assume that $(S,I)\in D$ and (\ref{bc}) holds, which implies that \begin{equation}\label{bc1} S(t)\geq \delta, \;\;\forall t.
\end{equation} Assume by way of contradiction that $(S,I)$ solves (\ref{re1}), or equivalently that $(S,I)$ solves (\ref{S2}),(\ref{I2}). Using the assumption $(S,I)\in D$, we have that $I$ is everywhere positive, so we can divide (\ref{I2}) by $I$, and integrate over $[0,T]$, to obtain \begin{equation}\label{ee}\frac{1}{T}\int_0^T \beta_\lambda(t)S(t)dt=\gamma+\mu.\end{equation} But from (\ref{bc1}) we get \begin{equation}\label{vv}\frac{1}{T}\int_0^T \beta_\lambda(t)S(t)dt \geq \delta \frac{1}{T}\int_0^T \beta(t)dt=\delta \bar{\beta}.\end{equation} By the assumption $\delta>\frac{1}{{\cal{R}}_0}$ we have $\delta \bar{\beta}>\gamma+\mu$, so that (\ref{vv}) implies \begin{equation*}\label{vv1}\frac{1}{T}\int_0^T \beta_\lambda(t)S(t)dt >\gamma+\mu,\end{equation*} contradicting (\ref{ee}). \end{proof} To apply the Leray-Schauder degree it remains to verify that (I) and (II) hold. Since $S^*=\frac{1}{{\cal{R}}_0}$, the condition $\delta> \frac{1}{{\cal{R}}_0}$ implies $(S^*,I^*)\in U$, so (I) holds. To prove (II), it suffices to show that the Fr\'echet derivative $DF_{0}(S^*,I^*)$ is invertible. Since $F_0$ is a compact perturbation of the identity, $DF_{0}(S^*,I^*)$ is Fredholm of index zero, so it suffices to prove that the kernel of $DF_{0}(S^*,I^*)$ is trivial. Indeed, let us assume that $(V,W)\in \ker(DF_{0}(S^*,I^*))$, and prove that $(V,W)=0$. We have $DF_{0}(S^*,I^*)(V,W)=0$, or, equivalently, \begin{equation}\label{ker}L(V,W)=DN_0(S^*,I^*)(V,W).\end{equation} Note that $$DN_0(S^*,I^*)(V,W)=(-\bar{\beta} (S^*W+I^*V),\bar{\beta} (S^*W+I^*V)),$$ so that, using $\bar{\beta}S^*=\gamma+\mu$ and $\bar{\beta}I^*=\mu({\cal{R}}_0-1)$, (\ref{ker}) is equivalent to \begin{equation}\label{sys}\left( \begin{array}{c} V \\ W \\ \end{array} \right)'=\left( \begin{array}{cc} -\mu {\cal{R}}_0 & -(\gamma+\mu) \\ \mu({\cal{R}}_0-1) & 0 \\ \end{array} \right)\left( \begin{array}{c} V \\ W \\ \end{array} \right).
\end{equation} The characteristic polynomial of the above matrix is $$p(x)=x^2+\mu {\cal{R}}_0 x+ (\gamma+\mu)\mu({\cal{R}}_0-1).$$ Noting that $p(0)>0$ and that, for $\omega\in \Real$, $Im(p(\omega i))=\mu {\cal{R}}_0 \omega$, we see that the matrix has no imaginary or $0$ eigenvalues, so that (\ref{sys}) has no periodic solutions except $(V,W)=(0,0)$, and the claim is proved. We have thus proven that (I)-(III) hold, which completes the proof of Theorem \ref{main}. \section{Observing the $T$-periodic solution} \label{numerical} As we have noted in the Introduction, the periodic solution whose existence was proved whenever ${\cal{R}}_0>1$ is observable in numerical simulation of (\ref{S})-(\ref{R}) only for those parameter regimes for which it is stable. Therefore, if we are interested in examining the shape and amplitude of the $T$-periodic solution for values of the parameters for which the system displays subharmonic or chaotic behavior, we need a different computational approach. We now describe a simple approach that we successfully implemented, which allows us to observe the $T$-periodic solution for arbitrary parameters. We used the Galerkin method, expanding the periodic solution $S(t),I(t)$ in a Fourier series. We used the Maple system, whose symbolic capabilities make the implementation particularly easy. We take $T=2\pi$, \begin{equation}\label{for}\beta(t)=\bar{\beta}(1+\lambda \cos(t)), \end{equation} and search for approximate periodic solutions of (\ref{S}),(\ref{I}) of the form \begin{eqnarray}\label{fo}\tilde{S}(t)&=&A_S^{0}+\sum_{n=1}^N [A_S^{n}\cos(nt)+B_S^{n}\sin(nt)],\nonumber\\ \tilde{I}(t)&=&A_I^{0}+\sum_{n=1}^N [A_I^{n}\cos(nt)+B_I^{n}\sin(nt)].
\end{eqnarray} Plugging (\ref{fo}) into (\ref{S})-(\ref{I}), and then taking the Fourier coefficients of both sides of the equations with respect to $$\{1,\cos(t),\cdots,\cos(Nt),\sin(t),\cdots,\sin(Nt)\},$$ we get $4N+2$ algebraic equations in $4N+2$ variables, which we numerically solve for $A_S^0,\cdots A_S^N,B_S^1,\cdots,B_S^N,A_I^0,\cdots A_I^N,B_I^1,\cdots,B_I^N$, obtaining the approximate $T$-periodic solution (\ref{fo}). We have found that the numerical iteration for solving the algebraic equations, using Maple's fsolve command, works well when we start with the initial conditions for the iteration given by the endemic equilibrium of the autonomous case, that is $A_S^0=S^*,A_I^0=I^*$ (see (\ref{I0})) and $A_S^k=B_S^k=A_I^k=B_I^k=0$ for $1\leq k\leq N$. We check that the functions $\tilde{S}(t),\tilde{I}(t)$ indeed approximate a periodic solution of (\ref{S}),(\ref{I}) by observing that the highest Fourier coefficients $A_S^N,B_S^N,A_I^N,B_I^N$ are very small, and by plugging $\tilde{S}(t),\tilde{I}(t)$ into (\ref{S}),(\ref{I}) and checking that the residual is small. We note that theoretical justification of the Galerkin method for approximating periodic solutions can be found, e.g., in \cite{bobylev}. We now present some examples of results obtained by the method described above. With the period $2\pi$ of the forcing representing one year, we took $\gamma$ corresponding to a $2$-week infectious period, $\bar{\beta}=20\gamma$, and $\mu$ corresponding to $4\%$ population growth rate per year, giving ${\cal{R}}_0=19.97$. These parameters are approximately those estimated for measles. We consider different values of the strength of seasonality $\lambda$ (see (\ref{for})). In figure 1 we plot, for different values of $\lambda$, the periodic solution found by the Galerkin method (with $N=8$), together with a solution of (\ref{S})-(\ref{I}) obtained by direct simulation, starting the plot at $t=3000\pi$ to ensure that transients have decayed.
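The direct simulations referred to above can be reproduced with any standard ODE integrator. The following is a minimal sketch (classical RK4 in JavaScript, not from the paper): the parameter values follow the text, in units where one year equals $2\pi$, while the step size, transient length, and variable names are illustrative choices.

```javascript
// Forced SIR (S)-(I) with beta(t) = betaBar*(1 + lambda*cos(t)), T = 2*pi.
var gamma = (365 / 14) / (2 * Math.PI); // 2-week infectious period, year = 2*pi
var mu = 0.04 / (2 * Math.PI);          // 4% yearly birth/death rate
var betaBar = 20 * gamma;               // so R0 = betaBar/(gamma+mu) ~ 19.97
var lambda = 0.1;                       // seasonality strength

function rhs(t, y) {                    // y = [S, I]
  var beta = betaBar * (1 + lambda * Math.cos(t));
  return [mu * (1 - y[0]) - beta * y[0] * y[1],
          beta * y[0] * y[1] - (gamma + mu) * y[1]];
}

function rk4Step(t, y, h) {
  function axpy(v, k, c) { return [v[0] + c * k[0], v[1] + c * k[1]]; }
  var k1 = rhs(t, y),
      k2 = rhs(t + h / 2, axpy(y, k1, h / 2)),
      k3 = rhs(t + h / 2, axpy(y, k2, h / 2)),
      k4 = rhs(t + h, axpy(y, k3, h));
  return [y[0] + (h / 6) * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
          y[1] + (h / 6) * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])];
}

// Start at the endemic equilibrium of the unforced system and integrate
// past the transient; for lambda = 0.1 the trajectory settles on the
// stable T-periodic orbit.
var y = [(gamma + mu) / betaBar, mu * (1 / (gamma + mu) - 1 / betaBar)];
var h = 0.002;
for (var t = 0; t < 100 * 2 * Math.PI; t += h) {
  y = rk4Step(t, y, h);
}
console.log(y); // (S, I) on the attractor
```

Logging $(S(t),I(t))$ over one further period then traces out the curves plotted in figure 1; for larger $\lambda$ the same loop exhibits the subharmonic and chaotic regimes described below.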
\begin{figure}\label{fig1} \centering \includegraphics[height=4.5cm,width=4.5cm, angle=0]{ffsir0.1.eps}\includegraphics[height=4.5cm,width=4.5cm, angle=0]{ffsir0.21.eps}\includegraphics[height=4.5cm,width=4.5cm, angle=0]{ffsir0.3.eps}\\ \includegraphics[height=4.5cm,width=4.5cm, angle=0]{ffsir0.45.eps}\includegraphics[height=4.5cm,width=4.5cm, angle=0]{ffsir0.6.eps}\includegraphics[height=4.5cm,width=4.5cm, angle=0]{ffsir0.8.eps} \caption{Solutions of the SIR model with $T$-periodic forcing ($T=2\pi$), obtained by direct simulation, and the $T$-periodic solution obtained by the Galerkin method (dashed line), for varying strength of seasonality $\lambda$. Top row, from left to right: $\lambda=0.1,0.21,0.3$, Bottom row: $\lambda=0.45,0.6,0.7$. Other parameters: $\gamma=14\frac{2\pi}{365}$, $\bar{\beta}=20\gamma$, $\mu=\frac{0.04}{2\pi}$.} \end{figure} \begin{figure}\label{fig2} \centering \includegraphics[height=5cm,width=10cm, angle=0]{ffsir_all.eps} \caption{The $T$-periodic solution obtained by the Galerkin method, for varying strength of seasonality $\lambda=0.1,0.21,0.3,0.45,0.6,0.7$.} \end{figure} When $\lambda=0.1$, the system behavior is $2\pi$-periodic, so the solution of the simulated system coincides with the $2\pi$-periodic solution found by the Galerkin method. At $\lambda=0.21$, the $2\pi$-periodic solution has lost stability, and we see bifurcation to a subharmonic of order 2 (period $4\pi$), which is still quite close to the $2\pi$-periodic solution, with larger and smaller epidemics alternating. At $\lambda=0.3,0.45$ the $4\pi$-periodic subharmonic solution is already quite different, with a large epidemic every two years. At $\lambda=0.6$ we observe that the system has a subharmonic of order 4 (period $8\pi$), while at $\lambda=0.7$ we observe chaotic behavior. 
The $2\pi$-periodic solution (which is unstable except for the case $\lambda=0.1$) increases in amplitude and becomes less sinusoidal as $\lambda$ increases (note the differences in scales in the different plots). In figure 2 we plot the $2\pi$-periodic solutions for all values of $\lambda$, for a better view. \section{Discussion} \label{discussion} The forced SIR model is a beautiful example of a simple nonlinear dynamical system which displays complicated behaviors that are difficult to understand in intuitive terms. Moreover, these complicated behaviors are relevant to explaining the epidemiology of infectious diseases in humans, as studies comparing the behavior of the SIR model and variants of it to surveillance data have shown \cite{bolker,earn,keeling1}. We have proven the fundamental result that a $T$-periodic solution exists for the $T$-periodically forced SIR model whenever ${\cal{R}}_0>1$. As we have stressed, this does not mean that the dynamics of the model is periodic, since the periodic solution whose existence is proved need not be stable, although one can use standard perturbation theory to prove that the $T$-periodic solution {\it{is}} stable provided the seasonality parameter $\lambda$ in (\ref{deco}) is sufficiently small. Numerical simulations show that complex dynamics, that is, subharmonic and chaotic behavior, is very common in the forced SIR model. It is interesting to ask to what extent the complex dynamics of the forced SIR model can be rigorously understood, beyond numerical simulations. While we do not expect to be able to precisely characterize the dynamics of the model for different parameter values, it is of great interest even to be able to rigorously prove that complicated dynamics occurs for at least {\it{some}} parameter values. In this context we mention the work of H.L. Smith \cite{smith1,smith2}, who proved that the forced SIR model can have multiple stable subharmonic oscillations in certain parameter ranges.
Chaotic behavior has been rigorously established by Glendinning \& Perry \cite{glendinning} for a variant of the forced SIR model, in which the dependence of the incidence term on $I$ is nonlinear. For the standard SIR model (\ref{S})-(\ref{R}), we are not aware of a proof of chaotic behavior. Classifying and explaining the dynamical patterns observed in simulations of the forced SIR model is still very challenging, so that, like other well-known `simple' models such as the forced pendulum equation, the forced SIR model can serve as a stimulus and as a benchmark problem for new developments in nonlinear analysis. {\bf{Funding:}} The author acknowledges support of EU-FP7 grant Epiwork.
# Solve returns solution that isn't (always) one

I was asking Mathematica to find the roots of $$f(x) = a+\sqrt{x^2-b}$$ and it returns $$x = \pm \sqrt{a^2+b}$$. These are, however, solutions only if $$a\leq 0$$ (there are no roots if $$a$$ is positive, since principal square roots are by definition non-negative). Why is Solve not returning ConditionalExpression[$$\pm \sqrt{a^2+b}$$, $$a\leq 0$$]? Is it a bug?

Here's a numerical example:

    numc = {a -> 2, b -> 5};
    f = a + Sqrt[x^2 - b];
    Plot[f /. numc, {x, -3.5, 3.5}, PlotRange -> {0, 5}]
    Solve[f == 0, x]
    % /. numc

• Mathematica is right: complex solutions are produced by default. Use Solve[f == 0, x, Reals] in order to obtain conditional expressions. – user64494 Nov 30 '18 at 9:49
• Still, if a=1, b=0 this solution is wrong. – kiara Nov 30 '18 at 9:56
• @user64494 Solve[f == 0, x, Reals] does indeed work! However, I don't see why Mathematica is right. If you take a=2, b=5, you get $$x=\pm 3$$ even though $$f(\pm 3) = 4$$. Correct me if I'm wrong, but there is no real or complex solution in that example. – user2737248 Nov 30 '18 at 11:01
• I don't think this is a bug. Up to the help to Solve, Solve may make nonequivalent transforms. – user64494 Nov 30 '18 at 15:57

You can instruct Solve to generate all conditions using MaxExtraConditions, or you can use Reduce instead of Solve.

    Solve[a + Sqrt[x^2 - b] == 0, x, MaxExtraConditions -> All]

During evaluation of Solve::useq: The answer found by Solve contains equational condition(s) {0==-a-Sqrt[a^2],0==-a-Sqrt[a^2]}.
A likely reason for this is that the solution set depends on branch cuts of Wolfram Language functions.\n(* {{x ->\nConditionalExpression[-Sqrt[a^2 + b], a + Sqrt[a^2] == 0]}, {x ->\nConditionalExpression[Sqrt[a^2 + b], a + Sqrt[a^2] == 0]}} *)\n\nReduce[a + Sqrt[x^2 - b] == 0, x]\n\nDuring evaluation of Reduce::useq: The answer found by Reduce contains unsolved equation(s) {0==-a-Sqrt[a^2],0==-a-Sqrt[a^2]}. A likely reason for this is that the solution set depends on branch cuts of Wolfram Language functions.\n(* (0 == -a - Sqrt[a^2] &&\nx == -Sqrt[a^2 + b]) || (0 == -a - Sqrt[a^2] && x == Sqrt[a^2 + b]) *)\n\n\nQuoting from the Solve documentation:\n\nSolve gives generic solutions only. Solutions that are valid only when continuous parameters satisfy equations are removed. Additional solutions can be obtained by using nondefault settings for MaxExtraConditions.\n\nIf you want to obtain solutions over reals, try\n\nnumc = {a -> 2, b -> 5};f = a + Sqrt[x^2 - b];Solve[f == 0, x,Reals]\n\n\n{{x->ConditionalExpression[-Sqrt[a^2+b],a<0&&a^2+b>0]},{x->ConditionalExpression[Sqrt[a^2+b],a<0&&a^2+b>0]}}\n\n% \/. numc\n\n\n{{x->Undefined},{x->Undefined}}\n\n\u2022 Indeed, but what if I am also interested in complex solutions? There are no real or complex solutions when a is positive so why isn't there a ConditionalExpression? \u2013\u00a0user2737248 Nov 30 '18 at 13:03\n\u2022 Making use of sol = Reduce[f == 0, x, Complexes], one obtains Reduce::useq: The answer found by Reduce contains unsolved equation(s) {0==-a-Sqrt[a^2],0==-a-Sqrt[a^2]}. A likely reason for this is that the solution set depends on branch cuts of Wolfram Language functions. 
(0 == -a - Sqrt[a^2] && x == -Sqrt[a^2 + b]) || (0 == -a - Sqrt[a^2] && x == Sqrt[a^2 + b]) \u2013\u00a0user64494 Nov 30 '18 at 15:33","date":"2020-03-29 06:55:05","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 5, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.2795453667640686, \"perplexity\": 2232.835714765167}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-16\/segments\/1585370493818.32\/warc\/CC-MAIN-20200329045008-20200329075008-00296.warc.gz\"}"}
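The generic-versus-checked distinction discussed above is not unique to Mathematica. As a point of comparison (my own illustration, not part of the original thread), SymPy's `solve` behaves similarly on the same equation: with symbolic parameters it returns generic candidate roots, while with concrete numbers its checking step discards the extraneous roots introduced by squaring the radical.

```python
# Comparison sketch (not from the thread): SymPy's solve() on a + sqrt(x^2 - b) == 0.
from sympy import symbols, sqrt, solve

a, b, x = symbols('a b x')

# Symbolic parameters: solve returns generic candidate roots of the squared
# equation x^2 - b == a^2; their validity depends on the sign of a.
generic = solve(a + sqrt(x**2 - b), x)

# Concrete a = 2, b = 5: sqrt(x^2 - 5) = -2 has no solution on the principal
# branch, so solve's checking step discards both candidates +/- 3.
numeric = solve(2 + sqrt(x**2 - 5), x)
print(generic)
print(numeric)  # []
```

This mirrors the quoted Mathematica documentation: the generic answer is cheap to produce, and the conditions under which it is actually a root require extra work (MaxExtraConditions, Reduce, or a domain restriction).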
MESSAGE FROM A KILLER

Sitting on the glider on the porch, Caprice turned to the letters in her hand. There were bills, of course. A letter-sized envelope caught her eye. It was one of those envelopes with the blue stripes so that you couldn't see what was inside. No one wrote letters these days. They sent e-mails. So she couldn't imagine whom it was from. There was no return address. That should have been her first warning. But she was watching Lady and appreciating the day and thinking about meeting Roz and Vince at Cherry on the Top for ice cream. She didn't expect the plain white piece of paper she pulled out of the envelope. It was folded in thirds, and when she opened it, the printing alerted her she might not like what it was going to say. She didn't. In printed letters it read, If you value that pretty dog and your life, stop asking questions . . .

Books by Karen Rose Smith

STAGED TO DEATH
DEADLY DÉCOR
GILT BY ASSOCIATION
DRAPE EXPECTATIONS
SILENCE OF THE LAMPS

Published by Kensington Publishing Corporation

Silence of the Lamps
Karen Rose Smith
KENSINGTON PUBLISHING CORP.
http://www.kensingtonbooks.com

All copyrighted material within is Attributor Protected.

Table of Contents

MESSAGE FROM A KILLER
Books by Karen Rose Smith
Title Page
Dedication
Acknowledgements
Chapter One
Chapter Two
Chapter Three
Chapter Four
Chapter Five
Chapter Six
Chapter Seven
Chapter Eight
Chapter Nine
Chapter Ten
Chapter Eleven
Chapter Twelve
Chapter Thirteen
Chapter Fourteen
Chapter Fifteen
Chapter Sixteen
Chapter Seventeen
Chapter Eighteen
Chapter Nineteen
Epilogue
Original Recipes
SHADES OF WRATH Teaser chapter
Copyright Page

To my grandmother, Rosalie Arcuri. When I was a little girl, she fashioned and sewed pinafores and velvet jumpers for me. She also taught me many basics of cooking and how to make a Shirley Temple! Everyone needs a Nana and memories that live forever in the heart.
Acknowledgements

I would like to thank Officer Greg Berry, my law enforcement consultant, who so patiently answers all my questions.

Chapter One

Caprice De Luca caught sight of the guest who stepped over the threshold. She braced for trouble on this balmy June Saturday afternoon. Spinning on her kitten heels, her long, straight brown hair flowing over her shoulder, she rushed to the living room of the 4,000-square-foot house. She'd staged the stone and stucco home with the theme of French Country Flair. Bringing the rustic country flavor from the outside in, she'd used the colors of lavender and green, rust and yellow, mixing them for inviting warmth. Carved curved legs on the furniture, upholstered in toile with its pastoral scenes, mixed with the gray distressed wood side tables. Prospective buyers who entered should have been screened by real estate agents. So how had Drew Pierson ended up standing in the foyer of today's open house? The chef was her sister Nikki's archenemy. Ever since he'd opened Portable Edibles, a catering company that competed with Nikki's Catered Capers, the two of them had been in a battle to make their businesses succeed. Just why was he here? Caprice hurried to the dining room with its wall-length, whitewashed wood hutch, rushed past the table with its pale blue tablecloth and white, gently scalloped stoneware dinnerware, and headed for the scents emanating from the grand kitchen. She hardly noticed the still lifes of flowers that she'd arranged on the walls. The floor of the kitchen mimicked rustic brick, reflecting the colors in the floor-to-ceiling fireplace. Blue-and-rust plaid cushions graced the chairs in the bay-windowed breakfast nook. Two-toned cupboards—white on top, dark cherry on the bottom—along with copper pots hanging over the granite island made the space inviting for cooking or family-centered activities.
Nikki and her servers had almost finished readying the chafing dishes and serving platters in the state-of-the-art kitchen. The combination of Nikki's culinary skills and Caprice's staging talent would pull in prospective buyers. More often than not, houses sold quickly because of their efforts, and the real estate agent on board made a hefty profit. The luxury broker today was Denise Langford, and Caprice wondered if Drew Pierson knew her and that's how he'd added his name to her list. While one server poured vin d'orange into crystal glasses, another took a cheese soufflé from the double oven. Nikki's assistant was stirring soupe au pistou—a thick vegetable soup with vermicelli—while a platter of pan bagnat hors d'oeuvres, which were basically tuna, tomato, green pepper, olive, and sliced hard-boiled egg sandwiches, rested beside her. Since Caprice had gone over the menu carefully with Nikki, she knew other chafing dishes held blanquette de veau—veal in white sauce with carrots, leeks, onions, and cloves—and poulet basquaise, which was pan-fried chicken dipped in pepper sauce. Nikki was stirring the boeuf bourguignon. The braised beef cooked in wine with carrots and potatoes and garnished with bacon smelled wonderful. Nikki was so intent on stirring the dish in front of her that she didn't see Caprice approach. Caprice was about to warn her that Drew Pierson had arrived when he appeared beside Caprice, looking over the food to be offered to interested house buyers. "I thought I'd stop by and see what my competition was offering today," he said smoothly. At the sound of Drew's voice, Nikki's head snapped up, her eyes widened, and she frowned. "You're thinking of buying a French country bungalow?" Caprice asked, giving her sister time to compose herself. "I told Denise Langford I wouldn't mind having a look at this place," he answered. This "place" was definitely out of Drew's budget, since he was a fledgling business owner. 
Portable Edibles couldn't be making that much money yet. Drew ignored Caprice and stared down at the boeuf bourguignon, sniffed it, then smiled at Nikki. "Anyone can make boeuf bourguignon, but I see you added bacon. Nice touch." "Don't think I'm going to serve you any of my food," Nikki responded, her tone kept in tight restraint. "If it were up to me, I'd have you removed from the property." Drew, his handsome face producing a fake smile, clicked his tongue against his teeth. "Your envy is showing. I guess you heard I'll be catering the exclusive fund-raising dinner at the Country Squire Golf and Recreation Club. My bid came in lower than yours." Caprice had to wonder about that conclusion. Nikki's bids were more than competitive. It was quite possible that someone on the selection committee for the dinner had favored Drew. She could read her sister well, and she saw that Nikki was thinking the same thing. "Just because you won that job doesn't mean your food will win the taste test," Nikki offered. "I have a growing client base. Do you? I have repeat customers. Do you?" "Your social media following is pitiful," he responded with bitterness, and Caprice wondered where that bitterness was coming from. What had Nikki ever done to Drew? They'd actually worked well together when she'd first hired him to assist her on a few catering jobs. It was after she'd turned him down as a partner that their relationship had fallen apart. "I believe in growing my business one happy customer at a time," Nikki returned. "My followings will grow. The way ten thousand followers suddenly flowed into your Twitter stream, I suspect you bought them. How loyal do you think they're going to be?" Everyone in the vicinity was listening and watching now, and Caprice knew the sparring match between Nikki and Drew would only escalate. Caprice leaned a little closer to him. "We'll serve you if you want so you can sample Nikki's food to see exactly how delicious it is. 
But I don't think you want a scene here any more than she does. That could be bad for business, and business is what you're all about, isn't it?" She didn't know what had made her throw that question in. But when she saw the look on Drew's face, she understood this wasn't just about business. There was something personal underlying his rancor for Nikki. Still, she must have gotten through to him. He took a step back from the food and her sister. "Good luck, Nikki. You're going to need it, because I'm going to cut your business off at the knees." After that shot, he turned and headed for the front door. Everyone around Nikki went back to what they'd been doing and pretended they hadn't heard anything. But Nikki knew better and she looked upset. "I knew you two were competing, but I didn't realize he nursed a vendetta against you. What gives?" Caprice asked. Nikki lowered her voice. "It's more than professional. You're right. He made a pass at me before I turned him down as a partner. I had already turned him down as a love interest. I think that rejection really bothered him. Rejected by me both ways, he decided to try to wipe out my business. But he can't. My food's better than his. He's an efficient cook and he'll do fine at catering, but I don't think he has the creative spark to make his dishes really special. I'm determined to show him up next Sunday." "What's next Sunday?" Caprice asked, thinking about her schedule. "It's the wedding expo. Area bakeries, caterers, photographers, dress shops, and flower stores are going to be showing their wares. I'll have sample menus for couples planning their wedding and food they can taste. Drew will too. But mine's going to be better." Of course it was. Nicoletta De Luca could rise to any challenge. Couldn't she?

* * *

Caprice sat with her sister Bella on a bench in the front yard of their childhood home the following evening, eating a slice of cake. The weather couldn't have been more perfect as dusk shadowed the lawn.
"You outdid yourself with the coconut cake this time, and the fluffy icing is wonderful. I don't think I have your recipe. You're going to have to e-mail it to me." "I know coconut cake is Mom's favorite and she doesn't make it much for herself because Dad would rather have chocolate. Just like Joe and the kids. No palate at all." Caprice laughed. Her sister Bella was nothing but blunt. With her curly black hair, sparkling brown eyes, and heart-shaped face, she'd always been a beauty. She was two years younger than Caprice but usually felt she knew best and never hesitated to give advice. She was the married sibling, and with a husband, three kids, a part-time job, and a burgeoning online business making costumes and christening outfits for kids, she was one busy lady. "We have a house showing tomorrow night. Keep your fingers crossed," Bella pleaded. When Bella and Joe had decided to sell their home and look for something to fit their growing family, Caprice had staged the home for them. "I'll do better than crossing my fingers. I'll visualize the right couple finding your house." After a few moments of comfortable sister silence, Bella nudged Caprice's arm. "Look at Mom with Benny." Their mom sat on a lawn chair under a red maple, holding Bella's five-month-old son. Caprice could see he was almost asleep. "Since Megan and Timmy are out of school," Bella explained, "I'm working only evenings at All About You. That way Joe can stay with the kids on those nights. Mom said she'd come over a few mornings and babysit Benny to give me time to sew costumes and christening outfits. I'm keeping up with the orders as long as I have blocks of time at the machine." Bella's costume business was taking off. At some point, she might stop working at All About You, a dress shop that Caprice's best friend, Roz Winslow, owned. With her degree in fashion design, Bella worked there part-time, and it was a good fit for now. 
Roz was dating Caprice's brother, Vince, and Caprice didn't see either of them in the front yard. That didn't surprise her. They might have snuck around back to make out. Never too old to steal a few kisses. Her gaze targeted Grant Weatherford, who was tossing a soccer ball to five-year-old Megan and nine-year-old Timmy, Bella's older children. Her heart did a little flip-flop when he caught her eye and smiled in that way he had of making her feel special. They'd been dating for about two months and, in spite of herself, she was dreaming about a future with him. Suddenly Caprice's cocker spaniel, Lady, came running up to her, wound around her leg, and then settled on her foot. Patches was Grant's cocker, not golden like Lady but with patches of brown and white in a curlier coat. He scampered over too, followed by Caprice's uncle Dom. Her uncle had experienced a divorce and a financial downturn and was living with her parents temporarily until he got back on his feet. He and her family had had their differences, but he seemed to be at peace with them now, especially with Nana, her paternal grandmother, who was sitting on the porch in the shade talking to Nikki and watching them all. Her uncle Dom, her dad's younger brother, grinned down at her. "A man doesn't need to go to a gym when he has dogs to chase." Patches sniffed at Bella's white sneaker. She moved her foot and then got to her feet. "I'll let you enjoy your hairy companions." Bella tolerated animals, but she wasn't a lover of them like Caprice, Grant, and her uncle. Uncle Dom pushed his tortoiseshell-framed oval glasses higher on his nose and lowered himself onto the bench, stooping down to rub Patches's ear. She knew the dog liked to be scratched there, and apparently her uncle had discovered that too. "How's the job hunt?" she asked him. He grimaced. "I have two interviews lined up, one with a bank and another with an insurance agency. 
I never thought I'd be an insurance salesman, but it's something people need." Her uncle had worked for a large financial agency that had collapsed with the economic downturn. He was having trouble finding a job in that sector. Even if he could, she wasn't sure he was enthusiastic about it. He definitely wasn't enthusiastic about becoming an insurance salesman. "Tell me something, Uncle Dom. What do you love to do? What have you always wanted to do?" She was a big believer in putting your heart in your work for your life path to be a success. Bella was doing that with her online costume-making business, and orders were pouring in. Caprice had done that when she'd turned from interior decorating to house staging for high-end clients. She'd needed to turn something she loved into a business that would work in the present economy. Nikki invested her heart in her cooking. Her mom threw her heart into her teaching. Her dad, a mason, had put his life into building structures he could be proud of. She knew Vince and Grant cared about their clients in their law partnership. Yep, to be a success, you had to do what you loved to do. Her uncle thought about her question for a moment, then motioned to Patches and Lady. "I've always wanted to tend to animals. Being around Lady and your cats has brought that home again. But I'm a little old to be a veterinarian." "You're never too old if that's what you want to do. But you could tend to animals in another way." "And that is?" he asked with a raised brow. "Have you ever thought about being a pet sitter? I use one at times, and I have clients who would like to have their pets taken care of in their homes. They can't find someone to do it. To really make it a business, you'd have to be bonded and insured. But that's possible, isn't it?" Her uncle studied the dogs again, patted Patches at his feet, and then nodded. "I never thought about pet sitting. But, you know, I think I'd like it." 
"I can give you the name of the pet sitter I use who lives in York, if you'd like to interview her. That might give you an idea of whether you want to do it or not. Do you have your phone on you? I can give you her number." After her uncle took out his phone and entered the number, Grant approached them. He nodded to her uncle. "Thanks for giving the dogs a run." "Anytime," her uncle responded, rising to his feet. He gave Caprice's arm a squeeze. "Thanks for your idea. I'll let you know what I decide." Once her uncle had moved off and joined her dad and Bella's husband, Joe, in conversation, Grant asked, "Are you ready to leave? We could go back to your place for a while." There was a look in his eyes that told her he wanted to be someplace private with her. Maybe they'd have a make-out session of their own.

* * *

An hour later, Caprice brought tall glasses of iced tea into her living room. Grant stood at the floor-to-ceiling, turquoise-carpeted cat tree. As he petted her white Persian named Mirabelle, who was on a lower shelf, he studied Sophia, her long-haired calico, who was on the top shelf. He said to Sophia, "I'm glad to see you two are getting along now." He glanced at Caprice. "Do they still squabble?" "Now and then. Mostly if Mirabelle wants to be friendly and Sophia doesn't want to be bothered. But considering Mirabelle's been here only two and a half months, they're doing well." She nodded to Lady and Patches, who were gnawing on toys near the sofa. "Mirabelle still stays out of Lady's way, but she doesn't seem scared of her anymore. And look at her. She doesn't even mind Patches being here." Grant came to join Caprice on the sofa. It was striped in purple and lime and fuchsia to complement the sixties decor, including a lava lamp. As he sat beside her—very close beside her—she took a sip of tea and then placed the sweating glass on the mosaic-topped coffee table. She hadn't turned the air on because the night breeze floated in the open windows.
"Dinner at your mom and dad's is always like a family reunion," he mused. "That's why we do it once a month, whether there's a special occasion or not. Everybody enjoys going all-out—Nikki's antipasto, Nana's ravioli, Bella's lima bean casserole and cake, my bread, Vince's choice of wine." "The weather was perfect for the kids to play outside afterward." Kids were sometimes a sore subject with Grant, though he tried not to let it show. He'd experienced a tragedy in his past. His daughter had drowned, and his marriage had broken up because of it. When he'd moved to Kismet to join her brother's law practice—she and her family had gotten to know him when he'd been her brother's college roommate—he'd started a new life. Yet he really hadn't been ready to move on. It had been only in the past few months that Caprice had felt he was putting the past behind him . . . or not regretting it as much. "Megan and Timmy can be a handful," she agreed. "It's great when they can be outdoors to release some of that energy. Just wait until Benny joins in the fray." Apparently wanting to leave the subject of children, Grant changed the direction of their conversation. "Last night at Grocery Fresh, I ran into a client who'd stopped in at your open house." "Really? What did she think?" "She liked the way the house was staged. I think she picked up one of your cards. She liked the food too, but—" "But?" Caprice was surprised there was any question about Nikki's food. "Apparently she overheard an argument between Nikki and some guy." Caprice groaned. "That wasn't some guy. That was Drew Pierson. I think he came by just to goad Nikki . . . and maybe intimidate her. Thank goodness she didn't take him on as a partner. That could have been disastrous." "This was a heated argument?" "Heated enough. He threatened to destroy her business. Fortunately not too many guests were there yet. Nikki told me yesterday that Drew made a pass at her when they were working together. 
I have a feeling it was more than just a pass. She didn't confide the details to me, but I think whatever happened shook her up and that's why she didn't consider taking him on as a partner." "Why would he do that if he wanted to work with her?" "Maybe he thought their working relationship could have benefits. Maybe it was his way of thinking he could solidify the deal." Grant set down his glass of iced tea next to hers. Then he curved his arm around her shoulders. "A kiss or a relationship should have nothing to do with a deal." Caprice gazed up at him, totally lost in his gray eyes, and he seemed lost in her dark brown ones. "I absolutely agree." For an instant she thought he was going to kiss her, but instead he asked, "How would you like to go with me to a concert in the park on Wednesday night? We could spread out a blanket, take some snacks . . . and the dogs." "Who might want to eat the snacks," she joked. Grant smiled. "We'll take a few treats for them too. What do you say?" "I say it's a terrific idea." The words were no sooner out of her mouth than Grant bent his head and kissed her. The living room became a psychedelic swirl, and she knew she felt something good and true and lasting for him. She just hoped he felt the same.

* * *

Caprice's childhood home was a haven for her. That's why she visited it often. As she strolled up the walk on Monday morning, Lady padding beside her, she realized once again how the house's Mediterranean-style exterior didn't fit its Pennsylvania surroundings. When her parents had purchased it, it had been a real fixer-upper. They'd been "fixing up" for years because there was always something to repair. Yet with her dad's masonry and carpentry talents and his coworker friends helping him, he'd kept up improvements year by year. A few years back when Nana had sold her house, Caprice's parents had built an addition so she could live with them but still be independent.
With Lady sniffing the grass edging the sidewalk, Caprice went around to her Nana's side of the house, mounted the steps, and knocked. Nana was an early riser and she might have turned on her morning TV programs. Caprice hoped she could hear the knock. However, Nana immediately came to the door in yellow knit sportswear pants and a matching top. Her gray hair was fixed in the usual bun at her nape, and her golden brown eyes were alight with morning energy. "Did I know you were coming?" Nana asked with a fond smile and a pat for Lady. Caprice gave Nana a hug, then unhooked Lady's leash. "No, you didn't. But we didn't have much of a chance to talk yesterday and I wanted to catch up." Nana motioned her inside. "I'm just having my morning cup of tea. You can join me." As soon as they stepped inside Nana's small living room, Valentine came scampering from the bedroom. Caprice had found the gray tabby kitten in her yard one cold February night. Nana had decided she needed a pet, and bonds had formed quickly. Now, at five months, Valentine was becoming lankier and longer. She danced up to Lady, who took a sniff, then they both made a beeline for the kitchen. "They want a midmorning snack too," Caprice translated with a laugh. "I have fresh-made biscotti for us, Greenie treats for Valentine, and a Perky Paws peanut butter cookie for Lady." Fifteen minutes later as the animals chased and played in the living room, Nana served Caprice tea at her small kitchen table, the TV sounding in the background. "What are you watching?" Caprice asked, unfamiliar with morning TV. Morning was her best work time—meeting clients, making phone calls, or running errands to find furniture for her next house staging. "Mornings With Mavis," Nana responded. "It's that new, local morning talk show. I learn about all kinds of businesses in the area, local charities, events that are coming up. It's very informative." 
Caprice glanced at the TV as she pulled one of Nana's biscotti from the canister that her grandmother had brought to the table. Then she took a second look at the TV, realizing what she'd seen. "That's Drew Pierson!" "Drew who?" Nana asked. "Drew Pierson, the caterer who's competing against Nikki. Can we turn it up?" They both moved into the living room and Nana picked up the remote, increasing the volume. Drew was sitting in one of the interview chairs, looking all dapper and casual, his hair perfectly gelled in that new mussed way, while another gentleman in a suit and tie sat next to him. Mavis—at least Caprice guessed it was Mavis—with her flaming red hair and broad lipstick smile sat across from them. She said, "Your chain of restaurants, Rack O' Ribs, is well known up and down the East Coast, Mr. Cranshaw. And we're so glad you opened a restaurant in Kismet not so long ago. Tell me how you came to decide that Chef Drew's blackberry barbeque sauce would be used in your chain." "You're kidding!" Caprice exclaimed. "He sold barbeque sauce?" "Maybe he'll be rich now and stop competing with Nikki," Nana observed. "I don't know about rich. But if he sold the recipe, that could be quite profitable." "I tasted it," Mr. Cranshaw said. "As soon as I did, I knew I wanted it." "The barbeque sauce is only the beginning," Drew informed Mavis. "My catering service, Portable Edibles, is going to specialize in original recipes—main dishes, pies, and cakes. If you want a sampling, come out to the Kismet wedding expo on Sunday. I'll be introducing a chocolate walnut groom's cake. I've been told the recipe is to get married for!" He laughed as if he'd made an exceptional joke. As the camera zoomed in on Mavis, Caprice realized the show must consist of short interviews. Mavis said, "We'll post information about the wedding expo on our Web site. Viewers, make sure you check it out. If you're in the area, stop by the wedding expo on Sunday." 
The segment over, the program went to commercial and Nana turned down the sound. "Do you need to tell Nikki about this?" Nana asked. "I certainly do." Caprice was already reaching for her phone in the pocket of her yellow bell-bottomed slacks—a staple in her vintage wardrobe. "You know," Nana said softly, "Drew's grandmother, Rowena Pierson, makes a wonderful chocolate walnut cake . . . with maple icing, if I remember correctly." Caprice forgot the call to Nikki for the moment. "Do you know his grandmother?" Nana frowned. "She attends St. Francis of Assisi church. Has for years. She has a reputation for being a wonderful cook. But she does have arthritis and some sight problems. She doesn't cook as much as she used to. The thing is, I can't imagine her giving anyone her recipes. She insists they're unique and she doesn't want everybody copying them. She once told me that she keeps them hidden." Either Drew's grandmother had become generous with her recipes and given them to Drew, or else . . . Could Drew have found those recipes and stolen them?

Chapter Two

Caprice had already been seated at Rack O' Ribs on Tuesday when Bella entered the restaurant. Caprice spotted her sister over the line of people waiting to be seated and waved. Bella waved back and said something to the hostess, a redhead dressed in a white blouse, red tie, and very short black skirt. Then her sister wound her way around the rustic wooden tables, bumped into one of the black iron chairs, and met Caprice at her back booth. The vinyl on the booth's wooden bench sported a cowhide pattern. Bella slipped into the booth across from Caprice with a resigned sigh. "And just why did you want me to meet you at this busy place? It's new and everybody's still trying it. We'll hardly be able to hear ourselves over the chatter." "Good afternoon to you too," Caprice said cheerily. Bella wrinkled her nose at her and asked, "Well? When Mom came over, I told her I'd be away for an hour.
You could have just gone through the drive-thru and brought lunch to my house." Yes, Caprice could have ordered the ribs, picked them up at the drive-thru, and taken them to Bella's, but she wanted to be in the midst of the action and actually experience the restaurant. "You can take Mom some ribs after we finish lunch. I told you this is about Drew Pierson. I want us to taste his recipe for blackberry barbeque sauce. They just started serving it yesterday." Bella looked around. "Rack O' Ribs is a nice all-around restaurant for families, teens, and couples." She turned her attention back to Caprice. "Why isn't Nikki tasting the sauce with us?" "Oh, I'm sure she'll taste it. But she's too upset about the whole thing right now to think clearly, or to taste well. We need to be objective. She couldn't believe Drew had devised some secret recipe that was good enough to sell. She said he can cook well enough, he just can't create from scratch. He insists his catering business is going to be known for its original recipes, and Nikki's just shaking her head over the whole thing. He'll be at the wedding expo this weekend with a supposedly divine chocolate walnut groom's cake. Nana believes it's his grandmother's recipe." "Maybe it is," Bella responded. "You know how recipes are handed down." "Maybe. But then he shouldn't say it's his recipe. I'm hoping this sauce isn't anything special. Then we can tell Nikki that." "Does she know you're here?" Bella asked. Caprice shook her head. "No, I want to get the verdict first. I ordered half a rack for both of us, sweet potato fries, and steamed broccoli." "No carbs there. The broccoli will save us," Bella decided with a grimace. "Tell me about your house showing last night." Caprice raised her voice a little so Bella could hear her over a sudden burst of loud chatter. Bella squeezed the thin slice of lemon hanging on her tumbler into the water and dropped it in with the ice cubes. 
"The prospective buyers were a young couple, maybe in their midtwenties. They have one little girl who's two. The real estate agent said they seemed to be interested. It is a perfect starter home. She said they liked the way you had it staged with the sectional sofa, the colorful throw pillows, and the bright stoneware on the kitchen table. Apparently that attracts family buyers." "That's what I was hoping. Your house is terrific for a family just starting out. Have you and Joe found anything you really like?" "We have our eye on a couple of places online, but we don't want to go look at them and then get disappointed if it takes a long while for our house to sell." "It won't," Caprice assured her. "The market's picking up, and you're right in the perfect price range." "I wish I had your confidence. Joe insists we shouldn't seriously look until we sell." "That's one way to do it, but then everything could happen really fast. You might have to move out of your sold house and move in with Mom and Dad." Bella groaned. "Oh, right, with Uncle Dom there too. Wouldn't that be a hoot?" "I don't think he'll be there that much longer," Caprice confided. "He's seriously considering setting up a pet-sitting business." Bella shook her head. "I couldn't even imagine pet-sitting—walking dogs, cleaning up after cats, all day long and into the evening." "If you love animals, it's not a chore, and I think Uncle Dom really does like caring for animals. Mix in house-sitting, and he could have a good business. It's hard to find qualified, trusted people who will pet sit or house sit and take care of everything." "I think our food's here," Caprice noted, catching sight of a waitress who was winding her way toward them with a tray. The brunette with the jaunty ponytail and sunny smile set one dish in front of Caprice and the other in front of Bella. She said, "The plate is hot, so be careful." She took foil packets from her pocket and set three in front of each plate.
"These are to wipe sticky fingers. We know our customers like to eat their ribs with their fingers. Enjoy." Once she'd left the table, Caprice glanced down at the ribs. They were heavily glazed and glossy, and did look delicious. Bella was shaking her head. "There goes my diet." "You can try just one or two," Caprice offered. "Isn't diet all about balance?" Bella rolled her eyes. "Let's see how good these are." Whereas Caprice was pulling the ribs apart with her fingers and then licking them, Bella used a knife and fork. Her sister was particular and wouldn't get her fingers dirty with something like sticky rib sauce if she could help it. Bella stabbed a nice chunk of meat with her fork, smelled it, put it to her lips, and then ate it. Her eyes widened and she smiled. "I don't think one rib will be enough. Oh my gosh, Caprice, this is really good. If these are the kind of recipes Drew Pierson devises, Nikki's going to have a battle on her hands." Caprice picked up a rib and, as delicately as she could, ate the meat from it. The taste on her tongue was fruity and sweet, yet with a bit of heat. Bella was right. This sauce was good. Maybe even genius. "They're selling bottles of it up at the cash register," Bella informed her. "You can bet Drew Pierson will get his cut of each one." Caprice suddenly realized what a lucrative deal this had been for Drew. He was definitely on his way. On his way to destroying Nikki's business? Or on his way to something else? * * * Caprice sat at her computer working on Wednesday before she changed for her date with Grant. At least she was supposed to be working. But she was thinking about how good those ribs had tasted yesterday and whether she should tell Nikki. Mirabelle sat on the cushy lime-green chair beside Caprice's computer worktable. Every once in a while, she looked up and meowed and Caprice would pet her. Mirabelle was vocal, as lots of Persians were. 
Suddenly her long-haired calico, with her strikingly beautiful white ruff, sauntered into the room and saw Lady sitting by Caprice's foot and Mirabelle on the chair. Usually laid back, Sophia hadn't been particularly happy about this recent addition to their family. The cats were adjusting to each other. Without hesitating, Sophia stood up on her hind legs and pawed at Mirabelle. It wasn't a nasty pawing, more like an I-just-want-to-bother-you pawing. Mirabelle meowed, hopped down, jumped over Lady, and dashed for Caprice's office closet. Caprice always left the door open in case one of the cats wanted to take a nap in there. She knew the animals had to find their own relationships, and they were . . . slowly. Caprice's doorbell rang, and she checked the small portable monitor on her desk. Since her last brush with a murderer, she'd had an alarm system put in her house. Now she saw her sister waving at her, and she smiled. After hurrying to the front door, she unlocked it and Nikki stepped inside. "I didn't expect to see you today," she said. "I came from Rack O' Ribs. I tried Drew's sauce and it's really good. I'm so disappointed." Nikki sounded dejected, and that wasn't like her. "You wouldn't be able to create a sauce that's just as good?" "I don't know. I don't know anything anymore, Caprice. Maybe I should just get a job as a chef somewhere." "Don't talk nonsense. Your Catered Capers is doing well, isn't it?" "I'm meeting my bills and paying my help. But I want more than that." "Then we have to get your name out there, like Drew has gotten his out there." "He's done more than that. If he sold the rights to his recipe to the Rack O' Ribs chain, he's making major bucks." Caprice led her sister into her living room. "You can't let him take jobs from you. What do you have planned for the expo on Sunday?" she inquired. "You mean what food do I intend to serve?" "Yes. How is it special or different from anyone else's?" Nikki thought about it. 
"I'm cooking my roast beef with the white horseradish glaze, bite-sized duck à l'orange samples, salmon with a bourbon sauce, and then assorted cookies and desserts." "What's your pièce de résistance?" Caprice pushed. "I don't know what you mean." "Well . . ." Caprice drawled. "Drew is advertising this groom's chocolate walnut cake as his specialty item. What are you going to advertise as the epitome of wedding cakes?" "Oh, I see what you mean. I'll have to think about that. Maybe I can coax Serena, who helps me sometimes, to decorate a mini–wedding cake." "Think about the topper too. Something different and really classy, like Waterford crystal." "You do have ideas." "I've always told you that. Bring plenty of those new pamphlets you had printed up, and run over to the Quick Print shop and have a poster with Catered Capers and your name and your specialties printed so we can put it on an easel. It's time to go big, Nikki." "Or go home," Nikki muttered, again with that note of dejection. "This isn't like you. You're usually filled with confidence. What's going on?" Nikki sighed. "I'm tired of working and feeling like a hamster on a wheel. Maybe if I had a social life and somebody to care about, all of it would seem more worthwhile." "Or more frustrating," Caprice offered. "But I know what you mean. Dating Grant . . . It's become part of the focus of my life. We're going to the park tonight for the concert. Why don't you come along?" "I'm not barging in on your date." "We're bringing both dogs, and they'll be chaperones. There will be a hundred other people there. Come. I know Grant won't mind." "You know him so well?" Nikki asked with a wink. "We're becoming very well acquainted," Caprice assured her with a sly smile. Suddenly Mirabelle dashed out of Caprice's office into the living room and jumped up to the back of the sofa. Sophia wasn't far behind, chasing after her and then settling on the arm of the couch.
Lady ran to Nikki, sniffed her pants legs, then rolled over and lay down at her feet for a tummy rub. "I was just going to change," Caprice said. "Why don't you help me choose what to wear? You're better at this dating thing than I am." "Are you trying to distract me?" "Am I succeeding?" "You have to promise to wear whatever I pull from your closet." If this were Bella making that offer, Caprice would probably refuse. But she trusted Nikki's taste, even if it wasn't vintage. That was the fashion she most enjoyed wearing. "You've got a deal," Caprice decided, knowing Nikki's choice would be something Grant would appreciate. * * * Seated on a blanket next to Grant two hours later, his arm wrapped around her shoulders, Caprice was absolutely happy. The band on the temporary park stage was playing oldies but goodies, her favorites. She couldn't think of any place she'd rather be, as Lady and Patches romped around the blanket on their leashes and then settled down with chew toys. Folks on folding patio chairs, from teenagers looking for something to do to seniors letting the music bring back memories from the past, were seated across the grass lawn. Her parents might be somewhere in the crowd. She wasn't sure yet. Grant leaned close and kissed her on the cheek. "What are you thinking about?" he murmured at her ear. "I'm thinking about how much I enjoy our dates," she said truthfully. He squeezed her a little tighter. "You two look too comfortable," a voice behind Caprice said. "I have a feeling you're slipping into the older crowd instead of going out and raising Cain on a Wednesday night." Caprice glanced over her shoulder and spied her brother, Vince. With his dark good looks, wearing a tan Polo shirt and navy board shorts, he looked younger than he did in his business suit. Beside him, Roz looked her beautiful self in a violet blouse and matching shorts. Caprice imagined her friend's leather sandals came straight from Italy. Roz always wore jewelry. 
Her amethyst earrings and ring sparkled even though the sun had begun to dip below the horizon. She was holding a leash with her dog Dylan who excitedly greeted Patches and Lady and sat on the blanket with them. Dylan was a Pomeranian-Shih Tzu mix and his fluffy tail swept back and forth over the blanket. "Can we join you?" Roz asked. "We forgot our blanket." "The more the merrier," Caprice said. After Vince and Roz settled themselves on the blanket, Roz took imported chocolate bars from her purse, passing them around. "This is the fun part of a lawn concert. Better than those greasy fries in that service cart over there." The white Chuck's Snacks truck contracted with the Chamber of Commerce to do business at these concerts. But its offerings were limited to sodden fries, greasy burgers, and ice-cream sandwiches. Roz took a bite from her chocolate bar, then winked at Caprice. "Are you ready for the reunion?" Their high school reunion was only five weeks away. She and Roz were members of the planning committee. "I'm ready, but I don't know if the committee is. Did Alicia look into decorations yet?" "Are we going to talk flowers?" Vince muttered. Roz jabbed him in the ribs. "Do you have a better idea than flowers in vases on the tables?" "Let's see. Fifteen years ago. Why don't you do movie themes from that year? Incorporate that into centerpieces." "It's a little late for an all-new concept," Caprice said. "Though that would have been a good one. We'll probably just stick to our class's colors with the flowers." "You are bringing a date?" Roz asked Caprice, with a sly look at Grant. "I might ask a certain lawyer I know," Caprice teased back. "And maybe this time I'll ask you to dance," Grant assured her. Grant was referring to the Valentine's Day Dance when he hadn't asked her and she'd been terribly disappointed. From the affectionate pressure of his hand on her arm right now, she knew that wouldn't happen this time. 
Suddenly Nikki was beside Caprice, unfurling her own blanket beside them. "This will give you all a little more room," she said. Nevertheless, there was something in her tone that alerted Caprice that something could be wrong . . . something new could be wrong. While Vince spread his long legs over onto Nikki's blanket, Caprice waved her hand over her outfit. "See, I wore what you suggested." Her blouse, reminiscent of Stevie Nicks, was gauzy, though not Nicks's representative black. It was turquoise with embroidery and flowed over her white culottes. Her white sandals with jewels of fuchsia, lime, and turquoise completed the ensemble. Nikki knew her well and wouldn't have suggested anything Caprice didn't want to wear. "Perfect," Nikki said with a glance, though she looked distracted. Grant raised a brow at Caprice. He was coming to know her sister too. Usually energetic and effervescent, this was a different Nikki. "Are you nervous about the expo, Nik?" Caprice asked her, leaning closer. Vince overheard. "What do you have to be nervous about? You've done expos before." "None of those was this important," Nikki maintained. "I've come up with my pièce de résistance," she told Caprice. "Carrot cake with cream cheese icing. I'm going to bake it tomorrow and freeze it. Serena is going to frost and decorate it for me early Sunday morning." "Then you'll be all set." "I just hope my presence there will make a difference and stop Drew from stealing my clients. I lost another to him—Warren Shaeffer, who's president of Kismet's Chamber of Commerce. He lives in Reservoir Heights, and I catered a cocktail party for him last year." "He belongs to the Country Squire Golf and Recreation Club," Roz said. "I can ask around and find out how Drew stole Warren away from you. For all we know, he could be giving a discount that you could never give, just to take clients away from you." "And what good would that do?" Vince asked. 
"That discounted event might help him to capture further events by spreading his name around. I don't know, but I'll find out for you, Nik." "Your carrot wedding cake is going to beat Drew's groom's cake. I'd bet my life on it," Caprice assured her sister. "Let Drew Pierson be the king of barbeque sauce. You can be the queen of catering." Although Nikki tried to smile, Caprice could see that her words weren't assuring her sister. Nikki was worried she'd lose her business . . . and Caprice couldn't blame her. Chapter Three The building where the wedding expo was held on Sunday was huge, probably the largest public building in the town of Kismet. So many of the edifices in the town, especially downtown, were old and refurbished. This expo center, however, on a plot of ground where old houses had been demolished to make room, was shiny, bright, and about eight months old. The town council and mayor, after doing some research, had decided Kismet could bring in revenue by having a facility where wedding receptions could be held, or businesses could show their wares, where craft shows could flourish, where gun shows could have their day. A building like this could draw crowds, not just from York, Harrisburg, and Lancaster, but maybe from farther away—from Philadelphia and Baltimore. Who knew what people might come to see? Caprice was meeting several people here. Nikki, of course, would be inside serving. Juan Hildago, Caprice's assistant in her house stagings, would be sampling food and thinking up ideas for future open houses. Yes, it was a wedding expo, but ideas could be gathered anywhere. Roz and Vince might be here too, if matrimony was anywhere in their heads . . . or if they weren't too hesitant to admit it. Since Uncle Dom was trying out his pet-sitting skills, Caprice had left her furry crew with him. She'd seen him interact with Lady and her cats since he'd been living with her parents, and she trusted him. A pet-sitting career could be just what he needed. 
A pet sitter was just what she needed when she didn't want to impose on family or friends. The expo center was spacious and divided into several groupings. Bridal gowns and bridesmaids' dresses, mother-of-the-bride dresses, and elegant shoes were displayed against the eastern wall, each vendor having something different to offer. Deejays were set up showing off their sound systems and computer-generated music, trying to lure in customers with particular playlists. Jewelry vendors took an aisle down the center. Every bride needed jewelry, not only for herself, but for gifts. There were leather makers who provided suggestions for groomsmen, and china dealers with everything from Spode to Fiesta dinnerware. Flower shops showed off representative floral bouquets for the church and for the bride to carry, as well as potted palms and hibiscus to dress up reception areas. As Caprice navigated the aisles and checked out the wares, a little thrill of excitement jumped up her spine. If she and Grant were really serious, if they really meant what she thought they did to each other, they could be walking around here together, not only dreaming but planning. Maybe next year if the wedding expo was here again. The bakers' aisle garnered her attention as she strolled down the row where bakers were presenting their specialty wedding cakes. Caprice passed one ten tiers tall with beautiful pale pink flowers, silver balls, and white pillars. It was a little much for Caprice's taste, but some brides would love it. The food sampling stands were all located against the west wall. Nikki had called Caprice when she was setting up so Caprice knew where she was located. She headed that way, easily spotting the top of Nikki's head. She'd had her dark brown hair highlighted with golden lights again. Since she was serving food today, she had it pulled back in a bun. But her beautiful oval, Madonna-like face only looked prettier with the severe hairstyle.
Nikki was busy testing food in the food warmers. Caprice knew her routine. She didn't let anything go by untasted. But before she could reach Nikki's table, she spotted Drew . . . and he wasn't behind a table of his own. He approached Nikki's table and took one of the business cards from a cut-glass stand right next to the pamphlets about her catering business. He pocketed it with a little show and grinned at Nikki. Then he went around the corner of the table where Nikki was holding a spoon in one hand and a fork in the other. With her hands busy, with customers stopping and looking and sometimes asking questions, she was at an obvious disadvantage with her enemy so close. And close he was. He leaned in to her, his lips almost brushing her cheek, and he whispered something in her ear. Caprice rushed forward when she saw Nikki blush. Her sister didn't only blush, she elbowed him away. He, however, just laughed, gave her a wave, and returned to his own station, two tables up from hers. Caprice hurried over and scurried around the back of the table. "What was that about?" Nikki laid down the fork and the spoon, got hold of the chafing dish's lid, and plopped it on top with a clang. "Nothing." "Your face is still flaming, so it wasn't just nothing. What did he say?" "It was a lewd remark, and I'm just going to ignore him. I don't have any other choice." "He's behaving like an adolescent who wants a pretty girl's attention. You turned him down, and now it's like he's going to do anything possible to make you notice him again." "Even if that means putting me out of business?" "That's not going to happen. That's why you're here. Just look at that carrot cake." Instead of the traditional round shape, Nikki had created a square wedding cake. Her friend Serena had made classical swirls and twists with the icing. The three layers looked professionally done yet practical too. The topper on the cake was a beautiful cut-crystal heart. "Who wouldn't want this cake?" 
Caprice asked, then took one of the small dishes with samples—Nikki had baked sheet cakes with the same icing—used the plastic fork, and transferred the bite into her mouth. She sighed with gastronomical pleasure. "The carrot cake is moist and rich, and that cream cheese icing . . . You've got a winner with this one." A couple who had been standing about a foot away closed in on Nikki now. They held a copy of one of her sample menus in their hands, looking quite interested in it. The man spoke first. "I'm John Laughton, and this is my fiancée, Danica. We tasted your wedding cake, and we've checked out your menus. We're quite interested in hiring you. We like the variety of food you have to offer. Is it possible to change up these menus, or are they set? We have food restrictions in our family." As Nikki explained they could come up with a custom-made menu, Juan Hildago appeared by Caprice's side. In addition to helping with all of her house stagings, Juan sometimes assisted Nikki in planning the menus in order to fit the theme. He was as familiar with Nikki's food as Caprice was. Right now he looked seriously disturbed. "What's the matter?" Caprice asked before he said a word. Juan lowered his voice as he explained, "You know that horseradish-glazed beef dish Nikki is giving samples of?" Caprice nodded. "Pierson is serving the same dish." "You aren't serious." "More serious than the price on that designer wedding dress over there. Do you think we should tell Nikki?" Caprice considered what she should do. She and Nikki didn't keep secrets from each other. The young couple who had approached her sister seemed to be finishing up their conversation with her. She noticed Nikki pick up her phone and enter the woman's number. The man took a business card and one of the pamphlets. After smiles all around, the couple moved away. "I'm going to get a sample of Pierson's beef," Juan said. "Be right back." Nikki's questioning gaze followed him as he got lost in the crowd. 
"Where's he going in such a hurry?" "You'll find out in a minute. I'm not sure you're going to be happy when you do. But knowing your competition is ninety percent of the battle. Juan said that Drew is serving horseradish-glazed beef. He's gone to get us a sample." Nikki's expression was that worried look Caprice had seen so often on her face lately. "We made it together when he worked for me." Juan was back, saying, "Pierson didn't see me. One of his assistants handed off the sample. Taste it and see what you think." Both Caprice and Nikki picked up forks. They each took bites. Nikki looked as if she was going to blow a gasket. "That is my recipe. I can taste every spice I put in it. I taught Drew how to make it when he was cooking with me. He'd never heard of horseradish-glazed beef, let alone known there was a white horseradish. I'm going to tell him he's not going to get away with this." Caprice caught her sister's arm. "Wait, Nik. Think about this." She was usually the impulsive one, not her sister. But when Nikki was angry— "He can't think it's right to do this," Nikki protested. "No, he can't," Caprice agreed. "But making a scene here is just going to reflect badly on you. Maybe that's what he expects you to do, because you face issues head on. Let me go sneak a sample of his groom's cake, the one Nana thinks is his grandmother's recipe. We can taste it together. Besides . . ." Caprice waved to a line of people coming their way. "You have customers and you need to drum up business. Put a smile on that pretty face and do it. I'll be back." She noticed her sister make an effort at that smile when prospective clients approached. Caprice flitted from table to table at first, then homed in on Drew's table. He had three assistants working with him, and it was very easy to just slip a plate with the groom's cake from the table and carry it along with her. When she reached Nikki's stand again, Nikki stepped away from the servers, letting her assistants take over.
Then she and Caprice and Juan put their heads together in a quiet corner as they each took a bite of the chocolate walnut groom's cake. "I hate to say it," Juan said, watching Nikki carefully, "but he's nailed this. Every groom in town will probably want it." Caprice threw her assistant a warning look. "Sorry," Juan mumbled. Nikki sighed. "You're right. It's delicious. But I just can't believe he came up with it himself. If it's his grandmother's recipe, does she know he used it?" There were lots of possibilities, Caprice supposed, giving Drew the benefit of the doubt. He might have heard about his grandmother's cake and decided to try to replicate it. But if he wasn't good at creating recipes, that would be darn hard to do. Maybe Drew's grandmother had just handed over the recipe. After all, giving it to her grandson was much different than giving it to an acquaintance, right? Nikki's table was becoming deluged with customers wanting to sample her food, as well as examine the menus. This was her sister's chance to grow her business, to spread the word about her services, to let new customers realize how good a cook she was. Suddenly Bella appeared at the table and checked out the line of people taste-testing Nikki's food. "She's doing great." "What are you doing here?" Caprice asked. "Joe took the kids to the park so I could stop in. I knew Nikki needed some support. But she looks like she's doing fine." "For now," Caprice said, with some doubt in her voice. "Did you stop at Drew Pierson's table?" Bella lowered her voice. "I didn't want to tell Nikki, but his food is good. I wasn't going to mention it." "She already knows. He stole her recipe for the horseradish-glazed beef. She's wondering whose recipe he stole for the cake." Seeing that Nikki was too busy for conversation, Bella nodded to the runway show across the room. "Let's take a walk over to those bridal dresses." Bella had an opinion about everything, and never hesitated to express it. 
Passing a table filled with decorations for wedding centerpieces, she said, "They look cheap. I'd never put them on my table at a reception." Caprice smiled. No, Bella would want quality all the way, even if she had to cut corners somewhere else to pay for it. A dais and stairs had been set up near the bridal dress vendors. Now a crowd was gathering around that area, and Caprice suspected why. Models would be showing off some of those wedding dresses. Bella grabbed Caprice's arm and pulled her along, snaking around women until they both had a good vantage point about five feet from those stairs. "We don't need to be so close," Caprice murmured. "Yes, we do. You're interested, aren't you? I mean, you and Grant are dating, and he's what you want, isn't he? You sent Seth packing so Grant could be your exclusive." Her exclusive. Just what did that mean? "But we're not . . ." Caprice waved her hand at the model climbing the steps who stopped on the high dais and smiled at everyone around. "We're not this serious." Bella faced Caprice squarely, staring straight into her eyes. "When you close your eyes at night, do you see Grant's face? When you wake up in the morning, do you think of him? Aren't you weaving dreams about kids and dogs and a minivan?" Caprice was always straight with her sisters. "Just because I'm thinking of Grant that way doesn't mean he's thinking of me that way. And I already have a van." "You're in denial," Bella warned her. "If you don't accept what you're feeling, Grant won't either. You have trust issues, Caprice. I know that. You've been hurt before. But you have to forget about Craig going to California to college and sending you a Dear Caprice e-mail. You have to move on, past Travis going back to his ex-wife. You're the one with that antique silent butler full of affirmations. You've got to look ahead, not back, and embrace it. That's what Father Gregory told me and Joe—embrace our future. That's what you need to do too." 
That was sage advice coming from Bella. Apparently she and Joe had been listening carefully to Father Gregory when they'd had counseling sessions with him, and they'd taken everything he had said to heart. She hesitated a moment and leaned close to Bella. "Grant has a past too." "I know that," Bella commiserated. "And losing a child isn't something he's going to ever forget. That tragedy ruined his life for a while. But now you can help him really make a fresh start, can't you?" Caprice had been telling herself that Grant had to be ready. Maybe she was the one who had to embrace the future first. She studied the model at the top of the stairs. Her dress was a strapless concoction of tulle, froth, and glass beads that made the whole gown shimmer. It was beautiful. Still—"That's not me," Caprice said with certainty. Bella cut her a sideways glance. "It's gorgeous." "Maybe, but I don't want Cinderella. I want retro-elegance." Bella rolled her eyes, something she did quite often. "Are you going to search for a vintage wedding gown?" "When the time comes, I might." "The time is now, dear sister, especially if you're going to hunt down one of those." It could be fun checking online websites for vintage wedding gowns. Not that she had a lot of spare time to do it. Nevertheless . . . Searching for a vintage wedding gown might help embrace the future. Wasn't she ready to dream again? * * * Carrying bags from her stop at Grocery Fresh after she'd left the expo, Caprice let herself into her house and was immediately struck by the silence. Quite a difference from the music and constant background hum of voices at the expo or the Sunday shoppers at the grocery store. Silence could be good or it could be bad. Where were Uncle Dom and the fur babies? Hanging her purse on the antique oak mirrored stand in her foyer, she looked for signs of the cats and Lady in the living room and the dining room. 
Had something gone terribly wrong and Dom had to cage everyone and take them to the vet? In the kitchen, she set her bags on the counter and spied Mirabelle and Sophia in one of their rare moments of close proximity. Mirabelle sat on the counter at one corner of the window over the sink while Sophia sat at the other. They were both staring outside. Caprice suspected they were watching more than a stray ladybug. They didn't seem mindful of her at all as she came up behind them. She touched them both at the same time, not wanting to play favorites. Mirabelle meowed loudly and then directed her focus back out the window. Sophia butted her head against Caprice's hand but didn't move away from whatever was out there. "I guess I'd better look too," Caprice capitulated. She laughed when she saw Uncle Dom rolling around on the ground with Lady. They seemed to be tussling over a toy that Lady used to play fetch. "I'll be back in," she told her two felines. "As soon as I round up Uncle Dom and Lady." She opened the back door and the screen and stepped out onto the porch. One side led into her garage. Another side was decorated with wrought-iron railing. The third side led down two steps into her backyard. She hopped down the steps and went to stand by her uncle. He looked up, his face wreathed in a grin. "Lady's giving me a workout." "Or you're giving her one." Uncle Dom got to his feet and tossed the toy about ten feet away. Lady scampered to it, picked it up in her mouth, and shook it back and forth several times. Dom waved his hand at her. "You can play with it. Bring it in when you're ready." He walked to the porch with Caprice. Instead of going up the steps and sitting on the fifties-style, robin's-egg-blue glider, he sank down onto one of the steps. "I think I'm going to like this." "This, meaning pet sitting? You've made a decision?" "I have. One of your mom's friends, another teacher, hired me to pet-sit her Lab and two cats for a week starting tonight.
I'm going to house-sit too. That way I'll be out of your parents' hair. I know living there is a real imposition. My background check is being completed to become bonded, and I'm making inquiries into the insurance. If this pet-sitting experience goes well and she gives me recommendations, I'll be able to move into my own place. We're heading into vacation time, so more work will be coming. I've been doing bookwork on the side for a couple of small businesses, and I've stowed that money away. I'm sure your mom and dad will be happy about it." "But they'll miss you too. I know they will. You're going to stay in Kismet?" "Yes, I think I'd like to. When I was a kid, I used to complain like everyone else that there was nothing to do here. But now that I'm an adult, I can see the possibilities. There are plays at Hershey Theater, at the Fulton in Lancaster too. Baltimore and D.C. aren't that far away for concerts." "Don't forget the Giant Center in Hershey. Ace Richland's going to play there soon." Caprice had staged a house that Ace, a rock star legend, had purchased. Since then, they'd become friends. Like a whirlwind, Lady came bounding over to Uncle Dom, dropping the toy at his feet. Then she turned to Caprice, circling her legs, pushing against her, wagging her tail. "She certainly seems happy enough. And I can't believe the cats are sitting together at the window." "I brought a secret along. A woman at the farmers market in York makes catnip pouches. I brought one for each of them. They played with them for about a half hour, and they weren't that far apart then. So maybe catnip promotes peace." Caprice laughed out loud at that thought. She was about to ask her Uncle Dom if he'd like a glass of iced tea when the phone in her pocket played "Let It Be." She was surprised to see the caller was Nikki. Uncle Dom said, "Go ahead and take it. I made a pot of coffee. I'll have another cup." Caprice answered her phone. "Hey, Nik. 
Did something happen while you were wrapping up?" "No, I just had more time to think. The hairs on the back of my neck are tingling, and I think my blood pressure's up. I want to confront Drew." "About the white horseradish beef?" "About everything. He's not going to get away with this, Caprice. Using my recipe, stealing my clients. He needs to know I won't put up with it." "When are you going to do it?" "Well, that's the thing. I want to be reasonable and civil. Will you go with me?" "When?" "Now. I want to get this over with. Can you meet me at Drew's grandmother's house?" "How do you know he's there?" "When I was closing down my stand and carrying things to my van, I overheard him say he was meeting someone." "And you want to just barge right in?" "Yes, I do. I want to take him by surprise. I want to catch him off guard." "I need some time to thank Uncle Dom properly, put groceries away, and make sure Lady and Mirabelle and Sophia are happy to curl up for the evening. If I feed them before I leave, they should be ready to do that." "Maybe Uncle Dom can stay." "He's starting to pet-sit tonight for a friend of Mom's. I think he's really going to like the pet-sitting profession." "So how long do you need?" Nikki asked. "An hour and a half should do it. I'll meet you at Rowena Pierson's house at seven-thirty." Caprice disconnected and went inside. There she thanked her uncle. When he wouldn't accept payment for his stay with her pets, she insisted he take along slices of the chocolate-coffee loaf that she'd baked that morning. After he left, she stowed away her groceries, played with Lady for a while, and made sure the cats got affection too. Then she went to change clothes. If she was going to help Nikki confront Drew, she wanted to be comfortable doing it. She changed into shorts and a tie-dyed T-shirt. Her platform sandals were retro all the way. After she fed her furry crew and cleaned up a bit, her watch said seven-fifteen. Time to hit the road. 
She left Lady with a ball that dispensed kibble for treats, picked up her fringed purse, and headed for her yellow Camaro. The car had been in an accident recently, but thanks to Don Rodriguez's body shop, it was as good as new. It varoomed nicely as she started it up, backed out of her driveway, and headed for an older section of town. On her way, she drove through downtown Kismet with its sand-blasted brick buildings with white window frames and black shutters, heading for a neighborhood on the south side of town. She drove up the tree-lined street, knowing she liked the older neighborhoods better with their maples and elms, poplars and birches, myrtle and ivy. She spotted Nikki's blue car parked in front of a two-story brick home set back from the street about twenty feet. She pulled up behind Nikki's car and exited her Camaro, meeting her sister at the curb. "There's Drew's van," she said, nodding to the driveway. It was large and white with Drew's Portable Edibles logo painted on the side. "Do you know why Drew lives with his grandmother?" she asked Nikki. "When we were on speaking terms, he told me he moved in with her because she was having more trouble getting around and seeing properly." "That was nice of him." "If that was the real reason he wanted to live here," Nikki added. "From what he said, I think he spent some of his childhood here." As they walked up the cement block path, Caprice said, "So you two really got to know each other." Nikki hesitated. "Some, before I realized—" Caprice stopped her sister by grabbing her arm. "Did he only make a pass?" Nikki hesitated, then sighed. "Let's just say it was a very strong pass, and I had to knee him where it hurt to get him to back off." "Nikki! Why didn't you tell me?" "Because I took care of it. At least I thought I did. But I think this rivalry between us is all about that." They walked up the rest of the path in silence and mounted the three porch steps sandwiched between mature arborvitae.
On the porch, they stared at each other. The screen door was a wooden one. The door inside was open. They both stepped up to the door and rang the bell. It was shadowy inside. Nikki called, "Drew? Mrs. Pierson?" There wasn't an answer. "Is his grandmother hard of hearing?" Caprice inquired. "I don't know." "The door is open. Just step over the threshold and call inside." Since Nikki wanted to get this over with as much as Caprice, she opened the screen door and did what Caprice suggested. But a moment later, she gasped, let out a yelp, and backed out quickly. "What?" "Drew's on the floor. There's blood all around his head." Caprice didn't hesitate. She stepped inside and saw for herself what Nikki had seen. "Call nine-one-one," she told Nikki. "I'll see if he has a pulse." But from the blood pooling on the floor around his head and the flat look in his wide-open eyes, Caprice was fairly sure that Drew Pierson was dead. Chapter Four Caprice wrapped her arm around Nikki and felt her sister tremble. A patrol car had arrived, and so had the paramedics. "What happened in there?" Nikki asked Caprice, not for the first time. "I don't know, Nik," Caprice answered honestly. She tried to remember the details she'd absorbed by standing in the room for a few minutes. A Tiffany-style lampshade sat on a side table with the base nowhere in sight. A tall Tiffany-style floor lamp had obviously been knocked over and lay on the carpet near the sofa. Miraculously it hadn't broken. Whether they were true Tiffany lamps only an expert could determine. But if they were . . . Caprice remembered some auction figures on Tiffany lamps from her design courses. Besides the possible worth of the lamps, she had noticed another thing. There had been a slip of paper sticking out from the base of the floor lamp. She knew better than to handle anything that could be considered evidence, or else she would have examined it.
As it was, that piece of paper was part of the crime scene and she knew she shouldn't touch it. There had been one other important detail. The outside back door in the kitchen had stood open. She wished she could record all of this on her electronic tablet, but she'd left that at home. If she concentrated on those details, maybe she could forget about seeing Drew's body. Maybe she could forget about the blood. Yet she knew that might be impossible, because she'd witnessed crime scenes before. "When I called Vince, he said he'd be here right away," Nikki murmured. Caprice patted her back. "That was only a few minutes ago. Grant said the same thing." Caprice knew what was going to happen next, and they both would want a lawyer by their sides. Ten minutes later, she was proven right. Detectives Carstead and Jones drove up in the same sedan, an unmarked vehicle. "The patrol officer should have separated you," Jones snapped as he passed them and nodded to one of the officers to do just that. Caprice watched Carstead and Jones as they pulled on booties, filled in the police log, and went inside. Five minutes later, they were back out. Caprice was at the curb with a patrol officer at one end of the property, and Nikki was with another officer at the other end . . . outside the crime scene tape. Detective Carstead approached Caprice, and Jones went toward Nikki. Caprice wished it was the other way around. Nikki was shaky, and Caprice didn't want her to say something to the hard-core detective that could be misinterpreted. Carstead just arched his brow at Caprice as if asking why she was at another crime scene. But he didn't vocalize the question, at least not that one. Rather he inquired, "Are you ready to tell me what you saw?" "I'll tell you whatever I can," she assured him. "Did you touch anything inside?" "No. Just the door when I went in after Nikki." "So she went in first?" "She did." Just then a gray SUV pulled up in back of the patrol car and parked. 
Caprice told the detective, "I called Grant Weatherford." Again Carstead arched a brow. "Well, of course you did. You're getting to be an expert at this, aren't you?" She didn't answer. She knew better than to say too much. That had been drummed into her by her brother and Grant time and time again. Being helpful was one thing. Being too chatty was another. Grant made a beeline for Caprice, took her hand, and squeezed it. Carstead gave Grant a nod, noticing. "Can we go on?" he asked Caprice. "Sure. Ask away." "How did you know Drew Pierson?" "He was a chef and worked with Nikki for a while." Carstead made notes in his pocket-sized spiral-bound book. "For a while? Were they working together now?" "No." "Just no? Was there a reason?" Caprice thought carefully about what she wanted to say, and then decided to give him a little bit of information. After all, Nikki did have a connection to Drew. "For a while Nikki thought she and Drew might go into a partnership with her catering business. But then Drew decided to go out on his own, and Nikki decided she might want to partner with someone else." The detective made notes. "Were you friends with Pierson?" "No." He eyed her carefully. "When did you last see him?" "I saw him this afternoon." The detective said, "I thought you said you weren't friends." "We aren't . . . weren't. There was a wedding expo in Kismet, and he had a booth. So did Nikki." "And why did you come here tonight?" "I came along with Nikki to discuss business." "Your sister's business?" "Yes." "And you just came along for support?" This detective seemed to know her a little too well, but maybe that was because he'd done background checks on her, including looking into her family. After all, Bella and Joe had been involved in a murder investigation. So had Caprice's friend Roz. And then there had been Ace's situation . . . "I did come along to support her." Grant gave her arm a little squeeze, maybe because he didn't want her to say more. 
Carstead saw the signal and sighed. "You can go for now, but you're going to have to come down to the station tomorrow for more questions and to give your statement." Caprice noticed that Vince had arrived and was standing beside Nikki. She was glad he was there . . . glad he could protect her. "I want to stay and wait until Detective Jones is finished questioning Nikki." "I know if I tell you you can't, you're just going to give me an argument, and then your lawyer friend here is going to weigh in on it too. As long as you stay on the public side of the tape, you can wait." As Carstead moved away, a snazzy red sedan zoomed down the street, pulled up at the driveway, and parked right across it. Vince and Nikki came over to join Caprice and Grant. "How did it go?" Vince asked Caprice. "All right. I have to go down to the station tomorrow and give my statement." "So does Nikki. But I have a feeling Jones is going to put the screws to her. He's got a chip on his shoulder that I'd like to knock off. But I know better." Caprice could see that Grant was ready to take her home and get her away from yet another crime scene when two women emerged from the sporty red sedan and Detectives Carstead and Jones immediately went to them. "The woman with the cane is Drew's grandmother," Nikki told Caprice. "He had her photo on his phone. I saw it when she called him. I've seen her at church too." "And I know Kiki Hasselhoff, the woman with her," Caprice said. "I often stop in at her bookstore for the latest crime novel." She also knew Kiki from Chamber of Commerce meetings. Caprice could see Rowena Pierson was in tears now. She'd taken a handkerchief from her purse and was dabbing her eyes. Kiki had her arm around her friend's shoulders. "Detective Jones isn't going to let them inside," Grant said. "Maybe we should stay until after the detectives talk to them," Nikki offered. "Drew's grandmother might need something." 
"But you shouldn't be the person who offers to give it," Vince warned her. "Don't be silly, Vince," Nikki scolded. "Nana and Mom know Mrs. Pierson from church. Both would want us to help her if we can. Imagine how devastated she is." They all thought about that. "Wait until Carstead and Jones are finished talking to them," Grant counseled. "If Drew's grandmother and her friend don't leave, you can approach them then." Fifteen minutes later, Rowena Pierson looked wrung out and shaky as Detectives Carstead and Jones went inside the house to the crime scene once more. Nikki nudged Caprice. "Let's talk to her." Vince advised, "Maybe you shouldn't, Nikki. Just let Caprice go." Nikki looked defiant. "I didn't do anything wrong. I'm going to tell this woman that I'm sorry her grandson is dead. If the police don't like it, they can arrest me for being compassionate." With that, she started toward the two women. Caprice just gave her brother an I'll-watch-over-her look and followed her. Nikki approached Kiki and Rowena slowly. Caprice could see that Kiki recognized them both. When they stopped beside the two women, Kiki said to Caprice, "You must be the person who found Drew." "My sister did," Caprice responded. Nikki introduced herself to Rowena. "Mrs. Pierson, I'm Nikki De Luca. Drew and I worked together at one time. I'm so sorry for what's happened here." "You found him?" Rowena asked. "They won't let me see him. That detective asked if I had a photo, and I did in my wallet. But they won't let me go in." Caprice gently touched the older woman's arm. "You don't want to see Drew like that. You don't want to remember him that way." "He was like a son to me." Tears dripped from Rowena's eyes. "I raised both Drew and his sister, Jeanie, you know." Rowena went on, "Drew and Jeanie came to me when they were just little ones after their parents died in a small plane crash. Drew was ten and Jeanie was eight. Oh my gosh—Jeanie. I need to call her."
Kiki stayed Rowena's hand as the woman rummaged in her purse. "Give yourself a little time to absorb what's happened. The detective said he'd notify Jeanie." "Did he?" Rowena asked, looking a little lost. "I don't remember that." Caprice knew that devastating news was enough of a shock to make a person forget her name. She said, "We don't want to keep you. We just wanted to give you our condolences. Do you have someplace you can stay? I imagine the forensics unit will have the house tied up at least through tomorrow." After studying Caprice, Kiki remembered, "You've been through this before." Apparently Kiki remembered the articles about Caprice in the Kismet Crier when a reporter had interviewed her in conjunction with murders she'd solved. "A few times," Caprice responded. "Rowena's going to stay with me," Kiki revealed. "For as long as she needs to." She shook her head. "I just can't believe that two hours ago we were sitting at the American Music Theater enjoying a production." Rowena said, "I can't see too well. I have to have that cataract surgery I've been putting off. But I can hear just fine. The music was lovely. I expected to come home and hear how Drew's day had gone—" Kiki opened the passenger side of the vehicle. "The detective said we can go. Let me take you to my place. Then you can call Jeanie and maybe she'll come over for a while." After Caprice and Nikki gave their condolences again and said their good-byes, they returned to Grant's SUV. There was more hubbub around the house than before because the crime scene unit had arrived. Now the evidence gathering would begin in earnest. Vince was still standing at Grant's vehicle too. He asked, "How is she?" "She's devastated," Caprice answered. "It seems she was more like a mom to Drew than a grandmother." Vince nodded toward the house. "Carstead was on the porch watching you two. If your conversation had gone on too long, he might have broken it up. 
As it was, I think he realized you were just giving your condolences. I want to talk to you and Nikki about what you saw and heard. Let's go back to your place," he said to Caprice. "Since your car and Nikki's will be impounded, why don't you go with Vince and Nikki," Grant suggested. "I'll pick up Patches and meet you back at your house." They heard the front door of Rowena's house open and shut as techs went inside. Carstead was still on the front porch watching them. "I think he likes you," Grant muttered to Caprice. She was totally surprised by that remark. "Why do you think that?" "I've seen him question witnesses before. He's always respectful, not like Jones who can be sharp. But Brett was almost kind with you." "We've crossed paths several times before. He knows I wouldn't kill anyone." "Maybe," Grant said thoughtfully. In spite of the situation, Caprice's heart turned just a little bit lighter. She asked, "Do you mind if Detective Carstead likes me?" Grant took a long moment to answer, and then he said, "Yeah, I mind." Caprice liked the idea that Grant could be just a little bit jealous. But she didn't want him to worry that she had eyes for another man. So she rose on tiptoe and kissed him on the cheek. "I'll see you and Patches in a little while. I'll make lemonade." After Grant gave her a hug, she joined Nikki and Vince. The four of them were going to have a lot to talk about. * * * By the time Grant and Patches arrived at Caprice's house, she had served Nikki and Vince tall glasses of lemonade and had set some out for herself and Grant. With golden eyes, Mirabelle watched them from her perch on the lowest shelf of the cat tree. Sophia was stretched out on the fireplace hearth, just looking pretty. Lady ran to the door to greet Patches when Grant came in. It wasn't long until they were all sitting around the coffee table, sipping lemonade, and eating slices of chocolate-coffee loaf. Vince gobbled up half a slice and then shook his head. 
"I don't know how you always get involved." "I'm not involved," Caprice protested. "Nikki is. She's the one who knew Drew." "Not really," her sister disagreed with a little sigh. "I mean, I knew his work history. I knew where he'd studied and where he'd cooked before coming back home. But I never knew his grandmother raised him. I thought he just lived with her to help out. In fact, that was one of the things I admired about him." "Did he live there to help out?" Grant asked. "Or had he moved in with his grandmother again because his finances were on the downturn? That wouldn't be unusual if he lost a job one place and came back here to find another." "I wonder if he was going to stay there now that he was making it big," Vince offered. "Selling the barbecue sauce might have gotten him a nice nest egg and a licensing royalty," Grant interjected. "But he'd have to sustain his business and his reputation." He studied Caprice. "Are you going to poke around in this?" "Is Nikki the number one suspect?" she asked both her brother and Grant. Vince shrugged. "The detectives always look at anyone who found the body. I'm not as concerned about that as about the fact that she told them she and Drew worked together. They're going to be looking at her closely because of her association with Drew, and because they parted ways." "After that fight we had at the open house, they could also think I have a motive." Nikki sounded worried, and Caprice didn't blame her. "Drew as much as threatened me, and several people overheard him." "It all depends on what the York County Forensics Team finds," Vince reminded them. "Nikki, tell me exactly what you saw when you went inside." "I don't remember a thing," Nikki admitted. "I saw the blood and everything in my mind went blank." Vince turned to Caprice. "What do you remember?" Unfortunately, Caprice had done this type of exercise before and hated doing it. 
But because she hadn't remembered the evidence correctly on one of the murder scenes, she'd almost missed something important. This time she wouldn't let that happen. She took a sip of her lemonade, then a deep breath. Closing her eyes, she attempted to relive the moments when she'd walked inside behind Nikki. "Drew was on the floor, and my guess is from his injury and the way he had fallen, he'd been hit from behind. There were two Tiffany-style lamps. Well, not exactly two. Just the shade for one was sitting on the library table beside the sofa. The base was missing. Based on the shade size, it wasn't a huge lamp, so someone could have carried that base out with them if they'd used it to hit Drew. There wasn't anything else lying around him that could have been the weapon. A floor lamp was lying on the floor. There seemed to be a piece of paper sticking out from its base." "They aren't hollow, are they?" Vince asked. "It would be easy to stick something up in there around the cord. If those are true Tiffany lamps, they were designed from the 1890s to the 1930s and they could be worth hundreds of thousands of dollars. A Magnolia lamp that was auctioned in the eighties sold for more than five hundred thousand." Absorbing what Caprice had said, Grant decided, "I don't think the motive was robbery, or the lamps would have been gone." Vince suggested, "Most people might not realize what they're worth . . . or think they're reproductions." "Or," Nikki added, "maybe the killer knows their worth and was interrupted and will be back." "A murder like this in the victim's home means Drew knew whoever killed him," Grant deduced. "He let that person in. If he knew them, it had to be personal." "Look how he treated Nikki," Caprice pointed out. "If he was that bitter about her, he could be bitter about lots of other people. We don't know anything about his friends. Do you know anything about his sister?" she asked Nikki. "No, he never talked about her. 
Nana might know since she's acquainted with Rowena." Nikki took out her phone, jumped up from the sofa, and went into the kitchen. "I've never seen her this fidgety," Vince said. "She's really shaken up." "Finding a dead body will do that," Grant said wryly. "You two aren't going to stay out of this, are you?" Caprice gave him a weak smile. "Are we going to have an argument about it?" "There wouldn't be anything to argue about if you and Nikki just sit back and let the police do their job." "How can we just sit back when Nikki could be a suspect?" Grant held up both hands in surrender. "I give up. No argument. Just consider each step you take carefully, because Detective Jones will be watching you." Grant's absolutely serious warning shook her this time, a way it hadn't in the past. Maybe because he was right. Detective Jones did not like her interference. Would he take that out on Nikki? Vince leaned over to Caprice and asked in a low voice, "Did Nikki and Pierson really go at it at the open house, enough that more than a few people would notice?" "They raised their voices," Caprice acknowledged. "Truth be told, I think Drew was angrier than Nikki. She really tried to restrain herself. But then she let loose with resonating barbs too. I'd say five to ten guests were around, along with Nikki's servers, so there were a lot of witnesses, if that's what you're asking." "And each one will have a different take on the argument," Vince said. He was probably right. Except one fact was obvious. Nikki and Drew were competing for business. That could be motive for murder. Caprice hated to admit it, but it was true. It all boiled down to the detectives' perspective on the evidence. If they decided Nikki was their main suspect, they'd go after her. Grant tapped Caprice's hand. "Let's not worry before we have to." She liked the way he used that word "we." Nikki returned to the living room looking somber. "What did you find out? Does Nana know anything?" 
Caprice asked before the others could. "First of all, she told me to tell you to be careful." "She knows I will be." Both Grant and Vince arched their brows. Caprice insisted, "I try to be." She swiveled her attention back to her sister. "So what did Nana say?" "She admitted she doesn't know Rowena Pierson well. But she does know that Rowena's arthritis over the past few years has gotten worse. She doesn't climb steps if she can help it, and Drew helped her arrange her bedroom on the first floor in what was once a study." Caprice hadn't seen much of the house, just the living room and a glance into the kitchen. And that open back door. Nikki continued, "Nana said Rowena hasn't talked about Drew and Jeanie much, but here's something interesting. Jeanie was married when she was nineteen but divorced six months later. Her married name is Jeanie Boswell and she owns Posies." "The flower shop?" Grant asked. "She's the 'Jeanie' from Posies?" Caprice asked. "I know her. I've dealt with her now and then to get flowers for stagings." "Uh-oh. You know her. That sounds like trouble. What are you thinking?" Grant asked, sounding worried. "I don't actually know her. But I've spoken with her on the phone and purchased flowers from her. Because you and Vince and Nana want me to be careful, I'm not going to do anything, at least not yet. Nikki and I will see how it goes tomorrow morning when we give our statements." "I'll be there with you," Grant assured Caprice, just as Vince said the same thing to Nikki. "We really don't need both of you, do we?" Nikki asked. "You do," Grant answered firmly. "They're going to separate you. They'll want to make sure every detail of your stories lines up. They'll ask you the same questions over and over again. I'll be with Caprice, and Vince will be with you. We're going to make sure you stay calm and focused and don't blow your tops, even if they try to push your buttons. You can't just be innocent, you have to look innocent and act innocent." 
"It's not an act," Caprice protested. "I know it's not," Grant maintained. "But I also know that both you and Nikki have Italian tempers when riled. I don't want them popping out at the detectives. Just consider Vince and me your reasonable buddies to keep you on an even keel." Gazing up at Grant, Caprice knew she wanted more than buddyship with him, but she also knew what he meant. Tomorrow had to go smoothly for her sake, but especially for Nikki's sake. A public argument with threats and a rivalry were two good reasons to find evidence to pin this murder on her sister. Chapter Five Caprice sat across from Detective Jones in the interrogation room at the police station the next morning. No, she didn't expect a rubber hose and brass knuckles from the detective, but he was sharp, sometimes inconsiderate, and even harsh. She wondered where Detective Carstead had taken Nikki. His office, maybe? Grant sat next to her, and she was glad of that. He was a tall, comforting presence beside her. The police station had been refurbished several years ago. Workers gave it a face-lift by sandblasting the brick and repainting the cupola. But whoever had picked out the paint colors for the rooms had gone for drab. She would never have picked the interrogation room's ugly green shade for painting anything. "Go over it for me again, Miss De Luca," Detective Jones ordered. "You touched the door handle when you went in even though your sister had gone in first?" It would be so easy for Caprice to become impatient with Detective Jones. But Grant had warned her over and over again to keep her cool, to listen to Beatles music in her head if she had to in order to calm herself down. "Nikki went in first," she explained. "I held the screen door as she went inside." "I see. And your fingerprints won't be anywhere else in the room?" She thought about that all over again. "If you could capture them from skin, you'd find them near Drew's carotid artery and on his wrist. 
I placed my hand in front of his mouth to see if I could feel breath. But I did not touch anything else." Detective Jones had a hard, cleft jaw, a nose that looked as if it had been broken, and medium brown hair that appeared to have seen a breeze. "All right," he said, pushing a legal pad over to her along with a pen. "Write it all down—from when you got there to when you left. I'll have it typed up and you can come in and sign it." Wanting to leave as soon as she could, she took the pen and started writing. She'd been over it so many times she wrote quickly, remembering each detail, relating each fact of what she'd seen and what had happened. When she was finished, she pushed the pad over to the detective and then stood. "You know, instead of treating me like a criminal, you ought to go after who did this." "Caprice . . ." Grant laid a warning hand on her arm. "If you and your sister didn't have anything to hide, you wouldn't need lawyers here," Detective Jones snapped. She slipped away from Grant's arm. The detective's reasoning was as fake as his sly smile, and she wasn't going to let him bully her. "Your police department almost charged Roz Winslow, who was innocent, when her husband was murdered. Your department was also ready to charge my brother-in-law for a crime he didn't commit. Of course Nikki and I need lawyers here. Thank goodness we're fortunate to have a lawyer in the family and his partner as a close friend." When she glanced at Grant, she could see he was worried for her, but he didn't stop her. She just hoped he'd bail her out if Jones arrested her for mouthing off. "You De Lucas think just because you know Chief Powalski you can say and do anything," Jones returned stone-faced. "You can't. There are rules. There are regulations. And there is protocol." Suspecting Jones was still miffed because she'd called on Chief Powalski, her dad's friend, to help with the first murder she'd solved, she was silent for a moment. Grant stepped in.
"The De Lucas have never received special treatment because of Chief Powalski. I suspect he'd be insulted if you imply otherwise." Jones frowned and looked away as if maybe he'd gone a little too far. Just then, Detective Carstead appeared with Nikki outside the door with a glass window. Jones opened the door and said to Caprice, "You can go. Someone will call you when your statement's ready for signing. But you'd better take my advice. Don't stick your nose into this." If Caprice wanted to be childish, she would have tossed back, "Or what?" But she knew challenging the detective wasn't in her best interest or Nikki's. She didn't say anything. She just preceded Grant out the door. Once they were all standing in the parking lot by Vince's car, Grant said to her, "You didn't assure him you'd stay out of it." "That's because I'm not going to lie." Vince took out his remote to open his car doors. "You'd better be careful. You've solved their crimes for them in the past, and none of them is happy about that." "I called Detective Carstead the last time . . . after I figured out who the murderer was." "I'm not worried so much about Carstead," Vince said. "He didn't hammer Nikki to death, just drew her story out of her. After all, what was there to hide? Practically everybody in town knows about the rivalry between Nikki and Pierson. But Jones—if he can nail you with anything, Caprice, he will, including obstruction of justice." "Don't warn me again," she said with a sigh. "He can't arrest me for talking to friends, giving condolences, finding out more about Drew's barbecue sauce." Vince just shook his head. "Are you riding with me, Nikki?" Nikki gave Caprice a hug, and then climbed in the passenger side. "Call me," Nikki said to Caprice right before she shut her door. "I will," Caprice assured her. As they walked to Grant's SUV, he asked her, "Do you and Nikki talk every day?" "That depends on what's going on. But I talk to somebody every day—Bella, Mom, Nana, Nikki. 
Then the news gets around. You know how that is." Grant was silent. Finally he confessed, "No, actually I don't know how that is. I have one brother, Caprice, and we're not close. My parents aren't like yours. They're terrifically conservative, aren't prone to outbursts, and don't express emotion well." This was the first Grant had talked about his parents and brother with her. "You told me your parents live in Vermont." "They do. Our family home is in a rural area, and they don't have particularly close friends. Mom plays bingo in town with the women from her sewing group, and my dad plays poker about once a month." "My dad does too." "Your dad plays poker with men he's known all his life. My dad . . . He grew up on a farm where daily life was about sunrise and chores and more chores and sunset. Out of high school, he got a job at a canned foods company in a nearby town bigger than where he was from. When he married Mom, they bought a house near the family farm. Dad commuted every day until he retired. He didn't make friends at work, maybe because the factory was in the next town over. I don't know. Maybe because he doesn't know how to make friends. My parents and my brother are just very different from your folks." They were at Grant's SUV then, and he stopped talking. She wished he'd go on. They'd had conversations about lots of subjects, but nothing as personal as this. After they were in the vehicle, Grant started the ignition and the air conditioning, but he didn't make a move to drive out of the parking lot. Instead he turned toward Caprice. He took her hand in his and asked, "Am I a close friend?" She remembered that that's exactly what she had told Detective Jones—that Grant was her brother's partner and a close friend. "What would you have preferred I say?" she asked him, hoping he'd say that she should have told Jones she was his girlfriend. After a long pause, when Grant obviously thought about it, he gave her a half smile and a shrug. 
"Close friend will do it for now." She was disappointed by that, but she also knew Grant had to be ready for whatever came next. However, the conversation they'd just had gave her hope that he'd be ready for a lot more than close friendship soon. * * * Caprice was deep into work later that afternoon when she received a text from Nikki. Let's go to Bella's. She's home sewing costumes today. Caprice knew Nikki was agitated and restless, and when something was happening to one sister, all three sisters united. So she didn't question Nikki's text. She texted back, I'll be there in fifteen minutes. When Caprice arrived at Bella's, Lady by her side, there was frantic activity. Pots and pans were stacked in Bella's sink. Megan and Timmy were squabbling over toys that they were either taking out of a carton or putting back into it. "What's going on?" Caprice asked after hugs for the kids. Lady stayed with Megan and Timmy, knowing they were more likely to play with her. Bella threw up her hands. "We have a house showing in half an hour. I was working on costumes and I didn't expect that today. Megan and Timmy are supposed to be picking up toys. Thank goodness Benny is taking a nap." Caprice glanced around the living room and kitchen. She'd encouraged Bella to remove some of the furniture, and she'd restyled other pieces with bright throw pillows and even a fresh coat of paint. She'd changed the drapes too. Before, everything had been drab in rust and green, mostly because of Joe's taste in colors and furniture. But now the house was colorful, letting light in. However, it was a mess. "I'm not going to have time to run the sweeper," Bella wailed as she scrubbed the bottom of the stainless steel pot that she'd apparently made a batch of soup in. "It's more important we clean up the kitchen and put the toys away," Caprice advised her. "Usually you know a day ahead about a house showing." 
Caprice had hooked Bella and Joe up with a real estate agent who handled mostly midpriced dwellings, families searching for their first or second homes. Kayla Langtree was a top-notch agent and usually well organized. "Kayla called and said she had a couple that just stopped into the office. They're moving to the area and they're in a hurry to buy. They wanted to see properties today. I couldn't say no." As she set the pot on the drainer, Nikki took it and swiftly dried it. Bella rinsed tomato residue from another pot. "So what are you going to do to make sure Nikki doesn't get charged with Drew's murder? She told me all about what's going on. You know Mom and Dad and Nana are worried sick. They remember what Joe and I went through." Caprice and Nikki exchanged a glance, and Nikki explained, "Everyone is warning Caprice away from this one. Maybe we should just let the detectives take over." Bella stopped washing, took a towel in her hands and dried them, and plopped her fists on her hips. "Let the detectives take over? Since when? Come on, Caprice. You're not going to do that, are you?" Caprice almost had to smile at Bella's vehemence. Almost. This really wasn't a smiling matter. She could stick her feet into some deep doo-doo if she wasn't careful. "I'm thinking about our best strategy." "Well, don't think too long," Bella warned her, "or Nikki could end up in jail. From what I understand, her car was already parked at Rowena Pierson's when you arrived. And she doesn't have an alibi from the time she left the expo until the time you found Drew." Timmy ran into the kitchen, Lady right behind him. "Megan won't let me put her toys in the box. I can't do it if she keeps taking them out." At nine years old, Timmy tried to lord it over his sister, but his sister wouldn't let him. "How about if Caprice helps you and Megan put all the toys in the box? Would that help?" Bella asked. Timmy grinned up at Caprice. "Sure would." 
"Where are you going to go during the house showing?" Nikki asked. "I called Mom. We're going over there." "Do you want me to help you?" Caprice asked. "If you help me load them in the car, I'll be fine." Bella studied Caprice again. "So what are you going to do?" "I think Nikki and I are going to make a stop and see Drew's grandmother. She's probably still at Kiki's house and could use a care package and some comforting. What do you think, Nik?" "I think Nana's biscotti, a tin of tea, maybe some nice hand lotion might be appreciated. But don't you have anything pressing you have to get done right now with work?" "I'm caught up for the moment, though I've left Lady, Sophia, and Mirabelle alone a lot lately. I'll see if my neighbor Dulcina Mendez can watch Lady so we can spend whatever time we need with Rowena. She never minds short notice." Timmy pulled on Caprice's hand. "Come on, before Megan takes everything out of the box." Lady barked as if she agreed with that assessment. Caprice laughed and followed Timmy and Lady to the living room. She wished cleaning up the mess of murder was as easy as cleaning up toys. * * * Caprice was thankful she and Nikki had a passing acquaintance with Kiki Hasselhoff, Rowena's friend. Caprice loved books and enjoyed just being around them. She often went into the store to browse, and so did Nikki. Kiki lived in a two-story Colonial in an older section of Kismet. It was neat with its white siding, black shutters, and brick front facing on the first floor. The yard was pristinely landscaped with trimmed shrubs. Caprice would have added annuals to give the front dabs of color, but if you planted them they had to be tended and pulled out when fall came. Caprice moved the basket she and Nikki had put together from one hand to the other. Nana had baked a fresh batch of biscotti and they'd included a canister of those. 
After taking Lady to Dulcina's house, they'd stopped at Country Fields Shopping Center where they'd visited the specialty tea shop Tea For You. While Caprice bought tea, Nikki had gone to a bath and body shop, purchasing cucumber and melon hand lotion and body wash. They'd put all of it into a basket, and Caprice hoped it would help Rowena feel just a little better. Her grandson murdered. How must she be feeling? If Rowena had raised Drew, he'd be like a son. Kiki answered the door looking very solemn. "Thank you for coming," she said to Caprice and Nikki when she saw the basket of goodies. "Rowena hasn't received many condolences. With murder involved, even people she thought were her friends are staying away." "I'm sorry to hear that," Caprice said. "She needs her friends now more than ever. It's nice of you to let her stay here." Kiki waved her age-spotted hand. "Nonsense. That's what real friends do. I've offered her my downstairs bedroom. She's in my den right now watching an old movie, but I don't think she's paying much attention to it. Come on in." Kiki's house was a mixture of comfortable and stylish. The multi-cushioned sofa wore a fabric of bright flowery blooms, most of them hydrangeas in pink and blue. An oversized chair with an ottoman accompanied it. Glass-topped tables were sparkling bright and streak-free. Kiki, in her late sixties, obviously took good care of herself and her surroundings. She led them to a room adjacent to the living room. In this parlor, the color theme was sage green and gray. Two recliners in sage faced a flat-screen TV. The furniture was polished pine. Although the entertainment center housed the TV, all of its shelves were filled with books. Caprice caught sight of mysteries and romances, spy thrillers, and nonfiction titles too. "You have visitors," Kiki announced to her guest. Rowena began to lower her footrest, but Caprice stopped her. "Don't get up. We brought you a basket of goodies we thought you might enjoy. 
"Nana included some of her biscotti." Rowena smiled. "Everyone loves Celia's biscotti. They're so different from those hard cookies you buy at the market." Nana's biscotti were lemon-iced, soft cookies that went well with coffee or tea. Caprice tried to replicate them and did to a certain extent, but they never tasted just like Nana's. Kiki said, "I'll let you talk. Just call me if you need anything." After she left the room, Nikki sat in the other recliner and Caprice sat cross-legged on the floor near Rowena. "How are you doing?" she asked gently. "Not so well, I'm afraid. I want to get back into my house. I don't even have clothes. Kiki let me borrow some of hers." She motioned to the black slacks that were a little too long, and the green striped blouse that was a bit too big. "Jeanie was here for a while last night," she offered. "She's taking care of many of the arrangements. She's a go-getter, that one, though she and Drew were never really close." "You said they came to live with you after their parents died?" "They did," Rowena assured her. "There wasn't anyone else. My husband had passed away, and somehow we all muddled through. They were so lost for a while. And Drew?" Rowena shook her head. "I was really worried about him. As a teenager, he was a handful. Even when he received his inheritance—" Rowena stopped as if remembering Drew and the years they'd spent together was a little much right now. To keep the conversation going and to help Rowena, Nikki jumped in. "I suppose Drew used his inheritance to fund chef school." Rowena seemed to rally. "I was the trustee for Jeanie and Drew's inheritance until they were twenty-one. Then the money was split and they received the balance. Jeanie used hers to go to business school and to buy her flower shop. She's done quite well—except for an impulsive marriage after she graduated high school that was practically over before it started. After she bought Posies, she seemed to find her footing. 
At first, I thought Drew was going to waste all of his inheritance and there wasn't a thing I could do about it. He went through most of it quickly, buying an expensive car, taking vacations. He used the last of it to go to cooking school and he seemed to settle down after that. When he came back here, he was really kind to me. I had had a fall and broken my arm, so he came here to live to help out, and he stayed. I was so proud when he sold that barbecue sauce recipe. He even told me the secret ingredient," she said with a shaky smile. "Can you tell us?" Nikki asked kindly. "Sure. I don't know all the ingredients, but the secret one was the habanero sea salt. Just a touch. It gave the sweetness a kick." "Did you ever make barbecue sauce?" Caprice asked. Rowena fluttered her hand. "Just the traditional kind, with vinegar, oil, and sugar . . . and a bit of tomato." "Did Drew tell you about the expo he was cooking for?" "I just knew he'd be gone all day Sunday." "I was there too," Nikki said. "It was a wedding expo where prospective brides and grooms could study the kind of flowers they might want to use, or gowns, or food. Drew introduced a chocolate walnut cake. My nana said you used to make one of those. Did you give your recipe to Drew?" Rowena slowly shook her head. "No, I didn't give him any recipes." "Drew didn't have access to them?" Caprice asked, keeping her voice light. "No, he didn't. Years ago, a member of my canasta club tried to steal them. So I hid the more important ones where no one will find them. The chocolate walnut cake with the maple icing was one of those. Only Kiki knows where I've hidden them in case something happens to me." Caprice wondered if the hollow tube around the cord of the lamp was one of those hiding places. That lamp could be heavy, but not so heavy that Rowena couldn't tilt it on the floor and stuff a few papers around the cord. But she could see Rowena was tiring, and she didn't want to upset her by going into more of it now. 
Kiki must have overheard some of their conversation, because as she swept into the room with a tray holding a coffee carafe and mugs, she explained, "Eventually Drew would have inherited the recipes as well as Rowena's Tiffany lamps. They are Tiffany, by the way." Rowena's hands fluttered in her lap. "Those lamps have been in my family since the early 1900s. We always knew they were Tiffany because of the special glass and the Tiffany New York stamp on them with a number." "Now half of one of them is missing, according to what the police are telling Rowena," Kiki said. "We're hoping they can find it." "If you don't mind my asking," Nikki said, "if Drew was inheriting the lamps and recipes, what would Jeanie inherit?" "Jeanie was going to inherit my house, jewelry, and the rest of my belongings." In other words, now Jeanie would inherit the house and everything else too, Caprice thought. Was that motive enough for murder?

Chapter Six

Caprice knew that some of her clients owned valuables, but she never knew exactly how much they were worth. She asked, "Most people don't understand the value of lamps like yours. Did you ever have them appraised?" "They'd be worth stealing, maybe even killing over," Rowena said with a frown. "I told the police that. Isaac Hobbs from Older and Better consulted a New York City contact of his who's an expert authenticator. So I do know what they're worth," Rowena assured her. "It was after his appraisal that I set up a new will." "Did Drew and Jeanie know about the new will? Did they know what the lamps are worth?" "Yes, they knew about it. I'm not like some older folk who keep everything secret. They would inherit everything I have some day, so I wanted them to know what I was thinking. They didn't pressure me in the least. Drew always liked the lamps, and Jeanie, well, I guess I have to say, she's more interested in the bottom line." "So she'd only be interested in what your house and belongings are worth? 
Not the sentimental value of keeping everything?" "Exactly. And I understand that. Young folks are different these days. Neither she nor Drew had a happy time growing up here because of what happened to their parents. They had trouble bonding to me. Their school counselor told me they tried to stay detached because they didn't want to get hurt again if something happened to me. It made perfect sense. I had no illusions about either one of them. If I died, they'd sell off whatever I gave them and do whatever they wanted with the money. But I think Jeanie would do that quicker than Drew." Kiki was nodding her head as if she absolutely agreed. These two women apparently had no secrets. "I was around during those rough years," Kiki elaborated, pouring the coffee. "Their parents suddenly being taken from them was earth-shattering. Jeanie withdrew. Drew acted out. Sometimes I thought the only time he was really happy was when he was cooking with Rowena. Then he forgot about the fact that he hated the world, and he started getting closer to her in that way." "Did you see that?" Nikki asked Rowena. "Oh yes. He would always make such a mess in the kitchen. But I didn't scold him. Because when I watched him cook, I saw in his eyes a bit of that sparkle that he had when he was a boy." Caprice decided to take the conversation down another path. "When kids can't bond with adults, sometimes they bond with their peers instead. Did Jeanie and Drew have a group of friends? Anyone they could confide in?" Rowena thought about it. "Jeanie pretty much kept to herself. She spent a lot of time in her room reading. When the weather was nice, she'd plant flowers in the yard or sit on the back porch swing reading." "That one could get lost in books," Kiki agreed. "She devoured everything I brought her. Drew, on the other hand, couldn't be bothered with books. He wanted to be out and about doing something. And he had friends he'd do it with. 
I don't know if he confided in them, but he spent a lot of time with them." "When he was in high school, I hardly ever saw him," Rowena explained. "He had two best buddies—Larry Penya and Bronson Chronister. The three of them seemed to be like brothers." "It was odd, really," Kiki said. "The three of them were very different." "How so?" Caprice asked. "Are you familiar with Happy Camper Recreational Vehicle Center?" Caprice noticed that when Rowena picked up her mug of coffee, her hand shook a little. All of this talk about Drew and his past could be having an adverse effect. "I've heard of Happy Camper RV Center," Nikki responded. "Their sales center is on the east end of Kismet. Some of their campers look like mini houses. One of my servers has a pop-up tent camper she bought there." "Bronson Chronister's dad built up that business," Rowena told them. "He once had a little store downtown where he sold camping equipment. That developed into the enterprise existing today. Bronson runs it now. When he lost his dad a few years ago, he took it over with nary a glitch. And he's rich enough to have bought one of those fancy one-of-a-kind houses in Reservoir Heights. Drew was using Bronson's kitchen for his catering business." "Bronson isn't married?" Caprice asked. "No. He's one of those bachelors like you see on TV. He has everything he wants. He travels a good bit. Just hasn't settled down, I guess." "I thought maybe Drew was renting kitchen space somewhere," Nikki mused, looking thoughtful. "Oh, no. He didn't have to. When Drew decided to open his own business, Bronson was right there for him. The three of them were always like the Three Musketeers." "What does Larry Penya do?" Caprice asked. "That's another story," Rowena acknowledged. "He came from the wrong side of the tracks. His father left when he was a boy, and his mom always struggled to make ends meet. In high school, Larry had a job bagging at the local grocery store so he could help her out. 
Unfortunately he's still struggling. He was an electrician and worked for one of the contractors in town for years. But then with the economic downturn, he was let go. He opened a handyman business that Drew said was taking off, but I'm not sure Larry has the business sense to make it work. He's married, with a little boy who's around four, I think. But Drew had mentioned that Larry and his wife, Linda, separated. Such a shame for their child." When Rowena picked up her coffee mug again, the trembling in her hand was evident, and she set it down quickly. Then she ran her hand across her brow as if she might have a headache. Caprice caught Nikki's eye and nodded to the door. Rowena was still processing everything that had happened. She looked as if she hadn't had much sleep, and Caprice guessed that the insomnia might go on for a while. Murder and the grief and shock surrounding it could steal sleep as well as peace and happiness. "We don't want to take up any more of your time. You really should rest," Caprice suggested. "I don't think I got a wink of sleep last night," Rowena admitted. "Maybe a nap would help me cope with everything a little better. I have to speak with the funeral director tomorrow, but I don't know what I'm going to tell him. I don't know when the police are going to let me plan the funeral." "The detectives will notify you as soon as they can," Caprice assured her. "After an autopsy," Rowena murmured. Yes, the body would be released after the autopsy, though the cause of death seemed pretty obvious. But there was evidence that could be gathered from the body. Nikki rose now too and came to stand by Rowena. "We just wanted to tell you again how very sorry we are. Drew and I . . ." Nikki stumbled. "We weren't on the best of terms, but I just want you to know, I didn't wish him any harm." Rowena patted Nikki's hand. "My dear, I never suspected that you did. 
A few months ago when Drew was working with you, he seemed happier than I've seen him for a long time. He was in a foul mood when you told him you didn't want him for a partner. In fact, he was all grouchy and grumpy until this barbecue sauce deal came through. I just wish . . ." She stopped and shook her head. "I wish a lot of things," she said with a sigh. "I just can't imagine why someone would have done this to him." Caprice couldn't imagine why someone would do it either, but someone did have a motive. Could she figure out who that person was, and what kind of motive would drive them to murder? * * * Nikki had no sooner closed the door of Caprice's van than she asked, "Does anyone really know anyone else?" Caprice glanced at her. Although her hand was on the ignition, she didn't turn the key. "Do you think everyone has a secret life?" "No, but we skate on the surface of one another's lives. Do you know what I mean? I never suspected everything in Drew's background. Just imagine losing your parents at an early age and not being able to adjust." "Are you thinking more kindly about him?" "Not really. I just wonder if I'd known all this whether I would have treated him differently, maybe a little more gently." After a few moments' hesitation, Caprice suggested, "You mean you wouldn't have kneed him where it hurt when he assaulted you?" "He didn't—" "Think about it, Nikki. Think about what he did and how you reacted. Would you have kneed him if it wasn't assault?" "You want me to put it in plain terms and it's not that easy. I might have given him signals that I wanted him to come on to me." "And he didn't know when to stop." After a brief silence, Nikki admitted, "He didn't know when to stop." "That has nothing to do with knowing him or not knowing him." "Are you going to try to figure out who murdered him?" "Yes." "But we didn't even really like him," Nikki protested. "Liking Drew or not isn't part of this, Nik. Drew nudged close to you at the expo. 
I saw you push him away. What if your DNA or hair or something transferred onto him? He was wearing the same clothes he had on at the expo when he was killed." "Oh my gosh! I never thought of that. How could you notice something like that, with the blood and the smell—" "Because I've witnessed murder scenes before. Taking in details wasn't even a conscious choice. You have to protect yourself. We have to protect you, and there's only one way to do that. Talk to Vince about how to do it, but you've got to tell Detective Carstead about that encounter at the expo. He might find your DNA. Be up front about it, and in the meantime I'll look for motives." She just hoped she'd find that someone else had a motive other than Nikki. After Nikki pulled away in her work van, Caprice crossed the street and climbed the steps to Dulcina's house. Lady barked before she even rang the bell, and she smiled. Lady had good intuition. Dulcina opened the door with a wide smile. "She knew it was you. I think she knows the sound of your van. Her tail started wagging when she heard it. Can I interest you in some butter rum coffee? New flavor." "You can always interest me in coffee." Caprice followed Dulcina through her living room decorated in gray and blue to her pristine kitchen that was blue and white. Caprice liked the white counters, but if she was cooking on and around them, they probably wouldn't stay white. She'd admit she was a messy cook. "You still didn't get one of these brewers?" Dulcina asked as she placed a pod in the coffeemaker. "No, I didn't buy one yet, though I'm thinking about it. It would be nice to make one cup of coffee like that when I'm on the run. I could just keep the other coffeemaker in the closet for when family and friends come." Dulcina quickly brewed two cups, set the mugs on the table, and pulled sugar from the cupboard and milk from the refrigerator. 
After they were seated, Lady by their chairs chomping on a treat Dulcina had given her, Caprice's neighbor said, "I need to ask you something." "Sure. What is it?" "Since I've been dating Rod the past six months, I feel as if my life is . . . more fulfilling." "That's a good thing." "Yes, I suppose so. But my problem is . . . his girls. The older one especially. Janet really did a number on him, and he hasn't dated much. So Leslie and Vanna aren't used to having women in their dad's life. I felt their resentment from the first moment they met me, and I don't know what to do about it. Leslie is becoming almost belligerent, and I'm having a tough time getting to know her. I don't know what to do." After Caprice thought about Dulcina's situation for a moment, she asked, "Are his daughters into music?" "I see them wearing their earbuds a lot, so I suppose they might be." "I have an idea. It concerns Ace Richland. Do you think his music would be appropriate for them?" "I don't see why not. He does pop rock, right?" "Yep, his new stuff's in line with his old hits." Last fall, Caprice had staged an estate to sell in a Wild Kingdom theme. Ace Richland, an eighties pop star legend, had decided he needed a haven on the East Coast, not so far from his daughter. He'd bought the estate Caprice had staged and they'd become friends. They'd become even closer friends in March when his girlfriend had been murdered. Caprice had helped find the killer and gotten Ace off the hook. "He's doing his comeback tour. He started on the West Coast, but he's returning east in a couple of weeks for a concert at the Giant Center in Hershey. What if I can get you, Rod, and the girls tickets and VIP passes? Do you think they'd be impressed meeting somebody like Ace?" "That's a wonderful idea." Her neighbor rose from her chair and came over to hug her. "I'll be forever grateful. When do you think we'll know if you can get tickets?" Caprice took out her phone. "Let's see if I can find out. 
Ace might still be at his hotel." She pressed speed dial for the rock star's number. Ace, who was once Al Rizzo from Scranton, Pennsylvania, had a family background similar to hers, and they'd connected for that reason. He could have a short fuse sometimes, but he was a great guy. He answered on the second ring. "Hey, there," he said. "Do you need my help staging a house?" She laughed. "Only if you have some nautical ideas up your sleeve." Nautical Interlude was the theme on a new house-staging contract. He chuckled. "Fresh out. Though I should have you stop by my place to see if the landscaper is doing a good job on the fire pit I'm having built out back." "I might have to enjoy s'mores at your fire pit sometime." "Any time. What can I do for you?" "Do you remember my talking about my neighbor Dulcina who pet-sits for Lady sometimes?" "Sure." "She'd like to impress the daughters of a man she's dating. They're having a little problem . . . communicating, finding common ground." "I certainly understand that." Ace and his daughter, Trista, had experienced problems after his divorce. But since he'd moved to Pennsylvania, they were bonding once again now that they were seeing each other more often. "Do you think I could get them tickets for the concert and maybe VIP passes at Hershey?" "After the way you helped me dodge a very big bullet, I'll do anything for you. This favor is easy. How many tickets do you need?" "Four for Dulcina." "How about front-row seats for your friend and your family too. Tell Vince to bring Roz and you can bring Grant. How does that sound?" "That sounds stupendous. Are you sure you can get that many?" "I'm Ace Richland. I'll make sure the promoter puts them aside. Marsha and Trista will be there too. You can all visit me before the concert. I'll make sure everybody has VIP passes." "Ace, you're wonderful!" "My mom tells me that a lot." She laughed. "Believe her." 
"I'll have my agent overnight all of it to you so you'll have it in plenty of time. Sound good?" "Sounds perfect. Do you have a show tonight?" "I'm headed over to the theater now for a sound check." "You'll do great." Ace had been nervous about going out on tour again, afraid he still didn't have "it." But the first few venues had proven he certainly did. "From your mouth to the audience's ears. Give Lady an ear rub for me." "I will. Thank you." "Front-row seats and VIP passes. Rod and his daughters will get first-class treatment," Caprice told Dulcina after she ended the call. Dulcina gave Caprice another hug. "If that doesn't work, I don't know what will." Caprice knew teens and preteens weren't always easy to impress. But kids liked excitement. Caprice hoped that they'd have a great evening and see Dulcina in a different light. They might respect her for knowing Ace and sharing his spotlight. Or maybe they'd finally see her as a kind woman who wanted to get to know them better. * * * The following morning, Caprice needed to visit Isaac Hobbs's shop, Older and Better, to select rustic pieces to make her Nautical Interlude theme work. The house was a bit unusual. She wouldn't be planning a catered open house right away for this one. The owners decided to forgo that expense and just let the uniqueness of the property stand on its own. Older and Better was located on the outskirts of Kismet. When Caprice entered the store, she felt as if she were in a time capsule, stepping back decades earlier. . . . Until she heard static and chatter from a police scanner that Isaac kept under the counter. As she approached him, she saw him stoop to turn down the sound. Lady was always good in the shop, and she made a beeline for Isaac. A fast-food biscuit concoction layered with eggs, bacon, and cheese sat on the counter in front of him. Even though Lady had just eaten her own breakfast, the aroma drew her. "Uh-oh," Isaac said. "I'm going to have to share my biscuit." 
"Lady just had breakfast. Don't give her more than a bite." Isaac leaned down to Lady. "I'll give you two bites." "While you two are conspiring, I'm going to look around." "What house this time?" "It's near Reservoir Heights but not in it. It's the house that looks as if it has a lighthouse on one side. Nautical Interlude is the theme." "How far into it are you?" "The owners moved back to Maine last month. The husband descends from a family of lobstermen and wanted to settle back there. So I have an empty house and rental furniture to work with. I need primitive pieces." She gravitated toward a highboy with distressed wood, probably walnut or chestnut. "I'd like to use this, but I don't want to buy it." "It's probably cheaper to buy it than rent it for a couple of months or longer. Then you'll have it." "My storage sheds are full. I was thinking about renting another one." "You have back-to-back clients, and you signed a contract to decorate those model homes again. Go for it. You can always empty out the storage units and drop them." "You just want me to buy more stuff here and put it there." Isaac laughed and wiped biscuit crumbs from the corner of his lip with a napkin. "You know me too well." He waved at the side of the shop by a window. "Check out those lamps and shelves." Caprice went to the wall in question and spotted a primitive shelf where three old hurricane lamps sat. They would be perfect. "Okay, sold on the shelf and the hurricane lamps." As she wandered about, Isaac said, "I heard about the Pierson murder. The scanner was all abuzz that night. He was the one who tried out for Nikki's partner, wasn't he?" "Yes, he was. And Nikki and I found the body." Isaac just stared at her. Finally, he shook his head. "Caprice, I don't want to say you have a black cloud hanging over your head, but bodies seem to crop up wherever you go." "Don't exaggerate. At least this didn't concern one of my house stagings." That had happened four times before! 
"So are you trying to solve this one too?" "I am, for Nikki's sake. She and Drew were at odds, and the police could think she has a motive . . . and no alibi." "Would they be right?" "Nikki didn't do it, if that's what you mean." "I know she wouldn't do it. She's your sister. All of you De Lucas have honesty and integrity in your blood. But can the police pin a motive on her?" "Yes, they can. Drew was taking clients away from her, and her business was suffering. He even threatened in public to kill her business." Isaac shook his head again. "How can I help?" "Tiffany lamps are involved. To be specific, Rowena Pierson's Tiffany lamps. She said you appraised them. Do you remember what they're worth?" Isaac finished his biscuit, crumpled up the paper, tossed it into the fast-food bag, and stashed it in a trash can under the counter. "A friend from New York who's been in the auction business over forty years actually came down here to appraise them. The floral lamps bring the highest prices, and that's what Rowena has. My memory isn't what it used to be, but I do recall the two lamps together were worth well over a half-million dollars. I probably still have the info on them. That appraisal's probably in a box in the storage shed. I can get back to you after I go through the papers." Caprice walked up to Isaac's counter and gave him a wide smile. "I'll take the highboy." "Tit for tat?" he asked. "Not exactly, but we'll be doing each other a favor." "That's what friends are for." Caprice thought about friendships, hers and Roz's, hers and her sisters', Kiki and Rowena's, Drew's and Larry Penya's and Bronson Chronister's. Friends helped make the world go round. Who knew anyone better than longtime friends? Maybe it was time to find out if Drew's friends knew how much those Tiffany lamps were worth. She'd think about that as she swam laps at Shape Up. Maybe a little exercise would clear her thinking and help her solve a murder. 
Chapter Seven If Caprice could take Lady with her, she would. But of course, she couldn't. Not to Shape Up, Kismet's popular gym. Caprice didn't like exercise. Oh, she walked Lady. But as far as machines and jogging, she didn't particularly like to sweat. That's why swimming suited her. She knew the best time to hit the pool was when the fitness center wasn't too busy. There always seemed to be a lull between eleven and noon. Today, as soon as she walked in the locker room, she ran into Marianne Brisbane, who probably had the same idea she did. Marianne was a reporter for the Kismet Crier and had helped Caprice on a couple of cases. Now Marianne greeted her with, "I like getting wet better in the summer than in the winter, don't you?" Caprice laughed. "At least I don't have to dry my hair in the summer. Are we going to race?" "Maybe for the first five laps, but then I just need to work out all the muscles that cramped up sitting at my desk. Actually, I was going to give you a call today." "You were?" She wasn't exactly sure what was on Marianne's mind, but she could guess. Marianne had sources and contacts who kept her up-to-date on the most recent developments in Kismet. Murder was a recent development. "Video footage crossed my desk yesterday," Marianne said. "What video footage?" "I have a contact at the police station who phoned me that someone had put video on their social media page about Drew Pierson's murder." "Witnesses?" Caprice's heart started thumping. "No, not in the way you mean. You and your sister Nikki were on the video." "You're kidding." "Nope. It was cell phone footage captured by a bystander. When Detective Jones found out about it, he made the guy take it down. But you and Nikki were standing outside the house. The footage showed a patrolman leading you one way and Nikki another. Split up for questioning, I would guess. Did you find the body?" "Nikki did, but I was right behind her. 
I went in to see if there was anything we could do, and she called nine-one-one. So, this video is no longer spreading around the fact that we were there?" "Nope. The police handled it. I went over it with a fine-toothed comb but couldn't find anything important. It didn't start until the police were already on the scene. Are you going to try to investigate this one?" "I might have to. The detectives will be looking at Nikki, and I want her in the clear." "If I had a sister, I'd want her in the clear too. If there's anything I can do, let me know." Taking a stab in the dark, Caprice asked, "Do you know Jeanie Boswell, Drew's sister?" "Can't say I do." "How about Larry Penya or Bronson Chronister? They were friends of Drew's." "Bronson Chronister. I've been hearing that name batted about lately. He's become influential in Kismet. He's good-looking, has family money and business sense. I think he'll be the next Chamber of Commerce president." Caprice turned to the lockers, opened one with a key she'd picked up at the desk, and plopped her duffel bag on the bench. "Drew's grandmother is the only one who's given me any information. I don't want to push and prod her right now. She has enough to deal with. But I need to learn more about Drew's background, even his younger years." "I graduated a year before your sister Nikki. I think Drew graduated the year after you did, didn't he?" "I really hadn't thought much about that," Caprice answered. "But that would be easy to find out. The library has old yearbooks." Marianne closed her locker door. "There's somebody working out in the gym you might want to talk to if you want to know about Drew's teenage years." "Who?" Caprice really hadn't paid any attention to the members who were working out when she'd entered Shape Up. "Louis Fairchild was on the treadmill when I came in. He was the shop teacher when we were in high school, wasn't he?" Caprice thought about it. 
She remembered the shop teacher with his red hair, freckles, and friendly green eyes. He'd been well liked. She hadn't crossed his path in years. "He left teaching, didn't he?" she asked Marianne. "He did. Rumor had it he wanted to make more money doing something else. He crafted the most beautiful furniture. I think he opened a store for a while. But he ended up as an insurance salesman. I don't know if learning what Drew was like in his classes would help, but he might be a good source." Caprice glanced toward the door that led to the pool entrance, then back to the door that led out to the gym. "I'll catch up with you in a few minutes. I'm going to see if I can talk to him, if he's still here." Louis Fairchild was in the gym area. His red hair was almost all gray now, but he still had freckles. He'd beefed up a bit since she'd known him. That's what working out would do. He was wearing black sweatpants and a white T-shirt. His jowls were a bit more saggy than she remembered, and he didn't seem quite as tall. But she knew that was an illusion. She'd been a kid when she'd walked the halls of the high school. Back then, adults had just seemed taller, she supposed. When she approached him, he held up a finger. "One more minute, then you can have it." Sweat beaded his brow, and he had a towel slung around his neck. She didn't try to explain she didn't want the machine. She just waited. When he completed his time, he turned down the treadmill, took a few deep breaths as it slowed, then stepped off the machine. "It's all yours." "Do you remember me?" she asked, knowing he probably wouldn't. Fifteen years was a long time. He studied her for a few moments, from her straight brown hair to her jeweled flip-flops. "Your picture was in the paper." Then recognition dawned on his face and he snapped his fingers. "Caprice De Luca, isn't it?" "Right. I went to Kismet High when you taught there." He smiled at her. "That was a lifetime ago." 
"I was wondering if I could talk to you for a few minutes about Drew Pierson." A somber look stole over his face. "Drew. I can't believe what happened to him." "I know. He wasn't in my class, but my sister knew him." "Then or now?" Louis Fairchild asked, curious. "Now." Caprice motioned to a quiet corner of the gym. "Can we go over there?" "Sure, but I don't quite understand why you want to talk to me." "Nikki and I found Drew." He frowned. "I'm so sorry. That must have been a terrible experience." "It was. And what makes it worse is that the police are questioning my sister. She and Drew had rival businesses." "What kind of businesses?" Fairchild asked. "They're both chefs. Drew opened a catering company in competition with Nikki's. So I'm guessing she's on the detectives' persons-of-interest list." "I'm sorry to hear that." "I'm looking into Drew's background. I know he was still friends with his high school buddies, Larry Penya and Bronson Chronister." "Those three were fast friends, and they could be hellions." "Drew's grandmother hinted at as much. Can you tell me about them?" "There's not much to say, really. They were your typical guys who didn't want to be in school. Anything and everything was more interesting. Shop interested them somewhat because I kept them busy, working with their hands." "Did they get into trouble at school?" "They were ordered to detention now and then, never suspension or expelled. They got caught drag racing a couple of times, but they weren't charged." "Why not?" Fairchild looked as if he shouldn't say, but then he shrugged. "It's really no secret. Bronson's dad knew the police chief back then. That's the way it was before Chief Powalski took over as chief of police. Money talked. One of the other teachers claimed they cheated on tests, helped each other out somehow, but no one could ever prove it. They weren't just drinking buddies. You know, beer out on top of Lookout Point on weekends. They were as thick as thieves." 
She realized that was just a saying, but she knew it was true. Until thieves turned on each other. Were Bronson and Larry really Drew's good friends now? Or had one of them turned on him for some reason? Fairchild glanced at the weight stations, and she knew he wanted to continue with his workout. She should get to hers. Marianne would be about ten laps ahead of her by now. "Thank you for talking to me." "No problem," he said with a wry smile. "That article I read about you—it said you rescue stray animals." "They seem to find me." "It's good work. My dog Nanook is a constant companion, a best buddy." "What kind is he?" "Shepherd-husky mix. I rescued him from a shelter. Good luck with looking into Drew's background. I hope you and the police can figure it out." She hoped they could too. Wouldn't it be great if they could work together for a change? But she knew that wouldn't happen. She was a civilian. Carstead and Jones wouldn't let her near the info they collected. But if Grant kept his ear to the ground, maybe he could find out what was going on. At least, whether there was a hint that Nikki could be charged. Caprice said good-bye to Louis Fairchild and headed for the pool. Twenty laps would clear her head enough so she'd know what step to take next. * * * After an early supper, Caprice was enjoying a cup of coffee and checking her list for the Nautical Interlude house staging with her pets nearby when Bella called her. "I know Jeanie Boswell." "What do you mean, you know Jeanie?" "She prepared the flower arrangements for the school's Christmas pageant last year. I worked with her, positioning them on the stage, hanging garlands, and setting up fake candles. I think we should stop in her flower shop and have a chat with her. Posies is open until eight, and Joe said he'll watch the kids for an hour." It was unusual for Bella to get involved in one of Caprice's murder investigations, but they were all involved now because of Nikki. "I want to go see her. 
But she might not even be at the shop," Caprice mused aloud, thinking about the grief a sister would feel. "Unless she's working at Posies to keep busy. I was just trying to figure out an angle for stopping in, and I didn't want to tell Nikki, because I didn't want her to go along." "Exactly," Bella agreed. "Nikki needs to stay out of this. The police will just look on whatever she does as suspect. I know because of what Joe went through." When Joe was suspected of murder, his family and his life were in turmoil. Thank goodness he and Bella had gotten back on track. It had taken a lot of hard work and counseling with Father Gregory, but they were doing it. "Does Joe want you involved in this?" "Joe understands. We can stop in at Posies and tell Jeanie we need a bouquet of flowers for Nana. It's true. Nana would like a bouquet of flowers." "Do you want me to pick you up?" Caprice asked her. "I have the Camaro back." The police had released it. "You know I think riding in that is as rad as Timmy does. Sure, I'll save on gas. When can you be here?" "I'll let Lady out and be there in twenty." Lady wanted to go along, of course, but tonight Caprice thought it was better if she stayed home. She patted her on the head and ruffled her ears. "I'll leave treats in your kibble ball. You can entertain Mirabelle and Sophia." Lady cocked her head and stared at Caprice with those huge brown eyes. Then she gave a little resigned "ruff" and went off to find Sophia and Mirabelle, who were taking their evening nap and were about to be bothered. Caprice's retro fashion sense seemed to irk Bella, but she didn't let that bother her. Tonight she chose a sixties-style shift with vertical stripes in lime and fuchsia. She added white ballet flats, a white vinyl retro purse and was ready to go. After Bella slid into the passenger side of the Camaro, she gave Caprice's outfit a once-over and shook her head. "You're an escapee from the past. Someday you'll learn how to dress up-to-date." 
"I don't want to learn. I have a whole history of fashion to choose from. Isn't that more fun?" Bella rolled her eyes. "I hate to think what you're going to wear to Ace's concert. Leather and rivets?" "Maybe," Caprice said with a laugh. "Grant might like that. I haven't talked to him about it yet. I left a message, but we've been playing phone tag." Bella couldn't help but break into a smile. "Yeah, he might like leather and rivets. My next-door neighbor's going to babysit, so Joe and I are all set." "I'll have to use a pet sitter to check on Lady and the felines." "Have you heard whether Uncle Dom has started pet sitting yet?" "I don't know if his whole bonding and insurance process has gone through, but he started Sunday night for a friend of Mom's. He'll be at the concert too, or I'd ask him. I hope pet sitting works for him." "I'm sure Mom and Dad hope that too," Bella said wryly. "Any guest who stays as long as he has must cramp their style." "Their style?" "You know, running around the house in a nightie, she and Dad going on a date night once a week and coming home to just watch a movie together. That kind of thing." Caprice remembered when her friend Roz had stayed with her during her husband's murder investigation. Caprice had enjoyed having her there. But that was different. In the summer, Kismet drew tourists from Gettysburg, Harrisburg, and Lancaster. They wandered in and out of the shops and helped the local economy. In spite of the increased traffic, however, Caprice found a parking space directly in front of Posies. Twinkle lights surrounded the windows on both sides of the flower shop's door. A summery display of silk flowers was arranged attractively in one window, and hanging baskets were displayed in the other. Inside the store, refrigerated cases held fresh arrangements and vases of roses, tulips, and lilies. The rest of the store was dotted with glass shelves displaying gifts and silk flower arrangements. 
Posies sold everything to do with flowers, as well as the trinkets and baubles to decorate them. One corner housed the balloon station, and several Mylar samples with printed sayings from Get Well to Congratulations to Happy Birthday bobbed near the ceiling. "There she is," Bella said, elbowing Caprice. A woman around their age sat at the counter, studying the computer monitor before her. Caprice could see photos of flowers, and she guessed the page pointed to a website for ordering. When they'd opened the door, a buzzer had sounded. At their footsteps, Jeanie Boswell looked up. She wore her brown hair pulled back into a low ponytail. She had a round face and wide-set eyes and didn't resemble Drew at all. When she stood, she pursed her thin lips. She was wearing blue jeans and a T-shirt emblazoned with POSIES. Her scowl almost made her look ferocious. "Your sister did it, didn't she?" Caprice was totally taken aback. Glancing at Bella, she saw her sister's face was reddening, and Caprice knew that happened when Bella got angry. "Why would you say such a thing?" Bella shot at Jeanie. Bella was always one to give as good as she got. She wasn't particularly a peacemaker. Caprice, on the other hand, tried to throw a wet blanket over conflict. Now she jumped in. "Jeanie, we're sorry about Drew. So sorry. I can't imagine what it would be like to lose a brother." At that Jeanie backed up a step, but her face didn't show any other expression. She was silent as she crossed her arms over her chest. Ignoring Caprice's condolences, she said, "I call it as I see it. Your sister had the most to gain from Drew being taken out of the picture. With him gone, she doesn't have any competition." After another quick look at Bella, who appeared ready to pick up one of the flower bouquets and toss it at Jeanie, Caprice decided a little bit of fire of her own might not hurt. "Don't you have something to gain with Drew dead? You'll be your grandmother's only heir." 
Now Jeanie's face pinkened. She blurted out, "I would never—" Caprice held up her hand as if to try to stop the whole interchange. "Let's start over," she suggested. "We didn't come here to accuse you of anything. I'm trying to figure out what happened to Drew." "Someone bashed his skull in," Jeanie muttered. "Nikki and I saw that firsthand. We're trying to figure out who might have had a grudge against him, or something worse. Can you tell us who he hung out with the most?" Standing and pushing her stool under the counter, Jeanie thought about it. "Drew knew a lot of people, but his best buddies were Larry Penya and Bronson Chronister. Bronson owns that Happy Camper Recreational Vehicle Center." It seemed everyone close to Drew knew about Larry and Bronson. "Your grandmother told us Drew was cooking and catering out of Bronson's kitchen. Do you know anything about that?" "You should see Bronson's house," Jeanie said as if she envied the man. "Drew took me over there once. Bronson's got a state-of-the-art refrigerator. You know. The walk-in kind?" Caprice did know, because Nikki had one. Jeanie went on, "His kitchen is all that stainless steel and black granite, three ovens, with an island in the middle. It was perfect for Drew to work out of. And Bronson isn't there all that much. He's either working or traveling." "That sounds like a friend helping out a friend. He didn't charge Drew rent?" Caprice asked. Jeanie shook her head. "No, those guys are tight . . . or were tight. They helped each other whenever they needed it." Again Jeanie sounded wistful, as if she wished she had friends like that. "Did he hang out with anyone else?" Bella asked. "There was another chef he once worked with and toured restaurants with. You know, if a new place opened up, they'd go and try it. His name is Mario Ruiz." The name sounded familiar to Caprice, but she wasn't sure where she'd heard it. "They worked together at a high-class hotel in D.C.," Jeanie continued. 
"But when the hotel cut staff, both Drew and Mario came back to Kismet. Mario works at a downtown York restaurant now, a little expensive bistro that I can't afford. He and Drew catch up when they can." "I heard a rumor that Drew got into trouble in his teens," Caprice prompted. "So you know about the drag racing," Jeanie commented. Playing along, Bella said, "Just a little. Drag racing is serious trouble. You know I have a son. If he even thought about doing that, I'd lock him in his room." Jeanie gave a wry laugh. "There was no locking Drew up anywhere. He was stubborn and wild. Just ask any of his teachers. But then he seemed to get some sense when he went to chef school. He was different when he came back. I couldn't believe it when he moved in with Gram after he left D.C." "You couldn't believe Drew would do that, or you couldn't believe your grandmother would want him to do that?" "I'd never seen that side of Drew before," Jeanie confessed. "Gram had broken her arm, was starting to have trouble seeing and getting around. So he said he'd help her out instead of getting a place of his own. He cooked her meals, bought groceries, drove her to doctors' appointments when he could. I think he was trying hard to do what was right because it didn't come naturally. Maybe he felt he wasn't grateful enough for all those years she took care of us. On the other hand, he didn't have to pay room and board, and he could save whatever he made. I think in the back of his mind, he nursed the idea that he wanted to open up a restaurant someday." That was new information. Had Drew changed his mind about that? Maybe he decided to go in a different direction after the barbecue sauce recipe sale? "You've told us about Drew's friends. Do you know if he had any enemies?" That was an important question in any investigation, Caprice knew. Jeanie had to think about that. "I don't know of anybody specifically. But Drew could rub people the wrong way without half trying. 
I don't know anything about the staff he hired to help him cater." The buzzer on the door sounded, and a couple walked inside. They migrated to the refrigerated cases. "Be with you in a minute," Jeanie called to them. Then she asked Caprice, "Are we done?" "For now," Caprice said gently. "We really are sorry about Drew." "Thank you," Jeanie mumbled. "We'd like to buy one of those bouquets of sweetheart roses in the case," Bella told her. "It's for Nana. I think the yellow one would be great." Jeanie said, "I'll wrap it up for you." Caprice was done asking questions for now. She really had no other choice. Jeanie had given her information to explore, even if she didn't know about specific enemies Drew might have had. Caprice remembered how nasty he'd been with Nikki. Anyone who could be that nasty had to have enemies. She just needed to find out who they were. Chapter Eight "Your uncle Dom isn't here," Nana announced, as she arranged the sweetheart roses in a crystal vase. Caprice exchanged a look with Bella and her mom. Their mom had joined them at Nana's for a glass of iced tea and girl talk. "That's just an opening gambit so you ask where he is." Fran's smile for Nana was affectionate. Caprice knew her mom had come to look on Nana as the mother she'd lost. Valentine jumped up on the counter to explore the flowers. "Oh, no, you don't," Caprice said, scooping her up and setting her back on the floor. "I have a feeling you're going to have to put that arrangement someplace she can't get to it." "That will probably be in the pantry closet," Nana teased. "I didn't think of that when we bought them," Bella said. "I can keep them in our living room," Fran suggested. "You can still enjoy them there, but Valentine won't be tempted." Nana nodded. "Good idea." Caprice said, "I don't want to steal your thunder, but I know Uncle Dom is pet sitting. I think it's terrific." 
Her mother added, "Roberta and her husband had vacation plans and airline tickets when their pet sitter cancelled. When she mentioned it to me, I told her about your uncle." Caprice was about to say more, how her uncle was suited for the profession, when her phone played "Let It Be." Automatically, she took it from her pocket and glanced at the screen. "It's Grant. We've been playing phone tag. Mind if I take this?" Nana gave her a sly look. "It doesn't matter if we mind, does it?" She waved toward her small bedroom. "Why don't you go in there for some privacy." "I won't be long," she assured them. To Bella she said, "I know you have to get back home." "When Joe takes care of all three at once, he appreciates me more when I get home." Caprice had to smile as she headed for Nana's bedroom, suspecting Bella was right. At one time, Joe had been a very macho and almost removed husband. He'd thought his job was to earn money and Bella's was to take care of the kids and cook. But Bella's third pregnancy had caused a crisis in their marriage. Now they were more appreciative of each other and worked as partners. It was good to see. Valentine scampered after Caprice as she headed toward the bedroom. When Caprice sat on the mauve-and-lilac quilted spread, the kitten jumped up beside her and rubbed against her arm. She petted her soft fur as she answered Grant's call. "Hi, there. I got your message that you were tied up in court all day." "I'm sorry we couldn't connect last night either. My client meetings went late." That's what one of Grant's messages had told her. She'd wondered about it, though, because late didn't seem to matter with them. They'd talked at midnight some nights. She just wanted to share her excitement about Ace's concert tickets and VIP passes that would be coming by overnight courier tomorrow. "You sound tired," she noted. There was a long pause, and Caprice didn't like the vibrations she was getting. 
She scooped Valentine onto her lap and rubbed the kitten under the chin. Valentine purred. "About the concert, Caprice," Grant said. "I can't go. I have an appointment that day . . . that night." That was a funny way to put it. "Can't your appointment be changed?" "No, it can't. I was going to tell you about it as soon as we had a few quiet minutes." She kept petting Valentine as wariness stole over her. "Why do we need a few quiet minutes?" Again he paused as if this was something he didn't want to tell her. Her heart skipped a beat, and anxiety stole into her stomach. "Naomi is coming to town. She'll be here for about a week to ten days, staying at the Purple Iris. I'm going to have dinner with her that night." Rarely was Caprice speechless, but she was now. Grant's ex-wife had moved to Oklahoma after their divorce. Why was she coming here? "I didn't really want to talk to you about this over the phone. How about we get together tomorrow evening?" Caprice heard Grant's dog, Patches, barking in the background. Grant said, "Just a minute, boy, and I'll get you something to eat." Then he explained, "He's been with my neighbor all day. Simon does a great job with him, but he missed me. I need to feed him and settle him for the night." Was that really what Grant needed to do? Or was he avoiding the conversation they were going to have? And just what would that conversation result in? Their splitting up? As if Grant could almost read her mind, he said kindly, "Caprice, don't jump to any conclusions. Please. We'll talk about this tomorrow night." From past experience, Caprice knew Grant compartmentalized. That's the way he'd handled losing his daughter and losing his marriage. Now she wished they'd talked about all of this over the weeks they'd been dating. Now she wished she knew exactly how he felt. But this was Grant, and she didn't want to wish him away. Maybe she didn't have anything to worry about. But that conclusion didn't ring true. 
"I can cook tomorrow night," she offered. "I modified Nikki's recipe for beef bourguignon for the Crock-Pot." "You're inventive." Small talk wasn't either of their fortes. "When I have to be," she joked. "Is around six all right?" "Around six is fine. I'll see you then." After Caprice murmured "I'll see you then" and ended the call, she sat and studied her phone for a couple of seconds. She had a knot in her chest that wasn't going to go away until she and Grant talked. And maybe not even then. * * * "I need your help." Caprice had been playing fetch with Lady out in the backyard the following morning when her phone played from her pocket. She'd taken it from her jeans and heard her uncle's voice. If he needed her help— "Is it Nana? Mom or Dad?" "No, no, everyone's fine. But I'm still house and pet sitting. I have been for the past few days." "How's it going?" "It's going fine. It's like being on vacation, really. I'm calling because you've had more experience with animals than I have." "I've had some. What's the problem?" She wondered if he was encountering a behavioral issue with the animals he was pet sitting. That wasn't uncommon when their owners were away. "There's a stray cat that's been coming around every day. She's a tortoiseshell." "Silver or dark?" "Lots of silver, but gold and white and stripes too. My clients told me about her—that they'd fed her now and then. She's thin and she looks like she really needs some care. This house is out in the country and there aren't any close neighbors. So it's not like I can go checking door-to-door to see if anybody lost her. If I had a place of my own, I'd keep her." "Have you talked to your clients about this since you've been there?" "I called them last night. They already have two inside cats and a dog, and they don't want to take on another animal. But I told them about you, that you've taken in strays and found them homes. They said it was okay if I consulted you. What do you think?" 
"Can you tell if she's feral? Does she want any human contact?" "They haven't had contact with her. She stays at least twenty feet away until they put the food down and go inside. Then she eats. With me, it's been a little different. The first evening I saw her in the yard, I put the food down and waited. I just sat on the patio and kept really still. It took her a while, but eventually she came up and ate. I did the same thing each day. Yesterday, she came closer, maybe about three feet away. She looks like she wants contact, but she's afraid." Caprice needed something to keep her from thinking about Grant's visit tonight. She feared he was going to tell her that they were over before they started. Instead of worrying about that all day, she might as well help her uncle. "What time did she come around before?" "She was here around ten yesterday morning, and then again around seven in the evening." Caprice checked her watch. It was eight o'clock. "I'll come out and we'll see what we can tell about her from a distance if she won't get close. Then we can talk about our options. I can be there in about half an hour. Give me the address." A half hour later, she drove toward York, taking side roads according to her uncle's directions. She ended up on a beautiful bucolic property. Alaskan cedars that had to be at least thirty years old flanked one side of the two-story house. The rest of the property was dotted with decades-old silver maples. Pink and white petunias bordered the front gardens while a hanging basket with impatiens in a beautiful fuchsia color dangled from the front porch ceiling. After Caprice parked, she went up to the porch and her uncle Dom was there, ready to let her inside. A chocolate Lab greeted her too. "He's friendly," her uncle said with a hug for her. "His name's Loafer because he likes to loaf by the sofa." "How old is he?" Caprice asked. "About eight. They rescued him from a shelter, so they're not sure. 
Come on in and I'll introduce you to Mitzi and Tux. They were rescue kittens too and are brother and sister." "I think I like your clients and I haven't even met them." Dom laughed. "They're good people. I could tell right away. I'll give Loafer a toy with some treats in it, and we can go out on the patio and sit. How about iced coffee?" "That sounds great." She wiggled a Ziploc bag she'd brought with her. "I brought you some of my choco chunks and chips cookies." "Now that's a breakfast my doctor wouldn't approve of, but I'll run it off with Loafer later." "You're going to make me feel guilty enough to go for a swim, aren't you?" Her uncle laughed. "Come on. If we sit out here long enough, maybe our visitor will arrive." On the patio her uncle asked, "Have you made any headway in the Drew Pierson case?" "You make me sound like a private investigator." "Ever think of getting a license?" "Like I don't have enough to do. No, if I give up home staging for anything, it will be to run an animal rescue shelter. But that's not on the horizon right now." He pulled two patio chairs close together. "So, any progress on the case? Do you have any suspects?" Caprice sat, and waited for him to do the same. "Not yet. I do have people I want to question, though. I just have to figure out the best way to do it. Bella and I talked with Drew's sister. She told us about some of his friends, and I want to talk to them." "Did he have many friends?" her uncle asked. As she related what she had learned about Drew's friendships, her uncle held up his hand to stop her. "Over there," he said. "Under the gnarled redbud. I've seen her there before. She uses it like a tent. I think that's where she takes her naps in the afternoon. She's completely shaded and surrounded by the leaves and branches that reach to the ground. She must feel safe there." Caprice watched as the silver-haired tortie snuck under one side of the bush and the leaves jiggled. 
The branches swished a little, and then she came out on the other side where she could see them. "Do you mind if I talk to her?" Caprice asked. "Go ahead. I've mostly been just sitting here like a statue, afraid I'd scare her away." "I might scare her, but let's see." She lowered her voice. "Hey, pretty girl. Are you hungry? We have some food for you." Caprice had made up a dish of cat food, and now she stood and took it over to the edge of the patio. The tortie retreated under the redbud bush but didn't run off. Caprice kept talking. "We just want to see how you are, and if you need somebody to take care of you. Do you think you'd like that?" "Somebody else who talks to animals as if they're human. At least I don't feel so crazy," her uncle muttered. "I'm going to let you eat, and I'm going to go back over to that chair and sit. Okay? You can come over. It will be all right." Caprice went back over to the chair and sat down beside her uncle. She kept talking. "It's okay, baby. Come on. Get some breakfast." After a few minutes, when the cat saw that the coast was clear around the dish, she came out from under the bush and walked unevenly toward the food. She kept her eyes on Caprice and Dom, though. Caprice murmured to her uncle, "I think she's limping a little." "Back right leg. I've noticed it too. It's one of the reasons I called you." As the feline ate, Caprice took her camera from her pocket and zoomed in to examine her. The cat was thin, yet rounded a little at the belly. Bloating, or something more? Her green eyes looked clear, not at all weepy. The tortie suddenly stopped eating, sat back, and scratched at her neck. Caprice suspected she had fleas. She needed good nutrition and maybe a flea treatment to get healthy again. After the cat finished eating, she sat on the corner of the patio in the sun, washing herself. She cast wary glances at Caprice and her uncle Dominic every once in a while, but seemed more relaxed than afraid. 
"I'm going to try to approach her," Caprice said. "I really don't want to use a trap cage unless we have to." Caprice approached the cat until she was about three feet away. She sat down on the patio on the same level. The tortie eyed her but didn't run. She lowered herself, facing Caprice, her paws tucked under her. "You're not afraid of people, are you? What happened? Did you get lost?" And so it went. Caprice spent about half an hour just sitting there, talking to the cat, letting the tortie eye her and get used to her. She knew she'd be taking a risk if she tried to pick up the animal. Cat scratches and bites were nothing to fool around with. Suddenly the cat stood, looked around, finished a few scraps on the dish, and then went to the bush and hid underneath. Caprice picked up the dish, stood, and turned toward her uncle. "If you lived here and we had all the time in the world, I think we could gain her trust." "I only have two days left here," her uncle said. "All right. Then we'll use the Havahart trap. I'll bring it out here tomorrow and put tuna inside. Once she walks in, she'll trip a mechanism and the door will shut. I'll call Marcus and see if I can get an appointment. He'll understand if this doesn't work out and we have to cancel." "Then what will you do with her? Not take her to some shelter—" "Of course not. I'll try to find her a home. I will find her a home. It might take a little time, but she can stay in my garage until I find someone who will take her." "I'm glad I called you," Uncle Dom said with a grin. "I'm glad you did too. I'll come back tomorrow around the same time and we'll see what we can do." * * * After her visit with her uncle and his new feline friend, Caprice drove to the storage locker center. She punched in her passcode, and the gate slid open. She drove to the row where her units were located. As she usually did, she parked to the side so another car could pass. 
After she climbed out of her van, she found her key ring and chose the small key for her padlock. She unlocked the first of her three storage units. Grasping the door handle, she lifted the door and it rumbled up. Her compartments were ten feet deep and fifteen feet wide. Although they were stacked with staging items—rolled rugs, lamps, and tables—she could reach everything. Labeled boxes lined the sides of the units, and she kept a path open to walk through. As she sorted through items, she tried to keep her mind distracted from thoughts about seeing Grant later. Not that she didn't want to see him. She just wasn't sure she wanted to hear what he had to say. Taking a clipboard from a side table, she checked her inventory list for the unit. Crossing to the rear, she pulled a box of old bottles from the top of the stack. She remembered there were several cobalt blue ones in there. They'd look perfect on the primitive shelf she'd purchased from Isaac. She found another carton she was looking for in the second storage shed. It was tucked along the side on the bottom of a stack. She lifted off the top two cartons and then opened the flaps on the bottom one. There was a fishing net. She'd picked that up at Colonial Days in East Berlin, she remembered. It would be perfect draped on a wall in the octagonal room of the house. She was checking her list again for other possibilities when her cell phone played. She thought about letting it go to voice mail, but her curiosity usually won out. Vince's face stared up at her and she answered. "Hi. What's up?" she asked. It was unusual for him to call her during the day. "I wanted to let you know that the police cleared the crime scene, so Rowena should be back in her house. Not that you should go talk to her or anything." "Did you let Nikki know?" "No, I did not. She needs to stay away from this, Caprice. Don't argue with me on that point." She wouldn't, because she knew Vince was right. 
"Rowena won't want to go back there if the place needs to be cleaned up." "From what I understand, that was supposed to happen yesterday. Rowena asked the detectives for a recommendation for a cleaning service that specializes in this kind of cleanup. They gave her one." "How do you know all this? I'm sure Detective Jones didn't tell you." "I have my sources." "Like someone you used to date who still works at the police department?" "Maybe," he drawled. "At least my romantic past is good for something." "You mean other than experience?" He chuckled. "What are your plans for today?" "I'm at my storage units collecting a few items, but then I'm going home and taking Lady to visit Dulcina while I work at the house I'm staging. I just came from the house where Uncle Dom is pet sitting. Tomorrow we're going to catch a stray cat." "What are you going to do with another cat?" "Take her to Marcus. But then I'll help find her a good home." "Pretty soon you're going to run out of people in Kismet who want animals," Vince said wryly. "Then I guess I'll have to expand my reach, won't I." "I'd expect nothing less from you." Of course, Grant was still on Caprice's mind, and since Vince worked with him . . . "Has Grant talked to you about anything unusual happening?" "You're going to have to give me a more specific hint than that. He still works mainly from home. You know that. He drops in here only when he needs something, or meets a client here instead of there." "You do talk, though, right?" "Guy talk and girl talk are two different entities. What do you want to know?" "Did you know his ex-wife is coming to town?" Her question was met with silence. Then Vince whistled low. "No, I didn't know that." "He hasn't said anything to you about it?" "Caprice, this is Grant we're talking about. He doesn't talk about his personal life, not even with me. He told you about it?" She explained about Ace's concert and how that was the night that Grant was having dinner with Naomi. 
"So he won't be coming along to the concert, huh?" "I'm not as upset about that as I am about the whole idea of his seeing her again. What if—" "Caprice, just stop. You said you're seeing him tonight?" "Yes, he thinks we should talk about it in person. I'm afraid he's going to say we should stop dating." "It's not like you to be a doomsday proponent. Talk to Grant. Then worry if you have to." Her brother's advice was good. She just didn't know if she could take it. A half hour later, she was back at home rounding up Lady and then taking her to Dulcina's. "Thanks so much for watching her for me today," she told Dulcina as Lady ran inside her neighbor's house. "After leaving her alone earlier, I didn't want her to be alone the rest of the day. I was at the property where my uncle is pet sitting. A stray has been visiting, and we're going to try to catch her tomorrow." Dulcina was already kneeling on the floor, rubbing Lady's belly. "You know I've thought about adopting a pet." "I know you have." "I just wasn't sure about the timing, with dating Rod and all." "Were his girls excited about the concert tickets?" "Not as excited as I'd like them to be. They didn't even know who Ace was. Rod and I showed them photos on the computer and told them about his tour. His younger daughter seems more excited than his older daughter. I don't know, Caprice, I'm not sure this is going to work out." "But you don't know that it isn't either," Caprice interjected hopefully. "No, I don't know that it isn't. But I do know one thing for certain. I can't live my life waiting around. I can't live my life for him and his daughters when I'm not even really included in his life yet. Do you know what I mean?" Caprice knew exactly what she meant. "You have to live your life just in case Rod isn't the one for you." "Exactly. And you know, I think I'd like a cat. It just seems like serendipity that you're going to catch one." "Maybe. Sometimes they can outsmart the cage." 
"How old do you think the cat is?" "I'd say between two and five. It's hard to tell. She's a tortoiseshell." "I don't care what color or breed," Dulcina responded. "This cat could need a lot of care and attention," Caprice warned. "She's malnourished. I can tell that just from looking at her. She doesn't seem frightened of us, but I'm not sure she wants close contact with us either. Would you be ready to take on a pet like that?" "Dating Rod and being around his daughters, I realize I need to nurture. I'm a patient person. I think I could help an animal like you're talking about." "She could be out on her own for a reason." "You mean FIV?" "So you know about that?" "I do. And I say let's cross one hurdle at a time. You said you have a vet appointment for her?" Dulcina asked. "I do. That's if all goes well." "If she has FIV, I could still take care of her, right? Especially if she's not showing symptoms." "It would be best for you to talk to Marcus about that if it happens." "Text me if you capture her. Text me from Dr. Reed's, then I'll decide what to do. Fair enough?" "Very fair." Just what were the chances that everything would go as planned? What were the chances that she could capture a cat? What were the chances the cat would be healthy? What were the chances that she and Grant would still be dating when the night was over?

Chapter Nine

That evening Caprice was out back playing with Lady when Grant and Patches arrived. She'd needed to do something to burn off excess energy and excess worry. He came into the yard from the back gate, and he'd never looked so good. He was wearing blue jeans that fit just right and a chambray shirt with the sleeves rolled up. His black hair was a bit mussed from the breeze. Grant bent and unleashed Patches. The dog ran to Lady and they began rooting through the shrubs together. As Grant approached her, he gave her an unsure smile. She couldn't quite find a smile to give him back. 
He asked, "Do you want to toss balls for them, or do you want to go up to the porch and talk?" "Is this going to be a long conversation or a short one?" she returned, in a way just wanting to get the conversation over with. Would Grant even stay for dinner? "It's whatever we decide it's going to be." He simply motioned to the glider on her back porch. Lady and Patches came running when they moved to the porch. Both dogs followed them, took a few slurps from the water bowl there, then settled at their feet as they sat together on the glider, though not quite close enough to touch. "How was your day?" he asked. It seemed he wasn't eager to jump into their conversation. "Dulcina might take in a stray I'm going to help Uncle Dom catch." "Your uncle Dom likes pet sitting?" "He seems to." Caprice had already had enough of this surface chitchat. She slanted toward him, bringing her leg up onto the glider. "Talk to me about what you're going to do." He looked nonplused for a few seconds. "I'm not going to do anything. You have to trust me, Caprice." Her dad was the only man she truly trusted. Well, okay, maybe she trusted Vince too. But as far as her romantic life? She'd trusted men and they'd hurt her. Her first love had been in high school. Craig had gone to California after graduation and had eventually sent her a "Dear Caprice" letter, breaking off their relationship once he was established in college. Okay, so long distance didn't work. If she had truly learned that lesson, she and Seth Randolph would have gone their separate ways when he'd taken the fellowship in Baltimore to pursue a career in trauma medicine. But she'd been infatuated with the handsome doctor and had let that linger a little too long. A few years ago, she'd fallen in love with a man with a daughter. Travis had seemed ready to move on, but then he and his ex-wife had reunited. That reunion made her doubly wary of Grant's situation. 
He took her hand as if he could read the thoughts running through her head. "I know you've been hurt before. I don't intend to hurt you. But this is something I have to do. Naomi and I have never had closure. She's coming to town the weekend of Ace's concert. We're going to have dinner and talk. Maybe more than once. She's going to sightsee while she's here, driving down to the Inner Harbor, possibly the Smithsonian and the art gallery in D.C., touring Gettysburg for sure." "Are you going to sightsee with her?" "I can't tell you what I'm going to do because I'm not sure yet. I'm leaving my schedule open for the week, and Simon assures me he'll watch Patches if I'm away for an afternoon or an evening." "You've covered the bases." "You're upset." Truthfully, she said, "I think you're putting our relationship in jeopardy. I thought you'd moved beyond the shadows in your past." "That's what I'm trying to do, Caprice. Honestly I am." Patches's nose went into the air. He rose to his paws and Lady did the same. Patches jumped down the steps and Lady gave Caprice a look that asked, Can I go too? Caprice nodded and waved her hand for Lady to follow her friend. The two dogs were romping across the yard now, headed toward a flower bed. Maybe they'd seen a squirrel. She watched them instead of looking at Grant. She couldn't gaze at him without her heart breaking. She didn't have a good feeling about this, whether it was gut instinct or only her own anxieties and insecurities. The thought of him seeing Naomi again just didn't feel right. Grant gently nudged her chin around until she faced him. "Are you telling me you can't trust me?" "I don't know," she confessed. "It's hard. I feel like it's déjà vu. I've been in this situation before. I know how it ends up." He shook his head. "You're putting a wall up between us." Maybe she was, but it was in self-defense. "If anything happens, will you tell me right away?" He scowled and looked almost angry now. "Nothing is going to happen." 
She wasn't thinking about sex as much as Naomi and Grant resurrecting the bond they once had. "Just promise me you'll tell me right away if your feelings toward me change." The truth was, he'd never declared his feelings toward her, but she'd felt them whenever they were together and whenever they kissed. "I'm not going to give you a blow-by-blow, hour-by-hour. Sometimes trust isn't a feeling, it's a decision you have to make." Was he right? "Do you want me to stay or go?" he asked. "If you stay, we won't resolve anything, not until after this is all over." He studied her for a long few moments, then he rose to his feet and picked up Patches's leash that he'd laid over the side of the glider. "Don't be incommunicado," he told her. "I know you're going to be involved in Drew Pierson's murder investigation. If something happens, if you're in danger, if you're unsure, call me." She couldn't promise that she would. Grant had saved her life once and she was indebted to him for that. But he wasn't a knight on a white charger. He was a real man and she was a real woman—a strong woman. She could take care of herself and solve her own problems. When she didn't tell him she'd call, he accused, "You're stubborn." "No more stubborn than you," she responded. After a long, last look straight into her eyes, he called to Patches. His cocker came running to him and they left the way they'd come in, the gate closing behind them. Caprice sank down onto the glider and Lady came running to her and looked up at her as if sensing something was wrong. Something was wrong, all right. As far as Grant was concerned, she didn't know exactly how strong she was. But she was going to find out. * * * The late-June day was rife with sunshine as Caprice and her uncle set up the Havahart trap the next morning. Instead of putting it out on the patio, they decided to place it near the redbud where the tortoiseshell usually took haven. 
The leaves from the bush partially covered it there, so it would be some camouflage. Caprice hoped the cat would be hungry enough not to notice what she was walking into. Uncle Dom had already exercised his client's dog until they were both tired out. The animals inside would not be a problem if she and her uncle were occupied outside. Caprice forked tuna onto a dish inside the carrier. Then she and her uncle crossed to the other side of the patio to wait. They sat, not knowing how long this would take. "With the warmer temperature, we can't leave the tuna inside the cage too long. If she doesn't come within a half hour or so, I'll have to change it and put a new dish in." Food spoilage could always be a problem with outside cats. People thought they could just put the food out and let it sit forever. But it broke down, grew bacteria, and could make an animal sick. As she and her uncle sat there in the almost eighty-degree heat, Uncle Dom took a sheet of paper from his pocket and unfolded it. "This is a form I made up to keep notes for my clients. What do you think?" Caprice took the page and studied it. Her uncle had drawn blocks on the left for each visit—the time he came and left—and then another space blocked off to the right of that, where he could make notes. He'd documented what he'd done this morning and had details about yesterday. He mentioned how much the dog and cats had eaten, how far he and Loafer had gone for their walk, and how eagerly the cats had played with their toy mice. "While I'm house sitting like this, I'll write in notes every few hours. For other clients, I'll just fill in a block for each visit." "Other clients?" Caprice asked. "I'm going to be walking two dogs next week and one day checking in on a cat when her owner has a medical procedure. Your mom has been great about spreading the word. And since the pet sitter I interviewed already has too many clients, she recommended me to someone who called her. 
Most people are just so grateful to have someone take care of their pet, they don't care if I'm bonded and insured. I'm still going to go through with all that. It's safer for me and for them. It will definitely be necessary if I want to take on anyone to help me with this." "You're thinking about expanding?" Caprice asked with a grin. Her uncle looked a bit sheepish. "I know it's early days yet, but I can dream, can't I?" "Of course you can. I'm sure you can turn this into a thriving business. What are you calling yourself? I mean, the name of your business." "I'm going to keep it simple and just call it Pampered Pets. After I'm finished here, day after tomorrow, I have a few apartments to check out. I just need a one bedroom. I don't want anything fancy. Even a studio apartment would do. The first place is going to be low budget and temporary. Once I'm on my feet, I'll find someplace a little nicer, someplace with a yard so if I want a dog, it would be a good location." "I'll be on the lookout for you. Are you still going to be doing bookwork for small businesses?" "Oh yes. That will fill in the rest of my time, at least until the pet sitting really takes off." "You know this is going to be seven days a week, holidays too." He raised his hands in surrender. "What else do I have, Caprice? Really. This will keep me busy so I stay out of trouble." They had kept their voices low, and both scanned the area across the yard and anywhere around the redbud. Her uncle nudged her elbow. "There she is, over by the pampas grass. She takes cover there too. Do you think we made a mistake not putting the food on the patio?" "Too late now. If we go rustling around we'll scare her off. Let's see what happens." The cat slowly walked down the grassy incline and canvassed the area outside the patio doors. When she walked, Caprice detected that limp again—her back right leg. 
The stray stopped a few feet from the pampas grass, washed her front paw, and then continued on, maybe looking for a shady spot. She headed for the redbud and raised her nose into the air as if she was sensing something good. Then she saw the dish of tuna. "In the sun, she looks as if she has a halo on her head," Caprice whispered. This tortoiseshell had a lot of gold in her, and there seemed to be a circle of it on the top of her head. "Halo could be the perfect name for her." The straw Caprice had laid in the forefront of the cage hid the metal. The cat must have been hungry, because she walked straight in, and when she did the door came down. Caprice moved immediately and her uncle followed her. The cat was already meowing and circling inside the cage, looking for the way out. "Do you have the burlap?" Caprice asked her uncle. He went to the patio, picked it up from a spare lawn chair, and brought it over. With the cat meowing loudly, they laid it over the top of the cage, hoping that would help calm her. "I wish I could go with you," he said. "Are you sure you can handle this?" "Sure I can. I've done this before. It's not a pleasant ride over to Furry Friends, but we'll survive." "I'll carry her to the van for you." As Caprice suspected, the trip to the vet wasn't pleasant. In fact, it was even more unpleasant than she expected. Apparently this cat got car sick. Caprice heard the sounds, she smelled the result, but there was nothing she could do about it while she was driving. She just kept talking to Halo, assuring her everything would be all right. But just as when you assured a sick child, the patient didn't believe her. The receptionist at Furry Friends knew Caprice. The vet tech, Jenny, came out to the front when the receptionist buzzed her, and she and Caprice took the cat to an examination room. "It's a mess," Caprice said. "I'll help you clean up." Jenny disappeared for a moment into the back, returning with a second roll of paper towels. 
They removed the burlap, and Halo looked up at them, meowing pitifully. Caprice said, "I don't think she sat in it, and I don't think she has any on her. She's small so there was plenty of room in the cage." Jenny opened the cage. "Come on, pretty girl. Come on out." Halo cowered in the cage. "Let me try something," Jenny said. "I don't want to scare her further." She went to the cupboard, took out a little bag, and pulled a few treats from it. Then she laid them on the counter and stepped back. Halo looked from one of them to the other and meowed again. But then she sniffed and she saw the treats. After another moment of hesitation, she emerged from the cage. Jenny said, "I'll take this to the back and wash it up. It will be easier that way. Are you okay in here with her?" "Yes, we're fine. But I hate to put her back in that to take her along with me. Do you have one of those cardboard carriers?" "Sure. Let's see what Marcus says first." Caprice knew what Jenny meant. If the cat was sick . . . She wasn't going to think about that right now. She took out her phone and texted Dulcina, telling her she was at the vet. Then she stood by Halo and talked to her. "We have to get you checked out. It's not healthy or safe for you to be out there, especially since you're limping a little. We need to see what that's all about." Halo finished the last treat and looked up at her warily. Caprice took another step closer so she was against the table. Halo came over to her and sat in front of her. A few minutes later Marcus came in. "I hope I didn't mess up your schedule," Caprice apologized. "I was running a little late, but my next appointment cancelled so we're good. Let's take a look at her." Marcus was African American, big and burly. His buzz cut was just part of his character, and he usually had a smile on his face. He was running his big hands over Halo, and she was letting him. "She seems to be a sweetie," he said. "Either that or resigned. 
She limps a little, back right leg." After a few more minutes examining her, Marcus said, "I think the limp was caused by an injury that healed." Then he looked directly into Caprice's eyes. "You probably don't want to hear this." "You didn't even take any tests yet." "I will. But she's about a month pregnant." "You can tell that just by running your hands over her?" "And palpating a little. In another month, somebody's going to have kittens." This had happened to Caprice before, only it had been a dog she'd taken in, not a cat. How would Dulcina feel about taking care of a pregnant stray? "What does that mean for her health?" "I'll test her for FIV and feline leukemia. I can give her a flea treatment and a wormer that's safe for a pregnant cat. But I can't give her any shots until she's finished nursing. She should be separated from other cats for a couple of weeks just to make sure nothing else develops." "I might have someone who's willing to take her." "Of course you do. If all my clients did what you do, I wouldn't have any strays to worry about." "How old is she?" "She's older than she looks. She has a couple of teeth missing, and some decay. I'd say she's between three and five. I also think her back leg might have been broken, but it's healed now. This little gal could have gotten hit by a car or was in some kind of accident." Caprice's heart went out to the cat as she patted her. "Let's do the testing," she said, trying to detach a little. A half hour later, she breathed a huge sigh of relief. Halo's FIV and feline leukemia tests were clear. Now Caprice called Dulcina. "How is she?" Dulcina asked, already concerned. Caprice told her everything Marcus had found, including the pregnancy. "I don't know anything about taking care of a pregnant cat, let alone helping one deliver." "I helped deliver puppies," Caprice offered. "I'm sure Marcus will advise you. I found a lot of things online, especially on YouTube. 
I watched actual deliveries that made me feel a little more confident. Letting nature take its course is usually the best rule. But this is up to you, Dulcina. I'll keep her until I can find her another home if you don't want her." This time Dulcina didn't hesitate. "I want her. I'll take care of her and her babies. I need to feel I'm doing something good." "Do you have any supplies? A litter box, litter, dishes, food?" "I bought a litter box and litter. No food because I wanted to see what you would suggest." "I can pick up some food for you here." "That works. Do you want me to take Lady over to your place?" Dulcina had been keeping Lady at her house while Caprice helped her uncle. "That would be great. Marcus gave Halo a flea treatment. Do you have someplace washable you can keep her? A bathroom, maybe?" "I'll do better than that. I'll let her stay in my sunroom. It's air conditioned like the rest of the house, so if it gets too warm she'll still be okay. Do you think that would be a good place?" "I think that would be great. I'm going to be running an errand later this afternoon." Caprice had already made up her mind that she was going to check on Rowena and the house and see what else she could find out. "So I can stop at Perky Paws," she went on, "and get you whatever you need." "Do you think she'll like me?" Dulcina asked Caprice. "She'll learn to like you," Caprice assured her. "What she's going to love most is your kindness." * * * Caprice smiled as she headed for Rowena's house later that day. Dulcina and Halo were settling in together. Dulcina had laid old washable rugs on the ceramic-tiled floor of the sunroom and dimmed the shades a bit on one side. Caprice had taken her a bag of catnip she'd purchased at the clinic and sprinkled a little on the rugs. After eating and drinking, Halo had settled on one of them and fallen asleep. All was well there for now. As far as Rowena went, however . . . 
As Caprice approached Rowena's house, she hoped the woman had friends other than Kiki who could help her. Would she be able to stay in that house by herself? Would Jeanie consider moving in with her? How spooky would it be to know someone was murdered in your house and you were still going to live there? Parking in front of Rowena's home, Caprice noted that all looked quiet. But that didn't mean anything. Of course, Rowena could have returned to Kiki's. Once at the porch, however, Caprice saw the main door was open. She stood at the screen and rang the bell. She heard Rowena call, "Be right there." From what Caprice could see in the living room, everything looked to be in order. The Oriental rug that had covered the floor was gone. But other than that, there didn't seem to be any sign of what had happened here. Caprice could just glimpse the shade of the Tiffany lamp sitting on the side table. Still no base. Had the murderer taken it? Or had the police found it and collected it as evidence? When Rowena came to the door, she peered at Caprice a few moments as if she had trouble seeing her. So Caprice said, "It's Caprice, Rowena." "I thought it was you," she said. "No one but Kiki wants to talk to me since my grandson was murdered." Caprice suspected that Rowena wasn't just being paranoid. A murder in a family wasn't the same thing as a death. Friends shied away from getting too close to it. She'd seen that happen with her friend Roz. "I'm glad you have Kiki." "So am I. Come on in. Everything's been cleaned up, at least that's what Kiki tells me. She went out to buy groceries for dinner. She's staying with me a few nights." "That's kind of her. I'm glad you have a good friend you can count on." "It's not like I can count on Jeanie. Sometimes I don't think that girl has any sense of family at all." "Maybe she just doesn't know how to express what she's feeling." Rowena nodded. "That's always been the case. Would you like to sit in the living room . . . 
or in the kitchen?" Caprice couldn't help but remember Drew's body lying in the living room right next to the sofa. She took another quick look around but didn't think she'd learn anything from sitting in there. "The kitchen would be great. My family did their best talking in the kitchen." "Preparing meals?" Rowena asked. "Exactly. When we were chopping or dicing or mixing, we'd reveal things we wouldn't share otherwise. Maybe that's why Mom liked to see us cook." When Caprice stepped into the kitchen, she realized it looked like a throwback to the fifties. The maple cupboards were worn from years of being open and shut. The floor had been tiled in beige and white, and the counters were covered with green Formica. She did notice that an old stove and refrigerator must have been replaced with stainless steel ones. Had that been Drew's doing? A red teapot sat on one of the stove's burners. Caprice offered, "Would you like me to make us cups of tea?" Rowena sank into one of the kitchen chairs and pulled herself in at the round oak table. "That would be lovely. I miss Drew not being around here and doing . . ." Her voice broke. She composed herself. "Things like that." Caprice patted Rowena's arm. "I'll make us that tea." After she filled the kettle with water, she said, "I guess Drew cooked for you." "Yes, he did. He found his vocation with cooking. In recent years I didn't worry about him as much as I did before." "Did he come up with lots of original recipes?" "No, not really. He was always finding recipes on his computer. When he started out with something new, especially for his business, he didn't usually test the recipe here, but rather over at his friend's house." "Bronson's." "Yes, Bronson's. Drew described that kitchen of his to me. It didn't surprise me. Bronson's family has always been wealthy. Now he's rich too, and he enjoys nice things. It was so nice of him to let Drew use his kitchen for his business. 
Drew told me he couldn't have rented a place any better." "You know, I was at the expo on Sunday. Nikki and Drew were trying to convince the wedding crowd to hire them for their receptions." "Drew cooked for me, but he didn't tell me much about his business. But that sounds like a good place to drum it up." "He called his chocolate walnut cake a groom's cake. At a wedding, there's often a traditional cake, considered the bride's cake, that is served to the guests. But often now, a couple chooses a cake called the groom's cake." "Really? How odd. No more just white cake. I guess that's supposed to give everybody a choice." She was quiet a few moments, then asked, "Did Drew's cake have maple-flavored frosting?" "Yes, it did." Rowena looked away from Caprice into the living room toward the tall Tiffany floor lamp by the armchair. Caprice remembered the piece of paper she'd seen peeking out from its base. Had Rowena hidden her recipes in there and Drew had known that? Could someone have murdered him for the recipes? "That certainly does sound like my cake," Rowena said in a soft voice. "I've been making chocolate walnut cake with maple icing since I was about ten. It was my father's favorite recipe, and my mom made it often. Drew enjoyed eating it. I don't remember him ever asking me about it, like what spices I put in it, what kind of maple syrup I might use for the icing." "It's possible that Drew's palate was so well honed he could replicate your recipe just by tasting it." Rowena shook her head. "I don't think so. He was never good at that kind of thing. I used to make these glazed carrots and he couldn't even tell I had ginger in them. No, either that recipe wasn't mine or . . ." Rowena just trailed off. Or Drew had somehow stolen the recipe and not given his grandmother credit for it. "In which cupboard might I find the teabags?" Caprice asked. "Top cupboard, on the right, next to the sink. Take your pick. There's some of that herbal stuff that Drew liked." 
"What would you prefer?" "I like the plain green tea with just a little bit of sugar." Caprice chose the green tea too, found two cups and saucers in the cupboard, and brought them over to the table. "Your china is beautiful." The teacups were painted with tiny roses, as were the saucers. The cream china looked like fine porcelain. "They were my mother's too," Rowena explained. "Using them brings back so many memories. Drew had his morning coffee mugs, but I always prefer to use these." "My Nana has a collection of teacups we use whenever we have tea together." "Your Nana and I came from a time when there was pride in everything that was made, from china to the towels we used, to the beautiful linens for the table. Wash-and-wear is important now. People toss away anything damaged and buy new. Not the way I was taught, and probably not you either." "No, that wasn't the way I was taught." She took a seat across from Rowena. "Maybe that's why I enjoy vintage clothing and antique jewelry and antiques themselves." She looked toward the living room. "Your Tiffany lamp is absolutely beautiful. And those colors in the table lampshade—Is there any word on the base?" "No one knows what happened to it. It was here that day before I left for the performance. The police think the murderer used it to hit Drew and then ran off with it." Because it was valuable? Or because he knew he had to get rid of it? And exactly how much value would it have without the shade? Caprice added sugar to her tea and stirred. Rowena did the same. They both took tentative sips. "There's something about a cup of tea that's comforting, don't you think?" Rowena asked. "I agree." "Kiki and I finished Celia's biscotti. They were delicious." "I try to bake them too, but mine aren't as good as hers. Next time I visit, I'll bring you some of mine and you can compare." "I'd like that," Rowena assured her. "I have a feeling I'm going to be lonely here all by myself. A visit now and then would be nice." 
"I could bring Nana too, and we could have a real tea party." A smile played on Rowena's lips. "That would be wonderful." She sighed. "I should probably pack everything up and move to a retirement center. But I like having memories around me. I want to stay here as long as I can." "Nana felt the same way for a long time, but then she realized she could be happy living near my mom and dad as long as she had some of her memories around her." "I have to have cataract surgery soon or I won't be able to see what's around me. I've been putting it off and putting it off. Jeanie says she'll take me and bring me home, and I suppose I'll have to depend on her." "Maybe if you depend on Jeanie, she'll open up to you more." "I can always hope." Rowena reached out and patted Caprice's hand. "You're a nice young woman. I've heard about you, you know. At church. At the beauty parlor. You've helped the police solve a few murders." "I never intended to do that," Caprice admitted. "It just sort of happened." "Are you going to try to figure out who killed Drew?" "How would you feel about that?" "Those detectives were awfully grumpy and gruff. If they're like that with everyone, they won't learn anything. Now you, on the other hand . . . You can get people to talk to you. I imagine that's what solves a murder." There was merit in what Rowena said. Maybe that's why she had solved four murders. "Can you try to find Drew's murderer?" Rowena asked. It wasn't just for Nikki's sake anymore. It was for Rowena's too. Caprice didn't hesitate to say, "Yes, I can."

Chapter Ten

Caprice didn't think she'd slept much the past two nights. The conversation she'd had with Grant weighed heavily on her since he'd left her yard. Both of her cats had stayed close again through the night as if they'd known she needed some kind of furry comfort. While Lady snored beside her in her bed on the floor, Caprice thought of all the things she should have said to Grant.
But she didn't know if any of them would have made a difference, because his mind had been made up. The phone on her bedside table rang at seven a.m. and she was grateful for the noise. When she picked it up, she saw her mom was calling. "Is everything okay?" Caprice asked automatically. Ever since the scare with Nana not so long ago, she worried. "That depends," her mom answered. "Can you help out at the soup kitchen today? Nana and I are signed up, but two of the volunteers have come down with something and we're shorthanded for a Friday." "How long do you need me?" "We're almost through with breakfast, so we'd need your help preparing for lunch and then through service." "Okay. I'll call Nikki. If she's planning menus or just making calls today, maybe she can watch Lady. I'll text you back in a minute." A little over an hour later, Caprice entered Everybody's Kitchen, a community effort staffed by several different churches that provided volunteers. Caprice had helped out here on occasion but wasn't a regular like Nana and her mom—when her mom wasn't teaching . . . or babysitting Benny. The directorship of this facility was a paid position. The kitchen manager received a monthly stipend. She didn't know who was filling that spot now. It seemed to change every few months. The soup kitchen was located in a renovated older building that had once housed a chain grocery store. Part of the edifice was dedicated to the Kismet Food Pantry, which took any and all donations, as long as the foodstuffs weren't expired. The pantry doled out food to needy families on a weekly basis, rationing according to donations. Caprice almost never wore her hair in a ponytail. However, she did so today so she could confine it in a hairnet while she helped make and serve lunch. She detected the scent of broiling meat as she approached the kitchen. Today was burger and red-skinned potato day. It was a popular lunch, and Caprice knew the dining area would be full. 
After the burgers were broiled, they were kept warm in a spicy sauce. No chafing dishes here, just steam trays, and they hoped enough food to last through the luncheon line. She spotted her mom. She was scraping carrots while Nana halved potatoes. "What can I do to help?" Caprice asked. A volunteer was setting up trays that would be placed at the head of the cafeteria line. Another pulled dishes from the dishwasher. A third wrapped silverware in napkins. With a knife in one hand and a potato in the other, Nana came over to Caprice and gave her a hug. "Good to see you, honey." "Do you want me to help you with the potatoes?" "I think it will take your mom longer to scrape the carrots. Better help her." "Who's in charge of the kitchen?" Caprice asked, looking around, seeing that everyone was doing their job and doing it effectively. "Mario Ruiz is here this month. He made up the menus and is overseeing the cooking." Caprice watched a woman take a tray of burgers from the broiler. "No one's overseeing now." "He went into the pantry. There was some confusion about a delivery." Caprice realized where she'd heard Mario's name. When Jeanie had mentioned him, Caprice couldn't quite remember where she'd come across his name before. But now she knew. She'd heard it in conjunction with the soup kitchen. Her mom or Nana had probably told her he was involved, but she hadn't paid much attention. Fifteen minutes later, Mario appeared. He seemed to be everywhere at once. He was short and thin with black curly hair, a long nose, and a wide smile as he supervised everyone. Caprice wanted to talk to him, but that would have to wait until after lunch was served. In the meantime, she finished helping her mom with the carrots and then found Nana wasn't finished slicing the potatoes. Caprice slipped over beside her. "My hands aren't as agile as they once were," Nana complained. "I'll help." "I noticed you working with your mom," Nana said. "You weren't as talkative as usual." 
No, she was too busy thinking about Grant. "I have a lot on my mind." "Anything you want to unload?" Caprice shook her head. "This isn't the place." "Any place is the place. What's going on, tesorina mia?" My little treasure. Her Nana called her that when she felt deeply about something. So Caprice had to tell her what was wrong. "It's Grant. He's going to be talking to his ex-wife." Nana gave her a long look. "And you don't think he should." "I'm not sure what he should or shouldn't do. I just know that marriage creates deep bonds. They had a child together. And whether they know it or not, the trauma of that child dying brings them together in a way that will always connect them. What if they decide they should try to get back together?" "Then you need to know it now," Nana said practically. That shocked Caprice. She hadn't really thought about the situation like that, but it certainly was true. "How do you feel about Grant?" Nana asked. Without hesitation, Caprice said, "I love him." "Then you have to trust what you feel for him, and you have to trust what he feels for you." "But I don't know for sure what he feels." "Do you trust him to make the right decision, no matter what that is?" Wasn't that a wise question? "Do you mean, do I think he's a good moral man who will do the best thing he knows how to do? Yes, I do." Nana shrugged. "Then there you have it." "So I have to let him walk away if he decides he wants to renew his marriage?" "You can't deny what's already happened. You can't erase it, though you might have to try if you ever want to marry Grant in the church." She'd never thought about that either. To marry Grant in the church, he'd have to obtain an annulment. She didn't agree with that, because he certainly had had a marriage, and there would have to be grounds. But it was the only way she and Grant could ever be married in the Catholic Church. "Do you believe in annulments, Nana?"
"I believe that you and Grant are going to have to do what's right for you. You might decide to elope to Las Vegas." "If we ever get that far." Nana put her arm around Caprice and gave her a squeeze. "Trust, honey. Trust." But who should she trust? Herself? Grant? Fate? A higher power? Maybe she should start praying again instead of slipping affirmations into her silent butler. The thing was, she had to decide what to pray for. Probably just wisdom to know what to do next. Caprice thought about what Nana had said all throughout lunch as she ladled out carrots and potatoes, as she made sure everyone who passed through the line had a bun as well as a burger. She noticed the diversity in the faces that went before her—black and brown, white and yellow. Large eyes, small eyes, big mouths, small mouths, long noses, short noses, glasses. All people just trying to make their way. She kept her eye on Mario too, so she could find him when she wanted him. Finally after the last person had been served and she'd helped with some of the cleanup, Caprice saw Mario in the dining area wiping off a table with a cloth. She approached him and asked, "Mr. Ruiz?" "Mario," he said with a grin before he even saw who she was. Then he turned, and his smile became broader. "You're Mrs. De Luca's granddaughter." "I am." She extended her hand. "Caprice De Luca." "And I'm just Mario. Mr. Ruiz is my father, and my grandfather." "Can I talk to you for a few minutes?" "Sure." He looked puzzled. "Do you want to talk about the food we serve . . . something that can make it better?" "No. Actually I want to talk about Drew Pierson." At that, Mario's eyes widened, and he ran a hand through his tumbled curls. "What about Drew?" Mario asked, his eyes a bit narrowed now. "My sister and I found him." The wary expression left Mario's face for a moment. "I'm so sorry." He motioned to a little office where she spotted a computer and some bookshelves. "Maybe we should go in there where we have privacy." 
She went to the office with him and stepped inside. He closed the door. "What did you want to ask me?" "I heard that you worked with Drew." "Yes, we worked together in Washington, D.C. We were both sous chefs at the same hotel. But then the hotel was bought by a different corporation and we were let go." "Did you come back to Kismet, expecting to work together again?" Mario went to the desk and propped himself on the corner. "I'm not sure what we expected. We put in our résumés everywhere we could. If we applied somewhere and they needed more than one person, one of us recommended the other. I thought we were a team of sorts. At least in getting new jobs. But then I found the one in York. Drew drifted a bit until he decided to open his own catering business. If it weren't for helping out his grandmother, I think he would have gone back to D.C. and found work there again. But he was pretty insistent on staying here. He got tired of the short-order cook type jobs he was getting rather than the more prestigious ones." "And what about you?" "Me? I'm satisfied with what I've got. I respect the owner of the bistro I'm working for, and we serve good food. Would I rather be cooking in a French restaurant in New York City? Possibly. But I'm giving this a try." "Did Drew ever tell you he wanted to partner with my sister?" There wasn't any hesitation when Mario answered. "Yes, he did. And I thought he was going to. But then their deal fell through. He never told me exactly why." Caprice didn't see any harm in telling him, and she might learn more if she gave him information. "Nikki didn't feel a partnership with Drew would be in her best interest. I think she was right. He stole one of Nikki's recipes and served it at the wedding expo last weekend as his. He might have even stolen one of his grandmother's recipes." Mario went silent for at least three heartbeats. Then he studied Caprice as if deciding what to tell her.
Finally he said, "Do you know about the blackberry barbecue sauce he sold to the Rack O' Ribs chain?" "I do. I saw him announce it on Mornings with Mavis. I tasted it, and it's darn good." "Yeah, it's darn good. It was mine. I developed it before D.C. Drew knew I used it and he tasted it often. He'd seen me prepare it." "Did you consider suing him?" "I did. I actually saw a lawyer. But he said I simply don't have enough proof. Are you looking into this because you knew Drew?" "I didn't know him, and I'm not sure Nikki did either. I'm looking into it because I'm afraid Nikki could be on the detectives' suspect list. She and Drew were rivals." "Drew could put up a charming front when he wanted to, but he was a conniver underneath. Lots of people knew that, so I'm sure he had his host of enemies." "Did you consider yourself his enemy?" "My grandfather has a saying. When a wrong is done to you, don't let your anger make it develop into more than one wrong." "That sounds like advice Nana would give me." "I wouldn't consider Drew my enemy," Mario added. "I merely cut him out of my life." That could be true. Or . . . Drew's lucrative deal to sell the barbecue sauce could have been a revenge motive for Mario to murder him.

* * *

Caprice found herself in Rowena Pierson's house again the following day. Nikki had decided it was better if she didn't show up for Drew's funeral. Between gossip and her motives being questioned, it just seemed the safer route to take. But Caprice said she would go for both of them . . . for the family. Nana decided to send a Mass card instead of attending. She knew Rowena in passing, but not well enough that her presence would be missed. Caprice thought about Nana's Mass card, and Father Gregory saying a Mass to aid Drew's soul in finding heaven or growth or whatever actually happened after someone died. It couldn't hurt. Not many people had attended the funeral and the graveside prayers.
It hadn't seemed appropriate to speak to anyone at the church or at the cemetery, so she'd accepted the invitation from Rowena to attend the reception at her house afterward. At the funeral Caprice had noticed something unusual. Jeanie didn't seem broken up about her brother's death. In fact, as she stood listening to Father Gregory at graveside, she'd appeared a little smug. Maybe that was the result of the way she had her lipstick applied. Caprice didn't know. But she did know that Jeanie didn't have the expression of a grieving sister. Was she a bit removed because she knew one day she'd inherit everything of her grandmother's, and Drew's death had given her that? Maybe she was just the type who couldn't show emotion easily, though she'd seemed to show plenty of emotion the day Bella and Caprice had gone into her shop. Rowena's house wasn't large, and the funeral goers who came to the reception spread from the living room into the kitchen as well as the sitting room adjacent to the living room that once might have been a dining room. It was close quarters. Caprice caught sight of Jeanie again, and this time she was talking to an older man in a suit. She gestured toward the Tiffany floor lamp beside an armchair. What was that all about? Just what secrets did those lamps hold? Rowena, checking to make sure everyone had food or drink in their hands, approached Caprice. She was using her cane today. She took Caprice's elbow. "I'm so glad you could come." "Nikki sends her respects too . . . and Nana and my mom." "I saw your Nana's Mass card. Tell her thank you. Kiki is going to help me with thank-you notes after all of it settles down. I don't know what I'd do without her. She's the one who took care of the food for today. I'm sure it's not as elegant as anything Drew might have prepared." Caprice wasn't sure about that. As a lead-in, she said, "I don't know many people here. Do you know the gentleman who's talking to Jeanie?" 
Rowena put a finger to her lips as she gazed at him. "I don't know who that is. Maybe it's one of her friends." Just then, the man left Jeanie's side and stepped into the sitting room. The front door suddenly opened and two men walked in. Rowena turned their way. "Will you excuse me, Caprice? I need to welcome Drew's friends." As Caprice studied the two men at the door, she realized they could be Bronson Chronister and Larry Penya. Just from their appearance, she could tell who was who. Bronson's suit shouted dollar signs. It was charcoal and impeccably made. His white shirt gleamed with a silk finish, and his tie looked like a designer one. His black leather wingtips were spit-shined. Larry Penya, on the other hand, had a scruffier look. He would have been more handsome without the goatee, Caprice thought. His blue eyes were piercing, his dirty blond hair just a little too long. He looked uncomfortable in navy slacks and a white Oxford shirt. He and Bronson were about the same height, though Larry was thinner. For some reason, she got the idea that maybe Larry had borrowed his clothes from Bronson. The two men were hugging Rowena, and she didn't want to intrude. So instead of lingering in the living room, she followed the Oriental runner into the sitting room. Jeanie was nowhere in sight. Caprice supposed she could have gone either upstairs or down to the basement. However, the man who had been talking to her stood there, and he wasn't mourning Drew. Rather, he was appraisingly studying a claw-foot table. As she watched, he took out his cell phone and snapped a photo of it. How odd. Caprice approached him, saying, "That's a fine table, isn't it?" As she studied him, he studied her. "It is. Are you interested in antiques?" Extending her hand, she said, "I'm Caprice De Luca. My profession is home staging. I often use antiques to fill in. It's amazing how many places they fit, even with modern décor. I know Rowena, and I knew Drew. Were you a friend of Drew's?" 
The man looked a little uneasy, but then he shrugged and pulled a business card from the inner pocket of his suit coat. "No, actually I didn't know Drew. My name's Carter Gottlieb. I'm an antique dealer from York. Jeanie and I are friends." She glanced at the card and saw an address in the east end. "I know you might think it's a little odd I'm snapping photos, but Jeanie asked me to come today. She wanted me to unobtrusively capture photos of her grandmother's antiques and evaluate them." "Without Rowena knowing?" Caprice asked, wondering if this guy would be honest with her. "She told her grandmother she was going to do it sometime. I guess she thought I could look around today and not have to bother her grandmother any other time. I think she's being sensitive to her grandmother's feelings, not wanting to talk about it so soon after Drew died." "You mean her own inheritance?" "I don't know about that. I'm only concerned with the antiques. The stars of the collection, of course, are the Tiffany lamps. And they are Tiffany, not reproductions or attempts at the same style. They are totally amazing." He motioned to the claw-foot table, a bentwood rocker, a curio cupboard with engravings. "The rest of Jeanie's grandmother's antiques are quite ordinary. But that floor lamp in there alone is worth six figures. It's such a shame that the base to the other lamp is missing." "The motive for Drew's murder might have been robbery, I suppose," Caprice offered, just to see what Carter Gottlieb would say. "If this was robbery, it was a poor attempt at it. A robber who knew what those lamps were worth never would have left the shade." That's exactly what Caprice had concluded. "So I guess Drew's murder had nothing to do with the lamps, even though they're worth what could be a small fortune to someone." "Jeanie thought her grandmother might have another small Tiffany lamp upstairs. She went up to check. 
She showed me photos of the bedrooms upstairs, and I didn't see anything remarkable. For the most part, antiques are worth only the pleasure that they give, the memories and the history attached to them." "I agree, but it sounds as if Jeanie's thinking about selling everything eventually." "I don't know if she wants to sell it, but she's tallying it up. My guess is, she thinks her grandmother might move to a retirement facility and auction off all of this. She told me how much Drew helped Mrs. Pierson. I'm not sure she'll be able to handle the house if she stays here alone." That could be true. On the other hand, there were services that could help someone in Rowena's position—Meals on Wheels, a cleaning lady once a week, a church network of volunteers who drive seniors to doctors' appointments. If Rowena wanted to stay in her house, there was a way to do it. "I suppose it hurts Jeanie to think about selling the house. After all, she grew up here." Gottlieb looked thoughtful. "From what she's told me, I'm not sure those times were happy times. She confided in me about her parents dying and she and Drew coming to live here. She doesn't seem attached to anything in the house, not even those lamps. Believe me, if I owned a Tiffany lamp like that, I'd be attached." "Did she mention whether she and Drew were close? Only two years separated them." "She really hadn't discussed that. But then again, I got the impression that she felt her grandmother was catering to Drew, letting him stay here, giving him free room and board. I suppose Jeanie felt a little bit left out of that, or like her grandmother wasn't being fair." "I don't understand," Caprice said. Carter lowered his voice. "I think Jeanie felt that Drew was manipulating their grandmother, trying to get into her good graces." "Maybe he was just trying to redeem himself for all the problems he'd caused when he was a teenager." "Perhaps that's true. 
I just got the feeling that Jeanie felt she was in competition with Drew." And now that competition was over. Would Jeanie try to convince Rowena to move into a senior center? The most important question was, Did she kill her brother so she would inherit everything her grandmother left when she died?

Chapter Eleven

After Caprice rejoined the rest of the guests in the living room, Rowena was still talking to the two men. She'd settled on the sofa. The man in the suit was on the left of her, the blond-haired man on the right. Rowena motioned to Caprice, and Caprice joined them, eager to find out if these were Drew's best friends. She sat on one of the folding chairs that had been set up near the sofa and caught a whiff of stale smoke. Apparently the man to Rowena's right was a smoker. "I'd like you to meet friends of Drew's." Rowena looked proud that Drew had had friends. She introduced Bronson first. Bronson shook Caprice's hand. "How did you know Drew?" he asked. "Drew worked with my sister. I stage houses and she provides the food." Bronson snapped his fingers. "I've heard of you. You're well-known for your shindigs . . . and for the marvelous food at them." "And this is Larry Penya," Rowena said, maybe expecting Larry to extend his hand to Caprice as Bronson had. But Larry didn't. He just nodded, then glanced around the room. Caprice decided to try to draw the two men out. "Were you friends of Drew's for a long time?" To her surprise, Larry was the one who answered. "Since high school." "It's amazing that you kept your friendships. They don't often last." Bronson and Larry exchanged a look, and Caprice wondered what that was about. But before she could get a better read on it, Rowena interjected, "I think I still have Drew's high school yearbook around here somewhere. After he moved back here, he was going to toss it. But I saved it. If I could just remember where I put it." "It will come back, Mrs. Pierson."
Again Larry surprised Caprice by patting the older woman's hand. "I wish more things would come back," Rowena ruminated. "Like my ability to use my knees to go up and down stairs." Then she addressed Bronson directly. "I should have accepted your daddy's invitation to go camping for a weekend in one of his RV trailers while I still could have enjoyed it. Now trekking around in the outdoors is something I can't do." "But you could still enjoy a campfire," Bronson suggested kindly. Both of these men acted as if they were fond of Rowena. After all, they'd known her for years. "You should go out to Happy Camper RV Center sometime and take a look at what Bronson sells," Rowena advised Caprice. "Some of those campers are amazing. Drew showed me pictures. The side actually extends from one of them, and it's almost as big as a house!" "A home away from home on wheels," Bronson agreed. "That's what people want. Oh, they like to say they've gone camping. Real campers use a tent. People that come to our center . . . they want a few conveniences when they're camping, including heat, air, and bathroom facilities. Many camper vehicles can provide that now." Caprice could see that Bronson was enthusiastic about the subject, and she supposed he had to be to make the business a success. "My only experience camping was a tent in the backyard with my sisters and brother," Caprice said with a smile. "And I can't say it was the best time of my life, especially with Vince trying to scare us half to death in the middle of the night." "Come on out to Happy Camper sometime. I'd be glad to show you around." Bronson's invitation sounded sincere. "And Mrs. Pierson," he added, "if you want to enjoy a camping experience, I would take you myself some weekend . . . and pick out the best camper to do it. You fed me and Larry often enough through the years, let alone let us sleep over here." That explained the almost grandmotherly appeal that Rowena had for Bronson and Larry. 
"You boys weren't always good for Drew, but you weren't always bad for him either. Don't think I didn't know about the trouble you often got into. But you stood by Drew and he stood by you . . . and that's what friends are for." Caprice studied both of the men during Rowena's little speech. Their expressions gave nothing away. She'd like to know a lot more about their friendships with Drew. Maybe she'd have the opportunity to talk to them separately. A cell phone beeped. Bronson slid his from inside his jacket pocket and studied the screen. Then he slanted toward Rowena. "That was a text from the manager at Happy Camper. I'm going to have to get back there. But Larry and I just wanted to stop and pay our respects." "I'm glad you did," Rowena said, giving them both a hug. The two men stood, and Bronson said to Caprice, "It was good to meet you. Remember what I said about coming out for a tour sometime." As the two men moved toward the door, Rowena shook her head. "They didn't even have anything to eat." "I think they were just glad to talk to you. Reviving memories always helps at a time like this." "I suppose that's true. You know, I thought maybe the girl that Drew had dated would stop by his funeral . . . or here." "He was dating someone?" "Not lately. But he did in the spring. You know, I could see and hear better than he thought I could. He snuck her up to his room on weekends because he knew I'd never approve. I don't know who the girl was. But she was a redhead. I caught sight of her one night when I left my room to go to the bathroom for a drink. He and the girl were snuggling on the couch. But he never introduced her to me, and I thought that was a bit odd." "Maybe he didn't want to introduce you to someone he didn't know if he was serious about." "That's probably true." To her surprise, Mario Ruiz came through the front door. He spotted Caprice sitting near Rowena and he came over to them. "Mrs. Pierson," he said. "I'm Mario Ruiz.
I worked with Drew in D.C." "Oh yes. Drew mentioned you." Kiki, who had been supervising everything in the kitchen, called to Rowena from the doorway. "Rowena, can you show me where you keep your extra tea bags?" Rowena stood, using her cane to support her. She said, "Thank you for coming, Mario. We'll talk after I solve this kitchen problem." After Rowena had moved away, Caprice said to Mario, "I'm surprised to see you here." "Drew and I were friends in D.C. At least, I thought we were. Maybe he stole my recipe because it was the only way he could get ahead. I have more talent than that one recipe. I have to get over it. I just wanted to pay my respects." Studying Mario, Caprice tried to read him, to figure out if he was sincere. After all, he could have had a strong motive for murder. But maybe he really believed that grudges didn't serve him any purpose. "Before you came in, Rowena was telling me that Drew had a girlfriend for a while. Rowena didn't know who she was, but she was a redhead. Do you know who he was dating?" "A redhead? Oh, sure, I know who that was. That was Tabitha Dennis. She's the hostess at Rack O' Ribs and the daughter of the manager. Drew knew how to get ahead, and my guess is that's where he started when he wanted to sell the barbecue sauce. I could be all wrong. Maybe he started dating her and she put the idea in his head. Either way, he always had a reason for what he did." "Are you saying he was ruthless?" "I'm not sure about ruthless. I am sure about determined and motivated. At least, since I knew him." Very different from the teenager he'd been, Caprice surmised. Could love of cooking make that kind of change in a person? Only if that's what they chose for their vocation. Only if there was more behind it than dollar signs.

* * *

That evening, Nikki stopped at Caprice's house around dinner time. She hadn't been able to stay away, and she wanted to know everything Caprice had learned.
After the reception at Rowena's, Caprice had driven to Grocery Fresh and bought tomatoes, a pepper, and a new bulb of garlic. When Nikki arrived, the aroma of garlic, onions, and simmering tomatoes permeated the air. "A salad with this, or fresh broccoli?" Caprice asked her sister. Nikki set a bag on the table. "I stopped at the Tasting Totem and got a bottle of that peach balsamic vinegar you like so much. Let's just do salad." "Baby greens in the fridge," Caprice assured Nikki. Nikki washed her hands and then went to the refrigerator to pull out ingredients for their salads. "You're restraining yourself, aren't you?" Caprice asked with a smile. "You bet I am." Lady had run into the kitchen with Nikki, but Caprice shook her finger at her. "You already had your dinner. I promise that Nikki and I will play a game of chase with you after we eat if you let us talk now." Lady cocked her head at Caprice, one ear flapping. Her big brown eyes seemed to say, I'd like your attention now, but I understand if I have to wait. After a little yip, she ran off toward Caprice's office, where Caprice knew Mirabelle was lounging on her chair. "I'm glad she and the cats keep each other company," she said as she stirred the sauce another time. "Maybe their relationship will last as long as Drew's and Bronson's and Larry's." "So you met them?" "I did. And they seem to have a genuine fondness for Rowena." "What did you learn?" "Nothing concrete. But the three of them were fast friends. I could tell there was a bond between Larry and Bronson. You know, that 'guy' thing? They exchanged looks a couple of times, and I got the impression they knew what the other was thinking." "Sort of like sisters?" Nikki jibed. "Actually, yes. It was sort of like that. Bronson invited me to tour Happy Camper whenever I'd like. I might take him up on it." "Rowena had told us that Larry had fallen on hard times. So how does he fit into Bronson's world? Their lifestyles are so different," Nikki mused.
"I don't know. Maybe Bronson's helping him out." "And what does Bronson get in return?" "If they're like brothers, maybe he doesn't need anything in return. Or maybe it strokes his ego to be the big man on campus, so to speak, and help out his friends. I did learn that Drew had dated Tabitha Dennis." Nikki looked puzzled. "Should I know the name?" "She's the daughter of the Rack O' Ribs manager, and the hostess there." Nikki whistled through her teeth. "Do you think that has something to do with the barbeque sauce?" "I don't know, but it's certainly an avenue to pursue. He didn't introduce her to Rowena, though. He just sort of snuck her in at night. That makes me wonder why. If he liked her and he was dating her, why wouldn't he bring her to meet Rowena?" "Maybe he was dating her for a purpose. You know, the same way he made a pass at me for a purpose." "His purpose with you was that he found you attractive." "I'm not saying that doesn't go along with it. But I'm beginning to think Drew was a lot more calculating than I ever gave him credit for." "Except he messed up with you," Caprice pointed out. "He underestimated me. Before I drove over here, I got a call from Detective Carstead. I have to go to the police station again tomorrow for more questioning." "On a Sunday? Do you want me to come along?" She was supposed to meet Juan at a house they'd be staging, but he could take a preliminary tour without her. She knew if she went with Nikki, she'd probably have to sit on that hard bench in the lobby. But if Nikki needed the support, she'd be there. "There's no point in you coming along," Nikki muttered. "I know they're going to want to question me alone. I really think Detective Carstead is a good guy who just wants to find the truth." Caprice's conversations with the detective had led her to the same conclusion. Still, this was her sister's freedom that was at stake. "You should take Vince along." Nikki went to the pantry for Caprice's salad spinner. 
When she came back out, she dumped the salad greens into it and added water to wash them. "I'm not going to ask Vince. I think Detective Carstead is right. Taking a lawyer along makes me look guilty. I don't have anything to hide." Even if that was true on some level, Caprice didn't like the idea of Nikki talking to the police without her brother present. Carstead might be a good guy, but just like Jones he wanted to pin the murder on someone. She just hoped it wasn't Nikki. * * * The house was amazing. Caprice toured it slowly on Sunday, appreciating every detail. Then she went outside to the front yard again to wait for Juan. Plans for staging it seemed to materialize before her eyes. She'd staged many houses, and each had its own beauty. That's why she gave them unique themes. But this one, with its Spanish-style design and architecture— Her theme for this house staging was easy to devise—Hacienda Haven. The 5,500-square-foot, two-story edifice, including a four-car garage, had a wondrous story to tell. At least that's the impression Caprice wanted to give any buyer who might come in. She wanted them to see a possible home that was all about hearth, family, rusticity, and old-world charm. The house was empty now, except for the beauty that was innate. But she could envision exactly what she wanted to do with it. This structure was about more than a Mediterranean feel. She wanted Hacienda Haven to manifest a culturally rich home that invited family to gather, talk, and play. As she faced the front entrance, the sun shone on the sprawling home with its red-tiled roof. Its villa ambience was made unique by interesting angles. Rooms weren't just square or rectangular. There were rounded walls, high ceilings, arches, and wrought-iron lacy grillwork. With five bedrooms—three downstairs and two upstairs—a loft, a media room, and even a meditation room, the house could appeal to a host of prospective buyers. 
The exposed beams, the dark wood flooring and unique tiling, the brick and stone, terra-cotta tiles, rough edges, and textured plaster gave the illusion of comfort and ease, even though every detail had been done to perfection. No, Caprice could never afford a home like this, but she knew exactly what she would do with it if she could. Juan arrived, parked in the circular drive, and met her at the heavy dark wood door. When they walked inside, he gave a loud whistle. The entranceway was magnificent with its thirty-foot ceiling. "Have we ever staged anything like this before?" he asked her. "Remember the castle house that Roz owned?" "That's different. Nobody would want to live in that one. But this . . ." He sounded in awe of the architecture, the style, and the materials. On the left, a doorway opened into a den or study. It was almost a trapezoid shape with a hexagonal front window and a rounded roof. If they went down the hallway in that direction—the left wing of the house—they would find the master bedroom and bath, a powder room, and a set of stairs to an upper level. If they walked straight ahead, they would find the seven-sided family room. If they stepped through the doorway on their right, they'd enter the dining room that led into a grand kitchen with a breakfast nook and family eat-in area large enough for a dinner party. The staircase that led to the loft was incredibly beautiful with traditional tiling used in Spanish homes. The tiles ranged in design colors from orange to blue, taupe and fuchsia . . . handmade, no two identical. She wouldn't change the multilayered wrought-iron chandeliers swinging from the vaulted ceilings. "I want to stage this house with color," she offered. "Vibrant color. No neutrals here. There's enough of that in the stonework and the tile and the brick. Think yellow and orange, pink and blue." Juan ran his hand over a wall. "Plaster skimmed with a whitewash?" "Specialized paint, for sure. 
It looks like something you might find in a Mediterranean villa, but this will withstand the cold and heat of Pennsylvania. I want to find woven rugs in the same colors as those tiles on the staircase." "You're not asking for much." "Do I ever?" Juan laughed as they climbed the curved staircase leading to the second floor. Once there, they stood at the loft railing looking down on the floor below. "We're both going to have to look through Spanish artwork and even videos of flamenco dance, maybe study paintings by Dali, Goya, and Picasso. Those will give us design images. I'll look through the rental company's Web site for pieces in that flavor. But I also want to use pottery—lots of it—as well as sconces, unusual headboards, dark wood, and wrought iron." "How about leather? Think metalwork too. And Spanish landscapes," Juan advised. She nodded, already picturing it all. "Most of all, I want each space functional with not too many items. The covered porch on the back is going to need its own treatment as if it were inside the house instead of outside." As Juan stared down below, he said, "I can imagine sectionals . . . maybe leather trimmed with wood. Possibly a couple of large mirrors to reflect those chandeliers." "We might also want to think about framed tapestries with bold designs. Greenery too in the arched nooks and crannies. Soft wool throws in whatever color we decide is dominant." "When do we have to be ready to put this on the market?" "I think it will take us at least a couple of weeks to collect everything. So let's give it a two-week time frame." "Aren't you going to be tied up with a new murder investigation?" She remembered all too well her last investigation and what had almost happened. In fact, she'd found herself in danger every time she'd insinuated herself into an investigation. That's why Grant and her family wanted her to stay out of it. But with Nikki at the police station again right now— She wasn't exactly sure what she wanted to do next.
"I want Nikki to be in the clear, but I don't want to create enemies for her or for me. I'm waiting for some kind of lead. Do you know what I mean?" "One of your signs," Juan determined wryly. "I guess so. Let's face it. In the past, I've jumped in and started wading around and made gigantic waves. I didn't know what I was doing. I still don't. But this time I want to make sure I don't put anybody in danger . . . including myself . . . and especially not Nikki. I have to be as unobtrusive as possible." "That's kind of tough when you go around wearing lime green bell-bottoms and tie-dyed T-shirts, never mind the jeweled flip-flops." She wrinkled her nose at him. "You sound like Bella." He laughed. "So what are you going to do next?" "Grocery Fresh is hosting a raspberry festival on Saturday. Since Nikki is involved in the investigation and word is going to spread that she and I found Drew's body, I would expect that if we just mingled there, went from stand to stand, and chatted people up, we could find out tidbits without even trying. I don't have to ruffle feathers that way if I just listen. As it is, I think Bella and I ruffled Jeanie's feathers—Drew Pierson's sister. She believes Nikki did it. And if she goes spreading that rumor all over town, it could catch more fire. More fire, more pressure on the police department to solve this." "Is Nikki going to be serving anything at this raspberry festival?" "I don't think so. It's better if she keeps a low profile. But Nana's entering the raspberry dessert contest. I might too." "Speaking of food, do you have any ideas what you want to serve at this open house?" "I'll leave that up to Nikki. Maybe churros—Spanish fritters. While I was waiting for you, I also read something about a garbanzo and chorizo stew. I saw a picture of these long cigar-shaped sweetbreads too, which originated in the region of Valencia. I'm sure Nikki will have a ton of ideas. This house is going to generate one idea after the other.
Can't you just see it, Juan?" He gave her an affectionate smile. "Can I see your vision? Sure, I can. Down to a tall acilino on a credenza." The strands of "Let It Be" played from Caprice's pocket. She slipped her phone out and saw Nikki's photo. "I have to take this," she said to Juan. "It's Nikki." "I'll go downstairs and explore outside. Maybe the landscaping will provide ideas for the covered porch furniture." As he loped down the stairs, Caprice connected with Nikki. "Are you finished at the police station?" she asked her sister. "I'm done for now. I doubt I'm finished for good. They took me over the same ground repeatedly. Finally Detective Jones left and it was just me and Detective Carstead." "Are you wishing Vince had been there?" "No. They didn't try to trip me up or anything. They're just checking every little detail. Detective Carstead had a list. When did I meet Drew? How often did I work with him? When did I stop working with him? It's a good thing I keep accurate work notes on my tablet so I could tell him the exact dates." "But you had told him all that before." "Yes, I had. And, at times, he seemed almost apologetic for asking again. You know, he's really kind of cute." "Brett Carstead? Cute?" Every woman had her own idea of cute. "You didn't flirt with him, did you? That could get you into big trouble." "No flirting. I controlled myself. It's too serious a situation to even think about it. But after this is all over, who knows what could happen?" Caprice thought she heard hope in Nikki's voice. Her sister had been so down . . . first about Drew's competition and then about what had happened. She was glad to hear positive energy from Nikki, even if it had to do with the hunkiness of Detective Carstead. "Do you know anything about him?" Nikki asked. "Like, is he married?" "Don't know," Caprice said. "Never asked." "He doesn't wear a ring. But that might have to do with his work." "Or not," Caprice suggested blandly. "Grant might know."
Then she remembered what was going on with her and Grant. "But now isn't a good time to ask him . . . anything." She'd already told Nikki about Grant's ex-wife and what he planned to do. "Aren't you two talking?" Nikki asked, sounding surprised. "There's nothing to talk about right now. Not until this is all over. Not until he makes decisions." "Whether he wants a serious relationship with you?" "Even more important, he has to decide whether his bonds with his ex-wife are cut or if he wants to keep those threads." "And if he does?" Nikki asked. "I don't know. I don't know what that will mean for either of us or for both of us together." "Don't give up," Nikki counseled her. "I'm not giving up. I'm just afraid to hope. I'm going to concentrate on staging this Spanish-themed house. And figuring out who might have murdered Drew. I want you to mingle with me at the raspberry festival and see what we can learn. "At least we'll have raspberry delights to munch on while we snoop." Raspberry delights. She'd like to be sharing them with Grant. Chapter Twelve Caprice had been keeping tabs on Dulcina and her new adoptee through text messages. But she wanted to see for herself how Halo was faring. She knew Dulcina was a kind, gentle person. But not everyone was a cat person. Maybe she'd thought more about the responsibility of caring for a cat with kittens and had changed her mind. On Monday morning while Lady played with her kibble release toy and Sophia and Mirabelle napped, Caprice crossed the street to Dulcina's house. After she rang the bell, it took her neighbor a few minutes to come to the door. Caprice was almost ready to text her to see if she was home when Dulcina opened it. She looked a bit harried. Instead of her hair being tied back, it was loose around her face and a bit flyaway. "I was in the closet upstairs looking for old towels," she explained. "They're fine for Halo but not for the kittens. 
I think receiving blankets would be better, from what I've read on the Internet. Their little claws won't get caught in them." Caprice had to smile as Dulcina motioned her inside. "So you're going to visit the baby store?" "No, I found a good deal online. They'll be here in two days. I'll have everything washed up and ready. I have one of those storage bins. I'm going to line it with newspaper and put the receiving blankets on top." Caprice followed Dulcina into her sunroom, where Halo was sitting on a new condo in front of the window. "How's she doing?" "This morning she let me pet her. She didn't back away from my hand." "That's a good sign. Is she eating for you?" "She gobbles everything down like she hasn't eaten for months." "Marcus said she was malnourished. She might eat like that the whole way through her pregnancy and while she's nursing. Are you still willing to do all of this?" "Yes, I am. I downloaded a book about cats having kittens, and I've watched a few videos. I know there's a possibility that things can go wrong. If for some reason she's not a good mother, I might have to hand-feed the kittens every two hours. But, Caprice, it feels so good to be giving time to nurturing this little being. Do you know what I mean?" "I know exactly what you mean." Slowly approaching the condo, Caprice said, "Hi there, Halo. Do you like it here?" The purring cat gave her a slanted-eye look that wasn't either cautious or accepting. It was quite serene, really. "That first night I wasn't too sure how she'd be in here," Dulcina explained. "She went from window to window and looked like she wanted to go back out. She meowed. But I just talked to her softly and kept showing her the litter box. I closed the blinds and stayed in here with her and read. Finally she just sat too and then fell asleep. She looked exhausted." "The trauma of being captured and taken to the vet could have exhausted her. But you have to remember, being outside, she was never safe where she slept. 
She probably always slept with one eye open. Feral or stray cats have to be vigilant constantly. Are you going to keep her in here?" "Just for today yet. The flea treatment should have done its thing by now, according to the pamphlets the vet gave you. I'll wash up the sunroom really well and then let her explore. I'll watch to see her favorite places and then put a bin nearby. Maybe I'll put one in two different places." "She might like someplace darker than the sunroom to have her babies." "That's what I read. I'm thinking in the kitchen. I can move the chair away from my little desk nook and put the bin under there. She should feel safe." "It sounds as if you have all the bases covered." Caprice walked closer to Halo and then stretched out her hand, very slowly. The cat eyed her warily but didn't jump or move away. She sniffed a few times, then folded her paws underneath her. "You're a beautiful girl," Caprice said to her. "After good food and loving, you'll make a great companion." "I talked to Rod last night and told him about her." "And?" Caprice prompted. "And he told his girls right while we were on the phone. Vanna even got on the phone to ask me about her. She's the younger one." "So kittens could be a bonding experience with them too." "I can hope. Nothing else has been. The concert on Sunday might be, but I won't know how they're going to react until we're there. On the other hand, who can resist kittens?" Caprice laughed. "Lots of people can." "I still have so much to learn. From what I read, I shouldn't handle them if I don't have to for two weeks, except to weigh them and that kind of thing. And I don't think I'd let anybody else touch them for a month, especially not anyone who hasn't been around animals." "That sounds about right." "I am nervous about being a midwife, though." "You don't have to do it on your own. I can give you Marcus's number. And you can call me if you need me. I've never delivered kittens, but I've helped to deliver pups." 
That brought back bittersweet memories. She and Grant . . . delivering Lady's litter. Caprice had found Lady's mom in her mother's tomato garden. She'd named her Shasta because she was the color of Caprice's daisies. When Caprice had found her owners, however, she learned that Shasta's real name was Honey. "I thought of asking around to see if anyone wanted a kitten," Dulcina said, studying Halo. "But I don't want to be superstitious about this. I'm just going to wait until they're born. Then I'll go from there. I know for sure I want to keep one of them with Halo." For Halo's sake, as well as Dulcina's, Caprice hoped all went well. Dulcina was definitely invested in the process. Caprice lowered herself into one of the lawn chairs Dulcina had arranged in the sunroom. The blue-and-green-flowered cushion was comfortable. Dulcina sat in the chair beside her. "Would you like coffee? Vanilla hazelnut." Caprice laughed. "You've convinced me." Dulcina was already on her feet. "You just stay there and commune with Halo. I'll get us some." About five minutes later, Dulcina returned with two mugs. She handed one to Caprice. "I hope it's right. A dab of sugar and a couple of teaspoons of milk." "You've got it." After Dulcina was seated, Caprice asked, "Are you going to the raspberry festival?" "I don't know. I think I'm going to stay close to home for the next month, except for short errands . . . and the concert. I want to make sure Halo is okay." "How about Rod?" "I don't see him that often as it is. It's rare that the girls don't have to be run here, there, and everywhere on a weekend. He doesn't like me to be too involved in that, or else the girls don't want me to be involved. I'm not sure which it is." "They play soccer, right?" "They do." "Maybe he feels it would be boring for you to sit at their games. Have you told him you want to go?" "Not really. I didn't want to push in where I wasn't wanted." 
"If you don't ask or push, he might not know you're interested in the girls' welfare as well as his." "That's a thought." She paused, looked at Halo and then out the window at the sunny end-of-June day. "Even though he's been divorced for a long time, I don't know if he's ready for a relationship." "I can relate to that," Caprice said before she thought better of it. "But you're dating Grant now. Aren't you two becoming more serious?" "I thought we were. But his ex-wife's coming to town and he's going to see her. He feels as if he has to." "And you're worried he's not ending anything." "Something like that. There's this wall up between us now. And until she comes and goes, I can't see either of us jumping over it." "Don't be out of touch with him," Dulcina counseled. "You need to stay connected." "But that hurts when I don't know what he wants," Caprice admitted to Dulcina and herself. "Maybe you could text him 'Thinking of you' or something like that." "With a little heart?" Caprice almost joked. "Don't get too flowery about it. But just let him know you want it to work." "Are you doing that with Rod?" Caprice asked slyly. "Does he know you want to get close to his girls? Not just because you want to date him, but because you want to mother them?" "I don't know if they want to be mothered." "Everyone wants to be mothered whether they'll admit it or not," Caprice suggested. "A lot will depend on their going to the concert and their reaction to it." "Don't put any expectations on it, or you won't have fun yourself. If they see you and Rod having fun, that can make a difference too. Surely they want their dad to be happy." "Are girls that age that unselfish?" "If he's raised them to care about others, they might be." Halo suddenly rose, stretched, then studied the two of them. "She has such long legs," Caprice said. "If you watch how she sits there," Dulcina noted, "you can see that she's crooked. That one back leg folds up higher than the other one. 
And when she walks, there's a slight limp there." "She's a lucky kitty to have survived some kind of accident. The wonderful thing is that she's not wild or nasty. Even at the clinic, she let Marcus examine her and didn't put up a fuss. There's a resignation about her. Or maybe it's just serenity. I don't know." Halo jumped down off the condo and went to a bowl that held a few kitten crunchies. She gobbled them up quickly as if someone might take them if she didn't. "She hasn't sat on the chairs yet," Dulcina observed. "It's as if she's just used to the ground, and maybe trees. I guess that's why she likes the condo." Halo made a turn around the room, stopped at the door leading into the kitchen and living room, then went to sit on the rug that Dulcina had laid in front of the French door. "Have you found out anything more about who might have killed Drew Pierson?" "I met a couple of his friends at the funeral reception. And his sister seems to already be numbering her grandmother's possessions for when she inherits." Caprice shook her head. "I shouldn't have said that. You didn't hear me say that." Dulcina laughed. "If it's true, then maybe she has a motive. What kind of person is she?" "I'm not exactly sure. She seemed volatile. On the other hand, inviting an antiques dealer to the funeral reception is calculating." "And Drew's friends?" "From what I could tell after spending just a few minutes with them, Bronson Chronister seems like an interesting guy. He comes from money. His father made Happy Camper RV Center into a huge success. Bronson's taken over now." "Camping," Dulcina said with disgust. "Not something I want to contemplate." "From what Bronson says, the newest campers have every convenience. It's not the camping in a tent experience. It's more like staying in your own hotel room on wheels and seeing the surrounding sights experience." "I wonder if Rod has ever thought about doing that with his daughters. Do you think you can rent them?" Dulcina asked. 
"You want to be cooped up with Rod, a teen, and a preteen for a weekend?" Caprice returned. "That does sound pretty unsettling. And once the kittens are born, they'll probably need me twenty-four hours a day." Caprice wondered if Dulcina wasn't using the idea of Halo and her kittens to give herself an out with Rod in case things didn't work out. Could they be an excuse for her not to get more involved? Maybe she wasn't any more ready than he was. "If I'm prying, just tell me to butt out. But you never talk about your first marriage." Caprice knew Dulcina had been a young widow but not the details of what had happened. "It was a wonderful marriage," Dulcina assured Caprice. "And I don't say that looking back with rose-colored glasses. Johnny was perfect for me, and I seemed perfect for him. Once we met in high school, we knew we were going to be together forever. But I learned the hard way that forever is for fairy tales. An icy road and a drunk driver coming at him . . . he didn't have a chance for forever . . . and neither did I." "I'm sorry." "I try not to think about it anymore," Dulcina said with a sudden catch in her voice. "I still miss him so much. And the truth is, I don't think I'll ever find anything like that relationship again. We were soul mates. How do you have a second act to that?" "I guess you start by deciding if you want a second act. Do you?" "I think I do. Being alone sucks after a while. I'm not afraid to be alone but, on the other hand, I don't want to settle for less than I had." She sighed. "If you play armchair psychologist with me, you'll have a field day. You'd say I'm not pushing things with Rod because I might not want to." Dulcina didn't need her to play psychologist. She'd already come to important realizations on her own. "Right now, I'm not the one to give any advice." "Maybe I should help you solve a murder instead of worrying about my relationship woes."
"I have a feeling when those kittens arrive, you won't have much time for anything else." "I see that as a good thing," Dulcina decided. "If I lower my expectations with Rod and concentrate on the kittens, maybe karma will take care of itself." That was the thing with karma. The universe was made up of actions and reactions. Every action caused a reaction. So if you did nothing, were there no reactions? Either the murder or worrying about Grant was getting to her. "I have a feeling I'm going to be over here watching those kittens a lot. Then both of us can forget about everything else." Was that possible? In about a month she'd find out. * * * Raspberries were definitely in the air on Saturday. Grocery Fresh had commandeered the town park for their festival. Their stand with quart boxes of raspberries sent the sweet aroma into the whole area, or so it seemed. Caprice held Lady's leash loosely as she watched everyone with interest. Lady trotted along beside her, nosing the ground around the food and craft stands. Caprice, Bella, and Nana submitted their desserts to a tent for judging in the late afternoon. Bella's raspberry trifle, Caprice's raspberry bread, and Nana's raspberry shortcake were given numbers. The judges would have no idea who had prepared the desserts. Winners would be announced right before the chicken barbecue stand began serving dinners. As they were leaving the tent, Nikki ran up to them. "Sorry I'm late. I was doing cold calls, trying to line up more clients." "Did any pan out?" Caprice asked. "One out of twenty," Nikki admitted. Bella patted her shoulder. "Ace Richland's concert tomorrow might be good for all of us. Joe and I badly need a date night. And you need to forget about work and Drew Pierson's murder." She checked her watch. "I'm going to meet Joe at the playground so I can watch Benny while he and the kids can have a little fun," Bella told her. "I'll catch up to you later." She waved to them and headed off toward the swings. 
Nana squeezed Nikki's hand. "How are you holding up?" "I'm fine. Caprice and I are going to do a little sleuthing." Nana narrowed her eyes at them. "Nothing that will catch too much attention, I hope." "Caprice always attracts attention," Nikki teased. "Look at her outfit." Juan had said the same thing! Was she that conspicuous? Today she'd worn a flowing, beaded paisley Bohemian-style top over white clamdiggers reminiscent of the fifties. She'd left her jeweled flip-flops at home and chosen a pair of white leather sandals instead. This was a fairly conservative outfit for her. "You can see the lime green and fuchsia in that top coming and going," Nana continued to joke. "You're all taking lessons from Bella, and I don't like it," Caprice complained. Nana gave her a hug. "You know we're just teasing. We love the way you dress. I suppose you're just going to mingle and ask questions, and that's fine. Nobody has to know you're doing the detectives' work." "We're not doing the detectives' work," Nikki protested. "I'm sure Brett is doing a fine job on his own." Nana eyed Nikki thoughtfully. "Brett, is it?" Nikki blushed. "We're not on a first name basis, but I wouldn't mind if we would be. The title Detective Carstead just seems so formal." Caprice dropped to a crouch to give Lady attention. The pup looked up at her adoringly. "Murder investigations are always formal." Nana gave Nikki a kiss on the cheek. "I'll talk to you about this Detective Carstead when you're no longer on his list of persons of interest to be questioned. I'm meeting Darla Watson over at her knitting stand. She makes these adorable little hats for babies. She's going to show me how. See you in a while." Straightening up, Caprice watched Nana walk toward the knitters' stand. "Where do you want to start?" Caprice asked Nikki after Nana had strolled away. "Let's just make the rounds. If we see anybody we know, we'll stop and chat. The murder will probably crop up." 
They meandered from one stand to another slowly, appreciating the hanging baskets filled with geraniums, the craft stand with raspberry-patterned runners for tables, and another with shawls that had embroidered raspberries dotting the wool. Caprice thought she recognized someone trying on a shawl. When the brunette with the pageboy turned around, Caprice smiled. "Hey, Helen. I haven't seen you for a while." Helen Parcelli had been in her high school class. Helen twirled in the shawl and asked Caprice, "What do you think?" "I think it's pretty and would keep you warm on a cool night. How are you doing?" The last she'd heard, Helen was in charge of advertising at the Kismet Crier. "My hours were cut again. I only want to work part-time because of the kids, but pretty soon I'm going to have to look for something else. Eight hours a week just isn't enough." "I'm sorry to hear that," Nikki said. "Could you get a job on the York newspaper?" "It's possible. But my salary isn't that great, and I'm beginning to think maybe I'd like to try something else." "What kind of something else?" Caprice asked. Lady nosed around Helen's shoes and Helen dropped down to pet her. "Hey, girl." Then she gave her attention back to Caprice. "Maybe marketing. It's a whole new world out there now, incorporating social media into advertising. I was thinking of becoming a social media consultant for businesses. It's not just celebrities who need them anymore." "That's an interesting concept," Caprice responded, meaning it. "Nikki and I are trying to grow her business, and we've tapped into the social media world. But it's rough getting a foothold." "You have to know the right outlets to push the word out. Do you want me to try to secure a couple of well-placed ads for your catering business?" Helen asked Nikki. "I don't have much of a budget," Nikki warned her. "Let me see what I can find out." 
After Helen and Nikki exchanged numbers, Helen again studied the shawl she'd thrown around her shoulders. "This might be nice topping a sundress for the reunion. Are you going?" she asked Caprice. "I'm on the development committee, so yes, I'll be there." Originally she'd expected the reunion to be a happy occasion, that she'd take Grant as her guest. But now, Grant might not be accompanying her. "You know the gossip will be all over the reunion about Drew Pierson," Helen said. "Isn't it just awful? Killed in his grandmother's house." "It is terrible," Caprice agreed. "Did you know him?" "He went to school with my sister. She had a crush on his friend Bronson Chronister, but he only dated girls who came from well-to-do families like his own." "I understand Bronson is successful in his own right now," Caprice prompted, hoping to learn more. "That's true," Helen admitted. "He's on several boards including the hospital in York, the school board in Kismet, and Kismet's new Chamber of Commerce tourism board. He thinks his business influence can pull businesses into the area." "He did expand his RV centers," Nikki said. "And think about his client base. They come from far and wide, and they go far and wide." "Each one of those customers could be a voter," Helen explained. "I heard he might be stepping into politics soon. Since he's a bachelor, my sister still has her eye on him. She even got a part-time job at the pro shop at the Country Squire Golf and Recreation Club hoping to chat him up. He plays a lot of tennis." Caprice's mind started spinning. She knew Bronson had invited her to Happy Camper to tour the property and the recreational vehicles, but she'd really like a conversation on turf other than his. Maybe it was time she used her own contacts. Roz had a membership at Country Squire. They could both play tennis if Caprice went as her guest. "Would you vote for Bronson?" Caprice asked Helen. "I might. He has a lot going for him.
He's intelligent, he has connections, and he's traveled around the world. With family money backing him, he could be good." Helen turned to Caprice. "Since you're on the development committee, is this going to be a dressy reunion? It's at the high school, so I kinda figured it wouldn't be." "If you want to go glitzy, you're free to go glitzy. We're old enough to be and dress the way we want to, don't you think? We haven't defined a wardrobe code. Nikki's catering it, so the food will be exceptional. One of the guys is springing for bottles of champagne, and someone else's family owns a winery, so he's bringing wine. We have a DJ who can play anything from the forties to now. I am hoping the guys wear suits, and I'll push that if anybody asks." "That sundress I mentioned might be just right," Helen said thoughtfully. "It has a cute sequined top. I can sparkle that night." They all laughed. After a few more exchanged pleasantries and talk about the July Fourth fireworks over the reservoir that night, Helen took the shawl she'd thrown over her shoulders to the cashier and paid for it. After she walked away, Caprice turned to her sister. "So Bronson wants to run for something." Nikki shrugged. "He probably has aspirations that will take him beyond Kismet." As Caprice glanced toward the judging tent, she suddenly froze. Something odd must have shone in her expression, because Nikki grabbed her elbow. "What's wrong?" "Look," Caprice said morosely. Her sister looked in the same direction. Grant stood at one of the baked goods stands holding Patches's leash . . . and next to him was a very attractive blonde. "Do you think that's his ex?" Nikki asked with compassion in her voice. "I don't know who else it would be." Caprice took a step forward to get a better look. As she did, both Grant and the woman turned around and headed in their direction. Caprice wanted to duck behind one of the stands, or at least behind Nikki. But that would be a coward's way out. 
Nikki leaned close to her and said, "They're just walking beside each other, not arm in arm, or holding hands, or anything like that." What Nikki said was true, but Caprice could see how their elbows brushed, and the body language said they'd done this before. There was a certain familiarity there that exes have. As she watched them come nearer, she saw her dreams going up in smoke. To think, only two weeks before, she'd been contemplating scouting vintage wedding gowns online. Scanning the area in front of him with Patches nosing ahead, Grant suddenly spied her. Patches spotted Lady and pulled on his leash. Lady, recognizing her doggie friend, did the same. Caprice had no choice but to walk forward and let the two dogs meet. Could any situation be more awkward? Grant crouched down with Lady and Patches, maybe to calm them a bit. Caprice felt she needed to be calmed too, but that wasn't going to happen. Caprice noticed the look Naomi gave the dogs. It wasn't an I-want-to-pet-them-too look. It was an I-wish-they-weren't-here look. Possibly she wasn't an animal lover. After Grant rose to his feet, he said to Naomi, "These are the De Luca sisters. Caprice and Nikki, this is my ex-wife, Naomi. She just flew in today. I thought the Raspberry Festival was a good way to introduce her to Kismet." The De Luca sisters? That was how he was going to introduce her? Of course, what could he say? "I've been dating Caprice, but now you've interrupted our relationship. Caprice was getting serious about me, but I don't know what I'm feeling about her." She told herself to get a grip. This was awkward for everybody, because she had the feeling that Naomi did know who she was from the look and assessment the blonde gave her. From Naomi's well-tailored slacks and her fashion-forward blouse, Caprice suspected she didn't appreciate Bohemian chic. Trying to be polite as well as civil, Caprice forced herself to make conversation. "How do you like the raspberry desserts?" 
Nikki gave her a can-you-think-of-anything-lamer? look. Oh well. Naomi sent her a practiced smile. "Those dessert stands are a dieter's nightmare." Grant interjected, "But the raspberries look really luscious, don't you think?" All Caprice could think of was sharing a bowl of fresh raspberries topped with whipped cream with him. This wasn't going well. "Bella, Joe, and the kids are over at the swings," she mentioned. "I'm sure they'd love to see Patches. I'm going to take Lady over there in a little while, after the dessert judging announcements." "Did you enter?" His attention was all on Caprice, and she could feel his regret and longing. "I did, but so did Nana and Bella, and about thirty other people." Patches and Lady somehow got tangled around Naomi's legs. She did a little two-step to extricate herself and frowned. "They must be quite rambunctious when they're together." Grant answered before Caprice could. "After their first burst of excitement, they get along really well and calm each other down. But neither of them is crazy about football. They snooze when it's on." Caprice knew he was trying to stay connected to her by referring to the time when he pup-sat Lady with Patches and they'd joked about what TV show they should be watching. But Naomi had her own ammunition. "Seems to me you used to snooze through football too." Oh, yes, get that history in, Caprice thought. Nikki's shoulder bumped Caprice's. "Everyone's gathering over at the dessert judging tent. We'd better get over there in case you won." Caprice saw the expression on Grant's face. He looked as if he wanted to stop her from leaving. Yet there was no point in her staying. "Have a good time at the festival," she told the couple. Then she patted her hip and called to Lady to follow her. But Lady was reluctant to leave Patches. Caprice took a treat from her fanny pack and patted her hip again. When Lady went to her, she gave her the treat. Before she could move away, Grant said, "Good luck. 
I hope your dessert wins." Caprice threw a "thank you" his way and a "nice to have met you" over her shoulder to Naomi. Then she hurried off with Lady to find out if she'd won the dessert competition. But she couldn't help taking a last glance over her shoulder to see Naomi and Grant walking away. She already felt as if she'd lost him. "She doesn't like dogs." Nikki sounded sure of that as they approached the dessert tent. "She just didn't want to get those pretty slacks slobbered on." "Me-ow," Nikki said, giving her a you're-not-usually-catty look. Caprice shook her head. "I don't know what's gotten into me." "True love." "It can't be true if he's looking in another direction." "He's looking back. That doesn't mean he can't look forward again." Out of nowhere, Caprice heard a beeping. Her phone didn't beep. Nikki slipped hers from her pocket. She said, "I recognize this name. She inquired about my services. I should take this." Caprice gave her a nod and went toward the tables where the judges and onlookers were gathered. The judges, however, weren't ready to announce. Caprice dropped to the ground giving attention to Lady until Nikki came back, a worried expression on her face. Before Caprice could ask, she said, "I have a decision to make." "It doesn't sound like one you want to make." "The call was from Trudi Swenson. She wants me to cater her wedding reception." "What's wrong with that?" "Drew was supposed to cater it. Her wedding is Tuesday evening and she can't find anyone else. I had even met with them, and they decided to go with Drew instead." "What did you say?" "I said I'd call her back. She was in tears, Caprice. If I don't cater it, she's going to have to cancel her reception. But if I do it, how's that going to look to the police?" "You can't live your life worried about what the police are going to think. You also have to make an income. 
On the other hand, if the powers that be need just one little excuse to go after you, I don't know if that could give it to them. Why don't you call Vince and ask his opinion?" "I don't want someone else making decisions for me." "This could be an important one, Nik." "Instead of thinking about the police or consulting Vince, I'm just going to put myself in her shoes. Her wedding is a few days away and she can't find a caterer. What would I want someone to do for me?" "And the answer is?" Caprice knew what Nikki was going to say. "I'm going to cater it, and I'll deal with the fallout later." That fallout could be a murder charge that Nikki wasn't ready for. Someone tapped on the microphone at the head of the tent, and Caprice heard a man's booming voice say, "We have the results. We're ready to disclose the winners of this year's Grocery Fresh Raspberry Festival." The manager of the store, Irving Bradford, was doing the announcing. Caprice could see he was enthusiastic about what he did and how he did it. That's why Grocery Fresh was one of her favorite places to shop. There was a round of applause, and Irving raised his hand. "Here we go. In third place, for a twenty-five-dollar gift certificate from Grocery Fresh, the winner is Caprice De Luca's raspberry bread." Nikki gave her a hug and Nana waved from across the tent. "In second place, for a fifty-dollar Grocery Fresh certificate, the winner is Teresa Arcuri with her raspberry rhubarb cobbler." Caprice knew Teresa. She'd redecorated her living room and dining room not so long ago. She was her mom's age and took baking as seriously as anyone in the De Luca family. "And for our grand prize, a one-hundred-dollar Grocery Fresh gift certificate, the winner of this year's dessert raspberry competition is Celia De Luca with her raspberry shortcake. I've got to tell you, Celia, there isn't much of it left. The judges gobbled it all." Everyone laughed, and Caprice and Nikki rushed to Nana to give her a hug. 
At least something good had come from today. Then Caprice remembered talking to Helen and learning about Bronson's tennis matches. As soon as she found a quiet spot, she'd call Roz and see if her friend could pull a few strings to reserve a court next to his. She couldn't do anything about Grant and his ex-wife, but she could solve Drew's murder. It was time she put more effort into that endeavor. Chapter Thirteen Caprice was so excited as she and all of her family, as well as Roz, trailed from the parking area toward the Giant Center on Sunday evening. They entered as soon as the doors opened and went to a special window. Dulcina, Rod, and the girls met them there. After introductions, Caprice could see Dulcina was excited too. "I can't believe he did this for all of us." Leslie said, "He's a rock star. He can do whatever he wants." Caprice spoke up. "Ace might be a rock star, but he's just a nice guy too. He likes lasagna and times with his family, and most of all he loves music. You'll see that tonight." When Leslie looked as if she might have a retort, her father hung his arm around her shoulders. Rod was about five-ten, with sandy brown hair and hazel eyes. All of his attention was on his daughters, and Caprice supposed that's how it should be. But he was dating Dulcina. He should be giving her some attention too. After Caprice presented their tickets and their backstage passes at the window, the clerk made a call. A security guard came to escort them to the area where they'd be meeting Ace. Caprice's uncle Dom and Dulcina began discussing Halo as they followed the security guard. They all clambered into an elevator that took them to the ground floor. Excitement practically hummed in the air as they followed the security guard . . . or bodyguard. This was the home of the Hershey Bears Hockey Club. It also sponsored other sports events. 
Soon the place would be filled with an enormous concert crowd that could number more than ten thousand fans who wanted to hear Ace's music. This was a huge venue for Ace, and she wished him every success. Caprice kept her gaze on her Nana as they went down a hall. They seemed to be headed to the hockey players' dressing rooms. This had been a long walk for Nana from the entrance of the arena where they left her off to where they were now. Although Nana was a spry seventy-six, Caprice worried about her. She tapped her nana's arm, leaned close, and whispered, "Are you okay?" "Don't you start," Nana retorted. "That security guard asked me if I wanted a wheelchair. Seriously? Just look at my sneakers. I walk every day. I'm fit for more than this." Caprice had to smile. That was Nana. Raring to go. Soon they were shown to an area that was set up for Ace's Meet-and-Greet. Ace was waiting at a table for them. Caprice caught sight of Marsha, Ace's ex-wife, and his daughter, Trista, talking to the band members, who were also milling about. Marsha waved and Trista ran over to give Caprice a hug. "Isn't this so exciting?" Trista asked. "I've never seen Dad perform in a place like this before. And Mom and I might be moving to Kismet to be near him." "That's wonderful. Then I can see you more often too." She saw that Marsha was smiling as she spoke to Zeke Stoltz, Ace's bass player. Zeke had almost quit the tour, but he and Ace had resolved the misunderstandings between them. Caprice noticed Ace give Marsha's arm a squeeze before he came over to them, and she seemed to be looking at him adoringly once more. Was it possible the two of them would reunite? Ace was a different man now than he'd been before his divorce. He'd taken responsibility for his daughter and a past road-life he didn't want to repeat with his comeback tour. Whether or not he and Marsha got back together again, their renewed relationship could only be good for Trista. 
Although she and Ace had become friends and he'd met Nikki, he'd never been introduced to all of her family until now. He gave Nana a big hug, and Caprice heard him say to her, "From what Caprice tells me, you remind me a lot of my own grandmother. She was a big influence in my life growing up. I consider her my guardian angel now." Nana whispered something in his ear, and Ace laughed. The evening seemed to take wings as there were photos all around, for her and her family, and for Dulcina, Rod, and the girls. The Meet-and-Greet fun was over way too soon. But Caprice knew Ace had other fans to greet, some who had won an autograph session from a nearby radio station, and others who had won a contest via e-mail through his fan club. The guard walked everyone back out to the main arena and showed them to their seats in the front row. The ground floor seating was slowly filling up. The bright lights would be dimmed at concert time, but for now it was easy to see the flow of people, all age groups, who had come to hear Ace and his band perform. Caprice's attention wandered to Bella and Joe down the row from her. Joe's arm was around his wife's shoulders, and they both looked happy to be here. With a baby, date night was a special thing for them. Maybe she and Nikki could hold down the fort for them some night soon so they could go out again. Caprice didn't feel quite capable of handling an infant and two kids on her own. She was about to say something to Nikki in the seat beside her when Nikki snagged her attention first. "Caprice, look over there." She pointed to a row of seats behind a railing, up the first few steps from the ground floor. Caprice didn't even need the binoculars she'd brought along to see who it was. "Is that Judy Clapsaddle and Jeanie Boswell?" Nikki asked. "Sure looks like it." Judy Clapsaddle owned the Nail Yard in Kismet where Caprice had had her nails done and had bought a gift certificate for her mom. 
Judy had given her good information when Caprice looked into the murder of her mom's best friend. "Maybe I'll just wander over there and talk to them," Caprice said. "I've got plenty of time before the concert starts." "Do you want me to come with you?" Nikki asked. "No, you stay here. I won't be long." But as she made her way over to the steps and started climbing them, she noticed Jeanie leave the row and head toward a concession stand in the back. Apparently she wanted an Ace Richland T-shirt more than she wanted to talk to Caprice. Judy, however, aimed a welcoming smile at Caprice. "Imagine seeing you here. I'll bet we'd notice a lot more Kismet residents if we take stock of everybody who walks in." "Probably. I thought I'd come over and say hi. I was going to talk to Jeanie too, but she ran off." The din in the arena was growing louder with more people talking now. Music was playing in the background over the speakers. Judy leaned into Caprice so she could hear her. "I think she's just embarrassed. She wasn't sure she should come tonight, you know, after Drew's death and all. But she had the tickets, and what good would it do her to stay home? When she mentioned it to me, I said we could come together." "That's really nice of you. How is she doing?" Judy seemed hesitant to answer. "Are you looking into Drew's murder?" "Let's just say I'm keeping my ears open." "You really helped the police when Louise was killed." "Some of that was inadvertent," Caprice admitted, remembering how she'd come face-to-face with the murderer but had ended up winning out. Recalling all that, and apparently the information she'd given Caprice before, Judy said, "It's always been hard to get to know Jeanie. But because we have businesses practically across from each other, we often bump into each other at the Koffee Klatch or at the deli. So we've had a few conversations. That's how I learned she was coming up here tonight when I ran into her at the deli yesterday." 
"Did she take any time off?" "Just for Drew's funeral. And to tell you the truth, she doesn't seem all that affected by Drew's death. It's crazy. I mean, I have a brother. If anything happened to him, I'd be devastated." The same was true for Caprice. "What was her mood driving up here tonight?" "She was all light and excited as if nothing unusual had happened. I don't know. Maybe I'm reading her all wrong. Maybe she's just really good at covering up what she's feeling." After a few more exchanged pleasantries with Judy, Caprice returned to her seat beside Nikki. Nikki asked, "Did you find anything out?" Caprice just shook her head. Not long after, as the lights in the arena dimmed, as the excitement and the buzz and then the applause grew, as Ace's opening act appeared on the stage and swung into an introduction, Caprice's thoughts couldn't stop tripping over one hurdle. What if Jeanie Boswell wasn't just hiding her feelings? What if she was a cold-blooded murderess? * * * Caprice swam laps on Monday afternoon as if her life depended on it, and maybe it did. She needed to burn off excess energy. If she did, maybe she could think more clearly. Her thoughts were disrupted with worry about what Grant and Naomi were saying . . . and doing. Especially doing. They were also disrupted by everything she had learned and not learned about Drew Pierson, as well as his sister's attitude. Just what direction should she go next? The gym part of Shape Up was a busy place this afternoon. The pool, not so much. Because she didn't dawdle, her swim took about a half hour, her shower and hair dry about another fifteen minutes. She'd be home to her animals to spend time with them before an evening tennis game with Nikki. Maybe she'd take Lady along. Animals were like kids. You couldn't leave them for hours on end without them missing you. If they missed you too much, they misbehaved, or tried to get your attention in unusual ways. She tried to prevent that. 
She pushed open the women's locker room door, about to head straight through the gym area to the front of Shape Up. But as she passed the elliptical trainers, she spotted Larry Penya. Nana would tell her that was a sign. Away from Bronson, would he open up more than he had at Rowena's? There was only one way to find out. She "accidentally" brushed against his machine, her bag catching on the corner. As she stopped to apologize, their gazes met. "I'm really sorry," she said. "I wasn't watching where I was going." Then as if a lightbulb had gone on in her head, she said, "I met you at Rowena's. Larry Penya, isn't it?" "You have a good memory. You're Nikki De Luca's sister." "You know Nikki?" "No, I don't know her. I was at the Valentine's Day dance. Drew pointed you out to me, along with Nikki while he was working with her." Almost everybody in Kismet had been at that dance. She'd been preoccupied that night with Grant . . . and with Seth. But now she wasn't preoccupied. She moved a little bit closer to Larry. "Because my sister and I found Drew, as well as for Rowena's sake, I'd like to get to the bottom of what happened if the police don't. So maybe you could help me with something." He looked reluctant to do so, then asked, "What do you need help with?" "It's a personal thing, really. Can you tell me if Drew really liked Nikki? He gave her the impression he did, but she didn't want to mix business with pleasure. I'm not sure if she regrets that now." Larry took the towel from around his neck and wiped sweat from his brow. "I guess it doesn't much matter anymore what I say, so I guess I can tell you the truth." Caprice held her breath. It was rare that someone actually spoke the truth. "Drew knew Nikki was experienced with her business and a good chef. He intended to work with her to learn what he could from her—about running a catering business and about the type of food she cooked. 
He made a play for her because he thought if they got serious, they could partner up, and he'd be on his way to what he wanted." "What did he want?" "To make some money, to be a success, to ride on somebody else's coattails without putting a lot of effort into it." Caprice didn't want to respond out of pique, so she waited a few beats before she said, "But it didn't work out with Nikki. So he must have been motivated. He ventured out on his own and then he managed that lucrative deal with Rack O' Ribs." "Drew was motivated, all right. He wanted the good life like Bronson has. He just wasn't exactly sure how to get it. There's a reason he got that deal with Rack O' Ribs." "A reason other than the barbecue sauce tasting good?" "Lots of barbecue sauces taste good. Drew got serious with the manager's daughter, and she put in a good word for him with her dad. Drew knew that manager was friends with the CEO of the chain. That's the kind of conniving Drew did." Caprice put the manager on her list of people to talk to next. Maybe Drew's conniving is what had gotten him killed. Larry said, "I'm going to hit the showers. Good luck screening Drew's enemies. He was racking them up." Before Caprice could inquire about more of them, Larry had climbed from the machine and disappeared into the men's locker room. Did he have more information she could tap, or did he have something to hide? * * * On Monday evening, Caprice knew she had to practice her swing before she and Roz accidentally ran into Bronson on the tennis courts at the country club, she hoped by the end of the week. So she'd asked Nikki to join her at a playground near her sister's condo tonight. There were four tennis courts here for the general public. Caprice had to admit she didn't like swatting around tennis balls and sweating. The only good thing was, she'd brought Lady along too. Lady, of course, happily wanted to run and catch each tennis ball. 
But after about fifteen minutes of that, Caprice had given her a chew toy and now she sat under a bench while Nikki served the ball to Caprice once more. Her sister lobbed it just over the net. Caprice ran forward, swung under the ball, and managed to fish it up so it bounced back to her sister. Nikki called, "I didn't think you'd get that." She'd gotten it with luck, not skill. After another fifteen minutes of running and missing and practically falling over her own feet, she pointed to the bench. Nikki joined her there, and they both opened bottles of water Nikki had brought along. "I can't believe you even managed to look vintage when playing tennis." Caprice had worn a skort in pink gingham and a pink tank with a little fringe. Her sneakers were printed with peace signs in fuchsia. "I don't look vintage. I just look cool, or groovy, whichever word you want to choose." Nikki groaned. Her own blue tank and running shorts were skimpy, but that's what she liked to wear to move around the tennis court. Her outfit looked great on her, since Nikki was more slender than Caprice. If Caprice lost about ten pounds . . . That had been her wish for the past few years. "I have a favor to ask," Nikki said. "I already asked Bella and she said yes." That always applied subtle pressure when two sisters were on board. "What favor?" "Bella doesn't work at All About You tomorrow night, and she agreed to help serve with me and Serena at the wedding reception. Can you help too? Since this is last minute, I'm having trouble finding waitresses. The four of us would work together well. You've been around Serena at the open houses." Serena was friendly and efficient, and Caprice liked her. "Sure I can help. I'll see if Uncle Dom can sit with Lady. If not, maybe Mom can keep her company." As they sipped water and caught their breath, Caprice thought about what she should tell Nikki, and what she shouldn't, about the information on Drew she'd ferreted out so far. 
They were friends as much as sisters. Because of the friendship as well as the sisterhood, they didn't keep secrets. She took another long swig of water. "I ran into Larry Penya at Shape Up today." "You did? Did you learn anything?" When she was slow to answer, Nikki eyed her with a shrewd sisterly look. "You should have called me if you learned something." "Why call when I was seeing you tonight?" "Okay, now you're seeing me. Talk." "You're not going to like it." "What else is new these days? What did he have to say?" Lady must have heard the tension in their voices, because she stopped chewing, dropped the toy to the ground, wiggled herself between Caprice's and Nikki's legs, and put her paws up on the bench. Caprice stroked her dog's neck. "It's okay, baby, we're just talking." "Not yet we're not," Nikki murmured, then gave Lady a pat too. "First of all," Caprice began, "Larry told me that Drew pointed us out when he was working at the Valentine's Day dance." "That's not toxic. So tell me what is." "I asked him if Drew really liked you, as a woman, not as a chef." Nikki's face, already flushed from exercise, grew a little pinker. "And?" "And, he told me Drew wanted to work with you to learn what he could from you—about running a catering business, about the type of food you cooked." "That's what sous chefs do." "Larry also confided that Drew thought if the two of you got serious, then you'd partner up and he'd be on his way." "I'd already guessed that, but it isn't easy to hear." "I didn't just learn about his attitude toward you, though. Apparently Drew was a conniver. Larry maintains that Drew got serious with the manager's daughter at Rack O' Ribs because the manager could put in a good word for him. Mr. Dennis was friends with the CEO of the chain. That's how the barbecue sauce got its tasting, and that's how it got put into the pipeline. I'm putting that manager on my short list of people to talk to." 
"You've decided to go after this full throttle, haven't you? Vince isn't going to like it. Grant's not going to like it." "Vince can live with it. He has before. And as far as Grant goes? He doesn't have any say over my life." "Caprice?" Nikki's voice held a cautious note that warned Caprice to be cautious too. "How am I supposed to think about this, Nikki? He's doing what he needs to do. I need to do what I need to do. Quid pro quo, or something like that, if we have to put it in lawyer's terms." "We don't have to put it in lawyer's terms, and I don't think Grant would either. Just because he's spending some time with Naomi doesn't mean he doesn't care about you. Can't you get that through your head?" "I only know what I'm feeling, and if I have insecurities, well, so be it. I'm not confident about our relationship because of his background." "Your own background doesn't help much either. Maybe you could trust Grant if you hadn't been dumped by two men." "Thanks a lot for the reminder." Nikki nudged her shoulder. "I meant it in the nicest way. You deserve better than a man who could forget about you because of a long-distance relationship, or because of a man who wasn't finished with his ex-wife." "Gee, who does that sound like?" Caprice muttered. "You usually have a better attitude." Caprice was saved from a response when a black sedan pulled up along the curb beside the tennis courts. It was shiny and just washed and caught their attention. Both were surprised when Detective Carstead climbed out. He wasn't wearing a suit tonight, but rather navy dress slacks and a wrinkled cream Oxford shirt. No tie was in evidence. Had he spotted them when he was driving by and just decided to stop and chat? As the detective strode closer, Lady yipped at him. It wasn't a stay-away yip. It was sort of a "hello" yip. After all, Lady was friendly. Her soft bark didn't seem to bother the detective. He stepped right up to the bench and looked down at the cocker. 
She looked up at him as if she wanted a head pat. Caprice reminded herself that her dog was a good judge of character. "Hello, ladies." His glance toward Nikki seemed to take in her tennis attire. But then he turned to Caprice. "Does she belong to you?" "Yes, she does. She's all mine." "Is she friendly?" Feeling a bit out of sorts this evening, Caprice returned, "Friendlier than I am." The corner of his lip twitched up as if he wanted to smile but wasn't going to. Holding out his hand to the cocker, he let Lady sniff it. She rubbed her ear against it, then she rolled over for a tummy rub. Caprice just shook her head. What if Detective Carstead wasn't a friend, but rather the enemy? Could she trust her dog to decide which he was? Receiving the message, the detective rubbed her tummy for a while, said, "You're a beauty," then rose to his feet again. "I recognized you when I drove by. I was going to give you a call in the morning, but this is just serendipity." "Serendipity," Caprice repeated. "What were you going to call me about?" "I heard you went to Drew Pierson's funeral." "I did. Nikki didn't." He glanced at her sister again. "I know that." Of course he did. After all, he was an investigator. "You knew Drew Pierson's grandmother well enough to pay your respects?" he asked with a probing look. "First of all, we were there the day he was murdered. Second of all, my mom and Nana know her from church. Third of all, it only seemed right." His expression was totally neutral, and she clamped her lips shut before she said anything else. Nikki had remained silent, which was a good thing. Brett Carstead shifted on his wing-tipped shoes, then he asked, "Are you investigating the murder?" Caprice knew when the Fifth Amendment had to apply. She wasn't going to answer that one. He gave a resigned sigh. "I warned you before, and I'm going to warn you again. Keep out of it. You're putting yourself in danger, and it's not necessary." 
She knew he was thinking about the last murder she inadvertently solved at the same time the police were closing in. Now Nikki spoke up. "Am I a suspect?" The look the detective gave her sister was a bit longer than necessary. "I can't discuss the investigation, and you shouldn't be either." Nikki held up her hands like stop signs. "I'm not discussing the investigation, at least not with anybody other than Caprice . . . and of course Vince." Carstead gave a little grunt. He couldn't fault her for that answer. Caprice noticed the way Brett Carstead was gazing at her sister. He didn't want to fault her at all. He didn't want to charge her with a murder. In fact, he could want to date her. And the way Nikki was gazing at him— Carstead broke his eye contact with Nikki. "Enjoy your game of tennis. I'm glad I stopped. This saved me a phone call." Not forgetting about Lady, he gave her another pat on the head, then he turned and walked away. Nikki watched his long-legged stride, the way he rounded the car, then opened his door and climbed inside. "What are you thinking?" she asked Nikki. "I'm thinking he's pretty hot for a detective." Caprice groaned. As if they didn't have trouble enough. Chapter Fourteen For Caprice, dressing on Tuesday evening as a server—which meant conservatively—was almost painful. She smoothed down her white apron tied over black slacks and a white blouse. The wedding reception was being held in the social hall adjacent to the Kismet United Methodist Church. Although Nikki had been nervous about catering this event, she needed the income, and she also needed the recommendations if the reception went well. Bella nudged Caprice and nodded toward the wedding cake. It was Nikki's new specialty—a square carrot cake with two connected crystal hearts perched on top. Silver swirls ran down the sides. It was quite attractive. "Would you want that at your wedding?" Bella was just making conversation, helping the time go faster while they served the meal. 
But Caprice didn't want to talk about weddings. Still she answered cheerily, "I love Nikki's carrot cake." Bella gave her a long look. "Are you and Grant still on the outs?" Caprice shrugged. "I haven't heard from him." "He stopped by the swings with Patches at the Raspberry Festival to say hello. His ex didn't look too pleased," Bella said with a wink. Caprice was silent. "I know from my counseling sessions with Joe, you shouldn't let things fester," Bella added more seriously. "Nothing's festering. He has to make a decision." "Or, you have to stand by him," Bella warned sagely. Caprice knew Bella had learned a lot about standing by Joe when she and her husband had been going through their problems. Was she looking at this all wrong? Should she just be there for Grant? It was time for the couple to cut the beautiful cake. The billowing wedding gown sparkled under the lights as the groom took the bride's hand and they strolled toward the cake stand together. Nikki was waiting for them with an engraved cake knife that the bride had provided. As soon as the bride and groom cut those first slices, Bella and Caprice would swoop in with trays and dishes. Nikki would push the cake into the kitchen and Serena would quickly slice pieces for the guests. Trudi placed her hand on top of her groom's on the knife. They were so young, Caprice thought, probably in their midtwenties. They looked as if they expected their lives to turn out just the way they wanted them to. Maybe they would. Nikki was there with a silver-trimmed white plate to collect the slice the couple cut. Then she held it up for each of them to take a piece to feed each other. Trudi fed her groom first, and he had icing all over his mouth. He fed her a bit more daintily. Everyone applauded when they were finished. Nikki was about to wheel the cake toward Bella and Caprice when one of the guests approached her. 
The woman was older than Caprice, but it was hard to tell how old with her bleached blond hair and her polished red fingernails. She wore loads of makeup too, and she caught Caprice's attention because of it. Caprice didn't wear much makeup, and when she saw someone who did, Caprice took notice and wondered what she was trying to prove . . . or what she was trying to hide. In this case, she was probably trying to hide wrinkles. The woman pointed to Nikki's wedding cake. "I saw you at the wedding expo when I was there with Trudi. She was supposed to be using Drew Pierson. They would have had a chocolate walnut groom's cake then." "I'm sorry if you would have preferred that," Nikki said blandly, and Caprice could tell she didn't intend to give in to an argument with this woman. "Maybe you shouldn't have taken over this wedding reception. Maybe you should have let someone else handle it." Caprice could tell that Nikki just wanted to go hide somewhere, but her sister was made of sterner stuff than that. "Drew and I were in competition for business. I saw no reason to turn down this job when Trudi couldn't find anyone else. She would have had to cancel the reception. Is that what you would have wanted her to do?" The woman who had accosted Nikki took a step back. "She could have had a deli cater it. There were alternatives." "Not according to Trudi. Maybe you should ask her. Maybe you should ask her why she chose me." Just then, Trudi came over to Nikki and said, "Everyone's raving about the food. You've done a marvelous job here tonight." She looked at the guest who was a relative or a friend. "Delia, are you telling Nikki how pretty her cake is?" "No," the woman snapped. "Pretty doesn't matter if she had a motive for murder." Instead of being embarrassed, Trudi patted the woman's arm. "Delia, I think you've been reading too many mystery novels. Nikki's trying to do her job just like everyone else. 
That chocolate walnut groom's cake attracted us to Drew Pierson's menu, but Nikki's meal tonight was flawless, and we should have just gone with her in the first place. Please try her carrot cake and see how good it is. That's all that matters." Caprice had been about to step in, but Trudi had done it for her, and very adroitly too. Delia took a last look at Nikki and huffed away. Her bridal gown rustling from here to next year, Trudi pushed her veil over her shoulder and gave Nikki a huge hug. She said, "My husband's the one who wanted to go with Drew in the first place. I would have chosen you. You've done a fabulous job tonight. So don't let what Delia said bother you one little bit. No one else is thinking it." Bella leaned close to Caprice and nodded to some of the other guests who were looking their way. "That's a nice sentiment, and I'm sure Trudi means it. But I have a feeling there's more than one person in this room thinking that Nikki might have done it." Caprice was absolutely sure that Bella was right. * * * "You need to get yourself a police scanner." Caprice had been in the middle of working up figures for a proposal for a house staging when she'd answered Isaac Hobbs's call Wednesday evening. "Why should I get a police scanner when you have one and Lloyd Butterworth at the Koffee Klatch has one. I usually hear the news before it makes it down the street." Isaac gave a grunt. "I just have one for entertainment value when I don't have any customers in the shop. Lloyd Butterworth milks his for all it's worth and thinks it brings him business." "He could be right about that. His coffee's darn good too." "And mine isn't?" Isaac let his pot of coffee sit all day. Sometimes when she went to visit in the afternoon, it tasted as if it had been burnt to a crisp, and that was hard to do with coffee. "Your coffee provides great conversation." She went back to their original subject. "Why do I need a scanner?" 
"Because Rowena Pierson's house was burglarized last night." "What?" "You heard me. This is small-town Kismet. You don't just go by codes. I listen to chatter too. The police were called to that address for an attempted break-in. That's basically all I know, except . . . I called a friend of a friend who knows one of the officers. She said they don't think anything was taken." "Then why the break-in?" Caprice mused. "I don't know. I did find the paperwork on her lamps. The table lamp is worth around $200,000 and the floor lamp around $400,000. But those prices swing around at auctions. One auction house I know of deals mainly with Tiffany lamps. They have a list of private collectors always on the lookout. Then, of course, there is Christie's." The most high-end auction house, Caprice thought. Isaac added, "There are lots of forgeries. Provenance often tells the tale. Rowena's lamps have provenance dating back to 1929. Are you going to pay Rowena a visit and nose around?" "I can't very well do that tonight. I have work with deadlines. Besides, a visit this soon would be unseemly." "Like you were nosing around," he agreed. "I have to be careful, Isaac. Detective Jones's eyes are on me." "Is Rowena Pierson's place within walking distance?" "It could be if I wanted the exercise. Why?" "So take Lady for a walk tomorrow and Jones won't be the wiser." Not only her work van but her yellow Camaro was recognizable, and Isaac might have a valid point. "I'll think about it." "If you snooze, you lose." She laughed. "I get the idea, Isaac. If you hear anything else, will you let me know?" "Sure will. And if you need my services, you know where to find me." Isaac and the paperwork at his shop had helped her out before. "You're a good friend, Isaac." "And you're a great customer." She knew Isaac tried to be hard-boiled on the outside, but he was a softie on the inside. After all, he'd attended her birthday party in April and brought her the cutest little vintage cat creamer. 
"I'll take your advice to heart," she told him. "And you'll let me know what happened with the break-in?" Caprice had to smile. "I promise I'll let you know. Thanks for the tip." "Anytime." As she ended the call with Isaac, Caprice realized that he was a good friend, not just a contact. She'd have to invite him over for dinner sometime so they could really chat, or maybe invite him to one of the De Luca family dinners. He'd get a kick out of that. * * * She was thinking about the next family dinner, what she'd make, whether Grant would be there, as she walked Lady the following morning and headed for Rowena's. Midmorning in early July, heat was already setting in. She'd chosen to wear fifties-style turquoise pedal pushers and a white blouse with turquoise pinstripes. Her sneakers were comfortable for walking. Lady didn't seem in any hurry as she snuffled the grass along the sidewalk and then looked up at Caprice inquiringly. Does this walk have a destination? "Yes, it does," Caprice told her. "I don't know if Rowena likes dogs, though, so we might be staying outside on the porch." Lady tilted her head as if considering that. Caprice rubbed her, and Lady heeled perfectly for the rest of the walk. She responded to praise so well, and treats worked too, though Caprice used them less now than she used to. At ten months old, Lady was growing into her beautiful self. Her golden color was rich and deep, and the cream along her ears reminded Caprice of the golden highlights in Nikki's hair. Nikki probably wouldn't like being compared to Lady. As Caprice reached Rowena's block, she noticed the flowers dotting the yards—purple and white petunias, red roses in full glory, marigolds a neighbor had planted along a border. Caprice wondered if Rowena would even be staying at her house or if she would be staying with Kiki again because of the break-in. The next minute, her question was answered. A white van had parked at the curb outside of Rowena's house. 
Two men hurried down the steps and climbed into the vehicle, slamming the doors. As Caprice and Lady approached, she heard the van start up, then it pulled away from the curb and sped down the street. Maybe repairs had been necessary if someone had broken in. Had Rowena been here when it happened? She was hoping she'd soon have her questions answered. Lady ran up the steps beside Caprice. Caprice put her finger to the doorbell, but before she could even press it, Rowena was at the door. "Hi, Caprice, what brings you here?" Caprice nodded to Lady. "I was taking her for a walk and just headed in your direction." "Oh my. I missed her at first." "I understand if you don't want a dog inside. I just came to check on you and see how you're doing." "I've never had a dog, but I don't mind yours coming in as long as she doesn't run around and knock everything over." "She's usually pretty well behaved," Caprice assured Rowena. "If she gets rambunctious I'll bring her back outside again. I brought one of her toys that she can chew on while we're talking." "That sounds good." "Was that a repair truck I saw leaving?" Rowena waited until Caprice and Lady were inside before she answered. "Not exactly a repair truck. One of those was here yesterday to fix my basement window. Someone broke in night before last." "Were you here?" "Yes, I was here. I was all settled in my bedroom when I heard a noise. I didn't know what it was. Apparently it was someone breaking in that basement window. They made it up to the living room, but I had my four-pronged cane and I went after whoever it was. The person wore a hoodie, so I couldn't tell if the intruder was male or female. I wish my sight was as good as it once was. Anyway, I chased whoever it was back down the basement and shut the door and put a chair in front of it. Then I called the police." "I can't believe you did that! You're fortunate the burglar ran." "I am, aren't I? That's what the police said too. 
They think whoever it was wanted to steal something. Maybe the Tiffany lamps. But I don't know. I did see that the burglar had something on his hands. They looked white. The police think those were latex gloves. From what I could tell, nothing was taken. I guess I surprised him. Maybe he expected me to still be at Kiki's." That was a reasonable supposition. "So the police didn't find any evidence of who was here?" "Only the broken glass from the basement window. The men you saw leaving were installing my burglar alarm system. I should have had it done a long time ago because of the Tiffany lamps if nothing else. But nobody knew their worth. Not really." Was that true? Were the lamps the object of the break-in? Or did Rowena have something of Drew's that the burglar might have wanted? Even more possible, what if the burglar knew about the recipes inside the light? Did he or she want those? Rowena motioned to the sofa. "Please sit." Caprice undid Lady's leash and gave her the toy she'd brought along. "Your nana called me to talk awhile. I so appreciated that. It seems since my grandson was murdered I'm persona non grata in Kismet." "What do you mean?" Caprice could guess, but it seemed Rowena needed a listening ear. "I thought I had friends in this town. I've lived here all my life. I raised Drew and Jeanie the best way I knew how. They went through the public school system, and I made friends with other parents even though they were younger than I was. Granted, since I haven't been able to get out and about as much, I've let a lot of friendships slide. When you can't go and do, people forget you're around. All except for Kiki. She's been a true friend. The others—they're all keeping their distance. It's as if I have the plague." Caprice didn't know if she could help Rowena, but she could try. "I don't know if I can find out who killed Drew, but I might be able to find some tidbits of information that could help the police. 
What I want you to know is that the general public does look on murder as if it's something that's catching. It's not fair, but they don't want to be tainted by it. They don't want to think that they could bring something like that on themselves. They want to believe they're different. They're not, of course. Violence can touch anyone." "I just feel . . . so alone now." Caprice knew Rowena was missing Drew desperately, and she was hurt by her friends ignoring her and putting her in a "do not touch" category. "Is there anything I can do to help?" "Just your visits help. You don't know how much they mean to me. Your grandmother said she'd visit too. She didn't want to barge in too soon, but I don't think there is a too soon with this. I'm never going to get over Drew being taken from me. All I can do is learn to live with it." "Has Jeanie been by to visit?" "She's at her store all the time, and she tells me she can't get away. But from what I've seen of write-ups about you in the newspaper, and what your nana says, you work a lot too. Yet here you are." Yes, here she was. And she wasn't going to ask any more questions. She was going to keep Rowena company and just let her talk about Drew. She had the feeling that that's what the woman wanted to do most, and Caprice was going to let her. * * * That evening Caprice had just ended a video conferencing call with a client when Roz texted her. Are you busy? Can I come over? Caprice texted back, Sure. Anything wrong? Roz texted back, We'll talk. Hmmm. That didn't sound good. A problem with Bella working for her? A pothole in the road with Vince? Caprice was wearing her favorite pair of lounging pants, patterned with kittens, and a bright pink T-shirt that matched part of the design. She briefly thought about changing, but this was Roz. She could be comfortable. After almost exactly fifteen minutes, her doorbell rang. She checked the monitor next to her computer. 
Yep, that was Roz standing under her porch light, and she had Dylan with her. Caprice opened the door and invited them inside. Dylan yipped, danced around the foyer, then met Lady in the dining room and took off for the kitchen. "I made decaf coffee," Caprice told her friend. "It's a new flavor—butternut rum." "Do you have a bottle of wine? I think that's more my speed tonight." She and Roz had shared wine before, but it wasn't usually their beverage of choice. Something was wrong. "I think I have Tears of Gettysburg that Vince brought me from Adams County Winery." It was a sweet white wine that went down easy. "That might be appropriate," Roz agreed, going into the living room and plopping down on the sofa. Roz's golden-blond hair was always perfectly coifed. She usually wore gold earrings or jewels even when she was dressed casually. Casual for Roz was a well-tailored, probably designer top and slacks. Tonight she wore a pale green set with emeralds at her ears. "Do you want to tell me what's wrong before we have the wine or afterward?" Caprice asked. "First of all, I want to tell you we have a tennis court date for tomorrow at four-thirty. We'll be on the court next to Bronson." "That's great. How did you manage it?" "That wasn't hard. I just dropped in at the Country Squire pro shop. I asked about court availability. I hinted that I might want to do some business with Bronson, and the manager set me up." "It's good to have friends in high places," Caprice joked. "Or at the computer in the pro shop. When Ted was alive—" She stopped abruptly. "Go on," Caprice prompted. "You can talk about him, you know. You were married to him." "And what a sham that was," Roz said. "He often had the manager rearrange court times or court schedules to suit him when he wanted to discuss business over a tennis game. I felt it was manipulative, and here I am doing the same thing." "Does it make a difference that I'm trying to catch a murderer?" Roz's gaze met hers. 
"Of course, it does. I just . . . I just regret so many things about my marriage to him." "Where is this coming from now?" "Let's open the wine." Caprice brought out two crystal wineglasses. Vince had given them to her as a housewarming present when she'd bought this house. They were blown glass with an etched flower pattern. She suspected he'd gone to Isaac's shop to find them. After Caprice sliced cheddar cheese and paired it with one of her favorite crackers, she arranged a plate for the two of them. When she returned to the living room with the dish and the open wine, Roz was staring into space. Something had her spooked. After Caprice poured the wine, she handed Roz a glass. "Okay, spill it. What's going on?" "It's your brother. It's Ted. It's my history with men. I'm confused about all of it, and I'm not sure what I should do or shouldn't do." "That's one very broad topic. Can we narrow it down?" Roz drank at least half her glass of wine. "I haven't always made the best choices when I've tried to have relationships in my life. I dated a few men after Mom died . . . before Ted." Although Roz was rich now, she'd had few advantages growing up. Her mom had raised her on her own. When Roz was a senior in high school, her mother had been diagnosed with breast cancer. The summer after graduation, out of necessity, Roz had had to put her dream of being a flight attendant on hold and waitress while she'd taken care of her mom. Before her mother died, Joan Hulsey had made Roz promise not to put her dreams on hold again. So after the funeral, Roz had trained for her job, flown everywhere, and then met Ted Winslow. Roz's traveling and then her marriage to Ted had interfered with their friendship. They'd kept in touch, but weren't the good friends they'd been in high school. Not until after her husband's murder. Roz had been accused of killing Ted, and Caprice had stepped in. 
Now she and Roz were close again, close enough to be honest with each other and tell each other the truth. "The men you dated before Ted. Were they really serious relationships?" Roz thought about it. "I didn't let them get too serious, I guess, because of my traveling. As you found out, it's hard to have a long-distance relationship." "But Ted was different because he promised you the sun and the moon and the stars?" Caprice asked without judgment. "I guess you could say that. He was rich, powerful, and confident. He swept me off my feet. He wouldn't take no for an answer. I was blinded by what Ted could give me, by the facade he showed me. He wasn't who he seemed." "You don't really know a person until you're with him for a while." Caprice couldn't help but think about her own situation with Grant. She thought she knew him. But did she really? "What are you afraid of most?" Caprice took a few sips of her wine and thought about the answer she would give. Roz drained her glass, set it on the coffee table, and poured herself another. "I'm afraid of getting hurt again. I'm afraid I'll hurt Vince. Up until now, we've had fun together. We've enjoyed each other's company. We've given in to a romance that just happened. If my relationship with him goes south, what happens to my friendship with you . . . with your family? If he and I really don't belong together, what damage are we going to do to each other?" "I've misplaced my crystal ball," Caprice said. "You can't possibly think you're going to answer these questions, do you?" Roz sipped more wine, then laid her head back against the sofa cushion. "Here I thought you'd have some answers." "Not to those questions. I can tell you no matter what happens between you and Vince, it's not going to affect our friendship. We've been through too much together." She motioned to the dish of cheese. "Eat something before all that alcohol goes to your head." "My head's already spinning, so it's not going to make much difference." 
As Roz assembled cheese on a cracker and popped it into her mouth, Caprice asked, "What brought all this on?" Roz chewed, swallowed, and took another sip of wine. Lady and Dylan raced into the room, awakening Sophia perched on the top shelf of her cat tree, as well as Mirabelle, who was prettily sleeping on the bottom shelf. After the dogs ran through the room, around the circular floor plan that Caprice's animals loved, she suspected they'd detoured into her office where a few of Lady's toys lay strewn across the floor. Mirabelle hopped down off the cat tree, came over to the sofa, sat at Caprice's feet, and meowed at her. "Do you want closer company?" Caprice asked as she waited for Roz to answer her question. Mirabelle hopped up onto the sofa and padded over onto Caprice's lap. She settled in and purred. Roz studied the beautiful Persian for a few moments. "Vince wants me to move in with him." "Wow," Caprice said without stopping herself. "That's huge for him." "And huge for me. Maybe it's too soon. I don't know what to do. I don't know what to tell him. I don't want to hurt him, either by rejecting his offer or by moving in and having it all not work out." Hard questions that had to be answered. "Do you think you're over Ted?" "How does anyone get over a situation like that? Some days I think I am, and some days I think I'm not." "Do you believe on the days you're not that Vince will support you through it? Or will he just get impatient that you haven't moved on?" "He hasn't been impatient so far." "But you think that just might be romance's rosy glasses, or Vince not showing you his true self." Roz took another gulp of wine. "Yes." "There's only one way you're going to know Vince's true self, and that's if you're around him more. Not just for wine-tasting dates and movie dates and dinners out. 
But first thing in the morning and last thing at night, when he hasn't shaved and when he has, when you can't find something in your closet to wear even though your closet's full, when you have an argument with him and he leaves and you don't know what's going to happen next." Roz raised glistening eyes to Caprice. "I want it to work." "If you want it to work, then you have to give it a chance. If you shut down now, how will you ever know?" "You think I should move in with him?" "I think you need to talk to him about it more, and maybe compromise." "How do you compromise on something like that?" After a few sips of her own wine, Caprice responded. "I don't know how busy Vince is this time of year, or you either for that matter. But what if instead of moving in with him, the two of you went on a vacation for a week? I'm sure he could use one, and you probably could too. Just be with each other day and night. No, it's not real life. But you'd be in each other's company whether you're in a good mood or a bad mood, whether you're having fun or whether you're not." Roz set down her glass and turned it in a circle as if she was nervous about all of it. "Do you think he'd go for it?" "You won't know until you ask. At least you wouldn't be saying no. You'd be taking a step forward." Roz thought about it some more. "I don't want to board Dylan." "Board? I'll take him. You don't have to board him. He's used to my house, and he's used to Lady. He's even getting used to having two cats around. It would be fine for a week." Roz rubbed her hand across her temple. "All of this is making my head spin." "That's probably the wine. You're a one-glass girl like me. You've had two. In fact, why don't you just stay the night?" After her husband's murder, Roz had stayed with Caprice for a while. They'd gotten along great, and right now, Caprice could use the girlfriend company too. "I don't want to put you out." Caprice brushed her concerns away. "You're not." 
Lady and Dylan trotted into the living room and sat down beside each other near the coffee table. "It's better if I don't drive," Roz agreed. "That's the smart thing to do. And I'll think about your idea of a vacation. Bella might like the extra hours for a week if she can find a babysitter. It will be a matter of whether Vince can get away." "Ask him." "I'm seeing him tomorrow evening after you and I have our tennis match. I'll broach the subject then." Caprice knew life could be about compromise, about taking baby steps one at a time. A jump into the ocean wasn't necessary when you could just jump into the little pond where you were sure you could swim. However, her relationship with Grant was more complicated than Roz's with Vince. Weren't their situations different? Caprice poured herself another glass of wine and thought about her tennis match with Roz next to Bronson's court. She needed to form a strategy for her approach to him. That was much easier than thinking about Grant and his ex-wife having an intimate dinner together . . . or more. Chapter Fifteen Roz left the next morning, and Caprice missed her after she was gone. It was fun engaging in girl talk again with a "roommate." Of course, Grant popped into her mind as a possible roommate, and she shooed the image away. On her to-do list this morning was a stop at Rack O' Ribs. Later Nana would be pup-sitting while Caprice staged the Spanish house. She would definitely make it a Hacienda Haven, then Denise Langford could bring her other agents through the mansion for a tour. Later this afternoon, she'd meet Roz on the courts. She wouldn't think about Grant all day. Lady looked up at her and barked. Her cocker knew she was in denial. As Caprice drove to Rack O' Ribs, she understood that restaurants weren't staffed only during their posted hours. The manager and kitchen staff had to prep, and they came into work long before the restaurant opened. 
She drove around the side with the drive-up window and parked on the other side of the restaurant. Instead of going to the front doors, which she knew would be locked, she went around to the back. Lady had wanted to come along, but Caprice promised her she wouldn't be gone long and then she'd take her to visit Nana to play with Valentine. She was as fond of the animals that filled her life as she was the people in it. There was a buzzer on the back door of the restaurant and Caprice pushed it. It was possible the manager wasn't here. A member of the waitstaff answered, his apron messy with barbecue sauce streaks. "Can I help you?" he asked impatiently. She read his name tag. It said STAN JONES. "I'd like to talk to the manager, Bertram Dennis. Is he here?" The young man glanced over his shoulder. He couldn't have been more than twenty, and his immaturity showed when he answered, "I'll check if he wants to see anyone." Caprice held the door as he went back inside, and she stepped in. Once inside, it would be easier to get her answers. Once inside, it would be harder for Bertram Dennis to ignore her. But Bertram Dennis had apparently dealt with whatever came up. He entered the hallway and saw her standing there. "Can I help you?" "I hope so." She held out her hand. "My name's Caprice De Luca." He cocked his head as if he was thinking about her name, as if he might recognize it. But he wasn't making any connections. "You're Mr. Dennis?" He nodded. "I am." He still looked puzzled as to why she was there. "Did you have a meal you weren't satisfied with at the restaurant? Something like that?" "Oh, no. My sister and I were in a few weeks ago, and the ribs with that new barbecue sauce are wonderful." He appeared pleased to hear that. "They've been good for business, that's for sure. Everybody must be spreading the word. Do you want to buy in bulk?" She gave a small laugh. "No. That's not why I'm here. I'd like to talk to you about Drew Pierson." At that, Mr. 
Dennis took a step back and frowned. "I don't think that's a good idea. Besides, why would I want to talk to you about him?" "I'm friends with his grandmother, Rowena Pierson." Mr. Dennis looked a little less hostile, but he still wasn't ready to cooperate. "I already talked to both detectives on the case. What do you have to do with it, besides being friends with his grandmother?" "My sister and I found Drew." Mr. Dennis's eyes opened in shock, then he checked over his shoulder. Noise poured from the kitchen area. He and Caprice both knew workers were milling about. He motioned to her to follow him. "Let's step in here." They proceeded a few feet down the hall, and he opened an office door. It was a small office, messy too. Papers were strewn all over the desk. But Dennis didn't go behind the desk. He just closed the door and stood right beside it. "Tell me again why I should talk to you." "Because my sister is looking for answers. She worked with Drew for a while. She runs Catered Capers." Dennis snapped his fingers. "The caterer who was fighting Drew for clients. Do the police think she had a reason to kill him?" "They might," Caprice admitted honestly. "So I'm trying to stay one step ahead of them. Please tell me what you knew about Drew." Dennis paced the office, then went around his desk and shuffled a few papers, from one side to the other. Afterward, he looked up at Caprice with a troubled expression. "I'll tell you what I told the police. Pierson was a scumbag in nice duds." Caprice wasn't necessarily surprised by the admission, but she was a little surprised by the venom in Bertram Dennis's voice. "Can you give me a reason why you thought that?" If she had to push, she would. Because she had the feeling this man knew something . . . something important. "So many reasons I probably can't count them all," he muttered. "But let's start with my daughter, Tabitha. Pierson acted as if he was interested in her. He took advantage of her. 
He used his charm on her so that he could present his barbecue sauce to me. Somehow he found out that I knew the CEO of Rack O' Ribs personally." "Why do you think he used your daughter?" "Because as soon as the contracts were signed, he dropped her. If it weren't for me, the CEO never would have heard of his barbecue sauce, let alone tasted it. I got him that deal because I thought he and Tabitha were serious." The bitterness was so obvious that Caprice wondered if there wasn't more to this story. "Did he bother your daughter after he dropped her?" "No. He wouldn't even take her calls." "Did he ask you for more favors? After all, you manage the local restaurant that was going to sell his barbecue sauce." "I had to sell the sauce, and it had to do well or my ass was on the line. I threatened to tell the owner of the chain what a true jerk Pierson was, but Pierson blackmailed me." "How's that possible?" "Tabitha trusted him. She thought they had a future. She told him things she didn't tell anyone else. She told him things about me." Caprice kept silent, not knowing if this man would tell her what those things were. When Dennis didn't go on, she prompted, "Personal things about you?" He looked uncomfortable. "Do you really need to know this?" "Anything I know about anyone's connection to Drew will help me . . . and my sister." He ran his hand through his hair. "All right. The detectives already know and so does my wife. No secrets anymore. Tabitha found out I had an affair last year. She saw a text that came in on my phone. I didn't even know she knew, but Drew Pierson did. He threatened to tell my wife if I made any statements about him to the CEO of the chain. He had me. I made a mistake, and he knew about it, so I was going to sell his barbecue sauce if it killed me." "And now?" Caprice asked. "And now my daughter thinks I'm a cheater. My wife knows I'm a cheater, because Tabitha told her after Pierson was murdered. 
She was so upset about everything that it tumbled out. And Drew Pierson's barbecue sauce is a doggone success. There's irony in that, don't you think?" There wasn't only irony in it, there was motive for murder in it. Tabitha, Bertram Dennis, and maybe even his wife could be placed on that suspect list. The suspects for Drew's murder were multiplying much too fast. * * * The tennis courts at the Country Squire Golf and Recreation Club were much different than the public courts Caprice and Nikki had played on. These courts were maintained with pristine white nets and janitorial care that rivaled any a maid would give inside a mansion. Caprice knew her way around the Golf and Recreation Club because she'd gone to dinners here, given workshops here, even played golf—very poorly she might add—with clients. But she'd never played tennis here. Walking along a golf course on a beautiful day was preferable to running and sweating and tripping and exerting energy in the late-afternoon sun. But today she'd do that. Anything to get more answers to her questions. She'd wondered on the way here if she should just go visit Bronson's Happy Camper RV site. But if she did, he'd be on guard. If she did, he'd know his way around and she wouldn't. If she did, she couldn't easily cut off a conversation when she wanted to, or leave when questions got too sticky. No, this was the better venue and she should stop second-guessing herself. Was Grant second-guessing himself now that he'd brought Naomi to Kismet? Did she like the Purple Iris, the small-town bed-and-breakfast? Did she feel Kismet could house her aspirations? Those questions led to the fact that Caprice was concerned Naomi would move here to be close to Grant. Should she text him that she was thinking about him? With all of that racing around in her head, it was very easy to work off her frustration. Every time she imagined one of those questions she couldn't answer, she slammed that ball so hard, she won her point.
Roz was a little taken aback at how hard she tried. "When did you learn to play like this?" "I'm just playing to get the most out of the game." "Or demolish your opponent. This is just a friendly game, Caprice. I lob a ball to you. You lob one back to me, and we keep it going." "I don't know if that's enough to attract Bronson's attention." "Then find it another way." Caprice and Roz were into another set when Bronson and a second man about his age came onto the court next to theirs. Caprice kept playing, not wanting to be obvious. This could be a very long exercise stretch if she wanted their meeting to seem casual. She and Roz played. Bronson and his partner played. Finally, somehow they all managed to take a break around the same time. Plucking a towel from a nearby bench, Caprice flung it around her neck and Roz did the same. Bronson and his partner were talking at a bench close by. When he looked away for a moment, their gazes connected. After a few seconds, she smiled and gave a little wave. Bronson said something to his partner and came toward her. "Caprice, isn't it?" "Yes, it is. And you're Bronson." He laughed. "So we both have a good memory, except . . . I've never seen you playing tennis here before." "Roz Winslow invites me to the Country Squire as her guest. I guess our court times have never matched before." He didn't know this was her first court time. He looked in Roz's direction as Roz spotted a friend on another court and crossed to speak to her. "Mrs. Winslow. She was widowed last year, right?" He snapped his fingers. "You helped solve that murder, and now she's dating your brother." "You are up to date." "Partly that. But Mrs. Winslow is a high-profile woman. She was before her husband was murdered, and she is now." "I don't know if that's good or bad." "People consider her a mover and shaker. Ted Winslow definitely was. She owns a shop in town, doesn't she?" "She does. All About You." Bronson gave a nod toward Roz again. 
"And she knows everybody under the sun. She has great public relations skills." "That sounds as if you wouldn't mind dating her yourself," Caprice observed. "I wouldn't. But I didn't want to move in too quickly. Then she started bringing your brother around. I don't horn in on another man's territory." If he was being honest, she respected that. "Since we're being so honest"—she gave him a somewhat flirtatious smile—"I have a question for you." "Go ahead. I'll answer it if I can, especially if it's about recreational vehicles." Always the salesman, she thought. "No, not about campers. It's about Drew Pierson." "What about Drew?" "I know you were good friends, and I know you let him work out of your kitchen." "I did. It was stupid of him to pay for another facility when I had that big kitchen that he could use." "Did you ever talk about his cooking with him?" Bronson considered her question. "He tested recipes on me, and I tasted them. He said I should get some benefit from his cooking in my kitchen." "My sister says that Drew was an adequate cook but that he never created recipes, that he didn't have that talent. Did you notice if he created in the kitchen?" Moving his towel back and forth across the back of his neck, Bronson shrugged. "When I watched Drew in the kitchen, he usually had a recipe to follow, one printed off the Internet, or something like that. Does that help?" "I'm not sure. Nikki claims he stole one of her recipes and served it at the wedding expo. Someone else who knew him claimed he stole his recipe. That's why I wondered." "I just knew Drew was a good cook, not where his inspiration came from. Though if you're talking about the barbecue sauce recipe, I know Mario Ruiz thinks Drew stole it from him. The two of them had a loud and serious argument a few days before Drew was killed. I came home from work and found them practically at each other's throats. I didn't want my place torn apart, so I told Mario to leave or else I'd call the police." 
"Do you think Drew did steal the recipe?" "I don't know. Drew wouldn't talk about the fight afterward. He just muttered some comment about everybody was angry with him. Bertram Dennis was the other man who wanted a piece of his hide. Drew told me if Dennis came around, I should tell him I didn't know where Drew was." "Would you have lied for him?" Bronson didn't hesitate. "I would have lied if it meant keeping Drew from getting beaten up or hurt. Sure." "Nikki believes that Drew was using her to get ahead. Do you think that's something Drew would do?" Bronson sighed. "I hate to admit it, but Drew could be a real cad with women, dating them if he thought he could gain something from it. We were friends, but that didn't mean I approved of all of his tactics." In spite of her concerns about who killed Drew, Caprice found herself liking Bronson. He could be lying to her, that was true. But he seemed honest about his friendship with Drew, and what he thought about it. Roz came back on the court about the same time Bronson's partner stood at the bench and motioned to him. Bronson gave Roz a wave, saying to Caprice, "If she ever splits up with your brother, let me know." Then he smiled and jogged over to his friend, who was already on the court again. When Roz came to meet Caprice at the bench, she asked, "Did you find out anything essential?" "I'm not sure. He just confirmed a lot I already knew. Though he did tell me Mario and Drew had a fight a few days before Drew was killed—a serious fight. Mario neglected to tell me that." "He doesn't want any suspicion coming down on him. Can you blame him?" "No, but if the police talk to Bronson and find out, he'll be under suspicion anyway, maybe doubly so." "Anything else?" "Bronson could be interested in you." "You're not serious." "Very serious. If you and Vince ever break up, he told me I should let him know." Roz glanced over at Bronson. "I never thought of him in that way. 
I mean, we see each other around here a lot, but he's never shown any interest." "He was giving you time." "I see," Roz said seriously. "That was kind of him." Standing there together, they watched Bronson and his partner play. Anyone could tell Bronson was a consummate athlete. He handled himself, the ball, and the racquet to perfection. "I certainly can understand why he's one of Kismet's most eligible bachelors," Caprice said. "But I'm dating your brother, and I like it that way." Caprice waved her racquet at the courts. "Have you had enough of this?" "I have if you have." A sly glint came into Roz's eyes. "How about an ice-cream sundae from Cherry on the Top after all this good exercise? I'm not meeting Vince until eight o'clock." Caprice knew she shouldn't. She should eat a healthy meal and forget the ice cream and toppings. But Roz was her friend, and she needed a breather from worry and work. An ice-cream sundae and chatting with Roz again could be just the break she needed. * * * The following morning Caprice did a last examination of the hacienda that was so much more than a house. Real estate agents would be here in about fifteen minutes to take a walk-through. They'd be snapping photos and shooting their own video footage. This was an absolute gem, and Caprice was sure she and Juan had done a beautiful job of staging it. It wasn't trying to be something it wasn't. The house had an earthy energy that flowed throughout. Its magnificence would come through easily in the photos and on its video. If the open house tomorrow didn't sell it, the agents and the photos and Web sites would. She was sure of it. The woven rugs Juan had found bore geometric designs in blue and orange and fuchsia, the same colors that dominated the tiles lining the staircase. As she climbed those stairs, she peered down over the railing into the living room. Juan had found sectionals in leather and wood in a rich shade of royal blue. 
The end tables were topped with intricate mosaics in rust and orange. Somehow the splash of colors in each room worked together to coordinate the whole house. She'd reached the master suite with its dark wood floor and brass bed, with a headboard that reached halfway to the ceiling, when her phone buzzed. She stepped into the master bathroom with its marble sunken tub and stand-alone shower big enough for two and pulled out her phone. She saw Bella was calling. "Hi, Bee, what's up?" "We sold our house!" "That's wonderful. Tell me about it." "Two contracts came in at once, so we got full asking price." "I'm so happy for you, Bee." "I wanted to tell you because . . . I have a favor to ask." "What kind of favor? Do you need me to sit in on the paperwork?" "I don't know about that, but that's not the favor. I know you're busy, but I've been watching the stats on two houses that are online. One of them has been on sale for a year. Can you come look at them with me and Joe?" Caprice checked her watch, estimating how long she'd be tied up here. "What time do you have in mind?" "We're open to what works for you." "How about five o'clock? That will give me all day here if I need it. I never know how long the real estate agents will take. Where do you want to meet?" "My neighbor will be watching the kids, so why don't you just come over here." "I'll be there at five." "You're the best." Caprice ended the call, smiling. Then she heard voices coming from downstairs. The house had a state-of-the-art alarm system, but she wasn't the only one who knew the code. Denise Langford knew it, and she was probably letting all of the other agents in. Caprice took a last look around the upstairs and then went down to meet them. Two hours later, after hearing more oohs and aahs, and everything in between, Caprice told Denise, "I'm going outside to the patio. I want to make sure nothing got moved around so it's ready for tomorrow." 
She'd raised the outdoor umbrellas that would lend a festive quality to the back patio. Guests who filled their plates with Nikki's food could go out there and sit too if they wanted. Fortunately, the weather was all clear and called for a sunny day tomorrow. When Caprice emerged from the sliding glass doors off the dining room onto the covered terraced patio, she approved of Juan's concept of using outdoor furniture with brushed copper frames and colorful orange, blue, and rust cushions. She'd walked the perimeter and was studying the rest of the yard when the back door from the kitchen opened and a man dressed in a gray uniform with TROY'S DELIVERY SERVICE stitched onto the pocket of his shirt stepped outside. His gray cap matched. He held up a large bag. "Are you Miss De Luca?" he asked. The man was probably in his early twenties with a thin mustache and brown eyes that looked more sheepish than anything. He said, "You look like the woman in the photo I was sent." Caprice's skin started to crawl. "What photo?" "This was a crazy order," the delivery man said, setting the bag onto the frosted glass-topped table. "I have a courier service in York. I received an e-mail telling me I'd get a big bonus if I delivered a package to this address and to you. Your photo was included in the e-mail. I think it had been in the newspaper. That was so I'd know exactly who to give this bag to." Chills ran up and down her spine now. "So you don't know who placed the order?" "Nope. I was just instructed that this bag would be sitting on the ledge outside of Rack O' Ribs. I should pick it up and deliver it. No questions asked. Five hundred dollars for my trouble." "You didn't think that was odd?" "Sure, I thought it was odd. But money is money. My wife's pregnant. We need it." She could certainly understand that. But still.... She studied the tall white bag and didn't know if she wanted to know what was in it. He motioned to the bag. "Aren't you going to open it? 
It felt kind of hot underneath when I picked it up." Hot. Oh great. Maybe she should get the hose or the fire department or the police department. But the delivery man said, "It feels like one of those boxes like Rack O' Ribs gives you when you go through the drive-thru." She approached the table warily. The white bag was folded down at the top, and she opened it slowly. When she peeked inside she did see the Rack O' Ribs box. What the heck? She tore the bag wide open around the container and studied it carefully. It didn't look threatening. Not at all. It even smelled good. It smelled like barbecued ribs. She slipped open the flap that closed the container. Inside there was indeed a rack of ribs. But that wasn't all. There was a waxy paper with grease pencil lettering on it. It was attached to the ribs with a paring knife. The note read, Stop asking questions or this is what will happen to you. Caprice must have given a little squeak because the delivery man looked at her and said, "Are you going to faint?" But she didn't faint. She pulled out her phone and speed-dialed Detective Carstead. Chapter Sixteen A half hour later, Caprice and the delivery man stood on the patio with Detective Carstead as the detective listened intently to what had happened. "And the bag was just sitting on the ledge by Rack O' Ribs?" Detective Carstead asked. Troy Weyland answered quickly, as if he wanted to make sure the detective knew he wasn't the perpetrator. "Those were my instructions. Pick up the bag sitting outside the door on the brick ledge at Rack O' Ribs. I wasn't supposed to look inside or anything—just pick it up and deliver it here." "And this is your business, courier service, so to speak?" "Yes, I have two trucks. A friend and I went into business together about a year ago. We mostly deliver legal documents, like from lawyer to lawyer, and that kind of thing. But we're a courier service. We don't ask questions. We just do our job. 
The more deliveries we make, the more money we make. This seemed to be a simple one. I thought it was a birthday gift or something." "Some gift," Caprice muttered, staring down at the rack of ribs with the knife protruding from it. Detective Carstead had brought a tech along with him, and now he nodded to him. "Bag it all up and record it. We'll have it analyzed for fingerprints." "This isn't going to affect my open house, is it?" Caprice asked the detective. He looked angry for a minute. "You're worried about the open house rather than your life?" "They're not in the same category," she snapped, then was sorry she had. "Look, Detective, there's nothing I can do about this. Apparently someone knew I was going to be here. That wasn't any secret. It's the day before an open house. I have a lot to get ready. I've spent valuable work time on this, and I don't want it messed up because some idiot is trying to scare me." "You think that's all it is? Scare tactics? What if it's more? What if it's a prelude? What have you been doing, Miss De Luca, to cause this?" With a sigh, she told him about her conversation with Bertram Dennis, and then about "running into" Bronson on the tennis court. To her relief, Carstead didn't ask about how she'd come to be playing tennis on a court next to Bronson's. He said, "We knew about Bertram Dennis's daughter. We'll follow up again with him and with her about the package." He shook his head. "I'm not sure how you get the information you do." "I want to clear Nikki from your persons-of-interest list." He scowled, gave her a narrow-eyed look, and then maintained, "We have to consider anyone who had contact with the victim." "But my sister tops your list?" "I'm not going to say." He turned to Troy. "I have your information. I'll give you a call if we need to go over this again. Thanks for letting me examine your phone and retrieve the number where the call came from." "It's probably a burner phone," Caprice muttered.
This time the detective gave her a look of respect. "We'll check into it, but you're probably right." The tech had taken charge of the bag. He'd used some kind of electronic device and fingerprinted the delivery man so they could distinguish his fingerprints from others. The police already had Caprice's on file with AFIS—the Automated Fingerprint Identification System. The detective nodded to Weyland. "You can go. I'll be in touch if I need anything else." The tech went into the house at the same time, and that left Caprice with Detective Carstead. He'd come through the inside of the house, and now he gazed over the vast yard, with its butterfly bushes, hydrangea, and uniform flower beds planted with pink geraniums. Although Caprice expected Detective Carstead to give her more warnings, he didn't. Instead, he said, "I can't quite imagine living like this. Can you?" "You mean the largesse of it?" "Yeah. It's almost too big to contemplate. A house with enough rooms to get lost in, and probably so many bathrooms no one would ever use them all." His arm swept over the landscape. "This kind of yard where a dog or a kid would be out of sight in a minute." That was interesting. It sounded as if Detective Carstead dreamed of a house with a yard where he could have a family, including a dog and a kid. She asked, "Do you have a yard now?" He gave her a look that said he didn't know if he should answer or not. But then he did. "Not my own. I rent an apartment on the first floor of an old house. I cut the grass for the landlord once a week, but that's about it." "I know real estate agents," she teased, "if you're ever looking for a house to call your own." He actually gave her a smile. "I guess you do." He stared at her a few moments, shifted on his feet, and then asked, "Is your sister involved with anyone?" Thinking about his question, studying his almost embarrassed-looking expression, she asked, "Is that a question for the investigation, or is it personal?" 
"It's personal," he admitted. Without causing them further awkwardness, she answered, "No, she's not, but she's very picky." Detective Carstead's eyes gleamed with what Caprice thought was amusement as he nodded. "Good to know." Then his warning came again, but it was a little different from those he'd given her before. "Watch yourself, Miss De Luca. Remember that talking to the wrong person could be as dangerous as chasing a getaway car down a high-traffic highway." "I understand, Detective, I do, and I promise, I'll keep you in my loop." He shook his head. "Your persistence is admirable. I just don't want it to be regrettable." On that note, he left her standing on the beautiful patio, thinking about what he'd said. * * * Caprice wasn't telling anybody close to her about the Rack O' Ribs threat. She didn't want her family worrying about her. She would take care and not go anywhere alone . . . at least not for the next few days. She was meeting Bella to look at the houses, and then she'd go home to her pets inside of her alarm system. She'd keep her phone near her hand so she could dial Detective Carstead if she needed to. Tomorrow was the open house, where scads of people would be all around her. Nobody would be able to get near her. For today and tomorrow, at least, she would stop asking questions. However, at some point, everything would come to a head. It always did. If the murderer had his eye on her, she'd want it off of her. But for now, she'd help Bella and Joe decide whether they should buy a house. At five-thirty, Caprice rode with Bella and Joe in their red van to the first house on their list. The real estate agent was there when they arrived. The neighborhood, maybe about ten years old, was located near the shopping center on the east end of town. As they entered the house, Caprice knew it was considered a high rancher. That meant inside the foyer, steps led down to a family room and basement area. 
Another set of steps led up to the first floor, which consisted of the living room, dining area, and four bedrooms. That was the main aspect of a new house that Joe and Bella were looking for—a room for the baby that could be a guest room later, a room for Timmy, a room for Megan, and a suite for themselves. The bedroom area in this house stretched over the garage, and Caprice thought about that garage being unheated in the winter and the cold floors. They went upstairs first without Bella and Joe making many comments. They would have to install new carpet in the living room and dining area. When they toured downstairs, they saw that the family room was large and spacious, but there was only a small basement area for storage. And, of course, the whole place would have to be redecorated to Bella and Joe's taste. After they'd scoped the yard, which was mostly grass with no shrubs, they all gathered on the front walk. Their real estate agent, Kayla Langtree, who was in her late thirties, wore her hair in a blunt, straight cut, neck length. Her large green eyes were her best feature. She was only five foot three, and Joe seemed to tower over her as he asked, "Will they come down in price?" "Every deal is about negotiation now," Kayla said. "But this house has been on the market for a year, and they've cut the price three times. So I don't know if you can get them to go lower." "The question is," Caprice interjected, "Do you like the house?" She studied Bella, not Joe. "It's all right," Bella said, not with much enthusiasm. "It has four bedrooms," Joe pointed out. "That's what we need. Even without the price going lower, it's in our range." "I don't know if that's a good-enough reason to buy a house that you're going to live in the rest of your lives," Caprice advised them. "Bella, you're not saying much." That wasn't like her sister at all. Finally, a conclusion burst out of Bella. "It doesn't have any character." Joe looked puzzled. "What do you mean, character? 
We can decorate it however we want. You can even paint the walls your favorite color instead of the green I like. We need to find something, Bella." Although Bella and Joe's marriage was back on track, they still had their disagreements and their personality quirks. Joe obviously didn't understand what her sister was talking about. "Can you explain to Joe what you're looking for?" Caprice asked Bella. "I'll know when I find it," she said, crossing her arms over her chest. "It's just something—" She waved her hand in the air. "The house where I grew up has character. It has a red-tiled roof and casement windows and plaster walls. It has a multitiered yard and a little balcony." "Your parents' house would be way out of our price range," Joe grumbled. "Caprice's house has character too," Bella protested. "It has that arched front door that you don't see anymore, and that little copper overhang. It has a fireplace and a cute back porch." "Caprice's house wouldn't be big enough for us," Joe pointed out, still not quite getting the message. Caprice put her hand on Bella's arm. "Joe, what I think she's looking for is a house with unique qualities. This isn't like buying a car with a checklist. It's more like finding a house that had some love in it, even if that was just in the choice of a Quoizel ceiling lamp." Joe shook his head. "How about we go look at the second house." He apparently had learned to accept Bella's thinking without arguing with it. That was smart. As Joe drove them to the second destination, Caprice noticed they were headed toward her parents' neighborhood. Maybe they were just going to drive through that area. "This house is near Mom and Dad's?" "About a block away," Joe said, with a straight face, not letting his feelings on the subject show. Bella glanced over her shoulder at Caprice and just gave a shrug. "In the listing it seemed to have what we need. It's a much older house, probably a hundred years old.
But it has four bedrooms and a renovated kitchen. The picture on the Internet shows some large spruces on either side of it, so that could be why we haven't noticed it and why I don't remember it." When Joe pulled up to the curb in front of the house, Caprice remembered it. She'd once ridden her bike up and down these streets and often passed it. It was tucked between the spruces, which had grown larger over the years. There was a myrtle-covered bank in the front, and eight steps led up to the full front porch where a wooden swing was attached on the left side. A cane rocker and a small table sat near it. Caprice guessed if she peeked in the large plate-glass window there, she'd see into the living room. After they climbed the steps, Joe said, "Someone's going to have to paint these porch railings at least every other year." "I like the white with the pale yellow siding, don't you?" Bella asked, ignoring his paint remark. He gave her a shrug. The door was an old-fashioned one with sidelights on either side. The white storm door was decorated with a black emblem of a carriage with a horse. Once inside the foyer, Joe stared down at the parquet floor. "Is this practical with kids?" Kayla said, "It has a polyurethane finish." Blue and brown tweed carpet covered the steps, which led to a landing, then turned left to the upstairs. Caprice could see the kitchen straight ahead. To her left, wooden pillars looked as if they supported the living room. The woodwork over and around the doors and the archway was a deep, rich golden brown and appeared to have been taken care of over the years. Bella ran her hand down one pillar. "Isn't this beautiful? And look at those French doors." French doors led from the living room into what could either be a dining room or a family room. Bookshelves in a beautiful birch lined the wall straight ahead in that room. "A lot of care has gone into this house," Bella said. She turned to Kayla. "Why are they selling?"
"An older couple lives here. They're moving into one of those retirement villages. You know—one floor, wheelchair accessible. The steps are becoming a problem for them. But they raised their family here, and as you can tell, the house has been renovated through the years and well taken care of." In the kitchen, Caprice glanced around and couldn't find any fault. There was a unique corner sink with a counter that stretched across the room. The dishwasher was housed on one side of it, but on the other side were four stools for anyone to sit and snack or have a light meal. There was a large-enough area behind that for a dining room table and a hutch. A four-foot-square plate-glass window looked out over the backyard. Kayla motioned to the left. "That's the downstairs powder room." She opened a second door beyond it. "This leads to the utility room and the pantry." "What's the basement like?" Joe asked. "Basic. Cement floor. Furnace." "And what about a garage?" "There's a wooden structure in the back that the couple had sided when they sided the house. It can house two cars." "But you have to walk the length of the yard to get to it," Joe murmured. "Good exercise," Bella maintained, and Caprice understood with just one look at her sister that she'd fallen in love with the house. "We'd have to redo the basement for a family room," Joe said. "Or," Bella proposed, "we could just use that middle room for a family room and the TV. The kids would be right here with us. We don't need a formal dining room. A table in the kitchen's dining area can seat six or eight, and we have the counter too where we could always put the kids." Kayla smiled. "You haven't seen the upstairs yet. Granted, there's only one big bathroom for all of you, but there's a screened-in balcony off of the back bedroom. It was once a porch, but now it's closed in and weatherproofed. There's room for three or four lawn chairs, maybe a chaise. It's sort of a little sunroom." 
"Oh, let's go look," Bella said enthusiastically. She followed the real estate agent to the stairs. "She likes this place," Joe said with surprise in his voice. "It has character," Caprice said calmly. "How do you feel about being so close to Mom and Dad's house?" "I'm fine with it," he assured her. "I don't know what we would have done without them over the past year. I never thought I'd say it, but I'm grateful for all of you, even you." She and Joe had had their head butts, but underneath it all, he was a good guy. "If this is what she wants, we'll see if we can negotiate a good price. I can live with the garage," he decided. She laughed. "And if you need help decorating, you know my number." Chapter Seventeen The Monday morning breakfast meeting of the Kismet Chamber of Commerce at the Purple Iris Bed-and-Breakfast seemed to be attended by more members than usual. Because summer allowed for more freedom? Or because the Purple Iris with its beautiful restaurant wasn't only a tourist destination but also a popular place for residents of Kismet and York to gather? The bed-and-breakfast had a quaintness about it, from its stained glass windows with iris motif to its vintage wood trim. The restaurant, a recent addition by owner Holly Swope, had already garnered great reviews. The Chamber of Commerce had managed to reserve it for this morning's meeting. Holly had offered to serve her guests breakfast in their rooms to accommodate the Chamber patrons. Naomi, Grant's ex, could be having breakfast in her room right now. The nine guest rooms were usually occupied throughout the summer because the bed-and-breakfast was at a convenient location between York, Gettysburg, and Lancaster. Lots of sights to see in the area, from the Amish country, to the battlefield, to the historic buildings in downtown York. 
When Caprice entered the dining area with its lilac-colored drapes, yellow-and-purple pin-striped wallpaper, and framed iris prints, Holly—all dressed in purple—greeted her. "I'm glad you could come," Holly said, her blue eyes sparkling. The evening Caprice and Seth Randolph had enjoyed dinner here, the owner of the B&B had reminded her that the Chamber of Commerce breakfast was a way to network and that new projects were afoot. When Caprice thought about Seth, she didn't regret her decision to end her romantic relationship, if not her friendship, with him. But they hadn't been in contact since the night she'd chosen Grant. She hoped sometime in the future they would be again, no matter what happened with Grant and Naomi. Grant's ex should be leaving soon. Had he made a decision yet? Her attendance at this morning's breakfast was just another way for her to put that out of her mind. Besides, she wondered if Jeanie Boswell might be here, or another business owner who might have had dealings with Drew. Holly ran her hand through her short black hair. "You're early. Have a seat at any table. Service will start soon." Members of the Chamber were scattered around the room at the round tables, over which hung white enamel and crystal chandeliers. Caprice scanned each table until her eyes fell on someone she knew. Kiki, Rowena's friend, was seated at a table all by herself. She'd been reading a pamphlet, but when she raised her head, she spotted Caprice. She waved and Caprice crossed to her. "Have a seat," Kiki said, motioning to the one beside her. "Unless you're meeting someone else here." "My friend Roz will probably attend. Maybe we could save her a seat?" "Of course we can. Are you talking about Roz Winslow who owns All About You?" "I am." "I often frequent her shop. She doesn't just carry those short dresses with bare midriffs for twenty-somethings but classy clothes I can wear too." "She tries to appeal to all ages and all sizes." 
"Rowena told me you're still looking into Drew's background. You heard about the break-in at her house?" Kiki wanted to know. "I did. I can't believe she went after the burglar with her cane. She is one gutsy woman." Kiki harrumphed, "Or foolish. I told her she should be staying with me until this whole situation is taken care of, until the police catch the murderer. She insists no one's going to drive her out of her home. I suppose that's a good thing because that means even Jeanie can't do that." "Is she trying to?" "She brought an antique dealer to the reception after Drew's funeral. I was furious when I found out." "How did Rowena feel about it?" "Maybe she's survived by taking everything in stride, or maybe she just accepts Jeanie as she is. But she didn't seem overly upset by it. Jeanie's trying to convince her to move into a retirement center, but Rowena won't hear of it. She even told Jeanie she appreciated having the antique dealer come in to appraise some of her belongings. To tell you the truth, I think that got Jeanie's goat." Caprice had to smile until she reminded herself that Jeanie could very well have murdered Drew. "Do you know if Jeanie's coming to this meeting?" "I doubt it. She really keeps to herself and doesn't mingle much. She never has. Quite the opposite of her brother." "So she was a loner as a teenager?" "Yes, she was. On the other hand, she could fly off the handle quite easily. In that way, she was different from Drew. Jeanie was fifteen when she pushed a girl down a flight of stairs because the girl said something mean. I advised Rowena then she should send Jeanie to counseling. But Rowena didn't want a stigma attached to Jeanie's name. She did make appointments for her, though, with the guidance counselor at the high school, and that seemed to help." "Sometimes all teenagers need is an objective listening ear." "I suppose. I have a feeling she resented Rowena because she missed her parents so deeply. 
Rowena was the surrogate she didn't want to deal with." "That's a shame." "Drew was always closer to Rowena, and maybe Jeanie resented that too." "Maybe she just wanted someone of her own to love her." "Possibly. That's why her marriage was a flop." "She's made Posies into a success, though," Caprice offered. "Yes, she has, and kudos to her for that. She did well in business school. She has an associate's degree. When she used her inheritance to buy Posies, Rowena was concerned. But she's done well with it, except for one problem. She has trouble keeping employees. I don't know if that's because of her temper or because she's very particular. However, she does do beautiful work with flowers." Did Jeanie just have an artistic temperament? Had she become a smart businesswoman, or was she an unstable loner who was capable of committing murder? Caprice didn't feel she could be that blunt and ask Kiki that particular question. Nevertheless, she could probe a little deeper. "So you believe Drew and Jeanie were very different personalities?" "Oh yes. Drew could charm the skirt off of you if he put his mind to it. He had charisma and he knew it. I knew him much better than Jeanie. He'd talk to me when Rowena and I got together. If we played cards, he might even sit in on a hand. If we cooked, he joined us. He came into the bookstore often too." Kiki frowned and looked sad. "We had our last falling out over that." "You had a falling out with Drew?" "Since he returned to Kismet, he'd come into the store and page through cookbooks. However, he didn't just page through them. He used his phone to snap photos of recipes. I couldn't allow that. It's not fair to the cookbook author. Rowena's my friend, and I didn't want to cause a real fuss, so I warned Drew not to do it. After the warning, I caught him doing it anyway. He always tried to get his own way, no matter what." And when Drew was thwarted, he apparently made enemies.
Now that she had Kiki talking, she wondered if the woman would confide in her about Rowena's recipes. "I have a question for you, and I'll understand if you can't answer. When I found Drew, the Tiffany floor lamp had been overturned and was lying on the floor. I saw a piece of paper sticking out. Can you tell me if Rowena's recipes were hidden in there?" Kiki didn't say a word, but her expression said it all. At first she showed surprise and then a shuttered look that said she was hiding something. Caprice patted her hand. "It's okay. I haven't asked Rowena yet, but I'm going to. I feel her recipes could be an important part of the puzzle—of solving who killed Drew." The dining room was filling up now with more Chamber members arriving and seating themselves at the tables. Caprice caught sight of Bronson Chronister, who entered with two men. One of them, Warren Shaeffer, was the president of the Chamber of Commerce. Caprice didn't know the second man. Caprice said, "I heard Bronson might run for Chamber of Commerce president. He's with Warren." Kiki targeted her gaze toward the door. "He's not only with Warren, he's with Ira Rogers." Caprice thought she knew most of the Chamber members. After all, they received a list with every newsletter, and she could identify most of the names. "Who's Ira Rogers?" "He is a fund-raising guru." "Does the Chamber intend to have a fund-raiser for a special project?" "That's possible. But I suspect Bronson brought him along as a guest for another reason. You know my bookstore is a haven for gossip. Residents come in and I overhear lots of conversations. Some I'm not supposed to hear—women leaving their husbands and the reasons why, businesses failing because of poor management, some of the doings in the police department even." Caprice could bet Kiki kept her ear tuned in to all of those conversations. 
She'd formed little reading nooks where her bookstore patrons could be as comfortable as guests as they paged through the latest novels or magazines. "So what other reason might Ira have for being here?" "There's scuttlebutt that Bronson's throwing his hat in the ring for a seat in the state house. I've also heard he has his eye on Congress after that. With his family money behind him, fund-raising at grassroots level on up, and his own success, he has the wherewithal to rise in the political scene. You mark my words. State house. Senator or governor. President." Wow. Caprice hadn't thought that far ahead for Bronson. But why not? "I imagine he'd interview well and look good on a TV screen too." Kiki laughed. "That's what it's all about these days, isn't it?" Caprice hoped if Bronson was elected, he would be elected for more than his good looks and facile interview skills. With her attention focused on Bronson, she didn't see Roz come in, but her friend hurried over to the table and asked, "Did you save a seat for me?" Kiki motioned to the seat on the other side of her. "With you two smart women here to have a confab, I imagine we could think up projects on our own that would benefit this Chamber of Commerce. Let's give them ideas to bring more tourists to this town." Roz laughed. "I'm game." Caprice said, "I'm in too." The three of them had much to talk about over breakfast, and Caprice had a lot to think about concerning Jeanie Boswell and her brother, who might have been her rival not only for Rowena's affection but also for her inheritance. * * * When Caprice returned from the breakfast, she realized the meal and the meeting hadn't taken as long as she'd expected. Parking at the curb instead of in her driveway, she decided to take a short detour over to Dulcina's house to see how she was doing with the cat, and to find out how Rod and his girls had enjoyed the concert. Lady would be okay for another fifteen minutes. 
She knew she probably worried about her furry crew more than most, but they were like her kids. Though she had to admit, she wanted children, as well as fur babies. Caprice crossed the street and went to Dulcina's door. She pressed the bell. If Dulcina was busy, she wouldn't stay. When the door opened, Dulcina's headphones lay around her neck. "If you're working, I don't have to come in," Caprice assured her. "I just wanted to see how you and Halo were getting along, and if the concert made any inroads into your relationship with Rod's girls." "I do have a bunch of records to transcribe this morning, but come on in. I want you to see what I fixed up for Halo." No mention of Rod. Hmmm. After they traversed the living room and entered the kitchen, Dulcina motioned to the cubby under the built-in desk where a stool could sit. Halo was nestled into receiving blankets in the storage bin, sound asleep. "She likes this one best," Dulcina pointed out. "I put another bed in a darker corner of the sunroom. She uses them both, so maybe she knows what they're for. It's as if she's nesting. Sometimes she'll get in, go around in a circle, and paw the receiving blanket before she sits down." The receiving blanket was patterned with cute little yellow ducks waddling across it. Caprice crouched down beside Halo. Although the cat had looked as if she were sound asleep, her ears twitched and she gazed up at Caprice. Caprice said, "You're going to be a mom. I guess you're getting ready. Are you eating a whole lot?" "I've been feeding her about every four hours and she gobbles it down. I give her crunchies in between. She's such a sweet cat. I don't know how she ended up out there on her own." "She could have gotten lost and not been able to find her way back. Then if she was injured, she might have wandered even farther. Or someone could have put her out because they couldn't pay for the cat food. It's hard to know. But she does seem like a real sweetie.
We can hope she knows how to mother." "I'll help her," Dulcina said with certainty. "I won't hold you up," Caprice said. "How did the girls enjoy the concert?" When Dulcina hesitated, Caprice knew there were probably problems. "Leslie wasn't impressed. She said she didn't connect with Ace's music. But during the concert, I saw her foot tapping along. I even caught her snapping her fingers at one point. She wants to seem so removed, and I don't know if I'll be able to reach her." "And his younger daughter?" "Vanna was into it. She even asked her dad if she could download Ace's music after they were back here. They stopped in for a few minutes because Vanna wanted to see Halo. That was the only hopeful sign of the evening. Leslie even asked me questions about the pregnancy and how long it would be until the kittens were born. I told them after the kittens were old enough, they could come over and play with them because I'm sure they'll have energy to burn. So maybe kittens can make a difference when a rock concert didn't. It was sure nice to have the evening out, to sit next to Rod and just hold hands. For a change, he didn't seem to mind doing that in front of his daughters." "So there's progress." "Yes, there's progress. How about your investigation?" "I feel like I'm advancing in baby steps. At the Chamber of Commerce breakfast this morning, I learned a few things about Drew and his sister. But also about one of Drew's friends, Bronson Chronister." "There was talk on one of the local shows about him running for office," Dulcina offered. "I guess the rumors are true. There's scuttlebutt he might run for a seat in the state house with an eye on more." "He's easy on the eyes." Caprice laughed. "On that note, I think I'll leave you to your work. Lady misses spending time with you." "And I miss her. Maybe you could bring her over and let her meet Halo." "We can talk about that more later." 
She offered her hand to Halo, and the cat rubbed her cheek against Caprice's palm. Already she could see a difference in the feline. She was beginning to trust humans, and that was a big step. Dulcina's kindness had done that. Caprice left Dulcina's, walked across the street, and fished her key for the front door out of her purse. Once inside she pressed in the alarm code to disengage the system. Lady came running. Mirabelle was stretched out on the fuchsia oversized chair, while Sophia sat atop the afghan on the back of the sofa. Two sets of golden eyes studied her while Lady danced around her feet. Caprice had picked up the mail from her porch mailbox on the way in. She didn't pay much attention to it, because her first concern was letting Lady outside. Still she gave Lady the hand motion for "sit." After a bit of tail wagging, sit Lady did. Caprice praised her and petted her, and then patted her hip and said, "Come on. After you do your thing, maybe we can play a little fetch." In the kitchen, she deposited her purse on the counter but kept the letters in her hand as she let Lady outside and followed her onto the porch. Lady wasted no time running into the yard. Caprice remembered the days when she ran along with her to give her the "go potty" command. But now Lady didn't need that. The day could turn into a sweltering one, and she might have to turn on her air-conditioning. The zinnias were starting to bud. The snapdragons she'd planted in bunches were colorful against the reblooming lilacs. She took in a deep breath of the summery air, closed her eyes, and appreciated the scents of the season. Sitting on the glider on the porch, she turned to the letters in her hand. There were bills, of course. There were always bills along with ads for products she'd never use. A letter-sized envelope caught her eye. It was one of those envelopes with the blue stripes so that you couldn't see what was inside. No one wrote letters these days. They sent e-mails. 
So she couldn't imagine whom it was from. There was no return address. That should have been her first warning. But she was watching Lady and appreciating the day and thinking about meeting Roz and Vince at Cherry on the Top for ice cream. Roz had suggested it at breakfast. Vince and Roz would give her fresh eyes on everything she'd learned about her investigation. She wasn't surprised by the white piece of paper she pulled out of the envelope. It was folded in thirds. But when she opened it, the printing alerted her she might not like what it was going to say. She didn't. The printed letters, more reminiscent of a child's writing than an adult's and a little jagged, read, If you value that pretty dog and your life, stop asking questions. The threat was as explicit as the knife in the rack of ribs. There was only one thing to do, of course. She speed-dialed Detective Carstead. * * * Cherry on the Top was like a step back to the fifties when ice-cream sundaes could be the best part of anybody's week. Caprice often tried to convince herself that the dairy concoction with walnuts on top could be a balanced meal. Tonight she'd skipped supper to have the sundae sitting before her, a scoop of vanilla with strawberry glaze, and a scoop of vanilla with chocolate fudge sauce. Whipped cream topped it, and walnuts were sprinkled over the whole thing. She sat across the Formica-topped table from Vince and Roz. Up until now, they'd kept the conversation light. The way Vince and Roz interacted, the way he laughed at her jokes and she fondly brushed his arm, told Caprice they were definitely a couple. But how serious a couple? Vince took a spoonful of his sundae, a CMP, and licked the spoon. Then he eyed Caprice. "Roz mentioned to me that she told you I asked her to move in with me." "She did." Caprice wasn't about to reveal any confidences or what Roz had told her in private. Vince seemed to realize that. "Are you going to convince her she should?" That surprised Caprice a little.
Vince didn't usually ask favors of her, especially not this kind. "Then I guess you haven't made a decision," she said to Roz. "No, I haven't. You know where I'm coming from, and so does Vince." Instead of convincing Roz of anything, Caprice addressed her brother. "Don't push." With a sigh he leaned back in his chair and crossed his arms over his chest. "You're supposed to convince her, not give me advice." "Since when did I ever do what I was supposed to do?" His lips twitched up in amusement. "Maybe when you were about five." "You two will work it out," she said. "Just be patient with each other." "We're thinking about planning that vacation you suggested. It could be a smart idea," Vince responded. "I have them now and then," Caprice teased. Yet she wasn't in a teasing mood. She was worried. She couldn't hide much from her good friend or her family, so she decided to tell them about the letter. "I had a meeting with Detective Carstead this afternoon." Now Vince was on alert. "What about?" Deciding to confide in her brother and Roz, she explained about the rack of ribs threat and the letter. "When I called Detective Carstead initially, he told me not to touch it more than I had to and to slip it into a Ziploc bag. So that's what I did. He came by to collect it." "And?" Vince asked. "I saw those doubts in his eyes. I asked him if he thought I sent it to myself to get Nikki off the hook." "He wouldn't think that," Roz protested. "I believe he did for about a minute. But then he admitted from what he knows about me, he doesn't believe I would do that. Of course, he wouldn't share any information about the investigation, but he did tell me they have positive leads they're following. At least that's something, coming from him." "So now what? Is he going to send a patrol car by your house every once in a while?" "He said he'd inform the patrol officers to be on the lookout, but they don't have the manpower to do that. 
He knows I have a good alarm system and I'll be careful. I'll be especially careful with my pets. If anyone comes near them, I'll use more than what I learned in that self-defense class on them." Roz and Vince exchanged a look that said they believed she would. "I can't just sit by. I can take care of myself, but anybody who threatens Lady is a real pervert and I'm going to find out who that is." "And just how are you going to do that?" Vince asked. "I'm still thinking about it." "Who are your suspects?" Vince wanted to know. She went down the list from Jeanie Boswell with her motives, to Mario Ruiz and his, to the relationships Drew had with Larry Penya and Bronson Chronister, and Bronson's aspirations to run for political office. She didn't know how they all played together, but they might. After Caprice finished, Roz looked pensive. "I keep thinking about that Tiffany lamp. Sometimes the shades are attached to the base. It's not that easy to just lift it up and use it as a weapon. What if that shade was removed from the lamp beforehand . . . before the murder? What if it was just sitting on the table?" That took Caprice's mind in a different direction. Certainly the murderer didn't remove the lampshade from the base before he hit Drew. The scene seemed more like an impulsive situation where maybe the murderer hadn't even intended to strike. Caprice realized if she learned the answer to why the shade was off the lamp, she might possibly know who the murderer was. There was only one thing to do in the morning. Visit Rowena again.

Chapter Eighteen

The summer breeze blew in the kitchen window of Rowena's house the following morning. Caprice had brought Lady along today, and Rowena seemed to enjoy interacting with her. While Caprice sat at the kitchen table, Lady lounging with one of her toys at her feet, Rowena served pineapple pomegranate tea in a cherished rose-patterned porcelain teapot. Caprice accepted the cup offered her.
This time, she'd brought along biscotti she'd made. Rowena took a bite from a cookie. "You say they're not like your nana's. And they aren't quite. But they're very good. I love the lemon icing." "She must have magic in her hands when she rolls them," Caprice offered. "I appreciate your visit today," Rowena admitted. She nodded to the cocker spaniel. "And Lady's. Kiki is going to be at the bookstore until eight o'clock tonight. So I enjoy the company. But I know you probably have more questions about Drew, don't you?" "Having tea with you isn't just about Drew," Caprice assured her. "After his murder is solved, we're going to have that grand tea party with Nana. And I'll still visit. I promise." Rowena nodded as if she believed Caprice and bent to pet Lady again. As always, Lady enjoyed the attention. "I have a whole collection of teapots and teacups that we can use when you come over. We'll try every flavor of tea there is to try. Kiki's not a tea drinker. She likes coffee. But nothing is more relaxing or comforting than a well-steeped cup of tea." After Caprice finished her cookie, she sipped at her tea from a cup of delicate fine china. She understood that Rowena treasured some of these belongings she'd had for decades. That brought her back to the subject of her Tiffany lamps. "I'd like to ask you about your Tiffany lamps again." "Ask away." "As far as you know, the shade was not off the small lamp before you left that day?" "That's right. Everything in the room was as it should be." "Do you know any reason why it would have been taken apart? That the shade would have been taken off the base?" Rowena appeared troubled. "I really have no idea. Unless, of course, Drew did it." She studied her hands, then her teacup. Finally, she turned her gaze on the floor lamp in the living room. "I do have my recipes hidden in the floor lamp. That channel inside is perfect. Yes, the lamp's heavy. But I can easily tilt it against the arm of the sofa and take recipes in or out . 
. . if I want to. But I never want to. I put them in there because I know them by heart. I don't need them to bake or to cook." "As far as you know, had Drew ever seen you take the recipes in or out of the floor lamp?" "No. Because I just don't. I don't know how he could have known they were in there." "How long have you done this?" "Since that episode of my card club member trying to steal them from me." "How old was Drew?" "That was shortly after he came to live with me. He might have been eleven." "Is it possible that over the years he heard you talking about those recipes being hidden in the floor lamp? Possibly to Kiki?" "I suppose that's possible." Caprice continued with that train of thought. "Maybe he suspected you kept recipes hidden in the table lamp, too. Maybe he checked every once in a while to see if you had inserted any." "I suppose that's likely. Do you think he was looking for the recipes in the lamp when he let the murderer in?" "If whoever came to the door was someone he knew, maybe he just left the lamp apart while he answered the door. There's no way of knowing, but it's as likely a theory as any." Rowena suddenly snapped her fingers. "You know what? I found Drew's yearbook for his senior year. Would you like to see it?" "I would." Rowena crossed to a stand of cookbooks on the counter and pulled a tall volume from the wooden holder. "After I found this, I just put it here so I could get my fingers on it easily. Kiki looked through it. I haven't. I remember too well what Drew and his friends looked like back then." She handed the book to Caprice. Caprice began paging through the volume. There were the usual shots—the football team and the cheerleaders. She found a photo of Drew and Larry and Bronson, standing at what looked like a lab table. The picture must have been taken during a science class. She commented about it to Rowena. "Larry was the one interested in science," Rowena explained. "I remember I bought Drew a chemistry kit one year. 
The three of them were in the basement using it. Suddenly they ran upstairs and told me I had to open all the windows. I don't know what they had done, but I think Larry was the instigator of that one. Now and then I found him helping Drew with his math. I think if Larry could have gone to college, he would have done well. But his family didn't have the money. And I don't think his achievements showed up well enough on paper to earn him scholarships." Paging through the rest of the yearbook, Caprice found Drew's photo in the lineup of the senior graduates. She studied his face. Larry's photo was right before his. As she turned each page, she thought about her own high school reunion that was soon coming up. For her, the past fifteen years hadn't changed the way she looked at the world that much. She might be more confident about what she did and how she did it, but her basic values were still the same. What her parents and teachers had taught her was ingrained and had become part of her moral code. As in most yearbooks, at the end of the volume, pages had been saved for autographs. She studied the signatures Drew had collected, which were mostly short comments—You did it! Congratulations, you passed. What's next, bro? But then she passed her finger over one that was a little longer. Larry had written, Hey Drew—Never forget we've got a pact. All for one and one for all. Larry. Just what kind of pact had this trio made? Something general, like they'd always be friends? Or had that pact been about something more particular? She considered both Bronson and Larry. Bronson's manner was too facile to give anything away. He considered carefully what he said and who he said it to. But from her conversation with Larry Penya, she had a feeling he might be more open. "Do you have any idea where I can find Larry?" she asked Rowena. "You mentioned he separated from his wife and moved out. 
I left a message at that number, but she hasn't called me back or given Larry the message to call me." "From what I understand, Linda is bitter about the marriage he couldn't give her. She stayed in the house, and she's a single mom trying to make payments on her own. You might want to give her another call or just try to see if you can snag her in person." "Do you know where she works?" "She works at that daycare center over near the mall—Little Tykes." "Then she should be home in the evening. I'll try to visit her tonight. Her attitude would probably be even more closed if I tried to visit her at work." "You're right about that," Rowena agreed. "Though it might not be much better if she's trying to take care of a four-year-old and get supper at the same time." "I'll have to take my chances." Caprice remembered the threat that had been made against Lady. She wouldn't let anything happen to her dog, her friends, or herself. The best way to keep harm from happening was to figure out who killed Drew and to do it quickly. * * * The house Larry Penya had moved out of was basically a box shape with a carport attached to one side. As Caprice had driven up to the curb, she'd spotted a shed in the back. The yard wasn't very big, so that outbuilding was close to the house. Still, there was a small swing set and a Big Wheel bike crisscrossed in front of it. As she walked to the front stoop, she had no expectations. Linda Penya might slam the door in her face. She hoped she could prevent that. When she pressed the bell, she didn't hear a corresponding ding inside. Not working maybe? Opening the screen door, she knocked. From inside, she heard "Just a minute" in an impatient voice. That didn't sound like a good start. The woman who opened the door looked frustrated. Her ash-blond hair was gathered in a messy topknot. Strands escaped around her face. She didn't even wait for Caprice to open her mouth. "If you're selling something, I don't want any. 
I have a four-year-old in the kitchen who's in the middle of supper." She turned and was about to close the door when Caprice stopped her. "Wait. This is important. I want to talk to you about Drew Pierson and your husband." That froze the woman in her tracks. She turned around slowly. "Who are you?" "I'm Caprice De Luca. My sister worked with Drew. I know Drew and Larry and Bronson were good friends. I'd like to speak to Larry. Do you know where he is?" The woman crossed her arms over her chest, thought about it a moment, and then opened the screen door. "Come on in. I have to get back to Joey or he'll have food all over the kitchen." As Caprice stepped into the small living room, she could see at once that the house was in disorder with Joey's stuff thrown here and there and toys scattered across the floor. But it looked clean. Not only that, but Larry or his wife had framed their little boy's drawings and hung them on the wall. There were pictures, too, of when Joey was an infant and later photos taken in the backyard. This appeared to be a house that had once held love. Linda didn't stop in the living room but went straight into the kitchen, where a towheaded four-year-old in a T-shirt and jeans was digging into what looked like a bowl of SpaghettiOs. He had sauce all over his mouth and his little fingers, and he'd picked up one of the tiny meatballs and was holding it in his hand. Linda shook her head, went over to him, and advised, "Put the meatball in your mouth, then I'll wash your hands." Joey's hazel eyes twinkled as he did as she'd asked and then grinned at her. "Gotta love the cuteness," she murmured. Caprice knew what she meant. Megan and Timmy could get away with a lot too with a smile like that. She said to Caprice, "Larry doesn't live here anymore." "Drew's grandmother told me that. She said you were separated?" "For about six months now. 
Larry lost his job, and then he tried to set up his own handyman business, fixing people's appliances and things you can't get repaired anymore. He's good at that." She motioned out back. "He has his workshop out there. He always went there to smoke. He still stops in to use it now and then. But sometimes he doesn't even let me know he's out there. Probably when he's been drinking. That's one of the reasons I asked him to move out." Caprice wondered if Larry could have a Tiffany lamp base in his shop somewhere. Nothing ventured, nothing gained. "Can I take a look at the workshop?" she asked. Linda looked perplexed. "Why would you want to do that?" "I'm just curious to see how big his enterprise was. I heard Bronson helped him out." A little fib now and then to get information didn't hurt. Looking embarrassed, Linda admitted, "Bronson gave us money to tide us over—so our electricity wouldn't get shut off and we could pay our mortgage. But he didn't help with the shop as far as I know. Larry already had tools, a workbench, things like that. But advertising is a problem. I think Bronson lent him money to do an ad in the paper, but that didn't bring in many people. I think he's looking into setting up a social media page. But he really doesn't know anything about all of that." "I saw Larry at the gym," Caprice said, wondering where he got the money for the membership. It wasn't cheap. If they were having financial difficulties, wouldn't he drop that first? "That was also a gift from Bronson last Christmas. He thought it might help Larry's mood if he kept up physical activity. Bronson's been a good friend. We owe him so much. Life just got too overwhelming for both of us. We argued all the time. Joey was getting upset. It just seemed better if Larry moved out for a while. Do you really want to see the shop?" "If you don't mind." "I don't mind. It's never locked." Linda went to one of the cupboards and pulled out a pack of cookies. 
She unfastened the package, took out five of them, and placed them on the table in front of Joey. She said to him, "You can have these with your milk. I'll be right back." The back door was already open to let in the hot breeze. As she led Caprice outside, she said, "I know I should be giving him carrot sticks instead of cookies, but those cookies were on special and I got them really cheap. Everything for me is about money these days. I hate it." Caprice felt sorry for Linda, who seemed to be in the middle of a hurricane with everything around her spinning out of control. What if it fell apart even further? What if Larry had killed Drew? For what reason? Caprice had no idea. It was only about ten steps to the shed. It looked as if it had been hand-built by either Larry or a previous owner. She said as much. "It was here when we moved in," Linda said. "It was one of the reasons Larry liked this place." She threw open the wooden door, and a wave of heat and stale smoke accosted Caprice. There were two windows, and they were open. But the small building seemed to draw the heat into itself. She spotted a fan sitting on the workbench and realized that Larry probably kept that going during the summer. On one side of the building, shelves were filled with small appliances—mixers and blenders. She thought she spotted two Xboxes. A canister-style vacuum cleaner with its attached hose was sprawled across the floor. The workbench held the usual tools—chisels, pliers, a utility knife, and even a small hammer. Rows of jars with different types of screws and nails lined the back of the workbench against the wall. A roll of appliance cord leaned against two rolls of duct tape. On quick inspection, she didn't notice a Tiffany lamp base anywhere. But then it would be foolish to keep that here, wouldn't it? "I'm checking into Drew's background," she told Linda. "I'm trying to discover if he had any enemies that nobody knew about. Did Larry ever talk about Drew?
What he was doing now? Or maybe the old days when they were drag racing in high school?" "Drag racing?" Linda asked. "I never knew about that. Larry had it rough growing up, so he never talks about that very much. I knew he and Bronson and Drew were friends since then but not much else. Larry and Bronson drove down to D.C. when Drew worked there. And when Drew came home they'd meet for drinks." "So Larry never mentioned a pact that he and Drew and Bronson might have had in high school?" "A pact? No. They did act like blood brothers, though, when they were around each other . . . the joking, arm punching, sports stuff." "You said Bronson helped out Larry. Did Drew know about that?" "Sure. I don't think they kept anything from each other. And Bronson was helping Drew too. He let him open his business from his kitchen. Granted he has a huge house and he isn't there much, but still . . ." "So Drew and Larry got along fine?" "As far as I knew. Sometimes I thought Drew looked down his nose at Larry. You know, like he was going places and Larry wasn't, especially after he sold that barbecue sauce recipe. From what Larry said, the last time he phoned, Drew wouldn't stop bragging. Larry was down on his luck, and it was like Drew kept throwing it in his face . . . all that success. I'm sure that annoyed Larry some, but as I said, they were like brothers." Like brothers. There was only one way Caprice could get a real beat on this. She had to talk to Larry. "Can you tell me where Larry's staying? He might know some detail that could lead to Drew's murderer." "He stayed at Bronson's for a while, but I don't think Bronson liked him hanging around when he was drinking so much. So I think Bronson sent him to the cabin that belonged to his dad. Larry always liked fishing, being in the woods. I think Bronson figured it would help." "Are you in touch with Larry?" "I haven't been for a couple of weeks. 
We couldn't afford cell phones anymore, so we don't have those, and that cabin doesn't have a landline. When Larry calls, he does it from a convenience store. Bronson told me if I needed to get in touch with Larry, he'd go get him. But I haven't needed to. I'm just trying to make life work for me and Joey now." Caprice could call Bronson to find out where his dad's cabin was located. But what if Bronson, or Larry, had killed Drew? What if one knew the other had done it? No, she didn't want to raise Bronson's suspicions. She didn't want to think he'd sent her the note . . . or the ribs. On the other hand, she didn't think Larry in his mental state right now would devise that plan either. That was just a gut feeling. She'd gone with gut feelings before. If only Detective Carstead would share what he knew. But she knew he wouldn't. If she talked to Larry and figured anything out, she'd go to the detective. Again. In the meantime she knew someone who could find out the directions or the address of Bronson's dad's cabin. Reporter Marianne Brisbane had helped her before. She had access to all kinds of databases and public records. Caprice was on a mission now, and she wouldn't stop until she had some answers. She studied Linda. "You know, don't you, that there's a food pantry connected to the soup kitchen. They even have fresh produce this time of year. Gardeners who have extras bring it in. By August there will be tomatoes and cucumbers and zucchini." "I haven't wanted to go that route," Linda confessed with pride in her voice. "You're going through a tough time, and Joey deserves the best you can give him, doesn't he? Even if you have to accept a little help from others." Caprice took out one of her business cards and handed it to Linda. When Linda looked at it, she laughed. "A home stager? That's the last thing I need right now." "I'm not handing it out for professional reasons. 
My home number's on there if you want to know more about the Kismet Food Pantry or Everybody's Kitchen." Linda glanced down at the card again, then at Caprice. "I don't have any family. Larry lost his mom, and his dad has his own problems with alcohol, so he's no help. I don't like to keep taking from Bronson either." "There's a social worker who comes into Everybody's Kitchen. She tries to hook people up with the programs they need. She's usually there from four to five while volunteers are preparing dinner. Just think about it, okay?" Linda nodded. "Okay." Then she headed toward her house and her son. Caprice hoped she'd accept help to get her life back on track.

Chapter Nineteen

All the house needed, Caprice surmised the following day, was a rotating strobe light in the octagonal-shaped room. It was a silly notion, but it seemed fitting. Denise Langford, the broker handling the Nautical Interlude house, had called her this morning and told her she had a couple who wanted to look at the property late this afternoon. Kim and David were moving from Delaware to Pennsylvania to be closer to her family who lived in York. They'd love to be near the Chesapeake Bay, but that was just a little too far away from her parents. However, this house in Kismet would give them the nautical feel that they'd like, yet put them in a good location. Both husband and wife were self-employed. He was a video game developer and she was a web designer, so they could work from anywhere. And from what they'd seen of this house online, they thought it might be perfect for them. And Denise was eager for the sale. Caprice wasn't sure why they needed her here, but she supposed she'd find out. Denise was already at the house with the couple when Caprice arrived. She was sure she was on time. She set her phone on vibrate so any calls coming in wouldn't disturb the meeting. She found the front door, with its porthole window, unlocked. 
After she pushed it open, she stepped inside onto beautiful teak floors. She'd used the colors of the waterfront to decorate—from furniture to wall hangings. This was an eastern seaboard retreat, splashed with yellows, blues, whites, and reds. The downstairs, or main level, was basically one large open space that encompassed the great room, dining area, and kitchen. There was a study and, although it was still open to the other rooms, it was tucked into an alcove to provide privacy. The first floor also boasted, of course, the lighthouse room with its two-and-a-half-story ceiling. The upstairs level held the master suite, in addition to three other bedrooms. An outdoor balcony ran across the second floor and met the widow's walk, which circled the lighthouse room. As an additional incentive for this couple, the basement level, which was a walk-in from the back with French doors and several plate-glass windows, housed a large bedroom suite, kitchenette, and sitting area. Caprice heard voices as they echoed from the upstairs down the circular curved staircase to the downstairs. She heard Denise say, "I understand you both want a home office." "We do," a male voice answered. "Kim likes to be closed up and quiet when she works. I, on the other hand, like activity. That first-floor den would be perfect for me. She could use the lighthouse." A woman's voice responded, "I love the lighthouse. I'm thinking that eventually my parents will move in here with us. That suite downstairs could be perfect for them. But my mother likes floral tones. She wouldn't go for the Cape Cod atmosphere we like. We'd have to redecorate." As Denise descended the last few steps, she spotted Caprice. "Caprice! I'm so glad you're here. This is Kim and David Wilkins. They like the house a lot. But Kim has some concerns about decorating—in the bedrooms, the lighthouse room, and the lower level. I told her you're an expert at that." Caprice stepped forward and extended her hand. 
"It's good to meet you." She shook both Kim's and David's extended hands. Kim was scanning her outfit and grinning. Caprice had worn coral clamdiggers, a Bohemian-styled bell-sleeved coral-and-green top, and her sneakers with peace signs. "I love your outfit," Kim said. "Thank you." Caprice was pleased somebody appreciated her wardrobe. "What are your concerns about decorating? You don't want the nautical theme throughout?" "David and I like it, but as I was telling Denise, I anticipate my parents eventually moving into the basement. Though it's really not a basement with that outside entrance and all the out-of-ground windows. That's what makes it perfect. There's lots of light down there." "And heated floors, too," Caprice said. "Really?" David asked. "Denise didn't mention that. Even more perfect. And just imagine the sunsets from that balcony upstairs. We have friends in New York City who would want to come here just for the view." "I can decorate however you'd like," Caprice assured them. "We checked out your website online when we saw that you'd staged the house. One of your credits was that you decorated for Ace Richland. Is that true?" "Yes, it is. I redid a room for his daughter and did his pool area." "And you staged the house that Ace bought too, Denise told us," Kim added. "I did. His was a wild kingdom theme." Kim laughed. "I think my mom would like something sedate—florals in peach and green and maybe cherry furniture? She has a four-poster bed she'll probably want to move in here." "You have to convince them that moving in here with us is the right thing to do," David said to his wife. "That might take a year or two . . . or maybe three. But once they see this house, I'm sure they'll love it too. They'll have room to roam and not have upkeep. We can drive them to doctors' appointments if need be. Especially when Dad has his knee surgery, he might be able to recuperate here, which will get them used to the idea." 
"Tell Miss De Luca about the changes you want in the lighthouse room," David reminded his wife. "I'd like my office decorated all in blues. It's my favorite color—from turquoise to aqua to baby blue. Do you think you could make that work?" "I can make anything work," Caprice assured her with a smile. "I have a few catalogs in the van. Would you like to see them? Sample books too—for wallpaper and material for upholstery fabric or drapes." "That sounds wonderful. Maybe we can take a look out back while you get them." "I'll meet you back here." Caprice spun on her heels and headed out the door. This sounded like an imminent sale. It would be great for Denise's pocket and good for Caprice's reputation. She liked this couple a lot. They were positive and upbeat and seemed to have a handle on their lives. Denise had confided that David was a multimillionaire because of the video games he'd developed. But they didn't have an arrogant attitude that some wealthy people adopted. She liked that. She also liked that they were thinking about caring for Kim's parents. She was in the back of her van stacking sample books when her cell phone vibrated in her pocket. She might have let it go to voice mail, but she was hoping Grant would call. Soon. Hadn't Naomi had enough of Kismet and sightseeing yet? Maybe Naomi wasn't going to leave. Maybe Grant wouldn't call. Maybe— Cutting off that thought, she also realized she hadn't heard from Marianne, who was supposed to get back to her with an address for Bronson's dad's cabin. When she checked her phone's screen, she saw Marianne was the caller. Disappointment stabbed at her, but she ignored it and answered. "Hi, Marianne. Could you find it?" "I did. It really wouldn't do me much good to just give you the address. It's a rural P.O. box near Wellsville." Wellsville was located about fifteen minutes from Kismet. "I have explicit directions. I e-mailed them to you," Marianne said. "Thank you. You don't know how much I appreciate this. 
I owe you one." "Yes, you do. Remember, I get first scoop if you find out anything juicy. Or if you solve this murder. You're going to soon be a celebrity." "Bite your tongue." Marianne laughed. "So what are you going to do?" It took Caprice only a few seconds to think about it. "I'm talking to a couple now about buying and decorating a house, and as soon as I'm done here, I'm going to follow your directions. I want to talk to Larry Penya sooner rather than later." * * * It was later than Caprice would have liked when she finished up at the Nautical Interlude house. The couple had loads of questions and had pored over her sample books. Caprice knew Denise was chomping at the bit to settle the sale, to actually hold earnest money in her hand. This was all part of the process. Kim and David had to see themselves in the house . . . and enjoy the adventure of it. So Caprice had patiently aided them in finding exactly what they wanted. After they all said good-bye and Kim and David headed off with Denise to her office to begin the paperwork for buying the property, Caprice headed home to give her animals attention—and supper. Eager to drive to the Wellsville area, she grabbed a container of broccoli salad and ate it outside while Lady ran and played. Between bites, Caprice said to her, "I wish I could take you along. But I can't. And Dulcina has a new guest who isn't used to you yet. Maybe . . ." She took her phone from her pocket and dialed her mom. After her mother answered, she asked, "Are you busy tonight?" "Need a listening ear?" her mom asked. "No, I need a pupsitter. I was away most of the afternoon, and I have an errand I need to run tonight. I really don't want to leave Lady alone again." "Sure. Bring her on over." Twenty minutes later, Lady happily soaked up Fran's attention as Caprice told her parents where she was headed and outlined the directions to her dad. Her father said, "I know that area. It's near Pinchot State Park. Lots of woods. Creeks. 
Beautiful farmland too. You should be okay if you follow those directions." She should be okay. Of course she should. She was just going to question one of Drew's friends. * * * Dusk was falling as Caprice found the gravel lane Marianne had detailed and turned her van onto it. The narrow road wound around a few curves and then stopped abruptly before a wooded area. No one had told her she'd be hiking tonight. Once she exited her van, she spied a three-foot-wide path that led through the stand of maples and sycamores. It wasn't long before the cabin came into view. Even though it was rustic, it was a hidden gem because no one would suspect it was here. It was a square with a slanted roof. She suspected the floor plan would show a loft and an open ceiling. The screened-in porch ran along the front and side of the cabin. As she approached it, the silence of the woods was broken by male voices. A pickup truck zigzagged along the far side of the cabin, and she realized that either there was another winding entrance that ran around the back or she'd missed a turnoff that circled around the trees. Who was here with Larry? Glad she'd worn sneakers that made little noise on the gravel, she stood at the corner of the screened-in porch and unabashedly listened. "Give me plane fare and a stake, and I'll just disappear. Linda doesn't care if I'm gone." That was Larry's voice. His words sounded slurred, as if he'd been drinking. "I don't have anything to offer Joey," he added morosely. "Nothing's holding me here." If he was running, did that mean he'd killed Drew? Then she heard another male voice. "You can't just leave. We have to stand up to Fairchild together. I don't understand why he called this meeting now." Fairchild? Louis Fairchild, the men's high school shop teacher? Caprice recognized the second voice too—it belonged to Bronson Chronister. "Exactly what did Fairchild say?" Larry asked shakily. "He said he wanted to talk over old times." 
Bronson sounded agitated, his tone rising and falling as if he was pacing. "You know what that means. I can't have that accident brought up now." "You weren't even in the car!" Larry shot back. "Drew and I ran him down." "Drew ran him down. You were about as drunk as you are now. I wasn't in the car, but I knew about it after the fact. The whole mess could ruin my career in politics." "We have something worse to think about," Larry whined. "What would that be?" Bronson sounded genuinely perplexed. "Drew's murder." "I didn't have anything to do with that. Neither did you." All was silent a few seconds until Bronson asked, "Did you?" "Not me," Larry protested. "But I did see Drew that night and I didn't tell the police. I'd fixed the cord on that light that he said was so expensive. Apparently Drew was messing with it and the cord gave way. It's old. But he didn't want his grandmother to know. After Rowena left for the day with her friend, I picked it up and took it to my workshop to repair it." "Did Linda see you had it?" Bronson sounded appalled. "She wasn't home. She takes Joey to the playground on Sundays." "What did you do with it after you fixed it?" "Drew called me when he was through at that expo. I took it back to the house. But . . ." He hesitated, then went on. "But Fairchild came to the door. He said he wanted to talk to Drew privately. So I left. But I stopped outside to smoke and . . . I heard them arguing. That's when I headed out." "And you didn't tell the police?" "If I ratted out Fairchild to the police, I knew he'd tell them about me and Drew and the accident." "If you hadn't spilled the beans to him when you were drunk back then—" Suddenly the hairs on the back of Caprice's neck prickled. It was as if a cold wind had blown through the summer night. Before she could react, take a breath, or turn around, she felt something poke the middle of her back. Something hard. Something like the barrel of a gun. 
She recognized the voice when Louis Fairchild shouted to the two men inside. "You've confessed everything in front of a witness. What do you think we should do with her?" After a silent moment, Bronson and Larry both rushed out of the porch and down the front steps. They saw Caprice and Fairchild behind her. "What are you doing here?" Bronson asked her. She could hardly find her voice, but she finally did. "I came to talk to Larry. I never expected to run into . . . all three of you." All three of them had committed crimes. But it seemed Louis Fairchild had killed Drew. Why else would Fairchild have pretended he hadn't had any contact with Bronson, Larry, and Drew for years? Why else would he have a gun pointed at her? Now was no time for cowardice. She needed to get them to turn on each other so she could slip her phone out of her pocket and dial Detective Carstead. Fairchild's gun wasn't tight against her now. That didn't mean he couldn't kill her in an instant, but this might give her a little leeway. It sounded as if Larry and Drew and Bronson had been caught up in something as teenagers and they hadn't known how to get out of it. Her gaze went from Bronson to Larry, making a point. Then she said, "You trusted Mr. Fairchild back in high school, didn't you? After all, he was your teacher. But he wouldn't be holding a gun on me if he can be trusted." Bronson shook his head. "I didn't trust him. Larry did. But he said he'd always keep the secret." Larry's eyes were glazed, but his words were clear when he said in a low voice, "I've never been able to forget the sound of the car hitting that old man. Never." And that's why he drank. "I knew what had happened before Larry told me," Fairchild muttered. "All those years ago, they thought I didn't notice Drew's dented bumper and the piece of material caught on it. I knew about their drag racing. When I heard about the hit-and-run accident, I put two and two together. 
But by then a friend of Drew's had fixed his bumper and the car was cleaned up. I found Larry drunk on the bleachers one night and he spilled it all. A secret is a handy thing to have in your back pocket . . . especially when you want to retire." Caprice understood that the only reason Fairchild was talking was because he was going to kill her. What would Bronson and Larry do? Let him? She was panicking inside but she had to keep her wits about her. She could get out of this somehow. She could. She should have texted Grant that she was thinking of him. She didn't want him to think she'd died hurt and angry. She wasn't going to die. "You were going to blackmail Drew, weren't you?" Caprice asked in order to keep Fairchild talking as her hand slipped to her pocket. She hadn't brought her mace gun. Stupid, stupid, stupid. "Drew was making it big, and I wanted some of that," Fairchild said. Caprice nodded to Bronson. "Why not blackmail him? He had the money." "He wasn't in the car that night. Larry and Drew were. But I called him out here tonight because now that he has politics on his mind, he can grease my palm to keep me quiet. Enough of this chitchat. You've got to go, girl. Apparently my threats didn't work with you. The woods are dark and deep. Let's move it." But before Fairchild could poke her with the gun again, she caught the appalled expression on Bronson's face when he realized that Fairchild had intended to blackmail him and that he intended to kill her. With an angry shout, Bronson rushed Fairchild. When he did, the gun went off! Both men staggered, and Caprice didn't know if either of them had been hit. Suddenly Fairchild pushed away from Bronson. As he did, she saw Bronson clutch his shoulder. Fairchild stooped to retrieve the gun that must have fallen out of his hand when Bronson grabbed him. Caprice didn't need any advice on what to do next. She ran for the woods, yelling at Larry to use Bronson's phone to call 9-1-1. 
She grabbed hold of her phone as she ran and pressed the number to speed-dial Carstead. After tripping over a tree root, she caught herself, hung onto the trunk, and rounded another tree. When Carstead answered her call, she didn't give him a chance to speak. She rat-a-tat-tatted her location—Elliot Chronister's cabin near Wellsville. Then she added, "Louis Fairchild killed Drew. I think he shot Bronson. Need paramedics. Get here." Then she pocketed her cell phone so she could run faster. Fairchild was in shape, but she was younger. Maybe she could fool him and circle around . . . or climb a tree. As she scurried through the brush, she heard a loud grunt and swearing behind her. Maybe Fairchild had fallen over a tree root. She could only hope. She ran faster, increasing the distance between them. Her cell phone vibrated in her pocket. Carstead wanting her to keep the line open? That would be the smart thing to do. But as she pulled out her phone and saw its glow in the dimming light, she realized the caller was Grant! He sure picked a dandy time to call. She was torn, but she knew she had to answer. This was the first he'd contacted her since Naomi's visit. Grant and his call were as important as her life. Breathless, she asked in a low voice, "Can I call you back?" But Grant knew her moods and her voice. "What's wrong?" They didn't keep secrets between them. She whispered, "Hold on a minute," and ducked behind a thick tree trunk. But Grant wasn't holding on. "Where are you?" She heard brush cracking, branches moving. If she talked to Grant, Fairchild would hear her. She whispered, "I'll text." Moving farther into the woods and the night, she curled herself behind a sycamore so Fairchild couldn't see the glow from her phone and quickly texted, Murderer chasing me at Elliot Chronister's cabin. Dad has directions. I called Carstead. Pocketing the phone, she moved a little farther through the trees, then decided the best thing for her to do was to climb one. 
Fortunately Vince had taught her well. In fact, he'd taught her lots of skills that could save her life as well as any self-defense course. She jumped at the lowest branch, caught it with her arms, then used her sneakered feet to scramble up the tree. It was practically dark now, with no moon lighting the woods. She didn't know how long she was in that tree. It seemed like centuries. How much distance had she put between herself and Fairchild? Had he gone off in another direction? She waited and waited and waited, afraid to make a move. Maybe she should climb down and run again. But which way? Toward the cabin? Into the woods? She could run right into him. Minute after minute slowly ticked by. Then suddenly she spotted a beam of light and suspected it was the flashlight app on Fairchild's cell phone. Wasn't technology a wonder? All she could do was say Hail Marys and hope. Fairchild was obviously trying to be quiet, but she could hear his shuffles through the brush, his low grunt when a branch grazed him or a bramble caught his jeans. Her body was rigid and stiff. Finally she decided she'd better breathe. She took a few shallow breaths. He was using that flashlight beam in circles but not shining it up into the trees. Maybe he was too afraid he'd trip again. After all, maybe he wasn't as nimble as she was. The light inched closer to her tree. She didn't move. She didn't breathe. She didn't even flinch. "I'm going to find you," he called out to the general area. "You know I will. You might as well come out." She wondered if he underestimated all women. Maybe that's why he never married. Or maybe women always discovered his mean streak, because he obviously had one. He stopped, probably to listen. When he didn't hear anything, he moved on. Now he kept quiet, maybe thinking he could sneak up on her wherever she was hiding. But the woods were dark and deep, and soon he was farther into them. Now she could scramble down and run back to the cabin . . . 
maybe even reach her van. The wail of sirens broke the stillness of the night. The sound was faint at first but grew louder with each second. Thank goodness for GPSs and cell phone towers. Thank goodness for detectives who knew how to find addresses. Thank goodness for Hail Marys and brothers who didn't mind her tagging along. And self-defense courses. The siren sounds were almost deafening now in the hushed night. Not caring about scratched and cut hands or brush and brambles, she scurried down the tree, lit up her own phone's flashlight app, and ran as fast as she could back toward the cabin. Before she emerged from the trees, she could hear officers shouting to each other. She heard them spreading out through the woods. As she reached the cabin, she spotted Bronson and Larry sitting on the porch steps, Carstead looming over them. "I'm here," she called as she waved and approached them. She could see Bronson holding his arm across his chest, blood staining his shirt sleeve. But before she reached Carstead, another man came running from the makeshift road. A man who was tall with black hair and broad shoulders—the man she loved. Grant rushed to Caprice and took her into his arms. "Are you all right? What are you doing out here? I don't know whether to shake you or kiss you." She didn't wait for him to decide. She kissed him. He wouldn't be here if he didn't love her. He wouldn't be here if he'd made a different choice. After Grant pulled away, he said, "Everything's going to be all right. I promise. We can talk later." Just then, two patrol officers dragged Louis Fairchild from the edge of the woods. He was handcuffed and looked as if he wanted to murder someone again. They none too gently pushed him toward the patrol car. Finally Detective Carstead approached Caprice. He gave her a look that said he'd never understand her. He muttered, "Maybe I should put you on the Kismet P.D.'s payroll. Can you meet me at the station and fill me in on exactly what happened?" 
"I'd be glad to," she answered agreeably. She wasn't shaking now that Grant was holding on to her so tightly. He'd promised everything would be all right . . . and she did trust him. As Carstead walked away, Grant said, "He likes you." She heard that hint of jealousy in Grant's voice again, and it made her heart sing. Turning to him, remembering Nana's advice to jump without a net when she knew what she wanted, she gazed into his eyes and assured him, "But I like you. You're the only man I want to consider a future with. That is, if you want a future with me." "We have a lot to talk about," Grant assured her, pulling her close again. "Naomi went back to Oklahoma. This week put resentment and recriminations to rest. We revived good memories of Sally. But my life with Naomi is in the past. After you and I finish at the police station, I want to talk to you about what comes next for us." That was a conversation she couldn't wait to have. Epilogue Ten Days Later The cafeteria at Kismet High School had been transformed for the night. The committee developing the reunion wanted to make the night affordable for as many classmates as they could, so they'd decided to have the reunion at the school. It was a sweltering July night, but no one seemed to mind as they stepped into the air-conditioning and the music that poured from the speakers the DJ had set up in the lobby adjacent to the cafeteria. On Grant's arm, Caprice was glad she'd dressed up. She'd found a fifties-style lacy crinoline dress in off-white with an embroidered flower pattern. Donned in its capped sleeves, sweetheart neck, and tight waist, along with teal strappy pumps and a teal and cream purse, she felt good. When Grant looked at her, she felt pretty. They'd been spending as many hours together as they could. She could tell his time with Naomi had settled things in his mind. He'd shared some of the conversations he'd had with his ex-wife. He'd also shared some of his grief at losing his daughter. 
She knew that would always be with him. But she accepted that, just as she accepted him. And he seemed to accept her just the way she was, even when he was angrier than an irate bull that she'd put herself in danger again, inadvertently or not. They sat at one of the tables in the cafeteria beside Roz and Vince. Other members of the reunion committee were seated across the table. Vince waved to the centerpieces. "They're looking good." He turned to Roz. "I hear you helped with those." "I did. I've always liked arranging flowers. Jeanie Boswell gave us a discount on them, as well as the vases." Since the apprehension of Drew's murderer, Caprice had learned more about Jeanie and the way she often hid her emotions behind indifference and anger. That hadn't made her a guilty sister, just a grieving one. "We've finally decided we're going to take a vacation together," Roz told her in a low voice. Her friend sounded excited . . . and happy. "Where?" Caprice asked. Vince answered, "The Finger Lakes in New York State. We'll have a whole week together—day and night." Roz blushed. Grant leaned close to Caprice. She caught a whiff of his woodsy cologne that was one of her favorite scents these days. He looked so handsome tonight in a charcoal suit with a blue tie and pale blue shirt. But then she thought he looked handsome no matter what he wore. He murmured near her ear, "Did I tell you how beautiful you look tonight?" She smiled. "In spite of the fact that I was up all last night watching Halo deliver her kittens?" The tortoiseshell had given birth to three—a dark tortoiseshell, a gray-striped tabby, and a lighter tortoiseshell—all still to be named. Watching them being born and settling in to nurse had been an awesome experience. Grant's voice went a little lower. "Do you remember the night we delivered Shasta's pups?" "Of course I do. I'll never forget it." Grant took her hand and asked, "Would you like to dance?" "I definitely would." They stood and excused themselves. 
Roz winked. Vince gave them a thumbs-up, and Caprice didn't even feel embarrassed. Before they reached the lobby, where couples were dancing, Helen Parcelli, whom Caprice had run into at the Raspberry Festival, approached them. "Hi. I've been wanting to talk to you, but I didn't want to interrupt at your table. I want the real scoop on what happened at Elliot Chronister's cabin." Helen wasn't the first person to ask Caprice, and she answered by rote. "Marianne Brisbane reported what happened in the article in the Kismet Crier." Narrowing her eyes, Helen prodded, "Come on now. Fill in the details for me." Caprice looked at Grant, and he gave a shrug. Caprice studied Helen. "What do you know?" "Everything I've been reading online. Larry and Drew were involved in an accident in high school and didn't report it. Bronson made a public apology concerning his knowledge of it. I think he hopes if he comes clean about everything, maybe he still can have some kind of career in politics. After all, he wasn't in the car when it happened. What exactly are the charges against all of them?" Those were points of public record, so Caprice answered easily. "Larry was charged with conspiracy to commit homicide by motor vehicle and obstruction of justice." "Don't forget breaking and entering at Rowena's when he tried to find Drew's yearbook," Grant interjected. "An inscription in the yearbook could have been damning if it came to light. A prosecuting DA could have used it to his advantage." "I heard Bronson is paying for a good lawyer for Larry," Helen said. "Maybe he'll get a minimum sentence." "That's what they're hoping," Grant agreed. "Bronson was charged with obstruction of justice but will probably be sentenced to probation or community service. I doubt if a political career is in the cards for him anymore." "I can't believe Louis Fairchild murdered Drew. Exactly why did he do it?" Helen asked. 
Caprice knew Fairchild was having the book thrown at him—homicide, attempted homicide, and obstruction of justice. The prosecutor also tacked on a charge of terroristic threats for scaring her out of her wits with the knife and note on the rack of ribs and the letter in the mail. "When Louis Fairchild wanted to retire," Caprice explained, "he had a problem. He'd blown most of his 401K on gambling debts, and social security wouldn't fund the retirement he wanted. He knew Larry, Drew, and Bronson had each other's backs, and he decided to cash in on their secret. He couldn't blackmail Bronson—this was before Bronson's political ambition—because Bronson only knew about the hit-and-run. He hadn't been in the car. But after Drew hit the big time with his barbeque sauce, Fairchild thought he could squeeze money from him." "So Fairchild confessed?" "He did," Caprice responded. "The whole story came out when the police questioned him, because he was so angry . . . at me, at Bronson, at Drew, at Larry. Apparently Drew had been snooping for recipes in Rowena's Tiffany lamps. When the cord broke on the table lamp, Larry fixed it and returned the lamp. After Fairchild showed up at Drew's, Larry left. But he stood outside to smoke and heard raised voices. He didn't stay because he didn't want to get involved in whatever was brewing. Fairchild said that when he tried to blackmail Drew, Drew just laughed at him. They argued. Drew turned away, and Fairchild picked up the lamp base that Larry had returned, conked Drew with it, and took it with him. In his rush to leave, he knocked over the other Tiffany lamp. The police found the base of the lamp stashed in a closet at his residence." "The irony of it," Grant added, "was that if he'd just stolen the lamps, he'd have had a windfall of sorts." "I'd heard they were Tiffany," Helen said. "I wonder if Drew's grandmother is going to keep them." "She's selling them," Caprice revealed. "They're going up for auction."
"I knew you could tell me more than was in that newspaper article." Another classmate waved to Helen from across the room. She waved back. "I'd better get going," she said. "You two have a nice time tonight." After she moved away, Grant wrapped his arm around Caprice's waist and led her to the dance floor. A ballad had begun playing. As he took her hand in his and guided her to the music, she knew murder and mayhem were behind her for now. "What are you thinking about?" Grant asked her as they danced. She answered honestly. "You." He pulled her closer and rested his chin on top of her head. Smiling, she squeezed his hand and sighed. She was right where she wanted to be . . . close to his heart. Original Recipes Caprice's Easy Beef Bourguignon 6 bacon slices, plus 3 tablespoons of the drippings 2 ½ pounds stewing cubes 2 cups flour 1½ teaspoons salt ½ teaspoon pepper Coat a 5-quart slow cooker with no-stick spray. Fry bacon, then remove it from the drippings. When it is cool, crumble it and set it aside in the refrigerator. Save 3 tablespoons of the drippings. Dry beef cubes as much as possible with food-friendly paper towels. Combine flour, 1½ teaspoons salt, and ½ teaspoon pepper in a Ziploc bag. Drop in a few stewing cubes at a time to coat them, then place the coated cubes in the slow cooker. 1 cup chopped onion 1 cup peeled and sliced carrots 1 cup chopped celery 1 clove grated garlic teaspoon marjoram 1 teaspoon salt ½ teaspoon pepper 3 cups beef broth (use Swanson for no MSG) 1½ cups white burgundy wine (red burgundy discolors the mixture) 4 or 5 red pepper flakes Add onion, carrots, celery, garlic, marjoram, 1 teaspoon salt, ½ teaspoon pepper, and the red pepper flakes to the beef cubes in the slow cooker. Pour broth over the mixture, then add the three tablespoons of bacon drippings and the wine. Cover and cook on low for 5 hours or until beef cubes are tender. Stir every two hours for a smooth gravy base without flour lumps. 
I serve over 1 pound of wide egg noodles and top each portion with crumbled bacon. Serves six to eight.

Bella's Lima Bean Casserole

2 16-ounce bags frozen lima beans, thawed
8 slices fried, crumbled bacon
¾ cup chopped fresh onion
¾ teaspoon salt
½ teaspoon pepper
¼ cup melted butter
½ cup water
8 ounces finely grated cheddar cheese
6 tablespoons Italian bread crumbs

Preheat oven to 400 degrees. Combine thawed lima beans, crumbled bacon, onion, salt, pepper, butter, water, and grated cheese. Turn into a two-quart casserole sprayed with no-stick cooking spray. Bake covered at 400 degrees for 45 minutes. Remove cover and sprinkle with bread crumbs. Bake an additional 10 to 15 minutes or until bread crumbs are browned and beans are tender. Serves six to eight.

Nikki's Carrot Surprise Cake

Cake

1½ cups flour
2 teaspoons baking powder
1 teaspoon baking soda
1 teaspoon salt
1½ teaspoons cinnamon
1 teaspoon orange zest
1½ cups sugar
1 cup vegetable oil
3 eggs
1½ cups shredded carrots
½ cup crushed pineapple, drained
½ cup chopped walnuts
1 cup flaked coconut

Cream Cheese Frosting

½ cup softened butter
1 8-ounce package softened cream cheese
1 teaspoon vanilla
½ teaspoon orange zest
1 pound confectioner's sugar

Preheat oven to 350 degrees. Grease and flour 9 by 13-inch cake pan.

Cake

Stir together the flour, baking powder, baking soda, salt, cinnamon, and orange zest in a large mixer bowl. Add sugar, oil, and eggs. Beat on medium speed for about 1 minute until well mixed. Fold in carrots, pineapple, walnuts, and coconut until completely blended. Pour mixture into the greased and floured cake pan. Bake at 350 degrees for 35 minutes or until toothpick comes out clean. Let cake cool completely before frosting.

Cream Cheese Frosting

Beat softened butter with softened cream cheese until smooth. Add vanilla and orange zest. Blend in confectioner's sugar and beat until smooth and spreadable. Frost the cake and refrigerate until you are ready to serve it.
Please turn the page for an exciting sneak peek of Karen Rose Smith's next Caprice De Luca Home Staging Mystery SHADES OF WRATH coming in December 2016 wherever print and e-books are sold! Chapter One The mansion that stood before Caprice De Luca was a bit run-down but still magnificent. As an early September breeze tossed her long, dark-brown hair as well as the leaves around her feet, Caprice remembered that the Tudor revival had hit America during the 1920s and 30s when this edifice had been built. It was a monstrous home, yet charming too because of the steeply pitched roof with prominent cross gables. Those gables were embellished with half-timbering against stucco walls. Decorative chimney pots topped the thick brick chimney. End-of-the-day light flickered against the tall windows arranged in groups of three. Each had diamond-shaped panes that reflected the sunlight. As a home stager, Caprice considered how light shone into a room. However, she wouldn't be looking at this house to stage it to sell. She'd be planning how to furnish and decorate it. Caprice ascended the front steps, passing under the arched portico that supported a room above it. She couldn't wait to see the inside. Wendy Newcomb had said she'd be waiting for her. This estate had been donated to Sunrise Tomorrow, a cause that had been a passion of Wendy Newcomb's since she'd established a foundation for the women's shelter in Kismet about a decade ago. She ate and slept her work, advocating for and caring for women who were victims of domestic violence. Caprice was here today to take a look at the mansion and propose ideas for decorating it so that it was suitable for a housing facility for women who were in need of transitional care. She was going to make their rooms feel like places they'd want to spend time. The heavy wood-paneled door stood slightly open. Before Caprice understood what was happening, a yellow tabby cat ran up the steps and slipped inside. 
Did the Wyatt estate have a resident feline? Caprice pushed the door open wider, but it took an effort. Maybe the hinges were just warped . . . or maybe that door was meant to be a barricade. Instantly, she was in awe of the Carrara marble floor in the grand foyer and the spacious living room before her. She'd started across the foyer when she heard angry voices near the wide, curved stairway that led to the second floor. Since the mansion was practically empty, except for sawhorses, ladders, and building supplies, she could hear some of what was being said across the room. There was no sign of the cat. Where had he disappeared to? A man's voice rose. "Where is she?" he demanded. Caprice recognized Wendy's voice, lower than his. She couldn't catch every word. From what she knew of Wendy, the director of Sunrise Tomorrow was trying to remain calm and serene in the face of an angry male. Whatever explanation Wendy gave didn't seem to satisfy the man, and now Caprice recognized his voice too. It belonged to Warren Shaeffer, CEO of Kismet's Millennium Printing and the president of the town's Chamber of Commerce. He didn't fly off the handle easily. Stoic was usually his middle name. Apparently not today. Moving closer to the stairway, Caprice saw Warren point his finger at Wendy. "You have no right to interfere." Unsure exactly what to do, Caprice continued to approach them. She could see Wendy was red-faced. Oblivious to a third party, she poked Warren in the chest and determined in a clipped voice, "I don't need to tell you anything. You'd better leave before I call the police. I don't think you'd want the general public to know that you can't hold your temper." Suddenly Shaeffer shifted on his feet and spotted Caprice. He seemed to take a breath, rein in his anger, intentionally relax his shoulders, and act as if this interchange with Wendy was no big deal. Caprice was almost at Wendy's side now as he made a nonchalantly composed remark that she didn't expect.
"I'll look forward to seeing both of you at the next Chamber of Commerce meeting." With a forced smile at Caprice, he hurried to the front door and exited the house. Wendy looked so relieved as he left. She pasted on a smile just as forced as his had been and pretended as if her conversation with him hadn't affected her at all. "That was an unexpected meeting," she apologized. "Come on, let me show you around. The workmen have left for the day." It was obvious Wendy didn't want to talk about the encounter that had just happened. But as she walked beside Caprice and led her up the stairs, Caprice could see the woman's hands were shaking a little. A result of stress . . . her position . . . the legacy of the Wyatt estate? Or because of Warren Shaeffer? Caprice had never seen him lose his temper. As head of the Chamber of Commerce, she had watched him remain cool over many a heated discussion. However, those discussions hadn't been personal. Today's discussion with Wendy had sounded very personal. The stairs led to a second-floor hallway. As they reached the second-floor landing, Caprice could see a balustrade that stretched from one side of the hall to the other. It overlooked the spacious foyer. She could smell the dust in the air, spotted sawhorses down the hall and drop cloths in the first bedroom. Roller window shades, perhaps collected from the upstairs rooms, lay in a stack near the balustrade. Sitting next to the pile of shades was the yellow tabby, blinking at her with jewel-like green eyes. Caprice started toward the wooden railing, intending to get closer to the cat. Wendy caught her arm. "Oh no! Don't go near that. The balusters aren't stable. I told the contractor he should put a warning sawhorse there or something, but he hasn't. I'll remind him again tomorrow." Caprice glanced down at the pile of shades, with their scalloped edge and fringes and wooden bar across the front for lowering and raising the window covering. 
They were yellowed and looked as old as the house. Then she again studied the feline. "Are you friendly?" she asked him. As if in answer, he stretched and came to her, rubbing against her retro plaid slacks with an insert side pleat that ran from knee to ankle. Retro fashion was one of her passions. Cats and dogs were another. She crouched down and offered him her hand. He butted his head against it. "He's very friendly," Wendy said. "I found him curled up in one of the bedrooms upstairs when nights began turning colder. I've been here almost every day to check on something so I feed him wet food then leave a bowl of dry in the bedroom upstairs. I think he slips in through a broken basement window when the door isn't open. He doesn't seem to mind the workers who are in and out." The tabby purred as Caprice petted him. Then suddenly he left her, crossing to sit once more beside the pile of window shades. Wendy motioned to the rest of the upstairs and began walking. Caprice glanced over her shoulder at the tabby who seemed quite contented where he was. As they walked down the hall, she asked Wendy, "Is most of the work here cosmetic, or are there structural issues?" "Fortunately, inside the house most of the work is cosmetic—steaming off wallpaper, patch plastering, painting, modernizing a few of the bathrooms. We also have to add a new roof. I don't want any problems in the next few years with leakage. But all in all, the mansion is in amazingly good shape. Houses were built to last in the 1920s. This one had a grand past with good upkeep until the last dozen years or so. When Leona got sick, everything seemed to be a burden, even lifting the phone to call a plumber." Leona Wyatt had faced a battle with cancer. Scuttlebutt had it she thought she'd won after her first bout with it. But ten years later it had come roaring back and had taken her. "She had children, didn't she? Couldn't they help her?" 
A look passed over Wendy's face that Caprice couldn't quite decipher. But then Caprice learned its meaning—disgust. Wendy said, "Her son and daughter didn't pay much attention to her, even when she was sick. They were no help at all." Caprice wondered if that was why Leona Wyatt had left her son and daughter money in trust yet left the mansion and the rest of her money to Sunrise Tomorrow. She'd heard the Wyatt siblings were squabbling over the fact they felt they should have received the legacy left to the Sunrise Tomorrow Foundation. However, there had been a stipulation in Leona's will that if they contested it, they would receive nothing. The Wyatt estate had been through probate and settled without incident. Apparently, the brother and sister hadn't wanted to take the chance of contesting the will and losing. Caprice followed Wendy from room to room, snapping photos on her phone, loving a peek into the house that had to be filled with memories of days gone by. They chatted about goings-on in Kismet as they walked. When they finished on the second floor, they descended the back stairs into the kitchen area where a grand brick fireplace stretched from floor to ceiling in the sitting area. A butler's pantry and maid's suite were located behind the kitchen. Caprice noted the utility room was almost as large as her living room. Wendy explained, "I want to uphold the original grandeur of the house, yet I want it to be homey too. Does that make sense?" "Perfect sense. You said this would be a transitional facility. Just what does that mean?" An old-time deacon's bench still sat in one corner of the kitchen. Wendy motioned to it, and they both took a seat there. "Most of the women who come to Sunrise Tomorrow need a port in a storm for a couple of nights until they find shelter with a friend or a family member," Wendy explained. 
"Once they've left their immediate situation, obtained a PFA—Protect From Abuse order—most are involved in programs such as counseling and job training." She shifted a bit on the bench. "But some women need a haven for longer than a few nights. Before the Wyatt legacy came through, I'd encouraged the board of the Foundation to consider buying some sort of apartment building to house women in those circumstances. Now the Wyatt estate will be perfect for that. Clients can stay a month or two or three, do job training on site, pick up skills they need, as well as self-confidence and independence. Our main facility will be what it still is—an emergency haven with services for follow-up. But this place? This place can be so much more. I'm so excited about it, Caprice, because we can help so many more women." Caprice waited a beat, but then asked, "Do you want to talk about what happened earlier . . . when I first arrived?" Wendy stared into the empty fireplace, soot-stained from years of use. "I can't talk about most of it, you know that. Although I'm not a therapist, the work I do is confidential. It has to be." Wendy said that last with such vehemence, Caprice studied the woman's face. There were lines around her eyes and around her mouth. Her nose looked as if it might have been broken. Strands of gray salted her medium brown hair. Caprice didn't know Wendy's story. Nobody did, as far as she knew. And that was a feat to keep background a secret in a town the size of Kismet. On the other hand, Wendy's present-day life seemed to be an open book. She lived with her significant other, Sebastian Thompson, and his two sons. But her past was a blank slate on the gossip mill. "I know you have to keep confidences," Caprice agreed. "But that discussion I walked in on concerned you too, didn't it?" After hesitating, Wendy admitted, "It did. And I'll confess Warren Shaeffer scares me. He has one of those Jekyll-Hyde personalities. Only those closest to him see the Mr. 
Hyde side." No, Wendy couldn't break any confidences, but what she was implying was that Warren's wife had probably sought services at the shelter. His question, Where is she? could mean that Wendy had helped his wife relocate somewhere else, at least for now. Obviously wanting this discussion to be over, Wendy stood. "Let me show you the rest of the house. I have a meeting in half an hour, but we can finish touring the inside anyway. You can explore the grounds when you come to actually do the decorating. I'm going to trust you with the furnishings for most of it. I think you understand what I want. But after you turn your proposal in, I'd like to discuss updating our original quarters. Do you have time to fit that into your schedule too?" Furnishing and decorating the Wyatt estate and updating the original Sunrise Tomorrow facility could bring in substantial income. She'd fit it into her schedule somehow. "I'd be glad to talk about updating. When would you like to do that?" "Why don't we keep that less formal? Why don't you come to dinner on Saturday with me and Sebastian and the boys?" Caprice didn't know how long Wendy had been living with Sebastian Thompson and his sons, but it sounded as if they'd formed a family. "I'd like that. Where do you live?" "We live in the Poplar Grove Co-housing Development. Do you know where that is?" Although Caprice wasn't exactly sure what co-housing meant, she'd passed the Poplar Grove development on occasion. It was located east of town. "I've never visited the development itself, but I've gotten a glimpse of it when I visited Ace Richland's estate." "That's right, you helped him out of a sticky situation not so long ago." Last spring, Ace—a rock star legend—had found himself a suspect in a murder investigation, and Caprice had indeed helped clear his name. Treating that lightly, however, she responded, "Ace and I have gotten to know each other since he bought the house I staged." "He's on tour now, isn't he?" 
"He's zigzagging across the country. He returns every couple of weeks to spend time with his daughter." "The way it should be," Wendy said. "Sebastian complains he wants to spend time with his boys, but they don't want to spend time with him anymore. You have to take advantage of bonding time while you can." "I'd like to find out more about co-housing." "Basically, it's cooperative living. We're not self-sustaining like many co-housing communities, but we help out each other when we can. I'm sure Sebastian can tell you all about it. He's the one who developed the mission statement." "I look forward to it," Caprice said. As Wendy walked Caprice out of the kitchen into a smaller dining room, and then a much larger one, Caprice began envisioning the colors she would use to warm up the house as well as the groupings of furniture she'd select that would invite conversation. This project could easily consume her. She smiled to herself. It was exactly the kind of project she liked best. * * * Caprice would have loved to have taken her dog, Lady, along to the Wyatt estate, but she hadn't known what she'd find there. She also hadn't known whether Wendy liked dogs. So she'd asked her neighbor Dulcina if she'd watch Lady while she took care of business. Now as she climbed the steps onto Dulcina's porch, she couldn't help thinking about Warren Shaeffer and his anger toward Wendy . . . and Wendy's obvious fear. However, when Dulcina opened the door, Caprice forgot about her experience at the Wyatt estate. Not only had Dulcina come to greet her at the door, but Halo, the pregnant tortoiseshell stray cat she'd adopted in July, wound around Dulcina's legs while her seven-week-old tortie kitten, Miss Paddington, as well as Caprice's cocker, Lady, chased through the living room. Caprice couldn't help but laugh. "How do you get any work done?" "They were all napping until about fifteen minutes ago. Then Paddy decided she wanted crunchies and the gang all woke up." 
Mason and Tia, Halo's other two kittens, now came racing into the living room too. Halo was a silver-haired tortie with tabby-like stripes and golden spots of coloring. Her firstborn was Miss Paddington, who had a unique split color face—tan and gold on one side and dark brown on the other. Her body was likewise defined. Mason, a gray-striped tabby with a white chest, was the boy in the bunch, and as rowdy as could be. Tia, third-born, had fur that displayed striking tortoiseshell colors and lots of white. A princess, she usually held herself above the fray. But not this evening. They were all joining in. Caprice stepped inside, careful to watch the screen door close so no fur babies escaped. "Coffee?" Dulcina asked. "Sure. I have time." The kittens and Lady continued their chase, but Halo followed Caprice into the kitchen. She was becoming quite attached to humans. No one knew her whole story. Caprice's uncle Dom had found her when he was pet-sitting for a client. Caprice had captured her and taken her to her veterinarian who surmised Halo had been in an accident. She had a slight limp, and he suspected a broken bone had healed. Dulcina had decided to take in Halo, even though she'd been pregnant. The birth of the kittens had been a joy to watch and experience. They were seven weeks old now, and Caprice had some good news for Dulcina. As Dulcina brewed two mugs of coffee in her single-cup brewer, Caprice asked, "So do you think Tia and Mason are ready for their forever homes?" "If I can find good ones," Dulcina answered, sounding worried. "A couple of people have asked me about them, but I just didn't feel they were right. Not real cat lovers, know what I mean? I almost feel like I have to do background checks and home studies." Caprice laughed. "They've been your babies, as well as Halo's, since they were born. Of course you're invested in their welfare. But I might have a solution." "You know two cat lovers?"
"I know one cat lover, and wouldn't it be good to keep Mason and Tia together?" "It would! Who do you have in mind?" "My uncle Dom. He's living in his own place now, and he'd like to adopt the kittens. He does bookwork at home for a few clients, and when he pet-sits, the two of them could keep each other company." "Doesn't he house-sit too? What would happen during those times?" "He's already thought of that. I told him I'd bring them to my place." "Wouldn't that be a riot with your two cats and Lady? I suppose I could bring them back here too." "I know he'd love Tia and Mason and care for them as if they were his kids. But it's your decision to make. Don't feel pressured because I suggested it." "Before I make any decision, he should meet them. I'd like to see him interact with them. Do you think he'd come over for a visit?" "I'm sure of it." Dulcina and her uncle had met in the summer when they'd all attended an Ace Richland concert together. In fact, Dulcina had been dating then. But she'd broken up with Rod. One of the mugs had finished brewing, and Dulcina set it in front of Caprice at the table. "Pumpkin spice." Caprice took a whiff and smiled. "Perfect for this time of year." As Dulcina watched her mug of coffee brew, Caprice asked, "How are you doing?" "Thank goodness I have the kittens," she said as the three balls of fur ran into the kitchen and tumbled over the cat bed Dulcina had tucked under the desk area of the counter. "Between work and them, I don't think about much." "Have you talked to Rod since your breakup?" "No. There's no point. His girls were having a hard time accepting me dating their dad, and he was doing nothing to make the transition easier. Yes, they come first. But if I was going to interact with them, and we would eventually try to make a family, he needed to include me in their family life. He wasn't doing that. He wouldn't even discuss it." "The concert didn't help as it should have." "No, it didn't. 
His older daughter, Leslie, had her mind closed to Ace's music even before we attended the concert. I think Vanna and I could have become friends, but Rod prevented us from trying. Even when they came over to visit the kittens, he wouldn't let Vanna stay a little bit longer even though she wanted to. He could have left her here while he took Leslie to her activity, but he wouldn't do it. That was the final straw for me. I understand his wife walked out on him, and he has trouble trusting women. But with that huge issue between us, we couldn't form a real relationship." "Were you ready for one?" "I thought I was. But my marriage to Johnny was unforgettably right. I'm just afraid I'll never have my expectations met again. How are you and Grant doing? I know you had a rough patch this summer when he saw his ex-wife." "He needed to do that." Caprice was sure of it now, even though at the time she hadn't been. "He and Naomi lost a child. That's something both of them will deal with for the rest of their lives. He's been sharing more with me about what they talked about and what he felt, and we're becoming closer every day. I love him, Dulcina. I'm all in." She'd known Grant Weatherford, her brother's law partner, since he and her brother were college roommates. Divorce and tragedy had brought Grant to Kismet to find a new life and join her brother in his practice. She and Grant had had their ups and downs, but he was now the love of her life. Dulcina nodded. "That's the way it should be if your relationship is going to last." Mason chased Tia over to Caprice's chair, and then he climbed up her pant leg and ended up on her knee, looking up at her. "You're just too adorable and you know it," Caprice told him. He meowed at her, a squeaky little meow that he was growing into. Dulcina just shook her head. "They make me laugh and they fill me with joy. Just call me or text me when your uncle wants to visit. You know me. I'm flexible." Caprice liked to think she was too. 
"I'll check with Uncle Dom and see when he's free. I know he's anxious to make his new place a home." For some reason, Caprice's mind wandered once more to the Wyatt estate. Had that mansion ever really been a home? Her objective would be to turn it into one for women who sorely needed a place of warmth and stability. * * * On Thursday morning, Caprice stood at the door to Sunrise Tomorrow, the original facility. Wendy had received her proposal and had a few questions that she wanted to talk over in person. Caprice said her name through the intercom and waved at the camera. She recognized the security camera setup as one that accompanied her own alarm system. The shelter had to be careful about whom they let through its door. Caprice did too when she was involved with solving a murder. Not so long ago, danger had come calling. Wendy opened the door herself, wearing a smile. "Come on in. We can talk in my office." The original facility for Sunrise Tomorrow was very different from what Wendy wanted to accomplish at the Wyatt estate. This building had once been an assisted living facility that had gone bankrupt. Wendy had rounded up a group of investors and taken on the challenge of turning it into a shelter with rooms where women could spend the night. An office area had been utilized for day-today administration. As she walked through a small reception area and around a large desk where a receptionist sat to monitor not only who came in and out but what was going on inside too, Caprice could see that the inside of the shelter could use a little polishing. The furniture was looking shabby. But she wasn't here to talk about that today. She suddenly stopped as she spotted a woman who came from a back hall and walked through the reception area to the other wing. Caprice recognized her. Alicia Donnehy . . . and she was carrying a stack of what looked like just-washed laundry. 
As if her high school classmate could feel Caprice's eyes on her, Alicia stopped and glanced over her shoulder. She didn't wave or say hello. A shuttered look came across her face, and she turned toward the direction where she'd been headed and continued walking.

Alicia had been on the committee with Caprice to plan their high school reunion in July. What was she doing here? She was carrying laundry. Did that mean she was a volunteer? If so, why? Caprice's curiosity had gotten her into a lot of trouble . . . from childhood to the present day. She'd always asked questions that had baffled her teachers, stumped her priest, and amused her parents. Now the implications behind seeing Alicia here were serious.

Caprice hurried to catch up with Wendy and noticed a woman rifling through Wendy's file cabinet. She turned when Caprice and Wendy stepped inside.

Wendy said, "Lizbeth, this is Caprice De Luca. Caprice, this is Lizbeth Diviney. She's my second in command and can answer questions when I'm not around. She's going to be the director of the new facility once it's up and running."

Lizbeth was a redhead with a pixie hair style. She was only five foot two and as slender as Caprice would like to be. In a quick movement, Lizbeth pulled a folder and shut the file drawer. Then she shook Caprice's hand. "It's great to meet you. I've heard good things about your work." She waved the folder at Wendy. "I'll get right on this." In the next moment, she was gone from the office.

"She's high energy," Wendy said with a smile and motioned Caprice to a chair. Wendy's desk held stacks of papers, but otherwise the space looked feminine with its flowered chairs and pin-striped wallpaper. She didn't waste any time. "Your proposal makes a lot of sense to me, and I agree with ninety percent of it. The other ten percent has to do with the grand salon at the mansion and your bunk bed idea for two of the rooms upstairs. I'm thinking of having a partition divide the grand salon into two rooms. Two workshops could be conducted at the same time that way."

"No problem there," Caprice agreed. "Do you want them decorated the same way or do you want two different designs?"

"Even though I have the money with the legacy, I'm not going to splurge. Let's keep them both uniform. That's more economical, isn't it?"

"Yes, it is. And the reason you don't want the bunk beds?"

"I don't want these rooms to have a prison-cell feel. Bunk beds could suggest that, don't you think?"

"I proposed the bunk beds because it would give residents more room for a sitting area or double desks. Those rooms upstairs are anything but small or cell-like, and of course the decorating would make all the difference. Light airy draperies and coordinating bedspreads would never give a jail atmosphere. But again, that's up to you."

"Let me think about it." Wendy had pulled a list in front of her along with Caprice's proposal that she'd printed out. They went over several more items. Wendy was an easy client to work with because she seemed to take Caprice's suggestions, and Caprice had no problem compromising to give Wendy exactly what she wanted.

"I suppose you'll have volunteers working at the transitional facility too," Caprice said finally.

"We count on our volunteers," Wendy agreed. "And the women who've been helped by us want to give back."

"Are your volunteers all women who've needed to take refuge in the shelter?"

Wendy didn't hesitate to answer. "They usually are. The truth is, most people don't want to get involved, not with anything that has to do with domestic violence and protective orders."

"I can see that." She again thought about Alicia and wondered if her best friend, Roz, knew Alicia better than she did. Roz had been on the reunion committee too.

Wendy glanced up at the clock hanging on the wallpapered wall. "I have another meeting in fifteen minutes. I think we've covered everything."

Caprice rose to go.

Wendy snapped her fingers. "I forgot to tell you that you're most welcome to bring Grant Weatherford to dinner on Saturday. Rumor has it that the two of you are dating."

"We are," Caprice answered. "I'll ask him and see if he'd like to come along. He might be interested in the co-housing concept too and enjoy talking to Sebastian."

Wendy's phone rang. She held up her finger to Caprice and picked it up. Caprice waited. Even three feet from the phone, she could hear an angry voice on the other end, and it sounded male. Wendy seemed to take a bolstering breath and then she slammed down the receiver without saying a word.

"Trouble?" Caprice asked.

"Trouble we often get here."

"An angry husband?"

Wendy just nodded. Then she said, "That's one of the reasons why a state-of-the-art alarm system as well as security cameras are a must for the new facility, no matter what the cost. I'd like to have a few inside too, in the public areas. Do you think you can come up with inventive ways to disguise them?"

"My family insists I can be very inventive."

Wendy gave Caprice a weak smile. "It's coming together, Caprice—all of it. I'm determined to keep these women safe from anyone who intends to do them harm."

Wendy's vehemence came from more than a desire to do good, Caprice suspected. Maybe someday soon she'd find out what had driven Wendy into this life's work.

To the extent that the image or images on the cover of this book depict a person or persons, such person or persons are merely models, and are not intended to portray any character or characters featured in the book.

KENSINGTON BOOKS are published by
Kensington Publishing Corp.
119 West 40th Street
New York, NY 10018

Copyright © 2016 by Karen Rose Smith

All rights reserved. No part of this book may be reproduced in any form or by any means without the prior written consent of the Publisher, excepting brief quotes used in reviews.

If you purchased this book without a cover, you should be aware that this book is stolen property. It was reported as "unsold and destroyed" to the Publisher and neither the Author nor the Publisher has received any payment for this "stripped book."

Kensington and the K logo Reg. U.S. Pat & TM Off.

ISBN-13: 978-1-61773-772-5
ISBN-10: 1-61773-772-0
First Kensington Mass Market Edition: May 2016

ISBN: 978-1-6177-3772-5
First Kensington Electronic Edition: May 2016
For other uses, see SARS.

Sveže amputirana ruka Satrijanija ("Satriani's Freshly Amputated Arm", abbreviated S.A.R.S.) is a band from Belgrade, more precisely from Resnik. Their music is a distinctive combination of pop, rock, reggae, hip-hop and blues with elements of traditional Serbian folk music.

Band biography

The band was founded in Belgrade in March 2006 by Aleksandar Luković Lukac and Dragan Kovačević Žabac. The first lineup consisted of Žarko Kovačević Žare (vocals), Dragan Kovačević Žabac (vocals), Vladimir Popović Hobbo (vocals), Aleksandar Luković Lukac (guitar), Miloš Kovačević Kriva (bass guitar), Branislav Lučić Beban (percussion), Goran Mladenović Japanac (drums) and Ivana Blažević Violina (violin). The members come from different musical backgrounds and interests, and the band's distinctive sound and style emerged from the fusion of the styles and ideas they brought with them. After a few smaller gigs in Belgrade clubs and a performance at K.S.T. (Klub studenata tehnike, the engineering students' club) in Niš as part of the Peace Caravan, the band went through a crisis. In early 2008 bassist Kriva left for military service, while Japanac and Violina left the band, and the group temporarily stopped working. During that period a friend of the band (Siniša, the best man), familiar with their work, downloaded a song from their official MySpace profile, made a video for "Buđav lebac" (recorded in modest home-production conditions) and posted it on YouTube. "Buđav lebac" became enormously popular in Serbia and across the former Yugoslavia, and was the most-played track on YouTube in the Serbo-Croatian-speaking area (excluding the turbo-folk genre). The success and reception of "Buđav lebac" is a social and cultural phenomenon in its own right, especially given that it came from a completely unknown band with no financial backing or conventional media promotion: a song by a band from the alternative, underground scene whose only medium of presentation was the internet. The "buđav lebac" phenomenon encouraged the remaining members to regroup in mid-2008 and to continue recording demos in the Resnik Sound Laboratory (R.L.Z.). One of those demos, for the song "Ratujemo ti i ja", was posted on the internet. New members joined the band: Miloš Bakalović Bakal (drums) and Boris Tasev Bora (keyboards, accordion).

First album "S.A.R.S."

At the end of 2008 the band was offered a record deal by the national label PGP RTS. With production help from Đorđe Miljenović (Sky Wikluh), the first album was finished in a few months and released in March 2009. The lyrics were written by Žabac and Hobbo, and the music and arrangements by Žarko, Lukac, Sky Wikluh and the other band members. Guests on the release include multi-instrumentalist Nemanja Kojić Kojot (Eyesburn), who played trombone on "Ratujemo ti i ja" and "Zubarka"; multi-instrumentalist Aleksandar Sale Sedlar Bogoev, who played bouzouki on the same songs; Nikola Demonja, who played saxophone on them; and drummer Vladan Rajović Vlada (Kanda, Kodža i Nebojša), who played on "Zubarka". Alongside the album, an animated video for "Buđav lebac" was released, drawn by the Serbian illustrator Aleksa Gajić. The album was soon reissued by the label Zmex. The group received TV Metropolis's award for musical discovery of 2008 and won two audience awards in a poll organized by the online magazine Popboks, for best debut band and best single of 2008. Only two months later, however, Popboks journalist Uroš Smiljanić reviewed the S.A.R.S. debut as "a thin layer of crude humor and cheap formal experimentation smeared over a thick slab of nothing". Uroš Milovanović published his review of the debut album on the B92 website. Before the final of the 200-meter breaststroke in Rome in 2009, in which she set a new European record and became world champion, the Serbian swimmer Nađa Higl hummed their song "Buđav lebac". Later the crowd at her welcome in front of the Belgrade City Assembly chanted the song's refrain, and at a reception organized in her honor in her hometown of Pančevo the band S.A.R.S. performed, with Nađa herself joining in the singing.

Lineup changes

In June 2009 the lineup changed significantly. Hobbo, Kriva, Beban and Bakal were no longer members of S.A.R.S., and new members joined: Nenad Đorđević Đole (bass guitar), Tihomir Hinić Tihi (drums), Petar Milanović Pera (trombone, saxophone) and Sanja Lalić (backing vocals). The former members Hobbo (vocals), Kriva (bass guitar), Bakal (drums) and Violina (violin), together with new member Strašni (guitar), carried on in a new band called VHS (Very Heavy S.A.R.S.). In that period the second video from the current album appeared, for the song "Debeli lad", introducing the new members to the wider public for the first time. In November 2009 a third video followed, for "Rakija". The band appeared on the show "Fajront republika" on FOX Television with the songs "Buđav lebac" and "Rakija", and performed "Perspektiva" at the "Raskršće" festival as well as at a festival in Prijedor. Not until 16 January 2012 did the band release a video and studio version of "Perspektiva".

Tour

In October 2009 the band set out on its first major tour: 23 October at the Sports Hall in Velika Plana, 29 October at the Smederevo Cultural Center, 31 October at the "Nirvana" club in Kikinda, 6 November at the "Privilidž" club in Tuzla, 7 November at the "Koloseum" club in Zagreb, and 20 November at the "Dom željezničara" in Sarajevo, followed by a big concert on 28 November at the "Dom omladine" in Belgrade, which closed the tour. During the tour the band also had another notable appearance in Tuzla. After the tour the band entered the studio to record material for its next album, as confirmed on the official site and on all of the band's other profiles. In March 2010 the band appeared on MTV with a well-received performance. The album was expected in the middle of that year. In early January 2010 the series "Može i drugačije" began airing on RTS; S.A.R.S. wrote the music for the series, including the song on the closing credits.

Second album "Perspektiva"

The second S.A.R.S. studio album was released at midnight on 31 December 2010. It could be downloaded free of charge from MTV's local and regional sites and reached close to 50,000 downloads in three months, a considerable success at the time. The first single was "To rade", written by Stefan Tarana, for which S.A.R.S. shot a video premiered on 21 February 2011. A video was also made for "Mir i ljubav". Curiously, the title track "Perspektiva" was not included in the free download edition.

Discography

Studio albums: S.A.R.S. (2009), Perspektiva (2011), Kuća časti (2013), Ikone pop kulture (2014), Proleće (2015), Poslednji album (2016), Glava (2019)
Live albums: Mir i ljubav (2015)
Singles: Živim na Balkanu (2017), Ljubav umire (2018), Teško je (2018)
Other: "Naša mala zemlja" (Brkati gosti, 2013)

External links

S.A.R.S. official site
S.A.R.S. on Facebook
S.A.R.S. on YouTube
"Mi nismo puka senzacija, imamo štošta da kažemo" ("Politika", 1 January 2017)
"Ne preostaje nam ništa drugo nego da stisnemo zube i borimo se kako umemo" (B92, 25 February 2018)
"Posao u finansijama napustio zbog koncerata" ("Politika", 29 December 2019)
If the cube supports editing (that is, its measure group has an SSAS writeback partition), the pivot grid table can be used in editing mode to change the cube data directly. The region of the table available for editing is highlighted with a yellow background; it is determined by the cube security settings, the types of the indicators (calculated indicators are not editable), and so on. A cell value may be entered as an arithmetic expression using the syntax and functions of the MDX language, which lets the user compute an indicator in the current context (the server evaluates the expression during recalculation). Cells modified by the user are highlighted with a blue background and dark blue bold text. How a modified cell is displayed depends on the current server data synchronization mode.

Figure 1. Editing a Ranet OLAP pivot table with a writeback partition.

Automatic update: any modification of a cell sends the change to the server, which recalculates the data and returns the result to the client.

Working with a cache: modifications are stored in a cache on the local workstation and are sent to the server only on the user's command Save changes (Recalculate: recalculates the table data with the current changes), which reduces traffic and keeps the response fluent. In this mode, modified cells that have not yet been sent to the server are highlighted with a blue background, while cells already synchronized with the server are shown in dark blue bold text. When using the cache, the user can send the data to the server for recalculation as often as necessary. Undo in the pivot grid table rolls back only the modifications accumulated in the cache since the last save (recalculation) or since the beginning of editing; undoing changes that have already been recalculated requires rolling back the whole transaction.

All changes are isolated within the user's session and are not visible to other users until the transaction is committed. Data can be entered both for leaf-level members and for aggregates. When an aggregate is edited, an allocation mechanism automatically distributes the value over the subordinate leaf members. Cube dimensions generally contain a large number of members, so automatic allocation should be used with caution: it can generate millions of writeback records, which would substantially slow down the application or make it inoperative. For this reason the number of updated cells should be controlled and limited, and the developer of the MDX query should define the update rules in the UPDATE CUBE command.

Figure 2. Sample script for a cube update command.

To obtain the coordinates of the cell being modified by a specific change, reference the change hierarchy in the UPDATE CUBE command; the change hierarchy is enclosed between the symbols <% %>, e.g. <%[Change].[Hierarchy]%>, the parameter <%newValue%> returns the new value, and <%oldValue%> returns the old value stored on the server. This makes it possible to compute the delta correctly when an aggregate is edited. It is recommended to include the whole tuple in the update command in order to prevent uncontrolled generation of individual writeback operations. To support offline work, the result of the original query and the changes can be saved to an XML file on the local machine and read back from that file later.
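As a rough illustration of these update rules, here is a hypothetical UPDATE CUBE command in the style the text describes. The cube, dimension, and measure names are invented; the <% %> placeholders follow the Ranet convention quoted above, and USE_EQUAL_ALLOCATION is a standard MDX allocation clause:

```mdx
-- Hypothetical sketch: [Budget], [Date], and [Measures].[Amount] are invented names.
UPDATE CUBE [Budget]
SET (
      <%[Change].[Hierarchy]%>,   -- coordinate of the member being edited on the client
      [Date].[Year].&[2024],      -- fix the rest of the tuple to limit the writeback scope
      [Measures].[Amount]
    ) = <%newValue%>
USE_EQUAL_ALLOCATION             -- spread the delta evenly over the subordinate leaf members
```

Including the whole tuple, as recommended above, keeps the server from generating writeback records for every unconstrained coordinate.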
{"url":"http:\/\/mron.umood.it\/big-o-notation-calculator.html","text":"# Big O Notation Calculator\n\nSimply enter your problem and click Answer to find out if you worked the problem correctly. COSMOS 2020 - PUZZLES; COSMOS games - Spaceship Racing Game; Kayley pyramid. See the Scientific Notation Calculator to add, subtract, multiply and divide numbers in scientific notation or E notation. Next up we've got polynomial time algorithms. For example, the bubble sort has a O(n2) notation because in the worst case scenario, you have to. So, if I say your algorithm , it really is. GROWTH OF FUNCTIONS 138 Discussion As Theorem 4. Convert Scientific Notation to a Real Number. So, x log x < x^2 for all x > 0. Interview question for SummerEngineering Program in Dublin, Co. _ \u2022Big-O gives an upper bound on the growth of a function, while Big-Omega gives a lower bound. We say that a linear search is a O(n) algorithm (read \u201cbig-Oh of n\u201d), while a binary search is a O(log n) algorithm (\u201cbig-Oh of log n\u201d). What big-O complexity means Given two functions f(n) and g(n), we say that f is O(g) (\"f is big-O of g\") if f(n) is bounded by. Basically, it tells you how fast a function grows or declines. For example, an algorithm linear according to Big-O notation reduces the size of the problem by a constant amount at each step, and also involves looking at each part of the input a constant number of times. The worst case scenario in that case would be n. Multiply the decimal number by 10 raised to the power indicated. MEASURES & BAR LINES. big-O complexity: 4n. leading 1 in the first row, use row operations to obtam o's below It the process With each successive co I unm after getting a and then repeat 4 0 0 0 0 72 3 0 11 0 0 3 4 3 11 0 0 01- Elementary Row Operations Assume A IS an augmented matrix conespondmg to a given system of equations. 
Big-O Notation; Binary search; Bitwise Operators; Bobby Tables; Brownie Points; Building applications with React and Flux; CamperBot; Capitalize the First Letter of a String; Chai. Big-O notation, sometimes called \u201casymptotic notation\u201d, is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. We use the Big-O notation to classify algorithms based on their running time or space (memory used) as the input grows. Here are a few common expressions: Constant or O(1). Big O notation is used in Computer Science to describe the performance or complexity of an algorithm. Google Domains Hosted Site. Example 2: Using Transformations to Graph a Rational Function. Postfix expression calculator. in memory or on disk) by an algorithm. The range of \"reasonable\" n. \u039f Notation; \u03a9 Notation; \u03b8 Notation; Big Oh Notation, \u039f. With Big O notation, this becomes T(n) \u220a O(n 2), and we say that the algorithm has quadratic time complexity. What each terms in big O notation means, Is there any blog or site that explain big O for java. See the Scientific Notation Calculator to add, subtract, multiply and divide numbers in scientific notation or E notation. O: Asymptotic Upper Bound 'O' (Big Oh) is the most commonly used notation. We help businesses of all sizes operate more efficiently and delight customers by delivering defect-free products and services. Additional Resources. Any number that is represented in decimal notation is called as decimal number. Indeed you should be able to see that the constants will only be the same if and only if t(n) = cg(n) for some c and for some su\ufb03ciently large n. big-o growth. 38 into exponential form to get. See this talk (PDF) and this sort video interview. First, read our intro to big O notation. e the longest amount of time taken to execute the program. 
The below table has list of some common asymptotic notations in ascending order. The Big O notation is useful when we only have upper bound on time complexity of an algorithm. Thus we don\u2019t have to do a proof!. And along comes Big O. Reminds me of way back when, just for fun, I tapped a number into a calculator, then LOG then x2. The dominant term is the term that grows most quickly as nbecomes. What happens when the functions involved are not polynomials? For example, how does the growth of the exponential compare to a polynomial? Or factorial and exponential?. Big O, Big Omega, and Big Theta Notation In this algorithms video, we lay the groundwork for the analysis of algorithms in future video lessons. The following table presents the big-O notation for the insert, delete, and search operations of the data structures: Data Structure Average cases. It measures the worst case time complexity or the longest amount of time an algorithm can possibly take to complete. Interview question for Software Engineering Manager in San Francisco, CA. Use MathJax to format equations. I am obliged by scholarly tradition to document other sources which have taken a similar approach. f = O(g) and f = \u03a9(g) \u21d4 f = \u0398(g) f = o(g) \u21d2 f = O(g) f = O(g) \u21d4 g = \u03a9(f) f = \u03c9(g) \u21d2 f = \u03a9(g) f = o(g) \u21d4 g = \u03c9(f) f \u223c g \u21d2 f = \u0398(g). Active 3 years, 5 months ago. of the function, without. introduce big O notation as a way of indicating \"higher order terms\" in a more precise way, and use the identity exp (i x)=cos (x)+i\u00b7sin (x) for calculating anything trig-related. 2 Problem 2TY. Big O notation is used in Computer Science to describe the performance or complexity of an algorithm. To calculate Big O, you can go through each line of code and establish whether it\u2019s O (1), O (n) etc and then return your calculation at the end. 
b) Big-Theta notation describes both the upper and lower bounds of the efficiency of a given algorithm with a given input size c) Big-O and Big-Omega notation do the same thing as Big-Theta notation, except they only bound one of either the upper limit or lower limit of the efficiency of the algorithm, respectively. n is the thing the complexity is in relation to; for programming interview questions this is almost always the size of a collection. big examples notation how function omega and calculator the python big o - What is the difference between \u0398(n) and O(n)? Sometimes I see \u0398(n) with the strange \u0398 symbol with something in the middle of it, and sometimes just O(n). On most systems, you can type C-x * to start the Calculator. The best time complexity in Big O notation is O(1). Even with the improvement of computing power, the scientific notation remains popular among engineers, scientists and mathematicians, who, alongside students, are the primary users of this scientific notation calculator \/ converter. f(x0)f(x1). Vaidehi Joshi's explanation on BaseCS made the concept behind Big-O click for me. For example, consider the case of Insertion Sort. What happens when the functions involved are not polynomials? For example, how does the growth of the exponential compare to a polynomial? Or factorial and exponential?. 000000001 \u2264 x \u2264 9999999999 \u2022 NORM2: 0. Depth first search, breadth first search, Big O notation. The Big O notation is useful when we only have upper bound on time complexity of an algorithm. Although big-O notation is a way of describing the order of a function, it is also often meant to represent the time complexity of an algorithm. LOGIN New to Big Ideas Math? LOG IN. The symbol , pronounced \"big-O of ,\" is one of the Landau symbols and is used to symbolically express the asymptotic behavior of a given function. I was wondering if there are any calculus relationships implicit in Big-O notation. 
o(f(n)) is an upper bound, but is not an asymptotically tight bound. That means it will be easy to port the Big O notation code over to Java, or any other language. Some people use a comma to mark every 3 digits. Let's break that down: how quickly the runtime grows \u2014It's hard to pin down the exact runtime of an algorithm. Experience handheld calculating in the age of touch with the HP Prime Graphing Calculator, which has a full-color, gesture-based, and pinch-to-zoom interface, background images, function sketching, multiple math representations, wireless connectivity 1, and a rechargeable battery. Asymptotic notation, also known as \"big-Oh\" notation, uses the symbols O, and. Big Calculator is a large calculator for Windows. the Big-Oh condition cannot hold (the left side of the latter inequality is growing infinitely, so that there is no such constant factor c). Big-O Notation. When studying the time complexity T(n) of an algorithm it's rarely meaningful, or even possible, to compute an exact result. The Calculator automatically determines the number of correct digits in the operation result, and returns its precise result. What matters in Big O notation is where everything goes wrong. Note there is a spreadsheet posted in the Notes\/Examples section of WebCT showing sample running times to give a sense of a) relative growth rates, and b) some problems really are intractable. O(2 n) operations run in exponential time - the operation is impractical for any reasonably large input size n. This page documents the time-complexity (aka \"Big O\" or \"Big Oh\") of various operations in current CPython. This is called big-O notation. Big O notation ranks an algorithms\u2019 efficiency. 
leading 1 in the first row, use row operations to obtam o's below It the process With each successive co I unm after getting a and then repeat 4 0 0 0 0 72 3 0 11 0 0 3 4 3 11 0 0 01- Elementary Row Operations Assume A IS an augmented matrix conespondmg to a given system of equations. Big O notation isn't time consuming calculation. Visit Stack Exchange. Let's have a look at a simple example of a quadratic time algorithm:. O(n^2) is most commonly referred to as \"quadratic\" in Big O notation. While big-O is the most commonly useful, since we most often care about the longest possible running time of a particular scenario (e. We need a way to show really big and really small numbers. Common Asymptotic Notations. Big-Omega Notation Definition: Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. o(f(n)) is an upper bound, but is not an asymptotically tight bound. 4n +3 \u2208 O(n) \u2014 de\ufb01nition is satis\ufb01ed using c = 5 and N0 = 3. upper and lower. May often be determined by inspection of an algorithm. Repeat forever. There are four basic notations used when describing resource needs. Working with a large number such as this can quickly become awkward and cumbersome. What matters in Big O notation is where everything goes wrong. Big O notation Big O specifically describes the worst-case scenario, and can be used to describe the execution time required or the space used (e. Big-O notation, sometimes called \"asymptotic notation\", is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. Vaidehi Joshi's explanation on BaseCS made the concept behind Big-O click for me. Big O notation is a convenient way to describe how fast a function is growing. Make sure youprove this using the formal definition of big-\u0398, not just an intuitive explanation. 
This will give you some good practice finding the Big O Notation on your own using the problems below. Intuition: growth rate of f is at most (same as or less than) that of g Eg. Big O notation is used in Computer Science to describe the performance or complexity of an algorithm. Additional Resources. For example, if you want to convert from meters to micrometers you would convert from \"base unit\" to micro. \u03a9 and \u0398 notation. What Is Big O Notation In Java Mar 6, 2015. The second number in the equation is a power of 10, written as 10 with an exponent, like 102 which stands for 10x10. Become a fan!. The way we write expressions uses infix notation. Big o cheatsheet with complexities chart Big o complete Graph ![Bigo graph][1] Legend ![legend][3] ![Big o cheatsheet][2] ![DS chart][4] ![Searching chart][5] Sorting Algorithms chart ![sorting chart][6] ![Heaps chart][7] ![graphs chart][8] HackerEarth is a global hub of 3M+ developers. The only thing I have is the header file. If you're just joining us, you will want to start with that article, What is Big O Notation? What is Big O? Big O notation is a system for measuring the rate of growth of an algorithm. For example, consider the case of Insertion Sort. Big O notation. COSMOS 2020 - PUZZLES; COSMOS games - Spaceship Racing Game; Kayley pyramid. For example, a calculator would show the number 25 trillion as either 2. equal Big-O complexity. Active 3 years, 5 months ago. Often, all it takes is one term or one fragment of notation in an equation to completely derail your understanding of the entire procedure. If you want to know more about the O notation for some interview questions, please check my Java Coding Interview Pocket Book. Linear time means that the time taken to run the algorithm increases in direct proportion to the number of input items. big-O complexity: 4n. Scientific Notation Calculator Scientific Notation Converter. 
On the third page of that paper, they write Next we apply the conditional Hamiltonian evoluti Stack Exchange Network Stack Exchange network consists of 177 Q&A communities including Stack Overflow , the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. | Find, read and cite all the research you need on ResearchGate. When preparing for technical interviews in the past, I found myself spending hours crawling the internet putting together the best, average, and worst case complexities for search and sorting algorithms so that I wouldn't be stumped when asked about them. PDF | Pembahasan tentang Time Complexity dan Big-O-Notation. \u03a9 and \u0398 notation. We can safely say that the time complexity of Insertion sort is O(n^2). Technically, f is O(g) if you can find two constants c and n0such that f(n) <= cg(n) for all n > n0. 1 Starting Calc. Suppose you need to run an algorithm on a set of n inputs. The Big-O notation The simplified term n^2 stands for a whole class of terms, so let\u2019s give it a name. Big-O Notation is a statistical measure, used to describe the complexity of the algorithm. Although big-O notation is a way of describing the order of a function, it is also often meant to represent the time complexity of an algorithm. only ever used if k isn\u2019t larger than n; in other words, if the range of input values isn\u2019t greater than the number of values to be sorted. Example 2: Using Transformations to Graph a Rational Function. Little o always means big O is true too. Big O specifically describes the worst-case scenario, and can be used to describe the execution time required or the space used (e. Simply enter your problem and click Answer to find out if you worked the problem correctly. Acceptable formats include: integers, decimal, or the E-notation form of scientific notation, i. O\/8 players generally don't understand the game and play the wrong kinds of cards, all the time. 
f(p)=p+ 1\/{s?+7)+2\/(s?). High precision calculator (Calculator) allows you to specify the number of operation digits (from 6 to 130) in the calculation of formula. Bisection method is bracketing method and starts with two initial guesses say x0 and x1 such that x0 and x1 brackets the root i. Pages in category \"Mathematical notation\" The following 102 pages are in this category, out of 102 total. And you can't bluff as much. These algorithms are even slower than n log n algorithms. Help! I have a question where I need to analyze the Big-O of an algorithm or some code. See the note at big-O notation. I N S T R U C T I O N S. , c = 1) and x0 (e. Data structures We have covered some of the most used data structures in this book. ( wiki ) Roughly speaking, you remove everything but the fastest-growing term from the complexity function. Interview question for SummerEngineering Program in Dublin, Co. Pengenalan Big O Notation Bachman-Landau. 38 into exponential form to get. In your example, we can say f(N) = O(N^4). Simple zsh calculator. Here's how I understand it: Big-O notation is a way to mathematically express how an algorithm performs as its input increases. \u039f (Big Oh) Notation * It is used to describe the performance or complexity of a program. in memory or on disk) by an algorithm. by Festus K. Thus any fractional number can be represented in decimal notation. Similarly, when x approaches 0, x^n becomes smaller for bigger n's. \u039f Notation; \u03a9 Notation; \u03b8 Notation; Big Oh Notation, \u039f. log [(log n) 2] n! (n\u22121)! Answer. I promise you: I am not overstating this for effect. Big-O notation is a way of ranking about how much time it takes for an algorithm to execute ; How many operations will be done when the program is executed? Find a bound on the running time, i. Big-O notation doesn\u2019t care. Determine*the*running*time*of*simple*algorithms*! Bestcase*! Average*case*! Worst*case*! Pro\ufb01le*algorithms*! 
Understand*O*notation's*mathematical*basis*. The wiki article from which this is taken is an excellent reference if you need to quickly know the Big O of common\/popular algorithms. o(f(n)) is an upper bound, but is not an asymptotically tight bound. Note: If a +1 button is dark blue, you have already +1'd it. Analysis of Algorithms 13 Asymptotic Analysis of The Running Time \u2022 Use the Big-Oh notation to express the number of primitive operations executed as a function of the input size. Big O notation is a method for determining how fast an algorithm is. For instance, if I say an algorithm , it's not guaranteed that the algorithm runs \"slow\". Big O Notation Practice Problems. TI's most advanced scientific calculator with enhanced mathematical functionality. People refer to Big O notation as \u201cbig o of\u201d or \u201corder of,\u201d and then whatever is inside the parentheses, usually n. When preparing for technical interviews in the past, I found myself spending hours crawling the internet putting together the best, average, and worst case complexities for search and sorting algorithms so that I wouldn't be stumped when asked about them. introduce big O notation as a way of indicating \"higher order terms\" in a more precise way, and use the identity exp(i x )=cos( x )+i\u00b7sin( x ) for calculating anything trig-related. Similar to big O notation, big Omega(\u03a9) function is used in computer science to describe the performance or complexity of an algorithm. Vaidehi Joshi\u2019s explanation on BaseCS made the concept behind Big-O click for me. A beginner's guide to Big O notation. LOGIN New to Big Ideas Math? LOG IN. This is the best way to understand Big-O thoroughly to produce some examples: O(1) - Constant time complexity. Logarithmic. To calculate Big O, you can go through each line of code and establish whether it\u2019s O (1), O (n) etc and then return your calculation at the end. * Big O describes the worst-case scenario i. 
Little-o is written with a lowercase Latin "o", not the lowercase Greek letter omicron. Big O notation basically tells you how fast a function grows or declines, and in algorithm analysis it gives the worst-case complexity of an algorithm. The key idea is the dominating term: in 10n^3 + 24n^2 + 3n log n + 144, the n^3 term dominates for large n, so the whole expression is O(n^3). An algorithm that touches each input element once runs in O(n), also known as linear time. A frequent question is the difference between Θ(n) and O(n): Θ bounds a function from both sides, while O bounds it only from above.
Recursively, g(n) = o(f(n)) means g(n)/f(n) = o(1), i.e. g(n)/f(n) → 0. For big O itself, let f and g be functions from positive numbers to positive numbers. Then we say that f(n) is O(g(n)) provided that there are constants C > 0 and N > 0 such that for all n > N, f(n) ≤ C·g(n). Big O notation thus ranks algorithms by efficiency. When simplifying we also drop any lower-order terms from an expression with multiple terms; for example, O(N^3 + N^2) would be simplified to O(N^3).
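The definition with constants C and N can be sanity-checked numerically. A minimal sketch, using the illustrative pair f(n) = 4n + 3 and g(n) = n with C = 5 and N = 3:

```python
def f(n):
    return 4 * n + 3

def g(n):
    return n

C, N = 5, 3
# f(n) <= C*g(n) for all n > N, since 4n + 3 <= 5n exactly when n >= 3.
assert all(f(n) <= C * g(n) for n in range(N, 10_000))
```

A finite check like this is not a proof, but it is a quick way to catch a wrong guess for C and N before attempting the algebra.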
We call this big O notation because it uses the capital letter O, for "order". The constants in the definition must be fixed for the function f and must not depend on n. A small table helps find candidate constants C and k with f(n) ≤ C·g(n) for all n > k. Taking f(n) = 3n + 7 (reconstructed from the tabulated values) and g(n) = n:

n      f(n)   g(n)   ⌈f(n)/g(n)⌉
1      10     1      10
10     37     10     4
100    307    100    4

This table suggests trying k = 1 and C = 10, or k = 10 and C = 4. Because O gives only an upper bound, O(n^2) also covers linear time: n = O(n^2). The mirror image is Ω: if a running time is Ω(f(n)), then for large enough n the running time is at least k·f(n) for some constant k. As a warm-up, consider: for i in range(2): for j in range(2): print(i, j). Both loop bounds are constants, so the fragment always performs four iterations and is O(1).
Big-O notation is a way of ranking how much time it takes for an algorithm to execute: how many operations will be done when the program runs? We want a bound on that running time. When you loop through an array to find whether it contains an item X, the worst case is that X is at the end or not present at all, so every element must be examined. To compute the frequency count of a code segment, first decide which quantity you want to vary, then count how many times each statement executes as a function of it. Insertion sort is the standard worked example. Actual running times also depend on hardware and implementation details, but we do not consider any of those factors while analyzing the algorithm; the Big-O notation is the way we determine how fast any given algorithm is when put through its paces.
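The array-scanning worst case can be sketched as a linear search. This helper is illustrative, not from any particular library:

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent.
    Worst case is O(n): target at the end, or not present at all,
    forces every element to be examined."""
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

data = [4, 8, 15, 16, 23, 42]
linear_search(data, 4)    # best case: found at index 0 after one comparison
linear_search(data, 99)   # worst case: all 6 elements examined, returns -1
```

Big O reports the second scenario: however lucky the typical call is, the bound O(n) covers the unlucky one.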
The time efficiency of almost all of the algorithms discussed here can be characterized by only a few growth-rate functions. Using the definition of big-oh notation, you can prove that a polynomial f(N) can be expressed as O(N^k), where k is the exponent in the leading term. The recipe is always the same: find a function that expresses the number of operations of the code in terms of n, then keep the dominant term; the order of the dominant term is the order of the function. For example, bubble sort has an O(n^2) bound because in the worst case every pass must compare adjacent pairs across the whole array. At the other extreme, O(2^n) means that the time taken doubles with each additional element in the input data set. The same notation lets us compare the performance of different search algorithms. (Near zero the picture reverses: when x approaches 0, x^n becomes smaller for bigger n, which matters for power-series remainders.)
Big O is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation; Big-O, written O(·), is one of five standard asymptotic notations. Despite the phrase "running time", big-O is almost never about literal time: it is about growth. Since a binary search tree with n nodes has on the order of log n levels at a minimum, it takes at least O(log n) comparisons to find a particular node. The corresponding little-o means "is ultimately smaller than": f(n) = o(1) means that f(n) → 0, so f(n) eventually drops below any constant c. Remember that O is like ≤: f(n) = O(g(n)) says f(n) is a lot smaller than, or roughly the same as, g(n). With Big O notation a quadratic running time becomes T(n) ∈ O(n^2), and we say the algorithm has quadratic time complexity. Insertion sort is the classic case: it takes linear time in the best case and quadratic time in the worst case.
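Insertion sort's best/worst gap becomes concrete if you count comparisons. This instrumented version is a sketch for illustration, not a canonical implementation:

```python
def insertion_sort(items):
    """Insertion sort; returns (sorted_copy, comparison_count).
    Already-sorted input costs n-1 comparisons (linear); reverse-sorted
    input costs n(n-1)/2 comparisons (quadratic)."""
    a = list(items)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]   # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comparisons

_, best = insertion_sort(range(100))           # sorted input: 99 comparisons
_, worst = insertion_sort(range(100, 0, -1))   # reversed input: 4950 comparisons
```

The worst-case count 4950 is exactly 100·99/2, which is why the overall classification is O(n^2) even though the best case is linear.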
Big O is defined as the asymptotic upper limit of a function: f ∈ O(g) when there exist constants c > 0 and N0 ≥ 0 such that f(n) ≤ c·g(n) for all n ≥ N0. In big O, we only care about the biggest term. The five notations are related as follows:

f = O(g) and f = Ω(g) ⇔ f = Θ(g)
f = o(g) ⇒ f = O(g)
f = O(g) ⇔ g = Ω(f)
f = ω(g) ⇒ f = Ω(g)
f = o(g) ⇔ g = ω(f)
f ∼ g ⇒ f = Θ(g)

When g bounds f from below, we say that f(x) is big-Omega of g(x). These notations let us measure both the time and the space complexity of our code. As a quick growth comparison: x log x < x^2 for all x > 0, so x log x = O(x^2).
Recall that when we use big-O notation, we drop constants and low-order terms. Big O specifically describes the worst-case scenario, and it can describe either the execution time required or the space used (e.g. in memory or on disk) by an algorithm. The two-sided version is Θ: f(x) = Θ(x^2) implies both a lower bound and an upper bound. Formally, Big-O gives the upper bound of a function: O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }. For recurrences, the master theorem is a technique for determining asymptotic growth in terms of big O notation.
Data structures trade these costs off against each other: accessing one item is slower in trees than in hashes, but traversing a tree is faster than traversing a hash. To understand Big O concretely, compare two search algorithms: linear, O(n), and logarithmic, O(log n). In practice, Big O is used most of the time, to give an upper bound on an algorithm; the Θ notation is sometimes used, when the bound is tight from both sides; and the Ω notation, for lower bounds, is the least used of the three. Strict comparisons between classes use little-o: n^2 grows strictly more slowly than 2^n, which the technical notation writes as n^2 = o(2^n). The values of c and k in the definition must be fixed for the function f and must not depend on n.
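The practical difference between O(n) and O(log n) search shows up directly in probe counts. A sketch that counts the probes binary search makes over a sorted list:

```python
def binary_search_steps(n, target_index):
    """Count the probes binary search needs to locate target_index in a
    sorted list of size n: at most about log2(n) + 1, versus up to n
    probes for a linear scan of the same list."""
    lo, hi, steps = 0, n - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2          # probe the middle element
        if mid == target_index:
            return steps
        elif mid < target_index:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return steps

binary_search_steps(8, 7)                # 4 probes for 8 elements
binary_search_steps(1_000_000, 999_999)  # at most ~20 probes for a million
```

Each probe halves the remaining range, which is exactly the logarithmic behavior the O(log n) label summarizes.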
A worked instance of the definition: 4n + 3 ∈ O(n), satisfied using c = 5 and N0 = 3, since 4n + 3 ≤ 5n whenever n ≥ 3. Note that the "big-O" relation is not symmetric: n = O(n^2) but n^2 ≠ O(n). Big-Oh describes an upper bound, the longest possible run time, while big-Ω is the lower bound, the shortest possible run time. One caveat from sorting: some non-comparison sorts are only ever used if k isn't larger than n; in other words, if the range of input values isn't greater than the number of values to be sorted.
Big O notation, also called Landau notation or asymptotic notation, is a mathematical notation used to describe the asymptotic behavior of functions; it is a quick way to talk about algorithm time complexity, and in computer science it classifies algorithms according to how their running time or space requirements grow. How do you figure out the order of magnitude of a fundamental operation? A simple array access, like any other direct access to a variable, is of magnitude O(1); from there you build up by counting how the number of operations scales with the input.
To summarize: Big-Theta notation describes both the upper and the lower bound of the efficiency of a given algorithm with a given input size; Big-O and Big-Omega notation do the same thing, except each bounds only one of the two limits, the upper and the lower respectively. Big O notation is thus a way of approximating how much more time and space an algorithm requires as the size of the input grows from small to arbitrarily large, a way of comparing expected run times without getting bogged down in detail. In the expression O(f(n)), O refers to the order of the function, its growth rate, and n is the size of the input, for instance the length of the array to be sorted. The common asymptotic classes, in ascending order of growth, are O(1), O(log n), O(n), O(n log n), O(n^2), O(2^n), and O(n!). The Big O notation is especially useful when we only have an upper bound on the time complexity of an algorithm. The study of the performance of algorithms, or algorithmic complexity, falls into the field of algorithm analysis.
Big-O analysis rests on a few definitions and theorems. Transitivity: if f is O(g) and g is O(h), then f is O(h). Products: if T1(n) = O(f(n)) and T2(n) = O(g(n)), then T1(n)·T2(n) = O(f(n)·g(n)); this follows directly from the definition of the O-notation. Big O notation treats two functions as being roughly the same if one is c times the other, where c is a constant that doesn't depend on n. For example, we say that the arrayMax algorithm, which scans an array once for its maximum, runs in O(n) time. Big O Notation is denoted O(n), also read as "order of n" or "O of n". Cheat sheets covering the Big-O complexities of the common data structures, searching algorithms, and sorting algorithms, with their best, average, and worst cases, are widely available and worth keeping at hand.
When it comes to comparison sorting algorithms, the n in Big-O notation represents the number of items in the array that is being sorted. Using Big O notation, we can tell whether our algorithm is fast or slow before running it at all. Rule of thumb: simple programs can be analyzed by counting the nested loops of the program; k nested loops over n items give O(n^k).
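The nested-loop rule of thumb can also be checked empirically: for a quadratic algorithm, doubling the input size quadruples the operation count. A minimal sketch:

```python
def nested_ops(n):
    """Two nested loops over n items: exactly n*n inner operations, O(n^2)."""
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1
    return ops

ratio = nested_ops(200) / nested_ops(100)
print(ratio)  # 4.0: doubling n quadruples the work
```

Measuring how the count (or the wall-clock time) scales when n doubles is a cheap way to confirm a complexity guess: a factor of 2 suggests O(n), 4 suggests O(n^2), 8 suggests O(n^3).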
Big O notation (with a capital letter O, not a zero), also called Landau's symbol, is a symbolism used in complexity theory, computer science, and mathematics to describe the asymptotic behavior of functions. A function that needs roughly n^2 units of time to process n input items has a time complexity of O(n^2). Matrix multiplication illustrates both the power and the limits of the notation: asymptotically fast multiplication algorithms exist, but the constant coefficient hidden by the Big O notation is so large that they are only worthwhile for matrices too large to handle on present-day computers; in the other direction, since any algorithm for multiplying two n × n matrices has to process all 2n^2 entries, there is an asymptotic lower bound of Ω(n^2) operations. Keep in mind that Big-O notation provides an upper bound, not a tight bound from both sides.
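The schoolbook algorithm sits well above that Ω(n^2) lower bound: its three nested loops perform about n^3 multiply-adds. A minimal sketch:

```python
def matmul(A, B):
    """Schoolbook multiplication of two n x n matrices, given as nested
    lists: three nested loops over n, so about n^3 multiply-adds."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

identity = [[1, 0], [0, 1]]
M = [[2, 3], [4, 5]]
matmul(identity, M)  # multiplying by the identity leaves M unchanged
```

The gap between the n^3 schoolbook count and the n^2 reading bound is exactly the space in which the asymptotically faster (but constant-heavy) algorithms live.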
In this article we briefly review algorithm analysis and Big-O notation. The simplified term n^2 stands for a whole class of terms, so we give the class a name: formally, O(g(n)) = { f(n) : there exist positive constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0 }. Notice that O(g(n)) is a set of functions, so when we write f(n) = O(g(n)) we really mean f(n) ∈ O(g(n)). Associated with big O notation are several related notations, using the symbols o, Ω, ω, and Θ, to describe other kinds of bounds on asymptotic growth rates. Exercise: find the big-O class of f(s) = (s^2 + 3) log s + 8s log(s!) + log(s^2 + 1).
Once growth rates are pinned down we can compare algorithms cleanly: an algorithm whose operation count grows quadratically has Θ(n^2) time complexity, while an improved version that grows linearly has Θ(n). Of the asymptotic symbols, 'O' (big Oh) is the most commonly used. In practice, Big-O is used as a tight upper bound on the growth of an algorithm's effort (this effort being described by the function f(n)), even though, as written, it can also be a loose upper bound. The idea captured by big-Oh notation is like the concept of the derivative in calculus: both suppress detail to expose a rate of change. The same notation appears for power series, where it is centered on a particular x-value and expresses how the remaining terms behave, for example as x → 0. The best time complexity in Big O notation is O(1); in short, Big-O is the shorthand used to classify the time complexity of algorithms.
Big O notation is used in computer science to describe the performance or complexity of an algorithm. Note that the values of c and k in the definition must be fixed for the function f and must not depend on n.

A common question: what is the difference between Θ(n) and O(n)? Sometimes you see Θ(n), with the Θ symbol, and sometimes just O(n). The answer is that O(n) is only an upper bound on the growth rate, while Θ(n) bounds it from above and below. Informally, Big-O notation is a way to mathematically express how an algorithm performs as its input increases.
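The Θ-versus-O distinction can also be probed numerically. This sketch is not from the article; the functions and constants are illustrative, and the finite checks only hint at the asymptotic facts (f(n) = n is O(n^2) because the upper bound holds, but it is not Θ(n^2) because no positive lower-bound constant survives as n grows).

```python
def is_upper_bounded(f, g, C, k, n_max=10_000):
    # Spot-check f(n) <= C * g(n) on k <= n <= n_max (evidence for f = O(g)).
    return all(f(n) <= C * g(n) for n in range(k, n_max + 1))

def is_lower_bounded(f, g, c, k, n_max=10_000):
    # Spot-check f(n) >= c * g(n) on k <= n <= n_max (evidence for f = Omega(g)).
    return all(f(n) >= c * g(n) for n in range(k, n_max + 1))

f = lambda n: n          # linear
g = lambda n: n * n      # quadratic

print(is_upper_bounded(f, g, C=1, k=1))      # True:  n = O(n^2), a loose upper bound
print(is_lower_bounded(f, g, c=0.001, k=1))  # False: n/n^2 -> 0, so the lower bound fails
```

Since the lower bound fails for every positive constant once n is large enough, f is O(n^2) but not Θ(n^2); only Θ pins down the exact growth rate.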
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-45\/segments\/1603107881369.4\/warc\/CC-MAIN-20201023102435-20201023132435-00624.warc.gz\"}"}
null
null
---
title: Codecademy Lesson 2
layout: post
author: akash.sharma
permalink: /codecademy-lesson-2/
source-id: 16Cw6U5jNvAeaKMT1wptaZUO5cQ54u2e12-4pI8LzUlc
published: true
---

<table>
  <tr>
    <td>Title</td>
    <td>Codecademy Lesson 2</td>
    <td></td>
    <td>26/01/17</td>
  </tr>
</table>

<table>
  <tr>
    <td>Lesson Review</td>
  </tr>
  <tr>
    <td>How did I learn? What strategies were effective?</td>
  </tr>
  <tr>
    <td>As we were doing Python, I was quite used to the terms it was using. I got stuck on one of the stages and straight away looked in the top right corner of the screen, which told me what the problem was. I managed to get to 19% of Python, and our homework was to do 5% more. At home I managed to get to 23%, and at school next week my target will be to get 5% more again.</td>
  </tr>
  <tr>
    <td>What limited my learning? Which habits do I need to work on?</td>
  </tr>
  <tr>
    <td>I think I could have concentrated a bit more. However, it was fine apart from that.</td>
  </tr>
  <tr>
    <td>What will I change for next time? How will I improve my learning?</td>
  </tr>
  <tr>
    <td>Next time I will again fix the problems quicker and also concentrate more.</td>
  </tr>
</table>
using System.Collections;
using System.Collections.Generic;

namespace Jose
{
    public class JwkSet : IEnumerable<Jwk>
    {
        private List<Jwk> keys;

        public List<Jwk> Keys
        {
            get { return keys; }
        }

        public JwkSet(IEnumerable<Jwk> keys)
        {
            this.keys = new List<Jwk>(keys);
        }

        public JwkSet(params Jwk[] keys)
        {
            this.keys = new List<Jwk>(keys);
        }

        public void Add(Jwk key)
        {
            if (keys == null)
            {
                keys = new List<Jwk>();
            }

            keys.Add(key);
        }

        public static JwkSet FromDictionary(IDictionary<string, object> data)
        {
            var keyList = Dictionaries.Get<IEnumerable<object>>(data, "keys");

            JwkSet result = new JwkSet();

            foreach (var key in keyList)
            {
                result.Add(Jwk.FromDictionary((IDictionary<string, object>)key));
            }

            return result;
        }

        public IDictionary<string, object> ToDictionary()
        {
            var result = new Dictionary<string, object>();
            var keyList = new List<IDictionary<string, object>>();

            if (keys != null)
            {
                foreach (Jwk key in keys)
                {
                    keyList.Add(key.ToDictionary());
                }
            }

            result["keys"] = keyList;

            return result;
        }

        public string ToJson(IJsonMapper mapper = null)
        {
            return mapper.Serialize(ToDictionary());
        }

        public static JwkSet FromJson(string json, IJsonMapper mapper = null)
        {
            return JwkSet.FromDictionary(
                mapper.Parse<IDictionary<string, object>>(json)
            );
        }

        public IEnumerator<Jwk> GetEnumerator()
        {
            return keys.GetEnumerator();
        }

        IEnumerator IEnumerable.GetEnumerator()
        {
            return keys.GetEnumerator();
        }
    }
}
module network_indices

  ! this module is for use only within this network -- these
  ! quantities should not be accessed in general MAESTRO routines.
  ! Instead the species indices should be queried via
  ! network_species_index()

  implicit none

  integer, parameter :: ic12_  = 1
  integer, parameter :: io16_  = 2
  integer, parameter :: img24_ = 3

  !$acc declare copyin(ic12_, io16_, img24_)

end module network_indices

module rpar_indices

  implicit none

  integer, parameter :: n_rpar_comps = 3

  integer, parameter :: irp_dens = 1
  integer, parameter :: irp_temp = 2
  integer, parameter :: irp_o16  = 3

  !$acc declare copyin(n_rpar_comps, irp_dens, irp_temp, irp_o16)

end module rpar_indices
**Namespace:** [OfficeDevPnP.Core.Framework.Provisioning.Providers.Xml.V201508](OfficeDevPnP.Core.Framework.Provisioning.Providers.Xml.V201508.md)

**Assembly:** OfficeDevPnP.Core.dll

## Syntax

```C#
public ProvisioningTemplate[] ProvisioningTemplate { get; set; }
```

### Property Value

Type: [OfficeDevPnP.Core.Framework.Provisioning.Providers.Xml.V201508.ProvisioningTemplate[]](OfficeDevPnP.Core.Framework.Provisioning.Providers.Xml.V201508.ProvisioningTemplate.md)

## See also

- [Templates](OfficeDevPnP.Core.Framework.Provisioning.Providers.Xml.V201508.Templates.md)
- [OfficeDevPnP.Core.Framework.Provisioning.Providers.Xml.V201508](OfficeDevPnP.Core.Framework.Provisioning.Providers.Xml.V201508.md)