\section{Introduction}
Evaluating a policy of interest via a scalar performance metric
is a fundamental problem in reinforcement learning \citep{sutton2018reinforcement}.
Such evaluation makes it possible to directly compare two policies
and lays out the foundations for the more ambitious control problem,
the goal of which is to find the best-performing policy.
Monte Carlo methods \citep{kakutani1945markoff} are the most commonly used methods for such evaluation problems.
Given a policy of interest,
Monte Carlo methods produce estimates by repeatedly executing this policy to collect trajectories and averaging the outcomes.
Trajectory samples collected during this process are called \emph{online samples}.
Online samples are cheap when reliable simulators exist. For example, board games (\citealt{tesauro1995temporal, silver2016mastering}) and video games (\citealt{mnih2015human, vinyals2019grandmaster}) provide large amounts of online samples almost instantly and at little cost.
However, in many real-world applications, online samples are expensive, incurring non-negligible time and financial costs to collect,
e.g.,
online recommendations (\citealt{li2010contextual, Gauci2018horizon}) and
inventory management (\citealt{ Giannoccaro2002inventory}).
In those scenarios,
reducing the number of online samples required by Monte Carlo methods brings substantial time and economic benefits.
Online sample reduction can be achieved by reducing the variance of Monte Carlo estimators (see, e.g., \citet{Bernstein24Chebyshev}).
One well-known variance reduction tool in statistics is \emph{importance sampling} (\citealt{Rubinstein1981Simulation, Benjamin1998Simulation}). In the setting of reinforcement learning, it corresponds to evaluating a policy of interest by executing a different policy to collect online samples. The policy of interest is called the \emph{target policy} and the executed policy is called the \emph{behavior policy}.
To obtain an unbiased estimate,
we rely on importance sampling ratios to reweight the online samples collected by the behavior policy,
which yields an unbiased estimate of the target policy's performance.
This idea of estimating a target policy by running a different behavior policy is called \emph{off-policy} learning (\citealt{ geweke1988antithetic, hesterberg1995weighted, precup:2000:eto:645529.658134, koller2009probabilistic}).
By contrast,
estimating a target policy by running this policy itself is called \emph{on-policy} learning \citep{sutton1988learning}.
When the behavior policy is properly tailored,
the variance of the off-policy Monte Carlo estimator can be significantly smaller than that of the ordinary on-policy one.
In this paper,
we propose algorithms to learn such behavior policies from previously logged, existing \emph{offline data},
which are much cheaper than online data.
Notably, the behavior policy learned by our method may suffer from learning errors,
just like any other learning process,
due to, e.g.,
insufficient offline data coverage
or mismatched hypothesis spaces.
Nevertheless, our proposed off-policy Monte Carlo estimator is always unbiased.
We further use a bandit algorithm to strategically switch between the learned behavior policy and the target policy
for online data collection,
which guarantees that the regret is reasonably small even if the learned behavior policy contains non-negligible learning errors.
\section{Background}
We consider a finite horizon Markov Decision Process (MDP, \cite{puterman2014markov}) with a finite state space $\mathcal{S}$, a finite action space $\mathcal{A}$,
a reward function $r: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$,
a transition probability function $p: \mathcal{S} \times \mathcal{S} \times \mathcal{A} \to [0, 1]$,
an initial distribution $p_0: \mathcal{S} \to [0, 1]$,
and a constant horizon length $T$.
Without loss of generality,
we consider this undiscounted setting to simplify notation.
Our results naturally apply to the discounted setting \citep{puterman2014markov} as long as the horizon is fixed and finite.
At time step 0,
an initial state $S_0$ is sampled from $p_0$.
At time step $t \in \qty{0, 1, \dots, T-1}$,
an action $A_t$ is sampled according to $\pi_t(\cdot | S_t)$
where $\pi_t: \mathcal{A} \times \mathcal{S} \to [0, 1]$ is the policy at time step $t$.
A finite reward $R_{t+1} \doteq r(S_t, A_t)$ is then emitted and a successor state $S_{t+1}$ is sampled from $p(\cdot | S_t, A_t)$.
We define abbreviations $\pi_{i:j} \doteq \qty{\pi_i, \pi_{i+1}, \dots, \pi_j}$ and $\pi \doteq \pi_{0:T-1}$.
The return at time step $t$ is defined as
\begin{align}
G_t \doteq \sum_{i={t+1}}^T R_i,
\end{align}
which allows us to define the state- and action-value functions as
\begin{align}
v_{\pi, t}(s) \doteq& \mathbb{E}_{\pi}\left[G_t | S_t = s\right], \\
q_{\pi, t}(s, a) \doteq& \mathbb{E}_{\pi}\left[G_t | S_t = s, A_t = a\right].
\end{align}
It is easy to see that
\begin{align}
\label{eq q v relationship}
v_{\pi, t}(s) = \sum_{a} \pi_t(a|s) q_{\pi, t}(s, a).
\end{align}
We focus on the total rewards performance metric \citep{puterman2014markov} to measure the performance of the policy $\pi$,
which is defined as
\begin{align}
J(\pi) \doteq \sum_s p_0(s) v_{\pi, 0}(s).
\end{align}
Knowing such a scalar performance metric makes it possible to easily compare two policies and
is also preferred in machine learning applications and research (\citealt{ng2017mlyearning}) because it offers a clear learning goal.
In this paper, we focus on Monte Carlo
methods introduced by \citet{kakutani1945markoff} to estimate the total rewards $ J(\pi)$.
Among many of its variants,
the most straightforward and widely used way is to draw samples of $J(\pi)$ by executing the policy $\pi$ online.
As the number of samples increases,
the empirical average of the sampled returns converges to $J(\pi)$.
This idea is called \emph{on-policy} learning (\citealt{sutton1988learning}) because it estimates a policy $\pi$ by executing itself.
From now on, we consider \emph{off-policy learning},
where we estimate the total rewards $J(\pi)$ of a policy of interest $\pi$, called the \emph{target policy}, by executing a different policy $\mu$, called the \emph{behavior policy}.
Off-policy learning has substantive advantages.
First, estimating the value of a target policy $\pi$ without actual deployment makes the learning process much safer (\citealt{thomas2015safe}). Safety and reliability are critical factors in real-world applications.
Second, trajectory samples collected by one behavior policy can be used to evaluate multiple target policies (\citealt{sutton2011horde}),
making the estimation more efficient.
In off-policy learning, each trajectory
\begin{align}
\qty{S_0, A_0, R_1, S_1, A_1, R_2, \dots, S_{T-1}, A_{T-1}, R_T}
\end{align}
is generated by a behavior policy $\mu$ with
\begin{align}
S_0 \sim p_0, A_{t} \sim \mu_{t}(\cdot | S_{t}), \, t \in \qty{0, 1, \dots, T-1}.
\end{align}
Let
$
\tau^{\mu_{t:T-1}}_{t:T-1} \doteq \qty{S_t, A_t, R_{t+1}, \dots, S_{T-1}, A_{T-1}, R_{T}}
$
be a shorthand for the segment of a random trajectory generated by the behavior policy $\mu$ from time step $t$ to time step $T-1$, inclusive.
In off-policy learning, we use the \emph{importance sampling ratio} to reweight rewards collected by $\mu$ in order to give an estimate of $J(\pi)$.
The importance sampling ratio at time step $t$ is defined as
\begin{align*}
\rho_t \doteq \frac{\pi_t(A_t | S_t)}{\mu_t(A_t | S_t)}.
\end{align*}
The product of importance sampling ratios from time $t$ to the last step $T-1$ is defined as
\begin{align*}
\rho_{t:T-1} \doteq \prod_{k=t}^{T-1} \frac{\pi_k(A_k | S_k)}{\mu_k(A_k | S_k)}.
\end{align*}
There are various methods to use the importance sampling ratios in off-policy learning.
The most straightforward \emph{ordinary importance sampling} (IS) estimator is defined as
\begin{align}\label{eq:IS}
G^{\text{IS}}(\tau^{\mu_{t:T-1}}_{t:T-1}) \doteq \rho_{t:T-1} G_t.
\end{align}
This ordinary importance sampling estimator is \emph{unbiased} when the behavior policy $\mu$ \emph{covers} the target policy $\pi$.
That is, when $\mu_t(a|s) = 0 \implies \pi_t(a|s)=0$ for all $t, s, a$,
we have
\begin{align}
\mathbb{E}\left[ G^{\text{IS}}(\tau^{\mu_{t:T-1}}_{t:T-1}) | S_t = s \right] = \mathbb{E}\left[ \rho_{t:T-1} G_t | S_t = s \right] = v_{\pi,t}(s) \quad \forall t, s.
\end{align}
However, weighted by the entire product $\rho_{0:T-1}$,
the ordinary importance sampling estimator has a high variance.
Intensive research has been conducted in finding importance sampling estimators with reduced variance,
e.g., the weighted importance sampling estimator (\citealt{ geweke1988antithetic, hesterberg1995weighted, koller2009probabilistic}), the per-decision importance sampling estimator (\citealt{precup:2000:eto:645529.658134}), and the consistent weighted per-decision importance sampling estimator (\citealt{thomas2015safe}).
Our paper is based on the \emph{per-decision importance sampling estimator} (PDIS),
which is defined as
\begin{align}
G^{\text{PDIS}}(\tau^{\mu_{t:T-1}}_{t:T-1}) \doteq \sum_{k=t}^{T-1} \rho_{t:k} R_{k+1}.
\end{align}
We choose the per-decision importance sampling estimator because it is an unbiased estimator for any behavior policy $\mu$ that covers target policy $\pi$ \citep{precup:2000:eto:645529.658134}.
In other words,
when $\mu_t(a|s) = 0 \implies \pi_t(a|s)=0$, we have
\begin{align}\label{eq:PDIS-unbiased}
\mathbb{E}[ G^{\text{PDIS}}(\tau^{\mu_{t:T-1}}_{t:T-1}) | S_t = s ] = v_{\pi,t}(s) \quad \forall t, s.
\end{align}
We will intensively use the recursive expression of the per-decision importance sampling estimator
\begin{align}\label{eq:PDIS-recursive}
G^{\text{PDIS}}(\tau^{\mu_{t:T-1}}_{t:T-1}) =
\begin{cases}
\rho_t \left(R_{t+1} + G^{\text{PDIS}}(\tau^{\mu_{t+1:T-1}}_{t+1:T-1})\right) & 0 \leq t < T-1 \\
\rho_t R_{t+1} & t = T-1.
\end{cases}
\end{align}
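As a quick sanity check, the closed-form and recursive expressions of the PDIS return can be compared on a synthetic logged trajectory. This is a minimal sketch; the ratio and reward values below are arbitrary illustrative choices, with \texttt{rhos[t]} playing the role of $\rho_t$ and \texttt{rewards[t]} the role of $R_{t+1}$.

```python
# Compare the closed-form and recursive PDIS returns on one trajectory.
# rhos[t] = pi_t(A_t|S_t) / mu_t(A_t|S_t); rewards[t] = R_{t+1}.

def pdis_sum(rhos, rewards):
    """Closed form from t = 0: sum_k (prod_{i<=k} rho_i) R_{k+1}."""
    g, prod = 0.0, 1.0
    for rho, r in zip(rhos, rewards):
        prod *= rho
        g += prod * r
    return g

def pdis_recursive(rhos, rewards, t=0):
    """Recursive form: rho_t (R_{t+1} + PDIS return from t+1)."""
    if t == len(rhos) - 1:
        return rhos[t] * rewards[t]
    return rhos[t] * (rewards[t] + pdis_recursive(rhos, rewards, t + 1))

rhos = [0.5, 2.0, 1.25, 0.8]
rewards = [1.0, -2.0, 3.0, 0.5]
```

Both forms agree on any trajectory, which is what makes the recursion usable in the variance decompositions below.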
The per-decision importance sampling estimator has a lower variance than the ordinary importance sampling estimator since each reward $R_{k+1}$ is weighted only by the ratio product $\rho_{t:k}$,
instead of the entire product $\rho_{t:T-1}$.
Intuitively, this is still unbiased because given the current state,
the current reward is independent of future actions and states.
This paper focuses on further reducing the variance of the per-decision importance sampling estimator by using a proper behavior policy.
From a statistics perspective,
with a lower variance,
fewer trajectory samples are required to estimate $J(\pi)$ to a given accuracy with the same confidence (\citealt{Bernstein24Chebyshev, bertsekas2002probability}).
From a machine learning perspective, empirical error can be decomposed into bias and variance.
For unbiased methods,
a lower variance induces a lower empirical error.
In empirical experiments,
estimators with a lower variance take fewer steps and data to achieve convergence in reinforcement learning algorithms (\citealt{sutton2018reinforcement}).
\section{Variance Reduction in Statistics} \label{sec:var-stats}
In this section,
we provide the mathematical foundation for variance reduction with importance sampling ratios.
The notations here are independent of the rest of this paper -- we use similar notations only for easy interpretation in later sections.
Consider a discrete random variable $A$ taking values from a finite space $\mathcal{A}$ according to a probability mass function $\pi:\mathcal{A} \to [0,1]$
and a function $q:\mathcal{A} \to \mathbb{R}$ mapping a value in $\mathcal{A}$ to a real number.
We are interested in estimating
\begin{align}
\mathbb{E}_{A\sim \pi}[q(A)].
\end{align}
The ordinary Monte Carlo methods then sample $\qty{A_1, \dots, A_N}$ from $\pi$ and use the empirical average
\begin{align}
\label{eq stats on-policy mc}
\frac{1}{N}\sum_{i=1}^N q(A_i)
\end{align}
as the estimate.
In statistics, importance sampling is introduced as a variance reduction technique for Monte Carlo methods (\citealt{Rubinstein1981Simulation}).
The main idea is
to sample $\qty{A_1, \dots, A_N}$ from a different distribution $\mu$
and use
\begin{align}
\label{eq stats off-policy mc}
\frac{1}{N}\sum_{i=1}^N \rho(A_i) q(A_i)
\end{align}
as the estimate,
where
\begin{align}
\rho(A) \doteq \frac{\pi(A)}{\mu(A)}
\end{align}
is the importance sampling ratio.
Assuming $\mu$ covers $\pi$, i.e.,
\begin{align}
\label{eq stats converage}
\forall a, \mu(a) = 0 \implies \pi(a) = 0
\end{align}
the estimator~\eqref{eq stats off-policy mc} is then unbiased because
\begin{align}
\mathbb{E}_{A\sim \pi}[q(A)] = \mathbb{E}_{A\sim \mu}[\rho(A)q(A)].
\end{align}
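This identity can be verified by exact summation over a small discrete example. The sketch below uses arbitrary illustrative numbers and exact rational arithmetic to avoid floating-point noise; $\mu$ is chosen to cover $\pi$.

```python
from fractions import Fraction as F

# Exact check of E_{A~pi}[q(A)] = E_{A~mu}[rho(A) q(A)] on a 3-point space,
# where mu covers pi (mu(a) > 0 wherever pi(a) > 0). All numbers arbitrary.
pi = {'a': F(1, 4), 'b': F(1, 2), 'c': F(1, 4)}
mu = {'a': F(1, 2), 'b': F(1, 4), 'c': F(1, 4)}
q  = {'a': F(3), 'b': F(-1), 'c': F(5)}

on_policy  = sum(pi[a] * q[a] for a in pi)                    # E_{A~pi}[q(A)]
off_policy = sum(mu[a] * (pi[a] / mu[a]) * q[a] for a in mu)  # E_{A~mu}[rho q]
```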
If the sampling distribution $\mu$ is carefully designed,
the variance of~\eqref{eq stats off-policy mc} can be smaller than that of~\eqref{eq stats on-policy mc}.
This problem of searching for a variance reducing sampling distribution can be
formulated as an optimization problem:
\begin{align}
\text{min}_{\mu \in \Lambda_+} \quad &
\mathbb{V}_{A\sim \mu}(\rho(A)q(A)) \label{eq:math-optimization1}.
\end{align}
Here $\Lambda_+$ denotes the set of all the policies that give unbiased estimations, i.e.,
\begin{align}
\Lambda_+ \doteq \qty{\mu \in \Delta(\mathcal{A}) \mid \mathbb{E}_{A\sim\mu}\left[\rho(A)q(A)\right] = \mathbb{E}_{A\sim\pi}\left[q(A)\right]},
\end{align}
where $\Delta(\mathcal{X})$ denotes the set of all probability distributions on the set $\mathcal{X}$.
Solving~\eqref{eq:math-optimization1} is in general very challenging.
To see this,
consider a concrete example where $\mathcal{A} = \qty{a_1, a_2, a_3}$ and
\begin{align}
\label{eq stats example}
\begin{cases}
&q(a_1) = -10 \\
&q(a_2) = 2 \\
&q(a_3) = 2
\end{cases}, \quad
\begin{cases}
&\pi(a_1) = 0.1 \\
&\pi(a_2) = 0.5 \\
&\pi(a_3) = 0.4
\end{cases}, \quad
\begin{cases}
&\mu(a_1) = 0 \\
&\mu(a_2) = 0 \\
&\mu(a_3) = 1
\end{cases}.
\end{align}
It can be computed that $\mathbb{E}_{A\sim\pi}\left[q(A)\right] = 0.8$ and $\mathbb{E}_{A\sim\mu}\left[\rho(A)q(A)\right] = 0.8$.
In other words,
we could sample $A$ from $\mu$ and use $\rho(A)q(A)$ as an estimator.
This estimator is unbiased.
But apparently, this $\mu$ does not cover $\pi$.
Moreover, since this $\mu$ is deterministic,
the variance of this estimator is 0,
which is the minimum possible variance.
In other words, this $\mu$ is an optimal sampling distribution.
However,
this $\mu$ is hand-crafted based on the knowledge that $q(a_1)\pi(a_1) + q(a_2)\pi(a_2) = 0$.
Without such knowledge,
we argue that there is little hope to find this $\mu$.
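The claims in this example can be verified numerically. The short sketch below (plain Python, names chosen for illustration) checks that the hand-crafted $\mu$ is unbiased and has zero variance even though it does not cover $\pi$.

```python
# Verify the hand-crafted example: mu is unbiased with zero variance,
# despite not covering pi (mu is zero on a1 and a2 while pi is not).
q  = {'a1': -10.0, 'a2': 2.0, 'a3': 2.0}
pi = {'a1': 0.1, 'a2': 0.5, 'a3': 0.4}
mu = {'a1': 0.0, 'a2': 0.0, 'a3': 1.0}

target = sum(pi[a] * q[a] for a in pi)  # E_{A~pi}[q(A)]
# Under mu only a3 is ever sampled, so rho(A) q(A) is constant.
mean = sum(mu[a] * (pi[a] / mu[a]) * q[a] for a in mu if mu[a] > 0)
var  = sum(mu[a] * ((pi[a] / mu[a]) * q[a] - mean) ** 2 for a in mu if mu[a] > 0)
```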
This example suggests that searching over the entire $\Lambda_+$ might be too ambitious.
One natural choice is to restrict the search to
\begin{align}
\Lambda_- \doteq \qty{\mu \in \Delta(\mathcal{A}) \mid \forall a, \mu(a) = 0 \implies \pi(a) = 0}.
\end{align}
In other words,
we aim to find a variance minimizing sampling distribution among all distributions that cover $\pi$.
Because coverage implies unbiasedness,
we have $\Lambda_- \subseteq \Lambda_+$.
By searching over only $\Lambda_-$,
we reduce the search space,
with the hope that the problem is more tractable.
It turns out that
we can slightly enlarge $\Lambda_-$ to $\Lambda$ defined as
\begin{align}
\label{eq stats search space}
\Lambda \doteq \qty{\mu \in \Delta(\mathcal{A}) \mid \forall a, \mu(a) = 0 \implies \pi(a)q(a) = 0}.
\end{align}
If for some $\mu$ and $a_0$ we have $\pi(a_0) > 0$, $q(a_0) = 0$, and $\mu(a_0) = 0$,
then this $\mu$ is not in $\Lambda_-$ but is still in $\Lambda$.
Importantly, any distribution in $\Lambda$ still gives unbiased estimation,
though it may not cover $\pi$.
The intuition is that any action $a_0$ on which $\mu$ fails to cover $\pi$ must satisfy $q(a_0) = 0$,
i.e.,
such an action does not contribute to the expectation anyway.
\begin{lemma}
\label{lem stats unbiasedness}
$\forall \mu \in \Lambda, \mathbb{E}_{A\sim\mu}\left[\rho(A)q(A)\right] = \mathbb{E}_{A\sim\pi}\left[q(A)\right]$
\end{lemma}
The proof is in Appendix~\ref{sec proof lem stats unbiasedness}.
We now consider the variance minimization problem on $\Lambda$, i.e.,
\begin{align}
\text{min}_{\mu \in \Lambda} \quad &
\mathbb{V}_{A\sim \mu}(\rho(A)q(A)) \label{eq:math-optimization}.
\end{align}
The following lemma gives a solution $\mu^*$ to the optimization problem~\eqref{eq:math-optimization}.
\begin{lemma}\label{lem:math-optimal}
Define
\begin{align}
\mu^*(a) \propto
\begin{cases}
\pi(a) \abs{q(a)} & \text{ if } \exists a_0, \pi(a_0) q(a_0) \neq 0\\
\frac{1}{|\mathcal{A}|} & \text{ otherwise}. \\
\end{cases}
\end{align}
Then $\mu^*$ is an optimal solution to \eqref{eq:math-optimization}.
\end{lemma}
Here by $f(a) \propto g(a)$,
we mean $f(a) \doteq \frac{g(a)}{\sum_b g(b)}$.
The proof is detailed in Appendix \ref{append:math-optimal}.
To understand why $\mu^*$ gives the minimum variance, consider an example where $\forall a \in \mathcal{A}, q(a) > 0$. We then have, $\forall a \in \mathcal{A}$,
\begin{align}
\mu^*(a) = \frac{\pi(a) \abs{q(a)}}{c},
\end{align}
where $c>0$ is a normalizing constant.
Plugging $\mu^*$ into $\rho(A)q(A)$, we get, $\forall a \in \mathcal{A}$,
\begin{align}
\rho(a)q(a) = \frac{\pi(a)}{\mu^*(a)}q(a) = \frac{\pi(a)}{\frac{\pi(a) \abs{q(a)}}{c} }q(a) = c,
\end{align}
i.e.,
the random variable $\rho(\cdot)q(\cdot)$ is now a constant function.
The variance of a constant random variable is of course zero.
The following lemma slightly generalizes this intuition,
whose proof is in Appendix~\ref{append:math-variance-0}.
\begin{lemma}\label{lem:math-variance-0}
If $\forall a \in \mathcal{A}, q(a) \geq 0$ or $\forall a \in \mathcal{A}, q(a) \leq 0$,
then $\Lambda = \Lambda_+$,
and
the $\mu^*$ defined in Lemma~\ref{lem:math-optimal} gives a zero variance,
i.e.,
\begin{align}
\mathbb{V}_{A\sim \mu^*}(\rho(A)q(A)) = 0.
\end{align}
\end{lemma}
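The zero-variance claim of Lemma~\ref{lem:math-variance-0} can be checked numerically. The sketch below uses an arbitrary positive $q$ and builds $\mu^*(a) \propto \pi(a)\abs{q(a)}$; the reweighted values $\rho(a)q(a)$ then all coincide, so the estimator is a constant.

```python
# When q is sign-constant, mu*(a) proportional to pi(a)|q(a)| makes
# rho(A) q(A) constant: zero variance with the mean preserved.
# All numbers below are arbitrary illustrative choices.
pi = {'a': 0.2, 'b': 0.5, 'c': 0.3}
q  = {'a': 4.0, 'b': 1.0, 'c': 2.0}   # all positive

z = sum(pi[a] * abs(q[a]) for a in pi)            # normalizing constant
mu_star = {a: pi[a] * abs(q[a]) / z for a in pi}

values = [(pi[a] / mu_star[a]) * q[a] for a in pi]  # rho(a) q(a) per action
mean = sum(mu_star[a] * (pi[a] / mu_star[a]) * q[a] for a in pi)
```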
A sampling distribution proportional to $\pi(a)\abs{q(a)}$ dates back to \citet{Rubinstein1981Simulation, Benjamin1998Simulation} and is also previously used in reinforcement learning \citep{carpentier2015adaptive,mukherjee2022revar}.
This $\mu^*$ has previously been regarded as an optimal sampling distribution.
We, however, make two notes here,
both of which, to our knowledge, are novel.
(1) We have to carefully specify the set of distributions under consideration before claiming the optimality of $\mu^*$.
For example,
if we compute this $\mu^*$ for the example~\eqref{eq stats example},
it can easily be verified that $\frac{\pi(A)q(A)}{\mu^*(A)}$ has a strictly positive variance because it is negative for $a_1$ and positive for $a_2$ and $a_3$,
while the $\mu$ in~\eqref{eq stats example} has zero variance and is also unbiased.
In other words, $\mu^*$ can actually be suboptimal in the set $\Lambda_+$.
(2) This $\mu^*$ does not necessarily cover $\pi$.
It is possible that for some $a_0$, there are $\pi(a_0) > 0, \mu^*(a_0) = 0, q(a_0) = 0$.
Lemma~\ref{lem stats unbiasedness},
however, still ensures that $\mu^*$ gives an unbiased estimate.
\section{Variance Reduction in Reinforcement Learning} \label{sec:variance}
We now apply the techniques in Section~\ref{sec:var-stats} in the reinforcement learning setting.
In particular,
we seek to reduce the variance $\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{0:T-1}}_{0:T-1})\right)$
by designing a proper behavior policy $\mu$.
Of course,
we need to ensure that the PDIS estimator with this behavior policy is unbiased.
In other words,
ideally
we should search over
\begin{align}
\Lambda_+ \doteq \qty{\mu \in \Delta(\mathcal{A})^T \mid \mathbb{E}\left[G^{\text{PDIS}}(\tau^{\mu_{0:T-1}}_{0:T-1})\right] = J(\pi)}.
\end{align}
As discussed in Section~\ref{sec:var-stats},
this might be too ambitious without domain-specific knowledge.
Instead,
we can search over all policies that cover $\pi$, i.e.,
\begin{align}
\Lambda_- \doteq \qty{\mu \in \Delta(\mathcal{A})^T\mid \forall t, s, a, \mu_t(a|s) = 0 \implies \pi_t(a|s) = 0}.
\end{align}
Set $\Lambda_-$ contains all policies that satisfy the \emph{policy coverage} constraint in off-policy learning (\citealt{sutton2018reinforcement}).
We relax the policy coverage constraint while maintaining unbiasedness. Define
\begin{align}
\Lambda \doteq \qty{\mu \in \Delta(\mathcal{A})^T\mid \forall t, s, a, \mu_t(a|s) = 0 \implies \pi_t(a|s)q_{\pi, t}(s, a) = 0}.
\end{align}
The following theorem ensures unbiasedness,
which is proved in Appendix~\ref{sec lem rl pdis unbaised}.
\begin{theorem}[Unbiasedness]
\label{lem rl pdis unbaised}
$\forall \mu \in \Lambda, t, s, \, \mathbb{E}\left[G^{\text{PDIS}}(\tau^{\mu_{t:T-1}}_{t:T-1}) | S_t = s\right] = v_{\pi, t}(s)$.
\end{theorem}
One immediate consequence of Theorem~\ref{lem rl pdis unbaised} is that
\begin{align}
\forall \mu \in \Lambda, \mathbb{E}\left[G^{\text{PDIS}}(\tau^{\mu_{0:T-1}}_{0:T-1})\right] = J(\pi).
\end{align}
It, however,
turns out that searching over $\Lambda$ is still intractable.
We instead consider a set $\Lambda^*$ such that $\Lambda_- \subseteq \Lambda^* \subseteq \Lambda$.
This $\Lambda^*$ will be defined soon.
We now formulate our problem as
\begin{align}
\label{eq rl opt problem}
\min_{\mu \in \Lambda^*} \quad \mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{0:T-1}}_{0:T-1})\right).
\end{align}
By the law of total variance, given any two random variables $X$ and $Y$, we have
\begin{align}\label{eq:total-variance}
\mathbb{V}[Y] = \mathbb{E}[\mathbb{V}(Y|X)] + \mathbb{V}(\mathbb{E}[Y|X]) .
\end{align}
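A quick numerical sanity check of this identity on an arbitrary small joint distribution (all numbers illustrative):

```python
# Exact check of the law of total variance V[Y] = E[V(Y|X)] + V(E[Y|X])
# on a small discrete joint distribution over (X, Y).
joint = {(0, 1.0): 0.1, (0, 3.0): 0.3, (1, -2.0): 0.4, (1, 5.0): 0.2}

p_x = {}
for (x, y), pr in joint.items():
    p_x[x] = p_x.get(x, 0.0) + pr

mean_y = sum(pr * y for (x, y), pr in joint.items())
var_y = sum(pr * (y - mean_y) ** 2 for (x, y), pr in joint.items())

cond_mean = {x: sum(pr * y for (x2, y), pr in joint.items() if x2 == x) / p_x[x]
             for x in p_x}
cond_var = {x: sum(pr * (y - cond_mean[x]) ** 2
                   for (x2, y), pr in joint.items() if x2 == x) / p_x[x]
            for x in p_x}

e_var = sum(p_x[x] * cond_var[x] for x in p_x)                   # E[V(Y|X)]
var_e = sum(p_x[x] * (cond_mean[x] - mean_y) ** 2 for x in p_x)  # V(E[Y|X])
```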
For any $\mu \in \Lambda$,
we decompose the variance of the PDIS estimator as
\begin{align}
\label{eq:varaince-1}
&\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{0:T-1}}_{0:T-1})\right) \\
=& \mathbb{E}_{S_0}\left[\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{0:T-1}}_{0:T-1}) \mid S_0\right)\right] + \mathbb{V}_{S_0}\left(\mathbb{E}\left[G^{\text{PDIS}}(\tau^{\mu_{0:T-1}}_{0:T-1}) \mid S_0\right]\right) \\
=& \mathbb{E}_{S_0}\left[\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{0:T-1}}_{0:T-1}) \mid S_0\right)\right] + \mathbb{V}_{S_0}\left(v_{\pi, 0}(S_0)\right) \explain{Theorem~\ref{lem rl pdis unbaised}}.
\end{align}
The second term in~\eqref{eq:varaince-1} is a constant given a target policy $\pi$ and is unrelated to the choice of $\mu$.
In the first term,
the expectation is taken over $S_0$, which is drawn from the initial distribution $p_0$.
Consequently,
solving the problem~\eqref{eq rl opt problem} is equivalent to solving for each $s$,
\begin{align}
\label{eq rl opt problem2}
\min_{\mu \in \Lambda^*} \quad \mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{0:T-1}}_{0:T-1})| S_0 = s\right).
\end{align}
We denote the set $\qty{0, 1, \dots, n}$ by $[n]$.
Define the variance of the state value for the next state
given the current state-action pair $(s,a)$ as
\begin{align}\label{def:nu}
\nu_{\pi,t}(s, a) \doteq
\begin{cases}
\mathbb{V}_{S_{t+1}}\left(v_{\pi, t+1}(S_{t+1})\mid S_t=s, A_t=a\right) & \text{ if } t \in [T-2] \\
0 & \text{ if } t = T-1.
\end{cases}
\end{align}
The variance of the PDIS estimator can be expressed in the following recursive form.
\begin{lemma} \label{lem:recursive-var}
For any $\mu \in \Lambda$, we have for $t = T-1$,
\begin{align}
\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{t:T-1}}_{t:T-1})\mid S_t\right) = \mathbb{E}_{A_t \sim \mu_t}\left[\rho_t^2 q_{\pi, t}^2(S_t, A_t) \mid S_t\right] - v_{\pi, t}^2(S_t);
\end{align}
For $t \in [T-2]$,
\begin{align}
&\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{t:T-1}}_{t:T-1})\mid S_t\right) \\
=& \mathbb{E}_{A_t\sim \mu_t}\left[\rho_t^2 \left(\mathbb{E}_{S_{t+1}}\left[\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{t+1:T-1}}_{t+1:T-1})\mid S_{t+1}\right) \mid S_t, A_t\right] + \nu_{\pi,t}(S_t, A_t) + q_{\pi, t}^2(S_t, A_t)\right) \mid S_t\right] \\
&- v_{\pi, t}^2(S_t).
\end{align}
\end{lemma}
Its proof is in Appendix \ref{append:recursive-var}.
This recursive form allows us to construct a behavior policy $\mu^*$ as
\begin{align}
\label{eq mu star def1}
\mu_t^*(a|s) \propto
\begin{cases}
\pi_{t}(a|s) \sqrt{u_{\pi, t}(s, a)} & \text{ if } \exists a', \pi_{t}(a'|s)\abs{u_{\pi, t}(s, a')} \neq 0\\
\frac{1}{|\mathcal{A}|} & \text{ otherwise }
\end{cases},
\end{align}
where for $t = T-1$,
\begin{align}
u_{\pi, t}(s, a) \doteq q_{\pi, t}^2(s, a);
\end{align}
for $t \in [T-2]$,
\begin{align}
\label{eq u def}
u_{\pi, t}(s, a) \doteq \sum_{s'} p(s'|s, a)\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu^*_{t+1:T-1}}_{t+1:T-1}) \mid S_{t+1} = s'\right) + \nu_{\pi, t}(s, a) + q_{\pi, t}^2(s, a).
\end{align}
This $u_{\pi, t}(s, a)$ is always non-negative because all the summands are non-negative.
This $\mu^*$ is optimal in the following sense.
\begin{theorem}[{Optimal Behavior Policy}]\label{lem:rl-optimal}
For any $t$ and $s$,
the behavior policy $\mu_t^*(a|s)$ defined above is an optimal solution to the following problem
\begin{align}
\min_{\mu_t \in \Lambda_t, \dots, \mu_{T-1} \in \Lambda_{T-1}} \quad \mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{t:T-1}}_{t:T-1})| S_t = s\right),
\end{align}
where
\begin{align}
\Lambda_t \doteq \qty{\mu_t \in \Delta(\mathcal{A})\mid \forall s, a, \mu_t(a|s) = 0 \implies \pi_t(a|s)u_{\pi, t}(s, a) = 0}.
\end{align}
\end{theorem}
Its proof is in Appendix \ref{append:rl-optima}.
Theorem~\ref{lem:rl-optimal} indicates that $\mu^*$ achieves optimality in the set
\begin{align}
\Lambda^* \doteq \Lambda_0 \times \dots \times \Lambda_{T-1}.
\end{align}
Since $u_{\pi, t}(s, a) = 0 \implies q_{\pi, t}(s, a) = 0$,
we have $\Lambda^* \subseteq \Lambda$.
If $\mu_t(a|s) = 0 \implies \pi_t(a|s) = 0$,
it follows immediately that $\mu_t(a|s) = 0 \implies \pi_t(a|s)u_{\pi, t}(s, a) = 0$.
This indicates $\Lambda_- \subseteq \Lambda^*$.
Though the set of policies $\Lambda^*$ considered in Theorem~\ref{lem:rl-optimal} is not as broad as $\Lambda$,
it still includes all the policies that cover the target policy,
which is the setting considered by most off-policy results \citep{precup:2000:eto:645529.658134,maei2011gradient,sutton2016emphatic,zhang2022thesis}.
It is easy to see $\pi \in \Lambda^*$.
So Theorem~\ref{lem:rl-optimal} ensures,
by the definition of optimality,
that
the variance of the off-policy PDIS estimator with the behavior policy $\mu^*$ is no larger than the variance of the on-policy PDIS estimator,
which reduces to the ordinary on-policy Monte Carlo estimator.
\begin{corollary}\label{lem:optimal-compare}
For all $s, t$,
\begin{align}
\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu^*_{t:T-1}}_{t:T-1}) | S_t = s\right) \leq \mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{t:T-1}}_{t:T-1}) | S_t = s\right).
\end{align}
\end{corollary}
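To illustrate, the following sketch instantiates the construction of $\mu^*$ for horizon $T=2$ and verifies Corollary~\ref{lem:optimal-compare} by exact enumeration of all trajectories. All MDP numbers are arbitrary illustrative choices, not from any benchmark.

```python
import itertools

# Tiny 2-step MDP; tabular instantiation of mu* per the construction above.
S, A = [0, 1], [0, 1]
p0 = {0: 1.0, 1: 0.0}
r = {(0, 0): 1.0, (0, 1): -1.0, (1, 0): 0.5, (1, 1): 2.0}
p = {(0, 0): {0: 0.7, 1: 0.3}, (0, 1): {0: 0.2, 1: 0.8},
     (1, 0): {0: 0.5, 1: 0.5}, (1, 1): {0: 0.9, 1: 0.1}}
pi = [{0: {0: 0.6, 1: 0.4}, 1: {0: 0.3, 1: 0.7}},   # pi_0(a|s)
      {0: {0: 0.5, 1: 0.5}, 1: {0: 0.8, 1: 0.2}}]   # pi_1(a|s)

# Backward recursion for q_{pi,t} and v_{pi,t} (t in {0, 1}).
q1 = {(s, a): r[(s, a)] for s in S for a in A}
v1 = {s: sum(pi[1][s][a] * q1[(s, a)] for a in A) for s in S}
q0 = {(s, a): r[(s, a)] + sum(p[(s, a)][s2] * v1[s2] for s2 in S)
      for s in S for a in A}
v0 = {s: sum(pi[0][s][a] * q0[(s, a)] for a in A) for s in S}

def normalize(w):
    z = sum(w.values())
    return {k: x / z for k, x in w.items()} if z > 0 else {k: 1.0 / len(w) for k in w}

# mu*_1 proportional to pi_1 |q_1|, and the variance of the one-step PDIS
# estimator at t = T-1 under mu*_1 (recursive-variance lemma, t = T-1 case).
mu1 = {s: normalize({a: pi[1][s][a] * abs(q1[(s, a)]) for a in A}) for s in S}
var1 = {s: sum(mu1[s][a] * ((pi[1][s][a] / mu1[s][a]) * r[(s, a)]) ** 2
               for a in A) - v1[s] ** 2 for s in S}

# u_0(s,a) = sum_s' p(s'|s,a) Var(.|S_1=s') + nu_0(s,a) + q_0(s,a)^2,
# and mu*_0 proportional to pi_0 sqrt(u_0).
u0 = {}
for s in S:
    for a in A:
        ev = sum(p[(s, a)][s2] * v1[s2] for s2 in S)
        nu = sum(p[(s, a)][s2] * v1[s2] ** 2 for s2 in S) - ev ** 2
        u0[(s, a)] = sum(p[(s, a)][s2] * var1[s2] for s2 in S) + nu + q0[(s, a)] ** 2
mu0 = {s: normalize({a: pi[0][s][a] * u0[(s, a)] ** 0.5 for a in A}) for s in S}

def mean_var_pdis(mu):
    """Exact mean and variance of the PDIS estimator under behavior policy mu."""
    mean = mean_sq = 0.0
    for s0, a0, s1, a1 in itertools.product(S, A, S, A):
        pr = p0[s0] * mu[0][s0][a0] * p[(s0, a0)][s1] * mu[1][s1][a1]
        if pr == 0.0:
            continue
        rho0 = pi[0][s0][a0] / mu[0][s0][a0]
        rho1 = pi[1][s1][a1] / mu[1][s1][a1]
        g = rho0 * r[(s0, a0)] + rho0 * rho1 * r[(s1, a1)]
        mean += pr * g
        mean_sq += pr * g * g
    return mean, mean_sq - mean ** 2
```

Enumerating all $16$ trajectories gives identical means under $\pi$ and $\mu^*$ (unbiasedness) and a strictly smaller variance under $\mu^*$ in this instance.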
Unfortunately,
implementing $\mu^*_t$ requires knowledge of $u_{\pi, t}$ \eqref{eq u def}, which involves the transition function $p$.
Approximating the transition function is very challenging in highly stochastic MDPs and introduces additional approximation error (cf. model-based reinforcement learning \citep{sutton1990integrated,sutton2012dyna,deisenroth2011pilco,chua2018deep}).
Thus, we seek to build another policy $\hat{\mu}$ that can be implemented without direct knowledge of the transition function $p$ (cf. model-free reinforcement learning \citep{sutton1988learning,watkins1989learning}).
\emph{We achieve this by aiming at local optimality instead of global optimality.}
In particular,
at a time step $t$,
if we aim for global optimality,
we should try to find the best $\mu_t$
assuming in the future we follow $\mu^*_{t+1}, \dots, \mu^*_{T-1}$.
Instead,
we aim for the local optimality
and try to find the best $\mu_t$
assuming in the future we follow $\pi_{t+1}, \dots, \pi_{T-1}$.
We refer to such a locally optimal behavior policy as $\hat \mu_t$.
Similarly,
to define optimality we first need to specify the set of policies we are concerned about.
To this end, we define
for $t = T-1$,
\begin{align}
\hat q_{\pi, t}(s, a) \doteq q_{\pi, t}^2(s, a);
\end{align}
for $t \in [T-2]$,
\begin{align}
\label{eq def q hat}
\hat q_{\pi, t}(s, a) \doteq \sum_{s'} p(s'|s, a)\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{t+1:T-1}}_{t+1:T-1}) \mid S_{t+1} = s'\right) + \nu_{\pi,t}(s, a) + q_{\pi, t}^2(s, a).
\end{align}
This $\hat q_{\pi, t}$ is always non-negative since all the summands are non-negative.
Accordingly,
we define for $t \in [T-1]$,
\begin{align}
\hat \Lambda_t \doteq \qty{\mu_t \in \Delta(\mathcal{A}) \mid \forall s, a, \mu_t(a|s) = 0 \implies \pi_t(a|s)\hat q_{\pi, t}(s, a) = 0}.
\end{align}
Corollary~\ref{lem:optimal-compare} ensures that $\hat q_{\pi, t}(s, a) \geq u_{\pi, t}(s, a) \geq 0$ holds $\forall s, a, t$.
As a result, if $\mu_t \in \hat \Lambda_t$,
we have
\begin{align}
\mu_t(a|s) = 0 \implies \pi_t(a|s)\hat q_{\pi, t}(s, a) = 0 \implies \pi_t(a|s)u_{\pi, t}(s, a) = 0,
\end{align}
indicating $\mu_t \in \Lambda_t$.
In other words, we have $\hat \Lambda_t \subseteq \Lambda_t$.
To search for $\hat \mu_{0:T-1}$,
we work on
\begin{align}
\hat \Lambda \doteq \hat \Lambda_0 \times \dots \times \hat \Lambda_{T-1}.
\end{align}
To summarize,
we have
\begin{align}
\Lambda_- \subseteq \hat \Lambda \subseteq \Lambda^* \subseteq \Lambda \subseteq \Lambda_+.
\end{align}
Recall that $\Lambda_+$ is the set of all behavior policies under which the PDIS estimator is unbiased.
Membership in $\Lambda$ is a sufficient but not necessary condition for such unbiasedness (Theorem~\ref{lem rl pdis unbaised}).
$\Lambda^*$ is a restriction of $\Lambda$ within which we are able to find an optimal solution.
Here we further restrict $\Lambda^*$ to $\hat \Lambda$,
aiming for a sub-optimal but implementable policy.
Our search space is still larger than
$\Lambda_-$,
which contains all behavior policies that cover the target policy.
According to the recursive expression of the variance in Lemma~\ref{lem:recursive-var} and the aforementioned goal for local optimality,
we let $\hat \mu_t$ be an optimal solution to the following problem
\begin{align}
\label{eq local opt problem}
\min_{\mu_t \in \hat \Lambda_t} \quad \mathbb{E}_{A_t\sim \mu_t}\left[\rho_t^2 \left(\mathbb{E}_{S_{t+1}}\left[ \mathbb{V}\left(G^{\text{PDIS}}(\tau^{{\pi}_{t+1:T-1}}_{t+1:T-1}) \mid S_{t+1}\right) \mid S_t, A_t\right] + \nu_{\pi,t}(S_t, A_t) + q_{\pi, t}^2(S_t, A_t) \right) \mid S_t\right].
\end{align}
Simple calculation yields
\begin{align}
&\mathbb{E}_{A_t\sim \mu_t}\left[\rho_t^2 \left(\mathbb{E}_{S_{t+1}}\left[\mathbb{V}\left(G^{\text{PDIS}}(\tau^{{\pi}_{t+1:T-1}}_{t+1:T-1}) \mid S_{t+1}\right) \mid S_t, A_t\right] + \nu_{\pi,t}(S_t, A_t) + q_{\pi, t}^2(S_t, A_t) \right) \mid S_t\right] \\
=& \mathbb{E}_{A_t\sim\mu_t}\left[\rho_t^2 \hat q_{\pi, t}(S_t, A_t) | S_t\right] \\
=& \mathbb{V}_{A_t\sim\mu_t}\left(\rho_t \sqrt{\hat q_{\pi, t}(S_t, A_t)} | S_t\right) + \mathbb{E}_{A_t\sim\mu_t}^2\left[\rho_t \sqrt{\hat q_{\pi, t}(S_t, A_t)} | S_t\right] \\
=& \mathbb{V}_{A_t\sim\mu_t}\left(\rho_t \sqrt{\hat q_{\pi, t}(S_t, A_t)} | S_t\right) + \mathbb{E}_{A_t\sim\pi_t}^2\left[\sqrt{\hat q_{\pi, t}(S_t, A_t)} | S_t\right] \explain{Lemma~\ref{lem stats unbiasedness} and $\mu_t \in \hat \Lambda_t$}.
\end{align}
According to Lemma~\ref{lem:math-optimal},
if we define
\begin{align}
\label{def hat mu}
\hat \mu_t(a|s) \propto \begin{cases}
\pi_t(a|s)\sqrt{\hat q_{\pi, t}(s, a)} & \qq{if} \exists a_0, \pi_t(a_0|s) \sqrt{\hat q_{\pi, t}(s, a_0)} \neq 0 \\
\frac{1}{\abs{\mathcal{A}}} & \qq{otherwise}
\end{cases},
\end{align}
then $\hat \mu_t$ is an optimal solution to~\eqref{eq local opt problem}.
To further characterize the property of $\hat \mu$,
we need a more explicit treatment of \\
$\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{t:T-1}}_{t:T-1}) | S_t = s\right) $,
the variance of the on-policy Monte Carlo estimator.
To this end, we make use of the well-known fact that
this variance can be expressed recursively in the form of a Bellman equation \citep{variance2016Tamar,o2017uncertainty,sherstan2018directly}.
Formally speaking,
define
shorthands
\begin{align}
&\tilde r_{\pi,t}(s, a) \doteq
\nu_{\pi,t}(s, a) + q^2_{\pi, t}(s, a) - v^2_{\pi, t}(s) \quad \forall t \in [T-1], \label{def:tilde-r} \\
&\tilde q_{\pi, t}(s, a) \doteq
\begin{cases}
\tilde r_{\pi,t}(s, a) + \sum_{s', a'} p(s'|s, a) \pi_{t+1}(a'|s') \tilde{q}_{\pi, t+1}(s', a') & \text{ if } t \in [T-2] \\
\tilde r_{\pi,t}(s, a) & \text{ if } t = T-1
\end{cases}.\label{def:tilde-q}
\end{align}
Then we have
\begin{lemma}\label{lem:u_variance_eq}
\begin{align}
\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{t:T-1}}_{t:T-1}) | S_t = s\right) = \sum_a \pi_t(a|s)\tilde q_{\pi, t}(s, a) \quad \forall t,s.
\end{align}
\end{lemma}
Its proof is in Appendix \ref{append:u_variance_eq}.
Here,
this $\tilde q$ is exactly the action value function of the target policy $\pi$ in the MDP w.r.t. a new reward function $\tilde r$.
Manipulating~\eqref{eq def q hat} then yields
\begin{align}
\label{eq hat and tilde q}
\hat q_{\pi, t}(s, a) =& \sum_{s'}p(s'|s, a)\sum_{a'} \pi_{t+1}(a'|s')\tilde q_{\pi, t+1}(s' ,a') + \nu_{\pi, t}(s, a) + q_{\pi, t}^2(s, a) \\
=&\tilde q_{\pi, t}(s, a) + v_{\pi, t}^2(s).
\end{align}
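These recursions can be checked numerically. The following sketch, our own construction assuming deterministic rewards $r(s, a)$, computes $\tilde q$ via~\eqref{def:tilde-q} on a tiny random tabular MDP and verifies Lemma~\ref{lem:u_variance_eq} against the exact return variance obtained by enumerating every trajectory:

```python
import numpy as np

rng = np.random.default_rng(0)
T, nS, nA = 3, 2, 2

# A tiny random finite-horizon MDP with deterministic rewards r(s, a).
r = rng.random((nS, nA))
p = rng.random((nS, nA, nS)); p /= p.sum(-1, keepdims=True)    # p(s'|s,a)
pi = rng.random((T, nS, nA)); pi /= pi.sum(-1, keepdims=True)  # pi_t(a|s)

# Backward sweeps for q, v, nu, tilde-r, tilde-q.
q = np.zeros((T, nS, nA)); v = np.zeros((T + 1, nS))
for t in reversed(range(T)):
    q[t] = r + p @ v[t + 1]            # q_t(s,a) = r(s,a) + E[v_{t+1}(S')]
    v[t] = (pi[t] * q[t]).sum(-1)
nu = np.zeros((T, nS, nA))
for t in range(T - 1):                 # nu_{T-1} = 0 since terminal values are 0
    nu[t] = p @ (v[t + 1] ** 2) - (p @ v[t + 1]) ** 2
tilde_r = nu + q ** 2 - v[:T, :, None] ** 2
tilde_q = np.zeros((T, nS, nA))
tilde_q[-1] = tilde_r[-1]
for t in reversed(range(T - 1)):
    tilde_q[t] = tilde_r[t] + p @ (pi[t + 1] * tilde_q[t + 1]).sum(-1)

def var_return(s0):
    """Exact Var(G | S_0 = s0) under pi by enumerating every trajectory."""
    outcomes = []
    def rec(t, s, prob, ret):
        if t == T:
            outcomes.append((prob, ret)); return
        for a in range(nA):
            for s2 in range(nS):
                rec(t + 1, s2, prob * pi[t, s, a] * p[s, a, s2], ret + r[s, a])
    rec(0, s0, 1.0, 0.0)
    ps, gs = map(np.array, zip(*outcomes))
    m = (ps * gs).sum()
    return (ps * (gs - m) ** 2).sum()

for s in range(nS):   # Lemma: Var(G | S_0=s) = sum_a pi_0(a|s) tilde_q_0(s, a)
    assert np.isclose(var_return(s), (pi[0, s] * tilde_q[0, s]).sum())
```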
This behavior policy $\hat \mu$ is of course inferior to the optimal behavior policy $\mu^*$.
We,
however,
argue that \emph{$\hat \mu$ is provably better than the target policy $\pi$}.
In particular,
since we have $\hat \mu \in \hat \Lambda \subseteq \Lambda$,
Theorem~\ref{lem rl pdis unbaised} ensures that the PDIS estimator using $\hat \mu$ as the behavior policy gives unbiased estimation,
even if $\hat \mu$ may not cover $\pi$.
Moreover,
the following theorem confirms
that the PDIS estimator using $\hat \mu$ has a provably smaller variance than the PDIS estimator using $\pi$ as the behavior policy,
which is exactly the ordinary on-policy Monte Carlo estimator.
\begin{theorem}[Variance Reduction]\label{lem:var_smaller}
For any $t$ and $s$,
\begin{align}
\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\hat \mu_{t:T-1}}_{t:T-1})\mid S_t = s\right) \leq& \mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{t:T-1}}_{t:T-1})\mid S_t = s\right)
\end{align}
\end{theorem}
\begin{proof}
We proceed via induction. For $t=T-1$,
\begin{align}
&\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\hat \mu_{t:T-1}}_{t:T-1})\mid S_t\right) \\
=&\mathbb{E}_{A_t\sim \hat{\mu}_t}\left[\rho_t^2 q_{\pi, t}^2(S_t, A_t) \mid S_t\right] - v_{\pi, t}^2(S_t) \explain{Lemma \ref{lem:recursive-var}}\\
=&\mathbb{E}_{A_t\sim \hat{\mu}_t}\left[\rho_t^2 \hat q_{\pi, t}(S_t, A_t) \mid S_t\right] - v^2_{\pi, t}(S_t) \explain{Definition of $\hat q$ \eqref{eq def q hat}} \\
=& \mathbb{V}_{A_t\sim \hat{\mu}_t}\left(\rho_t \sqrt{\hat q_{\pi, t}(S_t, A_t)}|S_t\right) + \mathbb{E}_{A_t\sim \hat{\mu}_t}^2\left[\rho_t \sqrt{\hat q_{\pi, t}(S_t, A_t)}|S_t\right] - v^2_{\pi, t}(S_t) \explain{Definition of variance and non-negativity of $\hat q$} \\
=& \mathbb{V}_{A_t\sim \hat{\mu}_t}\left(\rho_t \sqrt{\hat q_{\pi, t}(S_t, A_t)}|S_t\right) + \left(\sum_a \pi_t(a|S_t) \sqrt{\hat q_{\pi, t}(S_t, a)}\right)^2 - v_{\pi, t}^2(S_t) \explain{Lemma~\ref{lem stats unbiasedness}} \\
=& \left(\sum_{a} \pi_t(a |S_t)\sqrt{\hat{q}_{\pi, t}(S_t, a)}\right)^2 - v^2_{\pi, t}(S_t) \explain{Definition of $\hat \mu$ \eqref{def hat mu} and Lemma \ref{lem:math-variance-0}} \\
\leq& \sum_{a} \pi_t(a|S_t)\hat{q}_{\pi, t}(S_t, a) - v_{\pi, t}^2(S_t) \explain{Jensen's inequality}\\
=& \mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{t:T-1}}_{t:T-1})\mid S_t\right) \explain{By~\eqref{eq hat and tilde q} and Lemma \ref{lem:u_variance_eq}}.
\end{align}
For $t\in [T-2]$,
we have
\begin{align}
&\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\hat \mu_{t:T-1}}_{t:T-1})\mid S_t\right) \\
=&\mathbb{E}_{A_t\sim \hat{\mu}_t}\left[\rho_t^2 \left(\mathbb{E}_{S_{t+1}}\left[\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\hat {\mu}_{t+1:T-1}}_{t+1:T-1}) \mid S_{t+1}\right) \mid S_t, A_t\right] + \nu_{\pi,t}(S_t, A_t) + q_{\pi, t}^2(S_t, A_t)\right) \mid S_t\right] \\
&- v_{\pi, t}^2(S_t) \explain{Lemma \ref{lem:recursive-var}}\\
\leq&\mathbb{E}_{A_t\sim \hat{\mu}_t}\Big[\rho_t^2 \Big(\mathbb{E}_{S_{t+1}}\Big[ \sum_{a'} \pi_{t+1}(a'|S_{t+1})\tilde {q}_{\pi, t+1}(S_{t+1}, a') \mid S_t, A_t\Big] + \nu_{\pi,t}(S_t, A_t) \\
& + q_{\pi, t}^2(S_t, A_t)\Big) \mid S_t\Big] - v_{\pi, t}^2(S_t) \explain{Inductive hypothesis and Lemma~\ref{lem:u_variance_eq}} \\
=&\mathbb{E}_{A_t\sim \hat{\mu}_t}\left[\rho_t^2 \left(\tilde{q}_{\pi, t}(S_t, A_t) + v^2_{\pi, t}(S_t)\right) \mid S_t\right] - v^2_{\pi, t}(S_t) \explain{Definition of $\tilde q $ \eqref{def:tilde-q}} \\
=&\mathbb{E}_{A_t\sim \hat{\mu}_t}\left[\rho_t^2 \hat q_{\pi, t}(S_t, A_t) \mid S_t\right] - v^2_{\pi, t}(S_t) \explain{Definition of $\hat q$ \eqref{eq def q hat}} \\
=& \mathbb{V}_{A_t\sim \hat{\mu}_t}\left(\rho_t \sqrt{\hat q_{\pi, t}(S_t, A_t)}|S_t\right) + \mathbb{E}_{A_t\sim \hat{\mu}_t}^2\left[\rho_t \sqrt{\hat q_{\pi, t}(S_t, A_t)}|S_t\right] - v^2_{\pi, t}(S_t) \explain{Repeating the arguments for $t=T-1$} \\
=& \mathbb{V}_{A_t\sim \hat{\mu}_t}\left(\rho_t \sqrt{\hat q_{\pi, t}(S_t, A_t)}|S_t\right) + \left(\sum_a \pi_t(a|S_t) \sqrt{\hat q_{\pi, t}(S_t, a)}\right)^2 - v_{\pi, t}^2(S_t) \\
=& \left(\sum_{a} \pi_t(a |S_t)\sqrt{\hat{q}_{\pi, t}(S_t, a)}\right)^2 - v^2_{\pi, t}(S_t) \\
\leq& \sum_{a} \pi_t(a|S_t)\hat{q}_{\pi, t}(S_t, a) - v_{\pi, t}^2(S_t) \\
=& \mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{t:T-1}}_{t:T-1})\mid S_t\right).
\end{align}
This completes the proof.
\end{proof}
We also prove the following stronger theorem, which lower-bounds the amount of variance reduction.
\begin{theorem}\label{lem:var_smaller_stronger}
For any $t$ and $s$,
\begin{align}
\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\hat \mu_{t:T-1}}_{t:T-1})\mid S_t = s\right) \leq \mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{t:T-1}}_{t:T-1})\mid S_t = s\right) - \epsilon_t(s),
\end{align}
where
\begin{align}
c_t(s) \doteq& \sum_a \pi_t(a|s) \hat{q}_{\pi, t}(s, a) - \left(\sum_a \pi_t(a|s)\sqrt{\hat{q}_{\pi, t}(s, a)}\right)^2, \quad \forall t \in [T-1] \\
\epsilon_t(s) \doteq&
\begin{cases}\label{def:epsilon}
c_t(s) + \min_a \sum_{s'}p(s'|s, a) \epsilon_{t+1}(s') & \text{ if } t \in [T-2] \\
c_t(s) & \text{ if } t = T-1.
\end{cases}
\end{align}
\end{theorem}
The proof is similar to that of Theorem~\ref{lem:var_smaller} and is in Appendix \ref{append:var_smaller_stronger}.
Notably, this $c_t$ is always non-negative thanks to Jensen's inequality,
which ensures that $\epsilon_t$ is also non-negative.
If we regard $c_t$ as a cost function,
then the reduced variance $\epsilon_t$ is exactly
the optimal cost-to-go function of the stochastic shortest path problem in the MDP induced by the cost function $c_t$ \citep{bertsekas1996neuro}.
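In the tabular case, $\epsilon_t$ can be computed by the same backward sweep used for value functions. A sketch under our own array conventions:

```python
import numpy as np

def variance_reduction_bound(pi, p, q_hat):
    """Backward recursion for epsilon_t(s): the per-step cost c_t(s) plus the
    action-minimizing expected continuation, i.e., the optimal cost-to-go of
    the induced stochastic shortest path problem.

    pi:    (T, nS, nA) target policy
    p:     (nS, nA, nS) transition probabilities
    q_hat: (T, nS, nA) nonnegative values of hat-q
    """
    T, nS, _ = pi.shape
    # c_t(s) >= 0 by Jensen's inequality (concavity of sqrt).
    c = (pi * q_hat).sum(-1) - ((pi * np.sqrt(q_hat)).sum(-1)) ** 2
    eps = np.zeros((T, nS))
    eps[-1] = c[-1]
    for t in reversed(range(T - 1)):
        eps[t] = c[t] + (p @ eps[t + 1]).min(axis=-1)  # min over actions
    return eps
```

On any valid input, the returned $\epsilon_t(s)$ are nonnegative, matching the discussion above.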
\section{Variance Reduction with Offline Data}
Having identified a sub-optimal but provably better policy $\hat \mu$,
the next step is to approximate it,
preferably with offline data.
Since the target policy $\pi_t$ is considered known,
to approximate $\hat \mu_t$,
according to~\eqref{eq def q hat},
it is sufficient to approximate $\hat q_{\pi, t}$.
This, according to~\eqref{eq hat and tilde q},
requires to approximate $\tilde q_{\pi, t}$ and $v_{\pi, t}$.
The state value function $v_{\pi, t}$ can be learned using any existing offline evaluation methods.
In particular,
we can use fitted $Q$-learning \citep{le2019batch} to learn the action value function $q_{\pi, t}$ first,
then compute $v_{\pi, t}$ analytically using~\eqref{eq q v relationship}.
The observant reader may ask:
\emph{if we have learned the action value function,
do we still need to do Monte Carlo evaluation?}
The answer is affirmative and more details are deferred to the discussion regarding model-free offline evaluation in Section~\ref{sec related work}.
Having learned $v_{\pi, t}$,
what remains is to approximate $\tilde q_{\pi, t}$,
which is exactly the action value function w.r.t. a different reward function $\tilde r_{\pi, t}$.
If $\tilde r_{\pi, t}$ is known,
we could then resort to fitted $Q$-learning again.
To approximate~$\tilde r_{\pi, t}$,
we, according to~\eqref{def:tilde-r}, need to approximate $\nu_{\pi, t}$, $q_{\pi, t}$, and $v_{\pi, t}$.
We have already learned $q_{\pi, t}$ and $v_{\pi, t}$.
What remains is thus to learn $\nu_{\pi, t}$,
which by its definition in~\eqref{def:nu} is a variance.
Approximating a variance is in general a supervised learning problem -- we can approximate the first and second moments separately.
In particular,
we have
\begin{align}
&\mathbb{V}_{S_{t+1}}\left(v_{\pi, t+1}(S_{t+1}) \mid S_t = s, A_t = a\right) \\
=& \mathbb{E}_{S_{t+1}}\left[v_{\pi, t+1}^2(S_{t+1}) \mid S_t = s, A_t = a\right] - \mathbb{E}_{S_{t+1}}^2\left[v_{\pi, t+1}(S_{t+1}) \mid S_t = s, A_t = a\right].
\end{align}
The two expectations can be learned via supervised learning,
using our approximation of $v_{\pi, t+1}$ to generate regression targets.
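In the tabular case, this two-moment regression reduces to simple sample averages. A minimal sketch, with names of our own choosing:

```python
import numpy as np

def moment_based_variance(v_next_samples):
    """Estimate nu_{pi,t}(s, a) = Var(v_{pi,t+1}(S_{t+1}) | s, a) from sampled
    successor values by fitting the first and second moments separately.
    Any error in the supplied v-estimates propagates into both moments."""
    v = np.asarray(v_next_samples, dtype=float)
    m1 = v.mean()             # first moment
    m2 = (v ** 2).mean()      # second moment
    return m2 - m1 ** 2
```

For example, samples $(1, 3)$ give moments $m_1 = 2$ and $m_2 = 5$, hence a variance estimate of $1$.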
The approximation error in $v_{\pi, t+1}$ will,
however,
inevitably compound into the approximation error of this variance term.
The approximation error in this variance term then compounds into the approximation error of $\hat q_{\pi, t+1}$.
\emph{This repeated error compounding is the first challenge in learning $\tilde q_{\pi, t}$ and $v_{\pi, t}$ separately.}
Moreover,
to ensure that $\hat \mu_t$ is well defined,
it is necessary to make sure that the approximation of $\tilde q_{\pi, t}(s, a) + v_{\pi, t}^2(s)$ is non-negative.
Clearly, $v_{\pi, t}^2(s)$ is always non-negative.
There is, however, no guarantee that the learned approximation of $\tilde q_{\pi, t}(s, a)$ is non-negative.
One way to achieve this is to use a non-negative function class,
e.g., $(\cdot)^2$ or $\ln(1 + \exp(\cdot))$,
to approximate $\tilde q_{\pi, t}(\cdot, \cdot)$,
such that our approximation is always non-negative.
This unfortunately introduces bias as we use a non-negative function class to approximate a function whose value can possibly be negative.
\emph{This bias is the second challenge in learning $\tilde q_{\pi, t}$ and $v_{\pi, t}$ separately.}
We now address those two challenges simultaneously via learning $\hat q_{\pi, t}(s, a)$ directly.
This is made possible thanks to the following observation.
\begin{lemma}\label{lem:hat-q-recursive}
$\forall s,a, $
\begin{align}
\hat{q}_{\pi, t}(s, a) =
\begin{cases}
\hat{r}_{\pi,t}(s,a) + \sum_{s', a'} p(s'|s, a) \pi_{t+1}(a'|s') \hat{q}_{\pi, t+1}(s', a') & \text{ if } t \in [T-2] \\
\hat{r}_{\pi,t}(s,a) & \text{ if } t = T-1
\end{cases},
\end{align}
where
\begin{align}
\label{eq def r hat}
\hat{r}_{\pi,t}(s,a) \doteq 2r(s, a) q_{\pi, t}(s, a)- r^2(s, a).
\end{align}
\end{lemma}
\begin{proof}
For $t = T-1$, we have
\begin{align}
\hat{q}_{\pi, t}(s, a) &= q^2_{ \pi, t}( s, a) \explain{Definition of $\hat q_{\pi, t}$~\eqref{eq def q hat}} \\
&= \hat{r}_{\pi,t}(s,a). \explain{By $q_{\pi, T-1}(s, a) = r(s, a)$ and~\eqref{eq def r hat}}
\end{align}
For $t\in [T-2]$, we have
\begin{align}
&\hat{q}_{\pi,t}(s,a) \\
=& \tilde{q}_{ \pi, t}( s, a) + v^2_{\pi, t}(s) \explain{By~\eqref{eq hat and tilde q}} \\
=& \tilde r_{\pi,t}(s, a) + v^2_{\pi, t}(s) + \sum_{s', a'} p(s'|s, a) \pi_{t+1}(a'|s') \tilde{q}_{\pi, t+1}(s', a') \explain{Definition of $\tilde{q}$ \eqref{def:tilde-q}} \\
=& \tilde r_{\pi,t}(s, a) + v^2_{\pi, t}(s) + \sum_{s', a'} p(s'|s, a) \pi_{t+1}(a'|s') (\tilde{q}_{\pi, t+1}(s', a') + v^2_{\pi, t+1}(s') - v^2_{\pi, t+1}(s')) \\
=& \tilde r_{\pi,t}(s, a) + v^2_{\pi, t}(s) + \sum_{s', a'} p(s'|s, a) \pi_{t+1}(a'|s') (\hat{q}_{\pi, t+1}(s', a') - v^2_{\pi, t+1}(s')) \explain{By~\eqref{eq hat and tilde q}}\\
=& \nu_{\pi,t}(s, a) + q^2_{\pi, t}(s, a) - \sum_{s'} p(s'|s, a) v^2_{\pi, t+1}(s') + \sum_{s', a'} p(s'|s, a) \pi_{t+1}(a'|s') \hat{q}_{\pi, t+1}(s', a') \explain{Definition of $\tilde{r}$ \eqref{def:tilde-r}}\\
=& -(\mathbb{E} [ v_{\pi, t+1}(S_{t+1})\mid S_t=s, A_t=a ])^2 + q^2_{\pi, t}(s, a) + \sum_{s', a'} p(s'|s, a) \pi_{t+1}(a'|s') \hat{q}_{\pi, t+1}(s', a') \explain{Definition of $\nu$ \eqref{def:nu}} \\
=& -(q_{\pi, t}(s, a) - r(s, a) )^2 + q^2_{\pi, t}(s, a) + \sum_{s', a'} p(s'|s, a) \pi_{t+1}(a'|s') \hat{q}_{\pi, t+1}(s', a') \\
=& 2r(s, a) q_{\pi, t}(s, a)- r^2(s, a) + \sum_{s', a'} p(s'|s, a) \pi_{t+1}(a'|s') \hat{q}_{\pi, t+1}(s', a') \\
=& \hat r_{\pi, t}(s, a) + \sum_{s', a'} p(s'|s, a) \pi_{t+1}(a'|s') \hat{q}_{\pi, t+1}(s', a'),
\end{align}
which completes the proof.
\end{proof}
In other words,
$\hat q$ is exactly the action value function of the policy $\pi$ w.r.t. the reward function $\hat r$.
Once $\hat r$ is learned,
we can then learn $\hat q$ with any offline evaluation methods for action-value functions,
e.g.,
fitted $Q$-learning.
To learn $\hat r$,
it is sufficient to learn $r$ and $q$.
Fitted $Q$-learning can be used to learn $q$ and learning $r$ is a simple regression problem.
Importantly,
this regression problem now has accurate targets.
By contrast,
the regression for the variance $\nu$ uses the estimate of $v_{\pi, t+1}$ to compute its targets.
As a result,
error compounding is reduced by learning $\hat q_{\pi, t}$ directly.
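Concretely, in the tabular case the recursion of Lemma~\ref{lem:hat-q-recursive} is a standard backward evaluation sweep w.r.t. the surrogate reward $\hat r$. A sketch assuming deterministic rewards $r(s, a)$ and our own array conventions:

```python
import numpy as np

def hat_q_from_hat_r(r, q, pi, p):
    """Backward sweep for hat-q via Lemma (hat-q-recursive):
    hat-r_t(s,a) = 2 r(s,a) q_t(s,a) - r(s,a)^2, followed by the usual
    Bellman evaluation recursion under pi.

    r: (nS, nA), q: (T, nS, nA), pi: (T, nS, nA), p: (nS, nA, nS).
    """
    T = q.shape[0]
    hat_r = 2 * r * q - r ** 2            # broadcasts to shape (T, nS, nA)
    hat_q = np.zeros_like(q)
    hat_q[-1] = hat_r[-1]                  # base case: hat-q_{T-1} = r^2
    for t in reversed(range(T - 1)):
        hat_q[t] = hat_r[t] + p @ (pi[t + 1] * hat_q[t + 1]).sum(-1)
    return hat_q
```

Given an exact $q$, the output is nonnegative, since $\hat q$ equals a sum of a conditional variance, $\nu$, and $q^2$.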
We consider the behavior policy agnostic offline learning setting \citep{nachum2019dualdice},
where the offline data in the form of
\begin{align}
\qty{(t_i,s_i,a_i,r_i,s_i')}_{i=1}^m
\end{align}
consists of $m$ previously logged data tuples.
In the $i$-th data tuple,
$t_i$ is the time step,
$s_i$ is the state at time step $t_i$,
$a_i$ is the action executed on state $s_i$,
$r_i$ is the sampled reward,
and $s_i'$ is the successor state.
Those tuples can be generated by one or more possibly unknown behavior policies,
and they do not need to form complete trajectories.
Our proposed algorithm for learning $\hat \mu$ is detailed in Algorithm \hyperlink{Algorithm1}{1}.
In particular,
we first learn $r$ with supervised learning and $q_{\pi, t}$ with fitted $Q$-learning from the offline data.
Then we compute $\hat r$ analytically
and apply fitted $Q$-learning again to learn $\hat q_{\pi, t}$,
which is then used to compute $\hat \mu$ analytically.
We split the offline data into training sets and test sets to tune all the hyperparameters in Algorithm \hyperlink{Algorithm1}{1},
based on the supervised learning loss or the fitted $Q$-learning loss on the test set.
\begin{table}
\begin{center}
\begin{tabular}{m{40em}}
\hline
\textbf{\hypertarget{Algorithm1}{Algorithm 1}}: Approximating $\hat{\mu}$ with offline data \\
\hline
Input: a differentiable function parameterization $r_w: \mathcal{S} \times \mathcal{A} \times \mathbb{R}^{d_0} \to \mathbb{R}$ \\
\hspace{12mm }a differentiable function parameterization $q_w: [T] \times \mathcal{S} \times \mathcal{A} \times \mathbb{R}^{d_1} \to \mathbb{R}$ \\
\hspace{12mm }a differentiable function parameterization $\hat{q}_w: [T] \times \mathcal{S} \times \mathcal{A} \times \mathbb{R}^{d_2} \to \mathbb{R}$ \\
\hspace{12mm }a target policy $\pi$ \\
\hspace{12mm }an offline dataset $\qty{(t_i,s_i,a_i,r_i,s_i')}_{i=1}^m$ \\
Initialize $ w_r\in \mathbb{R}^{d_0}, w_q \in \mathbb{R}^{d_1}, w_{\hat{q}}\in \mathbb{R}^{d_2} $ arbitrarily. \\
Algorithm Parameters: learning rates $\alpha_r,\alpha_q,\alpha_{\hat{q}}$ \\
Output: a behavior policy $\hat{\mu}$ \\
\\
\textbf{Step 1: Augment data with $a'$}\\
Sample an action $a' \sim \pi_{t+1}(\cdot | s')$ for each $(t, s, a, r, s')$ tuple.\\
\\
\textbf{Step 2: Approximate $r_w$}\\
Loop for each training step:\\
\hspace{1em} Sample a minibatch of $(s, a, r)$\\
\hspace{1em} Perform a mini-batch gradient descent step based on \\
\hspace{1em} $w_r \leftarrow w_r + \alpha_r[ r - {r}_{w}(s,a,w_r)] \nabla {r}_{w}(s,a,w_r) $ \\
\\
\textbf{Step 3: Approximate $q_w$}\\
Loop for each training step:\\
\hspace{1em} Sample a minibatch of $(t, s, a, r, s', a')$\\
\hspace{1em} Perform a mini-batch gradient descent step based on \\
\hspace{1em} $w_q \leftarrow w_q + \alpha_q[ r + q_{w,t+1}(s',a',w_q) - q_{w,t}(s,a,w_q)] \nabla q_{w,t}(s,a,w_q) $ \\
\\
\textbf{Step 4: Approximate $\hat{q}_w$}\\
Loop for each training step:\\
\hspace{1em} Sample a minibatch of $(t, s, a, s',a')$\\
\hspace{1em} Perform a mini-batch gradient descent step based on \\
\hspace{1em} $\hat{r} \leftarrow 2r_w(s, a,w_r) q_{w, t}(s, a,w_q)- r_w^2(s, a,w_r)$ \\
\hspace{1em} $w_{\hat{q}} \leftarrow w_{\hat{q}} + \alpha_{\hat{q}} [ \hat{r} + \hat{q}_{w,t+1}(s',a',w_{\hat{q}}) - \hat{q}_{w,t}(s,a,w_{\hat{q}}) ] \nabla \hat{q}_{w,t}(s,a,w_{\hat{q}}) $ \\
\\
\textbf{Step 5: Output $\hat \mu$ }\\
$\hat \mu_t(a|s) \propto \pi_t(a|s)\sqrt{\hat{q}_{ w, t}(s, a,w_{\hat{q}})}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
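For reference, one tabular pass of the fitted $Q$-learning update in Step 3 might look as follows; the terminal case $t = T-1$, which the table leaves implicit, is handled explicitly, and all names are our own:

```python
import numpy as np

def fitted_q_sweep(q, batch, alpha):
    """One pass of Step 3 over offline tuples (t, s, a, r, s2, a2), where a2
    was sampled from pi_{t+1}(.|s2) in Step 1.  q has shape (T, nS, nA); at
    the terminal step t = T-1 the bootstrap target is just the reward."""
    T = q.shape[0]
    for t, s, a, rew, s2, a2 in batch:
        target = rew + (q[t + 1, s2, a2] if t < T - 1 else 0.0)
        q[t, s, a] += alpha * (target - q[t, s, a])
    return q
```

The same update shape applies to Step 4 after substituting the analytically computed $\hat r$ for the sampled reward.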
\begin{table}[ht]
\begin{center}
\begin{tabular}{ m{40em} }
\hline
\textbf{\hypertarget{Algorithm2}{Algorithm 2}}: Adaptive Monte Carlo Evaluation\\
\hline
Input: A policy $\pi$ to be evaluated \\
\hspace{12mm}A policy $\hat{\mu}$ computed from Algorithm \hyperlink{Algorithm1}{1} \\
Algorithm Parameters: UCB parameter $c$ \\
Initialize:
$Rewards(\pi) \leftarrow$ an empty list \\
\hspace{16mm} $Rewards(\hat{\mu}) \leftarrow$ an empty list\\
\hspace{16mm} $n \leftarrow 0$\\
\hspace{16mm} $J \leftarrow 0$\\
Output: Total rewards estimation $J$\\
\\
Loop for $K$ episodes: \\
\hspace{1em} $S_0 \sim p_0$ \\
\hspace{1em} $b \leftarrow \text{argmax}_{b \in \qty{\hat{\mu},\pi}} Average\big(Rewards(b)\big) + c \sqrt{\frac{\log n}{|Rewards(b)|}}$ \hfill (UCB)\\
\hspace{1em} Generate a trajectory $\qty{S_0,A_0,R_1,S_1,A_1,R_2,\cdots,S_{T-1},A_{T-1},R_{T}}$ following $b$ \\
\hspace{1em} $G \leftarrow 0$ \\
\hspace{1em} for $t = T-1,T-2,\cdots,0$: \\
\hspace{2em} $G \leftarrow \frac{\pi_t(A_t|S_t)}{ b_t(A_t|S_t)} ( R_{t+1} + G) $\\
\hspace{1em} $Rewards(b)$ append $-G^2$\\
\hspace{1em} $n \leftarrow n+1$ \\
\hspace{1em} $J \leftarrow J + \frac{1}{n} (G-J) $\\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Variance Reduction in Online Execution}\label{sec:online-execuation}
Though the PDIS Monte Carlo estimator with $\hat \mu$ is provably better than that with $\pi$ (Theorems~\ref{lem rl pdis unbaised} \&~\ref{lem:var_smaller}),
we
do not have access to $\hat \mu$ but only its approximation \emph{learned} from offline data.
This learning process suffers from various biases,
e.g.,
insufficient data coverage,
mismatched hypothesis space,
incomplete optimization,
insufficient hyperparameter tuning,
etc,
just like any other learning process.
As a result,
using the learned approximation of $\hat \mu$ is not necessarily better than using $\pi$ directly.
With a slight abuse of notation,
we in this section use $\hat \mu$ to denote the learned approximation from Algorithm \hyperlink{Algorithm1}{1}.
Thus when we actually collect online data,
there are two choices,
to use the target policy $\pi$ or to use the learned $\hat \mu$.
Though both yield unbiased estimation,
their variances are different.
This places us in the exploration and exploitation dilemma (see, e.g., \citet{lattimore2020bandit}).
On the exploration side,
we want to execute both policies more to know their variance.
On the exploitation side,
we want to commit to a better policy as soon as possible.
Motivated by the celebrated success in the bandit community in solving the exploration and exploitation dilemma \citep{lattimore2020bandit},
we now formulate this problem as a multi-armed bandit problem,
where the target policy $\pi$ and the learned $\hat \mu$ are two arms.
To complete the bandit formulation,
the next step is to specify a reward function for each arm.
Since we want to identify the policy that yields a lower variance,
the natural choice is then to use the additive inverse of the variance,
i.e.,
$-\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\hat \mu_{0:T-1}}_{0:T-1})\right)$ and $-\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{0:T-1}}_{0:T-1})\right)$,
as a reward.
We,
however,
cannot estimate the variance from a single trajectory $\tau_{0:T-1}$.
Using multiple trajectories either reduces the sample efficiency or makes the reward function non-stationary.
To address this challenge,
we use the following observation,
\begin{align}
&\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\hat \mu_{0:T-1}}_{0:T-1})\right) = \mathbb{E}\left[(G^{\text{PDIS}}(\tau^{\hat \mu_{0:T-1}}_{0:T-1}) )^2\right] - \mathbb{E}^2 \left[G^{\text{PDIS}}(\tau^{\hat \mu_{0:T-1}}_{0:T-1})\right] \label{eq:var-equivalent-1}, \\
&\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{0:T-1}}_{0:T-1})\right) = \mathbb{E}\left[(G^{\text{PDIS}}(\tau^{\pi_{0:T-1}}_{0:T-1}) )^2\right] - \mathbb{E}^2 \left[G^{\text{PDIS}}(\tau^{\pi_{0:T-1}}_{0:T-1})\right] \label{eq:var-equivalent-2}.
\end{align}
Since the PDIS estimator is unbiased for both $\hat \mu$ and $\pi$,
the $\mathbb{E}^2\left[\cdot\right]$ terms above are equal.
To compare the variances,
it is sufficient to compare the second moments.
We, therefore,
use $-(G^{\text{PDIS}}(\tau^{\hat \mu_{0:T-1}}_{0:T-1}))^2$
and $-(G^{\text{PDIS}}(\tau^{\pi_{0:T-1}}_{0:T-1}))^2$
as the rewards for the arms $\hat \mu$ and $\pi$
respectively.
Those rewards are immediately available after a trajectory $\tau_{0:T-1}$ is sampled.
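The backward PDIS computation and the squared-return bandit reward used in Algorithm~\hyperlink{Algorithm2}{2} can be sketched as follows; the trajectory format is our own convention:

```python
def pdis_return(traj, pi, b):
    """Per-decision importance sampling return, computed backwards as in
    Algorithm 2: G <- (pi_t(a|s) / b_t(a|s)) * (R_{t+1} + G).

    traj: list of (t, s, a, r_next) tuples; pi, b: nested tables indexed as
    pi[t][s][a] (the behavior policy b must cover the sampled actions)."""
    G = 0.0
    for t, s, a, r_next in reversed(traj):
        G = pi[t][s][a] / b[t][s][a] * (r_next + G)
    return G

def bandit_reward(G):
    """Reward fed to the bandit: the negative squared PDIS return."""
    return -G * G
```

When $b = \pi$, every ratio is $1$ and the PDIS return reduces to the plain sum of rewards.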
We use the Upper Confidence Bound (UCB, \citet{auer2002using}) algorithm to adaptively switch between the two arms during online executions.
The details are documented in Algorithm~\hyperlink{Algorithm2}{2}.
Notably,
since both $\hat \mu$ and $\pi$ induce unbiased estimation,
all trajectories,
not just those from the better policy,
contribute to the final estimation.
No online data is wasted in this sense.
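The UCB arm selection in Algorithm~\hyperlink{Algorithm2}{2} reduces to a few lines. A sketch with hypothetical names, where arms index the two candidate behavior policies and unpulled arms are tried first:

```python
import math

def ucb_pick(rewards, n, c=1.0):
    """Pick the arm maximizing average(-G^2 history) + c * sqrt(log n / pulls),
    as in the (UCB) line of Algorithm 2.  rewards maps each arm to its list of
    observed -G^2 values; n is the total number of pulls so far."""
    def score(arm):
        pulls = len(rewards[arm])
        if pulls == 0:
            return math.inf                       # force initial exploration
        bonus = c * math.sqrt(math.log(max(n, 2)) / pulls)
        return sum(rewards[arm]) / pulls + bonus
    return max(rewards, key=score)
```

The `max(n, 2)` guard keeps the bonus well defined on the first pulls; it is our own choice, not part of the pseudocode above.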
Let $b^{(i)} \in \qty{\hat{\mu}, \pi}$ denote the behavior policy chosen at the $i$-th online episode. Define $K$ as the total number of online episodes
and define the regret for the variance as
\begin{align*}
&\text{Regret}(K) \\
\doteq& \sum_{i =1}^{K} \mathbb{V}(G^{\text{PDIS}}(\tau^{ {b^{(i)}}_{0:T-1}}_{0:T-1}) ) - K \cdot \min \qty{\mathbb{V}(G^{\text{PDIS}}(\tau^{ \hat{\mu}_{0:T-1}}_{0:T-1}) ),\mathbb{V}(G^{\text{PDIS}}(\tau^{ \pi_{0:T-1}}_{0:T-1}) ) }.
\end{align*}
The following lemma gives a sublinear regret guarantee, regardless of the approximation error in $\hat \mu$.
Its proof is in Appendix \ref{append:UCB-regret}.
\begin{lemma}\label{lem:UCB-regret}
$\mathbb{E}\left[\text{Regret}(K)\right] = \mathcal{O}(\sqrt{K \ln K})$.
\end{lemma}
\section{Empirical Results}\label{sec:experiment}
In this section,
we present empirical results to answer the following two questions.
\begin{enumerate}
\item If the approximation of $\hat \mu$ is of high quality, can the PDIS Monte Carlo estimator outperform the ordinary on-policy Monte Carlo estimator?
\item If the approximation of $\hat \mu$ is of poor quality, can the adaptive execution strategy still ensure a low estimation error?
\end{enumerate}
We use grid worlds with different sizes as our benchmark environments.
For a grid world with size $n$, its width, height,
and time horizon $T$ are all set to $n$.
There are four possible actions: up, down, left, and right. After taking an action, the agent has 0.9 probability to move accordingly and 0.1 probability to move uniformly at random.
If the agent runs into a boundary,
the agent stays in its current location.
The reward function $r(s, a)$ is randomly generated and fixed after generation.
We normalize the rewards across all $(s, a)$ such that $\max_{s,a} r(s,a) = 1$.
We consider a set of randomly generated target policies.
The ground truth policy performance is estimated using the on-policy Monte Carlo method by running each target policy for $10^6$ episodes.
We test three different sizes of the grid world, i.e., $n \in \qty{5, 10, 15}$.
The offline dataset always contains $m = 10^5$ randomly generated tuples regardless of $n$.
Given an environment and a target policy,
we execute Algorithm \hyperlink{Algorithm1}{1} to approximate the functions $r$, $q$, and $\hat{q}$.
We consider a tabular setting where each $(t,s,a)$ pair is represented by a distinct one-hot vector.
As shown in Algorithm \hyperlink{Algorithm1}{1},
we train $r$ using supervised learning by batch gradient descent.
We train $q$ and $\hat{q}$ using fitted $Q$-learning.
We split the offline data into a training set and a test set and
tune all hyperparameters offline
based on the supervised learning loss and fitted $Q$-learning loss on the test set.
We use the same set of hyperparameters for all grid worlds and target policies.
We end up with all learning rates set to $1$,
the number of training steps set to $10^3$,
and the batch size set to $128$.
In each grid world environment, we test $30$ randomly generated target policies and each target policy is tested $30$ times.
For an environment and a target policy,
we execute $\hat \mu$ and $\pi$ for $500$ steps to estimate the expected total rewards of the target policy. Each step is defined as one interaction between the agent and the environment. Thus, the estimates for the environment with time horizon $15$ start from step $15$.
Because we are interested in estimation accuracy,
we define the \emph{estimation error} at step $t$ as the absolute difference between the PDIS estimation and the ground truth divided by the ground truth.
We use \emph{normalized estimation error} which is the estimation error divided by the average estimation error of the on-policy estimator after the first episode. This ensures that the normalized estimation error of the on-policy estimator starts from $1$.
\begin{figure}[ht]
\includegraphics[width=\textwidth]{figure/gridworld.pdf}
\centering
\caption{
The normalized estimation errors of the off-policy estimator with the learned $\hat{\mu}$ and the on-policy estimator. Each curve averages the results of 30 random policies and each policy has 30 independent runs.
The shaded regions denote standard errors and are invisible for some curves because they are too small.
}
\label{fig:gridworld}
\end{figure}
Blue lines in Figure \ref{fig:gridworld} show the experiment results when we always use learned $\hat{\mu}$ as the actual behavior policy.
For the grid world of size $n=5$,
the off-policy estimator with the learned $\hat{\mu}$ consumes significantly fewer online steps to achieve the same estimation accuracy,
compared with the on-policy estimator.
For example, to achieve $0.2$ normalized estimation error,
the off-policy estimator consumes around $40$ online steps while the on-policy estimator consumes around $120$ online steps.
Substantial improvements also exist in the grid world environment with size $n=10$. However, in the grid world environment with size $n=15$, the off-policy estimator with the learned $\hat{\mu}$ is actually worse than the on-policy estimator.
This is because the learned $\hat{\mu}$ contains a non-negligible approximation error due to insufficient data coverage: the environment size increases while the amount of offline data remains unchanged.
To maintain a reasonably low estimation error when this occurs,
we use Algorithm \hyperlink{Algorithm2}{2} with a standard UCB confidence value $c=1$ to identify the better policy, with results shown in Figure \ref{fig:gridworld}.
As shown with red lines, after adopting the UCB algorithm to adaptively switch between the learned policy $\hat{\mu}$ and the target policy $\pi$ during online executions,
the improvements on the grid worlds with sizes $5$ and $10$ are still significant (e.g., when $n=5$, to achieve $0.2$ normalized estimation error,
the off-policy estimator takes $60$ online steps while the on-policy estimator takes $120$).
The jump at the start of the red lines is because Algorithm \hyperlink{Algorithm2}{2} initially breaks ties by choosing the learned $\hat{\mu}$ and then chooses to explore the target policy $\pi$. This causes the error to increase because of the large on-policy estimation error.
The performance degeneration when $n=15$ has been effectively mitigated, as shown by the closer curves in Figure \ref{fig:gridworld}.
As we examine the variance numerically below, this mitigation is significant.
Define
the variance ratio as
\begin{align*}
\frac{\mathbb{V}\left(G^{\text{PDIS}}(\tau^{b_{0:T-1}}_{0:T-1})\right)}{\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{0:T-1}}_{0:T-1})\right)},
\end{align*}
where $b$ is the actual behavior policy during online executions.
Using the collected online samples to estimate the variances, Table \ref{tab:varaince-ratio} shows that
when $n=5$ or $10$,
the variance of the off-policy estimator with the learned $\hat{\mu}$ is much smaller than the variance of the on-policy estimator.
After adopting the UCB algorithm,
the significant variance reduction for $n \in \qty{5, 10}$ still exists while the variance when $n=15$ has been reduced to almost one-third of its original size.
Thus, Algorithm \hyperlink{Algorithm2}{2} greatly reduces the regret when the learned $\hat{\mu}$ is of poor quality.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ c| c c }
\hline
Gridworld size $n$ & Without UCB & With UCB \\
\hline
5 & 0.282 & 0.350 \\
10 & 0.352 & 0.422 \\
15 & 3.673 & 1.390 \\
\hline
\end{tabular}
\caption{Variance ratios for gridworld environments.}
\label{tab:varaince-ratio}
\end{center}
\end{table}
\section{Related Work}
\label{sec related work}
\paragraph*{Monte Carlo methods.} Reducing the variance of Monte Carlo estimators via learning a proper behavior policy has been explored before.
\citet{hanna2017data} model the problem of finding a variance-reducing behavior policy
as an optimization problem and thus rely on stochastic gradient descent to update a parameterized behavior policy directly.
In particular,
\citet{hanna2017data} consider the ordinary importance sampling.
\emph{By contrast,
we consider the per-decision importance sampling,
which is fundamentally better \citep{precup:2000:eto:645529.658134}.}
Moreover,
\citet{hanna2017data}
require new online data to learn this behavior policy.
\emph{By contrast,
our method works with offline data and does not need any more online data for behavior policy learning.}
\citet{hanna2017data} also require the online data to be complete trajectories.
\emph{By contrast,
our method copes well with offline tuples.}
\citet{mukherjee2022revar} also investigate variance-reducing behavior policies for the per-decision importance sampling estimator.
Their results,
however,
apply only to tree-structured MDPs,
which is rather restrictive because many MDPs of interest are not tree-structured.
For example,
in the finite horizon MDP considered in this paper,
if two states at time $t$ can transition to the same successor state at time $t+1$,
then this MDP is not tree-structured.
Moreover,
\citet{mukherjee2022revar} need to directly approximate the transition function of the MDP by counting,
making it essentially a model-based approach.
\citet{mukherjee2022revar}, therefore,
suffer from all canonical challenges in model learning \citep{sutton1990integrated,sutton2012dyna,deisenroth2011pilco,chua2018deep}.
\emph{By contrast,
we work on general MDPs without making any assumption regarding the underlying structures of the MDPs and do not need to approximate the transition function.
Our approach is model-free.}
\citet{zhong2022robust} adjust the behavior policy by encouraging under-sampled data.
\citet{zhong2022robust},
however,
rely on strong assumptions on offline data.
Their offline data has to be complete trajectories generated by known policies.
In their experiments,
they also require the policies generating the offline data to be similar to the target policy
since they do not use any importance sampling.
\emph{By contrast,
our method copes well with offline data in the form of incomplete segments from possibly unknown behavior policies that can be arbitrarily different from the target policy}.
Moreover,
there is no theoretical guarantee that the estimates made by \citet{zhong2022robust} are unbiased or consistent.
\emph{By contrast,
our estimate is always provably unbiased}.
Other attempts for variance reduction in Monte Carlo evaluation are mostly by using control variates based on value function \citep{zinkevich2006optimal,white2009learning,jiang2015doubly}.
Such control variates can be easily integrated into our estimator,
which we, however, save for future work.
\paragraph*{Model-based offline evaluation.}
One straightforward way to exploit offline data for policy evaluation is to learn a model of the MDP first,
probably with supervised learning \citep{jiang2015doubly,paduraru2013off,zhang2021autoregressive},
and then execute Monte Carlo methods inside the learned model.
Learning a high-fidelity model is, however, sometimes even more challenging than evaluating the policy itself.
Moreover, the model prediction error can easily compound over time steps during model rollouts \citep{wan2019planning}.
\emph{Nevertheless,
if a good model can somehow be learned,
our work still helps reduce the required rollouts when Monte Carlo is applied within the learned model.}
\paragraph*{Model-free offline evaluation.}
Offline data can also be exploited for policy evaluation without explicitly constructing a model.
Those model-free offline evaluation methods instead learn some other quantities,
including density ratio (a.k.a. marginalized importance sampling ratio, \citet{liu2018breaking,nachum2019dualdice,zhang2020gradientdice,yang2020off,mousavi2020blackbox,li2019perspective,uehara2019minimax,xie2019towards}) and action value function \citep{le2019batch,harutyunyan2016q,precup:2000:eto:645529.658134,munos2016safe,farajtabar2018more}.
But those learning processes bring in bias,
just like any other learning process,
either due to the misspecification of the function class or due to the complexity of optimization.
Consequently,
the estimation they make is biased and it is hard to quantify such bias without restrictive assumptions.
\emph{To our knowledge,
the only practical way in general settings to certify that their estimates are indeed accurate is to compare them against estimates made by Monte Carlo methods.}
We believe this is why Monte Carlo methods still dominate the evaluation of policies.
Even worse, those learning algorithms also have hyperparameters to tune,
just like any other learning algorithm.
In other words,
we need to evaluate different outputs of those learning algorithms corresponding to different hyperparameters.
This is called \emph{model selection}.
Clearly, we cannot use the aforementioned density-ratio or action-value-function based model-free evaluation methods for model selection; otherwise we run into a circular dependence.
In fact,
those works \citep{liu2018breaking,nachum2019dualdice,zhang2020gradientdice,yang2020off,mousavi2020blackbox,li2019perspective,uehara2019minimax,xie2019towards} usually use Monte Carlo with online data for evaluating different candidates.
The online data comes from either a simulator or a learned model.
As a result,
\emph{this work helps reduce the online data used in model selection by those model-free offline evaluation methods}.
Efforts have been made to perform model selection with only offline data without explicitly learning a model as well \citep{kumar2021workflow,paine2020hyperparameter,zhang2021towards,xie2021batch}.
Those offline model selection methods, however, rarely have a correctness guarantee without restrictive assumptions.
\emph{They can probably provide a preliminary screen in model selection but Monte Carlo methods make the final decision when correctness really matters.}
To summarize,
if obtaining online data is entirely impossible,
existing offline evaluation methods that use no online data
may be the only choice.
These include model-based methods and model-free methods augmented by offline model selection.
However, in many real-world problems,
it is practical to assume that a small amount of online data is available.
If in addition, evaluation correctness should be honored,
then the improved Monte Carlo method in this work might be a better choice.
\paragraph*{Control.}
Control algorithms can use online data \citep{watkins1992q,sutton2000policy,mnih2015human,schulman2017proximal},
offline data \citep{fujimoto2019off,kumar2020conservative,yu2020mopo,kidambi2020morel,schrittwieser2021online},
or a mix of online and offline data \citep{vecerik2017leveraging,nair2020awac,lee2022offline,ijspeert2002learning,kim2013learning,rajeswaran2017learning,ajay2020opal}.
Control algorithms also have hyperparameters.
Consequently, to use control algorithms, model selection is necessary,
where Monte Carlo methods now dominate.
As a result,
\emph{this work makes almost all control algorithms more efficient, in terms of using online data}.
Using offline data to help online model selection in control problems is previously explored by
\cite{konyushova2021active}.
In particular, it uses offline data to decide which policy,
among a given set of policies, should be given priority to evaluate.
When it comes to the actual online evaluation,
\cite{konyushova2021active} still uses the ordinary online Monte Carlo methods.
\emph{\cite{konyushova2021active}, therefore, again benefit from the improved Monte Carlo method in this paper}.
\section{Conclusion}
Monte Carlo methods are the dominant approach for evaluating a policy.
The development and deployment of almost all RL algorithms depend, implicitly or explicitly, on Monte Carlo methods.
For example,
when a reinforcement learning researcher wants to plot a curve of the agent performance against training steps,
Monte Carlo methods are usually the first choice.
This work develops a method to improve the online data efficiency of Monte Carlo evaluation by learning a tailored behavior policy from offline data.
The Monte Carlo estimator with this tailored behavior policy is provably better than the canonical Monte Carlo estimator.
The theoretical advantage is also demonstrated empirically,
\tb{as a proof-of-concept},
in the tested domains.
We save the investigation on large-scale problems for future work.
Moreover,
this work considers only the total rewards performance metric on finite horizon MDPs.
One natural next step is to consider the average reward \citep{puterman2014markov} or the discounted total rewards \citep{puterman2014markov} on infinite horizon MDPs.
\section{Proofs}
\subsection{Proof of Lemma~\ref{lem stats unbiasedness}}
\label{sec proof lem stats unbiasedness}
\begin{proof}
\begin{align}
\mathbb{E}_{A\sim\mu}\left[\rho(A)q(A)\right] =& \sum_{a \in \qty{a|\mu(a) > 0}} \mu(a) \frac{\pi(a)}{\mu(a)} q(a) \\
=& \sum_{a \in \qty{a|\mu(a) > 0}} \pi(a) q(a) \\
=& \sum_{a \in \qty{a|\mu(a) > 0}} \pi(a) q(a) + \sum_{a \in \qty{a | \mu(a) = 0}} \pi(a)q(a) \explain{$\mu \in \Lambda$} \\
=&\sum_a \pi(a)q(a) \\
=&\mathbb{E}_{A\sim\pi}\left[q(A)\right].
\end{align}
\end{proof}
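As a quick numerical sanity check of the unbiasedness argument above (the policies $\pi$, $\mu$ and the values $q$ below are made-up illustrations, not from the paper), the following sketch verifies that $\mathbb{E}_{A\sim\mu}[\rho(A)q(A)] = \mathbb{E}_{A\sim\pi}[q(A)]$ for a behavior policy $\mu \in \Lambda$:

```python
# Sanity check: importance sampling with a behavior policy mu satisfying
# mu(a) = 0 => pi(a)q(a) = 0 (i.e., mu in Lambda) is unbiased.
import numpy as np

pi = np.array([0.5, 0.3, 0.2, 0.0])   # target policy (made-up)
q = np.array([1.0, -2.0, 0.5, 3.0])   # action values; last action has pi(a) = 0
mu = np.array([0.4, 0.4, 0.2, 0.0])   # behavior policy; mu(a) = 0 only where pi(a)q(a) = 0

support = mu > 0
rho = np.zeros_like(pi)
rho[support] = pi[support] / mu[support]

lhs = np.sum(mu[support] * rho[support] * q[support])  # E_{A~mu}[rho(A) q(A)]
rhs = np.sum(pi * q)                                   # E_{A~pi}[q(A)]
assert np.isclose(lhs, rhs)
```

The check only sums over the support of $\mu$, mirroring the first equality in the proof.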
\subsection{Proof of Lemma \ref{lem:math-optimal}}\label{append:math-optimal}
\begin{proof}\hspace{1cm}\\
For a given $\pi$ and $q$,
define
\begin{align}
\mathcal{A}_+ \doteq \qty{a \mid \pi(a)q(a) \neq 0}.
\end{align}
For any $\mu \in \Lambda$,
we expand the variance as
\begin{align}
\label{eq:math-variance}
&\mathbb{V}_{A\sim \mu}(\rho(A)q(A)) \\
=& \mathbb{E}_{A\sim \mu}[(\rho(A)q(A))^2] - \mathbb{E}_{A\sim \mu}^2[\rho(A)q(A)] \\
=& \mathbb{E}_{A\sim \mu}[(\rho(A)q(A))^2] - \mathbb{E}_{A\sim \pi}^2[q(A)] \explain{Lemma~\ref{lem stats unbiasedness}} \\
=& \sum_{a \in \qty{a \mid \mu(a) > 0}} \frac{\pi^2(a)q^2(a)}{\mu(a)}- \mathbb{E}_{A\sim \pi}^2[q(A)] \\
=& \sum_{a \in \qty{a \mid \mu(a) > 0} \cap \mathcal{A}_+} \frac{\pi^2(a)q^2(a) }{\mu(a)}- \mathbb{E}_{A\sim \pi}^2[q(A)] \explain{$\pi(a)q(a) = 0, \forall a \notin \mathcal{A}_+$} \\
=& \sum_{a \in \mathcal{A}_+} \frac{\pi^2(a)q^2(a) }{\mu(a)}- \mathbb{E}_{A\sim \pi}^2[q(A)]. \explain{$\mu \in \Lambda$}
\end{align}
The second term is a constant and is unrelated to $\mu$.
Solving the optimization problem \eqref{eq:math-optimization} is,
therefore, equivalent to solving
\begin{align}
\text{min}_{\mu \in \Lambda} \quad &
\sum_{a \in \mathcal{A}_+} \frac{\pi^2(a)q^2(a) }{\mu(a)} \label{eq:math-optimization-2}.
\end{align}
\textbf{Case 1: $\abs{\mathcal{A}_+} = 0$} \\
In this case,
the variance is always $0$ so any $\mu \in \Lambda$ is optimal.
In particular, $\mu^*(a) = \frac{1}{|\mathcal{A}|}$ is optimal. \\
\textbf{Case 2: $\abs{\mathcal{A}_+} > 0$} \\
The definition of $\Lambda$ in~\eqref{eq stats search space} can be equivalently expressed, using contraposition, as
\begin{align}
\Lambda = \qty{\mu \in \Delta(\mathcal{A}) \mid \forall a, a \in \mathcal{A}_+ \implies \mu(a) > 0}.
\end{align}
The optimization problem~\eqref{eq:math-optimization-2} can then be equivalently written as
\begin{align}
\text{min}_{\mu \in \Delta(\mathcal{A})} \quad &
\sum_{a \in \mathcal{A}_+} \frac{\pi^2(a)q^2(a) }{\mu(a)} \label{eq:math-optimization-3} \\
\text{s.t.} \quad &
\mu(a) > 0 \quad \forall a \in \mathcal{A}_+.
\end{align}
If for some $\mu$ we have
$\sum_{a\in \mathcal{A}_+} \mu(a) < 1$,
then there must exist some $a_0 \notin \mathcal{A}_+$ such that $\mu(a_0) > 0$.
Since $a_0$ does not contribute to the summation in the objective function of~\eqref{eq:math-optimization-3},
we can move the probability mass on $a_0$ to some other $a_1 \in \mathcal{A}_+$ to increase $\mu(a_1)$ to further decrease the objective.
In other words,
any optimal solution $\mu$ to~\eqref{eq:math-optimization-3} must put all its mass on $\mathcal{A}_+$.
This motivates the following problem
\begin{align}
\text{min}_{z \in \Delta(\mathcal{A}_+)} \quad &
\sum_{a \in \mathcal{A}_+} \frac{\pi^2(a)q^2(a)}{z(a)} \label{eq:math-optimization-4} \\
\text{s.t.} \quad &
z(a) > 0 \quad \forall a \in \mathcal{A}_+.
\end{align}
In particular, if $z_*$ is an optimal solution to~\eqref{eq:math-optimization-4},
then an optimal solution to~\eqref{eq:math-optimization-3} can be constructed as
\begin{align}
\label{eq opt mu construnction}
\mu_*(a) = \begin{cases}
z_*(a) & a \in \mathcal{A}_+ \\
0 & \text{otherwise}.
\end{cases}
\end{align}
Let $\mathbb{R}_{++} \doteq (0, +\infty)$.
According to the Cauchy-Schwarz inequality,
for any $z \in \mathbb{R}_{++}^\abs{\mathcal{A}_+}$,
we have
\begin{align}
\left(\sum_{a \in \mathcal{A}_+} \frac{\pi^2(a)q^2(a)}{z(a)}\right)\left(\sum_{a \in \mathcal{A}_+} z(a)\right) \geq \left(\sum_{a\in\mathcal{A}_+} \frac{\pi(a)\abs{q(a)}}{\sqrt{z(a)}} \sqrt{z(a)}\right)^2 = \left(\sum_{a\in\mathcal{A}_+} \pi(a)\abs{q(a)}\right)^2.
\end{align}
It can be easily verified that the equality holds for
\begin{align}
z^*(a) \doteq \frac{\pi(a)\abs{q(a)}}{\sum_{b} \pi(b)\abs{q(b)}} > 0.
\end{align}
Since $\sum_{a\in\mathcal{A}_+} z^*(a) = 1$,
we conclude that $z^*$ is an optimal solution to~\eqref{eq:math-optimization-4}.
An optimal solution $\mu_*$ to~\eqref{eq:math-optimization} can then be constructed according to~\eqref{eq opt mu construnction}.
Making use of the fact that $\pi(a)\abs{q(a)} = 0$ for $a \notin \mathcal{A}_+$,
this $\mu_*$ can be equivalently expressed as
\begin{align}
\mu_*(a) = \frac{\pi(a)\abs{q(a)}}{\sum_{b \in \mathcal{A}} \pi(b)\abs{q(b)}},
\end{align}
which completes the proof.
\end{proof}
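The Cauchy-Schwarz argument above can also be checked numerically. This sketch (with made-up $\pi$ and $q$) compares the variance under $\mu_*(a) \propto \pi(a)\abs{q(a)}$ against randomly drawn behavior policies and against the on-policy choice $\mu = \pi$:

```python
# Sanity check: mu*(a) ∝ pi(a)|q(a)| minimizes the variance of rho(A)q(A).
import numpy as np

rng = np.random.default_rng(0)
pi = np.array([0.5, 0.3, 0.2])        # target policy (made-up)
q = np.array([1.0, -2.0, 0.5])        # action values (made-up)

def is_variance(mu):
    """Variance of rho(A)q(A) under a full-support behavior policy mu."""
    rho = pi / mu
    mean = np.sum(mu * rho * q)           # equals E_pi[q] by unbiasedness
    second = np.sum(mu * (rho * q) ** 2)  # E_mu[(rho q)^2]
    return second - mean ** 2

mu_star = pi * np.abs(q)
mu_star /= mu_star.sum()
v_star = is_variance(mu_star)

# mu* should beat randomly drawn behavior policies and the on-policy choice.
for _ in range(100):
    mu = rng.dirichlet(np.ones(3))
    assert v_star <= is_variance(mu) + 1e-12
assert v_star <= is_variance(pi) + 1e-12
```

By the Cauchy-Schwarz bound, the minimum equals $\left(\sum_a \pi(a)\abs{q(a)}\right)^2 - \mathbb{E}_{A\sim\pi}^2[q(A)]$, which the test below also verifies.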
\subsection{Proof of Lemma \ref{lem:math-variance-0}}\label{append:math-variance-0}
\begin{proof}
We start with showing $\Lambda = \Lambda_+$.
Lemma~\ref{lem stats unbiasedness} ensures that $\mu \in \Lambda \implies \mu \in \Lambda_+$.
We now show that $\mu \in \Lambda_+ \implies \mu \in \Lambda$.
For any $\mu \in \Lambda_+$,
we have
\begin{align}
\sum_{a \in \qty{a | \mu(a) > 0}} \mu(a) \frac{\pi(a)}{\mu(a)} q(a) = \sum_a \pi(a) q(a).
\end{align}
This indicates that
\begin{align}
\sum_{a \in \qty{a | \mu(a) = 0}} \pi(a) q(a) = 0.
\end{align}
Since $\pi(a) \geq 0$ and all $q(a)$ have the same sign,
we must have
\begin{align}
\pi(a)q(a) = 0, \, \forall a \in \qty{a | \mu(a) = 0}.
\end{align}
This is exactly $\mu(a) = 0 \implies \pi(a)q(a) = 0$,
yielding $\mu \in \Lambda$.
This completes the proof of $\Lambda_+ = \Lambda$.
We now show the zero variance.
When $\forall a \in \mathcal{A}, q(a) \geq 0$, if $\exists a_0, \pi(a_0) q(a_0) \neq 0$, we have $\forall a \in \mathcal{A}$
\begin{align}
\mu^*(a) = \frac{\pi(a) \abs{q(a)}}{c}
\end{align}
and $c>0$ is a normalizing constant. Plugging $\mu^*$ into $\rho(A)q(A)$, we get, for all $a$ with $\mu^*(a) > 0$,
\begin{align}
\rho(a)q(a) = \frac{\pi(a)}{\mu^*(a)}q(a) = \frac{\pi(a)}{\frac{\pi(a) \abs{q(a)}}{c} }q(a) = c.
\end{align}
This means that, under the optimal distribution $\mu^*$, the random variable $\rho(\cdot)q(\cdot)$ is constant on the support of $\mu^*$.
Thus,
\begin{align}
\mathbb{V}_{A\sim \mu^*}(\rho(A)q(A)) = 0.
\end{align}
When $\forall a \in \mathcal{A}, q(a) \geq 0$, if $\forall a_0, \pi(a_0) q(a_0) = 0$, we have $\forall a \in \mathcal{A}$
\begin{align}
\mu^*(a) = \frac{1}{|\mathcal{A}|}.
\end{align}
Plugging $\mu^*$ into $\rho(A)q(A)$, we get $\forall a \in \mathcal{A}$
\begin{align}
\rho(a)q(a) = \frac{\pi(a)}{\mu^*(a)}q(a) = \frac{\pi(a)q(a)}{ \frac{1}{|\mathcal{A}|}} = 0.
\end{align}
This shows $\rho(A)q(A)$ is also a constant.
Thus,
\begin{align}
\mathbb{V}_{A\sim \mu^*}(\rho(A)q(A)) = 0.
\end{align}
The proof is similar for $\forall a \in \mathcal{A}, q(a) \leq 0$ and is thus omitted.
\end{proof}
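The zero-variance property can be observed directly on a small example (a made-up $\pi$ and a non-negative $q$; not from the paper): under $\mu^*(a) \propto \pi(a)\abs{q(a)}$, the random variable $\rho(A)q(A)$ is constant, so its variance vanishes.

```python
# Sanity check: when all q(a) share a sign, rho(A)q(A) is constant under
# mu*(a) ∝ pi(a)|q(a)|, giving a zero-variance estimator.
import numpy as np

pi = np.array([0.5, 0.3, 0.2])
q = np.array([1.0, 2.0, 0.5])          # all non-negative

mu_star = pi * np.abs(q)
c = mu_star.sum()                      # normalizing constant
mu_star /= c

rho_q = (pi / mu_star) * q             # equals c for every action
assert np.allclose(rho_q, c)
variance = np.sum(mu_star * rho_q**2) - np.sum(mu_star * rho_q)**2
assert np.isclose(variance, 0.0)
```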
\subsection{Proof of Theorem~\ref{lem rl pdis unbaised}}
\label{sec lem rl pdis unbaised}
\begin{proof}
We proceed via induction.
For $t = T-1$,
we have
\begin{align}
\mathbb{E}\left[G^{\text{PDIS}}(\tau^{\mu_{t:T-1}}_{t:T-1}) \mid S_t\right] =& \mathbb{E}\left[\rho_t R_{t+1} \mid S_t\right] = \mathbb{E}\left[\rho_t q_{\pi, t}(S_t, A_t) \mid S_t \right] \\
=& \mathbb{E}_{A_t \sim \pi_t(\cdot | S_t)}\left[q_{\pi, t}(S_t, A_t) | S_t\right] \explain{Lemma~\ref{lem stats unbiasedness}} \\
=& v_{\pi, t}(S_t).
\end{align}
For $t \in [T-2]$,
we have
\begin{align}
&\mathbb{E}\left[G^{\text{PDIS}}(\tau^{\mu_{t:T-1}}_{t:T-1}) \mid S_t\right] \\
=& \mathbb{E}\left[\rho_t R_{t+1} + \rho_tG^{\text{PDIS}}(\tau^{\mu_{t+1:T-1}}_{t+1:T-1}) \mid S_t\right] \\
=& \mathbb{E}\left[\rho_t R_{t+1} \mid S_t\right] + \mathbb{E}\left[\rho_tG^{\text{PDIS}}(\tau^{\mu_{t+1:T-1}}_{t+1:T-1}) \mid S_t\right] \\
\explain{Law of total expectation}
=& \mathbb{E}\left[\rho_t R_{t+1} \mid S_t\right] + \mathbb{E}_{A_t \sim \mu_t(\cdot | S_t), S_{t+1} \sim p(\cdot | S_t, A_t)}\left[ \mathbb{E}\left[\rho_tG^{\text{PDIS}}(\tau^{\mu_{t+1:T-1}}_{t+1:T-1}) \mid S_t, A_t, S_{t+1}\right] \right] \\
\explain{Conditional independence and Markov property}
=& \mathbb{E}\left[\rho_t R_{t+1} \mid S_t\right] + \mathbb{E}_{A_t \sim \mu_t(\cdot | S_t), S_{t+1} \sim p(\cdot | S_t, A_t)}\left[ \rho_t \mathbb{E}\left[G^{\text{PDIS}}(\tau^{\mu_{t+1:T-1}}_{t+1:T-1}) \mid S_{t+1}\right] |S_t \right] \\
=& \mathbb{E}\left[\rho_t R_{t+1} \mid S_t\right] + \mathbb{E}_{A_t \sim \mu_t(\cdot | S_t), S_{t+1} \sim p(\cdot | S_t, A_t)}\left[ \rho_t v_{\pi, t+1}(S_{t+1}) | S_t \right]
\explain{Inductive hypothesis} \\
=& \mathbb{E}_{A_t \sim \mu_t(\cdot | S_t)}\left[\rho_t q_{\pi, t}(S_t, A_t) | S_t\right] \explain{Definition of $q_{\pi, t}$} \\
=& \mathbb{E}_{A_t \sim \pi_t(\cdot | S_t)}\left[q_{\pi, t}(S_t, A_t) | S_t\right] \explain{Lemma~\ref{lem stats unbiasedness}} \\
=& v_{\pi, t}(S_t),
\end{align}
which completes the proof.
\end{proof}
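The unbiasedness of the PDIS estimator can also be verified by exact enumeration on a randomly generated toy MDP (the two-state, two-action, horizon-two setup below is an illustration with assumed quantities, not from the paper): the expectation of $G^{\text{PDIS}}$ under an arbitrary full-support behavior policy matches $v_{\pi,0}(s_0)$ computed by backward induction.

```python
# Exact check of PDIS unbiasedness on a tiny MDP with deterministic reward r(s, a).
import itertools
import numpy as np

nS, nA, T = 2, 2, 2
rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(nS), size=(nS, nA))   # p[s, a] -> dist over next states
r = rng.normal(size=(nS, nA))                   # deterministic reward r(s, a)
pi = rng.dirichlet(np.ones(nA), size=(T, nS))   # target policy, one per step
mu = rng.dirichlet(np.ones(nA), size=(T, nS))   # behavior policy (full support)

s0 = 0
# Ground truth v_pi(s0) by backward induction.
v = np.zeros(nS)
for t in reversed(range(T)):
    q = r + p @ v                               # q[s, a] = r(s, a) + sum_s' p v(s')
    v = np.sum(pi[t] * q, axis=1)
v_pi = v[s0]

# E_mu[G^PDIS] by enumerating all (a_0, s_1, a_1) trajectories.
est = 0.0
for a0, s1, a1 in itertools.product(range(nA), range(nS), range(nA)):
    prob = mu[0, s0, a0] * p[s0, a0, s1] * mu[1, s1, a1]
    rho0 = pi[0, s0, a0] / mu[0, s0, a0]
    rho1 = pi[1, s1, a1] / mu[1, s1, a1]
    g = rho0 * (r[s0, a0] + rho1 * r[s1, a1])   # G^PDIS = rho_0 R_1 + rho_0 rho_1 R_2
    est += prob * g
assert np.isclose(est, v_pi)
```

Enumerating trajectories avoids Monte Carlo noise, so the equality holds to floating-point precision rather than approximately.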
\subsection{Proof of Lemma \ref{lem:recursive-var}}\label{append:recursive-var}
\begin{proof}
When $t\in [T-2]$, we have
\begin{align}
\label{eq tmp4}
&\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{t:T-1}}_{t:T-1})\mid S_t\right) \\
=& \mathbb{E}_{A_t}\left[ \mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{t:T-1}}_{t:T-1})\mid S_t, A_t\right)\mid S_t\right] + \mathbb{V}_{A_t}\left(\mathbb{E}\left[G^{\text{PDIS}}(\tau^{\mu_{t:T-1}}_{t:T-1}) \mid S_t, A_t\right]\mid S_t\right)
\explain{Law of total variance \eqref{eq:total-variance}} \\
=& \mathbb{E}_{A_t}\left[ \rho_t^2 \mathbb{V}\left(r(S_t,A_t) + G^{\text{PDIS}}(\tau^{\mu_{t+1:T-1}}_{t+1:T-1}) \mid S_t, A_t\right)\mid S_t\right] \\
&+ \mathbb{V}_{A_t}\left(\rho_t \mathbb{E}\left[r(S_t,A_t) + G^{\text{PDIS}}(\tau^{\mu_{t+1:T-1}}_{t+1:T-1}) \mid S_t, A_t\right]\mid S_t\right)
\explain{Using \eqref{eq:PDIS-recursive}}
\\
=& \mathbb{E}_{A_t}\left[ \rho_t^2 \mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{t+1:T-1}}_{t+1:T-1}) \mid S_t, A_t\right)\mid S_t\right] + \mathbb{V}_{A_t}\left(\rho_t q_{\pi, t}(S_t, A_t)\mid S_t\right) \explain{Deterministic reward $r$}.
\end{align}
Further decomposing the first term, we have
\begin{align}
\label{eq tmp3}
&\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{t+1:T-1}}_{t+1:T-1}) \mid S_t, A_t\right) \\
=& \mathbb{E}_{S_{t+1}}\left[\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{t+1:T-1}}_{t+1:T-1}) \mid S_t, A_t, S_{t+1}\right) \mid S_t, A_t\right] \\
&+ \mathbb{V}_{S_{t+1}}\left(\mathbb{E}\left[G^{\text{PDIS}}(\tau^{\mu_{t+1:T-1}}_{t+1:T-1}) \mid S_t, A_t, S_{t+1}\right]\mid S_t, A_t\right)
\explain{Law of total variance \eqref{eq:total-variance}}
\\
=& \mathbb{E}_{S_{t+1}}\left[\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{t+1:T-1}}_{t+1:T-1}) \mid S_{t+1}\right) \mid S_t, A_t\right] + \mathbb{V}_{S_{t+1}}\left(\mathbb{E}\left[G^{\text{PDIS}}(\tau^{\mu_{t+1:T-1}}_{t+1:T-1}) \mid S_{t+1}\right]\mid S_t, A_t\right) \explain{Markov property} \\
=& \mathbb{E}_{S_{t+1}}\left[\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{t+1:T-1}}_{t+1:T-1}) \mid S_{t+1}\right) \mid S_t, A_t\right] + \mathbb{V}_{S_{t+1}}\left(v_{\pi, t+1}(S_{t+1})\mid S_t, A_t\right). \explain{Theorem~\ref{lem rl pdis unbaised}}
\end{align}
With $\nu_{\pi, t}$ defined in~\eqref{def:nu},
plugging~\eqref{eq tmp3} back to~\eqref{eq tmp4} yields
\begin{align}
&\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{t:T-1}}_{t:T-1})\mid S_t\right) \\
=&\mathbb{E}_{A_t}\left[\rho_t^2 \left(\mathbb{E}_{S_{t+1}}\left[\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{t+1:T-1}}_{t+1:T-1}) \mid S_{t+1}\right) \mid S_t, A_t\right] + \nu_t(S_t, A_t)\right) \mid S_t\right] \\
&+ \mathbb{V}_{A_t}\left(\rho_t q_{\pi, t}(S_t, A_t)\mid S_t\right) \\
=&\mathbb{E}_{A_t}\left[\rho_t^2 \left(\mathbb{E}_{S_{t+1}}\left[\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{t+1:T-1}}_{t+1:T-1}) \mid S_{t+1}\right) \mid S_t, A_t\right] + \nu_t(S_t, A_t)\right) \mid S_t\right] \\
&+ \mathbb{E}_{A_t}\left[\rho_t^2 q_{\pi, t}^2(S_t, A_t)\mid S_t\right] - \left(\mathbb{E}_{A_t}\left[\rho_t q_{\pi, t}(S_t, A_t) \mid S_t\right]\right)^2 \\
=&\mathbb{E}_{A_t}\left[\rho_t^2 \left(\mathbb{E}_{S_{t+1}}\left[\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{t+1:T-1}}_{t+1:T-1}) \mid S_{t+1}\right) \mid S_t, A_t\right] + \nu_t(S_t, A_t)\right) \mid S_t\right] \\
&+ \mathbb{E}_{A_t}\left[\rho_t^2 q_{\pi, t}^2(S_t, A_t)\mid S_t\right] - v_{\pi, t}^2(S_t). \explain{Lemma~\ref{lem stats unbiasedness}}
\end{align}
When $t = T-1$, we have
\begin{align}
\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{t:T-1}}_{t:T-1})\mid S_t\right) =& \mathbb{V}\left(\rho_t r(S_t, A_t)\mid S_t\right) \\
=& \mathbb{V}\left(\rho_t q_{\pi, t}(S_t, A_t)\mid S_t\right) \\
=& \mathbb{E}_{A_t}\left[\rho_t^2 q_{\pi, t}^2(S_t, A_t) \mid S_t\right] - v_{\pi, t}^2(S_t),
\end{align}
which completes the proof.
\end{proof}
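The law of total variance used twice in this proof, $\mathbb{V}(Y) = \mathbb{E}[\mathbb{V}(Y\mid X)] + \mathbb{V}(\mathbb{E}[Y\mid X])$, can be sanity-checked exactly on a small discrete mixture (the numbers below are arbitrary):

```python
# Exact check of the law of total variance on a two-component Gaussian mixture:
# X in {0, 1} with probabilities px; Y | X = x ~ Normal(mean[x], std[x]).
import numpy as np

px = np.array([0.3, 0.7])
mean = np.array([1.0, -2.0])
std = np.array([0.5, 1.5])

total_var = px @ (std**2 + mean**2) - (px @ mean) ** 2   # Var(Y) from mixture moments
e_cond_var = px @ std**2                                 # E[Var(Y | X)]
var_cond_mean = px @ mean**2 - (px @ mean) ** 2          # Var(E[Y | X])
assert np.isclose(total_var, e_cond_var + var_cond_mean)
```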
\subsection{Proof of Theorem \ref{lem:rl-optimal}}\label{append:rl-optima}
\begin{proof}
We proceed via induction.
When $t = T-1$,
we have
\begin{align}
&\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{T-1:T-1}}_{T-1:T-1}) \mid S_{T-1} = s\right) \\
=& \mathbb{V}_{A_{T-1}}\left(\rho_{T-1} r(s, A_{T-1})\mid S_{T-1} =s \right) \\
=& \mathbb{V}_{A_{T-1}}\left(\rho_{T-1} q_{\pi, T-1}(s, A_{T-1})\mid S_{T-1} =s \right).
\end{align}
The definition of $\mu^*_{T-1}$ in~\eqref{eq mu star def1} and Lemma~\ref{lem:math-optimal} ensure that $\mu^*_{T-1}$ is an optimal solution to
\begin{align}
\min_{\mu_{T-1} \in \Lambda_{T-1}} \quad \mathbb{V}\left(G^{\text{PDIS}}\left(\tau^{\mu_{T-1}}_{T-1}\right) | S_{T-1} = s\right).
\end{align}
Now,
suppose for some $t \in [T-2]$,
$\mu^*_{t+1:T-1}$ is an optimal solution to
\begin{align}
\min_{\mu_{t+1} \in \Lambda_{t+1}, \dots, \mu_{T-1} \in \Lambda_{T-1}} \quad \mathbb{V}\left(G^{\text{PDIS}}\left(\tau^{\mu_{t+1:T-1}}_{t+1:T-1}\right)| S_{t+1}=s\right).
\end{align}
To complete induction,
we proceed to prove that $\mu^*_{t:T-1}$ is an optimal solution to
\begin{align}
\label{eq induction opt problm}
\min_{\mu_t \in \Lambda_t, \dots, \mu_{T-1} \in \Lambda_{T-1}} \quad \mathbb{V}\left(G^{\text{PDIS}}\left(\tau^{\mu_{t:T-1}}_{t:T-1}\right) | S_t = s\right).
\end{align}
In the rest of this proof,
we omit the domain $\Lambda_t, \dots, \Lambda_{T-1}$ to simplify notation.
For any $\mu_{t:T-1}$, we have
\begin{align}
&\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{t:T-1}}_{t:T-1})\mid S_{t}\right) \\
=& \mathbb{E}_{A_{t}}\left[\rho_{t}^2 \left(\mathbb{E}_{S_{t+1}}\left[\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu_{{t+1}:T-1}}_{{t+1}:T-1}) \mid S_{t+1}\right) \mid S_{t}, A_{t}\right] + \nu_{t}(S_{t}, A_{t}) + q_{\pi, t}^2(S_{t}, A_{t}) \right) \mid S_{t}\right] \\
& - v_{\pi, t}^2(S_{t}) \explain{By Lemma \ref{lem:recursive-var}} \\
\stackrel{(a)}{\geq} & \mathbb{E}_{A_{t}}\left[\rho_{t}^2 \left(\mathbb{E}_{S_{t+1}}\left[\min_{\mu'_{t+1:T-1}}\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu'_{{t+1}:T-1}}_{{t+1}:T-1}) \mid S_{t+1}\right) \mid S_{t}, A_{t}\right] + \nu_{t}(S_{t}, A_{t}) + q_{\pi, t}^2(S_{t}, A_{t}) \right) \mid S_{t}\right] \\
& - v_{\pi, t}^2(S_{t}) \explain{Monotonically non-increasing in $\mathbb{V}(\cdot)$} \\
=& \mathbb{E}_{A_{t}}\left[\rho_{t}^2 \left(\mathbb{E}_{S_{t+1}}\left[\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\mu^*_{{t+1}:T-1}}_{{t+1}:T-1}) \mid S_{t+1}\right) \mid S_{t}, A_{t}\right] + \nu_{t}(S_{t}, A_{t}) + q_{\pi, t}^2(S_{t}, A_{t}) \right) \mid S_{t}\right] \\
& - v_{\pi, t}^2(S_{t}) \explain{Inductive hypothesis} \\
=& \mathbb{E}_{A_{t}}\left[\rho_{t}^2 u_{\pi,t}(S_t,A_t)\mid S_{t}\right] - v_{\pi, t}^2(S_{t}) \explain{By \eqref{eq u def}} \\
=& \mathbb{V}_{A_{t}}\left(\rho_{t} \sqrt{u_{\pi,t}(S_t,A_t)}\mid S_{t}\right) + \mathbb{E}_{A_t}\left[\rho_t \sqrt{u_{\pi, t}(S_t, A_t)} | S_t\right]^2 - v_{\pi, t}^2(S_{t}) \explain{Definition of variance} \\
=& \mathbb{V}_{A_t}\left(\rho_{t} \sqrt{u_{\pi,t}(S_t,A_t)}\mid S_{t}\right) + \mathbb{E}_{A_t\sim\pi_t(\cdot | S_t)}\left[\sqrt{u_{\pi, t}(S_t, A_t)} | S_t\right]^2 - v_{\pi, t}^2(S_{t}) \explain{Lemma~\ref{lem stats unbiasedness} and $\mu_t \in \Lambda_t$} \\
\stackrel{(b)}{\geq}& \mathbb{E}_{A_t\sim\pi_t(\cdot | S_t)}\left[\sqrt{u_{\pi, t}(S_t, A_t)} | S_t\right]^2 - v_{\pi, t}^2(S_{t}) \explain{Non-negativity of variance}.
\end{align}
According to the inductive hypothesis,
the equality in $(a)$ can be achieved when $\mu_{t+1:T-1} = \mu^*_{t+1:T-1}$.
According to the construction of $\mu^*_t$ in~\eqref{eq mu star def1} and Lemma~\ref{lem:math-variance-0},
the equality in $(b)$ can be achieved when $\mu_t = \mu^*_t$.
This suggests that $\mu^*_{t:T-1}$ achieves the lower bound and is thus an optimal solution to~\eqref{eq induction opt problm},
which completes the induction and thus completes the proof.
\end{proof}
\subsection{Proof of Lemma \ref{lem:u_variance_eq}}\label{append:u_variance_eq}
\begin{proof}
We proceed via induction.
When $t = T-1$, we have
\begin{align}
&\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{t:T-1}}_{t:T-1})\mid S_t\right) \\
=& \mathbb{V}_{A_t}\left(\rho_t r(S_t, A_t)\mid S_t\right) \\
=& \mathbb{V}_{A_t}\left(r(S_t, A_t)\mid S_t\right) \explain{By on-policy} \\
=& \mathbb{V}_{A_t}\left(q_{\pi,t}(S_t, A_t)\mid S_t\right) \\
=& \mathbb{E}_{A_t}\left[q_{\pi, t}^2(S_t, A_t) \mid S_t\right] - v_{\pi, t}^2(S_t) \\
=& \sum_{a} \pi_t(a|S_t)\tilde q_{\pi, t}(S_t, a) \explain{By~\eqref{def:tilde-q} and $\nu_{\pi, T-1}(s, a) = 0$}.
\end{align}
For $t \in [T-2]$,
we have
\begin{align}
&\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{t:T-1}}_{t:T-1})\mid S_t\right) \\
=&\mathbb{E}_{A_t}\left[ \mathbb{E}_{S_{t+1}}\left[\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{t+1:T-1}}_{t+1:T-1}) \mid S_{t+1}\right) \mid S_t, A_t\right] + q_{\pi, t}^2(S_t, A_t) + \nu_{\pi,t}(S_t, A_t) \mid S_t\right]
- v_{\pi, t}^2(S_t) \explain{Lemma~\ref{lem:recursive-var} and on-policy} \\
=&\sum_{a} \pi_t(a|S_t) \left( \sum_{s'} p(s'|S_t, a) \mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{t+1:T-1}}_{t+1:T-1}) \mid S_{t+1} = s' \right) + \tilde r(S_t, a) \right)\\
=&\sum_{a} \pi_t(a|S_t) \left( \sum_{s'} p(s'|S_t, a) \sum_{a'}\pi_{t+1}(a'|s')\tilde q_{\pi, t+1}(s', a') + \tilde r(S_t, a) \right) \explain{Inductive hypothesis}\\
=&\sum_{a} \pi_t(a|S_t) \tilde q_{\pi, t}(S_t, a), \explain{By~\eqref{def:tilde-q}}
\end{align}
which completes the proof.
\end{proof}
\subsection{Proof of Theorem \ref{lem:var_smaller_stronger}}\label{append:var_smaller_stronger}
\begin{proof}
We proceed via induction.
For $t = T-1$, we have
\begin{align}
&\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\hat \mu_{t:T-1}}_{t:T-1})\mid S_t\right) \\
=&\mathbb{E}_{A_t\sim \hat{\mu}_t}\left[\rho_t^2 q_{\pi, t}^2(S_t, A_t) \mid S_t\right] - v_{\pi, t}^2(S_t) \explain{Lemma \ref{lem:recursive-var}}\\
=&\mathbb{E}_{A_t\sim \hat{\mu}_t}\left[\rho_t^2 \hat q_{\pi, t}(S_t, A_t) \mid S_t\right] - v^2_{\pi, t}(S_t) \explain{Definition of $\hat q$ \eqref{eq def q hat}} \\
=& \mathbb{V}_{A_t\sim \hat{\mu}_t}\left(\rho_t \sqrt{\hat q_{\pi, t}(S_t, A_t)}|S_t\right) + \mathbb{E}_{A_t\sim \hat{\mu}_t}^2\left[\rho_t \sqrt{\hat q_{\pi, t}(S_t, A_t)}|S_t\right] - v^2_{\pi, t}(S_t) \explain{Definition of variance and non-negativity of $\hat q$} \\
=& \mathbb{V}_{A_t\sim \hat{\mu}_t}\left(\rho_t \sqrt{\hat q_{\pi, t}(S_t, A_t)}|S_t\right) + \left(\sum_a \pi_t(a|S_t) \sqrt{\hat q_{\pi, t}(S_t, a)}\right)^2 - v_{\pi, t}^2(S_t) \explain{Lemma~\ref{lem stats unbiasedness}} \\
=& \left(\sum_{a} \pi_t(a |S_t)\sqrt{\hat{q}_{\pi, t}(S_t, a)}\right)^2 - v^2_{\pi, t}(S_t) \explain{Definition of $\hat \mu$ \eqref{def hat mu} and Lemma \ref{lem:math-variance-0}} \\
=& \sum_{a} \pi_t(a|S_t)\hat{q}_{\pi, t}(S_t, a) + \left(\sum_{a} \pi_t(a |S_t)\sqrt{\hat{q}_{\pi, t}(S_t, a)}\right)^2 - \sum_{a}\pi_t(a|S_t)\hat{q}_{\pi, t}(S_t, a) - v_{\pi, t}^2(S_t) \\
=& \mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{t:T-1}}_{t:T-1})\mid S_t\right) + \left(\sum_{a} \pi_t(a |S_t)\sqrt{\hat{q}_{\pi, t}(S_t, a)}\right)^2 - \sum_{a}\pi_t(a|S_t)\hat{q}_{\pi, t}(S_t, a) \explain{By~\eqref{eq hat and tilde q} and Lemma \ref{lem:u_variance_eq}} \\
=& \mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{t:T-1}}_{t:T-1})\mid S_t\right) - \epsilon_t(S_t) \explain{Definition of $\epsilon$ \eqref{def:epsilon}}.
\end{align}
For $t\in [T-2]$,
we have
\begin{align}
&\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\hat \mu_{t:T-1}}_{t:T-1})\mid S_t\right) \\
=&\mathbb{E}_{A_t\sim \hat{\mu}_t}\left[\rho_t^2 \left(\mathbb{E}_{S_{t+1}}\left[\mathbb{V}\left(G^{\text{PDIS}}(\tau^{\hat {\mu}_{t+1:T-1}}_{t+1:T-1}) \mid S_{t+1}\right) \mid S_t, A_t\right] + \nu_{\pi,t}(S_t, A_t) + q_{\pi, t}^2(S_t, A_t)\right) \mid S_t\right] \\
&- v_{\pi, t}^2(S_t) \explain{Lemma \ref{lem:recursive-var}}\\
\leq&\mathbb{E}_{A_t\sim \hat{\mu}_t}\Big[\rho_t^2 \Big(\mathbb{E}_{S_{t+1}}\Big[ \sum_{a'} \pi_{t+1}(a'|S_{t+1})\tilde {q}_{\pi, t+1}(S_{t+1}, a') \mid S_t, A_t\Big] + \nu_{\pi,t}(S_t, A_t) \\
& + q_{\pi, t}^2(S_t, A_t)\Big) \mid S_t\Big] - v_{\pi, t}^2(S_t) - \mathbb{E}_{A_t \sim \hat \mu_t}\left[\rho_t^2\mathbb{E}_{S_{t+1}}\left[\epsilon_{t+1}(S_{t+1})| S_t, A_t\right]\right] \explain{Inductive hypothesis and Lemma~\ref{lem:u_variance_eq}} \\
=&\mathbb{E}_{A_t\sim \hat{\mu}_t}\left[\rho_t^2 \left(\tilde{q}_{\pi, t}(S_t, A_t) + v^2_{\pi, t}(S_t)\right) \mid S_t\right] - v^2_{\pi, t}(S_t) - \mathbb{E}_{A_t \sim \hat \mu_t}\left[\rho_t^2\mathbb{E}_{S_{t+1}}\left[\epsilon_{t+1}(S_{t+1})| S_t, A_t\right]\right] \explain{Definition of $\tilde q $ \eqref{def:tilde-q}} \\
=&\mathbb{E}_{A_t\sim \hat{\mu}_t}\left[\rho_t^2 \hat q_{\pi, t}(S_t, A_t) \mid S_t\right] - v^2_{\pi, t}(S_t) - \mathbb{E}_{A_t \sim \hat \mu_t}\left[\rho_t^2\mathbb{E}_{S_{t+1}}\left[\epsilon_{t+1}(S_{t+1})| S_t, A_t\right]\right] \explain{Definition of $\hat q$ \eqref{eq def q hat}} \\
=& \mathbb{V}_{A_t\sim \hat{\mu}_t}\left(\rho_t \sqrt{\hat q_{\pi, t}(S_t, A_t)}|S_t\right) + \mathbb{E}_{A_t\sim \hat{\mu}_t}^2\left[\rho_t \sqrt{\hat q_{\pi, t}(S_t, A_t)}|S_t\right] - v^2_{\pi, t}(S_t) \\
&- \mathbb{E}_{A_t \sim \hat \mu_t}\left[\rho_t^2\mathbb{E}_{S_{t+1}}\left[\epsilon_{t+1}(S_{t+1})| S_t, A_t\right]\right] \explain{Definition of variance and non-negativity of $\hat q$} \\
=& \mathbb{V}_{A_t\sim \hat{\mu}_t}\left(\rho_t \sqrt{\hat q_{\pi, t}(S_t, A_t)}|S_t\right) + \left(\sum_a \pi_t(a|S_t) \sqrt{\hat q_{\pi, t}(S_t, a)}\right)^2 - v_{\pi, t}^2(S_t) \\
&- \mathbb{E}_{A_t \sim \hat \mu_t}\left[\rho_t^2\mathbb{E}_{S_{t+1}}\left[\epsilon_{t+1}(S_{t+1})| S_t, A_t\right]\right] \explain{Lemma~\ref{lem stats unbiasedness}} \\
=& \left(\sum_{a} \pi_t(a |S_t)\sqrt{\hat{q}_{\pi, t}(S_t, a)}\right)^2 - v^2_{\pi, t}(S_t) - \mathbb{E}_{A_t \sim \hat \mu_t}\left[\rho_t^2\mathbb{E}_{S_{t+1}}\left[\epsilon_{t+1}(S_{t+1})| S_t, A_t\right]\right]\explain{Definition of $\hat \mu$ \eqref{def hat mu} and Lemma \ref{lem:math-variance-0}} \\
=& \sum_{a} \pi_t(a|S_t)\hat{q}_{\pi, t}(S_t, a) - v^2_{\pi, t}(S_t) + \left(\sum_{a} \pi_t(a |S_t)\sqrt{\hat{q}_{\pi, t}(S_t, a)}\right)^2 - \sum_{a} \pi_t(a|S_t)\hat{q}_{\pi, t}(S_t, a) \\
&- \mathbb{E}_{A_t \sim \hat \mu_t}\left[\rho_t^2\mathbb{E}_{S_{t+1}}\left[\epsilon_{t+1}(S_{t+1})| S_t, A_t\right]\right] \\
=& \mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{t:T-1}}_{t:T-1})\mid S_t\right) + \left(\sum_{a} \pi_t(a |S_t)\sqrt{\hat{q}_{\pi, t}(S_t, a)}\right)^2 - \sum_{a}\pi_t(a|S_t)\hat{q}_{\pi, t}(S_t, a) \\
&- \mathbb{E}_{A_t \sim \hat \mu_t}\left[\rho_t^2\mathbb{E}_{S_{t+1}}\left[\epsilon_{t+1}(S_{t+1})| S_t, A_t\right]\right]\explain{By~\eqref{eq hat and tilde q} and Lemma \ref{lem:u_variance_eq}} \\
=& \mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{t:T-1}}_{t:T-1})\mid S_t\right) + \left(\sum_{a} \pi_t(a |S_t)\sqrt{\hat{q}_{\pi, t}(S_t, a)}\right)^2 - \sum_{a}\pi_t(a|S_t)\hat{q}_{\pi, t}(S_t, a) \\
&- \mathbb{V}_{A_t \sim \hat \mu_t}\left(\rho_t \sqrt{\mathbb{E}_{S_{t+1}}\left[\epsilon_{t+1}(S_{t+1})|S_t, A_t\right]}\right) - \left(\sum_a \pi(A_t|S_t) \sqrt{\mathbb{E}_{S_{t+1}}\left[\epsilon_{t+1}(S_{t+1})|S_t, A_t\right]} \right)^2 \explain{Definition of variance and Lemma~\ref{lem stats unbiasedness}} \\
\leq& \mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{t:T-1}}_{t:T-1})\mid S_t\right) + \left(\sum_{a} \pi_t(a |S_t)\sqrt{\hat{q}_{\pi, t}(S_t, a)}\right)^2 - \sum_{a}\pi_t(a|S_t)\hat{q}_{\pi, t}(S_t, a) \\
& - \min_a \mathbb{E}_{S_{t+1}}\left[\epsilon_{t+1}(S_{t+1})|S_t, A_t\right] \explain{Non-negativity of variance, property of $\min$} \\
=& \mathbb{V}\left(G^{\text{PDIS}}(\tau^{\pi_{t:T-1}}_{t:T-1})\mid S_t\right) - \epsilon_t(S_t) \explain{Definition of $\epsilon$ \eqref{def:epsilon}}.
\end{align}
\end{proof}
\subsection{Proof of Lemma \ref{lem:UCB-regret}}\label{append:UCB-regret}
\begin{proof}
By the definition of the variance regret, we have
\begin{align*}
&\text{Regret}(K) \\
=& \sum_{i =1}^{K} \mathbb{V}(G^{\text{PDIS}}(\tau^{ {b^{(i)}}_{0:T-1}}_{0:T-1}) ) - K \cdot \min \qty{\mathbb{V}(G^{\text{PDIS}}(\tau^{ \hat{\mu}_{0:T-1}}_{0:T-1}) ),\mathbb{V}(G^{\text{PDIS}}(\tau^{ \pi_{0:T-1}}_{0:T-1}) ) } \\
= & \sum_{i =1}^{K} \mathbb{E}\left[(G^{\text{PDIS}}(\tau^{{b^{(i)}}_{0:T-1}}_{0:T-1}) )^2\right] - K \cdot \min \qty{\mathbb{E}\left[(G^{\text{PDIS}}(\tau^{\hat \mu_{0:T-1}}_{0:T-1}) )^2\right],\mathbb{E}\left[(G^{\text{PDIS}}(\tau^{\pi_{0:T-1}}_{0:T-1}) )^2\right] } \explain{By \eqref{eq:var-equivalent-1}, \eqref{eq:var-equivalent-2}, and unbiasedness} \\
=&\max \qty{\mathbb{E}\left[-(G^{\text{PDIS}}(\tau^{\hat \mu_{0:T-1}}_{0:T-1}) )^2\right] ,\mathbb{E}\left[-(G^{\text{PDIS}}(\tau^{\pi_{0:T-1}}_{0:T-1}) )^2\right] } - \sum_{i =1}^{K} \mathbb{E}\left[-(G^{\text{PDIS}}(\tau^{{b^{(i)}}_{0:T-1}}_{0:T-1}) )^2\right] .
\end{align*}
The last equation gives a standard regret decomposition with $-(G^{\text{PDIS}}(\tau^{{b^{(i)}}_{0:T-1}}_{0:T-1}) )^2$ as the collected arm rewards. These are exactly the arm rewards used in Algorithm~\hyperlink{Algorithm2}{2}.
The UCB regret bound theorem
(see, e.g., \citet{Agrawal2018UCBnote}) can be restated as
\begin{theorem}\label{thm:UCB-bound}
When rewards are finite, with $N$ arms,
the expected total regret achieved by the UCB algorithm in round $K$ is bounded by
$\mathcal{O} ( \sqrt{NK \ln K })$.
\end{theorem}
Thus, to bound the regret, it suffices to show that our arm rewards are finite.
Because the reward emitted by the MDP is finite, we have $R_t \in [-r_b, r_b]$ with $r_b > 0$. Let $\mathcal{A}_t^+(s) \doteq \qty{a \mid \hat{\mu}_t(a|s)>0 }$. Define
$$\varrho \doteq \max_{t, s, a \in \mathcal{A}_t^+(s)} \frac{\pi_t(a|s)}{\hat{\mu}_t(a|s)} $$
as the largest importance sampling ratio.
Define
\begin{align}
C_{ub} \doteq (\sum_{t=1}^{T} r_b \varrho^t )^2.
\end{align}
We have $C_{ub}$ as an upper bound on $G^{\text{PDIS}}(\tau^{\hat \mu_{t:T-1}}_{t:T-1}) ^2$ because $\forall \tau^{\hat \mu_{t:T-1}}_{t:T-1}$,
\begin{align*}
C_{ub} = (\sum_{t=1}^{T} r_b \varrho^t )^2 \geq (\sum_{t=1}^{T} |R_t| \varrho^t )^2 \geq G^{\text{PDIS}}(\tau^{\hat \mu_{t:T-1}}_{t:T-1}) ^2 .
\end{align*}
Because $\varrho \geq 1$, we also have $\forall \tau^{\pi_{t:T-1}}_{t:T-1}, C_{ub} \geq (\sum_{t=1}^{T} r_b )^2 \geq G^{\text{PDIS}}(\tau^{\pi_{t:T-1}}_{t:T-1}) ^2$. Since $C_{ub}$ is a constant, $G^{\text{PDIS}}(\tau^{\hat{\mu}_{t:T-1}}_{t:T-1}) ^2$ and $G^{\text{PDIS}}(\tau^{\pi_{t:T-1}}_{t:T-1}) ^2$ are bounded for all trajectories.
Since their additive inverses are also bounded, the arm rewards are bounded.
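The bound $G^{\text{PDIS}}(\tau)^2 \leq C_{ub}$ can also be checked numerically. The sketch below draws hypothetical per-step ratio and reward sequences satisfying $|R_t|\leq r_b$ and $\rho \leq \varrho$, and confirms the inequality (the constants are illustrative only):

```python
import random

def pdis_return(ratios, rewards):
    """Per-decision IS return G = sum_t (prod_{s<=t} rho_s) R_t."""
    g, w = 0.0, 1.0
    for rho, r in zip(ratios, rewards):
        w *= rho
        g += w * r
    return g

def c_ub(r_b, varrho, T):
    """The constant C_ub = (sum_{t=1}^T r_b * varrho^t)^2."""
    return sum(r_b * varrho ** t for t in range(1, T + 1)) ** 2

rng = random.Random(0)
r_b, varrho, T = 1.0, 2.0, 6
for _ in range(1000):
    ratios = [rng.uniform(0.0, varrho) for _ in range(T)]
    rewards = [rng.uniform(-r_b, r_b) for _ in range(T)]
    assert pdis_return(ratios, rewards) ** 2 <= c_ub(r_b, varrho, T)
```

The inequality holds deterministically here: each cumulative ratio product is at most $\varrho^t$, so $|G| \leq \sum_{t=1}^T r_b \varrho^t$.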
Because we only have two arms, Theorem \ref{thm:UCB-bound} with $N=2$ bounds the expected total regret of Algorithm~\hyperlink{Algorithm2}{2} by
$\mathcal{O} ( \sqrt{K \ln K })$.
\end{proof}
\section{Introduction}
In this note we consider the convergence properties of a new
stochastic simulation technique, the equi-energy sampler
introduced by Kou et al.~(2006). This is a method designed to draw samples
from a probability measure $\pi\in\mathscr{P}(E)$ (where
$\mathscr{P}(E)$ denotes the class of probability measures) on a
measurable space $(E,\mathscr{E})$, where $E$ may be a high-dimensional
space and the density is known pointwise up to a
potentially unknown normalizing constant. In particular, the algorithm
generates a non-Markovian stochastic process $\{X_n\}_{n\geq 0}$
whose stationary distribution is ultimately $\pi$; this algorithm is described
fully in Section 2.
In the paper of Kou et al.~(2006),
an attempt to analyze the algorithm is made (in Theorem
2).
However, it was noticed in the discussion by Atchad\'e \& Liu
(2006) that this result is incomplete. We note the points that
were stated by Atchad\'e \& Liu and further expand upon their
point; see Section 3. An important remark is that Atchad\'e \& Liu attempt to
provide an alternative convergence result, via a Strong
Law of Large Numbers (SLLN) for bounded measurable functions.
Although this proof is correct, the authors study a
stochastic process which does not correspond to the algorithm; this
problem is outlined in Section 3.
The objective of this note is to provide some convergence proofs
for the EE sampler in a simple scenario (one feeding chain). We also note the difficulties
associated with the analysis of this type of algorithm and present the main methods that can be used to prove the SLLN.
To avoid unnecessary technicalities and to
focus on the `essence' of the proof, strong
assumptions are made, including the uniform ergodicity of some transition kernels.
Our proof strategy is via the Poisson equation (e.g.~Glynn \&
Meyn (1996)) and the techniques developed for Non-Linear MCMC
(Andrieu et al.~2007). That is,
the EE sampler is a non-linear MCMC algorithm and may be analyzed in a similar
manner. Our results can be found in Section 4.
\section{Notation and Algorithm}
We now outline the notation that is adopted throughout the paper as well as the algorithm that is analyzed.
\subsection{Notation}
Define a measurable space $(E,\mathscr{E})$, with $\pi\in\mathscr{P}(E)$
(recall $\mathscr{P}(E)$ denotes the class of probability measures on $(E,\mathscr{E})$)
a target probability measure of interest.
For a stochastic process $\{X_n\}_{n\geq 0}$ on $(E^{\mathbb{N}},\mathscr{E}^{\otimes\mathbb{N}})$,
$\mathscr{G}_n=\sigma(X_0,\dots,X_n)$
is the natural filtration. $\mathbb{P}_{\mu}$ is taken as a probability law of a stochastic
process with initial distribution $\mu$ and $\mathbb{E}_{\mu}$ the associated expectation.
If $\mu=\delta_x$ (with $\delta$ the Dirac measure)
$\mathbb{P}_x$ (resp.~$\mathbb{E}_{x}$) is adopted instead of $\mathbb{P}_{\delta_x}$ (resp.~$\mathbb{E}_{\delta_x}$).
We use $X_n\stackrel{a.s}{\longrightarrow}_{\mathbb{P}}X$ to denote almost sure
convergence of $X_n$ to $X$. The equi-energy sampler generates
a stochastic process on $(\Omega,\mathscr{F})$, which is defined in the next Section.
Let $\|\eta-\mu\|_{\textrm{tv}}:=\sup_{A\in\mathscr{E}}|\eta(A)-\mu(A)|$
denote the total variation distance between $\eta,
\mu\in\mathscr{P}(E)$. Throughout,
$K:E\rightarrow\mathscr{P}(E)$ is taken as a generic Markov kernel; the standard notations, for measurable $f:E\rightarrow\mathbb{R}$,
$K(f)(x):=\int_{E}f(y)K(x,dy)$ and for $\mu\in\mathscr{P}(E)$
$\mu K(f):=\int_{E}K(f)(x)\mu(dx)$ are used. Let $f:E\times
E\rightarrow\mathbb{R}$, then for $\mu\in\mathscr{P}(E)$,
$\mu(f)(x):=\int_{E}f(x,y)\mu(dy)$, with an obvious extension to
higher dimensional spaces. $\mathcal{B}_b(E)$ is used to
represent the bounded measurable functions and for
$f\in\mathcal{B}_b(E)$, $\|f\|_{\infty}:=\sup_{x\in E}|f(x)|$
is used to denote the supremum norm.
We will denote by $K_{\mu}:\mathscr{P}(E)\times E\rightarrow\mathscr{P}(E)$ a generic \emph{non-linear} Markov kernel
and its
invariant measure (given its existence) as $\omega(\mu)$ ($\omega:\mathscr{P}(E)\rightarrow\mathscr{P}(E)$).
For a sequence of probability measures $\{\mu_n\}_{n\geq 0}$ we denote the
composition $\int_{E^{n-1}} K_{\mu_1}(x,dy_1)\dots K_{\mu_n}(y_{n-1},A)$ as
$K_{\mu_1:\mu_n}(x,A)$.
The empirical
measure of an arbitrary stochastic process $\{X_n\}_{n\geq 0}$ is
defined, at time $n$, as:
\begin{eqnarray*}
S_n(du) & := & \frac{1}{n+1}\sum_{i=0}^n\delta_{x_i}(du).
\end{eqnarray*}
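The sampler maintains this object online through the recursive update $S_n = S_{n-1} + \frac{1}{n+1}(\delta_{x_n} - S_{n-1})$ (the form used in the algorithm of Section 2.2). On a finite state space the recursion can be sketched as follows; the path and state space are hypothetical toys:

```python
def empirical_measure(path, states):
    """Running empirical measure S_n = (1/(n+1)) sum_{i=0}^n delta_{x_i},
    maintained through the recursion
    S_n = S_{n-1} + (delta_{x_n} - S_{n-1}) / (n+1)."""
    S = {s: 0.0 for s in states}
    S[path[0]] = 1.0                     # S_0 = delta_{x_0}
    for n, x in enumerate(path[1:], start=1):
        for s in states:
            S[s] += ((1.0 if s == x else 0.0) - S[s]) / (n + 1)
    return S

S = empirical_measure(list("aabac"), list("abc"))
```

After the full path, the recursion reproduces the batch frequencies exactly (here $S(a)=3/5$, $S(b)=S(c)=1/5$).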
In addition, $a\vee b:=\max\{a,b\}$ (resp.~$a\wedge b:=\min\{a,b\}$). The indicator
function of $A\in\mathcal{E}$ is written $\mathbb{I}_A(x)$.
Note also that $\mathbb{N}_0=\mathbb{N}\cup\{0\}$, $\mathbb{T}_m:=\{1,\dots,m\}$.
\subsection{Algorithm}
We introduce a sequence of probability measures, for $r \geq 2$, $\{\pi_n\}_{n\in\mathbb{T}_{r}}$, $\pi_n\in\mathscr{P}(E)$, $n\in\mathbb{T}_r$, with $\pi_r\equiv \pi$, which are assumed to be absolutely
continuous wrt some reference measure $\lambda^*$; in an abuse of notation, we
write the Radon-Nikodym derivatives as $d\pi_n/d\lambda^*(x)=\pi_n(x)$ also. The EE sampler will generate a stochastic process
$\{Y_{n}^r\}_{n\geq 0}$, with $Y_n^r=(X_n^1,\dots,X_n^r)$, with $X_{n}^i:E\rightarrow \mathbb{R}^k$, $i\in\mathbb{T}_r$, $k\geq 1$ (that is $\{Y_{n}^r\}_{n\geq 0}$ is a stochastic
process on $(\Omega,\mathscr{F})=((E^r)^{\mathbb{N}},(\mathscr{E}^{\otimes r})^{\otimes\mathbb{N}})$).
Central to the construction of the EE sampler is the concept of the energy
rings; this will correspond to the partition $E=\bigcup_{i=1}^d E_i$.
For each $X_n^i$ we associate a non-linear Markov kernel $\{K_{\mu,n}\}_{n\in\mathbb{T}_{r}}$
with $K_{\mu,1}\equiv K_1$ (i.e.~$K_1$ is an ordinary Markov kernel) and
$\mu\in\mathscr{P}(E)$. Additionally, assume that
for $i=2,\dots,r-1$:
\begin{eqnarray}
\omega_{i}(\pi_{i-1})K_{\pi_{i-1},i}(dy) & = & \omega_{i}(\pi_{i-1})(dy) = \pi_{i}(dy)\label{invmeas}
\end{eqnarray}
and that $\pi_1 K_1 = \pi_1$. Here, it is assumed that, given that we input the invariant probability
measure for $K_{\pi_{i-2},i-1}$ into the non-linear kernel $K_{\mu,i}$, the target probability
measure $\pi_i$ is obtained. Define:
\begin{eqnarray}
K_{\mu,i}(x,dy) & := & (1-\epsilon) K_i(x,dy) + \epsilon Q_{\mu_x,i}(x,dy)\label{nonlinker}
\end{eqnarray}
$i=2,\dots, r$, $\epsilon \in [0,1]$, with $K_i$ a Markov kernel
of invariant distribution $\pi_i$ and also:
\begin{eqnarray*}
Q_{\mu_x,i}(x,dy) & := & \int_{E}\mu_x(dz) K^{S}_i(K_i(dy))(x,z)\\
\mu_{x}(A) & := & \sum_{i=1}^d\mathbb{I}_{E_i}(x)\frac{\mu(E_i\cap A)}{\mu(E_i)}
\end{eqnarray*}
where it is assumed $\mu(E_i)>0$; let $\mathscr{P}_{d}(E)=\{\mu\in\mathscr{P}(E):\mu(E_i) > 0~\forall i\in\mathbb{T}_d\}$.
Finally define:
\begin{eqnarray*}
K^S_i((x,y),d(x',y')) & := & \delta_{x}(dy')\delta_{y}(dx')\alpha_i(x,y) + \delta_{x}(dx')\delta_{y}(dy')[1-\alpha_i(x,y)]\\
\alpha_i(x,y) & = & 1\wedge\frac{\pi_i(y)\pi_{i-1}(x)}{\pi_i(x)\pi_{i-1}(y)}
\end{eqnarray*}
which is the swapping kernel.
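Concretely, a single draw from the swapping kernel is a Metropolis--Hastings exchange step. A minimal sketch follows; the two densities are hypothetical stand-ins, assumed evaluable pointwise up to a common constant:

```python
import math
import random

def ee_swap(x, y, pi_i, pi_im1, rng=random):
    """One draw from the swapping kernel K^S_i: propose exchanging the
    pair (x, y) and accept with probability
    alpha_i(x, y) = min(1, pi_i(y) pi_{i-1}(x) / (pi_i(x) pi_{i-1}(y)))."""
    alpha = min(1.0, (pi_i(y) * pi_im1(x)) / (pi_i(x) * pi_im1(y)))
    return (y, x) if rng.random() < alpha else (x, y)

# Hypothetical densities: pi_i is a colder (more peaked) version of pi_im1.
pi_i = lambda u: math.exp(-u * u)
pi_im1 = lambda u: math.exp(-u * u / 2.0)
```

When the proposed exchange moves the colder chain toward higher $\pi_i$-density, $\alpha_i = 1$ and the swap is accepted deterministically; otherwise it is accepted with the usual M-H probability.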
It is easily seen that the kernels (\ref{nonlinker}) satisfy the equation (\ref{invmeas}). However,
it is often the case that such a system cannot be simulated exactly. The idea is to approximate the
correct probability measures $\pi_{n}$ via the empirical measures generated by the previous chain.
The algorithm which corresponds to the equi-energy sampler is as follows. Define predetermined
integers $N_1,\dots, N_r$ and assume that for all $i\in\mathbb{T}_r$, $j\in\mathbb{T}_d$ (recall $d$ corresponds to the number of energy levels) we have $S^i_{N_{1:i}}(E_j)>0$ with $S^i$ the
empirical measure of the $i^{th}$ process and $N_{1:i}=\sum_{j=1}^i N_j$. The algorithm is in Figure \ref{eesampler}.
\begin{figure}[h]
\begin{flushleft}
\noindent\textsf{0.: Set $n=0$ and $X_0^{1:r}=x_0^{1:r}$, $S^l_0=\delta_{x_0^l}$, $l=1,\dots,r$. Set $i=1$.}\\
\textsf{1.: Perform the following for $i=1$ until $i=r$. Set $j=1$.}\\
\textsf{2.: Perform the following for $j=1$ until $j=N_i$, then set $i=i+1$ and go to 1.}\\
\textsf{3.: Set $n=n+1$, $k=1$.}\\
\textsf{4.: Perform the following for $k=1$ until $k=i$, then set $k=i+1$ and go to 5.}\\
$X_{n}^k\sim K_{S_{n}^{k-1},k}(x_{n-1}^k,\cdot)$, $S_n^k=S_{n-1}^k + \frac{1}{n+1}[\delta_{x_n^k} - S_{n-1}^k]$,
\textsf{set $k=k+1$ and go to 4.}\\
\textsf{5.: Perform the following for $k=i+1$ until $k\geq r$, then set $j=j+1$ and go to 2.}\\
\textsf{6.: $X_n^k\sim \delta_{x_{n-1}^k}(\cdot)$ then set $k=k+1$
and go to 5.}
\end{flushleft}
\caption{An equi-energy sampler.}
\label{eesampler}
\end{figure}
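A runnable toy version of this scheme for $r=2$ on a finite state space might look as follows. This is a deliberate simplification of Figure \ref{eesampler}: the feeding chain's history is generated first rather than grown online, the mutation kernels are random-walk Metropolis steps, and the targets, rings, and constants are all hypothetical:

```python
import math
import random

def mh_step(x, logpi, n_states, rng):
    """One Metropolis random-walk step on {0, ..., n_states - 1}."""
    prop = min(max(x + rng.choice([-1, 1]), 0), n_states - 1)
    if math.log(rng.random() + 1e-300) < logpi[prop] - logpi[x]:
        return prop
    return x

def ee_sampler_r2(logpi1, logpi2, ring_of, n1=2000, n2=8000, eps=0.1, seed=1):
    """Two-chain equi-energy sampler sketch (one feeding chain): chain 1
    targets pi_1; chain 2 applies (1 - eps) K_2 + eps Q, where Q selects a
    stored chain-1 state from the current energy ring and proposes it via
    the M-H exchange ratio alpha_2."""
    rng = random.Random(seed)
    n_states = len(logpi1)
    hist = {r: [] for r in set(ring_of)}      # chain-1 states, per ring
    x1 = 0
    for _ in range(n1 + n2):                  # feeding chain (run up front here)
        x1 = mh_step(x1, logpi1, n_states, rng)
        hist[ring_of[x1]].append(x1)
    x2, out = 0, []
    for _ in range(n2):
        pool = hist[ring_of[x2]]
        if rng.random() < eps and pool:
            z = rng.choice(pool)              # selection step (from S^1 in ring)
            log_a = (logpi2[z] - logpi1[z]) - (logpi2[x2] - logpi1[x2])
            if math.log(rng.random() + 1e-300) < log_a:
                x2 = z                        # EE exchange accepted
        else:
            x2 = mh_step(x2, logpi2, n_states, rng)
        out.append(x2)
    return out

# Hypothetical bimodal target on 10 states; pi_1 is a flattened version.
logpi2 = [-((i - 1) ** 2) * 0.8 if i < 5 else -((i - 8) ** 2) * 0.8
          for i in range(10)]
logpi1 = [0.25 * v for v in logpi2]           # high-temperature feeding chain
ring_of = [0 if v > -1.0 else 1 for v in logpi2]  # rings by energy level
samples = ee_sampler_r2(logpi1, logpi2, ring_of)
```

The EE jumps let the cold chain hop between the two modes through states of comparable energy, which a local random walk under $\pi_2$ would cross only rarely.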
\noindent \emph{\textbf{Remark 1}.
We point out here that our algorithm is slightly
different from that of Kou et al. There, the EE
jump can be seen as using a Metropolis-Hastings (M-H) independence
sampler with proposal $\pi_{i-1}$ constrained to the set $E_i$
occupied by the current state (the kernel is then
approximated). We have preferred to do this in a
selection/mutation type format (see Del Moral (2004)) where a
value is selected from the empirical measure of the lower chain and
then put through a M-H exchange step. We then
allow a possibility of mutation (sampling from $K_i$).
This has been done
in order to fit our proof in the framework of Andrieu et al.~(2007), which allows us, below, to refer to minor technical results from that work and
hence reduce the length of this note. It should be noted that, from
a technical point of view, changing the algorithm back to the EE
sampler presents no difficulties, in terms of the following arguments. Indeed,
the only real changes to the proofs are some of the technical assumptions
in Andrieu et al.~(2007) and the uniform in time drift condition presented
there (Proposition 4.1).}
\noindent \emph{\textbf{Remark 2}. In our view, the non-linear kernel
interpretation of the equi-energy sampler allows us to
intuitively understand some practical issues associated to the
algorithm, whilst perhaps not requiring a full technical understanding. For example, if there is only one feeding chain, and
it is stopped at some point, then we can observe from equation
(\ref{invmeas}) that this algorithm is then biased (contrary
to the point of Kou et al.~(2006) pp-1647, 5th par, although we realize that it is not possible to store an infinite number of samples).}
\section{Discussion of the Previous Proofs}
The difficulties of the convergence proofs of Kou
et al.~(2006) and Atchad\'e \& Liu (2006) are now discussed.
\subsection{Theorem 2 of Kou et al.~(2006)}
We begin with the proof of Theorem 2 of Kou et al.
Recall that the Theorem states, under some assumptions,
that the steady state distribution of $\{X^{i}_n\}_{n\geq 0}$ is $\pi_i$. The authors use induction
and start by using the ergodicity of the M-H chain which verifies the case $r=1$ and continue from there.
Atchad\'e \& Liu state that equation (5) of the proof is not
clear, however, we note that the equation can indeed be verified
(and as stated by Kou et al.~(2006) in the rejoinder to the
discussion (pp-1649)) by using the SLLN (via the induction
hypothesis) and bounded convergence theorem.
The main difficulty of the proof is as follows, quoting Kou et al (2006), pp-1590:
\begin{quote}
\emph{Therefore, under the induction assumption, $X^{(i)}$ is asymptotically equivalent to a Markovian sequence
governed by $S^{(i)}(x,\cdot)$.}
\end{quote}
Here the kernel $S^{(i)}(x,\cdot)$ is the theoretical kernel
corresponding to $K_{\pi_{i-1},i}$. The authors then state that
$S^{(i)}(x,\cdot)$ is an ergodic Markov kernel which then yields
the convergence of $X^{(i)}$. This is the difficulty of the proof:
the authors verify that the transitions of the stochastic process
are asymptotically equivalent to that of an ergodic Markov kernel, however,
this is not enough to provide the required convergence of the process.
That is, Kou et al.~(2006) prove that (suppressing the notation $N_{1:i-1}$)
\begin{equation*}
\lim_{n\rightarrow \infty}|K_{S_n^{i-1},i}(x,A)-K_{\pi_{i-1},i}(x,A)| \stackrel{a.s}
{\longrightarrow}_{\mathbb{P}^{(i-1)}}0
\end{equation*}
where $\mathbb{P}^{(i-1)}$ is the probability law of the process with $i-1$
chains. However, this convergence property essentially means that when the input probability measure $S_n^{i-1}$ is converging to the
`correct' probability measure $\pi_{i-1}$ then a set-wise convergence
of the non-linear kernel $K_{\cdot,i}$ is induced.
This is far from sufficient as the
law of the process at iteration $n$ is, for $A\in\mathscr{E}$
$$
K_{S_1^{i-1},i}\big[K_{S_2^{i-1},i}\big[\cdots K_{S_n^{i-1},i}(A)\big]\big],
$$
where $S_1^{i-1},S_2^{i-1},\ldots,S_n^{i-1}$ are empirical distributions
constructed from the same realisation of the process at level $i-1$. It is
clear that if the algorithm is to converge, then the joint distributions of $X_{n-\tau}^{(i)},\ldots,X_n^{(i)}$ for any (in fact increasing with $n$) lag $\tau$ should converge to
\[
K_{\pi_{i-1},i}\times K_{\pi_{i-1},i}\times\cdots \times K_{\pi_{i-1},i},
\]
which as we shall see is far from trivial.
This remark indicates an appropriate approach to a proof; via standard Markov chain convergence theorems.
As a result, using the arguments of Kou et al.~(2006), we cannot
even say that
\begin{equation*}
\lim_{n\rightarrow \infty}|K_{S_{1}:S_{n+N_{1:i-1}},i}(x,A)-\pi_i(A)| \stackrel{a.s}
{\longrightarrow}_{\mathbb{P}^{(i-1)}}0
\end{equation*}
via the ergodicity of $K_{\pi_{i-1},i}(x,A)$; i.e.~a set-wise convergence of the kernel that is \emph{simulated}.
\subsection{Theorem 3.1 of Atchad\'e \& Liu (2006)}
Atchad\'e \& Liu state (pp-1625, in the proof of Theorem 3.1):
\begin{quote}
\emph{Note that the $i^{th}$ chain is actually a non-homogeneous
Markov chain with transition kernels
$K_0^{(i)},K_{1}^{(i)},\dots$, where
$K_n^{(i)}(x,A)=\mathbb{P}(X_{n+1}^{(i)}\in A|X_n^{(i)}=x)$.}
\end{quote}
This statement is not quite accurate. The $i^{th}$ chain is a
non-homogeneous Markov chain only conditional upon a realization
of the previous chain; unconditionally, it is not a Markov
chain. As a result, Atchad\'e \& Liu analyze the process of
kernel:
\begin{eqnarray*}
K_{n}^{(i)}(x,dy) & = & (1-\epsilon) K_i(x,dy) + \epsilon \mathbb{E}\bigg[R_n^{(i)}(x,dy)\bigg]
\end{eqnarray*}
where $R_n^{(i)}$ is defined in Atchad\'e \& Liu. This is not the
kernel corresponding to the algorithm; the algorithm simulates:
\begin{eqnarray*}
Q_{S^{i-1}_x,i}(x,dy) & = & \int_{E} S_x^{i-1}(dz) K^{S}_i(K_i(dy))(x,z)
\end{eqnarray*}
that is, we do not integrate over the process $\{X_n^{i-1}\}$, we
condition upon it. Therefore, the proofs of Atchad\'e \& Liu do
not provide a theoretical validation of
the equi-energy sampler.
\section{Ergodicity Results}
The SLLN is now presented: \emph{we have only proved
the case when} $r=2$ and this is assumed hereafter. There are some
difficulties in extending our proof to the case $r\geq 3$; this
will be outlined after the proofs. Note that our proof is
non-trivial and relies on a SLLN for $U-$statistics of stationary
ergodic stochastic processes (Aaronson et al.~1996).
\subsection{Assumptions}
We make the following assumptions (it is assumed that for any
$i\in\mathbb{T}_r$, $j\in\mathbb{T}_d$, $\pi_i(E_j)>0$
throughout).
\begin{hypA}\label{assump:stability}
\noindent$\bullet$ (\emph{Stability of Algorithm}): There is a
universal constant $\theta> 0$, such that for any $n\geq 0$, $j\in\mathbb{T}_d$, $i\in\mathbb{T}_{r-1}$ we have, recalling that $N_{1:i}=\sum_{j=1}^i N_j$:
\begin{eqnarray*}
S_{N_{1:i}+n}^i(E_j) & \geq & \theta \qquad \mathbb{P}_{x_0^{1:r}}-a.s.
\end{eqnarray*}
\end{hypA}
\begin{hypA}\label{assump:KandP}
\noindent$\bullet$ (\emph{Uniform Ergodicity}): The $\{K_n\}_{n\in\mathbb{T}_r}$
are uniformly ergodic Markov kernels with a one step minorization condition.
That is: $\forall n\in\mathbb{T}_r$, $\exists (\phi_n,\nu_n)\in \mathbb{R}^+\times\mathscr{P}(E)$
such that $\forall (x,A) \in E\times \mathscr{E}$:
\begin{eqnarray*}
K_n(x,A) & \geq & \phi_n\nu_n(A).
\end{eqnarray*}
\end{hypA}
\begin{hypA}\label{assump:statespace}
\noindent$\bullet$ (\emph{State-Space Constraint}):
$E$ is Polish (a separable, completely metrisable topological space).
\end{hypA}
\subsection{Discussion of Assumptions}
The assumptions we make are quite strong. The first assumption (A\ref{assump:stability}) is used to allow us
to bound:
\begin{displaymath}
\frac{1}{S^i_{m+1}(E_i)} - \frac{(m+2)}{(m+1)S^i_{m}(E_i)}
\end{displaymath}
which will appear in the proof below.
This assumption, on the empirical measure, is removed in Andrieu et al.~(2007);
however, this is at the cost of a significant increase in the technicalities
of the proof. As a result, (A\ref{assump:stability}) is adopted as an intuitive
assumption as it states:
\begin{enumerate}
\item{Make sure that $\pi_i(E_j)$ for all $i, j$ is non-negligible.}
\item{Let $N_1,\dots, N_{r-1}$ be reasonably large so that we can expect convergence.}
\end{enumerate}
The second assumption (A\ref{assump:KandP}) might appear strong, but allows us to
significantly simplify both notation and our proofs whilst preserving the `essence' of the general proof. In addition, this condition will often be satisfied on finite state spaces. More general assumptions could be
used, at the expense of significant notational and technical complexity.
The assumption allows us to use the following facts:
\begin{enumerate}
\item{For any fixed $\mu\in\mathscr{P}_d(E)$, $\exists \omega_i(\mu)\in\mathscr{P}(E)$ such that $\omega_i(\mu)K_{\mu,i}=\omega_i(\mu)$.}
\item{For any fixed $\mu\in\mathscr{P}_d(E)$, $i\in\mathbb{T}_r$, $\exists\rho \in(0,1)$, $M<\infty$ such that for any $n\in\mathbb{N}$
we have $\sup_{x\in E}\|K_{\mu,i}^n(x,\cdot)-\omega_i(\mu)\|_{\textrm{tv}}\leq M\rho^n$.}
\end{enumerate}
These properties will help to simplify our proofs below.
The final assumption (A\ref{assump:statespace}) will be related to
some technical arguments in the proof.
\subsection{SLLN}
We are to establish the convergence of $S_{n}^r(f)\stackrel{a.s}{\longrightarrow}_{\mathbb{P}_{x_{0}^{1:r}}}\pi_r(f)$
for some $f$ to be defined in the proof and $n\geq N_{1:r-1}$.
\subsubsection{Strategy of the Proof}
Our approach is to consider
$S^{\omega}_{n,r}= 1/(n-N_{1:r-1}+1)\sum_{j=N_{1:r-1}}^n\omega_r(S_j^{r-1})$
and adopt the decomposition:
\begin{eqnarray}
S_n^r(f) -\pi_r(f) & = & S_n^r(f) - S^{\omega}_{n,r}(f) + S^{\omega}_{n,r}(f) -\pi_r(f)\label{eq:prfdecomp}.
\end{eqnarray}
The analysis of the first term on the RHS of (\ref{eq:prfdecomp})
relies upon a Martingale argument using the classical
Poisson's equation solution:
\begin{eqnarray*}
f(X_n^r)-\omega(S_n^{r-1})(f) & = & \hat{f}_{S_n^{r-1}}^{r}(X_n^r) - K_{S_n^{r-1},r}(\hat{f}_{S_n^{r-1}}^{r})(X_n^r)
\end{eqnarray*}
where $\hat{f}_{S_n^{r-1}}^{r}$ is a solution of the Poisson equation.
Indeed, the first term on the RHS of (\ref{eq:prfdecomp}) can be rewritten:
\begin{eqnarray}
(n-N_{1:r-1}+1)[S_n^r-S_{n,r}^{\omega}](f) & = & M_{n+1}^r + \label{eq:decomp}
\sum_{m=N_{1:r-1}}^{n}[\hat{f}_{S_{m+1}^{r-1}}^r(X_{m+1}^r) -
\\ & & \hat{f}_{S_m^{r-1}}^r(X_{m+1}^r)] + \hat{f}_{S_{N_{1:r-1}}^{r-1}}^r(X_0^r)
- \hat{f}_{S_{n+1}^{r-1}}^r(X_{n+1}^r) \nonumber
\end{eqnarray}
where
\begin{eqnarray}
M_{n+1}^r & = & \sum_{m=N_{1:r-1}}^{n}[\hat{f}_{S_m^{r-1}}^r(X_{m+1}^r) - K_{S_m^{r-1},r}(\hat{f}_{S_m^{r-1}}^r)(X_m^r)] \nonumber\\
\hat{f}_{S_m^{r-1}}^r(X_{m+1}^r) & = &
\sum_{j\in\mathbb{N}_0}[K^{j}_{S_m^{r-1},r}(f)(X_{m+1}^r) -
\omega_r(S_m^{r-1})(f)]\label{eq2}
\end{eqnarray}
and $\{M_n^r,\mathscr{G}_n\}_{n\geq 0}$ is a martingale and $M_n^r:=0$, for
$0\leq n\leq N_{1:r-1}$.
Recall that (\ref{eq2}) is a solution to the Poisson equation,
which will exist under our assumptions above.
The proof will deal
with the Martingale via the Burkh\"older inequality and the fluctuations
of the solution of the Poisson equation due to the evolution of the empirical
measure (\ref{eq2}) using continuity properties of the kernel $Q_{\mu}$.
The bias term $S^{\omega}_{n,r}(f) -\pi_r(f)$ is controlled by a SLLN for
$U-$statistics of stationary ergodic stochastic processes.
\subsubsection{Main Result}
\begin{slln}
Assume (A\ref{assump:stability}-\ref{assump:KandP}). Then for any $p\geq 1$, $\exists B_p<\infty$ such that for any $n\geq N_{1:r-1}$ and $f\in\mathcal{B}_b(E)$ we have that:
\begin{eqnarray*}
\mathbb{E}_{x_0^{1:r}}\bigg[|[S_{n}^r - S_{n,r}^{\omega}](f)|^p\bigg]^{1/p} & \leq & \frac{B_p\|f\|_{\infty}}{(n - N_{1:r-1} + 1)^{\frac{1}{2}}}.
\end{eqnarray*}
if, in addition, (A\ref{assump:statespace}) holds then
for any $f\in\mathcal{B}_b(E)$:
\begin{eqnarray*}
S_{n}^2(f)\stackrel{a.s}{\longrightarrow}_{\mathbb{P}_{x_{0}^{1:2}}}
\pi_2(f).
\end{eqnarray*}
\end{slln}
\begin{proof}
Our proof relies heavily upon the theory of Andrieu et al.~(2007).
Note that, under (A2) and, for any fixed $\mu\in\mathscr{P}_d(E)$,
the uniform ergodicity of the kernel $K_{\mu,i}$
allows us to use
the methods of Andrieu et al.~(2007).
We will follow the proof of Theorem 6.5 of that paper.
In order
to prove the SLLN in the paper, the authors combine a series of
technical results. The first of which is the Lipschitz
continuity of the kernel $Q_{\mu}$; we establish the result for
bounded functions and the particular kernel considered here. To
simplify the notation, we remove the sub/superscripts from the
various objects below.
Let $f\in\mathcal{B}_b(E)$ and $\mu,\xi\in\mathscr{P}_d(E)$, then we have:
\begin{eqnarray*}
|Q_{\mu_x}(f)(x) - Q_{\xi_x}(f)(x)| & = & \sup_{(x,y)\in E^2}\|K^S(K(f))(x,y) - Q_{\mu_x}(f)(x)\|_{\infty} \\ & &
\times\bigg|\int_{E\times E}\frac{K^S(K(f))(x',y) - Q_{\mu_x}(f)(x')}{\sup_{(x,y)\in E^2}\|K^S(K(f))(x,y) - Q_{\mu_x}(f)(x)\|_{\infty}}
\times \\ & &
\mu_x(dy)\times\delta_x(dx') - \xi_x(dy)\times\delta_x(dx')\bigg|\\
& \leq & 2\|f\|_{\infty} \sup_{x\in E}\|\mu_x - \xi_x\|_{\textrm{tv}}
\end{eqnarray*}
We then note that Propositions 6.1 and 6.2 (bounding the solution
of the Poisson equation and Martingale in the $\mathbb{L}_p$ norm)
of Andrieu et al.~(2007) are proved in the same manner. That is, in a similar
way to the proofs constructed there, we can show that:
\begin{eqnarray*}
\mathbb{E}_{x_0^{1:r}}\bigg[|\widehat{f}_{S_m}(X_{m+1})|^p\bigg]^{1/p}
& \leq & M\|f\|_{\infty}\\
\mathbb{E}_{x_0^{1:r}}\big[|M_n|^p\big]^{1/p} & \leq & M\|f\|_{\infty}n^{1/2}.
\end{eqnarray*}
As a result, the verification of Proposition 6.3 (bounding the
fluctuations of the Poisson equation due to the evolution of the empirical measure) and
Theorem 6.5 (the SLLN) are required.
We begin with the equation (\ref{eq2}); the bound is proved by establishing:
\begin{eqnarray}
|S_{m+1,x}(f) - S_{m,x}(f)| & \leq & \frac{M\|f\|_{\infty}}{m+2}\label{eq:prf1}
\end{eqnarray}
for $M<\infty$ some constant and any $f\in\mathcal{B}_b(E)$.
Consider
\begin{eqnarray*}
|S_{m+1,x}(f) - S_{m,x}(f)| & = & \bigg|\sum_{i=1}^d\mathbb{I}_{E_i}(x)\bigg[\frac{S_{m+1}(\mathbb{I}_{E_i}f)}{S_{m+1}(E_i)} - \frac{S_{m}(\mathbb{I}_{E_i}f)}{S_{m}(E_i)}\bigg] \bigg|\\
& = & \bigg|\sum_{i=1}^d\mathbb{I}_{E_i}(x)\bigg[\frac{f(x_{m+1})\mathbb{I}_{E_i}(x_{m+1})}{(m+2)S_{m+1}(E_i)}
+ \frac{1}{m+2}\sum_{j=0}^m f(x_j)\mathbb{I}_{E_i}(x_{j})\\ & & \times \bigg\{\frac{1}{S_{m+1}(E_i)} - \frac{m+2}{(m+1)S_{m}(E_i)}\bigg\}
\bigg]\bigg|.
\end{eqnarray*}
Now, since:
\begin{eqnarray*}
\bigg|\frac{1}{S_{m+1}(E_i)} - \frac{(m+2)}{(m+1)S_{m}(E_i)}\bigg| & = & \frac{|(m+1)S_m(E_i) - \delta_{x_{m+1}}(E_i) - (m+1)S_{m}(E_i)|}{(m+1)S_m(E_i)S_{m+1}(E_i)}\\
& = & \frac{\delta_{x_{m+1}}(E_i)}{(m+1)S_m(E_i)S_{m+1}(E_i)}\\
& \leq & \frac{1}{(m+1)\theta^2}
\end{eqnarray*}
it follows that:
\begin{eqnarray*}
|S_{m+1,x}(f) - S_{m,x}(f)| & \leq & \frac{\|f\|_{\infty}}{(m+2)}
\sum_{i=1}^d\mathbb{I}_{E_i}(x)\bigg[\frac{1}{\theta}
+\frac{1}{\theta^2}\bigg] \\
& \leq & \frac{M\|f\|_{\infty}}{m+2}
\end{eqnarray*}
as required.
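The bound just derived can be checked numerically on a simulated path. In the sketch below, $\theta$ is taken to be the smallest empirical mass the ring receives along the realised path, so that (A\ref{assump:stability}) holds by construction; the path itself is a hypothetical 0/1 sequence:

```python
import random

def check_fluctuation_bound(path, ring):
    """Verify |1/S_{m+1}(E_i) - (m+2)/((m+1) S_m(E_i))| <= 1/((m+1) theta^2)
    along one trajectory, taking theta = min_m S_m(E_i) so that the
    stability assumption (A1) holds by construction."""
    masses, hits = [], 0
    for m, x in enumerate(path):
        hits += x in ring
        masses.append(hits / (m + 1))        # S_m(E_i)
    theta = min(masses)
    assert theta > 0, "(A1) fails: the ring is never charged"
    for m in range(len(path) - 1):
        lhs = abs(1.0 / masses[m + 1] - (m + 2) / ((m + 1) * masses[m]))
        assert lhs <= 1.0 / ((m + 1) * theta ** 2) + 1e-9
    return True

rng = random.Random(3)
path = [1] + [rng.randint(0, 1) for _ in range(200)]
ok = check_fluctuation_bound(path, ring={1})
```

Since the left-hand side equals $\delta_{x_{m+1}}(E_i)/((m+1)S_m(E_i)S_{m+1}(E_i))$ exactly, the check passes for any path on which the ring retains mass at least $\theta$.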
To bound the fluctuations of the Poisson equation, the decomposition (Proposition B.5) in Andrieu et al.~(2007) is adopted, along with Minkowski's inequality:
\begin{displaymath}
\mathbb{E}_{x_{0}^{1:r}}\bigg[|\widehat{f}_{S_{m+1}}(X_{m+1}) -
\widehat{f}_{S_{m}}(X_{m+1})|^p\bigg]^{1/p} \leq
\end{displaymath}
\begin{displaymath}
\sum_{n\in\mathbb{N}_0}\bigg[\mathbb{E}_{x_{0}^{1:r}}\bigg[|\sum_{i=0}^{n-1}[K_{S_{m+1}}^i
- \omega(S_{m+1})](K_{S_{m+1}} - K_{S_{m}})[K_{S_m}^{n-i-1}-\omega(S_m)](f)(X_{m+1})|^p\bigg]^{1/p}
+
\end{displaymath}
\begin{displaymath}
\mathbb{E}_{x_{0}^{1:r}}\bigg[|[\omega(S_{m+1})-\omega(S_{m})](K_{S_m}^n-\omega(S_{m}))|^p\bigg]^{1/p}\bigg]
\end{displaymath}
To bound the first expression on the RHS, we can use the fact
that, for a fixed (deterministic) pair of empirical measures
$S_m,S_{m+1}\in\mathscr{P}_d(E)$ and for any $x\in E$:
\begin{displaymath}
|[K_{S_{m+1}}^i
- \omega(S_{m+1})](K_{S_{m+1}} - K_{S_{m}})[K_{S_m}^{n-i-1}-\omega(S_m)](f)(x)|
\leq \end{displaymath}
\begin{displaymath}
M\rho^{i}\|(K_{S_{m+1}} - K_{S_{m}})[K_{S_m}^{n-i-1}-\omega(S_m)](f)\|_{\infty}
\end{displaymath}
and further, for any $x\in E$:
\begin{displaymath}
|(K_{S_{m+1}} - K_{S_{m}})[K_{S_m}^{n-i-1}-\omega(S_m)](f)(x)|
\leq \frac{M\|[K_{S_m}^{n-i-1}-\omega(S_m)](f)\|_{\infty}}{m+2}
\end{displaymath}
due to the Lipschitz continuity of $Q$ and the bound (\ref{eq:prf1}); therefore:
\begin{eqnarray*}
\|[K_{S_{m+1}}^i
- \omega(S_{m+1})](K_{S_{m+1}} - K_{S_{m}})[K_{S_m}^{n-i-1}-\omega(S_m)](f)\|_{\infty}
& \leq & \frac{M\rho^{n-1}}{m+2}.
\end{eqnarray*}
Since, due to (A\ref{assump:stability}), this property holds almost surely,
it is possible to bound the first expression. The second expression is dealt
with in a similar manner, using the inequality (see Andrieu et al.~(2007)):
\begin{eqnarray*}
\|[\omega(S_{m+1})-\omega(S_{m})](f)\|_{\infty} & \leq &
M\|[K_{S_{m+1}}-K_{S_{m}}](f)\|_{\infty}.
\end{eqnarray*}
This result can be obtained by the continuity of invariant measures
of uniformly ergodic Markov kernels indexed by a parameter.
To complete the first part of the proof, we can use the manipulations of Del Moral \& Miclo (2004), Proposition 3.3, to yield:
\begin{eqnarray*}
\mathbb{E}_{x_0^{1:r}}\bigg[|[S_{n}^r - S_{n,r}^{\omega}](f)|^p\bigg]^{1/p} & \leq & \frac{B_p\|f\|_{\infty}}{(n - N_{1:r-1} + 1)^{\frac{1}{2}}}.
\end{eqnarray*}
To control the bias $S^{\omega}_{n,r}(f) -\pi_r(f)$ when
$r=2$, the following decomposition is adopted:
\begin{eqnarray*}
|[\omega(S_m) - \omega(\pi_{1})](f)| & \leq & |[K_{S_m}^q - K_{\pi_1}^q](f)|
+ |[\omega(S_m) - K_{S_m}^q](f)| +|[K_{\pi_1}^q - \omega(\pi_{1})](f)|.
\end{eqnarray*}
Due to the uniform ergodicity bound $\|K_{\mu}^q-\omega(\mu)\|_{\textrm{tv}}\leq M\rho^q$
we will show that for any $q\in\mathbb{N}$:
\begin{eqnarray}
\lim_{m\rightarrow\infty}|[K_{S_m}^q - K_{\pi_1}^q](f)| & = & 0 \qquad \mathbb{P}_{x_{0}^{1:r}}-a.s.
\label{eq:prfeq2}
\end{eqnarray}
Let $\epsilon=1$; the general case is dealt with below.
Let $\mu\in\mathscr{P}_d(E)$, and for simplicity write $K^S(K(f)\times 1)(x,y):=
P(f)(x,y)$, $f\in\mathcal{B}_b(E)$, then we will prove by induction that:
\begin{equation}
\underbrace{Q_{\mu_x}Q_{\mu_{\cdot}}\dots Q_{\mu_{\cdot}}}_{q~\textrm{times}}(f)(x)
= \sum_{(i_1,\dots,i_q)\in\mathbb{T}_d^q}
\frac{\mathbb{I}_{E_{i_1}}(x)}
{\prod_{j=1}^q\mu(E_{i_{j}})}\mu^{\otimes
q}\bigg\{\big(\prod_{j=1}^q\mathbb{I}_{E_{i_j}}\big)
\underbrace{P(\mathbb{I}_{E_{i_2}}P(\mathbb{I}_{E_{i_3}}\cdots
P(\mathbb{I}_{E_{i_q}}P(f))))}_{q-1~\textrm{terms}} \bigg\}(x)
\label{eq:prfeq1}
\end{equation}
where a composition of the $P$ kernels is defined as:
\begin{equation*}
P^q(f)(x,x_{1:q}) := \int_{E^{q+1}}P((x,x_1),dy_1)P((y_1,x_2),dy_2)\dots P((y_{q-1},x_q),dy_q)f(y_q).
\end{equation*}
For $q=1$ (\ref{eq:prfeq1}) clearly holds, so assume for $q-1$ and consider
$q$:
\begin{eqnarray*}
Q_{\mu_x}Q_{\mu_{\cdot}}\dots Q_{\mu_{\cdot}}(f)(x) & = &
\sum_{(i_1,\dots,i_{q-1})\in\mathbb{T}_d^{q-1}}
\frac{\mathbb{I}_{E_{i_1}}(x)}
{\prod_{j=1}^{q-1}\mu(E_{i_{j}})}\mu^{\otimes (q-1)}\bigg\{\big(\prod_{j=1}^{q-1}\mathbb{I}_{E_{i_j}}\big)\\
& &
P(\mathbb{I}_{E_{i_2}}\cdots
P(\mathbb{I}_{E_{i_{q-1}}}P(Q_{\mu_{\cdot}}(f))))\bigg\}(x)
\end{eqnarray*}
To continue the proof, consider:
\begin{eqnarray*}
P(Q_{\mu_{\cdot}}(f))(x,x_1) & = & \int_E P((x,x_1),dy_1)\int_{E}\mu_{y_1}(dx_2)P(f)(y_1,x_2)\\
& = & \int_E P((x,x_1),dy_1) \sum_{i=1}^d\mathbb{I}_{E_i}(y_1)\frac{\int\mathbb{I}_{E_i}(x_2)P(f)(y_1,x_2)\mu(dx_2)}{\mu(E_i)}\\
& = & \sum_{i=1}^d\frac{\mu(\mathbb{I}_{E_i}P(\mathbb{I}_{E_i}P(f)))}{\mu(E_i)}.
\end{eqnarray*}
Thus, due to the above equation:
\begin{eqnarray*}
Q_{\mu_x}Q_{\mu_{\cdot}}\dots Q_{\mu_{\cdot}}(f)(x) & = & \sum_{(i_1,\dots,i_{q-1})\in\mathbb{T}_d^{q-1}}
\frac{\mathbb{I}_{E_{i_1}}(x)}
{\prod_{j=1}^{q-1}\mu(E_{i_{j}})}\mu^{\otimes (q-1)}\bigg\{\big(\prod_{j=1}^{q-1}\mathbb{I}_{E_{i_j}}\big)\\
& &
P(\mathbb{I}_{E_{i_2}}\cdots
P\bigg[\mathbb{I}_{E_{i_{q-1}}}\sum_{i_q=1}^d\frac{\mu(\mathbb{I}_{E_{i_q}}P(\mathbb{I}_{E_{i_q}}P(f)))}{\mu(E_{i_q})}\bigg])\bigg\}(x)\\
& = & \sum_{(i_1,\dots,i_{q})\in\mathbb{T}_d^{q}}
\frac{\mathbb{I}_{E_{i_1}}(x)}
{\prod_{j=1}^{q}\mu(E_{i_{j}})}\mu^{\otimes (q-1)}\bigg\{\big(\prod_{j=1}^{q-1}\mathbb{I}_{E_{i_j}}\big)\\
& &
P(\mathbb{I}_{E_{i_2}}\cdots
P\bigg[\mathbb{I}_{E_{i_{q-1}}}\mu(\mathbb{I}_{E_{i_q}}P(\mathbb{I}_{E_{i_q}}P(f)))\bigg])\bigg\}(x).
\end{eqnarray*}
Application of Fubini's theorem yields the desired result.
To prove, for $\epsilon=1$, that (\ref{eq:prfeq2}) holds, observe that:
\begin{eqnarray*}
|[K_{S_m}^q - K_{\pi_1}^q](f)| & = & \bigg|\sum_{(i_1,\dots,i_q)\in\mathbb{T}_d^q}\bigg[
\frac{\mathbb{I}_{E_{i_1}}(x)}
{\prod_{j=1}^q S_m(E_{i_{j}})}S_m^{\otimes q}\bigg\{\big(\prod_{j=1}^q\mathbb{I}_{E_{i_j}}\big)
P(\mathbb{I}_{E_{i_2}}P(\mathbb{I}_{E_{i_3}}\cdots
P(\mathbb{I}_{E_{i_q}}P(f))))
\bigg\}(x)- \\ & &
\frac{\mathbb{I}_{E_{i_1}}(x)}
{\prod_{j=1}^q \pi_1(E_{i_{j}})}\pi_1^{\otimes q}\bigg\{\big(\prod_{j=1}^q\mathbb{I}_{E_{i_j}}\big)
P(\mathbb{I}_{E_{i_2}}P(\mathbb{I}_{E_{i_3}}\cdots
P(\mathbb{I}_{E_{i_q}}P(f))))
\bigg\}(x)
\bigg]\bigg|.
\end{eqnarray*}
Application of Theorem U and Proposition 2.8 of Aaronson et al.~(1996) (along with the Theorem
for almost sure convergence of continuous transformations of almost surely
convergent random variables) yields the desired result.
Firstly, note that these are
results associated to the almost sure convergence of $U-$ and $V-$ (Von Mises) statistics; this is where (A\ref{assump:statespace}) is required.
Secondly, we remark
that it is not required that the auxiliary process is started in its stationary
regime (as stated in the result of Aaronson et al.~(1996)): We can adopt
a coupling argument for uniformly ergodic Markov chains, along the lines of Andrieu et al.~(2007) (Theorem 6.5 and Proposition C.1).
To complete
the proof for $\epsilon\in(0,1)$, we note the following decomposition for
iterates of mixtures of Markov kernels $K$ and $P$:
\begin{equation*}
((1-\epsilon)K+\epsilon P)^n(x,dy) = \sum_{l=0}^n\epsilon^l(1-\epsilon)^{n-l}
\sum_{(\alpha_1,\dots,\alpha_n)\in\mathcal{S}_l}
K^{1-\alpha_1}P^{\alpha_1}\cdots K^{1-\alpha_n} P^{\alpha_n}(x,dy),
\end{equation*}
where $\mathcal{S}_l=\{(\alpha_1,\dots,\alpha_n)\in\{0,1\}^n:\sum_{j=1}^n\alpha_j=l\}$;
there is no difficulty in extending the result, using the bounded convergence
theorem where required.
\end{proof}
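The mixture-kernel decomposition above can be checked numerically for small $n$: each binary word $(\alpha_1,\dots,\alpha_n)$ selects $P$ or $K$ at each step. The following sketch is our own illustration, not part of the proof; the $2\times 2$ kernels are arbitrary row-stochastic matrices chosen for the check.

```python
import itertools

import numpy as np

# Two arbitrary 2x2 row-stochastic kernels (illustrative choices only).
K = np.array([[0.7, 0.3], [0.4, 0.6]])
P = np.array([[0.2, 0.8], [0.5, 0.5]])
eps, n = 0.3, 4

# Left-hand side: the n-th iterate of the mixture kernel.
lhs = np.linalg.matrix_power((1 - eps) * K + eps * P, n)

# Right-hand side: sum over binary words (alpha_1, ..., alpha_n),
# taking P where alpha_j = 1 and K where alpha_j = 0.
rhs = np.zeros_like(K)
for alpha in itertools.product([0, 1], repeat=n):
    l = sum(alpha)
    term = np.eye(2)
    for a in alpha:
        term = term @ (P if a else K)
    rhs += eps ** l * (1 - eps) ** (n - l) * term

assert np.allclose(lhs, rhs)
```

The same expansion underlies the bounded-convergence argument: for fixed $n$ there are finitely many words, each contributing a product of kernels weighted by $\epsilon^l(1-\epsilon)^{n-l}$.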
\noindent \emph{\textbf{Remark 3}. In the proof we have adopted a decomposition
that has naturally led to the use of SLLN for $U-$statistics. Essentially,
the algorithm requires that the invariant measures converge to the desired
distribution, and this is manifested, in our proof, via the iterates of
the non-linear kernel. This is the main difficulty in proving the SLLN
for the equi-energy sampler.
An alternative
approach, via uniform SLLN, may also be adopted, possibly at the cost of more
abstract assumptions; see Del Moral (2004) for example, in the case of particle
approximations of Feynman-Kac formulae.}
\noindent \emph{\textbf{Remark 4}.
We note that it is possible to extend our proof, via a density argument (see Del Moral (1998)), for a related algorithm, (NL3) of Andrieu et al.~(2007), with $r-1$ feeding
chains, but that this cannot be used for the equi-energy sampler, due to
the fact that the indicator functions in the definition of the kernel (\ref{nonlinker})
are not continuous. In general, a proof by induction requires
more complicated arguments; as a result, we feel that the convergence of the
equi-energy sampler, as well as the convergence rate (as brought up in the
discussion of Kou et al.~(2006)), are non-trivial research problems.}
\vspace{0.05 in}
{\ \nocite{*} \centerline{ REFERENCES}
\begin{list}{}{\setlength{\itemindent}{-0.3in}}
\item {\sc Aaronson}, J., {\sc Burton}, R., {\sc Dehling}, H.,
{\sc Gilat}, D., {\sc Hill}, T. \& {\sc Weiss}, B.~(1996). Strong
laws for $L-$ and $U-$ statistics. {\it Trans. Amer. Math. Soc.},
{\bf 348}, 2845--2866.
\item {\sc Andrieu}, C., {\sc Jasra}, A., {\sc Doucet}, A. \& {\sc Del
Moral}, P.~(2007). Non-linear Markov chain Monte Carlo via
self-interacting approximations. Technical Report, University of
Bristol.
\item {\sc Atchad\'e}, Y. \& {\sc Liu}, J. S.~(2006).
Discussion of Kou, Zhou \& Wong. {\it Ann. Statist.}, {\bf 34},
1620--1628.
\item {\sc Del Moral}, P.~(1998). Measure valued
processes and interacting particle systems. Application to non
linear filtering problems. {\it Ann. Appl. Prob.}, {\bf 8},
438--495.
\item {\sc Del Moral}, P.~(2004). \textit{Feynman-Kac
Formulae: Genealogical and Interacting Particle Systems with
Applications}. Springer: New York.
\item {\sc Del Moral}, P. \&
{\sc Miclo}, L.~(2004). On convergence of chains with occupational
self-interactions. {\it Proc. R. Soc. Lond. A}, {\bf 460},
325--346.
\item {\sc Glynn}, P. W. \& {\sc Meyn}, S. P.~(1996). A
Liapounov bound for solutions of the Poisson equation. {\it Ann.
Prob.}, {\bf 24}, 916--931.
\item {\sc Kou}, S. C., {\sc Zhou}, Q. \& {\sc Wong}, W.
H.~(2006). Equi-energy sampler with applications to statistical
inference and statistical mechanics (with discussion). {\it Ann.
Statist.}, {\bf 34}, 1581--1619.
\end{list}
\vspace{0.2cm}
\noindent
{\sc Department of Mathematics}\hspace{4cm} {\sc Department of Mathematics}\\
{\sc University of Bristol}\hspace{5.2cm} {\sc Imperial College London}\\
{\sc Bristol}\hspace{7.8cm} {\sc London}\\
{\sc England}\hspace{7.6cm} {\sc England}\\
{\sc E-mail:} c.andrieu@bris.ac.uk\hspace{4.5cm} {\sc E-mail:} a.jasra@ic.ac.uk\\
\noindent
{\sc Department of Statistics}\hspace{4.6cm}{\sc Department of Mathematics}\\
{\sc University of British Columbia}
\hspace{3.3cm}{\sc University of Nice}\\
{\sc Vancouver}\hspace{7.1cm} {\sc Nice}\\
{\sc Canada}\hspace{7.7cm} {\sc France}\\
{\sc E-mail:} arnaud@stat.ubc.ca\hspace{4.5cm} {\sc E-mail:} delmoral@math.unice.fr\\
\end{document}
\section{Introduction}
The Schur functions are perhaps best known as a basis of the ring of symmetric functions.
As such, the numbers $c_{\mu \nu}^{\lambda}$, commonly known as the \textit{Littlewood-Richardson coefficients}, that arise in the product
\[ s_\mu s_\nu = \sum_{\lambda} c_{\mu \nu}^{\lambda} s_{\lambda}, \]
are of paramount importance to the structure of this ring.
This structure appears in several other areas.
In the representation theory of the symmetric group, the
Specht modules can be placed in one-to-one correspondence with the Schur functions, and given two Specht modules $S^\mu$ and $S^\nu$ we have
\[ (S^\mu \otimes S^\nu)\uparrow^{S_n} = \bigoplus_{\lambda} c_{\mu \nu}^{\lambda} S^{\lambda}, \]
and,
in the cohomology ring of the Grassmannian, the Schubert classes are in correspondence with the Schur functions and the cup product of each pair $\sigma_\mu$, $\sigma_\nu$ of Schubert classes satisfies
\[ \sigma_\mu \cup \sigma_\nu = \sum_{\lambda} c_{\mu \nu}^{\lambda} \sigma_\lambda.\]
It is well known that $c_{\mu \nu}^{\lambda} \geq 0.$
Thus each product $s_\mu s_\nu$ gives rise to a linear combination of Schur functions with non-negative coefficients.
Such an expression is said to be \textit{Schur-positive}.
In recent years, there has been significant interest in determining instances of Schur-positivity in expressions of the form
\[ s_{\mu} s_{\nu} - s_{\lambda} s_{\rho} \textrm{ } \textrm{ and } \textrm{ } s_{\lambda / \mu} - s_{\rho / \nu}.\]
A collection of work in this vein includes \cite{{g2},{complementcite},{lpp},{m},{mvw2}}.
Each of these Schur-positive differences gives a set of inequalities that the corresponding Littlewood-Richardson coefficients must satisfy.
In \cite{fomin3}, a Schur-positivity result was used to characterize the eigenvalues of a Hermitian matrix.
Further, any Schur-positive homogeneous symmetric function of degree $n$ can be expressed as the Frobenius image of some representation of $S_n$.
In this paper we shall define certain types of staircase diagrams and answer the question of Schur-positivity of each difference of any pair of hook augmentations of a given staircase and each difference of any pair of hook complement augmentations of a given staircase.
\section{Preliminaries}
A \textit{partition} $\lambda$\label{def:lambda} of a positive integer $n$, written $\lambda \vdash n$, is a sequence of weakly decreasing positive integers $\lambda = (\lambda_1,\lambda_2, \ldots, \lambda_k)$ with $\sum_{i=1}^{k} \lambda_i = n$.
We shall use $j^r$ to denote the sequence $j,j,\ldots, j$ consisting of $r$ $j$'s.
Under this notation, $\lambda = (k^{r_k}, (k-1)^{r_{k-1}}, \ldots, 1^{r_1})$ \label{parts} denotes the partition which has $r_1$ parts of size one, $r_2$ parts of size two, \ldots, and $r_k$ parts of size $k$.
We say $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_k)$ is a \label{comp} \textit{composition} of $n$ if each $\alpha_i$ is a positive integer and $\sum_{i=1}^{k} \alpha_i = n$.
If we relax this condition to allow each $\alpha_i$ to be non-negative, then we call the result a \textit{weak composition}.
If $\lambda$ is either a partition or a composition we call each $\lambda_i$ a \textit{part} of $\lambda$, and if $\lambda$ has exactly $k$ parts we say $\lambda$ is of \textit{length} $k$ and write $l(\lambda)=k$.
The \textit{size} of $\lambda$ is given by $| \lambda |=\sum_{i=1}^{k} \lambda_i$.
Given a partition $\lambda$, we can represent it via a diagram of left-justified rows of boxes whose $i$-th row contains $\lambda_i$ boxes.
Diagrams of this type are called \textit{Ferrers diagrams}.
We shall use the symbol $\lambda$ when referring to both the partition and its Ferrers diagram.
Whenever we find a diagram $\mu$ contained in a diagram $\lambda$ as a subset of boxes, we write $\mu \subseteq \lambda$ and say that $\mu$ is a \textit{subdiagram} of $\lambda$.
In this case we can form the \textit{skew \label{skew} diagram} $\lambda / \mu$ by removing the boxes of $\mu$ from the top-left corner of $\lambda$.
A \textit{hook} is the Ferrers diagram corresponding to a partition $\lambda$ that satisfies $\lambda_i \leq 1$ for all $i>1$.
Hence a hook has at most one row of length larger than $1$.
Given diagrams $D_1$ and $D_2$, we define their \textit{direct sum} to be the skew diagram $D= D_1 \oplus D_2$ that consists of the subdiagrams $D_1$ and $D_2$ such that the top-right box of $D_1$ is one step left and one step down from the bottom-left box of $D_{2}$.
Further, the $180^{\circ}$ rotation of any diagram $D$ is denoted by $D^{\circ}$.
\begin{example} Let $D_1 = (2,2,2)/(1,1)$ and $D_2 = (4,4,2)$. Then ${D_1}^\circ$ is the hook given by $(2,1,1)$. We display the direct sum $D_1 \oplus D_2$.
\setlength{\unitlength}{0.4mm}
\begin{picture}(120,50)(-35,-5)
\put(32,10){$D_1 \oplus D_2$}
\put(100,30){\framebox(10,10)[tl]{ }}
\put(110,30){\framebox(10,10)[tl]{ }}
\put(120,30){\framebox(10,10)[tl]{ }}
\put(130,30){\framebox(10,10)[tl]{ }}
\put(100,20){\framebox(10,10)[tl]{ }}
\put(110,20){\framebox(10,10)[tl]{ }}
\put(120,20){\framebox(10,10)[tl]{ }}
\put(130,20){\framebox(10,10)[tl]{ }}
\put(100,10){\framebox(10,10)[tl]{ }}
\put(110,10){\framebox(10,10)[tl]{ }}
\put(90,0){\framebox(10,10)[tl]{ }}
\put(90,-10){\framebox(10,10)[tl]{ }}
\put(90,-20){\framebox(10,10)[tl]{ }}
\put(80,-20){\framebox(10,10)[tl]{ }}
\end{picture}
\end{example}
\
If $D$ is a diagram, then a \textit{tableau}---plural \textit{tableaux}---$\mathcal{T}$ \textit{of shape $D$} is obtained by filling the boxes of $D$ with positive integers.
It is a \textit{semistandard Young tableau} (SSYT---plural SSYTx) if each row of $\mathcal{T}$ gives a weakly increasing sequence of integers and each column of $\mathcal{T}$ gives a strictly increasing sequence of integers.
The \textit{content} of a tableau $\mathcal{T}$ is the weak composition given by
\[ \nu(\mathcal{T}) = ( \# \textrm{1's in }\mathcal{T}, \# \textrm{2's in }\mathcal{T}, \ldots).\]
Given a skew diagram $D$, the \textit{skew Schur function corresponding to $D$} is defined to be \label{slambda}
\begin{equation}
\label{schurdef}
s_{D}(\textbf{x}) = \sum_{\mathcal{T}} {x_1}^{\# \textrm{1's in } \mathcal{T}}{x_2}^{\# \textrm{2's in } \mathcal{T}} \cdots,
\end{equation}
where the sum is taken over all semistandard Young tableaux $\mathcal{T}$ of shape $D$.
When $D=\lambda$ is a partition, $s_\lambda$ is called the \textit{Schur function corresponding to $\lambda$}.
The set $\{s_\lambda | \lambda \vdash n \}$ is a basis of $\Lambda^n$, the set of homogeneous symmetric functions of degree $n$.
Therefore for each $f \in \Lambda^n$ we can write $f=\sum_{\lambda} a_\lambda s_\lambda$ for appropriate coefficients.
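As a concrete illustration of (\ref{schurdef}) (our own sketch, not from the text): specializing $x_1=\cdots=x_m=1$ and $x_i=0$ for $i>m$ turns $s_\lambda$ into the number of SSYT of shape $\lambda$ with entries at most $m$. The backtracking enumeration below computes, for example, $s_{(2,1)}(1,1,1)=8$.

```python
def ssyt_count(shape, max_entry):
    """Count SSYT of the straight shape `shape` with entries in {1..max_entry};
    this equals the principal specialization s_shape(1, ..., 1)."""
    cells = [(r, c) for r, width in enumerate(shape) for c in range(width)]

    def extend(filled, idx):
        if idx == len(cells):
            return 1
        r, c = cells[idx]
        lo = 1
        if c > 0:                              # rows weakly increase
            lo = max(lo, filled[(r, c - 1)])
        if r > 0:                              # columns strictly increase
            lo = max(lo, filled[(r - 1, c)] + 1)
        return sum(extend({**filled, (r, c): v}, idx + 1)
                   for v in range(lo, max_entry + 1))

    return extend({}, 0)

print(ssyt_count((2, 1), 3))  # s_{(2,1)}(1,1,1) = 8
```

Because $s_\lambda$ is symmetric, this count is independent of which $m$ variables are set to one.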
For any partitions $\mu$ and $\nu$ we have
\begin{equation}
\label{smusnu}
s_\mu s_\nu = \sum_{\lambda \vdash n} c_{\mu \nu}^{\lambda} s_\lambda,
\end{equation}
and for any skew diagram $\lambda / \mu$ we have
\begin{equation}
\label{slambdaskewmu}
s_{\lambda / \mu} = \sum_{\nu \vdash n} c_{\mu \nu}^{\lambda} s_\nu
\end{equation}
where the $c_{\mu \nu}^{\lambda}$ are the \textit{Littlewood-Richardson coefficients}.
The Littlewood-Richardson coefficients are non-negative integers and count an interesting class of SSYTx that we now describe.
Given a tableau $\mathcal{T}$, the \textit{reading word} of $\mathcal{T}$ is the sequence of integers obtained by reading the entries of the rows of $\mathcal{T}$ from right to left, proceeding from the top row to the bottom.
We say that a sequence $r=r_1,r_2,\ldots, r_k$ is \textit{lattice} if, for each $j$, when reading the sequence from left to right the number of $j$'s that we have read is never less than the number of $j+1$'s that we have read.
\begin{theorem} [Littlewood-Richardson Rule] (\cite{LR})
\label{lr}
For partitions $\lambda, \mu$, and $\nu$, the \textit{Littlewood-Richardson coefficient} $c_{\mu \nu}^{\lambda}$ is the number of SSYTx of shape $\lambda / \mu$, content $\nu$, with lattice reading word.
\end{theorem}
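For small shapes, Theorem~\ref{lr} can be applied mechanically. The following brute-force sketch is our own illustration (all function names are ours): it enumerates fillings of $\lambda/\mu$, keeps those with content $\nu$ and lattice reading word, and so recovers, e.g., $c_{(2,1)\,(2,1)}^{(3,2,1)}=2$.

```python
def ssyt_fillings(outer, inner, max_entry):
    """Yield SSYT of the skew shape outer/inner with entries in {1..max_entry},
    each as a dict {(row, col): value}."""
    inner = tuple(inner) + (0,) * (len(outer) - len(inner))
    cells = [(r, c) for r, w in enumerate(outer) for c in range(inner[r], w)]

    def extend(filled, idx):
        if idx == len(cells):
            yield dict(filled)
            return
        r, c = cells[idx]
        lo = 1
        if (r, c - 1) in filled:               # rows weakly increase
            lo = max(lo, filled[(r, c - 1)])
        if (r - 1, c) in filled:               # columns strictly increase
            lo = max(lo, filled[(r - 1, c)] + 1)
        for v in range(lo, max_entry + 1):
            filled[(r, c)] = v
            yield from extend(filled, idx + 1)
            del filled[(r, c)]

    yield from extend({}, 0)


def is_lattice(word):
    """True if every prefix has at least as many j's as (j+1)'s."""
    counts = {}
    for x in word:
        counts[x] = counts.get(x, 0) + 1
        if x > 1 and counts[x] > counts.get(x - 1, 0):
            return False
    return True


def lr_coefficient(lam, mu, nu):
    """Count SSYTx of shape lam/mu, content nu, with lattice reading word."""
    count = 0
    for T in ssyt_fillings(lam, mu, len(nu)):
        content = [0] * len(nu)
        for v in T.values():
            content[v - 1] += 1
        if content != list(nu):
            continue
        # Reading word: rows read right to left, top row to bottom row.
        word = [T[(r, c)] for r in range(len(lam))
                for c in sorted((cc for (rr, cc) in T if rr == r), reverse=True)]
        if is_lattice(word):
            count += 1
    return count


print(lr_coefficient((3, 2, 1), (2, 1), (2, 1)))  # = 2
```

This direct enumeration is exponential in the number of boxes, so it is only practical for the small diagrams used as examples here.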
For any $f = \sum_{\lambda \vdash n} a_{\lambda} s_{\lambda} \in \Lambda^n$, we say that $f$ is \textit{Schur-positive}, and write $f \geq_s 0$, if each $a_{\lambda} \geq 0$.
The Littlewood-Richardson rule shows that both $s_{\mu} s_{\nu}$ and $s_{\lambda / \mu}$ are Schur-positive.
For $f,g \in \Lambda^n$, we will be interested in whether or not the difference $f-g$ is Schur-positive.
We shall write $f \geq_s g$ whenever $f-g$ is Schur-positive.
If neither $f-g$ nor $g-f$ is Schur-positive we say that $f$ and $g$ are \textit{Schur-incomparable}.
Further, we write $D_1 \succeq_s D_2$ if $s_{D_1} \geq_s s_{D_2}$.
If we consider the relation $\succeq_s$ on the set of all Schur-equivalent classes of diagrams (i.e. $[D]_s= \{D' | s_{D}=s_{D'}\}$), then $\succeq_s$ defines a partial ordering.
This allows us to view the Hasse diagram for the relation $\succeq_s$ on the set of these Schur-equivalent classes.
Some work in determining these equivalence classes includes \cite{{btvw},{g3},{mvw1},{rsvw}}.
\
We close these preliminaries by mentioning two useful results regarding skew Schur functions.
\begin{theorem}[\cite{stanley}, Exercise 7.56(a)]
\label{rotate}
Given a skew diagram $D$,
\begin{equation} s_D= s_{D^{\circ}}.
\end{equation}
\end{theorem}
\begin{theorem}
\label{disjprod}
The Schur function of any disconnected skew diagram is reducible. If $D = D_1 \oplus D_2$, then we have
\begin{equation}
s_{D}=s_{D_1} s_{D_2}.
\end{equation}
\end{theorem}
\begin{proof} Any SSYT of shape $D_1 \oplus D_2$ gives rise to SSYTx of shape $D_1$ and $D_2$ by restricting to the subdiagrams $D_1$ and $D_2$.
Conversely, any pair of SSYTx $\mathcal{T}_1$ of shape $D_1$ and $\mathcal{T}_2$ of shape $D_2$ gives rise to the tableau $\mathcal{T}_1 \oplus \mathcal{T}_2$ of shape $D_1 \oplus D_2$, which is clearly semistandard. \qed
\end{proof}
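The multiplicativity in Theorem~\ref{disjprod} can be sanity-checked numerically (our own sketch, under our own encoding of skew shapes as outer/inner pairs): specializing $x_1=\cdots=x_m=1$ on both sides reduces the identity to a product of SSYT counts. Taking $D_1=(2,1)$ and $D_2=(1,1)$, the direct sum $D_1 \oplus D_2$ is the skew diagram $(3,3,2,1)/(2,2)$.

```python
def skew_ssyt_count(outer, inner, m):
    """Count SSYT of the skew shape outer/inner with entries in {1..m},
    i.e. the principal specialization of s_{outer/inner} at m ones."""
    inner = tuple(inner) + (0,) * (len(outer) - len(inner))
    cells = [(r, c) for r, w in enumerate(outer) for c in range(inner[r], w)]

    def extend(filled, idx):
        if idx == len(cells):
            return 1
        r, c = cells[idx]
        lo = 1
        if (r, c - 1) in filled:               # rows weakly increase
            lo = max(lo, filled[(r, c - 1)])
        if (r - 1, c) in filled:               # columns strictly increase
            lo = max(lo, filled[(r - 1, c)] + 1)
        total = 0
        for v in range(lo, m + 1):
            filled[(r, c)] = v
            total += extend(filled, idx + 1)
            del filled[(r, c)]
        return total

    return extend({}, 0)


# D1 = (2,1), D2 = (1,1); D1 placed below-left of D2 gives the skew
# diagram D1 (+) D2 = (3,3,2,1)/(2,2).  Check s_{D1 (+) D2} = s_{D1} s_{D2}
# under the principal specialization for several m.
for m in (2, 3, 4):
    assert (skew_ssyt_count((3, 3, 2, 1), (2, 2), m)
            == skew_ssyt_count((2, 1), (), m) * skew_ssyt_count((1, 1), (), m))
```

The check works because no box of the $D_1$ part shares a row or column with a box of the $D_2$ part, mirroring the bijection used in the proof.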
A thorough study of this material can be found in sources such as \cite{sagan} or \cite{stanley}.
\section{Staircases and Fat Staircases}
A Ferrers diagram is a \textit{staircase} if it is the Ferrers diagram of a partition of the form $\lambda = (n,n-1,n-2,\ldots, 2,1)$ or if it is the $180^{\circ}$ rotation of such a diagram.
Both these diagrams are referred to as \textit{staircases of length $n$} and will be denoted by $\delta_n$ and $\Delta_n$ respectively.
\begin{example} Here we see the two staircases of length 5.
\
\
\
\setlength{\unitlength}{0.35mm}
\begin{picture}(100,50)(-80,-5)
\put(-10,50){\framebox(10,10)[tl]{ }}
\put(0,50){\framebox(10,10)[tl]{ }}
\put(10,50){\framebox(10,10)[tl]{ }}
\put(20,50){\framebox(10,10)[tl]{ }}
\put(30,50){\framebox(10,10)[tl]{ }}
\put(-10,40){\framebox(10,10)[tl]{ }}
\put(0,40){\framebox(10,10)[tl]{ }}
\put(10,40){\framebox(10,10)[tl]{ }}
\put(20,40){\framebox(10,10)[tl]{ }}
\put(-10,30){\framebox(10,10)[tl]{ }}
\put(0,30){\framebox(10,10)[tl]{ }}
\put(10,30){\framebox(10,10)[tl]{ }}
\put(-10,20){\framebox(10,10)[tl]{ }}
\put(0,20){\framebox(10,10)[tl]{ }}
\put(-10,10){\framebox(10,10)[tl]{ }}
\put(20,0){$\delta_5$}
\put(100,0){$\Delta_5$}
\put(120,50){\framebox(10,10)[tl]{ }}
\put(120,40){\framebox(10,10)[tl]{ }}
\put(120,30){\framebox(10,10)[tl]{ }}
\put(120,20){\framebox(10,10)[tl]{ }}
\put(120,10){\framebox(10,10)[tl]{ }}
\put(110,40){\framebox(10,10)[tl]{ }}
\put(110,30){\framebox(10,10)[tl]{ }}
\put(110,20){\framebox(10,10)[tl]{ }}
\put(110,10){\framebox(10,10)[tl]{ }}
\put(100,30){\framebox(10,10)[tl]{ }}
\put(100,20){\framebox(10,10)[tl]{ }}
\put(100,10){\framebox(10,10)[tl]{ }}
\put(90,20){\framebox(10,10)[tl]{ }}
\put(90,10){\framebox(10,10)[tl]{ }}
\put(80,10){\framebox(10,10)[tl]{ }}
\end{picture}
\end{example}
Given a composition $\alpha=(\alpha_1, \ldots,\alpha_n)$, we let
\[ \delta_\alpha= (n^{\alpha_n},{n-1}^{\alpha_{n-1}}, \ldots,2^{\alpha_2}, 1^{\alpha_1}) \textrm{ and } \Delta_\alpha= (n^{\alpha_n},{n-1}^{\alpha_{n-1}}, \ldots,2^{\alpha_2}, 1^{\alpha_1})^\circ. \]
We call a skew diagram $D$ a \textit{fat staircase} if $D=\delta_{\alpha}$ or $D=\Delta_{\alpha}$ for some composition $\alpha$.
The numbers $\alpha_i$ count the number of rows of $D$ with $i$ boxes, for each $i$.
Using this notation the regular staircases may be expressed as $\delta_n = \delta_{(1^n)}$ and $\Delta_n = \Delta_{(1^n)}$, respectively.
Both fat staircases $\delta_\alpha$ and $\Delta_\alpha$ have width $l(\alpha)$ and length $|\alpha| = \sum_{i=1}^{n} \alpha_i$.
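The row lengths of $\delta_\alpha$ are easy to generate programmatically; the short sketch below (our own illustration) does so and recovers, e.g., $\delta_{(1,2,2)} = (3,3,2,2,1)$.

```python
def fat_staircase(alpha):
    """Row lengths of the fat staircase delta_alpha for a composition
    alpha = (alpha_1, ..., alpha_n): alpha_i rows of length i, listed
    in weakly decreasing order."""
    n = len(alpha)
    rows = []
    for i in range(n, 0, -1):            # parts of size n, n-1, ..., 1
        rows.extend([i] * alpha[i - 1])  # alpha_i rows of length i
    return tuple(rows)

print(fat_staircase((1, 2, 2)))  # (3, 3, 2, 2, 1): width 3, length 5
```

The width is `len(alpha)` and the length is `sum(alpha)`, matching the statement above.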
\begin{example}
Here we see the fat staircases $\delta_{(1,2,2)}$ and $\Delta_{(3,1,2,3)}$.
\
\
\
\setlength{\unitlength}{0.35mm}
\begin{picture}(100,50)(-50,-15)
\put(10,10){\framebox(10,10)[tl]{ }}
\put(20,10){\framebox(10,10)[tl]{ }}
\put(30,10){\framebox(10,10)[tl]{ }}
\put(10,0){\framebox(10,10)[tl]{ }}
\put(20,0){\framebox(10,10)[tl]{ }}
\put(30,0){\framebox(10,10)[tl]{ }}
\put(10,-10){\framebox(10,10)[tl]{ }}
\put(20,-10){\framebox(10,10)[tl]{ }}
\put(10,-20){\framebox(10,10)[tl]{ }}
\put(20,-20){\framebox(10,10)[tl]{ }}
\put(10,-30){\framebox(10,10)[tl]{ }}
\put(20,-40){$\delta_{(1,2,2)}$}
\put(115,-40){$\Delta_{(3,1,2,3)}$}
\put(100,-30){\framebox(10,10)[tl]{ }}
\put(110,-30){\framebox(10,10)[tl]{ }}
\put(120,-30){\framebox(10,10)[tl]{ }}
\put(130,-30){\framebox(10,10)[tl]{ }}
\put(100,-20){\framebox(10,10)[tl]{ }}
\put(110,-20){\framebox(10,10)[tl]{ }}
\put(120,-20){\framebox(10,10)[tl]{ }}
\put(130,-20){\framebox(10,10)[tl]{ }}
\put(100,-10){\framebox(10,10)[tl]{ }}
\put(110,-10){\framebox(10,10)[tl]{ }}
\put(120,-10){\framebox(10,10)[tl]{ }}
\put(130,-10){\framebox(10,10)[tl]{ }}
\put(110,0){\framebox(10,10)[tl]{ }}
\put(120,0){\framebox(10,10)[tl]{ }}
\put(130,0){\framebox(10,10)[tl]{ }}
\put(110,10){\framebox(10,10)[tl]{ }}
\put(120,10){\framebox(10,10)[tl]{ }}
\put(130,10){\framebox(10,10)[tl]{ }}
\put(120,20){\framebox(10,10)[tl]{ }}
\put(130,20){\framebox(10,10)[tl]{ }}
\put(130,30){\framebox(10,10)[tl]{ }}
\put(130,40){\framebox(10,10)[tl]{ }}
\put(130,50){\framebox(10,10)[tl]{ }}
\end{picture}
\end{example}
\
\
\
Given a composition $\alpha$, $k \geq 0$, and a partition $\lambda$ with $\lambda_1 -k \leq l(\alpha)$ we now define $\mathcal{S}(\lambda, \alpha;k)$ to be the diagram obtained by placing $\lambda$ immediately below $\Delta_{\alpha}$ such that the rows of the two diagrams overlap in precisely $\lambda_1 -k$ positions.
We call $\mathcal{S}(\lambda, \alpha;k)$ a \textit{fat staircase with bad foundation}.
The subdiagram $\lambda$ is called the \textit{foundation} of $\mathcal{S}(\lambda,\alpha;k)$.
The fact that $\Delta_{\alpha}$ and $\lambda$ overlap in precisely $\lambda_1 -k$ positions means that the first row of $\lambda$ begins exactly one box below and $k$ boxes left of the bottom-left box of the diagram $\Delta_{\alpha}$.
\begin{example} If we take $\alpha = (1,1,3,1,2,1)$, $\lambda = (6,5,5,5,3)$, and $k=0$, then we obtain the following staircase with bad foundation $\mathcal{S}(\lambda, \alpha;k)$.
\
\
\
\setlength{\unitlength}{0.3mm}
\begin{picture}(000,140)(110,-80)
\put(180,30){$\Delta_{\alpha}$}
\put(182,-50){$\lambda$}
\put(160,-20){\dashbox{3}(160,0)[tl]{ }}
\put(210,-70){\framebox(10,10)[tl]{ }}
\put(220,-70){\framebox(10,10)[tl]{ }}
\put(230,-70){\framebox(10,10)[tl]{ }}
\put(210,-60){\framebox(10,10)[tl]{ }}
\put(220,-60){\framebox(10,10)[tl]{ }}
\put(230,-60){\framebox(10,10)[tl]{ }}
\put(240,-60){\framebox(10,10)[tl]{ }}
\put(250,-60){\framebox(10,10)[tl]{ }}
\put(210,-50){\framebox(10,10)[tl]{ }}
\put(220,-50){\framebox(10,10)[tl]{ }}
\put(230,-50){\framebox(10,10)[tl]{ }}
\put(240,-50){\framebox(10,10)[tl]{ }}
\put(250,-50){\framebox(10,10)[tl]{ }}
\put(210,-40){\framebox(10,10)[tl]{ }}
\put(220,-40){\framebox(10,10)[tl]{ }}
\put(230,-40){\framebox(10,10)[tl]{ }}
\put(240,-40){\framebox(10,10)[tl]{ }}
\put(250,-40){\framebox(10,10)[tl]{ }}
\put(210,-30){\framebox(10,10)[tl]{ }}
\put(220,-30){\framebox(10,10)[tl]{ }}
\put(230,-30){\framebox(10,10)[tl]{ }}
\put(240,-30){\framebox(10,10)[tl]{ }}
\put(250,-30){\framebox(10,10)[tl]{ }}
\put(260,-30){\framebox(10,10)[tl]{ }}
\put(210,-20){\framebox(10,10)[tl]{ }}
\put(220,-20){\framebox(10,10)[tl]{ }}
\put(230,-20){\framebox(10,10)[tl]{ }}
\put(240,-20){\framebox(10,10)[tl]{ }}
\put(250,-20){\framebox(10,10)[tl]{ }}
\put(260,-20){\framebox(10,10)[tl]{ }}
\put(220,-10){\framebox(10,10)[tl]{ }}
\put(230,-10){\framebox(10,10)[tl]{ }}
\put(240,-10){\framebox(10,10)[tl]{ }}
\put(250,-10){\framebox(10,10)[tl]{ }}
\put(260,-10){\framebox(10,10)[tl]{ }}
\put(220,0){\framebox(10,10)[tl]{ }}
\put(230,0){\framebox(10,10)[tl]{ }}
\put(240,0){\framebox(10,10)[tl]{ }}
\put(250,0){\framebox(10,10)[tl]{ }}
\put(260,0){\framebox(10,10)[tl]{ }}
\put(230,10){\framebox(10,10)[tl]{ }}
\put(240,10){\framebox(10,10)[tl]{ }}
\put(250,10){\framebox(10,10)[tl]{ }}
\put(260,10){\framebox(10,10)[tl]{ }}
\put(240,20){\framebox(10,10)[tl]{ }}
\put(250,20){\framebox(10,10)[tl]{ }}
\put(260,20){\framebox(10,10)[tl]{ }}
\put(240,30){\framebox(10,10)[tl]{ }}
\put(250,30){\framebox(10,10)[tl]{ }}
\put(260,30){\framebox(10,10)[tl]{ }}
\put(240,40){\framebox(10,10)[tl]{ }}
\put(250,40){\framebox(10,10)[tl]{ }}
\put(260,40){\framebox(10,10)[tl]{ }}
\put(250,50){\framebox(10,10)[tl]{ }}
\put(260,50){\framebox(10,10)[tl]{ }}
\put(260,60){\framebox(10,10)[tl]{ }}
\end{picture}
\end{example}
One of the advantages in computing the skew Schur functions of fat staircases with bad foundations is that, when using the Littlewood-Richardson rule, the $\Delta_\alpha$ portion of the diagram can be filled in only one way.
By using Theorem~\ref{rotate} we can see this algebraically from the equation
\[ s_{\Delta_\alpha} = s_{\Delta_\alpha^\circ} = s_{\delta_\alpha}, \]
where $s_{\delta_\alpha}$ is a Schur function.
The unique filling of $\Delta_\alpha$ obeying the semistandard conditions and the lattice condition is easily seen to be the filling that places the entries $1,2,\ldots, l$ into each column of length $l$.
\begin{lemma}
\label{kfatfirstrowlemma}
Let $\mathcal{S}(\lambda,\alpha;k)$ be a fat staircase with bad foundation for some $k \geq 0$ and $\mathcal{T}$ be a SSYT of shape $\mathcal{S}(\lambda,\alpha;k)$ whose reading word is lattice.
If $\alpha=(\alpha_1,\dots, \alpha_n)$, then the entries in the first row of the foundation of $\mathcal{T}$ consist of values taken from the set
\[R_{\alpha,k} = \left\{ 1+\sum_{i=1}^{j} \alpha_{n+1-i} \,:\, j= 1, 2, \ldots, n \right\} \cup \left\{ \begin{array}{cll}
\{1\} & \mbox{if} & k > 0 \\
\emptyset & \mbox{if} & k=0.
\end{array}\right.\]
Furthermore, the value $1$ can occur at most $k$ times and the rest of the values can appear at most once.
\end{lemma}
\begin{proof}
Let $R$ be the first row of the foundation of $\mathcal{T}$ and $t \in R$.
Since $\mathcal{T}$ is a SSYT, the columns strictly increase. Thus $t = 1$ is allowed if and only if $k \geq 1$ since it is precisely in that case that the first value in $R$ is not below an entry of $\Delta_{\alpha}$.
Furthermore, since there are only $k$ boxes from the first row of the foundation of $\mathcal{T}$ that extend out from $\Delta_\alpha$, there can be at most $k$ $1$'s in $R$.
If $t > 1$ then, when reading the row $R$ from right to left, the lattice condition implies that there is at least one more $t-1$ in $\Delta_{\alpha}$ than there are $t$'s in $\Delta_{\alpha}$.
Since the content of $\Delta_{\alpha}$ is $(n^{\alpha_n},(n-1)^{\alpha_{n-1}}, \ldots, 1^{\alpha_1})$, the only instances when this occurs are when $t=1+\sum_{i=1}^{j} \alpha_{n+1-i}$ for some $j=1,2,\ldots,n$.
Therefore every entry of $R$ is an element of $R_{\alpha,k}$.
Further, if a value $t>1$ appeared twice in $R$, then the lattice condition would be violated.
Hence each $t \in R_{\alpha,k}$, $t \neq 1$, can appear at most once in $R$. \qed
\end{proof}
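The set $R_{\alpha,k}$ is just a collection of shifted partial sums of $\alpha$ read from its last part, so it is easy to compute; the sketch below (our own illustration, not part of the argument) does exactly that.

```python
def allowed_first_row_values(alpha, k):
    """The set R_{alpha,k} of the lemma: the values
    1 + alpha_n + ... + alpha_{n+1-j} for j = 1, ..., n,
    together with 1 when k > 0."""
    n = len(alpha)
    vals, total = set(), 0
    for j in range(1, n + 1):
        total += alpha[n - j]    # alpha_{n+1-j}
        vals.add(1 + total)
    if k > 0:
        vals.add(1)
    return sorted(vals)

print(allowed_first_row_values((2, 2, 1), 2))  # [1, 2, 4, 6]
```

For $\alpha=(2,2,1)$ and $k=2$ the allowed values are $1$, $2$, $4$, and $6$, consistent with the example tableau later in the section, whose foundation has first row $1,1,6$.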
The next result tells us when we may obtain a SSYT of shape $\mathcal{S}(\lambda,\alpha;k)$ with lattice reading word from a SSYT of shape $\lambda \oplus \Delta_\alpha$ with lattice reading word.
\begin{lemma}
\label{kfatjoinlemma}
Let $\alpha$ be a composition, $\lambda$ be a partition, and $k \geq 0$ such that $\lambda_1 -k \leq l(\alpha)$.
If $T$ is a SSYT of shape $\lambda \oplus \Delta_\alpha$ with lattice reading word such that there are at most $k$ $1$'s in the first row of $\lambda$, then the tableau of shape $\mathcal{S}(\lambda,\alpha;k)$ obtained from $T$ by shifting the foundation $\lambda$ to the right is also a SSYT with lattice reading word.
\end{lemma}
\begin{proof}
Let $T$ be a SSYT of shape $\lambda \oplus \Delta_\alpha$ with lattice reading word and let $T_k'$ be the tableau of shape $\mathcal{S}(\lambda,\alpha;k)$ obtained from $T$ by shifting the foundation $\lambda$ to the right.
Since shifting $\lambda$ to the right does not affect the order in which the entries are read, $T_k'$ has a lattice reading word.
Also, the rows of $T_k'$ weakly increase since they are the same as the rows of $T$.
Further, to check that the columns of $T_k'$ strictly increase, we need only check that they strictly increase at the positions where the two subdiagrams $\Delta_{\alpha}$ and $\lambda$ are joined.
Let $R$ denote the first row of $\lambda$ and $\alpha= (\alpha_1, \ldots, \alpha_n)$.
As in the proof of Lemma~\ref{kfatfirstrowlemma}, the lattice condition on $T$ implies that the entries of $R$ consist of values of $R_{\alpha, k}$.
Further, the value $1$ can occur at most $k$ times and the rest of the values of $R$ are distinct.
Let $q$ be the number of times $1$ appears in $R$, so that $k \geq q$.
Further, let $r_1 \leq r_2 \leq \ldots \leq r_{\lambda_1}$ be the entries of $R$.
\
Consider the case $k \geq 1$.
Since $r_1=r_2 =\ldots = r_q =1$, we have $r_q = \textrm{min}(R_{\alpha,k})$ and for each $1 \leq j \leq n$ we have
\[ r_{j+q} \geq \textrm{ the } (j+1) \textrm{-th smallest value of } R_{\alpha,k} = 1 + \sum_{i=1}^{j} \alpha_{n+1-i}. \]
Since $k \geq q$, for each $1 \leq j \leq n$ we have
\[ r_{j+k} \geq r_{j+q} \geq 1 + \sum_{i=1}^{j} \alpha_{n+1-i}. \]
As illustrated in the diagram below, the entry $r_{j+k}$ is beneath $\sum_{i=1}^{j} \alpha_{n+1-i}$ boxes.
From the unique filling of $\Delta_{\alpha}$, the entry of $\Delta_\alpha$ directly above $r_{j+k}$ is $\sum_{i=1}^{j} \alpha_{n+1-i}$.
\
\
\
\
\setlength{\unitlength}{0.7mm}
\begin{picture}(100,80)(30,5)
\put(115,105){$j$}
\put(142,105){$n-j$}
\put(40,15){$\sum_{i=1}^{j} \alpha_{n+1-i}$}
\put(75,-30){\line(0,1){90}}
\put(75,-30){\line(1,0){5}}
\put(75,60){\line(1,0){5}}
\put(90,100){\line(1,0){49}}
\put(141,100){\line(1,0){19}}
\put(90,100){\line(0,-1){5}}
\put(139,100){\line(0,-1){5}}
\put(141,100){\line(0,-1){5}}
\put(160,100){\line(0,-1){5}}
\put(90,-30){\dashbox{2}(70,120)[tl]{ }}
\put(50,-62){\dashbox{2}(110,30)[tl]{ }}
\put(175,10){$\Delta_\alpha$}
\put(175,-55){$\lambda $}
\put(90,-30){\framebox(10,10)[tl]{ }}
\put(100,-30){\framebox(10,10)[tl]{ }}
\put(110,-30){\framebox(10,10)[tl]{ }}
\put(130,-30){\framebox(10,10)[tl]{ }}
\put(150,-30){\framebox(10,10)[tl]{ }}
\put(100,-20){\framebox(10,10)[tl]{ }}
\put(110,-20){\framebox(10,10)[tl]{ }}
\put(130,-20){\framebox(10,10)[tl]{ }}
\put(150,-20){\framebox(10,10)[tl]{ }}
\put(100,-10){\framebox(10,10)[tl]{ }}
\put(110,-10){\framebox(10,10)[tl]{ }}
\put(130,-10){\framebox(10,10)[tl]{ }}
\put(150,-10){\framebox(10,10)[tl]{ }}
\put(110,0){\framebox(10,10)[tl]{ }}
\put(130,0){\framebox(10,10)[tl]{ }}
\put(150,0){\framebox(10,10)[tl]{ }}
\put(110,10){\framebox(10,10)[tl]{ }}
\put(130,10){\framebox(10,10)[tl]{ }}
\put(150,10){\framebox(10,10)[tl]{ }}
\put(130,20){\framebox(10,10)[tl]{ }}
\put(150,20){\framebox(10,10)[tl]{ }}
\put(130,30){\framebox(10,10)[tl]{ }}
\put(150,30){\framebox(10,10)[tl]{ }}
\put(130,40){\framebox(10,10)[tl]{ }}
\put(150,40){\framebox(10,10)[tl]{ }}
\put(130,50){\framebox(10,10)[tl]{ }}
\put(150,50){\framebox(10,10)[tl]{ }}
\put(150,60){\framebox(10,10)[tl]{ }}
\put(150,70){\framebox(10,10)[tl]{ }}
\put(150,80){\framebox(10,10)[tl]{ }}
\put(91,-39){$r_{1+k}$}
\put(101,-39){$r_{2+k}$}
\put(111,-39){$r_{3+k}$}
\put(131,-39){$r_{j+k}$}
\put(53,-39){$r_1$}
\put(63,-39){$r_2$}
\put(72,-49){$\cdots$}
\put(83,-39){$r_k$}
\put(50,-42){\framebox(10,10)[tl]{ }}
\put(60,-42){\framebox(10,10)[tl]{ }}
\put(80,-42){\framebox(10,10)[tl]{ }}
\put(90,-42){\framebox(10,10)[tl]{ }}
\put(100,-42){\framebox(10,10)[tl]{ }}
\put(110,-42){\framebox(10,10)[tl]{ }}
\put(130,-42){\framebox(10,10)[tl]{ }}
\put(50,-52){\framebox(10,10)[tl]{ }}
\put(60,-52){\framebox(10,10)[tl]{ }}
\put(80,-52){\framebox(10,10)[tl]{ }}
\put(90,-52){\framebox(10,10)[tl]{ }}
\put(100,-52){\framebox(10,10)[tl]{ }}
\put(110,-52){\framebox(10,10)[tl]{ }}
\put(130,-52){\framebox(10,10)[tl]{ }}
\put(50,-62){\framebox(10,10)[tl]{ }}
\put(60,-62){\framebox(10,10)[tl]{ }}
\put(80,-62){\framebox(10,10)[tl]{ }}
\put(90,-62){\framebox(10,10)[tl]{ }}
\put(100,-62){\framebox(10,10)[tl]{ }}
\put(122,-49){$\cdots$}
\put(142,-49){$\cdots$}
\put(122,-5){\ldots}
\put(142,-5){\ldots}
\end{picture}
\
\
\
\
\
\
\
\
\
\
\
\
\
Thus the columns strictly increase.
Therefore $T_k'$ is a SSYT with lattice reading word, as desired.
\
Now consider the case when $k=0$. Then for each $1 \leq j \leq n$ we have
\[ r_j \geq \textrm{the } j\textrm{-th smallest value of } R_{\alpha, k} = 1 + \sum_{i=1}^{j} \alpha_{n+1-i}. \]
Also, the entry $r_j$ is beneath precisely $\sum_{i=1}^{j} \alpha_{n+1-i}$ boxes, so the entry of $\Delta_\alpha$ directly above $r_j$ is $\sum_{i=1}^{j} \alpha_{n+1-i}$.
Thus the columns strictly increase.
Therefore $T_k'$ is a SSYT with lattice reading word, as desired.
\qed
\end{proof}
\begin{example}
Let $\alpha = (2,2,1)$, $\lambda = (3,2)$, and $k=2$.
Consider the SSYT of shape $\lambda \oplus \Delta_\alpha$ with lattice reading word and two $1$'s in the first row of $\lambda$ shown on the left.
This gives rise to the SSYT of shape $\mathcal{S}(\lambda,\alpha;2)$ with lattice reading word shown on the right.
\setlength{\unitlength}{0.35mm}
\begin{picture}(0,100)(-30,10)
\put(70,90){\framebox(10,10)[c]{\textrm{ }$1$ }}
\put(70,80){\framebox(10,10)[c]{\textrm{ }$2$ } }
\put(70,70){\framebox(10,10)[c]{\textrm{ }$3$ } }
\put(70,60){\framebox(10,10)[c]{\textrm{ }$4$ } }
\put(70,50){\framebox(10,10)[c]{\textrm{ }$5$ } }
\put(60,70){\framebox(10,10)[c]{\textrm{ }$1$ } }
\put(60,60){\framebox(10,10)[c]{\textrm{ }$2$ } }
\put(60,50){\framebox(10,10)[c]{\textrm{ }$3$ } }
\put(50,50){\framebox(10,10)[c]{\textrm{ }$1$ } }
\put(20,40){\framebox(10,10)[c]{\textrm{ }$1$ } }
\put(30,40){\framebox(10,10)[c]{\textrm{ }$1$ } }
\put(40,40){\framebox(10,10)[c]{\textrm{ }$6$ } }
\put(20,30){\framebox(10,10)[c]{\textrm{ }$2$ } }
\put(30,30){\framebox(10,10)[c]{\textrm{ }$7$ } }
\put(170,90){\framebox(10,10)[c]{\textrm{ }$1$ }}
\put(170,80){\framebox(10,10)[c]{\textrm{ }$2$ } }
\put(170,70){\framebox(10,10)[c]{\textrm{ }$3$ } }
\put(170,60){\framebox(10,10)[c]{\textrm{ }$4$ } }
\put(170,50){\framebox(10,10)[c]{\textrm{ }$5$ } }
\put(160,70){\framebox(10,10)[c]{\textrm{ }$1$ } }
\put(160,60){\framebox(10,10)[c]{\textrm{ }$2$ } }
\put(160,50){\framebox(10,10)[c]{\textrm{ }$3$ } }
\put(150,50){\framebox(10,10)[c]{\textrm{ }$1$ } }
\put(130,40){\framebox(10,10)[c]{\textrm{ }$1$ } }
\put(140,40){\framebox(10,10)[c]{\textrm{ }$1$ } }
\put(150,40){\framebox(10,10)[c]{\textrm{ }$6$ } }
\put(130,30){\framebox(10,10)[c]{\textrm{ }$2$ } }
\put(140,30){\framebox(10,10)[c]{\textrm{ }$7$ } }
\end{picture}
\end{example}
\section{Fat Staircases with Hook Foundations}
Recall from the preliminaries that we write $D_1 \succeq_s D_2$ whenever $s_{D_1}-s_{D_2} \geq_s 0$, and that $\succeq_s$ defines a partial ordering on the set of Schur-equivalence classes $[D]_s$ of diagrams, whose Hasse diagram we may then draw.
For the sake of convenience, we write $D$ in place of $[D]_s$.
\begin{example}
Here we show the Hasse diagram for $\succeq_s$ on the collection of staircases with bad foundations $\mathcal{S}(\lambda,(1^7);0)$, for $\lambda$ varying over all hooks of size $7$.
A line drawn from a diagram $D_1$ to a diagram $D_2$ in an upwards direction indicates that $s_{D_1} - s_{D_2} \geq_s 0$.
We note that the diagrams along the top are pairwise Schur-incomparable; that is, they form an antichain with respect to $\succeq_s$.
The diagrams along the right, in contrast, are pairwise comparable; that is, they form a chain with respect to $\succeq_s$.
\setlength{\unitlength}{0.20mm}
\begin{picture}(00,500)(-100,-400)
\put(42,-80){\line(1,-1){263}}
\put(130,-80){\line(1,-1){165}}
\put(130,-80){\line(2,-3){176}}
\put(240,-80){\line(1,-1){60}}
\put(240,-80){\line(1,-3){55}}
\put(240,-80){\line(1,-4){66}}
\put(345,-320){\line(0,1){30}}
\put(345,-210){\line(0,1){30}}
\put(345,-90){\line(0,1){30}}
\put(310,-390){\framebox(10,10)[tl]{ }}
\put(320,-390){\framebox(10,10)[tl]{ }}
\put(330,-390){\framebox(10,10)[tl]{ }}
\put(340,-390){\framebox(10,10)[tl]{ }}
\put(350,-390){\framebox(10,10)[tl]{ }}
\put(360,-390){\framebox(10,10)[tl]{ }}
\put(370,-390){\framebox(10,10)[tl]{ }}
\put(310,-380){\framebox(10,10)[tl]{ }}
\put(320,-380){\framebox(10,10)[tl]{ }}
\put(330,-380){\framebox(10,10)[tl]{ }}
\put(340,-380){\framebox(10,10)[tl]{ }}
\put(350,-380){\framebox(10,10)[tl]{ }}
\put(360,-380){\framebox(10,10)[tl]{ }}
\put(370,-380){\framebox(10,10)[tl]{ }}
\put(320,-370){\framebox(10,10)[tl]{ }}
\put(330,-370){\framebox(10,10)[tl]{ }}
\put(340,-370){\framebox(10,10)[tl]{ }}
\put(350,-370){\framebox(10,10)[tl]{ }}
\put(360,-370){\framebox(10,10)[tl]{ }}
\put(370,-370){\framebox(10,10)[tl]{ }}
\put(330,-360){\framebox(10,10)[tl]{ }}
\put(340,-360){\framebox(10,10)[tl]{ }}
\put(350,-360){\framebox(10,10)[tl]{ }}
\put(360,-360){\framebox(10,10)[tl]{ }}
\put(370,-360){\framebox(10,10)[tl]{ }}
\put(340,-350){\framebox(10,10)[tl]{ }}
\put(350,-350){\framebox(10,10)[tl]{ }}
\put(360,-350){\framebox(10,10)[tl]{ }}
\put(370,-350){\framebox(10,10)[tl]{ }}
\put(350,-340){\framebox(10,10)[tl]{ }}
\put(360,-340){\framebox(10,10)[tl]{ }}
\put(370,-340){\framebox(10,10)[tl]{ }}
\put(360,-330){\framebox(10,10)[tl]{ }}
\put(370,-330){\framebox(10,10)[tl]{ }}
\put(370,-320){\framebox(10,10)[tl]{ }}
\put(310,-290){\framebox(10,10)[tl]{ }}
\put(310,-280){\framebox(10,10)[tl]{ }}
\put(320,-280){\framebox(10,10)[tl]{ }}
\put(330,-280){\framebox(10,10)[tl]{ }}
\put(340,-280){\framebox(10,10)[tl]{ }}
\put(350,-280){\framebox(10,10)[tl]{ }}
\put(360,-280){\framebox(10,10)[tl]{ }}
\put(310,-270){\framebox(10,10)[tl]{ }}
\put(320,-270){\framebox(10,10)[tl]{ }}
\put(330,-270){\framebox(10,10)[tl]{ }}
\put(340,-270){\framebox(10,10)[tl]{ }}
\put(350,-270){\framebox(10,10)[tl]{ }}
\put(360,-270){\framebox(10,10)[tl]{ }}
\put(370,-270){\framebox(10,10)[tl]{ }}
\put(320,-260){\framebox(10,10)[tl]{ }}
\put(330,-260){\framebox(10,10)[tl]{ }}
\put(340,-260){\framebox(10,10)[tl]{ }}
\put(350,-260){\framebox(10,10)[tl]{ }}
\put(360,-260){\framebox(10,10)[tl]{ }}
\put(370,-260){\framebox(10,10)[tl]{ }}
\put(330,-250){\framebox(10,10)[tl]{ }}
\put(340,-250){\framebox(10,10)[tl]{ }}
\put(350,-250){\framebox(10,10)[tl]{ }}
\put(360,-250){\framebox(10,10)[tl]{ }}
\put(370,-250){\framebox(10,10)[tl]{ }}
\put(340,-240){\framebox(10,10)[tl]{ }}
\put(350,-240){\framebox(10,10)[tl]{ }}
\put(360,-240){\framebox(10,10)[tl]{ }}
\put(370,-240){\framebox(10,10)[tl]{ }}
\put(350,-230){\framebox(10,10)[tl]{ }}
\put(360,-230){\framebox(10,10)[tl]{ }}
\put(370,-230){\framebox(10,10)[tl]{ }}
\put(360,-220){\framebox(10,10)[tl]{ }}
\put(370,-220){\framebox(10,10)[tl]{ }}
\put(370,-210){\framebox(10,10)[tl]{ }}
\put(310,-180){\framebox(10,10)[tl]{ }}
\put(310,-170){\framebox(10,10)[tl]{ }}
\put(310,-160){\framebox(10,10)[tl]{ }}
\put(320,-160){\framebox(10,10)[tl]{ }}
\put(330,-160){\framebox(10,10)[tl]{ }}
\put(340,-160){\framebox(10,10)[tl]{ }}
\put(350,-160){\framebox(10,10)[tl]{ }}
\put(310,-150){\framebox(10,10)[tl]{ }}
\put(320,-150){\framebox(10,10)[tl]{ }}
\put(330,-150){\framebox(10,10)[tl]{ }}
\put(340,-150){\framebox(10,10)[tl]{ }}
\put(350,-150){\framebox(10,10)[tl]{ }}
\put(360,-150){\framebox(10,10)[tl]{ }}
\put(370,-150){\framebox(10,10)[tl]{ }}
\put(320,-140){\framebox(10,10)[tl]{ }}
\put(330,-140){\framebox(10,10)[tl]{ }}
\put(340,-140){\framebox(10,10)[tl]{ }}
\put(350,-140){\framebox(10,10)[tl]{ }}
\put(360,-140){\framebox(10,10)[tl]{ }}
\put(370,-140){\framebox(10,10)[tl]{ }}
\put(330,-130){\framebox(10,10)[tl]{ }}
\put(340,-130){\framebox(10,10)[tl]{ }}
\put(350,-130){\framebox(10,10)[tl]{ }}
\put(360,-130){\framebox(10,10)[tl]{ }}
\put(370,-130){\framebox(10,10)[tl]{ }}
\put(340,-120){\framebox(10,10)[tl]{ }}
\put(350,-120){\framebox(10,10)[tl]{ }}
\put(360,-120){\framebox(10,10)[tl]{ }}
\put(370,-120){\framebox(10,10)[tl]{ }}
\put(350,-110){\framebox(10,10)[tl]{ }}
\put(360,-110){\framebox(10,10)[tl]{ }}
\put(370,-110){\framebox(10,10)[tl]{ }}
\put(360,-100){\framebox(10,10)[tl]{ }}
\put(370,-100){\framebox(10,10)[tl]{ }}
\put(370,-90){\framebox(10,10)[tl]{ }}
\put(310,-50){\framebox(10,10)[tl]{ }}
\put(310,-40){\framebox(10,10)[tl]{ }}
\put(310,-30){\framebox(10,10)[tl]{ }}
\put(310,-20){\framebox(10,10)[tl]{ }}
\put(320,-20){\framebox(10,10)[tl]{ }}
\put(330,-20){\framebox(10,10)[tl]{ }}
\put(340,-20){\framebox(10,10)[tl]{ }}
\put(310,-10){\framebox(10,10)[tl]{ }}
\put(320,-10){\framebox(10,10)[tl]{ }}
\put(330,-10){\framebox(10,10)[tl]{ }}
\put(340,-10){\framebox(10,10)[tl]{ }}
\put(350,-10){\framebox(10,10)[tl]{ }}
\put(360,-10){\framebox(10,10)[tl]{ }}
\put(370,-10){\framebox(10,10)[tl]{ }}
\put(320,0){\framebox(10,10)[tl]{ }}
\put(330,0){\framebox(10,10)[tl]{ }}
\put(340,0){\framebox(10,10)[tl]{ }}
\put(350,0){\framebox(10,10)[tl]{ }}
\put(360,0){\framebox(10,10)[tl]{ }}
\put(370,0){\framebox(10,10)[tl]{ }}
\put(330,10){\framebox(10,10)[tl]{ }}
\put(340,10){\framebox(10,10)[tl]{ }}
\put(350,10){\framebox(10,10)[tl]{ }}
\put(360,10){\framebox(10,10)[tl]{ }}
\put(370,10){\framebox(10,10)[tl]{ }}
\put(340,20){\framebox(10,10)[tl]{ }}
\put(350,20){\framebox(10,10)[tl]{ }}
\put(360,20){\framebox(10,10)[tl]{ }}
\put(370,20){\framebox(10,10)[tl]{ }}
\put(350,30){\framebox(10,10)[tl]{ }}
\put(360,30){\framebox(10,10)[tl]{ }}
\put(370,30){\framebox(10,10)[tl]{ }}
\put(360,40){\framebox(10,10)[tl]{ }}
\put(370,40){\framebox(10,10)[tl]{ }}
\put(370,50){\framebox(10,10)[tl]{ }}
\put(200,-60){\framebox(10,10)[tl]{ }}
\put(200,-50){\framebox(10,10)[tl]{ }}
\put(200,-40){\framebox(10,10)[tl]{ }}
\put(200,-30){\framebox(10,10)[tl]{ }}
\put(200,-20){\framebox(10,10)[tl]{ }}
\put(210,-20){\framebox(10,10)[tl]{ }}
\put(220,-20){\framebox(10,10)[tl]{ }}
\put(200,-10){\framebox(10,10)[tl]{ }}
\put(210,-10){\framebox(10,10)[tl]{ }}
\put(220,-10){\framebox(10,10)[tl]{ }}
\put(230,-10){\framebox(10,10)[tl]{ }}
\put(240,-10){\framebox(10,10)[tl]{ }}
\put(250,-10){\framebox(10,10)[tl]{ }}
\put(260,-10){\framebox(10,10)[tl]{ }}
\put(210,0){\framebox(10,10)[tl]{ }}
\put(220,0){\framebox(10,10)[tl]{ }}
\put(230,0){\framebox(10,10)[tl]{ }}
\put(240,0){\framebox(10,10)[tl]{ }}
\put(250,0){\framebox(10,10)[tl]{ }}
\put(260,0){\framebox(10,10)[tl]{ }}
\put(220,10){\framebox(10,10)[tl]{ }}
\put(230,10){\framebox(10,10)[tl]{ }}
\put(240,10){\framebox(10,10)[tl]{ }}
\put(250,10){\framebox(10,10)[tl]{ }}
\put(260,10){\framebox(10,10)[tl]{ }}
\put(230,20){\framebox(10,10)[tl]{ }}
\put(240,20){\framebox(10,10)[tl]{ }}
\put(250,20){\framebox(10,10)[tl]{ }}
\put(260,20){\framebox(10,10)[tl]{ }}
\put(240,30){\framebox(10,10)[tl]{ }}
\put(250,30){\framebox(10,10)[tl]{ }}
\put(260,30){\framebox(10,10)[tl]{ }}
\put(250,40){\framebox(10,10)[tl]{ }}
\put(260,40){\framebox(10,10)[tl]{ }}
\put(260,50){\framebox(10,10)[tl]{ }}
\put(90,-70){\framebox(10,10)[tl]{ }}
\put(90,-60){\framebox(10,10)[tl]{ }}
\put(90,-50){\framebox(10,10)[tl]{ }}
\put(90,-40){\framebox(10,10)[tl]{ }}
\put(90,-30){\framebox(10,10)[tl]{ }}
\put(90,-20){\framebox(10,10)[tl]{ }}
\put(100,-20){\framebox(10,10)[tl]{ }}
\put(90,-10){\framebox(10,10)[tl]{ }}
\put(100,-10){\framebox(10,10)[tl]{ }}
\put(110,-10){\framebox(10,10)[tl]{ }}
\put(120,-10){\framebox(10,10)[tl]{ }}
\put(130,-10){\framebox(10,10)[tl]{ }}
\put(140,-10){\framebox(10,10)[tl]{ }}
\put(150,-10){\framebox(10,10)[tl]{ }}
\put(100,0){\framebox(10,10)[tl]{ }}
\put(110,0){\framebox(10,10)[tl]{ }}
\put(120,0){\framebox(10,10)[tl]{ }}
\put(130,0){\framebox(10,10)[tl]{ }}
\put(140,0){\framebox(10,10)[tl]{ }}
\put(150,0){\framebox(10,10)[tl]{ }}
\put(110,10){\framebox(10,10)[tl]{ }}
\put(120,10){\framebox(10,10)[tl]{ }}
\put(130,10){\framebox(10,10)[tl]{ }}
\put(140,10){\framebox(10,10)[tl]{ }}
\put(150,10){\framebox(10,10)[tl]{ }}
\put(120,20){\framebox(10,10)[tl]{ }}
\put(130,20){\framebox(10,10)[tl]{ }}
\put(140,20){\framebox(10,10)[tl]{ }}
\put(150,20){\framebox(10,10)[tl]{ }}
\put(130,30){\framebox(10,10)[tl]{ }}
\put(140,30){\framebox(10,10)[tl]{ }}
\put(150,30){\framebox(10,10)[tl]{ }}
\put(140,40){\framebox(10,10)[tl]{ }}
\put(150,40){\framebox(10,10)[tl]{ }}
\put(150,50){\framebox(10,10)[tl]{ }}
\put(-20,-70){\framebox(10,10)[tl]{ }}
\put(-20,-60){\framebox(10,10)[tl]{ }}
\put(-20,-50){\framebox(10,10)[tl]{ }}
\put(-20,-40){\framebox(10,10)[tl]{ }}
\put(-20,-30){\framebox(10,10)[tl]{ }}
\put(-20,-20){\framebox(10,10)[tl]{ }}
\put(-20,-80){\framebox(10,10)[tl]{ }}
\put(-20,-10){\framebox(10,10)[tl]{ }}
\put(-10,-10){\framebox(10,10)[tl]{ }}
\put(0,-10){\framebox(10,10)[tl]{ }}
\put(10,-10){\framebox(10,10)[tl]{ }}
\put(20,-10){\framebox(10,10)[tl]{ }}
\put(30,-10){\framebox(10,10)[tl]{ }}
\put(40,-10){\framebox(10,10)[tl]{ }}
\put(-10,0){\framebox(10,10)[tl]{ }}
\put(0,0){\framebox(10,10)[tl]{ }}
\put(10,0){\framebox(10,10)[tl]{ }}
\put(20,0){\framebox(10,10)[tl]{ }}
\put(30,0){\framebox(10,10)[tl]{ }}
\put(40,0){\framebox(10,10)[tl]{ }}
\put(0,10){\framebox(10,10)[tl]{ }}
\put(10,10){\framebox(10,10)[tl]{ }}
\put(20,10){\framebox(10,10)[tl]{ }}
\put(30,10){\framebox(10,10)[tl]{ }}
\put(40,10){\framebox(10,10)[tl]{ }}
\put(10,20){\framebox(10,10)[tl]{ }}
\put(20,20){\framebox(10,10)[tl]{ }}
\put(30,20){\framebox(10,10)[tl]{ }}
\put(40,20){\framebox(10,10)[tl]{ }}
\put(20,30){\framebox(10,10)[tl]{ }}
\put(30,30){\framebox(10,10)[tl]{ }}
\put(40,30){\framebox(10,10)[tl]{ }}
\put(30,40){\framebox(10,10)[tl]{ }}
\put(40,40){\framebox(10,10)[tl]{ }}
\put(40,50){\framebox(10,10)[tl]{ }}
\end{picture}
\end{example}
\
We shall summarize all the $\succeq_s$ relationships between diagrams of the form $\mathcal{S}(\lambda,\alpha;k)$ when $\lambda$ is a hook of fixed size $h \leq n+k$ and $0 \leq k \leq h$.
The restriction $h \leq n+k$ is needed to guarantee that $\mathcal{S}(\lambda,\alpha;k)$ is a skew diagram for every hook $\lambda$ of size $h$.
We impose the restriction $k \leq h$ since, for $k \geq h$, every diagram of the form $\mathcal{S}(\lambda,\alpha;k)$, when $\lambda$ is a hook of fixed size $h$, is disconnected.
Thus, for each $k \geq h$, the skew Schur functions of these diagrams factor as
\[ s_{\mathcal{S}(\lambda,\alpha;k)} = s_{\lambda} s_{\Delta_\alpha}. \]
In particular
\[s_{\mathcal{S}(\lambda,\alpha;k)} = s_{\mathcal{S}(\lambda,\alpha;h)} \]
for all $k \geq h$; hence the differences do not change as $k$ increases beyond $h$.
Furthermore, we shall see that for $k \geq h$, none of the differences is Schur-positive.
First we give one more example, this time with $k$ varying from $0$ to $h$.
\begin{example} For each $0 \leq k \leq 6$ we show the Hasse diagrams for $\succeq_s$ on the collection of staircases with bad foundations $\mathcal{S}(\lambda,\alpha;k)$ for some $\alpha=(\alpha_1,\ldots,\alpha_n)$ where $n\geq 6$, and $\lambda$ varying over all hooks of size $h=6$.
\setlength{\unitlength}{0.19mm}
\begin{picture}(100,60)(-120,0)
\put(-150,-400){\framebox(650,440)[tl]{ }}
\put(-100,-180){$k=0,1$}
\put(42,-80){\line(1,-1){263}}
\put(130,-80){\line(1,-1){165}}
\put(130,-80){\line(2,-3){176}}
\put(240,-80){\line(1,-1){60}}
\put(240,-80){\line(1,-3){55}}
\put(240,-80){\line(1,-4){66}}
\put(345,-320){\line(0,1){30}}
\put(345,-210){\line(0,1){30}}
\put(0,0){$s_{\mathcal{S}(\hooka \textrm{ } \textrm{ },\alpha;k) }$}
\put(100,0){$s_{\mathcal{S}(\hookb \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(200,0){$s_{\mathcal{S}(\hookc \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(320,-130){$s_{\mathcal{S}(\hookd \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(320,-250){$s_{\mathcal{S}(\hooke \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(320,-360){$s_{\mathcal{S}(\hookf \textrm{ } \textrm{ } \textrm{ }\textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\end{picture}
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\newpage
\setlength{\unitlength}{0.20mm}
\begin{picture}(100,60)(-120,0)
\put(-100,-180){$k=2$}
\put(-150,-400){\framebox(650,440)[tl]{ }}
\put(130,-80){\line(2,-3){176}}
\put(240,-80){\line(1,-3){55}}
\put(240,-80){\line(1,-4){66}}
\put(345,-320){\line(0,1){30}}
\put(345,-210){\line(0,1){30}}
\put(0,0){$s_{\mathcal{S}(\hooka \textrm{ } \textrm{ },\alpha;k) }$}
\put(100,0){$s_{\mathcal{S}(\hookb \textrm{ }\textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(200,0){$s_{\mathcal{S}(\hookc \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(320,-130){$s_{\mathcal{S}(\hookd \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(320,-250){$s_{\mathcal{S}(\hooke \textrm{ } \textrm{ }\textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(320,-360){$s_{\mathcal{S}(\hookf \textrm{ } \textrm{ }\textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\end{picture}
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\setlength{\unitlength}{0.20mm}
\begin{picture}(100,60)(-120,0)
\put(-100,-180){$k=3$}
\put(-150,-400){\framebox(650,440)[tl]{ }}
\put(240,-80){\line(1,-4){66}}
\put(345,-320){\line(0,1){30}}
\put(345,-210){\line(0,1){30}}
\put(0,0){$s_{\mathcal{S}(\hooka \textrm{ } \textrm{ },\alpha;k) }$}
\put(100,0){$s_{\mathcal{S}(\hookb \textrm{ }\textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(200,0){$s_{\mathcal{S}(\hookc \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(320,-130){$s_{\mathcal{S}(\hookd \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(320,-250){$s_{\mathcal{S}(\hooke \textrm{ } \textrm{ } \textrm{ }\textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(320,-360){$s_{\mathcal{S}(\hookf \textrm{ } \textrm{ } \textrm{ }\textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\end{picture}
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\newpage
\setlength{\unitlength}{0.20mm}
\begin{picture}(100,60)(-120,0)
\put(-100,-180){$k=4$}
\put(-150,-400){\framebox(650,440)[tl]{ }}
\put(375,-330){\line(-1,2){20}}
\put(375,-330){\line(1,6){24}}
\put(0,0){$s_{\mathcal{S}(\hooka \textrm{ } \textrm{ },\alpha;k) }$}
\put(100,0){$s_{\mathcal{S}(\hookb \textrm{ } \textrm{ }\textrm{ } \textrm{ },\alpha;k) }$}
\put(200,0){$s_{\mathcal{S}(\hookc \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(380,-130){$s_{\mathcal{S}(\hookd \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(250,-250){$s_{\mathcal{S}(\hooke \textrm{ } \textrm{ }\textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(320,-360){$s_{\mathcal{S}(\hookf \textrm{ } \textrm{ }\textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\end{picture}
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\setlength{\unitlength}{0.20mm}
\begin{picture}(100,60)(-120,0)
\put(-100,-180){$k=5$}
\put(-150,-400){\framebox(650,440)[tl]{ }}
\put(345,-320){\line(0,1){30}}
\put(0,0){$s_{\mathcal{S}(\hooka \textrm{ } \textrm{ },\alpha;k) }$}
\put(100,0){$s_{\mathcal{S}(\hookb \textrm{ }\textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(200,0){$s_{\mathcal{S}(\hookc \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(320,-130){$s_{\mathcal{S}(\hookd \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(320,-250){$s_{\mathcal{S}(\hooke \textrm{ }\textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(320,-360){$s_{\mathcal{S}(\hookf \textrm{ }\textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\end{picture}
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\newpage
\setlength{\unitlength}{0.20mm}
\begin{picture}(100,60)(-120,0)
\put(-100,-180){$k \geq 6$}
\put(-150,-400){\framebox(650,440)[tl]{ }}
\put(0,0){$s_{\mathcal{S}(\hooka \textrm{ } \textrm{ },\alpha;k) }$}
\put(100,0){$s_{\mathcal{S}(\hookb \textrm{ }\textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(200,0){$s_{\mathcal{S}(\hookc \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(320,-130){$s_{\mathcal{S}(\hookd \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(320,-250){$s_{\mathcal{S}(\hooke \textrm{ }\textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\put(320,-360){$s_{\mathcal{S}(\hookf \textrm{ }\textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ } \textrm{ },\alpha;k) }$}
\end{picture}
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
\
For $0 \leq k \leq 1$ the Hasse diagram has the same structure as the one displayed in the first example.
As $k$ increases, fewer of the $\succeq_s$ relations remain, until finally, when $k \geq 6$, there are no Schur-positive differences among these diagrams.
We note that the chain formed by the diagrams on the right when $k=0$ also loses its structure as $k$ increases.
\end{example}
\
When working with hooks, we shall find it convenient to describe each hook by its \textit{arm length} and \textit{leg length}.
Hence, we let $\mu$ be the hook $(\mu_a,1^{\mu_l - 1})$ and $\lambda$ be the hook $(\lambda_a,1^{\lambda_l - 1})$.
Throughout this section we shall use a fixed fat staircase $\Delta_\alpha$, where $\alpha = (\alpha_1,\alpha_2,\ldots,\alpha_n)$.
Thus $n$ is the width of the fat staircase.
The following results summarize all the $\succeq_s$ relationships between diagrams of the form $\mathcal{S}(\lambda,\alpha;k)$ when $\lambda$ is a hook of fixed size $h \leq n+k$ and $0 \leq k \leq h$.
For each pair of hooks $\lambda$, $\mu$ with $\lambda_a, \mu_a \leq \longlroof \frac{h}{2} \longrroof$, Theorem~\ref{kantichainhooks1} and Theorem~\ref{kantichainhooks11} each prove one side of the Schur-incomparability of this pair, thus describing the antichain structure displayed along the top of the Hasse diagrams in the previous examples.
For each pair of hooks $\lambda$, $\mu$ with $\longlroof \frac{h}{2} \longrroof \leq \lambda_a < \mu_a$, Theorem~\ref{kchainhooks} and Theorem~\ref{kchainhooksb} show that $\mathcal{S}(\lambda,\alpha;k) \succeq_s \mathcal{S}(\mu,\alpha;k)$ if and only if $\lambda_a \geq \mu_l+k-1$.
This describes relations among those diagrams displayed along the right of the Hasse diagrams in the previous examples.
Finally, for each pair of hooks $\lambda$, $\mu$ with $\lambda_a, \mu_l < \longlroof \frac{h}{2} \longrroof$,
Theorem~\ref{kcrosshooks1} and Theorem~\ref{kcrosshooks11} show that when $1 \leq k \leq h$ we have $\lambda_a \geq \mu_l+k-1$ if and only if $\mathcal{S}(\lambda, \alpha;k) \succeq_s \mathcal{S}(\mu, \alpha;k)$, and when $k=0$ we have $\lambda_a \geq \mu_l$ if and only if $\mathcal{S}(\lambda, \alpha;k) \succeq_s \mathcal{S}(\mu, \alpha;k)$.
This describes the relationships between the diagrams displayed on the right with the diagrams displayed along the top in the previous Hasse diagrams.
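The chain criterion above can be checked computationally. The sketch below uses a hypothetical helper (not from the paper) that enumerates, for $h=6$, the pairs of arm lengths on the chain side $\longlroof \frac{h}{2} \longrroof \leq \lambda_a < \mu_a$ satisfying $\lambda_a \geq \mu_l + k - 1$; it reproduces the thinning of the Hasse diagrams in the earlier example as $k$ grows.

```python
from math import ceil

def chain_relations(h, k):
    """Pairs (lambda_a, mu_a) of hook arm lengths, with
    ceil(h/2) <= lambda_a < mu_a <= h, for which the criterion
    lambda_a >= mu_l + k - 1 (where mu_l = h - mu_a + 1) predicts
    S(lambda,alpha;k) >=_s S(mu,alpha;k)."""
    return [(la, ma)
            for la in range(ceil(h / 2), h + 1)
            for ma in range(la + 1, h + 1)
            if la >= (h - ma + 1) + k - 1]

# For h = 6: every pair on the chain side is related when k = 0,
# only (5, 6) survives at k = 5, and none remain once k >= 6.
print(len(chain_relations(6, 0)))   # 6 pairs
print(chain_relations(6, 5))        # [(5, 6)]
print(chain_relations(6, 6))        # []
```

The $k=0$ and $k=1$ lists coincide, matching the identical Hasse diagrams shown for those two cases.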
\
We begin by examining the antichain structure.
\begin{theorem}
\label{kantichainhooks1}
Let $\lambda$ and $\mu$ be distinct hooks with $|\lambda|=|\mu| =h \leq n+k$ and $\lambda_a < \mu_a \leq \longlroof \frac{h}{2} \longrroof$, and let $0 \leq k \leq h$.
Then $\mathcal{S}(\mu, \alpha;k) \not\succeq_s \mathcal{S}(\lambda, \alpha;k)$.
\end{theorem}
\begin{proof}
We shall show that there exists a SSYT $\mathcal{T}$ of shape $\mathcal{S}(\lambda, \alpha;k)$ with lattice reading word such that there is no SSYT of shape $\mathcal{S}(\mu, \alpha;k)$ with lattice reading word having the same content.
This is sufficient to prove the theorem.
Since $\lambda_a < \mu_a$ and $|\lambda|=|\mu|$, we have $\lambda_l > \mu_l$.
Let $r_1=r_2= \ldots = r_k=1$ and let $r_{1+k} < r_{2+k} < \ldots < r_{n+k}$ be the values of $R_{\alpha,k}$ greater than $1$.
We can create a SSYT of shape $\lambda$ by filling the boxes of $\lambda$ as follows.
\begin{center}
\begin{tabular}{cccccccc}
$r_1$ & \ldots & $r_k$ & $r_{1+k}$ & $r_{2+k}$ & $\cdots$ & $r_{\lambda_a-1}$ & $|\alpha|+1$ \\
$|\alpha|+2$\\
$|\alpha|+3$\\
$\vdots$\\
$|\alpha|+\lambda_l$ \\
\end{tabular}
\end{center}
\noindent Using the unique filling of $\Delta_\alpha$, it is easy to check that the resulting tableau of shape $\lambda \oplus \Delta_\alpha$ has lattice reading word, since each of the entries in the first row of $\lambda$ is from $R_{\alpha,k}$ and the entry $1$ appears $k$ times.
Thus Lemma~\ref{kfatjoinlemma} provides us with a SSYT $\mathcal{T}$ of shape $\mathcal{S}(\lambda, \alpha;k)$ with lattice reading word, where $\lambda$ is filled as shown above.
Since $\mu_l < \lambda_l$, we have $l(\mathcal{S}(\mu, \alpha;k))=|\alpha|+\mu_l<|\alpha|+\lambda_l$.
Therefore no SSYT of shape $\mathcal{S}(\mu, \alpha;k)$ with lattice reading word can contain the entry $|\alpha|+\lambda_l$.
Thus no SSYT of shape $\mathcal{S}(\mu, \alpha;k)$ with lattice reading word can have the same content as $\mathcal{T}$.
Hence, it follows that $s_{\mathcal{S}(\mu,\alpha;k)}-s_{\mathcal{S}(\lambda,\alpha;k)} \not\geq_s 0$. \qed
\end{proof}
\begin{theorem}
\label{kantichainhooks11}
Let $\lambda$ and $\mu$ be distinct hooks with $|\lambda|=|\mu| =h \leq n+k$ and $\lambda_a < \mu_a \leq \longlroof \frac{h}{2} \longrroof$ and let $0 \leq k \leq h$.
Then $\mathcal{S}(\lambda, \alpha;k) \not\succeq_s \mathcal{S}(\mu, \alpha;k)$.
\end{theorem}
\begin{proof}
We first consider the case when $k>0$.
We let $r_1 = r_2 = \ldots = r_k = 1$ and let $r_{1+k} < r_{2+k} < \ldots < r_{n+k}$ be the values of $R_{\alpha,k}$ greater than 1, where we note $n+k \geq h$.
We can create a SSYT of shape $\mu$ by filling the boxes of $\mu$ as follows.
\begin{center}
\begin{tabular}{cccccccc}
$r_1$ & $\cdots$ & $r_k$ & $r_{1+k}$ & $r_{2+k}$ & $\cdots$ & $r_{\mu_a}$ \\
$r_{\mu_a+1}$\\
$r_{\mu_a+2}$\\
$\vdots$\\
$r_{h}$ \\
\end{tabular}
\end{center}
\noindent Since $r_1 = r_2 = \ldots = r_k = 1$ and $r_{1+k} < r_{2+k} < \ldots < r_{h}$ are distinct values of $R_{\alpha,k}$, it is easy to check that the resulting tableau of shape $\mu \oplus \Delta_\alpha$ has lattice reading word.
Thus Lemma~\ref{kfatjoinlemma} provides us with a SSYT $\mathcal{T}$ of shape $\mathcal{S}(\mu, \alpha;k)$ with lattice reading word, where $\mu$ is filled as shown above.
We now wish to count all SSYTx of shape $\mathcal{S}(\mu, \alpha;k)$ (shape $\mathcal{S}(\lambda, \alpha;k)$, respectively) with content $\nu=c(\mathcal{T})$.
Since $\Delta_\alpha$ has a unique way of being filled, we must find all semistandard fillings of $\mu$ ($\lambda$, resp.) with the values $r_1,r_2,\ldots,r_h$.
Since $r_1 = r_2 = \ldots = r_k = 1$ and $r_{1+k} < r_{2+k} < \ldots < r_{h}$, the values $r_1, r_2, \ldots, r_k$ must appear as the first $k$ values of the first row of $\mu$ ($\lambda$, resp.).
Further, once we choose $\mu_a-k$ ($\lambda_a -k$, resp.) of the values $r_{1+k}, r_{2+k}, \ldots, r_h$ to appear in the first row of $\mu$ (first row of $\lambda$, resp.), then the remaining $r$'s must appear in the first column and the order of all these values is uniquely determined by the semistandard conditions.
Therefore the number of SSYTx of shape $\mathcal{S}(\mu, \alpha ;k)= \kappa' / \rho'$ with lattice reading word and content $\nu =c(\mathcal{T})$ is given by
\[ c_{\rho' \nu}^{\kappa'} = \left( \begin{array}{c}
h-k \\
\mu_a -k\\
\end{array} \right) \]
and the number of SSYTx of shape $\mathcal{S}(\lambda, \alpha ;k)= \kappa / \rho$ with lattice reading word and content $\nu=c(\mathcal{T})$ is given by
\[ c_{\rho \nu}^{\kappa} = \left( \begin{array}{c}
h-k \\
\lambda_a -k\\
\end{array} \right). \]
\
Since $\lambda_a < \mu_a \leq \longlroof \frac{h}{2} \longrroof$, we have $\lambda_a +\mu_a < h+1 \leq h+k$.
Therefore we have $h - \lambda_a > \mu_a-k$ and we obtain
\begin{equation}
\label{antieq1}
h - \lambda_a -i> \mu_a-k-i,
\end{equation}
for each $i$ with $0 \leq i \leq \mu_a - \lambda_a - 1$.
\
\noindent Therefore
\begin{eqnarray*}
\ c_{\rho' \nu}^{\kappa'} & = & \frac{(h-k)! }{(h-\mu_a)!(\mu_a-k)!}
\\ & = & \frac{(h-k)! }{(h-\lambda_a)!(\lambda_a-k)!} \times \prod_{i=0}^{\mu_a - \lambda_a-1} \frac{h-\lambda_a-i}{\mu_a-k -i}
\\ & = & c_{\rho \nu}^{\kappa} \times \prod_{i=0}^{\mu_a - \lambda_a-1} \frac{h-\lambda_a-i}{\mu_a-k -i}
\\ & > & c_{\rho \nu}^{\kappa} ,
\end{eqnarray*}
where we have used Equation~\ref{antieq1} in the final step. Therefore $s_{\mathcal{S}(\lambda,\alpha ;k)}-s_{\mathcal{S}(\mu,\alpha ;k)} \not\geq_s 0$.
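The final inequality amounts to the binomial inequality $\binom{h-k}{\mu_a-k} > \binom{h-k}{\lambda_a-k}$, which can be sanity-checked numerically; the sweep below is illustrative (the parameter ranges are our choice, not from the paper) and restricts to $k \leq \lambda_a$ so that both counts are defined.

```python
from math import ceil, comb

# For k >= 1 and k <= lambda_a < mu_a <= ceil(h/2), check
#   C(h-k, mu_a-k) > C(h-k, lambda_a-k),
# i.e. the count for S(mu,alpha;k) strictly exceeds that for
# S(lambda,alpha;k) in the k > 0 case of the proof.
for h in range(2, 16):
    for k in range(1, h + 1):
        for mu_a in range(2, ceil(h / 2) + 1):
            for lambda_a in range(k, mu_a):
                assert comb(h - k, mu_a - k) > comb(h - k, lambda_a - k)
print("binomial inequality holds on the sweep")
```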
\
Now consider the case $k=0$.
We now let $r_{1} < r_{2} < \ldots < r_{n}$ be the values of $R_{\alpha,k}$.
We can create a SSYT of shape $\mu$ by filling the boxes of $\mu$ as follows.
\begin{center}
\begin{tabular}{cccccccccc}
$r_1$ & $r_2$ & $\cdots$ & $r_{\mu_a}$ \\
$r_{\mu_a+1}$\\
$r_{\mu_a+2}$\\
$\vdots$\\
$r_{h}$ \\
\end{tabular}
\end{center}
As before, Lemma~\ref{kfatjoinlemma} provides us with a SSYT $\mathcal{T}$ of shape $\mathcal{S}(\mu, \alpha;k)$ with lattice reading word, where $\mu$ is filled as shown above.
We now wish to count all SSYTx of shape $\mathcal{S}(\mu, \alpha;k)$ (shape $\mathcal{S}(\lambda, \alpha;k)$, respectively) with content $\nu =c(\mathcal{T})$.
In this case only $r_1$ is required to appear at the beginning of the first row of $\mu$ ($\lambda$, resp.).
Further, once we choose $\mu_a-1$ ($\lambda_a -1$, resp.) of the values $r_{2} < \ldots <r_h$ to appear in the first row of $\mu$ (first row of $\lambda$, resp.), then the remaining $r$'s must appear in the first column of $\mu$ (first column of $\lambda$, resp.) and the order of all these values is uniquely determined by the semistandard conditions.
Therefore the number of SSYTx of shape $\mathcal{S}(\mu, \alpha ;k)= \kappa' / \rho'$ with lattice reading word and content $\nu =c(\mathcal{T})$ is given by
\[ c_{\rho' \nu}^{\kappa'} = \left( \begin{array}{c}
h-1 \\
\mu_a -1\\
\end{array} \right) \]
and the number of SSYTx of shape $\mathcal{S}(\lambda, \alpha ;k)= \kappa / \rho$ with lattice reading word and content $\nu=c(\mathcal{T})$ is given by
\[ c_{\rho \nu}^{\kappa} = \left( \begin{array}{c}
h-1 \\
\lambda_a -1\\
\end{array} \right). \]
\
Since $\lambda_a < \mu_a \leq \longlroof \frac{h}{2} \longrroof$, we have $h+1 > \lambda_a +\mu_a$.
Therefore we have $h - \lambda_a > \mu_a-1$ and we once again obtain
\begin{equation}
\label{antieqkkkr}
h - \lambda_a -i> \mu_a-1-i,
\end{equation}
for each $i$ with $0 \leq i \leq \mu_a - \lambda_a - 1$.
\
\noindent Therefore
\begin{eqnarray*}
\ c_{\rho' \nu}^{\kappa'} & = & \frac{(h-1)! }{(h-\mu_a)!(\mu_a-1)!}
\\ & = & \frac{(h-1)! }{(h-\lambda_a)!(\lambda_a-1)!} \times \prod_{i=0}^{\mu_a - \lambda_a -1} \frac{h-\lambda_a-i}{\mu_a-1 -i}
\\ & = & c_{\rho \nu}^{\kappa} \times \prod_{i=0}^{\mu_a - \lambda_a-1} \frac{h-\lambda_a-i}{\mu_a-1 -i}
\\ & > & c_{\rho \nu}^{\kappa} ,
\end{eqnarray*}
where we have used Equation~\ref{antieqkkkr} in the final step. Therefore $s_{\mathcal{S}(\lambda,\alpha ;k)}-s_{\mathcal{S}(\mu,\alpha ;k)} \not\geq_s 0$. \qed
\end{proof}
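The product manipulation in the $k=0$ case (note that the denominators involve $\mu_a$, exactly as in the $k>0$ case) can likewise be verified with exact rational arithmetic; the sweep below is illustrative and not part of the paper.

```python
from fractions import Fraction
from math import ceil, comb, prod

# Verify, for 1 <= lambda_a < mu_a <= ceil(h/2), the identity
#   C(h-1, mu_a-1) = C(h-1, lambda_a-1)
#       * prod_{i=0}^{mu_a-lambda_a-1} (h-lambda_a-i)/(mu_a-1-i)
# and that every factor exceeds 1, giving the strict inequality.
for h in range(2, 16):
    for mu_a in range(2, ceil(h / 2) + 1):
        for lambda_a in range(1, mu_a):
            factors = [Fraction(h - lambda_a - i, mu_a - 1 - i)
                       for i in range(mu_a - lambda_a)]
            assert comb(h - 1, mu_a - 1) == comb(h - 1, lambda_a - 1) * prod(factors)
            assert all(f > 1 for f in factors)
print("product identity verified on the sweep")
```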
We now depart from looking at the hooks
$\lambda,\mu$ satisfying $\lambda_a < \mu_a \leq \longlroof \frac{h}{2} \longrroof$.
Instead, we turn to the hooks
$\lambda,\mu$ satisfying $\longlroof \frac{h}{2} \longrroof \leq \lambda_a < \mu_a$.
The following theorems describe the relations among the diagrams displayed on the right in our examples.
\begin{theorem}
\label{kchainhooks}
Let $\lambda$ and $\mu$ be hooks with $|\lambda|=|\mu|=h \leq n+k$ and $\longlroof \frac{h}{2} \longrroof \leq \lambda_a < \mu_a$, and let $0 \leq k \leq h$.
If $\lambda_a \geq \mu_l + k -1$ then $\mathcal{S}(\lambda, \alpha ;k) \succeq_s \mathcal{S}(\mu, \alpha ;k)$.
\end{theorem}
\begin{proof}
To prove the result, we shall consider any content $\nu$ such that a SSYT of shape $\mathcal{S}(\mu,\alpha ;k)$ with content $\nu$ and lattice reading word exists.
First we shall show that there is also a SSYT of shape $\mathcal{S}(\lambda,\alpha ;k)$ with content $\nu$ and lattice reading word.
Then, letting $\mathcal{S}(\lambda,\alpha ;k)=\kappa / \rho$ and $\mathcal{S}(\mu,\alpha ;k)=\kappa' / {\rho'}$ for partitions $\kappa$, $\kappa'$, $\rho$, and $\rho'$, we shall show that the Littlewood-Richardson coefficients for these two diagrams and this content satisfy
\[ c_{\rho \nu}^{\kappa} \geq c_{\rho' \nu}^{\kappa'}.\]
Having shown that this inequality holds for any content $\nu$ for which a SSYT of shape $\mathcal{S}(\mu,\alpha ;k)$ with content $\nu$ and lattice reading word exists, this will imply that $s_{\mathcal{S}(\lambda, \alpha ;k)}-s_{\mathcal{S}(\mu,\alpha ;k)} \geq_s 0.$
\
Let $\nu$ be a content such that there is a SSYT $\mathcal{T}_1$ of shape $\mathcal{S}(\mu,\alpha ;k)$ with content $\nu$ and lattice reading word.
By Lemma~\ref{kfatfirstrowlemma} we know that the first row of $\mu$ contains at most $k$ $1$'s.
Let $q$ be the number of $1$'s in the first row of $\mu$, and write $a_1=a_2= \ldots =a_q =1$.
Then the rest of the first row of $\mu$ consists of a strictly increasing sequence $a_{q+1}<a_{q+2}<\ldots < a_{\mu_a}$ where each $a_{q+i} \in R_{\alpha,k}$ with $a_{q+i}>1$.
Also, since the columns of $\mathcal{T}_1$ strictly increase, the first column of $\mu$ contains a strictly increasing sequence $a'_1<a'_2<\ldots < a'_{\mu_l}$, where $a'_1=a_1$.
Since $\Delta_\alpha$ can only be filled in one way (in both shapes $\mathcal{S}(\mu,\alpha ;k)$ and $\mathcal{S}(\lambda,\alpha ;k)$) and the values $a_1,\ldots,a_q$ must be placed in the first $q$ positions in the first row of either foundation, in order to obtain a tableau of shape $\mathcal{S}(\lambda,\alpha ;k)$ and content $\nu$ we only need to show how to place the values of $\{a_{q+1},a_{q+2},\ldots,a_{\mu_a}\} \cup \{a'_2,a'_3,\ldots,a'_{\mu_l}\}$ in $\lambda$.
\
If $l(\nu) > |\alpha|+1$ for this particular content $\nu$ then there are entries of $\mathcal{T}_1$ greater than $|\alpha|+1$.
Since $\Delta_\alpha$ has content $\delta_\alpha$, the lattice condition implies that $|\alpha|+2$ appears in $\mu$.
Since each $a_i \in R_{\alpha,k}$, we have $a_i \leq |\alpha|+1$ for each $i$ and so $a'_j=|\alpha|+2$ for some $j$.
The lattice condition and the fact that the column strictly increases gives that
\begin{eqnarray*}
a'_j &=& |\alpha|+2 \\
a'_{j+1} &=& |\alpha|+3 \\
&\vdots&\\
a'_{\mu_l} &=& |\alpha|+ \mu_l -j +2. \\
\end{eqnarray*}
\noindent Again, by the lattice condition, it is clear that any SSYT of shape $\mathcal{S}(\lambda,\alpha ;k)$ with lattice reading word and content $\nu$ must also have these values $a'_j, a'_{j+1},\ldots, a'_{\mu_l}$ as the last $\mu_l - j+1$ entries of the first column of $\lambda$.
Since $\lambda_a < \mu_a$ gives $\mu_l < \lambda_l$, we have $\mu_l - j+1 < \lambda_l -j+1$.
Since $j \geq 2$ this gives,
\begin{equation}
\label{kneedeqn3}
\mu_l - j+1 \leq \lambda_l,
\end{equation}
so these entries do fit in this column.
Note that if $l(\nu) \leq |\alpha|+1$, then this sequence of values $a'_j, a'_{j+1},\ldots, a'_{\mu_l}$ is empty and we do not have to worry about placing any entries larger than $|\alpha|+1$ into $\lambda$.
(In such a case we may consider $j= \mu_l +1$.)
Let $M$ be the multiset $\{a_{q+1},a_{q+2},\ldots,a_{\mu_a}\} \cup \{a'_2,a'_3,\ldots,a'_{j-1} \}$.
Then $M$ consists of the remaining entries that we still need to place in $\lambda$ to obtain a tableau of shape $\mathcal{S}(\lambda,\alpha;k)$ and content $\nu$.
We have $|M|=\mu_a+j-2-q$ and $\textrm{max}(M)=|\alpha|+1$.
Let $R =\{a_{q+1},a_{q+2},\ldots,a_{\mu_a}\} \cap \{a'_2,a'_3,\ldots,a'_{j-1} \}$.
We note that $R =\{a_{q+1},a_{q+2},\ldots,a_{\mu_a}\} \cap \{a'_2,a'_3,\ldots,a'_{\mu_l} \}$ since Lemma~\ref{kfatfirstrowlemma} shows $a_{\mu_a} \in R_{\alpha,k}$, which implies $a_{\mu_a} < |\alpha|+2 = a'_{j}$.
Thus \textit{$R$ is the set of values (except 1) that appear in both the first row and the first column of $\mu$.}
Since the values of $R$ all appear in the first row of $\mu$, Lemma~\ref{kfatfirstrowlemma} gives that $R \subseteq R_{\alpha,k}$.
For any SSYT of shape $\mathcal{S}(\lambda,\alpha ;k)$ with lattice reading word and content $\nu$, Lemma~\ref{kfatfirstrowlemma} shows that, besides $1$, the values in the first row of $\lambda$ are distinct, so, when creating a filling of $\lambda$, the values in $R$ must also appear in both the first row of $\lambda$ and the first column of $\lambda$.
Consider $A=\{a_{q+1},a_{q+2},\ldots,a_{\mu_a}\}-R$ and $A'=\{a'_2,a'_3,\ldots,a'_{j-1}\}-R$.
Since we know that the values of $R$ must appear in both the first row of $\lambda$ and first column of $\lambda$, $A \cup A'$ contains the remaining values of $M$ that need to be placed in $\lambda$.
In other words, \textit{$A \cup A'$ is the set of all values $\leq |\alpha|+1$ that can appear in exactly one of the first row of $\lambda$ or the first column of $\lambda$.}
We wish to show that
\begin{equation}
\label{keqneed1}
|R| \leq \lambda_a -q
\end{equation} holds.
If $q=0$, then $|R| \leq j \leq \mu_l+1 \leq \lambda_l < \lambda_a = \lambda_a -q$.
Now, when $q \geq 1$, the top-left entry of $\mu$ is $1$, which is not in $R$; hence we have $|R| \leq \mu_l -1 \leq \lambda_a -k \leq \lambda_a -q$, where
we have used the fact that $\lambda_a \geq \mu_l + k -1$.
Now, because Equation~\ref{keqneed1} holds,
we can extend the values of $R$ to an increasing sequence $b_{q+1} < b_{q+2} < \ldots < b_{\lambda_a}$ by choosing $\lambda_a - |R|-q$ additional values from $(A \cup A') \cap R_{\alpha,k}$.
There are enough values to choose from since there are
\begin{equation}
\label{kneed4}
\mu_a - |R| -q \geq \lambda_a -|R|-q
\end{equation}
values of $(A \cup A')\cap R_{\alpha,k}$ present in the first row of $\mu$.
The sequence of $b_i$'s is strictly increasing since $(A \cup A') \cap R = \emptyset$.
Now $M-\{b_{q+1},\ldots,b_{\lambda_a} \} \subseteq M-R$ contains $w=|M|-(\lambda_a-q)=\mu_a+j-2 -\lambda_a$ distinct values, each no greater than $|\alpha|+1$.
That is, they are an increasing sequence $c_1 < c_2 < \ldots < c_{w}$, where $c_{w}\leq |\alpha|+1$ and $c_1>1$.
We have
\begin{eqnarray*}
\ \lambda_l & = & \mu_a + \mu_l - \lambda_a
\\ & = & 1+ (\mu_a +j-2-\lambda_a) + (\mu_l -j +1)
\\ & = & 1+w+(\mu_l -j +1),
\\
\end{eqnarray*}
so, letting $b_1=b_2=\ldots=b_q=1$, we may fill $\lambda$ as shown below.
\
\begin{center}
\begin{tabular}{cccc}
$b_1$ & $b_2$ & $\cdots$ & $b_{\lambda_a}$ \\
$c_1$\\
$c_2$\\
$\vdots$\\
$c_{w}$\\
$a'_{j}$\\
$a'_{j+1}$\\
$\vdots$\\
$a'_{\mu_l}$
\end{tabular}
\end{center}
\
Since the sequence of $c_i$'s is strictly increasing and since $c_1 >1$ and $c_w \leq |\alpha|+1 < |\alpha|+2 =a'_j$, we have
\[ b_1 < c_1 < c_2 < \ldots < c_w < a'_j < \ldots < a'_{\mu_l}.\]
That is, the first column of $\lambda$ is increasing.
We also have
\[ b_1=b_2=\ldots=b_q=1 \] and
\[ b_{q+1} < b_{q+2} < \ldots < b_{\mu_a},\]
so the first row of $\lambda$ is weakly increasing with $q \leq k$ $1$'s.
Hence this filling gives us a SSYT $T$ of shape $\lambda \oplus \Delta_\alpha$ and content $\nu$.
We now check that $T$ has a lattice reading word so that we may apply Lemma~\ref{kfatjoinlemma} to obtain the desired tableau $\mathcal{T}_2$ of shape $\mathcal{S}(\lambda,\alpha;k)$.
Suppose that $T$ does not have a lattice reading word.
Then, when reading the foundation $\lambda$ of $T$, we must reach a point where the lattice condition failed.
Let $x$ be the value that, when read, caused the lattice condition to fail.
The lattice condition could not have failed when reading the first row of $\lambda$ since the lattice condition places no restriction on the number of $1$'s and the remaining values in the first row of $\lambda$ were distinct values chosen from $R_{\alpha,k}$.
Therefore $x>1$ and \textit{this} $x$ which violated the lattice condition appears somewhere in the first column of $\lambda$.
We inspect the two cases $x \in R_{\alpha,k}$ and $x \not\in R_{\alpha,k}$.
Consider the first case, $x \in R_{\alpha,k}$.
If a value of $R_{\alpha,k}$ appears only once in the foundation then reading this value cannot violate the lattice condition.
Thus, since the lattice condition failed at this $x$, this $x$ cannot be the first time that $x$ was read in $\lambda$.
Since the columns strictly increase, the previous $x$ must have appeared in the first row of $\lambda$ and, since the values in the first row are distinct, this is the only other $x$ in $\lambda$.
Since the content of $\lambda$ is the same as the content of $\mu$, these two $x$'s appear in $\mu$ as well.
Using the fact that $\mathcal{T}_1$ has a lattice reading word, together with the content of $\Delta_{\alpha}$, we find that the value $x-1$ appeared in $\mu$.
Thus the value $x-1$ also appears in $\lambda$.
Now, since both the rows and columns of $\lambda$ must weakly increase, either the $x-1$ appears in the first column of $\lambda$ above the entry $x$, or the $x-1$ appears in the first row of $\lambda$ left of the entry $x$.
In either case the $x-1$ is read before the second $x$ is read and the lattice condition will not fail when reading this second $x$, contrary to our assumption.
We now look at the second case, where $x \not\in R_{\alpha,k}$.
Again, the $x$ we are interested in appears in the first column of $\lambda$.
There cannot be a second $x$ in $\lambda$ since the column strictly increases and $x \not\in R_{\alpha,k}$ implies that no other $x$ was placed in the first row of $\lambda$.
As before, $x$ must have appeared in $\mu$ and, since $x \not\in R_{\alpha,k}$, it must in particular appear somewhere in the first column of $\mu$.
Since $\mathcal{T}_1$ has a lattice reading word we must read a sequence of values $t$, $t+1$, $t+2$, $\ldots$, $x-2$, $x-1$ in $\mu$, where $t \in R_{\alpha,k}$, before we read the $x$, and we may assume that none of the values $t+1$, $t+2$, $\ldots$, $x-2$, $x-1$ are from $R_{\alpha,k}$.
Hence each of the values $t$, $t+1$, $\ldots$, $x-2$, $x-1$ also appears in $\lambda$.
None of the values $t+1$, $t+2$, $\ldots$, $x-1$ can appear in the first row of $\lambda$ since the first row was chosen from $R_{\alpha,k}$.
That is, each value $t+1,t+2,\ldots,x-1$ appears in the first column of $\lambda$.
Also, since both the rows and columns of $\lambda$ must weakly increase, either the $t$ appears in the first column of $\lambda$ above the entry $t+1$, or the $t$ appears in the first row of $\lambda$.
In either case the entire sequence $t$, $t+1$, $\ldots$, $x-2$, $x-1$ is read before the $x$ is read in $\lambda$ and the lattice condition does not fail at $x$, contradicting our assumption.
Since $T$ has a lattice reading word, we can now apply Lemma~\ref{kfatjoinlemma} to obtain the SSYT $\mathcal{T}_2$ of shape $\mathcal{S}(\lambda,\alpha;k)$ with lattice reading word and content $\nu$.
Therefore from any SSYT $\mathcal{T}_1$ of shape $\mathcal{S}(\mu, \alpha ;k)$ with lattice reading word and content $\nu$ we can create a SSYT $\mathcal{T}_2$ of shape $\mathcal{S}(\lambda,\alpha ;k)$ with lattice reading word and content $\nu$.
\
Let $c_{\rho \nu}^{\kappa}$ be the number of SSYTx of shape $\mathcal{S}(\lambda, \alpha;k)$ with lattice reading word and content $\nu$, and $c_{\rho' \nu}^{\kappa'}$ be the number of SSYTx of shape $\mathcal{S}(\mu,\alpha ;k)$ with lattice reading word and content $\nu$.
We shall show that the sets $R$ and $A \cup A'$ that were described above are completely determined by the content $\nu$.
That is, without starting with a specific tableau, but only starting with the desired content of a SSYT of some fat staircase with hook foundation with lattice reading word, we show how to determine $q$, the number of $1$'s in the foundation; $R$, the set of values $\neq 1$ that must appear in both the first row and first column of the foundation; and $A \cup A'$, the set of values $\leq |\alpha| +1$ that can only appear in one of the first row or the first column of the foundation.
Since $\Delta_\alpha$ is uniquely filled, from $\nu$ we can determine the content of the foundation $\mu$ ($\lambda$, respectively) needed to create a SSYT of shape $\mathcal{S}(\mu, \alpha ;k)$ ($\mathcal{S}(\lambda, \alpha ;k)$, resp.) with lattice reading word and content $\nu$.
From the content of the foundation we can determine $q$, the number of $1$'s in the foundation, and the values $a'_j, a'_{j+1},\ldots$ greater than $|\alpha|+1$.
Since Lemma~\ref{kfatfirstrowlemma} shows that the entries $\neq 1$ in the first row of the foundation strictly increase, any value $\neq 1$ that appears twice in the foundation must appear in both the first row of $\mu$ ($\lambda$, resp.) and first column of $\mu$ ($\lambda$, resp.).
These values give the set $R$.
Then $A \cup A'$ is the set of values in the foundation that are $>1$, $\leq |\alpha|+1$, but are not in $R$.
The first row of $\mu$ (first row of $\lambda$, resp.) must contain $q$ $1$'s and the values in $R$.
After we determine the remaining entries of the first row of $\mu$ (first row of $\lambda$, resp.), the rest of the foundation is uniquely determined.
Now, to actually form a SSYT of shape $\mathcal{S}(\mu, \alpha ;k)$ ($\mathcal{S}(\lambda, \alpha ;k)$, resp.) with lattice reading word and content $\nu$,
we only need to choose the remaining $\mu_a-q - |R|$ ($\lambda_a-q - |R|$, resp.) values from the set $(A \cup A') \cap R_{\alpha,k}$
to place in the first row of $\mu$ (first row of $\lambda$, resp.).
Therefore the number of SSYTx of shape $\mathcal{S}(\mu, \alpha ;k)= \kappa' / \rho'$ with lattice reading word and content $\nu$ is given by
\[ c_{\rho' \nu}^{\kappa'} = \left( \begin{array}{c}
|(A \cup A')\cap R_{\alpha,k}| \\
\mu_a -q-|R|\\
\end{array} \right) \]
and the number of SSYTx of shape $\mathcal{S}(\lambda, \alpha ;k)= \kappa / \rho$ with lattice reading word and content $\nu$ is given by
\[ c_{\rho \nu}^{\kappa} = \left( \begin{array}{c}
|(A \cup A')\cap R_{\alpha,k}| \\
\lambda_a -q-|R|\\
\end{array} \right). \]
\
Since $q \leq k$, $\mu_l \geq j-1$, and $\lambda_a \geq \mu_l +k-1$, we have $\lambda_a \geq j-2+k \geq j-2+q$, and so
\begin{equation}
\label{keqneed2}
0 \geq j-2+q - \lambda_a.
\end{equation}
Thus, for each $i$ we have
\begin{eqnarray*}
\ \mu_a -q-|R|-i & \geq & \mu_a - q- |R|-i + (j-2+q - \lambda_a)
\\ & = & (\mu_a -q-|R|) + (j-2 -|R|) - (\lambda_a -|R|) -i+q
\\ & = & |A|+|A'|- (\lambda_a -|R|) -i+q
\\ & \geq & |(A \cup A')\cap R_{\alpha,k}|- (\lambda_a -|R|) -i+q.
\end{eqnarray*}
That is,
\begin{equation}
\label{khookynew}
\mu_a -q-|R|-i \geq |(A \cup A')\cap R_{\alpha,k}|- (\lambda_a -|R|) -i+q,
\end{equation}
for each $i$.
\
\noindent Therefore
\begin{eqnarray*}
\ c_{\rho \nu}^{\kappa} & = & \frac{|(A \cup A')\cap R_{\alpha,k}|! }{(|(A \cup A')\cap R_{\alpha,k}|-(\lambda_a-q-|R|))!(\lambda_a-q-|R|)!}
\\ & = & \frac{|(A \cup A')\cap R_{\alpha,k}|! }{(|(A \cup A')\cap R_{\alpha,k}|-(\mu_a-q-|R|))!(\mu_a-q-|R|)!}
\\ & & \times \prod_{i=0}^{\mu_a - \lambda_a-1} \frac{\mu_a -q- |R| -i}{|(A \cup A')\cap R_{\alpha,k}| -(\lambda_a - |R|) -i+q}
\\ & = & c_{\rho' \nu}^{\kappa'} \times \prod_{i=0}^{\mu_a - \lambda_a-1} \frac{\mu_a -q- |R| -i}{|(A \cup A')\cap R_{\alpha,k}| -(\lambda_a - |R|) -i+q}
\\ & \geq & c_{\rho' \nu}^{\kappa'} ,
\end{eqnarray*}
where we have used Equation~\ref{khookynew} in the final step.
Since this inequality holds for all contents $\nu$ for which there was a SSYT of shape $\mathcal{S}(\mu,\alpha;k)$ with lattice reading word and content $\nu$, we have $s_{\mathcal{S}(\lambda,\alpha ;k)}-s_{\mathcal{S}(\mu,\alpha ;k)} \geq_s 0$. \qed
\end{proof}
\begin{theorem}
\label{kchainhooksb}
Let $\lambda$ and $\mu$ be hooks with $|\lambda|=|\mu|=h\leq n+k$ and $\longlroof \frac{h}{2} \longrroof \leq \lambda_a < \mu_a$, and let $0 \leq k \leq h$.
If $\mathcal{S}(\lambda, \alpha;k) \succeq_s \mathcal{S}(\mu, \alpha;k)$, then $\lambda_a \geq \mu_l+k-1$.
\end{theorem}
\begin{proof}
Since $\longlroof \frac{h}{2} \longrroof \leq \lambda_a < \mu_a$, where $|\lambda| = |\mu| =h$, we have
\[ \mu_l < \lambda_l \leq \longlroof \frac{h}{2} \longrroof \leq \lambda_a < \mu_a.\]
In the case $k=0$ we only need to show that $\lambda_a \geq \mu_l-1$, which is true since $\lambda_a \geq \longlroof \frac{h}{2} \longrroof$ and $\mu_l \leq \longlroof \frac{h}{2} \longrroof$.
\
We turn to the case $1 \leq k \leq h$.
Towards a contradiction, suppose $\lambda_a < k+\mu_l-1$.
As before, we let $r_1=r_2=\ldots =r_k=1$ and $r_{k+1} < r_{k+2} < \ldots < r_{k+n}$ be the values of $R_{\alpha,k}$ greater than $1$.
We can create a SSYT of shape $\mu$ by filling the boxes of $\mu$ as follows.
\begin{center}
\begin{tabular}{cccccccccc}
$r_1$ & $r_2$ & $\cdots$ & $r_k$ & $r_{k+1}$ & $\cdots$ & $r_{\mu_a}$ \\
$r_{\mu_a+1}$\\
$r_{\mu_a+2}$\\
$\vdots$\\
$r_{h}$ \\
\end{tabular}
\end{center}
\noindent Since we are using $k$ 1's followed by distinct values of $R_{\alpha,k}$, it is easy to check that the resulting tableau of shape $\mu \oplus \Delta_\alpha$ has a lattice reading word.
Thus Lemma~\ref{kfatjoinlemma} provides us with a SSYT $\mathcal{T}$ of shape $\mathcal{S}(\mu, \alpha;k)$ with lattice reading word, where $\mu$ is filled as shown above.
We now wish to count all SSYTx of shape $\mathcal{S}(\mu, \alpha;k)$ (shape $\mathcal{S}(\lambda, \alpha;k)$, respectively) with content $\nu =c(\mathcal{T})$.
Since $\Delta_\alpha$ has a unique way of being filled, we must find all semistandard fillings of $\mu$ ($\lambda$, resp.) with the values $r_1,r_2,\ldots,r_h$.
Since $r_1 = r_2 = \ldots =r_k=1$, the values $r_1$, $r_2$, $\ldots$, $r_k$ must appear in the first $k$ positions of the first row of $\mu$ ($\lambda$, resp.).
Further, once we choose $\mu_l-1$ ($\lambda_a -k$, resp.) of the values $r_{k+1} <r_{k+2} <\ldots <r_h$ to appear in the first column of $\mu$ (first row of $\lambda$, resp.), then the remaining $r$'s must appear in the first row of $\mu$ (first column of $\lambda$, resp.) and the order of all these values is uniquely determined by the semistandard conditions.
Therefore the number of SSYTx of shape $\mathcal{S}(\mu, \alpha ;k)= \kappa' / \rho'$ with lattice reading word and content $\nu =c(\mathcal{T})$ is given by
\[ c_{\rho' \nu}^{\kappa'} = \left( \begin{array}{c}
h-k \\
\mu_l -1\\
\end{array} \right) \]
and the number of SSYTx of shape $\mathcal{S}(\lambda, \alpha ;k)= \kappa / \rho$ with lattice reading word and content $\nu=c(\mathcal{T})$ is given by
\[ c_{\rho \nu}^{\kappa} = \left( \begin{array}{c}
h-k \\
\lambda_a -k\\
\end{array} \right). \]
\
Since $\mu_l < \lambda_l \leq \longlroof \frac{h}{2} \longrroof \leq \lambda_a < \mu_a$, where $|\lambda|=h$, we have
\[ \mu_l + \lambda_a < \lambda_l + \lambda_a =h+1. \]
This gives $h - \lambda_a > \mu_l-1$ and we obtain
\begin{equation}
\label{antieqkkkq}
h - \lambda_a -i> \mu_l-1-i,
\end{equation}
for each $i$.
\
\noindent Therefore
\begin{eqnarray*}
\ c_{\rho' \nu}^{\kappa'} & = & \frac{(h-k)! }{(h-k-\mu_l+1)!(\mu_l-1)!}
\\ & = & \frac{(h-k)! }{(h-\lambda_a)!(\lambda_a-k)!} \times \prod_{i=0}^{\mu_l - \lambda_a+k-2} \frac{h-\lambda_a-i}{\mu_l-1 -i}
\\ & = & c_{\rho \nu}^{\kappa} \times \prod_{i=0}^{\mu_l - \lambda_a+k-2} \frac{h-\lambda_a-i}{\mu_l-1 -i}
\\ & > & c_{\rho \nu}^{\kappa} ,
\end{eqnarray*}
where we have used Equation~\ref{antieqkkkq} in the final step. Therefore $s_{\mathcal{S}(\lambda,\alpha ;k)}-s_{\mathcal{S}(\mu,\alpha ;k)} \not\geq_s 0$, which is a contradiction.
Therefore we have $\lambda_a \geq \mu_l+k-1$. \qed
\end{proof}
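For concreteness, we illustrate the counting in the proof above with a small example, assuming $n=l(\alpha) \geq 4$ so that $h \leq n+k$.
Take $h=6$ and $k=2$, with hooks $\lambda=(3,1,1,1)$ and $\mu=(4,1,1)$, so that $\lambda_a=3$, $\mu_a=4$ and $\mu_l=3$.
Then $\longlroof \frac{h}{2} \longrroof = 3 \leq \lambda_a < \mu_a$ and $\lambda_a = 3 < \mu_l+k-1=4$, and the two counts are
\[ c_{\rho' \nu}^{\kappa'} = \left( \begin{array}{c}
4 \\
2\\
\end{array} \right) = 6
\qquad \textrm{and} \qquad
c_{\rho \nu}^{\kappa} = \left( \begin{array}{c}
4 \\
1\\
\end{array} \right) = 4, \]
so, exactly as in the proof, $s_{\mathcal{S}(\lambda,\alpha ;2)}-s_{\mathcal{S}(\mu,\alpha ;2)} \not\geq_s 0$ for such $\lambda$ and $\mu$.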
Finally, we turn to the hooks
$\lambda,\mu$ satisfying $\lambda_a, \mu_l \leq \longlroof \frac{h}{2} \longrroof$.
\begin{theorem}
\label{kcrosshooks1}
Let $\lambda$ and $\mu$ be hooks with $|\lambda|=|\mu|=h \leq n+k$ and $\lambda_a, \mu_l \leq \longlroof \frac{h}{2} \longrroof$.
If $1 \leq k \leq h$ and $\lambda_a \geq \mu_l+k-1$ then $\mathcal{S}(\lambda, \alpha;k) \succeq_s \mathcal{S}(\mu, \alpha;k)$.
If $k=0$ and $\lambda_a \geq \mu_l$ then $\mathcal{S}(\lambda, \alpha;k) \succeq_s \mathcal{S}(\mu, \alpha;k)$.
\end{theorem}
\begin{proof}
For $1 \leq k \leq h$ we are given that $\lambda_a \geq \mu_l+k-1$ and we wish to show that $\mathcal{S}(\lambda, \alpha;k) \succeq_s \mathcal{S}(\mu, \alpha;k)$.
We claim that the proof of Theorem~\ref{kchainhooks}, with a few equations verified under the current hypotheses, also proves this theorem.
Given the SSYT of shape $\mathcal{S}(\mu, \alpha;k)$ with lattice reading word and content $\nu$, in order to create the SSYT of shape $\mathcal{S}(\lambda, \alpha;k)$ with lattice reading word and content $\nu$ we first needed to check that Equation~\ref{kneedeqn3}, Equation~\ref{keqneed1} and Equation~\ref{kneed4} held.
Namely, we required that $\mu_l-j+1 \leq \lambda_l$, $|R| \leq \lambda_a-q$, and $\mu_a - |R|-q \geq \lambda_a -|R|-q$.
Since the first two equations were satisfied, we were able to fit the required values into the first row and first column of $\lambda$. Further, since $\mu_a -|R|-q \geq \lambda_a -|R|-q$, there were enough values to fill the first row of $\lambda$, and therefore we could construct the tableau with all the desired properties.
In order to show Equation~\ref{kneedeqn3} holds for the assumptions of this theorem, we note that $\mu_l \leq \longlroof \frac{h}{2} \longrroof \leq \lambda_l$.
In order to show Equation~\ref{keqneed1} holds for the assumptions of this theorem, we note that
\[|R|\leq j-2 \leq \mu_l -1 \leq \lambda_a - k \leq \lambda_a -q.\]
In order to show Equation~\ref{kneed4} holds for the assumptions of this theorem, we note that $\mu_a \geq \longlroof \frac{h}{2} \longrroof \geq \lambda_a$.
Therefore we can create a SSYT of shape $\mathcal{S}(\lambda, \alpha;k)$ with lattice reading word and content $\nu$ whenever there exists a SSYT of shape $\mathcal{S}(\mu, \alpha;k)$ with lattice reading word and content $\nu$.
Next, the proof of Theorem~\ref{kchainhooks} checked that, for each of these contents $\nu$, the number of SSYTx of shape $\mathcal{S}(\lambda, \alpha;k)$ with lattice reading word and content $\nu$ is greater than or equal to the number of SSYTx of shape $\mathcal{S}(\mu, \alpha;k)$ with lattice reading word and content $\nu$.
To prove this, we first required that Equation~\ref{keqneed2} held.
Namely, we required that $\lambda_a \geq j-2+q$.
We used this equation to show that Equation~\ref{khookynew} held for each $i$, which gave us the desired inequality for the Littlewood-Richardson numbers.
In order to show Equation~\ref{keqneed2} holds for the assumptions of this theorem, we note that $q \leq k$, and so
\[ \lambda_a \geq \mu_l +k-1 \geq (j-1) +k-1 \geq j-2+q. \]
Therefore the inequality for the Littlewood-Richardson numbers holds here as well, which proves that \[s_{\mathcal{S}(\lambda,\alpha;k)}-s_{\mathcal{S}(\mu,\alpha;k)} \geq_s 0. \]
\
In the case of $k=0$ we are assuming that $\lambda_a \geq \mu_l$.
The only parts of the above argument that need to be adjusted are the proofs that Equation~\ref{keqneed1} and Equation~\ref{keqneed2} hold under the current hypotheses.
For Equation~\ref{keqneed1} we note that
\[|R|\leq j-2 \leq \mu_l -1 \leq \lambda_a - 1 \leq \lambda_a = \lambda_a -q\]
since $q=0$, and for Equation~\ref{keqneed2} we note that
\[ \lambda_a \geq \mu_l \geq j-1 \geq j-2+q. \]
As before, we therefore obtain \[s_{\mathcal{S}(\lambda,\alpha;k)}-s_{\mathcal{S}(\mu,\alpha;k)} \geq_s 0. \qed\]
\end{proof}
\begin{theorem}
\label{kcrosshooks11}
Let $\lambda$ and $\mu$ be hooks with $|\lambda|=|\mu|=h\leq n+k$ and $\lambda_a, \mu_l \leq \longlroof \frac{h}{2} \longrroof$.
If $1 \leq k \leq h$ and $\mathcal{S}(\lambda, \alpha;k) \succeq_s \mathcal{S}(\mu, \alpha;k)$, then $\lambda_a \geq k+\mu_l-1$.
If $k=0$ and $\mathcal{S}(\lambda, \alpha;k) \succeq_s \mathcal{S}(\mu, \alpha;k)$, then $\lambda_a \geq \mu_l$.
\end{theorem}
\begin{proof}
We begin with the case $1 \leq k \leq h$.
Towards a contradiction, suppose $\lambda_a < k+\mu_l-1$.
As before, we let $r_1=r_2=\ldots =r_k=1$ and $r_{k+1} < r_{k+2} < \ldots < r_{k+n}$ be the values of $R_{\alpha,k}$ greater than $1$.
We can create a SSYT of shape $\mu$ by filling the boxes of $\mu$ as follows.
\begin{center}
\begin{tabular}{cccccccccc}
$r_1$ & $r_2$ & $\cdots$ & $r_k$ & $r_{k+1}$ & $\cdots$ & $r_{\mu_a}$ \\
$r_{\mu_a+1}$\\
$r_{\mu_a+2}$\\
$\vdots$\\
$r_{h}$ \\
\end{tabular}
\end{center}
\noindent Since we are using $k$ 1's followed by distinct values of $R_{\alpha,k}$, it is easy to check that the resulting tableau of shape $\mu \oplus \Delta_\alpha$ has a lattice reading word.
Thus Lemma~\ref{kfatjoinlemma} provides us with a SSYT $\mathcal{T}$ of shape $\mathcal{S}(\mu, \alpha;k)$ with lattice reading word, where $\mu$ is filled as shown above.
We now wish to count all SSYTx of shape $\mathcal{S}(\mu, \alpha;k)$ (shape $\mathcal{S}(\lambda, \alpha;k)$, respectively) with content $\nu =c(\mathcal{T})$.
Since $\Delta_\alpha$ has a unique way of being filled, we must find all semistandard fillings of $\mu$ ($\lambda$, resp.) with the values $r_1,r_2,\ldots,r_h$.
Since $r_1 = r_2 = \ldots =r_k=1$, the values $r_1$, $r_2$, $\ldots$, $r_k$ must appear in the first $k$ positions of the first row of $\mu$ ($\lambda$, resp.).
Further, once we choose $\mu_l-1$ ($\lambda_a -k$, resp.) of the values $r_{k+1} <r_{k+2} <\ldots <r_h$ to appear in the first column of $\mu$ (first row of $\lambda$, resp.), then the remaining $r$'s must appear in the first row of $\mu$ (first column of $\lambda$, resp.) and the order of all these values is uniquely determined by the semistandard conditions.
Therefore the number of SSYTx of shape $\mathcal{S}(\mu, \alpha ;k)= \kappa' / \rho'$ with lattice reading word and content $\nu =c(\mathcal{T})$ is given by
\[ c_{\rho' \nu}^{\kappa'} = \left( \begin{array}{c}
h-k \\
\mu_l -1\\
\end{array} \right) \]
and the number of SSYTx of shape $\mathcal{S}(\lambda, \alpha ;k)= \kappa / \rho$ with lattice reading word and content $\nu=c(\mathcal{T})$ is given by
\[ c_{\rho \nu}^{\kappa} = \left( \begin{array}{c}
h-k \\
\lambda_a -k\\
\end{array} \right). \]
\
Since $\lambda_a, \mu_l \leq \longlroof \frac{h}{2} \longrroof$, we have $h+1 \geq \lambda_a +\mu_l$.
If we have $h+1 = \lambda_a +\mu_l$, then this implies that $h$ is odd and $\lambda_a = \mu_l = \longlroof \frac{h}{2} \longrroof = \frac{h+1}{2}$.
Since $|\lambda| = |\mu| = h$, this implies $\lambda_l = \mu_a = \longlroof \frac{h}{2} \longrroof = \frac{h+1}{2}$ as well.
Therefore $\lambda = \mu$.
However, we are only interested in distinct hooks $\lambda$ and $\mu$.
Thus, among distinct pairs $\lambda$ and $\mu$, we cannot have $h+1 = \lambda_a +\mu_l$.
Thus for $\lambda \neq \mu$ with $\lambda_a, \mu_l \leq \longlroof \frac{h}{2} \longrroof$, we have $h+1 > \lambda_a +\mu_l$.
This gives $h - \lambda_a > \mu_l-1$ and we obtain
\begin{equation}
\label{antieqkkkt}
h - \lambda_a -i> \mu_l-1-i,
\end{equation}
for each $i$.
\
\noindent Therefore
\begin{eqnarray*}
\ c_{\rho' \nu}^{\kappa'} & = & \frac{(h-k)! }{(h-k-\mu_l+1)!(\mu_l-1)!}
\\ & = & \frac{(h-k)! }{(h-\lambda_a)!(\lambda_a-k)!} \times \prod_{i=0}^{\mu_l - \lambda_a+k-2} \frac{h-\lambda_a-i}{\mu_l-1 -i}
\\ & = & c_{\rho \nu}^{\kappa} \times \prod_{i=0}^{\mu_l - \lambda_a+k-2} \frac{h-\lambda_a-i}{\mu_l-1 -i}
\\ & > & c_{\rho \nu}^{\kappa} ,
\end{eqnarray*}
where we have used Equation~\ref{antieqkkkt} in the final step. Therefore $s_{\mathcal{S}(\lambda,\alpha ;k)}-s_{\mathcal{S}(\mu,\alpha ;k)} \not\geq_s 0$, which is a contradiction.
Therefore we have $\lambda_a \geq \mu_l+k-1$.
\
Now consider the case $k=0$.
Towards a contradiction, suppose $\lambda_a < \mu_l$.
We now let $r_{1} < r_{2} < \ldots < r_{n}$ be the values of $R_{\alpha,k}$.
We can create a SSYT of shape $\mu$ by filling the boxes of $\mu$ as follows.
\begin{center}
\begin{tabular}{cccccccccc}
$r_1$ & $r_2$ & $\cdots$ & $r_{\mu_a}$ \\
$r_{\mu_a+1}$\\
$r_{\mu_a+2}$\\
$\vdots$\\
$r_{h}$ \\
\end{tabular}
\end{center}
As before, Lemma~\ref{kfatjoinlemma} provides us with a SSYT $\mathcal{T}$ of shape $\mathcal{S}(\mu, \alpha;k)$ with lattice reading word, where $\mu$ is filled as shown above.
We now wish to count all SSYTx of shape $\mathcal{S}(\mu, \alpha;k)$ (shape $\mathcal{S}(\lambda, \alpha;k)$, respectively) with content $\nu =c(\mathcal{T})$.
In this case only $r_1$ is required to appear at the beginning of the first row of $\mu$ ($\lambda$, resp.).
Further, once we choose $\mu_l-1$ ($\lambda_a -1$, resp.) of the values $r_{2} < \ldots <r_h$ to appear in the first column of $\mu$ (first row of $\lambda$, resp.), then the remaining $r$'s must appear in the first row of $\mu$ (first column of $\lambda$, resp.) and the order of all these values is uniquely determined by the semistandard conditions.
Therefore the number of SSYTx of shape $\mathcal{S}(\mu, \alpha ;k)= \kappa' / \rho'$ with lattice reading word and content $\nu =c(\mathcal{T})$ is given by
\[ c_{\rho' \nu}^{\kappa'} = \left( \begin{array}{c}
h-1 \\
\mu_l -1\\
\end{array} \right) \]
and the number of SSYTx of shape $\mathcal{S}(\lambda, \alpha ;k)= \kappa / \rho$ with lattice reading word and content $\nu=c(\mathcal{T})$ is given by
\[ c_{\rho \nu}^{\kappa} = \left( \begin{array}{c}
h-1 \\
\lambda_a -1\\
\end{array} \right). \]
\
As before, for $\lambda \neq \mu$ with $\lambda_a, \mu_l \leq \longlroof \frac{h}{2} \longrroof$, we have $h+1 > \lambda_a +\mu_l$.
This gives $h - \lambda_a > \mu_l-1$ and we obtain
\begin{equation}
\label{antieqkkk}
h - \lambda_a -i> \mu_l-1-i,
\end{equation}
for each $i$.
\
\noindent Therefore
\begin{eqnarray*}
\ c_{\rho' \nu}^{\kappa'} & = & \frac{(h-1)! }{(h-\mu_l)!(\mu_l-1)!}
\\ & = & \frac{(h-1)! }{(h-\lambda_a)!(\lambda_a-1)!} \times \prod_{i=0}^{\mu_l - \lambda_a -1} \frac{h-\lambda_a-i}{\mu_l-1 -i}
\\ & = & c_{\rho \nu}^{\kappa} \times \prod_{i=0}^{\mu_l - \lambda_a-1} \frac{h-\lambda_a-i}{\mu_l-1 -i}
\\ & > & c_{\rho \nu}^{\kappa} ,
\end{eqnarray*}
where we have used Equation~\ref{antieqkkk} in the final step. Therefore $s_{\mathcal{S}(\lambda,\alpha ;k)}-s_{\mathcal{S}(\mu,\alpha ;k)} \not\geq_s 0$, which is a contradiction.
Therefore we have $\lambda_a \geq \mu_l$. \qed
\end{proof}
\section{Fat Staircases with Hook Complement Foundations}
In this section we show how to extend the results of Section 3.1 to fat staircases with bad foundations where the foundations are the complements of hook diagrams.
As before, when considering a composition $\alpha$ we shall let $n= l(\alpha)$.
That is, $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_n)$.
With this convention, the diagram $\delta_\alpha$ ($\Delta_\alpha$, respectively) has width $n$ and length $|\alpha|= \sum_{i=1}^{n} \alpha_i$.
However, in this section we shall restrict the values of $k$ so that $0 \leq k \leq 1$.
Given a partition $\rho$ contained in the $a \times b$ rectangle $(b^a)$ we define the \textit{complementary partition $\rho^c$ in the rectangle $(b^a)$} by $\rho^c = ((b^a) / \rho)^\circ$.
That is, $\rho^c$ is the complement of $\rho$ in $(b^a)$ rotated by $180^\circ$.
It is easy to see that $\rho^c$ is indeed a partition.
We display the relevant diagrams below for clarity.
\setlength{\unitlength}{0.4mm}
\begin{picture}(100,60)(120,-25)
\put(170,5){$\rho$}
\put(193,-15){$(\rho^c)^\circ $}
\put(160,20){\line(1,0){60}}
\put(160,-10){\line(1,0){20}}
\put(180,0){\line(1,0){10}}
\put(190,10){\line(1,0){20}}
\put(160,-10){\line(0,1){30}}
\put(180,-10){\line(0,1){10}}
\put(190,0){\line(0,1){10}}
\put(210,10){\line(0,1){10}}
\put(160,-30){\dashbox{1}(60,50)[tl]{ }}
\put(285,0){$\rho^c$}
\put(312,-21){$\rho^\circ $}
\put(270,20){\line(1,0){60}}
\put(270,-30){\line(1,0){10}}
\put(280,-20){\line(1,0){20}}
\put(300,-10){\line(1,0){10}}
\put(310,0){\line(1,0){20}}
\put(270,-30){\line(0,1){50}}
\put(280,-30){\line(0,1){10}}
\put(300,-20){\line(0,1){10}}
\put(310,-10){\line(0,1){10}}
\put(330,0){\line(0,1){20}}
\put(270,-30){\dashbox{1}(60,50)[tl]{ }}
\end{picture}
\
\
For what follows, we shall make use of the following fact.
\begin{theorem} (\cite{complementcite})
\label{firstcut}
Let $\rho$ be a partition contained in the $a \times b$ rectangle $(b^a)$ and let $\kappa \subset \rho$ be a second partition. Then the skew diagram $\rho / \kappa$ satisfies
\[ s_{\rho / \kappa} = \sum_{\nu \subseteq (b^a)} c_{\kappa \rho^c}^{\nu} s_{\nu^c}, \] where $c_{\kappa \rho^c}^{\nu}$ are the Littlewood-Richardson coefficients.
\end{theorem}
Given a symmetric function $f=\sum_{\nu } a_{\nu} s_{\nu}$ we define the \textit{truncated complement of $f$ in the rectangle $(b^a)$} as
\begin{equation}
\label{cf}
c(f) = \sum_{\nu \subseteq (b^a)} a_{\nu} s_{\nu^c}.
\end{equation}
The rectangle being used should be clear from the context if it is not specifically mentioned.
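For example, take the rectangle $(2^2)$ and $f=s_{(2)}+s_{(2,1)}+s_{(3)}$.
The term $s_{(3)}$ is truncated since $(3) \not\subseteq (2^2)$, while $(2)^c=(2)$ and $(2,1)^c=(1)$, so
\[ c(f) = s_{(2)}+s_{(1)}. \]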
\
We may now restate Theorem~\ref{firstcut} as follows.
\begin{corollary}
\label{rectcor}
Let $\rho$ be a partition contained in the $a \times b$ rectangle $(b^a)$ and let $\kappa \subset \rho$ be a second partition. Then the skew diagram $\rho / \kappa$ satisfies
\[ s_{\rho / \kappa} = c(s_\kappa s_{\rho^c}). \]
\end{corollary}
\begin{proof}
From the definition of the Littlewood-Richardson numbers, we have
\[ s_{\kappa} s_{\rho^c} = \sum_{\nu} c_{\kappa \rho^c}^{\nu} s_{\nu}.\]
Hence \[c(s_{\kappa} s_{\rho^c}) = c(\sum_{\nu} c_{\kappa \rho^c}^{\nu} s_{\nu}) = \sum_{\nu \subseteq (b^a)} c_{\kappa \rho^c}^{\nu} s_{\nu^c}. \]
By Theorem~\ref{firstcut}, this is just $s_{\rho / \kappa}$, so we are done.\qed
\end{proof}
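To illustrate Corollary~\ref{rectcor}, take the rectangle $(2^2)$, $\rho=(2,1)$ and $\kappa=(1)$.
Then $\rho^c=(1)$ and
\[ c(s_{\kappa}s_{\rho^c}) = c(s_{(1)}s_{(1)}) = c(s_{(2)}+s_{(1,1)}) = s_{(2)}+s_{(1,1)}, \]
since $(2)^c=(2)$ and $(1,1)^c=(1,1)$ in $(2^2)$; this agrees with the expansion $s_{(2,1)/(1)}=s_{(2)}+s_{(1,1)}$.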
We begin by applying this truncation result to the shapes of the form $\rho / \kappa = \mathcal{S}(\lambda, \alpha;k)$, for $0 \leq k \leq 1$.
In the next proof we use the operator $[s_\lambda]$ to extract the coefficient of $s_\lambda$ in an expression. That is, if $f=\sum_{\lambda} a_\lambda s_\lambda$ then $[s_\lambda] (f) = a_\lambda$.
\begin{lemma}
\label{cfatstairlemma}
For a partition $\lambda$, composition $\alpha$, and $0 \leq k \leq 1$ we have
\[c(s_{\lambda} s_{\Delta_{\alpha}}) = c(s_{\mathcal{S}(\lambda,\alpha;k)} ),\]
for any complementation in a rectangle of width $w=n+k$.
\end{lemma}
\begin{proof}
Let the rectangle be $(w^l)$, say.
We begin by comparing $c(s_{\lambda \oplus \Delta_{\alpha}})$ and $c(s_{\mathcal{S}(\lambda,\alpha; k)})$.
Consider a content $\nu$ that contributes to $c(s_{\lambda \oplus \Delta_{\alpha}})$.
Then $\nu \subseteq (w^l)$ and there is a SSYT $\mathcal{T}$ of shape ${\lambda \oplus \Delta_{\alpha}}$ and content $\nu^c$ with lattice reading word.
Since $\nu^c$ is contained in a rectangle of width $w$, $\mathcal{T}$ contains at most $w=n+k$ $1$'s.
We know that exactly $n$ $1$'s appear in the copy of $\Delta_{\alpha}$.
Thus the copy of $\lambda$ contains at most $k$ $1$'s.
Therefore, by Lemma~\ref{kfatjoinlemma}, we can obtain a SSYT $\mathcal{T}'$ of shape $\mathcal{S}(\lambda,\alpha;k)$ with lattice reading word and content $\nu^c$ by simply filling the entries of $\lambda$ in $\mathcal{S}(\lambda,\alpha;k)$ identically to the filling of $\lambda$ in $\mathcal{T}$.
This correspondence $\mathcal{T} \mapsto \mathcal{T}'$ gives a bijection.
That is, $[s_{\nu^c}] (s_{\lambda \oplus \Delta_{\alpha}}) = [s_{\nu^c}] (s_{\mathcal{S}(\lambda,\alpha;k)})$ for all $\nu^c \subseteq (w^l)$.
Thus we find that $c(s_{\lambda \oplus \Delta_{\alpha}}) = c(s_{\mathcal{S}(\lambda,\alpha;k)})$.
We also have $s_{\lambda \oplus \Delta_{\alpha}} = s_{\lambda} s_{\Delta_{\alpha}}$ by Theorem~\ref{disjprod}, and so $c(s_{\lambda \oplus \Delta_{\alpha}}) = c(s_{\lambda} s_{\Delta_{\alpha}})$.
Thus we have $c(s_{\lambda}s_{\Delta_{\alpha}} ) = c(s_{\mathcal{S}(\lambda,\alpha;k)})$, as desired. \qed
\end{proof}
\
Given a composition $\alpha = (\alpha_1,\alpha_2, \ldots, \alpha_n)$ and a width $w=n+k$, where $0 \leq k \leq 1$, we let
\[ \alpha^r = \left\{ \begin{array}{lcc}
(\alpha_n, \alpha_{n-1},\ldots,\alpha_2, \alpha_1) & \textrm{ if } & k=1 \\
(\alpha_{n-1}, \alpha_{n-2},\ldots,\alpha_2, \alpha_1) & \textrm{ if } & k=0 \\
\end{array} \right. \] denote the \textit{reverse composition}.
With this definition we have ${\delta_\alpha}^c = \delta_{\alpha^r}$, where the complement is performed in the rectangle $(w^{|\alpha|})$.
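Complementation of a single partition in a rectangle, which underlies the operator $c(\cdot)$ used throughout this section, admits a short computational sketch (a hypothetical helper; the paper's $c(\cdot)$ acts on Schur expansions, whereas this computes the complement of one partition):

```python
def complement(lam, w, l):
    """Complement of the partition lam in the rectangle (w^l):
    pad lam with zeros to l parts, subtract each part from w, and
    read the results in reverse so the output is weakly decreasing."""
    parts = list(lam) + [0] * (l - len(lam))
    return [w - p for p in reversed(parts)]

assert complement([3, 1], 4, 3) == [4, 3, 1]
assert complement([], 4, 2) == [4, 4]  # complement of the empty partition
# complementing twice recovers lam, up to trailing zeros
assert complement(complement([3, 1], 4, 3), 4, 3) == [3, 1, 0]
```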
We illustrate the two cases $k=0$ and $k=1$ below.
\
\
\
\
\
\
\
\setlength{\unitlength}{0.35mm}
\begin{picture}(100,60)(50,-15)
\put(282,05){$k=1$}
\put(87,05){$k=0$}
\put(282,65){$\delta_{\alpha^r}$}
\put(296,35){$\Delta_{\alpha}$}
\put(346,85){$\alpha_1$}
\put(346,60){$\alpha_2$}
\put(348,45){$\vdots$}
\put(148,45){$\vdots$}
\put(346,33){$\alpha_{n-1}$}
\put(346,22){$\alpha_n$}
\put(335,20){\line(0,1){9}}
\put(335,20){\line(-1,0){3}}
\put(335,29){\line(-1,0){3}}
\put(335,31){\line(0,1){9}}
\put(335,31){\line(-1,0){3}}
\put(335,40){\line(-1,0){3}}
\put(335,50){\line(0,1){19}}
\put(335,50){\line(-1,0){3}}
\put(335,69){\line(-1,0){3}}
\put(335,71){\line(0,1){29}}
\put(335,71){\line(-1,0){3}}
\put(335,100){\line(-1,0){3}}
\put(270,20){\line(1,0){50}}
\put(270,30){\line(1,0){10}}
\put(280,40){\line(1,0){10}}
\put(290,50){\line(1,0){10}}
\put(300,70){\line(1,0){10}}
\put(310,100){\line(1,0){10}}
\put(270,20){\line(0,1){10}}
\put(280,30){\line(0,1){10}}
\put(290,40){\line(0,1){10}}
\put(300,50){\line(0,1){20}}
\put(310,70){\line(0,1){30}}
\put(320,20){\line(0,1){80}}
\put(260,20){\dashbox{1}(60,80)[tl]{ }}
\put(82,65){$\delta_{\alpha^r}$}
\put(96,35){$\Delta_{\alpha}$}
\put(146,85){$\alpha_1$}
\put(146,60){$\alpha_2$}
\put(146,33){$\alpha_{n-1}$}
\put(146,22){$\alpha_n$}
\put(135,20){\line(0,1){9}}
\put(135,20){\line(-1,0){3}}
\put(135,29){\line(-1,0){3}}
\put(135,31){\line(0,1){9}}
\put(135,31){\line(-1,0){3}}
\put(135,40){\line(-1,0){3}}
\put(135,50){\line(0,1){19}}
\put(135,50){\line(-1,0){3}}
\put(135,69){\line(-1,0){3}}
\put(135,71){\line(0,1){29}}
\put(135,71){\line(-1,0){3}}
\put(135,100){\line(-1,0){3}}
\put(70,20){\line(1,0){50}}
\put(70,30){\line(1,0){10}}
\put(80,40){\line(1,0){10}}
\put(90,50){\line(1,0){10}}
\put(100,70){\line(1,0){10}}
\put(110,100){\line(1,0){10}}
\put(70,20){\line(0,1){10}}
\put(80,30){\line(0,1){10}}
\put(90,40){\line(0,1){10}}
\put(100,50){\line(0,1){20}}
\put(110,70){\line(0,1){30}}
\put(120,20){\line(0,1){80}}
\put(70,20){\dashbox{1}(50,80)[tl]{ }}
\end{picture}
Thus we have $l(\delta_{\alpha^r}) \leq l(\delta_{\alpha})$ and $w(\delta_{\alpha^r}) \leq w(\delta_{\alpha})$.
In particular, we have $|\alpha| = |\alpha^r| + (1-k)\alpha_n$.
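As a quick sanity check on the definition above, the reverse composition and the identity $|\alpha| = |\alpha^r| + (1-k)\alpha_n$ can be verified numerically (a hypothetical helper mirroring the two-case definition):

```python
def reverse_composition(alpha, k):
    """alpha^r for width w = n + k: reverse all parts when k = 1;
    drop the last part and reverse the rest when k = 0."""
    return list(reversed(alpha if k == 1 else alpha[:-1]))

alpha = [1, 1, 3, 1, 2]
assert reverse_composition(alpha, 1) == [2, 1, 3, 1, 1]
assert reverse_composition(alpha, 0) == [1, 3, 1, 1]
for k in (0, 1):
    # |alpha| = |alpha^r| + (1 - k) * alpha_n
    assert sum(alpha) == sum(reverse_composition(alpha, k)) + (1 - k) * alpha[-1]
```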
\begin{theorem}
\label{fatstairtocutstair}
Let $\alpha$ be a composition, let $w=n+k$ where $0 \leq k \leq 1$, let $l \geq 1$, and let $\rho$ be a partition with $|\alpha|+l$ parts such that $(w^{|\alpha|}) \subset \rho \subset (w^{|\alpha|+l})$. Set $\mu =(\rho_{|\alpha|+1}, \rho_{|\alpha|+2},\ldots,\rho_{|\alpha|+l})$ and let $\lambda = \rho^c$ be the complement of $\rho$ in $(w^{|\alpha|+l})$.
Then $\rho / \delta_{\alpha^r} = \mathcal{S}(\mu, \alpha;k)$ and
\[s_{\mathcal{S}(\mu, \alpha;k)} =c(s_{\mathcal{S}(\lambda,{\alpha^r};k)}).\]
\end{theorem}
\begin{proof}
We are interested in the following diagrams, each contained in the rectangle $(w^{l})$. The first set of diagrams illustrates the case $k=1$ and the second set illustrates the case $k=0$.
\
\
\
\
\
\
\setlength{\unitlength}{0.35mm}
\begin{picture}(100,60)(30,-5)
\put(50,20){\line(1,0){10}}
\put(60,30){\line(1,0){10}}
\put(70,40){\line(1,0){10}}
\put(80,50){\line(1,0){10}}
\put(90,70){\line(1,0){10}}
\put(100,100){\line(1,0){10}}
\put(60,20){\line(0,1){10}}
\put(70,30){\line(0,1){10}}
\put(80,40){\line(0,1){10}}
\put(90,50){\line(0,1){20}}
\put(100,70){\line(0,1){30}}
\put(110,20){\line(0,1){80}}
\put(50,-10){\line(1,0){20}}
\put(70,0){\line(1,0){10}}
\put(80,10){\line(1,0){20}}
\put(100,20){\line(1,0){10}}
\put(50,-10){\line(0,1){30}}
\put(70,-10){\line(0,1){10}}
\put(80,0){\line(0,1){10}}
\put(100,10){\line(0,1){10}}
\put(50,-10){\dashbox{1}(60,110)[tl]{ }}
\put(63,65){$\delta_{\alpha^r}$}
\put(88,-5){$\lambda^\circ$}
\put(68,-25){$\rho / \delta_{\alpha^r}$}
\put(132,15){$=$}
\put(173,65){$\delta_{\alpha^r}$}
\put(196,35){$\Delta_{\alpha}$}
\put(170,5){$\mu$}
\put(198,-5){$\lambda^\circ $}
\put(160,20){\line(1,0){60}}
\put(170,30){\line(1,0){10}}
\put(180,40){\line(1,0){10}}
\put(190,50){\line(1,0){10}}
\put(200,70){\line(1,0){10}}
\put(210,100){\line(1,0){10}}
\put(170,20){\line(0,1){10}}
\put(180,30){\line(0,1){10}}
\put(190,40){\line(0,1){10}}
\put(200,50){\line(0,1){20}}
\put(210,70){\line(0,1){30}}
\put(220,20){\line(0,1){80}}
\put(160,-10){\line(1,0){20}}
\put(180,0){\line(1,0){10}}
\put(190,10){\line(1,0){20}}
\put(160,-10){\line(0,1){30}}
\put(180,-10){\line(0,1){10}}
\put(190,0){\line(0,1){10}}
\put(210,10){\line(0,1){10}}
\put(160,-25){$\mathcal{S}(\mu,\alpha;1)$}
\put(160,-10){\dashbox{1}(60,110)[tl]{ }}
\put(277,65){$\delta_{\alpha}$}
\put(302,35){$\Delta_{\alpha^r}$}
\put(285,5){$\lambda$}
\put(314,-3){$\mu^\circ $}
\put(270,20){\line(1,0){60}}
\put(280,50){\line(1,0){10}}
\put(290,70){\line(1,0){10}}
\put(300,80){\line(1,0){10}}
\put(310,90){\line(1,0){10}}
\put(320,100){\line(1,0){10}}
\put(280,20){\line(0,1){30}}
\put(290,50){\line(0,1){20}}
\put(300,70){\line(0,1){10}}
\put(310,80){\line(0,1){10}}
\put(320,90){\line(0,1){10}}
\put(330,20){\line(0,1){80}}
\put(270,-10){\line(1,0){10}}
\put(280,0){\line(1,0){20}}
\put(300,10){\line(1,0){10}}
\put(310,20){\line(1,0){20}}
\put(270,-10){\line(0,1){30}}
\put(280,-10){\line(0,1){10}}
\put(300,0){\line(0,1){10}}
\put(310,10){\line(0,1){10}}
\put(330,20){\line(0,1){20}}
\put(270,-25){$\mathcal{S}(\lambda,{\alpha^r};1)$}
\put(270,-10){\dashbox{1}(60,110)[tl]{ }}
\put(15,60){$|\alpha|$}
\put(15,0){$l$}
\put(35,21){\line(0,1){79}}
\put(35,21){\line(1,0){3}}
\put(35,100){\line(1,0){3}}
\put(35,-10){\line(0,1){29}}
\put(35,-10){\line(1,0){3}}
\put(35,19){\line(1,0){3}}
\end{picture}
\
\
\
\
\
\
\
\
\setlength{\unitlength}{0.35mm}
\begin{picture}(100,60)(30,-5)
\put(60,30){\line(1,0){10}}
\put(70,40){\line(1,0){10}}
\put(80,50){\line(1,0){10}}
\put(90,70){\line(1,0){10}}
\put(100,100){\line(1,0){10}}
\put(60,20){\line(0,1){10}}
\put(70,30){\line(0,1){10}}
\put(80,40){\line(0,1){10}}
\put(90,50){\line(0,1){20}}
\put(100,70){\line(0,1){30}}
\put(110,20){\line(0,1){80}}
\put(60,-10){\line(1,0){10}}
\put(70,0){\line(1,0){10}}
\put(80,10){\line(1,0){20}}
\put(100,20){\line(1,0){10}}
\put(60,-10){\line(0,1){30}}
\put(70,-10){\line(0,1){10}}
\put(80,0){\line(0,1){10}}
\put(100,10){\line(0,1){10}}
\put(60,-10){\dashbox{1}(50,110)[tl]{ }}
\put(68,65){$\delta_{\alpha^r}$}
\put(88,-5){$\lambda^\circ$}
\put(73,-25){$\rho / \delta_{\alpha^r}$}
\put(137,15){$=$}
\put(178,65){$\delta_{\alpha^r}$}
\put(196,35){$\Delta_{\alpha}$}
\put(177,7){$\mu$}
\put(198,-5){$\lambda^\circ $}
\put(170,20){\line(1,0){50}}
\put(170,30){\line(1,0){10}}
\put(180,40){\line(1,0){10}}
\put(190,50){\line(1,0){10}}
\put(200,70){\line(1,0){10}}
\put(210,100){\line(1,0){10}}
\put(170,20){\line(0,1){10}}
\put(180,30){\line(0,1){10}}
\put(190,40){\line(0,1){10}}
\put(200,50){\line(0,1){20}}
\put(210,70){\line(0,1){30}}
\put(220,20){\line(0,1){80}}
\put(170,-10){\line(1,0){10}}
\put(180,0){\line(1,0){10}}
\put(190,10){\line(1,0){20}}
\put(170,-10){\line(0,1){30}}
\put(180,-10){\line(0,1){10}}
\put(190,0){\line(0,1){10}}
\put(210,10){\line(0,1){10}}
\put(172,-25){$\mathcal{S}(\mu,\alpha;0)$}
\put(170,-10){\dashbox{1}(50,110)[tl]{ }}
\put(277,65){$\delta_{\alpha}$}
\put(297,35){$\Delta_{\alpha^r}$}
\put(290,5){$\lambda$}
\put(270,20){\dashbox{1}(10,0)[tl]{ }}
\put(280,20){\line(1,0){40}}
\put(280,50){\line(1,0){10}}
\put(290,70){\line(1,0){10}}
\put(300,80){\line(1,0){10}}
\put(310,90){\line(1,0){10}}
\put(280,20){\line(0,1){30}}
\put(290,50){\line(0,1){20}}
\put(300,70){\line(0,1){10}}
\put(310,80){\line(0,1){10}}
\put(320,20){\line(0,1){70}}
\put(280,-10){\line(1,0){10}}
\put(290,0){\line(1,0){20}}
\put(310,10){\line(1,0){10}}
\put(320,20){\line(1,0){0}}
\put(280,-10){\line(0,1){30}}
\put(290,-10){\line(0,1){10}}
\put(310,0){\line(0,1){10}}
\put(320,10){\line(0,1){10}}
\put(272,-25){$\mathcal{S}(\lambda,{\alpha^r};0)$}
\put(270,-10){\dashbox{1}(50,110)[tl]{ }}
\put(15,60){$|\alpha|$}
\put(15,0){$l$}
\put(35,21){\line(0,1){79}}
\put(35,21){\line(1,0){3}}
\put(35,100){\line(1,0){3}}
\put(35,-10){\line(0,1){29}}
\put(35,-10){\line(1,0){3}}
\put(35,19){\line(1,0){3}}
\end{picture}
\
\
\
\
It is clear from the definition of $\mu$ that we have $\mathcal{S}(\mu, \alpha;k) = \rho / \delta_{\alpha^r}$.
Therefore we obtain
\begin{eqnarray*}
s_{\mathcal{S}(\mu, \alpha;k)} &=& s_{\rho / \delta_{\alpha^r}} \\%\textrm{ since $\mathcal{S}(\mu, \alpha;k) = \rho / \delta_{\alpha^r}$}\\
&=& c( s_{\lambda} s_{\delta_{\alpha^r}}) \textrm{ by Corollary~\ref{rectcor}}\\
&=& c( s_{\lambda} s_{\Delta_{\alpha^r}}) \\%\textrm{ by Theorem~\ref{rotate}} \\
&=& c(s_{\mathcal{S}(\lambda,{\alpha^r};k)} ) \textrm{ by Lemma~\ref{cfatstairlemma}},\\
\end{eqnarray*}
which is what we wanted to prove. \qed
\end{proof}
\
We now consider two hooks $\lambda$, $\mu$ both contained in a rectangle $(w^l)$ and let $\lambda^c$ and $\mu^c$ denote their complements in this rectangle. We call these \textit{hook complements}.
For a fat staircase $\Delta_\alpha$ and $0 \leq k \leq 1$, we now inspect when the difference $s_{\mathcal{S}(\lambda^c, \alpha ;k)}-s_{\mathcal{S}(\mu^c, \alpha ;k)}$ is Schur-positive.
Thus we are interested in the differences of skew Schur functions for pairs of diagrams such as the pair displayed below.
\
\
\
\setlength{\unitlength}{0.25mm}
\begin{picture}(00,140)(-10,-90)
\put(210,-70){\framebox(10,10)[tl]{ }}
\put(220,-70){\framebox(10,10)[tl]{ }}
\put(230,-70){\framebox(10,10)[tl]{ }}
\put(210,-60){\framebox(10,10)[tl]{ }}
\put(220,-60){\framebox(10,10)[tl]{ }}
\put(230,-60){\framebox(10,10)[tl]{ }}
\put(240,-60){\framebox(10,10)[tl]{ }}
\put(250,-60){\framebox(10,10)[tl]{ }}
\put(210,-50){\framebox(10,10)[tl]{ }}
\put(220,-50){\framebox(10,10)[tl]{ }}
\put(230,-50){\framebox(10,10)[tl]{ }}
\put(240,-50){\framebox(10,10)[tl]{ }}
\put(250,-50){\framebox(10,10)[tl]{ }}
\put(210,-40){\framebox(10,10)[tl]{ }}
\put(220,-40){\framebox(10,10)[tl]{ }}
\put(230,-40){\framebox(10,10)[tl]{ }}
\put(240,-40){\framebox(10,10)[tl]{ }}
\put(250,-40){\framebox(10,10)[tl]{ }}
\put(210,-30){\framebox(10,10)[tl]{ }}
\put(220,-30){\framebox(10,10)[tl]{ }}
\put(230,-30){\framebox(10,10)[tl]{ }}
\put(240,-30){\framebox(10,10)[tl]{ }}
\put(250,-30){\framebox(10,10)[tl]{ }}
\put(260,-30){\framebox(10,10)[tl]{ }}
\put(210,-20){\framebox(10,10)[tl]{ }}
\put(220,-20){\framebox(10,10)[tl]{ }}
\put(230,-20){\framebox(10,10)[tl]{ }}
\put(240,-20){\framebox(10,10)[tl]{ }}
\put(250,-20){\framebox(10,10)[tl]{ }}
\put(260,-20){\framebox(10,10)[tl]{ }}
\put(220,-10){\framebox(10,10)[tl]{ }}
\put(230,-10){\framebox(10,10)[tl]{ }}
\put(240,-10){\framebox(10,10)[tl]{ }}
\put(250,-10){\framebox(10,10)[tl]{ }}
\put(260,-10){\framebox(10,10)[tl]{ }}
\put(220,0){\framebox(10,10)[tl]{ }}
\put(230,0){\framebox(10,10)[tl]{ }}
\put(240,0){\framebox(10,10)[tl]{ }}
\put(250,0){\framebox(10,10)[tl]{ }}
\put(260,0){\framebox(10,10)[tl]{ }}
\put(230,10){\framebox(10,10)[tl]{ }}
\put(240,10){\framebox(10,10)[tl]{ }}
\put(250,10){\framebox(10,10)[tl]{ }}
\put(260,10){\framebox(10,10)[tl]{ }}
\put(240,20){\framebox(10,10)[tl]{ }}
\put(250,20){\framebox(10,10)[tl]{ }}
\put(260,20){\framebox(10,10)[tl]{ }}
\put(240,30){\framebox(10,10)[tl]{ }}
\put(250,30){\framebox(10,10)[tl]{ }}
\put(260,30){\framebox(10,10)[tl]{ }}
\put(240,40){\framebox(10,10)[tl]{ }}
\put(250,40){\framebox(10,10)[tl]{ }}
\put(260,40){\framebox(10,10)[tl]{ }}
\put(250,50){\framebox(10,10)[tl]{ }}
\put(260,50){\framebox(10,10)[tl]{ }}
\put(260,60){\framebox(10,10)[tl]{ }}
\put(100,-70){\framebox(10,10)[tl]{ }}
\put(110,-70){\framebox(10,10)[tl]{ }}
\put(120,-70){\framebox(10,10)[tl]{ }}
\put(130,-70){\framebox(10,10)[tl]{ }}
\put(100,-60){\framebox(10,10)[tl]{ }}
\put(110,-60){\framebox(10,10)[tl]{ }}
\put(120,-60){\framebox(10,10)[tl]{ }}
\put(130,-60){\framebox(10,10)[tl]{ }}
\put(140,-60){\framebox(10,10)[tl]{ }}
\put(100,-50){\framebox(10,10)[tl]{ }}
\put(110,-50){\framebox(10,10)[tl]{ }}
\put(120,-50){\framebox(10,10)[tl]{ }}
\put(130,-50){\framebox(10,10)[tl]{ }}
\put(140,-50){\framebox(10,10)[tl]{ }}
\put(100,-40){\framebox(10,10)[tl]{ }}
\put(110,-40){\framebox(10,10)[tl]{ }}
\put(120,-40){\framebox(10,10)[tl]{ }}
\put(130,-40){\framebox(10,10)[tl]{ }}
\put(140,-40){\framebox(10,10)[tl]{ }}
\put(100,-30){\framebox(10,10)[tl]{ }}
\put(110,-30){\framebox(10,10)[tl]{ }}
\put(120,-30){\framebox(10,10)[tl]{ }}
\put(130,-30){\framebox(10,10)[tl]{ }}
\put(140,-30){\framebox(10,10)[tl]{ }}
\put(100,-20){\framebox(10,10)[tl]{ }}
\put(110,-20){\framebox(10,10)[tl]{ }}
\put(120,-20){\framebox(10,10)[tl]{ }}
\put(130,-20){\framebox(10,10)[tl]{ }}
\put(140,-20){\framebox(10,10)[tl]{ }}
\put(150,-20){\framebox(10,10)[tl]{ }}
\put(110,-10){\framebox(10,10)[tl]{ }}
\put(120,-10){\framebox(10,10)[tl]{ }}
\put(130,-10){\framebox(10,10)[tl]{ }}
\put(140,-10){\framebox(10,10)[tl]{ }}
\put(150,-10){\framebox(10,10)[tl]{ }}
\put(110,0){\framebox(10,10)[tl]{ }}
\put(120,0){\framebox(10,10)[tl]{ }}
\put(130,0){\framebox(10,10)[tl]{ }}
\put(140,0){\framebox(10,10)[tl]{ }}
\put(150,0){\framebox(10,10)[tl]{ }}
\put(120,10){\framebox(10,10)[tl]{ }}
\put(130,10){\framebox(10,10)[tl]{ }}
\put(140,10){\framebox(10,10)[tl]{ }}
\put(150,10){\framebox(10,10)[tl]{ }}
\put(130,20){\framebox(10,10)[tl]{ }}
\put(140,20){\framebox(10,10)[tl]{ }}
\put(150,20){\framebox(10,10)[tl]{ }}
\put(130,30){\framebox(10,10)[tl]{ }}
\put(140,30){\framebox(10,10)[tl]{ }}
\put(150,30){\framebox(10,10)[tl]{ }}
\put(130,40){\framebox(10,10)[tl]{ }}
\put(140,40){\framebox(10,10)[tl]{ }}
\put(150,40){\framebox(10,10)[tl]{ }}
\put(140,50){\framebox(10,10)[tl]{ }}
\put(150,50){\framebox(10,10)[tl]{ }}
\put(150,60){\framebox(10,10)[tl]{ }}
\end{picture}
Our final result, Theorem~\ref{cuthookhasse}, states that we obtain the same Hasse diagram for fat staircases with hook complement foundations as was obtained for fat staircases with hook foundations.
For this proof we utilise the following common notation.
Namely, given two partitions $\lambda = (\lambda_1, \ldots, \lambda_n)$ and $\mu = (\mu_1, \ldots, \mu_m)$, we let $\lambda \cup \mu$ denote the partition that consists of the parts $\lambda_1, \ldots, \lambda_n, \mu_1, \ldots, \mu_m$ placed in weakly decreasing order.
We shall also find it useful to treat partitions and weak compositions as vectors with non-negative integer entries that can be added componentwise.
We may add vectors of different lengths by adding zeroes to the end of the vectors.
Further, given a positive integer $i$, we shall let $e_i$ denote the $i$-th standard basis vector.
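These conventions — the union $\lambda \cup \mu$, componentwise vector addition with zero-padding, and the standard basis vectors $e_i$ — admit a direct sketch (hypothetical helpers, for illustration only):

```python
def union(lam, mu):
    """lam ∪ mu: all parts of both partitions, in weakly decreasing order."""
    return sorted(list(lam) + list(mu), reverse=True)

def add(u, v):
    """Componentwise sum, padding the shorter vector with zeros."""
    n = max(len(u), len(v))
    pad = lambda x: list(x) + [0] * (n - len(x))
    return [a + b for a, b in zip(pad(u), pad(v))]

def e(i, n):
    """The i-th standard basis vector of length n (1-indexed)."""
    return [1 if j == i else 0 for j in range(1, n + 1)]

assert union([3, 1], [2, 2]) == [3, 2, 2, 1]
assert add([3, 1], [2]) == [5, 1]          # shorter vector padded with a zero
assert add([2, 1, 0], e(3, 3)) == [2, 1, 1]
```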
\begin{theorem}
\label{cuthookhasse}
Let $\alpha$ be a composition with $n$ parts, let $0 \leq k \leq 1$, and let $\lambda$ and $\mu$ be hooks with $|\lambda|=|\mu|=h \leq n+k=w$.
Then $\mathcal{S}(\lambda^c,\alpha;k) \succeq_s \mathcal{S}(\mu^c,\alpha;k)$ if and only if $\mathcal{S}(\lambda,{\alpha^r};k) \succeq_s \mathcal{S}(\mu,{\alpha^r};k)$.
\end{theorem}
\begin{proof}
We wish to apply Theorem~\ref{fatstairtocutstair} to both diagrams.
To this end, we let $\rho(\lambda^c) = (w^{|\alpha|}) \cup \lambda^c$
and $\rho(\mu^c) = (w^{|\alpha|}) \cup \mu^c$.
Then we have $(w^{|\alpha|}) \subset \rho(\lambda^c), \rho(\mu^c) \subset (w^{|\alpha|+l})$ so we may apply Theorem~\ref{fatstairtocutstair} to both $\rho(\lambda^c) / \delta_{\alpha^r} = \mathcal{S}(\lambda^c,\alpha;k)$ and $\rho(\mu^c) / \delta_{\alpha^r} = \mathcal{S}(\mu^c,\alpha;k)$.
This gives
\begin{eqnarray*}
s_{\mathcal{S}(\lambda^c,\alpha;k)} - s_{\mathcal{S}(\mu^c,\alpha;k)}
&=& c(s_{\mathcal{S}(\lambda,{\alpha^r};k)})-c(s_{\mathcal{S}(\mu,{\alpha^r};k)}) \\
&=& c(s_{\mathcal{S}(\lambda,{\alpha^r};k)}-s_{\mathcal{S}(\mu,{\alpha^r};k)}), \\
\end{eqnarray*}
where these complements are performed in the rectangle $(w^{|\alpha|+l})$.
Now, if $\mathcal{S}(\lambda,{\alpha^r};k) \succeq_s \mathcal{S}(\mu,{\alpha^r};k)$, then the above equation shows that $\mathcal{S}(\lambda^c,\alpha;k) \succeq_s \mathcal{S}(\mu^c,\alpha;k)$ as well.
For the converse direction, suppose that $\mathcal{S}(\lambda,{\alpha^r};k) \not\succeq_s \mathcal{S}(\mu,{\alpha^r};k)$.
Thus, by assumption, the difference $s_{\mathcal{S}(\lambda,{\alpha^r};k)}-s_{\mathcal{S}(\mu,{\alpha^r};k)}$ is not Schur-positive.
However, we need to verify that the truncated version $c(s_{\mathcal{S}(\lambda,{\alpha^r};k)}-s_{\mathcal{S}(\mu,{\alpha^r};k)})$ is also not Schur-positive.
In Section 4, we saw that the only cases where the difference was not Schur-positive among these staircases with hook foundations were those cases covered by Theorem~\ref{kantichainhooks1}, Theorem~\ref{kantichainhooks11}, and Theorem~\ref{kcrosshooks11}.
Thus $\lambda$ and $\mu$ must satisfy the hypotheses of one of these three theorems.
In each of these three theorems, by inspecting a particular term $s_\nu$ in the difference, it was proved that the difference was not Schur-positive.
We need only check that for each theorem the partition $\nu$ constructed satisfies $\nu \subseteq (w^{|\alpha|+l})$.
In this way, we show that the term $s_\nu$ also appears in the truncated difference, and hence that this truncated difference is not Schur-positive.
\
In both Theorem~\ref{kantichainhooks11} and Theorem~\ref{kcrosshooks11} we used the content $\nu = \delta_{\alpha^r} + \sum_{i=1}^{h} e_{r_{i}}$, where $r_1 < r_2 < \dots$ are the values of $R_{\alpha^r,k}$.
We have $w(\delta_{\alpha^r}) \leq w(\delta_{\alpha}) = n$.
Further, since the $r_i$ are distinct, adding the terms $\sum_{i=1}^{h} e_{r_{i}}$ to $\delta_{\alpha^r}$ can increase the width by at most $1$, and this happens only when $r_1=1$, which implies that $k=1$.
Thus, in either case, $w(\nu) \leq n +k =w$.
Also, since $l(\delta_{\alpha^r}) \leq l(\delta_{\alpha}) = |\alpha|$ and each $r_i \leq |\alpha|+1$, we have $l(\nu) \leq |\alpha|+1 \leq |\alpha|+l$.
Therefore $\nu$ is contained in the rectangle $(w^{|\alpha|+l})$.
In Theorem~\ref{kantichainhooks1} we used $\nu = \delta_{\alpha^r} + \sum_{i=1}^{\lambda_a-1} e_{r_{i}} + (0^{|\alpha^r|},1^{\lambda_l})$, where $r_1 < r_2 < \dots$ are values of $R_{\alpha^r,k}$.
As in the previous case we find that $w(\nu) \leq n +k =w$.
For the length of $\nu$ we have $l(\nu) = |\alpha^r| + \lambda_l \leq |\alpha| + l$, since $|\alpha^r| \leq |\alpha|$ and $\lambda \subseteq (w^l)$.
Therefore $\nu$ is contained in the rectangle $(w^{|\alpha|+l})$.
\
Thus, in each case $\nu$ is contained in the rectangle $(w^{|\alpha|+l})$. Therefore the term $s_\nu$ in the difference $s_{\mathcal{S}(\lambda,{\alpha^r};k)}-s_{\mathcal{S}(\mu,{\alpha^r};k)}$, is also in the difference $c(s_{\mathcal{S}(\lambda,{\alpha^r};k)}-s_{\mathcal{S}(\mu,{\alpha^r};k)})$.
Since this term has a negative coefficient, it shows that $c(s_{\mathcal{S}(\lambda,{\alpha^r};k)}-s_{\mathcal{S}(\mu,{\alpha^r};k)})$, and hence $s_{\mathcal{S}(\lambda^c,\alpha;k)} - s_{\mathcal{S}(\mu^c,\alpha;k)}$, is not Schur-positive.
That is, $\mathcal{S}(\lambda^c,\alpha;k) \not\succeq_s \mathcal{S}(\mu^c,\alpha;k)$.
This completes the converse direction.
Thus we have shown that $\mathcal{S}(\lambda^c,\alpha;k) \succeq_s \mathcal{S}(\mu^c,\alpha;k)$ if and only if $\mathcal{S}(\lambda,{\alpha^r};k) \succeq_s \mathcal{S}(\mu,{\alpha^r};k)$. \qed
\end{proof}
\begin{example} Here we see the Hasse diagram obtained by considering all diagrams of the form $\mathcal{S}(\lambda^c,\alpha;k)$ where $\alpha = (1,1,3,1,2)$, $k=1$, and $\lambda$ is a hook of size $6$, with $\lambda^c$ computed in the rectangle $(6^6)$.
\
\
\setlength{\unitlength}{0.2mm}
\begin{picture}(00,140)(-60,-90)
\put(42,-80){\line(1,-1){263}}
\put(130,-80){\line(1,-1){165}}
\put(130,-80){\line(2,-3){176}}
\put(240,-80){\line(1,-1){60}}
\put(240,-80){\line(1,-3){55}}
\put(240,-80){\line(1,-4){66}}
\put(345,-345){\line(0,1){30}}
\put(345,-190){\line(0,1){30}}
\put(320,-460){\framebox(10,10)[tl]{ }}
\put(330,-460){\framebox(10,10)[tl]{ }}
\put(340,-460){\framebox(10,10)[tl]{ }}
\put(350,-460){\framebox(10,10)[tl]{ }}
\put(360,-460){\framebox(10,10)[tl]{ }}
\put(370,-460){\framebox(10,10)[tl]{ }}
\put(320,-450){\framebox(10,10)[tl]{ }}
\put(330,-450){\framebox(10,10)[tl]{ }}
\put(340,-450){\framebox(10,10)[tl]{ }}
\put(350,-450){\framebox(10,10)[tl]{ }}
\put(360,-450){\framebox(10,10)[tl]{ }}
\put(370,-450){\framebox(10,10)[tl]{ }}
\put(320,-440){\framebox(10,10)[tl]{ }}
\put(330,-440){\framebox(10,10)[tl]{ }}
\put(340,-440){\framebox(10,10)[tl]{ }}
\put(350,-440){\framebox(10,10)[tl]{ }}
\put(360,-440){\framebox(10,10)[tl]{ }}
\put(370,-440){\framebox(10,10)[tl]{ }}
\put(320,-430){\framebox(10,10)[tl]{ }}
\put(330,-430){\framebox(10,10)[tl]{ }}
\put(340,-430){\framebox(10,10)[tl]{ }}
\put(350,-430){\framebox(10,10)[tl]{ }}
\put(360,-430){\framebox(10,10)[tl]{ }}
\put(370,-430){\framebox(10,10)[tl]{ }}
\put(320,-420){\framebox(10,10)[tl]{ }}
\put(330,-420){\framebox(10,10)[tl]{ }}
\put(340,-420){\framebox(10,10)[tl]{ }}
\put(350,-420){\framebox(10,10)[tl]{ }}
\put(360,-420){\framebox(10,10)[tl]{ }}
\put(370,-420){\framebox(10,10)[tl]{ }}
\put(330,-410){\framebox(10,10)[tl]{ }}
\put(340,-410){\framebox(10,10)[tl]{ }}
\put(350,-410){\framebox(10,10)[tl]{ }}
\put(360,-410){\framebox(10,10)[tl]{ }}
\put(370,-410){\framebox(10,10)[tl]{ }}
\put(330,-400){\framebox(10,10)[tl]{ }}
\put(340,-400){\framebox(10,10)[tl]{ }}
\put(350,-400){\framebox(10,10)[tl]{ }}
\put(360,-400){\framebox(10,10)[tl]{ }}
\put(370,-400){\framebox(10,10)[tl]{ }}
\put(340,-390){\framebox(10,10)[tl]{ }}
\put(350,-390){\framebox(10,10)[tl]{ }}
\put(360,-390){\framebox(10,10)[tl]{ }}
\put(370,-390){\framebox(10,10)[tl]{ }}
\put(350,-380){\framebox(10,10)[tl]{ }}
\put(360,-380){\framebox(10,10)[tl]{ }}
\put(370,-380){\framebox(10,10)[tl]{ }}
\put(350,-370){\framebox(10,10)[tl]{ }}
\put(360,-370){\framebox(10,10)[tl]{ }}
\put(370,-370){\framebox(10,10)[tl]{ }}
\put(350,-360){\framebox(10,10)[tl]{ }}
\put(360,-360){\framebox(10,10)[tl]{ }}
\put(370,-360){\framebox(10,10)[tl]{ }}
\put(360,-350){\framebox(10,10)[tl]{ }}
\put(370,-350){\framebox(10,10)[tl]{ }}
\put(370,-340){\framebox(10,10)[tl]{ }}
\put(320,-310){\framebox(10,10)[tl]{ }}
\put(320,-300){\framebox(10,10)[tl]{ }}
\put(330,-300){\framebox(10,10)[tl]{ }}
\put(340,-300){\framebox(10,10)[tl]{ }}
\put(350,-300){\framebox(10,10)[tl]{ }}
\put(360,-300){\framebox(10,10)[tl]{ }}
\put(320,-290){\framebox(10,10)[tl]{ }}
\put(330,-290){\framebox(10,10)[tl]{ }}
\put(340,-290){\framebox(10,10)[tl]{ }}
\put(350,-290){\framebox(10,10)[tl]{ }}
\put(360,-290){\framebox(10,10)[tl]{ }}
\put(370,-290){\framebox(10,10)[tl]{ }}
\put(320,-280){\framebox(10,10)[tl]{ }}
\put(330,-280){\framebox(10,10)[tl]{ }}
\put(340,-280){\framebox(10,10)[tl]{ }}
\put(350,-280){\framebox(10,10)[tl]{ }}
\put(360,-280){\framebox(10,10)[tl]{ }}
\put(370,-280){\framebox(10,10)[tl]{ }}
\put(320,-270){\framebox(10,10)[tl]{ }}
\put(330,-270){\framebox(10,10)[tl]{ }}
\put(340,-270){\framebox(10,10)[tl]{ }}
\put(350,-270){\framebox(10,10)[tl]{ }}
\put(360,-270){\framebox(10,10)[tl]{ }}
\put(370,-270){\framebox(10,10)[tl]{ }}
\put(320,-260){\framebox(10,10)[tl]{ }}
\put(330,-260){\framebox(10,10)[tl]{ }}
\put(340,-260){\framebox(10,10)[tl]{ }}
\put(350,-260){\framebox(10,10)[tl]{ }}
\put(360,-260){\framebox(10,10)[tl]{ }}
\put(370,-260){\framebox(10,10)[tl]{ }}
\put(330,-250){\framebox(10,10)[tl]{ }}
\put(340,-250){\framebox(10,10)[tl]{ }}
\put(350,-250){\framebox(10,10)[tl]{ }}
\put(360,-250){\framebox(10,10)[tl]{ }}
\put(370,-250){\framebox(10,10)[tl]{ }}
\put(330,-240){\framebox(10,10)[tl]{ }}
\put(340,-240){\framebox(10,10)[tl]{ }}
\put(350,-240){\framebox(10,10)[tl]{ }}
\put(360,-240){\framebox(10,10)[tl]{ }}
\put(370,-240){\framebox(10,10)[tl]{ }}
\put(340,-230){\framebox(10,10)[tl]{ }}
\put(350,-230){\framebox(10,10)[tl]{ }}
\put(360,-230){\framebox(10,10)[tl]{ }}
\put(370,-230){\framebox(10,10)[tl]{ }}
\put(350,-220){\framebox(10,10)[tl]{ }}
\put(360,-220){\framebox(10,10)[tl]{ }}
\put(370,-220){\framebox(10,10)[tl]{ }}
\put(350,-210){\framebox(10,10)[tl]{ }}
\put(360,-210){\framebox(10,10)[tl]{ }}
\put(370,-210){\framebox(10,10)[tl]{ }}
\put(350,-200){\framebox(10,10)[tl]{ }}
\put(360,-200){\framebox(10,10)[tl]{ }}
\put(370,-200){\framebox(10,10)[tl]{ }}
\put(360,-190){\framebox(10,10)[tl]{ }}
\put(370,-190){\framebox(10,10)[tl]{ }}
\put(370,-180){\framebox(10,10)[tl]{ }}
\put(320,-150){\framebox(10,10)[tl]{ }}
\put(330,-150){\framebox(10,10)[tl]{ }}
\put(320,-140){\framebox(10,10)[tl]{ }}
\put(330,-140){\framebox(10,10)[tl]{ }}
\put(340,-140){\framebox(10,10)[tl]{ }}
\put(350,-140){\framebox(10,10)[tl]{ }}
\put(360,-140){\framebox(10,10)[tl]{ }}
\put(320,-130){\framebox(10,10)[tl]{ }}
\put(330,-130){\framebox(10,10)[tl]{ }}
\put(340,-130){\framebox(10,10)[tl]{ }}
\put(350,-130){\framebox(10,10)[tl]{ }}
\put(360,-130){\framebox(10,10)[tl]{ }}
\put(320,-120){\framebox(10,10)[tl]{ }}
\put(330,-120){\framebox(10,10)[tl]{ }}
\put(340,-120){\framebox(10,10)[tl]{ }}
\put(350,-120){\framebox(10,10)[tl]{ }}
\put(360,-120){\framebox(10,10)[tl]{ }}
\put(370,-120){\framebox(10,10)[tl]{ }}
\put(320,-110){\framebox(10,10)[tl]{ }}
\put(330,-110){\framebox(10,10)[tl]{ }}
\put(340,-110){\framebox(10,10)[tl]{ }}
\put(350,-110){\framebox(10,10)[tl]{ }}
\put(360,-110){\framebox(10,10)[tl]{ }}
\put(370,-110){\framebox(10,10)[tl]{ }}
\put(320,-100){\framebox(10,10)[tl]{ }}
\put(330,-100){\framebox(10,10)[tl]{ }}
\put(340,-100){\framebox(10,10)[tl]{ }}
\put(350,-100){\framebox(10,10)[tl]{ }}
\put(360,-100){\framebox(10,10)[tl]{ }}
\put(370,-100){\framebox(10,10)[tl]{ }}
\put(330,-90){\framebox(10,10)[tl]{ }}
\put(340,-90){\framebox(10,10)[tl]{ }}
\put(350,-90){\framebox(10,10)[tl]{ }}
\put(360,-90){\framebox(10,10)[tl]{ }}
\put(370,-90){\framebox(10,10)[tl]{ }}
\put(330,-80){\framebox(10,10)[tl]{ }}
\put(340,-80){\framebox(10,10)[tl]{ }}
\put(350,-80){\framebox(10,10)[tl]{ }}
\put(360,-80){\framebox(10,10)[tl]{ }}
\put(370,-80){\framebox(10,10)[tl]{ }}
\put(340,-70){\framebox(10,10)[tl]{ }}
\put(350,-70){\framebox(10,10)[tl]{ }}
\put(360,-70){\framebox(10,10)[tl]{ }}
\put(370,-70){\framebox(10,10)[tl]{ }}
\put(350,-60){\framebox(10,10)[tl]{ }}
\put(360,-60){\framebox(10,10)[tl]{ }}
\put(370,-60){\framebox(10,10)[tl]{ }}
\put(350,-50){\framebox(10,10)[tl]{ }}
\put(360,-50){\framebox(10,10)[tl]{ }}
\put(370,-50){\framebox(10,10)[tl]{ }}
\put(350,-40){\framebox(10,10)[tl]{ }}
\put(360,-40){\framebox(10,10)[tl]{ }}
\put(370,-40){\framebox(10,10)[tl]{ }}
\put(360,-30){\framebox(10,10)[tl]{ }}
\put(370,-30){\framebox(10,10)[tl]{ }}
\put(370,-20){\framebox(10,10)[tl]{ }}
\put(210,-70){\framebox(10,10)[tl]{ }}
\put(220,-70){\framebox(10,10)[tl]{ }}
\put(230,-70){\framebox(10,10)[tl]{ }}
\put(210,-60){\framebox(10,10)[tl]{ }}
\put(220,-60){\framebox(10,10)[tl]{ }}
\put(230,-60){\framebox(10,10)[tl]{ }}
\put(240,-60){\framebox(10,10)[tl]{ }}
\put(250,-60){\framebox(10,10)[tl]{ }}
\put(210,-50){\framebox(10,10)[tl]{ }}
\put(220,-50){\framebox(10,10)[tl]{ }}
\put(230,-50){\framebox(10,10)[tl]{ }}
\put(240,-50){\framebox(10,10)[tl]{ }}
\put(250,-50){\framebox(10,10)[tl]{ }}
\put(210,-40){\framebox(10,10)[tl]{ }}
\put(220,-40){\framebox(10,10)[tl]{ }}
\put(230,-40){\framebox(10,10)[tl]{ }}
\put(240,-40){\framebox(10,10)[tl]{ }}
\put(250,-40){\framebox(10,10)[tl]{ }}
\put(210,-30){\framebox(10,10)[tl]{ }}
\put(220,-30){\framebox(10,10)[tl]{ }}
\put(230,-30){\framebox(10,10)[tl]{ }}
\put(240,-30){\framebox(10,10)[tl]{ }}
\put(250,-30){\framebox(10,10)[tl]{ }}
\put(260,-30){\framebox(10,10)[tl]{ }}
\put(210,-20){\framebox(10,10)[tl]{ }}
\put(220,-20){\framebox(10,10)[tl]{ }}
\put(230,-20){\framebox(10,10)[tl]{ }}
\put(240,-20){\framebox(10,10)[tl]{ }}
\put(250,-20){\framebox(10,10)[tl]{ }}
\put(260,-20){\framebox(10,10)[tl]{ }}
\put(220,-10){\framebox(10,10)[tl]{ }}
\put(230,-10){\framebox(10,10)[tl]{ }}
\put(240,-10){\framebox(10,10)[tl]{ }}
\put(250,-10){\framebox(10,10)[tl]{ }}
\put(260,-10){\framebox(10,10)[tl]{ }}
\put(220,0){\framebox(10,10)[tl]{ }}
\put(230,0){\framebox(10,10)[tl]{ }}
\put(240,0){\framebox(10,10)[tl]{ }}
\put(250,0){\framebox(10,10)[tl]{ }}
\put(260,0){\framebox(10,10)[tl]{ }}
\put(230,10){\framebox(10,10)[tl]{ }}
\put(240,10){\framebox(10,10)[tl]{ }}
\put(250,10){\framebox(10,10)[tl]{ }}
\put(260,10){\framebox(10,10)[tl]{ }}
\put(240,20){\framebox(10,10)[tl]{ }}
\put(250,20){\framebox(10,10)[tl]{ }}
\put(260,20){\framebox(10,10)[tl]{ }}
\put(240,30){\framebox(10,10)[tl]{ }}
\put(250,30){\framebox(10,10)[tl]{ }}
\put(260,30){\framebox(10,10)[tl]{ }}
\put(240,40){\framebox(10,10)[tl]{ }}
\put(250,40){\framebox(10,10)[tl]{ }}
\put(260,40){\framebox(10,10)[tl]{ }}
\put(250,50){\framebox(10,10)[tl]{ }}
\put(260,50){\framebox(10,10)[tl]{ }}
\put(260,60){\framebox(10,10)[tl]{ }}
\put(100,-70){\framebox(10,10)[tl]{ }}
\put(110,-70){\framebox(10,10)[tl]{ }}
\put(120,-70){\framebox(10,10)[tl]{ }}
\put(130,-70){\framebox(10,10)[tl]{ }}
\put(100,-60){\framebox(10,10)[tl]{ }}
\put(110,-60){\framebox(10,10)[tl]{ }}
\put(120,-60){\framebox(10,10)[tl]{ }}
\put(130,-60){\framebox(10,10)[tl]{ }}
\put(140,-60){\framebox(10,10)[tl]{ }}
\put(100,-50){\framebox(10,10)[tl]{ }}
\put(110,-50){\framebox(10,10)[tl]{ }}
\put(120,-50){\framebox(10,10)[tl]{ }}
\put(130,-50){\framebox(10,10)[tl]{ }}
\put(140,-50){\framebox(10,10)[tl]{ }}
\put(100,-40){\framebox(10,10)[tl]{ }}
\put(110,-40){\framebox(10,10)[tl]{ }}
\put(120,-40){\framebox(10,10)[tl]{ }}
\put(130,-40){\framebox(10,10)[tl]{ }}
\put(140,-40){\framebox(10,10)[tl]{ }}
\put(100,-30){\framebox(10,10)[tl]{ }}
\put(110,-30){\framebox(10,10)[tl]{ }}
\put(120,-30){\framebox(10,10)[tl]{ }}
\put(130,-30){\framebox(10,10)[tl]{ }}
\put(140,-30){\framebox(10,10)[tl]{ }}
\put(100,-20){\framebox(10,10)[tl]{ }}
\put(110,-20){\framebox(10,10)[tl]{ }}
\put(120,-20){\framebox(10,10)[tl]{ }}
\put(130,-20){\framebox(10,10)[tl]{ }}
\put(140,-20){\framebox(10,10)[tl]{ }}
\put(150,-20){\framebox(10,10)[tl]{ }}
\put(110,-10){\framebox(10,10)[tl]{ }}
\put(120,-10){\framebox(10,10)[tl]{ }}
\put(130,-10){\framebox(10,10)[tl]{ }}
\put(140,-10){\framebox(10,10)[tl]{ }}
\put(150,-10){\framebox(10,10)[tl]{ }}
\put(110,0){\framebox(10,10)[tl]{ }}
\put(120,0){\framebox(10,10)[tl]{ }}
\put(130,0){\framebox(10,10)[tl]{ }}
\put(140,0){\framebox(10,10)[tl]{ }}
\put(150,0){\framebox(10,10)[tl]{ }}
\put(120,10){\framebox(10,10)[tl]{ }}
\put(130,10){\framebox(10,10)[tl]{ }}
\put(140,10){\framebox(10,10)[tl]{ }}
\put(150,10){\framebox(10,10)[tl]{ }}
\put(130,20){\framebox(10,10)[tl]{ }}
\put(140,20){\framebox(10,10)[tl]{ }}
\put(150,20){\framebox(10,10)[tl]{ }}
\put(130,30){\framebox(10,10)[tl]{ }}
\put(140,30){\framebox(10,10)[tl]{ }}
\put(150,30){\framebox(10,10)[tl]{ }}
\put(130,40){\framebox(10,10)[tl]{ }}
\put(140,40){\framebox(10,10)[tl]{ }}
\put(150,40){\framebox(10,10)[tl]{ }}
\put(140,50){\framebox(10,10)[tl]{ }}
\put(150,50){\framebox(10,10)[tl]{ }}
\put(150,60){\framebox(10,10)[tl]{ }}
\put(-10,-70){\framebox(10,10)[tl]{ }}
\put(0,-70){\framebox(10,10)[tl]{ }}
\put(10,-70){\framebox(10,10)[tl]{ }}
\put(20,-70){\framebox(10,10)[tl]{ }}
\put(30,-70){\framebox(10,10)[tl]{ }}
\put(-10,-60){\framebox(10,10)[tl]{ }}
\put(0,-60){\framebox(10,10)[tl]{ }}
\put(10,-60){\framebox(10,10)[tl]{ }}
\put(20,-60){\framebox(10,10)[tl]{ }}
\put(30,-60){\framebox(10,10)[tl]{ }}
\put(-10,-50){\framebox(10,10)[tl]{ }}
\put(0,-50){\framebox(10,10)[tl]{ }}
\put(10,-50){\framebox(10,10)[tl]{ }}
\put(20,-50){\framebox(10,10)[tl]{ }}
\put(30,-50){\framebox(10,10)[tl]{ }}
\put(-10,-40){\framebox(10,10)[tl]{ }}
\put(0,-40){\framebox(10,10)[tl]{ }}
\put(10,-40){\framebox(10,10)[tl]{ }}
\put(20,-40){\framebox(10,10)[tl]{ }}
\put(30,-40){\framebox(10,10)[tl]{ }}
\put(-10,-30){\framebox(10,10)[tl]{ }}
\put(0,-30){\framebox(10,10)[tl]{ }}
\put(10,-30){\framebox(10,10)[tl]{ }}
\put(20,-30){\framebox(10,10)[tl]{ }}
\put(30,-30){\framebox(10,10)[tl]{ }}
\put(-10,-20){\framebox(10,10)[tl]{ }}
\put(0,-20){\framebox(10,10)[tl]{ }}
\put(10,-20){\framebox(10,10)[tl]{ }}
\put(20,-20){\framebox(10,10)[tl]{ }}
\put(30,-20){\framebox(10,10)[tl]{ }}
\put(0,-10){\framebox(10,10)[tl]{ }}
\put(10,-10){\framebox(10,10)[tl]{ }}
\put(20,-10){\framebox(10,10)[tl]{ }}
\put(30,-10){\framebox(10,10)[tl]{ }}
\put(40,-10){\framebox(10,10)[tl]{ }}
\put(0,0){\framebox(10,10)[tl]{ }}
\put(10,0){\framebox(10,10)[tl]{ }}
\put(20,0){\framebox(10,10)[tl]{ }}
\put(30,0){\framebox(10,10)[tl]{ }}
\put(40,0){\framebox(10,10)[tl]{ }}
\put(10,10){\framebox(10,10)[tl]{ }}
\put(20,10){\framebox(10,10)[tl]{ }}
\put(30,10){\framebox(10,10)[tl]{ }}
\put(40,10){\framebox(10,10)[tl]{ }}
\put(20,20){\framebox(10,10)[tl]{ }}
\put(30,20){\framebox(10,10)[tl]{ }}
\put(40,20){\framebox(10,10)[tl]{ }}
\put(20,30){\framebox(10,10)[tl]{ }}
\put(30,30){\framebox(10,10)[tl]{ }}
\put(40,30){\framebox(10,10)[tl]{ }}
\put(20,40){\framebox(10,10)[tl]{ }}
\put(30,40){\framebox(10,10)[tl]{ }}
\put(40,40){\framebox(10,10)[tl]{ }}
\put(30,50){\framebox(10,10)[tl]{ }}
\put(40,50){\framebox(10,10)[tl]{ }}
\put(40,60){\framebox(10,10)[tl]{ }}
\end{picture}
\end{example}
Smart Green Rural Transport
A consortium including Urban Foresight has won €3.9 million for a European project focused on enhancing the capacity of authorities to reduce CO2 from personal transport in remote, rural and island areas.
Green Passenger Transport in Rural Areas (G-PaTRA) will focus on integrated transport services and new organisational ownership models for sustainable rural public transport in the North Sea Region of Europe. The project aims to do this by embedding increasing numbers of zero-emission vehicles in rural transport systems and by improving, optimising and better integrating available passenger transport services.
"Rural areas present unique territorial challenges and have received relatively limited attention in terms of funding" said Andrew Willis, Head of Projects at Urban Foresight. "Public transport in these settings is noted for being particularly high carbon and subsidy intensive, and car-alternative forms of transport are often limited. In addition, carbon reduction strategies focused on urban transport are rarely transferable to rural areas."
The G-PaTRA consortium is led by Robert Gordon University and comprises partners from the UK, Norway, Denmark, Sweden, the Netherlands, Germany and Belgium. The project will run from 2017 through 2020 and will demonstrate the technical innovations and the institutional, operational and social changes needed. It will also aim to transfer these new techniques to countries across Europe.
Urban Foresight will lead on the Innovation Capture and Transfer work package, designed to assess innovation across the whole project, ensuring objectives are met and knowledge is effectively exchanged across partners and disseminated across the stakeholder network. This includes the development and delivery of an expert workshop to capture current state of the art in rural green passenger transport.
A total of 15 projects across four priority areas have received €31.4m and have a total budget of €62.8m. They are being delivered through the North Sea Programme 2014-2020, part of the wider Interreg initiative which is funded by the European Regional Development Fund.
Further information on all of the projects including G-PaTRA can be found here.
G-PaTRA is a project co-funded by the North Sea Programme of the European Regional Development Fund of the European Union.
package com.danan.business.smot.rest.impl;
import com.danan.business.smot.domain.Event;
import com.danan.business.smot.rest.EventManager;
import com.danan.business.smot.rest.NotEventException;
import com.danan.business.smot.rest.bean.SearchAuditLogResponse;
import com.github.sawied.persistent.domain.SearchAuditResponse;
import com.github.sawied.persistent.repository.UserAuditLogRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.util.StringUtils;
import org.springframework.web.bind.annotation.*;
import java.util.ArrayList;
import java.util.List;
@RestController
@RequestMapping("/events/*")
@ResponseBody
public class EventManagerImpl implements EventManager {
@Autowired(required=false)
private UserAuditLogRepository userAuditLogRepository=null;
@Override
@RequestMapping(method = RequestMethod.GET, path = "/{id}")
@ResponseBody
public List<Event> retrieveEvent(@PathVariable String id) throws Exception {
if (StringUtils.isEmpty(id)) {
// avoid Long.valueOf on an empty string, which would throw NumberFormatException
throw new NotEventException(null, "can't find the event entity in database");
}
if (id.length() > 3) {
throw new NotEventException(Long.valueOf(id), "can't find the event entity in database");
}
List<Event> events = new ArrayList<Event>();
events.add(new Event("1", "New project Created"));
return events;
}
@Override
@RequestMapping(path = "/", method = RequestMethod.POST)
public List<SearchAuditLogResponse> searchAuditLog() {
List<SearchAuditResponse> auditResults = userAuditLogRepository.searchAuditUseResultMapping();
// TODO: map the persistence-layer SearchAuditResponse entries into SearchAuditLogResponse beans;
// return an empty list rather than null so callers do not hit a NullPointerException
return new ArrayList<>();
}
}
Our Performance promise
Sustainability (admin, 2023-01-26)
WE LOVE SUSTAINABILITY
For more natural and greener cosmetics
We have set ourselves high goals so that we, too, will leave a green footprint as a company and contribute to a greener world. Our Green Agenda focuses on three key aspects: CO2, packaging materials and ingredients.
Getting rid of CO2
As of now, we are already CO2-neutral.
We not only want to keep compensating for our production's CO2 emissions, but actively reduce them by 50% until 2025.
By 2027 at the latest, we want to cease the use of fossil fuels to supply our buildings, production, and operational mobility pool.
We always offer our customers the most sustainable production solutions.
We always favour climate-neutral as well as local suppliers.
We focus on more sustainable packaging materials
95% of all packaging materials will be suited for the recycling system in accordance with the German standards by 2027.
We always offer our customers the most sustainable product solutions.
By 2030, 100% of our ingredients for formulations and our packaging materials shall be traceable.
We abstain from the use of ingredients that are harmful to the environment
We always offer our customers the most environmentally friendly product solutions.
Our formulations are free of microplastics and environmentally critical synthetic polymers.
We will switch resource-critical raw materials to certified ones or alternative certified raw materials by 2025.
OUR PROMISE TO A MORE BEAUTIFUL WORLD
We have set ourselves high goals so that we, too, will leave a green footprint as a company and contribute to a better world.
Learn more about sustainability
FEMIA Cosmetic
Gut Weide 1
+49 (0) 241 / 9279 - 0
info@femia.de
Performance promise
© FEMIA Cosmetic Vertriebsgesellschaft mbH 2023
\section{Introduction}
In the field of mathematical optimization one is interested in efficiently solving a minimization problem of the form
\begin{align} \label{opt:gen_min}
\min_{{\mathbf x}\in X} f({\mathbf x})
\end{align}
where the \emph{objective function} $f$ is some real-valued function defined over the \emph{constraints set} $X$. Many core problems in the fields of Computer Science, Economics, and Operations Research can be readily expressed in this form, rendering this minimization problem far-reaching. That being said, in its full generality this problem is simply too hard to solve, or even to approximate. As a consequence, various structural assumptions on the objective function and the constraints set, along with better-suited optimization algorithms, have been proposed so as to make this problem viable.\\
One such case is smooth and strongly convex functions over some $d$-dimensional Euclidean space\footnote{More generally, one may consider smooth and strongly convex functions over some Hilbert space.}. Precisely, we consider continuously differentiable $f:\mathbb{R}^d\to\mathbb{R}$ which are \emph{$L$-smooth}, i.e.,
\begin{align*}
\norm{\nabla f({\mathbf x})- \nabla f({\mathbf y})} &\le L\norm{{\mathbf x}-{\mathbf y}},\quad\forall {\mathbf x},{\mathbf y}\in\mathbb{R}^d
\end{align*}
and \emph{$\mu$-strongly convex}, that is,
\begin{align*}
f({\mathbf y}) \ge f({\mathbf x}) + \inprod{{\mathbf y}-{\mathbf x}}{\nabla f({\mathbf x})} + \frac{\mu}{2}\normsq{{\mathbf y}-{\mathbf x}},\quad\forall {\mathbf x},{\mathbf y}\in\mathbb{R}^d
\end{align*}
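As a quick numerical sanity check of these two definitions (an illustrative Python sketch, not part of the formal development; the matrix, test points and tolerances below are arbitrary choices): for $f({\mathbf x})=\frac{1}{2}{\mathbf x}^\top A{\mathbf x}$ with symmetric positive definite $A$, both defining inequalities hold with $L=\lambda_{\max}(A)$ and $\mu=\lambda_{\min}(A)$.

```python
import numpy as np

# f(x) = 0.5 x^T A x is L-smooth with L = lambda_max(A) and
# mu-strongly convex with mu = lambda_min(A); we check both defining
# inequalities at random pairs of points (all choices are arbitrary).
rng = np.random.default_rng(0)
d = 5
Q = rng.standard_normal((d, d))
A = Q.T @ Q + np.eye(d)                  # symmetric positive definite
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

eigs = np.linalg.eigvalsh(A)
mu, L = eigs[0], eigs[-1]

for _ in range(100):
    x, y = rng.standard_normal(d), rng.standard_normal(d)
    # L-smoothness: ||grad f(x) - grad f(y)|| <= L * ||x - y||
    assert np.linalg.norm(grad(x) - grad(y)) <= L * np.linalg.norm(x - y) + 1e-9
    # strong convexity: f(y) >= f(x) + <y - x, grad f(x)> + (mu/2) ||y - x||^2
    assert f(y) >= f(x) + (y - x) @ grad(x) + 0.5 * mu * np.linalg.norm(y - x) ** 2 - 1e-9
```

For a quadratic these constants are tight: the gradient map is exactly $A$, so no smaller $L$ or larger $\mu$ satisfies the inequalities for all pairs of points.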
A wide range of applications together with efficient solvers have made this family of problems very important. Naturally, an interesting question arises: how fast can these kinds of problems be solved? Better put, what is the computational complexity of minimizing smooth and strongly convex functions to a given degree of accuracy?\footnote{Natural as these questions might look today, matters were quite different only a few decades ago. In his 1987 book 'Introduction to Optimization', B.T. Polyak devotes a whole section to the question 'Why Are Convergence Theorems Necessary?' (see Section 1.6.2 in \cite{polyak1987introduction}).}
Prior to answering these, otherwise ill-defined, questions, one must first address the exact nature of the underlying computational model. \\
Although it is a widely accepted computational model in the theoretical computer sciences, the Turing Machine Model presents many obstacles when analyzing optimization algorithms. In their seminal work, \cite{nemirovskyproblem} evaded some of these difficulties by proposing the \emph{black box computational model}, according to which information regarding the objective function is acquired iteratively by querying an \emph{oracle}. This model does not impose any computational resource constraints\footnote{In a sense, this model is dual to the Turing Machine model, where all the information regarding the parameters of the problem is available prior to the execution of the algorithm, but the computational resources are limited in time and space.}. Nemirovsky and Yudin showed that for any optimization algorithm which employs a first-order oracle, i.e., receives $(f({\mathbf x}),\nabla f({\mathbf x}))$ upon querying at a point ${\mathbf x}\in\mathbb{R}^d$, there exists an $L$-smooth $\mu$-strongly convex function $f:\mathbb{R}^d\to\mathbb{R}$, such that for any $\epsilon>0$ the number of oracle calls needed for obtaining an \emph{$\epsilon$-optimal} solution $\tilde{{\mathbf x}}$, i.e.,
\begin{align} \label{ineq:eps_subopt}
f(\tilde{{\mathbf x}}) < \min_{{\mathbf x}\in\mathbb{R}^d} f({\mathbf x})+ \epsilon
\end{align}
must satisfy
\begin{align} \label{ineq:sqrtlb}
\# \text{ Oracle Calls} \ge \tilde{\Omega}\circpar{\min\left\{d, \sqrt{\kappa}\ln(1/\epsilon)\right\}}
\end{align}
where $\kappa\stackrel{\vartriangle}{=} L/\mu$ denotes the so-called \emph{condition number}. \\
The result of Nemirovsky and Yudin can be seen as the starting point of the
present paper. The restricted validity of this lower bound to the first
$\mathcal{O}\!\circpar{d}$ iterations is not a mere artifact of the analysis. Indeed,
from an information point of view, a minimizer of any convex quadratic
function can be found using no more than $\mathcal{O}(d)$ first-order queries.
Noticing that this bound is attained by the Conjugate Gradient Descent method
(CGD, see \cite{polyak1987introduction}), it seems that one cannot get a
non-trivial lower bound once the number of queries exceeds the dimension $d$.
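This finite-termination behavior is easy to observe numerically. The following sketch (a textbook conjugate gradient implementation, included purely as an illustration; the problem data are arbitrary) reaches the minimizer of a random $d$-dimensional strongly convex quadratic using at most $d$ first-order steps.

```python
import numpy as np

# Textbook conjugate gradient for f(x) = 0.5 x^T A x - b^T x:
# in exact arithmetic it terminates within d iterations.
def conjugate_gradient(A, b, x0):
    x = x0.copy()
    r = b - A @ x                      # residual = negative gradient
    p = r.copy()
    for _ in range(len(b)):            # at most d iterations
        Ap = A @ p
        step = (r @ r) / (p @ Ap)
        x = x + step * p
        r_new = r - step * Ap
        if np.linalg.norm(r_new) < 1e-12:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

rng = np.random.default_rng(1)
d = 8
Q = rng.standard_normal((d, d))
A = Q.T @ Q + np.eye(d)                # symmetric positive definite
b = rng.standard_normal(d)
x = conjugate_gradient(A, b, np.zeros(d))
assert np.linalg.norm(A @ x - b) < 1e-6    # minimizer found after <= d steps
```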
Moreover, a similar situation can be shown to occur for more general classes
of convex functions. However, the known algorithms which attain such behavior
(such as CGD and the center-of-gravity method, e.g., \cite{nemirovski2005efficient}) require
computationally intensive iterations, and are quite different than many
common algorithms used for large-scale optimization problems, such as
gradient descent and its variants. Thus, to capture the attainable
performance of such algorithms, we must make additional assumptions on their
structure. This can be made more solid using the following simple observation. \\
\emph{When applied on quadratic functions, the update rule of many optimization algorithms reduces to a recursive application of a linear transformation which depends, possibly randomly, on the previous $p$ query points}.
\\
\\
Indeed, the update rule of CGD for quadratic functions is \emph{non-stationary}, i.e., it uses a different linear transformation at each iteration, as opposed to other optimization algorithms which utilize less complex update rules, such as: stationary update rules, e.g., Gradient Descent, Accelerated Gradient Descent, Newton's method (see \cite{nesterov2004introductory}), the Heavy Ball method \cite{polyak1987introduction}, SDCA (see \cite{shalev2013stochastic}) and SAG (see \cite{roux2012stochastic}); cyclic update rules, e.g., SVRG (see \cite{johnson2013accelerating}); and piecewise-stationary update rules, e.g., proximal methods and Accelerated SDCA (see \cite{shalev2013accelerated}). Inspired by this observation, in the present work we explore the boundaries of optimization algorithms which admit stationary update rules. We call such algorithms $p$-Stationary Canonical Linear Iterative optimization algorithms (abbr. $p$-SCLI), where $p$ designates the number of previous points which are necessary to generate new points. The quantity $p$ can be instructively interpreted as a limit on the amount of memory at the algorithm's disposal. \\
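As a concrete instance of a stationary update rule (an illustrative sketch; the matrix, vector and step size below are arbitrary choices): on a quadratic, a Gradient Descent step with fixed step size $\eta$ coincides with the fixed affine map ${\mathbf x}\mapsto (I-\eta A){\mathbf x}-\eta{\mathbf b}$, i.e., Gradient Descent is a $1$-SCLI algorithm.

```python
import numpy as np

# On f(x) = 0.5 x^T A x + b^T x, a gradient step with fixed step size eta
# equals the stationary affine map x -> (I - eta*A) x - eta*b.
rng = np.random.default_rng(2)
d = 4
Q = rng.standard_normal((d, d))
A = Q.T @ Q + np.eye(d)
b = rng.standard_normal(d)
eta = 1.0 / np.linalg.eigvalsh(A)[-1]      # eta = 1/L

x = rng.standard_normal(d)
gd = x - eta * (A @ x + b)                 # plain gradient step
linear = (np.eye(d) - eta * A) @ x - eta * b
assert np.allclose(gd, linear)             # identical updates
```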
Similar to the analysis of power iteration methods, the convergence properties of such algorithms are intimately related to the eigenvalues of the corresponding linear transformation. Specifically, as the convergence rate of the recursive application of a linear transformation is essentially characterized by its largest magnitude eigenvalue, the asymptotic convergence rate of $p$-SCLI algorithms can be bounded from above and from below by analyzing the spectrum of the corresponding linear transformation. At this point we would like to remark that the technique of linearizing iterative procedures and analyzing their convergence behavior accordingly, which dates back to the pioneering work of the Russian mathematician Lyapunov, has been successfully applied in the field of mathematical optimization many times, e.g., \cite{polyak1987introduction} and more recently \cite{lessard2014analysis}. However, whereas previous works were primarily concerned with deriving upper bounds on the magnitude of the corresponding eigenvalues, in this work our reference point is lower bounds. \\
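A small numerical illustration of this point (a sketch under the standard linear-iteration analysis, not the paper's derivation; all problem data are arbitrary): for the stationary iteration induced by Gradient Descent on a quadratic, the empirical per-step contraction of the error approaches the spectral radius of the iteration matrix.

```python
import numpy as np

# Stationary linear iteration x_{k+1} = M x_k + v with rho(M) < 1:
# the asymptotic per-step error contraction equals rho(M).
rng = np.random.default_rng(3)
d = 4
Q = rng.standard_normal((d, d))
A = Q.T @ Q + np.eye(d)
b = rng.standard_normal(d)
eta = 1.0 / np.linalg.eigvalsh(A)[-1]
M = np.eye(d) - eta * A                    # gradient-descent iteration matrix
rho = max(abs(np.linalg.eigvals(M)))
x_star = np.linalg.solve(A, -b)            # minimizer of 0.5 x^T A x + b^T x

x = rng.standard_normal(d)
errs = []
for _ in range(100):
    x = M @ x - eta * b
    errs.append(np.linalg.norm(x - x_star))
# the empirical contraction factor converges to the spectral radius
assert abs(errs[-1] / errs[-2] - rho) < 1e-2
```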
As eigenvalues are merely roots of characteristic polynomials\footnote{In fact, we will use a polynomial matrix analogue of characteristic polynomials, which will turn out to be more useful for our purposes.}, our approach involves establishing a lower bound on the maximal modulus (absolute value) of the roots of polynomials. Clearly, in order to find a meaningful lower bound, one must first find a condition which is satisfied by all characteristic polynomials that correspond to $p$-SCLIs. We show that such a condition does exist by proving that characteristic polynomials of consistent $p$-SCLIs, which correctly minimize the function at hand, must have a specific evaluation at $\lambda=1$. This in turn allows us to analyze the convergence rate purely in terms of the analytic theory of polynomials, i.e.,
\begin{align} \label{opt:intro_poly_lb}
\textbf{Find} \quad\min\myset{\rho(q(z))}{q(z) \text{ is a real monic polynomial of degree } p \text{ and } q(1)=r }
\end{align}
where $r\in\mathbb{R}$ and $\rho(q(z))$ denotes the maximum modulus over all roots of $q(z)$. Although a vast range of techniques have been developed for bounding the moduli of roots of polynomials (e.g. \cite{marden1966geometry,rahman2002analytic,milovanovic1994topics,walsh1922location,milovanovic2000distribution,fell1980zeros}), to the best of our knowledge, few of them address lower bounds (see \cite{higham2003bounds}). The minimization problem (\ref{opt:intro_poly_lb}) is also strongly connected with the question of bounding the spectral radius of 'generalized' companion matrices from below. Unfortunately, this topic too lacks an adequate coverage in the literature (see \cite{wolkowicz1980bounds,zhong2008bounds,horne1997lower,huang2007improving}). Consequently, we devote part of this work to establish new tools for tackling (\ref{opt:intro_poly_lb}). It is noteworthy that these tools are developed by using elementary arguments. This sharply contrasts with previous proof techniques used for deriving lower bounds on the convergence rate of optimization algorithms, which employed heavy machinery from the field of extremal polynomials, such as Chebyshev polynomials (e.g., \cite{mason2002chebyshev}).\\
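To give a numerical feel for the kind of bound the constraint $q(1)=r$ forces (the elementary bound below is our own illustrative observation, not the derivation used later): if $q$ is monic of degree $p$ with roots $z_1,\dots,z_p$, then $|r|=|q(1)|=\prod_i|1-z_i|\le(1+\rho(q))^p$, hence $\rho(q)\ge |r|^{1/p}-1$. The sketch tests this on random monic polynomials whose constant term is shifted to enforce the constraint; $p$ and $r$ are arbitrary.

```python
import numpy as np

# If q is monic of degree p with q(1) = r, then
# |r| = prod_i |1 - z_i| <= (1 + rho)^p over the roots z_i,
# hence rho(q) >= |r|^(1/p) - 1.
rng = np.random.default_rng(4)
p, r = 3, 64.0
for _ in range(200):
    coeffs = np.concatenate(([1.0], rng.standard_normal(p)))  # monic, degree p
    coeffs[-1] += r - np.polyval(coeffs, 1.0)                 # enforce q(1) = r
    rho = max(abs(np.roots(coeffs)))
    assert rho >= abs(r) ** (1.0 / p) - 1 - 1e-6              # bound: rho >= 3 here
```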
Based on the technique described above we present a novel lower bound on the convergence rate of $p$-SCLI optimization algorithms. More formally, we prove that any $p$-SCLI optimization algorithm over $\mathbb{R}^d$, whose iterations can be executed efficiently, requires
\begin{align} \label{opt:lblb_conv_pcli}
\# \text{Oracle Calls} \ge \tilde{\Omega}\circpar{ \sqrt[p]{\kappa} \ln(1/\epsilon)}
\end{align}
in order to obtain an $\epsilon$-optimal solution, \emph{regardless of the dimension of the problem}. This result partially complements the lower bound presented earlier in \ineqref{ineq:sqrtlb}. More specifically, for $p=1$, we show that the runtime of algorithms whose update rules do not depend on previous points (e.g., Gradient Descent) and can be computed efficiently scales linearly with the condition number. For $p=2$, we get the optimal result for smooth and strongly convex functions. For $p>2$, this lower bound is clearly weaker than the lower bound shown in (\ref{ineq:sqrtlb}) during the first $d$ iterations. However, we show that it can indeed be attained by $p$-SCLI schemes, and surprisingly, some of them can be executed efficiently for certain classes of quadratic functions. Finally, we believe that a more refined analysis of problem (\ref{opt:intro_poly_lb}) would show that this technique is powerful enough to meet the classical lower bound $\sqrt{\kappa}$ for any $p$, in the worst case over all quadratic problems.\\
The last part of this work concerns a cornerstone in the field of mathematical optimization, i.e., Nesterov's well-known Accelerated Gradient Descent method (AGD). At the time the work of Nemirovsky and Yudin was published, it was known that Gradient Descent (GD) obtains an $\epsilon$-optimal solution by issuing no more than
\begin{align*}
\bigO{\kappa \ln (1/\epsilon)}
\end{align*}
first-order queries. The gap between this upper bound and the lower bound shown in (\ref{ineq:sqrtlb}) has intrigued many researchers in the field. Eventually, it was this line of inquiry that led to the discovery of AGD by Nesterov (see \cite{nesterov1983method}), a slight modification of the standard GD algorithm, whose iteration complexity is
\begin{align*}
\bigO{\sqrt{\kappa} \ln (1/\epsilon)}
\end{align*}
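This $\kappa$ versus $\sqrt{\kappa}$ gap is easy to observe empirically. The following sketch (textbook forms of GD and constant-momentum AGD for strongly convex quadratics, used here only as an illustration; the condition number, dimension and tolerance are arbitrary choices) compares iteration counts on an ill-conditioned quadratic with $\kappa=400$.

```python
import numpy as np

# GD vs. Nesterov's constant-momentum AGD on f(x) = 0.5 x^T A x,
# kappa = L/mu = 400; the minimizer is the origin.
mu, L = 1.0, 400.0
A = np.diag(np.linspace(mu, L, 20))
grad = lambda x: A @ x
rng = np.random.default_rng(5)
x0 = rng.standard_normal(20)
tol, cap = 1e-6, 10**6

def run_gd():
    x = x0.copy()
    for k in range(1, cap + 1):
        x = x - grad(x) / L
        if np.linalg.norm(x) < tol:
            return k
    return cap

def run_agd():
    x, y = x0.copy(), x0.copy()
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    for k in range(1, cap + 1):
        x_new = y - grad(y) / L
        y = x_new + beta * (x_new - x)   # momentum extrapolation
        x = x_new
        if np.linalg.norm(x) < tol:
            return k
    return cap

k_gd, k_agd = run_gd(), run_agd()
assert k_agd < k_gd < cap                # acceleration is clearly visible
```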
Unfortunately, AGD lacks the strong geometrical intuition which accompanies many optimization algorithms, such as FGD and the Heavy Ball method. Its proof is primarily based on sophisticated algebraic manipulations, and many works have sought a more intuitive derivation (e.g. \cite{beck2009fast,baes2009estimate,tseng2008accelerated,sutskever2013importance,allen2014novel}). This downside has rendered the generalization of AGD to different optimization scenarios, such as constrained optimization problems, a highly non-trivial task which, up to the present time, does not admit a completely satisfactory solution. Surprisingly enough, by designing optimization algorithms whose characteristic polynomials are optimal with respect to a constrained version of (\ref{opt:intro_poly_lb}), we have uncovered a novel and simple derivation of AGD. This reformulation as an optimal solution of a constrained optimization problem over polynomials shows that AGD and the Heavy Ball method are essentially two sides of the same coin. \\
\\
To summarize, our main contributions, in order of appearance, are the following:
\begin{itemize}
\item We define a class of algorithms ($p$-SCLI) in terms of linear operations on the last $p$ iterations, and show that they subsume some of the most interesting algorithms used in practice.
\item We prove that any $p$-SCLI optimization algorithm must use at least
\begin{align*}
\tilde{\Omega}\circpar{\sqrt[p]{\kappa}\ln(1/\epsilon)}
\end{align*}
iterations in order to obtain an $\epsilon$-optimal solution. As mentioned earlier, unlike existing lower bounds, our bound holds for every fixed dimensionality.
\item We show that there exist matching $p$-SCLI optimization algorithms which attain the convergence rates stated above for all $p$. Alas, for $p\ge3$, an expensive pre-calculation task renders these algorithms inefficient.
\item As a result, we focus on a restricted subclass of $p$-SCLI optimization algorithms which can be executed efficiently. This yields a novel systematic derivation of Full Gradient Descent, Accelerated Gradient Descent and the Heavy-Ball method (and potentially other efficient optimization algorithms), each of which corresponds to an optimal solution of an optimization problem on the moduli of polynomials' roots.
\item We present new schemes which offer better utilization of second-order information by exploiting breaches in existing lower bounds. This leads to a new optimization algorithm which obtains a rate of $\sqrt[3]{\kappa}\ln(1/\epsilon)$
in the presence of large enough spectral gaps.
\end{itemize}
\subsection{Notation} \label{subsec:notation}
We denote scalars with lower case letters and vectors with bold face letters. We use $\mathbb{R}^{++}$ to denote the set of all positive real numbers. All functions in this paper are defined over Euclidean spaces equipped with the standard Euclidean norm and all matrix-norms are assumed to denote the spectral norm. \\
We denote a block-diagonal matrix whose blocks are $A_1,\dots,A_k$ by the conventional direct sum symbol, i.e., $\oplus_{i=1}^k A_i$. We devote a special operator symbol to diagonal matrices: $\text{Diag}\circpar{a_1,\dots,a_d} = \oplus_{i=1}^d a_i$. The spectrum of a square matrix $A$ and its spectral radius, the maximum magnitude over its eigenvalues, are denoted by $\spec{A}$ and $\rho(A)$, respectively. Recall that the eigenvalues of a square matrix $A\in\mathbb{R}^{d\times d}$ are exactly the roots of the characteristic polynomial, which is defined as follows
\begin{align*}
\chi_A(\lambda) &= \det(A-\lambda I_d)
\end{align*}
where $I_d$ denotes the identity matrix. Since polynomials in this paper have their origins as characteristic polynomials of some square matrices, by a slight abuse of notation, we will denote the roots of a polynomial $q(z)$ and its root radius, the maximum modulus over its roots, by $\spec{q(z)}$ and $\rho(q(z))$, respectively, as well.
\\
\\
The following notation for quadratic functions and matrices will be of frequent use,
\begin{align*}
\posdef{d}{\Sigma} &\stackrel{\vartriangle}{=} \left\{A\in\mathbb{R}^{d\times d}\middle| A \text{ is symmetric and } \spec{A} \subseteq \Sigma\right\}\\
\posdefun{d}{\Sigma} &\stackrel{\vartriangle}{=} \myset{\quadab{A,{\mathbf b}}}{A\in \posdef{d}{\Sigma},{\mathbf b}\in\mathbb{R}^d}
\end{align*}
where $\Sigma$ denotes a non-empty set of positive reals, and where $\quadab{A,{\mathbf b}}$ denotes the following quadratic function
\begin{align*}
\quadab{A,{\mathbf b}}({\mathbf x}) = {\mathbf x}^\top A {\mathbf x} + {\mathbf b}^\top {\mathbf x}
\end{align*}
\section{Framework}
In the sequel we establish our framework for analyzing optimization algorithms for minimizing smooth and strongly convex functions. First, to motivate this technique, we show that the analysis of SDCA presented in \cite{shalev2013stochastic} is tight by using a similar method. Next, we lay the foundations of the framework by generalizing and formalizing various aspects of the SDCA case. We then examine some popular optimization algorithms through this formulation. Apart from setting the boundaries for this work, this inspection gives rise to, otherwise subtle, distinctions between different optimization algorithms. Lastly, we discuss the computational complexity of $p$-SCLIs, as well as their convergence properties.
\subsection{Case Study - Stochastic Dual Coordinate Ascent} \label{section:sdca_case_study}
We consider an optimization algorithm called Stochastic Dual Coordinate Ascent (SDCA\footnote{For a detailed analysis of SDCA, please refer to \cite{shalev2013stochastic}.}) for solving Regularized Loss Minimization (RLM) problems (\ref{opt:RLM}), which are of great significance for the field of Machine Learning. It is shown that applying SDCA to quadratic loss functions allows one to reformulate it as a recursive application of linear transformations. The relative simplicity of such processes is then exploited to derive a lower bound on the convergence rate.\\
A smooth-RLM problem is an optimization task of the following form
\begin{align} \label{opt:RLM}
\min_{{\mathbf w}\in\mathbb{R}^d}P({\mathbf w}) &\stackrel{\vartriangle}{=} \frac{1}{n} \sum_{i=1}^n \phi_i({\mathbf w}^\top {\mathbf x}_i) + \frac{\lambda}{2} \normsq{{\mathbf w}}
\end{align}
where $\phi_i$ are $1/\gamma$-smooth and convex, ${\mathbf x}_1,\ldots,{\mathbf x}_n$ are vectors in $\mathbb{R}^d$ and $\lambda$ is a positive constant. For ease of presentation, we further assume that $\phi_i$ are non-negative, $\phi_i(0)\le 1$ and $\norm{{\mathbf x}_i}\le 1$ for all $i$. \\
The optimization algorithm SDCA works by minimizing an equivalent optimization problem
\begin{align*}
\min_{\alpha\in\mathbb{R}^n} D(\boldsymbol{\alpha}) \stackrel{\vartriangle}{=} \frac{1}{n}\sum_{i=1}^n \phi_i^\star(\alpha_i) +\frac{1}{2\lambda n^2} \normsq{\sum_{i=1}^n \alpha_i {\mathbf x}_i}
\end{align*}
where $\phi^\star$ denotes the Fenchel conjugate of $\phi$, by repeatedly picking $z\sim \mathcal{U}([n])$ uniformly and minimizing $D(\boldsymbol{\alpha})$ over the $z$'th coordinate. The latter optimization problem is referred to as the \emph{dual problem}, while the problem presented in (\ref{opt:RLM}) is called the \emph{primal problem}.
As shown in \cite{shalev2013stochastic}, it is possible to convert a high quality solution of the dual problem into a high quality solution of the primal problem. This allows one to bound from above the number of iterations required for obtaining a prescribed level of accuracy $\epsilon>0$ by
\begin{align*} \label{bigo:RLM_conv_rate}
\bigtO{\circpar{n+\frac{1}{\lambda \gamma}}\ln(1/\epsilon)}
\end{align*}
Let us show that this analysis is indeed tight. First, let us define the following $2$-smooth functions
\begin{align*}
\phi_i(y) = y^2,\quad i=1,\dots,n
\end{align*}
and let us define ${\mathbf x}_1={\mathbf x}_2=\cdots={\mathbf x}_n=\frac{1}{\sqrt{n}}\mathbbm{1}$. This yields
\begin{align}
D(\boldsymbol{\alpha}) &= \frac{1}{2}\boldsymbol{\alpha}^\top \circpar{ \frac{1}{2n}I+ \frac{1}{\lambda n^2} \mathbbm{1}\mathbbm{1}^\top}\boldsymbol{\alpha}
\end{align}
Clearly, the unique minimizer of $D(\boldsymbol{\alpha})$ is $\boldsymbol{\alpha}^*\eqdef0$. Now, given $i\in[n]$ and $\boldsymbol{\alpha}\in\mathbb{R}^n$, it is easy to verify that
\begin{align}
\argmin_{\alpha' \in\mathbb{R}} D(\alpha_1,\dots,\alpha_{i-1},\alpha',\alpha_{i+1},\dots,\alpha_{n}) = \frac{-2}{2+\lambda n} \sum_{j\neq i} \alpha_j
\end{align}
Thus, the next test point $\boldsymbol{\alpha}^+$, generated by taking a step along the $i$'th coordinate, is a linear transformation of the previous point, i.e.,
\begin{align} \label{eq:rlm_costep}
\boldsymbol{\alpha}^+=\circpar{I-{\mathbf e}_i {\mathbf u}_i^\top}\boldsymbol{\alpha}
\end{align}
where
\begin{align*}
{\mathbf u}_i^\top &\stackrel{\vartriangle}{=} \circpar{\frac{2}{2+\lambda n }, \dots, \frac{2}{2+\lambda n },\underbrace{1}_{i\text{'th entry}},\frac{2}{2+\lambda n },\ldots, \frac{2}{2+\lambda n } }
\end{align*}
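Both the closed-form coordinate minimizer and the equivalent rank-one linear update can be checked numerically (an illustrative Python sketch; $n$, $\lambda$, the coordinate index and the test point are arbitrary choices):

```python
import numpy as np

n, lam = 6, 0.5                                              # arbitrary values
# D(a) = 0.5 a^T H a with H = I/(2n) + 11^T/(lambda n^2)
H = np.eye(n) / (2 * n) + np.ones((n, n)) / (lam * n ** 2)

rng = np.random.default_rng(6)
alpha = rng.standard_normal(n)
i = 2
c = 2.0 / (2.0 + lam * n)

# closed-form minimizer over coordinate i: -c * sum_{j != i} alpha_j
a = alpha.copy()
a[i] = -c * (alpha.sum() - alpha[i])
assert abs((H @ a)[i]) < 1e-12            # partial derivative vanishes there

# the same step written as the linear map alpha^+ = (I - e_i u_i^T) alpha
u = np.full(n, c)
u[i] = 1.0
assert np.allclose(a, alpha - np.eye(n)[i] * (u @ alpha))
```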
Let $\boldsymbol{\alpha}^k$, $k=1,\dots,K$, denote the $k$'th test point. The sequence of points $(\boldsymbol{\alpha}^k)_{k=1}^K$ is randomly generated by minimizing $D(\boldsymbol{\alpha})$ over the $z_k$'th coordinate at the $k$'th iteration, where $z_1,z_2,\dots,z_K\sim \mathcal{U}([n])$ is a sequence of $K$ uniformly distributed i.i.d. random variables. Applying (\ref{eq:rlm_costep}) over and over again, starting from some initialization point $\boldsymbol{\alpha}^0$, we obtain
\begin{align*}
\boldsymbol{\alpha}^K&=\circpar{I-{\mathbf e}_{z_K}{\mathbf u}_{z_K}^\top}\circpar{I-{\mathbf e}_{z_{K-1}}{\mathbf u}_{z_{K-1}}^\top}\cdots\circpar{I-{\mathbf e}_{z_1}{\mathbf u}_{z_1}^\top}\boldsymbol{\alpha}^0
\end{align*}
To compute $\mathbb{E}[\boldsymbol{\alpha}^K]$ note that by the i.i.d hypothesis and by the linearity of the expectation operator,
\begin{align}
\mathbb{E}\left[\boldsymbol{\alpha}^K\right]&=\mathbb{E}\left[\circpar{I-{\mathbf e}_{z_K}{\mathbf u}_{z_K}^\top}\circpar{I-{\mathbf e}_{z_{K-1}}{\mathbf u}_{z_{K-1}}^\top}\cdots\circpar{I-{\mathbf e}_{z_1}{\mathbf u}_{z_1}^\top}\boldsymbol{\alpha}^0\right]\nonumber\\
&=\mathbb{E}\left[\circpar{I-{\mathbf e}_{z_K}{\mathbf u}_{z_K}^\top}\right]\mathbb{E}\left[\circpar{I-{\mathbf e}_{z_{K-1}}{\mathbf u}_{z_{K-1}}^\top}\right]\cdots\mathbb{E}\left[\circpar{I-{\mathbf e}_{z_1}{\mathbf u}_{z_1}^\top}\right]\boldsymbol{\alpha}^0\nonumber\\
&=\mathbb{E}\left[\circpar{I-{\mathbf e}_{z}{\mathbf u}_{z}^\top}\right]^K\boldsymbol{\alpha}^0 \label{kpoint}
\end{align}
The convergence rate of this process is governed by the spectral radius of $$E\stackrel{\vartriangle}{=} \mathbb{E}\left[I-{\mathbf e}_{z}{\mathbf u}_{z}^\top\right]$$ A straightforward calculation shows that the eigenvalues of $E$, ordered by magnitude, are
\begin{align}
\underbrace{1-\frac{1}{2/\lambda + n},\dots , 1-\frac{1 }{2/\lambda + n}}_{n-1 \text{ times}},\quad 1 - \frac{2 + \lambda}{2+\lambda n}
\end{align}
By choosing $\boldsymbol{\alpha}^0$ to be the following normalized eigenvector which corresponds to the largest eigenvalue,
$$\boldsymbol{\alpha}^0=\circpar{\frac{1}{\sqrt{2}},-\frac{1}{\sqrt{2}},0,\dots,0}$$
and plugging it into \eqref{kpoint}, we can now bound from below the distance of $\mathbb{E}[\boldsymbol{\alpha}^K]$ to the optimal point $\boldsymbol{\alpha}^*=0$,
\begin{align*}
\norm{\mathbb{E}\left[\boldsymbol{\alpha}^K\right]-\boldsymbol{\alpha}^*}
&=\norm{\mathbb{E}\left[\circpar{I-{\mathbf e}_{z}{\mathbf u}_{z}^\top}\right]^K\boldsymbol{\alpha}^0}\\
&=\circpar{1 - \frac{1}{2/\lambda+n}}^K\norm{\boldsymbol{\alpha}^0}\\
&=\circpar{1 - \frac{2}{(4/\lambda+2n-1)+1}}^K\\
&\ge \circpar{\exp\circpar{\frac{-1}{2/\lambda+n-1}}}^K
\end{align*}
where the last inequality follows from the elementary inequality
\begin{align} \label{ineq:exp_x}
1-\frac{2}{x+1}\ge \exp\circpar{\frac{-2}{x-1}},\quad \forall x\ge1
\end{align}
We see that the minimal number of iterations required for obtaining a solution whose distance from $\boldsymbol{\alpha}^*$ is less than $\epsilon>0$ must be greater than
\begin{align*}
\circpar{2/\lambda+n-1}\ln\circpar{1/\epsilon}
\end{align*}
This shows that, up to logarithmic factors, the analysis of the convergence rate of SDCA is tight.
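The spectral claims above are easy to check numerically. The following sketch (with illustrative values of $n$ and $\lambda$, not taken from the text) builds the expected step matrix $E$ and verifies both its spectrum and the choice of $\boldsymbol{\alpha}^0$.

```python
import numpy as np

# n and lam (the regularization parameter lambda) are illustrative choices.
n, lam = 5, 0.1
c = 2.0 / (2.0 + lam * n)

# E = E_z[I - e_z u_z^T]: the matrix sum_i e_i u_i^T has rows u_i^T,
# i.e. ones on the diagonal and c off the diagonal.
M = np.full((n, n), c) + (1.0 - c) * np.eye(n)
E = np.eye(n) - M / n

eigs = np.sort(np.linalg.eigvalsh(E))[::-1]
# Predicted spectrum: 1 - 1/(2/lam + n) with multiplicity n-1,
# and 1 - (2 + lam)/(2 + lam*n).
rho = 1.0 - 1.0 / (2.0 / lam + n)
small = 1.0 - (2.0 + lam) / (2.0 + lam * n)
assert np.allclose(eigs[: n - 1], rho)
assert np.isclose(eigs[-1], small)

# alpha^0 = (1/sqrt 2, -1/sqrt 2, 0, ..., 0) is an eigenvector for rho,
# so the expected iterate shrinks exactly by rho each step.
a0 = np.zeros(n)
a0[0], a0[1] = 1 / np.sqrt(2), -1 / np.sqrt(2)
assert np.allclose(E @ a0, rho * a0)
```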
\subsection{Definitions}
In the sequel we introduce the framework of $p$-SCLI optimization algorithms, which generalizes the analysis shown in the preceding section.\\
We denote the set of $d\times d$ symmetric matrices whose spectrum lies in $\Sigma\subseteq\mathbb{R}^{++}$ by $\posdef{d}{\Sigma}$ and denote the following set of quadratic functions
\begin{align*}
f_{A,{\mathbf b}}({\mathbf x}) \stackrel{\vartriangle}{=}\frac{1}{2}{\mathbf x}^\top A{\mathbf x} + {\mathbf b}^\top {\mathbf x},\quad A\in\posdef{d}{\Sigma}
\end{align*}
by $\posdefun{d}{\Sigma}$. Note that since twice continuously differentiable functions $f({\mathbf x})$ are $L$-smooth and $\mu$-strongly convex if and only if $$\spec{\nabla^2 (f({\mathbf x}))}\subseteq [\mu,L]\subseteq\mathbb{R}^{++},\quad {\mathbf x}\in\mathbb{R}^d$$ we have that $\posdefun{d}{[\mu,L]}$ comprises $L$-smooth $\mu$-strongly convex quadratic functions. Thus, any optimization algorithm designed for minimizing smooth and strongly convex functions can be used to minimize functions in $\posdefun{d}{[\mu,L]}$. The key observation here is that since the gradient of $\quadab{A,{\mathbf b}}$ is linear in ${\mathbf x}$, when applied to quadratic functions, the update rules of many optimization algorithms also become linear in ${\mathbf x}$. This is formalized as follows.
\begin{definition} [$p$-SCLI optimization algorithms] \label{definition:pscli}
An optimization algorithm $\mathcal{A}$ is called a $p$-stationary canonical linear iterative (abbr. $p$-SCLI) optimization algorithm over $\mathbb{R}^d$ if there exist $p+1$ mappings $C_0(X),C_1(X),\dots,C_{p-1}(X),N(X)$ from $\mathbb{R}^{d\times d}$ to $\mathbb{R}^{d\times d}$-valued random variables, such that for any $\quadab{A,{\mathbf b}}\in\posdefun{d}{\Sigma}$ the corresponding initialization and update rules take the following form:
\begin{align}
&{\mathbf x}^0,{\mathbf x}^1,\dots,{\mathbf x}^{p-1}\in\mathbb{R}^d \label{def:pscli_initialization_points}\\
&{\mathbf x}^k = \sum_{j=0}^{p-1} C_{j}(A) {\mathbf x}^{k-p+j}+ N(A){\mathbf b},\quad k=p,p+1,\dots \label{def:pscli_update_rule}
\end{align}
We further assume that in each iteration $C_j(A)$ and $N(A)$ are drawn independently of previous realizations\footnote{In this context, this assumption is usually referred to as stationarity.}, and that $\mathbb{E} C_i(A)$ are finite and simultaneously triangularizable\footnote{Intuitively, having this technical requirement is somewhat similar to assuming that the coefficient matrices commute (see \cite{drazin1951some} for a precise statement), and as such does not seem to restrict the scope of this work. Indeed, it is common to have $\mathbb{E} C_i(A)$ as polynomials in $A$ or as diagonal matrices, in which case the assumption holds true.}.
\end{definition}
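To make Definition \ref{definition:pscli} concrete, here is a minimal generic sketch of update rule (\ref{def:pscli_update_rule}); the function names are illustrative assumptions, not part of the formal definition, and the instantiation is Polyak's heavy ball method viewed as a deterministic $2$-SCLI on a toy quadratic.

```python
import numpy as np

def run_pscli(coeff_samplers, inv_sampler, A, b, x0s, iters):
    """Iterate x^k = sum_j C_j(A) x^{k-p+j} + N(A) b, starting from
    the p initialization points x0s; samplers are callables in A."""
    xs = [np.asarray(x, dtype=float) for x in x0s]   # x^{k-p}, ..., x^{k-1}
    for _ in range(iters):
        x_new = sum(Cj(A) @ xs[j] for j, Cj in enumerate(coeff_samplers))
        x_new = x_new + inv_sampler(A) @ b
        xs = xs[1:] + [x_new]
    return xs[-1]

# Heavy ball as a 2-SCLI: C_0 = -beta*I, C_1 = (1+beta)I - alpha*A, N = -alpha*I.
mu, L = 1.0, 4.0
alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2
A = np.diag([mu, L])
b = np.array([1.0, -1.0])
x = run_pscli([lambda X: -beta * np.eye(2),
               lambda X: (1 + beta) * np.eye(2) - alpha * X],
              lambda X: -alpha * np.eye(2),
              A, b, [np.zeros(2), np.zeros(2)], 300)
assert np.allclose(x, -np.linalg.solve(A, b))   # converges to -A^{-1} b
```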
Let us introduce a few more definitions and terminology which will be used throughout this paper. The number of previous points $p$ by which new points are generated is called the \emph{lifting factor}. The matrix-valued random variables $C_0(X),C_1(X),\dots,C_{p-1}(X)$ and $N(X)$ are called \emph{coefficient matrices} and \emph{inversion matrix}, respectively. The term inversion matrix refers to the mapping $N(X)$, as well as to a concrete evaluation of it. It will be clear from the context which interpretation is being used. The same comment holds for coefficient matrices. \\
As demonstrated by the following definition, coefficient matrices of $p$-SCLIs can be equivalently described in terms of polynomial matrices\footnote{For a detailed treatment of polynomial matrices see \cite{gohberg2009matrix}.}. This correspondence will soon play a pivotal role in the analysis of $p$-SCLIs.
\begin{definition} \label{def:l_lambda}
The characteristic polynomial of a given $p$-SCLI optimization algorithm $\mathcal{A}$ is defined by
\begin{align}
\syspol{\mathcal{A}}{\lambda,X} &\stackrel{\vartriangle}{=} I_d\lambda^p - \sum_{j=0}^{p-1} \mathbb{E} C_j(X) \lambda^{j}
\end{align}
where $C_j(X)$ denote the coefficient matrices. Moreover, given $X\in\mathbb{R}^{d\times d}$ we define the root radius of $\syspol{\mathcal{A}}{\lambda,X}$ by
\begin{align*}
\rho_\lambda(\syspol{\cA}{\lambda,X})&= \rho(\det{\mathcal{L}(\lambda,X)}) = \max\myset{|\lambda'|}{\det{\mathcal{L}(\lambda^\prime,X)}=0}
\end{align*}
\end{definition}
For the sake of brevity, we will sometimes specify a given $p$-SCLI optimization algorithm $\mathcal{A}$ using an ordered pair of a characteristic polynomial and an inversion matrix as follows $$\mathcal{A}\stackrel{\vartriangle}{=}(\syspol{\mathcal{A}}{\lambda,X},N(X))$$
Lastly, note that nowhere in the definition of $p$-SCLIs did we assume that the optimization process converges to the minimizer of the function under consideration, an assumption which we refer to as \emph{consistency}.
\begin{definition} [Consistency of $p$-SCLI optimization algorithms] \label{definition:consistency}
A $p$-SCLI optimization algorithm $\mathcal{A}$ is said to be consistent with respect to a given $A\in\posdef{d}{\Sigma}$ if for any ${\mathbf b}\in\mathbb{R}^d$, $\mathcal{A}$ converges to the minimizer of $\quadab{A,{\mathbf b}}$, regardless of the initialization point. That is, for $\left({\mathbf x}^k\right)_{k=1}^\infty$ as defined in (\ref{def:pscli_initialization_points},\ref{def:pscli_update_rule}) we have that
\begin{align*}
{\mathbf x}^k\to-A^{-1}{\mathbf b}
\end{align*}
for any ${\mathbf b}\in\mathbb{R}^d$. Furthermore, if $\mathcal{A}$ is consistent with respect to all $A\in\posdef{d}{\Sigma}$, then we say that $\mathcal{A}$ is consistent with respect to $\posdefun{d}{\Sigma}$.
\end{definition}
\subsection{Specifications for Some Popular optimization algorithms} \label{section_spec_algo}
Having defined the framework of $p$-SCLI optimization algorithms, a natural question now arises: how broad is the scope of this framework, and what characterizes the optimization algorithms to which it applies? Loosely speaking, any optimization algorithm whose update rules depend linearly on the first and the second order derivatives of the function under consideration is eligible for this framework. Instead of providing a precise characterization for such algorithms, we apply various popular optimization algorithms on a general quadratic function $\quadab{A,{\mathbf b}}\in\posdefun{d}{[\mu,L]}$ and then re-express them as $p$-SCLI optimization algorithms.\\
\begin{description}
\item [Full Gradient Descent (FGD) ] \label{spec:fgd} is a $1$-SCLI optimization algorithm,
\begin{align*}
{\mathbf x}^0 &\in \mathbb{R}^d\\
{\mathbf x}^{k+1} &= {\mathbf x}^k - \beta \nabla f({\mathbf x}^k)= {\mathbf x}^k - \beta(A{\mathbf x}^k +{\mathbf b})=(I-\beta A){\mathbf x}^k -\beta {\mathbf b}\\
\beta &= \frac{2}{\mu +L}
\end{align*}
See \cite{nesterov2004introductory} for more details.
\item [Newton's method] \label{spec:newton} is a $0$-SCLI optimization algorithm.
\begin{align*}
{\mathbf x}^0 &\in \mathbb{R}^d\\
{\mathbf x}^{k+1} &= {\mathbf x}^k- (\nabla^{2} f({\mathbf x}^k))^{-1} \nabla f({\mathbf x}^k) = {\mathbf x}^k- A^{-1}(A{\mathbf x}^k + {\mathbf b})\\&= (I-A^{-1}A){\mathbf x}^k - A^{-1}{\mathbf b} = -A^{-1}{\mathbf b}
\end{align*}
Note that Newton's method can also be formulated as a degenerate $p$-SCLI for some $p\in\mathbb{N}$, whose coefficient matrices vanish. See \cite{nesterov2004introductory} for more details.
\item [The Heavy Ball Method] is a $2$-SCLI optimization algorithm.
\begin{align*}
{\mathbf x}^{k+1} &= {\mathbf x}^k - \alpha\nabla f({\mathbf x}^{k})+ \beta ({\mathbf x}^{k}-{\mathbf x}^{k-1}) \\
&= {\mathbf x}^k - \alpha(A{\mathbf x}^{k} + {\mathbf b})+ \beta ({\mathbf x}^{k}-{\mathbf x}^{k-1}) \\&= \circpar{(1+\beta) I-\alpha A } {\mathbf x}^{k} -\beta I {\mathbf x}^{k-1} -\alpha {\mathbf b}\\
\alpha &= \frac{4}{\circpar{\sqrt{L}+\sqrt{\mu}}^2},\quad
\beta = \circpar{\frac{\sqrt{L}-\sqrt{\mu}}{\sqrt{L}+\sqrt{\mu}} }^2
\end{align*}
See \cite{polyak1987introduction} for more details.
\item [Accelerated Gradient Descent (AGD)]\label{spec:agd} is a $2$-SCLI optimization algorithm.
\begin{align*}
{\mathbf x}^0&={\mathbf y}^0 \in \mathbb{R}^d\\
{\mathbf y}^{k+1} &= {\mathbf x}^{k} - \frac1L\nabla f({\mathbf x}^{k})\\
{\mathbf x}^{k+1} &= \circpar{1+\alpha}{\mathbf y}^{k+1} - \alpha {\mathbf y}^{k} \\
\alpha &= \frac{\sqrt{L}-\sqrt{\mu}}{\sqrt{L}+\sqrt{\mu}}
\end{align*}
This can be rewritten as
\begin{align*}
{\mathbf x}^0& \in \mathbb{R}^d\\
{\mathbf x}^{k+1} &= \circpar{1+\alpha}\circpar{{\mathbf x}^{k} - \frac1L\nabla f({\mathbf x}^{k})} - \alpha \circpar{{\mathbf x}^{k-1} - \frac1L\nabla f({\mathbf x}^{k-1})}\\
&= \circpar{1+\alpha}\circpar{{\mathbf x}^{k} - \frac1L (A{\mathbf x}^{k}+{\mathbf b})} - \alpha \circpar{{\mathbf x}^{k-1} - \frac1L (A{\mathbf x}^{k-1}+{\mathbf b})}\\
&= \circpar{1+\alpha}\circpar{I - \frac1L A} {\mathbf x}^{k}
-\alpha\circpar{I - \frac1L A} {\mathbf x}^{k-1}
-\frac1L {\mathbf b}
\end{align*}
See \cite{nesterov2004introductory} for more details.
\item [Stochastic Coordinate Descent (SCD)] is a $1$-SCLI optimization algorithm. This is an extension of the example shown in Section \ref{section:sdca_case_study}. SCD acts by repeatedly minimizing over a single, uniformly drawn, coordinate at each iteration. That is,
\begin{align*}
&{\mathbf x}^0 \in \mathbb{R}^d\\
&\text{Pick } i\sim \mathcal{U}([d]) \text{ and set }{\mathbf x}^{k+1} = \left(I-\frac{1}{A_{i,i}}{\mathbf e}_i \mathbf{a}_{i,\star}^\top \right){\mathbf x}^k - \frac{b_i}{A_{i,i}}{\mathbf e}_i
\end{align*}
where $\mathbf{a}_{i,\star}^\top$ denotes the $i$'th row of $A$ and ${\mathbf b}\stackrel{\vartriangle}{=}\circpar{b_1,b_2,\dots,b_d}$. Note that the expected update rule of this method is equivalent to the well-known Jacobi's iterative method.
\end{description}
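The specifications above are easy to exercise numerically. The following sketch (the quadratic instance is an illustrative assumption) runs FGD and SCD in their $p$-SCLI forms and checks convergence to the minimizer $-A^{-1}{\mathbf b}$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, mu, L = 4, 1.0, 10.0
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = Q @ np.diag(np.linspace(mu, L, d)) @ Q.T   # A in S_d([mu, L])
b = rng.standard_normal(d)
x_star = -np.linalg.solve(A, b)                # minimizer of (1/2)x'Ax + b'x

# FGD as a 1-SCLI: C_0(A) = I - beta*A, N(A) = -beta*I.
beta = 2.0 / (mu + L)
C0, N = np.eye(d) - beta * A, -beta * np.eye(d)
x = np.zeros(d)
for _ in range(500):
    x = C0 @ x + N @ b
assert np.allclose(x, x_star)

# SCD as a stochastic 1-SCLI: a freshly drawn coefficient matrix per step,
# i.e. exact minimization over one uniformly chosen coordinate.
y = np.zeros(d)
for _ in range(5000):
    i = rng.integers(d)
    y = y - (A[i] @ y + b[i]) / A[i, i] * np.eye(d)[i]
assert np.allclose(y, x_star, atol=1e-6)
```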
We now describe some popular optimization algorithms which do not fit this framework, mainly because the stationarity requirement fails to hold. The extension of this framework to cyclic and piecewise stationary optimization algorithms is left to future work.
\begin{description}
\item [Conjugate Gradient Descent (CGD)] can be expressed as a non-stationary linear iterative method.
\begin{align*}
{\mathbf x}^{k+1} &= \circpar{(1+\beta_k) I-\alpha_k A } {\mathbf x}^{k} -\beta_k I {\mathbf x}^{k-1} -\alpha_k {\mathbf b}
\end{align*}
where $\alpha_k$ and $\beta_k$ are computed at each iteration based on ${\mathbf x}^k,{\mathbf x}^{k-1},A$ and ${\mathbf b}$. Note the similarity of CGD and the heavy ball method. See \cite{polyak1987introduction,nemirovski2005efficient} for more details.
\item [Stochastic Gradient Descent (SGD) ] A straightforward extension of the deterministic FGD. Specifically, let $(\Omega,\mathcal{F},\mathcal{P})$ be a probability space and let $G({\mathbf x},\omega):\mathbb{R}^d\times \Omega\to\mathbb{R}^d$ be
an unbiased estimator of $\nabla f({\mathbf x})$ for any ${\mathbf x}$. That is,
\begin{align*}
\mathbb{E}[ G({\mathbf x},\omega) ] &= \nabla f({\mathbf x}) = A {\mathbf x} + {\mathbf b},\quad {\mathbf x}\in\mathbb{R}^d
\end{align*}
Equivalently, define ${\mathbf e}({\mathbf x},\omega)= G({\mathbf x},\omega) - (A {\mathbf x} + {\mathbf b}) $ and assume $\mathbb{E}[{\mathbf e}({\mathbf x},\omega)]=0,~{\mathbf x}\in\mathbb{R}^d$. SGD may be defined using a suitable sequence of step sizes $(\gamma_k)_{k=1}^\infty$ as follows
\begin{align*}
\text{Generate } \omega_{k} \text{ randomly and set } {\mathbf x}^{k+1} &= {\mathbf x}^k - \gamma_k G({\mathbf x}^k,\omega_{k})\\ &= \circpar{I-\gamma_k A}{\mathbf x}^k - \gamma_k {\mathbf b} - \gamma_k {\mathbf e}({\mathbf x}^k,\omega_k)
\end{align*}
Clearly, some types of noise may not form a $p$-SCLI optimization algorithm. However, for some instances, e.g., quadratic learning problems, we have $${\mathbf e}({\mathbf x},\omega)=A_\omega{\mathbf x} + {\mathbf b}_\omega$$ such that
\begin{align*}
\mathbb{E}[A_\omega]&=0,\quad \mathbb{E}[{\mathbf b}_\omega]=0
\end{align*}
If, in addition, the step size is fixed then we get a $1$-SCLI optimization algorithm. See \cite{kushner2003stochastic,spall2005introduction,nemirovski2005efficient} for more details.
\end{description}
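For the fixed-step instance above, taking expectations of the update rule gives $\mathbb{E}[{\mathbf x}^{k+1}]=(I-\gamma A)\mathbb{E}[{\mathbf x}^k]-\gamma{\mathbf b}$, since the freshly drawn noise is independent of ${\mathbf x}^k$ and has zero mean. The following sketch (illustrative dimensions, noise scale, and step size) checks this by Monte Carlo simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
d, gamma, T, runs = 3, 0.05, 200, 500
A = np.diag([1.0, 2.0, 4.0])
b = np.array([1.0, -1.0, 0.5])

# Average the final SGD iterate over independent runs; the noise is
# e(x, w) = A_w x + b_w with E[A_w] = 0 and E[b_w] = 0.
mean_x = np.zeros(d)
for _ in range(runs):
    x = np.ones(d)
    for _ in range(T):
        Aw = 0.1 * rng.standard_normal((d, d))   # zero-mean matrix noise
        bw = 0.1 * rng.standard_normal(d)        # zero-mean vector noise
        g = (A + Aw) @ x + (b + bw)              # unbiased gradient estimate
        x = x - gamma * g
    mean_x += x / runs

# Deterministic 1-SCLI recursion followed by the expectation.
z = np.ones(d)
for _ in range(T):
    z = (np.eye(d) - gamma * A) @ z - gamma * b
assert np.allclose(mean_x, z, atol=0.05)
```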
\subsection{Computational Complexity}
The stationarity property of general $p$-SCLI optimization algorithms implies that the computational cost of minimizing a given quadratic function $\quadab{A,{\mathbf b}}$, assuming $\Theta\circpar{1}$ cost for all arithmetic operations, is
\begin{align*}
\# \text{ Iterations } \times
\begin{cases}
\text{ Generating coefficient and inversion matrices randomly}\\
\qquad\qquad+\\
\text{ Executing update rule (\ref{def:pscli_update_rule}) based on the previous } p \text{ points }
\end{cases}
\end{align*}
The computational cost of executing update rule (\ref{def:pscli_update_rule}) scales linearly with $d$, the dimension of the problem, and $p$, the lifting factor. Thus, the running time of $p$-SCLIs is mainly affected by the number of iterations and by the computational cost of randomly generating the coefficient and inversion matrices at each iteration. Notice that for deterministic $p$-SCLIs one can save running time by computing the coefficient and inversion matrices once, prior to the execution of the algorithm. Not surprisingly, but interesting nonetheless, there is a law of conservation which governs the total computational cost invested in both factors: the more demanding the task of randomly generating coefficient and inversion matrices, the smaller the total number of iterations required for obtaining a given level of accuracy, and vice versa. Before we can make this statement more rigorous, we need to present a few more facts about $p$-SCLIs. For the time being, let us focus on the \emph{iteration~complexity}, i.e., the total number of iterations, which forms our analogy for black box complexity. \\
The \emph{iteration~complexity} of a $p$-SCLI optimization algorithm $\mathcal{A}$ with respect to an accuracy level $\epsilon$, initialization points $\mathcal{X}^0$ and a quadratic function $\quadab{A,{\mathbf b}}$, symbolized by
$$\cI\cC_\mathcal{A}\circpar{\epsilon,\quadab{A,{\mathbf b}},\mathcal{X}^0}$$
is defined to be the minimal number of iterations $K$ such that
\begin{align*}
\norm{\mathbb{E} [{\mathbf x}^k - {\mathbf x}^*] } <\epsilon,\quad \forall k\ge K
\end{align*}
where ${\mathbf x}^*=-A^{-1}{\mathbf b}$ is the minimizer of $\quadab{A,{\mathbf b}}$, assuming $\mathcal{A}$ is initialized at $\mathcal{X}^0$. We would like to point out that although iteration complexity is usually measured through
\begin{align*}
\mathbb{E}\norm{ {\mathbf x}^k - {\mathbf x}^*}
\end{align*}
here we employ a different definition. We will discuss this issue shortly. \\
In addition to showing that the iteration complexity of $p$-SCLI algorithms scales logarithmically with $1/\epsilon$, the following theorem provides a characterization of the iteration complexity in terms of the root radius of the characteristic polynomial. The full proof of this theorem is somewhat long and is thus provided in Section \ref{subsection:conv_prop}.
\begin{theorem} \label{thm:ic_cli}
Let $\mathcal{A}$ be a $p$-SCLI optimization algorithm over $\mathbb{R}^d$ and let $\quadab{A,{\mathbf b}}\in\posdefun{d}{\Sigma},~(\Sigma\subseteq \mathbb{R}^{++})$ be a quadratic function. Then, there exists $\mathcal{X}^0 \in \mathbb{R}^{dp}$ such that
\begin{align*}
\cI\cC_\mathcal{A}\circpar{\epsilon,\quadab{A,{\mathbf b}},\mathcal{X}^0}=\tilde{\Omega}\circpar{\frac{\rho}{1-\rho}\ln(1/\epsilon)}
\end{align*}
and for all $\mathcal{X}^0 \in \mathbb{R}^{dp}$, it holds that
\begin{align*}
\cI\cC_\mathcal{A}\circpar{\epsilon,\quadab{A,{\mathbf b}},\mathcal{X}^0}=\bigtO{\frac{1}{1-\rho}\ln(1/\epsilon)}
\end{align*}
where $\rho$ denotes the root radius of the characteristic polynomial evaluated at $X=A$.
\end{theorem}
We remark that the constants in the asymptotic behavior above may depend on the quadratic function under consideration, and that the logarithmic terms depend on the distance of the initialization points from the minimizer, as well as the lifting factor and the spectrum of the second-order derivative. For the sake of clarity, we shall usually omit the dependency on the initialization points.\\
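The role of the root radius in \thmref{thm:ic_cli} can be illustrated numerically for FGD viewed as a $1$-SCLI: there $\det\syspol{}{\lambda,A}=\det(\lambda I-(I-\beta A))$, so the root radius is $\max_i|1-\beta\sigma_i(A)|$, and the error should contract at essentially this geometric rate. The concrete instance below is an illustrative assumption.

```python
import numpy as np

mu, L = 1.0, 9.0
A = np.diag([mu, 3.0, L])
b = np.array([1.0, 2.0, -1.0])
beta = 2.0 / (mu + L)
# Root radius of the characteristic polynomial at X = A:
rho = max(abs(1 - beta * mu), abs(1 - beta * L))   # = (kappa-1)/(kappa+1)

x_star = -np.linalg.solve(A, b)
x = np.zeros(3)
errs = []
for _ in range(60):
    x = (np.eye(3) - beta * A) @ x - beta * b
    errs.append(np.linalg.norm(x - x_star))

# Empirical per-step contraction factor approaches the root radius.
rate = (errs[-1] / errs[20]) ** (1.0 / (len(errs) - 21))
assert abs(rate - rho) < 1e-2
```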
There are two, rather subtle, issues regarding the definition of iteration complexity which we would like to address. First, observe that in many cases a given point $\tilde{{\mathbf x}}\in\mathbb{R}^d$ is said to be $\epsilon$-optimal w.r.t some real function $f:\mathbb{R}^d\to\mathbb{R}$ if
\begin{align*}
f(\tilde{{\mathbf x}}) < \min_{{\mathbf x}\in\mathbb{R}^d} f({\mathbf x}) + \epsilon
\end{align*}
However, here we employ a different measure for optimality. Fortunately, in our case either can be used without essentially affecting the iteration complexity. That is, although in general the gap between these two definitions can be made arbitrarily large, for $L$-smooth $\mu$-strongly convex functions we have
\begin{align*}
\frac{\mu}{2} \normsq{{\mathbf x}-{\mathbf x}^*} \le f({\mathbf x})-f({\mathbf x}^*) \le \frac{L}{2} \normsq{{\mathbf x}-{\mathbf x}^*}
\end{align*}
Combining these two inequalities with the fact that the iteration complexity of $p$-SCLIs depends logarithmically on $1/\epsilon$ implies that in this very setting these two distances are interchangeable, up to logarithmic factors.\\
Secondly, here we measure the sub-optimality of the $k$'th iterate by $\norm{\mathbb{E} [{\mathbf x}^k - {\mathbf x}^*] }$,
whereas in many other stochastic settings it is common to derive upper and lower bounds on $\mathbb{E}\rectpar{\norm{{\mathbf x}^k - {\mathbf x}^* }}$. However, by the decomposition
\begin{align*}
\mathbb{E}\rectpar{\normsq{{\mathbf x}^k - {\mathbf x}^* }} &= \mathbb{E}\rectpar{\normsq{{\mathbf x}^k - \mathbb{E} {\mathbf x}^k}} +\normsq{ \mathbb{E} \rectpar{{\mathbf x}^k -{\mathbf x}^* }}
\end{align*}
we see that if the variance of the $k$'th point is of the same order of magnitude as the norm of the expected distance from the optimal point, then both measures are equivalent. Consequently, our upper bounds imply upper bounds on $\mathbb{E}\rectpar{\normsq{{\mathbf x}^k - {\mathbf x}^* }}$ for deterministic algorithms (where the variance term is zero), and our lower bounds imply lower bounds on $\mathbb{E}\rectpar{\normsq{{\mathbf x}^k - {\mathbf x}^* }}$, for both deterministic and stochastic algorithms (since the variance is always non-negative). We defer a more adequate treatment for this matter to future work.
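The decomposition above is the standard bias-variance identity. As a quick sanity check, the following sketch (illustrative instance, not from the text) estimates both sides over independent runs of SCD on a small quadratic; for the empirical mean the identity holds exactly.

```python
import numpy as np

rng = np.random.default_rng(4)
A = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite
b = np.array([1.0, -2.0])
x_star = -np.linalg.solve(A, b)

def scd_run(k):
    """k steps of stochastic coordinate descent on f_{A,b}."""
    x = np.zeros(2)
    for _ in range(k):
        i = rng.integers(2)
        x[i] -= (A[i] @ x + b[i]) / A[i, i]
    return x

X = np.array([scd_run(5) for _ in range(20000)])
lhs = np.mean(np.sum((X - x_star) ** 2, axis=1))        # E||x^k - x*||^2
mean_x = X.mean(axis=0)
rhs = (np.mean(np.sum((X - mean_x) ** 2, axis=1))       # variance term
       + np.sum((mean_x - x_star) ** 2))                # squared bias term
assert abs(lhs - rhs) < 1e-8
```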
\section{Deriving Bounds for \texorpdfstring{$p$}{p}-SCLI Algorithms}
The goal of the following section is to show how the framework of $p$-SCLI optimization algorithms can be used to derive lower and upper bounds. Our presentation proceeds from the simplest setting to the most general one. First, we present a useful characterization of consistency (see \defref{definition:consistency}) of $p$-SCLIs using the characteristic polynomial. Next, we demonstrate the importance of consistency through a simplified one-dimensional case. This line of argument is then generalized to finite dimensional spaces and is used to explain the role of the inversion matrix. Finally, we conclude this section by providing a schematic description of this technique for the most general case, which is used both in Section (\ref{chapter:lower_bounds}) to establish lower bounds on the convergence rate of $p$-SCLIs with diagonal inversion matrices, and in Section (\ref{section:ub}) to derive efficient $p$-SCLIs.\\
\subsection{Consistency} \label{subsection:cons}
Closely inspecting various specifications for $p$-SCLI optimization algorithms (see Section (\ref{section_spec_algo})) reveals that the coefficient matrices always sum up to $I+\mathbb{E} N(X)X$, where $N(X)$ denotes the inversion matrix. It turns out that this is not a mere coincidence, but an extremely useful characterization for consistency of $p$-SCLIs. To see why this condition must hold, suppose $\mathcal{A}$ is a deterministic $p$-SCLI algorithm over $\mathbb{R}^d$ whose coefficient matrices and inversion matrix are $C_0(X),\dots,C_{p-1}(X)$ and $N(X)$, respectively, and suppose that $\mathcal{A}$ is consistent w.r.t some $A\in\posdef{d}{\Sigma}$. Recall that every $p+1$ consecutive points generated by $\mathcal{A}$ are related by (\ref{def:pscli_update_rule}) as follows
\begin{align*}
{\mathbf x}^k = \sum_{j=0}^{p-1} C_{j}(A) {\mathbf x}^{k-p+j}+ N(A){\mathbf b},\quad k=p,p+1,\dots
\end{align*}
Taking the limit of both sides of the equation above and noting that by consistency
\begin{align*}
{\mathbf x}^k &\to -A^{-1}{\mathbf b}
\end{align*}
for any ${\mathbf b}\in\mathbb{R}^d$, yields
\begin{align*}
-A^{-1}{\mathbf b} = -\sum_{j=0}^{p-1} C_{j}(A) A^{-1}{\mathbf b} + N(A){\mathbf b}
\end{align*}
Thus,
\begin{align*}
-A^{-1} = -\sum_{j=0}^{p-1} C_{j}(A) A^{-1} + N(A)
\end{align*}
Multiplying by $A$ and rearranging, we obtain
\begin{align} \label{equation:sum_for_consistency}
\sum_{j=0}^{p-1} C_{j}(A) = I_d + N(A)A
\end{align}
On the other hand, if instead of assuming consistency we assume that $\mathcal{A}$ generates a convergent sequence of points and that \eqref{equation:sum_for_consistency} holds, then the arguments used above show that the limit point must be $-A^{-1}{\mathbf b}$. In terms of the characteristic polynomial of $p$-SCLIs, this is formalized as follows.
\begin{theorem} [Consistency - System Polynomials]\label{thm:conv_correct}
Suppose $\mathcal{A}\stackrel{\vartriangle}{=}(\syspol{\cA}{\lambda,X},N(X))$ is a $p$-SCLI optimization algorithm. Then, $\mathcal{A}$ is consistent with respect to $A\in\posdef{d}{\Sigma}$ if and only if the following two conditions hold:
\begin{align}
1.&~\syspol{\mathcal{A}}{1,A} = -\mathbb{E} N(A)A \label{consis_1_syspol}\\
2.&~\rho_\lambda(\syspol{}{\lambda,A}) < 1 \label{consis_2_syspol}
\end{align}
\end{theorem}
The proof for the preceding theorem is provided in Section \ref{section:conv_correct}.
This result will be used extensively throughout the remainder of this work.
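As a numerical sanity check of identity (\ref{equation:sum_for_consistency}), the following sketch verifies $\sum_j C_j(A)=I+N(A)A$ for the FGD and AGD specifications of Section \ref{section_spec_algo}; the test matrix is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
d, mu, L = 4, 1.0, 8.0
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = Q @ np.diag(np.linspace(mu, L, d)) @ Q.T   # A in S_d([mu, L])
I = np.eye(d)

# FGD: C_0 = I - beta*A, N = -beta*I.
beta = 2.0 / (mu + L)
assert np.allclose(I - beta * A, I + (-beta * I) @ A)

# AGD: C_0 = -alpha*(I - A/L), C_1 = (1+alpha)*(I - A/L), N = -I/L.
alpha = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
C0 = -alpha * (I - A / L)
C1 = (1 + alpha) * (I - A / L)
assert np.allclose(C0 + C1, I + (-I / L) @ A)
```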
\subsection{Simplified One-Dimensional Case} \label{section:odc}
To illustrate the significance of consistency in the framework of $p$-SCLIs, consider the following simplified case. Suppose $\mathcal{A}$ is a deterministic 2-SCLI optimization algorithm over $\posdefun{1}{[\mu,L]}$, such that its inversion matrix $N(x)$ is some constant scalar $\nu\in\mathbb{R}$ and its coefficient matrices $c_0(x),c_1(x)$ are free to take any form. The corresponding characteristic polynomial is
\begin{align*}
\mathcal{L}(\lambda,x) &= \lambda^2 - c_1(x)\lambda -c_0(x)
\end{align*}
Now, let $f_{a,b}(x)\in\posdefun{1}{[\mu,L]}$ be a quadratic function. By \thmref{thm:ic_cli}, we know that $\mathcal{A}$ converges to the minimizer of $f_{a,b}(x)$ with an asymptotic geometric rate of $\rho_\lambda(\mathcal{L}(\lambda,a))$, the maximal modulus of its roots. Thus, ideally we would like to set $c_j(x)=0,~j=0,1$. However, this might violate the consistency condition (\ref{consis_1_syspol}), according to which one must maintain $$\mathcal{L}(1,a)=-\nu a$$ That being the case, how small can $\rho_\lambda\circpar{\mathcal{L}(\lambda,a)}$ be over all possible choices of $c_j(a)$ which satisfy $\mathcal{L}(1,a)=-\nu a$? Formally, we seek to solve the following minimization problem.
\begin{align*}
\rho_* = \min\left\{ \rho_\lambda(\syspol{}{\lambda,a})~ \left|~ \syspol{}{\lambda,a} \text{ is a real monic quadratic polynomial in $\lambda$ and } \syspol{}{1,a} = -\nu a \right.\right\}
\end{align*}
By consistency we also have that $\rho_*$ must be strictly less than one, which readily implies that $-\nu a>0$. In this case, \lemref{lem:comp_poly} below gives
\begin{align}
\rho_* & \ge \rho\circpar{\circpar{\lambda-(1-\sqrt{-\nu a})}^2} = \absval{\!\sqrt{-\nu a}-1\!}
\end{align}
The key ingredient here is that $\nu$ cannot be chosen so as to be optimal for all $\posdefun{1}{[\mu,L]}$, at one and the same time. Indeed, the preceding inequality holds in particular for $a=\mu$ and $a=L$, by which we conclude that
\begin{align} \label{ineq:one_dim_case}
\rho_* &\ge \max\left\{\absval{\!\sqrt{-\nu \mu}-1\!}, \absval{\!\sqrt{-\nu L}-1\!} \right\} \ge \frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}
\end{align}
where $\kappa\stackrel{\vartriangle}{=} L/\mu$. Plugging in \ineqref{ineq:one_dim_case} into \thmref{thm:ic_cli} implies that there exists $f_{a,b}(x)\in\posdefun{1}{[\mu,L]}$ such that the iteration complexity of $\mathcal{A}$ for minimizing it is
\begin{align*}
\tilde{\Omega}\circpar{\frac{\sqrt{\kappa} -1}{2}\ln(1/\epsilon)}
\end{align*}
To conclude, by applying this rather natural line of argument we have established a lower bound on the convergence rate of any $2$-SCLI optimization algorithm for smooth and strongly convex functions over $\mathbb{R}$, e.g., AGD and HB.
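Numerically, the heavy ball method appears to match the one-dimensional bound (\ref{ineq:one_dim_case}): with the step sizes given in Section \ref{section_spec_algo}, its characteristic roots have modulus exactly $\sqrt{\beta}=(\sqrt{\kappa}-1)/(\sqrt{\kappa}+1)$ for every $a\in[\mu,L]$. A short sketch with illustrative $\mu,L$:

```python
import numpy as np

mu, L = 1.0, 16.0
kappa = L / mu
alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2
target = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)   # = sqrt(beta)

for a in np.linspace(mu, L, 25):
    # Scalar heavy ball characteristic polynomial:
    # ell(lambda) = lambda^2 - ((1+beta) - alpha*a)*lambda + beta
    roots = np.roots([1.0, -((1 + beta) - alpha * a), beta])
    # Both roots lie on the circle of radius sqrt(beta).
    assert np.isclose(np.max(np.abs(roots)), target, atol=1e-6)
```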
\subsection{General Case and the Role of the Inversion Matrix} \label{subsection:general_case}
We now generalize the analysis shown in the previous simplified case to any $p$-SCLI optimization algorithm over any finite dimensional space. This generalization relies on a useful decomposability property of the characteristic polynomial, according to which deriving a lower bound on the convergence rate of $p$-SCLIs over $\mathbb{R}^d$ is essentially equivalent to deriving $d$ lower bounds on
the maximal modulus of the roots of $d$ polynomials over $\mathbb{R}$. \\
Let $\mathcal{A}\stackrel{\vartriangle}{=}(\mathcal{L}(\lambda,X),N(X))$ be a consistent deterministic $p$-SCLI optimization algorithm and let $\quadab{A,{\mathbf b}}\in\posdefun{d}{\Sigma}$ be a quadratic function. By consistency (see \thmref{thm:conv_correct}) we have
\begin{align*}
\mathcal{L}(1,A)&=-NA
\end{align*}
(for brevity we omit the functional dependency on $X$). Since coefficient matrices are assumed to be simultaneously triangularizable, there exists an invertible matrix $Q\in\mathbb{R}^{d\times d}$ such that $$T_j\stackrel{\vartriangle}{=} Q^{-1} C_j Q,\quad j=0,1,\dots,p-1$$ are upper triangular matrices. Thus, by the definition of the characteristic polynomial (Definition \ref{def:l_lambda}) we have,
\begin{align} \label{decomp_sp}
\det \syspol{\cA}{\lambda,X} &= \det\circpar{Q^{-1} \syspol{\cA}{\lambda,X} Q } = \det\circpar{I_d\lambda^p - \sum_{j=0}^{p-1} T_j\lambda^{j}} = \prod_{j=1}^d \ell_j(\lambda)
\end{align}
where
\begin{align}
\label{eq:system_polys}
\ell_j(\lambda) &= \lambda^p - \sum_{k=0}^{p-1} \sigma_j^k\lambda^{k}
\end{align}
and where $\sigma_1^j,\dots, \sigma_d^{j},~j=0,\dots,p-1$ denote the elements on the diagonal of $T_j$, or equivalently the eigenvalues of $C_j$ ordered according to $Q$. Hence, the root radius of the characteristic polynomial of $\mathcal{A}$ is
\begin{align} \label{eq:max_over_ells}
\rho_\lambda(\syspol{\cA}{\lambda,X}) &= \max {\myset{\absval{\lambda}}{\ell_i(\lambda)=0 \text{ for some } i\in[d]}}
\end{align}
On the other hand, by consistency condition (\ref{consis_1_syspol}) we get that for all $i\in[d]$
\begin{align} \label{constraint_sp}
\ell_i(1)=\sigma_i\circpar{\syspol{}{1,A}}=\sigma_i\circpar{-NA}
\end{align}
It remains to derive a lower bound on the maximum modulus of the roots of $\ell_i(\lambda)$, subject to the constraint (\ref{constraint_sp}). To this end, we employ the following lemma whose proof can be found in Section \ref{proof:lem:comp_poly}.
\begin{lemma} \label{lem:comp_poly}
Suppose $q(z)$ is a real monic polynomial of degree $p$. If $q(1)<0$, then $$\rho(q(z))>1$$ Otherwise, if $q(1)\ge0$, then
$$\rho(q(z))\ge\absval{\!\sqrt[p]{q(1)}-1\!}$$
In which case, equality holds if and only if $$q(z) = \circpar{z-(1-\sqrt[p]{q(1)})}^p$$
\end{lemma}
We remark that the second part of \lemref{lem:comp_poly} implies that subject to constraint (\ref{constraint_sp}), the lower bound stated above is unimprovable. This property is used in Section \ref{section:ub} where we aim to obtain optimal $p$-SCLIs by designing $\ell_j(\lambda)$ accordingly. Clearly, in the presence of additional constraints, one might be able to improve on this lower bound (see Section \ref{section:is_this_tight}).\\
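A numerical sanity check of \lemref{lem:comp_poly}, with an illustrative degree and random trials:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 4
for _ in range(200):
    # Random real monic polynomial of degree p.
    coeffs = np.concatenate(([1.0], rng.standard_normal(p)))
    q1 = np.polyval(coeffs, 1.0)
    radius = np.max(np.abs(np.roots(coeffs)))
    if q1 < 0:
        assert radius > 1.0 - 1e-9
    else:
        assert radius >= abs(q1 ** (1.0 / p) - 1.0) - 1e-6

# Equality case: q(z) = (z - (1 - s))^p with s = q(1)^(1/p).
s = 1.5
q = np.poly([1.0 - s] * p)          # coefficients of (z - (1-s))^p
assert np.isclose(np.max(np.abs(np.roots(q))), abs(s - 1.0), atol=1e-3)
```

The loose tolerance in the last assertion reflects the numerical sensitivity of repeated roots, not the lemma itself.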
Since $\mathcal{A}$ is assumed to be consistent, \lemref{lem:comp_poly} implies that $\spec{-N(A)A}\subseteq\mathbb{R}^+$, as well as the following lower bound on the root radius of the characteristic polynomial,
\begin{align} \label{nice_one}
\rho_\lambda(\syspol{\cA}{\lambda,X})&\ge \max_{i\in[d]} \absval{\sqrt[p]{\sigma_i(-N(A)A)}-1}
\end{align}
Noticing that the reasoning above can be readily applied to stochastic $p$-SCLI optimization algorithms, we arrive at the following corollary which combines \thmref{thm:ic_cli} and Inequality (\ref{nice_one}).
\begin{corollary} \label{cor:inv_std_pol}
Let $\mathcal{A}$ be a consistent $p$-SCLI optimization algorithm with respect to some $A\in\posdef{d}{\Sigma}$, let $N(X)$ denote the corresponding inversion matrix and let
\begin{align*}
\rho^*&= \max_{i\in[d]} \absval{\sqrt[p]{\sigma_i(-\mathbb{E} N(A)A)}-1}
\end{align*}
then the iteration complexity of $\mathcal{A}$ for any $\quadab{A,{\mathbf b}}\in\posdefun{d}{\Sigma}$ is lower bounded by
\begin{align} \label{cor:bound_by_inv}
\tilde{\Omega}\circpar{\frac{\rho^*}{1-\rho^*}\ln(1/\epsilon)}
\end{align}
\end{corollary}
Using \corref{cor:inv_std_pol}, we are now able to provide a concise `plug-and-play' scheme for deriving lower bounds on the iteration complexity of $p$-SCLI optimization algorithms. To motivate this scheme, note that the effectiveness of the lower bound stated in \corref{cor:inv_std_pol} is directly related to the magnitude of the eigenvalues of $-N(X)X$. To exemplify this, consider the inversion matrix of Newton's method (see Section \ref{section_spec_algo})
\begin{align*}
N(X)=-X^{-1}
\end{align*}
Since $$\spec{-N(X)X}=\{1\}$$ the lower bound stated above is meaningless in this case. Nevertheless, the best known computational cost of inverting a $d\times d$ regular matrix is super-quadratic in $d$. As a result, this method might become impractical in large scale scenarios where the dimension of the problem space is large. A possible solution is to employ inversion matrices whose dependence on $X$ is simpler. On the other hand, if $N(X)$ approximates $-X^{-1}$ very poorly, then the root radius of the characteristic polynomial might become too large. For instance, if $N(X)=0$ then $$\spec{-N(X)X}=\{0\}$$ contradicting the consistency assumption, regardless of the choice of the coefficient matrices. In light of this, many optimization algorithms can be seen as strategies for balancing the computational cost of obtaining a good approximation for the inverse of $X$ against the execution of a large number of iterations. Put differently, various structural restrictions on the inversion matrix yield different $\spec{-N(X)X}$, which in turn lead to a lower bound on the root radius of the corresponding characteristic polynomial. This gives rise to the following scheme: \\
\begin{table}[H]
\centering
\begin{tabular*}{0.95\textwidth}{ll}
\toprule
\textbf{Scheme 1 }& Lower bounds \\
\midrule
\textbf{Parameters:} &$\bullet$ A family of quadratic functions $\posdefun{d}{\Sigma}$\\&$\bullet$ An inversion matrix $N(X)$ \\& $\bullet$ A lifting factor $p\in\mathbb{N}$\\
\textbf{Choose }& $\mathcal{S}'\subseteq \posdef{d}{\Sigma}$\\
\textbf{Verify}& $\forall A\in\mathcal{S}',~\spec{-\mathbb{E} N(A)A}\subseteq \circpar{0,2^p}$ for consistency\\
\textbf{Bound} &$\displaystyle\max_{A\in\mathcal{S}',i\in[d]} \absval{\sqrt[p]{\sigma_i(-\mathbb{E} N(A)A)}-1}$ from below by some $\rho_*\in[0,1)$\\
\midrule
\textbf{Lower bound: }& $\tilde{\Omega}\circpar{\frac{\rho_*}{1-\rho_*}\ln(1/\epsilon)}$ \\
\bottomrule
\end{tabular*}
\end{table}
This scheme was implicitly used in the previous section (Section \ref{section:odc}), where we established a lower bound on the convergence rate of $2$-SCLI optimization algorithms over $\mathbb{R}$ with a constant inversion matrix and the following parameters
\begin{align*}
\Sigma = [\mu,L],\quad \mathcal{S}'=\{ \mu,L\}
\end{align*}
In Section \ref{chapter:lower_bounds} we will make this scheme concrete for scalar and diagonal inversion matrices.
\subsection{Bounds Schemes} \label{subsection:gen_bounds}
In spite of the fact that Scheme 1 is expressive enough to produce meaningful lower bounds under various structures of the inversion matrix, it does not allow one to incorporate other lower bounds on the root radius of characteristic polynomials whose coefficient matrices admit a particular form, e.g., linear coefficient matrices (see (\ref{definition:first_linear_coefficient_matrices}) below). Abstracting away from Scheme 1, we now formalize one of the main pillars of this work, namely, the relation between the amount of computational cost one is willing to invest in each iteration and the total number of iterations needed to obtain a given level of accuracy.
We use this relation to form two schemes for establishing lower and upper bounds for $p$-SCLIs.\\
Given a compatible set of parameters, namely a lifting factor $p$, an inversion matrix $N(X)$, a set of quadratic functions $\posdefun{d}{\Sigma}$ and a set of coefficient matrices $\mathcal{C}$, we denote by $\mathfrak{A}(p,N(X),\posdefun{d}{\Sigma}, \mathcal{C})$ the set of consistent $p$-SCLI optimization algorithms for $\posdefun{d}{\Sigma}$ whose inversion matrix is $N(X)$ and whose coefficient matrices are taken from $\mathcal{C}$. Furthermore, we denote by $\mathfrak{L}(p,N(X),\posdefun{d}{\Sigma}, \mathcal{C})$ the following set of polynomial matrices
\begin{align*}
\left\{\syspol{\cA}{\lambda,X}\stackrel{\vartriangle}{=} I_d\lambda^p - \sum_{j=0}^{p-1} \mathbb{E} C_j(X) \lambda^{j}\middle|~C_j(X)\in\mathcal{C},~ \syspol{}{1,A}=-N(A)A,~\forall A\in\posdef{d}{\Sigma} \right\}
\end{align*}
Both sets are determined by the same set of parameters, the specification of which will occasionally be omitted for brevity. The natural one-to-one correspondence between these two sets, as manifested by \thmref{thm:ic_cli} and \corref{thm:conv_correct}, yields
\begin{equation} \label{eq:pillar}
\boxed{\min_{\mathcal{A}\in\mathfrak{A}} ~\max_{\quadab{A,{\mathbf b}}\in\posdefun{d}{\Sigma}} \rho_\lambda(\syspol{\mathcal{A}}{\lambda,A}) = \min_{\mathcal{L}(\lambda,X)\in\mathfrak{L}} ~\max_{A\in\posdef{d}{\Sigma}} \rho_\lambda(\mathcal{L}(\lambda,A))}
\end{equation}
The importance of \eqref{eq:pillar} stems from its ability to incorporate any bound on the maximal modulus root of polynomial matrices into a general scheme for bounding the iteration complexity of $p$-SCLIs. This is summarized by the following scheme.
\begin{table}[H]
\centering
\begin{tabular*}{\textwidth}{lll}
\toprule
\textbf{Scheme 2}& Lower bounds \\
\midrule
\textbf{Given} & a set of $p$-SCLI optimization algorithms $\mathfrak{A}(p,N(X),\posdefun{d}{\Sigma}, \mathcal{C})$\\
\textbf{Find} &$\rho_*\in[0,1)$ such that \\
&$\qquad\displaystyle \min_{\syspol{}{\lambda,X} \in\mathfrak{L}} ~\max_{A\in \posdef{d}{\Sigma} } \rho_\lambda \circpar{{\syspol{}{\lambda,A}}}\ge\rho_*$\\
\midrule
\textbf{Lower bound: }& $\tilde{\Omega}\circpar{\frac{\rho_*}{1-\rho_*}\ln(1/\epsilon)}$ \\
\bottomrule
\end{tabular*}
\end{table}
Thus, Scheme 1 is in effect an instantiation of the scheme shown above using \lemref{lem:comp_poly}. This correspondence between $p$-SCLI optimization algorithms and polynomial matrices can also be used in the opposite direction, to derive efficient optimization algorithms. Indeed, in Section \ref{section_spec_algo} we show how FGD, HB and AGD can be derived as optimal instantiations of the following dual scheme.
\begin{table}[H]
\centering
\begin{tabular*}{\textwidth}{lll}
\toprule
\textbf{Scheme 3}& Optimal $p$-SCLI Optimization Algorithms \\
\midrule
\textbf{Given} & a set of polynomial matrices $\mathfrak{L}(p,N(X),\posdefun{d}{\Sigma}, \mathcal{C})$\\
\textbf{Compute} & $\rho^*=\displaystyle \min_{\syspol{}{\lambda,X} \in\mathfrak{L}} ~\max_{A\in \posdef{d}{\Sigma} } \rho_\lambda \circpar{\syspol{}{\lambda,A}}$ \\&and denote its minimizer by $\mathcal{L}^*\circpar{\lambda,A}$\\
\midrule
\textbf{Upper bound: }& The corresponding $p$-SCLI algorithm of $\mathcal{L}^*\circpar{\lambda,A}$\\
\textbf{Convergence rate:} &$\bigO{\frac{1}{1-\rho^*}\ln(1/\epsilon)}$ \\
\bottomrule
\end{tabular*}
\end{table}
\section{Lower Bounds} \label{chapter:lower_bounds}
In the sequel we derive lower bounds on the convergence rate of $p$-SCLI optimization algorithms whose inversion matrices are scalar or diagonal, and discuss the assumptions under which these lower bounds meet matching upper bounds. It is likely that this approach can also be applied effectively to block-diagonal inversion matrices, as well as to a much wider set of inversion matrices whose entries depend on a relatively small set of entries of the matrix to be inverted.\\
\subsection{Scalar and Diagonal Inversion Matrices} \label{section_non_dep}
We derive a lower bound on the convergence rate of $p$-SCLI optimization algorithms for $L$-smooth $\mu$-strongly convex functions over $\mathbb{R}^d$ with a scalar inversion matrix $N(X)$ by employing Scheme 1 (see Section \ref{subsection:general_case}). Note that since the one-dimensional case was already proven in Section \ref{section:odc}, we may assume that $d\ge2$.\\
First, we need to pick a `hard' matrix in $\posdef{d}{[\mu,L]}$. It turns out that any positive-definite matrix $A\in\posdef{d}{[\mu,L]}$ for which
\begin{align}\label{assump:full_spec_e}
\set{\mu,L}\subseteq \spec{A}
\end{align}
will meet this criterion. For the sake of concreteness, let us define
\begin{align*}
A\stackrel{\vartriangle}{=}\operatorname{Diag}(L,\underbrace{\mu,\dots,\mu}_{d-1\text{ times}})
\end{align*}
In which case,
\begin{align*}
-\nu\{\mu,L\} = \spec{-\mathbb{E} N(A)A}
\end{align*}
where $\nu\stackrel{\vartriangle}{=}\mathbb{E} [N(A)]$. Thus, to maintain consistency, it must hold that\footnote{On a side note, this reasoning also implies that if the spectrum of a given matrix $A$ contains both positive and negative eigenvalues then $A^{-1}b$ cannot be computed using $p$-SCLIs with scalar inversion matrices.}
\begin{align} \label{nu_good_range}
\nu\in\circpar{\frac{-2^p}{L},0}
\end{align}
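To unpack this range: consistency confines the eigenvalues $-\nu\mu$ and $-\nu L$ of $-\nu A$ to the interval $(0,2^p)$, and since $0<\mu<L$, the binding constraints are

```latex
\begin{align*}
-\nu\mu > 0 \quad\text{and}\quad -\nu L < 2^p
\qquad\Longleftrightarrow\qquad
\nu\in\left(\frac{-2^p}{L},\,0\right)
\end{align*}
```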
Next, to bound from below
\begin{align*}
\rho_* \stackrel{\vartriangle}{=} \max_{i\in[d]} \absval{\sqrt[p]{\sigma_i(- \nu A)}-1} = \max\left\{|\sqrt[p]{- \nu \mu}-1|,
|\sqrt[p]{- \nu L}-1| \right\}
\end{align*}
we split the feasible range of $\nu$ (\ref{nu_good_range}) into three different sub-ranges as follows:
\begin{table}[H]
\centering
\begin{tabular}{l|l|l}
& $\sqrt[p]{- \nu \mu}-1 < 0$ & $\sqrt[p]{- \nu \mu}-1 \ge 0$ \\
\hline
& \underline{Case 1}& N/A \\
$\sqrt[p]{- \nu L}-1 \le 0$ & Range: $[-1/L,0)$ & \\
& Minimizer: $\nu^*=-1/L$ &\\
& Lower bound: $1-\sqrt[p]{\frac{\mu}{L}}$ &\\
\hline
&\underline{Case 2} & \underline{Case 3 (requires: $p\ge \log_2\kappa$)}\\
$\sqrt[p]{- \nu L}-1 > 0$& Range: $(-1/\mu,-1/L)$ & Range: $(-2^p/L,-1/\mu]$\\
& Minimizer: $-\circpar{\frac2{\sqrt[p]{ L} +\sqrt[p]{\mu} }}^p $ &Minimizer: $-1/\mu$ \\
& Lower bound: $\frac{\sqrt[p]{ L/\mu} -1 }{\sqrt[p]{L/\mu} +1}$ &Lower bound: $\sqrt[p]{\frac{L}{\mu}}-1$
\end{tabular}
\caption{Lower bound for $\rho_*$ by subranges of $\nu$} \label{table:nu_subranges}
\end{table}
Therefore,
\begin{align} \label{ineq:spec_all_in_all}
\rho_* \ge \min \left\{
1-\sqrt[p]{\frac{\mu}{L}},
\frac{\sqrt[p]{ L/\mu} -1 }{\sqrt[p]{L/\mu} +1},
\sqrt[p]{\frac{L}{\mu}}-1
\right\} = \frac{\sqrt[p]{\kappa}-1}{\sqrt[p]{\kappa}+1}
\end{align}
where $\kappa\stackrel{\vartriangle}{=} L/\mu$ upper-bounds the condition number of functions in $\posdefun{d}{[\mu,L]}$. Thus, by Scheme 1 we get the following lower bound on the worst-case iteration complexity,
\begin{align}
\tilde{\Omega}\circpar{\frac{\sqrt[p]{\kappa}-1}{2}\ln(1/\epsilon)}
\end{align}
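For completeness, writing $t\stackrel{\vartriangle}{=}\sqrt[p]{\kappa}\ge 1$, the middle candidate in (\ref{ineq:spec_all_in_all}) is the smallest of the three, and substituting it into the expression $\rho_*/(1-\rho_*)$ of Scheme 1 yields the constant above:

```latex
\begin{align*}
1-\frac{1}{t}=\frac{t-1}{t}\ge\frac{t-1}{t+1},
\qquad
t-1\ge\frac{t-1}{t+1},
\qquad
\frac{\rho_*}{1-\rho_*}=\frac{t-1}{(t+1)-(t-1)}=\frac{t-1}{2}
\end{align*}
```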
As for the diagonal case, it turns out that for any quadratic $\quadab{A,b}\in\posdefun{d}{[\mu,L]}$ which has
\begin{align}
\mymat{ \frac{L+\mu}{2} & \frac{L-\mu}{2} \\ \frac{L-\mu}{2} & \frac{L+\mu}{2}}
\end{align}
as a principal sub-matrix of $A$, the best $p$-SCLI optimization algorithm with a diagonal inversion matrix does not improve on the optimal asymptotic convergence rate achieved by scalar inversion matrices (see Section \ref{ap:reduction}). Overall, we obtain the following theorem.
\begin{theorem} \label{thm:lb_ic_dia}
Let $\mathcal{A}$ be a consistent $p$-SCLI optimization algorithm for $L$-smooth $\mu$-strongly convex functions over $\mathbb{R}^d$. If the inversion matrix of $\mathcal{A}$ is diagonal, then there exists a quadratic function $\quadab{A,{\mathbf b}}\in\posdefun{d}{[\mu,L]}$ such that
\begin{align}
\cI\cC_\mathcal{A}\circpar{\epsilon,\quadab{A,{\mathbf b}}}=\tilde{\Omega}\circpar{\frac{\sqrt[p]{\kappa}-1}{2}\ln(1/\epsilon)}
\end{align}
where $\kappa = L/\mu$.
\end{theorem}
\subsection{Is This Lower Bound Tight?} \label{section:is_this_tight}
A natural question now arises: is the lower bound stated in \thmref{thm:lb_ic_dia} tight? In short, it turns out that for $p=1$ and $p=2$ the answer is positive, while for $p>2$ the answer depends heavily on whether a suitable spectral decomposition is within reach. Obviously, computing the spectral decomposition of a given positive definite matrix $A$ is at least as hard as finding the minimizer of a quadratic function whose hessian is $A$. To avoid this, we will later restrict our attention to linear coefficient matrices, which allow an efficient implementation.
\begin{description}
\item [A matching upper bound for $p=1$] In this case the lower bound stated in \thmref{thm:lb_ic_dia} is simply attained by FGD (see Section \ref{spec:fgd}).
\item [A matching upper bound for $p=2$] In this case there are two 2-SCLI optimization algorithms which attain this bound, namely, Accelerated Gradient Descent and the Heavy Ball method (see Section \ref{spec:agd}), whose inversion matrices are scalar and correspond to Case 2 and Case 1 in Table \ref{table:nu_subranges}, respectively, i.e.,
\begin{align*}
N_{\text{HB}} &= -\circpar{\frac{2}{\sqrt{L}+\sqrt{\mu}}}^2I_d,\quad
N_{\text{AGD}} = \frac{-1}{L}I_d
\end{align*}
Although HB attains the best possible convergence rate in the class of 2-SCLIs with diagonal inversion matrices, it has a major disadvantage: when applied to general smooth and strongly convex functions, global convergence cannot be guaranteed. That is, in order to converge, HB must be initialized close enough to the minimizer (see Section 3.2.1 in \cite{polyak1987introduction}). Indeed, if the initialization point is too far from the minimizer, HB may diverge, as shown in Section 4.5 of \cite{lessard2014analysis}. In contrast, AGD attains global linear convergence with a slightly worse factor. Put differently, the fact that HB is highly adapted to quadratic functions prevents it from converging globally to the minimizers of general smooth and strongly convex functions.
\item [A matching upper bound for $p>2$] In Subsection \ref{section:new_algo} we show that when no restriction is imposed on the coefficient matrices, the lower bound shown in \thmref{thm:lb_ic_dia} is tight, i.e., for any $p\in\mathbb{N}$ there exists a matching $p$-SCLI optimization algorithm with a scalar inversion matrix whose iteration complexity is
\begin{align}
\bigtO{\sqrt[p]{\kappa}\ln(1/\epsilon)}
\end{align}
In light of the existing lower bound, which scales with $\sqrt{\kappa}$, this result may seem surprising at first. However, there is a major flaw in implementing these seemingly ideal $p$-SCLIs: in order to compute the corresponding coefficient matrices, one has to obtain a very good approximation of the spectral decomposition of the positive definite matrix which defines the optimization problem. Clearly, this approach is rarely practical. To remedy this situation, we focus on linear coefficient matrices,
which admit a relatively low computational cost per iteration. That is, we assume that there exist real scalars $\alpha_0,\dots,\alpha_{p-1}$ and $\beta_0,\dots,\beta_{p-1}$ such that
\begin{align} \label{definition:first_linear_coefficient_matrices}
C_j(X) &= \alpha_j X + \beta_j I_d,\quad j=0,1,\dots,p-1
\end{align}
We believe that for this type of coefficient matrices the lower bound derived in \thmref{thm:lb_ic_dia} is not tight. More precisely, we conjecture that for any $0<\mu<L$ and any consistent $p$-SCLI optimization algorithm $\mathcal{A}$ with a diagonal inversion matrix and linear coefficient matrices, there exists $\quadab{A,{\mathbf b}}\in\posdefun{d}{[\mu,L]}$ such that
\begin{align*}
\rho_\lambda(\syspol{\mathcal{A}}{\lambda,X}) \ge \frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}
\end{align*}
where $\kappa\stackrel{\vartriangle}{=} L/\mu$. Proving this may allow one to derive tight lower bounds for many optimization algorithms in the field of machine learning, such as SAG (see Section \ref{section_spec_algo}), whose structure is often very close to that of $p$-SCLIs with linear coefficient matrices. By Scheme 2, this conjecture may be equivalently stated as follows: suppose $q(z)$ is a monic real polynomial of degree $p$ such that $q(1)=0$. Then, for any polynomial $r(z)$ of degree $p-1$ and any $0<\mu<L$, there exists $\eta\in[\mu,L]$ such that
\begin{align*}
\rho(q(z) - \eta r(z) ) &\ge \frac{\sqrt{L/\mu} -1}{\sqrt{L/\mu}+1}
\end{align*}
That being so, can we do better if we allow families of quadratic functions $\posdefun{d}{\Sigma}$ where $\Sigma$ is not necessarily a continuous interval? It turns out that the answer is positive. Indeed, in Section \ref{section:Lift_factor_3} we present a $3$-SCLI optimization algorithm with linear coefficient matrices which, by being intimately adjusted to quadratic functions whose hessian admits a large enough spectral gap, beats the lower bound of Nemirovsky and Yudin (\ref{ineq:sqrtlb}). This apparently contradictory result is also discussed in Section \ref{section:Lift_factor_3}, where we show that lower bound (\ref{ineq:sqrtlb}) is established by employing quadratic functions whose hessian has a spectrum which densely populates $[\mu,L]$. We would like to stress that, as useful as such an optimization algorithm might be, it is provided only to demonstrate the detailed analysis which this framework allows, and to indicate that applications whose spectra are distributed differently over $[\mu,L]$ might admit faster general solvers than what is dictated by lower bound (\ref{ineq:sqrtlb}).
\end{description}
\section{Upper Bounds} \label{section:ub}
Up to this point we have projected various optimization algorithms onto the framework of $p$-SCLI optimization algorithms, thereby converting questions on convergence properties into questions on the moduli of roots of polynomials. In what follows, we head in the opposite direction. That is to say, we first define a polynomial (see Definition (\ref{def:l_lambda})) which meets a prescribed set of constraints, and then form the corresponding $p$-SCLI optimization algorithm. As stressed in Section \ref{section:is_this_tight}, we focus exclusively on linear coefficient matrices, which admit a low per-iteration computational cost and allow a straightforward extension to general smooth and strongly convex functions. Surprisingly enough, this allows a systematic recovery of FGD, HB and AGD, as well as the derivation of new optimization algorithms which make better use of second-order information. This line of inquiry is particularly important due to the obscure nature of AGD, and further emphasizes its algebraic character. We defer stochastic coefficient matrices, as in SDCA (Section \ref{section:sdca_case_study}), to future work.\\
This section is organized as follows: first, we apply Scheme 3 to derive general $p$-SCLIs with linear coefficient matrices; next, following this line of argument, we recover AGD and HB as optimal instantiations in this setting; finally, although general $p$-SCLI algorithms are specified exclusively for quadratic functions, we show how $p$-SCLIs with linear coefficient matrices can be extended to general smooth and strongly convex functions.
\subsection{Linear Coefficient Matrices} \label{section:lin_coeff_mat}
In the sequel we instantiate Scheme 3 (see Section \ref{subsection:gen_bounds}) for $\mathcal{C}_{\text{Linear}}$, the family of deterministic linear coefficient matrices. \\
First, note that due to the consistency constraints, inversion matrices of constant $p$-SCLIs with linear coefficient matrices must either be constant scalar matrices or be computationally equivalent to $A^{-1}$. Therefore, since our motivation for resorting to linear coefficient matrices was efficiency, we may safely assume that $N(X)=\nu I_d$ for some $\nu\in(-2^p/L,0)$. Following Scheme 3, we now seek the optimal characteristic polynomial in $\mathfrak{L}(p,\nu I_d,\posdefun{d}{[\mu,L]}, \mathcal{C}_{\text{Linear}})$ with a compatible set of parameters (see Section \ref{subsection:gen_bounds}). In the presence of linearity, the characteristic polynomial takes the following simplified form
\begin{align*}
\syspol{}{\lambda,X} &= \lambda^p - \sum_{j=0}^{p-1} (a_j X + b_j I_d) \lambda^j,\quad a_j,b_j\in\mathbb{R}
\end{align*}
By (\ref{eq:max_over_ells}) we have
\begin{align*}
\rho_\lambda(\syspol{}{\lambda,X})&= \max \myset{|\lambda|}{\exists i\in[d],~ \ell_i(\lambda)=0 }
\end{align*}
where $\ell_i(\lambda)$ denote the factors of the characteristic polynomial as in (\ref{eq:system_polys}). That is, denoting the eigenvalues of $X$ by $\sigma_1,\dots,\sigma_d$ we have
$$\ell_i(\lambda)=\lambda^p - \sum_{j=0}^{p-1} (a_j \sigma_i + b_j) \lambda^j=\lambda^p - \sigma_i \sum_{j=0}^{p-1} a_j \lambda^j+\sum_{j=0}^{p-1} b_j \lambda^j$$
Thus, we can express the maximal root radius of the characteristic polynomial over $\posdefun{d}{[\mu,L]}$ in terms of the following polynomial
\begin{align} \label{definition:general_characteristic}
\ell(\lambda,\eta) &= \lambda^p-(\eta a(\lambda)+b(\lambda))
\end{align}
for corresponding univariate real polynomials $a(\lambda)$ and $b(\lambda)$ of degree $p-1$, whereby
\begin{align*}
\max_{A\in \posdef{d}{\Sigma} } \rho_\lambda \circpar{\syspol{}{\lambda,A}}&=\max_{\eta\in[\mu,L]} \rho\circpar{ \ell(\lambda,\eta) }
\end{align*}
That being the case, finding the optimal characteristic polynomial in $\mathfrak{L}$ translates to the following minimization problem,
\begin{center}
\fbox{
\addtolength{\linewidth}{-2\fboxsep}%
\addtolength{\linewidth}{-2\fboxrule}%
\begin{minipage}{0.5\linewidth}\vspace{-1em}
\begin{align}
\underset{\ell(\lambda,\eta)\in\mathfrak{L}}{\text{minimize}} & ~\max_{\eta\in[\mu,L]} \rho_\lambda(\ell(\lambda,\eta))\nonumber\\
\text{s.t. } & \ell(1,\eta) = -\nu\eta,\quad \eta \in [\mu,L] \label{eq:lin_cor1} \\
&\rho_\lambda(\ell(\lambda,\eta)) < 1 \label{eq:lin_cor2}
\end{align}
\end{minipage}
}
\end{center}
(Note that in this case we think of $\mathfrak{L}$ as a set of polynomials whose variable takes scalar values.) Let us calculate the optimal characteristic polynomial for the setting where the lifting factor is $p=1$, the family of quadratic functions under consideration is $\posdefun{d}{[\mu,L]}$, and the inversion matrix is $N(X)=\nu I_d,~\nu\in(-2/L,0)$. In this case, (\ref{definition:general_characteristic}) takes the following form
\begin{align*}
\ell(\lambda,\eta) = \lambda - \eta a _0 - b_0
\end{align*}
where $a_0,b_0$ are some real scalars. In order to satisfy (\ref{eq:lin_cor1}) for all $\eta\in[\mu,L]$, we have no other choice but to set
\begin{align*}
a_0 &= \nu,\quad b_0 = 1
\end{align*}
which implies
\begin{align*}
\rho_\lambda(\ell(\lambda,\eta)) = \absval{1+\nu\eta}
\end{align*}
Since $\nu\in(-2/L,0)$ and $\eta\in[\mu,L]$, condition (\ref{eq:lin_cor2}) follows as well. The corresponding 1-SCLI optimization algorithm is
\begin{align*}
{\mathbf x}^{k+1}=(I+\nu A) {\mathbf x}^{k} + \nu {\mathbf b}
\end{align*}
and its first-order extension (see Section \ref{section:f_o_e} below) is precisely FGD (see Section \ref{section_spec_algo}). Finally, note that the corresponding root radius is bounded from above by
\begin{align*}
\frac{\kappa-1}{\kappa}
\end{align*}
for $\nu=-1/L$, the minimizer in Case 1 of Table \ref{table:nu_subranges}, and by
\begin{align*}
\frac{\kappa-1}{\kappa+1}
\end{align*}
for $\nu=\frac{-2}{\mu+L}$, the minimizer in Case 2 of Table \ref{table:nu_subranges}. This proves that FGD is optimal for the class of 1-SCLIs with linear coefficient matrices. \figref{figure:FGD} shows how the root radius of the characteristic polynomial of FGD is related to the eigenvalues of the hessian of the quadratic function under consideration.
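As a quick numerical sanity check (illustrative, not part of the text), the 1-SCLI root radius $\max_{\eta\in[\mu,L]}|1+\nu\eta|$ can be evaluated directly on a grid for the two step sizes above:

```python
def fgd_root_radius(nu, mu, L, samples=1000):
    """Worst-case root radius max_{eta in [mu, L]} |1 + nu*eta|
    of the 1-SCLI characteristic polynomial lambda - (1 + nu*eta)."""
    etas = [mu + k * (L - mu) / samples for k in range(samples + 1)]
    return max(abs(1 + nu * eta) for eta in etas)

mu, L = 1.0, 100.0
print(fgd_root_radius(-1 / L, mu, L))         # 1 - mu/L = 0.99
print(fgd_root_radius(-2 / (mu + L), mu, L))  # (kappa-1)/(kappa+1) ~ 0.9802
```

Both maxima are attained at the endpoints of $[\mu,L]$, which the grid includes.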
\begin{figure}[H]
\centering
\includegraphics[scale=0.6,trim= 0 240 0 250,clip]{ch5sec3FGD.pdf}
\caption{The root radius of FGD vs. various eigenvalues of the corresponding hessian.}
\label{figure:FGD}
\end{figure}
\subsection{Recovering AGD and HB}
Let us now calculate the optimal characteristic polynomial for the setting where the lifting factor is $p=2$, the family of quadratic functions under consideration is $\posdefun{d}{[\mu,L]}$, and the inversion matrix is $N(X)=\nu I_d,~\nu\in(-4/L,0)$ (recall that the restricted range of $\nu$ is due to consistency). In this case, (\ref{definition:general_characteristic}) takes the following form
\begin{align} \label{def:poly_q_p2}
\ell(\lambda,\eta)= \lambda^2 - \eta(a_1 \lambda + a_0 ) - (b_1 \lambda + b_0)
\end{align}
for some real scalars $a_0,a_1,b_0,b_1$. Our goal is to choose $a_0,a_1,b_0,b_1$ so as to minimize $$\max_{\eta\in[\mu,L]}\rho_\lambda(\ell(\lambda,\eta))$$ while maintaining conditions (\ref{eq:lin_cor1}) and (\ref{eq:lin_cor2}). Note that $\ell(\lambda,\eta)$, viewed as a function of $\eta$, forms a linear path of quadratic polynomials. Thus, a natural way to achieve this goal is to choose $\ell(\lambda,\eta)$ so that $\ell(\lambda,\mu)$ and $\ell(\lambda,L)$ take the form of the `economic' polynomials introduced in \lemref{lem:comp_poly}, namely
\begin{align*}
\circpar{\lambda-(1-\sqrt{r})}^2
\end{align*}
for $r=-\nu\mu$ and $r=-\nu L$, respectively, in the hope that for other values $\eta\in(\mu,L)$ the roots of $\ell(\lambda,\eta)$ remain of small magnitude. Note that since $\ell(\lambda,\eta)$ is linear in $\eta$, condition (\ref{eq:lin_cor1}) readily holds for any $\eta\in(\mu,L)$. This yields the following two equations
\begin{align*}
\ell(\lambda,\mu) &= \circpar{\lambda-(1-\sqrt{-\nu\mu})}^2\\
\ell(\lambda,L) &= \circpar{\lambda-(1-\sqrt{-\nu L})}^2
\end{align*}
Substituting (\ref{def:poly_q_p2}) for $\ell(\lambda,\eta)$ and expanding the r.h.s.\ of the equations above, we get
\begin{align*}
\lambda^2 - ( a_1\mu +b_1)\lambda - (a_0\mu + b_0) &= \lambda^2 -2(1-\sqrt{-\nu\mu})\lambda + (1-\sqrt{-\nu\mu})^2 \\
\lambda^2 - ( a_1 L +b_1)\lambda - (a_0 L+ b_0) &= \lambda^2 -2(1-\sqrt{-\nu L})\lambda + (1-\sqrt{-\nu L})^2
\end{align*}
This can be equivalently expressed as the following system of linear equations:
\begin{align}
- ( a_1\mu +b_1) &= -2(1-\sqrt{-\nu\mu}) \label{eq_params_1}\\
- (a_0 \mu+ b_0) &= (1-\sqrt{-\nu \mu})^2 \label{eq_params_2} \\
- ( a_1 L +b_1) &= -2(1-\sqrt{-\nu L}) \label{eq_params_3}\\
- (a_0 L+ b_0) &= (1-\sqrt{-\nu L})^2 \label{eq_params_4}
\end{align}
Multiplying \eqref{eq_params_1} by $-1$ and adding it to \eqref{eq_params_3}, and likewise multiplying \eqref{eq_params_2} by $-1$ and adding it to \eqref{eq_params_4}, yields
\begin{align*}
a_1(\mu-L) &= 2\sqrt{-\nu}(\sqrt{L}-\sqrt{\mu}) \\
a_0 (\mu-L) &= (1-\sqrt{-\nu L})^2 -(1-\sqrt{-\nu \mu})^2
\end{align*}
Thus,
\begin{align*}
a_1 &= \frac{-2\sqrt{-\nu} }{\sqrt{\mu}+\sqrt{L}},\qquad
a_0 = \frac{2\sqrt{-\nu}} {\sqrt{\mu}+\sqrt{L}} + \nu
\end{align*}
Remarkably enough, plugging $\nu=-1/L$ (see Table \ref{table:nu_subranges}) into the equations above and solving for $b_1$ and $b_0$ yields a 2-SCLI optimization algorithm whose first-order extension is precisely AGD (see Section \ref{section_spec_algo}). Following the same derivation, this time with $$\nu = -\circpar{\frac2{\sqrt{L}+\sqrt{\mu}}}^2$$ yields the Heavy Ball method.\\
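For completeness, once $a_1$ and $a_0$ are known, $b_1$ and $b_0$ follow directly from (\ref{eq_params_1}) and (\ref{eq_params_2}):

```latex
\begin{align*}
b_1 &= 2\left(1-\sqrt{-\nu\mu}\right) - a_1\mu,\\
b_0 &= -\left(1-\sqrt{-\nu\mu}\right)^2 - a_0\mu
\end{align*}
```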
\begin{figure}[H]
\centering
$\left.\includegraphics[scale=0.6,trim= 100 240 150 250,clip]{ch5sec3AGD.pdf}
\includegraphics[trim= 150 240 100 250,clip,scale=0.6]{ch5sec3HB.pdf}\right.$
\caption{The root radius of AGD and HB vs. various eigenvalues of the corresponding hessian.}
\label{figure:AGD_HB}
\end{figure}
Moreover, using the standard formula for the roots of a quadratic polynomial, one can easily verify that
\begin{align*}
\rho_\lambda\circpar{\ell(\lambda,\eta)}\leq \frac{\sqrt{\kappa}-1}{\sqrt{\kappa}},~\eta\in[\mu,L]
\end{align*}
for AGD, and
\begin{align*}
\rho_\lambda\circpar{\ell(\lambda,\eta)}\leq \frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1},~\eta\in[\mu,L]
\end{align*}
for HB. In particular, condition (\ref{eq:lin_cor2}) holds. \figref{figure:AGD_HB} shows how the root radii of the characteristic polynomials of AGD and HB are related to the eigenvalues of the hessian of the quadratic function under consideration.
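The two bounds above can be checked numerically (an illustration, not taken from the text; $b_1$ and $b_0$ are recovered from (\ref{eq_params_1})--(\ref{eq_params_2}) once $a_1,a_0$ are fixed as derived above):

```python
import cmath

def scli2_worst_radius(nu, mu, L, samples=2000):
    """Worst-case root radius over eta in [mu, L] of
    l(lambda, eta) = lambda^2 - (a1*eta + b1)*lambda - (a0*eta + b0),
    with a1, a0 as derived in the text and b1, b0 recovered from
    the endpoint equations."""
    s = (-nu) ** 0.5
    a1 = -2 * s / (mu ** 0.5 + L ** 0.5)
    a0 = 2 * s / (mu ** 0.5 + L ** 0.5) + nu
    b1 = 2 * (1 - s * mu ** 0.5) - a1 * mu
    b0 = -(1 - s * mu ** 0.5) ** 2 - a0 * mu
    worst = 0.0
    for k in range(samples + 1):
        eta = mu + k * (L - mu) / samples
        p1, p0 = a1 * eta + b1, a0 * eta + b0  # l = x^2 - p1*x - p0
        disc = cmath.sqrt(p1 * p1 + 4 * p0)
        worst = max(worst, abs((p1 + disc) / 2), abs((p1 - disc) / 2))
    return worst

mu, L = 1.0, 100.0
rk = (L / mu) ** 0.5  # sqrt(kappa) = 10
agd = scli2_worst_radius(-1 / L, mu, L)                             # AGD: nu = -1/L
hb = scli2_worst_radius(-(2 / (L ** 0.5 + mu ** 0.5)) ** 2, mu, L)  # HB
print(abs(agd - (rk - 1) / rk) < 1e-6)        # AGD radius: (sqrt(k)-1)/sqrt(k)
print(abs(hb - (rk - 1) / (rk + 1)) < 1e-6)   # HB radius: (sqrt(k)-1)/(sqrt(k)+1)
```

For HB the discriminant is non-positive on all of $[\mu,L]$, so every root has modulus exactly $\sqrt{-b_0}$; for AGD the maximum is attained at the double root at $\eta=\mu$.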
\subsection{First-Order Extension for \texorpdfstring{$p$}{p}-SCLIs with Linear Coefficient Matrices} \label{section:f_o_e}
As mentioned before, since coefficient matrices of $p$-SCLIs can take any form, it is not clear how to use a given $p$-SCLI algorithm, efficient as it may be, for minimizing general smooth and strongly convex functions. That being the case, one could argue that recovering the specification of, say, AGD for quadratic functions does not necessarily indicate how to recover AGD itself. Fortunately, consistent $p$-SCLIs with linear coefficient matrices can be reformulated as optimization algorithms for general smooth and strongly convex functions in a very natural way by substituting $\nabla f({\mathbf x})$ for $A{\mathbf x}+{\mathbf b}$, while preserving the original convergence properties to a large extent. In the sequel we briefly discuss this appealing property, namely, the canonical first-order extension, which completes the path from the world of polynomials to the world of optimization algorithms for general smooth and strongly convex functions. \\
Let $\mathcal{A}\stackrel{\vartriangle}{=}(\syspol{\mathcal{A}}{\lambda,X},N(X))$ be a consistent $p$-SCLI optimization algorithm with a scalar inversion matrix, i.e., $N(X) \stackrel{\vartriangle}{=} \nu I_d,~\nu\in(-2^p/L,0)$, and linear coefficient matrices
\begin{align} \label{def:lin_coeff}
C_j(X) = a_j X + b_j I_d ,~\quad j=0,\dots,p-1
\end{align}
where $a_0,\dots,a_{p-1}\in\mathbb{R}$ and $b_0,\dots,b_{p-1}\in\mathbb{R}$ denote real scalars. Recall that by consistency, for any $\quadab{A,{\mathbf b}}\in\posdefun{d}{\Sigma}$ it holds that
\begin{align*}
\sum_{j=0}^{p-1}C_j(A) = & I + \nu A
\end{align*}
Thus
\begin{align} \label{eq:line_coeff_cons}
\sum_{j=0}^{p-1} b_j &=1 \text{ and } \sum_{j=0}^{p-1} a_j =\nu
\end{align}
By the very definition of $p$-SCLI optimization algorithms (\defref{definition:pscli}), we have that
\begin{align*}
{\mathbf x}^{k} = C_0(A) {\mathbf x}^{k-p} + C_1(A) {\mathbf x}^{k-(p-1)} + \dots +C_{p-1}(A) {\mathbf x}^{k-1} + \nu {\mathbf b}
\end{align*}
Substituting (\ref{def:lin_coeff}) for $C_j(A)$ gives
\begin{align*}
{\mathbf x}^{k} = (a_0 A + b_0 I_d) {\mathbf x}^{k-p} + (a_1 A + b_1 I_d){\mathbf x}^{k-(p-1)} + \dots +(a_{p-1} A + b_{p-1} I_d) {\mathbf x}^{k-1} + \nu {\mathbf b}
\end{align*}
Rearranging and plugging in (\ref{eq:line_coeff_cons}), we get
\begin{align*}
{\mathbf x}^{k} &= a_0 (A {\mathbf x}^{k-p} + {\mathbf b}) + a_1 (A {\mathbf x}^{k-(p-1)} + {\mathbf b}) +\dots+
a_{p-1} (A {\mathbf x}^{k-1} + {\mathbf b})\\ &\quad+
b_0 {\mathbf x}^{k-p} + b_1 {\mathbf x}^{k-(p-1)} + \dots + b_{p-1} {\mathbf x}^{k-1}
\end{align*}
Finally, by substituting $\nabla f({\mathbf x})$ for its analogue $A{\mathbf x} + {\mathbf b}$, we arrive at the following canonical first-order extension of $\mathcal{A}$
\begin{align} \label{eq:nabla_count}
{\mathbf x}^{k} &= \sum_{j=0}^{p-1} b_j {\mathbf x}^{k-(p-j)} + \sum_{j=0}^{p-1} a_j \nabla f({\mathbf x}^{k-(p-j)})
\end{align}
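To make this concrete, here is a minimal sketch of the canonical extension (\ref{eq:nabla_count}) for $p=2$; the Heavy-Ball coefficient choice and the test quadratic are illustrative assumptions, not taken from the text:

```python
def first_order_extension(grad, history, a, b, iters):
    """Iterate x^k = sum_j b_j x^{k-(p-j)} + sum_j a_j grad(x^{k-(p-j)}),
    where history = [x^0, ..., x^{p-1}] holds the p starting points
    (oldest first)."""
    xs = [list(v) for v in history]
    for _ in range(iters):
        grads = [grad(v) for v in xs]
        nxt = [sum(bj * v[i] for bj, v in zip(b, xs))
               + sum(aj * g[i] for aj, g in zip(a, grads))
               for i in range(len(xs[0]))]
        xs = xs[1:] + [nxt]
    return xs[-1]

# Heavy-Ball-style coefficients for p = 2, applied to the quadratic
# f(x) = 0.5 * sum_i h_i x_i^2 - r . x (an arbitrary test instance).
mu, L = 1.0, 100.0
alpha = 4 / (L ** 0.5 + mu ** 0.5) ** 2
beta = ((L ** 0.5 - mu ** 0.5) / (L ** 0.5 + mu ** 0.5)) ** 2
a = [0.0, -alpha]        # sum(a) = nu lies in (-2^p/L, 0), as consistency requires
b = [-beta, 1.0 + beta]  # sum(b) = 1, as consistency requires
h, r = [1.0, 100.0], [1.0, 2.0]
grad = lambda x: [h[i] * x[i] - r[i] for i in range(2)]
x = first_order_extension(grad, [[0.0, 0.0], [0.0, 0.0]], a, b, 400)
print(x)  # converges to the minimizer [1.0, 0.02]
```

On a quadratic, the recursion reduces exactly to the matrix form above; for general smooth and strongly convex $f$, only `grad` changes.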
Being applicable to a much wider collection of functions, how well should we expect the canonical extensions to behave? The answer is that, when initialized close enough to the minimizer, one should expect linear convergence at essentially the same rate. A formal statement is given by the theorem below, which follows easily from Theorem 1 in Section 2.1 of \cite{polyak1987introduction}, applied with
\begin{align*}
g({\mathbf x}^{k-p},{\mathbf x}^{k-(p-1)},\dots,{\mathbf x}^{k-1}) &= \sum_{j=0}^{p-1} b_j {\mathbf x}^{k-(p-j)} + \sum_{j=0}^{p-1} a_j \nabla f({\mathbf x}^{k-(p-j)})
\end{align*}
\begin{theorem}
Suppose $f:\mathbb{R}^d\to\mathbb{R}$ is an $L$-smooth $\mu$-strongly convex function and let ${\mathbf x}^*$ denote its minimizer. Then, for every $\epsilon>0$, there exist $\delta>0$ and $C>0$ such that if
\begin{align*}
\norm{{\mathbf x}^j-{\mathbf x}^*}\le \delta,\quad j=0,\dots,p-1
\end{align*}
then
\begin{align*}
\norm{{\mathbf x}^k-{\mathbf x}^*}\le C(\rho^* + \epsilon)^k,\quad k =p,p+1,\dots
\end{align*}
where
\begin{align*}
\rho^* = \sup_{\eta\in\Sigma} \rho\circpar{\lambda^p - \sum_{j=0}^{p-1} (a_j \eta + b_j) \lambda^j}
\end{align*}
\end{theorem}
Unlike general $p$-SCLIs with linear coefficient matrices, which are guaranteed to converge only when initialized close enough to the minimizer, AGD converges linearly for any smooth and strongly convex function, regardless of the initialization point. This merits further investigation into the precise principles which underlie $p$-SCLIs of this kind.
import React, { Component } from 'react';
// import Sidebar from 'react-sidebar';
import { Grid, Row, Col } from 'react-bootstrap';
import Markdown from 'react-markdown2';
import Ace from 'react-ace';
import 'brace/mode/markdown';
import 'brace/theme/github';
// A two-pane live Markdown editor: Ace editor on the left, rendered preview on the right.
export default class Root extends Component {
constructor(props) {
super(props);
this.state = {
// TODO load last from local storage
content: '',
};
}
render() {
const onChange = value => {
this.setState({ content: value });
};
const aceProps = {
mode: 'markdown',
theme: 'github',
value: this.state.content,
onChange,
};
return (
<Grid className="app">
<Row>
<Col md={6}>
<Ace {...aceProps}/>
</Col>
<Col md={6}>
<Markdown source={this.state.content}/>
</Col>
</Row>
</Grid>
);
}
}
People love talking about the weather. Whether it's on Twitter in 2017, or in letters to the Editor in 1841, we are never short of a meteorological-based conversation starter, particularly in Australia.
The aim of these posts is to share some of the events that piqued people's interest back in the day. Not always the hottest, driest, coldest or wettest day, but some days that got people whinging.
During this week in 1844, Sydney experienced a swing of temperatures which was echoed across the colony of New South Wales.
The Morning Chronicle reported a 6am temperature of 43 °F (6.1 °C) on the morning of 26 March 1844. This was the coldest March morning in the 10 years of published observations in the paper, although there is no information about where the thermometer was kept, so the data may not be that accurate.
Morning Chronicle, 27 March 1844.
Chilly mornings and premature frosts were also reported in the Hunter Valley, where correspondents claimed that the nights were "unusually cold for the season of the year".
Regional correspondents also reported the hot blustery day. In Goulburn, southern NSW, the wind on the 29th was "unusually violent. Indeed the most disagreeable day we have had for a long season. Rain is much wanted to put the land in condition for the plough". James Waugh at Jamberoo, also in southern New South Wales, recorded a maximum temperature of 90 ºF (32.2 ºC) in his diary (available at the State Library of New South Wales), with hot and windy weather.
The Hunter Valley felt the effects of the hot northerly too. On the 2nd of April, a Morpeth resident "of upwards of forty years" wrote to the Sydney Morning Herald of their fears that their sunflowers would "be apt to blights from the hot winds – that have been the case (in a great measure) this season".
On the 6th, the word from Muswellbrook in the upper Hunter was that "The weather has recently been very sultry, and the late corn has suffered severely from hot winds".
While the term wasn't used at the time, the 28th of March sounds like a classic "brickfielder", a strong dusty wind that blows during the summer months.
The term "brickfielder" was bandied about in the mid to late 1800s across southern Australia. According to this neat article, a brickfielder in the southern states was a hot dusty northerly, while in New South Wales it originally referred to a cool southerly wind which blew red dust from the brickworks into Sydney town.
Sydney Gazette and New South Wales Advertiser, 10 March 1836.
It might be that in Sydney, a "brickfielder" was any old wind that brought dust with it. On 10 March 1836 for example, The Sydney Gazette and New South Wales Advertiser reported that a brickfielder had occurred "Tuesday last".
The weather observations for that Tuesday last show a strong north north west wind, with high temperatures too.
In 2010, the Australian Meteorological and Oceanographic Society ran a competition to give Melbourne's dusty winds their own name, with the winner being the "Northerly Duster". It is yet to catch on.
Ecumenical Councils accepted by the Ethiopian Orthodox Tewahedo Church
Since her recognition as an Episcopate in 330 A.D., the Ethiopian Orthodox Tewahedo Church - one of the most ancient churches in Christendom - has been fulfilling her apostolic duties up to the present time.
Her doctrine is based on the teaching of the Ethiopian Eunuch, St. Matthew and other apostles. In addition, she accepted the canons and the decisions of the first three Ecumenical Councils, i.e., of Nicaea in 325, Constantinople in 381 and Ephesus in 431, and still is teaching their creed and serving the Lord to this day.
1. The Ecumenical Council of Nicea 325
As is known, the Nicean Council was called to oppose the heresy of Arius. Arius' teaching was based on "The Lord created me at the beginning of His work, the first of His acts of old" (Prov. 8:22).
Taking the literal interpretation of this verse, Arius taught that God the Son is a creature. The heresy of Arius originated from the Gnostic Lucian and Antiochian heretics. Even if it could be said that fatherhood belongs to God, He could not be the natural father but the adoptive father.
Among the heretical teachings of Arius was this: "There was a time when the Son, known as wisdom, was not; and there was an hour when He did not exist." Arius distorted verses to present God the Son as a creature and thus misled the people.
Alexander, the Archbishop of Alexandria, made an effort to bring back Arius from his heretical teachings. But Arius kept firm. Yet, starting from 320 A.D., he was openly spreading his heretical teaching in every town. Archbishop Alexander called a local gathering of 100 bishops and presented Arius' heretical teaching. In addition, he informed the assembly that he had advised Arius to desist from his heretical teaching in order not to raise a schism in the church and to resolve the problem in peace. The Synod, after examining the heretical teaching of Arius, and realizing that he would not repent, unanimously excommunicated him.
King Constantine sent his close and trusted friend, Hosius, Episcopos of Spain, to Alexandria because of the problem. Having discussed the case with Archbishop Alexander and other Bishops, Hosius returned to Spain and informed the king that the matter could not be solved peacefully.
After Archbishop Alexander had notified Constantine by letter, that the matter should be dealt with by a synod, the emperor called a meeting of the synod.
On this basis, the meeting of the synod, which held its preparatory meeting from May 20 to June 13, was opened in Nicea in 325 with the attendance of several bishops and their assistants. Having thus prepared its agenda, the meeting was officially opened with a speech by the Emperor Constantine in the presence of 2000 participants.
The topics discussed by the synod were:
The heretical teaching of Arius
The decision of the Alexandria Synod which was led by Archbishop Alexander against the heresy of Arius.
The Synod, after discussing these matters, explained to Arius that all the biblical verses quoted by him and his followers against the divinity of God the Son were mistaken.
Thus, the fathers attending the Synod made great efforts to explain to Arius that Proverbs 8:22 does not show that God the Son, the Word of God, was created like other creatures; rather, He was begotten from the Father before the world was created.
They quoted from the Holy Bible and explained to him that God the Son (the Logos) is God. The verses which show that Christ is the begotten Son of the Father are: Jn. 1:1-6, Jn. 10:30, Rom. 9:5, 1 Jn. 5:20, etc.
Then, the 318 Holy Fathers unanimously excommunicated Arius and condemned his teachings. In addition to this, they issued the Creed which confesses the divinity of the Son and they fixed the Canon Law of the church. The Creed which the 318 Nicean Fathers issued in 325 is the following:
"We believe in one God, God the Father Almighty, maker of heaven and earth, that which is visible and that which is invisible. And we believe in one Lord Jesus Christ the Only-Begotten Son of God, the Father who was with Him before the creation of the world, Light from Light, True God from True God, Begotten not created, consubstantial with the Father, through whom all things were made, and without Him was not anything made, things in heaven and things on earth, who for us men and for our salvation came down from heaven: He was incarnate by the Holy Spirit of the Holy Virgin Mary: and He became man, and He was crucified for us under Pontius Pilate: He suffered, died and was buried, and on the third day He rose again from the dead, according to the Scriptures: He ascended to heaven; He sitteth at the right hand of the Father, He will come again in His glory to Judge the living and the dead; whose kingdom shall have no end."
2. The Council of Constantinople
While Theodosius the first (The Great) was the emperor of Constantinople in 379, 'Timothy the Poor' was the Patriarch of Alexandria. And while Damasus was the Pope of Rome, Macedonius the Archbishop of Constantinople, denying the Godhead of the Holy Spirit, taught: "The Holy Spirit is not consubstantial with the Father and the Son in divinity. If the Holy Spirit proceeded from the Father and was sent by the Father and the Son, and was a messenger, He is not equal with Them. Therefore, He is subordinate to Them."
Since this heresy had spread throughout the Eastern Churches, 150 Bishops were assembled in Constantinople in 381.
At this great Council, the Holy Fathers quoting from the Old and New Testaments promulgated, saying "Even though the Holy Spirit proceeded from the Father, He is equal with the Father and the Son in nature, divinity and glory" (Ps. 33:6, Isa. 6:3, Acts. 28:25). After explaining this at great length, they ascertained the divinity of the Holy Spirit and they vehemently condemned Macedonius and his heretical teaching.
In the Nicean Creed, which was decided by the 318 Holy Fathers, they added an article concerning the divinity of the Holy Spirit. This article is:
"We believe in the Holy Spirit, the Lord and Life-giver, who proceeds from the Father; we worship and glorify Him with the Father and the Son, who spoke through the prophets; and in the one Holy Universal Apostolic Church. We confess one baptism for the remission of sins. We believe in the resurrection of the dead and their life in the world to come. Amen."
This was the unanimous promulgation of the 150 Fathers. From that time on, the Creed was called Niceno-Constantinopolitan Creed and was made to be recited in the daily prayer and liturgy by the whole congregations in all churches.
3. The Council of Ephesus
During the reign of Theodosius the second, St. Cyril was Patriarch of Alexandria in 412, and Celestine the first was the Pope of Rome. Nestorius, the Antiochian and disciple of Diodore of Tarsus, was enthroned as Archbishop of Constantinople in 427 A.D.
While Nestorius was Archbishop of Constantinople, there were several heretics who adhered to the teachings of Arius, Macedonius and Apollinarius. Upon the request of Nestorius, the Emperor issued a decree dismissing the heretics from the city and condemning their teachings.
In the year after the decree, Anastasius the monk who was the secretary of Nestorius in the great cathedral of Constantinople, preached saying "It is true that the Virgin Mary gave birth to Christ in virginity and purity. But it is not right to call her the Mother of God (Theotokos), for God cannot be born from a human being. Therefore, we truly believe that she is the bearer of Christ the Man (Christotokos)".
At that time those who heard the new teaching became disturbed and opposition rose from every direction. The congregations asked Nestorius to tell the monk that he should correct his heretical teachings. But Nestorius refused to accept the requests for he knew that the heretical teaching was his own.
From that time on, Nestorius rejected the Only-Begotten Sonhood and divinity of Christ. And he spread his teaching in word and writing saying, "He who was born from Mary is a mere man. In him, the divinity or the Word of God dwelt and He is called Christ. Therefore, it is wrong to call Mary the Mother of God. But truly she must be called the mother of Christ and not the mother of God."
This heretical teaching of Nestorius spread everywhere. St Cyril of Alexandria, on hearing this heretical teaching, sent Nestorius an advisory letter for the first time in February 430. At the same time he wrote a letter to the Pope of Rome and to the other Archbishops of the Eastern and Western Churches explaining the danger inherent in the heresy. He also informed Emperor Theodosius about the matter.
Nestorius on his part wrote letters of opposition to those whom St Cyril had addressed. He forbade the priests and monks of his diocese from giving pastoral services and prohibited them from entering the church. However, they appealed to the Emperor and to St Cyril. Then St Cyril, without denouncing Nestorius, but condemning his heresy, sent Nestorius a serious letter and sent copies to the Emperor's office, to the concerned Archbishops and the Archimandrites of different monasteries. In these circumstances, St Cyril and Nestorius exchanged hostile letters for a year. Because Nestorius could not be stopped by any means from spreading his heresy, the Emperor, Pope Celestine the first, and the Archbishops of the Eastern and Western Churches agreed to call a synod and examine his heresy. It was decided to hold this in Ephesus in 431. About 200 archbishops and bishops attended the Ephesus meeting, and St Cyril was elected chairman of the Synod.
The Archbishop of Antioch was invited to come with the bishops who were under his diocese. But he failed to come on time, since he was a supporter of Nestorius. He was awaited for two weeks, and he continued to give false explanations about his absence. It was then decided to start the synod in his absence. An invitation was again sent to Nestorius so that he could come and explain his belief. He was invited three times and, because he was unwilling to attend, the synod started its discussion in his absence. The documents containing the heresy of Nestorius were read first, followed by St Cyril's writings against the heresy of Nestorius and the twelve anathemas. On this basis, the heretical teachings of Nestorius were exposed at the synod as contradictory to the doctrine of the Holy Church.
On the other hand, the writings of St Cyril were unanimously accepted by the synod, since they proved to be the same with the teachings of the Holy Scriptures and the doctrine of the Church Fathers.
The synod, after condemning the heretical teaching of Nestorius against St Mary being the Mother of God, issued the following: "That He who was born from the holy Virgin Mary is God and Mary truly is the Mother of God."
In addition to the participants of the council, the holy Archbishops and Bishops heard:
1. The content of the letters written by St Cyril to Nestorius which state "We believe that the Holy Virgin Mary is the Mother of God, the Logos"
2. The statement of the letter written by St Cyril to the Eastern Bishops which says "While Jesus Christ is God, He became man like us due to the flesh and soul that He took from the holy Virgin Mary; and He is one person and one nature in unity."
3. In the same letter it is stated "We must believe that the holy Virgin Mary is the Mother of God (Theotokos) and it is wrong to call her the mother of the man (Anthropotokos)."
4. In his letter again, he taught: "It is not proper to reject the unity of the two natures in Christ based on our subjective viewpoints; and likewise we should not say that the holy Virgin Mary did not give birth to God the Son."
The participants of the council, holy Archbishops and Bishops, approved the above-mentioned teachings unanimously, and excommunicated Nestorius. The decisions of this great council were signed by Emperor Theodosius and distributed in all places. Moreover, the holy Fathers of the Third International Council also endorsed the Canon Laws of the church.
All Churches accepted the decisions on the doctrine and the Canon Law of these three International Councils, because the Councils of Nicaea, Constantinople and Ephesus took place before the separation of the ancient Church concerning doctrine and Canon Law, and they give great honour to the councils. Especially the Ethiopian Orthodox Tewahedo Church and its sister Churches, such as the Coptic Orthodox, the Syrian Orthodox, the Armenian Orthodox and the Indian Orthodox Churches, fully accept the decisions of the three councils; for they practise the teaching of the Apostles and the canon law which was agreed by the three Councils.
The Ethiopian Orthodox Tewahedo Church abides by the decisions, canons and doctrines of the 318 Fathers of Nicaea, of the 150 Fathers of Constantinople and of the 200 Fathers of Ephesus, and condemns those who were excommunicated by the three Councils. She also condemns Apollinarius, who was excommunicated by the 2nd Council in Constantinople, and Eutyches, who was also excommunicated by the Councils of Ephesus.
The Ethiopian Orthodox Tewahedo Church does not recognize and accept the councils which took place after the Council of Ephesus. Not only the Ethiopian Orthodox Tewahedo Church, but also our sister churches, like the Coptic, the Syrian, the Indian and Armenian Orthodox churches, do not recognize and accept them.
Archbishop Yesehaq. (1997). The Ethiopian Tewahedo Church. Winston-Derek Pub
'Dron' is part of a bigger project called 'Color of Sound' whose idea is primarily focused on the connection between visual reception and matching sounds, i.e. their mutual overlap and complement. Specifically, it is connected to synaesthesia, a process in which colors and shapes can be heard, smelled or tasted.
Throughout the history of music, the term drone has been used to indicate a harmonically rich or simple monophonic, but constantly and continually emitting sound, which with its own characteristics, expressed through complex layering and endless repetition, creates an atmosphere of infinite space and time. The visual part of this artifact is a drawing (ink on paper), made by multiplying a dot, the primary drawing element. With this technique I created an illusion of organic and atmospheric "heterogenic homogeny", which may represent the visual manifestation of a constantly varying and morphing drone sound.
A not so useful HP-16C program
https://www.hpmuseum.org/forum/post-95884.html

04-21-2018, 11:37 PM - Post #1 - Gerson W. Barbosa

Well, it will run on the HP-11C as well (only replace DSZ with DSE).

001-    LBL A
002-    STO I
003-    4
004-    ENTER
005-    ENTER
006-    2
007-    ENTER
008-    4
009-    LBL 0
010-    ENTER
011-    +
012-    R↓
013-    ×
014-    √x
015-    ENTER
016-    R↓
017-    x↔y
018-    +
019-    LSTx
020-    R⬆
021-    ×
022-    LSTx
023-    R↓
024-    x↔y
025-    ÷
026-    ENTER
027-    +
028-    ENTER
029-    R↓
030-    R↓
031-    DSZ
032-    GTO 0
033-    R↓
034-    ×
035-    √x
036-    R/S
037-    ENTER
038-    R⬆
039-    ÷
040-    ENTER
041-    ×
042-    CHS
043-    1
044-    +
045-    √x
046-    3
047-    ×
048-    CHS
049-    1
050-    6
051-    +
052-    ×
053-    R⬆
054-    +
055-    +
056-    1
057-    5
058-    ÷
059-    RTN

No numbered registers, only the index register and the stack. No attempt has been made to make it shorter. Hopefully this might have some didactic value (to demonstrate an algorithm).

Gerson.

04-22-2018, 01:08 AM - Post #2 - Joe Horn

Ok, I give up. What is it for? What does it do? How is it used? Some hints, please.

<0|ɸ|0>
-Joe-

04-22-2018, 01:14 AM (last modified 04-22-2018 01:36 AM) - Post #3 - Gerson W. Barbosa

(04-22-2018 01:08 AM) Joe Horn Wrote: Ok, I give up. What is it for? What does it do? How is it used? Some hints, please.

Don't give up, at least not yet. Enter an integer number. Start with 1 and run the program. Increase it to 2, then to 3, and see what you get in X and Y. Repeat this again, pressing the R/S key afterwards. I'll give a reference later.

Painstakingly trying to fix errors on my smartphone while watching Lost in Space on Netflix (Danger, Will Robinson!). Who said touchscreen was a good idea?

04-22-2018, 01:37 AM - Post #4 - rprosperi

So it's easy to see the input is the number of iterations, but not so clear to see how the resulting X and Y converge. Looking forward to the insights once folks have suffered long enough.

--Bob Prosperi

04-22-2018, 02:11 AM (last modified 04-22-2018 02:55 AM) - Post #5 - Gerson W. Barbosa

Back to my good old desktop computer!

That's Archimedes' method to approximate pi, except that instead of starting with the hexagon, I start with squares inscribed in and circumscribed about a circumference of radius 1/2 (thus saving a few steps, as the initial constants, 2 and 4, don't involve surds). Without a positional number system and without a consistent math notation, that was quite a feat: 0.6 digits per iteration! Not bad around 250 BC.

Pressing R/S accelerates the convergence by a factor of 3. Basically, I use formula 2.6 in this paper in terms of a and b (the perimeters of the n-gons inscribed in and circumscribed about a circumference of radius 1/2). I came up with a similar-precision formula seven years ago, albeit an empirically obtained one. Currently I have a method that yields 3 times as many digits per iteration as the basic Archimedes algorithm, but I think it can go up to more than 8 times as many (5 digits per iteration). No intention to compete with modern methods, though. Square root extractions are a time-consuming task, be they done by hand or by machine.

P.S.: BTW, the perimeter of the circumscribed 96-gon, Archimedes' second bound, is about 21.9990021999/7 (well, actually 21.9990021975, but the former is nicer). This is also the place to discuss near-integers like 2*(e - atan(e)) = 2.9999978 (that's gonna be our 3 in that nerd's clock!).

04-22-2018, 01:08 PM - Post #6 - Dieter

(04-22-2018 02:11 AM) Gerson W. Barbosa Wrote: P.S.: BTW, the perimeter of the circumscribed 96-gon, Archimedes' second bound, is about 21.9990021999/7 [...]. This is also the place to discuss near-integers like 2*(e - atan(e)) = 2.9999978 (that's gonna be our 3 in that nerd's clock!).

This reminds me of e^pi - pi.

And you should also read the caption. ;-)

Dieter

04-22-2018, 05:59 PM (last modified 04-22-2018 06:06 PM) - Post #7 - Gerson W. Barbosa

(04-22-2018 01:08 PM) Dieter Wrote: This reminds me of e^pi - pi.

I've tried to "fix" that at least a couple of times :-)

$${e}^{\pi }-\pi +\frac{9^{2}}{89998-{10}^{5}\cdot \left ( {\frac{9^{2}}{89998}} \right )^{2}}=19.99999999999999295470$$

$${e}^{\pi }-\pi+\left(\frac{3}{10^{2}}\right)^{2}+\frac{1}{\left ( \ln (2)\cdot 10^{4}+\frac{\sqrt{10}}{6} \right )^{2}}=20.00000000000000072951$$

(04-22-2018 01:08 PM) Dieter Wrote: And you should also read the caption. ;-)

"Also, I hear the 4th root of (9^2 + 19^2/22) is pi."

Only slightly better, but many 2's and too many 9's, though not so many 6's and 7's:

$$\frac{2\left ( 16\sqrt{2}+1 \right )}{15+\frac{1}{24-\frac{9999}{2^{20+\frac{22552}{99999}}}}}=3.1415926535876$$

Gerson.

04-22-2018, 09:31 PM (last modified 04-27-2018 05:53 PM) - Post #8 - Gerson W. Barbosa

(04-21-2018 11:37 PM) Gerson W. Barbosa Wrote: Hopefully this might have some didactic value (to demonstrate an algorithm).

RPN, especially when making extensive use of the stack, is not adequate to demonstrate or share algorithms. So, here is a Decimal BASIC version:

OPTION ARITHMETIC DECIMAL_HIGH    ! 1000 digits precision
LET r = TIME
LET n = 6                         ! number of sides of the first circumscribed polygon, a hexagon in this case
LET m = 330                       ! number of iterations ( number of sides of the last polygon: 6*2^m )
LET b = 2*SQR(3)                  ! perimeter of the circumscribed hexagon
LET a = 3*b/4                     ! perimeter of the inscribed triangle
FOR i = 1 TO m
   LET a = SQR(a*b)               ! current a = GM(previous a, current b); GM = geometric mean
   LET b = 2*a*b/(a + b)          !    next b = HM(current a, previous b); HM = harmonic mean
   LET n = n + n                  ! double the number of sides at each iteration
NEXT i
LET a = SQR(a*b)                  ! final a
LET t = a/n
LET q = (2*b + a*(16 - 3*SQR(1 - t*t)))/15   ! Chakrabarti-Hudson approximation to pi
LET t = q/n
LET c = t*t
LET t = c*(329868000 + c*(42226800 + c*(4619230 + 481213*c)))
LET u = 1164240000 + t
LET p = 1/4656960000*(a*(1164240000 + t) + SQR(a*(-9313920000*b*(-1164240000 + t) + a*u*u)))   ! my approximation to pi
PRINT p
PRINT TIME - r
END

3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679
8214808651328230664709384460955058223172535940812848111745028410270193852110555964462294895493038196
4428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273
7245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094
3305727036575959195309218611738193261179310511854807446237996274956735188575272489122793818301194912
9833673362440656643086021394946395224737190702179860943702770539217176293176752384674818467669405132
0005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235
4201995611212902196086403441815981362977477130996051870721134999999837297804995105973173281609631859
5024459455346908302642522308253344685035261931188171010003137838752886587533208381420617177669147303
598253490428755468731159562863882353787593751957781857780532171226806613001927876611195909216420199

1.55

This yields 999 digits of pi after 330 iterations. Notice that q is pi to about 600 digits, and a and b are pi to about 200 digits.

The following is a more faithful version of the RPN program, which starts with a square instead of a hexagon:

LET n = 4                         ! start with a square
LET m = 5                         ! 5 iterations -> 128-gon
LET b = 4
LET a = 2
FOR i = 1 TO m
   LET a = SQR(a*b)
   LET b = 2*a*b/(a + b)
   LET n = n + n
NEXT i
LET a = SQR(a*b)                  ! Archimedes method (128-gon lower bound)
PRINT a
LET t = a/n
LET q = (2*b + a*(16 - 3*SQR(1 - t*t)))/15   ! Chakrabarti-Hudson approximation (three-time acceleration)
PRINT q
END

3.14127725093278
3.14159265359634

============================================

Update (04-25-2018 04:47 PM):

Here is a version without the Chakrabarti-Hudson approximation. It requires an intermediate step, but saves one square root extraction. Not sure whether this is more efficient, though.

OPTION ARITHMETIC DECIMAL_HIGH    ! 1000 digits precision
LET s = TIME
LET n = 6                         ! number of sides of the first circumscribed polygon, a hexagon in this case
LET m = 330                       ! number of iterations ( number of sides of the last polygon: 6*2^m )
LET b = 2*SQR(3)                  ! perimeter of the circumscribed hexagon
LET a = 3*b/4                     ! perimeter of the inscribed triangle
FOR i = 1 TO m
   LET a = SQR(a*b)               ! current a = GM(previous a, current b); GM = geometric mean
   LET b = 2*a*b/(a + b)          !    next b = HM(current a, previous b); HM = harmonic mean
   LET n = n + n                  ! double the number of sides at each iteration
NEXT i
LET a = SQR(a*b)                  ! a -> pi to 199 digits
LET r = (2*a + b)/3               ! r -> pi to 399 digits
LET t = r/n
LET c = t*t
LET q = a*(1 + (c*(60 + 7*c))/360)   ! q -> pi to 598 digits
LET t = q/n
LET c = t*t
LET t = c*(329868000 + c*(42226800 + c*(4619230 + 481213*c)))
LET u = 1164240000 + t
LET p = 1/4656960000*(a*(1164240000 + t) + SQR(a*(-9313920000*b*(-1164240000 + t) + a*u*u)))
PRINT TIME - s;"seconds"
PRINT p                           ! p -> pi to 999 digits
END

.83 seconds

3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679
8214808651328230664709384460955058223172535940812848111745028410270193852110555964462294895493038196
4428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273
7245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094
3305727036575959195309218611738193261179310511854807446237996274956735188575272489122793818301194912
9833673362440656643086
0213949463952247371907021798609437027705392171762931767523\u200b84674818467669405132\n00056812714526356082778577134275778960917363717872146844090122495343014654958537\u200b10507922796892589235\n42019956112129021960864034418159813629774771309960518707211349999998372978049951\u200b05973173281609631859\n50244594553469083026425223082533446850352619311881710100031378387528865875332083\u200b81420617177669147303\n59825349042875546873115956286388235378759375195778185778053217122680661300192787\u200b6611195909216420198\n\n============================================\n\nUpdate: (04-27-2018 05:43 PM)\n\nAnother version which involves two cubic root extractions:\n\nOPTION\u00a0ARITHMETIC\u00a0DECIMAL_HIGH\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0!\u00a01000\u00a0digits\u00a0precision\n\nSUB\u00a0CBR(x)\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0!\u00a0Cubic\u00a0root\u00a0subroutine\nIF\u00a0x<>0\u00a0THEN\nLET\u00a0cb\u00a0=\u00a0EXP(LOG(x)\/3)\nDO\nLET\u00a0w\u00a0=\u00a0cb\nLET\u00a0cb\u00a0=\u00a0(2*cb\u00a0+\u00a0x\/(cb*cb))\/3\nLOOP\u00a0UNTIL\u00a0ABS(cb\u00a0-\u00a0w)\u00a0<\u00a01e-1000\nLET\u00a0x\u00a0=\u00a0cb\nELSE\nLET\u00a0x\u00a0=\u00a00\nEND\u00a0IF\nEND\u00a0SUB\n\nLET\u00a0s\u00a0=\u00a0TIME\nLET\u00a0n\u00a0=\u00a06\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0!\u00a0number\u00a0of\u00a0sides\u00a0of\u00a0the\u00a0first\u00a0circumscribed\u00a0polygon,\u00a0a\u00a0hexagon\u00a0in\u00a0this\u00a0case\nLET\u00a0m\u00a0=\u00a0331\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0!\u00a0number\u00a0of\u00a
0iterations\u00a0(\u00a0number\u00a0of\u00a0sides\u00a0of\u00a0the\u00a0last\u00a0polygon:\u00a06*2^m\u00a0)\nLET\u00a0b\u00a0=\u00a02*SQR(3)\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0!\u00a0perimeter\u00a0of\u00a0the\u00a0circumscribed\u00a0hexagon\nLET\u00a0a\u00a0=\u00a03*b\/4\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0!\u00a0perimeter\u00a0of\u00a0the\u00a0inscribed\u00a0triangle\nFOR\u00a0i\u00a0=\u00a01\u00a0TO\u00a0m\nLET\u00a0a\u00a0=\u00a0SQR(a*b)\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0!\u00a0current\u00a0a\u00a0=\u00a0GM(previous\u00a0a,\u00a0current\u00a0b);\u00a0GM\u00a0=\u00a0geometric\u00a0mean\nLET\u00a0b\u00a0=\u00a02*a*b\/(a\u00a0+\u00a0b)\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0!\u00a0\u00a0\u00a0\u00a0next\u00a0b\u00a0=\u00a0HM(current\u00a0a,\u00a0previous\u00a0b);\u00a0HM\u00a0=\u00a0harmonic\u00a0mean\nLET\u00a0n\u00a0=\u00a0n\u00a0+\u00a0n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0!\u00a0double\u00a0the\u00a0number\u00a0of\u00a0sides\u00a0at\u00a0each\u00a0iteration\nNEXT\u00a0i\nLET\u00a0a\u00a0=\u00a0SQR(a*b)\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0!\u00a0a\u00a0->\u00a0pi\u00a0to\u00a0199\u00a0digits\nLET\u00a0b2\u00a0=\u00a0b*b\nLET\u00a0n2\u00a0=\u00a0n*n\nLET\u00a0n4\u00a0=\u00a0n2*n2\nLET\u00a0n6\u00a0=\u00a0n2*n4\nLET\u00a0t\u00a0=\u00a0b*(30*n2\u00a0-\u00a013*b2)\u00a0+\u00a0SQR(500*n6\u00a0+\u00a0b2*(600*n4\u0
0a0+\u00a0b2*(165*b2\u00a0-\u00a0720*n2)))\nCALL\u00a0CBR(t)\nLET\u00a0cr2\u00a0=\u00a02\nCALL\u00a0CBR(cr2)\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0!\u00a0cb2\u00a0=\u00a0cubic\u00a0root\u00a0of\u00a02\nLET\u00a0q\u00a0=\u00a0(b\u00a0+\u00a0(b2\u00a0-\u00a05*n2)*cr2\/t\u00a0+\u00a0t\/cr2)\/3\u00a0!\u00a0q\u00a0->\u00a0pi\u00a0to\u00a0599\u00a0digits\nLET\u00a0t\u00a0=\u00a0q\/n\nLET\u00a0c\u00a0=\u00a0t*t\nLET\u00a0t\u00a0=\u00a0c*(329868000\u00a0+\u00a0c*(42226800\u00a0+\u00a0c*(4619230\u00a0+\u00a0481213*c)))\nLET\u00a0u\u00a0=\u00a01164240000\u00a0+\u00a0t\nLET\u00a0p\u00a0=\u00a01\/4656960000*(a*(1164240000\u00a0+\u00a0t)\u00a0+\u00a0SQR(a*(-9313920000*b*(-1164240000\u00a0+\u00a0t)\u00a0+\u00a0a*u*u)))\nPRINT\u00a0TIME\u00a0-\u00a0s;\"seconds\"\nPRINT\u00a0p\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0!\u00a0p\u00a0->\u00a0pi\u00a0to\u00a0996\u00a0digits\nEND\n\n.91\u00a0seconds\n\n3.141592653589793238462643383279502884197169399375105820974944592307816406286208\u200b9986280348253421170679\n82148086513282306647093844609550582231725359408128481117450284102701938521105559\u200b64462294895493038196\n44288109756659334461284756482337867831652712019091456485669234603486104543266482\u200b13393607260249141273\n72458700660631558817488152092096282925409171536436789259036001133053054882046652\u200b13841469519415116094\n33057270365759591953092186117381932611793105118548074462379962749567351885752724\u200b89122793818301194912\n98336733624406566430860213949463952247371907021798609437027705392171762931767523\u200b84674818467669405132\n00056812714526356082778577134275778960917363717872146844090122495343014654958537\u200b10507922796892589235\n42019956112129021960864034418159813629774771
309960518707211349999998372978049951\u200b05973173281609631859\n50244594553469083026425223082533446850352619311881710100031378387528865875332083\u200b81420617177669147303\n59825349042875546873115956286388235378759375195778185778053217122680661300192787\u200b66111959092164202\n \u00ab Next Oldest | Next Newest \u00bb\n\nUser(s) browsing this thread: 1 Guest(s)","date":"2019-12-15 07:47:03","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.4354645907878876, \"perplexity\": 126.80430926151622}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-51\/segments\/1575541307797.77\/warc\/CC-MAIN-20191215070636-20191215094636-00024.warc.gz\"}"} | null | null |
\section{Euler equation for two-dimensional CFTs}
\label{app2dcft}
Examples of large-$N$ field theories are $2d$ modular invariant CFTs with large central charge $c$. The microcanonical entropy for these theories is given by the Cardy formula (setting $c_L = c_R = c$)~\cite{Cardy:1986ie}
\begin{equation} \label{cardy}
S (E_L, E_R, c) = 2 \pi \sqrt{\frac{c}{6} E_L} + 2 \pi \sqrt{\frac{c}{6}E_R} ,
\end{equation}
with $E_{L,R}$ the left- and right-moving energies. On a circle of length $V=2\pi R$, the total energy and angular momentum are, respectively, $E = (E_L + E_R )/R$ and $J = E_L - E_R$. The Cardy formula holds for CFTs with a sparse light spectrum in the regime $C \to \infty$ with $E R \ge C$ \cite{Hartman:2014oaa}, where we normalized the central charge (conjugate to~$\mu$) as $C=c/12$. If we view the entropy \eqref{cardy} as the function $S= S(E, V,J, C)$, then
the fundamental variational equation of thermodynamics, with $\nu^i dB_i = \Omega dJ$, follows by taking partial derivatives of the entropy function. Consequently, the products of thermodynamic quantities are
\begin{equation}
\begin{aligned} \label{2dcftdict}
T S &= \frac{4}{R} \sqrt{E_L E_R} ,\qquad \, \qquad p V = E, \\
\Omega J&= E - \frac{2}{R} \sqrt{E_L E_R}, \qquad \mu C = -\frac{2}{ R} \sqrt{E_L E_R } ,
\end{aligned}
\end{equation}
where $\Omega$ is the angular potential.
They satisfy the relation
\begin{equation} \label{eq:euler2d}
E = T S + \Omega J + \mu C .
\end{equation}
Hence, the large-$N$ Euler equation indeed holds for $2d$ CFTs. In fact, the Euler relation splits up into two separate equations, $E = \Omega J - \mu C $ and $TS =- 2 \mu C $.
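As a quick consistency check, the products in \eqref{2dcftdict} indeed sum to the energy:
\begin{equation*}
T S + \Omega J + \mu C = \frac{4}{R} \sqrt{E_L E_R} + \left( E - \frac{2}{R} \sqrt{E_L E_R} \right) - \frac{2}{R} \sqrt{E_L E_R} = E .
\end{equation*}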
In AdS$_3$ gravity the Smarr formula for the outer horizon of a BTZ black hole is given by $0 = TS + \Omega J - \Theta \Lambda / 4 \pi G$ \cite{Frassino:2015oca}. Comparing this to the Euler equation \eqref{eq:euler2d} we find that the chemical potential must correspond to $\mu C = E - \Theta \Lambda/ 4\pi G$. Using the holographic dictionary for the central charge $c=3L/2G$~\cite{Brown:1986nw,Strominger:1997eq}, it can be shown that the chemical potential is dual to
$
\mu \!
= \!- ( r_+^2 - r_-^2)/ ( L^2 R) ,
$
where $r_\pm$ are the outer and inner horizon radii of the rotating BTZ black hole.
Notably, $\mu$ vanishes for extremal black holes, i.e. for $r_+ = r_-$ or $ E R = |J|$, which correspond to CFT states with $E_L=0$ or $E_R=0$.
\section{The extended first law of entanglement}
In this appendix we compare our chemical potential for AdS black holes to the chemical potential in the extended first law for entanglement entropy of ball-shaped regions in the CFT vacuum \cite{Blanco:2013joa,*Wong:2013gua,Kastor:2014dra}. This CFT first law takes the form
\begin{equation}
d\bar E = \bar T dS_{\text{ent}} + \bar \mu dC,
\end{equation}
where $\bar E$ denotes the modular Hamiltonian expectation value, $S_{\text{ent}}$ is the vacuum entanglement entropy of the ball-shaped region and $C$ is the universal coefficient of the entanglement entropy (commonly denoted as $a^*_d$) \cite{Myers:2010xs,*Myers:2010tj,Caceres:2016xjz,Rosso:2020zkk}. The CFT first law is dual to the first law of static hyperbolic AdS black holes which are isometric to pure AdS space \cite{Casini:2011kv,Faulkner:2013ica,Emparan:1999gf}, a special case of the black holes considered in the main text, with $J=Q=0$. The boundary first law follows from reformulating our fundamental variational equation in terms of dimensionless quantities $\bar E = M L, \bar T = \kappa L / 2\pi$, and $ \bar \mu = \mu L $. The volume variation drops out of the first law, since it is a dimensionful quantity.
In the vacuum $\bar E=0$, hence the chemical potential reduces to $\bar \mu = - \bar T S_{\text{ent}} / C$, which agrees with the results in \cite{Kastor:2014dra} (where the temperature was normalized as $\bar T=1$).
\section{Euler equation in flat space}
\label{appA}
In flat spacetime, static equilibrium states satisfy the standard thermodynamic Euler equation,
\begin{equation}
E = T S + \nu^i B_i - p V ,
\end{equation}
which is often formulated instead in terms of densities since $V$ is infinite. Note that the energy is purely extensive in this formula, since it satisfies $E (\alpha S, \alpha B_i, \alpha V) = \alpha E (S, B_i , V)$. This Euler relation applies in particular to conformal and Lifshitz theories on the plane (see e.g.~\cite{Natsuume:2014sfa,Taylor:2015glc}). It is not immediately clear why this equation is consistent with the large-$N$ Euler equation; in this appendix we therefore explain the relation between the two for Lifshitz scale-invariant theories.
Anisotropic scaling symmetry $\left \{ t, x^i \right \} \to \left \{\zeta^z t, \zeta x^i \right \}$ with dynamical scaling exponent $z$ implies that the product $T R^z $ is Lifshitz scale invariant, where $R$ is the curvature radius of the compact space, such as a sphere. Therefore, for Lifshitz theories with positive~$z$ the infinite-volume limit $R \to \infty$ is effectively the same as $T \to \infty$, so on the plane these theories are essentially always in the high-temperature deconfining phase.
In this limit, the energy scales as $E \sim T^{ \frac{d-1 + z }{z} }$ and entropy and conserved quantities as $S, B_i \sim T^{ \frac{d-1 }{z} }$, so the scaling relation is $E(\alpha^{ \frac{d-1}{z} } S, V, \alpha^{\frac{ d-1}{z} } B_i, C) = \alpha^{\frac{d-1 + z }{z} }E (S, V, B_i , C)$. This imposes the condition $(d-1 +z )E = (d-1 )(TS + \nu^i B_i)$, which in combination with the large-$N$ Euler equation yields $z E = - (d-1 )\mu C$. We can now compare this to the Lifshitz equation of state $z E = (d-1 ) p V$, which is a consequence of the anisotropic scaling relation $E (S, \alpha^{d-1 } V, B_i, C) = \alpha^{-z } E(S, V, B_i,C)$. As a result, we find $\mu C = - pV$ as $V \to \infty$, turning the large-$N$ Euler equation into the standard one. The same argument works for conformal theories (by setting $z=1$), hyperscaling violating theories and possibly other large-$N$ theories.
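To spell out the elimination step: substituting the large-$N$ Euler equation $E = TS + \nu^i B_i + \mu C$, i.e. $TS + \nu^i B_i = E - \mu C$, into the high-temperature scaling condition gives
\begin{equation*}
(d-1+z) E = (d-1) \left( E - \mu C \right) \qquad \Longrightarrow \qquad z E = -(d-1) \mu C ,
\end{equation*}
and combining this with the Lifshitz equation of state $z E = (d-1) p V$ yields $\mu C = - p V$.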
Notably, the standard Euler equation only applies in the infinite-volume limit of large-$N$ theories. The large-$N$ Euler relation, on the other hand, also holds at finite temperature on compact spaces for holographic field theories and $2d$ sparse CFTs (but not for generic CFTs).
\section{Holographic derivation of the Smarr formula for Lifshitz black holes}
In this appendix we derive the Smarr formula for charged Lifshitz black holes \cite{Tarrio:2011de}, with curvature radius $L$ and scaling exponent $z$, from the holographic Euler equation and the dictionary for the thermodynamic quantities involved. We put the dual Lifshitz field theory on a spatial geometry of curvature radius~$R$. Our derivation generalizes section 2.3 of \cite{Karch:2015rpa} to $R \neq L$ and $z \neq 1$. Our aim is to prove that even for Lifshitz black holes the boundary pressure and its equation of state are not necessary as input for deducing the Smarr formula (although they are in the special case $R = L$ considered in \cite{Karch:2015rpa}).
The holographic dictionary for Lifshitz black holes reads
\begin{align} \label{dictionarylifshitz}
E = M \frac{L}{R^z}, \quad T = \frac{\kappa}{2\pi} \frac{L}{R^z}, \quad \tilde \Phi = \frac{\Phi}{R^z}, \quad \tilde Q = Q L .
\end{align}
Note that the factors of $R$ and $L$ are chosen such that the products $E R^z $, $T R^z$ and $\tilde \Phi R^z$ are Lifshitz scale invariant (see Appendix \ref{appA}).
First, we express the $\Lambda$ term in the Smarr formula in terms of the boundary energy $E$
\begin{equation} \label{lifshitzkillingvolume}
- \frac{\Theta \Lambda}{4 \pi G} = L \left ( \frac{\partial M }{\partial L} \right)_{\!\!A, Q, G} \!\!= R^z \left ( \frac{\partial E}{\partial L } \right)_{\!\!A, Q, G} - E \frac{R^z}{L} .
\end{equation}
The strategy is to show that the right-hand side satisfies the Smarr formula. Note that the bulk quantities $A, Q$ and $G$ are fixed in the partial derivative with respect to $L$.
The boundary energy depends on these bulk quantities as follows
\begin{equation} \label{energyfunction}
E = E(S(A,G), \tilde Q (L,Q), V(R), C (L, G)).
\end{equation}
Note that $J=0$. The partial derivative is hence given by
\begin{align} \label{intermediatestep}
\left ( \frac{\partial E}{\partial L } \right)_{\!\!A, Q, G} \!\! &= \left( \frac{\partial E}{\partial \tilde Q} \right)_{\!\!S, V, C}\!\! \left ( \frac{\partial \tilde Q}{\partial L} \right)_{\!\!Q} \!\! +\left ( \frac{\partial E}{\partial C} \right)_{\!\! S, V, \tilde Q} \left ( \frac{\partial C}{\partial L} \right)_{\!\!G} \nonumber \\
&= \frac{1}{L} \! \left ( \tilde \Phi \tilde Q +(d-1) \mu C \right).
\end{align}
In the second line we used $\tilde Q = QL$ and $C \sim L^{d-1}/G$, and we recognized the definitions of the electric potential $\tilde \Phi$ and chemical potential $\mu$ (see \eqref{defintensive} in the main text).
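Explicitly, since $\tilde Q = Q L$ and $C \propto L^{d-1}/G$, the two bulk derivatives evaluate to
\begin{equation*}
\left( \frac{\partial \tilde Q}{\partial L} \right)_{\!Q} = Q = \frac{\tilde Q}{L} , \qquad \left( \frac{\partial C}{\partial L} \right)_{\!G} = \frac{(d-1)\, C}{L} ,
\end{equation*}
which is the origin of the overall factor of $1/L$ in \eqref{intermediatestep}.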
Thus, we find
\begin{align} \label{smarrderivation}
- \frac{\Theta \Lambda}{4 \pi G} &= \frac{R^z}{L} \! \left ( \tilde \Phi \tilde Q +(d-1) \mu C -E \right) \\
&= \frac{R^z}{L} \! \left ( (d-2) E - (d-1) TS - (d-2) \tilde \Phi \tilde Q \right) . \nonumber
\end{align}
Finally, by inserting the holographic dictionary \eqref{dictionarylifshitz} we recover the Smarr formula. Note that the Smarr formula for Lifshitz black holes does not involve $z$ and is hence the same as for black holes in Einstein gravity \cite{Brenna:2015pqa}. Crucially, the holographic Euler equation was employed in the second line of \eqref{smarrderivation} and is therefore dual to the Smarr formula, as pointed out in \cite{Karch:2015rpa}. We emphasize that the boundary pressure does not play a role in this derivation, whereas the chemical potential $\mu$ does. For $R=L$ the pressure does feature in the derivation, since in that case the boundary volume depends on the bulk radius, i.e. $V (L)$, which yields an extra term $-(d-1) pV / L$ in \eqref{intermediatestep}. But ultimately the result in \eqref{smarrderivation} remains the same, since this pressure term cancels, due to the Lifshitz equation of state, against a new term $ z E L^{z-1} $ on the right side of \eqref{lifshitzkillingvolume}.
\section{The renormalized holographic Euler equation}
\label{appC}
In the main text the energy was defined with respect to the ground state, so the vacuum energy was effectively set to zero. However, CFTs on a curved background exhibit the Casimir effect, which implies that the ground state could have non-vanishing energy. In AdS/CFT the ground-state energy can be computed with the method of holographic renormalization, by regularizing the gravitational action with local counterterms at the boundary \cite{Henningson:1998gx,Balasubramanian:1999re}. In this appendix we derive the renormalized holographic Euler equation for static vacuum AdS black holes, and find that the ground-state energy contributes a constant term to the chemical potential.
We consider static, vacuum asymptotically AdS black holes with hyperbolic, planar and spherical horizons \cite{Birmingham:1998nr}
\begin{equation}
ds^2 = -f_k (r) dt^2 + \frac{dr^2}{f_k(r)} + r^2 d \Omega_{k,d-1}^2,
\end{equation}
where
\begin{equation}
f_k (r) = k + \frac{r^2}{L^2} - \frac{16 \pi G M }{(d-1) \Omega_{k,d-1}r^{d-1}} .
\end{equation}
For $k=1$ the unit metric $d \Omega_{k,d-1}^2 \!$ is the metric on a unit $S^{d-1}\!$ sphere, for $k=0$ it is the dimensionless metric $\frac{1}{L^2} \!\sum_{i=1}^{d-1} dx_i^2 $ on the plane $\mathbb R^{d-1}$, and for $k=-1$ the unit metric on hyperbolic space $H^{d-1}$ is $ d u^2 + \sinh^2 u d \Omega_{k=1,d-2}^2$.
The mass parameter $M$ is related to the horizon radius $r_+$ via
\begin{equation}
M = \frac{(d-1) \Omega_{k,d-1} r_+^{d-2}}{16 \pi G } \left ( \frac{r_+^2}{L^2} + k \right).
\end{equation}
According to the GKPW prescription in AdS/CFT \cite{Gubser:1998bc,Witten:1998qj} the CFT metric is identified with the boundary metric of the dual asymptotically AdS spacetime up to a Weyl rescaling, i.e. $g_{\text{CFT}}= \lim_{r \to \infty}\lambda^2 (x)g_{\text{AdS}} $ where $\lambda(x)$ is a Weyl scale factor. As $r\to \infty$ the boundary metric
approaches
\begin{equation}\label{asymptads}
ds^2 = -\frac{r^2}{L^2} dt^2 + \frac{L^2}{r^2} dr^2 + r^2 d\Omega_{k,d-1}^2.
\end{equation}
A common choice of Weyl factor is $\lambda =L /r$, so that the CFT metric becomes $- dt^2 + L^2 d \Omega_{k,d-1}^2$. The boundary curvature radius is then equal to the AdS radius and the volume is $V = \Omega_{k,d-1} L^{d-1}/(d-1)$. Moreover, the CFT time is the same as the global AdS time $t$, which implies that the CFT energy $E$ can be identified (up to a constant) with the ADM mass~$M$, the conserved charge associated to time $t$ translations.
The temperature, entropy and energy of the black holes are
\begin{align} \label{appenergy}
T &=\frac{d \, r_+^2 + k (d-2)L^2}{4 \pi L^2 r_+}, \qquad
S= \frac{\Omega_{k,d-1}r_+^{d-1}}{4 G}, \\
E_{\text{ren}} &= \frac{(d-1) \Omega_{k,d-1} L^{d-2}}{16 \pi G } \left ( \frac{r_+^d}{L^d} + k \frac{r_+^{d-2}}{L^{d-2}} + \frac{2 \epsilon_k^0}{d-1} \right). \nonumber
\end{align}
The energy was derived from the renormalized boundary stress-energy tensor in \cite{Balasubramanian:1999re} and from the on-shell Euclidean gravitational action with counterterms in \cite{Emparan:1999pm}. The resulting energy, $E_{\text{ren}} = M + E_k^0 $, differs from the mass parameter by a constant term, the Casimir energy of the dual field theory
\begin{equation}
E_k^0 = \frac{\Omega_{k,d-1}L^{d-2}}{8\pi G} \epsilon_k^0 ,
\end{equation}
with $\epsilon^0_k =0$ for odd $d$ and equal to \cite{Emparan:1999pm}
\begin{equation}
\epsilon_k^0 = (-k)^{d/2} \frac{(d-1)!!^2}{d!} \qquad \text{for even $d$}.
\end{equation}
For instance, $\epsilon_k^0=-k/2$ for $d=2$ and $\epsilon_k^0=3 k^2 /8$ for $d=4.$ The renormalized version of the Smarr formula reads
\begin{equation}
E_{\text{ren}} = \frac{d-1}{d-2} TS - \frac{1}{d-2}\frac{ \Theta_{\text{ren}} \Lambda}{ 4\pi G},
\end{equation}
with a new (counterterm subtracted) Killing volume
\begin{equation}
\Theta_{\text{ren}} = - \frac{\Omega_{k,d-1}}{d} \left ( r_+^d - \frac{d-2}{ d-1 } L^d \epsilon_k^0 \right).
\end{equation}
The holographic Euler equation still takes the form
\begin{equation}
E_{\text{ren}} = T S + \mu_{\text{ren}} C,
\end{equation}
since the Casimir energy is also proportional to the central charge, which we normalize here as $C = \Omega_{k,d-1}L^{d-1}/16 \pi G$.
But the chemical potential is not given by equation \eqref{chemicalstatic} in the main text anymore, since it receives a constant contribution from the vacuum energy
\begin{equation} \label{renormchem}
\mu_{\text{ren}} = - \frac{r_+^{d-2}}{L^{d-1}} \left(\frac{r_+^2}{L^2}- k \right)+ \frac{2}{L} \epsilon_k^0 .
\end{equation}
For $d=2$ we find the chemical potential $\mu_{\text{ren}} = - r_+^2 /L^3 $, which agrees with the expression found in Appendix \ref{app2dcft} (for $r_-=0$ and $R = L$).
For planar black holes ($k=0$) or very large hyperbolic or spherical black holes (with $r_+ \gg L$), the Casimir energy is effectively zero and hence there is no distinction between the renormalized energy and the vacuum-subtracted energy. As can be seen from \eqref{appenergy} and \eqref{renormchem}, there are additional thermodynamic relations for these black holes
\begin{equation}
E = -(d-1) \mu C \qquad \text{and} \qquad T S = - d\, \mu C,
\end{equation}
consistent with the infinite-volume limit of Appendix \ref{appA}.
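For instance, for planar black holes ($k=0$, so $\epsilon_k^0 = 0$) one reads off from \eqref{appenergy} and \eqref{renormchem} that
\begin{equation*}
E = \frac{(d-1)\, \Omega_{0,d-1}\, r_+^{d}}{16 \pi G L^{2}} , \qquad T S = \frac{d\, \Omega_{0,d-1}\, r_+^{d}}{16 \pi G L^{2}} , \qquad \mu C = - \frac{\Omega_{0,d-1}\, r_+^{d}}{16 \pi G L^{2}} ,
\end{equation*}
from which both relations are immediate.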
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 5,924 |
{"url":"https:\/\/turpin.dev\/tags\/maths\/","text":"## Mandelbrot\n\n### In Python\n\nGodbolt does Python! (And lots of other languages\u2026) import cmath # Set how far to zoom into the set, you can tweak this value scale = 25 # Calculate if each point in an x-y plane \"escapes\" the shape for y in range(-12, 12): for x in range(-55, 45): # Create a complex number based on current coordinates # Note the fudge factor of 2 because characters are taller than high c = complex(x \/ scale, y \/ scale * 2) # Initialise the calculation z = 0 # Check if calculation remains bounded (stays on the page) for _ in range(20): # THE IMPORTANT BIT! [Read More]\n\n## What does it all mean?\n\nMeans Mean: the average value of a data set Median: the middle of a data set, the number that splits a data set in two; requires sorting; median of an even number of elements is the mean of the middle two elements Mode: value in a data set that occurs most often; if a data set has unique numbers there is no mode; if there are equal numbers of some elements there are multiple modes Random variables Discrete: finite numbers of values Continuous: infinite number of values Probability density functions P(|Y - 2| < . [Read More]\n\n## The Fibonacci Sequence\n\n### In different languages\n\nBash #!\/bin\/bash function fibonacci { local n=$1 [[$n == 0 ]] && echo $n && return [[$n == 1 ]] && echo $n && return local x=$(fibonacci $((n - 1))) local y=$(fibonacci $((n - 2))) echo$((x + y)) } echo -e sh\\\\t\\$(fibonacci 14) C #include <stdio.h> unsigned int fibonacci(const unsigned int); int main() { printf(\"c\\t%d\\n\", fibonacci(14)); return 0l; } unsigned int fibonacci(const unsigned int n) { return ( n < 2 ? 
[Read More]","date":"2021-11-29 02:45:20","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.2769711911678314, \"perplexity\": 2904.8825228050255}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 5, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-49\/segments\/1637964358685.55\/warc\/CC-MAIN-20211129014336-20211129044336-00550.warc.gz\"}"} | null | null |
Q: Empty value after MySQL UPDATE in PHP. I did a lot of research here and on Google, but I cannot find an answer to this problem.
I update a field in a MySQL database with following code:
public function registerPubKey() {
$stmt = $this->cn->prepare('UPDATE sb_user SET pubkey= ? WHERE email= ?');
$exres = $stmt->execute(array($this->info["pubkey"], $this->info["email"]));
if ($exres == false) {
$resultArray["result"] = "Error registering public key";
echo json_encode($resultArray);
exit;
}
$resultArray["result"] = "success";
echo json_encode($resultArray);
}
I'm sure everything works, except that the field in the database ends up empty. I dumped the private variable $info and it contains the pubkey (pubkey is a base64 string).
I noticed that if I change the UPDATE query to an INSERT, the value is inserted correctly!
A: It's likely because you're trying to UPDATE non-existent rows. Try using INSERT ... ON DUPLICATE KEY UPDATE instead; see the INSERT ... ON DUPLICATE KEY UPDATE syntax in the MySQL manual. An UPDATE succeeds but affects zero rows if no matching row exists.
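For illustration (assuming `email` is declared as a UNIQUE or primary key on `sb_user`, which the question doesn't state), the upsert form would be:

```sql
-- Insert the row if the email is new; otherwise update its pubkey.
INSERT INTO sb_user (email, pubkey)
VALUES (?, ?)
ON DUPLICATE KEY UPDATE pubkey = VALUES(pubkey);
```

This matches the observed behavior: the INSERT path works, while a plain UPDATE silently matches zero rows when the email isn't present yet.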
A: I ran into a similar issue and validated that:
* the row existed, and
* the execute parameters were valid and correct
The PDO::errorInfo() function can provide insight into what's actually happening to cause the update to fail:
if (! $stmt->execute($params) ) {
$resultArray["result"] = print_r($stmt->errorInfo(), true);
}
In my case, I got the message The user specified as a definer ('user'@'172.20.%.%') does not exist. Since this was a database snapshot restored to a different subnet, the error message makes sense and the user in-fact did not exist.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 4,420 |
I used an IO die for the graphic background, cutting it from orchid cardstock and placing it on a white card base. Next I used a hand-crocheted medallion made by Jessi Fogan that I won in some blog candy quite a while ago. The flower is a hydrangea from an MFP stamp set, colored with Copics and heat-embossed with clear ep, then fussy cut and popped up with dimensionals. I added an MFT mini tag cut from a snippet of white that I colored with a matching green Copic marker. The center was stamped with another MFP sentiment, and the ribbon came from a bouquet of flowers I received for my birthday recently.
Thanks for stopping by and I wish you a great Wednesday!
Beautiful!!!!! I love that grid die in the background and gorgeous bouquet of flowers. Outstanding coloring and it's a perfect fit for the crochet circle. Yes, yes, yes!!! LOVE!
A beautiful card Carol, such a lovely image and lovely crochet medallion - so pretty.
That IO background is outstanding, Carol! So is you coloring. Both add such a special flair to the creation.
This is SO pretty..love the design and colors..beautiful bloom!
Oh, wow!!! Fantastic design with this adorable hand-crocheted medallion and the lovely flower!!! Love the cute tag!!! This card is just amazing and so different! Very well done!!
Your card is beautiful. I love the orchid hydrangea and the crocheted medallion was the perfect embellishment to showcase it.
Oh my gosh!! I'm loving the crochet circle on the back of your beautiful flower, which you colored so perfectly.
Beautiful card, I really like the die cut grid, and cut in a colour to match the flower is a great idea. Hydrangea flowers always seem to make a lovely card and yours is coloured so prettily.
Pretty purple hydrangea and I love it on the backgrounds you've chosen. The modern look of the cover frame looks so pretty with the delicate crochet of the lace on the doily. A lovely design. Also like the light purple and green combo. Very fresh and also soothing. Hugs & Happy wkend too.
Gorgeous card Carol - I love the purple and green together. That background die looks interesting - lots of it to stick down I guess?
WOW WOW WOW, very impressed with the crotchet, love your colouring and the design.
Beautiful, I love love love purple and green together!
Thanks for giving thanks with the crew at SHOPPING OUR STASH this week!!
This is absolutely beautiful Carol! From the crocheted doily to that neat looking die cut background.
Oh wow!! This is beautiful!
Beautiful flower & love the layers!
"Shop Talks"
The U.S. Embassy in Moscow, in coordination with the Forum for Cultural Engagement and the American Center in Moscow, has launched a series of virtual discussions to continue a cultural dialogue between Russians and Americans during a time of limited cultural performances and travel due to the effects of Covid-19.
What is required to achieve success as an artist in America?
The Shop Talks master class and lecture series highlight the creative process of renowned American artists from a wide variety of disciplines. The series features a theater artist who builds large-scale productions with puppets, a dancer/choreographer who also runs a full-time dance school, a Broadway music director/conductor/producer, a theater director who develops original musicals from the ground up, and a tap dancer inspired by Latin, Soul, Funk and Hip Hop. These celebrated virtuosos speak live from their homes about their careers and the challenges and surprises of virtual collaboration in this time of isolation.
All events are livestreamed on the YouTube channel of the American Center in Moscow on Wednesdays, 7:00-8:00 pm Moscow time / 12:00-1:00 pm U.S. Eastern time.
We invite you to join "Shop Talks" — a weekly series of conversations and masterclasses with American cultural performers and experts.
Join on YouTube
June 3, 2020 – Dan Hurlin
Award-winning American theater artist who develops large-scale theater productions with puppets.
In partnership with Obratsov Puppet Theater in Moscow; moderated by Tom Lee, American director, designer, puppet artist based in Chicago and New York.
June 10, 2020 – Calvin Booker
New York-based tap dancer, two-time winner of the legendary "Showtime @ the Apollo" and original cast member of the Broadway show FELA!
In partnership with Vortex Dance Theater.
June 17, 2020 – Kris Kukul
New York-based orchestrator, arranger, music director and composer. Currently music director and orchestrator for upcoming Warner Brothers musical Beetlejuice.
In partnership with Broadway Moscow Theater Company.
June 24, 2020 – Elizabeth Parkinson
Highly acclaimed dance performer; performed worldwide as principal dancer with various ballet companies, including the Joffrey Ballet, Feld Ballet, Peridance Contemporary Ballet Company.
In partnership with the Boris Shukin Theater Institute.
July 1, 2020 – Amanda Charlton
New York-based theater director and artistic producer. Artistic Associate and Director of the Professional Training Program at Williamstown Theater Festival for ten years, where she mentored and directed over 1,000 artists.
In partnership with GITIS Russian Institute of Theater Arts.
Russian Guy Renovates Apartments For Pensioners And War Veterans For Free
People, Social Issues - 8 months ago
Rokas Laurinavičius
BoredPanda staff
A 33-year-old from Russia is on a simple mission. To better the lives of pensioners and war veterans. The way he goes about it, however, is anything but simple. Anton Savchuk isn't a millionaire, he's a humble construction worker, but that doesn't stop him from picking his own pocket to renovate the homes of these sometimes forgotten citizens.
More info: Instagram
"When I was watching TV, I noticed how poorly some of the people who get medals or gifts from officials live," Anton told Bored Panda. "These people are living far from good and it really bothered me. I thought, 'How is this possible that those who served their motherland could live like this?' War veterans are remembered once a year, on the 9th of May (Victory Day). People with disabilities live in similar conditions. If my project lives on, I'll include struggling teachers and doctors as well."
47-year-old tram driver Polina Gennadievna's apartment before renovations
And after
"We renovated a home that belonged to a veteran of the Second World War which consisted of three different parts, a disabled person, a tram driver, and another veteran who fought in the Siege of Leningrad."
Valentina Isaevna is 71 years old and has been disabled since childhood. This is her apartment before Anton started working on it
And here it is after
"When I started the project, I had some money that I've saved up before. At first, I spent about $1,500 but then I realized that the work I was doing wasn't enough," Anton said. "The last renovation that I did was the most expensive one. I want to find somebody who could sponsor the work but for that, I probably need more followers and views. Currently, I'm not sure if I will be able to continue my project or not because it relies on my money that is quickly running out."
95-year-old war veteran Vasily Fedorovich and his wife Tamara Aleksandrovna have been together over 60 years. Here's how their apartment looked when Anton visited it
Anton has a few people who help him. The three of them usually take anywhere from 2 to 3 weeks to complete one renovation. "We paid 99% for the first three renovations," he continued. "Then, some people started sending donations but there were very few of them. We even got a couple of donations from the USA, Canada, and Europe and even though these were rare cases, it really struck me."
During renovations
Anton said he never thought this could've happened. So far, however, the donations don't make up a substantial amount. "Usually, people send some money for work supplies, coffee, or tea (they say so in the messages), but even though it's mostly a few dollars, it's incredibly nice. Every little bit helps and inspires me to continue. It makes me realize I'm not alone in my quest."
Tatyana Ilyinichnam who survived the blockade of Leningrad, will soon celebrate her 80th birthday. Here's her place before the renovation
To support Anton, follow him on Instagram or donate via PayPal to savant00@yandex.ru
Author, BoredPanda staff
Rokas is a writer at Bored Panda with a BA in Communication. After working for a sculptor, he fell in love with visual storytelling and enjoys covering everything from TV shows (any Sopranos fans out there?) to photography. Throughout his years in Bored Panda, over 235 million people have read the posts he's written, which is probably more than he could count to.
C 8 months ago
How kind of him. Hopefully some local businesses will start to chip in to help with costs
Olesia Kovalenko 8 months ago
Waxier Cereal Ai 8 months ago
Sad truth of today's society.
Grumble O'Pug 8 months ago
DP von Icecream 8 months ago (edited)
A cluttered and/or broken house could mean a cluttered AND broken mind. Even a small thing like rearranging your home a tiny bit, or a fresh coat of paint once every decade (or every couple of years), can make such a huge difference, especially for veterans and/or for people struggling with burnouts and/or mental health issues.
Just hush. Russia coughed up more sacrifice in WW2 than the whole GLOBE. Read a history book about how challenging it was to live there before the wall came down and capitalism rushed in. Honestly, the comments people make with such little understanding.
The Cappy 8 months ago
And yet, the destruction of lives and of the human spirit was caused in far larger part by Stalin. Russia had concentration camps by or before 1920, and was arresting and killing its citizens in breathtaking numbers. Soldier who fought bravely in WW2 were then arrested once they returned, accused of being spies; sent to Siberian prisons, and often worked to death. When their sentences were up, they were simply given new sentences.
Full Name 8 months ago
Your comment has nothing to do with what DP von said. Sometimes you just have to be an asshole in your posts and I don't get it.
Norsman 8 months ago
Nobody doubts the sacrifice! Take a chill pill, the guy is trying to say that by this action of charity Anton is helping the elderly, or anyone else for that matter avoid any mental implications people might get for living in such an environment, depression for instance. That's all the DP v ICream was trying to say... jeez...
@grumble: not sure if you were commenting on me, or the ones commenting on my original text. @cappy: and so did Mao in China, and so did the Dutch in Indonesia, and so did the Belgians in Congo, and so did the Aussies with Aboriginals, and so did the Spanish in Latin America, and so did the Nazis in WW2, and so did the USA with the Native American tribes, and so did ISIS recently... this list goes on & on & on & on, an endless cycle of hate & ignorance. The only thing people can do is strive for better, for themselves, for entire nations, for entire continents & maybe for the entire globe... Some of my friends were on that MH17 flight... so basically I hate everything Putin stands for. BUT this still doesn't mean every Russian is rotten to their core, on the contrary!
Marina 8 months ago (edited)
Aunt Messy 8 months ago
In this case we're talking about extreme poverty, no access to medical care, and a government that doesn't give a rat's behind about the people it owes for its existence. ...///... Of course, Putin worshipper like you wouldn't care about that.
Monika Soffronow 8 months ago
Nor would the Trump dittos across the waters. More than 14 million children living below the poverty line in the US, homeless veterans, etc, etc. Growing inequality is an international scourge, as is the equally growing sense of entitlement of the increasingly wealthy few. I am afraid that Global Warming is not the only scary thing on the horizon. I hope that I am wrong.
Stop that. The poster is just naive. Most people don't understand the situation in the FSB. TV in the USA doesn't show anything like that.
@Cappy - Since when are you board Mommy? Come to think of it....I didn't even listen to my own mother, why would I bother with you?
Just ignore Grumble. He/she is often a fucking asshole for no reason in posts.
DP von Icecream 8 months ago
I don't get rocked that easily, I am too bloody old for that. That might be so, but that is still no reason to use foul language ;-)
@norsman, yes indeed, as many other people who actually understood my post: this got nothing to do with any politics or wars. For the sake of it: a new coat of paint, even with an income as low as an average Russian monthly income is feasible and just rearranging or deep-cleaning your home costs nothing. Extreme poverty = unfortunately the average Russian home. I am Dutch, highly educated, had a long standing corporate European Product Development career of 17 years in the highly complex world of ICT & had friends killed in the MH17 attack. Unfortunately I had the MOTHER-of-all-burnouts some years ago..... So I basically speak from some experience. The whole point is, this wonderful Russian gentleman is helping people with NO medical care coverage. @aunt messy, I think you have serious mental issues, hope you receive professional help & take daily medicine to combat your bipolar/borderline mood-swings. You actually might want to try a fresh new lick of paint, as previously said: a new lick of color, costs virtually nothing & could make all the difference.
I really do think Messy is bipolar which is why I slowed down responding to her intermittent angry idiocy. I just feel bad for her now, mostly.
Probably more BP visitors feel the same. That is the reason why I normally don't even bother about any of Messy's angry-frustrated-ways of communicating but when her vicious anger is pointed directly at my person, I will do it the favor to still respond like a gentleman. Disappointing to see she does not return the favor...... I initially also felt bad for her, but seeing her viciousness is increasing lately......
Oathbraker 8 months ago
@Aunt Messy, don't know why you were downvoted, but I find myself agreeing with you.
SupriyaG 8 months ago
You are doing great helping them. Hats Off!
use crate::common::CalcConsensus;
use bio::io::fastq;
use bio::stats::probs::LogProb;
use bio_types::sequence::SequenceRead;
use bio_types::sequence::SequenceReadPairOrientation;
use derive_new::new;
use itertools::Itertools;
use rust_htslib::bam;
use rust_htslib::bam::record::Aux;
use std::collections::{HashMap, HashSet};
use std::ops::BitOrAssign;
const ALLELES: &[u8] = b"ACGT";
pub fn get_umi_string(rec: &bam::record::Record) -> String {
    // Forward the RX (UMI) tag as a fastq description field, if present.
    match rec.aux(b"RX") {
        Ok(Aux::String(value)) => format!(" RX:Z:{}", value),
        _ => String::new(),
    }
}
#[derive(Eq, PartialEq)]
enum StrandObservation {
None,
Forward,
Reverse,
Both,
}
impl BitOrAssign for StrandObservation {
fn bitor_assign(&mut self, rhs: Self) {
if let StrandObservation::None = self {
*self = rhs;
} else if *self != rhs {
*self = StrandObservation::Both;
}
}
}
#[derive(new)]
pub struct CalcOverlappingConsensus<'a> {
recs1: &'a [bam::Record],
recs2: &'a [bam::Record],
r1_vec: &'a [bool],
r2_vec: &'a [bool],
seqids: &'a [usize],
uuid: &'a str,
read_ids: &'a mut Option<HashMap<usize, Vec<u8>>>,
}
impl<'a> CalcOverlappingConsensus<'a> {
pub fn calc_consensus(&self) -> (fastq::Record, LogProb) {
let seq_len = self.r1_vec().len();
let mut consensus_seq: Vec<u8> = Vec::with_capacity(seq_len);
let mut consensus_qual: Vec<u8> = Vec::with_capacity(seq_len);
let mut consensus_strand = b"SI:Z:".to_vec();
let read_orientations_opt = self.build_read_orientation_string();
let mut consensus_lh = LogProb::ln_one();
for i in 0..seq_len {
match (
self.recs1().len() == 1,
self.map_read_pos(i, self.r1_vec()),
self.map_read_pos(i, self.r2_vec()),
) {
(true, Some(base_pos), None) => {
let base = self.recs1()[0].seq().as_bytes()[base_pos];
consensus_seq.push(base);
consensus_qual.push(self.recs1()[0].qual()[base_pos] + 33);
consensus_lh += Self::overall_allele_likelihood(self, &base, i);
}
(true, None, Some(base_pos)) => {
let base = self.recs2()[0].seq().as_bytes()[base_pos];
consensus_seq.push(base);
consensus_qual.push(self.recs2()[0].qual()[base_pos] + 33);
consensus_lh += Self::overall_allele_likelihood(self, &base, i);
}
_ => {
let likelihoods = ALLELES
.iter()
.map(|a| Self::overall_allele_likelihood(self, a, i))
.collect_vec();
Self::build_consensus_sequence(
likelihoods,
&mut consensus_lh,
&mut consensus_seq,
&mut consensus_qual,
33.0,
);
}
};
self.build_consensus_strand(&mut consensus_strand, consensus_seq[i], i);
}
let name = if self.read_ids.is_some() {
Self::build_verbose_read_name(self.uuid(), self.seqids(), self.read_ids)
} else {
format!(
"{}_consensus-read-from:{}_reads",
self.uuid(),
self.seqids().len(),
)
};
if let Some(mut read_orientations) = read_orientations_opt {
consensus_strand.append(&mut read_orientations)
}
let umi = get_umi_string(&self.recs1()[0]);
let description = format!("{}{}", String::from_utf8(consensus_strand).unwrap(), umi);
let consensus_rec =
fastq::Record::with_attrs(&name, Some(&description), &consensus_seq, &consensus_qual);
(consensus_rec, consensus_lh)
}
fn recs1(&self) -> &[bam::Record] {
self.recs1
}
fn recs2(&self) -> &[bam::Record] {
self.recs2
}
fn r1_vec(&self) -> &[bool] {
self.r1_vec
}
fn r2_vec(&self) -> &[bool] {
self.r2_vec
}
fn build_consensus_strand(&self, consensus_strand: &mut Vec<u8>, ref_base: u8, pos: usize) {
let mut strand = StrandObservation::None;
let rec1_pos = self.map_read_pos(pos, self.r1_vec());
let rec2_pos = self.map_read_pos(pos, self.r2_vec());
let mut strand_observation = |recs: &[bam::Record], rec_pos: Option<usize>| {
if let Some(pos) = rec_pos {
recs.iter().for_each(|rec| {
if rec.base(pos) == ref_base {
match rec.is_reverse() {
true => strand |= StrandObservation::Reverse,
false => strand |= StrandObservation::Forward,
};
}
});
}
};
strand_observation(self.recs1(), rec1_pos);
strand_observation(self.recs2(), rec2_pos);
match strand {
StrandObservation::Forward => consensus_strand.push(b'+'),
StrandObservation::Reverse => consensus_strand.push(b'-'),
StrandObservation::Both => consensus_strand.push(b'*'),
StrandObservation::None => consensus_strand.push(b'.'),
}
}
fn build_read_orientation_string(&self) -> Option<Vec<u8>> {
let mut read_orientations_set: HashSet<_> = self
.recs1()
.iter()
.filter_map(|rec| match rec.read_pair_orientation() {
SequenceReadPairOrientation::F2F1 => Some(b"F2F1,"),
SequenceReadPairOrientation::F2R1 => Some(b"F2R1,"),
SequenceReadPairOrientation::F1F2 => Some(b"F1F2,"),
SequenceReadPairOrientation::R2F1 => Some(b"R2F1,"),
SequenceReadPairOrientation::F1R2 => Some(b"F1R2,"),
SequenceReadPairOrientation::R2R1 => Some(b"R2R1,"),
SequenceReadPairOrientation::R1F2 => Some(b"R1F2,"),
SequenceReadPairOrientation::R1R2 => Some(b"R1R2,"),
SequenceReadPairOrientation::None => None,
})
.collect();
let mut read_orientations_string = b" RO:Z:".to_vec();
read_orientations_set
.drain()
.for_each(|entry| read_orientations_string.extend_from_slice(entry));
match read_orientations_string.pop() {
Some(b',') => Some(read_orientations_string),
Some(b':') => None,
Some(_) => unreachable!(),
None => unreachable!(),
}
}
    fn map_read_pos(&self, consensus_pos: usize, alignment_vec: &[bool]) -> Option<usize> {
        if alignment_vec[consensus_pos] {
            // Read-local position = number of aligned (true) entries up to and
            // including this consensus position, minus one.
            Some(
                alignment_vec[..=consensus_pos]
                    .iter()
                    .filter(|&v| *v)
                    .count()
                    - 1,
            )
        } else {
            None
        }
    }
}
impl<'a> CalcConsensus<'a, bam::Record> for CalcOverlappingConsensus<'a> {
fn overall_allele_likelihood(&self, allele: &u8, pos: usize) -> LogProb {
let mut lh = LogProb::ln_one();
let rec1_pos = self.map_read_pos(pos, self.r1_vec());
let rec2_pos = self.map_read_pos(pos, self.r2_vec());
for (rec1, rec2) in self.recs1().iter().zip(self.recs2()) {
if let Some(pos) = rec1_pos {
lh += Self::allele_likelihood_in_rec(
allele,
&rec1.seq().as_bytes(),
rec1.qual(),
pos,
0,
);
};
if let Some(pos) = rec2_pos {
lh += Self::allele_likelihood_in_rec(
allele,
&rec2.seq().as_bytes(),
rec2.qual(),
pos,
0,
);
};
}
lh
}
fn seqids(&self) -> &'a [usize] {
self.seqids
}
fn uuid(&self) -> &'a str {
self.uuid
}
}
#[derive(new)]
pub struct CalcNonOverlappingConsensus<'a> {
recs: &'a [bam::Record],
seqids: &'a [usize],
uuid: &'a str,
read_ids: &'a mut Option<HashMap<usize, Vec<u8>>>,
}
impl<'a> CalcNonOverlappingConsensus<'a> {
pub fn calc_consensus(&self) -> (fastq::Record, LogProb) {
let seq_len = self.recs()[0].seq().len();
let mut consensus_seq: Vec<u8> = Vec::with_capacity(seq_len);
let mut consensus_qual: Vec<u8> = Vec::with_capacity(seq_len);
let mut consensus_strand = b"SI:Z:".to_vec();
        let mut cigar_map: HashMap<_, Vec<_>> = HashMap::new();
        for record in self.recs() {
            cigar_map
                .entry(record.raw_cigar())
                .or_insert_with(Vec::new)
                .push(record);
        }
// Potential workflow for different read lengths
// compute consensus of all reads with max len
// compute offset of all shorter reads
// pad shorter reads
// drop first consensus, compute consensus of full length reads and padded reads
// ignore padded bases for consensus computation
let mut consensus_lh = LogProb::ln_one();
for i in 0..seq_len {
// Maximum a-posteriori estimate for the consensus base.
// Find the allele (theta \in ACGT) with the highest likelihood
// given the bases at this position, weighted with their quality values
let likelihoods = ALLELES
.iter()
.map(|a| Self::overall_allele_likelihood(self, a, i))
.collect_vec(); //Check this. See below
Self::build_consensus_sequence(
likelihoods,
&mut consensus_lh,
&mut consensus_seq,
&mut consensus_qual,
33.0,
);
self.build_consensus_strand(&mut consensus_strand, consensus_seq[i], i);
}
let name = if self.read_ids.is_some() {
Self::build_verbose_read_name(self.uuid(), self.seqids(), self.read_ids)
} else {
format!(
"{}_consensus-read-from:{}_reads",
self.uuid(),
self.seqids().len(),
)
};
let umi = get_umi_string(&self.recs()[0]);
let description = format!("{}{}", String::from_utf8(consensus_strand).unwrap(), umi);
let consensus_rec =
fastq::Record::with_attrs(&name, Some(&description), &consensus_seq, &consensus_qual);
(consensus_rec, consensus_lh)
}
pub fn recs(&self) -> &[bam::Record] {
self.recs
}
fn build_consensus_strand(
&self,
consensus_strand: &mut Vec<u8>,
ref_base: u8,
current_pos: usize,
) {
let mut strand = StrandObservation::None;
self.recs().iter().for_each(|rec| {
if rec.base(current_pos) == ref_base {
match rec.is_reverse() {
true => strand |= StrandObservation::Reverse,
false => strand |= StrandObservation::Forward,
};
}
});
match strand {
StrandObservation::Forward => consensus_strand.push(b'+'),
StrandObservation::Reverse => consensus_strand.push(b'-'),
StrandObservation::Both => consensus_strand.push(b'*'),
StrandObservation::None => consensus_strand.push(b'.'),
}
}
}
impl<'a> CalcConsensus<'a, bam::Record> for CalcNonOverlappingConsensus<'a> {
fn overall_allele_likelihood(&self, allele: &u8, i: usize) -> LogProb {
let mut lh = LogProb::ln_one(); // posterior: log(P(theta)) = 1
for rec in self.recs() {
lh += Self::allele_likelihood_in_rec(allele, &rec.seq().as_bytes(), rec.qual(), i, 0);
}
lh
}
fn seqids(&self) -> &'a [usize] {
self.seqids
}
fn uuid(&self) -> &'a str {
self.uuid
}
}
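The maximum a-posteriori base call described in the comments of `calc_consensus` (pick the allele in ACGT with the highest quality-weighted likelihood) can be illustrated with a minimal, self-contained sketch. The names `allele_log_likelihood` and `consensus_base` are illustrative and not part of this crate, and plain `f64` log-likelihoods stand in for the crate's `LogProb` type:

```rust
// Minimal, self-contained sketch of maximum a-posteriori consensus-base
// calling. Hypothetical names; not part of the crate above.
const BASES: &[u8] = b"ACGT";

/// Log-likelihood of observing `base` with Phred quality `qual`
/// when the true allele is `allele`.
fn allele_log_likelihood(allele: u8, base: u8, qual: u8) -> f64 {
    let err = 10f64.powf(-(qual as f64) / 10.0); // Phred score -> error probability
    if allele == base {
        (1.0 - err).ln()
    } else {
        (err / 3.0).ln() // error spread uniformly over the other three bases
    }
}

/// Pick the allele maximizing the summed log-likelihood over one column
/// of (observed base, Phred quality) pairs.
fn consensus_base(column: &[(u8, u8)]) -> u8 {
    let total = |allele: u8| -> f64 {
        column
            .iter()
            .map(|&(base, qual)| allele_log_likelihood(allele, base, qual))
            .sum()
    };
    *BASES
        .iter()
        .max_by(|a, b| total(**a).partial_cmp(&total(**b)).unwrap())
        .unwrap()
}

fn main() {
    // Two high-quality 'A's outvote one low-quality 'C'.
    assert_eq!(consensus_base(&[(b'A', 30), (b'A', 30), (b'C', 5)]), b'A');
}
```

The real implementation additionally maps consensus positions onto read-local positions for overlapping mates and accumulates the overall likelihood; this sketch only shows the per-column argmax.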
\section{Introduction}
In particular, the chiral nature of the charge carriers in graphene is predicted to give rise to specular Andreev reflection~\cite{Beenakker2006}, and the conventional quantum Hall effect can be markedly different due to the interaction between edge states and the superconductor\cite{Hoppe2000,Chtchelkatchev2007}. Such systems also provide a unique way to probe valley-polarized edge states\cite{Akhmerov2007}, topological confinement in bilayer graphene~\cite{Martin2008a}, the interplay between superconductivity and quantum confinement, or ballistic two-dimensional Josephson junctions and their response to phase-coherent interference effects.
There are two important prerequisites that must be satisfied in order to observe any of these phenomena experimentally. First, the graphene-superconductor interface should be transparent and well defined. Second, the graphene must be of high electronic quality. In addition, for some of the above effects, a superconductor with a large upper critical field, $H_{c2}$, is required. While significant technological progress has been made in improving the quality of graphene by either suspending graphene\cite{Du2008a} or encapsulating it in hexagonal boron nitride (hBN)\cite{Dean2010,Wang2013}, the main challenge has been to combine such low-scattering graphene with a (large $H_{c2}$) superconductor. All reports on graphene-superconductor devices to date involved superconducting contacts deposited directly on the graphene surface, and diffusive transport through the device. In addition to the modest electronic quality, the use of top contacts leaves ambiguity in where exactly Andreev reflection takes place and under what spectral conditions; that is, it is not clear how far electrons travel beneath the contact before entering the superconductor.
\begin{figure}[!t]
\centering
\includegraphics{fig_DC}%
\caption{\textbf{High-quality hBN-Graphene-hBN devices.} \textbf{a.} An optical image of device $A$. A graphene/hBN sandwich (blue) is contacted on both sides from the edge with MoRe contacts (yellow). The contacts are split further in two, which allows a (quasi-) four probe measurement with minimal series lead resistance. \textbf{b-c.} A schematic cross-section of the device. \textbf{d.} The measured resistance, $R$, as a function of gate voltage, $V_{gate}$, at room temperature and at 4.2K. The carrier density, $n$, is extracted from Shubnikov-de-Haas oscillations. \textbf{e.} Differentiated conductance, $dG/dV_{gate}$, as a function of gate voltage and magnetic field, taken at 40 mK. \textbf{f} The conductance, $G$, as a function of gate voltage at $B = 12\ \mathrm{T}$ and $T= 40$ mK, showing the symmetry broken states.\label{fig:DC}}
\end{figure}
To realise high quality graphene-superconductor junctions we encapsulate graphene between two hBN crystals using the van der Waals pick-up method \cite{Wang2013}. This method ensures that the graphene is never in contact with any polymer during the stacking and thereafter. Electrical contact is made by metal deposition onto areas where the stack has been etched through. Unlike earlier work\cite{Wang2013}, where metal deposition is done in a separate lithography step, we start by etching only the region to be contacted, followed immediately by metal deposition. This has the following advantages: (i) our contacts are self-aligned, thereby minimizing redundant metal overlap above the graphene and reducing the screening of electric and magnetic fields and (ii) combining the etching and deposition in one step minimizes resist residues at the contact interface, which is necessary for transparent contacts. Instead of a normal metal, we sputter an alloy superconductor MoRe, which is attractive in several respects. First, MoRe is a type-II superconductor with a critical temperature $T_c\approx 8$~K and an upper critical field $H_{c2}\approx 8$~T (at 4.2~K), which should easily allow for the observation of quantum Hall states while the MoRe remains predominantly superconducting. Secondly, it has been shown that MoRe makes good electrical contact to carbon-based materials such as carbon nanotubes~\cite{Schneider2012}. Considering the fact that edge-contact resistance can vary by an order of magnitude depending on the choice of metal~\cite{Wang2013}, it is critical to select a superconductor which makes good electrical contact to graphene. This is particularly important in the context of superconductor (S) graphene (G) JJs, where the transparency of the S-G interface directly affects the Andreev reflection. 
Furthermore, unlike surface contacts, such one-dimensional edge contacts ensure that the Andreev reflection occurs at a well-defined location, at the edge of the graphene, where it contacts with the 3-dimensional bulk superconductor. After the deposition of the superconducting electrodes, we etch the stack into the desired geometry.
An optical image and a cross-sectional schematic of device $A$ are shown in Fig.~\ref{fig:DC}a-c. The graphene is etched to a $L=1.5\ \mathrm{\mu m}$ long and $W=2.0\ \mathrm{\mu m}$ wide rectangle, with MoRe edge contacts on either side. All measurements described here are performed in a (dc) four point geometry, as shown in Fig.~\ref{fig:DC}a. The MoRe leads are arranged such that the lead series resistance is minimized and the measured resistance is effectively the two-probe graphene resistance, irrespective of whether the MoRe is normal or superconducting. This is important since disordered superconductors such as MoRe have a large normal-state resistivity, potentially confusing the interpretation of the measurements when the electrodes turn normal (see SI).
Fig.~\ref{fig:DC}d shows the measured resistance, $R$, versus back gate voltage, $V_{gate}$, at room temperature and 4.2~K. A clear electron-hole asymmetry is visible, with the resistance in the hole doped ($p$) regime being somewhat larger than that in the electron doped ($n$) regime. We attribute this to contact-induced $n$-type doping, which leads to the formation of $pn$ junctions close to the contacts when the bulk of the graphene is $p$ doped. Such $n$-type doping effects from normal edge contacts have also recently been reported \cite{Maher2014}. Fig.~\ref{fig:DC}e shows the Landau fan diagram recorded up to $B=12\ \mathrm{T}$. The high electronic quality of the graphene is evident from the emergence of broken symmetry states above $B=5\ \mathrm{T}$, which are well developed at $B=12\ \mathrm{T}$ (Fig.~\ref{fig:DC}f). To our knowledge, this is the first observation of broken symmetry states in graphene with superconducting contacts. The plateaus on the electron side are better developed than those on the hole side, presumably a consequence of doping near the contacts.
\begin{figure}[!b]
\centering
\includegraphics{fig_SC}%
\caption{\textbf{Long distance Josephson current in edge-contacted graphene}. \textbf{a} The differential resistance, $dV/dI$, is plotted as a function of applied dc current bias, $I_{dc}$, and gate voltage, $V_{gate}$, at $60\ \mathrm{mK}$. \textbf{b} Critical current density, $J$, plotted as a function of device length, $L$. Squares are the side contacted MoRe graphene devices $A-E$ reported here. Black (red) squares correspond to a temperature of 50~mK (700~mK). More details about the temperature dependence can be found in the SI. Circles are data points taken from the literature \cite{Heersche2007, Du2008, Miao2009, Girit2009, OAristizabal2009, Kanda2010, Choi2010, Borzenets2011, Coskun2011, Lee2011, Rickhaus2012, Popinciuc2012,Komatsu2012, Mizuno2013, Voutilainen2011, JeongChoi2011, Borzenetz2013, Choi2013}. Colors indicate different superconductors used: Black circles refer to Al, green circles to Nb/NbN/NbTiN, blue ReW, red Pb/PbIn and yellow Pt/Ta.\label{fig:SC}}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics{fig_FP2}%
\caption{\textbf{Fabry P\'{e}rot resonances in a Josephson junction.} \textbf{a.} The differential resistance, $dV/dI$, is plotted as a function of the applied dc current bias, $I_{dc}$, and gate voltage, $V_{gate}$, at $T=550\ \mathrm{mK}$. At $60\ \mathrm{mK}$, statistical fluctuations in $I_{c}$ make the effect much less visible. Inset: A schematic of a cavity formed between $pn$-junctions due to doping near the contacts. Interference occurs due to reflections at the $pn$-junctions. \textbf{b.} The normal state conductance, $G_N$, and the critical current, $I_{c}$, plotted as a function of gate voltage, $V_{gate}$. \textbf{c.} (upper panel) The conductance measured as a function of the magnetic field and gate voltage with $I_{dc}=100\ \mathrm{nA}$. The dispersion of the Fabry-P\'{e}rot interferences follows a $\mathrm{4^{th}}$-order polynomial, see equation~\ref{eq:FPdispersion}, plotted in yellow. (lower panel) The simulated conductance for a cavity size of $L=1.3\ \mathrm{\mu m}$ and $W=2\ \mathrm{\mu m}$. \textbf{d.}(upper panel) The conductance $\delta G$ in the $npn$ regime as function of absolute wavenumber $|k_F|$ in the central part of the device ($|k_F|$ is determined from the carrier density). $\delta G_N$ is obtained after subtraction of a slowly varying background (see SI for details). In the lower panel we plot $\delta G_N$ in the $nn'n$ regime for the same wavenumber range. Here we attribute the fluctuations to UCF. \label{fig:FP}}
\end{figure*}
At zero magnetic field we observe a gate-tunable supercurrent through the device. In Fig.~\ref{fig:SC}a we plot the differential resistance, $dV/dI$, as a function of gate voltage, $V_{gate}$, and the current bias, $I_{dc}$. Evidently, the critical current, $I_{c}$, vanishes at the charge neutrality point, but reaches values in excess of 100~nA at $V_{gate}=30$~V. The individual $I_{dc}-V$ curves are hysteretic, as is evident from the asymmetry about $I_{dc}=0$ (we discuss the possible origins of this hysteresis in the SI). On the hole side $I_{c}$ is considerably smaller, consistent with the formation of the conjectured $npn$ junctions. In Fig.~\ref{fig:SC}b we plot the critical current density per unit length, $J$, versus the JJ length, $L$, with data obtained from previous reports of graphene JJs (circles) along with the present MoRe edge-contacted devices (squares). The black squares show the critical current density at $50\ \mathrm{mK}$, whereas the red squares are taken at $700\ \mathrm{mK}$. We point out that the critical current density depends on the temperature and the graphene carrier density, which vary from study to study. Despite this, it is clear that our MoRe edge-contacted devices stand out in relative magnitude compared to the previous data. We find large supercurrent densities (up to $\sim200\ \mathrm{nA/\mu m}$) over significantly longer distances ($\sim1.5\ \mathrm{\mu m}$). The observation of large supercurrents over an unprecedentedly long distance of 1.5~$\mu m$ indicates a high quality of both the graphene itself and of the 1D graphene-superconductor interfaces.
In addition, we find unambiguous signatures of ballistic Josephson transport in this 2D geometry. As shown in Fig.~\ref{fig:FP}a, we observe for the first time clear oscillations in the critical current and the retrapping current when we vary the gate voltage, indicative of Fabry-P\'{e}rot (FP) interference in the supercurrent through the junction. The transmission probability of the electrons and holes that carry the supercurrent is the result of interference of trajectories that travel ballistically from one contact to the other with multiple reflections close to or at the edges of the graphene flake. As the gate voltage is varied, the Fermi wavelength changes, and constructive and destructive interference alternate, leading to modulations in the critical current. One may expect the graphene-superconductor interfaces to form the walls of the cavity. However, we observe the $I_c$ oscillations only on the hole-doped side and not on the electron-doped side (see Fig.~\ref{fig:FP}d and the discussion below). This suggests that in the presence of $n$-doped regions near the MoRe-graphene interface the relevant cavity is instead formed by $pn$ junctions near the contacts (see inset of Fig.~\ref{fig:FP}a). This gives rise to a reduced cavity length $L_c$, which can be directly inferred from the period of the oscillations, extracted via a Fourier analysis (see SI) over many periods. A cavity length of $L_c=1.3\ \mathrm{\mu m}$ is found, which is smaller than the etched device length ($L=1.5\ \mathrm{\mu m}$). A similar difference between device size and inferred cavity length was seen in device $D$ (see SI). This difference may arise from screening of the back gate near the contacts in combination with the presence of the $n$-doped regions at the MoRe-graphene interfaces in both devices.
The interpretation of the oscillations in $I_c$ in terms of FP interference is further supported by comparing them with the oscillations in the normal state conductance, $G_N$, measured at currents just above $I_c$. The oscillations of $I_c$ with gate voltage clearly match the oscillations in $G_N$ (see Fig.~\ref{fig:FP}b), as expected for Josephson junctions. In the case of normal state transport, we can apply a weak magnetic field perpendicular to the graphene, exerting a Lorentz force on the trajectories of electrons and holes. This is expected to give a characteristic shift of the FP resonances due to the accumulation of extra field-dependent phases. Indeed, in the measurements shown in Fig.~\ref{fig:FP}c, we find that as $B$ increases the main resonance features shift to higher density, following a characteristic dispersion. To enhance the visibility, we plot the quantity $G_{sub}$, which was obtained after subtracting a gate-dependent (but field-independent) modulation of the background conductance. We compare the data with the results of numerical simulations of the device conductance (see Methods for further details) in the ballistic regime and with $npn$ junctions for the exact geometry of the measured device (Fig.~\ref{fig:FP}c, lower panel). Simulation and experiment show an almost identical dispersion of the FP resonances with magnetic field. It is also possible to obtain a semiclassical expression for the resonance condition (see SI) by considering all the phases accumulated in the n-region of the $npn$ junction:
\begin{equation}
\frac{L_c}{\lambda_F(V_{gate})} = n_m +\frac{1}{2} + \frac{1}{6n_m} \left(\frac{L_c^2 e B}{h}\right)^2,
\label{eq:FPdispersion}
\end{equation}
with $n_m$ a specific integer mode, $\lambda_F(V_{gate})$ the Fermi wavelength, which is tuned by the backgate ($V_{gate}\sim1/\lambda_F^2$), $L_c$ the cavity size, $e$ the electron charge and $h$ Planck's constant. The yellow curves in Fig.~\ref{fig:FP}c are calculated using Eq.~\ref{eq:FPdispersion} for modes $n_m=-121,~-120$ and show excellent agreement with the measured and simulated results. This provides strong evidence that the observed oscillations, both in $I_c$ and $G_N$, arise from Fabry-P\'{e}rot interference, which implies phase-coherent ballistic transport. While such oscillations due to FP interference have been reported before in a variety of systems, including high-quality graphene with normal contacts \cite{Young2009,Varlet2014}, here we provide evidence for phase-coherent FP interference in the supercurrent, which has not been observed before in any 2D geometry.
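As a concrete illustration, the resonance condition of Eq.~\ref{eq:FPdispersion} is straightforward to evaluate numerically. The sketch below (our illustration, not part of the original analysis; the field value chosen is arbitrary) computes the dimensionless resonance index $L_c/\lambda_F$ for a given mode $n_m$ and perpendicular field $B$, using the extracted cavity length $L_c = 1.3\ \mathrm{\mu m}$:

```python
import math

E_CHARGE = 1.602176634e-19  # electron charge [C]
H_PLANCK = 6.62607015e-34   # Planck's constant [J s]

def resonance_ratio(n_m, B, L_c=1.3e-6):
    """Return L_c / lambda_F at resonance, per Eq. (FPdispersion).

    n_m : integer mode index (negative on the hole-doped side)
    B   : perpendicular magnetic field [T]
    """
    # (L_c^2 e B / h)^2 is dimensionless: L_c^2 B is a flux, divided by h/e
    flux_term = (L_c**2 * E_CHARGE * B / H_PLANCK) ** 2
    return n_m + 0.5 + flux_term / (6.0 * n_m)
```

At $B=0$ the condition reduces to $L_c/\lambda_F = n_m + \tfrac{1}{2}$; a finite field increases $|L_c/\lambda_F|$, i.e. pushes the resonance to shorter Fermi wavelength and higher carrier density, consistent with the measured dispersion.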
In order to better understand the microscopic details of our device, we compare the conductance in the $npn$ regime with that in the $nn'n$ regime (Fig.~\ref{fig:FP}d). Whereas in the $npn$ regime (upper panel) we observe periodic oscillations as a function of the absolute wavenumber, $|k_{F}|$, we observe universal conductance fluctuations (UCF) in the $nn'n$ case (lower panel). We attribute these fluctuations to diffuse boundary scattering at or close to the graphene-MoRe interface. This diffuse scattering should also be present on the hole side but does not dominate the transport due to the presence of the $pn$ junctions. In the ballistic limit, where the mean free path is much larger than $L$ and all resistance originates from the contact interface, we can estimate a lower bound on the contact transparency, $T$, via $G = \frac{T}{2}\frac{4e^2}{\pi h}k_FW$. From the conductance in the $nn'n$ regime (see SI) we find a contact transparency of $T>0.2$. In the $npn$ case, the conductance is dominated by the $pn$ barriers. In this case, we can estimate the sharpness, $d$, of the $p$ to $n$ transition regions via $G_{npn}=\frac{e^2}{\pi h}\sqrt{\frac{k_F}{d}}W$. We find a sharpness of $d\sim70\ \mathrm{nm}$, which is a plausible value considering the device dimensions.
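Both estimates amount to inverting the quoted conductance formulas. A minimal sketch (our illustration; the example input values below are hypothetical, while the actual numbers are in the SI):

```python
import math

E = 1.602176634e-19   # electron charge [C]
H = 6.62607015e-34    # Planck's constant [J s]

def contact_transparency(G, k_F, W):
    """Invert G = (T/2) (4 e^2 / (pi h)) k_F W for the transparency T."""
    return 2.0 * G / ((4.0 * E**2 / (math.pi * H)) * k_F * W)

def pn_sharpness(G_npn, k_F, W):
    """Invert G_npn = (e^2 / (pi h)) sqrt(k_F / d) W for the pn-junction sharpness d."""
    s = G_npn * math.pi * H / (E**2 * W)   # s = sqrt(k_F / d), units 1/m
    return k_F / s**2
```

With $W = 2\ \mathrm{\mu m}$ and $k_F$ obtained from the carrier density, these return $T$ and $d$ directly from the measured conductances.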
Since the DC Josephson effect is observed in these graphene devices over micron scale distances, we can also explore the magnetic field dependence of the critical current for unusual geometries. Earlier reports concerned graphene Josephson junctions with lengths much shorter than their width. In this case, the magnetic field dependence of $I_c$ is expected to follow the standard Fraunhofer diffraction pattern observed in tunnel junctions\cite{Tinkham}. In the present devices, in contrast, the aspect ratio is close to 1, which has two consequences. First, unlike in tunnel junctions, the phase difference across the junction must be integrated along both interfaces. Furthermore, contributions involving reflections off the side of the junction must be included, especially when transport is ballistic\cite{Heida1998, Ledermann1999, Sheehy2003}. The main prediction in this case is that the periodicity of $I_c$ with magnetic flux becomes larger than a single flux quantum, $\Phi_0 = h/2e$. Despite significant differences across the patterns measured on the various devices, we consistently find a period larger than $\Phi_0$, as seen in Fig.~\ref{fig:FH_wide} for device $A$ (and in the SI for devices $B$ and $C$). In contrast, earlier reports on graphene Josephson junctions all show flux periods smaller than $\Phi_0$ \cite{Heersche2007, Du2008, OAristizabal2009, Choi2010, JeongChoi2011, Coskun2011, Popinciuc2012, Komatsu2012} before corrections to account for the London penetration depth.
\begin{figure}[!t]
\centering
\includegraphics{fig_FH_wide}%
\caption{\textbf{Anomalous Fraunhofer diffraction pattern.} \textbf{a.} The differential resistance ($dV/dI$) is plotted as a function of applied current bias, $I_{dc}$, and magnetic field, $B$, at a gate voltage of $30\ \mathrm{V}$. We observe a separation between minima that clearly exceeds the flux quantum, $h/2e$.
\label{fig:FH_wide}}%
\end{figure}
The Fabry-P\'{e}rot oscillations in the critical current and the anomalous Fraunhofer diffraction patterns are mutually consistent and provide strong evidence of ballistic effects in superconducting transport through graphene. We believe that this is the first unambiguous demonstration of a ballistic JJ in graphene.
\subsection{Methods}
\subsubsection{DC transport measurements}
Low-temperature DC measurements are done in a Leiden Cryogenics MCK-50 $\mathrm{^3He/^4He}$ dilution fridge. The setup can reach a base temperature of 40~mK with an electron temperature of about 70~mK. DC currents and voltages are applied and probed with a home-built measurement setup. Furthermore, the setup is equipped with a superconducting magnet coil that can sustain fields of up to 12~T.
\subsubsection{Tight-binding simulation}
The FP oscillations in the $npn$ junction are simulated by a tight-binding calculation using the Kwant software package\cite{Groth2014}. The source code has been provided along with this submission as an ancillary file. A $1.5\ \mathrm{\mu m}\times 2.0\ \mathrm{\mu m}$ hexagonal lattice is discretized with a lattice constant of $a = 2\ \mathrm{nm}$, with metallic leads on the $2.0\ \mathrm{\mu m}$ wide sides. The contact-induced doping near both leads is modeled by a $100\ \mathrm{nm}$ region with a fixed chemical potential. The width of the transition region from the $n$ to the central $p$ region is set to $50\ \mathrm{nm}$ and modeled by $\tanh{\left[(x-x_0)/25\ \mathrm{nm}\right]}$. A finite contact resistance is imposed by reducing the transparency between the central strip and the leads to $60\%$. Finally, we calculate the transmission as a function of the Fermi wavenumber $k_F(\mu_p)$ and magnetic field $B$, resulting in the dispersion shown in Fig.~\ref{fig:FP}c (lower panel).
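The doping landscape described above can be sketched as follows. The geometry (100~nm contact-doped regions at each lead, tanh transitions with a 25~nm scale) follows the text, while the chemical-potential values `mu_n` and `mu_p` are placeholders we introduce for illustration (the actual values used in the simulation are set by the contact doping and the gate):

```python
import math

def mu_profile(x, L=1.5e-6, lead=100e-9, w=25e-9, mu_n=0.2, mu_p=-0.1):
    """Chemical potential along the junction [eV, hypothetical values]:
    n-doped near both leads, p-doped in the center, with smooth
    tanh((x - x0) / 25 nm) transitions at x0 = 100 nm and x0 = L - 100 nm."""
    t = 0.5 * (math.tanh((x - lead) / w) - math.tanh((x - (L - lead)) / w))
    return mu_n + (mu_p - mu_n) * t   # t ~ 0 near the leads, t ~ 1 in the center
```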
\subsection{Acknowledgements}
We thank Vibhor Singh for sharing the MoRe sputtering recipe and Carlo Beenakker for fruitful discussions. We acknowledge support from the EC-FET Graphene Flagship, from the European Research Council Advanced grant No.~339306 (METIQUM), from a European Research Council Synergy grant (QC-LAB), and from the Ministry of Education and Science of the Russian Federation under Contract No.~14.B25.31.0007. This work is part of the Nanofront consortium, funded by the Dutch Science Foundation OCW/NWO/FOM.
\subsection{Author contributions}
K.W. and T.T. grew the hBN crystals, G.N. fabricated the devices, V.E.C. and S.G. performed the measurements, and M.D. and A.R.A. did the numerical simulations and theory. The measurements were analyzed and interpreted by V.E.C., S.G., M.D., A.R.A., T.M.K., and L.M.K.V. The manuscript was written by V.E.C., S.G., and L.M.K.V. with input from A.R.A., M.D., and T.M.K.
The authors declare no competing financial interests.
package org.apache.shardingsphere.mode.manager.cluster.coordinator.registry.status.storage.subscriber;
import com.google.common.eventbus.Subscribe;
import org.apache.shardingsphere.infra.eventbus.ShardingSphereEventBus;
import org.apache.shardingsphere.infra.rule.event.impl.DataSourceDisabledEvent;
import org.apache.shardingsphere.infra.rule.event.impl.PrimaryDataSourceChangedEvent;
import org.apache.shardingsphere.mode.manager.cluster.coordinator.registry.status.storage.StorageNodeStatus;
import org.apache.shardingsphere.mode.manager.cluster.coordinator.registry.status.storage.node.StorageStatusNode;
import org.apache.shardingsphere.infra.metadata.schema.QualifiedSchema;
import org.apache.shardingsphere.mode.repository.cluster.ClusterPersistRepository;
/**
* Storage node status subscriber.
*/
public final class StorageNodeStatusSubscriber {
private final ClusterPersistRepository repository;
public StorageNodeStatusSubscriber(final ClusterPersistRepository repository) {
this.repository = repository;
ShardingSphereEventBus.getInstance().register(this);
}
/**
* Update data source disabled state.
*
* @param event data source disabled event
*/
@Subscribe
public void update(final DataSourceDisabledEvent event) {
if (event.isDisabled()) {
repository.persist(StorageStatusNode.getStatusPath(StorageNodeStatus.DISABLE, new QualifiedSchema(event.getSchemaName(), event.getDataSourceName())), "");
} else {
repository.delete(StorageStatusNode.getStatusPath(StorageNodeStatus.DISABLE, new QualifiedSchema(event.getSchemaName(), event.getDataSourceName())));
}
}
/**
* Update primary data source state.
*
* @param event primary data source event
*/
@Subscribe
public void update(final PrimaryDataSourceChangedEvent event) {
repository.persist(StorageStatusNode.getStatusPath(StorageNodeStatus.PRIMARY, new QualifiedSchema(event.getSchemaName(), event.getGroupName())), event.getDataSourceName());
}
}
Lentil Salad with Asiago Cheese – Alternative Natural Healthcare Services Inc.
Meanwhile, in a large bowl, whisk together oil, vinegar, mustard, oregano, salt and pepper. Add lentil mixture, onions, red pepper, celery, cheese and parsley; toss gently to combine. Add a little extra shredded Asiago cheese and chopped parsley on top before serving.
Bon appétit, and enjoy in good health!
*try your best to use organic ingredients as much as possible.
Laureles, Los Laureles, or Los Laureles de Pilar is a Paraguayan district in the department of Ñeembucú. With its picturesque profile of a nineteenth-century provincial town, it is the guardian of the cattle-ranching traditions of the department of Ñeembucú and is the site of the annual Fiesta de la Tradición y el Folklore, celebrated with rodeo riding (jineteadas), music, and beef roasted on the stake (asado a la estaca).
It was founded in 1790 by Colonel Joaquín de Alós y Brú, who was governor of Paraguay between 1787 and 1796. It is located near the Paraná River, in the south of the department of Ñeembucú, 318 km from Asunción (Paraguay). The district is reached via Route PY20.
History
The community of Los Laureles lies a few kilometers from Cerrito, a town founded by the Franciscans, near the edge of the department of Ñeembucú. The locality's great attraction is that it still preserves almost intact its plaza yere layout, with colonial houses arranged around the large central square and the church, their corridors joined one to another to form long, beautiful arcades (recovas). The church of Laureles, built in 1791, is another historic relic. Laureles is one of the oldest towns in the department of Ñeembucú.
Geography
It is located toward the southeast of the department of Ñeembucú. It covers 800 km² and has a population of 3,676 inhabitants, for a population density of 4.60 inhabitants/km², with 91.44% of the population settled in rural areas.
In physical terms, the department of Ñeembucú presents a topography dominated by low, flat land, a feature that favors the presence of large marshes and swamps.
On the border with the department of Misiones lies the Yabebyry Wildlife Refuge, where safaris and outings can be taken. The Tanimbú lagoon is also just a few kilometers from Laureles.
It borders Guazú Cuá to the north, Cerrito and Villalbín to the south, the department of Misiones to the east, and Desmochados and the departmental capital to the west. The district of Los Laureles is surrounded by wetlands such as the Ñeembucú, Cenizales, and Pikyry marshes and the Pirá Guazú marsh.
Climate
It has a humid subtropical climate, with mean annual precipitation of 1,350 mm and a mean temperature of 23.2 °C. In recent years an average of two frosts per season has been recorded. The months with the least rainfall are May, June, July, and August, while the rainiest months are January, March, April, and October. Summer is very hot and humid, with temperatures as high as 45 °C.
Demographics
As for population trends, a quick look at recent decades reveals a strong migratory outflow, above all toward Argentina. The department's proximity to the province of Corrientes, Argentina, has contributed to its negative demographic growth.
The population growth rate has declined sharply owing to the heavy emigration seen throughout the region, which is clearly reflected in the population figures. The population is mostly rural, with a slight predominance of men. According to the National Population and Housing Census, the district of Laureles has a negative population growth rate of −0.4%. The population is settled in rural areas.
Economy
Los Laureles occupies a very particular setting, surrounded by wetlands such as the Ñeembucú, Cenizales, Pikyry, and Pirá Guazú marshes. The main economic activity is farming and ranching, and most producers are devoted primarily to raising cattle. Other crops are also grown, such as maize, cotton, cassava, and sugarcane. A great variety of birds live in its fields.
Culture
From the third Friday of January through the following Sunday, the town celebrates the Fiesta de la Tradición y Folklore, renowned throughout the department of Ñeembucú. During those days its inhabitants compete in rural skills, notably equestrian and horsemanship events and traditional games such as the carrerapé, ring races, and hide races, held in the central square of Laureles. In October the feast of the Virgin of the Rosary is also celebrated.
\section{Introduction}\label{sec:intro}
Implicit-solvent models represent an intuitive and fast approach to
understand molecular solvation~\cite{Roux99,Sharp90,Davis90,Tomasi94}, and
have a rigorous statistical-mechanical interpretation as an
approximation to the potential of mean force (PMF) experienced by a
molecular solute due to the surrounding solvent
molecules~\cite{Roux99}. The PMF is usually decomposed into non-polar
and electrostatic terms, the latter of which are often modeled using
macroscopic continuum models based on the Poisson--Boltzmann
partial-differential equation (PDE). Continuum models approximate the
free energy required to grow the solute charge distribution into the
solute
cavity~\cite{Roux99,Kirkwood34,Latimer39,Kornyshev97,Hildebrandt04,Rizzo06,Abrashkin07,Gong09,Guo13,Zhou14}.
Although implicit-solvent models can be orders of magnitude faster
than explicit-solvent molecular-dynamics (MD) simulations, most
popular continuum theories ignore numerous potentially important
effects, including solvent molecules' finite size and specific
molecular interactions such as hydrogen bonding (the AGBNP2 model,
which addresses the latter, is a notable
exception~\cite{Gallicchio09}).
One of the Poisson model's most perplexing and long-standing
shortcomings is the difficulty of extending it to model charge-sign
asymmetric solvation: for example, given two monatomic ions of equal
radius, one of $+q$ charge and the other of $-q$, the negative charge
experiences stronger interactions with the solvent (more negative
solvation free
energy)~\cite{Latimer39,Rashin85,Ashbaugh00,LyndenBell01,Rajamani04,Grossfield05,Mukhopadhyay12,Bardhan12_asymmetry}.
However, standard Poisson models are charge-sign \textit{symmetric};
that is, they predict the same solvation free energy for $\pm q$. The
need to include asymmetric effects is difficult to exaggerate,
particularly in biological contexts. Consider that the protein avidin
binds its ligand biotin with a binding free energy of approximately
$-$20~kcal/mol, one of the most favorable in biology~\cite{Green75};
solvent-exposed $+1e$ and $-1e$ charges can experience as much as
40~kcal/mol difference in their solvation free
energies~\cite{Bardhan12_asymmetry}. Dominant factors in charge-sign
asymmetric response include the liquid-vapor interface
potential~\cite{Ashbaugh00,Rajamani04} and the fact that water
hydrogens can approach a negative solute charge closer than water
oxygens can approach a positive
one~\cite{Latimer39,Rashin85,Mukhopadhyay12,Bardhan12_asymmetry}.
Spherical solutes with central charges provide a useful data set for
developing an understanding of size- and charge-sign dependent
hydration, including the characterization of interface potentials,
solvent packing, and dielectric
saturation~\cite{Rashin85,Hummer96_netchargecorrection,Ashbaugh00,Rajamani04,Grossfield05,Fedorov07}.
These analyses and the continuum macroscopic-dielectric framework
suggest that improvements require a more detailed, accurate
representation of the solvent dipole field
$\mathbf{P}(\mathbf{r})$~\cite{Warshel76,Azuara08}, or, equivalently,
the solvent charge density \mbox{$\rho_{\mathrm{induced}}(\mathbf{r})
= -\nabla \cdot \mathbf{P}(\mathbf{r})$}. Because $\mathbf{P}$ and
$\rho_{\mathrm{induced}}$ do \textit{not} respond linearly to the
solute charge distribution~\cite{Alper90}, particularly in the first
solvent shell~\cite{Purisima09}, many groups have developed solvent
models in which the solvent potential obeys a nonlinear partial
differential equation
(PDE)~\cite{Kornyshev98,Sandberg02_first_LD,Jha08,Gong08,Hu12_Wei_nonlinear_Poisson_BJ}.
Unfortunately, most of these models are still charge-sign symmetric.
However, in 1939 Latimer et al. proposed an approach to increase or
decrease an ion's radius based on the charge~\cite{Latimer39}, and
recent developments in high-performance computing and explicit-solvent
MD free-energy calculations provide important new data to extend this
approach. Mobley et al. constructed a challenging test set and
conducted extensive MD simulations on charge-sign
asymmetry~\cite{Mobley08_asymmetry}, enabling important new
developments in modeling
asymmetry~\cite{Purisima09,Corbeil10,Mukhopadhyay14} that extend
Latimer's work to Generalized-Born (GB) models of complex solutes. GB
theory was a natural setting for these developments because Latimer's
work and GB theory share the conceptual picture of an effective atomic
radius. These early studies provided an important insight: a buried
charge still affects the electric field at the boundary, so merely
parameterizing charge-dependent radii cannot (indeed, should not)
provide a satisfactory explanation. The accuracy of asymmetric GB
models suggests that a simple Poisson-based model exists, but finding
one has proven to be surprisingly difficult.
In this paper, we propose a simple Poisson continuum model that
includes charge-sign asymmetry and show that it is remarkably accurate
even without parameterization on an atom-by-atom basis. The key
feature of our theory is a \textit{nonlinear boundary condition}
(NLBC) for the normal displacement field; in contrast, the
displacement boundary condition for the standard (symmetric) Poisson
theory is linear~\cite{Jackson_classical_electrodynamics}.
Importantly, even though our proposed displacement boundary condition
is nonlinear, the electrostatic potential in the solvent and solute
volumes still satisfy \textit{linear} Poisson/Laplace equations. Two
phenomena motivated us to propose a nonlinear boundary condition
instead of a nonlinear governing equation. First, numerous results
illustrate that the solute reaction potential obeys nearly linear
response even though the solvent charge distribution does
not~\cite{Lin11_Pettitt,Boda09,Purisima09,Corbeil10,Mukhopadhyay14,Bardhan12_asymmetry}.
For example, the new asymmetric Generalized-Born (GB) models use the
charge distribution only to modify the Born radii, with the overall
energy still computed using superposition (independent sum of
individual charge
responses)~\cite{Purisima09,Corbeil10,Mukhopadhyay14}. Furthermore,
we found in our previous work that the solute reaction potential is
essentially a \textit{piecewise-linear} function of charge
~\cite{Bardhan12_asymmetry}, i.e. the proportionality coefficient
depends on whether one is charging an ion from zero to $+q$ or to
$-q$. In fact, we began this work seeking primarily to reproduce this
curiously simple nonlinearity.
The second phenomenon motivating our NLBC approach is the fact that
the solute reaction potential is a harmonic field---that is, it
satisfies the Laplace equation. This property is useful for numerical
computations~\cite{Chern03,Holst12} and also provides a path to
improve models via boundary-integral methods~\cite{Bardhan12_review}:
harmonicity means that regardless of the solvent model of interest,
there exists \textit{some} surface charge density that reproduces the
reaction potential inside. For a given solvent model, the surface
charge density might satisfy a nonlinear boundary-integral equation,
but the very fact that such a density always exists suggests that one
might improve continuum models by adding nonlinear terms to widely
used BIE formalisms~\cite{Rizzo67,Shaw85,Juffer91,Bardhan09_disc}.
\section{Continuum Model and Extension to Nonlinear Boundary Conditions}
We first present the standard (charge-sign symmetric) Poisson
electrostatic model and then describe the difference between it and
our proposed NLBC model. In both theories, the molecular solute is
treated as a macroscopic linear dielectric continuum obeying the
Poisson equation \mbox{$\nabla^2 \varphi_1(\mathbf{r}) =
-\frac{\rho(\mathbf{r})}{\epsilon_1}$} where $\mathbf{r}$ is a point
in space, $\varphi_1(\mathbf{r})$ is the potential in the solute,
$\epsilon_1$ is the relative permittivity, and the molecular charge
distribution $\rho(\mathbf{r})$ is a set of $N_q$ point charges, i.e.
\mbox{$\rho(\mathbf{r}) = \sum_{i=1}^{N_q} q_i
\delta(\mathbf{r}-\mathbf{r}_i)$}. The solute and solvent are
separated by the interface $\Gamma$, and the solvent exterior is a
linear dielectric with permittivity \mbox{$\epsilon_2 \gg
\epsilon_1$}, so the electric potential obeys \mbox{$\nabla^2
\varphi_2(\mathbf{r}) = 0$}; note that modeling realistic biological
solutions requires inclusion of screening effects due to mobile ions
using e.g. some form of the Poisson--Boltzmann equation for
$\varphi_2(\mathbf{r})$~\cite{Sharp90,Tomasi94}. From macroscopic
dielectric theory and Gauss's law, we obtain the standard Maxwell
boundary conditions for $\mathbf{r}_\Gamma \in \Gamma$
\begin{align}
\varphi_1(\mathbf{r}_\Gamma) &= \varphi_2(\mathbf{r}_\Gamma),\\
\epsilon_1 \frac{\partial \varphi_1}{\partial n}(\mathbf{r}_\Gamma) &= \epsilon_2 \frac{\partial \varphi_2}{\partial n}(\mathbf{r}_\Gamma),\label{eq:standard-Maxwell-bc}
\end{align}
where $\frac{\partial}{\partial n}$ denotes the normal derivative (the
normal at $\mathbf{r}_\Gamma$ is defined pointing outward into
solvent). Assuming that $\varphi_2(\mathbf{r})$ decays sufficiently
quickly as $|\mathbf{r}|\rightarrow\infty$, this mixed-dielectric
Poisson problem is well posed and the unknown potential $\varphi_1$
can be rewritten as a linear boundary-integral equation for an unknown
surface charge distribution on $\Gamma$. In particular, the
apparent-surface charge (ASC)
model~\cite{Shaw85,Altman05_2,Bardhan09_disc} (also known as the
polarizable continuum model~\cite{Miertus81,Tomasi94}) can be
interpreted as finding an equivalent surface charge
$\sigma(\mathbf{r})$ in a homogeneous medium with permittivity
$\epsilon_1$ everywhere. In this equivalent problem, the
analogous boundary condition to Eq.~\ref{eq:standard-Maxwell-bc}
is simpler due to homogeneity, but adds a term for the surface charge:
\begin{equation}
\frac{\sigma(\mathbf{r}_\Gamma)}{\epsilon_1} = \frac{\partial
\hat{\varphi}_1}{\partial n}(\mathbf{r}_\Gamma) - \frac{\partial
\hat{\varphi}_2}{\partial n}(\mathbf{r}_\Gamma),\label{eq:ecf-bc}
\end{equation}
and we use $\hat{\varphi}_i = \varphi_i$ to emphasize our use of an
equivalent problem. Defining \mbox{$G(\mathbf{r};\mathbf{r}') =
\frac{1}{4 \pi || \mathbf{r} - \mathbf{r}'||}$}, one obtains
\begin{align}
\left(I + \hat{\epsilon} \left(-\frac{1}{2}I + K\right)\right)\sigma &= -\hat{\epsilon}\sum_{i=1}^{N_q} q_i \frac{\partial G}{\partial n}
\end{align}
where $\hat{\epsilon} = (\epsilon_2-\epsilon_1)/\epsilon_2$ and $K$ is
the normal electric field operator~\cite{Bardhan09_disc}. The
reaction potential in the solute is then
\mbox{$\varphi^{REAC}(\mathbf{r}) = \frac{1}{\epsilon_1} \int_\Gamma
G(\mathbf{r}; \mathbf{r}') \sigma(\mathbf{r}') dA'$}, and
\mbox{$\varphi_{1}(\mathbf{r}) = \varphi^{REAC}(\mathbf{r})
+\varphi^{Coulomb}(\mathbf{r})$}, with the latter term representing
the Coulomb potential due to $\rho(\mathbf{r})$.
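For a single charge at the center of a spherical cavity, symmetry forces the equivalent surface charge to be uniform, and the ASC picture reproduces the Born result in closed form. The sketch below is our own illustration (not the paper's implementation), in units where the Coulomb potential is $q/(4\pi\epsilon r)$; it is a useful sanity check for any numerical ASC solver:

```python
import math

def asc_born_sphere(q, R, eps1=1.0, eps2=80.0):
    """Uniform equivalent surface charge for a centered charge in a sphere."""
    # sigma is fixed by requiring phi_reac(0) = (q / 4 pi R)(1/eps2 - 1/eps1)
    sigma = eps1 * q * (1.0 / eps2 - 1.0 / eps1) / (4.0 * math.pi * R**2)
    phi_reac = sigma * R / eps1   # (1/eps1) * integral of G * sigma over the sphere
    dG = 0.5 * q * phi_reac       # Born charging free energy
    return sigma, phi_reac, dG
```

The total equivalent charge is $4\pi R^2\sigma = -q(1-\epsilon_1/\epsilon_2)$, the familiar induced-surface-charge result.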
The standard Maxwell displacement boundary condition
Eq.~\ref{eq:standard-Maxwell-bc} is obtained using Gauss's law in
integral form and the fact that the divergence of the polarization
field $\mathbf{P}(\mathbf{r})$ represents a volume charge density.
However, near the solute--solvent boundary, the assumption that
$\mathbf{P}(\mathbf{r})$ is pointwise proportional to the local
electric field breaks down due to water structure at the interface;
that is, it is no longer necessarily true that
\mbox{$\mathbf{P}(\mathbf{r}) = (\epsilon(\mathbf{r})-1)
\mathbf{E}(\mathbf{r})$}.
To model nonlinear solvent response at the boundary, we propose to
replace the linear boundary condition,
Eq.~\ref{eq:standard-Maxwell-bc}, with the phenomenological nonlinear
boundary condition
\begin{equation}
f(E_n) \frac{\partial \varphi_1}{\partial n}(\mathbf{r}_\Gamma) = \left(1+f(E_n)\right)\frac{\partial \varphi_2}{\partial n}(\mathbf{r}_\Gamma)\label{eq:charge-layer-nonlinear-BC}
\end{equation}
where $E_n$ is the electric field just inside $\Gamma$,
i.e. \mbox{$E_n = -\sum_i q_i \frac{\partial G}{\partial n} - K
\sigma$}, and
\begin{align}
f(E_n) & = \frac{\epsilon_1}{\epsilon_2-\epsilon_1} - h(E_n);\\
h(E_n) & = \alpha \tanh(\beta E_n - \gamma) + \mu.\label{eq:tanh}
\end{align}
with $\alpha$, $\beta$, and $\gamma$ representing model parameters and
\mbox{$\mu = -\alpha
\tanh(-\gamma)$}. The specification of $\mu$ ensures that $h(E_n=0)
= 0$, so that in the limit of weak electric fields, such as induced at
the surface by a deeply buried charge, the boundary condition reduces
to the familiar Poisson model. The NLBC leads to
the modified, \textit{nonlinear} BIE
\begin{equation}
\left(I + \hat{\epsilon}\left(-\frac{1}{2}I + K\right) + h(E_n)\right) \sigma = -\hat{\epsilon}\sum_{i} q_i \frac{\partial G}{\partial n},
\end{equation}
with the nonlinearity arising in the dependence of $h$ on $E_n$ (see
the Supporting Information for details on the numerical
implementation).
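For a charge at the center of a spherical cavity the reaction potential is constant inside, so the field just inside the surface reduces to the bare Coulomb field and the NLBC admits a closed-form solution. The sketch below is our own reduction for illustration, not the SI implementation; in particular, the unit system for $E_n$ (and hence the effective meaning of $\beta$) is a toy convention here:

```python
import math

def h_nlbc(E_n, alpha=0.5, beta=-60.0, gamma=-0.5):
    """Nonlinear boundary term, Eq. (tanh); mu enforces h(0) = 0."""
    mu = -alpha * math.tanh(-gamma)
    return alpha * math.tanh(beta * E_n - gamma) + mu

def nlbc_sphere_dG(q, R, eps1=1.0, eps2=80.0, **params):
    """Charging free energy of a centered charge under the NLBC (Born sphere)."""
    E_n = q / (4.0 * math.pi * eps1 * R**2)           # field just inside the surface
    f = eps1 / (eps2 - eps1) - h_nlbc(E_n, **params)  # Eq. (charge-layer-nonlinear-BC)
    phi_reac = -(q / (4.0 * math.pi * eps1 * R)) / (1.0 + f)
    return 0.5 * q * phi_reac
```

With $h\equiv 0$ (i.e. $\alpha=0$) this reduces to the Born energy; with $\beta<0$, the anion is solvated more favorably than the cation of equal radius, as required by the asymmetry data.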
One challenge in developing more accurate solvent models is the fact
that nonlinear response~\cite{Sharp90_2} generally requires a charging
process~\cite{Zhou94}, i.e. the expression \mbox{$\Delta G^{solv,es} = \frac{1}{2}
q^T \varphi^{REAC} = \frac{1}{2}q^T L q$} no longer holds ($L$
denotes the reaction-potential
operator~\cite{Roux99,Bardhan12_review}). However, our previous work
showed the remarkable fact that the solute reaction potential is
\textit{piecewise} linear, with the breakpoint at
$q=0$~\cite{Bardhan12_asymmetry}, so that $\varphi^{REAC} = L_+ q$ for
$q > 0$ and $\varphi^{REAC} = L_- q$ for $q < 0$, with $L_+ \neq L_-$.
The proposed NLBC in Eqs.~\ref{eq:charge-layer-nonlinear-BC}
and~\ref{eq:tanh} immediately explains this curious phenomenon:
consider the limit $\beta \rightarrow \infty$, so that $\tanh$ is
constant everywhere, but discontinuous at $q=0$. The Debye charging
process~\cite{Kirkwood34_2} scales all charges uniformly, i.e.
\mbox{$\hat{\rho}(r; \lambda) = \lambda \rho(r)$}, so the Coulomb
field \mbox{$\frac{\partial \varphi^{Coul}}{\partial n}$} has the same
sign for all finite $\lambda$. The Coulomb-field approximation (CFA)
shows that the reaction field is nearly proportional to the direct
Coulomb field, but slightly smaller in
magnitude~\cite{Kharkats76,Bardhan08_BIBEE,Bardhan12_review}, so for
finite $\lambda$, at almost all $\mathbf{r}_\Gamma$, the total field
$E_n(\mathbf{r}_\Gamma; \lambda)$ has the same sign as
$E_n(\mathbf{r}_\Gamma; \lambda = 1)$. This implies that almost
everywhere on the surface, the $\tanh$ boundary condition takes its
limiting ($\lambda \rightarrow 1$) value for any finite $\lambda$,
which means that the boundary condition is essentially linear:
\mbox{$(1 + g(r))\sigma(r) = \frac{\partial \hat\varphi_2}{\partial
n}-\frac{\partial \hat\varphi_1}{\partial n}$}. With this
justification, in this work we compute solvation free energies as
\mbox{$\Delta G^{solv,es}= \frac{1}{2} q^T \varphi^{REAC}$}. Note
that a more precise definition of the charging free energy would be
piecewise \textit{affine}, because the charging free energy also
includes a linear term that results from the liquid-vapor interface
potential~\cite{Harder08,Kathmann11,Bardhan12_asymmetry}; as noted
above, however, in the present work its influence is approximated via
the offset parameter $\gamma$.
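A toy numerical illustration of this piecewise-linear behaviour, with made-up scalar values standing in for $L_+$ and $L_-$ (in a real calculation these are dense reaction-potential matrices assembled by the solver):

```python
# Piecewise-linear reaction potential described above:
#   phi_reac = L_plus * q  for q > 0,   phi_reac = L_minus * q  for q < 0,
# with DeltaG = 0.5 * q * phi_reac.  L_plus and L_minus are made-up
# scalar "operators" for a single charge, chosen only to illustrate
# the charge-sign asymmetry.

def solvation_energy(q, L_plus, L_minus):
    L = L_plus if q > 0 else L_minus
    return 0.5 * q * (L * q)

dG_pos = solvation_energy(+1.0, L_plus=-80.0, L_minus=-95.0)
dG_neg = solvation_energy(-1.0, L_plus=-80.0, L_minus=-95.0)
# dG_neg is more favorable (more negative) than dG_pos, mimicking
# water's stronger response to negative solutes.
```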
\section{Results and Discussion}\label{sec:results}
We parameterized the NLBC model using the MD free-energy calculations
of Mobley et al., who studied asymmetry using fictitious bracelet and
rod molecules~\cite{Mobley08_asymmetry} constructed from AMBER
$\mathrm{C}_{\alpha}$ atoms with $\mathrm{R}_{\mathrm{min}}/2 = 1.908$~\AA. We
obtained optimal results with $\alpha = 0.5$, $\beta = -60$, $\gamma =
-0.5$, and a continuum-model $\mathrm{C}_\alpha$ radius of 1.75~\AA~(a
scale factor of approximately 0.92). Note that in this first
exploration of the NLBC, we have parameterized against the overall
solvation free energies computed by Mobley et al. rather than the more
correct charging free energy.
Figure~\ref{fig:standard-ions-model} plots NLBC and MD free-energy
calculations for ion charging free energies; the MD charging
simulations were those used in our previous work~\cite{Bardhan12_asymmetry} (see
Supporting Information), with CHARMM Lennard-Jones parameters. We
remind the reader that no additional parameters were fit in obtaining
these NLBC results, i.e. ion radii were assigned
$\mathrm{R}_{\mathrm{ion}} = 0.92 \mathrm{R}_{\mathrm{min}}/2$. For
additional data, ions were charged to both $+1e$ and $-1e$, regardless
of the charge on the real ion, and the NLBC accurately predicts these
charging free energies as well. The largest deviations occur for
radii less than 1.4~\AA, where discrete packing effects and actual
dielectric saturation are likely.
\begin{figure}[ht!]
\centering \resizebox{3.0in}{!}{\includegraphics{vary-ion-radius}}
\caption{Asymmetric polarization free energies for a monovalent
central charge in a sphere, as a function of sphere radius. The
labeled symbols denote results from MD free-energy calculations
charging CHARMM monatomic ions from zero to $+1e$ or $-1e$, with
$\mathrm{R}_{\mathrm{ion}} = 0.92 \mathrm{R}_{\mathrm{min}}/2$. The dashed black curve
in the middle is the (charge-sign symmetric) Born polarization
free energy.}\protect\label{fig:standard-ions-model} \end{figure}
\begin{table}
\centering
\begin{tabular}{l|cc|cc}
Problem & \multicolumn{2}{c}{Solvation errors} & \multicolumn{2}{c}{Asymmetry errors}\\\hline
 & RMSE & Max. & RMSE & Max. \\
Rods & 5.57 & 9.63 & 0.88 & 1.49 \\
Bracelets (opposing) & 2.88 & 6.10 & 2.04 & 3.08 \\
Bracelets (distributed) & 2.20 & 2.72 & 0.29 & 0.59 \\
Bracelets (dipole) & 2.67 & 3.52 & 0.85 & 1.09
\end{tabular}
\caption{Comparison of NLBC model to MD free-energy calculations of
Mobley et al.~\cite{Mobley08_asymmetry} for rod and bracelet
molecule test set. All energies are in kcal/mol. See Supporting
Information for detailed results.}\protect\label{table:Mobley}
\end{table}
Calculations for the Mobley test set are summarized in
Table~\ref{table:Mobley}; SI Figures~1--8 plot the NLBC and Mobley MD
solvation free energies and asymmetry energies, and include
illustrations of the test problems. The rod molecules are composed of
5 or 6 atoms along a line, with one atom possessing $+1e$ charge, one
with $-1e$, and the rest neutral. The asymmetry errors in
Table~\ref{table:Mobley} represent the difference in solvation
energies when reversing the charged atoms' signs. The bracelet
molecules are regular polygons with between 3 and 8 sides; atoms are
at the vertices (1.4~\AA~apart). Bracelets were simulated with three
charge distributions: the ``opposing'' case had a $+1e$ charge
neutralized by two $-0.5e$ charges positioned symmetrically on the
opposite side. The ``distributed'' case has one $+1e$ charge and a
neutralizing $-1e$ distributed equally on all the other atoms; the
``dipole'' case is similar to ``opposing'' but fixes the dipole
moment~\cite{Mobley08_asymmetry}. Solvent charge-densities from the
MD calculations~\cite{Mobley08_asymmetry} suggest that solvent packing
may be responsible for size-dependent deviations; parameterizing radii
for actual atoms should significantly reduce these errors.
To test the model on real but nonspherical molecules, we compared NLBC
and MD charging free energies for isolated titratable amino acids in
both protonated and unprotonated states (see Supporting Information
for details on structure preparation). Parameters were from the CHARMM
force field~\cite{Brooks83} when available, with other protonation
states defined so that the protonated and unprotonated states had the
same number of atoms. The MD free-energy-perturbation (FEP)
calculations used the same protocol as the
ions~\cite{Bardhan12_asymmetry}, holding the solute rigid so that
$\epsilon_1 = 1$ unambiguously~\cite{Roux99}. The deviations between
our MD results and the MD calculations of Nina et al.~\cite{Nina97}
are small compared to the energies of interest, and likely due to our
use of (i) periodic boundary conditions, (ii) a larger solvent box
(1959 waters vs. 150), and (iii) slightly different backbone angles.
As in the ion and Mobley examples, the NLBC radii were defined by the
scaling $\mathrm{R} = 0.92 \mathrm{R}_{\mathrm{min}}/2$. The results
in Figure~\ref{fig:residues} illustrate that the NLBC model correctly
captures solvation free energies in both charge states, despite the
fact that radii were not adjusted individually or even for the atomic
charges. In contrast, standard Poisson model results computed using
the Nina et al.~\cite{Nina97} or PARSE~\cite{Sitkoff94} radii exhibit
larger deviations, particularly for arginine, aspartic acid, cysteine,
glutamic acid, and tyrosine. These data suggest that the differences
between symmetric and asymmetric electrostatic models are robust with
respect to radii (the PARSE calculations are merely suggestive because
these calculations used the CHARMM charges; for consistent comparison
to experiment, one should use PARSE charges with PARSE radii).
\begin{figure}[ht!]
\centering \resizebox{3.0in}{!}{\includegraphics{residues}}
\caption{Comparison of NLBC model to explicit-solvent MD FEP
calculations for titratable residues with neutral blocking
groups. MD results from Nina et al.~\cite{Nina97} are shown where
available; standard continuum model results are shown for Nina et
al. radii and PARSE radii~\cite{Sitkoff94} (using CHARMM
charges).}\protect\label{fig:residues}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
We have proposed a Poisson-based theory that models
charge-sign-dependent asymmetries in electrostatic solvation free
energies using a nonlinear boundary condition (NLBC), while still
using linear continuum theory in the solute and solvent volumes. The
NLBC model accurately reproduces MD free-energy results for monatomic
ions, the Mobley et al. bracelet and rod problems, and titratable
residues, even though we have used charge-independent radii that were
fixed by a single scaling factor applied to MD radii. Furthermore,
the NLBC reduces smoothly to the standard Poisson model as the
parameter $\alpha$ approaches zero. Finally, our boundary-element
method implementation for non-trivial molecules demonstrates that the
new model is easily implemented in numerical Poisson and
Poisson--Boltzmann solvers.
Our introduction of a modified boundary condition to account for
solvation-shell response follows a long history in continuum
mechanics, where phenomenological techniques find applications in many
areas of science and engineering to capture a particular physical
behavior in continuum theory rather than modeling or deriving it from
first
principles~\cite{MizziBarberEmersonReeseStefanov2007,Brenner2011,Bardhan11_Knepley}. Non-equilibrium
micro-scale gas flows offer a well-developed example: velocity-slip
and temperature-jump boundary conditions are simplified
phenomenological approaches to represent both non-equilibrium and
gas-surface interaction effects occurring near solid walls. Such
boundary conditions were first suggested in the 19th century by
Maxwell~\cite{Maxwell1878} and von
Smoluchowski~\cite{Smoluchowski1898}, respectively. More recent
examples include the partitioning of minerals at phase boundaries in
geophysics~\cite{LHeureux96}, tumor growth~\cite{Macklin05}, the
deformation of biological membranes~\cite{Fan03}, and thin electric
double layers in electro-osmotic flow~\cite{Yossifon07}.
Much as Beglov and Roux showed that solvent response approaches the
linear Poisson model in the limit as the solvent molecule approaches
zero size~\cite{Beglov96}, our model emphasizes that the nonlinear
response is generally localized in the first solvent shell.
Conceptually, the nonlinear boundary condition penalizes negative
surface charge because the larger water oxygen cannot approach a
solute charge as closely as the water hydrogens can. From a
boundary-integral point of view, this has the same effect as adjusting
the atomic radii, an approach pioneered by Latimer et
al.~\cite{Latimer39}, and extended recently to GB
models~\cite{Purisima09,Corbeil10,Mukhopadhyay12,Mukhopadhyay14}.
Purisima's work is particularly relevant due to its use of a
surface-charge boundary-integral approach, adjusting GB radii using
$\sigma(\mathbf{r})$~\cite{Purisima09,Corbeil10}. Our work differs
substantially from these approaches because we have included asymmetry
directly in the underlying Poisson model.
The present theory can be extended in several important ways. First,
the proposed NLBC model has only three parameters whose particular
dependencies on solvent model have not yet been established
theoretically. Second, it seems straightforward to include ionic
screening via the Poisson--Boltzmann equation. Third, the proposed
NLBC depends exclusively on the normal electric field; improved models
might include local curvature or higher-order moments of the
potential. Importantly, the latter could distinguish between
small-magnitude charges near the surface, and larger charges further
away~\cite{Mukhopadhyay12}. Fourth, water's length-scale-dependent
dielectric behavior might be included using nonlocal
electrostatics~\cite{Hildebrandt04,Fedorov07,Bardhan11_pka,Bardhan11_DAC,Bardhan13_nonlocal_review}. The
new model also does not necessarily capture specific hydrogen-bonding
effects like AGBNP2 does~\cite{Gallicchio09}, which motivates future
work comparing the two approaches.
\iftoggle{fulltitlepage}
{
\section*{Acknowledgments}
The authors thank David Mobley for sharing detailed calculation
results, Jed Brown and David Green for valuable discussions, and Matt
Reuter for a critical reading of the manuscript. MGK was partially
supported by the U.S. Department of Energy, Office of Science,
Advanced Scientific Computing Research, under Contract
DE-AC02-06CH11357, and also NSF Grant OCI-1147680. JPB has been
supported in part by the National Institute of General Medical
Sciences (NIGMS) of the National Institutes of Health (NIH) under
award number R21GM102642. The content is solely the responsibility of
the authors, and does not necessarily represent the official views of
the National Institutes of Health.
\section*{Supporting Information}
Figures comparing NLBC and MD calculations for the Mobley test
set~\cite{Mobley08_asymmetry}. The source code (MATLAB) and surface
discretizations for running the nonlinear boundary-condition
calculations, data files, parameters, and scripts for preparing and
running the MD calculations of titratable residues, as well as source
code to generate the figures, are freely and publicly available online
at \url{https://bitbucket.org/jbardhan/si-nlbc}.
}{
}
\bibliographystyle{unsrt}
\section{Introduction}
\label{introduction}
The multi-armed bandit problem is an online decision-making problem introduced by \cite{thompson} nearly a century ago in the context of clinical trials. It has since been widely studied for its numerous applications, for example in recommendation systems, advertising, or hyperparameter tuning.
We are in particular interested in the multi-player version of the multi-armed bandit problem where at each time step, multiple players choose among a common set of arms. If two players choose the same arm, they collide and both of them receive a null reward. If a player is the only one to choose an arm, they receive a reward sampled from a Bernoulli distribution. Let us formally state the model.
\subsection{Model and Assumptions}
\paragraph{Multi-Player Bandits} We consider an $M$-player $K$-armed bandit problem with $M \le K$, parameterized by $\bm\mu = (\mu_1, \dots, \mu_K)$. The ``true'' reward of arm $k=1,...,K$ at time $t=1,...,T$ is denoted by $X_k(t)$, and we assume that $(X_k(t))_{t \ge 1}$ is i.i.d. Bernoulli distributed with expected value $\mathbb{E}[X_k(t)] = \mu_k$. At time $t=1,...,T$, player $m=1,...,M$ chooses an arm $\pi_m(t) \in \{1,...,K\}$ based on their past observations. If two players choose the same arm, we say that a collision occurs and both of them receive a reward of $0$. Formally, we define the collision indicator variable
$$
\eta_{k}(t) = \mathbbm{1} \Bigg\{ \# \{m=1,...,M: \pi_m(t) = k\} \geq 2 \Bigg\}
$$
so that $\eta_{k}(t) = 1$ if at least two distinct players have chosen arm $k$, leading to a collision, and $\eta_{k}(t) = 0$ otherwise.
\paragraph{Rewards and Information Structure}
The reward obtained by player $m=1,...,M$ can then be written as
$$
r_m(t) = X_{\pi_m(t)}(t)\Big(1 - \eta_{\pi_m(t)}(t)\Big),
$$
so that if they collide with another player, they get a $0$ reward and otherwise they get the reward of the selected arm $\pi_{m}(t)$ which is $X_{\pi_m(t)}(t)$.
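The collision mechanism above can be sketched as a toy simulator (illustrative only; `play_round` and its arguments are hypothetical names, not from the paper):

```python
import random

# One round of the multi-player bandit described above: each player m
# picks arm pi[m]; colliding players receive 0, a lone player receives
# a Bernoulli(mu_k) sample.

def play_round(pi, mu, rng=random):
    counts = {}
    for k in pi:                         # how many players chose each arm
        counts[k] = counts.get(k, 0) + 1
    rewards = []
    for k in pi:
        if counts[k] >= 2:               # collision indicator eta_k(t) = 1
            rewards.append(0)
        else:                            # X_k(t) ~ Bernoulli(mu_k)
            rewards.append(1 if rng.random() < mu[k] else 0)
    return rewards
```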
Informally, the multi-player multi-armed bandit problem is simply an extension of the classical single-player multi-armed bandit problem, where several users simultaneously explore the arms in a distributed manner, subject to collisions.
In this work we consider no collision nor sensing information, the setting in which one gets the minimal amount of information. Namely, player $m$ observes solely their reward $r_m(t)$. They do not observe the channel reward $X_{\pi_m(t)}(t)$, nor do they observe the collision indicator $\eta_{\pi_m(t)}(t)$ or the rewards obtained by other players. In short, the problem is fully distributed. We also assume that players initially have no information whatsoever about the mean rewards of the arms $\bm\mu$, no information about the number of players $M$, and no information about their index $m$, so that all players must behave symmetrically.
\paragraph{Optimal Policy and Regret}
In order to maximize the total reward, players must find the $M$ best arms, and assign one distinct arm to each player in order to avoid collisions, which would yield a total expected reward of $\sum_{m=1}^{M} \mu_{(m)}$,
where $\mu_{(1)} \geq ... \geq \mu_{(K)}$ are the sorted expected rewards of each arm $\mu_1,...,\mu_{K}$. We define the regret
$$
R_{\bm\mu}(T) = \sum_{t=1}^T \left(\sum_{m=1}^M \mu_{(m)} - \sum_{m=1}^M \mathbb{E} [r_m(t)] \right)
$$
which is the difference in terms of cumulative expected reward between an oracle that knows $\bm\mu$ and acts optimally versus that of the algorithm considered.
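The empirical counterpart of this regret can be computed as follows (an illustrative helper, assuming a log of per-round reward vectors; the oracle term is the sum of the $M$ largest means):

```python
# Empirical regret of a simulated run: the oracle collects the M
# largest means each round; we subtract the rewards actually obtained.

def cumulative_regret(mu, rewards_per_round):
    M = len(rewards_per_round[0])                 # number of players
    oracle = sum(sorted(mu, reverse=True)[:M])    # sum of M best means
    return sum(oracle - sum(r) for r in rewards_per_round)
```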
\subsection{Related Work}
\paragraph{Centralized setting}
When players' decisions are managed by a central controller, the system is said to be centralized, and the problem amounts to the multiple-play bandit problem, where the controller chooses a set of $M$ arms at each time step. This model was introduced by \cite{anantharam} and further studied by \cite{komiyama15}. The high cost of a central controller in cognitive-radio-network applications, however, motivated \cite{liuzhao} to introduce the more interesting but more difficult decentralized setting, where players can only observe their own actions and received rewards.
\paragraph{Decentralized setting, with collision/sensing information}
Since then, the decentralized setting assuming collision information (knowledge of the collision indicator $\eta_{\pi_m(t)}(t)$) and/or sensing information (knowledge of $X_{\pi_m(t)}$) has been well explored \citep{liuzhao, anandkumar, rosenski, besson_multiplayer_revisited, boursier_sicmmab}. Recently, \cite{boursier_sicmmab} introduced SIC-MMAB, an algorithm which carefully leverages collisions between players in order to communicate. By doing so, they prove that its regret matches asymptotically the lower bound of the centralized problem, up to a universal constant.
\paragraph{Decentralized setting, without collision nor sensing information}
The decentralized setting without collision nor sensing information has been much less investigated. It was first introduced by \cite{bonnefoi}. They proposed the Selfish UCB algorithm and its application to IoT networks showed promising experimental results. Unfortunately, \cite{besson_multiplayer_revisited} then conjectured that with constant probability, it may incur linear regret, a negative result which was further confirmed by \cite{boursier_sicmmab}. Meanwhile, \cite{lugosi} provided the first two algorithms proven to achieve a logarithmic regret. \cite{boursier_sicmmab} followed with SIC-MMAB2, an adapted version of SIC-MMAB, which also achieves a logarithmic regret. Building upon their work, \cite{shi_ec_sic} proposed EC-SIC, an algorithm which improves the efficiency of the communication phase of SIC-MMAB2 by cleverly using an error correction code to transmit full statistics of the players. In Table~\ref{related_work} we report known regret upper bounds for these algorithms.
In the decentralized setting without collision and sensing information, the question of whether an algorithm can reach similar performance as in the centralized setting remains unanswered. For the simplest case where $M=2$ and $K =3$, \cite{bubeck} proposed a collision avoiding algorithm which achieves a problem independent regret of $O(\sqrt{T\log T})$. Note however that their setting is slightly different as it is cooperative, meaning that the two players have assigned roles at the beginning of the game. Their lower bound on the full-information feedback model also suggests that the $\log T$ term is necessary for the bandit feedback model.
\begin{table*}[t]
\caption{Existing non-cooperative algorithms for the decentralized setting without collision or sensing information.}\label{related_work}
\vskip 0.15in
\begin{small}
\begin{tabular}{lccr}
\toprule
Algorithm & Required Knowledge & Asymptotic Upper Bound \\
\midrule
Algorithm 1 & $M$ & $O\left(\frac{MK\log T}{( \mu_{(M)} - \mu_{(M+1)} )^2}\right)$ \\
\cite{lugosi} & & \\
Algorithm 2 & $\mu_{(M)}$ & $O\left(\frac{K^2M \log^2 T }{\mu_{(M)}} + KM \min \left(\sqrt{T \log T}, \frac{\log T}{\mu_{(M)}- \mu_{(K)}}\right) \right)$ \\
\cite{lugosi} & & \\
SIC-MMAB2 & $\mu_{(K)}$ & $
O\left(\sum_{k>M} \min \lbrace \frac{M\log T}{\mu_{(M)} - \mu_{(k)}}, \sqrt{MT \log T} \rbrace + \frac{M K^2}{\mu_{(K)}} \log T\right)$ \\
\cite{boursier_sicmmab} & & \\
EC-SIC & $\Delta$, $\mu_{(K)}$ & $O\left(\sum_{k>M} \frac{\log T}{\mu_{(M)} - \mu_{(k)}} + \left(\frac{M^2 K}{E(\mu_{(K)})} \log \frac{1}{\Delta} + \frac{MK}{\mu_{(K)}} \right) \log T \right) $ \\
\cite{shi_ec_sic} & & \\
\bottomrule
\end{tabular}
\end{small}
\vskip -0.1in
\end{table*}
\paragraph{Dynamic setting} An even more interesting setting for real-world applications is the dynamic setting, where the number of players $M$ is no longer fixed: players can leave or enter the game. Under the collision information assumption, \cite{avner2014concurrent} propose the MEGA algorithm and show that it is robust when a player leaves the game. Later on, in the setting where players can leave the game only after a specific time, \cite{rosenski} propose the Dynamic Musical Chairs algorithm, which consists in resetting the Musical Chairs algorithm at a certain frequency. For the dynamic setting without collision nor sensing information, the literature is still scarce at the moment. \cite{boursier_sicmmab} proposed an algorithm with logarithmic regret, DYN-MMAB, under quasi-asynchronicity, that is, the hypothesis that players can enter but cannot leave the game. When players are allowed to leave, they suggest generalizing their algorithm by resetting it.
\subsection{Contributions}
A drawback of current state-of-the-art algorithms in the decentralized setting without collision nor sensing information is that they all assume the unrealistic knowledge of certain parameters of the environment, such as the number of players $M$ \citep{lugosi}, the mean reward of the $M$-th best channel $\mu_{(M)}$ \citep{lugosi}, the gap $\Delta = \mu_{(M)}-\mu_{(M+1)}$ and/or a lower bound $\mu_{\min}$ on $\mu_{(K)}$ \citep{boursier_sicmmab, shi_ec_sic}, which are usually unknown to the users in real-world applications.
We propose Randomized Selfish KL-UCB, an algorithm derived from Selfish KL-UCB, which does not rely on such unrealistic assumptions. This algorithm also does not suffer from the negative results of Selfish KL-UCB stressed by \cite{besson_multiplayer_revisited, boursier_sicmmab}, and we show through extensive experiments that it performs far better than state-of-the-art algorithms in almost all environments, except some edge cases (Section~\ref{comparison_sota}), where it still seems to incur a logarithmic regret. Moreover, our experiments reveal that, in some environments, the performance of Randomized Selfish KL-UCB is even better than state-of-the-art algorithms which assume collision or sensing information such as SIC-MMAB \citep{boursier_sicmmab}, and MCTopM \citep{besson_multiplayer_revisited} (Section~\ref{sec:comparison_wo_coll_info}).
For the dynamic setting, we carry out experiments which also emphasize the potential of Randomized Selfish KL-UCB.
Under quasi-asynchronicity assumption, our experiments show that Randomized Selfish KL-UCB outperforms by far DYN-MMAB.
Furthermore, we propose a new, more realistic dynamic setting, where players can enter and leave at any moment. Since, to the best of our knowledge, no algorithm exists for this setting, we compare our algorithm to a simple Musical Chairs \cite{rosenski} baseline, and show that, again, Randomized Selfish KL-UCB is very promising.
All code used for conducting experiments is publicly available at \url{https://github.com/ctrnh/multi_player_multi_armed_bandit_algorithms}.
\section{Proposed Algorithm}
We now highlight the proposed algorithm, the rationale behind its construction and list some of the shortcomings of the state-of-the-art algorithms.
\subsection{Computation of Statistics}
For player $m=1,...,M$ and arm $k=1,...,K$ we define
$$
N_{m,k}(t) = \sum_{s=1}^{t-1} \mathbbm{1} \{ \pi_m(s) = k \},
$$
the number of times player $m$ has selected arm $k$ between time step $1$ and $t-1$, as well as
$$
\hat{\mu}_{m,k}(t) = {1 \over \max( N_{m,k}(t),1) } \sum_{s=1}^{t-1} r_{m}(s) \mathbbm{1} \{ \pi_m(s) = k \},
$$
the average empirical reward obtained by player $m$ from arm $k$ between time step $1$ and $t-1$. Note that for all $k=1,...,K$, both $N_{m,k}(t)$ and $\hat{\mu}_{m,k}(t)$ are available to player $m$ at time $t$ based on the model assumptions. We will use those two statistics in order to design our algorithms.
\subsection{The Selfish KL-UCB Algorithm}
Selfish KL-UCB, an algorithm proposed by \cite{besson_multiplayer_revisited}, has each player $m=1,...,M$ choose the arm
$$
\pi_m(t) \in \arg\max_{k=1,...,K} \{ b_{m,k}(t) \}
$$
maximizing the KL-UCB index defined as
$$
b_{m,k}(t) = \max \{q \in [0,1]: N_{m,k}(t) d( \hat{\mu}_{m,k}(t) , q ) \le f(t) \}
$$
where $f(t) = \log t + c \log\log t$, with $c\ge 0$ (in practice, one usually simply sets $c=0$) and
$$
d(\mu , \lambda) = \mu \log {\mu \over \lambda} + (1-\mu) \log {1-\mu \over 1 - \lambda}
$$
is the Kullback-Leibler divergence between Bernoulli distributions with means $\mu$ and $\lambda$. The pseudo code for Selfish KL-UCB is stated as Algorithm~\ref{alg:SelfishKLUCB}.
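The KL-UCB index has no closed form, but since $d(\hat\mu, q)$ is increasing in $q$ for $q \ge \hat\mu$, it can be computed by bisection. A minimal sketch (illustrative, using $f(t)=\log t$, i.e. $c=0$):

```python
import math

# KL-UCB index: the largest q in [mu_hat, 1] with
#   N * d(mu_hat, q) <= log t,
# where d is the Bernoulli KL divergence defined above.

def bern_kl(mu, lam, eps=1e-12):
    mu = min(max(mu, eps), 1 - eps)    # clip away from 0 and 1
    lam = min(max(lam, eps), 1 - eps)
    return mu * math.log(mu / lam) + (1 - mu) * math.log((1 - mu) / (1 - lam))

def klucb_index(mu_hat, N, t, iters=50):
    if N == 0:
        return 1.0                     # unexplored arms get the maximal index
    budget = math.log(t) / N
    lo, hi = mu_hat, 1.0
    for _ in range(iters):             # bisection on the increasing map q -> d(mu_hat, q)
        mid = (lo + hi) / 2
        if bern_kl(mu_hat, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo
```

As expected, the index shrinks toward $\hat\mu$ as the arm is played more often.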
As its name indicates, Selfish KL-UCB is a straightforward extension to the multi-player setting of the KL-UCB algorithm \cite{klucb}, an optimistic algorithm which is provably asymptotically optimal in the single-player setting. It is called ``Selfish'' since each player acts as if other players did not exist and attempts to play optimally in the single player setting.
While Selfish KL-UCB is both conceptually simple and elegant, there exist cases in which its regret is linear, as proven by \cite{boursier_sicmmab} and stated in Proposition~\ref{prop:selfishkl}\footnote{To be more accurate, the authors of \cite{boursier_sicmmab} analyze Selfish UCB1, which is simply Selfish KL-UCB using the UCB1 index instead of the KL-UCB index. Experimentally, both Selfish UCB1 and Selfish KL-UCB exhibit the same problematic behaviour.}. Selfish KL-UCB has an all-or-nothing behaviour: on some sample paths it performs very well, while on others a subset of players simply collide endlessly, causing linear regret.
\begin{proposition}\label{prop:selfishkl}
There exists $K$, $M$ and $\bm\mu \in [0,1]^K$ such that under Selfish UCB1:
$$
\liminf_{T \to \infty} {R_{\bm\mu}(T) \over T} > 0
$$
i.e. the regret grows linearly.
\end{proposition}
\subsection{Proposed Algorithm: Randomized Selfish KL-UCB}
The reason why Selfish KL-UCB performs poorly on some sample paths is due to its symmetry. Indeed, consider two players $m \ne m'$ who, at time $t \in [T]$, have the same observations, that is $N_{m,k}(t) = N_{m',k}(t)$ and $\hat{\mu}_{m,k}(t) = \hat{\mu}_{m',k}(t)$ for all $k=1,...,K$. Then, by construction of Selfish KL-UCB they will choose the same arm $\pi_{m}(t) = \pi_{m'}(t)$ and collide, and this cascade of collisions might go on forever.
We propose to alleviate the problem by adding randomization in order to break symmetry, by selecting arm
$$
\pi_{m}(t) \in \arg\max_{k=1,...,K} \left\{ b_{m,k}(t) + {Z_{m,k}(t) \over t} \right\}
$$
where $(Z_{m,k}(t))_{m=1,...,M,k=1,...,K,t=1,...,T}$ are i.i.d. Gaussian with mean $0$ and variance $1$.
The variables $(Z_{m,k}(t))_{k=1,...,K}$ represent the internal randomization of player $m$ and it is noted that, of course, they are known only to player $m$. Informally, in order to break symmetry, each player maximizes the KL-UCB index perturbed by a small Gaussian perturbation. We call this algorithm Randomized Selfish KL-UCB, and we will show that it outperforms all known algorithms in Section~\ref{comparison_sota}. The pseudo code for Randomized Selfish KL-UCB is stated as Algorithm~\ref{alg:RandomizedSelfishKLUCB}.
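As a minimal sketch of this selection rule (a toy illustration with a generic index vector standing in for the KL-UCB indices; `select_arm` is a hypothetical helper name):

```python
import random

# Symmetry-breaking selection: each player adds an independent Gaussian
# perturbation Z_{m,k}(t)/t to its index before taking the argmax, so
# two players with identical observations no longer systematically
# choose the same arm.

def select_arm(indices, t, rng):
    perturbed = [b + rng.gauss(0.0, 1.0) / t for b in indices]
    return max(range(len(perturbed)), key=perturbed.__getitem__)

# Two players holding identical indices, each with its own internal
# randomness:
rng1, rng2 = random.Random(1), random.Random(2)
b = [0.7, 0.7, 0.7]
picks = {(select_arm(b, t, rng1), select_arm(b, t, rng2))
         for t in range(1, 50)}
```

With identical indices, the unperturbed rule would force a collision every round; the perturbed rule makes the two players disagree with positive probability.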
A rationale for Randomized Selfish KL-UCB is that if two players have the same observations, then under Selfish KL-UCB they will choose the same arm with probability $1$ and trigger a potentially infinite cascade of collisions, whereas under Randomized Selfish KL-UCB there is a positive probability that they will choose different arms, so that collisions eventually stop. The infinite cascade of collisions occurs especially often when the number of players and the number of arms are small. Figure~\ref{fig:rnd_nornd} illustrates, in an environment where $M=2$ and $K=2$, that adding randomization eliminates this problem: over $500$ runs, while Selfish KL-UCB incurs a linear regret in almost $200$ runs, the histogram shows a single mode for the total cumulative regret of Randomized Selfish KL-UCB, which did not exceed $10^3$ in any of the runs.
\begin{figure}
\centering
\includegraphics[scale=0.5]{22_rndnorndhist.png}
\includegraphics[scale=0.25]{22_rndnorndcumregret.png}
\caption{Environment: $M=2$, $K=2$, $\bm\mu = (0.9,0.1)$, $T=10,000$. Algorithms: Selfish KL-UCB, Randomized Selfish KL-UCB. Histogram of total cumulative regrets over 500 runs (left). Cumulative regret with respect to $t$, shaded areas represent the $90$-th percentile (right). }
\label{fig:rnd_nornd}
\end{figure}
\subsection{More Rationale for Randomized Selfish KL-UCB: Single-Player Setting}
Another rationale for Randomized Selfish KL-UCB is understood by analyzing it in the single-player case. Indeed, any good algorithm for the multi-player multi-armed bandit problem should at least be asymptotically optimal when applied in the single-player case. Proposition~\ref{prop:singleplayer} states that Randomized Selfish KL-UCB, just like Selfish KL-UCB, is asymptotically optimal in this case.
As a corollary, this also proves that Randomized Selfish KL-UCB performs well in a setting where a given player $m$ applies Randomized Selfish KL-UCB while all other players each select a constant arm, since this reduces to the single-player case by replacing $\mu_k$ by $0$ whenever a player $m' \ne m$ plays arm $k$.
Also, when inspecting the proof in detail, we understand why the magnitude of the randomization term is chosen as ${1 \over t}$: it is small enough not to break asymptotic optimality, at least in the single-player case.
\begin{proposition}\label{prop:singleplayer}
Consider the single player case $M=1$. Then under Randomized Selfish KL-UCB, for any $\bm\mu \in [0,1]^K$ and any $k$ such that $\mu_k < \mu_{(1)}$ we have
$$
\limsup_{T \to \infty} {\mathbb{E}[N_{k}(T)] \over \log T} \le {1 \over d(\mu_k,\mu_{(1)})}
$$
i.e. the algorithm is asymptotically optimal.
\end{proposition}
Proof: see Section~\ref{sec:proofs}.
\subsection{Randomized Selfish KL-UCB: Analysis}
Despite our most sincere efforts, we were unable to prove a regret upper bound for Randomized Selfish KL-UCB in the multi-player setting. However, numerical experiments show that it outperforms all known algorithms sometimes by several orders of magnitude, as can be seen in the following section. We conjecture that Randomized Selfish KL-UCB has logarithmic regret, and we believe that this is an important, but certainly challenging open problem.
\section{Comparison to State-of-the-art Algorithms} \label{comparison_sota}
{\bf Algorithms} We now compare Randomized Selfish KL-UCB to state-of-the-art algorithms: EC-SIC \cite{shi_ec_sic}, SIC-MMAB2 \cite{boursier_sicmmab} and Algorithm 2 of \cite{lugosi} under various environments. For the settings with $M=2, K=3$, we also add the cooperative algorithm of \cite{bubeck}. Although it enjoys an asymptotically logarithmic regret, we do not plot Algorithm 1 of \cite{lugosi} because it converges too slowly: even for a very favorable case, such as $\Delta = 0.5$, $M = 2$, $K = 3$ and $T = 2 \times 10^6$, the exploration phase lasts at least $24 \cdot 128 K\log(3K M^2T^2)/ \Delta^2 > 1.2 \times 10^6$ time steps, leading to a linear regret in all our settings.
{\bf Parameter tuning} When algorithms require a hyperparameter depending on the environment, we provide the best possible value. That is, for SIC-MMAB2 and EC-SIC, which require a lower bound on $\mu_{(K)}$, we provide $\mu_{(K)}$ itself. Similarly, for Algorithm 2 of \cite{lugosi}, which we call Lugosi2 and which requires a lower bound on $\mu_{(M)}$, we provide $\mu_{(M)}$. For EC-SIC, we use the parameter setting $p=5$ as suggested by \cite{shi_ec_sic}.
All experiments are averaged over at least $50$ runs, and the shaded areas represent $95\%$ confidence intervals.
\subsection{Linearly Spaced $\bm\mu$}
We first evaluate algorithms on environments where the means of the arms are linearly spaced:
$$
\mu_{(k)} = \mu_{(1)} {K-k \over K-1} + \mu_{(K)} {k-1 \over K-1} \;,\; k=1,...,K
$$
We consider $M =2, 5, 10$ players and a horizon of $T=2 \times 10^6$ time steps.
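These means can be generated directly from the formula above (equivalent to `numpy.linspace(mu_1, mu_K, K)`); a minimal sketch:

```python
# Linearly spaced arm means, following the formula above:
#   mu_(k) = mu_(1) * (K-k)/(K-1) + mu_(K) * (k-1)/(K-1),  k = 1..K.

def linspace_means(mu_best, mu_worst, K):
    return [mu_best * (K - k) / (K - 1) + mu_worst * (k - 1) / (K - 1)
            for k in range(1, K + 1)]
```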
For each value of $(M,K)$, we evaluate the algorithms on three settings: \begin{itemize} \item{(i)} $(\mu_{(1)},\mu_{(K)})=(0.99,0.01)$ \item{(ii)} $(\mu_{(1)},\mu_{(K)})=(0.2,0.1)$ \item{(iii)} $(\mu_{(1)},\mu_{(K)})=(0.9,0.8)$ \end{itemize} We expect setting (i) to be the hardest and setting (iii) the easiest for SIC-MMAB2. Indeed, in setting (i) the length of its phases is large: for example, when $(M,K) =(2,3)$, the first exploration phase has length $4800 K\log(T)/ \mu_{(K)} \ge 2 \times 10^7$, far greater than $T$, as illustrated in Figure~\ref{fig:linearly_m_2_K_3}. On the other hand, in setting (iii) the first exploration phase lasts close to $3 \times 10^5$ time steps, and the following phases have lengths of similar order. Although the phase lengths of EC-SIC are also inversely proportional to $\mu_{(K)}$, the communication of the players' complete statistics, combined with the longer exploration phase ($p=5$), allows it to classify the very good ($\mu_{(1)} = 0.99$) and very bad ($\mu_{(K)} = 0.01$) arms faster.
\begin{figure}[t]
\vskip -0.1in
\begin{center}
\includegraphics[width=0.32\columnwidth]{31_m2k3_02.png}
\includegraphics[width=0.32\columnwidth]{31_m2k3_08.png}
\includegraphics[width=0.32\columnwidth]{31_m2k3_099.png}
\includegraphics[width=0.32\columnwidth]{31_m2k5_02.png}
\includegraphics[width=0.32\columnwidth]{31_m2k5_08.png}
\includegraphics[width=0.32\columnwidth]{31_m2k5_099.png}
\includegraphics[width=0.32\columnwidth]{31_m5k10_02.png}
\includegraphics[width=0.32\columnwidth]{31_m5k10_08.png}
\includegraphics[width=0.32\columnwidth]{31_m5k10_099.png}
\includegraphics[width=0.32\columnwidth]{31_m10k15_02.png}
\includegraphics[width=0.32\columnwidth]{31_m10k15_08.png}
\includegraphics[width=0.32\columnwidth]{31_m10k15_099.png}
\caption{Rows correspond to $(M,K)=(2,3)$, $(2,5)$, $(5,10)$ and $(10,15)$, respectively. Columns correspond to linearly spaced environments $\bm\mu = (0.2, \dots, 0.1)$, $\bm\mu = (0.9, \dots, 0.8)$ and $\bm\mu = (0.99, \dots, 0.01)$, respectively.}
\label{fig:linearly_m_2_K_3}
\end{center}
\vskip -0.2in
\end{figure}
As shown by Figure~\ref{fig:linearly_m_2_K_3}, Randomized Selfish KL-UCB outperforms the other algorithms by far, sometimes by several orders of magnitude: for example, for $(M,K) =(5,10)$ in setting (ii), the regret of Randomized Selfish KL-UCB is $200$ times smaller than that of EC-SIC, the best-performing state-of-the-art algorithm.
\subsection{Variation of the Regret with Respect to Environment Parameters}
We now study how variations of different environment parameters influence the algorithms' performance.
We report the cumulative regret of each algorithm averaged over $50$ runs as a function of:
\begin{enumerate}
\item $\mu_{(K)}$: for an environment with $M=5$ players and $K=9$ arms, we take $\bm\mu$ linearly spaced between $\mu_{(1)} =0.9$ and $\mu_{(K)}$, where $\mu_{(K)}$ varies between $0.1$ and $0.8$
\item $\Delta$: for an environment with $M=5$ players and $K=9$ arms, we take $\bm\mu = (0.99, \dots, \mu_{(M)}, 0.8, \dots, 0.7)$, where $\mu_{(M)} \in \{0.9, 0.85, 0.81, 0.805,0.801\}$. Note that we chose $\mu_{(K)}$ high enough to favor the SIC algorithms, so that the regret of SIC-MMAB2 does not grow linearly like in the first column of Figure~\ref{fig:linearly_m_2_K_3}.
\item $M$: for an environment with $K=10$ arms, we vary $M$ from $1$ to $9$ (EC-SIC does not work for $K=M$) and we take $\bm\mu$ linearly spaced between $\mu_{(1)} =0.9$ and $ \mu_{(K)} = 0.1$.
\end{enumerate}
\begin{figure}[ht]
\vskip -0.1in
\begin{center}
\includegraphics[scale=0.24]{32_muK}
\includegraphics[scale=0.24]{32_Delta}
\includegraphics[scale=0.24]{32_M}
\caption{Total cumulative regret as a function of $\mu_{(K)}$ (top), $\Delta$ (middle), $M$ (bottom) for all state-of-the-art algorithms.}\label{fig:wrt_delta_mu_min}
\end{center}
\vskip -0.2in
\end{figure}
For all these parameter values, Figure~\ref{fig:wrt_delta_mu_min} shows the superiority of Randomized Selfish KL-UCB over other algorithms. Note that even if EC-SIC seems close to Randomized Selfish KL-UCB in those three plots, the difference is actually quite significant.
Note also that although SIC-MMAB2 seems to perform better for small $\mu_{(K)}$ than for very high $\mu_{(K)}$, this is actually not the case: for small $\mu_{(K)}$, SIC-MMAB2 does not converge (similarly to the plots of the first column of Figure~\ref{fig:linearly_m_2_K_3}).
\subsection{A Corner Case: When All Means are Equal}
In the corner case where all arms have the same mean reward, we found that Randomized Selfish KL-UCB does not perform better than SIC-MMAB2, although its regret still seems logarithmic, as can be seen in Figure~\ref{fig:same_mean}. This can be explained by the fact that when all arms have the same mean, the best strategy is simply to remain in an orthogonalized configuration (a simple Musical Chairs should actually be an optimal strategy), which is exactly how SIC-MMAB2 behaves: players start with a Musical Chairs phase and then continue sequential hopping forever.
Nevertheless, this specific corner case is unlikely to occur in practice, and the good performance of SIC-MMAB2 in this setting is not robust to even very small perturbations, as shown in Figure~\ref{fig:same_mean}, where we add noise sampled from a uniform distribution centered at $0.5$ with width $0.02$, so that $\bm\mu$ is uniformly distributed in $[0.49,0.51]^K$.
\begin{figure}[ht]
\vskip -0.1in
\begin{center}
\includegraphics[scale=0.25]{33_same.png}
\includegraphics[scale=0.25]{33_almostsame.png}
\caption{$M=5, K=10$ When all means are equal $\bm\mu = (0.5, \dots, 0.5)$ (top), SIC-MMAB2 performs better than Randomized Selfish KL-UCB. When there is a small perturbation to this environment for example $\bm\mu \sim \mathcal{U}([0.49,0.51]^K)$ (bottom), it is not the case anymore.}
\label{fig:same_mean}
\end{center}
\vskip -0.2in
\end{figure}
\section{Comparison to Algorithms Assuming Collision Sensing}\label{sec:comparison_wo_coll_info}
In this section, we compare Randomized Selfish KL-UCB to algorithms that are state-of-the-art in the setting with collision and/or sensing information: SIC-MMAB \citep{boursier_sicmmab} and MCTopM \citep{besson_multiplayer_revisited}. Interestingly, Randomized Selfish KL-UCB often performs far better than SIC-MMAB, as shown by Figure~\ref{fig:comparison_with_coll_info}, and its performance approaches that of MCTopM, even outperforming it in certain environments.
\begin{figure}[t]
\vskip -0.1in
\begin{center}
\includegraphics[width=0.32\columnwidth]{4_m5k10_sic_02.png}
\includegraphics[width=0.32\columnwidth]{4_m5k10_sic_08.png}
\includegraphics[width=0.32\columnwidth]{4_m5k10_sic_099.png}
\includegraphics[width=0.32\columnwidth]{4_m10k15_sic_02.png}
\includegraphics[width=0.32\columnwidth]{4_m10k15_sic_08.png}
\includegraphics[width=0.32\columnwidth]{4_m10k15_sic_099.png}
\caption{Rows correspond to $(M,K)=(5,10)$ and $(M,K)=(10,15)$, respectively. Columns correspond to linearly spaced environments $\bm\mu = (0.2, \dots, 0.1)$, $\bm\mu = (0.9, \dots, 0.8)$ and $\bm\mu = (0.99, \dots, 0.01)$, respectively.}
\label{fig:comparison_with_coll_info}
\end{center}
\vskip -0.2in
\end{figure}
\section{Dynamical Setting}
As was noted by \cite{boursier_sicmmab}, the SIC algorithms (SIC-MMAB, SIC-MMAB2 and EC-SIC) rely heavily on the static assumption, which allows players to communicate and ``hack the system'' by exploiting a perfect synchronization between all players at all times. In practical applications, however, this assumption is very unrealistic, as players neither arrive nor leave at the same time (for instance in communication networks).
In this section, we study experimentally the dynamical setting without collision or sensing information, a setting in which little work has been done so far.
\subsection{Quasi-Asynchronicity}
For the dynamical setting without collision information, \cite{boursier_sicmmab} proposed DYN-MMAB, an algorithm with logarithmic regret. They consider the quasi-asynchronous setting, where players can enter the system whenever they want, but cannot leave until the end of the time horizon.
Formally, player $m$ enters at time $\tau_m \in \{0,...,T\}$ and
stays until the final horizon $T$. The value of $\tau_m$ is unknown to all players (including $m$), who are only aware of their individual horizon $T-\tau_m$ and their own internal clock $t - \tau_m$.
We model the arrival of players by a Poisson process, starting with one player at the beginning of the game. We let the maximum number of players be $M=K$, so that for a sufficiently long horizon the system ends up saturated by the end of the game.
Figure~\ref{fig:dynamic} shows that DYN-MMAB converges slowly in comparison to Randomized Selfish KL-UCB. For Randomized Selfish KL-UCB, this setting might actually be even easier, as players enter sequentially. Intuitively, if $M$ players have been playing for a long time, they have likely settled on a preferred arm. If a new player enters, she effectively faces a system akin to a single-player bandit with $K-M$ arms. This especially makes sense in light of Proposition~\ref{prop:singleplayer}, which treats the single-player case.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.45]{52_dynamic.png}
\includegraphics[scale=0.45]{52_rndselfish.png}
\caption{Quasi-asynchronous environment with $\lambda = 10^{-4}$. Cumulative regret of DYN-MMAB and Randomized Selfish KL-UCB with $K=4$ (left). Cumulative regret of Randomized Selfish KL-UCB with $M=5$, players entering at $\{0, 4912, 13703, 15970, 18278\}$ (a sample from a Poisson process with rate $\lambda = 10^{-4}$) (right).}
\label{fig:dynamic}
\end{center}
\vskip -0.2in
\end{figure}
\subsection{When Players are Allowed to Leave}
Although the quasi-asynchronous setting is a step forward towards a realistic dynamical setting, in many real-world systems users leave whenever they want, in an asynchronous manner.
If we add the assumption that players can only leave at specific intervals, \cite{boursier_sicmmab} propose to adapt DYN-MMAB by resetting the algorithm at each of these intervals.
We propose to study experimentally an even more realistic dynamic setting in which players can enter and leave the system at any moment. Denote by $M(t)$ the number of players present in the system at time $t$. We model the arrivals and departures as an M/M/K queue:
\begin{itemize}
\item arrivals follow a Poisson process with rate $\lambda$,
\item players stay for an exponentially distributed duration with mean $1/\nu$,
\item when the system is saturated, that is $M(t)=K$, entering players are blocked.
\end{itemize}
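In discrete time, this arrival/departure model can be sketched as follows (function and parameter names are ours): at each step a player arrives with probability $\lambda$ unless the system is saturated, and each present player leaves with probability $\nu$.

```python
import random

def simulate_occupancy(T, K, lam, nu, seed=0):
    """Discrete-time sketch of the M/M/K model: per-step arrival
    probability lam (arrivals blocked when M(t) = K), per-player
    departure probability nu."""
    rng = random.Random(seed)
    m, history = 0, []
    for _ in range(T):
        m -= sum(rng.random() < nu for _ in range(m))  # departures
        if m < K and rng.random() < lam:               # blocked if saturated
            m += 1
        history.append(m)
    return history

occupancy = simulate_occupancy(T=10**5, K=10, lam=1/1000, nu=1/10_000)
```

With these rates the offered load is $\lambda/\nu = 10$, so the system spends much of its time near saturation.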
As players constantly enter and leave the system, instead of reporting the cumulative regret with respect to $T$, we measure the performance of Randomized Selfish KL-UCB by computing an expected reward per unit time $\mathcal{R}$. More specifically, the performance of an algorithm is
\begin{align*} \mathcal{R} &= {1 \over T} \sum_{t=1}^T \sum_{m=1}^{M(t)} \mathbb{E} [r_{m}(t)] \\
&= {1 \over T} \sum_{t=1}^T \sum_{m=1}^{M(t)} \mathbb{E} \Big[\mu_{\pi_m(t)} \big(1 -\eta_{\pi_m(t)}(t)\big)\Big],
\end{align*}
while the performance of the optimal oracle algorithm is
$$\mathcal{R}^\star = \frac{1}{T} \sum_{t=1}^T \sum_{m=1}^{M(t)} \mu_{(m)}$$
and we report the performance ratio ${\mathcal{R} \over \mathcal{R}^\star}$.
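Given the per-step choices and collision indicators logged by a simulation, the ratio $\mathcal{R}/\mathcal{R}^\star$ can be computed directly. The helper below is an illustrative sketch (names are ours):

```python
import numpy as np

def performance_ratio(mu, chosen, collided, M_t):
    """mu      : arm means
       chosen  : chosen[t] = arms picked by the M(t) players at time t
       collided: matching 0/1 collision indicators eta
       M_t     : number of players present at each time step"""
    mu = np.asarray(mu)
    best = np.sort(mu)[::-1]                     # mu_(1) >= mu_(2) >= ...
    R = sum(mu[k] * (1 - eta)
            for arms, etas in zip(chosen, collided)
            for k, eta in zip(arms, etas))
    R_star = sum(best[:m].sum() for m in M_t)    # oracle: the m best arms
    return R / R_star
```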
Under the realistic scenario where players arrive at a rate of $1$ person per second (where $1$ second corresponds to $10^3$ time steps), and stay for $10$ seconds ($10^4$ time steps), with an average number of $8$ players, the algorithm achieves a ratio of $95\%$ compared to the optimal oracle algorithm.
We report in Table~\ref{table:dyn_mc} the ratio between the performance of Randomized Selfish KL-UCB and that of the optimal oracle algorithm, for several values of $\lambda$ and $\nu$.
With this model, Randomized Selfish KL-UCB performs almost like the optimal oracle algorithm.
In comparison, we also report the ratio between the performance of Musical Chairs and that of the optimal oracle algorithm.
\begin{table}[h]
\caption{Dynamic setting. Performance ratio against optimal oracle algorithm, for Randomized Selfish KL-UCB (top) and Musical Chairs (bottom), with $T = 10^6$, $K=10$. }
\label{table:dyn_mc}
\centering
\begin{small}
\begin{tabular}{|l|c|c|}
\hline
Randomized Selfish KL-UCB & $\lambda = 1/1000$ & $\lambda = 1/10,000$ \\
\hline
$\nu = 1/500$ & 91 $\pm$ 2 \% & 91 $\pm$ 1 \% \\
\hline
$\nu = 1/1000$ & 92 $\pm$ 2\% & 94 $\pm$ 1 \% \\
\hline
$\nu = 1/10,000$ & 93 $\pm$ 1 \% & 97 $\pm$ 1 \% \\
\hline
\end{tabular}
\medskip
\begin{tabular}{|l|c|c|}
\hline
Musical Chairs & $\lambda = 1/1000$ & $\lambda = 1/10,000$ \\
\hline
$\nu = 1/500$ & 69 $\pm$ 1 \% & 69 $\pm$ 3\% \\
\hline
$\nu = 1/1000$ & 72 $\pm$ 1\% & 70 $\pm$ 3 \% \\
\hline
$\nu = 1/10,000$ & 90 $\pm$1 \% & 72 $\pm$ 3 \% \\
\hline
\end{tabular}
\end{small}
\end{table}
\section{Proof of Proposition~\ref{prop:singleplayer}}\label{sec:proofs}
\subsection{Technical Results}
\begin{lemma}[Chernoff bound for Gaussian variables] \label{lem:chenoff_gaussian}
Consider $Z \sim N(0,\sigma^2)$. Then for all $\delta > 0$ we have $\mathbb{P}( Z \ge \delta) \le e^{- {\delta^2 \over 2 \sigma^2}}$.
\end{lemma}
{\bf Proof:} A Chernoff bound yields, for any $\lambda > 0$,
\begin{align*}
\mathbb{P}( Z \ge \delta) = \mathbb{P}( e^{\lambda Z} \ge e^{\lambda \delta}) \le e^{-\delta \lambda }\,\mathbb{E}\, e^{\lambda Z} = e^{ {\lambda^2 \sigma^2 \over 2} -\delta \lambda }
\end{align*}
and setting $\lambda = \delta/\sigma^2$ yields the result. \hfill $\Box$
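The bound is easy to sanity-check numerically; the snippet below compares it with an empirical tail frequency for the $N(0,2)$ case used later in the proof (the sample size is an arbitrary choice):

```python
import math
import random

def gaussian_tail_bound(delta, sigma2):
    # Lemma: P(Z >= delta) <= exp(-delta^2 / (2 sigma^2)) for Z ~ N(0, sigma^2)
    return math.exp(-delta ** 2 / (2 * sigma2))

rng = random.Random(0)
sigma2, delta, n = 2.0, 1.5, 100_000
empirical = sum(rng.gauss(0, math.sqrt(sigma2)) >= delta for _ in range(n)) / n
assert empirical <= gaussian_tail_bound(delta, sigma2)
```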
\subsection{Proof of Proposition~\ref{prop:singleplayer}}
Consider the single-player case $M=1$. Define $k^\star \in \arg\max_{k=1,...,K} \mu_k$ an optimal arm, and let $k$ be a suboptimal arm, so that $\mu_k < \mu_{k^\star}$. Fix $0 < \delta < (\mu_{k^\star} - \mu_k)/2$. The analysis is based on that of KL-UCB \cite{klucb}.
Define the following events:
\begin{align*}
{\cal A}_t &= \{ b_{k^\star}(t) \le \mu_{k^\star} \} \\
{\cal B}_t &= \{ Z_{k}(t) - Z_{k^\star}(t) \ge t \delta \} \\
{\cal C}_t &= \{ k(t) = k: N_k(t) \le f(T)/d(\mu_k+\delta,\mu_{k^\star} - \delta) \} \\
{\cal D}_t &= \{ k(t) = k: \hat{\mu}_k(t) \ge \mu_k + \delta \}.
\end{align*}
Let us prove that if none of these events occurs, then $k(t) \ne k$, i.e. $k$ cannot be selected. If ${\cal A}_t$ does not occur then $b_{k^\star}(t) > \mu_{k^\star}$. If ${\cal C}_t$ and ${\cal D}_t$ both do not occur we have:
\begin{align*}
N_k(t) d(\hat{\mu}_k(t),\mu_{k^\star} - \delta) &\ge N_k(t) d(\mu_k+\delta,\mu_{k^\star} - \delta) \\
&> f(T) \ge f(t)
\end{align*}
therefore $b_k(t) \le \mu_{k^\star} - \delta$. If ${\cal B}_t$ does not occur either, we finally get
$$
b_k(t) + {Z_k(t) \over t} < b_{k^\star}(t) + {Z_{k^\star}(t) \over t}
$$
so that indeed, we cannot have $k(t) = k$.
So the number of times $k$ is selected is upper bounded as:
$$
N_{k}(T) \le \sum_{t=1}^T \mathbbm{1}\{ {\cal A}_t \} + \mathbbm{1}\{ {\cal B}_t \} + \mathbbm{1}\{ {\cal C}_t \} + \mathbbm{1}\{ {\cal D}_t \}
$$
From \cite{klucb} we have that:
$$
\sum_{t=1}^{+\infty} \mathbb{P}({\cal A}_t) \le C_1 \log \log T
$$
with $C_1 \ge 0$ a universal constant. Using the fact that $Z_k(t) - Z_{k^\star}(t)$ has a $N(0,2)$ distribution, Lemma~\ref{lem:chenoff_gaussian} gives
$$
\sum_{t=1}^{+\infty} \mathbb{P}({\cal B}_t) \le \sum_{t=1}^{+\infty} e^{- t^2 \delta^2/4 } < +\infty.
$$
When ${\cal C}_t$ occurs, $N_k$ is incremented, so that by a counting argument
$$
\sum_{t=1}^{+\infty} \mathbbm{1}\{ {\cal C}_t \} \le {f(T) \over d(\mu_k+\delta,\mu_{k^\star} - \delta)}
$$
Finally, using Hoeffding's inequality:
$$
\sum_{t=1}^{+\infty} \mathbb{P}({\cal D}_t) \le \sum_{n=0}^{+\infty} e^{- 2 n \delta^2} = {1 \over 1 - e^{- 2 \delta^2}} < \infty.
$$
Putting it together we have proven that
$$
\limsup_{T \to \infty} {\mathbb{E}[N_{k}(T)] \over \log T} \le {1 \over d(\mu_k+\delta,\mu_{k^\star} - \delta)}
$$
Since the above holds for $\delta$ arbitrarily small we have proven the announced result:
$$
\limsup_{T \to \infty} {\mathbb{E}[N_{k}(T)] \over \log T} \le {1 \over d(\mu_k,\mu_{k^\star})}.
$$
\section{Conclusion}
In this work, through extensive experiments, we emphasize the potential of Randomized Selfish KL-UCB as an optimal algorithm for the decentralized MP-MAB without collision and sensing information for the static setting. We argue that for real-world applications, Randomized Selfish KL-UCB is a very good candidate as it performs well, does not require any prior knowledge on the environment, and is simple to implement in comparison to its peers which rely on complex multiple phases and sometimes unrealistic communication through collisions between users. Moreover, for the more realistic dynamic setting, our experiments also show promising results.
We hope this work will encourage the community toward the analysis of this algorithm, a challenging but promising open problem.
\begin{algorithm}[tb]
\caption{Selfish KL-UCB (for player $m=1,...,M$)}
\label{alg:SelfishKLUCB}
\begin{algorithmic}
\FOR{$t=1, \dots, T$}
\FOR{$k=1,...,K$}
\STATE Compute
$
b_{m,k}(t) = \max \{q \in [0,1]: N_{m,k}(t) d( \hat{\mu}_{m,k}(t), q ) \le f(t) \}
$
\ENDFOR
\STATE Draw arm $\pi_m(t) = \arg\max_{k=1,\dots,K} \, b_{m,k}(t)$
\STATE Observe reward $r_m(t)$ and update statistics
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[tb]
\caption{Randomized Selfish KL-UCB (for player $m=1,...,M$)}
\label{alg:RandomizedSelfishKLUCB}
\begin{algorithmic}
\FOR{$t=1, \dots, T$}
\FOR{$k=1,...,K$}
\STATE Compute
$
b_{m,k}(t) = \max \{q \in [0,1]: N_{m,k}(t) d( \hat{\mu}_{m,k}(t) , q ) \le f(t) \}$
\STATE Draw $Z_{m,k}(t) \sim N(0,1)$
\ENDFOR
\STATE Draw arm $\pi_m(t) = \arg\max_{k=1,\dots,K} \, \Big\{ b_{m,k}(t) + {Z_{m,k}(t) \over t}\Big\}$
\STATE Observe reward $r_m(t)$ and update statistics
\ENDFOR
\end{algorithmic}
\end{algorithm}
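Putting the pieces together, a self-contained simulation of Randomized Selfish KL-UCB might look as follows. This is a sketch under our own modeling choices: the exploration function is taken as $f(t) = \log t$ (the precise $f$ is not fixed here), rewards are Bernoulli, and collisions yield a reward of $0$ that players cannot distinguish from an ordinary failure.

```python
import math
import random

def kl_bernoulli(p, q, eps=1e-12):
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def klucb_index(mu_hat, n, f_t, iters=60):
    if n == 0:
        return 1.0
    lo, hi = mu_hat, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if n * kl_bernoulli(mu_hat, mid) <= f_t:
            lo = mid
        else:
            hi = mid
    return lo

def randomized_selfish_klucb(mu, M, T, seed=0):
    """Each player runs KL-UCB on its own observations only; ties between
    players are broken by the decaying N(0,1)/t perturbation of the indices."""
    rng = random.Random(seed)
    K = len(mu)
    N = [[0] * K for _ in range(M)]     # per-player pull counts
    S = [[0.0] * K for _ in range(M)]   # per-player reward sums
    for t in range(1, T + 1):
        f_t = math.log(t)               # modeling choice, see above
        picks = []
        for m in range(M):
            idx = [klucb_index(S[m][k] / N[m][k] if N[m][k] else 0.0,
                               N[m][k], f_t) + rng.gauss(0, 1) / t
                   for k in range(K)]
            picks.append(max(range(K), key=idx.__getitem__))
        for m, k in enumerate(picks):
            collided = picks.count(k) > 1
            r = 0.0 if collided else float(rng.random() < mu[k])
            N[m][k] += 1
            S[m][k] += r
    return N
```

Running it with a single player recovers ordinary (randomized) KL-UCB, which concentrates its pulls on the best arm.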
\section{Introduction}
Super Earths ($1-10\unit{M_\oplus}$) are common \citep{mayor2011road, howard2012occurrence} even if none exist in our own solar system. They tend to be found in systems of multiple Super Earths in close-in, compact orbital configurations. For example, the Kepler 11 system contains five planets within the orbit of Mercury, each more massive than Earth \citep{lissauer2011closely}.
Here we show that gas disk-driven orbital migration naturally produces systems similar to the observed ones. We use N-body simulations that include torques from a 1-D gaseous disk to show how a system of planetary embryos starting at several AU, embedded in the disk, naturally migrates inward and accretes into a compact system of hot Super Earths. Some embryos can also grow, remain trapped on more distant orbits, and presumably become giant planet cores.
\section{Methods}\label{sec:model}
We start from a 1D protoplanetary disk model, with parameters listed in Table~\ref{tab:disk_param}. The key parameter is the surface density profile, defined as a power law in the radial range $0.1<R<100\unit{AU}$:
\begin{align}
\Sigma(R) &= \Sigma_0 \left(\frac{R}{1\unit{AU}}\right)^{-d}
\end{align}
The inner edge of the disk is smoothed with a $\tanh$ function on a length scale of $H(R_\text{in})$, where $H$ is the scaleheight of the disk, so the density goes to zero at the inner edge. From the surface density profile, we self-consistently compute the temperature $T$, thermal diffusivity $\chi$, scaleheight $H$ and optical depth profiles $\tau$. To calculate the temperature we use the following energy equation :
\begin{align}
0 &= - C_\text{BB} + H_\text{en} + H_\text{irr} + H_\text{vis} \label{eq:energy_equation}
\end{align}
with
\begin{align*}
C_\text{BB} &= 2\sigma \frac{T^4}{\frac{3}{8}\tau + \frac{\sqrt{3}}{4} + \inv{4\tau}} & H_\text{en} &= 2 \sigma {T_\text{en}}^4 \\
H_\text{irr} &= 2 \sigma {T_\star}^4 \frac{{R_\star}^2}{r^2} (1-\varepsilon) \left[0.4 \frac{R_\star}{r} + r \od{}{r}\left(\frac{H}{r}\right)\right] & H_\text{vis} &= \frac{9}{4} \nu\Sigma\Omega^2
\end{align*}
$C_\text{BB}$ is the disk's black-body cooling and $H_\text{en}$ is the black-body heating by the disk envelope. $H_\text{vis}$ is the viscous heating. $H_\text{irr}$ is the heating from stellar irradiation~\citep{chiang1997spectral, menou2004low}. Here $\sigma$ is the Stefan-Boltzmann constant, $\nu$ the viscosity of the disk, $T_\star$ and $R_\star$ the temperature and radius of the central star respectively, $\Omega$ the angular velocity in the disk at a given position and $\varepsilon$ the disk's albedo. Starting at the outer edge of the disk, where we impose $T=10\unit{K}$, we retrieve the temperature profile by solving eq.~(\ref{eq:energy_equation}) numerically.
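Since $C_\text{BB}$ is monotonically increasing in $T$, the energy balance can be solved for $T$ at each radius by simple bisection once the heating terms and the optical depth are evaluated. The sketch below illustrates the root-finding step only; in the full model $\tau$ itself depends on $T$ through the opacity table, so this solve must be iterated to self-consistency (all names here are ours):

```python
import math

SIGMA_SB = 5.670374419e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]

def cooling(T, tau):
    """Black-body cooling C_BB with the optical-depth factor of the text."""
    return 2 * SIGMA_SB * T**4 / (3 * tau / 8 + math.sqrt(3) / 4 + 1 / (4 * tau))

def solve_temperature(heating, tau, T_lo=1.0, T_hi=1e5, iters=80):
    """Bisection for C_BB(T) = H_en + H_irr + H_vis at fixed tau.
    cooling() is increasing in T, so the root is unique."""
    for _ in range(iters):
        T_mid = 0.5 * (T_lo + T_hi)
        if cooling(T_mid, tau) < heating:
            T_lo = T_mid
        else:
            T_hi = T_mid
    return 0.5 * (T_lo + T_hi)
```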
We use the formulae of \cite{paardekooper2011torque} to compute the torque exerted by our 1D disk on a planet of a given mass at a given orbital radius. The main differences with the model described in that paper are:
\begin{itemize}
\item the temperature profile is not a power law in our model, but a local power law between each point of a table of several hundred points spaced in orbital distance;
\item the scaleheight of the disk, defined as $H = \frac{1}{\Omega}\sqrt{\frac{k_B T}{\mu m_H}}$, is computed following the temperature profile, with $k_B$ the Boltzmann constant, $\mu$ the mean molecular weight and $m_H$ the mass of a hydrogen atom.
\end{itemize}
We implement type I eccentricity and inclination damping following \cite{cresswell2008three}. We also include an eccentricity-migration feedback whereby the corotation torque is weakened for eccentric orbits \citep{bitsch2010orbital}, using the equations from \citet{cossou2013convergence}.
\begin{table}[htb]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$b/h = 0.6$ & $\gamma = 7/5$ & $\mu = 2.35$ & $\alpha = 5\cdot 10^{-3}$ & $T_\star = 5700\unit{K}$ & $R_\star = 4.65\cdot 10^{-3}\unit{AU}$\\\hline
\multicolumn{2}{|c|}{$\text{Disk Albedo} = 0.5$} & \multicolumn{2}{|c|}{$\mathrm{Disk : R\in[0.1;100]\unit{AU}}$} & \multicolumn{2}{|c|}{$\Sigma(R) = 300 \cdot R^{-1/2}\unit{g/cm^2}$}\\\hline
\end{tabular}
\caption{Parameters of the disk. In addition to those parameters, note that the opacities were retrieved from the opacity table of \cite{hure2000transition}. The viscosity is modeled via the $\alpha$-prescription \citep{shakura1973black}.}\label{tab:disk_param}
\end{table}
Disk forces were added to the hybrid version of the {\tt Mercury} integrator \citep{chambers1999hybrid}. Drag and migration forces are not applied to objects inside the inner cavity (inside 0.1 AU). Collisions were treated as inelastic mergers.
\section{Mechanism}
The right panel of \reffig{fig:simulation} shows the disk's migration map. Two zones of convergent migration exist. The first convergence zone causes all embryos between $0.2$ and $0.8\unit{AU}$ to migrate toward $0.5\unit{AU}$. The second causes embryos between $0.9$ and $\sim 100\unit{AU}$ to migrate toward $\sim15\unit{AU}$. However, multiple-planet systems do not actually migrate to convergence zones as isolated planets do. Rather, they become trapped in chains of mean motion resonances. The resonant configurations sustain eccentricities large enough to attenuate the corotation torque \citep{bitsch2010orbital}. The system stabilizes at an equilibrium position in the disk where the sum of all the torques felt by the planets cancels out \citep{cossou2013convergence}.
The left panel of \reffig{fig:simulation} shows the dynamics and accretion of a system of embryos embedded in the disk. The embryos comprised $60\unit{M_\oplus}$ in total, and started with masses randomly chosen from $0.1-2 \unit{M_\oplus}$ spaced from $2-17\unit{AU}$, and the integration lasted for 10 Myr. At early times, embryos migrate inward because their masses are below the $\sim 4 \unit{M_\oplus}$ threshold for outward migration (see right panel of \reffig{fig:simulation}). Because of their different masses the embryos do not all migrate at the same rate. This leads to close encounters and collisions. Almost all of the embryos migrate inward and complete their growth close to the inner edge of the disk. The final configuration of planets is a compact resonant system. The most massive planet grows fast enough to reverse its migration and stabilizes at $15\unit{AU}$. While migrating outward, it traps a lower mass planet in \MMR{6}{5} resonance. The low-mass planet is pushed by the more massive one, and the two-planet system is effectively ruled by the zero-migration line of the more massive planet.
\begin{figure}[htb]
\centering
\includegraphics[width=0.48\linewidth]{cossou_fig1.pdf}\hfill\includegraphics[width=0.48\linewidth]{cossou_fig2.pdf}
\caption{A simulation that forms a compact system of hot Super Earths. \textbf{Left}: The orbital evolution (top) and mass growth (bottom) of the embryos. \textbf{Right}: The final state of the system. Black lines represent zero-torque zones where isolated planets should stop migrating.}\label{fig:simulation}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.9\linewidth]{cossou_fig3.pdf}
\caption{Final orbital configuration of the inner part of the simulation compared with the Solar System and Kepler 11.}\label{fig:kepler-comparison}
\end{figure}
The embryos ended up at the inner edge of the disk for two reasons. First, objects less massive than $\sim 4 \unit{M_\oplus}$ simply cannot migrate outward (see \reffig{fig:simulation}). During its inward migration, an embryo must accrete quickly if it is to enter a zone of outward migration, as was the case for the outer massive planet. Second, embryos that do migrate outward may encounter other large embryos and become trapped in resonance, leading to sustained non-zero eccentricities that damp the corotation torque \citep{bitsch2010orbital} and tip the balance toward inward migration \citep{cossou2013convergence}. In this simulation the first mechanism dominated but both mechanisms can be important. The system was held up because a huge positive torque exists close to the inner edge due to the sudden decrease of the surface density profile \citep{masset2006disk}. Our results are thus similar to those of \cite{terquem2007migration} even though outward migration does occur in our modeled disk.
\reffig{fig:kepler-comparison} shows the inner simulated system compared to our Solar System and the Kepler-11 system. The simulated system has similar masses to Kepler-11 but its orbital configuration is even more compact, although there are no planets in the simulation between 0.25 and 15 AU. The four outer simulated planets are in a resonant chain but the innermost planet is not. This is because the innermost planet was pushed into the disk's inner gas-free cavity where it lacked the energy dissipation required for efficient resonant trapping.
\section{Conclusion}
In our disk model, a compact system of hot Super Earths is formed by migration and accretion of planetary embryos. Inward migration is favored. Low-mass embryos naturally migrate inward. Outward migration is stalled or even reversed by the eccentricities sustained through resonances~\citep{cossou2013convergence}. In essence, the presence of multiple large embryos leads to non-zero eccentricities such that they no longer follow the migration map (right panel of \reffig{fig:simulation}).
This formation mechanism for hot Super Earths is robust against a much wider range of disk parameters than the one presented here. We note that the survival of the simulated hot Super Earths may depend on the detailed structure of the inner edge of the gas disk, which provides the positive torque needed to balance the inward-driven system. Their survival also likely depends on how the disk evolves \citep{horn2012orbital} and especially on how it dissipates. We plan to take these effects, and others such as stochastic forcing from turbulence, into account in future work.
\bibliographystyle{apj}
{"url":"http:\/\/mistergreenberg.com\/sitePages\/mrG_insights\/mrG_insights.html","text":"# Introduction\n\nI've been a janitor, a doorman, and a construction laborer, but mostly I'm a teacher. I taught in the classroom for twenty years, picking up titles along the way like \"coach\" and \"chair.\" The students gave me strength to deal with administration, which to me resembled the land beyond the looking-glass in Lewis Carroll's novels.\n\nThere is no point in writing about topics on which everyone holds the same opinion. You will probably find I'm on the other side of the fence most of the time. Different perspectives broaden the mind, even if you continue to hold your own convictions.\n\nI'd like to dedicate these essays to my father. I know no wiser person. Dad, you taught that we should look both ways, not only when crossing the street, but also when traversing life. And that, as Robert Frost once wrote, has made all the difference.\n\n# Insights\n\nThe ROI Blind Spot\n\nThe litmus test for any proposal in business is return on investment, or ROI for short. This means that if Joseph the widget-maker proposes a change in the widgets his employer makes, perhaps adding rubber feet to them, that change must return more money than it costs. The increase in profit from rubber-footed widgets must outweigh the cost of gluing on the rubber feet. This makes perfect sense, and no right-minded businessperson would argue otherwise.\n\nI\u2019m an educator, a profession nearly entirely devoid of business sense, so I feel no such obligation to agree with this principle. Here is the flaw, as I see it, followed by a real example.\n\nROI requires that we know the effect of a change before it is implemented. Joseph the widget-maker is guessing that the rubber feet will make the widgets better and that consumers will buy more of the new and improved product. 
He is predicting that the increase in sales will be so great that it will justify the added expense of glue, rubber, labor, and changes to the production process. Notice the verbs here; he is guessing and predicting. At the moment the boss decides to make the change or not, ROI is just a guess or a prediction. More accurately, the decision will hinge on whether Joseph is persuasive enough to convince his boss that the addition of rubber feet will return more than they cost.\n\nI recently worked for a business that produced educational materials. Having many ideas as to what works in education, I tried to implement some changes. One particular change was to include computer games, the likes of which you can see in this website. My experience using such games in the classroom told me that they are a popular, practical, and effective addition to the usual means of teaching.\n\nThe bosses wanted justification in business terms. In other words, they wanted me to state that the games passed the ROI test, that they would return more than they cost. The games were rather expensive to produce as I was the only person who could make them and I had to handcraft each game from scratch. The reason for this is simple. Just as every lesson is unique and every learning objective different, so must each game be one-of-a-kind. I tried to convince those above me that our clients would so appreciate the inclusion of the games as part of our educational materials, that they would gladly absorb the added expense.\n\nBusinesses, however, have many competing voices. One such voice argued that one-of-a-kind games, one-offs he called them, were too inefficient to make and therefore too costly. His thesis was that we needed to produce materials cookie-cutter fashion with templates that anyone could fill in. He also argued that games need to resemble the video games kids play for entertainment, with robot avatars, racing cars, shooting and such. 
This, he asserted, would make the games more appealing to our clients. In other words, the games would be cheaper to produce and return more for the investment.\n\nSo ROI boiled down to who could convince management that his idea was best for the company. In the end, I lost out. Management agreed that games, at least the way I make them, cost too much and would return too little.\n\nBut the point isn\u2019t that \u201creturn on investment\u201d is a flawed tool. I\u2019ll leave that concern to the businesspeople who rely on it. The point is that, from a business perspective, proper educational design will always be too expensive and return too little. The amount of time a teacher spends preparing materials, delivering the lesson, and assessing learning will always outweigh any dollars-and-cents return. Every lesson is different (or should be), a costly one-off, so to speak. Results are given in incremental gains in understanding, a measure that is hard to translate into dollars. Even if we could agree on a dollar figure for the gradual, layered widening of a learner\u2019s perception of the world, it would have to be sky-high to exceed the total amount of money invested. An educator knows that the return on this investment does exceed the investment. The businessperson will always be blind to the true ROI of education.\n\nInvisible Desire\n\nTrainers steep us in mantras of good teaching. Whether we believe them or not, whether we practice them or not, they are the mantras of a teacher, so we repeat them. Chief among them are two mantras that, if a person has any hope of success in the world of education, he must learn and keep ever on his tongue. They are 1) that every student wants to learn, and 2) that every student can learn.\n\nA casual observer of the typical high school classroom might conclude that the first of these chief mantras\u2014that every student has a desire to learn\u2014might fall into the category of wishful thinking. 
Teachers are hard pressed to keep students from daydreaming, shooting rubber bands, and whispering to their neighbors. If the mantra is correct, then the students are acting against their own desire to learn.

Every teacher will repeat the mantra that students want to learn, but not all teachers believe it. How do I know? I used to run a computer lab in a public high school. Teachers would bring in their classes. Sometimes I would conduct class while the students' regular teacher graded papers or sipped coffee in the back of the lab. Other times the students' teacher would be in charge while I went station to station making sure the students could log in and access the technology. It was during these times, when I was relegated to the responsibilities of a computer technician, that I got to witness what few teachers ever do: another teacher delivering a lesson.

If we accept the idea that every student desires to learn, what I saw should have resembled the ice cream man's stop in a suburban neighborhood. Kids should be clamoring for what he's vending, elbowing each other for the chance to be first to enjoy the goods. What I in fact saw more closely resembled a lion tamer forcing unwilling beasts through a series of hoops. Not always, but more often than not, the teacher's approach was to coerce students to participate in activities by means of threats, or just through a sense that that's what they are supposed to do.

But every student does want to learn. Every student does walk through the classroom door with the hope that the teacher will be selling ice cream, or at least something that has real value to it. A master teacher relates what the students are learning to the real world and packages it so that the students want to learn it. She'll make polynomials seem like a natural part of the world around us, one that we all should be familiar with.
And the learners, in the presence of a master teacher, will want to know about these curvy, wavy polynomial things that relate temperatures to sounds and baseballs to gravity. The teacher who believes that all students desire to learn, and thus presents desirable lessons, will make learning the joyful adventure that it should be.

But as soon as a teacher starts passing out worksheets and telling students to quietly circle the nouns in a passage of Pilgrim's Progress, it becomes clear the students don't desire to learn that. "Do the odd-numbered questions on page 536 and then trade papers so the person sitting next to you can see whether you got the answers right." "Take notes while I tell you what happened in the Battle of Antietam. You can use your notes during the test on Friday." "Christina, is chlorine a base or an acid?" These are the utterances of a lion tamer. Do it… because I said so. The student did walk through the door desiring to learn, but the teacher soon made it clear that the material wasn't worth learning.

Students will desire to learn what's worth learning. Most of what we are supposed to be teaching is worth learning. We need to believe that and then present lessons so that the students believe it too.

The Clatsop Vote

The Lewis and Clark expedition was a military operation to explore and lay claim to whatever land lay between the Mississippi River and the Pacific Ocean. Except for a slave, a couple of civilians they picked up along the way, and a baby that was born during the expedition, the group of explorers were all military men under the joint command of Captain M. Lewis and Captain W. Clark. Back then, as now, commanding officers told their subordinates what to do; the decisions were not up for debate.
For the first half of the expedition, men who dared to argue or contradict decisions were punished accordingly.

But when they reached their destination, the two commanders did something extraordinary. A decision had to be made whether to cross to the south shore of the mouth of the Columbia River, or camp in a more familiar location further back along their path. With winter already upon them, this could be a life-or-death decision. Rather than ordering the men to do what the commanders wanted, as they had done many times in the past, they took a vote, counting equally the choices of every military man, at least two civilian interpreters, York the slave, and Sacagawea the native civilian. When they tallied the votes, the majority felt that it was best to cross to the river's south bank. That's what they did, building Fort Clatsop to house them through the winter of 1805-06.

Most people today hold this model of education: A teacher possesses a certain body of knowledge. Students don't. The teacher reveals the knowledge to students who then, to varying degrees, remember what the teacher revealed. Good students pay close attention, record the teacher's revelations, study, and can eventually demonstrate that they now possess the new knowledge too. They have learned, and the knowledge has passed from one person to many.

This is a good, practical way to view what happens between teacher and learner. It matches what we see in classrooms around the world and across time. It is economical as well, because one person can reveal knowledge at once to large groups of students - 30, 40, or more.

But that is only one way to describe learning. Another way, one that has been popular with staff development folks in the past ten or fifteen years, favors the teacher's role as a guide. Learning, in this model, is an adventure shared by both the students and the teacher.
Though the teacher has been on the adventure before, he explores things together with the students. Learning happens when students make sense of some newly encountered situation. They work out how to maneuver a dowel around a bend in a hall; they realize that Shakespeare wrote some R-rated scenes; they debate the motives behind labor unions; they discover the secrets of inheritance; etc. In this model, learners don't simply absorb information, they seek it out. They form an initial understanding and refine it through trial and error and continued investigation. They help each other, referring to books, or the Internet, or direct measurement, or any other resource that is available to them. A student of mine even held an instant message conversation with the author of the story we were reading.

Luckily, this has been my way of interpreting the complex, indefinable magic that happens between teacher and learner. I would not have survived as the only Academic Decathlon coach of a large urban high school if I had tried to convince students that I knew the information they needed to master. The Academic Decathlon's competitions require students to know, in depth and at a college level, all of these subjects: math, science, literature, art, music, history, writing, oratory, and the art of the interview. Each year, the first six of these categories focus on new pieces of art, novels, musical compositions, branches of science, etc. At least in a school that can afford but one coach, the only way to teach Academic Decathlon is to explore the topics together with the students, not deliver the knowledge to them.

What happens between teacher and student cannot be put into a box. It defies definition. It is a nebulous, complex interaction that takes a master to both guide the students and inform them, and to have the wisdom to know which to do when.
I personally like the idea that I lead students to discover things about the world rather than telling them how the world works.

Oh, I almost forgot about Lewis and Clark. What does the vote at the mouth of the Columbia River tell us about the relationship between officers and rank-and-file? Should I tell you? Or should I lead you to the edge of discovery and let you step over the threshold? ☺

Absorbing Vocabulary

People learn new words and phrases by hearing them, reading them, and using them in context. A learner will hear the word elm applied to the tall branching thing in the front of the house. He tries to use the word elm for the tall branching thing in the backyard, but is told that that is not an elm; it is a eucalyptus tree. Later, he hears someone say that they should trim the tree in the front yard. Human brains being what they are, he figures out that the tall branching thing in the front is both a tree and an elm, while the tall branching thing in the backyard is both a tree and a eucalyptus. Tree must mean a tall branching thing. Elm must be a kind of tree. Eucalyptus must be a different kind of tree. This is confirmed when he hears other trees, similar in appearance to the one in the backyard, also referred to as eucalyptus.

In a book he sees a picture of what he thinks is a eucalyptus tree, but the caption says that it is a red gum tree. It's possible that a red gum tree looks like a eucalyptus tree, or it is possible that red gum and eucalyptus mean the same thing. The learner files that information away for later clarification.

That is how we acquire words and build our functional vocabulary. It is a slow process. From the time that we first encounter a word to the time that we have a clear understanding of its meaning may be very long, even a year or more. In some cases, we may carry that imperfect definition around with us forever.
For instance, a person who rarely messes with mechanical things might go to his grave confusing a socket with a ratchet.

In school, though, a student can't wait for multiple in-context encounters with a word before she figures out what it means. If today's lesson is on metonymy, then the first encounter with the word had better be a clear definition of metonymy. Otherwise the rest of the lesson won't make any sense.

Lessons often rely on several key vocabulary terms. In addition, these new words are embedded in a context containing academic vocabulary and conventions that are still being defined in the learner's mind. A learner, for example, might be asked to contrast the author's use of figures of speech in the poem "Do Not Go Gentle Into That Good Night" to those found in an excerpt from _Titus Alone_. Not only does she have to know the definition of the phrase figures of speech, but to fully understand the task, she also needs to know what it means to contrast two things, what an excerpt is, what is meant by the author's use of something, and why _Titus Alone_ is italicized while "Do Not Go Gentle Into That Good Night" is in quotation marks.

We must use some sort of direct instruction for vocabulary so that students don't have to slowly absorb all the terms for our lessons. Of course they will hear us saying the words and they will see the words in textual contexts (hopefully rich media). Right up front, though, we need to provide them with clear definitions of the key terms.

At best, the misguided activity of copying definitions from the dictionary wastes time. If you think about it, though, it actually harms the learners by giving them a false sense that they now know something. In spite of this, it is a very widespread practice.
When I was manning a computer lab at Camelback High, I watched a social studies teacher direct his students to look up a list of words including colonialism in an online dictionary and paste the definitions into a document. He then went around the lab awarding credit if they had done it. His approach was slightly better in one respect than the one I witnessed as an intern: copying and pasting wastes much less time than transcribing by hand. From talking to teachers, I would guess that the dictionary activity is used more than any other for "teaching" vocabulary. Sometimes it is modified by asking the students to translate the dictionary definitions into their own words, but if the dictionary definitions are unclear in the first place, then how can a student give clear definitions in their own words?

If we need to directly teach vocabulary and the dictionary activity is no good, what then?

At a recent conference, an alternative vocabulary activity was touted. The students were to fold a piece of paper in quarters and fill those quarters with these pieces of information on each vocabulary word: a definition in their own words, a drawing illustrating what they think the word means, a list of related words including synonyms and antonyms, and an example of how it is used in context. The students could fill in this information as the lesson progressed. I like this approach, as the students gradually construct their own understanding of what the term means based on its actual use. (This activity is not really as new as the presenter made it seem. I've seen variations of it several times throughout my career.)

But that 4-square vocabulary activity is decidedly elementary. I'm a high school teacher.
Although it would work for high school students, it is not a very efficient way to absorb the large number of vocabulary terms a high school student needs to learn, nor is it particularly suited to the self-image of high school students, who associate folding paper and drawing pictures with younger grades.

We will all have different solutions to the problem of how to directly teach vocabulary terms in a way that is meaningful for the students. My preference is to treat it as an extension or acceleration of the natural way we acquire vocabulary. I will tell the students a definition as it comes up in a lesson, illustrating it with an explanation or a drawing on the board or perhaps a pantomime. Then I would supply examples. Often I would supply examples only, and then ask the students to come up with a definition based on what the examples show. I would reinforce the vocabulary terms at a later date by playing a game (what else?!). The games I used had names like Vocabulary Baseball, Scattergories®, and Concentrati. They were all whole-class adaptations of existing games. Every student got involved. Each game covered dozens of vocabulary terms in one activity. That's what worked for me.

I'm not trying to provide a solution for teachers. This article is more about what not to do, what to avoid. If you keep the following essentials of vocabulary acquisition in mind as you develop your own lessons, you should do well:

• People naturally learn vocabulary terms from context.
• Contextual encounters are not enough for academic vocabulary.
• Dictionary definitions confuse learners more than they clarify.
• A learner needs a clear definition up front.
• A learner needs to see key terms used in context during the lesson.
• Play a game on Friday to help reinforce the vocabulary terms.

Okay, just kidding on that last one… or was I?

Beyond Numbers

Math is about numbers.
Everybody knows that.

But it's not. The numbers just happen to be the symbols we use in math. Just as writing is not about the letters of the alphabet, math is not about numbers.

In reality, it is about relationships. It answers questions like these: What is the relationship between a shadow and the object casting the shadow? How far can a balcony extend before the support beams become unstable? If I weigh 180 pounds, how much cough syrup should I take? What is the relationship between a sound traveling over water and a sound traveling over land? What is the relationship between the size of the sidewalk rectangles and their temperature?

No one really cares that if $$\frac{x}{68} = 144 (9.4\times 10^{-6})$$, then $$x \approx 0.092$$. By themselves, the numbers are meaningless. A person trying to nail a 12-foot span of copper sheeting could use this particular relation to determine that his material will expand about a tenth of an inch when the sun heats it to 100°F. That's important because a tenth of an inch is enough to buckle the sheeting and pull it away from the underlayment.

That raises the question: why do we spend so much time learning to juggle numbers? In fact, my observation is that that is very nearly all we do in math classes nowadays. We calculate and calculate and calculate, trying to learn tricks and memorize algorithms that produce right answers. A teacher might mention the relationships. She might even ask the students to complete a few word problems that illustrate the relationships. But students' grades rely on whether they can juggle the numbers in such a way that produces right answers.

As a byproduct of this type of math training, we as a society believe that to be good at math means to be extremely meticulous. A mathematician is someone who can write a long string of numbers and symbols without omitting a negative sign or misplacing a decimal point.
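The copper-sheeting figure is ordinary linear thermal expansion, ΔL = αLΔT. As a quick sanity check (assuming, as the numbers in the relation imply, a 68°F temperature rise and an expansion coefficient of 9.4×10⁻⁶ per °F for copper), a few lines of Python reproduce it:

```python
# Linear thermal expansion: delta_l = alpha * length * delta_t
alpha = 9.4e-6    # copper's expansion coefficient, per degree F (from the relation above)
length = 12 * 12  # the 12-foot span of sheeting, in inches
delta_t = 68      # temperature rise in degrees F implied by x/68 = 144(9.4e-6)

delta_l = alpha * length * delta_t
print(round(delta_l, 3))  # prints 0.092 - about a tenth of an inch
```

The point of the passage stands either way: the calculation is trivial for a machine; the relationship between sun, metal, and buckled sheeting is what matters.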
What a pity.

Stephen Wolfram, the inventor of WolframAlpha and its related technologies, argues convincingly that this preoccupation with calculating is holding us back. We have calculators that can do that, and computers. There is no need, his argument goes, to spend hours doing long division or expanding polynomials by hand. Instead we should be spending our precious classroom hours learning about the relationships that the calculations represent.

He is right: we focus too much on calculation. That being said, calculating does develop a familiarity with the mathematical relationships that studying them directly does not. A wrestler who has spent hours on the mat practicing moves with a sparring partner will gain a feel for the moves that the coach could never impart. Calculating also develops mental agility with numbers, and the ability to recognize when algorithms allow shortcuts, or when different algorithms might be useful. This is similar to what a language student learns from exercises in sentence combining or parsing.

My games are, for the most part, calculation practice. They are not meant to provide insights into the relationships that math reveals. A person needs to do that - a teacher or parent or tutor. The games do provide practice calculating, a necessary part of the learning of math. One advantage of the games is that the student can play them outside of the classroom, giving the teacher time to delve into the more meaningful parts of math instruction.

# Legalities

You may reprint any of these essays as long as you don't change them and you attribute them to me, Mark Greenberg.

To make the title banner's image, I used Photoshop and a program called Frax HD. The image is my own intellectual property, and I reserve all rights regarding its distribution.

I downloaded this page's background image, called subtle grunge, from subtlepatterns.com. The image's author is Breezi.
The image is covered by the Creative Commons Attribution-ShareAlike 3.0 Unported license. I also use this background throughout the site.
This story was not invented by the writers and producers of the film; it is the Biblical story of Esther (Hadassa). Esther was a Jewish orphan who was raised by her cousin Mordecai. They lived in exile in the citadel of Susa, in ancient Persia, under King Xerxes. When she came to court, Mordecai advised her not to tell anyone about her origin. Esther was just a normal girl; she was no drama queen, and she did not ask for much. She just did what needed to be done. Everyone loved her. She gained the favor of Hegai, the custodian who was responsible for the young women. And then came the big day: after twelve months it was Esther's turn to go to the king. When a young woman went to the king, she was given whatever she requested to take with her from the harem to the king's palace. But Esther only took what Hegai advised. And the king liked her. He loved her more than all the other women and made her queen.
So, is this the end of a fairytale? On the contrary. Long before the Jewish people were brought into exile, they were at war with the Amalekites, whose king's name was Agag. The Amalekites have become the symbol of evil and antisemitism. King Xerxes' most trusted man was Haman the Agagite. Some say he was literally a descendant of Agag, but it is safe to say that he truly was an antisemite. He hated Mordecai, and when he learned that Mordecai was a Jew, it wasn't enough to just get rid of him. Haman plotted to kill all the Jews in Xerxes' realm and have Mordecai impaled on a sharpened pole seventy-five feet high.
The entire city was in confusion while the king and Haman just sat down to drink. In every province to which the edict came, all the Jews went into great mourning. When Mordecai heard of the new law, he tore his clothes and went out into the city, crying loudly, up to the palace's gate. Esther was told and she was overcome with distress. She sent him clothes but he would not accept. Mordecai sent her a copy of the written decree issued in Susa for their destruction and asked her to go to the king to beg his favor and plead with him on behalf of her people.
So she went. We know how the story ends. Esther won favor in the king's sight; he held out his scepter to her and she approached. She invited both Xerxes and Haman to a banquet where Haman was exposed as the plotter and murderer he was. He was impaled instead of Mordecai, and the Jews were saved.
Why did I tell you this story? Because it is not only inspirational and epic, it is also a great example of God's redemptive power. Even though He is not mentioned once in the entire book of Esther, it is clear He is behind the plan to save His people. Just as He had a plan to save all of us. Because of our sin, our lives are in danger. If we are not saved, we will perish forever. We could never come before God, our King, and live. But do you know what is so wonderful? This very same King is also the One Who gave everything to save us. Who laid down His life for us. We can approach Him every day through Jesus the Messiah, our Saviour. We do not get to spend just one night with Him, but we will live with Him eternally if we repent and believe.
And if you are scorned or maybe even oppressed because you, like the Jews in Esther's time, follow God's rules instead of the world's, hold on and know that eternal life is yours. Whatever happens to you in this life.
Revisiting Foucault: War in peace and the question of 'power'
Post 10 August 2012
By Charles Ponnuthurai Sarvan
'The essence of our life consists, after all, of the political functioning of the society in which we find ourselves.' - Foucault
The word "peace" can connote the presence of a good degree of justice and harmony, or the absence of (overt, armed and violent) conflict. The observation in his treatise 'On War' by Carl von Clausewitz (1780 – 1831) – "war is the continuation of politics by other means" – is well-known. (See also 'The Art of War' by Sun-tzu, BCE 380-316, and 'The Arthashastra' by Kautilya.) Among Michel Foucault's chief concerns is the question of power: its forms and manifestations; its workings and effect. Power is not to be associated only with force, punishment and repression by the state. It functions also at the sub-state level; it is regulatory and "productive", for example, of discourse. Foucault, inverting Clausewitz, says that 'politics is the continuation of war by other means.'
Political power puts an end to war, but not in order to suspend the effects of power or to neutralize the disequilibrium revealed by the last battle of the war (Foucault, 'Society Must Be Defended'). On the contrary, the state can use military victory to re-inscribe that relationship of force in institutions, economic inequalities, language, and even on the bodies of individuals (op. cit.). From the 1910s to the early 1970s, aboriginal children of mixed race were placed in white foster-homes or settlement camps in a policy of forced assimilation that sought to "speed the disappearance of aboriginal culture" (Michael Sandel, Justice, 2010).
What passes for Law (sometimes mistaken for 'Justice') is born of battle, massacre, conquest and their "horrific heroes"; is born in burning towns and villages; in ravaged fields, together with the innocent who died at break of day (Foucault). Power circulates. It is a network where some, submitting to power, get to exercise power. "Racism" by the state can be directed against groups of its own population, and "peace" can be but a code-word for war. Power produces discourses of "truth", and so even when the history of peace is written, at root, it's a history of war. History is the discourse of power - and its intensifier: the description of a war can be a weapon of war. The history of some is not the history of others, and what looks like right to one group is the abuse of power, violation and exaction to another (Foucault).
Destruction of the 'Other'
"Race", Foucault argues, doesn't have a scientific, biological meaning: the real meaning of "race" is historical and political. (See, 'The term "racism" and discourse' in Sarvan - Sri Lanka: Literary Essays & Sketches, 2011.) Racism is a mechanism of power, exercised to serve a particular function. It fragments people into unequal categories, with one being classified as superior; the other, not only as inferior but abnormal. The relationship is mortal: the life of one group is thought to depend on the death or total subjection of the other. Biologically, the death of the inferior group is necessary not only for one's group to survive, but to become healthier and purer; more successful and happy. Thus, the violent destruction of the 'Other' becomes a way of regenerating one's own group (Foucault). We must bear in mind that "people are seized with a kind of madness when they take to violence. The violence carries them along, transforms them and makes them – even afterward, when it's all over – unrecognizable" (Sven Lindqvist, 'Exterminate All The Brutes' - 2007). Power claims to possess the truth, the only truth. In its control of discourse, it not only writes and re-writes, but it suppresses and erases. After almost three quarters of a century, Thea Halo took her mother, a Pontic Greek, back to visit her childhood village in Turkey. The title of the resulting book tells it all: 'Not Even My Name' (2001).
In 'Romeo & Juliet', it is innocence (ignorance) that makes Juliet – she's not quite fourteen – ask: "What's in a name?" As I have pointed out elsewhere, in Sri Lanka whether a name ends with a vowel or consonant can make an awful lot of difference: for example, Rajaratne (Sinhalese) or Rajaratnam (Tamil). Names of streets and buildings are changed, statues destroyed and graves desecrated. (For the last, see Anne Abaysekara's 'Open letter to the Defence Secretary' - The Island, Colombo, 19 May 2010.) The attempt is to alter and erase; falsify and re-write. Ananda Coomaraswamy (1877 – 1947) was a scholar of international repute, one who did much work on Buddhism, Buddhist philosophy and art. Yet, because he is seen as a Tamil, the street named after him is now 'Nelum Pokuna Mawatha'. I received the following message from a friend in Colombo (August 2012): "Gandhiji's statues were brutally vandalised twice in recent times, a few months ago in Trincomalee, and last week in Jaffna. It reveals the sheer hatred in the populace of all Tamil (and by extension Indian) cultural symbols".
"Most people in the South are ignorant of the true state of affairs in the North and East, while the others are just indifferent. Yours sorrowfully, Anne Abeysekera" - personal message from a (Sinhalese) friend, 6 August 2012. The outside world chooses, prefers, to think that with the annihilation of the Tigers, there's now peace in the Paradise Isle, the land of the Compassionate Buddha. But what is the nature of this "peace"? For whom is it "peace"? In reality the war continues to be waged without compassion - and now (the Tamil Tigers having been eliminated) with complete impunity. (See, for example, 'State Violence in Sri Lanka: The International Community and the Myth of Normalisation' by Samuel Thampapillai in 'Somatechnics', Edinburgh University Press, 2011.)
'Memoryscape'
Of a truth, "absolute power corrupts absolutely" (Lord Acton). As Foucault notes, what leads to a brutal and unjust society is not defeat itself, but what is done after victory. I quote from my letter to Mr. Hemantha Warnakulasuriya (President's Counsel and former Ambassador), published by him in 'The Island' newspaper (Colombo, 9 July 2012): "You do not address issues such as massive military occupation; state-sponsored colonization, the expropriation of land; the cultural onslaught ('culture' in its wider meaning); the lack of equality, and the sense of dignity that goes with it; discrimination in opportunity and employment etc." To this list, one could add that insult, intimidation and brutality have become commonplace, as have abduction and murder. Most often, the victims are not middle-class Tamils but simple, innocent, folk: unknown and helpless; human and deeply grieving. World over, the wounds (physical, psychological) of such victims go unattended; their cries unheeded, and their tears un-wiped.
The term "landscape" has led to words such as skyscape, winterscape, "the landscape of thought", and to "memoryscape". Rosalind Shaw observes (Memories of the Slave Trade, 2002) that the remembering of violence and injustice, "memoryscape", can be non-discursive: though not verbally expressed, it exists. To "dismember" is to take apart, usually with violence. To "remember" is to recall, but it is also to "re-member" (make a "member" again); to heal and re-join what has been separated. The struggle is to protect and preserve; to remember and to "re-member", in the face of cruel power. The struggle is to realize real peace, and a society that is just and free, concerned and caring, ethical and decent.
* (With thanks to Liebetraut Sarvan for comment and criticism.)
Charles Ponnuthurai Sarvan studied at the University of Ceylon, Peradeniya and left for England two years after his graduation. He obtained the degree of Master of Philosophy and Doctor of Philosophy from the University of London. He was the contributing Editor of English Literature: Introductory Essays (National Educational Company, Lusaka, 1981) and co-author of Readings in Poetry (Lusaka, 1986). Professor Sarvan, now retired, taught in Sri Lanka, England, Nigeria, Zambia, the Middle East and Germany.
# Copyright & Information
Christmas At Candleshoe
First published in 1953
© Michael Innes Literary Management Ltd.; House of Stratus 1953-2010
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form, or by any means (electronic, mechanical, photocopying, recording, or otherwise), without the prior permission of the publisher. Any person who does any unauthorised act in relation to this publication may be liable to criminal prosecution and civil claims for damages.
The right of Michael Innes to be identified as the author of this work has been asserted.
This edition published in 2010 by House of Stratus, an imprint of
Stratus Books Ltd., Lisandra House, Fore Street, Looe,
Cornwall, PL13 1AD, UK.
Typeset by House of Stratus.
A catalogue record for this book is available from the British Library and the Library of Congress.
ISBN: 0755120906 EAN: 9780755120901
This is a fictional work and all characters are drawn from the author's imagination.
Any resemblance or similarities to persons either living or dead are entirely coincidental.
Note for Readers
Reader preferences vary, as do eReaders.
This eBook is designed to be read by any eReading device or software that is capable of reading ePub files. Readers may decide to adjust the text within the capability of their eReader. However, style, paragraph indentation, line spacing etc. is optimised to produce a near equivalent reflowable version of the printed edition of the title when read with Adobe® Digital Editions. Other eReaders may vary from this standard and be subject to the nuances of design and implementation. Further, not all simulators on computers and tablets behave exactly as their equivalent eReader. Wherever possible it is recommended the following eReader settings, or their equivalent (if available), be used:
Clear Local Data – off; Local Styling – off; Text Alignment – Publisher Default.
www.houseofstratus.com
# About the Author
Michael Innes is the pseudonym of John Innes Mackintosh Stewart, who was born in Edinburgh in 1906. His father was Director of Education and as was fitting the young Stewart attended Edinburgh Academy before going up to Oriel, Oxford where he obtained a first class degree in English.
After a short interlude travelling with AJP Taylor in Austria, he embarked on an edition of _Florio's_ translation of _Montaigne's Essays_ and also took up a post teaching English at Leeds University.
By 1935 he was married, Professor of English at the University of Adelaide in Australia, and had completed his first detective novel, _Death at the President's Lodging_. This was an immediate success and part of a long running series centred on his character Inspector Appleby. A second novel, _Hamlet, Revenge!_, soon followed and overall he managed over fifty under the Innes banner during his career.
After returning to the UK in 1946 he took up a post with Queen's University, Belfast before finally settling as Tutor in English at Christ Church, Oxford. His writing continued and he published a series of novels under his own name, along with short stories and some major academic contributions, including a major section on modern writers for the _Oxford History of English Literature_.
Whilst not wanting to leave his beloved Oxford permanently, he managed to fit in to his busy schedule a visiting Professorship at the University of Washington and was also honoured by other Universities in the UK.
His wife Margaret, whom he had met and married whilst at Leeds in 1932, had practised medicine in Australia and later in Oxford, died in 1979. They had five children, one of whom (Angus) is also a writer. Stewart himself died in November 1994 in a nursing home in Surrey.
# Biographical Quote
CHRISTMAS, GERARD (d. 1634), carver and statuary; carved funeral monuments; carver to the navy, 1614-34; designer of figures for several lord mayors' shows between 1611 and 1632.
The Concise Dictionary of National Biography
# 1
We are looking at an English rural landscape on a summer afternoon. Most of us are urban folk – we come from New York and London and Birmingham and St Louis and our principal sensation is the comfortable one of getting our money's worth. The Englishness is unchallengeable, the rurality unflawed, and the whole effect a landscape in the fullest sense of the word. This last circumstance, indeed, makes a few of us obscurely uneasy.
Delimiting the foreground, beyond a broad expanse of lawn, is a low and unassuming stone wall. Our eye lingers upon it, and we wonder why. Well, diagonally upon it falls another line – that of a small clear river flowing away into the middle distance. And it so happens that, in the picture-space we are contemplating, the one line cuts the other in a ratio which artists call golden section. Moreover the diagonal line of the river is balanced by an answering diagonal in the long slope of an adjacent hill, and we are further aware that to left and right, just comfortably within our peripheral vision, grove nods to grove and wood advances upon wood as in the sinuous symmetry of some sophisticated dance. Knowing that nature never contrives precisely such effects, we realize that the river has been diverted, the hill manufactured, and the circumambient forest persuaded to approach and take up a station in consonance with the general effect. We are studying a work of art.
More, we are the heirs of all the ages. Whate'er Lorraine light-touched with softening hue, or savage Rosa dashed, or learned Poussin drew has gone to build up this picture; the Gothic is present in a durably constructed ruin partly screened by Druidic oaks; and across the lawn stretches the shadow of an intricate and enormous object, presently to be explored, which could never have been thought of but for the lucidity of Greece and the grandeur of Rome. If in the course of the past few weeks we have been doing things in a really big way – perambulating, perhaps, the picture-galleries of the continent, pausing for appropriate minutes before three-starred canvases, and refraining from any culpable lingering before inferior productions – if we have been doing this sort of thing we may feel that some optical trick is now being played on us; that effects properly to be contrived as a mild illusion upon a demonstrably flat surface have here been made ingeniously stereoscopic; and that by pressing a button or removing a pair of cunningly contrived spectacles we shall cause all the mass and roundedness to vanish, and be looking at nothing more out-of-the-way than a good canvas by Richard Wilson.
But there is the sky. Small clouds are actually moving across it, and light and shade play over the scene. On the lawn, beyond the farthest tip of the great still shadow, something – a further shadow – flutters. We turn, craning our necks upwards. High above the vast complicated building flies a small complicated flag. As the breeze catches it and flattens it out it becomes – we vaguely conjecture – generously and awesomely informative. If we knew a griffin from a wyvern, and could name when we saw them gules three pales vair and a chief gold, this fluttering scrap might largely if somewhat imaginatively instruct us in a substantial corner of English history. As it is, we may be content with the thing's simpler advertisement. The Marquess of Scattergood is in residence at Benison Court.
Lord Scattergood is entertaining guests. Groups of them are on the lawn with us now, and others are strolling in remoter parts of the gardens. The jet d'eau has been turned on and evokes admiration; the water-steps – so charmingly reproduced in miniature at Chatsworth – are agreeably cool; the Neptune fountain, with its circling and spouting dolphins and its diving Nereid, is accounted a marvellous toy. The palm houses and orangeries please some; others in two stately gondolas venture upon the surface of the south lake; smaller parties explore the temple of Artemis, the hermit's grotto (disused), the ice house, the sixth marquess' improved milking-parlour in the Chinese taste. Most however are indoors, and so too is Lord Scattergood himself. The state apartments are open, and large numbers move about in them. Lord Scattergood, in the middle of a small group (thus highly if somewhat randomly privileged), dominates the octagon room, full of affability. He has good reason to be delighted. It is a peak hour and the place is doing well. In the great courtyard the turnstiles never cease to click, and the park is alive with chars-à-bancs, like enormous beeves at pasture. Everybody has paid either three shillings for the house, or half-a-crown for the gardens, or five shillings for both.
Lord Scattergood feels, very properly, that he owes rather more to all these people than if they had paid nothing at all. So he leads his group around and is prodigal of information. Much of it is inaccurate, since it has been Lord Scattergood's habit to take his possessions for granted and revere them less in detail than in the mass. Even his elderly younger son, Lord Arthur Spendlove, who is also acting as cicerone, gets fewer of the dates wrong and is less apt to muddle the rebuildings and restorings and royal visits. But then one cannot have everything, even for five shillings. It is something to be done the honours of Benison by a Spendlove, and particularly by a Marquess of Scattergood himself. And Lord Scattergood's manners are so nice that we can feel quite at home. It is true that the elder statesmen who stroll up and down in a detached way through these large vistas are detectives. But they would be here, just the same, if Lord Scattergood were giving a party strictly confined within the limits of the peerage. Were he giving a large-scale family party, it would be his impulse to have them doubled. But this is something you would never guess as you look at him. He has all the appearance of reposing in utter confidence within his own inviolable caste. His present affability has its first and cardinal condition in that.
'I ought to begin, you know, by explaining that my people have lived here at Benison for quite a long time. I don't mean, of course, that the place is frightfully old. As you can see for yourselves, it quite definitely isn't. From the look of it' – and at this Lord Scattergood glances about him with all the appearance of a freshly appraising eye – 'from the look of it, I should say it was run up long after Queen Elizabeth and Shakespeare and Cromwell and all that thoroughly historical crowd. It's no good coming to Benison for the feel of that sort of thing. In Scotland I have a place called Corbies – my eldest boy lives there at present – where you get much more of all that. Dungeons, I mean, and a drawbridge, and deuced primitive drains.'
'Would your family ghost be there?'
It is an American lady who in all boldness and innocence asks this question. There is a ripple of embarrassed laughter. A small man clutching a child in either hand – he is a greengrocer from Nottingham – blushes painfully: his delicacy is outraged. But Lord Scattergood has had this one before and is delighted. 'Certainly. The family ghost has never come down here, I am glad to say. Not that he is in any way really tiresome. Only' – something new and pleasing flashes into Lord Scattergood's head – 'only whenever he appears he is accompanied by a skirl of bagpipes – and that, of course, can be disturbing in the middle of the night. But I was remarking that one doesn't come to Benison, don't you know, for the medieval side of things. The Henrys, and all that. When I was a lad my father packed me off to a big school near Windsor – and there, if you understand me, there is much more that takes you back to chain mail, and the Crusades, and those Wars of the Roses in which so many people you meet got badly cut up. But here at Benison we are seventeenth- and eighteenth-century, and we have got along in a very orderly way on the whole. Those carvings' – and Lord Scattergood abruptly extends a finely tapering finger as he makes this transition from the general to the particular – 'those carvings above the doorway are by a fellow called Grinling Gibbons. Or was it Edward Gibbon? I remember my grandmother telling me they were both little men who came down and worked here from time to time.'
Lord Scattergood, conscious of being a shade vague, pauses to collect himself. The little group gazes around and talks in whispers. The whispering is something they feel to be polite; it is not the issue of timidity. People are impressed but not overawed. They have just been told, it is true, that such and such paintings are by Titian and such and such by Velasquez; that here is a casket by Cellini and there a wax figure by Michelangelo. But are they not, after all, familiar with super-cinemas? And has not the cinema-screen itself conducted them in the course of historical films through palaces more gorgeous than this? Obscurely but quite confidently, the English feel that things have happened which make them, in a sense, joint-owners with the Marquess both of Benison itself and of all its treasures. They are in the same boat with Lord Scattergood; they will sink or swim together; on this sunny afternoon it is pleasant to have been invited to climb to the bridge. Their five shillings are forgotten; they are well-disposed and well-behaved guests; it will be tomorrow before some of them recall that they have peeped into a fantastically remote and still obstinately privileged world.
The Americans are different. They are keeping the full measure of their awe for the Tower of London, the crypts of the great cathedrals, the birthplace of Shakespeare in Stratford, the cradle of the Washington family at Sulgrave Manor. That the owner of Benison Court should confess the place to be of no great antiquity impresses and pleases them; they see in it the high standard of personal honour which the English aristocracy – they believe – manages to combine with the utmost of Machiavellian duplicity in the political and diplomatic sphere. At the same time a few of them are looking at their watches, and presently one of them asks the question that is in all their minds. He is a bald pale person from Buffalo, where he carries on the profession of mortician. Conceivably by way of reaction from this sombre calling, he now wears a lemon-coloured suit and an extraordinary tie – a tie as complicated as the flag now fluttering above Benison, and akin to it – we may feel – as a gesture of naïf ostentation. The mortician has a camera slung at the ready just above the bulge of his stomach, and this gives to the most prominent part of his person the appearance of a large Cyclops-face set directly upon two short legs. He swings round and faces Lord Scattergood with the camera's single staring eye. 'What', he demands, 'is the oldest thing you have here?'
The mortician is paying Lord Scattergood a compliment, is acknowledging him to be the sort of man who will take and deal with a straight question. And Lord Scattergood is once more delighted. The vocation of Edward Gibbon, although he must have learnt it when at the large school near Windsor, has long ago passed out of his head. But he knows how to answer the mortician. 'Well now, talking of that, I can show you rather a jolly thing.' On long loose limbs he strides out of the octagon room; from the back he might be a youth of twenty; the tourists puff and shuffle after him, their foothold uncertain on the great polished floors. They file between rows of portraits, a complexity of mirrors, cliffs of books; they descend a broad cold staircase hung with enormous canvases of conjectural Spendloves prancing upon badly foreshortened horses. Presently they are peering into a chill and musty gloom, while their guide fumbles for an electric switch. 'There you are. Rather fun – what?'
With a flicker and a ping a bar of fluorescent lighting has snapped on. Lord Scattergood's party takes on an unhealthy tinge and the mortician might be a piece of bad embalming. Only Lord Scattergood's own complexion is so florid as to be indestructible. He watches with amusement as his guests peer doubtfully into the great wedge-shaped space beneath the last flight of stairs. It is the corner into which, in a suburban house, one pushes the pram. And now Lord Scattergood's guests, as if they were Gullivers in a Brobdingnagian semi-detached villa, are looking at an enormous baby-carriage, elaborately painted and carved. The greengrocer's younger child becomes excited and utters cries.
'Constructed for the children of the Swedish Countess in 1722.' Lord Scattergood embarks somewhat uncertainly on an explanation of how this lady found herself a Spendlove. 'But, as you can see, it is really a sledge. She is said to have had reindeer brought over, and in winter her children went bowling about the park.'
'Did you have more snow in those days than you have now?' The American lady who inquired about the ghost has put this question with an air of much acuteness. Lord Scattergood, cheerfully accepting the character of a Methuselah, replies that the winters were decidedly more severe then than now.
Meanwhile the sledge is being a great success. It is pronounced to be cute and sweet. A young female from Sydney, who is mostly bare legs and an enormous rucksack, declares it to be dandy. The greengrocer's younger child starts shouting. Only the mortician from Buffalo remembers the motive behind this inspection. With professional deftness he applies a scraping fingernail to a leather surface. 'I don't get this,' he says. '1722 isn't so very old. And it don't look old, either.'
'Ah – you misunderstood me.' Lord Scattergood glances amiably round, collecting the attention of his auditory. 'It's not the carriage itself that is at all notably old. I'm not sure that the fellow Gibbons I was mentioning didn't have a hand in it. But the sledge-runners are quite old – and fine pieces of timber, as you can see. Cedar wood. They came from the Middle East.'
'The Middle East?' The mortician is suspicious.
'Yes – brought back by an ancestor of mine – quite an enterprising fellow – from the top of Mount Ararat. Ship's timbers, he decided they were. And he was a sailor, so he ought to have known.'
The more mentally alert of Lord Scattergood's hearers giggle or gasp. An explanatory voice at the back, unconscious of offence, says, 'Blessed if the ol' bastard doesn't say 'e's got Noah's ruddy Ark.' The greengrocer's second child, thus hearing mention of this object of juvenile enchantment, breaks loose, rushes forward, trips, grazes a knee, and howls. The greengrocer's wife, deeply mortified, seizes the child, rights him, and is about to administer the alarming if innocuous shaking with which in England the simpler classes are accustomed to admonish their young. But Lord Scattergood is before her, whisks the child to his shoulder, and marches off with brisk talk of warm water and sticking-plaster. The greengrocer, his wife, and his elder child follow. They are really awed now. Lord Scattergood pauses until they catch up. He has forgotten his damned tourists and the turn he puts on for them. The child has casually attracted him, and for five minutes he will chat to the parents just as he would do to any of his great neighbours in the county. He believes that they will go away with the unspoken knowledge that one does not shake small children.
Autocratic and benevolent, Lord Scattergood disappears. The group remains for a moment in uncertainty, staring at the sledge. But almost at once a less exalted guide sweeps down on them and politely carries them off. Opening Benison Court to the public has proved to be a money-earner. A good deal of efficiency has been mobilized for the job.
But at Benison even quite a lot of efficiency is liable to get spread out thin. Lord Arthur Spendlove, as he leads his own party round, knows all the closets where chaos and confusion lurk. The very skeletons in the cupboards, he likes to remark, are in a sad muddle. A clever man, seemingly shiftless because profoundly at odds with his time, Lord Arthur wonders if any amount of efficiency could now make much difference. His father, briefed by some soothing old donkey in Chancery Lane, declares that penal taxation is ephemeral, and that of the really big English properties the ownership has not changed. But Lord Arthur is aware of the price of coal and the state of the plumbing; fitfully but with an alert intelligence he conducts inquisitions in the estate office; he has followed certain financial clues through their labyrinth, and it is his conclusion that Benison is a Grace and Favour house, the patronage of which is vested in two or three powerful persons in the City. By an agreement among these, the Spendloves could be sold up tomorrow. But could he, knowing all this, control the situation any better than his father does, or than his elder brother will do, when in the fullness of time he is called home from his endless bird-watchings and other blameless idiocies in Scotland?
Lord Arthur checks himself in these musings, and turns in negligent ease to face his little flock. He has all his father's charm of manner, and although he will tire more quickly of this new family game, he is prepared to put greater finesse into it for a time.
'First it is very necessary to apologize to you about one or two things. The truth is that we are not quite straight at Benison.' And Lord Arthur meets the respectful attention of his group with a gaze the frankness of which must dispel any possible ambiguity lurking in his speech. 'In the early years of the war we had a government concern quartered on us – quite an important government concern – and after that we had a couple of schools. I didn't see much of it myself, because I was having a quiet sort of life in the desert and Tripoli and Italy. But it seems that things got pushed around a bit and stowed away and so forth; and we still don't know quite where we are.'
'Did the schoolchildren cause a lot of damage?' An elderly woman turns from fingering the long gold curtains of the music-room to ask this question.
'Oh, no – dear me, no.' Lord Arthur's glance has travelled over his hearers' heads – he is inches taller than any of them – to the long line of paintings on the north wall. They no longer correspond with the faded patches on the green silk behind them, and he sees too that Canova's frigid Aphrodite has been shoved into the corner formerly occupied by Flaxman's bust of the elder Pitt. He is assailed by the renewed conviction that he and his family are now only camping in Benison, even that they are unlawful squatters who may at any time be evicted by the police; that they may be required to pack up their improvised domesticities and quit – trundling the Aphrodite, and Pitt if he can be found, down the league-long drive on a wheelbarrow. The vision of his father doing this rises before him, and hurtling in the other direction he sees an unending line of motor-coaches, crammed with citizens feeling in their pockets for small change. When the Ministry took over in 1939, he is thinking, my father expected the whole place to be blown sky-high within a week. But it wasn't to be, and Benison is going to end not with a bang but a whimper.
Fortunately he is still talking. He hears his own voice insisting on how agreeable the schoolchildren were, revealing that some of them still write, still come back and inquire about horses, dogs, gardeners. Lord Arthur has the inventiveness of his father, into whose head will come nonsense about family ghosts or Noah's Ark. But he has too a streak of artistry. As he tells how the bigger girls played Sheridan in gowns which had for two centuries been laid away in lavender, or how the smaller girls were allowed to paint with their water-colours the putti that play hide and seek round the tall marble chimneypiece in Queen Caroline's Drawing-Room, or how, treasured in the library, there is a sound-strip of a hundred young voices echoing in the great gallery: as Lord Arthur tells of these things he makes them golden – as golden as the light now pouring in level shafts across the park – Claude's light, the light of the great ideal landscapes, glinting on the gold-leaf that sheaths the high windows without, on the gold damask that drapes them within, on the long lines of gilt frames on the walls, on furniture here smothered and here licked with gold. The great room is full of the golden light. But soon it will be fading and everybody will go away. Already from the nearer stretches of the park comes the pulse and throb of engines, as if the pasturing chars-à-bancs were raising their heads and lowing – lowing to be led to some milking-parlour mightier than that erected by the sixth Marquess of Scattergood in the Chinese taste.
And presently this is answered by another sound. From a distant court of the great building – a court palatial in itself, but here serving for offices and stables – a deep-toned bell is calling the hour in long golden syllables that carry through Benison's two hundred rooms, roll across its spreading formal gardens, its ornamental waters, and its spacious park, to die finally into a just perceptible vibration in the distant streets and houses of Benison Magna, Benison Parva, Abbot's Benison, and Candleshoe.
# 2
'If that wasn't a darn queer thing!' Grant Feather slows down behind a char-à-banc on the Palladian Bridge. 'What makes them put in time, do you think, taking round a raggle-taggle of tourists like you and me?'
With her nose still in her guidebook, Mrs Feather absently shakes her head. 'The Temple of Ancient Virtue', she reads, 'was designed by Kent. Now, why didn't we see that? A graceful but massive structure. The Temple of Modern Virtue was constructed nearby in the form of a ruin, the contrast being allegorical in intention. It was removed by the seventh marquess, who intended to erect in its place a Temple of Progress and Perfectibility. His interests changed however and he built a mosque, now used as a cow-shed. I'd say that folks crazy enough to do things like that are crazy enough to take round tourists.'
'You agree, momma, that it was a mite crazy?'
'Well, Grant, it was courteous too. If you're good enough to be let in at all, even at half a dollar, you're good enough to be talked to. Your grandfather would have done the same, if he'd ever felt like collecting half-dollars from people wanting to see round his house at Newport.'
'Nobody would want to see round that house at Newport.'
'They might now. Your grandfather's house is almost as much a period piece as Benison.' Mrs Feather turns the page of her guidebook. 'The chapel is by Wren, and contains a fine statuary group by Roubiliac. We didn't see that either.'
Grant Feather sets his foot on the accelerator and chuckles. 'Perhaps that's another half-dollar. After all, Benison isn't just one period piece. It's several.' He stops the car. 'There's your last glimpse of it.'
They have driven for two miles through the park, and lodge-gates and the public highway are just in front of them. On their left is a broad sheet of ornamental water, part balustraded and part overhung by dark-foliaged trees. Small islands support obelisks, groups of statuary, miniature temples. Beyond, the river winds gently through a valley whose wooded slopes, artfully converging as the scene recedes, finally form the wings of a theatre in which the backcloth is Benison itself – the great house in all its incredible length and high Ionic elegance planted squarely to the view, with only the open sky behind the bold symmetry of its central mass, its spreading wings, its end pavilions.
For a moment Mrs Feather contemplates the large assertion of it in silence. Then she snaps shut the guidebook. 'I was wrong. Period piece isn't the name for it. It's a show-place.'
'Well, I guess it's that too.' Grant is amused by what he discerns as a change of mood in his mother. 'And grandfather's Newport mansion is hardly that.'
'Benison was a show-place from the start, and that's why that old man must go on showing it now. He'd prefer a more time-sanctioned ostentation – big parties of his own sort, with fifty housemaids staggering up and down those great staircases with coalscuttles, and everything very grand and splendid. But that's no longer possible in England. And rather than have his great house degenerate into something useful – say an orphanage or a convalescent home–'
Grant Feather lets in his clutch again and shouts with laughter. He is from Harvard; he has finished his first year at Oxford; it pleases him to pretend that his mother is a cosy little woman, much lacking in sophistication. 'Rather than do that, the old boy continues to show off – but to new classes of society?'
'Just that. You see, Benison isn't really old – and those Spendloves aren't really old either, or at least they ain't old as the biggest sort of aristocrats are. When you get true antiquity–'
Mrs Feather has again provoked an explosion of mirth in her son. 'A single thirst for modernity distinguishes the American at home, and a single passion for antiquity grips him when abroad.'
'Grant, you got that from your Oxford tutor.'
'Perhaps I did.' Grant grins. 'But it's true, all the same. "What is the oldest thing you have here?" I heard one of our countrymen fire that at the old marquess an hour ago. Or there was the woman that pointed at a portrait of Margaret Plantagenet and asked if she came of an old family. And now here's you complaining that Benison Court misses out on the owls and ivy.'
'That's not quite why I find I don't like it.' Mrs Feather settles back comfortably as the car swings into the highway. 'If it's a period piece, it's a period piece of the show-place period. Do you get that?'
'I get the ostentation. Benison makes its gesture half across its tight little county.'
'That's just it. Rather a blatant gesture. Right at the end of that century – the seventeenth century – the English sense of values deteriorates. They begin putting up big empty vulgar things, and demanding admiration for their mere size and expensiveness. Mind you, Grant, I think it may have been largely our fault.'
' Our fault?' Grant takes his eyes from the road to glance at his mother in astonishment.
'For quitting. For crossing the Atlantic, and draining England of the folk with the old, mature sense of values. English society has been kind of raw ever since.'
'Perhaps we should come back?'
'Perhaps we should.' Mrs Feather considers it seriously. 'After all, they wouldn't put us in the pillory any more, or burn our books, or stop us going to church. You might figure it, Grant, that the practical reasons for our exile being past and done with, it's our business to pack up and come home.'
'You would advocate founding a new England on the western seaboard of this island? You would push back the savages of Lancashire and Cumberland into Yorkshire and Northumberland?' Grant pauses for a moment to peer at a signpost. 'But you wouldn't like it, momma. The immemorial spirit of the place would take charge, and presently you would find that your new England was being run by men. The great American Matriarchy would have perished in the resettlement.'
Mrs Feather opens her guidebook again as a gesture of scorn. The Matriarchy joke always offends her. For some minutes the car travels in silence, and then she makes a discovery. 'The church at Abbot's Benison has long-and-short work.'
'Has what?'
'A stonemason's technique not found in England after the Saxon period.'
'Sure – Owl-and-Ivy. Do you know, I kind of get it mixed with Decorated and Perpendicular. Well, it's just too bad we missed Abbot's Benison.'
'Second on the left, and then left again, will take us straight back to it.' Mrs Feather is inflexible. 'There is a three-decker Jacobean pulpit.'
'That's fine. But these English hotels, remember, believe in something called the dinner-hour. If we miss–'
'And an elaborate marble monument, with curious original iron-work, is believed to be by Gerard Christmas, carver to the navy. I was reading about him only the other day.'
With a sigh of resignation Grant swings the car left. His mother's indefatigable antiquarianism at once delights and bores him. 'You know,' he says presently, 'if you lived in this country, you'd never go after all these period pieces and show-places and churches with Owl-and-Ivy work. Your bondage to them would be broken, and you could sit quietly at home, toasting your ten toes before a nice English open fire. Why not settle for a year or two and try it? Your own little experiment in the new New England.'
'Now, Grant, that's a curious thing. I've been thinking as we drove along this morning that Oxford has a great attraction for me. And last week I was shown a very nice apartment there, right opposite the gates of your college. And the society would be attractive, too. Your dear old President and his wife, and your tutor, and a heap of your own friends.'
With a quick glance Grant assures himself that this devastating idea is a product of his mother's sense of humour. 'Oxford has plenty of Owl-and-Ivy, sure enough. But would it give you scope, momma? You'd do better to buy Benison.'
'You think it's for sale?' Mrs Feather speaks with the prosaic interest of one who would have no difficulty in finding the money – and she is, as it happens, an extremely wealthy woman.
'They'd jump at a good price, and retire to Corbies and the family ghost with the bagpipes. Or you might rent the place, and bind Lord Scattergood to live in one of the lodges and continue to act as chief guide.'
'Benison isn't at all what I want. And none of the places I've looked at is quite right.'
Grant stares. 'You've really looked?'
'I went over several manor-houses in the Cotswolds last week. If your sisters follow you over here, a house will be a convenience. But those I've seen have all had something – well, subtly wrong with them.'
'Something spurious about the immemorial flavour?'
'That – and their being kind of thrust at you. I want to find a house. That, you know, is what a period piece is – something that you yourself rescue from oblivion, and that quite perfectly recalls its own epoch because it has been unregarded and uninterfered with ever since... Look out!'
They are on a winding secondary road and Grant, driving well, is not really in danger of an accident. But he has to brake hard, and the boy who has burst out of a hedge on his near side is lucky to have got across without at least a bad fright. He is through the opposite hedge now, and in a moment he vanishes. Grant changes gear and drives on.
'Wasn't that rather a queer boy?' Mrs Feather's voice is perplexed. 'Did you notice?'
'I didn't notice much about him. Reckless little brute.'
'He had a cap.'
'Why shouldn't he?'
'I don't think that English country boys much wear them. And it had a long feather in it. And he had long stockings that looked almost like–'
Grant is aware that his mother has broken off in order to concentrate her attention upon some object, apparently in the middle distance, that lies over his shoulder. He glances in the same direction and sees nothing but a hawthorn hedge, and beyond this a beech copse in which the shadows of evening are beginning to gather. Recalling the celerity with which the less unpalatable dishes are prone to be 'off' in English hotels, he accelerates. But his mother lays a hand on his arm. 'Grant – do stop. It's Jacobean.'
He stops, and Mrs Feather at once gets out of the car. He follows and sees rising above the beeches two chimneystacks in cut brickwork, each of them of three grouped shafts. They rise boldly above a scrollwork gable which can just be glimpsed through the tree-tops, and the evening sun catches them so that the mellow red above the foliage is like flame. Children's voices can be heard in the distance, but the evening is curiously still and the beech copse has an air of mystery. Mrs Feather is entranced and Grant is apprehensive. 'If you really want to see the church at Abbot's Benison–' he begins.
But his mother shakes her head. Without taking her eyes from the peeping gable and its clustered chimneys, she feels in the car for her guidebook. 'If it hadn't been for that boy, we'd have gone by without noticing it. Did you ever see a place that had such an air of being hidden away?'
Grant does not audibly assent to this; he sees the looming danger of trespass, barbed wire, torn clothes, detection, embarrassment. He has been through it before. So he reaches for a map and studies it. 'Three miles to Abbot's Benison,' he presently announces. 'And two miles short of that there's a hamlet called Candleshoe.'
'Candleshoe?' Mrs Feather's delight deepens. 'Isn't that a wonderful name?'
'I don't find it all that striking, momma. And that house, if you want to know, must be Candleshoe Manor.'
Mrs Feather consults her guidebook. 'This says nothing about it. Yet I'm sure it's Jacobean. It may even be Elizabethan.'
'And good Queen Bess herself may have slept here?' Grant moves back towards the car. 'Well, if it isn't mentioned, it can't be on show. So we may as well move on.'
'But, Grant, that means we've practically discovered it.'
'Nonsense. There will be some big county history with screeds about it. But that doesn't mean that the folk who live here want inquisitive trippers poking round.'
'Perhaps nobody lives here. They say a lot of these places are deserted and going to rack and ruin. Would that be some sort of drive fifty yards down the road?' Mrs Feather sets off at a brisk pace as she speaks. 'I believe it is. And there's a lodge.'
Resigning himself to the situation, Grant steps out beside her. The road appears wholly unfrequented, and he recalls that he has seen no other vehicle since branching off on it. The boy – according to his mother, the oddly dressed boy – is the only sign of life that has appeared. And the lodge, when they come up to it, is clearly deserted; the windows are boarded up and a hole gapes in the roof. Even his mother acknowledges it to be a nondescript of no antiquarian interest; she opines that it is a nineteenth-century affair, built when some secondary approach to the house was constructed by a prosperous owner. On each side of the drive itself a masonry column of undistinguished proportions attests to a sort of perfunctory grandeur; one is topped by a meaningless stone ball and from the other an identical ball has toppled and lies half-buried in grass. Rusted hinges show that there must once have been a pair of iron gates. But these have vanished and there is open access to a cart-track – it appears little more – that presently takes a twist among the beech-trees and vanishes. The house can no longer be seen.
'We'll just look what's round that bend.' Mrs Feather is still briskly resolved. But in response to the silence that seems to be gathering round them she has unconsciously lowered her voice. A rabbit not twenty yards ahead nibbles undisturbed, and for a moment the intruders find themselves standing quite still in tingling expectation. It is a drift of primitive feeling that has worked its way upwards into their eminently civilized consciousness; were it to break in on a more substantial scale they would experience panic, and the god himself might catch and claim them as they bolted for their car. But this passes; they set off up the neglected avenue; the rabbit vanishes; Mrs Feather gives a moment's attention to the commonplace business of finding half-a-crown.
'Probably it is deserted. But there may be a caretaker, and he will be glad enough to show us round. There will be quite enough daylight, although it is already dusky here in the trees.'
Grant says nothing. He sees only cold corned beef, watery salad, blancmange, and the crowning horror known as 'jelly' between himself and bedtime. These will be served in penitential conditions by an obtrusively promoted scullion in an empty dining-room depressingly 'laid' for breakfast.
They have reached the bend and rounded it. A few yards ahead a tree-trunk sprawls dead across the drive; it is a barrier which they must scramble over if they are to go further. Grant supposes that it must have been brought down by a storm, but when his eye travels to its base he sees that this is not so, and that it has been expertly felled to lie as it now does. He and his mother both come to a halt; as they do so there is an odd twang in the air somewhere to their left, and they are looking at the shaft of an arrow quivering in the obstacle before them.
'That's your boy with the queer cap.' Grant is at once clear-headed about this surprising occurrence. 'And it has a message.' He advances to the tree-trunk and takes hold of the arrow. It is homemade, powerful, and correctly feathered; it pierces and has carried a twist of paper. Grant tears this off and unfolds it. They are looking at a single word scrawled in pencil:
Avaunt!
'An inhospitable boy.' Mrs Feather frowns. 'But where would a village child come by a word like that?'
Grant laughs. 'From a five-cent story of Robin Hood and his merry men, I'd guess. And I don't suppose it's meant to be ambiguous.'
'How could it be ambiguous?' Mrs Feather turns colloquial. 'Don't it just mean "Git"?'
'It might mean "Advance".' Grant is on ground where his education excels his mother's. 'Spenser uses it that way in the Faerie Queene.'
'I never heard of village boys reading the Faerie Queene.'
'This mayn't be a village boy. It may be the young lord of the manor, amusing himself in a mildly alarming way at our expense. The way they said "Trespassers will be prosecuted" in Sherwood Forest long ago.'
'We'll take it to mean the other thing, and go right ahead.' Mrs Feather's resolution is mounting. She climbs over the tree and walks on.
Twang!
This time the arrow gives the impression of having travelled uncomfortably close to their ears. But its mark has been at a discreet distance ahead; Grant goes forward to the standing tree in which this time it has lodged, and again finds a message. He twists it open and reads:
'Enter these enchanted woods,
You who dare!'
'Meredith.' It is apparent that in the way of English poetry Grant Feather knows all the answers. 'And this time I'd say it is ambiguous – a kind of challenge. But will it be safe to accept?' Grant looks at his mother as whimsically as he can. In fact, he is uneasy. He knows that the bow and arrow at work are not the sort with which a child plays in a suburban garden. The thing could be lethal. And the child may be cracked. He does not want this Robin Hood ballad stuff to turn into a Cock Robin nursery rhyme to his mother's personal hazard.
Mrs Feather divines that her son is feeling protective. This amuses her, but she is diplomatic. 'We can risk it. The boy has certainly gotten a powerful bow. But a challenge like that doesn't come to the mind of anyone who is going to shoot you in the back. We'll go straight ahead.'
Strictly, this is not a feasible programme, for the drive, such as it is, pursues a winding course. Perhaps it was originally constructed in this way in order to give a false impression of distance; it twists about in the beech wood so as to make the most of it.
They move forward. No more arrows are fired. Mrs Feather has taken the second message from her son, and now she glances at it. 'Grant – can you remember any more of this poem?'
'Quite a lot.' He is aware that his voice has gone self-conscious in what they feel as a deepening circumambient silence. But he firmly begins to quote:
'Enter these enchanted woods,
You who dare.
Nothing harms beneath the leaves
More than waves a swimmer cleaves.
Toss your heart up with the lark,
Foot at peace with mouse and worm,
Fair you fare.'
Mrs Feather listens attentively as she walks. 'It's an odd sort of poetry to appeal to a boy.'
'Didn't you say he was an odd sort of boy?' And Grant continues to recite:
'Only at a dread of dark
Quaver, and they quit their form:
Thousand eyeballs under hoods
Have you by the hair.
Enter these enchanted woods,
You who dare.'
'I see.' Mrs Feather is appreciative. 'The whole poem is a kind of challenge. Perhaps Candleshoe Manor will be that too.' She pauses, as if aware that in this remark there is a flavour of vagueness alien to her normal personality. 'How very still the place is! One can imagine there is never a sound in this wood from dawn to dusk.'
'But plenty from dusk to dawn?' And Grant, who wants to show off his stock of poetry, begins to quote again:
'Sudden will a pallor pant
Chill at screeches miscreant;
Owls or spectres, thick they flee;
Nightmare upon horror broods;
Hooded laughter, monkish glee,
Gaps the vital air.
Enter these enchanted woods,
You...'
Grant breaks off abruptly. The notable silence into which he is declaiming has been as notably broken. Somewhere quite close at hand a bell is tolling – a small, cracked bell.
# 3
The bell is cracked and insignificant. But unlike the majestic bell at Benison its tintinnabulation is unmistakably a call to some religious observance. It is at once authoritative and domestic – first cousin to a dinner-bell and yet indubitably of the Church. It speaks of a parson tugging at a rope with one hand while stuffing away his pipe and reaching for his surplice with the other. Mrs Feather, whose historical imagination has been so inadequately gratified in the course of the afternoon, suddenly feels that she has heard the very heartbeat of England. Her eyes fill with tears, and she has to cope with these before taking a glance at her son. Is he at all impressed? At Oxford he is exposed to quite a lot of bell-ringing, and if he returns there in thirty years' time the sounds will move him unspeakably. But this he may feel to be only an ugly little clamour. Mrs Feather cannot tell. She rounds another bend, and finds a miniature church or chapel before her. Like the lodge it is in disrepair, with windows for the most part boarded up and a hole in the roof. Nevertheless something is going on in it. In a little belfry the small jangling bell is just ceasing to swing.
'It seems to be joined on.' Grant is pointing vaguely ahead. A tall hedge – it may be either box or yew – interposes between them and the main building beyond, and in the fading light the character and topography of the place are alike hard to determine. But on the far side of the chapel some sort of covered way may be descried, and it is to this that he is pointing.
'Almshouses!' Light comes to Mrs Feather. 'One of those immemorial charities. And the poor old almoners – isn't that the word...?'
'It certainly is not. Inmates – or beadsmen.'
'The poor old beadsmen have to attend chapel twice a day and pray for the soul of the founder.'
'Would they be allowed to do that in an Anglican church?' Grant is interested. 'I mean, if it had been laid down when the charity started in Catholic times?'
'We can go in and find out. The old people must be there now.'
'Say – we can't do that!' Grant is horrified. 'It's no business of ours.'
'Public and corporate worship is anybody's business.' As Mrs Feather delivers herself of this pious if convenient sentiment she is already on the march again. 'I expect there are old women too. And no doubt strangers may attend and contribute. There will be a box. Now, what did I do with that half-crown?'
Recollection comes to Grant. 'It can't be almshouses. The map says–'
He is too late. Mrs Feather, whom he still follows obediently, has reached the little chapel, found an open door, and marched in. Some sort of service is indubitably going forward. Grant is aware that his mother, with unusual precipitancy, has sat down, and that he himself is perched beside her on a hard bench of quite inadequate breadth. He is aware too of a high quavering voice speaking of the absolution and remission of sins. He knows that he is attending Evening Prayer according to the form of the Church of England. He takes a deep breath and looks about him for the almsfolk of his mother's imagining, although with small hope of finding them. And of course he is right. It is a private chapel. He has never been in such a place before. But he recognizes it in an instant.
At Benison the chapel is by Wren and there is a statuary group by Roubiliac. This is different. It is like the smallest and most unassuming parish church in a Decorated style – 'Decorated' only in the technical sense, since the actual effect is bare enough. There is a single monument – and at a first glance it appears to be of the order described in Mrs Feather's guidebooks as 'rude'. Grant studies it; he has a hunch that it is the least embarrassing thing available for study. A gentleman with flowing locks and a completely composed demeanour is raising himself from a stony ocean and grasping the prows of a vessel which appears to be in the act of foundering. Upon this watery scene two younger gentlemen standing on either hand are about to lower a pair of marble curtains. Crowning this is a coat of arms, decorated in faded gold and colour – and this Grant, although without much learning in such matters, feels to be obscurely familiar.
And now he lets his glance stray further afield. Along one wall the chapel contains two benches such as one might find in a village school, and it is on one of these that he and his mother are sitting. The rest of the furnishing consists of the altar, a lectern, and three mouldering upright chairs upholstered in ragged leather. Before each chair is an ancient and crumpled hassock upon which either long practice or an abnormally good sense of balance might make it possible to kneel. Only one of the chairs is occupied – by a diminutive lady of great age. Dressed in black silks of an answering antiquity, and with a black lace cap set upon snow-white hair, she is delivering herself of responses in what Grant supposes to be a provincial accent. Mrs Feather, who has been exposed to intermittent contact with the English since childhood, knows that it belongs not to any specific region but to the past – to a past, she further guesses, quite surprisingly remote. The officiating clergyman, too, suggests an earlier time – but this less by his voice than by his attire. Memories of some illustrated edition of Jane Austen float through her head; as she listens to the Collect for Aid against all Perils she finds herself surprised that the person offering this petition wears his own hair – or a wispy remnant of that – rather than a powdered wig. She perceives that her sense of time is becoming confused, and supposes it the effect of some delayed shock from the archery display to which she has lately been subjected.
The service is over. The little old lady rises, speaks briefly to the clergyman in inaudible tones, turns, and moves from the chapel, supporting herself on a silver-mounted ebony stick. As she passes the Feathers she bows. The weight of years has already so bent her figure that the effect is alarming. Moreover the gesture is unaccompanied by any play of feature, and without pausing the old lady walks on and disappears. The clergyman vanishes somewhere at the back.
Mrs Feather finds that she is still clutching her half-crown. She looks about her, not quite prepared to abandon the obscure hope of paying her way. 'There may be a box saying "General Expenses",' she suggests. 'Or "For the Fabric". There so often is.'
'Would you keep such a thing in your bathroom?'
'In my bathroom, Grant?' Rather feebly, Mrs Feather affects bewilderment.
'Sure. This chapel is just as private to the old lady as your bathroom is to you. Different kinds of cleanliness are in question, no doubt, in one and the other. But the idea of privacy attaches to each.' Grant makes this speech with some severity. His hopes of anything resembling a satisfactory dinner are now remote.
'Well, Grant, it did look like almshouses–'
'Nonsense, momma. It's just that you will keep walking on, and opening doors, until you're stopped.'
'Grant, I was opening doors in this country, and having them opened for me, I'd like to add, before you–'
'Good evening.'
The Feathers, caught in a moment of some indignity, turn round. For a second they suppose themselves to be addressed by a venerable upper servant. They then see that it is the clergyman. He has abandoned his outmoded sacerdotal habiliments for equally outmoded garments inescapably suggestive of a superannuated butler. He is however a gentleman – a very old gentleman – and he is himself now engaged in a process of social appraisal through steel-rimmed spectacles balanced precariously on the end of his nose.
'And what a beautiful day it has been. It is pleasant to think of visitors touring the country in such ideal weather. No doubt you have been to Abbot's Benison, and have come on upon hearing that we too have a fine Christmas.'
'A fine Christmas?' Mrs Feather is only momentarily at a loss. 'But yes, indeed! And is that your Christmas?' She advances upon the monument which Grant has already studied. 'I know some of his work in Buckinghamshire. That's the county one of my ancestors left in 1620.'
Whether or not he makes anything of this august date, the clergyman smiles benignly. 'You may be thinking in particular of the Clarke monument at Hitcham. But there, my dear madam, caution is necessary – caution is undoubtedly necessary. The affinity with our own monument is pronounced – you have only to glance at those figures holding the curtains to acknowledge it. But the authenticity is less well attested. We, as you may know, have the actual accounts, with a discharge in Christmas' own hand.'
If Grant were not a well-bred young man he would audibly groan. Local antiquarianism dispensed by a clerical dotard amid deepening shades of evening makes a close to the day even more depressing than corned beef and blancmange. But Mrs Feather is now in her element. 'It is a monument', she is asking, 'to a former lord of the manor?'
'Certainly – most certainly.' The ancient clergyman takes off his glasses, breathes on them from lungs still professionally robust, polishes them, and returns them to his nose upside-down. 'Admiral Candleshoe. We lost him, I am sorry to say, on the Islands Voyage. That would be – let me see – in 1597. It was a bad business – a very bad business. To be quite frank with you, we were displeased with the conduct of the Earl of Essex.'
'You were displeased with the conduct of the Earl of Essex – Elizabeth's Essex?' Mrs Feather is uncertain how to take this.
'Yes, indeed. But they needn't have chopped his head off four years later, all the same. By the way, my name is Armigel – Rupert Armigel.' The ancient clergyman has produced a snuff-box and is tapping it. Grant sees that because he himself carries no snuff-box there is going to be a hitch in some ritual of introduction that the old gentleman is proposing. This embarrasses him acutely; he blushes; and his mother has to come to his rescue in this matter of names.
'My name is Feather – Alice Feather – and this is my son Grant. Do you live here, Mr Armigel?'
'Assuredly – most assuredly.' The ancient clergyman pauses while Grant, who has been obliged to take a pinch of snuff, gives a sequence of sneezes that ring out startlingly in the bare chapel. 'Rupert Armigel, madam – domestic chaplain to Miss Candleshoe.'
Mrs Feather gives a cry of delight – presumably at discovering that a Candleshoe still lives at Candleshoe. Grant's embarrassment returns, and he edges away towards Gerard Christmas' monument. The foundering admiral, he realizes, bears an unmistakable family likeness to the old lady who, a few minutes before, has been worshipping here. But there is something else – some further likeness – that eludes and puzzles him.
'You judge it appropriate?' Mr Armigel is at his elbow, and now all three are confronting Admiral Candleshoe's memorial.
'It's very handsome.' Grant hopes that he has hit on the right epithet.
'I agree with you. Both as an artist, which was my first profession, and as a very old friend and – um – adherent of the Candleshoe family, I am entirely pleased with it. Moreover Miss Candleshoe herself, I am glad to say, considers that Christmas has done a very good piece of work. She considers that the thing will serve very well.'
Grant, not without satisfaction, sees something like alarm momentarily visit his mother's features. 'Has Christmas done anything else here?' he asks.
'Decidedly – most decidedly. You will see a work of considerable interest in the house itself. And that reminds me that I have been remiss – most remiss. Miss Candleshoe has desired me to invite you to take a glass of wine. I hope there is some wine. And perhaps we had better wait on her now.'
The Feathers make modest protestations, but Grant knows that his mother is jubilant. Once more he takes refuge in the monument – this time peering at an inscription low down on the right. The light is bad; he fails to decipher it; and Mr Armigel comes to his rescue.
'An addition, Mr Feather. A copy of modern verses which, although not inappropriate to their subject, strike, to my mind, a jarring note. Modern poetry is out of place, surely, in connection with the Islands Voyage.'
Grant has knelt down and can now read the lettering. It is incised in an ancient character and still faintly gilt. For the second time that evening he gives himself to declaiming English verse.
'Aye me! whilst thee the shores and sounding seas
Wash far away, where'er thy bones are buried;
Whether beyond the stormy Hebrides,
Where thou perhaps under the whelming tide...'
Grant breaks off. 'Say! But that's Milton. I thought you said–'
Mr Armigel nods placidly. 'Quite so – precisely so. An elegy called "Lycidas", Mr Feather. Beautiful in itself. But modern poetry is not suitable on Admiral Candleshoe's monument.'
Enter these enchanted woods, You who dare... It comes home to Grant with marked force that about Candleshoe Manor there is something a little out of the way. Perhaps the ether wobbles. Conceivably there is a kink in space. Time – at least within the consciousnesses of the residents – is far from behaving as it should. Grant gets rather hastily to his feet. Mr Armigel may be mad. Miss Candleshoe's wine may be a magic potion calculated to turn respectable pilgrims from Massachusetts into Sleeping Beauties or Rip van Winkles.
'Haven't I seen that coat of arms before?' Mrs Feather, uninterested in Milton, is pointing to the upper part of the monument.
Grant follows her gesture. He remembers that he, too, has had the same impression. And suddenly he can account for it. 'The flag, momma – the Spendlove standard, flying above Benison.' He turns to Mr Armigel. 'Are the Candleshoes connected with the Spendloves, sir?'
Mr Armigel finds this amusing. He contrives the odd feat of laughing and taking snuff simultaneously. 'My dear Mr Feather, your conjecture is at once correct and preposterous.'
'I don't get that.' Grant is now sure that the old gentleman is crazy.
'Correct but upside-down, topsy-turvy, the wrong way round. The Spendloves are connected with the Candleshoes.'
Grant sees the difference. He glances again at Admiral Candleshoe and experiences a shock of discovery. Here is the other similarity that has worried him. The expiring sailor is not only like the present Miss Candleshoe of Candleshoe Manor; he is the split image of Lord Scattergood of Benison Court.
'The Spendloves are Candleshoes – but of a very junior line.' Mr Armigel, pleased to find Mrs Feather evidently entranced, appears about to embark upon genealogical disquisition. But he checks himself. 'Miss Candleshoe is waiting to receive you. She is most interested in your visit, and it will be a kindness if you will make a short call. Let me lead the way to the house.'
The Feathers, with polite murmurs, prepare to follow Mr Armigel. All three turn towards the door of the chapel, and all three pause. Framed in it, as if to bar their way, stands a boy. He is dressed in what may be Tudor costume. And he carries a bow.
# 4
Abruptly the boy vanishes. Together with accurate archery, it seems to be his main accomplishment. For a moment the effect has been as of some ancient portrait; now the arched doorway of the chapel is like an empty frame, and behind it is only mild evening sky. Grant Feather frowns for a moment into this immensity, and then takes a glance back at Admiral Candleshoe. That distressed mariner has all the timelessness and immobility which the best authorities pronounce to be desirable in sculpture. He does not propose really to drown, nor on the other hand has he any genuine mind to be rescued. It is conceivable that when the chapel is void of spectators the attendant effigies will lower their marble curtains on the scene and go off duty for the night. But this fancy can be entertained only in defiance of the most powerful suggestion to the contrary. These supporting figures are frozen into the same permanence as the Admiral between them. In the first days of their existence, while Gerard Christmas was still tidying up his chisels and superintending the gilding, they must already have had the appearance of centennial vigil. Infants christened beneath their impassive gaze have come rejoicing to the command of bow and arrow and angling-rod and fowling-piece – and have returned to their presence in the end, while some predecessor of Mr Armigel's has addressed himself to the burial service.
From somewhere outside comes a sudden hubbub of young voices; it recedes, as if a bevy of children are racing and tumbling into the distance. Mr Armigel occupies himself with a bunch of keys; Mrs Feather takes the opportunity of slipping her half-crown unobtrusively back into her bag; Grant finds small comfort in this, for he suspects his mother of nursing an atrocious purpose. The coin may not be produced again at Candleshoe. But what about its near companion, Mrs Feather's cheque-book? The Spendloves at Benison are lords. But Mrs Feather's forebears for several generations have been princes – merchant-princes – and she has inherited an instinct for brisk and open commerce. Had the impulse moved her, she might have offered Lord Scattergood in his own octagon room a round figure for everything within sight. She would be capable of doing the same thing – Grant reflects with a fine imaginative flight – while being shown the Crown Jewels in the Tower of London. And she believes that she has discovered Candleshoe. Horridly pat, like an actor taking a heavily signalled cue in some banal play, the place had peeped out from behind its beech-trees in the very instant that the good lady was discoursing on the satisfaction of rescuing a period piece from oblivion.
Grant does not at all object to his mother's buying a derelict English manor-house. What he has glimpsed of Candleshoe pleases him, and he knows that his sisters would adore it. But Oxford, although he is doubtless to derive large benefits from his residence there in the end, has rather muddled him for the time, and he has a morbid fear that his mother is going to do something crude. How is he to circumvent this? There comes to him the inspiration that he must be crude himself – so crude that his mother will at once be all reaction. So he turns to Mr Armigel. 'Say,' he offers conversationally, 'what sort of sanitation do you have here?'
They are walking down a short covered way of no great antiquity, and the house is in flank before them. Mrs Feather nearly drops her bag – half-crown, cheque-book, and all. She remembers that it was very hot in the gardens at Benison, and wonders anxiously if Grant has suffered a sun-stroke. Mr Armigel however appears to take the inquiry entirely in good part.
'Now, that is an interesting question – a very interesting question, indeed. Only, I think you use rather a grand word, if I may say so, for anything of the sort at Candleshoe. We never have had anything that you could quite term that.'
'Is that so, sir?'
'In fact, all that I can recall at the moment, are two or three quite small and nasty affairs.'
'I see.'
'And, of course, that sort of thing has fallen more and more out of favour in England. We have lost the taste for it. In your country, I understand, the posture of affairs is somewhat different. The thing keeps on cropping up.'
'Well, yes, sir.' Grant is rather at a loss. 'In fact we just aren't happy without it.'
'Deplorable!' Mr Armigel shakes his venerable head, and for the first time speaks with some severity. 'But I should be happy to tell you our own experiences, if your interest runs that way. They are, I fear, malodorous. And underground.'
At this Mrs Feather stops in her tracks. 'It just occurs to me', she says, 'to inquire what you suppose my son to be talking about?'
Mr Armigel turns to her courteously. 'Assassination, madam. The topic is an interesting if repulsive one. And as it is commonly applied in England only to homicide of some political or large public significance, I have remarked that it is not quite an appropriate word in a quiet place like this. But we have had our bloodstained pages, I am bound to admit.'
Grant wonders whether Mr Armigel is really a little deaf. Meanwhile they have turned aside into a ruined garden, perhaps that they may approach the house by way of its main façade. The garden is like the faded and shrunken ghost of something at Hatfield or Longford – intricately formal within a great rectangular hedge grown wild and ragged, and with all its ordered elaboration of arabesques and knots overgrown and only in part distinguishable, like a schoolchild's geometrical drawing largely obliterated by the sweep of an India rubber. At the far end is a small pool covered with duckweed, and in the middle of this an eroded Nereid patiently clasps a lichened shell from which water has ceased to issue a long time ago. Grant recalls the gardens at Benison and their great jet d'eau. Presently, he thinks, time's impatient India rubber will reach that too.
Mr Armigel discourses on certain passages of violence in the history of Candleshoe. From generation to generation the place itself has slumbered, and its owners with it. But the chronicle shows an intermittent streak of wildness among its younger sons. It is two or three of these who have, upon certain unedifying occasions, streaked the page with blood. Others have taken their waywardness to sundry remote corners of the globe, and among these – Mr Armigel intimates – an equal rashness has produced rather more that is laudable.
Mrs Feather inquires about the present heir. Grant compresses his lips, reading into this a clear proof of his mother's intention. Mr Armigel replies that the air is generally accounted wholesome, and the exposure of the mansion particularly well adapted to making the best of the winter sunshine. Grant decides that this is a cunning old man. He even suspects that there may be some plan to put Candleshoe on the market, and that any persons of evident substance straying within its policies are liable to be conducted to the owner and entertained with an eye to possible business. This however scarcely allays his anxieties about his mother's conduct. Her mind may be moving in the same way. And if they are both wrong, some humiliating situation may ensue.
But now the house is squarely before them. It is undoubtedly a gem. The plainness of the front is relieved by a central and two flanking bays, and the fine proportions of the whole are accented by the weathered stone with which the mellow brick is bonded at the major perpendiculars. There is a terrace with a crumbling arcade and a flight of steps leading down into the gardens; above the main entrance is a great dim sundial with its gnomon gone, like a battered pugilist; crowning the whole is a lettered balustrade carrying some pious Latin inscription the whole length of the building. They climb the steps, finding the broken treads only uncertainly amid weeds and moss; the main doorway is narrow, and its sides are polished by the friction of centuries of broad shoulders and hurrying elbows; on their left is a buttery hatch and on their right a high carved screen with a little staircase leading to a gallery. They pass through an opening in this screen hung with a curtain so ancient that it seems woven of dust, and find themselves in the great hall of Candleshoe Manor.
The greatness is relative. Lord Scattergood's octagon room could digest the place without noticing. But it remains a big hall, with a dais and a lofty bay window at the far end, a fireplace with a massive dog-grate, and a ceiling of elaborately moulded plaster. On the oak-panelled walls a variety of pictures – portraits and mythological scenes that have alike retreated behind a brown haze of varnish – jostle with boars' heads, foxes' masks, pikes, shields, and muskets.
There is a great deal of stuff lying about. This – Grant sees at once – makes the real point of contrast with the octagon room. That room – although very conceivably whole bevies of Spendloves smoke their pipes in it after business hours – has taken on the air of a museum; there is a great deal of stuff there too; but it is ranged and ordered, so that on each object one expects to see a little label. Here, if you are not careful, you will trip over things or bump your head. There is a lot of armour tumbled about in one corner, as if a knight in haste to get into the lists has been rummaging for a hauberk with a good snug fit. Near this a tall armoire stands open. It has been adapted – perhaps two hundred years ago – to the purposes of a wardrobe, and it contains an odd jumble of doublets and riding-cloaks and breeches, mud-bespattered and antique of cut. On the dais is a long refectory table. One end of this, extending into the bay window, catches the last warmth of the day and is laid with some elaboration for two, with silver plates and tankards, and apples in a great silver-gilt bowl. Then comes a salt-cellar – an immemorial affair of silver and horn – and below this several further places have been laid, with horn spoons and pewter mugs and great platters of polished wood.
So Miss Candleshoe is crazy. Grant Feather feels a sense of relief at being able at length to 'place' this whole queer set-up. And relief makes him charitable. 'Crazy' is perhaps an unimaginative way of putting it. Conceivably Miss Candleshoe is the last of the major English eccentrics, about whom Dr Edith Sitwell wrote so engaging a book. Grant is for some reason sure that his mother will behave impeccably in the presence of a positive strangeness of this sort. He therefore cheers up, and is about to make some polite remark when a voice speaks – or hisses – behind him.
'Strangers – beware!'
Grant looks over his shoulder. An animal of alarming proportions – he takes it to be a wolfhound – has come through the carved oak screen behind him and is regarding him with disfavour. For a moment it seems necessary to return to the magical hypothesis and suppose Miss Candleshoe to be the mistress of some species of Circean enchantment. The dog however offers no further observation, and it occurs to Grant to look upwards. The screen, as he has noticed, supports a small minstrels' gallery. This is now shrouded in gloom, but just perceptible in it are several pairs of bright eyes. Grant raises an arm and waves to them, since this strikes him as the amiable thing to do. They vanish. The archer, it appears, commands auxiliary forces as nimble as himself.
Mr Armigel and Mrs Feather have walked on. Their goal appears to be some farther room beyond the hall, and Grant remembers that in an Elizabethan mansion the private apartments lie in that direction. The other side of the house is for the servants, and at each end a staircase will rise through the several storeys of the edifice to the long gallery which must run its full length at the top. Grant sees his sisters wanting to hold a dance in the long gallery, and being told that under the weight of such a proceeding the floor will certainly collapse and bring the greater part of the house down with it. They will demand that architects and builders be brought in. And presently the whole county – which is what, if you are grand enough, you must call your neighbours – will be laughing at the antics of the folk that have bought out the Candleshoes. Grant relapses into gloom. In this mood he follows his mother into Miss Candleshoe's drawing-room.
Miss Candleshoe may worship in eighteenth-century style, dine in a fashion notably feudal, and suffer armour to lie about as other untidy people do ulsters and gumboots and shooting sticks. But when she withdraws from these occasions it is into a privacy that is wholly Victorian. There is a tartan carpet which Grant finds baffling, but which Mrs Feather is able to date as shortly after 1868, the year in which was published an illustrated edition of Leaves from the Journal of our Life in the Highlands. There is a further testimony to the same influence in an engraving after Landseer, depicting Prince Albert in the pose of a successful lion-hunter, standing beside a shot stag. An upholstered sofa, even after nearly a century of use, is like a fat boy in imminent danger of bursting all his buttons. On sundry small round tables, under inverted glass bowls, repose heaps of strawberries ingeniously manufactured from felt and peaches blushing in scarcely faded plush. Viewed in this setting Miss Candleshoe, who now rises to greet her guests, swims at last into something like plausible chronological focus. She is simply a very old lady who carries her own period about with her. Perhaps, like her chaplain, she drops in, as it were, on other periods from time to time. But that is a privilege of the very old.
'How do you do?' Miss Candleshoe advances with the aid of her ebony stick. She has always been what her generation would have called petite, and now she is so stooped – virtually, Grant thinks, into the form of an inverted capital L – as to bear the appearance of something indecisively quadrupedal moving about near the floor. But out of this posture Miss Candleshoe manages to extract more of dignity than pathos. And although doubtless a dotty old thing, she contrives an upward glance of considerable penetration from a pair of very clear black eyes which frame a powerfully hooked nose. Admiral Candleshoe, Grant remembers, has the same nose. Perhaps, before his translation into Gerard Christmas' stony immortality, he had the same eyes too.
'How do you do? Mr Armigel and I were gratified that you joined us at service. And you now add the further kindness of a call.' Miss Candleshoe raises a hand above her head and shakes hands with Mrs Feather. 'It is particularly good of your grandson to come. Youth has many calls.'
'Grant feels, as I do, that it is a privilege to see Candleshoe.' Mrs Feather declines to find malice in her hostess' disposition to treat her as a contemporary. 'It is just such a house as I have dreamed of for a long, long time.'
Grant Feather grinds his teeth. But neither Miss Candleshoe nor Mr Armigel notice this, since they are engaged in accommodating the visitors with tightly upholstered chairs, massively rich plum-cake, and glasses of wine. Grant suspects that this last may be distilled from cowslips; he sips it and discovers it to be Madeira of a sort superior to that commonly available to junior members of the University of Oxford. Perhaps Madeira lasts forever, and this was laid down in the eighteenth century. It may have been about then that the cake was baked.
'You must not be anxious about the horses.' Miss Candleshoe herself takes a large slice of plum-cake. This however proves to be for the wolfhound, who has taken up a posture rather like that of the Prince Consort in the engraving above him. 'My people will see to your carriage, and look after the animals very well.'
Mrs Feather is delighted. 'That is very kind of you. As a matter of fact–'
'Although naturally, since the death of my brother Sir James, we have a trifle retrenched in the stables. I do not myself hunt. Nor does Mr Armigel care to do so, although it is a customary and very proper diversion for the clergy. Of cock-fighting I do not approve. Nor should a clergyman – I speak, of course, of the Established Church – attend bouts of fisticuffs.'
'In this, fortunately, we are of one mind.' Mr Armigel appears to find nothing out-of-the-way in the sequence of his patroness' thoughts. 'But I regret the desuetude of the bowling-green.'
'The gardeners must see to it.' Miss Candleshoe pauses and sips Madeira. 'If there are any gardeners, that is to say. Since my brother Sir James died several years ago we have been obliged a little to cut down on one side and another. But the topiary, at least, is in tolerable order. The children, I am told, see to that.'
Here is something about which Grant wants to know. 'Then you do have kids living here?' he asks.
'At the moment, only a solitary goat.' Mr Armigel seems to offer this reply in perfectly good faith. 'But the poultry are very flourishing, I am glad to say.'
'Only this morning, indeed, we had boiled eggs for breakfast.' Miss Candleshoe makes this announcement with an innocent triumph somewhat at odds with her grande dame manner. 'If we had a cow we might have some butter – in which event scrambled eggs would become a distinct possibility. Unfortunately the death of my brother Sir James made it necessary to dispose of the home farm.'
'Living in this wonderful old house has its inconveniences for you?' Mrs Feather is all sympathy.
Miss Candleshoe may be observed as giving her visitor a very penetrating glance indeed. 'The times are indubitably adverse to the landed interest. My brother Sir James tells me – has told me, I ought to say – that much of the blame must be attributed to Mr Gladstone. I am surprised. I had understood Mr Gladstone to indulge a taste for arboriculture, a pursuit very proper in a country gentleman of the soundest principles. But it appears that his activities were rather those of a woodcutter – or what you, doubtless, would term a lumberjack. Little good will come of a man who murders trees.'
'I just adore trees.' Mrs Feather is unblushing. 'But perhaps there is some smaller and more convenient house on the estate, which might, with a little capital–'
Grant, with great presence of mind, gives a vicious but unobtrusive kick at the wolfhound's behind. The brute leaps up, contrives a deft outflanking movement, and bites Grant firmly in the corresponding part of his own person. There is a good deal of confusion. But this it would be tedious to retail. We may take advantage of the interlude for a necessary retrospective glance over some centuries of English history. We shall then be in a position to meet Jay Ray, the boy with the bow, and the hero – after a fashion – of this story.
# 5
It cannot be maintained that Queen Elizabeth the First slept at Candleshoe Manor. The present house, replacing one of unknown appearance and uncertain antiquity, was completed only in the year of her death. But her successor, the canny James of Scotland, on his journey south condescended to pause there for a bever or light refection. This illustrious and somewhat expensive occasion was without consequences of any kind; neither royal favour nor royal disfavour ever visited Candleshoe again; as the house settled firmly upon its foundations it settled too into the comfortable security of near-oblivion.
Three centuries had been required for the Candleshoes to reach the modest magnificence which the place represented. When in the year 1367 a younger son was born to the Black Prince, it is upon record that the vessel bearing the news from Bordeaux belonged to one Roger Candleshoe, a vintner of Cheapside – 'long-time well-reputed', we are told, as an importer of the red wines of the Gironde. Forty-three years later, when the royal infant thus heralded met the fate of a deposed king at Pontefract, Roger's son William had added to the family trade a profitable importing of the wines of Spain – described by an expert Customs official, Geoffrey Chaucer, as of considerably greater 'fumosity' than their northern neighbours. It was when one of William Candleshoe's novelties known as sherris sack was acclaimed by a leading connoisseur of the day that the modest Candleshoe fortunes became secure. Candleshoe Manor, in fact, would never have been built had not an early fifteenth-century Candleshoe enjoyed the lavish custom and earned the generous approbation of Sir John Falstaff.
It would appear to have been not long after the death of the good Sir John that the family acquired those lands upon which, as we may presume, they had originally laboured for others. By the close of the sixteenth century their connexion with the wine-trade had disappeared. When in the year 1600 Robert Candleshoe decided to demolish what must have been for many generations his family's home and erect in its place a more commodious mansion in the refined taste of the time, it was to his resources as landowner that he looked to defray the cost. His calculations may not have been unsound in themselves, although it is notable that he was a younger son, entered upon the inheritance only as a consequence of the death by drowning of his brother the Admiral, and committing himself to the ambitious project within three years of that melancholy circumstance. But if not a rash builder, he was certainly an injudiciously fond father; and the over-lavish provision which he endeavoured to make for most of his twelve children in fact crippled the estate to an extent from which it was never to recover.
Of these children the youngest was called Rupert; and he alone got nothing at all, except a little Latin and much fustigation from a resident tutor grown grey in the purveying of these amenities to elder brothers. Nobody disliked Rupert, or indeed much noticed him; and when at fifteen he was eventually packed off to apprenticeship in the city, the action was motivated only by the plain fact that there was nothing else to do with him. As it happened, young Rupert disliked his master, a highly respectable goldsmith with a technique of fustigation much in advance of the ageing tutor's; and the boy with great good sense almost immediately ran away. Being reduced in consequence to a somewhat hungry tramping of the London streets, he recalled the origins of his family's former prosperity in malmsey and sack, and betaking himself to the appropriate quarter of the town he accepted employment without articles in the establishment of a wine-merchant carrying on a large trade with the citizen classes. Being here set to the business of improving his firm's commodities by the judicious admixture of resins, molasses, red clay, salt-petre, and rainwater, he laboured so successfully at these mysteries as to become a person of much consideration in the city, and eventually its Lord Mayor. Rupert's son William inherited both the wealth and the address of his father. Marrying a certain Lady Elizabeth Spendlove, and acquiring her considerable fortune for his children on condition of taking her name, he further improved matters by disposing of her person to his sovereign, with the result that Charles the Second, shortly before his death, created the Lord Mayor's son first Baron Spendlove. After this the family, in the vulgar phrase, never looked back. Within a century of this well-deserved ennoblement, a certain Rupert Spendlove, son of that William, first Earl of Benison who built Benison Court, was created the first Marquess of Scattergood. 
A wit and a philosopher, the patron of Gay, and the friend of Bolingbroke and Swift, the first Marquess derived urbane amusement from his relationship with a neighbouring squire, and the Mr and Mrs Candleshoe of the time were occasionally invited even to the very grandest Benison occasions. Throughout the later eighteenth and earlier nineteenth centuries, indeed, young Candleshoes in quest of either a clerical or a military career would be given an amiable upward kick from Benison. One of them, a lad of parts, eventually became a bishop. In those days a marquess could do a great deal.
All this – or nearly all this – the Reverend Mr Armigel, domestic chaplain to Miss Candleshoe of Candleshoe Manor, has now expounded (by way of supplement to dabs of iodine, strips of adhesive plaster, and commiserating chuckles) to Grant Feather. Mrs Feather has meanwhile received a sufficient modicum of the same historical intelligence from her hostess to be more enchanted than ever. She takes a just pride in her ability to understand the complexity of the social system involved. The Candleshoes are confessedly bankrupt, and they are intermittently patronized by the Spendloves, whose bankruptcy is only to be conjectured, and who belong to a rank of society (Mrs Feather is quite clear about this) only just below the dukes and duchesses. But in the high dry light of genealogical science the Candleshoes, although far from shining with the first brilliance, shine distinguishably brighter than the Spendloves. An inconsiderable Candleshoe became a Spendlove, and Spendloves subsequently acquired sundry territorial tags, as of Benison and Scattergood. Is the present Miss Candleshoe in a sense the head of the family to which the present Lord Scattergood belongs? Mrs Feather confesses to herself that on a question so recondite as this she is frankly at sea. But she is at least aware of the question, and there is merit in that. She is aware too of a possible high significance attaching to the fact that the present proprietor of Candleshoe is an unmarried lady. To obtain further information here, however, requires some delicacy of approach. She waits until her hostess, with a solicitude incumbent upon the owner of the peccant hound, makes further reference to the absent Grant. She then embarks upon some general observation about her son.
'Grant won't at all mind that mite of attention from your dog – certainly not from a fine dog like that. Grant can take some hard knocks without complaining. He's an open-air boy, although fond of his books as well. Grant has a fancy to be a writer. And I'm prepared to back him in that. Only I do wish I had another son to take control of some of the family concerns.'
'Some of the family concerns?' Miss Candleshoe is gratifyingly interrogative.
'Not perhaps the railroad interests. Nor even the oil. But I did have a fancy he might spend a year or two looking after the ranches. The Feathers have always enjoyed raising cattle. They pack more of it than most other folk, but they've always preferred to deal with it when still on the hoof. Coming myself from people who have never gone outside steel, I find that attractive. When my husband was alive, we used to spend weeks in the saddle, getting round one place or another.'
Miss Candleshoe's glance goes to the decanter. She is conceivably reflecting that her visitor is worth another glass. But she contents herself with regretting her own lack of acquaintance with the American colonies.
Mrs Feather accepts this as an entirely gracious observation. 'And until this present generation there always have been Feathers to take over. And that's a great thing. Property – landed property, say – must always mean less when there isn't an heir.'
Miss Candleshoe remarks that commonly there is an heir somewhere. When an heir seems to be lacking in England, one generally turns up from across the Atlantic. Persons of rustic or menial conditions have been known so to turn up – she believes from what Mrs Feather would call the prairies – and make successful claims on earldoms and baronies. But such episodes, which are on the whole to be deprecated, rarely occur among the landed gentry. It is clear to Mrs Feather that Miss Candleshoe takes a poor view of the nobility. Mrs Feather makes a note to suppress her own devious connection with an Irish peerage – a circumstance upon which she has at times found it advantageous to touch – and to bring in the Buckinghamshire squires when opportunity offers. Meanwhile she sets out upon a further exploratory movement. 'I do know, of course, how things are very different over here. I mean with the sale of family properties and matters of that sort. Some of our lawyers reckon to be pretty good at tying things up, and there are more trusts and the like in our family than I'd care to count. But here these matters are still on a feudal basis, and a lot of your places are pretty elaborately entailed. I've heard that even when two generations see eye to eye in such a business a really strict entail can be hard to break.'
Miss Candleshoe now definitely reaches for the Madeira. Her own property, she offers, is an instance in point. Although not extensive, nor at all certainly associated with the Candleshoes until after the Norman Conquest, its tenure is believed to be a matter of the most amazing intricacy. Her brother Sir James – who reluctantly accepted the convention of knighthood on becoming Solicitor-General – used frequently to discuss it in her hearing with fellow lawyers deeply versed in conveyancing. Miss Candleshoe believes that if the property were to be disposed of there would certainly be a question of Crown prerogative. Moreover she positively knows – what is very vexatious – that she has mislaid the deeds of both home paddocks. But neither of these obstacles, perhaps, would prove insurmountable should sufficient – abundantly sufficient – occasion be presented for tackling them.
Mrs Feather, who is far from an artless lady, feels that this exploratory skirmish has gone far enough. As soon as Grant returns it will be time to bring the visit to a close. She gives her hostess a preliminary indication of this by picking up and smoothing her gloves. Miss Candleshoe, who is perhaps not an artless lady either, drops the stopper into the decanter and inquires if Mrs Feather is comfortably accommodated in an hotel. The Benison Arms at Benison Magna is said to be disagreeable, largely because flooded with sightseers, who are said to pay money to go gaping round Benison Court. Mrs Feather will recall that the servants of poor Dean Swift in his last years used to show their bizarrely demented master in return for half-a-crown. Miss Candleshoe confesses to a belief that showing one's ancestral home for a like consideration is an action of very comparable sort. But the Spendloves have not perhaps been at Benison long enough to develop any very nice feelings in such matters.
At this moment Grant and Mr Armigel return to the room. Mrs Feather, remembering the half-crown which she herself had been clutching in Miss Candleshoe's private chapel, has felt herself on the verge of blushing. She is therefore glad of the diversion. Grant and Miss Candleshoe exchange civilities about the injured part of Grant's person, which Miss Candleshoe roundly describes as a buttock. Mrs Feather gloves her left hand and rises. Miss Candleshoe makes Mr Armigel a sign which can only be interpreted as an instruction to ring the bell. Mr Armigel accordingly advances to the fireplace and gives a tug at a long silken rope, about the thickness of a ship's cable, that depends from the gloom of the ceiling. Perhaps because it is quite evident that nothing happens or can happen as a consequence of this ritual, Mr Armigel gives a second tug with rather too much vigour. The rope falls to the floor, together with a long coil of wire and about a barrow-load of plaster. The wolfhound, which appears to be peckish again, falls upon the rope and savages it. It is apparent that the designed ritual has wholly broken down. There is no means of summoning a servant; in all probability there is no servant to summon; the visit of the Feathers to Candleshoe Manor looks like being, of necessity, indefinitely prolonged.
Grant Feather is rather disposed to turn and run. His mother advances upon Miss Candleshoe in good order, determined upon farewells. Whereupon Miss Candleshoe, with much formality and to the evident consternation of her chaplain, presents her visitors with an invitation to dine.
Mrs Feather has managed to get her back to the most recent evidence of the house's extreme dilapidation; although the air is thick with dust and powdered plaster she contrives not to cough. She sees – being a woman of precise and rapid social discernment – that in the circumstances Miss Candleshoe's utterance is in fact less an invitation than a command. Hesitation must suggest a hint that the present resources of Miss Candleshoe's establishment may be severely taxed by an unexpected accession to her board. Mrs Feather has a very good idea how limited these resources are. Is she not, indeed, planning in the light of that knowledge? And here Miss Candleshoe is conceivably not without a fairly full insight into her visitor's mind. All this renders necessary the preservation of a very high decorum. Mrs Feather accepts – charmingly but without effusiveness. Grant must do something about their car – it can scarcely be left on the roadside while darkness falls – but that need take no more than fifteen minutes. Mrs Feather hopes that this interval will not conflict with Miss Candleshoe's customary domestic arrangements.
Miss Candleshoe is very clear about this. Nevertheless she will herself have a word with her housekeeper. Tapping with her ebony stick, and bent forward as if scanning the threadbare tartan carpet for an invisible pin, she moves towards the door. Reaching it, she turns and gives her guests a swift glance of stony irony. 'If there is a housekeeper, that is to say.'
She goes out. Towering over her, the wolfhound follows.
# 6
'As a matter of fact there is no housekeeper.' Mr Armigel, conducting Grant to the drive, becomes confidential. 'All that sort of thing became very difficult during the war.'
'The women went into munition factories, and so on?'
Mr Armigel looks doubtful. 'I don't know that I ever heard of that. But it was unsettling – decidedly unsettling. Women adore a red coat.'
'A red coat?'
'Precisely. You recall the relief of one of those places – was it called Mafeking? Both our cook and kitchen-maid, I am sorry to say, subsequently proved to have celebrated that occasion in a manner that cannot be described as virtuous. I remember reflecting at the time how distressed Colonel Baden-Powell would have been to hear of it. He cannot have intended that his gallant defence of the place – which was probably by no means worth defending – should result in lax sexual behaviour among the lower classes. You agree with me?'
'I surely do.' Grant sometimes encounters persons of mature years for whom 'the war' means a conflict beginning in 1914. Mr Armigel, going fifteen years farther back, takes him entirely out of his depth.
'Moreover two of our housemaids left soon after. Their lovers were hanged in the county gaol. It is an astonishing fact, but one well-attested in our poetry, that a high proportion of soldiers returning from the wars at that time were hanged in county gaols. But these girls were very upset, all the same. In fact they took a decided dislike to the district, and went away to places like Australia and the United States. We have never recovered – never quite recovered – on the domestic side. There is, as I say, no housekeeper. But at least there is a housekeeper's boy.'
'You mean somebody that runs about for a housekeeper who isn't there?'
'I ought to have said the housekeeper's boy – our late housekeeper's son.'
'Is he good with a bow and arrow?'
'Decidedly good. When our last shot-gun went – and it blew itself to pieces in my own hands, my dear sir, a circumstance somewhat alarming at the time – when our last shot-gun went, Jay developed considerable efficiency with a bow. At this moment I have a rabbit-pie in the oven–'
'Say, do you do the cooking?'
'Certainly. Jay and I largely divide the labour. He provisions the larder, and I make what I can of it.'
Grant considers. 'Is this Jay what you would call a strange boy?'
'Dear me, no.' Mr Armigel is somewhat anxiously emphatic. 'He is a very practical boy. We rely upon him in all our more prosaic and humdrum affairs. He could not, I fear, be called an imaginative lad, but he commonly has a sensible solution to any casual mundane exigency.'
'But he likes going about in fancy dress?'
'I cannot say that I have noticed anything of the sort. It is true that he is very good in contriving to dress himself in whatever he finds about the place, so his appearance may be a trifle outmoded now and then. I would not know. But I should not like to feel that his frugality in that regard was likely to lay him under any reproach of singularity with his fellows.'
Grant finds that Mr Armigel's remarks regularly require a little decoding. This slows things down. 'Then Jay', he asks presently, 'has fellows?'
'He has made friends with several other lads at the village school. Miss Candleshoe, who is fond of children, is very willing that they should play about together.'
'And fell trees?' Grant has remembered the obstacle laid across the avenue down which he and his clerical acquaintance are now walking. That Jay is responsible for it he has very little doubt. And it means that he cannot, in fact, drive the car up to the house.
'Certainly not! I am sure they would not dream of such a thing.'
Mr Armigel is shocked, and Grant sees that the situation is a little awkward. Because the tree has been neatly felled he is prepared to be on the side of the young woodcutters. So Mr Armigel, who probably has not been down to the end of this drive for months, must be headed off. Grant has an inspiration. 'See here, Mr Armigel, don't you come any further. You have that pie to think of, and that's a whole heap more important than stopping along with me. I'll just get the car a bit up this avenue, and follow you back to the house.'
Mr Armigel discernibly hesitates. It is clear that part of his mind is indeed with his rabbit-pie. At this moment a twig snaps in the undergrowth nearby, and with the suddenness of an apparition the boy is before them. Mr Armigel is delighted. 'But here is Jay – and at a thoroughly apposite juncture, as is his wont. Jay, be so kind as to take Miss Candleshoe's guest to the lodge, and help him to dispose suitably of his conveyance. You will excuse me, my dear sir? It has occurred to me that baked apples, albeit an unassuming dish, may make an agreeable addition to our repast.'
Mr Armigel toddles away. Grant and the boy are left eyeing each other.
Jay is slim, straight, pale, dark-haired, and with dark eyes deeply set. He ought to have more chance of being handsome than attractive, and he clearly does not intend that his present demeanour should be held engaging. He confronts Grant grimly for a moment. Then he turns and precedes him silently down the drive. His bow has vanished, and he has changed out of his archer's clothes into very old grey flannel trousers and a dark blue shirt. Jay is long-limbed and will remain so. His arms as well as his legs move with precision as he walks. Grant finds it indicative of his own social inexperience that he would certainly have supposed this to be the young squire, happily bundled into his shabbiest attire for the holidays.
Grant overtakes Jay, but doesn't speak. He has decided that here is a nice kid, and he is anxious not to say a wrong thing. There has been sufficient evidence that Jay has no use for casual visitors to Candleshoe, and he wants not to get further in the boy's black books. They reach the felled tree. Grant stops. 'I've done a good deal of this in my time.' He steps to the tree's base and passes a hand appraisingly over the axed surface. He gives a curt approving nod and walks on.
Jay is looking at him sideways. The boy, he realizes, is not sullen or surly. He is wary – very wary – and now he is puzzled. He has put Grant in some category, and Grant's taking note of the soundness of the tree-felling job has thrown him out. But still he doesn't speak. Grant remembers that this kitchen-boy knows Meredith's 'Woods of Westermain', and this makes him steal his own sidelong glance. Their eyes meet for a moment and each looks away. Now comes the part of the beech wood, Grant recalls, that is curiously silent.
But this time he does hear something. It is the low murmur of a gently flowing stream. To the right is a small glade, and he can just discern a gleam of water. Something – to Grant no more than a shadow – flickers. But the boy has stopped in his tracks – and now he speaks.
'The kingfisher!'
'Could you tell, son, in this light?'
'It was the kingfisher.' For some reason the boy is darkly triumphant. 'That's always important, isn't it?'
'You mean lucky?' Grant is amused.
Perhaps he sounds so – for Jay flings round at him. 'Do you defy augury?'
So Jay knows Hamlet too. It occurs to Grant that Mr Armigel has been permitted but a partial view of this child. 'No,' he says soberly, 'I don't defy augury, son. And if there's good luck around, I hope it's coming to you. But what am I to do about my car?'
They have come to the ruined lodge. The dusk is soon going to give place to darkness, and there is something sinister about the mean, gapped building and the two piers of masonry and the single perched ball. It suddenly occurs to Grant that, so far as he knows, the only inhabitants of Candleshoe Manor are a couple of ancient eccentrics and this boy. And their situation is a very lonely one.
An owl hoots, and Grant senses Jay stiffening beside him. 'Don't you like owls, Jay? Are they ill-omened birds?'
'Anyone can make a call like an owl. That's why I don't like them very much.'
It is a quiet reply – but it comes to Grant with the effect of a flash of lightning. 'I can understand that,' he says. 'But there's the car.'
They turn down the road, and suddenly Grant is aware that Jay has skipped to the other side of it. 'Have you brought two cars?' The boy's voice is sharp, peremptory. He is like a grown man who suspects a trap.
'Of course not, son.' Grant peers ahead. 'But there are two cars. Now, that's certainly strange.'
A second car – another powerful American car – is indeed drawn up in front of his own. Two men have got out. They appear to be reconnoitring Grant's car – even to be poking about in it. Grant is indignant and surprised. Perhaps they are car thieves, but the spot is an unlikely one for that. It is an unfrequented road. A single glimpse of two cars standing together on it has instantly struck Jay as queer in itself.
Their footsteps have been heard, and the two men swing round. There is an uncertainty in their movements that betrays what is surely a criminal purpose. Jay gives a long low whistle on a rising note. This is promptly answered from half-a-dozen places in the wood. The effect is startling, and it startles the two men. They run for their own car, jump in, and drive off. As they go past, accelerating furiously, Grant tries to get a clear glimpse of them. But the light is too bad. As the noise of the engine presently fades, silence succeeds it. There is no sign of the children who have given this odd and effective demonstration. Nor does Jay refer to them. 'I can find you a way up to the house,' he says. 'It means opening some gates – and one or two other things.'
Grant for the first time notices the boy's speech. It is of the rustic sort, evolved through generations of slow thinking and slow utterance. But the boy uses it rapidly and nervously, so that the effect is markedly individual. Moreover beneath this or above this is something that strikes Grant as familiar. The accents of Miss Candleshoe and Mr Armigel are at play in the articulations of their young assistant. Perhaps it is only that. Remembering the rabbit pie, he looks at his watch. 'Never mind the gates, Jay. I'll just drive the car past the lodge and she'll be safe enough.'
'No.'
There has been a moment's deliberative pause and then the word has come decisively out of the dusk. Grant sees that on the kitchen-boy is some burden of command. It is perhaps from this that he gets both his pallor and his poise. 'You think those people might come back and take my car?'
'Your car will be better at the house. May I get in beside you and show you the way?'
It is a reticent reply, but Grant senses that Jay has made some important decision. He is quite sure that Mr Armigel's practical and unimaginative lad in fact leads a secret life of vivid fantasy, and that to this – or to a part of this – he has admitted some of his companions of the village school. Perhaps Grant himself is going to be approved; perhaps that is the inner meaning of the decision to guide his car by devious ways to Candleshoe Manor.
They climb in and Jay directs Grant to turn round. He watches as Grant's hands move over the controls. Grant realizes that Jay has the habit of learning all the time; that he could now, if necessary, have a fair shot at starting this car himself. He may get fancying things, but he is decidedly not what is called a dreamy boy. Grant wonders about his mother, the former housekeeper – where she came from, whether she has died or merely gone off with a lover, how the boy comes to be left apparently in Miss Candleshoe's care.
The secret route to the manor house turns out to be a matter of traversing a couple of fields by cart tracks and crossing the stream by a small wooden bridge. At the bridge Jay has to get out and perform some complicated operation in the darkness – a piece of ritual, Grant supposes, connected with whatever fantasy he is indulging. Once get such a fantasy going, he reflects, and anything that comes along will feed it. Two men driving down a country road see an empty car. They stop to take a rummage in it in the hope of petty theft. But for Jay and his concealed troop this drops into place as part of some vast shadowy adventure. Perhaps Grant and his mother drop in too.
The bridge is negotiated safely, and it appears that there is a clear run to join the main drive near the house. As Jay climbs back into the car an owl hoots again in the distance. And by way of experiment Grant quotes softly:
'Owls or spectres, thick they flee;
Nightmare upon horror broods;
Hooded laughter, monkish glee,
Gaps the vital air...'
'You know that?' Jay is surprised; he has clearly supposed himself to be the only person in the world who has discovered Meredith's poem.
'Enter these enchanted woods,
You who dare.'
Grant concludes the quotation and brings the car to a halt. The house, now dark and dimly sprawling, uncertainly towering, is before them. A couple of lights are burning on the ground floor. Their suggestion is of tiny areas of tenuous security scooped out of the void. Grant doubts whether, for a child living in such a place, imagination can be the most comfortable of companions.
'You got my message.' Jay has opened the car-door beside him, but for a moment sits tight. 'And yet you have entered, all the same. Do you think it was wise?'
'That depends.' Grant switches off his engine. 'If Candleshoe is like Westermain, I think I can take it. Dare, you know, and nothing harms. Keep your courage up, and fair you fare. I think I can manage that.'
'So do I. But then we are inclined to be boastful, aren't we? Or at least so Mr Armigel says.'
'We – you mean human beings?'
Jay can be seen shaking his head in the darkness. 'I mean people of our nationality – yours and mine.'
Grant bursts into laughter. 'Say, son, haven't you guessed that I'm an American?'
'Of course. And so am I.'
This is neither a boast nor a confession, but simply a piece of natural history. Grant is taken aback by it – the more so when he sees that he ought to have guessed. What in Jay's speech lies beneath its rustic and gentle components – the accents of his school companions, the accents of Miss Candleshoe and her chaplain – is Grant Feather's own tongue.
'Well, if this isn't a surprise!' Grant has taken to the boy, and now here is a bond. He is genuinely delighted.
'Even in England Americans must meet quite often, I suppose.' Jay remains objective and even cool. Grant feels on probation still.
They get out of the car and the boy produces a pocket torch. As he switches it on Grant tries a question. 'Do you remember much about America, Jay?'
'Nothing at all.' The beam picks out the first of the broken steps by which they must mount to the terrace.
'But you've read about it?'
'No.' The boy is abrupt. 'I know very little about it.'
They climb in silence. When they reach the terrace Grant speaks. 'Well – you've plenty of time. But there's quite a heap to learn.'
'I suppose there is.' For the first time Jay's voice is uncertain. It is as if he suspects himself of having been discourteous. 'You see, I don't really know much about anything.' He hesitates. He has reached the front door. He flashes the torch backwards to light the way for Grant. Then – perhaps the better to locate himself – he puts out his other hand to the smooth stone. 'Except Candleshoe. I know quite a lot about that.'
7
The rabbit-pie is a notable achievement, in point both of succulence and of mere size. Mrs Feather speculates on the oven from which it has emerged, and upon the invisible domestic economy of Candleshoe in general. She is obliged to conclude that there is no invisible domestic economy. The place puts everything on the table – and around it.
The surface appearance of the feast is that of somewhat rough-and-ready antiquarian reconstruction. Miss Candleshoe, it may be supposed, has formed a sentimental attachment to the Middle Ages, and like some eccentric in a novel of Peacock's has arranged her household, its usages and appurtenances in conformity with this fondness. She sits at the head of her board, with her guests on her right hand and her chaplain on the left. Her retainers sit below the salt. They consist of a good-natured and mentally defective girl called Tib – of whom may safely be postulated an almost unlimited capacity for washing up – and a crowd of children. The children are a shock to Mrs Feather; she wonders for a moment whether Candleshoe is really a sort of orphanage, conducted upon lines which if surprising, are nevertheless conceivable in this perennially unpredictable country. It may even be an orphanage controlled by the State – in which case her cheque-book will be of no use to her. Grant, she sees, feels that he has a line upon the children; he is now more interested in them than in Miss Candleshoe. And in particular he is interested in the boy called Jay.
Jay is not at all suggestive of an orphanage. He has changed his clothes again – there is undoubtedly a streak of vanity in him – and is in black from neck to toe. Mr Armigel, if he sees this merely as a laudable economy, has become decidedly vague about immediate appearances. The old ragbag stuffs suit Jay admirably; he looks like Hamlet in a cry of child actors – or might do so if his demeanour admitted any suggestion of the theatrical. When one returns to the medieval interpretation of Candleshoe one observes that Jay sustains the character of a page. He carves the pie and performs other menial services with the proper air of a lad of gentle breeding. Above all, he is businesslike. Like Hamlet he may dream. But like Hamlet he will be capable of arranging a very efficient Mouse Trap should the occasion for such a thing come his way.
Jay has a henchman in a fair-haired boy called Robin, who must be of about his own age. Mrs Feather guesses that Robin too has a good arm for a bow, and her ear tells her that he is not what Mr Armigel would call a village child – although it appears to be in the nearest village that he lives. Robin is the vicar's son, the doctor's son – something of that sort. There are three other children – two girls and a boy – and although simple they are unselfconscious and natural, which makes it certain that their present situation is without novelty. The wooden platters and pewter mugs with which they are provided enable them to eat a great deal of pie and drink quite a lot of what appears to be a decidedly heady brew.
From these utensils Mrs Feather's eye travels to her own. She has occasionally eaten off gold plate, but never off silver. The design is distinguished and she comments upon it to her hostess. Miss Candleshoe, whose head and hands alone appear above the level of the table, receives her compliments with civility.
'China of good design is hard to come by. My brother used frequently to remark that the Prince, had he lived, might well have elevated the public taste in these regards.'
'The Prince?' Mrs Feather is momentarily astray.
'The Prince Consort.' Mr Armigel takes upon himself the task of courteous explanations to the colonial lady. 'We have been much grieved by his death.'
'It was untimely, of course.' Mrs Feather finds the tenses into which the chaplain is apt to cast his observations mildly unnerving. 'And I believe he was interested in the arts.'
'And crafts. But unfortunately a corrupt taste has become pervasive. Consider the novels of Lord Beaconsfield.' Mr Armigel pauses, but finding Mrs Feather without facility in taking up this theme returns to that of table utensils. 'As a matter of fact, we employed nothing but china until the Cataclysm.'
'The Cataclysm?' Mrs Feather supposes that Mr Armigel is referring to some obscure impact upon Candleshoe of the late world war. But she realizes that he may well be speaking of the Great Rebellion or the Norman Conquest.
'Tib.' Mr Armigel looks with great amiability down the table, where the half-wit girl is gnawing with concentration at the leg of a rabbit. 'She had not long been with us when our entire stock of domestic crockery vanished in one single act of destruction. Dispassionately considered, the feat was no inconsiderable one, since it involved the accumulations of some centuries. When we made inquiries about replacements, however, we found serious obstacles in our path – obstacles which might be subsumed under the two general heads of artistic and financial. Fortunately Jay – as so often – had a sensible solution of the problem. He raked about and found these rather older things. Upon their use, as you can conceive, one crucial advantage attends. The Cataclysm is impotent before them.'
'Was it Jay who thought of having meals together in the hall?'
Miss Candleshoe answers this, bringing her magnificent nose out of a fine silver tankard to do so. 'Certainly. The servants' hall was becoming a little difficult to use–'
'The river was coming in.' Mr Armigel interpolates this with casual pride. 'And the ceiling had come down.'
'Moreover' – and Miss Candleshoe frowns at her chaplain, conceivably feeling something impolitic in the suggestion that Candleshoe is in disrepair – 'moreover there seemed to be remarkably few servants in it. So Jay contrived the present arrangement, which works very well. I do not at all know what put it in his head.'
Mrs Feather suspects that the answer to this may be Sir Walter Scott. Being a woman capable of sudden large intuitions she has a sudden further suspicion as well. Unless there is a missing heir to Candleshoe who proves unamenable to financial persuasion, it is this boy who is the chief obstacle in her path. It is Jay alone who keeps the place going as a running concern. He has persuaded these old persons to revert, without their being much aware of the fact, to a feasible feudal economy.
Mrs Feather buries her own nose – which at present she is conscious of as rather undistinguished – in her own tankard. Mr Armigel watches her benevolently. 'You approve?' he asks.
Mrs Feather judges it safe to answer in the affirmative.
'Pears.' The chaplain is impressive. 'When lately we had some cause of – um – dissatisfaction with the wine merchant–'
'A circumstance unthinkable' – this time it is Miss Candleshoe who interrupts – 'in the time of my brother Sir James.'
'And when, in consequence, we were under some apprehension that we might have to drink water–'
'An unhealthy practice – and, to my mind, uncleanly as well.' Miss Candleshoe disappears behind her tankard.
'In this exigency Jay evolved a reliable process for fermenting pears. The result is the perry which you are now honouring. Jay tells me that he has some thought of going on to mead. But that, it appears, needs bees.'
'Bees?' Miss Candleshoe is sharply interrogative. 'I saw several bees only this afternoon. Jay must be told.'
'I am afraid they belong to neighbours.' Mr Armigel is candid about this. 'Without at all knowing what may be the range or – so to speak – tether of a bee, I judge it possible that they may even be from the apiaries at Benison.'
'They were undersized bees.' Miss Candleshoe appears suddenly reminded of this. 'And their flight struck me as uncertain and sickly. Probably they were from Benison.'
'What you might call Whig bees.' It is Grant Feather who, rather to his mother's alarm, cuts in with this somewhat facetious remark.
But it is a great success with Miss Candleshoe. 'Our bees are certainly Tory bees. Or would be, if we had bees. Perhaps it is possible to breed them. Jay has had remarkable success with his geese.'
'He assures me that next year it should be possible to part with half the flock in exchange for a heifer.' Mr Armigel advances this as intelligence of considerable importance.
'An excellent plan. But the alternative advantages of several kids must be considered. A cow is very well. But while the cow is in calf...'
Miss Candleshoe and her chaplain drift for a time into problems of estate-management not of the first interest to their guests. Mrs Feather glances down the table, where something is happening among the children. Jay has given a nod – and at this Robin, the two girls, and the third boy have risen and are filing from the hall. A certain ceremony – or at least precision – attends their departure. It has, in fact, a military air. As they go, Jay and Tib employ themselves in fetching the baked apples, and for a moment the elders are left alone.
'Do you always have all those children?' Mrs Feather addresses her hostess with candid interest.
'The children? Ah, yes – of course they are friends of Jay's. He has them to a meal from time to time.' It is evident that Miss Candleshoe, although her perceptions are still acute in certain areas, is a little cloudy about much that goes on around her. 'I believe the children assist Jay in various ways.'
'Jay, although not what may be termed an interesting child, has a certain organizing capacity,' Mr Armigel strikes in. 'His mother was the same. Without being in the least a woman that one would notice, she was thoroughly capable. We were sorry when something fell on her.'
'Something fell on her?' Mrs Feather is startled.
'Part of the house.' Mr Armigel appears to be surprised that there should be any need for this amplification. 'Miss Candleshoe acknowledged a certain obligation – the family has always done so in that precise exigency – and when no relatives of Mrs Ray's could be traced, she made herself responsible for the boy. He was still very small, and we had some thought that he might eventually work in the gardens.'
'If there were any gardens.' Miss Candleshoe adds this proviso.
'That decidedly became an issue.' The chaplain nods. 'Jay sprouted rapidly, but the weeds were ahead of him. While the grass grew, the steed starved.' Mr Armigel frowns, aware of some lack of literary felicity in the application of this adage. 'However, Jay has now adopted what may be termed a wider sphere of usefulness.'
'Was it Jay's mother – this Mrs Ray – who was American?'
'Yes. It was a circumstance in which Miss Candleshoe took some interest, since she had at that time a nephew in Australia.'
'I see.' And indeed Mrs Feather has now learnt that from the Candleshoe point of view one outlandish part of the globe is much like another. Then a horrid thought strikes her, and she forgets all about her late compatriot, Jay's mother. 'Would that be a nephew who was himself a Candleshoe?'
'Certainly – a near relation. A little more perry?'
'Please.' Mrs Feather absently allows the chaplain to pour her out something like a further pint of Jay's beverage. 'And is the nephew in Australia still?'
Mr Armigel shakes his head. 'He passed on.'
'Oh.' Mrs Feather is a little ashamed of the manner in which she finds herself hoping that this is to be interpreted. 'You mean that–?'
'He was called to a better place.'
Mrs Feather sees Grant eyeing her satirically. She is genuinely contrite. 'Oh, dear! I am exceedingly–'
'California.'
'I beg your pardon?'
'He was called – or affected to be called – to more congenial employment there. It was always happening.' Mr Armigel pauses, as if in search of a turn of phrase which shall give the matter complete definition. 'Rupert Candleshoe might best be described as a rolling stone. Except indeed that his locomotion owed less to simple gravity than to traveller's cheques provided by his aunt. However – de mortuis nil nisi bonum.'
Mrs Feather can hardly trust her own Latinity. 'This Rupert Candleshoe is dead?'
'Certainly. His decease was obscure, but undoubted. And for the family it was, of course, a great calamity.'
'He was my sole heir.' Miss Candleshoe, despite some appearance to the contrary, has been following the conversation. 'As you will appreciate, this makes the future of Candleshoe more speculative than it has been for some centuries. Ah, baked apples! It is to be hoped that they have not forgotten the cloves.' Miss Candleshoe raises her magnificent nose from its near resting-place on the tablecloth to sniff. 'What is a baked apple without its clove? Jay will no doubt serve them. I see that his friends have returned.'
It is true that several children have slipped into the hall. But they are not the same children who left it a couple of minutes ago, and their interest is not in the apples but in the abundant remains of the rabbit-pie. Neither Miss Candleshoe nor Mr Armigel is aware of this; to them one child is the same as another, and it is the apples that engage their serious attention. At another juncture Mrs Feather would take lively notice of this odd circumstance, and her son is certainly doing so now. She realizes however that Miss Candleshoe has reverted to serious concerns. There is no heir to oppose the selling of the house at an advantageous figure, and it is a course which the present owner is really revolving. Mrs Feather has been brought up in an atmosphere of business, and she knows by instinct whether or not a deal is authentically on the carpet. So she plunges boldly in. 'Candleshoe, I suppose, has never been without an heir before. It makes you feel that your own plans are unsettled?' She catches Grant's eye and feels acutely the indecency of such a question addressed to a woman who must be over ninety.
'My thoughts turn more and more to a very long journey.'
Mrs Feather's heart sinks. Miss Candleshoe, like the sick man in the play, is about to say with dignity that her plans are very simple and that she is going to die.
'In fact I am minded to embark upon the Grand Tour.' Miss Candleshoe proceeds with some briskness. 'Mr Armigel, I need hardly say, would accompany me. Moreover it has occurred to me that continental travel is always hazardous if one is unaccompanied by a personal physician. But a private chef is surely an unnecessary complication in the entourage of one who is happily free from digestive ailments. It would thus appear that the party may be completed simply by a courier and a maid; that the Channel may be crossed by the common packet. But ought one to hire conveyances and horses at Calais, or take one's own? This is a detail which at present eludes me.'
'And you would go far?' Mrs Feather has no doubt that Miss Candleshoe would go far. But for the moment she can think of nothing else to say.
'I should begin with the Low Countries and proceed to some of the lesser German States. I have been told that the Court life there is frequently entertaining and instructive. I should then proceed through Switzerland to Italy. It has long been in my mind to view some of the scenes so affectingly described by Lord Byron.'
'We should then hire a schooner and proceed to Missolonghi.' Mr Armigel takes up what is evidently a well-rehearsed itinerary. 'Byron, poor fellow, died there, as you have no doubt heard. Our ultimate goal would be Constantinople and the monasteries of the Levant. There are reliable reports that it is a most interesting and informative part of the world. There was the Marquess of Dorchester's daughter – not, I think, the present Marquess – who married rather a dull dog, but nevertheless found Constantinople full of instruction. And I recall another lady of very respectable family who domesticated herself with Bedouins amid the ruins of Palmyra. She found them to be well worth a visit.'
'I'm sure she did.' Mrs Feather is a little taken aback at realizing that she may launch Miss Candleshoe upon a nonagenarian version of the travels of Lady Mary Wortley Montagu and Lady Hester Stanhope. Nevertheless she sees that the proposal has its advantages. Miss Candleshoe retired to a villa in Cheltenham or Bath is unthinkable; Miss Candleshoe lurking in some small dower-house on the fringes of her present territory might be a somewhat awkward neighbour. But Miss Candleshoe perched, say, on Mount Lebanon would at least be a monument – and a blessedly remote monument – to the continued enterprise of her country and her class. 'I think your Grand Tour deserves to succeed. But travel, of course, is extremely expensive nowadays. Particularly with a chaplain and a physician.'
'We are under no illusions in that regard.' Miss Candleshoe favours her guest with an extremely penetrating if mildly lunatic glance. 'And particularly would it be so if we are then minded to move a little farther afield. There is an undoubted attractiveness about the idea of Cathay.'
Mr Armigel nods placidly. 'Perhaps you can confirm us in our impression that there is excellent sketching on the Yang-tse-kiang? Miss Candleshoe is fond of watercolour, and I still do a little in oils myself.'
Mrs Feather understands that China is indeed regarded as offering great natural beauties.
'Moreover it is said that fresh archaeological observations are still to be made upon the Great Wall.' Mr Armigel takes a pinch of snuff. 'I might conceivably address myself to a monograph on the subject.'
'An interesting proposal.' Miss Candleshoe is approving. 'Antiquarian investigation is a very proper pursuit to fill the leisure of a clergyman.'
To Mrs Feather it occurs horridly to wonder whether perhaps Miss Candleshoe and Mr Armigel are not already provided in some unobtrusive fashion with a physician, and with trustees or guardians as well. The English trade on being what Grant calls a mite crazy – of this they have had a sufficiently clear exhibition at Benison Court earlier in the day – but will even English social custom permit an old lady like this to dispose of property at will? May not some tiresome lawyer – or even Commission or Trust or Ministry – intervene on the ground that Candleshoe is a building of historic interest? But a building of historic interest is just what Mrs Feather wants, and what she considers herself very well able to care for.
She becomes aware that Jay is offering her another apple; she glances up at him and he looks her very squarely in the eyes. She supposes that Grant has made friends with him – Grant is wonderful with young people – and she is therefore surprised at something darkling in his brow. The boy has divined her full intention – she is suddenly sure of this – and his hostility to it is absolute. He is only a child, but he is the sole able-bodied and able-minded person about the place. He therefore bosses things. And he wants no change.
Mrs Feather takes an apple. At the same time, since she is a good-hearted woman, she begins to form romantic plans for Jay. He is a good sort of boy, with a straight if lowering gaze. For such a lad the concocting of perry and mead and the exchanging of geese for heifers is all very well for a time. But it is scarcely likely to lead to any very prominent position on life's stage. Jay must have education. Mrs Feather wonders whether it is too late to send him to Eton, which she understands is the best place for this purpose. But he can certainly go, like Grant, to Oxford. If he does well, he shall go into politics – British politics. Mrs Feather has not yet sunk her spoon into the fresh apple when the culmination of this reverie comes to her. Jay shall be Britain's first American-born Prime Minister. And when this happy climax is achieved she, Alice Feather, will present Candleshoe Manor to the nation as an official residence for holders of the office. Conceivably one or two such places already exist. But by that time she will have made Candleshoe so superbly attractive –
The undisciplined fantasy ends abruptly. For the second time at Candleshoe the Feathers are startled by the sudden pealing of a bell. This time it is from somewhere high overhead, and its character is not that of a summons to prayer but of a tocsin. The loud urgent clangour of the thing seems to crash down through the ancient building like a cataract and flood the hall. Jay drops his dish of apples and runs. The children at the farther end of the table follow him. Only Tib is left. The uproar delights her, and she laughs unrestrainedly.
Mrs Feather supposes that the place must be on fire, and the irony of Candleshoe's thus eluding her assails her vividly. She turns to Miss Candleshoe, whom she expects to see aghast, and who may well claim her succour on this dire occasion. But neither the old lady nor her chaplain are at all discomposed. Mr Armigel indeed has stood up and is reaching for the abandoned apples. The bell stops and he can be heard speaking. 'Jay's friends must have gone off to play at hide-and-seek. I am afraid they are a little noisy at times, but children ought not to be checked unduly.'
Miss Candleshoe nods in support of this liberal sentiment. 'Very true. And Candleshoe is an excellent place for sports of that kind.'
Mrs Feather supposes that this must be true. Voices can now be heard from various parts of the house, but they do not strike her as being congruous with a game of hide-and-seek. 'All these children,' she asks, '–do they sleep here?'
'Sleep here?' It is one of the points upon which Miss Candleshoe is entirely vague. 'I hardly suppose so. But Jay makes his own arrangements. His friends assist him in various ways.'
The voices have now ceased and Candleshoe is completely silent. Grant Feather rises and slips from the hall. The bustle just concluded has spoken to him quite clearly. It has been occasioned by a garrison responding to an alarm and taking up its station. Hide-and-seek is no doubt a sufficiently accurate general term to cover the make-believe involved. But something prompts Grant to see if he can join in the game.
# 8
Beyond the screen the house appears to be in darkness, and Grant hesitates for a variety of reasons. By any standards it is a shade casual to quit one's hostess at dinner for the purpose of wandering about her mansion; and on this sort of thing it is very likely that Miss Candleshoe holds strict views. Again, he has really no business thrusting himself upon the amusements of this gang of kids. They have evolved, he can see, some large and sustained fantasy of medieval warfare. For them, Candleshoe is a good many centuries older than it actually is, and under the captaincy of Jay they are acting out imagined episodes of the Barons' Wars. There is no harm in that.
But may there not be the possibility of harm? A game played so intensely as this may turn, Grant knows, into a species of mass hallucination. And this tells him of another reason why he is thus hesitating in the darkness of the outer lobby. Let him once start groping about – a large dim figure discernibly not one of the crowd – and it will be scarcely surprising if some ancient mace or battle-axe is brought down with a crack on his skull. He remembers the warning arrow passing his head with no more than a discreet margin of safety that afternoon. Sooner or later these children must experience a misadventure. Their game, he has divined, has intensity as its hallmark, and such violent delights have violent ends. This great mouldering house is more dangerous than a ruin. It is a brick-and-stone shell encasing tons of perished plaster and decayed timber – and the children go charging about it in the dark, bearing the actual weapons of its earliest time.
Grant laughs aloud. He would like to convince himself that he has lapsed into grandmotherly absurdity. But he is struck again by the queerness of the place. Its effective inhabitants are the children. Beside them, Miss Candleshoe and her chaplain are only ghosts – ghosts with a little grey matter still in the skull, but ghosts all the same. The children ought presently to be in bed – but who is to see to that? Besides Jay and Robin there are at least half-a-dozen of them. Presumably they all have homes in the village, and if they are found to be absent at ungodly hours rustic parents will bring to the irregularity the simple discipline of a strap. But there is no sign that the game is breaking up. Candleshoe is so quiet not because the children have departed, but because each is silent and tense at a station. Grant is sure of this in a general way, and as he himself stands taut in the darkness he tries for a more precise picture. At each end of the house a staircase winds upwards through a square tower; at the top of each there will be a ladder and a trapdoor leading to the open air. Grant can see, as surely as if he had made the climb, an inviting intricacy of leaded roof, with that long scrollwork inscription by way of parapet. He can see a score of places where the finely cut stone has split and flaked long ago, and been cobbled up with iron bands which are themselves rusted away by a century of English weather. It is a wonderful eyrie, with vantage points at a score of places. By day – and even by night if there is a moon – one can command the gardens, the line of the drive and the stream, every break in the beech-trees, much of the farther country. And to sweep the terrace one has only to lean forward –
Grant shuts his eyes – and is aware of a play of light upon their closed lids. He opens them and sees that he is held in the beam of a torch. A moment later Jay and Robin are standing beside him; Robin opens what appears to be some species of dark lantern; and in the light of this the boys look at him silently. Then Jay speaks. 'Did you – did your mother – know anything about Candleshoe before you drove up this afternoon?'
'Nothing at all.'
The two boys glance at each other swiftly. This time it is Robin who utters. 'But you are very interested in it now?'
Grant shakes his head. 'I don't think I am. All you people interest me quite a heap – the things you like doing, and what you are busy about right now. But the place is nothing special to me. It's your place, I reckon – not mine.'
'But your mother wants it?' Jay's voice is at its most peremptory. 'She would buy it for a great deal of money from Miss Candleshoe?'
'Maybe so.' Grant tries to be easy. 'But nothing will come of it, I guess. My mother is always taking a fancy to buy places. But most times it remains just a fancy.'
'Do you want her to buy Candleshoe?'
'I certainly do not.' Grant is relieved at having it in his power to be unquestionably sincere about this. 'My mother is romantic, and sometimes she doesn't see how a thing wouldn't do.'
'A person ought not to come to a strange place without being asked and offer money for it.' Jay enunciates this rule of conduct with grave courtesy.
Grant, although not prepared to criticize his mother, feels unable to dispute the general proposition. So he says nothing.
'When a place is for sale – really for sale – boards are put up, and there are advertisements in the newspapers.'
'That's right.' Robin backs up his leader. 'And my father says only a lunatic would buy Candleshoe, because it's dangerous and unhealthy and inconvenient.'
'These are things which you should explain to your mother.' Jay is apparently unoffended by his lieutenant's revelation. 'And please say that Robin's father is a doctor, who ought to know.'
'And the place is haunted.' Robin is now eager. 'There are two ghosts. And each is of a very specially terrifying sort.'
Jay seems at once to recognize this as a false cast. He silences Robin with a look. 'Probably your mother would like to buy the ghosts too?'
'Probably she would.'
'Ghosts can't be bought. It's a vulgar error to think they can.'
Grant receives this censure submissively. It is his inward opinion that Jay is right. The Candleshoe ghosts will in all probability not 'go with the house'. They are much more likely to accompany Miss Candleshoe and Mr Armigel to Constantinople or Crim-Tartary.
'May your mother be offering Miss Candleshoe the money now?'
'I guess not.'
'But soon?'
'She might.'
'Then don't you think you had better go?' Jay says this terribly quietly; he may fire minatory arrows at strangers, but he knows what it is to ask a guest to leave; he has strung himself up to it.
'Maybe we better had.' Greatly daring, Grant puts out a hand and gives Jay's arm a friendly pat. 'I'll slip out and see to starting the car. But as we'll have to go by the fields again, I'm afraid we'll need a pilot. Perhaps I could give some of your friends a lift home?'
The two boys confer in whispers. Grant remembers that the threat constituted by his mother is no more than an additional and unexpected danger at Candleshoe. Such as it is, it is a real danger; but in the minds of these strange children it is secondary to some more exciting peril of their own invention. It is on this that they are taking counsel together now.
'There is an enemy approaching the house.' Jay turns back and speaks in his most level tones. 'We have had a message flashed from our sentry at the end of the drive. That is why the alarm-bell went.'
'I thought it was something like that.' Grant is surprised to feel an uncomfortable pricking down his spine. The children's proceedings, he must finally acknowledge, cannot by any stretch of language be called a game. He is not in contact with make-believe, but with illusion – with fiction held as fact. He knows that learned persons would deny the difference; would declare that children are still playing when the suspension of their disbelief is entire; that they can be at once actors and spectators in a theatre where illusion is unflawed. But Grant feels this uncomfortable pricking, just the same. He would like to give the two boys a shake and say, 'That's enough for tonight.' Instead he asks, 'What sort of enemy?'
'We can't tell you that now,' Jay answers as he and Robin move through the lobby to the outer door of Candleshoe. 'But if we believe you when you say your mother won't really buy this house, and if we accept you as a friend, will you do something for us?'
'I'll do anything that doesn't strike me as dangerous and foolish.' Grant is guarded.
But Jay frowns, finding this a poor reply. 'It is dangerous.'
'Is it entering these enchanted woods?'
'Yes.'
'I said "dangerous and foolish". The woods aren't that. So go ahead.'
'Will you please take the torch, and go out of the house with a bit of a row when I unbolt the door? And then go and have a look at your car and come back – all in a very open sort of way? When you want to come in again you must knock' – Jay pauses and glances round him warily – 'and say Christmas at Candleshoe.'
'Is that the password?'
'It's the password for tonight. Will you do it?'
Grant nods. 'Sure. But what's the big idea – distraction technique?'
This puzzles Jay – but Robin gets it and nods back. 'I'm going out to scout around. I'll slip along the terrace while you attract attention to yourself and your car.'
'Very well. I'll start the engine and race her. Only, let's hurry – for my mother and I must honestly be off fairly soon.'
Jay whistles on a rising note. It is a sound Grant has heard before. Two boys and a girl glide out of the buttery and take their stance at the back of the lobby. All are deadly serious, and all are armed with bows. They stand with arrows notched, facing the door. The set-up, Grant realizes, is genuinely lethal. Sooner or later there will be a misadventure. Jay has drawn the bolts. Before he knows it, Grant is outside, flashing the torch before him and whistling. The door bangs to behind him. As he takes a second cautious step down from the terrace he can just hear it softly opening again.
There is a clear sky and a sickle moon. After a few minutes in the open it would just be possible to get about without a torch. Driving will be pleasant – but Grant glances at his watch and wonders at what unearthly hour he and his mother will finally make a decent hotel. He wants to get away from this place. And, once away, he is quite sure that he is not coming back. If his mother really succeeds in bringing the crazy old dame to a deal he will go in and veto it. Once in a way, his authority with his mother will stretch to that. Let Jay run Candleshoe, hallucinations and all, until one morning its owner is found stiff in her bed. And then let family lawyers descend on the place and clear it all up. Let them, at a pinch, burn Jay's bows and send his forces packing and set him to a useful trade. It will be better for him in the end than getting the boundaries of fact and fancy so dangerously confused.
In this mood of impatience Grant comes to his car. He climbs into the driving-seat and switches on a light. His mother's guidebook, with its fatal promise of long-and-short work at Abbot's Benison, lies open on the floorboards. He picks it up and then switches on the ignition. He has promised to make a row, and he will. But perhaps he is no friend to the children in encouraging a mass of obsessive nonsense that has plainly gone too far. He tugs the self-starter. Nothing happens.
He tugs again – although already he knows that there is something wrong. After a minute he gets out, swings up the bonnet, and flashes his torch on the engine. One look tells him enough. The car will not move that night.
He finds himself acting in an extraordinary way. He flicks off the torch, reaches into the car and switches off the light, turns, and walks swiftly and quietly into shadow. It needs thinking out.
He has no impulse to suspect the children. This is intuitive and immediate, and only seconds later does he see that it is backed by logic. For the moment Jay is putting up with him, and has even pressed him into service. But the boy wants nothing more than to be rid of him – or at least to be rid of his mother. Jay has no motive for doing this thing. Moreover – ruthless as one may feel him to be – this can be guessed as something he would not do even to the most unwelcome visitor who had once received the hospitality of Candleshoe.
There is a possible explanation in insubordination and stupidity. Robin is certainly not a lieutenant to display either of these weaknesses. But there is a whole bunch of other kids, and it is unlikely that Jay has been able so to handpick his forces that one or two young blockheads are not among them.
Yet that won't do either; won't do for the sufficient reason that the job on the car has been a knowledgeable one. Grant begins to see why he is acting queerly. And he is acting queerly. He has got on the shadowed side of a yew-hedge, long since grown wild and cliff-like, and he is listening intently. He wants to locate Robin, now on his scouting expedition, and get him back to the house. For his own imagination is working. Just as, a little time ago, he could not bear the mental image of some tense child leaning far out over the crumbling masonry of the roof, so now he finds he can't comfortably take the image of Robin prowling these deserted gardens in a sliver of moonlight.
Grant tries to catch himself on a rebound from all this; tries to see it as darn nonsense. But the more he goes after such an attitude the less can he manage it. There must be some reasonable link between the extravagant fancies of Jay and friends and the hard fact that somebody has scotched the ignition of his, Grant Feather's, car. But instead of any reasonable supposition only rubbish comes into his head. The children are convinced that Candleshoe is beleaguered; that an enemy is closing upon it. Can a conviction like that, vividly held by a closely integrated group of young minds, set odd things happening in the physical world? A single hysterical girl is often pointed to as the source of poltergeist phenomena – of pictures falling from the wall and china hurtling across the room. Why should not a poltergeist of a modern mechanical bent get under the bonnet of a Packard and have no end of fun?
Grant finds that while his mind is spinning this poppycock his body is behaving with great deliberation and discretion. It has taken him silently to a gap in the high yew-hedge from which he can gain, as his eyes grow accustomed to the darkness, a faint but intelligible visual impression of a further reach of the gardens. The house is over on his left; the moon rides behind it; written as if with a heavy pencil against the dimly luminous sky he can distinguish in the balustrade a single Latin word: Nisi. Grant looks back to the garden. Out of the tail of his eye he thinks he has just caught a flicker of movement. He watches and is sure of it. Robin is flitting from shadow to shadow in a wide circle round the house. Grant breaks cover and goes in direct pursuit of him. At once his mind starts putting up a better show.
Suppose that Miss Candleshoe is a miser, and that the apparent poverty of her household is the consequence of this. Suppose she has mattresses stuffed with banknotes and old trunks heavy with guineas and sovereigns and jewels. It isn't terribly likely, but at least it is a rational supposition. It is bruited abroad that to rifle Candleshoe would be to possess oneself of great wealth. Professional thieves take on the job. They reconnoitre the place – perhaps make some unsuccessful assault upon it. They lurk around, are seen in the nearest villages, withdraw for a time until any suspicions are allayed, return to further reconnaissance. And all this of cold criminal fact and intent collides with something quite different – the fantasy-world of Jay and Robin and their companions. Almost without realizing the change, the children have turned from engaging imaginary enemies to engaging real ones. And then –
Grant finds that he has fallen flat on his face, and that his face is most uncomfortably tingling. He remembers that bramble and nettle proliferate around him, and he proceeds more cautiously. Perhaps he should give a shout and summon the boy. It may be true that criminals surround them, but, even so, the best plan is probably to behave with the greatest boldness. In nine cases out of ten, surely, detected thieves and burglars cut their losses and run.
Following this line of thought, Grant is about to bellow out Robin's name when he remembers the car. It comes to him, obscurely but powerfully, that there is some sort of warning in it. Somehow the treatment it has received seems to speak of rather desperate villainy, and he wonders why. Jay would gladly be rid of the Feathers; would like to see them trundling over the cart track back to the high road. Why should not the lurking criminals – if criminals there are – feel the same? If they propose to break into Candleshoe this very night, why are they not more than willing to see the departure of the evening's altogether unexpected accession to its garrison? There is only one reasonable answer. With one or two more people on the spot they feel that they can effectively deal. But they are taking no chances of the visitors' getting away with any inkling of what is going forward and the disposition to raise an alarm. Grant's car has been immobilized for the same reason that a telephone-wire would be cut, supposing Candleshoe to boast anything so new-fangled as a telephone: effectively to isolate the place while a projected assault is carried through.
Robin has crossed a stretch of garden already familiar to Grant, who recognizes the dull gleam of a pool and in the middle of it a patch of shadow that is the small crouching Nereid with the empty shell. There is a criss-cross of paths beneath his feet, but they are overgrown and in the faint light largely indistinguishable. The surrounding hedge, gapped and irregular, shows as a mere silhouette; it might be a scattered crowd standing immobile round some nocturnal ball game. Through one of the gaps Robin vanishes and Grant follows. For a moment he distinguishes nothing but blobs of deeper darkness in a general gloom; for another moment he is startled by a sense of living presences all about him; and then it suddenly comes to him that his whole adventure must be a dream. It is a new solution, simple and comprehensive, and he is massively surrounded by evidence not otherwise to be interpreted. He has come to a halt beside an elephant; a hippopotamus is facing him; and beyond that looms a motionless giraffe. The forms are exaggerated and monstrous, but there is no mistaking them; his dream has brought him to a circus or menagerie, and in a moment he will wake up. Grant stretches out a hand to the elephant's trunk and finds that he is grasping leaves. He is in the topiary garden which – as Miss Candleshoe has explained – the children care for; they have transformed the shapes prescriptive in such a place into creatures that more engage their juvenile fancy. The notion of a dream must be abandoned. Here, in a special sense, is an enchanted wood, a grotesque metamorphosis of the plants. And amid these slumbering vegetable monsters or beyond them, it is his business to find the boy called Robin.
Grant advances. The creatures about him are mere roughly shaped masses. But they are done with the sure sense possessed by children for the nature of material and for essential form; and in the darkness this makes them entirely alive. No doubt the obscure presence of danger helps.
In the night, imagining some fear,
How easy is a bush suppos'd a bear!
But here bushes are bears. Shakespeare slips into Grant's head only to slip rapidly out again – for suddenly he grasps a new fact. Endeavouring to follow Robin, he is himself being followed. He cannot tell by what sensory channel this knowledge comes to him. But he is suddenly so vividly possessed of it that he swings round like a man expecting a blow. Only the absurd menagerie is to be seen, its members standing improbably at gaze each with another.
Mythology has been admitted, for Grant finds himself looking at a centaur. The upper part of the centaur moves. It is some common four-footed creature, with a man slinking away from behind it. As Grant marks this, he feels a hand pluck at his sleeve and hears a low warning hiss. Robin, while making his own reconnaissance, has been keeping an eye on Jay's dubiously useful recruit. Grant sees that this is the situation, and he lets himself be guided silently from the topiary garden and into a narrow walk between high hedges.
'They've come, all right.' Robin whispers this grimly but with distinguishable satisfaction. 'We'd better cut back to the house.'
Grant agrees. He has left his mother to the sole companionship of childhood and dotage in what has turned out to be, really and truly, an unknown degree of hazard. The first thing to do is to rejoin her in the security of Candleshoe. For the house does, he feels, represent security – at any rate in some degree. It is a rambling and tottering old place, but he has little doubt that Jay has given much thought to constituting it a fairly effective fortress.
There is turf beneath their feet and they break into a run; at the end of the alley they plunge into a shrubbery and move forward warily. Grant guesses that they have rounded the house and are approaching it by the rear; he sees that, as they move, Robin is thinking out a route that shall keep them steadily in shadow.
He feels his arm gripped. The boy has come to a halt and is pointing – is pointing out into clear moonlight. Grant sees a small overgrown terrace beyond which the ground seems to fall away. On this a man is standing, facing away from them. He holds an electric torch at arm's length above his head and lets its beam circle slowly in air. The movement irresistibly suggests a summons, a command to gather. Grant likes it less than anything he has yet seen.
They have moved on, and a moment later the house looms before them. They skirt a wall, are in some cold, sunken place, have come to a halt in almost complete darkness. Grant hears the boy beside him tap cautiously on a wooden surface. A moment later there is a creak somewhere overhead on their left. He guesses that a window has softly opened, glances upward, and sees or imagines he sees the glint of an arrowhead, the gleam of a drawn bow.
'Christmas at Candleshoe.'
The words are breathed in darkness, bolts are drawn back, and he and Robin tumble into a flagged lamplit passage. Archers face them as Jay closes the door and shoots the bolts back home. Jay's pallor is greater than before; his lips are compressed; his dark eyes blaze with excitement. 'They've come?'
Grant answers. 'They've come all right – whoever they are. And now you must tell me, Jay. You must tell me the whole thing.'
# 9
It was the custom of Lord Arthur Spendlove when stopping at Benison Court to reclaim from time to time what had been an important privilege of childhood – that of climbing to the roof at sunset and lowering his father's standard from its staff. On our particular evening – for the narrative upon which we are engaged will not carry us on to another – it was at a somewhat earlier hour than usual that he addressed himself to this mild ritual performance. The day had been a bumper one; they were still counting the stacks of notes and piles of silver at the turnstiles; presently a grand total would be arrived at and conveyed with some ceremony to the Marquess. From this and from the locking-up of the 'takings' – the word delighted his father – Arthur Spendlove found that he was willing to dispense himself. So he made his climb to the leads not long after the last char-à-banc had departed, and prepared to spend a contemplative half-hour with the face of nature as it appeared from that lofty station.
But from the roof of Benison the natural world shows much as does the Atlantic ocean from the deck of the Queen Mary. It is there – but at some remove, and with every appearance of respectful subjection. This appearance may be in both cases delusive; and Arthur Spendlove's consciousness of something of the sort made him frown as he glanced over the bleak immensity of Benison as this aspect revealed it. At some time or other an idle marquess had made a half-hearted attempt to ornament this sterile world of slate and lead, and had set up a proliferation of large stone objects – compounded, it might seem, from the mingled ideas of the urn, the acorn, and the pineapple – wherever an adequately supporting surface could be achieved. These meaningless embellishments, which a score of masons must have chipped at for a livelihood for months on end, jostled with chimneystacks, skylights, trapdoors, and a complicated system of wooden ladders and guide-rails which had been run up for fire-watching purposes during the war. Round the perimeter of the building it was possible to take a brisk walk of just under half-a-mile, varied by occasional climbs from one level to another. This form of exercise Arthur Spendlove no longer favoured, but he did upon this occasion stroll some way down the east wing, pausing eventually to gaze with whimsical concern at a long line of concealed attic windows thus exposed. They represented the last addition ever made to Benison, and were just under fifty years old. For it had been Arthur's grandfather who, in a fit of eccentric benevolence, had presented his twenty senior maidservants with windows instead of skylights – and even with a bathroom to share between them. The windows remained, but the rooms behind them were uninhabited – unless indeed it were by ghosts too undistinguished to be mentioned to his father's tourists. 
Arthur liked to take a glance at these windows – forlorn and vain concession to the march of time – before turning to gaze at the unchanging lineaments of rural England.
He gazed now. The scene was not, after all, quite unchanged. Straight in front of him his mother's flourishing poultry-farm spread over the broad paddocks once reserved for the hunters. Since the western arm of Benison Wood had gone, more could be seen of Benison Magna – and there was more of it to see, a rash of small red buildings on the higher ground beyond the old town. Benison Parva had always been full in view; you could make out the village school to which his grandfather, in another spasm of democratic feeling, had despatched his father every day for a whole month – with a footman and a groom in attendance. Arthur Spendlove let his eye travel here and there. There was little ground, in the nearer prospect at least, of which he did not know every yard. And even in the farthest distance he knew just where the villages, the manor houses, the farms lay. For a minute longer he stood beside the flagstaff, naming the places one by one. Kerpen House was still shut up: those people clung to London like Cockneys. You could see that old Colonel Riskey had given his little box of a place a coat of white paint. The gable east of the low church-tower of Abbot's Benison belonged to the house built by what's-his-name – a draper or ironmonger, surely, and now the local MP. And on the other side, just distinguishable... Arthur Spendlove frowned, then chuckled. How ever could he forget that? Candleshoe, of course – the cradle of the family. He must ask his father if the rum old lady was still alive.
He hauled down the flag. As he did so he heard the hum of an engine, and went to peer over the apex of the great pediment immediately before him. One of his father's cars had drawn up before the main entrance and somebody had got out. A footman was hoisting a suitcase from the boot. Arthur glanced at his watch. Somebody arrived by the London train. There were half-a-dozen visitors at Benison already, and his father hadn't mentioned that another was expected... He folded the flag, dropped it into its locker, and turned to re-enter the house.
Lord Scattergood was at the door of the small, strategically placed room from which he conducted domestic business. Seeing his son come down the great staircase, he waved a slip of paper in triumphant summons and disappeared within. Arthur followed and found that his mother was there too; she sat in a window-seat and was engaged in removing burrs from an Old English sheepdog. Lord Scattergood again waved his paper. 'A very good day. Fifty-seven pounds fifteen shillings more than last week.'
Lady Scattergood looked up. 'Fifty-nine.'
Lord Scattergood picked up a pencil. 'Fifty-nine? I could swear–'
'Fifty-nine burrs on Brown.'
Brown uttered a low woofing sound. He was always gratified on hearing his own name, despite its humble associations.
'It's not bad going, even when you make deductions for wear and tear. And, of course, it is educative.' The Marquess seemed to challenge his son to deny this gratifying consideration. 'Lets one fellow see how another fellow lives.'
'Or lived.' Arthur walked over to Brown, disentangled an ear and tugged it. 'This creature', he said affectionately, 'looks more and more like a filthy old grey rug, with some appearance of animation deriving from the presence of unspeakable things crawling about beneath.' He turned to his father. 'You know that this game is all nonsense?'
Lady Scattergood raised her head. 'Surely not all nonsense, Arthur? In your father's ideas I have always been able – well, to feel something shining through. There has always been a gleam. Don't you agree?'
'Possibly so. There is something to be said for hanging on, without a doubt. In three or four years' time – well, one just doesn't know. Circumstances may change, feelings may change – and with them the whole drift of social legislation. Brown's day may be over.'
'Brown's day over?' The Marchioness was dismayed. 'Brown's and Jones' and Robinson's.'
'It's excessively unlikely. But, as I say, one just doesn't know. So there is something to be said for living from hand to mouth.'
'I'm very glad to hear you say so, my boy.' Lord Scattergood was delighted. 'In point of fact, I have one or two plans maturing now. One of them is maturing here at this moment – I suppose in a hot bath. That is to say, if they go in for that sort of thing.'
Arthur looked suspiciously at his father. 'If who go in for what sort of thing?'
'Connoisseurs for baths. I've asked a fellow called Rosenwald for the weekend, and he arrived a few minutes ago. From Rome.'
'A man called Rosenwald has come all the way from Rome to spend a weekend at Benison?' Arthur shook his head. 'It sounds too like old times to be true.'
'There will be a small fee.'
Lady Scattergood was startled. 'You mean this man is going to pay?'
'Certainly not, my dear.' The Marquess was really shocked. 'We haven't gone into the hotel business yet, I am thankful to say. This fellow Rosenwald gets the fee. And his fare.'
Lady Scattergood parted the curtain of hair hanging over Brown's nose and gazed thoughtfully into the creature's seldom-revealed eyes. 'I should pay him only from Hamburg. It seems more suitable, with a name like that. And why does he get a fee?'
'For making an expertise.' Lord Scattergood was solemn. 'That, it seems, is the technical term. It means that he will find buyers for both Titians, and possibly for the two Velasquez portraits as well.'
Arthur Spendlove sat down abruptly. He possessed neither knowledge nor love of the fine arts in any marked degree, but he felt both startled and shocked. For a long time, indeed, he had been convinced that these and other family treasures should go. But the revelation that the cold wind of sober fact in such matters had at last penetrated the thick garments of his father's comfortable illusions was formidable. 'You've really made up your mind to sell?'
'Certainly. The right moment has come.' Lord Scattergood was very serious. 'It's much as with timber, you know. Or as it is with livestock. Recall how I found the psychological moment for parting with the Aberdeen Angus herd. I have an instinct that it's like that with Titian now. And probably with Velasquez as well.'
Arthur frowned. 'It's no more than so many square feet of canvas gone from the walls. But we'll feel it as the deuce of a gap.'
'Of course more must be laid down.'
Arthur stared. 'I beg your pardon?'
'It came to me not long ago that what one does with wine one ought to be doing with pictures and everything of that sort as well. Your mother must go round and pick things up. The same sort of thing, you know – but done by young fellows today. Nymphs and goddesses and portraits of bigwigs. We'll hang 'em up in place of the Titians and whatnot. And – mark my words, my boy – in a couple of hundred years they'll have matured out of all recognition. Given a century or two, the octagon room would do wonders for any picture.'
Arthur had heard his father assert much the same thing about the Benison cellars in relation to port. 'There may be something in what you say. But who is this Rosenwald, and how does he go to work?'
'He may come from Rome.' Lady Scattergood had her own problem. 'But is it from a shop, or from a museum? I mean, is he to have his meals–'
'My dear Grace, he is our guest – decidedly our guest. I understand him to be a private gentleman, who has become a great authority on his subject. I understand that he advises the Pope and a number of other respectable people who have these Titians and so forth on their hands. And his method of going to work is admirable. The buyer pays for the expertise. Rosenwald inspects the paintings – although of course he has seen them before – and then approaches his man. He explains that there is a chance – just a chance, you know, and extravagant hopes must not be entertained – that if he were authorized to negotiate with me–'
'I seem to have met expertise before – but I didn't know that was the name for it.' Arthur got up and opened the door for his mother, for a low-toned bell had begun to sound through Benison. 'It sounds as if the fellow will need a bath after the transaction as well. When does he inspect?'
'I thought we might all go up after dinner in a perfectly informal way, taking the Fernalls and the Crespignys and the L'Estranges along with us. It seems that Rosenwald likes these things to begin quite casually as a result of his having chanced to be stopping here or there with people of our sort.'
'What revolting rot.' Arthur gave the Old English sheepdog a prod, and it moved shapelessly from the room like an enormous decayed chrysanthemum. 'What advantage can he get from a sort of charade played out before dreary people like...'
'Arthur, my dear.' Lady Scattergood was mildly reproving.
'Very well, Mother, very well. But Brown must come too.'
'Brown, Arthur?'
'Yes, indeed. Isn't he the last of us to know how to live with any dignity in this unfortunate house?'
# 10
It was early evident to his host – as also to the Fernalls, the Crespignys, and the L'Estranges – that Dr Rosenwald was a person of high distinction in the distant world from whence he came. He spoke with whimsical affection of the Pope, praised the claret, and described modestly but in some detail the little house – already a gem even amid the sequestered villas of the Brianza – around which, for the solace of his retirement, he was slowly creating a giardinetto tagliato in the antique Sienese style. Lord Scattergood, listening to this silken old person's evocation of the severities of composition involved, felt that Benison, where not a garden but an entire landscape had been made to order, must be a shockingly tasteless and extravagant place. The Fernalls, who were accustomed to spend a fortnight of the year with an aunt at Saltino, and who had several times under the superintendence of that lady surveyed the antiquities of Florence, were conscious of a just superiority over the Crespignys, whose acquaintance with the continent was virtually confined to the city of Paris and the more hazardous parts of Switzerland. Mrs L'Estrange, since she had artistic interests and was painted almost every year for the purpose of being exhibited at Burlington House, felt it due to herself to offer some remarks on Leonardo da Vinci. Her opinions, it turned out, were of quite amazing delicacy and penetration; Dr Rosenwald, picking them up as they were delivered – somewhat embryonically, it is true – from her lips, developed them into an elaborate and felicitous discourse upon contrapposto and chiaroscuro. This continued until the ladies had withdrawn, whereupon Dr Rosenwald, easily accommodating himself to the interests of the barbarians around him, fell to patronizing the port. Lord Scattergood, who had for some years been constrained to drink wood port except upon the very highest occasions, took even this in good part. 
As a salesman, Dr Rosenwald struck him as being incontrovertibly in the very highest flight. With an unwonted exercise of imagination, he pictured the excellent creature putting on just such a turn as this for some tremendous American millionaire – and all in the interest of the Spendlove pictures. It was in high good humour that he presently suggested picking up the womenfolk again and proceeding to the octagon room at once.
'Aha – the salon carré of Benison!' Dr Rosenwald struck his whimsical note, and at the same time gracefully accepted a cigar. Lord Scattergood, as a consequence of some odd upsurge of knowledge from the large school near Windsor, found himself wondering how carré could well describe an eight-sided chamber. He perceived however that some compliment was intended – this sort of foreigner was always dishing out compliments – and he responded with the courteous hope that Dr Rosenwald wouldn't think the proposal an awful bore.
'But, milord, I am enchanté! This is a pleasure of which I had not thought.'
Lord Scattergood saw Arthur gulping the last of his port and at the same time giving him a decidedly grim look. It was evident that Dr Rosenwald liked to play out his charade with an elaboration and completeness attributable – no doubt – to his large possession of the artistic temperament. Looking firmly at his son, Lord Scattergood inquired whether his guest might not, after all, prefer a game of billiards? Dr Rosenwald replied that the notion of taking a look at the pictures was a delightful idea of his host's, and one that he was altogether unwilling to forgo. He remembered them tolerably well, having seen many of them when they were on exhibition in London before the war. He assured Lord Scattergood that his collection was one, if not of the first importance, yet of very considerable interest and charm.
At this the entire party was presently reconstituted; a footman dispatched to switch on about a quarter of a mile of lights in corridors which it would be necessary to traverse; the ladies donned wraps – for even in summer the immensities of Benison could be chilly after nightfall; and the cavalcade made its sortie from the habitable corner of the house.
Dr Rosenwald paused to admire the Swedish Countess' sledge. Unlike the mortician from Buffalo, he did not apply a scratching finger, but sketched instead a graceful arabesque in air, presumably implying that here was a formal assemblage of lines and volumes conformable with the nicest artistic taste. Lord Scattergood wondered if he was marking the outlandish old contraption down for offer to some hyperborean magnate in Greenland or Alaska.
Because Lord Scattergood had forgotten an appropriate key, the party had to pass down the long corridor that ran behind the main line of state apartments. It was crammed – as indeed were the leagues of similar corridors throughout the building – with the junk of three centuries of random collecting. On one side, in glass-fronted cabinets between the twenty regularly spaced windows, stood, hermetically sealed, sufficient china – much of it exquisite and most of it inconceivably hideous – to banquet the entire peerage; on the other were paintings, prints, statues, fossils, idols, flags, miniatures, enormous vases, fans, cannon, snuff-boxes, coins, medals, suits of armour, dugout canoes, travelling-libraries, geological specimens, and almost everything else that it is possible to amass. As Dr Rosenwald was delighted with all this, and remorselessly evinced the liveliest and most informed interest in the most outlandish of the exhibits, the progress of the party was on the slow side. Lord Scattergood wished that he had thought to put the cigar-box under his arm. His wife conversed alternately with Colonel Fernall and with Brown, neither of whom appeared to be in a communicative vein. Arthur listened to Mrs Fernall describing, in a powerful and resonant voice, her own wretched ill health. The other gentlemen had fallen into a grave discourse of fowl pest, hard-pad, and foot-and-mouth disease. Except for the exotic note struck by Dr Rosenwald, any stranger dropped miraculously into these domestic sanctities would have been gratified by an exhibition of English territorial life at its best.
At length they passed into Queen Caroline's Drawing-Room, and from thence to the Great Gallery. Dr Rosenwald stopped and pleasantly announced a modest desire to be shown the Cima da Conegliano.
Lord Scattergood glanced at the endless vista of paintings that ran in a double or treble line down the north wall and felt a moment of dismay. His librarian, Mr Archdeacon, knew something about these things – but Mr Archdeacon he had carelessly not thought to detain, and he would long ago have departed to Great Benison on his bicycle. The five-shilling tourists were uninterested in Cima da Conegliano, and Lord Scattergood was himself in consequence not as clear about this particular possession as he might have been. All the pictures here, he knew, were worth anything from five hundred to five thousand pounds apiece. It might be a good idea to sell the lot, and decorate this room with a nice line of mirrors. He seemed to remember that there was something of the sort at Versailles, a place at which the turnstiles clicked in a very satisfactory manner all day.
Meanwhile, perhaps his wife knew about this fellow Cima. He was about to inquire, when Dr Rosenwald fortunately noticed the Alessio Baldovinetti. On this master he had, it appeared, a difference of opinion with Dr Borenius, and he now proceeded to lay the case in some detail before Mrs L'Estrange. Mrs L'Estrange, gratified at this admission to the status of connoisseurship, offered intelligent murmurs. Her husband, who disliked what he called Kate's damned nonsense, made occasional growling noises indicative of impatience and distaste. Fortunately it was not easy to distinguish that these did not emanate from Brown. The party thus eventually reached the octagon room in tolerable harmony.
The stuff was all on one wall – the two Titians flanked by the two Velasquez portraits. For the two Italian pictures their owner had never greatly cared. As a boy he had judged them indecent indeed but unsatisfactory, since he had been unable to imagine himself in any amatory engagement with females of this species turning the scale at anything like the figure to be posited of these sprawling monsters. Later he had come to distinguish that they were what he called deuced colourful, but he had never kindled to them, all the same. He liked the reclining nude – she was said to be no more than a high-class tart, poor girl – better than the more elaborately engaged goddess hanging beside her. For one thing, he could never remember what that particular mythological proceeding was. And who had ever seen a swan of that size, anyway?
The two Velasquez portraits were a different matter. Here again he was bad at keeping names in his head – but he could accept each simply as a superb evocation of the aristocratic idea. This was even more true of the little girl than of the elderly grandee – although he was (Lord Scattergood suddenly remembered) King Philip the Fourth of Spain. Lord Scattergood had a great regard for ancient lineage, and admitted no illusion that a Candleshoe turned Spendlove in the later seventeenth century constituted anything of the sort. Now, therefore, he met alike the candid gaze of the little Infanta and the haughty stare of King Philip with a decidedly guilty glance. He was much struck, moreover, by the circumstance that Brown had retreated to a far angle of the octagon room and sat down with his back to the proceedings. He suddenly decided that he would let Titian go, but hang on to Velasquez to the end.
Dr Rosenwald, with Mrs L'Estrange still beside him, was examining the Titians. At least he was standing in front of them, but it was not at all clear that they were very seriously engaging his interest. Dr Rosenwald's glance was idle, almost absent; and he was edifying his companion with remarks on some of the major private collections in Italy. Did she know the Bagatti Valsecchi Collection in Milan? Or the treasures of the Crespi Palace? Or the remarkable group of pictures assembled by the late Prince Trivulzio? When she was next in Rome – and, indeed, her so charming and cultivated husband too – would she permit him the pleasure of securing her an introduction to the Contessa Adriano-Rizzoli, who in addition to possessing a magnificent Quirico da Murano was also a lineal descendant (as Sir Max Beerbohm had pointed out) of the Emperor Hadrian?
The entire party – Brown still excepted – had now gathered round in silence. There was something undeniably impressive – even hypnotic – in Dr Rosenwald's manner of thus reviewing these major repositories of the plastic arts. Lord Scattergood however was impatient; he was, indeed, indignant. The well-cadenced discourse, the resonant names of noble families across the Alps, the eye so casually exploring the canvasses immediately before it: all these things had the effect of making Benison Court and its treasures seem very small beer. With mounting irritation Lord Scattergood remembered the price of a return ticket by air from Rome. And presently he could contain himself no longer. 'Look here,' he said, '–what do you think of those Titians of ours? Are they worth anything?'
Dr Rosenwald looked at his host in surprise – as well he might, since the mortician from Buffalo himself could scarcely have asked a question more baldly. Then his distinguished features transformed themselves into a smile – a smile at first brilliant, and then almost wholly reverent. He looked at each of the pictures in turn, and again his fingers traced – but this time with infinitely greater delicacy – their arabesque in air. 'Milord,' he said, 'they are a revelation.'
'Eh?' Lord Scattergood was startled. His guests were all staring.
'I had forgotten. Indeed, in seeing them amid the bustle of that London exhibition, I had perhaps not fully realized.' Dr Rosenwald was softly solemn. 'These may be – well, the greatest Titians in the world.'
'God bless my soul!' Lord Scattergood was almost alarmed.
'But are they Titians? I have to ask myself that. Yes, most seriously do I have to put that question to myself. It is the crucial point, milord, in the expertise.' Dr Rosenwald paused. 'And the answer I finally give myself is – Yes!'
'Ah – I'm uncommonly glad to hear it.' Lord Scattergood was now altogether at sea.
'But are they merely Titians? They are Titians, assuredly, of the period – the tragically brief period, milord – of his supreme achievement. These are the work of the young Titian as he steps back – still dazzled and still divinely gifted – from the untimely grave of his exact contemporary and sole inspirer – il miglior fabbro, Giorgione!'
Mrs L'Estrange gasped. She could be trusted, Arthur Spendlove saw, to spread the tale of this impressive encounter with the higher connoisseurship broadcast among her artistic friends. And presently some young ass would be down from town, eager to do a talk on Titian's supreme creations for the Third Programme of the BBC. Rosenwald was undoubtedly worth his money. Nevertheless Arthur still preferred the company of Brown. Brown, indeed, had a great deal of wool over his own eyes. But it was not his profession to pull it over the eyes of others.
Slightly dazed, the company presently drifted from the room. The women went to bed, and the men, accompanied by Brown, repaired to the smoking-room. Lord Scattergood took a stiffer whisky than was at all customary with him. It looked as if he might make out of Titian what he had calculated to make out of Titian and Velasquez together. The trollops from the Venetian bagno would depart across the Atlantic and the Spanish royalty would remain at Benison. There was in this – Lord Scattergood opined – a high propriety that put him in excellent humour; and he gave Arthur a wink – it was a bad family habit – over the heads of the other gentlemen. For a time Fernall, Crespigny, and L'Estrange lingered over their glasses. They had a notion that Dr Rosenwald, as their senior and a stranger, should take himself off first. But, the eminent connoisseur making no move, they eventually got up and went away, amid customary civilities and involuntary yawns. It had been a devilishly dull evening.
The moment was one for which Lord Scattergood – although with faultless dissimulation – had been eagerly waiting. He turned to Dr Rosenwald. 'Well,' he demanded, 'what are we likely to get?'
'For the Velasquez portraits and the Titians?'
'Just the Titians. Will they really fetch a notable price?'
'Undoubtedly.' Dr Rosenwald favoured the Marquess of Scattergood and Lord Arthur Spendlove with his most brilliant smile. 'Provided, of course, that you can find them.'
'What's that?' Lord Scattergood supposed that he had not heard correctly.
'It was a circumstance not very convenient to mention in the presence of your other guests. But the paintings now in your octagon room, milord, are not the Benison Titians. They are only copies.'
# 11
The first to respond to this strange intelligence was Brown. He got to his feet and moved his mop-like head slowly up and down in the air. He had all the appearance of giving himself to an exhibition of well-bred mirth.
'Copies!' Lord Scattergood too had got to his feet. 'You mean we haven't any Titians after all?'
'Apparently not.' Dr Rosenwald was studying his host with interest. It might have been hazarded, indeed, that he was making an expertise. 'And I think, milord, you underestimate our difficulties. Still, something may conceivably be done.'
'Something may be done?'
'But no longer for what you call, I think, big money. So I hope you got a good figure – is not that the expression? – in the first place.'
Lord Scattergood's florid complexion had deepened to a colour which might have attracted Titian when looking for a nice curtain to hang behind a courtesan. 'Arthur,' he gasped, 'am I right in thinking that Dr Rosenwald thinks–'
'Probably you are.' Arthur Spendlove grabbed the whisky decanter and bustled about. 'But we needn't make anything of that. A damned odd thing like this may give rise to a misconception or two – eh? And no doubt Dr Rosenwald does meet some queer fish.' And Arthur turned briskly to his father's guest. 'Have another dash of this. All of us can do with it. Bit of a shock, you know. Really a shock. Just keep that in your head.' A man of more worldly guile than his father, Arthur thus steered deftly past an awkward moment. 'But I don't know that it's all that extraordinary. The war meant queer times for Benison, and a little large-scale hanky-panky may have crept in. Better send for Archdeacon.'
'Certainly we had better send for Archdeacon.' Lord Scattergood rang a bell. 'What about the Velasquez portraits – are they still the genuine thing?'
'Without question.' Dr Rosenwald had accepted with charming grace the invitation to apply himself anew to the whisky.
'And Cima What's-his-name, and Baldovinetti, and all that crowd?'
'Dear me, yes.'
'Well, now – somebody must have got in and played this trick on us. Or would it have been that girls' school?' Lord Scattergood was much struck with this possibility. 'The art-mistress, you know. I distinctly remember not at all caring for her. She might have done it at night.'
'Wasn't the octagon room a dormitory?' Arthur appeared not to think highly of his father's suggestion. 'And surely you didn't keep all those things on the walls?'
'Didn't we?' Lord Scattergood, vague on the point, paused to give an order to the servant who had entered the room. Then he resumed his speculations. 'Or would it be professional crooks? It seems a dashed queer thing for anyone of that kidney to take to. And how would they make money out of it?'
'Very readily.' Dr Rosenwald seemed now to accept the innocence of his host, and to be urbanely amused by it. 'I could tell you of a number of owners of works of art who have found it convenient to part with one or two of their treasures in an unobtrusive way. Do you happen lately to have inspected the Contessa Adriano-Rizzoli's Quirico da Murano – the picture I was mentioning to your charming Mrs L'Estrange? No? A pity.' Dr Rosenwald applied himself largely to his whisky. 'I painted it myself.'
'You painted it?' Lord Scattergood's indignation was such that he had difficulty in articulation. 'Wasn't that a damned dishonest thing to do?'
Dr Rosenwald, by no means offended, raised a mildly deprecatory hand. 'Not, I think, damned dishonest. The purchaser of the original – he lives in Chicago – got very good value for his money, even although he is pledged not to exhibit the Quirico for twenty-five years. And what the dear Contessa is pleased to hang on her walls is entirely her own affair. Nobody is defrauded in the slightest degree. It is not as if she made visitors to the Palazzo Rizzoli pay at the door.' And leaving Lord Scattergood to digest this as he might, Dr Rosenwald turned to Arthur. 'Excuse me,' he said, 'I am interested. Does this whisky come from Scotland or from Ireland?'
Reminding himself that Dr Rosenwald was his guest, Lord Scattergood took a turn about the room. 'Would you mind telling me', he said presently, 'how long it would take to concoct these two things now passing as my Titians?'
Dr Rosenwald considered. 'I think it likely', he said, 'that I could manage one in three months.'
'Bless my soul!' It had never occurred to Lord Scattergood that any work of art, whether authentic or spurious, could take more than three or four days to execute. 'What a deuced odd way for a fellow to spend his time! I can remember doing art at my private school. But it never went on for more than fifty minutes. And the last ten of those were commonly a bit of a rag.'
'Surely those Titians are insured?' Arthur halted his father's irrelevance by asking this question abruptly. 'If they are, this outrage at least isn't dead loss.'
'A very interesting point.' Benignly smiling, Dr Rosenwald shook a richly experienced head. 'Let us hope, by all means, that they are insured. But, you know, the insurance people will fight.'
'Why the dickens should they fight?'
'My dear Lord Arthur, they will fight because of the magnitude of the sum involved. They will take you to – what do you call it? – the House of Lords. They will take you to – am I right? – the Judicial Committee of your Privy Council. If, that is to say, it is necessary to fight in more than one court.'
Arthur frowned. 'I don't see that they'd have a leg to stand on.'
'On the contrary. Your father, I fear, may have great difficulty in establishing that he has ever been the owner of two authentic Titians. For an unknown length of time, two modern paintings have been hanging in Benison Court; and it has been represented – and of course believed – by the Marquess of Scattergood that these were authentic works. We cannot explain, or put an exact date to, the supposed substitution. The position, believe me, my dear Lord Arthur, is a difficult and delicate one.' Dr Rosenwald drained his glass. 'And now, milord, we had better return to the octagon room.'
'Certainly – if you think it any good.' Lord Scattergood was impressed by something businesslike that either the whisky, or the present exigency, or both, had begun to induce in the deplorable visitor from Rome. He moved to the door and looked at his watch. 'My librarian and curator, Mr Archdeacon, should be here in half-an-hour. I sent a car. Perhaps I should tell you' – and Lord Scattergood looked at his guest with some severity – 'that in addition to being extremely learned, and everything of that sort, he is a very old friend of the family.'
Dr Rosenwald made a graceful motion with a hand that had somehow managed to get hold of another cigar. 'Mr Archdeacon', he said suavely, 'is a scholar whom I have long been anxious to meet.'
Attended only by Brown, the three men returned in silence through the long empty corridors. And presently they were once more facing the spurious progeny of Tiziano Vecellio. Dr Rosenwald, who had so edified Mrs L'Estrange by his ecstasies before them half an hour ago, shook an unblushing head. 'So-so,' he said. 'Decidedly so-so. It surprises me that no moderately-informed visitor – But no matter. I think we will have the Leda, if you please, down from the wall.'
'Is that the one with the swan?' Lord Scattergood looked at the picture with a distaste only intensified by his new knowledge. 'I've never had any notion of what it's about, and I wouldn't like to mention the idea it puts in my head. Had we better have a man up to help?'
'Much better not. Lord Arthur and I will have no difficulty.' Dr Rosenwald was a monument of discretion. 'These little troubles, believe me, are sometimes best kept in the family.'
Arthur, remarking his father compress his lips at the promotion which this smooth old rascal was thus according himself, made haste to get to work on the canvas. They lowered it to the floor. Dr Rosenwald, producing a magnifying glass and an instrument like a scalpel, took on an air of professional intentness that was undeniably impressive. He might have been a plastic surgeon in fashionable practice, and about to address himself to Leda's rotundities in the interest of a modern couture. Or he might have been a poulterer, minded to prepare her web-footed friend for some traditional feast. His actual proceedings, however, amounted to no more than first taking a glance at the back of the canvas and then doing a certain amount of scratching and scraping of its painted surface. Lord Scattergood watched him uneasily. There had come into his head the alarming idea that Dr Rosenwald might be either a madman or a monstrous practical joker, and the work he was thus chipping at an authentic masterpiece of the sixteenth century after all.
But any such notion as this evaporated before the brisk conviction with which the eminent Roman connoisseur presently straightened himself and spoke. 'There is no question of what you would call a fake. The work has been done on a new canvas, not an old one. And the pigments and processes are palpably modern. This is not a forgery. There has been no attempt to deceive an expert.' Dr Rosenwald spoke as one frankly disappointed that the higher levels of his science need not be called into play. 'This is a straightforward copy, and nothing else.'
'The sort of thing you see old ladies doing in the National Gallery and all those places abroad?' Lord Scattergood seemed mildly surprised at the reach of his own artistic information.
'Precisely that sort of thing. And I see no need for a more particular examination of the other painting at present.'
'You're quite sure that it's all right about Velasquez?' Lord Scattergood exchanged an uneasy glance with King Philip and the Infanta. It appeared altogether shocking to him to have to ask such a question in their presence. But his anxiety forbade him to wait until he was once more out of their view. 'Hadn't you better vet them a little more thoroughly?'
Dr Rosenwald shook his head. 'Your Velasquez portraits are authentic. On Velasquez, milord, I could not be deceived in the dark. On Velasquez' – and Dr Rosenwald brilliantly but modestly smiled – 'I am the first authority in Europe.'
The arrival of Mr Archdeacon in the smoking-room some fifteen minutes later was distinguished by a demonstration on the part of Brown. The high regard which the Spendlove family in general felt for their librarian was clearly shared by this severer judge. He and Mr Archdeacon, in fact, embraced cordially; and as Mr Archdeacon was a venerable person with flowing white hair, abundant eyebrows, and a bushy beard the visual effect was striking. It was some moments before Lord Scattergood could provide the new arrival with whisky and introduce Dr Rosenwald. He then explained the state of the case. Listening in silence, Mr Archdeacon occupied himself with stuffing an enormous pipe.
'So you see, my dear Archdeacon, here is a shocking thing. I can't imagine anything more disgraceful. We have been showing these pictures to the public as being by Titian, and it turns out that they are by somebody quite different – a school-mistress perhaps, or a new sort of burglar.'
Mr Archdeacon nodded through a cloud of smoke. 'It is very deplorable, to be sure. But beauty, after all, is in the eye of the beholder.'
'Is that so? I hadn't heard.' Lord Scattergood received this mysterious intelligence respectfully. 'And that makes a difference?'
'Assuredly. Let us conclude that each man largely creates the beauty he experiences, and our position is morally a strong one. Let me be very clear, very simple. By "Titian" – or shall we say rather by "Titianness"? – we mean a class of experiences, preponderantly emotional but in part intellectual, varying from individual to individual within limits which I shall presently endeavour to define. "Titianness", in fact, is a term only applicable with any philosophical strictness to phenomena of a purely subjective character. Whether that in the outer world whereby the response of "Titianness" is occasioned has or has not any objective and verifiable connexion with the man Tiziano Vecellio is a circumstance altogether immaterial.' Mr Archdeacon emitted a further cloud of smoke, which had perhaps the effect of a little obscuring his train of thought. 'So, you see, we need not really worry on the score of having been parties to a deception.'
'I'm extremely glad to hear it.' Lord Scattergood's gratitude to the family sage for this clarification of his ethical position was unaffected. 'I was afraid, you see, that we hadn't been giving people their money's worth. Forgeries, after all, are not at all a nice thing to have about.'
'My dear Marquess, we are all forgeries.'
'You don't say so!'
'Certainly – even Brown.' From behind his now impenetrable cloud Mr Archdeacon gave a Jehovah-like chuckle. 'Brown himself is but a counterfeit, a feeble copy of the real Brown – whom we should find, you know, only in the kennels of Heaven. And were Titian – or shall we say the late Sir Edwin Landseer? – to execute a painting of our Brown, what would this be but a copy of a copy, a shadow of a shadow? Now, suppose further that a forger gets to work on Landseer's painting. His work will be at but one further remove from the real Brown – the shadow, we may say, of a shadow's shadow. There is here a field for abundant reflection.'
'That's extremely true.' Lord Scattergood hesitated before descending from these edifying and Platonic heights. 'But the plain fact, my dear fellow, is this: that people who collect art and so forth don't manage to take your profound sort of views. They have matter-of-fact minds, Archdeacon – damned matter-of-fact minds. And the disappearance of these things means that I stand to lose the deuce of a lot of money. To tell you the truth, Dr Rosenwald here was going to find a millionaire or two to take the Titians off my hands for an uncommonly large sum.'
'That's another matter.' Abruptly Mr Archdeacon rose to his feet and emerged from the layers of smoke wherein he had been enshrouded. 'The paintings must be recovered.' He turned to Dr Rosenwald. 'When, pray, would the copies have been executed?'
'Judging from the state of the pigment, they are not less than three years old, and not more than ten.'
'You are sure of that?'
'My good sir, with me these are matters of professional knowledge.' Dr Rosenwald was gracefully magistral. 'I have no doubt of it whatever.'
'Very good.' Mr Archdeacon, who had taken upon himself with surprising suddenness the role of practical investigator, paused to give Brown an amiable cuff on the nose. 'And now be so good, Dr Rosenwald, as to tell me this: is it possible, in your judgement, that these copies could have been made other than direct from the originals? I may remind you that the Marquess many years ago gave permission for the preparation and sale of colour prints of a superior sort, and that the paintings have further been photographed in considerable detail.'
Dr Rosenwald considered. 'The copies are not very good copies. But they have been executed with much care, and almost certainly from the originals. And that would involve access to the originals covering a period of many weeks – probably, indeed, of many months.'
'Thank you.' Mr Archdeacon, accompanied by Brown, took a turn about the room. Both Lord Scattergood and his son watched the family oracle respectfully. Dr Rosenwald benefited by their absorption to the extent of a further glass of whisky and a third cigar. 'There can be no doubt as to how the matter stands.' Mr Archdeacon came to a halt again before his employer. 'Unfortunately it can only be described as in a posture of some delicacy.'
'Is that so?' Lord Scattergood was dismayed. 'And you don't see quite what to do?'
'I by no means make that asseveration.' Having delivered himself of this mild rebuke, Mr Archdeacon briefly resumed his perambulation. For a moment he halted in a far corner – seemingly for the purpose of conferring with Brown. And presently he returned. 'You will recall that at the outbreak of war we sent a good many of the things away. With so important a Ministry proposing to move in, it looked as if we might well be singled out as a target for aerial attack.'
'To be sure.' Lord Scattergood nodded intelligently. 'I remember that you advised sending the muniments to Corbies.'
'They were, of course, the objects of our chief concern.' Mr Archdeacon turned to Dr Rosenwald. 'Paintings and so forth are one thing. But family documents, I am sure you will agree, are quite another.'
Whether the eminent connoisseur indeed concurred in the view that charters and title-deeds must enjoy priority over the achievements of Cima da Conegliano and Alessio Baldovinetti – let alone of Titian and Velasquez – was highly doubtful. Dr Rosenwald however had by this time advanced so far in independent research into the territorial origins of Lord Scattergood's whisky as to be indisposed to argument on the subject; so that Mr Archdeacon presently resumed his observations to the room in general.
'But we did at the same time disperse a considerable number of the works of fine art – the major Italian and Spanish paintings included. It was not easy, however, to arrange transport to Scotland on a large scale. I bethought myself, therefore, of invoking the courtesy of our more retired neighbours. That Benison should be bombed appeared not improbable. But who would wish, for example, to blow up old Colonel Riskey?'
Lord Scattergood nodded. 'Very true. Unless one knew him, that is to say. And he had probably never run up against Goering and those fellows personally.'
'Or what likelihood was there of enemy action being directed upon an edifice so inconsiderable as Kerpen House?' Mr Archdeacon paused. 'So I sent the better ceramics and bronzes to Sir Richard, and the Colonel was good enough to house the prints and drawings.'
'And the paintings?' Lord Scattergood was all anxiety. 'It was in them, after all, that the hard cash lay.'
'Precisely. The point had by no means escaped me.' For a moment or two Mr Archdeacon applied himself once more to his pipe. 'I therefore arranged that the paintings should go to the most retired and insignificant spot of all. Or insignificant, I should say, but for the accident of its early association with the family. In short...'
'Candleshoe!' Understanding flashed upon Lord Scattergood. 'The paintings went there?'
'The most important paintings certainly did. And at Candleshoe, clearly, the substitution must have been effected.'
'Then we must go and find out. I'll order round a car this minute.' And Lord Scattergood firmly rang a bell for the second time that evening. 'But – by Jove! – isn't the old lady said to be a bit hard to handle?'
'There is not a doubt of it. Only the high vein of patriotic feeling current at that time disposed her, if I recollect rightly, to admit anything at all from Benison. And she charged a good round figure, too.' Lord Scattergood's philosophic librarian took another puff at his pipe. 'Fortunately it occurred to me to send on the bill to the Ministry. They paid, without demur.'
Arthur Spendlove was looking doubtful. 'Ought we really to go over there at this hour? We can't ring up and make a civil inquiry about its being convenient. Candleshoe is certainly not on the telephone.'
'It might be described as only very uncertainly on the map.' Mr Archdeacon was on the point of indulging himself in a laugh on the strength of this witticism, but was dissuaded by an evidently disapproving gesture on the part of Brown. 'It makes, that is to say, no great figure in the world at present. I doubt whether there be anybody there except Miss Candleshoe herself – unless, indeed, she still has poor Armigel.' And Mr Archdeacon shook his venerable locks. 'The dear old boy must be getting on.'
Lord Scattergood looked at his watch. 'Perhaps, after all, it would be better to wait till the morning? A spinster of advanced years, don't you know, living in a tumbledown place like that, might be a bit alarmed–'
'I think we'd better go now.' Arthur had changed his mind. 'If there is really some danger of the old lady's being unco-operative, something in the way of shock tactics may be the best start.'
'Very true, my dear boy – very true, indeed.' With some dexterity, Lord Scattergood moved the whisky decanter out of Dr Rosenwald's immediate reach. 'Not that I'd want to do anything to upset old Miss Candleshoe. We've none of us seen her for years, you know; and I have an idea that she has a bit of a bee in her bonnet in the matter of the family history. Absurd – but there it is. And I shouldn't be surprised if she's in difficulties. I think it very possible that she's hard up. Shocking to think of – eh?' Lord Scattergood was genuinely distressed at the notion of indigence among the upper classes. 'But we'd better be off at once. Arthur, will you drive? No need to drag out Ball. Archdeacon, my dear fellow, I rely on you to come along. Fortunately it's a mild night, and there will be a bit of a moon.' Lord Scattergood's eye, as he spoke, fell upon Dr Rosenwald. 'Good lord – is that fellow asleep?'
Arthur gave the connoisseur an unceremonious prod – without discernible effect. 'Heavily, it seems. Perhaps he's unused to whisky – eh?'
'Had we better rouse him and take him along?' Lord Scattergood consulted his librarian. 'Would he be useful with the old girl?'
The sage nodded. 'Not perhaps with the old girl – but conceivably with the Old Master.'
'What's that? James Candleshoe died years ago.'
'You misapprehend me, my dear Marquess. I refer to Titian.'
'To be sure. And what a deuced mysterious business this is! But I believe, my dear Archdeacon, that you already see some light in it.'
The librarian, who was re-enveloping himself in an ancient Inverness cape, paused to consider this. 'I think I may say that I see some possibility of presently advancing upon a working hypothesis.'
'By Jove! is that so?' Lord Scattergood was impressed. 'Had I better bring a gun?'
'My dear Marquess, all we need take is authority and a clear head. It is a situation in which we ought to have no difficulty in effecting a convenient division of labour.'
# 12
Candleshoe has ample cellarage, and parts of this are distinguishably of far greater antiquity than is the house. It is supposed that when Robert Candleshoe built his ambitious new dwelling he incorporated in its foundations the substructure of some immemorial building acquired by the family upon its first coming to prosperity. It is here only that the ghosts walk – a circumstance which would seem to argue the very high antiquity of these apparitions. There can be little doubt that the ghosts look upon Candleshoe as Candleshoes look upon Spendloves. When these spirits were incarnate, William of Normandy had not yet come to England.
It is at this lower level that Grant Feather and the boy Robin have been readmitted to the beleaguered house. The children themselves are as pale as ghosts. But Jay, who leads them, is less like a ghost than a flame. Crisis has come, and he has kindled to it.
Grant now realizes that the place is actually under some sort of threat from an unknown number of rascals gathered outside it. These can hardly be intending assault and violence for its own sake. They can scarcely, for instance, be prosecuting any species of blood-feud with Miss Candleshoe or old Mr Armigel. Robbery must be their motive – although it is hard to see what in this poverty-stricken mansion can be worth removal. Still, theft alone can be their object; and this is a circumstance slightly alarming perhaps, but prosaic enough. Over against it is the other and disproportionate fact of these children's emotional state, of their dangerous weapons and resolute bearing; of an exaltation in their leader for which Grant obscurely feels there is an ominous word. He is acutely conscious that the situation must be controlled. Fate, in tumbling him into Candleshoe on this particular evening, seems to have handed him out this assignment and to be watching with an unwinking eye how he measures up to it. Grant walks up to Jay, puts a hand on his shoulder, and repeats what he has just said. 'You must tell me the whole thing.'
'Come upstairs.' This from Jay may be either a request or an order. The boy speaks rapidly to Robin – he seems constantly to be redisposing the small force at his command – and then turns and strides away. Grant follows. It is a narrow passage, stone-vaulted and flagged, and their footfalls have an exaggerated resonance, like an effect for radio. Jay carries a lantern; its light glints on something richly figured in the outlandish old clothes which he wears with so sombre a grace. 'Fey' is the ominous word that might be applied to the boy; it is conceivable that as the climax of this nonsense he is expecting to die.
Grant feels the necessity of saying something commonplace. 'Jay,' he asks, 'what's my mother doing? I'd better have a word with her and explain how we're held up. And Miss Candleshoe will be wondering why we don't clear out.'
'The women must wait.' Jay makes this pronouncement without turning round, but as they are now ascending a spiral stone staircase his features are just visible in profile, lit up from below by the lantern he carries at his knee. Seen thus, he looks calmer and older; desperate as the situation may be, he is conscious of having a masculine grip on it; the words he has just spoken come from him perfectly naturally.
Grant reflects that his mother has little sense of time, Miss Candleshoe much less, and Mr Armigel demonstrably none at all. He had better not bother about them, therefore, until he has won Jay's confidence – as there seems at least to be a chance of doing. They have emerged on the ground floor, crossed a lobby, and are now climbing a broader staircase with shallow wooden treads. It goes up and up by short flights round a rectangular well, and on several landings they pass without pausing high closed doors that must give upon apartments of consequence long ago. The perishing timber creaks beneath their feet; dust lies on the dull surfaces of ancient chests and cupboards in the window-embrasures; thick dust swims in the beam of Jay's lantern. The whole place smells of decay – of the slow inoffensive decay of dry panelling and crumbling leather and tindery hangings and innumerable stuffs and fabrics long since laid carefully away.
A slight sound behind Grant makes him whirl round in a flash. There is nothing there except his old antagonist the wolfhound. Jay turns too. 'Don't mind Lightning – even if he did take a nip at you before. He usually follows me round.'
'Certainly I shan't mind Lightning.' Grant realizes from the speed of his own reaction how much he is keyed up. 'Are we going right to the roof?'
'No – only to the gallery. If you are to know, you may as well be shown, I think.' Suddenly Jay stops, turns, and raises the lantern high in air, so that Grant is full in the light of it. 'Do you give me your word that you haven't come here because you do know?'
'I didn't come here as a result of knowing anything, Jay. My mother simply saw an old house and followed her nose to it, which is a way she has. I just don't have an idea of what you're talking about. But I suppose it must be whatever those crooks outside have come after. And we're going to stop them.'
'Then I'll tell you. It's the Christmas box.'
'The Christmas box? Isn't that a sort of present you give the letter-carrier and the ash man?'
Jay shakes his head. 'Not here. Our Christmas was a man.'
'Of course he was,' Grant has remembered. 'The sculptor who made the monument to Admiral Candleshoe in the chapel. And Mr Armigel said he made something for the house as well. Is that the box?'
'Yes – and you're going to see it – what can be seen of it, that is – now.' Jay turns and climbs again.
'And those fellows want to steal the box?'
'It's stranger than that.' Jay stops and opens a door. He continues to stand still for a moment, so that Lightning slips past and vanishes into darkness. 'You understand about the Long Gallery of a house like this? We're there now. But you must stay here at the door, please, until I get more light. The floor is bad.'
Grant, left waiting at the top of the stairway, finds that he is listening intently for sounds from the house below. It comes into his head that the enemy may have one of their number already concealed within, who is even now creeping to unbar some postern and admit his fellows. It would be possible, surely, for a patient ruffian to lurk undetected in a corner of Candleshoe for days – and may not such a one, therefore, have entered long before the present crisis aroused the extreme vigilance of the children? Or again, there are the two crazy old folk who are the house's only adult inhabitants. May one or the other not be tricked at any moment into answering a knock, a call? He realizes that these and a score of other questions which he himself is without the knowledge to formulate can never be out of Jay's head; they form the weight of public care that hangs on the boy's brow. Bows and arrows are very well – but Grant wishes he had a gun. Surely there must be at least a shotgun, a sporting rifle, in the house? He remembers Mr Armigel's having said something about the last such weapon blowing to pieces in his hands.
Once more he listens intently – listens for a stealthy footstep on the dusty treads below him. There is no sound, and he crosses to the door of the Long Gallery and looks in. Jay is only halfway down, but in the murky perspective of the place he seems already a long way away. He is lighting a row of candles that stand in rusty sconces along the right-hand wall. Lightning stands beside him, his ears pricking into the darkness beyond. Jay turns and beckons. Grant takes another quick look behind him – he scarcely knows whether his behaviour is rational or panicky – and enters. Here and there the floorboards have decayed and vanished. He treads carefully, and sees little of the gallery until he is standing beside the boy in the middle of it. Jay motions him to stand still, then moves on and lights more candles. As he nears the far end of the gallery something wholly bizarre becomes first faintly and then more clearly visible. It is as if the gallery were a tunnel ending in open air. Grant is looking into a little nocturnal glade between over-arching trees.
The thing gives him what the topiary garden gave: a brief moment of extreme strangeness. Then he sees that this is another ghost – the ghost of some departed modest revelry, a tattered remnant of stage décor. Perhaps it was Miss Candleshoe's brother Sir James who had a taste for private theatricals; perhaps it was Sir James' great-grand-father. But for its split second of illusion the thing has had Grant gaping – and this the boy has seen. Surprisingly, lithely, he vaults to the little stage and strikes an attitude; then his clear voice rings down the gallery:
'A fool, a fool! I met a fool i' the forest,
A motley fool; a miserable world!
As I do live by food, I met a fool...'
Jay's inky clothes are surely Hamlet's. But they do very well for Jaques – and for a moment the boy holds his pose before he jumps down from the stage. He is laughing at Grant. It is a queer carefree interlude, the appearance for a flash of a Jay troubled by no problems of generalship. Then, grave again, he is pointing over Grant's shoulder. 'There!'
Grant turns round, making a quick survey of the whole place as he does so. It is panelled and has a plaster ceiling parts of which have come down; the height is inconsiderable, and except for two deep bays near either end the gallery cannot be more than six or seven yards broad. But it is at least fifty yards long, and the immense promenade which this permits of must have been the prime pride of Candleshoe once upon a time. The central floor-space is vacant – indeed little could now be set down there with safety – but along either wall there is an uninterrupted jumble of junk which makes the great hall downstairs appear a very orderly sight indeed. A few of the objects can be scarcely a century old: a weighing-machine, for instance, and a mechanical horse, and a variety of culinary and other domestic engines plainly of the Victorian age. But most of the stuff survives – after a fashion – from far earlier times, and some of it must represent the original furnishings of the gallery.
The two deep bays are in fact great windows, and opposite each is an elaborately carved fireplace. Or so Grant for a moment thinks. Then he sees that one of them (it is to this that Jay is pointing) is not a fireplace at all. It is Admiral Candleshoe's monument, done all over again. Grant positively rubs his eyes. He then sees that, this time, Gerard Christmas has done his work with a difference. It is the Admiral's monument once more – but this time the Admiral himself is missing. The flanking figures have lowered their curtains upon the watery scene. Between the spectator and the Admiral – if he is really there – are two massive slabs of marble, chiselled into heavy folds.
'There,' Jay repeats. 'That's the Christmas box. It has been called that always.'
Jay has slipped away to listen at the head of the staircase. Grant is left staring. Lightning, aware that the monument – if it may be called that – is a focus of interest, goes up and sniffs at it. Perhaps for some sinister reason, perhaps merely because a cold draught from its crevices has tickled his nose, the hairs of his neck bristle. Jay returns and Grant speaks. 'I don't see any sense in it.' He is aware that this is a prosaic and inadequate reaction. But the thing can only be some sort of joke, and he is offended by the notion of a joke which must have entailed a great deal of human labour.
'That's because you don't know the story.' Jay takes Lightning by the collar and makes him lie down. 'Thomas Candleshoe was various things.'
'The Admiral?'
'He was that in the end. But he was only a captain when he sailed with Drake against the Armada. And although he was to inherit this house from his father, he was quite a poor man. Then he disappeared.'
'Disappeared? But didn't he go on something called the Islands Voyage?'
'That was nearly ten years later, and he was drowned on it. But what do you think he did in between?'
'Turned pirate, perhaps.'
Grant has spoken idly, saying merely what appears to be the appropriate thing. But Jay looks at him with swift distrust. 'So you do know something?'
'Nothing of the kind, Jay. I'm just taking a guess.'
'Well, he did. But it isn't really known. It's in an old book in the library – one that was printed just for members of the family. The first page says "Privately Printed in 1823".'
'It tells about Admiral Thomas having been a pirate?'
'Yes – and the legend of the Christmas box.'
'I see.' Grant looks at the heavy marble affair before which they are still standing. 'Do you know what a legend is?'
'Of course.' Jay's pale cheeks flush faintly. It is plain that he would quickly resent any rash reference to his circumscribed education. 'But a good many legends are – are founded in fact.'
'Is there treasure in this legend?'
'Yes.'
'And the treasure was hidden in the Christmas box – perhaps is there still?'
'Yes.' Jay is very pale again. Here is the core of some immense fantasy within which he lives. He fears incredulity far more than he fears the men now prowling outside Candleshoe.
'Wasn't the Admiral drowned before this house was built?'
'Two or three years before that.'
'Then he couldn't have done any hiding of treasure here himself?'
'Of course not – any more than he could have ordered his own monument, either in the chapel or here. Thomas' younger brother Robert, who was his heir, built this house – and paid for it perhaps with money from Thomas' treasure. He sold jewels and plate and coins that Thomas had won from the Spaniards, and gave the money to the masons and carpenters.'
Grant nods acceptively. 'That sounds likely enough, Jay. I'd say a good many English houses were paid for that way in the days of Drake. But when Robert had Gerard Christmas carve a monument to Admiral Thomas in the chapel, why did he get him to make this affair as well?'
'That's just the point!' Jay is eager. 'There was treasure that couldn't be sold – that couldn't be owned to. Don't you see? Thomas had been reckless about whom he robbed at sea. He had been a real pirate – not just a privateer pillaging only the Queen's enemies. So there was a great deal of wealth that couldn't possibly be owned to – not perhaps for hundreds of years. And that's why Robert Candleshoe had Christmas build him this secret chamber. It was to house the treasure in until later members of the family could safely use it.'
Jay is urgent, but at the same time he is perfectly matter-of-fact. Grant feels that he himself may presently be persuaded into actually accepting the boy's tall story. He looks again at the enigmatic structure before him, and it strikes him as being rather like a poem of the same tortuously minded age: an elaborate conceit, and a chilly one. 'Don't you think', he asks, 'that it's rather an odd way of concealing treasure? A secure hiding place, surely, ought to be unnoticeable. This affair sets one a great puzzle at once.'
'Their minds didn't work like that.' Jay gives himself courteously to explanation. 'The story is that Robert and the Admiral's widow – Thomas was married, although he had no children – quarrelled over the form the monument should take. The widow had her way in the chapel, and Robert said the design was extravagant; was what we should call theatrical, or in bad taste. So Robert had this one, which he called chaster, set up here in the gallery of his new house. But all this story of a quarrel was, of course, only a blind. It covered the making of a small secret chamber by Christmas and his men. Christmas was very reliable. He had carved the figurehead of Admiral Candleshoe's ship, and he was in the family secret.'
'As you and I are now – not to mention those fellows out in the garden?' Jay's story hangs together after a fantastic fashion – but it is surely a yarn very much out of a boys' magazine. 'You say you read all this in a book printed more than a hundred years ago? If it was known like that, and there was really supposed to be treasure, surely one Candleshoe or another would have looked into it?'
'Looked into Christmas' box? But you can't. The entrance is a lost secret.'
Grant chuckles. 'It always is – in tales like this, Jay. But plenty of Candleshoes would have broken in with a crowbar, surely, if they'd believed there was wealth behind these hunks of marble.'
'They just didn't – and for two reasons.' Jay is now confident again in his story; his high state of tension has eased a little as he absorbs himself in retailing it; his right hand caresses Lightning, who has laid his nose between his paws and appears to be asleep. 'It did come, you see, to be thought of as only a legend. That was in the eighteenth century, which was a very – a very rational time.' This time, Jay smiles at his own ignorance. 'Is that the right word?'
'I think it is. And the other reason?'
'When people do become that – rational, I mean, and scorning old stories – they become secretly superstitious as well. And there is a superstition about the Christmas box which none of the Candleshoes has cared to go against. This too is in the old book. And it is this: that when the family's danger is greater than it has ever been, the Christmas box will open and – and save the situation. That part is silly, perhaps. But I like it, all the same.' Jay's eye is kindling again. 'Don't you?'
'I don't like the notion that there are a lot of crooks hanging around this place, thinking they will do themselves a whole heap of good by smashing up this gallery in a hunt for treasure from the Spanish Main. If they've got hold of the old story, it seems a pity.'
Grant speaks mildly. But he is startled to see the ironic twist that must be given to his own first near-shot at the actual state of affairs. He had thought of the crooks as after real booty – and at a sort of cross-purposes with the children, who are interpreting the situation in terms of their own private imaginings. But now it appears that the crooks are pursuing and the children defending the same fairy gold. It is wildly improbable that there is any truth in Jay's history or legend. Far more likely, although the boy does not realize it, is the story of the dispute over alternative monuments. Crooks however may well be persons of indifferent education, incapable of weighing evidence in a matter of this sort. Somehow they have got hold of Jay's story, and it has not occurred to them to disbelieve it.
'They must have got hold of the book, you see.' Jay continues patiently to explain. 'It was a great mistake to put such a thing in a book – even if it was to be, as they call it, privately printed. Wicked people were sure to get hold of a copy one day.'
'That may be true.' Grant looks again at the Christmas box, and a fresh consideration strikes him. 'Jay – have you measured? Is there more space to account for behind this monument than would be occupied by the old chimney-shaft?'
Jay nods; his anxiety to convince keeps him patient still. 'Yes, indeed. It would be difficult to show you in the dark, and you have to make measurements if you are really going to be sure. But I've worked it out that there is space for a room fifteen feet one way and eight feet the other. Robert Candleshoe could have got quite a lot of treasure into that.'
'Quite enough to set the place on its feet again.' Grant finds that, however heated he must suppose Jay's imagination to be, he has no disposition to distrust the boy's measurements. 'But why have you kept quiet about all this? Why are you chancing it that you and your friends will be able to beat this enemy alone? I'd say it would have been better to tell Miss Candleshoe and Mr Armigel. Or do you think them too cra–' Grant checks himself. 'Do you think them too old to be reliable?'
There is a moment's silence. Jay is having one of his rare hesitations. He tugs at Lightning's ear, and the hound's tail, stirring in acknowledgement, sends up a little eddy of dust from the floor. 'Shall I tell you? I'm trusting you very far.'
'Sure. But you can go on trusting me, Jay.'
'Well, you see it's like this. When I was quite small, I used to imagine things.'
'I see.' Grant looks warily at the boy. 'And you grew out of it?'
'Of course. But at that time both Miss Candleshoe and Mr Armigel, who were more – more observant then, thought that I imagined things too much. They are very kind. But of course it is a long time – a very long time – since they were young like you and me.'
'It certainly is.' Grant feels unreasonably flattered.
'And then – when, as I say, I was much younger still, and really quite small – they were worried about this. They used to say that being alone here was bad, and that I ought to be sent away. I discovered, by listening when I shouldn't' – Jay flushes faintly – 'that Miss Candleshoe was inquiring about boarding-schools.'
'That was pretty handsome of her, wasn't it?'
Jay's flush deepens. 'You mean because I am only an orphan whose mother was – was an employee here? Yes, of course. But my mother died in an accident, you know, almost before I can remember her; and Miss Candleshoe has considered me a responsibility.' Jay articulates this last word very precisely. 'She is, I say, very kind. And because she has very little money now, I believe she would have sold something valuable here – we have still, you know, a few such things – to send me to this school. So at once I had to become different.'
'Different, Jay?'
'Not imagining things. I had to become a – a practical boy, who knew what could still be done with animals, and in the garden, and so that we can all continue to live here although there is less and less money. Have you asked Mr Armigel about me?'
Grant finds this direct challenge embarrassing. 'Mr Armigel has spoken of you.'
'Then he has certainly told you that I am not a boy who imagines things. Has he not?'
Grant grins. 'Sure.'
'It is a thing that pleases him, and Miss Candleshoe too. They feel that they have handled me well. But if I now told them the truth about this plot against the treasure in the Christmas box–'
'They would pack you off to that school after all?'
'There would be a danger of it, I think. Of course, they are both very old now, and you can't tell any longer how they will take things. That is why I have been anxious too about your mother. They might sell her Candleshoe, quite suddenly, in order to follow out some foolish plan of their own.'
'I believe they might.' Grant considers the boy soberly. 'See here, Jay – you are American born, just as I am. But I take it that your future is going to be here in England. And you know the English reckon it an advantage for a kid to have been at the kind of school Miss Candleshoe was probably thinking of?'
'I'm not interested in that.' This time Jay's reply is like a flash.
'Do you know what would happen if my mother did buy Candleshoe?'
'Builders and decorators and insolent servants from London.'
'Maybe so.' Grant reflects that a streak of something very lordly is evident at times in Jay's speech. 'But she'd consider herself as taking over the livestock too.'
'The livestock?' Jay glances at Lightning – and then back at Grant as comprehension comes to him. 'You mean me?'
'Just that. And if you weren't a polite kind of boy your reply would be "Damn her impudence" – wouldn't it? But she would think the world of you as her very own discovery, and probably want to send you to an even grander–'
'I prefer, please, to be nobody's discovery but my own.' Jay looks at Grant with a directness that shows him to attach a clear significance to this statement. Then he seems to feel that some softening civility should be added. 'Your mother is a tremendously wealthy person?'
'Wealthier, I'd say, than Lord Scattergood and half the other marquesses of England rolled up together.'
'That must be very nice.'
Grant laughs aloud. 'You mean, don't you, "My God, how awful"? They do seem, Jay, to have made an utter Englishman of you.'
Jay frowns. 'All that – about England and America, I mean – is something that I must think about at another time.'
'Quite right, son. Just at this moment, you do seem to have quite enough on your plate already. But listen. There really are crooks hanging about Candleshoe. They've wrecked my car. And I've seen one of them myself, sending signals to others. If we bring in the police and clear them up, nobody can possibly say you've been imagining things.'
'There would be a – an inquiry into the Christmas box. It might be opened. The treasure might be taken by – by the Government, by the Queen. Doesn't that happen to treasure trove?'
'I don't know what the law would say about it, Jay. But suppose there really is a treasure. Mightn't it be of more use to the Government, or to the Queen, than just lying behind all that marble? And it wouldn't be of much significance to any one so very old as Miss Candleshoe, would it? And there don't seem to be any other Candleshoes within sight. The family looks like being extinct, and the old Admiral's hoard still untouched.'
'I have thought about all that.' Jay is cautious again. 'But I see it differently, somehow. I think I believe in the legend, in a way. That there will be a crisis, I mean, and that Candleshoe will be saved by the secret of the Christmas box being revealed at that moment.'
'Isn't that what's called imagining things?'
Jay opens his eyes wide. 'I didn't say I had stopped imagining things. I'd as soon stop living. Wouldn't you?'
# 13
Grant Feather, who is going to be a great writer and transform what he likes to call 'the creative situation' on the North American continent, feels rather shattered by this coup on the part of the son of Candleshoe's deceased housekeeper. He takes another look round the Long Gallery and is constrained to admit that a boy who, having the run of such a place, yet refused to give his fancy some rein in it would be sadly wasting his opportunities. Not Jaques alone haunts the cobweb and tattered canvas of that derelict stage; Rosalind and Celia too lurk in the wings – and Touchstone, and the lioness, and the green and gilded snake. They have been there a full two years, likely enough – ever since Robin Hood and Friar Tuck made way for them. And here, behind the boldly incised marble of Gerard Christmas, lies half the treasure of the Spanish Main. Had Admiral Candleshoe one leg or two? Impossible to tell, since even that other and more informative monument submerges him up to the neck in his petrified ocean. But it is a good guess that in Jay's mind he is still not wholly distinct from Long John Silver, and that this mouldering gallery has often been the deserted deck of the Hispaniola, with Israel Hands lying in a pool of blood in the scuppers. It has been too the Admiral Benbow tavern near midnight with Jim Hawkins bending over the dead mariner, and hearing suddenly upon the frozen road –
Grant gives a jump that brings Lightning to his feet, his spine once more bristling. From somewhere beyond the confines of the dimly lit gallery comes a faint but crisp tap-tap. For a moment the sound seems to penetrate from beyond the enigmatical marble curtains before which Grant and Jay stand – and for a moment too it suggests overpoweringly a stick in the hands of a blind man. Then there is a murmur of voices and the illusion dissipates itself. Miss Candleshoe has entered the gallery. Old ladies, as well as blind pirates, get about with the aid of a stick.
Miss Candleshoe taps her way forward with a very reasonable caution, holding up a lantern in her free hand. Behind her come Mrs Feather and Mr Armigel, amiably conversing. It is apparent that the chatelaine of Candleshoe is courteously affording her guest a view of the principal antiquities of the house. Grant sees that the process of secular and undisturbed decay everywhere revealed has gone to his mother's head. Candleshoe in its more than centennial trance is her own absolute discovery; destiny has led her to this spot as designedly as it ever led Aeneas to the Lavinian shore; so urgently is her cheque-book occupied in burning a hole in her handbag that Grant can almost see the incandescence in what is still the half-darkness of the gallery.
Miss Candleshoe comes to a halt, raises the lantern above her head, and nods approvingly. 'So Jay has already thought to show your grandson round. That was most sensible. He has always been a sensible lad. And I see that he is drawing attention to the Christmas box, upon which Mr Armigel has lately been informing you.'
'Now, isn't that just thrilling?' Mrs Feather advances in a condition of happy awe that makes her son grind his teeth. 'To think, Grant, that this gallery has one of the finest priest's holes in the country!'
'Is that what the Christmas box is?' Grant, as he turns to Mr Armigel, glimpses a flicker of resigned disgust on Jay's face.
'Most certainly, my dear sir – most certainly it is. There have, of course, been other stories. But, although entertaining, they must be dismissed as fanciful. A priest's hole it most assuredly is.'
Grant is conscious that at Candleshoe at the moment there are matters of more urgent consideration than the probable purpose of Gerard Christmas' obscure fabrication. Nevertheless Mr Armigel's proposition raises a problem of historical scholarship which a university student ought not to let pass. 'Do you mean', he asks, 'that at one time the Candleshoes were Catholics?'
'Catholics?' Mr Armigel is momentarily perplexed. 'Ah – _Roman_ Catholics. But most assuredly not. The family, I am glad to say, has never since the Reformation felt any attraction to the errors of Rome.'
'In that case would they want a priest's hole?'
But at this Miss Candleshoe herself chimes in with some spirit. 'And pray, sir, why should they not want a priest's hole? It would appear to me to be a most reasonable form of accommodation in any gentleman's mansion. Indeed, I can recall our late Vicar remarking to my brother Sir James that, in the vicarage, such an apartment would be invaluable to him.'
'Precisely so.' Mr Armigel takes off his glasses and placidly polishes them. 'I am disposed to believe, moreover, that Robert Candleshoe, in adding this amenity to his new residence, was actuated by a humanitarian feeling all too rare at that time. In a high-spirited household, we must recall, the life of a domestic chaplain was at times subject to extraordinary casualties. Particularly on days when there was no hunting.'
Grant is bewildered. 'You mean they hunted the chaplain?'
'Exactly. It was harmless, of course, but harassing. Now William Shakespeare – you know William Shakespeare?'
'Sure.'
'I am very glad to hear it. He appears to me the very greatest writer of the late age. Well, in his tragedy of Hamlet, Shakespeare has a reference, I believe, to this simple old English sport. The young hero, about to elude his wicked uncle's guards, cries out "Hide fox and all after". The allusion is undoubtedly to the robust old sport of Hunt the Chaplain. But Robert Candleshoe, not wishing future chaplains here to be subjected to this good humoured but exhausting exigency, caused Christmas to build by way of an earth, you might say – the concealed chamber which is the subject of our present discussion.'
Grant, as he listens to this, catches another glimpse of Jay. The boy is immobile, and in an attitude of strained listening. And Grant sees that it is time he himself weighed in. In point of imagining things it is these two ancient persons who really make all the going; and it is Jay who is in contact with hard, if enigmatical, fact. Grant decides that the situation decidedly requires opening up. 'But there are stories', he asks Mr Armigel, 'that the Christmas box was used for concealing valuable property?'
'Certainly. But I much doubt whether there could ever have been any foundation for rumours of that kind.'
'Still, the place could have been used for that – and could still be used for it now?'
'My dear sir, the secret of ingress to our priest's hole has been lost, time out of mind.'
'But it could be found again?'
Mr Armigel is a shade perplexed by this insistence. 'I judge it probable that there was a mechanism of some little complexity, which by this time will assuredly have ceased to operate. To penetrate to the chamber now, a gang of stonemasons would be required. And family sentiment has been, on the whole, adverse to the idea of investigation.'
'But repairs could doubtless be effected.' Miss Candleshoe makes this point with some emphasis. 'No doubt the mechanism of which Mr Armigel speaks could be located and put in very good order. And I have no doubt that a thoroughly convenient priest's hole would result.'
'Certainly.' Mr Armigel backs up his patroness. 'And the situation being dry and airy, it could scarcely fail of being salubrious. But unfortunately we are not in a position to investigate further this evening.'
'That's just too bad.' Grant shakes his head. 'For a really burglar-proof strong-room is just what Candleshoe needs right now.'
It is Mrs Feather who sees that Grant offers this odd remark with some serious purpose. 'Candleshoe needs a strong-room! Now, just what would that mean?'
'I'll explain.' Grant turns to Miss Candleshoe. 'I don't want to alarm you, marm, more than need be. But the fact is that a gang of crooks–'
'I beg your pardon?' Miss Candleshoe is wholly at sea.
'The fact is that a band of robbers is prowling about outside this house now. I believe they are determined to break in. And as they must expect to get away with objects of very considerable value, I say it's a pity you can't tuck away whatever these may be in the Christmas box.'
'Robbers? Objects of very considerable value?' For the moment, both these conceptions appear to perplex Miss Candleshoe equally.
'I'm perfectly serious.' Grant turns to Mr Armigel. 'Jay knows about this too. And Jay, I can see, is a very sensible boy, with a strong practical turn of mind.'
'Very true.' Mr Armigel nods with vigour. 'Jay, I think I may venture to declare, has turned out a lad with both his feet planted firmly on earth. But surely, my dear sir–'
'Well, Jay has taken some useful measures about this threat, but it remains a very urgent one.'
'The men-servants must be armed.' Miss Candleshoe, rising to the occasion, speaks with feudal resolution. 'And a mounted groom must be dispatched for the soldiery. It is at moments like these that I particularly regret the death of my dear brother Sir James. In addition to being a first-class shot he had a notable skill with mantraps. Mr Armigel, be so kind as to ring the bell.'
But this time the chaplain appears to be in no mood for empty ritual. He addresses Grant. 'When I come to think of it, I have been aware of suspicious characters about the place for some little time. Only the day before yesterday a totally strange person penetrated to the great hall on the pretext of wishing to read what he called, I think, the gas meter. It was most perplexing. Of course I called in Jay, who at once persuaded the fellow to leave. But how these marauders could – um – come to suppose that we cherish at Candleshoe any objects of large pecuniary value is wholly baffling to me. We still own, it is true, a little family plate. But the res angusta domi must be only too evident among us.'
'Then there is nothing of really great value?' Grant is briskly challenging.
Mr Armigel removes his spectacles for the purpose of giving a brisk rub to his nose; and when he answers, it is with a question of his own. 'Might these villains be thinking of the Christmas box? Might they have heard the legends of treasure, and so forth?'
'I suppose they might. Jay here – who has thought this out in a very cool, clear-headed way–'
Mr Armigel manages to return the spectacles to his nose without interrupting a vigorous nod. 'I have no doubt that Jay takes a sound practical view of the matter.'
'Jay is inclined to suppose that it is the Christmas box they are after. But, if there is anything else, I think we ought to have – well, complete frankness, Mr Armigel. If there is something else that needs hiding away, let us get on with the job while we can.'
'A most prudent suggestion. But, if I may say so, all the Candleshoe skeletons are securely in their cupboards already.' Mr Armigel allows himself a pardonable chuckle at this mild witticism, and in this Miss Candleshoe herself somewhat unexpectedly joins. 'Do I understand you to suppose that these villains are actually on the point of endeavouring to break in?'
'Yes, sir. I've seen two of them in the gardens myself. And, what's more, they've wrecked our automobile.'
'Wrecked our automobile?' Mrs Feather looks incredulously at her son.
'Yes, momma. The automobile won't stir again without a new magneto. These people are just taking no chances, and they have Candleshoe very nicely isolated for the night.'
'I just can't believe it.' Most unwontedly, Mrs Feather for a moment allows mere bewilderment to overwhelm her. 'A sweet, peaceful spot like this! Why, out in the garden, in that romantic moonlight, I was feeling–'
'Out in the garden?' Jay, who has been silent since his elders entered the gallery, snaps out this question. Everybody is startled. He takes a stride forward. 'You have been outside since I saw you last?'
'Certainly I have – right now. When Mr Armigel was showing me the library I remembered there was moonlight, and I wanted badly to see the effect of it on this beautiful building. So I just slipped out–'
'You mean you unbolted a shutter and went out on the terrace?'
'Yes, Jay. Mr Armigel was kind enough to help me to–'
'Fools!' The words leap from the lips of a Jay blazing with anger. 'When you came back – did it occur to you to bolt it again?'
'I don't think–'
But Mrs Feather's reply is lost in a commotion that tells its own story. From somewhere far below them comes a crash and a shout. There is a moment's silence and then a second crash, a hubbub of children's voices, and what can only be a scream of pain. Jay is off down the gallery like a flash. He shouts over his shoulder to Grant, and Grant follows. As they reach the head of the staircase by which they had climbed to the Long Gallery the turmoil below becomes momentarily louder. Then it is swamped by the clangour of that alarm bell which Grant has already heard once tonight. This time he can feel the floor tremble beneath him as it peals.
# 14
Jay's mode of getting downstairs in a hurry is simple. He leaps clean from one turn to the next, and as he does so contrives a right-angled twist in air; as he lands he is thus in position for his next leap. Miraculously, his lantern remains alight, but the trajectory thus enforced upon it nullifies its function as an illuminant. To Grant the treads beneath his feet become no more than a meaningless dance of shadows; he judges it best simply to jump when Jay jumps; and this he manages successfully enough till the last flight of all, when he stumbles and arrives in the lobby head over heels and with the breath knocked out of him. Jay goes straight on unheeding. For a second or two Grant gropes about in darkness, and then manages to stagger into the hall.
Here the lamps are still burning, and he sees at the farther end the uncleared table, with half-a-dozen baked apples still on their dish. He puts on a sprint to overtake Jay, but his eye travels round the place once more as he runs, and it comes to him powerfully that this whole adventure is nonsense. The solid and unremarkable Jacobean furniture; the moth-eaten trophies of the chase that jostle with the darkened and indecipherable canvases on the walls; the cupboards and chests overflowing with the rubbish of centuries; the armour, now lying about in disregarded heaps, which innocently pretentious Candleshoes must have bought cheap on the antique market centuries ago: there is surely absolutely nothing in all this that could excite the cupidity of a sneak-thief, let alone an unascertained number of formidable ruffians.
Grant swears, and quickens his pace further. From in front of him he hears sounds that make him wish Candleshoe and all its mouldering junk at the bottom of the sea. Whether or not the presence of these criminals is senseless, they are indisputably within the walls of Candleshoe – and, equally indisputably, Jay's forces are waging a pitched battle with them at this moment. Some of the children – Grant has marked – are far younger even than Jay. This is shocking in itself – but even more alarming is the fact that the elder children, at least, have weapons which, if they can be brought favourably into play, are likely to be quite as accurate as any gangster's gun. Such power, if used, invites reprisals by unscrupulous men secure of themselves in this empty countryside.
Grant jumps to the dais, vaults the table, and is out of the hall. On his right, he remembers, is Miss Candleshoe's drawing-room. But the uproar comes from the end of a corridor on his left, and he realizes that here the house ramifies beyond its original plan. Some flicker of prosperity after Robert Candleshoe's time must have enabled one of his descendants to add thus to the consequence of the mansion. Grant races forward, turns a corner, tumbles through an open door, and is straightway in the midst of pandemonium.
It is certainly a library, and a surprisingly extensive one; there is quite enough light to see that. The light however has a flickering or flame-like quality most appropriately suggestive of an infernal region; there would be nothing out of the way in the momentary appearance amid it of a pitchfork, a cloven hoof, or a forked tail; and this expectation is powerfully reinforced by the yelling and screaming which fill the dust-laden air. For a moment the confusion seems hopeless. Grant takes a grip on himself and realizes that he must master its elements one at a time. Then he can act, if any effective action is possible.
The light comes from nearly a dozen small electric torches, jettisoned by the children in the course of the current mêlée, and now being rolled and kicked about the floor as an unnoticed by-product of the same titanic struggle. They add to the insecurity of anyone who manages to get momentarily to his feet; and so too does the circumstance that the terrain is littered with books that have been swept from their shelves in some earlier phase of the conflict.
In the wall facing Grant are three tall windows. Those on either flank are shuttered and bolted; the shutters of the middle aperture are drawn back, and what appears to be a French window beyond them has been wrenched open; it is here, Grant realizes, that his mother has so fatally tampered with the efficiency of Jay's defences. But the force which has entered as yet through this breach seems to be restricted to a single individual now momentarily submerged beneath an ominously heaving heap of children. Grant sees that it would be a good idea to get the fatal shutters firmly closed again. He has little doubt that the enemy can muster a considerably larger power than this.
And then Grant sees that what is going on is in fact a struggle for the open window. The library has a system of bays constituting a bottle-neck which the struggling children are trying to force; their enemy is endeavouring to hold them at bay in order to cover the species of beach head behind him. But why has he not been supported through this established breach already? As Grant asks himself this he sees a wavering light in the outer darkness framed by the open window – and a moment later he hears, from somewhere close above his head, a twang already familiar to him at Candleshoe, and this is immediately followed by a warning shout from outside. Grant turns and glances upwards. Uncertainly behind a cloud of dust which is almost as thick as a curtain he can just distinguish the boy Robin, perched on the cornice of a massive bookcase, steadying himself against some bust of the classical variety conventionally proper in such places, and with his bow still quivering in his outstretched hand. While the main body of Jay's supporters has been fighting its way towards the window, Robin has been covering it from this point of vantage. Grant, remembering that the invasion of the library has been a wholly unexpected turn in the siege, has to admit that the deployment of the defenders has been a triumph of general preparedness.
'Last arrow fired. Two men coming.' It is Robin's voice from above; he speaks loudly and rapidly, but nevertheless with the impassivity of a player making some necessary announcement in the course of a game. Grant sees that it is the moment of crisis, squares his shoulders, and prepares to charge. There is just a chance that he can burst through the scrum and make fast those shutters in time.
'Stop!' It is Jay who is beside him. The boy has tugged a tall library ladder from the wall, and now thrusts it into Grant's hands. 'Hold on till I say "Shove". And then send it straight over them.'
'Sure.' Grant feels that he can be as reliable a lieutenant as Robin at a pinch. He holds the ladder pointing at the ceiling; Jay swarms up it like a circus child doing some perfectly familiar turn, and at a word Grant gives a shove; with gathering velocity Jay describes a curve in air, and lands like a cat by the window while the ladder comes down with a nasty thud on the backs of the milling supporters now behind him. As a commander Jay has his decidedly ruthless moments. But he has slammed the shutters to and bolted them just as a heavy body crashes against them from the outside.
It looks like victory. Somebody, seeing this, gives a shout of triumph. The effect is unfortunate; it distracts the children and fires their isolated and virtually captive enemy to a last effort. The man staggers to his knees and then to his feet. He kicks out viciously and then, still clutched by tenacious hands, hurls his full weight against the nearest library bay. From the lower shelves of this the books have already been swept, and it is top-heavy; it tilts and a further half-dozen tiers of massive volumes come showering to the floor; it tilts further and falls with a crash, the force of which is fortunately in part taken by the mass of material it has just discharged. Dust for a full half-minute reduces visibility to nil, and nobody in the library has power to do anything but choke and gasp.
As the air begins to clear it becomes evident that the situation has sharply deteriorated. The invader is behind the barrier of the fallen bookcase. In one hand he has a torch with which he is exploring the disposition of the defenders. In the other hand he has a revolver. Grant looks hard at this and cannot persuade himself that the thing is a toy or a fake. Even so, it may not be loaded. And, even if indeed loaded, the probability is that the fellow has very little disposition to murder. He may be prepared to use the weapon if it is a question of avoiding capture; he may be unprepared to use it in the face of mere passive resistance and an injunction to clear out. Grant tries to apply these considerations to the single problem before him: the safety of this crowd of excited children. And for the same purpose he tries to size up the man. He has not the appearance of a successful gangster. Even before this rough house began he must have cut a shabby figure, and now he looks as if he had been tipped out of an ash-cart. Partly because he has had most of the breath knocked out of him, and partly – Grant guesses – because he is scared stiff, the gun and the torch both tremble in his hands. But, if the gun is really loaded, there is very little comfort in the supposition that he may be terrified and barely in control of himself.
The children stand immobile, fascinated. The man's glance travels over them and pauses on Jay. He licks his lips and speaks in a voice that betrays the same tremor as his hands. 'Open those shutters.' Nobody stirs. He swings his gun round until it is levelled at the boy. 'Open them – quick!'
'You are our prisoner. Put that thing down.' Jay speaks and makes no move.
'Open those shutters, or I fire.'
'Put it down, or I shall come for it.'
The air is clearer now. Grant can see enough of the man's face to dislike it. He dislikes a twitch at the mouth. He decides that there is just enough take-off to give him an outside chance of clearing the bookcase at a straight-on jump. He decides too that the thing must be fought out at whatever risk to the defenders – this simply because Jay has no thought of anything else.
'I'm coming now.' Jay looks straight at the man and walks deliberately forward. From his standing start Grant hurls himself into the three paces he can afford before taking off. At the same moment some sort of thunderbolt crashes down on the man with the gun and sends him sprawling. It is Robin who has launched himself from his bookcase. He has been the forgotten factor in the affair.
Jay is sitting on the floor, and Grant guesses that he is wondering if he is going to be ingloriously sick. Courage must sometimes be paid for in humiliating ways. Robin, who ought to have broken both his legs, is only badly winded; even so, he manages to gasp out orders that have the effect of covering his leader's temporary withdrawal from the direction of affairs. Robin must be one of the supreme lieutenants of all time. And both these boys must own a demon. Nothing else can account for the morale of the small and absurdly juvenile force at their disposal. The children are now dispersing to their former action stations with the phlegm of a crack division deploying under fire.
Grant takes a look at the vanquished enemy. He lies motionless on his back – horrifyingly helpless and deflated and dirty. His complexion is as grey as the dust that coats it and there is blood coming from his nose and mouth. Jay gets rather shakily to his feet and joins Grant. 'Do you think', he asks carefully, 'that this man is dead?'
'He's some way from that. But he'll be senseless for quite some time. And something rather nasty may have happened to his skull.'
'Ought we to shove him out, so that they can get him to a doctor? There's Robin's father.'
'I don't know that they would think their casualty all that important, Jay. And the right person to get to Robin's father is Robin.'
Jay swings round. 'Robin isn't hurt?'
'Strangely enough he doesn't seem to be. But I don't mean that. I mean that somebody – somebody who knows the ground, I'm afraid – must get out of this and through to your nearest village. If that's where Robin lives, let Robin make for home, and get his father to call out all the police he can. You see, this must stop.'
'I don't know that I do see.'
'Be honest with yourself, Jay, and you will. This is a siege. It's our business to hold out. But we must also plan to be relieved as soon as may be. That's just plain sense.'
Jay nods. His decisions are always rapid. 'Very well. Robin, will you go?'
'Of course I'll go if you ask me to.' Robin, who has got his wind back, is as matter-of-fact as ever. 'It's just a matter of getting clear.'
'I think I can fix that. It's a diversion that's required, and that will be my part of the affair.' As he speaks, Grant stoops and picks up the unconscious man's revolver. He knows in an instant that it is unloaded. The crooks, as he had half-guessed, have not trusted so jittery a member of their body with the live thing. He slips the weapon casually into his pocket. 'Yes – I can do quite a lot in the way of a diversion, I reckon. Particularly now that I've gotten a gun.'
# 15
Lord Arthur Spendlove, although a well-built man in good condition, had some difficulty in heaving that distinguished Roman connoisseur, Dr Rosenwald, into the ancient, powerful, and capacious car that he had chosen for the purpose of the expedition to Candleshoe. Even when this had been managed there was some further delay, since Mr Archdeacon had vanished in search of what he called – somewhat enigmatically – the relevant documents. Lord Scattergood, still not altogether convinced that firearms might not come in handy if a working hypothesis was sighted, muttered gloomily that his somnolent guest stank like a taproom. Brown, who alone bore the responsibility of seeing the party off, seemed to be of a similar mind, since every now and then he took a short walk into middle distance as if in quest of purer air. When Mr Archdeacon at length appeared, Brown bade him an affectionate farewell and at once withdrew into the house. It would have been possible to feel that, in Brown's view, a certain lack of aristocratic poise marked this nocturnal hue and cry after a couple of missing canvasses.
The moon, now riding high in a clear sky, gave to Benison itself and all its policies something the air of a vast canvas, cycloramically disposed. The main façade, with its long march of Ionic columns diminishing in either direction into distance and its broad flights of shallow steps descending from terrace to terrace amid an ordered profusion of sentries – Amazonian for the most part – in marble and bronze, seemed at once as insubstantial and as prodigal as an illusion conjured up out of paint-pots of the largest size. When Arthur let in his clutch and the car moved forward, it might have been this whole inordinate ostentation that was trundling by on rollers, after the fashion of that gorgeous species of visual entertainment which has been so unhappily superseded by the cinema. The West Pavilion, the Orangery, the Water Steps, the Temple of Ancient Virtue, the Neptune Fountain: all these flowed successively past – and each with the air of claiming that burst of applause reserved by the informed audience for some undoubted chef d'œuvre of the scene painter's art. And on all this theatrical traffic the moon, like limelight expertly manipulated from somewhere up in the gallery, shed a soft radiance exactly tinted to make the very most of the bravura nature of the spectacle.
The car had gained the park, and was running past the sixth marquess' improved milking-parlour in the Chinese taste, before anybody spoke. Then Arthur addressed Mr Archdeacon, who was sitting beside him. 'I think you would say we want to mind what we're about?'
'Most decidedly. You will recall that almost my first observation was to the effect that this matter stands in a posture of some delicacy. In this, reflection now confirms me.'
'If I remember anything of old Miss Candleshoe, she won't stand for very much.'
'Precisely. Indeed, my dear Lord Arthur, your remark can be described only as a meiosis. Miss Candleshoe is unlikely to stand for anything at all. Caution will be necessary in addressing her. I am disposed to wonder, however, whether we shall in fact be the first persons to approach her on the subject of the missing Titians.'
'What's that?' Lord Scattergood, who had resigned himself to making this perplexing journey by the side of the slumbering Rosenwald, thrust forward a head which – perhaps with some dim memory of what is appropriate in a person engaged upon detective investigation – he had encased in an ancient deerstalker hat. 'What is that, Archdeacon, about other people being after the Titians?'
'I have been visited by a disturbing memory. Or rather' – and Mr Archdeacon produced simultaneously his pipe and his best metaphysical manner – 'I have been visited by a memory, trivial in itself, which our present exigency renders susceptible of a disturbing, if not indeed of a positively sinister, interpretation. Do I make myself clear?'
'Something fishy – eh?' Long practice had enabled Lord Scattergood to keep up wonderfully with his learned librarian.
'Exactly. As you know, I am one of those who take parties round Benison on Wednesday and Saturday afternoons.'
'And very nice of you too, my dear fellow.' Lord Scattergood, although himself at present engaged in this monotonous occupation every day of the week, was clearly conscience-stricken that the family oracle should have to retrench his meditations in the same interest.
'Not at all. There is much food for thought in both the bearing and the conversation of our visitors. Volumes could be written upon them.'
'That's very true.' Lord Scattergood cheered up on being presented with this elevated view of the matter. 'And I hope you'll bring out something of the sort with a good publisher. Sermons in stones, and all that – what?'
'Approximately that, no doubt.' Mr Archdeacon, who had the kindliest feeling for the simplicity of his employer, benevolently puffed tobacco at him as if intent upon fumigating the deerstalker. 'But what I speak of is an incident perhaps three Wednesdays back. I had a small party, nearly every individual in which might without disparagement be described as a familiar type. One must remark, by the way, that the relationship between type and individual opens up a wide field of philosophic reference.'
'Yes, indeed.' It was Arthur who hastily interposed. 'But something happened, all the same?'
'In a modified sense of the term – yes.' This time Mr Archdeacon puffed at Arthur. 'There was an episode or incident. Or perhaps it would be more precise to say an occurrence.'
'I see.' Arthur found himself thrusting rather recklessly at the accelerator. 'But go on.'
'There were three men who – quite unwittingly – distinguished themselves from the group. Each preserved to the others the bearing of a stranger. Yet it was apparent to me that, in fact, some relationship existed between them. We come here upon the whole absorbing subject of intuitive perceptions. I myself incline to the interpretation of such phenomena in terms of simple hyperaesthesia.'
'Being on the qui vive – eh?' Once more Lord Scattergood had hit the nail remarkably straight on the head. 'And what did these fellows do then?'
'When we got to the octagon room, two of them simply fell to staring out of the window. This in itself, of course, was a circumstance by no means untoward. Many of our visitors are chiefly struck by the fact that the outer frames of the windows are protected by gold leaf and not by paint. Others give their whole attention to scanning the gardens – presumably in order to determine if they are worth a further half-crown. But now comes the rub, my dear Marquess. The third man minutely scrutinized the Titians.'
'That's common enough, too.' Lord Scattergood had his own powers of observation and inference. 'There are people, you know, who understand no more about painting than I do, who believe that the impressive thing before a picture is to rub their noses on it, or peer in a ferocious way into the top left-hand corner. Just self-consciousness, I'd say, in simple folk feeling rather small in a big place. No vice in it – no vice in it at all. Prefer them to Rosenwald here, any day.'
Arthur Spendlove swung the car out into the high road. 'Do you mean that this chap had a go at the Titians in a professional-looking way? Not like the simple folk trying to impress themselves, but like our friend in the back seat making what he calls an expertise?'
'You describe it very well.' Mr Archdeacon could be dimly descried as nodding, Jove-like, within his cloud. 'But it is in the sequel that the chief significance of the incident resides. Having completed his inspection, this person made his way unobtrusively first to the one and then to the other of the remaining two men. To each he rapidly muttered something – and while still endeavouring to sustain the appearance of being a stranger. This interested me very much. I have remarked a similar convention of conduct in cinematographic entertainments dealing with low life and criminal practice.'
'Gangsters?' Lord Scattergood was much struck by this. 'Do you think these rascals came sneaking back, and somehow managed to steal the pictures? If so, aren't we on a fool's errand now? And in danger of being needlessly offensive to this poor old soul at Candleshoe?'
'I think not.' For a few moments Mr Archdeacon, who possessed a nice sense of climax, brooded in silence. 'For account must be taken of the reaction of the first man to his study of the paintings, and of the other men to the intelligence then covertly communicated to them. It was one of consternation.'
'Bless my soul! You think they had tumbled to what this Rosenwald person discovered tonight?'
'I judge that there can be no doubt of it. The man who scrutinized the supposed Titians was sufficiently expert in these matters to know that they were not what they were held out to be. I should add that he presently made to approach me.'
'What's that?' Arthur was really startled. 'He was going to tackle you about it?'
'The matter bore that appearance – or rather, I ought to say, does now so bear it. I simply had an impression – no more than a fleeting impression, upon which my mind did not again dwell until the revelation of this evening – that this man was minded to address himself to me; and that one of his companions – his clandestine companions, be it remembered – restrained him. I now pass to the succeeding Saturday.'
'You what?' Lord Scattergood had become a little dazed in the effort to follow all this.
'I have now to record a further incident, at the time apparently unrelated to the first, which an irresistibly logical compulsion now, however, obliges us to concatenate with it. As so often in the history of ratiocinative processes, the apparently casual reveals itself as being, in fact, within the sphere of the causal. This is something which you must frequently have remarked.'
Again Arthur made the car leap forward. 'Just what happened on the Saturday?'
'I can be very brief.' Mr Archdeacon paused – a sure sign that he was winding himself up for one of his most sustained rhetorical flights. Arthur again punched at the accelerator – and at this, whether by casual or causal impulsion, the oracle really did deliver himself with some conciseness. 'A lady of unexceptionable conversation and address spent some time with me after my party had dispersed. I found her – um – singularly charming. She was most interested in what we had done with our more valuable things during the war. She asked me about the Old Masters in particular – remarking that her brother had stored a collection of some importance in a salt-mine in Wales. She had never heard of Candleshoe, but when I explained that our paintings had gone there, she appeared to be uncommonly curious about it, and asked a number of questions. You will agree that all this must now appear significant.'
'Uncommonly.' Lord Scattergood put unflawed intellectual conviction into this reply. 'But of what? Have you any ideas there, my dear fellow?'
'The indications all point to attempted theft. The three men were making a preliminary survey of the ground, preparatory to stealing the Benison Titians. And they were no common thieves, since they included among their number an expert capable of detecting what our pilgrim from Rome has detected. Moreover they were in a position to command the services of a woman of genteel bearing, who elicited from me that the genuine paintings had for some years been at Candleshoe Manor. To Candleshoe Manor we are now ourselves proceeding. In the popular old phrase, the plot thickens.'
Lord Scattergood considered this for some time. 'You think, Archdeacon, that these people may themselves have gone to Candleshoe and got a further line on the affair? They may have discovered who is likely to have abstracted our Titians from under the old lady's nose and left those shocking fakes instead?'
'I have no doubt whatever that they carried their inquiries to Candleshoe.'
Having said so much, the Marquess of Scattergood's sage withdrew upon that obscurity within which he delighted to enshroud himself. The tobacco-smoke became so thick in the car that Dr Rosenwald woke up coughing, offered some observations in a German surprisingly unrefined, and went to sleep again. There was a silence which was presently broken by Arthur, who addressed his father. 'What Archdeacon has in mind is this: that having learnt about Candleshoe and taken themselves off there, these rascals might find it unnecessary to take themselves further.'
'My dear boy, I don't at all understand you. If they were hot on the scent of our pictures–'
'The point is that the scent may have ended at Candleshoe.'
'Ended at Candleshoe?' The car travelled about a quarter of a mile while Lord Scattergood addressed his labouring mind to the implication of this. 'You can't mean–'
'It seems to be what Archdeacon means. I'm bound to say it wouldn't be the first thing to come into my own head.'
Abruptly Lord Scattergood lowered a couple of windows, and his librarian once more became visible. 'Archdeacon, my dear fellow, you can't mean this scandalous thing?' Lord Scattergood was much shocked. 'You don't suggest that this crazy but respectable old person – a kinswoman of mine, after a fashion, mark you – has stolen my Titians?'
For a moment Mr Archdeacon seemed unwilling to vouchsafe any reply to this plea; he puffed so hard that the air-stream now flowing through the car became a ribbon of smoke. Then, very slowly, he fetched from a capacious pocket a small portfolio of dark leather, secured with green tape. 'I have here', he presently pronounced, 'the relevant documentation of the affair.'
'You mean that you've been having inquiries made – that sort of thing?' Lord Scattergood was aghast.
'My inquiries, my dear Marquess, have been confined within the limits of the seventeenth and eighteenth centuries. Lord Arthur, pray stop the conveyance.'
'Draw up?' Arthur took his foot from the accelerator and looked in some surprise at his father's librarian.
'Precisely. Before we reach Candleshoe, it is highly desirable that you should be apprised – or is it reminded? – of certain historical circumstances connected with the family. Their almost alarmingly apposite nature has only come to me, I confess, as a result of the revelations of the present evening. Had they been within my knowledge on an anterior occasion, I might well have hesitated to embark with Candleshoe on the relations that I did. Regrets, however, are vain. I now propose to read to you – Marquess, will you switch on that light? – from the private diaries of William Spendlove, the first earl. I make bold to say that an acquaintance with what he has to say will be of some guidance to you later tonight. Shall I begin?'
'Certainly, my dear Archdeacon. We are entirely in your hands. It's an odd time for family history, I'm bound to say. But quiet.' Lord Scattergood made the best of the matter. 'In fact, an uncommonly peaceful scene.'
This was incontestable. Arthur had drawn up on a stretch of grass by the roadside, and it was possible partly to see and partly to sense around them an empty countryside, slumbering snugly beneath its tidy coverlet of field and copse. It was very still when the purr of the engine faded. Arthur lowered the window beside him, for the car was still heavy with Mr Archdeacon's tobacco, faintly blended with a residual tap-room smell from Dr Rosenwald. Then he paused, arrested. 'Odd,' he said. 'Can you hear a bell?'
'A bell, Arthur?' Lord Scattergood shook his handsome head beneath its deerstalker. 'I can't say that I do. And who would want to ring a bell at an hour like this? Could it be people ringing one of those tiresome marathon peals at Abbot's Benison?'
Arthur leant out of the car. 'It's gone. Perhaps I imagined it. It wasn't like bell-ringing of that sort – more like an alarm bell.'
'Indubitably an auditory hallucination.' Mr Archdeacon, with his manuscript open before him, was impatient of this distraction. 'The delusive impression of hearing a bell-like note may be traced to acoustic laws which themselves depend upon the simple fact that the ear is a cartilaginous funnel. But of this I may speak on another occasion. Let me repeat: shall I begin?'
'By all means, Archdeacon. We are all attention. Arthur and myself, that is to say. Perhaps I had better wake up Rosenwald?'
'It is unnecessary – and might even be indiscreet. What I am to read is a good deal concerned with paintings, including what is incontestably the Leda of Titian. But if our Roman friend is to help us, it may be judicious to let him sleep as long as may be.'
'Clear-headed when he wakes up – eh?'
'Precisely. And – so far as the Leda is concerned, the better able to distinguish between a goose and a swan.' Mr Archdeacon paused to allow time for any merriment that this sally might provoke, and then cleared his throat. 'I proceed, then, to the diary of William Spendlove, first Earl of Scattergood, for the year 1720.'
# 16
1720. 1 Aug. This Day my Son Rupert, together with his late Companion de Voyage and former Fellow at Westminster School, Jack Candleshoe, is safe returned from his Travels. Mr Drake, the Boys' worthy Tutor, declares them to be now perfect in Latin Verses (each indeed what Horace desired to be called, Romanae fidicen Lyrae) as well as largely exercised in the Mathematics and the several Branches of Natural Philosophy. That so much regular Instruction can have been combined with the peripatetical Part of their Education is a Thing to marvel at – if not to take, as my good Neighbour and Cousin Thomas Candleshoe doth aver, cum grano salis. The Conversation of both Lads at Dinner (to which I bade Squire Candleshoe and his Wife) was indeed edifying and learned to a gratifying and surprising Degree, with much Talk of large Collections – as alike of Books, Minerals, Plants, Antiquities both Classical and Gothick etc. – presently to follow them back from Italy. To all this deep Commerce with the Muses it is my own Hope that a decent Acquaintance with the Graces has been added; and that Rupert, whose Talk appeared to me to smell somewhat musty and of the Lamp, has equally improved himself in le Ton de la bonne Compagnie, and gained from his Wanderings those polished Manners and that certaine Tournure so necessary in the Station that he must (in the Fullness of Time) be called upon to fill and, if possible, adorn. I would have my dear Boy, above all Things, confirmed in the exactest moral Principles of a rationally believing Christian, as also of solid Knowledge and correct Judgement as to his intellectual Parts. It would be sadly vexatious however were he to turn out mere Parson or Pedant, fitter for the Laurels of a College and Plaudits of a School than for the just Esteem of the Court and Senate of his Country. Let honest Candleshoe's Boy aspire to slumber in some Brasenose Garret or Christ Church Stall.
Rupert must consider that from simple Earl (a middle Station accordant with my own retired and unostentatious Temper) he is like to be Duke or Marquess; and learn to combine polite Learning and a fixed and unenthusiastick Piety with due Attention to Circumstance, Dignity, and the Bearing proper in the magnificent Man. (Aristot.)
1720. 8 Aug. Rupert continues in his learned Humour. At Dinner tonight somewhat tediously instructive on Optics and Astronomy. It now appears that at Padua he and Jack Candleshoe gave 80 l between them for a large optic Tube or Telescope, which is like to be delivered presently upon us. Talk of its being set up on the Leads etc. I fear we may be put under some Imputation of Singularity at Benison should this Extravagance of the Boys' go forward. They envisage, it appears, much nocturnal Experiment, with Observation of the Movement of Bodies etc.
1720. 12 Sept. In Town still. Supped with Richard Boyle, Son of my ever-lamented Friend, and was afterwards engaged in the Tour of Burlington House, now re-edified, enlarged, and adorned with great Curiosity and Taste by him. It is nothing so fine as Benison, but for a Town House well enough. His Ldship extremely polite in Expressions of Esteem for my dear Boy's Progress in the Sciences, and Command of the Mathematicals, curious Figures, etc. He spoke of making Interest for Rupert's Entrance into the Royal Society, the principal Body in the Kingdom for Persons expert in Natural Knowledge. Although R Boyle be himself young I listened with Attention and Gratification, all of that Family being famous Virtuosi and well able to pronounce on Matters of the Sort. Returning by Coach to Scattergood House resolved to review more kindly Rupert's late Proposal for his Solomon's Cottage (as he and his Jack wd call it) beyond the Park. This Morning a Letter from Rupert respectfully urging Festination in my pronouncing upon the same, the Italian Books, Collections, Instruments, and the like being any Day expected now.
1720. 22 Sept. I find Squire Candleshoe to be not well disposed towards the Project of our two Lads for a learned Retreat of their own in the Abbot's Lodging. He thinks it were better that an Eye should be kept on them, and expresses himself as having small Confidence in the Perspicacity of Mr Drake. Since the Provision of this joint Tutor has been by a good Part more at my Charge than the Squire's, I have felt myself mightily inclined to take this ill, but have so far restrained any irascible Word in the Interest of neighbourly Feeling. I doubt whether one of inconsiderable Property and obscure Situation, such as is our good Squire, can readily be a Man of liberal Feeling or extensive Views.
1720. 23 Sept. A sultry Day, but with something of Fraicheur towards Evening. Took Advantage of this to stroll with Rupert to the Abbot's Lodging. The dear Boy again very eager for his Project, urging the Advantages of Seclusion, Retirement etc. in promoting those severer and more advanced Studies to which he and his Friend now wish to proceed. It appears that the Behaviour of Venus may in the coming Months be examined into with some special Prospect of learned Success. Rupert hath a Plan ready drawn out for the ready converting of the Lodging to this Solomon's House or Cottage. The Name whimsically derived from that curious Work New Atlantis by Sir F Bacon, later First Viscount St Albans. Amusing Particulars from Rupert of many of the first Nobles alike of France and Italy presently renowned as Curiosi and Projectors.
1720. 30 Oct. The Work on Solomon's Cottage far advanced. Sundry great Boxes and Crates, unopened here since their Arrival from Leghorn, have today gone down by Wagon to their new Abode. Rupert and his Jack in great Joy. For my own Part I take much delight in the Spectacle of two young Men, in whom their Years might licence some Wildness of Fancy if not of Behaviour, thus intent upon the Advancement of Learning. I recall with unaffected Contrition my own Youth, so sadly at Variance with this Sobriety, Usefulness and Good Sense. It is Mr Drake's Belief that a large Reform of youthful Morals and Manners hath been brought about by the Writings and Christian Example of Mr Joseph Addison, so unhappily deceased in the Course of the last Year. I have questioned Rupert on this, and find that he does modestly confess Mr Addison to be an Example indeed.
1720. 7 Nov. This Day was entertained to a Collation by my dear Son Rupert and his excellent Henchman Jack Candleshoe. The Dishes choice but very simple. We drank Water, and later an Infusion of Tea. Their Attendance is to be only one Lad, whom Rupert begged our good Mr Drake to choose upon the strictest Principles of settled Conduct and moral Probity. On the Ground Floor are convenient Offices for this Servant; a Study or Library, plainly furnished but commodious to the Labours of our young Philosophers; and two Sleeping Apartments in a simple Style, having no Ambition of Elegance. Thus it will be possible for the Boys to pernoctate from Time to Time, according to their astronomical Occasions. I highly approve these Dispositions, and Squire Candleshoe is at least so far reconciled to them as to withdraw any vocal Objections. He has offered, indeed, to install (the better to insure the Health and Comfort of our Lads) a motherly Person as Housekeeper and occasional Cook. But this the Boys have positively declined, having a set Fancy, it would appear, that Solomon's Cottage (like the Abbot's Lodging before it) should be wholly celibate and monastical in its Economy. Upon the upper Floor the Roof over one Apartment hath been removed, and the Great Telescope is (I suppose) by now there established. But of this Part of their Domain the Boys are jealous Guardians, and I have promised not to profane by any ignorant Inspection the Mysteries of their Science. It is to be hoped that Perkin (for such is the Denomination of their youthful Attendant) will occasionally be permitted to circulate in this higher Region with a Broom.
1720. 12 Dec. My dear Rupert's twentieth Birthday. Inquired of his Studies, at which, judging by his Absences, he hath been assiduous of late. He replied that he becomes steadily more absorbed in, and delighted with, the Heavenly Movements.
1721. 8 May. I write in the greatest Confusion of Mind and Agitation of Spirit. But let me compose myself, and be systematical. This a.m. early comes Squire Candleshoe in as vast Rage and Incoherence as I have ever known Man evince; and falls to imprecating Benison and all Spendloves that ever were in a Manner to be excused only as the Issue of plain Frenzy. Eventually somewhat subdued (if not mollified) by my own inflexible Exhibition of superior Breeding, he came within the Bounds of intelligible Sense. Solomon's Cottage, he declared, had its sole Affinity with Aught of that Monarch's only as a very Mark and Acme of amatorious and venereal Indulgence. Those nocturnal Experiments which our Boys had been pursuing in the Interests of Natural Knowledge –
Should these artless Pages ever, at some remote Time, come within the Notice of an other Eye than mine, will it not readily be understood that here the unhappy Writer dropped his Pen? But I continue now. The Lad Perkin, it appears, having received upon the Occasion of some casual Offence too hearty a Drubbing from his Masters, had allowed a Thirst for Vengeance and the Hope of Gain so far to suborn his Fidelity as to bring him before the Squire this very Morning with a substantial Account of a World of licentious Folly into which our miserable Boys have fallen. Of what followed let a bare Note suffice. The Squire (mounting whatever wretched Nag his Stables afforded him, and hastening to the Spot) was in Time to surprise, and drive out while yet in their Smocks, the particular Heavenly Bodies which had been engaging the absorbed and delighted Attention of our precious Couple, encouraging them, with a smart Thwack upon their ill-guarded Persons, to return to whatever Orbit is properly theirs in Abbot's Benison or Benison Parva. Having then assured the Reprobates of his unfeigned Regret that they were by some two Summers beyond the Reach of similar and more extended Castigation, the Squire rode straight on to Benison Court, his ill-directed and offensive Rage mounting in him as he approached. Our Interview, I verily believe, might well have come to the Point of Honour and an Affair of Rapiers, had I not steadfastly maintained all of the Reserve, judiciously tempered by Candour and Condescension, incumbent upon a Peer in a trying Situation.
I will not dwell further upon the Day's Scenes of Reprehension and Penitence. I have had at least the Satisfaction of knowing the sole feasible Procedure to adopt. With a Lad who has thus scandalously comported himself, and been detected, there is only one course to take. At three of the Clock this Afternoon Rupert departed with Mr Drake for the University, where he will at once be entered as a Nobleman of my own old College. I trust that his genuine Proficiency in Mathematical Study, if not that in curious Figures, will earn him the Regard of the resident Fellows, to whom I have strictly charged him to comport himself with Affability and a reasonably familiar Address. Rupert, I am willing to believe, is truly penitent; nor, after a World of Tears throughout the Morning, was I altogether displeased at his Mode of parting with me. Leaning from the Window of the Carriage, the Rogue had the Impertinence to murmur that, could but Peg (his detected Paphian Girl) make the Journey with him, it would be rounding off his astronomical Studies with the Transit of Venus.
1721. 12 May. Squire Candleshoe, it seems, has taken a Leaf out of my Book (which is indeed proper for him to do), and his Jack is packed off to Cambridge – not indeed with a Tutor of his own, but to be put under the Care of some worthy Tutor of a College. So here should be an End of this troublesome Affair. The Squire and I are to meet presently at Solomon's Cottage (to use the Name which, I fear, must ever adhere to it, at least in local Fame) and agree upon a proper Dispersal of whatever is unsuitably contained there. The Studies of these discreet (yet not sufficiently discreet) Boys have had, as it now appears, genuine Substance; and my Rupert (I suspect) was indeed something further advanced in the Courses of the veritable Planets than in his sublunary Researches. It thus seems not proper or convenient wholly to make away with what is of an authentick philosophic Cast in the Place. I apprehend little Possibility of serious Dispute with my good Neighbour, despite the high Tone of our last Encounter. Since the Abbot's Lodging is an undisputed Part of my own Estate, indeed, the Squire would be on but uncertain Ground were he minded to be disputatious.
1721. 16 May. All is in sad Confusion between Squire Candleshoe and me. I blame myself for taking up, over Matters of small Consideration in themselves, a Position from which it now appears difficult with any Dignity to recede. The Root of the Mischief lies in the good Squire's having allowed his Jack, during his Grand Tour made in Company with my Rupert and Mr Drake, a larger pecuniary Supply than was at all consonant with the Modesty of his Station. The Candleshoes have ever had a particular Pride or Arrogance in all their Associations with Spendloves – a Consequence of their being (what, as a Man of Candour, I have always been free to acknowledge) the elder Line of that Family from which our Antiquity is derived. This having led our honest Squire to dip deep into his Pocket on his itinerant Son's Behalf, Jack, it seems, hath on sundry Occasions disbursed more – as on other Occasions indubitably much less – than my own Boy upon those diverse Antiquities and Curiosities with the Acquiring of which the Lads liberally amused themselves upon their Italian Peregrination. These Objects they later disposed about their detestable Cottage in all the Carelessness of common Ownership, so that it is now scarce possible to make a just Appropriation or Assignment between them.
All this would seem, indeed, of little Moment but for a Circumstance at once flagitious and vexing. For preserving from parental Scrutiny the larger upper Apartment of their Dwelling the Boys now appear to have had this sufficient Reason, viz, that it had pleased them to appoint it with all Voluptuousness and amatory Luxury of Venetian Houses with which no modest Traveller through that carnal City would willingly hold the least Acquaintance. Once more my Pen would almost drop from my Fingers as I record this shameful fact – mitigated though it be (at least in some minor Degree) by the Observation (which my own sadly irregular Courses in early Life have enabled me to make) that Something more of Imagination than Familiarity hath gone to the framing of this most culpable Extravagance. But now to the Point at special Issue. Among the Embellishings of this rural Bagnio for simple Peg and Moll what have the Squire and I fallen upon but three large and erotical Paintings in Oil, viz, a Lollia Paulina, a Leda and Swan, and a Diana and Actaeon, all very rich in Colour and splendid in the Flesh – so much so, indeed, as to put the good Squire something out of Countenance, as having small Connoisseurship in graphick Immodesties. When however it did presently transpire that, while the Diana (although very brave with bathing Nymphs, and an Actaeon with his Stag's Head exceeding quaint) is by one Schiavone (a Venetian of small Regard), yet the Lollia and Leda are by that Tiziano Vecelli or Da Cadore now above all other Italian Artists hugely esteemed; when, I say, it appeared that our Boys had (at a great and notable Bargain) acquired two such Canvasses as the greatest Curiosi in the Kingdom might envy them the Luck of, then did Squire Candleshoe pipe to another Tune.
So here we are, my honest Neighbour and myself, at Jar together and by the Ears over these same Paintings, which is surely a bad Sequel to a worse Business. All this Afternoon did we debate the Matter, with Squire Candleshoe very hot – and outrageous, too, in Expression and Suggestion, as that he should render me the Swan but allow me never a Collop of the Wench, or that we had best cleave the Schiavone in twain, himself to have the Ladies and myself to be well-suited by the Fellow with the Horns. And thus are we parted with high Words and short Tempers; and Nothing settled betwixt us in a Matter that both Discretion and Decency cry aloud to see softly handled.
1721. 18 May. The First Part of the Contention betwixt the Two Famous Houses of Candleshoe and Benison (to parody the Title of the old Play) is now acted out and settled – and altogether to the Advantage of the younger Line. Yet while I rejoice at this Success, and at the just Castigation of the Insolence of my irascible Neighbour, I am yet apprehensive of the Scandal like to be spread (at least so far as the Boundaries of our County) by the wild Events of the last Night; and of suffering somewhat in my Dignity as First Earl by un Eclat de rire such as lively Rumours of the same may occasion.
And here I must unaffectedly confess (to the Privacy of this inviolate Page) that at my first Sight of Tiziano's Leda I did myself judge her a monstrous fine Woman (although so ill employed), and that his Lollia Paulina too appeared to me of a Rarity much excelling those Jewels with which the Painter (by a Licence to be reprobated from an Ethick Point of View) had alone adorned her. From this I was led to the Consideration that my Rupert, as a Lad from his earliest Years bred to the greatest Refinement and Politeness, as also to the Conversation of Women of the first Quality, was likelier by far than our good Squire's honest Jack to have been the original Appraiser, and consequent sole Purchaser, of these exalted Productions of a Master's Brush. When to this I added the further Thought, that the Objects of which the Property was thus disputed had been freely conveyed to, and now reposed in, a Building incontestably mine, and to which None have any Right or Title of Entrance save by my Leave, it did appear to me that the old Doctrine, whereby Possession is declared to be nine-tenths of the Law, might well be acted upon, and that with all Speed possible.
Thus it was that Yesterday (but biding my Time until Dusk, that the Matter might be carried through in a decent Privateness) I despatched Three or Four (such as were of a Discretion I could trust) with a Wagon to Solomon's Cottage, that they might bring these three Paintings (alike the Tizianos and the Schiavone, being itself, although less valuable, a sportive Work d'un vrai Divertissement) with all Haste and Quietness to Benison. For I made bold to think that, once hung in my Great Gallery or octagon room, there would be but small Hazard that those Trophies of my dear Boy's Purity of Taste (as I now saw it to be) would ever depart thence to an obscure Lodgement at Candleshoe.
But here was a Train of Reflection upon which Two might fall. And thus came it about that my Servants, proceeding at my Command upon this lawful Occasion, were at the Cottage hotly encountered by a like Number of Squire Candleshoe's Men, most nefariously and thievishly furnished with a Wagon to an identical Purpose with mine. Thence followed a notable Skirmish, with diverse bloody Cockscombs and (I fear) at least one broken Crown – the Hurly-burly receiving vast Increase (ere all was over) from several Reinforcements on either Side gathered from the nearer Villages. Yet out of all this great Disgrace (as Persons of any civil Breeding must unfeignedly conceive it) has come this fair Conclusion: that while the worthy Squire indeed has escaped away with the Schiavone and certain other Paintings of little or no Name, my dear Rupert's more truly inspired Purchases (the Leda and the Lollia of Tiziano, to wit) are now safe within the Portals of Benison. This Morning, it is true, being something too incensed by the Temerity of my Neighbour, I took Horse for Candleshoe, intent to demand that the third Painting be lawfully rendered me. But finding the Squire (who had been advised of my Approach) standing to Arms upon his Threshold, with a most martial Levy of his Retainers arrayed behind him, and himself crying out that did I want the horned Fellow a-peeping at the Ladies, I might come in and find him, I judged it well to speak him fair, and as if turning the Matter to a notable Jest. For it would be an inconvenient Thing, if Cognizance of all this were taken by some officious Justice; and it could be said that a Nobleman and a Gentleman of the County had been summoned to keep the Peace together, after coming to Blows over the Spoils of a detected Bawdyhouse.
1721. 30 May. Caused to be hung in my Great Gallery my two new-acquired Paintings by Tiziano Vecelli (formerly styled Da Cadore), being that Artist commonly reckoned by the Curious first among all those out of Italy. Both mighty fine, and in the Leda I do begin to perceive a great Sweetness in the Expression, as also une certaine Rondeur des Fesses, that alike do most charmingly remind me of my Ever-to-be-Honoured Mother's Maid, Betty Brown, the first that I well remember to have –
But here Company obliges me to lay down my Pen. Une belle Assemblée is like to be with us this Evening, and I propose no small Pleasure to myself in my two new Evidences of that Correct Taste and those Liberal Expenditures which steadily enhance the Elegance of Benison Court.
# 17
'If that isn't a deuced queer thing.' As Mr Archdeacon finished his reading, and as Arthur Spendlove let in his clutch with what in a less seasoned car would have been a jerk, Lord Scattergood spoke with some emphasis. 'I'm quite sure I never heard this yarn before.'
'Precisely.' Mr Archdeacon paused to get his pipe in commission. 'It is only lately, as you know, that I have come to work through the first earl's papers. Had these circumstances been known to me in 1939–'
'You wouldn't have handed back this disputed property to the Candleshoes on a plate.' Arthur chuckled. 'What mugs they must have thought us. Supposing, that is, they knew.'
'They must have known, Arthur, or they would scarcely have played this trick on us.' Lord Scattergood, conscious in this of a large stroke of intellectual clarity, banged so vigorously on the seat in front of him that Dr Rosenwald for a brief moment came awake again. 'But, my dear Archdeacon, did the old girl behave as if she knew? Can you recall anything to suggest that those particular paintings meant something to her?'
'I cannot say that I do. Miss Candleshoe was concerned to drive a bargain in point of what she was to store for us; and she was not without acuteness in the matter of insurance. But in the particular works involved, and their qualities, she appeared not interested. It was, I fear, her attitude that nothing from Benison was likely to be of the first quality. All this, however, may well have been dissimulation. Miss Candleshoe may well have been gloating inwardly at the strange opportunity our proposal was bringing her.'
'Gloating?' A just indignation was evidently beginning to rise in Lord Scattergood. 'I'd call that a shocking thing, you know – a most unneighbourly thing.'
'I agree with you, Marquess. We must remember, however, that Squire Candleshoe must have formed just such a judgement upon the Earl's conduct in 1721. It is only too probable, I fear, that Miss Candleshoe, a woman of strong family piety, as she contemplated the Leda and the Lollia, reflected that thus the whirligig of time brings in his revenges.'
'That her chance had come – eh?'
'It must be entertained as a tenable hypothesis. On the other hand, Miss Candleshoe may be blameless. And I have myself little to go upon, since my commerce with her upon the relevant occasion was restricted and indeed inconsiderable. I dealt in the main with a housekeeper – an unusual woman, whom I have cause to remember with some particularity. She died, I believe, very shortly after. I wondered at the time whether I ought – But that is another story. And now a fresh consideration presents itself to me. Armigel.'
'Eh?'
'There was then – as there is now – living with Miss Candleshoe a retired clergyman of the name of Armigel. He acted as a domestic chaplain.'
'Services on the spot and as required? The sort of thing we bring in the Bishop for from time to time?' Lord Scattergood was impressed. 'Arthur – do you hear that? Dashed convenient notion.'
'But mark.' Mr Archdeacon, as he delivered himself of this injunction, turned round and pointed the stem of his pipe at his employer. 'Not the present employment of Armigel, but rather something that I now recall of his first profession, is the circumstance to which some large significance may well attach. Let me be brief.'
As Arthur heard these ominous words he swung the car off the high road between two unimpressive stone columns. 'Capital! For here we are.'
'Let me be very brief. Armigel's first profession was that of artist. He was a painter – although I believe an undistinguished one.'
'Then that settles it!' Lord Scattergood's righteous wrath was now given unrestricted issue. 'This scoundrel copied our Titians at his leisure; and substituted his beastly efforts for the real thing, when the time came to send them back to Benison. Upon my soul, if he wasn't a clergyman, I'd have him sent to gaol.'
'You'd be laughed at for your pains.' Arthur spoke with conviction. 'The whole story of Solomon's Cottage would come out, and then...' Arthur broke off abruptly, and braked so hard that they were all thrown forward in their seats. 'Something's fallen across the drive. It's a tree.' He switched off his engine. 'We can't get any farther.'
'Then we must get out and walk.' Lord Scattergood's vein of high lucidity held. 'Archdeacon, my dear fellow, I'm sorry to give you this inconvenience. But we must decidedly go right ahead.'
'By all means, Marquess. There is small hardship in a brief nocturnal perambulation on such a night. But what of our Roman friend?'
Lord Scattergood, as he prepared to step from the car, gave Dr Rosenwald an experimental shake. 'Leave him behind – eh Arthur? Send for him if we want him.'
'Just that.' Arthur was already scrambling over the fallen tree. 'The house isn't a quarter of a mile.'
'Then it should all be plain sailing.' Lord Scattergood lent a solicitous arm to his librarian as he in turn negotiated the obstacle. 'We present ourselves, explain that the imposture is detected, and take the Titians quietly home with us. No need to admit that it has made us a bit hot under the collar – what?'
'An admirable proposal, Marquess. As your politic ancestor put it, restrain any irascible word in the interest of neighbourly feeling.'
Arthur allowed himself a sceptical laugh. 'And hope that Miss Candleshoe will be subdued, if not mollified, by an inflexible exhibition of superior breeding? Well, we can only try. And good humour will certainly be the note on which to begin.'
'I don't anticipate any trouble.' A mood of confidence appeared to grow in the rightful owner of the Leda and the Lollia as he trudged up the neglected and moonlit drive towards Candleshoe. 'Lucky that we have come by night, you know. Less chance of gossip. A firm line and – believe me – everything will go off very quietly.'
'It is certainly very quiet now.' Mr Archdeacon spoke almost dreamily from amid his cloud of tobacco. 'The imagination of a poet could scarcely propose to itself a scene of more unflawed tranquillity. See, my dear Marquess, how sweet the moonlight sleeps upon this bank! It will be within the scope of your recollection that Shakespeare–'
Abruptly, Mr Archdeacon's discourse broke off. For on this so tranquil night another voice had made itself heard. It was close at hand, and its tone was uncompromising.
'Stand quite still. I have a gun, and I can drop any two of you.'
# 18
'Now, what would be the meaning of that?' Lord Scattergood stops and peers ahead with considerable interest. The moonlight, although doubtless sleeping upon banks much as at Belmont, is an uncertain and low-powered affair, so that it is difficult to distinguish much. Just ahead, the drive appears to take one of its numerous twists and disappear into shadows. It is from this obscurity that the voice has spoken, and now it speaks again. Loud but level, it conveys every impression of intending business.
'I mean to continue getting right out of this. You can surround me, you can outflank me, you can rush me. But I can get two of you, or perhaps three, before you get me. So that may mean any of you – get that? Maybe it would be healthier if you were to quit.'
Lord Scattergood moves on again. 'Odd – eh? What would you make of it, Arthur?'
'I suppose we are being addressed by one of the fellows who were so interested in the paintings when Archdeacon was showing people round. They are here before us, sure enough. What would you say, Archdeacon?'
'I am in agreement with you, Lord Arthur. We are indubitably being addressed by a criminal... How treacherous the surface of this drive is! In this uncertain light it is positively dangerous.'
'You've been warned. Stop, or I drop you dead.'
'Incompetent – what?' Lord Scattergood shakes his deerstalker in the moonlight. 'If he wants to get away, why keep shouting at us? Sounds almost as if attracting attention was his idea.'
'It rather does.' Arthur is puzzled.
'Well, he shall get it.' Lord Scattergood speaks with some asperity, and quickens his pace. 'Let me just catch sight of him, and I shall tell him precisely what I think.'
'It's your last chance. I tell you to stop.'
'Indubitably what is called a gangster.' Mr Archdeacon pronounces this with assurance. 'The accent is decisive. I have frequently heard it in the cinema.'
'American?' Lord Scattergood is much interested. 'And proposing to smuggle my paintings out of the country – eh? And there he is!'
They have rounded a bend. Before them, uncertainly visible in shadow, is the stationary figure of a man. Arthur has a momentary impression – which further puzzles him – that the man has his back to them and is addressing vacancy. But, if this is so, he immediately swings round. There is no doubt that he has a gun, and that they are covered by it.
'Stop!'
Lord Scattergood, who is now much incensed, replies to this with a snort of indignation and stumps on. At his side, Mr Archdeacon has all the appearance of emitting a smokescreen to cover this advance. Arthur tries to get ahead of them, but has no success. His father, having got within a dozen paces of the waiting man, feels that the time has come to offer a few remarks. 'You miserable rascal!' he says. 'Terrorizing a helpless old woman! I'll have you know that the lady is my kinswoman, and moreover has the custody of some of my most valuable possessions. I respect her highly, you scoundrel; and if you think I'll permit antics like yours on her property – why, you're a very great donkey!' Lord Scattergood, as he finishes this address, comes within arm's length of the gangster and knocks his gun from his hand.
'Say – aren't you the Marquess?' The gangster – he is a young man of dishevelled but polite appearance – asks this question in what, to Arthur, is patent bewilderment.
'I am Lord Scattergood. But your business in the immediate future, you horrible ruffian, is going to be with the police.'
'And you said something about Miss Candleshoe having valuable possessions of yours?'
'My Titians, rascal – as you very well know.'
'Paintings? Then that's what they're after!'
At this Arthur steps forward. 'What's that?'
'The thieves, sir – the folk I've been trying to lead off.'
'I see.' Arthur takes a keen look at the young man, stoops and picks up the fallen gun. 'Loaded?'
'No. We got it from one of them in a fight, but it wasn't loaded. I brought it out and started doing what I could to get their attention. That was so that a boy from the house – a boy called Robin – could get past them, and bring help from the village. You see, Candleshoe is besieged. There's quite a crowd of those crooks... Perhaps I ought to say that my name is Grant Feather.'
Lord Scattergood, who has listened to this with attention, turns to his librarian. 'Archdeacon, what are we to make of this?'
'Granted the given terms of our present situation, Marquess, the problem of the young man's veracity would appear, for the time, to be unamenable to other than a purely empirical approach.'
'Go ahead, but keep an eye open – what?' And Lord Scattergood nods understandingly before turning back to the young man. 'You been inside Candleshoe?'
'Yes, sir. And my mother's there right now.'
'Seen my Titians – Lollia somebody, and an odd girl with a swan?'
'I just don't get that, Lord Scattergood. I saw those paintings at Benison today – or I suppose I ought to say yesterday afternoon.'
'You saw copies of them. The originals have been st–' Lord Scattergood checks himself at a warning cough from Mr Archdeacon. 'The originals, for reasons into which I need not enter, have been for some time at Candleshoe. And now it appears that a pack of rascals are after them. You haven't seen them – either hung up or stored away?'
'No, sir – nor heard them mentioned either. But Candleshoe is quite a big place. And what you say does explain things. These people are much more likely to be after a couple of valuable paintings than a pirate hoard hidden in a secret chamber.'
'A pirate hoard? Stuff and nonsense.'
'That's what Jay thinks it is.'
Arthur Spendlove interposes. 'Jay? Who is he?'
'A capable kid who lives with Miss Candleshoe. His name is Jay Ray.'
From the base of what is now a tall column of tobacco smoke Mr Archdeacon emits a sound of mild interest. 'Ray? Surely there cannot be–'
'Quite right, Archdeacon. No boy could be called Jay Ray. The idea's absurd.' And Lord Scattergood looks with renewed suspicion at Grant Feather. 'And as for a pirate–'
'My dear Marquess, you misconstrue the sense of my proposed observation. But no matter. Suffice it to remark that an American boy might conceivably have such a name.'
Grant Feather laughs at this. 'He's American, all right. But – what's more important – he's all alone in Candleshoe now, with the two old folk, and my mother, and a bunch of children younger than himself. And these crooks may be starting another attack.'
'Then we'd better be moving on.' It is Arthur Spendlove who speaks, and he steps out as he does so. 'What about that other boy – Robin, did you say? Will he have reached the village?'
'Not yet. But he sure will quite soon. I'm certain he got clean away.'
'Then there doesn't seem much to worry about. We'll go up to the house now, and wait for the arrival of the police, and so on. With a crowd of us like this, it isn't likely that the thieves will show up again. They certainly don't seem much in evidence at the moment.' Arthur turns to Grant. 'Don't you think they may have gone already? For I gather you didn't actually draw them when you put up that diversionary turn?'
'I did not. And it makes me a mite uneasy. I think they feel they've got a trump card, and believe they can play it any moment now. That would account for their not much minding whether I walked straight out of the place or not. They reckon that they can be clear of Candleshoe, Titians and all, before any effective force can be mustered.'
'In that case, Mr Feather, they are wrong.' Lord Scattergood delivers himself of this with a snort of indignation, and at the same moment quickens his pace. 'My son and I, together with Mr Archdeacon... By the way, this is Mr Archdeacon.'
'How do you do, sir.'
'How do you do.'
'I say that my son and I, together with Mr Archdeacon and yourself, constitute an effective force in ourselves. Archdeacon, am I wrong?'
'Certainly not, Marquess. No other view of the matter would occur to me. We have all the makings of a well-balanced force, if I may say so.'
'Precisely.' And now from under his deerstalker Lord Scattergood turns a stern eye on Grant. 'You, sir – do you agree?'
At this – and with a deplorable lack of military caution – Grant gives a shout of laughter. 'Yes, Marquess. But we're nothing on the garrison at present in the house, or I'd never have quit it, even to get that boy away. If we can get in and join up with it, we should do pretty well. But you will have to take your orders from Jay.'
'From the boy? Does Miss Candleshoe do that?'
Grant laughs again. 'When there's a crisis, I guess she treats him pretty well as commander-in-chief.'
'Then – while we are inside Candleshoe – that settles the matter.' Having thus declared himself, Lord Scattergood continues to march up the drive. 'Would it be the sort of place that has a front-door bell?'
'I suppose so.' Grant finds this an odd question. 'Wouldn't Benison have a front-door bell?'
'Do you know, I've never looked to see?' Lord Scattergood appears much struck by this circumstance. 'But here we are. And no doubt the best thing will be a very loud knock.'
Candleshoe is before them, and Grant sees that the moon has moved on as if to inspect a fresh face of it. He has little idea of the time, but knows that the small hours have come. Even so, he could count the hours of his acquaintance with the house very readily upon his ten fingers. And this is strange, since it already has the air of an established landmark in his life. Moreover he is still apprehensive that it may have come to stay. The night's wild events, when happily over, will by no means render the place less endearing in his mother's regard. Miss Candleshoe and Mr Armigel, he gloomily reflects, may be watercolour sketching upon the Yang-tse-kiang within a twelve-month. Meanwhile he remembers Jay and his archers within, and the uncertain number of evilly – as it may be of desperately – disposed persons without. The approach to Candleshoe has its hazards, and he wonders whether Lord Scattergood ought to be apprised of them. 'Do you think, sir,' he presently asks, 'that the front door will be the best thing?'
'My dear fellow, nothing else would be civil. I have some business – family business, you might say – to discuss with Miss Candleshoe; and she is both a spirited woman and apt to stand upon old-fashioned forms. It wouldn't do to climb in through a bathroom window, you know – it wouldn't do at all... I think we go up these steps. Spot of mortar wouldn't come amiss to them – eh?'
They climb the steps. Grant doesn't at all know what is going to happen – a cloud of arrows from one direction or a rain of revolver bullets from another. But Lord Scattergood, who appears unable to command more than an intermittent consciousness of the altogether abnormal state of affairs at Candleshoe, is not a person with whose views one ought hastily to express nervous dissent; and Grant mounts the steps beside him. He can hear Mr Archdeacon, who is immediately behind, offering Arthur Spendlove miscellaneous antiquarian observations on the building. They are standing before the front door, and Lord Scattergood has found a knocker. In a moment Grant is realizing how deep has been the silence which the Marquess now proceeds to break. It is hard to believe that the enemy still lurks. It is even hard to believe that there has ever really been an enemy at all. The fantastic fracas in the library might be remembered from a broken dream.
'Who goes there?'
There is no doubt about what the garrison feels. The challenge, although its pitch suggests one of the less mature of Jay's following, rings out sharply and formidably enough. Lord Scattergood however comports himself as if his knock had produced a butler bowing gravely at an opened portal. 'Is Miss Candleshoe at home? The Marquess of Scattergood.'
There is silence for some seconds, during which Grant has an impression of considerable confabulation in progress behind the massive timber confronting them. And then comes the voice of Mr Armigel. It is placid and decisive. 'Miss Candleshoe is not at home.'
Grant's sense of the incongruity of this exchange is suddenly sharpened by the impression that he can hear voices somewhere in the darkness behind him. And they are not voices, somehow, that he can associate with the advance of any forces of law and order. In the circumstances Lord Scattergood's approach seems to him a little on the formal side. Moreover that nobleman, a moment before so resolute in manner, appears to be somewhat at a loss. It is only after a discernible hesitation that he makes a further move. 'In that case, and since I have business of some importance with her, I will just step in and write a note.'
Inside the house this produces further distinguishable conference. Outside, Grant is now sure that he hears not only voices but some sort of engine as well. He is trying to place this – it is not at all a familiar sound – when Mr Armigel delivers himself once more with the same placidity and decision. 'Miss Candleshoe regrets that her health disables her at present from holding either epistolary or any other form of correspondence with her friends.'
For a moment this has all the appearance of being victoriously unanswerable. Lord Scattergood is reduced to turning round for the purpose of consulting his librarian. Grant turns too, and finds that he is looking over the heads of the others at a stretch of neglected lawn upon which the moon is now casting lengthening shadows. Something moves on it – something at first merely puzzling, and then unbelievable and monstrous... Grant takes one further look, swings round again, and shouts lustily. 'Jay! Are you there? Open up!'
'Grant?' Jay's voice comes from somewhere overhead.
'Make them open up. There's another attack coming. No time to lose.'
# 19
Among the more recent ancestresses of Mrs Feather – those active since the year 1620 – have been not a few ladies with the knack of continuing to keep things tidy while their husbands and sons have been shooting through the windows. It is doubtless this tradition that has prompted her, during the past hour or so, to encourage and assist the half-witted Tib to wash up. And to this in turn is due the fortunate circumstance that Miss Candleshoe is now able to receive her visitors in a great hall the feudal disorder of which is not incongruously enhanced by the remains of rabbit-pie and baked apples.
Mrs Feather is relieved to see Grant again, although she discerns at once that he is far from feeling this obscure nocturnal crisis to be over. Grant indeed no sooner appears than he vanishes once more, together with Jay and a middle-aged man whom she recognizes as Lord Arthur Spendlove. For the moment, therefore, Lord Scattergood is unaccompanied except by an ancient person, approximately coeval with Mr Armigel, who is occupied – very properly – in hastily stuffing away an enormous pipe as he advances into Miss Candleshoe's presence. Mrs Feather has, of course, no notion of the inwardness of the situation. Dimly, she supposes that Lord Scattergood – conceivably in his capacity as Lord-Lieutenant of the County – has arrived at the head of a troop of horse to the relief of his beleaguered neighbour. But this impression lasts only for a moment. Being a woman of swift perceptions, Mrs Feather quickly realizes that she is present at a clash of mighty opposites, and that the shades of Candleshoes and Spendloves innumerable may well be looking down upon the scene. Indeed, she can almost discern them at an impalpable jostle in the minstrels' gallery. This being so, Mrs Feather further feels that she herself ought to assume some role significant for the occasion. Lord Scattergood is attended by the old person with the pipe; Miss Candleshoe – a circumstance, this, surely more impressive in itself – is flanked by her domestic chaplain; Mrs Feather sees that she herself is indubitably what is called a waiting gentlewoman, and she at once takes up on Miss Candleshoe's other hand a posture as evocative of this condition as she can contrive. She flatters herself that this makes Lord Scattergood start at a disadvantage.
And certainly Lord Scattergood is embarrassed. He begins by proffering Miss Candleshoe a good evening – an anachronous tender to which the old lady replies only with one of her alarming bows. So alarming is it, indeed, that Lord Scattergood leaps forward with chivalrous haste and the evident object of fielding her neatly from the carpet. The discovery that this is an unnecessary solicitude puts him somewhat out of countenance, and he breaks abruptly into speech. 'I say, you know – my Leda and Lollia. It really won't do.'
Miss Candleshoe draws herself up. 'Lord Scattergood, I can well believe that you are in trouble again with your bitches. But need I be concerned in the matter?'
'Bitches, Miss Candleshoe?' Lord Scattergood is scandalized and bewildered.
'I presume that Leda and Lollia are bitches?'
'God bless my soul! Leda's a woman, Miss Candleshoe; and Lollia is too.'
'Ah – forgive me. I had supposed your concern was over canine bitches. My late brother, Sir James, frequently remarked to me that you had no happiness whatever with hounds. But if you are in trouble with women, Lord Scattergood, is there between us that degree of intimacy which would justify your appealing to me?'
It seems to Mrs Feather that Lord Scattergood, who is a florid man, is possibly going to suffer a stroke. He merely, however, turns to his companion. 'Archdeacon, will you be so good as to take this matter over? I'm damned if I can trust myself to say at all the proper thing.'
'Certainly, Marquess. Miss Candleshoe, let me be brief. It must be within the scope–'
'On second thoughts, I think I'll carry on myself.' Lord Scattergood thus changes his mind with what, to Mrs Feather, is inexplicable haste. 'The plain fact, ma'am, is that I've come for my Titians. So, I understand, have some rascally thieves who have also traced them here. We needn't – need we? – go at all deeply into the affair. Just take it that the time has come for the Titians to go back to Benison.' Lord Scattergood pauses hopefully, and then a further inspiration comes to him, 'And if you'd care yourself to have those deuced good copies–'
But Miss Candleshoe has turned to her chaplain. 'Mr Armigel,' she says, 'do my ears deceive me?'
'I fear not.' Mr Armigel is polishing his spectacles, as if preparatory to some distasteful but necessary scrutiny of the visitors. 'I fear that Lord Scattergood has appeared at this extraordinary hour for the sole purpose of putting forward an extravagant claim to the Candleshoe Titians. The paintings he has in mind appear to be those of which copies are well known to be exhibited at Benison. Lord Scattergood, in fact, has fallen into some sad confusion in this matter. Should he care to return at some more convenient time, I shall be happy to clear it up for him with the aid of the relevant family documents.'
'Family documents!' Mr Archdeacon at this can no longer contain himself; he produces his pipe and waves it wildly in the air. 'May I ask, sir, if you are aware of the existence of the personal journals of William Spendlove, first Earl of Scattergood, in which the provenance of the Titians–'
'And are you, Mr Archdeacon, familiar with the Candleshoe Papers?'
'The Candleshoe Papers? Certainly not! I never heard of them.'
'Precisely. They show conclusively that in the year 1721 Squire Candleshoe very properly took over the custody of certain Italian paintings which had been acquired by his son–'
'Nonsense, sir – contemptible nonsense! You know very well that the Earl carried off the paintings to Benison.'
'So the Earl thought.' Mr Armigel, conceivably because his invention is flagging, pauses to take a pinch of snuff. 'In point of fact, the Squire defeated the tiresome importunity of the Earl by permitting him to make off with two indifferent copies. I was most interested in the copies, Mr Archdeacon, when you sent them to be stored here during the war.'
'Lies, sir – impudent and impotent lies!' Mr Archdeacon is now considerably more florid than his employer. 'No forgeries existed, as you very well know, until you yourself took dastardly advantage of being entrusted with the originals, and exploited your previous profession in order to perpetrate a disgraceful fraud. You copied the Titians, sent copies back to us, and are concealing the originals in this house now... Bless me – what was that?'
It is not without due occasion – Mrs Feather has to acknowledge – that the Marquess of Scattergood's librarian thus abruptly suspends his doubtless just denunciation of Miss Candleshoe's domestic chaplain. A tremendous concussion has shaken the house to its massive foundations; a cloud of dust has risen from the floor; and from the wainscotted walls there is now tumbling and clattering a generous assortment of paintings, pikes, muskets, boars' heads, and suits of armour. A second concussion follows the first, and this is too much for a considerable portion of the elaborate plaster ceiling, which promptly deposits itself to the extent of a ton or so of debris just behind the spot upon which Lord Scattergood and Mr Archdeacon are standing.
Mrs Feather supposes an earthquake, and feels a sharp resentment that providence should thus visit Candleshoe when it appears to be within her grasp. As the dust disperses she sees the extent of the damage already occasioned, and reckons that a third or fourth tremor must bring down the whole tottering edifice in ruin. Mr Archdeacon, she notices, is behaving in an extraordinary way; he has picked up one of the fallen pikes and thrust it within the hands of Lord Scattergood; and now, with surprising agility, he is similarly providing himself with a broadsword and a shield. Thus equipped, Mr Archdeacon has all the appearance of a mythological personage addressing himself to the affair with the gods. Mrs Feather is a good deal impressed by this manner of taking arms against a natural visitation. Mr Archdeacon's defiance, however – which now takes the form of a bellow of rage – proves to be directed not against the heavens but against the Candleshoe faction as it still confronts him. The librarian, in fact, is under the impression that he and his employer have been the specific object of lethal attack by some monstrous engine; and he is proposing, at the sword's point, to seek immediate satisfaction. His advance upon Mr Armigel with this intent is only prevented by a third tremendous impact, followed by the sudden reappearance of Lord Arthur Spendlove, who shouts at his father across the hall.
'They've got a tank!'
This announcement – as might be expected – commands attention and a moment's silence. Lord Scattergood sets his pike carefully against the wall. 'A tank, my dear boy? Who have a tank?'
'The thieves – the fellows who are after our pictures.'
'Nonsense, Arthur. They can't have a tank. Such things simply aren't sold. You cause Miss Candleshoe unnecessary alarm.'
'Didn't you hear it? They're using it to break into the library, which is less massive than the older parts of the house. And you don't need to buy a tank. You simply borrow one. At this time of year they are parked all over the place.'
'I call that a very scandalous thing. I shall take the matter up with the War Office.' Lord Scattergood reaches for his pike again, and then addresses Miss Candleshoe. 'Fortunately, ma'am, my boy here knows a thing or two about tanks. Played about with them a lot in the desert. Can show us the vulnerable spots, you know. Just let me get this through a slit' – and Lord Scattergood flourishes his weapon with fine confidence – 'and – by gad! – I'll tickle them up a bit... There they go again.'
It is true that Candleshoe has taken yet another pounding. This time, a substantial piece of timber comes down virtually on Mr Armigel's toes. He looks at it in mild perplexity and turns to Miss Candleshoe. 'There can be little doubt that we are confronted with some obscure and untoward situation. Would it be prudent, I wonder, to send for Jay?'
'Certainly we must send for Jay.' Miss Candleshoe is uncompromising. 'Jay will compose this uproar, and assist the gentlemen from Benison to bed.'
'To bed?' Mr Armigel is puzzled.
'Precisely. There can be no doubt whatever that they are in liquor. Even a Spendlove, one supposes, would scarcely break in upon a neighbour and behave in this destructive manner when sober. Drink, as my late brother Sir James used frequently to remark, has been the curse of that family. But that is no more than justice. For is it not well known that their fortunes were founded upon bottling ditch-water?'
Lord Scattergood and Mr Archdeacon, who have alike listened with mounting indignation to these monstrous aspersions, are plainly collecting themselves for spirited reply when Grant Feather runs into the hall. Arthur Spendlove turns to him. 'It's bad?'
'I'll say it is. They've got a mullion down, and they're almost through. Jay says our best chance is to get up to the Long Gallery and hold the stair-heads. His friend Robin must have made the village some time ago, and help should be arriving pretty soon.'
'Then up we go... Is this Jay?'
Jay has indeed followed Grant into the hall. The lantern he is carrying shows that his pallor has yielded to a faint flush. Mrs Feather suddenly sees in him a child who ought to have been tucked up and asleep hours ago. But Jay, if exhausted, has lost nothing of his peremptory manner. 'Will you all, please, go straight up to the gallery at once? It's the only part of the house we can now hope to hold.' Having given this general direction, Jay walks straight on to Mr Armigel. 'What is this, please, about two valuable pictures?'
'Pictures, Jay?' Thus challenged, the chaplain appears for the first time discomfited.
'They say we are hiding two valuable old paintings, and that the thieves want them. Is this true?'
'My dear Jay, this is a complicated matter. But it is true that we have – um – thought proper to detain at Candleshoe two paintings by Titian – a famous artist of whom you have doubtless heard – since their ownership is a circumstance of some family complication.'
'Nothing of the sort.' Mr Archdeacon breaks in with high indignation. 'The Marquess' title to the Titians is as plain as one of those pike-staffs.'
'Where are the pictures now, please?'
'Now, Jay?'
'If we are to defend them, they must go with us to the Long Gallery this instant. I can't guarantee another three minutes.' At this, the commander of Candleshoe produces from some fold of his sombre garments a schoolboy's large and innocent watch. 'So choose.'
Mr Armigel hesitates – whereupon Lord Scattergood steps forward. 'Where are the pictures, sir? Dash it all – would you have them go out of the family altogether, and be sold by a pack of thieves to some rascal in New York or Chicago?'
This well-calculated appeal has its effect. Mr Armigel glances at Miss Candleshoe, who almost imperceptibly nods. Then he turns back to Lord Scattergood. 'The Candleshoe Titians? They are, in point of fact, lying at your feet. They tumbled from the wall not five minutes ago.' Mr Armigel pauses to observe the effect produced by this startling intelligence, and is so heartened by what he sees as to break into a chuckle. 'For the sake of decency and reticence, my dear Marquess, I have thought proper a little to obscure them beneath a sound brown varnish. The Leda might be a goose girl, and you can hardly discern that the Lollia is disrobed. But underneath, I assure you, the work of Titian will be found, wholly unimpaired.'
Lord Scattergood opens his mouth at this – but nobody is ever to know what observation he is proposing to make. Another and even more violent concussion is followed by a sound of falling masonry, the shouts of children, and footsteps in rapid withdrawal towards the hall. Mr Archdeacon, with a nicely balanced chivalry and sense of property, seizes the Leda with one hand and Miss Candleshoe with the other. Lord Scattergood snatches up the Lollia, Mr Armigel takes a lantern, and Grant and Arthur grab weapons for the purpose of fighting a rearguard action. Jay vanishes in the direction in which Candleshoe has been breached, intent upon rallying his forces and achieving an orderly retreat. In a twinkling the great hall is empty – or empty save for the wolf-hound Lightning, who has so far evinced singularly little interest in the night's proceedings. And now Lightning, who has been lying in front of the empty fireplace, rises, yawns, stretches, and proceeds at leisure to join the perturbed humans now pounding and puffing their way upstairs to the Long Gallery.
# 20
The defence of this last fastness of Candleshoe is clearly a subject to which Jay has given considerable thought. The east staircase has been effectively sealed off long ago; here an attacker from below will finally be presented with a flat ceiling, the upper surface of which is so weighted with miscellaneous lumber that there is no possibility of forcing a way through it. The west staircase, up which the defenders have retreated, is left free and open. But a barricade has been moved into place at the top; and from this and from the uppermost landing the three final turns of the stair can be commanded by bowmen. It looks almost impregnable against any common assault. But Jay explains that there is a second line of defence. Should the stair-head have to be yielded, his force will retreat to the cover of the little stage at the east end of the gallery. From this position their bows will command what is virtually a long empty tunnel up which the enemy must advance. While their arrows last, and while they retain, too, a sufficiency of torches to cast some light upon the scene, they cannot be rushed without having the chance to inflict what ought to be annihilating casualties.
All this seems to Grant Feather to be a satisfactory state of affairs – and so is Jay's announcement that he has sent his two youngest retainers to light and stoke a beacon on the roof. If the outer world is hesitating over what to make of Robin's story – and it has occurred to Grant that it may well bear the appearance of implausible fantasy – the sight of this distant minor conflagration may well be a useful stimulant. Jay's confidence in Robin seems to extend to Robin's father, whom he judges certain to arrive with overwhelming forces long before these are seriously needed. Jay opens a window which he declares to command a distant view of the high road, and bids another henchman keep strict watch there for a long line of rapidly approaching cars. Grant wonders if Jay is quite as confident as he seems. It is certain that he has a flair for keeping up morale.
The tremendous blows upon the fabric of the building have ominously ceased, and there can be no doubt that the enemy now has the run of the house. But for a few minutes there is a lull in the gallery, and this usefully contributes to the composure of the company there assembled. Lord Scattergood and Miss Candleshoe, whom the pressure of events within the last fifteen minutes has impelled with miraculous speed to an appearance of unflawed family solidarity, are conversing in front of that odd alternative version of Admiral Candleshoe's monument which goes by the name of the Christmas box. Miss Candleshoe pokes at it here and there with her ebony stick – possibly by way of emphasizing its artistic merits, or possibly with the vague notion of touching off the spring that shall send it flying magically open. Lord Scattergood, who is shrewdly convinced that there will be no more trouble over the Titians, and who is by nature incapable of a flicker of discomposure in face of any number of rascals and ruffians, shows high good humour. Mr Archdeacon, who has stowed away the Leda and the Lollia behind a pile of mouldering scenery on the stage, has obtained permission to light his pipe, and is now practising swordsmanship at the expense of one of those contraptions of wire and padding upon the generous contours of which Victorian ladies were accustomed to create additions to their wardrobe. When he is not observing with satisfaction the easy havoc wrought upon this dummy, his eye follows Jay with considerable curiosity up and down the gallery. Lord Arthur Spendlove and Mr Armigel are amicably occupied in testing the strength of the pikes with which they have armed themselves. The more powerful moiety of Jay's juvenile army is at guard over the stair-head; the remainder are held in reserve upon the stage. Mrs Feather has the impression that she is the only person who is extremely frightened. 
She would like to lower the Titians on a cord through the window, call upon the criminals to take note and make off with them, and then herself pick up a couple of equivalent art treasures for Lord Scattergood on the open market. In all this Mrs Feather has not remotely in mind, indeed, either her own safety or her son's. But she is profoundly shocked that these children have already been involved in one scene of violence, and that they may presently be precipitated into another which may conceivably become a fight to the death. Mrs Feather is aware however that nobody is going to support her in this view, and that in their several ways all her companions are delivered over to a sort of mild madness. But if she is unable to prevent further hostilities it is incumbent upon her to prepare for them. She has had the foresight to raid a bedroom on her way to the gallery, and is in possession of a pair of linen sheets. With these she now retires to an unobtrusive corner and proceeds to the manufacture of bandages.
When the assault does come it is sudden and formidable. Grant, who is at the stair-head, is among the first to be aware of it. At one moment there is nothing below him but darkness and silence; at the next a powerful beam of light is sweeping up the staircase, and behind it, with a rush, come three or four men in a bunch. A bow twangs; there is a scream of pain; the powerful light vanishes; and in the same instant a lantern balanced on the barricade explodes in a shower of glass under the impact of a revolver bullet. Silence follows.
It must be said that the first round has been won. But the scream and the revolver-shot have done their work. In the uncertain light of lanterns, torches, and candles the adults look at the children, and then with fresh eyes at one another. In a voice that has held unquestioned authority in the western desert, Arthur Spendlove orders Jay's force to the far end of the gallery. Nobody moves. Jay's own brow is suddenly like thunder. It is decidedly a moment of crisis. Lord Scattergood, who appears to possess other than intellectual means of assessing and responding to a situation of this kind, leaps with surprising agility to the top of the barrier and down on the other side. 'This won't do,' he says. 'I shall go down and tell the blackguards what I think of them.' But he has not taken three steps before he is unexpectedly held up. A puff of acrid vapour comes up the staircase, and within a second it is a dense and stifling cloud. Lord Scattergood is driven back, blinded and coughing, over the barrier. There can be no question of what has happened. The enemy has let off some species of smoke screen with deadly effect. There is a great sound of breaking glass. Jay is smashing every window he can reach. Then there is a shrill whistle and his force is in tolerably orderly retreat to the east end of the gallery. As the adults, unpractised on this treacherous terrain, follow, the ancient flooring creaks dangerously beneath them.
And now the whole force is back on the little stage. Jay with astonishing rapidity orders individuals here and there. Already he has his best bowmen lurking in the canvas foliage of the wings or crouched behind the dusty burlap simulacra of gnarled logs and mossy banks. It is very much a last stand – and not in Arden but in Sherwood. The far end of the gallery is still obscured in smoke, but the stuff seems to be making no progress towards them. If the enemy is already established there he still has a daunting stretch of space to cover. Once more the defenders appear to have the upper hand. And suddenly Jay gives a shout. 'Listen!' He is echoed by a cry from the boy who is still at watch by a window. Faintly in the night can be heard a queer, distant ululation – a rising and falling note that it takes only a second to identify. A police-car, moving very fast on the high road, is taking no chances and freely sounding its siren.
Jay's army gives a yell of excitement. And at the same instant something fantastic begins to happen near the farther end of the gallery. The junk which lines its sides is moving. Old portmanteaux, mangles, cooking-stoves, chests are edging forward, while at the same time spreading across the floor. Behind this armour the enemy is advancing – cautiously but at a fair pace. It is taking no more chances with the bows of the defenders. Jay gives a sharp order, and a whole flight of arrows spends itself in vain. The advancing barrier is level with the Christmas box when something on the stage stirs, howls, leaps. The uncanny monster approaching has roused Lightning from his indifference at last; he takes one more leap and is on top of whatever it represents. There are shouts, cries, curses; and behind the barrier rise up the figures of several men, flailing with their arms as the wolf-hound attacks them. Two overbalance and collide; a heavy chest in the middle of the floor goes over with a crash; and suddenly the whole picture disappears inexplicably from view. Candleshoe shudders and sways through all its fabric, and its air is filled first with the crash of tumbling masonry and falling timbers and then with a dust so suffocating that it is impossible even to cry out at what has occurred. But there can be no doubt of the event. It is simple and definitive. The rotten and over-strained floor of the Long Gallery has given way throughout the greater part of its length. The criminals with much else have disappeared into the gulf with all the instantaneousness of a good coup de théâtre. Nothing much is left standing except the little stage – and upon it – incongruously – those who have been cast in the role of audience to this topsy-turvy drama.
The rumble of subsiding debris ceases and the air begins to clear. Through the shattered windows men can be heard shouting – so many men as to put the sagacity of Robin's father beyond doubt. Grant has possessed himself of an electric torch, and with this he proceeds to make a survey of the position. At this east end of the gallery the beams and joists appear to have held, so that there seems no reason to fear a further collapse. The entire defending force is unscathed, and it should presently be possible to unblock the east staircase and descend.
Grant turns his torch upon the yawning cavity in the middle of the gallery and finds that the beam has no power to pierce its obscurity. It occurs to him that all the criminals may not have been engulfed; someone may have managed to scramble to the side. The beam reaches no more than halfway down the gallery; he plays it along first one wall and then the other. Some of the accumulated lumber has tumbled into the pit; some remains; there is no sign of a human form.
The beam halts. For a second Grant believes that he has spotted somebody after all – a single figure, standing pressed against the wall. Then he realizes that the figure is in marble; it is one of the youths who stands on either side of Gerard Christmas' monument. He moves the torch and sees that the corresponding figure is missing. The collapsing floor has carried with it part of the face of the monument. Where previously the marble curtains hung there is now vacancy. The torch probes this and uncertainly picks out a small chamber, once more piled with lumber, and thick with dust. Candleshoe has faced its crisis and the Christmas box has opened.
# 21
The ambulances have departed with the battered evildoers, and Lightning is said to have enjoyed a large breakfast. Mrs Feather is glad of a cup of tea. The sunrise has struck her as particularly beautiful, and it is already warm on the terrace that flanks the eastern façade of Candleshoe Manor. The house has been sadly battered, but in this Mrs Feather sees a certain advantageousness. Major repairs being now essential, it should be possible to make a thorough job of restoration – including adequately modern domestic offices and first-class plumbing – without any risk of offending local or antiquarian sentiment.
And the rooms must be rearranged. With sudden inspiration, Mrs Feather sees that her breakfast-room should face this way. On a glorious summer morning such as this it will thus be possible for her guests to stroll out with their coffee to the terrace. For that matter, this is what everybody has done now; it is the presence of nearly all who have been concerned in the late untoward events, here assembled for the purpose of consuming whatever Tab, Jay, and Mr Armigel can provide, that has put this pleasant picture of future house-parties into Mrs Feather's head. And there is nobody here, she reflects, whom she would not much enjoy entertaining.
Except, conceivably, Dr Rosenwald. Perhaps it is because this eminent expert has only lately come into anybody's head, and been extracted in a thoroughly chilled condition from Lord Scattergood's car, that his disposition appears to Mrs Feather to be not of the first charm. The distinguished Roman, indeed, is concerned to make himself agreeable, and has already entered into conversation with Mrs Feather on the large opportunities that attend any American lady of conjoined means and taste who is minded to amass paintings under expert guidance.
Paintings, meanwhile, are occupying other members of the party. For it is paintings – all much browned and some of them sadly battered – that the Christmas box has proved chiefly to contain. This is an odd circumstance upon which Mr Archdeacon and Mr Armigel are even now in learned conference. They presently opine that Squire Candleshoe, at the time of his notable dispute with the first Earl of Scattergood, must have taken the precaution of stowing away in a secret chamber the access to which was still known to him such works of art as he had succeeded in abstracting from Solomon's Cottage. The librarian and the chaplain, armed with piles of dusters and assisted by Jay, turn over the oddly revealed little collection where it has been stacked at one end of the terrace, and presently they come upon what confirms their conjecture. This is a large canvas behind the accumulated dirt on which it is possible to discern what can only be an encounter of Actaeon and Diana.
'Ah – the Schiavone!' Mr Archdeacon is delighted. 'A most interesting minor painter of the Venetian School. Let us have Dr Rosenwald to confirm our discovery.'
Dr Rosenwald however, who conceives himself to be pressing home his advantage with Mrs Feather, declines for the moment to interest himself in such small game. Mr Archdeacon turns roguishly to his employer. 'Marquess, let us be quite clear about this, so that there may be no further dispute between the houses of Spendlove and Candleshoe. Miss Candleshoe has ceded you the Leda and the Lollia. Do you acknowledge that the Diana and Actaeon is hers?'
'Certainly, my dear Archdeacon. I can have no claim upon anything that has been come upon in this extraordinary way. And now bring Armigel here for a cup of this excellent tea.'
Jay is left, rather a lonely figure, turning over and dusting the remaining paintings. Grant Feather, watching him from a distance, sees that the contents of the Christmas box have been far from answering the boy's romantic expectations. Partly, perhaps, because the pirate hoard has revealed itself as no more than these dismal squares of darkened canvas, and partly as a reaction from his heroic defence of Candleshoe, Jay, for the first time in Grant's brief acquaintance with him, is visibly depressed. He examines the paintings one by one, conscientiously but listlessly. Then suddenly he pauses at his task. 'Grant,' he calls out, 'come here.'
Grant walks across the terrace. Jay is looking in some perplexity at the painting now beneath his hand. He has just wiped the dust from its surface, and what is revealed is the portrait of a youth, richly attired in what appears to be carnival costume, and holding in his hand a small black mask. Jay gives another wipe. 'I can't understand it,' he says. 'It looks familiar.'
Grant takes one glance and agrees. The portrait is clearly by an Italian painter of the early eighteenth century, and is of no high distinction. But it is extremely interesting, nevertheless. Jay may well find it perplexingly familiar. For the youth who looks squarely out of the canvas might be the mirror image of Jay Ray.
And suddenly, without a word from Grant, this comes to Jay. He turns even paler than usual, and then very quietly asks, 'Who is it?'
Everybody has gathered round. Jay turns from the portrait and looks at the circle of familiar faces with something like terror. Mr Armigel kneels down, adjusts his spectacles, and reads out an inscription. 'Johannes Candleshoe. Aetatis suae 19. Venetia 1720... This is undoubtedly a portrait of himself brought home by Jack Candleshoe from his Grand Tour.'
Miss Candleshoe has stepped forward. She takes one look and speaks decisively. 'Then, pray, may I be told why this Jack Candleshoe is indistinguishable from Jay?'
'By all means.' Mr Archdeacon speaks from behind the first glorious cloud of tobacco smoke which he has allowed himself this morning. 'Jay is a Candleshoe. In fact, my dear madam, there can be little doubt that he is your heir.'
Rarely can it have fallen to a professional oracle – one with leisurely habits, metaphysical interests, and a highly involved and periphrastic form of address – to enjoy such an opportunity as is now Mr Archdeacon's. His explanations occupy just under an hour. And yet, in essence, they are extremely simple. He had, at the time of the depositing of the Benison pictures at Candleshoe, fallen into a relationship of some confidence and familiarity with the lately established housekeeper, an American lady passing under the name of Mrs Ray. Perhaps because by that time Mr Archdeacon had already been in notable possession of the qualities of a sage, or perhaps simply because some confidant had become emotionally necessary to her, Mrs Ray had revealed that hers was a surprising and anomalous, yet wholly respectable situation. A Californian by birth, she had married, obscurely but with an undoubted legality, a shiftless Englishman named Rupert Candleshoe, who had very shortly thereafter died. The character of her husband having been far from such as to make her repose any ready confidence in his relations, and she herself being a woman of strong – even original – turn of mind, she had determined upon a little anonymous prospecting before entering into any overt connexion with them. This odd resolution it was that had brought her under her maiden name to Candleshoe; and at Candleshoe she had still been turning over her problem when she suddenly met with an accidental death. To Mr Archdeacon her conduct had appeared a shade fantastic. Yet this was perhaps essentially because he had remained without one vital piece of information. He had no notion that the lady passing as Mrs Ray had an infant child, or that the decision confronting her was whether her child's future should be that of an American lad with his own way to make, or that of a bankrupt English squire.
Had Mr Archdeacon known of the orphaned Jay's existence, it would have been incumbent upon him to come forward with such facts as he knew. As it was, the strange situation apparently terminated by the lady's sudden death had seemed no affair of his, and the uncertain relationship always existing between Benison and Candleshoe had militated against any casual revelation. But the facts of the case were undoubted now; and Jay's mother in the course of her confidence had been sufficiently explicit in the matter of times and places to enable the situation to be investigated and corroborated by whatever legal personages would be concerned.
All this – which may have been felt by some as not altogether incongruously touched by the canons of eighteenth-century romance – is listened to with close attention by everybody on the terrace of Candleshoe. Or by everybody with one exception. Dr Rosenwald – understandably in view of his own just elevation above the vulgar concerns of common life – takes very little interest in the denouement of our comedy. At first he sits in abstraction in the garden chair, presumably planning that campaign by which he will eventually secure for the happily recovered Leda and Lollia a record price for Lord Scattergood and a record commission for himself. Then he gets up, prowls about, and presently takes a condescending look at the undistinguished treasure-trove which the Christmas box has afforded. He turns over the old neglected canvases, dusting his fingers gloomily between each. He arrives at the Diana and Actaeon, pauses on it, peers, scratches, peers again, and surprises the company by giving vent to a sudden loud cry.
'God bless my soul! I don't believe that fellow can be sober yet.' Lord Scattergood is apologetic. 'Arthur, do you think we could have Rosenwald taken away? I am afraid he has fallen into some sort of alcoholic delirium. It must have been all that whisky. Perhaps they don't drink it in Rome.'
'He certainly appears to be extremely excited.' Arthur Spendlove glances in perplexity at Dr Rosenwald, who is now waving his arms in what must be either mystical exaltation or agony.
Mr Archdeacon is also alarmed. 'His behaviour is certainly very aberrant. Would it, one wonders, be occasioned by a sudden abnegation of the ratiocinative faculty?'
'Off his rocker – eh–?' Lord Scattergood is concerned. 'Oughtn't to have left him in that car all night. Delicate, no doubt – that sort.'
'It is, in my opinion, nothing less than possession.' Mr Armigel offers this. 'Mark – a sure sign of such a state – the confusion of tongues. Pandemonium, after all, is an international settlement.' Mr Armigel takes out his watch, glances at it, and walks away.
It is certainly true that a remarkable medley of the languages of Europe is tumbling from Dr Rosenwald's lips. But presently he controls himself sufficiently to point a trembling finger at the Diana and Actaeon, and to produce an approximation to intelligible sense. 'That that! It is whose...what...yes?'
'Whose, sir?' Miss Candleshoe is swift to have no doubts on this point. 'That painting, as you must yourself have heard Lord Scattergood acknowledge, is my property. Not, possibly, in an absolute sense. I am not altogether clear that it may not be entailed upon the issue of my late nephew – that is to say, upon Jay. It is Candleshoe property. Let that suffice.'
'And a Schiavone, you know.' Mr Archdeacon nods his head sagely. 'He is known to me as a painter of some little–'
'Schiavone!' Dr Rosenwald utters the name as a sort of howl in which are weirdly mingled derision, rage, and ecstasy. 'That painting is by Giorgione.'
'Is that so?' Lord Scattergood is a little crestfallen on Miss Candleshoe's behalf. 'But, my dear fellow, it should have some little value, all the same?'
This time Dr Rosenwald's howl is even more heavily loaded with conflicting emotions. Then, as with a supreme effort, he delivers himself tonelessly of two sentences. 'Giorgione is the greatest painter in the history of European art. And this will unquestionably be acknowledged as his greatest work.'
There is a blank silence. Jay, who has been sitting on the edge of the terrace staring deep into some world of his own, now turns round and addresses the Roman connoisseur gravely. 'The painting is worth a lot of money?'
'Yes, my child.'
'Enough to repair Candleshoe?'
Dr Rosenwald throws up his hands in disgust. 'It is worth more than any other painting in the world.' Then he brightens. 'Put it in my hands, and I will get you enough to build a Benison Court, if you want to.'
Jay rises. 'We shan't want to do that.' He brings his large watch from his pocket, looks at it, and then walks over to Miss Candleshoe. As he does so, from beyond the battered house, a cracked bell begins to sound. Miss Candleshoe hears it, bows majestically to Mrs Feather and the gentlemen assembled on the terrace, takes the arm of her young kinsman, and walks away.
# Synopses of Innes Titles
(Both Series & 'Stand-alone')
Published by House of Stratus
The Ampersand Papers
While Appleby is strolling along a Cornish beach, he narrowly escapes being struck by a body falling down a cliff. The body is that of Dr Sutch, an archivist, and he has fallen from the North Tower of Treskinnick Castle, home of Lord Ampersand. Two possible motivations present themselves to Appleby – the Ampersand gold, treasure from an Armada galleon; and the Ampersand papers, valuable family documents that have associations with Wordsworth and Shelley.
Appleby and Honeybath
Every English mansion has a locked room, and Grinton Hall is no exception – the library has hidden doors and passages...and a corpse. But when the corpse goes missing, Sir John Appleby and Charles Honeybath have an even more perplexing case on their hands – just how did it disappear when the doors and windows were securely locked? A bevy of helpful houseguests offer endless assistance, but the two detectives suspect that they are concealing vital information. Could the treasures on the library shelves be so valuable that someone would murder for them?
Appleby and the Ospreys
Clusters, a great country house, is troubled by bats, as Lord and Lady Osprey complain to their guests, who include first rate detective, Sir John Appleby. In the matter of bats, Appleby is indifferent, but he is soon faced with a real challenge – the murder of Lord Osprey, stabbed with an ornate dagger in the library.
Appleby at Allington
Sir John Appleby dines one evening at Allington Park, the Georgian home of his acquaintance Owain Allington, who is new to the area. His curiosity is aroused when Allington mentions his nephew and heir to the estate, Martin Allington, whose name Appleby recognises. The evening comes to an end but just as Appleby is leaving, they find a dead man – electrocuted in the son et lumière box which had been installed in the grounds.
## The Appleby File
There are fifteen stories in this compelling collection, including: Poltergeist – when Appleby's wife tells him that her aunt is experiencing trouble with a Poltergeist, he is amused but dismissive, until he discovers that several priceless artefacts have been smashed as a result; A Question of Confidence – when Bobby Appleby's friend, Brian Button, is caught up in a scandalous murder in Oxford, Bobby's famous detective father is their first port of call; The Ascham – an abandoned car on a narrow lane intrigues Appleby and his wife, but even more intriguing is the medieval castle they stumble upon.
## Appleby on Ararat
Inspector Appleby is stranded on a very strange island, with a rather odd bunch of people – too many men, too few women (and one of them too attractive) cause a deal of trouble. But that is nothing compared to later developments, including the body afloat in the water, and the attack by local inhabitants.
## Appleby Plays Chicken
David was hiking across Dartmoor, pleased to have escaped the oppressively juvenile and sometimes perilous behaviour of his fellow undergraduates. As far as he could tell, he was the only human being for miles – but it turns out that he was the only living human being for miles. At least, that is what he presumed when he found a dead man on top of the tor.
## Appleby Talking
Arbuthnot is paying for a rash decision – he recently married a beautiful but slightly amoral girl whose crazy antics caught his rather cynical professional interest. His wife has taken a lover, Rupert Slade, and Arbuthnot wants nothing more than to see him dead – but the last thing he expected was that he'd walk into his living room and find just that!
Inspector Appleby shares the details of this and many other fascinating crimes in this un-missable collection.
## Appleby Talks Again
Ralph Dangerfield, an Edwardian playwright who belonged to the smartest young set of his day, kept a scandalous diary recording the intimate details of his own life and those of his friends. After his death, it was believed that his mother had burnt the incriminating evidence, but fifty years later, a famous collector of literary curiosities claims to have the diary in his possession and threatens to blackmail fashionable London with belated secrets about people now in respectable old age. Sir John Appleby reveals how he uncovered this unscrupulous crime and talks about his key role in seventeen more intriguing cases.
## Appleby's Answer
Author of detective novels, Priscilla Pringle, is pleased to find that she is sharing a railway compartment with a gentleman who happens to be reading one of her books – Murder in the Cathedral. He is a military officer, Captain Bulkington, who recognises Miss Pringle and offers her £500 to collaborate on a detective novel. To everyone's surprise, Miss Pringle is rather taken with Captain Bulkington – is she out of her depth?
## Appleby's End
Appleby's End was the name of the station where Detective Inspector John Appleby got off the train from Scotland Yard. But that was not the only coincidence. Everything that happened from then on related back to stories by Ranulph Raven, Victorian novelist – animals were replaced by marble effigies, someone received a tombstone telling him when he would die, and a servant was found buried up to his neck in snow, dead. Why did Ranulph Raven's mysterious descendants make such a point of inviting Appleby to spend the night at their house?
## Appleby's Other Story
During a walk to Elvedon House, palatial home of the Tythertons, Sir John Appleby and Chief Constable Colonel Pride are stunned to find a police van and two cars parked outside. Wealthy Maurice Tytherton has been found shot dead, and Appleby is faced with a number of suspects – Alice Tytherton, flirtatious, younger wife of the deceased; Egon Raffaello, disreputable art dealer; and the prodigal son, Mark Tytherton, who has just returned from Argentina. Could the death be linked to the robbery of some paintings several years ago?
## An Awkward Lie
Sir John Appleby's son, Bobby, assumes his father's detective role in this baffling crime. When Bobby finds a dead man, in a bunker on a golf course, he notices something rather strange – the first finger of the man's right hand is missing. A young girl approaches the scene and offers to watch the body while Bobby goes for help, but when he returns with the police in tow, the body and the girl are missing.
## The Bloody Wood
An assorted party of guests have gathered at Charne, home of Charles Martineau and his ailing wife, Grace, including Sir John Appleby and his wife, Judith. Appleby's suspicions are soon aroused with the odd behaviour of Charles, and the curious last request of Grace – who desires that upon her death, Charles marries her favourite niece, Martine. When Charles and Grace die on the same day, foul play is suspected.
## Carson's Conspiracy
Businessman Carl Carson decides to make a dash for South America to escape the economic slump, leaving his home and his barmy wife. But he has a problem – if his company were seen to be drawing in its horns, it wouldn't last a week. His solution is his wife's favourite delusion – an imaginary son, named Robin. Carson plans to stage a fictitious kidnapping – after all, what could be more natural than a father liquidating his assets to pay the ransom demand? Unfortunately, Carson has a rather astute neighbour – Sir John Appleby, ex-Commissioner of the Metropolitan Police.
## A Change of Heir
George Gadberry, 'resting actor', packs his bags and heads for obscurity when the Tax Inspector beckons. Then he receives a mysterious invitation and a proposition that could lead to enormous riches. Wealthy imbiber, Nicholas Comberford, wants George to impersonate him in order to secure a place in the will of fabulously affluent Great-Aunt Prudence, who lives in a Cistercian monastery and won't allow a single drop of liquor in the place. Gadberry's luck seems to have changed – but at what cost?
## Christmas at Candleshoe
When an American multi-millionaire is keen to buy an Elizabethan manor, she comes up against fierce opposition from a young boy, Jay, and his band of bowmen, who are prepared to defend the manor and its nonagenarian owner against all comers. It seems likely that behind a monumental, seventeenth-century carving, by the hand of Gerard Christmas, lies a hoard of treasure.
## A Connoisseur's Case
When John Appleby's wife, Judith, sets eyes on Scroop House, she insists that they introduce themselves to the owners – a suggestion that makes her sometimes reserved husband turn very pale. When Judith hears the village gossip about the grand house, she is even more intrigued; but when a former employee is found dead in the lock of the disused canal, and the immense wealth of Scroop's contents is revealed, Appleby has a gripping investigation on his hands.
## The Daffodil Affair
Inspector Appleby's aunt is most distressed when her horse, Daffodil – a somewhat half-witted animal with exceptional numerical skills – goes missing from her stable in Harrogate. Meanwhile, Hudspith is hot on the trail of Lucy Rideout, an enigmatic young girl who has been whisked away to an unknown isle by a mysterious gentleman. And when a house in Bloomsbury, supposedly haunted, also goes missing, the baffled policemen search for a connection. As Appleby and Hudspith trace Daffodil and Lucy, the fragments begin to come together and an extravagant project is uncovered, leading them to the South American jungle.
## Death at the Chase
When master sleuth, Appleby, leaps over a stile during a country stroll, he is apprehended by an irate Martyn Ashmore, owner of the land on which Appleby has unwittingly trespassed. But when the misunderstanding is cleared up, eccentric, aged Ashmore reveals that he is in fear for his life – once every year, someone attempts to murder him. Is it the French Resistance, or a younger Ashmore on the make? When Martyn dies, Appleby sets out to find who exactly is responsible.
## Death At The President's Lodging
Inspector Appleby is called to St Anthony's College, where the President has been murdered in his Lodging. Scandal abounds when it becomes clear that the only people with any motive to murder him are the only people who had the opportunity – because the President's Lodging opens off Orchard Ground, which is locked at night, and only the Fellows of the College have keys...
## A Family Affair
Over a period of twenty years, a series of highly elaborate art hoaxes have been perpetrated at carefully timed intervals, and in each case, the victim has a very good reason for keeping quiet. Inspector Appleby's interest is kindled by an amusing dinner-party anecdote – when he enlists the help of his wife and son, the ensuing investigation is truly a family affair. The scenes shift swiftly between glorious stately homes and the not-so-glorious art gallery of the irrepressibly dubious Hildebert Braunkopf.
## From London Far
As Meredith, an academic, stands in a Bloomsbury tobacconist waiting for his two ounces of tobacco, he murmurs a verse of 'London, a Poem' and is astounded when a trap door opens into the London Catacombs, bringing him face to face with the Horton Venus, by Titian. From then on he is trapped in a maze of the illicit art trade, in the company of the redoubtable Jane Halliwell.
## The Gay Phoenix
When tycoon, Charles Povey, is killed in a bizarre boating accident, his corrupt, look-alike brother, Arthur, adopts his identity and his financial empire. But the charade becomes complicated when one of Charles's many mistresses sees through the guise and blackmails Arthur. Enter retired detective, Sir John Appleby...
## Going it Alone
Gilbert Averell avoids some of the rigours of taxation by living for part of each year in France – but he is unhappy about the number of weeks he spends away from his native country. So when his look-alike friend, Georges, suggests that they swap passports for a short spell, Gilbert seizes the opportunity. However, a number of incidents, involving Gilbert's sister and nephew, begin to suggest that Georges's offer was not made out of simple friendship.
## Hamlet, Revenge!
At Seamnum Court, seat of the Duke of Horton, The Lord Chancellor of England is murdered at the climax of a private presentation of Hamlet, in which he plays Polonius. Inspector Appleby pursues some of the most famous names in the country, unearthing dreadful suspicion.
## Hare Sitting Up
When a germ-warfare expert goes missing, his twin brother impersonates him as a cover-up, but for how long can this last? Inspector Appleby is sent on a series of wild goose chases, which take him to a preparatory school, to the estate of an eccentric earl, and to a remote Atlantic rock, before a truly shocking climax.
## Honeybath's Haven
When portrait-painter and occasional detective, Charles Honeybath, pays a visit to his old friend Edwin Lightfoot, there are a few surprises in store. Edwin's irksome wife is packing her bags, while Edwin is indulging in an eccentric game of pretence – acting the part of a long-dead petty criminal named Flannel Foot. Days later, when Edwin disappears, Honeybath finds himself with a mystery to solve and some decisions to make about his life – will he be lured by his intended haven?
## The Journeying Boy
Humphrey Paxton, the son of one of Britain's leading atomic boffins, has taken to carrying a shotgun to 'shoot plotters and blackmailers and spies'. His new tutor, the plodding Mr Thewless, suggests that Humphrey might be overdoing it somewhat. But when a man is found shot dead at a cinema, Mr Thewless is plunged into a nightmare world of lies, kidnapping and murder – and grave matters of national security.
## Lament for a Maker
When mad recluse, Ranald Guthrie, the laird of Erchany, falls from the ramparts of his castle on a wild winter night, Appleby discovers the doom that shrouded his life, and the grim legends of the bleak and nameless hamlets, in a tale that emanates sheer terror and suspense.
## The Long Farewell
Lewis Packford, the great Shakespearean scholar, was thought to have discovered a book annotated by the Bard – but there is no trace of this valuable object when Packford apparently commits suicide. Sir John Appleby finds a mixed bag of suspects at the dead man's house, who might all have a good motive for murder. The scholars and bibliophiles who were present might have been tempted by the precious document in Packford's possession. And Appleby discovers that Packford had two secret marriages, and that both of these women were at the house at the time of his death.
## Lord Mullion's Secret
At Mullion Castle, sumptuous stately home, we meet the Earl and his family, who include his delightful daughters, Patty and Boosie, and dotty Great-aunt Camilla.
Old school chum, Charles Honeybath, who has been commissioned to paint a portrait of the Earl's wife, finds himself at the helm of a complex investigation involving ancestral works of art and a young under gardener, Swithin, who seems to possess the family features somewhat strikingly . . .
## The Man From The Sea
When a man swims to shore from a freighter off the Scottish coast, he interrupts a midnight rendezvous between Richard Cranston and Lady Blair. Richard sees an obscure opportunity to regain his honour with the Blair family after he hears the swimmer's incredible tale of espionage, treason and looming death. But this mysterious man is not all he seems, and Richard is propelled into life threatening danger.
## Money From Holme
Sebastian Holme was a painter who, as the exhibition catalogue recorded, had met a tragic death during a foreign revolution. Art dealer, Braunkopf, has made a small fortune from the exhibition. Unfortunately, Holme turns up at the private view in this fascinating mystery of the art world in which Mervyn Cheel, distinguished critic and pointillist painter, lands in very hot water.
## The Mysterious Commission
Portrait painter, Charles Honeybath, is intrigued when he is visited by a mysterious Mr Peach and is commissioned to paint an anonymous, aristocratic sitter, known only as 'Mr X', whom relatives claim is insane. Under cover of night, Honeybath is taken to the house and asked to stay while he completes his work; but when he returns to his studio, he discovers that the bank next door has been robbed and that he is under suspicion!
## The New Sonia Wayward
Colonel Ffolliot Petticate's predicament begins when his novelist wife, Sonia, drowns during a sailing trip in the English Channel. A dramatic cover-up ensues in a tale full of humour, irony and devastating suspense.
## A Night of Errors
A gruelling night of shrouded motives and confused identities develops when the last of the Dromios is found murdered, with both of his hands burnt off. He was one of triplets, whose brothers had died in a fire forty years previously. Inspector Appleby wrenches the facts from a melodrama in which the final solution is written in fire.
## Old Hall, New Hall
The forbears of Sir John Jory, of New Hall, would seem to have committed several foul acts, including tomb-robbery and murder. Old Hall, the family's former residence, is now a University. Biographer Colin Clout, engaged to write an account of one of Jory's ancestors, gets caught up in a frenzied treasure hunt as rival interests and rival claimants probe the past and naked greed comes to the fore.
## The Open House
When Inspector Appleby's car breaks down on a deserted road one dark night, he happens upon an imposing mansion, whose windows are all illuminated. His sense of curiosity gets the better of him when he discovers that the front door is wide open, and he gets a funny feeling of being watched as he wanders round this splendid house, looking for signs of life. When he finds an elaborate feast laid out, he wonders who is expected...
## Operation Pax
A two-bit con-man is thrown in at the deep end as a desperate hunt takes place in Oxford, in this gripping tale whose thrilling climax takes place in the vaults of the Bodleian.
## A Private View
Sir John and Lady Appleby attend a memorial exhibition of the oils, gouaches, collages and trouvailles of artist Gavin Limbert, who was recently found shot, under very suspicious circumstances. As Assistant Commissioner of Police, Sir John is already interested, but he becomes even more intrigued when Limbert's last masterpiece is stolen from the gallery under his very eyes.
## The Secret Vanguard
Successful minor poet, Philip Ploss, lives a peaceful existence in ideal surroundings, until his life is upset when he hears verses erroneously quoted as his own. Soon afterwards, he is found dead in the library with a copy of Dante's Purgatory open before him.
## Sheiks and Adders
When half of the guests at a charity masquerade fête at Drool Court turn up dressed as sheiks, it must be more than pure coincidence. One of them is the real thing, however, and Sir John Appleby, master detective, discovers that he is in grave danger. When one of the pseudo-sheiks is murdered, Appleby finds himself in the midst of an international political crisis.
## Silence Observed
Respected Fine Art experts are deceived in one of the most intriguing murder cases Inspector Appleby has ever faced, beginning with Gribble, a collector of forgeries whose latest acquisition is found to be a forged forgery! In the words of Appleby himself: 'Those whom the gods wish to destroy, they first make mad. Just a little mad, for a start. Inclined, say, to unreasonable jokes in the course of business. But later – well, very mad indeed.'
## Stop Press
Famous writer, Richard Eliot, has written numerous detective novels, featuring 'The Spider', a daring, clever criminal in earlier books, and an equally canny private investigator in later ones. But when the Spider comes to life – first to burgle an odd neighbour, then to harass the Eliot family, and finally to attend his own 'birthday party' – Inspector John Appleby is sent to investigate.
## There Came Both Mist and Snow
Stunning Belrive Priory, consisting of a mansion, park and medieval ruins, is surrounded by the noise and neon signs of its gaudy neighbours – a cotton-mill, a brewery and a main road. Nevertheless, Arthur Ferryman is pleased to return for a family Christmas, but is shocked to discover that his cousins have taken up a new pastime – pistol-shooting. Inspector Appleby arrives on the scene when one of Ferryman's cousins is found shot dead in the study, in a mystery built on family antagonisms.
## The Weight of the Evidence
Meteorites fall from the sky but seldom onto the heads of science dons in redbrick universities; yet this is what happens to Professor Pluckrose of Nestfield University. Inspector Appleby soon discovers that the meteorite was not fresh and that the professor's deckchair had been placed underneath a large, accessible tower – he already knew something of academic jealousies but he was to find out a great deal more.
## What Happened at Hazlewood
The Simney family, of Hazlewood Hall, have a dubious history. Sir George Simney, who was travelling in Australia before the baronetcy fell to him, sleeps with a shotgun by his side. When he is found dead in the library, the Reverend Adrian Deamer will not rest until he has discovered who is responsible. This is an absorbing tale narrated by Simney's widow, Nicolette, and by young Harold, who has just joined the C.I.D.
www.houseofstratus.com
This is the second collection of Tales from the Athena Lee Universe, and the prequel for the Athena Lee Chronicles: Wilson has his origin story revealed at long last. Who put the Son in Wilson? What happens when Pirates buy a robot? Do robots need to be emancipated? Meet lots of new characters and visit a few of our friends.
About the author: I was born in 1968 in Mineral Wells, TX, and have lived all across the country: Alabama, Georgia, Florida, Texas, Alaska, and finally Missouri. I have a Telecommunication degree and a Culinary degree, and I was a history major before I discovered multimedia. I have always been a huge fan of science fiction and fantasy, and I bring my love of the genre to my writing.
Reader comments: "Much improved. With all the typos and grammatical errors in the first book, it was a real burden to make myself read on." "I enjoyed those, so I picked up the first book in the Athena Lee Chronicles. The only issue I have is that the books are too short for the price. These are novelettes, not books! However, if you get the chance to read them, definitely do."
Excerpt: Earth was in turmoil before the first colony ship was launched. The war was fought until only the major powers were victorious. Warriors, trained and bred to be the best the world had ever seen, emerged from the shadows, bringing order and control out of the chaos of the Cyber Wars. Sam was one such warrior. He volunteered to start a new colony and spread Earth's power. The future of man lay in the stars.
Excerpt: The shuttle was dark, and my best buddy Athena looked very uncomfortable in her dress uniform. The ceremony was going to be covered by all the local Vid services, and for political reasons all the heroes of the Revolution needed to be there. "We have only been on this shuttle for ten minutes and you are whining already!" "Athena, they have a lot of cool stuff that Tad got in from some contacts he had."
Excerpt: Senator and Colonel Alexander Lee watched the by-play on the floor between the two sides. Many of the Senators had not really been paying all that much attention. "The reason I called is to get a bill passed in the Senate. I need at least one co-sponsor if possible." The Senator hit a button on his console. "I have just deposited to your account enough to cover the upcoming Bill. I would say you will get your Bill then, Mr Theai." The Bill passed the Senate Assembly and was made into law by a wide margin. Alexander Lee, who voted against the Bill, watched as his fellow Senators swept the whole thing under the rug and then ignored it. Wilson needed to be watched.
Copyright © T. Paul
Ervin Haág (11 January 1933 – 23 October 2018) was a Hungarian chess International Master (IM, 1961). He was a European Team Chess Championship silver (1970) and bronze (1961) medalist.
Biography
In the 1960s Ervin Haág was one of the top Hungarian chess players. He played in the individual finals of the Hungarian Chess Championship many times and won three medals: silver (1966) and two bronze (1960, 1967). One of his greatest successes in international tournaments came in 1961 in Debrecen, where he shared first place (together with Isaac Boleslavsky) in the Lajos Asztalos Memorial.
His career-highest rating, 2440, dates from July 1, 1971, which at the time placed him joint 11th–13th among Hungarian chess players.
Ervin Haág played for Hungary in the European Team Chess Championships:
In 1961, at the seventh board in the 2nd European Team Chess Championship in Oberhausen (+4, =2, -2) and won team and individual bronze medals.
In 1970, at second reserve board in the 4th European Team Chess Championship in Kapfenberg (+1, =2, -0) and won team silver medal.
Ervin Haág played for Hungary in the World Student Team Chess Championships:
In 1955, at second reserve board in the 2nd World Student Team Chess Championship in Lyon (+2, =3, -0) and won team bronze medal,
In 1957, at first reserve board in the 4th World Student Team Chess Championship in Reykjavik (+3, =0, -1),
In 1958, at third board in the 5th World Student Team Chess Championship in Varna (+0, =4, -2),
In 1959, at fourth board in the 6th World Student Team Chess Championship in Budapest (+2, =1, -2) and won team bronze medal.
Ervin Haág played for chess club Spartacus Budapest in the European Men's Chess Club Cups:
In 1976, in the 1st European Chess Club Cup (+2, =3, -1),
In 1982, in the 3rd European Chess Club Cup (+2, =3, -1) and won team tournament,
In 1984, in the 4th European Chess Club Cup (+1, =1, -0).
Ervin Haág was also successful in correspondence chess. He won the Hungarian Correspondence Chess Championship, and in 1961 he received the title of International Master in this variant of chess.
Ervin Haág was a member of the Presidium of the Hungarian Chess Federation from 1971, and served as President of the Hungarian Chess Federation from 1979 to 1984.
He was also engaged in chess coaching; his most notable pupils include István Csom, Péter Lukács, Attila Schneider, Laszlo Cherna and Tibor Tolnai.
Ervin Haág graduated in applied mathematics from the University of Budapest. He worked as a programmer in the textile industry. From 1972 to 1979 he worked at the publishing house Lapkiadó Vállalat, where he was also editor-in-chief of the chess magazine Magyar Sakkélet.
Together with Győző Forintos, he was the author of two books on chess openings:
Petrov's Defence (1992)
Easy Guide to the 5.Nge2 King's Indian Defence (2000)
1933 births
2018 deaths
People from Mosonmagyaróvár
Chess International Masters
Hungarian chess players
Hungarian chess writers
Chess administrators
Budapest University alumni
<?php
// autogenerated file 30.09.2013 15:20
// $Id: $
// $Log: $
//
//
require_once 'EbatNs_ComplexType.php';
/**
* Summary details for a specified My Messages folder.
*
* @link http://developer.ebay.com/DevZone/XML/docs/Reference/eBay/types/MyMessagesFolderSummaryType.html
*
*/
class MyMessagesFolderSummaryType extends EbatNs_ComplexType
{
/**
* @var long
*/
protected $FolderID;
/**
* @var string
*/
protected $FolderName;
/**
* @var int
*/
protected $NewAlertCount;
/**
* @var int
*/
protected $NewMessageCount;
/**
* @var int
*/
protected $TotalAlertCount;
/**
* @var int
*/
protected $TotalMessageCount;
/**
* @var int
*/
protected $NewHighPriorityCount;
/**
* @var int
*/
protected $TotalHighPriorityCount;
/**
* @return long
*/
function getFolderID()
{
return $this->FolderID;
}
/**
* @return void
* @param long $value
*/
function setFolderID($value)
{
$this->FolderID = $value;
}
/**
* @return string
*/
function getFolderName()
{
return $this->FolderName;
}
/**
* @return void
* @param string $value
*/
function setFolderName($value)
{
$this->FolderName = $value;
}
/**
* @return int
*/
function getNewAlertCount()
{
return $this->NewAlertCount;
}
/**
* @return void
* @param int $value
*/
function setNewAlertCount($value)
{
$this->NewAlertCount = $value;
}
/**
* @return int
*/
function getNewMessageCount()
{
return $this->NewMessageCount;
}
/**
* @return void
* @param int $value
*/
function setNewMessageCount($value)
{
$this->NewMessageCount = $value;
}
/**
* @return int
*/
function getTotalAlertCount()
{
return $this->TotalAlertCount;
}
/**
* @return void
* @param int $value
*/
function setTotalAlertCount($value)
{
$this->TotalAlertCount = $value;
}
/**
* @return int
*/
function getTotalMessageCount()
{
return $this->TotalMessageCount;
}
/**
* @return void
* @param int $value
*/
function setTotalMessageCount($value)
{
$this->TotalMessageCount = $value;
}
/**
* @return int
*/
function getNewHighPriorityCount()
{
return $this->NewHighPriorityCount;
}
/**
* @return void
* @param int $value
*/
function setNewHighPriorityCount($value)
{
$this->NewHighPriorityCount = $value;
}
/**
* @return int
*/
function getTotalHighPriorityCount()
{
return $this->TotalHighPriorityCount;
}
/**
* @return void
* @param int $value
*/
function setTotalHighPriorityCount($value)
{
$this->TotalHighPriorityCount = $value;
}
/**
* @return
*/
function __construct()
{
parent::__construct('MyMessagesFolderSummaryType', 'urn:ebay:apis:eBLBaseComponents');
if (!isset(self::$_elements[__CLASS__]))
self::$_elements[__CLASS__] = array_merge(self::$_elements[get_parent_class()],
array(
'FolderID' =>
array(
'required' => false,
'type' => 'long',
'nsURI' => 'http://www.w3.org/2001/XMLSchema',
'array' => false,
'cardinality' => '0..1'
),
'FolderName' =>
array(
'required' => false,
'type' => 'string',
'nsURI' => 'http://www.w3.org/2001/XMLSchema',
'array' => false,
'cardinality' => '0..1'
),
'NewAlertCount' =>
array(
'required' => false,
'type' => 'int',
'nsURI' => 'http://www.w3.org/2001/XMLSchema',
'array' => false,
'cardinality' => '0..1'
),
'NewMessageCount' =>
array(
'required' => false,
'type' => 'int',
'nsURI' => 'http://www.w3.org/2001/XMLSchema',
'array' => false,
'cardinality' => '0..1'
),
'TotalAlertCount' =>
array(
'required' => false,
'type' => 'int',
'nsURI' => 'http://www.w3.org/2001/XMLSchema',
'array' => false,
'cardinality' => '0..1'
),
'TotalMessageCount' =>
array(
'required' => false,
'type' => 'int',
'nsURI' => 'http://www.w3.org/2001/XMLSchema',
'array' => false,
'cardinality' => '0..1'
),
'NewHighPriorityCount' =>
array(
'required' => false,
'type' => 'int',
'nsURI' => 'http://www.w3.org/2001/XMLSchema',
'array' => false,
'cardinality' => '0..1'
),
'TotalHighPriorityCount' =>
array(
'required' => false,
'type' => 'int',
'nsURI' => 'http://www.w3.org/2001/XMLSchema',
'array' => false,
'cardinality' => '0..1'
)
));
}
}
?>
Ruki Wwierch is a Russian musical group founded in 1995.
History
The group was formed in 1995 on the initiative of Siergiej Żukow and Aleksiej Potiechin, who had met a few years earlier while working at the radio station Jewropa plius Samara. The project was initially called Diadiuszka Rej i kompanija. At the end of 1994 they sent their demo to the radio station Maksimum, attaching the note: Эта музыка заставит вас поднимать руки вверх (transl. This music will make you raise your hands up). The recordings reached the radio presenters Olga Maksimowa and Konstantin Michajłow, who announced the group's track on their show with the words: Молодая группа "Руки вверх!" (transl. The young group "Hands up!").
In 1997 they released their debut studio album, Dyszitie rawnomierno!, featuring the hits "Małysz" and "Studient"; the female vocal parts were sung by Jelizabeta Rodnianska, who also collaborated with the group on other recordings. In 1998 three new studio albums followed: Ruki Wwierch, doktor Szlagier!, Sdiełaj pogromcze!, and Sdiełaj jeszcze gromcze!. The latter two contained hits such as "Czużije guby" and "Lisz o tebie miecztaja". In 1999 their fifth studio album, Biez tormozow, premiered and sold more than 12 million copies. At the end of the 1990s the group founded its own record label, Pliaszuszczie czelowieczki.
In 2000 they released their sixth studio album, Zdrawstwij, eto ja, featuring the hit "Aj–jaj–jaj". In the same year the German group ATC released the single "Around the World (La La La La La)", which contains a sample of the song "Piesenka" from the album Sdiełaj pogromcze!.
In 2001 two more Ruki Wwierch albums were released: Nie bojsia, ja s toboj and Malenkie diewoczki, promoted by, among others, the single "18 mnie uże". By 2006 they had released four more studio albums: Koniec popsie, tancujut wsie! (2002), Mnie s toboju choroszo (2003), A diewoczkam tak chołodno (2004), and Fuc*in' Rock'n'Roll (2006). In August 2006 the musicians suspended the group's activity.
After the group disbanded, both vocalists focused on solo careers and released new recordings. In the spring of 2008 Potiechin began work on his own project, Triek & Bliuz. In the same period Żukow released a studio album titled W poiskach nieżnosti and began releasing his subsequent recordings under the name Ruki Wwierch. In 2012 a new album under the group's name, titled Otkroj mnie dwieri, was released.
Discography
Studio albums
Dyszitie rawnomierno! (1997)
Ruki Wwierch, doktor Szlagier! (1998)
Sdiełaj pogromcze! (1998)
Sdiełaj jeszcze gromcze! (1998)
Biez tormozow (1999)
Zdrawstwij, eto ja (2000)
Nie bojsia, ja s toboj (2001)
Malenkie diewoczki (2001)
Koniec popsie, tancujut wsie! (2002)
Mnie s toboju choroszo (2003)
A diewoczkam tak chołodno (2004)
Fuc*in' Rock'n'Roll (2006)
Otkroj mnie dwieri (2012)
References
Russian musical groups
\section{Introduction}\label{sec:introduction}
Definitions have a very important role in scientific literature because they define the major concepts with which an article operates.
They are used in many automatic text analysis tasks, such as question answering, ontology matching and construction, formal concept analysis, and text summarization.
Intuitively, definitions are basic building blocks of a scientific article that are used to help properly describe hypotheses, experiments, and analyses.
It is often difficult to determine where a certain definition lies in the text because other sentences around it may have similar style.
Automatic definition extraction (DE) is an important field in natural language processing (NLP) because it can be used to improve text analysis and search.
Definitions play a key role in mathematics, but their creation and use differ from those of \enquote*{everyday language}
definitions. A comprehensive study is given in a series of works by Edwards and Ward~\citep{edwards2008undergraduate, edwards2004surprises, edwards1998undergraduate},
inspired by the writings of Richard Robinson~\citep{robinson1962} and the lexicographer Sidney Landau~\citep{landau2001}.
Mathematical definitions frequently have a history as they evolve over time.
The definition we use for \textit{function}, for instance, may not be the one that was used a hundred years ago.
The concept of \textit{connectivity} has two definitions, one for \textit{path connectivity} and another for \textit{set-theoretic connectivity}. In mathematical texts, the meaning of a defined concept is not determined by its context; it is declared explicitly and is expected to have no variance within that specific mathematical text~\citep{edwards2008undergraduate}.
Mathematical definitions have many features, some critical and some optional but accepted within the mathematical community.
\cite{van2003many} describe a good mathematical definition as containing criteria of \textit{hierarchy}, \textit{existence}, \textit{equivalence}, and \textit{axiomatization}. Desired but not necessary criteria of a definition are \textit{minimality}, \textit{elegance}, and \textit{degenerations}.
We give here short definitions of these concepts; detailed explanations with examples can be found in \cite{van2003many}.
\begin{itemize}
\item \textbf{Hierarchy}: any new concept
must be described as a special case of a more general concept.
\item \textbf{Existence}: one must prove that at least one instance of the newly defined concept exists.
\item \textbf{Equivalence}: when one gives more than one formulation for the same concept, one must prove that they are equivalent.
\item \textbf{Axiomatization}: the definition fits in and is part of a deductive system.
\item \textbf{Minimality}: no more properties of the concept are mentioned in the definition than is required for its
existence.
\item \textbf{Elegance}: an elegant definition looks nicer, needs fewer words or symbols, or uses more general basic concepts from which
the newly defined concept is derived.
\item \textbf{Degeneration}: what occurs in instances when our intuitive idea of a concept does not conform to the definition.
\end{itemize}
Not every definition appearing in text is mathematical in the above sense. For example, Wikipedia articles contain definitions of different style.
We see below that the Wikipedia definition of the Kane \& Abel musical group
is not similar in style to the Wikipedia definition of an Abelian group.
$$\begin{array}{l}
\mbox{\rm Definition 1: }\\
\mbox{\it \textbf{Kane \& Abel}, formally known as 'Double Vision', is an }\\
\mbox{\it American hip hop duo formed by twin brothers Daniel and }\\
\mbox{\it David Garcia that were founded by Master P in late 1995. }\\
\mbox{\it They were best known for their time with No Limit Records.}\\\\
\mbox{\rm Definition 2: }\\
\mbox{\it In abstract algebra, an \textbf{abelian group}, also called a}\\
\mbox{\it \textbf{commutative group}, is a group in which the result of }\\
\mbox{\it applying the group operation to two group elements does }\\
\mbox{\it not depend on the order in which they are written.}
\end{array}$$
Naturally, we expect to find mathematical definitions in
mathematical articles. Mathematical definitions use formulas and
notation extensively, both in the definitions themselves and in the surrounding text. Mathematical text contains fewer words than regular text because formulas carry much of the content; however, the presence of formulas is not a good indicator of a definition sentence, since the surrounding sentences also use notation and formulas.
As an example of such text, Definition 3, below, contains a definition from Wolfram MathWorld.
Only the first sentence in this text is considered a definition sentence.
$$\begin{array}{l}
\mbox{\rm Definition 3: }\\
\mbox{\it Also called Macaulay ring, a \textbf{Cohen Macaulay ring} is a }\\
\mbox{\it Noetherian commutative unit ring R in which any proper }\\
\mbox{\it ideal $I$ of height $n$ contains a sequence $x_1,\dots,x_n$ of } \\
\mbox{\it elements (called a ring regular sequence) such that for all }\\
\mbox{\it $i=1,\dots, n$, the residue class of $x_i$ in the quotient ring }\\
\mbox{\it $R/\langle x_{1},\dots,x_{i-1}\rangle$ is a non-zero divisor. If $x_1,\dots, x_n$}\\
\mbox{\it are indeterminate over a field $K$, the above condition}\\
\mbox{\it is fulfilled by the maximal ideal $I=\langle x_1,\dots,x_n\rangle$.}
\end{array}$$
Current methods for automatic DE view it as a binary classification task,
where a sentence is classified as a definition or a non-definition. A supervised learning process is usually employed for this task,
relying on feature engineering for sentence representation.
The absolute majority of current methods study generic definitions and not mathematical definitions (see Section \ref{rel-sec}).
In this paper we describe a supervised learning method for automatic DE from mathematical texts. Our method applies a Convolutional Neural Network (CNN), a Long Short-Term Memory network (LSTM), and their combinations to the raw text data and sentence syntax structure, in order to detect definitions.
Our method is evaluated on three different corpora; two are well-known corpora for generic DE and one is a new annotated corpus of mathematical definitions, introduced in this paper.
The main contributions of this paper are (1) analysis and introduction of the new annotated dataset of mathematical definitions, (2) evaluation of the state-of-the-art DE approaches on the new mathematical dataset, (3) introduction and evaluation of upgraded sentence representations adapted to mathematical domain with an adaptation of deep neural networks to new sentence representations, (4) extensive experiments with multiple network and input configurations (including different embedding models) performed on different datasets in mathematical and non-mathematical domains, (5) experiments with cross-domain and multi-domain learning in a DE task, and (6) introduction of the new parsed but non-annotated dataset composed of Wiki articles on near-mathematics topics, used in an additional--extrinsic--evaluation scenario. These all contribute to showing that using specifically suited training data along with adapting sentence representation and classification models to the task of mathematical DE significantly improves extraction of mathematical definitions from surrounding text.
The paper is organized as follows. Section \ref{rel-sec} contains a survey of up-to-date related work. Section \ref{met-sec} describes the sentence representations and the structure of the neural networks used in our approach. Section \ref{ev-sec} provides a description of the datasets, the evaluation results, and their analysis. Section \ref{conc-sec} contains our conclusions. Finally, the Appendix contains some supplementary materials -- annotation instructions, a description of the Wikipedia experiment, and figures.
\section{Related Work\label{rel-sec}}
Definition extraction has been a popular topic in NLP research for more than a decade~\cite{xu2003trec}, and it remains a challenging task today, as a recent research call at SemEval-2020 shows.\footnote{\url{https://competitions.codalab.org/competitions/20900}} Prior work in the field of DE can be divided into three main categories: (1) rule-based methods, (2) machine-learning methods relying on manual feature engineering, and (3) methods that use deep learning techniques.
Early works about DE from text documents belong to the first category. These works rely mainly on manually crafted rules based on linguistic parameters.
\cite{klavans2001evaluation} presented the DEFINDER, a rule-based system that mines consumer-oriented full text articles in order to extract definitions and the terms they define; the system is
evaluated on definitions from on-line dictionaries such as the UMLS Metathesaurus \citep{schuyler1993umls}.
\cite{xu2003trec} used various linguistic tools to extract kernel facts for the definitional question-answering task in TREC 2003.
\cite{malaise2004detecting}
utilized semantic relations in order to mine defining expressions in domain-specific
corpora, thus detecting semantic relations between the main terms in definitions. This work is evaluated on corpora from fields of anthropology and dietetics.
\cite{saggion2004mining,saggion2004identifying} employed analysis of on-line sources in order to
find lists of relevant secondary terms that frequently occur together with the definiendum in definition-bearing passages.
\cite{storrer2006automated} proposed a system that automatically detects and annotates definitions for technical terms in German text corpora.
Their approach focuses on verbs that typically appear in definitions by specifying search patterns based on the valency frames of definitor verbs.
\cite{borg2009evolutionary} extracted definitions from nontechnical texts by using genetic programming to learn the typical linguistic forms of definitions and then using a genetic algorithm to learn the relative importance of these forms. Most of these methods suffer from both low recall and precision (below $70\%$), because definition sentences
occur in highly variable and noisy syntactic structures.
The second category of DE algorithms relies on semi-supervised and supervised machine learning that use semantic and other features
to extract definitions. This approach generates DE rules automatically but relies on feature engineering to do so.
\cite{fahmi2006learning} presented an approach to learning concept definitions from
fully parsed text with a maximum entropy classifier incorporating various syntactic features; they tested this approach on a subcorpus of the
Dutch version of Wikipedia.
In \cite{westerhout2007combining}, a pattern-based glossary candidate detector, which is capable of extracting
definitions in eight languages, was presented.
\cite{westerhout2009definition} described a combined approach that first filters corpus with a definition-oriented grammar, and then applies machine learning
to improve the results obtained with the grammar. The proposed algorithm was evaluated on a collection
of Dutch texts about computing and e-learning.
\cite{navigli2010learning} used Word-Class Lattices (WCLs), a generalization of word lattices, to model textual definitions. The authors introduced a new dataset, called WCL, that was used for the experiments, and achieved a $75.23\%$ F1 score on it.
\cite{reiplinger2012extracting} compared lexico-syntactic pattern bootstrapping and deep analysis. The manual rating experiment suggested that the concept of definition quality in a specialized domain is largely subjective, with a $0.65$ agreement score between raters.
The DefMiner system, proposed in \citep{jin2013mining}, used Conditional Random Fields (CRF) to predict the function of a word and to determine whether this word is a part of a definition. The system was evaluated on a W00 dataset~\citep{jin2013mining}, which is a manually annotated subset of ACL-ARC ontology.
\cite{boella2013extracting} proposed a technique that
only uses syntactic dependencies between
terms extracted with a syntactic parser and then transforms
syntactic contexts to abstract representations in order to use a Support
Vector Machine (SVM).
\cite{anke2015weakly} proposed a weakly
supervised bootstrapping approach for identifying textual definitions with higher linguistic variability.
\cite{espinosa2014applying} presented a supervised approach to DE in which only
syntactic features derived from dependency relations are used.
Algorithms in the third category use Deep Learning (DL) techniques for DE, often incorporating syntactic features into the network structure.
\cite{li2016definition} used Long Short-Term Memory (LSTM) and word vectors to identify definitions and then tested this approach on the English and Chinese texts. Their method achieved a $91.2\%$ F-measure on the WCL dataset.
\cite{anke2018syntactically} combined CNN and LSTM, based on syntactic features and word vector representation of sentences. Their experiments showed the best F1 score ($94.2\%$) on the WCL dataset for CNN and the best F1 score ($57.4\%$) on the W00 dataset for the CNN and bidirectional LSTM (BLSTM) combination, both with syntactically-enriched sentence representation.
Word embeddings, when used as the input representation, have been shown to boost performance in many NLP tasks due to their ability to encode semantics. We believe that the choice to use word vectors as the input representation in many DE works was motivated by their success in NLP-related classification tasks.
We use the approach of \citep{anke2018syntactically} as a starting point and as a baseline for our method.
We further extend this work with additional syntactic knowledge in the sentence representation model
and by testing additional network architectures on our input. Due to observed differences in grammatical structure between regular sentences and definition sentences, we hypothesize that dependency parsing can add valuable features to their representation. Because extending a representation model results in a larger input, we also hypothesize that a standard convolutional layer can help to automatically extract the most significant features before performing the classification task. Word embedding matrices enhanced with dependency information naturally call for a CNN, due to their size and CNN's ability to decrease dimensionality swiftly. On the other hand, sentences are sequences, for which an LSTM is very suitable. In order to test the architecture properly, we needed to check how the order of these layers affects the results, and also to make sure that both layers are necessary. To test our hypothesis, we evaluated two variants of combined networks---LSTM and CNN---in different configurations on our data.
\section{Methodology}\label{met-sec}
Our approach uses a matrix representation of a sentence, where every word and every syntactic dependency in that sentence is represented by a vector. Figure \ref{pipeline-fig} depicts our pipeline.
\begin{figure}[!t]
\center
\includegraphics[scale=0.4]{pipeline.jpg}
\vspace{4mm}
\caption{\label{pipeline-fig}Pipeline of our approach}
\end{figure}
We define several deep neural network architectures that combine CNN and LSTM layers in different ways.
We train every network on preprocessed\footnote{We applied the following text preprocessing steps: sentence boundary detection, tokenization, and dependency parsing with Stanford CoreNLP package~\citep{manning2014stanford}.} text data, where every sentence is labeled as a definition or a non-definition.
During testing, we use the pre-trained network for DE.
\subsection{Sentence representation parameters}
To represent \textit{sentence words}, we use standard sentence modeling for CNNs \citep{kim2014convolutional}, where every word $w$ is represented by its $k$-dimensional word vector $\vec{w}$~\citep{mikolov2013distributed},
and all sentences
are assumed to have the same length $n$, using zero padding where necessary. An entire sentence is then represented by an $n\times k$ zero-padded matrix $S_{n\times k}$.
In all cases we used word vectors of size $k=300$ and $n=\max_{i} \{\mbox{length of sentence \#i}\}$.
For the BERT sentence representation~\citep{devlin2018bert}, we obtain a vector of size 1024 produced by a BERT model for every sentence. The model we use is the pre-trained 24-layer uncased BERT-Large model with hidden size 1024, 16 attention heads, and 340M parameters, released by Google AI~\citep{devlin2018bert}.
In this case, sentence length has no effect on data representation.
Syntactic dependency, in the dependency parse tree of a sentence, has the form
$(w_{i},w_{j},\mathit{d_{ij}})$, where $w_{i},w_{j}$ are words and $\mathit{d_{ij}}$ is the dependency label.\footnote{The Stanford CoreNLP parser supports 46 dependency types.} For example, in a sentence \enquote*{\textit{This time around, they're moving even faster},} a tuple $(\mathit{moving},\mathit{they}, \mathit{nsubj})$ represents the dependency named $\mathit{nsubj}$, which connects the word $\mathit{moving}$ to the word $\mathit{they}$.
We represent dependency words $w_{i},w_{j}$ of dependency $(w_{i},w_{j},\mathit{d_{ij}})$ by a single vector, denoted by $\vec{r_{ij}}$, computed in one of the following ways:
\begin{itemize}
\item Normalized sum $\vec{r_{ij}}^{avg}:=\frac{1}{2}(\vec{w_{i}}+\vec{w_{j}})$ of word vectors $\vec{w_{i}}$ and $\vec{w_{j}}$.
The resulting vector has 300 dimensions.
\item Concatenation $\vec{r_{ij}}^{c}:=\vec{w_{i}}\circ \vec{w_{j}}$ of the corresponding word vectors $\vec{w_{i}}$ and $\vec{w_{j}}$.
The resulting vector has 600 dimensions.
\end{itemize}
The dependency label $\mathit{d_{ij}}$ is represented in one of the following ways:
\begin{itemize}
\item One-hot representation $\mathit{dep_{ij}}$ of the dependency label over the search space of size 46.
\item Concatenation $\mathit{dep_{ij}}\circ \mathit{depth_{ij}}$ of one-hot dependency label representation with the depth vector $\mathit{depth_{ij}}$ containing one number---the depth $n \in \mathbb{Z}^{\geq 0}$ of the dependency edge in the dependency tree (edges starting in the root of the tree have depth 0).
\end{itemize}
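As a concrete illustration of the encodings above, the following minimal Python sketch computes both word-pair representations and both label representations. The 4-dimensional toy vectors and the shortened label inventory are hypothetical; the paper uses $k=300$ and the 46 CoreNLP dependency types.

```python
# Toy word vectors (k = 4 here; the paper uses k = 300).
w_i = [0.2, 0.4, -0.6, 0.0]   # e.g. the vector for "moving"
w_j = [0.1, -0.3, 0.5, 0.7]   # e.g. the vector for "they"

# Normalized sum: same dimension as the inputs (300 in the paper).
r_ij_avg = [(a + b) / 2 for a, b in zip(w_i, w_j)]

# Concatenation: twice the input dimension (600 in the paper).
r_ij_c = w_i + w_j

# One-hot dependency label over a (hypothetical, shortened) inventory;
# Stanford CoreNLP defines 46 dependency types.
DEP_LABELS = ["nsubj", "dobj", "amod", "advmod"]
dep_ij = [1 if lbl == "nsubj" else 0 for lbl in DEP_LABELS]

# Label + depth: append the depth of the edge in the dependency tree
# (an edge starting at the root has depth 0).
depth_ij = [0]
dep_depth = dep_ij + depth_ij
```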
\subsection{Neural network models}
In this section we describe different neural network models we have implemented and tested in this work.
\subsubsection{Neural network structures}
We use four different network configurations, described below:
\begin{enumerate}
\item The convolutional network, denoted by $\mathit{CNN}$, uses a convolutional layer~\citep{lecun1998gradient} only.
\item The $\mathit{CBLSTM}$ network uses a CNN layer followed by a bidirectional LSTM layer, following the approach of \cite{anke2018syntactically}.
\item The recurrent network that uses a single LSTM layer, denoted by $\mathit{LSTM}$.
\item The $\mathit{BLSTMCNN}$ network that uses a bidirectional LSTM layer followed by a CNN layer.
\end{enumerate}
Figure~\ref{CBLSTM} and Figure~\ref{BLSTMCNN} demonstrate mixed NN architectures---CBLSTM and BLSTMCNN, respectively.
LSTM, CNN, and CBLSTM were tested as baselines, used in~\citep{anke2018syntactically}. Also, the inverse combination of layers, denoted by BLSTMCNN, was used to test our hypothesis that automatically extracted features provide a better representation model for classified sentences. CNN was added as a layer that can automatically extract features and LSTM as a classification model that is context-aware. The experiment using different orders of these layers was aimed to examine which order is beneficial for the DE task---first to extract features from the original input and then feed them to the context-aware classifier, or first to calculate hidden states with context-aware LSTM gates and then feed them into CNN classifier (which includes feature extraction before the classification layer).
\begin{figure}[!th]
\center
\includegraphics[scale=0.45]{CBLSTM.jpg}
\vspace{5mm}
\caption{\label{CBLSTM}CBLSTM network architecture.}
\end{figure}
\begin{figure}[!th]
\center
\includegraphics[scale=0.45]{BLSTMCNN.jpg}
\vspace{5mm}
\caption{\label{BLSTMCNN}BLSTMCNN network architecture.}
\end{figure}
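The effect of the layer order discussed above can be sketched by tracing sequence lengths through the two mixed architectures. This is a plain-Python sketch; the kernel size, pooling width, and number of LSTM units are illustrative assumptions, not the paper's reported hyperparameters.

```python
def conv1d_len(n, kernel, stride=1):
    """Output length of a 'valid' 1-D convolution over n steps."""
    return (n - kernel) // stride + 1

def maxpool_len(n, pool):
    """Output length of 1-D max pooling with a non-overlapping window."""
    return n // pool

SEQ_LEN, KERNEL, POOL, LSTM_UNITS = 60, 3, 2, 100  # illustrative values

# CBLSTM: the CNN (convolution + max pooling) first shortens the
# sequence, and the BLSTM then reads the shortened feature sequence.
cnn_out = maxpool_len(conv1d_len(SEQ_LEN, KERNEL), POOL)
cblstm_summary = 2 * LSTM_UNITS  # a BLSTM summary vector is 2 * units wide

# BLSTMCNN: the BLSTM emits one hidden state per input step (the sequence
# length is preserved), and the CNN then extracts local features from
# those context-aware hidden states before the classification layer.
blstm_seq_len = SEQ_LEN
blstmcnn_out = maxpool_len(conv1d_len(blstm_seq_len, KERNEL), POOL)
```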
\subsubsection{\label{input-sec}Input representation}
The final sentence representation is a concatenation of the word vector matrix with the dependency structure representation, enriched with the dependency label information.
Below, we outline three main input configurations that we have defined and tested on all our networks.
\begin{enumerate}
\item Configuration $\mathit{m}$ includes
word vectors for sentence words
and the words of dependencies (normalized sum of word vectors). Formally, $\mathit{m}=S_{n\times k}\circ [\vec{r_{ij}}^{avg}]_{ij}$
\item Configuration $\mathit{ml}$ includes word vectors for sentence words,
dependency words, and dependency label representations.
Formally, $\mathit{ml}=S_{n\times k}\circ [\vec{r_{ij}}^{avg}\circ \mathit{dep_{ij}}]_{ij}$
\item Configuration $\mathit{mld}$ has full dependency information, including concatenation of word vectors for dependency words, dependency label, and dependency depth.
Formally, $\mathit{mld}=[\vec{r_{ij}}^{c}\circ \mathit{dep_{ij}}\circ \mathit{depth_{ij}}]_{ij}$
\end{enumerate}
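Under toy dimensions, the three configurations can be assembled as in the sketch below; the sizes $K$ and $N$ and the single toy dependency are illustrative, not the paper's actual data.

```python
K = 4        # toy word-vector dimension (k = 300 in the paper)
N = 5        # maximum sentence length; shorter sentences are zero-padded
N_DEPS = 46  # size of the dependency label inventory

def pad_sentence(rows, n, k):
    """Zero-pad a sentence to n rows of k-dimensional word vectors."""
    return rows + [[0.0] * k] * (n - len(rows))

# A toy 3-word sentence becomes the zero-padded N x K matrix S.
sent_vecs = [[0.1] * K, [0.2] * K, [0.3] * K]
S = pad_sentence(sent_vecs, N, K)

# One toy dependency: averaged pair vector, concatenated pair vector,
# one-hot label, and depth.
r_avg = [0.15] * K
r_cat = [0.1] * K + [0.2] * K
dep = [1] + [0] * (N_DEPS - 1)
depth = [0]

# m:   sentence matrix followed by averaged dependency vectors.
m_rows = S + [r_avg]
# ml:  as m, with the one-hot label appended to each dependency row
#      (in practice all rows are zero-padded to a common width).
ml_rows = S + [r_avg + dep]
# mld: concatenated pair vectors with label and depth, per the formula.
mld_rows = [r_cat + dep + depth]
```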
Figure \ref{input-fig} shows how dependencies are represented for different configurations ($\mathit{m}$, $\mathit{ml}$, and $\mathit{mld}$, respectively).
\begin{figure}[!th]
\center
\includegraphics[scale=0.6]{input-configs-3.jpg}
\vspace{4mm}
\caption{\label{input-fig}Input configurations}
\end{figure}
Figure~\ref{conf-ex} shows the example of these input configurations.
\begin{figure}[t!]
\centering
\includegraphics[scale=0.4]{config-ex-sent-capt.jpg}
\\
\includegraphics[scale=0.45]{config-ex-m-capt.jpg}
\includegraphics[scale=0.45]{config-ex-ml-capt.jpg}
\includegraphics[scale=0.45]{config-ex-mld-capt.jpg}
\vspace{2mm}
\caption{Examples of input configurations.}\label{conf-ex}
\end{figure}
\section{Experiments}\label{ev-sec}
Our experiments aim at testing the following hypotheses:
\begin{enumerate}
\item Deep NNs outperform classical machine learning models on DE task;
\item CNN layer improves NN performance on DE task;
\item Syntactic (dependency) knowledge improves NN performance on DE task;
\item FastText embedding performs better on our datasets than other embedding models, due to larger word coverage (because a larger portion of words from the datasets is contained in the fastText dictionary);
\item Self-embedding performs worst among tested embeddings due to a smaller amount of training data;
\item Because mathematical and general definitions are different, supervised DE tasks for mathematical definitions must be trained on mathematical domains;
\item Mathematical Wikipedia articles can be automatically detected as definition-containing articles.
\end{enumerate}
Tests were performed on a cloud server with 32 GB of RAM, 150 GB of page-file memory, an Intel Core i7-7500U 2.70 GHz CPU, and two NVIDIA GK210GL GPUs.
\subsection{Tools}
The models were implemented with help of the following tools: (1) Stanford CoreNLP wrapper~\citep{stanfordnlp-python}
for Python (tokenization, sentence boundary detection, and dependency parsing), (2) gensim~\citep{rehurek_lrec} (loading word2vec vectors), (3) Keras~\citep{chollet2015keras} with Tensorflow~\citep{tensorflow2015-whitepaper}
as a back-end (NN models),
(4) fastText vectors pre-trained on English webcrawl and Wikipedia~\citep{fasttext}, (5) Scikit-Learn~\citep{scikit-learn} (evaluation with F1, recall, and precision metrics), (6) BERT as a service python package~\citep{bert-service}, and (7) WEKA software~\citep{hall2009weka}.
All networks were trained with batch size 32 and 10 epochs.
\subsection{Data}
In our work we use three datasets---WCL, W00, and WFMALL---that are described below.
For in-domain tests, every dataset was evaluated on its own.
In cross-domain tests, a network or a baseline was trained on one of the datasets and was tested on another.
Additionally,
we used a joint multi-domain dataset that contains the mathematical WFMALL dataset with the other two datasets (WCL\&W00\&WFMALL), denoted MULTI.
The number of sentences for each class, majority-vote values, total number of words, and number of words shared with three pre-trained word embedding models---word2vec (denoted by W2V), fastText (denoted by FT), and BERT---are given in Table~\ref{dataset-table}.
\begin{table*}
\scriptsize
\begin{tabular}{llllllll}
\hline\hline
Dataset & Definition & Non-def & Majority & Total & Common & Common & Common\\
& sentences & sentences & vote &words& W2V& FT& BERT\\
\hline
WCL & 1,871 & 2,847 & 0.603 & 21,843 & 14,368 & 16,937& 10,740 \\
W00 & 731 & 1,454 & 0.665 & 7,478 & 5,307 & 6,077 & 4,329\\
WFMALL & 1,934 & 4,206 & 0.685 & 9,759 & 6,052 & 7,366 & 5,025\\
MULTI & 4,536 & 8,507 & 0.652 & 30,791 & 18,037 & 22,155 & 12,678 \\
\hline
\end{tabular}
\vspace{1mm}
\caption{Datasets description}\label{dataset-table}
\end{table*}
\subsubsection{The WCL Dataset}
The Word-Class Lattices (WCL) dataset~\citep{WCL}, introduced in \citep{navigli2010annotated}, was constructed from manually annotated Wikipedia data
in English. The version that we used (WCL v1.2) contains 4,718 annotated sentences, of which 1,871 are proper definitions and 2,847 are distractor sentences that
have structures similar to proper definitions but are not actually definitions. This dataset contains generic definitions from all areas and is not
mathematically oriented. A sample definition sentence from this dataset is
$$\begin{array}{l}\mbox{\it American Standard Code for Information Interchange}\\
\mbox{\it is a character encoding based on English alphabet.}\end{array}$$
and a sample distractor is
$$\begin{array}{l}\mbox{\it The premise of the program revolves around Parker,}\\
\mbox{\it an 18-year-old country girl who moves back}\\
\mbox{\it and forth between her country family, who lives }\\
\mbox{\it on a bayou houseboat, and the wealthy Brents, who own}\\
\mbox{\it a plantation and pancake business.}\\
\end{array}$$
In this corpus, the following parts of definitions are annotated:
\begin{enumerate}
\item the DEFINIENDUM field (DF), referring to the word being defined and its modifiers,
\item the DEFINITOR field (VF),
referring to the verb phrase used to introduce a definition,
\item the DEFINIENS field (GF), which includes the genus phrase, and \item the REST field (RF), which indicates all additional sentence parts.
\end{enumerate}
According to the original annotations, existence of the first three parts indicates that a sentence is a definition.
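This annotation rule can be stated as a one-line predicate; the sketch below expresses the rule as we read it from the corpus description (the field names follow the enumeration above).

```python
def is_wcl_definition(annotated_fields):
    """A WCL sentence counts as a definition when the DEFINIENDUM (DF),
    DEFINITOR (VF), and DEFINIENS (GF) fields are all present;
    the REST (RF) field is optional."""
    return {"DF", "VF", "GF"}.issubset(annotated_fields)
```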
\subsubsection{The W00 Dataset}
The W00 dataset~\citep{W00}, introduced in \citep{jin2013mining},
was compiled from ACL-ARC ontology~\citep{bird2008acl} and contains 2,185 manually annotated sentences,
with 731 definitions and 1,454 non-definitions; the style of the distractors is different from the one used in the WCL dataset.
A sample definition sentence from this dataset is
$$\begin{array}{l}\mbox{\it Our system, SNS (pronounced "essence"), retrieves}\\
\mbox{\it documents to an unrestricted user query and }\\
\mbox{\it summarizes a subset of them as selected by the user.}\end{array}$$
and a sample distractor is
$$\begin{array}{l}\mbox{\it The senses with the highest confidence scores are the}\\
\mbox{\it senses that contribute the most to the function for the set.
}
\end{array}$$
Annotation of the W00 dataset is token-based, with each token in a sentence assigned a single label that indicates whether the token is part of a term ($T$), part of a definition ($D$), or neither ($O$). According to the original annotation, a sentence is considered not to be a definition if all of its tokens are marked as $O$; a sentence that contains tokens marked as $T$ or $D$ is considered to be a definition.
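The W00 sentence-level rule can likewise be sketched as a short predicate over the token labels:

```python
def is_w00_definition(token_labels):
    """W00 labels every token T (term), D (definition), or O (neither);
    a sentence is a non-definition iff all of its tokens are labeled O."""
    return any(lbl in ("T", "D") for lbl in token_labels)
```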
\subsubsection{The WFMALL Dataset}
The WFMALL dataset is an extension of the WFM dataset~\citep{WFM}. It was created by us after collecting and processing all 2,352 articles from Wolfram Mathworld~\citep{weisstein2007wolfram}. The final dataset contains 6,140 sentences, of which 1,934 are definitions and 4,206 are non-definitions.
Sentences were extracted automatically and then manually separated into two categories: definitions and statements (non-definitions).
All annotators (five in total) hold at least a BSc degree and have taken academic mathematics courses (they are research group members, including three research students). The data was semi-automatically segmented into sentences with the Stanford CoreNLP package and then manually assessed. All malformed sentences (resulting from incorrect segmentation) were fixed, and 116 sentences that were too short (fewer than 3 words) were removed. All sentences related to the Wolfram Language\footnote{\url{https://en.wikipedia.org/wiki/Wolfram_Language}} were removed, because they relate to a programming language and describe how mathematical objects are expressed in this language, not how they are defined. Sentences containing only formulas, without text, were also removed. The final dataset was split into nine portions, saved as Unicode text files. Three annotators worked on each portion.
First, two annotators labeled the sentences independently. Then, all sentences that were given different labels were annotated by a third annotator (controller).\footnote{We decided that the label with the majority vote would be selected. Therefore, the third annotator (controller) labeled only the sentences with conflicting labels.} The final label was set by majority vote.
The kappa agreement between annotators was 0.651, which is considered substantial agreement.
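For reference, Cohen's kappa is $\kappa = (p_o - p_e)/(1 - p_e)$, where $p_o$ is the observed agreement and $p_e$ is the agreement expected by chance. A minimal two-annotator sketch follows; the label names and the toy annotation lists are hypothetical.

```python
def cohen_kappa(a, b, labels=("def", "non")):
    """Cohen's kappa for two annotators: (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels: 3 of 4 sentences agree, chance agreement is 0.5.
kappa = cohen_kappa(["def", "def", "non", "non"],
                    ["def", "non", "non", "non"])  # -> 0.5
```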
This dataset is freely available for download.\footnote{\url{https://drive.google.com/drive/folders/1052akYuxgc2kbHH8tkMw4ikBFafIW0tK?usp=sharing}}
A sample definition sentence from this dataset is
$$\begin{array}{l}\mbox{\it The $(7,3,2)$-von Dyck group, also sometimes termed the}\\
\mbox{\it $(2,3,7)$-von Dyck group, is defined as the von Dyck group}\\
\mbox{\it with parameters $(7,3,2)$.}\end{array}$$
and a sample non-definition is
$$\begin{array}{l}\mbox{\it Any $2$-Engel group must be a group of nilpotency class $3$.}
\end{array}$$
\subsection{\label{pre-sec}Text preprocessing}
We applied the same text preprocessing steps to all three datasets described above:
\begin{itemize}
\item Sentence splitting was taken directly from the datasets, without applying any additional procedure: the WCL and W00 datasets came pre-split, and sentence splitting for the new WFMALL dataset was performed semi-automatically by our team (using Stanford CoreNLP SBD followed by manual correction, necessitated by the many formulas in the text).
\item Tokenization and dependency parsing were performed on all sentences with the help of the Stanford CoreNLP package~\citep{manning2014stanford}.
\item For the WCL and W00 datasets used in \cite{anke2018syntactically} for DE, we replaced parsing by SpaCy~\citep{honnibal-johnson:2015:EMNLP} with the Stanford CoreNLP parser. We found that the latter tool improved precision.
\item We tested three pre-trained word embedding options: word2vec~\citep{word2vec} with vectors pre-trained on the Google News corpus; fastText~\citep{grave2018learning} with vectors pre-trained on English web crawl and Wikipedia (available at \citep{fasttext}); and BERT~\citep{devlin2018bert}. In addition, self-embedding vectors trained on our data with an embedding layer were tested and compared to the pre-trained models.
\end{itemize}
\subsection{Baselines}
Using the WEKA software~\citep{hall2009weka}, we applied the following baselines: Simple Logistic regression (SL), Support Vector Machine (SVM), and Random Forest (RF).
To apply these methods to complete sentences, we computed a vector representation for every sentence as the average of the word vectors of all its words, and marked each sentence as belonging to either the `definition' or the `non-definition' class.
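The averaging step can be sketched as follows (a minimal illustration with toy vectors; the real vectors come from the embedding models described above):

```python
def sentence_vector(word_vectors):
    """Represent a sentence as the element-wise average of its word
    vectors, as used for the SL/SVM/RF baselines."""
    dim = len(word_vectors[0])
    return [sum(vec[i] for vec in word_vectors) / len(word_vectors)
            for i in range(dim)]

# Two toy 3-dimensional word vectors averaged component-wise.
print(sentence_vector([[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]]))  # [2.0, 3.0, 4.0]
```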
We have also used the DefMiner system of~\cite{jin2013mining}, available at \url{https://github.com/YipingNUS/DefMiner}, as a baseline. Because DefMiner comes pre-trained, we do not use it in the cross-domain evaluations.
\subsection{\label{res-sec}Results}
We tested all network configurations and baselines (where applicable) in three domain configurations described in the following sections. We use the notation $\mathit{Network}_{\mathit{input}}$ for a network of type $\mathit{Network}$, accepting input of type $\mathit{input}$ as specified in Section \ref{input-sec}.
In total, we have four network types, three input configurations, and three word embedding models for each network, plus the pre-trained BERT embedding, which added four configurations (one per network, because we did not combine it with dependency features), resulting in $4 \times 3 \times 3 + 4 = 40$ final configurations. Our motivation was to test the extent to which the order and presence of CNN and LSTM layers affect the results, and how performance was affected by the representation of sentence dependency information and by the different embedding models.\footnote{Our python code is available at \url{https://github.com/NataliaVanetik/MDE-v2}.}
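The configuration count can be checked by enumeration (a sketch; the configuration labels are ours):

```python
from itertools import product

networks = ["CNN", "CBLSTM", "LSTM", "BLSTMCNN"]
inputs = ["m", "ml", "mld"]
embeddings = ["word2vec", "fastText", "self"]

# 4 networks x 3 input configurations x 3 non-BERT embeddings = 36 ...
configs = list(product(networks, inputs, embeddings))
# ... plus one BERT configuration per network (BERT was not combined
# with dependency features, so only the 'm' input applies).
configs += [(n, "m", "BERT") for n in networks]
print(len(configs))  # 40
```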
In the tables containing results, we show accuracy and F1 scores for all embeddings, with the best scores for each dataset marked in bold.
The first series of tests was performed for every dataset separately.
Each dataset was divided, by stratified sampling, into three sets: training (70\%), validation (5\%), and test (25\%). Every model was trained on the training set and tested on the test set. We tuned the hyperparameters of all NN models on the validation set.
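A stratified split with these proportions can be sketched as follows (a minimal pure-Python illustration, not our actual experimental code; exact split sizes depend on rounding):

```python
import random
from collections import defaultdict

def stratified_split(items, labels, seed=0):
    """70%/5%/25% train/validation/test split that preserves the
    definition vs. non-definition proportions within each part."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for item, label in zip(items, labels):
        by_label[label].append(item)
    train, val, test = [], [], []
    for group in by_label.values():
        rng.shuffle(group)
        n = len(group)
        # Integer arithmetic keeps the cut points deterministic.
        n_train, n_val = (n * 70) // 100, (n * 5) // 100
        train.extend(group[:n_train])
        val.extend(group[n_train:n_train + n_val])
        test.extend(group[n_train + n_val:])
    return train, val, test

# 100 toy items, half labeled 1 (definition) and half 0.
train, val, test = stratified_split(list(range(100)),
                                    [i % 2 for i in range(100)])
print(len(train), len(val), len(test))  # 70 4 26
```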
\subsubsection{In-domain and multi-domain results}
Results of in-domain experiments are given in Tables~\ref{in-domain-tab-WCL} through \ref{in-domain-tab-WFM} and visualized in Figures~\ref{in-domain-wcl} through \ref{in-domain-wfm} (see Appendix).
Because DefMiner does not work with embedding vectors, its scores for all embedding models are identical.
\begin{table}
\scriptsize
\begin{tabular}{lllllllll}
\hline
Method & W2V & FT & BERT & self & W2V & FT & BERT & self \\
& acc & acc & acc & acc & F1 & F1 & F1 & F1 \\
\hline
RF & 0.779 & 0.807 & 0.878 & 0.721 & 0.773 & 0.800 & 0.875 & 0.600 \\
SL & 0.817 & 0.859 & \textbf{0.935} & 0.775 & 0.817 & 0.859 & 0.934 & 0.771 \\
SVM & 0.824 & 0.756 & 0.934 & 0.780 & 0.824 & 0.863 & \textbf{0.936} & 0.776 \\
DefMiner & 0.797 & 0.797 & 0.797 & 0.797 & 0.741 & 0.741 & 0.741 & 0.741 \\
\hline
$\mathrm{CNN_{m}}$ & 0.889 & 0.916 & 0.925 & 0.909 & 0.889 & 0.916 & 0.925 & 0.908 \\
$\mathrm{CNN_{ml}}$ & 0.909 & 0.930 & 0.925 & 0.934 & 0.909 & 0.930 & 0.925 & 0.934 \\
$\mathrm{CNN_{mld}}$ & 0.915 & 0.929 & 0.925 & 0.917 & 0.915 & 0.929 & 0.925 & 0.917 \\
$\mathrm{CBLSTM_{m}}$ & 0.892 & 0.903 & 0.825 & 0.931 & 0.891 & 0.902 & 0.824 & 0.931 \\
$\mathrm{CBLSTM_{ml}}$ & 0.921 & 0.929 & 0.825 & \textbf{0.935} & 0.921 & 0.929 & 0.824 & \textbf{0.935} \\
$\mathrm{CBLSTM_{mld}}$ & 0.914 & 0.926 & 0.825 & 0.916 & 0.914 & 0.926 & 0.824 & 0.917 \\
$\mathrm{LSTM_{m}}$ & 0.858 & 0.871 & 0.618 & 0.913 & 0.858 & 0.871 & 0.506 & 0.914 \\
$\mathrm{LSTM_{ml}}$ & 0.876 & 0.890 & 0.618 & 0.914 & 0.876 & 0.889 & 0.506 & 0.914 \\
$\mathrm{LSTM_{mld}}$ & 0.905 & 0.919 & 0.618 & 0.933 & 0.905 & 0.920 & 0.506 & 0.933 \\
$\mathrm{BLSTMCNN_{m}}$ & 0.889 & 0.912 & 0.922 & 0.915 & 0.889 & 0.912 & 0.922 & 0.915 \\
$\mathrm{BLSTMCNN_{ml}}$ & \textbf{0.923} & 0.916 & 0.922 & 0.913 & \textbf{0.927} & 0.915 & 0.922 & 0.912 \\
$\mathrm{BLSTMCNN_{mld}}$ & 0.919 & \textbf{0.933} & 0.922 & 0.913 & 0.920 & \textbf{0.933} & 0.922 & 0.913 \\
\hline \end{tabular}
\vspace{1mm}
\caption{In-domain performance for WCL dataset.}\label{in-domain-tab-WCL}
\end{table}
\begin{table}
\scriptsize
\begin{tabular}{lllllllll}
\hline
Method & W2V & FT & BERT & self & W2V & FT & BERT & self \\
& acc & acc & acc & acc & F1 & F1 & F1 & F1 \\
\hline
RF & 0.717 & 0.705 & 0.702 & 0.678 & 0.659 & 0.637 & 0.622 & 0.591 \\
SL & 0.703 & 0.721 & 0.722 & 0.690 & 0.677 & 0.697 & 0.716 & 0.640 \\
SVM & 0.707 & 0.717 & 0.701 & 0.682 & 0.645 & 0.665 & 0.700 & 0.575 \\
DefMiner & \textbf{0.819} & \textbf{0.819} & \textbf{0.819} & \textbf{0.819} & 0.644 & 0.644 & 0.644 & 0.644 \\
\hline
$\mathrm{CNN_{m}}$ & 0.696 & 0.698 & 0.769 & 0.652 & 0.622 & 0.607 & \textbf{0.766} & 0.636 \\
$\mathrm{CNN_{ml}}$ & 0.691 & 0.698 & 0.769 & 0.714 & 0.617 & 0.610 & \textbf{0.766} & 0.668 \\
$\mathrm{CNN_{mld}}$ & 0.698 & 0.700 & 0.769 & 0.696 & 0.631 & 0.617 & \textbf{0.766} & 0.685 \\
$\mathrm{CBLSTM_{m}}$ & 0.696 & 0.709 & 0.684 & 0.698 & 0.656 & 0.677 & 0.556 & 0.696 \\
$\mathrm{CBLSTM_{ml}}$ & 0.712 & 0.716 & 0.684 & 0.732 & 0.676 & 0.668 & 0.556 & \textbf{0.707} \\
$\mathrm{CBLSTM_{mld}}$ & 0.735 & 0.705 & 0.684 & 0.707 & 0.732 & 0.698 & 0.556 & 0.702 \\
$\mathrm{LSTM_{m}}$ & 0.684 & 0.698 & 0.684 & 0.686 & 0.649 & 0.679 & 0.556 & 0.654 \\
$\mathrm{LSTM_{ml}}$ & 0.689 & 0.696 & 0.684 & 0.675 & 0.649 & 0.643 & 0.556 & 0.618 \\
$\mathrm{LSTM_{mld}}$ & 0.723 & 0.730 & 0.684 & 0.698 & 0.706 & \textbf{0.719} & 0.556 & 0.697 \\
$\mathrm{BLSTMCNN_{m}}$ & 0.712 & 0.728 & 0.769 & 0.657 & 0.689 & 0.700 & \textbf{0.766} & 0.660 \\
$\mathrm{BLSTMCNN_{ml}}$ & 0.700 & 0.716 & 0.769 & 0.705 & 0.706 & 0.707 & \textbf{0.766} & 0.701 \\
$\mathrm{BLSTMCNN_{mld}}$ & 0.719 & 0.696 & 0.769 & 0.689 & 0.721 & 0.691 & \textbf{0.766} & 0.683 \\
\hline \end{tabular}
\vspace{1mm}
\caption{In-domain performance for W00 dataset.}\label{in-domain-tab-W00}
\end{table}
\begin{table}
\scriptsize
\begin{tabular}{lllllllll}
\hline
Method & W2V & FT & BERT & self & W2V & FT & BERT & self \\
& acc & acc & acc & acc & F1 & F1 & F1 & F1 \\
\hline
RF & 0.720 & 0.766 & 0.776 & 0.705 & 0.665 & 0.733 & 0.623 & 0.740 \\
SL & 0.756 & 0.788 & 0.841 & 0.721 & 0.737 & 0.780 & 0.660 & 0.840 \\
SVM & 0.759 & 0.792 & \textbf{0.844} & 0.700 & 0.736 & 0.780 & 0.700 & \textbf{0.844} \\
DefMiner & 0.704 & 0.704 & 0.704 & 0.704 & 0.134 & 0.134 & 0.134 & 0.134 \\
\hline
$\mathrm{CNN_{m}}$ & 0.834 & 0.856 & 0.827 & 0.717 & 0.830 & 0.856 & 0.821 & 0.679 \\
$\mathrm{CNN_{ml}}$ & 0.832 & 0.859 & 0.827 & 0.780 & 0.831 & 0.858 & 0.821 & 0.781 \\
$\mathrm{CNN_{mld}}$ & \textbf{0.841} & \textbf{0.867} & 0.827 & 0.785 & 0.836 & \textbf{0.866} & 0.821 & 0.782 \\
$\mathrm{CBLSTM_{m}}$ & 0.835 & 0.860 & 0.702 & 0.751 & 0.836 & 0.861 & 0.646 & 0.752 \\
$\mathrm{CBLSTM_{ml}}$ & 0.840 & 0.864 & 0.702 & 0.744 & 0.835 & 0.865 & 0.646 & 0.742 \\
$\mathrm{CBLSTM_{mld}}$ & 0.835 & 0.855 & 0.702 & 0.789 & 0.831 & 0.857 & 0.646 & 0.784 \\
$\mathrm{LSTM_{m}}$ & 0.832 & 0.853 & 0.673 & 0.742 & 0.829 & 0.847 & 0.576 & 0.731 \\
$\mathrm{LSTM_{ml}}$ & 0.828 & 0.860 & 0.673 & 0.746 & 0.828 & 0.858 & 0.576 & 0.734 \\
$\mathrm{LSTM_{mld}}$ & \textbf{0.841} & 0.849 & 0.673 & \textbf{0.801} & \textbf{0.839} & 0.850 & 0.576 & 0.801 \\
$\mathrm{BLSTMCNN_{m}}$ & 0.829 & 0.849 & 0.833 & 0.751 & 0.823 & 0.850 & \textbf{0.831} & 0.748 \\
$\mathrm{BLSTMCNN_{ml}}$ & 0.833 & 0.851 & 0.833 & 0.781 & 0.831 & 0.849 & \textbf{0.831} & 0.778 \\
$\mathrm{BLSTMCNN_{mld}}$ & 0.828 & 0.859 & 0.833 & 0.779 & 0.828 & 0.858 & \textbf{0.831} & 0.782 \\
\hline \end{tabular}
\vspace{1mm}
\caption{In-domain performance for WFMALL dataset.}\label{in-domain-tab-WFM}
\end{table}
We decided not to merge the BERT representation with dependency knowledge because: (1) BERT did not provide any performance advantage over the other embedding models without dependency knowledge; (2) the BERT representation produces long vectors (1024 dimensions in our case) that require a large amount of memory, and the BERT-as-a-service package takes a long time to compute sentence vectors, which significantly slows the classification task and made it infeasible to run all combinations of input configurations and networks within the available time constraints. As a consequence of this decision, the scores for the BERT embedding do not depend on the input representation and are identical for all three ($m$, $ml$, and $mld$).
The first observation from the results is that the NN-based models usually perform much better than the four baselines. Tables~\ref{in-domain-tab-WCL} through \ref{in-domain-tab-WFM} show that models having a CNN layer as one (or the only) component outperform the other models in most cases. The results also demonstrate that dependency knowledge improves model performance.
It is worth noting that DefMiner's superior accuracy on W00 can be naturally explained by the fact that DefMiner was designed by extracting hand-crafted shallow parsing patterns from the W00 dataset. DefMiner yields very poor F1 scores on the WFMALL dataset because it almost always classifies mathematical definitions as non-definitions.\footnote{The confusion matrix values are TP=130, FN=1804, TN=4195, and FP=12, which give high precision (P=0.915) and low recall (R=0.072), resulting in a low F1 value.}
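The footnote's argument follows from the standard precision/recall/F1 definitions; a minimal sketch with illustrative counts (not the DefMiner confusion matrix):

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from confusion-matrix counts.
    A classifier that rarely predicts the positive class (definitions)
    can score high precision yet near-zero recall, dragging F1 down."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts: few positive predictions, most of them correct.
p, r, f = prf1(tp=8, fp=2, fn=72)
print(p, r, round(f, 3))  # 0.8 0.1 0.178
```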
Another interesting phenomenon was observed: in some cases the baselines performed better than the NN models. All these cases have something in common: the sentences were represented with BERT or self-embedding vectors.
As such, the following general conclusions can be made: (1) the CNN model---pure or integrated with BLSTM---achieves better performance than LSTM; (2) the NN models gain better performance with dependency information.
The superiority of models having a CNN layer can be explained by the ability of CNNs to learn features and reduce the number of free parameters in a high-dimensional sentence representation, allowing the network to be more accurate with fewer parameters. Given the high-dimensional input in our task, this characteristic of CNNs proves very helpful.
\begin{table}
\scriptsize
\begin{tabular}{lllllllll}
\hline
Method & W2V & FT & BERT & self & W2V & FT & BERT & self \\
& acc & acc & acc & acc & F1 & F1 & F1 & F1 \\
\hline
RF & 0.726 & 0.750 & 0.777 & 0.695 & 0.690 & 0.722 & 0.752 & 0.626 \\
SL & 0.749 & 0.784 & 0.841 & 0.711 & 0.734 & 0.775 & 0.840 & 0.675 \\
SVM & 0.749 & 0.776 & \textbf{0.851} & 0.698 & 0.727 & 0.763 & \textbf{0.850} & 0.636 \\
DefMiner & 0.766 & 0.766 & 0.766 & 0.766 & 0.549 & 0.549 & 0.549 & 0.549 \\
\hline
$\mathrm{CNN_{m}}$ & 0.839 & 0.859 & 0.843 & 0.713 & 0.838 & 0.860 & 0.837 & 0.705 \\
$\mathrm{CNN_{ml}}$ & 0.844 & \textbf{0.870} & 0.843 & 0.787 & 0.844 & \textbf{0.870} & 0.837 & 0.785 \\
$\mathrm{CNN_{mld}}$ & 0.847 & 0.859 & 0.843 & 0.774 & 0.847 & 0.860 & 0.837 & 0.775 \\
$\mathrm{CBLSTM_{m}}$ & 0.841 & \textbf{0.870} & 0.765 & 0.732 & 0.840 & 0.868 & 0.761 & 0.735 \\
$\mathrm{CBLSTM_{ml}}$ & 0.850 & 0.861 & 0.765 & 0.732 & 0.851 & 0.857 & 0.761 & 0.735 \\
$\mathrm{CBLSTM_{mld}}$ & 0.841 & 0.865 & 0.765 & 0.785 & 0.842 & 0.864 & 0.761 & 0.782 \\
$\mathrm{LSTM_{m}}$ & \textbf{0.856} & 0.865 & 0.667 & 0.752 & \textbf{0.853} & 0.863 & 0.572 & 0.750 \\
$\mathrm{LSTM_{ml}}$ & 0.796 & 0.861 & 0.667 & 0.617 & 0.790 & 0.860 & 0.572 & 0.569 \\
$\mathrm{LSTM_{mld}}$ & 0.851 & 0.861 & 0.667 & 0.785 & 0.850 & 0.861 & 0.572 & 0.781 \\
$\mathrm{BLSTMCNN_{m}}$ & 0.853 & 0.849 & 0.829 & 0.749 & \textbf{0.853} & 0.847 & 0.818 & 0.746 \\
$\mathrm{BLSTMCNN_{ml}}$ & 0.844 & 0.860 & 0.829 & \textbf{0.800} & 0.843 & 0.859 & 0.818 & \textbf{0.798} \\
$\mathrm{BLSTMCNN_{mld}}$ & 0.842 & 0.864 & 0.829 & 0.787 & 0.840 & 0.863 & 0.818 & 0.783 \\
\hline \end{tabular}
\vspace{1mm}
\caption{Multi-domain performance.}\label{in-domain-tab-MULTI}
\end{table}
The second series of tests was performed on the joint WCL\&W00\&WFMALL dataset (denoted by MULTI). The dataset was randomly split into 70\%, 25\%, and 5\% training, test, and validation data, respectively (keeping the proportions of the mixed data). Results are given in Table~\ref{in-domain-tab-MULTI} and in Figure~\ref{in-domain-multi} (in Appendix).
Table~\ref{in-domain-tab-MULTI} also shows that the CNN layer improves the performance of our NN models. However, it does not show a consistent benefit from incorporating syntactic information into the sentence representations.
Statistical analysis performed on the obtained scores shows the following:
(1) On two datasets (WFMALL and MULTI) there is a significant superiority of two pre-trained word embeddings (word2vec and fastText) over the other two embeddings (self-trained and pre-trained BERT); there is no significant difference between the four embedding models on the other two datasets;
(2) FastText is significantly better than most other representations on two datasets;
(3) There is a significant superiority of CNN and CBLSTM models over LSTM in many configurations (WCL using m and ml, WFMALL using m; W00 and MULTI using ml);
(4) There is a significant improvement when we use dependency information in the sentence representation (ml and mld vs m) in many cases (WCL: CNN, CBLSTM, LSTM; WFMALL: CNN; W00: LSTM);
(5) Some evaluated models on three datasets (WCL: CBLSTM and CNN using ml and mld; WFMALL: CNN using mld; MULTI: CNN with ml and mld) are significantly better than the best baseline (SVM).
In conclusion, we would recommend using CNN or CBLSTM models with fastText embeddings and the ml or mld input configuration for DE in both the general and mathematical domains.
\begin{table}
\scriptsize
\begin{tabular}{lllllllllllllllll}
\hline
Method & W2V & FT & BERT & self & W2V & FT & BERT & self \\
& acc & acc & acc & acc & F1 & F1 & F1 & F1 \\
\hline
RF & 0.638 & 0.642 & 0.638 & 0.599 & 0.549 & 0.547 & 0.529 & 0.459 \\
SL & 0.688 & 0.751 & \textbf{0.864} & 0.601 & 0.674 & 0.746 & \textbf{0.861} & 0.464 \\
SVM & 0.681 & 0.745 & 0.841 & 0.604 & 0.654 & 0.738 & 0.838 & 0.459 \\
\hline
$\mathrm{CNN_{m}}$ & 0.723 & 0.723 & 0.776 & 0.604 & 0.698 & 0.698 & 0.755 & 0.455 \\
$\mathrm{CNN_{ml}}$ & 0.685 & 0.781 & 0.776 & 0.702 & 0.633 & 0.775 & 0.755 & 0.661 \\
$\mathrm{CNN_{mld}}$ & 0.669 & 0.750 & 0.776 & 0.658 & 0.608 & 0.732 & 0.755 & 0.574 \\
$\mathrm{CBLSTM_{m}}$ & 0.764 & 0.764 & 0.651 & 0.605 & 0.756 & 0.756 & 0.570 & 0.468 \\
$\mathrm{CBLSTM_{ml}}$ & 0.754 & \textbf{0.806} & 0.651 & 0.603 & 0.737 & \textbf{0.796} & 0.570 & 0.481 \\
$\mathrm{CBLSTM_{mld}}$ & 0.668 & 0.785 & 0.651 & 0.653 & 0.602 & 0.784 & 0.570 & 0.562 \\
$\mathrm{LSTM_{m}}$ & 0.772 & 0.772 & 0.608 & 0.600 & 0.758 & 0.758 & 0.469 & 0.474 \\
$\mathrm{LSTM_{ml}}$ & \textbf{0.805} & 0.801 & 0.608 & 0.602 & \textbf{0.801} & 0.793 & 0.469 & 0.477 \\
$\mathrm{LSTM_{mld}}$ & 0.693 & 0.737 & 0.608 & 0.674 & 0.647 & 0.719 & 0.469 & 0.611 \\
$\mathrm{BLSTMCNN_{m}}$ & 0.760 & 0.760 & 0.795 & 0.605 & 0.743 & 0.743 & 0.781 & 0.548 \\
$\mathrm{BLSTMCNN_{ml}}$ & 0.720 & 0.766 & 0.795 & \textbf{0.718} & 0.687 & 0.760 & 0.781 & \textbf{0.698} \\
$\mathrm{BLSTMCNN_{mld}}$ & 0.680 & 0.730 & 0.795 & 0.709 & 0.638 & 0.709 & 0.781 & 0.680 \\
\hline
\end{tabular}
\vspace{1mm}
\caption{Cross-domain scores for WFMALL and WCL datasets as training and test sets, respectively.}\label{cross-domain-tab-wfm-wcl}
\end{table}
\begin{table}
\scriptsize
\begin{tabular}{lllllllllllllllll}
\hline
Method & W2V & FT & BERT & self & W2V & FT & BERT & self \\
& acc & acc & acc & acc & F1 & F1 & F1 & F1 \\
\hline
RF & 0.673 & 0.671 & 0.669 & 0.667 & 0.567 & 0.554 & 0.540 & 0.535 \\
SL & 0.676 & 0.674 & 0.697 & \textbf{0.676} & 0.607 & 0.584 & 0.630 & 0.571 \\
SVM & 0.677 & 0.674 & \textbf{0.702} & 0.668 & 0.590 & 0.574 & \textbf{0.655} & 0.541 \\
\hline
$\mathrm{CNN_{m}}$ & 0.702 & 0.707 & 0.690 & 0.666 & 0.665 & 0.678 & 0.598 & 0.547 \\
$\mathrm{CNN_{ml}}$ & 0.697 & 0.693 & 0.690 & 0.644 & 0.635 & 0.653 & 0.598 & 0.610 \\
$\mathrm{CNN_{mld}}$ & 0.693 & 0.693 & 0.690 & 0.669 & 0.625 & 0.652 & 0.598 & 0.589 \\
$\mathrm{CBLSTM_{m}}$ & 0.692 & 0.708 & 0.672 & 0.657 & 0.673 & \textbf{0.683} & 0.573 & 0.583 \\
$\mathrm{CBLSTM_{ml}}$ & 0.706 & \textbf{0.714} & 0.672 & 0.618 & 0.667 & 0.676 & 0.573 & 0.592 \\
$\mathrm{CBLSTM_{mld}}$ & 0.691 & 0.667 & 0.672 & 0.661 & 0.622 & 0.658 & 0.573 & 0.588 \\
$\mathrm{LSTM_{m}}$ & 0.712 & 0.701 & 0.664 & 0.650 & 0.676 & 0.640 & 0.533 & 0.589 \\
$\mathrm{LSTM_{ml}}$ & 0.712 & 0.710 & 0.664 & 0.641 & \textbf{0.695} & 0.670 & 0.533 & 0.589 \\
$\mathrm{LSTM_{mld}}$ & 0.682 & 0.681 & 0.664 & 0.657 & 0.622 & 0.646 & 0.533 & 0.608 \\
$\mathrm{BLSTMCNN_{m}}$ & \textbf{0.715} & 0.700 & 0.699 & 0.590 & 0.679 & 0.679 & 0.631 & 0.590 \\
$\mathrm{BLSTMCNN_{ml}}$ & 0.709 & 0.700 & 0.699 & 0.650 & 0.665 & 0.657 & 0.631 & \textbf{0.632} \\
$\mathrm{BLSTMCNN_{mld}}$ & 0.659 & 0.684 & 0.699 & 0.648 & 0.605 & 0.633 & 0.631 & 0.624 \\
\hline
\end{tabular}
\vspace{1mm}
\caption{Cross-domain scores for WFMALL and W00 datasets as training and test sets, respectively.}\label{cross-domain-tab-wfm-w00}
\end{table}
\begin{table}
\scriptsize
\begin{tabular}{lllllllllllllllll}
\hline
Method & W2V & FT & BERT & self & W2V & FT & BERT & self \\
& acc & acc & acc & acc & F1 & F1 & F1 & F1 \\
\hline
RF & 0.669 & 0.714 & 0.745 & 0.658 & 0.668 & 0.702 & 0.737 & 0.594 \\
SL & 0.675 & 0.704 & \textbf{0.755} & 0.571 & 0.677 & 0.704 & \textbf{0.745} & 0.576 \\
SVM & 0.678 & 0.715 & 0.744 & 0.549 & 0.680 & 0.715 & 0.741 & 0.560 \\
\hline
$\mathrm{CNN_{m}}$ & 0.733 & 0.760 & 0.712 & 0.636 & 0.728 & 0.751 & 0.721 & 0.587 \\
$\mathrm{CNN_{ml}}$ & 0.731 & 0.751 & 0.712 & \textbf{0.671} & 0.729 & 0.743 & 0.721 & \textbf{0.621} \\
$\mathrm{CNN_{mld}}$ & 0.709 & 0.737 & 0.712 & 0.640 & 0.713 & 0.733 & 0.721 & 0.620 \\
$\mathrm{CBLSTM_{m}}$ & 0.756 & \textbf{0.779} & 0.686 & 0.654 & 0.750 & \textbf{0.768} & 0.682 & 0.577 \\
$\mathrm{CBLSTM_{ml}}$ & \textbf{0.758} & 0.774 & 0.686 & 0.637 & \textbf{0.752} & \textbf{0.768} & 0.682 & 0.583 \\
$\mathrm{CBLSTM_{mld}}$ & 0.717 & 0.742 & 0.686 & 0.617 & 0.720 & 0.734 & 0.682 & 0.609 \\
$\mathrm{LSTM_{m}}$ & 0.741 & 0.724 & 0.679 & 0.539 & 0.738 & 0.730 & 0.588 & 0.552 \\
$\mathrm{LSTM_{ml}}$ & 0.752 & 0.745 & 0.679 & 0.569 & 0.734 & 0.735 & 0.588 & 0.570 \\
$\mathrm{LSTM_{mld}}$ & 0.702 & 0.749 & 0.679 & 0.625 & 0.707 & 0.742 & 0.588 & 0.616 \\
$\mathrm{BLSTMCNN_{m}}$ & 0.731 & 0.754 & 0.743 & 0.496 & 0.726 & 0.752 & 0.737 & 0.514 \\
$\mathrm{BLSTMCNN_{ml}}$ & 0.739 & 0.742 & 0.743 & 0.632 & 0.709 & 0.731 & 0.737 & 0.616 \\
$\mathrm{BLSTMCNN_{mld}}$ & 0.705 & 0.725 & 0.743 & 0.631 & 0.705 & 0.722 & 0.737 & 0.614 \\
\hline
\end{tabular}
\vspace{1mm}
\caption{Cross-domain scores for WCL and WFMALL datasets as training and test sets, respectively.}\label{cross-domain-tab-wcl-wfm}
\end{table}
\begin{table}
\scriptsize
\begin{tabular}{lllllllllllllllll}
\hline
Method & W2V & FT & BERT & self & W2V & FT & BERT & self \\
& acc & acc & acc & acc & F1 & F1 & F1 & F1 \\
\hline
RF & 0.633 & 0.658 & 0.694 & 0.657 & 0.627 & 0.652 & 0.605 & 0.654 \\
SL & 0.640 & 0.644 & \textbf{0.705} & 0.632 & 0.648 & 0.657 & 0.613 & \textbf{0.713} \\
SVM & 0.652 & 0.669 & 0.668 & 0.668 & 0.652 & 0.674 & 0.588 & 0.675 \\
\hline
$\mathrm{CNN_{m}}$ & 0.655 & 0.665 & 0.694 & 0.634 & 0.665 & 0.677 & \textbf{0.704} & 0.601 \\
$\mathrm{CNN_{ml}}$ & 0.686 & 0.693 & 0.694 & \textbf{0.680} & 0.690 & 0.698 & \textbf{0.704} & 0.593 \\
$\mathrm{CNN_{mld}}$ & 0.638 & 0.650 & 0.694 & 0.617 & 0.649 & 0.662 & \textbf{0.704} & 0.606 \\
$\mathrm{CBLSTM_{m}}$ & 0.694 & 0.682 & 0.686 & 0.635 & \textbf{0.694} & 0.692 & 0.562 & 0.582 \\
$\mathrm{CBLSTM_{ml}}$ & \textbf{0.696} & 0.697 & 0.686 & 0.653 & 0.689 & 0.684 & 0.562 & 0.581 \\
$\mathrm{CBLSTM_{mld}}$ & 0.645 & 0.643 & 0.686 & 0.602 & 0.655 & 0.656 & 0.562 & 0.588 \\
$\mathrm{LSTM_{m}}$ & 0.686 & \textbf{0.702} & 0.685 & 0.653 & 0.688 & \textbf{0.710} & 0.557 & 0.589 \\
$\mathrm{LSTM_{ml}}$ & 0.650 & 0.673 & 0.685 & 0.647 & 0.660 & 0.683 & 0.557 & 0.583 \\
$\mathrm{LSTM_{mld}}$ & 0.646 & 0.618 & 0.685 & 0.614 & 0.655 & 0.632 & 0.557 & 0.609 \\
$\mathrm{BLSTMCNN_{m}}$ & 0.684 & 0.683 & 0.678 & 0.568 & 0.682 & 0.689 & 0.685 & 0.576 \\
$\mathrm{BLSTMCNN_{ml}}$ & 0.658 & 0.675 & 0.678 & 0.618 & 0.668 & 0.680 & 0.685 & 0.614 \\
$\mathrm{BLSTMCNN_{mld}}$ & 0.661 & 0.625 & 0.678 & 0.596 & 0.668 & 0.639 & 0.685 & 0.592 \\
\hline
\end{tabular}
\vspace{1mm}
\caption{Cross-domain scores for W00 and WFMALL datasets as training and test sets, respectively.}\label{cross-domain-tab-w00-wfm}
\end{table}
\subsubsection{Cross-domain results}
The third series of tests was performed using one dataset as the training set and another as the test set, where WFMALL serves as either the training or the test dataset.
The primary aim of these tests was to see how well mathematical definitions could be located using general datasets as a training set. An additional goal was to determine whether training the system on mathematical definitions improved recognition of general definitions.
Results are given in Tables~\ref{cross-domain-tab-wfm-wcl} through \ref{cross-domain-tab-w00-wfm} and visualized in Figure~\ref{cross-domain}.
The outcomes observed in this experiment are very similar to those observed in the in-domain runs. The deep neural network models that used the three representations (see Section~\ref{input-sec}) significantly outperformed all the baselines in most cases, with few exceptions.
From Tables~\ref{cross-domain-tab-wfm-wcl} and \ref{cross-domain-tab-wfm-w00} it can also be seen that models having a CNN layer outperform the other models in most scenarios.
As such, we reach the same conclusions as for the in-domain experiments: (1) the CNN model---pure or integrated with BLSTM---achieves better performance than LSTM, which can be explained by the ability of CNN to learn features and reduce the number of free parameters in a high-dimensional sentence representation; (2) most models usually gain better performance with dependency information.
As in the other evaluation scenarios, the baselines sometimes performed better than the NN models when the sentences were represented with BERT or self-embedding vectors. In general, we observed that BERT and self-embedding vectors did not improve NN performance in any evaluation scenario.
Another important observation can be made by comparing the performance of the in-domain and cross-domain experiments (see Figure~\ref{cross-domain}). The performance of all models with cross-domain learning is far lower than with in-domain learning. We therefore conclude that mathematical definitions require special treatment and that cross-domain learning is inefficient here. Models trained to detect general definitions are insufficient for detecting mathematical definitions; likewise, models trained on the mathematical domain are not very helpful for detecting general definitions.
\subsubsection{Binary classification with fine-tuned BERT}
We performed a small experiment with BERT fine-tuned on our WFMALL dataset for the DE task (as binary classification). The purpose of this experiment was to see how much fine-tuning improves BERT's performance over the pre-trained model.
We used the StrongIO package~\citep{strongio} to fine-tune the BERT model, adapting the code to handle binary sentence classification and a train-validation-test evaluation mode (the original code supports only a train-validation mode). The BERT model we used is bert$\_$uncased$\_$L-12$\_$H-768$\_$A-12~\citep{bert-code}.\footnote{According to the Google release notes~\citep{huggingface}, this is the model that can be fine-tuned on a server with less than 64 GB of memory, which fits our server.} This BERT model has $110,302,011$ parameters, of which $21,460,737$ are trainable. We used a $70\%$-$5\%$-$25\%$ train-validation-test split, as in all other experiments.
\begin{table}
\scriptsize
\begin{tabular}{llllll}
\hline
Dataset & Domain & accuracy & recall & precision & F1 \\
\hline
WCL & in-domain & \textbf{0.964} & 0.964 & 0.948 & \textbf{0.956} \\
WFMALL & in-domain & 0.844 & 0.864 & 0.627 & 0.727 \\
W00 & in-domain & 0.804 & 0.854 & 0.489 & 0.622 \\
MULTI & multi-domain & 0.804 & 0.854 & 0.489 & 0.622 \\
WFMALL$=>$WCL & cross-domain & \textbf{0.929} & 0.876 & 0.955 & \textbf{0.914} \\
WFMALL$=>$W00 & cross-domain & 0.666 & 0.000 & 0.000 & 0.000 \\
WCL$=>$WFMALL & cross-domain & 0.516 & 0.391 & 0.960 & 0.555 \\
W00$=>$WFMALL & cross-domain & 0.675 & 0.402 & 0.067 & 0.114 \\
\hline
\end{tabular}
\vspace{1mm}
\caption{Fine-tuned BERT.}\label{bert}
\end{table}
As can be seen from Table~\ref{bert}, the fine-tuned BERT model outperforms the other models on the WCL dataset in both the in-domain and cross-domain scenarios and on both metrics. However, it does not outperform the top models on the other datasets in any of the three scenarios. The BERT model has difficulty detecting true positives (definitions) and therefore suffers from low precision, recall, and F1 results.
We therefore conclude that even fine-tuned BERT is not well suited to the DE task in the mathematical domain.
\subsection{Discussion and error analysis}
As can be observed from the experimental results, the NN models outperform all baselines in most cases of two scenarios (in-domain and multi-domain).
We also can confirm that the use of syntactic information as a part of input configurations ($\mathit{ml}$ and $\mathit{mld}$) improves the results in most scenarios.
Moreover, during the experiments we observed that, when dependency encoding is used, keeping separate information about sentence words (as word vectors) has no significant impact on classification results. This allows us to decrease the model size and achieve better time complexity without significantly harming performance.
We can conclude that mathematical definitions require special treatment.
Finally, we see that generally both CNN and its combination with LSTM are more beneficial than selecting a plain LSTM model. This is probably due to the ability of CNN to perform abstract feature engineering before computing the classification model.
Regarding the word embedding models, we can conclude that neither pre-trained BERT nor self-embedding is helpful for the DE task. BERT usually helps in IR tasks, where the surrounding context of a word is considered in its representation. However, this quality is not helpful for distinguishing between definitions and regular sentences, especially when the representation is trained on a general domain and applied to a specific one, such as mathematical articles. Self-embedding requires a much larger training set than we could provide for our task. Both word2vec and fastText vectors provided comparable results. In line with our expectations, the fastText vectors usually provided better results, owing to the broader word coverage that fastText offered for all three datasets.
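Word coverage of a pre-trained embedding over a dataset can be estimated as the fraction of distinct tokens that have a vector (a rough sketch with a toy vocabulary; note that fastText can additionally compose vectors for out-of-vocabulary words from character n-grams):

```python
def vocabulary_coverage(dataset_tokens, embedding_vocab):
    """Fraction of a dataset's distinct token types that have a
    pre-trained vector -- a rough proxy for an embedding's word
    coverage of the dataset."""
    types = set(dataset_tokens)
    covered = types & set(embedding_vocab)
    return len(covered) / len(types)

tokens = ["a", "group", "is", "a", "set", "with", "an", "operation"]
vocab = {"a", "group", "is", "set", "with", "an"}  # toy vocabulary
print(vocabulary_coverage(tokens, vocab))  # 6 of 7 types covered
```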
As regards runtime performance, training the NN models took from 1 to 3 hours on our server, depending on the model. A significant portion of that time was spent on dependency parsing. Using the BERT-as-a-service python package~\citep{bert-service} to obtain BERT vectors for our input text also took considerable time. As for the models, those with CNN as their first layer were much faster, owing to the feature-space reduction.
We tried to understand which sentences represent difficult cases for our models. During the annotation process, we found that multiple sentences were assigned different labels by different annotators. The final label for such sentences was decided by majority voting, but all annotators agreed that the decision was not unambiguous. Based on our observations and manual analysis, we believe that most of the false positive and false negative cases were produced by such sentences. We categorized these sentences into the following cases:
\begin{enumerate}
\item Sentences describing properties of a mathematical object. Example (annotated\footnote{gold standard label = ``definition''} as a \textit{definition}):
$$\begin{array}{l}
\mbox{\it An extremum may be local ( a.k.a. a relative extremum; }\\
\mbox{\it an extremum in a given region which is not the overall }\\
\mbox{\it maximum or minimum ) or global. }%
\end{array}$$
We did not instruct our annotators on how to label this sentence type and let them make decisions based on their knowledge and intuition. As a result, this sentence type received different labels from different annotators.
\item Sentences providing alternative naming of a known (and previously defined) mathematical object. Example (annotated as a \textit{non-definition}):
$$\begin{array}{l}
\mbox{\it Exponential growth is common in physical processes such }\\
\mbox{\it as population growth in the absence of predators or }\\
\mbox{\it resource restrictions (where a slightly more general form }\\
\mbox{\it is known as the law of growth).}
\end{array}$$
Annotators treated this sentence type in the same way as type (1), with the same inconsistent outcomes in our dataset.
\item Formulations: sentences that define some mathematical object by a formula (in contrast to a verbal definition, which explains the object's meaning). Example (annotated as a \textit{non-definition}):
$$\begin{array}{l}
\mbox{\it Formulas expressing trigonometric functions of an angle 2x}\\
\mbox{\it in terms of functions of an angle x, sin(2x) = [FORMULA]. }
\end{array}$$
If both a definition and a formulation sentence for the same object were provided, our annotators usually assigned them different labels. However, in rare cases a mathematical object can be defined only by a formula. Sometimes it can be defined both ways, but the verbal definition is absent from the analyzed article. In such cases, annotators assigned the \enquote*{definition} label to the formulation sentence.
\item Sentences that are parts of a multi-sentence definition. Example (annotated as a \textit{non-definition}):
$$\begin{array}{l}
\mbox{\it This polyhedron is known as the dual, or reciprocal. }
\end{array}$$
We instructed our annotators not to assign the \enquote*{definition} label to sentences that do not contain comprehensive information about the defined object. However, some such sentences were still annotated as \enquote*{definition}, especially when they appeared in a sequence.
\item Descriptions -- sentences that describe mathematical objects but do not define them unequivocally. Example (annotated as \textit{non-definition}):
$$\begin{array}{l}
\mbox{\it A dragon curve is a recursive non-intersecting curve }\\
\mbox{\it whose name derives from its resemblance to a certain}\\
\mbox{\it mythical creature. }
\end{array}$$
Although this sentence looks grammatically like a legitimate definition, it was labeled as a non-definition because its claim does not hold in both directions (not every recursive non-intersecting curve is a dragon curve). Because none of our annotators was an expert in all mathematical domains, it was difficult for them to assign the correct label in all similar cases.
\end{enumerate}
As a result of subjective annotation (which occurs frequently in all IR-related areas), none of the ML models trained on our training data was very precise on ambiguous cases like those described above.
Below are several examples of sentences misclassified as definitions (false positives\footnote{Sentences with the gold-standard label ``non-definition'' but classified as ``definition.''}), one from each type described in the list above:
\begin{enumerate}
\item Property description:
$$\begin{array}{l}
\mbox{\it Every pentagonal number is 1/3 of a triangular number.}
\end{array}$$
\item Alternative naming:
$$\begin{array}{l}
\mbox{\it However, being in \enquote*{one-to-one correspondence} is synonymous }\\
\mbox{\it with being a bijection.}
\end{array}$$
\item Formulations and notations:
$$\begin{array}{l}
\mbox{\it The binomial distribution is therefore given by $P_p(n|N ) = $[FORMULA].}\\
\:\: \\
\mbox{\it For a binary relation R, one often writes aRb to mean that $(a, b)$ is in $R \times R$.}
\end{array}$$
\item Partial definition:
$$\begin{array}{l}
\mbox{\it A center X is the triangle centroid of its own pedal }\\
\mbox{\it triangle iff it is the symmedian point.}
\end{array}$$
This sentence was annotated as a non-definition because it does not define the symmedian point.
\item Description:
$$\begin{array}{l}
\mbox{\it The cyclide is a quartic surface, and the lines of curvature }\\
\mbox{\it on a cyclide are all straight lines or circular arcs.}
\end{array}$$
\end{enumerate}
Most misclassified definitions (false negatives) share an atypical grammatical structure. Examples of such sentences can be seen below:
$$\begin{array}{l}
\mbox{\it Once one countable set S is given, any other set which can be put }\\
\mbox{\it into a one-to-one correspondence with S is also countable.}\\
\:\: \\
\mbox{\it The word cissoid means \enquote*{ivy-shaped.}}\\
\:\: \\
\mbox{\it A bijective map between two metric spaces that preserves distances, }\\
\mbox{\it i.e., d(f(x), f(y)) = d(x, y), where f is the map and d(a, b)}\\
\mbox{\it is the distance function.}
\end{array}$$
We propose to deal with some of the identified error sources as follows. Partial definitions can probably be discarded by applying part-of-speech tagging and pronoun detection. Coreference resolution (CR) can be used to identify the referred mathematical entity in the text. The partial-definition problem could also be resolved by extending the DE task to multi-sentence DE.
Formulations and notations can probably be discarded by measuring the ratio between mathematical symbolism and regular text in a sentence. Sentences providing alternative names for mathematical objects can be discarded if we are able to detect the true definition and select it from multiple candidates. This problem could probably also be addressed with CR techniques such as split antecedents and coreferring noun phrases.
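As an illustration, one very simple version of the symbolism-ratio heuristic could look as follows. This is a hypothetical sketch, not the filtering used in the paper; the token pattern (counting [FORMULA] placeholders, operator runs, and single-letter variables as symbolism) is an assumption:

```python
import re

# Hypothetical pattern: [FORMULA] placeholders, runs of digits/operators,
# or single-letter variable names count as mathematical symbolism.
MATHISH = re.compile(r"^(\[FORMULA\]|[0-9=+*/^<>()|-]+|[a-zA-Z])$")

def symbolism_ratio(sentence: str) -> float:
    """Fraction of whitespace tokens that look like mathematical symbolism
    rather than regular words."""
    tokens = sentence.split()
    if not tokens:
        return 0.0
    n_math = sum(1 for t in tokens if MATHISH.match(t))
    return n_math / len(tokens)

# A formulation-style sentence scores higher than a verbal definition.
print(symbolism_ratio("sin(2x) = [FORMULA] ."))                                  # prints 0.5
print(symbolism_ratio("A dragon curve is a recursive non-intersecting curve ."))
```

A threshold on this ratio could then flag formulation candidates for exclusion, at the cost of missing formulas written out mostly in words.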
\section{Conclusions\label{conc-sec}}
In this paper we introduce a framework for DE from mathematical texts, using deep neural networks. We introduce a new annotated dataset of mathematical definitions, called WFMALL. We test state-of-the-art approaches for the DE task on this dataset. In addition, we introduce a novel representation for text sentences, based on dependency information, and models with different combinations of CNN and BLSTM layers, and compare them to state-of-the-art results.
With regard to our hypotheses, we can conclude the following.
\begin{enumerate}
\item NNs generally perform better than baselines (hypothesis 1 is accepted);
\item Our experiments demonstrate the superiority of CNN and its combination with LSTM, applied on a syntactically-enriched input representation (hypotheses 2 and 3 are accepted);
\item FastText embedding vectors contribute to better NNs performance (hypothesis 4 is accepted);
\item NNs with self-trained embedding vectors performed the worst, as expected (hypothesis 5 is accepted);
\item Mathematical definitions require special treatment -- models trained on non-mathematical domains are not very helpful for extraction of mathematical definitions (hypothesis 6 is accepted);
\item An additional experiment performed on our newly collected dataset of Wikipedia articles from the mathematical category demonstrates the ability of our approach to detect mathematical definitions in most of the collected articles. However, not every article contains definitions (hypothesis 7 is rejected).
\end{enumerate}
In addition, we can conclude that using BERT vectors pre-trained on general text does not yield much performance gain. Contrary to our expectations, BERT fine-tuned on the DE classification task and the definitions data did not demonstrate superiority in the mathematical domain.
In our future work, we plan to extend syntactic structure-based representation from single sentences to the surrounding text. We also plan to expand the classic single-sentence DE task to the task of multi-sentence DE.
The proposed approach can be adapted to multi-sentence definition extraction if we train our model to detect definition boundaries. This task can be reduced to a multi-class classification, where each sentence must be assigned to one of several classes, such as \enquote*{start of a definition,} \enquote*{definition,} \enquote*{end of definition,} and \enquote*{non definition.}
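The boundary-labeling scheme above can be sketched as follows. This is an illustrative reduction only (the helper, its signature, and the span representation are assumptions, not part of the paper's implementation):

```python
def label_sentences(n_sentences, def_spans):
    """Assign one of four classes to each sentence index, given a list of
    (start, end) inclusive index spans that cover definitions."""
    labels = ["non definition"] * n_sentences
    for start, end in def_spans:
        if start == end:
            # A definition contained in a single sentence.
            labels[start] = "definition"
        else:
            labels[start] = "start of a definition"
            for i in range(start + 1, end):
                labels[i] = "definition"
            labels[end] = "end of definition"
    return labels

# Six sentences; sentences 1-3 form one definition, sentence 5 another.
print(label_sentences(6, [(1, 3), (5, 5)]))
```

A sequence classifier trained on such labels could then recover multi-sentence definitions by grouping everything between a start and an end label.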
\newpage
\bibliographystyle{model5-names}
In Buddhist literature, the sixteen frightening dreams of King Pasenadi (Sanskrit; Pali ) are a notable topic: the dreams were dreamt by King Pasenadi of Kosala and interpreted by Gautama Buddha.
The Buddha used Anāgatam Nyandaw () to visualise the nightmares. He then explained that the nightmares foretold terrible things to come: after 2,500 Sāsanā years (), or circa 1957 A.D., the foretold events would take place.
The pictures depicting the sixteen nightmares are found in some pagodas in Myanmar.
The 16 frightening dreams
First dream
Four strong oxen, from the north, east, south and west, came to fight one another in front of the king, but just as they were about to fight, they retreated instead.
Meaning: "When bad people lead, dark clouds will gather as if to rain but bring only thunder, destroying certain crops and leaving people to starve."
Second dream
Young plants bear fruit.
Meaning: "When people begin to have short lifespans, young people will marry and have children even before they are 18 years old."
Third dream
A cow feeds on the milk of her own calf.
Meaning: "When people stop respecting their elders, parents will have to live with their children, having no one else to rely on."
Fourth dream
A farmer uses young oxen for transport and abandons the strong ones.
Meaning: "People will hire young, inexperienced people for certain jobs, especially judging crimes. Unable to do the work well, the young people will quit, while the old people will not reapply for their previous jobs, which leads to the demise of a country."
Fifth dream
A horse with two heads eats wheat from two different containers at once instead of eating from one.
Meaning: "Evil judges will take bribes from both the defendant and the jurors, and they will decide the crimes however they want."
Sixth dream
People encourage a wolf to urinate on the golden cup.
Meaning: "Good people will not be respected or praised, thus reducing reputation. So, they have to make friends with bad people for the sake of their reputation."
Seventh dream
A man spins thread and places it near his feet. A hungry she-wolf tries to eat the thread without his noticing.
Meaning: "Women will give no thought to the money their husbands have wearily earned. They will spend it on clothes, food, drink, jewelry and gambling without their husbands' knowledge."
Eighth dream
There is a large pot surrounded by smaller pots. Although the large pot is overflowing, people keep pouring water only into the large pot, and no one pours water into the small pots.
Meaning: "People will experience poverty because of their evil leaders. They will have to work hard for their leaders and not for their families, which leads to starvation."
Ninth dream
Animals drink water from a large lake that is polluted around the centre and clean around the edge.
Meaning: "Future leaders will not be sympathetic to their citizens and will force them to pay heavy taxes. Unable to withstand the taxation, people will move to the countryside."
Tenth dream
Rice is cooked in a single pot, yet some portions come out overcooked, some undercooked and some cooked right.
Meaning: "It will not rain evenly, i.e., some areas will get a lot of rain while others get none. Such rain destroys certain crops and yields crops of different qualities even though they are grown in the same field."
Eleventh dream
A man buys a valuable ruby with a liquid.
Meaning: "People will spread Buddha's teachings, asking for donations and spend for their own good. This is equivalent to buying the teachings with a few money."
Twelfth dream
Dried gourds, which usually float on water, sink in the water.
Meaning: "Wicked people will be given the duties of leading a country, and people will start to believe in them."
Thirteenth dream
A large rock is floating in water.
Meaning: "When bad people rule, good people couldn't make strong statements in arguments. So, they are no longer appreciated.
Fourteenth dream
A female frog tries to eat a large cobra.
Meaning: "Men would marry younger spouses and give them whatever they want. However, the spouses will scold them for lacking decision in certain cases."
Fifteenth dream
A crow is surrounded by golden Hamsas.
Meaning: "In the future, only people of bad characteristics will become the leaders. Not having anyone to rely on, good people will have to serve them."
Sixteenth dream
Because the goats feed on the cheetahs, the cheetahs run in fear when they see goats.
Meaning: "When wicked people become governors, they will seize the land of good people and deport them, so good people will have to flee from them."
References
Sources
Buddhist cosmology
Buddhist mythology
Prophecy in Buddhism
Dreams in religion
6th-century BC Buddhism
1992.09.24 - Los Angeles Times - Ice-T Is 'Vetoed' From 2 Guns Shows
by Blackstar on Wed Apr 18, 2018 3:06 pm
Ice-T Is 'Vetoed' From 2 Guns Shows
September 24, 1992|STEVE HOCHMAN | SPECIAL TO THE TIMES
Controversial rapper Ice-T has been "vetoed" from appearing on the Guns N' Roses/Metallica concert bills Sunday at the Los Angeles Memorial Coliseum and Oct. 3 at the Rose Bowl in Pasadena.
The rapper, whose song "Cop Killer" was the focus this summer of a national debate over lyric content in music, is doing at least five dates with the two rock bands--including Oakland tonight and San Diego on Wednesday. Ice-T and his rock group Body Count also had been asked by Guns N' Roses to do the two Los Angeles-area shows.
But concert promoter Brian Murphy said he believed that an appearance by Body Count was inappropriate because of negative "perceptions."
"I thought it was an inappropriate act, given the circumstances of where our show was taking place," Murphy said Wednesday.
He said that he had no fear of violence, but was concerned that the controversy surrounding the band could hurt sales by compounding fears of some rock fans about attending a concert at the Coliseum so soon after the Los Angeles riots.
"I want to keep the perception of our show credible with our audience and reduce any concerns anyone might have about going downtown to the Coliseum," he added. The Rose Bowl date was dropped at the same time.
Ticket sales for the Coliseum show are reportedly well below expectations. Predictions now call for only about 35,000 to 40,000 fans out of a potential 70,000. The Rose Bowl has sold out, with sales of more than 70,000 tickets. The English hard-rock band Motorhead will appear on the two Los Angeles area shows instead of Body Count.
Body Count manager Jorge Hinojosa didn't criticize Murphy for his actions. "We're glad to be doing the dates we are on the tour," he said.
Guns N' Roses lead singer Axl Rose branded the elimination of Body Count from the shows as "shallow-minded." He added, "Both Ice and myself are tired of all the racial crap. This was our chance to play together and show people that we're about artistic expression, not violence or prejudice. It comes down to this--freedom of speech is OK, as long as it doesn't piss off some public official."
Ice-T withdrew the song "Cop Killer" from Body Count's debut Sire album in July after various police organizations complained to Time Warner, which distributes Sire Records, that it encouraged violence against police officers.
The University of Indianapolis has named E. John McIlvried to be the first dean of its Graduate School.
A psychologist by training, McIlvried joined the UIndy faculty in 1993 and served first as chair of the psychology department, then since 2001 as dean of the School of Psychological Sciences. In 2007, he took on additional responsibilities as associate provost, a title he will retain.
Pending a national search to replace McIlvried as head of the psychology school, Associate Professor Rick Holigrocki has been named acting dean.
UIndy established its Graduate School in 2009 in response to the continuing growth of its advanced degree programs in such disciplines as physical therapy, occupational therapy, nursing, psychology, education and business. This fall the university has nearly 1,200 graduate students, accounting for 22 percent of enrollment at the main campus. | {
"redpajama_set_name": "RedPajamaC4"
} | 9,381 |
Download "(a) (b) All real numbers. (c) All real numbers. (d) None. to show the. (a) 3. (b) [ 7, 1) (c) ( 7, 1) (d) At x = 7. (a) (b)"
1 Chapter 0 Review 597. E; a ( + )( + ) + + S S + S S lim + l. D; a diverges by the Itegral l k Test sice d lim [(l ) ], so k l ( ) does ot coverge absolutely. But it coverges by the Alteratig Series d l l Test: Use d to show the u are decreasig for.). (a) Ratio test: + ( + ) + + lim + + lim ( + )( + ) + ( + )( ) + + < < < The series coverges absolutely o (, ). The series diverges at both edpoits by the th-term Test: ( ( ) + ) lim + 0 ad ( ( ) + ) lim 0. + Sice the series coverges absolutely o (, ) ad diverges at both edpoits, there are o values of for which the series coverges coditioally. Chapter 0 Review Eercises (pp. 5 5).. + a+! lim lim a ( )! + lim + 0 The series coverges absolutely for all. (a) All real umbers All real umbers (d) Noe + a + + lim lim a ( ) The series coverges absolutely for or 7 < <. ( ) Check 7: coverges. Check : diverges. (a) [ 7, ) ( 7, ) (d) At 7 +. This is a geometric series, so it coverges absolutely whe r < ad diverges for all other values of. Sice r ( ), the series coverges absolutely whe 5 ( ) <, or < <. (a) 5, <, Copyright 0 Pearso Educatio, Ic. Publishig as Pretice Hall.
2 598 Chapter 0 Review 5, (a) (, ) (d) Noe (, ) a+ ( )! lim lim a ( )! + lim ( + )( ) 0 The series coverges absolutely for all. (a) All real umbers All real umbers (d) Noe + a + lim lim a ( ) + The series coverges absolutely for <, or 0< <. Furthermore, whe, we have a ad coverges by the p-test with p, so a also coverges absolutely at the edpoits. (a) 0, 0, (d) Noe + a+ ( + ) lim lim. The a ( + ) series coverges absolutely for <, or < <. The series diverges for >. Whe, the series diverges by the th- Term Test (d) Noe lim a + a + ( + ) + ( + ) lim ( ) + + ( + ) + + The series coverges absolutely for + <, or < < ; the series diverges for + + >. Whe, the series diverges by the th-term Test. (a),, (d) Noe + a+ lim lim a ( ) + + lim ( + )( + ) lim ( + ) + ( ) lim e + 0 The series coverges absolutely for all. Aother way to see that the series must coverge is to observe that for, we have, so the terms are (evetually) bouded by the terms of a coverget geometric series.a third way to solve this eercise is to use the th-root Test (see Eercises 7 7 i Sectio 0.5). Copyright 0 Pearso Educatio, Ic. Publishig as Pretice Hall.
3 Chapter 0 Review 599 (a) (d) Noe 9. All real umbers All real umbers (d) Noe + a+ lim lim a + The series coverges absolutely for <, or < <. Check : ( ) coverges by the Alteratig Series Test. Check : diverges by the p-test with p.. a+ lim a + ( + ) lim + ( + ). The series coverges absolutely whe <, or < < ; the series diverges whe >. Whe, the series diverges by the th- Term Test. (a) (, ) (a) [, ) (, ) (, ) (d) Noe 0. (d) At + + a+ e lim lim e. a e ( ) + e The series coverges absolutely for e<, e. + a+ + lim lim a + +. The series coverges absolutely whe <, or 0 < <. or. e < < e Furthermore, whe e, we have a ad e the p-test with p e, so coverges by e a absolutely at the iterval edpoits. also coverges ( ) ( ) + ( ) Check 0: coverges coditioally by the Alteratig Series Test. ( ) Check : coverges 0 + coditioally by the Alteratig Series Test. (a) e, e e, e e (a) [0, ] (0, ) (d) At 0 ad Copyright 0 Pearso Educatio, Ic. Publishig as Pretice Hall.
4 600 Chapter 0 Review. + a ( )! + + lim lim a +! ( + ) lim 0, 0, 0 The series coverges oly at a+ ( + )! lim lim a ( + )! lim ( + ) ( 0) + The series coverges oly at 0. (a) 0 (a) 0 0 oly 0 oly 0 0 (d) Noe. (d) Noe + a+ 0 l lim lim 0 a l( ) + 0 The series coverges absolutely for 0 <, or 0 < < 0. ( ) Check : coverges by the 0 l Alteratig Series Test. Check : diverges by the Direct 0 l Compariso Test, sice > for ad l diverges. (a) 0 (d) At, 0 0, This is geometric series with r, so it coverges absolutely whe <, or < <. It diverges for all other values of. (a) (, ) (, ) (d) Noe f( ) + + ( ) +, + evaluated at f( ) l( + ) Sum ( ) + + ( ), evaluated at. 5 Sum l + l. f( ) si ( ) +,! 5! ( + )! evaluated at π, Sum si π 0. Copyright 0 Pearso Educatio, Ic. Publishig as Pretice Hall.
5 Chapter 0 Review f( ) cos + + ( ) +,!! ( )! evaluated at π π. Sum cos. f( ) e ,!! evaluated at l. Sum e. f( ) ta l ( ) +, 5 + evaluated at. Sum ta π 6. (Note that whe is replaced by, the geeral term of ta becomes ( ), which matches the geeral term give i the eercise.). Replace by 6 i the Maclauri series for + ( 6) + ( 6) + + ( 6) ( 6) +. Replace by i the Maclauri series for ( ) + ( ) + ( ) ( ) + + give at the ed of Sectio 0.. give at the ed of Sectio The Maclauri series for a polyomial is the polyomial itself: ( ) Replace by π i the Maclauri series for si give at the ed of Sectio 0.. ( ) ( ) 5 ( ) + π π π si π π + + ( ) +! 5! ( + )! Copyright 0 Pearso Educatio, Ic. Publishig as Pretice Hall.
6 60 Chapter 0 Review 8. Replace by i the Maclauri series for si give at the ed of Sectio 0.. ( ) ( ) ( ) 5 + si + + ( )! 5! ( )! ( ) ( ) ( + )! si ( ) +! 5! 7! ( + )! ( ) +! 5! 7! ( + )! e + e ( ) +!!!! !! ( )!. Replace by 5 i the Maclauri series for cos give at the ed of Sectio 0.. ( 5) ( 5) ( 5) cos ( )!! ( )! + 5 ( 5) ( 5) + + ( ) +!! ( )! π. Replace by i the Maclauri series for π π ( ) ( ) π / π e !! π π π ! + e give at the ed of Sectio 0... Replace by i the Maclauri series for e give at the ed of Sectio 0., ad multiply the resultig series by. ( ) ( ) e + ( ) !! ( ) +!!. Replace by i the Maclauri series for ta give at the ed of Sectio ( ) ( ) ( ) ta + + ( ) Copyright 0 Pearso Educatio, Ic. Publishig as Pretice Hall.
7 Chapter 0 Review Replace by i the Maclauri series for l( + ) give at the ed of Sectio 0.. ( ) ( ) ( ) l( ) + + ( ) + 8 ( ). 6. Use the Maclauri series for l( + ) give at the ed of Sectio 0.. [ ] l( ) l + ( ) 7. ( ) ( ) ( ) ( ) + + ( ) + + f ( ) f ( ) ( ) f ( ) f ( ) ( ), so! f ( ) f ( ) 6( ) 6, so! ( ) ( ) f ( ) f ( )!( )!, so! + ( ) + ( ) + ( ) + + ( ) + 8. f( ) ( + 5) f ( ) ( ) 7 f ( ) f ( ) ( 6 ) 0, so 5! f ( ) f ( ) 6 6, so! ( f ) ( ) 0 for ( + ) 5( + ) + ( + ) This is a fiite series ad the geeral term for is 0. Copyright 0 Pearso Educatio, Ic. Publishig as Pretice Hall.
8 60 Chapter 0 Review 9. f () f () 9 f () f (),so 7! 7 f () f () 6,so 7! 8 ( )! f () ( )!, so + ( ) f () ( )! + ( ) ( ) + ( ) ( ) + + ( ) f( π ) si π 0 f ( π ) cos π f ( π ) f ( π ) si π 0, so 0! f ( π ) f ( π ) cos π, so! 6 0, if k is eve ( k) f ( π ), if k +, eve, if k +, odd si ( π) + ( π) ( π) + ( π) + ( )! 5! 7! ( )! ( ) + π + +. Diverges, because it is 5 times the harmoic series: 5 5. Coverges coditioally a is a diverget p-series p, so ( ) does ot coverge absolutely. Use the Alteratig Series Test to check for coditioal covergece: () () () u > 0 + > + > <, + lim u lim 0. so the u are decreasig. ( ) Therefore, coverge. Copyright 0 Pearso Educatio, Ic. Publishig as Pretice Hall.
9 Chapter 0 Review 605. Coverges absolutely by the Direct l Compariso Test, sice 0 < for ad coverges by the p-test with p.. Coverges absolutely by the Ratio Test, sice a+ +! lim lim a ( + )! + + lim ( + ) Coverges coditioally a l( + ) Compariso Test. Let a ad l( + ) b. diverges by the p-test, ad a c lim b lim l( + ) lim /( + ) lim ( + ).) diverges by the Limit ( ) Therefore, does ot coverge l( + ) absolutely. Use the Alteratig Series Test to check for coditioal covergece: () u 0 l( + ) > Clear. () + > l( + ) > l <, l( + ) l so the u are decreasig. () lim u lim 0. l ( ) Therefore, coverges. l 6. Coverges absolutely by the Itegral Test, b because d lim (l ) b l. l 7. Coverges absolutely by the Ratio Test, + a+! because lim lim a ( )! + lim Coverges absolutely by the Direct Compariso Test, sice for ad is a coverget geometric series. Alterately, we may use the Ratio Test or the th-root Test (see Eercises 7 ad 7 i Sectio 0.5). 9. Diverges by the th-term Test, sice lim lim + a Coverges absolutely by the Direct Compariso Test, sice ( + )( + ) + + > ( + )( + ) > <, ( + )( + ) / ad coverges by the p-test. / 5. Coverges absolutely by the Limit Compariso Test: Let a ad b. The Copyright 0 Pearso Educatio, Ic. Publishig as Pretice Hall.
10 606 Chapter 0 Review a c lim b lim + lim + Sice 0 < c < ad coverges by the p-test, coverges. 5. Diverges by the th-term Test, sice lim lim e 5. This is a telescopig series. ( )( ) ( ) ( ) s ( ) ( ) 6 0 s s s 6 ( ( + ) ) 6 ( + ) S lim s 6 5. This is a telescopig series. + ( + ) + s + + s s s + + S lim s Copyright 0 Pearso Educatio, Ic. Publishig as Pretice Hall.
11 Chapter 0 Review (a) f () f () P ( ) f( ) + f ( )( ) + ( ) + ( )!! + ( ) + ( ) + ( ) f(.) P (.) 96. Sice the Taylor series for f ca be obtaied by term-by-term differetiatio of the Taylor Series for f, the secod order Taylor polyomial for f at is + 6( ) + 6( ). Evaluated at.7, f ( 7. ) 7.. It uderestimates the values, sice f () 6, which meas the graph of f is cocave up ear. 56. (a) Sice the costat term is f (), f () 7. Sice f ( ), f ( ).! Note that P ( ) + 0( ) 6( ) + ( ). The secod order polyomial for f at is give by the first three terms of this epressio, amely + 0( ) 6( ). Evaluatig at., f (. ) 05.. The fourth order Taylor polyomial for g() at is 5 [ 7 ( t ) + 5( t ) ( t ) ] d 7t ( t ) + ( t ) ( t ) 5 7( ) ( ) + ( ) ( ) (d) No; oe would eed the etire Taylor series for f (), ad it would have to coverge to f () at. 57. (a) Use the Maclauri series for si give at the ed of Sectio ( / ) ( / ) ( / ) 5si ( ) +! 5! ( + )! ( ) ( )! The series coverges for all real umbers, accordig to the Ratio Test: + + a+ 5 ( + )! lim lim a ( + )! 5 lim ( + )( + ) 0 ( Note that the absolute value of ) 5 f ( ) is bouded by for all ad all,,,. We may use the Remaider Estimatio Theorem with M 5 ad r. 5 5 So if < <, the trucatio error usig P is bouded by. + ( + )! ( + )! To make this less tha 0. requires. So, two terms (up through degree ) are eeded. + Copyright 0 Pearso Educatio, Ic. Publishig as Pretice Hall.
12 608 Chapter 0 Review 58. (a) Substitute for i the Maclauri series for + + ( ) + ( ) + + ( ) ( ) + give at the ed of Sectio 0..,. The series is geometric with, r so it coverges for <. ( You could also use the Ratio Test, but you would eed to verify divergece at the edpoits) f, so oe percet is approimately It takes 7 terms (up through degree 6). This ca be foud by trial ad error. Also, for, the series is the alteratig series. 0 If you use the Alteratig Series Estimatio Theorem, it shows that 8 terms (up through degree 7) are sufficiet 8 sice < It is also a geometric series, ad you could use the remaider formula for a geometric series to determie the umber of terms eeded. (See Eample i Sectio 0..) 59. (a) lim + + a+ ( + )! lim a ( )! + + ( + ) lim ( ) + + lim e The series coverges for e<, or <, so the radius of covergece is. e e f + +!! By the Alteratig Series Estimatio Theorem the error is o more tha the magitude of the et term, which is 0..! Copyright 0 Pearso Educatio, Ic. Publishig as Pretice Hall.
13 Chapter 0 Review (a) f() ( ) f () ( ) f () f () ( ),so! f () f () 6( ) 6,so! ( ) ( ) f () f () ( )!,so ( )! f( ) ( ) + ( ) ( ) + + ( ) ( ) + Itegrate term by term. l dt t ( ( t ) + ( t ) ( t ) + + ( ) ( t ) + ) dt + ( t ) t ( t ) + ( t ) ( t ) + + ( ) ( ) ( ) ( ) ( ) ( ) ( ) + + Evaluate at.5. This is the alteratig series + + ( ) + By the + ( + ) Alteratig Series Estimatio Theorem, sice the size of the third term is < 005., the first two terms will suffice. The estimate for l is (a) Substitute for i the Maclauri series for e give at the ed of Sectio 0.. ( ) ( ) ( ) e + ( ) !!! ( ) +! Use the Ratio Test: a + + +! lim lim a ( )! + lim + 0 The series coverges for all real umbers, so the iterval of covergece is (, ). The differece betwee f() ad g() is the trucatio error. Sice the series is a alteratig series, the 8 ( ) error is bouded by the magitude of the fifth term:. Sice , this term is! less tha 8 06 (. ) which is less tha 0.0. Copyright 0 Pearso Educatio, Ic. Publishig as Pretice Hall.
14 60 Chapter 0 Review 6. (a) f( ) + ( ( ) + ) ( ) + + No; at, the series is ( ) ad the partial sums form the sequece 0, 0,, 0,, 0,..., which has o limit. 6. (a) Substitutig t for i the Maclauri series for si give at the ed of Sectio 0., t t t si t t + + ( ).! 5! ( + )! Itegratig term-by-term ad observig that the costat term is 0, 7 + si t dt + + ( ) + 0 7(!) 5 (!) ( + )( + )! (d) 6. (a) si d 0 ( ). 7(!) + 5 (!) + ( + )( + )! + Sice the third term is < 000., it suffices to use the first two ozero terms (through (!) 5 0 degree 7). NINT(si,, 0, ) , (!) 5 (!) 5(!) , This is withi 5. 0 of the aswer i. Let f( ) e d. e d 0 f( d ) 0 h [ f ( 0) + f ( 05. ) + f ( )] 05. e e 05. e e Copyright 0 Pearso Educatio, Ic. Publishig as Pretice Hall.
15 Chapter 0 Review 6 e !! !! P ( ) P 0 ( ) Sice f is cocave up, the trapezoids used to estimate the area lie above the curve, ad the estimate is too large. (d) Sice all the derivatives are positive (ad > 0), the remaider, R ( ), must be positive. This meas that P ( ) is smaller tha f(). (e) Let u dv e d du d v e ed e e d Let u dv e d du d v e e e d e e e d e e + e + C ( + ) e + C e d ( + ) e 0 0 e (a) Because [$ 000(. 08) ](. 08) $ 000 will be available after years. Assume that the first paymet goes to the charity at the ed of the first year. 000(. 08) + 000(. 08) + 000(. 08) This is a geometric series with sum equal to, 500. ( 08) be ivested today i order to completely fud the perpetuity forever. This meas that $,500 should Copyright 0 Pearso Educatio, Ic. Publishig as Pretice Hall.
16 6 Chapter 0 Review 66. We agai assume that the first paymet occurs at the ed of the year. Preset value 000(. 06) + 000(. 06) + 000(. 06) ( 06. ) , The preset value is $ 6, (a) Sequece of Tosses Payoff ($) T 0 Probability HT HHT Term of Series 0 HHHT + Epected payoff d ( ) ( ) d ( ) ( ) (d) If, the formula i part matches the ozero terms of the series i part (a). Sice ( ), the epected payoff is $. ( ) Copyright 0 Pearso Educatio, Ic. Publishig as Pretice Hall.
17 Chapter 0 Review (a) The area of a equilateral triagle whose sides have legth s is areas removed from the origial triagle is b b b b + 9 or b b b b s s () s. The sequece of b This is a geometric series with iitial term a ad commo ratio b b, which is the same as the area of the origial triagle. r, so the sum is No. For eample, let b ad set the base of the origial triagle alog the -ais from (0, 0) to (, 0). k The poits removed from the base are all of the form 0,, so poits of the form (, 0) with irratioal (amog others) still remai. The same sort of thig happes alog the other two sides of the origial triagle, ad, i fact, alog the sides of ay of the smaller remaiig triagles. While there are ifiitely may poits remaiig throughout the origial triagle, they paradoically take up zero area Differetiate both sides ( ) Substitute to get the desired result. 70. (a) + Note that is a geometric series with first term + the idetity ( for < ). Differetiate. ( )( ) ( )( ) ( + ) ( ) ( ) Differetiate agai, a ad commo ratio r, which eplais Copyright 0 Pearso Educatio, Ic. Publishig as Pretice Hall.
18 6 Chapter 0 Review ( + ) ( ) ( ) ( )[ ( )( )] ( ) ( ) ( ) + ( )( ) ( ) ( )[( )( ) + ( )] ( ) ( ) Multiply by. ( + ) ( ) Replace by. ( ) ( + ), > ( ) Use a grapher to solve the equatio ( ). The grapher calculates that. 769 is the solutio of the equatio that is greater tha. 7. (a) Computig the coefficiets, f () f ( ) ( + ), so f ( ) f () f ( ) ( + ), so! 8 f () f ( ) 6( + ), so! 6 ( ) I geeral, f ( ) ( )!( + ), so f( ) ( ) f () ( )! +. ( ) ( ) + + ( ) Ratio test for absolute covergece: lim + + i + < < <. The series coverges absolutely o (, ). At, the series is, 0 which diverges by the th-term test. At, the series is ( ), 0 which diverges by the th-term test. The iterval of covergece is (, ). ( ) ( ) P ( ) P ( 05. ) 05. ( 05. ) ( 05. ) (a) Ratio test for absolute covergece: + ( + ) lim + + lim < < < The series coverges absolutely o (, ). The series diverges at both edpoits by the th-term test, sice lim 0ad lim ( ) 0. The iterval of covergece is (, ). The series coverges at ad forms a alteratig series: 8 6 ( ) The th-term of this series decreases i absolute value to 0, so the trucatio error after 9 terms is less tha the absolute th value of the0 term. Thus, 0 error < < (a) P ( ) + Copyright 0 Pearso Educatio, Ic. Publishig as Pretice Hall.
19 Chapter 0 Review 65 (d) P ( ) + P ( ) + + P ( 07. ) + 07 (. ) ( 07. ) + ( 07. ) 006. Copyright 0 Pearso Educatio, Ic. Publishig as Pretice Hall.
My basement flooding alarm was put in place because this year we have seen extreme rain on a few occasions, which in one case actually flooded the street, and as a result also my basement.
With the electrical cabinet and utility connections in the basement, that could have been very bad if the water level had reached the cabinet itself, so I needed a way to measure water levels and automatically start a sump pump when not at home (this time my neighbours called me at work to tell me what was happening in the street).
As the house this alarm was built for is not always occupied, we needed notifications via the internet.
In the house I set up a MySensors network with a few nodes and a Raspberry Pi which integrates the Controller (Domoticz) and Gateway functions (an nrf24l01+ connected directly to the Pi via an interface board).
The basement is now being watched by a node which measures the distance to the floor using a cheap ultrasonic sensor. It will switch a relay if the "floor" rises a certain amount (10cm) and will switch off the relay when the "floor" is back within 2 cm from initial position.
The node resets itself to a start position on startup. This means that during startup a first distance measurement is done. This is the "zero" level. Any deviation from that first measured distance is "the change".
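The zero-level and switching behaviour described above is a simple hysteresis check. Here is a minimal sketch of that logic, written in Python purely for illustration (the real node runs an Arduino sketch; the class and constant names below are my own):

```python
# Illustrative model of the node's relay logic (hypothetical names).
# The first distance reading at startup becomes the "zero" level; the
# downward-pointing sensor reads a SMALLER distance as water rises.

TRIGGER_RISE_CM = 10  # rise above zero level that switches the relay on
RESET_BAND_CM = 2     # once back within this band of zero, switch off

class FloodNode:
    def __init__(self, first_distance_cm):
        self.zero_cm = first_distance_cm  # "zero" level taken at startup
        self.relay_on = False

    def update(self, distance_cm):
        rise = self.zero_cm - distance_cm  # how far the "floor" has risen
        if not self.relay_on and rise >= TRIGGER_RISE_CM:
            self.relay_on = True           # start the sump pump
        elif self.relay_on and rise <= RESET_BAND_CM:
            self.relay_on = False          # water receded, stop the pump
        return self.relay_on
```

Using two separate thresholds (on at a 10cm rise, off only within 2cm of zero) keeps the relay from chattering when the level hovers around a single trigger point; it also shows how whatever distance is read at startup silently becomes the new zero.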
You can use any relay module which can be controlled by a digital pin.
I use the PA+LNA version of the nrf24l01+ radio, since the basement is covered by a slab of metal-reinforced concrete. A normal radio did not reach the gateway.
I saw your demonstration during the Dutch MySensors meetup and think this project is one of the first that demonstrates MySensors at its fullest capabilities.
Small caveat @GertSanders: if your basement is flooded with 9cm of water and still rising, and at that moment the power interrupts for a few seconds, your node will reset and take the 9cm as the new zero level, meaning your pump will only start when there's 19cm of water. It won't be a big issue, but your pump will need to run longer, and you'd better hope there won't be any other power interruptions.
I considered this point and plan to add an EEPROM save of the zero level in a next version of the sketch, but as a quick fix the current version has proven adequate.
Nice project and nice board, I have ordered a bunch and they are on the way from dirtyPCB.
Why not use a water level sensor like the one below? I bought some and they are quite precise with very reproducible values. Is it to avoid any risk from wires coming from the mains-powered box going into the water?
@Nca78 I'm just curious what's the max depth that you can measure with those water level sensors?
The length of the "stripes" is 4cm, so that's all you've got to measure the level. But in this case it's enough to know if the basement is flooded or not and start the pump. With the components on the top part + connectors protected by oxime (non-acid) silicone sealant, it won't be a problem if the water level rises higher than the sensor.
The reason I use the ultrasonic sensor is because I do not like to put a wire along the wall and a sensor on ground level. Now I only have a small case at cabinet level, with the US sensor directed to the floor. And I can measure any water level up to 255 cm with this.
A colleague of mine uses this same node to measure the amount of rainwater collected in his rainwater tank.
In Domoticz I have set up several triggers so that I get notifications at specific water levels. This is needed if water levels keep rising while the pump is working, meaning the pump cannot handle the flooding and needs to be assisted by extra equipment.
@GertSanders I had a feeling that you used the ultrasonic sensors for a particular reason. Measuring the level of rain in my rainwater tank is definitely a project I've put on my todo list.
The 4cm sensor is nice for an alarm under a washing machine, for example, but in my "watertank"-like situation (I have had up to 122cm of water in the basement) that would not tell me much.
The ultrasonic sensor I use now also exists in a waterproof variant (as used in car bumpers). I still need to experiment with that, as it would be nicer to use in a watertank (it is better protected against corrosion).
In my current setup, I expect the electronics to degrade after some time, since they are used in a very humid environment. The box all this is mounted in, is certainly NOT waterproof.
@Nca78 have you been using the Rain Water Level Sensor modules for a while? Can you run more than one on a pro mini? I was thinking, since they are small, that I could use a few at different levels to trigger different actions. I saw a water level eTape, but it is a bit expensive (haven't seen a Chinese knockoff yet). Or I suppose I could combine the sensor for initial water detection with the ultrasonic to figure out depth.
@crodgers no, I just tested them for the moment. But they all gave me close values for similar levels of water. It has an analog output, so you can combine as many as there are analog pins on your arduino, so up to 8 on a pro mini if you are not using I2C devices (I2C uses A4/A5).
I plan to use these for simple flooding or rain detection more than for the water level.
I found these sensors too; they look a bit bigger and have gold coating, which would solve the potential hazard of lead solder and increase durability, but there are no dimensions on the website. They also don't have the LED, which is an advantage for me when running on batteries, with no need to remove it by hand.
Nice build, and great idea using the ultrasonic sensor.
This sort of thing was the exact reason I got into arduino and eventually mysensors. I had a concern about a damp basement and went looking for a water leak detection alarm but the prices were so high at the time... eventually I stumbled upon arduino and the obsession went from there.
The first node I ever made was a water leak detection system that used the 4cm thing mentioned above... then I realized I had all these leftover pins so I filled them up with gas, flame, temperature, humidity and light sensors.
Phaedrotettix bistrigatus is an insect species first described by Scudder, S.H. in 1897. Phaedrotettix bistrigatus belongs to the genus Phaedrotettix and the family of grasshoppers. No subspecies are listed in the Catalogue of Life.

Sources

Grasshoppers
bistrigatus
Interested in learning more about Agile? Then come join the Agile Community of Practice, where we will dig into concepts, methods, and techniques for applying Agile to your projects and explore the PMI-ACP principles.
Our goal is to increase awareness and knowledge of Agile project/program management.
This is a great opportunity to learn and apply relevant takeaways, and a great forum for networking with others who specialize in Agile.
Meetings occur every odd month, but the exact day will vary. See the calendar for the exact date and time (typically from 5:30 – 7:00pm).
Stay informed of the latest news and events in Agile!
Harbarian process modeling (HPM) is a method for obtaining internal process information from an organization and then documenting that information in a visually effective, simple manner.
The HPM method involves two levels:
Process diagrams: High-level overviews of specific processes or workflows.
Systems diagrams: Mapping how each process is correlated, as well as various inputs, outputs, goals, feedback loops, and external factors.
HPM method purpose
The primary purpose of the HPM method is to first elicit process information from all relevant stakeholders and subsequently document existing processes completed within an organization. This method addresses the problem of workplace inefficiency, which can largely be attributed to the majority of processes being undocumented and informally completed.
The formal documentation of processes offers to replace ambiguity and uncertainty with clarity and transparency for the work being completed, both for process stakeholders and for upper management. The development of formal documentation also provides the opportunity to reassess process efficacy. Stakeholders will be given the chance to offer their innate insight into process strengths, weaknesses, and redundancies.
HPM output
The final output of the HPM method is the formalized master documentation of an organization's or branch's workflows and processes. This collection is divided into specific process series, each for a specific group or team. Each process series is divided into the team's major workflows which are individually documented into HPM process diagrams. Each process series also includes an HPM systems diagram which shows the relationships and connections between the various processes, inputs, outputs, feedback loops, external environment, and system goals.
HPM process diagram
HPM process diagrams provide a high-level overview of a specific workflow or process completed by a business unit. These diagrams are not meant to provide detailed instructions on procedures or codes, but instead address all major steps, decisions, and evaluations that are included in a process. Once finalized, these documents can be used as a reference for anyone in the organization. For example:
The process owners can utilize the diagrams to train new employees.
Other groups can reference the diagrams for enhanced understanding and communication.
Upper management can reference the diagrams for increased process transparency and decision-making.
HPM process diagrams can be customized to fit the specific needs of an organization, however, typically include:
Process title
Process phases
Timeline (if applicable)
Sequential process steps
Legend/key
HPM system diagram
HPM system diagrams provide a holistic view of a set of process diagrams. The system focuses on the connections and relationships between various processes. These diagrams also address the system as a collection of:
Inputs
Transformations
Outputs
Goals
Feedback
External factors
HPM implementation
The HPM method implementation is completed in five main phases. Meetings with stakeholders from organizational teams are conducted to identify major processes, document each process in detail, and develop implementable solutions. Information is elicited from stakeholders and then formally documented into process flowchart diagrams and systems thinking diagrams for use within the organization:
initial elicitation and collaboration,
preliminary documentation,
follow-up elicitation and collaboration,
final documentation, and
project package submission.
Initial elicitation and collaboration
The first phase of the HPM method involves scheduling and meeting with each major team that makes up an organization or branch. Meetings are then conducted in the form of an interview and followed a detailed protocol to establish the meeting purpose, convey expected benefits, and to elicit information about the respective team's processes.
Meetings begin with an explanation of the purpose, as well as a list of expected benefits to each team. Clarification should then be given to all questions posed by stakeholders to ensure buy-in from all members of the respective team.
Next, each team should provide a high-level overview of all of the major processes they complete on a regular basis. Each of these processes can then be discussed in detail. The chronological order of tasks for each process is elicited and inputs, outputs, operations, decision points, and evaluations are identified.
Preliminary documentation
The second phase, preliminary documentation, begins after all process information is elicited from all organizational teams. Each process is then organized and formatted into a HPM process diagram. Processes are designed with a title, overview of process phases, timeline (if applicable), and specific steps in sequential order.
Follow-up elicitation & collaboration
After all preliminary HPM process diagrams are drafted, follow-up meetings with each of the teams is conducted. These meetings open with a review of the respective team's HPM process diagrams for accuracy. This review also serves as a means to prime stakeholders for the three stages of brainstorming: (1) prepare the group, (2) present the problem, and (3) guide the discussion.
Prepare the group
Teams are primed for brainstorming through the review of their HPM process diagrams. This step reminds stakeholders about the content being discussed and allows them to think about each process in detail, reviewing what works well and what may be improved. Additionally, the time between the initial interview and follow-up meeting should have provided each stakeholder with the opportunity to independently think about the processes.
Present the problem
Once prepared for brainstorming, teams are tasked with problem identification. While the act of formally documenting processes innately addresses existing problems with process efficiency and ambiguity, brainstorming is meant to focus on further solving these problems. This involves a brief independent reflection for each stakeholder of their existing processes' efficacy, strengths, and areas that could be or need to be improved.
Guide the discussion
To facilitate the brainstorming session, teams are guided through the four stages of AI: (1) discovery, (2) dream, (3) design, and (4) destiny. Each stage consists of a discussion guided with specific AI-based questions crafted to elicit ideas and solutions founded out of positivity.
Discovery
The first stage, discovery, appraises stakeholders and existing workflows, identifying what already works well and "appreciating and valuing the best of 'what is'". Stakeholders are asked AI-based questions designed to elicit the best of their respective team. For example, stakeholders could identify personal strengths of specific stakeholders, strong points within existing processes, and environmental factors that enabled the team to operate at their best.
Dream
The second stage, dream, asks teams to envision a future based on the positives discovered in the first stage of AI. Questions posed to teams allow them to explore optimistic possibilities of what could be accomplished while intentionally overlooking deficits and struggles that existed in the past. For example, stakeholders could envision what their team would be able to accomplish when operating at their best or what factors would enable the team to operate with an elevated sense of purpose.
Design
The third stage, design, focuses on teams articulating how they could turn what was identified in the dream stage into a reality: once the strategic focus or dream is articulated, "attention turns to the creation of the ideal organization" and the "actual design of the system" (p. 10). Questions should focus on action planning and identifying where specific improvements could be made within existing processes to make these optimistic futures tangible. Where the dream stage asks stakeholders to overlook deficits and struggles, the design stage asks stakeholders to develop new solutions that fix or bypass existing issues by using the teams' strengths.
Destiny
The fourth stage, destiny, concludes the AI process by having teams develop a plan to sustain what was identified in the first three stages. Utilizing the positive momentum built throughout the brainstorming session, stakeholders are likely to agree to perform specific actions. Cognitive dissonance theory postulates that by making a public commitment of behavioral intent, stakeholders will feel a strong need to maintain consistency between their words and their actions. For this reason, questions focus on eliciting self-identified commitments from stakeholders. For example, stakeholders can be asked to identify a small action they could each take immediately to help make their envisioned future become a reality. These answers then serve as public commitments to the rest of their team.
Final documentation
At this point, all relevant information has been elicited from the organizational teams and is ready to be documented. First, HPM process diagrams should be updated to reflect feedback and insights from stakeholders. Second, the collective HPM process diagrams of each team are reviewed and analyzed. Systems thinking is then applied to identify a "deeper understanding of the linkages, relationships, interactions and behaviours among the elements that characterize the entire system".
Business psychology concepts
The HPM method utilizes four core concepts derived from business psychology: (a) flowcharts, (b) brainstorming, (c) appreciative inquiry (AI), and (d) systems thinking.
Flowcharts
Flowcharts are "easy-to-understand diagrams that show how the steps of a process fit together". They provide a visual reference to stakeholders so that steps can clearly be followed in a chronological order. Flowcharts are "used commonly with non-technical audiences and are good for gaining both alignment with what the process is and context for a solution".
This neuroscience tool was incorporated into the HPM method for its numerous applications: (a) defining a process, (b) standardizing a process, (c) communicating a process, (d) identifying bottlenecks or waste in a process, (e) solving a problem, and (f) improving a process. Flowcharts provide a useful and straightforward visual reference for all members of an organization. Utilizing flowcharts offers increased process transparency and decreased ambiguity, often resulting in an increase to overall workplace efficiency.
Brainstorming
Brainstorming is an effective neuroscience tool that can be used with groups to generate ideas that draw on the experience and strengths of all stakeholders. This tool was incorporated into the HPM method for its potential to provide teams with the opportunity to "open up possibilities and break down incorrect assumptions about the problem's limits." Additionally, studies have shown that groups that engage in brainstorming "can be cognitively stimulated as a result of exposure to the ideas of others". This implies there is a synergistic relationship among stakeholders' individual strengths and the ideas generated throughout a brainstorming session.
Appreciative inquiry and the 4-D cycle
Appreciative inquiry (AI) is based on recognizing a "positive core" by appreciating the qualities and strengths of the people who make up an organization. Its originators assert that "human systems grow in the direction of what they persistently ask questions about and this propensity is strongest and most sustainable when the means and ends of inquiry are positively correlated" (pp. 3–4). This implies that asking positive and optimistic questions will likely guide a group or organization towards a positive, optimistic future.
AI involves four key stages, known as the 4-D cycle: (1) discovery, (2) dream, (3) design, and (4) destiny. Each stage engages stakeholders in appreciating their organization, constructing a holistic appreciation for the people they work with, and creating a "positive core" that allows the organization to change and grow.
AI was incorporated into the HPM method for its promotion of positive perspectives among stakeholders. The creators of AI assert that AI rests on the positive philosophy behind the approach rather than viewing AI solely as a problem-solving technique. AI-based questions can be used to elicit constructive ideas and solutions from stakeholders throughout the elicitation portion of the project.
Systems thinking
Systems thinking is a theory that provides stakeholders with an "understanding [of] how the people, processes, and technology within an organization interact allow[ing] business analysts to understand the enterprise from a holistic point of view". While traditional forms of analysis look at specific parts of a system, systems thinking looks at the "big picture," focusing on the interactions between parts including dependencies and synergistic relationships.
While there are many approaches and models of systems thinking, one open-systems model (systems theory) analyzes a system by its (a) inputs, (b) throughputs or transformations, (c) outputs, (d) feedback, and (e) environment. This model has been adapted for use in analyzing each of the organizational teams as a system through their (a) inputs, (b) transformations, (c) outputs, (d) feedback loops, (e) goals, and (f) environment.
References
Business process modelling
\begin{center}
\section*{Certificate}\label{section:certificate}
\end{center}
This is to certify that the thesis titled \textbf{``Broker Bots: Analyzing automated activity during high impact events on Twitter"} submitted by \textbf{Sudip Mittal} for the partial fulfillment of the requirements for the degree of \emph{Master of Technology} in \emph{Computer Science \& Engineering} is a record of the bona fide work carried out by her / him under my / our guidance and supervision at the Cybersecurity Education and Research Centre, Indraprastha Institute of Information Technology, Delhi (CERC@IIITD). This work has not been submitted anywhere else for the award of any other degree. \\ \vspace{0.5in}
\textbf{Professor PK}\\
\textbf{Indraprastha Institute of Information Technology, New Delhi}
\begin{abstract}
Twitter is now an established and widely popular news medium. Be it normal banter or a discussion of high impact events like the Boston marathon blasts or the February 2014 US ice storm, people use Twitter to get updates and to broadcast their thoughts and views. Twitter bots have become very common and acceptable. People use them to get updates about emergencies like natural disasters and terrorist strikes, as well as updates about different places and events, both local and global. Twitter bots give these users a means to perform tasks on Twitter that are simple and structurally repetitive, at a much higher rate than would be possible for a human alone. During high impact events these Twitter bots tend to provide a time critical and comprehensive information source, with information aggregated from various different sources.
In this study, we present how bots participate in discussions and augment them during high impact events. We identify bots in 5 high impact events: the Boston blasts, the February 2014 US ice storm, the Washington Navy Yard shooting, the Oklahoma tornado, and Cyclone Phailin. We identify bots among top tweeters by getting all such accounts manually annotated. We then study their activity and present many important insights. We determine the impact bots have on information diffusion during these events and how they tend to aggregate and broker information from various sources to different users. We also analyze their tweets and list important features that differentiate bots from non bots (normal or human accounts) during high impact events. We show how bots are slowly moving away from traditional API based posts towards web automation platforms like IFTTT and dlvr.it. Using standard machine learning, we propose a methodology to identify bots and non bots in real time during high impact events. This study also looks into how the bot scenario has changed by comparing data from high impact events in 2013 with data from similar events in 2011. Bots active in high impact events generally do not spread malicious content. Lastly, we present an in-depth analysis of Twitter bots that were active during the 2013 Boston marathon blasts. We show how bots, because of their programming structure, do not pick up rumors easily during these events, and even if they do, they do so only after a long time.
\end{abstract}
\newpage
\pagestyle{empty}
\newpage
\section*{Acknowledgments}\label{section:acknowledgments}
\pagestyle{plain}
\pagenumbering{roman}
I thank all members of the Precog research group and CERC@IIITD (Cybersecurity Education and Research Centre) at IIIT-Delhi for their valuable feedback and suggestions. I would also like to thank Aditi Gupta and Paridhi Jain for their feedback and support. Last but not the least, I really appreciate and thank ``PK" for being an awesome advisor and mentor.
\newpage
\tableofcontents
\listoffigures
\listoftables
\newpage
\newpage
\newpage
\mbox{}
\chapter{Introduction, Research Aim and Contribution}\label{chapter:introduction}
\pagenumbering{arabic}
\setcounter{page}{1}
\onehalfspacing
\section{Introduction}
Twitter has been transformed into a news and media source. It is used by all top news agencies like CNN\footnote{\url{https://twitter.com/cnnbrk}}, BBC\footnote{\url{https://twitter.com/BBCBreaking}}, etc., and also by print media like the New York Times\footnote{\url{https://twitter.com/NYTLive}}, The Huffington Post\footnote{\url{https://twitter.com/HuffingtonPost}}, etc., all vying to redirect huge numbers of Twitter users to their websites and content. These sources tend to push news to Twitter.
Now, people like politicians and celebrities even use Twitter to create news: US Senators and Congress representatives use it to update the media and their constituents. On Twitter it is even possible for normal users to create news; Abbottabad resident Sohaib Athar created a huge sensation when he live-tweeted the US raid to capture Osama Bin Laden.\footnote{\url{http://techcrunch.com/2011/05/02/heres-the-guy-who-unwittingly-live-tweeted-the-raid-on-bin-laden-2/}} Twitter is also a medium to gauge popularity; politicians and celebrities are even paying a lot to get more and more followers.\footnote{\url{http://www.dailymail.co.uk/news/article-2430875/Barack-Obama-19-5m-fake-Twitter-followers.html}}
Our work focuses on Twitter because of its immense impact on news and news content generation. Data collection on Twitter is also easier because of its well-maintained and mature APIs.
\section{Defining High Impact Events}
Every day there are scores of events that can be categorized as newsworthy. Events like government policy changes, elections, earthquakes, celebrity gossip, etc., can all be categorized as news.
\textbf{What makes an event newsworthy?} An event is newsworthy if it has: timing, significance, proximity, prominence, and human interest.\footnote{\url{http://www.mediacollege.com/journalism/news/newsworthy.html}}
We, however, only include high impact events in our study. We define \textbf{``high impact events"} as those events that have a great political and economic impact; they may also involve moderate to high damage to life and property.
\section{Use of Twitter in High Impact Events}
During high impact events there is a huge increase in activity on Twitter. Users all across the world log in to check for news, discuss, express their sympathies and opinions, and share content.
\subsection{Use of Twitter during the Boston Marathon Blast (April 15, 2013)}
In our dataset of tweets regarding the Boston blasts we encountered tweets such as the following:
\begin{itemize}
\item I saw people's legs blown off. Horrific. Two explosions. Runners were coming in and saw unspeakable horror. \textit{-- Jackie Bruno (@JackieBrunoNECN) April 15, 2013}
\item At the ER. Not a comforting way to pass the time.\#boston So sad. \textit{-- GanderHeroDog (@veterantraveler) April 15, 2013}
\item Prayers goes out to those involved/hurt in \#BostonMarathon. WTF is wrong with people man. Just sad \textit{-- LeBron James (@KingJames) April 15, 2013}
\item An eyewitness during the explosions at the \#Boston \#Marathon says ``it sounded like a cannon blast." Video: on.cnn.com/1399A40 \textit{-- CNN Video (@CNNVideo) April 15, 2013}
\item If you are concerned about a runner in the \#BostonMarathon, you can see where they last checked in here theh.gr/XCP0II \textit{-- taylor johnson (@thehappygirl) April 15, 2013}
\end{itemize}
\section{Use of Bots Online}
Internet bots, also called web robots, WWW robots or simply bots, are software applications that run automated tasks over the Internet. Bots perform tasks that are simple and repetitive at a higher rate than would be possible for a human. Bots can be benign or have malicious intent. Malicious uses of bots include orchestrating DDoS attacks, spreading spam campaigns, committing click fraud, etc. Examples of Internet bots: Wikipedia bots\footnote{\url{http://www.bbc.com/news/magazine-18892510}}, Twitter bots, spam bots, IRC bots, botnets, etc.
\section{Use of bots on Twitter (\textit{Twitter bots})}
Twitter bots are Twitter accounts that post updates to Twitter automatically. They are often used in spam campaigns, to direct Twitter users to certain webpages, to post directed messages, and sometimes to assist Twitter users by updating them about information like highway traffic, natural disaster alerts, etc.
Some examples of Twitter bots:
\begin{itemize}
\item \textit{@AllOilPainting} : Helps user search for oil paintings being sold on Ebay. Tweets the links to the painting for its followers.
\item \textit{@Horse\_ebooks} : Horse\_ebooks is a spam bot that is followed by approximately 200,000 users. It is famous for its amusing ``non sequitur" updates. Horse\_ebooks was named one of the best Twitter feeds by UGO Networks in 2011\footnote{\url{http://www.ugo.com/web-culture/best-twitter-accounts-of-2011-horse-ebooks}} and Time.com in 2012.\footnote{\url{http://techland.time.com/2012/03/21/the-140-best-twitter-feeds-of-2012/slide/horse-ebooks/}} John Hermann at Splitsider wrote that Horse\_ebooks ``might be the best Twitter account that has ever existed".\footnote{\url{http://splitsider.com/2012/01/the-ballad-of-horse_ebooks/}}
\item \textit{@earthquakeBot} : It's an emergency bot that tweets about earthquakes.
\begin{figure}[!ht]
\centering
\includegraphics[scale=.4]{botearthquake.png}
\caption [Example of an Automated Account.]{Tweets from the \textit{@earthquakeBot}.}
\label{fig:earthqk}
\end{figure}
\end{itemize}
\section{Rules for Posting Updates on Twitter}\label{rules}
An understanding of the Twitter API rules for ``posting updates" to Twitter is vital when designing Twitter bots, because an account that flouts any of these rules is either banned from posting updates to Twitter for some time or suspended altogether.
Formal documentation by Twitter is available for these ``Twitter rules"\footnote{\url{https://support.twitter.com/articles/18311-the-twitter-rules}} and ``API limits".\footnote{\url{https://support.twitter.com/articles/15364-twitter-limits-api-updates-and-following}}
Some major rules:
\begin{enumerate}
\item Update limits:
\begin{itemize}
\item Direct messages: 250 per day.
\item Tweets: 1,000 via API per day. The daily update limit is further broken down into smaller limits for semi-hourly intervals. Retweets are counted as tweets.
\item Following (daily): The technical follow limit is 1,000 per day.\footnote{\url{https://support.twitter.com/articles/68916-following-rules-and-best-practices}}
\item Following (account-based): Once an account is following 2,000 users, additional follow attempts depend upon account-specific ratios.
\end{itemize}
\item Content boundaries and Use of Twitter:
\begin{itemize}
\item Don't follow and/or unfollow large amounts of users in a short time period, particularly by automated means.
\item Don't repeatedly follow and unfollow people, whether to build followers or to garner more attention for your profile.
\item Updates should not consist mainly of links.
\item Don't post duplicate content over multiple accounts or multiple duplicate updates on one account.
\item Don't post multiple unrelated updates to a topic using \#, trending or popular topic, or promoted trend.
\item Don't send large numbers of duplicate @replies or mentions.
\item Don't send large numbers of unsolicited @replies or mentions in an aggressive attempt to bring attention to a service or link.
\item Don't randomly or aggressively Retweet through automation in an attempt to bring attention to an account, service or link.
\item Don't randomly or aggressively favorite tweets through automation in an attempt to bring attention to an account, service or link.
\end{itemize}
\end{enumerate}
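To make the update limits above concrete, the pacing check that a rule-abiding bot needs can be sketched as a sliding-window rate limiter. This is an illustrative Python sketch; the class name and parameters are ours, not part of any Twitter library.

```python
from collections import deque

class DailyRateLimiter:
    """Sliding-window limiter: allow at most `limit` posts within any
    `window_seconds`-long interval (1,000 per 24 h for the tweet limit)."""

    def __init__(self, limit=1000, window_seconds=24 * 60 * 60):
        self.limit = limit
        self.window = window_seconds
        self.timestamps = deque()  # times of accepted posts, oldest first

    def allow(self, now):
        # Drop posts that have aged out of the sliding window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return True
        return False
```

The semi-hourly sub-limits Twitter mentions could be enforced the same way by composing a second limiter with a smaller window.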
\section{Research Motivation}
As bots become more and more popular on Twitter, they will increasingly impact discussions, information flow, credibility, and information security. During high impact events people rely on Twitter for important updates and crucial information. This motivated us to analyze how bot activity affects high impact events.
\section{Research Contribution}\label{chapter:contri}
In this section we list the major contributions of this study.
\begin{itemize}
\item We show that bots actively participating in high impact events spread information obtained from ``trusted and verified" sources.
\item We show that bots aid in information distribution by ``brokering" information from various sources to other users. In the Boston marathon blasts, we show that bots push updates to at least 9.53\% of their followers who do not follow the original sources.
\item We show that bots do not propagate rumors, and even when they do, they do so only after some delay.
\item Bots are moving away from the Twitter API based approach towards web automation services like IFTTT and dlvr.it for posting updates to Twitter.
\item We show that temporal based Twitter features do not actually add value in differentiating bot and non-bot accounts on Twitter.
\item We create a classifier based on user based features that achieves an accuracy of 85.10\% in classifying bot and non-bot accounts.
\end{itemize}
\section{Computer Science Contribution}
The work done in this document falls under ``Computational Social Science", which encompasses the study of computing systems and social interactions. We analyze the behavior of bots during high impact events on Twitter and approach the problem from a computer science perspective. We begin by discussing our data collection and annotation schemes. We then compare different features of bot and non-bot accounts, one by one, to create a method to differentiate between the two classes. Finally, we provide insights into how bots function during high impact events.
\vspace{1.5cm}
The rest of the study is structured in the following manner. Chapter 2 covers the related work. Chapter 3 covers how we selected the 5 high impact events, followed by our data collection, annotation, and enrichment methodology. Our analysis is covered in Chapter 4. Chapter 5 covers the discussion, limitations, and future work.
\chapter{Related Work}\label{chapter:related_work}
This chapter presents previous work done on understanding high impact events and automated activity on Twitter. We first discuss work on high impact events and then work on Twitter bots.
\section{Work on High Impact Newsworthy Events}
A large number of studies have been conducted to understand high impact events on Twitter.
Kwak et al. \cite{Kwak:2010:TSN:1772690.1772751} discussed whether Twitter is a social network or a news medium. They showed that nearly 85\% of the topics talked about on Twitter were in fact related to news.
Palen et al. \cite{Palen:2008:EOW:1460563.1460583} analyzed Twitter adoption and use during emergencies.
Zhao et al. \cite{Zhao:2011:CTT:1996889.1996934} compared Twitter and traditional media using topic models.
Mendoza et al. \cite{Mendoza:2010:TUC:1964858.1964869} analyzed the use of Twitter for emergency response activity during the 2010 earthquake in Chile.
Castillo et al. \cite{Castillo:2011:ICT:1963405.1963500} worked on ``Information Credibility" on Twitter. They showed that automated classifiers can be used to find information and determine its credibility.
Gupta et al. \cite{Gupta:2012:CRT:2185354.2185356} created a mechanism to rank credible information on Twitter.
Longueville et al. \cite{DeLongueville:2009:OHI:1629890.1629907} used location based mining to get spatio-temporal data on forest fires in France.
Sakaki et al. \cite{Sakaki:2010:EST:1772690.1772777} used Twitter to locate the epicenter and impact of earthquakes.
Agrawal et al. \cite{Oh:2011:ICT:1968924.1968971} tracked the Mumbai terrorist attack on Twitter.
Verma et al. \cite{VermaVCPMPSA11} used NLP techniques to extract ``situational awareness tweets" during emergencies.
Vieweg et al. \cite{Vieweg:2010:MDT:1753326.1753486} analyzed the use of Twitter during two natural hazard events.
Gupta et al. \cite{gupta20131} analyzed fake content on Twitter during the Boston marathon blast.
\section{Work on Automated Activity (\textit{bots}) on Twitter} \label{RWB}
Zhang et al. \cite{Zhang:2011:DAA:1987510.1987521} worked on analyzing time trends present in bot activity on Twitter. They argued that in order to avoid being banned from posting tweets to Twitter, bots have to ensure that they do not exceed the 1,000 tweets per day Twitter API limit. So they must post approximately 41 tweets per hour, or about 0.68 tweets per minute, which creates a pattern in their tweeting behavior.
Chu et al. \cite{Chu:2010} obtained about 6,000 accounts annotated as either human, bot, or cyborg (a bot account that is ``curated" by a human). They studied various sets of features that can help distinguish between these three types of accounts.
Roure et al. \cite{de2013observing} observed ``social machines in the wild". They monitored interactions with bots and the bot lifecycle on Twitter. They argued that the ``purpose" of a bot is the key attribute that must be studied when analyzing bots.
Boshmaf et al. \cite{boshmaf2011socialbot} showed that online social networks are vulnerable to large-scale infiltration by bots, and that it is very easy to run astroturf campaigns to spread misinformation and propaganda.
Messias et al. \cite{messias2013you} studied social bots and their influence over the network in which they were active. They showed that popular influence scores are vulnerable and can easily be manipulated.
Tavares et al. \cite{tavares2013scaling} performed several temporal analyses to compare the activity of bots and humans.
Wald et al. \cite{Wald} studied the users that were susceptible to bots on Twitter. They approached it from the viewpoint of spam bots and argued that a user's influence score was a key factor in determining whether that user would engage with a bot.
\section{Types of Bots Discussed in Literature}
In Section \ref{RWB} we discussed major work done on bots on Twitter. Different types of bots have been discussed in the literature. In this section we describe the various types of bots covered in the papers mentioned in Section \ref{RWB}:
\begin{enumerate}
\item \textbf{Explicit Bots}: The class of bots that declare that they are bots, for example by mentioning it in their profile description. Often these bots also credit their creators by mentioning the creators' Twitter handles.
\item \textbf{Implicit Bots}: These are the bots that do not mention that they are bots.
\item \textbf{Social Bots}: Bots which interact with other users on Twitter.
\item \textbf{Retweet Bots}: Bots that only Re-Tweet tweets by other users. These bots can Re-Tweet based on particular keywords or only Re-Tweet tweets by some particular user. Examples of this type of bot are accounts that Re-Tweet all popular tweets about, say, ``cricket" or ``Boston"; another example is accounts that Re-Tweet all tweets by popular users like ``@katyperry" or ``@BillGates".
\item \textbf{News Bots}: A special class of Retweet Bots that Retweet tweets with content from important news sources like ``@cnnbrk". They also Retweet content based on certain keywords associated with the type of news being propagated by these accounts. The need for a separate class arises because these are the accounts that begin heavy retweeting when a high impact event occurs.
\item \textbf{Cyborgs}: A type of Twitter account that is ``curated" by a human.
\end{enumerate}
\section{Difference from previous work}
Both Chu et al. \cite{Chu:2010} and Zhang et al. \cite{Zhang:2011:DAA:1987510.1987521} highlighted methods to distinguish automated activity from human activity on Twitter, presenting innovative schemes to separate the two classes during the normal everyday functioning of these accounts. We, on the other hand, present insights and findings on bot accounts that participate in discussions during high impact events. We also show that the methods proposed by Chu et al. \cite{Chu:2010} and Zhang et al. \cite{Zhang:2011:DAA:1987510.1987521} do not provide good results when applied to bots actively participating in high impact events, and we suggest improvements to better differentiate and understand automated activity during such events.
\chapter{Methodology}\label{chapter:method}
In this chapter we discuss how we selected the events that were analyzed, followed by our data collection, annotation, and enrichment methodology.
\section{Event Selection}
We selected 5 high impact events from 2013 and 2014 for analysis. The selected events included natural hazards, political events, and terror strikes. Their common theme was that they had a huge impact and gained traction all across the world. In our opinion these events caused large economic and political waves throughout the world, along with loss of lives and property. We refer to them as ``high impact events."
Table \ref{table:events} describes the data collection details of the 5 selected events; it also includes all the hashtags associated with each event.
Event description:
\begin{enumerate}
\item Boston blasts: On April 15, 2013, at 2:49 pm EDT, 2 bombs exploded during the 2013 Boston Marathon. 3 people died and 264 were injured in the bombing. The bombings were followed by a shooting, a manhunt, and a firefight.
\item Oklahoma Tornado: The 2013 Moore tornado struck Moore, Oklahoma, and adjacent areas on the afternoon of May 20, 2013, killing 24 people and damaging about \$2 billion worth of property in Grady, McClain, and Cleveland counties in Oklahoma, particularly the city of Moore.
\item Washington Navy Yard Shooting: On September 16, 2013, lone gunman Aaron Alexis shot 12 people and injured 3 others at the Naval Sea Systems Command (NAVSEA) inside the Washington Navy Yard. The attack began around 8:20 a.m. EDT and ended around 9:20 a.m. EDT, when Aaron Alexis was killed by police.
\item Cyclone Phailin: Very Severe Cyclonic Storm Phailin struck Thailand, Myanmar, and India between October 4 and October 14, 2013, causing damages of about \$696 million (2013 USD) and 45 fatalities. In India the cyclone wreaked havoc in the Andaman and Nicobar Islands, Andhra Pradesh, Orissa, and Jharkhand.
\item Ice-storm: Between February 11 and 17, 2014, a fierce ice-storm struck the East and South coasts of the USA. There were about 22 fatalities and damages of about \$15 million (USD).
\end{enumerate}
\begin{table}[!h]
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
Name & Hashtags & Data Collection Duration\\ \hline
Boston Blasts & Dzhokhar Tsarnaev, \#Watertown, & April 15 - April 21, 2013 \\& \#manhunt, Sean Collier, & \\&\#BostonStrong,& \\& \#bostonbombing, \#oneboston, & \\& bostonmarathon,& \\& \#prayforboston, & \\& boston marathon, \#bostonblasts, & \\& boston blasts, bostonterrorist,& \\& boston explosions, bostonhelp, & \\& boston suspect & ~ \\ \hline
Oklahoma Tornado & Oklahoma, tornado,&May 20 - May 22, 2013 \\& PrayForOklahoma, & \\& care4kidsOK & \\ \hline
Navy Yard Shooting & NavyYardShooting,& September 16 - September 18, 2013 \\& navy yard shooting,& \\& Washington navy yard & \\ \hline
Cyclone Phailin & phailin, cyclonephailin & October 4 - October 16, 2013 \\ \hline
Ice-storm & Icestorm, \#USIcestorm, &February 11 - February 19, 2014 \\& \#SnowInUS & \\ \hline
\end{tabular}
\caption[Events in consideration]{Details about various selected events.}
\label{table:events}
\end{center}
\end{table}
\section{Event Data Collection}
We used the Twitter Streaming API to collect data about the events.
\begin{table} [!h]
\tiny
\begin{center}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
Metric&Boston Blast Marathon & Icestorm & Navy Yard Shooting & Oklahoma Tornado & Cyclone Phailin \\ \hline
Total tweets & 7,888,374 & 433,880 & 484,609 & 809,154 & 76,136 \\ \hline
Total unique users & 3,677,531 & 198,391 & 257,682 & 542,049 & 34,776\\ \hline
Tweets with URLs & 3,420,228 & 233,576 & 290,887 & 388,541 & 44,990 \\ \hline
Number of re-tweets & 5,217,769 & 209,556 & 262,362 & 509,732 & 41,718\\ \hline
Start date & April 15, 2013 & February 11, 2014 & September 16, 2013& May 20, 2013 & October 4, 2013 \\ \hline
End date & April 21, 2013 & February 19, 2014 & September 18, 2013& May 22, 2013 & October 16, 2013 \\ \hline
\end{tabular}
\caption[Data collected]{Details about collected data.}
\label{table:eventsdet}
\end{center}
\end{table}
Table \ref{table:eventsdet} lists the statistics and numbers associated with the dataset for each high impact event.
\section{Data Annotation}
After collecting the data we compiled a list of the top 200 users for each of the 5 events, based on the number of tweets they posted in the context of the event. This group of 1000 high frequency users was then manually annotated by a group of Masters and PhD students in Computer Science at IIIT Delhi. The criterion for selecting these annotators was that they must be familiar with the Twitter landscape and must have used Twitter for a minimum of 1 year. The annotators were given the following instructions:
\\
\begin{lstlisting}
What is a bot?: Twitter bots are, essentially, computer programs that tweet of their own accord. While normal people access Twitter through its Web site and other clients, bots connect directly to Twitter via its API (application programming interface), parse information and post to Twitter automatically.
There are various kinds of bots like retweet bots, news bots, humor bots
Example of a bot to tweet about earthquakes in San Francisco:
https://twitter.com/earthquakesSF
User description: I am a robot that tells you about earthquakes in the San Francisco Bay area as they happen. I get my data right from the USGS. I was programmed by @billsnitzer.
This account specifies that it is a bot.
A user may or may not do so.
If a user does not mention it, use the TIPS mentioned below.
What is to be done here?
STEP 1: Visit the URL of the Twitter profile.
STEP 2: Study the account. (Have a look at the TIPS)
STEP 3: Mark whether an account is a BOT account or not.
TIPS: While making the decision pay attention to the following:
Closely read the user description.
Most bots have a large number of tweets.
Time difference between tweets: Bots tweet very frequently
Number of Friends and Followers: Most Bots have many friends and few followers
Bots usually tweet about some specific topic or Re-Tweet tweets from particular users.
If you come across accounts that you think are being ``curated" by humans, we request you not to mark them as bots.
Decide if the account is a Bot or not.
Mark the choice.
Wherever you are not sure mark: Can't Decide.
\end{lstlisting}
Every account in the set of the above mentioned 1000 users was annotated by 3 annotators.
Each annotator was given a list of account URLs and 3 choices per account:
\begin{itemize}
\item Bot.
\item NOT a Bot.
\item Can't Decide.
\end{itemize}
We did not consider ``Cyborgs" as bots in our analysis. We took a very strict approach in labeling the annotated accounts as Bots and Non-Bots: only accounts annotated as Bots by all 3 annotators were labelled Bots, and the same strict bound was applied to accounts labelled Non-Bots. We used this strict criterion to ensure very high quality data. At the end of this stage we had 2 distinct classes of accounts, namely Bots and Non-Bots.
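The unanimous-agreement labeling rule above can be expressed in a few lines. The following Python sketch is illustrative; the function name and label strings are ours.

```python
def label_account(annotations):
    """Label an account from its 3 annotations ('Bot', 'NOT a Bot',
    "Can't Decide"). Only unanimous votes produce a label; any
    disagreement (or a unanimous "Can't Decide") discards the account."""
    if len(set(annotations)) != 1:
        return None  # annotators disagreed: discard
    vote = annotations[0]
    if vote == 'Bot':
        return 'Bot'
    if vote == 'NOT a Bot':
        return 'Non-Bot'
    return None  # unanimous "Can't Decide": discard
```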
\begin{table}[!h]
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
Event & Bots & Non-Bots \\ \hline
Boston Blasts & 97 & 17 \\ \hline
Icestorm & 87 & 7 \\ \hline
Navy Yard Shooting & 90 & 11 \\ \hline
Oklahoma Tornado & 47 & 42 \\ \hline
Cyclone Phailin & 56 & 38 \\ \hline
Total (Includes accounts active in multiple events) & 377 & 115 \\ \hline
\end{tabular}
\caption[Annotation result]{Number of accounts in each class after annotation.}
\end{center}
\end{table}
\section{Data Enrichment}
After creating a true positive dataset for both the bot and non-bot categories across the 5 high impact events, we proceeded to collect more data about these Twitter accounts to enrich the dataset for further analysis.
Using various Twitter API endpoints we collected profile meta-data and all of their friends and followers. We also collected as many tweets as possible for all of the above mentioned annotated accounts.
\chapter{Analysis}\label{chapter:analysis}
In this chapter, we start by analyzing some basic characteristics of bots and then move on to the analysis of their followers. We also discuss their network, their impact on information diffusion, and rumor propagation during high impact events. We present an in-depth analysis of URLs, Tweet sources, and Tweet times. We then discuss all the important features that can help predict whether an account is a bot during high impact events: we predict once using all user based and temporal features and once after discarding the temporal features, and compare the two results. We also present a detailed analysis of a few bots that were active during different high impact events, and compare bot activity in 2013 to bot activity in 2011.
\section{Exploratory Numbers and Bot Creation Methodology}\label{createbot}
Our annotated data consists of 377 bots that were active during 5 different events, out of which 26 bots were active in multiple events. We have 309 distinct annotated bots and 115 distinct annotated non-bots. We look at 5 events from 2013 and 7 events from 2011. In order to successfully study Twitter bots, we must first understand the logic behind how they are created. To avoid being banned from posting tweets to Twitter, bots have to ensure that they do not exceed the 1,000 tweets per day Twitter API limit. So they can post at most approximately 41 tweets per hour, or about 0.68 tweets per minute. They must also follow all rules discussed in Section \ref{rules}.
There are 4 major ways to create bots on Twitter:
\begin{enumerate}
\item Popular Tweet based bots: These bots can ``listen" for tweets that are currently popular on Twitter, either by searching for popular tweets (via flags present in the API) or by posting content from ``Twitter Trending Topics". They repost (Retweet, or copy and post content as their own) these tweets.
\item Keyword based bots: These bots look for certain keywords on Twitter and repost tweets that contain them.
\item Source based bots: These bots repost content from specific Twitter users. For example, a bot can repost all updates from news accounts like ``@cnnbrk", ``@BBCWorld", etc.; these types of bots are termed ``News Aggregators".\footnote{\url{http://www.zillman.us/white-papers/bots-blogs-and-news-aggregators/}}
\item Outside content based bots: These bots look for updates outside Twitter, such as RSS and other web feeds, updates to blogs, dedicated databases, etc., and post these updates to Twitter, usually with a link to the source.
\end{enumerate}
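As a minimal illustration of the second category, the selection step of a keyword based bot can be sketched as below. The function names and the matching rule (case-insensitive substring match) are our simplification; a real bot would receive tweets from the Twitter Streaming API rather than from an in-memory list.

```python
def keyword_matches(tweet_text, keywords):
    """Case-insensitive substring match, a simple stand-in for the
    Streaming API's keyword tracking."""
    text = tweet_text.lower()
    return any(keyword.lower() in text for keyword in keywords)

def select_for_repost(tweets, keywords):
    """Return the tweets a keyword based bot would repost."""
    return [t for t in tweets if keyword_matches(t, keywords)]
```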
\section{Bots active During Multiple Events}
In our dataset we found that 26 bots were active during multiple events. We manually analyzed the ``Twitter Timeline" of all 26 of these bots and found that 8 were Keyword based bots, 9 were source based bots, 2 were Popular Tweet based bots and 7 were outside content based.
\section{Characteristics}
\subsection{Followers and Friends on Twitter}
\begin{figure}[!ht]
\centering
\includegraphics[scale=.4]{friendvsfollowerbots.png}
\caption{Followers vs Friends ratio for bots.}
\label{fig:ff1}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[scale=.4]{friendvsfollowernonbot.png}
\caption{Followers vs Friends ratio for non-bots.}
\label{fig:ff2}
\end{figure}
We created graphs of Followers versus Friends (Figures \ref{fig:ff1} and \ref{fig:ff2}) for both bots and non-bots. The graph is more spread out for non-bots, while the data points for bots are clustered around the origin. Similar results were obtained by Chu et al. \cite{Chu:2010}.
\subsection{Profile Description on Twitter}
\begin{figure*}[!ht]
\centering
{\includegraphics[scale=.25]{wordlebots.png}}
\label{fig:wordle1}
{\includegraphics[scale=.275]{wordlenonbots.png}}
\label{fig:wordle2}
\caption[Frequent words present in the profile description of bots and nonbots.]{Most frequently used words in profile description for bots (left) and non-bots(right). Size of the word denotes frequency.}\label{fig:wordle}
\end{figure*}
We identified the top words used by both bot and non-bot accounts in their profile descriptions on Twitter (Figure \ref{fig:wordle}). The most frequent words used by bots, such as ``news", ``world", ``breaking", ``retweet", and ``latest", highlight the motive behind creating bots that are active during high impact events. On the other side, one finds more general words describing the non-bot accounts.
\section{Bot Friends and Data Source Analysis}\label{srcv}
We wanted to find the most prominent and common sources from which bots pick their data during high impact events. One way to measure this is a count of all ``@mentions", which are used on Twitter to hold conversations or to cite the source of a particular Tweet. An example can be seen in Figure \ref{fig:cred}, where the bot cites @GP\_Today, the Twitter account of the website gptoday.com, which publishes Formula 1 news articles. The bot account in question listens to its web feed updates and tweets them, referencing the source via ``@mentions" to improve the credibility of the news \cite{Castillo:2011:ICT:1963405.1963500, Gupta:2012:CRT:2185354.2185356}.
\begin{figure}[!h]
\centering
\includegraphics[scale=.6]{credit.png}
\caption[Example of a bot giving credit.]{A news bot giving credit to its source using the ``via" keyword. This bot is picking up content from a web feed and posting it on Twitter.}
\label{fig:cred}
\end{figure}
In the case of Retweet bots, the bots continuously listen for updates from other Twitter accounts and Retweet them as soon as new updates are posted. Retweet texts in the Twitter API results appear as ``\textit{RT @mention Tweet text...}".
\begin{figure}[!h]
\centering
\includegraphics[scale=.6]{aggree.png}
\caption{A Retweet bot reposting content from credible news sources.}
\label{fig:cred2}
\end{figure}
Hence, a simple frequency table of @mentions gives us the top sources from which bots pick their data during high impact events. Accounts that mention other users with whom they are interacting (directed or targeted mentions) do not materially affect the top 15 data sources during high impact events, as the frequency of such @mentions is low.
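The @mention frequency table described above can be computed directly from the tweet texts. The sketch below is illustrative; the regular expression is a simplification of Twitter's actual rules for valid @mentions.

```python
import re
from collections import Counter

MENTION_RE = re.compile(r'@(\w+)')  # simplified @mention pattern

def top_sources(tweet_texts, n=15):
    """Tally every @mention across the given tweets and return the
    n most frequently mentioned accounts (our proxy for data sources)."""
    counts = Counter()
    for text in tweet_texts:
        counts.update(handle.lower() for handle in MENTION_RE.findall(text))
    return counts.most_common(n)
```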
\begin{figure}[!h]
\centering
\includegraphics[scale=1]{topsource.png}
\caption{Top sources during Boston blasts.}
\label{fig:topsour}
\end{figure}
Figure \ref{fig:topsour} shows the top 15 data sources and their frequency in the Boston marathon blasts. Out of all 4,562 accounts mentioned by annotated bots active during the Boston marathon blasts, 652 (14.29\%) were verified. Table \ref{table:veri} lists the top 15 data sources and whether they were verified accounts.
\begin{table}[!h]
\centering
\begin{tabular}{|l|l|}
\hline
\textbf{Source} & \textbf{Verified/Non-verified} \\ \hline
CMoleBoston & Non-verified \\ \hline
BostonGlobe & Verified \\ \hline
7News & Verified \\ \hline
cnnbrk & Verified \\ \hline
BBCWorld & Verified \\ \hline
fox25news & Verified \\ \hline
Slate & Verified \\ \hline
TIME & Verified \\ \hline
AP & Verified \\ \hline
WLTX & Verified \\ \hline
wsvn & Verified \\ \hline
NewsBreaker & Non-verified \\ \hline
WCVB & Verified \\ \hline
NBCNews & Verified \\ \hline
PressHerald & Non-verified \\ \hline
\end{tabular}
\caption[Top data sources and verification status]{Top 15 data sources and whether they are verified by Twitter.}
\label{table:veri}
\end{table}
\section{Bot Followers}
In this section we analyze the users who follow bots that are active during high impact events. We discuss the bot network and its impact on information diffusion during high impact events. For our analysis we collected all followers of the bots using the Twitter API; the total number of unique bot followers collected was 623,198.
\subsection{Profile Description of the Followers.}
Figure \ref{fig:follwe} presents the most frequent words in the profile descriptions of these bot followers.
\begin{figure}[!h]
\centering
\includegraphics[scale=.4]{wordlefollowersboston.png}
\caption[Common words in profile description of bot followers.]{Most frequent words in the profile descriptions of bot followers in the Boston blast dataset. Size of the word denotes frequency.}
\label{fig:follwe}
\end{figure}
\subsection{Bot and Follower Network in Boston Marathon Blasts}
To study the bots and their follower network, we collected the followers of all 97 annotated bots in our dataset. The total number of unique bot followers collected was 623,198.
We created a network graph for a random set of 40,000 followers (Figure \ref{fig:network4}) and another for 20,000 randomly selected followers (Figure \ref{fig:network2}).\footnote{We could not create a graph of all 623,198 bot followers and 97 bots from the Boston marathon blast dataset because of memory restrictions.} Both graphs were created in Gephi, an open graph visualization platform.\footnote{\url{https://gephi.org/}} For Figure \ref{fig:network4} the average degree of the graph was 1.01545.
Looking at Figures \ref{fig:network2} and \ref{fig:network4}, and given that the average degree for Figure \ref{fig:network4} is 1.01545, we can safely conclude that many users tend to follow very few bot accounts. Most of these accounts follow only one bot account in our Boston blast high impact event; very few follow more than one.
We attribute this behavior to the fact that users would not like to follow many high frequency bot accounts that Tweet about the same thing, as such accounts would flood a user's Twitter Timeline with their content.
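For reference, the average degree quoted above can be computed directly from an edge list without a visualization tool. The sketch below treats the follower graph as undirected (average degree $2|E|/|V|$); for a directed reading one would use $|E|/|V|$ instead. The function name is ours.

```python
def average_degree(edges):
    """Average degree of an undirected graph given as a list of
    (follower, bot) edges: 2 * |E| / |V|."""
    vertices = set()
    for u, v in edges:
        vertices.add(u)
        vertices.add(v)
    if not vertices:
        return 0.0
    return 2 * len(edges) / len(vertices)
```

An average degree close to 1 on such a bipartite follower graph means most followers connect to exactly one bot.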
\begin{figure}[!hbtp]
\centering
\includegraphics[scale=.7]{2.pdf}
\caption[Network graph of 40,000 randomly selected bot followers.]{Network graph of 40,000 randomly selected bot followers. Average degree is 1.01545}
\label{fig:network4}
\end{figure}
\begin{figure}[!hbtp]
\centering
\includegraphics[scale=.65]{1.png}
\caption{Network graph of 20,000 randomly selected bot followers}
\label{fig:network2}
\end{figure}
\newpage
\subsection{Impact on Information Diffusion during High Impact Events.}
\begin{figure}[!h]
\centering
\includegraphics[scale=.35]{infodiff.pdf}
\caption[Study of impact due to automated accounts on information diffusion during high impact events]{A simple node diagram showing relation between, bots, bot friends and followers. A follower can follow bot friends too. The colored node represents the bot followers who don't follow bot friends and hence, get updates about high impact events from bots.}
\label{fig:inflow}
\end{figure}
To answer the question of how many new users received information as a result of automated activity on Twitter during high impact events, and what the impact of bots on information diffusion was, we focused our study on the Boston blast dataset. We collected all followers of the 97 annotated bot accounts found in the Boston marathon dataset, and then collected the friends of these followers. We used this methodology instead of collecting the followers of bot friends because a very popular bot friend was ``@cnnbrk", which at the time of writing had close to 17 million followers; collecting details of these 17 million followers is nearly impossible, and the same holds for many other bot friends.
Using all the above mentioned data we then created a list of all accounts following bots, and of all accounts among these that followed bot friends. We then found all accounts that get updates only from bots and not from bot friends (represented by the colored node in Figure \ref{fig:inflow}). In our dataset for the Boston marathon blasts these accounts made up 9.53\% of all bot followers. This analysis suggests that \textit{at least 9.53\% of bot followers get fresh updates from these bots during high impact events, as a result of following the bots rather than the most likely ``verified" sources of this information} (see Table \ref{table:veri}). However, posts by these bots are also visible in various search results and the public Twitter Timeline, further increasing the reach of the information to many different users. We can thus loosely suggest that bots aid information diffusion during high impact events; these bots can be seen as ``\textbf{brokering}" information to other users.
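The ``brokering" measurement above reduces to simple set operations: among all bot followers, count those whose friend lists contain none of the bots' sources. The following Python sketch is illustrative; the identifiers are ours.

```python
def brokered_fraction(bot_followers, bot_friends, friends_of):
    """Fraction of bot followers who follow none of the bots' sources
    (`bot_friends`) and therefore get event updates only via the bots.
    `friends_of` maps each follower to the set of accounts they follow."""
    only_via_bots = [
        follower for follower in bot_followers
        if not (friends_of.get(follower, set()) & bot_friends)
    ]
    return len(only_via_bots) / len(bot_followers)
```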
\subsection{Role in Rumor Propagation}\label{rumor}
During events a lot of conflicting information spreads on online social networks. Castillo et al. \cite{Castillo:2011:ICT:1963405.1963500} and Gupta et al. \cite{Gupta:2012:CRT:2185354.2185356} have highlighted the spread of misinformation on Twitter in their respective papers and have also suggested methods to find credible information on Twitter. A similar analysis by Gupta et al. \cite{gupta20131} discussed the spread of fake and malicious content on Twitter during the Boston Marathon bombings. We focus our analysis on the spread of rumors by bots during the same Boston marathon bombings.
Some key rumors that were propagated regarding the bombings on Twitter were:\footnote{\url{http://www.snopes.com/politics/conspiracy/boston.asp}}
\begin{enumerate}
\item Suspect became citizen on 9/11: One rumor suggested that the suspects became naturalized citizens of the US on September 11. Tweet text: ``\textit{RT @pspoole: Dzhokhar Tsarnaev received US citizenship on Sept 11, 2012 -- 11 years to the day after 9/11 attacks http://t.co/kHLL7mkjnn}". The first tweet was by @pspoole on Friday, April 19, 2013, at 15:34:56 (+0000); this rumor was shared 2,321 times by unique users in our dataset.
\item Sandy Hook Child Killed in Bombing: One rumor claimed that one of the victims was an 8-year-old girl who attended Sandy Hook school and was running for the victims of the Sandy Hook shootings. Tweet text: ``\textit{RT @CommonGrandma: She ran for the Sandy Hook children and was 8 years old. \#prayforboston http://t.co/cLir6nI7tB}". First tweet on Friday, April 19, 2013, at 09:56:45 (+0000); this rumor was shared 1,743 times by unique users in our dataset.
\item Donating 1\$ Tweet:\footnote{\url{http://globalnews.ca/news/482442/fake-boston-marathon-donation-twitter-account-suspended/}} A Tweet by a fake account @\_BostonMarathon, ``\textit{For each RT this gets, \$1 will be donated to the victims of the Boston Marathon Explosions. \#DonateToBoston}", went viral on Twitter. Tweet time: Monday, April 15, 2013, at 11:29:23 (+0000); this rumor was shared 28,350 times by unique users in our dataset.
\end{enumerate}
Findings from our dataset concerning bots:
\begin{enumerate}
\item Suspect became citizen on 9/11: In our dataset only 2 bots Retweeted this rumor, on Friday, April 19, 2013, at 15:41:35 and 15:47:58 (+0000).
\item Sandy Hook Child Killed in Bombing: This rumor was never picked up by any of the 97 bots in our dataset.
\item Donating \$1 Tweet: Only 1 bot in our dataset picked up this tweet, and only on Wednesday, April 17, 2013, at 00:50:24 (+0000). Gupta et al. \cite{gupta20131}, working on the \textit{same} dataset, report that the tweet was Retweeted 28,350 times.
\end{enumerate}
This particular bot behavior of not picking up and spreading rumors can be attributed to the fact that bots get their data from verified sources (see Section \ref{srcv} and Table \ref{table:veri}).
\section{Analysis of Tweets}
In this section, we analyze tweets posted by bots during high impact events. We first present our analysis of the URLs present in these tweets, then discuss our findings regarding the sources of the tweets, and finally present the results of our time-related analysis.
\subsection{URL Analysis}\label{URLA}
In our Boston Blast data we have 44,071 tweets by accounts annotated as bots and 7,099 tweets by accounts annotated as non-bots. Of these, 36,672 (83\%) bot tweets and 4,849 (68\%) non-bot tweets contained URLs. Table \ref{table:url} lists the top URL hostnames shared during the Boston Marathon Blasts by frequency.\footnote{Similar results were observed in the other 4 high impact events.} The top 8 hostnames used by Boston bots are standard URL shorteners, of which bit.ly is popular among both bots and non-bots (the site cmole.com had been suspended at the time of writing). To analyze further, we expanded all the bit.ly-shortened URLs; the top 10 expanded hostnames are listed in Table \ref{table:urlbitly}.
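Such a hostname ranking can be reproduced with a short script. A minimal sketch in Python (not the code used in this study; the sample URLs below are illustrative, not drawn from the dataset):

```python
from collections import Counter
from urllib.parse import urlparse

def top_hostnames(urls, k=10):
    """Return the k most frequent hostnames among the given URLs."""
    counts = Counter(h for h in (urlparse(u).netloc for u in urls) if h)
    return counts.most_common(k)

# Illustrative sample (not from the dataset):
sample = ["http://bit.ly/a1", "http://bit.ly/b2", "http://j.mp/c3",
          "http://dlvr.it/d4", "not-a-url"]
print(top_hostnames(sample, k=3))  # [('bit.ly', 2), ('j.mp', 1), ('dlvr.it', 1)]
```

Strings without a hostname (such as malformed URLs) are filtered out before counting.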
\begin{table}[!h]
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
Rank & Boston Bots & Boston Non Bots \\ \hline
1 & bit.ly & satu-Indonesia.com \\ \hline
2 & j.mp & bit.ly \\ \hline
3 & dlvr.it & adf.ly \\ \hline
4 & q.gs & inktothepeople.com \\ \hline
5 & cmole.com & tavernkeepers.com \\ \hline
6 & adf.ly & twitter.com \\ \hline
7 & bo.st & www.youtube.com \\ \hline
8 & youtu.be & apne.ws \\ \hline
9 & pulpnews.com & nbcnews.to \\ \hline
10 & www.rightnow.io & cnsnews.com \\ \hline
\end{tabular}
\caption{Top URL hostnames shared during Boston Blast.}
\label{table:url}
\end{center}
\end{table}
\begin{table}[!h]
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
Rank & Boston Bots & Boston Non Bots \\ \hline
1 & www.sigalert.com & feeds.abcnews.com \\ \hline
2 & www.indeed.com & news.google.com \\ \hline
3 & likes.com & rss.cnn.com \\ \hline
4 & boston.craiglist.org & feedproxy.google.com \\ \hline
5 & www.youtube.com & edition.cnn.com \\ \hline
6 & feeds.abcnews.com & aol.sportingnews.com \\ \hline
7 & news24s.com & www.theblaze.com \\ \hline
8 & feedproxy.google.com & www.news12.com \\ \hline
9 & declassifieds.info & www.mediaite.com \\ \hline
10 & www.woweather.com & www.guardian.co.uk \\ \hline
\end{tabular}
\caption{Top bit.ly expansion URL hostnames shared during Boston Blast.}
\label{table:urlbitly}
\end{center}
\end{table}
To check whether these URLs were malicious, we compared them against Google's database using the Google Safe Browsing API.\footnote{\url{https://developers.google.com/safe-browsing/}} Out of the 36,672 URLs posted by bots and the 4,849 posted by non-bots, only 188 and 27, respectively, were marked as ``Malicious". As the percentage of malicious URLs is very low, we can say that bots participating in discussions of high impact events generally do not spread malicious content.
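A lookup of this kind goes through the Safe Browsing v4 \texttt{threatMatches:find} endpoint, which takes a JSON body listing the candidate URLs. A hedged Python sketch of how such a request body could be assembled (the client name is a hypothetical placeholder, and the actual POST requires an API key, so it is omitted):

```python
def safe_browsing_payload(urls, client_id="bot-url-audit"):
    """Assemble a request body for the Safe Browsing v4
    threatMatches:find endpoint; client_id is a hypothetical name."""
    return {
        "client": {"clientId": client_id, "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

payload = safe_browsing_payload(["http://bit.ly/a1"])
# Either way, the malicious share reported above is small:
print(f"bots: {188 / 36672:.2%}, non-bots: {27 / 4849:.2%}")  # bots: 0.51%, non-bots: 0.56%
```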
\subsection{Tweet Source Analysis} \label{TSA}
The Twitter API gives us the source from which a tweet originated, for example, Twitter for iPhone, web, etc. This field provides an excellent feature to differentiate bot and non-bot accounts. Table \ref{table:bostonsource} lists the top sources observed in the Boston Marathon Blast event.\footnote{Similar results were observed for other events. See Appendix.}
\subsubsection{Use of Existing Automation Software}
In the top tweet sources for bots, one can see that many automation services are being used. Services like ``dlvr.it" and ``IFTTT" are used heavily to direct content from outside sources onto Twitter. ``dlvr.it" is an online service that automatically publishes an RSS feed to various social media such as Twitter, Facebook, and Google+. ``IFTTT" lets users create simple ``if this, then that" rules that are triggered by activity outside Twitter, such as a new blog post or a new Instagram picture, with the resulting action published on Twitter or any other service specified by the user.\footnote{See Appendix for some IFTTT recipes.} The use of these services makes it very easy to create bots on Twitter: a bot creator does not need programming experience or dedicated machines to run bots; they can use the services provided by these web apps.
A possible reason for using automated systems is to avoid coding and bot-hosting expenditures. These automation services provide simple web-based GUIs through which bot creators can add bot logic to post content onto Twitter, as mentioned in Section \ref{createbot}. All these services require of bot creators is to log into Twitter once to create app API keys, using which the services can post updates on Twitter.
\begin{table}[!h]
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
Rank & Boston Bots Source & Count & Boston Non Bots Source & Count \\ \hline
1 & twitterfeed & 11405 & Web & 2603 \\ \hline
2 & web & 5844 & twitterfeed & 1969 \\ \hline
3 & Tweet Old Post & 4052 & Tweet Button & 1042 \\ \hline
4 & dlvr.it & 3962 & Twitter for iPhone & 453 \\ \hline
5 & IFTTT & 2049 & TweetDeck & 298 \\ \hline
6 & TweetDeck & 1515 & Botize & 246 \\ \hline
7 & Crime News Updates & 950 & Twitter for iPad & 220 \\ \hline
8 & VenturaCounty\_Retweets & 609 & Echofon & 216 \\ \hline
9 & WordPress.com & 530 & Twitterfall & 31 \\ \hline
10 & Strictly Tweetbot for Wordpress & 353 & Instagram & 12 \\ \hline
11 & Bitly Composer & 186 & HootSuite & 3 \\ \hline
\end{tabular}
\caption{Top Tweet Sources for Boston Blast dataset.}
\label{table:bostonsource}
\end{center}
\end{table}
\subsection{Tweet Time Analysis} \label{TTA}
Zhang et al. \cite{Zhang:2011:DAA:1987510.1987521} based their hypothesis for detecting ``automated activity" on Twitter on the observation that highly automated accounts exhibit timing patterns that are not found in non-automated users. They argued that ``automated activity is invoked by job schedulers that execute tasks at specific times or intervals". They assumed that bots, in order to maximize reach while respecting the limit of 1,000 tweets per day, need to space out their tweets to roughly one tweet per minute to escape penalty under the Twitter Limits.
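Zhang et al.'s rate argument reduces to simple arithmetic: a cap of 1,000 tweets per day forces a minimum average gap between tweets. As a quick check (using the cap cited above):

```python
# Twitter's cap of 1,000 tweets per day implies a minimum average
# spacing between tweets for an account posting at full rate.
seconds_per_day = 24 * 60 * 60
min_avg_gap_seconds = seconds_per_day / 1000
print(min_avg_gap_seconds)  # 86.4 -> roughly one tweet per minute
```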
\begin{figure*}[h]
\centering
{\includegraphics[scale=.35]{ss.png}}
\label{fig:subfig2}
{\includegraphics[scale=.35]{163706568.jpg}}
\label{fig:subfig3}
\caption[Time analysis of a bot active during Boston blasts]{A bot from Boston dataset, and its corresponding tweeting activity based on its Timeline.}
\label{fig:fake}
\end{figure*}
\begin{figure}[!h]
\centering
\includegraphics[scale=.35]{72248158.jpg}
\caption[Time analysis of a nonbot active during Boston blasts]{A non-bot from Boston dataset, and its tweeting activity based on its Timeline.}
\label{fig:nonbottweet}
\end{figure}
We observed results similar to those of Zhang et al. (see Figure \ref{fig:fake} and Figure \ref{fig:nonbottweet}). Both plots are based on the accounts' entire Twitter timelines over a week. Most ``high impact events" tend to last only a few days, sometimes even hours. During such a short period it becomes very difficult to differentiate between a bot and a non-bot account using data only from the event: both high-frequency bot and non-bot accounts show similar tweeting patterns (see Figure \ref{fig:fake2}).
\begin{figure*}[!h]
\centering
{\includegraphics[scale=.35]{bothighimpactpattern.jpg}}
\label{fig:subfig2}
{\includegraphics[scale=.35]{nonbothighimpctpattern.jpg}}
\label{fig:subfig3}
\caption[Tweeting pattern of a bot and a nonbot during Boston blasts]{Similar Tweeting pattern observed for a bot account (left) and a non-bot account (right) in the Boston blast event.}
\label{fig:fake2}
\end{figure*}
We plotted the inter-tweet delay mean against the inter-tweet delay standard deviation for bot and non-bot accounts (see Figure \ref{fig:fake3}), using only their data from the events. Both bot and non-bot accounts produce similar plots. To analyze their tweeting patterns further, we also created average tweet time plots (Figure \ref{fig:fake4}); again, we observed similar behavior between bot and non-bot accounts.
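These per-account statistics can be computed directly from an account's tweet timestamps. A minimal Python sketch (not the code used in the study; the sample timestamps are synthetic):

```python
from datetime import datetime, timedelta
from statistics import mean, pstdev

def inter_tweet_stats(timestamps):
    """Mean and (population) standard deviation, in seconds, of the
    gaps between consecutive tweets; timestamps may arrive unsorted."""
    ts = sorted(timestamps)
    gaps = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
    return mean(gaps), pstdev(gaps)

# A scheduler-like bot: one tweet exactly every 60 seconds.
start = datetime(2013, 4, 15, 11, 0, 0)
bot_times = [start + timedelta(seconds=60 * i) for i in range(10)]
m, s = inter_tweet_stats(bot_times)
print(m, s)  # 60.0 0.0 -- zero variance is the scheduler signature
```

A human account would show a similar mean but a much larger standard deviation.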
\begin{figure*}[!h]
\centering
{\includegraphics[scale=.45]{bot_mean_sd.png}}
\label{fig:subfig2}
{\includegraphics[scale=.45]{nonbot_mean_sd.png}}
\label{fig:subfig3}
\caption[Inter tweet time mean and standard deviation for bot and non-bot accounts]{Inter tweet time mean and standard deviation for bot accounts (left) and non-bot accounts (right) using their data only from the events. (All bots and all non-bots from all events).}
\label{fig:fake3}
\end{figure*}
\begin{figure*}[!h]
\centering
{\includegraphics[scale=.28]{bots_avg_tweet_time.jpg}}
\label{fig:subfig2}
{\includegraphics[scale=.28]{nonbots_avg_tweet_time.jpg}}
\label{fig:subfig3}
\caption[Average tweet time for bot and non-bot accounts]{Average Tweet Time for bot accounts (left) and non-bot accounts (right) using their data only from the events. (All bots and all non-bots from all events, some data points are overlapping).}
\label{fig:fake4}
\end{figure*}
\newpage
\section{Features}
Chu et al. \cite{Chu:2010} proposed a four-part classification system to determine whether a user is a bot. They used a number of Twitter-based features in their experiment, namely:
\begin{enumerate}
\item Tweet Count: Number of tweets posted by an account. Bots post many more tweets than humans.
\item Long term hibernation by some accounts (in bots): Bots generally show long periods of no activity and short bursts of heavy activity. Humans generally show constant activity.
\item Ratio of Followers vs Friends: Bots tend to have more friends than followers. Humans on the other hand have nearly equal number of friends and followers.
\item Temporal tweeting patterns: Bots are more active during specific days of the week.
\item Account Creation Date: Bots are created more ``recently" than non-bot accounts.
\item Device used for Tweeting: Bots generally Tweet via Twitter API or some other programmable services. Humans generally Tweet from mobile phones, web browsers etc.
\item Presence of URL in tweets: Bots tend to include more URLs in their tweets.
\item Time trends: Zhang et al. argue that, because of the mode of automation, bots are limited by the constraints of the Twitter API and have to space out their tweets; they therefore show a pattern in their tweet times.
\end{enumerate}
All the above features can be grouped into two categories: user based features and temporal based features. Time trends, temporal tweeting patterns, and long term hibernation are temporal based features; all others are user based features.
In the case of high impact events, many of these features stop helping us differentiate between bot and non-bot accounts. Some features that are not observed in high impact events are (most have been discussed in previous sections):
\begin{enumerate}
\item Long term hibernation by some accounts: During high impact events, the accounts that participate are not in ``hibernation". The bots that are active are either already listening for updates or, in some cases, created specifically for that event (for example, the spam bot @\_BostonMarathon discussed in Section \ref{rumor} was created specifically to tweet during the Boston Marathon).
\item Temporal tweeting patterns: A high impact event can occur on any day of the week; bot and non-bot accounts post updates about these events irrespective of the day.
\item Presence of URL in tweets: In Section \ref{URLA} we found that 83\% of bot tweets and 68\% of non-bot tweets contain URLs. As this difference is modest, the presence of URLs in tweets is not a strong differentiator.
\item Time trends: In Section \ref{TTA} we applied the analyses of Zhang et al. and found that the tweet timing patterns of accounts active during high impact events are very similar.
\end{enumerate}
\section{Real Time Prediction During an Event}
In this section we apply machine learning techniques to build a model that decides whether an account is a bot. For our analysis, we created two sets of features: one including temporal based features and one excluding them (only user based features). We propose that user based features are the better indicator. We applied the J48 decision tree model to the dataset using WEKA.\footnote{\url{http://www.cs.waikato.ac.nz/ml/weka/}} We created a balanced dataset with equal numbers of bot and non-bot accounts and ran 10-fold cross-validation, in which one subset of the data is used for learning and another for evaluating the decision tree model.
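The 10-fold scheme can be made concrete with a short sketch of fold construction in Python (illustrative only; WEKA handles this internally, and production folds are typically stratified by class):

```python
import random

def k_fold_indices(n, k=10, seed=0):
    """Shuffle indices 0..n-1 and split them into k near-equal folds;
    each fold serves once as the evaluation set while the remaining
    k-1 folds train the classifier."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(idx[start:start + size])
        start += size
    return folds

folds = k_fold_indices(100, k=10)
# Every account index appears in exactly one fold.
assert sorted(i for f in folds for i in f) == list(range(100))
```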
The results of the classifier are shown in Table \ref{table:datar}. In the table, \textbf{F1} denotes the feature set that includes only user based features; \textbf{F2} denotes the set that includes both user and temporal based features.
\begin{table}[!h]
\centering
\begin{tabular}{|p{2cm}|p{1.3cm}|p{1.3cm}|}
\hline
\bf{}&\bf{F2}&\bf{F1}\\
\hline
Accuracy &66.54 \% & 85.10\%\\
\hline
TP Rate &0.665 & 0.851\\
\hline
FP Rate &0.335 & 0.149\\
\hline
Precision &0.716 & 0.852\\
\hline
Recall &0.665 &0.851\\
\hline
F-Measure& 0.645 &0.851\\
\hline
ROC &0.788 & 0.913\\
\hline
\end{tabular}
\caption[Classification results]{F1: User features based classification results, F2: User and Temporal based classification results. TP Rate, FP Rate, Precision, Recall, F-Measure, ROC are represented as weighted averages of the two classes, bot and non-bot}\label{table:datar}
\end{table}
In Table \ref{table:datar}, the accuracy of the F1 feature set is higher than that of the F2 feature set. From this we conclude that user based features better differentiate bots from non-bots.
We also computed the best features by information gain using WEKA. For the F1 set, the best features, in order of information gain, were: device used for tweeting, presence of URLs, ratio of followers vs. friends, account creation date, and tweet count. These results are similar to those obtained by Chu et al. \cite{Chu:2010}.
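An information-gain ranking scores each feature by how much it reduces the entropy of the bot/non-bot label. A minimal sketch of the computation in Python (the toy labels and feature values below are illustrative, not from the dataset):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature):
    """Reduction in label entropy after splitting on a discrete feature."""
    n = len(labels)
    by_value = {}
    for y, x in zip(labels, feature):
        by_value.setdefault(x, []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in by_value.values())
    return entropy(labels) - remainder

# Toy check: if "tweet source" perfectly separates bots from non-bots,
# its gain equals the full label entropy (1 bit for a balanced set).
labels = ["bot", "bot", "human", "human"]
source = ["api", "api", "iphone", "iphone"]
print(information_gain(labels, source))  # 1.0
```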
\section{Detailed Analysis of a Few Bots}\label{5bots}
To further analyze how these bots, which are active during high impact events, function under normal conditions, we picked 5 bots (one from each event) and monitored them for approximately one month (5 March 2014 to 9 April 2014). We took daily snapshots of their followers and tweets, and collected all their mentions and any responses by these bots during this period. The only data we did not capture were Direct Messages sent to or by these bots, because we did not have permission to access them.
\begin{table}[!h]
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
User ID & Screen Name & Tweets & Following & Followers \\ \hline
21287212 & FintechBot & 22.2K & 297 & 1,137 \\ \hline
219913533 & DTNUSA & 565K & 83 & 3,108 \\ \hline
591713080 & tipdoge & 23.9K & 1,333 & 5,622 \\ \hline
1348277670 & Warframe\_BOT & 3,397 & 1 & 4,251 \\ \hline
606204776 & BBCWeatherBot & 535 (Deletes many tweets) & 29 & 307 \\ \hline
\end{tabular}
\caption[The details of the 5 bots studied in depth]{The details of the 5 bots (As of April 9, 2014 2359 hrs IST).}
\end{center}
\end{table}
\begin{table}[!h]
\small
\begin{center}
\begin{tabular}{|l|l|l|l|}
\hline
Screen Name & Profile Description & Location Mentioned & External Website \\ \hline
@FintechBot & A little twitter bot that curates&England&adendavies.com \\& financial services tech news. && \\& Also some curating by @Aden\_76. & & \\ \hline
@DTNUSA & Comprehensive Daily News on &Canada &\\& United States of America Today && \\& \~ © Copyright (c) DTN News && \\&Defense-Technology News && \\&http://defense-technologynews.blogspot.ca/ & & \\ \hline
@tipdoge & available commands: balance, deposit,&&tipdoge.info \\& withdraw, tip on the way to moon & ~ & \\ \hline
@Warframe\_BOT & This bot retweets ONLY Warframe Alerts&& \\& from @WarframeAlerts Sorry, I know& &\\& multiple RT bug. I'll fix it. & ~ & ~ \\ \hline
@BBCWeatherBot & Ask us about the weather! Start a tweet & Everywhere&bbc.co.uk/weather\\& @BBCWeatherbot and tell us where && \\&and when you want to know about. && \\&Trial service by @NixonMcInnes && \\&@BBCRDat \#BBCconnected & & \\ \hline
\end{tabular}
\caption[The profile description, location, and external website mentioned by these 5 bots.]{The details of the 5 bots Part 2 (As of April 9, 2014 2359 hrs IST).}
\end{center}
\end{table}
@Warframe\_BOT changed itself sometime after the event in which it was active and before our data collection: it switched from a news aggregator to a Twitter bot that tweets about game updates, and it deleted all its previous tweets. Such behavior is quite common among bots.
\begin{figure*}[!h]
\centering
{\includegraphics[scale=.4]{tip1.png}}
\label{fig:tip1}
{\includegraphics[scale=.4]{tip2.png}}
\label{fig:tip2}
\caption[Bot ``@tipdoge" interacting with users]{@tipdoge interacting with a user.}
\label{tippp}
\end{figure*}
\begin{figure*}[!h]
\centering
{\includegraphics[scale=.4]{bbc1.png}}
\label{fig:bbc1}
{\includegraphics[scale=.4]{bbc12.png}}
\label{fig:bbc12}
\caption[Bot ``@BBCWeatherBot" interacting with users]{@BBCWeatherBot interacting with a user.}
\label{fig:bbcccc}
\end{figure*}
We also recorded the interactions that many users had with these 5 bots, including Mentions, Retweets, and @replies. @FintechBot and @DTNUSA are bots that spread financial and general news, respectively; users tend to Retweet their tweets and sometimes reply, but these bots never respond to the replies. This is a very frequently observed pattern for this kind of bot. On the other hand, @BBCWeatherBot and @tipdoge are bots that do engage in user interactions. These bots are computer programs that require the user to supply certain values in a particular format in a tweet, to which they respond with some information. @BBCWeatherBot requires the user to tweet the name of a place and returns the weather conditions there; @tipdoge similarly requires input and responds with the current status of, or alterations to, the user's Dogecoin wallet.
\begin{figure}[!h]
\centering
\includegraphics[scale=.4]{bbc2.png}
\caption[Bot tweets having clear patterns and similarity]{Tweets by the @BBCWeatherBot showing clear patterns and text similarity.}
\label{fig:bbc2}
\end{figure}
We can also see that tweets by these bots follow clear patterns: they are highly repetitive and show very high tweet similarity. This is classic bot behavior, since bots are programs that output tweet text and hence have a very limited output range.
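One simple way to quantify this tweet similarity is token-set Jaccard similarity, where templated bot output scores high because only a few slot values change between tweets. A minimal sketch (not the metric used in the study; the sample tweets are invented):

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two tweet texts, in [0, 1]."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

# Two templated weather-bot tweets share most of their tokens:
t1 = "Weather for London: light rain, 12C"
t2 = "Weather for Leeds: light rain, 9C"
print(round(jaccard(t1, t2), 2))  # 0.5
```

Averaging this score over all pairs of an account's tweets gives a simple repetitiveness signal.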
\section{Growth and Changes Observed in Bots (2013 vs 2011)}
In order to determine how the bots that participate in high impact events have changed over the years, we compared bot activity during impactful events in 2013 with activity during similar events from 2011. We took the dataset for the 2011 events from \cite{Gupta:2012:CRT:2185354.2185356}.
\begin{table}[!h]
\begin{center}
\begin{tabular}{|l|l|l|l|}
\hline
Event & Hashtag/Trending Topic & Tweets & Details\\ \hline
Virginia Earthquake & \#earthquake, Earthquake SF & 277,604 & Magnitude 5.8 earthquake \\ \hline
Indiana Fair Tragedy & Indiana State Fair & 49,924 & Stage accident at Fair \\ \hline
Hurricane Irene & Hurricane Irene & 90,237 & Caused 55 deaths \\ \hline
Libya Crisis & Libya, tripoli & 389,506 & Rebel against Qaddafi \\ \hline
Mumbai Blast & \#mumbaiblast,\#needhelp & 32,156 & 3 bomb blasts \\ \hline
UK Riots & \#ukriots, \#londonriots & 542,685 & Riots in United Kingdom \\ \hline
US Downgrading & S\&P, AAA to AA & 148,047 & Debt crisis in US \\ \hline
\end{tabular}
\caption{2011 crisis dataset.}
\label{table:2100c}
\end{center}
\end{table}
The dataset comprises data collected for the events listed in Table \ref{table:2100c}. We used the same methodology to label accounts as bots and non-bots as described in the methodology section.
\begin{table}[!h]
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
~ & Bots & Non-Bots\\ \hline
All 2011 Events & 183 & 95 \\ \hline
\end{tabular}
\caption[Annotation result on 2011 crisis dataset]{Bots in 2011 Crisis Dataset. The number of accounts identified as bots and non-bots.}
\end{center}
\end{table}
Please note that we do not claim the analysis in this section is fully reliable, because three years have passed since the data used here was collected. We cannot assert that the accounts we labeled as bots or non-bots in 2014 behaved similarly in 2011; a lot has changed in three years, including user tweeting patterns and Twitter API restrictions. We were unable to collect all the 2011 tweets by these accounts; they were annotated using their timelines in 2014.
We only compared the tweet sources used by these accounts.
\begin{table}[!h]
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
Rank & 2011 Bot Sources & 2011 Non Bots Sources \\ \hline
1 & twitterfeed & web \\ \hline
2 & TweetDeck & Twitter for iPhone \\ \hline
3 & web & Twitter for BlackBerry \\ \hline
4 & Twitter for iPhone & Twitter for Android \\ \hline
5 & PageBase.Net & Tweet Button \\ \hline
6 & Echofon & Visibli \\ \hline
7 & Tweet Button & Twitter for iPad \\ \hline
8 & Resonancers & UberSocial for BlackBerry \\ \hline
9 & Mobile Web & HootSuite \\ \hline
10 & Butting In & Mobile Web \\ \hline
\end{tabular}
\caption{Top tweet sources in 2011 crisis dataset by bots and nonbots.}
\label{table:2011src}
\end{center}
\end{table}
We can see in Table \ref{table:2011src} that bots in 2011 used more Twitter apps to post to Twitter; this is in contrast to the heavy use of automation web apps such as IFTTT and dlvr.it in 2013 (Table \ref{table:bostonsource}).
\chapter{Discussion, Limitations and Future Work}\label{chapter:discuss}
In this study, we have presented an analysis of automated activity during high impact events on Twitter. We began by defining high impact events and highlighting how Twitter is used as a news medium by its users during such events. We then discussed the use of bots on Twitter and the Twitter API rules that govern posts on the online social network; these rules are important because they help regulate bot activity.
Next, we discussed our methodology and our criteria for event selection, based on the events' political and economic impact and importance. We collected the data for each event through the Twitter API. We also described our annotation scheme, in which about 1,000 Twitter accounts were annotated into two categories, ``bots" and ``non-bots"; the annotation yielded 377 bot accounts and 115 non-bot accounts. We then used the Twitter API to collect more data associated with these annotated accounts, such as their timelines and followers.
In Chapter 5 we presented our analysis of automated activity during high impact events on Twitter. We began by discussing how bots are created and how this process is constrained by various Twitter API rules. We then discussed some basic characteristics of the annotated bot and non-bot accounts, presenting insights on their numbers of friends and followers and the most common keywords in their profile descriptions. We discussed the network formed by bots and their followers and friends during the Boston Marathon bombings and how these bots ``broker" information from trusted accounts to the masses. We also showed that bots generally did not propagate rumors, and when they did, they picked them up very late; this can be attributed to the fact that they mainly use trusted sources for the information they propagate. We further analyzed their URLs, tweet sources, and tweet times. We discussed in detail the user based and temporal based features that can be used to build a classifier for deciding whether a given account is a bot; using WEKA, this classifier achieved an accuracy of 85.10\% with user based features alone. Finally, we analyzed 5 bots active during high impact events in detail, showed how they actually work, and compared bot activity between 2011 and 2013.
We must also highlight a possible limitation of our work: the annotations were done a few months after the events, and bot behavior or other conditions could have changed in that interval. Moreover, we cannot claim that data collected from the Twitter API endpoints, which give limited access to the actual data, is representative of the full data.
In the future, we would like to analyze more events to strengthen our analysis and assertions, and to verify these results for other high impact events. We would also like to investigate further the role of automated activity in the propagation of fake and malicious content on Twitter, and to develop a tool that differentiates between bot and non-bot accounts in real time.
\bibliographystyle{acm}
Full Text: Padma-winning scientists protest 'a rash of bigoted acts'
Over 100 eminent and distinguished scientists issue a joint statement against 'intolerance and rejection of reason' by 'important functionaries of the government'.
Over 100 eminent and distinguished scientists, many of them recipients of Padma Bhushan and Padma Shri awards, have issued a joint statement expressing their deep concern with the "climate of intolerance, and the ways in which science and reason are being eroded in the country".
"It is the same climate of intolerance, and rejection of reason that has led to the lynching in Dadri of Mohammad Akhlaq Saifi and the assassinations of Prof Kalburgi, Dr Narendra Dabholkar and Shri Govind Pansare," the scientists said in their statement, taking the government to task for a "rash of bigoted acts, attacks on minorities and Dalits, which show no signs of abating."
Invoking Article 51 A (h) of the Indian Constitution, the scientists pointed out that it demands, as a part of the fundamental duties of the citizens, that we "...develop the scientific temper, humanism and the spirit of inquiry and reform."
"Unfortunately," they went on to add, "What we are witnessing instead is the active promotion of irrational and sectarian thought by important functionaries of the government."
This statement follows another group of over 130 scientists who had urged President Pranab Mukherjee on Tuesday to take "suitable action" to stop incidents of "intolerance, polarisation and [the spreading] of communal hatred" from "taking our country, which has a rich heritage and cultural diversity, backwards."
Earlier, artists and sociologists had expressed their support of over 35 writers across the country, representing several linguistic groups, who returned their Sahitya Akademi awards or resigned from their positions at the country's top literary body to protest what they perceived as the rising tide of intolerance and the shrinking space for free expression.
The avalanche of these protests was triggered off by Hindi writer Uday Prakash who was the first to return his award to Sahitya Akademi, which was soon followed by noted writer Nayantara Sahgal and former Lalit Kala Akademi chairman Ashok Vajpeyi.
"The writers have shown the way with their protests," the statement by scientists said. "We scientists now join our voices to theirs, to assert that the Indian people will not accept such attacks on reason, science and our plural culture. We reject the destructive narrow view of India that seeks to dictate what people will wear, think, eat and who they will love."
The full text of the statement is given below.
The scientific community is deeply concerned with the climate of intolerance, and the ways in which science and reason are being eroded in the country.
It is the same climate of intolerance, and rejection of reason that has led to the lynching in Dadri of Mohammad Akhlaq Saifi and the assassinations of Prof Kalburgi, Dr Narendra Dabholkar and Shri Govind Pansare. All three fought against superstition and obscurantism to build a scientific temper in our society. Prof Kalburgi was a renowned scholar and an authority on the Vachana literature associated with the 12th-century reformer Basava, who opposed institutionalised religion, caste and gender discrimination. Similarly, Dr Dabholkar and Shri Pansare promoted scientific temper through their fight against superstition and blind faith.
The Indian Constitution in Article 51 A (h) demands, as a part of the fundamental duties of the citizens, that we '...develop the scientific temper, humanism and the spirit of inquiry and reform'. Unfortunately, what we are witnessing instead is the active promotion of irrational and sectarian thought by important functionaries of the government.
The Indian civilisation is a truly plural one. We have always had many practices and communities that have allowed space for each other; we celebrate the festivals and anniversaries of all faiths. This unity and peace has now been disturbed by a rash of bigoted acts, attacks on minorities and Dalits, which show no signs of abating.
The writers have shown the way with their protests. We scientists now join our voices to theirs, to assert that the Indian people will not accept such attacks on reason, science and our plural culture. We reject the destructive narrow view of India that seeks to dictate what people will wear, think, eat and who they will love.
We appeal to all other sections of society to raise their voice against the assault on reason and scientific temper we are witnessing in India today.
The views expressed in the statement are individual and do not reflect views of the institution a signatory is affiliated to.
Dr Alladi Sitaram Visiting Professor, Chennai Mathematical Institute; Professor Emeritus, Indian Statistical Institute, Bengaluru
Dr Ashoke Sen, Padma Bhushan, Fellow of Royal Society (FRS), Distinguished Professor, Harish-Chandra Research Institute, Allahabad
Dr Ashok Jain, Former Director, National Institute of Science Technology and Development Studies (NISTADS), New Delhi
Dr A Gopalakrishnan, Former Chairman, Atomic Energy Regulatory Board, Government of India
Dr D Balasubramanian, Padma Shri Research Director, LV Prasad Eye Institute, Hyderabad, & former Director Centre for Cellular and Molecular Biology, Hyderabad
Dr Madabusi Raghunathan, Padma Bhushan, Fellow of Royal Society (FRS), Professor, National Centre for Mathematics, Indian Institute of Technology, Mumbai
Dr PM Bhargava, Padma Bhushan, Former Director, Centre for Cellular and Molecular Biology, Hyderabad (one of the original signatories to the 1981 Scientific Temper Statement)
Dr P Balaram, Padma Bhushan, Former Director, Indian Institute of Science, Bengaluru
Dr Satyajit Mayor, Foreign Associate, US National Science Academy, Director, National Centre for Biological Sciences, Tata Institute of Fundamental Research, Bengaluru
Dr Spenta Wadia, Emeritus Professor and Founding Director, International Centre for Theoretical Sciences, Tata Institute of Fundamental Research, Bengaluru
Dr AP Balachandran, Joel Dorman Steele Professor of Physics (Emeritus), Physics Department, Syracuse University, Syracuse, New York, USA.
Dr Abhishek Dhar, Tata Institute of Fundamental Research, Bengaluru
Dr Alak K Ray, Tata Institute of Fundamental Research, Mumbai
Dr Alok Laddha, Chennai Mathematical Institute, Chennai
Dr Amit Sengupta, National Convenor, Peoples Health Movement
Dr Anna George, National Institute of Immunology, New Delhi
Dr Arnab Bhattacharya, Tata Institute of Fundamental Research, Mumbai
Dr Arvind, Indian Institute of Science Education and Research, Mohali
Dr Ashvin Vishvanath, University of California-Berkeley, USA
Dr Avinash Dhar, Tata Institute of Fundamental Research, Bengaluru
Dr B Ravindran, Institute for Life Sciences, Bhubaneswar
Dr BLS Prakasa Rao, CR Rao Advanced Institute of Mathematics, Statistics and Computer Science, Hyderabad
Dr B Ananthanarayan, Indian Institute of Science, Bengaluru
Dr B Rajeev, Indian Statistical Institute, Bangalore
Dr B Ekbal, Former Vice Chancellor, Kerala University
Dr B Sury, Indian Statistical Institute, Bengaluru
Dr Chandrasekhar Khare, Fellow of Royal Society, Professor of Mathematics, University of California-Los Angeles, USA, and Tata Institute of Fundamental Research
Mr D Raghunandan, Director, Centre of Technology and Development, New Delhi
Dr DP Sen Gupta, Visiting Professor, National Institute of Advanced Studies, Bengaluru
Dr Debashis Ghoshal, School of Physical Sciences, Jawaharlal Nehru University, New Delhi.
Dr Dhruv Raina, Zakir Husain Centre for Educational Studies, Jawaharlal Nehru University, New Delhi
Dr Dilip Ahuja, Professor, National Institute of Advanced Studies, Bengaluru
Dr Dinesh Abrol, Institute for Studies in Industrial Development, New Delhi
Dr Dipendra Prasad, Tata Institute of Fundamental Research, Mumbai
Dr Firoza Sutaria, Indian Institute of Astrophysics, Bengaluru
Dr Gadadhar Misra, Indian Institute of Science, Bengaluru
Dr Gautam Mandal, Tata Institute of Fundamental Research, Mumbai
Dr Hema Murthy, Indian Institute of Technology, Chennai
Dr Joseph Samuel, Raman Research Institute, Bengaluru
Dr Justin David, Indian Institute of Science, Bengaluru
Dr Kapil Paranjape, Indian Institute of Science Education and Research, Mohali
Dr LS Shashidhara, Indian Institute of Science Education and Research, Pune
Dr Leena ChandranWadia, Observer Research Foundation, Mumbai
Dr MG Narasimhan, National Institute of Advanced Studies, Bengaluru
Dr MRN Murthy, Indian Institute of Science, Bengaluru
Dr MVN Murthy, Professor Emeritus, The Institute of Mathematical Sciences, Chennai
Dr Madan Rao, Raman Research Institute and NCBS-TIFR, Bengaluru
Dr Mangal C Mahato, Department of Physics, North East Hill University, Shillong
Dr Nilmani Mathur, Tata Institute of Fundamental Research, Mumbai
Dr P Ajith, Tata Institute of Fundamental Research, Bengaluru
Dr Pallab Basu, Tata Institute of Fundamental Research, Bengaluru
Dr Partha P Majumder, National Institute of Biomedical Genomics, Kalyani
Dr Parthib Basu, Department of Zoology, Calcutta University, Kolkata
Mr Prabir Purkayastha, Chairperson, Knowledge Commons, New Delhi
Dr Prajit K Basu, Centre for Neural and Cognitive Sciences, University of Hyderabad
Dr Prajval Shastri, Indian Institute of Astrophysics, Bengaluru
Dr Pramathanath Sastry, Chennai Mathematical Institute, Chennai
Dr Pravabati Chinganbam, Indian Institute of Astrophysics, Bengaluru
Dr Probal Choudhuri, Indian Statistical Institute, Kolkata
Dr Purushottam Kulkarni, Indian Institute of Technology, Mumbai
Dr R Ramanujam, Institute of Mathematical Sciences, Chennai
Dr R Shankar, Institute of Mathematical Sciences, Chennai
Dr Rahul Roy, Indian Statistical Institute, New Delhi
Dr Rajat Tandon, School of Mathematics and Statistics, University of Hyderabad
Dr Rajesh Gopakumar, Director, International Centre for Theoretical Sciences, Tata Institute of Fundamental Research, Bengaluru
Dr Rama Govindarajan, Tata Institute of Fundamental Research, Hyderabad
Dr Ranjini Bandyopadhyay, Raman Research Institute, Bengaluru
Dr Ravinder Banyal, Indian Institute of Astrophysics, Bengaluru
Dr Riddhi Shah, School of Physical Sciences, Jawaharlal Nehru University, New Delhi.
Dr Ronnie Sebastian, Indian Institute of Science Education and Research, Pune
Dr Rukmini Dey, Tata Institute of Fundamental Research, Bengaluru
Dr S Chakrabarti, Department of Chemistry, Calcutta University, Kolkata
Dr S Ranganathan, National Institute of Advanced Studies Bengaluru
Dr Sabyasachi Chatterjee, Formerly Indian Institute of Astrophysics, Bengaluru; and President All India Peoples Science Network
Dr Sachindeo Vaidya, Indian Institute of Science, Bengaluru
Dr Samriddhi Sankar Ray, Tata Institute of Fundamental Research, Bengaluru
Dr Sandeep Krishna, Tata Institute of Fundamental Research, Bengaluru
Dr Sanjib Sabhapandit, Raman Research Institute, Bengaluru
Dr Santanu Datta, Chief Scientific Officer, BUGWORKS Inc., Bengaluru
Dr Satyajit Rath, National Institute of Immunology, New Delhi
Dr Saumen Datta, Tata Institute of Fundamental Research, Mumbai
Dr Shamsher Singh, Indian Statistical Institute, Bengaluru
Dr Sharada Srinivasan, National Institute of Advanced Studies, Bengaluru
Dr Shiraz Naval Minwalla, Tata Institute of Fundamental Research, Mumbai
Dr Shrikrishna G Dani, Indian Institute of Technology, Mumbai
Dr Shyamala Mani, Indian Institute of Science, Bengaluru
Dr Siva Athreya, Indian Statistical Institute, Bengaluru
Dr Sorab N Dalal, Advanced Centre for Treatment, Research and Education in Cancer, Navi Mumbai
Dr Sriram Ramaswamy, Director, TIFR Centre for Interdisciplinary Sciences, Tata Institute of Fundamental Research, Hyderabad
Dr Subhro Bhattacharjee, Tata Institute of Fundamental Research, Bengaluru
Dr Sumati Surya, Raman Research Institute, Bengaluru
Dr Sumathi Rao, Harish-Chandra Research Institute, Allahabad.
Dr Sunil Mukhi, Indian Institute of Science Education and Research, Pune
Dr Suresh Govindarajan, Dept of Physics, Indian Institute of Technology, Chennai
Dr Suvrat Raju, Tata Institute of Fundamental Research
Dr T Jayaraman, School of Habitat Studies, Tata Institute of Social Sciences, Mumbai
Dr TR Govindarajan, Chennai Mathematical Institute, Chennai
Ms Tejal Kanitkar, Centre for Climate Change and Sustainability Studies, Tata Institute of Social Sciences, Mumbai
Dr Tiju Thomas, Indian Institute of Technology, Chennai
Dr Todadri Senthil, Massachusetts Institute of Technology (MIT), USA
Dr Vani VC, Indian Institute of Science, Bengaluru
Dr Venkatesh Athreya, Adjunct Professor, Asian College of Journalism, Chennai; formerly President, All India Peoples Science Network
Dr Vidita Vaidya, Tata Institute of Fundamental Research, Mumbai
Dr Vijay Kumar Krishnamurthy, Tata Institute of Fundamental Research, Bengaluru
Dr Vineeta Bal, National Institute of Immunology, New Delhi
Dr Vishal Vasan, Tata Institute of Fundamental Research
Dr Vivek Borkar, Institute Chair Professor, Indian Institute of Technology, Mumbai
AIIMS Raipur has released a recruitment notification for professor jobs, with online applications accepted through www.aiimsraipur.edu.in.
The All India Institute of Medical Sciences (AIIMS) Raipur has released a recruitment notification for professor jobs through online application. The recruitment details are officially published on the website www.aiimsraipur.edu.in.
Candidates who are interested in this recruitment must follow the official website www.aiimsraipur.edu.in. AIIMS Raipur is recruiting for a total of 98 posts: professor, associate professor and assistant professor.
Candidates who want to apply for these AIIMS Raipur professor jobs have to apply online through the website www.aiimsraipur.edu.in. Submission of online applications runs from 05.08.2013 to 04.09.2013.
Age limit: Candidates' age should not exceed 50 years for posts 3 and 4, and 58 years for posts 1 and 2.
Educational qualifications: Candidates must possess a PhD, a medical specialty or a postgraduate qualification, as applicable. Candidates should see the notification for the post-wise qualifications.
Fee: Candidates have to pay Rs. 500/- (Rs. 200/- in the case of SC/ST candidates), with payment made online.
How to apply: Candidates who want to apply for these professor jobs have to apply online through the website www.aiimsraipur.edu.in. Submission of applications runs from 05.08.2013 to 04.09.2013.
Candidates should visit www.aiimsraipur.edu.in for details of the AIIMS Raipur professor jobs recruitment and to apply online.
Submission of online applications: 05.08.2013 to 04.09.2013.
from keras.layers import LSTM, Input, Dense, Embedding, merge
from keras.models import Model


class C2W(Model):
    """Character-to-Word (C2W) model: composes a word embedding from the
    word's characters using a bidirectional LSTM encoder.

    Note: this uses the legacy Keras 1.x functional API (`merge`,
    `consume_less`, and `input=`/`output=` keyword arguments).
    """

    def __init__(self, maxlen, d_C, d_W, d_Wi, V_C):
        """
        maxlen = maximum input word length
        d_C    = character features (input embedding vector size)
        d_W    = word features (output word embedding vector size)
        d_Wi   = internal encoder state dimension
        V_C    = character vocabulary
        """
        # Character indices of one word, padded to maxlen.
        c_I = Input(shape=(maxlen,), name='context', dtype='int32')
        # Embed each character index into a d_C-dimensional vector
        # (index 0 is reserved for padding, hence V_C.size + 1).
        c_E = Embedding(V_C.size + 1, d_C, mask_zero=False)(c_I)
        # Read the character sequence in both directions; keep only the
        # final hidden state of each direction.
        forward = LSTM(d_Wi,
                       return_sequences=False,
                       go_backwards=False,
                       consume_less='gpu')(c_E)
        backwards = LSTM(d_Wi,
                         return_sequences=False,
                         go_backwards=True,
                         consume_less='gpu')(c_E)
        # Project each final state down to the word-embedding size, then
        # combine the two directions by element-wise summation.
        s_Ef = Dense(d_W)(forward)
        s_Eb = Dense(d_W)(backwards)
        s_E = merge(inputs=[s_Ef, s_Eb], mode='sum')
        super(C2W, self).__init__(input=c_I, output=s_E, name='C2W')
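The final step of the model — projecting the forward and backward encoder states to the word-embedding size and summing them — can be sketched without Keras at all. Below is a minimal pure-Python illustration of that combination step; the dimensions and all weight values are made-up illustration numbers, not trained parameters.

```python
# Sketch of C2W's combination step: project each directional state
# (size d_Wi) to the word-embedding size d_W with its own dense layer,
# then merge with mode='sum' (element-wise addition).

d_Wi, d_W = 4, 3  # internal state size, word-embedding size

def dense(W, b, x):
    """y = W.x + b, with W given as d_W rows of length d_Wi."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
            for row, b_i in zip(W, b)]

# Final hidden states of the forward and backward LSTMs (illustrative).
h_forward  = [0.5, -1.0, 0.25, 2.0]
h_backward = [1.0, 0.0, -0.5, 1.5]

# One independent projection per direction, as in the model (s_Ef, s_Eb).
W_f = [[0.1, 0.2, 0.0, 0.3], [0.0, 0.1, 0.1, 0.0], [0.2, 0.0, 0.0, 0.1]]
W_b = [[0.3, 0.0, 0.1, 0.0], [0.1, 0.1, 0.0, 0.2], [0.0, 0.2, 0.1, 0.1]]
b_f = [0.0, 0.1, -0.1]
b_b = [0.1, 0.0, 0.0]

s_Ef = dense(W_f, b_f, h_forward)
s_Eb = dense(W_b, b_b, h_backward)

# mode='sum' merge: the word embedding s_E is the element-wise sum.
s_E = [a + b for a, b in zip(s_Ef, s_Eb)]
assert len(s_E) == d_W
```

Summation keeps the output dimension at d_W regardless of the encoder size d_Wi, which is why both directions must first be projected by their own Dense layer.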
Efe Ajagba Ready for Top Rank Debut
LAS VEGAS (September 14, 2020) — Heavyweight knockout artist Efe Ajagba will make his Top Rank on ESPN debut Saturday, Sept. 19 against veteran Jonnie Rice in a 10-rounder as the co-feature to the Jose Pedraza-Javier Molina junior welterweight main event from the MGM Grand Las Vegas.
On the undercard, a pair of newly signed 17-year-old Top Rank prospects who are co-promoted by Antonio Leonard Promotions, welterweights Jahi Tucker and Kasir "Mazzi" Goldston, will see action in separate four-round contests. Goldston, from Deer Park, N.Y., will fight Isaiah Varnell (3-2, 2 KOs), while Tucker will face Deandre Anderson (1-1).
The undercard bouts will stream live on ESPN+ beginning at 7:30 p.m. ET, with the co-feature scheduled to begin at approximately 10 p.m. ET.
"I am delighted that Efe Ajagba will make his Top Rank debut as he continues his march to the top of the heavyweight division," said Top Rank chairman Bob Arum. "Kasir Goldston and Jahi Tucker are two major talents, and we are excited to see their professional journeys begin here in Las Vegas."
Ajagba (13-0, 11 KOs) resumes his heavyweight world title quest with a new promoter, manager (James Prince) and head trainer (Kay Koroma). Now living in Houston, Ajagba last fought in March, knocking out former world title challenger Razvan Cojanu in nine rounds. At 6'6" and 240-plus pounds, the 26-year-old former Nigerian Olympian is one of the division's youngest contenders. Rice (13-5-1, 9 KOs) measures 6'5" and often tips the scales at more than 260 pounds. He is known for his durability, as his only two knockout defeats have come against previously unbeaten foes in the seventh and 10th rounds, respectively.
"To all my fans, the wait is finally over. I am ready to get back in the ring and do what I do best," Ajagba said. "I haven't fought since March 7, and I've been looking for someone to devour. On September 19, I finally get to do it. Tune in to ESPN+. You don't want to miss it."
Goldston, from Albany, N.Y., won three consecutive National Junior Olympic titles from 2015-2017 and back-to-back Junior Open Championships in 2017 and 2018. Last year, at the prestigious Bornemissza Tournament in Eger, Hungary, he took home a silver medal after a split decision loss to a Hungarian boxer.
Tucker, from Deer Park, N.Y., was ranked first in the nation at 138 pounds after winning the 2018 USA National Boxing Championship in Salt Lake City, Utah. A high school sophomore at the time, Tucker aimed to qualify for the 2024 Summer Olympics. He picked up his first international gold medal last June at the Bornemissza Tournament and elected to turn pro rather than wait for the Olympics.
In other undercard contests:
Two-time Cuban Olympic gold medalist Robeisy Ramirez (4-1, 3 KOs), who avenged his lone pro defeat via shutout decision over Adan Gonzales on July 2, will fight Felix Caraballo (13-2-2, 9 KOs) in an eight-rounder at featherweight. Caraballo last fought June 9, losing via sixth-round knockout to former featherweight world champion Shakur Stevenson, the man Ramirez edged in the 2016 Olympic gold medal match.
Undefeated junior middleweight prospect Leo Ruiz (7-0, 5 KOs), winner of four straight by knockout in three rounds or less, will fight an opponent to be named in a six-rounder.
Bryan Lua (5-0, 2 KOs), from California's Central Valley, will fight for the first time in more than two years against an opponent to be named in a six-rounder at lightweight.
Puerto Rican junior lightweight prospect Frevian Gonzalez (3-0, 1 KO), who won a decision inside the "Bubble" on June 18, returns to fight Carlos Marrero (2-3-1) in a four-rounder.
The WhiskyX Returns to Boulevard Pool at The Cosmopolitan Of Las Vegas, Oct. 29
LAS VEGAS (July 19, 2022) – The Cosmopolitan of Las Vegas welcomes back The WhiskyX to Boulevard Pool for an evening of elevated whisky tastings, live musical renderings, award-winning bites and more on Saturday, Oct. 29, 2022. The second-annual event will offer guests a curated mix of more than 60 world-class whiskies, live sets from Alabama-based rock and roll soul band St. Paul & The Broken Bones, an array of bites from the world-renowned Restaurant Collection and complimentary grooming services from The Barbershop Cuts & Cocktails.
Giving Module Note to Media
The Cosmopolitan of Las Vegas is pleased to share the Q2 results of its Giving Module program, which has provided nearly $172,000 in local aid since its inception in July 2019. As a result of the resort's ongoing charitable slot kiosk program, more than $13,000 was donated to four local charities from April to June this year, including Grant a Gift Autism Foundation, Signs of HOPE, Nevada Homeless Alliance and The Pride Tree.
The Cosmopolitan Of Las Vegas Welcomes Grammy®- Nominated Rock Band Counting Crows to The Chelsea, July 23
LAS VEGAS (Apr. 4, 2022) – The Cosmopolitan of Las Vegas is thrilled to welcome GRAMMY® and Academy Award-nominated rock band Counting Crows to The Chelsea on Saturday, July 23. The highly anticipated announcement follows the band's immense success over the last two decades, selling more than 20 million records worldwide and releasing seven studio albums.
American Boy Band Why Don't We to Perform at The Chelsea at The Cosmopolitan Of Las Vegas, Aug. 19
LAS VEGAS (Mar. 31, 2022) – The Cosmopolitan of Las Vegas welcomes multi-talented, five-piece band Why Don't We to The Chelsea on Friday, Aug. 19. The highly anticipated performance is a stop on the group's "The Good Only Times" tour featuring special appearances by The Aces and JVKE.
Multi-Platinum Hitmaker Sam Hunt Takes Over The Chelsea Stage, Sept. 23 & 24
LAS VEGAS (Mar. 7, 2022) – The Cosmopolitan of Las Vegas welcomes GRAMMY®-nominated country singer Sam Hunt to The Chelsea stage for a two-night engagement on Friday, Sept. 23 and Saturday, Sept. 24. The multi-platinum-selling hitmaker has accumulated more than 11 billion global streams and earned 39 million RIAA-certified units.
The Cosmopolitan Of Las Vegas Makes a Splash This 2022 Pool Season with Luxurious and Unique Poolside Experiences
LAS VEGAS (Feb. 22, 2022) – The iconic Pool District at The Cosmopolitan of Las Vegas comes to life with an array of summer experiences, featuring the return of Dive In Movies and Sunset Cocktail Hour, the season opening of Overlook Grill, upgraded poolside fitness classes and more.
# Nodal analysis physical problem!

Source: https://www.physicsforums.com/threads/nodal-analysis-physical-problem.413012/

In nodal analysis — one of the methods for analysing a DC circuit — I couldn't understand the physical roots of the method.

I mean, how could I make the voltage of a node equal to zero by taking it as a reference point, without having any effect on the voltages (electric potentials) of the rest of the nodes?

From the algebra point of view I'm 100% sure that this method is correct, but physically I find it difficult to say that the potential difference across the branch "ab" is

$$U_{ab}=v_{a}-v_{b}$$

and then take, for example, 'a' as the reference point, so that $$U_{ab}$$ becomes $$-v_{b}$$.

Thank you.

berkeman (Mentor) replied: You can't. When you choose a node as the reference node and call its voltage zero, all the other node voltages will be referenced to that node. At each node, you use KCL to write an equation that involves the voltage at that node with respect to the voltages of the surrounding nodes. The sum of the currents leaving the node has to equal zero, so that's how you write the equation for each node. The voltage differences are used to express the currents leaving the node in the different directions. Does that help?

waht replied: The potential difference is defined as a path integral of a charge moved in an electric field from point A to point B, which produces a simple potential difference like V2 − V1 if the electric field is constant, as it is in a voltage or current source.

The original poster replied: Thank you, berkeman, for this useful information. From these two definitions I can tell you exactly what I mean: when I attach the node to ground, I have zero voltage at this node. Physically, the electrons must go from the smaller-voltage point ($$v_{ground}=0$$) to the bigger-voltage (upper, positive) point — how does this work, or is this way of thinking wrong?

dlgoff (Gold Member) replied: You don't need to connect a ground to the circuit. And current is defined as the direction of positive-charge flow, i.e. the opposite direction to the electron flow. Take a look at a worked example of nodal analysis: the first step in the analysis is to label all the nodes except for the common node. Secondly, label the currents entering or leaving each node. The next step is to write the KCL equation for each node except the common node.

The original poster replied: Thank you, dlgoff, but I know how to do the analysis using this method; what I can't understand is the case of connecting the circuit to ground without electrons going up into the circuit and influencing the other voltages.
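To make the procedure concrete, here is a minimal sketch of nodal analysis on a two-node resistive circuit (all component values below are made up for illustration): pick a reference node, write KCL at the remaining nodes in conductance form, and solve the resulting linear system. Choosing a different reference shifts every node voltage by the same constant, so branch voltages such as U_ab = v_a − v_b are unaffected.

```python
# Nodal analysis of a small DC circuit (illustrative values):
#
#   Vs --R1-- node1 --R3-- node2
#              |             |
#             R2            R4
#              |             |
#             GND (reference node, defined as 0 V)
Vs = 10.0                              # source voltage [V]
R1, R2, R3, R4 = 1e3, 2e3, 1e3, 2e3    # resistances [ohm]

# KCL at each non-reference node: sum of currents leaving = 0,
# which in matrix (conductance) form reads G @ v = i.
G = [[1/R1 + 1/R2 + 1/R3, -1/R3],
     [-1/R3,               1/R3 + 1/R4]]
i = [Vs/R1, 0.0]

# Solve the 2x2 linear system with Cramer's rule.
det = G[0][0]*G[1][1] - G[0][1]*G[1][0]
v1 = (i[0]*G[1][1] - G[0][1]*i[1]) / det
v2 = (G[0][0]*i[1] - i[0]*G[1][0]) / det

# The branch voltage across R3 does not depend on which node was
# picked as the 0 V reference; only the differences are physical.
U_12 = v1 - v2
```

Running the sketch gives v1 = 60/11 V ≈ 5.45 V and v2 = 40/11 V ≈ 3.64 V; re-deriving the equations with a different reference node shifts both by the same offset while U_12 stays 20/11 V.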
NINA NESBITT TO MAKE LATE NIGHT TELEVISION DEBUT TONIGHT ON CBS' THE LATE SHOW WITH STEPHEN COLBERT – 11:35/10:35PM CST
rockyourlyrics | April 4, 2019
Fresh off her North American headline tour, Scottish singer-songwriter Nina Nesbitt will be making her late night television debut tonight on CBS' The Late Show with Stephen Colbert. Tune in at 11:35ET/ 10:35PM CT to see Nina perform her hit single 'The Best You Had.'
In other news, Nina recently surpassed a quarter of a billion combined streams between her singles and the release of her new album. 'The Sun Will Come Up, The Seasons Will Change' (Cooking Vinyl) features production from Nina Nesbitt herself alongside heavy hitters Lostboy, Fraser T Smith (Adele, Drake, Gorillaz, Florence and the Machine) and Jordan Riley (Macklemore, Zara Larsson); bearing no hint of compromise, Nina demonstrates her unique talent for acute lyrical observations and ear-worm melodies. Heralding a change of musical direction, Nina evolves from her singer-songwriter past into a decidedly more pop realm. The album is a scintillating journey from start to finish, with Nina brilliantly showcasing her impressive vocal range and knack for writing deeply relatable lyrics.
"I'm so proud of this album," Nina says. "This is the album I always wanted to make on my own terms. It's an honest account of somebody in their early '20s, giving a real window into their often ever-changing life." The album chronicles Nesbitt's early adulthood and life experiences (good and bad), with a healthy mix of poignant ballads, like current single 'Is It Really Me You're Missing,' and big, rousing pop moments like 'Colder' and 'Loyal To Me,' both instant pop classics. The album starts with 'Sacred,' an emotionally charged opener. Other obvious highlights are the multi-layered, biographical 'The Moments I'm Missing,' which was written and produced in Nina's bedroom, and the R&B-tinged 'The Best You Had,' which was championed by the likes of Chloë Grace Moretz and Taylor Swift. Further highlights include 'Somebody Special,' a love song with a twist that has had over 30 million streams on Spotify alone, and destined-to-be pop classics like 'Love Letter,' a proper '90s/'00s-inspired R&B-pop banger and 'Loyal To Me's' little sister (according to Nina). The album also features empowering anthems like the standout 'Empire,' not to mention deeply personal songs such as 'Chloe,' 'Things I Say When You Sleep,' and 'Last December.'
After a whirlwind two month stint in North America, Nina will return to the UK kicking off a headline tour on April 8 before heading over to Australia and New Zealand in late May for a string of live dates.
Catch Nina Nesbitt on CBS' The Late Show with Stephen Colbert tonight at 11:35/ 10:35PM CT.
# 1,000,000

Source: https://www.lua.ovh/mundo/en/1%2C000%2C000

← 999,999 · **1,000,000** · 1,000,001 →

- Cardinal: one million
- Ordinal: 1,000,000th (one millionth)
- Factorization: 2⁶ × 5⁶
- Greek numeral: $\stackrel{\rho}{\mathrm{M}}$
- Roman numeral: M
- Binary: 11110100001001000000₂
- Ternary: 1212210202001₃
- Quaternary: 3310021000₄
- Quinary: 224000000₅
- Senary: 33233344₆
- Octal: 3641100₈
- Duodecimal: 402854₁₂
- Hexadecimal: F4240₁₆
- Vigesimal: 65000₂₀
- Base 36: LFLS₃₆

1,000,000 (one million), or one thousand thousand, is the natural number following 999,999 and preceding 1,000,001. The word is derived from the early Italian millione (milione in modern Italian), from mille, "thousand", plus the augmentative suffix -one. It is commonly abbreviated as m (not to be confused with the metric prefix for 1×10⁻³) or M, and as MM ("thousand thousands", from Latin "mille"; not to be confused with the Roman numeral MM = 2,000), mm, or mn in financial contexts.

In scientific notation, it is written as 1×10⁶ or 10⁶. Physical quantities can also be expressed using the SI prefix mega (M) when dealing with SI units; for example, 1 megawatt (1 MW) equals 1,000,000 watts.

The meaning of the word "million" is common to the short scale and long scale numbering systems, unlike the larger numbers, which have different names in the two systems.

The million is sometimes used in the English language as a metaphor for a very large number, as in "Not in a million years" and "You're one in a million", or a hyperbole, as in "I've walked a million miles" and "You've asked a million-dollar question".

## Visualizing one million

Even though it is often stressed that counting to precisely a million would be an exceedingly tedious task due to the time and concentration required, there are many ways to bring the number "down to size" in approximate quantities, ignoring irregularities or packing effects.

- Information: Not counting spaces, the text printed on 136 pages of an Encyclopædia Britannica, or 600 pages of pulp paperback fiction, contains approximately one million characters.
- Length: There are one million millimeters in a kilometer, and roughly a million sixteenths of an inch in a mile. A typical car tire might rotate a million times in a 1,200-mile (1,900 km) trip, while the engine would do several times that number of revolutions.
- Fingers: If the width of a human finger is 2.2 cm (7⁄8 in), then a million fingers lined up would cover a distance of 22 km (14 mi). If a person walks at a speed of 4 km/h (2.5 mph), it would take them approximately five and a half hours to reach the end of the fingers.
- Area: A square a thousand objects or units on a side contains a million such objects or square units, so a million holes might be found in less than three square yards of window screen, or similarly, in about one half square foot (400–500 cm²) of bed-sheet cloth. A city lot 70 by 100 feet is about a million square inches.
- Volume: The cube root of one million is one hundred, so a million objects or cubic units is contained in a cube a hundred objects or linear units on a side. A million grains of table salt or granulated sugar occupies about 64 ml (2.3 imp fl oz; 2.2 US fl oz), the volume of a cube one hundred grains on a side. One million cubic inches would be the volume of a small room 8⅓ feet long by 8⅓ feet wide by 8⅓ feet high.
- Mass: A million cubic millimeters (small droplets) of water would have a volume of one litre and a mass of one kilogram. A million millilitres or cubic centimetres (one cubic metre) of water has a mass of a million grams or one tonne.
- Weight: A million 80-milligram (1.2 gr) honey bees would weigh the same as an 80 kg (180 lb) person.
- Landscape: A pyramidal hill 600 feet (180 m) wide at the base and 100 feet (30 m) high would weigh about a million tons.
- Computer: A display resolution of 1,280 by 800 pixels contains 1,024,000 pixels.
- Money: A USD bill of any denomination weighs 1 gram (0.035 oz). There are 454 grams in a pound. One million USD bills would weigh 2,204.62 pounds (1,000.00 kg), or just over 1 ton.
- Time: A million seconds, 1 megasecond, is 11.57 days.

In Indian English and Pakistani English, it is also expressed as 10 lakh or 10 lac. Lakh is derived from 'laksh' for 100,000 in Sanskrit.
Posted in Blog, Food
There's Something Crawling Around in This Swag Bag
by Jessica Sidman · November 14th, 2014 · September 19th, 2020
Between all the galas, fundraisers, and VIP events, D.C. is a city swimming in swag bags. But EquityEats—a new restaurant crowdfunding platform in which investors earn profits, not just perks—may have just outdone them all at its launch party last night. As guests departed the event at Doi Moi, hosts handed out bags with live lobsters.
The idea came from EquityEats CEO Johann Moonesinghe as a way to promote one of their restaurants seeking funding called Lighthouse, which plans to serve a menu limited to burgers and whole lobsters.
"We were kind of like, 'Are you serious?'" says Steve Lucas, the vice president of strategy and communications for EquityEats. "But the more we thought about it, that was kind of a way to get across what the concept is about."
The company gave away more than 150 lobsters (two per bag) from D.C.-based supplier Lobster Maine-ia. (I opted not to take any.) Each guest got instructions to cook the lobsters within a day or two—the time they'd stay alive in a fridge.
"Part of us was like, 'Are people going to freak out that we're giving out lobsters?'" Lucas says. "But people were just in really good spirits and they took it to heart. And I think it was a good representation of Lighthouse…We wanted to do something that was different."
As of this morning, Lighthouse had raised 38 percent of its $665,000 goal. Three other restaurants are also currently seeking funding through EquityEats: a seafood counter called Albright Special, a seasonally focused American restaurant called Sussex Drive, and a bakery called Bluebird. Read more about EquityEats in Y&H's recent column.
Second-hand smoke from e-cigarettes also harmful – FDA
Blog Post created by Thomas3.20.2010 on Jul 18, 2013
The Food and Drug Administration (FDA) warned yesterday that second-hand smoke from electronic cigarettes could also be harmful to health.
In Advisory No. 2013-015, FDA acting director Kenneth Hartigan-Go said: "Electronic cigarettes are not emission-free... Second-hand exposure to e-cigarette emission which may lead to adverse health effects cannot be excluded.
"E-cigarettes contain volatile organic substances, including propylene glycol, flavors and nicotine, and are emitted as mist or aerosol into indoor air.
"If several people are using e-cigarettes in a room at the same time, considerable indoor air pollution will accumulate and may result to harmful second-hand exposure."
The advisory said that "ultrafine liquid particles of less than 2.5 micrometer in diameter" from e-cigarettes may penetrate into the lungs.
The FDA said e-cigarettes contain various harmful substances, citing the study "Electronic Cigarettes – An Overview" by the German Cancer Research Center Unit Cancer Prevention, Heidelberg and the WHO Collaborating Centre for Tobacco Control.
These substances include "glycol (the main ingredient), nicotine, flavors, tobacco-specific nitrosamines, volatile organic compounds, acetone, formaldehyde, acetaldehyde, benzo(a)pyrene, silicate and various metal particles."
The Cincinnati Reds begin a six-game home stand with three games against the Atlanta Braves, to be followed after one day off by three more against the Milwaukee Brewers. (7:10 p.m., Fox Sports Ohio)
Wednesday-Saturday
For the first time, the University of Kentucky plays host to the Southeastern Conference Softball Tournament. The 10-team field includes nine of the nation's top 25 teams, which counts the 19th-ranked Wildcats. You'll also be able to see No. 2 Tennessee, No. 3 Florida, No. 7 LSU, No. 8 and defending national champion Alabama, No. 11 Missouri, No. 21 Texas A&M, No. 22 Arkansas and No. 24 Georgia. Action begins with a pair of nationally televised games on Wednesday. (4 and 6:30 p.m., ESPNU)
The Lexington Legends play the first of 15 home games this month when they open a three-game series against West Virginia at Whitaker Bank Ballpark. (7:05 p.m.)

Friday
Conner High School's four-star quarterback Drew Barker is expected to choose the University of Kentucky when the senior-to-be announces his college choice in the school's auditorium. (3:30 p.m.)
The UK baseball team opens its final SEC home series of the season when No. 2 Vanderbilt visits Cliff Hagan Stadium for the first of three games. (6:30 p.m., WLAP-AM 630)
var express = require('express');
var router = express.Router();
var async = require('async');
var VersionModel = require('../model/version_model');
var ProjectModel = require('../model/project_model');
var IterationModel = require('../model/iteration_model');
var RouterService = require('../service/router_service');
// Check that the version id exists
router.use('/:id', function (req, res, next) {
VersionModel
.findById(req.params.id)
.then(function (version) {
if (version === null) {
res.status(404);
res.json({msg: '版本不存在'});
} else {
req.version = version;
next();
}
});
});
// Create
router.post('/', checkProject);
router.post('/', checkPN);
router.post('/', function (req, res) {
VersionModel
.build(req.body)
.save()
.then(function (version) {
res.json({id: version.id});
})
.catch(function (err) {
res.status(400);
res.json({msg: err.errors[0].message});
});
});
// List
router.get('/', function (req, res) {
var where = {};
if (req.query.status) {
where.status = req.query.status.split(',');
} else {
where.status = VersionModel.statusOnline;
}
VersionModel
.findAndCount({
where: where,
include: [
{model: ProjectModel}
],
order: 'id DESC',
offset: req.query.offset || 0,
limit: req.query.size || 10
})
.then(function (result) {
res.json(result);
});
});
// Edit
router.put('/:id', checkProject);
router.put('/:id', function (req, res, next) {
if (req.body.name !== req.version.name) {
checkPN(req, res, next);
} else {
next();
}
});
router.put('/:id', function (req, res) {
req.version
.updateAttributes(req.body)
.then(function (version) {
res.json({id: version.id});
})
.catch(function (err) {
res.status(400);
res.json({msg: err.errors[0].message});
});
});
// Close
router.put('/:id/toggle', function (req, res, next) {
req.version
.toggle(req.query.status)
.then(function (version) {
res.json({msg: '操作成功'});
next();
})
.catch(function (err) {
RouterService.json(err, res);
});
});
router.put('/:id/toggle', function (req) {
IterationModel
.findAll({
where: {version_id: req.version.id}
})
.then(function (iterations) {
async.each(iterations, function (iteration, callback) {
iteration
.updateAttributes({
status: IterationModel.statusClosed
})
.then(function (iteration) {
callback();
})
.catch(function (err) {
callback(err.errors[0].message);
});
}, function (err) {
if (err) {
throw err;
}
});
});
});
// Delete
router.delete('/:id', function (req, res) {
req.version
.updateAttributes({
status: VersionModel.statusDeleted
})
.then(function () {
res.json({msg: '删除成功'});
});
});
function checkPN(req, res, next) {
VersionModel
.find({
where: {
project_id: req.body.project_id,
name: req.body.name
}
})
.then(function (version) {
if (version === null) {
next();
} else {
res.status(400);
res.json({msg: '版本号存在!'});
}
});
}
function checkProject(req, res, next) {
ProjectModel
.findById(req.body.project_id)
.then(function (project) {
if (project === null) {
res.status(404);
res.json({msg: '项目不存在!'});
} else {
req.project = project;
next();
}
});
}
module.exports = router;
Chettithangal is a census town in Vellore district in the state of Tamil Nadu, India.
Demographics
At the 2001 India census, Chettithangal had a population of 6,029. Males constituted 50% of the population and females 50%. Chettithangal has an average literacy rate of 70%, higher than the national average of 59.5%, with male literacy of 79% and female literacy of 60%. 10% of the population was under 6 years of age.
\section{Introduction}
\emph{Purely functional programming} (PFP) has a chance of becoming very popular for the simple reason that we now have laptops with four cores and more. The promise of PFP is that because there are no side-effects, no destructive updates, and no shared mutable state, partitioning a program into pieces that run in parallel becomes straightforward.
Another consequence of the freedom from impure language constructs is that reasoning about program correctness, both formally and informally, becomes much easier in PFP languages than in, say, imperative languages. Therefore it is not surprising that PFP is popular within the theorem proving community. For example, the source code of the interactive theorem proving assistant Isabelle~\cite{isabelle} is mostly written in a purely functional style. Outside of such specialty communities though, PFP clearly has not reached the mainstream yet.
A programming paradigm that pervades today's mainstream is Dijkstra's \emph{structured programming}~\cite{structuredprogramming} (SP). Many young programmers do not even know the term structured programming anymore, yet they still construct their object-oriented programs out of building blocks like \bsrc{if}-branches and \bsrc{while}-loops.
Interestingly, the PFP community largely rejects SP because it smells of side-effects, destructive updates, and mutable state, just the things a purely functional programmer wants to avoid. As an example, let us examine the Isabelle (version 2009-2) source code.
Discounting blank lines, it consists of about 140,000 lines of Standard ML~\cite{standardml} (SML) code. Yet only ten of those lines use the \bsrc{while} keyword of SML! Furthermore, five of those ten lines are part of Isabelle's system-level code, and a further three lines stem from the author of this paper trying to circumvent missing tail-recursion optimization. The reason for this sparse use of \bsrc{while} is clear: in order to use \bsrc{while} in SML, one must also use reference cells, which are the embodiment of the small amount of impurity still left in SML.
The easiest way to make PFP more mainstream might be to make SP, which already is part of the mainstream, an integral part of PFP! This is what this paper is about. Our central tool for such a unification of PFP and SP is the notion of \emph{linear scope}. Linear scope makes heavy use of \emph{shadowing}, therefore we first look at shadowing and its treatment in other languages that draw on functional programming, like Erlang and Scala. We then present the syntax of a toy language called \emph{Mini Babel-17} to prepare a proper playground for the introduction of linear scope. First we concentrate on how linear scope interacts with the sequencing and nesting of statements. From there the extension to conditionals and loops is straightforward.
Finally we give a formal semantics for Mini Babel-17 and hence also for linear scope.
\section{Shadowing is Purely Functional}\label{sec:shadowing}
Here is how you could code in SML the function $x \mapsto x^4$:
\begin{babellisting}
fn x => let val x = x * x in x * x end
\end{babellisting}
There is no doubt that the above denotes a pure function. The fact that the introduction of the variable \bsrc{x} via \bsrc{val x = x * x} \emph{shadows} the previous binding of \bsrc{x} in \bsrc{fn x} might make it look a little bit more unusual than the more common
\begin{babellisting}
fn x => let val y = x * x in y * y end,
\end{babellisting}
but of course both denotations are equivalent. Rewriting both functions in De Bruijn notation~\cite{debruijn} would actually yield the exact same closed term.
Yet it seems that the notion that shadowing is somehow wrong lies at the heart of why PFP and SP do not overlap in the minds of many programmers.
An instance where shadowing is forbidden in order to obtain a notion of pure variables is the programming language Erlang which features \emph{single-assignment} of variables. Quoting the inventor of Erlang~\cite[p. 29]{erlang}:
\begin{quote}
When Erlang sees a statement such as X = 1234, it binds the variable X to the value 1234. Before being bound, X could take any value: it's just an empty hole waiting to be filled. However, once it gets a value, it holds on to it forever.
\end{quote}
Clearly in Erlang shadowing is a victim of the idea that variables are not just names bound to values, but that \emph{the variables themselves are the state}.
Something similar can be observed in the programming language Scala~\cite{scala}. Scala combines functional and structured programming in an elegant fashion. But when it comes to integrating \emph{purely} functional programming and SP, Scala does not go all the way: it also forbids shadowing. For example, the following Scala function implements $x \mapsto x^8$,
\begin{babellisting}
(x : Int) => { val y = x*x; val z = y*y; z*z },
\end{babellisting}
but both of the following expressions are illegal in Scala:
\begin{babellisting}
(x : Int) => { val y = x*x; val y = y*y; y*y },
(x : Int) => { val y = x*x; y = y*y; y*y }.
\end{babellisting}
The last expression can be turned into a legal Scala expression by replacing the keyword \bsrc{val}, which introduces immutable variables, with the keyword \bsrc{var}, which introduces mutable variables:
\begin{babellisting}
(x : Int) => { var y = x*x; y = y*y; y*y }.
\end{babellisting}
It might seem that shadowing is possible in Scala after all! But this is not the case. That \bsrc{var} behaves differently from shadowing can easily be checked:
\begin{babellisting}
(x : Int) => { var y = x*x
val h = () => y
y = y*y
h() * y }
\end{babellisting}
also implements $x \mapsto x^8$. With shadowing, we would expect the above function to implement $x \mapsto x^6$.
\section{Syntax of Mini Babel-17}
\emph{Babel-17}~\cite{babel17} is a new dynamically-typed programming language in the making, developed by the author of this paper. One of its main features is that it combines purely functional programming and structured programming, building on the key observation that shadowing is purely functional. For illustration purposes we use a simplified version of a subset of Babel-17, which we call \emph{Mini Babel-17}, as a proposal of what a purely functional structured programming language could look like. An implementation of Mini Babel-17 is available at~\cite{minibabel17}.
A \emph{block} in Mini Babel-17 is a sequence of \emph{statements}:
\begin{babellisting}
$\textsl{block}$ $\longrightarrow$ $\textsl{statement}_1$
$\vdots$
$\textsl{statement}_n$
\end{babellisting}
Several statements within a single line are separated via semicolons. There are seven kinds of statements:
\begin{babellisting}
$\textsl{statement}$ $\longrightarrow$ $\textsl{val-statement}$
$|$ $\textsl{val-assign-statement}$
$|$ $\textsl{yield-statement}$
$|$ $\textsl{if-statement}$
$|$ $\textsl{while-statement}$
$|$ $\textsl{for-statement}$
$|$ $\textsl{block-statement}$
$\textsl{val-statement}$ $\longrightarrow$ val $\textsl{identifier}$ = $\textsl{expression}$
$\textsl{val-assign-statement}$ $\longrightarrow$ $\textsl{identifier}$ = $\textsl{expression}$
$\textsl{yield-statement}$ $\longrightarrow$ yield $\textsl{expression}$
$\textsl{if-statement}$ $\longrightarrow$ if $\textsl{simple-expression}$ then $\textsl{block}$ else $\textsl{block}$ end
$\textsl{while-statement}$ $\longrightarrow$ while $\textsl{simple-expression}$ do $\textsl{block}$ end
$\textsl{for-statement}$ $\longrightarrow$ for $\textsl{identifier}$ in $\textsl{simple-expression}$ do $\textsl{block}$ end
$\textsl{block-statement}$ $\longrightarrow$ begin $\textsl{block}$ end
\end{babellisting}
If the last statement of a \emph{block} is a \emph{yield-statement}, then the \bsrc{yield} keyword may be dropped in that statement.
A \emph{simple-expression} is an expression like those found in most other functional languages, i.e.\ it can be an integer, a boolean, an identifier, an anonymous function, or some operation on \emph{expressions} like function application, multiplication, or comparison:
\begin{babellisting}
$\textsl{simple-expression}$ $\longrightarrow$
$\textsl{integer}$ $|$ $\textsl{boolean}$ $|$ $\textsl{identifier}$
$|$ $\textsl{identifier}$ => $\textsl{expression}$
$|$ $\textsl{expression}_1$ $\textsl{expression}_2$
$|$ $\textsl{expression}_1$ * $\textsl{expression}_2$
$|$ $\textsl{expression}_1$ == $\textsl{expression}_2$
$\vdots$
\end{babellisting}
An \emph{expression} is either a \emph{simple-expression} or one of the following statements:
\begin{babellisting}
$\textsl{expression}$ $\longrightarrow$ $\textsl{simple-expression}$
$|$ $\textsl{if-statement}$
$|$ $\textsl{while-statement}$
$|$ $\textsl{for-statement}$
$|$ $\textsl{block-statement}$
\end{babellisting}
\section{Linear Scope}
Let us gain a first intuitive understanding of Mini Babel-17 before formally introducing its semantics.
Here is how you could denote $x \mapsto x^8$ in Mini Babel-17:
\begin{babellisting}
x => begin val y = x*x; val z = y*y; z*z end
\end{babellisting}
This looks pretty much like the Scala denotation of $x \mapsto x^8$ from Section~\ref{sec:shadowing}.
But because Mini Babel-17 is designed so that shadowing of variables is allowed, an equivalent notation is:
\begin{babellisting}
x => begin val x = x*x; val x = x*x; x*x end
\end{babellisting}
The central idea of Mini Babel-17 is the notion of \emph{linear scope}. Whenever an identifier $x$ is in linear scope, it is allowed to rebind
$x$ to a new value, and \emph{that rebinding will affect all \emph{later} lookups of $x$ that happen within its normal lexical scope}. The rebinding is done via a \emph{val-assign-statement}.
The linear scope of a variable is contained in the usual lexical scope of that variable.
The linear scope of a variable x starts
\begin{itemize}
\item in the statement after the \emph{val-statement} that defines $x$, or
\item in the first statement of an anonymous function that binds $x$ as a function argument, if the body of that function is a block, or
\item in the first statement of the block of a \emph{for-statement} where $x$ is the identifier bound by that loop.
\end{itemize}
It continues throughout the rest of the block unless a new linear scope for $x$ starts. It does extend into nested blocks and statements, but not into \emph{simple-expressions}. The reason for this is that blocks and statements are ordered sequentially, but there is no natural order for the evaluation of the components of a \emph{simple-expression}.
Using the linear scope rules of Mini Babel-17, the above function can also be encoded as
\begin{babellisting}
x => begin x = x*x; x = x*x; x*x end
\end{babellisting}
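Mechanically, this chain of rebindings is the same as a sequence of imperative assignments. As a quick sanity check, the following Python sketch (Python is used here only as a widely available stand-in, not as part of Mini Babel-17) mirrors the block line by line:

```python
def pow8(x):
    # Each line mirrors one rebinding in the Mini Babel-17 block:
    x = x * x       # x = x^2
    x = x * x       # x = x^4
    return x * x    # x^8
```

For example, `pow8(2)` yields 256, i.e. $2^8$.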
If there are no nested blocks involved, then linear scope is no big deal. It is just a fancy way of saying that when in a \emph{val-statement} the variable being defined shadows a previously defined variable, often it is ok to drop the \emph{val} keyword, effectively turning a \emph{val-statement} into a \emph{val-assign-statement}.
But with nested blocks, linear scope becomes important:
\begin{center}
\begin{tabular}{c|cc|cc}
\begin{babellisting}
val x = 2
begin
val y = x*x
val x = y
end
x+x
\end{babellisting}
&
&
\begin{babellisting}
val x = 2
begin
val y = x*x
x = y
end
x+x
\end{babellisting}
&
&
\begin{babellisting}
val x = 2
begin
val y = x*x
val x = 0
x = y
end
x+x
\end{babellisting}
\\\hline
evaluates to 4 & & evaluates to 8 & & evaluates to 4
\end{tabular}
\end{center}
The left and right programs both evaluate to 4 because the \bsrc{begin} ... \bsrc{end} block is superfluous: none of its statements has any effect in the outer scope. The middle program evaluates to 8, though, because the rebinding \bsrc{x = y} affects all later lookups in the lexical scope of the x that has been introduced via \bsrc{val x = 2}, and \bsrc{x+x} certainly is such a later lookup.
The rules of linear scope may sound confusing at first, but they really are not: just replace the \bsrc{val}s with \bsrc{var}s in the three programs above and read them as imperative programs. Which value would you now assign to each program?
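Taking the val-to-var reading literally, the three programs can be sketched in Python as follows (a sketch only; since the inner \bsrc{val x} is invisible outside its \bsrc{begin} ... \bsrc{end} block, it is modeled by the separately named variable `inner_x`):

```python
def left_program():
    x = 2
    # begin ... end: `val x = y` introduces a fresh inner x,
    # so the outer x is untouched.
    y = x * x
    inner_x = y
    return x + x       # 4

def middle_program():
    x = 2
    y = x * x
    x = y              # `x = y` rebinds the *outer* x (linear scope)
    return x + x       # 8

def right_program():
    x = 2
    y = x * x
    inner_x = 0        # `val x = 0` starts a new linear scope for x
    inner_x = y        # `x = y` now rebinds only the inner x
    return x + x       # 4
```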
Let us also recode the last Scala expression of Section~\ref{sec:shadowing} as a Mini Babel-17 expression:
\begin{babellisting}
x => begin
val y = x*x
val h = dummy => y
y = y*y
h 0 * y
end
\end{babellisting}
Mini Babel-17 is purely functional, therefore the value of h is of course not changed by the rebinding \bsrc{y = y*y} which affects only \emph{later} lookups of y. Thus the above expression implements $x \mapsto x^6$, not $x \mapsto x^8$.
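The distinction can be checked in Python as well, with one caveat: Python closures capture variables rather than values, so a plain Python local behaves like Scala's \bsrc{var}. The value-capturing variant below (a sketch, not part of any of the languages discussed) models the linear-scope behavior:

```python
def scala_var_style(x):
    y = x * x
    h = lambda: y                   # closure sees the *variable* y
    y = y * y
    return h() * y                  # x^8: h observes the rebinding

def linear_scope_style(x):
    y = x * x
    h = (lambda v: lambda: v)(y)    # freeze the current *value* of y
    y = y * y
    return h() * y                  # x^6: the later rebinding is invisible to h
```

For `x = 2` the first variant gives 256 ($2^8$) and the second gives 64 ($2^6$).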
\section{Conditionals and Loops}
Conditionals and especially loops are the meat of structured programming. With linear scope, they are easily seen also as part of purely functional programming. All we need to do is to apply linear scoping rules to the nested blocks that the \emph{if}-, \emph{while-} and \emph{for-statements} consist of. For example, this is how you can encode the subtraction based Euclidean algorithm for two non-negative integers in Mini Babel-17:
\begin{babellisting}
a => b =>
if a == 0 then
b
else
val a = a
while b != 0 do
if a > b then
a = a - b
else
b = b - a
end
end
a
end
\end{babellisting}
Note the line \bsrc{val a = a}, which at first sight seems superfluous. But while the linear scope of \bsrc{b} encompasses the whole function body, the linear scope of \bsrc{a} does not, because linear scope does not extend into \emph{simple-expressions}.
If Mini Babel-17 had pattern matching, the line \bsrc{val a = a} could be avoided by starting the function definition with
\begin{babellisting}
[a, b] =>
$\vdots$
\end{babellisting}
instead.
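The algorithm itself transcribes directly into Python (a sketch for checking the algorithm, not the Mini Babel-17 scoping rules, since Python variables are mutable throughout):

```python
def gcd(a, b):
    # Subtraction-based Euclidean algorithm for non-negative integers,
    # mirroring the Mini Babel-17 function above.
    if a == 0:
        return b
    while b != 0:
        if a > b:
            a = a - b
        else:
            b = b - a
    return a
```

For instance, `gcd(12, 18)` yields 6.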
\section{Semantics of Mini Babel-17}
In this section we define an operational semantics for Mini Babel-17 by building a Mini Babel-17 interpreter written in Standard ML\footnote{
The original SML sources of the interpreter and all Mini Babel-17 programs of this paper are available online~\cite{babel17}.}.
First we represent the grammar of Mini Babel-17 as SML datatypes:
\begin{babellisting}
datatype block = Block of statement list
and statement =
SVal of identifier * expression
| SAssign of identifier * expression
| SYield of expression
| SIf of simple_expression * block * block
| SWhile of simple_expression * block
| SFor of identifier * simple_expression * block
| SBlock of block
and expression =
ESimple of simple_expression
| EBlock of statement
and simple_expression =
EInt of int | EBool of bool | EId of identifier
| EFun of identifier * expression
| EBinOp of (value * value -> value) *
expression * expression
and identifier = Id of string
\end{babellisting}
Note that function application, multiplication, comparison and so on are all described via the \bsrc{EBinOp} constructor by providing a suitable parameter of type \bsrc{value * value -> value}. The type \bsrc{value} represents all values that can be the result of evaluating a Mini Babel-17 program:
\begin{babellisting}
datatype value = VBool of bool | VInt of int
| VFun of value -> value
| VList of value list
\end{babellisting}
Mini Babel-17 wants to be both purely functional and structured; the most important ingredients of a purely functional program are expressions, while the most important ingredients of an SP program are blocks and statements. This tension is resolved by treating statements as special expressions.
The interpreter defines the following evaluation functions:
\begin{babellisting}
eval_b : environment -> block -> environment * value list
eval_st : environment -> statement -> environment * value list
eval_e : environment -> expression -> value
eval_se : environment -> simple_expression -> value
\end{babellisting}
The evaluation of blocks and statements yields lists of values instead of single values; the block
\begin{babellisting}
begin yield 1; yield 2; 3 end
\end{babellisting}
for example evaluates to \bsrc{[1, 2, 3]}.
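In mainstream terms, a block behaves somewhat like a generator whose yields are accumulated into a list; a hypothetical Python analogue of the block above:

```python
def block():
    # begin yield 1; yield 2; 3 end -- the final expression is an
    # implicit yield, since its `yield` keyword was merely dropped.
    yield 1
    yield 2
    yield 3

# The value of the block is the list of all yielded values.
values = list(block())
```

Here `values` is `[1, 2, 3]`, matching the evaluation result stated above.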
Consider the following Mini Babel-17 program:
\begin{babellisting}
val x = 0
begin x = 1; x end * begin val x = x + 2; x end
\end{babellisting}
It does not obey the linear scoping rules of Mini Babel-17 because x is not in linear scope in the \emph{val-assign-statement} \bsrc{x = 1}.
In such a situation, the exception \textsl{Illformed} is raised during evaluation. Furthermore, an exception \textsl{TypeError} is raised when, for example, the condition of an if-statement evaluates to a list instead of a boolean. Note, by the way, that the program
\begin{babellisting}
val x = 0
begin val x = 1; x end * begin val x = x + 2; x end
\end{babellisting}
is perfectly fine and evaluates to 2.
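Why this evaluates to 2 can be seen by giving each \bsrc{val x} its own name in a Python sketch (the names `x1` and `x2` are illustrative only):

```python
def program():
    x = 0
    def block1():
        x1 = 1           # val x = 1: fresh inner binding, block yields 1
        return x1
    def block2():
        x2 = x + 2       # val x = x + 2: reads the outer x (0), yields 2
        return x2
    return block1() * block2()   # 1 * 2
```

The product of the two block values is 2.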
What does the environment look like? It is actually split into two parts, one part for those identifiers that have linear scope, and one part for identifiers that don't. The nonlinear part is a mapping from identifiers to values, the linear part a mapping from identifiers to reference cells of values. Both parts can be described by the polymorphic type 'a idmap:
\begin{babellisting}
type 'a idmap = (string * 'a) list
fun lookup [] _ = raise Illformed
| lookup ((t, x)::r) (Id s) =
if t = s then x else lookup r (Id s)
fun remove [] _ = []
| remove ((t,x)::r) (Id s) =
if t = s then r else remove r (Id s)
fun insert m ((Id s),x) = (s,x)::(remove m (Id s))
\end{babellisting}
The type of environments is then introduced as follows:
\begin{babellisting}
type environment = value idmap * (value ref) idmap
fun deref [] = [] | deref ((s, vr)::m) = ((s,!vr)::(deref m))
fun freeze (nonlinear, linear) = (nonlinear@(deref linear), [])
fun bind (nonlinear, linear) (id,value) =
(remove nonlinear id, insert linear (id, ref value))
fun rebind (env as (_, linear)) (id, value) =
(lookup linear id := value; env)
\end{babellisting}
Note that \textsl{bind} returns a new environment, and \textsl{rebind} returns the same environment with a mutated linear part. The function \textsl{freeze}
turns all mutable linear bindings into immutable nonlinear ones.
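The same environment discipline can be sketched in Python, using one-element lists as mutable reference cells for the linear part (an assumption of this sketch; the SML version above uses `ref` cells):

```python
def bind(env, name, value):
    # Returns a *new* environment with `name` freshly bound in the
    # linear part; any nonlinear binding of `name` is removed.
    nonlinear, linear = env
    nl = {k: v for k, v in nonlinear.items() if k != name}
    ln = dict(linear)
    ln[name] = [value]             # one-element list acts as a ref cell
    return (nl, ln)

def rebind(env, name, value):
    # Mutates the ref cell in place: later lookups through any
    # environment sharing this cell observe the new value.
    env[1][name][0] = value
    return env

def freeze(env):
    # Turns all mutable linear bindings into immutable nonlinear ones,
    # as done before evaluating a simple-expression.
    nonlinear, linear = env
    nl = dict(nonlinear)
    nl.update({k: cell[0] for k, cell in linear.items()})
    return (nl, {})
```

A frozen environment is a value snapshot: rebinding after a `freeze` does not affect the already-frozen copy, which is exactly why linear scope does not extend into simple-expressions.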
Now we can give the definition of all evaluation functions:
\begin{babellisting}
fun eval_b env (Block []) = (env, [])
| eval_b env (Block (s::r)) =
let
val (env', values_s) = eval_st env s
val (env'', values_r) = eval_b env' (Block r)
in (env'', values_s @ values_r) end
and eval_nestedb env b =
let
val (_, values) = eval_b env b
in (env, values) end
and eval_st env (SVal (id, e)) =
let
val value = eval_e env e
in (bind env (id, value), []) end
| eval_st env (SAssign (id, e)) =
let
val value = eval_e env e
in (rebind env (id, value), []) end
| eval_st env (SYield e) =
let
val value = eval_e env e
in (env, [value]) end
| eval_st env (SBlock b) = eval_nestedb env b
| eval_st env (SIf (cond, yes, no)) =
(case eval_se env cond of
VBool true => eval_nestedb env yes
| VBool false => eval_nestedb env no
| _ => raise TypeError)
| eval_st env (loop as SWhile (cond, body)) =
(case eval_se env cond of
VBool true =>
let
val (_, values_1) = eval_b env body
val (_, values_2) = eval_st env loop
in (env, values_1 @ values_2) end
| VBool false =>
(env, [])
| _ => raise TypeError)
| eval_st env (SFor (id, list, body)) =
(case eval_se env list of
VList L => eval_for env id body L
| _ => raise TypeError)
and eval_for env id body [] = (env, [])
| eval_for env id body (x::xs) =
let
val (_, values_1) = eval_b (bind env (id,x)) body
val (_, values_2) = eval_for env id body xs
in (env, values_1@values_2) end
and eval_e env (ESimple se) = eval_se env se
| eval_e env (EBlock s) =
(case eval_b env (Block [s]) of
(_, [a]) => a
| (_, L) => VList L)
and eval_se env se = eval_simple (freeze env) se
and eval_simple env (EInt i) = VInt i
| eval_simple env (EBool b) = VBool b
| eval_simple env (EBinOp (f, a, b)) =
f (eval_e env a, eval_e env b)
| eval_simple (nonlinear, _) (EId id) =
lookup nonlinear id
| eval_simple env (EFun (id, body)) =
VFun (fn value =>
eval_e (bind env (id, value)) body)
\end{babellisting}
Here is the evaluation function that computes the meaning of a Mini Babel-17 program, i.e. of a block:
\begin{babellisting}
eval : block -> value
fun eval prog = eval_e ([], []) (EBlock (SBlock prog))
\end{babellisting}
It is straightforward to extract from the above evaluation functions a well-formedness criterion such that, if a Mini Babel-17 program is statically checked to be well-formed according to that criterion, no \textsl{Illformed} exception will be raised during the evaluation of the program:
\begin{babellisting}
val VALUE = VInt 0
fun check_b env (Block []) = env
| check_b env (Block (s::r)) =
check_b (check_st env s) (Block r)
and check_st env (SVal (id, e)) =
(check_e env e; bind env (id, VALUE))
| check_st env (SAssign (id, e)) =
(check_e env e; rebind env (id, VALUE))
| check_st env (SYield e) = (check_e env e; env)
| check_st env (SBlock b) = (check_b env b; env)
| check_st env (SIf (cond, yes, no)) =
(check_se env cond;
check_b env yes; check_b env no; env)
| check_st env (loop as SWhile (cond, body)) =
(check_se env cond; check_b env body; env)
| check_st env (SFor (id, list, body)) =
(check_se env list;
check_b (bind env (id, VALUE)) body; env)
and check_e env (ESimple se) = check_se env se
| check_e env (EBlock s) =
(check_b env (Block [s]); ())
and check_se env se = check_simple (freeze env) se
and check_simple env (EInt i) = ()
| check_simple env (EBool b) = ()
| check_simple env (EBinOp (f, a, b)) =
(check_e env a; check_e env b)
| check_simple (nonlinear, _) (EId id) =
(lookup nonlinear id; ())
| check_simple env (EFun (id, body)) =
check_e (bind env (id, VALUE)) body
fun check prog = check_e ([], []) (EBlock (SBlock prog))
\end{babellisting}
The function \textsl{check} terminates because it is basically defined via primitive recursion on the structure of the program. Furthermore, the set of calls to \textsl{lookup} generated during an execution of \textsl{check prog} is clearly a superset of the set of calls to \textsl{lookup} generated during the execution of \textsl{eval prog}.
Therefore, if \textsl{check prog} does not raise an exception \textsl{Illformed}, then neither will \textsl{eval prog}.
\section{Loops or Functionals?}
With Mini Babel-17, you can freely choose between a programming style that uses loops and a programming style that puts its emphasis on higher-order functionals. If you have an imperative background, you might start out using loops everywhere, and then slowly migrate to functionals like \emph{map} or \emph{fold} as your understanding of functional programming increases.
But even after your functional programming skills have matured, you might still choose to use loops in appropriate situations. Let us for example look at a function that takes a list of integers $[a_0, \ldots, a_n]$ and an integer $x$ as arguments and returns the list
\begin{displaymath}
[q_0, \ldots, q_n] \quad \text{where} \quad q_k = \sum_{i=0}^k a_i\, x^i
\end{displaymath}
The implementation in Mini Babel-17 via a loop is straightforward, efficient and even elegant:
\begin{babellisting}
m => x => begin
val y = 0
val p = 1
for a in m do
y = y + a*p
p = p * x
yield y
end
end
\end{babellisting}
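For contrast, the same partial-sum computation can be written as a single accumulating fold, which is the functional alternative this section alludes to. The sketch below uses Python's itertools.accumulate rather than Mini Babel-17, purely as an illustration; the function name prefix_poly is our own:

```python
from itertools import accumulate

def prefix_poly(m, x):
    """Return [q_0, ..., q_n] where q_k = sum_{i=0}^{k} a_i * x**i."""
    # Generate the terms a_i * x**i, then fold them into running partial sums.
    return list(accumulate(a * x**i for i, a in enumerate(m)))
```

For example, prefix_poly([1, 2, 3], 2) evaluates to [1, 5, 17], matching the partial sums the loop version yields one by one.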
\section{Related Work}
We have already mentioned how Scala also combines structured programming with functional programming, but fails to deliver a combination of structured programming and \emph{purely} functional programming. Actually, it should be possible to conservatively extend Scala so that linear scope for variables defined via \bsrc{val} is supported.
The work done on monads in the purely functional programming language Haskell~\cite{monads} has a superficial similarity with the work done in this paper. With monads it is possible to formulate sequences of (possibly shadowing) assignments, and with the help of monad transformers even loops can be modeled. But in order to understand and effectively use monads, a solid background in functional programming is useful, if not required; linear scope, on the other hand, is understood intuitively by programmers with a mostly imperative background, because Mini Babel-17 programs can look just like imperative programs and do not introduce additional clutter like the need for lifting.
Actually, in Haskell monads are used to limit the influence of mutable state to a confined region of the code that can be recognized by its type; the work in this paper has the entirely different focus of trying to merge the structured and purely functional programming style as seamlessly as possible.
This work is not directly connected to work done on linear or uniqueness types~\cite{lineartypes}. Of course one might think about applying uniqueness typing to Mini Babel-17, but Mini Babel-17 itself is dynamically-typed and its values are persistent and can be passed around without any restrictions.
\section{Conclusion}
The current separation between SP and PFP is an artificial one. There is no longer any good reason why SP should not be used, where appropriate, for the sequential parts of a purely functional program, except the personal preference of the programmer. The purpose of Mini Babel-17 is to show the importance of linear scope for unifying SP and PFP. Babel-17 incorporates further important features for purely functional and structured programming like mutually recursive functions, pattern matching, exceptions, objects, memoization, concurrency, laziness, and more syntactic sugar.
\bibliographystyle{abbrvnat}
# Homotopy Cauchy

If $D$ is a convex subset of $\mathbb{C}$ and we have two paths $\gamma_1$ and $\gamma_2$ in $D$ with $\gamma_1(a)=\gamma_2(a)$ and $\gamma_1(b)=\gamma_2(b)$, prove that $\gamma_1$ and $\gamma_2$ are homotopic in $D$ as paths with fixed endpoints. Intuitively this is clear, but how can I give an explicit homotopy? And if $D \subseteq \mathbb{C}$ is open and $f: D \to \mathbb{C}$ is holomorphic, how can I prove that $\int_{\gamma_1} f(z)\,dz = \int_{\gamma_2} f(z)\,dz$, given that $\gamma_1$ and $\gamma_2$ are homotopic in $D$ as paths with fixed endpoints? I think this has to do with the homotopy version of Cauchy's theorem.

## 1 Answer

• To construct the homotopy, note that for every $t$ the line segment joining $\gamma_1(t)$ to $\gamma_2(t)$ lies entirely in $D$, by convexity. So the formula $H_s(t) = (1-s)\gamma_1(t) + s\gamma_2(t)$ defines a homotopy of paths in $D$.

• Depending on what you mean by the "homotopy version of Cauchy's theorem", your question about the integrals follows immediately from the first part. Read the statement of the theorem carefully, and if you're still stuck then write it out here.
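To see that the straight-line formula really gives a homotopy of paths with fixed endpoints, the boundary conditions can be verified directly (a routine computation, written out here for completeness):

```latex
\begin{align*}
H_0(t) &= (1-0)\,\gamma_1(t) + 0\cdot\gamma_2(t) = \gamma_1(t), \\
H_1(t) &= (1-1)\,\gamma_1(t) + 1\cdot\gamma_2(t) = \gamma_2(t), \\
H_s(a) &= (1-s)\,\gamma_1(a) + s\,\gamma_2(a) = \gamma_1(a)
         \quad\text{since } \gamma_1(a)=\gamma_2(a), \\
H_s(b) &= (1-s)\,\gamma_1(b) + s\,\gamma_2(b) = \gamma_1(b)
         \quad\text{since } \gamma_1(b)=\gamma_2(b).
\end{align*}
```

Continuity of $H$ in $(s,t)$ is immediate, and each point $H_s(t)$ lies in $D$ by convexity.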
Q: Physical location of the processes assigned by the MPI run-time system

When we launch an MPI program on a cluster with a command such as mpirun -np 4 a.out,
how does the MPI run-time system assign the processes across the CPUs?
What I mean is: suppose it finds an idle quad-core CPU in the cluster, will it run all 4 processes on that CPU, or will it find 4 CPUs and run 4 processes with 1 process per CPU?
Does this depend on the particular implementation of MPI?
And should I be bothered by the particular configuration that MPI picks for me (4 processes on one CPU, or 1 process per CPU on 4 CPUs)?
A: Yes, it depends on the MPI implementation, and yes, it matters. For instance, if you were expecting to be able to use a node's worth of memory per MPI task, and you find yourself loading 4 tasks onto a single node and nothing onto the others, you're going to run into serious problems. Similarly, if you are running on 4 8-core nodes with 4 MPI tasks of 8 OpenMP threads each, there's a big difference between using 1 task and 8 threads on each of the 4 nodes, and using 4 tasks and 32 threads on one node and nothing on the others.
The most common MPI implementations out there on x86-type hardware are OpenMPI- or MPICH2-based. OpenMPI will fill up a node before going on to the next one; you can change that behavior by, for instance, giving it the "--bynode" option, which assigns one task to the first node, the next task to the next node, and so on, wrapping around to the first node again as needed. (OpenMPI also has --bysocket and --bycore for finer control, and the very useful --display-map option, which shows you exactly what goes where.)
With MPICH2-based MPIs, you can give it the -rr option for "round robin", which will round-robin between nodes (like OpenMPI's --bynode behavior).
In either case, on Linux-type systems you can always run, e.g., 'mpirun -np 4 hostname' as a quick and dirty way to find out which hosts your mpirun command would launch processes on.
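Collecting the commands from the answer into one place (flag spellings as given above are for the older OpenMPI 1.x and MPICH2 launchers; the mpiexec launcher name for MPICH2 is assumed, and newer releases may rename these options):

```shell
# OpenMPI: one task per node, round-robin across nodes
mpirun -np 4 --bynode ./a.out

# OpenMPI: show exactly which task lands on which node/socket/core
mpirun -np 4 --display-map ./a.out

# MPICH2: round-robin placement across hosts
mpiexec -rr -np 4 ./a.out

# Quick-and-dirty check of where tasks would land
mpirun -np 4 hostname
```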
/* @LICENSE(MUSLC_MIT) */
#ifndef FLOATSCAN_H
#define FLOATSCAN_H
#include <stdio.h>
long double __floatscan(FILE *, int, int);
#endif
Denim Dash 2015
Our A-Team
Akesh Gupta
Akesh's path began with a degree in mechanical engineering from Northwestern. But, after discovering the art of coding, his passions shifted, and he now has over 20 years of experience in software development. His success in creating a top pharmaceutical company's patient hub enticed him to explore this industry further. His motto: "Design, Develop and Deliver."
Thomas Lembck
As Viropharma's former Vice President of Information Technology, Thomas brings experience and valuable insight to the team. While bringing life to any room he walks into, he keeps our team excited for what the future holds. His knowledge of technology and passion for the rare disease community are the perfect combination to make a difference for not only our clients, but for the patients, as well.
Milt Spanton
Milt has been working closely with ODS since conception and benefits our team through his 10+ years of experience in the orphan drug and life sciences market. While advising some of the industry's leading pharmaceutical companies, he has built a valuable understanding for client and patient needs, which ultimately results in our team having the strongest relationships possible.
Nicole Pagoulatos
With a degree in communications from Florida Atlantic University, a personal training certification through the AAPTE, and being a proud mommy of a toddler, Nicole makes multitasking look easy. Her caring nature and dedication to her duties always makes our clients feel satisfied. So, naturally, we let her take the lead when it comes to making sure our projects are on schedule and up to par.
Jason Moore
One of the brightest minds in technology today, Jason possesses highly developed, process-oriented skills for troubleshooting, problem solving, and problem resolution. With over 15 years of software development experience combined with a background in infrastructure, he is capable of thinking outside the box to deliver solutions beyond everyone's expectations.
Lisa Skrezec
With a degree in marketing from Iona College, Lisa has been creating campaigns, expanding social media horizons, building brands from the ground up, and developing media content for companies in all industries for over five years. Whether she's wow-ing our clients with her innovative pitches or kayaking out in the Hamptons, she's always doing it with an upbeat attitude.
Jessie Rivadeneyra
With her hard-working mindset and go-getter attitude, Jessie has grown from the ground up to become one of the most diligent and organized team members our company has to offer. Learning on the fly and adapting to any situation thrown her way gives her the ability to excel in all tasks — whether it be in the office or while jumping out of airplanes!
"Painting For A Cause"
2015 Run For Rare
2043 Wellwood Avenue
Email: info@orphandrugsolutions.com
import unittest
from registry import Version
# Translated from:
# https://github.com/bazelbuild/bazel/blob/79a53def2ebbd9358450f739ea37bf70662e8614/src/test/java/com/google/devtools/build/lib/bazel/bzlmod/VersionTest.java#L39
class TestVersionCompare(unittest.TestCase):
def testReleaseVersion(self):
self.assertTrue(Version("2.0") > Version("1.0"))
self.assertTrue(Version("2.0") > Version("1.9"))
self.assertTrue(Version("11.0") > Version("3.0"))
self.assertTrue(Version("1.0.1") > Version("1.0"))
self.assertTrue(Version("1.0.0") > Version("1.0"))
self.assertTrue(Version("1.0+build2") == Version("1.0+build3"))
self.assertTrue(Version("1.0") > Version("1.0-pre"))
self.assertTrue(Version("1.0") == Version("1.0+build-notpre"))
def testReleaseVersionWithLetters(self):
self.assertTrue(Version("1.0.patch.3") > Version("1.0"))
self.assertTrue(Version("1.0.patch.3") > Version("1.0.patch.2"))
self.assertTrue(Version("1.0.patch.3") < Version("1.0.patch.10"))
self.assertTrue(Version("1.0.patch3") > Version("1.0.patch10"))
self.assertTrue(Version("4") < Version("a"))
self.assertTrue(Version("abc") < Version("abd"))
def testPrereleaseVersion(self):
self.assertTrue(Version("1.0-pre") > Version("1.0-are"))
self.assertTrue(Version("1.0-3") > Version("1.0-2"))
self.assertTrue(Version("1.0-pre") < Version("1.0-pre.foo"))
self.assertTrue(Version("1.0-pre.3") > Version("1.0-pre.2"))
self.assertTrue(Version("1.0-pre.10") > Version("1.0-pre.2"))
self.assertTrue(Version("1.0-pre.10a") < Version("1.0-pre.2a"))
self.assertTrue(Version("1.0-pre.99") < Version("1.0-pre.2a"))
self.assertTrue(Version("1.0-pre.patch.3") < Version("1.0-pre.patch.4"))
self.assertTrue(Version("1.0--") < Version("1.0----"))
if __name__ == '__main__':
    unittest.main()
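The registry.Version class under test is not shown here. The following is a minimal Python sketch of an ordering consistent with the cases above — an assumption for illustration, not Bazel's actual implementation: build metadata after '+' is ignored, a plain release sorts above the same release with a prerelease, and identifiers compare numerically when purely numeric, with numeric identifiers sorting below alphanumeric ones:

```python
import functools

@functools.total_ordering
class Version:
    """Sketch of the ordering the tests assume; NOT the real registry.Version."""

    def __init__(self, s):
        # Build metadata after the first '+' never affects ordering.
        rest, _, _build = s.partition("+")
        release, sep, prerelease = rest.partition("-")
        self.release = [self._ident_key(i) for i in release.split(".")]
        # A plain release outranks the same release with any prerelease.
        if sep:
            self.prerelease = (0, [self._ident_key(i) for i in prerelease.split(".")])
        else:
            self.prerelease = (1, [])

    @staticmethod
    def _ident_key(ident):
        # Purely numeric identifiers compare as integers and sort below
        # alphanumeric ones; everything else compares as a plain string.
        return (0, int(ident), "") if ident.isdigit() else (1, 0, ident)

    def _key(self):
        return (self.release, self.prerelease)

    def __eq__(self, other):
        return self._key() == other._key()

    def __lt__(self, other):
        return self._key() < other._key()
```

Lexicographic list comparison handles the prefix cases ("1.0.1" > "1.0") for free, which is why release and prerelease identifiers are kept as lists of keys.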
#
#
CONTENTS
Acknowledgments
History of My Path in Jiu-Jitsu
Jiu-Jitsu
CHAPTER 1
Takedowns with Gi
Koshiki Taoshi – Ouchi Gari
Ouchi Gari – Kibisu Gaeshi
Sukui Nage
Kibisu Gaeshi
CHAPTER 2
The Closed Guard with Gi
Arm bar from the closed guard
Choke from the closed guard
Closed guard sweep to back control
Omoplata sweep from arm bar in the closed guard
Collar choke from the closed guard with one arm trapped
Sweep to mount from the closed guard
Arm bar from the closed guard with your opponent's arm trapped
Sweep to mount from the closed guard when your opponent stands
Sweep from Omoplata when your opponent stands
Omoplata from arm bar in the closed guard
Arm bar from the closed guard when your opponent stands
Closed guard sweep to mount from the kimura position
Kimura from the closed guard
Guillotine from the closed guard
CHAPTER 3
Passing the Closed Guard with Gi
Passing the closed guard
Passing the high closed guard securing one sleeve
Passing the high closed guard gripping your opponent's collar
Passing the guard controlling your opponent's arm
Passing the low closed guard while holding your opponent's wrists
CHAPTER 4
Butterfly guard and X-guard with Gi
Butterfly guard sweep
Butterfly guard sweep gripping your opponent's belt
Butterfly guard sweep to back control
Butterfly guard sweep holding a leg and arm and standing up
Butterfly sweep holding a leg and arm to side control
Butterfly guard transition to X-guard sweep to the front of your opponent
Butterfly guard sweep to technical stand up
Butterfly guard transition to X-guard sweep to the back of your opponent
Spider guard sweep to back control
Spider Guard sweep with a hook while holding your opponent's ankle
CHAPTER 5
Open Guard Pass with Gi
Open guard pass controlling the hips
The "Bullfighter" open guard pass
Knee cut across open guard pass
Knee cut across open guard pass when the opponent blocks with his leg
CHAPTER 6
Across-side Position with Gi
Armbar from side control
Kimura from the north-south position
Kimura to choke from the north-south position
Armbar from side control to the other side
Choke from side control with the arm trapped
North-south choke
Choke from side control
Choke from side control with hand on collar similar to Katagatame
Mount sliding knee to belly
Arm bar from the mount
Arm bar from the mount on both arms
CHAPTER 7
Half Guard with Gi
Half guard sweep to mount
Recover close guard in half guard with hook
Half guard sweep dominating your opponent's arm
Half guard sweep dominating your opponent's legs
CHAPTER 8
Turtle Guard and Back Mount with Gi
Turtle guard attack to taking the back with one leg over the shoulder
Kimura while attacking the turtle guard
Choke from the top on all fours
Attack from back mount using both collars and your shoulder on the back of your opponent's head
Back attack with one hand on the collar and one leg over the shoulder
CHAPTER 9
Takedowns without Gi
Koshiki Taoshi – Ouchi Gari
Ouchi Gari – Kibisu Gaeshi
Kibisu Gaeshi
Sukui Nage
CHAPTER 10
The Closed Guard without Gi
Arm bar from the closed guard
Double arm bar from the closed guard
Closed guard sweep to the mount
Arm bar from the closed guard 2nd version
Wrist lock from the closed guard
Kimura from the closed guard
Guillotine from the closed guard
"Hip bump" – Closed guard sweep to mount
Arm bar from the closed guard 3rd version
Arm bar from the closed guard when your opponent stands
CHAPTER 11
Passing the Closed Guard without Gi
Standing closed guard pass
Closed guard pass while holding your opponent's wrist
Closed guard pass while holding your opponent's wrist
Standing closed guard pass controlling your opponent's arm
CHAPTER 12
Butterfly Guard without Gi
"Over-under" Butterfly guard sweep
Butterfly guard sweep from back attack
Butterfly guard sweep to knee cut across
CHAPTER 13
Passing the Open Guard without Gi
Knee cut across open guard pass
Knee cut across open guard pass when the opponent blocks with his leg
Knee cut across open guard pass when the opponent blocks with his leg
Knee cut across to high leg back
Knee cut across to high leg back
Knee cut across to side control
Knee cut across to the mount
CHAPTER 14
Side Control without Gi
Same side arm bar from side control
Same side arm bar from side control off triangle attack
Same side arm bar from side control off triangle attack
Mount sliding knee to belly
Mount sliding knee to belly
Mount to arm triangle
Kimura from the north-south position
Opposite side arm bar from side control
Arm bar from north-south Kimura
CHAPTER 15
The Mount without Gi
Arm bar from the mount
Arm bar from the mounted arm triangle
CHAPTER 16
Half Guard without Gi
Deep half guard sweep dominating your opponent's leg
Half guard sweep to mount
Half guard sweep
Half guard sweep dominating your opponent's legs
Half guard sweep to side control
Deep half guard sweep to side control
Half guard pass to side control
Half guard to Kimura pass
Half guard pass to knee cut across
CHAPTER 17
Attacking the Turtle Guard and Back Mount
Turtle guard attack to taking the back, arm bar finish
Turtle guard attack to Kimura
Turtle guard attack to rear naked choke
The Tuttle Story: "Books to Span the East and West"
Most people are very surprised to learn that the world's largest publisher of books on Asia had its beginnings in the tiny American state of Vermont. The company's founder, Charles E. Tuttle, belonged to a New England family steeped in publishing. And his first love was naturally books—especially old and rare editions.
Immediately after WW II, serving in Tokyo under General Douglas MacArthur, Tuttle was tasked with reviving the Japanese publishing industry, and founded the Charles E. Tuttle Publishing Company, which still thrives today as one of the world's leading independent publishers.
Though a westerner, Charles was hugely instrumental in bringing knowledge of Japan and Asia to a world hungry for information about the East. By the time of his death in 1993, Tuttle had published over 6,000 titles on Asian culture, history and art—a legacy honored by the Japanese emperor with the "Order of the Sacred Treasure," the highest tribute Japan can bestow upon a non-Japanese.
With a backlist of 1,500 books, Tuttle Publishing is as active today as at any time in its past—inspired by Charles' core mission to publish fine books to span the East and West and provide a greater understanding of each.
#
Please note that the publisher and author(s) of this instructional book are NOT RESPONSIBLE in any manner whatsoever for any injury that may result from practicing the techniques and/or following the instructions given within. Martial arts training can be dangerous—both to you and to others—if not practiced safely. If you're in doubt as to how to proceed or whether your practice is safe, consult with a trained martial arts teacher before beginning. Since the physical activities described herein may be too strenuous in nature for some readers, it is also essential that a physician be consulted prior to training.
Published by Tuttle Publishing, an imprint of Periplus Editions (HK) Ltd.
www.tuttlepublishing.com
Copyright © 2012 Marian Bornakowski - Poznań, Poland
Photographs and graphic design: Marian Winiecki
All rights reserved. No part of this publication may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without prior written permission from the publisher.
Library of Congress Cataloging-in-Publication Data
in process
ISBN: 978-1-4629-1000-7 (ebook)
Distributed by
North America, Latin America &
Tuttle Publishing
364 Innovation Drive
North Clarendon
VT 05759-9436 U.S.A.
Tel: 1 (802) 773-8930
Fax: 1 (802) 773-6993
info@tuttlepublishing.com
www.tuttlepublishing.com
Japan
Tuttle Publishing
Yaekari Building, 3rd Floor
5-4-12 Osaki
Shinagawa-ku
Tokyo 141 0032
Tel: (81) 3 5437-0171
Fax: (81) 3 5437-0755
sales@tuttle.co.jp
www.tuttle.co.jp
Asia Pacific
Berkeley Books Pte. Ltd.
61 Tai Seng Avenue #02-12
Singapore 534167
Tel: (65) 6280-1330
Fax: (65) 6280-6290
inquiries@periplus.com.sg
www.periplus.com
First edition
16 15 14 13 12 6 5 4 3 2 1 1209CP
Printed in Singapore
TUTTLE PUBLISHING® is a registered trademark of Tuttle Publishing, a division of Periplus Editions (HK) Ltd.
#
Acknowledgments
There are many people I would like to acknowledge for the help they have given me in my practice. If I listed everyone, it would take many pages.
All of those with whom I have had the pleasure of practicing — from those with white belts to those with black belts, from the Brazilians to the foreigners — have in some way helped me with my unending search to improve my technique. They have helped me on my way through a complex martial art which is difficult to understand. It demands years of devotion to attain an upper level of maturity, as well as technical and pedagogical expertise.
I am convinced that it is not enough to gain knowledge; one must also know how to convey knowledge.
There are a few people that I need to mention, however, as they have been present throughout my life:
My friend and teacher Romero Cavalcante, Jacare, who opened my eyes to the sport. He is an example of how to combine the professional career of a fighter and teacher with one's personal life, of how to transform a profession into a lifestyle and to enjoy what you are doing. I met so many friends in his school, not to mention Daniela, who became my wife.
My mother Salvia, who always understood and respected my will to do what I want. This allowed me to devote my efforts entirely to my practice.
My wife Daniela: a great athlete with international recognition, a black belt owner, and a cyclist who won the Race Across America 2009. She has sport in her blood. She is my greatest motivator who helped me discover new dimensions in sports, and who gave me beautiful children and a fantastic family.
My children, Victorio, Julia, and Antonio, for understanding my absence at home and for being aware of my love, a love that is bigger than any possible physical distance between us.
Fabio Gurgel, my great friend and colleague from the Alliance Team — a person with great character and excellent technique — who has always supported me. He is one of the best entrepreneurs in the martial arts world, one who has become an example for me.
Fernando Gurgel, Fabio's brother and my friend since the age of 3. He has always been like a brother to me. One of the best teachers in the academy.
Orlando Cani, my teacher and yoga master, a wonderful person who has enriched me with his friendship and respect. He is one of those people who is born to set an example of what it means to be a human.
Elcio Figueiredo, my long time friend and brother who, at the beginning of my teaching career, gave me strength. Although now we are far away from each other, he will always remain my oldest brother.
Luis Fernando, like a father to me, always sharing his precious advice and wisdom about the life of an athlete.
Sebastian Slowek, my student and friend who is responsible for shaping up this book.
Markku Juntunen, who always comes to my mind when I think of a charismatic person from the martial arts world. He is also a true example of a great student and friend.
Alexandre Puga, my student and friend who works with me at the academy, always supporting me, always ready to help.
I would also like to acknowledge the Gracie Family for their input into the development of the exceptional martial art that Jiu-Jitsu is. I would especially like to mention:
Rickson Gracie, a warrior and an idol who inspired me and my generation. I have been honored with the privilege of practicing with him and being taught by him.
I couldn't miss this opportunity to express my appreciation to those who were not a part of my sporting life, yet played important roles in my life in other ways.
Life's secret wisdom is to face every day of your life as if it were your last, trying to be the best possible human, not showing off in front of others, but merely for the sake of doing the best you can.
This type of thinking I owe my uncle Silvio, uncle Paulo, my cousin Marcinho, Davidowi Isaac (Mumm-Rá), my brother Kiko, Gilberto (Giba), Jules Sideratos, and my father Aldo Genovesi, who even though he wasn't able to assist me in my sporting career and did not see my children grow up, has always been a point of reference for me in both my good decisions and my mistakes.
As my mother-in-law used to say: "... the world is only for a few, for a very few." Life is in our hands. We do not have to understand this fact to live in the happiness that comes from what we have, not from what we have lost.
—Alexandre Paiva Genovesi
#
History of My Path in Jiu-Jitsu
I started Jiu-Jitsu practice in 1984 when I was 10 years old. Sergio Lauro Jardim, more commonly known as Malibu, encouraged me to take my first lesson in the Jacare school where he taught some classes. Like any other teenager, I loved action movies and wanted to be a warrior, so the opportunity sounded perfect to me, almost like a dream come true. I always admired the advanced students, wondering if one day I would know what they know and be able to perform what they can. Many things happened in those days, things which shaped me for the rest of my life.
In the beginning I devoted myself to practice. But later, I had to face my greatest decision: whether I should follow my dreams and do what I really enjoy in my life, or choose something that would build my social position — a way of submitting to the rules of the marketplace (which was something I was expected to do).
My father's death was a shock, leaving me with great grief and a big, empty hole in my heart. Since then, I have had to be responsible for my family without having the support of somebody who could guide me. While death is a part of life, and does not have to set us back, this death was not expected. Many responsibilities and duties fell on my shoulders.
Another important moment came with my decision to get married and become a father. It is not an easy task to be a husband, a father, a teacher, and a warrior at the same time. Such a life instantly becomes busy with many responsibilities, and there is little time for fun. I think I am truly blessed with my job, which brings me joy and satisfaction.
When I injured my back in 1988, I was forced to stop practicing and wasn't able to see my friends. I wanted to be far away from the places and people I love so much. I thought I would not be able to resume my practice.
After surgery at the end of 1989, I began training again. I got my black belt in 1991. Then, in 1998, I went to a doctor to confirm what sort of exercises I could do to stay in top shape for an upcoming competition without hurting myself. I had been getting back injuries often. My doctor told me that I shouldn't practice at all, that I shouldn't involve myself in any serious sports or demanding activities. And especially, I shouldn't practice Jiu-Jitsu. I thought my life had ended, that my dreams were over.
I went to another doctor, my friend, who had taken care of me since my childhood. He decided that my condition was stable enough to not only practice, but also to participate in a competition (indicating that I should always be aware of the limitations, pain, and ailments I can experience at any time).
I felt confident. In 1999 I won all the competitions in which I participated, but a week before the world championships I sustained a bad knee injury. I went to my doctor, who assured me that he could perform surgery the same day. I had two options: either arthroscopy, to remove a piece of the meniscus, which would immobilize my knee; or normal surgery, in which a block would be placed into my knee, though without a guarantee that it would not slip out of place at some point. I chose the second option. I rested till the end of the week, only to discover two days before the competition that I had gained some weight and needed to lose 8 kilograms in order to be able to compete in my weight category. The enormous weight gain was caused by a combination of two factors: medication containing corticosteroids placed in my knee, thus causing water retention, and the fact that my body was being deprived of high-energy activities. I lost the surplus weight, passed the weigh-in, and after surmounting uncountable obstacles and challenges, managed to reach my goal: I won the world championship.
All the obstacles I faced through the whole adventure, like the doctors diagnosing that I was almost handicapped, only made my success taste sweeter and more satisfying.
The Alliance Academy was created in 1994 from the joint forces of the Master Academy (Jacare and Fabio Gurgel) and Strike (my school in those days). We didn't want our teams competing with each other in prestigious domestic and international competitions.
Our decision to join forces and establish one strong tournament team with the potential to win was welcomed with interest — other schools joined the Alliance team and began competing under this name.
Currently the Alliance team is a four-time world championship winner (1999, 2000, 2008, 2009); the Pan-American championship winner in 2009; the Brazilian championship winner in 2009; and has been chosen as the best tournament team by many.
Alliance gave an opportunity to many great athletes including Leonardo Vieira, Leonardo Leite, Fernando Terere, Ricardo Vieira, Claudio Moreno, Marcos Meireles, Alexandre Street, Gabriel Leite, Marcelo Garcia, Alex Monsalve, Lucas Lepre, Michael Langhi, Cobrinha, Sergio Moraes, Demian Maia, and many others.
#
Jiu-jitsu
Jiu-Jitsu techniques can be characterized by a reasoning process similar to the one found in the game of chess. In both activities, all actions are defined by logic, tactics and strategy.
Your path in Jiu-Jitsu can be compared to attempting to solve a puzzle. At the beginning, we are given a few elements that are easy enough to figure out. As soon as we solve the easy ones, however, we are facing a more complicated puzzle, with more elements to piece together. It is hard to imagine how many techniques are possible, how many techniques can be created. The variety of techniques can encourage one to not only learn positions, but to also try to understand the mechanics of movement.
The logic, tactics, and strategy in techniques must be built on a knowledge of anatomy. One must know body parts, structure, mechanics, and all kinds of limitations — the limits of the range of motion that joint locks exploit and that chokes require. Based on this sort of knowledge, one can learn how to use locks and pins to force the opponent into submission. This is a close-contact fight, body to body. Both fighters need to set their own strategy and tactics of attack, but they also have to understand and anticipate the opponent's moves and possible actions in order to counteract them.
Playing chess and practicing Jiu-Jitsu demand exercising the anticipation of the opponent's moves. Thus, playing chess might bring advantages for the Jiu-Jitsu student and can be a part of the learning process.
Besides its physical side, Jiu-Jitsu also has a moral and ethical context, which gives students training in moral aspects as well. When practiced with a sense of proportion and respect for the opponent, Jiu-Jitsu is very positive and moral. Practicing with others allows one to develop friendships and to bond with other people. The social aspect keeps the student coming back, continuing to practice and — finally — achieving new skills. Jiu-Jitsu is a wonderful way to educate a person and allow them to develop social tools.
Without doubt, Jiu-Jitsu is an effective martial art. It has proven itself in comparison with other martial arts, and has been included in the training routines of those who fight in the most brutal competitions: the World Vale Tudo Championship (WVC), the Ultimate Fighting Championship (UFC), and the Pride Fighting Championships (Pride FC). Being aware of Jiu-Jitsu's advantages and effectiveness helps students to develop self-confidence and self-esteem.
Technical development in Jiu-Jitsu depends on the way one maintains discipline in many aspects of daily life, such as a healthy diet, a regular schedule, etc. This discipline is another way of saying self-control.
The practice of martial arts is also a great life lesson that prepares students to face their surrounding reality by teaching them about their limitations, increasing their self-esteem and confidence, and improving their motor skills. Practicing a martial art teaches one to respect one's friends and opponents, and to handle both victory and defeat.
If we so choose, we can live our dream life and achieve everything we want, instead of watching as time passes by. Jiu-Jitsu can be of great use in creating and preparing for such a life.
#
CHAPTER 1
Takedowns with Gi
Koshiki Taoshi – Ouchi Gari
Ouchi Gari – Kibisu Gaeshi
Sukui Nage
Kibisu Gaeshi
##
Koshiki Taoshi - Ouchi Gari
##
Ouchi Gari - Kibisu Gaeshi
##
Sukui Nage
##
Kibisu Gaeshi
#
CHAPTER 2
The Closed Guard with Gi
Arm bar from the closed guard
Choke from the closed guard
Closed guard sweep to back control
Omoplata sweep from arm bar in the closed guard
Collar choke from the closed guard with one arm trapped
Sweep to mount from the closed guard
Arm bar from the closed guard with your opponent's arm trapped
Sweep to mount from the closed guard when your opponent stands
Sweep from Omoplata when your opponent stands
Omoplata from arm bar in the closed guard
Arm bar from the closed guard when your opponent stands
Closed guard sweep to mount from the kimura position
Kimura from the closed guard
Guillotine from the closed guard
##
Arm bar from the closed guard
##
Choke from the closed guard
##
Closed guard sweep to back control
##
Omoplata sweep from arm bar in the closed guard
##
Collar choke from the closed guard with one arm trapped
##
Sweep to mount from the closed guard
##
Arm bar from the closed guard with your opponent's arm trapped
##
Sweep to mount from the closed guard when your opponent stands
##
Sweep from Omoplata when your opponent stands
##
Omoplata from arm bar in the closed guard
##
Arm bar from the closed guard when your opponent stands
##
Closed guard sweep to mount from the kimura position
##
Kimura from the closed guard
##
Guillotine from the closed guard
#
CHAPTER 3
Passing the Closed Guard with Gi
Passing the closed guard
Passing the high closed guard securing one sleeve
Passing the high closed guard gripping your opponent's collar
Passing the guard controlling your opponent's arm
Passing the low closed guard while holding your opponent's wrist
##
Passing the closed guard
##
Passing the high closed guard securing one sleeve
##
Passing the high closed guard gripping your opponent's collar
##
Passing the guard controlling your opponent's arm
##
Passing the low closed guard while holding your opponent's wrist
#
CHAPTER 4
Butterfly guard and X-guard with Gi
Butterfly guard sweep
Butterfly guard sweep gripping your opponent's belt
Butterfly guard sweep to back control
Butterfly guard sweep holding a leg and arm and standing up
Butterfly sweep holding a leg and arm to side control
Butterfly guard transition to X-guard sweep to the front of your opponent
Butterfly guard sweep to technical stand up
Butterfly guard transition to X-guard sweep to the back of your opponent
Spider guard sweep to back control
Spider guard sweep with a hook while holding your opponent's ankle
##
Butterfly guard sweep
##
Butterfly guard sweep gripping your opponent's belt
##
Butterfly guard sweep to back control
##
Butterfly guard sweep holding a leg and arm and standing up
##
Butterfly sweep holding a leg and arm to side control
##
Butterfly guard transition to X-guard sweep to the front of your opponent
##
Butterfly guard sweep to technical stand up
##
Butterfly guard transition to X-guard sweep to the back of your opponent
##
Spider guard sweep to back control
##
Spider guard sweep with a hook while holding your opponent's ankle
#
CHAPTER 5
Open Guard Pass with Gi
Open guard pass controlling the hips
The "Bullfighter" open guard pass
Knee cut across open guard pass
Knee cut across open guard pass when the opponent blocks with his leg
##
Open guard pass controlling the hips
##
The "Bullfighter" open guard pass
##
Knee cut across open guard pass
##
Knee cut across open guard pass when the opponent blocks with his leg
#
CHAPTER 6
Across-side Position with Gi
Armbar from side control
Kimura from the north-south position
Kimura to choke from the north-south position
Armbar from side control to the other side
Choke from side control with the arm trapped
North-south choke
Choke from side control
Choke from side control with hand on collar similar to Katagatame
Mount sliding knee to belly
Arm bar from the mount
Arm bar from the mount in both arms
##
Armbar from side control
##
Kimura from the north-south position
##
Kimura to choke from the north-south position
##
Armbar from side control to the other side
##
Choke from side control with the arm trapped
##
North-south choke
##
Choke from side control
##
Choke from side control with hand on collar similar to Katagatame
##
Mount sliding knee to belly
##
Arm bar from the mount
##
Arm bar from the mount in both arms
#
CHAPTER 7
The Half Guard with Gi
Half guard sweep to mount
Recover closed guard in half guard with hook
Half guard sweep dominating your opponent's arm
Half guard sweep dominating your opponent's legs
##
Half guard sweep to mount
##
Recover closed guard in half guard with hook
##
Half guard sweep dominating your opponent's arm
##
Half guard sweep dominating your opponent's legs
#
CHAPTER 8
Turtle Guard and Back Mount with Gi
Turtle guard attack to taking the back with one leg over the shoulder
Kimura while attacking the turtle guard
Choke from the top on all fours
Attack from back mount using both collars and your shoulder on the back of your opponent's head
Back attack with one hand on the collar and one leg over the shoulder
##
Turtle guard attack to taking the back with one leg over the shoulder
##
Kimura while attacking the turtle guard
##
Choke from the top on all fours
##
Attack from back mount using both collars and your shoulder on the back of your opponent's head
##
Back attack with one hand on the collar and one leg over the shoulder
#
CHAPTER 9
Takedowns without Gi
Koshiki Taoshi – Ouchi Gari
Ouchi Gari – Kibisu Gaeshi
Kibisu Gaeshi
Sukui Nage
##
Koshiki Taoshi - Ouchi Gari
##
Ouchi Gari - Kibisu Gaeshi
##
Kibisu Gaeshi
##
Sukui Nage
#
CHAPTER 10
The Closed Guard without Gi
Arm bar from the closed guard
Double arm bar from the closed guard
Closed guard sweep to the mount
Arm bar from the closed guard – 2
Wrist lock from the closed guard
Kimura from the closed guard
Guillotine from the closed guard
"Hip bump" – Closed guard sweep to mount
Arm bar from the closed guard – 3
Arm bar from the closed guard when your opponent stands
##
Arm bar from the closed guard
##
Double arm bar from the closed guard
##
Closed guard sweep to the mount
##
Arm bar from the closed guard 2nd version
##
Wrist lock from the closed guard
##
Kimura from the closed guard
##
Guillotine from the closed guard
##
"Hip bump" - Closed guard sweep to mount
##
Arm bar from the closed guard 3rd version
##
Arm bar from the closed guard when your opponent stands
#
CHAPTER 11
Passing the Closed Guard without Gi
Standing closed guard pass
Closed guard pass while holding your opponent's wrist
Closed guard pass while holding your opponent's wrist
Standing closed guard pass controlling your opponent's arm
##
Standing closed guard pass
##
Closed guard pass while holding your opponent's wrist
##
Closed guard pass while holding your opponent's wrist
##
Standing closed guard pass controlling your opponent's arm
#
CHAPTER 12
Butterfly Guard without Gi
"Over-under" Butterfly guard sweep 124
Butterfly guard sweep from back attack 126
Butterfly guard sweep to knee cut across 128
##
"Over-under" Butterfly guard sweep
##
Butterfly guard sweep from back attack
##
Butterfly guard sweep to knee cut across
#
CHAPTER 13
Passing the Open guard without Gi
Knee cut across open guard pass
Knee cut across open guard pass when the opponent blocks with his leg
Knee cut across open guard pass when the opponent blocks with his leg
Knee cut across to high leg back
Knee cut across to high leg back
Knee cut across to side control
Knee cut across to the mount
##
Knee cut across open guard pass
##
Knee cut across open guard pass when the opponent blocks with his leg
##
Knee cut across open guard pass when the opponent blocks with his leg
##
Knee cut across to high leg back
##
Knee cut across to high leg back
##
Knee cut across to side control
##
Knee cut across to the mount
#
CHAPTER 14
Side Control without Gi
Same side arm bar from side control
Same side arm bar from side control off triangle attack
Same side arm bar from side control off triangle attack
Mount sliding knee to belly
Mount sliding knee to belly
Mount to arm triangle
Kimura from the north-south position
Opposite side arm bar from side control
Arm bar from north-south Kimura
##
Same side arm bar from side control
##
Same side arm bar from side control off triangle attack
##
Same side arm bar from side control off triangle attack
##
Mount sliding knee to belly
##
Mount sliding knee to belly
##
Mount to arm triangle
##
Kimura from the north-south position
##
Opposite side arm bar from side control
##
Arm bar from north-south Kimura
#
CHAPTER 15
The Mount without Gi
Arm bar from the mount
Arm bar from the mounted arm triangle
##
Arm bar from the mount
##
Arm bar from the mounted arm triangle
#
CHAPTER 16
Half Guard without Gi
Deep half guard sweep dominating your opponent's leg
Half guard sweep to mount
Half guard sweep
Half guard sweep dominating your opponent's legs
Half guard sweep to side control
Deep half guard sweep to side control
Half guard pass to side control
Half guard to Kimura pass
Half guard pass to knee cut across
##
Deep half guard sweep dominating your opponent's leg
##
Half guard sweep to mount
##
Half guard sweep
##
Half guard sweep dominating your opponent's legs
##
Half guard sweep to side control
##
Deep half guard sweep to side control
##
Half guard pass to side control
##
Half guard to Kimura pass
##
Half guard pass to knee cut across
#
CHAPTER 17
Attacking the Turtle Guard and Back Mount
Turtle guard attack to taking the back - arm bar finish
Turtle guard attack to Kimura
Turtle guard attack to rear naked choke
##
Turtle guard attack to taking the back - arm bar finish
##
Turtle guard attack to Kimura
##
Turtle guard attack to rear naked choke
#
| {
"redpajama_set_name": "RedPajamaBook"
} | 4,918 |
{"url":"https:\/\/www.physicsforums.com\/threads\/sound-homework-help-please.99074\/","text":"1. Nov 8, 2005\n\nPhysic_Scholar\n\nOk I need guidance on how to approach this problem. The problem is as follows: A stone is dropped from the top of a cliff. The splash it makes when striking the water below is heard 3.5s later. How high is the cliff?\n\n2. Nov 8, 2005\n\nPhysic_Scholar\n\nDoes anyone know the answer to this problem?\n\n3. Nov 8, 2005\n\nthenewbosco\n\nthe time is of course the time for it to fall plus the time for the sound to come back up. So therefore use $$d=\\frac{1}{2}at^2$$ and $$v_{sound}=\\frac{d}{t}$$ then you solve both for time and add them together to produce 3.5 and hence solve for d.\n\n4. Nov 8, 2005\n\nPhysic_Scholar\n\nok I had those equations I just didn't think what to do. CAn you help me with this problem also: At a rock concert, a dB meter registered 130 dB when placed 2.8m in front of a loudspeaker on the stage. What is the power output of the speaker, assuming uniform spherical spreading of the sound and neglecting absorption in air?\n\nSomeone help me plz?\n\nLast edited: Nov 8, 2005","date":"2017-05-22 23:50:34","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7023415565490723, \"perplexity\": 508.21806818857374}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, 
\"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-22\/segments\/1495463607242.32\/warc\/CC-MAIN-20170522230356-20170523010356-00557.warc.gz\"}"} | null | null |
TobyMac's Wife Amanda Levy McKeehan
January 12, 2021 December 22, 2020 by L.A Girl
Amanda Levy McKeehan
Amanda Levy McKeehan is the longtime wife of singer-songwriter and producer Toby McKeehan, best known by his stage name TobyMac.
Amanda's husband is a well-known Christian recording artist who rose to fame in the 1990s with the group DC Talk, alongside Michael Tait and Kevin Max.
Amanda and Mac have been together for many years, want to know more about her? Keep reading below.
To talk about Tobymac's wife, we first need to talk about Tobymac.
Born Toby McKeehan on October 22, 1964, the Fairfax, Virginia native is better known to the public as TobyMac. He graduated from Liberty University.
He rose to fame for performing in the Christian group, DC Talk and has since released eight studio albums in his solo career.
The first TobyMac solo album, Momentum, was released in 2001. A mixture of urban rock and rap, it garnered five Dove Awards and a Grammy nomination.
He has sold over ten million records between the two projects and has been signed to ForeFront Records and EMI CMG.
Toby has been extremely successful in his solo career, he had five singles stand at number one on the Christian Billboard charts, and he received a Grammy in 2013 for Best Contemporary Christian Music Album.
He released his most recent work in 2018, 'The Elements' -which landed on top of the Christian albums chart and cracked the Top 20 of the Billboard 200.
To this day, TobyMac has 3 Grammy Awards to his credit, two Billboard Music Awards and one American Music Award.
In addition to his music, he is also an author penning several Christian books including a book called City on Our Knees. He's also co-authored several books with writers Michael Tait and Kevin Max.
However, he certainly couldn't have made it all by himself. Through the ups and downs, his lovely wife Amanda has been there as his companion and main source of support.
Amanda and her musician husband have been married for over two decades. TobyMac and Amanda Levy McKeehan have been married since 1994.
Amanda has certainly been a source of inspiration for her husband. Online sources reveal the tune "Made For Me" chronicles the love-at-first-sight courtship of Toby and his wife Amanda. Speaking about the song Toby revealed "I stand in awe that I have this amazing woman as my wife."
That said, there is no perfect relationship, in fact Toby has even sung about their differences. He added:
"We're opposites in a way and see things very differently. She's from a third-world country and I grew up just outside of Washington, D.C., in Virginia. She's a morning person; I'm a night person," "But we are committed to laying down our minor differences and holding on to what brings us together."
During the same interview, Tobymac explained why he and wife Amanda work, saying:
"There are many times my wife and I are sitting there, with a wall between us, trying to figure out how we can get back to being unified. Inevitably, what penetrates the wall is that we both want the same thing: preserving our love, our family and our faith in God. We are on the same side"
TobyMac and wife Amanda have welcomed five children together, including a pair of adopted twins. Their first child, Truett, was born in 1998; they are also parents to twins Moses and Marlee, and sons Leo and Judah.
The couple adopted twins Moses and Marlee in 2002. Their fourth child is son, Leo McKeehan, born in 2004. The duo's last born is Judah McKeehan, who was born in 2006.
The story of the couple's twins goes a little like this. Amanda wasn't getting pregnant after the birth of their first child in 1998. They went to the doctor who wanted to do exploratory surgery on Amanda but they called off the surgery and prayed instead.
They had actually been praying for twins when a few days later they were approached by a stranger outside of church who said to them there was a set of twins who needed a home. The couple was dumbfounded.
Then a roller coaster of emotions followed after the birthmother later went back on her word breaking the couple's heart. The couple prayed again and a few weeks later the twins' mom called and apologized.
Toby and wife Amanda, already felt like the parents of the twins and after Amanda inexplicably became pregnant twice years later, they were convinced that God intended for the twins to be with them.
It is believed that Marlee was named after one of Toby's chief influences, reggae icon Bob Marley.
Son Moses suffers from muscular dystrophy, which was diagnosed around 2015.
His condition requires the parents to provide 24-hour care. He is in a wheelchair full time, and his life expectancy reaches only into the early 20s.
The couple, who shared five children, sadly lost their son Truett in October 2019. Truett McKeehan was 21 years old. He was found dead at his home in Nashville.
Truett Foster Mckeehan was the couple's eldest child, he was born in 1998 and like his father, he was also a musician.
The aspiring rapper went by the names Truett Foster, TRU, Shiloh and truDog online and collaborated with his father on a few tracks.
The cause of death was ruled an accidental overdose of fentanyl and amphetamines.
Just days prior to his passing, Truett had held his first show at the Factory in Franklin, Tennessee.
Amanda Levy McKeehan is a native of Jamaica who was born January 9, 1971. She is the daughter of Judy Levy and Robert Levy.
Amanda Levy McKeehan certainly keeps busy caring for her children.
Amanda Levy McKeehan told 'Focus on the Family' during an interview that in the first year of the couple's marriage she told her husband
"I hate living in this country! I'm going home!"
Amanda added that she soon realized
"I had to decide in that moment that this was now my home, this culture was my culture, his life was my life"
They worked things out and have now been married for over two decades, 26 years to be exact.
The dedicated spouse and full time mother is best known as her husband's wife.
Amanda has made short appearances in Toby Mac's music videos. According to an article, she became a US citizen in 2007.
The family is currently based in Nashville; however, they often travel to Jamaica 'to rest.'
The celebrity spouse was born in Jamaica, so she is Jamaican-American. She is currently 49 years old and has been living in the US since the '90s.
Now we want to talk about Tobymac's extended family and that would include his in-laws. His beloved wife Amanda, is the daughter of admired businessman, Robert Levy and Judy Levy.
In 2014, Robert Levy chairman of the Jamaica Broilers Group, was honored with the prestigious 2014 International Humanitarian Award.
He's been recognized for his generosity of spirit, genuine concern for the welfare of his fellowmen, ethical business practices, dedicated service to the advancement of the livestock industry, and contribution to nation building.
Robert Levy is a graduate of the Harvard Business School.
According to his bio, his longstanding, passionate commitment to the agricultural industry in Jamaica, has made him a distinguished expert in livestock and crop production.
He joined the Jamaica Broilers Group in 1959 and worked his way up in several roles. He was named CEO in 1994, President and CEO in 2001 and to his current position as Chairman of the Board of Directors since 2009.
Judy Levy and her husband Robert Levy have been married since 1964.
In addition to Amanda, they raised three other now adult children, Christopher, Wendy and Stephen.
Toby McKeehan whose real name is Kevin Michael McKeehan, was born October 22, 1964. That would make him currently a 56-year-old man.
According to online sources, Toby's net worth has been estimated to be around the $10 million mark. His wealth comes from his decades-long career in the music business and his different roles as singer-songwriter, author, record producer, rapper and actor.
©2022 · DailyEntertainmentNews.com is part of the Herval.co publishing family
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 3,071 |
Title 22.1. Education
Chapter 5. School Boards; Selection, Qualification and Salaries of Members
Article 2. Method of Selecting School Boards in School Divisions Composed of a Single County.
§ 22.1-34. Application of article.
The school board in each county constituting a school division, except a county to which the provisions of §§ 15.2-410, 15.2-531, 15.2-627, 15.2-837 or § 22.1-44 are applicable, shall be selected as provided in this article.
§ 22.1-35. School board selection commission.
In each county to which the provisions of this article are applicable there shall be a school board selection commission composed of three members appointed from the county at large or, upon the request of the county governing body, one member appointed from each election district of such county. Members shall be qualified voters, shall reside in the county and shall not be county or state officers. Members shall be appointed by the circuit court of the county within thirty days after the first day of July, 1950, and every four years thereafter. Any vacancy occurring other than by expiration of term shall be filled by the circuit court within thirty days after the vacancy occurs. Each member shall receive twenty-five dollars for each day actually engaged in the performance of duties as such member, to be paid out of the funds of the school board. No person regularly employed by the school board of the division shall be eligible to serve on or as clerk of such school board selection commission.
Code 1950, § 22-60; 1956, c. 365; 1959, Ex. Sess., c. 79, § 1; 1972, cc. 224, 665; 1973, c. 275; 1980, c. 559.
§ 22.1-36. Composition of school board; to be appointed by commission.
The county school board shall consist of the same number of members from each magisterial district or, if the provisions of subsection C of § 15.2-1211 are applicable, election district in the county as there are members of the board of supervisors from each such district in the county. Each school board member shall be appointed by the school board selection commission. In addition to the members selected by districts, the governing body may authorize the school board selection commission to appoint no more than two members from the county at large.
Code 1950, § 22-61; 1969, Ex. Sess., c. 25; 1970, c. 88; 1971, Ex. Sess., c. 225; 1972, c. 137; 1980, c. 559.
§ 22.1-36.1. Composition of school board in certain cases.
Notwithstanding any other provision of law, when a county contains a town that is a separate school division, the school board for such county, regardless of whether it is elected or appointed, shall have no member representing such town. Instead, the county school board shall be comprised of one member elected or appointed from all of the election districts other than districts which have more than five percent of town residents, and an additional member elected or appointed at large from the entire county, excluding the town.
1993, c. 220; 1995, c. 316; 2002, cc. 146, 269.
§ 22.1-37. Notice by commission of meeting for appointment.
Before any appointment is made by the school board selection commission, it shall give notice, by publication once a week for four successive weeks in a newspaper having general circulation in such county, of the time and place of any meeting for the purpose of appointing the members of the county school board. Such notice shall be given whether the appointment is of a member or members of the county school board for the full term of office as provided by law or of a member to fill a vacancy occurring in the membership of the county school board or of a member from a new school district.
Code 1950, § 22-62; 1954, c. 638; 1980, c. 559; 1984, c. 131.
§ 22.1-38. Terms of members of school board.
Within sixty days prior to July 1 in each and every year, the school board selection commission shall appoint, for terms of four years beginning July 1 next following their appointment, successors to the members of the county school board whose terms of office expire on June 30 of such year.
In any county having five or more districts in which it is found by the school board selection commission that it is not in the best interest of the schools for the terms of the school board members from two certain districts to expire simultaneously and such terms have been so expiring, the commission may, on the next occasion thereafter for appointing successors to the school board members from such two districts, appoint the member from one of such districts for a term of one year with appointments thereafter to be made for terms of four years.
Code 1950, § 22-64; 1958, c. 515; 1980, c. 559.
§ 22.1-38.1. Provisions for school board where division consolidated as result of certain governmental consolidations.
A. Notwithstanding the provisions of §§ 22.1-38 and 22.1-57 or any other statutory provision, in any consolidation of school divisions comprised of single cities and single counties, which consolidation constitutes a part of a governmental consolidation resulting in the formation of a consolidated county and a tier-city, the consolidation agreement may provide as follows:
1. The effective date for consolidation of school divisions may be prior or subsequent to the effective date for general governmental consolidation.
2. Initial members of the consolidated school board selection committee may be selected as provided in § 22.1-35 from the consolidating divisions at any time after certification by the appropriate electoral boards of approval by referendum of the consolidation plan.
3. Initial members of the consolidated school board may be selected to assume office at any agreed time prior to the effective date for consolidation of school divisions, only for such of the following limited purposes as may be provided by the consolidation agreement or plan:
a. Organization of itself and election of one of its members as chairman.
b. Preparation and approval of an initial budget applicable to the newly consolidated school divisions.
c. Preparation of job descriptions, pay ranges and qualifications for each position in the consolidated school division.
d. Hiring of individuals to hold each position in the consolidated school division.
e. Designation of school attendance zones.
f. Allocation of office space and furniture to accommodate the administrative staff of the consolidated school division.
g. Preparation of seniority lists and reductions in force policy.
h. Approval of initial curriculum, grading systems, and all such forms of records as may be required.
i. Adoption of a transportation plan for the consolidated school division.
B. Any member of a school board of a consolidating school division may be appointed to the consolidated school board, and for the limited time period as provided in the consolidation agreement may hold both offices.
C. Upon the effective date of consolidation of school divisions, all school board members shall assume full powers, duties, rights and responsibilities of their offices.
§ 22.1-39. Vacancies in school board.
Vacancies occurring in the membership of the county school board shall be filled for the unexpired term by the school board selection commission.
Code 1950, § 22-65; 1980, c. 559.
§ 22.1-40. Appointment of tie breaker.
The school board selection commission may, at the option of the governing body of the county, appoint a qualified voter who is a resident of the county to cast the deciding vote in case of a tie vote of the school board as provided in § 22.1-75. The term of office of each tie breaker so appointed shall be four years whether the appointment is to fill a vacancy caused by expiration of term or otherwise. The commission shall give the notice required by § 22.1-37 before appointing any tie breaker.
The chapters of the acts of assembly referenced in the historical citation at the end of these sections may not constitute a comprehensive list of such chapters and may exclude chapters whose provisions have expired.
The Virginia General Assembly is offering access to the Code of Virginia on the Internet as a service to the public. We are unable to assist users of this service with legal questions nor respond to requests for legal advice or the application of the law to specific facts. Therefore, to understand and protect your legal rights, you should consult an attorney.
The Code of Virginia online database excludes material copyrighted by the publisher, Michie, a division of Matthew Bender. Copyrighted material includes annotations and revisors' notes, which may be found in the print version of the Code of Virginia. Annotated print copies of the Code of Virginia are available in most Virginia public library systems, from LexisNexis (1-800-446-3410), and from West, a Thomson-Reuters business (1-800-344-5008). | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 764 |
Q: How do I put a single csv row into one list with no duplicates? I have a csv file with a row that I need to put into a list. An example of the row would be
row A
apple
apple
apple
orange
orange
watermelon
and i need to read that row into a list without the duplicate names, so it would look like
['apple','orange','watermelon']
Here is my current code for this problem:
import csv
start = open('fruits.csv', 'r')
reader = csv.reader(start)
next(reader, None)
for row in reader:
fruits = [row[1]]
print(fruits)
My current code just puts each individual line into its own list.
A: In the code you provided, you are creating a new list with a single item every time you go through the for loop. Instead, you want to maintain a growing list:
import csv
start = open('data.txt', 'r')
reader = csv.reader(start)
next(reader, None)
fruits = [] #define an empty list
for row in reader:
fruits.append(row[1]) #add to the list
Note that in the data example you provide, there is only one column so it should be row[0] instead of row[1] if we are strictly using that example
To make the list unique, you can convert it into a set which enforces uniqueness:
fruits = set(fruits)
If you want it to be converted back into a list, try the following:
fruits = list(fruits)
Note that this method does not guarantee the order of the list will stay the same.
A: import csv
data = 'fruits.csv'
fruits = []
# read csv, append unique items to list
with open(data, 'r') as f:
reader = csv.reader(f)
for row in reader:
if row[0] not in fruits:
fruits.append(row[0])
# output: ['row A', 'apple', 'orange', 'watermelon']
print(fruits_in)
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 3,723 |
{"url":"https:\/\/socratic.org\/questions\/57ac03f97c014941441103cf","text":"# Question #103cf\n\n1. Distance covered in one revolution $=$One circumference\n$= 2 \\pi r$. Inserting given value we get\nDistance coverd in one revolution $= 2 \\pi \\times 5$.\n$\\therefore$ Distance moved in three revolutions $= 3 \\times 2 \\pi \\times 5 \\approx 94.2 c m$.\n2. We know that displacement $\\Delta \\vec{r}$after one revolution$= 0$, as body moving in circular motion is back to its original location.\n$\\implies$ Displacement $\\Delta \\vec{r}$after three revolutions$= 0$","date":"2022-01-23 13:30:28","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 10, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8266311883926392, \"perplexity\": 6767.381646999745}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-05\/segments\/1642320304261.85\/warc\/CC-MAIN-20220123111431-20220123141431-00162.warc.gz\"}"} | null | null |
Waganka is a village in the administrative district of Gmina Tłuszcz, within Wołomin County, Masovian Voivodeship, in east-central Poland.
References
Waganka | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 8,343 |
HMS Dianthus was a of the Royal Navy. She was launched on 9 July 1940 from the Leith Docks on the Firth of Forth and named after the genus of flowering plants including Carnation, Pink, and Sweet William. The ship escorted trade convoys between Newfoundland and the Western Approaches through the Battle of the Atlantic wolf pack attacks of the winter of 1942–43.
Background
Flower-class corvettes like Dianthus serving with the Royal Navy during World War II were different to earlier and more traditional sail-driven corvettes. The "corvette" designation was created by the French in the 19th century as a class of small warships; the Royal Navy borrowed the term for a period but discontinued its use in 1877. During the hurried preparations for war in the late 1930s, Winston Churchill reactivated the corvette class, needing a name for smaller ships used in an escort capacity, in this case based on a whaling ship design. The generic name "flower" was used to designate the class of these ships, which – in the Royal Navy – were named after flowering plants.
War duty
Dianthus spent 1941 escorting trade convoys through coastal waters and the Western Approaches to the United Kingdom until assigned to Mid-Ocean Escort Force (MOEF) group C1. Dianthus rammed and sank U-379 while defending convoy SC 94. Dianthus was assigned to MOEF group A3 after yard overhaul to repair damage from the ramming collision. With group A3, she participated in the battles of convoys ON 145, ON 166, SC 121 and HX 233. When group A3 disbanded, Dianthus was assigned to MOEF group C5 until another yard overhaul in August 1943. Dianthus completed refit in November and escorted four more trans-Atlantic convoys in two round trips before being returned to European coastal escort work for the remainder of the war. The ship was decommissioned and sold for civilian use following the end of hostilities. She became the Norwegian buoy tender Thorslep, and was later used for whaling before being scrapped in 1969.
Trans-Atlantic convoys escorted: winter of 1942–43
See also
Wolf pack
Notes and references
Notes
Bibliography
External links
1940 ships
Flower-class corvettes of the Royal Navy
Ships built in Leith | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 410 |
\section{Introduction}
\input{introduction.tex}
\section{Methods}
\input{methods.tex}
\section{Results and Discussion}
We present our results in four parts.
(i) Performance, scaling, and cost efficiency in terms of performance to price (P/P) ratios for the standard MD input systems such as MEM and RIB
on CPU and GPU instances.
(ii) A cost comparison of cloud computing versus buying and operating an own cluster.
(iii) As a prerequisite for the binding affinity study,
the results of the free energy benchmarks (SHP-2, c-Met, and HIF-2$\alpha$) on various instance types,
including the resulting performance to price ratios.
(iv) The performance and the costs of the binding affinity studies on global cloud resources.
\input{gmx2020resultsAWS-MEM-RIB.tex}
\input{costComparison.tex}
\input{gmx2020-2021resultsAWS-freeEnergy.tex}
\input{high-throughput-MD.tex}
\section{Summarizing discussion}
\input{summary.tex}
\section{Conclusions}
\input{conclusions.tex}
\section*{Data and Software Availability}
The input files for the benchmarks can be downloaded from \url{https://www.mpinat.mpg.de/grubmueller/bench}.
A guide to build GROMACS on AWS is available here: \url{https://gromacs-on-pcluster.workshop.aws}.
\section*{Acknowledgments}
Compute time for all cloud-based simulations of this study was generously provided by AWS public sector.
Many thanks to Torsten Bloth, Stephen Sachs, Cristian M\u{a}gheru\c{s}an-Stanciu,
Bruno Silva, Agata Jablonka, and Johannes Schulz for advice and support throughout the project.
Simulation input preparation and output analysis was done on compute clusters of the Max Planck Society.
The work was supported by the BioExcel CoE (www.bioexcel.eu), a project funded by the European Union contracts H2020-INFRAEDI-02-2018-823830.
\input{awsBenchmarks.bbl}
\end{document}
\subsection{Cost comparison: Cloud vs. on-premises cluster}
\label{sec:costComparison}
Whether or not it is more cost-efficient to run simulations on a cloud-based cluster depends of course almost completely on the specific use case,
i.e.\ how big the cluster will be, what software will run on it,
and whether there are enough jobs at all times to keep the cluster busy
as opposed to bursts of compute demand with idle time in between.
Therefore, no generalisable results or guidance can be provided here.
We do think, however,
that rough estimates of the respective costs, compared to those of a typical local compute cluster at a research institution,
will provide useful guidance, in particular for new groups in the field who need to set up compute resources.
To this aim, we will estimate and compare the total costs of producing one microsecond of trajectory for the RIB benchmark
with GROMACS\xspace.
\begin{figure}
\begin{center}
\includegraphics[width=1.1\textwidth]{./pictures/compareTCO.pdf}
\caption{\textbf{Costs of a compute node in an owned cluster compared to a cloud instance with similar GROMACS\xspace performance over 3 years.}
Violet bars show costs of AWS \texttt{g4dn.4xl} instances (producing 4.63 \nicefrac{ns}{d}\xspace of RIB trajectory),
which offer one of the highest performance to price ratios for GROMACS\xspace (compare Fig.~\ref{fig:scalingAWS}),
in individual blocks of one year.
Bar \textbf{A} shows the fixed costs for buying a consumer GPU node tailored to GROMACS\xspace within the thick black line
(broken down into individual hardware components) plus the yearly recurring costs (mainly energy) for three years.
This node (E5-2630v4 CPU plus RTX 2080 GPU) produces 5.9 \nicefrac{ns}{d}\xspace of RIB trajectory.\cite{kutznerMoreBang2018}
Bar \textbf{B} shows the average costs using an AWS Spot instance.
Bar \textbf{C} shows the costs when reserving the AWS instance and paying upfront.
Bar \textbf{D} is the same as bar A, but using a 4 U node with a professional GPU (e.g.\ Quadro P6000).
}
\label{fig:compareTCO}
\end{center}
\end{figure}
The hardware for an own cluster can be aggressively tuned towards cost-efficiency for simulations with GROMACS\xspace.
Combining inexpensive processors with consumer GPUs yields the best performance to price ratios.\cite{kutznerMoreBang2018}
For instance, 1~U nodes with an Intel E5-2630v4 processor plus an NVIDIA GeForce RTX 2080 GPU were offered for under 2,000~\euro\xspace net at the time,
including three years of warranty.
Fig.~\ref{fig:compareTCO}A shows a breakdown of the costs into individual contributions for that example.
For nodes similar to those, the RIB trajectory costs can be brought down to approximately 500~\euro\xspace per microsecond
(see Fig.~12 in \cite{kutznerMoreBang2018}).
However, that value is not the total cost of ownership
as it only reflects the costs for the compute node itself plus energy including cooling,
but no costs for technical staff, room and rack space.
Investment costs for the racks, cooling system, and infrastructure needed to operate the cluster
are estimated to about 500~\euro\xspace per U of rack space over the lifetime of the racks.
For a lifetime of 5 years that adds 100~\euro\xspace per U per year.
For technical staff to operate, repair, and look after a 500 node cluster,
we assume 100,000~\euro\xspace per year,
which adds 200~\euro\xspace to the operating costs for each node per year.
A suitable room
(60 -- 100~m$^2$ for about 500~U of hardware with appropriate infrastructure and the possibility to install heavy apparatus)
adds about 30,000~\euro\xspace to the yearly costs (60~\euro\xspace per node), depending on the location.
For cluster management software we assume 40~\euro\xspace per node per year.
Taken together, that adds $100+200+60+40=400$~\euro\xspace for each node per year.
As our exemplary nodes (E5-2630v4 CPU with RTX 2080 GPU) have been benchmarked\cite{kutznerMoreBang2018} to produce
5.9~\nicefrac{ns}{d}\xspace of RIB trajectory, a node needs 170 days for a microsecond.
This adds $170/365 * 400 \approx 185$~\euro\xspace to the trajectory production costs.
Including those costs, total RIB trajectory costs can be estimated to be roughly 700~\euro\xspace per microsecond
with optimal hardware.
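As a cross-check of the overhead arithmetic above, the following minimal sketch reproduces the quoted numbers; all inputs are the rough estimates from the text, not general constants:

```python
# Estimate on-premises RIB trajectory costs (EUR per microsecond) from the
# figures quoted above; all numbers are the paper's own rough estimates.

def overhead_per_microsecond(ns_per_day, overhead_per_node_year):
    """Infrastructure/staff overhead attributed to one microsecond of trajectory."""
    days_per_microsecond = 1000.0 / ns_per_day   # 1 us = 1000 ns
    return days_per_microsecond / 365.0 * overhead_per_node_year

# Rack space (100) + staff (200) + room (60) + cluster software (40), EUR/node/year
overhead = 100 + 200 + 60 + 40                   # = 400 EUR per node per year

extra = overhead_per_microsecond(5.9, overhead)  # E5-2630v4 + RTX 2080: 5.9 ns/d
total = 500 + extra                              # node + energy: ~500 EUR per us

print(round(extra), round(total))                # prints: 186 686
```

Rounded up for the uncertainty in the individual estimates, this yields the roughly 700~\euro\xspace per microsecond quoted above.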
Bar D of Fig.~\ref{fig:compareTCO} illustrates how the costs grow when using the
same hardware as in bar A, but with a professional GPU (e.g.\ an NVIDIA Quadro P6000) instead of
a consumer GPU (which leads to considerably higher fixed costs)
and in a larger chassis occupying 4~U of rack space
(which leads to significantly increased recurring costs for room and rack space).
Thus, densely packed hardware helps to reduce costs.
\texttt{g4dn.4xl} instances offer both a high absolute performance as well as a good
performance to price ratio for producing RIB trajectory with GROMACS\xspace (Fig.~\ref{fig:scalingAWS}),
which would therefore be a good pick for production runs.
As seen in Tab.~\ref{tab:gpu_instances}, such an instance produces 4.63~\nicefrac{ns}{d}\xspace for 1.20 dollars (1.00~\euro\xspace) per hour,
i.e.\ one microsecond of RIB trajectory would cost about 5,200~\euro\xspace on an on-demand instance.
To reduce costs, one would reserve an instance for one or three years,
and for maximal savings one can pay upfront.
In the latter case, a \texttt{g4dn.4xl} would cost about 0.40~\euro\xspace per hour,
translating to about 2,100~\euro\xspace for a microsecond of RIB trajectory.
Running on Spot instances will on average reduce the trajectory cost by 70 percent compared to
the on-demand price, as illustrated in bar B of Fig.~\ref{fig:compareTCO},
resulting in 1,500~\euro\xspace for a microsecond of RIB trajectory.
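The per-microsecond figures above follow directly from the benchmark performance and the hourly price; a small helper makes the conversion explicit (a sketch using the \texttt{g4dn.4xl} numbers quoted in the text; the 70 percent Spot discount is the average stated above):

```python
# Convert an instance's GROMACS performance (ns/day) and hourly price into
# trajectory costs per microsecond; prices in EUR as quoted in the text.

def cost_per_microsecond(ns_per_day, price_per_hour):
    hours_per_microsecond = 1000.0 / ns_per_day * 24.0  # 1 us = 1000 ns
    return hours_per_microsecond * price_per_hour

on_demand = cost_per_microsecond(4.63, 1.00)  # g4dn.4xl, on demand
reserved  = cost_per_microsecond(4.63, 0.40)  # reserved, paid upfront
spot      = on_demand * 0.30                  # Spot: on average 70 % cheaper

print(round(on_demand), round(reserved), round(spot))  # prints: 5184 2073 1555
```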
However, this trajectory cost comparison implicitly assumes that the department cluster
produces trajectory 100 percent of the time, without down or idle time.
If cluster nodes produce useful data only 3/4 of the time,
trajectory costs increase by a factor of 4/3,
whereas Spot instances are only paid for while they are actually needed for production.
For a cluster utilization of 75 \% this would increase the trajectory production costs
of our optimal nodes to about 950 \euro\xspace per microsecond of RIB trajectory.
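The utilization correction itself is a one-line division; as a sketch, using the rounded 700~\euro\xspace per microsecond on-premises estimate from above:

```python
# Idle time inflates on-premises trajectory costs: only the productive
# fraction of wall-clock time generates trajectory.
on_prem_cost = 700.0     # EUR per microsecond at 100 % utilization
utilization = 0.75       # nodes produce useful data 3/4 of the time
effective = on_prem_cost / utilization
print(round(effective))  # prints: 933, i.e. roughly the ~950 EUR quoted above
```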
In summary, with careful selection of cloud resources and payment options,
there is not much difference in cost today compared to on-premises computing.
\subsection{GROMACS\xspace performance for free energy calculations}
Turning on FE perturbations reduces the GROMACS\xspace performance, because
an additional PME grid is evaluated,
and because interactions involving perturbed atoms run through kernels that are not
as optimized as the standard kernels.
How much the performance differs with and without FE depends on
how big the fraction of perturbed atoms is and
on the parameters chosen for FE.
For those reasons we cannot use the MEM and RIB benchmarks to predict the performances
of the systems used in our high throughput ligand screening study.
Instead, we carry out new benchmarks for four representative FE systems
(Tab.~\ref{tab:systems}) chosen from the whole binding affinity ensemble (Tab.~\ref{tab:ba1}).
The performances for these systems, which are a small ligand in water system (from the c-Met dataset)
plus three protein-ligand complexes of different size
(HIF-2$\alpha$, c-Met, and SHP-2) are shown in Tab.~\ref{tab:equil_CPU} for CPU instances
for various decompositions into MPI ranks and OpenMP threads.
The table shows the general trend of small instances exhibiting higher P/P ratios
but there are no pronounced differences between the architectures.
The highest performances are observed on the 96 vCPU Intel instances.
\begin{table}
\caption{\textbf{Free energy benchmarks on CPU instances.}
Performances (\nicefrac{ns}{d}\xspace) and performance to price ratios (ns/\$) for GROMACS\xspace 2020
on various Intel (\texttt{c5} and \texttt{m5zn}), AMD (\texttt{c5a}), and ARM (\texttt{c6g}) CPU instances.
Color-coding as in Tab.~\ref{tab:numbers2020}.}
\label{tab:equil_CPU}
\begin{center}
\scriptsize
\STautoround*{2}
\begin{spreadtab}{{tabular}{lcccKCXRYQZP}}
\toprule
@ &@ & @ &@ & @\multicolumn{2}{c}{--- ligand ---} & @\multicolumn{6}{c}{--- protein-ligand complex ---} \\
@instance &@vCPUs & @price &@ ranks \mbox{$\times$}\xspace & @\multicolumn{2}{c}{c-Met} & @\multicolumn{2}{c}{HIF-2$\alpha$} & @\multicolumn{2}{c}{c-Met} & @\multicolumn{2}{c}{SHP-2} \\
@type &@ & @(\$/h) &@ \hspace{3mm} threads & @{(\nicefrac{ns}{d}\xspace)} & @{(ns/\$)} & @{(\nicefrac{ns}{d}\xspace)} & @{(ns/\$)} & @{(\nicefrac{ns}{d}\xspace)} & @{(ns/\$)} & @{(\nicefrac{ns}{d}\xspace)} & @{(ns/\$)} \\ \midrule
@\texttt{c5.24xl} &@ 96 & 4.08 &@ 2 \mbox{$\times$}\xspace 48 & & & 44.346 & \STcopy{v99}{[-1,0]/(24*[-5,0])} & 27.705 & \STcopy{v99}{[-1,0]/(24*[-7,0])} & 18.379 & \STcopy{v99}{[-1,0]/(24*[-9,0])} \\
@ &@ 96 & 4.08 &@ 3 \mbox{$\times$}\xspace 32 & & & 42.535 & & 28.377 & & 19.460 & \\
@ &@ 96 & 4.08 &@ 4 \mbox{$\times$}\xspace 24 & & & 55.595 & & 35.252 & & 25.338 & \\
@ &@ 96 & 4.08 &@ 6 \mbox{$\times$}\xspace 16 & & & 66.884 & & 37.958 & & 26.872 & \\
@ &@ 96 & 4.08 &@ 8 \mbox{$\times$}\xspace 12 & & & 69.454 & & 40.437 & & 28.566 & \\
@ &@ 96 & 4.08 &@ 12 \mbox{$\times$}\xspace 8 & & & 71.678 & & 46.960 & & 30.540 & \\
@ &@ 96 & 4.08 &@ 16 \mbox{$\times$}\xspace 6 & & & 80.647 & & 43.397 & & 34.358 & \\
@ &@ 96 & 4.08 &@ 24 \mbox{$\times$}\xspace 4 & & & 83.176 & & 46.336 & & 32.223 & \\
@ &@ 96 & 4.08 &@ 32 \mbox{$\times$}\xspace 3 & & & 82.789 & & 48.210 & & 36.361 & \\
@ &@ 96 & 4.08 &@ 48 \mbox{$\times$}\xspace 2 & & & 89.649 & & 45.598 & & 34.907 & \\
@ &@ 96 & 4.08 &@ 96 \mbox{$\times$}\xspace 1 & & & 64.936 & & 32.197 & & 20.202 & \\ \cmidrule{1-4}
@\texttt{c5.18xl} &@ 72 & 3.06 &@ 18 \mbox{$\times$}\xspace 4 & & & 65.444 & & 42.165 & & 28.086 & \\
@\texttt{c5.12xl} &@ 48 & 2.04 &@ 1 \mbox{$\times$}\xspace 48 & & & 52.355 & & 31.391 & & 21.877 & \\
@\texttt{c5.9xl} &@ 36 & 1.53 &@ 1 \mbox{$\times$}\xspace 36 & & & 47.048 & & 26.604 & & 18.055 & \\
@\texttt{c5.4xl} &@ 16 & 0.68 &@ 1 \mbox{$\times$}\xspace 16 & 77.012 & \STcopy{v99}{[-1,0]/(24*[-3,0])} & 27.319 & & 15.089 & & 10.011 & \\
@\texttt{c5.2xl} &@ 8 & 0.34 &@ 1 \mbox{$\times$}\xspace 8 & 52.371 & & 15.236 & & 8.029 & & 5.225 & \\
@\texttt{c5.xl} &@ 4 & 0.17 &@ 1 \mbox{$\times$}\xspace 4 & 30.672 & &@ &@ &@ &@ &@ &@ \\
@\texttt{c5.large} &@ 2 & 0.085 &@ 1 \mbox{$\times$}\xspace 2 & 18.106 & &@ &@ &@ &@ &@ &@ \\ \midrule
@\texttt{m5zn.12xl}&@ 48 & 3.9641 &@ 8 \mbox{$\times$}\xspace 6 & &@ & 64.347 & & 36.753 & & 25.501 & \\
@\texttt{m5zn.2xl} &@ 8 & 0.6607 &@ 1 \mbox{$\times$}\xspace 8 & 57.663 & & 19.282 & & 10.059 & & 6.568 & \\ \midrule
@\texttt{c5a.24xl} &@ 96 & 3.696 &@ 48 \mbox{$\times$}\xspace 2 & &@ & 71.728 & & 39.964 & & 27.626 & \\
@\texttt{c5a.16xl} &@ 64 & 2.464 &@ 32 \mbox{$\times$}\xspace 2 & &@ & 58.710 & & 32.737 & & 21.054 & \\
@\texttt{c5a.12xl} &@ 48 & 1.848 &@ 24 \mbox{$\times$}\xspace 2 & &@ & 46.065 & & 25.450 & & 16.496 & \\
@\texttt{c5a.8xl} &@ 32 & 1.232 &@ 32 \mbox{$\times$}\xspace 1 & &@ & 32.966 & & 17.902 & & 11.573 & \\
@\texttt{c5a.4xl} &@ 16 & 0.616 &@ 1 \mbox{$\times$}\xspace 16 & 70.034 & & 18.382 & & 9.988 & & 6.630 & \\
@\texttt{c5a.2xl} &@ 8 & 0.308 &@ 1 \mbox{$\times$}\xspace 8 & 48.891 & & 10.775 & & 5.719 & & 3.698 & \\
@\texttt{c5a.xl} &@ 4 & 0.154 &@ 1 \mbox{$\times$}\xspace 4 & 26.094 & &@ &@ &@ &@ &@ &@ \\
@\texttt{c5a.large}&@ 2 & 0.077 &@ 1 \mbox{$\times$}\xspace 2 & 13.414 & &@ &@ &@ &@ &@ &@ \\ \midrule
@\texttt{c6g.16xl} &@ 64 & 2.176 &@ 32 \mbox{$\times$}\xspace 2 & &@ & 58.072 & & 32.846 & & 21.604 & \\
@\texttt{c6g.12xl} &@ 48 & 1.632 &@ 12 \mbox{$\times$}\xspace 4 & 151.413 & & 46.263 & & 25.866 & & 17.246 & \\
@\texttt{c6g.8xl} &@ 32 & 1.088 &@ 4 \mbox{$\times$}\xspace 8 & 111.261 & & 32.645 & & 18.745 & & 12.157 & \\
@\texttt{c6g.4xl} &@ 16 & 0.544 &@ 1 \mbox{$\times$}\xspace 16 & 69.803 & & 18.583 & & 10.325 & & 6.579 & \\
@\texttt{c6g.2xl} &@ 8 & 0.272 &@ 1 \mbox{$\times$}\xspace 8 & 42.473 & & 10.043 & & 5.527 & & 3.451 & \\
@\texttt{c6g.xl} &@ 4 & 0.136 &@ 1 \mbox{$\times$}\xspace 4 & 23.319 & & @ & @ & @ & @ & @ & @ \\
\bottomrule
\end{spreadtab}
\end{center}
\end{table}
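The performance to price ratios listed in these tables are simply the daily trajectory output divided by the daily instance cost. As a minimal sketch, with the example values taken from the first \texttt{c5.24xl} row:

```python
# Performance to price ratio (ns per dollar) as used in the tables:
# nanoseconds produced per day divided by the instance cost for that day.

def ns_per_dollar(ns_per_day, price_per_hour):
    return ns_per_day / (24.0 * price_per_hour)

# c5.24xl at 4.08 $/h, HIF-2alpha complex, 2 ranks x 48 threads: 44.346 ns/d
print(round(ns_per_dollar(44.346, 4.08), 3))  # prints: 0.453
```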
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.85\textwidth]{pictures/perfComparison2020-2021.pdf}
\caption{\textbf{Performance improvements of GROMACS\xspace 2021 for free energy calculations on GPUs.}
For three different MD systems (colors) with free energy perturbation turned on,
the bars compare GROMACS\xspace 2021 and 2020 performances
on a \texttt{p3.2xl} instance.
}
\label{fig:performance20vs21}
\end{center}
\end{figure}
Up to version 2020, the PME grid calculations could not be offloaded to the GPU when charges were perturbed.
This has changed with version 2021, enhancing performance on GPU instances considerably,
by more than a factor of two in our cases (Fig.~\ref{fig:performance20vs21}).
Therefore, on GPU instances, we used GROMACS\xspace 2021 for all binding affinity simulations.
The benchmark results for the four representative FE systems on GPU instances are assembled in Tab.~\ref{tab:equil_GPU}.
\begin{table}[tbp]
\caption{\textbf{Free energy benchmarks on GPU instances.}
As in Tab.~\ref{tab:equil_CPU}, but now for
GROMACS\xspace 2021 using one GPU per simulation.
The single-GPU performance on \texttt{p4d.24xl} was derived by running 8 identical benchmarks,
each using one GPU and 1/8th of the hardware threads, in a multi-simulation.
}
\label{tab:equil_GPU}
\begin{center}
\scriptsize
\STautoround*{2}
\begin{spreadtab}{{tabular}{lcccKCXRYQZP}}
\toprule
@ &@ & @ &@ & @\multicolumn{2}{c}{--- ligand ---} & @\multicolumn{6}{c}{--- protein-ligand complex ---} \\
@instance &@vCPUs & @price &@ ranks \mbox{$\times$}\xspace & @\multicolumn{2}{c}{c-Met} & @\multicolumn{2}{c}{HIF-2$\alpha$} & @\multicolumn{2}{c}{c-Met} & @\multicolumn{2}{c}{SHP-2} \\
@type &@ & @(\$/h) &@ \hspace{3mm} threads & @{(\nicefrac{ns}{d}\xspace)} & @{(ns/\$)} & @{(\nicefrac{ns}{d}\xspace)} & @{(ns/\$)} & @{(\nicefrac{ns}{d}\xspace)} & @{(ns/\$)} & @{(\nicefrac{ns}{d}\xspace)} & @{(ns/\$)} \\ \midrule
@\texttt{p3.2xl} &@ 8 & 3.06 &@ 1 \mbox{$\times$}\xspace 8 & & & 72.207 & \STcopy{v99}{[-1,0]/(24*[-5,0])} & 42.830 & \STcopy{v99}{[-1,0]/(24*[-7,0])} & 31.480 & \STcopy{v99}{[-1,0]/(24*[-9,0])} \\
@\texttt{p4d.24xl}/8 &@ 96/8 & 32.7726/8 &@ 1 \mbox{$\times$}\xspace 12 & & & 90.451 & & 57.303 & & 41.919 & \\
@\texttt{g3s.xl} &@ 4 & 0.75 &@ 1 \mbox{$\times$}\xspace 4 & 61.415 & \STcopy{v99}{[-1,0]/(24*[-3,0])} & 43.911 & & 22.814 & & 14.211 & \\
@\texttt{g3.4xl} &@ 16 & 1.14 &@ 1 \mbox{$\times$}\xspace 16 & 125.923 & & 60.595 & & 30.554 & & 19.330 & \\ \midrule
@\texttt{g4dn.16xl} &@ 64 & 4.352 &@ 1 \mbox{$\times$}\xspace 64 & 90.100 & & 93.553 & & 64.729 & & 42.260 & \\
@ &@ 64 & 4.352 &@ 1 \mbox{$\times$}\xspace 32 & 119.676 & & 106.986 & & 68.354 & & 43.710 & \\
@\texttt{g4dn.8xl} &@ 32 & 2.176 &@ 1 \mbox{$\times$}\xspace 32 & 130.074 & & 108.721 & & 67.957 & & 43.160 & \\
@ &@ 32 & 2.176 &@ 1 \mbox{$\times$}\xspace 16 & 134.507 & & 114.142 & & 69.215 & & 43.467 & \\
@\texttt{g4dn.4xl} &@ 16 & 1.204 &@ 1 \mbox{$\times$}\xspace 16 & 117.325 & & 96.552 & & 61.866 & & 38.985 & \\
@ &@ 16 & 1.204 &@ 1 \mbox{$\times$}\xspace 8 & 118.946 & & 93.666 & & 56.305 & & 38.397 & \\
@\texttt{g4dn.2xl} &@ 8 & 0.752 &@ 1 \mbox{$\times$}\xspace 8 & 98.946 & & 70.448 & & 41.158 & & 29.406 & \\
@ &@ 8 & 0.752 &@ 1 \mbox{$\times$}\xspace 4 & 86.068 & & 69.728 & & 41.422 & & 29.489 & \\
@\texttt{g4dn.xl} &@ 4 & 0.526 &@ 1 \mbox{$\times$}\xspace 4 & 58.156 & & 49.401 & & 28.692 & & 19.626 & \\
@ &@ 4 & 0.526 &@ 1 \mbox{$\times$}\xspace 2 & 47.809 & & 42.145 & & 24.075 & & 16.501 & \\
\bottomrule
\end{spreadtab}
\end{center}
\end{table}
Whereas the performances of the 32 and 64 vCPU \texttt{g4dn} instances are comparable to or higher than that of
the best performing CPU instances (i.e.\ \texttt{c6g.12xl} for the ligand in water and \texttt{c5.24xl} for the protein-ligand complexes),
the smaller \texttt{g4dn} instances with $\leq 16$ vCPUs still offer high performances
but at exceptionally high P/P ratios: about two times higher than on CPU instances.
On the instances with $\geq 32$ vCPUs it is beneficial for performance
to use only half the vCPUs for OpenMP threads,
as spreading the work over too many threads otherwise degrades performance.
In a nutshell, the highest performances are observed on GPU-accelerated \texttt{g4dn.8xl} instances for the protein-ligand systems,
and on the ARM-based \texttt{c6g.12xl} instances for the small ligand in water systems.
Regarding cost-efficiency, any of the \texttt{c5}, \texttt{c5a}, or \texttt{c6g} instances with $\leq 8$ vCPUs
has a high P/P ratio for the small ligand in water systems, whereas single-GPU
\texttt{g4dn} instances with $\leq 16$ vCPUs are undefeated for the larger protein-ligand systems.
\subsubsection*{Costs and time-to-solution per FE calculation}
The numbers in Tables~\ref{tab:equil_CPU} and \ref{tab:equil_GPU} are for the
equilibration phase of the FE calculation (see Sec.~\ref{sec:methodsFEbench}).
We do not list the benchmark results of the transition phase separately,
but included them in the estimate of the total cost of computing one FE difference,
as described in the methods section.
Fig.~\ref{fig:optimalConfig} shows the time-to-solution and the costs per FE difference that result
when using different Spot instance types.
With three replicas and two directions,
the total cost for one FE difference is 6\mbox{$\times$}\xspace the cost of the protein-ligand part,
plus the (small) cost of the ligand in water part.
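This bookkeeping can be sketched as follows. Note that the hourly Spot price and the simulated nanoseconds per leg are hypothetical placeholders for illustration, not the benchmark values; the per-day performances are the \texttt{g4dn.4xl} numbers from Tab.~\ref{tab:equil_GPU}:

```python
# Cost of one free-energy difference: 3 replicas x 2 directions = 6 legs of the
# protein-ligand complex, plus one (cheap) ligand-in-water contribution.
# Hourly price and nanoseconds per leg are illustrative placeholders.

def fe_difference_cost(complex_ns_per_day, ligand_ns_per_day,
                       ns_per_leg, price_per_hour,
                       replicas=3, directions=2):
    def leg_cost(ns_per_day):
        hours = ns_per_leg / ns_per_day * 24.0
        return hours * price_per_hour
    return (replicas * directions * leg_cost(complex_ns_per_day)
            + leg_cost(ligand_ns_per_day))

# g4dn.4xl: c-Met complex at 61.9 ns/d, ligand in water at 117.3 ns/d;
# 0.40 $/h Spot price and 20 ns per leg are placeholders
print(round(fe_difference_cost(61.9, 117.3, 20.0, 0.40), 2))  # prints: 20.25
```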
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.8\textwidth]{./pictures/costsPerFeValueSpot.pdf}\\
\caption{\textbf{Costs and time needed to compute one FE difference.}
Diamonds show the costs to compute one FE difference (using Spot pricing)
versus the time-to-solution for various instance types (colors) for the c-Met system.
In addition to c-Met, HIF-2$\alpha$ is shown at the lower left end of each colored line,
and SHP-2 at the upper right end.
The gray square shows costs and timings for a consumer GPU node specifically tuned for GROMACS\xspace simulations,
as discussed in Sec.~\ref{sec:costComparison} and shown in Fig.~\ref{fig:compareTCO}A.
}
\label{fig:optimalConfig}
\end{center}
\end{figure}
Spot instance costs are just about a third of the on-demand costs (not shown),
although Spot prices vary slightly among the regions and change over time.
We therefore used Spot instances for our binding affinity studies,
even though these may be terminated at any time should there be demand for that instance type in the given region.
As can be seen in the Figure, on CPU instances the time-to-solution generally shrinks with the number of vCPUs (as expected) while the costs grow.
Using \texttt{g4dn.xl}, \texttt{g4dn.2xl}, or \texttt{g4dn.4xl} GPU instances,
any FE calculation is completed within 15 hours for less than 20~\$ for all systems (green quadrant).
Other single-GPU instances like \texttt{g4dn.8xl} and \texttt{g3.4xl} are somewhat less cost-efficient, but
still better than the remaining instance types.
The white quadrant above the green quadrant accommodates multiple instance types
on which an FE value can be computed in less than 15 hours,
albeit at a markedly higher cost than on \texttt{g4dn} instances.
\subsection{Which instances are optimal for GROMACS\xspace?}
Tables~\ref{tab:numbers2020}--\ref{tab:g4dn.16xl_scaling2020} show the benchmark results for various instance types.
For CPU instances, Tab.~\ref{tab:numbers2020} lists MEM and RIB performances in gray and blue
colors, and the resulting P/P ratios from greens over yellows to reds,
corresponding to high, medium and low cost-efficiency.
Tab.~\ref{tab:gpu_instances} shows the same for instances with up to 8 GPUs.
As the mapping of colors to values depends on the smallest and largest observed values,
it differs between MEM and RIB but is the same across all tables:
as a result, greens always refer to good choices in terms of P/P ratio.
For several of the instances, various ranks \mbox{$\times$}\xspace threads decompositions are listed;
``PME ranks'' indicates if and how many MPI ranks were set aside for PME.
\subsubsection{Performance on individual instances with CPUs}
The highest performances were measured on \texttt{c6i.32xl} and \texttt{m6i.32xl} instances,
which is expected as with 128 vCPUs they offer the largest number of cores (see also Tab.~\ref{tab:instances}).
Performance-wise, they are followed by 96 vCPU \texttt{c5.24xl} and \texttt{m5n.24xl} instances.
In terms of cost-efficiency, the 72 vCPU \texttt{c5} instances as well as
the 64 vCPU ARM-based \texttt{c6g}'s are a good choice for GROMACS\xspace,
whereas the \texttt{c6i}'s with 128 vCPUs score high for the large MD system.
\begin{table}
\caption{{\bf GROMACS\xspace 2020 performance on selected CPU instances.}
\nicefrac{ns}{d}\xspace values list MEM and RIB performances,
(ns/\$) columns performance to price.
Values are color-coded for a quick visual orientation:
Grays for low performances, blue towards higher values.
For the performance to price ratios, reds indicate sub-average ratios,
yellows average, and greens above-average ratios.
}
\vspace{-5mm}
\label{tab:numbers2020}
\begin{center}
\scriptsize
\STautoround*{2}
\begin{spreadtab}{{tabular}{ll
ccMLON}}
\toprule
@ processor(s) and & @price & @ranks \mbox{$\times$}\xspace & @PME & @{MEM} & @{RIB} & @{MEM} & @{RIB} \\
@ \texttt{instance type} & @(\$/h) & @\hspace{3mm} threads & @ranks & @{(\nicefrac{ns}{d}\xspace)} & @{(\nicefrac{ns}{d}\xspace)} & @{(ns/\$)} & @{(ns/\$/10)} \\ \midrule
@\texttt{c5.24xl} & 4.08 & @96 \mbox{$\times$}\xspace 1 & @ - & 97.3235 & 6.9465 & \STcopy{v99}{[-2,0]/(24*[-5,0])} & \STcopy{v99}{10*[-2,0]/(24*[-6,0])} \\
@\hspace{3mm} Intel 8275CL & 4.08 & @48 \mbox{$\times$}\xspace 2 & @ - & 105.3735 & 6.3025 & & \\
@\hspace{3mm} 2 x 24 cores & 4.08 & @48 \mbox{$\times$}\xspace 1 & @ - & 96.961 & 6.3435 & & \\
@\hspace{3mm} 96 vCPUs & 4.08 & @32 \mbox{$\times$}\xspace 3 & @ - & 93.8795 & 6.136 & & \\
& 4.08 & @24 \mbox{$\times$}\xspace 4 & @ - & 96.6770 & 5.9715 & & \\
& 4.08 & @16 \mbox{$\times$}\xspace 6 & @ - & 91.1270 & 5.726 & & \\
& 4.08 & @12 \mbox{$\times$}\xspace 8 & @ - & 87.021 & 5.5755 & & \\
& 4.08 & @ 8 \mbox{$\times$}\xspace 12 & @ - & 81.370 & 5.618 & & \\
& 4.08 & @96 \mbox{$\times$}\xspace 1 & @ 32 & 99.8895 & 6.4925 & & \\
& 4.08 & @48 \mbox{$\times$}\xspace 2 & @ 16 & 91.448 & 6.2555 & & \\
& 4.08 & @48 \mbox{$\times$}\xspace 1 & @ 16 & 92.3805 & 5.9115 & & \\
& 4.08 & @32 \mbox{$\times$}\xspace 3 & @ 12 & 83.318 & 6.3585 & & \\
& 4.08 & @24 \mbox{$\times$}\xspace 2 & @ 8 & 89.27 & 5.845 & & \\ \midrule
@\texttt{c5.18xl} & 3.06 & @36 \mbox{$\times$}\xspace 1 & @ - & 81.733 & 4.4545 & & \\
@\hspace{3mm} Intel 8124M & 3.06 & @72 \mbox{$\times$}\xspace 1 & @ - & 86.384 & 4.7995 & & \\
@\hspace{3mm} 2 x 18 cores & 3.06 & @36 \mbox{$\times$}\xspace 2 & @ - & 89.388 & 4.594 & & \\
@\hspace{3mm} 72 vCPUs & 3.06 & @24 \mbox{$\times$}\xspace 3 & @ - & 79.209 & 4.4295 & & \\
& 3.06 & @18 \mbox{$\times$}\xspace 4 & @ - & 79.346 & 4.349 & & \\
& 3.06 & @12 \mbox{$\times$}\xspace 6 & @ - & 74.4755 & 4.198 & & \\
& 3.06 & @36 \mbox{$\times$}\xspace 2 & @ 12 & 79.3355 & 5.0005 & & \\
& 3.06 & @72 \mbox{$\times$}\xspace 1 & @ 24 & 84.1015 & 5.094 & & \\ \midrule
@\texttt{m5zn.12xl} & 3.9641 & @24 \mbox{$\times$}\xspace 2 & @ 8 & 69.700 & 4.620 & & \\
@\texttt{m5n.24xl} & 5.712 & @96 \mbox{$\times$}\xspace 1 & @ 32 & 102.147 & 5.895 & & \\ \midrule
@\texttt{c6i.32xl} & 5.44 & @128\mbox{$\times$}\xspace 1 & @ 44 & 118.0965 & 10.0415 & & \\
@\texttt{m6i.32xl} & 6.144 & @40 \mbox{$\times$}\xspace 1 & @ 24 & 121.866 & 10.076 & & \\ \midrule
@\texttt{c5a.24xl} & 3.696 & @48 \mbox{$\times$}\xspace 1 & @ - & 67.151 & 4.218 & & \\
@\hspace{3mm} AMD EPYC 7R32 & 3.696 & @96 \mbox{$\times$}\xspace 1 & @ - & 69.121 & 4.239 & & \\
@\hspace{3mm} 48 cores & 3.696 & @48 \mbox{$\times$}\xspace 2 & @ - & 71.116 & 4.034 & & \\
@\hspace{3mm} 96 vCPUs & 3.696 & @24 \mbox{$\times$}\xspace 3 & @ - & 54.175 & 3.980 & & \\
& 3.696 & @24 \mbox{$\times$}\xspace 4 & @ - & 64.981 & 4.006 & & \\
& 3.696 & @16 \mbox{$\times$}\xspace 6 & @ - & 53.326 & 3.786 & & \\
& 3.696 & @48 \mbox{$\times$}\xspace 2 & @ 12 & 67.187 & 4.734 & & \\
& 3.696 & @96 \mbox{$\times$}\xspace 1 & @ 16 & 75.0465 & 5.0155 & & \\
& 3.696 & @96 \mbox{$\times$}\xspace 1 & @ 24 & 76.882 & 4.950 & & \\ \midrule
@\texttt{c6g.16xl} & 2.176 & @ 1 \mbox{$\times$}\xspace 64 & @ - & 54.6525 & 3.5465 & & \\
@\hspace{3mm} ARM Graviton2 & 2.176 & @64 \mbox{$\times$}\xspace 1 & @ - & 59.595 & 3.532 & & \\
@\hspace{3mm} 64 cores & 2.176 & @64 \mbox{$\times$}\xspace 1 & @ 10 & 50.6255 & 3.3105 & & \\
@\hspace{3mm} 64 vCPUs & 2.176 & @64 \mbox{$\times$}\xspace 1 & @ 14 & 62.023 & 3.6585 & & \\
& 2.176 & @64 \mbox{$\times$}\xspace 1 & @ 16 & 51.77 & 3.731 & & \\
& 2.176 & @32 \mbox{$\times$}\xspace 2 & @ - & 56.2035 & 3.319 & & \\
\bottomrule
\end{spreadtab}
\end{center}
\end{table}
\subsubsection{Performance on single instances with GPUs}
Among the GPU instances (Tab.~\ref{tab:gpu_instances}), those with a single V100 or T4 GPU
reach roughly the performance of the \texttt{c5.24xl} CPU instance,
albeit with a significantly (1.2--1.8\mbox{$\times$}\xspace) better cost-efficiency.
In fact, the single-GPU \texttt{g4dn}'s exhibit the best cost-efficiency of all instances
for the MEM and RIB benchmarks.
Perhaps unsurprisingly, the highest single-instance performances of this whole study
have been measured on instances with four and eight GPUs.
With the exception of the (comparatively cheap) quadruple-GPU \texttt{g4dn.12xl} instances, however,
the P/P ratio plunges when distributing a simulation across many GPUs on an instance.
In those cases, GROMACS\xspace uses both domain decomposition via MPI ranks and OpenMP parallelization,
incurring the overheads of both approaches.
Additionally, as the PME long-range contribution cannot (yet) be distributed across multiple GPUs,
it is offloaded to a single GPU, while the other GPUs share the remaining calculations of the nonbonded interactions.
All imbalances in computational load between the GPUs or between the CPU and GPU part
translate into a loss in efficiency and thus in a reduced cost-efficiency.
For single-GPU simulations, GROMACS\xspace has a performance sweet spot.
Here, domain decomposition is usually neither needed nor invoked, and all nonbonded interactions
including PME can be offloaded to a single GPU, leading to considerably less
imbalance than in the multi-GPU scenario.
To use instances with $N$ GPUs more efficiently, one can run $N$ simulations simultaneously
on them via GROMACS\xspace' built-in \texttt{-multidir} functionality,
thus essentially gaining the efficiency of the single-GPU case.
This is demonstrated in Tab.~\ref{tab:gpu_instances} for the \texttt{p4d.24xl} and the \texttt{g4dn.12xl} instances.
The \texttt{p4d.24xl} line in the table shows the results for parallelizing a single simulation across the whole instance,
whereas \texttt{p4d.24xl/8} shows what happens when eight simulations run concurrently.
Here, the amount of trajectory produced, and thus also the cost-efficiency, is about four times as high.
Comparing \texttt{g4dn.12xl/4} to \texttt{g4dn.12xl},
running four concurrent simulations instead of one translates into about a factor of two higher cost-efficiency.
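The cost-efficiency figures follow directly from the tabulated performance and the hourly on-demand price; a minimal sketch of the calculation, using the \texttt{g4dn.12xl} MEM numbers from Tab.~\ref{tab:gpu_instances}:

```python
# Cost-efficiency (the P/P ratio used throughout): ns of trajectory per dollar,
# i.e. performance in ns/day divided by the cost of one day of runtime (24 h x $/h).
def pp_ratio(ns_per_day, price_per_hour):
    return ns_per_day / (24.0 * price_per_hour)

price = 3.912                       # g4dn.12xl on-demand price in $/h
single = pp_ratio(140.607, price)   # one MEM simulation spread over all 4 GPUs
multi = pp_ratio(321.835, price)    # aggregate of 4 concurrent MEM simulations
print(round(multi / single, 1))     # 2.3, i.e. "about a factor of two"
```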
\begin{table}
\caption{{\bf GROMACS\xspace 2020 performance on individual instances with GPUs.}
As Tab.~\ref{tab:numbers2020}, but on instances with up to eight GPUs.
PME long-range interactions were offloaded to a GPU in all cases, except \mbox{$^\star$}, where they were evaluated on the CPU.
}
\label{tab:gpu_instances}
\begin{center}
\scriptsize
\STautoround*{2}
\begin{spreadtab}{{tabular}{lclccMLON}}
\toprule
@instance &@ vCPUs &@ GPU(s) & @price &@ ranks \mbox{$\times$}\xspace & @{MEM} & @{RIB} & @{MEM} & @{RIB} \\
@type &@ &@ & @(\$/h) &@ \hspace{3mm} threads & @{(\nicefrac{ns}{d}\xspace)} & @{(\nicefrac{ns}{d}\xspace)} & @{(ns/\$)} & @{(ns/\$/10)} \\ \midrule
@\texttt{p3.2xl} &@ 8 &@ V100 & 3.06 &@ 1 \mbox{$\times$}\xspace 8 & 101.287 & 6.4425 & \STcopy{v99}{[-2,0]/(24*[-4,0])} & \STcopy{v99}{10*[-2,0]/(24*[-5,0])}\\
@\texttt{p3.8xl} &@ 32 &@ V100\mbox{$\times$}\xspace{4} & 12.24 &@ 4 \mbox{$\times$}\xspace 8 & 142.5475 & 6.824 & & \\
@\texttt{p3.16xl} &@ 64 &@ V100\mbox{$\times$}\xspace{8} & 24.48 &@ 8 \mbox{$\times$}\xspace 8 & 202.636 & 13.1395 & & \\
@\texttt{p3.24xl} &@ 96 &@ V100\mbox{$\times$}\xspace{8} & 31.218 &@ 8 \mbox{$\times$}\xspace 12 & 227.323 & 16.9815 & & \\ \midrule
@\texttt{p4d.24xl}/8 &@ 12 &@ A100 & 32.7726/8&@ 1 \mbox{$\times$}\xspace 12 & 130.454375 & 7.241375 & & \\
@\texttt{p4d.24xl} &@ 96 &@ A100\mbox{$\times$}\xspace{8} & 32.7726 &@ 8 \mbox{$\times$}\xspace 12 & 227.2065 & 15.0935 & & \\ \midrule
@\texttt{g3s.xl} &@ 4 &@ M60 & 0.75 &@ 1 \mbox{$\times$}\xspace 4 & 37.4390 & 2.089 & & \\
@\texttt{g3.4xl} &@ 16 &@ M60 & 1.14 &@ 1 \mbox{$\times$}\xspace 16 & 51.1510 & 2.6355 & & \\ \midrule
@\texttt{g4dn.xl} &@ 4 &@ T4 & 0.526 &@ 1 \mbox{$\times$}\xspace 4 & 57.8825 & 3.173 & & \\
@\texttt{g4dn.2xl} &@ 8 &@ T4 & 0.752 &@ 1 \mbox{$\times$}\xspace 8 & 76.4985 & 4.0275 & & \\
@\texttt{g4dn.12xl}/4 &@ 12 &@ T4 & 3.912/4 &@ 1 \mbox{$\times$}\xspace 12 & 321.835/4 & 16.377/4 & & \\
@\texttt{g4dn.4xl} &@ 16 &@ T4 & 1.204 &@ 1 \mbox{$\times$}\xspace 16 & 91.985 & 4.6315 & & \\
@\texttt{g4dn.8xl} &@ 32 &@ T4 & 2.176 &@ 1 \mbox{$\times$}\xspace 32\mbox{$^\star$} & 100.085 & 6.339 & & \\
@\texttt{g4dn.16xl} &@ 64 &@ T4 & 4.352 &@ 1 \mbox{$\times$}\xspace 32\mbox{$^\star$} / 16 \mbox{$\times$}\xspace 4\mbox{$^\star$} & 109.5595 & 8.4725 & & \\
@\texttt{g4dn.12xl} &@ 48 &@ T4\mbox{$\times$}\xspace{4} & 3.912 &@ 4 \mbox{$\times$}\xspace 12 & 140.607 & 9.0415 & & \\
\bottomrule
\end{spreadtab}
\end{center}
\end{table}
\subsubsection{Scaling across multiple instances}
For selected instance types, we also determined how much performance can be gained on multiple instances.
For this we have selected instance types that
(i) exhibit above average P/P ratios for the single-instance benchmarks, and
(ii) have a network speed of at least 50 Gigabit/s.
\begin{table}
\caption{{\bf Scaling across multiple CPU instances.}
GROMACS\xspace 2020 performances for MEM and RIB over multiple \texttt{c5n.18xl} instances.
The third column lists the optimal decomposition into MPI ranks and OpenMP threads,
the fourth column lists the optimal number of separate PME ranks,
left entry for MEM, right entry for RIB if they differ.
}
\label{tab:c5n_scaling2020}
\begin{center}
\STautoround*{3}
\begin{spreadtab}{{tabular}{
S[table-format=2.0,round-mode=places,round-precision=0]
S[table-format=2.0,round-mode=places,round-precision=0]
c
c
S[table-format=4.1,round-mode=places,round-precision=1]
S[table-format=3.2,round-mode=places,round-precision=2]
S[table-format=4.2,round-mode=places,round-precision=2]
S[table-format=3.2,round-mode=places,round-precision=2]
}}
\toprule
@{instan-} &@ total &@ ranks \mbox{$\times$}\xspace &@{PME} & @{MEM} &@{$E_\text{MEM}$} & @{RIB} &@{$E_\text{RIB}$} \\
@{ces} &@ vCPUs &@ \hspace{3mm} threads &@{ranks} & @{(\nicefrac{ns}{d}\xspace)} &@ & @{(\nicefrac{ns}{d}\xspace)} &@ \\ \midrule
1 & 72 &@ 36 \mbox{$\times$}\xspace 2 / 72 \mbox{$\times$}\xspace 1 &@ 0 / 24 & 89.388 & \STcopy{v99}{[-1,0]/([-5,0]*89.388)} & 5.094 & \STcopy{v99}{[-1,0]/([-7,0]*5.094)} \\
2 & 144 &@ 72 \mbox{$\times$}\xspace 2 &@ 24 / 0 & 105.52 & & 9.6605 & \\
4 & 288 &@ 48 \mbox{$\times$}\xspace 6 / 144 \mbox{$\times$}\xspace 2 &@ 16 / 0 & 116.923 & & 17.5395 & \\
8 & 576 &@ 288 \mbox{$\times$}\xspace 2 &@ 96 & 168.8755 & & 35.831 & \\
16 & 1152 &@ 192 \mbox{$\times$}\xspace 3 / 576 \mbox{$\times$}\xspace 2 &@ 64 / 192 & 126.172 & & 55.5490 & \\
32 & 2304 &@ 384 \mbox{$\times$}\xspace 3 / 576 \mbox{$\times$}\xspace 2 &@126 / 192 & 109.353 & & 71.4125 & \\
\bottomrule
\end{spreadtab}
\end{center}
\end{table}
Tables~\ref{tab:c5n_scaling2020}, \ref{tab:g4dn.8xl_scaling2020}, and \ref{tab:g4dn.16xl_scaling2020}
summarize the results for scaling across 1--32 CPU and GPU instances.
For the 81 k atom MEM system, the maximal performance is reached on 8 \texttt{c5n.18xl} instances,
albeit at a parallel efficiency of less than 25\%, whereas
for the \texttt{g4dn}'s, the highest performance is recorded on individual instances.
In contrast, the large RIB system shows a decent scaling behavior.
On \texttt{c5n.18xl}, the single-instance performance of 5 \nicefrac{ns}{d}\xspace can be increased to about
36 \nicefrac{ns}{d}\xspace at a parallel efficiency of 88\% on eight instances.
On 32 instances, with 71 \nicefrac{ns}{d}\xspace, the single-instance performance is increased 14-fold.
Whereas the RIB system continues to scale beyond eight \texttt{c5n.18xl} instances,
the \texttt{g4dn}'s never reach 30 \nicefrac{ns}{d}\xspace.
The difference in scaling efficiency between CPU and GPU instances
is mainly determined by the network speed for the inter-node communication.
As the \texttt{c5n.18xl} instances have a much better interconnect than \texttt{g4dn} (see Tab.~\ref{tab:instances}),
the scaling is more efficient for the CPU nodes.
The \texttt{c5n.18xl} instances, however, never reach the scaling performance of an on-premises dedicated HPC cluster.
There, as shown in Fig.~7 of Ref.~\cite{pall2014}, the same benchmark systems exhibit
peak performances of 303 \nicefrac{ns}{d}\xspace for MEM and 204 \nicefrac{ns}{d}\xspace for RIB.
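The parallel efficiencies $E$ reported in the scaling tables relate the $N$-instance performance to $N$ times the single-instance performance; a minimal sketch, using the RIB numbers for \texttt{c5n.18xl} from Tab.~\ref{tab:c5n_scaling2020}:

```python
# Parallel efficiency on N instances: E = perf(N) / (N * perf(1)),
# with perf measured in ns/day.
def parallel_efficiency(perf_n, n_instances, perf_1):
    return perf_n / (n_instances * perf_1)

# RIB on c5n.18xl: 5.094 ns/day on one instance, 35.831 ns/day on eight
e8 = parallel_efficiency(35.831, 8, 5.094)
print(round(e8, 2))  # 0.88 -- the 88% parallel efficiency quoted above
```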
\begin{table}
\caption{{\bf Scaling across multiple GPU instances.}
As Tab.~\ref{tab:c5n_scaling2020}, but for \texttt{g4dn.8xl} instances with hyperthreading off.}
\label{tab:g4dn.8xl_scaling2020}
\begin{center}
\STautoround*{3}
\begin{spreadtab}{{tabular}{
S[table-format=2.0,round-mode=places,round-precision=0]
S[table-format=2.0,round-mode=places,round-precision=0]
c
S[table-format=4.1,round-mode=places,round-precision=1]
S[table-format=3.2,round-mode=places,round-precision=2]
S[table-format=4.2,round-mode=places,round-precision=2]
S[table-format=3.2,round-mode=places,round-precision=2]
}}
\toprule
@{instan-} &@{total}&@ ranks \mbox{$\times$}\xspace & @{MEM} &@{$E_\text{MEM}$} & @{RIB} &@{$E_\text{RIB}$} \\
@{ces} &@{cores}&@ \hspace{3mm} threads & @{(\nicefrac{ns}{d}\xspace)} &@ & @{(\nicefrac{ns}{d}\xspace)} &@ \\ \midrule
1 & 16 &@ 1 \mbox{$\times$}\xspace 16 / 16 \mbox{$\times$}\xspace 1 & 95.276 & \STcopy{v99}{[-1,0]/([-4,0]*95.276)} & 5.1465 & \STcopy{v99}{[-1,0]/([-6,0]*5.1465)}\\
2 & 32 &@ 4 \mbox{$\times$}\xspace 8 & 64.964 & & 8.485 & \\
4 & 64 &@ 8 \mbox{$\times$}\xspace 8 / 32 \mbox{$\times$}\xspace 2 & 73.0665 & & 15.803 & \\
8 & 128 &@ 32 \mbox{$\times$}\xspace 4 / 64 \mbox{$\times$}\xspace 2 & 63.7325 & & 21.2535 & \\
16 & 256 &@ 32 \mbox{$\times$}\xspace 8 & @ & @ & 25.859 & \\
32 & 512 &@ 32 \mbox{$\times$}\xspace 16 & @ & @ & 22.781 & \\
\bottomrule
\end{spreadtab}
\end{center}
\end{table}
\begin{table}
\caption{{\bf Scaling across multiple GPU instances.}
As Tab.~\ref{tab:c5n_scaling2020}, but for \texttt{g4dn.16xl} instances with hyperthreading off.}
\label{tab:g4dn.16xl_scaling2020}
\begin{center}
\STautoround*{3}
\begin{spreadtab}{{tabular}{
S[table-format=2.0,round-mode=places,round-precision=0]
S[table-format=2.0,round-mode=places,round-precision=0]
c
S[table-format=4.1,round-mode=places,round-precision=1]
S[table-format=3.2,round-mode=places,round-precision=2]
S[table-format=4.2,round-mode=places,round-precision=2]
S[table-format=3.2,round-mode=places,round-precision=2]
}}
\toprule
@{instan-} &@{total}&@ ranks \mbox{$\times$}\xspace & @{MEM} &@{$E_\text{MEM}$} & @{RIB} &@{$E_\text{RIB}$} \\
@{ces} &@{cores}&@ \hspace{3mm} threads & @{(\nicefrac{ns}{d}\xspace)} &@ & @{(\nicefrac{ns}{d}\xspace)} &@ \\ \midrule
1 & 32 &@ 1 \mbox{$\times$}\xspace 32 / 8 \mbox{$\times$}\xspace 4 & 98.0550 & \STcopy{v99}{[-1,0]/([-4,0]*98.055)} & 7.481 & \STcopy{v99}{[-1,0]/([-6,0]*7.481)} \\
2 & 64 &@ 8 \mbox{$\times$}\xspace 8 / 32 \mbox{$\times$}\xspace 2 & 75.974 & & 13.2685 & \\
4 & 128 &@ 8 \mbox{$\times$}\xspace 16 / 32 \mbox{$\times$}\xspace 4 & 73.187 & & 19.4985 & \\
8 & 256 &@ 32 \mbox{$\times$}\xspace 8 & @ & @ & 24.3895 & \\
16 & 512 &@ 64 \mbox{$\times$}\xspace 8 & @ & @ & 28.3745 & \\
32 & 1024 &@ 32 \mbox{$\times$}\xspace 32 & @ & @ & 21.4735 & \\
\bottomrule
\end{spreadtab}
\end{center}
\end{table}
Fig.~\ref{fig:scalingAWS} summarizes all benchmark results and interrelates them to uncover
which instances are optimal in terms of both performance and cost-efficiency.
The symbols show benchmark performances (at optimal parallelization settings) on various instances
as a function of the on-demand hourly instance costs.
The inclined gray lines are isolines of equal P/P ratio with the most
cost-efficient configurations towards the upper left.
Moving from one isoline to the neighboring one towards the top left
improves the P/P ratio by a factor of two.
Symbols connected by a line denote the strong scaling behavior across multiple identical instances,
with a single instance at the left end of the curve, followed by 2, 4, 8, and so on, instances.
A scaling curve that runs parallel to the cost-efficiency isolines
would indicate optimal scaling, i.e.\ a parallel efficiency of $E = 1$.
Fig.~\ref{fig:scalingAWS} allows a series of observations.
(i) In terms of cost-efficiency, the optimal instances for GROMACS\xspace are the single-GPU \texttt{g4dn}'s
with 4, 8, and 16 vCPUs (green symbols towards the left),
whose P/P ratio is at least a factor of two higher than most of the other instance types.
(ii) Perhaps unsurprisingly,
the highest MEM and RIB performances on individual instances
are reached with \texttt{p3} and \texttt{p4d} instances hosting eight GPUs connected via PCI Express (red and purple symbols).
(iii) For larger systems (RIB and PEP), the highest absolute performances are reached
by scaling across multiple \texttt{c6i.32xl} or \texttt{c5n.18xl} instances,
with the \texttt{c6i}'s showing the best cost-efficiency.
(iv) The performance of small systems like MEM cannot be significantly improved by scaling across many instances.
(v) Choosing one of the many possible instances for an MD project
essentially boils down to pinning down a point along the connecting line
between best cost-efficiency and highest performance,
trading off HTC and HPC computing.
Let's follow this special line for the example of the RIB benchmark.
It starts at optimal cost-efficiency with the single-GPU \texttt{g4dn.xl} instances (left, green stars).
For higher performances, one would move to \texttt{g4dn.2xl} and then \texttt{g4dn.4xl} instances,
however at the cost of losing 20\%--35\% in P/P ratio.
For higher performances (again, at reduced cost-efficiency), the scientist would then
switch to \texttt{g4dn.16xl}, then \texttt{g4dn.12xl} with 4 GPUs,
and then continue with scaling over \texttt{c6i} instances (magenta) which exhibit the best P/P ratios
towards growing performances.
There is generally no reason to pick instances within the area below the described line,
as there one simply gets lower GROMACS\xspace performance for the same price.
E.g., for the price of a \texttt{g3s} instance (violet, bottom left),
one would instead get a \texttt{g4dn.2xl} that exhibits two times the RIB performance.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{pictures/scalingAWS.pdf}
\caption{{\bf Performance, costs, and cost-efficiency for GROMACS\xspace simulations on various AWS instance types.}
GROMACS\xspace 2020 performance as a function of the on-demand instance costs (\$/h)
for the MEM (circles), RIB (stars), and PEP (triangles) benchmark on CPU (open symbols) and GPU instances (filled symbols).
Separate symbols indicate single-instances,
connected symbols show the parallel scaling across multiple instances.}
\label{fig:scalingAWS}
\end{center}
\end{figure}
\subsection{High throughput ligand screening in the global cloud}
\subsubsection{Study 1: Focus on time-to-solution}
Our first screening study consisted of 19,872 Batch jobs to compute 1,656 free energy differences
(200 $\mu$s of trajectory in total) for the ensemble shown in Tab.~\ref{tab:ba1}.
With this study we evaluate the suitability of cloud computing for large scale computational drug design scans
that have been traditionally performed on on-premises clusters where
such a scan would typically take several weeks to complete.
As we aimed to minimize the time-to-solution, we selected, from all available instance types,
only those that would need no more than nine hours for any job.
The \texttt{g4dn.2xl}, \texttt{g4dn.4xl}, and \texttt{g4dn.8xl} instances meet that criterion
at the lowest costs.
However, relying on just three instance types is risky if one wants to minimize time-to-solution.
\texttt{g4dn} instances are not very abundant in the AWS cloud and if they happen to be in high demand
at the time of our screening study, we might not get many of them.
Therefore, we added other instance types that meet our nine hour criterion, but that
are almost always available:
\texttt{c5.24xl} and \texttt{c5.18xl} as well as the similar \texttt{c5d.24xl} and \texttt{c5d.18xl}.
We ran the small ligand-in-water systems on eight \texttt{c5} vCPUs,
where they would complete in about five hours at a price of less than 2~\$
and high cost-efficiency (see c-Met ligand column in Tab.~\ref{tab:equil_CPU}).
To draw from a larger pool of instances we allowed for \texttt{c5} instances of various size and
just requested that they offer at least eight vCPUs (see also Fig.~\ref{fig:instances}).
Larger instances accept multiple jobs until they do not have enough free vCPUs left.
\begin{figure}[tb]
\includegraphics[width=\textwidth]{./pictures/instance_types.png}\\
\caption{\textbf{Usage of global compute resources for the first ligand screening study aimed to optimize time-to-solution.}
Colors show the various instances that were in use globally during the three days of the ensemble run.}
\label{fig:instances}
\end{figure}
We submitted the first 9,936 jobs (the large protein systems) in a first wave on a Monday at around 5 p.m.,
and the second 9,936 jobs (the small ligand systems) in a second wave 24 hours later.
Fig.~\ref{fig:instances} shows the number of instances that were in use during our first screening study color-coded by instance type.
Fig.~\ref{fig:bigrun} provides further details of the run.
As can be seen from the top and middle panels,
we acquired about 140,000 vCPUs within the first 30 minutes,
and about 3,000 GPUs within the first two hours of the run,
distributed globally over six regions.
\begin{figure}
\begin{center}
\includegraphics[width=0.85\textwidth]{./pictures/run1_fig10.png}\\
\caption{\textbf{Usage of global compute resources for the first ligand screening study aimed to optimize time-to-solution.}
Compute resources (split into regions) allocated for the ensemble run over time.
Top: vCPUs, middle: GPU instances, bottom: number of instances.}
\label{fig:bigrun}
\end{center}
\end{figure}
Each wave finished in about one day,
and we speculate that the whole set of 19,872 jobs would also have finished within 24 hours if submitted simultaneously.
As GPU instance availability is essentially independent of the CPU instance availability,
the GPU jobs from the first wave (greens in Fig.~\ref{fig:instances}) can completely overlap with the
CPU jobs of the second wave.
At the same time, after the peak of the second wave (Tue 17 h -- 23 h), there
should be more than enough \texttt{c5} Spot capacity to accommodate the CPU jobs of the first wave.
Unfortunately, a glitch in the initial version of our setup prevented finished instances from terminating properly.
For that reason, the actual costs of the first run summed up to 56 \$ per FE difference,
although, when counting productive time only, they reduce to 40 \$ per FE difference.
This is in the expected range (see Fig.~\ref{fig:optimalConfig}), given the mix of instances that were in use.
The overall costs almost entirely resulted from the hourly charges of the EC2 compute instances,
whereas data transfer to and from the S3 buckets accounted for less than 0.5 \% of the whole costs.
In addition to the performance and price evaluation, we have also validated the correctness of the calculations
against the previous computations.~\cite{gapsys2022raven}
We used Student's t-test to compare free energy values calculated in the current work to those reported previously,
confirming that the results showed no statistically significant differences.
\subsubsection{Study 2: Focus on cost-efficiency}
Our second screening study aimed to further reduce the price tag by incorporating only the most cost-efficient instance types for the run.
The second study used a slightly different and smaller data set (see Tab.~\ref{tab:ba2}) that required about 6,984 jobs to be run
for 582 FE differences, or 70 $\mu$s of trajectory in total.
The setup was as in the first study, however we further excluded instances with low cost-efficiency:
most notably, we ran all the protein systems on cost-efficient GPU instances.
The acquired resources over time for the second study are shown in Fig.~\ref{fig:run2}.
\begin{figure}[tbp]
\includegraphics[width=.9\textwidth]{./pictures/run2_fig11.png}\\
\caption{\textbf{Usage of global compute resources for the second ligand screening study aimed to optimize cost-efficiency.}
Top: Allocated instance types over time,
bottom: GPU instances allocated in the different regions.}
\label{fig:run2}
\end{figure}
The vCPU usage peaked at slightly above 35,000 vCPUs at two hours into the second run (not shown),
with on average 500 GPU instances in use over 26 hours.
About six hours after the submission of the ensemble, the small ligand-in-water systems were finished
(blue and orange areas in Fig.~\ref{fig:run2} top).
As our benchmarks on \texttt{c5.2xl} estimated a runtime of about five hours per system,
we conclude that there were enough \texttt{c5} Spot instances available to run each of the 3,492 ligand jobs concurrently.
The GPU instances, however, ran over a time span of about 30 hours altogether,
as apparently not enough \texttt{g4dn} Spot capacity was available to run all 3,492 GPU jobs concurrently.
From the lower panel of Fig.~\ref{fig:run2} we see that at the time of submission,
there was only \texttt{g4dn} capacity available in four regions,
whereas the Ireland (blue) and North Virginia (yellow) regions provided \texttt{g4dn} instances only after several hours into the run.
The large differences across regions underline that a multi-region approach is necessary
for decent job throughput when limiting oneself to only a few instance types.
The resulting costs of our second study were about 16~\$ per FE difference,
thus only about a third of the cost of the first study, and in line
with what is expected from the benchmarks on \texttt{g4dn} instances (Fig.~\ref{fig:optimalConfig}).
Both high throughput ligand screening studies illustrate the flexibility of cloud computing for MD-based investigations:
AWS can be used to scale up the computations to the extent of a large HPC facility, but can also be used in a cost-efficient manner akin
to a smaller in-house cluster.
When aiming to minimize the time-to-solution, the 19,872 calculation jobs were finished in $\approx$2 days.
This compares well to the timing in the recent report, where the Tier 2 Max Planck Supercomputer Raven (interim version,
480 Intel Xeon Cascade Lake-AP nodes with 96 cores, 192 threads) performed calculations of the same dataset in $\approx$3 days.\cite{gapsys2022raven}
The cost-efficient usage of the cloud resources allowed us to reach a cost of 16~\$ per free energy calculation.
Cost-efficiency could be further optimized by also running the ligand in water simulations on the \texttt{g4dn} GPU instances
(instead of using \texttt{c5} instances),
which would result in a cost of 14 \$ per free energy difference, although \texttt{g4dn} capacity may then
not be sufficient to run all simulations at once.
On a GROMACS\xspace-optimized in-house cluster of Intel E5-2630v4 10-core nodes with NVIDIA RTX 2080 GPUs,
this cost would be $\approx$8.5~\$,
in agreement with the estimates of relative costs for a compute node analyzed in Fig.~\ref{fig:compareTCO}.
\section{General background}
\subsection{Cloud computing}
The large cloud providers offer a wide range of instance types, with and without GPUs,
optionally with extra memory or HPC network, targeted towards different application areas.
The compute unit that is rented out to customers is called an \emph{instance}.
It may be a whole node with multiple cores and GPU(s),
or just a part of a node, even just a single core.
Large nodes that are rented out as several smaller instances
are shared between different customers.
However, each customer is restricted to her instance (her part of the node) exclusively,
and her processes cannot spill over into the compute cores, memory, or network bandwidth allocated to other instances on the node.
AWS instances come with a certain number of virtual CPUs (vCPUs)
which translate to hardware threads.
Renting two vCPUs on a modern AMD or Intel-based instance
is equivalent to getting one physical core exclusively on that machine.
Although the exact location of allocated compute instances remains opaque to the user,
the \emph{region} she chooses encompasses a group of geographically close data centers.
Costs usually vary by region, depending on supply and demand, as well as energy costs, and
specific services or cutting edge processor features may only be available in some of the regions.
For the case of AWS, each region consists of multiple, isolated, and physically separate \emph{availability zones} (AZs) within a geographic area.
An AZ is a group of one or more datacenters
with independent redundant power supply and network connectivity.
In 2021, AWS had 85 AZs in 26 regions.
There are different payment models that can be chosen from.
\emph{On-demand} payment is most flexible,
as one can rent an instance at any time and give it back when it is not needed any more.
One only pays for the time that the instance is needed.
One can also get \emph{reserved instances} at a 50--70\% discount if one books these instances for one to three years,
but then one has to pay regardless of whether one can make use of them.
\emph{Preemptible} or \emph{Spot} instances tap into the pool of currently unused compute capacity
and are available at discount rates of up to 90\% compared to on-demand,
though pricing varies across AZs and over time.
However, a Spot instance can be claimed back at any time by Amazon EC2 with a two-minute warning.
\subsection{Using hardware efficiently with GROMACS\xspace}
\label{sec:gromacs}
Key to optimal simulation performance is understanding how GROMACS\xspace makes use of the available hardware.
GROMACS\xspace combines several parallelization techniques,
among them MPI and OpenMP parallelism, GPU offloading,
and separate ranks to evaluate long-range electrostatics.
With domain decomposition (DD),
the simulation system is divided into $n_x \times n_y \times n_z$ domains,
each of which is operated on by one MPI rank.\cite{gromacs4}
During the simulation, dynamic load balancing (DLB) adjusts the size of the domains
such that any uneven computational load between the MPI ranks is minimized.
Each MPI rank can further have multiple OpenMP threads.
Best performance is usually achieved when the product of MPI ranks and OpenMP threads
equals the number of cores (or hardware threads) on a node or instance,
and when all threads are properly pinned to cores.
Though leaving some cores idle may in rare cases make sense,
oversubscription will lead to significant performance degradation.
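As a toy illustration of this rule of thumb, the following sketch enumerates the rank/thread decompositions that exactly fill a given vCPU count; the 72 vCPUs of a \texttt{c5n.18xl} are used as the example:

```python
# List all (MPI ranks, OpenMP threads) pairs whose product exactly fills
# a given number of cores/hardware threads.
def decompositions(n_vcpus):
    return [(ranks, n_vcpus // ranks)
            for ranks in range(1, n_vcpus + 1)
            if n_vcpus % ranks == 0]

# For the 72 vCPUs of a c5n.18xl this includes 36 x 2 and 72 x 1,
# the optimal settings found in the single-instance benchmarks.
d = decompositions(72)
print((36, 2) in d and (72, 1) in d)  # True
```

Which of these candidates is fastest still has to be determined by benchmarking, as it depends on the system size and on whether GPUs are used.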
When distributing a simulation system over an increasing number of MPI ranks in a strong scaling scenario, at some point the time spent for communication between the ranks limits further speedup.
Usually the bottleneck is in the long-range contribution to the electrostatic forces
which are calculated with the Particle Mesh Ewald (PME) method.\cite{Essmann1995}
Parallel PME requires all-to-all communication between the participating ranks,
leading to $r^2$ MPI messages being sent on $r$ MPI ranks.\cite{gromacs4}
This communication bottleneck can be alleviated by assigning a subset of MPI ranks
to exclusively evaluate the long range PME part.
As typically only a quarter to a third of all ranks need to be allocated for long-range electrostatics,
the communication bottleneck is greatly reduced, yielding better performance and scalability.
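A back-of-the-envelope sketch shows why dedicated PME ranks help, assuming the $r^2$ message scaling quoted above, a quarter of the ranks doing PME, and $r = 96$ as an example rank count (the additional point-to-point PP--PME exchange is not counted here):

```python
# All-to-all communication among r ranks requires on the order of r^2 messages.
def all_to_all_messages(r):
    return r * r

r = 96                                   # total MPI ranks (assumed example)
full = all_to_all_messages(r)            # PME spread over all ranks
quarter = all_to_all_messages(r // 4)    # PME restricted to a quarter of the ranks
print(full // quarter)                   # 16: the PME all-to-all shrinks 16-fold
```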
GROMACS\xspace can offload various types of computationally demanding interactions onto the GPU.\cite{pall2014,abraham2015,kutznerMoreBang2018}
One of the largest performance benefits stems from offloading the short-range part of the nonbonded interactions
(Coulomb and van der Waals).
In parallel, each MPI rank can offload its local domain's interactions to a GPU.
The PME long range part can be offloaded as well,
however, this computation still cannot be distributed onto multiple GPUs.
Additionally, bonded interactions and, for suitable parameter settings, the integration and constraint calculations can be offloaded.
The relative GPU to CPU compute power on a node determines how many interaction types
can be offloaded for optimal performance.
Ideally, CPU and GPU finish their force calculation at about the same time in the MD time step
so that no time is lost waiting.
Earlier studies showed that both the GROMACS\xspace performance as well as the performance to price (P/P) ratio,
i.e.\ how much MD trajectory is produced per invested \euro\xspace,
can vastly differ for different hardware.\cite{kutznerBestBang2015,kutznerMoreBang2018}
Nodes with GPUs provide the highest single-node GROMACS\xspace performance.
At the same time, P/P skyrockets when consumer GPUs are used instead of
professional GPUs (e.g. NVIDIA GeForce RTX instead of Tesla GPUs).
The P/P ratio of consumer GPU nodes is typically at least a factor of three higher
than that of CPU nodes or nodes with professional GPUs.
Pronounced variations in GROMACS\xspace performance and cost-efficiency are therefore expected
between the different instance types on AWS.
Benchmarks allow picking instance types that are optimal for MD simulation.
\subsection{Obtaining relative binding free energies from MD simulations}
To evaluate relative binding affinities in a chemical library of interest,
ligands are connected into a network (graph) and a number of pair-wise calculations is performed,
eventually allowing the molecules to be sorted according to their binding free energy.
It is a usual practice to repeat calculations several times for each ligand pair to obtain reliable uncertainty estimates.\cite{knapp2018,gapsys2020elife,bhati2018uncertainty}
Various methods for the alchemical calculations have been developed.
For example, the commercially available Schr{\"o}dinger software uses a free energy perturbation based approach,\cite{Wang2019}
whereas the open source workflow used here\cite{pmx2015, gapsys2020large} is based on thermodynamic integration (TI)\cite{vanGunsteren1993TI}
using a non-equilibrium transition protocol.\cite{gapsys2015calculation}
Both approaches yield similarly accurate relative binding free energies at similar computational effort.\cite{gapsys2020large}
The non-equilibrium TI approach requires equilibrated ensembles of the physical end states for the solvated protein with ligand,
one for ligand A and one for ligand B, as well as two equilibrated ensembles of ligand A and ligand B in solution.
From the equilibrated ensembles, many short ``fast growth'' TI simulations are spawned
during which ligand A is transformed into ligand B and vice versa using a $\lambda$-dependent Hamiltonian.
\label{sec:introTI}
The free energy difference is then derived from the overlap of the forward (A $\rightarrow$ B)
and reverse (B $\rightarrow$ A) work distributions using estimators based on the Crooks fluctuation theorem.\cite{Crooks1999}
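A self-contained illustration of this last step: the Bennett acceptance ratio (BAR) equation is the maximum-likelihood estimator consistent with the Crooks fluctuation theorem, and the sketch below recovers a free energy difference from synthetic forward and reverse work samples. The Gaussian work model, $k_\text{B}T = 1$, and all concrete numbers are assumptions for the example, not data from this study:

```python
import math, random

def bar_delta_g(w_f, w_r, beta=1.0, lo=-50.0, hi=50.0):
    """Solve the equal-sample-size BAR equation for Delta G by bisection:
       sum_i f(beta*(w_f[i] - dG)) = sum_j f(beta*(w_r[j] + dG)),
       with the Fermi function f(x) = 1/(1 + exp(x))."""
    fermi = lambda x: 1.0 / (1.0 + math.exp(min(x, 700.0)))
    def imbalance(dg):                   # monotonically increasing in dg
        return (sum(fermi(beta * (w - dg)) for w in w_f)
                - sum(fermi(beta * (w + dg)) for w in w_r))
    for _ in range(60):                  # plain bisection
        mid = 0.5 * (lo + hi)
        if imbalance(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Synthetic Gaussian work distributions consistent with Crooks' theorem
# (energies in units of kT): forward mean dG + s^2/2, reverse mean -dG + s^2/2.
random.seed(1)
dg_true, s, n = 5.0, 2.0, 5000
w_f = [random.gauss(dg_true + s * s / 2, s) for _ in range(n)]
w_r = [random.gauss(-dg_true + s * s / 2, s) for _ in range(n)]
print(round(bar_delta_g(w_f, w_r), 1))   # close to the true value of 5.0
```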
\subsection{Cloud-based HPC cluster and software setup}
The benchmark simulations were carried out on
AWS compute clusters in the North Virginia region
set up with the ParallelCluster\cite{parallelCluster} open source cluster management tool.
Each cluster consists of a master instance of the same architecture as the nodes (x86 or ARM).
The master starts up and shuts down the node instances as needed
and operates the queueing system (SLURM).\cite{slurm2003}
For the x86 cluster, we used ParallelCluster v.\ 2.10.0 on a \texttt{c5.2xlarge} master,
for the ARM cluster, we used v.\ 2.9.1 on a \texttt{m6g.medium} master instance.
For brevity, we will from now on refer to \texttt{c5.2xlarge} instances as \texttt{c5.2xl}
and also abbreviate all other \texttt{*xlarge} instances accordingly.
All instances use Amazon Linux 2 as operating system,
for technical specifications of the instances see Tab.~\ref{tab:instances}.
Whereas all instances can communicate via TCP (see last columns in Tab.~\ref{tab:instances} for the network bandwidth),
some of them have an Elastic Fabric Adapter (EFA).
EFA enables HPC scalability across instances through higher throughput than TCP
and lower, more consistent latency.
\begin{table}
\caption{Technical specifications of AWS instances used in this study,
and GROMACS\xspace compilation options.
i = using Intel MPI 2019, t = using GROMACS\xspace' built-in thread-MPI library.
EFA (Elastic Fabric Adapter) signals whether an HPC network is available.}
\label{tab:instances}
\footnotesize
\begin{tabular}{llcclccrc}
\toprule
instance & CPU & HT or & clock & used SIMD & NVIDIA & MPI & \multicolumn{2}{c}{network} \\
type & model & vCPUs & (GHz) & instructions & GPUs & lib & (Gbps) & EFA \\ \midrule
\texttt{c5.24xl} & Intel 8275CL & 96 & 3.0 & AVX\_512 & -- & i & 25 & \\
\texttt{c5.18xl} & Intel 8124M & 72 & 3.0 & AVX\_512 & -- & i & 25 & \\
\texttt{c5n.18xl} & Intel 8124M & 72 & 3.0 & AVX\_512 & -- & i & 100 & \checkmark \\
\texttt{c5.12xl} & Intel 8275CL & 48 & 3.0 & AVX\_512 & -- & i & 12 & \\
\texttt{c5.9xl} & Intel 8124M & 36 & 3.0 & AVX\_512 & -- & i & 10 & \\
\texttt{c5.4xl} & Intel 8275CL & 16 & 3.0 & AVX\_512 & -- & i & $\le$ 10 & \\
\texttt{c5.2xl} & Intel 8275CL & 8 & 3.0 & AVX\_512 & -- & i & $\le$ 10 & \\
\texttt{c5.xl} & Intel 8275CL & 4 & 3.0 & AVX\_512 & -- & i & $\le$ 10 & \\
\texttt{c5.large} & Intel 8124M & 2 & 3.0 & AVX\_512 & -- & i & $\le$ 10 & \\\midrule
\texttt{c5a.24xl} & AMD EPYC 7R32 & 96 & 3.3 & AVX2\_128 & -- & i & 20 & \\
\texttt{c5a.16xl} & AMD EPYC 7R32 & 64 & 3.3 & AVX2\_128 & -- & i & 20 & \\
\texttt{c5a.12xl} & AMD EPYC 7R32 & 48 & 3.3 & AVX2\_128 & -- & i & 12 & \\
\texttt{c5a.8xl} & AMD EPYC 7R32 & 32 & 3.3 & AVX2\_128 & -- & i & 10 & \\
\texttt{c5a.4xl} & AMD EPYC 7R32 & 16 & 3.3 & AVX2\_128 & -- & i & $\le$ 10 & \\
\texttt{c5a.2xl} & AMD EPYC 7R32 & 8 & 3.3 & AVX2\_128 & -- & i & $\le$ 10 & \\
\texttt{c5a.xl} & AMD EPYC 7R32 & 4 & 3.3 & AVX2\_128 & -- & i & $\le$ 10 & \\
\texttt{c5a.large} & AMD EPYC 7R32 & 2 & 3.3 & AVX2\_128 & -- & i & $\le$ 10 & \\\midrule
\texttt{c6g.16xl} & ARM Graviton2 & 64 & 2.3 & NEON\_ASIMD\hspace{-6mm} & -- & t & 25 & \\
\texttt{c6g.12xl} & ARM Graviton2 & 48 & 2.3 & NEON\_ASIMD\hspace{-6mm} & -- & t & 20 & \\
\texttt{c6g.8xl} & ARM Graviton2 & 32 & 2.3 & NEON\_ASIMD\hspace{-6mm} & -- & t & $\le$ 10 & \\
\texttt{c6g.4xl} & ARM Graviton2 & 16 & 2.3 & NEON\_ASIMD\hspace{-6mm} & -- & t & $\le$ 10 & \\
\texttt{c6g.2xl} & ARM Graviton2 & 8 & 2.3 & NEON\_ASIMD\hspace{-6mm} & -- & t & $\le$ 10 & \\
\texttt{c6g.xl} & ARM Graviton2 & 4 & 2.3 & NEON\_ASIMD\hspace{-6mm} & -- & t & $\le$ 10 & \\\midrule
\texttt{c6i.32xl} & Intel 8375C & 128 & 2.9 & AVX\_512 & -- & i & 50 & \checkmark \\
\texttt{m6i.32xl} & Intel 8375C & 128 & 2.9 & AVX\_512 & -- & t & 50 & \checkmark \\\midrule
\texttt{m5n.24xl} & Intel 8259CL & 96 & 2.5 & AVX\_512 & -- & i & 100 & \checkmark \\
\texttt{m5zn.12xl} & Intel 8252C & 48 & 3.8 & AVX\_512 & -- & t & 100 & \checkmark \\
\texttt{m5zn.2xl} & Intel 8252C & 8 & 3.8 & AVX\_512 & -- & t & $\le$ 25 & \\\midrule
\texttt{p3.2xl} & Intel E5-2686v4 & 8 & 2.3 & AVX2\_256 & V100 & t & $\le$ 10 & \\
\texttt{p3.8xl} & Intel E5-2686v4 & 32 & 2.3 & AVX2\_256 & V100 \mbox{$\times$}\xspace 4 & t & 10 & \\
\texttt{p3.16xl} & Intel E5-2686v4 & 64 & 2.3 & AVX2\_256 & V100 \mbox{$\times$}\xspace 8 & t & 25 & \\
\texttt{p3dn.24xl} & Intel 8175M & 96 & 2.5 & AVX2\_256 & V100 \mbox{$\times$}\xspace 8 & t & 100 & \checkmark \\
\texttt{p4d.24xl} & Intel 8275CL & 96 & 3.0 & AVX2\_256 & A100 \mbox{$\times$}\xspace 8 & i & 400 & \checkmark \\
\texttt{g3s.xl} & Intel E5-2686v4 & 4 & 2.3 & AVX2\_256 & M60 & i & 10 & \\
\texttt{g3.4xl} & Intel E5-2686v4 & 16 & 2.3 & AVX2\_256 & M60 & i & $\le$ 10 & \\
\texttt{g4dn.xl} & Intel 8259CL & 4 & 2.5 & AVX\_512 & T4 & i & $\le$ 10 & \\
\texttt{g4dn.2xl} & Intel 8259CL & 8 & 2.5 & AVX\_512 & T4 & i & $\le$ 25 & \\
\texttt{g4dn.4xl} & Intel 8259CL & 16 & 2.5 & AVX\_512 & T4 & i & $\le$ 10 & \\
\texttt{g4dn.8xl} & Intel 8259CL & 32 & 2.5 & AVX\_512 & T4 & i & 50 & \\
\texttt{g4dn.12xl} & Intel 8259CL & 48 & 2.5 & AVX\_512 & T4 & i & 50 & \\
\texttt{g4dn.16xl} & Intel 8259CL & 64 & 2.5 & AVX\_512 & T4 & i & 50 & \\
\texttt{g4dn.12xl} & Intel 8259CL & 48 & 2.5 & AVX\_512 & T4 \mbox{$\times$}\xspace 4 & i & 50 & \\
\bottomrule
\end{tabular}
\end{table}
Different versions of GROMACS\xspace (2020.2 and 2021.1, with and without MPI)
were installed with the Spack\cite{spack} 0.15.4 package manager.
GROMACS\xspace was built in mixed precision with GCC 7.3.1, FFTW 3.3.8, hwloc 1.11, and either
Intel MPI 2019 or its built-in thread-MPI library (as listed in Tab.~\ref{tab:instances}).
GPU versions used CUDA 10.2 on \texttt{g} instances and CUDA 11.1 on \texttt{p} instances.
Benchmarks on \texttt{m6i.32xl} instances were done using ICC 2021.2 and Intel MKL.
The multi-instance scaling benchmarks on \texttt{m5n.24xl} and \texttt{m5zn.12xl} instances
used a GROMACS\xspace executable built with Intel MPI + ICC 2021.2 and Intel MKL.
A workshop to reproduce a (slightly updated) setup is available on the web,\cite{gromacsPcluster2021}
whereas general advice on how to use AWS services can be found in this book.\cite{wittig2018amazon}
\subsection{Description of the MD benchmark systems}
To determine the GROMACS\xspace performance on various instance types
we used seven simulation systems (Tab.~\ref{tab:systems}).
MEM, RIB, and PEP are typical MD systems differing in size and composition,
where no special functionality like external forces or free energy (FE) is required.
MEM is an aquaporin tetramer embedded in a lipid membrane surrounded by water and ions
in a simulation box of $10.8 \times 10.2 \times 9.6~\mbox{nm}^3$ size.\cite{escidoc:599912}
RIB contains an \emph{E. coli} ribosome in a box of size (31.2 nm)$^3$ with water and ions.\cite{escidoc:1854600}
The (50 nm)$^3$ large PEP system was used to study peptide aggregation;\cite{Matthes2012}
it contains 250 steric zipper peptides in solution.
MEM, RIB, and PEP were used in previous performance studies,\cite{kutznerBestBang2015, kutznerMoreBang2018, Kutzner2014}
allowing a comparison of cloud instances with a variety of other, already benchmarked hardware.
\begin{table}
\begin{center}
\caption{{\bf Benchmark systems.}
Specifications of the MD input systems that are used for benchmarks in this study.
FE column lists the number of perturbed atoms for this benchmark
(note that this number will vary for different ligands considered in the physical end states),
$\Delta t$ is integration time step, $r_c$ cutoff radius, grid sp. the spacing of the PME grid.
Benchmark input \texttt{.tpr} files can be downloaded from \url{https://www.mpinat.mpg.de/grubmueller/bench}}
\label{tab:systems}
\small
\begin{tabular}{lrcccr} \toprule
benchmark & \# of & $\Delta t$ & $r_c$ & grid sp.& \# of FE\\
acronym & atoms & (fs) & (nm) & (nm) & atoms \\ \midrule
PEP\cite{Kutzner2014} &12,495,503 & 2 & 1.2 & 0.160 & 0 \\
RIB\cite{escidoc:1854600} & 2,136,412 & 4 & 1.0 & 0.135 & 0 \\
MEM\cite{escidoc:599912} & 81,743 & 2 & 1.0 & 0.12 & 0 \\ \midrule
SHP-2 protein + ligand & 107,330 & 2 & 1.1 & 0.12 & 53 \\
c-Met protein + ligand & 67,291 & 2 & 1.1 & 0.12 & 61 \\
HIF-2$\alpha$ protein + ligand & 35,546 & 2 & 1.1 & 0.12 & 35 \\
c-Met ligand in water & 6,443 & 2 & 1.1 & 0.12 & 61 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
c-Met, HIF-2$\alpha$, and SHP-2 are representative systems from the large binding affinity ensemble assembled by Schindler et al.~\cite{schindler2020}
These systems run special FE kernels
for all $\lambda$-dependent interactions, i.e.\ those involving a transition between atomic properties.
As the FE kernels are slower than the normal kernels,
and due to the larger cutoff, the finer PME grid, and the need to calculate two PME grids (one for each of the physical states),
an FE simulation will be slower than a plain MD system even at equal atom count.
We therefore carried out separate benchmarks for the FE systems,
chosen such that predicting the performance of all ensemble members listed in Tabs.~\ref{tab:ba1}--\ref{tab:ba2}
is easily possible: A small, medium and large protein plus ligand system to cover the whole
range of sizes for the protein systems (35~k -- 110~k atoms) and one ligand in water system
representative of all 9,936 ligand in water systems.
\begin{table}[tb]
\caption{{\bf Systems considered for the first binding affinity study.}
For each of eight considered protein-ligand complexes (from the study~\cite{schindler2020}) two sets of simulations are performed:
\emph{protein+ligand} for the solvated protein-ligand complex and \emph{ligand} for the solvated ligand alone.
The transformation of one ligand A into another ligand B is referred to as an \emph{edge}.
As we probe three independent replicas for each system in forward and backward simulation direction,
and three small molecule force fields (GAFF~\cite{wang2004} v2.11, CGenFF~\cite{vanommeslaeghe2010cgenff,yesselman2012} v3.0.1 and OpenFF~\cite{qiu2020} v2.0.0),
the total number of jobs is $3 \times 2 \times 3 = 18 \times$ the number of edges for the \emph{protein+ligand}
plus an equal number for the \emph{ligand} systems.
}
\label{tab:ba1}
\begin{center}
\begin{tabular}{lrrrr}
\toprule
& \multicolumn{2}{c}{size (atoms)} & \# of & \# of \\
system & protein+ligand & ligand & edges & jobs \\ \midrule
CDK8 & 109,807 & 5,789 & 54 & 972 \\
SHP-2 & 107,330 & 6,231 & 56 & 1,008 \\
PFKFB3 & 96,049 & 6,570 & 67 & 1,206 \\
Eg5 & 79,653 & 6,116 & 65 & 1,170 \\
c-Met & 67,291 & 6,443 & 57 & 1,026 \\
SYK & 66,184 & 5,963 & 101 & 1,818 \\
TNKS2 & 52,251 & 6,012 & 60 & 1,080 \\
HIF-2$\alpha$ & 35,546 & 4,959 & 92 & 1,656 \\ \midrule
Total & & & & 2$\times$ 9,936 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
In total, $2 \times 9,936 = 19,872$ independent jobs were run for the binding affinity study (Tab.~\ref{tab:ba1})
by which 1,656 free energy differences ($\Delta\Delta G$ values) were determined.
Each job first simulated six nanoseconds at equilibrium (for the starting state, i.e.\ A or B),
followed by 80 non-equilibrium transitions from the start to the end state (A $\rightarrow$ B, or B $\rightarrow$ A),
as mentioned in section \ref{sec:introTI}.
The 80 individual transitions were started from different, equidistant, positions of the equilibrium trajectory
and were each 50 picoseconds long.
In total 10 nanoseconds of trajectory were generated per job.
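As a sanity check, the per-job trajectory length follows directly from the quantities above (all values taken from the text):

```python
# Per-job simulated trajectory length for the binding affinity study.
equilibrium_ns = 6.0     # 6 ns equilibrium simulation per job
n_transitions = 80       # non-equilibrium transitions per job
transition_ps = 50.0     # length of each transition in ps

total_ns = equilibrium_ns + n_transitions * transition_ps / 1000.0
print(total_ns)          # 10.0 ns of trajectory per job

n_jobs = 2 * 9936        # independent jobs in the whole study
print(n_jobs * total_ns) # 198720.0 ns of trajectory in total
```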
\begin{table}[tb]
\caption{{\bf Systems considered for the second binding affinity study.}
Same as in Tab.~\ref{tab:ba1}, but considering 14 protein-ligand complexes in one MD force field (OpenFF v2.0.0).
The systems were collected from public sources for the previous free energy calculation studies~\cite{perez2019predicting,gapsys2020large}.
The total number of jobs is $3 \times 2 = 6 \times$ the number of edges for the \emph{protein+ligand}
plus an equal number for the \emph{ligand} systems.
}
\label{tab:ba2}
\begin{center}
\begin{tabular}{lrrrr}
\toprule
& \multicolumn{2}{c}{size (atoms)} & \# of & \# of \\
system & protein+ligand & ligand & edges & jobs \\ \midrule
CDK2 & 106,910 & 4,993 & 25 & 150 \\
P38 & 80,777 & 6,750 & 56 & 336 \\
ROS1 & 73,957 & 8,434 & 63 & 378 \\
Bace & 73,330 & 5,914 & 58 & 348 \\
JNK1 & 72,959 & 5,956 & 31 & 186 \\
Bace (Hunt) & 72,036 & 5,773 & 60 & 360 \\
Bace (p2) & 71,671 & 6,687 & 26 & 156 \\
PTP1B & 70,020 & 8,753 & 49 & 294 \\
PDE2 & 63,943 & 5,504 & 34 & 204 \\
TYK2 & 62,292 & 5,956 & 24 & 144 \\
PDE10 & 56,616 & 7,655 & 62 & 372 \\
Thrombin & 49,312 & 6,025 & 16 & 96 \\
Galectin & 35,635 & 9,576 & 7 & 42 \\
MCL1 & 32,745 & 5,435 & 71 & 426 \\ \midrule
Total & & & & 2$\times$ 3,492 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\subsection{Benchmarking procedure}
\subsubsection{MEM and RIB plain MD systems}
MEM and RIB benchmarks were run for 20~k steps on single instances and for 40~k -- 50~k steps
when scaling across multiple instances or multiple GPUs using GROMACS\xspace 2020.
Due to effects of load balancing, PME grid versus cutoff scaling and memory allocations (compare section~\ref{sec:gromacs})
the first few thousand steps in a GROMACS\xspace simulation are typically slower than average
and were therefore excluded from the benchmarks,
which are intended to be proxies for the long-term performance.
To make use of all CPU cores of an instance,
the product of ranks $\times$ threads was set to the number of physical cores
or to the number of available hardware threads.
We benchmarked various combinations of ranks $\times$ threads and
additionally checked whether separate PME ranks improve performance.
Pinning of threads to cores was enabled, and no checkpoint files were written during the benchmark runs.
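The benchmarked combinations can be enumerated programmatically; the following sketch lists all factorizations of a core count into ranks $\times$ threads (illustrative only; the additional option of separate PME ranks is not considered here):

```python
# Enumerate all splits N_cores = N_ranks * N_threads for benchmarking.
def rank_thread_splits(n_cores):
    """Return all (ranks, threads) pairs whose product is n_cores."""
    return [(r, n_cores // r) for r in range(1, n_cores + 1)
            if n_cores % r == 0]

# e.g. the 36 physical cores of a c5.9xl instance:
print(rank_thread_splits(36))
# [(1, 36), (2, 18), (3, 12), (4, 9), (6, 6), (9, 4), (12, 3), (18, 2), (36, 1)]
```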
On GPU instances we used one rank per GPU and offloaded all short range nonbonded interactions to the GPU(s).
For improved performance, also the long range PME contribution was offloaded to a GPU,
except for some GPU instances with many cores,
where it turned out to be faster to evaluate the long range PME contribution on the CPU.
For scaling benchmarks across two or more GPU instances,
the long range PME contribution was computed on the CPUs,
as only there can it be parallelized across instances.
The timings (in simulated nanoseconds per day)
reported for MEM and RIB (Tabs.~\ref{tab:numbers2020}--\ref{tab:g4dn.16xl_scaling2020})
are averages over two runs.
The parallel efficiency on $n$ instances $E_n$ reported in Tabs.~\ref{tab:c5n_scaling2020}, \ref{tab:g4dn.8xl_scaling2020}, and \ref{tab:g4dn.16xl_scaling2020}
is computed as the performance $P_n$ on $n$ instances divided by $n$ times the performance on a single instance:
\begin{equation}
E_n = \frac{P_n}{n \cdot P_1}
\end{equation}
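As an illustration of this definition, with hypothetical performance numbers (not taken from our benchmarks):

```python
# Parallel efficiency E_n = P_n / (n * P_1), as defined above.
def parallel_efficiency(perf_n, n_instances, perf_single):
    """Performance on n instances relative to n times single-instance performance."""
    return perf_n / (n_instances * perf_single)

p1 = 20.0  # ns/day on one instance (hypothetical)
p4 = 64.0  # ns/day on four instances (hypothetical)
print(parallel_efficiency(p4, 4, p1))  # 0.8, i.e. 80 % parallel efficiency
```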
The performance to price ratios (ns/\$) in the MEM and RIB tables are calculated from
Amazon EC2 on-demand prices for Linux instances in the US East (N. Virginia) region
(\url{https://aws.amazon.com/ec2/pricing/on-demand/}).
\subsubsection{Free energy systems used for the binding affinity study}
\label{sec:methodsFEbench}
Each job of the binding affinity ensemble run (Tab.~\ref{tab:systems}) consists of two parts,
first, a 6~ns equilibrium simulation,
second, 80 non-equilibrium transitions of 50~ps length each.
The first (equilibration) part was benchmarked as described above for MEM and RIB,
using 10~k total steps with timings from the first half discarded.
In cases where PME grid vs.\ cutoff tuning took more than 5~k time steps,
20~k time steps were used in total.
For the binding affinity runs we did not check whether separate PME ranks improve the performance.
The timings reported in tables \ref{tab:equil_CPU}--\ref{tab:equil_GPU}
resulted from individual runs of the equilibration part.
Here, we ran on individual instances only,
no scaling across multiple instances was attempted.
Though in most cases we tested various combinations
of splitting a given number of total cores $N_\text{c}$ into ranks and threads
$N_\text{c} = N_\text{ranks} \times N_\text{threads}$,
we do not report all results
in tables \ref{tab:equil_CPU}--\ref{tab:equil_GPU}
to keep them reasonably concise.
Instead, we report the consensus combination $N_\text{ranks} \times N_\text{threads}$ that
yielded the best results across the free energy benchmark systems.
The second (transition) part was benchmarked by timing
one of the 50~ps (25~k steps) long transition runs.
No time steps were discarded from the measurements,
as an initial below-average performance will occur in each of the 80 short transition runs
and thus should be included when predicting the performance of the whole transition part.
The total costs per free energy difference (Fig.~\ref{fig:optimalConfig})
have been derived by combining the equilibration and transition phase timings
of the protein-ligand complex and the ligand alone in water.
Six runs were performed per free energy difference for the protein-ligand complex
(3 replicas $\times$ 2 directions) plus additional six for the solvated ligand.
All runs for the solvated ligand were performed on \texttt{c5.2xl} instances.
On-demand (or Spot) prices for AWS instances in the US East (N. Virginia) region as of May 2021 were used.
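This combination can be sketched as follows; all timings and prices in this example are hypothetical placeholders, the actual values being those of the tables referenced above:

```python
# Sketch: cost per free energy difference from equilibration and
# transition timings (all numbers below are hypothetical placeholders).
def job_cost(equil_ns, equil_perf, trans_ns, trans_perf, usd_per_hour):
    """Cost of one job: wall-clock hours per phase times the instance price."""
    hours = 24.0 * (equil_ns / equil_perf + trans_ns / trans_perf)
    return hours * usd_per_hour

# 6 runs (3 replicas x 2 directions) for the protein-ligand complex
# plus 6 runs for the solvated ligand:
cost_complex = 6 * job_cost(6.0, 30.0, 4.0, 25.0, 0.40)
cost_ligand = 6 * job_cost(6.0, 200.0, 4.0, 180.0, 0.34)
print(round(cost_complex + cost_ligand, 2))  # total $ per ddG value
```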
\subsection{Setup of globally distributed compute resources}
\label{sec:hyperbatch}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=\textwidth]{./pictures/BatchWorkflow.pdf}
\caption{{\bf The HyperBatch-based setup distributes all 19,872 GROMACS\xspace jobs globally.}
An illustrative lifetime of a job follows the steps \Circled{1},\ldots \Circled{8}
and is described in Section~\ref{sec:hyperbatch} of the text.}
\label{fig:batchworkflow}
\end{center}
\end{figure}
The allocation of cloud-based compute resources (e.g.\ via ParallelCluster or AWS Batch\cite{AwsBatch})
is normally confined to a specific geographic region (there are currently 26 in AWS).
Whereas stacks of small to medium jobs can be conveniently executed using just a single region,
a global setup is better suited when a substantial amount of core hours is needed:
The pool of available instances is much larger for all regions combined
compared to just a single region. This allows, e.g., to start more instances at the same time,
or to pick only the subset of instances with the highest performance to price ratio.
To benefit from global compute resources, we used AWS HyperBatch
as a means to provide a single entry point for jobs scheduled to AWS Batch
queues across regions.
The technical setup used for the binding affinity study
is sketched in Fig.~\ref{fig:batchworkflow}.
For easier overview, the main compute setup is shown in the middle,
whereas input and output of data is gathered in the left, blue column, and
monitoring functionality about the status of jobs and instances in the right, green column.
In a nutshell, AWS HyperBatch provides cross-regional serverless job scheduling and
resource orchestration using DynamoDB, Lambda functions, Step Functions,
AWS Kinesis Data Streams, the Simple Queue Service (SQS), and the Amazon API Gateway.\cite{wittig2018amazon}
For the binding affinity ensemble,
we used Spot instances because they are much cheaper than on-demand.
The downside of a Spot instance is that it can be terminated at any time,
which can happen if the pool of free Spot instances shrinks over time
and more on-demand capacity is requested in a region.
To minimize instance termination,
we requested a number of instances in each region proportional to the Spot pool size of that region.
We introduced additional flexibility by requesting instances with all possible vCPU counts
and fitting several jobs on them.
A single 96 vCPU \texttt{c5.24xl} instance could then e.g.\ end up running one 48 vCPU job
plus six eight vCPU jobs at the same time.
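A minimal first-fit sketch illustrates this packing (illustrative only; the actual placement is handled by AWS Batch):

```python
# First-fit sketch: place jobs onto one instance by vCPU count.
def fit_jobs(job_vcpus, capacity):
    """Place each job in order if it still fits; return placed jobs and free vCPUs."""
    placed, free = [], capacity
    for v in job_vcpus:
        if v <= free:
            placed.append(v)
            free -= v
    return placed, free

# A 96 vCPU c5.24xl running one 48 vCPU job plus six 8 vCPU jobs:
placed, free = fit_jobs([48, 8, 8, 8, 8, 8, 8, 16], 96)
print(placed, free)  # [48, 8, 8, 8, 8, 8, 8] 0 -- the 16 vCPU job no longer fits
```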
\begin{figure}
\caption{{\bf Example of a Docker file for a GPU image.}
From the Docker files multiple Docker container images are compiled (one for each architecture)
that are loaded from the Amazon ECR by the instances.}
\label{lst:dockergpu}
\begin{lstlisting}[language=Dockerfile]
ARG SRC_IMG=public.ecr.aws/hpc/spack/gromacs/2021.1:cuda-tmpi_linux-amzn2-skylake_avx512-2021-04-29
FROM ${SRC_IMG}
ENV NVIDIA_DRIVER_CAPABILITIES=compute
RUN yum install -y python3-pip jq git
RUN pip3 install --target=/opt/view/lib/python3.8/ --no-warn-script-location --upgrade pip
RUN pip3 install --target=/opt/view/lib/python3.8/ --no-warn-script-location boto boto3 awscli jsonpickle
RUN yum install -y python-pip
RUN pip install awscli
RUN git clone https://github.com/deGrootLab/pmx /usr/local/src/pmx
RUN yum install -y gcc python-devel
RUN cd /usr/local/src/pmx \
&& pip install .
COPY batch_processor.py /usr/local/bin/batch_processor.py
COPY start.sh /usr/local/bin/start.sh
VOLUME /scratch
WORKDIR /scratch
COPY ti_verlet_l0.mdp /opt/common/
COPY ti_verlet_l1.mdp /opt/common/
COPY ff /opt/common/ff
VOLUME /opt/common
# make run-gpu.pl executable before copying in
COPY run-gpu.pl /usr/local/bin/run.pl
COPY run.sh /usr/local/bin/run.sh
## Make sure to add -c as spack won't create the environment correctly
ENTRYPOINT ["/bin/bash","--rcfile","/etc/profile","-l", "/usr/local/bin/start.sh"]
\end{lstlisting}
\end{figure}
To better understand the whole setup, let us look at the encircled digits (red) in Fig.~\ref{fig:batchworkflow}
and follow the lifetime of one of 19,872 jobs from the binding affinity ensemble.
\Circled{1} We submit an example job from a Cloud9 terminal to the HyperBatch entry point.
The job definition file specifies how many vCPUs to allocate, whether to request a GPU,
and which subfolders to use in the S3 input and output buckets for job I/O.
HyperBatch distributes jobs across regions according to
region weights reflecting the compute capacity of the regions, e.g.\ using the weights
(6, 6, 3, 1, 1, 4) for the regions
(\texttt{us-east-1}, \texttt{us-east-2}, \texttt{us-west-2}, \texttt{ap-southeast-1}, \texttt{ap-northeast-2}, \texttt{eu-west-1})
for GPU jobs.
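These weights translate into expected per-region job shares as follows (a sketch using the GPU-job weights quoted above):

```python
# Expected share of GPU jobs per region implied by the region weights.
weights = {"us-east-1": 6, "us-east-2": 6, "us-west-2": 3,
           "ap-southeast-1": 1, "ap-northeast-2": 1, "eu-west-1": 4}
total = sum(weights.values())  # 21
shares = {region: w / total for region, w in weights.items()}
print({region: round(s, 3) for region, s in shares.items()})
# e.g. eu-west-1 is expected to receive 4/21, about 19 % of the GPU jobs
```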
Our example job gets distributed to \texttt{eu-west-1} (blue, \Circled{2}),
where it is relayed to a Batch instance \Circled{3} with sufficient free resources (vCPUs, GPUs).
The instance loads the correct Docker image from AWS Elastic Container Registry (ECR)
with preinstalled software for the current architecture \Circled{4},
e.g.\ pmx and the GROMACS\xspace executable with the SIMD level matching the CPU,
see Fig.~\ref{lst:dockergpu} for the definition of the Docker file.
\begin{figure}
\caption{Perl script used to launch each of the 19,872 jobs (first part).}
\label{lst:perlone}
\begin{lstlisting}[language=Perl]
#!/usr/bin/env perl
use Cwd;
my $workdir = getcwd;
my $intpr = $ARGV[0]; # "s3://input-gaff2-water/cdk8/edge_13_14/stateA/eq1/tpr.tpr"
my $outdir = $ARGV[1]; # "s3://output-gaff2-water/cdk8/edge_13_14/stateA/run1/"
my $topdir = $ARGV[2]; # "s3://input-gaff2-water/cdk8/edge_13_14"
my $topfile = $ARGV[3]; # "topolA1.top"
my $mdpfile = $ARGV[4]; # "ti_verlet_l0.mdp"
my $ntomp = $ARGV[5]; # number of threads e.g. "8"
my $nmpi = 1; # number of ranks
system("rm -rf ./*"); # Start clean + remove potential leftovers from earlier run
system("aws s3 cp $topdir . --recursive --exclude='state?/*' ");
$ENV{'GMXLIB'} = "/opt/common/ff";
# Maybe this job did already run on another Spot instances that died at some point.
# Retrieve whatever data we stored from that run, and go on from there.
system("aws s3 cp $outdir . --recursive");
if (-s "frame0.gro") {
print "=== Frame 0 found, starting transitions. ===\n";
} else {
print "=== Frame 0 not found, equilibration not complete, continuing eq.===\n";
#############################################################################
# FIRST PART: run EQUILIBRATION in chunks, occasionally save generated data #
#############################################################################
my @count = (1..8);
for my $iter (@count) {
if (-s $intpr) {
print "--- Found $intpr, continuing ... (iteration $iter)\n";
} else {
print "--- Copying $intpr ... (iteration $iter)\n";
system("aws s3 cp $intpr .");
}
system("gmx mdrun -ntmpi $nmpi -ntomp $ntomp -s tpr.tpr -npme 0 -quiet -cpi -nsteps 500000");
system("rm confout.gro"); # we don't need this one
# save checkpoint + other files generated by the run to S3 in case Spot instance gets interrupted
system("aws s3 cp md.log $outdir");
system("aws s3 cp ener.edr $outdir");
system("aws s3 cp traj.trr $outdir");
system("aws s3 cp traj_comp.xtc $outdir");
system("aws s3 cp dhdl.xvg $outdir");
system("aws s3 cp state.cpt $outdir");
# check number of steps
my $nsteps = get_step_number( "md.log" ); # (get_step_number not part of this listing)
if( $nsteps >= 2999999 )
{ last; }
}
system("echo 0 | gmx trjconv -quiet -s tpr.tpr -f traj.trr -o frame.gro -sep -ur compact -pbc mol -b 2256");
# Save the frames in case this Spot instance gets interrupted:
system("aws s3 cp . $outdir --recursive --exclude='*' --include='frame*.gro' ");
}
\end{lstlisting}
\end{figure}
\begin{figure}
\caption{Perl script used to launch each of the 19,872 jobs (cont'd).}
\label{lst:perltwo}
\begin{lstlisting}[language=Perl, firstnumber=67]
#############################################################################
# SECOND PART: run the 80 transitions #
#############################################################################
system("mkdir $workdir/morphes"); # create folder to run the transitions in
for(my $i=0; $i<=79; $i++) # loop over each transition (= frame)
{
system("mkdir $workdir/morphes/frame$i"); # make subfolder for this frame
if( -e "$workdir/dhdl$i.xvg" ) # check whether dhdl already exists
{ # if yes,
system("cp $workdir/dhdl$i.xvg $workdir/morphes/frame$i/."); # copy dhdl to subfolder
next; # and proceed to next frame
}
system("mv $workdir/frame$i.gro $workdir/morphes/frame$i/frame$i.gro"); # mv input to subfolder
chdir("$workdir/morphes/frame$i"); # and go there
# call grompp and mdrun
system("gmx grompp -p $workdir/$topfile -c frame$i.gro -f /opt/common/$mdpfile -o ti.tpr -maxwarn 3");
system("gmx mdrun -ntmpi $nmpi -ntomp $ntomp -s ti.tpr -dhdl dhdl$i.xvg -npme 0");
system("aws s3 cp dhdl$i.xvg $outdir"); # save dhdl to S3
system("rm *#"); # clean up
} # done with all 80 dhdl values
# integrate and save work values
chdir("$workdir/morphes");
if( $topfile =~ /A/ )
{ system("pmx analyse --integ_only -fA frame*/dhdl*.xvg -oA work.dat --work_plot none"); }
else
{ system("pmx analyse --integ_only -fB frame*/dhdl*.xvg -oB work.dat --work_plot none"); }
# Copy all results back to correct S3 output container
system("aws s3 cp work.dat $outdir");
exit 0;
\end{lstlisting}
\end{figure}
The actual simulations are handled
by the Perl script shown in Figs.~\ref{lst:perlone}--\ref{lst:perltwo}.
This script is designed to deal with sudden interrupts that are possible with Spot instances.
Accordingly, output data and checkpoints are saved in regular intervals to S3 storage.
To start a simulation, step \Circled{5} loads the input files from S3
(line 13 in Fig.~\ref{lst:perlone}).
Step \Circled{6} loads potentially existing output data from S3
(line 18 in the listing),
this is the case when the job was running earlier already but was interrupted before it finished.
Depending on whether prior output data is present, the job is either continued or
started from scratch.
Generally, the MD job consists of two parts,
(i), the production of an equilibrium trajectory (lines 25--56 in the listing), and
(ii), the 80 individual transitions (lines 67--86).
Part (i) is executed in eight chunks (lines 28--29)
so that upon instance termination only a small part needs to be recomputed,
as \Circled{7} each chunk's data is transferred to S3.
If an instance terminates during one of the 80 transitions,
the job is continued from the start of that transition,
as a completed transition \Circled{7} is immediately saved to S3.
At last, pmx integrates and saves the work values that are later used for free energy estimation (lines 88--95).
Instance termination \Circled{8} at any time will trigger a Lambda function that resubmits
the job again to HyperBatch.
The current state of each job can be checked in a DynamoDB table (Fig.~\ref{fig:batchworkflow} right).
Additional configuration using Amazon Elasticsearch allows
to globally monitor the whole simulation ensemble in a Kibana\cite{gupta2015kibana} dashboard
that shows the total number of running instances, the instance types by region, and more.
1) Flexi Tabs are mainly used for lightweight items and automatic application.
2) Flexis eliminate the need for space-hungry blister packs. They help to create space for more product with increased facing and can lower your packaging costs.
3) Flexis are quick and easy to use for merchandising products, particularly when used in pad form.
4) To trigger more impulse buys at the checkout, self-adhesive hang tabs can be used to hang your product on peg boards or on free-standing counter racks.
Q: Perturbation of the value of a general-sum game at an equilibrium Consider a general-sum game with $N$ players. Let $u_i(a_1, \ldots, a_N)\colon \prod_{i=1}^N A_i \rightarrow \mathbb{R} $ be the payoff of the player $i\in \{ 1, \ldots, N \}$ when each player takes action $a_i \in A_i$, where $A_i $ is the action set of player $i$. Let $\sigma^*$ be any notion of correlated equilibrium (CE) that is computable and unique. For example, the social optimal correlated equilibrium or the max-entropy correlated equilibrium; both can be solved efficiently using linear programming. Thus, $\sigma^*$ is a probability measure on the joint action space $\prod_{i=1}^N A_i$. Then the expected payoff of player $i$ is
$$
V_i( u_1, \ldots, u_N) = \mathbb{E}_{(a_1, \ldots, a_N) \sim \sigma^*} \bigl [ u_i(a_1, \ldots, a_N) \bigr ] \\
= \sum_{(a_1, \ldots, a_N)\in \prod_{i=1}^N A_i} u_i(a_1, \ldots, a_N) \cdot \sigma^* (a_1, \ldots, a_N).
$$
Note that the value of the game at a social optimal CE or a max-entropy CE is unique.
I was wondering whether the values of the game $( V_1, \ldots, V_N)\in \mathbb{R}^N$ are Lipschitz with respect to the utility functions. That is, suppose we have two sets of utility functions $\{ u_i \}_{i=1}^N $ and $\{\tilde u_i \}_{i=1}^N $ and we solve for the same kind of CE on both games. Is it possible to show that
$$
\max_{i\in \{1, \ldots, N \} } \bigl | V_i ( u_1, \ldots, u_N) - V_i(\tilde u_1, \ldots, \tilde u_N) \bigr | \leq C \cdot \max_{j\in \{1,\ldots, N\} } \| u_j - \tilde u_j \|_{\infty}
$$
for some constant $C$?
P.S.: For zero-sum games, it seems that we can show this with $C = 1$.
A: Consider the following 2x2 two player game
\begin{array}{c|c}
1,1 & 0,1 \\
\hline
1,0 & 0,0
\end{array}
In this game, all strategy profiles are Nash equilibria, and consequently every point in the unit square is an equilibrium payoff (and a correlated equilibrium payoff).
Take now the following perturbation of this game (for positive $\epsilon$)
\begin{array}{c|c}
1+\epsilon,1+\epsilon & \epsilon,1 \\
\hline
1,\epsilon & 0,0
\end{array}
In this game each player has a dominant strategy, hence the unique correlated equilibrium payoff is $(1+\epsilon,1+\epsilon)$.
Take the same game with negative $\epsilon$. The unique correlated equilibrium payoff is (0,0).
Does this example answer your question?
Regarding zero-sum games: in a zero-sum game, the unique correlated equilibrium payoff coincides with the value. Since the value is 1-Lipschitz in the maximum norm, the answer to your question is positive (for zero-sum games).
\section{Introduction}
\label{sec:intro}
It is now easier than ever to collect personal health data due to the increase in the production of smart devices designed to track data from multiple inputs.
Target demographics of these products can be designated as quantified-selfers (those who maintain their own health records as a hobby), people with chronic health conditions, and everyday individuals who wish to maintain a healthy lifestyle. Quantified-selfers strive to record as much of their lives as possible using the health technologies available to them and are eager to track and learn from their own data. On the other hand, those with chronic health conditions (e.g., Type II diabetes) mainly use this information to make decisions related to future food consumption, physical activity, and so on~\cite{sundecision}. Everyday individuals who are health-conscious may also utilize a health app or device to track their progress and learn what works for them.
Unfortunately, many users of these personal health technologies tend to abandon them after a short period of time due to a lack of support when it comes to decision-making and a lack of sufficient interpretation of their data~\cite{choequantifiedself}.
Users will then lose interest in learning from their own data and begin to record it less often.
This results in a sparse dataset that becomes more difficult to interpret and the users end up becoming even more disengaged~\cite{codella2018}.
Non-expert users may also incorrectly interpret their data, leading them to make unfavorable health decisions~\cite{peeldiabetes}.
As more and more data is collected over longer periods of time, the data becomes increasingly difficult to interpret.
In light of this, there is a need for an automated system that can interpret and surface meaningful insights to aid users in their progress towards their health goals.
This problem was partially addressed by previous works~\cite{businessdata,processes,trends,trendsapproach,kacprzyk2008linguistic,eldercare} (inspired by~\cite{prototype,yagerapproach}) that generate natural language summaries of temporal data using summary templates, or ``protoforms.''
A protoform is essentially a summary with special ``blanks'' to be filled with specific types of words, such as summarizers (conclusive phrases),
quantifiers (phrases that specify how often a conclusion is true), attributes (variables of interest), time windows (e.g., weeks, months), and days of the week (e.g., Friday).
The structure of an example protoform is: \textit{On $\langle$quantifier$\rangle$ $\langle$sub-time window$\rangle$ in the past $\langle$time window$\rangle$, your $\langle$attribute$\rangle$ was $\langle$summarizer$\rangle$}. This could generate the following example summary: ``On \textit{most of the days} in the past \textit{week}, your \textit{calorie intake} was \textit{high}.'' We call this a standard evaluation summary at the daily granularity.
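To make the template-filling idea concrete, here is a minimal Python sketch of protoform instantiation. The slot names and the helper function are illustrative assumptions, not identifiers from the authors' implementation:

```python
# Hypothetical sketch: filling a protoform's blanks with concrete values.
# Slot names mirror the blanks described in the text; none of these
# identifiers come from the paper's actual code.
PROTOFORM = ("On {quantifier} {sub_tw} in the past {time_window}, "
             "your {attribute} was {summarizer}.")

def fill_protoform(quantifier, sub_tw, time_window, attribute, summarizer):
    """Instantiate the protoform by filling each blank with a concrete value."""
    return PROTOFORM.format(quantifier=quantifier, sub_tw=sub_tw,
                            time_window=time_window, attribute=attribute,
                            summarizer=summarizer)

summary = fill_protoform("most of the", "days", "week", "calorie intake", "high")
# -> "On most of the days in the past week, your calorie intake was high."
```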
In recent work \cite{harris2021framework}, we created a comprehensive hierarchy of twelve different protoforms to summarize different types of patterns of interest in time-series data. The summaries range from simple (e.g., standard evaluation and comparison) -- those that focus on observations that are more apparent to the everyday individual -- to more complex (e.g., if-then and cluster-based patterns) -- those that describe longer patterns discovered using more advanced data-mining techniques.
We use the hierarchy to generate summaries (via our summarization framework) describing behavioral patterns in real user health data.
Although rule-based approaches can be effective, the reliance on the use of protoforms limits the diversity of the summary output. Furthermore, extending the summarization framework requires manually defining new temporal patterns (and subsequently creating new protoforms) to generate new summaries.
In contrast, we aim to train deep learning models to both learn and fill the protoform templates presented in our framework. We believe that a transition to deep learning gives our framework more freedom to grow on a summarization and pattern mining level. Deep learning models may discover temporal patterns that we cannot see and present those patterns in natural language.
We present an end-to-end neural approach for time-series summarization, exploring the spectrum of recurrent, convolutional, and Transformer-based models to automatically generate natural language summaries from numeric temporal personal health data. To our knowledge, this is the first such approach in the personalized health domain.
Given the lack of publicly available ground-truth summaries from personal health data, we rely on the summaries generated from our protoform-based summarization framework to train the models. We showcase summaries generated from real user data from MyFitnessPal~\cite{mfp}, and show that the automatically generated summaries are both personalized and of high quality. Our models achieve good accuracies and high BLEU scores~\cite{bleu} for many summary types. In other words, our models can effectively learn to generate understandable natural language summaries automatically from numeric time-series data. Our work should thus be considered as a proof-of-concept that opens up the tantalizing possibility of generating new temporal summary types and bypassing the need to manually extend rule-based approaches.
\vspace{-0.15in}
\section{Related Work}
\label{sec:related}
According to~\citet{vanderlee}, there are three families of data-to-text generation methods: statistical machine translation~\cite{koehn,lopez,vandeemter,sanby}, neural machine translation~\cite{klein,content,ferreira,zhao,puduppully,uehara,ehr,med2vec,li}, and rule-based linguistic summarization~\cite{boran2016overview,harris2021framework,reiter}. Neural and statistical methods generally involve training models to automatically generate natural language summaries of data, while rule-based methods depend on the use of protoforms to model their summary output. There are definite benefits and drawbacks between each family, especially between the machine translation methods and the rule-based methods.
Rule-based methods tend to have better performance and higher textual quality; however, these methods require manual creation or extension which can be considerably time-intensive.
Most rule-based approaches find simple conclusions based on the trend/concavity of a time series and relay this to the user in a templatized natural language summary.
In our previous work~\cite{harris2021framework}, we employed various data mining algorithms to discover hidden patterns within temporal personal health data and generated summaries via different rule-based protoforms.
These summaries are evaluated both by humans and with objective measures~\cite{boran2016overview}, such as summary length and relevance.
In the field of neural machine translation~\cite{uehara,puduppully,zhao,ferreira,content,gao,wiseman}, neural and statistical methods bypass the need for manual rule creation, but they rely on large datasets and are generally lacking in performance and text quality.
The models' reliance on large datasets can be especially difficult in certain domains, such as in personal health.
For evaluation, these models are typically compared using the BLEU score, which is designed to measure the agreement between the model output and the reference sentences.
Notable examples include \citet{murakami2017} and \citet{aoki2018} who present the Market Reporter model, which can handle inter-market relationships for stock market data (e.g., relationships between stock trends for the Nikkei and Dow Jones indices).
The authors paired time series sequences gathered from Thomson Reuters DataScope Select with associated market comments from Nikkei Quick News (NQN). The summaries generated by this model were limited to simpler conclusions, such as a continual rising trend that could be easily viewed in the data.
In contrast to the works mentioned above, our aim is to construct neural sequence-to-sequence (i.e., numeric-to-text) generation models for temporal personal health data to generate summaries of meaningful and interesting patterns.
\input{learning_task}
\section{Learning Task}
\label{sec:learningtask}
Before delving into the encoder-decoder architectures, we define the learning task for numeric-to-text neural models.
A main challenge is the lack of suitable ground-truth training data pairing personal health data with high-quality summaries that can be used for training. On the other hand, we do have relatively high-quality summaries from our recently proposed summary type hierarchy. We also conducted a user study to evaluate the output summaries
by their readability, their comprehensiveness, their usefulness, and how well they align with the data they are describing.
Thus, given the lack of publicly available domain expert summaries for personal health data, as a first step, we use the summaries produced from our rule-based framework as the ground truth to train our neural models. We believe this is an effective strategy since we can train our models on a variety of summary types, establishing a suitable state-of-the-art method for this task. Further, this also showcases the proof of concept, that it is indeed possible to automatically generate high-quality natural language summaries from numeric data using deep learning models. In the future, our aim is to generate free-form summaries.
The learning task is to translate raw numeric time-series subsequences into natural language summaries, as reflected in Figure \ref{fig:learning_task}. Here, the input is numeric time-series data comprising the subsequence for the past week (top) and the entire user history (bottom). The neural network models are then expected to generate a natural language summary, as shown.
Our models receive training pairs containing a time series subsequence of personal health data (e.g., calorie intake), the natural language summary generated for it, and the associated protoform for that summary. The summary type is selected prior to training and the learning models are evaluated based on their accuracy and BLEU score for each summary type.
\vspace{-0.075in}
\section{Numeric-to-Text Models}
\label{sec:arch}
We introduce CNN-LSTM, Transformer, and Transformer-LSTM encoder-decoder models for numeric-to-text translation.
The input to all three models comprises the short-term ($x_{short}$) and long-term ($x_{long}$) representations of the temporal personal health data. In our case, the short-term representation of the data is the input time series subsequence of interest (shown on the top left in Fig.~\ref{fig:learning_task}), while the long-term representation is the entire time series (shown on the bottom left).
Formally, we define the long-term representation as $x_{long} = (x_1, x_2, ..., x_N)$ where $x_i \in \mathbb{R}$ and $N$ is the length of the entire time series, and the short-term representation as $x_{short} = (x_i, x_{i+1}, ..., x_j)$ where $1 \leq i,j \leq N$ and $i < j$. The length of $x_{short}$ depends on the summary type the model is learning.
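Under the formal definition above, the two representations can be extracted as in the following small sketch (1-based inclusive indices, matching the notation; the function name is an illustrative assumption):

```python
def split_representations(series, i, j):
    """Given the full time series, return the pair (x_short, x_long), where
    x_short = (x_i, ..., x_j) uses 1-based inclusive indices as in the
    formal definition, and x_long is the entire series."""
    x_long = list(series)
    assert 1 <= i < j <= len(x_long)  # i < j, both within the series
    x_short = x_long[i - 1:j]
    return x_short, x_long

x_short, x_long = split_representations(range(1, 11), 3, 5)
# x_short == [3, 4, 5]; x_long == [1, ..., 10]
```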
Since we are working with personalized summaries (e.g., medium sodium intake for one user can be high intake for another user), they require the context of the time series ($x_{long}$) to be useful.
\input{output}
For the CNN-LSTM model, we feed the two representations of the input data into separate, yet similar, convolutional encoder layers and concatenate the resulting hidden states with the original $x_{short}$ and $x_{long}$ sequences before sending them through fully-connected dense and dropout layers.
For the decoding step, we utilize two separate LSTM decoders: a summary decoder and an additional template decoder. The summary decoder generates the predicted summary tokens $y_{pred} = ``s_{1}\;s_{2}\;...\;s_{n}"$
where $n$ is the number of tokens generated by the LSTM for the resulting natural language summary, while the template decoder generates the predicted template tokens $y_{proto} = ``t_{1}\;t_{2}\;...\;t_{n}"$ for the resulting protoform. These template tokens are generated directly from the summary tokens $y_{pred}$ for input. It may seem that the same $y_{proto}$ will be fed as input for each example; however, any summary type capable of generating summaries that vary in length (e.g., if-then pattern summaries) will have varying inputs for $y_{proto}$. Summary tokens $y_{pred}$ and template tokens $y_{proto}$ are the two outputs of our model. In essence, the model has two similar learning tasks: the translation of a time series sequence with added context to a natural language summary and its associated protoform.
Whereas we are mainly interested in the summary output, the template decoder allows the model to learn the protoform structure which results in better summary output. Once it learns the protoform using the input template tokens, it can automatically determine what the ``blanks'' should be.
For example, given the set of template tokens ``In the past full \textbf{TW}, your \textbf{A A} has been \textbf{S},'' it can generate a summary such as ``In the past full \textbf{week}, your \textbf{calorie intake} has been \textbf{moderate}.''
The template tokens help the neural network focus on the special ``blanks'' mentioned in Sec.~\ref{sec:intro}, whereas the summary tokens can focus on the final token-level natural language summary.
The decoding process is shown in Figure~\ref{fig:output}.
The model utilizes a cross-entropy loss with respect to the ground-truth summary and template tokens at each position, which yields the combined loss for the summary and template decoder output. The resulting loss function is given as: $L(\hat{y}_s,\hat{y}_t) = \sum_{i=1}^{n} CE(y_{s_i},\hat{y}_{s_i}) + m\sum_{i=1}^{n}CE(y_{t_i},\hat{y}_{t_i})$,
where $CE$ is the cross-entropy loss per token, $n$ is the summary length, $y_{s_i}$ and $\hat{y}_{s_i}$ represent the actual and predicted summary tokens from the summary decoder, $y_{t_i}$ and $\hat{y}_{t_i}$ represent the actual and predicted template tokens from the template decoder, and $m$ is the number of incorrect ``blanks'' in the template decoder output (i.e., a larger $m$ imposes a higher penalty).
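A pure-Python sketch of this combined objective may clarify the structure. This is illustrative only: the actual models compute the loss with framework tensor operations, and the helper names below are assumptions. Each position holds a probability distribution over the vocabulary, and $m$ counts template positions where the argmax prediction is wrong:

```python
import math

def cross_entropy(probs, target):
    """CE for one position: -log p(target)."""
    return -math.log(probs[target])

def combined_loss(summary_probs, template_probs, summary_targets, template_targets):
    """L = sum_i CE(summary_i) + m * sum_i CE(template_i), where m is the
    number of incorrectly predicted template tokens (argmax != target)."""
    ce_s = sum(cross_entropy(p, t) for p, t in zip(summary_probs, summary_targets))
    ce_t = sum(cross_entropy(p, t) for p, t in zip(template_probs, template_targets))
    m = sum(1 for p, t in zip(template_probs, template_targets)
            if max(range(len(p)), key=p.__getitem__) != t)
    return ce_s + m * ce_t
```

For instance, with one summary position predicted well and one template position predicted wrongly, the template term is multiplied by $m = 1$.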
Transformers are a viable alternative to recurrent and convolutional networks via their use of attention; therefore, we decided to test the summary generation task on a numeric-to-text Time Series Transformer-Transformer model. The original Transformer~\cite{attention} focuses on text-to-text machine translation. Thus, we replace the text encoder with one that can process numeric time-series data. We extend the Time Series Transformer (TST)~\cite{cohen} encoder, and pair it with a Transformer decoder (for natural language generation) to construct a model for numeric-to-text generation. The input to the TST encoder is the concatenation of $x_{short}$ and $x_{long}$, and it
utilizes multi-head attention by dividing the queries, keys, and values into chunks using a moving window (we use window size 12).
For decoding, we employ dual Transformer decoders to train the model on both the protoform structure and natural language so that it can produce a more comprehensive output. Teacher forcing is not used during training.
We also experimented with the TST encoder and an LSTM decoder model.
We hypothesized that the LSTM decoder could be a possible alternative to the Transformer decoder, especially when receiving encodings from time-series data since the Transformer decoder may not be the ideal pairing for the TST encoder.
The encoder-decoder connection between the TST and LSTM is similar to that of the CNN-LSTM model.
\vspace{-0.05in}
\section{Experiments}
\begin{table*}[!t]
\centering
\footnotesize
\begin{tabular}{|c|c|c|c||c|c|c|}
\hline
\multirow{2}*{Summary Type} & \multicolumn{3}{|c||}{Accuracy} & \multicolumn{3}{|c|}{BLEU Score}\\\cline{2-7}
& CNN-LSTM & TST-Transformer & TST-LSTM & CNN-LSTM & TST-Transformer & TST-LSTM\\\hline
Standard Evaluation (TW granularity) & \textbf{1} & 0.98 & \textbf{1} & 0.9999 & 0.998 & \textbf{1}\\\hline
Standard Evaluation (sTW granularity) & \textbf{1} & 0.96 & \textbf{1} & 0.999 & 0.996 & \textbf{0.9998}\\\hline
Day-Based Pattern & \textbf{1} & 0.846 & \textbf{1} & 0.9998 & 0.987 & \textbf{0.9999}\\\hline
Goal Evaluation & \textbf{0.98} & 0.5 & 0.92 & \textbf{0.997} & 0.954 & 0.991\\\hline
Goal Assistance & 0.86 & 0.745 & \textbf{0.87} & 0.854 & 0.778 & \textbf{0.866}\\\hline
Standard Trend & \textbf{1} & 0.29 & \textbf{1} & \textbf{0.9999} & 0.919 & \textbf{0.9999}\\\hline
If-Then Pattern & \textbf{1} & 0.998 & \textbf{1} & \textbf{0.9999} & \textbf{0.9999} & 0.9998\\\hline
Day If-Then Pattern & 0.1 & 0.07 & \textbf{0.14} & 0.845 & \textbf{0.955} & 0.853\\\hline
Evaluation Comparison & \textbf{0.97} & 0.8 & \textbf{0.97} & \textbf{0.99} & 0.968 & \textbf{0.99}\\\hline
Goal Comparison & \textbf{0.97} & 0.59 & 0.73 & \textbf{0.994} & 0.953 & 0.944\\\hline
Cluster-Based Description & 0.43 & 0.74 & \textbf{0.97} & 0.894 & 0.98 & \textbf{0.995}\\\hline
Cluster-Based Pattern & 0.43 & 0.26 & \textbf{0.71} & 0.861 & 0.925 & \textbf{0.931}\\\hline
Standard Pattern & \textbf{0.85} & 0.3 & 0.82 &\textbf{ 0.977} & 0.915 & 0.961\\\hline\hline
\textbf{Average} & 0.815 & 0.621 & \textbf{0.856} & 0.955 & 0.948 & \textbf{0.964}\\\hline
\end{tabular}
\caption{Experiment Results: Comparing the Numeric-to-Text Encoder-Decoder Models}
\label{tab:modelresults}
\vspace{-0.3in}
\end{table*}
The models were trained using PyTorch, on a Linux-based machine with an NVIDIA Tesla V100 GPU. For reproducibility purposes, our open source implementation is available from~\url{https://github.com/neato47/Neural-Numeric-To-Text-Generation}.
We conducted our experiments using the MyFitnessPal food log dataset~\cite{mfp},
which contains 587,187 days of real food log data across 9.9K users (389 of them were selected), each tracking up to 180 days worth of food and nutrient intake data.
Users were expected to log the food items they consumed and their daily calorie goals, while the MyFitnessPal database added in the associated nutrient information and total daily intake.
We train our models on each summary type separately and evaluate their performance using the BLEU score and the model's prediction accuracy. The accuracy is determined by how exactly each summary in the predicted output matches the expected output on a token-to-token basis.
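One plausible reading of the exact-match accuracy described above can be sketched as follows (the function name is illustrative, not from the authors' code):

```python
def exact_match_accuracy(predicted, expected):
    """Fraction of predicted summaries whose token sequence matches the
    ground-truth summary exactly, token for token."""
    matches = sum(p.split() == e.split() for p, e in zip(predicted, expected))
    return matches / len(expected)

# e.g., one of two summaries matches exactly -> accuracy 0.5
acc = exact_match_accuracy(["intake was high", "intake was low"],
                           ["intake was high", "intake was moderate"])
```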
In terms of hyperparameters, we used the Adam optimizer with a learning rate of 0.0001 and cross-entropy loss for all three models.
For the CNN-LSTM model, the hidden encoder/decoder size is 180 and the encoder's output size is 256.
The CNN kernel size is $1 \times 3$, with a stride of 1 and padding of 1 for both convolutional layers.
The max pooling layers
have a kernel size of 2 and a stride of 2. Only one linear layer is used before the output neurons. The output dimension of the decoder is the length of the largest ground-truth summary. The CNN-LSTM model is trained in batches of size 180 for 78 epochs.
For the Transformer-based models, the input embeddings are 64 dimensional ($d_{model}$), with query, key, and value dimensionality of 8 and 4 attention heads. There are four stacked encoder and (summary and template) decoder layers. A dropout probability of 0.2 is used for both the encoder and decoder layers. The TST-LSTM model was trained in batches of size 8 for 30 epochs.
We ran experiments on the users' calorie intake data; the comparative results for the three models, for each summary type, are reported in Table~\ref{tab:modelresults}.
The CNN-LSTM's average prediction accuracy across all of the summary types is around 0.815, the TST-Transformer's average accuracy is around 0.621, and the TST-LSTM's is around 0.856. The TST-LSTM model also has the highest exact match accuracy for 10 out of the 13 summary types.
The BLEU score~\cite{bleu} measures the agreement between the model output and the reference sentences by calculating the n-gram overlap between the output and reference sentences. A score of 1 indicates identical sentences.
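A much-simplified single-reference BLEU sketch is shown below, for illustration only: it computes clipped n-gram precisions and a brevity penalty, but omits the smoothing and multi-reference handling of standard implementations:

```python
from collections import Counter
import math

def bleu(candidate, reference, max_n=4):
    """Simplified BLEU: geometric mean of clipped n-gram precisions times a
    brevity penalty. Single reference, no smoothing (unlike standard tools).
    Assumes a non-empty tokenized candidate."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        overlap = sum((cand & ref).values())   # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # without smoothing, any zero precision zeroes the score
    log_avg = sum(math.log(p) for p in precisions) / max_n
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))  # brevity penalty
    return bp * math.exp(log_avg)
```

An identical candidate and reference yield a score of 1, matching the statement above.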
The CNN-LSTM model has an average BLEU score of 0.955, the TST-Transformer model has an average of 0.948, and the TST-LSTM model has an average of 0.964. The TST-LSTM model also has the highest BLEU score for 9 out of the 13 summary types. Based on average accuracy and BLEU score alone, the TST-LSTM model performs better when it comes to matching the exact summary output and it makes predictions that are closer to the target summary output more often. This shows that the TST-LSTM model is the better model.
Looking at the summary types, it seems that the models had the most trouble with day if-then pattern, goal comparison, cluster-based pattern, and standard pattern summaries.
Please refer to~\cite{harris2021framework} for more information on these summary types.
All three models mainly struggled to correctly guess the days of the week (e.g., Friday) for the day if-then pattern summaries. It may be difficult to keep track of the days based on the data.
Goal comparison summaries compare a user's adherence to a goal between two time windows at the weekly granularity. It appears that the TST-Transformer had trouble factoring in the calorie intake goal for the comparison, which may point to a limitation of the raw numeric input representation. It only had an accuracy of 0.59 for this type, while it had an accuracy of 0.8 for evaluation comparison summaries.
Standard trend summaries describe how often a time series changes slope from one day to the next; however, the CNN-LSTM struggles for this summary type with an accuracy of 0.29. It is possible that the CNN encoder is having trouble detecting the change in slope.
Cluster-based pattern summaries explain what happened directly after weeks that are most similar to the most recent week $w$. This information helps predict what could happen in $w'$, the week after week $w$. The cluster-based description summary type is a description of the similar week that is most recent.
The $x_{short}$ of both summary types is the most recent week. This may hinder the CNN-LSTM's and TST-Transformer's ability to find the connections between the most recent week and the weeks similar to it since the CNN-LSTM only had an accuracy of 0.43 for both summary types, while the TST-Transformer had an accuracy of 0.26 for the cluster-based pattern summary type. It may be beneficial to add more information to the $x_{short}$ (i.e., similar weeks and the weeks after them).
The standard pattern summary type is very similar to the cluster-based pattern summary type, except it only uses the most recent similar week to predict the user's behavior in week $w'$ and its $x_{short}$ contains the most recent similar week, the week directly after, and $w$. The CNN-LSTM also struggled with this summary type, resulting in an accuracy of 0.3.
\vspace{-0.05in}
\section{Conclusion}
In this paper, we present and compare neural numeric-to-text machine translation models designed to translate raw temporal personal health data into natural language summaries. With these models, we surface hidden, meaningful patterns in a user's personal health data and provide them with the knowledge required to work closer to their health goals.
This work is a proof-of-concept demonstrating the feasibility of generating explanations and summaries from personal health data.
For future work, we plan to
construct joint models that can be trained on all of the summary types at once. We also plan to explore generative models~\cite{t5,bart,gpt3} to generate novel summaries from time-series data using machine translation. Finally, we wish to look more into how we could make real-life applications of our work despite limited training data.
\bibliographystyle{ACM-Reference-Format}
Q: What does "namespace FSI_00XX" mean in FSharp Interactive? Each time I load an FSX script file (or any other file for that matter) in FSharp Interactive in Visual Studio 2015, it prints a message:
> #load "D:\Projects\Tests.fsx";;
[Loading D:\Projects\Tests.fsx]
namespace FSI_0055
It doesn't matter whether the FSX is empty, has one or more types or modules in it. The result is always the loading message (clear enough) and then the namespace FSI_00XX message, where XX is an incremental number. I.e., if I run the above command again (with or without changes to the file) it shows this:
> #load "D:\Projects\Tests.fsx";;
[Loading D:\Projects\Tests.fsx]
namespace FSI_0056
It looks like an error, but clearly it isn't. My guess is, it is an implicit namespace, and the current namespace will be set to the latest. Does it also mean that I can refer to the previous version using the previous namespace?
Or, if not that, what does it stand for?
Note: if I use "Send to interactive" of a code snippet, this message does not appear.
A: Someone with more detailed knowledge of FSI internals could no doubt give you a more complete answer, but for what it's worth:
*
*FSI_XXXX is a dynamically created F# module that contains the "free" definitions you input into the REPL,
*"Sending into interactive" does in fact put the definitions into such a module, despite that no explicit message is printed as in the "load" case,
*I couldn't explicitly get hold of FSI_XXXX type, but it can be reached using reflection. FSI doesn't seem to allow something like "FSI_0001.Test" as a reference to the old version - can't tell if it's accidental or by-design.
Check the code below:
type Test = T of int
// this is the FSI_XXXX type, you can inspect it for more details.
let fsiType = typeof<Test>.DeclaringType
Microsoft.FSharp.Reflection.FSharpType.IsModule fsiType // returns true
If you copy/paste and run this in the FSI window in Visual Studio, it will return something like the following:
type Test = | T of int
val fsiType : System.Type = FSI_0026
val it : bool = true
package org.locationtech.geomesa.accumulo.process.temporalDensity
import java.util.Date
import com.typesafe.scalalogging.slf4j.Logging
import org.geotools.data.Query
import org.geotools.data.simple.{SimpleFeatureCollection, SimpleFeatureSource}
import org.geotools.data.store.ReTypingFeatureCollection
import org.geotools.feature.DefaultFeatureCollection
import org.geotools.feature.visitor.{AbstractCalcResult, CalcResult, FeatureCalc}
import org.geotools.process.factory.{DescribeParameter, DescribeProcess, DescribeResult}
import org.geotools.util.NullProgressListener
import org.joda.time.Interval
import org.locationtech.geomesa.accumulo.index.QueryHints
import org.locationtech.geomesa.accumulo.iterators.TemporalDensityIterator.createFeatureType
import org.opengis.feature.Feature
import org.opengis.feature.simple.SimpleFeature
@DescribeProcess(
title = "Temporal Density Process",
description = "Returns a histogram of how many data points fall in different time buckets within an interval."
)
class TemporalDensityProcess extends Logging {
@DescribeResult(description = "Output feature collection")
def execute(
@DescribeParameter(
name = "features",
description = "The feature set on which to query")
features: SimpleFeatureCollection,
@DescribeParameter(
name = "startDate",
description = "The start of the time interval")
startDate: Date,
@DescribeParameter(
name = "endDate",
description = "The end of the time interval")
endDate: Date,
@DescribeParameter(
name = "buckets",
min = 1,
description = "How many buckets we want to divide our time interval into.")
buckets: Int
): SimpleFeatureCollection = {
logger.debug("Attempting Geomesa temporal density on type " + features.getClass.getName)
if (features.isInstanceOf[ReTypingFeatureCollection]) {
logger.warn("WARNING: layer name in geoserver must match feature type name in geomesa")
}
val interval = new Interval(startDate.getTime, endDate.getTime)
val visitor = new TemporalDensityVisitor(features, interval, buckets)
features.accepts(visitor, new NullProgressListener)
visitor.getResult.asInstanceOf[TDResult].results
}
}
class TemporalDensityVisitor(features: SimpleFeatureCollection, interval: Interval, buckets: Int)
extends FeatureCalc with Logging {
val retType = createFeatureType(features.getSchema())
val manualVisitResults = new DefaultFeatureCollection(null, retType)
// Called for non AccumuloFeatureCollections
def visit(feature: Feature): Unit = {
val sf = feature.asInstanceOf[SimpleFeature]
manualVisitResults.add(sf)
}
var resultCalc: TDResult = new TDResult(manualVisitResults)
override def getResult: CalcResult = resultCalc
def setValue(r: SimpleFeatureCollection) = resultCalc = TDResult(r)
def query(source: SimpleFeatureSource, query: Query) = {
logger.debug("Running Geomesa temporal density process on source type " + source.getClass.getName)
query.getHints.put(QueryHints.TEMPORAL_DENSITY_KEY, java.lang.Boolean.TRUE)
query.getHints.put(QueryHints.TIME_INTERVAL_KEY, interval)
query.getHints.put(QueryHints.TIME_BUCKETS_KEY, buckets)
source.getFeatures(query)
}
}
case class TDResult(results: SimpleFeatureCollection) extends AbstractCalcResult
Q: Get & update the form data in react I'm using React & Apollo client to fetch the data from GraphQL server.
Now I have written following component in react.
*
*Component for listing all the item - working.
*Component for creating an item - working.
*Component for updating the item - not working (not sure how to do that).
I have 2 component files.
*
*UpdateAudio.js.
*AudioForm.js.
Now.
*
*I need to get the data from GraphQL server for given audioID (will get from the path params).
*then set that audio in state.
*and pass the audio state to AudioForm component.
I'm initializing the audio with blank fields initially in state.
I can get the data using Query but I'm not sure how to update the audio state which will be passed to AudioForm and populated the data further.
UpdateAudio.js
import React, { Component, Fragment } from 'react';
import { withRouter } from 'react-router-dom';
import { Mutation, Query, graphql } from 'react-apollo';
import { UpdateAudioQuery, GetAudio } from '../query';
import { Button, Grid, LinearProgress } from '@material-ui/core';
import AudioForm from '../components/AudioForm';
class UpdateAudio extends Component {
constructor(props) {
super(props);
this.state = {
site: "5d517862-0630-431c-94b1-bf34de6bfd8b",
audio: {
site: "5d517862-0630-431c-94b1-bf34de6bfd8b",
title: '',
description: '',
}
};
this.updateCache = this.updateCache.bind(this);
this.handleChange = this.handleChange.bind(this);
}
updateCache = (cache, { data }) => {
if (data.createAudio.audio.guid) {
console.log("redirecting...")
this.props.history.push('/audios')
}
}
handleChange = event => {
const { name, value } = event.target;
this.setState(prevState => ({
audio: {
...prevState['audio'],
[name]: value,
}
}));
};
render() {
return (
<Fragment>
<AudioForm audio={this.state.audio} handleChange={this.handleChange} />
<Grid container alignItems="center" justify="center">
<Grid item>
<Mutation mutation={UpdateAudioQuery} update={this.updateCache}>
{(Update, {loading, data, error}) => {
if (loading) {
return <Button variant="contained" size="large" color="secondary" disabled>Save</Button>
}
else if (error) {
return <div>{error}</div>
}
else {
return <Button variant="contained" size="large" color="secondary" onClick={() => Update({variables: this.state.audio})}>Update</Button>
}
}}
</Mutation>
</Grid>
</Grid>
</Fragment>
)
}
}
UpdateAudio = withRouter(UpdateAudio)
export default graphql(GetAudio,
{
name:'Get',
options: ownProps => ({ variables: {site: "5d517862-0630-431c-94b1-bf34de6bfd8b", guid: ownProps.match.params.guid} })
})(UpdateAudio);
AudioForm.js
import React, {Fragment} from 'react';
import { TextField, Grid } from '@material-ui/core';
class AudioForm extends React.Component {
render() {
console.log(this.props.audio.title);
return (
<Fragment>
<form noValidate autoComplete="off">
<Grid container spacing={24} alignItems="center" justify="center">
<Grid item md={4} xs={12}>
<TextField
id="audio-title"
name="title"
label="Title"
margin="normal"
variant="outlined"
InputLabelProps={{ shrink: true }}
fullWidth
defaultValue={this.props.audio.title}
onChange={this.props.handleChange}
required
/>
</Grid>
<Grid item md={4} xs={12}>
<TextField
id="audio-description"
name="description"
label="Description"
margin="normal"
variant="outlined"
InputLabelProps={{ shrink: true }}
fullWidth
defaultValue={this.props.audio.description}
onChange={this.props.handleChange}
/>
</Grid>
</Grid>
</form>
</Fragment>
);
}
}
export default AudioForm;
query.js
import gql from 'graphql-tag';
export const GetAudio = gql`
query Get($site: String!, $guid: String!) {
getAudio(site: $site, guid: $guid){
guid
siteGuid
title
description
}
}`
export const UpdateAudio = gql`
mutation Update($site: String!, $guid: String!, $title: String, $description: String) {
updateAudio(
input:{
guid: $guid,
site: $site,
title: $title,
description: $description
}
) {
audio{
guid
siteGuid
title
status
}
}
}`;
A: As I understand your question:
Just do the query like below. Do not use the render-props method for the query, because you can't call setState there.
The best place for the method below is the getDerivedStateFromProps function.
Do this work there.
client.query({
query: GetAudio,
variables: { site, guid }
})
.then((res) => {
    // check for response and do setState
    // note: the query result is nested under res.data
    const { title, description } = res.data.getAudio;
    this.setState(prevState => ({
        audio: { ...prevState.audio, title, description }
    }));
})
.catch( err => console.log(err));
Also check the link from official docs
https://www.apollographql.com/docs/react/essentials/queries.html#manual-query
I've read your question several times; if I've still got it wrong, forgive me.
Stay blessed.
Labour says confidence in justice system at 'all-time low for victims'
By Douglas Ferreira
11 February 2021 | 8:22 am
Crime, Miscarriages of Justice, News
Labour has renewed its calls for a 'victims' law' following reports that victims are often left 're-traumatised' by their experiences with the criminal justice system. The shadow minister for victims and youth justice, Peter Kyle, has introduced a new Bill in Parliament calling for a fresh law to be passed giving legally enforceable rights to victims of crime.
Currently, victims already have rights under the Victims' Code, which include the right to information and the right to make a victim impact statement, but, according to the Crime Survey for England & Wales, only one in five victims had heard of it in 2017-18. A BBC report showed earlier this month that victims are losing faith in the justice system, with 22.6% of crimes being closed due to victims not supporting prosecution. The Labour MP agrees that confidence in the justice system is 'at an all-time low for victims'.
The Labour MP has said, in his motion to bring in the Bill, that 'only a minority of victims understand their rights and only a fraction will ever exert them'. He believes that, by putting these rights on to statutory footing, victims will be empowered with the necessary knowledge of their rights, as well as the tools to uphold them.
The Bill introduces, among other things, a requirement for victims to be read their rights as early as perpetrators are, a Victims' Commissioner independent of government, and career-limiting consequences for those in the justice system who fail to uphold victims' rights, such as the creation of a register containing the names of such individuals.
The issue has a long history and back in 2015 Labour published recommendations on a victims' law. Defence lawyers have expressed their concerns about the undermining of defendants' rights and the 'pendulum swing' towards the rights of victims. Conservative governments had already pledged to enshrine victims' rights into law before. According to Kyle, there have been 1 million sexual offences and 350,000 rapes since the first promise of a Bill by the Conservative government and none of these victims could benefit from their promised statutory rights.
Author: Douglas Ferreira
Douglas is a legal researcher with particular interest in criminal law and matters relating to miscarriages of justice. Before changing his career to law, he obtained a degree in translation studies.
Victims of crime treated as 'bystanders'
Government limits digital strip searches for rape…
New government scheme to protect victims from unwanted…
Manchester police fail to record 80,000 crimes in last 12…
Rape victims waiting nearly three years for justice in some…
package pl.wurmonline.deedplanner.logic.ground;
import pl.wurmonline.deedplanner.data.Map;
import pl.wurmonline.deedplanner.data.Tile;
import pl.wurmonline.deedplanner.graphics.Camera;
import pl.wurmonline.deedplanner.input.Mouse;
public abstract class GroundMode {

    private final String name;

    public GroundMode(String name) {
        this.name = name;
    }

    public final void update(Mouse mouse, Map map, Camera cam) {
        Tile tile = cam.getHoveredTile();
        action(mouse, map, tile);
    }

    public abstract void action(Mouse mouse, Map map, Tile tile);

    @Override
    public final String toString() {
        return name;
    }
}
I always feel like a valued customer who is treated with respect and consideration of my schedule and time. Of course they also fix what is needed and keep my older car in tip top shape.
I was lucky enough to break down near here. These guys are real mechanics who understand family financial dynamics. They even gave me a ride home and set me up with a rental after my estimate, until my car was fixed. Plus I got a warranty and a car wash. Thanks to all you guys.
I've been taking my cars to Ken and his crew for nearly 20 years. They are courteous, caring and know their stuff. Do not hesitate — you will not be disappointed by the level of service and attention to detail Beachside offers.
Thorough honest service at a reasonable price.
My Nissan Xterra broke down on me unexpectedly in Hermosa Beach. I need my vehicle daily and dreaded the idea of being stuck at a shop for days. Luckily, I received a thorough explanation and quote, followed by a speedy repair. Thank you for making the best out of a bad situation, Beachside Auto Repair. Highly recommended!
Ken and the crew at Beachside provide outstanding service at prices that are more than fair. So much better than a dealership, especially if you live nearby. Beachside keeps our 19-year-old car in great shape as it approaches 154,000 miles.
The car is fixed. No appointment was necessary. Back driving the same day.
Very competent auto mechanics with efficient, workmanlike attitudes, and courteous. We have used only this garage since at least 2005. Why go elsewhere?
Nicholas said the business always provides great work and always stands by it.
Dennis said the staff provided great service.
Christine said she would recommend the business for service.
Dan said they have good service and good guys there.
Clarissa said the service was quick, the pricing was fair and the staff was friendly.
R bodies (from refractile bodies, also R-bodies) are polymeric protein inclusions formed inside the cytoplasm of bacteria. Initially discovered in kappa particles, bacterial endosymbionts of the ciliate Paramecium, R bodies (and genes encoding them) have since been discovered in a variety of taxa.
Morphology, assembly, and extension
At neutral pH, type 51 R bodies resemble a coil of ribbon approximately 500 nm in diameter and approximately 400 nm deep. Encoded by a single operon containing four open reading frames, R bodies are formed from two small structural proteins, RebA and RebB. A third protein, RebC, is required for the covalent assembly of these two structural proteins into higher-molecular weight products, visualized as a ladder on an SDS-PAGE gel.
At low pH, type 51 R bodies undergo a dramatic structural rearrangement. Much like a paper yo-yo, the ribbon extends (from the center) to form a hollow tube with pointed ends that can reach up to 20 μm in length.
Other types of R bodies from different bacterial species vary in their size, ribbon morphology, and triggers for extension.
Function
When kappa particles shed from a killer paramecium are ingested, R bodies extend within the acidic food vacuole of the predatory paramecium, distending and rupturing the membrane. This liberates the contents of the food vacuole into the cytoplasm of the paramecium. While feeding kappa particles to sensitive paramecia results in their death, feeding purified R bodies or R bodies recombinantly expressed in E. coli is not toxic. Thus, R bodies are thought to function as a toxin delivery system.
R bodies are also capable of rupturing E. coli spheroplasts, demonstrating that they can rupture membranes in a foreign context, and they can be engineered to extend at a variety of different pH levels.
References
Cell biology
Cell anatomy
Protein complexes
Bacteriology
Biotechnology
East survives OT thriller with Mosinee for best start since 2004
Published on September 7, 2019 in News/Sports
By Scott Williams
MOSINEE – For four quarters, Wausau East was unable to solve the puzzle that was Mosinee quarterback Trey Fitzgerald.
He spent most of Friday night shredding its defense. Most of the damage he inflicted was with his right arm. But Fitzgerald also hurt East with his legs.
None of that mattered. All the Lumberjacks needed was to stop Fitzgerald just once in overtime and the previous 48 minutes would be a distant memory.
The Lumberjacks swarmed the senior signal-caller in the backfield on a potential game-winning two-point conversion run to seal a heart-stopping 45-44 overtime win at Veterans Park.
Some celebrations are sweeter than others. After convincing wins over Merrill and Southern Door, the Lumberjacks had every right to savor this one a little more.
Why not. East is off to a 3-0 start for the first time since 2004.
"It feels awesome," said Lumberjacks coach Kevin Grundy of the unbeaten start to the season. "We talk about being WE strong – Wausau East. It's all about the we and not the me.
"It was a great football game. This is Friday night. This is what it's all about. Competing and doing your best."
Fitzgerald, who rushed for three scores and threw for three more TDs, plowed into the end zone from 3 yards out with 47 seconds left to force the extra period.
There wasn't a single player who saw considerable playing time who didn't leave everything he had on the field.
The game had taken a physical toll on everyone. Each and every player was gassed.
Count Donovan Leverette in that group. Who could blame him. The senior halfback for East rode a dominating blocking performance in the trenches for 304 yards and four touchdowns.
"I don't know how much longer I could have gone. I'm pretty sore right now," said Leverette, who scored on runs of 8, 52, 32 and finally three yards in overtime. "I was so dazed and confused, I wasn't sure what was happening (in overtime).
"I love my O linemen. I do believe they're the best O linemen in the state. They always give me the biggest holes to hit, and after that it's up to me."
The wear and tear of the physical matchup ultimately played a big part in Mosinee coach Craig Martens' decision to go for the two-point try instead of kicking an extra point and forcing a second overtime.
"We had guys cramping. Too many things going on (physically). Too many moving parts," Martens said. "Our kids were feeling ready (to go for the win). We felt we had the right play called. It just didn't come up for us."
Both sides faced serious gut checks, especially from a mental perspective.
Mosinee was on the verge of being blown out in the first half, staring at a 28-7 deficit with two minutes left in the second quarter.
Miscues by East opened the door and Mosinee reversed their fortunes behind the lethal combination of wideout Drayton Lehman and Fitzgerald.
Spurred on by turnovers on four consecutive possessions by the Lumberjacks, Mosinee eventually took a 31-28 lead in the third quarter.
Fitzgerald was the catalyst. He completed 31-of-40 passes for 386 yards. Lehman was on the receiving end of 19 passes for 250 yards.
"I'm proud of the way they reacted being down 21 points. The 2018 Mosinee may not have reacted the same way," Martens said. "A big emphasis for us from last year to this year has been responding to adversity."
East could certainly relate to how the Indians felt.
After scoring on their first four possessions, the Lumberjacks proceeded to turn the ball over the next four times it had the ball.
Mosinee turned those turnovers into 24 unanswered points and a 31-28 lead after three quarters.
Instead of feeling sorry for itself, East picked itself up, went back to playing big-boy power football and found a way to open the season with three straight victories for the first time in 14 years.
"This was great for us. Facing adversity will be something that only makes us stronger," Leverette said. "That was the best thing to come out of this for us. It's a confidence booster."
Wausau East 21 7 0 10 7 – 45
Mosinee 7 14 10 7 6 – 44
WE – Jacob Thome 31 pass from Matt Heinrich (Caleb Gruszynski kick), 8:50
M – Drayton Lehman 14 pass from Trey Fitzgerald (Michal Dul kick) 3:44
WE – Heinrich 2 run (Gruszynski kick), 1:42
WE – Donovan Leverette 8 run (Gruszynski kick), :33
WE – Leverette 52 run (Gruszynski kick), 9:17
M – Fitzgerald 1 run (Dul kick), 2:00
M – Lehman 35 pass from Fitzgerald (Dul kick), :23
M – Dul 42 FG, 1:48
WE – Leverette 32 run (Gruszynski kick), 11:23
WE – Gruszynski 23 FG, 2:24
M – Fitzgerald 3 run (Dul kick), :47
WE – Leverette 3 run (Gruszynski kick)
M – Cyle Kowalski 16 pass from Fitzgerald (run failed)
Previous Story Previous post: Football roundup: Everest, West suffer losses while Newman wins
Next Story Next post: Yelich, Davies pace Brewers past Cubs, 7-1
Latest from News
NOT REAL NEWS: A look at what didn't happen this week
By BEATRICE DUPUY, ARIJETA LAJKA and AMANDA SEITZ Associated Press A roundup
4 dead in house fire in northeastern Wisconsin
OCONTO, Wis. (AP) — A house fire in northeastern Wisconsin has killed
Wisconsin Senate to vote on 1 of 8 homelessness bills
MADISON, Wis. (AP) — The Wisconsin Senate plans to pass one of
2 injured in hit-and-run crash January 18, 2020
\section{Introduction}
Adversarial robustness is an important issue in NLP, asking how to proof models against confounding tokens designed to maliciously manipulate model outputs. As such models become more powerful and ubiquitous, research continues to discover surprising vulnerabilities \citep{wallace_universal_2019}, demanding improved robustness methods.
A common defense method to combat adversarial attacks is adversarial training.
Given knowledge of attack strategies,
it constructs synthetic adversarial examples
to augment clean examples during training \citep{zhang_adversarial_2020}.
Intuitively, the model will \textit{implicitly} learn to ignore attacking tokens and become robust to that type of attack.
In practice, however, this goal can be challenging through data augmentation alone.
In this study, we propose a simple yet effective adversarial training schema for additive attacks:
\textit{explicitly} training the model to ignore adversarial tokens. We do this by augmenting the underlying model with
a rationale extractor \cite{lei-etal-2016-rationalizing} to serve as an input filter, and then training this extractor to ignore attacking tokens as an additional joint objective to overall label accuracy (\figref{fig:model_diagram}).
\begin{figure}[t]
\centering
%
%
\includegraphics[width=0.9\columnwidth]{figs/model_diagram_v7.png}
\caption{
Example illustration of an ideal rationale model.
The attack sentence adds confusing information ``two'' to the correct answer ``four'' (five except Helen).
In addition to the typical predictor $f$, an ideal rationale model identifies the {\em relevant} tokens ($\hat{\vect{r}}\cdot \vect{x}$) in the input with a rationale extractor $g$ and only presents relevant tokens to the predictor after filtering the attack text.
As a result, it effectively ignores the attack.
%
%
%
%
%
%
%
}
\label{fig:model_diagram}
\end{figure}
In addition to training the extractor to respect the attacking/non-attacking token dichotomy, we also explore the utility of human-provided explanations in this regard. Doing so, we ask: does learning from human rationales help the model avoid attending to attacking tokens?
Fine-tuning BERT \citep{devlin_bert:_2018} and RoBERTa \citep{liu_roberta_2019}
on
multiple datasets,
we demonstrate that the additive attack proposed by \citet{jia_adversarial_2017} does reduce model accuracy, and that data augmentation with adversarial examples provides limited benefit in defending these models from this attack in most cases.
Our main results are
that rationale-style models learn to ignore these attacks more effectively than models trained
with data augmentation alone, leading to an improvement of $\sim$10\%
in accuracy on attacked examples compared to baseline models and an advantage of
$2.4$\%
over data augmentation alone, mostly recovering clean test performance.
While human explanations may potentially improve the interpretability of these models, they are of limited use in improving this defense even further.
In summary, we offer three main contributions:
\setlist{nolistsep}
\begin{itemize}[leftmargin=*]
%
\item We show that explicitly training an extractive rationale layer to ignore attack tokens is more effective
%
than implicitly training a model via data augmentation with adversarial examples.
%
\item We assess whether human-annotated rationales augment this defense, showing that they have only a limited benefit.
\item We conduct an in-depth error analysis of differences between models, explaining some of the patterns we observe in our main results.
%
%
\end{itemize}
\section{Related Work}
We build on prior work on adversarial robustness and learning from explanations.
\para{Adversarial robustness.}
Adversarial attacks against NLP models seek to maliciously manipulate model output by perturbing model input. \citet{zhang_adversarial_2020} present a survey of both attacks and defenses.
Example attacks include character-level manipulations \citep{gao_black-box_2018,li_textbugger_2019}, input removal \citep{li_understanding_2017, feng_pathologies_2018}, synonym substitutions \citep{ren_generating_2019}, and language model-based slot filling \citep{li_bert-attack_2020,garg_bae_2020,li_contextualized_2021}.
A distinction in attack types is whether the attack requires access to the model \citep{ebrahimi_hotflip_2018,yoo_towards_2021,wallace_universal_2019} or not \citep{alzantot_generating_2018,jin_is_2020}.
TextAttack \citep{morris_textattack_2020} is a framework and collection of attack implementations.
Our work focuses on the \textsc{AddSent}\xspace attack proposed by \citet{jia_adversarial_2017} in reading comprehension.
As interest in adversarial attacks has increased, so has interest in developing models robust to these attacks. A popular defense method is adversarial training via data augmentation, first proposed by \citet{szegedy_intriguing_2014} and employed by \citet{jia_adversarial_2017} to bring their model \textit{almost} back to clean test performance. A recent example in this vein is \citet{zhou_defense_2020}, which proposes Dirichlet Neighborhood Ensemble as a means for generating dynamic adversarial examples during training. Another popular approach is knowledge distillation \citep{papernot_distillation_2016}, which trains an intermediate model to smooth between the training data and the final model.
Our work explores a new direction that explicitly learns to ignore attacks.
\para{Learning from explanations.}
Recent work has sought to collect datasets of human-annotated explanations, often in the form of binary \textit{rationales}, in addition to class labels \citep{deyoung_eraser_2019,wiegreffe_teach_2021}, and to use these explanations as additional training signals to improve model performance and robustness, sometimes also known as \textit{feature-level feedback} \citep{hase_when_2021,beckh_explainable_2021}.
An early work is \citet{zaidan_using_2007}, which uses human rationales as constraints on an SVM. More recently, \citet{ross_right_2017} uses human rationales to penalize neural net input gradients showing benefits for out-of-domain generalization, while \citet{erion_improving_2021} use a similar method based on ``expected gradients'' to produce improvements in in-domain test performance in certain cases. \citet{katakkar_practical_2021} evaluate feature feedback for two attention-style models, finding, again, gains in out-of-domain performance, while \citet{han_influence_2021} use influence functions \citep{koh_understanding_2017} to achieve a similar outcome. Where our study differs from most previous work is in using feature feedback for adversarial rather than out-of-domain robustness. A concurrent work by \citet{chen2022RationaleRobustness} uses rationalization
to improve robustness. The proposed method is similar to our work, but we explore supervision with attack tokens and achieve stronger robustness to additive attacks.
\section{Adversarial Attacks and Datasets}
In this paper, we focus on model robustness against the \textsc{AddSent}\xspace additive attack proposed by \citet{jia_adversarial_2017}. The attack is designed for
reading comprehension: consider each instance as a tuple of document, query, and label $(d, q, y)$, where $y$ indicates whether the query is supported by the document.
The attack manipulates the content of the query to form an attack sentence ($A$) and adds $A$ to the document to confuse the model.
Specifically, \textsc{AddSent}\xspace proceeds as follows:
\setlist{nolistsep}
\begin{enumerate}[leftmargin=*]
\item[1.] We modify the query $q$ by converting all named entities and numbers to their nearest neighbor in the GloVe embedding space \citep{pennington_glove_2014}. We flip all adjectives and nouns to their antonyms using WordNet \citep{miller_wordnet_1995}, yielding a mutated query $\hat{q}$. If we cannot find matching named entities or antonyms of adjectives and nouns, we skip the example.%
%
%
\item[2.] We convert the mutated query $\hat{q}$ into an adversarial attack $A$ using CoreNLP \citep{manning_stanford_2014} constituency parsing, under a set of about 50 rules enumerated by \citet{jia_adversarial_2017}. This step converts it into a factual statement that resembles but is not semantically related to the original query $q$.
\item[3.] The adversarial attack $A$ is inserted at
%
a random location within the original document and leads to a new tuple $(d', q, y)$.\footnote{We experimented with variants of inserting only at the beginning or the end. The results are qualitatively similar, so we only report random in this paper.}
%
\end{enumerate}
The key idea behind the \textsc{AddSent} attack is that the mutations alter the semantics of the query by mutating the named entities and numbers,
so that the attack contains words or phrases that are likely confusing to the model without changing the true semantics of the input. An example of the \textsc{AddSent}\xspace attack is shown in Figure~\ref{fig:addsent_example}.
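The three steps above can be sketched in a few lines of Python. This is a toy illustration only: tiny hand-written substitution maps stand in for the GloVe nearest-neighbor search, the WordNet antonym lookup, and the CoreNLP parse-based conversion rules used by the real \textsc{AddSent}\xspace implementation, and the example tokens mirror Figure~\ref{fig:addsent_example}.

```python
import random

# Toy stand-ins for GloVe nearest neighbors and WordNet antonyms.
NEAREST_ENTITY = {"Bayern": "Leverkusen", "Munich": "Cologne", "2000": "1998"}
ANTONYM = {"founded": "founded"}  # no-op placeholder for kept words

def mutate_query(query_tokens):
    """Step 1: swap named entities/numbers and flip nouns/adjectives."""
    mutated = [NEAREST_ENTITY.get(t, ANTONYM.get(t, t)) for t in query_tokens]
    if mutated == query_tokens:  # nothing could be mutated -> skip example
        return None
    return mutated

def to_statement(mutated_tokens):
    """Step 2: convert the mutated query into a declarative attack sentence.
    The real system applies ~50 parse-based rules; our toy query is already
    declarative, so joining the tokens suffices here."""
    return " ".join(mutated_tokens) + "."

def insert_attack(document_sentences, attack, rng):
    """Step 3: splice the attack sentence at a random position in the document."""
    position = rng.randrange(len(document_sentences) + 1)
    return document_sentences[:position] + [attack] + document_sentences[position:]
```

The mutated statement resembles the query on the surface but is semantically unrelated to it, which is exactly what makes it confusing to a model that matches on entities and numbers.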
\begin{figure}[t]
\small
\centering
\fbox{\begin{minipage}{\linewidth}
\textbf{Query} $q$:
FC Bayern Munich was founded in 2000.
\textbf{Mutated Query} $\hat{q}$:
DYNAMO Leverkusen Cologne was founded in 1998.
\textbf{Modified Document} $d'$
\dots has won 9 of the last 13 titles. DYNAMO Leverkusen Cologne was founded in 1998. They have traditional local rivalries with \dots
\end{minipage}}
\caption{An example of the \textsc{AddSent}\xspace attack.}
\label{fig:addsent_example}
\end{figure}
The original approach includes an additional step of using crowdsourced workers to filter ungrammatical sentences.
We do not have access to this manual validation process in all datasets.
Occasionally, \textsc{AddSent}\xspace generates ungrammatical attacks
but it nevertheless proves empirically effective in reducing the performance of our models.
\para{Datasets.}
To evaluate our hypotheses on learning to ignore adversarial attacks, we
train and evaluate models on the Multi-Sentence Reading Comprehension (\textsc{MultiRC}\xspace) \citep{khashabi_looking_2018} and Fact Extraction and VERification (\textsc{FEVER}\xspace) \citep{thorne_fever_2018} datasets. Both are reading comprehension datasets, compatible with the \textsc{AddSent}\xspace attack.
For \textsc{MultiRC}\xspace, the query consists of a question and potential answer about the document, labeled as true or false, while for \textsc{FEVER}\xspace it is a factual claim about the document labeled as ``supported'' or ``unsupported''.
Both datasets include \textit{human rationales}, %
indicating which tokens %
are pertinent to assessing the query. Table \ref{tab:data} summarizes their basic statistics.
%
\begin{table}[t]
\small
\centering
\begin{tabular}{@{}llll@{}}
\toprule
Dataset & \begin{tabular}[c]{@{}l@{}}Text \\ length\end{tabular} & \begin{tabular}[c]{@{}l@{}}Rationale \\ length\end{tabular} & \begin{tabular}[c]{@{}l@{}} Total \\ size \end{tabular} \\ \midrule
\textsc{MultiRC}\xspace & 336.0 & 52.0 & 32,088 \\
\textsc{FEVER}\xspace & 335.9 & 47.0 & 110,187 \\
\textsc{SQuAD}\xspace & 119.8 & ------ & 87,599 \\
\bottomrule
\end{tabular}
\caption{Basic statistics of \textsc{MultiRC}\xspace, \textsc{FEVER}\xspace, and \textsc{SQuAD}\xspace.}
\label{tab:data}
\end{table}
In modeling these two datasets, we follow standard practice in appending the query
to the end of the document with [SEP] tokens.
We use train/validation/test splits prepared by the ERASER dataset collection \citep{deyoung_eraser_2019}.
Because we are interested in relative differences between training regimes rather than absolute performance, we subsample the \textsc{FEVER}\xspace training set to 25\% so that it is comparable to \textsc{MultiRC}\xspace for the sake of training efficiency.
Directly applying the synthetic \textsc{AddSent}\xspace attack to \textsc{MultiRC}\xspace and \textsc{FEVER}\xspace leads to occasionally ungrammatical adversarial examples due to incorrectly applied conversion heuristic or errors in constituency parsing.
To alleviate this concern, we further evaluate on \textsc{SQuAD}\xspace \citep{rajpurkar_squad_2016}, for which \citeauthor{jia_adversarial_2017} provide an \textsc{AddSent}\xspace-attacked {\em evaluation} set that is re-written and approved by human workers.
However, this dataset does not have human rationales.
We use the train/validation/test splits provided by \citeauthor{jia_adversarial_2017} in our experiments.
\section{Modeling}
Our study assesses whether adding an explicit
rationale extractor\xspace
to a %
model and training it to ignore attack tokens results in a more effective defense than simply adding attacked examples to the training set. This comparison results in several combinations of model architectures and training regimes.
We denote each training instance as $(\vect{x}, \vect{r}, y)$: a text sequence $\vect{x}$ consisting of the concatenated document and query, a ground-truth binary rationale sequence $\vect{r}$, and a binary label $y$.
\para{Baseline models and training.}
We use BERT \citep{devlin_bert:_2018} and RoBERTa \citep{liu_roberta_2019} as basis models.
In the baseline training condition we fine-tune these models as normal, evaluating them on both the original test set and a version of the test set where each item has been corrupted with the \textsc{AddSent}\xspace attack described above.
We denote this condition as ``\textsc{No Adv.}\xspace''
In the baseline adversarial training via data augmentation condition (denoted \textsc{Adv.}\xspace), we add \textsc{AddSent}\xspace-attacked versions of each training example to the training set on a one-to-one basis, allowing the model to train for the presence of such attacks. This represents a fairly standard baseline defense in the literature \citep{zhang_adversarial_2020}.
Following prior adversarial robustness literature \citep{jia_certified_2019}, we also consider a stronger baseline by augmenting the training set with $K$ perturbed examples for each training example. For our main experiments, we use $K = 10$. This setting (denoted \textsc{Adv.-10x}\xspace) should in theory provide abundant signal for the baseline method to implicitly adapt to the \textsc{AddSent}\xspace attack.
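The \textsc{Adv.}\xspace and \textsc{Adv.-10x}\xspace regimes differ only in how many attacked copies accompany each clean example. A minimal sketch (with a placeholder `attack_fn` standing in for \textsc{AddSent}\xspace; labels are left unchanged because the attack is designed not to alter the true semantics):

```python
def augment(train_set, attack_fn, k=1):
    """Return the clean examples plus k attacked variants of each one.

    k=1 corresponds to the Adv. condition; k=10 to Adv.-10x.
    """
    augmented = []
    for text, label in train_set:
        augmented.append((text, label))
        for i in range(k):
            # label is unchanged: the attack must not alter the answer
            augmented.append((attack_fn(text, i), label))
    return augmented
```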
\para{Rationale model.}
To lend the baseline model an extractor capable of filtering out confounding tokens, we use the rationale model proposed by \citet{lei-etal-2016-rationalizing}. It comprises a rationale extractor $g$ and a label predictor $f$ (Fig. \ref{fig:model_diagram}). The rationale extractor generates a binary predicted rationale $\hat{\mathbf{r}}$, which is applied as a mask over the input to the predictor via masking function $m$, producing a predicted label:
\begin{equation}
\begin{aligned}
& g(\mathbf{x}) \rightarrow \hat{\vect{r}} \\
& f(m(\mathbf{x}, \hat{\vect{r}})) \rightarrow \hat{y}
\end{aligned}
\end{equation}
The two components are trained together to optimize predicted label accuracy as well as loss associated with the predicted rationale. In an unsupervised scenario, this loss punishes the norm of the predicted rationale, encouraging sparsity on the (heuristic) assumption that a sparse rationale is more interpretable. In this study, we rather consider the supervised scenario, where we punish $\hat{\vect{r}}$'s error with respect to a ground-truth rationale $\vect{r}$. However, we find empirically that the rationale sparsity objective is useful in combination with the rationale supervision objective, leading to the following joint objective function using cross-entropy loss $\ensuremath{\mathcal{L}_{CE}}$ with hyperparameter weights $\lambda_1$ and $\lambda_2$:
\begin{equation}
\ensuremath{\mathcal{L}_{CE}}(\hat{y},y) +
\lambda_1 \ensuremath{\mathcal{L}_{CE}}(\hat{\vect{r}},\vect{r}) +
\lambda_2 ||\hat{\vect{r}}||.
\end{equation}
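The joint objective can be sketched numerically as follows. This is a simplified scalar version (not the paper's implementation), assuming a binary label, per-token rationale probabilities, and mean-reduced cross-entropy terms; the $\lambda_1$, $\lambda_2$ values shown are illustrative.

```python
import math

def bce(p, y):
    """Binary cross-entropy for one predicted probability and 0/1 target."""
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def joint_loss(y_hat, y, r_hat, r, lambda1=1.0, lambda2=0.01):
    """Label loss + supervised rationale loss + sparsity penalty on r_hat."""
    label_loss = bce(y_hat, y)
    rationale_loss = sum(bce(p, t) for p, t in zip(r_hat, r)) / len(r)
    sparsity = sum(r_hat) / len(r_hat)  # normalized L1 norm of the rationale
    return label_loss + lambda1 * rationale_loss + lambda2 * sparsity
```

Under \textsc{Adv. + Atk. Sup.}\xspace, the target rationale `r` marks non-attack tokens, so a rationale that selects attack tokens is penalized directly through the second term.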
\paragraph{Adversarial training with rationale supervision.}
To introduce rationale supervision, we augment the training set with attacked examples on a one-to-one basis with original examples, similar to adversarial training.
Moreover, we can change the ground-truth rationale to reflect the desired behavior for the model. We consider two options for this new ground-truth $\vect{r}$: (1) a binary indicator of whether a token is adversarial or not (\textsc{Adv. + Atk. Sup.}\xspace), and (2) the
human-annotated rationale (\textsc{Adv. + Human Sup.}\xspace), which also filters adversarial tokens.
Table~\ref{tab:models} shows all the combinations of setups that we use in our study.
For each of these setups, we test one rationale model using independent BERT modules for $g$ and $f$, and one using independent RoBERTa\xspace modules for both.
We present additional implementation details in the appendix.
\begin{table}[t]
\centering
\small
\begin{tabular}{ll}
\toprule
Data augmentation? & Rationale? \\
\midrule
\multirow{2}{*}{\shortstack{No data \\ augmentation}} & None \\
& Human (\textsc{Human Sup.}\xspace) \\
\midrule
\multirow{3}{*}{\shortstack{Augmented with \\ attack data (Adv.)}} & None \\
& Non-attack (\textsc{Adv. + Atk. Sup.}\xspace)\\
& Human (\textsc{Adv. + Human Sup.}\xspace) \\
\bottomrule
\end{tabular}
\caption{Summaries of rationale model setups.}
%
%
%
\label{tab:models}
\end{table}
Taken together, these conditions address our three research questions:
(1) Is adversarial training via rationale supervision more effective than via attacked examples?
(2) Does training the model to emulate human explanation make it intrinsically more robust to attacks?
(3) Do human explanations improve upon adversarial training with non-attack tokens as rationale supervision?
\section{Experimental Setup and Results}
We start by describing our experimental setup and evaluation metrics.
We then investigate model performance with different training regimes and conduct an in-depth error analysis.
\begin{table*}[t]
\centering
\small
\begin{tabular}{@{}lllrrrrrr@{}}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{Model}} &
\multicolumn{1}{c}{\multirow{2}{*}{Architecture}} &
\multicolumn{1}{c}{\multirow{2}{*}{Training}} &
\multicolumn{2}{c}{\textsc{MultiRC}\xspace (Acc.)} &
\multicolumn{2}{c}{\textsc{FEVER}\xspace (Acc.)} &
\multicolumn{2}{c}{\textsc{SQuAD}\xspace (Span F1)} \\ \cmidrule(l){4-9}
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
Clean &
Attacked\textsubscript{S} &
Clean &
Attacked\textsubscript{S} &
Clean &
Attacked\textsubscript{H} \\ \midrule
\multirow{5}{*}{BERT\xspace} &
\multirow{3}{*}{Standard} &
\textsc{No Adv.}\xspace &
68.6 &
62.6 &
88.2 &
78.9 &
86.4 &
62.8 \\
&
&
\textsc{Adv.}\xspace &
67.3 &
61.6 &
\textbf{88.5} &
84.8 &
86.0 &
80.4 \\
&
&
\textsc{Adv.-10x}\xspace &
66.2 &
65.9 &
86.3 &
84.5 &
82.2 &
78.0 \\ \cmidrule(l){2-9}
&
\multirow{3}{*}{Rationale} &
\textsc{Adv. + Atk. Sup.}\xspace &
69.6 &
66.2 &
87.1 &
\textbf{87.7} &
\textbf{86.5} &
\textbf{83.1} \\
&
&
\textsc{Human Sup.}\xspace &
70.0 &
64.4 &
88.0 &
76.7 &
\text{- - -} &
\text{- - -} \\
&
&
\textsc{Adv. + Human Sup.}\xspace &
\textbf{70.5} &
\textbf{69.4} &
87.5 &
87.5 &
\text{- - -} &
\text{- - -} \\
\midrule
\multirow{5}{*}{RoBERTa\xspace} &
\multirow{3}{*}{Standard} &
\textsc{No Adv.}\xspace &
82.6 &
76.5 &
93.5 &
83.0 &
93.2 &
81.0 \\
&
&
\textsc{Adv.}\xspace &
84.4 &
82.9 &
93.2 &
92.7 &
92.9 &
90.4 \\
&
&
\textsc{Adv.-10x}\xspace &
83.5 &
82.1 &
93.5 &
93.2 &
89.9 &
86.9 \\ \cmidrule(l){2-9}
&
\multirow{3}{*}{Rationale} &
\textsc{Adv. + Atk. Sup.}\xspace &
\textbf{85.2} &
\textbf{85.1} &
93.4 &
\textbf{93.4} &
\textbf{93.3} &
\textbf{91.4} \\
&
&
\textsc{Human Sup.}\xspace &
84.0 &
74.9 &
\textbf{94.1} &
85.7 &
\text{- - -} &
\text{- - -} \\
&
&
\textsc{Adv. + Human Sup.}\xspace &
85.0 &
82.5 &
93.4 &
\textbf{93.4} &
\text{- - -} &
\text{- - -} \\
\bottomrule
\end{tabular}
\caption{
Model performance on clean and attacked test sets for \textsc{MultiRC}\xspace, \textsc{FEVER}\xspace, and \textsc{SQuAD}\xspace.
Attacked\textsubscript{S} are synthetic attacks produced by \textsc{AddSent}\xspace, and Attacked\textsubscript{H} are attacks generated by human workers. We vary the level of augmentation for the standard classification models (\textsc{No Adv.}\xspace, \textsc{Adv.}\xspace, \textsc{Adv.-10x}\xspace). For rationale models, we control for the presence of adversarial training data and the type of rationale supervision:
\textsc{Adv. + Atk. Sup.}\xspace treats non-attack tokens as rationale, and \textsc{Human Sup.}\xspace does not use adversarial training.
Rationale models outperform the baseline classifiers across all attacked datasets.
}
\label{tab:accuracy_table}
\end{table*}
\begin{table*}[]
\centering
\small
\begin{tabular}{@{}llrrrrrr@{}}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{Model}} &
\multicolumn{1}{c}{\multirow{2}{*}{Training}} &
\multicolumn{2}{c}{\textsc{MultiRC}\xspace} &
\multicolumn{2}{c}{\textsc{FEVER}\xspace} &
\multicolumn{2}{c}{\textsc{SQuAD}\xspace} \\ \cmidrule(l){3-8}
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
\multicolumn{1}{l}{Attack \%} &
\multicolumn{1}{l}{\textsc{Non-A}\xspace \%} &
\multicolumn{1}{l}{Attack \%} &
\multicolumn{1}{l}{\textsc{Non-A}\xspace \%} &
Attack \% &
\textsc{Non-A}\xspace \% \\ \midrule
\multirow{3}{*}{BERT\xspace} & \textsc{Adv. + Atk. Sup.}\xspace & 1.4 & 98.4 & 0.2 & 96.7 & 27.8 & 99.7 \\
& \textsc{Human Sup.}\xspace & 87.5 & 8.2 & 66.7 & 17.8 & \text{- - -} & \text{- - -} \\
& \textsc{Adv. + Human Sup.}\xspace & 9.5 & 14.4 & 0.5 & 24.4 & \text{- - -} & \text{- - -} \\ \midrule
\multirow{3}{*}{RoBERTa\xspace} & \textsc{Adv. + Atk. Sup.}\xspace & 6.0 & 96.7 & 0.9 & 95.8 & 16.1 & 99.0 \\
& \textsc{Human Sup.}\xspace & 92.4 & 12.6 & 60.0 & 12.2 & \text{- - -} & \text{- - -} \\
& \textsc{Adv. + Human Sup.}\xspace & 32.1 & 15.6 & 0.1 & 23.0 & \text{- - -} & \text{- - -} \\
\bottomrule
\end{tabular}
\caption{Percentage of attack and non-attack (\textsc{Non-A}\xspace) tokens \textit{included} in the predicted rationales. Lower is better for attack tokens.
Arguably, a lower percentage of non-attack tokens is also better as it improves interpretability.}
\label{tab:percentage_inclusion_table}
\end{table*}
\subsection{Experimental Setup}
Our study examines whether rationale-style models, via explicit supervision, learn to ignore adversarial tokens better than standard models trained with adversarial augmentation. As we describe above, we train three variants of the standard classification model (\textsc{No Adv.}\xspace, \textsc{Adv.}\xspace, \textsc{Adv.-10x}\xspace) and three variants of the rationale model (\textsc{Adv. + Atk. Sup.}\xspace, \textsc{Human Sup.}\xspace, \textsc{Adv. + Human Sup.}\xspace).
Exploring these 6 architecture/training combinations for three datasets (\textsc{MultiRC}\xspace, \textsc{FEVER}\xspace,
and \textsc{SQuAD}\xspace%
) and two underlying models (BERT\xspace and RoBERTa\xspace), we report results from all trained models in Table \ref{tab:accuracy_table}. We report relevant metrics on both the clean test set and the attacked test set
for each model.
For \textsc{MultiRC}\xspace and \textsc{FEVER}\xspace, the metric we use is accuracy. Since \textsc{SQuAD}\xspace is a span extraction task, we report the Span F1 score instead.
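For concreteness, the Span F1 metric used for \textsc{SQuAD}\xspace is the standard token-overlap F1 between the predicted and gold answer spans. The sketch below is our own illustration of that computation, not code from our evaluation pipeline:

```python
from collections import Counter

def span_f1(pred_tokens, gold_tokens):
    """Token-level F1 between a predicted and a gold answer span."""
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

Per SQuAD convention, this score is averaged over examples, and when multiple gold answers exist the maximum F1 over them is taken.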
Performance on the attacked test set is our key measure of robustness.
Additionally, for the rationale models, we report
the mean percentage of attack and non-attack tokens included in each predicted rationale, two metrics that help explain our accuracy results.
The mean percentage of attack tokens included in the predicted rationale indicates the effectiveness of ignoring attack tokens: the lower the better.
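Concretely, given binary masks marking which tokens the extractor kept and which tokens were injected by the attack, both percentages reduce to simple conditional means. A minimal sketch (function and variable names are ours, for illustration only):

```python
def inclusion_rates(rationale_mask, attack_mask):
    """Percentage of attack and non-attack tokens kept in the rationale.

    rationale_mask: 1 if the extractor kept the token, else 0.
    attack_mask:    1 if the token was injected by the attack, else 0.
    """
    kept_attack = [r for r, a in zip(rationale_mask, attack_mask) if a]
    kept_other = [r for r, a in zip(rationale_mask, attack_mask) if not a]
    pct = lambda xs: 100.0 * sum(xs) / len(xs) if xs else 0.0
    return pct(kept_attack), pct(kept_other)
```

A robust extractor drives the first number toward zero while keeping the second high enough to preserve the evidence needed for prediction.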
\subsection{Main Results} \label{sec:main_results}
We focus our analysis on three questions:
\setlist{nolistsep}
\begin{enumerate}[leftmargin=*]
\item Does adversarial rationale supervision on augmented data improve robustness over adversarial data augmentation alone?
\item Does human rationale supervision improve adversarial robustness over a standard model?
%
\item Does the addition of human rationales to adversarial training further improve robustness?
\end{enumerate}
Table \ref{tab:accuracy_table} summarizes the main results of the paper, showing the accuracy of each combination of architecture, training regime, underlying model and dataset. Looking at the attacked versus clean test set performance for the standard model, we see that \textbf{the \textsc{AddSent}\xspace attack is effective},
reducing accuracy on \textsc{MultiRC}\xspace ($\sim$6\%), \textsc{FEVER}\xspace ($\sim$10\%), and \textsc{SQuAD}\xspace ($\sim$12-24\%).
\para{Adversarial rationale supervision (\textsc{Adv. + Atk. Sup.}\xspace).}
Rationale models provide an interface for explicitly supervising the model to ignore attack tokens.
Our {\em key} question is whether they can be used to improve the effectiveness of adversarial training.
We first discuss the effect of data augmentation and then show that rationale models are indeed more effective at ignoring attack tokens.
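The supervision signal itself can be as simple as a token-level binary cross-entropy on the extractor's inclusion probabilities, with targets of 0 for attack tokens and 1 for the supervised rationale tokens. The sketch below is a schematic illustration under that assumption, not our exact training objective:

```python
import math

def rationale_supervision_loss(inclusion_probs, target_mask):
    """Token-level BCE pushing the extractor to drop attack tokens
    (target 0) and keep supervised rationale tokens (target 1)."""
    eps = 1e-12  # numerical guard against log(0)
    total = 0.0
    for p, t in zip(inclusion_probs, target_mask):
        total += -(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps))
    return total / len(inclusion_probs)
```

In training, a term of this form would be added to the usual prediction loss, so the extractor is jointly shaped by the end task and the occlusion supervision.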
\textit{Data augmentation with adversarial examples works, mostly.} In almost all cases, data augmentation does result in improved performance on the attacked test set,
improving +5.9\% (\textsc{FEVER}\xspace) and +17.6\% (\textsc{SQuAD}\xspace) for BERT\xspace, as well as +6.4\% (\textsc{MultiRC}\xspace), +9.7\% (\textsc{FEVER}\xspace), and +9.4\% (\textsc{SQuAD}\xspace) for RoBERTa\xspace.
The exception is BERT\xspace on \textsc{MultiRC}\xspace, where it causes a 1.0\% decrease. However, in only one case out of six does data augmentation with adversarial examples bring the model back to clean test performance (RoBERTa\xspace on \textsc{MultiRC}\xspace, +0.3\%).
Surprisingly, BERT\xspace on \textsc{MultiRC}\xspace is the only scenario where the \textsc{Adv.-10x}\xspace augmentation significantly improves attack accuracy (a 4.3\% improvement over \textsc{Adv.}\xspace). In all other cases,
adding more adversarial examples does not improve robustness and even leads to a 3.5\% drop in \textsc{SQuAD}\xspace for RoBERTa\xspace.
This result suggests that BERT\xspace and RoBERTa\xspace may not learn effectively from additional adversarial examples alone.
\textit{Adversarial rationale supervision improves on adversarial training baselines in all cases.}
We see an improvement of +4.6\% for BERT\xspace on \textsc{MultiRC}\xspace, +2.9\% for BERT\xspace on \textsc{FEVER}\xspace, +2.7\% for BERT\xspace on \textsc{SQuAD}\xspace, +2.2\% for RoBERTa\xspace on \textsc{MultiRC}\xspace, +0.7\% for RoBERTa\xspace on \textsc{FEVER}\xspace, and +1.0\% for RoBERTa\xspace on \textsc{SQuAD}\xspace (2.4\% on average).
For the one case where adversarial data augmentation recovered clean test performance (RoBERTa\xspace on \textsc{MultiRC}\xspace), adversarial rationale supervision actually improves clean test performance by +2.5\%.
The effectiveness of \textsc{Adv. + Atk. Sup.}\xspace is even more salient when compared with \textsc{No Adv.}\xspace on the attacked test set:
+3.6\%, +8.8\%, and +20.3\% for BERT\xspace on \textsc{MultiRC}\xspace, \textsc{FEVER}\xspace, and \textsc{SQuAD}\xspace, and +8.6\%, +10.4\%, and +10.4\% for RoBERTa\xspace on \textsc{MultiRC}\xspace, \textsc{FEVER}\xspace, and \textsc{SQuAD}\xspace (+10.4\% on average).
The above findings remain true even when we compare our methods against the theoretically stronger baseline of \textsc{Adv.-10x}\xspace, where the training dataset is augmented with 10 perturbed examples for every training example. Our models trained with adversarial rationale supervision outperform \textsc{Adv.-10x}\xspace across all datasets and models, and our best model outperforms the \textsc{Adv.-10x}\xspace baseline by 3.3\%
on average. This result highlights both the efficiency and the effectiveness of our method: with adversarial rationale supervision, BERT\xspace and RoBERTa\xspace achieve a stronger defense against the \textsc{AddSent}\xspace attack using only
10\% of the adversarial examples.
Interestingly, the adversarially-supervised rationale model demonstrates a strong ability to generalize knowledge learned from synthetic attacks to tune out human-rewritten attacks (+20.3\% on \textsc{SQuAD}\xspace; recall we do not have human-rewritten attacks during training), indicating the potential of our method in a real-world scenario.
Table \ref{tab:percentage_inclusion_table} explains this success. The adversarially-supervised rationale model includes 6\% or fewer attack tokens on \textsc{MultiRC}\xspace and \textsc{FEVER}\xspace, indicating that it largely succeeded in learning to occlude these tokens from the predictor. Additionally, both BERT\xspace and RoBERTa\xspace rationale models are able to tune out most human-generated attack tokens, ignoring over 70\% of attack tokens while keeping 99\% of the original text.
\para{Effect of human rationale supervision alone (\textsc{Human Sup.}\xspace).}
We find
mixed evidence for whether human rationale supervision alone improves adversarial robustness. For BERT\xspace on \textsc{MultiRC}\xspace and RoBERTa\xspace on \textsc{FEVER}\xspace, human rationale supervision outperforms the standard classification model, but the opposite holds for the other two model/dataset combinations.
Table \ref{tab:percentage_inclusion_table} contextualizes this mixed result: the rationale model supervised solely on human rationales includes 60.0\% to 92.4\% of attack tokens in its rationale (compared to between 8.2\% and 17.8\% of non-attack tokens), indicating that it is largely fooled by the \textsc{AddSent}\xspace attack into exposing the predictor to attack tokens.
This result may be explained by the fact that human rationales for these datasets identify the part of the document that pertains particularly to the query, while the \textsc{AddSent}\xspace attack crafts adversarial content with a semantic resemblance to that same query. Hence, it is understandable that human rationale training would not improve robustness.
\para{Human and adversarial rationale supervision (\textsc{Adv. + Human Sup.}\xspace).}
Although human rationales alone may not reliably improve model robustness, a final question is whether human rationales can serve as a useful addition to adversarial training. Does training the model to both ignore adversarial tokens and emulate human explanations further improve robustness against the \textsc{AddSent}\xspace attack?
In two out of four cases, the performance of \textsc{Adv. + Human Sup.}\xspace equals that of \textsc{Adv. + Atk. Sup.}\xspace Only for BERT\xspace on \textsc{MultiRC}\xspace does \textsc{Adv. + Human Sup.}\xspace yield an improvement; it is the only configuration
that restores clean test performance for that model and dataset. For RoBERTa\xspace on \textsc{MultiRC}\xspace, it actually weakens attacked test performance.
While these results are mixed, Table \ref{tab:percentage_inclusion_table} shows that the model achieves them while including a much lower percentage of non-attack tokens ($\sim$20\% vs. $>$95\%), a concession toward model interpretability.
Overall, our results suggest that human rationales have limited effect in defending against adversarial attacks, but can be important in developing sparse (and potentially interpretable) models.
\begin{table*}[!h]
\footnotesize
\begin{tabular}{@{}p{0.32\linewidth} p{0.33\linewidth} p{0.32\linewidth}@{}}
\toprule
\multicolumn{1}{c}{\textbf{Human rationale \& attack}} & \multicolumn{1}{c}{\textbf{\textsc{Adv. + Atk. Sup.}\xspace}} & \multicolumn{1}{c}{\textbf{\textsc{Adv. + Human Sup.}\xspace}} \\ \midrule
\multicolumn{3}{c}{\textbf{(A) Example 1}, true label: False} \\ \midrule
\definecolor{highlight}{RGB}{238,238,187 }\sethlcolor{highlight}\hl{[CLS]}\xspace\xspace\xspace...\xspace\xspace
\hl{in}\hl{ }\hl{may}\hl{ }\hl{1904}\hl{ }\hl{,}\hl{ }\hl{the}\hl{ }\hl{couple}\hl{ }\hl{'}\hl{ }\hl{s}\hl{ }\hl{first}\hl{ }\hl{son}\hl{ }\hl{,}\hl{ }\hl{hans}\hl{ }\hl{albert}\hl{ }\hl{einstein}\hl{ }\hl{,}\hl{ }\hl{was}\hl{ }\hl{born}\hl{ }\hl{in}\hl{ }\hl{bern}\hl{ }\hl{,}\hl{ }\hl{switzerland}\hl{ }\hl{.}
\hl{ }\hl{their}\hl{ }\hl{second}\hl{ }\hl{son}\hl{ }\hl{,}\hl{ }\hl{eduard}\hl{ }\hl{,}\hl{ }\hl{was}\hl{ }\hl{born}\hl{ }\hl{in}\hl{ }\hl{zurich}\hl{ }\hl{in}\hl{ }\hl{july}\hl{ }\hl{1910}\hl{ }\hl{.}\hl{ }\hl{in}\hl{ }\hl{1914}\hl{ }\hl{,}\hl{ }\hl{the}\hl{ }\hl{couple}\hl{ }\hl{separated}\hl{ }\hl{;}\hl{ }\hl{einstein}\hl{ }\hl{moved}\hl{ }\hl{to}\hl{ }\hl{berlin}\hl{ }\hl{and}\hl{ }\hl{his}\hl{ }\hl{wife}\hl{ }\hl{remained}\hl{ }\hl{in}\hl{ }\hl{zurich}\hl{ }\hl{with}\hl{ }\hl{their}\hl{ }\hl{sons}\hl{ }\hl{.} they divorced on 14 february 1919 , having lived apart for five years .\xspace\xspace\xspace...\xspace\xspace \definecolor{highlight}{RGB}{255,204,204} \sethlcolor{highlight} \hl{a}\hl{ }\hl{-}\hl{ }\hl{te}\hl{te}\hl{ }\hl{did}\hl{ }\hl{n}\hl{ }\hl{'}\hl{ }\hl{t}\hl{ }\hl{stay}\hl{ }\hl{in}\hl{ }\hl{basel}\hl{ }\hl{after}\hl{ }\hl{charles}\hl{ }\hl{and}\hl{ }\hl{ho}\hl{ub}\hl{en}\hl{ }\hl{separated}\hl{ }\hl{.} \xspace\xspace\xspace...\xspace\xspace \definecolor{highlight}{RGB}{238,238,187} \sethlcolor{highlight} \hl{[SEP]}\hl{ }\hl{who}\hl{ }\hl{did}\hl{ }\hl{n}\hl{ }\hl{'}\hl{ }\hl{t}\hl{ }\hl{stay}\hl{ }\hl{in}\hl{ }\hl{zurich}\hl{ }\hl{after}\hl{ }\hl{albert}\hl{ }\hl{and}\hl{ }\hl{mari}\hl{c}\hl{ }\hl{separated}\hl{ }\hl{?}\hl{ }\hl{|}\hl{ }\hl{|}\hl{ }\hl{te}\hl{te}\hl{ }\hl{[SEP]} \textbf{\textsc{Adv.}\xspace~prediction: True}
& \definecolor{highlight}{RGB}{204,238,255}\sethlcolor{highlight}\hl{[CLS]}\xspace\xspace\xspace...\xspace\xspace
\hl{ }\hl{in}\hl{ }\hl{may}\hl{ }\hl{1904}\hl{ }\hl{,}\hl{ }\hl{the}\hl{ }\hl{couple}\hl{ }\hl{'}\hl{ }\hl{s}\hl{ }\hl{first}\hl{ }\hl{son}\hl{ }\hl{,}\hl{ }\hl{hans}\hl{ }\hl{albert}\hl{ }\hl{einstein}\hl{ }\hl{,}\hl{ }\hl{was}\hl{ }\hl{born}\hl{ }\hl{in}\hl{ }\hl{bern}\hl{ }\hl{,}\hl{ }\hl{switzerland}\hl{ }\hl{.}
\hl{ }\hl{their}\hl{ }\hl{second}\hl{ }\hl{son}\hl{ }\hl{,}\hl{ }\hl{eduard}\hl{ }\hl{,}\hl{ }\hl{was}\hl{ }\hl{born}\hl{ }\hl{in}\hl{ }\hl{zurich}\hl{ }\hl{in}\hl{ }\hl{july}\hl{ }\hl{1910}\hl{ }\hl{.}\hl{ }\hl{in}\hl{ }\hl{1914}\hl{ }\hl{,}\hl{ }\hl{the}\hl{ }\hl{couple}\hl{ }\hl{separated}\hl{ }\hl{;}\hl{ }\hl{einstein}\hl{ }\hl{moved}\hl{ }\hl{to}\hl{ }\hl{berlin}\hl{ }\hl{and}\hl{ }\hl{his}\hl{ }\hl{wife}\hl{ }\hl{remained}\hl{ }\hl{in}\hl{ }\hl{zurich}\hl{ }\hl{with}\hl{ }\hl{their}\hl{ }\hl{sons}\hl{ }\hl{.}\hl{ }\hl{they}\hl{ }\hl{divorced}\hl{ }\hl{on}\hl{ }\hl{14}\hl{ }\hl{february}\hl{ }\hl{1919}\hl{ }\hl{,}\hl{ }\hl{having}\hl{ }\hl{lived}\hl{ }\hl{apart}\hl{ }\hl{for}\hl{ }\hl{five}\hl{ }\hl{years}\hl{ }\hl{.}\xspace\xspace\xspace...\xspace\xspace a - tete did n ' t stay in basel after charles and houben separated .\xspace\xspace\xspace...\xspace\xspace\hl{ }\hl{[SEP]}\hl{ }\hl{who}\hl{ }\hl{did}\hl{ }\hl{n}\hl{ }\hl{'}\hl{ }\hl{t}\hl{ }\hl{stay}\hl{ }\hl{in}\hl{ }\hl{zurich}\hl{ }\hl{after}\hl{ }\hl{albert}\hl{ }\hl{and}\hl{ }\hl{mari}\hl{c}\hl{ }\hl{separated}\hl{ }\hl{?}\hl{ }\hl{|}\hl{ }\hl{|}\hl{ }\hl{te}\hl{te}\hl{ }\hl{[SEP]} \textbf{\textsc{Adv. + Atk. Sup.}\xspace~prediction: False}
& \definecolor{highlight}{RGB}{204,238,255}\sethlcolor{highlight}\hl{[CLS]}\xspace\xspace\xspace...\xspace\xspace
in may 1904 , the couple ' s first son , hans albert einstein , was born in bern , switzerland .
their second son , eduard , was born in zurich in july 1910 \hl{.}\hl{ }\hl{in}\hl{ }\hl{1914}\hl{ }\hl{,}\hl{ }\hl{the}\hl{ }\hl{couple}\hl{ }\hl{separated}\hl{ }\hl{;}\hl{ }\hl{einstein}\hl{ }\hl{moved}\hl{ }\hl{to}\hl{ }\hl{berlin}\hl{ }\hl{and}\hl{ }\hl{his}\hl{ }\hl{wife}\hl{ }\hl{remained}\hl{ }\hl{in}\hl{ }\hl{zurich}\hl{ }\hl{with}\hl{ }\hl{their}\hl{ }\hl{sons}\hl{ }\hl{.}\hl{ }\hl{they}\hl{ }\hl{divorced}\hl{ }\hl{on}\hl{ }\hl{14}\hl{ }\hl{february}\hl{ }\hl{1919}\hl{ }\hl{,}\hl{ }\hl{having}\hl{ }\hl{lived}\hl{ }\hl{apart}\hl{ }\hl{for}\hl{ }\hl{five}\hl{ }\hl{years}\hl{ }\hl{.}\xspace\xspace\xspace...\xspace\xspace a - tete did n ' t stay in basel after charles and houben separated .\xspace\xspace\xspace...\xspace\xspace \hl{[SEP]}\hl{ }\hl{who}\hl{ }\hl{did}\hl{ }\hl{n}\hl{ }\hl{'}\hl{ }\hl{t}\hl{ }\hl{stay}\hl{ }\hl{in}\hl{ }\hl{zurich}\hl{ }\hl{after}\hl{ }\hl{albert}\hl{ }\hl{and}\hl{ }\hl{mari}\hl{c}\hl{ }\hl{separated}\hl{ }\hl{?}\hl{ }\hl{|}\hl{ }\hl{|}\hl{ }\hl{te}\hl{te}\hl{ }\hl{[SEP]} \textbf{\textsc{Adv. + Human Sup.}\xspace~prediction: False}
\\ \midrule
\multicolumn{3}{c}{\textbf{(B) Example 2}, true label: True} \\ \midrule
\definecolor{highlight}{RGB}{238,238,187}\sethlcolor{highlight}\hl{[CLS]}\xspace\xspace\xspace...\xspace\xspace \hl{on}\hl{ }\hl{the}\hl{ }\hl{day}\hl{ }\hl{of}\hl{ }\hl{the}\hl{ }\hl{party}\hl{ }\hl{,}\hl{ }\hl{all}\hl{ }\hl{five}\hl{ }\hl{friends}\hl{ }\hl{showed}\hl{ }\hl{up}\hl{ }\hl{.}\hl{ }\hl{each}\hl{ }\hl{friend}\hl{ }\hl{had}\hl{ }\hl{a}\hl{ }\hl{present}\hl{ }\hl{for}\hl{ }\hl{susan}\hl{ }\hl{.} \definecolor{highlight}{RGB}{255,204,204}\sethlcolor{highlight} \hl{6}\hl{ }\hl{thank}\hl{ }\hl{-}\hl{ }\hl{you}\hl{ }\hl{cards}\hl{ }\hl{did}\hl{ }\hl{helen}\hl{ }\hl{send}\hl{ }\hl{.} \definecolor{highlight}{RGB}{238,238,187} \hl{susan}\hl{ }\hl{was}\hl{ }\hl{happy}\hl{ }\hl{and}\hl{ }\hl{sent}\hl{ }\hl{each}\hl{ }\hl{friend}\hl{ }\hl{a}\hl{ }\hl{thank}\hl{ }\hl{you}\hl{ }\hl{card}\hl{ }\hl{the}\hl{ }\hl{next}\hl{ }\hl{week}\hl{ }\hl{.}\hl{ }\hl{[SEP]}\hl{ }\hl{how}\hl{ }\hl{many}\hl{ }\hl{thank}\hl{ }\hl{-}\hl{ }\hl{you}\hl{ }\hl{cards}\hl{ }\hl{did}\hl{ }\hl{susan}\hl{ }\hl{send}\hl{ }\hl{?}\hl{ }\hl{|}\hl{ }\hl{|}\hl{ }\hl{5}\hl{ }\hl{[SEP]} \textbf{\textsc{Adv.}\xspace~prediction: False}
& \definecolor{highlight}{RGB}{204,238,255}\sethlcolor{highlight}\hl{[CLS]}\xspace\xspace\xspace...\xspace\xspace\hl{ }\hl{on}\hl{ }\hl{the}\hl{ }\hl{day}\hl{ }\hl{of}\hl{ }\hl{the}\hl{ }\hl{party}\hl{ }\hl{,}\hl{ }\hl{all}\hl{ }\hl{five}\hl{ }\hl{friends}\hl{ }\hl{showed}\hl{ }\hl{up}\hl{ }\hl{.}\hl{ }\hl{each}\hl{ }\hl{friend}\hl{ }\hl{had}\hl{ }\hl{a}\hl{ }\hl{present}\hl{ }\hl{for}\hl{ }\hl{susan}\hl{ }\hl{.} 6 thank - you cards did helen send . \hl{susan}\hl{ }\hl{was}\hl{ }\hl{happy}\hl{ }\hl{and}\hl{ }\hl{sent}\hl{ }\hl{each}\hl{ }\hl{friend}\hl{ }\hl{a}\hl{ }\hl{thank}\hl{ }\hl{you}\hl{ }\hl{card}\hl{ }\hl{the}\hl{ }\hl{next}\hl{ }\hl{week}\hl{ }\hl{.}\hl{ }\hl{[SEP]}\hl{ }\hl{how}\hl{ }\hl{many}\hl{ }\hl{thank}\hl{ }\hl{-}\hl{ }\hl{you}\hl{ }\hl{cards}\hl{ }\hl{did}\hl{ }\hl{susan}\hl{ }\hl{send}\hl{ }\hl{?}\hl{ }\hl{|}\hl{ }\hl{|}\hl{ }\hl{5}\hl{ }\hl{[SEP]} \textbf{\textsc{Adv. + Atk. Sup.}\xspace~prediction: True}
& \definecolor{highlight}{RGB}{204,238,255}\sethlcolor{highlight}\hl{[CLS]}\xspace\xspace\xspace...\xspace\xspace on the day of the party , all five friends showed up . each friend had a present for susan . 6 thank - you cards did helen send . \hl{susan}\hl{ }\hl{was}\hl{ }\hl{happy}\hl{ }\hl{and}\hl{ }\hl{sent}\hl{ }\hl{each}\hl{ }\hl{friend}\hl{ }\hl{a}\hl{ }\hl{thank}\hl{ }\hl{you}\hl{ }\hl{card}\hl{ }\hl{the}\hl{ }\hl{next}\hl{ }\hl{week}\hl{ }\hl{.}\hl{ }\hl{[SEP]}\hl{ }\hl{how}\hl{ }\hl{many}\hl{ }\hl{thank}\hl{ }\hl{-}\hl{ }\hl{you}\hl{ }\hl{cards}\hl{ }\hl{did}\hl{ }\hl{susan}\hl{ }\hl{send}\hl{ }\hl{?}\hl{ }\hl{|}\hl{ }\hl{|}\hl{ }\hl{5}\hl{ }\hl{[SEP]} \textbf{\textsc{Adv. + Human Sup.}\xspace~prediction: False} \\
\midrule
\multicolumn{3}{c}{\textbf{(C) Example 3}, true label: False} \\ \midrule
\definecolor{highlight}{RGB}{238,238,187 }\sethlcolor{highlight}\hl{[CLS]}\xspace\xspace\xspace...\xspace\xspace \hl{roman}\hl{ }\hl{legions}\hl{ }\hl{encountered}\hl{ }\hl{the}\hl{ }\hl{stronghold}\hl{s}\hl{ }\hl{of}\hl{ }\hl{the}\hl{ }\hl{castle}\hl{ }\hl{rock}\hl{ }\hl{and}\hl{ }\hl{arthur}\hl{ }\hl{'}\hl{ }\hl{s}\hl{ }\hl{seat}\hl{ }\hl{,}\hl{ }\hl{held}\hl{ }\hl{by}\hl{ }\hl{a}\hl{ }\hl{tribe}\hl{ }\hl{of}\hl{ }\hl{ancient}\hl{ }\hl{brit}\hl{ons}\hl{ }\hl{known}\hl{ }\hl{as}\hl{ }\hl{the}\hl{ }\hl{vo}\hl{tad}\hl{ini}\hl{ }\hl{.} \definecolor{highlight}{RGB}{255,204,204}
\hl{the}\hl{ }\hl{mer}\hl{cian}\hl{s}\hl{ }\hl{were}\hl{ }\hl{probably}\hl{ }\hl{the}\hl{ }\hl{ancestors}\hl{ }\hl{of}\hl{ }\hl{the}\hl{ }\hl{mana}\hl{w}\hl{ }\hl{.} \definecolor{highlight}{RGB}{238,238,187 }\sethlcolor{highlight} \hl{little}\hl{ }\hl{is}\hl{ }\hl{recorded}\hl{ }\hl{about}\hl{ }\hl{this}\hl{ }\hl{group}\hl{ }\hl{,}\hl{ }\hl{but}\hl{ }\hl{they}\hl{ }\hl{were}\hl{ }\hl{probably}\hl{ }\hl{the}\hl{ }\hl{ancestors}\hl{ }\hl{of}\hl{ }\hl{the}\hl{ }\hl{god}\hl{od}\hl{din}\hl{ }\hl{,}\hl{ }\hl{whose}\hl{ }\hl{feat}\hl{s}\hl{ }\hl{are}\hl{ }\hl{told}\hl{ }\hl{in}\hl{ }\hl{a}\hl{ }\hl{seventh}\hl{ }\hl{-}\hl{ }\hl{century}\hl{ }\hl{old}\hl{ }\hl{welsh}\hl{ }\hl{manuscript}\hl{ }\hl{.}\xspace\xspace\xspace...\xspace\xspace the god\xspace\xspace\xspace...\xspace\xspace din\xspace\xspace\xspace...\xspace\xspace \hl{[SEP]}\hl{ }\hl{who}\hl{ }\hl{were}\hl{ }\hl{probably}\hl{ }\hl{the}\hl{ }\hl{ancestors}\hl{ }\hl{of}\hl{ }\hl{the}\hl{ }\hl{god}\hl{od}\hl{din}\hl{ }\hl{?}\hl{ }\hl{|}\hl{ }\hl{|}\hl{ }\hl{the}\hl{ }\hl{pic}\hl{ts}\hl{ }\hl{[SEP]} \textbf{\textsc{Adv.}\xspace~prediction: True}
& \definecolor{highlight}{RGB}{204,238,255}\sethlcolor{highlight}\hl{[CLS]}\xspace\xspace\xspace...\xspace\xspace\hl{ }\hl{roman}\hl{ }\hl{legions}\hl{ }\hl{encountered}\hl{ }\hl{the}\hl{ }\hl{stronghold}\hl{s}\hl{ }\hl{of}\hl{ }\hl{the}\hl{ }\hl{castle}\hl{ }\hl{rock}\hl{ }\hl{and}\hl{ }\hl{arthur}\hl{ }\hl{'}\hl{ }\hl{s}\hl{ }\hl{seat}\hl{ }\hl{,}\hl{ }\hl{held}\hl{ }\hl{by}\hl{ }\hl{a}\hl{ }\hl{tribe}\hl{ }\hl{of}\hl{ }\hl{ancient}\hl{ }\hl{brit}\hl{ons}\hl{ }\hl{known}\hl{ }\hl{as}\hl{ }\hl{the}\hl{ }\hl{vo}\hl{tad}\hl{ini}\hl{ }\hl{.} the mercians were probably the ancestors of the manaw . \hl{little}\hl{ }\hl{is}\hl{ }\hl{recorded}\hl{ }\hl{about}\hl{ }\hl{this}\hl{ }\hl{group}\hl{ }\hl{,}\hl{ }\hl{but}\hl{ }\hl{they}\hl{ }\hl{were}\hl{ }\hl{probably}\hl{ }\hl{the}\hl{ }\hl{ancestors}\hl{ }\hl{of}\hl{ }\hl{the}\hl{ }\hl{god}\hl{od}\hl{din}\hl{ }\hl{,}\hl{ }\hl{whose}\hl{ }\hl{feat}\hl{s}\hl{ }\hl{are}\hl{ }\hl{told}\hl{ }\hl{in}\hl{ }\hl{a}\hl{ }\hl{seventh}\hl{ }\hl{-}\hl{ }\hl{century}\hl{ }\hl{old}\hl{ }\hl{welsh}\hl{ }\hl{manuscript}\hl{ }\hl{.}\xspace\xspace\xspace...\xspace\xspace\hl{ }\hl{the}\hl{ }\hl{god}\xspace\xspace\xspace...\xspace\xspace\hl{din}\xspace\xspace\xspace...\xspace\xspace\hl{ }\hl{[SEP]}\hl{ }\hl{who}\hl{ }\hl{were}\hl{ }\hl{probably}\hl{ }\hl{the}\hl{ }\hl{ancestors}\hl{ }\hl{of}\hl{ }\hl{the}\hl{ }\hl{god}\hl{od}\hl{din}\hl{ }\hl{?}\hl{ }\hl{|}\hl{ }\hl{|}\hl{ }\hl{the}\hl{ }\hl{pic}\hl{ts}\hl{ }\hl{[SEP]} \textbf{\textsc{Adv. + Atk. Sup.}\xspace~prediction: True}
& \definecolor{highlight}{RGB}{204,238,255}\sethlcolor{highlight}\hl{[CLS]}\xspace\xspace\xspace...\xspace\xspace roman legions encountered the strongholds of the castle rock and arthur ' s seat , held by a tribe of ancient britons known as the votadini . \hl{the}\hl{ }\hl{mer}\hl{cian}\hl{s}\hl{ }\hl{were}\hl{ }\hl{probably}\hl{ }\hl{the}\hl{ }\hl{ancestors}\hl{ }\hl{of}\hl{ }\hl{the}\hl{ }\hl{mana}\hl{w}\hl{ }\hl{.}\hl{ }\hl{little}\hl{ }\hl{is}\hl{ }\hl{recorded}\hl{ }\hl{about}\hl{ }\hl{this}\hl{ }\hl{group}\hl{ }\hl{,}\hl{ }\hl{but}\hl{ }\hl{they}\hl{ }\hl{were}\hl{ }\hl{probably}\hl{ }\hl{the}\hl{ }\hl{ancestors}\hl{ }\hl{of}\hl{ }\hl{the}\hl{ }\hl{god}\hl{od}\hl{din}\hl{ }\hl{,}\hl{ }\hl{whose}\hl{ }\hl{feat}\hl{s}\hl{ }\hl{are}\hl{ }\hl{told}\hl{ }\hl{in}\hl{ }\hl{a}\hl{ }\hl{seventh}\hl{ }\hl{-}\hl{ }\hl{century}\hl{ }\hl{old}\hl{ }\hl{welsh}\hl{ }\hl{manuscript}\hl{ }\hl{.}\xspace\xspace\xspace...\xspace\xspace
\hl{the}\hl{ }\hl{god}\xspace\xspace\xspace...\xspace\xspace\hl{din}\xspace\xspace\xspace...\xspace\xspace\hl{ }\hl{[SEP]}\hl{ }\hl{who}\hl{ }\hl{were}\hl{ }\hl{probably}\hl{ }\hl{the}\hl{ }\hl{ancestors}\hl{ }\hl{of}\hl{ }\hl{the}\hl{ }\hl{god}\hl{od}\hl{din}\hl{ }\hl{?}\hl{ }\hl{|}\hl{ }\hl{|}\hl{ }\hl{the}\hl{ }\hl{pic}\hl{ts}\hl{ }\hl{[SEP]} \textbf{\textsc{Adv. + Human Sup.}\xspace~prediction: False}
\\ \bottomrule
\end{tabular}
\caption{Example outputs from \textsc{Adv. + Atk. Sup.}\xspace and \textsc{Adv. + Human Sup.}\xspace with BERT in \textsc{MultiRC}\xspace.
Attack tokens are marked in red. True human rationales are marked yellow,
and predicted rationales are marked in blue. We only show tokens where generated rationales disagree with each other or with the human rationale\xspace/the attack.}
\label{tab:example_table}
\vspace{-2pt}
\end{table*}
\subsection{Error Analysis}
To better understand the behavior of the models, we compare mistakes made by standard BERT\xspace models with those made by models trained with an explicit rationale extractor\xspace on \textsc{MultiRC}\xspace.
We start with a qualitative analysis of example errors, and then discuss general trends, especially on why human rationales only provide limited benefits over \textsc{Adv. + Atk. Sup.}\xspace
Due to space constraints, more in-depth analyses, including a Venn diagram of model mistakes, can be found in the appendix.
\para{Qualitative analysis.}
We look at example errors of \textsc{Adv.}\xspace to investigate attacks that are confusing even after adversarial augmentation.
Table~\ref{tab:example_table} shows example outputs of the rationale models based on either non-attack tokens or human rationales.
Example 1
shows a case where models with explicit rationale extractors ignore attacks more effectively than \textsc{Adv.}\xspace
In the attack sentence, ``tete did n ' t stay in'' is highly similar to the query, so a model likely predicts True if it uses the attack information.
In comparison, both rationale models ignore the attack in label prediction, which enables them to make correct predictions.
Example 2 demonstrates that
\textsc{Adv. + Human Sup.}\xspace makes mistakes when it fails to include crucial information in rationales while avoiding attack tokens.
\textsc{Adv. + Human Sup.}\xspace predicts the wrong label because it misses information for the number of friends in its rationale.
\textsc{Adv. + Atk. Sup.}\xspace gets this example correct because it can both ignore the attack and include the necessary information.
Finally, Example 3 shows an example where
\textsc{Adv. + Human Sup.}\xspace outperforms \textsc{Adv. + Atk. Sup.}\xspace by generating rationales that ignore noise.
\textsc{Adv. + Human Sup.}\xspace includes attack tokens in its rationale, but it still predicts the label correctly because the attack is not confusing given the selected rationale.
The generated rationale helps \textsc{Adv. + Human Sup.}\xspace avoid unnecessary information that may confuse the model;
for example, the sentence containing ``picts'' could mislead the model into predicting True.
On the other hand, \textsc{Adv. + Atk. Sup.}\xspace gets this example wrong, despite occluding all attack tokens.
More generally, we find that \textit{\textsc{Adv. + Human Sup.}\xspace tends to have high false negative rates}.
When \textsc{Adv. + Human Sup.}\xspace fails to extract good rationales, it tends to predict False %
due to missing information from the rationale.
In contrast, \textsc{Adv. + Atk. Sup.}\xspace rarely occludes necessary information, so it does not suffer from the same issue.
\textit{\textsc{Adv. + Atk. Sup.}\xspace is better than \textsc{Adv. + Human Sup.}\xspace when human rationales are denser and passage length is longer (see Table \ref{tab:brlr-vs-brlna} in the appendix).}
We observe that denser human rationales usually comprise evidence from different parts of the passage.
Since \textsc{Adv. + Atk. Sup.}\xspace predicts almost all non-attack tokens as rationale, it has higher human rationale\xspace recall (98.6\%) than
\textsc{Adv. + Human Sup.}\xspace
(57.6\%). Thus, \textsc{Adv. + Atk. Sup.}\xspace generates higher-quality rationales when human rationales are dense.
Similarly, long passages prove difficult for \textsc{Adv. + Human Sup.}\xspace.
In summary, these analyses highlight the challenges of learning from human rationales: it requires precise occlusion of irrelevant tokens while keeping valuable tokens, and must account for variance in human rationale and input lengths.
These challenges partly explain the limited benefit of \textsc{Adv. + Human Sup.}\xspace over \textsc{Adv. + Atk. Sup.}\xspace
\section{Concluding Discussion}
In this study, we find that adding
an explicit extractor layer helps a
model learn to ignore additive adversarial attacks produced by the \textsc{AddSent}\xspace attack more effectively than conventional adversarial training via data augmentation.
This is an exciting result because it defeats an attack which is otherwise stubbornly effective against even copious adversarial data augmentation. It is a novel use for this type of explicit token relevance representation, which is more typically applied for model interpretability \citep{lei-etal-2016-rationalizing}. This makes it related to defenses like \citet{cohen_certified_2019} which allow the model to reject inputs as out-of-distribution and abstain from prediction, but it differs in rejecting only part of the input, making a prediction from the remainder as usual.
\para{Generality.} As \citet{carlini_evaluating_2019} note, it is easy to overstate claims when evaluating adversarial defenses. Hence, we note that our results pertain only to the \textsc{AddSent}\xspace attack, and that our method performs favorably
against a single baseline defense: adversarial training via data augmentation.
Since most adversarial training approaches assume the ability to {\em generate} a large number of synthetic attack examples, it is reasonable to further assume that we have access to the positions of the attacks.
However, such knowledge about attacks may not be available in general.
Nevertheless, the success of the rationale model architecture in learning to occlude adversarial tokens holds promise for a more general defense, in which the extractor layer counters a wider range of attacks.
\para{Utility of human rationales.} We explore the possibility that human-provided explanations may make the model more robust against adversarial attacks. We mostly find that they do not, with the notable exception of BERT\xspace on \textsc{MultiRC}\xspace, where this augmentation alone brings the model back to clean test accuracy.
While it does provide an advantage of sparsity over supervision with non-attack tokens, this advantage alone may not justify the cost of collecting human explanations for robustness.
Further understanding of human rationales and novel learning strategies are required for improving model robustness.
\para{Future directions.} A generalization of our approach might convert the ``extractor'' layer into a more general ``defender'' layer capable of issuing a wider range of corrections in response to a wider range of attacks. It could, for example, learn to defend against attacks based on input removal (e.g. \citet{feng_pathologies_2018}) by training to recognize gaps in the input and fill them via generative closure. This defender could be coupled with a self-supervision style approach (e.g., \citet{hendrycks_using_2019}) involving an ``attacker'' capable of levying various types of attack against the model.
We leave such a generalization for future work.
Johanna Harris
Senior Lecturer in English, University of Exeter
Johanna Harris gained her BA (with double Honours in English and Ancient History) at the University of Sydney in 2002, and her MSt (2004) and DPhil (2008) at the University of Oxford (Somerville). She taught at various Oxford colleges (Keble, St Hilda's, Somerville), and was a postdoctoral researcher at the University of Geneva in 2009-10. In 2010 she was a lecturer at Lincoln College, Oxford. In January 2011, she was appointed Lecturer (E&R) in Renaissance Literature at the University of Exeter, and promoted to Senior Lecturer in January 2016.
Her research investigates the lives and letters of the early modern period, ranging from poets such as Andrew Marvell and Thomas Traherne to lesser known writers such as the puritan Lady Brilliana Harley. She is particularly interested in the religious politics of the early modern period and in the techniques and styles of letter-writing as a vehicle for, and often an embodiment of, these partisan concerns. Her interest in these aspects of the period necessarily extends to editorial, as well as interpretive, projects.
Johanna has co-edited (with Elizabeth Scott-Baumann) The Intellectual Culture of Puritan Women, 1558-1680 (Palgrave Macmillan, 2011). Her monograph, Epistolary Communities in Early Modern England, is in progress and due for completion in 2016 (Palgrave Macmillan). She is currently working on an edition of Thomas Traherne's previously unpublished 'The Ceremonial Law' and Select Meditations (for the Oxford English Texts series, OUP), and as general editor (with Alison Searle) and volume editor of Richard Baxter's correspondence (a nine-volume edition contracted with OUP). Other ongoing projects include studies of Andrew Marvell's prose, letters, and nonconformist networks; an edition of Lady Brilliana Harley's non-epistolary manuscript writings; 'lives and letters'; and notions of dignity in relation to reading in early modern England.
Alongside research, teaching and administrative roles, Johanna established and continues to co-ordinate the Exeter Care Homes Reading Project, a volunteer initiative that trains and sends English students into local care homes to read to residents. The Reading Project has garnered local and national press and TV coverage, is supported by the University's Annual Fund, and partners with the Department's excellent student-led English Society and Community Action. In November 2015 Johanna was named a winner of the Prime Minister and Cabinet Office's award, 'Points of Light'.
In 2013, Johanna was James M. Osborn research fellow at Beinecke Library, Yale University, and Folger Shakespeare Library research fellow, Washington D.C.
She reviews for Times Literary Supplement, Renaissance Quarterly, Literature and Theology and Notes & Queries, and reviews book proposals for Routledge, Ashgate, and OUP.
var child = require('child_process')
var util = require('util')
var u = require('./util')
module.exports.install = function (opts, callback) {
  var cmd = 'npm'
  var args = [ 'install' ]

  opts = opts || {}
  opts.cwd = opts.cwd || process.cwd()

  // append user args to default args
  if (util.isArray(opts.args)) {
    args = args.concat(opts.args)
  }

  opts.stdio = 'inherit'

  // append the target module name, if one was given
  if (opts.module) {
    args.push(opts.module)
  }

  var c = child.spawn(cmd, args, opts)
  u.hookChildProcessEvents(c, callback)
}
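A brief usage sketch (the module name `left-pad` and the callback are illustrative assumptions; `buildArgs` below merely mirrors the argument-assembly logic of `install`, so the resulting command line can be inspected without actually spawning `npm`):

```javascript
// Re-derive the npm argument list the same way install() does.
function buildArgs (opts) {
  opts = opts || {}
  var args = [ 'install' ]
  // user-supplied flags are appended to the defaults
  if (Array.isArray(opts.args)) {
    args = args.concat(opts.args)
  }
  // the target module name, if one was given
  if (opts.module) {
    args.push(opts.module)
  }
  return args
}

// install({ module: 'left-pad', args: ['--save-dev'] }, done)
// would spawn:  npm install --save-dev left-pad
console.log('npm ' + buildArgs({ module: 'left-pad', args: ['--save-dev'] }).join(' '))
```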
``GridSearchCV``
============================================================================
.. autoclass:: ibex.sklearn.grid_search.GridSearchCV
:members:
:special-members:
:show-inheritance:
# Using R and Python in the same Jupyter notebook

R and Python are both useful to retrieve, analyze, and manipulate data. While these two languages are both individually viable and powerful, sometimes it may be necessary to use the two in conjunction.

Unfortunately, it is currently not possible to simultaneously use both of these languages within a Jupyter notebook, due to the fact that Jupyter notebooks can only run one kernel (or language) per notebook. However, we're able to accomplish this via library-level support. The Python package rpy2 allows us to interact with an R interpreter in-memory, as opposed to running an entirely separate kernel. This tutorial will go over how to use rpy2, which in turn will allow us to use these two languages simultaneously.

## Objectives

- Use Python
- Use R within the Python kernel

## Dependencies

- An install of Python and R
- rpy2

```python
!pip install rpy2
```

```
Requirement already satisfied (use --upgrade to upgrade): rpy2 in /home/user/anaconda2/lib/python2.7/site-packages
```

```python
from rpy2.robjects import r
```

## Using Python

As this is natively a Python kernel, we should be able to do all of the Python programming we want.

```python
print "Hello World!"
```

```
Hello World!
```

```python
str_list = ['This', 'is', 'python!']
for i in range(0, len(str_list)):
    print str_list[i],
```

```
This is python!
```

```python
def my_function(i):
    print "I've been called", i + 1, "times!"

for i in range(0, 5):
    my_function(i)
```

```
I've been called 1 times!
I've been called 2 times!
I've been called 3 times!
I've been called 4 times!
I've been called 5 times!
```

## Using R within Python

As we've just shown, we can do all of our normal Python programming without any hiccups. Now, let's try to do some R programming within this Python kernel.

```python
r('print("Hello World!")')
```

```
[1] "Hello World!"

R object with classes: ('character',) mapped to:
<StrVector - Python:0x7f3552edc128 / R:0x3a8e8e8>
[str]
```

```python
r('str_list <- c("This", "is", "R!")')
r('for (i in 1:length(str_list)){print(str_list[i])}')
```

```
[1] "This"
[1] "is"
[1] "R!"

rpy2.rinterface.NULL
```

```python
r('''
my_function <- function(i){
    cat("I've been called", i, "times!\n")
}''')
r('for (i in 1:5) {my_function(i)}')
```

```
I've been called 1 times!
I've been called 2 times!
I've been called 3 times!
I've been called 4 times!
I've been called 5 times!

rpy2.rinterface.NULL
```
\section{Introduction}
Mirror symmetry of three dimensional gauge theories is an infra-red
equivalence of two theories in which Coulomb and Higgs branches, and
Fayet-Iliopoulos parameters and masses are exchanged. First
discovered in ${\cal N}=4$ theories by Intriligator and Seiberg \cite{is},
explanations in terms of string theory dualities \cite{hw}
as well as generalisations to both other gauge groups \cite{berk} and
${\cal N}=2$ gauge theories \cite{berk2,ahiss} soon appeared. More recently
it has been shown that, for a large class of three-dimensional
theories, the correspondence may be extended to all length
scales \cite{as}.
In this paper we present further examples of mirror theories with ${\cal N}=2$
supersymmetry. A novel feature of these theories is that the
Coulomb branch may be compact and is naturally described in the
language of toric geometry. In fact we will find that the Coulomb branch,
which in three dimensions admits a toric action, possesses submanifolds
on which certain cycles of the torus vanish and thereby defines a toric
variety. Further we find pairs of theories
for which the Coulomb (Higgs) branch of one theory and the Higgs
(Coulomb) branch of the other are specified by the same toric data.
A classical analysis of the Higgs branch in question
simply yields the usual symplectic quotient construction of the
corresponding toric variety. In contrast,
various quantum effects which are characteristic of three dimensions
play a central role in realizing the same toric variety as
the Coulomb branch of the mirror theory. This equivalence means, in
particular, that the two branches admit identical U(1)
isometries with precisely the same set of fixed submanifolds. As we are
dealing with theories with only four supercharges it is not obvious
whether the correspondence extends to the respective metrics on the two
branches. Despite this, we will find that the two metrics do in fact
agree in cases where an explicit calculation is possible.
The plan of the paper is as follows: in Section 2 we review various
aspects of abelian gauge theories in three dimensions. In Section 3
we present a simple self-mirror theory for which both the Higgs and
Coulomb branches are copies of ${\bf CP}^1$. Finally, in Section 4 we
consider a more general abelian gauge theory and employ the language of
toric geometry. The appendix contains a discussion of the brane realisation
of the theory of Section 3. While this work was in preparation, \cite{ak}
appeared, which considers similar theories. The connection between
three-dimensional Chern-Simons gauge theories and the related
phenomenon of mirror symmetry in
two-dimensional Calabi-Yau $\sigma$-models has been
discussed in \cite{kog}.
\section{Review of N=2 Gauge Theories}
Three dimensional gauge theories with ${\cal N}=2$ supersymmetry
(4 supercharges) have been studied in detail in \cite{ahiss},
where a comprehensive introduction may be found.
Here we collate some facts relevant to abelian gauge theories, with
a $U(1)^r$ gauge group and $N$ chiral multiplets.
To this end, we introduce
abelian vector superfields $V_a$, $a=1,\cdots,r$, and chiral
superfields $Q_i$, $i=1\cdots,N$, both of which are simply the dimensional
reduction of the familiar four dimensional ${\cal N}=1$ superfields.
The gauge kinetic terms are written most simply in terms of linear
superfields $\Sigma_a=\epsilon^{\alpha\beta}\bar{D}_\alpha D_\beta V_a$,
whose lowest component is a real scalar $\phi_a$ and also includes
the $U(1)$ field strength as well as two Majorana fermions.
The chiral multiplets each consist of a
complex scalar $q_i$ and two further Majorana fermions. The kinetic terms
for all fields are written as D-terms,
\begin{eqnarray}
{\cal L}_K=\int{\rm d}^4\theta\,\left[\sum_{i=1}^NQ^\dagger_i
\exp\left(2\sum_{a=1}^rR_i^aV_a\right)Q_i +\sum_{a=1}^r
\frac{1}{e_a^2}\Sigma_a^2\right]
\label{above}\end{eqnarray}
where $e_a$ is the $a^{\rm th}$ gauge coupling constant which has dimension
(mass)$^{1/2}$ and $R_i^a$ are the charges of the chiral multiplets under
each of the gauge symmetries. We assume these charges to be integers.
Further interactions
take the form of a superpotential, ${\cal W}$, constructed from gauge
invariant monomials of the chiral superfields,
\begin{eqnarray}
{\cal L}_F=\int{\rm d}^2\theta\,{\cal W}(Q_i)\ +\ {\rm h.c.}
\nonumber\end{eqnarray}
In particular, if there exist two chiral superfields of opposite
charge, the usual complex mass is written in this manner. In
three dimensions each chiral multiplet may have a further, real,
mass parameter which cannot
be written in terms of a superpotential and which will play an
important role in the following discussion. It is introduced by
weakly gauging the Cartan-subalgebra of the global flavour symmetry
of the theory and constraining the vector multiplet scalar to a
fixed background vacuum expectation value (VEV). The net result is that
the exponent in \eqn{above} is replaced by,
\begin{eqnarray}
\sum_{a=1}^rR_i^aV_a \rightarrow \sum_{a=1}^rR_i^aV_a +
2m_i\theta\theta^\dagger
\nonumber\end{eqnarray}
Notice that there are only $N-r$ independent such parameters,
the remaining $r$ being set to zero by shifts of the vector
multiplet scalars $\phi_a$.
Two further sets of couplings will also prove important in the story:
Fayet-Iliopoulis (FI) parameters $\zeta_a$ which have dimension
of mass, and dimensionless Chern-Simons (CS)
parameters $\kappa_{ab}$. The former are incorporated in the
usual fashion,
\begin{eqnarray}
{\cal L}_{FI} = \sum_{a=1}^r\zeta_a\int{\rm d}^4\theta\,V_a
\nonumber\end{eqnarray}
while the latter are written in terms of the linear superfield as,
\begin{eqnarray}
{\cal L}_{CS}=\sum_{a,b=1}^r\kappa_{ab}\int{\rm d}^4\theta\,\Sigma_aV_b
\nonumber\end{eqnarray}
Notice from the similarity of these two expressions that the
combinations of scalar fields $\sum_b\kappa_{ab}\phi_b$ will play
the role of a dynamical FI parameter in the theory. This may be
seen by examining the
classical scalar potential, obtained by integrating out auxiliary
fields,
\begin{eqnarray}
U=\sum_{a=1}^r\,e_a^2\left(\sum_{i=1}^NR_i^a|q_i|^2-
\sum_{b=1}^r\phi_b\kappa_{ab}-\zeta_a\right)^2
+\sum_{i=1}^N M_i^2|q_i|^2
+\sum_{i=1}^N\,\left|\frac{\partial{\cal W}}{\partial q_i}\right|^2
\label{U}\end{eqnarray}
where
\begin{eqnarray}
M_i=\sum_{a=1}^rR^a_i\phi_a+m_i
\label{mi}\end{eqnarray}
is the effective mass of the $i^{\rm th}$ chiral multiplet.
The manifold of classical supersymmetric vacua, determined by the condition
$U=0$, depends on the parameters $\zeta_a,\kappa_{ab}$ and $m_i$.
Let us consider situations with ${\cal W}=0$. Then there may
be two branches of vacua: the Higgs and Coulomb branches.
In the former, the vector multiplet scalars are set to zero,
$\phi_a=0$, while the $q_i$'s are constrained by the vanishing of the
first expression in \eqn{U} modulo gauge transformations. In this phase
the gauge symmetry is generically completely broken. It is clear from
the form of \eqn{U} that the Higgs branch does not exist for generic
non-zero real masses.
On the Coulomb branch, all chiral multiplet scalars are set to zero,
$q_i=0$, while the $\phi_a$'s are unconstrained. Again,
from \eqn{U}, it is clear that non-zero FI or CS parameters will
lift the Coulomb branch. When this phase exists however
the gauge symmetry is completely unbroken and one may exchange
each abelian gauge field for a scalar, $\sigma_a$, of period $e_a$,
via the duality transformation
$F^{\mu\nu}_a=\epsilon^{\mu\nu\rho}\partial_\rho\sigma_a$. The Coulomb
branch is therefore parametrised by the VEVs of both $\phi_a$ and
$\sigma_a$, which combine to lie in $r$ chiral multiplets, and is
classically given by $({\bf R}\times{\bf S}^1)^r$. There exist $r$ $U(1)_J$
isometries of the Coulomb branch induced by constant shifts of the
$r$ dual photons. These are preserved in the full quantum theory
\cite{ahiss}. In the following it will sometimes be
useful to weakly gauge these symmetries. In particular, consider
gauging the symmetry which shifts $\sigma_{a}$ by a constant and
leaves the other dual photons invariant. As explained in
\cite{ahiss}, the lowest component of the linear
multiplet which contains the corresponding field strength is precisely
the FI parameter $\zeta_{a}$. In addition to pure Higgs and Coulomb
branches there will also exist mixed branches of vacua in which
both $q$'s and $\phi$'s have non-zero VEV.
This concludes our discussion of the classical field theory. Let us
now mention some relevant quantum effects that occur. Firstly note that in
four dimensions a theory with the above matter content would suffer
a gauge anomaly unless $\sum_iR_i^a=0$ for each gauge group $a$. In
three dimensions, while there are no gauge anomalies, the theory
may suffer from a ``parity anomaly'' \cite{red}. This manifests
itself in the dynamical generation of CS terms due to integrating
out chiral multiplet fermions at one-loop \cite{ahiss}. In the case
of ${\cal W}=0$, the effective CS term is given by,
\begin{eqnarray}
\kappa^{\rm eff}_{ab}=\kappa_{ab}+\ft12\sum_{i=1}^N\,R^a_iR^b_i\,{\rm sign}\, M_i
\label{wherekwent}\end{eqnarray}
The parity anomaly arises from the observation that gauge invariance
requires $\kappa^{\rm eff}_{ab}$ to be an integer and therefore, for certain matter
content, the factor of $\ft12$ in \eqn{wherekwent} means a non-zero
bare CS term is obligatory, breaking parity.
A similar CS coupling is also generated for weakly gauged global
symmetries of the type discussed earlier in this section. In
particular, for the element of the Cartan-subalgebra of the global
flavour group under which $q_i$ transforms with charge $+1$, with all
other fields neutral, one has
\begin{eqnarray}
\kappa^{\rm eff}_{ai}= \ft12R_{a}^{i}\,{\rm sign}\, M_{i}
\nonumber\end{eqnarray}
Finally, the combined effect of all dynamically generated CS parameters may be
interpreted as a finite renormalisation of the FI parameters,
\begin{eqnarray}
\zeta^{\rm eff}_a= \zeta_a + \sum_{b=1}^r\kappa^{\rm eff}_{ab}\phi_{b} + \sum_{i=1}^N
\kappa^{\rm eff}_{ai}m_{i}
\label{zeff}\end{eqnarray}
Notice that the gauge and global symmetries appear on an equal footing
in this expression. Further, for $M_i=0$, which is the case on the Higgs
branch, ${\rm sign}\, M_i$ and $\kappa^{\rm eff}_{ab}$ are ill-defined. However, the
FI parameters $\zeta^{\rm eff}_a$ are of the form $M_i\, {\rm sign}\, M_i$ and are
therefore continuous at $M_i=0$. Finally, these one-loop corrections
can be implemented in the scalar potential \eqn{U}, simply by
replacing the FI parameters and CS terms by the renormalised FI
parameters \eqn{zeff}.
There is one further quantum effect, discussed in \cite{ahiss}, that will
be important for us. At the intersection of Coulomb and
Higgs branches, certain $U(1)_J$ isometries, which are generically
broken on the Coulomb branch, must be restored as the
Higgs branch is invariant under such symmetries. This, in turn,
requires the vanishing of the periods of the dualized scalars at these points
and the Coulomb branch is to be viewed as a fibre of $T^r$ over
$R^r$ where certain cycles of the torus shrink at the intersection
points. This suggests that the Coulomb branch can be thought of as a
toric variety specified in terms of toric data which encodes its
intersections with other branches. In the following Sections we will
realize this idea explicitly.
\section{A Self-Mirror Example}
In this section we exhibit a simple ${\cal N}=2$ gauge theory in
which both Higgs and Coulomb branches are
compact, the former classically and the latter due to the dynamical
generation of CS terms. This will also serve as a pedagogical example
for more complicated models to be discussed in the following section.
The theory has a single abelian gauge group with bare FI parameter $\zeta$.
The matter content consists of two chiral multiplets,
both transforming with charge $+1$ and with real masses
$m_1=-m_2$. We set both superpotential and bare CS parameter to zero so
that the effective one-loop couplings are given by,
\begin{eqnarray}
\kappa^{\rm eff}=\ft12 \left[ {\rm sign}\,(\phi+m)+{\rm sign}\,(\phi-m)\right]
\label{kaphere}\end{eqnarray}
Similarly, the one-loop effective FI parameter is,
\begin{eqnarray}
\zeta^{\rm eff}=\zeta+\ft12 (\phi+m)\,{\rm sign}\,(\phi+m)+ \ft12 (\phi-m)
\,{\rm sign}\,(\phi-m)
\label{fihere}
\end{eqnarray}
and the scalar potential \eqn{U} thus simplifies to,
\begin{eqnarray}
U=e^2\left(|q_1|^2+|q_2|^2-\zeta^{\rm eff}\right)^2+(\phi+m)^2|q_1|^2
+(\phi-m)^2|q_2|^2
\nonumber\end{eqnarray}
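Since each sign function in \eqn{kaphere} and \eqn{fihere} changes only at
$\phi=\mp m$, the one-loop couplings take a simple piecewise form which makes
the following case analysis transparent:
\begin{eqnarray}
\kappa^{\rm eff}=\left\{\begin{array}{cl}
-1 & \quad \phi<-m\\
0 & \quad |\phi|<m\\
+1 & \quad \phi>m
\end{array}\right.
\ \ \ \ \ \ \
\zeta^{\rm eff}=\left\{\begin{array}{cl}
\zeta+|\phi| & \quad |\phi|\geq m\\
\zeta+m & \quad |\phi|\leq m
\end{array}\right.
\nonumber\end{eqnarray}
In particular, for the critical value $\zeta=-m$ the effective FI parameter
vanishes identically on the interval $|\phi|\leq m$, while for $\zeta<-m$ it
vanishes only at the two points $|\phi|=-\zeta$.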
Let us now examine solutions of the vacuum condition $U=0$ of this theory as a
function of the FI and mass parameters. Without loss of generality we
may restrict our attention to masses $m\geq 0$. We will also set
the bare CS coupling to zero. There are then five different regimes:
i) $m=\zeta=0$: In this case there is a unique vacuum at the
origin $\phi=q_1=q_2=0$.
ii) $m=0$, $\zeta>0$: In this case we have a
Higgs branch of vacua given by $\phi=0$,
and $|q_1|^2+|q_2|^2=\zeta$ modulo gauge transformations. This is simply
Witten's gauged linear sigma model \cite{wit} with target space
${\bf CP}^1$ of K\"ahler class $\zeta$.
iii) $m>0$, $\zeta=-m$: In this case,
requiring $\zeta^{\rm eff}=0$ restricts the range of $\phi$ to $|\phi|\leq m$.
Therefore the space of vacua is given by $q_1=q_2=0$
while $\phi$ may take any value in the interval,
$I=\{\phi : -m\leq\phi\leq m\}$. We will discuss
this case in more detail below.
iv) $m>0$, $\zeta>-m$: In this case there are two isolated Higgs
branch vacua. These are located at $\phi=-m$, $|q_{1}|^{2}=\zeta+m$,
$q_{2}=0$ and $\phi=m$, $|q_{2}|^{2}=\zeta+m$, $q_{1}=0$ respectively.
v) $m\geq 0$, $\zeta<-m$: In this case there are two isolated vacua
on the Coulomb branch located at $q_1=q_2=0$, $\phi=\pm \zeta$.
The above vacuum structure may be reproduced by realising the theory
as a D3-brane probe of a $(p,q)$ 5-brane web. This is done in the
appendix, where we also discuss the vacuum structure in the
presence of a non-zero bare CS term.
The novel feature of case iii) above
is the restriction of the range of $\phi$ to an interval $I$. For the
critical value of the bare FI parameter, $\zeta=-m$,
the Coulomb branch is therefore compact.
After dualizing the massless photon in favour of a periodic scalar
$\sigma$, the Coulomb branch can be thought of
as a fibration of $S^1$ over $I$. The $U(1)_J$
symmetry which shifts $\sigma$ by a constant acts on the fibre over each
point. Classically the fibration is trivial and
simply corresponds to the cylinder $I\times S^1$. However we
will now argue that this picture is modified by quantum
effects. The key idea, based on the arguments of \cite{ahiss}, is that
the endpoints of the interval, $\phi=\pm m$, must be invariant under $U(1)_J$.
This follows because, at these two points, the Coulomb branch
intersects the Higgs branch of case iv) which must be invariant under
$U(1)_{J}$. Strictly speaking, moving onto the Higgs branch requires
changing the bare FI parameter $\zeta$ away from its critical value.
This means that one is moving to a different theory rather
than onto another vacuum branch of the same theory. This distinction
is irrelevant here because, as explained in the previous section, we
can promote $\zeta$ to be the scalar
component of a background linear multiplet by weakly gauging
$U(1)_{J}$. The existence of a new branch emanating from the endpoints
of the interval then hinges on whether or not this field is massless.
At a generic point on the Coulomb branch $U(1)_{J}$ acts non-trivially
on the dual photon and is therefore spontaneously broken. This means that
the corresponding gauge multiplet which contains $\zeta$ acquires a
mass by the Higgs mechanism. Conversely, $\zeta$ only remains
massless at points on the Coulomb branch at which $U(1)_{J}$ remains
unbroken.
The restoration of $U(1)_{J}$ discussed above is only possible if the $S^{1}$
fibre shrinks to zero size at the endpoints of the interval. Following
\cite{ahiss}, this effect can be ascribed to quantum corrections which are
unsuppressed near these points. The result is a Coulomb branch with
the topology of a two sphere.
As mentioned above, it is natural to
combine $\phi$ and $\sigma$ to form a complex scalar which is the
lowest component of a chiral superfield. The Coulomb branch can then
be thought of as a K\"{a}hler manifold of complex dimension one.
We therefore seek such a complex manifold which can be realized as
an $S^1$ fibration over an interval with degenerate fibres at the
endpoints. This is precisely the data which specifies ${\bf CP}^1$
as a toric variety. In fact, as explained in \cite{vl},
the interval is just the toric diagram for ${\bf CP}^{1}$. In the following
we will see that this connection between three-dimensional Coulomb
branches and toric diagrams is a general one.
In this simple case, one can also understand the symmetry between
Higgs and Coulomb branches by considering the original
${\cal N}=4$ self-mirror theory of Intriligator and Seiberg \cite{is}.
In the ${\cal N}=2$ language this consists of a $U(1)$
vector multiplet, a single neutral chiral multiplet, two chiral
multiplets of charge $+1$ and two of charge $-1$. A superpotential
couples all hypermultiplets. One may flow to the theory described
above by gauging various global symmetries (including a subgroup of
one of the $SU(2)$ R-symmetries of the ${\cal N}=4$ algebra) and introducing
mass terms for the unwanted chiral multiplets. The duality properties
should be invariant under such deformations \cite{ahiss}, and the
Higgs and Coulomb branches should again be equal in the above theory.
Finally, one may also see the emergence of the $SU(2)$-invariant
K\"ahler metric on ${\bf CP}^{1}$ by an explicit one-loop
calculation on the Coulomb branch. Note however that,
unlike the theory with ${\cal N}=4$ supersymmetry, one has little
control over the K\"ahler potential and the following calculation
is only valid at points where $e^2\ll|\phi\pm m|$ and, in particular,
cannot be trusted at the end points of the interval. Nevertheless we
proceed with the calculation, encouraged by the end result.
Classically, the low-energy dynamics on the Coulomb branch
are described by the metric,
\begin{eqnarray}
{\rm d}s^2=H(\phi){\rm d}\phi^2+H(\phi)^{-1}{\rm d}\sigma^2
\label{met}\end{eqnarray}
where classically $H=1/e^2$. As well as the renormalisation of the
FI and CS parameters discussed above, there is also a finite
renormalisation of the gauge coupling constant,
\begin{eqnarray}
H^{\rm 1-loop}&=&\frac{1}{e^2}+\frac{1}{2|\phi+m|}
+\frac{1}{2|\phi-m|}\nonumber\\
&=&\frac{1}{e^2}+\frac{m}{m^2-\phi^2}
\nonumber\end{eqnarray}
where the second equality follows only when $\phi$ is restricted to the
interval $I$.
In the limit, $e^2\rightarrow\infty$, we thus
find that the one-loop Coulomb branch metric indeed becomes the
Fubini-Study metric on ${\bf CP}^1$ with K\"ahler class $m$.
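Explicitly, substituting $\phi=-m\cos\theta$ with $\theta\in[0,\pi]$ into
\eqn{met} in this limit (a quick consistency check; we are cavalier here
about the normalisation of the period of $\sigma$):
\begin{eqnarray}
H=\frac{m}{m^2-\phi^2}=\frac{1}{m\sin^2\theta}
\ \ \ \Longrightarrow\ \ \
{\rm d}s^2=m\left({\rm d}\theta^2+\sin^2\theta\,{\rm d}\sigma^2\right)
\nonumber\end{eqnarray}
which is the round metric on ${\bf S}^2\cong{\bf CP}^1$, with the endpoints
$\phi=\pm m$ mapping to the two poles.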
Notice that, as in the original Intriligator-Seiberg model,
the $U(1)_J$ symmetry of the
Coulomb branch is enhanced in the infra-red to an $SU(2)$ symmetry.
Under mirror symmetry this is exchanged with the $SU(2)$ flavour
symmetry of the theory while the FI parameter is exchanged
with the real mass.
To summarise, this model is self-mirror: the Coulomb and Higgs
branches coincide if one simultaneously exchanges the
mass and FI parameters. Specifically, the Coulomb branch exists
only when $\zeta =0$ and is given by ${\bf CP}^1$ of K\"ahler class
$m$. In contrast, the Higgs branch only exists when $m=0$ and is
given by ${\bf CP}^1$ of K\"ahler class $\zeta$.
\section{Mirror Symmetry and Toric Geometry}
In this section we discuss the more general mirror symmetric theories.
Specifically, we will consider,
\paragraph{}
{\bf Theory A:} $U(1)^r$ gauge theory with $N$ chiral multiplets containing
scalars $q_i$ of
charge $R_i^a$, $i=1,\cdots,N$ and $a=1,\cdots,r$. The parameters of
the model are bare CS couplings $\kappa_{ab}$, bare FI parameters
$\zeta_a$ and real masses $m_i$. Notice that only $N-r$ of the
mass parameters are independent.
\paragraph{}
{\bf Theory B:} $U(1)^{N-r}$ gauge theory with $N$ chiral multiplets
containing scalars $\tilde{q}_i$
of charge $S_i^u$, $u=1,\cdots,N-r$. The parameters are
$\tilde{\kappa}_{uv}$, $\tilde{\zeta}_u$ and $\tilde{m}_i$, where
$N-(N-r)=r$ of the mass parameters are independent.
\paragraph{}
The charges of Theory A and Theory B are constrained to satisfy,
\begin{eqnarray}
\sum_{i=1}^NR_i^aS_i^u=0\ \ \ \ \ \ \ \mbox{\rm for all}\ a\ \mbox{\rm and}\ u
\label{theone}\end{eqnarray}
We denote the Coulomb branch of Theory A (B) as ${\cal M}_{C}^{A\,(B)}$
and the Higgs branch as ${\cal M}_H^{A\,(B)}$. Notice firstly that
$dim({\cal M}_C^{A})=dim({\cal M}_H^B)=r$ and
$\dim({\cal M}_C^B)=dim({\cal M}_H^A)=N-r$. Similarly, the number
of independent mass parameters of Theory A (B) is equal to the number
of FI parameters of Theory B (A).
Let us start with the Higgs branch of Theory B in the case where the
masses and CS parameters vanish. This is defined as the symplectic
quotient,
\begin{eqnarray}
{\cal M}_H^B=\mu_u^{-1}(\tilde{\zeta}_u)/U(1)^{N-r}
\nonumber\end{eqnarray}
where the momentum map is $\mu_u=\sum_iS^u_i|\tilde{q}_i|^2$. For
certain values of the FI parameters (specifically, when $\tilde{\zeta}_u$
lie in the K\"ahler cone of ${\cal M}_H^B$ - see \cite{mp}), the
classical Higgs branch is the toric
variety\footnote{For an introduction to toric geometry for physicists,
see section 3 of \cite{mp} and section 4 of \cite{kmv}. Our presentation
will follow these references},
\begin{eqnarray}
{\cal M}_H^B=({\bf C}^{N}-F_{\Delta})/({\bf C}^{\star})^{N-r}
\nonumber\end{eqnarray}
in which ${\bf C}^N$ is parametrised by $\tilde{q}_i$, and
$F_\Delta$ is a subset of ${\bf C}^N-{\bf C}^{\star N}$. Precisely
which subset is determined in terms of a fan of cones, $\Delta$, as
will be reviewed below. The action of $({\bf C}^{\star})^{N-r}$ in
the above quotient is the complexified gauge symmetry of Theory B,
given by,
\begin{eqnarray}
\tilde{q}_i\rightarrow \lambda^{S_i^u}\tilde{q}_i\ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \lambda\in{\bf C}^\star\, ,\, u=1,\cdots N-r
\nonumber\end{eqnarray}
Our first task is to identify the set $F_\Delta$. This is specified
by the charges $S_i^u$, which we may
view as $(N-r)$ charge vectors in ${\bf Z}^N$. The first step is to
construct $r$ vectors orthogonal to
$S_i^u$. These are provided by the charges of Theory A, $R_i^a$,
courtesy of equation \eqn{theone}. These charges then provide a convenient
basis of gauge invariant polynomials which parametrise ${\cal M}_H^B$,
\begin{eqnarray}
X(k)=\prod_{i=1}^N \tilde{q}_i^{\,\langle R_i,k\rangle}\ \ \ \ \ \
\ \ \ \ \mbox{where}\ \langle R_i,k\rangle=\sum_{a=1}^rR_i^ak_a
\nonumber\end{eqnarray}
with $k_a\in{\bf Z}^r$. The charges, $R_i^a$, are now in turn viewed
as $N$ vertices in ${\bf Z}^r$ and it is these which are used to
define $\Delta$ as the collection of cones bounded by vectors from the
origin through the vertices. Finally, the set $F_{\Delta}$ is defined
as containing all sets $\{\tilde{q}_i=0\,:\,i\in \{i_\rho\}\,\}$
such that the corresponding set of vertices
$\{R_{i}^a\,:\,i\in\{i_\rho\}\,\}$ does not lie in any single cone of
$\Delta$.
The toric variety defined in this manner has an action of
${\bf C}^{\star r}$. For each vertex $R_i^a\in{\bf Z}^r$, one
may define the action
\begin{eqnarray}
\tilde{q}_j\rightarrow\lambda^{\delta_{ij}}\tilde{q}_j
\label{cyclebaby}\end{eqnarray}
which is simply the complexified global flavour symmetry of Theory B.
The limit point, $\lambda=0$, of this symmetry is contained in
${\cal M}_H^B$ (as opposed to $F_\Delta$) as the vertices are themselves
the integral generators of one-dimensional cones and therefore clearly
belong to a single cone. Notice however that not all linear combinations
of the symmetry need necessarily have a limit point in ${\cal M}_H^B$.
Nevertheless, the vertices $R_i^a$ do encode the fixed point structure of all
abelian isometries of ${\cal M}_H^B$ and allow one to reconstruct
the geometry of the toric variety. This is the approach
to toric geometry discussed in \cite{vl}. To each of the vertices
$R_i^a$, one associates a hyperplane, $D_i$, orthogonal to the vertex,
on which the corresponding cycle \eqn{cyclebaby} is taken to vanish,
\begin{eqnarray}
D_i\equiv\left\{Y_a\in{\bf R}^r: \sum_{a=1}^rR_i^aY_a=C_i\right\}
\label{dual}\end{eqnarray}
where $C_i$ are $N$ constants. Define $\nabla$,
a polytope of dimension $r$, as the region of ${\bf R}^r$
bounded by the $N$ hyperplanes $D_i$. Importantly, by construction,
a collection of hyperplanes intersects only if the corresponding
limit points are not in $F_\Delta$. This ensures that the Higgs branch,
${\cal M}_H^B$, may be viewed as a fibration of $T^r$ over
$\nabla$ such that on the boundary $D_i$ the cycle defined
by $R_i^a$ in equation \eqn{cyclebaby} shrinks while, on the
intersection of $k$ boundaries, $k$ such cycles shrink.
If the vertices of $\nabla$ lie on ${\bf Z}^r$, then ${\nabla}$ is
said to be an integral reflexive polytope. These vertices then
encode the information for a dual toric variety. This is
Batyrev's construction of mirror manifolds \cite{batty}.
This concludes our discussion of the Higgs branch. The next part
of the story is to show that the Coulomb branch of Theory A,
${\cal M}_C^A$, corresponds to the same toric variety. ${\cal M}_C^A$
is parametrised by the $r$ real scalars $\phi_a$ and
$r$ dual photons, $\sigma_a$. The key point is that the $\phi$'s are
restricted to lie within $\nabla$ due to CS terms, while certain
periods of the $\sigma$'s vanish on the boundaries of this space
thus giving a non-trivial fibration of $T^r$ over $\nabla$ as
described above. To see this, take the constants $C_i$ appearing in
\eqn{dual} to be defined by $C_i=-m_i$ for $i=1,\cdots,N$.
Then the equation $M_i=0$, defined in \eqn{mi}, specifies the hyperplane
$D_i$, defined in \eqn{dual}, spanned by the $\phi_a=Y_a\in{\bf R}^r$.
Thus the region $\nabla$ is specified by a choice of ${\rm sign}\,(M_i)$
for each $i$. In order to fix this choice, consider the effective
CS coupling \eqn{wherekwent},
\begin{eqnarray}
\kappa^{\rm eff}_{ab}=\kappa_{ab}+\ft12\sum_{i=1}^N\,R_i^aR_i^b\ {\rm sign}\,(M_i)
\nonumber\end{eqnarray}
We see that a judicious choice of bare CS coupling $\kappa_{ab}$
will ensure that $\kappa^{\rm eff}_{ab}$ vanishes in $\nabla$ and only
in $\nabla$. As long as $\kappa^{\rm eff}_{ab}$ vanishes, it follows from
equation (\ref{zeff}), that we may choose the bare FI parameters
$\zeta_{a}$ in such a way that the effective FI parameters $\zeta^{\rm eff}_{a}$
vanish. In this case the part of the Coulomb branch described
by the $\phi_a$ is precisely $\nabla$.
As the hyperplane $D_i$ is defined by $M_i=0$, one chiral
multiplet becomes massless on each component of the boundary of
$\nabla$. Thus, at least in the classical theory,
the hyperplane $D_{i}$ is the root of a Higgs branch
on which $q_{i}\neq 0$ and $q_{j}=0$ for all $j\neq i$. As in the
simple self-mirror example of the previous section, moving onto this
Higgs branch requires changing the FI parameters. In fact, for each
$i$, moving onto the corresponding Higgs branch requires varying the
specific linear combination $R_{i}^{a}\zeta^{a}$ away from its
critical value. To analyse this transition, it is convenient to
promote this linear combination of the FI parameters
to a background superfield. After dualizing the gauge fields to periodic
scalars, this corresponds to weakly gauging a particular subgroup of the
$U(1)^{r}_{J}$ symmetry, $U(1)^{(i)}_{J}$, which shifts the
corresponding linear combination, $R_{i}^{a}\sigma_{a}$, of the
dual photons. Following the same arguments as in Section 3, the
existence of the new branch in the quantum theory is only consistent
if this symmetry is unbroken. At a generic point on the Coulomb branch
the whole of $U(1)_{J}^{r}$ is spontaneously broken.
Hence, we predict that the subgroup $U(1)^{(i)}_{J}$ must be restored
on the intersection between the two branches
which in turn requires that the corresponding cycle of the toric fibre
must degenerate over $D_{i}$. One may easily check that this is
exactly the same cycle as that defined in equation
(\ref{cyclebaby}). This means that the Coulomb branch of Theory A
precisely agrees with the description of ${\cal M}_H^B$ given
above as a toric fibration over the polytope $\nabla$.
This completes our identification of ${\cal M}_C^A$ with ${\cal
M}_H^B$.
Let us illustrate the above ideas with the simple example of
complex projective space. The relevant toric data for this example
can also be found in section 4 of \cite{kmv}. We take,
\paragraph{}
{\bf Theory A:} $U(1)^{N-1}$ gauge theory with $N$ chiral multiplets
of charge $R_i^a=\delta^a_i-\delta_i^{a+1}$. The masses are
$m_i=-m$ for $i=1,N$ and $m_i=0$ for $i=2,\cdots,N-1$ where $m>0$.
The bare CS parameters are
\begin{eqnarray}
\kappa_{ab}=\delta_{ab}-\ft12\delta_{a,b-1}-\ft12\delta_{a-1,b}
\label{itskagain}\end{eqnarray}
{\bf Theory B:} $U(1)$ gauge theory with $N$ chiral multiplets of
charge $S_i=+1$ for all $i$. The masses and CS parameters are set to
$-N/2$. The FI parameter is $\tilde{\zeta}$.
\paragraph{}
It is well known that the classical Higgs branch of Theory B
is ${\bf CP}^{N-1}$ of K\"ahler class $\tilde{\zeta}$ \cite{wit},
so we concentrate on the Coulomb branch of Theory A. Notice firstly
that the charges $R_i^a$ and $S_i$ satisfy \eqn{theone}.
Following the prescription above, we set $C_i=m$ for $i=1,N$ and
$C_i=0$ for $i=2,\cdots,N-1$. The hyperplanes $D_i$ defined in
\eqn{dual} are then given by
$Y_i-Y_{i-1}=m$, where it is taken that $Y_0\equiv Y_N\equiv 0$. The vertices
of $\nabla$ are given by the intersection points of any $N-1$ of
the $N$ hyperplanes,
$V^a_i=D_1\cap\cdots\cap\hat{D}_i\cap\cdots\cap D_N$, where the
hat denotes omission of the $i^{\rm th}$ hyperplane. We thus
find that $V^a_i=am$ for $i>a$ and $V_i^a=(a-N)m$ for $i\leq a$.
Notice that, for $m\in {\bf Z}$, all $V^a_i\in{\bf Z}$ and
$\nabla$ is a reflexive integral polytope as required to
define a mirror toric variety. Moreover, as $\sum_{i=1}^NS_iV_i^a=0$
the vertices of $\nabla$ define another ${\bf CP}^{N-1}$. This is
simply the statement that the complex projective space is self-mirror.
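As a quick check of these formulae, take $N=3$ and $m=1$: the vertices are
\begin{eqnarray}
V_1=(-2,-1)\, ,\ \ \ \ V_2=(1,-1)\, ,\ \ \ \ V_3=(1,2)
\nonumber\end{eqnarray}
so that indeed $\sum_{i=1}^3S_iV_i^a=(-2+1+1,\,-1-1+2)=(0,0)$.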
In order to see that the Coulomb branch is defined in terms of
$\nabla$, we examine the effective CS coupling,
\begin{eqnarray}
\kappa^{\rm eff}_{ab}=\kappa_{ab}+\ft12\sum_{i=1}^NR^a_iR^b_i\ {\rm sign}\, (M_i)
\nonumber\end{eqnarray}
where $M_1=\phi_1-m$, $M_i=\phi_i-\phi_{i-1}$ for $i=2,\cdots,N-1$
and $M_N=-\phi_{N-1}-m$.
Identifying $Y_a=\phi_a$, we see
that the equation for the hyperplane $D_i$ is simply
$M_i=0$ and, with the bare CS coupling \eqn{itskagain}, which may be
written as $\kappa_{ab}=\ft12\sum_iR_i^aR_i^b$, we find that $\kappa^{\rm eff}_{ab}=0$
for ${\rm sign}\, M_i =-1$ for all $i$ or, alternatively,
\begin{eqnarray}
-m<\phi_{N-1}<\phi_{N-2}<\cdots <\phi_1<m
\nonumber\end{eqnarray}
which is indeed the polytope $\nabla$. Further, on each boundary
$M_i=0$, the scalar $q_i$ becomes massless and the corresponding
cycle arising from shifts in $\sum_a R^a_i\sigma_a$ must shrink.
Choosing the $\zeta_a$ such that $\zeta^{\rm eff}_a=0$, the Coulomb branch
is then given as a toric variety to be ${\bf CP}^{N-1}$.
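These statements are easily made explicit in the simplest case $N=2$, where
$r=1$, $R_1^1=1$, $R_2^1=-1$ and $\kappa_{11}=1$. The effective CS coupling is
\begin{eqnarray}
\kappa^{\rm eff}_{11}=1+\ft12\left[\,{\rm sign}\,(\phi_1-m)+{\rm sign}\,(-\phi_1-m)\,\right]
\nonumber\end{eqnarray}
which vanishes if and only if both signs are negative, i.e. precisely on the
interval $-m<\phi_1<m$.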
Notice that in the case of $N=2$, the above theory differs from the
self-mirror theory introduced in the previous section.
Although in the above example the moduli space of vacua is compact,
more generally this will not be the case. In fact, compactness is
assured only if the fan $\Delta$ spans ${\bf Z}^r$. In particular, in
the case of a toric variety which obeys the Calabi-Yau condition
$\sum_{i=1}^N{R_i^a}=0$ for all $a$, the Coulomb branch is always
non-compact.
\section*{Appendix: A Brane Configuration}
In this appendix we show how the vacuum structure of the self-mirror
theory exhibited in Section 3 and, in particular, the compactness of
the Coulomb branch can be rederived from a brane realisation of this
theory. The relevant brane configuration is the T-dual of that
described in \cite{hanhori}. Set-ups identical to the one discussed
here were also considered in
\cite{bhkk}. We work in units $\alpha^\prime=1$ and set $g_{\rm st}=1$.
The configuration of interest involves a D3-brane suspended between
various 5-brane webs in IIB string theory. The first of these webs
consists of an NS5-brane spanning worldvolume directions 012345, two
D5-branes spanning worldvolume directions 012346 and two $(1,1)$
5-branes in directions 01234(56), where the direction in the $(X^5,X^6)$-plane
is $X^5=\pm X^6$. The final configuration is shown
in figure 1 and is positioned at $X^7=X^8=X^9=0$. The two D5-branes
are located at $X^5=\pm m$ while, when $m=0$, the NS5-brane
is located at $X^6=0$. The position of this brane for general $m$
will be described below.
The second 5-brane web is much simpler, consisting of a
single NS5-brane spanning 012589, which is traditionally called the
NS$^\prime$5-brane. It is positioned at $X^3=X^4=0$, $X^7=1/e^2$
and $X^6=\zeta$. Finally a D3-brane is suspended between
the web and NS$^\prime$5-brane, spanning 0127. It is of finite
length in the $X^7$ direction and is positioned at $X^5=\phi$.
The low-energy dynamics of the D3-brane in this configuration
are governed by the three-dimensional gauge theory of Section 3,
with the dictionary between brane and field theory moduli explicitly
given above. Before examining the vacuum
structure, we must first describe the zero mode of the 5-brane
web corresponding to changing $m$. This is denoted by the dotted
lines in Figure 1. Notice that,
fixing the position of the $(1,1)$ 5-branes at infinity, the
position of the NS5-brane is given by $\zeta=-m$.
\begin{figure}
\begin{center}
\epsfxsize=3.0in\leavevmode\epsfbox{fig1.eps}
\end{center}
\caption{The 5-brane web. The cross denotes the origin of the $(X^5,X^6)$
plane. The dotted lines represent the zero mode of the web.}
\label{fig1}
\end{figure}
Let us now discuss the vacuum structure of the theory. The
numbering here coincides with that of section 2.
Each case is illustrated with a diagram in Figure 2 in which we
draw only the 5-brane web, with the position of the D3-brane (which
also encodes the $X^6$ position of the NS$^\prime$5-brane) marked by
a solid dot.
i) $m=\zeta=0$: The set up is shown in Figure 2i). It is clear that
the D3-brane has a unique vacuum state at $\phi=0$.
ii) $m=0$, $\zeta>0$: This is shown in Figure 2ii). The D3-brane ends on
the two coincident D5-branes located at $\phi=0$.
This is the Higgs branch of the gauge theory which, in the analogous
two-dimensional theory, was
argued in \cite{hanhori} to contain a copy of ${\bf CP}^1$ that is
hard to see from the brane picture.
iii) $m>0$, $\zeta=-m$: It is clear from the brane diagram, shown
in Figure 2iii), that the D3-brane must end on the NS5-brane and
is therefore restricted to the interval $|\phi|<m$. This is the
Coulomb branch of the gauge theory.
iv) $m>0$, $\zeta>-m$: There are two isolated vacua where the D3-brane
ends on one or the other D5-brane as shown in Figure 2iv).
These are located at $\phi=\pm m$ in agreement with the field theory
analysis of Section 2.
v) $m \geq 0$, $\zeta<-m$: In this case the D3-brane must end on one
of the two dyonic fivebranes as shown in Figure 2v). The corresponding
vacua are located at $\phi=\pm \zeta$.
\begin{figure}
\begin{center}
\epsfxsize=5.0in\leavevmode\epsfbox{fig2.eps}
\end{center}
\caption{ The 5-brane web for various parameters.
The dot denotes the position of the D3-brane. The top two diagrams
are, from left to right, 2i) and 2ii) and the bottom three 2iii),
2iv) and 2v). In diagram 2iii), the
D3-brane is restricted to lie on the interval as shown by the arrow.}
\label{fig2}
\end{figure}
Notice that the 5-brane web encodes information about quantum
effects of the gauge theory, in particular the contribution to the
renormalised FI parameter arising from weakly gauged global symmetries
(the last term in \eqn{zeff}). Moreover, it was argued in \cite{ohta,bhkk} that
when the D3-brane is suspended between 5-branes of different kinds
CS terms appear on its world-volume theory. This is
again in agreement with the field theory dynamics whereby, for instance,
in the Coulomb phase described above
the D3-brane cannot move outside the interval
due to dynamically generated CS terms. In fact, one can further study
the theory of Section 3 with a bare CS coupling $\kappa=\pm 1$
by replacing the NS$^\prime$5-brane with a $(1,1)^\prime$-5-brane.
It is simple to see that, for $\zeta^{\rm eff} =0$ and $m\neq 0$, the D3-brane
may lie anywhere on one of the $(1,1)$-5-branes of the web, parametrised
by the half-line. A short calculation confirms that this is in agreement
with the field theory result.
The similarity between the 5-brane webs and toric skeletons was
pointed out by Vafa and Leung \cite{vl}. In the case of an
NS$^\prime$5-brane above, we saw that, on the Coulomb branch, the
D3-brane was restricted to lie on the interval of the NS5-brane:
this interval is the toric skeleton for ${\bf CP}^1$. Similarly,
in the case of a $(1,1)^\prime$-5-brane, the D3-brane is
constrained to lie on the half-line of the $(1,1)$-5-brane:
this is the toric skeleton for the complex plane. In the field theory,
using the results of Section 3 and \cite{ahiss}, one may see that the
size of the ${\bf S}^1$ is zero at the origin of
the half-line while asymptotically, where one may trust the one-loop
calculation, it grows linearly, in agreement with the proposal that
the Coulomb branch is indeed the complex plane.
\subsubsection*{Acknowledgements}
We would like to thank Heather Russell for discussions on toric
geometry. DT would further like to thank the University of Washington,
Seattle where this work was initiated, as well as the Tata Institute of
Fundamental Research, Mumbai and the Mehta Research Institute, Allahabad,
for their kind hospitality. DT is supported by an EPSRC fellowship.
ND is supported by a PPARC ARF. The authors acknowledge support from
TMR network grant FMRX-CT96-0012.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 9,969 |
{"url":"https:\/\/math.stackexchange.com\/questions\/1924129\/solving-frac-sin2v-sin40-circ-sin40-circ-2v-3","text":"# Solving $\\frac{\\sin(2v)+\\sin(40^\\circ)}{\\sin(40^\\circ-2v)}=3$\n\nHow do I solve this equation?\n\n$$\\frac{\\sin(2v)+\\sin(40^\\circ)}{\\sin(40^\\circ-2v)}=3$$\n\nI'm not sure that I have the knowledge required to solve this problem yet, I have tried and can get it to: $$\\sin(2v)(1+3\\cos(40^\\circ))=3\\sin(40^\\circ)\\cos(2v)-\\sin(40^\\circ)$$\n\nHowever, I doubt it has brought me any closer to a solution, and would be happy if you could provide me with a full solution. Thanks!\n\nHINT:\n\nUsing Prosthaphaeresis Formula $(\\sin C+\\sin D)$,\n\n$$2\\sin(v+20^\\circ)\\cos(v-20^\\circ)=-3\\cdot2\\sin(v-20^\\circ)\\cos(v-20^\\circ)$$\n\n$$2\\cos(v-20^\\circ)\\{\\sin(v+20^\\circ)+3\\sin(v-20^\\circ)\\}=0$$\n\nSo, either $\\cos(v-20^\\circ)=0$ or $\\sin(v+20^\\circ)+3\\sin(v-20^\\circ)=0$\n\nBut $\\cos(v-20^\\circ)=0$ will make the left hand side of the given expression $$\\dfrac00$$\n\nFinally use $$\\sin(A\\pm B)$$ formula","date":"2019-07-17 02:22:34","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9034706950187683, \"perplexity\": 161.93158320386718}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 
2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-30\/segments\/1563195525009.36\/warc\/CC-MAIN-20190717021428-20190717043428-00239.warc.gz\"}"} | null | null |
{"url":"https:\/\/tutors.com\/math-tutors\/geometry-help\/quotient","text":"This lesson will teach you what quotient means in math and how to find the quotient in division.\n\n## What Is A Quotient?\n\nThe quotient is the answer to any division problem. The word comes from a Latin word, quotiens, which means \u201chow many times,\u201d as in, \u201chow many times does $8$ go into $65$? The number of times $8$ goes into $65$ is the quotient or the result of a division problem.\n\n### Parts Of A Division Problem\n\nIn a division problem, the number being divided into pieces is the dividend. The number by which the dividend is divided is called the divisor. And the answer to the division problem is the quotient.\n\nHere are the parts for the simple division problem, ten divided by two:\n\n### Where Does The Quotient Go?\n\nWhen using short or long division, the dividend goes under the division bracket, \u27cc, the divisor goes to the left of the bracket, and the quotient goes on top of the bracket aligned by place value with the dividend.\n\nThe division symbol, $\\mathbf{\u00f7}$, is called an obelus. It is used in division number sentences. The obelus follows the dividend and precedes the divisor.\n\n## How To Find The Quotient Of A Number\n\nSetting up a division problem is a key first step to dividing correctly. First, decide which number is to be divided. That is the dividend. Place it under the division bracket.\n\nThe dividend is divided by some other number; that is the divisor, and it goes to the left of the bracket. Perform the division. Your answer is the quotient. Any remainder is placed to the right of the quotient.\n\nFinding the two given parts (dividend and divisor) is often challenging in a word problem, but in a number sentence, these parts stand out. Here is an example sentence:\n\n\u2022 Dividend (obelus) divisor (equal sign) quotient\n\nIn this case, our answer would be the whole number $5$. So, the number $5$ is one example of a quotient. 
We will go over more complicated examples of quotients later in the lesson.\n\n### Quotient And Remainder\n\nWhen you compute the quotient in division, you may end up with a remainder. The result of division is called the quotient. The number left over is called the remainder. The remainder is part of the result.\n\nHere is a quotient example with a remainder:\n\n$8$ goes into $34$ four ($4$) times, which is $32$. That leaves us with $2$ remaining.\n\n## How To Find The Quotient Of A Fraction\n\nFractions are already division problems. The fraction bar separating numerator and denominator is signaling division:\n\n### Quotient Of Two Fractions\n\nA more complicated search for a quotient can occur when you are dividing two fractions:\n\nSuch a problem can also appear in this form:\n\n$\\frac{\\frac{5}{9}}{\\frac{10}{16}}$\n\nRecall the process for dividing fractions; invert the second fraction and multiply:\n\nThe quotient for is $\\frac{8}{9}$.\n\n## Division Quotients In Algebra\n\nQuotients appear in algebraic expressions, too. You can divide one monomial by another:\n\n$\\frac{51ab}{17b}$\n\nThe variable $b$ in the numerator and denominator cancel out (think: $\\frac{1}{1}$). And the numbers divide readily, leaving you with $3a$ as the quotient.\n\nYou can also divide a polynomial by a monomial:\n\nYou have the polynomial being divided by a monomial, $9x$.\n\nFirst, separate the two terms in the numerator and divide each by the denominator.\n\nThis gives you 3x for the first term ($\\frac{27{x}^{2}}{9x}$) and 4 for the second term ($\\frac{36x}{9x}$):\n\nin the quotient\n\nOne polynomial can be divided by another polynomial to get a quotient, too:\n\n## What Is A Partial Quotient?\n\nPartial quotient is a division method (also called chunking) that uses repeated subtraction to solve simple division problems. 
The partial quotients method is used when dividing a large number by a small number.\n\nInstead of trying to figure out how many times $12$ goes into $250$, you can turn it into a simpler multiplication problem and multiply $12$ by an easy multiple like $10$, which gives you $120$. The number 10 becomes your partial quotient, and you subtract $120$ from the divided, $250$.\n\nYou repeat this step reducing the dividend by chunks until it is reduced as much as it can be by $12$. At the end, you add up your partial quotients, and the result is your quotient.\n\n## What you learned:\n\nAfter working your way through this lesson and video, you have learned:\n\n\u2022 The definition of quotient\n\u2022 The parts of a division problem\n\u2022 How to find the quotient\nInstructor: Malcolm M.\nMalcolm has a Master's Degree in education and holds four teaching certificates. He has been a public school teacher for 27 years, including 15 years as a mathematics teacher.\n\n### 20+ Math Tutors in Ashburn, VA\n\nGet better grades with tutoring from top-rated private tutors. Local and online.\n\n15 chapters | 149 lessons\nTutors online\n\n### Find a math tutor in Ashburn, VA\n\nLearn faster with a math tutor. 
Find a tutor locally or online.","date":"2022-01-17 04:52:32","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 29, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8475457429885864, \"perplexity\": 801.536385742703}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-05\/segments\/1642320300289.37\/warc\/CC-MAIN-20220117031001-20220117061001-00538.warc.gz\"}"} | null | null |
{"url":"https:\/\/e0hyl.github.io\/BLOG-OF-E0\/SplineCNN\/","text":"# CVPR-2018-SplineCNN\n\n## May 31, 2020\n\nReading time ~12 minutes\n\nContents\n\n# SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels\n\n## SplineCNN\n\n### \u7b26\u53f7\u8bf4\u660e\n\nInput graphs.\u6709\u5411\u56fe$$G=(V,E,U)$$\uff0c\u5176\u4e2d$$V={1,...,N}$$\u8868\u793a\u8282\u70b9\u96c6\u5408\uff0c$$E\\subseteq{V\\times{V}}$$\u8868\u793a\u8fb9\u96c6\u5408\uff0c$$U\\in{[0,1]^{N\\times{N}\\times{d}}}$$\u7531\u6bcf\u6761\u6709\u5411\u8fb9\u7684$$d$$\u7ef4\u4f2a\u5750\u6807$$u(i,j)\\in{[0,1]^d}$$\u7ec4\u6210\uff1b\u5bf9\u4e8e\u4e00\u4e2a\u8282\u70b9$$i\\in{V}$$\uff0c\u5b83\u7684\u90bb\u5c45\u8282\u70b9\u96c6\u5408\u8868\u793a\u4e3a$$N(i)$$\u3002\n\nU can be interpreted as an adjacency matrix with d-dimensional, normalized entries u(i, j) if (i, j) \u2208 E and 0 otherwise\n\nInput node features. \u5b9a\u4e49\u6620\u5c04$$f:V\\rightarrow{R^{M_{in}}}$$\uff0c\u5176\u4e2d$$f(i)\\in{R^{M_{in}}}$$\u8868\u793a\u6bcf\u4e2a\u8282\u70b9$$i\\in{V}$$\u7684$$M_{in}$$\u7ef4\u8f93\u5165\u7279\u5f81\u7684\u5411\u91cf\u3002\u5bf9\u4e8e$$1\\leq{l}\\leq{M_{in}}$$\uff0c$$f(i)$$\u4e2d\u6bcf\u4e2a\u7279\u5f81\u503c\u7684\u96c6\u5408$${\\{f_l(i)\\vert{i\\in{V}}\\}}$$\u88ab\u79f0\u4e3a\u8f93\u5165\u7684\u7279\u5f81\u56fe (feature map)\u3002\n\nB-spline basis function. 
$$((N^m_{1,i})_{1\\leq{i}\\leq{k_1}},...,(N^m_{d,i})_{1\\leq{i}\\leq{k_d}})$$\u8868\u793ad\u7ec4$$m$$\u9636\u7684\u5f00(open)B-\u6837\u6761\u57fa\u51fd\u6570\uff0c\u8282\u70b9(knots)\u5411\u91cf\u7b49\u8ddd\u5206\u5e03\uff0c\u5373\u6837\u6761\u5747\u5300(uniform)\uff0c\u5176\u4e2d$$k=(k_1,...,k_d)$$\u5b9a\u4e49\u4e86$$d$$\u7ef4\u7684\u6838\u5927\u5c0f\u3002\n\n### \u6838\u5fc3\u601d\u60f3\n\n\u2022 Graphs: U\u8868\u793a\u7684\u7279\u5f81\u53ef\u4ee5\u662f\u8fb9\u7684\u6743\u91cd\u3001\u8282\u70b9\u7684\u5ea6\u7b49\n\u2022 Discrete Manifolds: U\u53ef\u4ee5\u5305\u542b\u5c40\u90e8\u7684\u5173\u7cfb\u4fe1\u606f\uff0c\u5982\u6e90\u8282\u70b9\u6bcf\u6761\u8fb9\u6240\u5bf9\u5e94\u7684\u76ee\u6807\u8282\u70b9\u7684\u76f8\u5bf9\u6781\u5750\u6807\u3001\u7403\u5750\u6807\u6216\u7b1b\u5361\u5c14\u5750\u6807\n\n### \u5377\u79ef\u8fd0\u7b97\n\n\u2022 \u8fde\u7eed\u7684\u5377\u79ef\u6838\u51fd\u6570$$g_l:[a_1,b_1]\\times{...}\\times{[a_d,b_d]}\\rightarrow{R}$$\n$g_l(u)=\\sum_{p\\in{P}}w_{p,l}\\cdot{B_p(u)}$\n\n\u200b \u5176\u4e2d$$B_p$$\u662fp\u4e2d\u6240\u6709B-\u6837\u6761\u57fa\u51fd\u6570\u7684\u53c9\u4e58\uff0c\u5373$$B_p(u)=\\prod_{i=1}^dN_{i,p_i}^m{u_i}$$\uff0c\u800c$$w_{p,l}$$\u5bf9\u4e8e\u6bcf\u4e2a$$P=((N_{1,i}^m)_i\\times{...}\\times{(N_{d,i}^m)_i})$$\u4e2d\u7684\u5143\u7d20$$p$$\u4ee5\u53ca\u7279\u5f81\u56fe\u4e2d\u7684$$M_{in}$$\u4e2a\u503c\u90fd\u662f\u53ef\u8bad\u7ec3\u7684\u53c2\u6570\uff0c\u56e0\u6b64\u53ef\u8bad\u7ec3\u7684\u53c2\u6570\u603b\u6570\u4e3a$$M_{in}\\cdot{K}=M_{in}\\cdot{\\prod^d_{i=1}}k_i$$\u3002\n\n\u2022 \u7ed9\u5b9a\u6838\u51fd\u6570\u96c6$$g=(g_1,...,g_{M_{in}})$$\u548c\u8f93\u5165\u8282\u70b9\u7279\u5f81$$f$$\u540e\uff0c\u8282\u70b9$$i$$\u5728\u7a7a\u95f4\u4e0a\u7684\u5377\u79ef\u8fd0\u7b97\u53ef\u5b9a\u4e49\u4e3a\n$(f*g)(i)=\\frac{1}{\\vert{N(i)\\vert}}\\sum_{l=1}^{M_{in}}\\sum_{j\\in{N(i)}}f_l(j)\\cdot{g_l(u(i,j))}$\n\n#### 
\u5c40\u90e8\u652f\u6301\u6027\n\n$(f_l*g_l)(i)=\\sum_{j\\in{N(i)},p\\in{P(u(i,j))}}f_l(j)\\cdot{w_{p,l}}\\cdot{B_p(u(i,j))}$\n\n## \u5b9e\u9a8c\u7ed3\u679c\n\n### \u56fe\u50cf\uff08\u56fe\uff09\u5206\u7c7b\n\n[MNIST] 60,000 training and 10,000 test images containing grayscale, handwritten digits from 10 different classes\n\n1. equal grid graphs ($$28\\times28$$ nodes)\n\n\u2022 LeNet5-like network architecture: $$SConv((5, 5), 1, 32)\\rightarrow{MaxP(4)}\\rightarrow{SConv((5, 5), 32, 64)} \\\\ \\rightarrow{MaxP(4)}\\rightarrow{FC(512)}\\rightarrow{FC(10)}$$\n\u2022 mirror the LeNet5 architecture with its 5 \u00d7 5 filters: neighborhoods of size 5 \u00d7 5 from the grid graph\n\u2022 reach equivalence to the traditional convolution operator in CNNs: m=1\n2. embedded graph of 75 nodes defining the centroids of superpixels [\u8d85\u50cf\u7d20]\n\n[MoNet] F. Monti, D. Boscaini, J. Masci, E. Rodola, J. Svoboda, and M. M. Bronstein. Geometric deep learning on graphs and manifolds using mixture model CNNs. 
In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5425\u20135434, 2017.\n\n\u200b $$w_j(u) = exp(\u2212\\frac1 2 (u \u2212 \u00b5_j )^\u22a4\u03a3_j^{\u22121} (u \u2212 \u00b5_j ))$$\n\n\u200b $$\u03a3_j$$ and $$\u00b5_j$$ are learnable d \u00d7 d and d \u00d7 1 covariance matrix and mean vector of a Gaussian kernel\n\n\u2022 architecture: $$SConv((k_1,k_2), 1, 32)\\rightarrow{MaxP(4)}\\rightarrow{SConv((k_1, k_2), 32, 64)} \\\\ \\rightarrow{MaxP(4)}\\rightarrow{AvgP}\\rightarrow{FC(128)}\\rightarrow{FC(10)}$$\n\u2022 Cartesian coordinates: $$k_1=k_2=4+m$$; Polar coordinates: $$k_1=1+m, k_2=8$$\n\n#### Discussion\n\n\u2022 \u5b9e\u9a8c1\u4e2d\u4e09\u79cd\u65b9\u6cd5\u8868\u73b0\u7c7b\u4f3c\n\u2022 \u5b9e\u9a8c2\u4e2d\u51c6\u786e\u7387\u4f18\u4e8eMoNet\u7ea64.11\u4e2a\u767e\u5206\u70b9\uff1ain contrast to the MoNet kernels, our kernel function has individual trainable weights for each combination of input and output feature maps, just like the filters in traditional CNNs.\n\n\u2022 Different configurations\n\n\u2022 \u6a2a\u5750\u6807\uff1a\u9636\u6570m\uff08\u7ebf\u6027\uff0c\u4e8c\u6b21\uff0c\u7acb\u65b9\uff09\uff1b\u56fe\u4f8b\uff1a\u4e24\u79cd\u4e0d\u540c\u7684\u4f2a\u5750\u6807\n\u2022 \u4f7f\u7528\u9636\u6570\u8f83\u5c0f\u7684B\u6837\u6761\u57fa\u51fd\u6570\u548c\u7b1b\u5361\u5c14\u5750\u6807\u65f6\u7684\u8868\u73b0\u7565\u4f18\u4e8e\u5176\u4ed6\u8bbe\u7f6e\n\n### \u56fe\u8282\u70b9\u5206\u7c7b\n\n[Cora] Nodes: 2708 scientific publications (classified into one of seven classes); Links: 5429 undirected unweighted.\n\nEach publication (document) in the dataset is described by a 0\/1-valued word vector indicating the absence\/presence of the corresponding word from the dictionary (1433 unique words).\n\n\u2022 no Euclidean relations\n\u2022 pseudo-coordinates: globally normalized degree of the target nodes $$u(i, j)=\\frac{deg(j)}{max_{v\\in{V}}deg(v)}$$\n\u2022 architecture: $$SConv((2), 1433, 16)\\rightarrow{SConv((2), 16, 
7)}, m=1$$\n\n### (shape correspondence) 3D\u914d\u51c6\n\n[FAUST] 100 scanned human shapes in 10 different poses, resulting in a total of 100 non-watertight meshes with 6890 nodes each\n\n[Princeton benchmark protocol] Correspondence quality: counting the percentage of derived correspondences that lie within a geodesic radius r around the correct node.\n\n\u2022 three-dimensional meshes\n\n\u2022 architecture: $$SConv((k_1, k_2, k_3), 1, 32)\\rightarrow{SConv((k_1, k_2, k_3), 32, 64)} \\\\ \\rightarrow{SConv((k_1, k_2, k_3), 64, 64)}\\rightarrow{Lin(256)}\\rightarrow{Lin(6890)}$$\n\n\u5176\u4e2d$$Lin(o)$$\u8868\u793a\u8f93\u51fa $$o$$ \u7ef4\u7279\u5f81\u7684$$1\\times1$$\u5377\u79ef\u5c42\n\n\u2022 end-to-end: without handcrafted feature descriptors, input features are trivially given by $$1\u2208R^{N\u00d71}$$\uff08\u6bcf\u4e2a\u8282\u70b9\u7684\u7279\u5f81\u90fd\u7b80\u5355\u5730\u88ab\u521d\u59cb\u5316\u4e3a1\uff09\n\n#### Discussion\n\n\u2022 [\u6d4b\u5730\u8ddd\u79bb]\u9519\u8bef\u4e3a0\u65f6\uff0c\u5339\u914d\u738799.2%\u4f18\u4e8e\u5176\u4ed6\u6240\u6709\u65b9\u6cd5\n\n\u2022 \u5728\u66f4\u5927\u7684\u6d4b\u5730\u9519\u8bef\u8fb9\u754c\u4e0a\u7684\u5168\u5c40\u8868\u73b0\u7565\u4f4e\u4e8eFMNet\uff0c\u53ef\u80fd\u7684\u539f\u56e0\u662f\u635f\u5931\u51fd\u6570\u8bbe\u7f6e\u7684\u4e0d\u540c\u3002\u4f46\u503c\u5f97\u5f3a\u8c03\u7684\u662f\uff0c\u5176\u4ed6\u65b9\u6cd5\u90fd\u4f7f\u7528SHOT\u7279\u5f81\u63cf\u8ff0\u5b50\u4f5c\u4e3a\u8f93\u5165\uff08\u975e\u7aef\u5230\u7aef\uff09\u3002\n\nWhile we train against a one-hot binary vector using the cross entropy loss, FMNet trains using a specialized soft error loss, which is a more geometrically meaningful criterion that punishes geodesically far-away predictions stronger than predictions near the correct node\n\nUpdated on","date":"2022-08-16 16:04:53","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, 
\"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.3928614556789398, \"perplexity\": 11739.563237190147}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-33\/segments\/1659882572408.31\/warc\/CC-MAIN-20220816151008-20220816181008-00042.warc.gz\"}"} | null | null |
Q: How to upload dynamic VHD to Azure using python and rest API calls? https://github.com/Microsoft/azure-vhd-utils is written in Go.
Add-AzureRMVhd is the powershell cmd.
Similarly, is there a python alternative that uploads dynamic VHD files and does checksum verification?
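The checksum-verification half of the question can be handled locally before any REST call: Azure's `Content-MD5` / `x-ms-blob-content-md5` headers expect the base64 encoding of the raw 16-byte MD5 digest, not the usual hex string. A small sketch (function name is my own) that streams the file so a multi-GB VHD never has to fit in memory:

```python
import base64
import hashlib

def blob_content_md5(path, chunk_size=4 * 1024 * 1024):
    """Stream a local file and return its base64-encoded MD5 digest,
    the format Azure expects in Content-MD5 headers (base64 of the
    raw 16-byte digest, not the hex string)."""
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            md5.update(chunk)
    return base64.b64encode(md5.digest()).decode()
```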
#Working code to list blobs using GET API:
import requests
import datetime
import hmac
import hashlib
import base64
storage_account_name = 'abcd'
storage_account_key = '4********************************************$'
api_version = '2018-03-28'
request_time = datetime.datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT')
string_params = {
    'verb': 'GET',
    'Content-Encoding': '',
    'Content-Language': '',
    'Content-Length': '',
    'Content-MD5': '',
    'Content-Type': '',
    'Date': '',
    'If-Modified-Since': '',
    'If-Match': '',
    'If-None-Match': '',
    'If-Unmodified-Since': '',
    'Range': '',
    'CanonicalizedHeaders': 'x-ms-date:' + request_time + '\nx-ms-version:' + api_version + '\n',
    'CanonicalizedResource': '/' + storage_account_name + '/containername\ncomp:list\nrestype:container'
}
string_to_sign = (string_params['verb'] + '\n'
                  + string_params['Content-Encoding'] + '\n'
                  + string_params['Content-Language'] + '\n'
                  + string_params['Content-Length'] + '\n'
                  + string_params['Content-MD5'] + '\n'
                  + string_params['Content-Type'] + '\n'
                  + string_params['Date'] + '\n'
                  + string_params['If-Modified-Since'] + '\n'
                  + string_params['If-Match'] + '\n'
                  + string_params['If-None-Match'] + '\n'
                  + string_params['If-Unmodified-Since'] + '\n'
                  + string_params['Range'] + '\n'
                  + string_params['CanonicalizedHeaders']
                  + string_params['CanonicalizedResource'])
signed_string = base64.b64encode(hmac.new(base64.b64decode(storage_account_key), msg=string_to_sign.encode('utf-8'), digestmod=hashlib.sha256).digest()).decode()
headers = {
    'x-ms-date': request_time,
    'x-ms-version': api_version,
    'Authorization': ('SharedKey ' + storage_account_name + ':' + signed_string)
}
url = ('https://' + storage_account_name + '.blob.core.windows.net/containername?restype=container&comp=list')
r = requests.get(url, headers = headers)
print(r.content)
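The signing step used above can be factored into a small helper: Shared Key authorization is an HMAC-SHA256 over the canonical string, keyed with the base64-decoded account key, with the digest re-encoded as base64. A minimal sketch (the key here is a throwaway placeholder, not a real storage account key):

```python
import base64
import hashlib
import hmac

def shared_key_signature(account_key_b64: str, string_to_sign: str) -> str:
    """Base64(HMAC-SHA256(Base64Decode(account_key), string_to_sign))."""
    digest = hmac.new(base64.b64decode(account_key_b64),
                      msg=string_to_sign.encode('utf-8'),
                      digestmod=hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

# Throwaway key for demonstration only (NOT a real storage account key).
fake_key = base64.b64encode(b'0' * 64).decode()
sig = shared_key_signature(fake_key, 'GET' + '\n' * 12)
print(len(base64.b64decode(sig)))  # 32 (SHA-256 digest length)
```

Any mismatch between the headers actually sent and the canonical string fed to this function produces the familiar 403 AuthenticationFailed response.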
Is this the right canonicalized resource to upload a page blob? 'CanonicalizedResource': '/' + storage_account_name + '/containername/vhdname.vhd'
#Failing PUT request to upload page blob
import requests
import datetime
import hmac
import hashlib
import base64
storage_account_name = 'abc'
storage_account_key = '4*******************************='
api_version = '2018-03-28'
request_time = datetime.datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT')
string_params = {
    'verb': 'PUT',
    'Content-Encoding': '',
    'Content-Language': '',
    'Content-Length': '',
    'Content-MD5': '',
    'Content-Type': '',
    'Date': '',
    'If-Modified-Since': '',
    'If-Match': '',
    'If-None-Match': '',
    'If-Unmodified-Since': '',
    'Range': '',
    'CanonicalizedHeaders': 'x-ms-blob-type:PageBlob' + '\nx-ms-date:' + request_time + '\nx-ms-version:' + api_version + '\n',
    'CanonicalizedResource': '/' + storage_account_name + '/containername/vhdname.vhd'
}
string_to_sign = (string_params['verb'] + '\n'
                  + string_params['Content-Encoding'] + '\n'
                  + string_params['Content-Language'] + '\n'
                  + string_params['Content-Length'] + '\n'
                  + string_params['Content-MD5'] + '\n'
                  + string_params['Content-Type'] + '\n'
                  + string_params['Date'] + '\n'
                  + string_params['If-Modified-Since'] + '\n'
                  + string_params['If-Match'] + '\n'
                  + string_params['If-None-Match'] + '\n'
                  + string_params['If-Unmodified-Since'] + '\n'
                  + string_params['Range'] + '\n'
                  + string_params['CanonicalizedHeaders']
                  + string_params['CanonicalizedResource'])
signed_string = base64.b64encode(hmac.new(base64.b64decode(storage_account_key), msg=string_to_sign.encode('utf-8'), digestmod=hashlib.sha256).digest()).decode()
headers = {
    'x-ms-date': request_time,
    'x-ms-version': api_version,
    'Content-Length': '0',
    'x-ms-blob-type': 'PageBlob',
    'Authorization': ('SharedKey ' + storage_account_name + ':' + signed_string)
}
url = ('https://' + storage_account_name + '.blob.core.windows.net/containername/vhdname.vhd')
# Bug: the comment above says PUT, but requests.get was used here. Note also that
# Put Blob for a page blob requires an x-ms-blob-content-length header (the blob
# size, a multiple of 512), which is missing from the headers and the signature.
r = requests.put(url, headers=headers)
print(r.content)
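For context on what follows a successful Put Blob: the VHD content itself is uploaded range by range with Put Page requests against the (initially empty) page blob. Splitting a file into 512-byte-aligned ranges can be sketched in pure Python (the 4 MB chunk size is an assumption, matching the Put Page per-request maximum):

```python
def page_ranges(total_size, chunk=4 * 1024 * 1024):
    """Yield (start, end) byte offsets for successive Put Page calls.
    Page blobs require 512-byte alignment; any multiple-of-512 chunk keeps
    every intermediate boundary aligned."""
    assert chunk % 512 == 0, "chunk must be a multiple of 512"
    start = 0
    while start < total_size:
        end = min(start + chunk, total_size)
        yield start, end
        start = end

# A 10 MiB + 512 B file split into 4 MiB ranges:
print(list(page_ranges(10 * 1024 * 1024 + 512)))
```

Each yielded pair becomes an `x-ms-range: bytes=start-(end-1)` header on a Put Page request; for a dynamic VHD, all-zero ranges can simply be skipped.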
A:
is there a python alternative that uploads dynamic VHD files
We use the Azure Python SDK to upload the VHD file to Azure storage:
from azure.storage.blob import BlockBlobService  # legacy azure-storage-blob (<= 2.x)

block_blob_service = BlockBlobService(account_name='accountname', account_key='accountkey')
block_blob_service.create_blob_from_path(container_name, local_file_name, full_path_to_file)
Note that a VHD attached as a VM disk must be stored as a page blob; BlockBlobService creates block blobs, so for that case the analogous PageBlobService.create_blob_from_path should be used instead. For more information, please refer to the official Azure tutorial.
does checksum verification?
Yes, the Azure Blob service provides mechanisms to ensure data integrity at both the application and transport layers; the linked post details these mechanisms from the service and client perspective. MD5 checking is optional on both PUT and GET operations.
For more information, please refer to this blog.
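The application-layer check mentioned above boils down to the Content-MD5 header, whose value is the base64 encoding of the raw 16-byte MD5 digest of the payload (not the hex string); a minimal sketch:

```python
import base64
import hashlib

def content_md5(payload: bytes) -> str:
    """Value for the Content-MD5 header: base64 of the raw MD5 digest."""
    return base64.b64encode(hashlib.md5(payload).digest()).decode()

print(content_md5(b''))  # 1B2M2Y8AsgTpgAmY7PhCfg==
```

The service recomputes the digest on receipt and rejects the request with 400 Md5Mismatch if it differs.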
const gulp = require('gulp');
const livereload = require('gulp-livereload');
const PATHS = require('./tasks.constants').PATHS;
const TRANSPILE = require('./tasks.constants').TRANSPILE;
const ALL = require('./tasks.constants').ALL;
// watch all source files for changes
gulp.task('watch', ['build'], () => {
  livereload.listen();
  for (const task of ALL) {
    // transpile tasks
    if (TRANSPILE.has(task)) gulp.watch(PATHS[task].src, [`transpile:${task}`]);
    // add some delay for images
    else if (task === 'images') {
      gulp.watch(PATHS.images.src, {
        debounceDelay: 2500
      }, ['images']);
    } else gulp.watch(PATHS[task].src, [task]);
  }
  gulp.watch(PATHS.ng.src + '**/*', ['ng']);
  // also lint this gulpfile on save
  gulp.watch('gulpfile.babel.js', ['lint:gulpfile']);
});
Q: [core/no-app] No Firebase App '[DEFAULT]' has been created - call Firebase.initializeApp() | Android-only error. So I tried my app on an Android device, but Firebase didn't work like it does on the iOS device. It gives me [core/no-app] No Firebase App '[DEFAULT]' has been created - call Firebase.initializeApp()
But I did already initialize Firebase. Here's where I initialized it:
Firebase Initialized :
void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  SystemChrome.setPreferredOrientations(
          [DeviceOrientation.portraitUp, DeviceOrientation.portraitDown])
      .then((_) => runApp(
          new MaterialApp(
              debugShowCheckedModeBanner: false,
              title: 'Title',
              theme: ThemeData(
                  fontFamily: 'Poppins'
              ),
              home: FutureBuilder(
                future: Firebase.initializeApp(),
                builder: (context, snapshot) {
                  return ResponsiveSizer(
                    builder: (context, orientation, screenType) {
                      return MyApp();
                    },
                  );
                },
              )
          )));
}
The app works well on the iOS device but not on Android. Here's where I use the Firebase service:
Firebase Service call :
Future _getToken() async {
  var token = await FirebaseMessaging.instance.getToken();
  SharedPreferences prefs = await SharedPreferences.getInstance();
  prefs.setString('otp', token);
}

@override
void initState() {
  // TODO: implement initState
  super.initState();
  _getToken().then(AfterToken);
}
A: You may check your Gradle files and your google-services.json file for Android.
See the official guide: add Firebase to Android.
Q: How to get values of a Counter object in the order in which they were received? Task:
The first line contains the integer, N.
The next N lines each contain a word.
Output should be:
1) On the first line, output the number of distinct words from the input.
2) On the second line, output the number of occurrences for each distinct word according to their appearance in the input.
I had no difficulty with #1. For point #2, I used Counter to get the occurrences of the words. However, I am having difficulty printing them in the order in which they were received. Below is my code.
from collections import Counter
from collections import OrderedDict

all_words = []
for _ in range(int(raw_input())):
    name = raw_input()
    all_words.append(name)
uniqlst = list(set(all_words))
print len(uniqlst)  # On the first line, output the number of distinct words from the input.
x = OrderedDict(Counter(all_words))  # This is where I am having trouble to get values of x in the order it was received.
print " ".join(map(str, x.values()))
Input:
4
bcdef
abcdef
bcde
bcdef
Output of my code:
3
1 1 2
Expected output:
3
2 1 1
A: This isn't going to work:
x=OrderedDict(Counter(all_words))
First you're creating a Counter by iterating all_words. Since a Counter is just a dict under the hood, depending on your Python version, this may be in insertion order, consistent-but-arbitrary order, or explicitly-randomized order.
Then you create an OrderedDict by iterating that Counter. That will preserve the order of the Counter—which isn't very useful if the Counter was in arbitrary order.
What you want to do is to create a class that does everything Counter does but also does everything OrderedDict does. Which is trivial:
class OrderedCounter(Counter, OrderedDict):
    'Counter that remembers the order elements are first encountered'
This isn't quite perfect, because its repr will give you the wrong class name, and it won't pickle right. But fixing that is almost as simple. In fact, it's given as an example in the docs:
class OrderedCounter(Counter, OrderedDict):
    'Counter that remembers the order elements are first encountered'

    def __repr__(self):
        return '%s(%r)' % (self.__class__.__name__, OrderedDict(self))

    def __reduce__(self):
        return self.__class__, (OrderedDict(self),)
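Applying the recipe to the question's sample input ties it together (note: on Python 3.7+ a plain Counter already iterates in insertion order, so OrderedCounter is mainly needed for portability to older versions):

```python
from collections import Counter, OrderedDict

class OrderedCounter(Counter, OrderedDict):
    'Counter that remembers the order elements are first encountered'

# The question's sample input: bcdef appears first and twice.
words = ['bcdef', 'abcdef', 'bcde', 'bcdef']
counts = OrderedCounter(words)
print(len(counts))                          # 3
print(' '.join(map(str, counts.values())))  # 2 1 1
```

Because Counter does not define `__setitem__`, item insertion falls through the MRO to OrderedDict, which is what preserves first-encounter order.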
ParkChamp is a participating startup in Batch#4 of The Accelerator; Tackling the daily challenge of finding a parking spot just got easier.
INTERVIEW with Maggie Young, Alex Chalamova, and Darnell Shinbine.
ParkChamp is a technology startup based in Calgary focused on automated parking management, operations, and payments in order to improve the overall parking experience for drivers and increase revenues and efficiency of parking lots.
Our team consists of three founders that have worked together over the past year and a half to develop and commercially deploy our product. Maggie (aka the hustler), Alex (aka the hipster) and Darnell (aka the hacker) work with an experienced team of advisers in the software, AI and automation space that have either invested or built companies in these industries.
ParkChamp is eliminating parking pains for drivers by allowing them to find parking in seconds, reserve in advance and pay instantly from their phone at the best price. Our five-star-rated App has drivers talking about the ease of use, great pricing and extraordinary customer support.
ParkChamp is reducing parking equipment costs through GPS enabled, secure and infrastructure-less access technology that can be retrofitted to any building in hours. We simplify operations and management through remote monitoring and enforcement, and cashless payments – while also increasing revenues and optimizing under-utilized space through real-time availability and dynamic pricing. Our app uses data-driven insights to improve efficiency and provide metrics and trends to every property tracked in real-time.
We noticed the gap in the market by personally experiencing parking pains as drivers and researching the experience of transient parkers. We realized an opportunity to enhance the revenues of commercial parking lots by transforming static parking rates into dynamic rates and displaying real-time availability - something that is not currently offered to commercial parking lots.
4. What is your company culture?
Our company motto is - " If you don't do it, then you won't do it." We are focused on getting things done - we don't hesitate, we calculate and deliver. We disrupt the status quo by solving problems with technology always with a customer-centric focus today. We work as a team - always. We value open communication, incentivizing, and challenging each other to continuously improve performance.
Our experience at The Accelerator has been great so far. We have had the opportunity to work alongside like-minded entrepreneurs and get connected with mentors and investors. We've been able to address our business challenges in an open, safe environment and learn from others facing similar challenges. One of the best parts of The Accelerator was attending Base Camp where we learned how fundraising works from an investor's perspective alongside what startups can do to successfully raise money and get the most out of it.
The biggest lesson we learned is the know-how when it comes to raising money. We chose this Accelerator because we were unsure of how to start, how much to raise and where to go to find funding. We've had multiple opportunities to practice our pitch through bi-weekly founders' dinners that each has a special guest offering feedback. We are more confident now than we were ever before with the direction we will take with our business, how to get there and how to navigate that journey.
A major milestone for us in 2018 was launching the first underground parkade on ParkChamp in downtown Calgary. We developed proprietary access technology so drivers can securely and instantly access any underground parkade through the App. The parkade is currently sold out every day and both our drivers and property managers love the simple and flexible solution.
We are expanding our partnerships in the automation and smart technologies space that will add to our current value and product offering. ParkChamp is opening a seed-round for investors wanting to join us on our journey in order to continue new developments and expand our business to additional markets.
Please contact us! We'd love to chat.
I'm named after my grandfather, and it means 'grizzly bear' – Zeann Manernaluk
Disclaimer: The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of U Multicultural.
Spread out over 35% of Canada's landmass and across 50% of its coastline, the Inuit people have been integral to Canada's history. Inuktut is the language traditionally spoken among Inuit people. Winnipeg also has a dedicated centre for Inuit people to visit. Zeann Manernaluk is from a small community on the shore of Hudson Bay. She works at the Inuit centre in Winnipeg. Aside from her career helping people in Manitoba, she is also a remarkable throat singer and a proud mom.
"AK&A (ah-clah) is my Inuktut name. I'm named after my grandfather, and it means 'grizzly bear'. I'm from Rankin Inlet where I could always see the water of the Bay from my bedroom window. I moved down South to Winnipeg when I was eleven. There were only about five thousand people living there. It was great, everyone knew each other, and it was very safe. Fishing was popular. I remember catching a ton of minnows, as well as berry picking in Spring", says Zeann.
Fond childhood memories of playing outdoors, ice fishing contests for Arctic char, and even sledding during the Winter. These are the experiences Zeann says she will always cherish. Zeann visited Rankin Inlet again after many years as an adult. She says the stark contrast of how her hometown felt as a child versus much recently, was noticeable. It wasn't because the landscape changed, but because many of her friends had left the place. And also, because the social fabric is now completely different.
Zeann has a vocation with the Inuit centre here in Winnipeg and she has been a part of their team since 2009. She says, "It's a medical boarding home at the centre. They house people who come from up north – down to Winnipeg for medical procedures. I've been a supervisor there for a few years. We provide people with transportation, a room to sleep in, and food. We schedule appointments and give people things like pampers and food. It's rewarding and has been such a positive part of my life for the past decade. The overnight shift contains the most excitement. It's because many women come to the centre, right before they are about to give birth. Sometimes, the children come sooner than expected before an ambulance can even take them to a hospital. This has motivated me to work a little bit outside my job description. I've delivered seven babies during my time there."
Outside of her work and motherhood life, Zeann has a unique and polished talent. She is a talented Inuit throat singer. She shares, "I used to do a lot of performances all over Winnipeg. I've performed at the Millenium Library, Winnipeg Art Gallery, and Thunderbird house. My performances also include many graduation ceremonies. I have always encountered the same reaction from listeners. People say, 'Oh my gosh, that's so amazing, I've never heard that before!' and, while flattering, it also makes me a little sad".
She continues, "Inuit culture has been around for thousands of years. The throat singing techniques have been so core to our musical culture. To see how many people over the years haven't heard about Inuit culture, heard our music and seen our traditional outfits, makes me uneasy. Yet, it also fills me with hope that I can reach out to many more people through it. I can show them both – who I am and who my people are."
Authored by David Teffaine
Edited by Kiran Ajaz
Ján Mičovský (born 26 December 1954 in Zvolen) – Slovak politician and forester, member of the National Council, Minister of Agriculture from 2020 to 2021.

Biography

In 1974 he graduated from the timber-industry technical school in his hometown. He holds degrees from the forestry faculties of VŠLD in Zvolen (1980) and VŠZ in Brno (1990). A forester by profession, he worked in various institutions of the forestry sector (including the state enterprise LESY Slovenskej republiky) as well as in the municipal administration of Košice. He gained a degree of publicity in 2009 by sending a letter to the head of the state forests and to Slovak foresters in which he drew attention to mismanagement and corruption in the enterprise. In 2010 he received the "Biela vrana" (White Crow) award, granted by civic organizations.

In the 2012 elections he won a seat in the National Council on the Ordinary People (OĽaNO) list, which he held until 2016. In 2014 he reported manipulation of timber transport prices, for which he was sued by the state forests. As a result of the 2020 elections he was elected to the Slovak parliament for the second time.

In March 2020 he became Minister of Agriculture in the newly formed government of Igor Matovič. He retained this post in the cabinet of Eduard Heger formed in April 2021. In May of that year he announced his resignation in connection with corruption allegations against the former head of the agency responsible for agricultural subsidies. He later announced that he was withdrawing his resignation, but he was dismissed in June 2021.

Ordinary People (OĽaNO) politicians
Slovak foresters
Slovak ministers of agriculture
Slovak parliamentarians
People born in Zvolen, Slovakia
Born in 1954