\section{Introduction}
The recent measurements of the local cosmic-ray electron and positron fluxes have stimulated
considerable interest in the cosmic-ray field. The sharp rise in the positron fraction detected by
PAMELA~\cite{Adriani:2008zr} seems to be a clear indication of the existence of a nearby positron
source in addition to the secondary positron component due to the interaction of primary species
with the interstellar medium during propagation. The very recent high-statistics measurement
by FERMI~\cite{Abdo:2009zk} of the all-electron flux (namely the flux of electrons plus
positrons without charge discrimination), while not confirming previous hints of an anomalous
peak by ATIC~\cite{:2008zzr} and PPB-BETS~\cite{Torii:2008xu}, has found a spectral index appreciably
harder than the one inferred from past extrapolations for the electron spectrum based on lower
energy (and less accurate) data. Such a picture leaves room for a substantial contribution from unconventional
(or previously disregarded) lepton sources, and it is very tempting to consider the
possibility that a large component of the positron and electron flux is provided by the pair
annihilation of dark matter (DM) particles, possibly forming the DM halo of the Milky Way
(for early proposals in this direction see, e.g.,~\cite{Rudaz:1987ry,Ellis:1988qp,Kamionkowski:1990ty};
for reviews on indirect DM detection see,
e.g.,~\cite{Jungman:1995df,Bergstrom:2000pn,Bertone:2004pz}). The latest fits of the data with
such a component, see, e.g.,~\cite{Bergstrom:2009fa,Meade:2009iu}, confirm that this picture is viable.
Two general features of the DM-induced source
were well known already from early analyses: i) to match the level of
the positron background, the local pair annihilation rate needs to be much larger than the one expected
for thermally produced weakly interacting massive particles (WIMPs), within standard assumptions for
the thermal history of the Universe and the local DM distribution;
ii) having normalized the annihilation rate to the observed positron flux, there are very stringent
bounds on the WIMP model from measurements of the local antiproton flux, disfavouring
annihilation modes giving rise to a hadronic yield and favouring the leptonic channels.
DM models fulfilling these requirements have been proposed recently, see,
e.g.,~\cite{ArkaniHamed:2008qn,Nomura:2008ru}. Concerning the first requirement, they
mainly focus on a mechanism to account
for a mismatch between the thermally averaged annihilation cross section at freeze-out in the
early Universe and a much larger annihilation cross section in the halo today. Alternatively, one could
invoke an enhancement in the positron signal based on the presence of a local population
of dense dark matter substructures, with the pair annihilation rate being large because the average
of the squared DM density is much greater than the square of the mean
DM density (i.e., in terms of the local DM halo density $\rho$,
$\langle \rho^2 \rangle \gg \langle \rho \rangle^2$). When considering an average effect within
many realizations of the Milky Way substructure population as extrapolated from current
$\Lambda$CDM numerical N-body simulations, the mean enhancements in the local
cosmic-ray fluxes are typically very modest, possibly below a factor of a
few~\cite{Brun:2007tn,Lavalle:1900wn}. Large effects, at the level needed to account for the PAMELA
and FERMI data, have been claimed instead in connection with one (or a few) very dense, nearby
substructures~\cite{Hooper:2008kv,Bringmann:2009ip,Brun:2009aj,Kuhlen:2009is}; the price to
pay in this case is that one has to refer to a configuration with a very small realization probability
according to the N-body simulations, or to rely on a less standard subhalo picture.
Both in discussing average effects from a full subhalo population and in tracing the effect of individual
substructures, the approach of the recent papers, and of the vast majority of papers in the literature,
has been to ignore the fact that one is dealing with a system which is not static: the emission
and propagation of charged particles have been treated in the steady state limit.
The distribution of substructures in the Galaxy is not rotationally supported; their typical velocity
can be estimated from the total mass density profile of the Milky Way. Assuming for simplicity
spherical symmetry for the Galaxy and an isothermal sphere for the subhalo phase-space
distribution function, the velocity dispersion for such a distribution is simply equal to
$\sqrt{3/2}$ times the value of the circular velocity~\cite{BT}, i.e., assuming
250~km~s$^{-1}$ for the local rotational speed~\cite{Reid:2009nj}, about 300~km~s$^{-1}$.
An object moving at such a speed on an orbit perpendicular to the Galactic plane crosses
the diffusive halo region for cosmic rays, say a cylinder with a 4~kpc half-height,
in about $10^{15}$~s. Such a value is comparable to the typical confinement time for
cosmic rays as estimated in the simplified "leaky-box" propagation models, and to the energy loss
timescale for electrons and positrons~\cite{G}. This is clearly just a qualitative argument
(actually not even referring to the most appropriate quantities, see the discussion below) to
illustrate that the effect of the proper motion of DM substructures can be relevant. Indeed, we will show
that local antimatter measurements may reflect a transient due to
DM annihilations in a subhalo, rather than a source to be modeled in the steady state
limit.
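As a minimal numerical sketch of this order-of-magnitude estimate (the straight vertical crossing and the variable names are our simplifying assumptions):
\begin{verbatim}
# Crossing-time estimate for a substructure traversing the diffusive halo.
KPC_CM = 3.086e21            # cm per kpc
v_c    = 250e5               # local circular velocity, cm/s (250 km/s)
sigma  = 1.5**0.5 * v_c      # isothermal-sphere dispersion, ~300 km/s
h_h    = 4.0 * KPC_CM        # half-height of the diffusive halo, cm

t_cross = 2.0 * h_h / sigma  # vertical crossing of the full cylinder
print(f"sigma = {sigma/1e5:.0f} km/s, t_cross = {t_cross:.1e} s")
# -> sigma ~ 306 km/s, t_cross ~ 8e14 s, i.e. of order 1e15 s
\end{verbatim}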
The paper discusses the main features of the local antimatter fluxes resulting from individual
DM substructures, taking proper motion into account. Analytic solutions of the
propagation equation, as appropriate for positrons and antiprotons/antideuterons,
are presented taking into account the most relevant terms and referring
to a simplified description of the diffusion region, the interstellar medium and
the radiation field (these are the same kind of assumptions which are usually implemented
for DM studies in the steady state limit, as well as, most often, to discuss electron/positron
fluxes from astrophysical sources). We show results in a few benchmark configurations for
the propagation parameters and for the orbit of the DM source. We introduce a few sample
interpretations of the PAMELA and FERMI positron/electron data, illustrating how sensitive
the results are to the different assumptions. In particular, we show that, contrary to the picture extensively
discussed in recent analyses, it is no longer true that one can extract from
the data, in a unique way, model-independent particle
physics observables, such as the DM mass, the pair annihilation cross section and the
annihilation channel. We also consider the gamma-ray signals associated with this scenario and compare with current
limits as well as with the detection prospects in the near future.
The paper is organized as follows. In Section~\ref{sec:CRprop}, we present the description of particle propagation in the Galaxy. We discuss point-like DM substructures as sources of positrons and compare to the PAMELA and FERMI data in Sections~\ref{sec:positronpoint} and \ref{sec:positrondata}, respectively. The contribution to the antiproton flux is presented in Section~\ref{sec:antiproton}. In Section~\ref{sec:extra}, we compute other detectable features of the electron/positron flux, such as the associated radiative emission and the dipole anisotropy of the spectrum. Section~\ref{sec:concl} concludes. Details on the solution of the transport equation for positrons and antiprotons are reported in the Appendix.
\section{The cosmic-ray propagation model}
\label{sec:CRprop}
The random walk of charged cosmic rays in the turbulent and regular components of
the Galactic magnetic field is usually treated through an effective description. Defining $n$
as the number density per unit total particle momentum of a given particle species
(i.e., $n(p)\,dp$ is the number of particles in the momentum interval $(p,p+dp)$), the
propagation equation can be cast in the general form (see, e.g.,~\cite{Strong:2007nh}):
\begin{equation}
\frac{\partial n (\vec r,p,t)}{\partial t} = Q(\vec r, p, t)
+ \vec\nabla \cdot ( D_{xx}\vec\nabla n - \vec{v}_c n)
+ {\partial\over\partial p}\, p^2 D_{pp} {\partial\over\partial p}\, \frac{1}{p^2}\, n
- \frac{\partial}{\partial p} \left[\dot{p} \,n
- \frac{p}{3} \, (\vec\nabla \cdot \vec{v}_c )n\right]
- \frac{n}{\tau_f} - \frac{n}{\tau_r}\,.
\label{eq:prop_n}
\end{equation}
Here $Q$ is the source term, including primary, spallation and decay contributions,
$D_{xx}$ is the spatial diffusion coefficient, $\vec{v}_c$ is the convection velocity, diffusive
reacceleration is described as diffusion in momentum space and is determined by the
coefficient $D_{pp}$, $\dot{p}\equiv dp/dt$ is the momentum gain or loss rate, $\tau_f$ is
the time scale for losses by fragmentation, and $\tau_r$ is the time scale for radioactive
decay. The problem is usually solved for stationary sources and assuming $n$ has
reached equilibrium (i.e., setting the left-hand side of the equation to zero), either through a
fully numerical integration of the general model, see, e.g., the Galprop~\cite{Strong:1998pw}
and Dragon~\cite{Evoli:2008dv} packages, or by
implementing (a chain of) semi-analytic solutions valid within a set of simplifying
assumptions, see, e.g.,~\cite{WLG,Maurin:2001sj}. In our analysis we will
follow the second route, implementing however solutions of the diffusion equation
valid for positron or antiproton primary sources which are non-stationary.
As commonly done, we will assume a spatially constant diffusion coefficient and
introduce a spatial average for the positron/electron energy loss rate; we will also neglect both
convection and reacceleration, mimicking the corresponding energy-dependent effects
through an appropriate choice of the momentum scaling of the spatial diffusion
coefficient. This very simplified scheme, which is flexible enough to cover many study
cases without neglecting any of the physical effects we wish to discuss, can actually be
sufficient for a fair description of some of the key observables in cosmic-ray physics.
E.g., Ref.~\cite{Moskalenko:2001ya} introduces, for the Galprop numerical package,
the case of a standard diffusive model with spatial diffusion coefficient of the form:
\begin{equation}
D(p) = \beta D_0 \left(\frac{R}{R_0}\right)^\delta\,,
\end{equation}
with $\beta$ being the particle velocity in units of the speed of light, $R$ its rigidity, and with the following
parameter choice: $D_0 = 2.5 \cdot 10^{28}\,{\rm cm^2\,s^{-1}}$, $R_0 = 4\,{\rm GV}$,
$\delta = 0.6$ for $R>R_0$ and $\delta=0$ for $R<R_0$ (having neglected $D_{pp}$,
from now on we will simply label $D_{xx}$ as $D$). The diffusion region is treated
as a cylinder extending from $+h_h$ to $-h_h$ in the vertical direction, with standard
primary sources in a thin layer around $z=0$, and up to $R_h$ in the radial direction;
in the example of Ref.~\cite{Moskalenko:2001ya}, $h_h = 4$~kpc and $R_h = 30$~kpc.
Parameters are tuned to reproduce observational constraints, and in particular the relative abundance
of secondary to primary components. Indeed, running the publicly available version of
Galprop within this setup, we find a fairly good fit to the boron over carbon (B/C) ratio data
(reduced $\chi^2 = 1.23$ for $R>4$~GV, considering the B/C measurements at high energy
by ATIC~\cite{Panov:2007fe}, CREAM~\cite{Ahn:2008my}, HEAO3~\cite{Engelmann:1990}, and CRN~\cite{Swordy:1990}, and
having assumed a spectral index for primary nuclei of 2.35 and 2.1 for, respectively, $R<40$~GV and $R>40$~GV),
and to the antiproton over proton ratio as recently measured by PAMELA.
We label this parameter choice as "model A" and take it as our reference benchmark case.
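A minimal code sketch of this parametrization (the function name is ours, and relativistic particles with $\beta \simeq 1$ are assumed):
\begin{verbatim}
# Model-A spatial diffusion coefficient, D = beta*D0*(R/R0)^delta,
# with delta = 0 below the break at R0 = 4 GV.
def D_model_A(R_GV, beta=1.0):
    """Diffusion coefficient in cm^2/s for rigidity R in GV."""
    D0, R0, delta = 2.5e28, 4.0, 0.6
    return beta * D0 * max(R_GV / R0, 1.0)**delta

print(f"D(100 GV) = {D_model_A(100.0):.2e} cm^2/s")  # ~1.7e29 cm^2/s
\end{verbatim}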
A comprehensive discussion of the dependence of our results on the propagation model
is clearly beyond the scope of this paper; the main trends we will present are essentially insensitive
to slight readjustments in the parameter space. One of the most relevant parameters for our
discussion is the vertical size of the propagation region $h_h$. It is well known that
the local measurements of secondary to primary ratios are mostly sensitive to the ratio
$D_0/h_h$ rather than to each of the two parameters separately. The degeneracy would be broken
by local measurements of the so-called "radioactive clocks", namely unstable secondaries,
such as $^{10}$Be as compared to $^9$Be. Such data are however not very accurate at
present. We consider two extreme setups: a thin halo model with $h_h = 1$~kpc and a thick
model with $h_h = 10$~kpc; $D_0$ is rescaled to, respectively, $0.56$ and $4.6$
in units of $10^{28}\,{\rm cm^2\,s^{-1}}$, as we find by simulating these models with Galprop and refitting
the B/C ratio (reduced $\chi^2 = 1.10$
and $1.22$; the thin halo model is labeled as "model B", the thick one as "model C").
Within these models the local primary proton and electron fluxes are reproduced as well
(for the electron injection spectrum we take a spectral index of 2.30 at high energy, i.e. above $E=4$~GeV).
The spectra obtained with Galprop in our reference cases
will be used as background estimates in the next Sections; the local electron and positron
fluxes are computed with the spatially dependent energy loss terms following from the
standard templates for the interstellar radiation field and the magnetic field profile as
implemented in the code.
When computing primary components from WIMP annihilations we will introduce instead
the simplification of a spatially constant energy loss term, referring to an average value valid
in the local neighborhood. In the energy loss configuration "H", which we assume as our standard
reference, the synchrotron and inverse Compton energy loss terms are driven, respectively, by
an average magnetic field in the diffusion region of about $B = 6\, \mu$G, and a mean
background starlight density $U = 0.75\,{\rm eV\,cm^{-3}}$~\cite{Porter:2008ve}, on top of the
cosmic microwave background component. For comparison, we will also consider a template
in which both quantities are sharply reduced, assuming $B = 1\, \mu$G and
$U = 0.4\,{\rm eV\,cm^{-3}}$ (we label this energy loss configuration as "L").
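As a hedged sketch of the corresponding spatially constant energy loss rate in the Thomson limit (the CMB energy density of about $0.26\,{\rm eV\,cm^{-3}}$ is our added input, and Klein-Nishina corrections to the starlight term are neglected):
\begin{verbatim}
# Electron/positron energy loss rate |dp/dt| from synchrotron plus
# inverse Compton scattering (Thomson limit), configuration "H".
SIGMA_T = 6.652e-25   # Thomson cross section, cm^2
C_CMS   = 2.998e10    # speed of light, cm/s
ME_GEV  = 0.511e-3    # electron mass, GeV
ERG_GEV = 1.602e-3    # erg per GeV
EV_ERG  = 1.602e-12   # erg per eV
PI      = 3.14159265

def pdot_GeV_s(E_GeV, B_muG=6.0, U_star_eV=0.75, U_cmb_eV=0.26):
    U_B   = (B_muG * 1e-6)**2 / (8.0 * PI) / EV_ERG  # eV/cm^3
    U_tot = (U_B + U_star_eV + U_cmb_eV) * EV_ERG    # erg/cm^3
    gamma = E_GeV / ME_GEV
    return (4.0/3.0) * SIGMA_T * C_CMS * gamma**2 * U_tot / ERG_GEV

print(f"|pdot|(1 GeV) ~ {pdot_GeV_s(1.0):.1e} GeV/s")  # ~2e-16 GeV/s
\end{verbatim}
The value at 1~GeV is indeed of the order of the reference value $|\dot{p}_1|$ adopted in the next Section.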
\section{Positrons from a dark matter point source}
\label{sec:positronpoint}
\begin{figure}[t]
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig1a.eps}
\end{minipage}
\ \hspace{3mm} \
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig1b.eps}
\end{minipage}
\caption{{\it Left Panel:} For three sample values of the positron momentum at injection
($p_0 = 100$~GeV, 500~GeV and 1~TeV) and as a function of the positron momentum $p_e$
as measured locally, the proper motion scale $\Delta d$ (black dashed curves) is compared
to the distance $d_m$ at which the positron Green function is maximal (thick solid curves)
and to the range of values for which the Green function is 10\% of the maximum (thin solid
curves). The propagation model AH is assumed, see the text for more details.
{\it Right Panel:} The same as in the left panel, but for $p_0 = 100$~GeV and the six
combinations of diffusion and energy loss models.}
\label{fig:epdiff}
\end{figure}
Consider the limit of a point-like dark matter substructure, entering the diffusion region at the
point $\vec{r}_i$ at the time $t_i$ and moving along an orbit $\vec{r}_p(t)$ (e.g.,
$\vec{r}_p(t) = \vec{r}_i + \vec{v}_s(t-t_i)$ if one can approximate such motion
as a straight-line trajectory with constant velocity $\vec{v}_s$), made of
WIMP dark matter particles of mass $M_\chi$, annihilating in pairs with cross section times
relative velocity $(\sigma v)$ and positron yield per annihilation $dN_{e^+}/dp$. The positron dark matter
source $Q$ at the position $\vec{r}$, momentum $p$ and time $t$, takes the form:
\begin{equation}
Q(\vec{r}, t,p) = \delta^3 \left[\vec{r}-\vec{r}_p(t)\right] \frac{dN_{e^+}}{dp} \,\Gamma\,,
\label{eq:pointsource}
\end{equation}
where $\Gamma$, the total dark matter annihilation rate in the source, contains all terms not
depending on spatial coordinates, momentum and time:
\begin{equation}
\Gamma = (\sigma v) \int d\vec{r}_{s} \frac{\rho_{s}^2(\vec{r}_{s})}{2\,M_\chi^2}
\equiv (\sigma v) \frac{\rho_0^2}{2\,M_\chi^2}\,{\mathcal V}_s
\,.
\end{equation}
Here $\rho_{s}(\vec{r}_{s})$ is the dark matter density profile within the substructure,
and the expression after the equivalence sign defines the annihilation volume ${\mathcal V}_s$,
having normalized $\rho_{s}$ to the reference value $\rho_0=0.3$~GeV~cm$^{-3}$.
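As a small worked example of this definition (the numerical values below are purely illustrative, with a thermal-relic cross section assumed):
\begin{verbatim}
# Total annihilation rate in the substructure,
# Gamma = (sigma v) * rho_0^2 / (2 M_chi^2) * V_s.
KPC_CM = 3.086e21

def Gamma_s(sigmav_cm3_s, M_chi_GeV, Vs_kpc3, rho0_GeV_cm3=0.3):
    """Annihilations per second in the source."""
    Vs_cm3 = Vs_kpc3 * KPC_CM**3
    return sigmav_cm3_s * rho0_GeV_cm3**2 / (2.0 * M_chi_GeV**2) * Vs_cm3

# e.g. sigma v = 3e-26 cm^3/s, M_chi = 500 GeV, V_s = 1e5 kpc^3
print(f"Gamma ~ {Gamma_s(3e-26, 500.0, 1e5):.1e} s^-1")  # ~1.6e37 s^-1
\end{verbatim}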
The positron number density per unit momentum is given by:
\begin{equation}
n(\vec r,p,t) = \frac{\Gamma}{\left|\dot{p}(p)\right|} \int_{t_i}^t dt_0 \int_p^{p_{\rm max}} dp_0\;
G\left(\vec r,t,p;\vec {r}_p(t_0),t_0,p_0\right) \, \frac{dN_{e^+}}{dp_0}
\label{eq:epnsol}
\end{equation}
The Green function $G$ is given in Appendix~\ref{app:pos}; neglecting boundary conditions, it takes
approximately the form:
\begin{equation}
G \simeq \frac{1}{\pi^{3/2} [\lambda(p,p_0)]^3}
\exp\left\{-\left[\frac{d(t_0)}{\lambda(p,p_0)}\right]^2\right\}
\delta\left[(t-t_0)- \Delta \tau (p,p_0)\right] \,
\label{eq:epgreensimply}
\end{equation}
where we introduced the distance $d=\left| \vec r -\vec r_p(t_0)\right|$ between the source
and the observer at the time $t_0$, the energy loss timescale
$\Delta \tau = \int_p^{p_0} {d\tilde{p}}/{\left|\dot{p}(\tilde{p})\right|}$, and the diffusion length $\lambda$,
defined through $\lambda^2 = 4 \int_p^{p_0} {d\tilde{p}} \, D(\tilde{p})/{\left|\dot{p}(\tilde{p})\right|}$.
Looking at this expression, we can guess that including the time dependence in the propagation
equation is relevant whenever the variation of $d$ within the time $\Delta \tau$,
say $\Delta d \sim v_s \cdot \Delta \tau$, is larger than or comparable to $\lambda$. Consider the high
energy limit, in which the energy loss term scales like $\dot{p}(p) \propto p^2$ and the diffusion
coefficient like $D(p) \propto p^\delta$, with $\delta \sim 0.3-0.7$; the square of the ratio
between $\Delta d$ and $\lambda$ goes like:
\begin{equation}
\frac{\Delta d^2}{\lambda^2} \simeq 5 \cdot 10^{-2}
\frac{v_{300}^2}{D_1\,\dot{p}_1}
\frac{1-\delta}{1-0.6}
\frac{100^{0.6-\delta}}{p_{100}^{1+\delta}}
\frac{(1-{\mathcal R})^2}{1-{\mathcal R}^{1-\delta}}
\label{Eq:ratiodl}
\end{equation}
where $D_1$ and $\dot{p}_1$ are the diffusion coefficient and the energy loss
rate at 1~GeV in units of, respectively, $10^{28}\,{\rm cm^2\,s^{-1}}$ and $-10^{-16}\,{\rm GeV \,s^{-1}}$,
$v_{300}$ is the substructure velocity in units of $300 \,{\rm km \,s^{-1}}$, $p_{100}$ is the positron
momentum in the equilibrium distribution in units of 100~GeV, and ${\mathcal R}\equiv p/p_0$ is
the ratio between such momentum and the momentum of the positron at the source.
Since energy losses are very effective at high energy, particles injected with very high momentum
lose most of their energy within a very short time; the distance traveled by a DM subhalo on the
same timescale is very small as well, and in this case proper motion can be safely neglected. An
analogous picture occurs when the difference between the momentum at injection and the momentum
at equilibrium is small, and thus the timescale associated with the electron/positron transport is short.
One indeed sees from Eq.~(\ref{Eq:ratiodl}) that $\Delta d \ll \lambda$ for ${\mathcal R}\rightarrow 1$ or for large $p$.
On the other hand, we expect proper motion to be relevant at intermediate to low energies.
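These scales are straightforward to evaluate numerically; a minimal sketch (our function names, with the reference values $D_1$, $\dot{p}_1$ and $\delta = 0.6$ quoted above, and $v_s = 300\,{\rm km\,s^{-1}}$ assumed):
\begin{verbatim}
# Diffusion length lambda, loss time Delta tau, and the proper-motion
# ratio (Delta d / lambda)^2, for D(p) ~ p^delta and |pdot| ~ p^2.
from scipy.integrate import quad

D1, PDOT1, DELTA = 1e28, 1e-16, 0.6   # reference values as in the text
D    = lambda p: D1 * p**DELTA        # cm^2/s, p in GeV
pdot = lambda p: PDOT1 * p**2         # GeV/s (magnitude)

def scales(p, p0, v_s=3e7):           # v_s in cm/s (300 km/s)
    dtau = quad(lambda q: 1.0 / pdot(q), p, p0)[0]                # s
    lam  = (4.0 * quad(lambda q: D(q) / pdot(q), p, p0)[0])**0.5  # cm
    return dtau, lam, (v_s * dtau / lam)**2

dtau, lam, r2 = scales(20.0, 100.0)
print(f"Dtau = {dtau:.1e} s, lambda = {lam/3.086e21:.1f} kpc, "
      f"(Dd/lambda)^2 = {r2:.2f}")
# -> Dtau ~ 4e14 s, lambda ~ 3.9 kpc, (Dd/lambda)^2 ~ 1.0: an
#    order-one effect for these momenta, as claimed.
\end{verbatim}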
This effect is illustrated also
in Fig.~\ref{fig:epdiff}: in the left panel, for a few values of the momentum at the source $p_0$,
we compute $\lambda$ as a function of $p$ and, correspondingly, find the value of $d_m$, namely,
the distance from us along the direction perpendicular to the Galactic plane at which the
exact Green function $G$ reaches its maximum (thick solid lines). Neglecting
boundary conditions as in Eq.~(\ref{eq:epgreensimply}), we would simply find
$d_m = \sqrt{3/2} \,\lambda$; in the plot the scaling of $d_m$ with $p$ gets more rapid
when the distance approaches the vertical boundary of the diffusion region at $h_h=4$~kpc
(the propagation model "AH" is assumed in the plot). Also displayed are the range of distances
within which the Green function is larger than one tenth of its maximum and our estimate
for $\Delta d$, as defined above. In the right panel, we plot the same quantities for a single
injected momentum ($p_0=100$~GeV), but looping over the three propagation models and
the two energy loss configurations as specified in the previous Section. In both panels,
comparing $d_m$ and the 10\% range to $\Delta d$, one sees that there are large $p$
intervals over which proper motion effects are expected to be large.
\begin{figure}[t]
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig2a.eps}
\end{minipage}
\ \hspace{3mm} \
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig2b.eps}
\end{minipage}
\caption{{\it Left Panel:} Local positron flux spectral shape for a point source composed of
500~GeV WIMPs annihilating into monochromatic $e^+\,e^-$, and moving along a vertical
orbit intersecting the Galactic plane at a short distance from the observer.
Each curve refers to a different time of observation,
with the distance $d$ of the source from the observer playing the role of the time
variable. This is useful since we can compare with the spectra one obtains for static
sources placed at the same distances. The positron flux is multiplied by $p^2\,d^3$ to
match the approximate scaling in the static limit. {\it Right Panel:} For a given time of
observation (namely, fixing $d=1$~kpc), predictions for the flux within our benchmark
setups for the propagation model.}
\label{fig:scal}
\end{figure}
To discuss the contribution to the local positron flux $\Phi_e$ from a single point source,
we start by considering the sample case of a source moving with constant velocity
$v_s = 300 \,{\rm km \,s^{-1}}$, on a trajectory which is perpendicular to the Galactic plane,
intersecting the plane at a small distance from the Sun (say 10~pc, the precise value has
little relevance for the discussion). To emphasize propagation effects, we refer to a dark matter
model, of given mass and total annihilation rate, with prompt annihilation into $e^+\, e^-$ pairs, i.e.
with a monochromatic positron spectrum $dN_{e^+}/{dp} \simeq \delta (p-M_\chi)$. In
Fig.~\ref{fig:scal} we fix $M_\chi=500$~GeV and show the shape of the induced local
positron fluxes in a few sample cases.
In the left panel, each curve refers to a different time of observation, but rather than labeling
them by time steps, we indicate the distance $d$ of the source from the observer, along the
trajectory and at the time when the positron flux is measured (dashed lines for an approaching
source, solid lines for a source which has passed nearby and is now moving away; for reference,
in the example we are considering, a shift of 1~kpc in $d$ corresponds to a time interval
$\Delta t \simeq 10^{14}$~s, i.e. about an order of magnitude lower than the typical
"escape time" for particles of these energies as extrapolated in simple "leaky box"
propagation models). For comparison, we also show the case of
static sources (dotted lines), at a given distance from the Sun.
From the Green function solution (again sufficiently away from the boundaries of the
propagation region) we can guess that, in the static limit, the peak of the product
$p^2 \cdot \Phi_e$ scales like $d^{-3}$. This is indeed what we find in the plot, where the
quantity $d^3\,p^2\,\Phi_e$ has been normalized to 1 in arbitrary units for $d=10$~pc;
the figure also shows
the departure from such scaling in the case of a non-stationary source, with the results for
approaching, static and departing sources essentially coinciding only in the limit of small
distances and large energies.
For moderate to large distances the height of the peaks increases (decreases) for
a source which has passed by (is approaching). The spectral shapes are also very different,
with the sharp cutoffs at low energy enforced by the mismatch between the time interval
from positron emission to detection and the energy loss timescale, which increases at lower
energies. Positron propagation is treated according to model "AH" as introduced above; such
model has $h_h = 4$~kpc and hence an approaching source at $d=3$~kpc, along the vertical
trajectory we are considering, has just entered the diffusion region and induces a local positron
flux which is marginal compared to the flux from the same source after passing by and being on
the other side of the trajectory at the same distance. Even after the source has left the diffusion
region there is still a population of positrons which has been left behind contributing to the
local flux, especially at low energies: here a larger $d$ just reflects a longer time interval since
the injection time and hence a more efficient degrading of the injected high energy positrons
to small energies. In the right panel of Fig.~\ref{fig:scal}, we take one of the cases plotted in the
left panel, namely a source moving away from the observer and being at $d=1$~kpc at the
time of observation, and show instead the dependence of the spectral shape and the normalization
of the measured flux on the benchmark models chosen for propagation and energy losses. Indeed
the differences between the various cases are rather striking;
this gives a feeling for the fact that there are large uncertainties related to propagation in our
problem, which are not resolved by tuning the model, as we did, to local measurements of
secondary to primary nuclei ratios. Furthermore, there are potentially other relevant
issues which we do not address in our simplified approach. E.g., we expect some dependence
of the results on how boundary conditions are treated in the diffusion model:
we are adopting here the standard approach of a spatially constant diffusion coefficient
and free escape (i.e. the diffusion coefficient going to infinity) as a sharp transition at the vertical
boundaries; this is the same approach followed, e.g., in Galprop. Considering a less sharp
transition between the two regimes, with, e.g., an exponential vertical scaling of the diffusion
coefficient as proposed in~\cite{Evoli:2008dv}, we would obtain smoother cutoffs in
the spectra of sources at large distances shown in the left panel of Fig.~\ref{fig:scal}. A
gradient in the energy loss term would also mildly affect the prediction for the local positron
flux.
\begin{figure}[t]
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig3a.eps}
\end{minipage}
\ \hspace{3mm} \
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig3b.eps}
\end{minipage}
\caption{Local positron flux spectral shape for a point source made of 100~GeV
(left panel) or 1~TeV (right panel) WIMPs annihilating into monochromatic $e^+\,e^-$, in
the benchmark propagation model AH. The two values of $d$ refer to
the distance of the source from the observer at the time the flux is measured,
after the source has passed in the vicinity of the observer; the sets of curves displayed refer
to a few sample orbits for the source (see the text for details).}
\label{fig:scal2}
\end{figure}
Fig.~\ref{fig:scal2} considers the cases of a lighter and a heavier WIMP (respectively, 100~GeV and
1~TeV) and, referring again to the propagation model AH, discusses the dependence of the locally
measured positron flux on the orbit of the source, having chosen two sample values of the present
distance of the source from the observer (1~kpc and 3~kpc, with the source moving away from the
observer). The thick solid lines correspond to the source moving along a vertical orbit intersecting
the Galactic plane at 10~pc from the observer (i.e., the same orbit as in
Fig.~\protect{\ref{fig:scal}}), while thin solid lines are obtained in a few cases in
which this scale is varied up to 1~kpc for $d = 1$~kpc, and to 2~kpc for $d = 3$~kpc,
or the orbit is inclined from vertical to a 45$^\circ$ incidence angle; as can be seen, such cases
are essentially equivalent and show that what is actually relevant for the result is how much
time the source spends within a distance from the observer corresponding to the diffusion
length $\lambda$ (with $\lambda$ depending on the emission and observation energies).
Such a time is about the same for all solid lines, while it changes if the source speed is changed.
The dashed and dotted lines refer to the same orbit as for the thick solid lines but with a
source speed which is, respectively, twice and half as large, namely 600~km~s$^{-1}$
and 150~km~s$^{-1}$ versus 300~km~s$^{-1}$. The dash-dotted lines refer instead to a
circular orbit with velocity matching the observed circular velocity, i.e. about 250~km~s$^{-1}$.
For comparison, spectral shapes in the static limit are also displayed. The picture emphasizes
further the fact that the encounter with a dark matter point source determines a transient
in the local positron flux, and that the static approximation is roughly valid only for nearby sources,
for which the energy of the measured positrons is not significantly different from the energy at
emission, or for very energetic positrons.
We have focused the discussion on DM sources with a monochromatic positron spectrum; still,
it can straightforwardly be extended to DM sources with a generic positron emission spectrum (which
can obviously be thought of as a superposition of line spectra for different masses and annihilation
rates), along the same patterns regarding the dependence on distance and energy, as sketched below.
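A minimal sketch of such a superposition (assuming a function n_line implementing the monochromatic solution, which is not reproduced here):
\begin{verbatim}
# Equilibrium density for a generic injection spectrum dN/dp0, built
# by superposing monochromatic solutions n_line(p, p0, t).
import numpy as np

def n_generic(p, t, dNdp0, n_line, p0_grid):
    """p0_grid: increasing numpy array of injection momenta."""
    p0s  = p0_grid[p0_grid >= p]          # only p0 >= p contributes
    vals = np.array([dNdp0(p0) * n_line(p, p0, t) for p0 in p0s])
    return np.trapz(vals, p0s) if p0s.size > 1 else 0.0
\end{verbatim}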
\section{A non-static point source versus PAMELA and FERMI electron/positron data}
\label{sec:positrondata}
As an application of the general discussion outlined in the previous Section, and to focus
on a specific case despite the many ingredients and parameters involved in the problem,
we consider the possibility of a major contribution from a single non-static DM point-source
to the local electron/positron spectrum as recently measured by PAMELA~\cite{Adriani:2008zr} and FERMI~\cite{Abdo:2009zk}.
\begin{figure}[t]
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig4a.eps}
\end{minipage}
\ \hspace{3mm} \
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig4b.eps}
\end{minipage}
\caption{Two examples of fits of the PAMELA positron fraction (left panel) and of the sum of
the electron and positron fluxes (right panel) with a component due to DM annihilations in a
substructure. The monochromatic $e^+\, e^-$ final state of annihilation has been considered,
as well as two sample values of the DM mass.
The fit was performed assuming that the primary electron spectral index
and normalization follow from the FERMI data.}
\label{fig:pamela1}
\end{figure}
For what concerns the background from ordinary astrophysical electrons and positrons,
we will consider two possibilities.
In the first scenario, the bulk of the "all-electron" spectrum measured by FERMI is due to
primary electrons emitted in supernova cosmic-ray sources (with only a mild contamination, at the 10-20\% level, from secondary electrons and positrons), and, hence, we can use this data-set to
derive the electron spectral index at the sources. The second possibility is that the FERMI
measurement has actually found an extra electron and, possibly, positron source, comparable
to standard contributions at about 100~GeV or so, and dominating at high energy up to the
cutoff found by HESS at about 1~TeV~\cite{Collaboration:2008aaa}; in this second scenario, we assume that the primary
electron spectrum is softer and can be inferred from preliminary (less accurate) lower energy
data on the electron-only spectrum presented by PAMELA~\cite{PAMELA:preliminary},
while the extra contribution is dominated by a single DM source.
Starting with the first background choice, we show in Fig.~\ref{fig:pamela1} (left panel) two examples of fits of the
PAMELA positron fraction at high energy with a component due to DM annihilations in a
subhalo. The fits have been obtained with sample values of the WIMP
mass and a given annihilation channel, respectively 500~GeV and 1~TeV and the
monochromatic $e^+\, e^-$ final state. We have also assumed that the source moves with constant
velocity $v_s = 300 \,{\rm km \,s^{-1}}$ on the trajectory perpendicular to the Galactic plane
already introduced for Fig.~\ref{fig:scal}. Looping over the propagation models
discussed in the previous Sections, we have extracted the best-fit values for the source distance
and the total annihilation rate in the source $\Gamma$. For a given propagation model, the
background is inferred by normalizing the proton flux to the locally observed spectrum, making a
prediction for the secondary electron and positron components using the Galprop package,
and deriving the primary electron spectral index and normalization from the FERMI data.
Including then the DM component,
the fit to the PAMELA, FERMI and HESS data (the HESS datasets are rescaled within systematic
uncertainties to match the normalization from FERMI) is performed allowing for a 20\% variation
in the normalization of the secondary spectra (reflecting various uncertainties in their modelling)
and of the primary electrons, as well as a slight tilt in the primary electron spectral index;
the fit disregards PAMELA datapoints below 10~GeV, since the force-field method we are
implementing to take into account solar modulation, with modulation parameter of 0.55~GV,
is probably not sufficiently accurate, and a modulation which takes into account the charge sign
should be implemented instead~\cite{Adriani:2008zr}. For a given WIMP mass, and for all
propagation parameters except for the thin halo model (model B), this procedure produces
configurations with moderate to small $\chi^2$. They are restricted to a rather narrow range of
allowed distances; however, this range shifts with the WIMP mass, the choice of the propagation model
and the velocity of the source along the orbit.
The parameters for the two sample models obtained from the fit and plotted in Fig.~\ref{fig:pamela1}
are given in the first two rows of Table~\ref{tab:fit}.
{\small
\begin{table}[b]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
$\quad\quad$ & $M_\chi$ & annihilation & $\Gamma$ & ${\mathcal V}_s$ & prop. & d
& $\Phi_\gamma(E>0.1\,{\rm GeV})$ & $\chi^2$ \\
$\quad\quad$ & GeV & channel & $10^{36}\,{\rm s^{-1}}$ & kpc$^3$ & model & kpc
& ${\rm cm}^{-2}\, {\rm s}^{-1}$ & (d.f.=50)
\tabularnewline
\hline
\hline
1&1000 & $e^+/e^-$ & 20.9 & $\,5.3 \cdot 10^5$ & AH & 4.25 & $1.2 \cdot 10^{-8}$ & 46.7
\tabularnewline
\hline
\hline
2& 500 & $e^+/e^-$ & 73.4 & $\,4.6 \cdot 10^5$ & CH & 6.25 & $1.6 \cdot 10^{-8}$ & 42.3
\tabularnewline
\hline
\hline
3& 5000 & $\tau^+/\tau^-$ & 1.9 & $\,12.6 \cdot 10^5$ & AH & 1.54 & $1.1 \cdot 10^{-8}$ & 44.4
\tabularnewline
\hline
\hline
4& 3000 & $\tau^+/\tau^-$ & 2.4 & $\,5.5 \cdot 10^5$ & CH & 1.43 & $1.4 \cdot 10^{-8}$ & 60.9
\tabularnewline
\hline
\end{tabular}
\end{center}
\caption{Sample DM models providing fits to the PAMELA and FERMI electron/positron data; see the text for details.}
\label{tab:fit}
\end{table}
}
\begin{figure}[t]
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig5a.eps}
\end{minipage}
\ \hspace{3mm} \
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig5b.eps}
\end{minipage}
\caption{Two further examples of fits of the PAMELA positron fraction (left panel) and of the sum of
the electron and positron fluxes (right panel) with a component due to DM annihilations in a
subhalo. With respect to Fig.~\protect{\ref{fig:pamela1}}, we are now considering a different
annihilation channel, i.e. $\tau^+\, \tau^-$, and a primary electron spectral index
slightly softer than the one inferred from the FERMI data.}
\label{fig:pamela1b}
\end{figure}
Analogously, Fig.~\ref{fig:pamela1b} shows two sample fits of the data in the case of a slightly
softer primary electron spectral index, according to the second picture mentioned at the beginning
of this Section.
Since we need to account for a positron/electron exotic contribution up to 1-2~TeV, DM WIMPs need
to be rather heavy (e.g., we take 5~TeV and 3~TeV), while the choice of the annihilation channel
is again not critical (we chose to refer to the case of pair annihilation into the $\tau^+\, \tau^-$
final state). The results of the fits are reported in the last two rows of Table~\ref{tab:fit}.
A few issues should be stressed when looking at this table. First, for all four selected models,
the $\chi^2$ values indicate fairly good fits, although not exceptionally good ones (one should also take into
account that, especially for FERMI, errors in the data-set are correlated, while we have simply
added in quadrature statistical and systematic errors); our interest is, however, to show the feasibility
of the framework, rather than playing with all possible uncertainties in the background and the signal
to improve the fits further (and indeed a slight readjustment of the background could
lead to better fits in all cases). A second issue is that the values which we find for the total
annihilation rate $\Gamma$ are very large for all models, as one sees by converting them into
the annihilation volume ${\mathcal V}_s$ under the hypothesis that the annihilation cross section
is of the order $\sigma v \simeq 3 \cdot 10^{-26}$~cm$^3$~s$^{-1}$ (this is, roughly speaking,
the level needed for a thermal relic WIMP to match the DM density in the Universe under
standard assumptions for the thermal history of the Universe, and without invoking an enhancement
in the cross section going from freeze-out to the zero temperature limit, see,
e.g.,~\cite{Bergstrom:2000pn}). Annihilation volumes of the order of $10^5-10^6$~kpc$^3$
are much larger than typical values predicted for DM substructures in N-body simulations of
hierarchical clustering for Milky-Way size DM halos, with a realization probability for a configuration containing these sources, supposing one can extrapolate from the results shown in Ref.~\cite{Brun:2009aj} for static sources, below a few times $10^{-4}$. On the other hand, in a different scenario, such
as the one in which the adiabatic formation of an intermediate mass black hole
drives a sharp enhancement of the dark matter density inside the hosting
substructure~\cite{Bertone:2005xz}, the probability density of ${\mathcal V}_s$ for these sources has
a peak at about $10^6$~kpc$^3$ and a tail extending to much larger values~\cite{Brun:2007tn}.
There is also the possibility that the reference value $\sigma v \simeq 3 \cdot 10^{-26}$~cm$^3$~s$^{-1}$ is a significant underestimate of the pair annihilation cross section, in which case
${\mathcal V}_s$ would be correspondingly downscaled.
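The conversion between $\Gamma$ and ${\mathcal V}_s$ quoted in the table is easy to check explicitly; a minimal sketch (inverting the definition of $\Gamma$ given in Section~\ref{sec:positronpoint}, with $\sigma v = 3 \cdot 10^{-26}$~cm$^3$~s$^{-1}$):
\begin{verbatim}
# Annihilation volume from the fitted total annihilation rate,
# V_s = 2 M_chi^2 Gamma / (sigma v rho_0^2).
KPC_CM = 3.086e21

def Vs_kpc3(Gamma_s, M_chi_GeV, sigmav=3e-26, rho0=0.3):
    return 2.0 * M_chi_GeV**2 * Gamma_s / (sigmav * rho0**2) / KPC_CM**3

print(f"{Vs_kpc3(20.9e36, 1000.0):.1e} kpc^3")  # ~5.3e5, cf. model 1
print(f"{Vs_kpc3(1.9e36, 5000.0):.1e} kpc^3")   # ~1.2e6, cf. model 3
\end{verbatim}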
Having shown that it is not manifestly implausible to interpret current positron/electron data within
the single DM substructure scenario, the examples chosen for Figs.~\ref{fig:pamela1}
and \ref{fig:pamela1b} question the standard methods applied to extract model-independent information
on the DM candidate: it is usually assumed that, when interpreting an excess as a DM signal, the
energy at which the excess terminates provides an estimate of the DM particle mass, that the
spectral shape of the excess determines the dominant annihilation channel, and that the
normalization is mainly an indicator of the level of the pair annihilation rate. Here, instead, the
threshold just sets a lower limit on the DM particle mass, since the energy at which the exotic
component dies out depends also on the distance of the DM point source. Regarding the shape
of the spectrum, we have shown that this is a transient, keeping little memory of the spectrum
at the source; indeed, in the plots, the spectral shapes of the single source contribution and
of the smooth DM halo component are appreciably different in all examples. Finally, the level
of the induced flux depends mainly on the product $\sigma v \cdot {\mathcal V}_s$, and it is difficult
to give any solid estimate for the annihilation volume ${\mathcal V}_s$.
\begin{figure}[t]
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig6a.eps}
\end{minipage}
\ \hspace{3mm} \
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig6b.eps}
\end{minipage}
\caption{Upper limits on the total annihilation rate $\Gamma$ derived from the
PAMELA data on the positron fraction and the FERMI data on the all-electron flux. A sample WIMP model
is considered ($M_\chi=500$~GeV and monochromatic $e^+\,e^-$ final state), while we loop
over propagation model configurations (left panel) and a few possibilities for the point source
orbit (right panel). Results are compared to the levels of $\Gamma$ required to match
the brightest unidentified EGRET source, the faintest source detected by EGRET, and the
$\gamma$-ray sensitivity of FERMI.}
\label{fig:epluslimit1}
\end{figure}
In Table~\ref{tab:fit}, we quote the prediction for the gamma-ray flux,
integrated above 100~MeV, induced by the DM point source in the four selected models:
in the first two cases, with only
monochromatic $e^+/e^-$ as tree-level final state of annihilation, the flux is due to photons
emitted as final state radiation (FSR), with the rate and photon energy distribution for
$\chi \chi \rightarrow e^+ e^- \gamma$ estimated in terms of the lowest order process
$\chi \chi \rightarrow e^+ e^-$ and in the approximation $m_e\ll M_\chi$ (see,
e.g.,~\cite{Bergstrom:2008ag} and references therein; a possible model-dependent
"internal bremsstrahlung" contribution~\cite{Bergstrom:2008gr} is not included here). For the
last two cases, the annihilation into $\tau^+/\tau^-$ gives rise to decay chains containing
neutral pions, which in turn decay to two photons; this process is accounted for by
linking to the Pythia Monte Carlo simulations as provided by the {\sffamily DarkSUSY}\ package~\cite{Gondolo:2004sc}.
The level of the integrated gamma-ray fluxes, slightly above $1\cdot 10^{-8}$~cm$^{-2}$~s$^{-1}$,
is much lower than that of the brightest unidentified EGRET source, i.e. about
$7\cdot 10^{-7}$~cm$^{-2}$~s$^{-1}$, or than the level of the faintest source detected by EGRET,
i.e. about $6\cdot 10^{-8}$~cm$^{-2}$~s$^{-1}$~\cite{Pavlidou:2007su} (in a more detailed
comparison one should also consider the fact that the energy spectra for the sources proposed
here are appreciably softer than typical spectra of EGRET sources), but well within the sensitivity
of the FERMI~LAT instrument, about $4\cdot 10^{-9}$~cm$^{-2}$~s$^{-1}$ considering intermediate
to high latitudes and a 2~year data taking period~\cite{Baltz:2008wd} (again we are not taking into
account spectral features, and also the fact that the sensitivity we quote for a DM signal was
computed based on a gamma-ray background as extrapolated from the EGRET GeV measurements of
the diffuse gamma-ray emission, which are not being confirmed by FERMI and will have to be
revised downward, see, e.g.,~\cite{FERMI:preliminary}). This is a trend we see in general.
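For orientation, the quoted fluxes follow from the simple point-source relation $\Phi_\gamma = \Gamma\, N_\gamma / (4\pi d^2)$; a hedged sketch (the FSR photon multiplicity above 100~MeV, $N_\gamma \simeq 1.2$, is our rough estimate, not a value quoted in the text):
\begin{verbatim}
# Integrated gamma-ray flux from the point source,
# Phi_gamma = Gamma * N_gamma(>0.1 GeV) / (4 pi d^2).
import math
KPC_CM = 3.086e21

def phi_gamma(Gamma_s, d_kpc, N_gamma=1.2):   # N_gamma: rough FSR guess
    """Integrated flux in cm^-2 s^-1."""
    return Gamma_s * N_gamma / (4.0 * math.pi * (d_kpc * KPC_CM)**2)

# model 1 of the table: Gamma = 20.9e36 s^-1 at d = 4.25 kpc
print(f"Phi ~ {phi_gamma(20.9e36, 4.25):.1e} cm^-2 s^-1")  # ~1.2e-8
\end{verbatim}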
\begin{figure}[t]
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig7a.eps}
\end{minipage}
\ \hspace{3mm} \
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig7b.eps}
\end{minipage}
\caption{Upper limits on the total annihilation rate $\Gamma$ derived from the
PAMELA data on the positron fraction and the FERMI data on the all-electron flux. The left panel is
analogous to the right panel of Fig.~\protect{\ref{fig:epluslimit1}}, but for the $\tau^+\,\tau^-$ final
state. In the right panel the mass scale of the DM candidate is varied and a few sample values
of the distance for a source moving away from the observer along a vertical orbit are considered.
Results are compared to the levels of $\Gamma$ required to match
the brightest unidentified EGRET source, the faintest source detected by EGRET, and the
$\gamma$-ray sensitivity of FERMI.}
\label{fig:epluslimit2}
\end{figure}
In Fig.~\ref{fig:epluslimit1},
we compute the upper limit on the total annihilation rate $\Gamma$ from the
PAMELA data on the positron fraction and from the FERMI data on the "all-electron" flux
(limits are extracted separately for the two datasets, allowing for extra freedom
in the background normalization and spectral index; the most stringent one is
displayed). These constraints are compared to the values
of $\Gamma$ corresponding to an integrated
$\gamma$-ray luminosity at the level of the brightest unidentified EGRET source, of the faintest source
detected by EGRET and of the FERMI $\gamma$-ray sensitivity. We have chosen a benchmark WIMP model, with
$M_\chi=500$~GeV and annihilating into monochromatic $e^+\,e^-$. We display the results
for: {\sl i)} the reference vertical orbit and different propagation models (left panel, the
displayed cases are the same as in Fig.~\ref{fig:scal}, see the relative discussion in the text; the
values of $d$ refer to the distance of the point source from the observer at the time when the
positron flux is measured, after the source has passed in the neighborhood of the observer);
{\sl ii)} the reference propagation model AH and a few possibilities for the point-source orbit (right
panel, the sample orbits are a subset of those considered in Fig.~\ref{fig:scal2}; negative $d$ refers
to an approaching source, positive $d$ to a source moving away from the observer). In the right panel
of Fig.~\ref{fig:epluslimit1}, we also sketch the error that can be induced by
estimating positron constraints assuming the stationary limit.
Dotted curves refer to static point-sources and show (making the comparisons with the appropriate color coding) that the bounds, except for nearby sources, are systematically overestimated for approaching sources and greatly underestimated for
sources moving away from the observer.
Clearly, the gamma-ray flux of a point source scales as $1/d^2$.
The scaling with distance, or better with time, of the
local positron flux depends on many different ingredients which are hard to extract from observations.
The picture, however, is intuitively clear: unless the DM point source is extremely close to the observer, inducing a very large gamma-ray flux (at a level which can already be excluded by EGRET),
it can generate a substantial contribution to the electron/positron flux measured by
PAMELA and FERMI, without
being in conflict with present gamma-ray observations. In particular, we expect, for intermediate
distances, the balance to lean towards the electron/positron side since there is a time interval during which
electrons/positrons go through a diffusion transient while the gamma-ray intensity decreases as the inverse of
time squared. When the diffusion transient is over and the local positron population
becomes negligible, i.e. at large positive distances in the plot, the gamma-ray limit takes over again.
On the other hand, if a substantial contribution to the positron flux
measured by PAMELA is actually due to a DM point source, it is unlikely that such a source will not be
detected by FERMI in $\gamma$-rays. This does not necessarily mean that it will be identified as a DM source.
Indeed, FERMI will measure spectra only up to about 300~GeV, spectral features might not be easily
identifiable, and the cross correlation with the positron flux is no longer unambiguous, as we
already stressed.
These conclusions do not depend critically on the WIMP model, as we show in Fig.~\ref{fig:epluslimit2}.
In the left panel, we consider the same configurations as in the right panel of Fig.~\ref{fig:epluslimit1}, but now for a
WIMP model annihilating into $\tau^+\,\tau^-$. In this case, the $\gamma$-ray yield is enhanced (since the $\pi^0$-decay channel is open) and the electron/positron yield is softer; still, at intermediate distances, a picture inducing a sizable local flux of positrons is not in conflict with the EGRET observations, while again within the FERMI $\gamma$-ray sensitivity. In the right panel of Fig.~\ref{fig:epluslimit2}, we show the constraints on $\Gamma$ obtained considering a few values for the source distance (still moving away from the observer on a vertical orbit with a velocity of 300~km~s$^{-1}$) and varying the WIMP mass. The interplay between electron/positron and gamma-ray observations changes only slightly with the mass, while
it is again evident that the static approximation may be misleading.
\section{Antiprotons from a dark matter point source}
\label{sec:antiproton}
\begin{figure}[t]
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig8a.eps}
\end{minipage}
\ \hspace{3mm} \
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig8b.eps}
\end{minipage}
\caption{Spectral shapes for the local antiproton flux due to a dark matter point source moving
on a vertical orbit with constant velocity $v_s = 300 \,{\rm km \,s^{-1}}$ and for different values of
the distance $d$ of the source at the time of observation. In the left panel, the source is getting
closer to the observer. In the right panel, the source, after passing in the vicinity of the observer, is moving
away. A WIMP mass of 500~GeV and pair annihilations into $W^+\, W^-$ have been assumed,
as well as the propagation model~A. For comparison, spectral shapes for a static source at the
corresponding distances and for pair annihilations in the smooth DM halo are also shown.}
\label{fig:pbscal1}
\end{figure}
We now focus the discussion on the possibility that dark matter pair annihilations within a substructure
give rise to an antiproton yield. This is possible if the final states of annihilation include
weak gauge bosons or quarks, rather than leptons only, as considered so far.
Analogously to the positron case, in the limit of a point-like dark matter substructure,
the antiproton source function takes the form:
\begin{equation}
Q_{\bar{p}}(\vec{r}, t,E) = \delta^3 \left[\vec{r}-\vec{r}_p(t)\right] \frac{dN_{\bar{p}}}{dE} \,\Gamma\,,
\label{eq:pointsourcepb}
\end{equation}
where $dN_{\bar{p}}/dE$ is the differential antiproton yield per annihilation and $\Gamma$ is the
total dark matter annihilation rate in the source already introduced above. Solving the propagation
equation with this source function, we find the corresponding number density per unit energy:
\begin{equation}
n(\vec r,E,t) = \frac{dN_{\bar{p}}}{dE} \,\Gamma\, \int_{-\infty}^t dt_0\; G(\vec r,t,E;\vec {r}_p,t_0)\,,
\end{equation}
where the Green function $G$ is given in Appendix~\ref{app:pbar}. As for positrons, it is instructive
to consider its approximate form by neglecting boundary conditions; assuming at first
that antiproton annihilations on the gas in the disc during propagation also play a minor role, $G$ goes like:
\begin{equation}
G \simeq \frac{1}{\pi^{3/2} [\lambda_p(E,t,t_0)]^3}
\exp\left[-\frac{\left| \vec r -\vec r_p(t_0)\right|^2}{[\lambda_p(E,t,t_0)]^2}\right]\,,
\label{eq:pbgreensimply}
\end{equation}
where the diffusion length depends explicitly on time: $\lambda_p(E,t,t_0) = 2 \sqrt{(t-t_0)\,D(E)}$.
Suppose for simplicity that the substructure is moving along a straight line trajectory with
constant velocity $\vec{v}_s$, namely $\vec{r}_p(t_0) = \vec{r}_t + \vec{v}_s(t_0-t)$; in this
case $G$ can be rewritten in the form:
\begin{equation}
G \simeq \frac{1}{8 \pi^{3/2} \, D^{3/2} \, (t-t_0)^{3/2}}
\exp\left[-\left(\frac{t_s}{t-t_0}-2\,\sqrt{\frac{t_s}{t_d}}c_\theta+\frac{t-t_0}{t_d}\right)\right]
\label{eq:pbgreensimply2}
\end{equation}
where we have introduced a "static" timescale $t_s \equiv d_t^2/4 D$, with
$d_t=\left| \vec r_t -\vec r\right|$ being the distance between the point-source and the observer at the time of
observation (which would be, of course, the distance between the point-source and the observer at all times in
the static limit), and a "dynamical" timescale $t_d \equiv 4 D/|\vec{v}_s|^2$, and we have defined
$c_\theta$ as the cosine of the angle between the vectors $\vec{v}_s$ and $ \vec r_t -\vec r$
(which is negative for a source approaching the observer and positive for a source moving away).
We recognize that $t_d$ and $t_s$ are the appropriate quantities to compare for a first
guess on whether proper motion is relevant or not in estimating antiproton fluxes from point
sources; in particular, Eq.~(\ref{eq:pbgreensimply2}) can be integrated over time, giving:
\begin{equation}
\int_{-\infty}^t dt_0\; G \simeq \frac{1}{4 \pi \, D \, \left| \vec r -\vec r_t\right|}
\exp \left[ -2 \,\sqrt{\frac{t_s}{t_d}} + 2 \,\sqrt{\frac{t_s}{t_d}}c_\theta \right]\,.
\label{eq:pbgreensimply3}
\end{equation}
In the limit $t_d \gg t_s$, this expression correctly reduces to the Green function of the
three dimensional Laplacian, while for $t_d \sim 2\,t_s$ we see that proper motion starts
to become important. For typical values of the propagation parameters in the model, we find:
\begin{equation}
\frac{t_d}{2\,t_s} = \frac{8 D^2(E)}{d_t^2 v_s^2} \simeq 1.5
\frac{D_1^2\,E_{10}^{2 \delta} \, 10^{2 \delta-1.2}}{d^2_{t,1} v_{300}^2}\,,
\end{equation}
where $E_{10}$ is the antiproton energy in units of 10~GeV and $d_{t,1}$ is the source distance in units
of 1~kpc. As for positrons, we find the effect of proper motion to be negligible
either in the limit of nearby sources or going to high energies, while we expect it
to be significant in the other cases.
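A quick numerical sketch of this criterion (using the model-A diffusion coefficient and approximating the antiproton rigidity as $R \simeq E$, both our simplifications):
\begin{verbatim}
# t_d/(2 t_s) = 8 D(E)^2 / (d_t^2 v_s^2); proper motion matters when
# this ratio is of order one or smaller.
KPC_CM = 3.086e21

def td_over_2ts(E_GeV, d_kpc, v_kms=300.0):
    D = 2.5e28 * max(E_GeV / 4.0, 1.0)**0.6   # model A, cm^2/s
    return 8.0 * D**2 / ((d_kpc * KPC_CM)**2 * (v_kms * 1e5)**2)

for E in (1.0, 10.0, 100.0):                  # source at d_t = 1 kpc
    print(f"E = {E:5.1f} GeV: t_d/(2 t_s) = {td_over_2ts(E, 1.0):.2f}")
# -> 0.58, 1.75, 27.8: the effect is significant at low energies or
#    for more distant sources, negligible at high energy or small d_t.
\end{verbatim}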
Actually, Eqs.~(\ref{eq:pbgreensimply}) and (\ref{eq:pbgreensimply2}) contain an oversimplification,
since they were derived neglecting antiproton annihilations during propagation, an effect which
defines a third timescale that could also be relevant: depending on energy, the antiproton loss via
annihilation can be much larger than the leakage from the boundaries of the diffusion region.
When including this extra timescale, the expression for $G$ becomes less transparent; we show
instead results for a few sample cases obtained by implementing the exact Green function.
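Before moving to those results, the time integration leading from Eq.~(\ref{eq:pbgreensimply2}) to Eq.~(\ref{eq:pbgreensimply3}) can be verified numerically; a minimal sketch (the prefactor common to both sides is dropped):
\begin{verbatim}
# Numerical check of the closed form used in the text,
#   int_0^inf du u^{-3/2} exp(-a/u - b*u) = sqrt(pi/a) exp(-2 sqrt(a*b)),
# with a = t_s and b = 1/t_d; the c_theta term factors out of the integral.
import math
from scipy.integrate import quad

def lhs(ts, td, c):
    f = lambda u: math.exp(-(ts/u - 2.0*math.sqrt(ts/td)*c + u/td)) / u**1.5
    return quad(f, 0.0, math.inf)[0]

def rhs(ts, td, c):
    x = math.sqrt(ts / td)
    return math.sqrt(math.pi / ts) * math.exp(-2.0 * x * (1.0 - c))

for c in (-1.0, 0.0, 1.0):   # approaching, transverse, receding
    print(f"c_theta = {c:+.0f}: {lhs(1.0, 3.0, c):.4f} vs {rhs(1.0, 3.0, c):.4f}")
\end{verbatim}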
In Fig.~\ref{fig:pbscal1}, we plot the spectral shape for the local antiproton flux from a source moving
along the reference trajectory, namely a
path perpendicular to the Galactic plane and intersecting it at a short distance from the
observer, with the source moving at constant velocity $v_s = 300 \,{\rm km \,s^{-1}}$.
We have also chosen the propagation model~A, and defined the dark matter model through
a sample value for the WIMP mass, $M_\chi=500$~GeV, and assuming that the annihilation
is dominantly into $W^+\, W^-$ pairs. As in the previous plots, $d$ refers to the distance of the
source from the observer at the time of observation and is used instead of a time variable
to compare more easily to the static limit (dotted curves in the plot).
There is an evident transient at small and intermediate energies, soon after the source has
entered the diffusion region; the spectrum starts to fully match the static limit case only when the
source arrives very close to the observer. Moving away from the observer, the scaling with
distance is less severe than in the static limit, and a contribution to the local antiproton flux
persists long after the source has left the propagation region (the vertical
boundary is at $h_h = 4$~kpc in model~A). Since high energy antiprotons diffuse more efficiently,
at late times the spectral shape becomes steeper and again starts to differ appreciably from the shape
of the component due to pair annihilations in the smooth dark matter halo (dash-dotted line in
the plot). Note that the latter, contrary to the positron case, traces rather closely the shape of static
point sources.
\begin{figure}[t]
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig9a.eps}
\end{minipage}
\ \hspace{3mm} \
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig9b.eps}
\end{minipage}
\caption{The locally measured antiproton and positron fluxes as a function of the distance
of the point source (negative distances label an approaching source, positive values a source
moving away), normalized to the fluxes at a closest encounter distance $d_{min} = 50$~pc
and to the scaling with distance of the accompanying gamma-ray flux, i.e. $1/d^2$.
The $W^+\, W^-$ annihilation channel is considered (giving rise at the same time to antiprotons,
positrons and photons), for three sample values of the WIMP mass $M_\chi$. The antiproton
and positron fluxes are plotted at the energies $E$ equal to $M_\chi/5$ (left panel)
and $M_\chi/50$ (right panel).}
\label{fig:pbscal2}
\end{figure}
\begin{figure}[t]
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig10a.eps}
\end{minipage}
\ \hspace{3mm} \
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig10b.eps}
\end{minipage}
\caption{Scaling of the locally measured antiproton flux with the distance of the point source
from the observer (negative distances label an approaching source, positive values a source
moving away), for two WIMP masses $M_\chi$ (200~GeV and 1~TeV) and two sample energies
$E$ (equal to $M_\chi/5$ and $M_\chi/50$), and assuming
pair annihilations into $W^+\, W^-$. In the left panel, the results refer to the standard vertical
trajectory and to our three benchmark propagation models A, B \& C (intermediate,
thin-halo \& thick-halo, see text for details). In the right panel, for the propagation model~A,
the color coding refers to different orbits, namely three vertical trajectories with source
velocities of 300, 600 and 150~km~s$^{-1}$, and a circular orbit in the Galactic plane
with velocity matching the local circular velocity of 250~km~s$^{-1}$. }
\label{fig:pbscal3}
\end{figure}
The $W^+\, W^-$ annihilation channel is a copious source of positrons and gamma-rays
(mainly from the production and decay of pions). In Fig.~\ref{fig:pbscal2}, we compare
the scaling with distance (i.e. time) of the locally measured antiproton and positron fluxes at an
energy corresponding to a fraction of the WIMP mass (1/5 and 1/50, respectively, in the left and right
panels) for three sample values of $M_\chi$. Both antimatter fluxes are normalized to the scaling with distance
of the induced gamma-ray emission, which goes simply like $1/d^2$. The ratio of the charged-particle
fluxes to the gamma-ray flux increases rapidly around $d=0$ since the local number density of
positrons and antiprotons decreases less severely when the distance from the source is smaller
than the corresponding diffusion length. At negative $d$, i.e. for a source approaching the observer, one can see again the transients due to the fact that the source has just entered the diffusion region;
these transients are essentially mirror images in the two cases. On the other hand,
the contribution to the local antiproton population tends to survive much longer than for the positron
component. When the source is moving away, the antiproton flux scales, at least at intermediate distances and except for very energetic antiprotons, like $1/d^2$; positrons have instead much sharper transients, due to the efficient energy
loss term.
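The flattening at short distances can be understood in a textbook limit: for a point source
injecting particles at a constant rate $Q$ in an infinite diffusive medium with diffusion
coefficient $D$, neglecting energy losses and boundaries, the equilibrium number density is
\begin{equation}
n(d) = \frac{Q}{4\pi\, D\, d}\,,
\end{equation}
i.e. it falls off only as $1/d$, to be contrasted with the $1/d^2$ scaling of the free-streaming
$\gamma$-ray flux.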
Fig.~\ref{fig:pbscal2} has been obtained for the reference propagation model (AH, with the
``H'' labeling the energy loss configuration for positrons only) and the reference trajectory. As for
the positrons, we expect the results for the antiproton flux to depend on the choice of the
propagation parameters and the orbit for the point source. This is illustrated in
Fig.~\ref{fig:pbscal3}: in the left panel, we show the scaling of the locally measured antiproton
flux with distance considering two sample WIMP masses, two sample energies and the benchmark
trajectory, while looping over the three reference propagation models. The main effect here is due to the
increase in the diffusion coefficient going from model~B to model~A and then to model~C, as well as
to the boundary conditions, entering more critically for the thin-halo model~B ($h_h=1$~kpc); note,
however, that even for model~B a sizable contribution to the local flux may be due to sources
which are well outside the diffusion region and that would not be included in the static limit treatment.
In the right panel of Fig.~\ref{fig:pbscal3}, we consider instead a few of the orbits that we have already
introduced to discuss positron fluxes, with the pattern for source velocities which is analogous here;
note that in the circular orbit case the transient following a close encounter can be sufficiently
long-lasting for the induced antiproton population to persist up to the next close encounter (what we
plot is the ``equilibrium'' configuration after a couple of full orbits).
Finally, in Fig.~\ref{fig:pbscal4}, we summarize the picture by comparing, for a WIMP model
with $W^+\, W^-$ annihilation channel, the constraints on the total annihilation rate which are
set by the latest measurements by PAMELA of the antiproton to proton ratio~\cite{Adriani:2008zq},
the electron/positron measurements by PAMELA and FERMI, as well as searches for point
$\gamma$-ray sources by EGRET. Projected limits for the FERMI $\gamma$-ray telescope are also shown.
Analogously to the case of the smooth DM halo component,
we find for a DM substructure contribution that, if the WIMP annihilates into a channel in which antiprotons
are copiously emitted, the measured antiproton flux sets tighter constraints than
electron/positron data, except possibly for extremely heavy DM candidates. This trend is reinforced
going to larger distances. The information from EGRET is inconclusive, while FERMI
is going to cover most, if not all, of the parameter space currently probed by antiproton or positron
searches. The displayed results refer to a source moving along the reference vertical trajectory,
approaching or moving away from the observer; however, analogous conclusions hold for other
configurations. The extension to other WIMP annihilation channels also leads to similar results.
\begin{figure}[t]
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig11a.eps}
\end{minipage}
\ \hspace{3mm} \
\begin{minipage}[htb]{8cm}
\centering
\includegraphics[width=7.8cm]{fig11b.eps}
\end{minipage}
\caption{Upper limits on the total annihilation rate $\Gamma$ derived from the
PAMELA data on the antiproton-to-proton ratio (solid lines), and from the PAMELA data
on the positron fraction together with the FERMI data on the all-electron flux
(dash-dotted curves). The source is
moving along the reference vertical trajectory and is approaching the observer (left panel)
or moving away from the observer (right panel).
Results are compared to the levels of $\Gamma$ required to match the brightest
unidentified EGRET source or the faintest source detected by EGRET, and to the level
of the $\gamma$-ray sensitivity of FERMI.}
\label{fig:pbscal4}
\end{figure}
\section{Extra features of positron/electron emission for a DM point source}
\label{sec:extra}
\subsection{A gamma-ray component from inverse Compton emission on starlight}
Radiative emissions are unavoidably associated with electron/positron yields; in particular,
the inverse Compton (IC) radiation component due to the interaction of high-energy
electrons with optical starlight is peaked in the range between tens and hundreds of GeV, and may be
sizable for the intense dark-matter lepton emitters we have been considering.
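The quoted energy range follows from the standard inverse Compton kinematics (the numbers below
are purely illustrative): in the Thomson regime, target photons of energy $\epsilon$ are
up-scattered by electrons of Lorentz factor $\gamma$ to a typical energy
\begin{equation}
h\nu \sim \frac{4}{3}\,\gamma^2\,\epsilon\,, \qquad \gamma = \frac{E_e}{m_e c^2}\,,
\end{equation}
so that optical starlight with $\epsilon\simeq1$~eV scattered by electrons with
$E_e\simeq100$~GeV (i.e. $\gamma\simeq2\times10^5$) emerges at $h\nu\sim50$~GeV;
Klein--Nishina corrections somewhat soften this estimate at the highest energies.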
The IC photon emissivity is obtained by folding, at any given point within the diffusion region,
the IC emission power with the electron/positron number density, see, e.g.,~ \cite{Rybicki}:
\begin{equation}
j_{IC}(\nu,\vec{r},t)=2\int^{M_{\chi}}_{m_e}dE\, P_{IC}(\vec{r},E,\nu)\, n(\vec{r},E,t)\;,
\label{eqjIC}
\end{equation}
where the IC power is given by:
\begin{equation}
P_{IC}(\vec{r},E,\nu) = c\,h\nu \int d\epsilon\, \frac{dn_\gamma}{d\epsilon}(\epsilon,\vec{r})
\,\sigma(\epsilon,\nu,E)\, v\;,
\label{eqPIC}
\end{equation}
where $\epsilon$ is the energy of the target photons, $dn_\gamma/d\epsilon$ is their differential energy
spectrum and $\sigma$ is the Klein--Nishina cross section. The spatial dependence and spectrum of the
number density of starlight photons can be computed from photometric maps of the Galaxy; we will adopt
the model implemented in the public release of the Galprop numerical package~\cite{Strong:1998pw}.
On the other hand, the analytic solution introduced in Section~\ref{sec:positronpoint} for computing
the electron/positron number density $n$ was obtained assuming a spatially constant electron/positron
energy loss term, and, hence, also a mean value for the background starlight density, rather than its
value as a function of the spatial coordinates. This should not be regarded as a severe inconsistency,
since the IC on starlight is just one of the effects contributing to the energy loss term, whose main
uncertainty is related to synchrotron emission on background magnetic fields. Having computed the IC
emissivity field as in Eq.~(\ref{eqjIC}), the corresponding $\gamma$-ray flux for a local observer is
obtained simply by integrating the emissivity along the line of sight.
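Explicitly, for the optically thin emission considered here, the intensity seen by an observer
looking in the direction $\hat{n}$ is the line-of-sight integral
\begin{equation}
\phi_{IC}(\nu,\hat{n},t) = \frac{1}{4\pi} \int_{\rm l.o.s.} ds\; j_{IC}\big(\nu,\vec{r}(s),t\big)\,,
\end{equation}
with $s$ the distance along the line of sight.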
\begin{figure}[t]
\centering
\includegraphics[width=9.cm]{fig12.eps}
\caption{$\gamma$-ray spectra for a DM point-source composed of 500~GeV WIMPs annihilating into
monochromatic electrons and positrons, and moving along the reference vertical trajectory.
The solid lines refer to the IC emissivity and are computed for three different directions of observation
labeled by the latitude $b$ (the longitude is $l=180^\circ$);
dashed lines refer to the FSR component in the direction of the source,
while dotted lines show the Galactic $\gamma$-ray background. The DM signals are computed for three
sample distances of the source, assuming that, for each distance, the normalization of the
total annihilation rate in the source is the maximum allowed by current PAMELA and FERMI
electron/positron data, see the limits derived in Fig.~\protect{\ref{fig:epluslimit1}}.
}
\label{fig:ic}
\end{figure}
An example is given in Fig.~\ref{fig:ic}, where we refer again to the model introduced at the beginning of
the discussion, namely a point source composed of 500~GeV WIMPs annihilating into monochromatic
electrons and positrons and moving on a vertical orbit intersecting the Galactic plane at a short distance from the observer, see Figs.~\ref{fig:scal} and \ref{fig:epluslimit1}. The propagation scenario considered is the AH model (recall that in this model the vertical scale height of the diffusion region is 4~kpc) and the time variable is again replaced by the distance from the observer. Having assumed a velocity of
the point source of 300~km~s$^{-1}$, we consider three sample cases, i.e. $d=1,\, 3,\, 5$~kpc, with the source having passed close to the observer and now moving away. The induced IC flux (solid lines) is shown for three different directions of observation, all at the longitude $l=180^\circ$ (i.e. opposite to the Galactic center), with latitude $b=90^\circ$ (the direction towards the point source in our example), $b=75^\circ$ or $b=60^\circ$. We also plot the background level corresponding to the emission from Galactic cosmic-rays (dotted lines) computed with Galprop, and the DM-induced $\gamma$-ray component due to FSR at the source (which is obviously present only in the direction of the DM substructure). The normalization of the total annihilation rate, or of the annihilation volume, has been chosen by saturating the current limits from PAMELA or FERMI, see the results in Fig.~\ref{fig:epluslimit1}. This explains the relative strength of the FSR components.
One can see that the IC flux in the direction of the source can be as large as the FSR contribution (for $d=1$~kpc) or even larger (for $d=3$~kpc). Most notably, the spatial size of the IC emission is significant, inducing a signal larger than the background even tens of degrees away from the source. Note, however, that the case displayed here is the most favorable, since the source is located in the portion of the sky with the faintest Galactic background, and we have not included the uncertain extragalactic component. For $d=5$~kpc, i.e. for the source outside the diffusion region, the IC term drops dramatically. This is due to the fact that there is no fresh source of high-energy electrons and positrons, and the ones injected during the transit of the source through the diffusion region have rather rapidly degraded their energy. Therefore, there is no efficient high-energy population left to up-scatter starlight photons to $\gamma$-ray frequencies. The trend is probably exaggerated by the fact that we are assuming a sharp boundary for the propagation region, while a smoother boundary condition would certainly be more physical and would probably lead to a smoother transition between the $d=3$~kpc and $d=5$~kpc cases.
From the sample case discussed, we can infer that it is rather likely that, if the local positron flux receives a sizable contribution from a single point-source, the induced IC flux is a relevant target for the FERMI $\gamma$-ray telescope. A more thorough discussion of this point is deferred to an upcoming dedicated analysis.
\subsection{Dipole anisotropy in the electron/positron spectrum}
As pointed out by recent analyses (see e.g.,~\cite{Buesching:2008hr,Hooper:2008kg,Grasso:2009ma,Cernuda:2009kk}), a single nearby point-like source (e.g., a pulsar), being the dominant local positron source at high energy, induces an anisotropy which could be at a detectable level. In this Section, we reconsider this possibility in the context of DM substructures. We take into account proper motion effects, which are not relevant for burst-like sources such as pulsars. We assume the dominant contribution to the anisotropy to be given by the dipole term.
In order to detect an anisotropy along a certain direction at a good confidence level, the required number of events has to be very large. Therefore, although the anisotropy in the positron spectrum would be higher than in the total electron plus positron spectrum, we refer to the latter, since the detection prospects for FERMI are much more promising than for PAMELA.
If the anisotropy is dominated by the dipole term, the intensity at a given point as a function of the direction of observation has only one maximum. The degree of dipole anisotropy can be defined by the quantity $\delta=(I_{max}-I_{min})/(I_{max}+I_{min})$, where $I_{max}$ and $I_{min}$ are the maximum and minimum $e^++e^-$ intensity with respect to direction.
Expanding the intensity in spherical harmonics up to the dipole term, we have: $I(\theta,\phi)\simeq \bar I+\delta\, \bar I \cos{\theta}+\sin{\theta}\,(I^{-1}_1e^{-i\phi}+I^1_1 e^{i\phi})$, where $\theta$ is the angle with respect to the direction of maximal anisotropy, $\bar I=(I_{max}+I_{min})/2=(I(\theta=0)+I(\theta=\pi))/2$, and $I^{-1}_1$ and $I^1_1$ are angle-independent coefficients.
Note that, contrary to the case of a stationary point-source (with homogeneous and isotropic propagation parameters), the $\phi$-dependent terms of the dipole do not vanish.
Although the direction of maximal anisotropy (chosen, e.g., as the $z$-axis) is, in general, unknown, we can define the particle flux $F$ along such a direction by integrating over all possible directions the projection of the intensity onto $z$, i.e., $F\simeq \int d\Omega\, I\cos{\theta}\simeq (4\pi/3)\, \delta\, \bar I$, and by estimating it as $F\simeq D\,|\nabla n|$. The latter is obtained in the diffusion approximation~\cite{Berezinskii:1990}, but it is valid at first order (i.e., for small anisotropies) also when including energy losses. Moreover, we assume the anisotropies in directions orthogonal to $z$ to be subdominant (i.e., $|\partial_z n| \simeq |\nabla n|$). By equating the two expressions for $F$, one gets~\cite{Ginzburg:1964}:
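The numerical coefficient in $F$ follows from a one-line computation: since
$\int d\Omega\,\cos\theta=0$ and the $\phi$-dependent terms of the dipole integrate to zero,
\begin{equation}
\int d\Omega\,\cos\theta \left( \bar{I} + \delta\,\bar{I}\,\cos\theta \right)
= \delta\,\bar{I} \int d\Omega\,\cos^2\theta = \frac{4\pi}{3}\,\delta\,\bar{I}\,.
\end{equation}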
\begin{equation}
\delta=\frac{3\,D}{c}\frac{|\nabla n_{tot}|}{n_{tot}}=\frac{3\,D}{c}\frac{|\nabla n_{DM}|}{(n_{DM}+n_{CR})} \;,
\label{Eq:anis}
\end{equation}
where $n_{DM}$ denotes the contribution to the $e^++e^-$ number density induced by a DM substructure, $n_{CR}$ is the contribution from cosmic-rays, and $n_{tot}=n_{DM}+n_{CR}$.
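As an order-of-magnitude illustration (the sample numbers here are chosen for definiteness only),
taking $D\sim10^{28}\,{\rm cm^2\,s^{-1}}$, a value typical of GeV energies, and a gradient length
scale $n_{tot}/|\nabla n_{DM}|\sim1~{\rm kpc}\simeq3\times10^{21}$~cm, Eq.~(\ref{Eq:anis}) gives
$\delta\sim3\times10^{-4}$; the increase of $D$ with energy, or steeper gradients for nearby
sources, correspondingly enhance the expected anisotropy.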
Neglecting boundary conditions, $|\nabla n_{DM}|$ is given by the gradient of Eq.~(\ref{eq:epnsol}):
\begin{equation}
|\nabla n_{DM}(\vec r,p,t)| = \left[\sum_{i=1}^3\left(\Gamma
\int_{t_i }^t dt_0 \, \left|\frac{\dot{p}(p_0)}{\dot{p}(p)} \right|
\frac{dN_{e^+}}{dp_0} \, 2\,\frac{x_i-x_{i,p}(t_0)}{\lambda(p,p_0)^2}\,G(\vec r,p;\vec{r}_p(t_0), p_0)\right)^2\right]^{1/2} \;,
\label{eq:anis2}
\end{equation}
with $p_0$ obtained from $\Delta \tau=t-t_0$ and $x_i=x,y,z$ for $i=1,2,3$, respectively.
\begin{figure}[t]
\centering
\includegraphics[width=7.cm]{fig13.eps}
\caption{Dipole anisotropy of the electron plus positron spectrum as a function of the energy for the four sample DM scenarios in Table~\ref{tab:fit}.}
\label{fig:anis}
\end{figure}
We plot in Fig.~\ref{fig:anis} the degree of dipole anisotropy for the benchmark DM scenarios in Table~\ref{tab:fit}.
Comparing models 3 and 4, which describe sources at analogous distance and with analogous local spectrum, one can note that, as is intuitive, an approaching source would induce a higher anisotropy than a receding one (both on a straight-line trajectory). On the other hand, for sources at moderate distance the mismatch is quite small and the picture is similar to the stationary case.
The degree of anisotropy is strongly anticorrelated with the diffusion length (see Eq.~(\ref{eq:anis2})).
Although the benchmark DM models 1 and 2 induce a very similar local spectrum (see Fig.~\ref{fig:pamela1}) and describe substructures moving away from us on a vertical trajectory at the same velocity, in the second case the electrons and positrons undergo more diffusion before reaching us, and the anisotropy is suppressed with respect to the first case.
Note that the maxima of the curves in Fig.~\ref{fig:anis} occur at larger energies than the peaks in Figs.~\ref{fig:pamela1} and \ref{fig:pamela1b}, since diffusion, and the related wash-out of anisotropies, is less effective at high energy.
The degree of dipole anisotropy for the benchmark models in Fig.~\ref{fig:anis} could be observable by FERMI within a few years of data taking. On the other hand, the predictions for the anisotropy associated with a DM substructure suffer from many theoretical uncertainties, as in the case of the spectrum, and, depending on the trajectory, the degree of anisotropy can be suppressed or slightly enhanced with respect to an analogous (i.e., at an analogous distance from the observer) stationary point-like case.
\section{Conclusions}
\label{sec:concl}
We have discussed the contribution to the local antimatter fluxes due to individual WIMP
DM substructures, accounting for the substructure proper motion.
We have derived analytic solutions to the propagation equation as appropriate for
time-dependent positron and antiproton primary sources, identifying the quantities
that determine whether proper motion effects are relevant or not.
We found that, for both positrons and antiprotons,
the static limit is a fair approximation only in the case of high energy particles
and nearby sources, while it fails in all other situations. The discussion has involved
a few benchmark DM candidates and a few sample orbits for the substructure.
As an application of the general discussion, we focused first on WIMPs annihilating into leptons only,
and we derived sample fits of the PAMELA positron excess and FERMI all-electron data.
The fits have been obtained starting from arbitrary values of the WIMP mass,
demonstrating that, for a single non-static DM point-source, it is no longer
true that one can extract from the data, in a unique way,
model-independent particle physics observables, such as the DM mass, the annihilation yield and
the pair annihilation cross section, as has been extensively done in recent analyses.
Indeed, the threshold of the local electron/positron spectrum cannot be used as a reliable indicator of the DM particle mass, since it may instead be set by the distance of the DM substructure. The relation between the shapes of the injection spectrum and of the propagated spectrum is not straightforward for a non-static DM substructure, which behaves as a transient source.
Moreover, the annihilation rate depends on the product of the pair annihilation cross section times the unknown annihilation volume of the substructure, rather than on $\sigma v$ only.
The PAMELA positron excess can be explained in such a scenario, provided that a large annihilation rate in the substructure is considered. This requires a large annihilation volume (much larger than the typical values predicted for DM substructures in N-body simulations, but viable for scenarios accounting for the adiabatic formation of an intermediate-mass black hole in the substructure) and/or a pair annihilation cross section significantly enhanced with respect to the reference thermal value.
We have then used PAMELA and FERMI
electron/positron data to derive, under different configurations, limits to the total annihilation
rate in the DM source. These limits have been compared to the bounds extracted from the $\gamma$-ray luminosities associated to the same DM models. The general
trend is that the $\gamma$-ray signal induced by a DM point-source, giving a sizable contribution to the local positron flux, is below the level of the brightest unidentified EGRET sources, but well above the FERMI $\gamma$-ray telescope sensitivity.
We have also discussed WIMP models giving rise to an antiproton yield through pair annihilations in the substructure,
as in the case with $W^+W^-$ as final state of annihilation.
Analogously to the electron/positron case, single DM sources inducing a significant contribution
to the local antiproton flux can be detected by FERMI in $\gamma$-rays
(on the other hand, as for the smooth DM halo component,
a positron/electron flux at the level of PAMELA and FERMI data can hardly be obtained in such WIMP models
without violating antiproton limits).
Finally, we sketched two further features in connection with
positron/electron emission from a DM point-source, namely the spectral and angular shape of
the $\gamma$-ray flux induced by inverse Compton emission on starlight, and the dipole anisotropy
in the electron/positron spectrum. We have shown that, potentially, both of them could be
used to test the scenario proposed here.
\section*{Acknowledgements}
We would like to thank P.~D.~Serpico for useful discussions.
M.R. acknowledges funding by, and the facilities of, the Centre for High Performance Computing, Cape Town. P.U. is partially supported by the European Community's Human Potential Programme under contract MRTN-CT-2006-035863.
\section{Introduction}
\label{sec:intro}
\input{note-intro}
\section{The homology spectral sequence and the direct sum
decomposition of the homology of the arrangement}
\label{sec:homology}
\input{note-homology}
\section{The graded ring structure}
\label{sec:graded-ring}
\input{note-graded-products}
\section{Intersecting with a hyperplane}
\label{sec:hyperplane}
\input{note-proj-proj}
\section{Real projective arrangements and an example}
\label{sec:real}
\input{note-real}
\section{Recovering the direct sum decomposition}
\label{sec:stratum-top}
\input{note-stratum-top}
\section{The product of two arrangements}
\label{sec:arr-product}
\input{note-arr-product}
\section{The vanishing of the intersection product of classes of
degrees not adding up to~\texorpdfstring{$n$}n.}
\label{sec:vanishing}
\input{note-vanishing}
\section{The main theorem}
\label{sec:main-proof}
\input{note-proj-products}
\section{Projective \texorpdfstring{$c$}{c}-arrangements}
\let\seci\subsection
\label{sec:c-arr}
\input{note-c-arr}
\bibliographystyle{amsalpha}
// Copyright 1998-2015 Epic Games, Inc. All Rights Reserved.
#pragma once
/*-----------------------------------------------------------------------------
Type definitions
-----------------------------------------------------------------------------*/
/** Type definition for shared pointers to instances of FEventGraphColumn. */
typedef TSharedPtr<class FEventGraphColumn> FEventGraphColumnPtr;
/** Type definition for shared references to instances of FEventGraphColumn. */
typedef TSharedRef<class FEventGraphColumn> FEventGraphColumnRef;
/*-----------------------------------------------------------------------------
Enumerators
-----------------------------------------------------------------------------*/
/** Enumerates event graph view modes. */
namespace EEventGraphViewModes
{
enum Type
{
/** Hierarchical list of the events. */
Hierarchical,
/** Flat list of the events based on the inclusive time, sorted by the inclusive time. */
FlatInclusive,
/** Flat list of the events based on the inclusive time coalesced by the event name, sorted by the inclusive time. */
FlatInclusiveCoalesced,
/** Flat list of the events based on the exclusive time, sorted by the exclusive time. */
FlatExclusive,
/** Flat list of the events based on the exclusive time coalesced by the event name, sorted by the exclusive time. */
FlatExclusiveCoalesced,
/** For the specified class, shows an aggregated hierarchy. @TBD */
ClassAggregate,
/** Invalid enum type, may be used as a number of enumerations. */
InvalidOrMax,
};
/**
* @param EventGraphViewMode - The value to get the string for.
*
* @return string representation of the specified EEventGraphViewModes value.
*/
FText ToName( const EEventGraphViewModes::Type EventGraphViewMode );
/**
* @param EventGraphViewMode - The value to get the string for.
*
* @return string representation with more detailed explanation of the specified EEventGraphViewModes value.
*/
FText ToDescription( const EEventGraphViewModes::Type EventGraphViewMode );
FName ToBrushName( const Type EventGraphViewMode );
}
// @TODO: Sort properties by size.
/** Holds information about a column in the event graph widget. */
class FEventGraphColumn /*: public FNoncopyable*/
{
public:
FEventGraphColumn
(
const EEventPropertyIndex::Type InIndex,
const FName InSearchID,
const FText InShortName,
const FText InDescription,
const bool bInCanBeHidden,
const bool bInIsVisible,
const bool bInCanBeSorted,
const bool bInCanBeFiltered,
const bool bInCanBeCulled,
const EHorizontalAlignment InHorizontalAlignment,
const float InFixedColumnWidth
)
: Index( InIndex )
, ID( FEventGraphSample::GetEventPropertyByIndex(InIndex).Name )
, SearchID( InSearchID )
, ShortName( MoveTemp(InShortName) )
, Description( MoveTemp(InDescription) )
, bCanBeHidden( bInCanBeHidden )
, bIsVisible( bInIsVisible )
, bCanBeSorted( bInCanBeSorted )
, bCanBeFiltered( bInCanBeFiltered )
, bCanBeCulled( bInCanBeCulled )
, HorizontalAlignment( InHorizontalAlignment )
, FixedColumnWidth( InFixedColumnWidth )
{}
public:
/** Index of the event's property, also means the index of the column @see EEventPropertyIndex. */
EEventPropertyIndex::Type Index;
/** Name of the column, name of the property. */
FName ID;
/** Name of the column used by the searching system. */
FName SearchID;
/** Short name of the column, displayed in the event graph header. */
FText ShortName;
/** Long name of the column, displayed in the column tooltip. */
FText Description;
/** Whether this column can be hidden. */
bool bCanBeHidden;
/** Is this column visible? */
bool bIsVisible;
/** Whether this column can be used for sorting. */
bool bCanBeSorted;
/** Whether this column can be used to filter displayed results. */
bool bCanBeFiltered;
/** Whether this column can be used to cull displayed results. */
bool bCanBeCulled;
bool bCanBeCulled;
/** Horizontal alignment of the content in this column. */
EHorizontalAlignment HorizontalAlignment;
/** If greater than 0.0f, this column has fixed width and cannot be resized. */
float FixedColumnWidth;
};
/*-----------------------------------------------------------------------------
Declarations
-----------------------------------------------------------------------------*/
/** Interface for the event graph. */
class IEventGraph
{
public:
virtual void ExpandCulledEvents( FEventGraphSamplePtr EventPtr ) = 0;
};
/** A custom event graph widget used to visualize profiling data. */
class SEventGraph : public SCompoundWidget, public IEventGraph
{
struct ESelectedEventTypes
{
enum Type
{
AllEvents,
SelectedEvents,
SelectedThreadEvents,
};
};
public:
SLATE_BEGIN_ARGS( SEventGraph ){}
SLATE_END_ARGS()
/** Default constructor. */
SEventGraph();
/** Destructor. */
~SEventGraph();
/**
* Construct this widget.
*
* @param InArgs - the declaration data for this widget.
*/
void Construct( const FArguments& InArgs );
protected:
virtual void ExpandCulledEvents( FEventGraphSamplePtr EventPtr );
protected:
/*-----------------------------------------------------------------------------
Helper for constructing sub-widgets of this event graph widget.
-----------------------------------------------------------------------------*/
TSharedRef<SWidget> GetWidgetForEventGraphTypes();
TSharedRef<SWidget> GetWidgetForEventGraphViewModes();
TSharedRef<SWidget> GetWidgetBoxForOptions();
TSharedRef<SWidget> GetToggleButtonForEventGraphType( const EEventGraphTypes::Type EventGraphType, const FName BrushName );
TSharedRef<SWidget> GetToggleButtonForEventGraphViewMode( const EEventGraphViewModes::Type EventGraphViewMode );
/*-----------------------------------------------------------------------------
Settings
-----------------------------------------------------------------------------*/
EVisibility EventGraphViewMode_GetVisibility( const EEventGraphViewModes::Type ViewMode ) const;
/*-----------------------------------------------------------------------------
Events
-----------------------------------------------------------------------------*/
public:
/**
* The event to execute when the event graph state has been restored from the history.
*
* @param FrameStartIndex - the index of the first frame of the restored event graph state
* @param FrameEndIndex - the index of the last frame of the restored event graph state
*
*/
DECLARE_EVENT_TwoParams( SEventGraph, FEventGraphRestoredFromHistoryEvent, uint32 /*FrameStartIndex*/, uint32 /*FrameEndIndex*/ );
FEventGraphRestoredFromHistoryEvent& OnEventGraphRestoredFromHistory()
{
return EventGraphRestoredFromHistoryEvent;
}
protected:
void ProfilerManager_OnViewModeChanged( EProfilerViewMode::Type NewViewMode );
protected:
/** The event to execute when the event graph has been restored from the history. */
FEventGraphRestoredFromHistoryEvent EventGraphRestoredFromHistoryEvent;
/** Called by STreeView to generate a table row for the specified item. */
TSharedRef< ITableRow > EventGraph_OnGenerateRow( FEventGraphSamplePtr EventPtr, const TSharedRef< STableViewBase >& OwnerTable );
/**
* Called by STreeView to retrieves the children for the specified parent item.
* @param InParent - The parent node to retrieve the children from.
* @param out_Children - List of children for the parent node.
*/
void EventGraph_OnGetChildren( FEventGraphSamplePtr InParent, TArray< FEventGraphSamplePtr >& out_Children );
void EventGraph_OnSelectionChanged( FEventGraphSamplePtr SelectedItem, ESelectInfo::Type SelectInfo );
void EventGraph_OnExpansionChanged( FEventGraphSamplePtr SelectedItem, bool bIsExpanded );
void TreeView_SetItemsExpansion_Recurrent( TArray<FEventGraphSamplePtr>& InEventGraphSamples, const bool bShouldExpand );
void SetHierarchicalSelectedEvents( const TArray<FEventGraphSamplePtr>& HierarchicalSelectedEvents );
void GetHierarchicalSelectedEvents( TArray< FEventGraphSamplePtr >& out_HierarchicalSelectedEvents, const TArray<FEventGraphSamplePtr>* SelectedEvents = NULL ) const;
void SetHierarchicalExpandedEvents( const TSet<FEventGraphSamplePtr>& HierarchicalExpandedEvents );
void GetHierarchicalExpandedEvents( TSet<FEventGraphSamplePtr>& out_HierarchicalExpandedEvents ) const;
void ShowEventsInViewMode( const TArray<FEventGraphSamplePtr>& EventsToSynchronize, const EEventGraphViewModes::Type NewViewMode );
void ScrollToTheSlowestSelectedEvent( EEventPropertyIndex::Type ColumnIndex );
void CreateEvents();
void SortEvents();
void TreeView_Refresh();
void SetTreeItemsForViewMode( const EEventGraphViewModes::Type NewViewMode, EEventGraphTypes::Type NewEventGraphType );
void EventGraphViewMode_OnCheckStateChanged( ECheckBoxState NewRadioState, const EEventGraphViewModes::Type InViewMode );
ECheckBoxState EventGraphViewMode_IsChecked( const EEventGraphViewModes::Type InViewMode ) const;
void EventGraphType_OnCheckStateChanged( ECheckBoxState NewRadioState, const EEventGraphTypes::Type NewEventGraphType );
ECheckBoxState EventGraphType_IsChecked( const EEventGraphTypes::Type InEventGraphType ) const;
bool EventGraphType_IsEnabled( const EEventGraphTypes::Type InEventGraphType ) const;
void FilteringSearchBox_OnTextChanged( const FText& InFilterText );
bool FilteringSearchBox_IsEnabled() const;
TSharedPtr<SWidget> EventGraph_GetMenuContent() const;
void EventGraph_BuildSortByMenu( FMenuBuilder& MenuBuilder );
void EventGraph_BuildViewColumnMenu( FMenuBuilder& MenuBuilder );
FReply ExpandHotPath_OnClicked();
void HighlightHotPath_OnCheckStateChanged( ECheckBoxState InState );
void InitializeAndShowHeaderColumns();
void TreeViewHeaderRow_OnSortModeChanged( const EColumnSortPriority::Type SortPriority, const FName& ColumnID, const EColumnSortMode::Type SortMode );
void SetSortModeForColumn( const FName& ColumnID, EColumnSortMode::Type SortMode );
EColumnSortMode::Type TreeViewHeaderRow_GetSortModeForColumn( const FName ColumnID ) const;
TSharedRef<SWidget> TreeViewHeaderRow_GenerateColumnMenu( const FEventGraphColumn& Column );
bool EventGraphTableRow_IsColumnVisible( const FName ColumnID ) const;
void EventGraphTableRow_SetHoveredTableCell( const FName ColumnID, const FEventGraphSamplePtr EventPtr );
FName EventGraphRow_GetHighlightedEventName() const;
EHorizontalAlignment EventGraphRow_GetColumnOutlineHAlignment( const FName ColumnID ) const;
void TreeViewHeaderRow_ShowColumn( const FName ColumnID );
void TreeViewHeaderRow_CreateColumnArgs( const uint32 ColumnIndex );
/** Binds our UI commands to delegates. */
void BindCommands();
/*-----------------------------------------------------------------------------
SelectAllFrames
-----------------------------------------------------------------------------*/
public:
/** Maps UI command info SelectAllFrames with the specified UI command list. */
void Map_SelectAllFrames_Global();
/** Creates the UI action for SelectAllFrames, wiring its execute and can-execute delegates. */
const FUIAction SelectAllFrames_Custom() const
{
FUIAction UIAction;
UIAction.ExecuteAction = FExecuteAction::CreateSP( this, &SEventGraph::SelectAllFrames_Execute );
UIAction.CanExecuteAction = FCanExecuteAction::CreateSP( this, &SEventGraph::SelectAllFrames_CanExecute );
return UIAction;
}
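// Illustrative usage (a sketch, not part of the original header): the FUIAction returned
// by SelectAllFrames_Custom() is typically bound to a command list via
// FUICommandList::MapAction, e.g.:
//
//   CommandList->MapAction( Commands.SelectAllFrames, SelectAllFrames_Custom() );
//
// where 'CommandList' and 'Commands.SelectAllFrames' (a TSharedPtr<FUICommandInfo>) are
// assumed to be defined elsewhere, e.g. in Map_SelectAllFrames_Global().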
protected:
/** Handles FExecuteAction for SelectAllFrames. */
void SelectAllFrames_Execute();
/** Handles FCanExecuteAction for SelectAllFrames. */
bool SelectAllFrames_CanExecute() const;
// Dummy implementation, used to verify that all actions have been properly implemented.
void ContextMenu_ExecuteDummy( const FName ActionName );
bool ContextMenu_CanExecuteDummy( const FName ActionName ) const;
bool ContextMenu_IsCheckedDummy( const FName ActionName ) const;
// Action_ExpandHotPath
void ContextMenu_ExpandHotPath_Execute();
bool ContextMenu_ExpandHotPath_CanExecute() const;
// Action_CopySelectedToClipboard
void ContextMenu_CopySelectedToClipboard_Execute();
bool ContextMenu_CopySelectedToClipboard_CanExecute() const;
// Action_SelectStack
void ContextMenu_SelectStack_Execute();
bool ContextMenu_SelectStack_CanExecute() const;
// Action_SortByColumn
void ContextMenu_SortByColumn_Execute( const FName ColumnID );
bool ContextMenu_SortByColumn_CanExecute( const FName ColumnID ) const;
bool ContextMenu_SortByColumn_IsChecked( const FName ColumnID );
// Action_ToggleColumn
void ContextMenu_ToggleColumn_Execute( const FName ColumnID );
bool ContextMenu_ToggleColumn_CanExecute( const FName ColumnID ) const;
bool ContextMenu_ToggleColumn_IsChecked( const FName ColumnID );
// Action_SortMode
void ContextMenu_SortMode_Execute( const EColumnSortMode::Type InSortMode );
bool ContextMenu_SortMode_CanExecute( const EColumnSortMode::Type InSortMode ) const;
bool ContextMenu_SortMode_IsChecked( const EColumnSortMode::Type InSortMode );
// Action_ResetColumns
void ContextMenu_ResetColumns_Execute();
bool ContextMenu_ResetColumns_CanExecute() const;
// Header Action_SortMode
void HeaderMenu_SortMode_Execute( const FName ColumnID, const EColumnSortMode::Type InSortMode );
bool HeaderMenu_SortMode_CanExecute( const FName ColumnID, const EColumnSortMode::Type InSortMode ) const;
bool HeaderMenu_SortMode_IsChecked( const FName ColumnID, const EColumnSortMode::Type InSortMode );
// Header Action_HideColumn
void HeaderMenu_HideColumn_Execute( const FName ColumnID );
bool HeaderMenu_HideColumn_CanExecute( const FName ColumnID ) const;
/*-----------------------------------------------------------------------------
SetRoot
-----------------------------------------------------------------------------*/
public:
/** Creates the UI action for SetRoot. */
const FUIAction SetRoot_Custom() const
{
FUIAction UIAction;
UIAction.ExecuteAction = FExecuteAction::CreateSP( this, &SEventGraph::SetRoot_Execute );
UIAction.CanExecuteAction = FCanExecuteAction::CreateSP( this, &SEventGraph::SetRoot_CanExecute );
return UIAction;
}
protected:
/** Handles FExecuteAction for SetRoot. */
void SetRoot_Execute();
/** Handles FCanExecuteAction for SetRoot. */
bool SetRoot_CanExecute() const;
/*-----------------------------------------------------------------------------
ClearHistory
-----------------------------------------------------------------------------*/
public:
/** Creates the UI action for ClearHistory. */
const FUIAction ClearHistory_Custom() const
{
FUIAction UIAction;
UIAction.ExecuteAction = FExecuteAction::CreateSP( this, &SEventGraph::ClearHistory_Execute );
UIAction.CanExecuteAction = FCanExecuteAction::CreateSP( this, &SEventGraph::ClearHistory_CanExecute );
return UIAction;
}
protected:
/** Handles FExecuteAction for ClearHistory. */
void ClearHistory_Execute();
/** Handles FCanExecuteAction for ClearHistory. */
bool ClearHistory_CanExecute() const;
/*-----------------------------------------------------------------------------
ShowSelectedEventsInViewMode
-----------------------------------------------------------------------------*/
public:
/** Creates the UI action for ShowSelectedEventsInViewMode. */
const FUIAction ShowSelectedEventsInViewMode_Custom( EEventGraphViewModes::Type NewViewMode ) const
{
FUIAction UIAction;
UIAction.ExecuteAction = FExecuteAction::CreateSP( this, &SEventGraph::ShowSelectedEventsInViewMode_Execute, NewViewMode );
UIAction.CanExecuteAction = FCanExecuteAction::CreateSP( this, &SEventGraph::ShowSelectedEventsInViewMode_CanExecute, NewViewMode );
UIAction.GetActionCheckState = FGetActionCheckState::CreateSP( this, &SEventGraph::ShowSelectedEventsInViewMode_GetCheckState, NewViewMode );
return UIAction;
}
protected:
/** Handles FExecuteAction for ShowSelectedEventsInViewMode. */
void ShowSelectedEventsInViewMode_Execute(EEventGraphViewModes::Type NewViewMode);
/** Handles FCanExecuteAction for ShowSelectedEventsInViewMode. */
bool ShowSelectedEventsInViewMode_CanExecute(EEventGraphViewModes::Type NewViewMode) const;
/** Handles FGetActionCheckState for ShowSelectedEventsInViewMode. */
ECheckBoxState ShowSelectedEventsInViewMode_GetCheckState(EEventGraphViewModes::Type NewViewMode) const;
/*-----------------------------------------------------------------------------
FilterOutByProperty
-----------------------------------------------------------------------------*/
public:
/** Creates the UI action for FilterOutByProperty. */
const FUIAction FilterOutByProperty_Custom( const FEventGraphSamplePtr EventPtr, const FName PropertyName, const bool bReset ) const
{
FUIAction UIAction;
UIAction.ExecuteAction = FExecuteAction::CreateSP( this, &SEventGraph::FilterOutByProperty_Execute, EventPtr, PropertyName, bReset );
UIAction.CanExecuteAction = FCanExecuteAction::CreateSP( this, &SEventGraph::FilterOutByProperty_CanExecute, EventPtr, PropertyName, bReset );
return UIAction;
}
protected:
/** Handles FExecuteAction for FilterOutByProperty. */
void FilterOutByProperty_Execute( const FEventGraphSamplePtr EventPtr, const FName PropertyName, const bool bReset );
/** Handles FCanExecuteAction for FilterOutByProperty. */
bool FilterOutByProperty_CanExecute( const FEventGraphSamplePtr EventPtr, const FName PropertyName, const bool bReset ) const;
/*-----------------------------------------------------------------------------
CullByProperty
-----------------------------------------------------------------------------*/
public:
/** Creates the UI action for CullByProperty. */
const FUIAction CullByProperty_Custom( const FEventGraphSamplePtr EventPtr, const FName PropertyName, const bool bReset ) const
{
FUIAction UIAction;
UIAction.ExecuteAction = FExecuteAction::CreateSP( this, &SEventGraph::CullByProperty_Execute, EventPtr, PropertyName, bReset );
UIAction.CanExecuteAction = FCanExecuteAction::CreateSP( this, &SEventGraph::CullByProperty_CanExecute, EventPtr, PropertyName, bReset );
return UIAction;
}
protected:
/** Handles FExecuteAction for CullByProperty. */
void CullByProperty_Execute( const FEventGraphSamplePtr EventPtr, const FName PropertyName, const bool bReset );
/** Handles FCanExecuteAction for CullByProperty. */
bool CullByProperty_CanExecute( const FEventGraphSamplePtr EventPtr, const FName PropertyName, const bool bReset ) const;
/*-----------------------------------------------------------------------------
HistoryList_GoTo
-----------------------------------------------------------------------------*/
public:
/** Creates the UI action for HistoryList_GoTo. */
const FUIAction HistoryList_GoTo_Custom( int32 StateIndex ) const
{
FUIAction UIAction;
UIAction.ExecuteAction = FExecuteAction::CreateSP( this, &SEventGraph::HistoryList_GoTo_Execute, StateIndex );
UIAction.CanExecuteAction = FCanExecuteAction();
UIAction.GetActionCheckState = FGetActionCheckState::CreateSP( this, &SEventGraph::HistoryList_GoTo_GetCheckState, StateIndex );
return UIAction;
}
protected:
void HistoryList_GoTo_ExecuteRadioState( ECheckBoxState NewRadioState, int32 StateIndex )
{
if( NewRadioState == ECheckBoxState::Checked )
{
HistoryList_GoTo_Execute( StateIndex );
}
}
/** Handles FExecuteAction for HistoryList_GoTo. */
void HistoryList_GoTo_Execute( int32 StateIndex );
/** Handles FGetActionCheckState for HistoryList_GoTo. */
ECheckBoxState HistoryList_GoTo_GetCheckState( int32 StateIndex ) const
{
return StateIndex == CurrentStateIndex ? ECheckBoxState::Checked : ECheckBoxState::Unchecked;
}
/*-----------------------------------------------------------------------------
SetExpansionForEvents
-----------------------------------------------------------------------------*/
public:
/** Creates the UI action for SetExpansionForEvents. */
const FUIAction SetExpansionForEvents_Custom( const ESelectedEventTypes::Type SelectedEventType, bool bShouldExpand ) const
{
FUIAction UIAction;
UIAction.ExecuteAction = FExecuteAction::CreateSP( this, &SEventGraph::SetExpansionForEvents_Execute, SelectedEventType, bShouldExpand );
UIAction.CanExecuteAction = FCanExecuteAction::CreateSP( this, &SEventGraph::SetExpansionForEvents_CanExecute, SelectedEventType, bShouldExpand );
return UIAction;
}
protected:
void GetEventsForChangingExpansion( TArray<FEventGraphSamplePtr>& out_Events, const ESelectedEventTypes::Type SelectedEventType );
/** Handles FExecuteAction for SetExpansionForEvents. */
void SetExpansionForEvents_Execute( const ESelectedEventTypes::Type SelectedEventType, bool bShouldExpand );
/** Handles FCanExecuteAction for SetExpansionForEvents. */
bool SetExpansionForEvents_CanExecute( const ESelectedEventTypes::Type SelectedEventType, bool bShouldExpand ) const;
protected:
/** All events coalesced by the event name, stored as FName -> FEventGraphSamplePtr. */
TMultiMap< FName, FEventGraphSamplePtr > HierarchicalToFlatCoalesced;
/** An array of samples to be displayed in this widget. */
TArray<FEventGraphSamplePtr> Events_Flat;
TArray<FEventGraphSamplePtr> Events_FlatCoalesced;
/** How the event graph is sorted. */
EColumnSortMode::Type ColumnSortMode;
/** Name of the column currently being sorted, NAME_None if sorting is disabled. */
FName ColumnBeingSorted;
typedef TSharedPtr< STreeView<FEventGraphSamplePtr> > FTreeViewOfEventGraphSamples;
/** Holds the tree view widget which display event graph samples. */
FTreeViewOfEventGraphSamples TreeView_Base;
/** External scrollbar used to synchronize tree view position. */
TSharedPtr<SScrollBar> ExternalScrollbar;
TSharedPtr<SBox> FunctionDetailsBox;
/** Holds the tree view header row widget which display all columns in the tree view. */
TSharedPtr< SHeaderRow > TreeViewHeaderRow;
/** The search box widget used to filter items displayed in this widget. */
TSharedPtr< SSearchBox > FilteringSearchBox;
/** Column metadata used to initialize column arguments, stored as PropertyName -> FEventGraphColumn. */
TMap< FName, FEventGraphColumn > TreeViewHeaderColumns;
/** Column arguments used to initialize a new header column in the tree view, stored as column name to column arguments mapping. */
TMap< FName, SHeaderRow::FColumn::FArguments > TreeViewHeaderColumnArgs;
/** Name of the column currently being hovered by the mouse. */
FName HoveredColumnID;
/** A shared pointer to the event currently being hovered by the mouse. */
FEventGraphSamplePtr HoveredSamplePtr;
/*-----------------------------------------------------------------------------
History management
-----------------------------------------------------------------------------*/
protected:
struct EEventHistoryTypes
{
enum Type
{
NewEventGraph,
Rooted,
Culled,
Filtered,
};
};
class FEventGraphState
{
struct FCulledTag{};
struct FFilteredTag{};
public:
/** New event graph state constructor. */
FEventGraphState( FEventGraphDataRef InAverageEventGraph, FEventGraphDataRef InMaximumEventGraph )
: AverageEventGraph( InAverageEventGraph )
, MaximumEventGraph( InMaximumEventGraph )
, FakeRoot( FEventGraphSample::CreateNamedEvent( FEventGraphConsts::FakeRoot ) )
, CullPropertyName( NAME_None )
, CullEventPtr( nullptr )
, FilterPropertyName( NAME_None )
, FilterEventPtr( nullptr )
, CreationTime( FPlatformTime::Seconds() )
, HistoryType( EEventHistoryTypes::NewEventGraph )
, ViewMode( EEventGraphViewModes::Hierarchical )
, EventGraphType( InAverageEventGraph->GetNumFrames() == 1 ? EEventGraphTypes::OneFrame : EEventGraphTypes::Average )
{
CreateOneToOneMapping();
}
FEventGraphState* CreateCopyWithNewRoot( const TArray<FEventGraphSamplePtr>& UniqueLeafs )
{
return new FEventGraphState( *this, UniqueLeafs );
}
FEventGraphState* CreateCopyWithCulling( const FName InCullPropertyName, const FEventGraphSamplePtr InCullEventPtr )
{
return new FEventGraphState( *this, InCullPropertyName, InCullEventPtr, FCulledTag() );
}
FEventGraphState* CreateCopyWithFiltering( const FName InFilterPropertyName, const FEventGraphSamplePtr InFilterEventPtr )
{
return new FEventGraphState( *this, InFilterPropertyName, InFilterEventPtr, FFilteredTag() );
}
protected:
/** Copy constructor for setting new root. */
FEventGraphState( const FEventGraphState& EventGraphState, const TArray<FEventGraphSamplePtr>& UniqueLeafs )
: AverageEventGraph( EventGraphState.AverageEventGraph )
, MaximumEventGraph( EventGraphState.MaximumEventGraph )
, AverageToMaximumMapping( EventGraphState.AverageToMaximumMapping )
, MaximumToAverageMapping( EventGraphState.MaximumToAverageMapping )
, ExpandedEvents( EventGraphState.ExpandedEvents )
, SelectedEvents( EventGraphState.SelectedEvents )
, FakeRoot( FEventGraphSample::CreateNamedEvent( FEventGraphConsts::FakeRoot ) )
, CullPropertyName( EventGraphState.CullPropertyName )
, CullEventPtr( EventGraphState.CullEventPtr )
, ExpandedCulledEvents( EventGraphState.ExpandedCulledEvents )
, FilterPropertyName( EventGraphState.FilterPropertyName )
, FilterEventPtr( EventGraphState.FilterEventPtr )
, CreationTime( FPlatformTime::Seconds() )
, HistoryType( EEventHistoryTypes::Rooted )
, ViewMode( EventGraphState.ViewMode )
, EventGraphType( EventGraphState.EventGraphType )
{
// Set new root.
SetNewRoot( UniqueLeafs );
}
/** Copy constructor for culling. */
FEventGraphState( const FEventGraphState& EventGraphState, const FName InCullPropertyName, const FEventGraphSamplePtr InCullEventPtr, FCulledTag )
: AverageEventGraph( EventGraphState.AverageEventGraph )
, MaximumEventGraph( EventGraphState.MaximumEventGraph )
, AverageToMaximumMapping( EventGraphState.AverageToMaximumMapping )
, MaximumToAverageMapping( EventGraphState.MaximumToAverageMapping )
, ExpandedEvents( EventGraphState.ExpandedEvents )
, SelectedEvents( EventGraphState.SelectedEvents )
, FakeRoot( FEventGraphSample::CreateNamedEvent( FEventGraphConsts::FakeRoot ) )
, CullPropertyName( InCullPropertyName )
, CullEventPtr( InCullEventPtr )
, ExpandedCulledEvents()
, FilterPropertyName( EventGraphState.FilterPropertyName )
, FilterEventPtr( EventGraphState.FilterEventPtr )
, CreationTime( FPlatformTime::Seconds() )
, HistoryType( EEventHistoryTypes::Culled )
, ViewMode( EventGraphState.ViewMode )
, EventGraphType( EventGraphState.EventGraphType )
{
// Copy fake root.
SetNewRoot( EventGraphState.FakeRoot->GetChildren() );
}
/** Copy constructor for filtering. */
FEventGraphState( const FEventGraphState& EventGraphState, const FName InFilterPropertyName, const FEventGraphSamplePtr InFilterEventPtr, FFilteredTag )
: AverageEventGraph( EventGraphState.AverageEventGraph )
, MaximumEventGraph( EventGraphState.MaximumEventGraph )
, AverageToMaximumMapping( EventGraphState.AverageToMaximumMapping )
, MaximumToAverageMapping( EventGraphState.MaximumToAverageMapping )
, ExpandedEvents( EventGraphState.ExpandedEvents )
, SelectedEvents( EventGraphState.SelectedEvents )
, FakeRoot( FEventGraphSample::CreateNamedEvent( FEventGraphConsts::FakeRoot ) )
, CullPropertyName( EventGraphState.CullPropertyName )
, CullEventPtr( EventGraphState.CullEventPtr )
, ExpandedCulledEvents( EventGraphState.ExpandedCulledEvents )
, FilterPropertyName( InFilterPropertyName )
, FilterEventPtr( InFilterEventPtr )
, CreationTime( FPlatformTime::Seconds() )
, HistoryType( EEventHistoryTypes::Filtered )
, ViewMode( EventGraphState.ViewMode )
, EventGraphType( EventGraphState.EventGraphType )
{
// Copy fake root.
SetNewRoot( EventGraphState.FakeRoot->GetChildren() );
}
public:
const FEventGraphDataRef AverageEventGraph;
const FEventGraphDataRef MaximumEventGraph;
TMap<FEventGraphSamplePtr,FEventGraphSamplePtr> AverageToMaximumMapping;
TMap<FEventGraphSamplePtr,FEventGraphSamplePtr> MaximumToAverageMapping;
/** Only for hierarchical events, states for coalesced events are generated on demand. */
TSet<FEventGraphSamplePtr> ExpandedEvents;
TArray<FEventGraphSamplePtr> SelectedEvents;
/** Fake root event used to limit the event graph to the specified events and its children. */
FEventGraphSamplePtr FakeRoot;
/** Name of the property used to cull the event graph. */
FName CullPropertyName;
/** Value of the property used to cull the event graph. */
FEventGraphSamplePtr CullEventPtr;
/** Events that were culled, but later the user decided to expand them. */
TArray<FEventGraphSamplePtr> ExpandedCulledEvents;
/** Name of the property used to filter out the event graph. */
FName FilterPropertyName;
/** Value of the property used to filter out the event graph. */
FEventGraphSamplePtr FilterEventPtr;
const double CreationTime;
EEventHistoryTypes::Type HistoryType;
/** Event graph view mode. */
EEventGraphViewModes::Type ViewMode;
/** Event graph type. */
EEventGraphTypes::Type EventGraphType;
void CreateOneToOneMapping();
const bool IsCulled() const
{
return CullPropertyName != NAME_None;
}
const bool IsFiltered() const
{
return FilterPropertyName != NAME_None;
}
const bool IsRooted() const
{
return FakeRoot->GetChildren().Num() > 0;
}
/**
* @return the number of frames used to create this event graph data state.
*/
const uint32 GetNumFrames() const
{
return AverageEventGraph->GetNumFrames();
}
FText GetFullDescription() const;
FText GetRootedDesc() const;
FText GetCullingDesc() const;
FText GetFilteringDesc() const;
FText GetHistoryDesc() const;
const FEventGraphDataRef& GetEventGraph() const
{
return EventGraphType == EEventGraphTypes::Average ? AverageEventGraph : MaximumEventGraph;
}
FEventGraphSamplePtr GetRoot() const
{
return IsRooted() ? FakeRoot : GetEventGraph()->GetRoot();
}
FEventGraphSamplePtr GetRealRoot() const
{
return GetEventGraph()->GetRoot();
}
void SetNewRoot( const TArray<FEventGraphSamplePtr>& NewRootEvents )
{
for( int32 Nx = 0; Nx < NewRootEvents.Num(); ++Nx )
{
FakeRoot->AddChildPtr( NewRootEvents[Nx] );
}
}
void ApplyCulling()
{
if( IsCulled() )
{
// Apply culling.
FEventArrayBooleanOp::ExecuteOperation
(
GetRoot(), EEventPropertyIndex::bIsCulled,
CullEventPtr, FEventGraphSample::GetEventPropertyByName(CullPropertyName).Index,
EEventCompareOps::Less
);
// Update not culled children.
GetRoot()->SetBooleanStateForAllChildren<EEventPropertyIndex::bNeedNotCulledChildrenUpdate>(true);
}
else
{
// Reset culling.
GetRoot()->SetBooleanStateForAllChildren<EEventPropertyIndex::bIsCulled>(false);
}
}
void ApplyFiltering()
{
if( IsFiltered() )
{
// Apply filtering.
FEventArrayBooleanOp::ExecuteOperation
(
GetRoot(), EEventPropertyIndex::bIsFiltered,
FilterEventPtr, FEventGraphSample::GetEventPropertyByName(FilterPropertyName).Index,
EEventCompareOps::Less
);
}
else
{
// Reset filtering.
GetRoot()->SetBooleanStateForAllChildren<EEventPropertyIndex::bIsFiltered>(false);
}
}
/** Hacky method to update this saved state so it can be used with the new event graph type, mostly temporary. */
void UpdateToNewEventGraphType( const EEventGraphTypes::Type NewEventGraphType )
{
if( EventGraphType != NewEventGraphType )
{
const TMap<FEventGraphSamplePtr,FEventGraphSamplePtr>& OneToOneMapping = NewEventGraphType == EEventGraphTypes::Maximum ? AverageToMaximumMapping : MaximumToAverageMapping;
// Copy selected events.
TArray<FEventGraphSamplePtr> NewSelectedEvents;
for( int32 Nx = 0; Nx < SelectedEvents.Num(); ++Nx )
{
const FEventGraphSamplePtr& EventRef = OneToOneMapping.FindRef( SelectedEvents[Nx] );
NewSelectedEvents.Add( EventRef );
}
// Copy expanded events.
TSet<FEventGraphSamplePtr> NewExpandedEvents;
for( auto It = ExpandedEvents.CreateConstIterator(); It; ++It )
{
const FEventGraphSamplePtr& EventRef = OneToOneMapping.FindRef( *It );
NewExpandedEvents.Add( EventRef );
}
// Copy fake root's children.
FEventGraphSamplePtr NewFakeRoot = FEventGraphSample::CreateNamedEvent( FEventGraphConsts::FakeRoot );
TArray<FEventGraphSamplePtr>& Children = FakeRoot->GetChildren();
for( int32 Nx = 0; Nx < Children.Num(); ++Nx )
{
const FEventGraphSamplePtr& EventRef = OneToOneMapping.FindRef( Children[Nx] );
NewFakeRoot->AddChildPtr( EventRef );
}
// Switch to new data.
Exchange( SelectedEvents, NewSelectedEvents );
Exchange( ExpandedEvents, NewExpandedEvents );
FakeRoot = NewFakeRoot;
EventGraphType = NewEventGraphType;
}
}
};
/** Type definition for shared pointers to instances of FEventGraphState. */
typedef TSharedPtr<class FEventGraphState> FEventGraphStatePtr;
/** Type definition for shared references to instances of FEventGraphState. */
typedef TSharedRef<class FEventGraphState> FEventGraphStateRef;
FEventGraphStateRef GetCurrentState() const
{
return EventGraphStatesHistory[CurrentStateIndex];
}
/**
* @return the current event graph view mode.
*/
EEventGraphViewModes::Type GetCurrentStateViewMode() const
{
if( IsEventGraphStatesHistoryValid() )
{
return GetCurrentState()->ViewMode;
}
return EEventGraphViewModes::InvalidOrMax;
}
/**
* @return the current event graph type.
*/
EEventGraphTypes::Type GetCurrentStateEventGraphType() const
{
return GetCurrentState()->EventGraphType;
}
public:
/** Updates the top level of the event graph, but only if there is no other selection. */
void SetNewEventGraphState( const FEventGraphDataRef AverageEventGraph, const FEventGraphDataRef MaximumEventGraph, bool bInitial );
protected:
void SaveCurrentEventGraphState();
void RestoreEventGraphStateFrom( const FEventGraphStateRef EventGraphState, const bool bRestoredFromHistoryEvent = true );
void SwitchToEventGraphState( int32 StateIndex );
void SetEventGraphFromStateInternal( const FEventGraphStateRef& EventGraphState );
bool EventGraph_IsEnabled() const;
FReply HistoryBack_OnClicked();
bool HistoryBack_IsEnabled() const;
FText HistoryBack_GetToolTipText() const;
bool HistoryForward_IsEnabled() const;
FReply HistoryForward_OnClicked();
FText HistoryForward_GetToolTipText() const;
bool HistoryList_IsEnabled() const;
TSharedRef<SWidget> HistoryList_GetMenuContent() const;
bool IsEventGraphStatesHistoryValid() const
{
return EventGraphStatesHistory.Num() > 0;
}
/** Array of all operations that have been done in this event graph. */
TArray<FEventGraphStateRef> EventGraphStatesHistory;
/** The current operation index. */
int32 CurrentStateIndex;
/*-----------------------------------------------------------------------------
Function details
-----------------------------------------------------------------------------*/
protected:
struct FEventPtrAndMisc
{
FEventPtrAndMisc( FEventGraphSamplePtr InEventPtr, float InIncTimeToTotalPct, float InHeightPct )
: EventPtr( InEventPtr )
, IncTimeToTotalPct( InIncTimeToTotalPct )
, HeightPct( InHeightPct )
{}
FEventGraphSamplePtr EventPtr;
float IncTimeToTotalPct;
float HeightPct;
};
TSharedRef<SVerticalBox> GetVerticalBoxForFunctionDetails( TSharedPtr<SVerticalBox>& out_VerticalBoxTopFuncions, const FText& Caption );
TSharedRef<SVerticalBox> GetVerticalBoxForCurrentFunction();
void UpdateFunctionDetailsForEvent( FEventGraphSamplePtr SelectedEvent );
void DisableFunctionDetails();
void UpdateFunctionDetails();
void RecreateWidgetsForTopEvents( const TSharedPtr<SVerticalBox>& DestVerticalBox, const TArray<FEventPtrAndMisc>& TopEvents );
void GenerateCallerCalleeGraph( FEventGraphSamplePtr SelectedEvent );
void GenerateTopEvents( const TSet< FEventGraphSamplePtr >& EventPtrSet, TArray<FEventPtrAndMisc>& out_Results );
void CalculateEventWeights( TArray<FEventPtrAndMisc>& Events );
FString GetEventDescription( FEventGraphSamplePtr EventPtr, float Pct, const bool bSimple );
TSharedRef<SHorizontalBox> GetContentForEvent( FEventGraphSamplePtr EventPtr, float Pct, const bool bSimple );
FReply CallingCalledFunctionButton_OnClicked( FEventGraphSamplePtr EventPtr );
TSharedPtr<SVerticalBox> VerticalBox_TopCalled;
TSharedPtr<SVerticalBox> VerticalBox_TopCalling;
TSharedPtr<SVerticalBox> VerticalBox_CurrentFunction;
SVerticalBox::FSlot* CurrentFunctionDescSlot;
TArray<FEventPtrAndMisc> TopCallingFunctionEvents;
TArray<FEventPtrAndMisc> TopCalledFunctionEvents;
/** Name of the event that should be drawn as highlighted. */
FName HighlightedEventName;
};
| {
"redpajama_set_name": "RedPajamaGithub"
} | 9,588 |
{"url":"https:\/\/krpc.github.io\/krpc\/python\/api\/infernal-robotics.html","text":"# InfernalRobotics API\u00b6\n\nProvides RPCs to interact with the InfernalRobotics mod. Both the original mod and Infernal Robotics Next are supported. Provides the following classes:\n\n## Example\u00b6\n\nThe following example gets the control group named \u201cMyGroup\u201d, prints out the names and positions of all of the servos in the group, then moves all of the servos to the right for 1 second.\n\nimport time\nimport krpc\n\nconn = krpc.connect(name='InfernalRobotics Example')\nvessel = conn.space_center.active_vessel\n\ngroup = conn.infernal_robotics.servo_group_with_name(vessel, 'MyGroup')\nif group is None:","date":"2019-10-22 04:36:48","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.20985396206378937, \"perplexity\": 10657.49858382922}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-43\/segments\/1570987798619.84\/warc\/CC-MAIN-20191022030805-20191022054305-00349.warc.gz\"}"} | null | null |
The Distributed File System (abbreviated DFS or Dfs) is a Microsoft technology that makes it possible to group several folders physically located on different file servers into a single logical directory structure. A user accessing the network via the SMB protocol sees folders and files as if they were on a single server, even though they are physically stored on different servers, possibly located far apart from one another. It is also possible to store multiple copies of a single file, and the service will choose the optimal server to which to forward each request.
DFS technology is available only on Microsoft's server operating systems, such as Windows Server 2003. Client operating systems such as Windows 98, XP, Vista, or Linux can only access a DFS structure; they cannot join it with their own local folders.
It is incompatible with IBM's almost identically named DCE/DFS file system, whose development was discontinued in 2005.
Overview
The DFS server component was introduced under the name "DFS 4.1" as an add-on for the Windows NT 4.0 operating system, and it has been included as a standard component in all subsequent versions of the server-class operating systems, starting with Windows 2000 Server. DFS client support was introduced by Microsoft starting with Windows NT 4.0 Workstation.
A DFS structure can be created only on a Microsoft server-class operating system; Microsoft's Enterprise and Datacenter class operating systems can host more than one DFS structure on the same server.
There are two modes for activating DFS on a server:
Standalone DFS: the DFS structure exists and is visible only from the local computer, and therefore does not use Active Directory. This mode offers no fault tolerance and cannot be linked to other DFS structures. Standalone mode is the only one available on the Windows NT 4.0 Server operating system.
Domain-based DFS: it leverages Active Directory, can contain information coming from the servers that belong to the domain, and provides fault-tolerance support.
The DFS structure (or structures, in the case of Enterprise or Datacenter operating systems) must be hosted on a domain controller to guarantee consistency in data retrieval within the network. All information about the DFS structure is replicated across the network via Microsoft's "File Replication Service" (FRS).
Starting with the R2 release of the Microsoft Windows Server 2003 operating system, several improvements were made to DFS, for example:
the "File Replication Service" was replaced by the new "DFS Replication" component, although the old FRS component is still used to replicate special folders such as SYSVOL;
the user interface for administrators managing replication was improved: both a graphical and a command-line interface are available;
the replication technology was optimized to reduce bandwidth consumption: only the modified blocks of files are replicated.
Advantages
The advantages of this technology are numerous:
users accessing a file remotely over the network do not have to worry about its "physical" location, which can even change without causing problems;
it follows that the administrator can move folders and files from one server to another to optimize load balancing, without changing the folder structure as users see it;
again to optimize the workload, new file servers can also be added easily and transparently;
typically, especially in loosely structured environments, the system administrator imposes a centralized AD policy (picked up automatically by clients during domain login) whereby a network drive is mapped in Explorer (with a fixed letter, the same for everyone). When needed, this drive is managed through DFS, since it "simulates" a single folder when there are actually N physical stores on different storage systems.
Notes
Bibliography
External links
Application layer protocols | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 3,889 |
package com.gs.fw.common.mithra.test.domain;
import java.sql.Timestamp;
public class Employee extends EmployeeAbstract
{
public Employee()
{
super();
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 9,471 |
Q: TypeError: environment.setup is not a function in React Testing I was trying to implement the example shown on the Jest website: Getting started with Jest.
While running npm test I was getting the following error:
FAIL src/sum.test.js
● Test suite failed to run
TypeError: environment.setup is not a function
at node_modules/jest-runner/build/run_test.js:112:23
sum.js:
function sum(a, b){
return a+b;
}
module.exports = sum;
sum.test.js:
const sum = require('./sum');
test('adding sum function', () => {
expect(sum(234,4)).toBe(238);
})
sum.js and sum.test.js are an exact copy of the example shown in Getting started with Jest.
package.json:
{
"name": "jest-demo-test",
"version": "0.1.0",
"private": true,
"dependencies": {
"react": "^16.2.0",
"react-dom": "^16.2.0",
"react-scripts": "1.0.17"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "jest",
"eject": "react-scripts eject"
},
"devDependencies": {
"jest": "^22.0.4"
}
}
So, how can I get rid of the TypeError: environment.setup is not a function error?
A: Just remove jest from your dev dependencies. To do so, run the following command:
npm remove --save-dev jest
A: Add
"jest": {
"testEnvironment": "node"
}
to your package.json file; your package.json would then look like this:
{
"name": "jest-demo-test",
"version": "0.1.0",
"private": true,
"dependencies": {
"react": "^16.2.0",
"react-dom": "^16.2.0",
"react-scripts": "1.0.17"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "jest",
"eject": "react-scripts eject"
},
"devDependencies": {
"jest": "^22.0.4"
},
"jest": {
"testEnvironment": "node"
}
}
UPDATE
For people using create-react-app and having this problem, there are two solutions.
1 - Update the test command (or add a new one) in package.json to jest instead of react-scripts test, because react-scripts test will not allow jest options in package.json, as @Xinyang_Li stated in the comments.
2 - If you don't want to change the test command in package.json and want to use the default create-react-app test, remove the jest options from package.json (if they exist):
"jest": {
"testEnvironment": "node"
}
then remove node_modules and run npm install to install the dependencies again; after that your tests should run normally.
A: My issue was solved with this:
If you have both react-scripts and jest in your package.json, delete jest from it.
Then delete package-lock.json, yarn.lock and node_modules.
Then run npm install (or yarn if you use it). ~Dan Abramov~
(See issue #5119.)
A: Jest is included with react-scripts. The error is due to a conflict that arose when you installed Jest in a project started with react-scripts. The "Getting started with Jest" guide expects a 'clean' project.
Simply remove Jest from your (dev)dependencies and it should work.
If you want to use Jest for testing React components (which you probably do), you need to modify your test script in package.json to react-scripts test --env=jsdom, as it was.
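In other words, the test entry in the scripts section of package.json should be restored to the create-react-app default:
"scripts": {
  "test": "react-scripts test --env=jsdom"
}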
A:
Based on this github issue: @jest-environment node not working in v22,
I updated sum.test.js as follows:
/**
* @jest-environment node
*/
const sum = require('./sum');
test('adding sum function', () => {
expect(sum(234,4)).toBe(238);
})
Now I got rid of the TypeError: environment.setup is not a function.
Output:
PASS src/sum.test.js
✓ adding sum function (2ms)
Test Suites: 1 passed, 1 total
Tests: 1 passed, 1 total
Snapshots: 0 total
Time: 0.438s
Ran all test suites.
Updated
Reading several answers, today I recreated the scenario.
Step 1:
I created a new project using create-react-app.
The command: create-react-app demo-app
package.json file:
{
"name": "demo-app",
"version": "0.1.0",
"private": true,
"dependencies": {
"react": "^16.3.2",
"react-dom": "^16.3.2",
"react-scripts": "1.1.4"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test --env=jsdom",
"eject": "react-scripts eject"
}
}
This file shows that jest is not included in the dependencies.
Step 2:
I included sum.js and sum.test.js files in src folder as shown in the official getting started guide.
sum.js:
function sum(a, b) {
return a + b;
}
module.exports = sum;
sum.test.js:
const sum = require('./sum');
test('adds 1 + 2 to equal 3', () => {
expect(sum(1, 2)).toBe(3);
});
Step 3:
I ran yarn test command without including jest in package.json file.
So, this test command uses react-scripts test --env=jsdom as shown in package.json.
Output of the test:
PASS src/sum.test.js
PASS src/App.test.js
Test Suites: 2 passed, 2 total
Tests: 2 passed, 2 total
Snapshots: 0 total
Time: 0.594s, estimated 1s
Ran all test suites related to changed files.
Step 4:
I updated the package.json file to use jest as test script:
{
"name": "demo-app",
"version": "0.1.0",
"private": true,
"dependencies": {
"react": "^16.3.2",
"react-dom": "^16.3.2",
"react-scripts": "1.1.4"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "jest",
"eject": "react-scripts eject"
}
}
Again I ran yarn test and the output is:
$ jest
PASS src/sum.test.js
FAIL src/App.test.js
● Test suite failed to run
/<PATH>/demo-app/src/App.test.js: Unexpected token (7:18)
5 | it('renders without crashing', () => {
6 | const div = document.createElement('div');
> 7 | ReactDOM.render(<App />, div);
| ^
8 | ReactDOM.unmountComponentAtNode(div);
9 | });
10 |
Test Suites: 1 failed, 1 passed, 2 total
Tests: 1 passed, 1 total
Snapshots: 0 total
Time: 0.645s, estimated 1s
Ran all test suites.
Too Long; Did not Read (TL;DR)
*
*Do not add jest using yarn add --dev jest or npm install --save-dev jest if you use the create-react-app command. jest is included in the node_modules of the project if you use create-react-app.
From official Jest documentation:
Setup with Create React App
If you are just getting started with React, we recommend using Create
React App. It is ready to use and ships with Jest! You don't need to
do any extra steps for setup, and can head straight to the next
section.
*Do not change the test script in the package.json file (look at this issue). Keep the default "test": "react-scripts test --env=jsdom" there.
A: You should downgrade the installed jest to v20.0.4:
npm install -D jest@20.0.4
A: This is an older issue but I was facing the same problem. We have the react-scripts package, which uses Jest version 20.x.x, and that version has problems with coverageThreshold. I couldn't use a newer react-scripts version, and the only thing that helped was using yarn resolutions; read more here: https://yarnpkg.com/lang/en/docs/selective-version-resolutions/.
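For completeness, such a resolutions entry in package.json might look like the following sketch (the pinned version is illustrative, echoing the downgrade suggested in another answer here, and is not taken from the linked docs):
"resolutions": {
  "jest": "20.0.4"
}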
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 7,764 |
Q: Distinguishable versus indistinguishable counting (three variations on a similar problem). I'm trying to grasp the concepts. Can someone please help me understand the following three variations of a problem...
Here are the three problems along with some of my reasoning/questions:
1) How many ways to put 20 different (distinct) chocolates into a red bag and a green bag so that each bag contains 10 chocolates?
I'm thinking I can think of this as a set (call this set $A$) of 20 distinct integers and the different ways I can put these distinct elements into two distinct sets (call these sets $B$ and $C$).
My answer to this is: $\binom{20}{10} \cdot \binom{10}{10}$. That is, select 10 elements for my first set $B$ and then the remaining 10 elements for my second set $C$. This one seems pretty straightforward.
2) How many ways to put 20 different (distinct) chocolates into two identical blue bags so that each bag contains 10 chocolates?
Conceptually this is a bit trickier. I'm thinking this is analogous to a set (call this set $A$) of 20 distinct integers and the ways I can put these distinct elements into two identical sets (both called $B$).
My answer to this is: $\frac{\binom{20}{10}}{2!}$. That is, I'm thinking of this as the number of ways to create two sets of subsets, but since one set is identical to the other, I'm dividing out the over-counted arrangements. My question here, though, is: how can this possibly be less than the number of ways I can create 10-element subsets from a 20-element set? I'm dealing with two sets... although they are identical, it doesn't make intuitive sense that this somehow reduces the number of combinations that would be obtained from a single set.
3) How many ways to put 20 identical chocolates into two identical blue bags so that each bag contains 10 chocolates?
This one I'm not sure about.
A: Your answer to #1 is correct, and so is your answer to #2.
Here is one way to justify the answer to #2:
If we put the chocolates into two identical blue bags, and then color one blue bag red and the other one green, then we will get the result in #1.
Since there are $2!$ ways to color the bags, the answer to #1 is $2!$ times the answer to #2.
The answer to #3 is 1, since the chocolates and the bags are identical.
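To make the counts concrete (my own arithmetic, not part of the original answer): since $\binom{20}{10} = 184756$, the three answers evaluate to
$$\binom{20}{10}\binom{10}{10} = 184756, \qquad \frac{1}{2!}\binom{20}{10} = \frac{184756}{2} = 92378, \qquad 1.$$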
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 799 |
The Viscountcy of Laguna de Contreras is a Spanish noble title granted on 7 November 1642 by King Felipe IV, under the original denomination of «Viscountcy of La Laguna», in favor of Luis Jerónimo de Contreras y Velázquez de Cuéllar, as a preliminary step to the title of Count of Cobatillas.
Its name refers to the Spanish municipality of Laguna de Contreras, in the province of Segovia, autonomous community of Castilla y León.
Viscounts of Laguna de Contreras
History of the Viscounts of Laguna de Contreras
Luis Jerónimo de Contreras y Velázquez de Cuéllar (b. 1600), 1st Viscount of Laguna de Contreras, 1st Count of Cobatillas, knight of the Order of Santiago, captain of the Regiment of the Prince, regidor of Segovia, corregidor of Madrid, procurador in the Cortes for the city of Segovia, and member of the Council of Finance.
Married to Victoria de Villarroel y Peralta (b. 1620).
Note
Being a preliminary viscountcy, it cannot be considered a noble title in its own right, since it was never held as such. Its concession was a purely administrative formality, which was extinguished when the title of Count of Cobatillas was granted.
References
Elenco de Grandezas y Títulos Nobiliarios Españoles. Instituto "Salazar y Castro", C.S.I.C.
Laguna de Contreras
Casa de Contreras
Spain in 1642 | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 333 |
#include "typedefs.h"
#include "keyvalue.h"
using namespace MAPREDUCE_NS;
/* ----------------------------------------------------------------------
edge_to_vertices
emit 2 vertices for each edge
input: key = Vi Vj, value = NULL
output:
key = Vi, value = NULL
key = Vj, value = NULL
------------------------------------------------------------------------- */
void edge_to_vertices(uint64_t itask, char *key, int keybytes, char *value,
int valuebytes, KeyValue *kv, void *ptr)
{
EDGE *edge = (EDGE *) key;
kv->add((char *) &edge->vi,sizeof(VERTEX),NULL,0);
kv->add((char *) &edge->vj,sizeof(VERTEX),NULL,0);
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 4,193 |
// --------------------------------------------------------------------------------------------------------------------
// <copyright file="WindowLogic.cs" company="Catel development team">
// Copyright (c) 2008 - 2015 Catel development team. All rights reserved.
// </copyright>
// --------------------------------------------------------------------------------------------------------------------
#if NET || SL5
namespace Catel.MVVM.Providers
{
using System;
using System.Threading.Tasks;
using System.Windows;
using Views;
using Logging;
using MVVM;
using Reflection;
#if SILVERLIGHT
using System.Windows.Controls;
#endif
/// <summary>
/// MVVM Provider behavior implementation for a window.
/// </summary>
/// <remarks>
/// Some parts of this class may look convoluted, but this is required to dynamically subscribe to
/// an event whose handler we do not know beforehand. Normally, you would do this via an anonymous delegate,
/// but that doesn't work here, so the event delegate is created via ILGenerator at runtime.
/// <para />
/// http://stackoverflow.com/questions/8122085/calling-an-instance-method-when-event-occurs/8122242#8122242.
/// </remarks>
public class WindowLogic : LogicBase
{
#region Fields
/// <summary>
/// The log.
/// </summary>
private static readonly ILog Log = LogManager.GetCurrentClassLogger();
private bool? _closeInitiatedByViewModel;
private bool? _closeInitiatedByViewModelResult;
private readonly DynamicEventListener _dynamicEventListener;
#if SILVERLIGHT
private bool _isClosed;
#endif
#endregion
#region Constructors
/// <summary>
/// Initializes a new instance of the <see cref="WindowLogic"/> class.
/// </summary>
/// <param name="targetWindow">The window this provider should take care of.</param>
/// <param name="viewModelType">Type of the view model.</param>
/// <param name="viewModel">The view model to inject.</param>
/// <exception cref="ArgumentNullException">The <paramref name="targetWindow"/> is <c>null</c>.</exception>
public WindowLogic(IView targetWindow, Type viewModelType = null, IViewModel viewModel = null)
: base(targetWindow, viewModelType, viewModel)
{
var targetWindowType = targetWindow.GetType();
string eventName;
var closedEvent = targetWindowType.GetEventEx("Closed");
if (closedEvent != null)
{
eventName = "Closed";
_dynamicEventListener = new DynamicEventListener(targetWindow, "Closed", this, "OnTargetWindowClosed");
}
else
{
eventName = "Unloaded";
_dynamicEventListener = new DynamicEventListener(targetWindow, "Unloaded", this, "OnTargetWindowClosed");
}
Log.Debug("Using '{0}.{1}' event to determine window closing", targetWindowType.FullName, eventName);
}
#endregion
#region Properties
/// <summary>
/// Gets or sets a value indicating whether the logic should call <c>Close</c> immediately when
/// the <c>DialogResult</c> is set.
/// <para />
/// By default, the <c>Window</c> class correctly closes the window when the <c>DialogResult</c> is
/// set, but not all implementations work like this.
/// <para />
/// The default value is false.
/// </summary>
/// <value>
/// <c>true</c> if <c>Close</c> should be called directly after setting the <c>DialogResult</c>; otherwise, <c>false</c>.
/// </value>
public bool ForceCloseAfterSettingDialogResult { get; set; }
/// <summary>
/// Gets the target control as window object.
/// </summary>
/// <value>The target window.</value>
private FrameworkElement TargetWindow
{
get { return (FrameworkElement)TargetView; }
}
#endregion
#region Methods
/// <summary>
/// Sets the data context of the target control.
/// <para />
/// This method is abstract because the real logic implementation knows how to set the data context (for example,
/// by using an additional data context grid).
/// </summary>
/// <param name="newDataContext">The new data context.</param>
protected override void SetDataContext(object newDataContext)
{
TargetView.DataContext = newDataContext;
}
/// <summary>
/// Called when <see cref="LogicBase.TargetView"/> has just been unloaded.
/// </summary>
/// <param name="sender">The sender.</param>
/// <param name="e">The <see cref="System.EventArgs"/> instance containing the event data.</param>
public override void OnTargetViewUnloaded(object sender, EventArgs e)
{
base.OnTargetViewUnloaded(sender, e);
ViewModel = null;
}
/// <summary>
/// Called when the <see cref="LogicBase.ViewModel"/> is closed.
/// </summary>
/// <param name="sender">The sender.</param>
/// <param name="e">The <see cref="Catel.MVVM.ViewModelClosedEventArgs"/> instance containing the event data.</param>
public override async Task OnViewModelClosedAsync(object sender, ViewModelClosedEventArgs e)
{
if (_closeInitiatedByViewModel == null)
{
_closeInitiatedByViewModel = true;
_closeInitiatedByViewModelResult = e.Result;
}
await base.OnViewModelClosedAsync(sender, e);
#if SILVERLIGHT
if (TargetWindow is ChildWindow)
{
// This code is implemented due to a bug in the ChildWindow of silverlight, see:
// http://silverlight.codeplex.com/workitem/7935
// Only handle this once
if (_isClosed)
{
return;
}
}
#endif
if (_closeInitiatedByViewModelResult != null)
{
bool result;
try
{
result = PropertyHelper.TrySetPropertyValue(TargetWindow, "DialogResult", _closeInitiatedByViewModelResult);
}
catch (Exception ex)
{
Log.Warning("Failed to set the 'DialogResult' exception: {0}", ex);
result = false;
}
// Support all windows (even those that do not derive from ChildWindow)
if (!result)
{
Log.Warning("Failed to set the 'DialogResult' property of window type '{0}', closing window via method", TargetWindow.GetType().Name);
InvokeCloseDynamically();
}
else if (ForceCloseAfterSettingDialogResult)
{
InvokeCloseDynamically();
}
}
else
{
InvokeCloseDynamically();
}
}
/// <summary>
/// Called when the <see cref="TargetWindow"/> has been closed.
/// </summary>
/// <remarks>
/// Public to allow the generated ILGenerator to access this method.
/// </remarks>
// ReSharper disable UnusedMember.Local
public async void OnTargetWindowClosed()
// ReSharper restore UnusedMember.Local
{
#if SILVERLIGHT
// This code is implemented due to a bug in the ChildWindow of silverlight, see:
// http://silverlight.codeplex.com/workitem/7935
// Only handle this once
if (_isClosed)
{
return;
}
_isClosed = true;
#endif
if (_closeInitiatedByViewModel == null)
{
_closeInitiatedByViewModel = false;
bool? dialogResult = null;
if (!PropertyHelper.TryGetPropertyValue(TargetWindow, "DialogResult", out dialogResult))
{
Log.Warning("Failed to get the 'DialogResult' property of window type '{0}', using 'null' as dialog result", TargetWindow.GetType().Name);
}
await CloseViewModelAsync(dialogResult);
}
_dynamicEventListener.UnsubscribeFromEvent();
}
/// <summary>
/// Invokes the close method on the window dynamically.
/// </summary>
private void InvokeCloseDynamically()
{
var closeMethod = TargetWindow.GetType().GetMethodEx("Close");
if (closeMethod == null)
{
throw Log.ErrorAndCreateException<NotSupportedException>("Cannot close any window without a public 'Close()' method, implement the 'Close()' method on '{0}'", TargetWindow.GetType().Name);
}
closeMethod.Invoke(TargetWindow, null);
}
#endregion
}
}
#endif | {
"redpajama_set_name": "RedPajamaGithub"
} | 6,559 |
Pumpyansky is a Jewish toponymic surname.
Pumpyansky, Alexei Leonidovich: author of a number of books on the theory and practice of translating scientific and technical literature.
Pumpyansky, Alexander Borisovich (born 1940): Soviet and Russian journalist, editor, and screenwriter.
Pumpyansky, Aron-Elia (1835–1893): writer, publisher and editor of the "Evreiskie Zapiski" ("Jewish Notes"); public rabbi.
Pumpyansky, Boris Yakovlevich (1906–1944): Soviet cinematographer, laureate of the Stalin Prize.
Pumpyansky, Dmitry Alexandrovich (born 1964): Russian businessman, chairman of the board of directors of the Pipe Metallurgical Company (TMK), member of the Bureau of the Management Board of the RSPP.
Pumpyansky, Lev Vasilyevich (Pumpyan, 1891–1940): Russian literary scholar and critic, musicologist.
Pumpyansky, Lev Ivanovich (1889–1943): Soviet art historian, poet, and methodologist.
Pumpyansky, Mikhail Arkadyevich (1878–1951): opera singer (dramatic tenor).
Pumpyanskaya, Semiramida Nikolaevna (1916–2014): Soviet documentary film director, laureate of the Lenin Prize.
Notes | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 1,618 |
<Record>
<Term>Shared Paranoid Disorder</Term>
<SemanticType>Mental or Behavioral Dysfunction</SemanticType>
<ParentTerm>Schizophrenia</ParentTerm>
<ClassificationPath>Mental Disorders/Schizophrenia and Disorders with Psychotic Features/Schizophrenia/Shared Paranoid Disorder</ClassificationPath>
<BroaderTerm>Schizophrenia</BroaderTerm>
<BroaderTerm>Schizophrenia and Disorders with Psychotic Features</BroaderTerm>
<BroaderTerm>Shared Paranoid Disorder</BroaderTerm>
<BroaderTerm>Mental Disorders</BroaderTerm>
<Synonym>Shared Paranoid Disorder</Synonym>
<Synonym>Shared Psychotic Disorder</Synonym>
<Synonym>Shared Psychotic Disorders</Synonym>
<Synonym>Shared Paranoid Disorders</Synonym>
<Synonym>Folie a Deux</Synonym>
<Synonym>Folie a Trois</Synonym>
<Description>A condition in which closely related persons, usually in the same family, share the same delusions.</Description>
<Source>MeSH</Source>
</Record>
| {
"redpajama_set_name": "RedPajamaGithub"
} | 4,856 |
{"url":"http:\/\/zbmath.org\/?q=an:0566.34055","text":"# zbMATH \u2014 the first resource for mathematics\n\n##### Examples\n Geometry Search for the term Geometry in any field. Queries are case-independent. Funct* Wildcard queries are specified by * (e.g. functions, functorial, etc.). Otherwise the search is exact. \"Topological group\" Phrases (multi-words) should be set in \"straight quotation marks\". au: Bourbaki & ti: Algebra Search for author and title. The and-operator & is default and can be omitted. Chebyshev | Tschebyscheff The or-operator | allows to search for Chebyshev or Tschebyscheff. \"Quasi* map*\" py: 1989 The resulting documents have publication year 1989. so: Eur* J* Mat* Soc* cc: 14 Search for publications in a particular source with a Mathematics Subject Classification code (cc) in 14. \"Partial diff* eq*\" ! elliptic The not-operator ! eliminates all results containing the word elliptic. dt: b & au: Hilbert The document type is set to books; alternatively: j for journal articles, a for book articles. py: 2000-2015 cc: (94A | 11T) Number ranges are accepted. Terms can be grouped within (parentheses). la: chinese Find documents in a given language. ISO 639-1 language codes can also be used.\n\n##### Operators\n a & b logic and a | b logic or !ab logic not abc* right wildcard \"ab c\" phrase (ab c) parentheses\n##### Fields\n any anywhere an internal document identifier au author, editor ai internal author identifier ti title la language so source ab review, abstract py publication year rv reviewer cc MSC code ut uncontrolled term dt document type (j: journal article; b: book; a: book article)\nOscillations of higher-order neutral equations. (English) Zbl\u00a00566.34055\n\nConsider the neutral delay differential equation of order n $\\left(*\\right)\\phantom{\\rule{1.em}{0ex}}\\left({d}^{n}\/d{t}^{n}\\right)\\left[y\\left(t\\right)+py\\left(t-\\tau \\right)\\right]+qy\\left(t-\\sigma \\right)=0,$ $t\\ge {t}_{0}$ where q is a positive constant, the delays $\\tau$ and $\\sigma$ are nonnegative constants and the coefficient p is a real number. Theorem 1. (a) Assume that n is odd and that $p<-1$. Then every nonoscillatory solution of (*) tends to $+\\infty$ or -$\\infty$ as $t\\to \\infty$. (b) Assume that n is odd or even and that $p>-1$. Then every nonoscillatory solution of (*) tends to zero as $t\\to \\infty$. Theorem 2. Assume that n is odd. Then each of the following four conditions implies that every solution of (*) oscillates: (i) $p<-1$ and ${\\left(-q\/\\left(1+p\\right)\\right)}^{1\/n}\\left(\\tau -\\sigma \\right)\/n>1\/e$; (ii) $p=-1$; (iii) $p>-1$ and ${\\left(q\/\\left(1+p\\right)\\right)}^{1\/n}\\left(\\sigma -\\tau \\right)\/n>1\/e$; (iv) $-1 and ${q}^{1\/n}\\sigma \/n>1\/e\u00b7$\n\nTheorem 3. Assume that n is even. Then each of the following two conditions implies that all solutions of (*) oscillate: (i) $p\\ge 0$; (ii) $-1\\le p<0$ and ${\\left(q\/p\\right)}^{1\/n}\\left(\\sigma -\\tau \\right)\/n>1\/e$. Theorem 4. Assume that n is even, $p<-1$, and ${\\left(q\/p\\right)}^{1\/n}\\left(\\sigma -\\tau \\right)\/n>1\/e$. 
Then every bounded solution of (*) oscillates.\n\n##### MSC:\n 34K99 Functional-differential equations 34C10 Qualitative theory of oscillations of ODE: zeros, disconjugacy and comparison theory\n##### Keywords:\nneutral delay differential equation","date":"2014-04-21 15:13:07","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 22, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8947996497154236, \"perplexity\": 6438.091100051392}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2014-15\/segments\/1397609540626.47\/warc\/CC-MAIN-20140416005220-00280-ip-10-147-4-33.ec2.internal.warc.gz\"}"} | null | null |
Q: How to output a file to the browser after a jQuery Ajax call? I have a link on the website to a PHP file that generates a native Excel file on the fly and outputs it directly to the browser via headers for the user to open/save. Since it takes some time for the file to be generated, I'd like to use jQuery Ajax to make the call and show a loading animation in the meantime.
The only thing I'm not sure about is how to output the file to the browser after the Ajax call. Is it even possible?
A: (N.B. This is a paraphrasing of @dmitry's answer, but just elaborated upon)
The problem you have is that there is no means of directly returning a file to the user via AJAX - the browser has to request the file using a normal, synchronous HTTP request.
To solve this, your PHP will need to:
*
*Generate the Excel file as normal.
*Instead of writing the file back to the user, save it somewhere on the server's filesystem (i.e. using file_put_contents() or similar).
*Return the file path to the user.
Your JS, on receiving this response, will then need to do the following (a minimal sketch is given after this list):
*
*Read the Excel file path back from the PHP script.
*Open the Excel file in a new tab/window using window.open() (or redirect in the current tab/window by setting location.href).
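A minimal client-side sketch of those steps (the endpoint name, spinner element, and function name are placeholders of my own, not from the original question):
// Hypothetical endpoint: generate_excel.php builds the Excel file server-side,
// saves it, and echoes back the path/URL of the generated file.
function downloadReport() {
    $('#spinner').show();                  // show the loading animation
    $.ajax({ url: 'generate_excel.php' })
        .done(function (filePath) {
            // The browser fetches the file with a normal HTTP request,
            // which triggers the usual open/save dialog.
            window.open(filePath);
        })
        .always(function () {
            $('#spinner').hide();          // hide the animation either way
        });
}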
A: I think you can use a trick: generate the file and return the path to the generated file in your Ajax response, and then just call
window.location = fileUrl
There are also some techniques using iframe Ajax.
A: Why don't you use an iframe in which you load the actual PHP file that generates the Excel file? That way you won't block the UI while the file is being generated, and the browser will trigger the download dialog.
There's no way you can send a file via JavaScript, at least not in the manner of forcing the browser to open the download dialog.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 9,363 |
{"url":"https:\/\/www.infoq.com\/articles\/JGroups-raft-Primer\/","text":"InfoQ Homepage Articles Integrating Raft into JGroups\n\nIntegrating Raft into JGroups\n\nSome months ago I was reading the enlightening Diego Ongaro paper on the Raft consensus algorithm and thought that it would be great to have an option like that in the Red Hat open source portfolio. In the past I messed with Paxos\u00a0many times and decided it was really too complex to implement correctly, at least for me. To tell the truth, we already support Apache ZooKeeper, which is currently part of JBoss Fuse, and JGroups, which is the basis of everything else clusterable in the JBoss family, from the WildFly application Server itself to Infinispan, which both solve similar problems.\n\nZooKeeper is a very mature project implementing a consensus algorithm, (which has been published in some detail), and is also cited in the Raft paper. JGroups is arguably even more mature, dating back to 2002, and provides many low level messaging features, but doesn\u2019t implement a distributed consensus algorithm. JGroups is more of a toolset for writing these kinds of algorithms rather than just an implementation of any one of those.\n\nChatting with Bela Ban, JGroups benevolent dictator for life, we decided that it would be interesting to implement the Raft algorithm on top of JGroups, both because JGroups already has many features that could be useful to a robust Raft implementation, and on the other side because a consistent consensus based algorithm could be really a nice addition in many different use cases. Seeing that Bela implemented 99% of the code in just a few days, I guess it was a valid choice!\n\nWait, what\u2019s Raft?\n\nDiego Ongaro and John Ousterhout described Raft in detail in their paper \u201cIn Search of an Understandable Consensus Algorithm (Extended Version). In their own words, \u201cit is a consensus algorithm that is designed to be easy to understand and equivalent to Paxos in fault-tolerance and performance\u201d. The goal of Raft is to make consensus available to a wider audience, and that this wider audience will be able to develop a variety of higher quality consensus-based systems than are available today. I\u2019m part of the wider audience, so that\u2019s definitely a great start!\n\nAnother very important fact about Raft is that it\u2019s been proven through the use of formal methods, specifically with the help of TLA+: you won\u2019t find hidden corner cases in the algorithm, though obviously there could be bugs in implementations!\n\nConsensus is a well known problem in distributed computing. It basically consists of strategies for making different systems agree on something. When you have different (networked) systems, a simple command like \u201cSET MEANING_OF_LIFE to 42\u201d must be agreed on to make sure that the state can be highly available. The consensus approach is to have an agreement of the majority of nodes, and the Raft algorithm relies on a leader election and a per-node persistent log to achieve that.\n\nA leader is elected by consensus and all changes happen through it, which then replicates them to all nodes and adds them to their persistent log. 
Raft guarantees that there\u2019s at most one leader at any time, so all state machines receive the same ordered stream of updates and thus have the exact same state.\n\nRaft favors consistency over availability: in terms of the Cap theorem, jgroups-raft is a C-P system, meaning that if it can\u2019t get a majority of nodes agreeing, it won\u2019t be available but it will maintain its consistency. If for example we have a cluster of 5 nodes, 3 is the majority, so it will be possible to read\/write on the system even with 2 nodes failures. With more than 2 failures it\u2019s impossible to get a majority so the system won\u2019t be available (though it\u2019s possible to have some read-only features in this case).\n\nIn summary, at a very high level Raft consists of a leader election, (which requires a majority), as well as nodes being coordinated by the leader, each having one persistent log detailing what they are doing.\n\nAn excellent graphic explanation of how Raft algorithm works in detail is available here.\n\nWhy jgroups-raft\n\nThere already were many Raft implementations available in many different languages, but there were many reasons to implement Raft in JGroups too: JGroups has many ready-to-use building blocks which have been battle tested in fifteen years of wide usage and the code required for a robust implementation is much smaller than starting everything from scratch and therefore less buggy. Besides that, JGroups can be extremely useful to expand the basic Raft features too, for example adding reliability over UDP, multicasting features for large clusters and smarter cluster membership changes (\"view changes\" in JGroups parlance, we will see an example later)\n\nHaving a reliable consensus algorithm \u201cinside\u201d JGroups is also important for JGroups itself, because it means that Wildfly and\/or Infinispan will be able in the future to offer different clustering features out of the box, for example a cluster-aware singleton that is capable of maintaining its unicity even in the presence of a network partition (providing it can still reach consensus): a \u201cnormal\u201d Wildfly cluster-aware singleton in fact would break its unicity in case of a Network partition, ending up with a singleton per partition (a \u201cpartitionton\u201d?).\n\nTesting jgroups-raft\n\nTo test jgroups-raft, we will download it from GIT (see below) and use the included CounterServiceDemo. This is a distributed counter that is simple enough to analyze at the Raft level. You will need Java 8 installed on your preferred OS, though this tutorial has been tested only on OSX and Linux.\n\nFor this example I will assume you\u2019re testing jgroups-raft on your laptop, using a loopback interface.\n\nTo start open a terminal and clone the git repo somewhere on your disk:\n\n> git clone git@github.com:belaban\/jgroups-raft.git\n> export JGROUPS_RAFT_HOME=jgroups-raft\n> cd jgroups-raft\n\n\nwe will reference this directory as $JGROUPS_RAFT_HOME in the rest of this tutorial. 
> .\/bin\/counter.sh -name A You should see some output from JGroups, similar to: ------------------------------------------------------------------- GMS: address=A, raft-id=A, cluster=cntrs, physical address=127.0.0.1:62162 ------------------------------------------------------------------- -- view: [A, raft-id=A|0] (1) [A, raft-id=A] [1] Increment [2] Decrement [3] Compare and set [4] Dump log [8] Snapshot [9] Increment N times [x] Exit first-appended=0, last-appended=4, commit-index=4, log size=40b -- changed role to Candidate the last line means that this node is now a candidate to become leader, but it\u2019s still not a leader. In Raft, at any given time each node can be in exactly one of three states: leader, follower, or candidate. Being a candidate means that the node is trying to become a leader, while being a follower means that the node will just follow requests from leaders and candidates. The default raft.xml (which you can find in the$JGROUPS_RAFT_HOME\/conf) is configured with three Raft nodes, so the majority is 2, meaning that to finalize an election we need to add a second node.\n\nOpen a second terminal:\n\n> .\/bin\/counter.sh -name B\n\nNow that we started a second node, one of the two nodes will be able to become the leader, and the other one will become a follower. In Raft the leader election is based on a heartbeat mechanism: if the followers don\u2019t receive the heartbeat in a period called the election timeout, they\u2019ll assume there is no leader and they\u2019ll begin an election. The election consists roughly of becoming a candidate (everyone start as a follower) voting for itself and broadcasting a request for vote to the other nodes in the cluster to get a majority. Election timeouts are randomized and thus slightly different for each node, so split votes are rare and quickly resolved by the algorithm.\n\nYou should now see the voting happening between nodes and see an output similar to the following in the two consoles:\n\n-- changed role to Follower\n-- changed role to Leader\n\nIf this doesn\u2019t happen, it\u2019s because the second Raft instance can\u2019t see the other one at the networking level: the default JGroups configuration is UDP based, so it probably means that your UDP network configuration is not correctly configured for the loopback interface, which typically happens on BSD based Operating Systems like OSX: you will see in the output that the second Raft instance creates a new JGroups view, containing just itself.\n\n-- view: [B, raft-id=B|0] (1) [B, raft-id=B]\n\nIf this is the case, to fix this misconfiguration, add this line in the routing table to make UDP work on the loopback interface:\n\n> sudo route add -net 224.0.0.0\/5 127.0.0.1\n\nStart again the two Raft instances and you should finally see the election starting, the two nodes changing role and the JGroups view correctly containing both nodes:\n\n-- view: [A, raft-id=A|1] (2) [A, raft-id=A, B, raft-id=B]\n\nIf everything is fine, open a third terminal to start the third Raft node:\n\n> .\/bin\/counter.sh -name C\n\nNow you will have three different Raft nodes and you can start to play with the example.\n\nIf it\u2019s still not working, it could be that for some reason\u00a0you are binding to the wrong network interface and UDP traffic is dropped.\u00a0The advice is to have a look at the troubleshooting section of JGroups Manual. 
Or, if you are in a hurry, just contact us on IRC (#jgroups-raft) or Google group.\n\nThe distributed counter has a very simple command line interface that you can use to increment\/decrement the distributed counter in different ways and see the information contained in the persistent log of the node.\n\n[1] Increment [2] Decrement [3] Compare and set [4] Dump log\n[8] Snapshot [9] Increment N times [x] Exit\nfirst-appended=0, last-appended=7, commit-index=7, log size=70b\n\nYou can for example modify the counter in the different nodes consoles and see that each node has a consistent view of the counter.\n\nIt\u2019s interesting now to look at what happens if you kill a single node: you will still have the majority so the system will continue working, but if you kill two nodes the system will become unavailable: the remaining node will vote for itself but it won\u2019t be able to get the remaining vote needed to reach the majority. Remember that Raft is a C-P system and sacrifices availability if it detects that it won\u2019t be able to guarantee consistency, which is the case when there is just one node alive in a cluster of three. Restarting one of the two dead nodes will make the overall system work again.\n\nYou\u2019ll see last-appended and commit-index values grow together: to understand what they are, you\u2019ll need more information on how the Raft persistent log is structured and how exactly Raft manages the availability of the cluster when it can\u2019t reach consensus.\n\nNote that when you kill and restart a node, its initial state is initialized from its persistent log.\n\nWhat\u2019s in the Persistent log\n\nIn Raft, once the leader is up and running, it begins answering all client requests, that contain a command to be executed by the replicated state machine, which in this case is just an action (inc\/dec) on our distributed counter. The leader appends the command to its own log and then asks the other cluster members to do the same: the leader returns the control to the client only if it\u2019s been able to replicate the command to a majority of the followers.\n\nIf you look at the persistent log you\u2019ll get an output similar to the following:\n\nindex (term): command\n---------------------\n1 (200): incrementAndGet(counter)\n2 (200): incrementAndGet(counter)\n3 (200): incrementAndGet(counter)\n4 (200): incrementAndGet(counter)\n5 (200): decrementAndGet(counter)\n6 (200): incrementAndGet(counter)\n7 (200): compareAndSet(counter, 4, 10)\n\n\nThe log contains the operations that have been applied to the replicated state machine. The term is an important concept in Raft, because it\u2019s basically its unit of time. The term begins with the election, and when a candidate wins it it will become the leader for the rest of that term (potentially, forever: the term has an arbitrary length). 
Raft ensures that there is at most one leader in a given term: in this example we are at term 200, and to see this value grow you just have to make Raft start a new election.\n\nYou can see the last-appended and commit-index values in the log: the first one is the index of the last command appended to the persistent log, while the latter is the last committed index: they are usually the same, but if for example on the leader you have last-appended at 100 and commit-index at 90, that means that the leader hasn\u2019t been able to replicate 10 commands on a majority, so these commands are not yet applied to the state machine.\n\nTo show this behaviour in action, let\u2019s kill the C node and have a cluster with just two of the three nodes alive, A and B. Let\u2019s say for example that A is the leader: if you kill it, B - the follower - won\u2019t be able to become a leader (no majority) and won\u2019t accept any command: if you try to do something on the counter, you\u2019ll get a Java Exception saying that the leader is no longer part of the cluster view, so the command can\u2019t be applied. But if instead of killing the leader A you kill the follower B, you\u2019ll end up with a view containing just the leader.\n\nIn this case, if you try to apply commands to it, you will see a slightly different behaviour;\u00a0you could expect the leader to step down and become a candidate, but in fact you\u2019ll just get a timeout on your client (a ConcurrentTimeOutException to be precise). The command will then be appended to the persistent log of the leader without being committed (since there isn\u2019t anyone answering to the poor leader), so you\u2019ll see the last-appended and commit-index values of the leader diverge. If you then\u00a0restart the B node, you\u2019ll eventually see the commit-index catching up.\n\nThis is the default behaviour of Raft and it\u2019s just fine, but with the help of JGroups we could just pick up the change in the cluster view, observe that the size of the view is less than the required majority and make the leader step down: this is not implemented at the moment but will be a configurable option in the future. This is one example of the possible Raft improvements that can be easily applied to the basic algorithm thanks to the power of JGroups.\n\nJgroups-raft also implements a snapshotting system as suggested in the Raft paper: the leader can take snapshots of its persistent log, so it can transfer a snapshot to a member that might be out of sync for example , after being down for a considerable amount of time and then started again. Note that this is just an optimization, because even without snapshots the algorithm is able to eventually catch up.\n\nThe implementation of jgroups-raft is currently at 0.2 release, so it\u2019s not really ready for prime time, but is almost feature complete. Its codebase is currently living \u201coutside\u201d of the main JGroups project because it\u2019s evolving much more quickly, but sometime in the future it will be merged back. Nonetheless it\u2019s definitely ready to be tested, and any feedback is - as usual in open source projects - more than welcome. 
Please submit feedback in the IRC Group #jgroups-raft or Google group.\n\nConclusions\n\nIn this first part of the tutorial we just scratched the surface of Raft and had a look at jgroups-raft implementation with the provided DistributedCounter example.\u00a0In the second part of this tutorial, we will (finally) write some Java code, looking at what is needed to implement your own jgroups-raft enabled partition-proof cluster.\n\nUgo Landini takes the sentence \u201cThe only difference between men and boys is the size of their shoes and the cost of their toys\u201d very seriously. He works as a software architect at Red Hat. He dedicates the rest of his time to what\u2019s new in the IT field and is strongly convinced that sharing knowledge is not only a must but also an opportunity for personal growth. The cofounder of the JUG Roma, Ugo is an Apache committer, develops games for mobile devices and is convinced he can still play a decent football game (soccer for American people). He is also the cofounder and chair of technical committee at Codemotion.\n\nStyle\n\nHello stranger!\n\nYou need to Register an InfoQ account or or login to post comments. But there's so much more behind being registered.\n\nGet the most out of the InfoQ experience.\n\nAllowed html: a,b,br,blockquote,i,li,pre,u,ul,p\n\nAllowed html: a,b,br,blockquote,i,li,pre,u,ul,p\n\nAllowed html: a,b,br,blockquote,i,li,pre,u,ul,p\n\nIs your profile up-to-date? Please take a moment to review and update.\n\nNote: If updating\/changing your email, a validation request will be sent\n\nCompany name:\nCompany role:\nCompany size:\nCountry\/Zone:\nState\/Province\/Region:\nYou will be sent an email to validate the new email address. This pop-up will close itself in a few moments.","date":"2019-05-26 08:01:49","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.27782103419303894, \"perplexity\": 1390.0228216894175}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-22\/segments\/1558232258862.99\/warc\/CC-MAIN-20190526065059-20190526091059-00381.warc.gz\"}"} | null | null |
The late 80s and early 90s were a golden age of goofy action loosely connected with sci-fi themes: from the hugely successful Terminator 2 to Total Recall to Robocop, it was the zenith of combining Philip K. Dick plots with explosions. It was at this cultural high tide that Demolition Man was released, introducing Stallone's first foray into the genre.
His frozen body is forced through a rigorous exercise routine.
We begin in the terrible future Los Angeles of 1996, where we're just shy of Escape from New York/LA dystopia, including an inexplicably burning "Hollywood" sign. Heroically named John Spartan (Stallone) is sent in to stop the villainous Simon Phoenix (Wesley Snipes with a blonde mohawk), playing the "Joker" to Stallone's grim, window smashing vigilante. Spartan is finally able to bring in his nemesis, but somehow the cop, not the psychotic murderer, is blamed for the deaths of a bus full of hostages. As punishment, Spartan is frozen in ice and implanted with subliminal knitting instructional videos.
Simon Phoenix: all around bad role model!
Meanwhile, in the even more distant future of 2032, ridiculously named Lenina Huxley (the adorable Sandra Bullock), named after a Brave New World character AND the author of Brave New World in case you didn't get it, is bored by her everyday life in the LAPD. In the future, crime no longer exists outside of the occasional curfew breaker or bad-word-user. In one of the best running gags of the film, all cursing results in an animatronic voice announcing a fine for inappropriate language use. Her wish for excitement comes true when Phoenix breaks loose at his parole hearing, having somehow learned the release code for his handcuffs and how to kick people really hard. Phoenix goes on a murder spree, killing several cops and the prison warden before disappearing off the grid.
When will people learn? Retinal scanning just encourages eye injuries!
Huxley, along with a wise old cop (character actor and Morgan Freeman stand-in Bill Cobbs), convinces the chief to reluctantly unfreeze Spartan, whose chiseled biceps have inexplicably remained rock-hard throughout suspended animation. Spartan has trouble adjusting to the prissy and effeminate future, but his cop instincts take over and he is able to quickly track down Phoenix in time for a big dumb action set-piece at a museum.
For greater authenticity, all of the guns in the museum
are kept loaded and surrounded by ammo.
After the action scene, Spartan is invited to dinner with the founding father of this new world order, at the greatest (and only) restaurant in town: Taco Bell. His chalupa is ruined, though, when Denis Leary shows up as a violent rebel who basically just does his stand-up routine in lieu of acting, so I'm not going to use his character's name. The plot thickens as Spartan discovers that while he's been given super-knitting skills as part of his rehabilitation, Phoenix was made even more dangerous and psychotic, and his escape was a planned action against Leary's freedom-fighting resistance.
Taco Bell introduces their new five cheese escargot chalupa!
We get a few big action scenes and a final showdown, but without going into too much detail I can say someone's head is kicked off their body as a final "Chekhov's gun" reference to a character saying they'd "lose their head" if it wasn't attached.
Stallone month is brought to you by PEPSI.
Honestly, except for the camp humor that was obviously intended, there isn't much about Demolition Man that belongs on this site. It's a fun popcorn movie, the kind that you might not seek out but would definitely watch on a rainy Sunday afternoon. Stallone projects are built to minimize his glaring weaknesses as an actor, so he rarely talks, or emotes, or does anything that isn't shooting or shouting or looking befuddled. Sandra Bullock has a breakout role one year before her mainstream breakout in "Speed," as the adorable buddy/cop/love interest/klutzy comic relief. There's plenty of good observational comedy mostly gleaned from other sci-fi, from the germaphobia carried to ridiculous extremes, to the absurd contraptions.
Rob Schneider and Academy Award Winner Sandra Bullock share a laugh at Stallone's expense.
The real star of the show, though, is Wesley Snipes as Phoenix. Most of that comes from the fact that it's a great part, a deranged lunatic that kills people for fun because he is just a bad guy. But even a part like that can be ruined by the wrong actor (Travolta in Broken Arrow), and Snipes just looks like he's having a good time as he throws future citizens through plate glass windows, beats up the police with a smile, or shoots a cartoonishly over-sized gun without much accuracy. Movies like this live and die with their villains, and Demolition Man has a great one. Snipes does a great job, and only the most squeamish will be offended by the portrayal's potential racial implications. It's not like he goes around with a giant boombox on his shoulder with a bucket of chicken fresh from Taco Bell.
While most of the humor is organic, some of it comes from the inevitable aging of a sci-fi movie that dates itself with its premise. The only real unintended humor comes from the prediction of 1996 Los Angeles as a war zone, or the references to the "2010" earthquake, or the ridiculously antiquated computer terminals. Conversely, the "President Schwarzenegger" joke has actually become more funny with time, even if Schwarzenegger is unlikely to ever actually become president.
As Stallone movies go, it's one of the few that I'd consider genuinely "good" in the normal sense of the word: it's not regarded with quite the same status as Rocky or First Blood, but it's far from the "so bad it's great" qualities we see in Rocky V or the Rambo sequels or Cliffhanger.
Memorable Quotes:
Phoenix: Spartan? John Spartan? Aw, shit, they let anybody into this century! What the hell you doing here?
Spartan: Look, Huxley, why don't we do it the old fashioned way?
Huxley: Eeeew, disgusting! You mean... fluid transfer?
Huxley: You are even better live than on laserdisc!
By Dobson
Labels: 1990s, Action, Sci-Fi, Stallone
Nathan Forester May 25, 2012 at 7:20 PM
Superb job.
blubeetle December 26, 2015 at 10:59 PM
Question: why does she have to type and voice-command the computer at the same time, every time? Stupid script, indeed.
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 6,436 |
Conserving America's Wetlands 2007:
Three Years of Progress
Implementing the President's Goal
Appendix E.
U.S. Army Corps of Engineers, Civil Works
Table E-1. USACE Programs Supporting the President's Wetland Goal in FY 2008. Funding (millions of dollars)*

Program                                  Restore or Create   Improve   Protect   Total Wetlands Funding for Goal FY 2008   Difference from FY 2007
USACE Civil Works
Aquatic Ecosystem Restoration Program    51.873              170.000   0.525     222.398                                   -60.602

*Excludes regulatory program, mitigation, and Coastal Wetlands Planning, Protection and Restoration Act. Includes funding for projects that will result in acres to be counted in future fiscal years.
Table E-2. USACE Programs Supporting the President's Wetland Goal in FY 2008. Planned Accomplishments (in acres)

Program                                  Restore or Create   Improve   Protect   Total    Difference from FY 2007
USACE Civil Works
Aquatic Ecosystem Restoration Program    3,795               14,827    185       18,807   -242,547
USACE Projects Supporting the President's Wetland Goal
Aquatic Ecosystem Restoration: The USACE has numerous study, project-specific, and programmatic authorities for implementing aquatic ecosystem restoration projects. In addition, activities contributing to the President's goal may occur on the 12 million acres of water and land managed by the USACE for other purposes, such as flood damage reduction, navigation, and recreation. Another contribution is the use of dredged material to create, restore, or improve wetland habitat as part of routine maintenance dredging of Federal channels.
The data in the tables above represent a subset of the total USACE commitment to achieving the President's goal. Because most USACE restoration projects take several years to complete, the funds appropriated in any one fiscal year have a minimal correlation to the number of acres that count toward the President's goal in that fiscal year. Projects are included in the budget based on their effectiveness in addressing significant regional or national aquatic ecological problems. The aquatic ecosystem studies and projects proposed by the USACE for funding in FY 2008 include the following examples (the large number of projects precludes a comprehensive list within this document):
Comprehensive Everglades Restoration Plan (CERP): The primary and overarching purpose of CERP is to restore the South Florida ecosystem, which includes the Everglades. The plan provides the framework and guidance to restore, protect, and preserve the water resources of the greater Everglades ecosystem. CERP has been described as the world's largest ecosystem restoration effort, and includes providing more natural flows of water, improved water quality, and more natural hydro-periods within the remaining natural areas. The plan is intended to help restore the ecosystem while ensuring clean and reliable water supplies, and providing flood protection in urban areas.
http://www.evergladesplan.org
Louisiana Coastal Area Ecosystem Restoration: More than one million acres of Louisiana's coastal wetlands have been lost since the 1930s; another one-third of a million acres could be lost over the next 50 years unless large-scale corrective actions are taken. The ecosystem restoration program will construct significant restoration features; undertake demonstration projects; study potentially promising large-scale, long-term concepts; and take other needed actions to restore the ecosystem. A 10-year plan of studies and projects was developed through a public involvement process and close collaboration with other Federal agencies and the State of Louisiana.
http://www.mvn.usace.army.mil/prj/lca/
Upper Mississippi River Restoration: Originally authorized in 1986 but significantly modified in 1999, this program provides for planning, construction, and evaluation of measures for fish and wildlife habitat rehabilitation. Multiple habitat projects are helping to revitalize the side channels and to restore island, aquatic, and riparian habitat in the Upper Mississippi River. The program also includes funds for the collection of project and systemic baseline data and monitoring.
http://www.mvr.usace.army.mil/EMP/default.htm
USACE Programs that Maintain the Wetland Base
Together with their partners, the USACE provides environmental stewardship of nearly 12 million acres of public land and water and oversees the natural resources management of 456 operating civil works water resources projects nationwide. The USACE strives to provide sound environmental stewardship of lands and waters entrusted to its care, while accomplishing multiple authorized project purposes. Its Natural Resources Management Mission is to manage and conserve those natural resources (including fish and wildlife, woodlands and grasslands, wetlands, soils, and water) consistent with ecosystem sustainability principles, to serve the needs of present and future generations.
The stewardship of wetland resources is an integral part of the USACE responsibility. Although the classification and quantity of wetlands acreage under USACE stewardship has not yet been determined, an inventory of natural resources (including wetlands) is required for each project. This effort is under way and is being accomplished as fiscal resources allow.
Information from the inventories is incorporated into master plans and operational management plans and used to help manage, conserve, and protect wetland resources. Where feasible, wetland resources management is integrated to capture mutual benefits (e.g., for efforts to manage wetland-dependent plants and animals, including endangered species). In addition, the effects of existing and proposed land use activities are monitored or evaluated to guard against wetland degradation or loss. Opportunities to enhance wetland quality and quantity are implemented where feasible, employing partnerships and volunteer assistance where possible.
http://corpslakes.usace.army.mil/employees/envsteward/envsteward.html
Engineer Research and Development Center: Within the Environmental Laboratory, the Wetlands and Coastal Ecology group conducts field and laboratory investigations on biotic and abiotic resources in wetlands and coastal systems and develops products/systems supporting assessment, restoration, and management of wetlands and coastal ecosystems. Examples of wetlands research include the development of improved standards, techniques, and guidelines for the planning, design, and construction of USACE wetlands restoration and creation projects; completion of a GIS-based decision support system for prioritizing candidate wetlands restoration sites with the greatest potential for success; and exploration of innovative plant harvesting/installation methods for the large-scale restoration of submerged aquatic vegetation (SAV) ecosystems in the Chesapeake Bay. In addition, state-of-the-art tools and methods for wetlands restoration will be integrated to forecast physical, chemical, and biological responses to water resource management activities and to manage these resources within a watershed-scale perspective. Approximately $1.8 million is included in the FY 2008 budget for wetlands research.
http://el.erdc.usace.army.mil/org.cfm?Code=EE-W
Regulatory Clean Water Act 404 Program: The USACE manages the Nation's wetlands through a regulatory program requiring permits for the discharge of dredged and fill material into jurisdictional waters of the United States. In a typical year the USACE receives permit requests to fill about 25,000 acres of jurisdictional waters. Of these, about 5,000 acres are not permitted, and for the 20,000 permitted acres the USACE requires, on average, more than two acres of mitigation for each permitted acre lost. The FY 2008 funding request is $180 million.
http://www.usace.army.mil/inet/functions/cw/cecwo/reg
package org.apache.flink.runtime.clusterframework.overlays;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.SecurityOptions;
import org.apache.flink.core.fs.Path;
import org.apache.flink.runtime.clusterframework.ContainerSpecification;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;
import java.io.File;
import java.io.IOException;
import static org.apache.flink.runtime.clusterframework.overlays.Krb5ConfOverlay.JAVA_SECURITY_KRB5_CONF;
import static org.apache.flink.runtime.clusterframework.overlays.Krb5ConfOverlay.TARGET_PATH;
import static org.junit.Assert.assertEquals;
/** Tests for {@link Krb5ConfOverlay}. */
public class Krb5ConfOverlayTest extends ContainerOverlayTestBase {
@Rule public TemporaryFolder tempFolder = new TemporaryFolder();
@Test
public void testConfigure() throws Exception {
File krb5conf = tempFolder.newFile();
Krb5ConfOverlay overlay = new Krb5ConfOverlay(krb5conf);
ContainerSpecification spec = new ContainerSpecification();
overlay.configure(spec);
assertEquals(
TARGET_PATH.getPath(),
spec.getSystemProperties().getString(JAVA_SECURITY_KRB5_CONF, null));
checkArtifact(spec, TARGET_PATH);
}
@Test
public void testNoConf() throws Exception {
Krb5ConfOverlay overlay = new Krb5ConfOverlay((Path) null);
ContainerSpecification containerSpecification = new ContainerSpecification();
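// With no krb5.conf path available, configure() is expected to be a
// no-op: this test passes as long as no exception is thrown.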
overlay.configure(containerSpecification);
}
@Test
public void testBuildOverlayFromConf() throws IOException {
final File krb5conf = tempFolder.newFile();
final Configuration configuration = new Configuration();
configuration.set(SecurityOptions.KERBEROS_KRB5_PATH, krb5conf.getPath());
final Krb5ConfOverlay overlay =
Krb5ConfOverlay.newBuilder().fromEnvironmentOrConfiguration(configuration).build();
assertEquals(overlay.krb5Conf.getPath(), krb5conf.getPath());
}
@Test
public void testBuildOverlayFromEnv() throws IOException {
final File krb5conf = tempFolder.newFile();
final Configuration configuration = new Configuration();
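// Set the JVM-wide system property (rather than the Flink configuration),
// so the builder has to pick the path up from the environment side of the
// lookup. Note that the property is not restored afterwards, which can
// leak into tests that run later in the same JVM.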
System.setProperty(JAVA_SECURITY_KRB5_CONF, krb5conf.getPath());
final Krb5ConfOverlay overlay =
Krb5ConfOverlay.newBuilder().fromEnvironmentOrConfiguration(configuration).build();
assertEquals(overlay.krb5Conf.getPath(), krb5conf.getPath());
}
}
Would National Health Insurance Help GM?
Bradford Plumer
Conventional wisdom in this country has it that American businesses are uncompetitive partly because they have to spend so much on health insurance for their workers. Here's a common variation, from Dean Bakopoulos:
[W]e must implement a system that guarantees universal healthcare. American industry — from National Steel to Starbucks — would benefit from having the burden of health insurance lifted off its back. Why else would GM be aggressively investing in nationalized-healthcare Canada while U.S. plants shut down?
Why indeed? I certainly don't know. But I'm not convinced that the conventional wisdom is entirely right. At least let's hash it out. There's reason to think that national healthcare wouldn't necessarily make American businesses more competitive.
Say each year GM paid each worker $40,000 and spent $5,000 per worker on health insurance. That's a major drag, right? Well, look. Say national health insurance is then created, some system that doesn't rely on employers. Depending on how it's financed, GM could still be on the hook for that $5,000, so long as total worker compensation doesn't change—which it shouldn't, so long as it's set by the market. Maybe companies will now pay that $5,000 in wage form, to attract the same caliber workers (or because unions demand it). Or maybe the new system will be financed by payroll taxes or individual mandates, in which case the company might have to pay each worker $45,000 to cover the cost. But total compensation wouldn't change.
Alternatively, those companies that are currently paying nothing for health insurance can help share the load with companies like GM. But then you're just taking from one company to help out another—American businesses overall don't necessarily become more competitive. There are probably ways to redistribute the load that make sense, and that's why we have policy wonks, but the point is there's nothing prima facie business-friendly about this.
In reality, of course, things would look far more complicated. The current tax system makes things complex. And some health insurance systems are more efficient than others. National health insurance might be cheaper, on aggregate, than our current system, in which case everyone would be paying less, and businesses obviously become more competitive. But what if the new system was more expensive—given that 45 million new people would need to be covered? GM's fortunes would depend largely on how the system was financed and how good it was at controlling costs. European companies are more competitive on this front presumably because Europe rations its health care and so spends less (with similar, if not better, health outcomes). If we could do that, it wouldn't matter quite as much how health care was delivered—cutting costs is where the benefits to business would lie, primarily.
There's another aspect here. Right now, when insurance premiums go up each year, GM usually has to cover the increase, which goes up faster than wages do, unless it wants to shift some of the cost onto workers—a move that usually causes a big stir and is somewhat hard to do. But if GM was paying its workers entirely in wages, and the government handling health insurance, then GM might be able to get away with avoiding the "necessary" wage increases whenever there was a premium hike. In that case, GM would save money and become more profitable by giving its employees a pay cut—who get, say, a payroll tax increase or premium hike from the government, but not enough of a corresponding wage increase from GM to cover it. But who knows.
I certainly think a national health insurance system is necessary in this country, one not tied to employment. It would help workers move from job to job more easily while remaining insured, and would guarantee that everyone had insurance. It's fair, moral, decent, etc. And it would likely be progressive, which the current tax deductions for employer-backed insurance certainly aren't. And so on. In theory reform could even help control costs, although I'm not as orthodox about that particular faith as some. But would it be a boon for American businesses? It really depends.
{"url":"https:\/\/socratic.org\/questions\/how-do-you-find-all-the-asymptotes-for-function-f-x-7x-1-2x-9","text":"# How do you find all the asymptotes for function f(x) = (7x+1)\/(2x-9)?\n\nThat is when the numerator gets near to $0$:\n$2 x - 9 \\approx 0 \\to x \\approx 4 \\frac{1}{2}$\nThe horizontal is found when we make $x$ larger and larger. The $+ 1$ and $- 9$ then matter less and less, and the funtion begins to approach $f \\left(x\\right) \\approx \\frac{7 x}{2 x} = \\frac{7}{2} = 3 \\frac{1}{2}$","date":"2019-03-21 01:15:03","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 6, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8144965767860413, \"perplexity\": 336.4428548083035}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-13\/segments\/1552912202476.48\/warc\/CC-MAIN-20190321010720-20190321032720-00097.warc.gz\"}"} | null | null |
Barbara Bielasta (born 23 October 1956 in Ciechanów) is a librarian, regionalist, and publicist.
A graduate of the Institute of Library and Information Science of the University of Warsaw. Since 1979 she has been employed at the Voivodeship (today: County) Public Library in Ciechanów.
Author of the book Szulmierz i okolice (2007) and of six volumes of the Bibliografia województwa ciechanowskiego for the years 1975-1998 (1993-2004). Co-author of the Ciechanów part of the Bibliografia województwa mazowieckiego and of other thematic bibliographies. Initiator and main organizer of the series of meetings on regional topics Z Bibliotecznej półki. Author of articles on regional, mainly historical, topics in the local and regional press.
Polish librarians
Polish regionalists
Polish publicists
People born in Ciechanów
Born in 1956
"redpajama_set_name": "RedPajamaWikipedia"
} | 5,357 |
{"url":"http:\/\/motls.blogspot.co.uk\/2006\/04\/ignore-bloggers-at-your-peril.html","text":"## Wednesday, April 19, 2006 ... \/\/\/\/\/\n\n### Ignore bloggers at your peril\n\nClifford Johnson has pointed out an article in the Guardian. The article discusses some kind of research about the influence of bloggers. It also mentions three companies that were affected by bloggers because the bloggers described physics of Kryptonite locks, McDonald's abracadabra, as well as Dell whose last CEO was possibly fired. Well, the visitor data indicates that very different segments of the society are being influenced. For example, many people were looking for Angela Merkel semi-naked today. And of course, people are still interested in Mary Winkler as well as a potential massive nuclear strike. More demanding readers look for physics blogs uncertainty as well as the sad story of John Brodie, the physicist.","date":"2015-05-27 05:36:48","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 1, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.22937056422233582, \"perplexity\": 3428.421784866762}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2015-22\/segments\/1432207928907.65\/warc\/CC-MAIN-20150521113208-00198-ip-10-180-206-219.ec2.internal.warc.gz\"}"} | null | null |
package org.apache.bcel.generic;
/**
* DALOAD - Load double from array
*
* <PRE>
* Stack: ..., arrayref, index -> ..., result.word1, result.word2
* </PRE>
*/
public class DALOAD extends ArrayInstruction implements StackProducer {
/**
* Load double from array
*/
public DALOAD() {
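// Delegates to ArrayInstruction with the DALOAD opcode (0x31, decimal 49).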
super(org.apache.bcel.Const.DALOAD);
}
/**
* Call corresponding visitor method(s). The order is: Call visitor methods of implemented interfaces first, then call
* methods according to the class hierarchy in descending order, i.e., the most specific visitXXX() call comes last.
*
* @param v Visitor object
*/
@Override
public void accept(final Visitor v) {
v.visitStackProducer(this);
v.visitExceptionThrower(this);
v.visitTypedInstruction(this);
v.visitArrayInstruction(this);
v.visitDALOAD(this);
}
}
\section{Introduction}\label{sec:sec1}
The presence of sharp variations in the physical parameters of solar and stellar atmospheres is anything but irrelevant in the formation
of spectroscopic and spectropolarimetric signals.
For instance, it is well known that the propagation of shock waves can generate a large variety of important features in observed spectra \citep{mihalas1980}.
Moreover, the presence of transition layers or boundaries associated with a jump in magnetic
field, in temperature, and in bulk velocities significantly modifies spectral line profiles.
In many astrophysical applications, one must routinely solve the radiative transfer equation.
Very common examples are: iterative solutions of nonlinear radiative
transfer problems in non-local thermodynamic equilibrium (non-LTE) conditions,
inversions of spectral line profiles, and massive three-dimensional (3D) radiative magnetohydrodynamic simulations (R-MHD) of stellar atmospheres.
The radiative transfer equation seldom has
solutions that can be expressed in an analytical form, and therefore it is common to seek approximate
solutions in terms of numerical schemes.
However, standard numerical methods
rely on smoothness assumptions regarding the functions to be integrated
and these methods may become inaccurate or inefficient in the presence of discontinuities.
In particular, the crossing of a discontinuity might introduce important local errors in the numerical solution,
which negatively affect the accuracy of the global problem.
The relevance of discontinuities has already been recognized in the radiative transfer community.
For example, standard non-LTE radiative transfer codes barely converge
when applied to data from 3D R-MHD simulations that contain discontinuities and steep gradients \citep{steiner2016}.
Some efforts have already been exercised to handle such discontinuities:
\citet{tscharnuter1977} proposed a numerical method for computing the radiative transfer across a shock front with spherical symmetry.
Afterwards, \citet{mihalas1980} designed a numerical scheme for solving the equation of transfer in discontinuous media.
\citet{auer+paletou1994} remarked on the necessity of avoiding negative overshoots when interpolating physical quantities.
For this reason, \citet{auer2003} suggested the use of Hermite interpolants and Bézier curves in the formal
solution and, subsequently, \citet{ibgui2013} made use of monotonic cubic Hermite polynomials.
Some attention to discontinuities has also been paid in the context of the radiative transfer of polarized light.
For example, \citet{steiner2000} and \citet{mueller2002} studied the role played by discontinuities
in fluid velocities and in the magnetic field in the formation of asymmetric Stokes profiles.
The SIRJUMP inversion code by \citet{louis2009} is able to infer possible discontinuities
in the physical atmospheric quantities along the line of sight \citep{deltoro_iniesta2016}. More recently,
\citet{delacruz_rodriguez+piskunov2013} and~\citet{stepan+trujillo_bueno2013} applied B\'ezier interpolants to control abrupt changes in the absorption and emission quantities, whereas
\citet{steiner2016} proposed a numerical scheme based on monotonic reconstruction that allows for discontinuities at the boundary of each numerical cell of the atmospheric model.
However, a rigorous mathematical and numerical investigation of the problems arising in the integration of the discontinuous radiative transfer equation
is still lacking and deeper understanding of this topic is certainly required.
This work clearly states the issues introduced by discontinuities in the context of the radiative transfer equation.
The paper is organized as follows:
Sections~\ref{sec:sec2} and~\ref{sec:sec31} introduce differential systems with a discontinuous right-hand side and
interpolations of discontinuous functions, respectively,
explaining the limitations of standard convergence analyses for these kinds of problems.
Section~\ref{sec:sec3} explores the role of discontinuities in the radiative transfer equation.
Section~\ref{sec:sec5} presents specific numerical tests that highlight
the poor performance of standard methods when dealing with discontinuities in the radiative transfer equation.
Sections~\ref{sec:sec4} and~\ref{sec:int_disc_data} briefly summarize the existing numerical methods for the treatment of discontinuous ordinary differential equations (ODEs)
and the interpolation techniques for discontinuous discrete data.
Particular attention is paid to their suitability to the radiative transfer equation in discontinuous media.
Finally, Section~\ref{sec:sec6} provides remarks and conclusions, with a view on future work.
\section{Ordinary differential
equations with a discontinuous right-hand side}\label{sec:sec2}
Discontinuous differential systems frequently appear in various fields, such as chemical kinetics \citep[e.g.,][]{landry2009}, biological systems \citep[e.g.,][]{gouze2002}, and various engineering disciplines \citep[e.g.,][]{malmborg1997,acary2008}.
Moreover, most of the studies on discontinuous ODEs rely on numerical methods and a broad literature attests to the importance of this topic:
from the pioneering works by \citet{chartres1972} and \citet{mannshardt1978} to the surveys by \citet{calvo2008} and \citet{dieci2012}.
\subsection{Impact of a discontinuity on the numerical solution}\label{sec:sec2_3}
The numerical integration of discontinuous differential systems is challenging, because standard numerical schemes may perform very inefficiently in the absence of local smoothness.
It is known that the crossing of a discontinuity introduces significant local errors, which negatively affect global errors
and might even yield a convergence order breakdown.
Consider a numerical method of order $p$ applied to the scalar initial value problem (IVP)
\begin{equation}\label{scalarIVP}
y'(t) = f(t,y)\,, \quad y(0) = y_0\,.
\end{equation}
Standard convergence analysis would conclude that the numerical scheme yields a global error that scales as $\mathcal{O}(h^p)$, where $h$ denotes the step size.
Such an analysis relies on the assumption that
the right-hand side $f$ of the IVP~\eqref{scalarIVP} (and consequently $y$) is sufficiently smooth.
More precisely, the function $f$ must have $p+1$ continuous derivatives.
However, a different local truncation error analysis is mandatory if discontinuities occur in $f$ or
its derivatives.
By definition, a discontinuity in Equation~\eqref{scalarIVP} is of order $q \ge 1$ if $f$ contains a finite jump
in at least one of its partial derivatives of order $q - 1$ and
it has continuous derivatives through order $q - 2$ \citep{gear1984}.
This produces a $q$th-order discontinuity in $y$ at the discontinuity location.
Numerical analysis shows that if $q \ge p + 1$ the numerical method will remain of order $p$ and one may not even notice the discontinuity.
By contrast, if $q \le p$, the discontinuity introduces the dominant term $\mathcal{O}(h^q)$ in the local error, which can reduce the global error order.
For instance, \citet{mannshardt1978} observed that a Runge-Kutta method remains convergent after having crossed a first-order discontinuity, but only with order 1.
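For illustration (a simple textbook-style example, not drawn from the works cited above), the right-hand side $f(t)=\mathrm{sign}(t)$ jumps at $t=0$ and thus carries a first-order discontinuity ($q=1$), whereas $f(t)=|t|$ is continuous with a jump in $f'$ and thus carries a second-order discontinuity ($q=2$). Crossing the former caps the attainable global order at one, while crossing the latter caps it at two, irrespective of the nominal order $p$ of the scheme.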
In many practical problems, the right-hand side $f$ of the IVP~\eqref{scalarIVP} is known at
a discrete set of points only. In such cases, the numerical grid must be considered fixed \textit{a priori}
and there is not a well defined concept of discontinuity in $f$.
It is difficult to tell whether the underlying model that gave
rise to the discrete data is continuous, or rather contains jumps.
In a discrete world, all changes are jumps by definition
and data are purely discontinuous even if the underlying model is not.
Therefore, there is no concrete difference between a discontinuity and a steep gradient
for a single set of discrete data that is not able to resolve the local feature as shown in Figure~\ref{fig:sampling}.
The difference is only observed if one considers
a sequence of discrete grids with a finer and finer sampling.
Computing the maximum of the differences between adjacent data points along the sequence of increasingly refined grids, one recognizes two possible cases:
either the underlying function is continuous and the value approaches zero,
or it contains a discontinuity and the value approaches the size of the jump.
Practical radiative transfer problems often involve rapidly varying functions which actually do not contain any mathematical discontinuities.
Section~\ref{sec:sec5.1} numerically explores such issues by considering both a discontinuous atmospheric model
and one containing a steep gradient.
\subsection{Example: trapezoidal method order reduction}\label{sec:subsec2.2}
This section gives a concrete example of order reduction for the (implicit) trapezoidal method,
but the following local error analysis can be adapted to other numerical methods.
For instance, \citet{mannshardt1978} presented the case of a general one-step method, whereas \citet{gear1984} analyzed predictor-corrector methods.
For simplicity, consider the IVP
\begin{equation}\label{scalarIVP_disc}
y'(t) = f(t)=
\begin{cases}
f_1(t) & \text{if}\; t<\xi\,, \\
f_2(t) & \text{if}\; t\ge\xi\,,
\end{cases}
\quad y(0) = y_0\,,
\end{equation}
where $f:\mathbb{R}\rightarrow\mathbb{R}$
and the discretization of the time interval $[0,T]$ given by $\{t_k\}$ with $k=0,\dots,N$.
Let $\xi\in[t_k,t_{k+1}]$ and assume that both $f_1$ and $f_2$ are at least three times continuously differentiable.
For $f_1(\xi)\ne f_2(\xi)$, the IVP~\eqref{scalarIVP_disc} shows a first-order discontinuity.
The trapezoidal method applied to the IVP~\eqref{scalarIVP_disc} in the interval $[t_k,t_{k+1}]$ reads
\begin{equation}
y_{k+1} = y_k+\frac{t_{k+1}-t_k}{2}\left[f_1(t_k)+f_2(t_{k+1})\right]\,,
\label{trapezoidal}
\end{equation}
where $y_k$ and $y_{k+1}$ are the numerical approximations of the exact values $y(t_k)$ and $y(t_{k+1})$, respectively.
The local truncation error is defined by
\begin{equation*}
L_{k+1}=y(t_{k+1})-y_{k+1}\,,\text{ assuming }y_{k}=y(t_{k})\,,
\end{equation*}
while the global error, defined as
\begin{equation*}
E_N= |y_N-y(t_N)|\,,
\end{equation*}
represents the accumulation of local errors over all the steps.
The Taylor expansion of the local exact solution gives
{\small
\begin{align*}
y(t_{k+1}) &= y(\xi)+(t_{k+1}-\xi)f_2(\xi)+\frac{(t_{k+1}-\xi)^2}{2}f_2'(\xi)+\mathcal{O}((t_{k+1}-\xi)^3)\,,\\
y(\xi) &= y_k+(\xi-t_k)f_1(t_k)+\frac{(\xi-t_k)^2}{2}f_1'(t_k)+\mathcal{O}((\xi-t_k)^3)\,.
\end{align*}}\noindent
Making use of the additional Taylor expansions
\begin{align*}
f_1(t_k) &= f_1(\xi)+(t_k-\xi)f_1'(\xi)+\mathcal{O}((\xi-t_k)^2)\,,\\
f_1'(t_k) &= f_1'(\xi)+\mathcal{O}(t_k-\xi)\,,\\
f_2(t_{k+1}) &= f_2(\xi)+(t_{k+1}-\xi)f_2'(\xi)+\mathcal{O}((t_{k+1}-\xi)^2)\,,
\end{align*}
one recovers by direct calculations the expression
\begin{equation*}
\begin{split}
L_{k+1}&=\left(\xi-\frac{t_k+t_{k+1}}{2}\right)\left[f_1(\xi)-f_2(\xi)\right]\\
&+\frac{(\xi-t_k)(t_{k+1}-\xi)}{2}\left[f_1'(\xi)-f_2'(\xi)\right]+\mathcal{O}(h^3)\,,
\end{split}
\end{equation*}
where $h=t_{k+1}-t_k$.
This leads to the formula
\begin{equation}
|L_{k+1}|\approx hK_1+h^2K_2+ \mathcal{O}(h^3)\,,
\label{local_err_disc}
\end{equation}
where
\begin{align*}
K_1&=|f_1(\xi)-f_2(\xi)|\,,\\
K_2&=|f_1'(\xi)-f_2'(\xi)|\,,
\end{align*}
are the first and second-order jumps, respectively, at the discontinuity point $\xi$.
The first and second terms in Equation~\eqref{local_err_disc} contribute to the global error
with $\mathcal{O}(h)$ and $\mathcal{O}(h^2)$ terms, respectively.
These contributions arise from the jump and cannot be improved by using some higher-order method.
After having crossed a first-order discontinuity, that is, $f_1(\xi)\neq f_2(\xi)$,
the trapezoidal method is therefore only first-order convergent. Here, one assumes a finite number of jumps,
otherwise the $\mathcal{O}(h)$ contributions will accumulate, thwarting convergence.
Discontinuities of second order or greater (which imply $f_1(\xi)=f_2(\xi)$) do not affect the order of convergence of the trapezoidal method.
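As a minimal worked instance of Equation~\eqref{local_err_disc} (an illustrative example, not part of the original derivation), take $f_1(t)=0$ and $f_2(t)=1$, so that $K_1=1$ and $K_2=0$. The exact solution satisfies $y(t_{k+1})=y_k+(t_{k+1}-\xi)$, while the trapezoidal step~\eqref{trapezoidal} yields $y_{k+1}=y_k+h/2$, so that
\begin{equation*}
L_{k+1}=\frac{t_k+t_{k+1}}{2}-\xi\,,
\end{equation*}
which is $\mathcal{O}(h)$ for a generic location of $\xi$ inside the cell and vanishes only if $\xi$ happens to coincide with the cell midpoint.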
\begin{figure}
\includegraphics[width=0.49\textwidth]{sampling}
\caption{The hyperbolic tangent (top panel, continuous blue) and the modified Heaviside (bottom panel, continuous blue)
functions are sampled in the interval $x\in[-1,1]$ with 10 (red circles) and 40 (green dots) grid points.
No difference between the coarser (red) samplings is visible, while this is not true for the finer (green) samplings.}
\label{fig:sampling}
\end{figure}
\section{Interpolations of discontinuous functions}\label{sec:sec31}
Consider a function $y(x)$ and a discrete set of points $x_0< x_1<\ldots< x_n$ and
assume that $y(x_0),\dots, y(x_n)$ are given.
The interpolation problem is to find a function $p(x)$ that satisfies
the interpolation requirements
\begin{equation}\label{interp_cond}
p(x_i)=y(x_i)\, \text{ for } i=0,\dots,n\,.
\end{equation}
The simplest way is to connect the
given points with straight lines. However, when seeking higher accuracy, it is usual to look for a function $p(x)$
that is a polynomial or a piecewise-smooth polynomial of higher degree.
Alternatively, the function $p(x)$
can be a trigonometric function, a rational polynomial function, and so on.
\subsection{Polynomial interpolation}\label{pol_int}
In the problem of polynomial interpolation, one seeks a polynomial $p(x)$ that
satisfies the interpolation condition~\eqref{interp_cond}.
Limiting the degree of
$p(x)$ to be $\le n$, one obtains exactly one interpolant $p_n(x)$ that satisfies the interpolation conditions.
A very common strategy is to construct this interpolant in terms of Lagrange polynomials, namely
\begin{equation}\label{lagrange_interp}
p_n(x)=\sum_{i=0}^{n}y(x_i)\ell_i(x)\,,
\end{equation}
where the Lagrange basis polynomials, $\ell_i(x)$, given by
\begin{equation*}
\ell_i(x)=\prod_{\substack{0\le m \le n\\m\neq i}}\frac{x-x_m}{x_i-x_m}\,,
\end{equation*}
satisfy the relation $\ell_i(x_j)=\delta_{ij}$, with $\delta_{ij}$ being the Kronecker delta
\begin{equation*}
\delta_{ij} =
\begin{cases}
1 & \text{ if } i = j \,, \\
0 & \text{ if } i \ne j\,.
\end{cases}
\end{equation*}
The interpolant $p_n(x)$ and the interpolated function $y(x)$ in Equation~\eqref{lagrange_interp}
satisfy the condition~\eqref{interp_cond},
that is, they agree with each other at the interpolation points.
In general, there is no reason to expect them to be close to each other elsewhere. Nevertheless, one can estimate
the difference between them, the so-called interpolation error.
Let $\Pi_n$ denote the space of polynomials of degree $\le n$, and let $C^{n+1}[a,b]$ denote the
space of functions that have $n+1$ continuous derivatives on the interval $[a, b]$.
Suppose $y(x)\in C^{n+1}[a,b]$ and let $p_n(x)\in \Pi_n$ be the unique polynomial
that interpolates $y(x)$
at the $n+1$ distinct points $x_0< x_1<\dots< x_n \in [a,b]$.
Then, for all $x \in [a,b]$ there is a $\xi \in (a,b)$, such that
\begin{equation}\label{pol_accuracy1}
y(x)-p_n(x) =\frac{1}{(n+1)!}y^{(n+1)}(\xi)\prod_{j=0}^n(x-x_j)\,.
\end{equation}
Defining $h= \max_j x_{j+1}-x_j$, one obtains
\begin{equation}\label{pol_accuracy2}
\vert y-p_n\vert_\infty \coloneqq \sup_{a\le x\le b}|y(x)-p_n(x)|\le \frac{h^{n+1}}{4n}\vert y^{(n+1)} \vert_\infty\,.
\end{equation}
This means that, by assuming $y(x)$ smooth enough,
the error decreases to zero as $\mathcal{O}(h^{n+1})$ and the interpolation has an order of accuracy $n+1$.
However, the smoothness assumption of $y(x)$ is not always fulfilled
and polynomial interpolations may become inaccurate in the presence of discontinuities.
Moreover, the error of the interpolation does not necessarily decrease by increasing
the order of the polynomial.
In fact, even though
$h^{n+1}$ may go to zero for $n\rightarrow\infty$, the term $|y^{(n+1)}|$ can grow rapidly, preventing convergence in Equation~\eqref{pol_accuracy2}.
Equidistant interpolation of Runge's function (the so-called Runge phenomenon) is a striking example of this.
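For the reader's convenience (a standard textbook example), Runge's function reads
\begin{equation*}
y(x)=\frac{1}{1+25x^2}\,,\quad x\in[-1,1]\,,
\end{equation*}
and its equidistant interpolants $p_n(x)$ develop oscillations near the endpoints of the interval whose amplitude grows without bound as $n\rightarrow\infty$.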
\subsection{Impact of discontinuities}\label{subsec_int_disc}
Error estimates~\eqref{pol_accuracy1} and~\eqref{pol_accuracy2} assume some smoothness of $y(x)$
and the order of accuracy of interpolations decreases for less regular functions.
Suppose that $y(x) \in C^{s}[a,b]$ with $s<n$.
Then, Equation~\eqref{pol_accuracy1} must be replaced by
\begin{equation}
y(x)-p_n(x)=\frac{y^{(s)}(\xi)-p_n^{(s)}(\xi)}{n!}(x-\tilde x_1)(x-\tilde x_2)\dots(x-\tilde x_s)\,,
\label{disc_interp_err}
\end{equation}
where $\{\tilde x_j\}^s_{j=1}$ is any subset of $\{x_j\}^{n}_{j=0}$.
By the same arguments that led up to Equation~\eqref{pol_accuracy2},
one obtains
\begin{equation*}
\vert y-p_n\vert_\infty = \mathcal{O}(h^{s})\,,
\end{equation*}
that is, the order of accuracy of the polynomial interpolation is reduced to $s$.
Moreover, it is known that standard second-order or higher interpolations are oscillatory near discontinuities and such oscillations might
lead to numerical inaccuracy and even to numerical instability in nonlinear problems \citep[e.g.,][]{richards1991,shu1998}.
This implies, first, an over/undershoot in $p_n(x)$ ($n>1$) whenever the function $y(x)$ has a jump discontinuity
and, second, the failure of higher-order approximations to remove the overshoot.
This behavior, similar to the Gibbs phenomenon in Fourier series, is also known as the Gibbs phenomenon.
For example, \citet{zhang1997} showed that cubic spline interpolations on uniform meshes always oscillate near a jump discontinuity.
For these reasons, it makes little sense (and could even be noxious) to use standard high-order interpolation $(n>s)$ when the smoothness of $y(x)$ is not guaranteed.
\subsection{Interpolatory quadratures}\label{subsec_int_quad}
Consider a function $y(x)$ and assume that a discrete set of points $x_0< x_1<\ldots< x_n$ and the data values
$y(x_0),\dots, y(x_n)$ are given.
A systematic approximation of the integration of $y(x)$ in an interval $[\tilde a,\tilde b]$ is commonly called a quadrature
and its formula is usually given in the form
\begin{equation}\label{quadrature}
\int_{\tilde a}^{\tilde b} y(x){\rm d}x\approx\sum_{i=0}^n y(x_i)\omega_i\,,
\end{equation}
where $\omega_i$ are the quadrature weights.
A usual way to derive a quadrature formula is through the so-called interpolatory quadrature:
one makes an assumption for the polynomial form of the integrand in the integration interval and analytically solves the integral.
For instance, the Lagrange interpolation given by Equation~\eqref{lagrange_interp} yields
\begin{equation*}
\int_{\tilde a}^{\tilde b} y(x){\rm d}x\approx \sum_{i=0}^n y(x_i)\int_{\tilde a}^{\tilde b} \ell_i(x){\rm d}x\,,
\end{equation*}
where the integrals of the Lagrange basis polynomials provide the quadrature weights.
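In the simplest nontrivial case (a standard example), namely $n=1$ with $x_0=\tilde a$ and $x_1=\tilde b$, the weights evaluate to
\begin{equation*}
\omega_0=\int_{\tilde a}^{\tilde b}\frac{x-\tilde b}{\tilde a-\tilde b}\,{\rm d}x=\frac{\tilde b-\tilde a}{2}=\omega_1\,,
\end{equation*}
and one recovers the trapezoidal quadrature rule.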
The quadrature error, i.e., the difference between the left-hand and the right-hand sides
of Equation~\eqref{quadrature}, decreases to zero as $\mathcal{O}(h^{n+2})$.
Moreover, the quadrature is exact for all polynomials of degree $\le n$, that is, for all $y(x)\in\Pi_{n}$.
However, Section~\ref{subsec_int_disc} shows that a polynomial of degree $n$ guarantees $\mathcal{O}(h^{n+1})$ accuracy only if
$y(x) \in C^{n+1}$ in the integration interval.
Consequently, interpolatory quadratures suffer from the same order-reduction issue of interpolations when facing discontinuities.
Moreover, the presence of overshoots in the interpolated integrand
reduces the accuracy of the quadrature
and could even yield negative results in the quadrature of positive integrands.
For this reason, the use of standard high-order interpolatory quadratures is not encouraged when the smoothness of $y(x)$ is not guaranteed.
\section{Discontinuities in radiative transfer}\label{sec:sec3}
Standard radiative transfer calculations usually assume smooth variations in the radiation field and in the atmospheric physical parameters.
The incidence of discontinuities is often neglected, without considering
whether this assumption is justified or not.
This section inquires into the role of discontinuities in the radiative transfer equation,
focusing on the possible numerical issues.
\subsection{Discontinuities in solar and stellar atmospheres}
The plasma of solar and stellar atmospheres can be highly dynamic,
with its properties fluctuating sharply over small scales.
For instance, jumps in the magnetic field frequently appear:
a typical example is the magnetopause at the boundary of a flux tube, where
the magnetic field of the tube is separated from the surrounding field-free
plasma by a thin transition layer.
Another example is given by magnetic canopies, that is, horizontally
extending magnetic fields overlying field-free regions of internetwork cells and
separated from them by a thin transition layer \citep{steiner2000}.
Abrupt variations in temperature can also be present:
for example, the jump in temperature of three orders of magnitude that takes place in the solar transition region.
Clouds of cool material embedded in the chromosphere or corona
are usually associated with jumps in bulk velocities as well.
For instance, a magnetic flux tube interacting with convective motions shows a downwardly directed flow along its boundary \citep{steiner1998}.
The presence of shock fronts could also imply discontinuities in the velocity field.
All these sudden variations of the physical quantities describing the atmosphere might translate into sudden variations in absorption and emissivity.
Radiative transfer calculations make use of discrete atmospheric models and
one can distinguish between two cases:
on the one hand, there are semi-empirical atmospheric models \citep[e.g.,][]{vernazza1981,fontenla1990,fontenla1993}, which are still frequently used to synthesize spectral lines.
In these models, the physical parameters across the atmosphere usually vary smoothly.
On the other hand, state-of-the-art 3D R-MHD simulations of stellar atmospheres show contact discontinuities, steep
gradients in state variables, and shock fronts.
In fact, if the MHD equations are solved by the method of characteristics
then shocks are represented by mathematical discontinuities \citep{mihalas1980},
while in the case of a nonideal fluid, sharp structures might be smeared over a few zones in the mesh.
Nonetheless, one still deals with a quasi discontinuity when the jump in physical states occurs
on a spatial scale much smaller than that of the
numerical discretization \citep{steiner2016}.
Discrete models provide the physical parameters describing the atmosphere at a discrete set of spatial points only.
Section~\ref{sec:sec2_3} explains that, in this case,
there is no well defined concept of discontinuity.
However, one is still interested in determining the possible impact of such features on the numerical solution.
\subsection{Radiative transfer equation}
The monochromatic (time-dependent) transfer of unpolarized
light is described by the following scalar partial differential equation \citep{mihalas1978}
\begin{equation}
\left[\frac{1}{c}\frac{\partial}{\partial t}+\frac{\partial}{\partial s}\right] I_{\nu}(s,t) = -\chi_{\nu}(s,t) I_{\nu}(s,t) + \epsilon_{\nu}(s,t)\,,
\label{eq:general_scalar_RTE}
\end{equation}
where $I_{\nu}$ is the specific intensity of radiation propagating in the ray path direction at time $t$. Moreover, $\chi_{\nu}$ and $\epsilon_{\nu}$ are the absorption and the emission coefficients, respectively, $\nu$ is the frequency, and $c$ is the speed of light. The spatial coordinate $s$ denotes the position along the ray under consideration. Equation~\eqref{eq:general_scalar_RTE} is a hyperbolic partial differential equation and, specifically, an advection equation with source and sink terms.
It is known that computing the numerical solution of advection equations can be particularly challenging.
However, in the vast majority of radiative transfer problems in astrophysics, the photon free-flight time
is much smaller than the fluid dynamics timescales and the radiation field fully stabilizes before any change occurs in the medium.
Therefore, one usually assumes the steady-state version of Equation~\eqref{eq:general_scalar_RTE},
consisting in the linear first-order inhomogeneous ODE given by \citep{mihalas1978}
\begin{equation}
\frac{\rm d}{{\rm d} s} I_{\nu}(s) = -\chi_{\nu}(s) I_{\nu}(s) + \epsilon_{\nu}(s)\coloneqq F_{\nu}(s,I_{\nu}(s))\,.
\label{eq:scalar_RTE}
\end{equation}
Assuming that $F_{\nu}$ is Riemann integrable in the interval $[s_0,s]$, one formally integrates Equation~\eqref{eq:scalar_RTE} and obtains
\begin{equation}
I_{\nu}(s)=I_{\nu}(s_{0})+\int_{s_{0}}^{s}F_{\nu}(x,I_{\nu}(x)){\rm d}x\,.
\label{scalar_solution}
\end{equation}
Here, the problem of calculating the emergent radiation consists in the evaluation of the integral in the right-hand side of Equation~\eqref{scalar_solution},
which in turn depends on the specific intensity $I_{\nu}$.
When dealing with radiative transfer in astrophysical plasmas, it is common to refer to the concept of optical depth. Replacing the coordinate $s$ by the optical depth $\tau_{\nu}$ defined by
\begin{equation}
{\rm d}\tau_{\nu}=-\chi_{\nu}(s){\rm d}s\,,
\label{opt_depth}
\end{equation}
one recasts Equation~\eqref{eq:scalar_RTE} into
\begin{equation}
\frac{\rm d}{{\rm d} \tau_{\nu}} I_{\nu}(\tau_{\nu}) = I_{\nu}(\tau_{\nu}) - S_{\nu}(\tau_{\nu})\,,
\label{eq:scalar_RTE_tau}
\end{equation}
where $S_{\nu} = \epsilon_{\nu}/\chi_{\nu}$
is the source function, that is, the ratio between the emission and absorption coefficients.
Moreover, the formal integration of Equation~\eqref{eq:scalar_RTE_tau} in the interval $\left[\tau_{\nu,0},\tau_{\nu}\right]$
gives
\begin{equation}
I_{\nu}(\tau_{\nu})=e^{-(\tau_{\nu,0}-\tau_{\nu})}I_{\nu}(\tau_{\nu,0})+\int^{\tau_{\nu,0}}_{\tau_{\nu}}e^{-(x-\tau_{\nu})} S_{\nu}(x){\rm d}x\,,
\label{scalar_solution_tau}
\end{equation}
where $\tau_{\nu,0}\ge\tau_{\nu}$. Here, the problem of calculating the emergent radiation
reduces to an integral evaluation.
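As an elementary sanity check (a standard limiting case), for a constant source function $S_{\nu}(x)\equiv S_{\nu}$ the integral in Equation~\eqref{scalar_solution_tau} can be evaluated analytically, yielding
\begin{equation*}
I_{\nu}(\tau_{\nu})=e^{-(\tau_{\nu,0}-\tau_{\nu})}I_{\nu}(\tau_{\nu,0})+\left[1-e^{-(\tau_{\nu,0}-\tau_{\nu})}\right]S_{\nu}\,,
\end{equation*}
so that the specific intensity relaxes exponentially to $S_{\nu}$ as the optical distance $\tau_{\nu,0}-\tau_{\nu}$ increases.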
\subsection{Numerical approximations}
\citet{auer2003} defined the (numerical) formal solution of Equation~\eqref{eq:scalar_RTE} as the evaluation of the radiation field,
given knowledge of the opacity $\chi_k$ and the emissivity $\epsilon_k$
coefficients at each grid point $s_{k}$, and of the boundary conditions.
The first step in the numerical approach to an ODE involves the discretization of the integration domain.
Therefore, one discretizes the ray path under consideration with a spatial depth grid $\{s_k\}$ (or an optical depth grid $\{\tau_k\}$) with $k=0,\dots,N$.
The index $k$ increases along the propagation direction. The numerical approximation of a certain quantity at node $s_k$ (or $\tau_k$)
is indicated by substituting the explicit dependence on $s$ (or $\tau$) with the subscript $k$, for instance,
\begin{equation*}
I_k \approx I(s_k)\,.
\end{equation*}
For notational simplicity, the frequency dependence is omitted.
Inserting numerical quantities in Equation~\eqref{scalar_solution}, one obtains\footnote{The very same procedure can apply starting from
Equation~\eqref{eq:scalar_RTE_tau} instead of Equation~\eqref{eq:scalar_RTE}. \citet{janett2018a} analyze the differences
between the two cases.}
\begin{equation*}
I_{k+1}=I_k+\int_{s_k}^{s_{k+1}}F(s,I(s)){\rm d}s\,.
\end{equation*}
Different approximations of the integral on the right-hand side yield different numerical schemes.
Well-known examples are the backward Euler method, the implicit trapezoidal method,
and the third-order Runge-Kutta method (see Table~\ref{tab:schemes}).
Note that some numerical schemes (e.g., Runge-Kutta 3) use intermediate grid points for the integration.
In this case, one must recover the relevant quantities at off-grid points, which are typically provided through interpolation.
This procedure may alter absorption and emission coefficients and introduce numerical errors.
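For concreteness (a minimal sketch of the simplest implicit scheme, with $h_k=s_{k+1}-s_k$), the backward Euler method applied to Equation~\eqref{eq:scalar_RTE} reads
\begin{equation*}
I_{k+1}=I_k+h_k\left(-\chi_{k+1}I_{k+1}+\epsilon_{k+1}\right)\,,
\end{equation*}
which can be solved explicitly for the new intensity,
\begin{equation*}
I_{k+1}=\frac{I_k+h_k\,\epsilon_{k+1}}{1+h_k\,\chi_{k+1}}\,,
\end{equation*}
a quantity that remains positive whenever $I_k$, $\chi_{k+1}$, and $\epsilon_{k+1}$ are positive.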
However, the most common formal solutions
are based on Equation~\eqref{scalar_solution_tau}.
Inserting numerical quantities, one gets
\begin{equation}
I_{k+1}=e^{-\Delta\tau_k}I_k+\int^{\tau_k}_{\tau_{k+1}}e^{-(x-\tau_{k+1})} S(x){\rm d}x\,,
\label{sc_formal_solution}
\end{equation}
where,
from Equation~\eqref{opt_depth}, one has
\begin{equation}
\Delta \tau_k = \tau_k -\tau_{k+1} = \int_{s_k}^{s_{k+1}}\chi(s){\rm d}s\,.
\label{conversion_opt_depth}
\end{equation}
This recursive relation is the starting point for
the so-called exponential integrators \citep{hochbruck2010}, which are a family of numerical schemes
that exhibit strong stability properties\footnote{Positivity of $\Delta \tau_k$ guarantees $L$-stability \citep{janett2018a}.}.
The numerical problem is reduced to the quadrature of the integrals in Equations~\eqref{sc_formal_solution} and~\eqref{conversion_opt_depth},
for which different techniques are available.
The knowledge of $S$ and $\chi$ at the neighboring grid points suggests the use of interpolatory quadratures (see Section~\ref{subsec_int_quad}).
The standard choice is to use Lagrange polynomials and the simplest strategy is either constant or linear approximations inside the integration interval.
However, \citet{auer+paletou1994} noted that in order to recover the diffusion approximation, it is necessary to use parabolic or higher-order interpolations of $S$.
\citet*{olson+kunasz1987} and \citet*{kunasz+olson1988} made use of parabolic interpolations and,
alternatively, \citet*{mihalas+auer1978} proposed cubic Hermite interpolations of $S$.
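To make the linear case explicit (a standard result, corresponding to the linear short-characteristics and DELO-linear schemes), a linear interpolation of $S$ inside the cell, inserted in Equation~\eqref{sc_formal_solution}, gives
\begin{equation*}
I_{k+1}=e^{-\Delta\tau_k}I_k+\Psi_k S_k+\Psi_{k+1}S_{k+1}\,,
\end{equation*}
with the weights
\begin{align*}
\Psi_k&=\frac{1-(1+\Delta\tau_k)e^{-\Delta\tau_k}}{\Delta\tau_k}\,,\\
\Psi_{k+1}&=1-e^{-\Delta\tau_k}-\Psi_k\,.
\end{align*}
In the optically thick limit $\Delta\tau_k\rightarrow\infty$ one has $I_{k+1}\rightarrow S_{k+1}$, while for $\Delta\tau_k\rightarrow0$ both weights reduce to $\Delta\tau_k/2$, that is, to the trapezoidal rule.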
Section~\ref{sec:sec2} explains that
the accuracy of numerical schemes for ODEs rely on different assumptions on the degree of smoothness of the functional $F$.
Moreover, Section~\ref{subsec_int_disc} argued that polynomial interpolations are inaccurate around discontinuities,
whereas Section~\ref{subsec_int_quad} showed that the accuracy of interpolatory quadratures
relies on a sufficient smoothness of the integrand.
\subsection{Polarized radiative transfer equation}\label{section_polarized_rte}
Most classical radiative transfer problems do not consider polarization. However, the transfer of polarized light is of particular interest in many applications.
The radiative transfer of partially polarized light is described by the system of first-order coupled inhomogeneous ODEs given by
\begin{equation}\label{eq:RTE}
\frac{\rm d}{{\rm d} s}\mathbf I_{\nu}(s)
= -\mathbf K_{\nu}(s)\mathbf I_{\nu}(s) + \boldsymbol{\epsilon}_{\nu}(s)\,,
\end{equation}
where $s$ is the spatial coordinate measured along the ray under consideration, $\mathbf{I}_{\nu}=(I,Q,U,V)^{T}$ is the Stokes vector,
\begin{equation*}
\mathbf K_{\nu} = \begin{pmatrix}
\eta_I & \eta_Q & \eta_U & \eta_V \\
\eta_Q & \eta_I & \rho_V & -\rho_U \\
\eta_U & -\rho_V & \eta_I & \rho_Q \\
\eta_V & \rho_U & -\rho_Q & \eta_I
\end{pmatrix}\,
\label{matrix_K}
\end{equation*}
is the $4\times4$ propagation matrix,
and $\boldsymbol{\epsilon}_{\nu}=(\epsilon_I,\epsilon_Q,\epsilon_U,\epsilon_V)^{T}$ is the emission vector \citep{landi_deglinnocenti+landolfi2004}.
For notational simplicity the frequency dependence of all the vectorial and matricial entries is omitted.
The peculiarity of Equation~\eqref{eq:RTE} originates simply from its matricial character \citep{landi_deglinnocenti+landolfi2004}.
In fact, the assumption of vanishing off-diagonal coefficients in the propagation matrix decouples Equation~\eqref{eq:RTE}
into four independent scalar problems formally identical to Equation~\eqref{eq:scalar_RTE}, reducing the vectorial problem to the scalar one.
The diagonal element of the propagation matrix corresponds to the total absorption coefficient in the unpolarized case,
that is, $\eta_I\equiv\chi_{\nu}$.
The generalization of the definition of the scalar formal solution to the polarized case consists in substituting radiation intensity, opacity, and emissivity by Stokes vector, propagation matrix, and emission vector, respectively.
The polarized problem is considerably more complex than the scalar one.
For instance, the problem cannot be reduced to simple quadratures \citep[see][]{landi_deglinnocenti+landolfi2004}.
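For reference, the formal solution of Equation~\eqref{eq:RTE} must instead be expressed through the evolution operator $\mathbf O(s,s')$ \citep[see][]{landi_deglinnocenti+landolfi2004}, namely
\begin{equation*}
\mathbf I_{\nu}(s)=\mathbf O(s,s_0)\,\mathbf I_{\nu}(s_0)+\int_{s_0}^{s}\mathbf O(s,x)\,\boldsymbol{\epsilon}_{\nu}(x)\,{\rm d}x\,,
\end{equation*}
which reduces to a simple exponential only when the propagation matrices at different points commute, as in the scalar limit where $\mathbf O(s,s')=\exp\left(-\int_{s'}^{s}\chi_{\nu}(x)\,{\rm d}x\right)$.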
Moreover, the radiative transfer equation for polarized light exhibits stiff behavior, and numerical schemes face instability issues \citep{janett2018a}.
An extensive analysis of the numerical solution of the polarized radiative transfer equation is given by \citet{janett2017a,janett2017b},
who characterized several exponential integrators (DELO methods) and
high-order formal solvers.
Just like in the scalar case, the vectorial formal solution suffers from numerical issues in the presence of discontinuities.
For the sake of simplicity, the full convergence analysis is not repeated here,
but the numerical tests presented in Section~\ref{sec:sec5} include the polarized problem.
\subsection{Two- and three-dimensional problems}
In 1D problems, any ray under consideration is discretized by the grid points.
However, in 2D or 3D problems,
the ray path (long characteristics) or the downwind and upwind ray segments (short characteristics)
do not generally coincide with the grid points at which the physical quantities are given.
At each integration step, one must then recover the neighboring absorption and emission (or source function) coefficients
and the upwind intensity $I_k$
at off-grid points\footnote{Multistep methods also make use of additional upwind intensities ($I_{k-1},I_{k-2},\dots)$.
However, it is very uncommon to apply this class of numerical schemes to radiative transfer problems.}.
This is typically performed through interpolations, whose role is to ``fill in the gaps'' between the known discrete values \citep{auer2003,steffen2017}.
In 2D problems, the usual strategy is to discretize the ray by taking its intersections with the segments connecting the grid points,
dealing then with 1D interpolations to recover the off-grid quantities \citep{auer2003}.
The linear interpolations from the values of the nearest grid points
is the simplest and fastest method,
but it leads to a large numerical dispersion \citep{fabianibendicho2003,ibgui2013}.
As the accuracy of the formal solution depends on the accuracy of the interpolated off-grid
quantities, it is often necessary to use high-order interpolations.
In 3D problems, an efficient approach is to discretize the ray by taking its intersections with the individual planes defined by the grid points,
then dealing with 2D interpolations.
The simplest approach is to use bilinear interpolations, because a cell wall has four corner points.
Once again, linear interpolations are not suitable to reach high accuracy.
Extending to higher-order, one can use bi-quadratic or bi-cubic interpolations \citep{dullemond2013}.
Section~\ref{subsec_int_disc} explains that if the behavior of the quantities to be interpolated is discontinuous or particularly intermittent,
high-order interpolations oscillate and may introduce artifacts.
Both \citet{kunasz+auer1988} and \citet{auer2003} highlighted that high-order Lagrange polynomials may introduce spurious extrema
that could lead to nonphysical negative values of the source function or of the intensity.
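
As a concrete illustration of the simplest option mentioned above, the following Python sketch (assuming a uniform Cartesian grid; index bounds are not checked) performs a bilinear interpolation from the four corner points of a cell wall:
\begin{verbatim}
import numpy as np

def bilinear(f, x0, y0, dx, dy, xq, yq):
    # Bilinear interpolation of the gridded array f at the
    # off-grid point (xq, yq); (x0, y0) is the grid origin.
    i = int((xq - x0) // dx)
    j = int((yq - y0) // dy)
    tx = (xq - x0 - i * dx) / dx
    ty = (yq - y0 - j * dy) / dy
    return ((1 - tx) * (1 - ty) * f[i, j]
            + tx * (1 - ty) * f[i + 1, j]
            + (1 - tx) * ty * f[i, j + 1]
            + tx * ty * f[i + 1, j + 1])
\end{verbatim}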
\section{Numerical tests}\label{sec:sec5}
Numerical analysis alone is merely theory;
numerical evidence is mandatory to assess the mathematical predictions given in the previous sections.
Numerical evidence for oscillatory interpolations is easily accessible in the literature and therefore is not presented here.
In order to determine whether and how discontinuities (or high-gradients) affect the numerical
integration of Equation~\eqref{eq:RTE}, different scenarios are investigated.
Since Equation~\eqref{eq:RTE} is always integrated along a ray \citep{auer2003},
the 1D geometry is adopted.
At first, different analytical models for the scalar radiative transfer are analyzed and,
secondly, the vectorial polarized case is considered.
One should avoid drawing premature conclusions on the mere basis of small details of error curves.
However, the numerical tests presented in this section allow for some general considerations.
Table~\ref{tab:schemes} gives an overview of the numerical schemes used.
\begin{figure*}
\centering
\includegraphics[width=1.\textwidth]{disc.eps}
\caption{Top row: log-log representation
of the global error in the emergent intensity profile as a function of the number of grid-points.
Bottom row: Local error as a function of the spatial coordinate $s\in[0,100]$ for a sampling of 50 equispaced grid points for a single wavelength near the line core.
The four columns correspond to four different atmospheric models, where
the absorption coefficient $\chi_{\nu}$ and emissivity $\epsilon_{\nu}$ are given, respectively,
by Equations~\eqref{case1_k} and~\eqref{case1_e} (first column),
by Equations~\eqref{case2_k} and~\eqref{case2_e} (second column),
by Equations~\eqref{case3_k} and~\eqref{case3_e} (third column), and
by Equations~\eqref{case4_k} and~\eqref{case4_e} (fourth column).
The parameters are:
$c_1=0.01$, $c_2=0.05$, $c_3=0.15$, $k_1=1/25$, $k_2=1/15$, $k_3=5$,
$j_1=2$, $j_2=20$, $s_{d1}=39.985$, and $s_{d2}=39.955$.
This choice of $s_{d1}$ and $s_{d2}$ guarantees that the grid points approach the location
of the discontinuity (or sharp gradient) monotonically as the grid sampling is refined.
The global and the local error are computed as described in Equations~\eqref{error} (considering the Stokes $I$ component only) and~\eqref{local_error}, respectively,
with a reference atmospheric model with $10^4$ grid points.}
\label{fig:convergence_localerror}
\end{figure*}
\begin{table}
\caption{Numerical schemes}
\setlength{\tabcolsep}{5pt}\renewcommand{\arraystretch}{1.5}
\centering
\begin{tabular}{ l c c}
\hline
\hline
\emph{Formal solver} & \emph{Order} & \emph{Stability} \\%& \emph{Proposed by}\\
\hline
\hline
Backward-Euler & 1 & $L$-stable \\
Trapezoidal & 2 & $A$-stable \\
DELO-linear & 2 & $L$-stable\tablefootmark{a} \\%& \citet{rees+al1989}\\
Pragmatic 2 & 2 & $L$-stable\tablefootmark{a} \\%& \citet{janett2018a}\\
Pragmatic 3 & 3 & $L$-stable\tablefootmark{a} \\%& \citet{janett2018a}\\
Runge-Kutta 3 & 3 & - \\
cubic DELO-B\'ezier & 4 & $L$-stable\tablefootmark{a}\\%& \citet{delacruz_rodriguez+piskunov2013}\\
cubic Hermitian & 4 & $A$-stable \\\hline
\end{tabular}
\tablefoot{
\tablefoottext{a}{$L$-stability is guaranteed for the scalar problem (or diagonal $\mathbf K_{\nu}$ in Equation~\eqref{eq:RTE})}
}
\label{tab:schemes}
\end{table}
\subsection{Scalar case}\label{sec:sec5.1}
This section presents the integration of
Equation~\eqref{eq:scalar_RTE} in four different scenarios,
where the absorption and emission coefficients show, respectively:
\begin{enumerate}[(i)]
\item a smooth behavior;
\item a discontinuity in absorption;
\item a discontinuity in emissivity;
\item a high gradient in absorption.
\end{enumerate}
Spectral line profiles are synthesized using a modified version of the Octave code used by \citet{janett2017a,janett2017b},
while the spectral line parameters are identical to those described in Appendix C of \citet{janett2017a}.
\subsection*{Case {\rm (i)}: Smooth atmosphere}
In order to verify the predicted order of accuracy of the different numerical schemes,
the case of a smooth atmospheric model is first analyzed.
Consider Equation~\eqref{eq:scalar_RTE} where
\begin{equation}
\chi_{\nu}(s) = c_1 e^{-k_1s}\,,\label{case1_k}
\end{equation}
\begin{equation}
\epsilon_{\nu}(s) = c_2e^{-k_2s} \,.\label{case1_e}
\end{equation}
Figure~\ref{fig:convergence_localerror}(a) shows that the various numerical methods effectively achieve their predicted order of convergence.
In this case, the smoothness assumptions are fulfilled and, consequently, the use of high-order methods either allows
one to reach higher accuracy or makes the use of coarser spatial grids feasible.
Moreover, Figure~\ref{fig:convergence_localerror}(b) shows that local errors are larger where absorption and emission gradients are higher.
\subsection*{Case {\rm (ii)}: Discontinuity in absorption}
The effect of discontinuities in the right-hand side of Equation~\eqref{eq:scalar_RTE} is first analyzed by considering
\begin{equation}
\chi_{\nu}(s) =
\begin{cases}
c_1\left(j_1+e^{-k_1s}\right) & \text{if}\; s<s_{d1}\,, \\
c_1 e^{-k_1s} & \text{if}\; s\ge s_{d1}\,, \label{case2_k}
\end{cases}
\end{equation}
\begin{equation}
\epsilon_{\nu}(s) = c_2e^{-k_2s} \,.\label{case2_e}
\end{equation}
A first-order jump of magnitude $j_1$ occurs in the absorption coefficient when the independent variable $s$ reaches the value $s_{d1}$.
Figure~\ref{fig:convergence_localerror}(c) clearly shows the order breakdown due to the first-order discontinuities.
Moreover, Figure~\ref{fig:convergence_localerror}(d) shows that the global error is dominated by the local errors due to the discontinuity.
\subsection*{Case {\rm (iii)}: Discontinuity in emissivity}
The effect of discontinuities in the right-hand side of Equation~\eqref{eq:scalar_RTE} is analyzed by considering
\begin{equation}
\chi_{\nu}(s) = c_1 e^{-k_1s}\,,\label{case3_k}\end{equation}
\begin{equation}
\epsilon_{\nu}(s) =
\begin{cases}
c_2\left(j_2+e^{-k_2s}\right) & \text{if}\; s<s_{d1}\,, \\
c_2 e^{-k_2s} & \text{if}\; s\ge s_{d1}\,. \label{case3_e}
\end{cases}
\end{equation}
A first-order jump of magnitude $j_2$ occurs in the emission coefficient when the independent variable $s$ reaches the value $s_{d1}$.
Just like in case (ii), Figure~\ref{fig:convergence_localerror}(e) clearly shows the predicted order breakdown due to the first-order discontinuities.
Moreover, Figure~\ref{fig:convergence_localerror}(f) shows that the global error is dominated by the local errors due to discontinuity.
\subsection*{Case {\rm (iv)}: High gradient in absorption}
Atmospheric models can involve rapidly varying physical parameters without an actual mathematical discontinuity.
The effect of sharp variations in the right-hand side of Equation~\eqref{eq:scalar_RTE} is analyzed with the model
\begin{equation}
\chi_{\nu}(s)= c_1\left(j_1\cdot\frac{1}{2}\bigl(1-\tanh(k_3(s-s_{d2}))\bigr)+e^{-k_1s}\right)\,,\label{case4_k}
\end{equation}
\begin{equation}
\epsilon_{\nu}(s) = c_2e^{-k_2s} \,.\label{case4_e}
\end{equation}
A sharp gradient occurs in the absorption coefficient when the independent variable $s$ reaches the value $s_{d2}$.
For $k_3\rightarrow\infty$ the function $\frac{1}{2}(1-\tanh(k_3(s-s_{d2})))$
tends to a step function with the discontinuity at $s = s_{d2}$.
This case represents the continuous version of case (ii).
For coarse samplings, Figure~\ref{fig:convergence_localerror}(g) clearly shows the order breakdown due to the high gradient.
The global error is dominated by the local errors due to the high gradient, and
such errors show only first-order convergence until the sampling starts to resolve the sharp gradient.
In fact, Section~\ref{sec:int_disc_data} explains that
there is no difference between a discontinuity and a sharp gradient
for a set of discrete data which is not able to resolve the local feature.
Figure~\ref{fig:convergence_localerror}(g) shows that, once the sampling resolves the high gradients,
the numerical methods start to converge according to their predicted order of accuracy.
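
The order breakdown described above can be reproduced with a few lines of code. The following Python sketch (a minimal illustration using the implicit trapezoidal rule and the case-(iv) parameters quoted in the caption of Figure~\ref{fig:convergence_localerror}; the reference solution simply uses $10^4$ grid points) integrates Equation~\eqref{eq:scalar_RTE} on $s\in[0,100]$:
\begin{verbatim}
import numpy as np

c1, c2 = 0.01, 0.05
k1, k2, k3 = 1/25, 1/15, 5.0
j1, sd2 = 2.0, 39.955
chi = lambda s: c1*(j1*0.5*(1 - np.tanh(k3*(s - sd2)))
                    + np.exp(-k1*s))
eps = lambda s: c2*np.exp(-k2*s)

def emergent(n):
    # Trapezoidal integration of dI/ds = -chi I + eps.
    s = np.linspace(0.0, 100.0, n)
    I = 0.0
    for a, b in zip(s[:-1], s[1:]):
        h = b - a
        I = ((1 - 0.5*h*chi(a))*I + 0.5*h*(eps(a) + eps(b))) \
            / (1 + 0.5*h*chi(b))
    return I

ref = emergent(10**4)
for n in (50, 100, 200, 400, 800, 1600):
    print(n, abs(emergent(n) - ref))  # ~first order until resolved
\end{verbatim}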
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{model_disc.eps}
\caption{Temperature (top), upward directed line of sight velocity (middle), and three components of the magnetic field (bottom),
all as a function of the geometrical height in the atmosphere.
The dashed lines represent the FALC smooth atmospheric model, while solid lines show discontinuities in temperature, velocity, and magnetic field profiles.}
\label{fig:model_disc}
\end{figure}
\subsection{Polarized case}\label{sec:sec5.2}
This section presents some numerical evidence for the integration of Equation~\eqref{eq:RTE}
in the case of discontinuities in the atmospheric physical parameters.
From these numerical tests, one can still draw conclusions about the scalar problem considering the Stokes $I$ component only.
Stokes profiles of the photospheric Sr~{\sc i} line at 4607.3 {\rm \AA} are synthesized using a modified version of the RH code of \citet{uitenbroek2001}
that allows one to switch between different formal solvers and to sequentially perform Stokes-profile syntheses
with a set of discrete atmospheric models \citep[see][]{janett2018b}.
In these calculations, spectral lines are synthesized under the assumption of LTE conditions and the polarization is produced by the Zeeman effect alone.
Figure~\ref{fig:model_disc} shows the considered atmospheric models. These are
modified versions of the model C of
\citet{fontenla1993}, the so-called FALC model\footnote{An additional magnetic field is provided because FAL models do not include a specific magnetic field.}.
These models exhibit, respectively:
\begin{enumerate}[(i)]
\item a smooth profile;
\item a discontinuity in the temperature;
\item a discontinuity in the line of sight velocity;
\item a discontinuity in the magnetic field.
\end{enumerate}
These atmospheric models are homogeneously re-sampled in $\log\tau_c$
($\tau_c$ is the continuum optical depth at the wavelength $\lambda=5000$ {\rm \AA})
with different grid-point densities
in the optical depth interval $-8.6\le\log\tau_c\le1.4$, which encompasses 10 decades.
The Sr~{\sc i} line at 4607.3 {\rm \AA} is sampled with around 500 points equispaced in frequency
in a spectral interval of a few {\rm \AA} around the core.
Figure~\ref{fig:sri_disc} gives the log-log representation
of the global error in the emergent Stokes profiles as a function of the number of points-per-decade of continuum optical depth,
i.e., the number of grid points that sample a variation of one
order of magnitude in the optical depth at the wavelength $\lambda=5000$ {\rm \AA}.
These error curves allow for several general considerations.
In the pre-asymptotic regime (below 2 points-per-decade),
the comparison between the rows of Figure~\ref{fig:sri_disc} does not reveal essential differences in the error curves.
In this regime, the accuracy of numerical schemes strongly depends on the specific sampling.
Asymptotic convergence rates become relevant above 3 points-per-decade.
The first row of Figure~\ref{fig:sri_disc} shows that the different numerical methods
effectively achieve the predicted (second, third, and fourth) order of accuracy.
In this case, the smoothness assumptions are fulfilled,
the numerical solutions attain their asymptotic order of accuracy,
and high-order schemes outperform low-order methods.
By contrast, the second, third, and fourth rows of Figure~\ref{fig:sri_disc} clearly exhibit
the order breakdown due to the discontinuity.
In fact, all the numerical methods drop to first-order convergence,
making the application of high-order schemes pointless.
This attests to the fact that discontinuities in the atmospheric physical parameters effectively
induce first-order discontinuities in Equation~\eqref{eq:RTE} and
affect the convergence (and the accuracy) of numerical solutions.
In the fourth row, the error curve of Stokes $I$ seems to be only mildly affected by the discontinuity.
This is because the magnetic field has a weak impact on Stokes $I$ compared to $Q$, $U$, and $V$.
Methods that have dropped to first order require 30--40 points-per-decade to achieve an accuracy of $E_i\le10^{-2}$, for $i=1,2,3,4$ (see Equation~\ref{error}),
whereas they require prohibitively fine numerical grids to produce spectra with $E_i\le10^{-3}$.
This demonstrates the need for high-order well-behaved
formal solvers able to handle discontinuities in the radiative transfer equation.
\begin{figure*}
\centering
\includegraphics[width=1.\textwidth]{sri_disc.eps}
\caption{The log-log representation of the global error for the Stokes vector components $I,Q,U$ and $V$
as a function of the number of points-per-decade of continuum optical depth for different formal solvers (color coded).
The Sr~{\sc i} line at 4607.3 {\rm \AA} is considered for the smooth FALC atmospheric model (first row),
and for three additional atmospheric models that contain discontinuities in temperature (second row),
in line of sight velocity (third row), and in magnetic field (fourth row).
The atmospheric models are described in Figure~\ref{fig:model_disc}.
The global error is computed as exposed in Equation~\eqref{error}.}
\label{fig:sri_disc}
\end{figure*}
\section{Numerical methods for discontinuous ODEs}\label{sec:sec4}
The need for efficient and accurate numerical methods for treating discontinuous differential systems has been widely recognized.
Most of the literature on the topic is focused on adaptive stepsize numerical
methods, that is, numerical schemes that dynamically choose the length of steps along the integration.
In this way, the algorithm maintains the desired accuracy of the approximation by decreasing the step size when large derivatives
(or singularities)
appear and improves efficiency by increasing the step size whenever the smoothness of the system allows it.
In real-life radiative transfer problems, the sampling of the atmospheric model is given.
This implies that the step size is fixed \textit{a priori} and thus the adaptive step-size technique cannot be applied.
In principle, one might circumvent this problem by providing the absorption and emission quantities
at intermediate grid-points through interpolations.
However, Section~\ref{sec:sec31} points out that interpolations
suffer from local oscillations around discontinuities.
When facing a discontinuous ODE, \citet{gear1984} distinguish among four main stages in a numerical procedure:
\begin{enumerate}
\item detecting the discontinuity,
\item locating the discontinuity,
\item crossing the discontinuity,
\item restarting after the discontinuity.
\end{enumerate}
Moreover, \citet{dieci2012} classified the different numerical schemes in three main categories:
\emph{time stepping} methods, \emph{event driven} methods, and \emph{regularization} of the differential system.
This section briefly reviews the numerical schemes for the treatment of discontinuous ODEs,
investigating their applicability to Equations~\eqref{eq:scalar_RTE} and~\eqref{eq:RTE} in discontinuous media.
\subsection{Time stepping methods}
Time stepping methods usually rely on local error estimators to detect and locate discontinuities:
a large value of the local error estimate indicates the presence of a discontinuity and
different strategies to estimate local errors are available \citep{shampine1971,butcher1993}.
Alternatively, \citet{calvo2003} pointed out the possibility of detecting discontinuities by using the so-called pairs of embedded forms
or by monitoring the size of an estimate of the local defect.
Some discontinuities might avoid detection. These would probably have negligible effects
on the local truncation error and they can consequently be ignored.
In contrast, for some smooth but rapidly varying problems, one may detect discontinuities that
mathematically do not exist.
In this case,
it is often reasonable to
assume that a discontinuity is present and to act accordingly \citep{gear1984}.
In radiative transfer problems, one can
determine at each step whether or not the numerical
solution is crossing a discontinuity.
This detection should be computationally inexpensive and must fit well into the code, that is, it should be based on quantities that the code computes at each step.
Alternatively, one may locate discontinuities using the discrete values of absorption and emission coefficients and
by detecting the discontinuities in the discrete data before the integration
or during the integration.
This approach involves data analysis along the entire ray path and
has the advantage that the treatment
of the discontinuity is performed in a separate preliminary step.
A more in-depth study is required to support this strategy,
but some suitable techniques are already available \citep{gelb1999,gelb2002,mallat2006,oeffner2013}.
In real-life radiative transfer problems, the location of a discontinuity is, in some sense, harder to decipher than commonly assumed.
In discrete data,
it is usually impossible to exactly locate the discontinuity, because there is a lack of knowledge and the jump can occur anywhere
inside the interval.
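
As a minimal illustration of such a preliminary data-analysis step, the following Python sketch (a crude heuristic, not one of the cited concentration or edge-detection techniques) flags intervals whose first difference is anomalously large compared to the typical difference in the data:
\begin{verbatim}
import numpy as np

def flag_jumps(y, factor=5.0):
    # Flag intervals [i, i+1] whose first difference greatly
    # exceeds the median difference magnitude of the data set.
    d = np.abs(np.diff(y))
    scale = np.median(d) + 1e-300  # guard against flat data
    return np.where(d > factor * scale)[0]
\end{verbatim}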
After having located a discontinuity, time stepping methods adapt time steps to ensure that the local error is kept below a specified tolerance $\epsilon$.
This automatically enforces accuracy in spite of the decreased smoothness of the problem.
Accordingly, \citet{dieci2012} defined the passing stepsize $\Delta \tilde{t}$ as
\begin{equation*}
\Delta\tilde{t}=\frac{\epsilon}{K_1}\,,
\end{equation*}
where $K_1$ is the first-order jump in $f$ defined in Section~\ref{sec:subsec2.2}.
However, radiative transfer problems usually assume \textit{a priori} fixed step sizes and the adaptive techniques are not suitable.
After having crossed a discontinuity in $[t_{k-1},t_k]$, the numerical methods start up the usual integration.
However, one must be careful about the previously obtained information,
because back values of $y$ and $f$ (i.e., $y_{k-1},f_{k-1},y_{k-2},f_{k-2},\ldots$) may not correspond to those
of smooth functions. In case of a discontinuity of order $q$, they differ by terms of the order $\mathcal{O}(h^q)$ and $\mathcal{O}(h^{q-1})$, respectively.
The use of back values in the numerical integration gives rise to terms of order $\mathcal{O}(h^q)$ in the local truncation error;
clearly, it is always safe to use a numerical method that does not rely on them (e.g., the trapezoidal method and DELO-linear).
\subsection{Event-driven methods}
In many problems, discontinuities do not occur completely unexpectedly.
The so-called event-driven methods assume that there is an \emph{event function}, i.e., a function that changes sign at the location of the discontinuity \citep{dieci2012}.
\citet{mao2002} pointed out that most discontinuity-detection algorithms are based on the root-finding of event functions.
In fact, when the numerical solution reaches a root of the event function the discontinuity is detected and located.
Once the location of the discontinuity is precisely known, then the numerical method includes it as an extra grid point and
the algorithm accordingly adjusts the stepping and restarts at that point.
In doing so, the local smoothness assumptions are fulfilled,
maintaining the asymptotic correctness of the numerical solution and avoiding the order breakdown \citep{mannshardt1978}.
Unfortunately, no event function is known in radiative transfer problems, and event-driven methods are consequently not suitable in this context.
\subsection{Regularization of the differential system}
An alternative strategy is to regularize (or smooth) the differential system, removing the mathematical discontinuities.
Undoubtedly, this leads to simplifications in the theory of ODEs\footnote{The Picard-Lindel\"of Theorem ensures the existence and uniqueness of a solution
to the IVP.}. A more technical and thorough investigation of this topic can be found in \citet{llibre1997} and \citet{llibre2007}.
However, due to the large derivatives that replace the structural discontinuities, the regularized system becomes quite stiff and small time steps are then required.
Moreover, regularization can lead to changes in the dynamics of the original nonsmooth system.
For these reasons, the regularization strategy is not particularly well suited to radiative transfer problems.
\section{Interpolations for discontinuous discrete data}\label{sec:int_disc_data}
Different approximations can be used to recover the various intermittent quantities
involved in radiative transfer problems at off-grid points.
The goal is to obtain high-order accuracy in smooth regions,
avoiding spurious effects around sharp gradients and discontinuities.
The standard strategy is to use a locally monotonic\footnote{In practice,
one requires interpolants of monotonic data to themselves be monotonic.} interpolation
for each direction ($x$, $y$, $z$ and the
short-characteristic direction in Cartesian grids).
This section briefly reviews different interpolation (or reconstruction) techniques for scattered quantities,
pointing to illustrative applications to the radiative transfer problem carried out in the past.
In particular: cubic Hermite splines, B\'ezier curves, piecewise rational polynomials, slope limiters,
and high-order (weighted) essentially non-oscillatory approximations.
\subsection{Cubic Hermite splines}\label{subsec:cubic_hermite}
Given a set of points $\{x_i\}$, the Hermite interpolation $H$ not only matches a set of function values $\{y_i\}$,
but also their derivatives \citep[see][Section 4]{janett2017b}.
For notational simplicity one defines the normalized variable $t \in [0,1]$ as
\begin{equation*}
t=\frac{x-x_i}{\Delta x_i}\,,\text{ for } x \in [x_i,x_{i+1}]\,.
\end{equation*}
The cubic Hermite interpolation, approximating the function $y(t)$ inside the interval $t \in [0,1]$, reads
\begin{equation}
\begin{split}
y(t)&\approx y_i\cdot(1-3t^2+2t^3) + y_i'\cdot \Delta x_i(t-2t^2+t^3)\\
&+ y_{i+1}\cdot (3t^2-2t^3) + y_{i+1}'\cdot \Delta x_i(-t^2+t^3)\,.
\label{hermite_cubic}
\end{split}
\end{equation}
It is known that the cubic Hermite interpolant is fourth-order accurate if the derivatives are at least third-order accurate,
third-order accurate if the derivatives are second-order accurate, and so on \citep{dougherty1989}.
Therefore, given derivatives that are at least third-order accurate, the error scales as $\mathcal{O}(h^4)$,
and the cubic Hermitian interpolation is fourth-order accurate.
Monotonicity can be ensured by a suitable choice of the derivatives.
The main weakness of this approach is that it necessarily degenerates
to a linear interpolation near smooth extrema \citep{shu1998}.
\citet{fritsch1980} proposed a two-pass algorithm to recover such derivatives.
Later on,
\citet{fritsch1984} and \citet{steffen1990}
added alternative formulas to recover second-order accurate first derivatives
\footnote{The formula by \citet{fritsch1984} is
second-order accurate on uniform grids only and
it drops to first-order on nonuniform grids.}.
\citet{auer2003} suggested the use of monotonic Hermite interpolants in radiative transfer problems
and \citet{ibgui2013} used monotonic cubic Hermite polynomials \citep[version of][]{fritsch1984} in the IRIS code.
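
For reference, the following Python sketch transcribes Equation~\eqref{hermite_cubic} and illustrates one simple monotonicity-preserving derivative choice (zero at local extrema, a harmonic mean of the adjacent slopes otherwise; the precise formulas of the limiters cited above differ in detail):
\begin{verbatim}
def hermite_cubic(t, yi, yi1, dyi, dyi1, dx):
    # Eq. (hermite_cubic) on the cell [x_i, x_{i+1}], t in [0, 1].
    return (yi*(1 - 3*t**2 + 2*t**3) + dyi*dx*(t - 2*t**2 + t**3)
            + yi1*(3*t**2 - 2*t**3) + dyi1*dx*(-t**2 + t**3))

def monotone_derivative(s_left, s_right):
    # s_left, s_right: slopes of the two adjacent intervals.
    if s_left * s_right <= 0.0:
        return 0.0                      # local extremum: flatten
    return 2.0*s_left*s_right/(s_left + s_right)
\end{verbatim}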
\subsection{B\'ezier curves}\label{subsec:bezier_curves}
These well-known interpolations make use of the so-called control points (or weights) to suppress spurious extrema.
A B{\'e}zier curve of degree $q$ applied to a set of function values $\{y_i\}$ at positions $\{x_i\}$
inside the interval $t \in[0,1]$ can be defined as
\begin{equation*}
B_q(t)=\sum_{n=0}^{q} C_n B_{n,q}(t)\,,
\end{equation*}
where $C_n$ are the control points, and the Bernstein polynomials $B_{n,q}(t)$ are given by
\begin{equation*}
B_{n,q}(t)=\binom{q}{n}\cdot t^n\left(1-t\right)^{q-n}\,.
\end{equation*}
The first and the last control points define the start and end points of the B{\'e}zier curve in the interval, that is,
\begin{equation*}
C_0 = y_i\,,\text{ and }C_q = y_{i+1}\,.
\end{equation*}
All the remaining points, conventionally referred to as weights, are usually used to shape the curve.
Moreover, a B{\'e}zier curve always lies in the convex hull of the control points, that is,
in the smallest set that contains the line segment joining every pair of control points \citep[see][Section 5]{janett2017b}.
One can avoid the creation of new extrema by adjusting the weights.
However, the high-order accuracy of B{\'e}zier interpolations is
achieved by forcing the B{\'e}zier interpolants to be
identical to the corresponding degree Hermite interpolants but, in this case, monotonicity is not guaranteed.
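
The following Python sketch evaluates a B{\'e}zier curve through the Bernstein basis; the trailing comment records the standard choice of interior weights that reproduces the cubic Hermite interpolant, at the price of losing guaranteed monotonicity, as noted above:
\begin{verbatim}
from math import comb

def bezier(t, C):
    # Bezier curve of degree q = len(C) - 1, for t in [0, 1].
    q = len(C) - 1
    return sum(c * comb(q, n) * t**n * (1 - t)**(q - n)
               for n, c in enumerate(C))

# Cubic case: with the control points
#   C = [y_i, y_i + dx*dy_i/3, y_{i+1} - dx*dy_{i+1}/3, y_{i+1}]
# the curve coincides with the cubic Hermite interpolant;
# clamping the two interior weights between y_i and y_{i+1}
# instead suppresses overshoots.
\end{verbatim}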
\citet{auer2003} suggested the use of B{\'e}zier curves in radiative transfer problems,
because of their suitability for preventing spurious behavior near rapid variations in the absorption and emission coefficients.
Moreover, \citet{stepan+trujillo_bueno2013} used B{\'e}zier interpolations in the PORTA code, whereas
\citet{delacruz_rodriguez+piskunov2013} made use of them to construct the quadratic and cubic DELO-B{\'e}zier formal solvers.
\subsection{Piecewise rational polynomials}\label{subsec:pw_rational}
As an alternative to the standard use of polynomials for the interpolation of monotonic data,
\citet{gregory1982} and~\citet{delbourgo1983} opted for the application of piecewise
rational quadratic functions.
\citet{delbourgo1985} constructed a monotone, piecewise rational cubic interpolation,
that includes the rational quadratic function as a special case.
A piecewise rational cubic function $R(t)$, approximating the function $y(t)$ with $t\in[0,1]$, reads
\begin{equation}
R(t)=\frac{P(t)}{Q(t)}\,,
\label{rat_funct}
\end{equation}
where
\begin{equation}
\begin{split}
P(t)&= y_{i+1}\cdot t^3 + (r_i y_{i+1}-\Delta x_i y_{i+1}')\cdot t^2(1-t)\\
& +(r_i y_{i}+\Delta x_i y_{i}')\cdot t(1-t)^2+y_{i}\cdot (1-t)^3\,,
\label{P_cubic}
\end{split}
\end{equation}
and
\begin{equation}
Q(t)= 1+(r_i-3)t(1-t)\,.
\label{Q_cubic}
\end{equation}
The condition $r_i>-1$ ensures a strictly positive denominator in the rational cubic,
while $r_i=3$ reduces the rational cubic to the standard cubic Hermite polynomial given in Equation~\eqref{hermite_cubic}.
Here, the parameter $r_i$ is chosen to ensure that the interpolant preserves monotonicity.
In particular, if
\begin{equation*}
r_i= 1+\Delta x_i\frac{y_i'+y_{i+1}'}{y_{i+1}-y_i}\,,
\end{equation*}
then the rational cubic defined by Equation~\eqref{rat_funct} reduces to the rational quadratic form
{\small
\begin{equation}
R(t)= \frac{y_{i+1}t^2+\Delta x_i t(1-t)(y_{i+1}y_i'+y_iy_{i+1}')/(y_{i+1}-y_i)+y_i(1-t)^2}{t^2+\Delta x_i(y_i'+y_{i+1}')t(1-t)/(y_{i+1}-y_i)+(1-t)^2}\,,
\label{rat_funct2}
\end{equation}}
which yields a monotonic interpolant.
In most applications, the derivatives $y_i'$ and $y_{i+1}'$ are not known in advance and hence must be numerically determined.
\citet{delbourgo1985} proposed different approximations for the derivative parameters
and showed that an error $\mathcal{O}(h^4)$ can be expected when $\mathcal{O}(h^3)$ derivative information is given at the data points.
Finally, the rational quadratic given by Equation~\eqref{rat_funct2} can be used to construct a $C^2$ rational spline
which interpolates strictly monotonic data \citep{delbourgo1985}.
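
A direct transcription of Equation~\eqref{rat_funct2} into Python reads as follows (assuming strictly monotonic data, so that $y_{i+1}\neq y_i$):
\begin{verbatim}
def rational_quadratic(t, yi, yi1, dyi, dyi1, dx):
    # Eq. (rat_funct2); dyi, dyi1 are the derivatives y_i', y_{i+1}'.
    dy = yi1 - yi
    num = (yi1*t**2 + dx*t*(1 - t)*(yi1*dyi + yi*dyi1)/dy
           + yi*(1 - t)**2)
    den = t**2 + dx*(dyi + dyi1)*t*(1 - t)/dy + (1 - t)**2
    return num / den
\end{verbatim}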
\subsection{Slope limiters}\label{subsec:slope_limiters}
An alternative method to avoid spurious oscillations near discontinuities is to reduce the order of accuracy of the interpolation,
for example using a linear rather than a quadratic interpolant near the discontinuity,
or by reducing the slope of reconstructions applying limiters.
In fact, large gradients and discontinuities produce local extrema in high-order reconstructions, inducing a lack of monotonicity (overshoots). This demonstrates the necessity of limiting the spatial derivatives inside each cell to physically meaningful values.
This is already guaranteed for the Godunov (constant) scheme, but some limiting is required for the Van Leer (linear) and the parabolic reconstructions.
In computational fluid dynamics, this problem is solved by the so-called slope limiters.
A variety of slope limiters exists,
and a broad literature attests to the importance of this topic \citep[e.g.,][]{colella+woodward1984,sweby1984,berger2005,colella2008,velechovsky2013}.
One disadvantage of this approach is that it necessarily degenerates to a linear interpolation near smooth extrema \citep{shu1998}.
\citet{steiner2016} introduced the use of limiters for the interpolations and reconstructions of the
coefficients involved in the polarized radiative transfer equation, purposely taking discontinuities
into account.
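
As an elementary example of this class, the following Python sketch applies the classical minmod limiter (only one of the many limiters discussed in the cited literature) to the slopes of a Van-Leer-type linear reconstruction on a uniform grid:
\begin{verbatim}
def minmod(a, b):
    # Smaller slope when signs agree, zero at extrema.
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(y, dx):
    # Minmod-limited slopes for a linear reconstruction per cell.
    s = [0.0]
    for i in range(1, len(y) - 1):
        s.append(minmod((y[i] - y[i-1])/dx, (y[i+1] - y[i])/dx))
    s.append(0.0)
    return s
\end{verbatim}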
\subsection{Essentially non-oscillatory and weighted essentially non-oscillatory approximations}\label{eno_weno}
Provided that the function is smooth inside the interval,
it is known that the wider the stencil, i.e., the set of grid points considered, the higher the order of accuracy of the interpolation.
The most common interpolations (see Sections~\ref{subsec:cubic_hermite}--\ref{subsec:pw_rational}) are based on fixed stencils.
However, fixed stencil interpolations of second or higher order are oscillatory in the presence of a discontinuity.
Essentially non-oscillatory (ENO) methods are based on a nonlinear adaptive procedure that automatically chooses the stencil that leads
to the locally smoothest interpolant, avoiding crossing discontinuities.
A broad range of literature on the topic is available \citep{liu1994,shu1998,shu2009}.
The classical ENO scheme chooses, among several candidates, the smoothest stencil to work with and discards the rest.
This stencil is then used to construct a polynomial interpolation.
Otherwise, weighted ENO (WENO) schemes use weighted linear combinations of smaller-stencil polynomials
to obtain higher-order accuracy.
WENO approximations guarantee a non-oscillatory
result since the contribution from any stencil containing the discontinuity has an
essentially zero weight.
Both ENO and WENO schemes are especially suitable for problems containing both discontinuities and
smooth structures.
Moreover, they do not necessarily degenerate
to a linear interpolation near smooth extrema.
As an explicit example, an illustrative third-order accurate ENO interpolation is presented in the following.
To obtain a third-order-accurate interpolation of the function $y(x)$, one needs a three-point stencil.
Thus in cell $[x_i,x_{i+1}]$, one starts with the two-point stencil $S_2=\{x_i,x_{i+1}\}$.
Subsequently, one has two choices to expand the stencil: by adding either the left neighbor $x_{i-1}$
or the right neighbor $x_{i+2}$.
The selection criterion is to compare the local smoothness of the function to be interpolated,
measured in terms of divided differences \citep{shu1998}.
For instance, one makes the decision by comparing the absolute values of the two relevant divided differences
$y [x_{i-1},x_i,x_{i+1}]$ and $y [x_i,x_{i+1},x_{i+2}]$.
A smaller value implies that the function is smoother in the corresponding stencil.
Therefore, if
$$|y [x_{i-1},x_i,x_{i+1}]| < |y [x_i,x_{i+1},x_{i+2}]|\,,$$
the three-point stencil is taken as
\begin{equation*}
S_3 = \{x_{i-1},x_i,x_{i+1}\}\,,
\end{equation*}
otherwise,
\begin{equation*}
S_3 = \{x_i,x_{i+1},x_{i+2}\}\,.
\end{equation*}
Once the interpolation stencil is determined, one proceeds with a standard (Lagrange) polynomial
interpolation.
Clearly, one can continue this procedure to add more grid points to the stencil,
constructing higher-order ENO approximations.
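
A minimal Python transcription of this selection procedure is sketched below (divided differences are computed recursively here for clarity; a production code would tabulate them instead):
\begin{verbatim}
def divided_difference(x, y):
    # Newton divided difference of the points (x_j, y_j).
    if len(x) == 1:
        return y[0]
    return (divided_difference(x[1:], y[1:])
            - divided_difference(x[:-1], y[:-1])) / (x[-1] - x[0])

def eno3_stencil(x, y, i):
    # Choose the smoother three-point stencil for [x_i, x_{i+1}].
    left, right = [i-1, i, i+1], [i, i+1, i+2]
    dl = abs(divided_difference([x[j] for j in left],
                                [y[j] for j in left]))
    dr = abs(divided_difference([x[j] for j in right],
                                [y[j] for j in right]))
    return left if dl < dr else right
\end{verbatim}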
A more thorough investigation on the possible applications of ENO and WENO techniques
in radiative transfer problems is in progress.
\section{Conclusions}\label{sec:sec6}
This paper identifies and discusses the relevant problems that appear in the numerical treatment of the radiative transfer equation in discontinuous media.
The main aim is not to provide the ultimate numerical method able to handle discontinuous problems,
but to better understand the specific situations in which discontinuity issues appear.
The first part pays
particular attention to the assumptions and limitations of the standard convergence analysis
of numerical schemes for ODEs and interpolations.
In fact, standard numerical methods (and interpolations) rely on smoothness assumptions on the functions
to be integrated (interpolated)
and may perform very inefficiently in the presence of discontinuities,
which may drastically increase local errors, reducing the accuracy of the solution
and thwarting high-order convergence.
The numerical tests
(performed for both the scalar and the polarized case) corroborate analytical predictions:
discontinuities in absorption and emission coefficients effectively
affect the convergence and, consequently, the accuracy of numerical solutions,
inducing the order breakdown where all the numerical methods drop to first-order convergence.
Moreover, local errors are larger around high gradients,
and unresolved high gradients behave like discontinuities.
It is also shown that discontinuities in the atmospheric physical parameters effectively
induce first-order discontinuities in the radiative transfer equation,
making the application of high-order schemes pointless.
Unfortunately, first-order accuracy is not sufficient in many situations.
In the presence of a discontinuity, the order breakdown in the formal solution is inevitable and
the only practical solution is to increase resolution in the atmospheric model.
Discontinuities do not cause numerical instability (in the sense of magnification of errors)
in the formal solution, but they just decrease accuracy.
The final part summarizes the existing numerical methods for the treatment of discontinuous ODEs
and the interpolation techniques for discontinuous discrete data.
Standard numerical schemes for discontinuous ODEs are shown to be unsuitable for common RTE problems.
By contrast, many suitable interpolation techniques are already available. In particular,
Essentially Non-Oscillatory (ENO) and Weighted ENO (WENO) techniques
are high-order robust approximations that
resolve discontinuities
in an accurate and non-oscillatory fashion.
An investigation into
their possible applications
to radiative transfer problems is in progress.
\begin{acknowledgements}
The financial support by the Swiss National Science Foundation (SNSF) through grant ID 200021\_159206 is gratefully acknowledged.
Special thanks are extended to F. Calvo and C. Bertoni for particularly enriching discussions, and
to E. Alsina Ballester, L. Belluzzi, and O. Steiner
for reading and commenting on previous versions of the paper.
\end{acknowledgements}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 8,596 |
<?xml version="1.0" encoding="UTF-8"?>
<spring:beans xmlns="http://www.citrusframework.org/schema/testcase"
xmlns:spring="http://www.springframework.org/schema/beans"
xmlns:ws="http://www.citrusframework.org/schema/ws/testcase"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.citrusframework.org/schema/testcase http://www.citrusframework.org/schema/testcase/citrus-testcase.xsd
http://www.citrusframework.org/schema/ws/testcase http://www.citrusframework.org/schema/ws/testcase/citrus-ws-testcase.xsd">
<testcase name="SendSoapMessageActionParserTest">
<actions>
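<!-- Send a SOAP message with one inline plain-text attachment -->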
<ws:send endpoint="mySoapClient">
<message>
<data>
<![CDATA[
<TestMessage>Hello Citrus</TestMessage>
]]>
</data>
</message>
<ws:attachment content-id="MySoapAttachment" content-type="text/plain">
<ws:data>
<![CDATA[This is an attachment!]]>
</ws:data>
</ws:attachment>
</ws:send>
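<!-- Attachment content loaded from a classpath resource, with an explicit charset -->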
<ws:send endpoint="mySoapClient">
<message>
<data>
<![CDATA[
<TestMessage>Hello Citrus</TestMessage>
]]>
</data>
</message>
<ws:attachment content-id="MySoapAttachment" content-type="application/xml" charset-name="UTF-8">
<ws:resource file="classpath:com/consol/citrus/ws/actions/test-attachment.txt"/>
</ws:attachment>
</ws:send>
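<!-- Two attachments (inline data and classpath resource) on one message -->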
<ws:send endpoint="mySoapClient">
<message>
<data>
<![CDATA[
<TestMessage>Hello Citrus</TestMessage>
]]>
</data>
</message>
<ws:attachment content-id="FirstSoapAttachment" content-type="text/plain">
<ws:data>
<![CDATA[This is an attachment!]]>
</ws:data>
</ws:attachment>
<ws:attachment content-id="SecondSoapAttachment" content-type="application/xml" charset-name="UTF-8">
<ws:resource file="classpath:com/consol/citrus/ws/actions/test-attachment.txt"/>
</ws:attachment>
</ws:send>
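<!-- Forked send: executes in a separate thread so the test flow continues -->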
<ws:send endpoint="mySoapClient" fork="true">
<message>
<data>
<![CDATA[
<TestMessage>Hello Citrus</TestMessage>
]]>
</data>
</message>
</ws:send>
</actions>
</testcase>
<spring:bean id="mySoapClient" class="org.mockito.Mockito" factory-method="mock">
<spring:constructor-arg value="com.consol.citrus.ws.client.WebServiceClient"/>
</spring:bean>
</spring:beans> | {
"redpajama_set_name": "RedPajamaGithub"
} | 3,337 |
Q: Query Person Names using Wikidata Api I am now trying to query the aliases of specific person names (which correspond to article titles in Wikipedia) using the Wikidata API.
I am using R, and there is a WikidataR package for Wikidata queries. E.g.,
for the person name "Martin Luther", I use "find_item" and the returned IDs are as follows:
Results:
1 Martin Luther (Q9554) - German monk, priest, and professor of theology, seminal figure in Protestant Reformation
2 Martin Luther (Q342938) - Wikipedia disambiguation page
3 Martin Luther (Q62290) - German diplomat
4 Martin Luther (Q328096) - 1953 film biography
5 Martin Luther (Q989866) - 1923 silent movie about Martin Luther
6 Martin Luther (Q13479610)
7 Martin Luther (Q1904488) - planned 1983 movie
8 Martin Luther (Q6776037) - artwork by Ernst Friedrich August Rietschel
9 Martin Luther King, Jr. (Q8027) - American clergyman, activist, and leader in the American Civil Rights Movement
10 Martin Luther King, Jr. Day (Q751738) - United States holiday
In the returned results, only 1, 3, and 9 point to human beings. If I use the first ID Q9554 and the get_item function, I get the following output:
Wikidata item Q9554
Label: Martin Luther [150 other languages available]
Aliases: Martin Luther, Luther, Junker Jörg, 路德马丁,, לוטהער, מארטין, לותר, Лютер Мартин, Martin LUTHER, chevalier Georges, frère Martin, Martin Luder
Description: German monk, priest, and professor of theology, seminal figure in Protestant Reformation [15 other languages available]
Claims: 72
Sitelinks: 189
So if I want to get the aliases of "Martin Luther", I need his Wikidata ID instead of a direct query. As it is possible to query Wikipedia using a person's name directly, is there any way to query Wikidata using a person's name? If that is not possible, how can I set a condition so that the Wikidata API only returns the IDs of persons (such as 1, 3, and 9 in the example)? Thanks.
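One possible approach (sketched below in Python against the public MediaWiki action API rather than in R/WikidataR; the filter uses the "instance of" property P31 with value Q5, "human", and the parameter names should be double-checked against the current API documentation) would be:

    import requests

    API = "https://www.wikidata.org/w/api.php"

    def human_ids(name):
        # Search by label, then keep only items whose P31 claims contain Q5.
        hits = requests.get(API, params={
            "action": "wbsearchentities", "search": name,
            "language": "en", "format": "json"}).json()["search"]
        ids = [h["id"] for h in hits]
        entities = requests.get(API, params={
            "action": "wbgetentities", "ids": "|".join(ids),
            "props": "claims|aliases", "format": "json"}).json()["entities"]
        humans = []
        for qid in ids:
            for claim in entities[qid].get("claims", {}).get("P31", []):
                value = claim["mainsnak"].get("datavalue", {}).get("value", {})
                if value.get("id") == "Q5":
                    humans.append(qid)
                    break
        return humans

    print(human_ids("Martin Luther"))  # should include Q9554, Q62290, Q8027

The aliases of each remaining item are then available under entities[qid]["aliases"].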
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 388 |
Q: How to make strace keep following children after the death of the father/mother:))? I use this command to trace all the things that are done or started from a shell.
sudo strace -e trace=memory -o outm -ff -p <the pid of the terminal instance I want to trace> -f
But the problem is that when I, for example, start firefox or chromium, I get many messages like:
strace: Process 16776 attached
strace: Process 17508 attached
strace: Process 17509 attached
strace: Process 17512 attached
But then if I open a new tab etc., no new processes are created. My guess is that these applications create many different processes and then kill the parent, so strace stops tracing the children. So, how can I make strace keep following children after their parent is killed or terminated?
Also please tell me if my command is not doing what I want.
What I want is to trace all the memory allocations/deallocations of all of the processes and threads of the system. Once I am sure this is working fine, I want to make a module out of it, put it in the kernel, and try to trace the earliest process possible in the system.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 3,202 |
The unincorporated area (gemeindefreies Gebiet) of Fichtelberg lies in the Upper Franconian district of Bayreuth.
The 20.89 km² state forest lies between Fichtelberg, Bischofsgrüner Forst, Neubauer Forst-Nord, Neubauer Forst-Süd, Tröstauer Forst-West, Nagel, Mehlmeisel, Kirchenpingarten, and Warmensteinach. The area is completely forested and consists of two separate parts. An exclave of Warmensteinach lies within the area.
The Warme Steinach and the Fichtelnaab, as well as several of their tributaries, have their sources in the area.
See also
List of unincorporated areas in Bavaria
List of architectural monuments in unincorporated areas in Bavaria#Former architectural monuments
List of ground monuments in the unincorporated area of Fichtelberg
References
External links
Gemeindefreies Gebiet Fichtelberg in OpenStreetMap (retrieved 20 August 2017)
Geography (Landkreis Bayreuth)
Fichtelberg
Forest area in Bavaria
Forest area in Europe | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 1,087 |
\section{Introduction}
The reconstruction of attosecond beating by interference of two-photon transitions \hbox{(RABBIT)}
is a widely employed technique to measure attosecond time delays in photo\-ionization processes \cite{Paul2001,Muller2002,Klunder2011}.
The extraction of time information from the RABBIT measurements usually involves retrieving
atomic phases encoded in the delay-dependent modulation of the sideband (SB) yield.
These SBs are traditionally formed in the photo\-electron spectrum by the interaction
of two photons (one pump, one probe) with the target. Spectral harmonics from an attosecond pulse train (the pump photons)
form discrete photo\-electron signal peaks. The presence of a time-delayed infrared field (the probe photon)
then creates a signal in between these main peaks that oscillates with the time delay.
The so retrieved atomic phase ($\Delta\phi_a$) from the RABBIT measurement can be
separated into a single-photon ionization contribution ($\Delta\eta$, Wigner phase \cite{PhysRev.98.145})
and a continuum-continuum (cc) coupling phase ($\Delta\phi^{cc}$) by applying
an ``asymptotic approximation'' \cite{Dahlstr_m_2012,Dahlstr_m_2014,RevModPhys.87.765}.
Variations of the RABBIT scheme, such as \hbox{0-SB}, \hbox{1-SB}, and \hbox{2-SB}, have been utilized to study the dipole transition phases and attosecond pulse shaping \cite{Loriot_2017,PhysRevA.104.043113, Maroju2020all}. As the name suggests, in a 3-SB \hbox{RABBIT} scheme, three SBs are formed between
two consecutive main photoelectron peaks \cite{Harth2019,Bharti2021}. The creation
of these three SBs requires more than one transition in the continuum, i.e.,
the absorption or emission of several probe photons.
To explain the phases of the oscillations in the yield of the three sidebands,
we recently extended the asymptotic approximation and the decomposition scheme of the phases
to include an arbitrary number of cc transitions.
Specifically, we expanded the phase of the relevant $N^{\rm th}$-order transition
matrix elements into a sum of the Wigner phase and $N\!-\!1$ cc phases, each describing
a step\-wise transition in the continuum~\cite{Bharti2021}:
\begin{eqnarray}
\mbox{arg}[{\cal M}_{\ell,\lambda}^{(N)}] \approx &&~-\frac{(N\!-\!2)\pi}{2} -\frac{\lambda\pi}{2}+\eta_{\lambda}+\phi_{k_2,k_1}^{cc}+\phi_{k_3,k_2}^{cc}\nonumber\\
&&~+ ... + \phi_{k_{N\!-\!1},k_{N\!-\!2}}^{cc}+\phi_{k_{N},k_{N\!-\!1}}^{cc}.
\label{eq:gen_Fak_Approx}
\end{eqnarray}
Here ${\cal M}^{(N,a/e)}_{\ell,\lambda}$ is the $N^{\rm th}$-order dipole matrix element corresponding to $N\!-\!1$
absorptions ($a$) or emissions ($e$) of the probe photon.
The magnitudes of the photo\-electron's linear momentum
in the intermediate states are indicated by
$k_1$, $k_2$, $\ldots$, $k_{N\!-\!1}$, respectively, $k_N$ is the final momentum,
and $\lambda$ labels possible orbital quantum numbers reached in the single
photo\-ionization step by the XUV pulse. Finally, $\ell$ is one of generally several allowed orbital angular momenta of the ejected electron.
In a recent paper on atomic hydrogen \cite{Bharti2021}, for which numerical calculations with high accuracy can be carried out by
solving the time-dependent Schr\"odinger equation (TDSE) directly, we verified that the
decomposition approximation explains the \hbox{RABBIT} phases in all three SBs qualitatively.
As expected, its accuracy improves with increasing energy of the emitted photo\-electron.
On the other hand, assuming $\Delta\phi^{cc}$ to be independent of the orbital angular
momenta of the continuum states involved leads to deviations from the analytical prediction,
particularly in the lower and the higher SB of the triplet at low kinetic energies.
Even though starting with a $3p$ electron still limits the information
that can be extracted to the combined effect of the Wigner and cc phases, we decided to
perform the present proof-of-principle
study on argon due to its experimental advantages, including a significantly lower
ionization potential than helium. In argon, $\lambda = 0,2$, while $\lambda = 1$ in helium.
For the latter target, therefore, the dependence on the Wigner phase would also drop out, and the \hbox{3-SB} setup would provide direct access to the phase associated with higher-order cc transitions ~\cite{Harth2019,Bharti2021}.
Nevertheless,
a significant strength of our current setup already lies in the fact that the results within each group
are {\it independent of any atto-chirp} in the XUV pulse, because the XUV harmonic pair
is common to all three SBs.
This paper is organized as follows. We begin with a brief review of the basic idea behind
the \hbox{3-SB} setup in Sec.~\ref{sec:basic}. This is followed by a description of the experimental
apparatus in Sec.~\ref{sec:experiment} and the accompanying theoretical \hbox{$R$-matrix} (close-coupling) with
time dependence (RMT) approach in Sec.~\ref{sec:theory}.
In section~\ref{sec:results}, we first show angle-integrated data
(Sec.~\ref{subsec:Angle-integrated})
before focusing on the angle dependence of the \hbox{RABBIT} phases in the three SBs of each individual group in
Sec.~\ref{subsec:Angle-differential}. We finish with a summary and an outlook in Sec.~\ref{sec:outlook}.
\section{The 3-SB Scheme}\label{sec:basic}
In this section, we briefly review the 3-SB scheme introduced in~\cite{Harth2019} and the analytical
treatment presented in~\cite{Bharti2021} as applied to the \hbox{3-SB} RABBIT experiment.
\begin{figure}[h]
\includegraphics[width=0.9\columnwidth]{Figures/fig1.pdf}
\caption{3-SB \hbox{RABBIT} scheme. $M_{q-1}$ and $M_{q+1}$ label the main
photoelectron peaks created directly by the odd harmonics ($H_{q-1}$ and $H_{q+1}$)
of the frequency-doubled fundamental probe frequency in the XUV pulse, while $S_{q,l}$, $S_{q,c}$,
and $S_{q,h}$ are the lower, central, and higher SBs, respectively. These SBs
are formed by emission or absorption of probe photons by the quasi-free photo\-electrons.
$|i\rangle$ denotes the initial state and $I_p$ is the ionization potential.}
\label{fig:scheme}
\end{figure}
\begin{figure*}
\includegraphics[width=0.99\linewidth]{Figures/fig2.pdf}
\caption{Experimental setup. A holey mirror (BS) splits the linearly polarized laser
beam between the two arms of the inter\-ferometer. In the pump arm, the HHG process is
driven by the second harmonic of the laser beam. The generated XUV and the fundamental
probe beam are recombined and focused onto a supersonic gas jet of Argon. The
interferometer is stabilized by tracking the movement of the fringes from the pump and the probe beams.}
\label{fig:setup}
\end{figure*}
Figure~\ref{fig:scheme} illustrates only the two most dominant transition paths
for each SB contributing to the oscillation in their respective yields.
The lowest-order transition dominates this yield, but its modulation requires
inter\-ference between at least two distinct paths leading to the same energy.
This involves two different XUV harmonics that are aided by absorption or emission of NIR photons.
For the lower ($l$) and higher ($h$) SBs, $S_l$ and $S_h$, the most important interfering
paths are of 2$^{\rm nd}$ (one harmonic and one NIR) and 4$^{\rm th}$ order (one harmonic and three NIR),
which results in a weak modulation of the yield. The lowest-order terms contributing to the build-up
of the central ($c$) SB, $S_c$, are both of 3$^{\rm rd}$ order (one harmonic and two NIR). Consequently,
interference between them exhibits the delay-dependent oscillation most clearly.
Mathematically, the angle-integrated yield in the three SBs, considering only two prominent transition
paths, can be written as~\cite{Bharti2021}:
\begin{subequations}
\begin{align}
S_{q,l}\!&\propto \!\! \sum_\ell \!\left|\sum_\lambda \!\left( \tilde{E}_{q+1}\tilde{E}^{*3}_{\omega} {\cal M}^{(4,e)}_{\ell,\lambda}(k_l) \!+\! \tilde{E}_{q-1}\tilde{E}_{\omega} {\cal M}^{(2,a)}_{\ell,\lambda}(k_l) \right) \!\right|^{2} \nonumber\\
&= I^l_0 + I^l_1 \, \cos(4\,\omega\tau -\phi^l_{R}+\pi);\\
S_{q,c}\!&\propto \!\! \sum_\ell \!\left|\sum_\lambda \!\left(\tilde{E}_{q+1}\tilde{E}^{*2}_{\omega} {\cal M}^{(3,e)}_{\ell,\lambda}(k_c) \!+\! \tilde{E}_{q-1}\tilde{E}^2_{\omega} {\cal M}^{(3,a)}_{\ell,\lambda}(k_c) \right) \!\right|^{2} \nonumber\\
& = I^c_0 + I^c_1 \, \cos(4\,\omega\tau -\phi^c_{R});\\
S_{q,h}\!&\propto \!\! \sum_\ell \!\left|\sum_\lambda \!\left(\tilde{E}_{q+1}\tilde{E}^{*}_{\omega} {\cal M}^{(2,e)}_{\ell,\lambda}(k_h) \!+\! \tilde{E}_{q-1}\tilde{E}^3_{\omega} {\cal M}^{(4,a)}_{\ell,\lambda}(k_h) \right) \!\right|^{2} \nonumber\\
& = I^h_0 + I^h_1 \, \cos(4\,\omega\tau -\phi^h_{R}+\pi)
\end{align}
\label{eq: SB osc c}
\end{subequations}
\noindent Here $q$ labels the SB group.
Furthermore, \hbox{$\tilde{E}_{\Omega}=E_{\Omega}{\rm e}^{{i}\, \phi_\Omega}$} and
\hbox{$\tilde{E}_\omega=E_\omega{\rm e}^{{i}\, \omega \tau}$} (for absorption)
are the complex electric-field amplitudes of the XUV-pump ($\Omega$) and NIR-probe ($\omega$) pulses, respectively.
The yield of each SB is separated into an average part $I_0$ and another term $I_1$ that oscillates at $4\,\omega$ with the delay.
According to Eq.~(\ref{eq:gen_Fak_Approx}),
every dipole transition adds a factor of~$i$, i.e., a phase of~$\pi/2$. Since the two dominant interfering terms in $S_l$ and $S_h$ are of different orders (2$^{\rm nd}$ and 4$^{\rm th}$), this leads to an additional $\pi$ phase in $S_l$ and $S_h$ relative to $S_c$, where both interfering terms
are of the same (3$^{\rm rd}$) order.
The RABBIT phase ($\phi_{R}$) includes the spectral phase difference of the two harmonics and the weighted average of channel-resolved atomic phases of the transitions. Since the three SBs involve the same pair of harmonics, the contribution of the XUV group delay to the oscillation phase is the same in all three SBs. This is a key advantage of the \hbox{3-SB method}, since it removes the influence of the XUV chirp when we compare the phases of the three SBs only within a particular group.
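
To make the predicted delay dependence concrete, the following minimal Python sketch (with arbitrary illustrative amplitudes and phases; only the $4\omega$ periodicity and the relative $\pi$ shift follow from Eqs.~\eqref{eq: SB osc c}) generates model yields for the three SBs of one group:
\begin{verbatim}
import numpy as np

omega = 2*np.pi/3.44e-15          # NIR angular frequency (1030 nm)
tau = np.linspace(0, 2e-15, 400)  # pump-probe delay (s)
phi_R = 0.3                       # common RABBIT phase of the group
S_c = 1.0 + 0.8*np.cos(4*omega*tau - phi_R)
S_l = 0.4 + 0.1*np.cos(4*omega*tau - phi_R + np.pi)
S_h = 0.4 + 0.1*np.cos(4*omega*tau - phi_R + np.pi)
# All three oscillate with period 2*pi/(4*omega) ~ 0.86 fs;
# S_l and S_h are shifted by pi relative to S_c.
\end{verbatim}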
\section{Experimental Setup}\label{sec:experiment}
Figure~\ref{fig:setup} shows the schematic design of our \hbox{3-SB} \hbox{RABBIT} experimental setup.
A commercial fiber-based laser delivers pulses with a duration of approximately 50~fs (FWHM) at a
49~kHz repetition rate with a pulse energy of 1.2~mJ and a center wavelength of 1030~nm.
This pulse is split into two parts using a holey mirror (BS) that reflects $\approx 85 \% $ of
the incoming beam in the pump arm, while the rest passes through the hole into the probe arm.
The beam size of the reflected donut beam in the pump arm is reduced by a pair of lenses and
passed through a 0.5~mm thick BBO crystal to double its frequency.
The conversion efficiency for the Second-Harmonic Generation (SHG) by the BBO crystal is \hbox{$25-30\,\%$.}
A dichroic beam-splitter (DBS) filters out the fundamental beam, and a lens with a focal
length of 12~cm focuses the second harmonic beam inside a vacuum chamber to a focal spot
of $30-40~\mu\,$m on a jet of neon gas, which results in an XUV frequency comb through
high-harmonic generation (HHG). The gas nozzle has a diameter of $100~\mu\,$m and is operated
at a backing pressure of 1.2~bar with a chamber pressure of $5 \times 10^{-3}$~mbar.
The generated XUV beam is spatially separated from the annular NIR with the help of an
additional holey dumping mirror (DM).
The co-propagating second harmonic is weak and does not generate any visible sidebands.
The beam in the probe arm goes through a retro-reflector mounted on a piezo\-electric-translation
stage that offers a step-resolution of 5~nm with closed-loop position control.
Another holey mirror (RM) recombines the NIR (probe) and XUV (pump) beams, which are then focused
inside a reaction microscope (ReMi) on a cold gas jet of argon.
The efficiency of the ReMi made angle-resolved measurements with sufficiently
high signal rates possible \cite{Ullrich_2003}.
The setup was actively
stabilized~\cite{Srinivas:22} to achieve a stability of $\approx 40\,$atto\-seconds
over a data acquisition time of 7~hours. The stability of the inter\-ferometer
was critical for the successful realization of the
\hbox{3-SB} scheme since the oscillation period was just $850\,$atto\-seconds.
\smallskip
\section{Theoretical Approach}\label{sec:theory}
In the theoretical part of this study, we employ the general \hbox{$R$-matrix} with time dependence (RMT)
method~\cite{BROWN2020107062} to generate theoretical predictions for comparison with our experimental data.
In order to calculate the necessary time-independent basis functions and dipole matrix elements, we set up the \hbox{2-state}
non\-relativistic model introduced by Burke and Taylor~\cite{BurkeTaylor1975}
to treat the steady-state standard photo\-ionization process.
In this model, multi-configuration expansions for the initial $(3s^23p^6)^1S$
bound state and the two coupled final ionic states $(3s^2 3p^5)^2P$ and $(3s 3p^6)^2S$ were employed.
We checked that the photo\-ionization cross sections at the photon energies
corresponding to the various HHG lines were reproduced properly (in agreement
with Burke and Taylor~\cite{BurkeTaylor1975} as well as experiment~\cite{MarrWest1976,SAMSON2002265}) by our RMT model.
The probe-pulse duration was chosen as about twice the length of the XUV pulse.
We emphasize that the present calculation was meant as a supplement to the current experiment, with the hope of providing
additional qualitative insights rather than quantitative agreement, which would require much more detailed information
about the actual pulses than what was available. We purposely employed significantly lower NIR peak
intensities ($10^{11}\,$W/cm$^2$) than in
the experiment ($\approx 6 \times 10^{11}\,$W/cm$^2$). This reduced the number of partial
waves needed to obtain converged results and also led to clean spectra, which are easier to interpret.
Specifically, we performed calculations for 11 delays in multiples of
0.05 NIR periods.
For each delay, we needed about 5 hours on 23 nodes using all 56 available cores per node
on the \hbox{Frontera} supercomputer hosted at the Texas Advanced \hbox{Computing} Center (TACC)~\cite{frontera}.
\section{Results and Discussion}\label{sec:results}
Below we present our results. We start with the angle-integrated setup in Sec.~\ref{subsec:Angle-integrated}
before going into further detail with angle-resolved measurements and calculations in Sec.~\ref{subsec:Angle-differential}.
\subsection{Angle-integrated RABBIT phases}\label{subsec:Angle-integrated}
\begin{figure*}
\includegraphics[width=0.99\linewidth]{Figures/fig3.pdf}
\caption{3-SB RABBIT trace~(a), normalized photo\-electron spectra generated with the XUV
pulse only (dark) and during the RABBIT measurement integrated over the delays (lighter)~(b),
and \hbox{RABBIT} phases extracted from all three sidebands~(c). Note that the $\pi$ phase
difference between $S_c$ and ($S_l,S_h$), which is clearly seen in the position of
the maxima in panel~(a), has been removed for better visibility in panel~(c).
The error bars from the fitting procedure are smaller than the symbol size and hence not visible.}
\label{fig:expt-results}
\end{figure*}
\begin{figure}
\includegraphics[width=0.80\linewidth]{Figures/fig4.pdf}
\caption{The delay-dependent normalized signal (dots) of the three sidebands in the $SB_{12}$
group and fits to the cosine function (lines).}
\label{fig:fit_12}
\end{figure}
Figure~\ref{fig:expt-results} exhibits the results of our 3-SB \hbox{RABBIT} experiment
after integrating the signal over all photo\-electron emission angles.
To highlight the oscillations, the \hbox{RABBIT} trace in panel~(a) is plotted after
subtracting the average delay-integrated signal. The delay-integrated photoelectron spectra (normalized to~1 at the highest peak) are plotted in panel~(b).
Due to the high NIR intensity, some of the main bands are depleted substantially
and appear weaker than the SBs in their vicinity.
The \hbox{RABBIT} phase ($\phi_R$) is extracted by fitting a cosine function (cf.\ Eq.~\eqref{eq: SB osc c})
to the delay-dependent oscillating signals of the sidebands, as seen in Fig.~\ref{fig:fit_12}.
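Schematically, and up to the sign convention fixed in Eq.~\eqref{eq: SB osc c}, the fit model for each sideband $i\in\{l,c,h\}$ of a group has the form
\begin{equation*}
S_i(\tau) \approx A_i + B_i\cos\!\left(4\omega\tau - \phi_{R,i}\right),
\end{equation*}
with the central sideband carrying an additional phase of $\pi$ relative to $S_l$ and $S_h$ (cf.\ Fig.~\ref{fig:expt-results}).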
Due to the large dataset available and the excellent stability of the interferometer,
the phase retrieval resulted in error bars
smaller than the symbol size in Fig.~\ref{fig:expt-results}(c). This gives us confidence in the
results obtained from our extraction procedure.
As predicted by our generalized decomposition approximation~(\ref{eq:gen_Fak_Approx}),
the lower and higher SBs oscillate $\pi$ out of phase with the central SB.
The retrieved \hbox{RABBIT} phases ($\phi_R$) are plotted in Fig.~\ref{fig:expt-results}(c) after removing the extra $\pi$ from $S_l$ and $S_h$ to simplify the comparison.
The time-delay axis on the right side of this panel was created via the conversion $\tau_R=\phi_R/(4\omega)$.
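For a concrete sense of scale, note that the oscillation period is $T_{\rm osc}=2\pi/(4\omega)\approx 850$~as, so that a fitted phase of, e.g., $\phi_R=0.10$~rad corresponds to
\begin{equation*}
\tau_R=\frac{\phi_R}{4\omega}=\phi_R\,\frac{T_{\rm osc}}{2\pi}\approx 0.10\times\frac{850~{\rm as}}{2\pi}\approx 13.5~{\rm as}.
\end{equation*}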
Five SB groups are clearly identifiable in Fig.~\ref{fig:expt-results}(c).
While there are some irregularities in $SB_{8}$ and $SB_{16}$, especially with the phase
extracted from $S_l$, groups $SB_{10}$, $SB_{12}$, and $SB_{14}$ show the expected trend:
The RABBIT phases of the three SBs in each group are similar, although a clearly visible
difference remains in $SB_{10}$. That difference, however, essentially vanishes in $SB_{12}$ and $SB_{14}$.
The irregularity seen in the $SB_{8}$ group is due to a significant contribution of another
4$^{\rm th}$-order transition in the absorption path of the lowest SB $S_l$, which involves a
transition from M$_7$ down to the Rydberg states and back up to $S_l$.
The Rydberg states enhance the strength of this transition and
add a resonance phase that leads to
a significant deviation in the RABBIT phase of $S_l$ compared to the other members of the $SB_{8}$ group.
Furthermore, due to the low cut-off of the XUV spectrum generated via HHG and the decreasing photo\-ionization
cross section of argon with increasing photon energy,
the M$_{17}$ peak is very weak compared to the main peaks at lower energies.
As a result, higher-order transitions involving lower main bands also play a significant
role in the oscillation of $S_l$ in the $SB_{16}$ group, which again affects the extracted phase.
\begin{figure*}
\includegraphics[width=0.90\linewidth]{Figures/fig5.pdf}
\caption{Top row: Angle-dependent \hbox{RABBIT} phases extracted from the measurements
in group $SB_{10}$~(a), $SB_{12}$~(b) and $SB_{14}$~(c). Bottom row: Corresponding RMT predictions.}
\label{fig:angle_exp_10_to_14}
\end{figure*}
\subsection{Angle-differential RABBIT phases}\label{subsec:Angle-differential}
We now further increase the level of detail by investigating angle-dependent RABBIT phases,
which is possible due to the angle-resolving capability of the reaction microscope.
For the reasons given above regarding the additional complexities associated with the $SB_{8}$ and $SB_{16}$
groups, we concentrate the remaining discussion on $SB_{10}$, $SB_{12}$, and $SB_{14}$.
Figure~\ref{fig:angle_exp_10_to_14}(a-c) shows the \hbox{RABBIT} phases extracted within
these groups as a function of the photo\-electron emission angle,
which is defined relative to the (linear) laser polarization vector. The photoelectron signal
is integrated over an angular window of $10^\circ$ for each data point. The angle-resolved RABBIT
phases are shifted to fix the starting phase of the central sideband in each group to zero.
According to both our experiment and the calculation (Fig.~\ref{fig:angle_exp_10_to_14}, panels \hbox{d-f}),
the phase of $S_h$, in particular, exhibits a very strong angular dependence, while that
of $S_l$ is nearly angle-independent.
With increasing photo\-electron energy, the differences diminish.
The difference in the phases of the three SBs can possibly be explained by considering
a propensity rule for transition amplitudes and the dependence of both the Wigner and $\phi^{cc}$
phases on the orbital angular momenta involved.
Similar to bound-continuum transitions~\cite{PhysRevA.32.617}, absorption (emission)
within the continuum favors an increase (decrease) in the angular momentum of the outgoing
photo\-electron, especially for low kinetic
energies~\cite{Bertolino_2020,PhysRevLett.123.133201,Peschel2022, Kheifets2022,PhysRevA.106.023116}.
The higher SB ($S_h$) of the group involves the
absorption of three probe photons \hbox{($H_{q-1}+3\,\omega$)} that, according to the propensity rule,
predominantly populate higher angular-momentum states.
Along the other path \hbox{($H_{q+1}-1\,\omega$)} leading to~$S_h$,
the emission of one probe photon mainly creates lower angular-momentum states.
On the other hand, for the lower SB ($S_l$) of the group, the emission of three probe photons \hbox{($H_{q+1}-3\,\omega$)}
predominantly creates lower angular-momentum states, while the other path, the absorption of
one probe photon \hbox{($H_{q-1}+1\,\omega$)}, mainly creates higher angular-momentum states.
It is well known that the Wigner phase depends on the angular-momentum channel.
The cc phase has also been shown to slightly depend on whether there is an increase or
decrease in the angular momentum, while it appears to remain independent of the target
species~\cite{Fuchs:s,Peschel2022}.
The RABBIT phases in the individual \hbox{$\ell$-channels} are therefore expected to differ as well. Since the retrieved angle-integrated RABBIT phase is the weighted average of all channel-resolved RABBIT phases, and since the weights of the partial waves differ among the three SBs, the angle-integrated RABBIT phase also differs among the three SBs.
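One schematic way to express this (notation introduced here for illustration): if $w_\ell\ge 0$ denotes the weight with which the $\ell$-channel oscillation $\propto\cos(4\omega\tau-\phi_{R,\ell})$ contributes to a given sideband, the angle-integrated signal oscillates as a single cosine with phase
\begin{equation*}
\phi_R^{\rm int}=\arg\sum_\ell w_\ell\, e^{\,i\phi_{R,\ell}},
\end{equation*}
so that different channel weights $w_\ell$ in the three SBs generally lead to different $\phi_R^{\rm int}$.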
For the angle-resolved RABBIT phase, the interplay of the propensity rule for
transition amplitudes to each $\ell$ channel and the angle-dependent weights of the
spherical harmonics determines the steepness of the angle-dependent phase curves of the three SBs.
The higher $\ell$ states carry comparatively more weight in the higher SB and
less in the lower SB. Also, the contributions of the associated spherical harmonics
for larger $\ell$ vary rapidly with the angle. As a result, a steeper angle dependence
of the RABBIT phase in the higher SBs compared to the central and lower SBs may be expected.
This trend is, indeed, clearly seen in $SB_{10}$. Since the difference in $\phi^{cc}$ between $\ell$ channels becomes smaller with increasing kinetic energy, the angle dependence of the RABBIT phases flattens, as can be seen in $SB_{12}$ and $SB_{14}$.
Finally, we note that the scale of variation in the angle dependence of the RABBIT phase is
smaller in the calculation than in the experiment.
Also, the positions of $S_l$ and $S_h$ relative to $S_c$ are interchanged between experiment and theory.
In addition to possible shortcomings in the theoretical model (as sophisticated as it might be)
and potential unknown systematic errors in the experiment,
the difference in the probe intensities, and the pulse details in general, are likely
responsible for at least some of the discrepancies seen here.
\section{Summary and Outlook}\label{sec:outlook}
\smallskip
In summary, we carried out a proof-of-principle 3-SB \hbox{RABBIT} experiment in argon.
In contrast to more popular single-SB studies, our technique enables us to focus on
the atomic phase of transitions without distortion from a possibly unknown
or experimentally drifting XUV chirp.
While we confirmed earlier predictions that the angle-integrated \hbox{RABBIT} phases
extracted within a SB group become increasingly similar with increasing photoelectron energy,
we enhanced the analyzing power of the setup significantly by resolving the emission angle with a reaction microscope.
By doing so, we could identify which of the three sideband phases within a group is
most sensitive to a change in the detection angle.
Our experimental efforts were supported by numerical calculations performed with the
non\-perturbative all-electron \hbox{$R$-matrix} with time dependence method.
There is good qualitative agreement between experiment and theory regarding the general
trends observed, but significant differences remain in the details.
Given the limitations and challenges faced in the present study, especially concerning the details of
the pulses and the argon target, the remaining deviations between experiment and theory
in the quantitative values of the phases are not too surprising.
We hope to address these issues in future improvements of the setup.
As the next step, we plan to repeat this experiment with helium, where the contribution of the Wigner phase
for an $s \to p$ transition remains the same in all three sidebands. Any differences in the phases within the group
then clearly indicate the influence of $\phi^{cc}$. This switch of targets would require
extending the harmonic cut-off,
which is by no means trivial in our scheme, as the cut-off in the HHG process
decreases with the driving frequency. Using helium instead of argon also has the advantage of
theory likely being more reliable due to the simplicity of the target. On the other hand,
heavier quasi-two-electron targets with an $(ns^2)^1S$ outer-shell configuration
(unfortunately, these are metals that would need to be vaporized rather than inert gases) would
provide a larger short-range modification of the relevant interaction potential and, therefore,
may be more suitable to investigate whether $\phi^{cc}$ is indeed nearly universal.
Undoubtedly, many open questions will need to be answered before the effects of the
additional continuum-continuum transitions in single- and multiple-SB \hbox{RABBIT} setups are fully understood.
It would be interesting to analyze whether the SB phases always converge to each other with increasing energy,
whether or not they cross in a predictable way with increasing emission angle,
and how the behavior depends on the target investigated.
While we cannot answer these questions at the present time, we hope that other groups will see
the work reported in this paper
as a worthwhile inspiration to carry out further studies in this field.
\medskip
\begin{acknowledgments}
The experimental part of this work was supported by the DFG-QUTIF program under
Project \hbox{No.~HA 8399/2-1} and \hbox{IMPRS-QD}.
K.R.H.\ and K.B.\ acknowledge funding from the NSF through grants
\hbox{No.~PHY-1803844} and \hbox{No.~PHY-2110023}, respectively, as well as the
Frontera Pathways allocation PHY-20028.
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
package org.lwjgl.openxr;
import org.lwjgl.*;
import org.lwjgl.system.*;
import static org.lwjgl.system.Checks.*;
import static org.lwjgl.system.JNI.*;
import static org.lwjgl.system.MemoryUtil.*;
/** The FB_passthrough extension. */
public class FBPassthrough {
/** The extension specification version. */
public static final int XR_FB_passthrough_SPEC_VERSION = 2;
/** The extension name. */
public static final String XR_FB_PASSTHROUGH_EXTENSION_NAME = "XR_FB_passthrough";
/**
* Extends {@code XrStructureType}.
*
* <h5>Enum values:</h5>
*
* <ul>
* <li>{@link #XR_TYPE_SYSTEM_PASSTHROUGH_PROPERTIES_FB TYPE_SYSTEM_PASSTHROUGH_PROPERTIES_FB}</li>
* <li>{@link #XR_TYPE_PASSTHROUGH_CREATE_INFO_FB TYPE_PASSTHROUGH_CREATE_INFO_FB}</li>
* <li>{@link #XR_TYPE_PASSTHROUGH_LAYER_CREATE_INFO_FB TYPE_PASSTHROUGH_LAYER_CREATE_INFO_FB}</li>
* <li>{@link #XR_TYPE_COMPOSITION_LAYER_PASSTHROUGH_FB TYPE_COMPOSITION_LAYER_PASSTHROUGH_FB}</li>
* <li>{@link #XR_TYPE_GEOMETRY_INSTANCE_CREATE_INFO_FB TYPE_GEOMETRY_INSTANCE_CREATE_INFO_FB}</li>
* <li>{@link #XR_TYPE_GEOMETRY_INSTANCE_TRANSFORM_FB TYPE_GEOMETRY_INSTANCE_TRANSFORM_FB}</li>
* <li>{@link #XR_TYPE_PASSTHROUGH_STYLE_FB TYPE_PASSTHROUGH_STYLE_FB}</li>
* <li>{@link #XR_TYPE_PASSTHROUGH_COLOR_MAP_MONO_TO_RGBA_FB TYPE_PASSTHROUGH_COLOR_MAP_MONO_TO_RGBA_FB}</li>
* <li>{@link #XR_TYPE_PASSTHROUGH_COLOR_MAP_MONO_TO_MONO_FB TYPE_PASSTHROUGH_COLOR_MAP_MONO_TO_MONO_FB}</li>
* <li>{@link #XR_TYPE_PASSTHROUGH_BRIGHTNESS_CONTRAST_SATURATION_FB TYPE_PASSTHROUGH_BRIGHTNESS_CONTRAST_SATURATION_FB}</li>
* <li>{@link #XR_TYPE_EVENT_DATA_PASSTHROUGH_STATE_CHANGED_FB TYPE_EVENT_DATA_PASSTHROUGH_STATE_CHANGED_FB}</li>
* </ul>
*/
public static final int
XR_TYPE_SYSTEM_PASSTHROUGH_PROPERTIES_FB = 1000118000,
XR_TYPE_PASSTHROUGH_CREATE_INFO_FB = 1000118001,
XR_TYPE_PASSTHROUGH_LAYER_CREATE_INFO_FB = 1000118002,
XR_TYPE_COMPOSITION_LAYER_PASSTHROUGH_FB = 1000118003,
XR_TYPE_GEOMETRY_INSTANCE_CREATE_INFO_FB = 1000118004,
XR_TYPE_GEOMETRY_INSTANCE_TRANSFORM_FB = 1000118005,
XR_TYPE_PASSTHROUGH_STYLE_FB = 1000118020,
XR_TYPE_PASSTHROUGH_COLOR_MAP_MONO_TO_RGBA_FB = 1000118021,
XR_TYPE_PASSTHROUGH_COLOR_MAP_MONO_TO_MONO_FB = 1000118022,
XR_TYPE_PASSTHROUGH_BRIGHTNESS_CONTRAST_SATURATION_FB = 1000118023,
XR_TYPE_EVENT_DATA_PASSTHROUGH_STATE_CHANGED_FB = 1000118030;
/**
* Extends {@code XrResult}.
*
* <h5>Enum values:</h5>
*
* <ul>
* <li>{@link #XR_ERROR_UNEXPECTED_STATE_PASSTHROUGH_FB ERROR_UNEXPECTED_STATE_PASSTHROUGH_FB}</li>
* <li>{@link #XR_ERROR_FEATURE_ALREADY_CREATED_PASSTHROUGH_FB ERROR_FEATURE_ALREADY_CREATED_PASSTHROUGH_FB}</li>
* <li>{@link #XR_ERROR_FEATURE_REQUIRED_PASSTHROUGH_FB ERROR_FEATURE_REQUIRED_PASSTHROUGH_FB}</li>
* <li>{@link #XR_ERROR_NOT_PERMITTED_PASSTHROUGH_FB ERROR_NOT_PERMITTED_PASSTHROUGH_FB}</li>
* <li>{@link #XR_ERROR_INSUFFICIENT_RESOURCES_PASSTHROUGH_FB ERROR_INSUFFICIENT_RESOURCES_PASSTHROUGH_FB}</li>
* <li>{@link #XR_ERROR_UNKNOWN_PASSTHROUGH_FB ERROR_UNKNOWN_PASSTHROUGH_FB}</li>
* </ul>
*/
public static final int
XR_ERROR_UNEXPECTED_STATE_PASSTHROUGH_FB = -1000118000,
XR_ERROR_FEATURE_ALREADY_CREATED_PASSTHROUGH_FB = -1000118001,
XR_ERROR_FEATURE_REQUIRED_PASSTHROUGH_FB = -1000118002,
XR_ERROR_NOT_PERMITTED_PASSTHROUGH_FB = -1000118003,
XR_ERROR_INSUFFICIENT_RESOURCES_PASSTHROUGH_FB = -1000118004,
XR_ERROR_UNKNOWN_PASSTHROUGH_FB = -1000118050;
/** XR_PASSTHROUGH_COLOR_MAP_MONO_SIZE_FB */
public static final int XR_PASSTHROUGH_COLOR_MAP_MONO_SIZE_FB = 256;
/**
* Extends {@code XrObjectType}.
*
* <h5>Enum values:</h5>
*
* <ul>
* <li>{@link #XR_OBJECT_TYPE_PASSTHROUGH_FB OBJECT_TYPE_PASSTHROUGH_FB}</li>
* <li>{@link #XR_OBJECT_TYPE_PASSTHROUGH_LAYER_FB OBJECT_TYPE_PASSTHROUGH_LAYER_FB}</li>
* <li>{@link #XR_OBJECT_TYPE_GEOMETRY_INSTANCE_FB OBJECT_TYPE_GEOMETRY_INSTANCE_FB}</li>
* </ul>
*/
public static final int
XR_OBJECT_TYPE_PASSTHROUGH_FB = 1000118000,
XR_OBJECT_TYPE_PASSTHROUGH_LAYER_FB = 1000118002,
XR_OBJECT_TYPE_GEOMETRY_INSTANCE_FB = 1000118004;
/** XrPassthroughFlagBitsFB */
public static final int XR_PASSTHROUGH_IS_RUNNING_AT_CREATION_BIT_FB = 0x1;
/**
* XrPassthroughLayerPurposeFB - Layer purpose
*
* <h5>Enumerant Descriptions</h5>
*
* <ul>
* <li>{@link #XR_PASSTHROUGH_LAYER_PURPOSE_RECONSTRUCTION_FB PASSTHROUGH_LAYER_PURPOSE_RECONSTRUCTION_FB} — Reconstruction passthrough (full screen environment)</li>
* <li>{@link #XR_PASSTHROUGH_LAYER_PURPOSE_PROJECTED_FB PASSTHROUGH_LAYER_PURPOSE_PROJECTED_FB} — Projected passthrough (using a custom surface)</li>
* <li>{@link FBPassthroughKeyboardHands#XR_PASSTHROUGH_LAYER_PURPOSE_TRACKED_KEYBOARD_HANDS_FB PASSTHROUGH_LAYER_PURPOSE_TRACKED_KEYBOARD_HANDS_FB} — Passthrough layer purpose for keyboard hands presence.</li>
* <li>{@link FBPassthroughKeyboardHands#XR_PASSTHROUGH_LAYER_PURPOSE_TRACKED_KEYBOARD_MASKED_HANDS_FB PASSTHROUGH_LAYER_PURPOSE_TRACKED_KEYBOARD_MASKED_HANDS_FB} — Passthrough layer purpose for keyboard hands presence with keyboard masked hand transitions (i.e., passthrough hands rendered only when they are over the keyboard).</li>
* </ul>
*
* <h5>See Also</h5>
*
* <p>{@link XrPassthroughLayerCreateInfoFB}</p>
*/
public static final int
XR_PASSTHROUGH_LAYER_PURPOSE_RECONSTRUCTION_FB = 0,
XR_PASSTHROUGH_LAYER_PURPOSE_PROJECTED_FB = 1;
/**
* XrPassthroughStateChangedFlagBitsFB
*
* <h5>Enum values:</h5>
*
* <ul>
* <li>{@link #XR_PASSTHROUGH_STATE_CHANGED_REINIT_REQUIRED_BIT_FB PASSTHROUGH_STATE_CHANGED_REINIT_REQUIRED_BIT_FB}</li>
* <li>{@link #XR_PASSTHROUGH_STATE_CHANGED_NON_RECOVERABLE_ERROR_BIT_FB PASSTHROUGH_STATE_CHANGED_NON_RECOVERABLE_ERROR_BIT_FB}</li>
* <li>{@link #XR_PASSTHROUGH_STATE_CHANGED_RECOVERABLE_ERROR_BIT_FB PASSTHROUGH_STATE_CHANGED_RECOVERABLE_ERROR_BIT_FB}</li>
* <li>{@link #XR_PASSTHROUGH_STATE_CHANGED_RESTORED_ERROR_BIT_FB PASSTHROUGH_STATE_CHANGED_RESTORED_ERROR_BIT_FB}</li>
* </ul>
*/
public static final int
XR_PASSTHROUGH_STATE_CHANGED_REINIT_REQUIRED_BIT_FB = 0x1,
XR_PASSTHROUGH_STATE_CHANGED_NON_RECOVERABLE_ERROR_BIT_FB = 0x2,
XR_PASSTHROUGH_STATE_CHANGED_RECOVERABLE_ERROR_BIT_FB = 0x4,
XR_PASSTHROUGH_STATE_CHANGED_RESTORED_ERROR_BIT_FB = 0x8;
protected FBPassthrough() {
throw new UnsupportedOperationException();
}
// --- [ xrCreatePassthroughFB ] ---
/** Unsafe version of: {@link #xrCreatePassthroughFB CreatePassthroughFB} */
public static int nxrCreatePassthroughFB(XrSession session, long createInfo, long outPassthrough) {
long __functionAddress = session.getCapabilities().xrCreatePassthroughFB;
if (CHECKS) {
check(__functionAddress);
}
return callPPPI(session.address(), createInfo, outPassthrough, __functionAddress);
}
/**
* Create a passthrough feature.
*
* <h5>C Specification</h5>
*
* <p>The {@link #xrCreatePassthroughFB CreatePassthroughFB} function is defined as:</p>
*
* <pre><code>
* XrResult xrCreatePassthroughFB(
* XrSession session,
* const XrPassthroughCreateInfoFB* createInfo,
* XrPassthroughFB* outPassthrough);</code></pre>
*
* <h5>Description</h5>
*
* <p>Creates an {@code XrPassthroughFB} handle. The returned passthrough handle <b>may</b> be subsequently used in API calls.</p>
*
* <h5>Valid Usage (Implicit)</h5>
*
* <ul>
* <li>The {@link FBPassthrough XR_FB_passthrough} extension <b>must</b> be enabled prior to calling {@link #xrCreatePassthroughFB CreatePassthroughFB}</li>
* <li>{@code session} <b>must</b> be a valid {@code XrSession} handle</li>
* <li>{@code createInfo} <b>must</b> be a pointer to a valid {@link XrPassthroughCreateInfoFB} structure</li>
* <li>{@code outPassthrough} <b>must</b> be a pointer to an {@code XrPassthroughFB} handle</li>
* </ul>
*
* <h5>Return Codes</h5>
*
* <dl>
* <dt>On success, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_SUCCESS SUCCESS}</li>
* <li>{@link XR10#XR_SESSION_LOSS_PENDING SESSION_LOSS_PENDING}</li>
* </ul></dd>
* <dt>On failure, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_ERROR_FUNCTION_UNSUPPORTED ERROR_FUNCTION_UNSUPPORTED}</li>
* <li>{@link XR10#XR_ERROR_VALIDATION_FAILURE ERROR_VALIDATION_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_RUNTIME_FAILURE ERROR_RUNTIME_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_HANDLE_INVALID ERROR_HANDLE_INVALID}</li>
* <li>{@link XR10#XR_ERROR_INSTANCE_LOST ERROR_INSTANCE_LOST}</li>
* <li>{@link XR10#XR_ERROR_SESSION_LOST ERROR_SESSION_LOST}</li>
* <li>{@link XR10#XR_ERROR_OUT_OF_MEMORY ERROR_OUT_OF_MEMORY}</li>
* <li>{@link XR10#XR_ERROR_LIMIT_REACHED ERROR_LIMIT_REACHED}</li>
* <li>{@link #XR_ERROR_UNKNOWN_PASSTHROUGH_FB ERROR_UNKNOWN_PASSTHROUGH_FB}</li>
* <li>{@link #XR_ERROR_NOT_PERMITTED_PASSTHROUGH_FB ERROR_NOT_PERMITTED_PASSTHROUGH_FB}</li>
* <li>{@link XR10#XR_ERROR_FEATURE_UNSUPPORTED ERROR_FEATURE_UNSUPPORTED}</li>
* <li>{@link #XR_ERROR_FEATURE_ALREADY_CREATED_PASSTHROUGH_FB ERROR_FEATURE_ALREADY_CREATED_PASSTHROUGH_FB}</li>
* </ul></dd>
* </dl>
*
* <h5>See Also</h5>
*
* <p>{@link XrPassthroughCreateInfoFB}</p>
*
* @param session the {@code XrSession}.
* @param createInfo the {@link XrPassthroughCreateInfoFB}.
* @param outPassthrough the {@code XrPassthroughFB}.
*/
@NativeType("XrResult")
public static int xrCreatePassthroughFB(XrSession session, @NativeType("XrPassthroughCreateInfoFB const *") XrPassthroughCreateInfoFB createInfo, @NativeType("XrPassthroughFB *") PointerBuffer outPassthrough) {
if (CHECKS) {
check(outPassthrough, 1);
}
return nxrCreatePassthroughFB(session, createInfo.address(), memAddress(outPassthrough));
}
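// Hypothetical usage sketch (not part of the generated bindings; assumes an
// XrSession `session` created with XR_FB_passthrough enabled, LWJGL's
// MemoryStack helpers statically imported, and error handling reduced to a
// bare success check):
//
// try (MemoryStack stack = stackPush()) {
//     XrPassthroughCreateInfoFB createInfo = XrPassthroughCreateInfoFB.calloc(stack)
//         .type$Default()
//         .flags(XR_PASSTHROUGH_IS_RUNNING_AT_CREATION_BIT_FB);
//     PointerBuffer pPassthrough = stack.mallocPointer(1);
//     if (xrCreatePassthroughFB(session, createInfo, pPassthrough) == XR10.XR_SUCCESS) {
//         XrPassthroughFB passthrough = new XrPassthroughFB(pPassthrough.get(0), session);
//         // ... create layers, render, xrPassthroughPauseFB(passthrough) as needed ...
//         xrDestroyPassthroughFB(passthrough);
//     }
// }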
// --- [ xrDestroyPassthroughFB ] ---
/**
* Destroy a passthrough feature.
*
* <h5>C Specification</h5>
*
* <p>The {@link #xrDestroyPassthroughFB DestroyPassthroughFB} function is defined as:</p>
*
* <pre><code>
* XrResult xrDestroyPassthroughFB(
* XrPassthroughFB passthrough);</code></pre>
*
* <h5>Description</h5>
*
* <p>Destroys an {@code XrPassthroughFB} handle.</p>
*
* <h5>Valid Usage (Implicit)</h5>
*
* <ul>
* <li>The {@link FBPassthrough XR_FB_passthrough} extension <b>must</b> be enabled prior to calling {@link #xrDestroyPassthroughFB DestroyPassthroughFB}</li>
* <li>{@code passthrough} <b>must</b> be a valid {@code XrPassthroughFB} handle</li>
* </ul>
*
* <h5>Thread Safety</h5>
*
* <ul>
* <li>Access to {@code passthrough}, and any child handles, <b>must</b> be externally synchronized</li>
* </ul>
*
* <h5>Return Codes</h5>
*
* <dl>
* <dt>On success, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_SUCCESS SUCCESS}</li>
* </ul></dd>
* <dt>On failure, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_ERROR_FUNCTION_UNSUPPORTED ERROR_FUNCTION_UNSUPPORTED}</li>
* <li>{@link XR10#XR_ERROR_RUNTIME_FAILURE ERROR_RUNTIME_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_HANDLE_INVALID ERROR_HANDLE_INVALID}</li>
* <li>{@link XR10#XR_ERROR_FEATURE_UNSUPPORTED ERROR_FEATURE_UNSUPPORTED}</li>
* </ul></dd>
* </dl>
*
* @param passthrough the {@code XrPassthroughFB} to be destroyed.
*/
@NativeType("XrResult")
public static int xrDestroyPassthroughFB(XrPassthroughFB passthrough) {
long __functionAddress = passthrough.getCapabilities().xrDestroyPassthroughFB;
if (CHECKS) {
check(__functionAddress);
}
return callPI(passthrough.address(), __functionAddress);
}
// --- [ xrPassthroughStartFB ] ---
/**
* Start a passthrough feature.
*
* <h5>C Specification</h5>
*
* <p>The {@link #xrPassthroughStartFB PassthroughStartFB} function is defined as:</p>
*
* <pre><code>
* XrResult xrPassthroughStartFB(
* XrPassthroughFB passthrough);</code></pre>
*
* <h5>Description</h5>
*
* <p>Starts an {@code XrPassthroughFB} feature. If the feature is not started, either explicitly with a call to {@link #xrPassthroughStartFB PassthroughStartFB}, or implicitly at creation using the behavior flags, it is considered paused. When the feature is paused, runtime will stop rendering and compositing all passthrough layers produced on behalf of the application, and may free up some or all the resources used to produce passthrough until {@link #xrPassthroughStartFB PassthroughStartFB} is called.</p>
*
* <h5>Valid Usage (Implicit)</h5>
*
* <ul>
* <li>The {@link FBPassthrough XR_FB_passthrough} extension <b>must</b> be enabled prior to calling {@link #xrPassthroughStartFB PassthroughStartFB}</li>
* <li>{@code passthrough} <b>must</b> be a valid {@code XrPassthroughFB} handle</li>
* </ul>
*
* <h5>Return Codes</h5>
*
* <dl>
* <dt>On success, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_SUCCESS SUCCESS}</li>
* <li>{@link XR10#XR_SESSION_LOSS_PENDING SESSION_LOSS_PENDING}</li>
* </ul></dd>
* <dt>On failure, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_ERROR_FUNCTION_UNSUPPORTED ERROR_FUNCTION_UNSUPPORTED}</li>
* <li>{@link XR10#XR_ERROR_VALIDATION_FAILURE ERROR_VALIDATION_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_RUNTIME_FAILURE ERROR_RUNTIME_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_HANDLE_INVALID ERROR_HANDLE_INVALID}</li>
* <li>{@link XR10#XR_ERROR_INSTANCE_LOST ERROR_INSTANCE_LOST}</li>
* <li>{@link XR10#XR_ERROR_SESSION_LOST ERROR_SESSION_LOST}</li>
* <li>{@link #XR_ERROR_UNEXPECTED_STATE_PASSTHROUGH_FB ERROR_UNEXPECTED_STATE_PASSTHROUGH_FB}</li>
* <li>{@link XR10#XR_ERROR_FEATURE_UNSUPPORTED ERROR_FEATURE_UNSUPPORTED}</li>
* </ul></dd>
* </dl>
*
* @param passthrough the {@code XrPassthroughFB} to be started.
*/
@NativeType("XrResult")
public static int xrPassthroughStartFB(XrPassthroughFB passthrough) {
long __functionAddress = passthrough.getCapabilities().xrPassthroughStartFB;
if (CHECKS) {
check(__functionAddress);
}
return callPI(passthrough.address(), __functionAddress);
}
// --- [ xrPassthroughPauseFB ] ---
/**
* Pause a passthrough feature.
*
* <h5>C Specification</h5>
*
* <p>The {@link #xrPassthroughPauseFB PassthroughPauseFB} function is defined as:</p>
*
* <pre><code>
* XrResult xrPassthroughPauseFB(
* XrPassthroughFB passthrough);</code></pre>
*
* <h5>Description</h5>
*
* <p>Pauses an {@code XrPassthroughFB} feature. When the feature is paused, runtime will stop rendering and compositing all passthrough layers produced on behalf of the application, and may free up some or all the resources used to produce passthrough until {@link #xrPassthroughStartFB PassthroughStartFB} is called.</p>
*
* <h5>Valid Usage (Implicit)</h5>
*
* <ul>
* <li>The {@link FBPassthrough XR_FB_passthrough} extension <b>must</b> be enabled prior to calling {@link #xrPassthroughPauseFB PassthroughPauseFB}</li>
* <li>{@code passthrough} <b>must</b> be a valid {@code XrPassthroughFB} handle</li>
* </ul>
*
* <h5>Return Codes</h5>
*
* <dl>
* <dt>On success, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_SUCCESS SUCCESS}</li>
* <li>{@link XR10#XR_SESSION_LOSS_PENDING SESSION_LOSS_PENDING}</li>
* </ul></dd>
* <dt>On failure, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_ERROR_FUNCTION_UNSUPPORTED ERROR_FUNCTION_UNSUPPORTED}</li>
* <li>{@link XR10#XR_ERROR_VALIDATION_FAILURE ERROR_VALIDATION_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_RUNTIME_FAILURE ERROR_RUNTIME_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_HANDLE_INVALID ERROR_HANDLE_INVALID}</li>
* <li>{@link XR10#XR_ERROR_INSTANCE_LOST ERROR_INSTANCE_LOST}</li>
* <li>{@link XR10#XR_ERROR_SESSION_LOST ERROR_SESSION_LOST}</li>
* <li>{@link #XR_ERROR_UNEXPECTED_STATE_PASSTHROUGH_FB ERROR_UNEXPECTED_STATE_PASSTHROUGH_FB}</li>
* <li>{@link XR10#XR_ERROR_FEATURE_UNSUPPORTED ERROR_FEATURE_UNSUPPORTED}</li>
* </ul></dd>
* </dl>
*
* @param passthrough the {@code XrPassthroughFB} to be paused.
*/
@NativeType("XrResult")
public static int xrPassthroughPauseFB(XrPassthroughFB passthrough) {
long __functionAddress = passthrough.getCapabilities().xrPassthroughPauseFB;
if (CHECKS) {
check(__functionAddress);
}
return callPI(passthrough.address(), __functionAddress);
}
// --- [ xrCreatePassthroughLayerFB ] ---
/** Unsafe version of: {@link #xrCreatePassthroughLayerFB CreatePassthroughLayerFB} */
public static int nxrCreatePassthroughLayerFB(XrSession session, long createInfo, long outLayer) {
long __functionAddress = session.getCapabilities().xrCreatePassthroughLayerFB;
if (CHECKS) {
check(__functionAddress);
XrPassthroughLayerCreateInfoFB.validate(createInfo);
}
return callPPPI(session.address(), createInfo, outLayer, __functionAddress);
}
/**
* Create a passthrough layer.
*
* <h5>C Specification</h5>
*
* <p>The {@link #xrCreatePassthroughLayerFB CreatePassthroughLayerFB} function is defined as:</p>
*
* <pre><code>
* XrResult xrCreatePassthroughLayerFB(
* XrSession session,
* const XrPassthroughLayerCreateInfoFB* createInfo,
* XrPassthroughLayerFB* outLayer);</code></pre>
*
* <h5>Description</h5>
*
* <p>Creates an {@code XrPassthroughLayerFB} handle. The returned layer handle <b>may</b> be subsequently used in API calls. Layer objects may be used to specify rendering properties of the layer, such as styles and compositing rules.</p>
*
* <h5>Valid Usage (Implicit)</h5>
*
* <ul>
* <li>The {@link FBPassthrough XR_FB_passthrough} extension <b>must</b> be enabled prior to calling {@link #xrCreatePassthroughLayerFB CreatePassthroughLayerFB}</li>
* <li>{@code session} <b>must</b> be a valid {@code XrSession} handle</li>
* <li>{@code createInfo} <b>must</b> be a pointer to a valid {@link XrPassthroughLayerCreateInfoFB} structure</li>
* <li>{@code outLayer} <b>must</b> be a pointer to an {@code XrPassthroughLayerFB} handle</li>
* </ul>
*
* <h5>Return Codes</h5>
*
* <dl>
* <dt>On success, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_SUCCESS SUCCESS}</li>
* <li>{@link XR10#XR_SESSION_LOSS_PENDING SESSION_LOSS_PENDING}</li>
* </ul></dd>
* <dt>On failure, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_ERROR_FUNCTION_UNSUPPORTED ERROR_FUNCTION_UNSUPPORTED}</li>
* <li>{@link XR10#XR_ERROR_VALIDATION_FAILURE ERROR_VALIDATION_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_RUNTIME_FAILURE ERROR_RUNTIME_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_HANDLE_INVALID ERROR_HANDLE_INVALID}</li>
* <li>{@link XR10#XR_ERROR_INSTANCE_LOST ERROR_INSTANCE_LOST}</li>
* <li>{@link XR10#XR_ERROR_SESSION_LOST ERROR_SESSION_LOST}</li>
* <li>{@link XR10#XR_ERROR_OUT_OF_MEMORY ERROR_OUT_OF_MEMORY}</li>
* <li>{@link XR10#XR_ERROR_LIMIT_REACHED ERROR_LIMIT_REACHED}</li>
* <li>{@link #XR_ERROR_UNKNOWN_PASSTHROUGH_FB ERROR_UNKNOWN_PASSTHROUGH_FB}</li>
* <li>{@link #XR_ERROR_INSUFFICIENT_RESOURCES_PASSTHROUGH_FB ERROR_INSUFFICIENT_RESOURCES_PASSTHROUGH_FB}</li>
* <li>{@link XR10#XR_ERROR_FEATURE_UNSUPPORTED ERROR_FEATURE_UNSUPPORTED}</li>
* <li>{@link #XR_ERROR_FEATURE_REQUIRED_PASSTHROUGH_FB ERROR_FEATURE_REQUIRED_PASSTHROUGH_FB}</li>
* </ul></dd>
* </dl>
*
* <h5>See Also</h5>
*
* <p>{@link XrPassthroughLayerCreateInfoFB}</p>
*
* @param session the {@code XrSession}.
* @param createInfo the {@link XrPassthroughLayerCreateInfoFB}.
* @param outLayer the {@code XrPassthroughLayerFB}.
*/
@NativeType("XrResult")
public static int xrCreatePassthroughLayerFB(XrSession session, @NativeType("XrPassthroughLayerCreateInfoFB const *") XrPassthroughLayerCreateInfoFB createInfo, @NativeType("XrPassthroughLayerFB *") PointerBuffer outLayer) {
if (CHECKS) {
check(outLayer, 1);
}
return nxrCreatePassthroughLayerFB(session, createInfo.address(), memAddress(outLayer));
}
// --- [ xrDestroyPassthroughLayerFB ] ---
/**
* Destroy a passthrough layer.
*
* <h5>C Specification</h5>
*
* <p>The {@link #xrDestroyPassthroughLayerFB DestroyPassthroughLayerFB} function is defined as:</p>
*
* <pre><code>
* XrResult xrDestroyPassthroughLayerFB(
* XrPassthroughLayerFB layer);</code></pre>
*
* <h5>Description</h5>
*
* <p>Destroys an {@code XrPassthroughLayerFB} handle.</p>
*
* <h5>Valid Usage (Implicit)</h5>
*
* <ul>
* <li>The {@link FBPassthrough XR_FB_passthrough} extension <b>must</b> be enabled prior to calling {@link #xrDestroyPassthroughLayerFB DestroyPassthroughLayerFB}</li>
* <li>{@code layer} <b>must</b> be a valid {@code XrPassthroughLayerFB} handle</li>
* </ul>
*
* <h5>Thread Safety</h5>
*
* <ul>
* <li>Access to {@code layer}, and any child handles, <b>must</b> be externally synchronized</li>
* </ul>
*
* <h5>Return Codes</h5>
*
* <dl>
* <dt>On success, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_SUCCESS SUCCESS}</li>
* </ul></dd>
* <dt>On failure, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_ERROR_FUNCTION_UNSUPPORTED ERROR_FUNCTION_UNSUPPORTED}</li>
* <li>{@link XR10#XR_ERROR_RUNTIME_FAILURE ERROR_RUNTIME_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_HANDLE_INVALID ERROR_HANDLE_INVALID}</li>
* <li>{@link XR10#XR_ERROR_FEATURE_UNSUPPORTED ERROR_FEATURE_UNSUPPORTED}</li>
* </ul></dd>
* </dl>
*
* @param layer the {@code XrPassthroughLayerFB} to be destroyed.
*/
@NativeType("XrResult")
public static int xrDestroyPassthroughLayerFB(XrPassthroughLayerFB layer) {
long __functionAddress = layer.getCapabilities().xrDestroyPassthroughLayerFB;
if (CHECKS) {
check(__functionAddress);
}
return callPI(layer.address(), __functionAddress);
}
// --- [ xrPassthroughLayerPauseFB ] ---
/**
* Pause a passthrough layer.
*
* <h5>C Specification</h5>
*
* <p>The {@link #xrPassthroughLayerPauseFB PassthroughLayerPauseFB} function is defined as:</p>
*
* <pre><code>
* XrResult xrPassthroughLayerPauseFB(
* XrPassthroughLayerFB layer);</code></pre>
*
* <h5>Description</h5>
*
* <p>Pauses an {@code XrPassthroughLayerFB} layer. Runtime will not render or composite paused layers.</p>
*
* <h5>Valid Usage (Implicit)</h5>
*
* <ul>
* <li>The {@link FBPassthrough XR_FB_passthrough} extension <b>must</b> be enabled prior to calling {@link #xrPassthroughLayerPauseFB PassthroughLayerPauseFB}</li>
* <li>{@code layer} <b>must</b> be a valid {@code XrPassthroughLayerFB} handle</li>
* </ul>
*
* <h5>Return Codes</h5>
*
* <dl>
* <dt>On success, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_SUCCESS SUCCESS}</li>
* <li>{@link XR10#XR_SESSION_LOSS_PENDING SESSION_LOSS_PENDING}</li>
* </ul></dd>
* <dt>On failure, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_ERROR_FUNCTION_UNSUPPORTED ERROR_FUNCTION_UNSUPPORTED}</li>
* <li>{@link XR10#XR_ERROR_VALIDATION_FAILURE ERROR_VALIDATION_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_RUNTIME_FAILURE ERROR_RUNTIME_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_HANDLE_INVALID ERROR_HANDLE_INVALID}</li>
* <li>{@link XR10#XR_ERROR_INSTANCE_LOST ERROR_INSTANCE_LOST}</li>
* <li>{@link XR10#XR_ERROR_SESSION_LOST ERROR_SESSION_LOST}</li>
* <li>{@link #XR_ERROR_UNEXPECTED_STATE_PASSTHROUGH_FB ERROR_UNEXPECTED_STATE_PASSTHROUGH_FB}</li>
* <li>{@link XR10#XR_ERROR_FEATURE_UNSUPPORTED ERROR_FEATURE_UNSUPPORTED}</li>
* </ul></dd>
* </dl>
*
* @param layer the {@code XrPassthroughLayerFB} to be paused.
*/
@NativeType("XrResult")
public static int xrPassthroughLayerPauseFB(XrPassthroughLayerFB layer) {
long __functionAddress = layer.getCapabilities().xrPassthroughLayerPauseFB;
if (CHECKS) {
check(__functionAddress);
}
return callPI(layer.address(), __functionAddress);
}
// --- [ xrPassthroughLayerResumeFB ] ---
/**
* Resume a passthrough layer.
*
* <h5>C Specification</h5>
*
* <p>The {@link #xrPassthroughLayerResumeFB PassthroughLayerResumeFB} function is defined as:</p>
*
* <pre><code>
* XrResult xrPassthroughLayerResumeFB(
* XrPassthroughLayerFB layer);</code></pre>
*
* <h5>Description</h5>
*
* <p>Resumes an {@code XrPassthroughLayerFB} layer.</p>
*
* <h5>Valid Usage (Implicit)</h5>
*
* <ul>
* <li>The {@link FBPassthrough XR_FB_passthrough} extension <b>must</b> be enabled prior to calling {@link #xrPassthroughLayerResumeFB PassthroughLayerResumeFB}</li>
* <li>{@code layer} <b>must</b> be a valid {@code XrPassthroughLayerFB} handle</li>
* </ul>
*
* <h5>Return Codes</h5>
*
* <dl>
* <dt>On success, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_SUCCESS SUCCESS}</li>
* <li>{@link XR10#XR_SESSION_LOSS_PENDING SESSION_LOSS_PENDING}</li>
* </ul></dd>
* <dt>On failure, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_ERROR_FUNCTION_UNSUPPORTED ERROR_FUNCTION_UNSUPPORTED}</li>
* <li>{@link XR10#XR_ERROR_VALIDATION_FAILURE ERROR_VALIDATION_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_RUNTIME_FAILURE ERROR_RUNTIME_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_HANDLE_INVALID ERROR_HANDLE_INVALID}</li>
* <li>{@link XR10#XR_ERROR_INSTANCE_LOST ERROR_INSTANCE_LOST}</li>
* <li>{@link XR10#XR_ERROR_SESSION_LOST ERROR_SESSION_LOST}</li>
* <li>{@link #XR_ERROR_UNEXPECTED_STATE_PASSTHROUGH_FB ERROR_UNEXPECTED_STATE_PASSTHROUGH_FB}</li>
* <li>{@link XR10#XR_ERROR_FEATURE_UNSUPPORTED ERROR_FEATURE_UNSUPPORTED}</li>
* </ul></dd>
* </dl>
*
* @param layer the {@code XrPassthroughLayerFB} to be resumed.
*/
@NativeType("XrResult")
public static int xrPassthroughLayerResumeFB(XrPassthroughLayerFB layer) {
long __functionAddress = layer.getCapabilities().xrPassthroughLayerResumeFB;
if (CHECKS) {
check(__functionAddress);
}
return callPI(layer.address(), __functionAddress);
}
// --- [ xrPassthroughLayerSetStyleFB ] ---
/** Unsafe version of: {@link #xrPassthroughLayerSetStyleFB PassthroughLayerSetStyleFB} */
public static int nxrPassthroughLayerSetStyleFB(XrPassthroughLayerFB layer, long style) {
long __functionAddress = layer.getCapabilities().xrPassthroughLayerSetStyleFB;
if (CHECKS) {
check(__functionAddress);
}
return callPPI(layer.address(), style, __functionAddress);
}
/**
* Set style on a passthrough layer.
*
* <h5>C Specification</h5>
*
* <p>The {@link #xrPassthroughLayerSetStyleFB PassthroughLayerSetStyleFB} function is defined as:</p>
*
* <pre><code>
* XrResult xrPassthroughLayerSetStyleFB(
* XrPassthroughLayerFB layer,
* const XrPassthroughStyleFB* style);</code></pre>
*
* <h5>Description</h5>
*
* <p>Sets an {@link XrPassthroughStyleFB} style on an {@code XrPassthroughLayerFB} layer.</p>
*
* <h5>Valid Usage (Implicit)</h5>
*
* <ul>
* <li>The {@link FBPassthrough XR_FB_passthrough} extension <b>must</b> be enabled prior to calling {@link #xrPassthroughLayerSetStyleFB PassthroughLayerSetStyleFB}</li>
* <li>{@code layer} <b>must</b> be a valid {@code XrPassthroughLayerFB} handle</li>
* <li>{@code style} <b>must</b> be a pointer to a valid {@link XrPassthroughStyleFB} structure</li>
* </ul>
*
* <h5>Return Codes</h5>
*
* <dl>
* <dt>On success, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_SUCCESS SUCCESS}</li>
* <li>{@link XR10#XR_SESSION_LOSS_PENDING SESSION_LOSS_PENDING}</li>
* </ul></dd>
* <dt>On failure, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_ERROR_FUNCTION_UNSUPPORTED ERROR_FUNCTION_UNSUPPORTED}</li>
* <li>{@link XR10#XR_ERROR_VALIDATION_FAILURE ERROR_VALIDATION_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_RUNTIME_FAILURE ERROR_RUNTIME_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_HANDLE_INVALID ERROR_HANDLE_INVALID}</li>
* <li>{@link XR10#XR_ERROR_INSTANCE_LOST ERROR_INSTANCE_LOST}</li>
* <li>{@link XR10#XR_ERROR_SESSION_LOST ERROR_SESSION_LOST}</li>
* <li>{@link XR10#XR_ERROR_FEATURE_UNSUPPORTED ERROR_FEATURE_UNSUPPORTED}</li>
* </ul></dd>
* </dl>
*
* <h5>See Also</h5>
*
* <p>{@link XrPassthroughStyleFB}</p>
*
* @param layer the {@code XrPassthroughLayerFB} to which the style is applied.
* @param style the {@link XrPassthroughStyleFB} to be set.
*/
@NativeType("XrResult")
public static int xrPassthroughLayerSetStyleFB(XrPassthroughLayerFB layer, @NativeType("XrPassthroughStyleFB const *") XrPassthroughStyleFB style) {
return nxrPassthroughLayerSetStyleFB(layer, style.address());
}
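// Hypothetical usage sketch (accessor names assumed from the XrPassthroughStyleFB
// members defined by the extension, i.e. textureOpacityFactor and edgeColor;
// `stack` and `layer` as in the earlier sketch):
//
// XrPassthroughStyleFB style = XrPassthroughStyleFB.calloc(stack)
//     .type$Default()
//     .textureOpacityFactor(0.5f)
//     .edgeColor(c -> c.set(1.0f, 0.0f, 0.0f, 1.0f));
// xrPassthroughLayerSetStyleFB(layer, style);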
// --- [ xrCreateGeometryInstanceFB ] ---
/** Unsafe version of: {@link #xrCreateGeometryInstanceFB CreateGeometryInstanceFB} */
public static int nxrCreateGeometryInstanceFB(XrSession session, long createInfo, long outGeometryInstance) {
long __functionAddress = session.getCapabilities().xrCreateGeometryInstanceFB;
if (CHECKS) {
check(__functionAddress);
XrGeometryInstanceCreateInfoFB.validate(createInfo);
}
return callPPPI(session.address(), createInfo, outGeometryInstance, __functionAddress);
}
/**
* Create a geometry instance.
*
* <h5>C Specification</h5>
*
* <p>The {@link #xrCreateGeometryInstanceFB CreateGeometryInstanceFB} function is defined as:</p>
*
* <pre><code>
* XrResult xrCreateGeometryInstanceFB(
* XrSession session,
* const XrGeometryInstanceCreateInfoFB* createInfo,
* XrGeometryInstanceFB* outGeometryInstance);</code></pre>
*
* <h5>Description</h5>
*
* <p>Creates an {@code XrGeometryInstanceFB} handle. Geometry instance functionality requires <a target="_blank" href="https://www.khronos.org/registry/OpenXR/specs/1.0/html/xrspec.html#XR_FB_triangle_mesh">XR_FB_triangle_mesh</a> extension to be enabled. An {@code XrGeometryInstanceFB} connects a layer, a mesh, and a transformation, with the semantics that a specific mesh will be instantiated in a specific layer with a specific transformation. A mesh can be instantiated multiple times, in the same or in different layers.</p>
*
* <h5>Valid Usage (Implicit)</h5>
*
* <ul>
* <li>The {@link FBPassthrough XR_FB_passthrough} extension <b>must</b> be enabled prior to calling {@link #xrCreateGeometryInstanceFB CreateGeometryInstanceFB}</li>
* <li>{@code session} <b>must</b> be a valid {@code XrSession} handle</li>
* <li>{@code createInfo} <b>must</b> be a pointer to a valid {@link XrGeometryInstanceCreateInfoFB} structure</li>
* <li>{@code outGeometryInstance} <b>must</b> be a pointer to an {@code XrGeometryInstanceFB} handle</li>
* </ul>
*
* <h5>Return Codes</h5>
*
* <dl>
* <dt>On success, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_SUCCESS SUCCESS}</li>
* <li>{@link XR10#XR_SESSION_LOSS_PENDING SESSION_LOSS_PENDING}</li>
* </ul></dd>
* <dt>On failure, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_ERROR_FUNCTION_UNSUPPORTED ERROR_FUNCTION_UNSUPPORTED}</li>
* <li>{@link XR10#XR_ERROR_VALIDATION_FAILURE ERROR_VALIDATION_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_RUNTIME_FAILURE ERROR_RUNTIME_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_HANDLE_INVALID ERROR_HANDLE_INVALID}</li>
* <li>{@link XR10#XR_ERROR_INSTANCE_LOST ERROR_INSTANCE_LOST}</li>
* <li>{@link XR10#XR_ERROR_SESSION_LOST ERROR_SESSION_LOST}</li>
* <li>{@link XR10#XR_ERROR_OUT_OF_MEMORY ERROR_OUT_OF_MEMORY}</li>
* <li>{@link XR10#XR_ERROR_LIMIT_REACHED ERROR_LIMIT_REACHED}</li>
* <li>{@link XR10#XR_ERROR_POSE_INVALID ERROR_POSE_INVALID}</li>
* <li>{@link #XR_ERROR_INSUFFICIENT_RESOURCES_PASSTHROUGH_FB ERROR_INSUFFICIENT_RESOURCES_PASSTHROUGH_FB}</li>
* <li>{@link XR10#XR_ERROR_FEATURE_UNSUPPORTED ERROR_FEATURE_UNSUPPORTED}</li>
* </ul></dd>
* </dl>
*
* <h5>See Also</h5>
*
* <p>{@link XrGeometryInstanceCreateInfoFB}</p>
*
* @param session the {@code XrSession}.
* @param createInfo the {@link XrGeometryInstanceCreateInfoFB}.
* @param outGeometryInstance the {@code XrGeometryInstanceFB}.
*/
@NativeType("XrResult")
public static int xrCreateGeometryInstanceFB(XrSession session, @NativeType("XrGeometryInstanceCreateInfoFB const *") XrGeometryInstanceCreateInfoFB createInfo, @NativeType("XrGeometryInstanceFB *") PointerBuffer outGeometryInstance) {
if (CHECKS) {
check(outGeometryInstance, 1);
}
return nxrCreateGeometryInstanceFB(session, createInfo.address(), memAddress(outGeometryInstance));
}
// --- [ xrDestroyGeometryInstanceFB ] ---
/**
* Destroy a geometry instance.
*
* <h5>C Specification</h5>
*
* <p>The {@link #xrDestroyGeometryInstanceFB DestroyGeometryInstanceFB} function is defined as:</p>
*
* <pre><code>
* XrResult xrDestroyGeometryInstanceFB(
* XrGeometryInstanceFB instance);</code></pre>
*
* <h5>Description</h5>
*
* <p>Destroys an {@code XrGeometryInstanceFB} handle. Destroying an {@code XrGeometryInstanceFB} does not destroy a mesh and does not free mesh resources. Destroying a layer invalidates all geometry instances attached to it. Destroying a mesh invalidates all its instances.</p>
*
* <h5>Valid Usage (Implicit)</h5>
*
* <ul>
* <li>The {@link FBPassthrough XR_FB_passthrough} extension <b>must</b> be enabled prior to calling {@link #xrDestroyGeometryInstanceFB DestroyGeometryInstanceFB}</li>
* <li>{@code instance} <b>must</b> be a valid {@code XrGeometryInstanceFB} handle</li>
* </ul>
*
* <h5>Thread Safety</h5>
*
* <ul>
* <li>Access to {@code instance}, and any child handles, <b>must</b> be externally synchronized</li>
* </ul>
*
* <h5>Return Codes</h5>
*
* <dl>
* <dt>On success, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_SUCCESS SUCCESS}</li>
* </ul></dd>
* <dt>On failure, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_ERROR_FUNCTION_UNSUPPORTED ERROR_FUNCTION_UNSUPPORTED}</li>
* <li>{@link XR10#XR_ERROR_RUNTIME_FAILURE ERROR_RUNTIME_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_HANDLE_INVALID ERROR_HANDLE_INVALID}</li>
* <li>{@link XR10#XR_ERROR_FEATURE_UNSUPPORTED ERROR_FEATURE_UNSUPPORTED}</li>
* </ul></dd>
* </dl>
*/
@NativeType("XrResult")
public static int xrDestroyGeometryInstanceFB(XrGeometryInstanceFB instance) {
long __functionAddress = instance.getCapabilities().xrDestroyGeometryInstanceFB;
if (CHECKS) {
check(__functionAddress);
}
return callPI(instance.address(), __functionAddress);
}
// --- [ xrGeometryInstanceSetTransformFB ] ---
/** Unsafe version of: {@link #xrGeometryInstanceSetTransformFB GeometryInstanceSetTransformFB} */
public static int nxrGeometryInstanceSetTransformFB(XrGeometryInstanceFB instance, long transformation) {
long __functionAddress = instance.getCapabilities().xrGeometryInstanceSetTransformFB;
if (CHECKS) {
check(__functionAddress);
XrGeometryInstanceTransformFB.validate(transformation);
}
return callPPI(instance.address(), transformation, __functionAddress);
}
/**
* Set the transformation of a geometry instance.
*
* <h5>C Specification</h5>
*
* <p>The {@link #xrGeometryInstanceSetTransformFB GeometryInstanceSetTransformFB} function is defined as:</p>
*
* <pre><code>
* XrResult xrGeometryInstanceSetTransformFB(
* XrGeometryInstanceFB instance,
* const XrGeometryInstanceTransformFB* transformation);</code></pre>
*
* <h5>Description</h5>
*
* <p>Sets an {@link XrGeometryInstanceTransformFB} transform on an {@code XrGeometryInstanceFB} geometry instance.</p>
*
* <h5>Valid Usage (Implicit)</h5>
*
* <ul>
* <li>The {@link FBPassthrough XR_FB_passthrough} extension <b>must</b> be enabled prior to calling {@link #xrGeometryInstanceSetTransformFB GeometryInstanceSetTransformFB}</li>
* <li>{@code instance} <b>must</b> be a valid {@code XrGeometryInstanceFB} handle</li>
* <li>{@code transformation} <b>must</b> be a pointer to a valid {@link XrGeometryInstanceTransformFB} structure</li>
* </ul>
*
* <h5>Return Codes</h5>
*
* <dl>
* <dt>On success, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_SUCCESS SUCCESS}</li>
* <li>{@link XR10#XR_SESSION_LOSS_PENDING SESSION_LOSS_PENDING}</li>
* </ul></dd>
* <dt>On failure, this command returns</dt>
* <dd><ul>
* <li>{@link XR10#XR_ERROR_FUNCTION_UNSUPPORTED ERROR_FUNCTION_UNSUPPORTED}</li>
* <li>{@link XR10#XR_ERROR_VALIDATION_FAILURE ERROR_VALIDATION_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_RUNTIME_FAILURE ERROR_RUNTIME_FAILURE}</li>
* <li>{@link XR10#XR_ERROR_HANDLE_INVALID ERROR_HANDLE_INVALID}</li>
* <li>{@link XR10#XR_ERROR_INSTANCE_LOST ERROR_INSTANCE_LOST}</li>
* <li>{@link XR10#XR_ERROR_SESSION_LOST ERROR_SESSION_LOST}</li>
* <li>{@link XR10#XR_ERROR_TIME_INVALID ERROR_TIME_INVALID}</li>
* <li>{@link XR10#XR_ERROR_POSE_INVALID ERROR_POSE_INVALID}</li>
* <li>{@link XR10#XR_ERROR_FEATURE_UNSUPPORTED ERROR_FEATURE_UNSUPPORTED}</li>
* </ul></dd>
* </dl>
*
* <h5>See Also</h5>
*
* <p>{@link XrGeometryInstanceTransformFB}</p>
*/
@NativeType("XrResult")
public static int xrGeometryInstanceSetTransformFB(XrGeometryInstanceFB instance, @NativeType("XrGeometryInstanceTransformFB const *") XrGeometryInstanceTransformFB transformation) {
return nxrGeometryInstanceSetTransformFB(instance, transformation.address());
}
}
\section{Introduction}
\IEEEPARstart{S}{tructure-from-Motion} (SfM) is designed to recover camera motions and sparse 3D structures from image collections \cite{agarwal2011building, snavely2008modeling}. This technique has been applied to various scenarios, such as indoor-outdoor 3D reconstruction \cite{cui2022vidsfm, Zhang2016Eff}, natural environment monitoring \cite{clapuyt2016reproducibility}, cultural heritage digitization \cite{fuhrmann2014mve}, and recent neural rendering \cite{mildenhall2020nerf, wang2021neus}. The typical steps of SfM consist of feature detection, feature matching, camera pose estimation, and 3D structure reconstruction \cite{schonberger2016structure}.
Although SfM methods have achieved impressive performance across numerous tasks,
existing methods still struggle to accurately reconstruct scenes with duplicate structures, which are common in the real world, such as repetitive facades and decorations on buildings. The reason lies in image feature matching: if different instances share a highly similar appearance, their local features tend to be falsely matched, which leads to incorrect pose estimates as well as corrupted final 3D reconstructions with superimposed and phantom structures.
Existing work often deals with ambiguity by analyzing the feature points or epipolar geometries (EGs) between two views to explore consistency constraints. For example, some work attempted to remove inconsistent EGs, because EGs contain more information than feature points and are therefore easier to distinguish. To remove the incorrect EGs, these methods add geometric consistency constraints between the views, such as loop constraints \cite{zach2010disambiguating}. However, the effectiveness of these approaches is limited by accumulated geometric errors. Considering that the incorrect geometric relations stem from mismatched correspondences, a more fundamental solution is to analyze the visibility of points based on the correspondences, such as missing correspondences \cite{zach2008can} and the visibility graph \cite{wilson2013network}. The co-occurrence of feature correspondences can provide additional inference about ambiguous structures. Nevertheless, these methods operate on local triplets or on each feature point individually, and are prone to removing many positive correspondences. Few of the previous methods explicitly exploit high-level information (i.e., scene structures), which is naturally exploited in human perception.
In this work, we exploit the underlying spatial contextual information of local regions of the scene, which provides additional spatial relationships, by grouping related points into the same cluster. Unlike prior work, which treats each point equally and ignores the underlying scene structures, our method takes the associated relationships among points into account. In this case, an ambiguous structure, which usually belongs to a local region of an object, can be directly detected at the region level.
To this end, we propose a novel track-community structure to partition the scene into several parts without reconstructing the scene in advance. This structure is obtained by analyzing the adjacency of the tracks and performing a community detection algorithm. Specifically, a track is defined as a set of matched 2D feature points from different views and corresponds to a 3D point in the real world. Accordingly, tracks can encode the visibility of the 3D points in each view, and a track-community refers to the local region of an object or several adjacent objects in the scene, namely, a segment of the scene, as shown in Fig. \ref{fig_1}(c). Once the track-communities are established, we can remove ambiguous structures via two steps, i.e., ambiguity detection and correction. In ambiguity detection, a diversity analysis of the track-graph is used to determine whether each track-community potentially contains erroneous tracks caused by ambiguous structures. In ambiguity correction, pose-conflict checking between segments is used to remove erroneous correspondences from ambiguous segments and recover correct poses.
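To make the grouping step concrete, the following sketch shows a generic label-propagation community detector on a track adjacency graph (an illustrative stand-in written in Java; it is not necessarily the exact detector or parameterization employed in our pipeline):

\begin{verbatim}
import java.util.*;

/** Generic label propagation on a track adjacency graph, where each
 *  track id maps to the ids of tracks co-visible in enough views. */
class TrackCommunities {
    /** Returns a map from track id to community (segment) id. */
    static Map<Integer, Integer> detect(Map<Integer, List<Integer>> graph,
                                        int iterations) {
        Map<Integer, Integer> label = new HashMap<>();
        for (Integer v : graph.keySet()) label.put(v, v); // own community
        List<Integer> order = new ArrayList<>(graph.keySet());
        Random rng = new Random(42);
        for (int it = 0; it < iterations; it++) {
            Collections.shuffle(order, rng); // randomized update order
            for (Integer v : order) {
                Map<Integer, Integer> votes = new HashMap<>();
                for (Integer u : graph.get(v))
                    votes.merge(label.get(u), 1, Integer::sum);
                int best = label.get(v);
                int bestVotes = votes.getOrDefault(best, 0);
                for (Map.Entry<Integer, Integer> e : votes.entrySet())
                    if (e.getValue() > bestVotes) {
                        best = e.getKey();
                        bestVotes = e.getValue();
                    }
                label.put(v, best); // adopt the neighbors' majority label
            }
        }
        return label;
    }
}
\end{verbatim}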
For camera pose and 3D structure estimation, an incremental SfM approach is selected in our method because of its accuracy and robustness \cite{schonberger2016structure}. However, this approach usually suffers from the drift problem in large-scale reconstruction \cite{zhu2018very, Cui2015Large}. To overcome this drawback, many SfM methods adopt the strategy of first registering the cameras in a distributed manner and then merging them \cite{zhu2018very, chen2020graph}. Considering that the whole scene has been divided into several parts in the preceding disambiguation step, we also utilize this partitioning strategy for SfM. To refine the similarity transformations between partial 3D models, we propose a new merging algorithm, which takes both 3D-3D correspondences and pairwise EGs into account, to register all local reconstructions into a global frame.
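As a schematic illustration only (generic notation, not necessarily our exact formulation), aligning one partial reconstruction to another amounts to estimating a similarity transform $(s,R,\mathbf{t})$, e.g., by minimizing a robustified bidirectional 3D-3D cost of the form
\begin{equation*}
E(s,R,\mathbf{t})=\sum_i \rho\!\left(\left\|sR\mathbf{x}_i+\mathbf{t}-\mathbf{y}_i\right\|^2\right)
+\sum_i \rho\!\left(\left\|s^{-1}R^{\top}(\mathbf{y}_i-\mathbf{t})-\mathbf{x}_i\right\|^2\right),
\end{equation*}
where $(\mathbf{x}_i,\mathbf{y}_i)$ are corresponding 3D points in the two models and $\rho(\cdot)$ is a robust loss; additional terms derived from the pairwise EGs further constrain the relative camera poses.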
\begin{figure*}[!t]
\centering
\includegraphics[width=7in]{fig-1}
\caption{Pipeline of the TC-SfM. Our method takes an image collection as input and adopts a partitioning strategy for disambiguation and reconstruction. (a) View-graph construction. (b) Track sampling in superpixels. (c) Track-graph construction and community detection. (d) Image clusters corresponding to the segments. (e) Image clusters after disambiguation. (f) 3D models of partial reconstructions. (g) Merged 3D models in a unified frame.}
\label{fig_1}
\end{figure*}
In summary, the main contributions of this paper are listed as follows:
\begin{itemize}
\item{We present a robust SfM method, i.e., TC-SfM, which explores the scene contextual information from track-communities to mitigate the problem of reconstruction failure caused by ambiguous structures.}
\item{We propose a novel approach for detecting and correcting ambiguous structures by dividing the scene into several segments and checking the pose consistency among segments. }
\item{A new merging algorithm using a bidirectional consistency cost function is proposed to accurately register all partial reconstructions into a unified coordinate frame.}
\end{itemize}
We conduct experiments on various datasets, and the results show that TC-SfM can robustly alleviate reconstruction failure resulting from ambiguity and achieve superior performance.
\section{Related Work}
Many existing studies are devoted to improving the performance of SfM in the presence of ambiguous structures. These methods can be divided into three categories: optimizing view-graph construction, improving camera registration, and post-processing reconstruction results.
\subsubsection*{\bf View-graph construction} The view-graph, where the nodes are images from different views and the edges are pairwise EGs between them, is an indispensable component in many SfM pipelines. A contaminated view-graph greatly degrades the reconstruction performance.
One solution to view-graph optimization is to directly remove the incorrect edges in the view-graph. Zach et al. \cite{zach2008can} assumed that if all three images of a triplet share sufficient and well-distributed correspondences, the EGs among all its view pairs are likely correct. The absence of enough correspondences (i.e., missing correspondences) provides strong evidence of the presence of an erroneous EG among them. Zach et al. \cite{zach2010disambiguating} enforced the loop consistency of geometric relations estimated from the input. They detected conflicting relations in a Bayesian framework to infer the set of likely false-positive geometric relations. Roberts et al. \cite{roberts2011structure} identified mismatched pairs based on an expectation-maximization framework that incorporates image timestamp cues with missing-correspondence cues. However, the image timestamp information in an unordered dataset is difficult to obtain, thereby limiting its usage. Wilson et al. \cite{wilson2013network} assumed that two observations visible in one view are also visible in other views and utilized the bipartite local clustering coefficient over the visibility graph to measure such consistency. This approach easily results in over-segmentation because all tracks with a score below the threshold are deleted.
Another solution is to optimize the edges, which aims to improve the quality of the two-view geometries. Cui et al. \cite{cui2015global} performed a local bundle adjustment (BA) on pairwise images to improve the relative motions. Sweeney et al. \cite{sweeney2015optimizing} improved the quality of the relative geometries in a triplet by enforcing loop consistency constraints with an epipolar point transfer. Other studies concentrate on finding an optimal subgraph of the full view-graph. Such a subgraph can be regarded as a reliable input for the registration to ensure the accuracy and completeness of the reconstructed model. Snavely et al. \cite{snavely2008skeletal} computed a small skeletal subset of images based on the maximum leaf spanning tree. They reconstructed the skeletal set first and then added the remaining images via pose estimation. Yan et al. \cite{yan2017distinguishing} introduced a geodesic consistency measure by selecting a set of iconic images: correspondences that are connected in the visibility network but become disconnected according to visual propagation along the path network are geodesically inconsistent. Shen et al. \cite{shen2016graph} incrementally expanded a minimum spanning tree to form locally consistent strong triplets.
The methods based on the analysis of the points or EGs aim to improve the quality of the view-graph. Due to the lack of higher contextual information, these methods usually remove a large number of positive edges, which tends to reduce the completeness of the reconstruction. In contrast, our method explores the spatial information among the regions in the scene to filter the correspondences from the ambiguous structures rather than directly remove the erroneous edges in the view-graph. This approach robustly detects ambiguous structures and improves the completeness of the reconstructed scene.
\subsubsection*{\bf Camera registration} Based on the feature correspondences and EGs in the view-graph, registration determines the camera pose of each view. The structure of the scene is subsequently recovered from the camera poses.
The classical incremental pipeline, such as COLMAP \cite{schonberger2016structure}, typically starts from an initial two-view reconstruction and adds one optimal image at a time, repeatedly performing BA. This approach has advantages in accuracy and robustness compared with the global approach \cite{sweeney2016large}. However, if a view is incorrectly registered due to many mismatched feature points, the error propagates to other views in subsequent iterations, resulting in a completely incorrect scene structure. To overcome this challenge, Cui et al. \cite{cui2019efficient} proposed a batched incremental SfM framework that contains two iteration loops: an inner loop that selects a well-conditioned subset of tracks and an outer loop that uses rotations estimated via rotation averaging as weak supervision for the registration. Chen et al. \cite{chen2020graph} proposed a carefully designed clustering and merging algorithm to prevent the individual reconstructions from being affected by wrong matches; they then performed a subgraph expansion step to enhance the connectivity and completeness of the scenes.
However, due to the influence of erroneous correspondences, SfM methods based on optimizing the registration usually struggle to recover a correct reconstruction when the scene structures share a strong visual resemblance. In our work, ambiguous structures are detected by checking pose consistency against the distinct structures, and the matches from the ambiguous structures are excluded from the registration.
\subsubsection*{\bf Post-processing} For visually indistinguishable structures, some studies correct ambiguities based on a reconstructed 3D model that contains erroneous elements. Such methods assume that a priori knowledge of the ambiguous structure is not available at registration time, so registration first produces an erroneous reconstruction that is repaired afterwards. Heinly et al. \cite{heinly2014correcting} proposed the informative measure of conflicting observations to identify incorrectly placed unique scene structures. Later, Heinly et al. \cite{heinly2014recovering} presented another post-processing pipeline that splits an incorrect reconstruction into error-free models by exploiting the co-occurrence information in the scene geometry with local clustering coefficients. Because post-processing methods take full reconstructions as input, they have access to more information about the ambiguous structures and work well on many challenging datasets. However, complete reconstructions are indispensable for these methods, which introduces additional computation cost.
Currently, most existing works focus on analyzing EGs or points to handle the ambiguity, but they do not fully exploit the underlying high-level contextual information available before reconstruction. In contrast, our method explores the spatial relationship among the regions of the scene based on track-communities to detect and correct ambiguous structures.
\section{Track-Community-Based SfM}
To correct the ambiguous structures, we propose a track-community-based SfM method (i.e., TC-SfM). Fig. \ref{fig_1} illustrates the pipeline of the proposed SfM method. The method first performs feature extraction and matching, geometric verification, and EG estimation to construct the view-graph, as in the conventional pipeline. To partition the scene according to the scene structure, a track-graph is constructed. Subsequently, the tracks are divided into groups by a community detection algorithm. Each track-community is regarded as a scene segment, which roughly corresponds to the local region of an object or several adjacent objects. Then, potential erroneous tracks are detected by a diversity analysis on the track-graph. A segment that contains a large number of erroneous tracks is identified as a potential ambiguous structure and corrected by checking pose consistency with the help of the other distinct segments. In this way, multiple ambiguity-free segments are obtained, and each segment is reconstructed via standard incremental SfM. Finally, all partial reconstructions are aligned into a unified frame under a bidirectional consistency constraint, and a final BA is performed to minimize the global reprojection error of the whole model. The TC-SfM method is described in detail in the following.
\subsection{Track-graph and Track-community Construction}
Unlike the existing methods that directly partition the scene based on the view-graph \cite{chen2020graph, bhowmick2014divide}, we exploit the dependencies among the tracks to explore the underlying scene structures by constructing the track-graph and track-community.
Firstly, a full view-graph is constructed. Based on the conclusions of previous works \cite{yan2017distinguishing, cui2021view}, the image with the most matches to a given image is more likely to be a correctly matched one. For a view pair $(C_i, C_j)$, we calculate the ratio of the number of common 2D observations to the total observations in each view, denoted by $r^i_{ij}$ and $r^j_{ij}$, respectively. The weight of an edge in the view-graph is defined as the average of the two ratios (i.e., $w_{ij} = \frac{r_{ij}^i + r_{ij}^j}{2}$).
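For concreteness, the edge-weight computation can be sketched as follows (a minimal sketch in Python; the argument names are illustrative and not part of our implementation):
\begin{verbatim}
# Minimal sketch of the edge weight w_ij; argument names are illustrative.
def edge_weight(num_common, num_obs_i, num_obs_j):
    r_i = num_common / num_obs_i  # r^i_ij: share of view i's observations matched in j
    r_j = num_common / num_obs_j  # r^j_ij: share of view j's observations matched in i
    return 0.5 * (r_i + r_j)      # w_ij = (r^i_ij + r^j_ij) / 2
\end{verbatim}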
Ideally, a track corresponds to a 3D point in the real world. If two 3D points lie on the same object and are close to each other, their 2D projections in a view are usually close as well. Therefore, the neighborhood relations of the tracks are utilized to explore the contextual information of the scene. To improve efficiency, the full set of tracks must be subsampled. Inspired by recent progress on superpixel structures in SLAM for improving reconstruction \cite{concha2014using, wang2020relative}, we utilize superpixel segmentation to sample the tracks. We perform Simple Linear Iterative Clustering \cite{achanta2012slic} on each image to generate superpixels. If a superpixel region contains tracks, the longest track is selected, because a long track is more reliable for representing the scene information in this superpixel. Note that the correspondences in EGs with weights less than $\tau_{w}$ are considered unreliable and ignored when searching correspondences for track sampling. Once a track is sampled, the superpixels related to this track in other views are skipped. All sampled tracks are regarded as nodes of the track-graph. If two tracks are visible in the same view and their 2D points are located in adjacent superpixels, they are linked with an edge. The constructed track-graph clearly captures the surrounding information of a track.
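The sampling step can be summarized by the following greedy sketch, which visits tracks in descending length and keeps a track whenever it still covers an unclaimed superpixel; the data structures are our own illustrative assumptions:
\begin{verbatim}
# Hedged sketch of superpixel-based track sampling. `sp_label[v][kp]` gives
# the SLIC superpixel id of keypoint kp in view v; `tracks` lists the
# (view, keypoint) observations of each track. Both names are assumptions.
def sample_tracks(tracks, sp_label):
    sampled, covered = [], set()  # covered: (view, superpixel) cells already taken
    for track in sorted(tracks, key=len, reverse=True):  # longest tracks first
        cells = {(v, sp_label[v][kp]) for v, kp in track}
        if not cells <= covered:  # longest remaining track in some free superpixel
            sampled.append(track)
            covered |= cells      # its superpixels are skipped in other views
    return sampled
\end{verbatim}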
Since visual scenes are highly structured, spatially proximal tracks exhibit strong dependencies, which carry high-level information about the structures of the scene, much as in human perception. To exploit such information, the track-graph is initially split into $N$ track-communities using a community detection algorithm from network analysis \cite{Gu2015Image}. A community is composed of a set of tightly connected tracks, which correspond to 2D keypoints on multiple views.
Unlike previous works \cite{shen2016graph, cui2017csfm} that explore communities on the view-graph, each track-community in our method represents a local segment of the scene and typically belongs to an object or several adjacent objects in the scene. According to the 2D keypoints of the tracks on multiple views and the superpixels, the track-communities can be visualized as segments of the scene. In our implementation, the tracks are grouped by the Louvain method \cite{blondel2008fast} for community detection. For example, Fig. \ref{fig_2}(a) shows the community detection result on the track-graph. The track-graph is divided into four communities $\{\mathcal{TC}_1, \mathcal{TC}_2, \mathcal{TC}_3, \mathcal{TC}_4\}$, representing four segments of the scene. Fig. \ref{fig_1}(c) shows the partitioning results on sample images of the \textit{Books} scene \cite{jiang2012seeing, roberts2011structure}, in which each community is labeled by one color.
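As a sketch, this grouping step could be reproduced with an off-the-shelf Louvain implementation, e.g., the one shipped with NetworkX (available since version 2.8); our actual implementation may differ in detail:
\begin{verbatim}
import networkx as nx
from networkx.algorithms import community

# Sketch: group the sampled tracks into communities with the Louvain method.
def detect_track_communities(track_edges):
    G = nx.Graph()
    G.add_edges_from(track_edges)  # edges link tracks adjacent in some view
    # returns a list of sets of track ids, one set per scene segment
    return community.louvain_communities(G, seed=0)
\end{verbatim}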
\begin{figure}[!t]
\centering
\includegraphics[width=3.5in]{fig-2}
\caption{Illustration of the track-communities and the diversity analysis for a track. (a) Four detected track-communities. (b) and (c) Two keypoints in track $ T_f $ and their neighbors. }
\label{fig_2}
\end{figure}
\subsection{Ambiguous Structure Detection}
This step aims to find potential ambiguous structures of the scene by exploiting the spatial contextual information provided by the track-communities. A track that contains mismatched correspondences is regarded as an erroneous track. Such a track may cover different regions of the real world due to locally similar appearance. Intuitively, although such local regions have a similar appearance, their surrounding contents in the scene are usually different. Therefore, this challenge can be addressed by analyzing the surrounding information of a track in each view: the surrounding information of a correct track is relatively consistent across images, while that of an erroneous track is more varied. Hence, we introduce Simpson's Diversity Index to measure the diversity of the surrounding track-communities. The Simpson index is often used to quantify the biodiversity of a habitat, considering both the number of species and the relative abundance of each species \cite{jost2006entropy}. A variant of the Simpson index, the Gini-Simpson Index (GSI), is typically utilized to measure diversity \cite{jost2006entropy}. In the track-graph, each track-community $\mathcal{TC}_i$ is defined as a species $\mathcal{S}_i$. If a track $T_f$ belongs to a species $\mathcal{S}_j$, then its adjacent tracks that belong to other species are regarded as individuals. We count the number $n_i$ of individuals of each species $\{\mathcal{S}_i, i \ne j\}$ among the adjacent tracks. The GSI $gsi$ of a track can then be calculated by:
\begin{equation}
\label{eq_1}
gsi = 1 - \sum_{i=1}^{N_{\mathcal{S}}}{(\frac{n_i}{N_{adj}})}^2,
\end{equation}
where $N_{adj}$ is the total number of adjacent tracks that belong to other species, and $N_{\mathcal{S}}$ is the number of other species. GSI represents the diversity of the surrounding information of a track among different views. Although GSI cannot find all erroneous tracks in the scene, such as tracks located in the interior of ambiguous structures, a large number of erroneous tracks indicates that the community contains ambiguity. Accordingly, GSI is utilized to identify whether a community is ambiguous or distinct. For example, the dark red node $T_f$ in Fig. \ref{fig_2}(a) has three types of neighboring tracks that belong to other communities. Fig. \ref{fig_2}(b) and Fig. \ref{fig_2}(c) show two views in which $T_f$ is visible; $P^1$ and $P^2$ are the corresponding 2D keypoints in the two views. $P^1$ has two adjacent tracks of $\mathcal{TC}_2$ and two adjacent tracks of $\mathcal{TC}_4$, while $P^2$ has two adjacent tracks of $\mathcal{TC}_3$. A track with a large $gsi$ is regarded as a potential erroneous track. We set this threshold empirically as $\tau_{gs} = 0.5$. Fig. \ref{fig_3} shows the potential erroneous tracks in two views of the \textit{Books} scene. A track-community in which the ratio of erroneous tracks exceeds $\xi$ (in our work, $\xi$ is set to 0.2) is regarded as ambiguous; that is, the corresponding segment contains different parts of the scene with a similar appearance. Otherwise, the segment is regarded as distinct.
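For clarity, the GSI of Eq. \ref{eq_1} can be computed as in the following minimal sketch (the argument names are ours):
\begin{verbatim}
from collections import Counter

# Gini-Simpson Index per the definition above. `own` is the community id of
# the track, `adj` the community ids of its adjacent tracks in the track-graph.
def gini_simpson(own, adj):
    others = [c for c in adj if c != own]  # individuals of other species
    if not others:
        return 0.0                         # fully homogeneous neighborhood
    n_adj = len(others)                    # N_adj
    counts = Counter(others)               # n_i for each other species S_i
    return 1.0 - sum((n / n_adj) ** 2 for n in counts.values())

# Tracks with gini_simpson(...) > tau_gs (0.5 here) are flagged as erroneous.
\end{verbatim}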
\begin{figure}[h!]
\centering
\includegraphics[width=3.5in]{fig-3}
\caption{Potential erroneous tracks detected by GSI in two views of \textit{Books} scene.}
\label{fig_3}
\end{figure}
\subsection{Ambiguous Structure Correction}
In this section, we introduce the method for correcting the ambiguous segments detected in the last step. During incremental registration, 2D keypoints in the next candidate view are matched against existing 3D points to calculate the camera pose. If the 2D-3D matches are all correct, the poses estimated from matches in different segments of the candidate view are consistent. However, the existence of ambiguous segments results in inconsistent poses. Based on this observation, we compare the pose calculated from the distinct segments with the pose calculated from the potentially ambiguous segments. Then, the correspondences from the ambiguous segments are removed to correctly register the next view. After the correction, each segment is reconstructed.
\begin{figure}[t]
\centering
\includegraphics[width=3in]{fig-4}
\caption{The pose of the next candidate view is estimated by 2D-3D matches from the primary and auxiliary points, respectively. If the consistency of $\boldsymbol{\Omega}_1 $ and $\boldsymbol{\Omega}_2 $ is not satisfied, then the matches that belong to the potential ambiguous segments are regarded as false.}
\label{fig_4}
\end{figure}
For a given segment, the relevant views are collected first: if the tracks of this segment are visible in a view, the view is related to this segment. Note that images containing only a few tracks ($< 30$) are ignored. Therefore, $N$ image clusters $\{ \mathbb{C}_1, \mathbb{C}_2, ..., \mathbb{C}_N\}$ are obtained, corresponding to the $N$ segments. The correction is performed individually for the image clusters corresponding to the potentially ambiguous segments, so that erroneous correspondences do not contaminate the reconstruction. Thus, a sub-view-graph is constructed for each ambiguous image cluster by extracting nodes and edges from the full view-graph. For each sub-view-graph, the weights of all edges are sorted in descending order, and the two views associated with the edge of largest weight are selected to produce an initial model via two-view reconstruction. During the registration of each view of an ambiguous image cluster, the triangulated 3D points that belong to the current segment are marked as \textit{primary points}, and the others, belonging to distinct segments, are marked as \textit{auxiliary points}.
The image that matches the most primary points is selected as the next candidate view to incrementally reconstruct this segment and is added to the reconstruction by registration and triangulation. When a new image is registered using the PnP algorithm \cite{kneip2011novel}, the keypoints of the new image that match existing 3D points are collected, and these 2D-3D matches are used to compute the pose, i.e., the position and orientation, of the new view. If this view fails to be registered due to large reprojection errors, we continue to try other images according to the number of matches mentioned previously. Let $\boldsymbol{\Omega}_1$ be the pose of a successfully registered view; $\boldsymbol{\Omega}_1$ is estimated using only a 2D-3D match set $M_1$ related to the primary points. However, $\boldsymbol{\Omega}_1$ may be incorrect if this segment contains ambiguous structures.
When an image contains a region with a very similar appearance to the reconstructed scene, incorrectly matched 2D-3D correspondences may be preserved by RANSAC because they form a relatively large consensus set. Accordingly, we ignore the keypoints that belong to the current segment and collect a 2D-3D match set $M_2$ that corresponds to the auxiliary points. Then, another pose $\boldsymbol{\Omega}_2$ is calculated by PnP with $M_2$. Note that $\boldsymbol{\Omega}_1$ is calculated only from 2D-3D matches of the primary points, while $\boldsymbol{\Omega}_2$ is calculated only from matches of the auxiliary points; in particular, $M_2$ belongs to the unambiguous segments of the scene. If the current image has no ambiguity, these two poses should be consistent. Fig. \ref{fig_4} illustrates this process. Consequently, we calculate the difference between $\boldsymbol{\Omega}_1$ and $\boldsymbol{\Omega}_2$. The rotation error $e_r$ of the two poses is expressed as $\arccos\left(\frac{\operatorname{trace}(\mathbf{R}_1^T\mathbf{R}_2)-1}{2}\right)$, where $\mathbf{R}_1$ and $\mathbf{R}_2$ are the rotation matrices from $\boldsymbol{\Omega}_1$ and $\boldsymbol{\Omega}_2$, respectively. The translation error $e_t$ is expressed as the angular error of the two unit translation vectors, computed as $\arccos\left(\frac{\mathbf{t}_1}{\lVert \mathbf{t}_1 \rVert}\cdot \frac{\mathbf{t}_2}{\lVert \mathbf{t}_2 \rVert}\right)$, where $\mathbf{t}_1$ and $\mathbf{t}_2$ are the translation vectors from $\boldsymbol{\Omega}_1$ and $\boldsymbol{\Omega}_2$, respectively. In our work, a pose whose rotation error exceeds $\epsilon_r$ ($\epsilon_r = 0.15$) or whose translation error exceeds $\epsilon_t$ ($\epsilon_t = 0.35$) is regarded as inconsistent, and the current image is rejected; in addition, the matches related to the current segment are removed. Otherwise, the image is added to the reconstruction. After checking all images in the current image cluster $\mathbb{C}_i$, a consistent image subset is obtained, which corresponds to an unambiguous segment. We repeat this process on the remaining images of $\mathbb{C}_i$ to obtain further consistent subsets until no image is left in $\mathbb{C}_i$. The outputs of this process are one or several consistent segments and the corresponding image clusters.
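The consistency test can be sketched as follows; we assume here that the thresholds are expressed in radians, which the text does not state explicitly:
\begin{verbatim}
import numpy as np

# Sketch of the consistency test between Omega_1 (PnP on primary points) and
# Omega_2 (PnP on auxiliary points). Threshold units (radians) are assumed.
def poses_consistent(R1, t1, R2, t2, eps_r=0.15, eps_t=0.35):
    cos_r = np.clip((np.trace(R1.T @ R2) - 1.0) / 2.0, -1.0, 1.0)
    e_r = np.arccos(cos_r)                        # rotation error
    u1 = t1 / np.linalg.norm(t1)
    u2 = t2 / np.linalg.norm(t2)
    e_t = np.arccos(np.clip(u1 @ u2, -1.0, 1.0))  # angle between unit translations
    return e_r <= eps_r and e_t <= eps_t
\end{verbatim}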
When all ambiguous segments are corrected, each image cluster is consistent and yields a 3D model without ambiguity. Inspired by previous studies \cite{zhu2018very, chen2020graph} that adopt a partitioning strategy for SfM to overcome the drift problem, we also reconstruct each segment individually. After disambiguation, $N_1$ consistent image clusters are obtained and correctly reconstructed by the traditional incremental SfM pipeline. Note that the original correspondences are cleaned by checking whether they lie in the same segment during registration. Because clusters may share many overlapping images, reconstructing each cluster independently would cause redundant registration; hence, the image clusters are sorted by the number of images in descending order, and if the number of common images between two image clusters is larger than 20, the two image subsets are merged (see the sketch below). After registration, $N_2$ local reconstructions are obtained.
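The overlap-based cluster merging can be sketched as follows (clusters are modeled as sets of image ids; the threshold of 20 follows the text):
\begin{verbatim}
# Sketch of merging image clusters that share many images.
def merge_overlapping_clusters(clusters, min_common=20):
    merged = []
    for c in sorted(clusters, key=len, reverse=True):  # descending by size
        for m in merged:
            if len(m & c) > min_common:  # enough common images: fuse the subsets
                m |= c
                break
        else:
            merged.append(set(c))
    return merged
\end{verbatim}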
\subsection{Local Reconstruction Merging}
This section aims to merge all partial reconstructions into a complete 3D model in a unified framework. Each model is originally reconstructed in its own local coordinate system. Two local models, $model_a$ and $model_b$, are merged via their relative similarity transformation $\mathbf{T}_{ab}\in \mathbf{SIM}(3)$, comprising a rotation, a translation, and a scale \cite{bhowmick2014divide}. If the two partial reconstructions observe common 3D points, these 3D-3D correspondences can be used to fuse the 3D models by aligning the two point clouds. Some existing methods try to find overlapping views between two reconstructions \cite{chen2020graph}; however, common views do not exist in many cases. Other studies solve for the transformation via image-to-image constraints across two clusters \cite{fang2019merge}. Nevertheless, unreliable EGs, which cannot be filtered by geometric validation, limit their performance. Therefore, we propose a novel merging algorithm that takes both the 3D-3D correspondences and the two-view relative poses into account to accurately merge the models with bidirectional consistency. The initial similarity transformations are estimated from the relative poses of camera pairs via three linear equations, and then all of them are optimized by minimizing the reprojection error with bidirectional consistency.
Let $\mathbf{T}_{ab}(\mathbf{R}_{ab}, \mathbf{t}_{ab}, s_{ab})$ be the unknown relative similarity transformation from $model_a$ to $model_b$. $\mathbf{p}_k$ is a 3D point visible in both local reconstructions, and its local coordinates are denoted as $\mathbf{p}_k^a$ and $\mathbf{p}_k^b$, respectively. $C_i^a$ is a camera that can see $\mathbf{p}_k^a$ in $model_a$, and $C_j^b$ is a camera that can see $\mathbf{p}_k^b$ in $model_b$. The pose of $C_i^a$ in $model_a$ can be denoted as $(\mathbf{R}_i^a, \mathbf{c}_i^a)$, where $\mathbf{R}_i^a$ is the rotation and $\mathbf{c}_i^a$ is the position. Similarly, $(\mathbf{R}_j^b, \mathbf{c}_j^b)$ is the pose of $C_j^b$ in $model_b$. The relative transformation from $C_i^a$ to $C_j^b$ can be denoted as $\mathbf{T}_{ij}^{ab}(\mathbf{R}_{ij}, \mathbf{t}_{ij}, s_{ab}, \lambda_{ij}^{ab})$, where $\mathbf{t}_{ij}$ is a unit translation vector and $\lambda_{ij}^{ab}$ is the unknown scale.
\subsubsection*{\bf Relative rotation estimation}
We estimate the relative rotation between all partial reconstructions. The rotation between two 3D models can be obtained from EGs after cleaning the mismatches. The merged model should still satisfy the constraints between image pairs. Therefore, the relative rotation between the two 3D models can be estimated by using a linear equation system as:
\begin{equation}
\label{eq_2}
\mathbf{R}_j^{b}\mathbf{R}_{ab} = \mathbf{R}_{ij} \mathbf{R}_{i}^{a}.
\end{equation}
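A least-squares solution of Eq. \ref{eq_2} over all cross-model pairs can be sketched by chordal averaging: each pair yields a candidate $(\mathbf{R}_j^b)^T\mathbf{R}_{ij}\mathbf{R}_i^a$, and the sum of the candidates is projected back onto $\mathbf{SO}(3)$ with an SVD. This is a standard equivalent formulation, not necessarily the solver used in our implementation:
\begin{verbatim}
import numpy as np

# Chordal-averaging sketch for the relative-rotation constraint above.
# `pairs` holds (R_i_a, R_j_b, R_ij) for every cross-model image pair.
def estimate_relative_rotation(pairs):
    M = sum(R_j_b.T @ R_ij @ R_i_a for (R_i_a, R_j_b, R_ij) in pairs)
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:  # enforce a proper rotation (det = +1)
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R
\end{verbatim}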
\subsubsection*{\bf Scale estimation}To calculate the scale between two reconstructions, the distance between 3D points transformed into the same frame is minimized. The scale factor can be estimated by:
\begin{equation}
\label{eq_3}
s_{ab} \mathbf{R}_{ij} (\mathbf{R}_i^a \mathbf{p}_k^a + \mathbf{t}_i^a) + \lambda_{ij}^{ab}\mathbf{t}_{ij} = \mathbf{R}_j^b\mathbf{p}_k^b + \mathbf{t}_j^b.
\end{equation}
\subsubsection*{\bf Relative translation estimation}With the rotation $\mathbf{R}_{ab}$ and scales $(s_{ab}, \lambda_{ij}^{ab})$ fixed, the relative translation $\mathbf{t}_{ab}$ between $model_a$ and $model_b$ is estimated from the transformations of image pairs that cross the two models. The linear equation system is defined as:
\begin{equation}
\label{eq_4}
\mathbf{t}_{ab} + s_{ab}\mathbf{R}_{ab}\mathbf{c}_i^a - \lambda_{ij}^{ab}(\mathbf{R}_j^b)^T\mathbf{t}_{ij} = \mathbf{c}_j^b.
\end{equation}
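Since Eq. \ref{eq_4} is linear in $\mathbf{t}_{ab}$ and the per-pair scales $\lambda_{ij}^{ab}$ once $\mathbf{R}_{ab}$ and $s_{ab}$ are fixed, a stacked least-squares solve suffices; a sketch under our own data layout:
\begin{verbatim}
import numpy as np

# Least-squares sketch for the translation constraint above. `pairs` holds
# (c_i_a, R_j_b, c_j_b, t_ij) per cross-model image pair; unknowns are
# t_ab (3 entries) plus one lambda_ij per pair.
def estimate_relative_translation(pairs, R_ab, s_ab):
    m = len(pairs)
    A = np.zeros((3 * m, 3 + m))
    b = np.zeros(3 * m)
    for k, (c_i_a, R_j_b, c_j_b, t_ij) in enumerate(pairs):
        rows = slice(3 * k, 3 * k + 3)
        A[rows, 0:3] = np.eye(3)            # coefficient of t_ab
        A[rows, 3 + k] = -(R_j_b.T @ t_ij)  # coefficient of lambda_ij
        b[rows] = c_j_b - s_ab * (R_ab @ c_i_a)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]                     # t_ab, per-pair scales
\end{verbatim}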
\begin{figure*}[!t]
\centering
\includegraphics[width=7.1in]{fig-5}
\caption{Illustration of the bidirectional consistency cost. A 3D point visible in two reconstructions is transformed from one model to the other by two transformations. (a) Two partial reconstructions that share common 3D points; the green triangle denotes the common 3D point. (b) Two types of transformations: the red and black dotted lines represent the 3D points transformed by the similarity transformation and by the relative pose, respectively. (c) The two transformed points are projected onto the image plane, and $d_k^{ab,j}$ is the distance between their projected points.}
\label{fig_5}
\end{figure*}
\subsubsection*{\bf Bidirectional consistency optimization} After the initial values of similarity transformations between partial reconstructions are obtained, all the initial parameters will be further optimized. Here, a novel cost function is designed to accurately merge the models by enforcing the 3D-3D correspondence constraint and two-view relative rigid transformation constraint. As shown in Fig. \ref{fig_5}, the 3D point $\mathbf{p}_k^a $ from the local coordinate system of $model_a $ can be transformed into the local coordinate system of $C_j^b $ in $model_b $ in two ways. We define $\mathbf{p}_k^{ab,j} $ as the 3D point transformed by the similarity transformation $\mathbf{T}_{ab}(\mathbf{R}_{ab}, \mathbf{t}_{ab}, s_{ab}) $ between $model_a $ and $model_b $, and define $\mathbf{q}_k^{ab,j} $ as the 3D point transformed by relative pose $\mathbf{T}_{ij}^{ab}(\mathbf{R}_{ij}, \mathbf{t}_{ij}, s_{ab}, \lambda_{ij}^{ab}) $ between $C_i^a $ and $C_j^b $. Then, we have:
\begin{equation}
\label{eq_5}
\mathbf{p}_k^{ab,j} = \mathbf{R}_j^b(s_{ab}\mathbf{R}_{ab}\mathbf{p}_k^a + \mathbf{t}_{ab}) + \mathbf{t}_j^b,
\end{equation}
\begin{equation}
\label{eq_6}
\mathbf{q}_k^{ab,j} = s_{ab}\mathbf{R}_{ij}(\mathbf{R}_i^a \mathbf{p}_k^a+ \mathbf{t}_i^{a}) + \lambda_{ij}^{ab}\mathbf{t}_{ij}.
\end{equation}
We want to enforce the constraint that the two transformations are consistent with each other; therefore, $\mathbf{p}_k^{ab,j}$ and $\mathbf{q}_k^{ab,j}$ should be as close as possible. According to Eq. \ref{eq_3}, $\mathbf{q}_k^{ab,j}$ is close to the point obtained by transforming $\mathbf{p}_k^b$ into the frame of $C_j$, which indirectly enforces the 3D correspondence constraint. Furthermore, we utilize the reprojection error $d_k^{ab,j}$ to eliminate the range difference between the local models:
\begin{equation}
\label{eq_7}
d_k^{ab,j} = \left \|\mathcal{P}_j^b(\mathbf{p}_k^{ab,j}) - \mathcal{P}_j^b(\mathbf{q}_k^{ab,j}) \right \| ^2,
\end{equation}
where $\mathcal{P}_j^b(\mathbf{x})$ denotes projecting the 3D point $\mathbf{x}$ onto the image plane of $C_j$ in $model_b$. In the same way, $\mathbf{p}_k^b$ is also transformed by the inverse transformations, and the distance between the resulting projected points is denoted as $d_k^{ba,i}$. Accordingly, the bidirectional consistency cost function is formulated as:
\begin{equation}
\label{eq_8}
{E} = \sum\limits_{\substack{(a, b) \\
a \ne b}}\sum\limits_{\substack{i\in \mathbb{C}_a \\
j\in \mathbb{C}_b}}\sum\limits_{k\in\mathbb{P}_{ij}^{ab}}w_{ij}( d_k^{ab,j} + d_k^{ba,i} ),
\end{equation}
where $w_{ij} $ is the edge weight of $C_i $ and $C_j $ in the view-graph defined in Section 3.1. We find the 3D point set $\mathbb{P}_{ij}^{ab} $ observed by $C_i $ in $model_a $ and $C_j $ in $model_b $ for all reconstruction pairs. The similarity transformations will be refined by minimizing the cost function ${E} $.
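One directional residual of this cost can be sketched as follows; \texttt{project} stands for the (assumed known) projection with the intrinsics of $C_j^b$, and the parameter packing for a nonlinear least-squares refinement is omitted:
\begin{verbatim}
import numpy as np

# Sketch of one directional residual d_k^{ab,j}. `project` applies the
# intrinsics of camera C_j in model_b; its availability is assumed.
def residual_ab(p_k_a, R_ab, t_ab, s_ab, R_j_b, t_j_b,
                R_ij, t_ij, R_i_a, t_i_a, lam_ij, project):
    p = R_j_b @ (s_ab * (R_ab @ p_k_a) + t_ab) + t_j_b           # similarity path
    q = s_ab * (R_ij @ (R_i_a @ p_k_a + t_i_a)) + lam_ij * t_ij  # relative-pose path
    return project(p) - project(q)  # its squared norm enters the cost
\end{verbatim}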
After all pairwise similarity transformations are obtained, the partial reconstructions are aligned to a global frame. Each partial reconstruction is regarded as a node, and two nodes are connected if a similarity transformation exists between them; the weight of an edge is defined as the cost of Eq. \ref{eq_8}. The minimum-cost spanning tree (MST) $\mathbb{T}$ of this graph is extracted to merge the models more accurately. $\mathbb{T}$ contains all $N_2$ reconstructions and $(N_2-1)$ pairwise transformations. First, the edges that connect the leaf nodes are selected for merging: we merge the model with fewer images into the other via the refined similarity transformation. The MST $\mathbb{T}$ is updated by iteratively removing the leaf nodes, and all leaf nodes in $\mathbb{T}$ and their neighbors are merged in the same way. Finally, only one node is left in $\mathbb{T}$, and all partial reconstructions are aligned into a unified frame. To make the 3D points and all camera poses more accurate, we perform a final BA on the merged reconstruction to minimize the reprojection error globally.
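The merge order can be sketched as iterative leaf peeling on the MST; \texttt{fuse} is a placeholder that merges the model with fewer images into the other via the refined similarity transformation:
\begin{verbatim}
import networkx as nx

# Sketch of MST-guided merging. Nodes are partial reconstructions; edge
# weights are the bidirectional costs defined above. `fuse` is a placeholder.
def merge_reconstructions(models, pairwise_cost, fuse):
    G = nx.Graph()
    for (a, b), cost in pairwise_cost.items():
        G.add_edge(a, b, weight=cost)
    T = nx.minimum_spanning_tree(G)      # minimum-cost spanning tree
    while T.number_of_nodes() > 1:       # peel one leaf per iteration
        leaf = next(n for n in T.nodes if T.degree(n) == 1)
        parent = next(iter(T.neighbors(leaf)))
        models[parent] = fuse(models[parent], models[leaf])
        T.remove_node(leaf)
    return models[next(iter(T.nodes))]   # the single remaining, unified model
\end{verbatim}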
\section{Experiments and Discussions}
In this section, we evaluate the proposed TC-SfM on four types of datasets: ambiguous image datasets, sequential image datasets, unordered Internet image datasets, and our human body datasets. The specifications of these datasets are listed in Table \ref{tab_1}. The experiments are organized as follows:
\begin{table}[h]
\begin{center}
\caption{Specifications of the image datasets. The "GT" column reports whether the dataset has ground truth.}
\begin{tabular}{@{}ccccc@{}}
\toprule
\begin{tabular}[c]{@{}c@{}}Dataset\\ type\end{tabular} & \# & \begin{tabular}[c]{@{}c@{}}\# of Images\\ per dataset\end{tabular} & GT & \begin{tabular}[c]{@{}c@{}}Main evaluation\\ dimension\end{tabular} \\ \midrule
Ambiguous & 6 & 21--1559 & No & Disambiguation \\
Sequential & 4 & 152--1108 & No & \multirow{2}{*}{Scalability \& Efficiency} \\
Unordered & 15 & 247--5433 & No & \\
Human body & 8 & 90 & Yes & \begin{tabular}[c]{@{}c@{}}Quantitative performance \\ \& Ablation study\end{tabular} \\ \bottomrule
\end{tabular}
\label{tab_1}
\end{center}
\end{table}
\begin{itemize}
\item{Given that TC-SfM targets the ambiguity problem, we first evaluate our method on six benchmark datasets that contain visual ambiguities common in the real world. Traditional SfM pipelines usually fail to recover correct scene structures on these datasets; therefore, we compare our approach with recent representative methods to evaluate disambiguation.}
\item{In addition to focusing on ambiguity, we also refine the whole SfM pipeline. To evaluate its overall performance, we test our method on general image datasets, namely the sequential image datasets and the unordered Internet image datasets, which are widely used in other SfM pipeline evaluations \cite{Ozyesil2015, Zhuang2018}. On this basis, we compare the performance of our method with state-of-the-art traditional as well as deep learning-based SfM methods. We also demonstrate the scalability of our method on some large-scale scenes included in the Internet image datasets.}
\item{Finally, since the ground truth is difficult to obtain from existing datasets, we use our 3D human reconstruction system to capture eight human body image datasets where the camera pose ground truth is available. Quantitative evaluation is performed on human datasets to demonstrate the accuracy of our method in terms of rotation and position error. Moreover, we use human datasets to demonstrate the validity of the bidirectional consistency constraint in merging.}
\end{itemize}
In our implementation, each segment of the scene is reconstructed based on the standard incremental pipeline of COLMAP \cite{schonberger2016structure} with the default configuration. Since feature extraction and matching are common steps of all SfM pipelines, the time consumption of these two steps is not included in the runtimes reported in Tables \ref{tab_2} and \ref{tab_3}. The experiments were conducted on a PC equipped with an Intel Core i9-9900K CPU (3.60GHz), 128GB of RAM, and an NVIDIA RTX 2080Ti GPU. Moreover, the configuration of the parameters and the limitations of our method are discussed.
\subsection{Evaluation on Ambiguous Image Datasets}
We tested our TC-SfM on six ambiguous datasets, namely, \textit{Books} \cite{jiang2012seeing, roberts2011structure}, \textit{Temple of Heaven} \cite{shen2016graph}, \textit{Arc de Triomphe} \cite{heinly2014correcting}, \textit{Church on Spilled Blood} \cite{heinly2014correcting}, \textit{Brandenburg Gate} \cite{heinly2014correcting}, and \textit{Berliner Dom} \cite{heinly2014correcting}. The \textit{Books} dataset contains two identical books placed in different locations; the other datasets all depict landmark architecture whose buildings exhibit strong visual similarities. We compare TC-SfM with two state-of-the-art methods for disambiguation, namely, the geodesic-aware SfM proposed by Yan et al. \cite{yan2017distinguishing} and GraphSfM \cite{chen2020graph}. The geodesic-aware SfM \cite{yan2017distinguishing} is specifically designed for disambiguation, which is similar to our optimization goal. GraphSfM is a divide-and-conquer SfM method, but it is based on view-graph clustering, as are most existing methods. We also compare TC-SfM with a recent state-of-the-art learned SfM method, namely PixSfM \cite{lindenberger2021pixel}, based on featuremetric refinement. The comparison results are shown in Fig. \ref{fig_6}.
Benefiting from deep features and featuremetric optimization, PixSfM produces reasonable results on some ambiguous datasets like \textit{Berliner Dom}. However, for scenes with a strong visual resemblance, such as \textit{Books}, \textit{Church on Spilled Blood} and \textit{Brandenburg Gate}, PixSfM incorrectly registers the cameras and points of duplicated structures into the same place.
For GraphSfM, images are divided into clusters by view-graph clustering, so the reconstruction performance depends heavily on the graph cut. However, mismatched image pairs usually have large edge weights, causing them to be grouped into the same image cluster. Moreover, the merging of partial reconstructions can also be disturbed by erroneous correspondences. As shown in Fig. \ref{fig_6}, GraphSfM fails to obtain reasonable reconstructions on these ambiguous image datasets.
The geodesic-aware SfM \cite{yan2017distinguishing} performs well on sequential images with a uniform distribution and sufficient overlap, such as the \textit{Temple of Heaven} dataset. However, false EG removal and reconstruction completeness are difficult to balance on the unordered Internet image datasets with various FoVs and illuminations: EGs that do not satisfy particular conditions are directly rejected, which greatly affects completeness. For the \textit{Arc de Triomphe} and \textit{Berliner Dom} datasets, the several resulting parts cannot be merged. On the \textit{Books} and \textit{Brandenburg Gate} datasets, a part of the scene is missing in the result shown in Fig. \ref{fig_6}. If the filtering thresholds are relaxed, the ambiguities cannot be detected.
In contrast, EGs are not directly handled in our TC-SfM. The scene is divided into several segments based on track-communities to explore the underlying scene structures, and correspondences derived from different segments are discarded during registration. Furthermore, erroneous correspondences belonging to ambiguous segments are removed by checking pose consistency with the distinct parts. For example, the scene of the \textit{Books} dataset is divided into eight segments. The correspondences between the two identical books are initially included in one segment; after ambiguity detection and correction, this segment splits into two unambiguous segments, and the images corresponding to the two books are divided into two clusters, resulting in correct partial 3D models. Meanwhile, the feature correspondences from the areas of the two books are discarded, ensuring that the merging step is not misguided by false pairwise matches. For the \textit{Arc de Triomphe}, \textit{Brandenburg Gate} and \textit{Church on Spilled Blood} datasets, the two sides of the building are divided into different segments; thus, the scene is successfully recovered.
\begin{figure*}[!t]
\centering
\includegraphics[width=7in]{fig-6}
\caption{Comparison of the reconstruction results on the ambiguous image datasets by using the geodesic-aware SfM \cite{yan2017distinguishing}, PixSfM \cite{lindenberger2021pixel}, our TC-SfM and GraphSfM \cite{chen2020graph}, respectively. The first column shows sampled images of each dataset. Separate models are split with dashed lines.}
\label{fig_6}
\end{figure*}
\begin{figure*}[!t]
\centering
\includegraphics[width=7in]{fig-7}
\caption{Reconstruction results on the four sequential image datasets. From left to right, the panels show the results of Theia \cite{10.1145/2733373.2807405}, COLMAP \cite{schonberger2016structure}, GraphSfM \cite{chen2020graph}, PixSfM \cite{lindenberger2021pixel} and our TC-SfM, respectively. The camera poses are shown in red. The green ellipses mark erroneous registration results.}
\label{fig_7}
\end{figure*}
\subsection{Evaluation on Sequential Image Datasets}
The proposed TC-SfM method is evaluated on sequential image datasets from the Tanks and Temples dataset \cite{10.1145/3072959.3073599}, which provides a uniformly distributed image set for every scene from each high-resolution video sequence. Here, four representative image datasets, covering outdoor and indoor environments, are selected for testing and comparison. Four state-of-the-art SfM methods, namely, an incremental SfM (COLMAP \cite{schonberger2016structure}), a divide-and-conquer SfM (GraphSfM \cite{chen2020graph}), a global SfM (the global system of the Theia library \cite{10.1145/2733373.2807405}) and a learning-based SfM (PixSfM \cite{lindenberger2021pixel}), are adopted for comparison. COLMAP and Theia are classic and widely used SfM systems, for which we use the default configurations. GraphSfM and PixSfM are recent representative methods with advantages in terms of accuracy. The comparison results are shown in Fig. \ref{fig_7}.
On the outdoor dataset \textit{Family}, Theia fails to recover the structures, while the other four methods successfully reconstruct the scene. The other three datasets contain slight visual ambiguities: the indoor datasets \textit{Auditorium} and \textit{Meetingroom} exhibit repetitive furnishings, and the outdoor dataset \textit{Courthouse} contains two identical facades on the building. Although incremental SfM methods are more robust to correspondence outliers, COLMAP registers the ambiguous structures in the wrong location. GraphSfM is also disturbed by false matches in the merging step. PixSfM does not work well in the presence of many mismatches, such as on the \textit{Courthouse} dataset. In contrast, TC-SfM achieves more robust results owing to the removal of matches across different segments. For example, on the \textit{Auditorium} dataset, the seats in the three regions are divided into three segments by TC-SfM, thereby alleviating interference from the mismatches.
\subsection{Evaluation on Unordered Internet Image Datasets}
We evaluated TC-SfM on the unordered Internet image datasets from 1DSfM \cite{10.1007/978-3-319-10578-9_5}. Table \ref{tab_2} compares the reconstruction results with COLMAP \cite{schonberger2016structure}, GraphSfM \cite{chen2020graph}, PixSfM \cite{lindenberger2021pixel} and the global system of Theia \cite{10.1145/2733373.2807405}, reporting the number of registered images, the runtime and the average reprojection error for each dataset. Theia and PixSfM register the most images; however, these methods have large average reprojection errors and less accurate results (shown in Fig. \ref{fig_8}). Since the Internet image datasets are quite noisy, some views that have few correspondences and weak connections with others are missing in our method. Although the number of images registered by our method is smaller than that of other methods on several datasets, we still recover structures similar to those of COLMAP. Fig. \ref{fig_8} shows the reconstruction results for these datasets. Note that on the dataset \textit{GM}, which contains ambiguous structures, all methods except COLMAP and ours produce incorrect models. The results show that Theia and GraphSfM are sensitive to false correspondences, while our method achieves performance comparable to COLMAP in terms of robustness and accuracy. Theia and GraphSfM show superior efficiency, while the other methods are time-consuming. Benefiting from the partitioning scheme and mismatch correction, our method is faster than COLMAP on large-scale SfM tasks, such as the \textit{Tr} dataset in Table \ref{tab_2}.
\subsection{Application in Human Body Reconstruction}
We also apply our method to eight datasets with ground truth to quantitatively evaluate its accuracy against four state-of-the-art SfM approaches, namely, COLMAP \cite{schonberger2016structure}, GraphSfM \cite{chen2020graph}, the global system of Theia \cite{10.1145/2733373.2807405} and PixSfM \cite{lindenberger2021pixel}. Our human body acquisition system is equipped with 90 cameras deployed in a cylindrical layout \cite{Zhang2021ARM}, with the human body located at the center of the cylinder. We utilize a calibration object (Fig. \ref{fig_9}(a)) with ChArUco patterns, comparable in size to the human body, to calibrate all 90 cameras. The calibration results can serve as ground truth due to the accurate feature extraction and matching on the calibration object. The quantitative results and runtime comparison are listed in Tables \ref{tab_3} and \ref{tab_4}.
\begin{table*}[!t]
\begin{center}
\caption{Comparison of the reconstruction performance on the unordered image datasets. $\#Reg$ denotes the number of registered images. $t$ denotes the runtime in seconds. $e$ denotes the average reprojection error in pixels. The datasets are listed in the first column. "Th", "CM", "GS", "PS" and "TS" are abbreviations for Theia \cite{10.1145/2733373.2807405}, COLMAP \cite{schonberger2016structure}, GraphSfM \cite{chen2020graph}, PixSfM \cite{lindenberger2021pixel} and our TC-SfM, respectively. The best results are highlighted in bold font.}
\label{tab_2}
\begin{tabular}{@{}llllllllllllllllllll@{}}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{}} & \multicolumn{1}{c}{\multirow{2}{*}{$\#N$}} & \multicolumn{1}{c}{} & \multicolumn{5}{c}{$\#Reg$} & \multicolumn{1}{c}{} & \multicolumn{5}{c}{$t$ [second] } & \multicolumn{1}{c}{} & \multicolumn{5}{c}{$e$ [pixel]} \\ \cmidrule(lr){4-8} \cmidrule(lr){10-14} \cmidrule(l){16-20}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{r}{Th} & \multicolumn{1}{r}{CM} & \multicolumn{1}{r}{GS} & \multicolumn{1}{r}{PS} & \multicolumn{1}{r}{TS} & \multicolumn{1}{c}{} & \multicolumn{1}{r}{Th} & \multicolumn{1}{r}{CM} & \multicolumn{1}{r}{GS} & \multicolumn{1}{r}{PS} & \multicolumn{1}{r}{TS} & \multicolumn{1}{r}{} & \multicolumn{1}{r}{Th} & \multicolumn{1}{r}{CM} & \multicolumn{1}{r}{GS} & \multicolumn{1}{r}{PS} & \multicolumn{1}{r}{TS} \\ \midrule
Al&\multicolumn{1}{r}{627}&&\multicolumn{1}{r}{562}&\multicolumn{1}{r}{546}&\multicolumn{1}{r}{556}&\multicolumn{1}{r}{{\bf{568}}}&\multicolumn{1}{r}{543}&&\multicolumn{1}{r}{{\bf{659}}}&\multicolumn{1}{r}{2620}&\multicolumn{1}{r}{728}&\multicolumn{1}{r}{6386}&\multicolumn{1}{r}{2850}&&\multicolumn{1}{r}{1.39}&\multicolumn{1}{r}{{\bf{0.48}}}&\multicolumn{1}{r}{0.49}&\multicolumn{1}{r}{1.11}&\multicolumn{1}{r}{{\bf{0.48}}}\\
EI&\multicolumn{1}{r}{247}&&\multicolumn{1}{r}{{\bf{231}}}&\multicolumn{1}{r}{228}&\multicolumn{1}{r}{229}&\multicolumn{1}{r}{218}&\multicolumn{1}{r}{230}&&\multicolumn{1}{r}{{\bf{35}}}&\multicolumn{1}{r}{365}&\multicolumn{1}{r}{263}&\multicolumn{1}{r}{965}&\multicolumn{1}{r}{636}&&\multicolumn{1}{r}{1.30}&\multicolumn{1}{r}{{\bf{0.74}}}&\multicolumn{1}{r}{0.75}&\multicolumn{1}{r}{1.19}&\multicolumn{1}{r}{{\bf{0.74}}}\\
GM&\multicolumn{1}{r}{742}&&\multicolumn{1}{r}{{\bf{706}}}&\multicolumn{1}{r}{673}&\multicolumn{1}{r}{550}&\multicolumn{1}{r}{704}&\multicolumn{1}{r}{635}&&\multicolumn{1}{r}{{\bf{103}}}&\multicolumn{1}{r}{3133}&\multicolumn{1}{r}{759}&\multicolumn{1}{r}{7141}&\multicolumn{1}{r}{2789}&&\multicolumn{1}{r}{1.20}&\multicolumn{1}{r}{0.68}&\multicolumn{1}{r}{{\bf{0.67}}}&\multicolumn{1}{r}{1.13}&\multicolumn{1}{r}{{\bf{0.67}}}\\
MM&\multicolumn{1}{r}{394}&&\multicolumn{1}{r}{{\bf{348}}}&\multicolumn{1}{r}{309}&\multicolumn{1}{r}{320}&\multicolumn{1}{r}{337}&\multicolumn{1}{r}{310}&&\multicolumn{1}{r}{{\bf{71}}}&\multicolumn{1}{r}{968}&\multicolumn{1}{r}{1265}&\multicolumn{1}{r}{2449}&\multicolumn{1}{r}{974}&&\multicolumn{1}{r}{0.96}&\multicolumn{1}{r}{0.50}&\multicolumn{1}{r}{0.50}&\multicolumn{1}{r}{1.14}&\multicolumn{1}{r}{{\bf{0.49}}}\\
MN&\multicolumn{1}{r}{474}&&\multicolumn{1}{r}{{\bf{458}}}&\multicolumn{1}{r}{446}&\multicolumn{1}{r}{446}&\multicolumn{1}{r}{448}&\multicolumn{1}{r}{446}&&\multicolumn{1}{r}{{\bf{289}}}&\multicolumn{1}{r}{2120}&\multicolumn{1}{r}{1184}&\multicolumn{1}{r}{4082}&\multicolumn{1}{r}{2759}&&\multicolumn{1}{r}{1.25}&\multicolumn{1}{r}{{\bf{0.65}}}&\multicolumn{1}{r}{0.66}&\multicolumn{1}{r}{1.22}&\multicolumn{1}{r}{{\bf{0.65}}}\\
ND&\multicolumn{1}{r}{553}&&\multicolumn{1}{r}{{\bf{545}}}&\multicolumn{1}{r}{532}&\multicolumn{1}{r}{536}&\multicolumn{1}{r}{518}&\multicolumn{1}{r}{536}&&\multicolumn{1}{r}{{\bf{412}}}&\multicolumn{1}{r}{8542}&\multicolumn{1}{r}{2371}&\multicolumn{1}{r}{6445}&\multicolumn{1}{r}{7855}&&\multicolumn{1}{r}{1.32}&\multicolumn{1}{r}{{\bf{0.64}}}&\multicolumn{1}{r}{0.69}&\multicolumn{1}{r}{1.15}&\multicolumn{1}{r}{{\bf{0.64}}}\\
NL&\multicolumn{1}{r}{376}&&\multicolumn{1}{r}{339}&\multicolumn{1}{r}{320}&\multicolumn{1}{r}{320}&\multicolumn{1}{r}{{\bf{344}}}&\multicolumn{1}{r}{318}&&\multicolumn{1}{r}{{\bf{75}}}&\multicolumn{1}{r}{799}&\multicolumn{1}{r}{694}&\multicolumn{1}{r}{1809}&\multicolumn{1}{r}{1210}&&\multicolumn{1}{r}{1.40}&\multicolumn{1}{r}{{\bf{0.62}}}&\multicolumn{1}{r}{0.63}&\multicolumn{1}{r}{1.10}&\multicolumn{1}{r}{{\bf{0.62}}}\\
PP&\multicolumn{1}{r}{354}&&\multicolumn{1}{r}{{\bf{339}}}&\multicolumn{1}{r}{320}&\multicolumn{1}{r}{325}&\multicolumn{1}{r}{338}&\multicolumn{1}{r}{321}&&\multicolumn{1}{r}{{\bf{60}}}&\multicolumn{1}{r}{678}&\multicolumn{1}{r}{341}&\multicolumn{1}{r}{1462}&\multicolumn{1}{r}{875}&&\multicolumn{1}{r}{1.42}&\multicolumn{1}{r}{{\bf{0.60}}}&\multicolumn{1}{r}{0.64}&\multicolumn{1}{r}{1.13}&\multicolumn{1}{r}{{\bf{0.60}}}\\
Pic&\multicolumn{1}{r}{2508}&&\multicolumn{1}{r}{{\bf{2255}}}&\multicolumn{1}{r}{2133}&\multicolumn{1}{r}{2196}&\multicolumn{1}{r}{2180}&\multicolumn{1}{r}{2110}&&\multicolumn{1}{r}{{\bf{1130}}}&\multicolumn{1}{r}{17384}&\multicolumn{1}{r}{7453}&\multicolumn{1}{r}{67177}&\multicolumn{1}{r}{16530}&&\multicolumn{1}{r}{1.47}&\multicolumn{1}{r}{0.65}&\multicolumn{1}{r}{0.65}&\multicolumn{1}{r}{1.23}&\multicolumn{1}{r}{{\bf{0.64}}}\\
RF&\multicolumn{1}{r}{1134}&&\multicolumn{1}{r}{{\bf{1079}}}&\multicolumn{1}{r}{1038}&\multicolumn{1}{r}{1029}&\multicolumn{1}{r}{1074}&\multicolumn{1}{r}{1030}&&\multicolumn{1}{r}{{\bf{295}}}&\multicolumn{1}{r}{6166}&\multicolumn{1}{r}{1923}&\multicolumn{1}{r}{9880}&\multicolumn{1}{r}{5042}&&\multicolumn{1}{r}{1.46}&\multicolumn{1}{r}{{\bf{0.59}}}&\multicolumn{1}{r}{0.67}&\multicolumn{1}{r}{1.21}&\multicolumn{1}{r}{{\bf{0.59}}}\\
TL&\multicolumn{1}{r}{508}&&\multicolumn{1}{r}{{\bf{485}}}&\multicolumn{1}{r}{433}&\multicolumn{1}{r}{442}&\multicolumn{1}{r}{449}&\multicolumn{1}{r}{431}&&\multicolumn{1}{r}{{\bf{134}}}&\multicolumn{1}{r}{878}&\multicolumn{1}{r}{705}&\multicolumn{1}{r}{1378}&\multicolumn{1}{r}{1361}&&\multicolumn{1}{r}{1.21}&\multicolumn{1}{r}{{\bf{0.50}}}&\multicolumn{1}{r}{0.52}&\multicolumn{1}{r}{1.01}&\multicolumn{1}{r}{{\bf{0.50}}}\\
Tr&\multicolumn{1}{r}{5433}&&\multicolumn{1}{r}{{\bf{4946}}}&\multicolumn{1}{r}{4744}&\multicolumn{1}{r}{4706}&\multicolumn{1}{r}{4856}&\multicolumn{1}{r}{4702}&&\multicolumn{1}{r}{{\bf{1369}}}&\multicolumn{1}{r}{51694}&\multicolumn{1}{r}{14764}&\multicolumn{1}{r}{148795}&\multicolumn{1}{r}{42398}&&\multicolumn{1}{r}{1.29}&\multicolumn{1}{r}{{\bf{0.61}}}&\multicolumn{1}{r}{0.64}&\multicolumn{1}{r}{1.19}&\multicolumn{1}{r}{{\bf{0.61}}}\\
US&\multicolumn{1}{r}{930}&&\multicolumn{1}{r}{807}&\multicolumn{1}{r}{696}&\multicolumn{1}{r}{733}&\multicolumn{1}{r}{{\bf{841}}}&\multicolumn{1}{r}{730}&&\multicolumn{1}{r}{{\bf{46}}}&\multicolumn{1}{r}{3467}&\multicolumn{1}{r}{1258}&\multicolumn{1}{r}{4593}&\multicolumn{1}{r}{2588}&&\multicolumn{1}{r}{1.51}&\multicolumn{1}{r}{{\bf{0.62}}}&\multicolumn{1}{r}{0.68}&\multicolumn{1}{r}{1.12}&\multicolumn{1}{r}{{\bf{0.62}}}\\
VC&\multicolumn{1}{r}{918}&&\multicolumn{1}{r}{{\bf{845}}}&\multicolumn{1}{r}{766}&\multicolumn{1}{r}{780}&\multicolumn{1}{r}{774}&\multicolumn{1}{r}{785}&&\multicolumn{1}{r}{{\bf{297}}}&\multicolumn{1}{r}{11105}&\multicolumn{1}{r}{3547}&\multicolumn{1}{r}{7645}&\multicolumn{1}{r}{9286}&&\multicolumn{1}{r}{1.39}&\multicolumn{1}{r}{{\bf{0.56}}}&\multicolumn{1}{r}{0.58}&\multicolumn{1}{r}{1.16}&\multicolumn{1}{r}{0.57}\\
YM&\multicolumn{1}{r}{458}&&\multicolumn{1}{r}{428}&\multicolumn{1}{r}{411}&\multicolumn{1}{r}{415}&\multicolumn{1}{r}{{\bf{433}}}&\multicolumn{1}{r}{408}&&\multicolumn{1}{r}{{\bf{432}}}&\multicolumn{1}{r}{2123}&\multicolumn{1}{r}{1198}&\multicolumn{1}{r}{3056}&\multicolumn{1}{r}{2141}&&\multicolumn{1}{r}{1.32}&\multicolumn{1}{r}{{\bf{0.61}}}&\multicolumn{1}{r}{0.65}&\multicolumn{1}{r}{1.08}&\multicolumn{1}{r}{{\bf{0.61}}}\\
\bottomrule
\end{tabular}
\end{center}
\end{table*}
\begin{figure*}[!h]
\centering
\includegraphics[width=7.1in]{fig-8}
\caption{Reconstruction of the Internet image datasets by Theia \cite{10.1145/2733373.2807405}, COLMAP \cite{schonberger2016structure}, GraphSfM \cite{chen2020graph}, PixSfM \cite{lindenberger2021pixel} and our method. From top to bottom are the \textit{GM} and \textit{Tr} datasets, respectively. The camera poses are shown in red. The green ellipses mark erroneous registration results.}
\label{fig_8}
\end{figure*}
\begin{table*}[t]
\begin{center}
\caption{Comparison of the number of registered images $\#Reg$ and runtime $t$ on eight human body datasets by using Theia \cite{10.1145/2733373.2807405}, COLMAP \cite{schonberger2016structure}, GraphSfM \cite{chen2020graph}, PixSfM \cite{lindenberger2021pixel} and TC-SfM. The first column lists the datasets. The best results are highlighted in bold font.}
\label{tab_3}
\begin{tabular}{@{}llllllllllll@{}}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{}} & \multicolumn{5}{c}{$\#Reg$} & \multicolumn{1}{c}{} & \multicolumn{5}{c}{$t$ [second]} \\ \cmidrule(lr){2-6} \cmidrule(l){8-12}
\multicolumn{1}{c}{} & \multicolumn{1}{r}{Theia} & \multicolumn{1}{r}{COLMAP} & \multicolumn{1}{r}{GraphSfM} & \multicolumn{1}{r}{PixSfM} & \multicolumn{1}{r}{TC-SfM} & \multicolumn{1}{r}{} & \multicolumn{1}{r}{Theia} & \multicolumn{1}{r}{COLMAP} & \multicolumn{1}{r}{GraphSfM} & \multicolumn{1}{r}{PixSfM} & \multicolumn{1}{r}{TC-SfM} \\ \midrule
\multicolumn{1}{c}{D1}&\multicolumn{1}{r}{{\bf{90}}}&\multicolumn{1}{r}{{\bf{90}}}&\multicolumn{1}{r}{88}&\multicolumn{1}{r}{37}&\multicolumn{1}{r}{{\bf{90}}}&&\multicolumn{1}{r}{54}&\multicolumn{1}{r}{190}&\multicolumn{1}{r}{{\bf{43}}}&\multicolumn{1}{r}{90}&\multicolumn{1}{r}{601}\\
\multicolumn{1}{c}{D2}&\multicolumn{1}{r}{89}&\multicolumn{1}{r}{{\bf{90}}}&\multicolumn{1}{r}{89}&\multicolumn{1}{r}{27}&\multicolumn{1}{r}{{\bf{90}}}&&\multicolumn{1}{r}{51}&\multicolumn{1}{r}{188}&\multicolumn{1}{r}{{\bf{35}}}&\multicolumn{1}{r}{81}&\multicolumn{1}{r}{605}\\
\multicolumn{1}{c}{D3}&\multicolumn{1}{r}{{\bf{90}}}&\multicolumn{1}{r}{{\bf{90}}}&\multicolumn{1}{r}{80}&\multicolumn{1}{r}{19}&\multicolumn{1}{r}{{\bf{90}}}&&\multicolumn{1}{r}{50}&\multicolumn{1}{r}{227}&\multicolumn{1}{r}{{\bf{21}}}&\multicolumn{1}{r}{70}&\multicolumn{1}{r}{571}\\
\multicolumn{1}{c}{D4}&\multicolumn{1}{r}{{\bf{90}}}&\multicolumn{1}{r}{85}&\multicolumn{1}{r}{86}&\multicolumn{1}{r}{69}&\multicolumn{1}{r}{{\bf{90}}}&&\multicolumn{1}{r}{{\bf{81}}}&\multicolumn{1}{r}{1356}&\multicolumn{1}{r}{427}&\multicolumn{1}{r}{90}&\multicolumn{1}{r}{858}\\
\multicolumn{1}{c}{D5}&\multicolumn{1}{r}{{\bf{90}}}&\multicolumn{1}{r}{86}&\multicolumn{1}{r}{89}&\multicolumn{1}{r}{57}&\multicolumn{1}{r}{{\bf{90}}}&&\multicolumn{1}{r}{{\bf{73}}}&\multicolumn{1}{r}{1549}&\multicolumn{1}{r}{99}&\multicolumn{1}{r}{96}&\multicolumn{1}{r}{857}\\
\multicolumn{1}{c}{D6}&\multicolumn{1}{r}{{\bf{90}}}&\multicolumn{1}{r}{83}&\multicolumn{1}{r}{86}&\multicolumn{1}{r}{67}&\multicolumn{1}{r}{{\bf{90}}}&&\multicolumn{1}{r}{{\bf{74}}}&\multicolumn{1}{r}{1446}&\multicolumn{1}{r}{267}&\multicolumn{1}{r}{99}&\multicolumn{1}{r}{1032}\\
\multicolumn{1}{c}{D7}&\multicolumn{1}{r}{89}&\multicolumn{1}{r}{81}&\multicolumn{1}{r}{72}&\multicolumn{1}{r}{66}&\multicolumn{1}{r}{{\bf{90}}}&&\multicolumn{1}{r}{108}&\multicolumn{1}{r}{1334}&\multicolumn{1}{r}{286}&\multicolumn{1}{r}{{\bf{97}}}&\multicolumn{1}{r}{992}\\
\multicolumn{1}{c}{D8}&\multicolumn{1}{r}{{\bf{90}}}&\multicolumn{1}{r}{80}&\multicolumn{1}{r}{84}&\multicolumn{1}{r}{40}&\multicolumn{1}{r}{{\bf{90}}}&&\multicolumn{1}{r}{{\bf{73}}}&\multicolumn{1}{r}{1879}&\multicolumn{1}{r}{150}&\multicolumn{1}{r}{86}&\multicolumn{1}{r}{1552}\\
\bottomrule
\end{tabular}
\end{center}
\end{table*}
\begin{table*}[!h]
\begin{center}
\caption{Comparison of rotation error $e_r $ and position error $e_c $ on eight human body datasets by using Theia \cite{10.1145/2733373.2807405}, COLMAP \cite{schonberger2016structure}, GraphSfM \cite{chen2020graph}, PixSfM \cite{lindenberger2021pixel}, and TC-SfM with and without bidirectional consistency (BC) optimization. The datasets are listed in the first column. The best results are highlighted in bold font. }
\label{tab_4}
\begin{tabular}{@{}llllllllllllll@{}}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{}} & \multicolumn{6}{c}{$e_r $ [degree]} & \multicolumn{1}{c}{} & \multicolumn{6}{c}{$e_c$ [mm] } \\ \cmidrule(lr){2-7} \cmidrule(l){9-14}
\multicolumn{1}{c}{} & \multicolumn{1}{r}{Theia} & \multicolumn{1}{r}{COLMAP} & \multicolumn{1}{r}{GraphSfM} & \multicolumn{1}{r}{PixSfM} & \multicolumn{1}{r}{\begin{tabular}[c]{@{}r@{}}TC-SfM\\ w/o BC\end{tabular}} & \multicolumn{1}{r}{TC-SfM} & \multicolumn{1}{r}{} & \multicolumn{1}{r}{Theia} & \multicolumn{1}{r}{COLMAP} & \multicolumn{1}{r}{GraphSfM} & \multicolumn{1}{r}{PixSfM} & \multicolumn{1}{r}{\begin{tabular}[c]{@{}r@{}}TC-SfM\\ w/o BC\end{tabular}} & \multicolumn{1}{r}{TC-SfM} \\ \midrule
\multicolumn{1}{c}{D1}&\multicolumn{1}{r}{0.455}&\multicolumn{1}{r}{{\bf{0.023}}}&\multicolumn{1}{r}{0.045}&\multicolumn{1}{r}{36.302}&\multicolumn{1}{r}{{\bf{0.023}}}&\multicolumn{1}{r}{{\bf{0.023}}}&&\multicolumn{1}{r}{105.31}&\multicolumn{1}{r}{{\bf{2.08}}}&\multicolumn{1}{r}{5.67}&\multicolumn{1}{r}{18.950}&\multicolumn{1}{r}{2.11}&\multicolumn{1}{r}{2.10}\\
\multicolumn{1}{c}{D2}&\multicolumn{1}{r}{0.482}&\multicolumn{1}{r}{{\bf{0.022}}}&\multicolumn{1}{r}{0.039}&\multicolumn{1}{r}{36.867}&\multicolumn{1}{r}{0.023}&\multicolumn{1}{r}{{\bf{0.022}}}&&\multicolumn{1}{r}{91.72}&\multicolumn{1}{r}{{\bf{2.03}}}&\multicolumn{1}{r}{4.50}&\multicolumn{1}{r}{27.759}&\multicolumn{1}{r}{2.10}&\multicolumn{1}{r}{2.07}\\
\multicolumn{1}{c}{D3}&\multicolumn{1}{r}{0.440}&\multicolumn{1}{r}{{\bf{0.017}}}&\multicolumn{1}{r}{0.038}&\multicolumn{1}{r}{3.742}&\multicolumn{1}{r}{{\bf{0.017}}}&\multicolumn{1}{r}{{\bf{0.017}}}&&\multicolumn{1}{r}{103.20}&\multicolumn{1}{r}{2.38}&\multicolumn{1}{r}{4.86}&\multicolumn{1}{r}{21.661}&\multicolumn{1}{r}{2.29}&\multicolumn{1}{r}{{\bf{2.28}}}\\
\multicolumn{1}{c}{D4}&\multicolumn{1}{r}{0.476}&\multicolumn{1}{r}{0.051}&\multicolumn{1}{r}{1.549}&\multicolumn{1}{r}{52.600}&\multicolumn{1}{r}{0.043}&\multicolumn{1}{r}{{\bf{0.042}}}&&\multicolumn{1}{r}{102.36}&\multicolumn{1}{r}{5.86}&\multicolumn{1}{r}{78.80}&\multicolumn{1}{r}{71.724}&\multicolumn{1}{r}{3.71}&\multicolumn{1}{r}{{\bf{3.60}}}\\
\multicolumn{1}{c}{D5}&\multicolumn{1}{r}{0.497}&\multicolumn{1}{r}{0.080}&\multicolumn{1}{r}{0.092}&\multicolumn{1}{r}{48.515}&\multicolumn{1}{r}{0.070}&\multicolumn{1}{r}{{\bf{0.064}}}&&\multicolumn{1}{r}{103.69}&\multicolumn{1}{r}{7.00}&\multicolumn{1}{r}{10.07}&\multicolumn{1}{r}{28.652}&\multicolumn{1}{r}{6.18}&\multicolumn{1}{r}{{\bf{5.08}}}\\
\multicolumn{1}{c}{D6}&\multicolumn{1}{r}{0.503}&\multicolumn{1}{r}{0.058}&\multicolumn{1}{r}{0.351}&\multicolumn{1}{r}{49.811}&\multicolumn{1}{r}{{\bf{0.056}}}&\multicolumn{1}{r}{{\bf{0.056}}}&&\multicolumn{1}{r}{104.21}&\multicolumn{1}{r}{6.54}&\multicolumn{1}{r}{20.36}&\multicolumn{1}{r}{29.735}&\multicolumn{1}{r}{7.47}&\multicolumn{1}{r}{{\bf{5.47}}}\\
\multicolumn{1}{c}{D7}&\multicolumn{1}{r}{1.545}&\multicolumn{1}{r}{60.120}&\multicolumn{1}{r}{48.219}&\multicolumn{1}{r}{49.384}&\multicolumn{1}{r}{{\bf{0.045}}}&\multicolumn{1}{r}{{\bf{0.045}}}&&\multicolumn{1}{r}{182.65}&\multicolumn{1}{r}{1754.72}&\multicolumn{1}{r}{1607.80}&\multicolumn{1}{r}{61.632}&\multicolumn{1}{r}{{\bf{6.38}}}&\multicolumn{1}{r}{{\bf{6.38}}}\\
\multicolumn{1}{c}{D8}&\multicolumn{1}{r}{0.441}&\multicolumn{1}{r}{41.495}&\multicolumn{1}{r}{36.642}&\multicolumn{1}{r}{70.839}&\multicolumn{1}{r}{0.044}&\multicolumn{1}{r}{{\bf{0.043}}}&&\multicolumn{1}{r}{104.22}&\multicolumn{1}{r}{1274.23}&\multicolumn{1}{r}{1233.26}&\multicolumn{1}{r}{50.727}&\multicolumn{1}{r}{5.88}&\multicolumn{1}{r}{{\bf{5.09}}}\\
\bottomrule
\end{tabular}
\end{center}
\end{table*}
\begin{figure}[h]
\centering
\includegraphics[width=3.5in]{fig-9}
\caption{Illustration of the human body reconstruction. (a) Cylindrical calibration object of our acquisition system. (b) Camera layout and correctly reconstructed model. (c) Folded model affected by ambiguity. (d) Front and back of the human body, which share a highly similar appearance. (e) Structures of front and back disambiguated by our method.}
\label{fig_9}
\end{figure}
Due to the similar structures of the clothes in the human body datasets (Fig. \ref{fig_9}(d)), the original view-graph contains many incorrect EGs. Table \ref{tab_4} shows that Theia is disturbed by mismatches but obtains a relatively stable error on all human body datasets, because the global approach evenly distributes the residual errors. PixSfM obtains the worst performance on all human body datasets. It is not surprising that the deep learning-based method recovers only a few cameras and points on our human data, because such an approach is data-driven and may perform poorly on unfamiliar datasets. The human body datasets contain large low-texture regions of skin and clothes, and such content might not be included in the training data of PixSfM; therefore, PixSfM tends to reconstruct only structures with prominent texture (e.g., the silhouette of clothes), resulting in low completeness and accuracy. COLMAP and GraphSfM perform well on D1-D6, which have fewer false correspondences on the clothes. Although the local reconstruction of GraphSfM is based on COLMAP, it is still affected by outliers in the merging step and is worse than COLMAP. On the human datasets with challenging clothes (D7 and D8 in Table \ref{tab_4}), COLMAP and GraphSfM cannot distinguish between the front and back of the body because of the strong visual resemblance: the cameras located at the back are folded onto the front side, as shown in Fig. \ref{fig_9}(c). Our method partitions the human body into several parts according to the track-communities; the front and back of the human body are regarded as two segments, and the partition result is presented in Fig. \ref{fig_9}(e). Features belonging to different segments are not matched, avoiding interference from false correspondences during registration and merging. The proposed TC-SfM successfully recovers all camera poses, while the ambiguity seriously disrupts the reconstruction results of COLMAP and GraphSfM. Thus, TC-SfM is more robust to ambiguities in the scene than the other methods, and it achieves comparable or better accuracy on the human body datasets.
PixSfM takes less time on the human datasets because it recovers far fewer cameras and points. Theia and GraphSfM are quite fast, while COLMAP and TC-SfM take more time owing to repeated bundle adjustment (BA). Note that if the feature correspondences contain many outliers, COLMAP will spend more time reconstructing the best result: when the next candidate image fails to be registered, it tries another registration order, or even a new initial pairwise reconstruction from scratch. Our method takes additional time to construct the track-graph and detect ambiguity. However, the mismatched correspondences caused by ambiguous structures are removed before registration, thereby reducing the number of retries during registration.
To evaluate the performance of the bidirectional consistency cost, we test our human body datasets with and without bidirectional consistency optimization. The comparison of rotation and position errors is reported in Table \ref{tab_4}. The optimization by bidirectional consistency further improves the accuracy of registration. For two local reconstructions, the similarity transformation between them can be estimated from common images or 3D correspondences, by minimizing the distances between the transformed points. However, two image clusters do not always share common images, which makes the merging fail, and the 3D correspondences also contain outliers. Therefore, the pairwise EG constraint that crosses the two reconstructions is necessary for estimating the transformation; a minimal sketch of the closed-form, point-based estimation step is given below.
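As an illustration, the following Python sketch computes the closed-form similarity transformation between two point sets with the Umeyama method. It is only a schematic of the point-based part of the estimation: the function name is ours, outlier rejection (e.g., RANSAC) is omitted, and the EG constraint and bidirectional consistency cost discussed above are not shown.
\begin{verbatim}
import numpy as np

def estimate_similarity(src, dst):
    """Closed-form s, R, t such that dst ~ s * R @ src + t (Umeyama, 1991).

    src, dst: (N, 3) arrays of corresponding 3D points taken from two
    local reconstructions.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d                 # centred point sets
    cov = xd.T @ xs / len(src)                      # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[2, 2] = -1.0                              # guard against reflections
    R = U @ D @ Vt
    var_src = (xs ** 2).sum() / len(src)
    s = np.trace(np.diag(S) @ D) / var_src          # isotropic scale
    t = mu_d - s * R @ mu_s
    return s, R, t
\end{verbatim}
In practice such a closed-form estimate would only serve as the initialization that the bidirectional consistency cost then refines.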
\subsection{Parameter Configuration and Limitation}
Two key parameters, $\tau_{w}$ and $\tau_{gs}$, must be set manually in our method; the others can be left at their defaults. $\tau_{w}$ is the edge-weight threshold of the view-graph in the track sampling step, and $\tau_{gs}$ is the GSI threshold of the tracks in the track-graph. $\tau_{w}$ controls whether an image pair is rejected in the correspondence search. For datasets with sufficient correspondences and stable illumination, such as the human body datasets, we set $\tau_{w} = 0.15$ and $\tau_{gs} = 0.5$. For the unordered Internet datasets, which are considered noisy, $\tau_{w} = 0.05$ and $\tau_{gs} = 0.65$ are sufficient to produce satisfactory results. An overly large $\tau_{w}$ may result in broken models, and an overly large $\tau_{gs}$ may result in insufficient detection of erroneous tracks. A sketch of how these presets might be organized in code is given below.
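The snippet below merely restates the two regimes in code form; the preset names and the \texttt{keep\_edge} helper are hypothetical and not part of any released implementation.
\begin{verbatim}
# Illustrative presets for the two regimes discussed in the text.
PRESETS = {
    "controlled_capture": {"tau_w": 0.15, "tau_gs": 0.50},  # e.g. human body rig
    "internet_unordered": {"tau_w": 0.05, "tau_gs": 0.65},  # noisy web images
}

def keep_edge(edge_weight, tau_w):
    # An image pair survives the correspondence search only if its
    # view-graph edge weight exceeds tau_w.
    return edge_weight > tau_w
\end{verbatim}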
Overall, our TC-SfM achieves superior performance on various datasets, even in the presence of ambiguous structures. We note that if a part of the scene is only weakly connected to the other parts (i.e., it has few matches with other views), the track-community detection will treat this part as a separate community, which cannot be reconstructed individually because of the weak connections. The registration then ignores these views, so a few views are missing from the final model.
\section{Conclusion}
In this work, a track-community-based SfM method with a partitioning scheme is proposed to address the ambiguity caused by visually indistinguishable structures. The proposed track-community structure is used to partition the scene into several segments and introduce more contextual information, which makes it possible to detect potentially ambiguous segments by analyzing the diversity of the neighborhood of each track. To distinguish similar parts of the scene, we perform consistency validation between the poses estimated from the distinct and the ambiguous segments. This approach enables a correct reconstruction because the erroneous correspondences are ignored during registration. Then, each segment of the scene is individually reconstructed to mitigate drift. The proposed bidirectional consistency cost can refine the pairwise similarity transformations between the local reconstructions, thereby further improving the merging accuracy. The experiments show that TC-SfM can effectively alleviate reconstruction failures resulting from ambiguity and achieve a more robust and accurate reconstruction. To address the absence of some weakly connected views, our future work will exploit the information shared between different segments to further improve the reconstruction in extremely challenging cases.
\bibliographystyle{IEEEtran}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 8,619 |
Harvest Moon may refer to:
Harvest Moon (music album) – an album by Neil Young
Harvest Moon (video game series) – a video game series by the company Natsume
Harvest Moon (SNES) – the first game in that series
Harvest Moon (song by Blue Öyster Cult) – a song on the album Heaven Forbid by the band Blue Öyster Cult | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 8,541 |
Q: How to: Create an Asset using the VersionOne SDK
I'm trying to write some C# code to utilize the VersionOne SDK to create a Defect asset. I've queried our system and have identified the required attributes:
Defect derives from PrimaryWorkitem
* Description : LongText
* Name : Text
* Parent : Relation to Theme — reciprocal of Children
* Priority : Relation to WorkitemPriority — reciprocal of PrimaryWorkitems
* Scope : Relation to Scope — reciprocal of Workitems
* Source : Relation to StorySource — reciprocal of PrimaryWorkitems
* Status : Relation to StoryStatus — reciprocal of PrimaryWorkitems
* Team : Relation to Team — reciprocal of Workitems
Some of the values are obvious while others are somewhat abstract. For example, I'm not sure what to specify for the "Parent" attribute or the "Scope". The documentation for creating an Asset with the SDK is pretty sparse. I can't seem to find any code examples for using the SDK. At the moment, my code returns an exception:
The remote server returned an error: (400) Bad Request
Violation'Required'AttributeDefinition'Parent'Defect
And, here's the code I'm using at the moment:
static void AddV1Record(List<V1WerRecord> records)
{
    // Build the connection to the VersionOne instance.
    V1Connector connector = V1Connector
        .WithInstanceUrl(VersionOneURL)
        .WithUserAgentHeader("VersionOneUpdate", "1.0")
        .WithUsernameAndPassword(VersionOneId, VersionOnePwd)
        .Build();
    IServices services = new Services(connector);

    // Create the new Defect in the context of project Scope:0.
    Oid projectId = services.GetOid("Scope:0");
    IAssetType storyType = services.Meta.GetAssetType("Defect");
    Asset newDefect = services.New(storyType, projectId);

    // Set the attributes I know how to populate.
    IAttributeDefinition descAttribute = storyType.GetAttributeDefinition("Description");
    newDefect.SetAttributeValue(descAttribute, "My New Defect");
    IAttributeDefinition nameAttribute = storyType.GetAttributeDefinition("Name");
    newDefect.SetAttributeValue(nameAttribute, "My Name");

    services.Save(newDefect);
}
I understand the error is caused by not specifying all of the required attributes. I'm at a loss for what to specify for some of the attributes: Parent, Scope, etc.
Does anyone know of better documentation that explains using the SDK to create an Asset? Are there any good SDK examples/sample code available?
A: When creating a Primary workitem such as Defect or Story, you have to create it within the context of a particular project. The project is known at the system level as a Scope. A Parent attribute on a Defect is what is called a Theme. By default this is not a required attribute. Someone in your organization has declared this particular item as required.
Parent: Relation to Theme means that the Parent attribute accepts a reference to a particular Theme. You would set the Parent attribute to something in the format like this
Theme:1036
This is called an OID. It is just a system ref to a table-like structure called a relation that holds all of the different Themes in your system. If you query your data API, you can get a list of all these Themes. The query looks like this
yourVersionOneURL/rest-1.v1/Data/Theme?sel=ID,Name
You will get an xml listing in your browser showing n number of these
So if I want to associate a Theme called Shirts with my Defect, I would set the Parent attribute to Theme:1036.
You can add this to your code
IAttributeDefinition parentAttribute = storyType.GetAttributeDefinition("Parent"); // from the asset type, not the asset
newDefect.SetAttributeValue(parentAttribute, "Theme:1036");
The same process goes for Scope. There is an alternative to querying. You can go into the VersionOne UI, find the name of the project (or other assets) that you need, hover the mouse over the project name (Scope) and in the status bar at the bottom of your browser you will see something that will indicate the Scope OID that is associated with that project name.
I would chat with your VersionOne admin and get clarity as to why the required Theme is necessary for your organization
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 6,436 |
{"url":"https:\/\/nrich.maths.org\/public\/leg.php?code=-68&cl=3&cldcmpid=5457","text":"# Search by Topic\n\n#### Resources tagged with Visualising similar to Transformation Game:\n\nFilter by: Content type:\nAge range:\nChallenge level:\n\n##### Other tags that relate to Transformation Game\nCollaborative. Games. Enlargements. Translations. Interactivities. Compound\u00a0transformations. Rotations. Resourceful. Reflections. Curious.\n\n### There are 184 results\n\n##### Age 11 to 14 Challenge Level:\n\nHow many different symmetrical shapes can you make by shading triangles or squares?\n\n### Square Coordinates\n\n##### Age 11 to 14 Challenge Level:\n\nA tilted square is a square with no horizontal sides. Can you devise a general instruction for the construction of a square when you are given just one of its sides?\n\n### Constructing Triangles\n\n##### Age 11 to 14 Challenge Level:\n\nGenerate three random numbers to determine the side lengths of a triangle. What triangles can you draw?\n\n### Semi-regular Tessellations\n\n##### Age 11 to 14 Challenge Level:\n\nSemi-regular tessellations combine two or more different regular polygons to fill the plane. Can you find all the semi-regular tessellations?\n\n### Khun Phaen Escapes to Freedom\n\n##### Age 11 to 14 Challenge Level:\n\nSlide the pieces to move Khun Phaen past all the guards into the position on the right from which he can escape to freedom.\n\n### A Problem of Time\n\n##### Age 14 to 16 Challenge Level:\n\nConsider a watch face which has identical hands and identical marks for the hours. It is opposite to a mirror. When is the time as read direct and in the mirror exactly the same between 6 and 7?\n\n### Marbles in a Box\n\n##### Age 11 to 14 Challenge Level:\n\nHow many winning lines can you make in a three-dimensional version of noughts and crosses?\n\n### Isosceles Triangles\n\n##### Age 11 to 14 Challenge Level:\n\nDraw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?\n\n### Screwed-up\n\n##### Age 11 to 14 Challenge Level:\n\nA cylindrical helix is just a spiral on a cylinder, like an ordinary spring or the thread on a bolt. If I turn a left-handed helix over (top to bottom) does it become a right handed helix?\n\n### Conway's Chequerboard Army\n\n##### Age 11 to 14 Challenge Level:\n\nHere is a solitaire type environment for you to experiment with. Which targets can you reach?\n\n### Counting Triangles\n\n##### Age 11 to 14 Challenge Level:\n\nTriangles are formed by joining the vertices of a skeletal cube. How many different types of triangle are there? How many triangles altogether?\n\n### Coke Machine\n\n##### Age 14 to 16 Challenge Level:\n\nThe coke machine in college takes 50 pence pieces. It also takes a certain foreign coin of traditional design...\n\n### Turning Triangles\n\n##### Age 11 to 14 Challenge Level:\n\nA triangle ABC resting on a horizontal line is \"rolled\" along the line.
Describe the paths of each of the vertices and the relationships between them and the original triangle.\n\n### On the Edge\n\n##### Age 11 to 14 Challenge Level:\n\nIf you move the tiles around, can you make squares with different coloured edges?\n\n### Christmas Chocolates\n\n##### Age 11 to 14 Challenge Level:\n\nHow could Penny, Tom and Matthew work out how many chocolates there are in different sized boxes?\n\n### Square It\n\n##### Age 11 to 16 Challenge Level:\n\nPlayers take it in turns to choose a dot on the grid. The winner is the first to have four dots that can be joined to form a square.\n\n### Icosagram\n\n##### Age 11 to 14 Challenge Level:\n\nDraw a pentagon with all the diagonals. This is called a pentagram. How many diagonals are there? How many diagonals are there in a hexagram, heptagram, ... Does any pattern occur when looking at. . . .\n\n### Rolling Triangle\n\n##### Age 11 to 14 Challenge Level:\n\nThe triangle ABC is equilateral. The arc AB has centre C, the arc BC has centre A and the arc CA has centre B. Explain how and why this shape can roll along between two parallel tracks.\n\n### Sea Defences\n\n##### Age 7 to 14 Challenge Level:\n\nThese are pictures of the sea defences at New Brighton. Can you work out what a basic shape might be in both images of the sea wall and work out a way they might fit together?\n\n### Zooming in on the Squares\n\n##### Age 7 to 14\n\nStart with a large square, join the midpoints of its sides, you'll see four right angled triangles. Remove these triangles, a second square is left. Repeat the operation. What happens?\n\n### Sprouts\n\n##### Age 7 to 18 Challenge Level:\n\nA game for 2 people. Take turns joining two dots, until your opponent is unable to move.\n\n### Jam\n\n##### Age 14 to 16 Challenge Level:\n\nTo avoid losing think of another very well known game where the patterns of play are similar.\n\n### There and Back Again\n\n##### Age 11 to 14 Challenge Level:\n\nBilbo goes on an adventure, before arriving back home. Using the information given about his journey, can you work out where Bilbo lives?\n\n##### Age 14 to 16 Challenge Level:\n\nFour rods are hinged at their ends to form a convex quadrilateral. Investigate the different shapes that the quadrilateral can take. Be patient this problem may be slow to load.\n\n### An Unusual Shape\n\n##### Age 11 to 14 Challenge Level:\n\nCan you maximise the area available to a grazing goat?\n\n### Instant Insanity\n\n##### Age 11 to 18 Challenge Level:\n\nGiven the nets of 4 cubes with the faces coloured in 4 colours, build a tower so that on each vertical wall no colour is repeated, that is all 4 colours appear.\n\n##### Age 11 to 14 Challenge Level:\n\nCan you mark 4 points on a flat surface so that there are only two different distances between them?\n\n### Jam\n\n##### Age 14 to 16 Challenge Level:\n\nA game for 2 players\n\n### A Tilted Square\n\n##### Age 14 to 16 Challenge Level:\n\nThe opposite vertices of a square have coordinates (a,b) and (c,d). What are the coordinates of the other vertices?\n\n### Rolling Around\n\n##### Age 11 to 14 Challenge Level:\n\nA circle rolls around the outside edge of a square so that its circumference always touches the edge of the square. Can you describe the locus of the centre of the circle?\n\n### Right Time\n\n##### Age 11 to 14 Challenge Level:\n\nAt the time of writing the hour and minute hands of my clock are at right angles.
How long will it be before they are at right angles again?\n\n### All in the Mind\n\n##### Age 11 to 14 Challenge Level:\n\nImagine you are suspending a cube from one vertex and allowing it to hang freely. What shape does the surface of the water make around the cube?\n\n### Weighty Problem\n\n##### Age 11 to 14 Challenge Level:\n\nThe diagram shows a very heavy kitchen cabinet. It cannot be lifted but it can be pivoted around a corner. The task is to move it, without sliding, in a series of turns about the corners so that it. . . .\n\n### Lost on Alpha Prime\n\n##### Age 14 to 16 Challenge Level:\n\nOn the 3D grid a strange (and deadly) animal is lurking. Using the tracking system can you locate this creature as quickly as possible?\n\n### Hidden Rectangles\n\n##### Age 11 to 14 Challenge Level:\n\nRectangles are considered different if they vary in size or have different locations. How many different rectangles can be drawn on a chessboard?\n\n### Sliding Puzzle\n\n##### Age 5 to 16 Challenge Level:\n\nThe aim of the game is to slide the green square from the top right hand corner to the bottom left hand corner in the least number of moves.\n\n### Reflecting Squarely\n\n##### Age 11 to 14 Challenge Level:\n\nIn how many ways can you fit all three pieces together to make shapes with line symmetry?\n\n### Tetra Square\n\n##### Age 11 to 14 Challenge Level:\n\nABCD is a regular tetrahedron and the points P, Q, R and S are the midpoints of the edges AB, BD, CD and CA. Prove that PQRS is a square.\n\n### John's Train Is on Time\n\n##### Age 11 to 14 Challenge Level:\n\nA train leaves on time. After it has gone 8 miles (at 33mph) the driver looks at his watch and sees that the hour hand is exactly over the minute hand. When did the train leave the station?\n\n### Cubic Net\n\n##### Age 14 to 18 Challenge Level:\n\nThis is an interactive net of a Rubik's cube. Twists of the 3D cube become mixes of the squares on the 2D net. Have a play and see how many scrambles you can undo!\n\n### Coordinate Patterns\n\n##### Age 11 to 14 Challenge Level:\n\nCharlie and Alison have been drawing patterns on coordinate grids. Can you picture where the patterns lead?\n\n### Bands and Bridges: Bringing Topology Back\n\n##### Age 7 to 14\n\nLyndon Baker describes how the Mobius strip and Euler's law can introduce pupils to the idea of topology.\n\n### Buses\n\n##### Age 11 to 14 Challenge Level:\n\nA bus route has a total duration of 40 minutes. Every 10 minutes, two buses set out, one from each end. How many buses will one bus meet on its way from one end to the other end?\n\n### Flight of the Flibbins\n\n##### Age 11 to 14 Challenge Level:\n\nBlue Flibbins are so jealous of their red partners that they will not leave them on their own with any other blue Flibbin. What is the quickest way of getting the five pairs of Flibbins safely to. . . .\n\n### Concrete Wheel\n\n##### Age 11 to 14 Challenge Level:\n\nA huge wheel is rolling past your window. What do you see?\n\n### Convex Polygons\n\n##### Age 11 to 14 Challenge Level:\n\nShow that among the interior angles of a convex polygon there cannot be more than three acute angles.\n\n### Coloured Edges\n\n##### Age 11 to 14 Challenge Level:\n\nThe whole set of tiles is used to make a square. This has a green and blue border. There are no green or blue tiles anywhere in the square except on this border. How many tiles are there in the set?\n\n### Pattern Power\n\n##### Age 5 to 14\n\nMathematics is the study of patterns.
Studying pattern is an opportunity to observe, hypothesise, experiment, discover and create.\n\n### Inside Out\n\n##### Age 14 to 16 Challenge Level:\n\nThere are 27 small cubes in a 3 x 3 x 3 cube, 54 faces being visible at any one time. Is it possible to reorganise these cubes so that by dipping the large cube into a pot of paint three times you. . . .\n\n### Squares, Squares and More Squares\n\n##### Age 11 to 14 Challenge Level:\n\nCan you dissect a square into: 4, 7, 10, 13... other squares? 6, 9, 12, 15... other squares? 8, 11, 14... other squares?","date":"2018-06-18 09:17:13","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.32428616285324097, \"perplexity\": 1810.4772068016832}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-26\/segments\/1529267860168.62\/warc\/CC-MAIN-20180618090026-20180618110026-00045.warc.gz\"}"} | null | null |
Ponce Inlet
Ponce de Leon Inlet connects the Atlantic Ocean to the Halifax River and is located approximately 12 miles south of Daytona Beach. You can fish from the jetty, walk along the beach and even drive up to the jetty from the beach.
You'll find many types of terrain here, like ocean dunes, palmetto patches, maritime hammock and the wetlands connected to the Halifax River. Among these terrains you'll find native plants like Florida lantana, southern red cedar, cabbage palms, Simpson's stoppers and oak trees; one live oak there is said to be over 350 years old. Located on the west side of Peninsula Drive are the parking lot, restrooms and direct access to the nature trails throughout the park.
Ponce Inlet has a boardwalk and two places to launch canoes or kayaks; the marsh there takes you directly out to the Halifax River. There are three covered gazebos where you can relax and enjoy the view.
Lighthouse Point Park
Lighthouse Point Park consists of 52 acres of pristine land on the north side of Ponce de Leon Inlet in the Town of Ponce Inlet. The park features fishing, nature trails, an observation deck and tower, swimming and picnicking.
The Ponce Preserve
The Ponce Preserve covers approximately 41 acres and stretches from the Atlantic Ocean to the Halifax River. It is home to a variety of wildlife including raccoons, possums, skunks, armadillos, shore birds and birds of prey. It features fishing, nature trails, swimming and picnicking.
Smyrna Dunes Park
Smyrna Dunes Park has water on three sides, so visitors can arrive by land or sea. It has a wide beach that you can drive on; if you've never experienced driving on a beach, I highly recommend it. The water from the Halifax River flows through Ponce Inlet into the Atlantic Ocean here, and there's a great view of the Ponce Inlet Lighthouse. The jetty extends out into the water and is a favorite local place to fish. The waves are higher here than at any other place in Volusia County; surfers love this beach, and so do swimmers and snorkelers. The park is home to a variety of animals, birds, marine life and vegetation. It has three miles of beautiful, elevated wooden walkways that protect the fragile dunes from foot traffic, along with picnic areas and an observation tower. The park is open from sunrise to sunset and there is a small admission fee. You can't get to Smyrna Dunes Park from the north side of Ponce Inlet as there is no bridge; you have to enter through New Smyrna Beach.
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 6,623 |
{"url":"https:\/\/en.wikipedia.org\/wiki\/Inverse_iteration","text":"# Inverse iteration\n\nIn numerical analysis, inverse iteration is an iterative eigenvalue algorithm. It allows one to find an approximate eigenvector when an approximation to a corresponding eigenvalue is already known. The method is conceptually similar to the power method and is also known as the inverse power method. It appears to have originally been developed to compute resonance frequencies in the field of structural mechanics. [1]\n\nThe inverse power iteration algorithm starts with an approximation ${\\displaystyle \\mu }$ for the eigenvalue corresponding to the desired eigenvector and a vector b0, either a randomly selected vector or an approximation to the eigenvector. The method is described by the iteration\n\n${\\displaystyle b_{k+1}={\\frac {(A-\\mu I)^{-1}b_{k}}{C_{k}}},}$\n\nwhere Ck are some constants usually chosen as ${\\displaystyle C_{k}=\\|(A-\\mu I)^{-1}b_{k}\\|.}$ Since eigenvectors are defined up to multiplication by a constant, the choice of Ck can be arbitrary in theory; practical aspects of the choice of ${\\displaystyle C_{k}}$ are discussed below.\n\nAt every iteration, the vector bk is multiplied by the matrix ${\\displaystyle (A-\\mu I)^{-1}}$ and normalized. It is exactly the same formula as in the power method, except replacing the matrix A by ${\\displaystyle (A-\\mu I)^{-1}.}$ The closer the approximation ${\\displaystyle \\mu }$ to the eigenvalue is chosen, the faster the algorithm converges; however, an incorrect choice of ${\\displaystyle \\mu }$ can lead to slow convergence or to convergence to an eigenvector other than the one desired. In practice, the method is used when a good approximation for the eigenvalue is known, and hence one needs only a few (quite often just one) iterations.\n\n## Theory and convergence\n\nThe basic idea of the power iteration is choosing an initial vector ${\\displaystyle b}$ (either an eigenvector approximation or a random vector) and iteratively calculating ${\\displaystyle Ab,A^{2}b,A^{3}b,...}$. Except for a set of zero measure, for any initial vector, the result will converge to an eigenvector corresponding to the dominant eigenvalue.\n\nThe inverse iteration does the same for the matrix ${\\displaystyle (A-\\mu I)^{-1}}$, so it converges to the eigenvector corresponding to the dominant eigenvalue of the matrix ${\\displaystyle (A-\\mu I)^{-1}}$. Eigenvalues of this matrix are ${\\displaystyle (\\lambda _{1}-\\mu )^{-1},...,(\\lambda _{n}-\\mu )^{-1},}$ where ${\\displaystyle \\lambda _{i}}$ are eigenvalues of ${\\displaystyle A}$.
The largest of these numbers corresponds to the smallest of ${\\displaystyle (\\lambda _{1}-\\mu ),...,(\\lambda _{n}-\\mu ).}$ The eigenvectors of ${\\displaystyle A}$ and of ${\\displaystyle (A-\\mu I)^{-1}}$ are the same, since\n\n${\\displaystyle Av=\\lambda v\\Leftrightarrow (A-\\mu I)v=\\lambda v-\\mu v\\Leftrightarrow (\\lambda -\\mu )^{-1}v=(A-\\mu I)^{-1}v}$\n\nConclusion: The method converges to the eigenvector of the matrix ${\\displaystyle A}$ corresponding to the closest eigenvalue to ${\\displaystyle \\mu .}$\n\nIn particular, taking ${\\displaystyle \\mu =0}$ we see that ${\\displaystyle (A)^{-k}b}$ converges to the eigenvector corresponding to the eigenvalue of ${\\displaystyle A}$ with the smallest absolute value [clarification needed].\n\n### Speed of convergence\n\nLet us analyze the rate of convergence of the method.\n\nThe power method is known to converge linearly to the limit, more precisely:\n\n${\\displaystyle \\mathrm {Distance} (b^{\\mathrm {ideal} },b_{\\mathrm {Power~Method} }^{k})=O\\left(\\left|{\\frac {\\lambda _{\\mathrm {subdominant} }}{\\lambda _{\\mathrm {dominant} }}}\\right|^{k}\\right),}$\n\nhence for the inverse iteration method the analogous result reads:\n\n${\\displaystyle \\mathrm {Distance} (b^{\\mathrm {ideal} },b_{\\mathrm {Inverse~iteration} }^{k})=O\\left(\\left|{\\frac {\\mu -\\lambda _{\\mathrm {closest~to~} \\mu }}{\\mu -\\lambda _{\\mathrm {second~closest~to~} \\mu }}}\\right|^{k}\\right).}$\n\nThis is a key formula for understanding the method's convergence. It shows that if ${\\displaystyle \\mu }$ is chosen close enough to some eigenvalue ${\\displaystyle \\lambda }$, for example ${\\displaystyle \\mu -\\lambda =\\epsilon },$ each iteration will improve the accuracy ${\\displaystyle |\\epsilon |\/|\\lambda +\\epsilon -\\lambda _{\\mathrm {closest~to~} \\lambda }|}$ times. (We use that for small enough ${\\displaystyle \\epsilon }$ \"closest to ${\\displaystyle \\mu }$\" and \"closest to ${\\displaystyle \\lambda }$\" is the same.) For small enough ${\\displaystyle |\\epsilon |}$ it is approximately the same as ${\\displaystyle |\\epsilon |\/|\\lambda -\\lambda _{\\mathrm {closest~to~} \\lambda }|}$. Hence if one is able to find ${\\displaystyle \\mu }$ such that ${\\displaystyle \\epsilon }$ is small enough, then very few iterations may be satisfactory.\n\n### Complexity\n\nThe inverse iteration algorithm requires solving a linear system or calculating the inverse matrix. For non-structured matrices (not sparse, not Toeplitz,...) this requires ${\\displaystyle O(n^{3})}$ operations.\n\n## Implementation options\n\nThe method is defined by the formula:\n\n${\\displaystyle b_{k+1}={\\frac {(A-\\mu I)^{-1}b_{k}}{C_{k}}},}$\n\nThere are, however, multiple options for its implementation.\n\n### Calculate inverse matrix or solve system of linear equations\n\nWe can rewrite the formula in the following way:\n\n${\\displaystyle (A-\\mu I)b_{k+1}={\\frac {b_{k}}{C_{k}}},}$\n\nemphasizing that to find the next approximation ${\\displaystyle b_{k+1}}$ we may solve a system of linear equations. There are two options: one may choose an algorithm that solves a linear system, or one may calculate the inverse ${\\displaystyle (A-\\mu I)^{-1}}$ and then apply it to the vector. Both options have complexity O(n^3); the exact number depends on the chosen method.\n\nThe choice depends also on the number of iterations.
Naively, if at each iteration one solves a linear system, the complexity will be k*O(n^3), where k is the number of iterations; similarly, calculating the inverse matrix and applying it at each iteration is of complexity k*O(n^3). Note, however, that if the eigenvalue estimate ${\\displaystyle \\mu }$ remains constant, then we may reduce the complexity to O(n^3) + k*O(n^2) with either method. Calculating the inverse matrix once and storing it to apply at each iteration is of complexity O(n^3) + k*O(n^2). Storing an LU decomposition of ${\\displaystyle (A-\\mu I)}$ and using forward and back substitution to solve the system of equations at each iteration is also of complexity O(n^3) + k*O(n^2).\n\nInverting the matrix will typically have a greater initial cost, but lower cost at each iteration. Conversely, solving systems of linear equations will typically have a lesser initial cost, but require more operations for each iteration.\n\n### Tridiagonalization, Hessenberg form\n\nIf it is necessary to perform many iterations (or few iterations, but for many eigenvectors), then it might be wise to bring the matrix to upper Hessenberg form first (for a symmetric matrix this will be tridiagonal form), which costs ${\\displaystyle {\\begin{matrix}{\\frac {10}{3}}\\end{matrix}}n^{3}+O(n^{2})}$ arithmetic operations using a technique based on Householder reduction, with a finite sequence of orthogonal similarity transforms, somewhat like a two-sided QR decomposition.[2][3] (For QR decomposition, the Householder rotations are multiplied only on the left, but for the Hessenberg case they are multiplied on both left and right.) For symmetric matrices this procedure costs ${\\displaystyle {\\begin{matrix}{\\frac {4}{3}}\\end{matrix}}n^{3}+O(n^{2})}$ arithmetic operations using a technique based on Householder reduction.[2][3]\n\nSolution of the system of linear equations for the tridiagonal matrix costs O(n) operations, so the complexity grows like O(n^3)+k*O(n), where k is the iteration number, which is better than for the direct inversion. However, for few iterations such a transformation may not be practical.\n\nAlso, transformation to the Hessenberg form involves square roots and the division operation, which are not universally supported by hardware.\n\n### Choice of the normalization constant Ck\n\nOn general purpose processors (e.g. produced by Intel) the execution time of addition, multiplication and division is approximately equal. But on embedded and\/or low energy consuming hardware (digital signal processors, FPGA, ASIC) division may not be supported by hardware, and so should be avoided. Choosing Ck=2^(n_k) allows fast division without explicit hardware support, as division by a power of 2 may be implemented as either a bit shift (for fixed-point arithmetic) or subtraction of n_k from the exponent (for floating-point arithmetic).\n\nWhen implementing the algorithm using fixed-point arithmetic, the choice of the constant Ck is especially important. Small values will lead to fast growth of the norm of bk and to overflow; large values of Ck will cause the vector bk to tend toward zero.\n\n## Usage\n\nThe main application of the method is the situation when an approximation to an eigenvalue is found and one needs to find the corresponding approximate eigenvector. In such a situation the inverse iteration is the main and probably the only method to use.
So typically the method is used in combination with some other method which finds approximate eigenvalues: the standard example is the bisection eigenvalue algorithm, another example is the Rayleigh quotient iteration which is actually the same inverse iteration with the choice of the approximate eigenvalue as the Rayleigh quotient corresponding to the vector obtained on the previous step of the iteration.\n\nThere are some situations where the method can be used by itself, however they are quite marginal.\n\nDominant eigenvector. The dominant eigenvalue can be easily estimated for any matrix. For any induced norm it is true that ${\\displaystyle \\left\\|A\\right\\|\\geq |\\lambda |,}$ for any eigenvalue ${\\displaystyle \\lambda }$. So taking the norm of the matrix as an approximate eigenvalue one can see that the method will converge to the dominant eigenvector.\n\nEstimates based on statistics. In some real-time applications one needs to find eigenvectors for matrices with a speed of millions of matrices per second. In such applications, typically the statistics of matrices is known in advance and one can take as an approximate eigenvalue the average eigenvalue for some large matrix sample. Better, one may calculate the mean ratio of the eigenvalues to the trace or the norm of the matrix and estimate the average eigenvalue as the trace or norm multiplied by the average value of that ratio. Clearly such a method can be used only with discretion and only when high precision is not critical. This approach of estimating an average eigenvalue can be combined with other methods to avoid excessively large error.","date":"2016-12-03 16:42:09","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 48, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9519986510276794, \"perplexity\": 293.10217012387}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-50\/segments\/1480698540975.18\/warc\/CC-MAIN-20161202170900-00326-ip-10-31-129-80.ec2.internal.warc.gz\"}"} | null | null |
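For readers who want to try the algorithm described in this entry, here is a minimal Python sketch (our own illustration, not a reference implementation). It follows the O(n^3) + k*O(n^2) strategy discussed above: the shifted matrix is LU-factored once and each iteration only solves triangular systems, with the normalization choice Ck = ||(A - mu I)^{-1} bk||.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_iteration(A, mu, b0=None, tol=1e-10, max_iter=100):
    """Approximate the eigenpair of A whose eigenvalue is closest to mu."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    b = rng.standard_normal(n) if b0 is None else np.asarray(b0, float)
    b /= np.linalg.norm(b)
    # Factor (A - mu I) once: O(n^3) up front, then O(n^2) per solve.
    lu, piv = lu_factor(A - mu * np.eye(n))
    for _ in range(max_iter):
        b_new = lu_solve((lu, piv), b)
        b_new /= np.linalg.norm(b_new)   # C_k = ||(A - mu I)^{-1} b_k||
        # Eigenvectors are defined up to sign, so compare both orientations.
        if min(np.linalg.norm(b_new - b), np.linalg.norm(b_new + b)) < tol:
            b = b_new
            break
        b = b_new
    return b @ A @ b, b                  # Rayleigh quotient and eigenvector
```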
\section{Introduction}
Clusters of galaxies are the largest virialized structures in the
Universe and play an important role in our understanding of how
dark-matter haloes collapse and large-scale structure evolves. Their
number density can place constraints on the mass density of the
universe and the amplitude of the mass fluctuations (e.g. Eke et
al. 1998). Clusters also act as astrophysical laboratories for
understanding the formation and evolution of galaxies and their
environments. For instance the study of high-redshift clusters can
help us gain an understanding of the feedback processes caused by
star-formation and Active Galactic Nuclei (e.g. Silk \& Rees 1998). It
is therefore desirable to have a large, homogeneous catalogue of
clusters at a range of redshifts in the universe.
At present, however, there are only a few clusters known at $z>1$ and
the vast majority of these are from X-ray surveys (e.g. using
XMM-Newton; Stanford et al. 2006). The paucity of known, distant
clusters stems from various selection effects or deficiencies. For
instance, optical searches for clusters, which work so efficiently at
$z<1$, are ineffective once the 4000 \AA\ break falls outside the
I-filter pass band ($z>1$) given the predominance of early-type, red
galaxies in clusters. One solution to this problem is to select
clusters in the near-infrared. Recent developments have seen the
advent of large near-infrared cameras like the Wide Field Infrared
Camera (WFCAM) on the United Kingdom Infrared Telescope (UKIRT). WFCAM
is now undertaking the UKIRT Infrared Deep Sky Survey (UKIDSS,
Lawrence et al. 2006), which is a suite of large area and deep
near-infrared sky surveys. UKIDSS provides the ideal opportunity to
search for high-redshift clusters in the infrared wavelength
regime. The survey has a high efficiency as it provides over an order
of magnitude increase in survey speed over existing near-infrared
imagers.
There are numerous methods for detecting clusters in optical/infrared
imaging surveys. The problem is easier with spectroscopic redshifts,
but currently spectroscopic information for infrared-selected
galaxies is in short supply, time consuming, and impractical over
large areas. However, approximate redshifts can be calculated via
photometric redshift estimation. Despite being a popular method,
remarkably little attention is paid to using photometric redshifts to
isolate clusters. Further, with the exceptions of Kim et al. (2002),
Goto et al. (2002), Bahcall et al. (2003) and Lopes et al. (2004), very
little work has been done to compare the various cluster detection
methods that exist (for a review see Gal 2005).
\section{The Algorithm}
We have developed a new cluster-detection algorithm [see van Breukelen
et al. (in preparation) for full details] to deal with two common
problems of photometric selection methods: (i) projection effects of
fore- and background galaxies and (ii) determining the reality of
detected clusters. The former issue arises because photometric - as
opposed to spectroscopic - redshifts typically have errors of the
order of $\sigma \sim 0.1$; furthermore the photometric redshift
probability functions (z-PDFs) are often significantly non-Gaussian
and can for instance show double peaks. To address this problem, our
cluster-detection algorithm utilizes the full z-PDF instead of a
single best redshift-estimate with an associated error. The second
issue - the occurrence of spurious cluster-detections - is due to
selection biases inherent in any detection algorithm. We take this
effect into account by cross-correlating the output of two
substantially different cluster-detection methods. Simulations reveal
that this reduces the contamination of the cluster sample to chance
galaxy groupings.
The algorithm is divided into six steps, described in
more detail in the following subsections.
\begin{enumerate}
\item Determining z-PDFs for all galaxies in the field.
\item Creating 500 Monte-Carlo (MC) realisations of the three-dimensional
galaxy distribution, based on the galaxy z-PDFs.
\item Dividing each MC-realisation into redshift slices of $\Delta z =
0.05$ over the range $0.1 \leq z \leq 2.0$.
\item Detecting cluster candidates in each slice of all
MC-realisations using independent Voronoi Tessellation (VT) and
Friends-Of-Friends (FOF) methods.
\item Mapping the probability of cluster candidates for both methods
based on the number of MC-realisations in which they occur.
\item Cross-correlating the output of the VT and FOF methods to arrive
at the final cluster-catalogue.
\end{enumerate}
\subsection{Photometric Redshifts}
We generate spectral energy distribution (SED) templates with the
stellar population synthesis code GALAXEV (Bruzual \& Charlot 2003),
which cover a range of different star formation rates with timescales
$\tau$ from 0.1 to 30 Gyr. Subsequently the redshift probability
functions are derived by fitting the SEDs to each galaxy's photometry
using the \emph{Hyperz} code (Bolzonella et al 2000). To create
marginalised posterior redshift probability functions we adopt a flat
prior for galaxy luminosity up to a maximum of $L = 10 \rm L^*$
(assuming a passively evolving elliptical galaxy) in the observed
$K$-band.
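For illustration, the marginalisation step can be sketched in a few lines of Python; the $\chi^2$ grid itself would come from the \emph{Hyperz} fits, and the array names below are ours.
\begin{verbatim}
import numpy as np

def redshift_pdf(chi2, allowed):
    """Collapse a chi^2 grid into a marginalised redshift PDF.

    chi2   : (n_z, n_templates) chi-squared of each SED fit.
    allowed: boolean mask of the same shape, False where the implied
             luminosity would exceed the 10 L* prior cap.
    """
    like = np.exp(-0.5 * (chi2 - chi2.min()))  # shift for stability
    like[~allowed] = 0.0                       # flat prior: just exclude
    pdf = like.sum(axis=1)                     # marginalise over templates
    return pdf / pdf.sum()                     # normalise on the z grid
\end{verbatim}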
\subsection{The Monte-Carlo realisations and redshift slicing}
We create 500 Monte-Carlo realisations of the three-dimensional galaxy
distribution by randomly sampling each z-PDF. We then divide each
MC-realisation into redshift slices of $\Delta z = 0.05$. The width of
these slices is comparable to the photometric redshift error
($\sigma_z$); if it is chosen to be too small, clusters can go
undetected because their member galaxies are spread over too
many redshift slices; if it is too large, many spurious sources will
be found owing to projection effects.
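A schematic Python version of this sampling-and-slicing step is given below; the array names are illustrative, and the real pipeline works on the z-PDFs described in the previous subsection.
\begin{verbatim}
import numpy as np

def monte_carlo_slices(pdfs, z_grid, n_real=500, dz=0.05,
                       z_min=0.1, z_max=2.0):
    """Draw one redshift per galaxy per realisation and bin into slices.

    pdfs: (n_gal, n_z) normalised z-PDFs sampled on the grid z_grid.
    Returns slice indices of shape (n_real, n_gal); -1 = out of range.
    """
    rng = np.random.default_rng(1)
    cdfs = np.cumsum(pdfs, axis=1)
    slices = np.empty((n_real, len(pdfs)), dtype=int)
    for g, cdf in enumerate(cdfs):
        # Inverse-CDF sampling of this galaxy's z-PDF, n_real times.
        idx = np.minimum(np.searchsorted(cdf, rng.random(n_real)),
                         len(z_grid) - 1)
        z = z_grid[idx]
        s = np.floor((z - z_min) / dz).astype(int)
        s[(z < z_min) | (z >= z_max)] = -1
        slices[:, g] = s
    return slices
\end{verbatim}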
\subsection{Detection methods: VT and FOF}
Our algorithm applies the VT technique and FOF method independently to
each redshift slice of all the MC-realisations.
The VT technique divides a field of galaxies into Voronoi Cells, each
containing one object: the nucleus. All points that are closer to this
nucleus than any of the other nuclei are enclosed by the Voronoi
Cell. This technique was first applied to the modelling of large-scale
structure (e.g. Icke \& van de Weygaert 1987) but has more recently
been used in cluster detection (Ebeling \& Wiedenmann 1993; Kim et
al. 2002; Lopes et al. 2004). One of the principal advantages of the
VT method is that the technique is relatively unbiased as it does not
look for a particular source geometry (e.g. Ramella 2001). The
parameter of interest is the area of the VT cells, the reciprocal of
which translates to a density. Overdense regions in the plane are
found by fitting a function to the density distribution of all VT
cells in the field; cluster candidates are the groups of cells of a
significantly higher density than the mean background density.
We follow the method of Ebeling \& Wiedenmann (1993); to determine the
mean background density of the VT cells we fit the following
cumulative function to the density distribution:
\begin{equation}
P(\tilde f) = e^{-4\tilde f} \Bigl (\frac{32}{3\tilde f^3} +
\frac{8}{\tilde f^2}+ \frac{4}{\tilde f} + 1 \Bigr ).
\end{equation}
Here $\tilde f$ is the cell density (the inverse of the cell area) in
units of the mean cell density, which is the parameter to be derived
from the fitting procedure. Once the background density is known, we
isolate all cells with $\tilde f > 1.75$. Adjoining high-density cells
are grouped; the groups that have a $> 90\%$ chance of not being a
background fluctuation (see Ebeling \& Wiedenmann, Section D) are
cluster candidates.
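As an illustration, the per-cell densities $\tilde f$ can be computed with standard library routines; the sketch below (in Python, with names of our choosing) ignores the survey boundary, which a production code would treat by clipping the open cells to the field.
\begin{verbatim}
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def cell_densities(points):
    """Inverse Voronoi-cell areas in units of the mean density.

    points: (N, 2) galaxy positions in one redshift slice. Cells that
    touch the boundary are open (infinite) and are skipped here.
    """
    vor = Voronoi(points)
    f = np.full(len(points), np.nan)
    for i, reg_idx in enumerate(vor.point_region):
        region = vor.regions[reg_idx]
        if len(region) == 0 or -1 in region:
            continue                    # open cell at the boundary
        area = ConvexHull(vor.vertices[region]).volume  # in 2D: area
        f[i] = 1.0 / area
    return f / np.nanmean(f)            # normalised density

# Cells with normalised density > 1.75 are then grouped with their
# adjoining high-density neighbours to form cluster candidates.
\end{verbatim}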
\begin{figure}
\begin{center}
\includegraphics[height=7cm]{images/contourmap2.eps}
\vspace{-3mm}
\caption{A probability map of clusters found by the Voronoi
Tesselation method at redshift $z \sim 1.0$. Colours are normalised
to the highest probability in the field.}
\label{contours}
\vspace{-5mm}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=13cm]{images/simulation.eps}
\vspace{-3mm}
\caption{Example of simulated clusters superimposed on a galaxy
background (left) as recovered by the Voronoi tessellation technique
(middle) and the Friends-Of-Friends method (right). In this simulation
the clusters are spherical with a total luminosity of
10,20,30,40,50,100,150,200,300 $L^{*}$ (top left to bottom right
respectively) at $z=0.2$.}
\label{simulations}
\vspace{-5mm}
\end{center}
\end{figure*}
Friends-Of-Friends (FOF) algorithms are commonly used in spectroscopic
galaxy surveys (e.g. Tucker et al., 2002; Ramella et al., 2002). A
variant of this algorithm utilizing photometric redshifts was proposed
by Botzler et al. (2004). They create redshift slices for their data
cube and place the objects into the redshift slices according to their
photometric redshift and error. The algorithm then links galaxy pairs
within each slice that are closer to each other than some given
linking distance, $D_{\rm Link}$, which is the projected separation of
the galaxy pair. We use the empirically derived value of 0.175 Mpc for
$D_{\rm Link}$ (proper coordinates) and consider a minimum of five
galaxies in a group to be a cluster candidate.
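A minimal sketch of this step is the following (Python; the names are ours, and the conversion from angular to proper coordinates at the slice redshift is assumed to have been done already).
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def fof_groups(xy, d_link=0.175, min_members=5):
    """Friends-Of-Friends in one redshift slice.

    xy: (N, 2) galaxy positions in proper Mpc at the slice redshift.
    Links every pair closer than d_link and keeps groups with at
    least min_members galaxies as cluster candidates.
    """
    parent = np.arange(len(xy))
    def find(i):                        # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in cKDTree(xy).query_pairs(r=d_link):
        parent[find(i)] = find(j)
    roots = np.array([find(i) for i in range(len(xy))])
    groups = [np.flatnonzero(roots == r) for r in np.unique(roots)]
    return [g for g in groups if len(g) >= min_members]
\end{verbatim}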
An important difference between our algorithm and previous ones in the
literature is the way we place the galaxies in the redshift slices. As
we sample the full z-PDF to create MC-realisations of the
three-dimensional galaxy distribution, we do not need to assign errors
to individual galaxy redshifts. An object with a large redshift error
will be distributed throughout many different slices in the 500
MC-realisations, and therefore not yield a significant contribution to
the cluster candidates it is potentially found in. Thus there is no
need to remove objects with large errors from the catalogue and no
additional bias is introduced against faint objects with noisier
photometry. A second modification to existing algorithms is the way we
link up cluster candidates throughout the redshift slices. Instead of
comparing individual galaxies in the clusters and linking up the
clusters with corresponding members (see e.g. Botzler et al. 2004), we
use probability maps of all redshift slices to locate likely cluster
regions. This is discussed in the following section.
\subsection{Probability maps and cross-correlation}
Once the two cluster-detection methods have determined the cluster
candidates in the redshift slices for all MC-realisations, we combine
the MC-realisations to create a probability for both methods for each
redshift slice. Figure ~\ref{contours} shows an example of a
probability map: the VT cluster candidates in this slice at $z = 1.0$
are contoured and coloured, with black through to red indicating low
to high probability. This map is created by overplotting the extent of
all cluster detections; the regions of the field that are found to be
in a cluster in many MC-realisations are high-probability cluster
locations. Since the error on the photometric redshifts of the
galaxies is usually larger than the width of the redshift slices, each
cluster candidate is typically found in several adjoining slices. We
join the cluster candidates that occur in the same location in several
slices by locating the peaks in the probability maps and inspecting
the area within their contours in the adjoining redshift slices for
cluster candidates. All the cluster candidates found in the same
region in adjoining redshift slices are linked up into one final
cluster; the final cluster redshift is determined by taking the mean
of the redshift slices, weighted by the number of corresponding
MC-realisations. We assign a reliability factor $F$ to each cluster by
counting the total number of MC-realisations in which it occurs and
dividing it by the total of 500 realisations. We then cross-correlate
the cluster candidates output by the VT and FOF methods and take all
cluster candidates that are detected in both with $F > 0.2$. This is
the final cluster sample.
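Schematically, and with illustrative data structures, the last two steps look as follows; in the actual pipeline the matching of candidates between VT and FOF is done by comparing their sky regions, which we abstract here into a shared identifier.
\begin{verbatim}
import numpy as np

def probability_map(footprints):
    """Fraction of MC-realisations in which each pixel is in a cluster.

    footprints: (n_real, ny, nx) boolean masks for one redshift slice
    and one detection method (VT or FOF).
    """
    return footprints.mean(axis=0)

def final_catalogue(counts_vt, counts_fof, n_real=500, f_min=0.2):
    """Keep candidates found by both methods with reliability F > f_min.

    counts_*: dicts mapping a candidate identifier (a sky region linked
    across adjoining slices) to its number of occurrences.
    """
    return [key for key, n_vt in counts_vt.items()
            if key in counts_fof
            and n_vt / n_real > f_min
            and counts_fof[key] / n_real > f_min]
\end{verbatim}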
\section{Simulations}
\begin{figure*}
\includegraphics[height=52mm]{images/clustfinal6_kring.eps}
\includegraphics[height=52mm]{images/clustv316_kzb.eps}
\hspace{-6mm}
\includegraphics[height=56mm]{images/clust25_Rsinglecolmag.eps}
\vspace{-3mm}
\caption{\small Cluster at $z=0.8$ detected with our cluster-finding
algorithm from van Breukelen et al. (2006). {\em Left:} $K$-band image
(UKIDSS-UDS); the large circle shows a 1 Mpc region around the
cluster; the (blue) squares and (red) circles are cluster members as
given by VT and FOF respectively. The (green) arrows point out
galaxies with $z_{\rm spec} = 0.87$. {\em Middle:} $Bz'K$ image of
the central 1 Mpc region. {\em Right:} colour-magnitude plot. The
crosses are all galaxies within the central 1 Mpc region, otherwise
the symbols are the same as in the left-hand panel. The grey band is
the modelled red sequence.}
\label{colour_plots}
\vspace{-3mm}
\end{figure*}
To test the behaviour of the cluster-detection algorithm we run a set
of simulations on mock-catalogues. These comprise ten random versions
of clusters ranging in total luminosity from 10 $\rm L^*$ to 300 $\rm
L^*$ and redshift $0.1 < z < 2.0$, superimposed on a galaxy
distribution randomly placed in the field within the same redshift
range. Realistic galaxy luminosities and number densities are
determined by the $K$-band luminosity function of Cole et al. (2001)
for the field-distribution and Lin et al. (2004) for the clusters,
with the simplifying assumption of passive evolution with formation
redshift $z_{form}=10$. A detection limit of $K_{lim, AB} = 22.5$ is
imposed to match the 5-$\sigma$ limit of the UDS EDR data (see section
4). The galaxies are spatially distributed within a cluster according
to an NFW profile (Navarro, Frenk \& White, 1997) with a cut-off
radius of 1 Mpc. Figure~\ref{simulations} is an illustration of the
detection of simulated clusters by the VT and FOF methods. It shows an
example of a mock cluster catalogue with galaxy background (left) and
the clusters as recovered by VT (middle) and FOF (right). The smallest
cluster has a total luminosity of $L =10L^*$ and is detected by both
methods.
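For concreteness, galaxy positions can be drawn from a truncated NFW profile by numerically inverting the enclosed-mass profile; in the sketch below the scale radius value is an arbitrary assumption, since the text does not quote one.
\begin{verbatim}
import numpy as np

def sample_nfw_positions(n_gal, r_s=0.2, r_cut=1.0, seed=2):
    """Draw 3D positions from an NFW profile truncated at r_cut (Mpc).

    r_s is an assumed scale radius (not quoted in the text). The
    enclosed-mass profile m(r) = ln(1 + r/r_s) - (r/r_s)/(1 + r/r_s)
    is inverted numerically to sample the radii.
    """
    rng = np.random.default_rng(seed)
    r_grid = np.linspace(1e-3, r_cut, 2048)
    x = r_grid / r_s
    m = np.log(1.0 + x) - x / (1.0 + x)        # cumulative mass profile
    r = np.interp(rng.random(n_gal), m / m[-1], r_grid)  # inverse CDF
    phi = rng.uniform(0.0, 2.0 * np.pi, n_gal) # isotropic angles
    cos_t = rng.uniform(-1.0, 1.0, n_gal)
    sin_t = np.sqrt(1.0 - cos_t**2)
    return np.column_stack([r * sin_t * np.cos(phi),
                            r * sin_t * np.sin(phi),
                            r * cos_t])
\end{verbatim}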
When comparing the recovered cluster galaxies to the input mock
cluster members, we see that VT tends to include all galaxies in a
large area around the cluster core and the number of recovered cluster
members, $N_{\rm gal, VT}$, is sensitive to the local field
density. By contrast, the galaxy members recovered by FOF are more
centrally concentrated and $N_{\rm gal, FOF}$ is consistent for mock
clusters of the same total luminosity throughout the random
realisations of the catalogues. Thus we can relate $N_{\rm gal, FOF}$
to the total luminosity of a detected cluster. We calculate $N_{\rm
gal, FOF}$ by taking all galaxies that occur in the cluster in $>15\%$
of the MC-realisations in which the cluster itself is detected. The
galaxies that appear in a smaller fraction of MC-realisations are very
likely to be interlopers from different redshifts. Once we know
$N_{\rm gal, FOF}$ for all simulated luminosities and redshifts, we
derive functions of $N_{\rm gal}$ vs. $z$ for constant total cluster
luminosity.
\section{Application to UKIDSS-UDS data}
To apply our cluster-detection algorithm to real observations, we used
three sources of data in our published paper (van Breukelen et
al. 2006): near-infrared $J$ and $K$ data from the UKIDSS Ultra Deep
Survey Early Data Release (UDS EDR, Foucaud et al. 2006); 3.6$\mu$m
and 4.5$\mu$m bands from the Spitzer Wide-area InfraRed Extragalactic
survey (SWIRE, Lonsdale et al. 2005); and optical $BVRi'z'$ Subaru
data over the Subaru XMM-Newton Deep Field (SXDF, Furusawa et al. in
prep.). In this pilot study we restricted ourselves to a rectangular
area of 0.5 square degrees, exhibiting a survey-depth of $K_{\rm
AB,lim} = J_{\rm AB,lim} = 22.5$ (UDS EDR 5$\sigma$ magnitude
limits). We included objects with a detection in $i'$, $J$ and $K$ in
the galaxy catalogue and, to exclude stars, we imposed a criterion of
SExtractor stellarity index $<$ 0.8 in $i'$ and $K$ (e.g. Bertin \&
Arnouts 1996). Subsequently we ran our photometric redshift code on
this sample, resulting in a redshift catalogue of 19300 objects in the
range $0.1 \leq z \leq 2.0$.
\subsection{Results: the UKIDSS-UDS cluster catalogue}
Application of our cluster-detection algorithm to the redshift
catalogue yielded 14 clusters at $0.61 \leq z \leq 1.39$ (van Breukelen
et al. 2006). Figure ~\ref{colour_plots} shows one of the detected
clusters: on the left a K-band image with the cluster members marked
(FOF: circles, VT: squares); in the middle a $Bz'K$ image of the
central 1 Mpc region of the cluster; on the right a colour-magnitude
diagram with the modelled red sequence overplotted.
To derive the clusters' luminosity, we compared the results to the
output of the simulations. We determined $N_{\rm gal, FOF}$ with $K <
22.5$ (corresponding to the completeness limit) in the same way as for
our simulated clusters; this allowed us to derive an approximate total
luminosity for each cluster by interpolating between the lines of
constant total luminosity in the $N_{\rm gal}-z$ plane found in our
simulations. We found our clusters span the range of $10 {\rm L^*}
\lesssim L_{\rm tot} \lesssim 50 \rm L^*$; assuming $\frac{M/\rm
M_{\odot}}{L/\rm L_{\odot}} = 75h$ (Rines et al. 2001) this yields
$0.5 \times 10^{14}~{\rm M_{\odot}} \lesssim M_{\rm cluster} \lesssim
3 \times 10^{14}~\rm M_{\odot}$.
Clearly spectroscopic observations of these clusters are essential to
confirm their reality, particularly for the high-redshift clusters
which are cosmologically more valuable. In the near future highly
multiplexed multi-object spectrometers on 8-metre class telescopes
will provide the ideal opportunity for spectroscopic follow-up of
high-redshift clusters. X-ray data and radio observations of the
Sunyaev-Zel'dovich effect will also be able to convincingly confirm
the reality of the clusters.
\begin{acknowledgements}
We are grateful to our other collaborators on this project and
acknowledge funding from PPARC.
\end{acknowledgements}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 8,552 |
\section{Introduction}
Global symmetries play a central role in Quantum Field Theory (QFT). They are used as an organizing principle to systematically construct the possible operators, their breaking pattern allows to characterize the phases of a system and their possible anomalies provide exact constraints on the dynamics. However, in recent times it has been made clear that the notion of symmetry has to be generalized from the traditional textbook definition typically in terms of Noether currents. The central idea, pioneered in \cite{Gaiotto:2014kfa}, is that symmetries are associated to symmetry operators $T_g(M^{d-(p+1)})$ depending on a transformation $g$ and defined on codimension $p+1$ manifolds $M^{d-(p+1)}$.\footnote{We will mostly assume the QFT defined on a dimension $d$ euclidean signature space, and thus talk about operator insertions.} The crucial point is that the dependence on $M^{d-(p+1)}$ is topological: the properties of $T_g(M^{d-(p+1)})$ --for instance their correlation functions-- do not change under small changes of $M^{d-(p+1)}$ as long as these do not cross any charged operator.
The textbook examples of global symmetries naturally fit in this framework. Indeed, for a continuous symmetry there is a Noether current, whose integral on $M^{d-1}$ manifolds gives a charge $Q$. Clearly, slight changes of $M^{d-1}$ do not change $Q$ as long as these do not cross charged operators. Moreover, the exponential of $Q$ gives an element of the symmetry group, and thus corresponds to the $T_g(M^{d-1})$.\footnote{In this case, as $e^{iQ}$ is a group element, it is often denoted by $U_g(M^{d-1})$.} The point of view above naturally generalizes this in two directions. On one hand it allows for more generic symmetries supported on codimension $p+1$ manifolds whose charged objects are supported on $p$-dimensional submanifolds. These are often referred to as \textit{higher form symmetries} or \textit{p-form symmetries}. On the other hand, it allows one to consider more generic categorical symmetries not arising from a group. This is reflected in a more generic fusion rule for symmetry operators, which in particular do not need to have an inverse (as opposed to what should happen for a fusion rule of elements in a group). These cases are often dubbed \textit{non-invertible symmetries}.
The existence of non-invertible symmetries is well-known in lower-dimensional QFT's. In particular, in 2d there is a whole body of work studying these (see \textit{e.g.} \cite{Verlinde:1988sn,Petkova:2000ip,Fuchs:2002cm,Bachas:2004sy,Fuchs:2007tx,Bachas:2009mc} for early references). Their status in higher dimensions is however a bit less clear.\footnote{Indeed, \cite{Casini:2020rgj,Casini:2021zgr} claim that these do not exist in $d>3$.} The case of $O(2)$ has been argued to give rise to non-invertible symmetries in \cite{Heidenreich:2021xpr}, and more exotic examples have been constructed in \cite{Choi:2021kmx,Kaidi:2021xfk}. More recently, it has been argued in \cite{Roumpedakis:2022aik} that indeed non-invertible symmetries are common in higher dimensions.
Very recently, \cite{Heidenreich:2021xpr} (see also \cite{Rudelius:2020orz}) provided a criterion to compute the symmetry operators in a gauge theory, including both invertible (\textit{i.e.} usual symmetries associated to groups) as well as non-invertible symmetries. From the analysis in \cite{Heidenreich:2021xpr}, it follows that if a gauge theory (whose gauge group we assume to be compact) has local operators in all possible representations, no non-trivial topological operator candidate for an electric 1-form symmetry can exist.\footnote{By electric 1-form symmetry we mean the one that is always present in a gauge theory, associated to the field strength. In the free Maxwell case, the Noether current is simply $F$.} Therefore, the absence of global electric 1-form symmetries is equivalent to the completeness of the spectrum of the QFT.\footnote{Completeness of the spectrum is defined as the existence of operators in every possible representation of the gauge group.} In turn, this has interesting implications for the Swampland Program (in short, the study of the restrictions imposed on the low-energy physics for it to be consistently coupled to Quantum Gravity; see \textit{e.g.} \cite{Palti:2019pca,vanBeest:2021lhn} for introductions and further references), where the Absence of Global Symmetries and the Completeness of the Spectrum are two central conjectures which have indeed long been suspected to be deeply related.
In this paper we study in detail (certain) higher-form global symmetries of gauge theories which include, as an element of the gauge group, charge conjugation in generic $d$ dimensions (we stress that there may be particularities for given $d$'s which we leave for future study). More precisely, we will consider gauge theories based on the gauge groups constructed in \cite{Bourget:2018ond,Arias-Tamargo:2019jyh} dubbed $\widetilde{SU}(N)$. These are principal extensions of $SU(N)$ by the $\mathbb{Z}_2$ outer automorphism corresponding to flipping the Dynkin diagram, which, in particular, exchanges the fundamental representation with the antifundamental, and thus corresponds to charge conjugation (the construction can be extended to $U(N)$, giving rise to $\widetilde{U}(N)$). Concentrating on pure gauge theories, we will study the 1-form electric symmetry, which turns out to be non-invertible (in a sense, generalizing the $O(2)$ example). Moreover, as the gauge groups are disconnected, there is a $(d-2)$-form symmetry associated to the non-trivial $\pi_0(G)$ for $G=\widetilde{SU}(N),\,\widetilde{U}(N)$.\footnote{There may be other higher form symmetries other than those we consider. In particular, there may be a $(d-3)$-form magnetic symmetry associated to the dual of the gauge field. However, its study requires the knowledge of the GNO dual group, which is not known at present for the theories at hand.} We also introduce String Theory constructions of these theories. Amusingly, these automatically all come with configurations of extended objects which break the $(d-2)$-form symmetry. From this perspective, they may be regarded as Swampland examples in the sense that when the gauge theory with gauge group $G$ is embedded into a consistent theory of Quantum Gravity, the otherwise present $(d-2)$-form symmetry is broken by the presence of charged ``matter" (in this case extended objects).
The remainder of this paper is structured as follows. In section \ref{sec:review} we review basic facts in the topic of higher form global symmetries, mainly from \cite{Gaiotto:2014kfa} and recent progress in \cite{Heidenreich:2021xpr}. In section \ref{section:electric 1-form} we give a lightning review of the groups $\widetilde{SU}(N),\,\widetilde{U}(N)$ and study the electric 1-form symmetries of pure gauge theories based on them. In section \ref{section:d-2 form} we study the $(d-2)$-form symmetry coming from the fact that the groups are disconnected. We discuss the would-be charged objects, which are the so-called \textit{Alice strings} \cite{Schwarz:1982ec} (or \textit{twist vortices} in the nomenclature of \cite{Heidenreich:2021xpr}). As is well-known, in the presence of twist vortices, only a subgroup of the gauge group is globally well-defined \cite{Alford:1989ch}. We also introduce a stringy construction for gauge theories based on $\widetilde{U}(N)$, which, as advertised above, automatically comes with Alice strings which break the $(d-2)$-form global symmetry.
\vspace{.5cm}
\textbf{Note added}: as this note was being finished, \cite{Bhardwaj:2022yxj} appeared overlapping with our results on the electric 1-form (generically non-invertible) symmetries of the $\widetilde{SU}(N)$ theories.
\section{Higher form symmetries and topological operators}\label{sec:review}
In the quest to generalize the notion of symmetry to higher-form global symmetries \cite{Gaiotto:2014kfa}, one quickly realizes that the usual textbook formulation, based on a Lagrangian and an explicit transformation of the fields, is not appropriate. Instead, the focus should be on the symmetry generators $U_g(M^{d-1})$ depending on a symmetry transformation $g$ and associated to a manifold $M^{d-1}$. In the continuous case, these are given by the exponentiation of the charge computed as the integral of the Noether current,
\begin{align}
Q(M^{d-1})=\int_{M^{d-1}}\star J\,.
\end{align}
The key is that the dependence of $U_g$ on the manifold $M^{d-1}$ in which it is supported is topological: $U_g$ doesn't change under deformations of $M^{d-1}$ unless the deformation crosses an operator charged under the symmetry.
This point of view can be easily generalized to higher-form symmetries. The symmetry operators now live on a codimension $p+1$ manifold (on which they depend only topologically), and the charged objects are extended on $p$ spatial dimensions.
Usually, the symmetry transformations form a group,
\begin{align}\label{eq:invertible_hfs}
U_{g_1}(M^{d-p-1}) \cdot U_{g_2}(M^{d-p-1})=U_{g_1 g_2}(M^{d-p-1})\,,
\end{align}
and the transformation has an inverse $U_g^{-1}(M^{d-p-1}) =U_{g^{-1}}(M^{d-p-1})$. However, this requirement can be relaxed, by demanding instead that the topological operators fuse according to (we now denote the operators by $T$ to stress that they may not come from a group)
\begin{align}
T_{a}(M^{d-p-1}) \cdot T_{b}(M^{d-p-1})=\sum_i N_{ab}^i\, T_{i}(M^{d-p-1})\,,
\end{align}
and need not have an inverse; this structure is that of a fusion algebra. In this case we have what is called a \emph{categorical symmetry} or \emph{non-invertible symmetry}.
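A standard 2d illustration of such a fusion algebra, from the literature cited in the introduction, is the critical Ising model: its topological lines are the identity $1$, the invertible $\mathbb{Z}_2$ spin-flip line $\eta$, and the Kramers--Wannier duality line $\mathcal{N}$, fusing as
\begin{align}
\eta\cdot\eta=1\,,\qquad \eta\cdot\mathcal{N}=\mathcal{N}\cdot\eta=\mathcal{N}\,,\qquad \mathcal{N}\cdot\mathcal{N}=1+\eta\,.
\end{align}
Since $\mathcal{N}\cdot\mathcal{N}\neq 1$, the duality line admits no inverse and generates a non-invertible symmetry.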
The action of the topological operators on the charged objects $O(\mathcal{C}^p)$ can be understood by introducing the symmetry operator on a sphere $S^{d-p-1}$ that surrounds $\mathcal{C}^{p}$, and then shrinking that sphere to a point, finding
\begin{align}
T_a(S^{d-p-1}) O(\mathcal{C}^{p}) = B_O(a) O(\mathcal{C}^p)\,,
\end{align}
where $B_O(a)$ is called the linking coefficient. As an example, we can consider the electric 1-form symmetry of a gauge theory. The charged operators are the Wilson lines $W_\rho(\gamma^1)$, with $\rho$ a representation of the gauge group; and the symmetry operators, which we denote $T_a(M^{d-2})$, are the so called Gukov-Witten operators \cite{Gukov:2006jk,Gukov:2008sn}, which are labelled by a conjugacy class $a$ of the gauge group. The linking coefficient in this case is obtained from the Aharonov-Bohm interaction between the line and the codimension 2 operator \cite{Alford:1992yx,Heidenreich:2021xpr},
\begin{align}\label{eq:linking_GW_wilson_lines}
B_{W_\rho}(a) = \frac{\chi_\rho(a)}{\text{dim } \rho}\text{sz} (a)\,,
\end{align}
where $\chi_\rho(a)$ is the character of the representation $\rho$ evaluated in the conjugacy class of $a$, $\text{sz} (a)$ is the number of group elements inside said conjugacy class, and $\text{dim}\,\rho$ is the dimension of the representation.
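As a simple illustration of \eqref{eq:linking_GW_wilson_lines} (a standard check rather than a new result), take the gauge group to be $SU(N)$ and let $a_k$ denote the conjugacy class of the central element $e^{2\pi i k/N}\mathds{1}$. Then $\chi_{\rm\bf fund}(a_k)=N e^{2\pi i k/N}$, $\text{dim}\,{\rm\bf fund}=N$ and $\text{sz}(a_k)=1$, so that
\begin{align}
B_{W_{\rm\bf fund}}(a_k)=e^{\frac{2\pi i k}{N}}\,,
\end{align}
recovering the familiar action of the $\mathbb{Z}_N$ center on the fundamental Wilson line.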
In \cite{Heidenreich:2021xpr}, the question was addressed of whether or not a Gukov-Witten operator can be topological (i.e. if it generates a, possibly non-invertible, 1-form global symmetry) if it links with an endable Wilson line. The argument is as follows: consider a gauge theory with matter fields in a representation $R$. Then the Wilson lines corresponding to the representation $R$ and tensor products thereof can end and break into segments. Suppose that a GW operator 1) is topological and 2) links non-trivially with the Wilson line (i.e. the linking coefficient is different from its linking with the identity operator, $B_W(a)\neq B_1(a)$). Then, given the topological nature of the GW operator, we can consider either shrinking it on top of the Wilson line, which produces the linking coefficient $B_W(a)$; or breaking the Wilson line into segments and shrinking the GW on top of a point where there is no Wilson line, which produces a trivial linking $B_1(a)$. By comparison, it follows that if a GW operator links non-trivially with an endable Wilson line, it cannot be topological. Equivalently, a necessary condition for GW operators to be topological is to link trivially with endable Wilson lines.
In fact, in \cite{Heidenreich:2021xpr} it was also argued that for gauge theories this necessary condition is also sufficient: one can move on the Coulomb branch\footnote{One can imagine adding adjoint fields to Higgs the gauge group, as these will not render extra Wilson lines endable.} where the gauge group is $U(1)^r$ (and possibly some discrete factor). In that case, it is known that all GW operators that link trivially with a Wilson line are topological and can be seen to precisely coincide with those selected by the necessary condition above. Since when going back to the origin of the Coulomb branch one expects the survival of these operators, which already exhaust all the \textit{a priori} possible ones, one concludes that the criterion above is actually sufficient.
Let us consider pure gauge theories with a gauge group $G$ that is disconnected. The endable Wilson lines will correspond to the adjoint representation and its tensor products. In this case, the previous argument, together with \eqref{eq:linking_GW_wilson_lines}, leads to a very simple criterion for finding the 1-form symmetry. Instead of the center of the group (as is the case in the more usual examples of connected and simply connected groups like $SU(N)$), the topological Gukov-Witten operators will correspond to the conjugacy classes of the elements in the centralizer of the identity component $G^0$ of $G$,
\begin{align}\label{eq:1-form_symmetry_general}
\{\text{topological GW}\}\equiv \{g^{-1} h g\,,\ g\in G\,,\, h\in C_{G}\left(G^{0}\right)\}\,.
\end{align}
Once the topological operators have been identified, we need to determine whether they generate a group or a non-invertible symmetry. A possible way to do this is by using the so-called \emph{quantum dimension} of the operator, which is defined \cite{Heidenreich:2021xpr} as the linking coefficient with the identity operator, $\text{dim}(T_a)=B_1(a)$. As an example, if we are concerned with the one-form symmetry, the topological operators are Gukov-Witten operators and their quantum dimension is obtained from \eqref{eq:linking_GW_wilson_lines},
\begin{align}
\text{dim}(T_a)=B_1(a)=\text{sz}(a)\,.
\end{align}
The quantum dimensions have the property that they get multiplied under the fusion of topological operators, and summed under their sum,
\begin{align} \label{eq:quantum_dimension}
&\dim(T_a\cdot T_b)=\dim(T_a) \dim(T_b)\,,\\
&\dim(T_a + T_b)=\dim(T_a) + \dim(T_b)\,,
\end{align}
and since the topological operator corresponding to the identity always has quantum dimension equal to 1, this allows us to infer that a symmetry has to be non-invertible from the presence of topological operators with quantum dimension strictly greater than 1.
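To make the obstruction explicit, suppose that $\dim(T_a)=2$ and that some topological operator $T_b$ satisfied $T_a\cdot T_b=1$. Multiplicativity of the quantum dimension would then require
\begin{align}
1=\dim(1)=\dim(T_a\cdot T_b)=2\,\dim(T_b)\,,
\end{align}
which has no solution, since the quantum dimensions at hand count conjugacy class sizes and are therefore positive integers.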
In the same way that we can study the electric one-form symmetry from the topological GW operators and the Wilson lines charged under them, we can also look at the topological Wilson lines to find out about the dual $(d-2)$-form symmetry under which GW operators are charged. It turns out that this problem has a more straightforward solution \cite{Heidenreich:2021xpr}. Since the gauge holonomy along a contractible loop always belongs to the identity component of the gauge group, two Wilson lines along homotopic paths differ at most by an element of $G^0$. Therefore, only Wilson lines corresponding to representations that map $G^0$ to the identity are topological. In other words,
\begin{align}\label{eq:dual_symmetry_general}
\{\text{topological WL}\} \equiv \{ \text{representations of } \pi_0(G)\}\,,
\end{align}
where $\pi_0(G)$ is the group of connected components of $G$. Note that this discussion is actually unchanged in the presence of matter fields in any representation of $G$.
While the main focus of this work lies in pure gauge theories, one can consider more general theories adding matter fields.\footnote{Depending on the matter content, there may be gauge anomalies, as recently studied in \cite{Henning:2021ctv}.} If the matter is in a representation smaller than the adjoint, the corresponding Wilson lines will become endable, and the GW operators with which they link can no longer be topological. As a consequence, the 1-form symmetry will be reduced. Similarly, for the dual $(d-2)$-form symmetry, we can make the GW operators endable, albeit in this case by adding suitable codimension-3 objects. These were called \emph{twist vortices} in \cite{Heidenreich:2021xpr} and are defined by a monodromy in $G/G^0$ when going around them. For the disconnected gauge groups under study in this article, they are also known as \emph{Alice strings} in 4 dimensions \cite{Schwarz:1982ec}.
\subsection{Examples: SO, Sp, O}
In this section, we review the higher form symmetries of the orthogonal group $O(N)$ that result from the formalism discussed above. We also list the results for $SO(N)$ and $Sp(N)$, as we will need them in later sections.\\
\textbf{Special orthogonal and symplectic groups:} These groups are connected, so the centralizer of the identity component reduces to the center, and the 1-form symmetry of the corresponding pure gauge theory is simply given by the center (see Table \ref{tab:GW_operators_SpSO_summary}). In all these cases, the 1-form symmetry is invertible.
\begin{table}[h!]
\centering
\begin{tabular}{c|c|c|c}
Gauge group & Topological GW operators & Quantum dimension & 1-form symmetry \\\hline
\multirow{2}{*}{$Sp(N)$} & $T^{Sp}_0=$ Id & 1 & \multirow{2}{*}{$\mathbb{Z}_2$} \\
& $T^{Sp}_\pi$ & 1 & \\\hline
\multirow{2}{*}{$SO(2)$ } & $T^{SO(2)}_0=$ Id & 1 & \multirow{2}{*}{$SO(2)$} \\
& $T^{SO(2)}_\theta\,,$ $\theta\in (0,2\pi)$ & 1 & \\\hline
\multirow{2}{*}{$SO(2k),\,k\ge 2$ } & $T^{SO}_0=$ Id & 1 & \multirow{2}{*}{$\mathbb{Z}_2$} \\
& $T^{SO}_\pi$ & 1 & \\\hline
$SO(2k-1)\,,$ $k\ge2$ & $T^{SO}_0=$ Id & 1 & Trivial
\end{tabular}
\caption{Summary of topological Gukov-Witten operators for theories with $Sp(N)$ and $SO(N)$ gauge group.}
\label{tab:GW_operators_SpSO_summary}
\end{table}
The dual $(d-2)$-form symmetry, which is given by the group of connected components, is trivial in all these cases.
\textbf{Orthogonal groups:} This is the first instance of a disconnected gauge group that we encounter. Instead of the center, the 1-form symmetry is obtained from the centralizer of the identity component of the group. We need to distinguish three possible cases: $N=2$, $N$ even and greater than 2, or $N$ odd.
The case of $O(2)$ was studied in detail in \cite{Heidenreich:2021xpr}, and it is special because the identity component, $SO(2)$, is abelian. The full group $O(2)$ can be written as a semidirect product $ SO(2)\rtimes\mathbb{Z}_2$. By definition the generator of the $\mathbb{Z}_2$ does not commute with the $SO(2)$, therefore the centralizer is
\begin{align}
C_{O(2)}(SO(2))=SO(2)\,.
\end{align}
The topological GW operators are labelled by the conjugacy classes of elements in this centralizer. Here the global structure of the group becomes relevant, as we can also conjugate by the nontrivial element in the $\mathbb{Z}_2$. If we denote this element as $P$, the action on an element of the centralizer is
\begin{align}
P^{-1}\cdot \left( \begin{array}{cc}
\cos \theta & - \sin \theta \\
\sin \theta & \cos \theta
\end{array}\right) \cdot P = \left( \begin{array}{cc}
\cos \theta & \sin \theta \\
-\sin \theta & \cos \theta
\end{array}\right) \,,
\end{align}
i.e. it maps $\theta\mapsto -\theta$. This means that we don't have one GW operator for each $\theta\in [0,2\pi]$, but rather one for each $\theta\in[0,\pi]$ and the quantum dimension of the operators labelled by $\theta\in(0,\pi)$ is equal to two. Therefore, the 1-form symmetry in the $O(2)$ case is non-invertible. The fusion algebra of the topological operators was reported in \cite{Heidenreich:2021xpr},
\begin{align}\label{eq:fusion_algebra_O2}
& T^{O(2)}_{\theta}\cdot T^{O(2)}_{\varphi} = T^{O(2)}_{\theta+\varphi} + T^{O(2)}_{\theta-\varphi}\,,\nonumber\\
& T^{O(2)}_{\theta} \cdot T^{O(2)}_\pi = T^{O(2)}_{\theta+\pi}\,,\nonumber\\
& T^{O(2)}_{\pi} \cdot T^{O(2)}_{\pi} = 1 \,,\\
& T^{O(2)}_\theta \cdot T^{O(2)}_{\theta} = 1 + W^{O(2)}_{\text{sign}} + T^{O(2)}_{2\theta}\,, \nonumber\\
& T^{O(2)}_{\theta} \cdot T^{O(2)}_{\pi-\theta} = T^{O(2)}_\pi + W^{O(2)}_{\text{sign}} T^{O(2)}_{\pi} + T^{O(2)}_{2\theta-\pi}\,,\nonumber
\end{align}
where $\theta\neq\varphi$ and $W^{O(2)}_{\text{sign}}$ is the Wilson line in the sign representation of $O(2)$. The appearance of the Wilson line in the fusion of two GW operators is the hallmark of a higher-group global symmetry structure, which can also be seen from the fact that Wilson lines are charged under the (zero-form) charge conjugation symmetry of $SO(2)$. In more detail, the fourth equation in \eqref{eq:fusion_algebra_O2} can be understood as follows.\footnote{We thank Miguel Montero for explaining this argument to us.} First, consider the fusion of two GW operators corresponding to different angles $\theta$ and $\varphi$ and take the limit $\varphi\to\theta$. We obtain
\begin{align}\label{eq:O2_weird_fusion_1}
T_\theta^{O(2)}\cdot T_\theta^{O(2)} = T_{2\theta}^{O(2)} + \text{`` }T_0^{O(2)}\text{ ''}\,.
\end{align}
Naively, one would say that $\text{`` }T_0^{O(2)}\text{ ''}$ is equal to two copies of the identity. However, to properly investigate this one should consider the fusion inside correlation functions. When all other operators in the correlator belong to the connected component (that is, they are operators just like those in the $SO(2)$ theory), indeed $T_0^{O(2)}$ looks like twice the identity. However, if one of the inserted operators belongs to the disconnected component there may be subtleties. Indeed, suppose we include in our correlator the GW operator corresponding to the $\mathbb{Z}_2\subseteq O(2)$, which we denote $T^{O(2)}_{\text{disc}}$. This corresponds to the insertion of an Alice string defined by a gauge connection which picks a sign upon going around the string. Consider now the correlator $\langle T_{\rm disc}^{O(2)}T_{\theta}^{O(2)}\cdots\rangle$. Since $T_{\theta}^{O(2)}$ shifts the gauge connection by a constant, which is clearly incompatible with the action of $T_{\rm disc}^{O(2)}$, it follows that $T_{\rm disc}^{O(2)}T_{\theta}^{O(2)}=0$ for any $\theta$. Thus, inserting \eqref{eq:O2_weird_fusion_1} in the correlator with $T_{\rm disc}^{O(2)}$ leads to the requirement $\text{`` }T_0^{O(2)}\text{ ''}T_{\rm disc}^{O(2)}=0$, which shows that ``$T_0^{O(2)}$'' cannot simply be two copies of the identity. In fact, the only operator we can construct that satisfies these conditions is
\begin{align}\label{eq:O2_weird_fusion_2}
\text{`` }T_0^{O(2)}\text{ ''} = 1+ W^{O(2)}_{\rm sign}\,,
\end{align}
leading to the fusion rule in \eqref{eq:fusion_algebra_O2} (a similar argument would hold for the fifth equation).
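As a consistency check of the fourth equation in \eqref{eq:fusion_algebra_O2}, the quantum dimensions match on both sides for generic $\theta$ (with $2\theta\neq\pi$, and with $T^{O(2)}_{2\theta}$ understood as $T^{O(2)}_{2\pi-2\theta}$ when $2\theta>\pi$),
\begin{align}
\dim\big(T^{O(2)}_{\theta}\big)^2=2\times 2=1+1+2=\dim(1)+\dim\big(W^{O(2)}_{\rm sign}\big)+\dim\big(T^{O(2)}_{2\theta}\big)\,,
\end{align}
where the quantum dimension of the Wilson line $W^{O(2)}_{\rm sign}$ equals the dimension of the sign representation, namely 1.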
The cases of $O(N)$ where $N\ge 3$ are simpler, since the centralizer is always finite. If $N$ is odd, then $SO(N)$ has trivial center. However, in this case $O(N)$ is the direct product $ SO(N)\times\mathbb{Z}_2$ and the nontrivial element of the $\mathbb{Z}_2$ (which is $-\mathds{1}$) will appear in the centralizer. Naturally, both $\pm\mathds{1}$ are mapped to themselves by conjugation in $O(N)$, thus, we have a $\mathbb{Z}_2$ invertible 1-form symmetry. On the other hand, if $N$ is even, $SO(N)$ has a center isomorphic to $\mathbb{Z}_2$, where the non-trivial element is precisely $-\mathds{1}$. In this case the extension to the orthogonal group is a semidirect product $ SO(N)\rtimes \mathbb{Z}_2$, and no new elements will appear in the centralizer. We conclude that also in this case we have an invertible $\mathbb{Z}_2$ 1-form symmetry. We summarize these results in Table \ref{tab:GW_operators_O_summary}.
\begin{table}[h!]
\centering
\begin{tabular}{c|c|c|c}
Gauge group & Topological GW operators & Quantum dimension & 1-form symmetry \\\hline
\multirow{3}{*}{$O(2)$} & Id & 1 & \multirow{3}{*}{\eqref{eq:fusion_algebra_O2}} \\
& $T^{O(2)}_\theta\,,$ $\theta\in (0,\pi)$ & 2 & \\
& $T^{O(2)}_\pi$ & 1 & \\\hline
\multirow{2}{*}{$O(N),\,N\ge 3$ } & Id & 1 & \multirow{2}{*}{$\mathbb{Z}_2$} \\
& $T^{O(N)}_\pi$ & 1 &
\end{tabular}
\caption{Summary of topological Gukov-Witten operators for theories with $O(N)$ gauge group.}
\label{tab:GW_operators_O_summary}
\end{table}
The dual $(d-2)$-form symmetry is obtained from the representations of the group of connected components \eqref{eq:dual_symmetry_general}. For all the $O(N)$ groups, $\pi_0(O(N))=\mathbb{Z}_2$, which has two representations. These two representations of $\mathbb{Z}_2$ lift to the full $O(N)$ group, giving rise to the trivial and sign representations. Therefore, the topological Wilson lines are precisely $W^{O(N)}_\mathbf{1}(\gamma)$ and $W^{O(N)}_{\text{sign}}(\gamma)$. Note that in this case the fusion of the Wilson lines (which in general is obtained from the decomposition of the tensor product of the two initial representations) reduces to a group operation,
\begin{align}\label{eq:fusion_WL_O_groups}
W^{O(N)}_\mathbf{1}\cdot W^{O(N)}_\mathbf{1} = W^{O(N)}_\mathbf{1}\,,\qquad W^{O(N)}_\mathbf{1}\cdot W^{O(N)}_{\text{sign}} = W^{O(N)}_{\text{sign}}\,,\nonumber\\
W^{O(N)}_{\text{sign}}\cdot W^{O(N)}_\mathbf{1} = W^{O(N)}_{\text{sign}}\,,\qquad W^{O(N)}_{\text{sign}}\cdot W^{O(N)}_{\text{sign}} = W^{O(N)}_\mathbf{1}\,,
\end{align}
and so the $(d-2)$-form symmetry is an invertible $\mathbb{Z}_2$.
\section{Electric 1-form symmetry}\label{section:electric 1-form}
In this section, we look at the electric 1-form symmetry of disconnected groups built as $\mathbb{Z}_2$ extensions of $SU(N)$ or $U(N)$. We derive the topological Gukov-Witten operators from the generic arguments presented in section \ref{sec:review}, namely from the computation of the centralizer of the identity component of these groups. An alternative derivation of the topological GW operators, with identical results, is presented in appendix \ref{sec:appendix_Wendt}.
We begin by recalling some basic definitions and properties of $\widetilde{SU}(N)$ groups. These are the principal extensions of $SU(N)$ groups, i.e. semidirect products $SU(N) \rtimes_{\Theta} \mathbb{Z}_2$ where $\Theta : \mathbb{Z}_2 \rightarrow \mathrm{Aut}(SU(N))$ is a lift to the group of the automorphism of the Dynkin diagram of $SU(N)$. If $N$ is odd, there is only one such possible lift up to isomorphism,
\begin{align}\label{eq:semidirect_theta1}
\Theta^I(1) (g) = g\,,\quad \Theta^I(-1) (g) = (g^{-1})^T = \overline{g}\,,
\end{align}
where $g \in SU(N)$ and the bar denotes complex conjugation. If $N$ is even, there are two distinct choices of $\Theta$ that give rise to two different groups \cite{Arias-Tamargo:2019jyh}. One is given by \eqref{eq:semidirect_theta1} and the other by
\begin{align}\label{eq:semidirect_theta2}
\Theta^{II}(1) (g) = g\,,\quad \Theta^{II}(-1) (g) = -J_N (g^{-1})^T J_N = -J_N \overline{g} J_N\,,
\end{align}
where
\begin{align}
J_{2k} := \left(\begin{matrix} 0 & -\mathds{1}_{k \times k} \\
\mathds{1}_{k \times k} & 0 \\
\end{matrix} \right) \, \ .
\end{align}
We denote these two different groups as $\widetilde{SU}(N)_I$ and $\widetilde{SU}(N)_{II}$ respectively, and their elements are pairs $(g,\eta)$ with $g\in SU(N)$ and $\eta\in\mathbb{Z}_2$. According to the definition of semidirect product, multiplication of elements is given by
\begin{align}
(g_1,\eta_1) (g_2,\eta_2) = (g_1 \Theta(\eta_1)(g_2), \eta_1 \eta_2)\,.
\end{align}
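For instance, the multiplication law immediately gives, for any element of the disconnected component,
\begin{align}
(g,-1)\,(g,-1)=\big(g\,\Theta(-1)(g)\,,\,1\big)\,,
\end{align}
so that in particular $(\mathds{1},-1)^2=(\mathds{1},1)$: the disconnected component squares back into the identity component, as befits a $\mathbb{Z}_2$ extension.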
It is possible to give a matrix construction of the groups explicitly exhibiting these properties \cite{Arias-Tamargo:2019jyh}.
Note that we can apply the same construction beginning with $g_i\in U(N)$, although in this case we cannot call them principal extensions. We will denote these groups as $\widetilde{U}(N)_I$ and $\widetilde{U}(N)_{II}$. In many cases, it can be useful to write the elements of these groups directly in their fundamental representation. This is a $2N$ dimensional representation where an element $(g,\eta)$ is represented by
\begin{align}\label{eq:fundamental_rep_SUtilde}
\text{fund}((g,1))=\left(\begin{array}{cc}
g & 0 \\
0 & \Theta(-1) (g)
\end{array} \right)\,,\quad \text{fund}((g,-1)) =
\left(\begin{array}{cc}
0 & g \\
\Theta(-1)(g) & 0
\end{array} \right)\,.
\end{align}
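One can readily check that \eqref{eq:fundamental_rep_SUtilde} respects the semidirect product multiplication; for instance, for two elements of the disconnected component,
\begin{align}
\text{fund}((g,-1))\,\text{fund}((h,-1))=\left(\begin{array}{cc}
g\,\Theta(-1)(h) & 0 \\
0 & \Theta(-1)(g)\,h
\end{array} \right)=\text{fund}\big(\left(g\,\Theta(-1)(h),1\right)\big)\,,
\end{align}
where the last equality uses that $\Theta$ is a homomorphism, so that $\Theta(-1)^2=\Theta(1)=\text{id}$ and hence $\Theta(-1)\big(g\,\Theta(-1)(h)\big)=\Theta(-1)(g)\,h$.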
With these definitions, it is easy to compute both the center as well as the centralizer of the identity component of these groups. First, recall that the center of $U(N)$ or $SU(N)$ is $U(1)$ or $\mathbb{Z}_N$ respectively, with elements $e^{i\theta}\mathds{1}$ or $e^{i 2k\pi/N}\mathds{1}$. Second, note that elements of the disconnected component don't commute with generic elements in the connected component. Therefore, to find the center, we need to find the elements $(h,1)$ with $h\in Z(G)$ ($G=U(N)$ or $SU(N)$) that commute with $(g,-1)$ for all $g\in G$. From the definition of the semidirect product, we find
\begin{align}
& (h,1)(g,-1) = (h g , -1) \,,\\
& (g,-1)(h,1) = (g \Theta(-1)(h), -1)\,.
\end{align}
Since $h\in Z(G)$, this leads to the condition
\begin{align}\label{eq:condition_centers}
h=\Theta(-1)(h)=\overline{h}\,,
\end{align}
which doesn't depend on whether $\Theta$ corresponds to \eqref{eq:semidirect_theta1} or \eqref{eq:semidirect_theta2}. This condition is only satisfied for $h=\pm\mathds{1}$; however, note that for the case of $SU(2k-1)$ only $+\mathds{1}$ belongs to the group. All in all, the center of these groups is given by
\begin{align}\label{eq:centers_SU_tilde}
& Z(\widetilde{U}(N))=\mathbb{Z}_2\,,\\
& Z(\widetilde{SU}(2n))=\mathbb{Z}_2\,,\\
& Z(\widetilde{SU}(2n+1))=\lbrace \mathds{1} \rbrace\,.
\end{align}
The topological GW operators can be found from the centralizers of the identity components. The computation of these centralizers is very similar to that of the centers above. The only difference is that, since these elements don't need to commute with the disconnected component, we don't need to impose \eqref{eq:condition_centers}. Therefore,
\begin{align}\label{eq:centralizers_SU_tilde}
& C_{\widetilde{U}(N)}(U(N)) = \left\lbrace \left( e^{i \theta} \mathds{1} ,1\right)\,,\, \theta\in [0,2\pi] \right\rbrace \,,\\
& C_{\widetilde{SU}(2n)}(SU(2n)) = \left\lbrace \left( e^{i \frac{k\pi}{n}} \mathds{1} ,1\right)\,,\, k=0,1,\dots,2n-1 \right\rbrace \,,\\
& C_{\widetilde{SU}(2n+1)}(SU(2n+1)) = \left\lbrace \left( e^{i \frac{2k \pi}{2n+1}} \mathds{1} ,1\right)\,,\, k=0,1,\dots,2n \right\rbrace \,.
\end{align}
\begin{table}[h!]
\centering
\begin{tabular}{c|c|c|c}
Gauge group & Topological GW operators & Quantum dimension & 1-form symmetry \\\hline
\multirow{3}{*}{$\widetilde{U}(N)$} & $T^{\widetilde{U}}_0=$ Id & 1 & \multirow{3}{*}{\eqref{eq:fusion_rule_U_tilde}} \\
& $T^{\widetilde{U}}_\theta\,,$ $\theta\in (0,\pi)$ & 2 & \\
& $T^{\widetilde{U}}_\pi$ & 1 & \\\hline
\multirow{2}{*}{$\widetilde{SU}(2n+1)$ } & $T^{\widetilde{SU}}_0=$ Id & 1 & \multirow{2}{*}{\eqref{eq:fusion_rule_SU_tilde}} \\
& $T^{\widetilde{SU}}_k\,,$ $k=1,\dots,n$ & 2 & \\\hline
\multirow{3}{*}{$\widetilde{SU}(2n)$} & $T^{\widetilde{SU}}_0=$ Id & 1 & \multirow{3}{*}{\eqref{eq:fusion_rule_SU_tilde}} \\
& $T^{\widetilde{SU}}_k\,,$ $k=1,\dots, n-1$ & 2 & \\
& $T^{\widetilde{SU}}_n = T^{\widetilde{SU}}_\pi$ & 1 &
\end{tabular}
\caption{Summary of topological Gukov-Witten operators for theories with $\widetilde{U}(N)$ and $\widetilde{SU}(N)$ gauge group.}
\label{tab:GW_operators_SUtilde_summary}
\end{table}
Finally, in order to find the topological GW operators, we need to compute the conjugacy classes of the elements above in $\widetilde{U}(N)$ and $\widetilde{SU}(N)$. Working directly in the fundamental representation,
\begin{align}
\left(\begin{array}{cc}
0 & \mathds{1} \\
\mathds{1} & 0
\end{array} \right)
\left(\begin{array}{cc}
g & 0 \\
0 & \Theta(-1)(g)
\end{array} \right)
\left(\begin{array}{cc}
0 & \mathds{1} \\
\mathds{1} & 0
\end{array} \right)
=
\left(\begin{array}{cc}
\Theta(-1)(g) & 0 \\
0 & g
\end{array} \right)\,.
\end{align}
From the definition of $\Theta(-1)$ we see that conjugating by an element of the disconnected component maps the phase $\theta\mapsto -\theta$, or $k\mapsto -k$. All in all, for the different gauge groups, we have topological GW operators labelled by
\begin{align}
\widetilde{U}(N): &\quad T^{\widetilde{U}}_g=T^{\widetilde{U}}_{g(\theta)}\,,\quad g(\theta) = e^{i\theta} \left(\begin{array}{cc}
\mathds{1} & 0 \\
0 & \mathds{1}
\end{array} \right)\,,\, \theta \in [0,\pi] \,, \\
\widetilde{SU}(2n): & \quad T^{\widetilde{SU}}_g = T^{\widetilde{SU}}_{g(k)}\,,\quad g(k) = e^{i\frac{k\pi}{n}}\left(\begin{array}{cc}
\mathds{1} & 0 \\
0 & \mathds{1}
\end{array} \right)\,,\, k=0,1,\dots,n \,,\\
\widetilde{SU}(2n+1): & \quad T^{\widetilde{SU}}_g = T^{\widetilde{SU}}_{g(k)}\,,\quad g(k) = e^{i\frac{2k\pi}{2n+1}}\left(\begin{array}{cc}
\mathds{1} & 0 \\
0 & \mathds{1}
\end{array} \right)\,,\, k=0,1,\dots,n\,.
\end{align}
Note that the conjugacy classes corresponding to $\theta \in (0,\pi)$ have two elements belonging to them, and likewise for $k=1,\dots,n-1$ in the $\widetilde{SU}(2n)$ case and $k=1,\dots,n$ in the $\widetilde{SU}(2n+1)$ case. Therefore the corresponding GW operators have quantum dimension two: this means that the symmetry is non-invertible, as there are no operators that we can fuse with e.g. $T^{\widetilde{SU}}_1$ to produce the identity operator, according to \eqref{eq:quantum_dimension}. This can also be checked by directly computing the fusion rule, which we proceed to illustrate in the example of $\widetilde{SU}(N)$. Note that the GW operator $T^{\widetilde{SU}}_k$ can be written as a sum of two GW operators of the $SU$ gauge theory,
\begin{align}
\label{GWtildeSU}
T^{\widetilde{SU}}_k (M^{d-2})= T^{SU}_k(M^{d-2})+ T^{SU}_{-k}(M^{d-2})\,.
\end{align}
If we fuse two operators with $k\neq k'$, we find
\begin{align}
T^{\widetilde{SU}}_k \cdot T^{\widetilde{SU}}_{k'} &= \left(T^{SU}_k+ T^{SU}_{-k}\right) \cdot \left(T^{SU}_{k'}+ T^{SU}_{-k'}\right)\\
& = T^{SU}_k\cdot T^{SU}_{k'} + T^{SU}_k\cdot T^{SU}_{-k'} + T^{SU}_{-k}\cdot T^{SU}_{k'} + T^{SU}_{-k}\cdot T^{SU}_{-k'}\nonumber\,.
\end{align}
The 1-form symmetry of $SU(N)$ gauge theory is invertible, i.e. the fusion of the $T^{SU}_k$ GW operators obeys \eqref{eq:invertible_hfs}. Thus,
\begin{align}\label{eq:fusion_rule_SU_tilde}
T^{\widetilde{SU}}_k \cdot T^{\widetilde{SU}}_{k'} &= T^{SU}_{k+k'} + T^{SU}_{k-k'} + T^{SU}_{k'-k} + T^{SU}_{-(k+k')} \nonumber \\
&=T^{\widetilde{SU}}_{k+k'} + T^{\widetilde{SU}}_{|k-k'|}\,.
\end{align}
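As a consistency check, for generic $k\neq k'$ (such that both labels on the right hand side correspond to conjugacy classes with two elements) the quantum dimensions match on both sides of \eqref{eq:fusion_rule_SU_tilde},
\begin{align}
\dim\big(T^{\widetilde{SU}}_k\big)\,\dim\big(T^{\widetilde{SU}}_{k'}\big)=2\times 2=2+2=\dim\big(T^{\widetilde{SU}}_{k+k'}\big)+\dim\big(T^{\widetilde{SU}}_{|k-k'|}\big)\,,
\end{align}
in agreement with \eqref{eq:quantum_dimension}.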
The fusion rule in the case when $k=k'$ is much more subtle and we cannot compute it explicitly. However, since the behaviour of the 1-form symmetry of $\widetilde{SU}$ and $\widetilde{U}$ groups seems in many aspects a natural generalization of the $O(2)$ case described above, we expect that an analogous argument to that between equations \eqref{eq:O2_weird_fusion_1}--\eqref{eq:O2_weird_fusion_2} should also apply, giving rise to fusion rules similar to \eqref{eq:fusion_algebra_O2}.
For the gauge group $\widetilde{U}$, the computation is completely analogous, and the only difference is that instead of the discrete parameters $k,k'$ we have continuous parameters $\theta, \theta'$,
\begin{align}\label{eq:fusion_rule_U_tilde}
T^{\widetilde{U}}_{\theta}\cdot T^{\widetilde{U}}_{\theta'} = T^{\widetilde{U}}_{\theta+\theta'} + T^{\widetilde{U}}_{|\theta-\theta'|}\,.
\end{align}
We can gain further intuition on the appearance of non-invertible symmetries if we consider the theory before gauging the outer automorphism (i.e. just $U(N)$ or $SU(N)$ gauge theory), when the topological Gukov-Witten operators have quantum dimension one. However, the (now global) zero-form symmetry is not independent of the center one-form symmetry; a feature that can be seen from the fact that the fundamental Wilson line is acted upon by charge conjugation $\mathcal{C}$ as
\begin{align}
\label{ConWL}
\mathcal{C}:\,W_{\rm\bf fund} \mapsto W_{\overline{\rm\bf fund}}\,.
\end{align}
This implies that the total global symmetry of the theory is not a direct product of both, but rather a higher categorical object known as a \emph{2-group symmetry} (see \textit{e.g.} \cite{Kapustin:2013uxa,Sharpe:2015mja,Delcamp:2018wlb,Cordova:2018cvg,Benini:2018reh,Delcamp:2019fdp}).
This observation provides further support to the identification of the symmetry operators for the $\widetilde{SU}(N)/\widetilde{U}(N)$ theories above (along lines similar to \cite{Barkeshli:2014cna}). The GW operators of the $U(N)/SU(N)$ theory are acted upon by $\mathcal{C}$ in a way similar to \eqref{ConWL}. It is then clear that the $\mathcal{C}$-invariant combinations are precisely \eqref{GWtildeSU} (and analogously for $U(N)$), which are the GW operators left over after gauging $\mathcal{C}$. Note that the appearance of non-invertible symmetries is due to the ``folded structure" in \eqref{GWtildeSU}, which from this point of view is inherited from the fact that the 1-form generators of the $SU(N)/U(N)$ theory are acted upon by the 0-form symmetry $\mathcal{C}$.\footnote{This possibility seems to be excluded in \cite{Casini:2020rgj,Casini:2021zgr} (see footnote 11 in \cite{Casini:2020rgj}).}
\section{Dual $(d-2)$-form symmetry}\label{section:d-2 form}
In this section, the aim is twofold: on the one hand, we study the $(d-2)$-form symmetry generated by topological Wilson lines; and on the other hand, we look for brane constructions of $\widetilde{SU}(N)$ theories. It seems that the key to achieving the latter is to include defects so that the $(d-2)$-form symmetry is broken.
\subsection{Wilson lines and Alice strings}
In the previous section we looked for topological GW operators that generate the electric 1-form symmetry. The charged objects were Wilson lines, and the symmetry can be broken if we include particles that make the Wilson lines endable. In this section, we look for topological Wilson lines: the charged objects are precisely the Gukov-Witten operators. As we have discussed above, the Wilson line along a contractible path always belongs to the connected component of the group, and this implies that the topological Wilson lines, which generate the $(d-2)$-form symmetry, are given by the representations of the gauge group that map the whole identity component $G^0$ to the identity, cf. \eqref{eq:dual_symmetry_general}.
In the case of $\widetilde{SU}(N)$ and $\widetilde{U}(N)$, the group of connected components is $\mathbb{Z}_2$, which has two irreducible representations. At the level of the full group, they correspond to the trivial representation and the sign representation respectively, where the sign representation is defined as
\begin{align}\label{eq:sign_rep_SUtilde}
\text{sign}((g,1))=1\,,\quad \text{sign}((g,-1)) = -1\,,
\end{align}
for $g\in SU(N)$ or $U(N)$ respectively. Therefore, the topological Wilson lines are $W^{\widetilde{G}}_{\mathbf{1}}$ and $W^{\widetilde{G}}_{\text{sign}}$. Similarly to the case of the $O(N)$ groups \eqref{eq:fusion_WL_O_groups}, the topological Wilson lines fuse according to the $\mathbb{Z}_2$ product, and therefore we have a $\mathbb{Z}_2$ invertible $(d-2)$-form symmetry.
In a similar fashion to the 1-form symmetry, this $(d-2)$-form symmetry can be broken by including defects on which the charged GW operators can end. These were dubbed \emph{twist vortices} in \cite{Heidenreich:2021xpr}, and in the context of the charge conjugation symmetry have been usually called \emph{Alice strings} \cite{Schwarz:1982ec}. These defects always have a transverse $\mathbb{R}^2$ at each point, and are defined by the fact that when going around them, operators undergo a monodromy corresponding to the outer automorphism of the gauge group.
An important feature of Alice strings is that their presence reduces the globally well defined gauge group \cite{Alford:1992yx}. This is because the outer automorphism doesn't commute with all gauge transformations, and states that should be gauge equivalent can pick up different Aharonov-Bohm phases from the action of the monodromy. This would be a contradiction: the resolution is that, even if in a region that doesn't include the string the gauge group is apparently $G$, the presence of the string reduces it to the centraliser of the outer automorphism in $G$, i.e. precisely the subgroup that does commute with the monodromy.
Let's be more explicit in the case at hand of $\widetilde{SU}(N)_{I,II}$. When going around the Alice string, fields are acted upon by the element $(1,-1)$ which corresponds to the outer automorphism of $SU(N)$. Note that this action does depend on the choice of semidirect product $\Theta$ \eqref{eq:semidirect_theta1} or \eqref{eq:semidirect_theta2}. The globally well defined gauge group is the centraliser $C_{\widetilde{SU}(N)_{I,II}}((1,-1))$, which can be easily found by computing
\begin{align}
& (g,\eta) (1,-1) = (g \Theta(\eta)(1),-\eta) = (g,-\eta)\,, \\
& (1,-1) (g,\eta) = (\Theta(-1)(g),-\eta)\,.
\end{align}
We see that $g$ needs to satisfy
\begin{align}
g = \Theta(-1) (g)\,.
\end{align}
If $\Theta=\Theta^I$ \eqref{eq:semidirect_theta1}, this implies that $g\in SO(N)$. On the other hand, if $\Theta=\Theta^{II}$ \eqref{eq:semidirect_theta2}, which can only happen if $N$ is even, we have $g\in Sp(N/2)$.
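As a quick check in the $\Theta^{II}$ case, note that $J_N$ itself satisfies the fixed-point condition: being real, with $J_N^2=-\mathds{1}$,
\begin{align}
\Theta^{II}(-1)(J_N)=-J_N\,\overline{J_N}\,J_N=-J_N^3=J_N\,,
\end{align}
consistent with $J_N$ belonging to the symplectic subgroup that survives in the presence of the Alice string.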
In summary, if we add an Alice string to break the $(d-2)$-form symmetry of the $\widetilde{SU}(N)$ theory, the gauge group becomes $SO(N)\times \mathbb{Z}_2$ or $Sp(N/2)\times \mathbb{Z}_2$. It is interesting to look back at the electric 1-form symmetry of these theories. From Table \ref{tab:GW_operators_SpSO_summary}, we see that the topological Gukov-Witten operators, after the reduction of the well defined gauge group, correspond to $+\mathds{1}$ or $-\mathds{1}$ (the latter only when it belongs to the group). Comparing with the results of Table \ref{tab:GW_operators_SUtilde_summary}, these are precisely the GW operators with quantum dimension 1 in the $\widetilde{SU}(N)$ theory. Therefore, it seems that breaking the $(d-2)$-form symmetry ultimately results in the disappearance of the non-invertible 1-form symmetries. We believe that this is a phenomenon deserving of further exploration.
\subsection{Brane constructions}
Brane setups engineering Alice strings and $\widetilde{U}(N)$ gauge groups can be constructed by intersecting branes with orientifolds. Following \cite{Gukov:2008sn}, consider
\begin{align}
\begin{tabular}{c|cccccccccc}
& 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\\hline
$N$ D3 & $\times$ & $\times$ & $\times$ & $\times$ & & & & & & \\
O3 & $\times$ & $\times$ & & & $\times$ & $\times$ & & & & \\
$k$ D3' & $\times$ & $\times$ & & & $\times$ & $\times$ & & & &
\end{tabular}
\label{eq:brane_construction_GW}
\end{align}
The argument in \cite{Gukov:2008sn} suggests that the theory on the D3-branes is $\widetilde{U}(N)$. However, from the point of view of the D3-branes, the O3 $+ k$ D3' act as a codimension 2 defect with a monodromy associated to the element $(1,-1) \in \widetilde{U}(N)$, that is, an Alice string. As a result, the globally well defined gauge group is $O(N)$ (in the $\widetilde{U}(N)_I$ case) or $Sp(N/2)$ (in the $\widetilde{U}(N)_{II}$ case), and the full $\widetilde{U}(N)_{I,II}$ is only manifest on top of the defect. The type of semidirect product extension $\Theta^I$ or $\Theta^{II}$ depends on the choice of orientifold plane. For O3$^+,\,\widetilde{\rm O3}^+$ we find $\widetilde{U}(N)_I$, leading to a globally well-defined $O(N)$ in the presence of the twist vortex, and for an O3$^-$ we find $\widetilde{U}(N)_{II}$, leading to a globally well-defined $Sp(N/2)$ in the presence of the twist vortex.\footnote{It would be interesting to clarify the distinction between O3$^-$ and $\widetilde{\rm O3}^-$.}
This arrangement of branes can be straightforwardly generalised to $N$ D$p$-branes along the $x^{0,\dots,p}$ directions and O$p$ $+k$ D$p$' along the $x^{0,\dots,p-2,p+1,p+2}$ directions. In this way, we can engineer $\widetilde{U}(N)$ theories in a different number of dimensions.
Besides \eqref{eq:brane_construction_GW}, there are two more setups where we can engineer a $\widetilde{U}(N)$ theory in a similar fashion. These are the type IIB configuration
\begin{align}
\begin{tabular}{c|cccccccccc}
& 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\\hline
$N$ D3 & $\times$ & $\times$ & $\times$ & $\times$ & & & & & & \\
$k$ D7' & $\times$ & $\times$ & & & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ \\
O7 & $\times$ & $\times$ & & & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$
\end{tabular}
\label{eq:brane_construction_IIB}
\end{align}
The IIB configuration with an O7$^-$ and 4 extra flavor D7 branes was studied in detail in \cite{Harvey:2007ab, Harvey:2008zz}, where it was argued that the 2d intersection acts as an Alice string for the $\widetilde{U}(N)_I$ theory on the D3, out of which only an $O(N)$ is globally well-defined.
In addition, we also have the T-dual in type IIA,
\begin{align}
\begin{tabular}{c|cccccccccc}
& 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\\hline
$N$ D2 & $\times$ & & $\times$ & $\times$ & & & & & & \\
$k$ D6' & $\times$ & & & & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ \\
O6 & $\times$ & & & & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$
\end{tabular}
\label{eq:brane_construction_IIA}
\end{align}
In both these cases, the theory on the D3 (or D2) is a $\widetilde{U}(N)$ gauge theory, and the orientifold intersection appears as an Alice string which reduces the well defined gauge group to an orthogonal or symplectic subgroup. However, as opposed to \eqref{eq:brane_construction_GW}, the identification of the type of orientifold with the semidirect product extension is reversed: an O7$^+$ will give rise to $\widetilde{U}(N)_{II}$ on the D3, and an O7$^-$ to $\widetilde{U}(N)_I$.
An important remark is that all the brane constructions that we have discussed have one thing in common: when seeking to engineer a disconnected gauge group, the Alice strings appear automatically, and it seems impossible to find the former without the latter. As a consequence, as we have discussed in the previous section, the presence of the Alice string also serves the purpose of breaking the $(d-2)$-form global symmetry of these theories. This strongly resonates with the conjectured absence of global symmetries in quantum gravity.
\section{Conclusions and outlook}
In this note we have studied the electric 1-form and $(d-2)$-form symmetries of gauge theories based on the gauge groups which include charge conjugation as part of the gauge symmetry introduced in \cite{Bourget:2018ond,Arias-Tamargo:2019jyh}. As for the electric 1-form symmetry, concentrating on pure gauge theories, we have found that these QFT's provide very simple and explicit examples of non-invertible symmetries using the technology developed in \cite{Heidenreich:2021xpr}, supporting the claim in \cite{Roumpedakis:2022aik} that non-invertible symmetries are indeed ubiquitous also in dimensions higher than three. In this case, the emergence of non-invertible symmetries can be heuristically understood by considering a copy of the same theory but with gauge group $SU(N)$ (an analogous discussion holds for the $U(N)$ case). In that case, the symmetry operators associated to the electric 1-form symmetry are permuted by the 0-form charge conjugation symmetry (actually forming a 2-group). The operators which carry over to the version of the theory with gauged charge conjugation are the combinations which are $\mathcal{C}$-invariant, and this ``folds" the GW operators as in \eqref{GWtildeSU}, leading to non-invertible symmetries in very much the same way as in the $O(2)$ case discussed in \cite{Heidenreich:2021xpr}. The non-invertible character of the symmetries can be read off from the fact that their quantum dimension is 2 (instead of 1). This manifests itself also in the fusion rules, which we expect to mimic the $O(2)$ case. Even though we have provided arguments in support of the fusion rules in section \ref{section:electric 1-form}, it would be very interesting to further study this aspect to put them on firmer grounds.
More generally, the existence of non-invertible symmetries has been shown to be closely related to mixed anomalies (see e.g. \cite{Tachikawa:2017gyf,Kaidi:2021xfk}). The fact that, when considering a theory which includes charge conjugation as part of the gauge group, one immediately finds a non-invertible 1-form symmetry may signal that a mixed anomaly between the gauge group and its outer automorphism should be present (see \cite{Komargodski:2017dmc} for a related discussion). It would be very interesting to investigate this point further, complementing the studies of \cite{Henning:2021ctv}.
Since the gauge groups we are considering are disconnected ($\pi_0(G)=\mathbb{Z}_2$), QFT's based on them automatically exhibit a $(d-2)$-form symmetry \cite{Heidenreich:2021xpr}. The objects charged under this symmetry are twist vortices (Alice strings in the $d=4$ case). As is well-known, in the presence of twist vortices only a subgroup of the full gauge group is well defined. In the case at hand it is $SO(N)$ for $\widetilde{SU}(N)_I$ and $Sp(N/2)$ for $\widetilde{SU}(N)_{II}$ --recall that this latter version is only available for even $N$. Elaborating on \cite{Gukov:2008sn} and \cite{Harvey:2007ab, Harvey:2008zz}, we have suggested String Theory embeddings for these theories. Amusingly, they automatically come with twist vortices, thus breaking the $(d-2)$-form symmetry. This is very much consistent with the Swampland criterion that any global symmetry should be broken. These String Theory constructions involve intersecting orientifolds. Roughly speaking, the type of orientifold ($Op^{\pm}$ and their tilde versions) matches the possible theories. However, it would be very interesting to study these constructions in more detail (in particular including the relation between the two proposed constructions). Moreover, the String Theory construction may be used to study the duality properties of these theories. As a consequence, this would allow one to study magnetic $(d-3)$-form symmetries as well as possible 't Hooft anomalies. Note that some of these aspects may depend on the dimension $d$. We leave these very interesting aspects for future studies.
\section*{Acknowledgements}
We would like to thank F. Benini, O. Bergman and especially M. Montero for very useful conversations. GAT is supported by the Spanish government scholarship MCIU-19-FPU18/02221. This work is partly supported by Spanish national grant MINECO-16-FPA2015-63667-P as well as the Principado de Asturias grant SV-PA-21-AYUD/2021/52177.
package com.runmyprocess.sec;
import java.io.File;
import java.util.ArrayList;
import java.util.logging.Logger;
import com.unboundid.ldap.sdk.*;
import com.unboundid.ldif.LDIFException;
import org.runmyprocess.json.JSONArray;
import org.runmyprocess.json.JSONObject;
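/*
 * Example request payload handled by accept() below. This is a sketch with
 * hypothetical values; the field names are the ones actually read by the
 * operation handlers in this class, and "scope"/"attributes" are optional
 * for SEARCH:
 *
 * {
 *   "userDN": "cn=admin,dc=example,dc=com",
 *   "password": "secret",
 *   "operation": "SEARCH",
 *   "baseDN": "ou=people,dc=example,dc=com",
 *   "filter": "(objectClass=person)",
 *   "scope": "SUB",
 *   "attributes": ["cn", "mail"]
 * }
 *
 * ADD and MODIFY expect an "ldif" array of LDIF lines instead, and DELETE
 * expects a "deleteDN" string.
 */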
public class LDAP implements ProtocolInterface{
private static final Logger LOGGER = Logger.getLogger(
Thread.currentThread().getStackTrace()[0].getClassName() );
private Response response = new Response();
private LDAPConnection connection;
private enum Operation {
SEARCH, ADD, MODIFY, DELETE
}
private enum EnumScope {
BASE,ONE, SUB,SUBORDINATE_SUBTREE
}
public LDAP() {
// Default constructor; the LDAP connection is created per request in accept().
}
/**
* Manages errors
* @param error
* @return an error jsonObject
*/
private JSONObject LDAPAgentError(String error){
response.setStatus(400);//sets the return status to bad request (client error)
JSONObject errorObject = new JSONObject();
errorObject.put("error", error);
response.setData(errorObject);
return errorObject;
}
/**
* Creates a connection to LDAP
* @param conf the configuration information from the loaded config file
* @param user the connection dn
* @param password the connection password
* @throws LDAPException
*/
private void newConnection(Config conf, String user, String password)
throws LDAPException, LDIFException {
// host, port, username and password
final LDAPConnectionOptions connectionOptions =
new LDAPConnectionOptions();
int connectionTimeoutMillis = 5000;
connectionOptions.setConnectTimeoutMillis(connectionTimeoutMillis);
LOGGER.info("Connecting to:" + conf.getProperty("host") + "Port" + conf.getProperty("port")) ;
this.connection= new LDAPConnection(connectionOptions,conf.getProperty("host"),
Integer.parseInt(conf.getProperty("port")));
if (user != null && !user.isEmpty()){
SimpleBindRequest bindRequest = new SimpleBindRequest(user,password);
bindRequest.setResponseTimeoutMillis(10000);
// exceptions ignored for this example
BindResult bindResult = connection.bind(bindRequest);
if(bindResult.getResultCode().equals(ResultCode.SUCCESS))
{
LOGGER.info("The BIND request was successful");
if(bindResult.hasResponseControl())
{
// Retrieve and process the response control.
}
}
}
}
/**
* Searches for data in LDAP
* @param jsonObject the jsonObject containing the request information
* @throws LDAPSearchException
*/
private void search( JSONObject jsonObject)
throws LDAPSearchException {
SearchResult searchResult;
if (this.connection.isConnected()) {
String baseDN = jsonObject.getString("baseDN");
String filter = jsonObject.getString("filter");
SearchScope sc = SearchScope.ONE;
if(jsonObject.containsKey("scope")){
EnumScope enumScope = EnumScope.valueOf(jsonObject.getString("scope"));
switch(enumScope) {
case BASE:
sc = SearchScope.BASE;
break;
case ONE:
sc = SearchScope.ONE;
break;
case SUB:
sc = SearchScope.SUB;
break;
case SUBORDINATE_SUBTREE:
sc = SearchScope.SUBORDINATE_SUBTREE;
break;
default:throw new LDAPSearchException(ResultCode.NO_SUCH_ATTRIBUTE,"The scope "+jsonObject.getString("scope")+" is unknown");
}
}
if (jsonObject.containsKey("attributes")){
ArrayList<String> list = new ArrayList<String>();
JSONArray jsonArray = jsonObject.getJSONArray("attributes");
if (jsonArray != null) {
int len = jsonArray.size();
for (int i=0;i<len;i++){
list.add(jsonArray.get(i).toString());
}
}
searchResult = this.connection.search(baseDN, sc, filter,list.toArray(new String[list.size()]));
}else{
searchResult = this.connection.search(baseDN, sc, filter);
}
response.setStatus(200);
JSONObject reply = new JSONObject();
reply.put("Result",searchResult.getSearchEntries().toString());
response.setData(reply);
} else{
LDAPAgentError("LDAP Connection Lost");
}
}
/**
* Adds a registry
* @param jsonObject the jsonObject containing the request information
* @throws Exception
*/
private void addRequest( JSONObject jsonObject)
throws LDIFException, LDAPException {
if (this.connection.isConnected()) {
JSONArray jsonArr= jsonObject.getJSONArray("ldif");
String[] ldifLines=new String[jsonArr.size()];
for(int i=0;i<jsonArr.size();i++) {
ldifLines[i]=jsonArr.get(i).toString();
}
LDAPResult result = connection.add(new AddRequest(ldifLines));
connection.close();//close connection after request
response.setStatus(200);
JSONObject reply = new JSONObject();
reply.put("Result", result.toString());
response.setData(reply);
}
}
/**
* Modifies a registry
* @param jsonObject the jsonObject containing the request information
* @throws Exception
*/
private void modifyRequest(JSONObject jsonObject)
throws LDAPException, LDIFException {
if (this.connection.isConnected()) {
JSONArray jsonArr= jsonObject.getJSONArray("ldif");
String[] ldifLines=new String[jsonArr.size()];
for(int i=0;i<jsonArr.size();i++) {
ldifLines[i]=jsonArr.get(i).toString();
}
LDAPResult result = connection.modify(new ModifyRequest(ldifLines));
connection.close();//close connection after request
response.setStatus(200);
JSONObject reply = new JSONObject();
reply.put("Result", result.toString());
response.setData(reply);
}
}
/**
* Deletes a registry
* @param jsonObject the jsonObject containing the request information
* @throws Exception
*/
private void deleteRequest( JSONObject jsonObject)
throws LDAPException {
if (this.connection.isConnected()) {
String deleteDN= jsonObject.getString("deleteDN");
LDAPResult result = connection.delete(new DeleteRequest(deleteDN));
connection.close();//close connection after request
response.setStatus(200);
JSONObject reply = new JSONObject();
reply.put("Result", result.toString());
response.setData(reply);
}
}
/**
* Receives the information, reads the configuration information and calls the appropriate function
* @param jsonObject
* @param configPath
*/
@Override
public void accept(JSONObject jsonObject,String configPath) {
LOGGER.info("Searching for config file...");
Config conf = new Config("configFiles"+File.separator+ "LDAP.config",true);//sets the config info
LOGGER.info("Config file found ");
try {
this.newConnection(conf, jsonObject.getString("userDN"), jsonObject.getString("password"));
Operation operation = Operation.valueOf(jsonObject.getString("operation"));
try{
switch(operation) {
case SEARCH:
this.search(jsonObject);
break;
case ADD:
this.addRequest(jsonObject);
break;
case MODIFY:
this.modifyRequest(jsonObject);
break;
case DELETE:
this.deleteRequest(jsonObject);
break;
default:throw new Exception("The operation "+jsonObject.getString("operation")+" is unknown");
}
}catch (LDAPException e){
LDAPAgentError(e.toString()) ;
} catch (LDIFException e){
LDAPAgentError(e.toString()) ;
}
this.connection.close();
}catch (LDAPException e){
LDAPAgentError(e.toString()) ;
} catch (LDIFException e){
LDAPAgentError(e.toString()) ;
}catch (Exception e){
LDAPAgentError(e.getLocalizedMessage());
}
}
/**
* Returns the response value
* @return response
*/
@Override
public Response getResponse() {
return response;
}
}
\section{\label{Intro}Introduction}
The Kraichnan model \cite{kraichnan1968small,kraichnan1974convection,Kraichnan94} of a passive scalar
advected by a turbulent fluid has achieved a paradigmatic status in the modern
theory of turbulence \cite{falkovich2001particles}. A breakthrough in the
analytical calculation of anomalous scaling for scalar structure functions was
initiated by the work of Gaw\c{e}dzki and Kupiainen \cite{gawedzki1995anomalous},
who exploited an expansion in the smoothness exponent $\xi$ of the velocity field
around the rough limit $\xi=0.$ This was followed very closely by alternative approaches based on
expanding in powers of the inverse $1/d$ of space dimension \cite{chertkov1995normal} and
in powers of $2-\xi$ for $\xi$ near $2$ \cite{shraiman1995anomalous}. The Kraichnan model
has been the source of fundamental new concepts in turbulence theory such as Lagrangian
slow modes and spontaneous stochasticity \cite{bernard1998slow} and it has proved
also a valuable testing ground for general renormalization group methods \cite{gawedzki1997inverse,antonov2006renormalization,kupiainen2007scaling}.
The contemporaneous reviews of Gaw\c{e}dzki \cite{Gawedzki1997,gawedzki2002easy,gawedzki2002soluble,cardy2008non},
written as progress reports, still provide some of the most insightful and lucid expositions of this body of work.
More recently, the Kraichnan model has found an entirely new arena of application
in the non-equilibrium statistical mechanics of diffusive mixing. The paper of
Donev, Fai and vanden-Eijnden \cite{donev2014reversible} (hereafter DFV) has shown
that the equations for the scalar concentration field in a binary fluid mixture,
which describe its advection by thermal fluctuations of the velocity in an otherwise
quiescent fluid, reduce in the high-Schmidt limit to a version of the Kraichnan model
and yield an economical scheme for efficient numerical computation. The subsequent
work \cite{eyink2022high} applied the DFV theory analytically to investigate the
effects of thermal noise on high-Schmidt turbulent mixing and showed, in the
process, that the many powerful mathematical methods devised to treat turbulent advection
in the Kraichnan model can be applied also to mixing by thermal fluctuations.
In the present paper we shall explain further this application and present a
few initial results of our ongoing investigation. This subject
seems very suitable for the special journal issue in memory of Krzysztof Gaw\c{e}dzki.
Among his very wide-ranging research pursuits, Gaw\c{e}dzki himself made important contributions
to mathematical physics of non-equilibrium statistical mechanics for diffusion processes
\cite{chetrite2008Afluctuation,chetrite2008Bfluctuation,chetrite2009eulerian,gomez2009experimental}.
We therefore offer this work in tribute to him and to express our deep gratitude
for his scientific leadership and personal friendship.
Although diffusive mixing in a laminar fluid may appear to have little in common
with turbulence, the physical analogies become apparent on closer inspection. In both problems,
for example, there are scale-invariant fluctuations leading to correlations with power-law decay.
Concentration correlations for steady-state diffusion with a mean concentration gradient imposed
by the Soret effect were first studied by Law and Nieuwoudt \cite{law1989noncritical,nieuwoudt1990theory},
who predicted long-ranged correlations characterized by a power-law divergence
$\propto k^{-4}$ of the static structure function at low wave-numbers. This is a special
case of the generic long-range fluctuations expected in non-equilibrium statistical steady-states
\cite{dorfman1994generic,grinstein1995generic}. It was subsequently found by Vailati, Giglio, and
co-workers that similar correlations occur also in isothermal free diffusion of a blob of concentration,
where the mean concentration gradient is co-evolving with the fluctuations
\cite{vailati1997giant,brogioli2000universal,croccolo2007nondiffusive}.
These non-equilibrium concentration fluctuations have been termed ``giant'' since they are
orders of magnitude larger than thermal equilibrium fluctuations \cite{vailati1997giant}.
The scale of the fluctuations is also macroscopic, cut off only by buoyancy effects
\cite{segre1993nonequilibrium,vailati1997giant,brogioli2000universal,croccolo2007nondiffusive},
and, in low gravity environments, quenched only by the finite size of the sample
\cite{ortiz2004nonequilibrium,vailati2011fractal,Cerbino2015,croccolo2016shadowgraph}.
Indeed, as in turbulent eddy transport, diffusive flux in liquids can be understood
to be generated entirely by the non-equilibrium fluctuations
\cite{brogioli2000diffusive,donev2011diffusive,donev2011enhancement,donev2014reversible}.
Thus, the diffusion coefficient measured in a macroscopic experiment is in fact
an ``eddy diffusivity'' due to advection of concentration by thermal velocity fluctuations.
The standard equations to describe this physics are the Landau-Lifschitz fluctuating
hydrodynamics equations for a binary mixture in a solvent at rest
\cite{Cerbino2015,brogioli2016correlations}:
\begin{eqnarray}\label{Momentum100}
\rho\partial_t {\bf v}'&=&-{\mbox{\boldmath $\nabla$}} p'+ \eta \triangle {\bf v}'+{\mbox{\boldmath $\nabla$}}{\mbox{\boldmath $\cdot$}} \Big(\sqrt{ 2\eta k_B T}\; {\boldsymbol\eta}({\bf x}, t) \Big),
\end{eqnarray}
\begin{equation}\label{Passive110}
\partial_t c=-{\bf v}'{\mbox{\boldmath $\cdot$}}{\mbox{\boldmath $\nabla$}} c +{\mbox{\boldmath $\nabla$}}{\mbox{\boldmath $\cdot$}} \left(D {\mbox{\boldmath $\nabla$}} c\right).
\end{equation}
The fluctuation velocity ${\bf v}'$ is assumed incompressible, ${\mbox{\boldmath $\nabla$}}{\mbox{\boldmath $\cdot$}}{\bf v}'=0,$ for constant mass
density $\rho$ of the fluid, with this constraint enforced in \eqref{Momentum100} by the pressure $p'.$
The tensor field ${\boldsymbol \eta}({\bf x}, t)$
is Gaussian white-noise, symmetric and traceless, representing thermal fluctuating stress, with zero mean and covariance
\begin{eqnarray}\label{etav}
\langle \eta_{ij}({\bf x}, t) \eta_{kl}({\bf x'}, t')\rangle&=&(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}-{2\over 3} \delta_{ij}\delta_{kl})\delta^3({\bf x-x'})\delta(t-t').
\end{eqnarray}
The prefactor $\sqrt{ 2\eta k_B T}$ involving the shear viscosity $\eta$ and temperature $T$ is dictated by the
fluctuation-dissipation relation so that the correct Gibbs equilibrium distribution is obtained
for the equal-time velocity statistics with energy equipartition among wave-number modes
(e.g., see Appendix A in \cite{eyink2021dissipation}). We have neglected here, as in
\cite{Cerbino2015,brogioli2016correlations}, the similar thermal noise term in the equation
\eqref{Passive110} for the concentration $c,$ since the equilibrium fluctuations which it
generates are far smaller than the giant concentration fluctuations (GCF's) of primary interest.
Note that $D$ in \eqref{Passive110} is the molecular diffusivity and that we could
also add to the diffusive mass current ${\bf J}_D=-D{\mbox{\boldmath $\nabla$}} c$ a term ${\bf J}_T=-D c(1-c)S_T {\mbox{\boldmath $\nabla$}} T$
corresponding to non-equilibrium mass flux driven by a temperature gradient ${\mbox{\boldmath $\nabla$}} T,$
with $S_T$ the Soret coefficient \cite{degroot2013non}.
The present standard theory \cite{vailati1998nonequilibrium,brogioli2000diffusive,brogioli2016correlations}
based upon {\it linearized fluctuating hydrodynamics} proceeds by writing
an equation for the mean concentration $\bar{c}$:
\begin{equation}\label{PassiveMean}
\partial_t \bar{c}= {\mbox{\boldmath $\nabla$}}{\mbox{\boldmath $\cdot$}} \left(D {\mbox{\boldmath $\nabla$}} \bar{c} \right),
\end{equation}
by then defining the concentration fluctuation $c'=c-\bar{c},$ and by finally linearizing \eqref{Passive110}
to obtain
\begin{equation}\label{PassiveFluc}
\partial_t c'=-{\bf v}'{\mbox{\boldmath $\cdot$}}{\mbox{\boldmath $\nabla$}} \bar{c} +{\mbox{\boldmath $\nabla$}}{\mbox{\boldmath $\cdot$}} \left(D {\mbox{\boldmath $\nabla$}} c'\right),
\end{equation}
with the term $-{\bf v}'{\mbox{\boldmath $\cdot$}} {\mbox{\boldmath $\nabla$}} c'$ which is quadratic in fluctuations discarded. The latter linearized
equation may then be solved exactly in the case of a stationary, homogeneous mean concentration gradient
${\mbox{\boldmath $\nabla$}}\bar{c}$ \cite{brogioli2000diffusive,brogioli2016correlations} or else approximately when the solution
$\bar{c}({\bf x},t)$ of \eqref{PassiveMean} has a gradient ${\mbox{\boldmath $\nabla$}} \bar{c}({\bf x},t)$ which is dependent upon ${\bf x}$ and $t$
\cite{vailati1998nonequilibrium}. This simple approach has yielded quite accurate predictions
of experimental observations for free diffusion processes, except for minor deviations at early times when concentration gradients are relatively large \cite{croccolo2007nondiffusive}. However, current experimental efforts in low-gravity environments such as the NEUF-DIX space project \cite{baaske2016neuf,vailati2020giant}
are now exploring problems with large concentration gradients, high concentrations, and transient processes
where the basic assumptions of the standard theory are invalid.
The exact asymptotic theory of DFV \cite{donev2014reversible} is here very attractive because
it is fully nonlinear and the basic assumption of high Schmidt number is well-satisfied
in most of the liquid mixtures experimentally considered. An obstacle to serious consideration
of this approach may have been the puzzling numerical result obtained in \cite{donev2014reversible}
for the single-time structure function $S({\bf k},t)$ in a free-diffusion experiment. Rather than
the expected scaling $S({\bf k},t)\propto k^{-4},$ the numerical implementation of the DFV theory
in \cite{donev2014reversible} produced a result apparently more consistent with $S({\bf k},t)\propto k^{-3}.$
On the other hand, it has now been shown by an exact mathematical analysis \cite{eyink2022high}
that the DFV theory does yield $S({\bf k})\propto k^{-4},$ in close agreement with linearized theory,
at least for a statistical steady state with random injection of concentration fluctuations. It was
speculated in \cite{eyink2022high} that the simulation by DFV in \cite{donev2014reversible} was
not run for a sufficient time to observe the correct $S({\bf k},t)\propto k^{-4}$ scaling or perhaps
had an insufficient wavenumber range to clearly identify the power law exponent. Thus, the
DFV theory does appear to give results consistent with linearized fluctuating hydrodynamics
and, more importantly, in agreement with laboratory experiment.
In the present paper we further support this conclusion of \cite{eyink2022high} by solving
exactly for the static and dynamic structure functions of the DFV theory in the situation
of greatest experimental interest, where the GCF's are generated by an imposed concentration
gradient interacting with thermal velocity fluctuations. We first briefly review the DFV
theory and then derive useful forms of the closed equations for the mean concentration and
for the 2-point correlation function. We next solve these equations analytically for the simplest
case of a stationary, homogeneous mean concentration gradient. The somewhat surprising
result we obtain is that the nonlinear DFV theory yields {\it precisely} the same result
for the structure function as does the standard linearized theory
\cite{vailati1998nonequilibrium,brogioli2000diffusive,brogioli2016correlations}.
As we shall discuss, this result may be considered as a simple example of an ``anomaly non-renormalization
theorem'', which helps to explain the remarkable success of linearized fluctuating hydrodynamics
in describing the observations from experiment and numerical simulation for 2nd-order statistics
in high-Schmidt liquid mixtures. However, higher-order correlation functions of the concentration fluctuations are not
protected by such a result and are unlikely to be given correctly by the Isserlis-Wick
theorem for a Gaussian random field in terms of the 2nd-order correlation function.
We discuss some prospects for investigating these higher-order statistics.
\section{DFV Theory and High-Schmidt Limit}\label{S2}
In this section, we briefly review the work of DFV \cite{donev2014reversible}
on diffusive mixing in the limit of large Schmidt number and low Mach number in an isothermal
quiescent fluid. A more detailed survey can be found in \cite{EyinkJafari2022}.
The starting point of the DFV theory differs from \eqref{Momentum100},\eqref{Passive110} in two
important respects. The equation \eqref{Momentum100} for the fluctuating velocity field is
unchanged, but DFV assumed that the concentration field $c({\bf x}, t)$ in a binary mixture
satisfies the modified equation
\begin{equation}\label{Passive111}
\partial_t c=-{\bf u}'{\mbox{\boldmath $\cdot$}}{\mbox{\boldmath $\nabla$}} c +{\mbox{\boldmath $\nabla$}}{\mbox{\boldmath $\cdot$}} \left(D_0 {\mbox{\boldmath $\nabla$}} c\right).
\end{equation}
This differs from \eqref{Passive110} firstly because $D_0$ now represents
the {\it bare molecular diffusivity} before dressing by thermal fluctuations.
Secondly, ${\bf u}'$ is a {\it coarse-grained advection velocity} obtained by convolving
the solution $\bf v'$ of \eqref{Momentum100} with a smoothing kernel $\boldsymbol\sigma$,
\begin{eqnarray}\label{regularized-velocity100}
{\bf u}'({\bf x}, t)&\equiv& {\boldsymbol\sigma}\star{\bf v}'=\int {\boldsymbol\sigma}({\bf x}, {\bf x}'){\bf v}'({\bf x}', t) \, d^3x'.
\end{eqnarray}
The cutoff length scale in this kernel, denoted $\sigma,$ is taken to be of the order of the
radius of the solute molecules. The underlying assumption is that the molecules (or perhaps even colloidal
particles in suspension) feel only the fluctuating velocity field averaged over their extent.
With these basic assumptions, DFV then performed an exact asymptotic analysis in the high Schmidt-number limit, $Sc_0\equiv \frac{\eta}{D_0 \rho}\gg 1.$ Motivated by the well-known Stokes-Einstein
relation $D\sim k_B T/ \eta\sigma$, DFV introduced a small parameter $\epsilon\ll 1$ to order quantities
for formal asymptotics, adopting the scaling
\begin{equation} \eta\mapsto \epsilon^{-1}\eta, \quad D_0\mapsto \epsilon D_0 \label{trans-to} \end{equation}
such that $D_0 \eta\simeq (const.)$ and $Sc_0\sim \epsilon^{-2}.$ In the limit $\epsilon\ll 1$, there exists a
time scale separation between the fast viscous dynamics, governing the thermal velocity fluctuations ${\bf v}'$,
and much slower diffusive evolution of the concentration field $c.$ To formalize this time separation, DFV introduced
a ``macroscopic'' diffusive time $\tau$ which is related to the ``microscopic''
viscous time $t$ of Eqs. \eqref{Momentum100},\eqref{Passive110}
by $t=\epsilon^{-1} \tau,$ or, equivalently, by the scaling
\begin{equation} t\mapsto \epsilon^{-1}t \label{time-to} \end{equation}
with $\tau$ renamed $t$. This procedure leads to a limiting stochastic advection-diffusion
equation for the concentration field in the ``macroscopic'' time:
\begin{equation}\label{PassiveStratonovich100}
\partial_t c=-{\bf w}\odot{\mbox{\boldmath $\nabla$}} c+D_0 \triangle c,
\end{equation}
where $\odot$ represents a Stratonovich dot product and ${\bf w}({\bf x}, t)$ is an incompressible, advecting
random velocity field which is white noise in time, with zero mean and covariance
\begin{eqnarray} \label{SmoothVelocityCovariance101}
&& \langle {\bf w}({\bf x}, t) \otimes {\bf w}({\bf x'}, t') \rangle={ \cal {\bf R}} ({\bf x, x'}) \delta(t-t'), \cr
&& { \cal {\bf R}} ({\bf x, x'})=\frac{2 k_B T}{\eta}({\mbox{\boldmath $\sigma$}}\star{\bf G}\star{\mbox{\boldmath $\sigma$}}^\top)({\bf x}, {\bf x}').
\end{eqnarray}
The tensor $\bf G$ is the Green's function of the linear Stokes operator, which
is singular for ${\bf x}={\bf x}',$ unlike the smoothed tensor ${\bf R}$ which is regular at coinciding points. Because of the delta-in-time correlation of the random velocity ${\bf w}$, Eq.(\ref{PassiveStratonovich100}) for the concentration field is in fact a version of the Kraichnan model (see below). The spatial realizations of the random thermal velocity ${\bf w}$ can be obtained from the stationary Stokes equation with smoothed thermal forcing
\begin{eqnarray} \label{wStokes}
-{\mbox{\boldmath $\nabla$}} q+ \nu\triangle{\bf w}+{\mbox{\boldmath $\nabla$}}{\mbox{\boldmath $\cdot$}}\left( \sqrt{\frac{2\nu k_BT}{\rho}} {\mbox{\boldmath $\eta$}}_\sigma \right)={\mbox{\boldmath $0$}}
\end{eqnarray}
where ${\mbox{\boldmath $\eta$}}_\sigma={\mbox{\boldmath $\sigma$}}\star{\mbox{\boldmath $\eta$}}$ and $q$ is determined by ${\mbox{\boldmath $\nabla$}}{\mbox{\boldmath $\cdot$}}{\bf w}=0$. The physical content of the above equation is that viscous diffusion and thermal fluctuations are in instantaneous balance for the effective advecting velocity ${\bf w},$ with long-range spatial correlations induced by the incompressibility condition.
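The slaved field ${\bf w}$ is also straightforward to sample numerically. The following minimal Python sketch is our own illustration, not code from the literature: it assumes a Gaussian form for the filter $\hat{\sigma}$ with width $\sigma>0$, sets the overall physical amplitude to unity (the factors of $2k_BT/\eta$ and the white-in-time $1/\sqrt{dt}$ normalization are omitted), and simply projects Fourier-space white noise onto solenoidal modes with the spectral shape $\propto |\hat\sigma(k)|^2/k^2$ implied by \eqref{SmoothVelocityCovariance101}:
\begin{verbatim}
import numpy as np

# Illustrative sampler (amplitude set to one): a solenoidal Gaussian
# field on a periodic n^3 grid of side L, obtained by projecting white
# noise in Fourier space, with spectrum ~ |sigma_hat(k)|^2 / k^2.
def sample_w(n, L, sigma, rng):
    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing='ij')
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = np.inf                  # drop the zero mode (no mean flow)
    amp = np.exp(-0.5 * sigma**2 * k2) / np.sqrt(k2)   # assumed Gaussian filter
    xi = [np.fft.fftn(rng.standard_normal((n, n, n))) for _ in range(3)]
    # Solenoidal projection P_ij = delta_ij - k_i k_j / k^2:
    kdotxi = (kx * xi[0] + ky * xi[1] + kz * xi[2]) / k2
    w_hat = [amp * (xi[0] - kx * kdotxi),
             amp * (xi[1] - ky * kdotxi),
             amp * (xi[2] - kz * kdotxi)]
    return [np.fft.ifftn(wh).real for wh in w_hat]
\end{verbatim}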
Whereas the starting advection-diffusion equation in the DFV theory, Eq. (\ref{Passive111}), contains the bare diffusivity $D_0$, the reduced equation (\ref{PassiveStratonovich100}) in the limit $Sc\gg 1$ involves a \textit{renormalized diffusivity}. This can be seen easily by writing the It$\bar{{\rm o}}$ form of Eq.(\ref{PassiveStratonovich100}):
\begin{eqnarray}\label{PassiveIto100}
\partial_t c&=& -{\bf w}{\mbox{\boldmath $\cdot$}}{\mbox{\boldmath $\nabla$}} c+{\mbox{\boldmath $\nabla$}}{\mbox{\boldmath $\cdot$}} ({\bf D(x)}{\mbox{\boldmath $\nabla$}} c ),
\end{eqnarray}
with
\begin{equation} {\bf D(x)}:=D_0{\bf I}+\frac{1}{2} { \cal {\bf R}} ({\bf x, x}). \label{Deff} \end{equation}
The latter renormalized diffusivity originates from advection by the viscously slaved thermal velocity
fluctuations and is similar to an ``eddy-diffusivity'' due to eliminated turbulent eddies in turbulent flows.
As emphasized by DFV, generally $D_0\ll \abs{{\bf D}({\bf x})}$ and, indeed, one may consider the limit
of vanishing bare diffusivity, $D_0\to 0,$ with essentially no empirical consequence.
The original paper of DFV elaborated two types of numerical algorithm to solve for realizations of $c({\bf x},t)$ in the asymptotic theory; see \cite{donev2014reversible}, Appendix B. The first Eulerian method
was based on a straightforward predictor-corrector integrator for the stochastic advection-diffusion equation
\eqref{PassiveStratonovich100} with a staggered finite-volume discretization in space. The realizations
of ${\bf w}$ were obtained from the Stokes equation \eqref{wStokes} with an iterative Krylov linear solver.
Discretizing the advection term ${\bf w}\odot{\mbox{\boldmath $\nabla$}} c$ with a non-dissipative centered-difference
formula was shown to maintain discrete fluctuation-dissipation balance, but requires a non-vanishing bare diffusivity $D_0$ for numerical stability. A second Lagrangian numerical scheme was obtained by
a spectral decomposition of the covariance
$$ { \cal {\bf R}} ({\bf x, x}') = \sum_\alpha {\mbox{\boldmath $\phi$}}_\alpha({\bf x}){\mbox{\boldmath $\phi$}}_\alpha({\bf x}') $$
and by then solving the Lagrangian equations for space positions of $N$ tracer particles
\begin{eqnarray}
&& d{\bf q}_n(t)/dt \,=\, {\bf w}({\bf q}_n(t),t) + \sqrt{2D_0}\,{\mbox{\boldmath $\eta$}}_0(t), \quad n=1,...,N \cr
&&\qquad \,=\, \sum_\alpha {\mbox{\boldmath $\phi$}}_\alpha({\bf q}_n(t)) \eta_\alpha(t) + \sqrt{2D_0}\,{\mbox{\boldmath $\eta$}}_0(t),
\label{dQdt}
\end{eqnarray}
with i.i.d. scalar white noises $\eta_\alpha(t)$ and vector white noise ${\mbox{\boldmath $\eta$}}_0(t).$ Efficient evaluation
of the summation over $\alpha$ in \eqref{dQdt} requires a non-uniform FFT method to evaluate
${\bf w}({\bf x},t)$ only at the particle locations. Finally,
with initial tracer particle positions distributed in space according to the concentration field
$c_0({\bf x}),$ the field at later times is approximated by
$$ c({\bf x},t) = \frac{1}{N} \sum_{n=1}^N \delta^3({\bf x}-{\bf q}_n(t)). $$
This algorithm has cost scaling linearly in $N$ and can be applied with $D_0=0,$ but requires
large numbers of particles $N$ for convergence. Both of these numerical schemes can be applied
in the experimentally relevant case of finite spatial domains and can incorporate additional
important effects, such as buoyancy.
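To make the structure of the Lagrangian scheme concrete, the following minimal Python sketch (our illustration, not code from \cite{donev2014reversible}; the names \texttt{step\_tracers} and \texttt{phis} are ours) advances the tracer positions of \eqref{dQdt} by one Euler-Maruyama step, with the mode sum written as a plain loop in place of the non-uniform FFT used in practice:
\begin{verbatim}
import numpy as np

# One Euler-Maruyama step for the tracer equations
#   dq_n/dt = sum_a phi_a(q_n) eta_a(t) + sqrt(2 D_0) eta_0(t).
# White-in-time noise: eta_a(t) dt is realized as an N(0, dt) draw.
# For spatially homogeneous velocity statistics the Ito-Stratonovich
# correction to particle positions vanishes, so this is consistent.
def step_tracers(q, phis, D0, dt, rng):
    """q: (N, 3) tracer positions; phis: list of mode functions R^3 -> R^3."""
    # One shared draw per mode per step: all particles are advected
    # by the same realization of the velocity field.
    etas = rng.normal(0.0, np.sqrt(dt), size=len(phis))
    dq = np.zeros_like(q)
    for eta, phi in zip(etas, phis):
        dq += eta * np.apply_along_axis(phi, 1, q)
    # Independent bare-diffusion noise for each tracer:
    dq += rng.normal(0.0, np.sqrt(2.0 * D0 * dt), size=q.shape)
    return q + dq
\end{verbatim}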
Another powerful feature of the DFV theory, although not fully exploited in \cite{donev2014reversible}, is the existence of closed partial differential equations for the scalar correlation functions of arbitrary order. This follows from the fact that the long-time, high-$Sc$ limit concentration equation in the DFV theory, Eq.(\ref{PassiveStratonovich100}), is indeed a version of the Kraichnan model \cite{kraichnan1968small,kraichnan1974convection,
falkovich2001particles} in which there is no closure problem for correlation functions.
In the Kraichnan model, the equal-time $p$-point correlation functions
$C_p({\bf x}_1, \dots, {\bf x}_p;t):=\langle c({\bf x}_1, t)c({\bf x}_2, t)...c({\bf x}_p, t)\rangle$
averaged over realizations of the thermal velocity ${\bf w}$ for scalar field $c({\bf x}, t)$ governed by Eq.(\ref{PassiveStratonovich100}) satisfy an exact closed differential equation (see e.g., \cite{EyinkJafari2022}; for a detailed review of the Kraichnan model see \cite{falkovich2001particles, Gawedzki1997,gawedzki2002easy,gawedzki2002soluble,cardy2008non}). For $p=1,$
$C_1({\bf x},t)=\langle c({\bf x},t)\rangle:=\overline c({\bf x}, t),$ the mean concentration field, satisfies the diffusion equation with renormalized diffusivity:
\begin{eqnarray}\label{C1eq}
&& \partial_t \bar{c}({\bf x}, t)
={1\over 2}
\nabla_{x^i}\left[{ R}_{ij}({\bf x},{\bf x})\nabla_{x^j} \bar{c}({\bf x}, t) \right]
+D_0 \triangle_{{\bf x}} \bar{c}({\bf x}, t),
\end{eqnarray}
and for $p=2$
\begin{eqnarray}\nonumber
&&{\partial\over\partial t} C_2({\bf x}_1,{\bf x}_2, t) =D_0 \sum_{n=1}^2 \triangle_{{\bf x}_n} C_2({\bf x}_1,{\bf x}_2, t)\\\label{Kraichnan1}
&&\;+{1\over 2} \sum_{n, m=1}^2
\nabla_{x_n^i}\left[{ R}_{ij}({\bf x}_n,{\bf x}_m)\nabla_{x_m^j} C_2({\bf x}_1,{\bf x}_2, t) \right],
\end{eqnarray}
where ${\bf R}$ is the spatial covariance of the random velocity field, given by Eq.(\ref{SmoothVelocityCovariance101}). Combining these two equations yields a corresponding
closed PDE for the concentration 2nd-order cumulant (connected correlation) function ${\mathcal C}({\bf x}_1, {\bf x}_2, t):=
\langle c({\bf x}_1,t) c({\bf x}_2, t)\rangle-\langle c({\bf x}_1,t)\rangle \langle c({\bf x}_2,t)\rangle=
C_2({\bf x}_1,{\bf x}_2, t)-\overline c({\bf x}_1,t) \overline c({\bf x}_2,t)$:
\begin{eqnarray}\nonumber
&&\partial_t {\mathcal C}({\bf x}_1, {\bf x}_2, t)=D_0 \sum_{n=1}^2 \triangle_{{\bf x}_n} {\mathcal C}({\bf x}_1,{\bf x}_2, t)\\\nonumber
&&\;+{1\over 2} \sum_{n, m=1}^2
\nabla_{x_n^i}\left[{ R}_{ij}({\bf x}_n,{\bf x}_m)\nabla_{x_m^j} {\mathcal C}({\bf x}_1,{\bf x}_2, t) \right]\\\label{C2eq}
&&\; + R_{ij}({\bf x}_1,{\bf x}_2) \nabla_{x_1^i}\bar{c}({\bf x}_1,t)\nabla_{x_2^j}\bar{c}({\bf x}_2,t).
\end{eqnarray}
Similar results hold for all $p>2.$
Furthermore, multi-time concentration correlations such as ${\mathcal C}({\bf x},t;{\bf x}',t'):=
\langle c({\bf x}, t)c({\bf x}',t')\rangle-\langle c({\bf x}, t)\rangle\langle c({\bf x}',t')\rangle$
can also be obtained in the Kraichnan model. Once the single-time correlations are known,
these can be obtained by solving additional closed PDE's, e.g. for $p=2$ one must solve
the equation
\begin{eqnarray}\label{Cmultitime}
&& \partial_t {\mathcal C}({\bf x},t;{\bf x}',t')\cr
&& \hspace{10pt}
= {\mbox{\boldmath $\nabla$}}_{\bf x}{\mbox{\boldmath $\cdot$}}\left(\left(D_0{\bf I}+\frac{1}{2}{\bf R}({\bf x},{\bf x})\right){\mbox{\boldmath $\cdot$}}{\mbox{\boldmath $\nabla$}}_{\bf x}
{\mathcal C}({\bf x},t;{\bf x}',t')\right)
\hspace{10pt}
\end{eqnarray}
for $t>t'$ with ${\mathcal C}({\bf x},{\bf x}',t')$ as initial data. The equation (\ref{Cmultitime})
follows because of the Markovian character of the dynamics in the It$\bar{{\rm o}}$ form
\eqref{PassiveIto100}. Clearly, this equation implies that the correlations of concentration
fluctuations in the Kraichnan model are relaxed by diffusion with the renormalized diffusivity.
Thus, the Onsager regression hypothesis \cite{onsager1931reciprocalI,onsager1931reciprocalII}
becomes exact in the high-Schmidt limit instantaneously in time, without the need for any aging interval. Because the Kraichnan model dynamics is
time-reversible, the results for $t<t'$ are identical to those for $t>t',$
depending only upon $\lvert t-t'\rvert$.
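Concretely, \eqref{Cmultitime} states that multi-time correlations are obtained from single-time ones by applying the heat semigroup with the renormalized diffusivity. As a minimal sketch (our one-dimensional periodic reduction, for illustration only):
\begin{verbatim}
import numpy as np

# Relax an initial correlation C0 under exp(D t Laplacian): multiply
# each Fourier mode by exp(-D k^2 t). This 1D periodic reduction only
# illustrates the diffusive relaxation expressed by the multi-time
# equation; the actual equation acts on the 3D x-dependence.
def relax_correlation(C0, L, D, t):
    n = len(C0)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    return np.fft.ifft(np.fft.fft(C0) * np.exp(-D * k**2 * t)).real
\end{verbatim}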
The above closed equations have the form of parabolic diffusion equations with position-dependent
diffusivities. They can be derived in finite domains with realistic boundary conditions,
where they may provide valuable predictions for experimental flows. In general,
these equations must be solved computationally using numerical PDE methods, which will
become more challenging for complex geometries and especially for high-order statistics
with $p>2$ when the diffusion equation is very multi-dimensional. In the next section we discuss
a simple setting where exact analytical solutions are possible.
\section{Diffusion in an Unbounded Fluid}\label{sec:unbd}
To get some insight about the predictions of the DFV theory, it is useful to consider
the idealized case of a fluid mixture in unbounded 3D space, so that velocity statistics
are spatially homogeneous and isotropic. In that case the Green's function of the
Stokes operator is given also in closed form by the Oseen-Burgers tensor
\begin{equation} G_{ij}({\bf x}, {\bf x}')=G_{ij}({\bf x}- {\bf x}')={1\over 8\pi r}(\delta_{ij}+{r_ir_j\over r^2}) \label{oseen} \end{equation}
with $\bf r=\bf x-\bf x'.$ Consequently, the covariance of the random thermal velocity ${\bf w}$
in the DFV theory becomes
\begin{equation}\label{R}
R_{ij}({\bf r})={k_BT\over 4\pi \eta r}\Big(\delta_{ij}+{r_i r_j\over r^2}\Big), \quad r\gg \sigma
\end{equation}
independent of the choice of the filter kernel ${\mbox{\boldmath $\sigma$}}$ in \eqref{SmoothVelocityCovariance101}.
In a homogeneous and isotropic flow of this type, the effective diffusivity ${\bf D}$ defined in \eqref{Deff} becomes independent of ${\bf x}$ with $D_{ij}=R_{ij}(0)/2=D\delta_{ij}.$ (Here and hereafter
we assume $D_0=0$ for simplicity). The precise value of $D$ does depend now upon the filter kernel
${\mbox{\boldmath $\sigma$}},$ with the kernel used in \cite{donev2014reversible} selected precisely to give
\begin{equation} D={k_BT\over 6\pi \eta\sigma} \label{SE} \end{equation}
in agreement with the Stokes-Einstein relation for a suspension of hard spheres of radius $\sigma.$
However, any reasonable choice of filter gives a result identical to \eqref{SE} with $6\pi$
replaced by another numerical constant of order unity. Thus, with a suitable redefinition of the filter width $\sigma,$ sometimes called the ``hydrodynamic radius,'' the result in \eqref{SE} can always be taken to hold by convention.
The DFV theory thus explains the empirical success of the Stokes-Einstein formula as the effect of strong renormalization of a small bare diffusivity due to advection of solute molecules by thermal
velocity fluctuations.
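For orientation, \eqref{SE} is easily evaluated numerically; the following snippet uses values typical of water at room temperature, which are our assumed inputs rather than quantities taken from any experiment discussed here:
\begin{verbatim}
import numpy as np

# Order-of-magnitude check of the Stokes-Einstein formula, with
# assumed values for water at room temperature:
kB = 1.380649e-23     # J/K
T = 300.0             # K
eta = 1.0e-3          # Pa s, shear viscosity of water
sigma = 1.0e-9        # m, assumed hydrodynamic radius of 1 nm

D = kB * T / (6.0 * np.pi * eta * sigma)
print(f"D = {D:.2e} m^2/s")   # ~2.2e-10 m^2/s, typical of a nm-scale solute
\end{verbatim}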
We next discuss some specific situations in unbounded domains where the DFV closed equations
for mean and correlation function of the concentration field can be simplified and even solved
analytically.
\subsection{Free Diffusion}
As discussed in the Introduction, giant concentration fluctuations with
structure function $S(k,t)\propto k^{-4}$ have been observed very notably in free decay
experiments where an initial blob of concentration diffuses in a fluid at rest
\cite{vailati1997giant,brogioli2000universal,croccolo2007nondiffusive}.
We illustrate here the closed equations that arise from the DFV theory in such free diffusion
when the fluid is idealised as unbounded.
A simple initial configuration of the scalar with maximal symmetry is a sharp plane interface at $z=0$, dividing the regions $z>0$ and $z<0$ with constant concentrations
\begin{equation} c = \left\{\begin{array}{ll}
c_0 & z>0 \cr
0 & z<0 \cr
\end{array} \right. \end{equation}
See Fig.\ref{configuration}. We consider here deterministic
initial data for the concentration field, for simplicity, but it is easy to
include thermal equilibrium fluctuations or other zero-mean random perturbations
(see below). The equation \eqref{C1eq} for the mean concentration profile
reduces here to the 1D diffusion equation,
\begin{equation}\label{Diff1}
{\partial\over \partial t} \overline c(z, t)=D\partial^2_z\overline c(z, t)
\end{equation}
with diffusivity $D$ given by \eqref{SE}. For the step-function initial condition,
this has the well-known solution
\begin{eqnarray}
\overline c (z, t)={c_0\over 2}\Big(1+\erf\Big({z\over 2\sqrt{Dt}}\Big) \Big),
\label{erfc} \end{eqnarray}
in terms of the error function $\erf(z).$
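Though the solution \eqref{erfc} is classical, it is convenient to record the pattern of symbolic verification that we also use for the less familiar solutions below (a small sympy check of our own):
\begin{verbatim}
import sympy as sp

# Check that the error-function profile solves the 1D diffusion
# equation with diffusivity D (c0, D, t, z treated as positive).
z, t, D, c0 = sp.symbols('z t D c0', positive=True)
cbar = (c0 / 2) * (1 + sp.erf(z / (2 * sp.sqrt(D * t))))
residual = sp.diff(cbar, t) - D * sp.diff(cbar, z, 2)
assert sp.simplify(residual) == 0
\end{verbatim}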
\begin{figure}
\includegraphics[scale=.25]{configuration}
\centering
\caption {\footnotesize {Initial distribution $(t=0)$ for a concentration field $c(z, t)$ with non-zero value $c=c_0$ in the region $z>0$ with a sharp interface at $z=0$.}}
\label{configuration}
\end{figure}
The equation \eqref{C2eq} for the 2nd-order cumulant can be simplified by introducing
relative and mean variables
$$ {\bf r}={\bf x}_1-{\bf x}_2=(x,y,z), \quad {\bf X}=\frac{1}{2}({\bf x}_1+{\bf x}_2)=(X,Y,Z) $$
and by noting that, from 2D translational symmetry, ${\mathcal C}$ depends only upon ${\bf r}$ and $Z,$
so that:
\begin{eqnarray}\nonumber
\partial_t {\mathcal C}&=&{1\over 2}\Big(D+{R_{33}({\bf r})\over 2}\Big)\partial_Z^2 {\mathcal C}+\Big(R_{ij}({\bf 0})-R_{ij}({\bf r})\Big)\partial_i\partial_j {\mathcal C}\\\label{C_2}
&&+R_{33}({\bf r})\nabla\overline c\Big(Z+{z\over 2}\Big)\nabla\overline c \Big(Z-{z\over 2}\Big). \label{C2unbd}
\end{eqnarray}
Furthermore, there is rotational symmetry around the $z$-axis.
Working in spherical coordinates defined as $(r, \theta, \phi)$,
this means that ${\mathcal C}={\mathcal C}(r,\theta,Z,t)$ must be independent of azimuthal angle $\phi.$ In those variables, it is tedious but
straightforward to show that \eqref{C2unbd} becomes for $r\gg \sigma$
\begin{eqnarray}\nonumber
\partial_t {\mathcal C}
&=& {k_BT\over 4\pi\eta\sigma}\left(\frac{1}{3}+\frac{\sigma}{4r}(1+\cos^2\theta)\right)
\partial_Z^2 {\mathcal C}\\\nonumber
&&+{k_BT\over 3\pi\eta\sigma}\cdot {1\over r^2}{\partial\over\partial r}\Big[ r^2\Big(1-{3\over 2}{\sigma\over r}\Big){\partial {\mathcal C}\over\partial r} \Big]
\\\nonumber
&&+{k_BT\over 3\pi\eta\sigma}\cdot {1\over r^2\sin\theta}\Big(1-{3\over 4}{\sigma\over r}\Big){\partial\over\partial\theta}\Big(\sin\theta{\partial {\mathcal C}\over\partial\theta} \Big)\\\label{anisotropic1}
&&+ {k_BT\over 4\pi\eta} \cdot {1\over r}(1+\cos^2\theta)
\nabla\overline c\Big(Z+{z\over 2}\Big)\nabla \overline c \Big(Z-{z\over 2}\Big),\qquad
\label{Ceq-slab} \end{eqnarray}
where $r=(x^2+y^2+z^2)^{1/2}$ and $z=r \cos\theta.$
Here we can recognize the scale-dependent diffusivity
\begin{eqnarray}\label{D}
D(r)={k_BT\over 6\pi \eta \sigma}\Big(1-{3\over 2}{\sigma\over r}\Big)
\end{eqnarray}
that appeared in our earlier work \cite{EyinkJafari2022}, Eq.(58), for the fully isotropic problem.
The end result \eqref{Ceq-slab} is a diffusion equation in the 3D space of variables
$(r,\theta,Z).$ The initial correlation ${\mathcal C}_0$ will depend upon the
random initial concentration field adopted, e.g. ${\mathcal C}_0\equiv 0$
for the deterministic initial data discussed above. Even for the highly symmetric
physical configuration considered, the simplified version of the general equation
\eqref{C2eq} for the concentration correlation ${\mathcal C}$
is too complicated for a fully analytical solution. However, with proper regularization
of the divergences at $r=0$ (e.g. see \cite{EyinkJafari2022}, Eq.(56)-(57)), the PDE \eqref{Ceq-slab}
can be solved by numerical discretization methods. Furthermore, the solution may be rationally
approximated by a combination of numerics and analysis. Another case which yields
similar simplifications is the problem of an initial spherical blob of concentration.
Nonequilibrium concentration fluctuations in inhomogeneous, anisotropic and time-dependent free decay
is the subject of ongoing work based on the DFV theory and will be presented elsewhere.
However, it turns out that with one further simplification the equation \eqref{Ceq-slab}
becomes mathematically solvable, as discussed in the following subsection.
\subsection{Constant Concentration Gradient}
An even simpler limiting case of the problem in the previous section is obtained
by letting $t\to\infty,$ $c_0\to\infty$ with the relation $t=c_0^2/4\pi D \gamma^2$
maintained for some constant $\gamma,$ i.e. with the interface gradient
$\gamma=c_0/2\sqrt{\pi D t}$ held fixed. In that case the error-function profile \eqref{erfc},
after subtraction of the divergent constant $c_0/2,$ reduces to the case of constant
gradient, $\bar{c}=(\nabla \bar{c})z$ with
$\nabla\bar{c}=\gamma.$ This idealized situation has been studied before
using linearized fluctuating hydrodynamics and exactly solved within that approximation
\cite{vailati1998nonequilibrium,brogioli2000diffusive,brogioli2016correlations}.
Here we study this same problem using the high-Schmidt asymptotic theory of DFV
without neglecting nonlinear interactions.
This problem remains anisotropic because of the imposed gradient but it is now fully
space-homogeneous and yields a time-independent steady-state. Therefore we may neglect
derivatives with respect to $Z$ and $t$ in equation \eqref{Ceq-slab}, yielding
\begin{eqnarray}\nonumber
&&{2\over r^2}{\partial\over\partial r}\Big[ r^2\Big(1-{3\over 2}{\sigma\over r}\Big){\partial {\mathcal C}\over\partial r} \Big] +{2\over r^2\sin\theta}\Big(1-{3\over 4}{\sigma\over r}\Big){\partial\over\partial\theta}\Big(\sin\theta{\partial {\mathcal C}\over\partial\theta} \Big)\\
&&\quad +{3\over 2}{\sigma\over r}(1+\cos^2\theta)\abs{\nabla \overline c }^2=0,
\label{Cgrad} \end{eqnarray}
whose solution is the steady-state correlation ${\mathcal C}(r,\theta)$.
As a first step in finding the solution, we may consider the correlation function
averaged over solid angle:
\begin{equation}
\overline{{\mathcal C}} (r):={1\over 4\pi} \int {\mathcal C}(r, \theta) d\Omega={1\over 2}\int_0^\pi \sin\theta \; {\mathcal C}(r, \theta) d\theta.
\label{Cavrg} \end{equation}
Because of linearity of the equation \eqref{Cgrad}, this averaged correlation satisfies the ODE
\begin{eqnarray}\label{anisotropic2}
{1\over r^2}{\partial\over\partial r}\Big[ r^2\Big(1-{3\over 2}{\sigma\over r}\Big){\partial \overline {{\mathcal C}}\over\partial r} \Big]+{\sigma\over r}\abs{\nabla \overline c }^2=0.
\end{eqnarray}
Physically, this ODE corresponds to an isotropic system where the concentration gradient
is not pointing in the $z$-coordinate direction but is instead pointing in a random
direction uniformly distributed over solid angles. The general solution of this ODE is of the form
\begin{eqnarray}\label{isotropic1}
\overline{{\mathcal C}}(r)
&=&-{\sigma \abs{\nabla \overline c }^2\over 2}\Big(r+{3\over 2}\sigma \ln\abs{ r-\frac{3}{2}\sigma}\Big) + A \ln\abs{1-\frac{3\sigma}{2r}}+B
\end{eqnarray}
for constants $A,B,$ but in the large-$r$ range of interest the solution reduces to
$$ \overline{{\mathcal C}}(r) \doteq -{\sigma \abs{\nabla \overline c }^2\over 2}r + {\rm const.},
\quad r\gg \sigma. $$
This result shows the expected scaling $\propto r$ in physical space associated to GCF's
\cite{brogioli2016correlations,eyink2022high}.
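The solution \eqref{isotropic1} may be verified by direct substitution; in the following sympy check (ours), \texttt{gamma2} stands for $\abs{\nabla \overline c}^2$ and the range $r>3\sigma/2$ is understood:
\begin{verbatim}
import sympy as sp

# Substitute the general solution back into the angle-averaged ODE.
r, sigma, gamma2 = sp.symbols('r sigma gamma2', positive=True)
A, B = sp.symbols('A B', real=True)
C = (-(sigma * gamma2 / 2) * (r + sp.Rational(3, 2) * sigma
        * sp.log(r - sp.Rational(3, 2) * sigma))
     + A * sp.log(1 - 3 * sigma / (2 * r)) + B)
lhs = sp.diff(r**2 * (1 - 3 * sigma / (2 * r)) * sp.diff(C, r), r) / r**2
assert sp.simplify(lhs + sigma * gamma2 / r) == 0
\end{verbatim}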
We can now look for the solution of the original anisotropic problem \eqref{Cgrad} in the separable
form ${\mathcal C}(r, \theta)= \overline{{\mathcal C}}(r) {\mathcal G}(\theta).$ This strategy is successful for the range $r\gg\sigma$
and leads to the following autonomous equation for ${\mathcal G}$
\begin{equation}\label{G1}
{1\over\sin\theta}
{\partial \over \partial \theta}\left(\sin\theta {\partial {\mathcal G}\over\partial \theta}\right)
+2{\mathcal G}= {3\over 2}(1+\cos^2\theta).
\end{equation}
This ODE can be solved easily by first substituting $\xi=\cos\theta$ and finding a particular solution and general homogeneous solution by the Frobenius series method. One of the homogeneous solutions
contains a logarithm and must be discarded as unphysical due to its non-smoothness at the poles $\theta=0,\pi$.
The result for the anisotropy factor is then found to be
\begin{eqnarray}\label{G}
{\mathcal G}(\theta)&=& {3\over 8} (2+\sin^2\theta)+A\Big(1-{\cos^2\theta\over 2}-{\cos^4\theta\over 8}+\dots \Big),
\end{eqnarray}
where the constant $A$ is to be determined by the condition \eqref{Cavrg}. It turns out
that the latter condition is satisfied exactly by the indicated particular solution, so that
$A=0$ and we obtain the final result
\begin{equation}\label{anisotropic20}
{\mathcal C}(r, \theta)\simeq -{3\over 16} \sigma \abs{\nabla \overline c }^2\;r (2+\sin^2\theta),
\quad r\gg \sigma.
\end{equation}
To our knowledge, this full physical-space correlation of GCF's has not been noted in the literature before. See \cite{brogioli2016correlations} for previous partial results.
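Both ingredients of \eqref{anisotropic20} are easily confirmed symbolically (our check): the indicated particular solution solves \eqref{G1}, and its solid-angle average is unity, consistent with $A=0$ in the condition \eqref{Cavrg}:
\begin{verbatim}
import sympy as sp

# Check the angular factor G(theta) = (3/8)(2 + sin^2 theta).
th = sp.symbols('theta')
G = sp.Rational(3, 8) * (2 + sp.sin(th)**2)
lhs = sp.diff(sp.sin(th) * sp.diff(G, th), th) / sp.sin(th) + 2 * G
src = sp.Rational(3, 2) * (1 + sp.cos(th)**2)
assert sp.simplify(lhs - src) == 0                           # solves the ODE
assert sp.integrate(sp.sin(th) * G, (th, 0, sp.pi)) / 2 == 1  # average = 1
\end{verbatim}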
The scaling of GCF's is more typically given in Fourier space by the {\it static structure function}
\begin{eqnarray}\label{Structure1}
S(k)&=& \int {\mathcal C}(r, \theta) e^{i{\bf k}{\mbox{\boldmath $\cdot$}}{\bf r}} d^3r\cr
&=& \int_0^\infty r^2 dr \int_0^\pi \sin\theta d\theta \int_0^{2\pi} d\phi\; {\mathcal C}(r, \theta)e^{i(k_\parallel r \cos\theta +k_\perp r \sin\theta \cos\phi)},
\end{eqnarray}
where $k_\parallel$ is the magnitude of the wavenumber component parallel to ${\mbox{\boldmath $\nabla$}}\bar{c}$ and
$k_\perp$ is the magnitude of the perpendicular component.
The argument of the exponential function in the above expression can be simplified by applying a rotation to the spherical coordinate system $(r, \theta,\phi)$ by angle $\alpha=\tan^{-1}(k_\perp/k_\parallel)$, which results in standard integrals over the new polar angle $\theta'$. Introducing an IR cut-off, the integral over $r$ can be easily evaluated, with the final result given by
\begin{eqnarray}\label{structure2}
S(k)=6\pi \sigma \abs{\nabla \overline c }^2 {{\hat k}_\perp^2\over k^4}
=\frac{k_B T}{D\eta} \abs{\nabla \overline c }^2 {{\hat k}_\perp^2\over k^4},
\quad k\sigma\ll 1\quad
\end{eqnarray}
where $\hat k_\perp$ is the perpendicular component of the unit vector $\hat{{\bf k}}={\bf k}/k$.
See Appendix \ref{Fourier}. Remarkably,
this is identical to the result predicted by linearized fluctuating hydrodynamics
\cite{vailati1998nonequilibrium,brogioli2000diffusive,brogioli2016correlations},
in the limit of high Schmidt number and with negligible buoyancy effects.
The {\it dynamic structure function} is similarly defined in terms of the
2-time correlation function ${\mathcal C}({\bf r},\tau)=
\langle c'({\bf x}+{\bf r}, t+\tau) c'({\bf x}, t)\rangle$
as
\begin{equation} S(k,\omega) = \iint {\mathcal C}({\bf r},\tau) e^{i{\bf k}{\mbox{\boldmath $\cdot$}}{\bf r}-i\omega\tau} \,d^3r \,d\tau. \end{equation}
Because of space-homogeneity the general equation \eqref{Cmultitime} for 2-time correlations
reduces to
\begin{equation}
\partial_\tau {\mathcal C}({\bf r}, \tau)=D \triangle_{\bf r} {\mathcal C}({\bf r},\tau)
\end{equation}
for $\tau>0$ and the corresponding anti-diffusion equation for $\tau<0.$
We thus obtain
\begin{eqnarray}\nonumber
S(k, \omega) &=& \int_{-\infty}^{+\infty} S(k) e^{-k^2 D \abs{\tau}} e^{-i\omega \tau} d\tau\cr
&=&S(k){2D k^2 \over \omega^2+(Dk^2)^2} \cr
&=& \frac{k_B T}{D\eta} \abs{\nabla \overline c }^2 {{\hat k}_\perp^2\over k^4}{2D k^2 \over \omega^2+(Dk^2)^2},
\quad k\sigma\ll 1 \label{Dynamic}
\end{eqnarray}
once again in perfect agreement with the prediction of linearized theory
\cite{vailati1998nonequilibrium,brogioli2000diffusive,brogioli2016correlations}.
Evident here is the central Rayleigh peak which determines the light scattering properties of the mixture.
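The elementary time integral behind \eqref{Dynamic} can likewise be checked symbolically (our script; by evenness in $\tau$ it reduces to a cosine transform, and $\omega>0$ is assumed without loss of generality):
\begin{verbatim}
import sympy as sp

# Fourier transform of exp(-a|tau|) with a = D k^2: the Lorentzian
# (central Rayleigh peak) 2a/(omega^2 + a^2).
a, w, tau = sp.symbols('a omega tau', positive=True)
ft = 2 * sp.integrate(sp.exp(-a * tau) * sp.cos(w * tau), (tau, 0, sp.oo))
assert sp.simplify(ft - 2 * a / (w**2 + a**2)) == 0
\end{verbatim}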
\section{Conclusions}
It is not obvious why the DFV theory results \eqref{structure2},\eqref{Dynamic}
for the concentration structure function should agree with the predictions of linearized fluctuating
hydrodynamics in the limit of high Schmidt number. This agreement means that the neglected nonlinear
terms which dynamically couple the fluctuations of velocity and concentration do not modify
at all the 2nd-order correlations created by the direct random advection of the mean concentration
gradient. The situation is somewhat reminiscent of the non-renormalization theorems for the chiral
anomaly in quantum gauge theory. The Adler-Bardeen theorem \cite{adler1969absence,adler2005anomalies},
nicely rederived by a renormalization group argument of Zee \cite{zee1972axial}, implies that the result
of the leading-order triangle diagram is not modified by higher corrections from nonlinear interactions to
any order in perturbation theory. This analogy is fitting for the scalar 2-point correlation function,
since its scaling in the Kraichnan model is well-known to be connected with the scalar dissipation
anomaly. As discussed by Gaw\c{e}dzki \cite{Gawedzki1997}, Lecture 3, or \cite{gawedzki2002easy},
Lecture 4, the 2-point correlation scaling in the Kraichnan model is analogous to the
exact $4/5$th law in hydrodynamic turbulence which expresses the dissipative anomaly. Just as for
the 3rd-order velocity structure function in hydrodynamic turbulence, the 2nd-order correlation
in the Kraichnan model has scaling given by dimensional analysis and is protected from
acquiring anomalous scaling because of the lack of non-constant zero modes of its
linear evolution operator. But note that a true dissipative anomaly as $D_0\to 0$ will not occur
for the concentration variance in diffusive mixing according to the DFV theory, due to the regularity of the
advecting velocity ${\bf w}$ at scale $\sigma$! See \cite{donev2014reversible}, section 3.1.
The DFV theory results help to explain the remarkable agreement of linearized fluctuating hydrodynamics
predictions with existing observations from experiment and simulation. They furthermore
confirm the finding of \cite{eyink2022high} that the DFV theory reproduces the expected
scaling $\propto k^{-4}$ of the static structure function. This result opens the door to the
use of the DFV theory for problems of current interest with large gradients, high concentrations
and transient dynamics \cite{baaske2016neuf,vailati2020giant}. Note that in the derivation
of the equation \eqref{C1eq} for the mean concentration $\bar{c}({\bf x},t)$ and equation
\eqref{C2eq} for the fluctuation correlation ${\mathcal C}({\bf x},{\bf x}',t)$ there
is no assumption of a separation in space and time scales for these two quantities, which
co-evolve together. Thus, joint solution of these two equations provides a more systematic approach
than current approximate solutions of the linearized equations that assume scale separation
\cite{vailati1998nonequilibrium}. There is also no assumption of small concentrations or
weak gradients in the derivation of the equations \eqref{C1eq},\eqref{C2eq},
which can be obtained as well with realistic boundary conditions in finite domains. For
highly symmetric geometries such as the plane interface in an unbounded fluid discussed
in section \ref{sec:unbd} these equations take simplified forms such as
\eqref{Diff1},\eqref{Ceq-slab} which can be readily solved numerically, if not analytically.
A Soret flux ${\bf J}_T=-D c S_T {\mbox{\boldmath $\nabla$}} \bar{T}$ associated with a mean temperature gradient
can also be incorporated, although closure requires the assumption of weak concentrations so that
$c(1-c)\simeq c.$ Without making any of these simplifying assumptions, the
Eulerian and Lagrangian numerical methods originally developed in \cite{donev2014reversible} can be applied
in finite domains and include additional important effects such as buoyancy. It is hoped that
our analytical results will help to spur renewed interest in DFV's numerical methods,
since we now can have confidence that they yield physically valid results.
Lastly, but not least importantly, the DFV theory provides results also for higher
$p$-point correlations of the concentration field with $p>2$ and our analysis gives no reason
to believe that these will agree with the predictions of linearized fluctuating hydrodynamics.
In particular, these higher-order statistics must certainly be non-Gaussian due to the
nonlinear coupling term $-{\bf v}'{\mbox{\boldmath $\cdot$}} {\mbox{\boldmath $\nabla$}} c'$ neglected in linearized theory. Methods
previously developed to study anomalous scaling in the Kraichnan model apply here,
including analytical approaches such as $1/d$ expansion \cite{chertkov1995normal,chertkov1996anomalous}
and Lagrangian numerics \cite{frisch1998intermittency,gat1998anomalous,frisch1999lagrangian}.
We are currently investigating such higher-order correlations. These non-Gaussian statistics
are not only of theoretical interest but might also be observable experimentally, e.g. by
multiple-scattering measurements \cite{lemieux1999investigating}. Such higher-order correlations
are now accessible in other condensed matter systems such as atomic quantum gases
\cite{schweigler2017experimental} and, as in those cases, experimental access to such
non-Gaussian statistics could provide an entirely new window into the non-equilibrium
physics of diffusive mixing.
\backmatter
\bmhead{Acknowledgments}
We dedicate this paper to the memory of Krzysztof Gaw\c{e}dzki, to whom we are deeply indebted for his profound insights, his unfailing support and, not least, his warm
friendship. We are also grateful to A. Vailati and A. Donev for stimulating discussions of nonequilibrium concentration fluctuations.
We thank finally the Simons Foundation for support of this work with Targeted Grant No. MPS-663054, ``Revisiting the Turbulence Problem Using Statistical Mechanics.''
\begin{appendices}
\section{Calculation of Structure Function}\label{Fourier}
In this appendix, we evaluate the Fourier transform of the correlation function ${\mathcal C}({\bf r})$ in Eq.(\ref{Structure1}) in order to obtain the static structure function $S({\bf k})$.
First, we take the real part so that our integrand contains the expression $\cos({\bf k\cdot r})=\cos(k_\parallel r \cos\theta +k_\perp r \sin\theta \cos\phi)$ instead of the complex exponential.
We next simplify the integral by rotating the $z$-axis of the integration variable
${\bf r}$ to be parallel with wavevector ${\bf k}$; see Fig.~\ref{Rotation}. This rotation
by angle $\alpha=\tan^{-1}(k_\perp/k_\parallel)$ results in new coordinates, $(r, \theta', \phi')$, in which the argument $k_\parallel r \cos\theta +k_\perp r \sin\theta \cos\phi$ is transformed to $rk\cos\theta'$. The measure $d\Omega=\sin\theta d\theta d\phi=\sin\theta' d\theta' d\phi'$ is of course invariant under rotations. The remaining term in the integrand, $\sin^2\theta$, transforms as
$$\sin^2\theta\equiv (\cos\alpha\cos\phi'\sin\theta'+\sin\alpha\cos\theta')^2+\sin^2\phi'\sin^2\theta'.$$
These simple results can easily be verified by applying the rotation matrix to an arbitrary unit vector $\hat u:=(\cos\phi\sin\theta, \sin\phi\sin\theta, \cos\theta)$:\begin{equation}\nonumber\begin{pmatrix}\cos\phi\sin\theta \\\sin\phi\sin\theta \\\cos\theta \end{pmatrix}= \begin{pmatrix}\cos\alpha & 0 & \sin\alpha\\0 & 1 & 0\\-\sin\alpha & 0 & \cos\alpha\end{pmatrix} \begin{pmatrix}\cos\phi'\sin\theta' \\\sin\phi'\sin\theta' \\\cos\theta' \end{pmatrix}.\end{equation}
We find
\begin{eqnarray}\nonumber
S(k)&=& -{3\over 16} \sigma \abs{\nabla \overline c}^2 \int_0^\infty r^3 dr \int_0^\pi \sin\theta d\theta \int_0^{2\pi} d\phi \\\nonumber
&&\times \; \Big(2+(\cos\alpha\cos\phi\sin\theta+\sin\alpha\cos\theta)^2+\sin^2\phi\sin^2\theta\Big)\cos(k r \cos\theta)
\end{eqnarray}
\begin{figure}
\includegraphics[scale=.25]{CoordRot}
\centering
\caption {\footnotesize {The spherical coordinate system used in evaluating Eq.(\ref{Structure1}).}}
\label{Rotation}
\end{figure}
$\!\!\!$where, to simplify the notation, we have renamed the new primed coordinates $\theta', \phi'$ again as $\theta, \phi$. We find
\begin{eqnarray}\nonumber
S(k)&=& -{3\over 8} \sigma \abs{\nabla \overline c}^2 \int_0^\infty e^{-r/L}r^3 dr \int_0^\pi \sin\theta d\theta \cos(k r \cos\theta)\int_0^{2\pi} d\phi\\\nonumber
&&-{3\over 16} \sigma \abs{\nabla \overline c}^2 \int_0^\infty e^{-r/L} r^3 dr \int_0^\pi \sin\theta \cos(kr\cos\theta) d\theta \int_0^{2\pi} d\phi \\\label{C10}
&&\times\Big(\cos^2\alpha\cos^2\phi\sin^2\theta+\sin^2\alpha\cos^2\theta+\sin 2\alpha\sin \theta\cos\theta+\sin^2\phi\sin^2\theta\Big)
\end{eqnarray}
where we have also introduced an IR cut-off $L$, with the understanding that the limit $L\to \infty$ will be taken at the end.
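Before evaluating these integrals, the quoted transformation of $\sin^2\theta$ can be confirmed symbolically (our check; \texttt{theta\_p}, \texttt{phi\_p} denote the primed angles):
\begin{verbatim}
import sympy as sp

# Verify the transformation of sin^2(theta) under rotation by alpha
# about the y-axis (primed = rotated coordinates).
a, thp, php = sp.symbols('alpha theta_p phi_p', real=True)
R = sp.Matrix([[sp.cos(a), 0, sp.sin(a)],
               [0, 1, 0],
               [-sp.sin(a), 0, sp.cos(a)]])
up = sp.Matrix([sp.cos(php) * sp.sin(thp),
                sp.sin(php) * sp.sin(thp),
                sp.cos(thp)])
u = R * up                    # unrotated unit vector (angles theta, phi)
claimed = (sp.cos(a) * sp.cos(php) * sp.sin(thp)
           + sp.sin(a) * sp.cos(thp))**2 + sp.sin(php)**2 * sp.sin(thp)**2
# sin^2(theta) = 1 - cos^2(theta), and cos(theta) is the third component:
assert sp.simplify(claimed - (1 - u[2]**2)) == 0
\end{verbatim}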
The first line of the above expression is straightforward to evaluate: the integral over $\phi$ gives $2\pi$, while $\int_0^\pi d\theta \sin\theta \cos(k r \cos\theta)={2\over kr }\sin(kr)$, which can then be integrated over $r$ using standard tables of integrals. For example, using \cite{Erdelyi1954} formula 4.7 (14), we find $\lim_{L\to +\infty}\int_0^\infty e^{-r/L} (kr)^2 \sin(kr) dr={-2\over k}$. Therefore, the first line of Eq.(\ref{C10}) reads
\begin{eqnarray}\label{1st}
&&-{3\pi \over 4} \sigma \abs{\nabla \overline c }^2 \int_0^\infty e^{-r/L} r^2 dr \int_0^\pi \sin\theta d\theta \; r\cos(k r \cos\theta)={3\pi \sigma \abs{\nabla \overline c}^2\over k^4}.
\end{eqnarray}
The remaining triple integral in Eq.(\ref{C10}), aside from a factor of ${-3\over 16}\sigma\abs{\nabla\overline c}^2$, can be written as
\begin{eqnarray}\nonumber&& \sin^2\alpha \int_0^{2\pi} d\phi \int_0^\infty e^{-r/L} r^3 dr \int_0^\pi \sin\theta \cos^2\theta\cos(kr\cos\theta) d\theta\\\nonumber
&&+ \cos^2\alpha\int_0^{2\pi} d\phi \int_0^\infty e^{-r/L} r^3 dr \int_0^\pi \sin^3\theta \cos^2\phi \cos(kr\cos\theta) d\theta\\\nonumber
&&+\int_0^{2\pi} d\phi \int_0^\infty e^{-r/L} r^3 dr\int_0^\pi d\theta \sin^2\phi \sin^3\theta \cos(kr\cos\theta)\\\label{C20}
&&+ \sin2\alpha\int_0^{2\pi} d\phi \int_0^\infty e^{-r/L} r^3 dr \int_0^\pi \sin^2\theta \cos\theta\cos(kr\cos\theta) d\theta.
\end{eqnarray}
In the last line, the integral over $\theta$ vanishes because the integrand is odd under the reflection $\theta\to \pi-\theta$. As for the remaining integrals in Eq.(\ref{C20}), those over $\phi$ are elementary and those over $\theta$ can be readily evaluated using the following results:
\begin{eqnarray}\nonumber
&&\int_0^\pi \sin\theta \cos^2\theta\cos(kr\cos\theta) d\theta\\\label{first}
&&={2\over k^3 r^3}\Big[ (k^2 r^2-2)\sin(kr)+2kr\cos(kr)\Big],
\end{eqnarray}
which can easily be obtained by a change of variable, e.g., as $t=\cos\theta$, and then integrating by parts,
and also
\begin{eqnarray}\nonumber
&&\int_0^\pi \sin^3\theta \cos(kr\cos\theta) d\theta={4\over k^3 r^3}\Big[ \sin(kr)-kr\cos(kr)\Big],
\end{eqnarray}
which can be checked by writing $\sin^3\theta=\sin\theta(1-\cos^2\theta)$, then using Eq.(\ref{first}) and the trivial result $\int_0^\pi \sin\theta\cos(kr \cos\theta)\,d\theta=2\sin(kr)/kr$. Substituting these results back into Eq.(\ref{C20}), we see that the remaining integrals over $r$ are in fact Laplace transforms of the functions $(kr)^2 \sin(kr)$, $\sin(kr)$ and $kr \cos(kr)$, which can be evaluated using standard tables of integrals, e.g., \cite{Erdelyi1954}. In the limit $L\to+\infty$, we find $\int_0^\infty e^{-r/L} (kr)^2 \sin(kr) dr\to -2/k$ using \cite{Erdelyi1954} formula 4.7 (14); $\int_0^\infty e^{-r/L} \sin(kr) dr\to 1/k$ using \cite{Erdelyi1954} formula 4.7 (1) and finally $\int_0^\infty e^{-r/L} (kr) \cos(kr) dr\to -1/k$ using \cite{Erdelyi1954} formula 4.7 (57). Putting all this together, and using Eq.(\ref{1st}), we find
\begin{eqnarray}\nonumber
S(k)&=&{3\pi\sigma\over k^4}\abs{\nabla \overline c}^2-{3\over 16}\sigma\abs{\nabla\overline c}^2\Big[{8\pi\over k^4}(1+\cos^2\alpha)-{24\pi\over k^4}\sin^2\alpha \Big].
\end{eqnarray}
Simplifying the above expression, using $\cos^2\alpha=k_\parallel^2/k^2$ and $\sin^2\alpha=k_\perp^2/k^2$, leads to Eq.(\ref{structure2}) in the text, which is the desired result.
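The three regularized integrals invoked above are standard, but they are also easy to confirm with a computer algebra system (our sympy check, writing $s=1/L$):
\begin{verbatim}
import sympy as sp

# Verify the regularized integrals in the limit of infinite IR
# cut-off L, i.e. s = 1/L -> 0+, for k > 0.
r, k, s = sp.symbols('r k s', positive=True)
I1 = sp.integrate(sp.exp(-s * r) * (k * r)**2 * sp.sin(k * r), (r, 0, sp.oo))
I2 = sp.integrate(sp.exp(-s * r) * sp.sin(k * r), (r, 0, sp.oo))
I3 = sp.integrate(sp.exp(-s * r) * (k * r) * sp.cos(k * r), (r, 0, sp.oo))
assert sp.simplify(sp.limit(I1, s, 0, '+') + 2 / k) == 0
assert sp.simplify(sp.limit(I2, s, 0, '+') - 1 / k) == 0
assert sp.simplify(sp.limit(I3, s, 0, '+') + 1 / k) == 0
\end{verbatim}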
\end{appendices}
Q: Change font size in a section as a function of the current font size

This answer shows how to get the current font size, using \makeatletter\f@size\makeatother.
This answer shows how to parse and evaluate a mathematical expression before printing it, using for example \pgfmathparse{sin(60)}\pgfmathresult.
Now I want to combine both and set the font size for a section 1pt smaller than the current font size. This is what I have so far:
\documentclass[11pt]{article}
\usepackage{pgf}
\begin{document}
Before
\makeatletter\fontsize{\pgfmathparse{\f@size - 1}\pgfmathresult pt}{10pt}\selectfont\makeatother
After
\end{document}
I tried to place \makeatletter and \makeatother in different positions (because I don't really understand how they work) but I always get a Missing number, treated as zero error.
Any help is welcome!
PS: I found the package relsize, which does what I want, but I would still want to understand why what I wrote doesn't work.
A: As explained in the other answer, \pgfmathparse cannot be in the argument to \fontsize, which needs something that directly expands to a decimal number (point units are implied).
Here's an implementation that also lets you change the model in which the baseline skip is set to 1.2 times the font size, via a second optional argument (in parentheses). The default value of the bracketed optional argument is 1 (the amount to be subtracted from the current size).
\documentclass{article}
\usepackage{ebgaramond} % fully scalable font
\usepackage{xparse}
\ExplSyntaxOn
% make an expl3 equivalent available
\cs_new_eq:NN \optical_fontsize:nn \fontsize
\cs_generate_variant:Nn \optical_fontsize:nn { ee }
\NewDocumentCommand{\changesize}{O{1}D(){1.2}}
{
\optical_fontsize:ee
{ \fp_eval:n { \use:c { f@size } - #1 } }
{ \fp_eval:n { ( \use:c { f@size } - #1 ) * ( #2 ) } }
\selectfont
}
\ExplSyntaxOff
% for testing
\makeatletter
\newcommand{\printsize}{\f@size/\f@baselineskip}
\makeatother
\begin{document}
Standard size \printsize
\changesize One point less \printsize
\changesize[3] Three point less \printsize
\changesize[-4] Standard size \printsize
\changesize[-10](1.1) Bigger size, but let's also check the baseline skip \printsize
\end{document}
The indirect approach of first expanding both arguments to \fontsize is needed because, by the time LaTeX evaluates the second argument to \fontsize, it has already set \f@size.
A: This is to explain why your approach does not work, not to advocate this as the best way to decrease the font size. It fails because \fontsize expects a number, and \pgfmathparse stands in its way. The solution is as simple as moving \pgfmathparse{...} upfront.
\documentclass[11pt]{article}
\usepackage{pgf}
\begin{document}
Before
\makeatletter\pgfmathparse{\f@size - 1}\fontsize{\pgfmathresult pt}{10pt}\selectfont\makeatother
After
\end{document}
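By the way, a pgf-free variant relying only on e-TeX arithmetic and the LaTeX kernel helper \strip@pt should also work (a sketch, not heavily tested):
\documentclass[11pt]{article}
\begin{document}
Before
% compute (current size - 1pt) as a dimension, then strip the unit for \fontsize
\makeatletter\fontsize{\strip@pt\dimexpr\f@size pt-1pt\relax}{10pt}\selectfont\makeatother
After
\end{document}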
\section{Introduction}
When a high energy photon interacts with a proton, a QCD-governed hard
scattering process may take place in which either the photon, or a parton
within the photon, interacts with a parton in the proton. A signature for
such processes is high-\mbox{$E_T$}\ jets in the final state. Both H1 and ZEUS at HERA
have reported measurements of various aspects of jet photoproduction~[1--5].
In general terms, the properties of particles in jets depend on the type of
leading parton generated by the hard process,
and on the process known as fragmentation (or hadronisation)
by which the colour in the jet is neutralised.
The main features of the fragmentation of a given leading parton
at a given energy are believed to be universal and independent of the type of
initiating process.
It is fragmentation, moreover, rather than leading strange or
charm quarks, which generates most of the strange particles that occur
in jets~\cite{Cash}.
At the phenomenological level, a longitudinal fragmentation function
$D(z)$ may be used to describe the observed momentum distribution of
a chosen type of particle along the jet axis,
where $z$ is the fraction of the jet momentum taken by the particle.
Experimentally, $D(z)$ functions
have so far been obtained mainly from fits to $e^+ e^-$ collider data.
In the present analysis, using data taken in $ep$
collisions with the ZEUS detector in 1994, we examine the inclusive properties
of charged particles and \mbox{$K^{0}$}\ mesons produced
in association with high-\mbox{$E_T$}\ photoproduced jets. This work
complements the studies of such particles which have been performed in
deep inelastic scattering (DIS) by the ZEUS and H1
collaborations~[7--10].
The results are compared with predictions obtained
from the PYTHIA Monte Carlo generator, which incorporates
the Lund string model of particle fragmentation as implemented in
JETSET~\cite{Pythia}. As one possible improvement to the standard description,
the suggestion is investigated that
the particle distributions may be affected by an underlying event
structure due to multiparton scattering~\cite{ZEUSMI}.
Several studies are also made in this analysis to test the universality of fragmentation.
In deep inelastic scattering~\cite{H1DIS,ZEUSDIS,BrFr}, it has been found
that the behaviour of charged particles in the current region of the
Breit frame is similar to that found in
high energy quark jets in $e^+ e^-$ annihilation; however, there are
indications~\cite{ZDISK,HDISK,E665} that DIS
may give jets with a lower strangeness content than expected from the standard
settings of PYTHIA.
DELPHI~\cite{Delphi} has also reported results with a similar conclusion.
We investigate whether this applies also
to jets produced using incoming photons that are quasi-real,
noting that inclusive cross sections of \mbox{$K^{0}$}\ in
photoproduction found by H1 are broadly consistent with PYTHIA
predictions~\cite{H1K}.
Comparison is also made with the fragmentation function calculations of
Binnewies et al.~\cite{bin}, which are based on fits to the particle
content of jets at PEP and LEP~\cite{PEP}.
To give a more immediate test of the universality of fragmentation in
different types of process, we select a sample of events dominated by
the direct photoproduction process (i.e.\ in which the photon interacts in a
pointlike manner). These data allow a
comparison with $e^+e^-$ and DIS results without any intermediate
model or parameterisation.
The structure of the paper is as follows. After an account of the
apparatus, the event selection and the selection of charged tracks, the
procedure for reconstructing $K^0$ mesons is discussed. We describe the
Monte Carlo models used in simulating the data, and the
more important sources of systematic error.
Results are then given on the properties
of charged particles and \mbox{$K^{0}$}\ mesons in photoproduced jets.
Finally, we present a study of fragmentation functions for both
charged particles and $K^0$ mesons, extracting
results for direct photoproduced events in order to make a general
comparison with other types of experiment.
\section{Apparatus and running conditions}
The data used in the present analysis were collected by the ZEUS detector
at HERA. During 1994, HERA collided positrons with energy $E_e = 27.5$ GeV
with protons of energy $E_p =820$ GeV, in 153 circulating
bunches. Additional unpaired positron (15) and proton (17) bunches enabled monitoring of
beam related backgrounds. The data sample used in this analysis corresponds to an integrated
luminosity of 2.6 pb$^{-1}$. The luminosity was measured by means of the positron-proton
bremsstrahlung process $ep\to e\gamma p$, using a lead-scintillator calorimeter
at $Z=-107$ m\footnote{The ZEUS
coordinates form a right-handed system with positive-$Z$ in the
proton beam direction and a horizontal $X$-axis. The nominal
interaction point is at $X = Y = Z = 0.$
Pseudorapidity $\eta$ is defined as $-\ln\tan(\theta/2)$, where
$\theta$ is the polar angle relative to the $Z$ direction. In the present
analysis, $\eta$ is always defined in the laboratory frame, and the actual
interaction point of the event is taken into account.}
which intercepts photons radiated at angles of less than 0.5 mrad
with respect to the positron beam direction.
The ZEUS apparatus is described more fully elsewhere~\cite{r1}. Of particular importance
in the present work are the central tracking detector (CTD), the vertex
detector (VXD) and the uranium-scintillator calorimeter (CAL).
The CTD~\cite{CTD} is a cylindrical drift chamber situated
inside a superconducting magnet coil which provides a 1.43 T field.
It consists of 72 cylindrical layers covering the polar angle
region $15^{\circ} < \theta < 164^{\circ}$ and the radial
range 18.2--79.4 cm.
The transverse momentum resolution for tracks traversing all CTD layers
is $\sigma(p_T)/p_T \approx \sqrt{(0.005 p_T)^2 + (0.016)^2}$, with $p_T$
in GeV. The VXD~\cite{VXD} supplied tracking inside the CTD,
and consisted of 120 radial cells, each with 12 sense wires covering the radial
range 10.6--14.3 cm. The vertex position of a typical multiparticle
event is determined from the tracks
to an accuracy of typically $\pm$1 mm in the $X, Y$ plane and $\pm$4 mm in $Z$.
The CAL \cite{UCAL} gives an angular coverage of 99.7\% of $4\pi$ and is divided
into three parts (FCAL, BCAL, RCAL), covering the forward (proton direction),
central and rear polar angle ranges $2.6^{\circ}$--$36.7^{\circ}$,
$36.7^{\circ}$--$129.1^{\circ}$ and $129.1^{\circ}$--$176.2^{\circ}$, respectively.
Each part consists of towers which are longitudinally subdivided
into electromagnetic (EMC) and hadronic (HAC) readout cells.
From test beam data, energy resolutions of
$\sigma_{E}/E = 0.18/\sqrt{E}$ for electrons and $\sigma_{E}/E = 0.35/\sqrt{E}$
for hadrons have been obtained (with $E$ in GeV). The calorimeter
cells also provide time measurements which are used for beam-gas background
rejection.
\section{Trigger and event selection}
To identify jets in the event trigger and the subsequent selection of events,
a cone algorithm~\cite{r5} in accordance with the Snowmass
Convention~\cite{sno} was applied to the calorimeter cells.
Each cell signal was treated as corresponding to a massless particle.
A cone radius of 1.0 in $R = \sqrt{(\delta\phi)^2 + (\delta\eta)^2}$ was used,
where $\delta\phi,\,\delta\eta$ denote the distances of cells from the
centre of the jet in azimuth and pseudorapidity.
A similar algorithm was used both online and offline.
The transverse energy of the reconstructed jet is the
sum of the measured transverse energies of the calorimeter cells included
in it, and will be referred to as \mbox{$E_T^{\;rec}$}.
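To make the convention concrete, the following toy sketch (not the ZEUS code; the cell list, seed direction and cone radius are all assumed inputs) computes the cone content and the $E_T$-weighted axis in the Snowmass manner:
\begin{verbatim}
import math

def snowmass_cone(cells, eta0, phi0, R=1.0):
    """Sum E_T and form the E_T-weighted (eta, phi) axis for all
    calorimeter cells inside a cone of radius R around (eta0, phi0);
    each cell signal is treated as a massless particle."""
    et_sum = eta_sum = dphi_sum = 0.0
    for et, eta, phi in cells:
        # wrap the azimuthal difference into (-pi, pi]
        dphi = math.atan2(math.sin(phi - phi0), math.cos(phi - phi0))
        if math.hypot(eta - eta0, dphi) < R:
            et_sum += et
            eta_sum += et * eta
            dphi_sum += et * dphi
    if et_sum == 0.0:
        return None
    return et_sum, eta_sum / et_sum, phi0 + dphi_sum / et_sum

# toy cells as (E_T [GeV], eta, phi); the last one lies outside the cone
cells = [(5.0, 0.1, 1.0), (3.0, -0.2, 1.2), (0.5, 2.0, -2.0)]
print(snowmass_cone(cells, eta0=0.0, phi0=1.1))
\end{verbatim}
In a full jet finder this axis update would be iterated until the cone is stable; the one-pass version above only illustrates the Snowmass weighting.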
The ZEUS detector uses a three-level trigger system. The first level
trigger selected events on the basis of a coincidence of a regional or
transverse energy sum in the calorimeter and a track in the CTD pointing
towards the interaction point. At the second level, at least 8 GeV of
transverse energy was demanded, excluding the eight calorimeter towers
surrounding the forward beam pipe. Beam-gas background is further reduced
using the measured times of energy deposits and the summed energies in
the calorimeter. At the third level, jets were identified.
The trigger combination used in the present study required
at least one jet with $\mbox{$E_T^{\;rec}$} > 6.5$ GeV and $\eta<2.5$, or with
$\mbox{$E_T^{\;rec}$} > 5.5$ GeV and $\eta<2.0$.
Cosmic ray events were rejected by means of information from the CTD
and the calorimeter. An interaction vertex at a position $Z>-75$ cm,
as determined from the CTD tracks, was demanded.
A total of 392k events passed these requirements.
The trigger efficiency is close to 100\%
over the entire kinematic range of events used in the present analysis.
To study the association of \mbox{$K^{0}_s$}\ mesons with jets,
it was necessary to obtain a sample of high energy jets
sufficiently centred in the acceptance of the CTD so as to
optimise the acceptance for the $\pi^+$ and the $\pi^-$ decay products of
associated \mbox{$K^{0}_s$}\ mesons. With these considerations in mind,
events were selected offline with at least one jet
having $\mbox{$E_T^{\;rec}$}>7$ GeV and $|\eta|<0.5$.
Standard ZEUS background rejection criteria were applied~\cite{r4,r1} to improve the
rejection of beam-gas
events and cosmic ray events by means of cuts on the primary vertex position,
the fraction of well-measured tracks, the CAL signal times
and the transverse momentum imbalance in the event.
On the assumption that the outgoing positron is not detected in the CAL,
$y_{JB} = \sum(E- p_Z)/2E_e$ was calculated, where the sum is
over all calorimeter cells, treating each signal as equivalent to a massless
particle; i.e.\ $E$ is the energy deposited in the cell, and
$p_Z$ is the value of $E\cos\theta$. The quantity
$y_{JB}$ is then a measure of $y^{true} = E_{\gamma,\,in}/E_e$, where
$E_{\gamma,\,in}$ is the energy of the incident virtual photon.
In the case that a DIS positron is present, a value of approximately unity
is obtained. Events containing a DIS positron were rejected by requiring that
(i) no scattered beam positron be identified in the CAL, and
(ii) $y_{JB} < 0.7$.
A requirement of $y_{JB}\ge 0.15$ was also imposed as part of the beam-gas
background rejection procedure. A total of 34.8k events was obtained at
this stage, containing 36.6k jets with $\mbox{$E_T^{\;rec}$}>7$ GeV and $|\eta|<0.5$.
This selection of events, each containing at least one
accepted jet, represents the basic event sample for the analysis that follows.
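As a toy illustration of the $y_{JB}$ estimator (not the ZEUS reconstruction; the cell energies and polar angles are assumed inputs):
\begin{verbatim}
import math

E_e = 27.5  # positron beam energy [GeV]

def y_jb(cells):
    """Jacquet-Blondel y from calorimeter cells given as (E, theta),
    treating each deposit as massless: sum E*(1 - cos(theta)) / (2 E_e)."""
    return sum(E * (1.0 - math.cos(theta)) for E, theta in cells) / (2.0 * E_e)

# a single rear deposit (theta near pi) carrying the photon energy gives
# E - p_Z ~ 2E, so y_JB ~ E_gamma / E_e
print(y_jb([(11.0, 0.99 * math.pi)]))  # ~0.4, inside the accepted 0.15-0.7 range
\end{verbatim}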
\section{Charged particle selection}
For the study of inclusive charged particle distributions, CTD tracks were accepted if
\begin{numlis}
\item they were associated with the primary vertex,
\item their pseudorapidity was in the range $|\eta| \le 1.5$, and
\item their transverse momentum satisfied $p_T\ge 0.5$ GeV.
\end{numlis}
The pseudorapidity and momentum conditions are chosen to
be the same as those to be imposed on the \mbox{$K^{0}_s$}\ mesons.
Apart from a small component arising from short-lived particle
decays, these conditions provide a sample of well-measured tracks which
can be identified with charged hadrons originating
from the main interaction. In the following sections, charged particles
will be referred to as $h^\pm$.
\section{$K^0$ reconstruction}
$K^0$ mesons appear 50\% of the time as \mbox{$K^{0}_s$}, which have a 68.6\% branching
ratio into $\pi^+\pi^-$~\cite{PDG}.
\mbox{$K^{0}_s$}\ mesons were identified by their charged decay mode $K^0_s\to
\pi^+\pi^-$. This decay mode is easily identifiable given the accurate
tracking measurements from the CTD, since the mean decay
distance, projected on to the $r\phi$ plane, of $2.67p_T/m_K$ cm ensures
a spatial separation of the decay vertex from the primary event vertex
for a large number of the \mbox{$K^{0}_s$}\ produced in the kinematic conditions of the
present analysis.
\mbox{$K^{0}_s$}\ identification starts by selecting pairs of oppositely charged tracks
obtained using the standard ZEUS reconstruction algorithms.
In order to restrict the track selection to the kinematic region where the tracking
was best understood, each track was required to satisfy
the conditions $p_T> 150$ MeV and pseudorapidity $|\eta|<1.75$.
More details of the track reconstruction procedure are given in
ref.~\cite{ZDISK}. Starting with the track
parameters determined at the point where the track passes nearest to the
beamline, and assuming the tracks in this region to be circular in
the $r\phi$ plane, candidate secondary vertices were then found
by first calculating the intersection points in the $r\phi$ plane of all pairs
of oppositely charged tracks in an event. Zero or two such intersection
points are found for a given track pair. The $Z$-coordinates
and the momentum components of each track of a pair were then calculated at
each of these $(r,\phi)$ points.
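For illustration, the geometric step of intersecting two track circles in the transverse plane might look as follows (a sketch under simplified assumptions; the circle centres and radii would come from the fitted track helices):
\begin{verbatim}
import math

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles in the (x, y) plane;
    returns zero or two points, as for the track pairs above."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0.0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                             # no crossing points
    a = (r1**2 - r2**2 + d**2) / (2.0 * d)    # distance from c1 along centre line
    h = math.sqrt(max(r1**2 - a**2, 0.0))     # offset of the two solutions
    xm, ym = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    ux, uy = -(y2 - y1) / d, (x2 - x1) / d    # unit normal to the centre line
    return [(xm + h * ux, ym + h * uy), (xm - h * ux, ym - h * uy)]

print(circle_intersections((0.0, 0.0), 5.0, (6.0, 0.0), 5.0))
# -> [(3.0, 4.0), (3.0, -4.0)]
\end{verbatim}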
Further selections were made on the kinematic quantities listed below.
\begin{romlis}
\item {\em Separation in $Z$ position.} The separation $|\Delta Z|$
of the two tracks at a candidate secondary vertex was
required to be less than 3 cm. If a given track pair gave
two candidate secondary vertices, the one with smaller $|\Delta Z|$
was chosen.
\item{\em Collinearity.} The collinearity angle $\alpha$ is defined as the
projected angle in the $r\phi$ plane between the \mbox{$K^{0}_s$}\ momentum and the
line joining the primary to the secondary vertex. Candidates were accepted
if $\cos\alpha \ge 0.99$.
\item{\em Track impact parameter.} The impact parameter $\epsilon$ of a track
is defined as the distance of closest approach between the extrapolated
track and the primary event vertex.
A requirement $\epsilon > 0.3$ cm was imposed on both tracks for each
kaon candidate.
\item{\em Photon conversion.}
The effective mass of the two tracks was evaluated assigning zero mass
to each track. Track pairs whose effective mass was less than 50 MeV were
excluded from further consideration.
\item{\em $\Lambda$ removal.} The $p\pi$ mass hypothesis was applied to each
track pair, taking the higher momentum particle to be the $p$ ($\bar p$).
As in \protect\cite{ZDISK},
\mbox{$K^{0}_s$}\ candidates with a $p\pi$ mass less than 1.12 GeV were rejected.
\end{romlis}
\mbox{$K^{0}_s$}\ mesons with $\mbox{$p_T$} \ge 0.5$ GeV
and $|\eta|\le 1.5$ were used in the present analysis.
The $\pi^+\pi^-$ mass distribution for all accepted candidates
in this kinematic range is shown in fig.\ \ref{fc}, and displays a strong
\mbox{$K^{0}_s$}\ signal. After subtracting a background (averaged over windows
between 440--470 and 530--560 MeV) from the signal region of 470--530 MeV,
a total of 3154$\pm$63 \mbox{$K^{0}_s$}\ was evaluated on a background of 388.
The mean value of the reconstructed \mbox{$K^{0}_s$}\ mass was 497.0$\pm$0.1 MeV
(statistical error) compared with the
nominal value of 497.7 MeV~\cite{PDG}, with a fitted width
6.3$\pm$0.1 MeV. Monte Carlo comparisons show that the decay vertex
and momentum of the {\mbox{$K^{0}_s$}} are well reconstructed over the full range
of momentum and pseudorapidity, without significant
systematic bias. This background subtraction method was used to obtain
the \mbox{$K^{0}_s$}\ signal in each bin of all plotted distributions.
Further details of the \mbox{$K^{0}_s$}\ reconstruction may be found in ref.~\cite{theses}.
\section{Monte Carlo simulations and data correction}
The experimental data were compared with Monte Carlo
generated events obtained using PYTHIA 5.7, with JETSET 7.4 for the
hadronisation~\cite{Pythia}. Systematic studies were
made using HERWIG 5.8~\cite{Herwig}. PYTHIA is found to give a reasonable
description of jet profiles in hard photoproduction in the
kinematic region of the present study~\cite{r6, r7}. PYTHIA events were
generated using a $p_{T\;min}$ value of 2.5 GeV as the cut-off for the
hard subprocess; the results were insensitive to this choice.
The event generation was followed by a full simulation of the
detector and trigger response in ZEUS by means of GEANT 3.13~\cite{GEANT}.
The direct and resolved contributions are combined in proportion to their
generated cross sections.
The GRV-HO parton densities for the photon~\cite{GRV} and MRSA for the
proton~\cite{MRS} were used in running the standard version of
PYTHIA. Variations on the standard calculation were made by reweighting the
results to use different photon parton densities,
and also by using an option which uses
multiparton interactions (MI) as a model for an underlying event
accompanying the main hard QCD subprocess (with a $p_{T\;min}$ value
of 1.4 GeV for the secondary interactions). In the LO QCD model employed here,
multiparton scattering may accompany the resolved
processes, in which the photon acts as a source of partons,
but cannot accompany the direct process, in which the photon interacts as
a pointlike entity with no partonic substructure.
Within string fragmentation models such as PYTHIA, the most important quantity governing
the number of strange particles that appear in the fragmentation is the
so-called ``strange\-ness suppression parameter'' $P_s/P_u$. This is
assigned a default value of 0.3 on the basis of measurements at PETRA and
PEP~\cite{Saxon}, which measured a variety of strange/non-strange
particle ratios (e.g.\ $K^0$ to $\pi^\pm$ mesons)
in the products of $e^+e^-$ annihilation.
In previous studies of neutral kaons in DIS at HERA~\cite{ZDISK, HDISK} it
was found that decreasing $P_s/P_u$ from its
standard value improved the agreement between the Monte Carlo and the data.
We have therefore generated Monte Carlo event samples also
with the value of $P_s/P_u$ decreased from 0.3 to 0.2.
The Monte Carlo events were used to determine correction factors
for the data. These were calculated separately for each bin of
any given plot, so that for each plotted quantity the corrections are suitably
averaged over the other physical quantities. For \mbox{$K^{0}$}\
calculations, the following factors were taken into account:
\begin{romlis}
\item {\em \mbox{$K^{0}_s$}\ reconstruction efficiency $\epsilon_{K^0_s}$.}
Given a set of reconstructed and accepted Monte Carlo events, this
was the ratio of (a) the number of
reconstructed $K^0_s\to\pi^+\pi^-$ entries in a given bin, background
subtracted, to (b) the corresponding number of
generated $K^0_s\to\pi^+\pi^-$ occurring in this bin.
All the other event selection criteria are applied in the normal way.
\item{\em Reconstruction level/hadron level correction factor $C$.}
This factor makes use of jets reconstructed from the four-vectors of the
primary final state particles (charged or uncharged) in the generated
events, referred to as ``hadron jets''. For events with a
generated $K^0_s\to\pi^+\pi^-$ in a given bin,
$C$ is defined as the ratio of (a) the number of events
which satisfy the trigger and acceptance conditions and
have a reconstructed calorimeter jet with $\mbox{$E_T^{\;rec}$}>7$ GeV and $|\eta|<0.5$,
to (b) the number of events having $0.2 \le y^{true} < 0.85$ and a hadron jet
with cone radius 1.0, $\mbox{$E_T$}\ge 8$ GeV and $|\eta| \le 0.5$.
In this way we correct to a defined set of kinematic conditions
at the final state level. The parameters in (b) are chosen
to provide a good correspondence with the cuts at the detector level, and
thereby minimise the corrections. The $y^{true}$ range corresponds to a
$\gamma p$ centre of mass energy range of 134--277 GeV.
\end{romlis}
Since the shapes of the distributions
are similar in the data and Monte Carlo (after reconstruction),
the calculated correction factors will suitably
take into account all effects due to reconstruction efficiency,
event selection efficiency and bin-to-bin migration. These aspects have
been checked in our previous studies of inclusive jet production~\cite{r5}.
The \mbox{$K^{0}_s$}\ reconstruction efficiency is
dominated by the geometric effects of the track cuts and
the CTD tracking efficiency, with little sensitivity to the
properties of the jet. It is similar for direct and resolved events.
In the region of $p_T$ and $\eta$ used here,
$\epsilon_{K^0_s}$ has a plateau of approximately 0.35 in the middle of the
accepted range of either quantity, falling to 0.25 at the ends.
$C$ takes values typically between 0.6 and 0.8.
Corrected numbers of $K^0$ mesons in bins $\Delta v$ of any
given variable $v$ are evaluated according to the formula
\begin{equation}
\frac{dN}{dv}\!\!
\begin{array}{l}\mathit{(corrected)}\\ \, \end{array} =
\frac{N\mathit{(detected)}}{\Delta v\,\epsilon_{tot}\,B},
\end{equation}
where $\epsilon_{tot}=\epsilon_{K^0_s}\,C$, and $B$ is the
total branching ratio $K^0\to\mbox{$K^{0}_s$}\to \pi^+\pi^-$.
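As a numerical sketch of formula (1) (the bin content is invented; the efficiency and correction values are the typical ones quoted above):
\begin{verbatim}
# Toy application of Eq. (1): dN/dv = N(detected) / (dv * eps_tot * B)
eps_k0s = 0.35            # K0s reconstruction efficiency (plateau value above)
C = 0.7                   # reconstruction/hadron-level factor (typical 0.6-0.8)
B = 0.50 * 0.686          # total branching ratio K0 -> K0s -> pi+ pi-
dv = 1.0                  # bin width in the chosen variable v
N_detected = 100.0        # background-subtracted entries in the bin (invented)

eps_tot = eps_k0s * C
print(N_detected / (dv * eps_tot * B))   # ~1190 corrected K0 per unit of v
\end{verbatim}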
For charged particles, a similar relationship
$\epsilon_{tot}=\epsilon_{h^\pm}\,C$ is used,
where the charged particle reconstruction efficiency
$\epsilon_{h^\pm}$ is calculated as the ratio of reconstructed and
accepted tracks to the number of generated charged particles in the same
bin. $C$ is then calculated analogously to the \mbox{$K^{0}$}\ case, and $B$ is set to
unity. Here and in the calculation of $C$, generated charged particles
are taken to be those in the final state with lifetimes greater than
$10^{-8}$ s, together with charged decay products of primary
hadrons with a shorter lifetime than this,
but excluding the products of \mbox{$K^{0}_s$}\ and $\Lambda,\,\bar\Lambda$ decays.
All the particle distributions are normalised to the number of jets.
Keeping in mind that each event in the basic event sample has at least
one accepted jet, we evaluate (a) the total number of
charged particles or kaons in each bin of a given distribution,
and (b) the total number of accepted jets in the basic event sample.
The ratio of (a) to (b) is then plotted to give numbers of particles per jet.
To obtain corrected distributions, it is thus
necessary to correct the total number of measured jets
$N_{jets}$, in the defined kinematic range,
to the total number of above-defined hadron jets. To achieve this,
the correction formula (1) is again employed, using an efficiency factor
$\epsilon_{tot}= C$ together with $B=1$, and removing from
the definition of $C$ the conditions on kaons or charged particles.
\section{Systematic uncertainties}
The stability of the results against reasonable variations of the selection
criteria was studied in order to estimate the systematic uncertainties.
The most important effects were estimated as follows:
\begin{numlis}
\item To study the sensitivity to the fragmentation scheme in the Monte Carlo,
HERWIG was used instead of PYTHIA to evaluate the correction factors.
\item An uncertainty exists in the matching of the calorimeter
energy scale in the Monte Carlo events to that of the data. To estimate this,
the Monte Carlo energy scale was varied by $\pm 5$\%.
\item To estimate tracking uncertainties, the effect of including or not
including the VXD in the track reconstruction was investigated.
\item The effect of varying the accepted
$y_{JB}$ range to $0.2 < y_{JB} < 0.8$ was evaluated.
\item{$K^0$ definition:}\\
(i) the cut on $|\Delta Z|$ was varied
between 2 and 4 cm;\\
(ii) the cut on $\cos\alpha$ was varied by $\pm 0.005$;\\
(iii) the cut on the track impact parameter $|\epsilon|$
was varied in the range 0.27--0.33 cm;\\
(iv) the upper mass value for $\Lambda$ rejection was varied between 1.117 and
1.123 GeV.
\end{numlis}
The effects on the results of most of the parameter variations listed
were normally found to be small, at the level of up to a few percent.
The individual contributions were combined in quadrature to give the
total systematic error on a given point.
The systematic errors on the correction factors are
dominated by the effects of (1), (2) and (3).
The comparison of uncorrected data with PYTHIA predictions is affected
by (2) and (3).
\section{Results}
\subsection{Charged particles}
In fig.\ \ref{fi} we present efficiency and acceptance
corrected distributions of $dN(h^\pm)/dp_T^{\; 2}$ and $dN(h^\pm)/d\eta$
in $\mbox{$p_T$($h^\pm$)}$ and $\mbox{$\eta$($h^\pm$)}$ respectively. As defined in section 6,
the corrected distributions are given for particles in events
containing a final state hadron jet of $\mbox{$E_T$(jet)} \ge 8 $ GeV
with $|\mbox{$\eta$(jet)}|\le 0.5$. Here and throughout, all results
are normalised as numbers of particles per jet satisfying the
stated definitions.
The $\mbox{$p_T$($h^\pm$)}$ distributions are averaged over $ -1.5 \le \mbox{$\eta$($h^\pm$)} < 1.5$;
the $\mbox{$\eta$($h^\pm$)}$ distributions are averaged over $\mbox{$p_T$($h^\pm$)} \ge 0.5$ GeV.
At this stage no explicit association of the $h^\pm$ with the jets is made.
The statistical errors are small.
Good agreement is seen between the data and the standard PYTHIA
predictions in the shape and magnitude of the \mbox{$p_T$}\ distribution.
In the $\eta$ distribution, the overall agreement is still good, but
it is apparent that standard PYTHIA tends to undershoot the data at
higher $\eta$ values. A similar trend has been observed in
other photoproduction studies at HERA~\cite{H1MI,r5,r6},
leading to suggestions that the discrepancy can be attributed
at least in part to multiparton interactions~\cite{ZEUSMI}. The MI option
is seen to give an improved description of the data at high $\eta$; however,
it overestimates the exponential slope of the \mbox{$p_T$}\ distribution.
A reduction in the $P_s/P_u$ value has a negligible
effect on the present distributions (not shown). Little difference is seen
if different photon parton densities are used.
We now examine in more detail the association of
charged particles with jets, given the presence of at least one jet in
each event in the selected data sample.
Fig.\ \ref{fj}(a) shows the uncorrected distribution in the difference
$\Delta\phi$ in azimuth between an $h^\pm$ and the axis of
any jet in the event satisfying $\mbox{$E_T^{\;rec}$} \ge 7$ GeV
and $|\mbox{$\eta$(jet)}|\le 0.5$. The peak around $|\Delta\phi| =\pi$ indicates the
presence of jets opposite in azimuth to the observed particle.
For $h^\pm$-jet pairs with $|\Delta\phi| < \pi/2$, the distance $\Delta\eta$
in pseudorapidity between the $h^\pm$ and the jet is plotted in
fig.\ \ref{fj}(b). In comparing the data to the Monte Carlo predictions,
an overall systematic uncertainty of $\pm5\%$ should be allowed.
In discriminating between the various models, a particularly useful
variable was found to be $R = \sqrt{(\Delta\phi)^2 +(\Delta\eta)^2}$,
i.e.\ the distance in $(\eta, \phi)$ between a given particle
and the jet axis. The uncorrected $R$ distribution is presented in
fig.\ \ref{fj}(c). A prominent peak at small $R$ demonstrates that the $h^\pm$
are being produced in association with jets; there
is also a peak at $R\approx 3$ due to the likely existence of another jet
opposite in $\phi$ to the first. In the following discussion
we concentrate mainly on the region $R < 2.5$.
Over the whole range, the main contributions come from resolved processes.
The predictions of standard PYTHIA lie slightly above the data at low $R$, but
are low in the inter-jet region $1 < R < 2.5$.
To establish whether the latter discrepancy is associated with
the excess of \mbox{$h^{\pm}$}\ over expectations seen at high \mbox{$\eta$($h^\pm$)}, a similar plot was made with the \mbox{$h^{\pm}$}\ acceptance changed to
$-1.5 < \mbox{$\eta$($h^\pm$)} < 0.5$ (not shown). This reduced the
discrepancy for $1 < R < 2.5$ slightly but did not eliminate it.
Variations of the $P_s/P_u$ parameter and the photon
parton density again have little effect on the
distributions. The effect of using the MI option
of PYTHIA is more pronounced; in the inter-jet region, the MI option
overcompensates for the previous shortfall of standard PYTHIA events,
while at small $R$ the jet appears broadened.
Overall, the MI option provides an improved description of the data.
The mean corrected multiplicity of charged particles with $\mbox{$p_T$($h^\pm$)} \ge 0.5$ GeV
per jet, integrated over $R\le 1$,
is shown in Table~\ref{tbl} compared with different results from PYTHIA.
All the models are consistent with the experimental result, within errors.
\subsection{\mbox{$K^{0}$}\ mesons}
In the previous section it was seen that the features of charged particles
in photoproduced jets are generally well described using standard PYTHIA,
with some further improvement obtainable using the MI option. Turning now
to \mbox{$K^{0}$}\ mesons, we consider the same distributions as for the $h^\pm$.
As before, at least one reconstructed calorimeter
jet with $\mbox{$E_T^{\;rec}$}\ge 7$ GeV and $|\mbox{$\eta$(jet)}|\le0.5$ was demanded in an event,
and the corrected distributions are for events containing a final-state
hadron jet with $\mbox{$E_T$(jet)} \ge 8 $ GeV and $|\mbox{$\eta$(jet)}|\le 0.5$.
The distributions are normalised as numbers of \mbox{$K^{0}_s$}\ or \mbox{$K^{0}$}\ per jet
for uncorrected or corrected data respectively.
Corrected $dN(K^0)/dp_T^{\; 2}$ and $dN(K^0)/d\eta$ distributions are
shown in fig.\ \ref{fe} as functions of $\mbox{$p_T$($K^0$)}$ and $\mbox{$\eta$($K^0$)}$.
The $\mbox{$p_T$($K^0$)}$ distributions are averaged over $-1.5 < \mbox{$\eta$($K^0$)} < 1.5 $;
the $\mbox{$\eta$($K^0$)}$ distributions are averaged over $\mbox{$p_T$($K^0$)} \ge 0.5$ GeV.
No attempt is yet made to associate the \mbox{$K^{0}_s$}\ with jets.
In the larger error bars the systematic and statistical errors
are summed in quadrature.
Following the studies mentioned~\cite{ZDISK,HDISK}, we also show results
with the value of $P_s/P_u$ decreased from 0.3 to 0.2.
A reasonable overall agreement is seen between the data and the
various PYTHIA plots in the shape and magnitude of the \mbox{$p_T$}\ spectrum,
although the MI option tends to overestimate the slope of the plot.
For $\mbox{$\eta$($K^0$)}\le 0.5$, the data of fig.\ \ref{fe}(b) show an approximate agreement
with the standard PYTHIA which is improved by using a reduced value of $P_s/P_u$.
At higher \mbox{$\eta$($K^0$)}, however, a tendency for the PYTHIA values to be low
compared with the data is worsened when $P_s/P_u$ is reduced.
The agreement in this region is improved with the use of the MI option.
Fig.\ \ref{ff}(a) shows the uncorrected distribution in the difference
$\Delta\phi$ in azimuth between a \mbox{$K^{0}_s$}\ and
any jet in the event satisfying $\mbox{$E_T^{\;rec}$} > 7$ GeV and $|\mbox{$\eta$(jet)}|\le 0.5$.
For \mbox{$K^{0}_s$}-jet pairs with $|\Delta\phi| < \pi/2$, the distance $\Delta\eta$
in pseudorapidity between the \mbox{$K^{0}_s$}\ and the jet is plotted in
fig.\ \ref{ff}(b).
A common overall systematic uncertainty of $\pm7\%$ should be allowed.
The data are reasonably well fitted by the Monte
Carlo outside the peak at $\Delta\phi\approx 0$ and in the wings of the
\mbox{$\eta$($K^0$)}\ distribution.
In both distributions, standard PYTHIA
overestimates the numbers of kaons near the axis of a jet.
The uncorrected distribution in $R$ is plotted in fig.\ \ref{ff}(c)
for all \mbox{$K^{0}_s$}.
Here it is clear that standard PYTHIA overestimates the data in the jet core,
i.e.\ for $R<0.4$, while underestimating it in the interjet region.
To gain understanding of the possible reasons for the discrepancy
in the jet core, a variety of investigations were made.
It was found that the discrepancy persists
when the number of charged tracks in the region of the jet (i.e.\ with $R<1$)
is selected to be small: even with just one or two charged particles
present in addition to the \mbox{$K^{0}_s$}, a similar effect is seen.
In the events used in this analysis, the
mean number of additional charged particles in the jet is approximately 3. It is
therefore difficult to attribute the effect to a poorly understood
problem with the charged particle tracking.
The effects of varying the photon parton densities were investigated and found
to be small: the use of the GRV-LO~\cite{GRV},
ACFGP~\cite{Aur}, LAC1~\cite{LAC} or GS-HO~\cite{PD} parton sets
altered the predictions by less than 5\% for $R < 1$.
No attempt was made to vary the parton densities in the proton,
since these are well defined in the present kinematic range from other
measurements. Removing the small contribution from
charm-containing jets in the generated events made only a slight
difference, giving little scope for remedying the problem by remodelling the
charm simulation. If the normalisation is performed to luminosity
rather than to numbers of jets, the conclusions are likewise unchanged.
A similar although less marked discrepancy at low $R$ was found using
predictions from the HERWIG Monte Carlo.
The discrepancy outside the jet is again found to be reduced
slightly but not eliminated if the acceptance is reduced to
$-1.5 < \mbox{$\eta$($K^0_s$)} < 0.5$. The MI option gives an improved description in
the inter-jet region, but is worse elsewhere.
In about 7\% of the kaonic events, two \mbox{$K^{0}_s$}\ were found.
The distance $R$ in $(\eta,\phi)$ between them was
plotted, giving a distribution which was
fairly flat up to $R\approx 3.5$ with a small enhancement at low $R$.
Standard PYTHIA tends to overestimate the data by approximately the
same amount as at $R< 0.4$ in fig. \ref{ff}(c), but over the entire
$R$ range. The conclusion is that many of the \mbox{$K^{0}$}\ pairs are not strongly
correlated.
The characteristics of the \mbox{$K^{0}_s$}\ in jets can be further investigated
by distinguishing between direct and resolved
photoproduction processes, as defined at leading order in QCD.
A subsample of events was chosen in which at least two jets were found,
one satisfying the above experimental jet definition and a
second having $\mbox{$E_T^{\;rec}$} > 7$ GeV as before, and $\mbox{$\eta$(jet)} < 2.5$. The two
highest \mbox{$E_T$}\ jets satisfying these conditions were used to estimate the
fraction of the photon energy which takes part in the hard subprocess.
This estimate is given in terms of the measured parameters of the two jets as
$\xgO=(E_{T\,1}^{\;rec}e^{-\eta_1} +E_{T\,2}^{\;rec}e^{-\eta_2})/(2E_e\,y_{JB})$.
For $\xgO < 0.75$ the event samples are dominated by the resolved process; above this value the direct process
is more important~\cite{r6}.
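As a toy illustration (the jet values are invented, not taken from the data), this estimator can be evaluated as follows:
\begin{verbatim}
import math

def x_gamma_obs(jets, y_jb, E_e=27.5):
    """x_gamma^OBS from the two highest-E_T jets, each given as (E_T, eta)."""
    return sum(et * math.exp(-eta) for et, eta in jets) / (2.0 * E_e * y_jb)

# one central and one forward jet at E_T = 8 GeV (invented values)
print(x_gamma_obs([(8.0, 0.0), (8.0, 1.5)], y_jb=0.4))  # ~0.44: resolved-enhanced
\end{verbatim}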
In figs.~\ref{frd2}(a), \ref{frd1}(a), $R$ distributions for the
resolved-enhanced
and direct-enhanced data samples ($\xgO < 0.75,\; \xgO > 0.75$) are shown,
and compared with PYTHIA predictions. A small admixture of generated
direct events is found in the PYTHIA samples with $\xgO < 0.75$, and a
relatively larger admixture of generated resolved events
in the samples with $\xgO > 0.75$.
In the inter-jet region, a deficit in the standard PYTHIA
predictions is seen in the resolved-enhanced sample
but not in the direct-enhanced sample, suggesting
the possibility of an association with multiparton interactions.
In the jet core, the resolved-enhanced sample shows a much larger
discrepancy between the data and standard PYTHIA than does the direct-enhanced
sample. The statistical errors on the PYTHIA histograms are similar
to those on the data.
In fig.~\ref{frd2}(b), we consider the effects of
the reduced $P_s/P_u$ value and the MI option in PYTHIA
on the resolved-enhanced event sample.
The use of the MI option is manifestly advantageous,
and is in fact essential in order to obtain reasonable
agreement with the data when using $P_s/P_u=0.2$.
A $P_s/P_u$ value between 0.2 and 0.3 appears preferred.
While not perfect, the agreement using $P_s/P_u = 0.2$ together with the MI
option represents a significant improvement on the standard version of PYTHIA.
Similar conclusions emerge from the direct-enhanced sample (fig.~\ref{frd1})
in which, overall, the reduced $P_s/P_u$ value significantly improves the
match with the data in the $R$ distribution.
The MI option, affecting only the resolved contribution to the histograms,
has a positive effect. A $P_s/P_u$ value
intermediate between 0.2 and 0.3 might represent a further improvement.
The mean corrected multiplicity of \mbox{$K^{0}$}\ with $\mbox{$p_T$($K^0$)} \ge 0.5$ GeV
per jet, integrated over $R\le 1$, is shown in Table~\ref{tbl}.
The full event sample containing one or more jets per event is used.
A comparison with the different results from PYTHIA
confirms the preference for a lower strangeness suppression parameter
in describing the overall numbers of \mbox{$K^{0}$}\ within the jets.
\subsection{Fragmentation functions}
In the previous sections, angular distributions of
charged particles and \mbox{$K^{0}_s$}\ relative
to jets were studied, averaged over the particle momenta.
The momentum distributions of particles within a
jet may be described by fragmentation functions $D(z)$, where $z$
is a measure of the fraction of the jet momentum taken by the particle.
$D(z)$ is defined as $(1/N_{jets})\,dN(X)/dz$, where $X$ denotes a
charged particle or \mbox{$K^{0}$}. A particle is defined here as being
associated with a given jet if it satisfies the criterion $R<1$.
The variable $z$ may be defined in several ways.
In the usage of CDF~\cite{CDF}, the longitudinal component of
the particle momentum along the jet axis is scaled by the jet energy
according to a formula which may be written as
$z_L = \pB(X)\cdot\nB(\mbox{jet})/E(\mbox{jet})$. Here
$\pB$ denotes the momentum 3-vector of a particle,
and $\nB(\mbox{jet})$ is a unit vector along the jet axis.
An alternative approach is to take the energy ratio $z_E = E(X)/E(\mbox{jet})$.
This is analogous to the definition $z = E(X)/E(\mbox{beam})$ normally used
in $e^+e^-$ collider experiments, where $E(\mbox{beam})$ corresponds to
the maximum possible energy $E(\mbox{max})$ of a leading parton.
One notes that in the latter measurements, since they
are carried out in the $e^+e^-$ centre of mass frame,
the calculated value of $z$ does not depend on the
actual production angle of the final state particle, and so is independent
of many details of the hadronisation mechanism and higher order effects.
No explicit identification of jets is required, or association of particles with jets.
Here we present our results using both definitions of $z$.
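A minimal sketch of the two definitions (illustrative only; the pion mass assignment for \mbox{$h^{\pm}$}\ follows the text below):
\begin{verbatim}
import math

M_PI = 0.1396  # charged-pion mass [GeV] assigned to h+-

def z_long(p, n_jet, E_jet):
    """z_L: momentum component of the particle along the jet axis / jet energy."""
    return sum(pi * ni for pi, ni in zip(p, n_jet)) / E_jet

def z_energy(p, E_jet, m=M_PI):
    """z_E: particle energy / jet energy."""
    return math.sqrt(sum(c * c for c in p) + m * m) / E_jet

p = (1.5, 0.3, 2.0)        # particle momentum [GeV] (invented)
n = (0.0, 0.0, 1.0)        # unit vector along the jet axis
print(z_long(p, n, 9.0), z_energy(p, 9.0))  # z_E exceeds z_L, as noted below
\end{verbatim}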
The numbers of $h^\pm,$ \mbox{$K^{0}$}\ and jets were corrected as described in
section 6. A small systematic overestimate of the $z$ scale
at low values exists due to losses of low momentum particles from the
reconstructed jet. The loss of energy in dead material in front
of the CAL also has the effect of increasing the measured $z$ value.
A correction is applied to remove these effects. Results are plotted
at the mean $z$ value of the events in a series of chosen $z$ intervals.
Values of $z_E$ are typically 10--20\% larger on average than those of $z_L$.
Figs.\ \ref{fD1}(a) and \ref{fD2}(a) show the resulting $D(z_L)$
distributions for $h^\pm$ and \mbox{$K^{0}$}\ respectively,
with pion masses assigned to the \mbox{$h^{\pm}$}. Also plotted are values
of the corresponding distribution obtained using
PYTHIA. The distributions are averaged over $\mbox{$\eta$(jet)}$ and
$\mbox{$E_T$(jet)}$ within the same selected ranges as above.
For $z_L<0.05$ the effects of the 0.5 GeV
cut-off on $p_T$ in the \mbox{$h^{\pm}$}\ and \mbox{$K^{0}$}\ data become important.
Good agreement is seen between the
data and the standard PYTHIA distributions,
in the \mbox{$K^{0}$}\ case for $z_L$ values above 0.15;
in this range it is not possible to discriminate between
$P_s/P_u$ values of 0.3 and 0.2.
The discrepancy noted in the \mbox{$K^{0}_s$}\ $R$ plots appears to be concentrated
at $z_L < 0.15$, where the bulk of the statistics lie, apart from the
two highest $z_L$ points with large errors.
The MI option is in agreement with the $\mbox{$h^{\pm}$}$ data at low momenta now that the
association of particles with jets has been made.
Also shown in fig.\ \ref{fD1}(a) are $D(z_L)$ values from CDF~\cite{CDF},
for \mbox{$h^{\pm}$}\ in jets of energy $\approx$40--100 GeV produced in $p\bar p$
scattering. Although the CDF energies are considerably higher than those of the
present measurements, and the mixture of leading final state partons is also
likely to be different, there is a clear similarity of
the fragmentation functions in the range $0.15 < z_L < 0.7$.
For a comparison with $e^+e^-$ data, results from the
present measurements are first compared with calculations from
the next-to-leading order phenomenological fits of Binnewies et al.~\cite{bin}
(figs.\ \ref{fD1}(b), \ref{fD2}(b)). These authors
have made use of a variety of $e^+e^-$ data sets at different energies
to extract separate fragmentation functions for the different
types of primary parton in the range $0.1 < z < 0.8$.
The approach is based on the different quark couplings to
photons and to $Z$ bosons, and on topological properties of
events with hard gluon emission. A definition
$z = E(X)/E\mbox{(beam)}$ is used; we therefore adopt here the corresponding
definition $z = z_E = E(X)/E\mbox{(jet)}$.
Since the primary partons in the photoproduced jets are unidentified,
and we currently lack a full NLO Monte Carlo simulation of
the $\gamma p$ process, a shaded region is
drawn to cover the spread of the calculated $D(z)$ values for the fragmentation
of gluons and different types of quark (i.e.\ $u$, $d$, $s$, $c$).
The calculation has been performed using a QCD scale of 8 GeV,
corresponding approximately to the present jet \mbox{$E_T$}\ values.
Details of which parton fragmentation functions contribute to the
upper and lower bounds of the shaded regions are given in the figure captions.
Good agreement with the calculations is found, to within their
uncertainty as applied to the present data. A few remarks need to be
kept in mind:
\begin{romlis}
\item Ref.~\cite{bin} includes charged decay products of long-lived primaries
such as \mbox{$K^{0}_s$}, $\Lambda$ in the definition of a charged particle. This gives
$\approx10$\% more charged hadrons compared with the present method.
\item As stated above, a jet definition, and the requirement that the given particle
be associated with a jet, are required in our present approach,
but not in the $e^+e^-$ based analyses.
This tends to lower the distributions in particular at lower $z_E$ values.
\item The experimental $z_E$ (or $z_L$) value is corrected relative to the energy of
hadron jets. A correction to the energy of the
final-state leading parton in the hard process has not been attempted here.
\end{romlis}
More precise comparisons are available using the direct photoproduction
process, since a close similarity is expected in general between event
properties in direct hard photoproduction, $e^+e^-$ annihilation,
and DIS at low $x$, where $x$ is the Bjorken variable. At energies
comparable to those of the present data,
$e^+e^-$ annihilation is governed by a single electromagnetic vertex,
summed over the different types of quark, while the LO direct
photoproduction process is
dominated by photon gluon fusion, whose diagrams feature
a similar electromagnetic vertex accompanied by a quark-gluon vertex which
does not depend on the quark type. In a similar way, in DIS events at low $x$,
the virtual photon couples to quarks that come predominantly from the sea,
having evolved through $gq\bar q$ vertices which are likewise flavour
independent.
We therefore make a comparison between fragmentation functions obtained from
the direct photoproduction process in ZEUS, and values obtained from $ e^+e^-$
and DIS measurements.
Figs.~\ref{fzzh}, \ref{fzz} show results obtained from the present
direct-enhanced event samples, corrected by means of PYTHIA on a bin-by-bin
basis to remove the resolved component so as to be
equivalent to results from ``pure direct'' event samples.
These are compared with predictions from PYTHIA, and with $e^+e^-$
measurements at similar centre of mass energies to those
of the present hard subprocess, i.e.\ twice the transverse energy of the
selected jets. A factor of 0.5 is applied to the published $e^+e^-$
data to allow for the dominant $q\bar q$ final state.
For the $h^\pm$ data, a further comparison is made to
fragmentation functions from ZEUS calculated in the current region of the
Breit frame in deep inelastic scattering~\cite{ZEUSDIS}. Here,
data points are used covering a photon virtuality range of
$160 < Q^2 < 320$ GeV$^2$ with $0.0024 < x < 0.01$.
The corresponding jet energy is $Q/2$, i.e.\ 6.3--8.9 GeV, roughly
equivalent to the jets of the present events.
It should be noted that the TASSO data~\cite{TASSOh} include
all charged products from \mbox{$K^{0}_s$}\ and $\Lambda$ decays, while the ZEUS DIS
and ZEUS 1994 photoproduction data exclude all such products;
these effects contribute at the level of $\pm10\%$.
The \mbox{$h^{\pm}$}\ data also compare well with $pp$ data from the ISR (not
shown)~\cite{ISR}, although a different parton mixture is expected in the final
state here.
For the inclusive \mbox{$K^{0}$}\ data of \cite{TASSO,HRS} at different fixed centre of
mass energies, the authors quote scaling cross sections which
have been converted to fragmentation functions to compare with the present data.
At low $z_E$, the distributions may be affected by differences
between the colour flows in the different types of event, as well as
by the cut-off imposed at 0.5 GeV in \mbox{$p_T$}\ in the present
$h^\pm$ and \mbox{$K^{0}$}\ data, making comparisons between the different data sets
difficult. However for $z_E > 0.1$ for \mbox{$h^{\pm}$}, and $z_E > 0.15$ for \mbox{$K^{0}$},
the present results are in good agreement with the standard PYTHIA
predictions, and the $D(z_E)$ values from the different
measurements are also in good agreement with each other.
This well illustrates the universality that is believed to be
a property of the quark fragmentation process.
\section{Summary and conclusions}
We have studied the properties of charged particles (\mbox{$h^{\pm}$}) and \mbox{$K^{0}$}\ mesons in
photoproduced events in the ZEUS detector at HERA.
The \mbox{$K^{0}$}\ mesons were studied in the $\mbox{$K^{0}_s$}\to \pi^+\pi^-$ decay mode.
In each event at least one reconstructed jet was required
in the calorimeter with measured $\mbox{$E_T^{\;rec}$} > 7$ GeV,
centrally produced in the laboratory frame.
The distributions of the $h^\pm$ and \mbox{$K^{0}$}\ in these events were
studied as a function of a number of kinematic variables;
the distance $R$ in $(\eta,\phi)$ of the particle from the axis of a jet
displayed information which
was not so clearly evident from the other distributions.
Correction factors were applied to evaluate the numbers of \mbox{$h^{\pm}$}\ and \mbox{$K^{0}$}\
per jet at the final state hadron level, with $\mbox{$E_T$(jet)} > 8$ GeV
and $|\mbox{$\eta$(jet)}| < 0.5$, for events in the $\gamma p$ centre of mass energy
range $134 < W < 277$ GeV. Corrected distributions in
transverse momentum \mbox{$p_T$}\ and pseudorapidity $\eta$ are given.
The corrected numbers of \mbox{$h^{\pm}$}\ and \mbox{$K^{0}$}\ within a jet are evaluated,
with a particle defined as being inside a jet if it is within unit
radius of the jet axis in $(\eta, \phi)$.
Fragmentation functions $D(z)$ for \mbox{$h^{\pm}$}\
and \mbox{$K^{0}$}\ in photoproduced jets have also been determined.
In comparing the present results with those from theory and from other
experiments, two definitions of $z$ are employed, in which the longitudinal
momentum component of the particle along the jet axis is scaled to the
jet energy ($z_L$), or the particle energy is similarly scaled ($z_E$).
The distribution of \mbox{$h^{\pm}$}\ within photoproduced jets
is found to be fairly well described by the standard version of PYTHIA,
whereas that of \mbox{$K^{0}$}\ mesons is not. Outside the jets, more \mbox{$h^{\pm}$}\
and \mbox{$K^{0}$}\ are found than predicted; the latter situation can be to some
extent remedied by using a version of PYTHIA which
includes a simulation of multiparton interactions in resolved events. Taken
overall, the numbers of \mbox{$K^{0}$}\ within the jets correspond to a reduced value
of the strangeness suppression parameter $P_s/P_u$ in PYTHIA.
When the data are divided into samples enriched in the resolved and
direct photoproduction processes respectively, this statement remains true
in either case. However the effect is concentrated at $z$ values below
$\approx0.15$;
at higher values the default parameters of
PYTHIA give a satisfactory description of the data. This suggests a need for
further study of fragmentation at low $z$, where large numbers
of particles are found but where their association with jets may be less well
defined than at high $z$.
The fragmentation functions determined using
inclusive photoproduced jets are found to be in agreement
with calculations from Binnewies et al.\ to within the uncertainty
of the calculation as applied to the present data.
A close similarity is seen between the present \mbox{$h^{\pm}$}\ fragmentation functions
in the range $0.15 < z_L < 0.7$ and those observed in $p\bar p$
scattering. Fragmentation functions have also been extracted for
\mbox{$h^{\pm}$}\ and \mbox{$K^{0}$}\ in direct photoproduced jets, and are compared with
corresponding data from $e^+e^-$ annihilation and
from deep inelastic scattering. Agreement is
good for $z_E > 0.1$ and $z_E > 0.15$ for \mbox{$h^{\pm}$}\ and \mbox{$K^{0}$}\ respectively.
This, together with the agreement found in this region with
PYTHIA, represents a confirmation of the idea of a universally
valid description of parton fragmentation.
\\[8mm]
\noindent
{\Large \bf Acknowledgements}\\[3mm]
We thank the DESY directorate and staff for their continued
support and encouragement, and likewise the HERA
machine group for their excellent efforts in operating HERA.
We are grateful to J. Binnewies for helpful conversations and for making
numerical calculations available.
\bigskip
\section{Introduction and a new proper scoring rule}
Decisions based on accurate (probabilistic) forecasts of quantities of interest are very important in practice,
since they directly affect our daily life.
For example, meteorology involves forecasting of temperature and wind speed, in hydrology it is important to predict water levels,
and in time series analysis one forecasts future values of a time series modelled, e.g., by an autoregressive moving average process.
The quantity in question is usually modelled by a random variable having an unknown distribution function and, in general,
several forecasting distribution functions are proposed by practitioners, so it is a challenging and important task
to determine which one is the best, and in which sense.
A so-called scoring rule assigns a score based on the forecasted distribution function and the realized observations, see, e.g., Gneiting and Raftery (2007)
and Dawid and Musio (2014).
More precisely, following the setup and notations of Brehmer and Gneiting (2020),
let \ $\Omega$ \ be a non-empty set, \ $\cB$ \ be a \ $\sigma$-algebra on \ $\Omega$,
and \ $\cP$ \ be a convex set of probability measures on \ $(\Omega,\cB)$.
\ A {\sl scoring rule} is an extended real valued function \ $S:\cP\times \Omega\to\RR\cup\{-\infty, \infty\}$ \ such that
\[
S(\PP,\QQ):= \int_\Omega S(\PP,\omega)\, \QQ(\dd\omega)
\]
is well-defined for all \ $\PP,\QQ\in\cP$ \ (in particular, for all \ $\PP\in\cP$, \ the mapping \ $\Omega\ni \omega\mapsto S(\PP,\omega)$ \ is measurable).
A scoring rule \ $S$ \ is called {\sl proper} relative to \ $\cP$, \ if
\begin{align}\label{help6}
S(\QQ,\QQ)\leq S(\PP,\QQ) \qquad \text{for all \ $\PP,\QQ\in \cP$.}
\end{align}
A scoring rule \ $S$ \ is called {\sl strictly proper} relative to \ $\cP$, \ if it is proper relative to \ $\cP$, \
and for any \ $\PP,\QQ\in\cP$, \ the equality \ $S(\QQ,\QQ) = S(\PP,\QQ)$ \ implies \ $\QQ=\PP$.
\ Note that in information theory a similar inequality to \eqref{help6} appears, namely, if \ $\xi$ \ and \ $\eta$ \ are discrete random variables
having finite ranges, then \ $H(\xi,\xi)=H(\xi)\leq H(\xi,\eta)$, \ where \ $H(\xi,\xi)$ \ and \ $H(\xi,\eta)$ \ denote the Shannon entropy
of \ $(\xi,\xi)$ \ and \ $(\xi,\eta)$, \ respectively.
In fact, the Shannon entropy \ $H(\xi) = -\sum_{i=1}^n p_i \log_2(p_i)$ \ of a discrete random variable \ $\xi$ \ having range
\ $\{x_1,\ldots,x_n\}\subset\RR$ \ with some \ $n\in\NN$ \ and having distribution \ $\PP_\xi(\{x_i\}) = p_i$, \ $i=1,\ldots,n$, \
coincides with \ $S_{\mathrm{ent}}(\PP_\xi,\PP_\xi) = \int_\RR S_{\mathrm{ent}}(\PP_\xi,\omega)\,\PP_\xi(\dd\omega)$, \ where
\ $S_{\mathrm{ent}}(\PP_\xi,\omega):=-\log_2(p_i)$ \ if \ $\omega=x_i$ \ with some \ $i\in\{1,\ldots,n\}$ \ and
\ $S_{\mathrm{ent}}(\PP_\xi,\omega):=0$ \ otherwise.
Here \ $S_{\mathrm{ent}}$ \ is nothing else but the so-called logarithmic score, for more details, see, e.g.,
Gneiting and Raftery (2007, Example 3 and Section 4).
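A tiny numerical check of this connection between the logarithmic score, the Shannon entropy and propriety (the two distributions are illustrative inputs only):
\begin{verbatim}
import math

def log_score(p, omega):
    """S_ent(P, omega) = -log2 p(omega) for a discrete distribution p."""
    return -math.log2(p[omega])

def expected_score(p, q):
    """S(P, Q) = E_Q[ S_ent(P, .) ]; S(Q, Q) equals the Shannon entropy H(Q)."""
    return sum(q[w] * log_score(p, w) for w in q)

q = {"a": 0.9, "b": 0.1}
p = {"a": 0.5, "b": 0.5}
print(expected_score(q, q), "<=", expected_score(p, q))  # 0.469 <= 1.0
\end{verbatim}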
Next, we give an interpretation of the inequality \eqref{help6} in the case where \ $\Omega$ \ is the real line \ $\RR$ \ and
\ $\cB$ \ is the Borel \ $\sigma$-algebra \ $\cB(\RR)$ \ on \ $\RR$.
\ Suppose that we believe that a real-valued random variable \ $X$ \ has a distribution \ $\QQ$ \ on \ $(\RR,\cB(\RR))$,
\ and that the penalty for quoting some predictive distribution \ $\PP$ \ on \ $(\RR,\cB(\RR))$ \ for a realization
\ $\omega\in\RR$ \ is \ $S(\PP,\omega)\in\RR\cup\{-\infty, \infty\}$.
\ So if our quoted distribution for \ $X$ \ is \ $\PP$, \ then the expected value of our penalty is
\ $\EE(S(\PP,X)) = \int_\RR S(\PP,\omega)\, \QQ(\dd\omega) = S(\PP,\QQ)$.
\ Based on principles of decision theory, we should choose our quoting distribution \ $\PP$ \ in order to minimize
the expected penalty \ $S(\PP,\QQ)$, \ and inequality \eqref{help6} says that \ $\QQ$ \ is such an optimal choice.
If \ $S$ \ is strictly proper, then \ $\QQ$ \ is the unique optimal choice.
It is known that a scoring rule satisfying some kind of regularity condition (see \eqref{help2}) can be properized in the sense that
it can be modified in a way that it becomes a proper scoring rule, see, e.g., Theorem 1 in Brehmer and Gneiting (2020), which we recall below.
\begin{Thm}\label{Thm_properization}
Let \ $S:\cP\times \Omega\to\RR\cup\{-\infty, \infty\}$ \ be a scoring rule.
Suppose that for every \ $\PP\in\cP$ \ there exists a probability measure \ $\PP^*\in\cP$ \ such that
\begin{align}\label{help2}
S(\PP^*,\PP) \leq S(\QQ,\PP)\qquad \text{for all \ $\QQ\in \cP$.}
\end{align}
Then the function \ $S^*:\cP\times\Omega\to \RR\cup\{-\infty, \infty\}$ \ defined by
\[
S^*(\PP,\omega):=S(\PP^*,\omega),\qquad \PP\in\cP, \; \omega\in\Omega,
\]
is a proper scoring rule.
\end{Thm}
In all that follows, let \ $\Omega:=\RR$ \ and \ $\cB$ \ be the Borel $\sigma$-algebra on \ $\RR$,
\ and a probability measure \ $\PP\in\cP$ \ is identified with the function \ $\RR\ni x \mapsto \PP((-\infty,x))$,
\ which is nothing else but the distribution function of the random variable \ $\Omega\ni\omega\mapsto \omega$ \
with respect to the probability measure \ $\PP$.
\ Here we remind the reader that we use this definition of a distribution function instead of
\ $\RR\ni x\mapsto \PP((-\infty,x])$ \ (which is also common in the literature).
In notation, instead of \ $\PP((-\infty, x))$ \ we will write \ $\PP(x)$, \ where \ $x\in\RR$.
A commonly used scoring rule is the so-called weighted Continuous Ranked Probability Scoring rule (wCRPS)
defined by
\[
\mathrm{wCRPS}(\PP,y):=\int_{\RR} (\PP(x) - \bbone_{\{y<x\}})^2 w(x)\,\dd x, \qquad \PP\in\cP, \; y\in\RR,
\]
where \ $w:\RR\to(0,\infty)$ \ is a given measurable function (also called a weight function).
In the special case \ $w(x)=1$, \ $x\in\RR$, \ wCRPS is nothing else but the Continuous Ranked Probability Scoring rule (CRPS),
see, e.g., Gneiting and Raftery (2007, Section 4.2).
Sometimes, wCRPS and CRPS are simply called weighted Continuous Ranked Probability Score
and Continuous Ranked Probability Score, respectively.
By choosing the weight function in an appropriate way, the center or (one of the) tails of the range of the distribution functions can be emphasized.
For more details on the role of weight functions and examples for some commonly used weight functions, see Gneiting and Ranjan (2011, page 415 and Table 4).
These scoring rules are commonly used in practice, see, e.g., the very recent work of Baran, Hemri and El Ayari (2019).
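
For readers who wish to experiment with these scores, the following minimal Python sketch evaluates \ $\mathrm{wCRPS}(\PP,y)$ \ by midpoint-rule quadrature on a truncated grid; the standard normal distribution function and the unit weight below are illustrative choices of ours, not prescriptions:
\begin{verbatim}
# Sketch: evaluating wCRPS(P, y) by midpoint-rule quadrature on a
# truncated grid; P and w below are illustrative choices.
import math

def Phi(x):                                # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def wcrps(P, y, w, lo=-10.0, hi=10.0, n=100_000):
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        ind = 1.0 if y < x else 0.0        # indicator 1_{y < x}
        total += (P(x) - ind) ** 2 * w(x) * h
    return total

w = lambda x: 1.0                          # unit weight gives the CRPS
print(wcrps(Phi, 0.0, w))                  # approx. 0.2337
\end{verbatim}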
Recently, for any \ $\alpha>0$, \ Brehmer and Gneiting (2020, Example 5) have introduced a scoring rule
\ $S_\alpha: \cP\times \RR\to [0,\infty]$ \ given by
\begin{align}\label{help3}
S_\alpha(\PP,y):= \int_{\RR} \vert \PP(x) - \bbone_{\{y<x\}}\vert^\alpha\,\dd x, \qquad \PP\in\cP, \; y\in\RR.
\end{align}
For \ $\alpha=2$, \ it gives back the scoring rule CRPS.
Using Theorem \ref{Thm_properization}, Brehmer and Gneiting (2020) have shown that
the function \ $S_\alpha^*:\cP\times \RR\to [0,\infty]$,
\[
S_\alpha^*(\PP,y):=S_\alpha(\PP^*,y), \qquad \PP\in\cP, \; y\in\RR,
\]
is a proper scoring rule, where the mapping \ $\cP\ni \PP\mapsto \PP^*\in\cP$ \ is given by
\[
\PP^*(x):=\left(1+\left(\frac{1-\PP(x)}{\PP(x)}\right)^{\frac{1}{\alpha-1}} \right)^{-1}\bone_{\{\PP(x)>0\}}, \qquad \PP\in\cP,
\qquad \text{in case of \ $\alpha>1$,}
\]
and \ $\PP^*$ \ is the distribution function of the Dirac measure concentrated at a median of \ $\PP$ \ in case of \ $\alpha\in(0,1]$.
\ If \ $\alpha=1$ \ and there is more than one median of \ $\PP$, \ then there are other choices for \ $\PP^*$.
\ In case of \ $\alpha>1$, \ the mapping \ $\cP\ni \PP\mapsto \PP^*\in\cP$ \ is injective.
\ Further, in some cases the proper scoring rule \ $S_\alpha$ \ can be made a strictly proper scoring rule.
For example, if \ $\alpha\in(1,2]$, \ then \ $S_\alpha$ \ restricted to \ $\cP_1\times \RR$ \ is a strictly proper
scoring rule relative to \ $\cP_1$, \ where \ $\cP_1$ \ denotes the set of probability measures on \ $(\RR,\cB(\RR))$ \ with finite first
moment, see Brehmer and Gneiting (2020, Example 5).
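
For \ $\alpha>1$, \ the mapping \ $\cP\ni\PP\mapsto\PP^*$ \ acts pointwise on the values of the distribution function, so it is straightforward to implement. The following Python sketch (the function name is ours) implements this branch; in the case \ $\alpha\in(0,1]$, \ where \ $\PP^*$ \ is a Dirac measure at a median of \ $\PP$, \ the transform cannot be recovered from the single value \ $\PP(x)$ \ and is therefore not attempted here:
\begin{verbatim}
# Sketch of the pointwise properization map for S_alpha when alpha > 1
# (function and variable names are ours).
def properize_pointwise(Fx, alpha):
    """Map a CDF value F(x) in [0, 1] to F*(x) for the score S_alpha."""
    if alpha <= 1.0:
        # P* is then a Dirac measure at a median of P, which cannot
        # be computed from the single value F(x)
        raise NotImplementedError("alpha <= 1 requires a median of P")
    if Fx == 0.0:
        return 0.0                         # the indicator 1_{F(x) > 0}
    r = ((1.0 - Fx) / Fx) ** (1.0 / (alpha - 1.0))
    return 1.0 / (1.0 + r)

print(properize_pointwise(0.3, 2.0))       # 0.3, since for alpha = 2
                                           # the map is the identity
\end{verbatim}
In particular, for \ $\alpha=2$ \ the map reduces to the identity, in accordance with the fact that CRPS needs no properization.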
Motivated by \eqref{help3} and the form of the Anderson--Darling distance of distribution functions (see, e.g., Anderson and Darling (1954)
or Deza and Deza (2013, page 237)), we introduce a new scoring rule.
Let \ $\cP_{(0,1)}$ \ be the set of distribution functions taking values in \ $(0,1)$.
Then \ $\cP_{(0,1)}$ \ is a convex subset of \ $\cP$.
\begin{Def}
For \ $\alpha>0$ \ and a measurable function \ $w:\RR\to (0,\infty)$,
\ let \ $\tS_{\alpha,w}:\cP_{(0,1)}\times \RR\to [0,\infty] $,
\[
\tS_{\alpha,w}(\PP,y):=\int_\RR \frac{\vert \PP(x) - \bbone_{\{y< x\}}\vert^{2\alpha}}{\PP(x)^\alpha(1-\PP(x))^\alpha}w(x)\,\dd x,
\qquad \PP\in\cP_{(0,1)}, \; y\in\RR.
\]
\end{Def}
\begin{Pro}\label{Pro_new_scoring_rule}
For each \ $\alpha>0$ \ and for each measurable function \ $w:\RR\to (0,\infty)$, \
the function \ $\tS_{\alpha,w}^*:\cP_{(0,1)}\times \RR\to [0,\infty]$,
\[
\tS_{\alpha,w}^*(\PP,y):=\tS_{\alpha,w}({\widetilde\PP}^*,y), \qquad \PP\in\cP_{(0,1)}, \;\; y\in\RR,
\]
is a proper scoring rule, where the mapping \ $\cP_{(0,1)}\ni \PP\mapsto \widetilde{\PP}^*\in\cP_{(0,1)}$ \ is given by
\begin{align}\label{help1}
\widetilde{\PP}^*(x) := \left(1 + \left(\frac{1-\PP(x)}{\PP(x)} \right)^{\frac{1}{2\alpha}} \right)^{-1},\qquad x\in\RR.
\end{align}
Further, for any \ $\PP\in\cP_{(0,1)}$ \ and \ $y\in\RR$,
\begin{align}\label{help5}
\tS_{\alpha,w}({\widetilde\PP}^*,y)
= \int_\RR \frac{ \vert \bbone_{(y,\infty)}(x) - \PP(x) \vert^{\frac{1}{2}} }{ \vert 1 - \bbone_{(y,\infty)}(x) - \PP(x) \vert^{\frac{1}{2}}}w(x)\,\dd x,
\end{align}
and
\begin{align}\label{help7}
\tS_{\alpha,w}({\widetilde\PP}^*,\PP)
= \int_{\RR} \tS_{\alpha,w}({\widetilde\PP}^*,y)\, \PP(\dd y)
= 2\int_\RR \big(\PP(x) (1-\PP(x))\big)^{\frac{1}{2}}w(x)\,\dd x.
\end{align}
\end{Pro}
The proof of Proposition \ref{Pro_new_scoring_rule} can be found in Section \ref{Sec_Proof}.
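
Although the proof is deferred, the identities \eqref{help5} and \eqref{help7} are easy to check numerically. The following Python sketch does so for the Laplace distribution and the unit weight (both illustrative choices of ours): it computes the left-hand side of \eqref{help7} by quadrature from \eqref{help5} and compares it with the right-hand side:
\begin{verbatim}
# Numerical sanity check of (help7) for the Laplace distribution and
# unit weight: the expected score, computed from (help5) by quadrature,
# should match the right-hand side of (help7).
import math

def P(x):                                  # Laplace distribution function
    return 0.5 * math.exp(x) if x <= 0 else 1.0 - 0.5 * math.exp(-x)

def p(x):                                  # Laplace density
    return 0.5 * math.exp(-abs(x))

LO, HI, N = -20.0, 20.0, 1000
H = (HI - LO) / N
xs = [LO + (i + 0.5) * H for i in range(N)]

def score(y):                              # (help5) with w = 1
    s = 0.0
    for x in xs:
        r = (1.0 - P(x)) / P(x)
        s += math.sqrt(r if y < x else 1.0 / r) * H
    return s

lhs = sum(score(y) * p(y) * H for y in xs) # expected score by quadrature
rhs = sum(2.0 * math.sqrt(P(x) * (1.0 - P(x))) * H for x in xs)
print(lhs, rhs)                            # the two values agree closely
\end{verbatim}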
In the next remark we give an example where a restriction of \ $\tS_{\alpha,w}^*$ \ leads to a {\sl strictly} proper scoring rule.
\begin{Rem}\label{Rem1}
\noindent{(i)}
Let \ $\alpha>0$ \ and \ $w:\RR\to (0,\infty)$, \ $w(x):=1$, \ $x\in\RR$.
\ Let \ ${\widehat \cP}_{(0,1)}$ \ be a subclass of \ $\cP_{(0,1)}$ \ satisfying the following two properties:
\ $\PP$ \ is continuous for any \ $\PP\in {\widehat \cP}_{(0,1)}$, \ and \ $\tS_{\alpha,w}(\widetilde\PP^*,\PP)$ \ is finite for
any \ $\PP\in {\widehat \cP}_{(0,1)}$.
\ Then \ $\tS_{\alpha,w}^*$ \ restricted to \ ${\widehat \cP}_{(0,1)}\times\RR$ \ is strictly proper relative to
\ ${\widehat \cP}_{(0,1)}$, \ i.e., it is proper relative to \ ${\widehat \cP}_{(0,1)}$, \
and for any \ $\PP,\QQ\in {\widehat \cP}_{(0,1)}$, \ the equality
\ $\tS_{\alpha,w}^*(\QQ,\QQ) = \tS_{\alpha,w}^*(\PP,\QQ)$ \ implies \ $\QQ=\PP$.
\ For a proof, see Section \ref{Sec_Proof}.
{(ii)}
If \ $w(x)=1$, \ $x\in\RR$, \ and \ $\PP(x):= \ee^{-\ee^{-x}}$, \ $x\in\RR$ \ (Gumbel distribution),
or
\[
\PP(x):=\begin{cases}
\frac{1}{2}\ee^{x} & \text{if \ $x\leq0$,}\\
1-\frac{1}{2}\ee^{-x} & \text{if \ $x> 0$,}
\end{cases}
\]
(Laplace distribution), then \ $\tS_{\alpha,w}({\widetilde\PP}^*,\PP)$ \ is finite.
\proofend
\end{Rem}
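
The finiteness claims in part (ii) of Remark \ref{Rem1} can also be checked numerically via \eqref{help7}: with \ $w\equiv 1$ \ it suffices to see that the integral \ $2\int_\RR \big(\PP(x)(1-\PP(x))\big)^{\frac{1}{2}}\,\dd x$ \ stabilizes under growing truncations. A minimal Python sketch (the truncation radii and grid sizes are arbitrary choices of ours):
\begin{verbatim}
# Rough numerical check of the finiteness claims: the integral
# 2 * int sqrt(P(x) (1 - P(x))) dx stabilizes as the truncation
# radius R grows, for both the Gumbel and the Laplace CDF.
import math

gumbel  = lambda x: math.exp(-math.exp(-x))
laplace = lambda x: 0.5 * math.exp(x) if x <= 0 else 1.0 - 0.5 * math.exp(-x)

def truncated_integral(P, R, n=100_000):
    h = 2.0 * R / n
    total = 0.0
    for i in range(n):
        Px = P(-R + (i + 0.5) * h)
        total += 2.0 * math.sqrt(Px * (1.0 - Px)) * h
    return total

for P in (gumbel, laplace):
    print([round(truncated_integral(P, R), 4) for R in (10, 20, 40)])
\end{verbatim}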
In the next remark we propose two other scoring rules.
\begin{Rem}
One may try to investigate the properties of the scoring rules
\[
\int_\RR \vert \PP(x) - \bbone_{\{y< x\}}\vert^{2\alpha}\,\PP(\dd x), \qquad y\in\RR, \;\; \PP\in\cP,
\]
and
\[
\int_\RR \frac{\vert \PP(x) - \bbone_{\{y< x\}}\vert^{2\alpha}}{\PP(x)^\alpha(1-\PP(x))^\alpha}\,\PP(\dd x), \qquad y\in\RR, \;\; \PP\in\cP_{(0,1)},
\]
where \ $\alpha>0$.
\ The second one with \ $\alpha=1$ \ is nothing else but the Anderson--Darling distance of the distribution functions \ $\PP(x)$, $x\in\RR$, \
and \ $\bbone_{\{y< x\}}$, \ $x\in\RR$.
\ For these scoring rules we were not able to derive results similar to those in Proposition \ref{Pro_new_scoring_rule}.
\proofend
\end{Rem}
\section{Proofs}\label{Sec_Proof}
\noindent{\bf Proof of Proposition \ref{Pro_new_scoring_rule}.}
The technique of our proof is similar to that of Example 5 in Brehmer and Gneiting (2020), namely,
we will use Theorem \ref{Thm_properization}.
\ Fix \ $\alpha>0$, \ $\PP\in\cP_{(0,1)}$ \ and a measurable function \ $w:\RR\to(0,\infty)$.
\ Then for all \ $\QQ\in\cP_{(0,1)}$, \ by Tonelli's theorem,
\begin{align}\label{help8}
\begin{split}
&\tS_{\alpha,w}(\QQ,\PP)
= \int_\RR \tS_{\alpha,w}(\QQ,y)\, \PP(\dd y)
= \int_\RR\left( \int_\RR \frac{\vert \QQ(x) - \bbone_{\{ y< x\}}\vert^{2\alpha}}{\QQ(x)^\alpha(1-\QQ(x))^\alpha}\, w(x)\,\dd x\right)\PP(\dd y)\\
& = \int_\RR\left( \int_\RR \frac{\vert\QQ(x) - \bbone_{\{ y< x\}}\vert^{2\alpha}}{\QQ(x)^\alpha(1-\QQ(x))^\alpha} \,\PP(\dd y) \right)w(x) \dd x \\
& = \int_\RR\left( \int_{\{y< x\}} \frac{\vert\QQ(x) - 1\vert^{2\alpha}}{\QQ(x)^\alpha(1-\QQ(x))^\alpha} \,\PP(\dd y) \right) w(x)\dd x\\
&\phantom{=\;} + \int_\RR\left( \int_{\{y\geq x\}} \frac{\QQ(x)^{2\alpha}}{\QQ(x)^\alpha(1-\QQ(x))^\alpha} \,\PP(\dd y) \right) w(x)\dd x \\
& = \int_\RR\left( \left(\frac{1-\QQ(x)}{\QQ(x)}\right)^\alpha \PP(x) + \left( \frac{\QQ(x)}{1-\QQ(x)}\right)^\alpha (1-\PP(x)) \right) w(x)\dd x.
\end{split}
\end{align}
For fixed \ $x\in\RR$, \ let us introduce the function \ $g_{x,\PP}:(0,1)\to\RR$,
\[
g_{x,\PP}(q):= \left[\left(\frac{1-q}{q}\right)^\alpha \PP(x) + \left(\frac{q}{1-q}\right)^\alpha(1-\PP(x))\right]w(x), \qquad q\in(0,1).
\]
One can calculate that for any \ $q\in(0,1)$,
\begin{align*}
&g_{x,\PP}'(q) = \alpha \left(\frac{1-q}{q}\right)^{\alpha-1} \left(-\frac{1}{q^2}\right) \PP(x) w(x)
+ \alpha \left(\frac{q}{1-q}\right)^{\alpha-1} \frac{1}{(1-q)^2}(1-\PP(x))w(x),\\[1mm]
&g_{x,\PP}''(q) = \alpha \left(\frac{1-q}{q}\right)^{\alpha-2} \frac{\alpha+1-2q}{q^4} \PP(x)w(x)
+ \alpha \left(\frac{q}{1-q}\right)^{\alpha-2} \frac{\alpha-1+2q}{(1-q)^4} (1-\PP(x))w(x).
\end{align*}
If \ $\alpha\geq 1$, \ then \ $g_{x,\PP}''(q)>0$, \ $q\in(0,1)$, \
and hence the function \ $g_{x,\PP}$ \ is strictly convex on \ $(0,1)$, \
and its unique minimum is attained at \ $q_{x,\PP}^*\in(0,1)$, \ which satisfies the equation \ $g_{x,\PP}'(q_{x,\PP}^*)=0$. \
One can calculate that
\begin{align}\label{help4}
q_{x,\PP}^* = \left(1 + \left( \frac{1-\PP(x)}{\PP(x)} \right)^{\frac{1}{2\alpha}} \right)^{-1}=: {\widetilde\PP}^*(x), \qquad x\in\RR.
\end{align}
Next, we show that for all \ $\alpha>0$, \ the function \ $g_{x,\PP}$ \ attains its minimum at \ $q_{x,\PP}^*$ \ given in \eqref{help4}.
Since \ $g_{x,\PP}'(q_{x,\PP}^*) = 0$, \ it is enough to check that \ $g_{x,\PP}''(q_{x,\PP}^*) >0$.
\ Since for all \ $q\in(0,1)$,
\[
g_{x,\PP}''(q) = \alpha w(x)\left(\frac{1-q}{q}\right)^{\alpha-2}
\left[
\frac{\alpha+1-2q}{q^4} \PP(x) + \left(\frac{q}{1-q}\right)^{2(\alpha-2)} \frac{\alpha-1+2q}{(1-q)^4} (1-\PP(x))
\right] ,
\]
we have
\begin{align*}
&g_{x,\PP}''(q_{x,\PP}^*) \\
& = \alpha w(x) \left( \frac{1-\PP(x)}{\PP(x)} \right)^{\frac{\alpha-2}{2\alpha}}
\left[
\frac{\alpha+1-2q_{x,\PP}^*}{(q_{x,\PP}^*)^4} \PP(x) + \left( \frac{\PP(x)}{1-\PP(x)} \right)^{1-\frac{2}{\alpha}} \frac{\alpha-1+2q_{x,\PP}^*}{(1-q_{x,\PP}^*)^4} (1-\PP(x))
\right] \\
& = \alpha w(x) \PP(x) \left( \frac{1-\PP(x)}{\PP(x)} \right)^{\frac{\alpha-2}{2\alpha}}
\left[
\frac{\alpha+1-2q_{x,\PP}^*}{(q_{x,\PP}^*)^4} + \left( \frac{1-\PP(x)}{\PP(x)} \right)^{\frac{4}{2\alpha}} \frac{\alpha-1+2q_{x,\PP}^*}{(1-q_{x,\PP}^*)^4}
\right]\\
& = \alpha w(x) \PP(x) \left( \frac{1-\PP(x)}{\PP(x)} \right)^{\frac{\alpha-2}{2\alpha}}
\left[
\frac{\alpha+1-2q_{x,\PP}^*}{(q_{x,\PP}^*)^4} + \left( \frac{1- q_{x,\PP}^*}{q_{x,\PP}^*} \right)^4 \frac{\alpha-1+2q_{x,\PP}^*}{(1-q_{x,\PP}^*)^4}
\right]\\
& = \alpha w(x)\PP(x) \left(\frac{1}{q_{x,\PP}^*}-1\right)^{\alpha-2} \frac{2\alpha}{(q_{x,\PP}^*)^4}
>0,
\end{align*}
as desired.
Since \ $g_{x,\PP}(q)\to\infty$ \ as \ $q\downarrow 0$ \ and as \ $q\uparrow 1$ \ (recall that \ $\PP(x)\in(0,1)$), \ and \ $q_{x,\PP}^*$ \ is the unique zero of \ $g_{x,\PP}'$ \ on \ $(0,1)$, \ the critical point \ $q_{x,\PP}^*$ \ is in fact a global minimum of \ $g_{x,\PP}$ \ also for \ $\alpha\in(0,1)$.
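
As a quick numerical cross-check of this minimality in a non-convex case, the following Python sketch (the values of \ $\alpha$, \ $\PP(x)$ \ and \ $w(x)$ \ are illustrative choices of ours) compares \ $q_{x,\PP}^*$ \ from \eqref{help4} with a brute-force grid minimizer of \ $g_{x,\PP}$:
\begin{verbatim}
# Sanity check of (help4) beyond the convex case alpha >= 1: compare
# the critical point q* with a brute-force grid minimizer of g_{x,P}.
alpha, Px, wx = 0.5, 0.3, 1.0              # illustrative values

def g(q):
    return (((1 - q) / q) ** alpha * Px
            + (q / (1 - q)) ** alpha * (1 - Px)) * wx

q_star = 1.0 / (1.0 + ((1 - Px) / Px) ** (1.0 / (2 * alpha)))
grid = [i / 10_000 for i in range(1, 10_000)]
q_min = min(grid, key=g)                   # brute-force minimizer

print(q_star, q_min)                       # both approx. 0.3
\end{verbatim}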
One can easily check that \ ${\widetilde\PP}^*\in\cP_{(0,1)}$, \ i.e., \ ${\widetilde\PP}^*$ \ is a distribution function with values in \ $(0,1)$.
\ Note also that \eqref{help1} shows that the mapping \ $\cP_{(0,1)}\ni \PP\mapsto \widetilde{\PP}^*$ \ is injective.
All in all, condition \eqref{help2} of Theorem \ref{Thm_properization} with \ $\cP:=\cP_{(0,1)}$ \ is satisfied, i.e.,
for every \ $\PP\in\cP_{(0,1)}$ \ there exists \ ${\widetilde\PP}^*\in\cP_{(0,1)}$ \ such that
\[
\tS_{\alpha,w}({\widetilde\PP}^*,\PP) \leq \tS_{\alpha,w}(\QQ,\PP), \qquad \QQ\in\cP_{(0,1)},
\]
and, by Theorem \ref{Thm_properization}, we have the first part of the assertion.
Further, for any \ $\PP\in\cP_{(0,1)}$ \ and \ $y\in\RR$,
\begin{align*}
\tS_{\alpha,w}({\widetilde\PP}^*,y)
& = \int_{\{y<x\}} \frac{\vert {\widetilde\PP}^*(x) - 1\vert^{2\alpha}}{{\widetilde\PP}^*(x)^\alpha(1-{\widetilde\PP}^*(x))^\alpha} w(x)\,\dd x
+ \int_{\{y\geq x\}} \frac{{\widetilde\PP}^*(x)^{2\alpha}}{{\widetilde\PP}^*(x)^\alpha(1-{\widetilde\PP}^*(x))^\alpha} w(x)\,\dd x \\
& = \int_{\{y<x\}} \left( \frac{1}{{\widetilde\PP}^*(x)} - 1 \right)^\alpha w(x)\,\dd x
+ \int_{\{y\geq x\}} \left( \frac{1}{{\widetilde\PP}^*(x)} - 1 \right)^{-\alpha} w(x)\,\dd x\\
& = \int_{\{y<x\}} \left( \frac{1-\PP(x)}{\PP(x)} \right)^{\frac{1}{2}} w(x)\,\dd x
+ \int_{\{y\geq x\}} \left( \frac{\PP(x)}{1-\PP(x)} \right)^{\frac{1}{2}} w(x)\,\dd x\\
& = \int_\RR \frac{ \vert \bbone_{(y,\infty)}(x) - \PP(x) \vert^{\frac{1}{2}} }{ \vert 1 - \bbone_{(y,\infty)}(x) - \PP(x) \vert^{\frac{1}{2}}} w(x)\,\dd x,
\end{align*}
as desired.
Finally, by \eqref{help8} and the definition of the function \ $g_{x,\PP}$, \ we have
\begin{align*}
\tS_{\alpha,w}({\widetilde\PP}^*,\PP) = \int_\RR g_{x,\PP}({\widetilde\PP}^*(x))\,\dd x = 2\int_\RR \big(\PP(x) (1-\PP(x))\big)^{\frac{1}{2}}w(x)\,\dd x ,
\end{align*}
since
\begin{align*}
g_{x,\PP}({\widetilde\PP}^*(x))
&= \left(\frac{1}{{\widetilde\PP}^*(x)} -1\right)^\alpha\PP(x)w(x) + \left(\frac{1}{{\widetilde\PP}^*(x)} -1\right)^{-\alpha}(1-\PP(x))w(x)\\
&= \left(\frac{1-\PP(x)}{\PP(x)}\right)^{\frac{1}{2}}\PP(x)w(x) + \left(\frac{1-\PP(x)}{\PP(x)}\right)^{-\frac{1}{2}} (1-\PP(x))w(x)\\
&= 2(\PP(x)(1-\PP(x)))^{\frac{1}{2}}w(x), \qquad x\in\RR.
\end{align*}
\proofend
\smallskip
\noindent{\bf Proof of the example given in part (i) of Remark \ref{Rem1}.}
By Proposition \ref{Pro_new_scoring_rule}, \ $\tS_{\alpha,w}^*:\cP_{(0,1)}\times \RR \to [0,\infty]$ \ is a proper scoring rule relative to
\ $\cP_{(0,1)}$, \ so, in particular, \ $\tS_{\alpha,w}^*(\QQ,\QQ) \leq \tS_{\alpha,w}^*(\PP,\QQ)$ \ for all \ $\PP,\QQ\in\widehat\cP_{(0,1)}$ \
yielding that the restriction \ $\tS_{\alpha,w}^{*,r}: \widehat\cP_{(0,1)} \times \RR \to [0,\infty]$ \ is a proper scoring rule relative to
\ $\widehat\cP_{(0,1)}$.
\ It remains to check the strict properness of \ $\tS_{\alpha,w}^{*,r}: \widehat\cP_{(0,1)} \times \RR \to[0,\infty]$.
\ Let \ $\PP,\QQ\in \widehat\cP_{(0,1)}$ \ be such that \ $\tS_{\alpha,w}^{*,r}(\QQ,\QQ) = \tS_{\alpha,w}^{*,r}(\PP,\QQ)$.
Using \eqref{help8} and \eqref{help1} we have
\begin{align*}
\tS_{\alpha,w}^{*,r}(\PP,\QQ)
& = \int_\RR \tS_{\alpha,w}^{*,r}(\PP,y)\, \QQ(\dd y)
= \int_\RR \tS_{\alpha,w}^{*}(\PP,y)\, \QQ(\dd y)
= \int_\RR \tS_{\alpha,w}({\widetilde\PP}^*,y)\, \QQ(\dd y)
= \tS_{\alpha,w}({\widetilde\PP}^*,\QQ) \\
& = \int_\RR\left( \left(\frac{1-{\widetilde\PP}^*(x)}{{\widetilde\PP}^*(x)}\right)^\alpha \QQ(x)
+ \left( \frac{{\widetilde\PP}^*(x)}{1-{\widetilde\PP}^*(x)}\right)^\alpha (1-\QQ(x)) \right) \dd x\\
& = \int_\RR\left( \left(\frac{1-\PP(x)}{\PP(x)}\right)^{\frac{1}{2}} \QQ(x)
+ \left( \frac{1-\PP(x)}{\PP(x)}\right)^{-\frac{1}{2}} (1-\QQ(x)) \right) \dd x,
\end{align*}
and similarly (or referring to \eqref{help7})
\[
\tS_{\alpha,w}^{*,r}(\QQ,\QQ) = \tS_{\alpha,w}({\widetilde\QQ}^*,\QQ)
= 2 \int_\RR (\QQ(x)(1-\QQ(x)))^{\frac{1}{2}}\,\dd x.
\]
Note that, by the inequality between the arithmetic mean and the geometric mean, for any \ $x\in\RR$ \ we have
\begin{align}\label{help9}
\big(\QQ(x)(1-\QQ(x))\big)^{\frac{1}{2}}
\leq \frac{1}{2}
\left(\left(\frac{1-\PP(x)}{\PP(x)}\right)^{\frac{1}{2}} \QQ(x)
+ \left( \frac{1-\PP(x)}{\PP(x)}\right)^{-\frac{1}{2}} (1-\QQ(x)) \right),
\end{align}
and equality holds if and only if
\begin{align}\label{help10}
\left(\frac{1-\PP(x)}{\PP(x)}\right)^{\frac{1}{2}} \QQ(x)
= \left( \frac{1-\PP(x)}{\PP(x)}\right)^{-\frac{1}{2}} (1-\QQ(x))
\qquad \Longleftrightarrow \qquad \PP(x) = \QQ(x).
\end{align}
The inequality \eqref{help9} directly shows that \ $\tS_{\alpha,w}^{*,r}(\QQ,\QQ) \leq \tS_{\alpha,w}^{*,r}(\PP,\QQ)$, \ $\PP,\QQ\in \widehat\cP_{(0,1)}$
\ (which we already know, since \ $\tS_{\alpha,w}^{*,r}$ \ is a proper scoring rule relative to \ $\widehat\cP_{(0,1)}$), \ and using the
equality condition \eqref{help10} and the continuity of \ $\PP$ \ and \ $\QQ$, \ a standard measure-theoretic argument
yields that \ $\tS_{\alpha,w}^{*,r}(\QQ,\QQ) = \tS_{\alpha,w}^{*,r}(\PP,\QQ)$ \ holds if and only if \ $\QQ=\PP$.
\proofend
\section*{Acknowledgements}
I am grateful to Jonas Brehmer for his comments to preliminary versions of the paper that helped me a lot.
I thank S\'andor Baran for calling my attention to the paper of Gneiting and Ranjan (2011).
I would like to thank the referee for the comments that helped me to improve the paper.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 5,332 |
Germany pledges to beef up NATO battalion in Lithuania to brigade-level
VILNIUS – In response to Russia's military aggression in Ukraine, Germany has pledged to bolster the existing international NATO battalion stationed in Lithuania to a brigade-size unit, German Chancellor Olaf Scholz said in Vilnius on Tuesday, stressing that the allies are committed to defending every centimeter of the Alliance's territory.
"We envisaged that we will scale up our contribution to the strengthening of NATO's eastern flank, we will create a strong brigade, have discussed that with each other and will have to work on this," Scholz told reporters at the Presidential Palace.
And Lithuanian President Gitanas Nauseda confirmed it during his visit to Pabrade.
"Germany is going to take the lead and (...) bolster its military presence here. They intend to beef it up to the brigade level. This is one of the goals, one of the dreams that we are thinking about as we look forward to the NATO summit. In practice, it is becoming a reality today", the Lithuanian president said, adding that Berlin and Vilnius will take these steps gradually.
"As a host country, we will take care of all the necessary infrastructure. In particular, I pointed out to the chancellor that it would make sense for us to have an additional battalion here for permanent exercises, for permanent training," Nauseda said, declining to say exactly when the brigade will be in Lithuania.
He stressed, however, that it would "not take years", adding that the military unit would include not only German troops, as it does now.
"Over the next year or 18 months, we can expect to have the necessary combat unit here. We have to do some work, first of all, on the construction of barracks, on deployment sites, on infrastructure. I think this is a realistic timeframe", the president said.
FOCUS ON THE BALTIC REGION
Earlier in the day, Nauseda met with the German chancellor and the Latvian and Estonian prime ministers at the Presidential Palace, and the Lithuanian president stressed the need to strengthen defense capabilities by increasing the number of troops in all three Baltic states.
"Maximum readiness and beefed up forces in our region are the security guarantee of the whole Alliance. We agreed that it is necessary to enhance defense capabilities in the Baltic countries by increasing the number of troops deployed and by strengthening air and sea defense," Nauseda told a joint press conference after the meeting.
He also pointed out that the new strategic reality, Russia's attack on Ukraine, was pushing NATO to increase its military presence in the region.
"We need to realize that Russia's military threat is not going anywhere and it will remain a long-term threat to the entire Euro-Atlantic area. (...) Looking into the Russian threat, we must shape a strong response and strengthen our defense," the Lithuanian president said, adding that Lithuania is ready to host allied troops.
"Joint allied action, unity and solidarity are vital and indispensable in the current security environment," Nauseda stressed.
OFFICIAL DECISIONS EXPECTED IN MADRID
Scholz welcomed Germany's decision to increase defense funding and hoped it would be maintained in the future.
Estonian Prime Minister Kaja Kallas also said Baltic defense should be upgraded.
"Allied presence must be increased in the air, on land and at sea, and decisions have to be taken in Madrid," she said. "We need to make it clear to the aggressor that NATO has the will and the clear ability to defend every centimeter of its territory."
Both the Estonian prime minister and her Latvian counterpart Krisjanis Karins believe the German leadership will help to make the determined decisions at the NATO Madrid summit that are needed to ensure the Alliance is ready for future challenges.
"We welcome Germany's decision to boost its presence in Lithuania as it's a very welcome and right decision. It will strengthen Lithuania, it will strengthen Latvia and it will strengthen Estonia", the Latvian prime minister said.
Vilnius expects the leaders of NATO countries to decide in Madrid in late June that the multinational allied battalions stationed in the Baltic countries and Poland should be converted into brigades. The aim is also to strengthen air defense.
Germany is now leading the forward presence battalion in Lithuania, and the other two in Latvia and Estonia are being led by Canada and the UK respectively.
The battalions are expected to be expanded to brigades on the basis of the armies of these countries.
A battalion consists of about 1,000 soldiers, and a brigade has about 5,000 troops. | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 119 |
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>info.cukes</groupId>
<artifactId>cucumber-jvm</artifactId>
<relativePath>../pom.xml</relativePath>
<version>1.0.0.RC21-SNAPSHOT</version>
</parent>
<artifactId>cucumber-spring</artifactId>
<packaging>jar</packaging>
<name>Cucumber-JVM: Spring</name>
<dependencies>
<dependency>
<groupId>info.cukes</groupId>
<artifactId>cucumber-java</artifactId>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-tx</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>info.cukes</groupId>
<artifactId>cucumber-junit</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-all</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
</project>
| {
"redpajama_set_name": "RedPajamaGithub"
} | 439 |
Copyright © 2013 by Jeffrey Michaud
Published by Running Press,
A Member of the Perseus Books Group
All photos © Kelly Campbell (unless noted)
Other photos:
Courtesy of Jeff Michaud: pages 6, 10–11, 66 (bottom), 67, 89–91, 112, 134, 154–155, 177–178, 216–217, 254–259, 275
© Joshua McDonnell, pages 174–175
Shutterstock © Gijs van Ouwerkerk, pages 214–215
All rights reserved under the Pan-American and International Copyright Conventions
_This book may not be reproduced in whole or in part, in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system now known or hereafter invented, without written permission from the publisher._
Books published by Running Press are available at special discounts for bulk purchases in the United States by corporations, institutions, and other organizations. For more information, please contact the Special Markets Department at the Perseus Books Group, 2300 Chestnut Street, Suite 200, Philadelphia, PA 19103, or call (800) 810-4145, ext. 5000, or e-mail special.markets@perseusbooks.com.
Library of Congress Control Number: 2013938027
E-book ISBN 978-0-7624-5061-9
9 8 7 6 5 4 3 2 1
Digit on the right indicates the number of this printing
Cover and interior design by Joshua McDonnell
Edited by Kristen Green Wiewora
Typography: Avenir, Bembo, and Cubano
Running Press Book Publishers
2300 Chestnut Street
Philadelphia, PA 19103-4371
Visit us on the web!
www.offthemenublog.com
CONTENTS
Foreword by Marc Vetri
Introduction
Paladina The Butcher's Apprentice
Alme A Maestro in the Kitchen, Amore in the Dining Room
Cene and Fiobbio Farm to Table . . . Fifty Feet
Cinque Terre Sex on the Italian Riviera
Barolo and Barbaresco I Can't See Through This Fog
Villa d'Almè Simple Italian Cooking at Its Best
Alba If You Can't Smell the Truffles, You Must Be Dead
Venice Losing Myself in the City
Leffe Becoming a Chef
Florence The Romance Continues
Trescore Balneario Our Big Italian Wedding
Desenzano del Garda The Culinary Journey of a Lifetime
Basics and Essential Recipes
Sources for the Cook and Traveler
Acknowledgments
Index
FOREWORD
In the spring of 2005, my business partner, Jeff Benjamin, and I were at Ca'Marcanda, the Gaja-owned vineyard in Tuscany. In between cellar tours and tasting, I was on the phone with Jeff Michaud. At that time, he was the chef at The Bedford Village Inn in New Hampshire. Jeff Michaud and I had been talking about opening a restaurant together in Philadelphia since he returned to the States from his three-year journey in Italy. We had three previous phone calls about it, but they all ended with me saying, "I don't think it's going to happen." We just couldn't strike a deal with the developer. (Coincidentally, _Ca'Marcanda_ means "the house of endless negotiations.")
This last phone call, however, ended differently. "It's a done deal!" I exclaimed. "Get yer ass back to Philly!"
I could hear Jeff's eagerness through the phone. Although, when I think about it, I don't know who was more excited: him or me.
Some people just inspire you to do better. Perhaps it's the work ethic they possess. Or maybe it's the endless questions they ask until every possible scenario has been exhausted and they finally understand exactly what is going on. People like that make you strive for perfection, never accepting mediocrity. Why do something that's just good enough when you can make it great with only a little more effort? I always look for that kind of determination in cooks, and it's what initially drew me to Jeff Michaud.
We first met in 1998 at the Food & Wine Classic culinary festival in Aspen, Colorado. This was just a few weeks before I signed the lease for Vetri, my first restaurant. At the time, Jeff was working at Aspen's Caribou Club under chef Miles Angelo, a good friend with whom I had worked in the early '90s.
"He's you," Miles said to me, "only 10 years ago."
At the time, I really didn't think much of the comment, but it stayed with me.
Miles called me a few years later, saying "Remember that kid, Jeff? Well, I can't teach him anything else, and he needs to get out of here. He needs to learn something new—you have to take him."
I was in the process of looking for a cook, so I had Jeff come out and work at Vetri for a couple days. I didn't realize it then, but it was as if my own son had come to work at the restaurant. He set the tone in the first month.
"Do you know how to make bread?" he asked eagerly.
"Yes," I replied.
"Then why don't you make it?"
"There's just not enough time or room in the restaurant, Jeff."
"Why? I don't understand. I think we can do it. Will you please show me?" he implored.
Have you ever seen a child badger a parent to the point of exhaustion? Jeff pushed me until I relented. I agreed to make bread. We made a beautiful rustic loaf with a natural starter that we let ferment and grow for two weeks, feeding it three to four times a day. It was like our little pet. The bread turned out perfect. Soon after serving that first loaf, we stopped ordering bread. We have made our own bread at Vetri ever since.
And so it began. My life for the next two and a half years was packed with exploratory trips to markets and farms and early-morning lessons on curing sausage and making pasta. At a certain point, I even let him work the pasta station. There's a first for everything! You name it—if Jeff wanted to learn it, he asked and asked and asked until he got his way. It reached the point where he only had to ask once, and I would give him a fatherly, "Whatever you want," knowing full well the torture I would go through if I said no. The funny thing is, without realizing it, I got as much out of his education as he did. Jeff's passion is infectious, and those torturous yeses eventually turned to anticipation. I looked forward to the next challenge we could tackle together. Sometimes we all need a push in the right direction, and Jeff was pushing with all of his might. Vetri was livelier and more exciting than ever. To this day, those ambitious ideals have not changed. The restaurant has been a platform of learning for everyone who has worked there, and in large part, we have Jeff Michaud to thank.
In 2001, I took Jeff on his first visit to Italy. He was 23, and we went to Vin Italy, the wine fair in Verona. That trip changed his life. The entire time his eyes were as big as beach balls looking at everything around him. . . the food, art, culture, and yes, the women. That trip began the great love affair with Italian food and culture that he continues to this day. Jeff always knew that cooking was what he wanted to do, but now he knew for certain that Italy was the place in which he wanted to do it. Shortly after we returned, Jeff made plans to move to Italy.
Through the years we kept in touch from afar. I visited him during my research trips, and it was clear that he had gone "native" and was fully immersed, soaking up everything he could. I wasn't the least bit surprised. "You get out of life what you put into it." It's such a simple adage, but it's the very _modo_ —the manner—in which Jeff lives his life.
In early 2007, we opened Osteria in Philadelphia with Jeff at the helm. Another person, given the same opportunities that Jeff had would not have achieved the same level of success. But Jeff excelled in Italy and at Osteria because he puts absolutely everything he has into absolutely everything he does. It's all in, all of the time.
That's why I'm so excited about _Eating Italy._ This book is a true, complete journal of Jeff's travels, experiences and adventures in Italy. Reading it makes you feel as if you are right there with him. . . discovering, tasting, and cooking. A dish always tastes better when you hear the story behind it. From Venice to Florence and Piedmont to Lombardia, Jeff leaves no stone unturned when sharing the moments that led to all of these beautiful recipes.
_Eating Italy_ also illustrates how the ambition that first drew me to Jeff continues to grow and evolve. Jeff doesn't simply cook Italian food. He lives it. If this book is anything, it's evidence of that. But it's much more. It's the impassioned work of a chef who has lived more than most and who still has a great deal to share with us. I, for one, am thrilled to be along for the ride.
—MARC VETRI
INTRODUCTION:
AT FIRST I KNEW NOTHING ABOUT ITALIAN FOOD
Restless. Ambitious. Headstrong. That was me as a teenager. I wasn't planning on sticking around Nashua, New Hampshire, for too long. At age thirteen, I got my first job at Kinsley House of Pizza, a Greek pizza place around the corner from my house. They started me out folding pizza boxes, and before long I was slicing tomatoes. Then I asked if I could make the dough. By the time I was a high school sophomore, I was cooking all the pizzas.
My high school had a two-year culinary program, and I jumped on it for junior and senior year. Cooking seemed like good honest work, and I had a knack for it. I saw it as my ticket out of town to discover new places, new people, and new ideas. At sixteen, I started working at the Hilltop Steak House and Butcher Shop, a chain restaurant in the Northeast. A year later, they made me kitchen manager of the place. We did a thousand covers a day, and with that kind of volume, I really learned how to work a kitchen.
Going to culinary school seemed like the next logical thing to do. I enrolled in the Culinary Institute of America and graduated in 1998. For my externship, I went as far away as possible: Aspen, Colorado at the Caribou Club. Within a year, the chef, Miles Angelo, offered me a job as sous chef. I learned everything I could in his kitchen, paid attention, and said, "Yes, Chef," more times than I can remember. A few years later, I became executive sous chef and stayed on for four years.
As a young professional, I'd made good progress in my career. But, as usual, it wasn't enough for me. I was still restless. We cooked mostly American, Southwestern, and Asian food at Caribou. I wanted to learn more. I heard that Marc Vetri was looking for a sous chef, so I flew to Philadelphia to apprentice with him. At that point, Italian food wasn't really part of my repertoire. Sure, I made pizzas at Kinsley House as a teenager, but that was barely Italian food. The differences between the two restaurants were like night and day. Vetri's kitchen was half the size of Kinsley's, but the food we turned out in that tiny space was on another level entirely. We made pasta, bread, and sausage in-house, and cut fish and meat to order. Everything was prepared from scratch. By hand. With pristine ingredients. That spring, I took on the sous chef job at Vetri and put my nose to the grindstone. I made it a point to come in early every day to learn the fundamentals of Italian cooking. Once a week, I took wine classes with Marc's partner and wine expert, Jeff Benjamin.
After a few months, what continued to amaze me about Italian cuisine was its stubborn simplicity. We used a minimum of ingredients. The flavors were uncomplicated. The plating was spare. If you didn't execute everything to perfection, you could easily screw it up. Compared to all my other training, learning to cook Italian food was like trying to find a needle in a haystack, searching for a tiny sliver of culinary perfection buried deep in a mountain of madness. It was just the kind of focus I needed in my career.
Marc and Jeff noticed how I took to the cuisine, and how inspired I got. That spring, they brought me to Vin Italy, an international wine expo held annually in Verona. Every Italian wine you could imagine was there, and you could sample them all. The food was like nothing I'd ever tasted—heaps of locally made salumi and cheese—all of it artisanal, beautiful, and incredibly delicious. Each day of eating, drinking, sampling, and talking got me more and more excited.
When we arrived back home, I knew what I had to do. I had to pack up my life, once again, and move to Italy. A window had opened up inside me. During that trip, I felt a real connection to Italian food and wine but even more so to Italian people and culture. I can't fully explain it. . . the Italian lifestyle was just so different from what I was used to in America. It was a breath of fresh air, and I wanted more.
I've never been afraid to stick my nose into things. Sometimes you just have to put yourself out there and take risks. I had no idea if living in Italy would lead to anything good, but I knew I had to go. I took on a second job, and scrimped and saved for months. I scrounged up enough money to last me one year.
This book tells the story of what happened next. I'd planned to stay a year and ended up staying for three. I hoped to find a paying job and ended up finding my wife. I fell head-over-heels in love not only with a woman but also with her family, her cuisine, and her culture. Each chapter takes you into the Italian towns and villages that shaped me the most, like Alme, where I trained with Italy's youngest Michelin-starred chef; Cinque Terre, where Claudia and I spent our first romantic getaway; Leffe, where I snagged my first executive chef position; and Trescore Balneario, where Claudia and I got married just before we opened Osteria in Philadelphia. The recipes I learned and the dishes I created along the way are all here. And there's plenty of detail about restaurants, wineries, bars, markets, and inns all over Italy, so you can experience the places where I cut my teeth and discovered the world's most welcoming cuisine.
After those first few years in Italy, I came back a different person. I was a better chef for sure, but more important, I had a sense of purpose. My ambition wasn't blind anymore. I found love in my work and in my life. I realized that good cooking is about putting your heart and soul on a plate. It doesn't matter whether you cook at home or in a restaurant kitchen. Cooking has to be something that you enjoy and feel in your body. If it's not, you taste it right away in the food.
That's the Italian way of cooking that I try to convey in these pages. I hope the stories and recipes here help you find that kind of joy in your own cooking—no matter what style of cuisine you prefer.
* * *
PALADINA
* * *
THE BUTCHER'S APPRENTICE
I ARRIVED AT THE MANGILI BUTCHER SHOP AT FOUR IN THE MORNING IN EARLY 2003. I'D VISITED ITALY A FEW TIMES BEFORE, BUT THIS TIME I SAVED UP ENOUGH MONEY TO STAY AND WORK FOR A YEAR. SOME FRIENDS SET ME UP TO WORK UNPAID AT A SMALL, FAMILY-OWNED BUTCHER SHOP IN PALADINA, A TINY TOWN JUST NORTHWEST OF BERGAMO.
When I got there, a line of calves stood near the back door, and the holding area stank of cow shit. I had some butchering experience but couldn't speak Italian. The Mangili family didn't care. There was work to do. An old man with salt-and-pepper hair said something to me in Italian, handed me a small knife, and led me to the kill floor. His knife was a giant medieval-looking ax with a long wooden handle and curved blades on each side. The kill floor was made of terra-cotta tile that sloped to the center to drain the blood. A veal calf had just been shot and was hanging upside down on a hook. The old man lifted his axe and cut the head clear off the animal. One perfect swipe! He hung up the head and we started skinning the calf together.
For hours, I stood with the axe man at a big white table in the back room as he showed me how to butcher various cuts of veal, such as shank, breast, and shoulder. Before I knew it, the axe man was motioning for me to stop. It was lunchtime. They closed the butcher shop and we all went upstairs to eat. Everyone started talking, and I learned that the axe man was Maurizio, a brother-in-law in the Mangili family. The two main owners were Oliviero and Francesco Mangili. One of their daughters, Alexandria, worked the front counter with the customers. Even though I was a complete stranger—and an American—no one was nervous about having me in the house. They completely opened their doors to me. I had a dictionary in my back pocket and pieced together some of what they were telling me. Oliviero and Francesco's grandfather started the butcher shop two generations ago. There were a couple of other butchers in Paladina, but the Mangilis were the only ones that raised their own cattle on their own farm. The veal calves came in on Wednesdays. The steers on Thursdays. Pigs were Mondays. Goats and lambs, Tuesdays.
It struck me how relaxed the whole family was around the table. They nibbled on salumi and cheese, talking and laughing. Francesco made jokes and winked at everyone. Maurizio had a glass of wine with an ice cube in it. Oliviero took a quick nap.
This was new to me. I wasn't used to stopping in the middle of the afternoon to enjoy a peaceful lunch—or, for that matter, starting the day with a leisurely cappuccino and croissant. The Italian lifestyle had a comfortable rhythm. It was slower than the amped-up American rat race.
As lunch went on, my mind wandered back to the previous day and the long plane trip from Philadelphia to France to Bergamo. I remembered seeing the mountains, rivers, and valleys from 20,000 feet (6 km) up. The snow-capped Alps tower above the North and the Apennine Mountains jut up from the green plains in the south. These two mountain ranges create a paradise of fertile valleys and rivers that you can grow just about anything in. Lombardy is so desirable that every neighboring country, including France, Austria, and Germany, has held control of it at some point in history. The local economy accounts for a big chunk of Italy's gross domestic product. It's full of hard-working people. And cows. Most of Italy's cattle are raised on the lower plains of Lombardy. Pigs thrive more in the north around Bergamo. So it's no surprise that beef, cheese, and sausage are some of Lombardy's most important foods.
Oliviero snapped me out of my reverie when he tapped me on the knee. We cleaned up lunch, went downstairs, and opened up the butcher shop again, working until about five o'clock. We butchered all the veal for the week, then hosed down the entire shop and closed for the day. When we were done, it was so clean you could eat off the floor.
Our days went on like this for five months. Maurizio taught me everything he knew about farming and butchering animals. I visited the family farm where the animals were raised. I learned how to kill them as quickly and humanely as possible. I gutted and skinned them; broke down half-ton steers into sides of beef; and let the meat rest for two weeks to develop flavor before cutting it into steaks. Sometimes little old ladies would come in and order half a steer, specifying how much of it they wanted ground for _polpettini_ (little meatballs) and what parts they wanted cut for freezing and sharing with their families. Maurizio showed me how to hang big pieces of beef off the edge of the table to let gravity help with the heavy butchering.
The Mangilis took me in like family. Maurizio had me over for dinner. Francesco and Oliviero brought me to local fairs, where they showed off their prized cows—just like at county fairs in the United States. But instead of the overalls American farmers wear, Italian farmers wear Gucci jeans and Louis Vuitton shoes—even while stomping around in cow shit!
Seeing this entire process, day in and day out for the better part of a year, gave me a new respect for Italian food. I realized how important farming is to the economy and the cuisine of northern Italy. When I cooked, I was no longer just grilling a rib-eye steak. I was grilling a rib-eye steak from the local Fassone breed of cattle that was raised and butchered by someone I knew and respected. When I snacked on cheese, it was no longer just fuel to get me through the day. I was enjoying creamy formagella made from goat's milk by an award-winning local cheese maker named Battista. Everything I ate became more important. More meaningful. It made me feel more grounded. Seeing the love and care that went into preparing the food made every bite taste better somehow. And it made me curious to learn even more.
CHAPTER CONTENTS
Beef Tartare with Fried Egg Yolk and Parmigiano
•
Carne Salata with Red Onion, Celery, and Olive Oil
•
Whole Braised Beef Shank with Buckwheat Polenta
•
Bucatini with Pig Testina
•
Veal Tongue Salad with Escarole and Salsa Rossa
•
Veal Shoulder Roasted in Hay with Grilled Peach Salad
•
Pork Neck Cannelloni with Heirloom Tomatoes and Basil
•
Whole Roasted Pig's Head
•
Whole Roasted Pork Shoulder with Pickled Vegetables
•
Pappardelle with Veal Ragù and Peppers
•
Grilled Lamb Rack with Favetta and Roasted Pearl Onions
**BEEF TARTARE** with **FRIED EGG YOLK** and **PARMIGIANO**
The tartare at Mangili butcher shop was like none I'd ever had. As a chef, I was used to chopping raw beef by hand for tartare. Almost every restaurant I worked in served it that way—in tiny cubes. But at Mangili, they ground the raw beef twice in a meat grinder. The grinder cut down all the connective tissue in the meat and made it super-creamy. Traditionally, tartare is served with arugula, lemon, and Parmesan. Once in a while, you see egg yolk. I thought it would be cool to roll an egg yolk in breadcrumbs, deep-fry it, and serve it over the tartare. When you cut into the crispy yolk, it runs all over the meat and makes it taste even creamier.
MAKES 4 SERVINGS
½ small red onion, cut through the root end
About 1 cup (235 ml) whole milk
About ¾ cup (175 ml) olive oil, divided
3 tablespoons (45 ml) red wine vinegar
8 ounces (225 g) beef tenderloin, trimmed of fat
Salt and freshly ground black pepper
Canola oil, for frying
2 Belgian endives, cut in a thin julienne
½ cup (30 g) chopped fresh flat-leaf parsley
1 cup (100 g) plain, dry breadcrumbs, sifted
4 large egg yolks
Parmigiano cheese for shaving
Chill four serving plates. Thinly slice the onion on a mandoline (or use a very sharp knife). Separate the onion into slivers in a large bowl. Add enough milk to cover the onion and let stand at room temperature for 30 minutes to extract the raw onion taste. Drain the onion slivers, rinse them and the bowl, and return them to the bowl. Add ½ cup (120 ml) of the olive oil and the vinegar, stirring to mix. Marinate the onion for 2 hours at room temperature.
Put the beef and all the parts of a meat grinder in the freezer for 20 minutes. Grind the cold beef twice on the small (¼-inch/6-mm) die of the meat grinder. Season the beef to taste with salt, pepper, and olive oil.
Heat the oil in a deep fryer or deep saucepan to 350°F (175°C). You can check the heat by sprinkling in small pinches of breadcrumbs to see whether the oil sizzles instantly and fries them up golden brown.
For each serving, mold the beef mixture in a 6-inch (15-cm) ring mold (or a large cookie cutter) on a cold plate. Lift off the ring to unmold.
Toss the endives with the marinated onion and top each serving of beef with the mixture. Sprinkle the parsley on top.
Put the breadcrumbs in a shallow bowl and carefully add the egg yolks, one at a time, to coat thoroughly without breaking the yolks. Use a slotted spoon to carefully lift the yolks from the crumbs. Gently shake off the excess crumbs and turn each yolk into the fryer, frying for exactly 12 seconds. Remove each yolk with a heatproof slotted spoon and place gently on top of the endive salad.
Drizzle the remaining olive oil around the plate, along with a few grindings of cracked black pepper. Using a vegetable peeler, shave 3 slivers of Parmigiano over each serving.
**CARNE SALATA** with **RED ONION, CELERY,** and **OLIVE OIL**
You always see _carne salata_ in Italian butcher shops. It's usually made with horse meat that's salted, cured, and then boiled. Horse meat just became legal in the United States and is still very hard to get, so I make mine with beef eye of round. It comes out fantastic. My wife's family slices their _carne salata_ meat paper-thin and serves it with raw onion. They soak the onion first in milk so it loses its bite. (You can also soak the onion in vinegar.) After that, a little celery, parsley, and olive oil is all you need to round out the salad.
MAKES 8 SERVINGS
3 pounds (1.25 kg) beef eye of round, trimmed of fat
3½ tablespoons (35 g) rock salt
1½ teaspoons (6 g) granulated sugar
½ teaspoon (3 g) curing salt #2 (see page 277)
½ teaspoon (1 g) freshly ground black pepper, plus more to taste
⅛ teaspoon (0.25 g) ground mace
⅛ teaspoon (0.25 g) ground coriander
⅛ teaspoon (0.25 g) freshly grated nutmeg
⅛ teaspoon (0.25 g) ground cloves
1½ teaspoons (1.75 g) chopped fresh rosemary
1 small red onion, thinly sliced (about 1 cup/160 g)
1 cup (235 ml) whole milk
2 ribs celery, julienned (1 cup/100 g)
2 tablespoons (30 ml) red wine vinegar
½ cup (120 ml) olive oil
2 tablespoons (7.5 g) chopped fresh flat-leaf parsley
Salt
Place the meat in a plastic tub or wide nonreactive bowl. Mix together the rock salt, sugar, curing salt, ½ teaspoon (1 g) of the pepper, and the mace, coriander, nutmeg, cloves, and rosemary. Rub the spice mixture all over the meat, cover, and let cure in the refrigerator for 4 days, rotating the meat every day and dousing it with the liquid that settles in the bottom of the tub or bowl.
On the fourth day, transfer the beef to a very large Dutch oven. Add half of the leftover curing liquid and enough water to cover the meat by ½ inch (1.25 cm). Cover and bring to a simmer over medium heat, then adjust the heat so that the mixture simmers very gently. Simmer until the meat is cooked through, about 155°F (68°C) internal temperature, 1 to 2 hours. Remove the pan from the heat and let the meat cool in the liquid in the pan. If you're not serving right away, cover and refrigerate for up to 3 days before slicing and serving.
Meanwhile, soak the onion in the milk for 30 minutes. Rinse the onion and pat dry. Mix together the onion, celery, vinegar, oil, and parsley. Season to taste with salt and pepper.
Slice the beef very thinly and toss with the salad to coat. Use tongs to arrange the slices on a platter and top with the salad.
**WHOLE BRAISED BEEF SHANK** with **BUCKWHEAT POLENTA**
This is a butcher's dish if I ever saw one. A whole beef shank is a big piece of meat with a giant bone running through it. It's fit for a caveman. If you shop at a farmers' market or know your butcher pretty well, it should be easy enough to get a whole beef shank. You can also just use crosscut shanks. Either way, you'll be braising the shank in the traditional northern Italian style. The people who live in and around Bergamo are known as _polentone_ (poh-len-TOE-nay), or polenta eaters. They love to braise big, tough cuts of meat on the bone and then serve them in gravy on a mound of warm, creamy polenta. Here, I mix the polenta with some buckwheat flour for a darker, earthier flavor to match the rich taste of the marinated and braised shank.
MAKES 4 TO 6 SERVINGS
4 pounds (1.75 kg) beef shank
3 quarts (2.75 L) red wine (about 4 bottles)
3 medium-size carrots, chopped (1¾ cups/215 g), divided
3 medium-size ribs celery, chopped (1¾ cups/175 g), divided
1 large yellow onion, chopped (1¾ cups/275 g), divided
1 sachet of 1 sprig rosemary, 3 sprigs thyme, 1 bay leaf, 5 black peppercorns, 1 cinnamon stick, and 2 whole cloves (see page 277)
Salt and freshly ground black pepper
About ½ cup (60 g) _tipo_ 00 flour (see page 277) or all-purpose flour, for dredging
2 tablespoons (30 ml) olive oil
2 cups (475 ml) Veal Stock (page 279)
6 cups (1.5 L) Buckwheat Polenta (page 281)
1 tablespoon (3.5 g) chopped mixed fresh herbs (parsley, rosemary, and thyme) for garnish
Fit the shank in a very large nonreactive Dutch oven or large stockpot. Add the wine, 1½ cups (175 g) of the carrots, 1½ cups (150 g) of the celery, 1½ cups (250 g) of the onion, and the sachet. Cover and refrigerate for 24 hours.
Remove the shank from the marinade, strain the liquid, and reserve the liquid and sachet. Discard the vegetables.
Preheat the oven to 325°F (160°C). Pat the shank dry and season with salt and pepper. Dredge the shank in the flour, shaking off excess flour. Heat the oil in the Dutch oven or an ovenproof braising pan over medium-high heat. Add the shank and sear until browned all over, about 20 minutes total. Transfer the meat to a plate. Add the remaining ¼ cup (40 g) of carrots, ¼ cup (25 g) of celery, and ¼ cup (25 g) of onion to the pan. Cook over medium heat until the vegetables are lightly browned, about 5 minutes. Add 1 quart (1 L) of the reserved marinade and the reserved sachet to the pan. Pour in just enough veal stock to cover the meat by three-quarters. Cover the pan and braise in the oven until tender, 2 to 3 hours.
Remove the meat from the pan and cut the meat from the bone. Cut the meat into bite-size pieces, discarding excess fat and gristle. Alternatively, you can serve the meat on the bone, family style. Either way, strain the vegetables from the braising liquid and pass them through a food mill. You could use a food processor instead of a food mill, but the resulting texture isn't quite as coarse and good tasting. Return the milled vegetables to the pan, along with a few cups of the braising liquid to thin them out. Boil the mixture over high heat until reduced in volume by about half and thick like gravy, about 15 minutes. Lower the heat to keep the gravy warm, and then skim off any excess fat and season to taste with salt and pepper.
To serve, ladle some buckwheat polenta onto each plate. Place the warm beef on top and drizzle with the reduced sauce. Garnish with the chopped herbs.
**BUCATINI** with **PIG TESTINA**
Pig is the meat of choice in northern Italy, and most butcher shops carry _testina._ It's a terrine made from pig's head and shaped in rounds, squares, or rectangles. One day, I had the idea to try it with pasta. I made the terrine, chopped it up, and sautéed it with some olive oil and bucatini, a kind of thick, hollow spaghetti. Some of the liquid from the testina nestled inside the hollows and blended perfectly with the pasta. Bucatini has more bite than spaghetti, so it still made a nice contrast to the soft, creamy pig's head. And with all those warming spices—the nutmeg, cloves, allspice, and black pepper—the dish had great aroma. There's no butter here, but it's pretty rich from the testina.
MAKES 4 SERVINGS
1 small pig's head (4 to 5 pounds/1.75 to 2.25 kg)
3 quarts (3 L) 3-2-1 Brine (page 280)
3 sprigs thyme
1 tablespoon (6 g) ground cloves
1 tablespoon (6 g) freshly grated nutmeg
1½ teaspoons (3 g) ground allspice
½ teaspoon (1 g) freshly ground black pepper
1 teaspoon (6 g) curing salt #1 (see page 277)
1 large yellow onion, chopped (1½ cups/240 g)
3 medium-size ribs celery, chopped (1½ cups/150 g)
2 large carrots, chopped (1½ cups/185 g)
1 pound (450 g) fresh Bucatini (page 283), or 1 (12-ounce/340 g) box bucatini
2 tablespoons (30 ml) olive oil
2 tablespoons (30 ml) freshly squeezed lemon juice
¼ cup (15 g) chopped fresh flat-leaf parsley
3½ ounces (100 g) Parmesan cheese, grated (1 cup)
Salt and freshly ground black pepper
Rinse the pig's head and set aside. Make the brine; add the thyme, cloves, nutmeg, allspice, black pepper, and curing salt and puree everything together. Pour the brine into a stockpot or large tub, submerge the pig's head in the brine, cover, and refrigerate for 4 days.
Put the pig's head and 1½ quarts (1.5 L) of the brine in a very large Dutch oven. Add 1½ quarts (1.5 L) of water and the onion, celery, and carrots, cover, and bring to a boil over high heat. Lower the heat so that the mixture simmers gently and simmer, covered, until the skin cracks and the meat is fall-apart tender, 3 to 4 hours. Let cool in the liquid.
Carefully transfer the pig's head to a large cutting board. Strain the braising liquid and discard the solids. Return the braising liquid to the pan and boil over high heat, uncovered, until reduced in volume by about half, 40 to 50 minutes.
Cut the skin, meat, and fat from the head and chop into pieces the size of a half-dollar. Remove the skin from the tongue and discard it, and then coarsely chop the tongue; set it aside. Line an 8 x 4-inch (20 x 10-cm) loaf pan with plastic wrap, leaving a generous overhang to cover the top of the pan. Combine the skin, meat, and fat (including the tongue) in the pan and add enough of the reduced liquid to saturate but not quite cover the meat. Cover with the overhanging plastic and put a heavy weight on top (canned tomatoes or beans work well). Refrigerate overnight with the weight.
Bring a large pot of salted water to a boil. Add the pasta, and cook just until tender, 2 to 3 minutes if fresh or 8 to 10 minutes if dried.
Meanwhile, heat the oil in a large sauté pan over medium heat. Finely chop about 12 ounces (about 2 cups/340 g) of the _testina_ , add to the pan, and sauté for 2 to 3 minutes. Stir in the lemon juice, 1 cup (235 ml) of the pasta water, and the parsley.
Drain the pasta and add to the sauce, along with the Parmesan and salt and pepper to taste. Cook until the sauce is creamy and coats the pasta, 2 to 3 minutes. Serve hot.
**VEAL TONGUE SALAD** with **ESCAROLE AND SALSA ROSSA**
Before you turn away, hear me out. I've heard so many people say they are disgusted by veal tongue, but when they taste it, they end up loving it. Forget your preconceived notions about eating tongue. It's just another piece of meat. And it's damn good! Think about it. The most flavorful cuts of meat come from the areas of the animal that get the biggest workout, such as the shoulder (chuck) and—you guessed it—the tongue. It's packed with flavor. That's why you see veal tongue in every butcher shop in Italy. I've breaded it, fried it, slow-cooked it, and grilled it. It's good every which way. But my favorite technique is to brine it, slow-simmer it until it's tender, then slice it super-thin. With a salad of roasted tomatoes and peppers ( _salsa rossa_ ), it's really worth a taste.
MAKES 6 SERVINGS
Veal Tongue:
1 veal tongue, about 1½ pounds (675 g)
2 sprigs fresh thyme
2 whole cloves
1½ quarts (1.5 L) 3-2-1 Brine (page 280)
Salsa Rossa:
10 plum tomatoes
4 garlic cloves, minced
1 teaspoon (6 g) salt, plus more to taste
Freshly ground black pepper
½ teaspoon (2 g) granulated sugar
Leaves from 2 fresh thyme sprigs
1 cup (235 ml) olive oil
1 cup (90 g) Roasted Red Peppers (page 278), chopped
4 anchovy fillets, chopped
2 teaspoons (2 g) chopped fresh flat-leaf parsley
½ teaspoon (2 ml) sherry vinegar
To Serve:
1 ounce (28 g) escarole, torn into bite-size pieces
Cracked black pepper for garnish
Maldon sea salt for garnish
Minced mixed fresh herbs (parsley, rosemary, and thyme) for garnish
**For the veal tongue:** Rinse the tongue and set aside. Add the thyme and cloves to the brine, and puree everything together. Pour the brine into a medium saucepan, submerge the tongue in the brine, cover, and refrigerate for 4 days.
Pour off three-quarters of the brine and add enough water to cover the tongue by 1 inch (2.5 cm). Cover and bring to a simmer over medium heat, then adjust the heat so that the liquid simmers gently. Simmer until the tongue is almost fall-apart tender, about 190°F (88°C) internal temperature, 1 to 1½ hours. Remove the pan from the heat and let the tongue cool down in the liquid.
When the tongue has cooled, remove and discard the skin. Refrigerate the tongue for up to 2 days or use immediately.
**For the salsa rossa:** Preheat the oven to 500°F (260°C).
Slice the tomatoes in half lengthwise and scoop out all the seeds. Toss with the garlic, 1 teaspoon (6 g) of salt, ground black pepper to taste, sugar, thyme, and olive oil. Lay the tomatoes cut-side up on a rimmed baking sheet and put the sheet in the oven. Turn off the oven and let the tomatoes dry in the oven overnight, 8 to 10 hours. The next day, chop the tomatoes, reserving the oil in the pan separately. (In a pinch, you could substitute soft, oil-packed sun-dried tomatoes for the oven-dried tomatoes here.) Toss the tomatoes in a medium bowl, along with the roasted peppers, anchovies, parsley, vinegar, and 1 tablespoon (15 ml) of oil from the roasting pan. Season with salt and pepper.
**To serve:** Slice the veal tongue very thinly, about ⅛ inch (3 mm) thick, and toss gently with 2 tablespoons (30 ml) of the reserved olive oil from the pan in a bowl. Toss the escarole with 1 tablespoon (15 ml) of the reserved olive oil. Place the tongue slices in piles on plates or a large platter. Place the escarole in piles near the tongue and top the escarole with the _salsa rossa._ Garnish with cracked black pepper, Maldon sea salt, and mixed herbs.
**VEAL SHOULDER ROASTED IN HAY** with **GRILLED PEACH SALAD**
In Piedmont, Walter Eynard of Ristorante Flipot is famous for roasting big cuts of meat in hay—especially lamb. The hay imparts an earthy grassiness to the meat. When I got back to the States, I thought I'd give it a whirl. My farmer, Glenn Brendle from Green Meadow Farm, was making a delivery to Osteria one week in the fall, so I asked him to bring me some hay. We butchered a lot of veal at Mangili, so I figured I'd use veal instead of lamb. It came out awesome. All it needed was some sweetness and bitterness to round it out. So I grilled a few peaches and tossed them with cut-up radicchio and pistachios for a quick salad. The amounts here are great for a crowd but if you want to serve fewer people, get a four-pound (1.75 kg) roast and cut the recipe in half.
MAKES 8 TO 10 SERVINGS
1 bone-in veal shoulder roast, about 8 pounds (3.5 kg)
1½ gallons (5.75 L) 3-2-1 Brine (page 280)
Clean hay, a few big handfuls, enough to completely cover the veal, plus more for serving
4 peaches
½ cup (120 ml) blended oil (page 276), divided
2 tablespoons (30 ml) balsamic vinegar
1 small head of radicchio
1½ tablespoons (14 g) chopped raw unsalted pistachios, preferably Sicilian
Salt and freshly ground black pepper
In a stockpot or large, clean tub or plastic bag, submerge the veal in the brine. Cover (or seal) and refrigerate for 4 days.
When you're ready to start cooking, soak the hay in water for 1 hour.
Preheat the oven to 450°F (230°C). Remove the veal from the brine, give it a rinse, then place it in a Dutch oven and pack the wet hay around it. Roast uncovered until the hay smells dry and the meat begins to brown, about 1 hour.
Lower the oven temperature to 300°F (150°C). Add enough water to come one-quarter of the way up the meat, then cover the pan and braise in the oven until the meat is tender (about 190°F/88°C internal temperature), 4 to 5 hours. Let cool slightly, then remove the meat from the pan and discard the hay. You can cover and refrigerate the meat for up to 4 days at this point.
About an hour before you're ready to serve, preheat a grill to medium-high heat. Cut the peaches in half lengthwise and discard the pits. Coat the peaches with 1 tablespoon (15 ml) of the blended oil. Coat the grill grate with oil and grill the peaches just until grill-marked but not mushy, 2 minutes per side. Cut each peach half into quarters lengthwise.
Pour the vinegar into a medium bowl and whisk in 6 tablespoons (90 ml) of the blended oil. Core the radicchio and cut it into 1½-inch (3.75-cm) squares. Add to the bowl, along with the grilled peaches and pistachios. Season to taste with salt and pepper and toss gently.
To finish, preheat the oven to 425°F (218°C) and roast the whole shoulder on a large baking sheet until crispy on the surface, 20 to 25 minutes.
Place the remaining hay on a large serving platter and set the roast on it. Loosely assemble the salad around the roast. Serve hot, discarding any large fat deposits as you carve the roast.
**PORK NECK CANNELLONI** with **HEIRLOOM TOMATOES** and **BASIL**
Here's a way to use a part of the pig that's usually forgotten. You do see the neck used to make cured _capocollo_ , but not much else. I decided to braise it down, grind it, and stuff it into cannelloni with cheese and eggs. It was one of the best pasta fillings I ever had. Ask your butcher or farmers' market vendor for pork neck. They'll be happy to sell it to you cheap because no one asks for it. If you can't get pork neck, use boneless pork shoulder or shank instead. Either way, there's so much flavor here, all you need are some quick-sautéed heirloom tomatoes to pull the whole thing together. It's a great late summer dish when you're up to your ears in overripe tomatoes. This recipe makes a large casserole, so you have plenty of leftovers (which taste even better the next day). If you want less, cut the recipe in half.
MAKES 8 TO 10 SERVINGS
Pasta and Filling:
2 tablespoons (30 ml) extra-virgin olive oil
2½ pounds (1.125 kg) boneless pork neck or pork shoulder, cubed
1 medium-size yellow onion, chopped (1 cup/160 g)
2 medium-size carrots, chopped (1 cup/125 g)
2 medium-size ribs celery, chopped (1 cup/100 g)
4 ounces (115 g) chopped prosciutto (scraps are fine if you have them)
½ cup (120 ml) white wine
20 ounces (570 g) fresh whole-milk ricotta cheese (2½ cups)
5 large eggs
Salt and freshly ground black pepper
8 ounces (227 g) Egg Pasta Dough (page 282), rolled into 2 sheets, each about 1/16 inch (1.5 mm) thick
1 quart (1 L) Béchamel Sauce (page 281), divided
3½ ounces (100 g) Parmesan cheese, grated (1 cup)
Heirloom Tomato Sauce:
8 garlic cloves, minced
1½ cups (375 ml) extra-virgin olive oil
4 pounds (1.75 kg) heirloom tomatoes
24 fresh basil leaves
Salt and freshly ground black pepper
**For the pasta and filling:** Heat the oil in a Dutch oven over high heat. Add the pork in batches to prevent overcrowding, and cook until browned all over, 8 to 10 minutes. Transfer the meat to a bowl, and add the onion, carrots, and celery to the pan. Cook until the vegetables are soft but not browned, 5 to 6 minutes. Add the prosciutto, and cook for 2 minutes. Return the meat to the pan, add the white wine, and simmer until the liquid reduces in volume by about half, 10 minutes or so. Lower the heat to medium-low, cover, and simmer until the meat is extremely soft, 3 to 4 hours, adding a little water, if necessary, to prevent sticking. The meat should fall apart in shreds when poked with tongs. Remove the pan from the heat and let the mixture cool down in the pan. Refrigerate the whole pan until the mixture is cold, at least 1 hour or up to 1 day.
Put all the parts of a meat grinder in the freezer for 15 minutes to chill them. Grind the cold meat mixture on the small (¼-inch/6-mm) die of the grinder. Weigh out 2 pounds (1 kg) of the mixture and reserve the rest for another use. (You may have about 4 ounces/115 g extra—add it to tomato sauce or use it to top bruschetta.) Put the 2 pounds (1 kg) of ground pork in a bowl and stir in the ricotta cheese, eggs, and a generous amount of salt and pepper. Spoon the mixture into resealable plastic bags and refrigerate for at least 1 hour or up to 3 days.
Lay the pasta sheets on a lightly floured work surface and cut into 4-inch (10-cm) squares. You should get about twenty squares from each sheet, forty total.
Bring a large pot of salted water to a boil and fill a large bowl with ice water. Drop the pasta squares into the hot water a few at a time, quickly return the water to a boil, and cook for 15 to 20 seconds just to blanch them, stirring gently to prevent sticking. Immediately transfer the pasta to ice water to stop the cooking. Lay the pasta squares on kitchen towels and pat dry; they will be delicate and some may stick, but you should have plenty.
Preheat the oven to 500°F (260°C). Turn on convection, if possible.
Pipe a 1-inch (2.5-cm)-thick line of the cold filling along one edge of each pasta square. Starting at the filled side, use the edge of the kitchen towels to lift and roll the pasta to the edge of the unfilled side to enclose the filling.
Spread a thin layer of béchamel over the bottom of a 4-quart (3.75-L) baking dish (or use individual dishes, if you like). Place the cannelloni on top, seam-side down, and top with the remaining béchamel. Sprinkle with the Parmesan cheese, and then bake until the cheese melts and browns on top, 8 to 10 minutes.
**For the heirloom tomato sauce:** Heat the garlic and oil over medium-low heat in a sauté pan for 3 to 4 minutes. Thinly slice the tomatoes and add to the pan. Raise the heat to medium, and cook until the tomatoes fall apart and the sauce thickens, about 20 minutes, stirring now and then. Chop the basil and stir into the sauce, along with salt and pepper to taste.
To finish, spoon the sauce on top of the cannelloni.
**WHOLE ROASTED PIG'S HEAD**
Jonathon Sawyer was the inspiration for this recipe. He's the chef at the Greenhouse Tavern in Cleveland and is completely dedicated to zero-waste cooking. I invited Jonathon to do a dinner with me at Osteria in Philadelphia. Jonathon saw me cutting off a pig's head and throwing it out (we roast a small pig almost every day at the restaurant). "What the hell are you doing?" he screamed. "You're throwing away good food there!" He kept the head and roasted it whole with a Coca-Cola glaze. Since that day, I started roasting all our pig's heads. But now I use beer cooked down with some orange juice and chili flakes to make an _agro dolce_ (sweet-and-sour) glaze. It's become a cult classic. At Osteria, we only have so many heads a week, so people call in advance to order it. When it comes to the table, the whole roasted pig's head looks kind of macabre. But it tastes 100 percent awesome.
MAKES 2 TO 4 SERVINGS
Pig's Head:
1 small pig's head (4 to 5 pounds/1.75 to 2.25 kg)
3 quarts (2.75 L) 3-2-1 Brine (page 280)
1 tablespoon (6 g) ground fennel seeds
1 medium-size yellow onion, chopped (1½ cups/240 g)
2 large carrots, chopped (1½ cups/185 g)
3 medium-size ribs celery, chopped (1½ cups/150 g)
1 sachet of 1 sprig rosemary, 3 sprigs thyme, 1 bay leaf, 5 black peppercorns, and 1 garlic clove (see page 277)
1 quart (1 L) Chicken Stock (page 279)
Beer Agro Dolce:
12 ounces (355 ml) beer (I like pale ale, but almost any beer will do; in the fall I use a chestnut beer from Baladin called Noël)
½ cup (100 g) granulated sugar
¼ cup (60 ml) sherry vinegar
Juice of 1 orange
Big pinch of chili flakes
½ teaspoon (1 g) black peppercorns
Bruschetta:
5 thick slices rustic bread
Olive oil, for brushing
Salt and freshly ground black pepper
Raspberry, apple, or another seasonal jam of your choice
**For the pig's head:** Rinse the head and set aside. Make the brine and stir in the ground fennel seeds. Submerge the head in the brine, and refrigerate for 4 days.
Put the head in a large Dutch oven or other pot that will hold it comfortably. Add the onion, carrots, celery, and sachet to the pot. Pour in the stock and just enough water to come about halfway up the head. Cover and bring to a boil over high heat.
Preheat the oven to 325°F (160°C). When the liquid comes to a boil, transfer the pot to the oven, and cook, covered, until the head is tender, 4 to 5 hours. The skin on top of the head should start to split and the cheeks should feel soft to the touch. When the head is done, carefully remove it from the pot (heatproof silicone gloves work well), put it on a rimmed baking sheet, cover, and refrigerate. When the head is cool, use a knife to remove the skin from the cheeks and snout, peeling away the skin but leaving the meat and fat. Score the fat around the cheeks. Leave the skin on the top of the head so the ears remain attached. Remove the tongue, and remove and discard the skin from the tongue.
**For the beer agro dolce:** Combine the beer, sugar, vinegar, orange juice, chili flakes, and peppercorns in a pot and bring to a boil over high heat. Boil until the liquid reduces in volume to about ⅔ cup (150 ml) and becomes a thin syrup, 10 to 15 minutes. Strain and set aside.
**For the bruschetta:** Heat a grill or broiler to medium-high heat. Brush both sides of the bread with oil and season with salt and pepper. Grill or broil the bread until toasted, 1 to 2 minutes per side.
Preheat the oven to 450°F (230°C). Put the pig's head and the tongue in a roasting pan or on a rimmed baking sheet and pour the agro dolce evenly over the head and tongue, brushing it to cover completely. Transfer to the oven and roast for 5 minutes. Pull out the pan and turn the head onto one cheek, spooning the sauce from the pan evenly over the head and tongue. Roast for another 5 minutes. Remove again from the oven and turn the head on the other cheek, spooning the sauce all over the head and tongue. If the sauce gets too thick, add a little water to the pan to thin it enough to be pourable. Roast for another 5 minutes. The total roasting time will be 15 to 20 minutes. Put the head right side up on a large plate or platter, with the tongue alongside it. Spoon the remaining sauce over the head.
Serve with the bruschetta and jam. Invite guests to pick meat from the head (the cheeks are especially rich and delicious). The tongue can be sliced into serving pieces.
**WHOLE ROASTED PORK SHOULDER** with **PICKLED VEGETABLES**
In American markets, you see two kinds of pork shoulder—Boston butt (the upper part of the shoulder) and picnic ham (the lower part near the foreleg). At the Mangili butcher shop in Italy, I sold the whole shoulder, including both sections. Together, they weigh about fourteen pounds (6.25 kg). It's a good-size hunk of meat. If you can't get a whole shoulder, look for pork butt or picnic ham with the skin on. If you can only get skinless pork butt, you can still make this dish. You won't get any cracklings, but it'll be delicious anyway. Set out the whole roasted shoulder and let your guests pick at the meat and vegetables. Perfect for a party. The pickled vegetables add some acid to cut through the fat of the pork and the sweetness of the _vincotto_ glaze. _Vincotto_ is "cooked wine" that's reduced until it's thick, sweet, and syrupy, sort of like balsamic syrup. You can find it at good Italian specialty shops.
MAKES 10 TO 12 SERVINGS
1 bone-in skin-on pork butt or picnic ham (about 8 pounds/3.5 kg)
1 cup (135 g) kosher salt
1 cup (200 g) granulated sugar
¾ teaspoon (1.5 g) ground fennel seeds
¾ teaspoon (1 g) freshly ground black pepper
½ cup (120 ml) vincotto (see recipe headnote)
1 packed cup (225 g) light brown sugar
5 cups (200 g) torn romaine lettuce
5 cups (500 g) mixed pickled vegetables for garnish
Rinse the pork, and then pat it dry. Combine the salt, sugar, fennel, and black pepper, and rub the mixture all over the pork in a tub or large resealable plastic bag. Cover or seal, and refrigerate overnight.
Preheat the oven to 275°F (135°C). Transfer the pork to a large Dutch oven and add 2 cups (475 ml) of water. Cover and braise in the oven until the meat is tender, about 190°F (88°C) internal temperature, 4 to 5 hours, checking periodically and adding water, as necessary, to keep the liquid level about three-quarters of the way up the meat.
Carefully transfer the shoulder from the pan to a rimmed baking sheet and cover with foil. Save the cooking liquid in the pan. The liquid should measure about 1 quart (1 L). Add the vincotto and brown sugar to the liquid, and bring to a boil over high heat. Boil until the liquid reduces in volume by about half and becomes a thin syrup, 20 to 25 minutes.
Preheat the oven to 375°F (190°C). Brush the vincotto syrup all over the pork shoulder. Heat the pork in the oven until warmed through, 20 to 30 minutes. Raise the heat to 500°F (260°C). Cook until the pork is hot and the skin is crisp, 10 to 15 minutes, stopping every 5 minutes to brush the glaze from the bottom of the pan over the pork. You may need to add about 1 cup (235 ml) of water to the pan to keep the syrup from burning as it cooks.
Line a platter with the lettuce and carefully transfer the pork to the platter. Garnish with the pickled vegetables and serve the meat whole drizzled with any remaining glaze. Allow guests to crack the skin and pick off pieces of the meat, dragging them in the glaze and alternating with bites of pickled vegetables.
**PAPPARDELLE** with **VEAL RAGÙ** and **PEPPERS**
When I got back to Philadelphia after working in Italy for three years, I was buzzing with inspiration. I'd just opened Osteria, and toward the end of that summer in 2007, the menu ideas were coming fast. I was doing some really innovative cooking. But sometimes, I would walk into the garden outside the restaurant and crave the classic combinations, such as sausage and peppers. Our pepper plants were going crazy that summer. We had cherry peppers, cayennes, red and green bell peppers, Marconi peppers, and horns of the bull. I grabbed a mix of sweet and hot peppers off the plants, sautéed them, and mixed them with a ragù made from veal rib. Use whatever mix of sweet and hot peppers you have on hand. The heat level should end up being mildly spicy—just a little kick. And if you buy veal rib, the butcher might have it rolled and tied already. For this recipe, just untie it and open it flat before seasoning it.
MAKES 6 SERVINGS
2½ pounds (1.125 kg) boneless veal rib
2½ tablespoons (21 g) kosher salt
½ teaspoon (1 g) cracked black pepper
All-purpose flour, for dredging
¼ cup (60 ml) olive oil, divided
2 medium-size yellow onions, finely chopped (2½ cups/400 g)
3 large carrots, finely chopped (2¼ cups/275 g)
1 whole head celery, finely chopped (2¼ cups/225 g)
1 sachet of 1 sprig rosemary, 3 sprigs thyme, 1 bay leaf, 5 black peppercorns, 5 parsley stems, and 2 garlic cloves (see page 277)
1 quart (1 L) dry white wine
2 to 2½ quarts (2 to 2.5 L) Veal Stock (page 279)
8 ounces (227 g) Egg Pasta Dough (page 282), rolled into 2 sheets, each about 1/16 inch (1.5 mm) thick
2 long hot or cherry peppers, seeded and cut into narrow strips
2 red or green bell peppers, seeded and cut into narrow strips
4 tablespoons (56 g) unsalted butter
1 ounce (28 g) Parmesan cheese, grated (⅓ cup)
Salt and freshly ground black pepper
Rub the veal with the kosher salt and cracked black pepper. Cover and refrigerate for 2 hours to lightly cure the veal. Then rinse it off and pat dry.
Cut the veal into two or three manageable-size pieces and dredge them in flour. Heat 2 tablespoons (30 ml) of the oil in a Dutch oven over high heat. When hot, add the veal and sear until golden brown on all sides, 4 to 5 minutes per side (sear in batches, if necessary, to prevent overcrowding the pan). Transfer the veal to a plate.
Add the onions, carrots, and celery to the pan and lower the heat to medium. Add the sachet, and cook until the vegetables are golden brown, 10 to 12 minutes, stirring now and then. You want a little caramelization on the vegetables, but not too dark. Add the wine, raise the heat to high, and boil until the liquid reduces in volume by about half, 15 minutes or so.
Preheat the oven to 300°F (150°C).
Add 1½ quarts (1.5 L) of the veal stock to the pan. When it comes up to a simmer, add the seared veal, which should only be covered by liquid about halfway. Adjust the amount of veal stock as necessary.
Cover the pan and braise in the oven until the veal is tender but not easily falling apart, 2 to 3 hours. Chop the veal into bite-size pieces and then return it to the sauce. Discard the sachet.
Meanwhile, roll the pasta dough out into 2 sheets, each about 1/16 inch (1.5 mm) thick. Lay the pasta sheets on a lightly floured work surface and trim the edges square. Cut crosswise into strips a little less than 1 inch (2.5 cm) wide, preferably with a fluted cutter.
Heat the remaining 2 tablespoons (30 ml) of oil in a large, deep sauté pan over medium heat. Add the peppers, and cook until just tender, about 5 minutes. Add the veal ragù, and cook until slightly thickened, 5 minutes or so.
Bring a large pot of salted water to a boil. Add the pasta, and cook until tender yet still firm, about a minute. Drain and add the pasta to the sauté pan, along with the butter and Parmesan. Cook, stirring gently, until the sauce is creamy, 2 to 3 minutes. Taste, then season with salt and pepper and serve hot.
**GRILLED LAMB RACK** with **FAVETTA** and **ROASTED PEARL ONIONS**
A couple of years after I returned to the United States, my wife, Claudia, asked me to bring home some fava beans for dinner. It was springtime and she made a rustic fava bean puree with grilled lamb rack. It was exactly like the one her Uncle Bruno made for us years before when we started dating in Italy. I like to cut the lamb rack from the whole saddle, French it, and then tie it myself. But most U.S. butchers sell Frenched lamb racks already tied. When you cross-cut the rack into portions, each piece should have a nice long rib bone that you can use as a handle to hold the meat while dragging it into the fava puree. That's how Uncle Bruno eats it.
MAKES 8 SERVINGS
Favetta:
4 pounds (1.75 kg) young fava beans in the pods
½ cup (120 ml) olive oil
Salt and freshly ground black pepper
Roasted Pearl Onions:
12 ounces (340 g) red pearl onions, peeled
3 cups (750 ml) plus 2 tablespoons (30 ml) olive oil, divided
Salt and freshly ground black pepper
1 cup (235 ml) red wine vinegar
30 mint leaves, cut into chiffonade
Lamb Racks:
2 Frenched lamb racks, about 4 pounds (1.75 kg) total, trimmed and tied
Salt and freshly ground black pepper
Olive oil, as needed
**For the _favetta_ :** Bring a large pot of water to a boil and fill a large bowl with ice water. Add the whole fava pods to the boiling water and blanch for 1 minute. Transfer to the ice water to stop the cooking. When cool, pluck the favas from the pods, then pinch open the pale green skin and pop out the bright green fava beans. You should have about 4 cups (750 g).
Place the fava beans in a food processor, turn it on, and slowly add just enough olive oil until the beans catch and the mixture forms a rustic, slightly chunky puree. Season to taste with salt and pepper.
**For the roasted pearl onions:** Preheat the oven to 500°F (260°C). Toss the onions with 2 tablespoons (30 ml) of the olive oil and season with salt and pepper. Spread the onions in a single layer on a rimmed baking sheet and roast until tender and golden in color, 10 to 12 minutes, shaking the pan once or twice. Let cool slightly, then slice any large onions in half lengthwise. The onions can be roasted up to 2 days ahead.
Put the vinegar in a blender and slowly add the remaining 3 cups (750 ml) of oil until blended and emulsified, 1 to 2 minutes. Season generously with salt and pepper. Pour the vinaigrette into a medium saucepan and add the onions.
**For the lamb racks:** Heat a grill (preferably with oak wood) to medium heat with both high- and low-heat areas.
Season the lamb racks with salt and pepper. Scrape the grill grate clean, coat it with oil, and grill the racks over a high-heat area of the grill until nicely grill-marked, 5 to 7 minutes per side. Move the meat to a low-heat area of the grill, cover, and cook to medium rare (135°F/57°C internal temperature), about 10 minutes more.
Warm the onions and vinaigrette over medium heat.
Use two large dinnerware tablespoons to scoop up and shape the _favetta_ into football shapes (quenelles). Place a quenelle on each plate just a little left of center. Remove the butcher's twine from the lamb racks and cut into portions between each bone. Place two portions on each plate. Mix the mint into the onions and spoon the onions on top of the lamb, reserving some of the liquid to drizzle around the plates.
BUTCHERING A LAMB RACK
* * *
ALME
A MAESTRO IN THE KITCHEN, AMORE IN THE DINING ROOM
WHEN I PICTURE ITALY IN MY MIND, I SEE ALME.
The town is only the size of a football field. It has a church, a movie theater, one good bar, and an awesome _gelateria_ (ice-cream shop) called Paradiso. But that's about it. Frosio Ristorante is what makes the place famous. The restaurant sits inside an eighteenth-century villa that has an ancient look with a modern touch. It's like a yellow-painted ice cube dropped in the middle of a classic Italian village.
When I started working there as a stage (an unpaid kitchen apprentice), I'd been in Italy for nine months or so. I had finished up at Mangili butcher shop and then cooked at Loro, which recently earned its first Michelin star. The chef at Loro was Antonio Rochetti, who has since become a good friend of mine. Antonio taught me so much. It was only the two of us in the kitchen, and we did everything from butchering animals to making bread, pasta, and desserts. I couldn't imagine a better introduction to cooking in Italy. But after a few months, it was time to move on. I really wanted to cook at Frosio. The food there totally blew me away. Antonio had worked there years ago, and he and the chef were friends, so Antonio set me up to work with him.
Paolo Frosio, a thoughtful, sensitive chef, was one of the first to put the province of Bergamo on the culinary map. After culinary school, Paolo trained in France and Los Angeles, and then came back home brimming with experience. He opened his restaurant in 1990 at the age of twenty-three and he earned a Michelin star just over a year later, making him Italy's youngest-ever Michelin-starred chef. With his training and background, he created magnificent plates of food, staying true to what the locals grew up eating but going way beyond what a typical _osteria_ or trattoria could do.
The Frosio family has been in the food business for more than one hundred years, so it was easy for Paolo to blend the local and global at his restaurant. His aunt raises full-breasted duck and guinea hens on her farm, and his sister provides him with Taleggio cheese that she ages in caves until it gets firm and pungent. He would make a creamy _fonduta_ (fondue) with the Taleggio and spoon it over two poached eggs from his aunt's birds, all on toasted brioche. Then he shaved fresh black truffles over the top. Simple, but mind-blowing. The truffles were usually dug out of the ground the day before in Val Brembana, not too far away.
Just about everything we cooked at Frosio was tied to the food and culture of Bergamo. Our meat came from one of the area's best butchers, Franco Cazzamali, who is considered a master of beef the way a sushi chef is a master of fish. Cazzamali uses Fassone, the local white cattle that everyone calls _la granda_ (most-prized beef). It has a dark red color, lots of marbling, and big flavor that really shines when shaved for carpaccio.
The cheeses that didn't come from Paolo's sister came once a week from Vecchio Larry (Old Larry), a big, burly guy who owned a rustic restaurant outside Milan. As a side business, he sold local cheeses, such as stracchino, strachitunt, robiola, and formagella. Vecchio Larry was a huge Celtics fan and loved Larry Bird. When he found out there was an American in the Frosio kitchen, we started talking sports. Every week, he'd come into the kitchen, joking, laughing, and talking trash.
Vecchio Larry is a character, or a _personaggio_ , as they say in Italy. Camillo, Paolo's older brother, is another _personaggio._ Camillo runs the front of the house and wine service at Frosio Ristorante. He's thin, walks slightly hunched over, and loves to drink sparkling wine and crack jokes about Americans. He's a big football fan. The Super Bowl came around while I was working there, and Camillo organized a Super Bowl party for the staff. That February, he bought hamburgers and Budweisers, and we stayed up until three in the morning to watch the game.
The biggest thing I learned from Paolo Frosio is that nothing goes to waste. He used the stems and leaves from every plant and the skin and bones from every animal that came into that kitchen. He was a master at opening up the walk-in, pulling out a bunch of scraps, and making a meal that would blow you away. We'd come to the end of the workweek, and the last party of six would come into the restaurant and ask for a tasting menu. "Shit!" Paolo would say to the cooks. "We don't have anything left!" Then he'd open up the walk-in, grab a piece of mullet, some pork trimmings, some vegetables, and make an incredible five-course tasting menu. I learned how to do that from him. When you get in a whole pig, you use the whole thing, nose to tail. When you get fennel, you use the fronds; you chop the stems. You make pesto from celery leaves. That's how you save money: you use everything.
By late winter of that year, I started feeling more comfortable in Italy. I'd picked up a few more words since working at Mangili. Just enough to get by. When April rolled around, the weather started getting warmer, and we opened the kitchen windows to let in some air. I had been working the apps and pastry stations but also did entrées with Paolo. One night, Matteo, one of the cooks who used to work there, came in for a birthday dinner with an old friend of his. They both got the _degustazione di carne_ , the meat-tasting menu. For an appetizer, I shaved raw Fassone beef and plated it with blanched, pencil-thin asparagus and shaved Parmigiano-Reggiano. I also made Paolo's poached eggs with Taleggio _fonduta_ and black truffle on brioche. I served them crispy sweetbreads with Parmigiano _fonduta_ and grilled Treviso radicchio, then pork shank osso buco with saffron rice _crema._ And for dessert, molten chocolate flan. I also served them chestnut cannoli, _torrone semifreddo_ , and one of my specialties, _piccola pasticceria_ (petit fours).
Around midnight, we finished up in the kitchen. I went into the dining room to say hi to Matteo, and he introduced me to the birthday girl, Claudia. I said literally three words to her: " _Ciao. Sono_ Jeff." (Hi. I'm Jeff.) I was still learning the language, and my already-small vocabulary deserted me. Matteo and I talked for a few minutes before they got up to leave. I couldn't help noticing Claudia's tight jeans as they walked out the door.
The next morning, I called Matteo because we had dinner plans. It was restaurant week and Bergamo's top restaurants were running dinner specials: thirty-three euros for anyone age thirty-three or under. We set a time and place to meet, and then I asked Matteo, "Do you think Claudia would like to come with us?"
Crispy Sweetbreads with Parmigiano Fonduta and Grilled Treviso
•
Canederli with Cabbage and Speck
•
Doppio Ravioli with Duck and Chestnut
•
Fettuccine with Braised Rabbit and Porcini
•
Grilled Halibut with Mussels and Chanterelles
•
Pork Shank Osso Buco with Saffron Rice Crema
•
Chocolate Flan
•
Chestnut Rice Pudding with Persimmon
•
Rhubarb Tartellette with Italian Meringue
**CRISPY SWEETBREADS** with **PARMIGIANO FONDUTA** and **GRILLED TREVISO**
Back in 1996, when I was training at the Culinary Institute of America, we always poached sweetbreads before sautéing or pan-frying them. When I got to Italy, I saw Paolo Frosio doing the same thing. But I always thought that pre-cooking gave sweetbreads a chalky texture. I had learned from Marc Vetri that you can just soak them in ice water before cooking them. That's how I like to do them. With a quick pre-soak, the sweetbreads stay nice and meaty and fry up crispier.
MAKES 4 SERVINGS
1 pound (450 g) sweetbreads, preferably veal thymus
1 head Treviso radicchio
¾ cup (175 ml) olive oil, divided, plus some for drizzling
Salt and freshly ground black pepper
¾ cup (95 g) _tipo_ 00 flour (see page 277) or all-purpose flour
4 ounces (1 stick/115 g) unsalted butter
6 sage leaves
1 cup (235 ml) heavy cream
1 teaspoon (2 g) cracked black pepper
1 ounce (28 g) Parmesan cheese, grated (¼ cup)
2½ tablespoons (37 ml) sherry vinegar
1 tablespoon (3.75 g) chopped mixed fresh herbs (parsley, rosemary, and thyme)
Extra-virgin olive oil for garnish
Rinse the sweetbreads in cold water, then soak in a bowl of ice water for 10 minutes. Pat dry, then remove the outer membrane from the sweetbreads. Cut the sweetbreads into 1-ounce (28-g) portions, about the size of two fingers, and refrigerate for up to 8 hours. It's important to keep the sweetbreads cold right up until you cook them.
Heat a grill to medium heat. Quarter the head of Treviso lengthwise, drizzle the pieces with a little olive oil, and season with salt and pepper. Brush the grill grate with oil and grill the Treviso until charred on all sides, 2 to 3 minutes per side. Set aside.
Season the sweetbreads all over with salt and pepper, then dredge in the flour, gently shaking off excess flour.
Heat the butter and ¼ cup (60 ml) of the olive oil in a large skillet over medium heat. When the butter starts to foam, add the sage leaves. Use a spoon to baste the sage with the butter mixture until the sage is crispy, 3 to 4 minutes. Remove from the pan and drain on paper towels; season lightly with salt and pepper. Add the sweetbread pieces to the pan, and cook until darkly browned and crisp all over, about 15 minutes, turning once or twice. Transfer to paper towels to drain.
Combine the heavy cream and cracked pepper in a small saucepan over medium heat. Cook until steam begins to rise from the cream but it doesn't simmer, 3 to 4 minutes. Whisk in the Parmesan until melted. Strain the mixture through a medium-mesh strainer into a small metal bowl, discarding the solids. You will be left with a creamy Parmesan sauce ( _fonduta_ ). Set the bowl into another bowl of hot water to keep warm until ready to serve (if you put it over direct heat the Parmesan will coagulate on the bottom).
Put the vinegar in a bowl or small food processor and slowly whisk or blend in the remaining ½ cup (120 ml) of olive oil until thickened, 1 to 3 minutes. Season with salt and pepper to taste. Cut off and discard the Treviso cores, then cut the grilled Treviso into ¼-inch (6-mm) chunks and toss in a bowl with the vinaigrette and herbs. Keep warm or rewarm gently just before serving.
Spoon about ¼ cup (60 ml) of fonduta onto each plate, and place the sweetbreads on top. Garnish with the grilled Treviso, fried sage, and a drizzle of extra-virgin olive oil.
**CANEDERLI** with **CABBAGE** and **SPECK**
Most American restaurants have a quick staff meal in the middle of the day before dinner service. When I worked in Italy, staff meal was like no other I'd ever had. We actually sat down for thirty to forty minutes and enjoyed a three-course meal. At Frosio, one of the cooks, Francesco Cereda, made these bread gnocchi for staff meal one day and they blew my mind. He soaked some leftover bread in milk, mixed in sautéed onions and eggs, and boiled the dumplings like pasta. The canederli were unbelievably light and fluffy, and he served them with only brown butter, Parmesan, and fresh sage. It was simple and beautiful. Canederli come from Trentino, so I like to serve them with cabbage and Speck, two ingredients that are a huge part of Trentino cooking. That chewy, salty Speck ties the whole dish together. If you can't find Speck, use coppa or prosciutto.
MAKES 4 TO 6 SERVINGS (18 TO 20 DUMPLINGS)
¾ cup plus 2 tablespoons (205 ml) whole milk, at room temperature
9 tablespoons (125 g) unsalted butter, melted, divided
2 large eggs
1 tablespoon (3.75 g) chopped fresh flat-leaf parsley
1 tablespoon (18 g) salt, plus more to taste
½ teaspoon (1 g) grated nutmeg
½ teaspoon (1 g) freshly ground black pepper, plus more to taste
8 ounces (227 g) white bread, crust removed, cubed (from 8 to 10 slices)
6½ tablespoons (50 g) sifted _tipo_ 00 flour (see page 277) or all-purpose flour, plus about 1 cup (125 g) for dusting
2 teaspoons (10 ml) grapeseed or olive oil
8 ounces (227 g) savoy cabbage, julienned (2 cups)
1 ounce (28 g) Parmesan cheese, grated (¼ cup)
2 ounces (56 g) cured Speck, coppa, or prosciutto, thinly sliced into strips (½ cup)
Whisk together the milk, 5 tablespoons (70 g) of the melted butter, and the eggs, parsley, salt, nutmeg, and ½ teaspoon (1 g) of black pepper in a large bowl. Stir in the cubed bread, then the 6½ tablespoons (50 g) of flour. Let stand for 30 minutes.
Bring a large pot of salted water to a boil. Put the remaining 1 cup (125 g) of flour in a bowl, then scoop the bread mixture into small balls about the size of golf balls and drop them in the flour, rolling them gently until dusted with flour. Shake off the excess flour. The canederli will be loose and soft. Drop the dusted canederli in the boiling water, in batches, if necessary, to prevent overcrowding, and cook until they float, 4 to 5 minutes. Remove from the boiling water with a slotted spoon and transfer to a bowl or plate.
Meanwhile, heat the remaining 4 tablespoons (55 g) of butter over medium heat until deep amber in color, about 5 minutes, swirling the pan for even browning. Set aside.
Heat the oil in a large sauté pan over medium heat and add the cabbage, cooking it just until tender, about 5 minutes. Taste and season with salt and pepper.
Divide the cabbage among plates, top with the canederli (four to five per serving), and sprinkle with the Parmesan. Drizzle with the browned butter and scatter the strips of Speck over the top.
**DOPPIO RAVIOLI** with **DUCK** and **CHESTNUT**
Toward the end of summer 2008, I was developing the fall menu for Osteria in Philadelphia. I thought back to the _doppio_ (double) ravioli that Paolo Frosio made when I was there, which contained two different fillings. Genius! I wanted some chestnuts on our fall menu, so I decided to try duck and chestnuts. With some chestnut flour in the pasta itself, the combination was perfect. Thanks for the inspiration, Paolo!
MAKES 6 TO 8 SERVINGS
Duck Filling:
2 pounds (1 kg) bone-in duck legs
Salt and freshly ground black pepper
2 tablespoons (30 ml) blended oil (page 276)
½ small yellow onion, chopped (⅓ cup/52 g)
1 medium-size carrot, chopped (⅓ cup/40 g)
1 medium-size rib celery, chopped (⅓ cup/33 g)
4 cups (1 L) red wine
1 sachet of 2 sprigs parsley, 2 sprigs rosemary, and 2 sprigs thyme (see page 277)
1 large egg
1 ounce (28 g) Parmesan cheese, grated (¼ cup)
Chestnut Filling:
1 tablespoon (15 ml) olive oil
1 tablespoon (14 g) unsalted butter
½ small yellow onion, chopped (⅓ cup/52 g)
1 small garlic clove, minced
8 ounces (227 g) peeled chestnuts (1⅓ cups), thawed if frozen
4 ounces (113 g) fresh whole-milk ricotta cheese (½ cup)
½ ounce (14 g) Parmesan cheese, grated (2 tablespoons)
1 large egg
Ravioli:
8 ounces (227 g) Chestnut Pasta Dough (page 282), rolled into 2 sheets, each about 1/32 inch (0.8 mm) thick
8 ounces (2 sticks/225 g) unsalted butter
6 ounces (170 g) peeled chestnuts (1 cup), sliced
8 sprigs thyme
1¾ ounces (50 g) Parmesan cheese, grated (½ cup) for garnish
**For the duck filling:** Remove any excess fat deposits from the duck legs. Rinse them well, then pat dry and season the duck all over with salt and pepper. Heat the oil in a Dutch oven or large, deep saucepan over high heat. When the pan is smoking hot, add the duck and sear on both sides until well browned, 10 to 12 minutes total. Transfer to a plate or platter. Pour off all but a few tablespoons of fat from the pan, and then add the onion, carrot, and celery; cook over medium heat until soft but not browned, 3 to 4 minutes. Return the duck to the pan, pour in the wine, and add the sachet. Bring to a simmer, then adjust the heat so that the liquid simmers gently. Cover and simmer gently until the duck is very tender, just about to the point of easily falling apart, 2 to 3 hours. Remove from the heat, and remove and discard the sachet. Transfer the duck to a plate or platter. Strain the broth, reserving the broth and vegetables separately. Pick the meat and skin from the bones, discarding all the bones. Grind the meat, skin, and reserved vegetables on the small die of a meat grinder. Fold in the egg and cheese, and season with salt and pepper. Spoon the filling into a resealable plastic bag and refrigerate for up to 2 days. Refrigerate the braising liquid as well.
**For the chestnut filling:** Heat the oil and butter in a sauté pan over medium-low heat. When the butter melts, add the onion and garlic, and cook until soft but not browned, 8 to 10 minutes. Add the chestnuts and season with salt and pepper. Add enough water to cover the ingredients, increase the heat to medium, cover, and cook until the chestnuts are tender enough for a fork to slide in easily (sort of like boiled potatoes), 15 to 20 minutes. Drain and reserve the cooking liquid. Transfer the solids to a blender and puree with just enough of the cooking liquid to make a thick, smooth puree that's roughly the texture of ricotta cheese, scraping down the sides of the blender as necessary. Fold in the ricotta, Parmesan, and egg. Spoon the filling into a resealable plastic bag and refrigerate for up to 2 days.
**To make the ravioli:** Lay a pasta sheet on a lightly floured work surface and dust with flour. Trim the ends to make them square, then fold the dough in half lengthwise and make a small notch at the center to mark it. Open the sheet so it lies flat again and spritz with water. Cut a corner from each bag of filling and pipe the fillings in ½-inch-wide columns along the length of the pasta, leaving a ½-inch margin around each column of fillings and stopping at the middle of the sheet. (See photo at right, but note that the pasta sheet has been rolled in a commercial pasta machine and is about twice as wide as what you get from a typical consumer pasta machine.) Neaten up the columns of filling with your fingertips. Lift up the empty side of the pasta sheet and fold it over to cover the filling. Gently press the pasta around each strip of filling to seal (it helps to use a long wooden dowel or chopstick). Use a fluted pasta cutting wheel or a sharp knife to cut the ravioli into rectangles. When cut, each ravioli should have two fillings in it. Repeat with the remaining pasta dough and filling. You will have about sixty ravioli.
Bring a large pot of salted water to a boil. Add ten to twelve ravioli at a time, and cook just until tender, 4 to 5 minutes.
Meanwhile, melt the butter in a large, deep sauté pan. Add the chestnuts, thyme, and 1 cup (235 ml) of pasta water. Skim and discard the fat from the reserved duck braising liquid and add 1 cup (235 ml) of the braising liquid to the pan. Remove the cooked ravioli from the pasta water with a slotted spoon and add them to the pan, simmering until the sauce coats the pasta, 3 to 4 minutes. Season with salt and pepper. Divide among plates and sprinkle with Parmesan.
DOPPIO RAVIOLI ASSEMBLY
**FETTUCCINE** with **BRAISED RABBIT** and **PORCINI**
You see thick, hearty _ragù_ (stew) on every menu in northern Italy. But I wanted to try and make a ragù that was delicate instead of heavy. Rabbit and porcini came to mind right away. In Italy, eating rabbit is about as common as eating chicken is in the United States. It made perfect sense. The rabbit is lean, and the porcini are earthy. Plus, Italian rabbits are bigger and richer-tasting than the ones you see in the States, so they stay rich and moist even when braised down into a ragù. Don't worry if you can't find Italian rabbits for this dish. Farmed American rabbits work fine. The dish just comes out tasting a little leaner.
MAKES 6 TO 8 SERVINGS
1 pound (450 g) Egg Pasta Dough (page 282), rolled into 4 sheets, each about 1/16 inch (1.5 mm) thick
4 ounces (113 g) dried porcini mushrooms (about 1½ cups)
2 rabbits (about 3 pounds/1.3 kg each)
Salt and freshly ground black pepper
¼ cup (60 ml) olive oil, divided
1 small yellow onion, finely chopped (⅔ cup/105 g)
½ cup (120 ml) white wine
2 cups (480 g) canned plum tomatoes, preferably San Marzano, cored and crushed by hand
4 tablespoons (56 g) unsalted butter
2¾ ounces (78 g) Parmesan cheese, grated (¾ cup), divided
Lay a pasta sheet on your work surface and cut the pasta crosswise into 12-inch (30.5 cm) lengths, making sure each one is well floured. Run each piece of pasta through a fettuccine cutter and fold it gently onto a floured tray. Repeat with the remaining pasta dough. Dust with flour, cover, and freeze for up to 3 days.
Soak the porcini in hot water until soft, about 15 minutes. Pluck out the mushrooms and finely chop. Strain the soaking liquid through a fine-mesh strainer and reserve.
Rinse the rabbits and remove the innards and excess fat deposits. Remove the hind legs and forelegs by driving your knife straight through the hip and shoulder joints. Cut each leg in half through the center joints. Snip through the breast bones with kitchen shears, then cut the rabbits crosswise into five or six pieces each. Season the rabbit pieces all over with salt and pepper. Heat 2 tablespoons (30 ml) of the oil in a Dutch oven over medium-high heat. Add the rabbit pieces in batches to prevent overcrowding, and sear until golden brown on both sides, about 5 minutes per side. Transfer to a platter.
Preheat the oven to 350°F (175°C).
Add the onion to the pan, and cook over medium heat until soft but not browned, 4 to 5 minutes. Add the wine, stirring to scrape the pan bottom. Simmer until the liquid reduces in volume by about half, 5 minutes. Put the tomatoes in a food processor and pulse until finely chopped, and almost pureed. Add the tomatoes to the pan, along with the chopped mushrooms and the rabbit pieces. Add just enough of the reserved porcini liquid to barely cover the rabbit pieces. Cover and braise in the oven until the rabbit is so tender it falls apart, about 2 hours. Remove the rabbit, let cool slightly, and then pick the meat from the bones, feeling for small bones with your fingers. Shred the meat and discard the skin and bones. Put the braising liquid through a food mill or puree it briefly in a food processor. If the pureed braising liquid is thin, boil it until slightly thickened. Return the shredded meat to the pureed braising liquid.
Bring a large pot of salted water to a boil. Drop in the pasta in batches to prevent overcrowding, and stir after a couple of seconds to prevent sticking. Cook until tender, 30 seconds to 1 minute, depending on whether it is refrigerated or frozen. Drain the pasta and reserve the pasta water.
Add the remaining 2 tablespoons (30 ml) of olive oil and 2 cups (475 ml) of the pasta water to the ragù. Bring to a boil over high heat, and then lower the heat to medium and simmer gently for a minute or two. Add the cooked pasta, stirring constantly to prevent sticking. When the sauce is slightly reduced and coats the pasta, add the butter and ½ cup (50 g) of Parmesan. Season with salt and pepper to taste and stir until the butter melts completely, 1 to 2 minutes. Transfer to plates and garnish with the remaining Parmesan.
**GRILLED HALIBUT** with **MUSSELS** and **CHANTERELLES**
Frosio had a way of making sauce that I'll never forget. He would put some stock in a pan with a tiny amount of butter and olive oil, simmer it down, and then shake the hell out of it until it got thick and creamy. When I got back to Philadelphia, I thought that kind of sauce would be perfect if we made it from the juices of steamed mussels. It was early summer, chanterelles and halibut were both in season, and the ingredients practically combined themselves.
MAKES 4 SERVINGS
2 tablespoons (30 ml) grapeseed oil, plus some for oiling the fish
1 small yellow onion, thinly sliced into half-moons (½ cup/80 g)
1 garlic clove, smashed
2 pounds (1 kg) mussels, cleaned and scrubbed
½ cup (120 ml) dry white wine
6 tablespoons (90 ml) olive oil, divided
6 ounces (170 g) chanterelle mushrooms, thinly sliced lengthwise (about 2 cups)
Salt and freshly ground black pepper
1 teaspoon (5 ml) freshly squeezed lemon juice
4 scallions, thinly sliced (green parts only)
¼ cup (15 g) chopped fresh flat-leaf parsley
4 (6-ounce/170-g) skinless halibut pieces
Heat the grapeseed oil in a 2-quart (2-L) pot over medium heat. Add the onion, and cook until soft but not browned, about 4 minutes. Add the garlic, and cook for 1 minute. Add the mussels, along with the wine. Cook until the wine reduces in volume slightly, and then add enough water to come about halfway up the mussels. Cover, bring to a simmer, and steam over medium heat until all the mussels open, 10 to 12 minutes. Discard any mussels that have not opened. Remove the mussels from their shells with a melon baller or small spoon, keeping the mussel meat as whole as possible. Strain the stock at least twice through cheesecloth or a clean coffee filter to remove any grit.
Heat 4 tablespoons (60 ml) of the olive oil in a large sauté pan over medium-high heat. Add the mushrooms and sauté until soft, about 5 minutes. Season with salt and pepper. Add the lemon juice and ½ cup (120 ml) of the mussel stock, scraping the pan bottom and simmering until the liquid reduces in volume and starts to thicken when stirred, 5 to 8 minutes. When the sauce has a creamy consistency, add the scallions, parsley, and reserved mussel meat, and cook for 1 minute. Taste and adjust the seasonings. Remove from the heat and keep warm.
Heat a grill to medium-high heat. Season both sides of the fish with salt and pepper and coat with oil. Scrape the grill grate clean and coat it with oil. Grill the fish until deeply grill-marked on one side, about 4 minutes. Rotate 90 degrees for crosshatch grill marks and continue grilling until the flesh turns white about halfway up the sides, 3 to 4 minutes more. Flip and cook until the fish is just a little moist and translucent in the center, about 125°F (52°C) internal temperature, 5 minutes or so.
Spoon the mushrooms and mussels on opposite sides of each plate and place the grilled fish in the middle. Add the remaining 2 tablespoons (30 ml) of olive oil to the mushroom pan and gently shake and swirl the pan until the sauce becomes creamy and thick, about 30 seconds. Drizzle the sauce over the fish and around the plate.
**PORK SHANK OSSO BUCO** with **SAFFRON RICE CREMA**
When they hear "osso buco," most people think of veal. But _osso buco_ just means "bone with a hole," which is what you see in a crosscut piece of leg. I thought, why not make osso buco with pork shanks instead of veal? They're even richer and more deeply flavored than veal. Otherwise, the flavors here are classic Milanese: braised shanks, saffron risotto, and lemon-garlic-parsley gremolata for punch. The pork makes all the difference. This recipe will feed two hungry people, but if you have a braising pan big enough to hold eight shanks in a single layer (or if you have two pans), double the amounts to serve four people.
MAKES 2 SERVINGS
Pork shanks:
4 small pork shanks, each 6 to 7 ounces (170 to 200 g) and 2 inches (5 cm) thick
Salt and freshly ground black pepper
½ cup (62 g) _tipo_ 00 flour (see page 277) or all-purpose flour
¼ cup (60 ml) grapeseed oil
2 medium-size carrots, diced (1 cup/125 g)
1 medium-size yellow onion, diced (1 cup/160 g)
2 medium-size ribs celery, diced (1 cup/100 g)
¼ cup (60 ml) red wine
3 cups (720 g) canned plum tomatoes, preferably San Marzano, cored and crushed by hand
1 sachet of 3 sprigs rosemary, 4 sprigs thyme, 10 peppercorns, 1 garlic clove, and 1 bay leaf (see page 277)
Saffron Rice Crema:
1 tablespoon (15 ml) olive oil
2 tablespoons (20 g) minced yellow onion
½ cup (100 g) Arborio or other risotto rice, rinsed
Salt and freshly ground black pepper
3 to 4 cups (0.75 to 1 L) very hot tap water or boiling water, divided
1 tablespoon (2 g) saffron
Gremolata:
2 tablespoons (7 g) chopped fresh flat-leaf parsley
1 small garlic clove, minced
Grated zest of ½ lemon
**For the shanks:** Preheat the oven to 350°F (175°C). Rinse the pork shanks, and then pat them dry. Season both sides with salt and pepper and dredge the shanks in flour in a shallow plate.
Heat the oil in a Dutch oven over medium-high heat. When hot, add the shanks and sear on both sides, about 5 minutes per side. Transfer the shanks to a plate and add the carrots, onion, and celery to the pan. Cook until golden brown, 5 to 8 minutes. Add the red wine, scraping the pan bottom and cooking for a minute or two. Add the shanks back to the pan, and cook until the wine reduces in volume by about three-quarters, 3 to 4 minutes. Add the tomatoes, along with enough water to cover the ingredients halfway. Add the sachet to the pan, cover, and braise in the oven until the shanks are tender, 2½ to 3½ hours, checking once or twice and adding water, if necessary, to keep the shanks halfway covered in liquid. Remove the shanks and pass the vegetables and braising liquid through a food mill to make a rustic puree. You can also use a food processor, pureeing the vegetables with just enough liquid to make them loose, and then mixing the puree back into the braising liquid. You should have about 3 cups (750 ml) of puree.
**For the crema:** Heat the oil in a medium saucepan over medium heat. Add the onion, and cook until soft but not browned, 3 to 4 minutes. Stir in the rice and season with salt and pepper. Add 2 cups (475 ml) of the hot water and bring to a gentle simmer. Cook, stirring occasionally just to make sure the rice is not sticking on the bottom. Avoid overstirring, as the more you stir the starchier and gummier the final crema will be. Cook until the rice is so tender that it starts to fall apart and most of the liquid is absorbed, 30 to 40 minutes total, adding just enough water, as needed, to prevent sticking. You will need to add about ½ cup (120 ml) every 15 minutes after the first 20 minutes.
Meanwhile, steep the saffron in 2 tablespoons (30 ml) of hot water. Add the steeped saffron and steeping liquid to the rice, along with a final ½ cup (120 ml) of added water. While the rice mixture is still hot, puree it quickly in a blender on high speed. The longer it purees, the gummier it will become, so keep the pureeing time short. If the puree is too thick, add a little water to thin it to a nice creamy consistency. Taste and season with salt and pepper.
**For the gremolata:** Toss together the parsley, garlic, and lemon zest.
Pour the pureed braising sauce from the shanks into a medium saucepan, along with enough water to make a sauce the consistency of thick gravy. Season to taste with salt and pepper and half of the gremolata. Place the pork shanks in the sauce and heat through, 8 to 10 minutes.
Spoon the crema on individual plates, top with the shanks and some sauce, and sprinkle with the remaining gremolata.
**CHOCOLATE FLAN**
Everyone teases me about the name of this dish because it's basically a molten chocolate cake, but at Frosio, we called it a flan. I've made this dish hundreds and hundreds of times. It's one of the first desserts I cooked for Claudia, and I'll keep serving it to her for years to come. It's a great make-ahead dessert because you can pour the chocolate mixture into the flan molds and refrigerate it for hours or even days ahead of time. Then you take it straight from the fridge to the oven just before serving.
MAKES 6 TO 8 SERVINGS
1½ cups (300 g) granulated sugar
4 large eggs
6 large egg yolks
8 ounces (2 sticks/227 g) unsalted butter
8¾ ounces (250 g) chocolate, preferably 58% cacao, chopped (2 cups)
1⅔ cups (208 g) _tipo_ 00 flour (see page 277) or all-purpose flour
½ cup (120 ml) Crème Anglaise (page 284)
Confectioners' sugar, for dusting
¾ cup (112 g) chopped raw unsalted pistachios, preferably Sicilian
1½ cups (375 ml) Pistachio Gelato (page 286)
Whip the sugar, eggs, and egg yolks in a stand mixer on high speed until light and fluffy, about 10 minutes.
Heat the butter in a saucepan over medium heat until melted and hot. Add the chocolate and remove from the heat. Let stand until the chocolate is mostly melted, 5 minutes or so. Stir until the chocolate and butter are blended. Blend the chocolate mixture into the egg mixture on low speed. Sift in the flour, mixing on low speed.
Preheat the oven to 375°F (190°C). Butter and flour six to eight 4- to 6-ounce (125- to 175-ml) baking tins or ramekins. Fill the buttered and floured tins to just under the inside rim. Bake until set on the sides but still gooey in the center, 6 to 8 minutes. The centers should be soft to the touch and jiggle when shaken but not be really liquidy. (The flan molds can be filled, covered, and refrigerated for up to 2 days before baking. Bake uncovered straight from the refrigerator, adding a minute or two to the baking time.)
To finish, spoon a swirl of crème anglaise on each plate. Run a knife around the edge of each flan to loosen it. Turn each flan out onto the plate. Top each with confectioners' sugar and a couple of teaspoons of chopped pistachios. Place a couple of tablespoons of chopped pistachios on the plate as a bed for the gelato. Use two spoons to scoop the gelato into an oval shape (quenelle) and place the quenelle on the chopped pistachios.
**CHESTNUT RICE PUDDING** with **PERSIMMON**
Claudia's Uncle Bruno and Aunt Betty live just a mile or two from where she grew up. Betty is from Denmark and makes amazing desserts. When I was trying to perfect a rice pudding, I asked Claudia whether her aunt had a recipe. She told me about the chestnut rice pudding that Betty makes every year for the holidays. I played with the recipe a little and added some candied chestnuts to make it more special. The trick is to fold in some softly whipped cream to keep the pudding light and fluffy.
MAKES 4 SERVINGS
Candied Chestnuts and Puree:
10 ounces (283 g) peeled chestnuts, thawed if frozen, divided
2 cups (400 g) granulated sugar, divided
Rice Pudding:
1 teaspoon (5 ml) grapeseed oil or canola oil
1 cup (200 g) Arborio or other risotto rice
¼ cup (60 ml) sweet white dessert wine
4 cups (1 L) whole milk
2 large eggs, beaten
1¼ cups (250 g) granulated sugar
Pinch of salt
½ cup (120 ml) heavy cream
To Serve:
1 ripe persimmon (any type), sliced into half-moon shapes
**For the candied chestnuts and puree:** Place 4 ounces (113 g) of the peeled chestnuts in a small saucepan. Cover with water and bring to a boil over high heat. Boil until the chestnuts are very tender but keep their shape, 10 to 15 minutes. Drain.
In another saucepan over medium-high heat, combine 1 cup (200 g) of the sugar and 1 cup (235 ml) of water. Simmer until the mixture thickens slightly but does not change color and reaches 223°F (106°C) on a candy thermometer, 5 to 7 minutes. Remove from the heat and stir in the boiled chestnuts. Let stand for 2 to 3 days in the refrigerator so the chestnuts can soak up the syrup.
Meanwhile, combine the remaining 6 ounces (170 g) of peeled chestnuts and remaining 1 cup (200 g) of sugar in a medium saucepan. Add just enough water to barely cover the chestnuts. The nuts should still poke through the top of the liquid. Bring to a simmer over medium heat, and cook, covered, until the chestnuts are tender, about 10 minutes. Strain and reserve the cooking liquid. Puree the chestnuts in a food processor or blender, adding just enough cooking liquid so that the mixture can be pureed. It will be very thick, like peanut butter. Set aside.
**For the pudding:** Heat the oil in a large saucepan over medium heat. Add the rice, and cook until lightly toasted, 2 to 3 minutes, stirring now and then. Add the wine without stirring, and cook until it is almost evaporated. Add the milk in four 1-cup (235 ml) additions, allowing the milk to be absorbed between each addition. Stir only to break up the rice grains and prevent a skin from forming on the surface. It should not be stirred like risotto. The rice will take 15 to 20 minutes to cook and should be tender but not mushy. When tender, add the eggs, sugar, salt, and chestnut puree, stirring gently to break up any clumps of rice. Return the pan to low heat and stir gently to cook the eggs without scrambling them, 2 to 3 minutes. Refrigerate until cold, at least 1 hour or up to 3 days.
Whip the cream in a cold bowl with cold beaters on medium-high speed until softly whipped (the mixture should form loose, soft peaks when the beaters are lifted), 3 to 4 minutes. Fold the whipped cream into the pudding.
**To Serve:** Spoon the pudding into coffee cups or dessert bowls and top with the sliced persimmons. Drain and chop the candied chestnuts and scatter them onto the pudding, along with a generous drizzle of the candied chestnut syrup.
**RHUBARB TARTELLETTE** with **ITALIAN MERINGUE**
We made a couple of rhubarb desserts at Frosio Ristorante, but none of them captured the taste of rhubarb from my youth. I grew up in Nashua, New Hampshire, right next to my mémère (my grandmother). She always had mounds of rhubarb growing in the garden. She would put stalks of raw rhubarb in little paper cups filled halfway with sugar and give them to the kids. We'd dip the raw rhubarb in the sugar and munch away. My sister and cousins didn't love it, but I couldn't get enough. It was like sweet-tart Fun Dip candy. This dessert captures some of that raw rhubarb experience. It's like a lemon meringue pie but with rhubarb marmalade as the filling and pieces of raw rhubarb dipped in lemon syrup served on top. At Osteria, I serve individual rhubarb tartellette, but here I've done it in a single tart pan to make it easier. If you want to serve individual tarts, double the recipe and use ten individual tart pans, each about 4 inches (10 cm) in diameter.
MAKES 10 SERVINGS
Tart Dough:
8 ounces (2 sticks/227 g) unsalted butter, at room temperature
1 cup (120 g) confectioners' sugar
½ vanilla bean, split and scraped
Zest of ½ lemon
7 large egg yolks, at room temperature
2½ cups (343 g) pastry flour
Rhubarb Marmalade:
1 pound (450 g) rhubarb, chopped
1 tablespoon (15 ml) freshly squeezed lemon juice
¼ teaspoon (1 g) unsalted butter
1 cup plus 2 tablespoons (225 g) granulated sugar, divided
1¾ teaspoons (8.25 g) powdered pectin
Italian Meringue:
2¼ cups (450 g) granulated sugar
1 cup (235 ml) egg whites (from about 5 large eggs)
Pinch of salt
1 vanilla bean, split and scraped
Rhubarb Topping:
4 ounces (113 g) rhubarb, thinly sliced (1 cup)
¼ cup (60 ml) Candied Lemon Peels (page 288)
**For the tart dough:** Combine the butter, sugar, vanilla, and lemon zest in the bowl of a stand mixer fitted with the paddle attachment. Beat on medium speed until light, about 3 minutes. Gradually add the egg yolks, one by one, allowing each yolk to be incorporated before adding the next. Change the speed to low and slowly add the flour, mixing only until incorporated. The dough will be sticky and eggy yellow in color.
Turn the dough out onto a sheet of plastic wrap, cover with another sheet of plastic wrap, and quickly press into a disk with a rolling pin. It will be delicate, so work quickly to keep it cold. Wrap the dough in the plastic wrap and refrigerate for at least 20 minutes or up to 2 days.
Roll the dough between several sheets of overlapping plastic wrap or parchment paper to a 13-inch (33-cm)-diameter circle no thicker than ¼ inch (6 mm). Set a 10-inch (25-cm) tart pan with a removable bottom on a baking sheet. Remove the top sheet from the dough, then gently invert the dough over the tart pan. Gently fit the dough into the pan so it reaches the edges and comes up the sides, easing the dough into place and stretching it as little as possible. Trim the dough by rolling the rolling pin over the edges of the pan so that the dough sits flush with the top of the pan. Cover loosely and refrigerate for 20 minutes.
Preheat the oven to 350°F (175°C). Line the dough with parchment paper or foil, leaving some overhanging to use as handles. Pour dried beans or pie weights onto the parchment or foil to keep the dough from puffing during baking. Bake until the edges of the dough are lightly browned, 12 to 15 minutes. Remove the parchment or foil and beans or weights and continue baking the crust until evenly golden, another 12 to 15 minutes or so. Remove from the oven and let cool.
**For the marmalade:** Mix the rhubarb with the lemon juice, butter, and all but 2 tablespoons (25 g) of the sugar in a nonreactive saucepan. Mix the remaining sugar with the pectin in a small bowl. Let the rhubarb stand at room temperature for 2 hours.
Bring the mixture to a boil over medium-high heat, and cook until the rhubarb starts to fall apart, about 5 minutes. When the rhubarb is fall-apart tender, whisk in the sugar and pectin mixture. Lower the heat slightly and simmer until the mixture coats the back of a spoon and the marmalade will set when cooled, 3 to 4 minutes.
Let the marmalade cool and then pour it into the baked tart shell, spreading it in an even layer. Refrigerate until set, about 30 minutes or up to 4 hours.
**For the meringue:** Combine the sugar and ½ cup (120 ml) of water in a small saucepan, and cook over medium heat until the mixture reaches 240°F (116°C) on a candy thermometer, 8 to 10 minutes. Meanwhile, combine the egg whites, salt, and vanilla in the bowl of a stand mixer fitted with the whisk attachment. Whip the egg whites on medium speed until soft peaks form when the beaters are lifted. Change to high speed and, with the machine running, slowly pour the sugar syrup into the egg whites. Whip until the mixture cools down to room temperature (you'll feel the sides of the bowl go from hot to lukewarm), 3 to 5 minutes. The meringue will be thick and glossy. Scrape the mixture into a pastry bag fitted with the star tip, or just cover the bowl and refrigerate for up to 4 hours.
**For the rhubarb topping:** Combine the rhubarb and the candied lemon peels in a small bowl. Toss to coat.
If using a pastry bag for the meringue, pipe the meringue in small dollops or a decorative pattern over the tart. Or spoon the meringue over the tart, and then press and lift the back of the spoon repeatedly on the meringue to make small curls on the surface. Brown the meringue under a broiler, 4 to 6 inches (10 to 15 cm) from the heat, or with a kitchen torch, until golden brown all over. Remove the ring of the tart pan by pressing the tart up through the bottom. Cut the tart into wedges and serve each wedge with a spoonful of rhubarb and candied lemon peels.
* * *
CENE AND FIOBBIO
FARM TO TABLE. . . FIFTY FEET
OUR TOES MET UNDER THE TABLE. MATTEO AND CLAUDIA'S FRIENDS WERE DRINKING AND LAUGHING AT THE OTHER END OF THE TABLE. THEY SEEMED MILES AWAY.
We were at the Tucans, an Irish bar just down the street from Piazza Vecchia, the main square of Bergamo's medieval-looking old city on the hill. Claudia and I could barely communicate because I didn't know much Italian and she didn't know English, but I couldn't take my eyes off of her olive brown skin. We sipped Scotch across from each other and slipped off our shoes. She seemed to like me.
A few days later, Claudia left for a two-week vacation to Ireland with her friend Livia. We texted each other the whole time she was away. One night, she texted me: " _Sogni d'oro._ " The translation was "Dreams of gold," but it didn't seem quite right. The next day at work, I asked everyone in the Frosio kitchen what it meant. They made fun of me, but we figured out that it means "sweet dreams."
Claudia got home at the beginning of May, and I couldn't wait to see her. We made plans to get drinks at O'Dea's, another Irish bar in Bergamo. She picked me up around ten p.m. in her red Mini Cooper. Even with the language barrier, we got our points across, talking about our families, friends, America, and Italy. We left the bar around three a.m., and I drove her Mini back to Frosio Ristorante, where I lived upstairs. The engine quieted down. We opened the doors and stepped outside. I walked around to her side to say good night and noticed the moon in the sky. I leaned toward her, and we kissed.
After that night, we spent a lot of time together. She lived about a half hour from the restaurant on a hilltop called Monte Bò in the village of Cene. When I first rode there on a borrowed motorcycle, it took forever to get to the top. She lived at her mother's place, a beautiful yellow house with terra-cotta roof tiles, perched on the hillside overlooking a lush, green mountain range. Claudia gave me a brief tour. The spring gardens were just starting to bloom. Outside the kitchen door, a huge rosemary bush grew near some lavender, sage, and oregano plants. The back steps led down under the pergola, and kiwifruit hung from the top of the pergola. Claudia told me that persimmons and pomegranates grew there in the fall. Their property stretched down the mountainside and was dotted with fruit trees, including figs, plums, and two kinds of cherries—amarena and bing. Wild asparagus were coming up near the edge of the forest. They had walnut trees and kept chickens.
As a chef, I was blown away. All this great food, right in their backyard! A grove of chestnut trees sloped down the hill, and when I met Claudia's brother, Alex, he told me that wild boar crawled up the hillside to eat the chestnuts in the fall. He would hang out of the window with his hunting dog, Dick, and then shoot them. Alex would butcher the boar and their mother, Pina, would braise it with rosemary, tomatoes, onions, red wine, olive oil, and butter and serve the ragù over polenta.
I learned that Pina is quite the cook. Her father, Vittorio, was a butcher with a _salumeria_ in Bergamo. Her father-in-law, Giorgio, was a cattle farmer and cheese maker. So she always had fresh meat, cheese, and produce at her fingertips. When Claudia was a kid, they had a donkey named Casimiro. After the donkey died, Pina braised it with juniper, cinnamon, cloves, and black pepper, and Claudia ate it.
When I heard that, I really started to fall for her. This was a family of food lovers. Cooks! Claudia and I spent the next several weeks sharing all of our favorite things. I showed her my favorite gelato place, Paradiso, in Alme; and she turned me on to the incredible licorice gelato at Gelateria Peccati de Gola in nearby Albino.
It didn't take long for the rest of her family to get curious about "the American boy." The first time I met them was at the end of May at Claudia's grandmother's house in Fiobbio, just a few kilometers down the hill from Cene. Everyone was gathering to celebrate the baptism of Francesca, her uncle Vittorino's new baby. When I walked in with Claudia, I heard " _L'americano è qui. Ciao. E benarrivato!_ " which means, "The American is here. Hello. And welcome!" Except for her Aunt Betty, no one spoke English. They spoke Bergamascan, a dialect that barely sounds like Italian. We sat around for the next couple of hours, stumbling through various conversations about my work, family, America, and the younger President Bush (no one liked him). Without Aunt Betty translating, I would have been completely lost. I found out that the Fiobbio house was her grandmother Anna's. Anna had two daughters, Pina and Irene, and four sons, Bruno, Vittorino, Nunzio, and Piero. Most of them had children, the newest one being Vittorino's daughter, Francesca.
Her family prepared a giant meal, and we started eating around noon. Piero owned a _gastronomia_ (gourmet shop) and brought shrimp in _salsa rossa_ , octopus and potato salad, prosciutto cotto mousse, liver pâté in gelatin, little savory puff pastry tarts, and a mountain of salumi. Her aunt Irene made lasagne with salmon. Nonna Anna made a veal roast with vegetables from the garden. She also made this incredibly rich-tasting, mahogany-colored rabbit served on the bone over polenta. They raised the rabbits out back. To make the polenta, Nonna lifted off three metal rings from the flat-top burner of the wood stove, using a little tool that hung on the edge. That brought the polenta pot closer to the flames, which made the polenta burn a little on the sides, giving it a smoky aroma. Nonna liked that I was interested in how to make the food.
Toward the end of the meal, six different cheeses appeared on the table, including formagella, Gorgonzola, and casolet. By five o'clock, when I had to get back to work at the restaurant, they had just started dessert. There was sponge cake with fruit and whipped cream, cookies, _piccola pasticceria_ (tiny pastries), and coffee made in the Italian _moka_ pot. There was a ton of food. I was in heaven!
I learned that back in the day, Claudia's family was one of the first in Fiobbio to own a car. Almost everyone in the family ran a business, and they were very proud of Claudia, a young woman running her own successful video rental store in town. They were nervous that an American was going to come and break her heart, or maybe even take her away from Italy and ruin her life. But I was less interested in going back to the States than I was in staying here as long as I could. When I thought about it, I'd only saved enough money to get me through the next couple of months. But I could feel my life changing. I needed to stay here. I was starting to feel more at home than I had felt for years.
Zucchini Flowers Stuffed with Ricotta and Tuna
•
Pear and Treviso Salad with Taleggio Dressing
•
Crespelle della Mamma
•
Apricot and Chanterelle Salad with Parmesan Crisps
•
Rabbit Alla Casalinga
•
Wild Boar Braised with Moretti Beer
•
Scarpinocc
•
Whole Roasted Duck with Muscat Grapes
•
Fig Strudel
•
Claudia's Limoncello Tiramisu
•
Pina's Limoncello
**ZUCCHINI FLOWERS STUFFED** with **RICOTTA** and **TUNA**
In late spring 2004, I spent more and more of my time off at Claudia's house in Cene, eating, cooking, and getting to know the family. Her mother had an amazing garden filled with zucchini flowers. Those orange and green trumpets bloomed right up until the summer heat started to hit. At some point, Pina started stuffing the blossoms with tuna and ricotta. The filling was a mixture she'd been using for years; she typically breaded and fried it like meatballs. Then she thought, "Why not stuff it into all these zucchini blossoms?" She baked the stuffed zucchini blossoms with little tomatoes from her garden and it became this famous dish in town. Everyone wanted the recipe because it was so easy and so good.
MAKES 4 TO 6 SERVINGS
1 (8 ounce/227 g) can Italian tuna in olive oil
12 zucchini blossoms
12 ounces (340 g) fresh whole-milk ricotta cheese (1½ cups)
2 tablespoons (7 g) chopped fresh mint
2 tablespoons (30 ml) plain, dry breadcrumbs
Salt and freshly ground black pepper
6 cherry or grape tomatoes, halved lengthwise
4 baby zucchini, sliced
Extra-virgin olive oil, for drizzling
Preheat the oven to 350°F (175°C).
Drain the oil from the tuna into a small bowl. Brush any dirt from the zucchini blossoms with a paper towel, but don't wash the blossoms or they'll get soggy. Gently twist and pull out the stamens from the centers of the blossoms, using tweezers, if necessary.
Add the tuna, ricotta cheese, and mint to the breadcrumbs, stirring until combined; taste and season with salt and pepper. Spoon the filling into a resealable plastic bag (the filling can be refrigerated at this point for up to 2 days). Cut off a corner of the bag and pipe the filling into the zucchini blossoms, leaving some room for the blossom to close at the end. Arrange the stuffed blossoms in a 2-quart (2-L) shallow baking dish or on a baking sheet, and top each blossom with a tomato half, cut-side down. Arrange the sliced baby zucchini around the edge of the baking dish.
Bake until the filling is set, 12 to 15 minutes. If the tomatoes are still firm, run the dish under the broiler until they wilt a little. Drizzle with a little olive oil and serve.
**PEAR** and **TREVISO SALAD** with **TALEGGIO DRESSING**
In the fall, you find two things in every home in Bergamo: bitter greens and Taleggio cheese. Claudia made the best salads with bitter greens from their garden, and there was always a big piece of Taleggio sitting on the table. One afternoon I came over for lunch during a break from the restaurant and made a warm dressing with the Taleggio. I melted the cheese and pureed it with milk, sherry vinegar, olive oil, and an egg yolk. With the pears and radicchio, you get sweet, salty, sour, and bitter flavors in your mouth all at once.
MAKES 4 SERVINGS
Pear and Treviso Salad:
12 ounces (340 g) Treviso radicchio (1 head)
4 ounces (113 g) Belgian endive (1 large head)
1 tablespoon (15 ml) sherry vinegar
3 tablespoons (45 ml) extra-virgin olive oil, plus more for drizzling
Salt
1 Bartlett pear, peeled, seeded and finely chopped
1 tablespoon (4 g) chopped mixed fresh herbs (parsley, rosemary, and thyme)
Taleggio Dressing:
½ cup (120 ml) whole milk
2 ounces (56 g) Taleggio cheese, grated (½ cup)
1 large egg yolk
½ cup (120 ml) extra-virgin olive oil
1 to 2 teaspoons (5 to 10 ml) sherry vinegar
Salt and freshly ground black pepper
**For the pear and Treviso salad:** Cut the Treviso lengthwise into quarters and the Belgian endive lengthwise in half. Then cut both crosswise on a diagonal, leaving the pieces pretty big (1 to 2 inches/2.5 to 5 cm long), and place in a large bowl.
Put the sherry vinegar in a small bowl and whisk in the 3 tablespoons (45 ml) of oil until blended. Season with salt to taste. Drizzle about half of the vinaigrette over the greens and toss until coated. Add the pear and mixed herbs to the remaining vinaigrette and toss to coat.
**For the Taleggio dressing:** Put the milk in a small saucepan and bring to a boil over high heat. Remove from the heat and whisk in the Taleggio until it melts and incorporates. Pour the mixture into a blender and blend in the egg yolk, then slowly drizzle in the oil and 1 teaspoon (5 ml) of the sherry vinegar. Taste and season with additional sherry vinegar, salt, and black pepper, as needed.
Divide the Treviso mixture among plates and drizzle with a generous amount of the Taleggio dressing, about 2 to 3 tablespoons (30 to 45 ml) per plate. Drizzle with some olive oil, 1 to 2 teaspoons (5 to 10 ml) per plate, then scatter the pears over the salad and serve immediately.
**Note**
**The dressing will keep for about 3 days in the refrigerator. Return to room temperature and reblend before using. Look for Treviso radicchio, which has a long bullet-shaped head like Belgian endive. It's larger than the common round Chioggia radicchio found in most North American markets. Either radicchio will work fine here, but if you're using Chioggia, you might need two heads instead of one.**
**CRESPELLE DELLA MAMMA**
Pina usually stuffs crêpes with marmalade made from figs, plums, cherries, or other fruit harvested from around the house. But one year, her family foraged a ton of porcini mushrooms and dried them to make them last through the winter. That Christmas, she came up with these savory _crespelle_ stuffed with ricotta, fontina, ham, and spinach. She folded the crêpes around the filling like little Christmas presents, layered them in a baking dish, and topped them with two sauces: béchamel and porcini tomato. I loved the dish so much that I now have it on the menu at Alla Spina in Philadelphia. The recipe here makes enough for a big family, but if you want less, cut the amounts in half.
MAKES 10 SERVINGS
Crêpes:
2 cups (475 ml) whole milk
2 large eggs
1 tablespoon (15 ml) melted unsalted butter
2 cups (250 g) _tipo_ 00 flour (see page 277) or all-purpose flour
Salt and freshly ground black pepper
Oil, as needed
Porcini Sauce:
1½ ounces (43 g) dried porcini mushrooms (1 cup)
1 cup (235 ml) hot water
¼ cup (60 ml) olive oil
1 garlic clove
2 quarts (2 L) canned plum tomatoes, preferably San Marzano
Salt and freshly ground black pepper
Crêpe Filling:
2 pounds (1 kg) fresh whole-milk ricotta cheese (1 quart/1 L)
3½ ounces (100 g) Parmesan cheese, grated (1 cup)
8 ounces (227 g) fontina cheese, diced (2 cups)
1 cup (235 ml) cooked chopped spinach
2 teaspoons (4.5 g) grated nutmeg
Salt and freshly ground black pepper
2 quarts (2 kg) Béchamel Sauce (page 281)
1½ pounds (680 g) thinly sliced Prosciutto Cotto (page 242) or other cooked ham, about 20 slices
3½ ounces (100 g) Parmesan cheese, grated (1 cup)
**For the crêpes:** Whisk together the milk, eggs, and butter in a small bowl. Put the flour in a large bowl and slowly whisk the milk mixture into the flour. Season with salt and pepper and strain to remove any lumps of flour. Let rest at room temperature for 1 hour or in the refrigerator up to 4 hours.
Dab a paper towel with oil, wipe a 6-inch (15-cm) nonstick pan with it, and put the pan over medium heat. Briefly whisk the batter to wake it up. Pour about ¼ cup (60 ml) of batter into the center of the pan and quickly swirl the batter by tilting the pan in large circular motions, spreading the batter to the edges of the pan to create an even circle. Cook until the top is dry but beaded with sweat, about 30 seconds. Flip the crêpe, and cook the other side for 10 to 15 seconds. You should have about twenty 6-inch/15-cm crêpes.
**For the porcini sauce:** Soak the porcini in the hot water until softened, about 15 minutes. Pluck the mushrooms from the soaking liquid and chop finely. Reserve the soaking liquid. Heat the oil in a medium saucepan over medium heat. When hot, add the garlic, and cook until lightly browned, 2 to 3 minutes. Add the chopped porcini and sauté for 4 to 5 minutes. Crush the tomatoes by hand, discarding the cores and adding the tomato flesh and juices to the pan as you work. Season with salt and pepper and add half of the reserved porcini soaking liquid. Cook over low heat until you have a nice thick tomato sauce, 25 to 30 minutes. If the tomato sauce gets too thick, thin it with a bit more of the reserved porcini soaking liquid.
**For the crêpe filling:** Combine the ricotta, Parmesan, fontina, spinach, and nutmeg in a medium bowl. Season with salt and pepper.
**To assemble the dish,** preheat the oven to 375°F (190°C) and spread a little béchamel over the bottom of a 3-quart (3-L) baking dish. Place one slice of ham on a crêpe and spread a heaping tablespoon of ricotta filling over the ham. Spoon on 1 tablespoon (15 ml) of béchamel and fold up the crêpe like a package, folding in two sides first, then folding over the other two sides to enclose the filling. Place the filled crêpe in the prepared baking dish and top with a spoonful of béchamel. Repeat with the remaining crêpes, ham, filling, and béchamel, layering them in the baking dish. Pour in enough porcini sauce to come halfway up the baking dish. Sprinkle with Parmesan and bake until hot and bubbly, about 45 minutes.
**APRICOT** and **CHANTERELLE SALAD** with **PARMESAN CRISPS**
Both apricot trees and chanterelles grow in the mountains near Claudia's house. One afternoon, I walked out her back door to pick some apricots from the tree and stumbled upon some chanterelles growing in the chestnut forests down the hill. The only natural thing to do was to make a salad. Americans don't always put mushrooms and fruit together. But if you think about it, they make a great combination. Lightly cooked chanterelles have some apricot aromas to them, and gently cooked apricots have a texture similar to chanterelles. What grows together goes together.
MAKES 4 SERVINGS
Parmesan Crisps:
3½ ounces (100 g) Parmesan cheese, grated (1 cup)
Salad:
4 large apricots
¼ cup (60 ml) grapeseed oil
Salt and freshly ground black pepper
1 cup (235 ml) olive oil, divided
4 garlic cloves, smashed
20 sprigs fresh thyme
3 ounces (85 g) chanterelle mushrooms (1 cup)
3 tablespoons (45 ml) sherry vinegar
6 ounces (170 g) mixed salad greens
**For the Parmesan crisps:** Preheat the oven to 375°F (190°C). Line a baking sheet with a silicone baking mat or parchment paper. For each crisp, spoon a heaping tablespoon of grated Parmesan into a mound on the prepared baking sheet, leaving 4 to 6 inches (10 to 15 cm) between each mound. Spread each mound to a 3- or 4-inch (7.5- to 10-cm) diameter. Bake until the cheese melts, loses its moisture, and browns slightly, about 6 minutes. Remove from the oven and let cool.
**For the salad:** Heat a grill to medium-high.
Halve and pit the apricots. Toss the apricots in a bowl with the grapeseed oil to coat all over. Season lightly with salt and pepper. Coat the grill grate with oil, then lay the apricots skin-side down on the grate, grilling just until marked with grill stripes but not mushy, only a minute or two per side. Return the apricots to the bowl.
Heat ¼ cup (60 ml) of the olive oil over medium-low heat in a sauté pan. Add the garlic, thyme, and mushrooms. Cook slowly until the mushrooms soften a bit but aren't limp, 5 to 7 minutes. You don't want any browning on the mushrooms. Pluck the mushrooms from the pan with tongs and add to the bowl with the apricots.
Pour the sherry vinegar into a medium bowl and whisk in the remaining ¾ cup (175 ml) of olive oil in a slow, steady stream. Add the greens and toss to coat. Season with salt and pepper.
To plate, divide the dressed greens among plates and spoon the mushrooms and apricots over the top. Serve two to three Parmesan crisps per salad.
**RABBIT ALLA CASALINGA**
I first tasted this dish in Fiobbio when Claudia's family was celebrating the baptism of her niece, Francesca. It's an old family dish from her great-grandmother Virginia Zanga, who lived in nearby Villa di Serio. Everyone on her side of the family was a farmer. They raised rabbits on hay and grass. They made their own butter, cured their own pancetta, and got their corn milled for polenta by the miller in town. They used what they had to make dinner. The amazing thing here is the technique: The rabbit is cut into pieces, browned in a pan, and deglazed with white wine, and then with plain water for about forty-five minutes, which creates a dark mahogany brown glaze on the rabbit and an intense-tasting sauce. The meat and sauce are served simply over polenta.
MAKES 6 SERVINGS
1 whole rabbit (5 to 7 pounds/2.25 to 3 kg), dressed
Salt and freshly ground black pepper
8 ounces (2 sticks/227 g) unsalted butter
¼ cup (60 ml) olive oil
8 ounces (227 g) pancetta, diced
4 rosemary sprigs
10 sage sprigs
1 cup (235 ml) dry white wine
9 cups (2.25 L) Polenta (page 281)
Using a cleaver, cut up the rabbit: Remove and discard the innards and excess fat deposits. Put the rabbit on its back and remove the hind legs and forelegs by driving your cleaver right through the primary joints. Keep the forelegs whole. Cut the hind legs into two pieces each by driving your cleaver through the knee joints. Cut through the breastbone, then keep your knife against bone and cut down around the rib bones, separating the flesh from the bones. Be sure to keep the thin flap of meat attached to the loin that runs up against the ribs. Remove and discard the ribs by cutting through the ribs at the backbone. Cut the rabbit crosswise through the backbone into six pieces. You should have a total of twelve pieces. Season the pieces all over with salt and pepper.
If you have one wide braising pan big enough to hold all the rabbit pieces in a single layer, use that. Otherwise, divide the butter and oil between two braising pans and place over medium heat. When hot, divide the pancetta, rosemary, and sage sprigs between the pans, and cook until the pancetta is browned, 4 to 6 minutes. Divide the rabbit pieces between the pans, laying them in a single layer, and brown them on all sides, 15 to 20 minutes, turning as needed. Divide the wine between the pans, scraping the pan bottoms to loosen the browned-on bits, and simmer for 8 to 10 minutes, or until the wine evaporates. Add just enough water to each pan to come one-quarter of the way up the meat, and simmer until the water evaporates and the rabbit continues to brown, turning the meat once or twice. As the water evaporates, you'll see the bubbles in the pan go from large to small. When the bubbles are small and fizzy, you'll start to see smoke (from the fat) rather than steam (from the water) rising from the pan. That's the right time to add more water. Again, add just enough water to come one-quarter of the way up the meat, and simmer until the water nearly evaporates, turning the meat now and then. Repeat the process of adding water, evaporating it, and turning the rabbit until the meat is tender and dark mahogany brown, 45 to 55 minutes total. This process of continual deglazing helps to create a nice dark crust on the rabbit and a richer sauce. Season the sauce with salt and pepper.
Spoon the polenta onto warmed plates. Divide the rabbit pieces among the plates, placing them on top of the polenta. Spoon the sauce over the top.
**WILD BOAR BRAISED** with **MORETTI BEER**
In Bergamo, hunters get one of three licenses: a bird license, a small-game license, or a big-game license. Claudia's father, Mario, loved to hunt birds like grouse, pheasant, partridge, and pigeon. But her brother, Alex, prefers big game, such as _capriolo_ (roe deer) and wild boar. Pina usually braises game meat in red wine to stand up to the strong flavor, but one day she wanted to make the boar taste lighter, so she used beer. Moretti beer is what they had in the house. Any lager-style beer works well with the milder-tasting boar you find in America.
MAKES 4 TO 6 SERVINGS
5 pounds (2.25 kg) wild boar shoulder
Salt and freshly ground black pepper
¼ cup (60 ml) olive oil
About 1 cup (125 g) _tipo_ 00 flour (see page 277) or all-purpose flour, for dredging
4 medium-size carrots, chopped (2 cups/145 g)
1 large yellow onion, chopped (2 cups/320 g)
4 medium-size ribs celery, chopped (2 cups/202 g)
1 cup (240 g) canned plum tomatoes, preferably San Marzano, cored and crushed by hand
1 bottle (12 ounces/355 ml) Moretti or other lager-style beer
1 sachet of 1 bay leaf, 1 garlic clove, 1 sprig rosemary, 5 juniper berries, 5 whole cloves, and 5 peppercorns (see page 277)
9 cups (2.25 L) Polenta (page 281)
Preheat the oven to 350°F (175°C). Season the boar all over with salt and pepper. Heat the oil in a Dutch oven over medium-high heat. Coat the boar with flour, shaking off any excess.
Add the boar to the hot pan and sear until well browned on all sides, 10 to 15 minutes total. Transfer to a plate.
Add the carrots, onion, and celery to the pan, and cook until lightly browned, 5 to 8 minutes. Add the tomatoes, and cook for 5 minutes. Add the boar back to the pan, along with the beer, the sachet, and enough water to come three-quarters of the way up the meat. Bring to a boil over high heat, cover, and braise in the oven until the meat is fall-apart tender, 3 to 4 hours. Let the boar cool in the liquid until it is cool enough to handle.
Transfer the boar to a bowl. Strain the braising liquid, reserving the liquid and vegetables separately. Pass the vegetables through a food mill, or puree them in a blender using short pulses, to create a rustic puree. Combine the reserved liquid, pureed vegetables, and boar back in the pan and simmer over medium heat until the liquid reduces in volume and thickens to the texture of gravy, 10 to 15 minutes. Season with salt and pepper, and then cut or shred the meat into four to six portions. Divide the polenta among plates and top with the boar and gravy.
**SCARPINOCC**
In Val Seriana, near the Serio River, the tiny town of Parre sits beneath the western slope of Monte Bò, where Claudia grew up. Parre is famous for its scarpinocc (scar-pee-NOACH), a poor man's ravioli filled with leftover bread and cheese, made during wartime. The bread is soaked in water, milk, or broth and mixed with Grana Padano or Parmesan cheese. Then the ravioli are shaped like little pointed shoes and served simply with butter and sage ( _scarpinocc_ means "rustic shoes"). When Parre holds its annual _sagra_ (food festival) in August, the rich aromas of melted butter and fresh sage fill the entire town.
MAKES 4 TO 6 SERVINGS
Filling and Pasta:
1¾ ounces (50 g) Grana Padano or Parmesan cheese, grated (½ cup)
3 ounces (85 g) white bread, crust removed, cubed (about 4 slices sandwich bread)
1 small garlic clove, minced
¼ cup (15 g) chopped fresh flat-leaf parsley
2 teaspoons (9 g) unsalted butter, well softened
1 large egg
1 cup (235 ml) whole milk
¼ teaspoon (0.5 g) ground coriander
⅛ teaspoon (0.25 g) ground cloves
⅛ teaspoon (0.25 g) freshly grated nutmeg
Salt and freshly ground black pepper
8 ounces (227 g) Egg Pasta Dough (page 282), rolled into 2 sheets, each about 1/32 inch (0.8 mm) thick
Sauce:
½ cup (120 ml) Walnut Pesto (page 284)
4 teaspoons (20 ml) olive oil
1 ounce (28 g) Parmesan cheese, grated (¼ cup), for garnish
¼ cup (29 g) chopped toasted walnuts, for garnish
**For the filling and pasta:** Combine the Parmesan, bread, garlic, parsley, butter, egg, milk, coriander, cloves, and nutmeg in a large mixing bowl. Season with salt and pepper, then mix with a wooden spoon. Let stand for 15 minutes so the bread can absorb the liquid.
Lay a pasta sheet on a lightly floured work surface. Spoon ½- to ¾-inch (1.25- to 2-cm)-diameter balls of filling at 1½-inch (3.75-cm) intervals down the center of the sheet. Spray lightly with water, then pick up the long edge and fold it over the filling to meet the long edge on the other side. Gently press down the dough around each ball of filling to seal. Using a 3-inch (7.5-cm) round cutter, cut out a series of half-moons and discard the scraps of pasta. Turn each half-moon so the filling rests on top of the curved edge of the pasta (see the illustration). Slightly pinch the pasta on either side of the filling to make "wings." Use your finger to flatten the filling slightly so the half-moon will stand up easily. The finished pasta should resemble a rustic shoe with the "wings" as the toe and heel of the shoe. Repeat with the remaining pasta dough and filling. Transfer the scarpinocc to a baking sheet lined with floured waxed paper, cover, and freeze for at least 1 hour. When frozen, transfer to a resealable plastic bag, seal, and freeze for up to 2 weeks.
Bring a large pot of salted water to a boil. Drop in the pasta, in batches, if necessary, to prevent overcrowding; quickly return the water to a boil, and cook until barely tender, 4 to 5 minutes.
**For the sauce:** Put the pesto and olive oil into a sauté pan over medium-low heat. Add a ladle of pasta water and simmer until the sauce is loose and creamy.
Drain the pasta and add it to the pan, gently swirling until the pasta is coated with sauce. Divide among plates and garnish with the Parmesan and chopped walnuts. Serve immediately.
**WHOLE ROASTED DUCK** with **MUSCAT GRAPES**
Claudia's nonna, Anna, kept a huge garden and raised chickens and ducks in the backyard. Anna would stuff the duck with garden vegetables and herbs and roast it in her wood oven with potatoes. Her son, Bruno, refined the recipe by replacing the potatoes with moscato grapes picked from the vines growing around their poultry coops. He peeled the grapes and added them to the roasting pan at the last minute so they would just melt in your mouth. When I first tasted the dish, Bruno joked, _"L'anatra stiamo mangiando è morto di un attacco di cuore quando vede il coltello,"_ which means, "The duck we are eating died of a heart attack when it saw the knife."
MAKES 4 SERVINGS
1 large duck, trimmed of excess fat (about 5 pounds/2.25 kg)
Salt and freshly ground black pepper
½ medium-size yellow onion, chopped (¾ cup/92 g)
1 medium-size carrot, chopped (½ cup/61 g)
1 medium-size rib celery, chopped (½ cup/51 g)
1 garlic clove, smashed
1 bay leaf
10 peppercorns
2 sprigs fresh thyme
2 sprigs fresh rosemary
2 tablespoons (30 ml) olive oil
6 leaves savoy cabbage, chopped into bite-size pieces
1 cup (235 ml) white wine
3 cups (750 ml) Duck Stock or Chicken Stock (page 279)
2 cups (185 g) peeled moscato grapes
Season the duck with salt and pepper inside and out. Mix together the onion, carrot, celery, garlic, bay leaf, peppercorns, thyme, and rosemary. Stuff the mixture into the bird and truss the bird with kitchen string to close the cavity and secure the legs.
Preheat the oven to 375°F (190°C). Heat the oil in a large roasting pan or Dutch oven over medium-high heat. Add the duck and sear until the skin is crisp and browned on all sides, 15 to 20 minutes total, turning a few times.
Turn the duck breast-side up and transfer the pan to the oven. Roast, uncovered, until the bird registers 155°F (68°C) when a thermometer is inserted into a thigh, about 1 hour. Remove the pan from the oven and transfer the duck to a cutting board. Pour off about half of the fat from the pan and reserve it for another use. Add the cut cabbage to the remaining fat and sauté over medium heat until tender, about 5 minutes. Add the wine and simmer until the liquid reduces in volume by about half, 5 to 6 minutes. Add the stock and the accumulated juices from the cutting board, and simmer until the liquid reduces in volume and thickens enough to coat the back of a spoon, 10 to 15 minutes. Stir in the peeled grapes, season with salt and pepper, and cook just until the grapes begin to wilt, 1 to 2 minutes.
Carve the duck into leg, thigh, and breast portions and serve with the cabbage and grapes, drizzling the sauce around the plate.
**Note**
**To truss the stuffed duck, pierce three to four 4-inch (10-cm)-long wooden skewers through the flaps of skin on each side of the cavity opening. Weave kitchen string around the skewers like a shoelace to lace the bird shut, tying it off at the top. Position the bird breast-side up with the legs facing away from you. Loop a long piece of kitchen string beneath the ends of the drumsticks, crossing the string to make an X. Pull the remaining string down, passing it beneath the thighs and pulling tight to pull the legs toward the tail. Continue pulling the string along the body toward the neck and pass it beneath the wings. Flip the bird over so the legs are now facing toward you and cross the string over the back between the two wings, pulling tight. Loop the string beneath the backbone, pull it tight, and then tie it off with a tight knot.**
**FIG STRUDEL**
Pina has five fig trees, and sometime in September, the fruit starts dripping with nectar. This strudel shows off the ripe figs because you just cut them in half, caramelize the cut sides in a pan, and lay them on the dough, which is then braided over the top. The dough recipe comes from my friend, Andrea Forcella, who owns Olfà pastry shop in Osio Sotto, about twenty minutes south of the old city in Bergamo. It's sort of like puff pastry but has more stretch, because you roll out the dough and let it rest several times, developing gluten and creating that light, chewy texture of classic Danish pastries. It doesn't take much hands-on time, but there is a lot of resting time, so it is a multi-day process. I like to warm up slices of the strudel in a buttered pan and serve them on a bed of toasted sliced almonds with a spoonful of Mascarpone Gelato (page 287). This recipe makes a pretty big strudel. But it keeps for several days and is so good it rarely sticks around that long.
MAKES ABOUT 16 SERVINGS
Danish Dough:
5½ cups (753 g) bread flour
⅔ cup (133 g) granulated sugar
¾ teaspoon (4.5 g) fine sea salt
2 large eggs
1 packed tablespoon (20 g) fresh yeast, or 2½ teaspoons (10 g) active dry yeast
5⅓ tablespoons (75 g) plus 1 pound (4 sticks/450 g) unsalted butter, at room temperature
Fig Filling:
⅓ packed cup (55 g) raisins, preferably both dark and golden
2 pounds (1 kg) fresh figs
4 ounces (1 stick/113 g) unsalted butter
1 packed cup (220 g) dark brown sugar
¾ teaspoon (2 g) ground cinnamon
⅛ teaspoon (0.75 g) fine sea salt
1 large egg
About 3 tablespoons (38 g) raw or turbinado sugar, for sprinkling
About 1 cup (235 ml) apricot jam, briefly warmed, for brushing
**For the Danish dough:** Mix the flour, sugar, salt, eggs, yeast, 5⅓ tablespoons (75 g) of the butter, and 1 cup plus 1 tablespoon (250 ml total) of water in a stand mixer fitted with the dough hook on low speed until the flour gets incorporated, about 2 minutes. Change to medium speed and mix until the dough is sticky and elastic, another 4 minutes. Turn the dough out into a buttered bowl, shaping it into a ball. Cover with a kitchen towel and let rise in a warm spot until doubled in size, about 1 hour.
Line a large rimmed baking sheet with parchment. Turn the dough out onto the prepared sheet. Cover and let rest in the refrigerator overnight.
Roll the remaining pound (450 g) of butter between sheets of plastic wrap to an even 13 x 9-inch (33 x 23-cm) rectangle. Roll the dough on a lightly floured work surface to an 18 x 13-inch (46 x 33-cm) rectangle (the same width but twice as long as the butter). Transfer the rolled butter onto half of the rolled dough: remove the top sheet of plastic, invert the butter onto the dough, and then remove the remaining plastic, scraping the butter off the plastic as necessary to create an even bed of butter on the dough. Cover with the other half of dough and pinch the edges firmly to seal. Roll the dough out again to a rough 18 x 13-inch (46 x 33-cm) rectangle. Fold the dough so that the two short edges meet in the center, and then fold the dough in half (this is called a four-fold or book fold). Cover and refrigerate until the dough and butter are evenly chilled, about 30 minutes or up to 2 hours.
Position the dough on a floured work surface and roll out the dough again to an 18 x 13-inch (46 x 33-cm) rectangle. Fold the dough over itself three times to make a three-fold (like folding a letter), starting at a short edge. Cover and let rest in the refrigerator again, at least 30 minutes or up to 2 hours. Repeat rolling out the dough and folding into a three-fold two more times, positioning the seam-side away from you each time, and resting the dough in the refrigerator between each turn. Cover and let the dough rest overnight in the refrigerator. (The completed dough can also be covered and refrigerated for 2 days before assembling and baking the strudel.)
The next day, roll the dough to an 18 x 8-inch (46 x 20-cm) rectangle about ¼ inch (6 mm) thick. Gently fold the dough into a three-fold, transfer it to a large baking sheet, and unfold it on the sheet. Chill until the filling is ready.
**For the fig filling:** Soak the raisins in warm water until softened, about 10 minutes. Drain, chop coarsely, and set aside.
Remove the fig stems and cut the figs in half lengthwise. Melt the butter in a large sauté pan over medium heat. When the butter is foamy and hot, add the figs, and cook until the cut sides are light golden brown, 4 to 5 minutes. Add the chopped raisins, brown sugar, cinnamon, and salt. Cook until the mixture becomes very thick and lightly caramelized, 5 to 6 minutes. Remove from the heat and let cool completely. (The fig filling can be made ahead and refrigerated for 2 days before assembling the strudel. Bring the filling to room temperature before using.)
Place the filling on the rolled sheet of dough in a line about 3 inches (7.5 cm) wide, leaving about 2 inches (5 cm) of exposed dough on either side. Use a pizza cutter or sharp knife to cut diagonal strips about 1 inch (2.5 cm) wide in the exposed dough on both sides of the filling; it will look sort of like a Christmas tree (see pages 82 and ). Beat the egg with 1 tablespoon (15 ml) of water. Braid the exposed strands of dough over the filling, bringing the opposite strands together and brushing the bottom strand with the egg wash before laying the opposite strand over it (this bonds the two strands together to create a seal).
Cover with a kitchen towel and let proof in a warm spot until almost doubled in size, about 1 hour.
Preheat the oven to 350°F (175°C). Brush the top and sides of the dough with the egg wash and sprinkle with the raw sugar. Bake until golden brown, 35 to 40 minutes. As soon as the strudel comes out of the oven, brush with the apricot jam.
FIG STRUDEL ASSEMBLY
**CLAUDIA'S LIMONCELLO TIRAMISU**
Tiramisu is like an Italian Tastykake (a beloved Philadelphia snack cake). You soak cookies in syrup and layer them with a creamy mascarpone filling. You can flavor the syrup and filling however you like. Coffee and chocolate is the most common combination, but Claudia always made tiramisu with fruit that grew in her backyard. Her cherry tiramisu was one of my favorites. Then she came up with this limoncello tiramisu made with Pina's Limoncello (page 85). It's my new favorite. Refreshing, rich, and ridiculously good.
MAKES 10 TO 12 SERVINGS
Mascarpone Mousse:
8 large eggs
1½ cups (300 g) granulated sugar, divided
2 pounds (1 kg) mascarpone (about 4¼ cups)
2 lemons
Limoncello-Soaked Ladyfingers:
¾ cup (150 g) granulated sugar
1½ cups (375 ml) Pina's Limoncello (opposite page) or other limoncello
1 (8-ounce/227-g) package ladyfingers, about 30 ladyfingers
**For the mascarpone mousse:** Separate the eggs, putting the yolks in a medium bowl and the whites in another bowl. Add 1 cup (200 g) of the sugar to the yolks and whip with an electric mixer on medium-high speed until thick and pale yellow in color, 2 to 3 minutes. Beat the mascarpone in a separate bowl with clean beaters on medium speed until softened. Add the whipped yolks and beat on medium speed until smooth. Grate the zest from the lemons and squeeze out ¼ cup (60 ml) of lemon juice. Stir the lemon zest and juice into the mascarpone mixture.
Whip the egg whites in a clean bowl with clean beaters on medium speed until frothy, 2 to 3 minutes. Add the remaining ½ cup (100 g) of sugar and whip on medium-high speed until the whites form medium-soft peaks when the beaters are lifted, another 2 to 3 minutes.
Fold the whipped whites into the mascarpone mixture to form a mousse.
**For the limoncello-soaked ladyfingers:** Combine the sugar and ¾ cup (175 ml) of water in a small saucepan over medium-high heat. Bring to a simmer and cook just until the sugar dissolves. Remove from the heat and set the pan in an ice bath to cool down the syrup. Stir in the limoncello.
Soak the ladyfingers in the limoncello syrup in batches for 20 seconds; the cookies should not be saturated all the way to the center or they will fall apart. As you work, lay the soaked ladyfingers in the bottom of a 2½-quart (2.5 L) baking dish, breaking up the cookies as necessary to make an even layer. Spread a layer of mousse over the ladyfingers. Continue making layers of soaked ladyfingers and mousse until the dish is filled, ending with a layer of mousse on top. Cover and refrigerate for at least 2 hours or up to 1 day.
**Note**
**The finished tiramisu can be refrigerated for up to 1 day before serving. It's ideal after just a few hours in the fridge, as the ladyfingers will continue to soak up liquid in the tiramisu and eventually become soggy.**
**PINA'S LIMONCELLO**
After they made wine from the grapes on their property, Pina's grandfather would make grappa from the stems, skins, and grape must. Grappa was the sipping liquor of Bergamo. You never saw limoncello because lemons didn't grow there. But in the mid-1990s, Pina went on vacation to the Amalfi coast and brought back giant lemons the size of grapefruits. These lemons have almost no juice, so the peels are used to make _mostarda_ (fruit relish), _canditi_ (candied fruit), and limoncello. One of her ex-boyfriends gave Pina this recipe for limoncello. She only makes it once or twice a year in twenty-liter batches, which takes about two hundred lemons. She keeps her limoncello in the freezer in tall, clear glass bottles with pieces of red cloth tied around the tops. (It doesn't actually freeze because of the alcohol.) You can use Eureka lemons (the most common grocery store variety), but keep in mind you'll only be using the peels. Squeeze all the leftover lemons and use the juice to make lemonade!
MAKES 2½ QUARTS (2.5 L)
20 Eureka lemons, 15 Sicilian lemons, or 10 Amalfitano lemons
1 quart (1 L) grain alcohol or 100-proof vodka
5 cups (1 kg) granulated sugar
Peel the lemons, using a vegetable peeler or large zester, taking care not to remove much of the bitter white membrane beneath the peel. Marinate the peels in the grain alcohol in a glass jug at room temperature for 2 weeks. Strain into a pitcher and reserve the peels.
Combine the sugar, 1½ quarts (1.5 L) of water, and the peels in a large saucepan. Bring to a simmer over medium heat, stirring just until the sugar dissolves, 5 to 8 minutes. Remove from the heat and let cool, and then strain out and discard the peels.
Let the syrup cool completely, then stir into the alcohol. Store in bottles in the freezer, sipping or using as needed.
* * *
CINQUE TERRE
SEX ON THE ITALIAN RIVIERA
CLAUDIA DROVE. I TOOK IN THE LANDSCAPE. TRAVELING FROM BERGAMO TO LIGURIA WAS LIKE GOING TO OCEAN CITY, MARYLAND, FROM PHILADELPHIA, EXCEPT WE PASSED THROUGH THE APENNINE MOUNTAIN RANGE. WHAT A VIEW WHEN YOU REACH THE TOP! ROLLING GREEN HILLS GIVE WAY TO ROCKY CLIFFS AND ENDLESS BLUE WATER AS THE LIGURIAN SEA SPREADS BEFORE YOU.
This was our first getaway. It was early June when we got there—a little before tourist season—so it wasn't too crowded. Pina's boyfriend, Carmine, set us up to stay at Hotel Florida in Lerici, just down the coast from the mountain villages of le Cinque Terre. We spent a little time in Lerici, then hopped a ferry up the coast. The boat took us around the tip of Porto Venere past all five villages: Riomaggiore, Manarola, Corniglia, Vernazza, and Monterosso. On the cruise, Claudia told me about the area. My Italian and her English were getting better. "This is where we spent our summer vacations," she said. "It's the closest beach and it's always warm in the summer." The rugged shores reminded me of northern Maine. But the cliffs were steeper; the water, deeper indigo; and the sloping terraces, sprouted with scraggly olive trees, lemon trees, and vineyards that looked thousands of years old. "The Cinque Terre landscape is just mountains and sea," she continued. "It's hard to grow things, but what does grow is intense with flavor." She explained how the basil is the most aromatic on earth. How the olives are small but jammed with flavor. And how currents from the Mediterranean and Tyrrhenian Seas encourage algae to grow, which feeds the local fish and gives them an incredible taste.
Explaining all this to me, Claudia looked as robust and beautiful as the olive groves on shore. We stepped off the ferry in Monterosso, and I took in the deep perfume of basil. No wonder pesto was born in this region. Fragrant basil, velvety olive oil, rich little pine nuts. . . the ingredients are all local to these seaside cliffs. It made sense. They cooked with what they had.
I knew that corzetti was the region's most famous pasta. Each circle of dough gets embossed with an intricate design stamped from a one-of-a-kind woodcut stamp. The stamps themselves are carved from olive wood or walnut wood and traditionally etched with a family crest to celebrate the birth of a child. Authentic corzetti stamps are nearly impossible to find outside of Liguria. I went into umpteen curio shops looking for them. It became the Great Corzetti Quest.
When we got to Vernazza, it was midmorning. You could smell the _farinata_ throughout the whole town. _Farinata_ is a local chickpea flatbread made after the morning yeast breads come out of the oven. "Oh my god," Claudia said, lifting her nose to the air, "we have to go down this street." We got lost down a half-dozen crooked, narrow streets before arriving at a nondescript back door. We went in, and it turned out to be a factory bakery, not a café or store. The workers looked at us, like, "What the hell are you doing here?" Claudia told them, "We're visiting. Jeff is a chef from America working in Italy." Ten seconds later, I had a piece of _farinata_ in my mouth, seasoned with onion, rosemary, and black pepper. Claudia had one with stretchy mozzarella. Before long, we were sitting at the water's edge, tasting different _farinata_ and focaccia made with fava bean flour and chestnut flour, laughing with the workers. Claudia wiped a few stray _farinata_ crumbs from my chin.
In Manarola, we stopped to read a little trattoria's chalkboard sign advertising its porcini tasting. Time for lunch! The first course was _funghi porcini di Borgotaro_ , summer porcini from Parma, sliced and breaded in polenta flour and fried. Delicious. Fettuccine with porcini came next. Then oven-roasted veal breast with porcini sauce. We finished off the mushroom tasting with two straws in a goblet of _sgroppino_ , a kind of frozen slushy made with lemon sorbet and grappa.
After lunch, we hit the Sentiero Azzurro (Blue Trail) that connects le Cinque Terre and has incredible views. The vivid flowers, steep terraces, sweet herbs, craggy cliffs, and gleaming blue seas hypnotized our senses at every turn. Back down in town, we sampled gelato every chance we could. The best was at 5 Terre Gelateria e Crêperia in Manarola.
With our licorice gelatos, we strolled along the Via dell'Amore from Manarola to Riomaggiore. We paused in secluded nooks along the way to kiss over our gelato: we couldn't stop looking at each other, like teenagers in love. Claudia told me that these five fishing hamlets were once completely isolated from one another by rocky hills. Boats were the only means of transportation, and villagers mostly kept to themselves. The railway came in the late 1800s, and in the 1920s, the first walking path was cut between the five villages. It allowed young lovers from different villages to find each other, so it was called _Via dell'Amore_ ("Lovers' Walk").
I wondered what it would be like to live in one of these jumbled pink, blue, or yellow houses that tilt toward the sea. We later walked the main street, Via Colombo, and Claudia and I were seduced by all the little shops selling fresh-picked strawberries, cherries, lemons, _nespoli_ (loquats), leeks, spiky artichokes, savoy cabbage, rainbow chard, ruddy Taggiasca olives, and little vegetable fritters and savory pies. We'd just eaten but we were hungry again.
Wandering through le Cinque Terre gave us more time to talk than we'd ever had before. We talked about our families and lives back home. Our hopes for the future. Our dreams. The idea of owning our own restaurant came up. Before we knew it, the ferry back to Lerici was about to board.
Although Liguria has some of the best seafood in Italy, we'd only had a few nibbles of fish throughout the day. Some anchovies in Monterosso. Fried _gianchetti_ (whitebait) in Riomaggiore to go with an _aperitivo._ We saved our appetites for dinner that night in Lerici. Dei Pescatori on Via Doria is one of the best seafood restaurants in the region. There is no menu and platter after platter of fresh-caught fish just keeps coming. . . stuffed anchovies, grilled sardines with lemon and olive oil, fried and marinated trout, grilled swordfish, steamed mussels with white wine, sautéed _gamberi_ (shrimp), langoustines, _vongole veraci_ (carpet clams) with pasta. . . it was endless, uncomplicated, and beautiful. Each briny bite seemed to capture the entire glorious day in our mouths. We capped the meal with a couple of glasses of _sciacchetrà_ , the sweet local white wine perfumed with apricot and honey.
The next day, we didn't leave our room at the Hotel Florida until three in the afternoon. We had a gorgeous view of the ocean. It was the first time I told Claudia that I loved her.
Cockles and Eggs with Bruschetta
•
Grilled Sardines with Taggiasca Olives and Celery Salad
•
Grilled Stuffed Calamari with Meyer Lemon and Beets
•
Corzetti with Clams, Tomatoes, and Peperoncino
•
Genovese Ravioli with Capon
•
Halibut al Cartoccio with Ligurian Olives and Oregano
•
Spaghetti al Nero di Seppia with Shrimp
•
Meyer Lemon Tortas with Poppy Seed Gelato
•
Sweet Ricotta Frittelle with Raspberry Preserves
•
Chickpea Cakes with Warm Lemon Crema
**COCKLES** and **EGGS** with **BRUSCHETTA**
Scrambled eggs and clams are two ingredients I never would have thought to put together. But this makes a fantastic springtime dish. The first time I tasted it was when Brad Spence, the chef at Amis in Philadelphia, made it for me for lunch. I was blown away. It's no more difficult than making scrambled eggs. You just cook cockles in the pan first. When you scramble the eggs, they cook right inside the opened cockle shells, getting seasoned with all the briny juices. The best version I ever tasted was on my last trip to le Cinque Terre. The intensely orange eggs and mineral-rich cockles in Liguria made it taste even better.
MAKES 2 TO 4 SERVINGS
2 tablespoons (30 ml) olive oil, plus some for the bread
2 garlic cloves, thinly sliced
30 cockles, New Zealand clams, or small hard-shell clams, scrubbed (18 to 20 ounces/510 to 570 g)
½ cup (120 ml) dry white wine
⅛ teaspoon (0.25 g) chile flakes
4 large eggs, lightly beaten
3 scallions, thinly sliced (green parts only)
Salt and freshly ground black pepper
4 slices rustic country bread
Heat the 2 tablespoons (30 ml) of oil in a large sauté pan over medium heat. Add the garlic, and cook until the garlic begins to toast and turn light brown around the edges, 3 to 4 minutes.
Remove from the heat and add the cockles, wine, and chile flakes. Return to the heat, cover, and steam until all the cockles open, 5 to 6 minutes.
Once all the cockles open, add the eggs to the pan and scramble the eggs in the cockle liquid until soft and just cooked through, 1 to 2 minutes. Stir in the scallions and season with salt and pepper.
Brush the bread with a little olive oil and grill or broil until lightly toasted. Transfer the cockles and eggs to warm plates and serve with the toasted bread. Allow your guests to scoop the eggs and cockles from the shells.
**GRILLED SARDINES** with **TAGGIASCA OLIVES** and **CELERY SALAD**
You find grilled sardines in trattorias all along the Ligurian coast. They usually serve them as an appetizer with a drizzle of lemon and olive oil. Cooking them on the bone releases collagen, making them moister and richer. If you fillet these little fish, they tend to dry out. I love them with the salty punch of Taggiasca olives, the heart of Liguria's prized olive oil. Serve this starter dish with a few lemon wedges for squeezing over the fish.
MAKES 4 SERVINGS
8 whole sardines, fins removed, gutted, cleaned, and rinsed under cold water
2 tablespoons (30 ml) grapeseed oil
Salt and freshly ground black pepper
1 tablespoon (15 ml) freshly squeezed lemon juice
3 tablespoons (45 ml) blended oil (page 276)
4 medium-size ribs celery, strings removed, thinly sliced on a diagonal
½ cup (60 g) Taggiasca or other Ligurian olives, pitted and chopped
4 small bunches mâche (lamb's lettuce), about 2 cups (70 g), cleaned, for garnish
Flake salt, such as Maldon sea salt, for garnish
Heat a grill to medium-high heat.
Coat the cleaned sardines with grapeseed oil, then generously season with salt and pepper. Lightly oil the grill and grill the sardines until just grill-marked and cooked through, about 2 minutes per side. Transfer to plates.
Meanwhile, pour the lemon juice into a medium bowl. Whisk in the blended oil in a slow, steady stream until combined and thickened. Add the celery and olives, then season with salt and pepper.
Spoon the salad over the sardines. Garnish with tight bunches of the mâche and drizzle the remaining vinaigrette from the bowl around the plates. Sprinkle each serving with flake salt and serve immediately.
**GRILLED STUFFED CALAMARI** with **MEYER LEMON** and **BEETS**
I'd always heard in cooking school—and among Italians—that you never mix cheese and fish. But it's a myth. Cheese tastes good with seafood as long as it's not overpowering. Mild squid and a creamy ricotta filling work great together. You usually see stuffed calamari braised in tomato sauce, but I wanted to give this a lighter spin to capture the romance of my first trip to le Cinque Terre. I put the grilled calamari on a bed of arugula and topped it with a lemony roasted beet salad. If you can find them, use Meyer lemons. They're sweeter and more floral, like the lemons in Liguria. If you use common Eureka lemons, add a pinch of sugar to cut the sourness.
MAKES 4 TO 6 SERVINGS
8 ounces (227 g) red or Chioggia beets
About ½ cup (68 g) kosher salt
12 small whole calamari (squid), cleaned
¾ cup (175 ml) extra-virgin olive oil, divided
6 stalks Swiss chard (8 to 10 ounces/227 to 285 g)
1 garlic clove, sliced
1 cup (235 ml) white wine
2 pounds (1 kg) fresh whole-milk ricotta cheese (1 quart)
1 ounce (28 g) Parmesan cheese, grated (¼ cup)
1 large egg
½ cup (54 g) plain, dry breadcrumbs
Salt and freshly ground black pepper
16 Meyer lemon segments
¼ cup (60 ml) freshly squeezed Meyer lemon juice
2 tablespoons (6 g) minced chives
6 ounces (170 g) arugula (about 6 cups)
Preheat the oven to 500°F (260°C). Scrub the beets well, then rinse them and leave them wet. Put the salt in a heatproof dish, add the beets, and pack a thick layer of salt around each beet. Transfer to a baking sheet and roast the beets until tender enough for a fork to slide in and out easily, 2 to 3 hours. Let cool, then rinse the beets and cut them into very small cubes. You should have about 1 cup (136 g). Set aside or refrigerate for up to 3 days.
To clean each squid, pull away the head and tentacles from the hood (tubelike body), and then reach into the hood and pull out the entrails and the plasticlike quill, taking care not to puncture the pearly ink sac. Cut off the tentacles just above the eyes, and discard the head. Squeeze the base of the tentacles to force out the hard "beak," and rinse the tentacles and the hood under cold running water. Using the back of a paring knife or your fingers, pull and scrape off the gray membrane from the hood. Cut off and discard the two small wings on either side of the hood. Refrigerate the hoods in ice water until ready to stuff. Pat dry the squid tentacles.
Heat 1 tablespoon (15 ml) of the oil in a large cast-iron skillet over high heat. When smoking hot, add the tentacles, and cook until curled, firm, and browned here and there, 4 to 5 minutes. Remove from the heat and let cool.
Separate the leaves from the stems of the chard. Trim any rough spots, then coarsely chop the stems and leaves. Heat 3 tablespoons (45 ml) of the oil in the skillet over medium heat. Add the chard stems and garlic, and cook for 2 minutes. Add the wine, and cook until the stems are almost tender, 8 to 10 minutes. Add the leaves, and cook, stirring now and then, until the liquid evaporates and the leaves wilt down a bit, 2 to 3 minutes. Let cool slightly, then transfer to a food processor, along with the seared tentacles. Mince the chard mixture using short pulses. Transfer to a bowl and whisk in the ricotta, Parmesan, egg, and breadcrumbs. Season to taste with salt and pepper. Spoon into a resealable plastic bag and refrigerate for up to 1 day.
Snip a corner off the bag and pipe the mixture into the squid bodies, stuffing them full. Close the ends of the squid with toothpicks. If you have any leftover filling, you can use it as a ravioli filling. Season the squid all over with salt and pepper and coat lightly with oil.
Heat a grill to medium heat. Brush the grill, coat with oil, and grill the stuffed squid directly over the heat until grill-marked and set in the center, turning a few times, about 8 minutes.
Gently combine the beets, lemon segments, lemon juice, chives, and remaining ½ cup (120 ml) of olive oil. Season with salt and pepper.
Divide the arugula among plates. Place two stuffed calamari on each plate and top with the beet salad. Drizzle with the remaining dressing in the bowl.
**CORZETTI** with **CLAMS, TOMATOES,** and **PEPERONCINO**
When my family came to Italy to meet Claudia's family, we made a special trip to Liguria to find a corzetti pasta stamp. They're pretty rare (see the Sources on page 289 for some suggestions). We first looked in a little town called Bergeggi, about an hour north of Genoa. When we got to le Cinque Terre, we looked in each of the five towns. We couldn't find a single stamp, and it started driving Claudia crazy. But I was obsessed. Finally, in Monterosso, in the last shop I looked in, I found them. Beautiful ones. I bought four of them. The traditional sauce for corzetti is a spin on basil pesto made with only pureed pine nuts, marjoram, and milk. But to remind me of the amazing Ligurian seafood, I like to serve corzetti with clams and little summer tomatoes.
MAKES 6 SERVINGS
Corzetti Dough:
4¾ cups (595 g) _tipo_ 00 flour (see page 277) or all-purpose flour
2 large eggs
¼ cup (60 ml) olive oil
Clams and Tomatoes:
5 pounds (2.25 kg) small hard-shell clams, such as littlenecks
10 tablespoons (155 ml) olive oil, divided
1 medium-size yellow onion, finely chopped (1¼ cups/200 g)
1 small garlic clove, smashed
½ bunch fresh flat-leaf parsley, stems and all
1 quart (1 L) white wine
1 quart (1 L) Fish Stock (page 279) or water
2 cups (340 g) grape tomatoes or small early summer tomatoes, halved
1 long hot pepper or peperoncino, minced (about ¾ cup/112 g)
**For the corzetti dough:** Combine the flour and eggs in the bowl of a stand mixer fitted with the dough hook and mix on low speed. With the machine running, gradually add the oil until incorporated, then gradually add 1 cup (235 ml) of water. Turn the mixer to medium-high speed and mix until the dough holds together. Separate the dough into three pieces and gently knead each piece in your hands until the dough looks smooth. Shape each piece into a rectangle the width of your pasta roller. Roll each piece of dough into a long rectangle about ⅛ inch (3 mm) thick (setting #2 or #3 on the KitchenAid attachment) and lay it on a floured work surface. Using a lightly floured corzetti stamp or 2½-inch (6.25-cm) cookie cutter, cut out circles of dough; you should get fifty to sixty circles from all three pieces of dough with no rerolling. Lightly flour the woodcut corzetti stamp and then stamp each circle to imprint the design. If you don't have a corzetti stamp, leave the circles plain or use a lightly floured cookie stamp or butter stamp. Place the corzetti in single layers between sheets of floured parchment paper, cover, and freeze for up to 2 days.
**For the clams and tomatoes:** Scrub the clams and rinse under cold running water.
Heat ¼ cup (60 ml) of the oil in a large, deep sauté pan. Add the onion, garlic, and parsley, and cook until the onion is soft but not browned, 4 to 6 minutes. Add the white wine and boil over high heat until the liquid has reduced in volume by half, 10 to 15 minutes. Add the clams and fish stock. Cover and steam until the clams open, 10 to 12 minutes. Remove from the heat as soon as the clams open, then transfer the clams to a plate. Line a mesh strainer with cheesecloth and strain the clam liquid through the cheesecloth; set aside. Pick out the meat from the clams and refrigerate it in the strained clam stock for up to 4 days.
When ready to serve, bring two large pots of salted water to a boil. Add half of the corzetti, one by one, to each pot, stirring gently to help prevent sticking. Partially cover the pots and cook just until the corzetti are tender, about 5 minutes. Reserve about 1½ cups (375 ml) of pasta water, then drain.
Meanwhile, heat 2 tablespoons (30 ml) of the olive oil in a deep sauté pan over medium heat. Add the tomatoes, and cook until they start to break down, 4 to 5 minutes. Add the hot pepper, and cook until soft, 6 to 8 minutes. Add the clams, 1¼ cups (310 ml) of the clam stock, 1 cup (235 ml) of pasta water, and the remaining ¼ cup (60 ml) of olive oil to the pan. Bring to a simmer over medium-high heat and cook until the liquid reduces in volume by about half, 5 to 8 minutes. Add the cooked pasta and toss gently in the sauce.
Using tongs, overlap eight corzetti in a circle on each plate. Simmer the sauce in the pan until slightly reduced and thickened, and then spoon over the corzetti.
**GENOVESE RAVIOLI** with **CAPON**
Genoa sits in the center of Liguria, slightly closer to the mountains bordering Piedmont. It's Italy's largest seaport, but because of the landscape, the food there includes more land animals, such as chickens and veal calves. You see this ravioli in all the Genovese restaurants. It's usually stuffed with veal organ meats, such as brains or sweetbreads. I added the capon as a little twist. Some of the capon meat gets pureed to make a rich, creamy sauce that tastes sort of like chicken soup but thicker. You'll need one small capon (4 to 5 pounds/1.75 to 2.25 kg) for this recipe, or you can use one large or two small chickens. Cut off the legs and wings and leave the bones in for the sauce. For the ravioli filling, remove the skin and bones from the breast.
MAKES 6 TO 8 SERVINGS
Genovese Sauce:
1½ pounds (680 g) capon legs and wings
Salt and freshly ground black pepper
About 1 cup (125 g) _tipo_ 00 flour (see page 277) or all-purpose flour
2 tablespoons (30 ml) grapeseed oil
1 medium-size yellow onion, coarsely chopped (1 cup/160 g)
1 large carrot, coarsely chopped (1 cup/122 g)
2 medium-size ribs celery, coarsely chopped (1 cup/101 g)
2 cups (475 ml) white wine
1¼ quarts (1.25 L) Chicken Stock (page 279)
1 sachet of 1 sprig parsley, 1 sprig rosemary, 1 sprig thyme, 1 bay leaf, and 6 black peppercorns (see page 277)
About ¾ cup (175 ml) olive oil
About ¼ cup (60 ml) sherry vinegar
3½ ounces (100 g) Parmesan cheese, grated (1 cup)
Ravioli Filling:
20 ounces (567 g) boneless, skinless capon breast, cubed
2 ounces (57 g) chicken livers
20 ounces (567 g) veal or calf's brains
Salt and freshly ground black pepper
½ cup (62 g) _tipo_ 00 flour (see page 277) or all-purpose flour, for dredging
4 tablespoons (57 g) unsalted butter
¼ cup (60 ml) olive oil
5 sage leaves
1 pound (450 g) Egg Pasta Dough (page 282), rolled into 4 sheets, each about 1/32 inch (0.8 mm) thick
**For the Genovese sauce:** Rinse the capon legs and wings and pat dry. Season the capon all over with salt and pepper. Dredge the pieces in the flour, shaking off the excess.
Heat the grapeseed oil in a large Dutch oven (or two) over medium-high heat. When hot, add the capon pieces in batches to prevent crowding and sear until golden brown on both sides, 5 to 6 minutes per side. Transfer the pieces to a platter as they are browned.
Preheat the oven to 350°F (175°C). Add the onion, carrot, and celery to the pan and cook, stirring now and then, until nicely browned, 6 to 8 minutes. Stir in the white wine and simmer until the liquid is reduced in volume by about one third. Pour in the chicken stock and bring to a simmer, then add the capon back to the pan, along with the sachet. Cover, transfer to the oven, and braise until the meat falls easily off the bone, 2½ to 3 hours.
Remove and discard the sachet. Transfer the meat to a platter and let cool slightly. Pick the meat and skin from the bones, discarding the bones. Be sure to remove all of the bones because the sauce will be pureed. Puree the meat, skin, vegetables, and braising liquid in a blender in batches, adding enough olive oil to each batch to create a slightly thickened sauce. Pour into a large, deep sauté pan, and season to taste with sherry vinegar, salt, and pepper.
**For the ravioli filling:** Put all parts of a meat grinder and the cubed capon breast and chicken livers in the freezer until ice cold, about 15 minutes. Grind the capon breast and chicken livers in the meat grinder with the small die. Cover and refrigerate.
Season the brains with salt and pepper and dredge them in flour, shaking off the excess.
Heat the butter, olive oil, and sage in a large sauté pan. When the butter is foamy and hot, add the brains and sear them in the pan until nice and golden brown all over, 8 to 10 minutes. Drain on paper towels and let cool.
Puree the cooled brains in a food processor. Add to the ground capon and liver mixture, season with salt and pepper, and mix until thoroughly combined. Spoon into a large resealable plastic bag and refrigerate until ready to use or up to 3 days.
Lay a pasta sheet on a lightly floured work surface and dust with flour. Trim the ends of the sheet to make them square, then fold the dough in half lengthwise and make a small notch at the center to mark it. Open the sheet so it lies flat again and spritz with water.
Cut off a small corner of the resealable plastic bag so you can pipe the filling on the pasta. Beginning at the left-hand side, pipe two rows of ½-inch (1.25-cm)-diameter balls of filling along the length of the pasta, leaving a ½-inch (1.25-cm) margin around each ball and stopping at the center of the sheet. Lift up the right-hand side of the pasta sheet and fold it over to cover the balls of filling. Gently press the pasta around each ball of filling to seal. Use a 2½-inch (6.25-cm) round, fluted ravioli cutter or a similar size biscuit cutter to cut the ravioli. Repeat with the remaining pasta dough and filling. You should have about ninety-six ravioli.
**When ready to serve:** Bring a large pot of salted water to a boil. Drop in the ravioli in batches, quickly return the water to a boil, and cook until tender yet firm, 2 to 3 minutes. Drain the pasta, reserving about 1 cup (235 ml) of the pasta water.
Heat the Genovese sauce in a large, deep sauté pan over medium heat. When hot, add the cooked ravioli, in batches, if necessary, to prevent crowding, and simmer until the sauce coats the pasta, 3 to 4 minutes. Add some of the reserved pasta water, if necessary, to create a creamy sauce. Divide among plates and top with Parmesan.
**HALIBUT AL CARTOCCIO** with **LIGURIAN OLIVES AND OREGANO**
_Al cartoccio_ is the Italian equivalent of French cooking _en papillote_ , or in parchment paper. It's one of my favorite ways to cook fish because it's gentle, like a combination of steaming and poaching. Just be sure to pick a fish that's tender enough to cook all the way through in five to ten minutes. A dense fish, such as mako shark, wouldn't work so well, but halibut is perfect. I serve the cartoccio with some fingerling potatoes on the side and have guests slit open their own packages. Each package comes to the table like a gift from the kitchen. You open the paper and a burst of steamy aroma bathes you in the scents of the Italian Riviera. . . fresh lemon, olives, and oregano.
MAKES 4 SERVINGS
Potatoes:
8 fingerling potatoes, scrubbed
4 teaspoons (20 ml) grapeseed or olive oil
Salt and freshly ground black pepper
4 teaspoons (19 g) unsalted butter
4 teaspoons (5 g) chopped fresh flat-leaf parsley
Halibut:
¼ cup (60 ml) olive oil, plus a little extra for drizzling
1½ pounds (680 g) halibut fillets, cut into 4 pieces
Salt and freshly ground black pepper
12 pitted Ligurian or Niçoise olives, halved lengthwise
24 fresh oregano leaves
12 thin slices of lemon
¼ cup (60 ml) freshly squeezed lemon juice
3 tablespoons (42 g) unsalted butter, cut into small pieces
**For the potatoes:** Put the potatoes in a pot and cover with cold water. Bring to a boil over high heat and boil until the potatoes are tender, 6 to 8 minutes. Let the potatoes cool until warm, then cut in half lengthwise.
Heat the grapeseed oil in a sauté pan over medium-high heat and fry the potatoes, cut-side down, until golden brown, 4 to 5 minutes. Drain any excess oil, then season the potatoes with salt and pepper and toss with the butter and chopped parsley.
**For the halibut:** Meanwhile, preheat the oven to 450°F (230°C). Cut four 10-inch (25 cm) squares of parchment paper and grease each with a thin film of olive oil. Season the halibut all over with salt and pepper, then divide them among the parchment squares. Mix together the olives, oregano, and 2 tablespoons (30 ml) of the olive oil and arrange over the halibut. Overlap two to three lemon slices on each portion, then drizzle with the lemon juice. Divide the cut-up butter among the portions, scattering it over the lemons, and drizzle with the remaining 2 tablespoons (30 ml) of olive oil.
**To make each package,** fold the parchment corner to corner over the fish to make a triangle. You'll have to nudge the fish slightly off center to make the corners meet. Starting at one of the other corners, begin rolling the paper toward the fish; continue making a series of small double folds all the way around the fish until you reach the opposite corner and the paper is folded tight against the fish. Twist the final corner several times to seal it tight, then fold it under the paper package.
Put the packages on a large rimmed baking sheet and drizzle each with a little olive oil. Bake until the fish is about 120°F (49°C) on an instant-read thermometer stuck through one of the packages, 5 to 7 minutes.
Using a spatula, transfer each cartoccio to a plate. Slit open the package, arrange the potatoes around the fish, and serve immediately.
**SPAGHETTI** al **NERO** di **SEPPIA** with **SHRIMP**
The best squid ink actually comes from _seppia_ (cuttlefish). The ink sacs are larger and the ink has a stronger taste. I've seen squid ink used to flavor everything from pastas and sauces to breads and ricotta cheese. It's one of my all-time favorite pastas. I use plenty of ink to make sure the pasta dough gets completely black. Be careful, though: the ink gets everywhere, like a busted ballpoint pen. To make the spaghetti, follow the directions on page 283 for making squid ink pasta, then extrude the pasta through the spaghetti attachment of your pasta machine. If you buy dried squid ink spaghetti, add a few minutes to the cooking time here.
MAKES 4 SERVINGS
1 pound (450 g) Squid Ink Spaghetti (page 283)
1 cup (235 ml) extra-virgin olive oil, divided
1 leek, cleaned, trimmed, and cut into long thin strips
1 quart (1 L) Shrimp Stock (page 280)
1 pound (450 g) shrimp, peeled and cut into ½-inch (1.25-cm) pieces
½ cup (60 g) chopped fresh flat-leaf parsley
Salt and freshly ground black pepper
Bring a large pot of salted water to a boil. Drop in the spaghetti, quickly return the water to a boil, and cook until tender yet firm, 2 to 7 minutes, depending on how long the spaghetti has been refrigerated. Drain the pasta, reserving the pasta water.
Heat ¾ cup (175 ml) of the oil in a large, deep sauté pan over medium heat. Add the leek, and cook until soft but not browned, 4 to 5 minutes. Raise the heat to medium-high, add the shrimp stock, and simmer it down a little, 4 to 5 minutes. Add the cooked pasta, shrimp, parsley, and the remaining ¼ cup (60 ml) of oil, cooking and tossing until the shrimp are no longer pink and the sauce is creamy, 3 to 4 minutes. Season with salt and pepper, divide among bowls, and serve.
**MEYER LEMON TORTAS** with **POPPY SEED GELATO**
I've always loved lemon poppy seed pound cake. And I used to make a cold lemon cake that was soft and airy like a creamy soufflé. I wanted to put all those flavors together and came up with this dessert. It makes the perfect ending to a springtime meal. Poppy seeds are used here in the form of a gelato. Lemons appear in both the cake and the sauce, which features the peels of Meyer lemons. Meyer lemons taste closer to Ligurian lemons than your typical American Eureka lemons do. But if you can't find Meyers, you can use regular lemons here. Just taste the lemons and add some sugar to balance the acidity.
MAKES 8 SERVINGS
Torta:
5⅓ tablespoons (75 g) unsalted butter, at room temperature
⅔ cup plus 2½ tablespoons (165 g) granulated sugar, divided
3 large eggs, at room temperature, separated
1 Meyer lemon
¾ cup plus 1 tablespoon (101 g) _tipo_ 00 flour (see page 277) or all-purpose flour
Pinch of salt
⅓ cup (90 ml) whole milk
Lemon Peel Sauce:
1¼ cups (120 g) Meyer lemon peel
¾ cup (175 ml) freshly squeezed Meyer lemon juice
1¼ cups (250 g) granulated sugar
½ teaspoon (2.25 g) powdered pectin
To Serve:
1½ cups (375 ml) Poppy Seed Gelato (page 287)
Preheat the oven to 375°F (190°C). Boil a kettle of water for a hot water bath.
**For the torta:** In a stand mixer on medium speed, cream the butter and ⅔ cup (133 g) of the sugar until light and fluffy, 3 to 5 minutes. Add the egg yolks, one at a time, mixing until they are incorporated, scraping down the bowl frequently. Grate the zest from the lemon into the bowl, then squeeze in the juice. Mix until combined.
Combine the flour and salt. Alternate between adding the flour mixture and milk to the batter in three additions, ending with the milk.
Meanwhile, whip the egg whites on medium speed, slowly adding the remaining 2½ tablespoons (32 g) of sugar until medium peaks form when the beaters are lifted, about 5 minutes. Gently fold the whites into the egg yolk mixture in three additions.
Butter and flour eight 4-ounce (120-ml) ramekins or baking tins and place them in a 15 x 10-inch (38 x 25-cm) baking dish or a rimmed baking sheet. Pour the batter into the ramekins to just under the inside rim. Slide the baking dish into the oven and carefully pour boiling water into the dish or sheet to come about ½ inch (1.25 cm) up the sides of the ramekins. Bake until just set, 13 to 15 minutes. Remove the tortas from the water bath and let them cool in the ramekins. Refrigerate for at least 1 hour or up to 2 days.
**For the lemon peel sauce:** Fill a bowl with ice water. Put the lemon peels in a pot with cold water to cover by 1 inch (2.5 cm). Bring to a boil, drain immediately, and transfer the peels to the ice water to stop the cooking. Repeat this process of boiling, draining, and cooling in ice water three times.
Combine the lemon juice, ¼ cup (60 ml) of water, and the lemon peel in a medium saucepan. Bring to a boil over medium-high heat. Stir together the sugar and pectin in a small bowl and then add to the pan. Bring the mixture to 217°F (103°C) on a candy thermometer. Remove from the heat and let cool to room temperature. Cover and refrigerate for at least 1 hour or up to 1 week.
**To serve:** Spoon a pool of sauce on each plate. Run a knife around each torta and then invert over the plate to unmold. Drizzle more sauce on top and then spoon the poppy seed gelato on the side.
**SWEET RICOTTA FRITTELLE** with **RASPBERRY PRESERVES**
Italians will make fritters out of anything. Meat, fish, vegetables, and fruit. They're the ultimate snack when paired with something creamy. I was looking through some old Italian cookbooks and kept coming across this ricotta fritter with lemon curd. I thought it would be fun to use raspberries instead of lemon. The fritter ends up tasting sort of like a jelly doughnut but with the raspberry preserves on the side. The fruit marries well with the cheese.
MAKES 6 TO 8 SERVINGS
Raspberry Preserves:
¼ cup (60 ml) glucose syrup or light corn syrup
¾ cup plus 2 tablespoons (175 g) granulated sugar, divided
1 pound (450 g) raspberries (about 3¾ cups)
1½ teaspoons (7 g) powdered pectin
Frittelle:
14 ounces (400 g) fresh whole-milk ricotta cheese (1¾ cups)
2 large eggs
¼ cup (50 g) granulated sugar
Zest of 1 lemon
1 vanilla bean, split and scraped
¾ cup (94 g) _tipo_ 00 flour (see page 277) or all-purpose flour
1 teaspoon (4.5 g) baking powder
Oil, for frying
To Serve:
Confectioners' sugar, for dusting
**For the raspberry preserves:** Combine the glucose syrup, ½ cup plus 2 tablespoons (125 g) of the sugar, and 1 tablespoon (15 ml) of water in a medium saucepan. Bring to a boil over medium heat. Gently stir in the raspberries, and cook until heated through, 1 to 2 minutes. Combine the pectin and the remaining ¼ cup (50 g) of sugar in a small bowl and gradually whisk into the berries. Return the mixture to a boil, and cook until slightly thickened, 4 to 5 minutes. The preserves will set further upon cooling. Scrape into a heatproof bowl and let cool. Use immediately or refrigerate for up to 2 weeks.
**For the frittelle:** Combine the ricotta, eggs, sugar, lemon zest, and vanilla in a stand mixer on low speed until blended. Combine the flour and baking powder in a small bowl and gradually add to the ricotta mixture on low speed. Scoop the mixture into 2-inch (5-cm)-diameter balls onto a parchment-lined baking sheet. (Or cover and refrigerate for up to 1 day, scooping out balls of dough as needed.) You should have 20 to 24 balls.
Heat the oil in a deep fryer or heavy pot to 375°F (190°C). Fry the balls in batches without crowding until set in the center and deep golden brown, 2 to 3 minutes, adjusting the heat to maintain the frying temperature at all times. Drain on paper towels.
**To serve:** Spoon a pool of preserves on the bottom of each plate. Top with two or three _frittelle_ and a dusting of confectioners' sugar.
**CHICKPEA CAKES** with **WARM LEMON CREMA**
Every town in le Cinque Terre uses chickpeas and chickpea flour in a variety of dishes. You can smell the _farinata_ (chickpea flatbreads) cooking in the streets during lunchtime. After my first visit, I researched how the residents used chickpeas and was surprised to learn how popular they were in desserts. What makes this dessert special is that you coat the cake molds with sugar, so they get a little crunchy on the outside. The insides of the cakes stay moist and complement the buttery lemon sauce. I like the cakes so much, I always make a big batch. But if you want less, you can halve this recipe.
MAKES 8 SERVINGS
Chickpea Cakes:
3 cups (600 g) cooked or canned chickpeas (drain and rinse if using canned)
10 tablespoons (125 g) granulated sugar
Zest of 1 lemon
2 tablespoons (16 g) _tipo_ 00 flour (see page 277) or all-purpose flour
Pinch of salt
1 large whole egg, plus 4 large eggs, separated
Lemon Sauce:
Juice of 3 lemons
1 cup plus 2 tablespoons (225 g) granulated sugar
4 large egg yolks
4 ounces (1 stick/113 g) unsalted butter
½ cup (120 ml) heavy cream
To Serve:
Olive oil, for drizzling
Confectioners' sugar, for dusting
**For the chickpea cakes:** Preheat the oven to 375°F (190°C). Butter and sugar eight 4-ounce (120-ml) ramekins or baking tins and place on a baking sheet.
Puree the chickpeas in a food processor or blender until relatively smooth, scraping down the sides once or twice. You should have 2 cups (475 ml) of thick chickpea puree. Transfer the puree to a large bowl and add the sugar, lemon zest, flour, salt, whole egg, and egg yolks. Gently whisk until smooth. Whip the egg whites in a stand mixer on high speed until medium-stiff peaks form when the beaters are lifted. Gently fold the whites into the puree mixture. Spoon the batter into the prepared ramekins and bake until the cakes are set and golden brown, 12 to 15 minutes.
**For the lemon sauce:** Whisk together the lemon juice, sugar, and egg yolks in a heatproof bowl until light and pale yellow. Heat the butter and cream in a heavy saucepan over medium heat until it begins to simmer, and then remove from the heat. Whisk half of the hot cream mixture into the yolk mixture until incorporated, and then return the combined mixture to the pan. Return the pan to low heat and stir constantly but gently until the sauce thickens slightly and registers a temperature of 165°F (74°C) on a candy thermometer, about 5 minutes. Remove from the heat and stir for about 2 minutes, or until the sauce thickens to the consistency of heavy cream. Strain through a fine-mesh sieve into a bowl and let stand a few minutes, stirring occasionally. You should have about 2 cups (475 ml) of sauce.
**To serve:** Spoon a pool of warm lemon sauce on each plate. Turn out a warm cake onto each plate. Drizzle a little olive oil around the plate, then dust the cakes with confectioners' sugar.
BAROLO AND BARBARESCO
I CAN'T SEE THROUGH THIS FOG
"BAROLO HAS BEEN CALLED 'THE KING OF WINES AND THE WINE OF KINGS,'" SAID CAMILLO. HE TOLD ME BAROLO HAS SO MUCH TANNIN THAT IT IS BEST AGED FOR AT LEAST TEN YEARS TO MELLOW IT.
But in the 1980s, new producers wanted to start selling the wine sooner. So they shortened the traditional fermentation time to less than two weeks and aged the wine for only a few years in smaller French oak barrels instead of in big Slovenian casks. "For Barolo traditionalists, this meant war," said Camillo. "It was a fight over what could legally be called Barolo wine."
This was my first trip to Piedmont, about a two-hour drive southwest of Bergamo. As the birthplace of the Slow Food movement, the first Eataly megastore, the world's best white truffles, and some of Italy's finest wines, Piedmont is a culinary mecca. Camillo wanted to show me some of the wineries that helped Frosio Ristorante earn its Michelin star. It was 2004 and Matteo Donadoni (nicknamed "Jack") was driving. Jack worked the front of the house at Frosio, along with Camillo.
We left Bergamo at six in the morning and drank wine all day from eight thirty to five o'clock. Camillo took us to both modern and traditional Barolo and Barbaresco wineries, including Scavino, Clerico, Ceretto, Rocche dei Manzoni, and Marchesi di Grésy. In the center of Barbaresco, we stopped for lunch at Trattoria Antica Torre, just down the street from the famous Gaja winery. Antica Torre serves classic Piedmont dishes, such as Fassone beef carpaccio, _vitello tonnato_ , tajarin egg noodles, and _bonèt_ , a rich chocolate-amaretti pudding cake. We ended the meal with some Bra and Castelmagno cheeses.
On the drive back to Bergamo, Jack pontificated about the food. "Everything is a little richer and more refined in Piedmont," he said. Jack explained how the region borders France and was under French control a few times. Even the traditional Barolo winemaking methods were developed by a Frenchman. The truffles and chocolate are finer than anywhere else in Italy. They fatten up the local Fassone cattle with sugar beets and zabaglione to make _bue grasso_ and hold a Fat Ox festival every year in Carrù. Even such peasant dishes as risotto get the royal treatment up here, with a finish of truffles, butter, or cheese. "We have a saying in Bergamo," Jack went on: "'La _boca l'è mia straca se la sa mia de aca.'_ It means, 'Your mouth is not tired if it doesn't taste like milk.'" In other words, always finish a meal with some satisfying cheese, no matter how full you might be.
I think Jack was a little drunk, but I got the picture. And I totally agreed about the lavish food in Piedmont. It made me want to come back.
In the fall of that year, I drove there with Claudia for a weekend getaway. We had GPS but it didn't do us any good. As the nights get cooler in Piedmont, a dense fog settles into the hills, making it impossible to find your way. Factor in the steep, twisty, one-car roads, and you're lost in no time. We started driving around ten in the morning for a lunch reservation at Da Cesare, a tiny restaurant in Albaretto Della Torre, about twenty minutes from Barolo. But a thick fog led us so far astray, we didn't make it to Da Cesare until two in the afternoon! It didn't matter. When you make a lunch reservation at Da Cesare, that could mean anywhere from noon to three p.m. Chef Cesare Giaccone cooks when you show up. There is no menu because it changes every day. Except for the _capretto_ and _zabaione._ Cesare always spit-roasts a baby goat over a wood fire outside the kitchen. And he always serves _zabaione_ tableside from a big copper bowl with his famous hazelnut cookies, baked and served right in the hazelnut shells. I've tried to make those cookies a hundred times and still can't get them right.
That fall, he started us off with his signature porcini and white peaches, thinly sliced and sautéed with a pan sauce of chicken stock, sherry vinegar, and cream. Next came a warm salad of duck breast with orange vinaigrette and local lettuces. Claudia licked her fork, and I could hear Cesare chopping the goat on his butcher block. The meat came to the table crispy but tender and drizzled only with herbed olive oil. It was outstanding. Cesare is one of Piedmont's best-known _personaggi._ He makes his own Barolo salt and Barolo wine vinegar, ages cheese in his cellar, and paints in his spare time. In his cellar, he showed us a wheel of Castelmagno cheese that he'd been aging for a year; his bottle of 1955 Gaja Barbaresco; and a dust-covered bottle of 1906 Barolo from Mascarello, one of the region's oldest winemakers. "When are you going to retire?" I asked him, trying to wrap my head around my own future as a chef. Cesare was already seventy. "Never," he said. "I'll be cooking for the rest of my life."
Claudia and I thanked him, and then drove to Ca' du Rabajà, a B&B about twenty minutes away, in Barbaresco. It's a beautiful brick-red inn and winery that overlooks the vineyards. Ca' du Rabajà is a member of the Produttori del Barbaresco, a consortium of winemakers that pools its grapes and expertise to produce consistent Barbaresco wine. Barolo wine has a loftier reputation than Barbaresco, but Barolo wines are notoriously inconsistent. Both wines are made from 100 percent nebbiolo grapes, the local grape named after the local fog (_nebbia_, in Italian). But Barolo's wider growing zone and changes to the winemaking methods over the years have made Barolo something of a crapshoot. It's true that Barbarescos don't bottle-age as gracefully. And they don't get nearly as big and aromatic. But Barbarescos are more drinkable when young and more reliable, thanks in part to the consortium.
We checked in, showered, changed, and drove to Osteria dei Catari for dinner that night, in the medieval village of Monforte d'Alba. The restaurant is super-rustic with exposed wooden beams inside, brick showing through the stucco here and there, and a dark wooden staircase leading to the second floor where colorful murals line the walls. For a first course, I had _tajarin_ , the region's thin handmade egg noodles, and they were insanely good. The shaved fresh truffles helped, of course. Claudia had veal-stuffed _raviolini del plin_ draped in a buttery sage sauce. She ordered veal breast with Barolo sauce for an entrée, and I had rustic, "hunter-style" braised rabbit. We couldn't resist the _torrone semifreddo_ with chocolate sauce for dessert.
Eating different versions of the same food in both Barolo and Barbaresco, I started to realize something: Italian cooking is intimately tied to the place it comes from and the people who make it. It's hyperlocal. The people in each region, and even each town, depend on their local food and local wine for their unique sense of identity. There is no single, standard _tajarin_ or _vitello tonnato._ The dish changes from town to town. This fundamental idea is nowhere more evident than in Barolo and Barbaresco, two famous Italian wines that are made within ten miles (16 km) of each other from the exact same grapes, yet with different winemaking methods that result in two distinct wines, each with its own distinguished and celebrated characteristics.
Pan-Fried Veal Tongue with Bagna Cauda and Leeks
•
Red Bell Pepper Tonnato with Caper Berries
•
Rabbit Agnolotti with Pistachio Sauce
•
Robiola and Fava Bean Francobolli
•
Polenta Gnocchi Stuffed with Taleggio Cheese
•
Whole Roasted Pheasant with Barbaresco Sauce
•
Veal on a Stone
•
Warm Quince Tortini with Cranberry and Orange
•
Zabaione with Moscato and Fresh Figs
**PAN-FRIED VEAL TONGUE** with **BAGNA CAUDA** and **LEEKS**
The first time I ever tried tongue, it was on a taco in Basalt, Colorado. This was years before I lived in Italy. Then in Piedmont, I had it over and over as part of the famous _bollito misto_ (boiled mixed meats) served with _salsa verde_ and _salsa rossa._ I love the idea of fish and meat together, so I thought the region's creamy anchovy sauce (_bagna cauda_) would go great with soft, crispy veal tongue. I boil the tongue until tender, then slice it and bread and fry each slice on one side only, for crunch. Sometimes I add a bitter edge by garnishing the dish with pieces of radicchio and a few grindings of black pepper.
MAKES 4 TO 6 SERVINGS
Veal Tongue:
1½ quarts (1.5 L) 3-2-1 Brine (page 280)
⅛ teaspoon (0.75 g) curing salt #1 (see page 277)
1 veal tongue (about 1½ pounds/680 g)
Bagna Cauda and Leeks:
1 cup (235 ml) whole milk
1 cup (235 ml) heavy cream
4 garlic cloves, smashed
1 cup (235 ml) blended oil (page 276)
2 ounces (57 g) salt-packed anchovies (about 5), rinsed and filleted (see note)
Salt and freshly ground black pepper
2 leeks, cleaned and trimmed
4 ounces (1 stick/113 g) unsalted butter
Leaves from 1 sprig fresh rosemary
Breading and Garnish:
About ½ cup (62 g) _tipo_ 00 flour (see page 277) or all-purpose flour
1 large egg, beaten
About ½ cup (54 g) plain, dry breadcrumbs
4 tablespoons (57 g) unsalted butter
¼ cup (60 ml) olive oil, plus some for drizzling
1 tablespoon (4 g) chopped fresh flat-leaf parsley for garnish
**Note**
**To prepare salt-packed anchovies, rinse them well, brushing off the salt with your fingers, or soak them for a few hours in several changes of cold water to help remove the salt. Remove the heads, tails, and fins. Place the blade of your knife perpendicular to the fish just below where the head was and cut along the body, holding the blade against the backbone as you go. This will remove the top fillet. Put the tip of your knife under the backbone and then pull out and remove the backbone to expose the bottom fillet.**
**For the veal tongue:** Combine the brine and curing salt in a large resealable plastic bag. Add the veal tongue, press out the air, seal, and refrigerate for 3 days.
Transfer the tongue to a medium saucepan. Add one-quarter of the brine and enough water to cover the tongue by about 1 inch (2.5 cm). Cover and bring to a simmer over medium heat, then adjust the heat so that the liquid simmers gently. Simmer until the tongue is tender (about 190°F/88°C internal temperature), 1 to 1½ hours. Remove the pan from the heat and let the tongue cool down in the liquid.
When cool, remove and discard the skin. Cut the tongue crosswise into slabs about ¼ inch (6 mm) thick. Refrigerate for up to 2 days.
**For the bagna cauda and leeks:** Combine the milk, cream, and garlic in a medium saucepan over medium-high heat. Bring to a simmer, then lower the heat to medium and cook until the liquid reduces in volume by about half, 6 to 8 minutes, taking care not to let the milk boil over. Meanwhile, heat the oil and anchovies in a sauté pan over medium heat until the anchovies break down, 3 to 4 minutes. Let cool slightly, then slowly scrape the oil mixture into the cream mixture; it may bubble up some. Simmer until the liquid reduces in volume by about one-third, thickens, and gets creamy, another 6 to 8 minutes. Let cool slightly, then puree in a blender. Season with salt and pepper.
Cut the leeks into pieces about 1½ inches (3.8 cm) long and 1 inch (2.5 cm) wide, and transfer them to the same sauté pan used for the anchovies. Add the butter and rosemary, and cook over medium heat, stirring a few times to break up the leek layers, until the leeks become super-soft and nearly melted but not browned, about 10 minutes. If necessary, stir in a little water so that the liquid remains creamy and not separated. Scrape into the pureed bagna cauda and keep warm over low heat.
Pour the flour, beaten egg, and breadcrumbs into three separate shallow bowls. Dredge one flat side of each piece of veal tongue in the flour, then the egg, then the breadcrumbs. Heat the butter and oil in a skillet over medium-high heat. When hot, add the tongue in batches, breaded-side down, and fry until golden brown on that side only, 3 to 4 minutes. Then flip, and cook the other side for 1 to 2 minutes. Transfer to paper towels to drain.
Spoon the bagna cauda mixture down the middle of small plates. Place a few pieces of tongue, breaded-side up, in the center of each plate. Drizzle with olive oil and garnish with chopped parsley.
**RED BELL PEPPER TONNATO** with **CAPER BERRIES**
I'm always looking for new variations of _vitello tonnato_ , Piedmont's classic dish of braised and sliced veal leg served cold with a creamy olive oil, egg yolk, and tuna sauce. At Osteria, we serve sliced porchetta with tonnato sauce. But at Alla Spina, I wanted to do a lighter snack and decided on this mousse of tuna wrapped in roasted pepper to form a sort of savory cannoli. Caper berries and lemon bring out the traditional tonnato flavors.
MAKES 4 SERVINGS
4 roasted red bell peppers (page 278)
1¼ teaspoons (3 g) powdered gelatin
4 anchovy fillets
1 tablespoon (15 ml) white wine vinegar
½ ounce capers (about 2 tablespoons/14 g), plus a few for garnish, preferably salt-packed rather than brined
7 ounces (200 g) canned Italian tuna packed in olive oil
Salt and freshly ground black pepper
½ cup (120 ml) mayonnaise
Zest from 2 lemons
2 tablespoons (8 g) chopped fresh flat-leaf parsley
½ cup (120 ml) olive oil
Cut the roasted peppers into eight rectangles, each about 4 to 5 inches (10 to 13 cm) long and 2 to 3 inches (5 to 8 cm) wide.
Mix the gelatin with 1 tablespoon (15 ml) of water in a medium heatproof bowl and let stand until bloomed (plump, soft, and hydrated), 5 minutes. Meanwhile, puree the anchovies, vinegar, and capers in a food processor. With the machine running, slowly add the tuna a little at a time, stopping to scrape down the sides of the bowl once or twice. Season with salt and pepper.
Put the bowl of bloomed gelatin over a pan of gently simmering water and heat until melted and smooth, 3 to 4 minutes, stirring a few times. Slowly stir the tuna mixture into the gelatin a little at a time, until fully incorporated. Remove the bowl from the heat and let cool for 5 minutes. Cover and refrigerate until very firm, at least 2 hours or up to 2 days. When the mixture has chilled, fold in the mayonnaise and keep cold.
Combine the lemon zest, parsley, and olive oil in a small bowl.
Lay the pepper rectangles out flat and spread about 2 teaspoons (10 ml) of cold filling over each one. Roll up the peppers like little jelly rolls to make a tube shape. The stuffed, rolled peppers can be refrigerated for up to 4 hours before serving. Top each with a caper berry (or more if you're using small capers) and drizzle the flavored oil over the top.
**RABBIT AGNOLOTTI** with **PISTACHIO SAUCE**
I'm a big fan of combining meat and nuts—especially rabbit and pistachios. Rabbit is a delicate meat, but when roasted over wood, it takes in a ton of flavor that stands up to pistachios. I like to spit-roast a whole rabbit over wood for this ravioli filling, and then grind it with mortadella and bind the filling with egg and Parmesan. You can also grill rabbit pieces as described in the recipe. Add some water-soaked wood chips to the fire for more smoke flavor. I call for rabbit legs instead of a whole rabbit, to make a reasonable number of servings for home cooks. But if you want to cook a whole rabbit, double the recipe. You'll get about twenty servings, which can be frozen for a week or two, so you get a few meals out of it.
MAKES 8 TO 10 SERVINGS
Rabbit and Mortadella Filling:
1¾ pounds (794 g) rabbit legs
Salt and freshly ground black pepper
2 tablespoons (30 ml) grapeseed oil
5 ounces (142 g) mortadella, cubed
1 large egg
1¾ ounces (50 g) Parmesan cheese, grated (½ cup)
1 pound (450 g) Egg Pasta Dough (page 282), rolled into 4 sheets, each about 1/32 inch (0.8 mm) thick
Pistachio Sauce:
2 cups (300 g) raw unsalted pistachios, preferably Sicilian
1 cup (235 ml) blended oil (page 276)
1 tablespoon (15 ml) sherry vinegar
5 large basil leaves
½ small garlic clove
Salt and freshly ground black pepper
To Serve:
2 tablespoons (30 ml) white truffle paste (see Sources, page 289)
1 cup (150 g) chopped raw unsalted pistachios, preferably Sicilian
3½ ounces (100 g) Parmesan cheese, grated (1 cup)
¼ cup (60 ml) extra-virgin olive oil
**For the rabbit and mortadella filling:** Light a grill for indirect medium heat, about 350°F (175°C); on a charcoal grill, bank all the hot coals to one side of the grill; on a gas grill, light the burners on only one side of the grill and leave the other burners off.
Season the rabbit legs with salt and pepper and coat all over with oil. Coat the grill grate with oil. Grill directly over the heat until browned on both sides, about 5 minutes per side. Move the rabbit to the unheated part of the grill, and cook until the juices run clear, about 140°F (60°C) on an instant-read thermometer, 30 to 35 minutes. While cooking, turn the rabbit a few times and baste it with oil to keep it moist.
Transfer the rabbit to a platter, and when cool enough to handle, remove and discard all the bones, reserving the meat and skin. You should have about 1¼ pounds (570 g) of rabbit meat. Grind the rabbit meat and mortadella together on the small (¼-inch/6-mm) die of a meat grinder into a large bowl. Using the paddle attachment of an electric mixer or a wooden spoon, mix in the egg and Parmesan until incorporated and the mixture looks somewhat pasty. Spoon the mixture into a resealable plastic bag, seal, and refrigerate for at least 1 hour or up to 2 days.
Lay a pasta sheet on a lightly floured work surface. Form two rows of ½-inch (1.25-cm)-diameter balls of filling along the length of the sheet, leaving a ½-inch (1.25-cm) margin around each ball. Spritz the dough with water to keep it from drying out as you work. Cut the pasta sheet in half lengthwise between the rows of filling to make two long sheets. Lightly moisten the long edges of the sheets with a spritz or a finger dipped in water. Starting from the outside edges in, fold the dough over just to cover the filling and roll the sheet of pasta over itself again. Next, pinch the dough in between the balls of filling to remove the air, starting at one end and working your way to the other. Place a finger gently on the stuffing to create a dimple, then cut between the balls of filling to create the ravioli. Repeat with the remaining pasta dough and filling. You should have 175 to 200 agnolotti. You may have some leftover filling; use it like you would use any other sausage. Place the agnolotti in single layers between sheets of waxed or parchment paper, cover, and freeze for up to 2 days. Take the pasta right from the freezer to the pasta water to cook.
**For the pistachio sauce:** Buzz the pistachios, blended oil, sherry vinegar, 1 tablespoon (15 ml) of water, and the basil and garlic in a blender or food processor until smooth, 2 to 3 minutes. Season with salt and black pepper, and then refrigerate in a sealed container for up to 3 days or freeze for up to 1 month. Makes about 3 cups (750 ml).
Bring a large pot of salted water to a boil. Drop in the pasta in batches, and cook just until tender, 2 to 4 minutes. Drain, reserving the pasta water.
**To serve:** Meanwhile, combine the pistachio sauce, truffle paste, and 2 cups (475 ml) of pasta water in a large, deep sauté pan over medium heat. Add the drained pasta and toss gently until the sauce is creamy, 2 to 3 minutes. Divide the pasta and sauce among plates and top with chopped pistachios, Parmesan cheese, and a drizzle of extra-virgin olive oil.
**ROBIOLA** and **FAVA BEAN FRANCOBOLLI**
The first time I made francobolli ravioli was out of Mario Batali's _Babbo Cookbook_. They look like postage stamps on the plate (_francobolli_ is Italian for "postage stamps"). I love the small, delicate shape. Mario's francobolli were filled with lamb's brain, but I like a more colorful filling. Fava beans came right to mind. They blend up nice and creamy, and you can see the bright green color right through the pasta. A simple butter sauce with mint is all that's needed to finish the dish.
MAKES 6 TO 8 SERVINGS
1 pound (450 g) fava beans in the pods
5 tablespoons (75 ml) olive oil, divided
2 tablespoons (20 g) minced yellow onion
5 ounces (141 g) robiola cheese (⅔ cup)
2¼ ounces (64 g) Parmesan cheese, grated (2 tablespoons plus ½ cup), divided
1 large egg
Salt and freshly ground black pepper
8 ounces (227 g) Egg Pasta Dough (page 282), rolled into 2 sheets, each about 1/32 inch (0.8 mm) thick
12 ounces (3 sticks/340 g) unsalted butter
4 garlic cloves, smashed
¼ cup (15 g) chopped fresh mint, plus a few leaves of mint cut in chiffonade, for garnish
Bring a large pot of water to a boil and fill a large bowl with ice water. Add the whole fava pods to the boiling water and blanch for 1 minute. Transfer to the ice water to stop the cooking. When cool, pluck the favas from the pods, then pinch open the pale green skin on each bean and pop out the bright green favas into a bowl. You should have about 1 cup (188 g).
Heat 1 tablespoon (15 ml) of the olive oil in a sauté pan over medium heat. Add the onion, and cook until soft but not browned, 3 to 4 minutes. Add ½ cup (94 g) of the fava beans and just enough water to cover the beans (about ½ cup/120 ml). Cook just until the beans are tender, 4 to 5 minutes. Using a slotted spoon or spider strainer, scoop the beans and onion from the liquid and transfer them to a blender, reserving the liquid. Puree the beans and onion, adding just enough of the cooking liquid to make a smooth puree. Transfer to a mixing bowl and add the robiola, 2 tablespoons (12.5 g) of the Parmesan, and the egg. Season with salt and pepper and whisk until smooth. Spoon into a resealable plastic bag and refrigerate for at least 1 hour or up to 1 day.
Lay a pasta sheet on a lightly floured work surface and dust with flour. Trim the ends to make them square, then fold the dough in half lengthwise and make a small notch at the center to mark it. Open the sheet so it lies flat again and spritz with water. Beginning on the left-hand side, place two rows of ½-inch (1.25-cm)-diameter balls of filling along the length of the pasta, leaving a ½-inch (1.25-cm) margin around each ball and stopping at the center of the sheet. Lift up the right-hand side of the pasta sheet and fold it over to cover the balls of filling. Gently press the pasta around each ball of filling to seal. With a knife or fluted pasta wheel, cut into 1-inch (2.5 cm) squares, trimming off any excess. Repeat with the remaining pasta dough and filling. You should have about 125 francobolli. If you're not going to cook them immediately, toss the francobolli with a little bit of flour and freeze in an airtight container.
Bring a large pot of salted water to a boil. Drop in the francobolli, quickly return the water to a boil, and cook until tender yet firm, 3 to 4 minutes. Drain the pasta, reserving the pasta water.
Put the butter, remaining ¼ cup (60 ml) of olive oil, and smashed garlic in a deep sauté pan over medium heat. When the butter melts and the oil is hot, whisk in 1 cup (235 ml) of pasta water until blended. Add the remaining ½ cup (94 g) of fava beans, the cooked pasta, and the mint. Cook until the sauce becomes thick and creamy, 1 to 2 minutes.
Divide among plates and garnish with the remaining ½ cup (50 g) of Parmesan and the mint chiffonade.
**POLENTA GNOCCHI STUFFED** with **TALEGGIO CHEESE**
As a cook, you always want to try to use up every ingredient without throwing anything away. Polenta lasts for a few days in the fridge, so why not use leftovers to make gnocchi? Stuffed with Taleggio cheese, they're fantastic. The dumplings will seem wet when you make them. That's okay. The wetter they are, the more tender they'll be. Just handle them gently and roll them lightly in flour. If you like, add a few leaves of fresh sage to the brown butter in the sauce.
MAKES 4 TO 6 SERVINGS
1 pound (about 2½ cups/450 g) cooked Polenta (page 281), at room temperature
½ cup (62 g) _tipo_ 00 flour (see page 277) or all-purpose flour, plus about 2 cups (250 g) for dusting
2¼ ounces (64 g) Parmesan cheese, grated (⅔ cup), divided
2 tablespoons (14 g) plain, dry breadcrumbs, sifted
1 large egg
Salt and freshly ground black pepper
8 ounces (227 g) Taleggio cheese, cut into ¼- to ½-inch (about 1 cm) cubes (about 2 cups)
8 ounces (2 sticks/227 g) unsalted butter
Put the polenta in the bowl of a stand mixer fitted with a paddle attachment and mix on medium speed until smooth, 2 to 3 minutes. Add ½ cup (62 g) of flour, ⅓ cup (33 g) of the Parmesan, and the breadcrumbs and egg, and season to taste with salt and pepper. Mix on medium-low speed just until combined, 30 seconds or so, scraping down the sides once to incorporate all the ingredients. Spoon the filling into a resealable plastic bag, seal, and refrigerate for at least 1 hour or up to 1 day.
Pour about 1 cup (125 g) of flour into a large bowl. Snip a corner from the bag and squeeze the gnocchi mixture from the bag into the flour in ¾-inch (2 cm)-diameter balls. Coat your hands with flour, make a small dimple in each ball with your pinky tip, and then insert a piece of Taleggio inside each dimple, gently pinching the dough around the cheese and rolling the ball between your hands to completely enclose the cheese. Repeat with all of the gnocchi and roll in flour to coat them. You should have fifty to sixty gnocchi.
Bring a large pot of salted water to a boil. Drop in the gnocchi in batches to prevent overcrowding, and cook until the polenta firms up and the cheese begins to melt, 5 to 6 minutes (test one to make sure the cheese is melting).
Meanwhile, melt the butter in a sauté pan over medium-low heat and continue cooking until the milk solids begin to brown and fall to the bottom of the pan, 10 to 15 minutes.
Divide the gnocchi among plates and sprinkle with the remaining Parmesan. Pour the browned butter over the top.
**WHOLE ROASTED PHEASANT** with **BARBARESCO SAUCE**
I grew up in southern New Hampshire and started hunting at the age of twelve. We mostly hunted game birds, such as pheasant, around the apple orchards in Wilton. My dad would rub mayonnaise under the skin and roast the birds, but they always got dry and tough. I learned that you have to undercook them a little. Now, I stuff the pheasants with herbs and garlic, tie them up, and pan-roast them to about 140°F (60°C). Then I cut the meat off the carcass and squeeze the carcass into the pan to enrich the sauce. A little truffle paste helps, too. If you can't get pheasant, you could make this dish with chicken.
MAKES 4 SERVINGS
2 female pheasants (with innards), each 2½ to 3 pounds (1.1 to 1.3 kg)
Salt and freshly ground black pepper
1 garlic clove, halved
8 sprigs thyme
1 bay leaf, torn in half
4 tablespoons (57 g) unsalted butter, divided
¼ cup (60 ml) extra-virgin olive oil, divided
1 cup (235 ml) Barbaresco or another dry red wine
1 cup (235 ml) Chicken Stock (page 279)
1 tablespoon (15 ml) red wine vinegar, preferably Barbaresco or Barolo
1 tablespoon (15 ml) white truffle paste or shaved fresh white truffles
Preheat the oven to 450°F (230°C). Remove and discard any remaining feathers from the pheasants, then pull out the innards and set aside. Season the birds inside and out with salt and pepper. Stuff the garlic, thyme, and bay leaf into the cavities and truss the birds by tying the legs together (see note for trussing instructions).
Heat 1 tablespoon (15 ml) each of the butter and oil in each of two ovenproof sauté pans (or use one giant pan if you have one). Set them over medium-high heat and, when hot, add a pheasant breast-side down to each pan and sear on all sides until nicely browned, 8 to 10 minutes total, taking care not to tear the skin when turning the birds. Turn the birds breast-side up, transfer the pans to the oven, and cook until an instant-read thermometer registers 140° to 145°F (60° to 63°C) when inserted into a leg, about 25 minutes. Remove the pans from the oven and transfer the pheasants to a cutting board. Let rest for 10 to 15 minutes.
Meanwhile, return the pans to medium heat. Finely chop the pheasant livers and hearts, add to the pans, and sauté for 2 to 3 minutes. Add the wine, scraping the pan bottom, and simmer until the liquid reduces in volume by about half, 4 to 6 minutes. Add the stock, and cook until the liquid thickens enough to start coating the back of a spoon, another 5 minutes or so.
Cut each pheasant into six pieces, removing the legs, wings, and breast from the carcass. Scrape any juices from the cutting board into the pan and squeeze the carcass over the pan to release all its juices. Simmer for 2 to 3 minutes, then strain out the liver and heart, if you like, or leave them in. Stir in the remaining 2 tablespoons (30 ml) of butter and 2 tablespoons (30 ml) of olive oil, along with the vinegar and truffle paste, stirring like mad until the sauce blends together and emulsifies. Season to taste with salt and pepper.
Serve the pheasant pieces with the Barbaresco sauce.
**Note**
**To truss the stuffed pheasants, pierce three or four 4-inch (10 cm)-long wooden skewers through the flaps of skin on each side of the cavity openings. Weave kitchen string around the skewers like a shoelace to lace the bird shut, tying it off at the top. Position the bird breast-side up with the legs facing away from you. Loop a long piece of kitchen string beneath the ends of the drumsticks, crossing the string to make an X. Pull the remaining string down, passing it beneath the thighs and pulling tight to pull the legs toward the tail. Continue pulling the string along the body toward the neck and pass it beneath the wings. Flip the bird over so the legs are now facing toward you and cross the string over the back between the two wings, pulling tight. Loop the string beneath the backbone, pull it tight, and then tie it off with a tight knot.**
**VEAL ON A STONE**
The food at Da Cesare in Piedmont is the most simple and elegant you will ever experience. In 2004, out from the kitchen came this piping hot stone with a veal loin cooked rare and sliced thin, so you could use the stone to finish cooking the veal to your liking. You could smell fresh rosemary and thyme sprigs searing beneath the hot stone. The dish was served with an heirloom tomato and basil salad and a little rock salt. Perfection! You can use any flat, heavy stone that will retain heat. I like Pennsylvania bluestone, but salt blocks also work well. Look for bluestone at landscaping stores or salt blocks at the Meadow (see Sources, page 289). You can get away with using only two stones if everyone shares.
MAKES 4 SERVINGS
Four 6-inch (15-cm) square Pennsylvania bluestones, salt blocks, unglazed quarry tiles, or other dense, heavy stones
4 heirloom tomatoes
3 tablespoons (45 ml) olive oil, plus a few tablespoons for searing
1 tablespoon (15 ml) red wine vinegar, preferably Barolo
6 basil leaves, cut into chiffonade
Salt and freshly ground black pepper
2 pounds (1 kg) boneless veal loin
8 sprigs rosemary
8 sprigs thyme
Maldon sea salt
If using bluestones, rinse the stones clean, dry them, and place them in a cold oven. Heat the oven to 500°F (260°C) and let the stones preheat in the oven until very hot (about 400°F/205°C surface temperature), about 6 hours. If using salt blocks, heat them over a gas burner on the lowest heat for 15 minutes, then raise the heat from low to medium to high every 10 to 15 minutes until very hot (about 600°F/315°C), about 45 minutes. For electric burners, prop the salt blocks on a wok ring or other rack to avoid direct contact with the heating element.
Meanwhile, bring a large pot of water to a boil and fill a large bowl with ice water. Score an X into the bottom of each tomato with a sharp knife and blanch the tomatoes in the boiling water until the skins start to curl back a little from the X, 30 seconds to 1 minute. Immediately transfer the tomatoes to the ice water and let them stand until cooled, a minute or two. Starting at the X, peel the skin from the tomatoes and discard. Cut them in half through their equator and gently squeeze them upended over a bowl or trash can to remove and discard the seeds and gel (you can use your fingers or the tip of a knife to help dig out the seeds and gel). Cut out the core and then finely dice the tomatoes. Transfer them to a bowl and mix in the oil, vinegar, and basil. Season with salt and pepper and let stand at room temperature for 30 minutes.
About a half-hour before you are ready to serve, cut the veal crosswise into four equal portions (8 ounces/227 g each), season with salt and pepper, and let stand at room temperature for 20 minutes.
Heat a few tablespoons (about 45 ml) of the olive oil in a large sauté pan over medium-high heat. When hot, add the veal and sear until golden brown on both sides but nice and rare and still cool to the touch in the middle, about 4 minutes per side.
Slice the veal through the side into slabs about ¼ inch (6 mm) thick and place the slabs on a plate or a wooden board. Place the rosemary and thyme on wooden boards that can hold the hot stones (the stones will go on top of the herbs). Using heatproof silicone gloves or thick, insulated grill gloves, carefully remove the hot stones from the oven or stovetop and place them on the herbs on the wooden boards. Allow guests to use their forks to transfer the slabs of veal to the hot stones, and cook the veal to their liking (it only takes about a minute per side). Serve with the tomato salad, allowing guests to season the stone-seared veal to taste with Maldon sea salt. If using salt blocks, you may not need the finishing salt because some salt will be released into the veal from the block.
**WARM QUINCE TORTINI** with **CRANBERRY** and **ORANGE**
_Tortino_ means "small tart." Mixing choux pastry with pastry cream is what makes the tart special. It creates an unbelievably light and flaky crust. You could put whatever you want inside—cherries, pears, bananas—and the tart would taste awesome. This version is my Italian twist on American apple pie, using quince as the filling and cranberry compote as the sauce. The recipe is written for individual pies, the way we serve them at Osteria. You'll need eight fluted 3- to 4-inch (7.5- to 10-cm) brioche molds (any extras can be frozen and then reheated). Or for one large tart, use a single deep-dish pie pan, rolling the dough for both top and bottom crusts.
MAKES 8 SERVINGS
Pastry Cream:
¾ cup (150 g) granulated sugar, divided
¼ cup (32 g) cornstarch
9 large egg yolks
1⅔ cups (400 ml) whole milk
6 tablespoons (90 ml) heavy cream
½ vanilla bean, split and scraped
Pâte à Choux Dough:
7 ounces (200 g) unsalted butter
1½ cups (205 g) pastry flour
¼ teaspoon (1.5 g) salt
6 large egg yolks
Quince Filling:
2¼ pounds (1.1 kg) quinces, peeled, cored, and coarsely chopped
1 tablespoon (14 g) unsalted butter
Juice of 1 lemon
2½ cups (500 g) granulated sugar
1½ teaspoons (4 g) ground cinnamon
¼ teaspoon (0.6 g) ground mace
⅛ teaspoon (0.3 g) ground cloves
⅛ teaspoon (0.3 g) ground allspice
⅛ teaspoon (0.3 g) ground ginger
1 cup (235 ml) apple cider
Cranberry and Orange Sauce:
Peel from ½ orange
9 ounces (about 2¼ cups/255 g) fresh cranberries
1 cup (200 g) granulated sugar
3 tablespoons (45 ml) brandy
To Serve:
1 cup (235 ml) Crème Anglaise (page 284)
**For the pastry cream:** Combine the sugar, cornstarch, and egg yolks in the bowl of a stand mixer fitted with the whisk attachment. Mix on medium speed until smooth, about 2 minutes.
Bring the milk, cream, and vanilla to a boil in a medium saucepan over medium heat. Remove from the heat and mix about ½ cup (120 ml) of the hot milk mixture into the egg mixture. Scrape all of the egg mixture into the pot of hot milk, and cook over medium heat until thick and creamy. Transfer the mixture to the mixer bowl and whip on low speed until cool, 4 to 5 minutes.
**For the pâte à choux dough:** Melt the butter with ⅞ cup (210 ml) of water in a large saucepan, and bring it to a rolling boil. Stir in the flour and salt, cooking and stirring vigorously until the flour absorbs the water and a film forms on the bottom of the pan, 2 to 3 minutes. When the dough comes together and pulls away from the sides of the pan, about a minute later, transfer the dough to the bowl of a stand mixer fitted with the paddle attachment. Mix on medium speed for 1 minute to cool the dough. Add the egg yolks one or two at a time, allowing each addition to become fully incorporated before adding the next and scraping down the bowl as necessary.
Add the cooled pastry cream to the pâte à choux, and mix on low speed until combined to make the final tart dough. It will be sticky. Cover and refrigerate until ready to use or up to 4 hours.
**For the quince filling:** Combine the chopped quinces, butter, lemon juice, sugar, cinnamon, mace, cloves, allspice, ginger, and cider in a large saucepan. Bring to a boil over medium-high heat, and then lower the heat to medium-low and simmer until the quince breaks down and thickens the entire mixture into a jam, 30 to 40 minutes.
**For the cranberry and orange sauce:** Cut the orange peel into very thin strips about 3 inches (7.5 cm) long (julienne). Reserve the rest of the orange for another use. Set up three small pots of boiling water and fill a bowl with ice water. Blanch the orange peels in the first pot for 30 seconds, then transfer with a slotted spoon to the ice water to cool. When cool, blanch the peels in the second pot for 30 seconds and transfer to the ice water to cool again. When cool, blanch the peels in the third pot for 30 seconds and transfer to the ice water to cool.
Combine the blanched orange peel, cranberries, sugar, and brandy in a medium saucepan over medium heat. Simmer until the cranberries break down some and the mixture thickens (about 215°F/102°C on a candy thermometer), 20 to 25 minutes.
Preheat the oven to 375°F (190°C) and butter eight fluted 3- to 4-inch (7.5- to 10-cm) brioche molds.
On a lightly floured work surface, roll out half of the dough to an even ¼-inch (6-mm) thickness; the dough will be sticky and you will need a lot of flour to roll it out. Cut eight circles out of the dough, each about 4½ inches (11.5 cm) in diameter, and line the prepared molds with the dough, easing it into all the flutes of the molds. Fill halfway with the quince filling.
Roll the remaining half of the dough on the work surface, adding flour as needed, to an even ¼-inch (6-mm) thickness. Cut eight circles out of the dough to fit over the top of the tortini. Lay the top crust on each tortino and crimp the edges to seal. Place the tortini on a rimmed baking sheet and bake until golden brown, 15 to 20 minutes. Let cool on a rack for 10 minutes.
**To serve:** Spoon a pool of Crème Anglaise on each plate and top with a warm tortino. Spoon on some of the cranberry and orange sauce.
**ZABAIONE** with **MOSCATO** and **FRESH FIGS**
From the dining room of Da Cesare, you can hear the clang of the whisk in Cesare's copper bowl as he makes his famous zabaione. It's never too thick or too soupy or too sweet. It's perfect. He doesn't cook it in a double boiler but in a copper bowl right over a burner, whisking like hell. It takes skill to keep from scrambling the eggs. I wrote this recipe using the safer method of whisking the zabaione in a bowl over gently simmering water.
MAKES 2 SERVINGS
6 ripe fresh figs
2 large egg yolks
Two half-eggshells of granulated sugar
Two half-eggshells of Marsala or muscat wine
2 hazelnut biscotti cookies or other biscotti
Quarter the figs and divide them among small serving bowls.
Combine the egg yolks, sugar, and wine in a heatproof bowl or the top of a double boiler. Whisk vigorously until the mixture is thick and pale yellow, 2 to 3 minutes. Set the bowl over a saucepan of barely simmering water. You don't want the water at a rolling boil or it will cook the eggs too quickly. Whisk constantly until the mixture takes on enough air to triple in volume, thicken slightly, and fall in sheets when the whisk is lifted. It should register 145°F to 150°F (63°C to 66°C) on an instant-read thermometer and take about 5 minutes of whisking over the hot water.
Spoon the zabaione over the figs at the table and garnish the bowls with hazelnut biscotti.
VILLA D'ALMÈ
SIMPLE ITALIAN COOKING AT ITS BEST
THE 7 CLUB WAS FULL OF GORGEOUS ITALIAN WOMEN WORKING OUT. WITHIN WALKING DISTANCE OF FROSIO, THE GYM WAS MY SAFE HAVEN BETWEEN SHIFTS AT THE RESTAURANT. BUT SINCE CLAUDIA AND I STARTED DATING, I HADN'T WORKED OUT IN WEEKS. IT WAS GOOD TO GET BACK IN THE GYM.
When I got back to work, I finished icing the last of the _piccola pasticceria_ (petit fours), one of my specialties. During lunch, Jack Donadoni came into the kitchen and told me that Stefano Arrigoni was in the dining room and wanted to know who was making them. Stefano was the owner of Osteria della Brughiera, another Michelin-starred restaurant just down the road from us in Villa d'Almè. Tall and artistic, Stefano came for lunch at Frosio once a week and was impressed with my desserts. When I came out to talk with him, we hit it off right away, and he said I should come to La Brughiera for dinner.
"You should definitely go," said Jack. "The food is incredible. La Brughiera is one of Bergamo's thirty Michelin restaurants." Jack was always bragging about Bergamo being called " _la città più stellata_ ," "the city with the most stars," and Lombardy having the most Michelin-starred restaurants in all of Italy. I'd already eaten at several of them and felt lucky to have cooked at two of them—Frosio and Loro. It was time to check out another.
La Brughiera opened in 1991. Over the years, Mario Batali, Michael Schlow, and other Italian restaurateurs in America have eaten there. It's a shining example of simple, refined Italian food. Stefano also owns an art gallery in Bergamo, and the physical spaces of La Brughiera are impeccably designed—a perfect blend of contemporary and rustic. You enter the restaurant through an iron gate leading up a pebbled walkway past a minimally decorated patio with umbrella-covered tables. In the foyer, stunning paintings—both modern and classic—adorn the walls. The dining rooms are to the left. The _cantina_, to the right. When Claudia and I came for dinner, Stefano's father, Walter, was in the cantina, slicing dark red prosciutto and deeply marbled coppa on a gleaming antique Berkel meat slicer. Guests are invited to linger in this dimly lit wine cellar with some Franciacorta and hand-carved salumi before heading to their tables for dinner. Hundreds of wine bottles line the old stone walls, and a green marble table offers all manner of homemade pickles, cheeses, breads, and salumi. Stefano brought Claudia and me past the meat slicers, through a brick archway, to the cavernous curing room, where pancetta, guanciale, prosciutto, culatello, culaccia, and other sausages hung on meat hooks from a wet ceiling. Kneeling on the floor, he pushed the lid off a chilly tomb of Carrara marble to show us thick slabs of rosemary-rubbed _lardo_ curing, layer upon layer, inside. For a moment, I was transported back to the Mangili butcher shop by the salty smell of aging meat and fat.
As we reentered the cantina, Stefano told me that his chef, Paolo Begnini, started in 1996, just a few years after they opened. Paolo is a tall, broad-chested guy with thin eyebrows. He handed Claudia and me each a piece of unsalted bread rubbed with tomato and two-day-old fresh sausage. Suddenly, it seemed that all of my culinary experiences of the past year, from butchering to baking to dining to home cooking, were captured in that one bite of bread and meat.
Claudia and I sat down for dinner and the dishes came to the table without a single order. Treviso with shaved artichokes, fried egg and fresh local cheese. Veal and truffle pâté on toast with a salad of guinea fowl, cipolline onions, and hazelnuts. Squash gnocchi stuffed with porcini and topped with Parmesan fonduta and shaved white truffles. Bavette pasta with baby octopus and sage. Scampi with caramelized citrus. Panfried veal brains with green beans and mustard vinaigrette. Pan-seared duck breast with citrus, cauliflower butter, and duck liver pâté.
This was the kind of food I wanted to cook more of! The ingredients were impeccable; the techniques, flawless. Paolo's pasta was ethereal and perfectly married with the sauce. His flavor combinations were concise. Presentations were uncluttered. Every dish rang true. Even desserts were a revelation. Passion fruit soufflé with pineapple-ginger sorbet. Light and crispy apple fritters with cream and cinnamon. Molten hot chocolate puffs with bourbon vanilla crema.
I felt like I'd been given a great big bear hug by the chef. For the Italians, when you walk into their restaurant, it's like walking into their home. Even though we'd just met, both Paolo and Stefano welcomed us like old friends. That dinner lasted until two in the morning as Stefano sat with us, pouring vin santo and nibbling _cantucci_ (almond cookies). He asked me about America, how Claudia and I met, and what we hoped for in the future.
Over the next several months, we went back to La Brughiera again and again. It became our favorite restaurant. By this time, I'd eaten all over Italy in various restaurants in various regions. But I still found Paolo Begnini's cooking to be the most inspiring. Every time I ate there, I learned something new. His taste was slightly more Tuscan than Bergamascan yet unlimited by allegiance to any one Italian region. He employed ingredients and techniques from all over Italy and the world, using the full range of his talents as a chef. He traveled to Tuscany to get the best coppa, sought out the best prosciutto in Parma, and bought the best local produce from Bergamo, putting most of it on the menu and preserving the rest. He made use of every culinary technique he had mastered and constantly researched and tested new ones.
By the end of that year, I was full of inspiration but completely out of money. As an American, I didn't have a work visa and couldn't get a legal job. Luckily, Frosio offered to keep me on at the restaurant and pay me under the table. I had told my dad I would send home money to pay off my school loans from the Culinary Institute of America. It wasn't easy to stay in Bergamo, but I had to. Important things were happening. With Claudia, it wasn't just another crush. And with my cooking, it wasn't just another chef job. I was maturing as a chef and as a person. I was falling in deep.
That winter, I moved in with Claudia and her mom.
Smoked Cod Salad with Frisée and Soft-Cooked Egg
•
Ribollita Ravioli with Borlotti Beans and Tuscan Kale
•
Squash Gnocchi with Amaretti and Mostarda
•
Swordfish Pancetta with Fennel Zeppole
•
Veal Liver Raviolini with Figs and Caramelized Onions
•
Oil-Poached Black Bass with Fresh Peas and Baby Tomatoes
•
Vanilla Crespelle with Caramelized Pineapple Sauce
•
Bomboloni with Vin Santo Crema
•
Cherry Shortcake with Cherry Meringata
**SMOKED COD SALAD** with **FRISÉE** and **SOFT-COOKED EGG**
Marco Pierre White used to serve a sunny-side up egg on panfried whitefish. I loved that. The idea here is similar, but the egg is soft-cooked in the shell. You peel off the top third of the cooked white to expose the egg yolk. When you slide your fork into it, the yolk flows out onto the fish, enriching it like a ready-made sauce. With some pancetta in the frisée salad, it makes a salty, crunchy, sharp, sweet, bitter, creamy start to a meal.
MAKES 6 SERVINGS
1½ pounds (680 g) skinless cod fillet, cut into 4 to 5 pieces of equal thickness
2 tablespoons (6 g) minced chives
3 tablespoons (45 ml) olive oil
3 tablespoons (42 g) unsalted butter, melted
Juice and zest of ½ lemon
Salt and freshly ground black pepper
6 large eggs
2 ounces (57 g) pancetta, cut into ⅛-inch (3-mm) cubes
1 tablespoon (15 ml) sherry vinegar
2 small heads frisée (about 3 ounces/85 g total), cleaned and torn into bite-size pieces
Maldon sea salt for garnish
Set up a smoker by putting 1 cup (50 g) of wood shavings or small chips, preferably oak or apple, in the bottom of a roasting pan and setting the pan on the stovetop so that the chips sit directly over the heat of the burner but the rest of the pan is not over the heat. Put a rack in the pan and line the rack with parchment paper. Heat the wood shavings over medium heat until they start to smoke. Set the fish on the paper away from the heat, cover the pan tightly with foil, and smoke until no longer translucent in the center, 15 to 20 minutes, making sure the wood shavings emit smoke the entire time.
Transfer the cod and its juices to a mixing bowl and add the chives, oil, butter, and lemon juice and zest. Season with salt and pepper and mix gently, adding more oil, if necessary, to make the mixture moist and glistening.
Bring 2 quarts (2 L) of water to a boil in a medium saucepan and fill a large bowl with ice water. Carefully add the eggs to the boiling water and boil for exactly 5 minutes. Immediately transfer with a slotted spoon to the ice water and let stand until cooled, about 3 minutes. When cooled, carefully remove the shell from each egg. Using a paring knife, lightly score the white around the more narrowly pointed end of the egg, and carefully peel off the white with your fingers just enough to expose the top one-quarter of the yolk without breaking it.
Cook the pancetta in a sauté pan over medium heat until lightly browned, 5 to 7 minutes. Pour the rendered fat into a measuring cup and measure out 3 tablespoons (45 ml). Discard the rest. Put the vinegar in a medium bowl and whisk in the rendered fat in a slow, steady stream until incorporated and thickened. Season with salt and pepper and then add the frisée and pancetta, reserving a few pancetta bits for garnish. Toss to coat.
For each serving, put a 6-inch (15 cm) ring mold on a small salad plate. Fill half of the ring mold with the cod mixture, forming a semicircle. Fill the other half of the mold with the frisée salad. Unmold the mixtures and place an egg directly on top. Sprinkle with the remaining pancetta bits, black pepper, and Maldon sea salt. For a more casual presentation, divide the salad and cod among plates and top each serving with an egg.
**RIBOLLITA RAVIOLI** with **BORLOTTI BEANS** and **TUSCAN KALE**
At La Brughiera, I noticed that Paolo Begnini would do the unexpected, riffing on classic preparations with a little twist of his own. I did the same thing here. _Ribollita_ is a classic Tuscan soup made from scraps of meat, "reboiled" the next day with vegetables, and served in ceramic bowls. I simply made pasta out of the same ingredients. The beans are simmered, mashed, and simply seasoned for the pasta filling. Vegetables are stewed in their own broth, with some extra olive oil to thicken the broth. I didn't introduce any new flavors. This is just ribollita in a different form. The ravioli freeze well, so you can enjoy them again, or just cut the recipe in half to make less.
MAKES 10 TO 12 SERVINGS
Ravioli and Filling:
2 cups (12 ounces/340 g) dried borlotti (cranberry) beans, soaked overnight in water to cover
1 ounce (28 g) chopped prosciutto scraps or pieces
1 medium-size yellow onion, chopped (1¼ cups/200 g)
2 medium-size carrots, chopped (1¼ cups/150 g)
2 large ribs celery, chopped (1¼ cups/125 g)
Salt and freshly ground black pepper
1 large egg
1¾ ounces (50 g) Parmesan cheese, grated (½ cup)
1 pound (450 g) Egg Pasta Dough (page 282), rolled into 4 sheets, each about 1/32 inch (0.8 mm) thick
Tuscan Kale:
3 tablespoons (45 ml) extra-virgin olive oil
1 large yellow onion, minced (2 cups/320 g)
4 medium-size carrots, minced (2 cups/250 g)
4 medium-size ribs celery, minced (2 cups/200 g)
1 garlic clove, smashed
2 bunches Tuscan kale (about 8 ounces/227 g total), trimmed of thick stems and chopped
2 cups (475 ml) dry white wine
1¼ cups (300 g) canned plum tomatoes, preferably San Marzano, cored and crushed by hand
1 ounce (28 g) chopped prosciutto scraps or pieces
Leaves from 2 sprigs fresh rosemary
Salt and freshly ground black pepper
4 tablespoons (57 g) unsalted butter
¼ cup (60 ml) extra-virgin olive oil
3½ ounces (100 g) Parmesan cheese, grated (1 cup), plus some for garnish
**For the ravioli and filling:** Drain the soaked beans and cover by 1 inch (2.5 cm) with fresh water in a medium saucepan. Wrap the prosciutto, onion, carrots, and celery in a large piece of cheesecloth and submerge in the pot. Cover and bring to a boil over high heat, and then lower the heat to medium, uncover, and cook until the beans are tender, 45 to 55 minutes. Let cool slightly, and then squeeze the cheesecloth bundle to press the liquid from the vegetables, discarding the bundle. Use a slotted spoon to transfer the beans to a blender in batches, pureeing them until smooth and adding just enough of the cooking liquid so that the beans will puree. The mixture should be thick like hummus and stick to a spoon turned upside down. Transfer to a bowl and stir in the egg and Parmesan. Season to taste with salt and pepper and mix well. Spoon the mixture into a resealable plastic bag and refrigerate for at least 1 hour or up to 2 days.
Lay a pasta sheet on a lightly floured work surface and dust with flour. Trim the ends to make them square, then fold the dough in half lengthwise and make a small notch at the center to mark it. Open the sheet so it lies flat again and spritz with water. Cut a corner from the bag of filling and squeeze the filling in ¾-inch (2-cm)-diameter balls in two rows along the length of the pasta, leaving a 1-inch (2.5-cm) margin around each ball and stopping at the middle of the sheet. Lift up the empty side of the pasta sheet and fold it over to cover the balls of filling. Gently press the pasta around each ball of filling to seal. Use a 2½-inch (6-cm) round, fluted ravioli cutter or a similar size biscuit cutter to cut the ravioli. Repeat with the remaining pasta dough and filling. Place the ravioli in a single layer on parchment paper and freeze until firm, then keep frozen in a resealable plastic bag for up to 1 week before cooking. You should have about 120 ravioli.
**For the Tuscan kale:** Heat the oil in a large saucepan over medium heat. Add the onion, carrots, celery, and garlic, and cook until the carrots are soft but not browned, 6 to 8 minutes. Add the kale, wine, tomatoes, prosciutto, rosemary, and just enough water to cover the kale. Stew over low heat until the kale is very tender, 12 to 15 minutes. Season to taste with salt and pepper.
**To finish:** Bring a large pot of salted water to a boil. Add ten to twelve ravioli per serving, and cook just until tender, 4 to 5 minutes. Work in batches, if necessary, to prevent overcrowding.
Meanwhile, heat the butter and oil in a large, deep sauté pan over medium heat. Add the kale mixture and ¾ cup (175 ml) of pasta water. Stir the ingredients until the sauce is creamy and then add the Parmesan and cooked pasta in batches. Swirl the pasta and sauce together in the pan until the sauce coats the pasta. Divide among plates and garnish with Parmesan.
**SQUASH GNOCCHI** with **AMARETTI** and **MOSTARDA**
Claudia's favorite dish at La Brughiera is porcini-stuffed squash gnocchi with Parmigiano fonduta. They use almond flour in the gnocchi dough, roll it out, and then stuff it with porcini. Whenever we eat there, she asks me to call ahead and request the dish. I've tried numerous times but I just can't replicate it. So I came up with my own squash gnocchi flavored with crushed amaretti cookies and _mostarda_ (mustard-flavored fruit relish). I like them both.
MAKES 4 TO 6 SERVINGS
4 ounces (1 stick/113 g) unsalted butter, divided
1 medium-size butternut or longneck squash (about 2 pounds/1 kg), peeled, seeded, and diced (about 6 cups)
Salt and freshly ground black pepper
1 large egg
1 cup (125 g) _tipo_ 00 flour (see page 277) or all-purpose flour, plus about 1 cup (125 g) for dusting
½ cup (54 g) plain, dry breadcrumbs
3 tablespoons (22 g) amaretti cookie crumbs
2¾ ounces (75 g) Parmesan cheese, grated (¾ cup), divided
8 large sage leaves
¼ cup (60 ml) minced mostarda
Melt 4 tablespoons (57 g) of the butter in a large deep sauté pan over medium heat. Lower the heat to medium-low and add the diced squash. Season with salt and pepper, cover, and cook for 1 hour, stirring every 10 minutes or so to make sure the squash does not brown. Uncover and cook until the pan goes dry and the squash is tender but not browned, 10 to 15 minutes more. The squash should be very dry. Transfer it to the bowl of a stand mixer fitted with the paddle attachment, and mix on medium speed until the squash is fairly smooth, 2 to 3 minutes. Mix in the egg, flour, breadcrumbs, amaretti crumbs, and ½ cup (50 g) of the Parmesan on low speed, scraping down the bowl as necessary. The mixture should be moist like wet cookie dough. Flour your hands, pinch off chunks the size of large marbles, coat them in flour, and then roll into balls between your palms.
Bring a large pot of salted water to a boil. Add the gnocchi, and cook until they are tender and float, 5 to 6 minutes.
Meanwhile, combine the remaining 4 tablespoons (57 g) of butter and sage in a sauté pan over medium heat. Cook until the sage is lightly fried, 2 to 3 minutes. Add about ½ cup (120 ml) of gnocchi cooking water and simmer, shaking the pan vigorously, until slightly thickened, about 5 minutes. Stir in the mostarda.
Drain the gnocchi and divide among plates. Pour on the sauce and sprinkle with the remaining Parmesan.
**SWORDFISH PANCETTA** with **FENNEL ZEPPOLE**
Traditional Italian bacon, pancetta, is made with pork belly. But why not use other bellies? Most animals store enough fat in their bellies to stand up to the curing process. When swordfish belly is cured, it slices up into these beautiful pale pink ribbons with a rich mouthfeel like lardo. Some shaved fennel and a golden citrus vinaigrette make it a gorgeous plate. Ask for swordfish belly at any fishmonger's. It's usually a throwaway part of the fish, so they'll be happy to sell it to you. Just call your fish market ahead of time and ask them to save the swordfish belly for you. If they don't have any swordfish in, try tuna belly or salmon belly.
MAKES 16 SERVINGS
Swordfish Pancetta:
1 pound (450 g) swordfish belly
5 teaspoons (14 g) kosher salt
½ teaspoon (2 g) granulated sugar
¼ teaspoon (1.5 g) curing salt #2 (see page 277)
¼ teaspoon (0.5 g) toasted and ground fennel seeds
¼ teaspoon (0.5 g) ground coriander
⅛ teaspoon (0.25 g) ground red pepper
½ small garlic clove, pressed into a paste
Fennel Zeppole:
1 cup (235 ml) whole milk
¼ cup (50 g) granulated sugar
1 tablespoon (6 g) toasted and coarsely ground fennel seeds
½ teaspoon (3 g) salt
¼ teaspoon (0.5 g) freshly ground black pepper
4 ounces (1 stick/113 g) unsalted butter
2¼ cups (280 g) _tipo_ 00 flour (see page 277) or all-purpose flour
4 large eggs
Oil, for frying
To Serve:
1 cup (235 ml) Citrus Vinaigrette (page 277)
2 fennel bulbs, with fronds
¼ cup (15 g) chopped fresh flat-leaf parsley
Salt
**For the swordfish pancetta:** Rinse the swordfish belly and pat it dry. Combine the kosher salt, sugar, curing salt, fennel seeds, coriander, red pepper, and garlic in a medium bowl. Add the swordfish to the bowl, patting in the cure to completely cover the fish on all sides. Cover the bowl and refrigerate for 36 hours. Remove the fish from the bowl, rinse it, and pat it dry. Refrigerate in a covered container for up to 1 month.
**For the fennel zeppole:** Combine 1 cup (235 ml) of water with the milk, sugar, fennel seeds, salt, pepper, and butter in a medium saucepan and bring to a boil over medium-high heat. Whisk in the flour until incorporated, and stir with a wooden spoon until the dough forms a ball and a skin forms on the inside of the pot. Transfer the dough to a mixer fitted with the paddle attachment and add the eggs, one at a time, mixing on low speed until each egg is incorporated. On a floured work surface with floured hands, pinch up golf ball–size balls of dough. You should get about thirty-two. Roll each ball into a log and gently twist each log in opposite directions from the center into a twisted log 5 to 6 inches (13 to 15 cm) long. Form the twisted log into a circle, but overlap the ends of the circle to make an X, pinching the dough gently at the X. Each zeppola should look like a loop with an X, kind of like a pretzel that hasn't been twisted at the X. The zeppole can be transferred to a shallow, parchment-lined container, covered, and refrigerated for up to 2 hours.
Heat the oil to 350°F (175°C) in a deep fryer or large, heavy pot. Deep-fry the zeppole until golden brown, 2 to 3 minutes, adjusting the heat to maintain a constant 350°F (175°C) temperature. Drain on paper towels and immediately season with salt.
**To serve:** Pour the citrus vinaigrette into a bowl. Trim the fennel, discarding the tough white core but reserving the fronds. Shave the fennel lengthwise on a mandoline into thin strips. Add to the vinaigrette, along with the parsley, and season with salt. Toss to combine.
Arrange the shaved fennel in a single layer on a wooden platter or plates. Very thinly slice the swordfish and drape over the fennel. Place the zeppole in the center and garnish the plate with small fennel fronds. Drizzle the swordfish with the remaining vinaigrette in the bowl.
**Note**
**If you want to make eight servings instead of sixteen, just cut the zeppole recipe in half. But leave the pancetta as is. Trust me. You'll be slicing off ribbons of swordfish pancetta almost every day and dipping them in the citrus vinaigrette.**
**VEAL LIVER RAVIOLINI** with **FIGS** and **CARAMELIZED ONIONS**
I had been making liver pâtés for a while and always folded in butter to help them keep their shape. It seemed like it would make an awesome pasta filling, so I heated some pâté in a pan and it melted nice and slow. Perfect. When you put a fork into these raviolini, they're super-creamy, plus you get some sweetness from the onions and figs. You could make this with duck liver or chicken liver if you can't find veal.
MAKES 4 TO 6 SERVINGS
Veal Liver Raviolini:
1¼ pounds (567 g) veal liver
1 ounce (28 g) pancetta, cut into ½-inch (1.25-cm) cubes
1 small yellow onion, chopped (⅔ cup/106 g)
Leaves from 1 large sprig fresh rosemary
1½ tablespoons (22 ml) brandy
8 ounces (2 sticks/227 g) cold unsalted butter, cut into cubes
1¾ ounces (50 g) Parmesan cheese, grated (½ cup)
Salt and freshly ground black pepper
8 ounces (227 g) Egg Pasta Dough (page 282), rolled into 2 sheets, each about 1/32 inch (0.8 mm) thick
Figs and Caramelized Onions:
¼ cup (60 ml) olive oil
5 medium-size yellow onions, cut into half-moon slices (6½ cups/1 kg)
2 ounces (57 g) lardo, cut into strips
4 tablespoons (57 g) unsalted butter
1 pound (450 g) small fresh figs
1½ teaspoons (7 ml) balsamic vinegar
2 tablespoons (8 g) chopped fresh rosemary
Salt and freshly ground black pepper
1 ounce (28 g) Parmesan cheese, grated (⅓ cup), for garnish
**For the veal liver raviolini:** Cut the veal liver into pieces about two fingers wide and set aside. Cook the pancetta in a large sauté pan over medium heat until some of the fat renders and the pancetta looks translucent, 3 to 4 minutes. Transfer the pancetta to a bowl. Raise the heat to medium-high and when the rendered fat is hot, add the veal liver and sear until nicely browned on all sides, 6 to 8 minutes total. Return the pancetta to the pan, along with the onion and rosemary, and cook until the onion is translucent and the liver is cooked through, 3 to 4 minutes more. Pour in the brandy, lower the heat to medium, and cook until the liquid reduces in volume by about three-quarters, 4 to 5 minutes. Remove from the heat and transfer the mixture to a bowl; cover and refrigerate until very cold, at least 1 hour.
Combine the veal liver mixture and cold butter cubes in a food processor, and puree until very smooth, stopping to scrape down the sides a few times. For the smoothest texture, pass the mixture through a tammy cloth (woolen strainer), triple layer of cheesecloth, or fine-mesh sieve. Add the Parmesan, season with salt and pepper, and pulse briefly to combine. Spoon the filling into a resealable plastic bag and refrigerate for at least 1 hour or up to 1 day.
Lay a pasta sheet on a lightly floured work surface and dust with flour. Trim the ends to make them square, then fold the dough in half lengthwise and make a small notch at the center to mark it. Open the sheet so it lies flat again and spritz with water. Cut off a small corner of the resealable plastic bag so you can pipe the filling on the pasta. Beginning at the left-hand side, pipe two rows of filling along the length of the pasta, stopping at the center of the sheet. For each row, pipe rectangular strips of filling about 1½ inches (4 cm) long by ½ inch (1 cm) wide with a ½-inch (1 cm) margin around each strip. Lift up the right-hand side of the pasta sheet and fold it over to cover the strips of filling. Gently press the pasta around each strip of filling to seal. Use a fluted ravioli cutter or sharp knife to cut the ravioli into rectangles about 2½ inches (6 cm) long by 1½ inches (4 cm) wide. Repeat with the remaining pasta dough and filling. You should have about thirty-two raviolini.
**For the figs and caramelized onions:** Heat the olive oil in a large sauté pan over medium-high heat. When hot, add the onions, shaking the pan to distribute the hot oil. Lower the heat to medium and cook, stirring frequently, until the onions shed their water and go from translucent to light golden to deep caramel brown, 30 to 40 minutes total. To keep the onions from browning unevenly, stir in a little water now and then. Scrape into a bowl and set aside.
Put the lardo and butter in the pan, and cook over medium heat until the lardo renders its fat and the butter turns golden brown, 10 to 12 minutes. Cut the figs lengthwise into quarters (or eighths if the figs are large) and add to the pan, along with the caramelized onions, vinegar, and rosemary. Season with salt and pepper to taste and keep warm over very low heat.
When ready to serve, bring a large pot of salted water to a boil. Drop in the raviolini, quickly return the water to a boil, and cook the pasta until tender yet firm, about 2 minutes. Use a spider strainer or slotted spoon to transfer the raviolini to plates.
Spoon some warm sauce over each plate and sprinkle with Parmesan.
**OIL-POACHED BLACK BASS** with **FRESH PEAS** and **BABY TOMATOES**
Poaching fish in olive oil is a genius technique. It's something Paolo Begnini did all the time. The oil keeps the fish moist but doesn't make it taste watery, as poaching in water sometimes does. When I got back to the States and started cooking at Osteria, we were getting in these big, beautiful black bass in the spring. They had gorgeous, clear flesh and a clean, briny aroma. I knew exactly what to do with them. With spring onions, peas, and first-of-the-season tomatoes, this is very much a springtime dish. But if you can't find black bass, you could also make it with branzino, wild striped bass, or even snapper. It helps to start heating both the oil and the blanching water at the same time, so the overall timing of the dish works out.
MAKES 4 SERVINGS
Bass and Poaching Oil:
4 black bass fillets, skin on, about 6 ounces (170 g) each
3 cups (750 ml) grapeseed oil
1½ cups (375 ml) extra-virgin olive oil
10 to 15 sprigs fresh thyme
3 bay leaves
Peas and Tomatoes:
1½ pounds (680 g) English peas, shelled
2 cups (340 g) baby grape or pear tomatoes
1 cup (235 ml) extra-virgin olive oil, divided
4 to 6 spring onions, trimmed and julienned
Salt and freshly ground black pepper
¼ cup (60 ml) freshly squeezed lemon juice
2 teaspoons (2.5 g) torn fresh tarragon
2 teaspoons (2.5 g) torn fresh chervil
Maldon sea salt for garnish
**For the bass and poaching oil:** Rinse the bass, and pour the oils into a deep sauté pan big enough to hold all of the fish. Roll the thyme and bay leaves in cheesecloth and tie the bundle with kitchen string. Add the sachet to the poaching oil and bring the mixture to 220°F (105°C) over medium heat. Add the bass; the oil temperature will drop. Adjust the heat so that the oil temperature stays at 190°F (88°C). Poach the bass at 190°F until just a little moist and translucent in the center, about 130°F (54°C) internal temperature, 7 minutes or so. Carefully transfer the fish to paper towels to drain.
**For the peas and tomatoes:** Meanwhile, bring a medium pot of salted water to a boil and fill a large bowl with ice water. Add the peas and blanch for 1½ minutes. Transfer to the ice water to stop the cooking. When cool, use your fingers to slip the peas from their skins. You should have about 1½ cups (218 g) shelled peas.
Drop the tomatoes into the boiling water and blanch for 10 seconds. Transfer to the ice water to stop the cooking. When cool, slip the tomatoes from their skins. Set aside.
Heat ¼ cup (60 ml) of the oil in a medium sauté pan over medium heat. Add the onions, and cook until soft but not browned, 3 to 4 minutes. Add the skinned tomatoes and warm through, 1 to 2 minutes. Season with salt and pepper.
In another pan, heat the remaining ¾ cup (175 ml) of oil and the lemon juice over medium heat. Add the peas and herbs and warm through, 1 to 2 minutes. Season with salt and pepper.
Spread the onions in the center of each plate. Place the fish on top and the tomatoes around the fish. Spoon the peas over the top and garnish with sea salt and black pepper.
**VANILLA CRESPELLE** with **CARAMELIZED PINEAPPLE SAUCE**
My wife is the queen of crêpes. She can make them with her eyes closed. In culinary school, I was taught to make crêpes with barely any color. But she gets the pan nice and hot and the crêpes get beautifully browned all over. When I tasted her crêpes (called _crespelle_ in Italy), they had so much more character than the ones I was used to making. The real star here is the sauce, made from diced pineapple cooked in butter until the juice evaporates and the pineapple turns deep amber. The caramelized flavor is unreal. Just watch your eyebrows: when you add the rum, it will flambé. I made a generous yield here because the crespelle and sauce both keep well—and they're so good you might have two servings.
MAKES 12 TO 14 SERVINGS
Crespelle:
8 large eggs
2 vanilla beans, split and scraped
4 cups (500 g) _tipo_ 00 flour (see page 277) or all-purpose flour
4 cups (1 L) whole milk
Grapeseed oil, for cooking the crespelle
Caramelized Pineapple Sauce:
8 ounces (2 sticks/227 g) unsalted butter
1 pineapple, peeled, cored, and diced small
2 cups (400 g) granulated sugar
½ cup (120 ml) dark rum
1½ cups (375 ml) heavy cream
Confectioners' sugar, for dusting
**For the crespelle:** In a medium bowl, whisk together the eggs and vanilla. Whisk in the flour to make a very thick batter that is difficult to whisk. Gradually whisk in the milk so that the batter is thin enough to just barely coat the back of a spoon. Let stand for 5 minutes.
Heat an 8-inch (20-cm)-diameter pan with a little grapeseed oil over medium heat until it's almost smoking. If you have two pans, you can cook two crespelle at once. Pour out any excess oil, and then, holding the pan handle, pour in 3 to 4 tablespoons (45 to 60 ml) of batter, and quickly tilt the pan in a circle to spread the batter as thinly as possible across the bottom of the pan. Cook until the top is set and the edges are lightly browned and starting to curl, 1 to 2 minutes. Flip, and cook the other side for 1 minute. If using immediately, stack the crespelle on a plate. If you are making the crespelle ahead of time, stack them between sheets of waxed paper, let cool, then refrigerate for up to 4 days or freeze for up to 4 weeks.
**For the caramelized pineapple sauce:** Melt the butter in a large saucepan over medium-high heat. Add the pineapple, and cook until most of the juice evaporates and the pineapple turns light golden brown, about 15 minutes. Stir in the sugar, and cook until it turns a light amber color (355°F/180°C on a candy thermometer), about 5 minutes. Stand back and add the rum; it will sputter, may well ignite, and the sugar will be extremely hot. When the sputtering or flames die down, carefully stir in the cream, remove the pan from the heat, and let cool.
For each serving, heat about ⅓ cup (90 ml) of sauce in a sauté pan. Add two crespelle, and cook until heated through, about a minute. The crespelle will fold over themselves in the pan, which is fine. Use tongs to fold the crespelle into quarters and transfer to a plate. Pour the sauce over the top and dust with confectioners' sugar.
**BOMBOLONI** with **VIN SANTO CREMA**
Vin santo is a Tuscan dessert wine that ranges in sweetness from bone dry to sherry-like to Madeira-like. It's usually enjoyed with _cantucci_ (almond cookies similar to biscotti) for dipping into the wine to soak it up. When I did a dinner in Philadelphia for a regular Tuscan customer named Paolo Paoletti, this was the dessert. I stuffed the doughnuts with vin santo _crema_. He loved it. But if you don't want to go through the trouble of stuffing the doughnuts, you could serve them with the crema on the side.
MAKES 6 TO 8 SERVINGS
Starter:
1½ packed tablespoons fresh yeast (28 g), or 1 tablespoon plus ¾ teaspoon (15 g) active dry yeast
2½ teaspoons (10 g) granulated sugar
2½ tablespoons (21 g) bread flour
⅓ cup plus 2 tablespoons (110 g) warm whole milk
Dough:
½ cup (100 g) granulated sugar
3 large eggs
⅓ cup (80 ml) unsalted butter, melted and cooled
3⅔ cups (500 g) bread flour
1¼ teaspoons (7.5 g) salt
Vin Santo Crema:
7 tablespoons (54 g) _tipo_ 00 flour (see page 277) or all-purpose flour
6 tablespoons (75 g) granulated sugar
6 large egg yolks
1½ cups (375 ml) whole milk
½ cup (120 ml) vin santo
½ vanilla bean, split and scraped
To Serve:
Oil, for frying
¾ cup (150 g) granulated sugar, for dusting
**For the starter:** Use a wooden spoon to stir together the fresh yeast, sugar, flour, and milk in the bowl of a stand mixer, breaking up the yeast. If using active dry yeast, sprinkle it over the warm milk in the bowl, let stand for 5 minutes or until foamy, then stir in the remaining starter ingredients. Cover loosely and let stand at room temperature for 30 to 35 minutes.
**For the dough:** Add the sugar, eggs, and butter to the starter and fit the dough hook onto the mixer. Mix on medium speed until combined, 1 to 2 minutes. With the mixer running, gradually add the flour and salt and mix until the dough is sticky and stretchy, 5 to 6 minutes. Transfer the dough to a lightly buttered bowl, cover, and let rise in a warm spot until doubled in size, about 1½ hours.
Turn the dough out onto a floured work surface and pat or roll the dough to an even ½-inch (1.25-cm) thickness. Use a 2-inch (5-cm) round cookie cutter to punch out eighteen to twenty disks. Set them on a parchment-lined sheet pan, cover loosely, and refrigerate until partially risen, about 1 hour.
**For the vin santo crema:** Sift together the flour and sugar into a small bowl. Whisk in the egg yolks until smooth. Fill a bowl with ice water.
Combine the milk, vin santo, and vanilla in a medium saucepan and bring to a boil over medium-high heat. Don't worry if the mixture looks curdled; it will become smooth when you whisk in the eggs. Temper in the egg mixture by whisking about ¼ cup (60 ml) of the milk into the eggs until incorporated, then another ¼ cup (60 ml). Pour the mixture back into the saucepan, set over low heat, and cook gently, whisking constantly, until thickened, 5 to 7 minutes.
Cool the pan bottom by setting it into the ice water, whisking the crema to cool it down. When barely warm, press plastic wrap onto the top of the crema and refrigerate until cold, at least 1 hour or up to 2 days. When cold, spoon the crema into a pastry bag or resealable plastic bag.
**To serve:** Heat the oil to 350°F (175°C) in a deep fryer or deep pot. Add the dough disks in batches to prevent overcrowding and fry until golden brown, 3 to 4 minutes, flipping the doughnuts to fry all sides and adjusting the heat to maintain a constant 350°F (175°C) oil temperature. Use a spider strainer or slotted spoon to transfer the doughnuts to paper towels to drain.
When cool enough to handle, poke a ¼-inch (6-mm) hole into the side of each bombolone, snip a corner from the bag, and pipe the crema into the bomboloni until stuffed. Roll the stuffed bomboloni in sugar and serve.
**CHERRY SHORTCAKE** with **CHERRY MERINGATA**
Every pizza place in Italy serves meringata. It's usually something they buy premade from a company called Bindi. _Meringata_ is a crunchy cake of meringue stuffed with _fiordilatte_ gelato (basic white gelato) and _frutti di bosco_ (mixed berries). The first time I had it was in Villa d'Almè and it has since become a favorite dessert that my wife and I share. We usually order two of them! I don't know what kind of preservatives they add to make the cake and ice cream last. My meringue cake kept melting in the freezer! So now I chop up most of the meringue and crumble it over the dessert. There are several components here but they can all be made ahead and the final result is absolutely fantastic.
MAKES 6 TO 8 SERVINGS
Cherry Meringata:
7 large egg whites
2¼ cups (450 g) granulated sugar
Pinch of salt
1 vanilla bean, split and scraped
5 cups (1.25 L) Fiordilatte Gelato (page 287), ready to churn
Polenta Shortcake:
6 ounces (1½ sticks/170 g) unsalted butter, softened
¾ cup plus 2 tablespoons (175 g) granulated sugar
½ vanilla bean, split and scraped
3 large eggs, at room temperature
10 large egg yolks, at room temperature
¾ cup plus 1 tablespoon (100 g) _tipo_ 00 flour (see page 277) or all-purpose flour
⅔ cup (105 g) coarse yellow cornmeal (polenta)
1 teaspoon (4.5 g) baking powder
½ teaspoon (3 g) salt
Cherry Sauce:
1 pound (450 g) fresh cherries
¾ cup (175 ml) glucose syrup or light corn syrup
½ cup (100 g) granulated sugar
**For the cherry meringata:** Heat the oven to 190°F (88°C) and line a baking sheet with parchment paper. Whisk together the egg whites, sugar, salt, and vanilla in the top of a double boiler or in a heatproof bowl set over a saucepan of simmering water. Gently heat the ingredients to 140°F (60°C), beating with a whisk or electric mixer on medium-low speed, 4 to 5 minutes. Then whisk or whip the mixture in the bowl on high speed until medium-stiff peaks form when the beater or whisk is lifted, 2 to 4 minutes. Remove the bowl from the heat and beat on high speed until light and fluffy, 2 minutes more. Spoon the mixture into a resealable plastic bag. Press out the air, then twist the bag around the mixture, snip off a corner, and pipe the mixture into 2-inch (5-cm)-diameter mounds on the prepared baking sheet. Bake for 8 hours or overnight until the meringue is crisp and dry. Remove from the oven and let cool completely. Store in a covered container in a cool, dry place for up to 2 days. (Avoid making the meringue in a humid environment, as it will become sticky during cooling.)
Make the Fiordilatte Gelato as directed. When the ice cream mixture is almost finished churning and nearly firm, chop up the oven-dried meringue. Add 2 cups (475 ml) of the chopped meringue to the ice cream mixture, letting it become incorporated into the mixture. Continue freezing according to the manufacturer's directions, then store the gelato in the freezer until firm, at least 2 hours.
**For the polenta shortcake:** Preheat the oven to 350°F (175°C). Cream the butter, sugar, and vanilla in a stand mixer on medium speed until light and fluffy, 3 to 4 minutes. Add the eggs and egg yolks, one at a time, letting each become incorporated before adding the next. Whisk together the flour, polenta, baking powder, and salt in a small bowl. Change to low speed and slowly beat the flour mixture into the egg mixture just until it is incorporated.
Spread the batter on a half sheet pan (17 x 12 inches/43 x 30 cm) and bake until set and golden brown, 12 to 14 minutes. Remove from the oven and let cool in the pan on a rack.
**For the cherry sauce:** Pit the cherries and place them in a medium saucepan. Add the glucose syrup, sugar, and ½ cup (120 ml) of water. Bring to a boil over high heat and boil until the mixture thickens slightly and reaches 220°F (104°C) on a candy thermometer, 10 to 12 minutes.
To assemble, cut the shortcake into 3-inch (7.5-cm) circles or squares with a biscuit cutter or knife. You should have twelve to sixteen pieces. Set one piece on each plate. Add a generous layer of the gelato, a spoonful of the cherry sauce, and some of the remaining chopped meringue. Top with another piece of shortcake, compressing gently, and a generous spoonful of cherry sauce. Scatter some of the remaining chopped meringue over the top and around the plate.
* * *
ALBA
IF YOU CAN'T SMELL THE TRUFFLES, YOU MUST BE DEAD
EARTHY, RICH HAZELNUTS FROM PIEMONTE. COARSE GROUND POLENTA FROM LOMBARDY. DARK ROASTED COFFEE FROM SICILY. BRACING BLACK LICORICE FROM PUGLIA. AT THE SALONE DEL GUSTO, A BIENNIAL SLOW FOOD FESTIVAL IN TURIN, EVERY REGION OF ITALY HAS ITS OWN SECTION. IT'S LIKE AN ITALIAN VERSION OF DISNEY'S EPCOT THEME PARK BUT LIGHT-YEARS BETTER, WITH FOOD PRODUCERS WHO ACTUALLY LIVE AND WORK IN THAT REGION.
Claudia schooled me in each region. "Taste this," she said, holding out a slab of glistening, fatty porchetta from Lazio. And later, a shard of savory pecorino from Sardinia. And then a sip of Jermann Vintage Tunina, a golden, honey-scented wine from Friuli. She wanted me to taste how Sardinian pecorino is less salty than Pecorino Romano. How olive oils from different parts of the country have completely different aromas. Every two years in October, thousands of people from all over the globe come to the Salone del Gusto, a worldwide celebration of traditional, local foods that's open to the public. The first year I went was also the first year of Terra Madre (literally, "Mother Earth"), an offshoot of Slow Food International that promotes sustainable food communities around the world.
Tasting, sharing, and talking about amazing food for half a day was a mind-blowing, palate-bending experience. But we hadn't even had lunch yet. Claudia and I met up with Jeff Benjamin, a partner at Vetri Ristorante in Philadelphia, who was attending some wine classes at Slow Food that year. We'd planned to have lunch together in Alba sometime in the afternoon. Why Alba? Because it was truffle season, and white truffles are the epitome of slow food: a rare find, difficult to capture as convenience food, and prized from only a few places around the world, most of all, Alba.
I drove Claudia's red Mini Cooper south from the bustling city of Turin through the vineyard-laden hills of Piedmont to the tiny medieval town of Alba. During the fall, you can tell you're close just by opening the windows. The unmistakable musk of truffles seeps into the car and captivates your senses. Alba holds truffle fairs every weekend from late September to mid-November. When you step outside your car and into Piazza Garibaldi, the truffle aroma completely engulfs you. The "white diamonds of the kitchen" are everywhere, displayed on cloth-covered tables, and ranging in size from marbles to golf balls to grapefruits. All the vendors have scales. The goods are handled with care. And even a tiny amount is very expensive—about $115 (85 euros) per ounce (28 g). There's almost something illicit about it, as if they're selling some kind of drug.
During truffle season, the Alba streets also fill up with aromas of wood smoke, grilled sausages, pungent cheeses such as Castelmagno, roasting chestnuts in iron pans, and steaming cinnamon-scented wine. Just an hour earlier, we'd gorged ourselves at Salone del Gusto, but thank God we had a lunch reservation. During festival weekends, you won't get in anywhere without one.
We walked further into the old city, up Via Cavour, a V-shaped cobblestone street that leads to Piazza Risorgimento, the main town square. In the square, dozens more vendors sold torrone, porcini, salame, pears, and more truffles in one place than I had ever seen in my life. We finally arrived at Osteria dell'Arco through a stone archway in Piazza Savona. The glass door proudly displayed a Slow Food snail logo, a good sign.
To start, I ordered veal tartare. It came mixed with olive oil and black pepper, rock salt, and, of course, white truffles. Jeff ordered vitello tonnato and Claudia had warm bagna cauda. Our pasta course was tajarin, a Piedmont specialty hand cut into little strands slightly wider than angel hair. The pasta was yellow-orange like the morning sun, and I asked the waiter how it was made. "We use forty egg yolks per kilo of flour," he said. No wonder. The Alba chickens feed only on insects and grass, so the yolks become bright orange—sometimes red. They toss the tajarin with nothing but pasta water, olive oil, butter, and cheese. Simple. The waiter came to the table with a scale, a few fresh white truffles, and a truffle slicer. He weighed the truffle and started slicing it over the tajarin until I said, "Stop." I was so hungry for truffles I didn't want him to stop at all! In Alba, the cost is about half of what it is in the United States, so we splurged and kept the truffles coming. We had roasted rabbit with polenta and truffles. Sliced roast duck with radicchio and truffles. It was a truffle orgy! We drank a bottle of La Spinetta Barbaresco, one of my favorite Piedmont wines, and watched the festival through the restaurant window. "They use dogs to hunt truffles in Italy," Jeff told me. "In France, they use pigs. But in Italy, the pigs just eat the truffles when they find them." I laughed and added, "That's because Italian truffles taste better!" Nods of agreement all around.
Back out on the streets, vendors held their prize specimens to the nose of customers as they've been doing for decades. Watching them and thinking about all the incredible foods we'd eaten that day, I came to respect the local products of Italy more than ever. Nowhere else in the world can you experience white truffles like those in Alba. Nowhere else in the world can you experience farinata as you do in Genoa. Nowhere do you find culatello quite as rich and moist as that in the town of Zibello.
Every village in Italy, no matter how small, holds an annual food festival, or _sagra_, to celebrate these foods. Alba has held its truffle sagra every year since 1930. It's how the community pays homage to what grows in the area and displays its local pride, whether it's peaches in Canale, hazelnuts in Cortemilia, rabbits in Brembio, radicchio in Treviso, torrone in Cremona, gnocchi in Castel del Rio, or bilberries in Piazzatorre. It's at the sagre that Italian food really comes to life. Turin's slow food festival is like a mega-sagra for the whole country. So is the cheese festival held in nearby Bra on alternating years. At these food festivals, Italians share what's good and delicious in their little corner of the world.
Italy's food festivals helped me understand that food is woven into the cultural fabric of every town, every region, and every country around the world. Sagre are not only a source of pride but also a lifeline to other communities in faraway places. Food is common ground. It's one of the best ways to get to know someone you've never met or somewhere you've never been. As Slow Food's founder, Carlo Petrini, wrote, "Eating someone's food is easier and more immediate than speaking his language."
Truffles and Eggs
•
Veal Tartare with Shaved Artichokes and White Truffle
•
Porcini Zuppa with Bra Cheese Fonduta
•
Polenta Caramelle with Raschera Fonduta and Black Truffle
•
Potato Gnocchi with Castelmagno Fonduta and White Truffle
•
Cotechino-Stuffed Quail with Warm Fig Salad
•
Oven-Roasted Rabbit Porchetta with Peperonata
•
Bonèt
•
Pistachio Flan
•
Torrone Semifreddo with Candied Chestnuts and Chocolate Sauce
**TRUFFLES** and **EGGS**
If you like scrambled eggs, then you have to try this dish. The truffles send it over the top. And the eggs themselves are the softest, creamiest, most custardy eggs you'll ever taste. They're mixed with fresh truffles and cooked very slowly over the lowest possible heat, stirred constantly to keep them from scrambling. They come out soft as pudding. It's the best open-face breakfast sandwich ever.
MAKES 2 SERVINGS
2 tablespoons (28 g) unsalted butter
4 large eggs
1 ounce (28 g) finely chopped fresh white truffles, 2 tablespoons (30 ml) white truffle paste, or 2 teaspoons (10 ml) white truffle oil
Salt and freshly ground black pepper
2 slices rustic bread, toasted (preferably on a wood grill)
Melt the butter in a medium nonstick pan over the lowest possible heat. Beat the eggs, truffles, and salt and pepper in a small bowl, and then pour into the pan. Cook very slowly, stirring gently and constantly with a rubber spatula until the eggs get creamy, 8 to 10 minutes. It will take a lot of patience because the eggs should not form large curds. When they are done, they should coat a spoon and look loose and creamy like custard.
Spoon the custard over the toasted bread, and if you want to go crazy, shave on some more fresh truffles.
**VEAL TARTARE** with **SHAVED ARTICHOKES** and **WHITE TRUFFLE**
Most veal in Italy is called _vitellone_ and tastes a little different than US veal. _Vitellone_ comes from calves eighteen to twenty months old and the meat is darker and more flavorful than American milk-fed veal—somewhere between US veal and beef. But any type of veal works here. They all taste great with truffles. During truffle season in Alba, you'll find some version of this dish on every trattoria menu.
MAKES 4 SERVINGS
Truffle Vinaigrette:
¾ cup (175 ml) blended oil (page 276)
3 tablespoons (45 ml) freshly squeezed lemon juice
1 teaspoon (5 ml) white truffle paste, or ¼ teaspoon (1 ml) white truffle oil
Salt and freshly ground black pepper
Veal Tartare and Shaved Artichokes:
8 ounces (227 g) veal shoulder
4 baby artichokes, trimmed
2 tablespoons (30 ml) freshly squeezed lemon juice
¼ cup (15 g) chopped fresh flat-leaf parsley
Salt and freshly ground black pepper
¾ cup (175 ml) olive oil
1 small block of Parmesan cheese, for shaving
**For the truffle vinaigrette:** Combine the oil, lemon juice, and truffle paste in a small blender or food processor and blend until combined (or vigorously whisk together the ingredients in a medium bowl). Season to taste with salt and pepper. Use immediately or refrigerate for up to 2 days.
**For the veal tartare and shaved artichokes:** Chill four plates in the freezer. Put a small bowl and all the parts of a meat grinder in the freezer for 20 minutes. When the grinder parts are cold, grind the veal through the medium (¼-inch/6-mm) die, catching it in the chilled bowl. Cover tightly with plastic wrap and refrigerate for up to 1 hour.
Snap off and discard all of the fibrous outer leaves from the artichokes. Using a paring knife, peel the artichokes so you are left with only the tender white hearts, which will be about 1 x ½ inch (2.5 x 1.25 cm) in size. Combine the lemon juice with 2 cups (475 ml) of water in a medium bowl. Immediately drop each artichoke heart into the acidulated water to keep them from discoloring. Removing one artichoke heart at a time, thinly slice each lengthwise on a mandoline (an inexpensive handheld one works fine). As you work, put the shaved artichokes back in the acidulated water to prevent discoloration.
Drain the shaved artichokes, pat them dry, and then add the parsley and ½ cup (120 ml) of the truffle vinaigrette, stirring to combine. Season to taste with salt and pepper.
Stir the olive oil into the chilled ground veal and season to taste with salt and pepper.
**For each serving,** place a 4-inch (10-cm) ring mold on a cold plate and add one-quarter of the veal mixture, pressing gently to spread through the mold. If you don't have a ring mold, create a 4-inch (10-cm) round, ¼-inch (6-mm) thick circle of veal with a table knife. Spoon one-quarter of the artichoke mixture over the top of each veal circle. Use a vegetable peeler to shave a few pieces of Parmesan from the block over the artichokes. Remove the ring mold and drizzle some truffle vinaigrette around the plate. Serve immediately.
**PORCINI ZUPPA** with **BRA CHEESE FONDUTA**
When porcini are in season, there are a thousand and one ways to prepare them. After cleaning and slicing pounds and pounds of them one fall, I had leftover mushroom scraps and decided to make soup. The broth is just leeks, celery, garlic, chicken stock, and herbs pureed with the cooked mushrooms. The porcini flavor is so strong, you don't need much else. But a little Bra cheese _fonduta_ with truffle pâté makes this soup even better. Porcini and truffles grow in Bra, so using the local cheese makes sense. If you can't find Bra cheese, use any other good melting cheese, such as fontina or Taleggio. And don't feel the need to use all fresh porcini in the soup. When I get a bumper crop of porcini, I freeze them. A mix of frozen and fresh mushrooms works fine here.
MAKES 4 SERVINGS
3 tablespoons (42 g) unsalted butter, divided
3 tablespoons (45 ml) olive oil, divided
10 ounces (283 g) fresh or frozen porcini mushroom pieces (3 cups), plus 4 fresh porcini caps
2 medium-size ribs celery, thinly sliced (1 cup)
1 large leek, cleaned and thinly sliced (1 cup)
1 quart (1 L) Chicken Stock (page 279)
1 sachet of 2 sprigs thyme, 1 bay leaf, 1 garlic clove, 4 peppercorns, and one 3-inch/7.5-cm Parmesan cheese rind (see page 277)
Salt and freshly ground black pepper
½ cup (120 ml) heavy cream
2 ounces (57 g) Bra cheese, grated (½ cup)
½ teaspoon (2 ml) white truffle paste (see Sources, page 290)
1 small garlic clove, minced
¼ cup (15 g) chopped fresh flat-leaf parsley
Heat 2 tablespoons (28 g) of the butter and 2 tablespoons (30 ml) of the oil in a large soup pot over medium-high heat. Add the 10 ounces (283 g) of mushroom pieces, toss to mix, and sear the mushrooms until lightly browned, 4 to 5 minutes, stirring now and then.
Add the celery and leek, lower the heat to medium, and sweat the vegetables until tender but not browned, 5 to 7 minutes. Pour in the chicken stock and submerge the sachet in the liquid. Bring to a simmer and simmer gently for 30 minutes. Remove and discard the sachet and then blend the soup in batches with a blender or stick blender until completely smooth. Season with salt and pepper to taste and cover the soup in the pan to keep it warm. (Or let cool and refrigerate for up to 4 days; gently reheat the soup before serving.)
Bring the cream to a boil in a small saucepan over medium-high heat, and then remove the pan from the heat and stir in the cheese until smooth. Stir in the truffle paste and season to taste with salt and pepper. Keep warm over very low heat to keep it from thickening too much.
Melt the remaining 1 tablespoon (14 g) of butter and 1 tablespoon (15 ml) of olive oil in a medium sauté pan over medium heat. Slice the four fresh porcini caps and sauté them in the butter and oil until heated through, 2 to 3 minutes. Add the garlic and parsley, and cook for 1 minute (this is called _porcini trifolati_).
Divide the soup among warm bowls and spoon a couple of tablespoons of _porcini trifolati_ over each bowl. Drizzle with a generous amount of fonduta.
**POLENTA CARAMELLE** with **RASCHERA FONDUTA** and **BLACK TRUFFLE**
_Caramelle_ means "candies," and this ravioli is stuffed with polenta, then twisted like a candy wrapper. I got the stuffing idea from Claudio Sadler, a Michelin two-star chef in Milan. One of his cookbooks had a recipe for polenta ravioli, and I started playing around with polenta as a filling. After a few different tries, a simple mixture of cooked polenta, Parmesan, and a little egg turned out to be the best. The sauce is made with Raschera, a soft cow's milk cheese I first discovered on one of my fall trips to Alba. It's mild and creamy like formagella, which could stand in here. Or, in a pinch, use any other creamy melting cheese.
MAKES 4 TO 6 SERVINGS
1 cup (235 ml) cooked Polenta (page 281), cooled
1 large egg
1 ounce (28 g) Parmesan cheese, grated (¼ cup)
Freshly ground black pepper
12 ounces (340 g) Egg Pasta Dough (page 282), rolled into 3 sheets, each about 1/32 inch (0.8 mm) thick
_Tipo_ 00 flour (see page 277) or all-purpose flour, for dusting
1 cup (235 ml) whole milk
6 ounces (170 g) Raschera cheese
2 tablespoons (30 ml) black truffle paste
2 tablespoons (20 g) uncooked coarse yellow cornmeal (polenta)
Extra-virgin olive oil, for drizzling
Combine the cooled polenta, egg, and Parmesan in a food processor. Season to taste with pepper and buzz until smooth, about 1 minute. Spoon into a resealable plastic bag and refrigerate until ready to use or up to 1 day.
Lay a pasta sheet on a lightly floured work surface and dust with flour. Trim the ends to make them square, then cut the dough into 2-inch (5-cm) squares and spritz with water to keep the dough from drying out. Cut a corner from the bag and squeeze the filling into ¾-inch (2-cm)-diameter balls in the center of each square. Wet your fingers and moisten the corners of a square. Fold the pasta so the edge just covers the filling, then continue folding so that the filling is enclosed and the pasta forms a small rectangle (see page 162). Gently twist the dough around the filling like a candy wrapper and pinch the edges to seal. Dust the ravioli with flour and place on a parchment-lined baking sheet. Refrigerate for up to 2 hours or freeze until solid, then transfer to a resealable plastic bag and freeze for up to 3 days. You should have about one hundred ravioli.
Bring the milk to a boil in a small saucepan over medium-high heat. Remove from the heat and add the Raschera, whisking until melted and smooth. If the sauce is lumpy, strain it through a fine-mesh strainer into a small saucepan. Stir in the truffle paste and keep warm over very low heat.
Toast the uncooked polenta in a hot, dry skillet until fragrant and lightly browned, 2 to 3 minutes, shaking the pan often. Remove from the heat.
Bring a large pot of salted water to a boil. Drop the pasta in the boiling water in batches, if necessary, to prevent overcrowding; quickly return the water to a boil, and cook until tender yet firm, 3 to 5 minutes.
Drain and divide among warm pasta plates. Spoon some fonduta over the pasta, scatter on some toasted polenta, and drizzle some olive oil around the plate.
CARAMELLE ASSEMBLY
**POTATO GNOCCHI** with **CASTELMAGNO FONDUTA** and **WHITE TRUFFLE**
Castelmagno is a Piedmont cow's milk cheese aged for no less than sixty days. It's crumbly, stinky, and full of rich flavor. The first time I had it was in Bra during the 2005 cheese festival. We had lunch at Boccondivino, a slow food restaurant in town, and our second course was a plate of light, fluffy pillows floating in creamy Castelmagno _fonduta_. The waiter came over and grated fresh white truffles over the top until I told him to stop. It was one of the best pasta dishes I've ever eaten.
MAKES 4 TO 6 SERVINGS
2 russet potatoes, scrubbed clean
¼ ounce (7 g) Parmesan cheese, grated (1 tablespoon)
Pinch of grated nutmeg
Salt and freshly ground black pepper
1 small egg, beaten
7 tablespoons (54 g) _tipo_ 00 flour (see page 277) or all-purpose flour, sifted, plus a little more for dusting
½ cup (120 ml) heavy cream
1 tablespoon (15 ml) white rum
8 ounces (227 g) Castelmagno cheese, grated (about 2 cups)
2 tablespoons (30 ml) white truffle paste
Extra-virgin olive oil, for drizzling
Put the potatoes in a medium saucepan and cover with cold salted water by 1 inch (2.5 cm). Bring to a boil over high heat and boil until a knife slides in and out of the potatoes easily, 25 to 30 minutes. Drain, and when cool enough to handle, peel the potatoes, discarding the skins. Pass the potatoes through a food mill or potato ricer into a medium bowl. Stir in the Parmesan and nutmeg, and season with salt and pepper. Taste, adjust the seasonings, and then stir in the egg. Gently stir in the flour just until the dough comes together.
Turn the dough out onto a floured work surface and knead gently for 4 minutes. Using a floured bench knife or sharp knife, cut the dough in half and roll each piece on the floured surface into a long rope about ½ inch (1.25 cm) in diameter. Use the floured knife to cut the rope crosswise into ½-inch (1.25-cm) pieces, and dust the gnocchi with flour. Line one or two baking sheets with parchment paper and dust with flour. Transfer the gnocchi to the sheets, shake the pan to coat with flour, and refrigerate until ready to use.
Bring the cream and rum to a boil in a medium saucepan over medium-high heat. Remove from the heat and stir in the grated Castelmagno until melted and smooth. Stir in the truffle paste and salt and pepper to taste. Keep warm over very low heat.
Meanwhile, bring a large pot of salted water to a boil. Add the gnocchi in batches, if necessary, to prevent overcrowding; quickly return the water to a boil, and cook until the gnocchi float, 5 to 6 minutes. Remove with a spider strainer or slotted spoon and toss gently in the fonduta.
Divide among plates and drizzle with olive oil.
**COTECHINO-STUFFED QUAIL** with **WARM FIG SALAD**
Walk around any Piedmont town in November, and you'll eventually hear gunshots. It's hunting season and they're shooting pheasant, quail, squab, and boar. I thought about what else grows in the fall, and figs seemed like the perfect complement to quail. I stuff the birds with _cotechino_ , a coarse northern Italian fresh sausage, pan-roast the birds, and then serve them with a warm salad of figs and shallots. For the stuffing, you can use the same cotechino described in the Ciareghi recipe (page 244). Or use another coarse fresh Italian sausage.
MAKES 4 SERVINGS
4 whole quail, innards removed
Salt and freshly ground black pepper
4 slices white bread
½ cup (120 ml) Chicken Stock (page 279)
6 ounces (170 g) Cotechino (page 244) or other Italian sausage
2 ounces (56 g) mortadella, ground in a food processor
1 large egg
1 ounce (28 g) Parmesan cheese, grated (¼ cup)
2 tablespoons (7 g) chopped fresh flat-leaf parsley
2 tablespoons (28 g) unsalted butter
5 tablespoons (75 ml) extra-virgin olive oil, divided
8 large fresh figs, quartered lengthwise
½ shallot, julienned
1 tablespoon (15 ml) balsamic vinegar
Rinse the quail, pat dry, and season inside and out with salt and pepper.
Soak the white bread in the stock in a medium bowl for 10 minutes. Drain off any excess stock, then break up the bread and mix in the sausage, mortadella, egg, Parmesan, and parsley. Season with salt and pepper. To test the seasoning, pinch off a small piece of the stuffing mixture and fry it in a small sauté pan and then taste. Adjust the seasoning as necessary.
Preheat the oven to 375°F (190°C). Divide the stuffing among the birds, packing it generously into the cavities until they are very plump. Heat the butter and 2 tablespoons (30 ml) of the oil in a large heavy sauté pan (or two smaller ones) over medium-high heat. When hot, add the quail breast-side down, and cook until golden brown on both sides, 3 to 4 minutes per side. Turn the quail breast-side up, transfer the pan(s) to the oven, and cook until an instant-read thermometer registers 145° to 150°F (63° to 66°C) when inserted into the center, 6 to 8 minutes.
Transfer the quail to a cutting board and put the pan(s) over medium heat. Add the figs, cut-side down, then add the shallots and sauté until the shallots are lightly golden, 3 to 4 minutes. Pour in the vinegar to deglaze, shaking the pan(s) back and forth, and then pour in the remaining 3 tablespoons (45 ml) of olive oil, shaking again to combine the ingredients. Season with salt and pepper.
Cut each quail in half lengthwise to expose the stuffing. Spoon the sauce over and around each half and serve.
**OVEN-ROASTED RABBIT PORCHETTA** with **PEPERONATA**
_Porchetta_ is the belly of a pig wrapped around the loin and roasted. Here, I do the same thing but with rabbit. I debone a whole rabbit, pound it flat, make sausage with some of the trimmed meat, roll the rabbit around the sausage, and then pan-roast the whole thing. You could make this dish with almost any animal, but rabbit has got to be the most underrated meat in the United States. We should be eating more of it. It's lean, easy to raise, and delicious! Make this dish a few days ahead if you like. It can be served warm or cold, as can the accompanying sauté of peppers, tomatoes, and onions.
MAKES 6 TO 8 SERVINGS
Rabbit Porchetta:
1 rabbit, about 3½ pounds (1.5 kg), deboned, liver and heart reserved (see note)
3 ounces (85 g) pork fatback, cut into ½-inch (1.25-cm) cubes
About 1 pound (450 g) caul fat, for wrapping
1 teaspoon (6 g) salt, plus more for seasoning
Freshly ground black pepper
¾ teaspoon (1.75 g) dextrose powder, or ¼ teaspoon (1 g) superfine sugar
¼ teaspoon (0.5 g) cracked black peppercorns
1 large egg, beaten
2 teaspoons (10 ml) heavy cream
2 tablespoons (30 ml) grapeseed oil
1 garlic clove, smashed
2 sprigs fresh rosemary
Peperonata:
2 ripe tomatoes
1 small yellow onion
¼ cup (60 ml) olive oil
6 mixed roasted peppers (page 278), green, red, and yellow, cut into 1½-inch (3.75-cm) squares
1 garlic clove, smashed
3 sprigs fresh rosemary
½ cup (30 g) chopped fresh flat-leaf parsley
1 tablespoon (15 ml) sherry wine vinegar
Salt and freshly ground black pepper
**For the rabbit porchetta:** Spread the deboned rabbit on its back on a work surface. Trim 12 ounces (340 g) of the leg meat. Cut the trimmed meat into cubes and place it on a baking sheet or cutting board that will fit in your freezer. Cut 1¼ ounces (35 g) of the liver and heart into cubes and add it to the sheet or board, along with the pork fatback. (If you prefer, you can skip the organs and add an additional 1 ounce (28 g) of fatback.) Spread everything in a single layer and freeze until firm and partially frozen, 20 to 30 minutes. Also freeze all parts of a meat grinder or the metal blade of a food processor.
The remaining rabbit should be roughly rectangular. Place it on a large sheet of plastic wrap, cover with another sheet of plastic, and pound the meat to an even ¼-inch (6-mm) thickness. Lay down another large sheet of plastic (optional, to help with rolling), and stretch out enough caul fat on it to clear the pounded-out rabbit by 2 inches (5 cm) all around. Roll up the rabbit meat and then unroll it on the caul fat, shaping it into a rectangle with no holes in the meat. Season lightly with salt and pepper.
Scatter the 1 teaspoon of salt, dextrose, and peppercorns evenly over the partially frozen trimmed meat. Gently mix by hand. Grind the meat mixture in the cold meat grinder fitted with the medium plate. If using a food processor, process the mixture with short pulses until very finely chopped but not completely pureed. It should look like hamburger. Transfer to a large bowl and add the egg and cream. Gently stir until blended and then use immediately or cover and refrigerate for up to 2 hours.
Place the meat mixture on the rabbit and form it into a cylinder, leaving an inch or two of space on each end. Use the plastic and caul fat to roll the rabbit over the filling, rolling it tight until you get two-thirds of the way to the other side. Fold the excess caul fat from the edges inward, like a burrito, and continue rolling into a tight, thick cylinder. Tie the porchetta with kitchen string in four or five places to help it hold its shape. Season lightly with salt and pepper. The porchetta can be refrigerated for up to 2 hours or frozen for up to 2 weeks before cooking and serving.
Preheat the oven to 400°F (204°C). Heat the grapeseed oil, garlic, and rosemary in a large, ovenproof sauté pan over medium-high heat. When hot and sizzling, push the garlic and rosemary to the sides of the pan. Add the porchetta to the pan and sear until all sides are golden brown, 4 to 5 minutes per side. Remove and discard the garlic and rosemary before they burn. Transfer the pan to the oven and cook until the internal temperature registers 145° to 150°F (63° to 66°C) on an instant-read thermometer, 35 to 45 minutes. Transfer the porchetta to a cutting board and let rest for 15 to 20 minutes.
**For the peperonata:** Bring a medium pot of water to a boil, and fill a bowl with ice water. Score an X on the bottom of each tomato and drop into the boiling water until the skin near the X starts to curl, 30 seconds to 1 minute. Transfer to the ice water and swish the tomatoes until they are cool. Peel and discard the skins of the tomatoes. Cut the tomatoes in half crosswise, along their equator, and dig out the seeds and gel. Cut out and discard the core, and then cut the remaining tomato flesh into 1½-inch (3.75-cm) squares. Cut the onion into 1½-inch (3.75-cm) pieces as well.
Heat the oil in a sauté pan over medium heat. Add the onion and sweat until soft but not browned, 5 to 8 minutes. Add the tomato, peppers, garlic, and rosemary, lower the heat to low, and cook gently for 12 to 15 minutes. Remove and discard the garlic and rosemary and then season the peperonata with the parsley, vinegar, salt and pepper. Taste and add more vinegar or olive oil as necessary. The peperonata can be made up to 3 days ahead, refrigerated, and then gently reheated before serving.
Slice the porchetta into ½-inch (1.25-cm)-thick medallions and serve with the peperonata, warm or cold.
Note
To debone the rabbit, cut through the breastbone, spread open the rib cage, and set aside the heart and liver. Discard any large deposits of fat. Scrape the blade of a boning knife over the inside of the ribs to thin out the membrane. Grab the side of the rabbit and poke the ribs through the membrane, then cut down around the ribs, holding the knife against bone, to remove the meat from the bone. Take care not to cut any holes in the meat. Make long slits down the center of the inside of the backbone to loosen it from the meat. When you get to the legs, cut down to the hip joints, then break the joints in two. Using short strokes and holding the blade against bone, gradually cut along the length of the backbone and underneath it to loosen the bone from the meat. The goal is to remove the entire backbone and rib cage from neck to tail, leaving the meat intact. The legs will still be attached. To debone them, cut down the length of each bone and scrape the meat from the bones, then break the joints and pull the bones from the meat like pulling a foot out of a sock. Scrape as much meat as possible from the bones, keeping a pile of trimmed meat as you work.
RABBIT PORCHETTA ASSEMBLY
**BONÈT**
Most restaurants in Piedmont serve some version of this dessert. It's like a chocolate flan made with crushed amaretti cookies and rum. The cookies separate out, making a soft crust, the custard stays rich and creamy, and the crème caramel forms its own syrupy sauce to drizzle over the top. The best version I've ever eaten was at Cesare Giaccone's restaurant just a half hour south of Alba. It's his mother's recipe and he hasn't changed a thing since he started making it fifty years ago. I modeled this version on that one.
MAKES 8 TO 10 SERVINGS
3 cups (600 g) granulated sugar, divided
6 large eggs
3 tablespoons (45 ml) white rum
1⅓ cups (115 g) unsweetened Dutch-process cocoa powder
1⅓ cups (160 g) finely ground amaretti cookie crumbs, plus a little extra for sprinkling
3 cups plus 2 tablespoons (780 ml) whole milk
Using an electric mixer on high speed or a sturdy whisk, whip together 1 cup (200 g) of the sugar with the eggs and rum until smooth, about 2 minutes. Sift the cocoa powder and cookie crumbs into the egg mixture, and then whisk until smooth. Gradually add the milk in stages, stirring until smooth between additions to prevent lumps. Cover and let stand in the refrigerator overnight.
Pour the remaining 2 cups (400 g) of sugar into a heavy sauté pan (not nonstick). Add just enough warm tap water to wet the sugar, about ¼ cup (60 ml), and cook over medium-high heat, swirling the pan but not stirring, until the sugar begins to turn medium amber in color, 6 to 8 minutes. Gently swirl the sugar in the pan to create an even-colored caramel. When the sugar is evenly golden, lower the heat to medium, stand back, and gradually add ⅓ cup (90 ml) of warm water; it will steam and spit. Cook the caramel just enough for it to remelt and become a thick, pourable syrup.
Carefully pour the caramel into a metal or ceramic 8-cup (2-L) terrine mold or 9 x 5-inch (23 x 13-cm) loaf pan; the bottom should be covered with a ¼- to ½-inch (6- to 12-mm)-thick layer of caramel that will feel tacky when cool enough to touch. Refrigerate until completely cooled, at least 1 hour or up to 4 hours.
Preheat the oven to 300°F (150°C). Set a kettle of water to boil.
Give the refrigerated bonèt mixture a stir, and then pour it into the pan over the caramel. If using a loaf pan, you may have a small amount left over; discard it. Set the pan in a larger, deeper pan (such as a roasting pan) and pour boiling water into the deeper pan to come about halfway up the bonèt pan. Bake until slightly puffed yet wobbly in the center like a firm custard, 60 to 70 minutes. Remove the bonèt pan from the water bath and let cool on a rack for 15 minutes. Refrigerate in the pan until completely cool or up to 4 days.
Run a wet knife around the edge of the bonèt and unmold it onto a large platter. Cut into ¾-inch (2-cm)-thick slices and lay a slice on each plate. Sprinkle with some cookie crumbs and a drizzle of the liquid caramel from the platter.
**PISTACHIO FLAN**
During the 2010 Slow Food festival, I made a point to eat at Da Guido in Pollenzo. I'd heard so much about the place, I had to check it out. It's a stunning restaurant situated inside an old castle in town. Every dish in our meal was completely delicious, so for dessert, we ordered everything on the menu. This dessert floored me more than anything else. It's a small green pistachio cake enrobed in melted chocolate. When you cut into the cake, a river of creamy filling oozes out like a molten chocolate cake, but it's a gorgeous green color. The black and green colors make a striking presentation on a white plate. It took me two weeks to replicate this recipe, but I finally got it right. I ended up making my own pistachio paste in a Vitamix blender. Be sure to blend the pistachios until they're as smooth as silk. That's what makes the filling so creamy.
MAKES 8 TO 10 SERVINGS
1¾ cups (350 g) granulated sugar
4 large egg yolks
3 large eggs
9 tablespoons (125 g) unsalted butter, melted
1 cup (150 g) raw unsalted pistachios, preferably Sicilian, plus some chopped for garnish
¾ cup (175 ml) whole milk
½ cup (62 g) _tipo_ 00 flour (see page 277) or all-purpose flour, sifted
3 cups (750 ml) Chocolate Sauce (page 285)
Preheat the oven to 350°F (175°C). Butter and flour ten 4-ounce (120-ml) baking tins or ramekins and place on a baking sheet.
Combine the sugar, egg yolks, and eggs in a stand mixer fitted with the whisk attachment. Whip on medium-high speed until pale yellow and thick enough to drip in ribbons when the whisk is lifted, 2 to 3 minutes. With the machine running, gradually add the melted butter.
Combine the pistachios and milk in a blender and puree on high speed until super-smooth and thick, 6 to 8 minutes, stopping to scrape down the side a few times. Fold the pistachio mixture into the egg mixture. Fold in the flour.
Pour the mixture into the prepared tins or ramekins and bake until very lightly browned and set around the edges but a little wobbly in the middle, 10 to 12 minutes.
Immediately invert and unmold the dishes onto plates. Spoon the warm chocolate sauce over each flan to coat it completely. Garnish with some chopped pistachios and serve hot.
**TORRONE SEMIFREDDO** with **CANDIED CHESTNUTS** and **CHOCOLATE SAUCE**
The first time I went to Alba, I fell in love with torrone, the Italian almond nougat candy. You see mounds and mounds of it on display during the annual truffle festival. At Frosio, we made a torrone semifreddo that I always thought would be great with chocolate and chestnuts. So here it is, all the sweet flavors of Piedmont in one dish. The recipe yields a lot because each component can be kept for a week or two before serving. And it's so good, you'll want to serve it again and again.
MAKES 14 TO 16 SERVINGS
Torrone:
1 teaspoon (2.25 g) powdered egg whites
¾ cup plus 3 tablespoons (190 g) granulated sugar, divided
1 small egg white
2½ tablespoons (37 ml) glucose syrup or light corn syrup
¼ vanilla bean, split and scraped
⅓ cup (90 ml) honey
2¼ teaspoons (10 g) food-grade cocoa butter, melted
1¾ cups (250 g) whole almonds, toasted
Semifreddo:
8 large egg yolks
⅔ cup (133 g) granulated sugar
4 large egg whites
2 cups (475 ml) heavy cream
Candied Chestnuts:
8 ounces (227 g) peeled chestnuts, thawed if frozen
¾ cup (150 g) granulated sugar
3 tablespoons (45 ml) glucose syrup or light corn syrup
To Serve:
4 cups (1 L) Chocolate Sauce (page 285)
**For the torrone:** In the bowl of a stand mixer, whisk the powdered egg whites with 1 tablespoon (12 g) of the sugar to break up lumps. Add the egg white and whip on medium-low speed until combined, 1 minute. Let stand in the bowl.
Combine the remaining ¾ cup plus 2 tablespoons (175 g) of sugar with the glucose syrup, vanilla, and ¼ cup (60 ml) of water in a small saucepan, and cook over medium heat until the mixture reaches 302°F (150°C) on a candy thermometer. Meanwhile, heat the honey in a microwave until it is warm and pourable, 30 seconds to 1 minute. Add it to the 302°F (150°C) sugar mixture, and continue to cook until it reaches 311°F (155°C).
With the mixer running on low speed, pour in the hot sugar mixture; when incorporated, change to medium-high speed and whip until thickened, 1 to 2 minutes. Leaving the mixer running, quickly wave a kitchen torch all around the outside of the mixer bowl to heat the torrone mixture. The mixture should become thick and ribbony and start to pull away from the sides of the bowl. When it does, change to low speed, add the melted cocoa butter, and when incorporated, change back to high speed, and then wave the torch again around the outside of the bowl until the torrone thickens again and pulls back away from the sides of the bowl.
Quickly stir in the almonds and immediately transfer the torrone to a half-sheet-size silicone mat. Working quickly, top with another silicone mat and roll the torrone to an even ½-inch (1.25-cm) thickness before it hardens. Let cool and harden completely, about 30 minutes, and then chop into pieces the size of dimes.
**For the semifreddo:** Coat ten 4-ounce (120-ml) baking tins or ramekins with cooking spray.
In a large bowl, use a whisk to vigorously whip the egg yolks and sugar until pale yellow and thick, 2 to 3 minutes. In the bowl of a stand mixer, whip the egg whites on medium-high speed until they form medium-soft peaks when the beaters are lifted, 3 to 4 minutes. Fold the whites into the yolks in three additions. In a clean mixer bowl, whip the cream on medium-high speed until it holds medium-soft peaks when the beaters are lifted, 2 to 3 minutes. Fold the whipped cream into the eggs. Fold in the chopped torrone and spoon the semifreddo into the baking tins. Cover each tin and freeze until partially frozen, at least 2 hours or up to 1 week.
**For the candied chestnuts:** Put the peeled chestnuts in a saucepan and add enough water to just cover them. Simmer over medium heat until they are soft enough to eat and just start to fall apart, 5 to 8 minutes. Drain and reserve.
Combine the sugar, glucose syrup, and 2 cups (475 ml) of water in a large saucepan and boil over high heat until it reaches 224°F (107°C) on a candy thermometer, 12 to 15 minutes. Add the cooked chestnuts. This mixture can be refrigerated for up to 2 weeks before using.
**To serve:** Unmold the semifreddo onto plates (gently heat the outside of the molds, if necessary, to loosen the semifreddo; you can dip them in hot water or use a kitchen torch). Completely cover each semifreddo with chocolate sauce, and spoon some candied chestnuts on the side.
VENICE
LOSING MYSELF IN THE CITY
IT'S LIKE A SURREALIST DREAM, WALKING DOWN STREETS THAT CIRCLE BACK TO WHERE THEY STARTED, OVER BRIDGES THAT LEAD TO NOWHERE, CONTEMPLATING UNFATHOMABLE BUILDINGS, AND GAZING AT MESMERIZING DISPLAYS OF ODD-LOOKING FISH AND VEGETABLES. HERE YOU STAND ON ONE OF ONE HUNDRED AND EIGHTEEN TINY ISLANDS BARELY HELD TOGETHER BY ONE HUNDRED AND FIFTY CANALS AND FOUR HUNDRED FOOT-WORN BRIDGES, ALL FLOATING ON A GIANT LAGOON. VENICE IS A CRAZY PLACE, AND IT'S MY FAVORITE CITY IN ITALY.
I first went there in the spring of 2005 and have gone back almost every year since. On that first visit, Claudia and I had been together for about twelve months, and her mother invited us to stay with her at her timeshare. Pina has an apartment right next to La Fenice, the opera house famous for hosting such great artists as Verdi and Donizetti since the late 1700s, despite burning down and being rebuilt several times. Just a few blocks away is Piazza San Marco (St. Mark's Square), the most happening piazza in Venice. Every time I visit, as a chef, I can't help admiring the architectural design of the Basilica di San Marco. I build little structures every time I send out a plate of food. It's part of my DNA. The buildings in Venice inspire me every time I visit.
But the food interests me most. I'm not talking about big-ticket restaurants. They're too touristy, catering to the ten million people who flock to the "City of Water" every year from all over the world. No, I'm talking about the markets and the pub food. Every time I come to Venice, we shop the markets in the morning, and before cooking our evening meal back at the apartment, we grab some bar snacks and a glass of wine in the afternoon. That afternoon meal is called _ombra e cicchetti_ (wine and nibbles) and it gives you a better sense of Venetian food than you'll get at any of the touristy restaurants during dinnertime.
The big Rialto market is just a five-minute walk from Pina's apartment. Right near the city's oldest bridge, the fishmongers start selling at six in the morning. They sail in from the Adriatic Sea and the lagoon surrounding the city with some of the best-looking seafood I've ever seen. . . fresh whole turbot, sole, and sea bass; glistening steaks of tuna and swordfish; mountains of plump sardines; _moscardini_ (baby octopus), calamari (squid), and _seppia_ (cuttlefish); and unusual local shellfish, such as _lumache di mare_ (sea snails), _arselle_ (pinkie-tip-size clams), _vongole veraci_ (2-inch/5-cm-diameter clams), _cape longhe_ (razor clams), _granseole_ (spider crabs), _schie_ (small gray prawns), and _canoce_ (mantis shrimp). You see fish brought in from farther away, too, such as _gambero rosso_ (Mediterranean red shrimp), monkfish, grouper, and lobster. And twice a year in the spring and fall, you can buy _moleche_ , tiny molting softshell crabs. They're barely bigger than silver dollars. Venetians like to soak them in beaten eggs until the crabs stuff themselves with egg. Then they deep-fry the softshells and eat them whole.
You have to get to the fish market before noon, or all the good stuff is gone. And don't be shy about haggling. They keep the best bits and pieces in back. Ask the fishmonger, "What else do you have back there?" One time I asked that question, and a guy came out with some of the biggest monkfish cheeks I'd ever seen. Even if you don't cook fish in Venice, you owe it to yourself to check out the market, if only to know what you'll be eating when you order food later in the day.
Venetian cuisine is fish-based, but there are lots of vegetables, too. The vegetable market is right next door to the fish market, and it has everything you can think of. Depending on the season, you'll see purple artichokes, yellow and green zucchini, eggplant, _cuore di bue_ (big beef-heart tomatoes), long Roma beans, mounds of porcini and chanterelle mushrooms, and all kinds of radicchio, such as early and late Treviso (tardivo), Castelfranco, and Chioggia. The fruit there is off the charts. Cherries, peaches, plums, figs. . . you name it. If it's in season, it's there. With the fish and vegetable markets, and a pretty good selection of meat and cheese, the Rialto market is a full city block of incredible ingredients waiting to become plates of delicious food.
On that first trip, we picked up squid, skate wing, baby mullet, _arborelle_ (tiny fish you can eat whole), swordfish, shrimp, onions, zucchini, eggplant, and a few other things and made our way back to the apartment. By the time we got it all in the fridge it was around eleven thirty a.m., also known as _l'andar per ombre_ , the time to "move into the shadows." It's when everyone in Venice walks through the city, stopping at local _bacari_ (pubs) to nibble little bites of food and sip prosecco, negroni, or Campari and soda. _Ombre_ literally means "shadows" but is local slang for "wine"; apparently, the local tradition started when wine vendors on the street would move from place to place to find shade, especially those in Piazza San Marco who followed the shade of the Basilica di San Marco's giant bell tower.
During the day, the pubs in every back alley of Venice display all manner of snacks, such as marinated octopus, fried calamari and anchovies, _baccalà mantecato_ (creamed _baccalà_ on bruschetta or polenta), and _sarde in saor_ (sardines in sour onion and vinegar sauce). You'll find _polpettini_ (little meatballs) of pork and beef, fried eggplant and zucchini, and miniature sandwiches. These bar snacks, called _cicchetti_ , are sort of like Italian tapas, and they're some of the best eating in the city, if you ask me. Most places let you take the food outside. Going from pub to pub, it's like having lunch in a citywide alfresco restaurant. The people-watching is unbeatable. And you really get a feel for the city. There are no cars. No scooters. Not even bicycles. Everyone is on foot, walking over bridges and down alleys to get where they're going. You stand outside a pub, holding a glass of chilled Prosecco in one hand, some freshly caught fried fish or polpettini in the other, lean against a wall built six centuries ago, and relax, taking in the incredible history of the city itself. You don't use maps. You don't bother with street names. You just wander around getting lost until you eventually find something overwhelming to look at or delicious to eat. That's the magic of Venice.
I found that magic when we got back to the apartment in the early evening. Claudia and I cleaned all the fish and shellfish and Pina heated up some oil in a big pot. She mixed together _tipo_ 00 flour and semolina flour and we soaked the fish in milk until the fry oil was hot. We spread brown paper bags all over the table, dredged the fish, fried it, laid it on the paper, and then seasoned it with salt and pepper. A few squeezes of lemon later, the three of us had satisfaction written all over our faces. It was the best _fritto misto_ I'd ever had. The seafood was unbelievably fresh. The preparation was simple. Plus, it was springtime, the windows were open, and the late-day sun lit up Claudia's high cheekbones, chestnut brown hair, and inviting smile in a way that I had never quite seen before.
Pinzimonio with Tarragon Vinaigrette and Goat Cheese
•
Baccalà Cannelloni with Cauliflower and Parmigiano
•
Tagliolini with Ragù di Seppia
•
Fried Stuffed Softshell Crabs with Asparagus
•
Radicchio Ravioli with Balsamic Brown Butter
•
Fritto Misto di Pesce
•
Coconut Latte Fritto with Passion Fruit Curd
•
Heirloom Apple Upside-Down Cake with Polenta Gelato
•
Rustic Peach Tart with Goat Cheese Sorbet
**PINZIMONIO** with **TARRAGON VINAIGRETTE** and **GOAT CHEESE**
The Italian version of crudités is called _pinzimonio._ Use whatever vegetables look freshest at your market. Depending on the time of year, you could use baby zucchini, baby carrots, purple asparagus, white asparagus, blanched fava beans, peas, radishes. . . the sky's the limit. Just slice the vegetables thinly or cut them into manageable lengths or bite-size pieces. You could even shave them with a vegetable peeler or on a mandoline. In Venice, you'll find the best produce at the vegetable market near Ponte di Rialto, the oldest bridge over the Grand Canal. There's also a co-op nearby that sells a creamy goat cheese that I fell in love with. It's not crumbly like the goat cheese you find in logs. It's more like ricotta cheese. One spring, I was staying at Pina's timeshare a few blocks away from the market and put together this pinzimonio with that goat cheese and some market vegetables. When I got back to Philadelphia, I had a version of it on the Osteria menu all summer long.
MAKES 8 SERVINGS
1 packed cup (50 g) fresh tarragon leaves, plus 5 to 6 leaves for garnish
1 to 2 tablespoons (15 to 30 ml) red wine vinegar
1 cup (235 ml) olive oil
Salt and freshly ground black pepper
2½ pounds (1.1 kg) assorted vegetables, thinly sliced (8 cups)
About 4 ounces (114 g) fresh, soft goat cheese
Put the tarragon and vinegar in a blender and blend until the tarragon is finely chopped, 1 to 2 minutes. With the motor running, slowly drizzle in the oil until thickened, 2 minutes. The mixture should be green and medium thick. Season with salt and pepper, then taste and adjust the vinegar and other seasonings as needed.
Toss the vegetables in the vinaigrette in a big bowl and season with salt and pepper. Arrange the vegetables on a wooden board or platter. I like to put them in a narrow line down a long board. Use two dinner spoons to scoop and shape the goat cheese into two football shapes (quenelles). Place them on opposite sides of the vegetables. Garnish with the remaining tarragon leaves and a drizzle of the tarragon vinaigrette left in the bowl.
**BACCALÀ CANNELLONI** with **CAULIFLOWER** and **PARMIGIANO**
By now, _baccalà_ is pretty well known in the United States, but in case you're not familiar with it, it is salted cod. Not salt cod. Salted cod. Salt cod is usually very thin and very dried out. That's not what you want here. You want baccalà that is at least half an inch (1.25 cm) thick and still somewhat pliable. It has to be soaked for two to three days to desalt it before using, so allow yourself some time when making this recipe. On the plus side, there's so much natural gelatin in baccalà that it whips up into a creamy mousse the Italians call _baccalà mantecato_ (creamed codfish). I do a similar preparation here but stuff it into cannelloni and then blast the pasta in a hot oven. The filling has the look and creamy texture of a typical cheese-based cannelloni filling, but it tastes completely different and totally delicious. It makes a great casserole for a crowd, but you could cut the recipe in half to serve fewer people.
MAKES 8 TO 10 SERVINGS
Pasta and Filling:
1¾ pounds (795 g) boneless baccalà
4 ounces (1 stick/113 g) unsalted butter
¼ cup (60 ml) extra-virgin olive oil
1 medium-size yellow onion, finely chopped (1¼ cups/200 g)
3 anchovy fillets
½ teaspoon (1 g) red chili flakes
⅔ cup (83 g) _tipo_ 00 flour (see page 277) or all-purpose flour
2½ cups (625 ml) whole milk
1 small garlic clove, smashed
1 bay leaf
Salt and freshly ground black pepper
8 ounces (227 g) Egg Pasta Dough (page 282), rolled into 2 sheets, each about 1/16 inch (1.5 mm) thick
4 tablespoons (57 g) unsalted butter, melted, plus a little more for greasing the pans
3½ ounces (100 g) Parmesan cheese, grated (1 cup)
Cauliflower:
½ head cauliflower, separated into florets
¼ cup (60 ml) extra-virgin olive oil, plus a little more for drizzling
Parmesan cheese for garnish
¼ cup (15 g) chopped fresh flat-leaf parsley for garnish
**For the pasta and filling:** Soak the baccalà in water to cover for 2 to 3 days, changing the soaking water two or three times a day.
Heat the butter and olive oil in a deep sauté pan over medium heat. Add the onion and sweat until soft but not browned, 4 to 5 minutes. Add the anchovies and chili flakes, and cook for 2 minutes. Drain the baccalà, add it to the pan, and cook until the fish flakes easily, 4 to 5 minutes, breaking it up with a spoon. Scatter in the flour, and cook out the floury taste for about 5 minutes. Gradually stir in the milk in a few additions, scraping the pan bottom between additions. Add the garlic and bay leaf, and cook over low heat until thick and creamy, 30 to 40 minutes, stirring frequently to prevent browning on the pan bottom. Taste, and season with salt and pepper as needed. Remove from the heat, let cool, and discard the garlic and bay leaf. Spoon the filling into a resealable plastic bag and refrigerate until ready to use or up to 1 day.
Lay a pasta sheet on a lightly floured work surface and cut it into 4-inch (10-cm) squares. You should get twelve to fourteen squares from the sheet. Repeat with the remaining pasta dough so you have a total of twenty-four to twenty-eight squares.
Bring a large pot of salted water to a boil and fill a large bowl with ice water. Drop in the pasta, quickly return the water to a boil, and cook for 15 to 20 seconds just to blanch it, stirring gently to prevent sticking. Immediately transfer the pasta to ice water to stop the cooking. Lay the pasta squares on kitchen towels or parchment paper and pat dry; they will be delicate and some may stick, but you should have plenty.
Preheat the oven to 500°F (260°C). Turn on convection if possible. Butter a 4-quart (4-L) baking dish, or use individual dishes if you like, allowing two to three cannelloni per dish.
Cut off a corner from the resealable plastic bag of filling, and pipe a ¾-inch (2-cm)-thick line of filling along one edge of each pasta square. Starting at the filled side, use the edge of the kitchen towel or parchment to lift and roll the pasta to the edge of the unfilled side to enclose the filling.
Place the cannelloni into the prepared baking dish, seam-side down. Brush the tops with the melted butter and sprinkle with the cheese. Bake until the cheese melts and browns on top, 8 to 10 minutes.
**For the cauliflower:** Thinly slice the cauliflower florets. Heat the oil in a large sauté pan over medium heat. Add the cauliflower in a single layer in batches and sweat until soft but not browned, 3 to 4 minutes per side.
**To finish,** place two to three cannelloni on each plate. Lay a few slices of cauliflower on top, drizzle with olive oil, and sprinkle with Parmesan and parsley.
**TAGLIOLINI** with **RAGÙ DI SEPPIA**
_Seppia_ (cuttlefish) is all over the fish market in Venice. It's similar to squid and octopus but sweeter and more tender. It's my favorite cephalopod. You can eat it raw, stuffed, braised, baked, or even grilled. The best thing is the ink from the cuttlefish (often labeled as squid ink in stores). It turns everything black, like a busted ballpoint pen. Wear an apron when making this recipe! I use plenty of ink because if I'm going to eat a squid ink dish, I want it to be completely black, not gray. The sauce here should be so black that the pasta turns black. I use the ink from the cuttlefish plus some store-bought squid ink. You can buy jars of it from various gourmet retailers (see Sources on page 289). The cuttlefish itself you can get at most Asian fish markets. Or, if you can't find cuttlefish, use squid instead.
MAKES 4 TO 6 SERVINGS
1½ pounds (680 g) cuttlefish or squid, cleaned
¾ cup (175 ml) olive oil, divided, plus more as needed
1 medium-size yellow onion, julienned finely
⅛ teaspoon (0.25 g) red chili flakes
Salt and freshly ground black pepper
5 peeled canned tomatoes, preferably San Marzano, cored and crushed by hand
2 to 3 cups (500 to 750 ml) white wine
2 teaspoons (10 ml) squid ink
1 bay leaf
1 pound (450 g) fresh or frozen tagliolini pasta (page 282)
⅓ cup (20 g) chopped fresh flat-leaf parsley
Reserve the cuttlefish or squid ink sacs. Finely julienne the bodies and tentacles (if using squid) and set aside. For directions on cleaning squid, see page 94.
Heat 1 tablespoon (15 ml) of the oil in a large, deep sauté pan over medium heat. Add the onion and sweat until soft but not browned, about 5 minutes. Stir in the chili flakes and salt and pepper to taste. Raise the heat to medium-high, add the cuttlefish and tomatoes, and cook for 5 minutes. Add enough wine to just cover the ingredients, and cook until the liquid reduces in volume by three-quarters, 10 to 12 minutes.
Meanwhile, carefully peel the skin off the ink sacs over a small bowl, which will release the ink. Cover the ink with just enough water so the ink can be poured out of the bowl. Add the inky water to the pan, and then rinse out the bowl with just enough water to capture all the ink, adding the inky liquid to the pan (you want maximum ink and minimum water). Add the 2 teaspoons (10 ml) of squid ink and the bay leaf and simmer over medium-low heat until the cuttlefish is tender, 15 to 20 minutes. Season to taste with salt and pepper and then remove from the heat. The _ragù_ can be cooled and refrigerated for up to 2 days before using. Just reheat it gently in a sauté pan.
When ready to serve, bring a large pot of salted water to a boil. Drop in the pasta; quickly return the water to a boil, stirring the pasta gently, and cook until the pasta is tender yet firm, about 1 minute. Reserve 1 cup (235 ml) of pasta water, then drain the pasta.
While the pasta cooks, add the remaining oil to the _ragù_, stirring vigorously to blend it in. Add the pasta to the _ragù_ (in batches if your pan is small), stirring immediately with a fork to prevent the pasta from clumping. Stir in the parsley, and cook over medium heat until most of the sauce coats the pasta; stir in additional oil and pasta water as necessary to create a creamy sauce.
Divide among warm plates, twirling the pasta into nests on each plate.
**FRIED STUFFED SOFTSHELL CRABS** with **ASPARAGUS**
In the spring and fall in Venice, the fish market sells _moleche_ (baby softshell crabs). They're a little bigger than a silver dollar but you could use larger, more mature softshells in this recipe. The classic Venetian preparation is to soak them in raw egg until they gorge themselves, and then bread and fry them until crispy. I serve them in the spring with asparagus vinaigrette and a loose asparagus mayonnaise. With the crispy crab, the creamy mayo, and the fresh asparagus, it's a great starter plate. You may have some leftover asparagus mayo. Save it for sandwiches, fries, or anywhere you'd use plain mayonnaise.
MAKES 4 SERVINGS
Softshells and Vinaigrette:
1 to 1¼ pounds (450 to 570 g) softshell crabs (about 12 small or 4 large)
8 to 10 large eggs
1 tablespoon (15 ml) freshly squeezed lemon juice
3 tablespoons (45 ml) olive oil
Salt and freshly ground black pepper
4 ounces (113 g) white asparagus (3 to 5 spears)
4 ounces (113 g) green asparagus (3 to 5 spears)
Asparagus Mayonnaise:
8 ounces (227 g) green asparagus (6 to 10 spears)
½ packed cup (30 g) fresh flat-leaf parsley leaves and small stems
1 large egg yolk
About 1½ cups (375 ml) olive oil
About 2 tablespoons (30 ml) freshly squeezed lemon juice
Salt and freshly ground black pepper
To Serve:
Oil, for frying
About 1 cup (125 g) _tipo_ 00 flour (see page 277) or all-purpose flour, for dredging
Salt
1 teaspoon (1.5 g) chopped fresh chives for garnish
**For the softshells and vinaigrette:** Place the softshell crabs in a small saucepan or other high-sided container. Beat enough eggs to cover the crabs. Make sure the crabs are completely submerged in the eggs and keep them submerged in the refrigerator for 3 hours. Then clean the crabs by using scissors to cut off about ¼ inch (6 mm) behind the eyes; lift up the pointed sides of the crab to remove the gills underneath; then turn the crab over and snip off the flap or "apron" on the bottom. Return the cleaned softshells to the eggs until ready to use.
Put the lemon juice in a medium bowl and whisk in the oil. Season the vinaigrette with salt and pepper. Trim the tough ends of both the green and white asparagus. Using a mandoline or vegetable peeler, thinly shave the asparagus into the bowl of vinaigrette. Toss and let stand for 1 hour at room temperature. Taste and season with salt and pepper as necessary.
**For the asparagus mayonnaise:** Bring a medium pot of water to a boil and fill a large bowl with ice water. Trim the tough ends from the asparagus, add the asparagus to the boiling water, and blanch until crisp-tender, about 30 seconds. Transfer to the ice water to stop the cooking. When cool, coarsely chop the asparagus and puree in a blender. Add the parsley and egg yolk to form a smooth puree. With the blender running, slowly add the oil in a steady stream until the mixture emulsifies, about a minute. Season with the lemon juice, salt, and pepper.
**To serve:** Heat the oil in a deep fryer or deep saucepan to 350°F (175°C). Remove the crabs one by one from the egg, dredge in the flour, and deep-fry until golden brown, 4 to 5 minutes. Drain on paper towels and season immediately with salt.
While the crabs are frying, spoon a generous amount of asparagus mayo on each plate. Mound the asparagus salad near the mayo. Sprinkle with the chives and place the crab over the mayo.
**RADICCHIO RAVIOLI** with **BALSAMIC BROWN BUTTER**
The Veneto mainland grows incredible produce. It's all on display at the Rialto market in Venice. One of my favorites is radicchio. Not just the familiar round heads of red radicchio grown nearby in Chioggia, but also the longer heads of early and late radicchio from Treviso, the looser oval radicchio from Verona, and the red-speckled pale green radicchio from Castelfranco that's shaped like a huge rose blossom. They're all bitter and delicious, so use any type here. Mixed with ricotta, radicchio makes a great ravioli filling. For a sauce, I like to cut the bitterness with balsamic vinegar mixed into brown butter. When you taste it, it's almost like having a salad in pasta form.
MAKES 4 TO 6 SERVINGS
7 tablespoons (100 g) unsalted butter, divided
1 tablespoon (15 ml) olive oil
1 small garlic clove, smashed
½ round head red radicchio, julienned or shredded
Salt and freshly ground black pepper
1 small egg
4 ounces (113 g) fresh whole-milk ricotta cheese (½ cup)
1 ounce (28 g) Parmesan cheese, grated (⅓ cup), divided
8 ounces (227 g) Egg Pasta Dough (page 282), rolled into 2 sheets, each about 1/32 inch (0.8 mm) thick
3 sprigs fresh thyme
3 tablespoons (45 ml) balsamic vinegar
Heat 1 tablespoon (14 g) of the butter with the oil and smashed garlic over medium heat in a large sauté pan until the garlic browns just a little, 3 to 4 minutes. Add the radicchio and sweat over low heat until tender but not browned, 5 to 6 minutes. Season to taste with salt and pepper, and then remove from the heat and let cool. Discard the garlic clove and drain off any excess liquid.
Mince the cooked radicchio and transfer to a large bowl. Add the egg, ricotta, and 2 tablespoons (12.5 g) of the Parmesan and season lightly with salt and pepper. Use immediately or transfer to a resealable plastic bag and refrigerate for up to 4 hours.
Lay a pasta sheet on a lightly floured work surface and dust with flour. Trim the ends to make them square, then fold the dough in half lengthwise and make a small notch at the center to mark it. Open the sheet so it lies flat again and spritz with water.
Beginning at the left-hand side, spoon two rows of ½-inch (1.25-cm)-diameter balls of filling along the length of the pasta, leaving a 1½-inch (3.75-cm) margin around each ball and stopping at the center of the sheet. Lift up the right-hand side of the pasta sheet and fold it over to cover the balls of filling. Gently press the pasta around each ball of filling to seal. Use a 2½-inch (6.25 cm) round fluted ravioli cutter or a similar size biscuit cutter to cut the ravioli. Repeat with the remaining pasta dough and filling. Use immediately, or freeze in a single layer, transfer the frozen ravioli to resealable plastic bags, and freeze for up to 1 week. You should have about forty-eight ravioli.
When ready to serve, bring a large pot of salted water to a boil. Drop in the ravioli, quickly return the water to a boil, and cook until tender yet firm, 2 to 3 minutes.
Meanwhile, melt the remaining 6 tablespoons (85 g) of butter in a large, deep sauté pan over medium heat. Add the thyme, and cook until the butter turns golden brown and the milk solids brown on the bottom of the pan, 8 to 10 minutes. Remove from the heat, stand back, and stir in the vinegar; it will sputter. Taste and season with salt and pepper.
Drain the ravioli, place twelve to fifteen on each plate, and sprinkle with the remaining 3 tablespoons (19 g) of Parmesan. Pour the balsamic brown butter over and around the ravioli.
**FRITTO MISTO DI PESCE**
I make a fish fry every time I visit Venice. If you have an apartment with a small kitchen, fritto misto is the best thing to do with all that great fish in the market. I usually fry up some vegetables, too, such as eggplant and zucchini. But experiment with baby artichokes or whatever's fresh in the produce bins. A little semolina flour in the breading gives you great crunch. Just season everything with salt and pepper as it comes out of the fryer, squeeze on some lemon, and you're good to go!
MAKES 4 TO 6 SERVINGS
Oil, for frying
About 4 cups (1 L) whole milk, for soaking fish and vegetables
6 ounces (170 g) squid, cleaned and cut into 3-inch (7.5 cm)-wide strips (not rings)
6 ounces (170 g) skate wing, cut on the bone into 3-inch (7.5-cm) pieces
6 ounces (170 g) fresh whole anchovies, gutted and cleaned
6 ounces (170 g) baby cod, cut into 3-inch (7.5-cm) pieces
6 ounces (170 g) medium (U50 or 41/50) shrimp, left whole and unpeeled
1 medium-size white onion, thickly sliced and separated into rings
1 medium-size zucchini, cut into 3-inch (7.5-cm) sticks
¼ medium-size eggplant, cut into 3-inch (7.5-cm) sticks
2 cups (250 g) _tipo_ 00 flour (see page 277) or all-purpose flour
1 cup (167 g) semolina flour
Salt and freshly ground black pepper
Lemon wedges, for squeezing
Heat the oil in a deep fryer to 350°F (175°C). Divide the milk between two bowls.
Prepare all the fish and vegetables and place the fish in one bowl of milk to cover, and place the vegetables in the other bowl of milk.
Combine the flours in a medium bowl. Drain each piece of fish from the milk, dredge it in the flour mixture, and then fry it until golden brown, 4 to 5 minutes, adjusting the heat to maintain the 350°F (175°C) oil temperature at all times. Transfer the fish to paper towels and immediately season with salt and pepper. Repeat with the remaining fish and vegetables.
Serve on butcher paper with the lemon wedges.
**COCONUT LATTE FRITTO** with **PASSION FRUIT CURD**
You can find _latte fritto_ anywhere in Venice. It's basically thickened pastry cream cut into little pieces, breaded and fried, and served as a sweet snack or dessert. The trick is to cook out the flour so the custard gets nice and thick. Then you can cut it easily when it cools. They don't use coconut milk in Venice but I wanted some more tropical flavors here. With passion fruit curd for dipping, it's a more aromatic twist on the traditional dish.
MAKES 4 TO 6 SERVINGS
Passion Fruit Curd:
15 passion fruits, or 1 cup (235 ml) passion fruit puree
2 large eggs
12 large egg yolks
1 cup plus 1 tablespoon (213 g) granulated sugar
1 vanilla bean, split and scraped
Pinch of salt
8 ounces (2 sticks/227 g) unsalted butter, chopped
Coconut Latte Fritto:
3 cups (750 ml) unsweetened fresh or canned coconut milk, divided
¾ cup (94 g) _tipo_ 00 flour (see page 277) or all-purpose flour, plus 1 cup (125 g), for dredging
⅔ cup (133 g) granulated sugar, plus some for coating
3 large egg yolks
Oil, for frying
3 large eggs
1 cup (108 g) plain, dry breadcrumbs
**For the passion fruit curd:** Set a fine-mesh sieve over the top of a double boiler or a medium heatproof bowl. Working over the sieve, cut the passion fruits in half and use a spoon to scrape the pulp, seeds and all, into the sieve. Use a rubber spatula to gently push the fruit pulp through the sieve. Discard the seeds. You should have 1 cup (235 ml) of passion fruit puree. Whisk in the eggs, egg yolks, sugar, vanilla, and salt and set the pan or bowl over a pan of gently simmering water. Whisk constantly until the mixture heats gently and thickens enough for the whisk to leave ribbons of curd when lifted from the pan, 10 to 15 minutes. Put the chopped butter in a medium bowl and pour the hot curd over the top. Whisk until the butter is completely melted and blended into the curd. Let cool and refrigerate until ready to serve or up to 5 days.
**For the coconut latte fritto:** Bring 2 cups (475 ml) of the coconut milk to a boil in a medium saucepan over medium-high heat. Meanwhile, whisk together the flour, sugar, remaining 1 cup (235 ml) of coconut milk, and egg yolks in a medium bowl. Temper the egg mixture by slowly adding and stirring in ½ cup (120 ml) of the hot coconut milk. Lower the heat to medium-low and pour the egg mixture into the saucepan of hot coconut milk. Cook until the mixture thickens, 15 to 20 minutes, whisking constantly to prevent scrambling the eggs on the bottom of the pan. The mixture should thicken enough to pull away from the sides of the pan, sort of like choux pastry when the liquid fat starts to separate out a tiny bit. Line a 2-quart (2-L) baking dish, such as an 8-inch (20 cm) square pan, with enough foil to overhang the edges as a sling. Pour in the fritto mixture, smooth the top, and let cool slightly. Press plastic onto the top, and then refrigerate until the mixture sets up enough to be cut into squares like soft fudge, at least 8 hours or up to 2 days.
Heat the oil in a deep fryer or deep pot to 325°F (160°C). Cut the cold coconut latte into 1-inch (2.5-cm) squares. Put the eggs, flour, and breadcrumbs in three separate bowls, and then carefully dip each square of coconut latte in the flour, then the egg, then the breadcrumbs, making sure the cubes are thoroughly coated. Fry in the hot oil until golden brown, 3 to 4 minutes. Fry in batches, if necessary, to prevent overcrowding. Drain on paper towels, and then toss in granulated sugar to coat.
**To serve:** Place a few pieces of hot coconut latte fritto on each plate with a generous spoonful of passion fruit curd for dipping.
**HEIRLOOM APPLE UPSIDE-DOWN CAKE** with **POLENTA GELATO**
This recipe has little to do with Italy, except for the polenta gelato. I started making it a few years ago at Osteria in Philadelphia. Some guys from Agusta, an Italian helicopter company in northeast Philly, came into the restaurant and couldn't believe I had polenta gelato on the menu. They were from Bergamo! They started coming in regularly and one guy, Giovanni, from the south of Italy, refused to eat the gelato because polenta is only found in northern Italy. It was like moving mountains to get him to try it. He finally tried it and all the northerners made fun of him, calling him a _polentone_ (slang for a polenta eater from the north). We eventually became friends with the Bergamascans in their group and now celebrate Easter with them. It makes Claudia feel more at home.
MAKES 8 SERVINGS
Apple Cake:
6 ounces (1½ sticks/170 g) unsalted butter, softened, plus some for greasing the pan
7 Granny Smith apples
3¾ cups (750 g) granulated sugar, divided
3 large eggs
6 large egg yolks, divided
¾ cup (120 g) finely ground cornmeal, sifted
¾ teaspoon (3.5 g) baking powder
½ teaspoon (3 g) salt
Apple Coulis:
1 pound (450 g) heirloom apples (about 3)
1 cup (200 g) granulated sugar
1 small cinnamon stick
To Serve:
½ cup (80 g) coarse yellow cornmeal (polenta)
4 cups (1 L) Polenta Gelato (page 287)
**For the cake:** Butter a 9-inch (23-cm) round cake pan. Peel, core, and slice the apples about ¼ inch (6.25 mm) thick. Pour 2¾ cups (550 g) of the sugar into a deep sauté pan and add just enough water to moisten the sugar into a thick paste, about ½ cup (120 ml). Bring to a simmer over medium-high heat and cook until the sugar dissolves and turns medium amber, 6 to 8 minutes. Stand back and add 2 tablespoons (30 ml) of water, stirring to soften the caramel. Stir in the apples, and cook over high heat until the apples begin to soften and the caramel thickens, 3 to 5 minutes. Pour the apples into the prepared pan and let cool.
Preheat the oven to 325°F (160°C). In the bowl of a stand mixer, cream the butter and remaining 1 cup (200 g) of sugar on medium speed until light and fluffy, 2 to 3 minutes. Add the eggs and half of the egg yolks, one at a time, scraping down the sides of the bowl a few times. Mix together the cornmeal, baking powder, and salt, and add this dry mixture to the bowl alternately with the remaining yolks. Pour the batter over the apples and bake until a toothpick inserted into the center of the cake comes out clean, about 40 minutes. Remove and let cool in the pan on a rack for 15 minutes. When almost cooled, invert onto a large plate or platter.
**For the apple coulis:** Peel, core, and coarsely chop the apples. Transfer to a large deep sauté pan and add the sugar, cinnamon stick, and 1 cup (235 ml) of water. Bring to a simmer over medium-high heat, then lower the heat to medium and simmer until the apples become very soft, about 20 minutes, stirring a few times. Remove the cinnamon stick and puree the mixture in a blender until smooth. Refrigerate until ready to use or up to 5 days.
**To serve:** Toast the coarse polenta in a dry pan over medium-high heat until fragrant, 3 to 4 minutes, shaking the pan often. To plate, draw an X on each plate with the apple coulis and put a slice of apple cake over the coulis a little off center. Sprinkle some toasted polenta in the open area and place a scoop of polenta gelato on top.
**RUSTIC PEACH TART** with **GOAT CHEESE SORBET**
I don't know how to describe the peaches in Italy. They're just amazing. Small, soft, and dense with liquid sugar. In summertime, you have no trouble finding mounds and mounds of perfect ones in the produce markets. Here's something simple to do with them. Nothing fancy. At Osteria, we serve these as individual tarts (as shown on page 192), but I wrote the recipe here for a single tart because it's easier to make and serve that way at home. If you want to make individual tarts, just roll out several smaller rounds of tart dough instead of one large one. This recipe reminds me of the rustic pies and tarts my grandmother Jacqueline Michaud used to make when I was growing up in New Hampshire. I grew up right next door to her. She was the chef in the family and one of my earliest culinary inspirations. I had ice cream with my peach pie as a kid, but as an adult I crave sharper flavors. Goat Cheese Sorbet adds just the right amount. You could also serve this with Raspberry Sorbet (page 288) if you like.
MAKES 10 TO 12 SERVINGS
Tart Dough and Peaches:
3⅔ cups (460 g) _tipo_ 00 flour (see page 277) or all-purpose flour
1 teaspoon (6 g) salt
1⅓ cups (300 g) cold unsalted butter, cut into small cubes
⅔ cup (150 ml) cold water
6 ripe peaches
Almond Frangipane:
2 cups (190 g) finely ground almonds
1 tablespoon (8 g) _tipo_ 00 flour (see page 277) or all-purpose flour
1¾ cups (210 g) confectioners' sugar, divided
15 tablespoons (213 g) unsalted butter, at room temperature
2 large eggs
1 large egg yolk
To Serve:
2 tablespoons (25 g) granulated sugar
1 large egg
2 tablespoons (25 g) turbinado or raw sugar
5 cups (1.25 L) Goat Cheese Sorbet (page 288)
**For the tart dough and peaches:** Combine the flour and salt in a stand mixer fitted with the paddle attachment on low speed, or whisk together in a mixing bowl. Add the butter and mix on medium-low speed until the butter is cut into very small pieces throughout the flour, or use a pastry cutter to cut the butter into the flour in the bowl. Slowly pour in the water and mix just until the dough comes together. Turn out onto a sheet of plastic wrap and quickly gather the dough into a ball. Press it into a disk and wrap in the plastic. Refrigerate for at least 1 hour or up to 1 day.
Meanwhile, bring a large pot of water to a boil and fill a large bowl with ice water. Score an X on the bottom of each peach. Working in two batches, drop the peaches in the boiling water for 1 minute, and then transfer them to the ice water to cool. Remove the peels with a paring knife, cut in half around the pits, and remove the pits. Slice the peaches about ¼ inch (6 mm) thick and set aside.
**For the frangipane:** Combine the ground almonds, flour, and ¼ cup (30 g) of the confectioners' sugar in a food processor and process to a very fine meal. In a stand mixer, cream the butter and remaining 1½ cups (180 g) confectioners' sugar on medium speed until light and fluffy, 3 to 4 minutes. Add the eggs, one at a time, and then the yolk, beating until each is incorporated before adding the next. Add the almond mixture on low speed just until blended. The frangipane can be refrigerated for up to 1 day before using. Let stand until spreadable before using.
Preheat the oven to 350°F (175°C).
Transfer the tart dough to a large sheet of lightly floured parchment paper. Top with overlapping sheets of plastic and roll the dough from the center outward to a 14-inch (35.5-cm) circle. Remove the plastic, then spread the frangipane over the dough, leaving a 2-inch (5-cm) border around the perimeter. Use the parchment to slide the dough and frangipane onto a large baking sheet (you can use the back of a rimmed baking sheet, if necessary).
Fan the sliced peaches over the frangipane, sprinkling them with sugar as you go and leaving a 2-inch (5-cm) border of dough at the edges. Lift the border of dough over the edge of the fruit, making a few small folds of dough as you go around the circle. Whisk the egg with 1 teaspoon (5 ml) of water and brush all over the exposed dough. Sprinkle generously with turbinado sugar.
Bake until the crust is browned and the fruit is tender, 40 to 50 minutes. Let cool slightly, then cut into wedges. Serve each wedge with a large scoop of sorbet.
LEFFE
BECOMING A CHEF
"HOW DID YOU GET TO ITALY?" A SHORT, TWITCHY OLD MAN NEAR THE COUNTER ASKED ME IN ITALIAN. I GUESS HE COULD TELL I WAS AMERICAN WHEN I ORDERED MY CAPPUCCINO AND CROISSANT. "I'M A CHEF IN BERGAMO," I SAID. "MY GIRLFRIEND IS FROM THERE, TOO." IT WAS ONLY TEN IN THE MORNING, BUT THE GUY ALREADY LOOKED AND SMELLED HALF IN THE BAG. "DOES SHE HAVE A BIG LAWN?" HE ASKED. "EXCUSE ME?" I REPLIED. "THEY HAVE BIG LAWNS IN BERGAMO," HE WENT ON, "WITH LOTS OF ROOM FOR SEX!" THEN HE SMILED, LAUGHING.
Turning away, I figured this dirty old man must work in one of the local textile factories, weaving fancy napkins, tablecloths, and drapes for famous companies, such as Frette. As I drove out of downtown Leffe, up Via Monte Beio, the scene changed completely. Beautiful red, blue, and yellow homes poked through trees on the hillsides above Val Seriana (Seriana Valley). Near the top of the area called San Rocco, I pulled into Locanda del Biancospino and took in the incredible views from the inn's covered dining terrace. You could see all of "_le cinque terre della valgandino_," the five lands of the Gandino valley, and, in the distance, the snow-capped peaks of the Italian Alps. This would be my first executive chef position in Italy. It was an utterly stunning location, but, so far out of town and up the hills, would the restaurant be successful? One glance at the gleaming new kitchen equipment and induction burners replaced all doubt with excitement. The Servalli family who hired me spent more than 100,000 euros ($135,000) on the kitchen alone. I salivated at the opportunity to show off my cooking chops and honor the cuisine that had given me so much.
That winter, I developed the menu, opened the restaurant, and ran the kitchen. Through the spring, I cooked for dozens of big parties and banquets and hundreds of guests. Locanda del Biancospino celebrated the mountain cuisine of the alpine foothills, and my go-to ingredients were wild game, such as pheasant and guinea hen, and forest vegetables and herbs, such as porcini mushrooms and spring onions. The seasons dictated every dish. I bought my vegetables at the local farmers' market in Cene, shopped for meats at the Camotti butcher shop in Nembro, and purchased fish from Pescheria Orobica, one of the best fishmongers in northern Italy. Every ingredient was pristine; each piece of fruit, cradled in its own nest; each slice of prosciutto, neatly layered between sheets of butcher paper. Even the quality of the cured sausage I bought showed that in food, as in fashion, Italians excel at craftsmanship.
Biancospino was relatively small and we changed the menu often, so I could flex my muscles as a chef there. I created elaborate, composed dishes. The local guinea hens were huge, about four pounds (1.75 kg) each, so I made guinea hen four ways with multiple components, marinating and poaching the breast, searing off the livers like foie gras, and making rillettes with the legs. I crisped up the skin and served the cracklings over guinea hen salad made from the rest of the bird.
It was 2005 and molecular gastronomy was exploding in Italy. I experimented with alginates, sous vide, and xanthan gum. I deconstructed familiar dishes and presented them in new ways. I pureed fruits and vegetables and foamed them up with nitrous oxide. I plated everything to be as spectacular as the alpine view from the inn's terrace, arranging a few slices of hen here, a julienne of fennel there, some decorative drops of sauce and a well-placed herb near the edge. I served such dishes as baby horse rib eye with squash gratin, rabbit roasted in black olives with apple and celery root _involtini_ , pork ribs with Brussels sprout and walnut fricassee, licorice savarin with coffee sorbetto, and warm bitter chocolate mousse with peperoncino. For a spring party, I made rabbit _casoncelli_ with crushed amaretti cookies and chopped raisins in the pasta filling. The locals were used to traditional casoncelli filled with beef and pork, and they loved the subtle twist using rabbit, stunned that an American could put this kind of spin on their local pasta. And do it well.
After a few months, I learned to create successful menus and to manage food costs. First thing in the morning, I would check the walk-in, take ingredient inventory, read the guest list, and put together my food order. I learned how to control everything from my kitchen staff and inventory to my purchasing and budget.
But now that I was executive chef, my most important lessons came from within. Months of flaunting my talents and pushing culinary limits taught me that you can't just cook from your head. You have to cook from your heart. I started seeing parallels between Pina's and Claudia's home cooking and the Michelin-star cooking done at such restaurants as Frosio, La Brughiera, and Loro. They all had heart and soul. I found that even something as simple as an unadorned, spit-roasted goat could be incredibly satisfying when cooked with care. If tended and nurtured, a dead-easy pot of Bolognese sauce can be the most delicious thing you have ever eaten.
Toward the end of that spring, instead of preparing guinea hen four ways, I just stuffed it and roasted it with spring onions. Instead of pureeing and foaming zucchini, I simply grilled it and served it with lemon dressing and local herbs. I still cooked with skill but also with passion—minus the bells and whistles on which I had been relying. I discovered that, as a chef, I wanted to cook less precious food. More rustic. Bold. And beautiful. It was a huge lesson. I'd lost my appetite for such ingredients as methylcellulose and tapioca maltodextrin. I wanted to cook with truffles, Taleggio, porcini, and pork. I wanted pheasant and duck hunted from the woods, snails and wild berries harvested from the hills, and local cheeses aged in caves for months until they were perfectly ripe. I wanted to cook with the amazing food that was growing around me.
Locanda del Biancospino was like a petri dish for my professional development. But by the end of that spring, I had to leave. The restaurant was doing well, except that Anna Servalli had hired me on the condition that I get a work visa. My uncle was helping me secure one. But work visas are almost impossible for Americans to get in Italy. It was taking forever, and by the late spring, I still didn't have it. Anna said she would have to pay me less. I was already deep in debt and couldn't afford a pay cut. My only option was to leave. But where to go? Without a work visa, I couldn't make much money in Italy. Yet, moving back to the States meant leaving Claudia here.
Sweet Onion Flan with Morels
•
Snail Spiedini with Celery Root Puree and Truffle Butter
•
Sweetbread Saltimbocca with Squash Puree
•
Duck Casoncelli with Quince, Brown Butter, and Sage
•
Pheasant Lasagna
•
Milk-Braised Pork Cheeks with Porcini Polenta
•
Porcini Ravioli with Taleggio and Burro Fuso
•
Chinotto Affogato
•
Fried Huckleberry Ravioli with Mascarpone Crema
**SWEET ONION FLAN** with **MORELS**
The forests behind Locanda del Biancospino are full of amazing spring ingredients, such as morel mushrooms, wild herbs, and young green onions. I tried to capture the forager's spirit in this dish by pureeing green onion tops to make an emerald-colored flan as an appetizer. The morels are simply sautéed with garlic and served on the side. I like to leave morels whole, but if they're big, you can cut them in half or quarters lengthwise. Whatever you do, make sure you dunk them in water several times to get the dirt out of the crevices. I spin the mushrooms in a salad spinner to dry them.
MAKES 6 TO 8 SERVINGS
Spring Onion Flan:
1½ pounds (680 g) spring onions or scallions
2 tablespoons (30 ml) olive oil
4 tablespoons (57 g) unsalted butter
6 tablespoons (47 g) _tipo_ 00 flour (see page 277) or all-purpose flour
2½ cups (625 ml) whole milk
Pinch of grated nutmeg
3 large eggs, lightly beaten
5¼ ounces (150 g) Parmesan cheese, grated (1½ cups)
Salt
Morels:
1 pound (450 g) morel mushrooms
2 tablespoons (30 ml) olive oil
2 tablespoons (28 g) unsalted butter
1 garlic clove, smashed
½ cup (50 g) sliced spring onions or scallions, including tops
1½ teaspoons (7 ml) sherry vinegar
Salt and freshly ground black pepper
**For the spring onion flan:** Cut the spring onions into 1-inch (2.5-cm) lengths, keeping the green tops and white bottoms separate.
Heat the oil in a large sauté pan over medium heat. Add the onion bottoms and sweat until soft but not browned, 3 to 5 minutes, adding a little water, if necessary, to soften them and keep them from browning. When soft, add the green tops, and cook until tender, 2 to 3 minutes. Transfer to a food processor and puree until super-smooth, 4 to 5 minutes, scraping down the sides a few times.
Meanwhile, melt the butter in a medium saucepan over medium heat. Whisk in the flour to make a roux, and then cook for 3 to 4 minutes, whisking to prevent burning. Gradually add the milk, whisking constantly until the mixture is free of lumps. Continue cooking, stirring occasionally, until the sauce is thick enough to coat the back of a spoon, about 5 minutes. Stir in the nutmeg, pour into a medium bowl, and let cool. Add the pureed onions, eggs, and Parmesan, and stir until combined. Strain through a fine-mesh sieve into a bowl, and then season with salt.
Preheat the oven to 375°F (190°C). Set a kettle of water to boil.
Butter six to eight 4-ounce (120-ml) ramekins and fill each with the flan mixture. Place the ramekins in a large, deep pan, such as a roasting pan, and pour enough hot water into the pan to come halfway up the sides of the ramekins. Bake in the water bath until set on the sides but still slightly jiggly in the center, 20 to 25 minutes.
Transfer the flans from the water bath to a baking sheet and let cool. When cool, cover and refrigerate until cold, at least 2 hours or up to 2 days.
**For the morels:** Fill a large bowl with cold water and dunk the morels up and down in the water half a dozen times to rinse any dirt from their crevices. Pat dry or spin dry in a salad spinner. Leave any small morels whole but cut the large ones lengthwise into halves or quarters.
Heat the oil, butter, and garlic in a large sauté pan over medium heat. When the butter melts, add the spring onions and morels, and cook until tender, 3 to 4 minutes. Season with the vinegar and with salt and pepper to taste, and let cool. Discard the garlic.
To plate, dip the bottom and sides of the ramekins in hot water to loosen the flans, then invert and unmold the flans onto plates. Serve with a spoonful of morels.
**SNAIL SPIEDINI** with **CELERY ROOT PUREE** and **TRUFFLE BUTTER**
Every year, Claudia's aunt Irene and her mother, Pina, hike the Italian Alps, foraging for snails. One year, they came back with fifty kilos (110 pounds) of live snails. It took three days to get them ready to eat. Day one was soaking them in polenta for twenty-four hours. Day two was boiling them. Day three was picking out the meat and freezing it. What a pain! Thank God we can get snails already prepared (see Sources on page 289). I like to wrap them in pancetta to keep them moist and grill them quick before they get rubbery. You'll need twelve small bamboo skewers to make the _spiedini_ (kebabs). Soak the skewers in water for twenty minutes to help keep them from burning.
MAKES 4 SERVINGS
Snails:
4 tablespoons (57 g) unsalted butter
½ medium-size yellow onion, minced (½ cup/80 g)
2 garlic cloves, minced
1 bay leaf
36 large cooked snails
Salt and freshly ground black pepper
36 thin slices of pancetta or bacon (about 1½ pounds/680 g)
Vegetable oil, as needed
Celery Root Puree:
2 tablespoons plus ½ cup (141 g) unsalted butter, divided
¼ cup (40 g) minced yellow onion
6 ounces (170 g) celery root, peeled and diced (1 cup)
½ small potato, peeled and diced (¼ cup/40 g)
Salt and freshly ground black pepper
Truffle Butter:
4 tablespoons (57 g) unsalted butter, melted
Zest of 1 lemon
1 tablespoon (15 ml) freshly squeezed lemon juice
1 tablespoon (4 g) chopped fresh flat-leaf parsley
2 tablespoons (30 ml) white or black truffle paste
Salt and freshly ground black pepper
**For the snails:** Melt the butter in a large sauté pan over medium-low heat. Add the onion, garlic, and bay leaf, and sweat until the onion is soft but not browned, 10 to 12 minutes. Add the snails, toss to coat, and cook for 8 to 10 minutes more. Remove from the heat, season with salt and pepper, and let cool in the pan. When cool, wrap each snail with a piece of pancetta and thread onto short, presoaked skewers, allowing three snails per skewer.
**For the celery root puree:** Melt 2 tablespoons (28 g) of the butter in a medium saucepan over medium-low heat. Add the onion and sweat until the onion is soft but not browned, 8 to 10 minutes. Add the celery root, potato, and enough water to cover the ingredients by ½ inch (1.25 cm). Cover and bring to a boil over high heat, and then lower the heat and gently simmer uncovered until the celery root is tender, 20 to 25 minutes. Drain and pass the solids through a food mill or potato ricer, along with the remaining ½ cup (113 g) of butter, until creamy, or puree until smooth in a small food processor. Season with salt and pepper and keep warm.
**For the truffle butter:** Mix the butter, lemon zest and juice, parsley, truffle paste, and salt and pepper to taste together in a small bowl. Taste and adjust the seasoning as necessary.
Heat a grill for direct, medium heat. When hot, coat the grill grate with oil. Grill the snails until nicely grill-marked on all sides, 2 to 3 minutes per side.
Melt the truffle butter in a small saucepan or microwave. Spoon a pool of warm celery root puree on each plate, top with three skewers, and drizzle with melted truffle butter.
**SWEETBREAD SALTIMBOCCA** with **SQUASH PUREE**
Italian cuisine encourages you to experiment and make dishes your own. I took classic saltimbocca—pounded veal rolled up with prosciutto and sage—and substituted sweetbreads. The sweetbreads' soft texture is similar to that of pounded veal, but they get even crispier in the pan. To add something creamy, I made a simple squash puree and served some diced, sautéed squash on the side. I like Hubbard squash here, but if you can't find that, use Jarrahdale or butternut squash.
MAKES 4 SERVINGS
Squash Puree and Sautéed Squash:
6 tablespoons (90 ml) olive oil, divided
1 garlic clove, smashed
1 bay leaf
½ medium-size yellow onion, chopped (⅔ cup/107 g)
1 pound (450 g) Hubbard squash, peeled and cut into ¼-inch (6-mm) cubes (3 cups), divided
1½ teaspoons (7 ml) honey
1 tablespoon (15 ml) sherry vinegar
Salt and freshly ground black pepper
Sweetbread Saltimbocca:
8 ounces (227 g) veal sweetbreads
4 leaves fresh sage
4 thin slices prosciutto (2 ounces/57 g)
2 tablespoons (30 ml) grapeseed oil
2 tablespoons (28 g) unsalted butter
Salt and freshly ground black pepper
About 1 cup (125 g) _tipo_ 00 flour (see page 277) or all-purpose flour, for dredging
To Serve:
2 tablespoons (28 g) unsalted butter
16 leaves fresh sage
**For the squash:** Heat 1 tablespoon (15 ml) of the oil with the garlic and bay leaf in a medium saucepan over medium heat. Add the onion and sweat until soft but not browned, 4 to 6 minutes. Add 2 cups (11 ounces/310 g) of the squash, season with salt and pepper, and add enough water to cover the ingredients by 1 inch (2.5 cm). Cover and bring to a simmer over high heat, then lower the heat to medium-low and gently simmer uncovered until the squash is soft, 20 to 25 minutes. Drain off most of the water and discard the garlic and bay leaf. Puree the squash in a blender until smooth. With the blender running, add the honey, vinegar, and ¼ cup (60 ml) of the remaining olive oil, blending until you get a smooth, pourable, medium-thick puree. Season with salt and pepper and keep warm over low heat or refrigerate for up to 2 days.
Heat the remaining 1 tablespoon (15 ml) of oil in a medium sauté pan over medium heat. Add the remaining 1 cup (5 ounces/140 g) of squash and sauté until tender, 8 to 10 minutes. Season with salt and pepper and keep warm over low heat or refrigerate for up to 2 days.
**For the sweetbread saltimbocca:** Rinse the sweetbreads in cold water, then soak in a bowl of ice water for 10 minutes. Pat dry, then remove some of the outer membrane from the sweetbreads, keeping each portion whole. Cut the sweetbreads into equal 2-ounce (57-g) portions, about the size of three fingers. Place each portion between sheets of plastic and gently pound from the center outward to an even ⅛ - to ¼-inch (3- to 6-mm) thickness. Layer the sage and prosciutto over each sweetbread portion and then fold in half to cover the sage and prosciutto. Wrap each portion in plastic, cover, and refrigerate until ready to use or up to 8 hours. It's important to keep the sweetbreads cold right up until you cook them.
Heat the grapeseed oil and butter in a large sauté pan over medium-high heat. Season the saltimbocca with salt and pepper and dredge in the flour, keeping the portions whole. Add to the pan and sauté until golden brown on both sides, 3 to 4 minutes per side.
**To serve:** Add the butter and sage to the pan with the squash cubes, and cook over medium heat until hot, 2 to 3 minutes.
Spoon a pool of warm squash puree in the middle of each plate and top with a portion of sweetbreads, a spoonful of sautéed squash cubes, and the fried sage.
**DUCK CASONCELLI** with **QUINCE, BROWN BUTTER,** and **SAGE**
Casoncelli is the local stuffed pasta of Bergamo. It's made a little differently from town to town but the backbone is the same: a thick ravioli dough filled with whatever scraps of food are hanging around—odds and ends of meat, stale cookies, breadcrumbs... you name it. The sauce is usually just crispy bits of pancetta, brown butter, and sage, and it brings the whole dish together. At Locanda del Biancospino, I sometimes made casoncelli with rabbit and prunes; other times, with duck and persimmons. Here's one of my favorite versions with duck and quince.
MAKES 8 SERVINGS
About 2 tablespoons (30 ml) blended oil (see page 276)
1½ pounds (680 g) bone-in duck legs
Salt and freshly ground black pepper
½ medium-size yellow onion, chopped (⅔ cup/107 g)
1 large rib celery, chopped (⅔ cup/68 g)
1 medium-size carrot, chopped (⅔ cup/82 g)
1 sachet of 2 sprigs each parsley, rosemary, and thyme (see page 277)
2 to 3 cups (500 to 750 ml) red wine
4¼ ounces (120 g) Parmesan cheese, grated (1¼ cups), divided
3 tablespoons (22 g) finely ground amaretti cookies
2 tablespoons (20 g) raisins, chopped
1 large egg
8 ounces (227 g) Egg Pasta Dough (page 282), rolled into 2 sheets, each about 1/32 inch (0.8 mm) thick
1 quince, peeled
3 sprigs fresh thyme
1 sprig fresh rosemary
8 ounces (2 sticks/227 g) unsalted butter
20 small leaves fresh sage
Put enough oil into a Dutch oven to coat the bottom and heat over medium-high heat. Pat the duck dry and season with salt and pepper. Sear the duck in the hot oil until nicely browned all over, 4 to 5 minutes per side. Transfer to a platter and add the onion, celery, and carrot, sweating the vegetables until soft but not browned, 4 to 5 minutes. Add the duck back to the pot, along with the sachet and wine. Bring to a simmer, and then cover and simmer over medium-low heat until the duck is almost tender enough to fall off the bone, 30 to 40 minutes.
Transfer the duck to a platter, and when cool enough to handle, pick the meat and skin from the bones, discarding the bones. Meanwhile, strain the braising liquid, reserving the liquid and vegetables separately. Grind the vegetables and picked meat and skin on the small (⅛-inch/3-mm) die of a meat grinder or pulse in a food processor until finely chopped but not pureed. Transfer to a mixing bowl and stir in ¼ cup of the Parmesan, ground cookies, raisins, and egg. Season with salt and pepper, and then use immediately or spoon the filling into a resealable plastic bag and refrigerate for up to 1 day.
Lay a pasta sheet on a lightly floured surface and trim the short edges square. Cut the pasta in half lengthwise to make two long sheets, each about 3 inches (7.5 cm) wide. Pipe the filling in ¾-inch (2-cm)-diameter balls in a row down the length of each sheet, placing each ball near one edge of each sheet and leaving a 1½-inch (3.75-cm) margin around each ball. Spritz the dough lightly with water and fold it over the filling, long edge to long edge. Gently press around each ball of filling to eliminate air pockets. Using a 2½-inch (6-cm) round pasta or cookie cutter, cut out a series of half-moons, placing the cutter off center so the folded edge of the pasta bisects the equator of the cutter. Roll each half-moon from the folded edge to the cut edge to prop up the filling, and then pinch the pasta on each side of the filling to make "wings." The finished pasta should resemble a piece of wrapped candy. Repeat with the remaining pasta dough and filling. Transfer the casoncelli to parchment-lined baking sheets and refrigerate for up to 1 hour or freeze until solid; transfer to resealable plastic bags and freeze for up to 1 week. You should have about one hundred casoncelli.
Put the whole peeled quince in a small saucepan with the reserved duck braising liquid, thyme, and rosemary. Bring to a boil over high heat, then lower the heat to medium-low and simmer until a fork slides easily in and out of the quince, 20 to 25 minutes. Remove from the liquid, and when cool enough to handle, cut lengthwise into quarters and remove the core. Cut the quince into small cubes.
Bring a large pot of salted water to a boil. Drop in the casoncelli in batches if necessary to prevent crowding, and cook until tender yet firm, 5 to 6 minutes. Using a slotted spoon, transfer the casoncelli to warm pasta plates, arranging ten to twelve upright in a tight circular pattern in the center of each plate.
Meanwhile, heat the butter and sage in a sauté pan over medium heat, and cook until the sage is lightly browned, the butter turns golden, and the milk solids lightly brown on the bottom of the pan, 8 to 10 minutes. Add the quince and remove from the heat.
Sprinkle 2 tablespoons of the remaining Parmesan over each plate of casoncelli and spoon the quince and brown butter mixture over and around the pasta. Garnish with one or two fried sage leaves.
**PHEASANT LASAGNA**
In Leffe, the trees are jam-packed with birds. Travelers stay in the area to go birding and to hunt small game like pheasant, partridge, quail, dove, and guinea hen. In this lasagna, I like pheasant best. It's such a rich-tasting dark-meat bird. But almost any bird will work, even chicken. The best part is the pasta dough that hangs over the edge of the dish and gets crispy in the oven. How often do you get to enjoy crispy pasta? With creamy porcini béchamel and hearty pheasant ragù, this lasagna makes a great fall dish.
MAKES 8 SERVINGS
1 pheasant (2 to 3 pounds/1 to 1.3 kg)
Salt and freshly ground black pepper
1 tablespoon (15 ml) grapeseed oil
1 medium-size carrot, chopped (½ cup/61 g)
1 medium-size rib celery, chopped (½ cup/51 g)
½ medium-size yellow onion, chopped (½ cup/80 g)
1 cup (240 g) canned plum tomatoes, preferably San Marzano, cored and crushed by hand
2 to 3 cups (500 to 750 ml) white wine, or as needed
8 ounces (227 g) Egg Pasta Dough (page 282), rolled into 2 sheets, each about 1/16 inch (1.5 mm) thick
Unsalted butter, for greasing the pan
1 quart (1 L) Porcini Béchamel (page 281)
8¾ ounces (250 g) Parmesan cheese, grated (2½ cups)
Remove the pheasant innards, rinse inside and out, and pat dry. Cut the legs from the body and then season all the pieces with salt and pepper. Heat the oil in a Dutch oven over medium-high heat. When hot, add the pheasant legs and body in batches if necessary to prevent crowding, and sear until browned on all sides, 10 to 15 minutes total. Transfer to a plate. Add the carrot, celery, and onion to the pan, and cook until tender, 6 to 8 minutes. Add the tomatoes, along with the browned pheasant legs and enough wine to barely cover the ingredients (you won't use all of the wine here). Bring to a boil, then lower the heat to medium-low and simmer for 30 minutes. Add the pheasant body and enough wine to come about three-quarters of the way up the body. Cook until all the meat is fall-apart tender, another 30 to 40 minutes. Transfer the meat to a platter and, when cool enough to handle, pick the meat from the bones, discarding the bones and skin. Shred the meat. Strain the vegetables, and return the braising liquid to the pan. Boil the liquid over high heat until reduced in volume by about half, 10 to 15 minutes. Meanwhile, pass the vegetables through a food mill or potato ricer or coarsely puree them in a food processor. When the braising liquid is reduced, stir in the shredded meat and pureed vegetables. Season with salt and pepper and keep warm over low heat or cool and refrigerate for up to 2 days.
Lay a pasta sheet on a lightly floured work surface, trim the edges so they are square, and cut the sheet into two 17-inch (43-cm) lengths and one 13-inch (33-cm) length. Repeat with the remaining pasta, but cut that sheet into three 13-inch (33-cm) lengths.
Bring a large pot of salted water to a boil, and fill a large bowl with ice water. Drop in the pasta in batches to prevent overcrowding, quickly return to a boil, and blanch for 20 seconds. Transfer the pasta to the ice water to stop the cooking. Lay the pieces of pasta flat on kitchen towels and pat dry.
Preheat the oven to 450°F (230°C). Coat a 13 x 9-inch (33 x 23-cm) baking dish with butter. Arrange the two 17-inch (43-cm) lengths of pasta in the buttered dish, leaving a little hanging over the top of the dish. Spread about 1⅓ cups (360 ml) of porcini béchamel over the pasta and then about the same amount of pheasant _ragù._ Sprinkle with ¾ cup (75 g) Parmesan. Lay down another layer of pasta, a layer of béchamel, a layer of ragù, and a layer of Parmesan. Top with a final layer of pasta, the remaining béchamel, and the remaining Parmesan. Bake until golden and crispy on the edges, 8 to 10 minutes. Serve hot.
**MILK-BRAISED PORK CHEEKS** with **PORCINI POLENTA**
Pork cheeks are a seriously under-utilized part of the pig. The jowls are cured for guanciale, but you don't see much else done with the cheeks. This recipe fixes that. I braise the cheeks in my favorite braising liquid: milk. It's a classic Italian technique. The protein and fat in the milk add incredible richness to the meat and keep it moist. When the meat is tender, the caramelized milk and juices become the sauce. Don't worry if the liquid looks curdled when you open the pot. A quick puree transforms the hot mess into a silky sauce you can ladle over the pork cheeks and porcini polenta.
MAKES 4 TO 6 SERVINGS
2½ pounds (1.1 kg) pork cheeks
Salt and freshly ground black pepper
1 tablespoon (15 ml) grapeseed or olive oil
2 cups (475 ml) whole milk
1 orange
1 small sprig rosemary
1 small garlic clove
3 peppercorns
About ¼ cup (60 ml) extra-virgin olive oil
4 cups (1 L) Porcini Polenta (page 281)
1 tablespoon (4 g) chopped mixed herbs (parsley, rosemary, thyme) for garnish
Preheat the oven to 350°F (175°C).
Remove the silverskin and any large fat deposits from the pork cheeks so you're left with mostly meat. Pat the cleaned cheeks dry and season all over with salt and pepper.
Heat the oil in a large, deep ovenproof sauté pan or Dutch oven over high heat. When hot, add the cheeks and sear until nicely browned, 3 to 4 minutes per side, working in batches and transferring the cheeks to a plate as they are seared. The pork will leave a dark film (fond) on the bottom of the pan, which is exactly what you want. After all the cheeks have a nice dark brown sear and have been removed from the pan, add the milk to the pan and scrape up the brown bits. Simmer for a minute, then lower the heat to low.
Using a vegetable peeler, peel the zest from half of the orange in strips, removing as little of the bitter white pith as possible. Finely julienne the orange zest and set aside. Cut the orange in half and squeeze the juice from the un-zested half over a strainer into the pan. Drop the juiced orange half into the pan. (Eat the other orange half or save for another use.) Tie up the rosemary, garlic, and peppercorns in cheesecloth or a clean coffee filter and drop that into the pan. Return the seared cheeks to the pan and bring to a simmer, then cover and transfer to the oven. Cook until the meat is tender to the touch but not falling apart, 2 to 2½ hours.
Remove the cheeks from the pan, cover, and let cool. At this point, the cheeks can be refrigerated for up to 3 days. Remove the orange rind and sachet from the braising liquid and transfer the liquid to a blender, scraping in as much of the clumpy milk solids as possible. Buzz until the mixture is blended and light brown in color. Taste and season with salt and black pepper. The sauce can be cooled and refrigerated for up to 3 days (makes about 3 cups/750 ml sauce).
Meanwhile, put the julienned orange zest in a small saucepan and cover with cold water. Bring to a boil over high heat (this is the first blanching). As soon as the water boils, drain the hot water and cover the peels again with fresh cold water. Repeat the process so the peels are blanched three times, then pat them dry and slice into very thin strips. Put the strips in an airtight container and add olive oil to cover. The orange strips can be kept covered at room temperature for 3 days.
Preheat a grill or griddle to medium-high heat. Rub a little oil on the grill or griddle, then grill or sear the pork cheeks just until heated through, 1 to 2 minutes per side. Bring the braising liquid to a gentle simmer over low heat.
Spoon about 1 cup (235 ml) polenta on each plate. Top with four to five pork cheeks and a spoonful of sauce. Garnish with the orange zest and mixed herbs.
**PORCINI RAVIOLI** with **TALEGGIO** and **BURRO FUSO**
Twice a year, mushroom foragers flock to the hills around "_le cinque terre della valgandino_." Porcinis are everywhere! You'll need a mushroom hunting license, just as you would get a license for hunting animals. Be careful if you go out foraging. It's shocking how many people die from foraging on steep hillsides and falling over the cliffs in the dark, early morning hours. This straightforward ravioli is my homage to a local passion for mushrooms. The twist is putting pieces of Taleggio cheese right over the mushroom filling. When you cook the ravioli, the Taleggio melts into the mushrooms, creating a beautiful creamy filling. Melted butter infused with fresh thyme is all the sauce you need.
MAKES 4 TO 6 SERVINGS
2 teaspoons (10 ml) olive oil
4 ounces (1 stick/113 g) plus 2 teaspoons (9.5 g) unsalted butter, divided
1 small garlic clove, smashed
10 ounces (283 g) fresh or frozen porcini mushrooms, thawed if frozen, sliced
Salt and freshly ground black pepper
2 teaspoons (2.5 g) chopped fresh flat-leaf parsley
1¼ ounces (35 g) Parmesan cheese, grated (6 tablespoons), divided
1 small egg
Pinch of grated nutmeg
4 ounces (113 g) Egg Pasta Dough (page 282), rolled into 1 sheet, about 1/32 inch (0.8 mm) thick
4 ounces (113 g) Taleggio cheese
4 sprigs fresh thyme
Put the oil, the 2 teaspoons (9.5 g) of butter, and the garlic in a large deep sauté pan over medium-high heat. When hot and bubbly, add the mushrooms, season with salt and pepper, and shake the pan so the mushrooms are in a single layer. Cook without stirring until the bottoms of the mushrooms brown in the hot fat, 4 to 6 minutes. Shake the pan and flip the mushrooms to brown them evenly, another 4 to 6 minutes. Add the parsley and transfer the mushrooms to a colander or mesh strainer to drain and cool. Discard the garlic.
Transfer the mushrooms to a food processor and pulse until finely minced but not pureed, about a minute. Transfer to a bowl and stir in 2 tablespoons (12.5 g) of the Parmesan, along with the egg and nutmeg. Season with salt and pepper and use immediately or spoon into a resealable plastic bag and refrigerate for up to 1 day.
Lay the pasta sheet on a lightly floured surface and trim the short edges square. Cut the sheet in half lengthwise to make two long sheets, each about 3 inches (7.5 cm) wide. Pipe the filling in ½-inch (1.25-cm)-diameter balls down the length of each sheet, right in the center, leaving 2 inches (5 cm) between each ball. Pinch off ½-inch (1.25-cm) pieces of Taleggio and place each piece on the porcini filling. Spritz the dough lightly with water and fold it over the filling, long edge to long edge. Gently press around each ball of filling to eliminate air pockets, minimizing folds in the dough. Using a 2½-inch (6-cm) round pasta or cookie cutter, cut out a series of half-moons, placing the cutter off center so the folded edge of the pasta bisects the equator of the cutter. Transfer the ravioli to parchment-lined baking sheets and refrigerate for up to 1 hour, or freeze until solid, transfer to resealable plastic bags, and freeze for up to 1 week. You should have fifty to sixty ravioli.
Bring a large pot of salted water to a boil. Drop in the ravioli in batches if necessary to prevent crowding, and cook until tender yet firm, 5 to 6 minutes. Using a slotted spoon, transfer the ravioli to warm pasta plates, arranging 10 to 12 on each plate in a single layer.
Meanwhile, heat the thyme and the remaining 4 ounces (113 g) of butter in a large deep sauté pan over medium heat until melted but not browned, 4 to 5 minutes. Sprinkle the ravioli with the remaining ¼ cup (25 g) of Parmesan and drizzle on the herbed butter.
**CHINOTTO AFFOGATO**
_Affogato_ is an Italian float, but it's usually espresso poured over a scoop of gelato in a cappuccino cup. Here's my version made with _chinotto_, the Italian soft drink flavored with the same bittersweet orange used in Campari. From my first sip, I loved chinotto. It reminded me of Moxie, a soda I drank as a kid. Both sodas have a bitter, molasses-like, savory, only slightly sweet taste. I boiled the chinotto to a thick syrup and made gelato with it, and then poured some fresh chinotto over the gelato and served the float with a couple of lemon cookies. People went nuts for it.
MAKES 8 SERVINGS
4 cups plus 2 tablespoons (390 g) almond flour
2 cups plus 2 tablespoons (255 g) confectioners' sugar, plus some for dusting
1 teaspoon (4.5 g) baking powder
2 lemons
4 large egg whites
4 cups (1 L) Chinotto Gelato (page 286)
4 cups (1 L) chilled chinotto
Preheat the oven to 300°F (150°C). Stir together the almond flour, confectioners' sugar, and baking powder in a medium bowl. Zest both lemons into the mixture, then cut one of the lemons in half and squeeze its juice through a strainer into the bowl. Stir in the egg whites just until the ingredients are combined and crumbly.
Fill a shallow dish with confectioners' sugar for dusting. Roll the dough into 1-inch (2.5-cm)-diameter balls between your palms. Roll the balls in confectioners' sugar and place on a baking sheet. Flatten slightly and bake until set but not browned, 8 to 10 minutes. Let cool on the sheet for 10 minutes, and then cool completely on wire racks.
Freeze eight cappuccino cups. Place a scoop of Chinotto Gelato in each frozen cup and top with some chilled chinotto. Serve three lemon cookies on the saucer of each cup.
**FRIED HUCKLEBERRY RAVIOLI** with **MASCARPONE CREMA**
On the Leffe mountainsides, there grew these intense purple blueberries that reminded me of the huckleberries I grew up with on the hills of Nashua, New Hampshire. It was like coming home! I made a jam with the wild berries, stuffed them into sweet pastry dough, and fried them for dessert at Locanda del Biancospino. All the dessert needed was a creamy sauce, and mascarpone goes perfectly with berries. I just mixed it with whipped egg yolks and sugar. Boom! If you can't find huckleberries for the ravioli, blueberries work just as well.
MAKES 8 SERVINGS
Huckleberry Filling:
2¼ pounds (1 kg) fresh huckleberries or blueberries (6¾ cups)
1½ cups (300 g) granulated sugar
1 tablespoon (14 g) unsalted butter
Zest of 1 lemon
2 tablespoons (30 ml) freshly squeezed lemon juice
Sweet Pastry Dough:
3½ cups (440 g) _tipo_ 00 flour (see page 277) or all-purpose flour
⅔ cup (133 g) granulated sugar
Pinch of salt
2 large eggs
½ vanilla bean, split and scraped
4 tablespoons (57 g) melted unsalted butter
1⅓ cups plus 1 tablespoon (335 g) white wine
Mascarpone Crema:
3 large egg yolks
½ cup (100 g) granulated sugar
¾ cup plus 2 tablespoons (200 g) mascarpone
To Serve:
Oil, for frying
1 tablespoon (6 g) grated lemon zest
Confectioners' sugar, for dusting
**For the filling:** Combine the huckleberries, sugar, butter, lemon zest, and lemon juice in a medium saucepan. Bring to a simmer over medium heat and simmer until the mixture thickens and reaches 224°F (107°C) on a candy thermometer. Let cool, then spoon the mixture into a resealable plastic bag and refrigerate for up to 3 days.
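The book gives candy-thermometer stages in both scales throughout; if your thermometer reads only one, the conversion is simple arithmetic. A minimal sketch in Python (the helper name is mine, not the book's):

```python
def f_to_c(fahrenheit: float) -> float:
    """Convert a Fahrenheit thermometer reading to Celsius."""
    return (fahrenheit - 32) * 5 / 9

# The huckleberry filling's jam stage above:
print(round(f_to_c(224)))  # 107, matching 224°F (107°C)
```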
**For the dough:** Combine the flour, sugar, salt, eggs, vanilla, butter, and wine in a stand mixer fitted with the dough hook. Mix on medium speed until the dough is smooth, 3 to 4 minutes. Gather the dough into a ball, wrap in plastic, and refrigerate for at least 1 hour or up to 2 days.
Position a pasta roller at the widest setting. Cut the dough into three equal pieces and shape each piece into a thick rectangle the width of your pasta machine. Return two pieces to the refrigerator and roll one piece of dough through the pasta roller, lightly dusting the dough with flour to prevent sticking. Reset the rollers to the next-narrowest setting and pass the dough through the rollers. Pass the dough once through each progressively narrower setting, concluding with the second-to-last setting. Between rollings, continue to dust the dough lightly with flour, if needed, always brushing off the excess. You should end up with a sheet 4 to 5 feet (1.25 to 1.5 m) long and about 1/16 inch (1.5 mm) thick. Lay the sheet on a floured work surface and trim the edges so they are square. Notch the center of the sheet on the edge to mark it. Spritz the dough with a little water to keep it from drying out. Pipe ¾-inch (2-cm)-diameter balls of filling at 1-inch (2.5-cm) intervals in two rows down the length of the dough just to the center. Leave a 1-inch (2.5-cm) margin all the way around each ball of filling. Lift the opposite end of the sheet and fold it over the filling so the edges meet. Gently press the dough around each ball of filling to seal. Use a 3-inch (7.5-cm) round fluted cutter to cut round ravioli, or use a knife to cut squares. Repeat with the remaining dough and filling. Dust the ravioli with flour, cover loosely, and refrigerate until ready to use, up to 8 hours. If you have any leftover filling, keep it refrigerated for up to a week and spread it on toast or use it like any other jam.
**For the mascarpone crema:** Combine the egg yolks and sugar in a stand mixer fitted with the whisk attachment. Whip on high speed until the mixture forms stiff peaks when the whisk is lifted, 4 to 5 minutes. Use a rubber spatula to fold in the mascarpone until smooth.
**To serve:** Heat the oil in a deep-fryer to 350°F (175°C). Add the ravioli in batches to prevent overcrowding and fry until golden brown, 3 to 4 minutes. Transfer to paper towels to drain and keep warm while you fry the rest of the ravioli.
Spread the mascarpone crema in a circle on each plate. Top with four or five ravioli, a scattering of lemon zest, and a dusting of confectioners' sugar.
FLORENCE
THE ROMANCE CONTINUES
I PARKED CLAUDIA'S RED MINI COOPER ON A SIDE STREET. IT WAS MY FIRST TRIP TO FLORENCE AND THERE WERE NO PARKING SPOTS AT OUR BED-AND-BREAKFAST, B&B NOVECENTO ON VIA RICASOLI. THREE MONTHS LATER, CLAUDIA GOT A 350-EURO PARKING TICKET. INSTEAD OF "NO PARKING" SIGNS IN FLORENCE, THEY JUST USE STREET CAMERAS AND SEND YOU A BILL!
That's what happens when you can't wait to spend your last weekend together in one of the world's most romantic cities. Plus, after months of tasting incredible Tuscan food at La Brughiera in Bergamo, we couldn't wait to taste the cuisine in Tuscany itself. We checked into the B&B and hit the streets of Santa Maria Novella, the most happening quarter in Florence. Near the Florence cathedral, we stumbled across a little wine bar called Fiaschetteria Nuvoli just in time for lunch. The greeter walked us down the stone steps into what seemed like a cave filled with long wooden tables. Wine bottles lined every wall. Service was family style as in most Tuscan trattorias. We sat next to strangers and started passing plates of artisan salumi, grilled crostini, and chicken liver pâté, chatting and nibbling. Then came the ribollita and hand-rolled pici with wild boar Bolognese. The pasta, chewy and delicate, blew away every other pici I'd ever had. And the boar ragù tasted completely different than what I was used to in the north. The meat was ground with pork fat and simmered with Chianti into a chunky Bolognese rather than being braised in beer and shredded. Uncomplicated and absolutely delicious. We didn't order wine. It just showed up at the table in old-school bottles with wicker baskets, all of it made from Sangiovese grapes grown a few kilometers south of the city. I especially love the family-style dining in Tuscany. For Florentines, it's the most natural thing in the world to meet new people over a casual meal at one big table.
After lunch, we dipped cantucci cookies in tiny glasses of vin santo—the classic meal closer in the region. That taste of sweetness set us up for an afternoon of wandering through Florence, "the cradle of the Italian Renaissance." Claudia loves sculpture, so we headed to Galleria dell'Accademia to see the marble statue of _David_ by Michelangelo. How he managed to make that Carrara marble come alive is beyond me. But it reminded me that chefs do the same thing when they cook. You can always tell when a dish is dead on the plate. It just sits there. No charm, no aroma, no life! It's usually because the chef failed to put life into the food. This element of cooking is impossible to quantify, but it's the most important one, if you ask me.
From the Galleria, we walked to Ponte Vecchio, the old bridge first built over the Arno River in the year 996. They built shops right into the side of the bridge over the arches. Jewelers occupy most of the shops now, but as we walked the bridge I imagined myself as one of the original butchers housed there hundreds of years ago, tossing meat scraps into the Arno.
That night we lined up for dinner at Il Latini on Via dei Palchetti. The restaurant doesn't let anyone in until seven-thirty or eight and the line stretches around the block. It looks like something out of a movie. Just before opening, the owners hand out glasses of prosecco to everyone outside. They start slicing prosciutto and mortadella and giving you little slivers to whet your appetite. Then they call out names. If you don't have a reservation, you don't get in. Luckily, Claudia's mom's boyfriend got us two seats. Alluring aromas of wood smoke and salted meat greet you as you walk in. Whole legs of Tuscan prosciutto hang from the arched white ceilings. We sat down, and they handed us menus, saying, "We have this tonight; you should try that; you definitely need to order this. . ." By the time you order, they've done the ordering for you. I love that. The steak here comes by the kilo—a generous two pounds of porterhouse grilled over oak with just a sprinkle of rock salt. It was my first true taste of _bistecca alla Fiorentina._ The meat is cut from Chianina, a giant white cattle breed indigenous to the valley near Arezzo. They cook it rare and the beef is super-tender, the fat content perfect. It needs no adornment, but you can order sides of potatoes or beans stewed in tomatoes. As with other Tuscan food, the magic of bistecca alla Fiorentina lies in the high-quality ingredients and focused, unhurried techniques used to prepare them. There's no special sauce or spicing. No showy garnishes. Tuscans don't even put salt in their bread. Instead, they grill slices of bruschetta over wood and only then sprinkle on some salt, maybe some olive oil. Seasoning at the end gives Tuscan food a huge flavor impact. That simplicity and boldness is something I've tried to capture in my cooking ever since.
The next day was pure relaxation. Claudia and I spent hours mesmerized by the paintings of Leonardo da Vinci, Botticelli, Michelangelo, and Raphael at the Uffizi Gallery. On the street, we traded bites of _lampredotto_ (tripe panini), licked cones of licorice gelato, watched street performers, and stared over the shoulders of aspiring portraitists. As the sun began to sink on the horizon, we made our way back to B&B Novecento and the domed Santa Maria del Fiore cathedral. Crowned by a dome built in the 1400s by one of Italy's most famous architects, Filippo Brunelleschi, the Duomo is probably the best-known spot in Florence. Dozens of stained glass windows glow within its 75-foot (23-m)-tall arches, shining sunlight on sculptures, paintings, and even the tomb of Brunelleschi himself. You have to walk four hundred and sixty-three steps to the top of the dome. There is no elevator. Claudia said she'd meet me outside.
When I got to the top, the view of the city and the late-day sun over the rolling Tuscan hills was utterly breathtaking. I looked down and saw Claudia sitting in the Piazza del Duomo. Her face was turned so I could see the profile of her confident nose and forehead. I grabbed my camera, zoomed in, and snapped a picture. When I checked the image on the screen, my thoughts slowed and my stomach tightened. What if this was it? My last image of Claudia: what I would stare at when I returned to the States next week, alone.
Duck Liver alla Fiorentina with Egg Yolk and Bruschetta
•
Warm Beef Carpaccio with Roasted Mushrooms
•
Guinea Hen Tortellini with Farro Crema
•
Candele Pasta with Wild Boar Bolognese
•
Wild Hare Pappardelle
•
Bistecca alla Fiorentina with Braised Corona Beans
•
Cantucci Sundae
•
Strawberry Zuppa Inglese with Mascarpone Cake
•
Blood Orange Crostata with Bitter Chocolate
**DUCK LIVER ALLA FIORENTINA** with **EGG YOLK** and **BRUSCHETTA**
Almost all of the Tuscan trattorias and _fiaschetterie_ (wine bars) start you off with _crostini Toscani_, warm chicken liver pâté on toast. I can house about a half-dozen in 3 minutes. It's just chicken livers sautéed with onions and pureed with some chicken stock, but it's so good. I love the creaminess of it. Sometimes they add a little pancetta for richness. The pâté reminds me of warm tartare served with egg yolk, so I decided to put a raw yolk on my duck liver version.
MAKES 4 TO 6 SERVINGS
1¼ pounds (570 g) duck livers
1 tablespoon (14 g) unsalted butter
1 tablespoon (15 ml) olive oil, plus some for the bruschetta
1 small yellow onion, chopped (⅔ cup/107 g)
2 ounces (57 g) pancetta, chopped
3 sage leaves
Leaves from 1 sprig rosemary
2 tablespoons (30 ml) brandy
About ½ cup (120 ml) Chicken Stock (page 279), as needed
1½ teaspoons (7 ml) sherry vinegar
3 tablespoons (12 g) chopped fresh flat-leaf parsley, divided
Salt and freshly ground black pepper
8 ounces (227 g) ciabatta, sliced ¼ inch (6 mm) thick
4 to 6 large egg yolks
Rinse, clean, and pat dry the livers. Heat the butter and 1 tablespoon of the oil in a large, deep sauté pan over medium heat. Add the onion and pancetta, and cook until lightly browned, 5 to 7 minutes. Add the liver, sage, and rosemary, and cook until the liver is lightly browned all over, 6 to 8 minutes. Pour in the brandy, and cook until most of the liquid evaporates and the livers are cooked through, 4 to 5 minutes. Transfer the mixture to a food processor and pulse to a coarse but not completely smooth puree. Transfer to a saucepan and add just enough stock to make a spreadable pâté. Stir in the vinegar and 2 tablespoons (7 g) of the parsley. Taste and season with salt, pepper, and vinegar, as needed. Heat gently just until warmed through, 1 to 2 minutes.
Light a grill or broiler to medium heat. Brush the ciabatta slices with olive oil and season with salt and pepper. Grill or broil until nicely browned on both sides, 1 to 2 minutes per side, taking care not to burn the bread. Cut each slice in half on the diagonal and sprinkle with the remaining parsley.
Spoon the warm liver pâté into small bowls. Place an egg yolk on each portion of pâté. Serve with the bruschetta. (You can refrigerate the leftover whites for 3 to 4 days and use them to make meringue.)
**WARM BEEF CARPACCIO** with **ROASTED MUSHROOMS**
Eating in and around Florence, you notice food is often served raw, at room temperature, or just barely warm. I tried warming up some raw beef carpaccio one day and it came out awesome. The fat softens, barely starting to melt, and the meat gets just a little warmer than body temperature. With dry-aged beef it tastes better than plain old carpaccio. I usually use 100 percent Black Angus rib-eye steaks aged for seven weeks. The Angus are grass-fed in Arkansas at Creekstone Farms, one of the best producers of natural beef in the United States. It's the closest I've found to Chianina beef from Tuscany. Sliced paper-thin, warmed in a wood oven, and topped with roasted mushrooms and red wine vinaigrette, it's the perfect appetizer for a fall meal.
MAKES 4 SERVINGS
8 ounces (227 g) maitake mushrooms, left in whole clusters
16 garlic cloves, crushed
8 sprigs rosemary, divided
6 tablespoons (90 ml) extra-virgin olive oil, divided
Salt and freshly ground black pepper
½ cup (120 ml) Chianti Vinaigrette (page 277)
8 ounces (227 g) very thinly sliced aged beef rib eye, preferably from Creekstone Farms
Maldon sea salt
Parmesan cheese, for shaving
Preheat a wood oven, charcoal grill, or conventional oven to 300°F (150°C). If using a charcoal grill, pile all the coals to one side.
Toss the mushrooms, garlic, four of the rosemary sprigs, and 4 tablespoons (60 ml) of the oil in a bowl. Season with salt and pepper, then spread on a baking sheet. Roast in the oven or on the unheated side of the charcoal grill until tender but not crisped, 6 to 8 minutes. Tear the mushrooms into small pieces and toss in a bowl with the vinaigrette.
Meanwhile, arrange the sliced beef in a single layer on a rimmed baking sheet, then season with salt and pepper and drizzle with the remaining 2 tablespoons (30 ml) of oil. Roast in the oven or on the unheated side of the grill until just warm but not cooked through, about 2 minutes. Transfer to warmed plates and season conservatively with Maldon sea salt. Arrange the mushrooms over the beef and scatter on the rosemary leaves picked from the remaining sprigs. Shave Parmesan over the top. Drizzle the remaining vinaigrette from the bowl over the mushrooms and beef. Serve warm.
**Note**
If you can't find aged beef in your market, age it yourself. You'll need an extra refrigerator or empty wine refrigerator. Put a thermometer and hygrometer in the fridge to monitor the temperature and humidity. These are the most important factors to regulate. Set the fridge temperature to between 34° and 38°F (1° and 3°C). Put a small cool-mist humidifier inside the fridge (Crane makes a compact 1-gallon/4-L model) and adjust the humidifier for 65 to 75 percent humidity. Buy the highest-quality whole rib-eye roast that you can. You want a thick cap of fat to protect the meat and you'll need about 20 percent extra because the meat will lose 20 percent of its weight during the aging process. Rinse and pat the rib-eye roast dry, then wrap it loosely in a triple layer of cheesecloth. Put it on a wire rack set in a rimmed baking sheet, then put it in the fridge uncovered for three to seven weeks. The longer it ages, the more flavor it will develop. Enzymes will work their magic and create deep, beefy flavors in the meat. Moisture will also evaporate, concentrating the flavors, and mold will grow on the surface. Change the cheesecloth every day or so and replenish the water in the humidifier as necessary. But avoid opening the fridge any more than that, because you want a constant temperature and humidity. After three to seven weeks, cut off the mold and the beef is ready to cook. Thinly slice it for this recipe, cut it into steaks, or roast the rib eye whole.
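To check the buying math in the note above: the roast loses about 20 percent of its weight, so the amount to buy works backward from the trimmed weight you want left over. A quick sketch of that arithmetic (the function and sample numbers are mine; strictly it comes to about 25 percent extra, the same ballpark as the note's rule of thumb):

```python
def purchase_weight(target_lb: float, loss_fraction: float = 0.20) -> float:
    """Untrimmed weight to buy so that `target_lb` remains after the
    roast loses `loss_fraction` of its weight during aging."""
    return target_lb / (1 - loss_fraction)

# To end up with 8 lb of aged rib eye at 20% moisture loss:
print(round(purchase_weight(8), 1))  # 10.0 lb, i.e. buy about 25% extra
```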
**GUINEA HEN TORTELLINI** with **FARRO CREMA**
Valeria Piccini is one of Tuscany's most famous chefs. Her family has a beautiful Michelin 2-star restaurant in Manciano called Da Caino. Reading her cookbooks inspired me. She does amazing things with traditional Tuscan ingredients, such as farro. I usually make farro into a salad or a risotto-style side dish. But she pureed it into a sauce. Genius! I was working on a fall menu and guinea hen sprang to mind as a complement to the farro. The hen is richer than chicken and stands up better to the hearty grain.
MAKES 8 TO 10 SERVINGS
Pasta and Filling:
1 guinea hen (about 3 pounds/1.3 kg)
Salt and freshly ground black pepper
2 tablespoons (30 ml) olive oil
1 small yellow onion, chopped (½ cup/80 g)
1 small carrot, chopped (¼ cup/31 g)
1 small rib celery, chopped (¼ cup/25 g)
1 sachet of 1 sprig rosemary, 2 sprigs thyme, 1 bay leaf, 1 small garlic clove, and ½ teaspoon peppercorns (see page 277)
3½ ounces (100 g) chopped prosciutto (scraps are fine)
1 ounce (28 g) Parmesan cheese, grated (⅓ cup), plus some for garnish
1 large egg
1 pound (450 g) Egg Pasta Dough (page 282), rolled into 4 sheets, each about 1/16 inch (1.5 mm) thick
Farro Crema:
⅔ cup (150 ml) olive oil, divided, plus some for garnish
1 small yellow onion, chopped (¾ cup/120 g)
2 medium-size ribs celery, chopped (1 cup/100 g)
1 garlic clove, smashed
⅓ cup (90 ml) white wine
1 cup (180 g) farro
4 sprigs fresh thyme
1 bay leaf
Salt and freshly ground black pepper
To Serve:
1 tablespoon (15 ml) white truffle paste
½ cup (68 g) hazelnuts, toasted and chopped, for garnish
**For the pasta and filling:** Preheat the oven to 325°F (160°C). Remove the guinea hen innards, rinse inside and out, and pat dry. Discard excess fat deposits and flaps of skin. Cut the legs and wings from the hen and season all over with salt and pepper. Heat the oil in a Dutch oven or braising pan over medium-high heat. When hot, add all the hen pieces and body in batches if necessary to prevent overcrowding; sear until browned on all sides, 10 to 15 minutes total. Transfer to a plate. Add the onion, carrot, and celery, and cook over medium heat until tender, 4 to 6 minutes. Return the meat to the pan and add the sachet and enough water to come about two-thirds of the way up the meat. Bring to a simmer, and then cover the pan and braise in the oven until the meat is fall-apart tender, 40 to 50 minutes. Let cool slightly in the liquid, and then discard the sachet. Transfer the meat to a cutting board, and when cool enough to handle, pick all the meat from the bones, discarding the skin and bones. You should get about 1 pound (450 g) of meat from the hen.
Strain the vegetables and reserve the braising liquid. Discard the vegetables. Grind the meat and prosciutto in a meat grinder fitted with the small die, or grind in a food processor to a coarse puree. If using a food processor, grind the prosciutto first. Add a little of the reserved braising liquid, if necessary, to create a moist, coarse puree somewhat like pâté. Transfer to a bowl and stir in the Parmesan and egg. Season with salt and pepper and use immediately or spoon into a resealable plastic bag and refrigerate for up to 1 day.
Lay a pasta sheet on a lightly floured work surface and cut it in half lengthwise and then crosswise every 2½ inches (6.3 cm) to make 2½-inch (6.3-cm) squares. You should get twenty to twenty-four squares from each sheet. Snip a corner from the bag, and pipe a ¾-inch (2-cm)-diameter ball of filling in the center of each square. Spritz the pasta with water and fold the pasta corner to corner over the filling to make a triangle. Dampen your fingertips and bring the two outer corners together up over the filling and then pinch and hold to seal. Repeat with the remaining pasta dough and filling. You should have eighty to ninety tortellini. Transfer to parchment-lined baking sheets and refrigerate for up to 2 hours or freeze until solid; transfer to resealable plastic bags and freeze for up to 1 week.
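The yield here falls straight out of the cutting grid: each sheet is halved lengthwise into two columns, then cut crosswise every 2½ inches. A small sketch, assuming sheets roughly 28 inches long (the sheet length is my assumption; the recipe specifies only thickness):

```python
import math

def squares_per_sheet(length_in: float, square_in: float = 2.5) -> int:
    """Squares from one sheet: two columns (halved lengthwise),
    cut crosswise every `square_in` inches."""
    return 2 * math.floor(length_in / square_in)

# Four 28-inch sheets: 4 * 22 = 88 tortellini, inside the
# eighty-to-ninety range quoted above.
print(4 * squares_per_sheet(28))
```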
**For the farro crema:** Heat 2 teaspoons (10 ml) of the oil in a medium saucepan over medium heat. Add the onion, celery, and garlic and sweat until the vegetables are soft but not browned, 4 to 6 minutes. Pour in the wine and simmer until most of the liquid evaporates, 2 to 3 minutes. Remove and discard the garlic. Stir in the farro, thyme, bay leaf, and 2 cups (475 ml) of water. Season the water with salt as you would pasta water (about ½ teaspoon/3 g). Bring to a boil and then lower the heat to medium-low, cover, and simmer until the farro is very soft, 25 to 30 minutes. Drain off any excess water and remove and discard the herbs. Transfer the mixture to a blender, along with the remaining ½ cup plus 2 tablespoons (140 ml) of oil. Puree until smooth, 3 to 4 minutes, and then taste and season with salt and pepper as needed. You'll have about 2 cups (475 ml) of farro crema.
Bring a large pot of salted water to a boil. Drop in the tortellini in batches if necessary to prevent crowding, and quickly return the water to a boil. Cook until the tortellini are tender yet firm, 2 to 3 minutes.
**To serve:** Heat the farro crema in a large, deep sauté pan over medium heat. Add the truffle paste to the pan, along with 4 cups (1 L) of pasta water, and simmer until creamy, 4 to 5 minutes, stirring now and then. Add the tortellini, in batches, if necessary, and toss gently to coat with sauce. Divide among plates and garnish with Parmesan, the chopped hazelnuts, and a drizzle of olive oil.
**CANDELE PASTA** with **WILD BOAR BOLOGNESE**
By the time Claudia and I spent our last long weekend together, I had tried all sorts of Italian ragù. Every region makes it differently. In the north, they make it with pork; in Bologna, they make it with beef (Bolognese); and in Florence, they make it with wild boar. To cut the gaminess of boar, they add cocoa powder, which has just enough bitterness to even out the flavors. If you have an old rind of Parmesan lying around, bury the rind in the sauce as it simmers. It adds great flavor.
MAKES 4 TO 6 SERVINGS
Wild Boar Bolognese:
2½ pounds (1.1 kg) wild boar shoulder, cut into 1-inch (2.5-cm) cubes
8 ounces (227 g) pork fatback, cut into 1-inch (2.5-cm) cubes
1 tablespoon (15 ml) olive oil
1 small yellow onion, finely chopped (½ cup/80 g)
1 medium-size carrot, finely chopped (½ cup/61 g)
1 cup (235 ml) red wine
1 sachet of 1 bay leaf, 3 sprigs thyme, 5 parsley stems, 5 peppercorns, and 1 Parmesan cheese rind (see page 277)
1 tablespoon (14 g) unsalted butter
2 to 3 teaspoons (10 to 15 ml) sherry vinegar
2 to 3 tablespoons (11 to 16 g) unsweetened dark cocoa powder
Salt and freshly ground black pepper
Pasta:
1 pound (450 g) fresh extruded Candele (page 283), or 14 ounces (400 g) dried long ziti
4 tablespoons (57 g) unsalted butter
3 ounces (85 g) Parmesan cheese, grated (¾ cup), divided
**For the Bolognese:** Spread the meat and fatback in a single layer on a sheet pan or other shallow pan that will fit in your freezer. Freeze until firm but not solid, about an hour. Freeze all the parts of a meat grinder, too. Grind the cold meat and fat with the meat grinder, using the fine die of the grinder. If you don't have a meat grinder, you can chop the meat in small batches in a food processor, using brief pulses. Try not to chop it too finely; you don't want meat puree.
Preheat the oven to 350°F (175°C).
Heat the oil in a large deep sauté pan or Dutch oven over medium-low heat. Add the ground meat mixture, and cook until the fat melts and the meat browns, stirring often, 6 to 8 minutes. Add the onion and carrot, and cook until very soft, about 20 minutes. Add the wine and bring to a boil over medium-high heat, scraping up any brown bits from the pan bottom. Boil for 2 minutes.
Bury the sachet in the sauce. If necessary, add enough water so that most of the meat is resting in liquid. Cover the pan, transfer to the oven, and cook until the flavors blend, 2 to 2½ hours. Remove and discard the sachet.
Stir in the butter, 2 teaspoons (10 ml) of the vinegar, 2 tablespoons (11 g) of the cocoa powder, and salt and pepper to taste. Season with additional vinegar, cocoa, salt, and pepper as needed. Use immediately, refrigerate for up to 2 days, or freeze for up to 3 months.
**For the pasta:** Bring a large pot of salted water to a boil. Drop in the pasta, quickly return the water to a boil, and cook until tender yet firm, about 4 minutes (9 minutes for boxed pasta).
Heat the Bolognese in a large deep sauté pan until boiling.
Drain the pasta, reserving a little pasta water, and add the pasta to the sauce. Stir in a ladle of pasta water, the butter, and ½ cup (55 g) of the Parmesan, and toss until the sauce is creamy. If the sauce gets too thick, add more pasta water.
Divide among warm pasta bowls and garnish with the remaining ¼ cup (30 g) of Parmesan.
**WILD HARE PAPPARDELLE**
Rabbits and hares are related, but hares are bigger, longer, and quicker, which makes them taste leaner, richer, and gamier. The meat is deep red and usually covered in blood when you get it. Don't be grossed out. Just rinse the hare well and marinate it for a day or two in red wine and strong spices, such as cinnamon, clove, and black pepper. The wine draws out the gamey funk, and the spices add great aroma. Discard the marinade and you're good to go. Ragù is the easiest thing to make, and it's perfect with strips of tender pappardelle pasta. Look for wild hare at D'Artagnan (see Sources on page 289).
MAKES 6 TO 8 SERVINGS
1 wild hare (about 4 pounds/1.75 kg)
1 large yellow onion, chopped (2 cups/320 g), divided
4 medium-size carrots, chopped (2 cups/244 g), divided
4 medium-size ribs celery, chopped (2 cups/202 g), divided
2 sachets, each with ½ teaspoon peppercorns, 1 small cinnamon stick, 3 whole cloves, 3 whole juniper berries, 2 sprigs thyme, 2 sprigs rosemary, and 1 small bay leaf (see page 277)
8 to 9 cups (2 to 2.25 L) red wine
Salt and freshly ground black pepper
½ cup (62 g) _tipo_ 00 flour (see page 277) or all-purpose flour
3 tablespoons (45 ml) plus ½ cup (120 ml) olive oil, divided
1 pound (450 g) Egg Pasta Dough (page 282), rolled into 4 sheets, each about 1/16 inch (1.5 mm) thick
4 ounces (1 stick/113 g) unsalted butter
2½ ounces (71 g) Parmesan cheese, grated (¾ cup), divided
Dutch-process cocoa powder, as needed (optional)
Rinse the hare and then remove and discard the innards and excess fat deposits. Remove the hind legs and forelegs by driving your knife straight through the hip and shoulder joints. Snip through the breast bone with kitchen shears, and snip through one side of the ribs near the backbone to remove that side, then cut the hare crosswise into two pieces. You should have seven pieces total. Place the pieces in a large resealable plastic bag or a bowl, along with 1 cup (160 g) of the onion, 1 cup (122 g) of the carrots, 1 cup (101 g) of the celery, and one of the sachets. Add enough wine to cover the pieces, 4 to 5 cups (1 to 1.25 L). Seal or cover and refrigerate for 24 hours.
Preheat the oven to 325°F (160°C). Drain the hare, discarding the wine, vegetables, and sachet. Pat the hare pieces dry, season all over with salt and pepper, and then dredge in flour, shaking off the excess. Heat 3 tablespoons (45 ml) of the oil in a Dutch oven over medium-high heat. Add the hare in batches if necessary to prevent overcrowding, and sear until golden brown on all sides, about 5 minutes per side. Transfer to a platter. Add the remaining 1 cup (160 g) of onion, 1 cup (122 g) of carrots, and 1 cup (101 g) of celery to the pan, and cook until deeply browned, 5 to 6 minutes. Add the remaining 4 cups (1 L) of wine and simmer for 2 minutes. Return the hare to the pan, and if necessary, add enough water for the liquid to almost cover the hare. Add the remaining sachet and bring to a boil. Cover and braise in the oven until the meat is fall-apart tender, about 2 hours.
Transfer the meat to a cutting board, and when cool enough to handle, pick the meat from the bones, shredding the meat and discarding the skin and bones. Discard the sachet and strain the vegetables, reserving the braising liquid. Pass the vegetables through a food mill or pulse in a food processor to a coarse puree, adding a little braising liquid, if necessary, to get the vegetables to puree. Combine the braising liquid, pureed vegetables, and shredded meat and season with salt and pepper. Use immediately or refrigerate for up to 1 week. You will have 5 to 6 cups (1.25 to 1.5 L) of ragù.
Lay a pasta sheet on a lightly floured work surface and trim the edges square. Cut crosswise into strips a little less than 1 inch (2.5 cm) wide, preferably with a fluted cutter. Repeat with the remaining pasta dough.
Bring a large pot of salted water to a boil. Add the pasta in batches, if necessary, to prevent crowding, and cook until tender yet firm, about a minute.
Meanwhile, heat the butter and remaining ½ cup (120 ml) of oil in a large deep sauté pan over medium heat. Stir in the ragù and 1½ cups (375 ml) of pasta water, and simmer until creamy, 3 to 4 minutes. Drain the pasta and add to the ragù in batches along with ½ cup (50 g) of the Parmesan, stirring gently until the mixture is creamy. Taste and season with salt and pepper as needed. Divide among plates and garnish with the remaining Parmesan. If you like, you could dust the pasta with a little Dutch-process cocoa powder before garnishing with the Parmesan.
**BISTECCA ALLA FIORENTINA** with **BRAISED CORONA BEANS**
When my family came to Italy from the United States to visit, I took them to Florence to taste the steak. You order _bistecca alla fiorentina_ by the kilo and we ordered six kilos (13 pounds). It came to the table like a giant Brontosaurus steak right out of _The Flintstones._ My family had never seen anything like it—not even my eighty-six-year-old grandmother, who was a butcher during World War II! You can order different sides, such as gigante beans stewed with tomatoes. Here's my twist using big white corona beans. You could also use sweet white runner beans or even fresh lima beans, if they're in season.
MAKES 4 SERVINGS
1 cup (185 g) dried corona beans, soaked in water to cover overnight
½ medium-size yellow onion, finely chopped (½ cup/80 g)
1 medium-size rib celery, finely chopped (½ cup/51 g)
1 medium-size carrot, finely chopped (½ cup/61 g)
2 ounces (57 g) pancetta, finely chopped
1 sachet of 1 sprig parsley, 1 sprig rosemary, 2 sprigs thyme, 1 bay leaf, and 5 black peppercorns (see page 277)
Coarse salt and cracked black pepper
1 bone-in porterhouse steak, 2 pounds (1 kg) and 2 inches (5 cm) thick
¼ cup (60 ml) olive oil, plus more as needed
4 garlic cloves, minced
1 pound (450 g) mustard greens, washed, trimmed, and coarsely chopped
1 cup (240 g) canned plum tomatoes, preferably San Marzano, cored and crushed by hand
½ cup (120 ml) Veal Stock (page 279)
3 tablespoons (42 g) unsalted butter
Drain the soaked beans and combine with the onion, celery, carrot, pancetta, and sachet in a medium saucepan. Add enough water to cover the ingredients by 1 inch (2.5 cm). Bring to a boil over high heat and then lower the heat to medium-low and simmer gently until the beans are tender, about 1 hour. Season generously with salt and pepper, then let the beans cool down in the liquid. Use immediately or refrigerate for up to 3 days.
Heat a grill, preferably with wood, to high heat. Let the steak stand at room temperature for 20 minutes to take the chill off. Brush the grill grate with a little oil. Grill the steak until deeply grill-marked, 5 to 6 minutes. Rotate it 90 degrees to create crosshatch marks, grilling another 5 to 6 minutes. Flip and grill the other side for 5 minutes, then rotate it 90 degrees and move it to a cooler part of the grill or lower the heat to medium and cook until the steak is rare to medium-rare (115° to 125°F/46 to 52°C internal temperature), another 6 to 8 minutes. Transfer to a cutting board and season with coarse salt and cracked black pepper. Let the steak rest for 5 to 10 minutes before slicing.
Meanwhile, heat the oil in a large sauté pan over medium-high heat. Add the garlic, and cook until golden but not burned, 1 to 2 minutes. Add the mustard greens, season with salt and pepper, and cook until the greens release their liquid, 3 to 4 minutes. Add the tomatoes and veal stock, cover, and cook until the greens are tender, 4 to 5 minutes, stirring occasionally.
Use a slotted spoon to remove the beans and solids from their cooking liquid to the pan of greens (remove and discard the sachet). Add the butter, and cook until the beans and greens are hot and the sauce is a little creamy. Serve with the sliced steak.
**CANTUCCI SUNDAE**
Claudia and I are suckers for gelato, and when you walk through Florence, gelaterie are everywhere. Here's my secret for spotting the best ones: look for the pistachio flavor. If it looks neon green in the case, go to the next gelateria because they probably use artificial colorings and flavorings in their gelato. But if the pistachio gelato looks pale green in color, you know they're using the highest-quality pistachios or pistachio puree instead of artificial flavoring and coloring. One day, I had the idea of crushing up some of Tuscany's famous cantucci cookies and adding them to gelato. It's something I'd never seen in any gelateria but it made perfect sense. I served the sundae in a cup with almonds in vin santo syrup and more cookies on the side. It's a twist on the classic Tuscan dessert of cantucci served with vin santo for dipping.
MAKES 6 TO 8 SERVINGS
Vin Santo Almonds:
2½ tablespoons (37 ml) glucose syrup or light corn syrup
½ cup (100 g) granulated sugar
½ cup plus 1 tablespoon (140 ml) vin santo, divided
¾ cup (107 g) whole skinless almonds, toasted
Cantucci:
2¾ cups (345 g) _tipo_ 00 flour (see page 277) or all-purpose flour
¾ teaspoon (3.5 g) baking powder
2½ tablespoons (25 g) coarse yellow cornmeal (polenta)
1¼ teaspoons (7.5 g) sea salt
4 ounces (1 stick/113 g) unsalted butter, softened
1 cup plus 2 tablespoons (225 g) granulated sugar
2 large eggs
1 tablespoon (15 ml) grappa or brandy
¾ cup (107 g) whole skinless almonds, toasted
To Serve:
6 cups (1.5 L) Cantucci Gelato (page 286)
**For the vin santo almonds:** Combine the glucose syrup, sugar, ½ cup (120 ml) of the vin santo, and ½ cup (120 ml) of water in a medium saucepan. Boil over high heat until the mixture reaches 220°F (104°C) on a candy thermometer, 10 to 15 minutes. Put the almonds in a heatproof bowl and pour the sugar mixture over the top. Let cool slightly and then pour on the remaining 1 tablespoon (15 ml) of vin santo. Cover and let soak in the refrigerator overnight or up to 3 days.
**For the cantucci:** Sift together the flour, baking powder, polenta, and salt into a medium bowl. Set aside. Cream the butter and sugar together in a stand mixer fitted with the paddle attachment on medium speed until light and fluffy, 2 to 3 minutes. Add the eggs one at a time, mixing between additions until completely incorporated. Mix in the grappa. Slowly add the sifted dry ingredients on low speed until a moist batter forms, and then stir in the cooled toasted almonds.
Roll the dough on a lightly floured work surface into logs about 2 inches (5 cm) in diameter and 12 inches (30 cm) long. Wrap in plastic and refrigerate until cold and firm, at least 1 hour or up to 2 days.
Preheat the oven to 350°F (175°C). Bake the logs on baking sheets until lightly browned on the edges and firm to the touch, 10 to 12 minutes. Remove from the oven and let the logs cool until barely warm. Slice the logs crosswise on a steep diagonal into ½-inch (1.25-cm)-thick cookies and lay the cookies flat on the baking sheets. Lower the oven temperature to 300°F (150°C) and return the cookies to the oven to bake until they are dry, 15 to 20 minutes. Remove from the oven and let cool completely.
**To serve:** Put a scoop of gelato in each of six to eight ice-cream or coffee cups. Spoon on a generous amount of vin santo almonds and their syrup. Serve one or two whole cantucci with each cup.
**STRAWBERRY ZUPPA INGLESE** with **MASCARPONE CAKE**
Although its name translates to "English soup," _zuppa inglese_ is a classic Italian dessert. It's like an English trifle, with layers of cake, jam, and sweet custard. You can use almost any combination of flavors you like. My favorite is a mascarpone cake layered with strawberry jam and vanilla custard. It's the first thing I think of when fresh strawberries start popping up in May. You could layer the dessert in a big glass bowl, but I like to do it in individual half-pint mason jars.
MAKES ABOUT 8 SERVINGS
Zuppa:
4¼ cups (1060 ml) whole milk
1 vanilla bean, split and scraped
10 large egg yolks
1¼ cups (250 g) granulated sugar
½ cup (62 g) _tipo_ 00 flour (see page 277) or all-purpose flour
2½ cups (625 ml) heavy cream
Mascarpone Cake:
6 ounces (1½ sticks/170 g) unsalted butter, softened, plus some for greasing pans
1½ cups (300 g) granulated sugar
12 ounces (350 g) mascarpone
3 large eggs
1½ cups (205 g) pastry flour
2½ teaspoons (11.5 g) baking powder
1 teaspoon (6 g) salt
Strawberry Marmalade:
2 pounds (1 kg) strawberries
½ cup (120 ml) glucose syrup or light corn syrup
1⅓ cups plus ½ cup (370 g) granulated sugar, divided
1 tablespoon (14 g) powdered pectin
**For the zuppa:** Bring the milk and vanilla to a boil in a medium saucepan. Meanwhile, whip the egg yolks, sugar, and flour in a stand mixer on medium-high speed until fluffy and pale, 2 to 3 minutes. Temper the eggs by gradually stirring in ½ cup (120 ml) of the milk mixture, then another ½ cup (120 ml). Scrape the egg mixture into the saucepan, and cook over medium heat until thickened, 6 to 8 minutes, stirring frequently. Remove from the heat and let cool. Whip the cream on medium-high speed until the beaters leave soft peaks when they are lifted, 2 to 3 minutes. When the zuppa is completely cool, fold in the whipped cream. Use immediately or cover and refrigerate for up to 2 days.
**For the mascarpone cake:** Preheat the oven to 350°F (175°C). Line a half-sheet pan (an 18 x 13-inch/46 x 33-cm rimmed baking sheet) or two smaller rimmed baking sheets with parchment. Butter the parchment. Cream the butter and sugar in a stand mixer on medium speed until light and fluffy, 2 to 3 minutes. Mix in the mascarpone, and with the machine running, add the eggs, one at a time. Sift together the flour, baking powder, and salt and add to the mascarpone mixture on low speed until incorporated. Spread the batter in the prepared pan and bake until a toothpick inserted into the center comes out clean, 18 to 20 minutes. Let cool and use immediately or cover and refrigerate for up to 1 day.
**For the strawberry marmalade:** Hull and quarter the strawberries. Bring the glucose syrup, 1⅓ cups (270 g) of the sugar, and 1 cup of water to a boil in a large saucepan. Add the strawberries, and cook over medium-high heat until they soften and begin to fall apart, 10 to 12 minutes, mashing the strawberries a little with a spoon. Whisk together the pectin and remaining ½ cup (100 g) of sugar, and then whisk into the marmalade. Cook until the mixture thickens, 3 to 5 minutes. Remove from the heat and let cool. Use immediately or refrigerate for up to 1 week.
**To finish:** Use a cookie cutter to cut out rounds of cake that will fit into your containers. I like to use a total of eight 8-ounce (235-ml) mason jars or glass mugs. For those you'll need sixteen 2-inch (5-cm) rounds of cake. Lay a cake round on the bottom of a mason jar or mug. Spoon on a layer of about 2 tablespoons (30 ml) marmalade, and then a layer of about ¼ cup (60 ml) zuppa and repeat with additional layers of cake, marmalade, and zuppa until the jar or mug is filled (about two layers each for 8-ounce/235-ml jars or mugs).
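Counting the cake rounds before you cut saves re-rolling scraps: one round per cake layer in each container. A tiny sketch of that count (the names are mine):

```python
def rounds_needed(jars: int, cake_layers_per_jar: int = 2) -> int:
    """One cake round per layer of cake in each jar."""
    return jars * cake_layers_per_jar

# Eight 8-ounce mason jars at two cake layers each:
print(rounds_needed(8))  # 16, matching the sixteen 2-inch rounds above
```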
**BLOOD ORANGE CROSTATA** with **BITTER CHOCOLATE**
Crostatas aren't just for dessert. Italians will wake up and have some with a cup of morning coffee. The crust is the most important part; hence the name. It has to be somewhat thicker than a normal tart shell and stay on the softer side. I use cake flour because it stays tender, and I add a little baking powder for puff. Toasted and ground hazelnuts add incredible richness and flavor. I usually fit the dough into a 9-inch (23-cm) square tart pan with a removable bottom, but you could use a ten- or 11-inch (25- or 28-cm) round tart pan. My favorite filling is blood oranges. In the winter, I make a reddish-orange marmalade with them, and then I use the marmalade all spring long. Serve the crostata warm or at room temperature. A little chocolate sauce makes it irresistible for dessert or for breakfast.
MAKES 12 SERVINGS
Blood Orange Marmalade:
6 pounds (2.75 kg) blood oranges (about 16)
¼ cup (60 ml) freshly squeezed lemon juice
1 teaspoon (4.5 g) unsalted butter
5 cups (1 kg) granulated sugar
2½ teaspoons (11.75 g) powdered pectin
Linzer Dough:
1 pound (4 sticks/450 g) unsalted butter, at room temperature
1½ cups (300 g) granulated sugar
1 vanilla bean, split and scraped
2 large eggs, divided
4¼ cups (582 g) cake flour
2½ ounces (½ cup/71 g) finely crushed vanilla wafer cookie crumbs
1 tablespoon (8 g) ground cinnamon
2 teaspoons (9 g) baking powder
8 ounces (227 g) hazelnuts, toasted and ground (about 2 cups)
To Serve:
2 cups (475 ml) Chocolate Sauce (page 285)
Confectioners' sugar for garnish
**For the marmalade:** Cut the rind from each orange, following the contour of the fruit and trying to cut as little of the flesh as possible. Working over a large saucepan, make V-shaped cuts around each segment, releasing the segments from the surrounding membranes and dropping the segments into the pan (this is called supreming the fruit); squeeze the membranes to release the juice. You should have about 6 cups (1.5 L) of orange segments and juice. Add the lemon juice and butter and bring to a simmer over medium-high heat. Whisk together the sugar and pectin and stir the mixture into the pan. Bring to a boil and cook until the mixture reaches 217°F (103°C) on a candy thermometer, 20 to 25 minutes, stirring occasionally. Let cool in the pan until ready to use or refrigerate for up to 1 week. You should have about 3½ cups (875 ml) of marmalade.
**For the Linzer dough:** Cream the butter, sugar, and vanilla in a stand mixer on medium speed until light and fluffy, 3 to 4 minutes. Switch to low speed and add one of the eggs. Sift together the cake flour, cookie crumbs, cinnamon, and baking powder and add to the dough. Mix until incorporated, then add the ground hazelnuts and mix just until incorporated. Divide the dough in half and scrape each half onto a sheet of plastic wrap. Seal the dough in the plastic wrap and refrigerate for at least 1 hour, or up to 2 days.
Let the dough sit at room temperature for a few minutes so it is soft enough to roll. Roll half of the dough on a lightly floured surface to a circle about 12 inches (30 cm) in diameter and ¼ inch (6 mm) thick. Carefully fold the dough over the rolling pin and transfer it to a 9-inch (23-cm) square or 10-inch (25-cm) round tart pan with a removable bottom. Unfold the dough and fit it into the pan without stretching the dough. The dough will be delicate and crumbly; patch any tears or holes with pieces of dough from the edge. Trim the dough so that it sits flush with the top of the tart pan.
Preheat the oven to 350°F (175°C). Fill the tart shell three-quarters full with the marmalade; save any remaining marmalade for another use. Roll the remaining half of the dough on lightly floured parchment to about ¼ inch (6 mm) thick and 3 inches (7.5 cm) larger than the dimensions of your pan. Cut twelve to fourteen strips of dough, each ½ to ¾ inch (13 to 19 mm) wide, preferably with a fluted cutter. Slide half of the strips to a floured cookie sheet, arranging them in parallel bars on the sheet with ½ to ¾ inch (13 to 19 mm) between each strip. Carefully fold back every other strip halfway. Insert a new strip at the center perpendicular to the parallel strips. Reposition the folded strips back over the new strip. Next, fold back the alternate parallel strips and insert another perpendicular strip next to the first one. Then reposition the folded strips back over the new strip. Continue until the perpendicular strips reach the outer edge of the parallel ones. Then, turn the crust and repeat the process on the other half, working from the center toward the edge. Carefully slide the lattice over the filling. The dough will be crumbly; if any of the strips break, seal them back together with your fingers. Trim the excess dough from the tart and pinch together the edges of dough around the perimeter of the tart to seal. Beat the remaining egg with 1 teaspoon (5 ml) of water and brush all over the dough, sealing any cracks. Bake until golden brown, 20 to 25 minutes. Let cool before slicing into wedges.
**To serve:** Gently heat the chocolate sauce just until pourable and spoon a pool of sauce off center on each plate. Top with a slice of crostata, positioning the front corner over the sauce. Garnish with confectioners' sugar.
TRESCORE BALNEARIO
OUR BIG ITALIAN WEDDING
THAT SUMMER I MOVED BACK INTO MY PARENTS' HOUSE IN NASHUA, NEW HAMPSHIRE. I RAN THE KITCHEN AT THE BEDFORD VILLAGE INN, A LUXURY B&B AND RESTAURANT. CLAUDIA WAS STILL IN ITALY, RUNNING HER VIDEO RENTAL BUSINESS IN ALBINO. EVERYONE KNOWS LONG-DISTANCE RELATIONSHIPS DON'T WORK. BUT WE TOOK A CHANCE AND SPENT THE SUMMER TALKING VIA WEBCAM. WHEN SHE DECIDED TO VISIT ME IN THE STATES, I TOOK IT AS A STRONG STATEMENT.
I arranged a weekend getaway for us in Ogunquit, Maine. We stayed at Parson's Post House on Shore Road, with a gorgeous ocean view. We strolled through the whitewashed shops of Ogunquit and lunched on lobster rolls piled high in buttered split-top buns. At the end of Shore Road, we wandered onto the Marginal Way, a beautiful mile-long walking path along Maine's rugged coastline. The clean smell of seawater reminded me of all the fresh fish and salty cured meats I'd fallen in love with over the past three years in Italy. I thought about the places that Claudia and I had been together like Alba, Barolo, Venice, and Florence, and when we reached a little inlet called Devil's Kitchen, we rested on a boulder overlooking the Atlantic. Almost directly east of us across the ocean sat the rocky cliffs of le Cinque Terre, where we held hands and kissed along the Via dell'Amore. My heart raced and my mind cycled through decades of images from my childhood to Italy to the present to the future. I turned to face Claudia, the tide receded, the squawk of the seagulls softened, and I opened my other hand, which had been clamped around a tiny object. "Will you marry me?" I asked, showing Claudia the ring and placing it in her palm. She started crying and shaking. I thought she was going to drop the ring in the ocean! Claudia clenched her palm, looked at me through teary eyes, and exclaimed, " _Si!_ " It took me a few minutes to pry open her hand, but I retrieved the ring and slid it onto her finger.
With the ring on her finger, Claudia returned to Italy and we spent the next five months apart, just like the last five. Finally, her fiancée visa came through, and we had ninety days to get married in the United States. Her brother Alex bought her business in Italy to help Claudia make the move. I flew to Italy to pick her up and, at the airport, Pina cried because she had just lost her mother, Nonna Anna, and now her daughter was leaving.
For the next several months, we lived with my parents in New Hampshire. Claudia's English was still shaky, and communicating with my mother and father wasn't easy. She tried to form a bond with them over food. But my dad likes what he likes. Teriyaki, for instance. He makes it every few days. Claudia ate it, but eventually wanted to taste something different. Every other day it was steak teriyaki, chicken teriyaki. . . teriyaki, teriyaki, teriyaki! And my parents weren't warming up to Claudia's favorite foods. She would make risottos, pastas, and salads, and my dad would say, "No thanks, I already had dinner." But he was secretly sneaking bites at night. Once Claudia found out that my dad was eating her food, she felt more at home.
That year, we had two weddings: one in Nashua, New Hampshire, at Alpine Grove with a justice of the peace; and another in Trescore Balneario, Italy, which was an all-day extravaganza. Trescore Balneario is where I had my first cooking job at Loro, so my life in Italy had come full circle. We were married in Italian by Claudia's childhood priest, Don Camillo at Santuario della Madonna dello Zuccarello, a tiny mountainside church in Nembro. Twelve of my family members came from America, including my eighty-five-year-old grandmother. Around noon, the reception began at Locanda Armonia, a drop-dead gorgeous inn and restaurant nestled among the olive groves and vineyards above Trescore Balneario. We started with _stuzzichini_ (hors d'oeuvres), such as house-cured prosciutto sliced to order, oysters with caviar zabaione, grilled veal sausage with spring onion mostarda, eggplant cannoli with burrata and oregano, and mortadella pigs in a blanket. We took pictures among the vineyards, danced, and dined on beef tartare with fried egg yolk and Tropea onions, grilled seppia with sweet peas and mâche, fava bean and robiola agnolotti with culatello, and whole roasted quail stuffed with foie gras and fig agrodolce. Lunch lasted until five p.m. After cutting the cake and sipping lemon sgroppino, a kind of alcoholic slushy, the party moved downstairs to the tavern with a DJ, more dancing, and a dessert buffet full of citrus rum babas, olive oil panna cotta, and other sweets. It was an orgy of food. The party went from noon to midnight and brought together two big, boisterous families from the United States and Italy. Neither family spoke the other's language, so the only way we could communicate was through food. As Claudia and I danced, we looked over and her Uncle Bruno was shoving little cotechino panini, grilled to order, into my Uncle Al's mouth. It was a daylong celebration of everything I loved about Italy.
Corn Tortelli with Ricotta Salata
•
Schisola (Polenta Stuffed with Gorgonzola Dolce)
•
Prosciutto Cotto with Stone Fruits
•
Ciareghi
•
Pizzoccheri with Chard, Potato, and Bitto Cheese
•
Wild Branzino with Fennel and Artichokes
•
Citrus Rum Babas alla Crema
•
Olive Oil Panna Cotta with Summer Berries
•
Chiacchiere with Coffee and Chocolate Budino
**CORN TORTELLI** with **RICOTTA SALATA**
When it was my birthday in Italy, I threw myself a party. That's what you do there! I planned an all-American barbecue with smoked meat and boiled corn. I went to every _supermercato_ for the corn and all I could find was preshucked three-packs of corn that had probably been sitting in the case for the whole month of July. Let me tell you, it wasn't Jersey corn! In the land of polenta, I couldn't find any sweet corn! But it was a great birthday anyway. When I got back to the States, I made this dish thinking that this is what I would have made if sweet corn had been available in Italy. When pureed and mixed with a little cheese and egg to bind it, fresh corn makes an incredibly creamy pasta filling. Turn to this recipe in the height of summer when corn is sweetest.
MAKES ABOUT 4 SERVINGS
5 ears fresh corn
1 tablespoon (15 ml) olive oil
½ medium-size yellow onion, chopped (⅔ cup/80 g)
1 small egg
Salt and freshly ground black pepper
8 ounces (227 g) Egg Pasta Dough (page 282), rolled into 2 sheets, each about 1/16 inch (1.5 mm) thick
4 ounces (1 stick/113 g) unsalted butter
½ cup (120 ml) black truffle paste
4 ounces (113 g) ricotta salata
Shuck the corn, stand each ear upright on a cutting board, and cut the kernels from the cobs. Heat the oil in a large, deep sauté pan over medium heat, add the onion, and sweat until soft but not browned, 4 to 5 minutes. Add the corn, and cook until tender, 3 to 4 minutes. Transfer to a blender and puree until very smooth and thick, 2 to 3 minutes. If the mixture is thin and easily pourable, stir it in a fine-mesh sieve to drain some of the liquid. The corn mixture should be thick enough to stand on a spoon. Let cool slightly, then quickly blend in the egg and season with salt and pepper. Transfer to a resealable plastic bag, seal, and refrigerate for up to 1 day.
Lay a pasta sheet on a lightly floured work surface and trim the edges square. Cut the pasta into 3-inch (7.5-cm) squares. Spritz the dough lightly with water to keep it from drying out. Put teaspoon-size spoonfuls of filling on each square, then bring the opposite corners together over the filling to make a triangle. Press gently on the edges to seal. Bring the two opposite points of the triangle up over the pasta and pinch them together to seal: you should have a large tortellini shape. Repeat with the remaining pasta dough and filling. The filled pasta can be refrigerated for up to 8 hours on a sheet pan or frozen for up to 3 days before cooking. You should have about fifty tortelli.
Bring a large pot of salted water to a boil. Drop in the tortelli in batches if necessary to prevent overcrowding; quickly return the water to a boil and cook until tender yet firm, 2 to 3 minutes. Drain the pasta, reserving about 1 cup (235 ml) of the pasta water.
Just before the pasta is done, melt the butter and truffle paste in a large deep sauté pan over medium heat. Spoon in about ½ cup (120 ml) of pasta water, and cook until the sauce is creamy, 1 to 2 minutes. Add the pasta in batches if necessary, and toss gently to coat. Transfer to a large serving bowl and shave or grate the ricotta salata over the top.
**SCHISOLA (POLENTA STUFFED** with **GORGONZOLA DOLCE)**
Claudia's grandfather kept cows and made butter and cheese. He would put some formagella cheese and leftover polenta in his pocket to eat as a snack in the fields. The two would get smooshed in his pocket and they called it _schisola_, which means "squished" in the Bergamascan dialect. I like to roll the polenta into balls, squish pieces of Gorgonzola inside, and then broil them. Just be sure to blast them at high heat so the polenta browns before the cheese oozes out.
MAKES 4 SERVINGS
2 cups (475 ml) cooked Polenta (page 281)
4 ounces (113 g) Gorgonzola cheese, divided into 12 pieces
4 ounces (1 stick/113 g) unsalted butter, divided, plus some for greasing the pan
1 ounce (28 g) Parmesan cheese, grated (¼ cup) for garnish
8 sage leaves for garnish
Using a 2-ounce (60-ml) ice-cream scoop or your fingers, scoop out twelve balls of polenta, each 1 to 1½ inches (2.5 to 3.75 cm) in diameter. Wet your hands, make a dimple in each polenta ball, and press a piece of Gorgonzola into the dimple. Form the polenta around the Gorgonzola, rolling it between your wet palms into a neat ball. Use immediately or place on a parchment-lined tray, cover, and refrigerate for 2 hours.
Preheat the oven to 500°F (260°C). Turn on convection if possible. You want to blast these at a pretty high temperature. Grease a baking sheet with some of the butter. Melt 3 tablespoons (42 g) of the butter, arrange the polenta balls on the sheet, and brush each one with butter. Bake until the polenta lightly browns and the cheese just starts to melt inside, 5 to 7 minutes.
Meanwhile, melt 5 tablespoons (71 g) of the butter over medium heat in a small skillet and add the sage leaves. Cook until the sage lightly browns, the butter turns golden, and the milk solids fall to the bottom of the pan and turn light brown, 6 to 7 minutes.
Divide the schisola among plates, sprinkle on the Parmesan, drizzle with brown butter, and garnish with the sage leaves.
**PROSCIUTTO COTTO** with **STONE FRUITS**
Ham, or prosciutto, is the hind leg of the pig. Italians make two different kinds: _prosciutto crudo_, the raw dry-cured type that's sliced paper-thin; and _prosciutto cotto_, which is similar to American wet-cured, cooked ham. The difference is that Italians leave the bone, fat, and skin on the cooked ham during the entire curing and cooking process, which keeps the meat moist and makes it taste richer. At our wedding, we served slices of prosciutto cotto wrapped around melon as an appetizer. I also like to serve it with a salad of late-summer fruits, such as plums, apricots, and peaches. You'll need a large marinade injector for this recipe (see Sources, page 289). Or you could soak the raw, bone-in ham in brine for thirty days. If using boneless ham, soak it for twenty-five days. Either way, you will have leftover ham from this recipe. It keeps for several weeks in the refrigerator.
MAKES 6 TO 8 SERVINGS (PLUS LEFTOVER HAM)
Ham:
1 uncooked bone-in ham (pork hind leg), 20 pounds (9 kg)
3 gallons (12 L) 3-2-1 Brine (page 280)
4 teaspoons (24 g) curing salt #1 (see page 277)
4 teaspoons (9.5 g) ground mace
4 teaspoons (8.5 g) ground coriander
4 teaspoons (8.5 g) freshly ground black pepper
2 teaspoons (8 g) ground juniper berries
Stone Fruits:
2 tablespoons (30 ml) sherry vinegar
6 tablespoons (90 ml) extra-virgin olive oil
2 tablespoons (7 g) chopped mixed fresh herbs (parsley, rosemary, and thyme), divided
Salt and freshly ground black pepper
2 ripe plums
2 ripe apricots
2 ripe peaches
**For the ham:** Rinse the ham and leave it wet. Combine the brine, curing salt, ground mace, coriander, pepper, and juniper, and stir to dissolve the salt. Set the ham on a rimmed baking sheet. Using a marinade injector, inject one-quarter of the brine (3 quarts/3 L) into the ham. Try to hit all of the areas around the bone. If you can find the central vein near the bone, inject the brine in there and it should go throughout the meat. If not, find about ten different spots near the bone and inject the brine into the meat. The ham should swell a bit and brine should leak out of it. Place the injected ham in a large tub or plastic-lined bucket that will fit in your refrigerator. Cover the ham and refrigerate for 14 days. Refrigerate the remaining brine. If you're low on refrigerator space, chill the ham, the brine, or both in an ice-filled cooler in a cool, dark spot, replenishing the ice as necessary.
Transfer the brined ham to a large, heavyweight roasting pan or stockpot and add about 1½ gallons (6 L) of the remaining brine. Add enough water so that the liquid covers the meat completely. Cover and bring to a boil over high heat, and then lower the heat so that the liquid is just under a simmer and reads about 155°F (68°C) on an instant-read thermometer. Cover and braise at that temperature until the ham reaches an internal temperature of 155°F (68°C), 9 to 10 hours. Turn off the heat and let the ham cool overnight in the liquid. Remove from the liquid and carve out the bone by making one cut along the length of the ham, cutting down to the bone; cut around the bone to remove it, leaving as much meat as possible on the ham. Cover the ham and refrigerate until ready to use, up to 3 weeks.
**For the stone fruit salad:** Pour the vinegar in a medium bowl. Whisk in the oil in a slow, steady trickle until blended and thickened, 1 to 2 minutes. Whisk in 1 tablespoon (4 g) of the herbs and salt and pepper to taste. Cut the plums, apricots, and peaches in half from top to bottom, twist the halves apart and remove and discard the pits. Slice the fruit into thin half-moon slices and add to the bowl, tossing to coat.
When ready to serve, use a large sharp knife to slice the ham crosswise into very thin slices, removing the skin as you go. Lay a few slices of ham on a plate and top with the stone fruit salad. Garnish with the remaining 1 tablespoon (4 g) of the herbs.
**CIAREGHI**
My first job in Italy was at Michelin-starred Loro in Trescore Balneario. For staff meal, we would eat _ciareghi_, which is Bergamascan dialect for an egg over easy with browned butter. At the restaurant, we added wood-grilled cotechino sausage and soft polenta to bulk up the dish. _Cotechino_ is a classic fresh sausage from Bergamo, ground a bit coarse with some warm and peppery spices. It's easy to make, but if you don't want to make it, you could use another Italian black pepper or fennel sausage.
MAKES 4 SERVINGS (PLUS LEFTOVER SAUSAGES)
Cotechino:
4 pounds (1.75 kg) boneless pork shoulder
5 ounces (141 g) pork fatback
3 tablespoons (25 g) kosher salt
4 teaspoons (9.5 g) powdered dextrose, or 3 teaspoons (7.75 g) superfine sugar
¾ teaspoon (2 g) ground cinnamon
½ teaspoon (1 g) freshly ground black pepper
½ teaspoon (1 g) ground allspice
¼ teaspoon (0.5 g) freshly grated nutmeg
¼ teaspoon (0.5 g) ground cloves
¼ cup (60 ml) white wine
About 12 feet (3.5 m) hog casings, soaked in cold water for 1 hour, then rinsed inside and out
To Serve:
2 cups (475 ml) hot cooked Polenta (page 281)
Olive oil, as needed
4 teaspoons (19 g) unsalted butter
4 large eggs
2 teaspoons (2.5 g) chopped mixed herbs (parsley, rosemary, and thyme) for garnish
Rock salt for garnish
**For the cotechino:** Cut the pork shoulder and fatback into 1 to 1½-inch (2.5 to 3.75-cm) cubes. Lay the cubed meat on a sheet tray and freeze until firm but not solid, about 1 hour. At the same time, freeze all parts of a meat grinder.
Fit the meat grinder with the large (¼-inch/6-mm) die, then put on plastic gloves and stick your hands in a large bowl of ice until very cold. Place the bowl of a stand mixer in the bowl of ice, and set the grinder on high speed. Scatter the kosher salt, dextrose, cinnamon, pepper, allspice, nutmeg, and cloves over the semi-frozen meat. Grind the meat through the large die twice, catching it in the cold mixing bowl. Add the white wine and mix with a stand mixer or electric mixer on low speed until the meat feels sticky, like wet bread dough, 2 to 3 minutes.
Attach a large sausage stuffer tube to the meat grinder and lubricate the tube with some water. The next step of stuffing the sausage is much easier with two people: if you can, have one person feed in the meat mixture and the other person handle the casing as it fills up. Feed some of the meat mixture into the feed tube on high speed until it just starts to poke out the end of the sausage stuffer. Turn off the machine. Use butcher's string to tie a double knot into the end of a hog casing, then slip the open end of the casing onto the stuffer all the way to the tied end, like putting a sock on your foot. Put pressure on the end of the casing so it's gently pressed against the stuffer. Turn the machine to high speed and feed in the meat mixture. Keep gentle pressure on the casing so the mixture packs into the middle of the casing as tightly as possible. You do not want any air bubbles in there because air can allow bacteria to breed and spoil the sausage. As the meat gets stuffed into the casing, it should pack around the end of the stuffer tube by at least 1 inch (2.5 cm) to prevent air from getting into the sausage. Constantly check the sausage for air bubbles, working them out the open end of the casing as necessary. Continue stuffing the mixture into the casing until it is full and evenly stuffed and all of the meat mixture is used, using additional casings as necessary. Remove the stuffed casing from the stuffer, grab the open end, and squeeze it down tightly against the meat to pack it firm. Twist the open end several times against the meat until the sausage is firm and sealed. Tie off the twisted end with butcher's string and poke any air pockets with a needle to eliminate air. Pinch the sausage every 4 to 5 inches (10 to 13 cm), twisting in opposite directions and tying knots to make 4- to 5-inch (10- to 13-cm) sausage links. Cover and refrigerate overnight or up to 1 week. You should have twenty to twenty-five 4-inch (10-cm) sausage links. Any leftover sausage will keep frozen for 1 month.
**To serve:** Make the polenta. When it is done, slice four sausages lengthwise almost in half, and open like a book. Heat a grill or skillet on high heat and brush the grates with a little oil. Grill the sausages face-down until browned and firm on that side, 3 to 4 minutes. Flip, and cook until the other side is lightly browned, 1 to 2 minutes. Remove from the heat and set aside. Melt the butter in a large sauté pan and crack all four eggs directly into the pan. Cook over medium heat until the whites are firm but the yolks are still runny, 2 to 3 minutes. Do not flip the eggs.
Spoon ½ cup (120 ml) hot cooked polenta into each bowl and top with a grilled sausage, face up. Lay a sunny-side up egg on top of each sausage. Return the sauté pan to high heat and cook the butter until it turns golden brown. Spoon the browned butter over the eggs and garnish with the herbs and rock salt.
**PIZZOCCHERI** with **CHARD, POTATO**, and **BITTO CHEESE**
Claudia's friend Laura has a mountain house in Valtellina, about an hour from the Swiss border. There's a little river in the backyard, cattle come ambling down the mountain, and there's a pig farm next door. It's idyllic, to say the least. The first time Claudia took me there, we picked up some formagella, fontina, and pizzoccheri at a local cheese shop. Pizzoccheri are short pasta strips like tagliatelle but made with 80 percent buckwheat flour. Claudia, Laura, and their friend Consuela made the pasta and tossed it with both cheeses, some boiled potatoes, and a leafy green called _bieta_ that's similar to Swiss chard. They finished the dish with brown butter and sage. It was awesome, and one of the most traditional dishes from Valtellina. I like to serve it family style on a big platter, but you could serve it on individual plates if you like.
MAKES 6 SERVINGS
1 pound (450 g) Swiss chard
6 ounces (1½ sticks/170 g) unsalted butter
12 leaves fresh sage
8 ounces (227 g) Buckwheat Pasta Dough (page 282), rolled into 2 sheets, each about 1/16 inch (1.5 mm) thick
1 pound (450 g) gold potatoes, peeled and cut into ½-inch (1.25-cm) cubes
4 ounces (113 g) Bitto cheese, shredded (1½ cups)
4 ounces (113 g) fontina cheese, shredded (1½ cups)
Salt and freshly ground black pepper
Strip the leaves from the stems of the chard and coarsely chop the leaves. Set aside.
Put the butter and sage in a large deep sauté pan over medium heat, and cook until the sage lightly browns, the butter turns golden, and the milk solids lightly brown on the bottom of the pan, 6 to 8 minutes.
Meanwhile, lay a pasta sheet on a lightly floured work surface and trim the edges square. Cut crosswise into strips a little less than 1 inch (2.5 cm) wide, preferably with a fluted cutter. Repeat with the remaining pasta dough.
Bring a large pot of salted water to a boil. Add the potatoes and blanch until barely tender, 3 to 4 minutes. Using a slotted spoon, transfer the potatoes to the browned butter (reserving the pot of boiling water for the pasta), and cook over medium heat until the potatoes are tender, 3 to 4 minutes. Add the chard and 1½ cups (375 ml) of pasta water, and cook until the chard wilts and the sauce is creamy, 3 to 4 minutes.
Add the pasta to the boiling water, and cook until tender yet firm, about a minute. Drain the pasta and add to the pan, along with the Bitto and fontina, tossing until the cheese melts and looks stringy. Season generously with salt and pepper and serve on a large platter or divide among warm pasta plates.
**WILD BRANZINO** with **FENNEL** and **ARTICHOKES**
At Ristorante Loro, the chef Antonio Rochetti was very particular about his fish. He only ordered from Pesceria Orobica in Bergamo because it had the freshest fish. This fishmonger also ships to the United States for a pretty penny! Antonio would only order wild fish, never farm-raised. To show me the difference, he ordered both wild and farmed branzino one day. The wild fish looked plump and clear in color, and the meat tasted extra-flavorful. The farmed fish looked smaller and duller, and it tasted that way. Don't worry; if you can't find wild branzino, you can use another wild white fish, such as bass, snapper, or sea bream. For the artichokes, I use what restaurants term the "24-count" size, meaning twenty-four to a carton; each is about three inches in diameter. If you use bigger artichokes, reduce the total number of them here.
MAKES 4 SERVINGS
2 bulbs fennel
13 sprigs fresh thyme, divided
3 garlic cloves, sliced, divided
⅔ cup (150 ml) olive oil, divided, plus some for oiling the fish
4 tablespoons (60 ml) melted unsalted butter, divided
½ cup (120 ml) freshly squeezed lemon juice, divided
Salt and freshly ground black pepper
6 artichokes, each about 3 inches (7.5 cm) in diameter
½ cup (120 ml) white wine
4 wild European branzino fillets (5 to 6 ounces/142 to 170 g each), skin on
1 cup (235 ml) Fish Stock (page 279)
1 tablespoon (15 ml) truffle paste
2 tablespoons (7 g) chopped fresh flat-leaf parsley
Preheat the oven to 400°F (205°C). Trim the fennel bulbs and then cut them in half lengthwise, keeping the cores intact. Slice the fennel lengthwise into strips so the cores keep the slices whole. Toss the fennel with ten sprigs of the thyme, two of the sliced garlic cloves, 1 tablespoon (15 ml) of the oil, 1 tablespoon (15 ml) of the melted butter, and 3 tablespoons (45 ml) of the lemon juice on a rimmed baking sheet. Season with salt and pepper and then lay the pieces in a single layer. Roast until tender, about 1 hour, turning once halfway through.
Combine 3 tablespoons (45 ml) of the remaining lemon juice with 3 cups (750 ml) of water in a medium bowl. Cut one of the artichokes in half through the equator to remove most of the leaves and expose the choke (the fuzzy part). Scoop out and discard the choke. Use a paring knife to pare down the artichoke to just the tender white part (the heart) with some of the tender white stem attached. Cut the artichoke heart into eight equal-size wedges. Toss in the acidulated water to prevent discoloration, then repeat the process with the remaining artichokes. Using a slotted spoon, transfer the artichokes to a roasting pan and add the wine, ½ cup (120 ml) of the remaining oil, 1 tablespoon (15 ml) of the remaining melted butter, the remaining garlic clove, and the remaining three sprigs of thyme, tossing to coat. Cover with foil and roast until the artichokes are tender, about 45 minutes.
Heat a grill to medium-high heat. Coat the fish lightly with oil and season with salt and pepper. Scrape the grill rack and coat it with oil. Grill the fish, skin-side down, until the skin is crispy, 3 to 4 minutes. Flip, and grill until the fish is still a little filmy and moist in the center, 3 to 4 minutes more.
Meanwhile, transfer the roasted fennel and roasted artichokes to a large deep sauté pan, discarding the thyme sprigs. Add the fish stock, the remaining 2 tablespoons (30 ml) of melted butter, the remaining oil, the remaining 2 tablespoons (30 ml) of lemon juice, and the truffle paste, and cook over medium heat until the sauce gets a little creamy and coats the vegetables, 3 to 4 minutes. Stir in the parsley and season with salt and pepper.
Lay the branzino on plates, skin-side up. Spoon the fennel and artichokes over the fish, spooning the sauce over the vegetables and around the plate.
**CITRUS RUM BABAS ALLA CREMA**
The greatest thing about rum cake is that it's saturated with rum syrup. When you bite into the cake, it's like taking a shot of rum with syrup running over your lips and down your chin. It's a classic Neapolitan dessert. We served miniature rum babas at our wedding, but make these any size you like. I serve them with a squeeze of pastry cream and some diced strawberries marinated in sugar and lemon juice.
MAKES 16 TO 18 SMALL BABAS
Starter:
4 packed teaspoons (25 g) fresh yeast, or 2 teaspoons (8 g) active dry yeast
1 tablespoon (15 ml) warm water (110°F/43°C)
3 tablespoons (25 g) bread flour
4 teaspoons (16 g) granulated sugar
Babas:
1½ cups (205 g) bread flour
1 cup plus 4 teaspoons (216 g) granulated sugar, divided
3 large eggs
5 tablespoons (71 g) unsalted butter, softened
1 vanilla bean, split and scraped
Pinch of salt
7 ounces (200 g) candied orange peel, finely chopped (about 1 cup) (see page 288)
½ cup (120 ml) dark spiced rum, such as Myers's
2 cups (475 ml) Pastry Cream (page 285)
**For the starter:** Stir together the yeast, water, flour, and sugar in the bowl of a mixer. Let stand in a warm spot until the surface looks foamy, 20 to 30 minutes.
**For the babas:** Add the flour, 4 teaspoons (16 g) of the sugar, and the eggs, butter, vanilla, and salt to the starter and mix with the paddle attachment on medium speed until the dough is so sticky that it wraps around itself, 20 to 30 minutes, scraping down the sides several times. Mix the candied orange peel into the dough. Coat sixteen to eighteen 1- to 2-ounce (30- to 60-ml) silicone baking cups with cooking spray (I use a single tray of 1.5-ounce/45-ml cone-shaped silicone cups; you could also use larger baking cups or muffin cups). Divide the dough into sixteen to eighteen pieces, each about 1 ounce (28 g), and place each piece in a prepared cup (if using larger baking cups, use larger pieces to fill them). Cover loosely and let rise in a warm spot until doubled in size, about 1 hour.
Preheat the oven to 325°F (160°C). Bake the babas until a toothpick inserted in the center comes out clean, about 15 minutes.
Meanwhile, combine the remaining 1 cup (200 g) of sugar with 1 cup (235 ml) of water in a small saucepan. Bring to a simmer over medium heat, then remove from the heat and stir in the rum. When the babas come out of the oven, remove them from their molds, let cool slightly, then immerse each one in the rum syrup for 10 to 15 seconds, until completely saturated. Transfer to a wire rack to cool. The babas can be covered and refrigerated for up to 2 days.
When ready to serve, spoon the cooled pastry cream into a pastry bag or resealable plastic bag. Cut a deep slit in the babas from top to bottom, leaving the babas intact at the back, and pipe the pastry cream into the slit. Serve immediately or refrigerate for up to 1 hour before serving.
**OLIVE OIL PANNA COTTA** with **SUMMER BERRIES**
Claudia and I got married in Italy in the summertime, and this dish was the star of the dessert buffet. It was served with wild blackberries, raspberries, gooseberries, and red and white currants. But you can use whatever berries are freshest when and where you make it. The important thing is to use a good, strong olive oil, preferably a peppery one from southern Italy. That's what makes this panna cotta different from others. The peppery olive oil jumps out at you and marries perfectly with the sweet berries.
MAKES 6 SERVINGS
Panna Cotta:
⅔ cup (150 ml) olive oil, plus some for greasing ramekins
1½ cups (375 ml) heavy cream
1 cup (235 ml) whole milk
⅔ cup (133 g) granulated sugar
2¼ teaspoons (5.25 g) powdered gelatin
Berries:
2 cups (300 g) mixed fresh berries (blueberries, huckleberries, strawberries, blackberries, and raspberries)
¼ cup (50 g) granulated sugar
2 tablespoons (30 ml) farmers' honey (any variety)
3 tablespoons (45 ml) freshly squeezed lemon juice
**For the panna cotta:** Coat six 4-ounce (125-ml) ramekins with olive oil and fill a large bowl with ice water. Bring the cream, milk, and sugar to a boil in a small saucepan. Lower the heat so that the mixture simmers gently, and sprinkle the gelatin over the top, whisking to disperse it evenly. Use a stick blender to buzz in the ⅔ cup (150 ml) of olive oil in a thin stream until the mixture is blended and emulsified.
Briefly dip the bottom of the pan into the ice water to cool down the mixture slightly, whisking gently as it cools. Before it gets too firm to pour, divide the mixture among the prepared ramekins. Refrigerate overnight or up to 2 days.
**For the berries:** Stir together the berries, sugar, honey, and lemon juice in a small bowl, cover, and let marinate at room temperature for 24 hours.
Unmold the panna cotta onto plates and spoon on the marinated berries and their syrup.
**CHIACCHIERE** with **COFFEE** and **CHOCOLATE BUDINO**
During _carnevale_ in Italy, you see _chiacchiere_ everywhere. They're crispy little strips of fried dough. The dough usually includes a little grappa. It is a celebration, after all. I like to shape them into little knots so you can dip them like edible spoons into creamy coffee and chocolate pudding. Serve the pudding in coffee cups, if you like. That hints at the meaning of _chiacchiere_, which translates literally to "chatter" or "small talk," the sort of thing that happens over a cup of coffee.
MAKES 4 TO 6 SERVINGS
Budino:
3½ teaspoons (6.5 g) very finely ground espresso
1¾ cups plus 2 tablespoons (455 ml) heavy cream
6 tablespoons (75 g) granulated sugar, divided
Pinch of salt
½ vanilla bean, split and scraped
4 large egg yolks
3 ounces (85 g) bittersweet chocolate, finely chopped (about ½ cup)
Chiacchiere:
¾ cup (94 g) _tipo_ 00 flour (see page 277) or all-purpose flour, plus some for dusting
2½ teaspoons (10.5 g) granulated sugar
Pinch of salt
2 teaspoons (9 g) unsalted butter, softened
1 small egg
1½ teaspoons (7 ml) grappa
Zest of ½ orange
½ vanilla bean, split and scraped
5 teaspoons (25 ml) whole milk
Oil for frying
Confectioners' sugar, for dusting
**For the budino:** Preheat the oven to 325°F (160°C). Whisk together the espresso, cream, 3 tablespoons (38 g) of the sugar, and the salt and vanilla in a saucepan and bring to a simmer over medium heat.
Combine the egg yolks and remaining 3 tablespoons (38 g) of sugar in the bowl of a mixer and whip on high speed until light and fluffy, 3 to 4 minutes. Very gradually whisk the cream mixture into the egg mixture to prevent scrambling the eggs. When combined, pour the hot mixture over the chocolate in a medium bowl. Let stand until melted enough to be stirred smooth. Strain the mixture and pour into four to six 4-ounce (125-ml) ramekins. Set the ramekins in a roasting pan and pour enough hot water into the pan to come halfway up the sides of the ramekins. Bake until the sides are set but the center is still a little jiggly, 25 to 35 minutes. Remove the ramekins from the water bath and let cool completely on a rack. Cover and refrigerate until cold, at least 2 hours or up to 2 days.
**For the chiacchiere:** Combine the flour, sugar, salt, butter, egg, grappa, orange zest, vanilla, and milk in the bowl of a mixer. Mix on medium speed with the paddle attachment until the dough comes together and gathers around the paddle, 2 to 3 minutes. Scrape the dough onto a sheet of plastic and wrap tightly. Refrigerate for 1 hour or up to 1 day.
Shape the dough into an oblong disk the width of your pasta machine. Lightly flour a work surface and position a pasta roller at the widest setting. Roll the dough through the rollers, lightly dusting the dough with flour and brushing off the excess with your hands. Reset the rollers to the next narrowest setting and again pass the dough through the rollers, dusting again with flour. Pass the dough once or twice through each progressively narrower setting. Roll the dough to about 1/16 inch (1.5 mm) thick, about setting #4 or 5 on the KitchenAid pasta attachment. Trim the edges square and then cut the dough crosswise into ½-inch (1.25-cm)-wide strips. Tie each strip into a knot like a pretzel. You should have thirty-five to forty knots, which can be covered and refrigerated for up to 1 day or frozen for up to 1 month.
Heat the oil in a deep fryer or deep pot to 350°F (175°C). Drop in the chiacchiere a few at a time and fry until golden brown, 3 to 4 minutes, flipping once and adjusting the heat to maintain a 350°F (175°C) oil temperature at all times. Immediately transfer to paper towels to drain. Dust with confectioners' sugar and serve with the budino.
DESENZANO DEL GARDA
THE CULINARY JOURNEY OF A LIFETIME
I GOT THE CALL IN EARLY 2006. MY HANDS WERE DEEP IN BREAD DOUGH AT THE BEDFORD VILLAGE INN. IT WAS MARC VETRI AND JEFF BENJAMIN CALLING FROM THE CA' MARCANDA VINEYARDS OF GAJA WINERY IN TUSCANY. THEY HAD AN IDEA AND A LOCATION FOR A PHILADELPHIA RESTAURANT: A TRADITIONAL ITALIAN _OSTERIA_ WITH A WOOD-BURNING STOVE, SERVING RUSTIC YET REFINED FOOD IN A CASUAL, UPSCALE ATMOSPHERE. THEY OFFERED ME A PARTNERSHIP IN THE RESTAURANT. I STARED AT THE KITCHEN WALL FOR HALF A MILLISECOND AND SAID YES! EVERY CHEF DREAMS OF OWNING HIS OR HER OWN RESTAURANT.
The real eye-opener came over the next six months. I spent every spare moment before and after work helping to design the restaurant space, draft blueprints for the kitchen, frame out and lay concrete for the building itself with the construction crew, develop the menu, test the dishes, and interview and hire everyone from line cooks to dishwashers. We spent months staining the concrete floors red, filling the restaurant with antique country tables, seeking out a vintage Faema espresso machine, and finding a decent price on three antique Berkel meat slicers for slicing our house-cured prosciutto paper-thin right in front of our guests.
For six months, I dreamed in menus. The dishes for this restaurant had to capture the bold flavors and rustic simplicity of all the food I cooked in Italy. The wood-burning grill and oven took center stage in our collective visions, and we put more than half a dozen wood-fired pizzas on the menu, including the Lombarda, topped with house-made cotechino sausage, Bitto and mozzarella cheeses, and a baked egg. Roasted vegetable antipasto, spit-roasted suckling pig, bistecca alla Fiorentina, and rustic copper-pot polenta all underscored the importance of a live wood fire in Italian cooking. I also included some traditional dishes from Bergamo, such as ciareghi, and a few I learned from my mother-in-law, Pina, such as Nonna's rabbit and candele with wild boar Bolognese. Pastas and desserts are among my specialties, so I featured plenty on the menu, including robiola and fava francobolli, chicken liver rigatoni, bonet, chocolate flan with pistachio gelato, polenta budino with gianduia mousse and candied hazelnuts, and torrone semifreddo with candied chestnuts and chocolate sauce.
From working in Italy in such Michelin-star restaurants as Frosio and Loro to running the kitchen at Locanda del Biancospino, I can honestly say that building, opening, and running Osteria in Philadelphia has been, by far, the greatest learning experience of my career. More than any other, this restaurant has taught me that being a chef isn't always about cooking. An executive chef spends roughly 30 percent of his or her time preparing food—and it's the easiest 30 percent. The rest of the time is consumed with organizing schedules, writing checklists, taking inventory, placing orders, doing payroll, acting as mentor and psychiatrist to the staff, and dealing with countless little crises that arise on any given day. If all a chef had to do was cook, the job would be easy!
But I love every minute of it. I love Italian food and cooking so much that Osteria eventually wasn't enough. I soon wanted to give people a more immersive experience by taking them directly to Italy. In 2010, Claudia and I led our first culinary tour of northern Italy, anchored in Desenzano, a popular vacation city on the largest lake in Italy, Lake Garda. We hosted ten guests at Agriturismo Armea, a beautiful inn near the southern shores of Lake Garda with an outdoor pool and a stone-walled dining room. From Desenzano, you get a stunning lakeside view of the Italian Alps. Shops and cafés buzz with tourists by day and restaurants and clubs come alive at night. Formed by glaciers, Lake Garda enjoys a climate that's unusually mild this far north but perfect for growing both olives and citrus. The deep waters teem with fresh fish like lake salmon, trout, perch, eel, and fresh sardines. It's a cook's dream region. On Mondays, the nearby town of Peschiera hosts an incredible open-air market where you can stock up on local foods, and there's a great butcher shop in Rivoltella that specializes in horse, a meat that's more popular in Italy than in any other country in the European Union.
On that first tour, Claudia and I showed Americans the food and wine of Italy in a way that only a native Italian could reveal it. As expected, guests dined in Michelin-star restaurants, such as La Brughiera in Villa d'Almè, Loro in Trescore Balneario, and Frosio in Almè. And they enjoyed wine tastings at famous Franciacorta wineries, such as Bellavista and Ca' del Bosco. But Claudia and I also escorted them to our favorite gelateria, Gelateria Peccati de Gola in Albino, so they could taste the incredible licorice gelato. We gave them personal tours of Venice, Verona, and the historic Città Alta in Bergamo. And we enjoyed an intimate, home-cooked lunch under Pina's pergola in Cene, overlooking the foothills of the Alps, digging into her classic stuffed zucchini blossoms and polenta with wild boar ragù and her amazing crespelle. Best of all, we taught three cooking classes so guests could continue to enjoy the taste of Italy after returning home to the States. We showed them preparations that could be easily adapted, such as stuffed focaccia, mixed-meat grill, and fresh stone-fruit tart.
The culinary tour has become an annual event, and we recently expanded the experience by traveling south. In 2012, we chartered a private forty-two-foot (13-m) yacht in Sicily and cruised among the volcanic Aeolian Islands. We toured the entire eastern coast of Sicily, stopping in Taormina, Catania, Ragusa, Siracusa, Noto, Modica, and Giardini Naxos. Guests stayed at the Donna Franca villa in Trappitello and we held cooking classes that made use of all the incredible local products, such as fresh swordfish, tuna, sea urchin, tomatoes, lemons, mozzarella, and ricotta. Of all the restaurants we visited on that trip, Locanda Don Serafino was by far the best. It's built from stone right in the side of a hill in Ragusa, and when you enter the huge stone archways, you feel as if you're dining in a beautifully restored cave of kings.
Both Osteria and my culinary tours make me feel incredibly lucky. Through the restaurant, the cooking classes, and the food trips, I get to share the best of Italy with anyone who wants to enjoy it. Bringing guests into our restaurant and back to Italy offers a unique opportunity for me to say thanks and to honor the cuisine and culture that has given me so much.
Focaccia Stuffed with Taleggio and Pancetta
•
Pecorino Flan with Fava Beans and Artichokes
•
Fazzoletti with Lamb Breast and Pea Ragù
•
Squash and Fontina Lasagnetta
•
Tomato Tortellini with Burrata and Basil
•
Meat Grigliata with Mixed Bean Salad
•
Fresh Prune and Almond Tart
•
Braised Blueberries with Sbrisoluna
•
Grappa Torta
**FOCACCIA STUFFED** with **TALEGGIO** and **PANCETTA**
On our first trip to Liguria, Claudia and I stopped in Bergeggi, a tiny beach town near Genoa. Claudia stuck her head out the window, took one whiff, and said, "Stop the car." She didn't know where the focaccia was, but she could smell it. We found a line of people, pulled over, and got in line. The store sold twenty different kinds of stuffed focaccia—Speck and blue cheese, tomatoes and arugula, Nutella... you name it. You order whatever you like in hundred-gram increments, and they slice off your order and warm it in a wood oven. We got Speck and _crescenza_, cipolline and Gorgonzola, artichoke and Parmigiano, and Nutella. That was our lunch. Here's my northern Italian twist with pancetta and Taleggio. Try it warm out of the oven or at room temperature alongside a soup or pasta dish.
MAKES TWELVE TO SIXTEEN 3-INCH (7.5-CM) SQUARES
1½ packed tablespoons (28.5 g) fresh yeast, or 2¼ teaspoons (9 g) active dry yeast
3¾ cups (514 g) bread flour
1½ teaspoons (9 g) salt
1 tablespoon (15 ml) extra-virgin olive oil, plus more for drizzling
8 ounces (227 g) Taleggio cheese, shredded (2 cups)
4 ounces (113 g) pancetta, julienned
1 teaspoon (6 g) flake sea salt
1 teaspoon (2 g) freshly ground black pepper
If using fresh yeast, put the yeast, flour, and 1⅓ cups (330 ml) of cold water in the bowl of a stand mixer. If using active dry yeast, put 1⅓ cups (330 ml) warm tap water (about 110°F/43°C) in the bowl of a stand mixer, mix in the yeast, and let stand for 5 minutes until foamy; then add the flour. Using the dough hook, mix on low speed for 2 to 3 minutes. Switch to medium speed, add the salt, and stream in the olive oil. Mix until the dough is smooth and silky, about 10 minutes. Transfer the dough to an oiled bowl and let rise in a warm spot until doubled in bulk, about 1 hour.
Oil a half-sheet pan (an 11 x 14-inch/28 x 35-cm rimmed baking sheet), and then punch down the dough and turn it out onto the oiled pan. Fold the dough over itself in thirds, and let rise in a warm spot for 30 minutes. Punch down the dough, fold it over itself in thirds again, and let rise in a warm spot for another 30 minutes.
Cut the dough in half and press half of the dough into the baking sheet so it is about ¼ inch (6 mm) thick. Roll up the pressed-out dough and set aside. Re-oil the pan and press the other half of the dough into the pan so it is about ¼ inch (6 mm) thick. Scatter the cheese and pancetta over the dough in the pan, leaving a ¼-inch (6-mm) border around the edge. Unroll the reserved dough over the cheese and pancetta and pinch the edges to seal. Dimple the surface all over with your fingertips and let rise in a warm spot for 30 minutes.
Preheat the oven to 500°F (260°C). Turn on convection if possible. Drizzle the top of the focaccia with olive oil and sprinkle on the flaked salt and freshly ground black pepper. Bake until golden brown, 20 to 25 minutes. Let cool in the pan on a wire rack and cut into 3-inch (7.5-cm) squares. Serve warm or at room temperature.
FOCACCIA ASSEMBLY
**PECORINO FLAN** with **FAVA BEANS** and **ARTICHOKES**
When I started doing culinary tours of Italy, I wanted to teach people how to make traditional Italian molded and unmolded dishes, such as flan, sformato, and tortino. This recipe became the blueprint. It's as basic as can be: a straightforward custard of eggs, milk, and cheese that you can customize however you like. Cook some onions or leeks into it. Puree some carrots, peas, or fava beans and replace a little of the milk with the vegetable puree. Change the cheese from pecorino to Taleggio, Gorgonzola, or fontina. I use fava beans and pecorino because it's a classic flavor combination that works. After looking through some old Italian cookbooks, I got the idea to toss in some shaved raw artichokes.
MAKES 8 SERVINGS
Pecorino Flan:
Unsalted butter, for greasing ramekins
⅔ cup (150 ml) whole milk
3¼ ounces (92 g) pecorino cheese, grated (⅞ cup), plus a few ounces for shaving over top
2 large eggs
1 large egg yolk
Salt and freshly ground black pepper
⅔ cup (150 ml) heavy cream
Fava Beans and Artichokes:
8 ounces (227 g) fava beans in the pods
¼ cup (60 ml) olive oil
1 cup (235 ml) freshly squeezed lemon juice
8 baby artichokes
¼ cup (15 g) chopped or chiffonade of fresh mint
Salt and freshly ground black pepper
**For the pecorino flan:** Preheat the oven to 250°F (120°C). Butter eight 4-ounce (125-ml) ramekins and place in a roasting pan or deep baking dish. Set aside.
Bring the milk to a boil in a medium saucepan over medium heat. Remove from the heat and stir in the cheese until melted. Use a stick blender or pour the mixture into a blender and blend until smooth, 1 to 2 minutes. With the blender running, add the eggs and egg yolk, and season with salt and pepper. Turn the blender to low speed, and with the machine running, blend in the cream just until incorporated. Strain the mixture through a fine-mesh strainer into a bowl. Crack some black pepper over the mixture and stir slowly. Remove any skin that forms on the surface of the mixture, then carefully pour it into the ramekins to come up just under the lip. Pour hot water into the bottom of the pan to come at least halfway up the sides of the ramekins. Cover the pan tightly with aluminum foil and bake until set around the sides and still a little jiggly in the center, 25 to 30 minutes. Remove the pan from the oven and let cool, covered, in the water.
**For the fava beans and artichokes:** Bring a large pot of water to a boil and fill a large bowl with ice water. Add the whole fava pods and blanch for 1 minute. Transfer to the ice water to stop the cooking. When cool, pluck the favas from the pods, then pinch open the pale green skin on each bean and pop out the bright green favas into a bowl. You should have about ½ cup (90 g).
Add the oil and lemon juice to the bowl.
Trim off or peel any tough stems from the artichokes and snap off any dark green leaves so you are left with only tender green-yellow leaves and tender stems. Shave the artichokes on a mandoline, even an inexpensive handheld one, adding them to the fava beans as they are shaved. Stir to coat them in lemon juice. Add the mint, season with salt and pepper, and stir to combine.
For each plate, turn out a flan into the center of the plate. Spoon the fava bean mixture around each flan. Use a vegetable peeler to shave a few curls of pecorino cheese around each plate. Finish with a few grindings of fresh black pepper.
**FAZZOLETTI** with **LAMB BREAST** and **PEA RAGÙ**
In my cooking classes, I like to demonstrate techniques that apply to dozens of dishes. Braising is something that applies to every northern Italian ragù. You braise a big piece of meat, shred it, season it, and then serve it with polenta or pasta. When it comes to lamb, most people think of chops and loin, but lamb breast makes a fantastic ragù. It's inexpensive and full of rich flavor. Spring peas add a shot of freshness to brighten up the ragù. _Fazzoletti_ means "handkerchiefs": you just toss the ragù and simple pasta squares together until the dish looks like little handkerchiefs folded over one another.
MAKES 6 SERVINGS
1 bone-in lamb breast, 2 to 3 pounds (1 to 1.3 kg)
Salt and freshly ground black pepper
2 tablespoons plus ¼ cup (90 ml) olive oil, divided
½ medium-size yellow onion, finely chopped (⅔ cup/107 g)
1 medium-size carrot, finely chopped (⅔ cup/81 g)
1 large rib celery, finely chopped (⅔ cup/67 g)
1 garlic clove, smashed
4 sprigs fresh rosemary
1 bay leaf
2 cups (480 g) canned plum tomatoes, preferably San Marzano, cored and crushed by hand
1 cup (235 ml) white wine
8 ounces (227 g) Egg Pasta Dough (page 282), rolled into 2 sheets, each about 1/16 inch (1.5 mm) thick
1 cup (145 g) freshly shelled spring peas, from about 1 pound (450 g) pea pods
1 ounce (28 g) pecorino cheese, grated (⅓ cup) for garnish
Preheat the oven to 300°F (150°C). Season the lamb all over with salt and pepper. Heat 2 tablespoons (30 ml) of the oil in a Dutch oven over medium-high heat. When hot, add the seasoned lamb, and cook until seared and well browned, 5 to 6 minutes. Flip the breast, and sear the other side, 5 to 6 minutes.
Transfer the meat to a plate and add the onion, carrot, celery, garlic, rosemary, and bay leaf to the pan. Sweat the vegetables until they are tender but not browned, 4 to 6 minutes. Add the tomatoes, and cook until they start to break down, 5 to 6 minutes. Pour in the wine and simmer until the liquid in the pan reduces in volume by about half, 10 to 15 minutes. Return the meat to the pan, cover, and braise in the oven until the meat is fall-apart tender, 2 to 2½ hours. Remove the pan from the oven and let the meat cool in the liquid until cool enough to handle. Transfer the meat to a cutting board and use tongs and a fork to shred the meat from the bones, silverskin, and fat, saving only the meat.
Remove and discard the rosemary stems, garlic, bay leaf, and any stray bones. Skim off any excess fat from the braising liquid and then pour the vegetables and braising liquid into a food processor. Puree briefly with short pulses to make a chunky, rustic puree. Return the puree and shredded meat to the pan and season with salt and pepper. Use immediately or refrigerate for up to 1 week.
Bring a large pot of salted water to a boil. Lay a pasta sheet on a lightly floured work surface and trim the edges square. Cut the pasta into 3-inch (7.5-cm) squares. Repeat with the remaining pasta dough. You should get about forty squares total. Drop the pasta squares into the boiling water in batches if necessary to prevent overcrowding; quickly return the water to a boil, and cook until tender, 1 to 2 minutes.
Meanwhile, heat the ragù in a large, deep sauté pan over medium heat. Add about 1 cup (235 ml) of pasta water, along with the fresh peas and remaining ¼ cup (60 ml) of olive oil, and cook until creamy, 3 to 4 minutes. Drain the pasta and add to the ragù, tossing for 1 to 2 minutes. Divide among plates and sprinkle with the pecorino.
**SQUASH** and **FONTINA LASAGNETTA**
Claudia's grandmother, Nonna Anna, used to make lasagne without pasta. She roasted slices of squash, layered them with béchamel and blue cheese, and then baked the whole thing. I just ran with the idea, adding some pasta sheets to bulk up the dish, making a truffle béchamel, and using creamy fontina cheese. It's a great fall dish that I serve in individual baking dishes as lasagnetta.
MAKES 4 TO 6 SERVINGS
1 medium-size butternut squash, about 2½ pounds (1.1 kg) total
2 tablespoons (30 ml) olive oil
Salt and freshly ground black pepper
4 ounces (113 g) Egg Pasta Dough (page 282), rolled into 1 sheet, about 1/16 inch (1.5 mm) thick
5⅓ tablespoons (75 g) unsalted butter, sliced into pats, plus more for greasing the dishes or sheet
2 cups (475 ml) Truffle Béchamel (page 281)
2½ ounces (71 g) Parmesan cheese, grated (¾ cup)
12 ounces (340 g) fontina cheese, shredded (about 3 cups)
Preheat the oven to 375°F (190°C). Prick the squash all over with the tip of a knife, and then microwave on high to make it easier to peel, 2 to 3 minutes. Peel and seed the squash and then slice it into rounds about ¼ inch (6 mm) thick. Toss the slices with the oil and season with salt and pepper. Spread flat on rimmed baking sheets and roast until tender, 10 to 12 minutes. Let cool in the pan.
Lay the pasta sheet on a lightly floured work surface and trim the edges square. Cut the pasta into 4-inch (10-cm) squares. You should have sixteen to eighteen squares, and they can be used immediately or frozen for up to 1 week.
Butter four to six individual casserole dishes or one large rimmed baking sheet. Bring a large pot of salted water to a boil and fill a large bowl with ice water. Drop the pasta squares in the boiling water, quickly return the water to a boil, and blanch for 10 to 20 seconds. Drain the pasta and immediately transfer to the ice water. Lay the blanched pasta sheets flat on clean kitchen towels.
Raise the oven temperature to 500°F (260°C). Turn on convection if possible. To build the lasagnetta, lay down one pasta sheet in each casserole dish or lay out four to six in a single layer on the baking sheet, leaving about 1 to 2 inches (2.5 to 5 cm) between each pasta sheet. Over each pasta sheet, layer on the ingredients in this order: about 2 tablespoons (30 ml) of béchamel, a slice of roasted squash, about ½ tablespoon (3 g) of Parmesan, and about ¼ cup (28 g) of fontina. Lay on another sheet of pasta and repeat the process. Top each lasagnetta with a third sheet of pasta, a layer of béchamel, a slice of squash, and a sprinkle of Parmesan. Place a pat of butter on top of each lasagnetta and bake until lightly browned and bubbly, 8 to 10 minutes. Serve immediately.
**TOMATO TORTELLINI** with **BURRATA** and **BASIL**
I'm a big fan of making fresh vegetable and fruit purees as pasta fillings. You just need a little patience. Most vegetables and fruits are pretty watery, and it takes time to drain off the water and concentrate the flavor. Here, I puree fresh tomatoes and suspend the puree over a bowl overnight to drain out the tomato water, resulting in a nice, thick tomato puree. I prefer that thickening method to adding cheese or eggs because it leaves the singular flavor of the tomatoes intact. Save this dish for the height of summer and use the freshest, plumpest tomatoes you can find. The pasta should just explode with fresh, unadulterated tomato flavor—like biting into a raw tomato. A little burrata and basil are all it needs—like a fancy twist on Caprese salad.
MAKES 4 TO 6 SERVINGS
3 large ripe tomatoes (about 5 pounds/2.25 kg)
1½ teaspoons (7 ml) sherry vinegar, plus more as needed
About ¾ cup (175 ml) olive oil, divided
Salt and freshly ground black pepper
Granulated sugar, as needed
4 ounces (113 g) Egg Pasta Dough (page 282), rolled into 1 sheet, about 1/32 inch (0.8 mm) thick
5 ounces (142 g) burrata cheese
2 packed tablespoons (7 g) small fresh basil leaves for garnish
Bring a large pot of water to a boil and fill a large bowl with ice water. Score an X on the bottom of the tomatoes and drop them into the boiling water until the skins split and curl, about 1 minute, working in batches to prevent overcrowding. Immediately transfer the tomatoes to the ice water to shock and loosen the skins. When almost cool, peel and discard the tomato skins. Remove and discard the cores, halve the tomatoes, and remove the seeds. Coarsely chop the tomatoes and then puree in a blender until super-smooth, 2 to 3 minutes.
Make a large, five-layer sack of cheesecloth, using plenty of extra cheesecloth for hanging. Place the sack in a deep bowl or tall stock pot and carefully ladle in the tomato puree in batches. Bring up the corners and gently squeeze out the tomato water after each addition, transferring the drained tomato puree to a bowl as you go. Return all of the thickened tomato puree to the cheesecloth, bring up the corners, and suspend the sack over the bowl, tying the top to a propped-up wooden spoon handle so that the sack hangs well above the bottom of the bowl. You can also set the cheesecloth bundle in a large-mesh sieve and set that over the bowl. Let hang overnight to drain the remaining tomato water, at least 8 hours.
Transfer the thickened puree to a blender, along with the vinegar and about ½ cup (120 ml) of the olive oil. Puree until smooth and emulsified, 1 to 2 minutes. The puree should be smooth and not runny; adjust the amount of oil as necessary. Taste and season with salt, pepper, and additional vinegar, if necessary. If your tomatoes are underripe, add a pinch of sugar, too. Transfer the mixture to a resealable plastic bag and refrigerate for at least 1 hour or up to 1 day.
Lay the pasta sheet on a lightly floured work surface and cut into 2-inch (5-cm) squares. As you work, spray the pasta with water and cover with a towel to keep it from drying out. Cut a corner off the bag of filling and squeeze a ½-inch (1.25-cm)-diameter ball of filling on each square. Fold the pasta corner to corner over the filling to make a triangle. Dampen your fingertips and bring the two opposite corners together up over the filling and then pinch and hold to seal. You should have about fifty tortellini. Use immediately or freeze for up to 1 week.
Bring a large pot of salted water to a boil. Add the tortellini, and cook until tender yet firm, 1 to 2 minutes. Drain, reserving 1 cup (235 ml) of the pasta water. Transfer the pasta to a deep sauté pan, along with the reserved pasta water and the remaining ¼ cup (60 ml) of oil. Simmer over medium-high heat, stirring now and then, until the liquid reduces in volume and becomes light and creamy, 2 to 3 minutes.
Using your fingers, pick the burrata into small, fingertip-size pieces and divide them among plates. Spoon the tortellini over and around the cheese and garnish with the basil leaves and a few grindings of black pepper.
**MEAT GRIGLIATA** with **MIXED BEAN SALAD**
Most Italian homes have a fireplace used for both heating the house and cooking food. You don't see a lot of gas or propane grills. They usually build a wood fire in the fireplace, put a grill grate over it, and grill meat there. When we first walked into Agriturismo Armea on our Desenzano culinary tour, I saw a giant fireplace and knew we'd be using it for _grigliata_, a mixed grill dish. I suppose you could cook the meats here on a gas grill with some wood chips, but a wood fire will give you much better flavor. I like to use a mix of red and white oak wood. Sometimes, I add a little mesquite.
MAKES 6 SERVINGS
Meat Grigliata:
Leaves from 10 sprigs fresh rosemary
4 garlic cloves, pressed
1 cup (235 ml) olive oil, plus some for drizzling
Salt and freshly ground black pepper
12 country-style spare ribs, cut into individual ribs
1 pound (450 g) flank steak
6 Cotechini (page 244) or other fresh Italian sausage
Rock salt
Mixed Bean Salad:
6 ounces (170 g) dried borlotti (cranberry) beans (about ¾ cup), soaked in water to cover overnight
½ medium-size yellow onion, finely chopped (½ cup/80 g)
1 medium-size rib celery, finely chopped (½ cup/51 g)
1 medium-size carrot, finely chopped (½ cup/61 g)
1 sachet of 1 sprig parsley, 1 sprig rosemary, 2 sprigs thyme, 1 bay leaf, and 5 black peppercorns (see page 277)
¾ cup (190 g) finely chopped pancetta, divided
6 ounces (170 g) green beans
6 ounces (170 g) wax beans
½ small red onion, sliced (¼ cup/40 g)
1 garlic clove, minced
Salt and freshly ground black pepper
¼ cup (15 g) minced fresh flat-leaf parsley
⅓ cup (90 ml) extra-virgin olive oil
2 tablespoons (30 ml) red wine vinegar
**For the meat grigliata:** Combine the rosemary, garlic, oil, salt, and pepper in a wide, shallow dish. Rub the mixture over the ribs and steak, and then leave the meat in the dish, cover, and marinate in the refrigerator for 2 hours.
Heat a grill to medium heat. Scrape and oil the grill grate, then grill the ribs directly over the heat until tender and crispy, 25 to 30 minutes, turning often. Grill the steak until medium-rare (135°F/57°C internal temperature), about 8 minutes per side. Slice the sausages in half lengthwise but leave them attached at the back end, and then grill until cooked through, 5 to 6 minutes per side. Transfer all of the meat to a platter as it is done and let it rest for 8 to 10 minutes. Slice the flank steak across the grain and arrange the slices on a platter with the ribs and sausages. Drizzle with some olive oil and sprinkle with rock salt and cracked black pepper.
**For the mixed bean salad:** Drain the borlotti beans and combine with the onion, celery, carrot, sachet, and ¼ cup (62 g) of the pancetta in a medium saucepan. Add enough water to cover the ingredients by 1 inch (2.5 cm). Bring to a boil over high heat and then lower the heat to medium-low and simmer gently until the beans are tender, about 1 hour. Season generously with salt and pepper and let the beans cool down in the liquid.
Bring a large pot of salted water to a boil and fill a large bowl with ice water. Add the green beans and wax beans and cook until tender, 2 to 3 minutes. Immediately submerge the beans in ice water to stop the cooking.
Cook the remaining ½ cup (128 g) pancetta in a sauté pan over medium heat until very lightly browned, 2 to 3 minutes. Add the red onion and garlic, and cook until the onion is lightly browned, 3 to 5 minutes more. Use a slotted spoon to transfer the borlotti beans and solids from their cooking liquid to the sauté pan (discard the sachet). Add the green and wax beans, season with salt and pepper, and stir in the parsley, oil, and vinegar. Serve with the meat grigliata.
**FRESH PRUNE** and **ALMOND TART**
In Italy, plums are called _prugne_ (prunes). Don't ask me why. But they're delicious. My favorites are the _susine_ (damson plums) that grow on Pina's property in Cene. On our culinary tours, guests pick plums from Pina's trees, and we use the fruit to make tarts. A filling of almond frangipane makes this one special because you press the plums into the filling, which then swells up around the fruit. The filling browns in the oven and cradles each piece of fruit like a warm blanket. Look for ripe, firm plums so they don't completely turn to mush in the oven.
MAKES 10 TO 12 SERVINGS
Tart Dough:
4 ounces (1 stick/113 g) unsalted butter, at room temperature
¾ cup (90 g) confectioners' sugar
2 large eggs
Grated zest of ½ lemon
1¾ cups (220 g) _tipo_ 00 flour (see page 277) or all-purpose flour
Almond Filling and Plums:
1 cup plus 2 tablespoons (106 g) almond flour
3 tablespoons (23 g) _tipo_ 00 flour (see page 277) or all-purpose flour
½ cup plus 1 tablespoon (113 g) granulated sugar
4 ounces (1 stick/113 g) unsalted butter, melted
2 large eggs
8 plums
**For the tart dough:** Cream the butter and sugar in a stand mixer on medium speed until light and fluffy, 1 to 2 minutes. With the mixer running, add the eggs one by one until each is incorporated. Switch to low speed, add the lemon zest and flour, and mix until the dough comes together. Scrape the dough onto a sheet of plastic wrap, shape into a ball, wrap in the plastic, and refrigerate for at least 2 hours or up to 1 day.
Preheat the oven to 350°F (175°C) and coat an 11-inch (28-cm) tart pan with cooking spray. Roll the dough between two large sheets of plastic to a 12-inch (30-cm) diameter. Remove the top sheet and carefully invert the dough onto the tart pan, fitting it in gently. Trim the edges, prick the bottom with a fork, and line the dough with foil. Top with pie weights or dried beans and bake until firm, 15 to 18 minutes. Carefully remove the foil and weights and continue baking until light golden brown, 10 to 15 minutes more. Let cool on a rack.
**For the almond filling and plums:** Whisk together the flours and sugar in a medium bowl. Stir in the melted butter and then the eggs. Let the mixture rest for 30 minutes, and then spread it over the bottom of the cooled tart shell.
Lower the oven temperature to 325°F (160°C). Cut the plums in half lengthwise and remove the pits. Press the plums, cut-side down, into the almond filling in a circular pattern, and bake until the plums are tender and the filling is set and lightly browned, 35 to 45 minutes. Let cool on a rack and serve warm or at room temperature.
**BRAISED BLUEBERRIES** with **SBRISOLUNA**
In the Bergamascan dialect, _sbrisoluna_ is a crumbly cookie traditionally broken into pieces and served with coffee. I like to make it about one inch thick in a sheet pan, and then crumble it over buttermilk gelato in a bowl of braised blueberries. With the warm berries, the scoop of ice cream, and the crushed cookies, it's sort of like a deconstructed blueberry crisp.
MAKES 6 TO 8 SERVINGS
Blueberries:
3 pounds (1.3 kg) blueberries
2 cups plus 2 tablespoons (425 g) granulated sugar
6 tablespoons (90 ml) freshly squeezed lemon juice
Grated zest of 3 lemons
1 cup (235 ml) St. Germain liqueur, divided
Almond Sbrisoluna:
1½ cups (215 g) skinless (blanched) almonds
8 ounces (2 sticks/227 g) unsalted butter, at room temperature, plus some for buttering the pan
1 cup (200 g) granulated sugar
½ vanilla bean, split and scraped
2 large egg yolks
1 cup (160 g) coarse cornmeal
2 cups (250 g) _tipo_ 00 flour (see page 277) or all-purpose flour
To Serve:
4 cups (1 L) Buttermilk Gelato (page 287)
**For the blueberries:** Combine the blueberries, sugar, lemon juice, and lemon zest in a medium nonreactive saucepan. Let marinate at room temperature for 2 hours.
Add ¾ cup (175 ml) of the St. Germain, and cook over medium-high heat to 213°F (101°C) on a candy thermometer, 5 to 10 minutes. Use a slotted spoon to transfer the blueberries to a bowl, and then cook the liquid remaining in the pan to 217°F (103°C), 5 to 8 minutes. Remove from the heat, return the blueberries to the liquid, and stir in the remaining ¼ cup (60 ml) of St. Germain. Use immediately or cover and refrigerate for up to 2 days. Reheat gently just before serving.
**For the almond sbrisoluna:** Preheat the oven to 350°F (175°C), and line a 13 x 9-inch (33 x 23-cm) pan with foil, leaving enough extra foil to use as handles. Spread the almonds on the foil in a single layer and bake until lightly toasted, 5 to 7 minutes, shaking the pan once or twice. Let cool slightly and then chop coarsely and set aside.
Cream the butter, sugar, and vanilla in a stand mixer on medium-high speed. Mix in the egg yolks on low speed, adding one at a time. Mix in the cornmeal and flour on low speed and then mix in the chopped almonds. The dough will be crumbly. Gently press the crumbly dough into the prepared pan so it is about 1 inch (2.5 cm) thick, and then bake until set and lightly browned, 12 to 15 minutes. Let cool in the pan on a rack and then remove the entire sbrisoluna, using the foil sling.
**To serve:** Spoon the warm blueberries into bowls, top with a scoop of gelato, and crumble on some sbrisoluna. You will likely have some leftover sbrisoluna; it will keep covered for several days. Enjoy it with a cup of coffee.
**GRAPPA TORTA**
One fall, I wanted to make a super-moist chocolate cake with a light texture but rich flavor. Tall order, I know, but I found an old Italian recipe for a mousse-like chocolate cake made with rum. I swapped in grappa for the rum and started experimenting with both dry and fruity grappas. Dry _(secco)_ grappas are made with bold grapes, such as pinot nero, while the fruity _(morbida)_ grappas are made with lighter grapes, such as moscato and sauvignon blanc. The fruity grappas taste better with chocolate. After a few tries, I ended up with a silky, soft chocolate cake with a light perfume of grappa. It happens to be gluten-free and goes great with cappuccino gelato and a spoonful of softly whipped cream.
MAKES 10 TO 12 SERVINGS
Unsalted butter, for greasing the pan
6 large eggs
½ packed cup (110 g) light brown sugar
14 ounces (about 3 cups) bittersweet chocolate, preferably 58% cacao
⅓ cup (90 ml) grappa
1 cup (235 ml) heavy cream
Preheat the oven to 350°F (175°C). Butter a 9-inch (23-cm) round springform pan, cut a piece of parchment to fit the bottom of the pan, and then butter the parchment. Wrap the bottom outside perimeter of the pan tightly in a double layer of foil to help prevent leaks.
In the bowl of a mixer, whip the eggs and brown sugar on high speed until the mixture balloons to triple its original volume, 4 to 5 minutes.
Melt the chocolate in a heatproof bowl in the oven or over a double boiler, and then add it to the egg mixture with the mixer on low speed. Fold in the grappa and cream. Pour the batter into the prepared pan and place the pan in a larger pan, such as a roasting pan. Pour in enough hot water to come at least halfway up the sides of the cake pan. Bake for 30 minutes. Cover the pan with foil and bake until the torta is puffed and a toothpick comes out with just a few moist crumbs clinging to it, another 30 minutes.
BASICS AND ESSENTIAL RECIPES
After ten years of cooking in Bergamo and Philadelphia, I've seen some fascinating culinary cross-pollination between the two cities. Chefs travel in both directions to cook in other kitchens, and they end up bringing back with them a taste of each city. This sort of "staging" in restaurants is one of the most important things that chefs do all over the world. It's how we learn from one another and expand our skills. I suppose cookbooks are the home cook's equivalent. Instead of going from restaurant to restaurant, home cooks often move from book to book, cooking different dishes, hearing different perspectives, and improving their kitchen chops.
The recipes in this book should give you a clear picture of my perspective and what it would be like to cook with me at my restaurant. And this chapter includes some of the most essential recipes. These basic preparations are used throughout the rest of the book, but ultimately, they should transcend the book. You can employ these basics in all of your cooking. Veal Stock (page 279) forms the foundation of countless sauces, and you'll find yourself putting a spoonful of Chocolate Sauce (page 285) on dozens of ice creams and other desserts.
Throughout the book, I also use some basic ingredients, techniques, and equipment that bear further explanation. Here's a glossary of culinary fundamentals that clarifies what I'm talking about when a recipe calls for "smashed" garlic or asks you to "sweat" the vegetables. If you came into my kitchen and didn't know these terms, you would definitely understand them when you left.
INGREDIENTS
**BLENDED OIL.** By itself, extra-virgin olive oil can overpower a delicate vinaigrette or sauce. It also has too low a smoke point for panfrying. For these preparations, I like to cut extra-virgin olive oil with grapeseed oil, which has a milder flavor and higher smoke point. If a recipe calls for blended oil, simply mix together equal parts extra-virgin olive oil and grapeseed oil. Alternatively, you could use canola oil instead of grapeseed oil.
**EGGS.** Find the best that you can. Eggs from chickens that hunt and peck on pasture usually have a richer flavor and deeper color than eggs from birds that eat standardized feed in crowded indoor facilities. I almost always use large eggs, and sometimes I call for raw or lightly cooked eggs in the recipes in this book. Because of the slight risk of salmonella, be advised that raw eggs should not be served to the very young, the elderly, the ill, or pregnant women.
**GARLIC.** I'm not a big fan of chomping down on pieces of raw or cooked garlic in my food. But I do like the flavor, so I do what most Italians do: smash whole cloves of garlic, add them to oil or butter for sautéing, and then remove the garlic once it's released its flavor into the dish.
**MIREPOIX.** Similar to Spain's _sofrito_, mirepoix is a mixture of diced onions, carrots, and celery, often added to soups, stocks, and braised dishes for flavor. The vegetables are also called aromatics.
**SACHET.** A mixture of herbs and spices, such as parsley, rosemary, thyme, bay leaf, and black peppercorns, tied into a seasoning packet. Also known as bouquet garni, a sachet allows you to season soups, stocks, and braising liquids, and then throw away the seasoning packet to leave behind only the flavors and aromas. Bundle the seasonings together in the center of a double layer of cheesecloth, then tie it tightly with kitchen string. If you don't have cheesecloth, you can tie the seasonings in a clean coffee filter.
**SALT.** I use Diamond Crystal kosher salt for most cooking and fine sea salt for baking. If you use a different brand, weights and volumes will vary. For reference, Diamond Crystal kosher salt weighs about ⅛ ounce (3 g) per teaspoon; fine sea salt weighs about ¼ ounce (6 g) per teaspoon. For sausages and other cured meats, I add curing salt. There are two types: curing salt #1 and #2. Curing salt #1 (a.k.a. pink salt, Tinted Curing Mix [TCM], Insta Cure #1, and DQ Curing Salt #1) is tinted pink so it's easy to recognize. It contains 6.25 percent sodium nitrite and 93.75 percent salt. Curing salt #2 (a.k.a. Insta Cure #2 and DQ Curing Salt #2) is white, like regular salt. It also contains 6.25 percent sodium nitrite, but it has 4 percent sodium nitrate and 89.75 percent salt. These curing salts are used to help prevent bacterial growth in cured meats. For reference, they weigh about ¼ ounce (6 g) per teaspoon.
**SAN MARZANO TOMATOES.** These canned plum tomatoes have a deeper flavor than other varieties. Look for San Marzano tomatoes imported from Italy because there are cheap knockoffs on the market using the same name. I usually core and crush the tomatoes by hand. For each tomato, pull it out of the can, hold it in one hand, and use the fingertips of your other hand to pinch the stem end and pull out the core. Discard the core and crush the remaining tomato right into the dish you are making. Sometimes I drink the canning liquid (tomato puree) or save it for other preparations.
**_TIPO_ 00 FLOUR.** In Italy, they number different types of flour according to how finely they are milled. _Tipo_ (type) 1 is coarse, 0 is fine, and 00 is very fine. The texture of _tipo_ 00 flour resembles that of all-purpose flour, which makes all-purpose a good substitute. The flour weighs about 4.4 ounces (125 g) per cup.
**VINAIGRETTE.** If you don't know how to make one yet, here's the basic method for making 1 cup (235 ml) of vinaigrette. It's a three-to-one ratio of oil to vinegar. Start with the vinegar: pour ¼ cup (60 ml) of vinegar into a bowl or blender, start whisking or blending, and then slowly drizzle in ¾ cup (175 ml) of blended oil (see page 276). Add the oil gradually; dumping it in all at once could keep the mixture from emulsifying or blending evenly. Whisk or blend in the oil in a slow, steady stream until the mixture looks opaque and a little thicker (emulsified), and then season it to taste with salt and freshly ground black pepper. For red wine vinaigrette, use red wine vinegar. For Banyuls vinaigrette, use Banyuls vinegar. For citrus vinaigrette, squeeze the juice of one orange and ½ lemon into a bowl to equal ¼ cup (60 ml), and then whisk or blend in the oil. Experiment with other vinegars and citrus juices to make different kinds of vinaigrette. You can also add finely chopped herbs, shallots, or other aromatics.
TECHNIQUES
**ROASTING PEPPERS.** I roast peppers right over a hot wood fire. You could also do it on a gas grill, under a broiler, or right on the gas flame on your stovetop. To make about 4 cups (590 g) of roasted peppers, rinse four large bell peppers and pat them dry. Put them over or under a flame or other high heat source until completely blackened on all sides, 4 to 5 minutes per side, turning the peppers several times. Transfer the blackened peppers to a bowl, cover, and let steam for 10 minutes. Peel off and discard the skins, and then pull out and discard the cores and seeds.
**SWEATING.** Think of this as a kinder, gentler form of sautéing. The goal is to soften the vegetables in a pan without browning them. When sweating vegetables, you want no color because browning introduces new flavors. Those flavors are great for sautéing, but not for sweating, which is usually done over slightly lower heat.
EQUIPMENT
**SCALE.** Most professional chefs measure ingredients by weight, not volume. Weights are more accurate because volumes change, especially for ingredients that are easily compacted, such as flour. But most home cooks measure by volume, so that's how the recipes in this book are written. You still might need to weigh an ingredient in the odd recipe here or there, and I highly recommend it for accuracy. Just buy a cheap digital scale that can weigh in grams, and keep it on your kitchen counter.
**TONGS.** I use tongs for everything from tossing pasta in a pan to pulling steaks from the grill. Find a good pair of spring-loaded ones. Edlund's are particularly sturdy and cheap.
Veal Stock
•
Fish Stock
•
Shellfish Stock
•
3-2-1 Brine
•
Polenta
•
Béchamel Sauce
•
Egg Pasta Dough
•
Semolina Pasta Dough
•
Nut Pesto
•
Crème Anglaise
•
Pastry Cream
•
Chocolate Sauce
•
Gelato and Sorbet
•
Candied Citrus Peel
**VEAL STOCK**
At Osteria, we make very traditional meat stocks by roasting the bones to develop flavor and then heating the bones slowly in water with mirepoix and herbs. We don't add any wine, tomatoes, or other flavorings until we use the stock to make sauces or soups. This method gives you a clear stock that highlights the pure flavor of the bones. For chicken stock, use chicken bones. Or use any other animal bones you like to make pork stock, lamb stock, rabbit stock, and other meat stocks.
MAKES ABOUT 1 QUART (1 L)
5 pounds (2.25 kg) veal bones
2 tablespoons (30 ml) olive oil
1 medium-size yellow onion, chopped (1½ cups/240 g)
2 large carrots, chopped (1½ cups/185 g)
3 medium-size ribs celery, chopped (1½ cups/152 g)
1 sachet of 2 sprigs rosemary, 5 parsley stems, 10 peppercorns, 1 garlic clove, and 1 fresh bay leaf (see page 277)
Preheat the oven to 400°F (205°C). Lay the bones in a single layer in a roasting pan and roast until deeply browned, 1½ to 2 hours. Transfer the browned bones to a stockpot and place the roasting pan over medium heat. Add about a cup (235 ml) of cold water to deglaze the pan, scraping all the browned bits from the pan bottom. Scrape the liquid into the stockpot and add 3 more quarts (3 L) of cold water.
Bring the liquid to just under a simmer but do not boil. If it boils, your stock will be cloudy. A few bubbles should occasionally and lazily come to the surface. Cook for 1 hour, frequently skimming scum from the surface.
Meanwhile, heat the oil in a large sauté pan over medium heat. Add the onion, carrots, and celery and sauté until deeply browned, 8 to 10 minutes. After the stock has cooked for 1 hour, add the sautéed vegetables. Cook gently for another 6 hours, frequently skimming the surface. Add the sachet, and cook for another 2 hours, skimming the surface. The total cooking time should be about 8 hours.
Remove and discard the bones and big vegetable pieces with tongs. Line a medium-mesh sieve with cheesecloth and strain the stock through the cheesecloth into a quart-size (liter-size) container. Label, date, and refrigerate for up to 1 week or freeze for up to 1 month.
**FISH STOCK**
Use any white fish bones here, such as those from cod, halibut, or branzino. Be sure to remove the gills from the heads.
MAKES ABOUT 2 QUARTS (2 L)
2 tablespoons (30 ml) olive oil
1 medium-size yellow onion, chopped (1¼ cups/200 g)
4 medium-size ribs celery, chopped (1¼ cups/125 g)
5 pounds (2.25 kg) white fish bones
2 lemons
1 sachet of 1 bay leaf, 10 parsley stems, 10 peppercorns, and 1 garlic clove (see page 277)
Heat the oil in a stockpot over medium heat. Add the onion and celery and sweat until soft but not browned, 4 to 5 minutes. Add the fish bones and sauté for 2 to 3 minutes. Cut the lemons in half and squeeze in the juice. Add the rinds, too. Pour in 2 quarts (2 L) of cold water, drop in the sachet, and bring the liquid to just under a simmer but do not boil. A few bubbles should occasionally and lazily come to the surface. Cook for 45 minutes, skimming any scum that comes to the surface.
Remove and discard the bones and big vegetable pieces with tongs. Line a medium-mesh sieve with cheesecloth. Let the stock cool until warm and then strain it through the cheesecloth into quart-size (liter-size) containers. Label, date, and refrigerate for up to 1 week or freeze for up to 1 month.
**SHELLFISH STOCK**
This recipe is part stock, part sauce. You gently heat bits of shellfish, such as lobster heads and shrimp shells; puree the shells right along with everything else; and then strain the whole mixture. This method extracts so much flavor—and some protein—from the shellfish that you can simply mix the stock with pasta and seafood, as in Spaghetti al Nero di Seppia with Shrimp (page 104).
MAKES ABOUT 3 QUARTS (3 L)
¼ cup (60 ml) grapeseed oil
2 pounds (1 kg) mixed lobster heads, shrimp shells, and other shellfish bits
1½ cups (375 ml) white wine
1 large yellow onion, chopped (2 cups/320 g)
1 large carrot, chopped (¾ cup/92 g)
2 medium-size ribs celery, chopped (¾ cup/75 g)
3 cups (720 g) canned peeled tomatoes, preferably San Marzano, cored and crushed by hand
1 sachet of 1 sprig rosemary, 2 sprigs thyme, 1 bay leaf, and 10 black peppercorns (see page 277)
Heat the oil in a Dutch oven or large, deep sauté pan over medium-high heat. Add the shells and sauté until pink, 4 to 5 minutes. Add the wine and simmer until most of the liquid evaporates, 8 to 10 minutes. Add the onion, carrot, and celery and sweat the vegetables until soft but not browned, 4 to 6 minutes. Add the tomatoes and sachet, and cook until the tomatoes start to break down a little, 4 to 5 minutes. Add water to cover and adjust the heat so that the liquid simmers gently. Simmer for 45 minutes, then remove from the heat and discard the sachet.
Blend the mixture, shells and all, in batches in a blender. Strain each blended batch through a fine-mesh strainer, pressing on the solids to extract the liquids. Let cool. When cool, pack in quart-size (liter-size) containers and refrigerate for up to 1 week or freeze for up to 3 months.
**3-2-1 BRINE**
Three gallons (12 liters) of water, two pounds (1 kg) of salt, and one pound (450 g) of sugar. This brine recipe is easy to remember, and it's the only one you'll ever need. Use it to brine chicken, fish, pork, or almost anything that needs a little extra moisture. I included volume measurements for salt and sugar so you don't have to weigh those out, but weights are more accurate, so if you have a kitchen scale, use it. This recipe makes enough to brine about twenty pounds (9 kg) of meat. Halve or quarter the recipe as needed. You'll need a half-recipe to brine ten pounds (4.5 kg) of meat, or a quarter-recipe to brine five pounds (2.25 kg) of meat. As a rule of thumb, I usually brine meat for about one day per pound (450 g) of bone-in meat—or a little longer if the meat is very thick, like a whole ham.
MAKES ABOUT 3 GALLONS (12 L)
2 pounds/1 kg (about 3 cups) kosher salt
1 pound/450 g (2½ cups) granulated sugar
3 garlic cloves, smashed
10 rosemary sprigs
1 bay leaf, crumbled
15 black peppercorns
Dissolve the kosher salt and sugar in 3 gallons (12 L) of water. Pour 2 cups (475 ml) of the brine into a food processor and add the garlic, rosemary, bay leaf, and black peppercorns. Puree until all the ingredients are finely chopped, and then pour the mixture back into the brine. Submerge the meat in the brine and refrigerate.
**POLENTA**
Every day I put a big copper pot of polenta over my wood fire at Osteria in Philadelphia, just as Claudia's mother and grandmother did over their home fire in Italy decades ago. It's the best way to make it. Coarsely milled whole-kernel corn makes the best polenta. You can also mix in other grains. _Polenta taragna_ includes some buckwheat flour.
MAKES ABOUT 5½ CUPS (1.375 L)
Salt
¾ cup (120 g) coarse yellow cornmeal (polenta)
Bring 6 cups (1.5 L) of water to a boil in a large pot and add salt to taste (it should taste like a mild broth; I use about 1½ teaspoons [9 g] salt per quart of water). Gradually whisk in the polenta in a slow, steady stream. Lower the heat just enough to keep the polenta bubbling and then cook, without stirring, until the polenta becomes a very thick porridge, like cooked oatmeal, and burns a little on the bottom and sides of the pan, which adds a nice smoky aroma. The total cooking time will be 45 minutes to 1 hour for medium-coarse polenta or 1½ to 2 hours for very coarse polenta. Avoid stirring to make sure the bottom burns a little.
VARIATIONS
**FOR PORCINI POLENTA:** Grind ¼ packed cup (a generous ¼ ounce/about 10 g) of dried porcini mushrooms to a powder in a spice grinder or clean coffee mill. You should have about 2 tablespoons (10 g) of powder. When the polenta is thick yet pourable, stir in the porcini powder. Taste and season with additional salt as necessary.
**FOR BUCKWHEAT POLENTA:** Whisk 1 cup (160 g) of polenta and ½ cup (62 g) of buckwheat flour into 5 cups (1.25 L) of boiling water. Cook over medium heat until the polenta is the texture of stiff pudding but still pourable, 45 minutes to 1 hour. Season to taste with salt and pepper.
**BÉCHAMEL SAUCE**
Also called _besciamella_ , this sauce finds its way into quite a few Italian dishes, such as lasagna, cannelloni, flan, and sformato. I even use flavored béchamel as pizza sauce. It's basically a blank canvas of creaminess that you can flavor however you like.
MAKES ABOUT 2 QUARTS (2 L)
6 tablespoons (85 g) unsalted butter
1 small yellow onion, minced (½ cup/80 g)
⅔ cup (83 g) _tipo_ 00 flour (see page 277) or all-purpose flour
2 quarts (2 L) whole milk
⅛ teaspoon (0.3 g) grated nutmeg
Salt and freshly ground black pepper
Melt the butter in a medium saucepan over medium heat. Add the onion and sweat until soft but not browned, 5 to 6 minutes. Whisk in the flour to make a roux (it will look like lumpy batter). Whisk and cook until the flour smells a little nutty but doesn't turn brown, 2 to 3 minutes. Slowly whisk in the milk until it is fully incorporated and the mixture is free of lumps. Season with the nutmeg, plus salt and pepper to taste, and then simmer gently over medium to medium-low heat until thickened, 35 to 45 minutes, stirring occasionally to prevent sticking on the bottom. For a super-smooth sauce, strain through a medium-mesh strainer and cool. The cooled béchamel can be covered with plastic pressed onto the surface (to prevent a skin from forming), and refrigerated for up to 2 days. Reheat gently in a saucepan before using.
VARIATIONS
**FOR PORCINI BÉCHAMEL:** Omit the nutmeg. Soak 2 ounces (57 g/about 2 cups) of dried porcini in 3 cups (750 ml) of hot water until the porcini are tender, about 15 minutes. Replace 2 cups (475 ml) of the milk with 2 cups (475 ml) of porcini-soaking liquid. Chop the soaked porcini and stir into the sauce, along with the salt and pepper. Skip the straining step to retain the chopped porcini in the sauce.
**FOR TRUFFLE BÉCHAMEL:** Add 1 tablespoon (15 ml) white truffle paste, along with the nutmeg.
**EGG PASTA DOUGH**
In both Italy and America, I've come across dozens and dozens of pasta dough recipes. I found the best one while working at Frosio in Almé, Italy. The egg yolks and durum flour give the dough enough stability for it to be stretched super-thin, which is what makes ravioli so tender. This recipe makes enough for four fully rolled sheets of pasta, each 4 to 5 feet (1.25 to 1.5 m) long. That's enough to make about ninety-five 2-inch (5-cm) square or one hundred fifty 1-inch (2.5-cm) square ravioli.
MAKES ABOUT 1 POUND (450 G)
1¼ cups (155 g) _tipo_ 00 flour (see page 277) or all-purpose flour, plus more for dusting
½ cup plus 1 tablespoon (70 g) durum flour
9 large egg yolks
1 tablespoon (15 ml) extra-virgin olive oil
Combine both flours in the bowl of a stand mixer with the paddle attachment on medium speed. Add the egg yolks, oil, and 3 tablespoons (45 ml) of water, mixing just until the dough comes together, 2 to 3 minutes. Add up to 1 tablespoon (15 ml) more water, if necessary, for the dough to come together.
Turn the dough out onto a lightly floured work surface and knead until silky and smooth, about 5 minutes, kneading in a little flour, if necessary, to prevent sticking. The dough is ready when it gently pulls back into place when stretched with your hands. Shape the dough into a disk, wrap in plastic wrap, and refrigerate for at least 30 minutes or up to 3 days.
Cut the dough into four equal pieces and let them sit at room temperature for 5 to 10 minutes before rolling out. Shape each piece into an oblong disk that's wide enough to fit the width of your pasta roller. Lightly flour a long work surface and set the pasta roller to its widest setting. Lightly flour one disk of dough, pass it through the roller, and then lightly dust the rolled dough with flour, brushing off the excess with your hands.
Set the roller to the next narrowest setting and again pass the dough through, dusting again with flour and brushing off the excess. Pass the dough once or twice through each progressively narrower setting. For thicker pasta, such as corzetti, you generally want to roll to about ⅛ inch (3 mm) thick (setting #2 or 3 on the KitchenAid attachment). For strand pasta, such as fettuccine, or for cannelloni, you want to roll to about 1/16 inch (1.5 mm) thick (setting #4 or 5 on the KitchenAid attachment). For ravioli, you want to roll the pasta a little thinner, to about 1/32 inch (0.8 mm) thick (setting #6 or 7); ravioli sheets should be thin enough that you can read a newspaper through the dough.
As you roll and each sheet gets longer, drape the sheet over the backs of your hands to easily feed it through the roller. You should end up with a sheet 4 to 5 feet (1.25 to 1.5 m) long. Lay the pasta sheet on a lightly floured work surface and use a cutting wheel, knife, or the cutter attachment on the pasta machine to create the right pasta shape for the dish you are making.
VARIATIONS
**FOR BUCKWHEAT PASTA DOUGH:** Replace half of the _tipo_ 00 or all-purpose flour with buckwheat flour.
**FOR CHESTNUT PASTA DOUGH:** Use ¾ cup (94 g) of _tipo_ 00 flour (see page 277) or all-purpose flour, ⅓ cup (42 g) of durum flour, and ½ cup (62 g) of chestnut flour.
**FOR TAGLIOLINI:** Roll the pasta to #5 on the KitchenAid attachment, and then roll it again on #5. The dough should be about as thick as fettuccine. Lay the pasta sheets on a floured surface, lightly dust with flour, and then cut the sheets into 10-inch (25-cm) lengths. Fold each length in half lengthwise over itself, then fold in half lengthwise again. Julienne each folded piece crosswise into thin strips about ⅛ inch (3 mm) wide, and then dust with flour and place on a parchment-lined baking sheet. Use immediately or freeze for up to 1 day.
**SEMOLINA PASTA DOUGH**
You can easily make dried pasta that blows away the cheap, boxed stuff. The dough is just semolina and water. The trick is to get the dough to the consistency of damp sand. Depending on the humidity in the room, you may need to add more or less water to get to that consistency. You also need a pasta extruder or attachment for your pasta machine. Once extruded into spaghetti, macaroni, or whatever shape you like, just dry the pasta uncovered in your refrigerator, which has the perfect temperature and humidity. After about two days in the fridge, the texture of your dried pasta will be just right.
MAKES ABOUT 1 POUND (450 G)
2¾ cups (460 g) semolina
Put the semolina in a bowl and slowly stir in enough water (about 1 cup/235 ml) for the mixture to resemble damp sand. Knead it a little with your fingers until it clumps together, feels like sandy bubble gum, and sticks together when you pinch it. Too dry is better than too wet, and even though it may appear as if the dough hasn't come together, it will compress when it is extruded through the pasta machine.
Fit your pasta extruder or stand mixer attachment with the plate needed for your desired pasta shape. Set the extruder to medium speed and feed the dough into the extruder in marble-size clumps, using a pushing tool to push the clumps through the extruder. The first few clumps will come out uneven; just throw them away. Continue gradually dropping marble-size clumps into the extruder and pushing them through, being careful not to overload it. As the pasta is extruded, cut it into lengths appropriate for the recipe you are making (examples follow).
Dry the pasta by placing it on wire racks that will fit in your refrigerator (or coil long pasta, such as spaghetti and bucatini, into nests) and refrigerate it uncovered for at least 8 hours or up to 5 days. The pasta will get drier and harder as it sits. For most recipes, the texture is perfect after 2 days in the refrigerator. Two-day-old pasta will cook in about 4 minutes in salted boiling water.
VARIATIONS
**FOR SQUID INK PASTA:** Mix 2 tablespoons (30 ml) of squid ink into the water before adding to the semolina.
**FOR SPAGHETTI:** Fit the extruder with the spaghetti plate, set the extruder to high speed, and feed in the dough, cutting the spaghetti to 9-inch (23-cm) lengths.
**FOR CANDELE:** Fit the extruder with the large macaroni plate, set the extruder to high speed, and feed in the dough, cutting the candele to 6-inch (15-cm) lengths. Dry straight instead of forming into nests.
**NUT PESTO**
Everyone knows basil pesto made with herbs, pine nuts, and cheese. I like to skip the cheese and put the focus on nuts. Walnuts, almonds, pistachios, pine nuts. . . they all work well here. I keep the recipe ultra-basic as a starting point. If you like, toast the nuts or add some herbs for more flavor. Check out the variation that includes milk and butter as well.
MAKES ABOUT 1½ CUPS (375 ML)
1 cup (135 g) shelled nuts
1 cup (235 ml) blended oil (page 276)
Put the nuts in a blender or small food processor and process until chopped but not too fine; you don't want to make nut butter. With the machine running, gradually add the oil in a slow, steady stream and process until blended and emulsified. Season with salt and pepper and use immediately or cover and store at room temperature for up to 8 hours, refrigerate for up to 2 days, or freeze for up to 2 weeks.
VARIATION
**FOR WALNUT PESTO:** Toast 6 tablespoons (45 g) of walnuts in a dry pan over medium heat until fragrant, 5 to 6 minutes, shaking the pan now and then. Put the toasted walnuts, 4 teaspoons (19 g) of room-temperature butter, and 1 cup (235 ml) of whole milk in a blender or small food processor. Process on slow speed and gradually increase the speed, adding more milk a little at a time, until the mixture looks smooth (up to ½ cup/120 ml additional milk). Blend until very smooth, about 2 minutes. When smooth, gradually add 2 tablespoons (30 ml) of olive oil in a slow, steady stream. Season with salt and pepper and use immediately or cover and refrigerate for up to 4 days before using. Makes about 2 cups (475 ml).
**CRÈME ANGLAISE**
You don't need to be a pastry chef to make this dessert sauce. You just need a little patience so you don't scramble the eggs. Once cooled, serve the sauce with almost any dessert. I like to spoon a pool of crème anglaise beneath a slice of pie, cake, or _torta._
MAKES ABOUT 2½ CUPS (625 ML)
6 large egg yolks
½ cup (100 g) granulated sugar
1 cup (235 ml) whole milk
1 cup (235 ml) heavy cream
1 vanilla bean, split and scraped
Put the egg yolks and sugar in a heatproof bowl that fits tightly over a saucepan or in the top of a double boiler. Whisk until light and pale yellow, 1 to 2 minutes. Heat the milk, cream, and vanilla in a heavy saucepan over medium heat until the mixture begins to simmer, 3 to 4 minutes, and then remove from the heat. Whisk about 1 cup (235 ml) of the hot cream mixture into the yolk mixture until incorporated, and then whisk in the rest. Set the bowl or double boiler top over gently simmering water. Cook and stir constantly but gently until the sauce thickens slightly and registers 165°F (74°C), 5 to 8 minutes. Remove from the heat and stir until the sauce thickens to the consistency of heavy cream, about 2 minutes. Strain through a fine-mesh sieve into a bowl and let stand for 10 minutes, stirring occasionally. Let cool completely, and then cover and refrigerate until cold, at least 1 hour or up to 3 days.
**PASTRY CREAM**
Pastry cream is creamier than crème anglaise because it's thickened with starch. It's what you see inside a typical chocolate éclair or cream-filled doughnut. The key here is cooking the cream long enough to cook out the starchy taste.
MAKES ABOUT 2 CUPS (475 ML)
¾ cup (150 g) granulated sugar
¼ cup (32 g) cornstarch
9 large egg yolks
1⅔ cups (400 ml) whole milk
6 tablespoons (90 ml) heavy cream
½ vanilla bean, split and scraped
Combine the sugar, cornstarch, and egg yolks in the bowl of a stand mixer fitted with the whisk attachment. Whisk on medium speed until smooth, about 2 minutes.
Bring the milk, cream, and vanilla to a boil in a medium saucepan over medium heat. Remove from the heat and mix about ½ cup (120 ml) of the hot milk mixture into the egg mixture. Scrape all of the egg mixture into the pot of hot milk, and cook over medium-high heat until bubbling, thick, and creamy, stirring constantly, 2 to 3 minutes. Transfer the mixture to the mixer bowl and whip on low speed until cool, 3 to 4 minutes.
**CHOCOLATE SAUCE**
This is the most badass chocolate sauce ever. It's simple, keeps for a week or two in the fridge, and goes with almost any dessert. You can even make a milk shake with it. For extra-bitter chocolate sauce, use a higher percentage cacao chocolate, such as 65% Manjari from Valrhona.
MAKES ABOUT 5 CUPS (1.25 L)
1½ cups (300 g) granulated sugar
⅓ cup (90 ml) glucose syrup or light corn syrup
1⅓ cups (115 g) unsweetened Dutch-process cocoa powder
1 pound (450 g) bittersweet chocolate (about 58% cacao), melted
Bring the sugar, glucose syrup, and 2 cups (475 ml) of water to a simmer in a medium saucepan over medium-high heat. Put the cocoa powder in a large heatproof bowl and whisk in about ½ cup (120 ml) of the hot sugar syrup to make a smooth paste. Whisk in the remaining sugar syrup until smooth, and then whisk in the melted chocolate until fully blended. For a super-silky texture, strain through a fine-mesh sieve. The sauce can be covered and refrigerated for up to 2 weeks before using. Reheat gently over low heat.
**GELATO AND SORBET**
The recipes that follow are the real-deal gelato and sorbetto. Don't worry about the oddball ingredients here and there; they can all be ordered over the Internet (see Sources on page 289). There are two gelato bases, and different bases are used to make different flavors of gelato. The flavors themselves are given as variations under the base recipes. Note that some of the gelato flavors make about 6 cups (1.5 L). If that's too much volume for your ice-cream machine, make a smaller batch, or churn half of the base at a time. And make sure you check the machine as the gelato churns; overchurning can make the gelato grainy. Stop churning when the gelato is nice and creamy.
**YELLOW GELATO BASE**
MAKES ABOUT 4 CUPS (1 L)
2 cups (475 ml) whole milk
2 cups (475 ml) heavy cream
15 large egg yolks
1 cup plus 2 tablespoons (225 g) granulated sugar
Bring the milk and cream to a simmer in a saucepan over medium heat, and then remove from the heat. Meanwhile, whip the egg yolks and sugar in the bowl of a stand mixer fitted with the whip attachment on medium-high speed until light, fluffy, and pale yellow, about 3 minutes. Reduce the speed to medium-low and mix about 1 cup (235 ml) of the hot cream mixture into the yolk mixture. Stir the yolk mixture back into the remaining cream mixture in the pan. Return the pan to low heat and cook, stirring frequently, until it registers 160°F (71°C), 5 to 8 minutes. Fill a bowl with ice water, and then strain the mixture through a fine-mesh sieve into a heatproof bowl. Rest the bowl in the ice water to cool it down. It will keep covered in the refrigerator for 1 week.
VARIATIONS
**FOR PISTACHIO GELATO:** Use an immersion blender, stand blender, or food processor to blend together 2¾ cups (675 ml) of white gelato base (page 287), ⅔ cup (150 ml) of yellow gelato base, ⅓ cup (90 ml) of PreGel pistachio paste (see the Sources on page 290), and 2½ tablespoons (37 ml) sugar syrup (page 288). Blend until smooth, and for a super-silky gelato, strain through a medium-mesh sieve. Transfer to an ice-cream machine and freeze according to the manufacturer's directions. Makes about 6 cups (1.5 L).
**FOR CHINOTTO GELATO:** Pour 3 quarts (3 L) of chinotto into a large saucepan and boil over medium-high heat until reduced to a thick, molasses-like syrup, 45 to 50 minutes. You should have about 1½ cups (375 ml) of chinotto reduction. Use an immersion blender, stand blender, or food processor to blend together the chinotto reduction, 3 cups (750 ml) of white base (page 287), and ¾ cup (175 ml) of yellow base. Blend until smooth, 1 to 2 minutes, and then freeze in an ice-cream machine according to the manufacturer's directions. Makes about 6 cups (1.5 L).
**FOR CANTUCCI GELATO:** Use an immersion blender, stand blender, or food processor to blend together 3 cups (750 ml) of white base (page 287) and 1 cup (235 ml) of yellow base. Blend until smooth, and then freeze in an ice-cream machine according to the manufacturer's directions. When the mixture is halfway frozen, fold in 2½ cups (300 g) coarsely crushed Cantucci (page 229) and continue freezing according to the manufacturer's directions. Makes about 6 cups (1.5 L).
**WHITE GELATO BASE**
MAKES ABOUT 5 CUPS (1.25 L)
4 cups (1 L) whole milk
1 cup (235 ml) heavy cream
¾ cup (150 g) granulated sugar
⅔ cup (85 g) PreGel dry milk powder (see Sources, page 290)
¼ cup (28 g) dextrose or 3 tablespoons (22 g) superfine sugar
Pinch of salt
2 tablespoons plus 2 teaspoons (52 g) glucose syrup or light corn syrup
Whisk together the milk, cream, sugar, milk powder, dextrose, salt, and glucose syrup in a large saucepan over medium-low heat. Whisk frequently until the mixture registers 140°F (60°C) on a candy thermometer. Lower the heat to its lowest setting so that the mixture maintains a temperature of 140°F (60°C) for 30 minutes (periodically check the temperature, especially during the last 10 minutes of cooking, and adjust the heat as necessary to maintain that temperature). Fill a bowl with ice water and increase the heat under the saucepan to medium. When the temperature of the white base reaches 180°F (82°C), remove the saucepan from the heat, plunge the bottom of the pan into the ice water, and stir until the white base cools. It will keep covered in the refrigerator for 1 week.
VARIATIONS
**FOR FIORDILATTE GELATO:** Make the white base as directed and freeze in an ice-cream machine according to the manufacturer's directions. Makes about 5 cups (1.25 L).
**FOR MASCARPONE GELATO:** Use an immersion blender, stand blender, or food processor to blend together 2½ cups (625 ml) of white gelato base, 2 cups (460 g/about 1 pound) of mascarpone, and ½ cup (120 ml) of sugar syrup (page 288). Blend until smooth, and for a super-silky gelato, strain through a medium-mesh sieve. Transfer to an ice-cream machine and freeze according to the manufacturer's directions. Makes about 5 cups (1.25 L).
**FOR POPPY SEED GELATO:** Bring 9 tablespoons (140 ml) of water, 1 tablespoon (9 g) of poppy seeds, and ⅓ cup (67 g) of granulated sugar to a boil in a small saucepan. Whisk in ½ teaspoon (1 g) of agar powder (a natural thickener carried in health food stores or the Asian section of large supermarkets). Return the liquid to a boil and boil for 30 seconds. Remove from the heat and let cool to room temperature. Whisk in 4 cups (1 L) of white gelato base until blended, and then freeze in an ice-cream machine according to the manufacturer's directions. Makes about 6 cups (1.5 L).
**FOR POLENTA GELATO:** Bring 4 cups (1 L) of whole milk and ½ teaspoon (3 g) of salt to a simmer in a large saucepan over high heat. Slowly whisk in ½ cup (80 g) of coarse yellow cornmeal (polenta), and then lower the heat to medium-low and cook until the polenta is soft and the milk has been absorbed, 25 to 30 minutes, whisking frequently to prevent the polenta from burning on the bottom. Measure out 2¼ cups (560 ml) of the cooked polenta and refrigerate any extra for another use. Use an immersion blender, stand blender, or food processor to blend together the 2¼ cups (560 ml) of cooked polenta, 2½ cups (625 ml) of white base, and ½ cup (120 ml) of sugar syrup (page 288). Blend until very smooth, 3 to 4 minutes, and then freeze in an ice-cream machine according to the manufacturer's directions. Makes 4 to 5 cups (1 to 1.25 L).
**FOR BUTTERMILK GELATO:** Use an immersion blender, stand blender, or food processor to blend 2¼ cups (560 ml) of buttermilk, 1⅔ cups (400 ml) of white gelato base, and ¾ cup (175 ml) of sugar syrup (page 288). Blend until smooth, and for a super-silky gelato, strain the mixture through a medium-mesh sieve. Freeze in an ice-cream machine according to the manufacturer's directions. Makes about 4 cups (1 L).
**SUGAR SYRUP**
MAKES ABOUT 4 CUPS (1 L)
3 cups (600 g) granulated sugar
½ cup (120 ml) glucose syrup or light corn syrup
2 tablespoons (14.25 g) powdered dextrose, or 1½ tablespoons (11.75 g) superfine sugar
Fill a bowl with ice water. Heat the sugar, glucose syrup, dextrose, and 1½ cups (375 ml) of water in a medium saucepan over low heat until the sugars dissolve, 3 to 5 minutes, stirring occasionally. Plunge the pan bottom into the ice water to cool down the syrup. It can be covered and refrigerated for about 2 weeks before using.
VARIATIONS
**FOR GOAT CHEESE SORBET:** Combine 10 ounces (284 g/about 1⅓ cups) of soft goat cheese, 1¼ cups (310 ml) of sugar syrup, 1 cup (235 ml) of water, and 2 tablespoons (30 ml) of glucose syrup or light corn syrup in a blender. Puree until very smooth, 2 to 3 minutes. Freeze in an ice-cream machine according to the manufacturer's directions. Makes about 4 cups (1 L).
**FOR RASPBERRY SORBET:** Combine 3¼ cups (14 ounces/400 g) of fresh raspberries, 2 cups (475 ml) of sugar syrup, and ¾ cup plus 2 tablespoons (205 ml) of water in a blender. Puree until very smooth, 2 to 3 minutes, and then strain through a fine-mesh strainer. Freeze in an ice-cream machine according to the manufacturer's directions. Makes about 4 cups (1 L).
**FOR PEACH SORBET:** Combine 2⅔ cups (14 ounces/400 g) of peeled, pitted, and sliced peaches, 2 cups (475 ml) of sugar syrup, and ¾ cup plus 2 tablespoons (205 ml) of water in a blender. Puree until very smooth, 2 to 3 minutes, and then strain through a fine-mesh strainer. Freeze in an ice-cream machine according to the manufacturer's directions. Makes about 4 cups (1 L).
**CANDIED CITRUS PEEL**
Drop a few of these sweet peels on ice cream, cake, or anywhere you want the perfume of citrus. Just be sure to remove all the bitter white pith and to julienne the peels super-fine. The recipe here is for candied orange peel or candied lemon peel, but you could use other citrus. You'll need enough citrus to yield about one cup (235 ml) of julienned peels.
MAKES ABOUT 1 CUP (235 ML)
5 oranges, or 10 lemons
5 cups (1 kg) granulated sugar
2 cups plus 2 tablespoons (505 ml) water, plus some for blanching
⅔ cup (150 ml) glucose syrup or light corn syrup
Use a vegetable peeler to remove the zest from the oranges or lemons in strips, leaving behind any white pith. Thinly julienne the peels and then place them in a medium saucepan with cold water to cover. Bring to a boil over high heat. As soon as the water comes to a rolling boil, drain the peels and cover again with cold water. Repeat the process to blanch the peels three times.
Combine the blanched peels, sugar, water, and glucose syrup in a medium saucepan over medium-high heat and cook until the liquid registers 225°F (110°C) on a candy thermometer, 15 to 20 minutes. Let cool and then store the peels in the syrup at room temperature for up to 1 month. Use the citrus-scented syrup for cocktails or to pour over ice cream.
SOURCES FOR THE COOK AND TRAVELER
Most of the ingredients and equipment I call for in my recipes are widely available. But here are some sources for oddball things, such as wild hare and _corzetti_ stamps—along with a few of my favorite purveyors. I also listed contact information for most of the markets, shops, restaurants, wineries, bars, hotels, and inns mentioned throughout the book so you can visit these places yourself.
EQUIPMENT
Artisanal Pasta Tools
Sonoma, California
www.artisanalpastatools.com
707-939-6474
_Corzetti stamps._
The Baking Pan
www.thebakingpan.com
_Brioche molds, tart pans, and other baking supplies._
Barbecue Wood
P.O. Box 8163
Yakima, WA 98908
509-965-0123
www.barbecuewood.com
_Oak, hickory, and other woods for grilling, roasting, and smoking._
Fante's
1006 South Ninth Street
Philadelphia, PA 19147
215-922-5557
www.fantes.com
_Pasta machines and other pasta-making supplies._
Franco Casoni
Via Bighetti 73
16043 Chiavari
Province of Genoa, Italy
+39 0185 301448
www.francocasoni.it
_Corzetti stamps custom-made in Italy._
King Arthur Flour
135 US Route 5 South
Norwich, VT 05055
802-649-3361
www.kingarthurflour.com
_Metric scales, baking pans, and other baking supplies._
KitchenAid
Customer Satisfaction Center
P.O. Box 218
St. Joseph, MI 49085
800-541-6390
www.kitchenaid.com
_Stand mixers, extruded pasta presses, pasta rollers, pasta cutters, meat grinders, sausage stuffers, and other attachments._
Previn
2044 Rittenhouse Square
Philadelphia, PA 19103
215-985-1996
www.previninc.com
_Ring molds, terrine molds, and other baking supplies._
WEBstaurant Store
717-392-7472
www.webstaurantstore.com
_Stand mixers, baking sheets, baking molds, and other kitchen equipment._
MEAT AND FISH
Country Time Farm
3017 Mountain Road
Hamburg, PA 19526
610-562-2090
www.countrytimefarm.com
_Pork._
Creekstone Farms
604 Goff Industrial Park Road
Arkansas City, KS 67005
620-741-3100
www.creekstonefarms.com
_Dry-aged rib-eye steak._
D'Artagnan
280 Wilson Avenue
Newark, NJ 07105
973-344-0565
www.dartagnan.com
_Duck, rabbit, lamb, suckling pig, guinea hen, pheasant, wild hare, and other meats._
Di Bruno Brothers
930 South Ninth Street
Philadelphia, PA 19147
215-922-2876
www.dibrunobrothers.com
_Pancetta, prosciutto, and other cured meats._
Jamison Farm
171 Jamison Lane
Latrobe, PA 15650
800-237-5262
www.jamisonfarm.com
_Lamb._
Potironne
120 Thornwood Road
Georgetown, TX 78628
512-635-3742
www.potironne.com
_Canned snails._
Samuels & Son Seafood Company
3407 South Lawrence Street
Philadelphia, PA 19148
800-580-5810
www.samuelsandsonseafood.com
_Halibut, swordfish, squid, cuttlefish, and other seafood._
The Sausage Maker, Inc.
1500 Clinton Street, Building 123
Buffalo, NY 14206
888-490-8525
www.sausagemaker.com
_Curing salt and sausage-making supplies._
La Tienda
3601 La Grange Parkway
Toano, VA 23168
800-710-4304
www.tienda.com
_Baccalà (salt cod) and squid ink._
Wells Meats
982 North Delaware Avenue
Philadelphia, PA 19123
215-627-3903
www.wellsmeats.com
_Duck, rabbit, oxtails, lamb, suckling pig, and other meats._
CHEESE AND DAIRY
Di Bruno Brothers
930 South Ninth Street
Philadelphia, PA 19147
Phone: 215-922-2876
www.dibrunobrothers.com
_Parmigiano-Reggiano, Gorgonzola dolce, mozzarella di bufala, ricotta salata, and other Italian cheeses._
Euro Gourmet
10312 Southard Drive
Beltsville, MD 20705
301-937-2888
www.eurogourmet.biz
_Burrata, Bra, and other Italian cheeses._
VEGETABLES AND FRUIT
Alma Gourmet Ltd.
39-12 Crescent Street
Long Island City, NY 11101
718-433-1616
www.almagourmet.com
_White truffles._
Blue Moon Acres
P.O. Box 201
Buckingham, PA 18912
215-794-3093
www.bluemoonacres.net
_Lettuces, microgreens, and herbs._
Buon Italia
75 Ninth Avenue
New York, NY 10011
212-633-9090
www.buonitalia.com
_White truffles and truffle paste._
D'Artagnan
280 Wilson Avenue
Newark, NJ 07105
973-344-0565
www.dartagnan.com
_White truffles._
L'Épicerie
866-350-7575
www.lepicerie.com
_Candied citrus peel._
George Richter Farm
4512 70th Avenue East
Fife, WA 98424
253-922-5649
_Fresh berries._
Gourmet Foodstore
3212 NW 64th Street
Boca Raton, FL 33496
877-220-4181
www.gourmetfoodstore.com
_White truffles and truffle paste._
Green Meadow Farm
130 South Mount Vernon Road
Gap, PA 17527
717-442-5222
www.glennbrendle.com
_Vegetables and fruits._
Linvilla Orchards
137 West Knowlton Road
Media, PA 19063
610-876-7116
www.linvilla.com
_Apples, squash, and vegetables._
Maximus International Foods
15 Bicknell Road
Weymouth, MA 02191
617-331-7959
_Wild and cultivated mushrooms._
Il Mercato Italiano
P.O. Box 9751
Green Bay, WI 54308
877-202-8881
www.ilmercatoitaliano.net
_La Valle San Marzano tomatoes._
DRY GOODS
Allen Creek Farm
P.O. Box 841
Ridgefield, WA 98642
360-887-3669
www.chestnutsonline.com
_Chestnut flour._
Amazon.com
866-216-1072
www.amazon.com
_Forno Bonomi ladyfingers, glucose syrup, and Casa Forcelli Mostarda._
Buon Italia
75 Ninth Avenue
New York, NY 10011
212-633-9090
www.buonitalia.com
_Tipo 00 flour, semolina, and other dry goods._
King Arthur Flour
135 US Route 5 South
Norwich, VT 05055
802-649-3361
www.kingarthurflour.com
_All-purpose flour, durum flour, semolina, almond flour, SAF instant yeast, food-grade cocoa butter, and other baking ingredients._
Il Mercato Italiano
P.O. Box 9751
Green Bay, WI 54308
877-202-8881
www.ilmercatoitaliano.net
_Amaretti cookies._
The Meadow
888-388-4633
523 Hudson St
New York, NY 10014
3731 N Mississippi Ave.
Portland, OR 97227
www.atthemeadow.com
_Gourmet salts and salt blocks._
PreGel USA
8700 Red Oak Boulevard, Suite A
Charlotte, NC 28217
704-333-6804
www.pregel-usa.com
_Pistachio paste, dry milk powder, and gelato supplies._
Whole Foods Market
www.wholefoodsmarket.com
_Chestnut flour and other culinary products._
RESTAURANTS AND BARS IN ITALY AND THE UNITED STATES
Alla Spina
1410 Mount Vernon Street
Philadelphia, PA 19130
215-600-0017
www.allaspinaphilly.com
Alpine Grove
19 South Depot Road
Hollis, NH 03049
603-882-9051
www.alpinegrove.com
Amis Trattoria
412 South 13th Street
Philadelphia, PA 19147
215-732-2647
www.amisphilly.com
Bedford Village Inn
2 Olde Bedford Way
Bedford, NH 03110
603-472-2001
www.bedfordvillageinn.com
Da Caino
Via della Chiesa, 4
Montemerano, 58050 Manciano
Province of Grosseto, Italy
+39 0564 602817
www.dacaino.it
Da Cesare
Via Umberto, 9
Albaretto della Torre
Province of Cuneo, Italy
+39 0173 520147
Da Guido
Via Fossano, 19 a Pollenzo
12042 Bra
Province of Cuneo, Italy
+39 0172 458422
www.guidoristorante.it
Fiaschetteria Nuvoli
Piazza dell'Olio, 15
50123 Florence
Province of Florence, Italy
+39 055 2396616
Frosio Ristorante
Piazza Lemine, 1
24011 Almé
Province of Bergamo, Italy
+39 035 541633
www.frosioristoranti.it
The Greenhouse Tavern
2038 East 4th Street
Cleveland, OH 44115
216-443-0511
www.thegreenhousetavern.com
Locanda Don Serafino Ristorante
Via Avv. G. Ottaviano
97100 Ragusa Ibla
Province of Ragusa, Italy
+39 0932 248778
www.locandadonserafino.it
O'Dea's Pub
Via Borgo Palazzo 211
24125 Bergamo
Province of Bergamo, Italy
+39 035 298511
www.odeaspub.it
Olfà
Via Colleoni, 3
Osio Sotto
24046 Bergamo
Province of Bergamo, Italy
+39 035 881047
Osteria
640 North Broad Street
Philadelphia, PA 19130
215-763-0920
www.osteriaphilly.com
Osteria Boccondivino
Via Mendicità Istruita, 14
12042 Bra
Province of Cuneo, Italy
+39 0172 425674
www.boccondivinoslow.it
Osteria dei Catari
Via Solferino
12065 Monforte d'Alba
Province of Cuneo, Italy
+39 0173 787256
www.osteriadeicatari.com
Osteria dell' Arco
Piazza Savona 5
12051 Alba
Province of Cuneo, Italy
+39 0173 363974
www.osteriadellarco.it
Osteria della Brughiera
Via Brughiera, 49
24018 Villa d'Almé
Province of Bergamo, Italy
+39 035 638008
www.labrughiera.com
Ristorante Antica Torre
Via Torino, 64
Barbaresco
Province of Cuneo, Italy
+39 0173 635170
Ristorante dei Pescatori
Via Doria, 6
19032 Lerici
Province of La Spezia, Italy
+39 0187 965534
Ristorante Flipot
Corso Antonio Gramsci, 17
10066 Torre Pellice
Province of Turin, Italy
+39 0121 91236
Ristorante Il Latini
Via dei Palchetti 6
50123 Florence
Province of Florence, Italy
+39 055 210916
www.illatini.com
Il Ristorante Loro and LoRo&Co
Bistrò/Pizzeria
Via Bruse, 2
24069 Trescore Balneario
Province of Bergamo, Italy
+39 035 945073
www.loroandco.com
Sadler Ristorante
Via Cardinale Ascanio Sforza, 77
20141 Milan
Province of Milan, Italy
+39 02 58104451
www.sadler.it
Tucans Pub
Via Gaetano Donizetti, 25
24129 Bergamo
Province of Bergamo, Italy
+39 035 575538
www.tucans.it
Vetri Ristorante
1312 Spruce Street
Philadelphia, PA 19107
215-732-3478
www.vetriristorante.com
SHOPS AND MARKETS IN ITALY
5 Terre Gelateria e Crêperia
Via Antonio Discovolo, 248
19010 Manarola
La Spezia, Italy
Cazzamali Macelleria
Via Gaspare Vezzoli, 2
26014 Romanengo
Province of Cremona, Italy
+39 0373 72101
Eataly
Via Nizza 230, Lingotto
10126 Turin
Province of Turin, Italy
+39 011 19506801
www.eatalytorino.it
Gelateria Peccati de Gola
Via Mazzini 174, Albino
Lombardia 24021, Italy
+39 035 752299
Mangili Macelleria
Via Roma, 17
Paladina
Province of Bergamo, Italy
+39 035 542116
Pesceria Orobica
Via Bianzana, 19
24124 Bergamo
Province of Bergamo, Italy
+39 035 4172611
www.orobicapesca.it
Salone del Gusto
Via Nizza, 280
10100 Turin
Province of Turin, Italy
+39 0172 419611
www.salonedelgusto.it
WINERIES IN ITALY
Bellavista
Via Bellavista, 5
Adro
Province of Brescia, Italy
www.bellavistawine.it
Ca' del Bosco
Via Albano Zanella, 13
Erbusco
Province of Brescia, Italy
+39 030 7766111
www.cadelbosco.com
Ceretto
Località S. Cassiano 34
12051 Alba
Province of Cuneo, Italy
+39 0173 282582
www.ceretto.com
Clerico
Località Manzoni, 22/A
12065 Monforte d'Alba
Province of Cuneo, Italy
+39 0173 78171
www.domenicoclerico.com
Marchesi di Grésy
Strada della Stazione, 21
12050 Barbaresco
Province of Cuneo, Italy
+39 0173 635221
www.marchesidigresy.com
Rocche dei Manzoni
Località Manzoni Soprani, 3
12065 Monforte D'Alba
Province of Cuneo, Italy
+39 0173 78421
www.barolobig.com
Scavino
Via Alba-Barolo, 59
12060 Castiglione Falleto
Province of Cuneo, Italy
+39 0173 62850
www.paoloscavino.com
HOTELS AND INNS IN ITALY AND THE UNITED STATES
Agriturismo Armea
Località Armea
25015 San Martino della Battaglia
Desenzano
Province of Brescia, Italy
+39 030 9910481
www.agriturismoarmea.it
B&B Novecento
Via Ricasoli, 10
50122 Florence
Province of Florence, Italy
+39 05 5214138
www.bbnovecentofirenze.it
Ca' du Rabajà
Località Rabajà, 28
12051 Barbaresco
Province of Cuneo, Italy
+39 0173 635016
www.cadurabaja.com
Donna Franca
Trappitello
Province of Messina, Italy
+39 3473713686
www.discover-sicily.com
Hotel Florida
Via S. Biaggini, 35
19032 Lerici
La Spezia, Italy
+39 0187 967332
www.hotelflorida.it
Locanda Armonia
Località Redona
Trescore Balneario
Province of Bergamo, Italy
+39 035 4258026
www.locanda-armonia.it
Locanda del Biancospino
Via Monte Beio, 26
24026 Leffe
Province of Bergamo, Italy
+39 035 7172161
www.locandadelbiancospino.com
Parson's Post House
62 Shore Road
Ogunquit, ME 03907
207-646-7533
www.parsonsposthouse.com
ACKNOWLEDGMENTS
I slept through English class, so writing this book has been challenging to say the least. It could not have happened without the love, support, and creativity of so many people.
Claudia: Meeting you was the greatest moment in my life. You filled my heart and opened yours to me. I would not be the husband, chef, and father I am today without having you by my side. And this book could not have happened without you. _Sei tutto per me amore e Ti amo con tutto il mio cuore!_ (You are everything to me, my love, and I love you with all of my heart!)
Mom and Dad: Thank you for standing by me on every decision I made—even when you thought they were stupid decisions. I know there were times when you couldn't imagine why I was doing what I was doing, but you both were always there for me. I love you both very much.
Marc Vetri: When we first met, I was living larger than life and knew nothing about the simplicities of it. Meeting you showed me that life was as simple as a plate of spaghetti with tomatoes and basil. You changed my life not only professionally but also personally. I couldn't have asked for a better mentor, partner, and best friend.
Jeff Benjamin: When Marc hired me and you weren't so sure about it, there was no way I was going to make you regret taking that chance. Ten years later, I want to say thank you. Without you, I would not have taken my first trip to Italy and fallen in love with the country, about which I knew little. Your knowledge of wine and business is endless, and the classes you held with me were priceless. I continue to learn from you every day as a partner and friend.
Brad Spence: Cooking with you in the kitchen at Vetri was a lot of fun. We shared the same passion and style of cuisine, and it was amazing to bounce ideas off each other and create great tasting menus together. I knew from early on that you were going to fit into this family. Thanks for always inspiring me.
Miles Angelo: Without you, Miles, I would have never met my Claudia, Marc, or Jeff. Thank you for bringing these people into my life and giving me the tools I needed to grow as a young chef.
Clare Pelino: Thanks for finding a home for this book. Your hard work and persistence really paid off, and I look forward to our new friendship and projects in the future.
Osteria Crew: What can I say? You guys are the best!!! I am so lucky to have all of you and can't thank you enough for your hard work, not only for the book, but for every day that you put your heart and soul into Osteria.
Brad Daniels: Since Osteria opened, I have never felt as comfortable with someone's running the kitchen in my absence as I have with you. Your enthusiasm, skill, and dedication never cease to amaze me. Thank you for all the work you did for this book. I couldn't have done it without you.
Vetri family: This is becoming one giant family and it keeps filling up with some of the most caring, talented, and hard-working individuals I know. Together, we have created a family that I am proud to be a part of. Thank you all.
Kelly Campbell: How you were able to capture the rustic essence of my food and the places we visited in Italy is beyond me. You're a rock star, and it was a lot of fun to work with you on this project. Can't wait till the next one.
David Joachim: We finally did it! After revising the proposal four times and wading through endless manuscript revisions, we have put together a very special book. From the beginning, you not only saw my vision but were able to capture my voice. And I don't think I ever saw anyone able to eat as much food as you did in Italy. That was pretty impressive. Thank you for your hard work and for becoming a dear friend.
Running Press: Thank you to my editor Kristen Green Wiewora, book designer Josh McDonnell, and the rest of the team at Running Press for helping to make this a real, live book.
Giuseppina Carrara: _Pina, non ho parole per dirti quanto la tua famiglia ha cambiato la mia vita. Tu sei una persona molto speciale, in tanti modi. Mi hai accettato nella tua casa come fossi un figlio fin dal primo momento. Mi auguro tu senti quanto sei importante per me. Ti voglio tanto bene e grazie per tutte le ricette di famiglia—è il nostro segreto!_ (Pina, I don't have the words to describe how your family changed my life. You are a very special person in many ways. You accepted me into your house as if I were your own son. I hope you feel the same. I love you, Pina, and. . . thank you for all the family recipes—it's our little secret!)
Andrea Forcella: _Ciao, Zio! Tutte le volte che ho un dilemma con la pasticceria, tu ci sei. Hai sempre la risposta giusta o la ricetta perfetta per togliermi dai pasticci. In questi anni siamo diventati grandi amici e mi sento molto fortunato di averti nella mia vita. Grazie, Andrea, per la tua amicizia e per condividere la tua "dolce arte" con me._ (Hey, Uncle! Every time I have a problem or question in the pastry kitchen, you are always there. You have the right answer and perfect recipes. Over the years, we have become great friends, and I am very fortunate to have you in my life. Thank you, Andrea, for your friendship and for sharing your culinary knowledge with me.)
Antonio and Francesco: _Ricordo quando ho iniziato da voi a "LoRo" e avevate solo otto tavoli. Sono passati solo pochi anni e adesso avete un ristorante bellissimo e siete stati premiati con la stella Michelin. Complimenti!!! Vi ringrazio per l'opportunità che mi avete dato dieci anni fa senza nemmeno conoscermi. Adesso siete per me come due fratelli e vi considero parte della mia famiglia._ (I remember when I started at LoRo and you only had eight tables. Only a few years have passed and now you have a Michelin Star. Congratulations!!! I want to thank you for the opportunity you gave me ten years ago without even knowing me. Now you both have become like brothers and part of my family.)
Paolo Frosio: _Il lungo periodo che ho passato nella tua casa e nel tuo ristorante è stato una delle migliori esperienze della mia vita. Non solo ho imparato un sacco di cose nella tua cucina ma ho anche incontrato la mia Claudia lì. Frosio rimarrà sempre un posto speciale per noi. Paolo, sei uno chef che ammiro molto e ti ringrazio per tutto quello che hai fatto per me._ (It's been a long time since we worked together, but that time in your kitchen was one of the best experiences of my life. Not only did I learn a lot from you, but I also met Claudia there. For us, Frosio will always remain a special place. Paolo, you are a chef that I admire and I would like to thank you for everything you have done for me.)
Matteo Breda: _Carissimo._ . . _come si fa a ringraziare cupido? Senza di te non avrei mai conosciuto Claudia._ . . _La tua passione e creatività mi hanno sempre affascinato e ci hanno unito al punto di sentirci come due vecchi amici. Questo libro è il frutto della mia esperienza in Italia e in parte lo devo a te Cupido-MacGyver!_ (My dearest friend. . . how do I say thank you to a Cupid? Without you, I never would have met my Claudia . . . Your passion and creativity have always fascinated me, and it has united us like two old friends. This book is about all of my experiences in Italy, and I owe it to you, Cupid-MacGyver!)
DAVID JOACHIM THANKS:
Jeff Michaud, for opening a window to your world and letting me climb in, live there, and tell your story.
Brad Daniels, for being supremely organized.
Kelly Campbell, for laughing at all the "hide the salami" jokes with Mr. Bull Dick.
Anthony Christian diMarco, for accepting the job of Key Grip, and for sleeping in Italy.
Carrie Havranek, for smiling during hellish days of recipe-testing and broken ovens.
All the tasters who ate and swooned and critiqued during recipe testing, including Selene Yeager, Dave Pryor, Keith Plunkett, Christine Fennessy, Matt Morrison, Jaime Livingood, Tom Aczel, Andrew Brubaker, Bill Melcher, Doug Ashby, and the Peoples family.
August Joachim and Maddox Joachim, for trying everything, including roasted pig eyes.
And to life's best ride partner, Christine Bucher, for pushing me, picking up my slack, and helping another cookbook to see the light of day.
INDEX
Note: Page references in _italics_ indicate recipe photographs.
A
Affogato, Chinotto,
Agnolotti, Rabbit, with Pistachio Sauce, 118–19
**Almond(s)**
Braised Blueberries with Sbrisoluna,
Cantucci Sundae,
and Fresh Prune Tart,
Rustic Peach Tart with Goat Cheese Sorbet, 191–93, __
Torrone Semifreddo with Candied Chestnuts and Chocolate Sauce, 172–73
**Amaretti**
Bonèt,
and Mostarda, Squash Gnocchi with,
**Anchovies**
Fritto Misto de Pesce,
Pan-Fried Veal Tongue with Bagna Cauda and Leeks, 114–15
Red Bell Pepper Tonnato with Caper Berries, , __
Apple, Heirloom, Upside-Down Cake with Polenta Gelato,
**Apricot(s)**
and Chanterelle Salad with Parmesan Crisps, __ ,
Prosciutto Cotto with Stone Fruits, __ , 242–43
**Artichokes**
and Fava Beans, Pecorino Flan with, __ ,
and Fennel, Wild Branzino with, 247–48, __
Shaved, and White Truffle, Veal Tartare with,
**Asparagus**
Fried Stuffed Softshell Crabs with,
Mayonnaise,
B
Baccalà Cannelloni with Cauliflower and Parmigiano, 182–83
Bagna Cauda and Leeks, Pan-Fried Veal Tongue with, 114–15
Barbaresco Sauce, Whole Roasted Pheasant with,
**Bean(s)**
Borlotti, and Tuscan Kale, Ribollita Ravioli with, 137–38
Chickpea Cakes with Warm Lemon Crema,
Corona, Braised, Bistecca alla Fiorentina with,
Fava, and Artichokes, Pecorino Flan with, __ ,
Fava, and Robiola Francobolli, __ ,
Grilled Lamb Rack with Favetta and Roasted Pearl Onions, __ ,
Mixed, Salad, Meat Grigliata with, __ ,
Béchamel Sauce,
Porcini,
Truffle,
**Beef.** _See also **Veal**_
Bistecca alla Fiorentina with Braised Corona Beans,
Carne Salata with Red Onion, Celery, and Olive Oil,
Meat Grigliata with Mixed Bean Salad, __ ,
Shanks, Whole Braised, with Buckwheat Polenta,
Tartare with Fried Egg Yolk and Parmigiano, , __
Warm, Carpaccio with Roasted Mushrooms, , __
**Berries**
Braised Blueberries with Sbrisoluna,
Fried Huckleberry Ravioli with Mascarpone Crema, 211–12, __
Raspberry Sorbet,
Strawberry Zuppa Inglese with Mascarpone Cake,
Summer, Olive Oil Panna Cotta with,
Warm Quince Tortini with Cranberry and Orange, 126–27
Bistecca alla Fiorentina with Braised Corona Beans,
Bitto Cheese, Chard, and Potato, Pizzoccheri with,
Black Bass, Oil-Poached, with Fresh Peas and Baby Tomatoes, , __
Blood Orange Crostata with Bitter Chocolate, 231–33, __
Blueberries, Braised, with Sbrisoluna,
Bomboloni with Vin Santo Crema,
Bonèt,
Braised Blueberries with Sbrisoluna,
Breads. _See_ Bruschetta
Brine, 3-2-1,
**Bruschetta** , 30–31
Cockles and Eggs with,
and Egg Yolk, Duck Liver alla Fiorentina with,
Bucatini with Pig Testina,
**Buckwheat**
Pasta Dough,
Polenta,
Polenta, Whole Braised Beef Shanks with,
Burrata and Basil, Tomato Tortellini with, , __
Buttermilk Gelato,
C
Cabbage and Speck, Canederli with,
**Cakes**
Citrus Rum Babas alla Crema,
Grappa Torta,
Heirloom Apple Upside-Down, with Polenta Gelato,
Mascarpone, Strawberry Zuppa Inglese with,
Meyer Lemon Tortas with Poppy Seed Gelato,
Calamari, Grilled Stuffed, with Meyer Lemon and Beets, , __
**Candele Pasta**
Dough,
with Wild Boar Bolognese, __ ,
Candied Citrus Peel,
Canederli with Cabbage and Speck,
**Cannelloni**
Baccalà, with Cauliflower and Parmigiano, 182–83
Pork Neck, with Heirloom Tomatoes and Basil, 28–29
**Cantucci**
Gelato,
Sundae,
Caper Berries, Red Bell Pepper Tonnato with, , __
Capon, Genovese Ravioli with, 100–101
Caramelle, Polenta, with Raschera Fonduta and Black Truffle, , __
Carne Salata with Red Onion, Celery, and Olive Oil,
Casoncelli, Duck, with Quince, Brown Butter, and Sage, 204–5
Castelmagno Fonduta and White Truffle, Potato Gnocchi with,
Cauliflower and Parmigiano, Baccalà Cannelloni with, 182–83
**Celery**
Red Onion, and Olive Oil, Carne Salata with,
and Taggiasca Olive Salad, Grilled Sardines with,
Celery Root Puree and Truffle Butter, Snail Spiedini with,
Chard, Potato, and Bitto Cheese, Pizzoccheri with,
**Cheese**
Apricot and Chanterelle Salad with Parmesan Crisps, __ ,
Baccalà Cannelloni with Cauliflower and Parmigiano, 182–83
Beef Tartare with Fried Egg Yolk and Parmigiano, , __
Bitto, Chard, and Potato, Pizzoccheri with,
Bra, Fonduta, Porcini Zuppa with, __ ,
Claudia's Limoncello Tiramisu,
Corn Tortelli with Ricotta Salata,
Crespelle della Mamma,
Crispy Sweetbreads with Parmigiano Fonduta and Grilled Treviso,
Focaccia Stuffed with Taleggio and Pancetta, , __
Fried Huckleberry Ravioli with Mascarpone Crema, 211–12, __
Goat, and Tarragon Vinaigrette, Pinzimonio with, , __
Goat, Sorbet,
Goat, Sorbet, Rustic Peach Tart with, 191–93, __
Grilled Stuffed Calamari with Meyer Lemon and Beets, , __
Mascarpone Gelato,
Pear and Treviso Salad with Taleggio Dressing,
Pecorino Flan with Fava Beans and Artichokes, __ ,
Pheasant Lasagna, , __
Polenta Caramelle with Raschera Fonduta and Black Truffle, , __
Porcini Ravioli with Taleggio and Burro Fuso,
Pork Neck Cannelloni with Heirloom Tomatoes and Basil, 28–29
Potato Gnocchi with Castelmagno Fonduta and White Truffle,
Rabbit Agnolotti with Pistachio Sauce, 118–19
Radicchio Ravioli with Balsamic Brown Butter,
Robiola and Fava Bean Francobolli, __ ,
Scarpinocc, , __
Schisola (Polenta Stuffed with Gorgonzola Dolce), __ ,
Squash and Fontina Lasagnetta,
Strawberry Zuppa Inglese with Mascarpone Cake,
Sweet Ricotta Frittelle with Raspberry Preserves,
Taleggio, Polenta Gnocchi Stuffed with,
Tomato Tortellini with Burrata and Basil, , __
Zucchini Flowers Stuffed with Ricotta and Tuna, , __
Cherry Shortcake with Cherry Meringata, __ , 148–49
**Chestnut(s)**
Candied, and Chocolate Sauce, Torrone Semifreddo with, 172–73
and Duck, Doppio Ravioli with, __ , 49–50
Pasta Dough,
Rice Pudding with Persimmon,
Chiacchiere with Coffee and Chocolate Budino, 252–53
Chickpea Cakes with Warm Lemon Crema,
**Chinotto**
Affogato,
Gelato,
**Chocolate**
Bitter, Blood Orange Crostata with, 231–33, __
Bonèt,
Budino and Coffee, Chiacchiere with, 252–53
Flan,
Grappa Torta,
Pistachio Flan, , __
Sauce,
Sauce and Candied Chestnuts, Torrone Semifreddo with, 172–73
Ciareghi, 244–45
Citrus Rum Babas alla Crema,
Clams, Tomatoes, and Peperoncino, Corzetti with, __ , 97–98
Claudia's Limoncello Tiramisu,
Cockles and Eggs with Bruschetta,
Coconut Latte Fritto with Passion Fruit Curd,
**Cod**
Fritto Misto de Pesce,
Smoked, Salad with Frisée and Soft-Cooked Eggs, __ ,
Coffee and Chocolate Budino, Chiacchiere with, 252–53
Corn Tortelli with Ricotta Salata,
Corzetti with Clams, Tomatoes, and Peperoncino, __ , 97–98
Cotechino-Stuffed Quail with Warm Fig Salad, , __
Crabs, Softshell, Fried Stuffed, with Asparagus,
Cranberry and Orange, Warm Quince Tortini with, 126–27
Crème Anglaise,
**Crespelle**
della Mamma,
Vanilla, with Caramelized Pineapple Sauce,
Crispy Sweetbreads with Parmigiano Fonduta and Grilled Treviso,
Crostata, Blood Orange, with Bitter Chocolate, 231–33, __
**Cuttlefish**
Tagliolini with Ragù di Seppia, , __
D
**Desserts.** _See also_ **Gelato**
Blood Orange Crostata with Bitter Chocolate, 231–33, __
Bomboloni with Vin Santo Crema,
Bonèt,
Braised Blueberries with Sbrisoluna,
Cantucci Sundae,
Cherry Shortcake with Cherry Meringata, __ , 148–49
Chestnut Rice Pudding with Persimmon,
Chiacchiere with Coffee and Chocolate Budino, 252–53
Chickpea Cakes with Warm Lemon Crema,
Chinotto Affogato,
Chocolate Flan,
Citrus Rum Babas alla Crema,
Claudia's Limoncello Tiramisu,
Coconut Latte Fritto with Passion Fruit Curd,
Fig Strudel, __ , 80–81
Fresh Prune and Almond Tart,
Fried Huckleberry Ravioli with Mascarpone Crema, 211–12, __
Grappa Torta,
Heirloom Apple Upside-Down Cake with Polenta Gelato,
Meyer Lemon Tortas with Poppy Seed Gelato,
Olive Oil Panna Cotta with Summer Berries,
Peach Sorbet,
Pistachio Flan, , __
Raspberry Sorbet,
Rhubarb Tartellette with Italian Meringue, 59–60, __
Rustic Peach Tart with Goat Cheese Sorbet, 191–93, __
Strawberry Zuppa Inglese with Mascarpone Cake,
Sweet Ricotta Frittelle with Raspberry Preserves,
Torrone Semifreddo with Candied Chestnuts and Chocolate Sauce, 172–73
Vanilla Crespelle with Caramelized Pineapple Sauce,
Warm Quince Tortini with Cranberry and Orange, 126–27
Zabaione with Moscato and Fresh Figs,
Doppio Ravioli with Duck and Chestnut, __ , 49–50
**Duck**
Casoncelli with Quince, Brown Butter, and Sage, 204–5
and Chestnut, Doppio Ravioli with, __ , 49–50
Liver alla Fiorentina with Egg Yolk and Bruschetta,
Whole Roasted, with Muscat Grapes, __ ,
**Dumplings.** _See also_ **Gnocchi**
Canederli with Cabbage and Speck,
E
**Egg(s)** ,
Ciareghi, 244–45
and Cockles with Bruschetta,
Pasta Dough,
Soft-Cooked, and Frisée, Smoked Cod Salad with, __ ,
Truffles and,
Yolk, Fried, and Parmigiano, Beef Tartare with, , __
Yolk and Bruschetta, Duck Liver alla Fiorentina with,
**Endive**
Beef Tartare with Fried Egg Yolk and Parmigiano, , __
Pear and Treviso Salad with Taleggio Dressing,
Equipment,
Escarole and Salsa Rossa, Veal Tongue Salad with,
F
Farro Crema, Guinea Hen Tortellini with, 222–23
Fazzoletti with Lamb Breast and Pea Ragù,
**Fennel**
and Artichokes, Wild Branzino with, 247–48, __
Zeppole, Swordfish Pancetta with, __ ,
Fettuccine with Braised Rabbit and Porcini,
**Fig(s)**
and Caramelized Onions, Veal Liver Raviolini with, 142–43
Fresh, and Moscato, Zabaione with,
Strudel, __ , 80–81
Warm, Salad, Cotechino-Stuffed Quail with, , __
Fiordilatte Gelato,
**Fish.** _See also_ **Anchovies**
Baccalà Cannelloni with Cauliflower and Parmigiano, 182–83
Fritto Misto de Pesce,
Grilled Halibut with Mussels and Chanterelles,
Grilled Sardines with Taggiasca Olives and Celery Salad,
Halibut al Cartoccio with Ligurian Olives and Oregano, , __
Oil-Poached Black Bass with Fresh Peas and Baby Tomatoes, , __
Red Bell Pepper Tonnato with Caper Berries, , __
Smoked Cod Salad with Frisée and Soft-Cooked Eggs, __ ,
Stock,
Swordfish Pancetta with Fennel Zeppole, __ ,
Wild Branzino with Fennel and Artichokes, 247–48, __
Zucchini Flowers Stuffed with Ricotta and Tuna, , __
**Flans**
Chocolate,
Pecorino, with Fava Beans and Artichokes, __ ,
Pistachio, , __
Sweet Onion, with Morels,
Flour, _tipo_ 00, about,
Focaccia Stuffed with Taleggio and Pancetta, , __
**Fontina**
Crespelle della Mamma,
Pizzoccheri with Chard, Potato, and Bitto Cheese,
and Squash Lasagnetta,
Francobolli, Robiola and Fava Bean, __ ,
Fresh Prune and Almond Tart,
Fried Huckleberry Ravioli with Mascarpone Crema, 211–12, __
Fried Stuffed Softshell Crabs with Asparagus,
Frittelle, Sweet Ricotta, with Raspberry Preserves,
Fritto Misto de Pesce,
**Fruits.** _See also_ **Berries;** _specific fruits_
Stone, Prosciutto Cotto with, __ , 242–43
G
**Game**. _See also_ **Duck; Rabbit**
Candele Pasta with Wild Boar Bolognese, __ ,
Cotechino-Stuffed Quail with Warm Fig Salad, , __
Pheasant Lasagne, , __
Whole Roasted Pheasant with Barbaresco Sauce,
Wild Boar Braised with Moretti Beer,
Garlic,
**Gelato**
Base, White,
Base, Yellow,
Buttermilk,
Cantucci,
Cantucci Sundae,
Chinotto,
Fiordilatte,
Mascarpone,
Pistachio,
Polenta,
Polenta, Heirloom Apple Upside-Down Cake with,
Poppy Seed,
Poppy Seed, Meyer Lemon Tortas with,
Genovese Ravioli with Capon, 100–101
**Gnocchi**
Polenta, Stuffed with Taleggio Cheese,
Potato, with Castelmagno Fonduta and White Truffle,
Squash, with Amaretti and Mostarda,
**Goat Cheese**
Sorbet,
Sorbet, Rustic Peach Tart with, 191–93, __
and Tarragon Vinaigrette, Pinzimonio with, , __
Gorgonzola Dolce, Polenta Stuffed with (Schisola), __ ,
**Grains.** _See also_ **Polenta**
Chestnut Rice Pudding with Persimmon,
Guinea Hen Tortellini with Farro Crema, 222–23
Pork Shank Osso Buco with Saffron Rice Crema, __ , 55–56
Grapes, Muscat, Whole Roasted Duck with, __ ,
Grappa Torta,
**Greens.** _See also_ **Endive; Radicchio**
Apricot and Chanterelle Salad with Parmesan Crisps, __ ,
Crespelle della Mamma,
Pizzoccheri with Chard, Potato, and Bitto Cheese,
Ribollita Ravioli with Borlotti Beans and Tuscan Kale, 137–38
Smoked Cod Salad with Frisée and Soft-Cooked Eggs, __ ,
Veal Tongue Salad with Escarole and Salsa Rossa,
Grilled Halibut with Mussels and Chanterelles,
Grilled Lamb Rack with Favetta and Roasted Pearl Onions, __ ,
Grilled Sardines with Taggiasca Olives and Celery Salad,
Grilled Stuffed Calamari with Meyer Lemon and Beets, , __
Guinea Hen Tortellini with Farro Crema, 222–23
H
**Halibut**
al Cartoccio with Ligurian Olives and Oregano, , __
Grilled, with Mussels and Chanterelles,
**Hazelnuts**
Blood Orange Crostata with Bitter Chocolate, 231–33, __
Heirloom Apple Upside-Down Cake with Polenta Gelato,
Huckleberry Ravioli, Fried, with Mascarpone Crema, 211–12, __
K
Kale, Tuscan, and Borlotti Beans, Ribollita Ravioli with, 137–38
L
**Lamb**
Breast and Pea Ragù, Fazzoletti with,
Rack, Grilled, with Favetta and Roasted Pearl Onions, __ ,
Lasagna, Pheasant, , __
Lasagnetta, Squash and Fontina,
Latte Fritto, Coconut, with Passion Fruit Curd,
Leeks and Bagna Cauda, Pan-Fried Veal Tongue with, 114–15
**Lemon(s)**
Candied Citrus Peel,
Crema, Warm, Chickpea Cakes with,
Meyer, and Beets, Grilled Stuffed Calamari with, , __
Meyer, Tortas with Poppy Seed Gelato,
Pina's Limoncello,
**Limoncello**
Pina's,
Tiramisu, Claudia's,
**Liver**
Duck, alla Fiorentina with Egg Yolk and Bruschetta,
Veal, Raviolini with Figs and Caramelized Onions, 142–43
M
**Main dishes (meat and game)**
Bistecca alla Fiorentina with Braised Corona Beans,
Ciareghi, 244–45
Cotechino-Stuffed Quail with Warm Fig Salad, , __
Grilled Lamb Rack with Favetta and Roasted Pearl Onions, __ ,
Meat Grigliata with Mixed Bean Salad, __ ,
Milk-Braised Pork Cheeks with Porcini Polenta,
Oven-Roasted Rabbit Porchetta with Peperonata, __ , 167–68
Pork Shank Osso Buco with Saffron Rice Crema, __ , 55–56
Rabbit alla Casalinga,
Sweetbread Saltimbocca with Squash Puree, , __
Veal on a Stone, , __
Veal Shoulder Roasted in Hay with Grilled Peach Salad, , __
Veal Tongue Salad with Escarole and Salsa Rossa,
Whole Braised Beef Shanks with Buckwheat Polenta,
Whole Roasted Duck with Muscat Grapes, __ ,
Whole Roasted Pheasant with Barbaresco Sauce,
Whole Roasted Pig's Head, 30–31, _32–33_
Whole Roasted Pork Shoulder with Pickled Vegetables,
Wild Boar Braised with Moretti Beer,
**Main dishes (pasta)**
Baccalà Cannelloni with Cauliflower and Parmigiano, 182–83
Bucatini with Pig Testina,
Candele Pasta with Wild Boar Bolognese, __ ,
Corzetti with Clams, Tomatoes, and Peperoncino, __ , 97–98
Doppio Ravioli with Duck and Chestnut, __ , 49–50
Duck Casoncelli with Quince, Brown Butter, and Sage, 204–5
Fazzoletti with Lamb Breast and Pea Ragù,
Fettuccine with Braised Rabbit and Porcini,
Genovese Ravioli with Capon, 100–101
Guinea Hen Tortellini with Farro Crema, 222–23
Pappardelle with Veal Ragù and Peppers,
Pheasant Lasagna, , __
Pizzoccheri with Chard, Potato, and Bitto Cheese,
Polenta Caramelle with Raschera Fonduta and Black Truffle, , __
Polenta Gnocchi Stuffed with Taleggio Cheese,
Porcini Ravioli with Taleggio and Burro Fuso,
Pork Neck Cannelloni with Heirloom Tomatoes and Basil, 28–29
Potato Gnocchi with Castelmagno Fonduta and White Truffle,
Rabbit Agnolotti with Pistachio Sauce, 118–19
Radicchio Ravioli with Balsamic Brown Butter,
Ribollita Ravioli with Borlotti Beans and Tuscan Kale, 137–38
Robiola and Fava Bean Francobolli, __ ,
Scarpinocc, , __
Spaghetti al Nero di Seppia with Shrimp,
Squash and Fontina Lasagnetta,
Squash Gnocchi with Amaretti and Mostarda,
Tagliolini with Ragù di Seppia, , __
Tomato Tortellini with Burrata and Basil, , __
Veal Liver Raviolini with Figs and Caramelized Onions, 142–43
Wild Hare Pappardelle, 226–27
**Main dishes (seafood)**
Fried Stuffed Softshell Crabs with Asparagus,
Fritto Misto de Pesce,
Grilled Halibut with Mussels and Chanterelles,
Grilled Stuffed Calamari with Meyer Lemon and Beets, , __
Halibut al Cartoccio with Ligurian Olives and Oregano, , __
Oil-Poached Black Bass with Fresh Peas and Baby Tomatoes, , __
Snail Spiedini with Celery Root Puree and Truffle Butter,
Swordfish Pancetta with Fennel Zeppole, __ ,
Wild Branzino with Fennel and Artichokes, 247–48, __
**Mascarpone**
Cake, Strawberry Zuppa Inglese with,
Claudia's Limoncello Tiramisu,
Crema, Fried Huckleberry Ravioli with, 211–12, __
Gelato,
Mayonnaise, Asparagus,
Meat. _See_ Beef; Game; Lamb; Pork; Veal
Meringata, Cherry, with Cherry Shortcake, __ , 148–49
Meyer Lemon Tortas with Poppy Seed Gelato,
Milk-Braised Pork Cheeks with Porcini Polenta,
Mirepoix, about,
Moscato and Fresh Figs, Zabaione with,
**Mushrooms**
Apricot and Chanterelle Salad with Parmesan Crisps, __ ,
Crespelle della Mamma,
Fettuccine with Braised Rabbit and Porcini,
Grilled Halibut with Mussels and Chanterelles,
Porcini Béchamel,
Porcini Polenta,
Porcini Ravioli with Taleggio and Burro Fuso,
Porcini Zuppa with Bra Cheese Fonduta, __ ,
Roasted, Warm Beef Carpaccio with, , __
Sweet Onion Flan with Morels,
Mussels and Chanterelles, Grilled Halibut with,
N
**Nut(s).** _See also_ **Almond(s); Chestnut(s); Pistachio(s)**
Blood Orange Crostata with Bitter Chocolate, 231–33, __
Pesto,
Walnut Pesto,
O
Oil, blended, about,
Oil-Poached Black Bass with Fresh Peas and Baby Tomatoes, , __
Olive Oil Panna Cotta with Summer Berries,
**Olives**
Ligurian, and Oregano, Halibut al Cartoccio with, , __
Taggiasca, and Celery Salad, Grilled Sardines with,
**Onion(s)**
Caramelized, and Figs, Veal Liver Raviolini with, 142–43
Pearl, Roasted, and Favetta, Grilled Lamb Rack with, __ ,
Red, Celery, and Olive Oil, Carne Salata with,
Sweet, Flan with Morels,
**Orange(s)**
Blood, Crostata with Bitter Chocolate, 231–33, __
Candied Citrus Peel,
Citrus Rum Babas alla Crema,
Oregano and Ligurian Olives, Halibut al Cartoccio with, , __
Osso Buco, Pork Shank, with Saffron Rice Crema, __ , 55–56
Oven-Roasted Rabbit Porchetta with Peperonata, __ , 167–68
P
Pancetta, Swordfish, with Fennel Zeppole, __ ,
Pancetta and Taleggio, Focaccia Stuffed with, , __
Pan-Fried Veal Tongue with Bagna Cauda and Leeks, 114–15
Panna Cotta, Olive Oil, with Summer Berries,
**Pappardelle**
with Veal Ragù and Peppers,
Wild Hare, 226–27
**Parmesan**
Baccalà Cannelloni with Cauliflower and Parmigiano, 182–83
Beef Tartare with Fried Egg Yolk and Parmigiano, , __
Crespelle della Mamma,
Crisps, Apricot and Chanterelle Salad with, __ ,
Crispy Sweetbreads with Parmigiano Fonduta and Grilled Treviso,
Pheasant Lasagna, , __
Rabbit Agnolotti with Pistachio Sauce, 118–19
Scarpinocc, , __
Passion Fruit Curd, Coconut Latte Fritto with,
**Pasta**
Baccalà Cannelloni with Cauliflower and Parmigiano, 182–83
Bucatini with Pig Testina,
Candele, with Wild Boar Bolognese, __ ,
Corn Tortelli with Ricotta Salata,
Corzetti with Clams, Tomatoes, and Peperoncino, __ , 97–98
Doppio Ravioli with Duck and Chestnut, __ , 49–50
Duck Casoncelli with Quince, Brown Butter, and Sage, 204–5
Fazzoletti with Lamb Breast and Pea Ragù,
Fettuccine with Braised Rabbit and Porcini,
Fried Huckleberry Ravioli with Mascarpone Crema, 211–12, __
Genovese Ravioli with Capon, 100–101
Guinea Hen Tortellini with Farro Crema, 222–23
Pappardelle with Veal Ragù and Peppers,
Pheasant Lasagna, , __
Pizzoccheri with Chard, Potato, and Bitto Cheese,
Polenta Caramelle with Raschera Fonduta and Black Truffle, , __
Polenta Gnocchi Stuffed with Taleggio Cheese,
Porcini Ravioli with Taleggio and Burro Fuso,
Pork Neck Cannelloni with Heirloom Tomatoes and Basil, 28–29
Potato Gnocchi with Castelmagno Fonduta and White Truffle,
Rabbit Agnolotti with Pistachio Sauce, 118–19
Radicchio Ravioli with Balsamic Brown Butter,
Ribollita Ravioli with Borlotti Beans and Tuscan Kale, 137–38
Robiola and Fava Bean Francobolli, __ ,
Scarpinocc, , __
Spaghetti al Nero di Seppia with Shrimp,
Squash and Fontina Lasagnetta,
Squash Gnocchi with Amaretti and Mostarda,
Tagliolini with Ragù di Seppia, , __
Tomato Tortellini with Burrata and Basil, , __
Veal Liver Raviolini with Figs and Caramelized Onions, 142–43
Wild Hare Pappardelle, 226–27
**Pasta Dough**
Buckwheat,
Candele,
Chestnut,
Egg,
Semolina,
Spaghetti,
Squid Ink,
Tagliolini,
Pastry Cream,
**Peach(es)**
Grilled, Salad, Veal Shoulder Roasted in Hay with, , __
Prosciutto Cotto with Stone Fruits, __ , 242–43
Sorbet,
Tart, Rustic, with Goat Cheese Sorbet, 191–93, __
Pear and Treviso Salad with Taleggio Dressing,
**Pea(s)**
Fresh, and Baby Tomatoes, Oil-Poached Black Bass with, , __
Ragù and Lamb Breast, Fazzoletti with,
Pecorino Flan with Fava Beans and Artichokes, __ ,
**Pepper(s)**
Corzetti with Clams, Tomatoes, and Peperoncino, __ , 97–98
Oven-Roasted Rabbit Porchetta with Peperonata, __ , 167–68
Red Bell, Tonnato with Caper Berries, , __
roasting,
Salsa Rossa,
and Veal Ragù, Pappardelle with,
Persimmon, Chestnut Rice Pudding with,
**Pesto**
Nut,
Walnut,
**Pheasant**
Lasagna, , __
Whole Roasted, with Barbaresco Sauce,
Pickled Vegetables, Whole Roasted Pork Shoulder with,
Pig's Head, Whole Roasted, 30–31, _32–33_
Pig Testina, Bucatini with,
Pina's Limoncello,
Pineapple, Caramelized, Sauce, Vanilla Crespelle with,
Pinzimonio with Tarragon Vinaigrette and Goat Cheese, , __
**Pistachio(s)**
Chocolate Flan,
Flan, , __
Gelato,
Sauce, Rabbit Agnolotti with, 118–19
Pizzoccheri with Chard, Potato, and Bitto Cheese,
**Plums**
Fresh Prune and Almond Tart,
Prosciutto Cotto with Stone Fruits, __ , 242–43
**Polenta** ,
Buckwheat,
Buckwheat, Whole Braised Beef Shanks with,
Caramelle with Raschera Fonduta and Black Truffle, , __
Cherry Shortcake with Cherry Meringata, __ , 148–49
Ciareghi, 244–45
Gelato,
Gelato, Heirloom Apple Upside-Down Cake with,
Gnocchi Stuffed with Taleggio Cheese,
Porcini,
Porcini, Milk-Braised Pork Cheeks with,
Rabbit alla Casalinga,
Stuffed with Gorgonzola Dolce (Schisola), __ ,
Wild Boar Braised with Moretti Beer,
**Poppy Seed**
Gelato,
Gelato, Meyer Lemon Tortas with,
**Porcini**
Béchamel,
Polenta,
Ravioli with Taleggio and Burro Fuso,
Zuppa with Bra Cheese Fonduta, __ ,
**Pork.** _See also_ **Prosciutto**
Bucatini with Pig Testina,
Canederli with Cabbage and Speck,
Cheeks, Milk-Braised, with Porcini Polenta,
Ciareghi, 244–45
Cotechino-Stuffed Quail with Warm Fig Salad, , __
Focaccia Stuffed with Taleggio and Pancetta, , __
Meat Grigliata with Mixed Bean Salad, __ ,
Neck Cannelloni with Heirloom Tomatoes and Basil, 28–29
Shank Osso Buco with Saffron Rice Crema, __ , 55–56
Shoulder, Whole Roasted, with Pickled Vegetables,
Whole Roasted Pig's Head, 30–31, _32–33_
**Potato(es)**
Chard, and Bitto Cheese, Pizzoccheri with,
Gnocchi with Castelmagno Fonduta and White Truffle,
Halibut al Cartoccio with Ligurian Olives and Oregano, , __
**Poultry.** _See_ **Capon; Guinea Hen**
**Prosciutto**
Cotto with Stone Fruits, __ , 242–43
Crespelle della Mamma,
Pudding, Chestnut Rice, with Persimmon,
Q
Quail, Cotechino-Stuffed, with Warm Fig Salad, , __
**Quince**
Brown Butter, and Sage, Duck Casoncelli with, 204–5
Tortini, Warm, with Cranberry and Orange, 126–27
R
**Rabbit**
Agnolotti with Pistachio Sauce, 118–19
alla Casalinga,
Braised, and Porcini, Fettuccine with,
Porchetta, Oven-Roasted, with Peperonata, __ , 167–68
Wild Hare Pappardelle, 226–27
**Radicchio**
Crispy Sweetbreads with Parmigiano Fonduta and Grilled Treviso,
Pear and Treviso Salad with Taleggio Dressing,
Ravioli with Balsamic Brown Butter,
Raschera Fonduta and Black Truffle, Polenta Caramelle with, , __
**Raspberry**
Preserves, Sweet Ricotta Frittelle with,
Sorbet,
**Ravioli**
Doppio, with Duck and Chestnut, __ , 49–50
Genovese, with Capon, 100–101
Huckleberry, Fried, with Mascarpone Crema, 211–12, __
Porcini, with Taleggio and Burro Fuso,
Radicchio, with Balsamic Brown Butter,
Ribollita, with Borlotti Beans and Tuscan Kale, 137–38
Raviolini, Veal Liver, with Figs and Caramelized Onions, 142–43
Red Bell Pepper Tonnato with Caper Berries, , __
Rhubarb Tartellette with Italian Meringue, 59–60, __
Ribollita Ravioli with Borlotti Beans and Tuscan Kale, 137–38
**Rice**
Crema, Saffron, Pork Shank Osso Buco with, __ , 55–56
Pudding, Chestnut, with Persimmon,
**Ricotta**
Crespelle della Mamma,
Grilled Stuffed Calamari with Meyer Lemon and Beets, , __
Pork Neck Cannelloni with Heirloom Tomatoes and Basil, 28–29
Radicchio Ravioli with Balsamic Brown Butter,
Sweet, Frittelle with Raspberry Preserves,
and Tuna, Zucchini Flowers Stuffed with, , __
Ricotta Salata, Corn Tortelli with,
Robiola and Fava Bean Francobolli, __ ,
**Rum**
Babas, Citrus, alla Crema,
Bonèt,
Vanilla Crespelle with Caramelized Pineapple Sauce,
Rustic Peach Tart with Goat Cheese Sorbet, 191–93, __
S
Sachet, about,
Saffron Rice Crema, Pork Shank Osso Buco with, __ , 55–56
**Salads**
Apricot and Chanterelle, with Parmesan Crisps, __ ,
Carne Salata with Red Onion, Celery, and Olive Oil,
Pear and Treviso, with Taleggio Dressing,
Smoked Cod, with Frisée and Soft-Cooked Eggs, __ ,
Taggiasca Olive and Celery, Grilled Sardines with,
Veal Tongue, with Escarole and Salsa Rossa,
Salt,
Sardines, Grilled, with Taggiasca Olives and Celery Salad,
**Sauces**
Béchamel,
Chocolate,
Crème Anglaise,
Pastry Cream,
Porcini Béchamel,
Truffle Béchamel,
Zabaione with Moscato and Fresh Figs,
**Sausage(s)**
Ciareghi, 244–45
Cotechino-Stuffed Quail with Warm Fig Salad, , __
Meat Grigliata with Mixed Bean Salad, __ ,
Scarpinocc, , __
Schisola (Polenta Stuffed with Gorgonzola Dolce), __ ,
Semifreddo, Torrone, with Candied Chestnuts and Chocolate Sauce, 172–73
Semolina Pasta Dough,
**Shellfish**
Cockles and Eggs with Bruschetta,
Corzetti with Clams, Tomatoes, and Peperoncino, __ , 97–98
Fried Stuffed Softshell Crabs with Asparagus,
Fritto Misto de Pesce,
Grilled Halibut with Mussels and Chanterelles,
Grilled Stuffed Calamari with Meyer Lemon and Beets, , __
Snail Spiedini with Celery Root Puree and Truffle Butter,
Spaghetti al Nero di Seppia with Shrimp,
Squid Ink Pasta,
Stock,
Tagliolini with Ragù di Seppia, , __
Shortcake, Cherry, with Cherry Meringata, __ , 148–49
**Shrimp**
Fritto Misto de Pesce,
Spaghetti al Nero di Seppia with,
Smoked Cod Salad with Frisée and Soft-Cooked Eggs, __ ,
Snail Spiedini with Celery Root Puree and Truffle Butter,
**Sorbets**
Goat Cheese,
Goat Cheese, Rustic Peach Tart with, 191–93, __
Peach,
Raspberry,
**Soups.** _See_ **Zuppa**
**Spaghetti**
al Nero di Seppia with Shrimp,
Pasta Dough,
Speck and Cabbage, Canederli with,
**Spinach**
Crespelle della Mamma,
**Squash**
and Fontina Lasagnetta,
Gnocchi with Amaretti and Mostarda,
Puree, Sweetbread Saltimbocca with, , __
Zucchini Flowers Stuffed with Ricotta and Tuna, , __
**Squid**
Fritto Misto de Pesce,
Grilled Stuffed Calamari with Meyer Lemon and Beets, , __
Ink Pasta,
Tagliolini with Ragù di Seppia, , __
**Starters**
Apricot and Chanterelle Salad with Parmesan Crisps, __ ,
Beef Tartare with Fried Egg Yolk and Parmigiano, , __
Canederli with Cabbage and Speck,
Carne Salata with Red Onion, Celery, and Olive Oil,
Cockles and Eggs with Bruschetta,
Corn Tortelli with Ricotta Salata,
Crespelle della Mamma,
Crispy Sweetbreads with Parmigiano Fonduta and Grilled Treviso,
Duck Liver alla Fiorentina with Egg Yolk and Bruschetta,
Focaccia Stuffed with Taleggio and Pancetta, , __
Grilled Sardines with Taggiasca Olives and Celery Salad,
Pan-Fried Veal Tongue with Bagna Cauda and Leeks, 114–15
Pear and Treviso Salad with Taleggio Dressing,
Pecorino Flan with Fava Beans and Artichokes, __ ,
Pinzimonio with Tarragon Vinaigrette and Goat Cheese, , __
Porcini Zuppa with Bra Cheese Fonduta, __ ,
Prosciutto Cotto with Stone Fruits, __ , 242–43
Red Bell Pepper Tonnato with Caper Berries, , __
Schisola (Polenta Stuffed with Gorgonzola Dolce), __ ,
Smoked Cod Salad with Frisée and Soft-Cooked Eggs, __ ,
Sweet Onion Flan with Morels,
Truffles and Eggs,
Veal Tartare with Shaved Artichokes and White Truffle,
Veal Tongue Salad with Escarole and Salsa Rossa,
Warm Beef Carpaccio with Roasted Mushrooms, , __
Zucchini Flowers Stuffed with Ricotta and Tuna, , __
**Stocks**
Fish,
Shellfish,
Veal,
Strawberry Zuppa Inglese with Mascarpone Cake,
Strudel, Fig, __ , 80–81
Sugar Syrup,
**Sweetbread(s)**
Crispy, with Parmigiano Fonduta and Grilled Treviso,
Saltimbocca with Squash Puree, , __
Sweet Onion Flan with Morels,
Sweet Ricotta Frittelle with Raspberry Preserves,
Swordfish Pancetta with Fennel Zeppole, __ ,
Syrup, Sugar,
T
**Tagliolini**
Pasta Dough,
with Ragù di Seppia, , __
**Taleggio**
and Burro Fuso, Porcini Ravioli with,
Cheese, Polenta Gnocchi Stuffed with,
Dressing, Pear and Treviso Salad with,
and Pancetta, Focaccia Stuffed with, , __
Tarragon Vinaigrette and Goat Cheese, Pinzimonio with, , __
**Tarts**
Blood Orange Crostata with Bitter Chocolate, 231–33, __
Fresh Prune and Almond,
Rhubarb Tartellette with Italian Meringue, 59–60, __
Rustic Peach, with Goat Cheese Sorbet, 191–93, __
Warm Quince Tortini with Cranberry and Orange, 126–27
3-2-1 Brine,
Tiramisu, Claudia's Limoncello,
**Tomato(es)**
Baby, and Fresh Peas, Oil-Poached Black Bass with, , __
Clams, and Peperoncino, Corzetti with, __ , 97–98
Heirloom, and Basil, Pork Neck Cannelloni with, 28–29
San Marzano, about,
Tortellini with Burrata and Basil, , __
Veal on a Stone, , __
Veal Tongue Salad with Escarole and Salsa Rossa,
**Tongue, Veal**
Pan-Fried, with Bagna Cauda and Leeks, 114–15
Salad with Escarole and Salsa Rossa,
Torrone Semifreddo with Candied Chestnuts and Chocolate Sauce, 172–73
Tortas, Meyer Lemon, with Poppy Seed Gelato,
Tortelli, Corn, with Ricotta Salata,
**Tortellini**
Guinea Hen, with Farro Crema, 222–23
Tomato, with Burrata and Basil, , __
Tortini, Warm Quince, with Cranberry and Orange, 126–27
**Truffle(s)**
Béchamel,
Black, and Raschera Fonduta, Polenta Caramelle with, , __
Butter and Celery Root Puree, Snail Spiedini with,
Corn Tortelli with Ricotta Salata,
and Eggs,
White, and Castelmagno Fonduta, Potato Gnocchi with,
White, and Shaved Artichokes, Veal Tartare with,
**Tuna**
Red Bell Pepper Tonnato with Caper Berries, , __
and Ricotta, Zucchini Flowers Stuffed with, , __
V
Vanilla Crespelle with Caramelized Pineapple Sauce,
**Veal**
Crispy Sweetbreads with Parmigiano Fonduta and Grilled Treviso,
Genovese Ravioli with Capon, 100–101
Liver Raviolini with Figs and Caramelized Onions, 142–43
Ragù and Peppers, Pappardelle with,
Shoulder Roasted in Hay with Grilled Peach Salad, , __
Stock,
on a Stone, , __
Sweetbread Saltimbocca with Squash Puree, , __
Tartare with Shaved Artichokes and White Truffle,
Tongue, Pan-Fried, with Bagna Cauda and Leeks, 114–15
Tongue Salad with Escarole and Salsa Rossa,
**Vegetables.** _See also specific vegetables_
Pickled, Whole Roasted Pork Shoulder with,
Pinzimonio with Tarragon Vinaigrette and Goat Cheese, , __
sweating,
Vinaigrettes, preparing,
**Vin Santo**
Cantucci Sundae,
Crema, Bomboloni with,
**Vodka**
Pina's Limoncello,
W
Walnut Pesto,
Warm Beef Carpaccio with Roasted Mushrooms, , __
Warm Quince Tortini with Cranberry and Orange, 126–27
White Gelato Base,
Whole Braised Beef Shanks with Buckwheat Polenta,
Whole Roasted Duck with Muscat Grapes, __ ,
Whole Roasted Pheasant with Barbaresco Sauce,
Whole Roasted Pig's Head, 30–31, _32–33_
Whole Roasted Pork Shoulder with Pickled Vegetables,
**Wild Boar**
Bolognese, Candele Pasta with, __ ,
Braised with Moretti Beer,
Wild Branzino with Fennel and Artichokes, 247–48, __
Wild Hare Pappardelle, 226–27
Y
Yellow Gelato Base,
Z
Zabaione with Moscato and Fresh Figs,
Zeppole, Fennel, Swordfish Pancetta with, __ ,
Zucchini Flowers Stuffed with Ricotta and Tuna, , __
**Zuppa**
Inglese, Strawberry, with Mascarpone Cake,
Porcini, with Bra Cheese Fonduta, __ ,
{"url":"https:\/\/www.investopedia.com\/terms\/c\/cumulativereturn.asp","text":"\u2022 General\n\u2022 Investing\/Trading\n\u2022 News\n\u2022 Popular Stocks\n\u2022 Personal Finance\n\u2022 Reviews & Ratings\n\u2022 Your Practice\n\u2022 Wealth Management\n\u2022 Popular Courses\n\u2022 Courses by Topic\n\n# Cumulative Return\n\n## What Is Cumulative Return?\n\nA cumulative return on an investment is the aggregate amount that the investment has gained or lost over time, independent of the amount of time involved. The cumulative return is expressed as a percentage, and it is the raw mathematical return of the following calculation:\n\n\ufeff $\\frac{(Current\\ Price \\ of \\ Security) - (Original \\ Price \\ of \\ Security)}{Original \\ Price \\ of \\ Security}$\ufeff\n\n### Key Takeaways\n\n\u2022 The cumulative return is the total change in the investment price over a set time\u2014an aggregate return, not an annualized one.\n\u2022 Reinvesting the dividends or capital gains of an investment impacts its cumulative return.\n\u2022 Cumulative return figures for ETFs and mutual funds typically omit the impact of annual expense ratios and other fees on the fund's performance.\n\u2022 Taxes can also substantially reduce the cumulative returns for most investments unless they are held in tax-advantaged accounts.\n\n## Understanding Cumulative Return\n\nThe cumulative return of an asset that does not have interest or dividends is easily calculated by figuring out the amount of profit or loss over the original price. That can work well with assets like precious metals and growth stocks that do not issue dividends. In these cases, one can use the raw closing price to calculate the cumulative return.\n\nOn the other hand, the adjusted closing price provides a simple way to calculate the cumulative return of all assets. That includes assets like interest-bearing bonds and dividend-paying stocks. The adjusted closing price incorporates the impact of interest, dividends, stock splits, and other changes on the asset price. So, it is possible to obtain the cumulative return by using the first adjusted closing price as the original price of the security.\n\nThe cumulative return usually grows over time, so it tends to make older stocks and funds look impressive. It follows that the cumulative return is not a good way to compare investments unless they launched at the same time.\n\n## Special Considerations\n\n### Mutual Funds and ETFs\n\nA common way to present mutual fund or exchange traded fund (ETF) performance over time is to show the cumulative return with a visual, such as a mountain graph. Investors should check to confirm whether interest or dividends are included in the cumulative return. The marketing materials or information accompanying an illustration typically provide this information. Such payouts might be counted as reinvested or simply added as raw dollars when calculating the cumulative return.\n\nOne notable difference between mutual funds and stocks is mutual funds sometimes distribute capital gains to the fund holders. This distribution usually comes at the end of a calendar year. It consists of the profits the portfolio managers made when closing out holdings. Mutual fund owners can reinvest those capital gains, which can make calculating the cumulative return more difficult.\n\n### Advertisements\n\nMany advertisements use the cumulative return to make investments look impressive. 
### Advertisements

Many advertisements use the cumulative return to make investments look impressive. While these results are often basically accurate, they can be exaggerated or distorted to encourage greed or fear. For example, someone might cite Amazon's cumulative return of over 100,000% between its initial public offering (IPO) in 1997 and 2020. However, many other technology-related companies had IPOs in the late 1990s, and most of them never came close to Amazon's returns. Furthermore, investors would have had to continue holding the stock through a bear market that reduced its value by over 90% during 2000 and 2001.

Precious metals are another area where investors need to look carefully at advertisements using total returns. Crucially, ads for bullion are not governed by the same regulations as mutual funds and ETFs. Furthermore, these cumulative returns typically do not subtract storage costs or insurance fees, which are services that many investors demand. While precious metals ETF fees are generally lower, they also need to be deducted from the returns for the commodity to obtain the cumulative return that investors actually received.

### Taxes

Taxes can also substantially reduce the cumulative returns for most investments unless they are held in tax-advantaged accounts. Taxes are a particular issue for bonds because of their relatively low returns and the unfavorable tax treatment of interest payments. However, municipal bonds are often tax-exempt, so cumulative return figures require less adjustment.

Long-term stock investments enjoy the advantage of paying a relatively low capital gains tax, which is also usually easy to subtract from cumulative returns. The tax treatment of dividends is a much more complicated subject. However, it can also influence cumulative returns when funds reinvest dividends.

### Compound Return

Along with the cumulative return, an ETF or other fund usually indicates its compound return. Unlike the cumulative return, the compound return figure is annualized. Cumulative returns may seem more impressive than the annualized rate of return, which is usually smaller. However, they typically omit the effect of the annual expenses on the returns an investor will receive. Annual charges an investor can expect include fund expense ratios, interest rates on loans, and management fees. When worked out on a cumulative basis, these fees can substantially eat into cumulative return numbers.

## Example of Cumulative Return

For example, suppose investing $10,000 in XYZ Widgets Company's stock for a 10-year period results in $48,000.
With no taxes and no dividends reinvested, that is a cumulative return of 380%.
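Checking the arithmetic of this example, and contrasting it with the annualized (compound) figure discussed above using the standard geometric-mean formula:

```python
original, final, years = 10_000, 48_000, 10

cumulative = (final - original) / original           # 3.8, i.e., 380%
annualized = (final / original) ** (1 / years) - 1   # about 0.170 per year

print(f"Cumulative: {cumulative:.0%}, annualized: {annualized:.1%}")
# Cumulative: 380%, annualized: 17.0%
```

The gap between 380% and roughly 17% per year is exactly the distortion the article warns about when cumulative figures are quoted without a time frame.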
Myotis zenatius, commonly called the Zenati myotis, is a species of bat in the family Vespertilionidae.
Distribution
Bibliography
Notes and references
{"url":"http:\/\/mathhelpforum.com\/calculus\/11637-indefinite-integral.html","text":"1. ## indefinite integral\n\nint {pi\/4}^{pi\/2} \\ csc^{2}x dx\n\nim so lost when trying to do this problem. i know that csc is 1\/sin.\ni've tried using the half-angle identity but it didnt work.\ncan someone explain.\n\n2. Originally Posted by viet\nint {pi\/4}^{pi\/2} \\ csc^{2}x dx\n\nim so lost when trying to do this problem. i know that csc is 1\/sin.\ni've tried using the half-angle identity but it didnt work.\ncan someone explain.\nYou are making it too complicated.\n\nFirst, INT sec^2 x dx = tan x+C\nThat is a well known integral.\n\nSimilarly, INT csc^2 x dx = -cot x+C\n(Just note the negative, because like with sine and cosine the signs change when you look at them).\n\n3. so now do i evaluate pi\/4 to pi\/2 for -cot(x)?\n\n4. Originally Posted by viet\nso now do i evaluate pi\/4 to pi\/2 for -cot(x)?\nYes. Integral evaluted at upper limit minus the lower limit.","date":"2016-10-01 07:15:14","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9785252809524536, \"perplexity\": 2628.1335862782407}, \"config\": {\"markdown_headings\": true, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-40\/segments\/1474738662541.24\/warc\/CC-MAIN-20160924173742-00112-ip-10-143-35-109.ec2.internal.warc.gz\"}"} | null | null |
\section{Introduction}
One of the most exciting areas of research in recent years is the
realization of pure one-dimensional (1D) systems, as they open the
possibility of studying, not only theoretically but also experimentally,
the characteristic features triggered by low dimensionality, involving
electronic transport, correlation, and magnetic properties. Atomic-size
contacts and nanowires can nowadays be obtained with the scanning
tunneling microscope or in mechanically controllable break junction
(MCBJ) experiments~\cite{Ruitenbeek01}. It is with this last technique
that it has been possible to create monatomic chains of Ir~\cite{Ryu06},
Pt~\cite{Ruitenbeek01} and Au~\cite{Ohnishi98,Ruitenbeek98,Rubio01}.
Recently, it has also been demonstrated that impurity-assisted chain
growth leads to a strongly enhanced tendency towards chain formation
when compared to pure noble-metal (NM) chains. In particular, $p$-like
impurities, such as N or O, lead to high chain-elongation probabilities
due to the strong directional
bonds~\cite{Bahn02,Cakir11,Novaes06,Thijssen06,Alex08,Dinapoli12}.
Given these outstanding experimental achievements, a new generation of
nanodevices can in principle be conceived and designed as tools for
detecting nanomagnetism through conductance measurements across atomic
metal contacts. The presence of magnetism can be sensed indirectly by
detecting zero-bias anomalies in such conductance measurements; these
anomalies usually originate from Kondo screening of the spin when a
magnetic impurity bridges the contact.
One of the prototype magnetic impurities in idealized models and
computational simulations is the Co atom, as it develops a wide range
of spin moments, depending on the metal host and the dimensionality,
usually with high enough Kondo temperatures.
Co impurities on Au, Ag and Cu
surfaces have been extensively studied for several years and are still
a focus of research.
Recently, H. Pr\"user and co-workers analyzed the Kondo resonance at Co
atoms buried below a Cu(100) surface by mapping
the local density of states of the surface itinerant electrons~\cite{Pruser12}
and B. Surer \textit{et al.} developed a multiorbital Kondo model to
investigate the physics of Co atoms in Cu
hosts~\cite{Surer12}. On the other hand, Co impurities in noble-metal
chains are interesting in order to study how the reduced dimensionality
could lead to different and novel
transport and magnetic properties and in this way eventually form part of a
nanomagnetism detecting nanodevice.
In a previous work, we found that
non-Fermi-liquid behavior qualitatively similar to that of
the two-channel Kondo (2CK) model is expected for a Co impurity
embedded in a Au chain in contact with Au leads that break the symmetry
of the chain strongly enough, reducing it to a four-fold one~\cite{Dinapoli13}.
The proposed setup might be realized in break-junction experiments, if the
symmetry of the contacts can be controlled, or in STM experiments on (100)
noble-metal surfaces.
In this contribution, we analyze whether the same physical trends, that
is, 2CK physics and non-Fermi-liquid behavior,
can be obtained when the Co
impurity is introduced in Cu and Ag chains instead of Au ones. As mentioned
previously, these noble-metal chains can be
stabilized by introducing light non-magnetic impurities in
the experimental atmosphere~\cite{Dinapoli12}. With this purpose, we obtain and
analyze the electronic structure of these metallic systems. In the case of
Cu chains, we also check whether the presence of
stabilizing O atoms destroys the possible 2CK effect.
The paper is organized as follows. In section \ref{kondo} we briefly recall
the ground-state properties, related to the Kondo effect, of a Co impurity
in a Au host. In section \ref{results} we provide
the details of our first-principles DFT calculations and
analyze the band structure of the different noble-metal hosts and the
consequences of these structures for the systems with an embedded Co impurity.
Finally, a summary and conclusions are given in section \ref{conclusions}.
\section{Co impurity within a Au chain: Kondo effect}
\label{kondo}
A magnetic Co impurity embedded in a Au chain,
under a four-fold symmetry-breaking field $B$,
exhibits different
transport properties depending on the specific geometry of the
leads~\cite{Dinapoli13}. The Co atom embedded in a Au chain has a total spin
$S=3/2$ in a $3d^7$ configuration. One $d$ hole is shared by the half-filled
$3d_{xy}$ and $3d_{x^2-y^2}$ orbitals ($\Delta_4$ symmetry),
while the other two are in the empty and degenerate $3d_{xz}$ and $3d_{yz}$
ones ($\Delta_3$ symmetry). On the other hand, the $5d_{xz}$ and
$5d_{yz}$ states of the pure Au chains are also degenerate and close to the
Fermi level, and will be pushed up by the presence of O impurities.
Therefore, these represent two identical conduction bands
that can screen the $S=1/2$ spin of the
$3d_{xz}$ and $3d_{yz}$ holes of Co. In contrast, there is no Au density
of states with $\Delta_4$ symmetry at
the Fermi level that could hybridize with
states of the same symmetry at the Co site,
leading to a frozen charge in the $3d_{x^2-y^2}$ and $3d_{xy}$ levels.
In the presence of $B$ and in the absence of spin-orbit coupling,
the microscopic model that describes the system consists of a spin 3/2
hybridized with two triplets of the $d^8$ configuration through two
conduction channels (the $5d_{xz}$ and $5d_{yz}$ of Au)~\cite{Dinapoli13}.
The corresponding physics is similar to that of the underscreened
Kondo or Anderson models~\cite{Aligia86,Mehta05}.
However, the spin-orbit coupling induces a splitting $D$ between the states
with projection $M=\pm 3/2$ and $M=\pm 1/2$. This splitting has been calculated
by solving exactly the $3d^7$ configuration of Co, including all correlations
of the $d$ shell~\cite{Aligia10}.
For negative $D$, with $|D|$ larger than the characteristic Kondo temperature,
the doublet $M=\pm 3/2$ is
lower in energy than the $M=\pm 1/2$ one and, therefore, the two conduction
bands cannot change the local $M=-3/2$ into $M=+3/2$. Thus, the
spin-flip process of the Kondo effect is inhibited.
On the contrary, for positive $D$, which is the case for large enough
$B$~\cite{Dinapoli13}, the $M=\pm 1/2$ doublet lies below the
$M=\pm 3/2$ one and is overcompensated by the two $\Delta_3$ conduction channels.
This overcompensation is usually the appropriate scenario for observing the
well-known 2CK physics
experimentally~\cite{Andrei84,Zarand06,Mitchell12,Oreg03}, and similar
physics is in fact confirmed by detailed calculations using the numerical
renormalization group for the model~\cite{Dinapoli13}. In particular, the
low-temperature entropy is $\ln(2)/2$ and
the conductance through the device displays a $T^{1/2}$ behavior.
However, the model presents some differences with the 2CK one; for example,
the conductance at zero temperature is lower than expected due to
some degree of intermediate valence~\cite{Dinapoli13}.
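To make the expected transport signature explicit, near the overscreened
fixed point the low-temperature conductance is expected to behave,
schematically, as
\begin{equation}
G(T)\simeq G(0)+c\,T^{1/2}, \qquad T\ll T_K,
\end{equation}
where $T_K$ is the characteristic Kondo temperature and the amplitude $c$,
including its sign, is non-universal and depends on the details of the
device; only the $T^{1/2}$ power is characteristic of the 2CK fixed point.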
While the ground-state properties, underscreened or overscreened Kondo
physics, are determined by the symmetry of the leads and the strength of
the four-fold crystal field $B$,
the crucial point for
observing the Kondo effect is the presence of $5d_{xz}$ and $5d_{yz}$ density
of states of Au at the Fermi level. It is not
obvious that the same trends can be obtained when chains of other materials
are used.
\section{Results}
\label{results}
We perform \textit{ab initio} calculations based on spin-polarized density
functional theory (SP-DFT) using the full-potential linearized
augmented plane-wave method, as implemented in the WIEN2K code~\cite{Wien}.
The generalized gradient approximation for
the exchange and correlation potential in the PBE parametrization and the
augmented plane waves plus local orbitals basis are used. The cutoff
parameter, which determines the number of plane waves in the interstitial
region, is taken as $R_{mt}K_{max}=7$, where $K_{max}$ is the value of the
largest reciprocal lattice vector used in the plane-wave expansion
and $R_{mt}$ is the smallest muffin-tin radius used. The number of
$\mathbf{k}$ points in the Brillouin zone is sufficient in each case
to obtain the desired energy and charge precisions, namely $10^{-4}$ Ry and
$10^{-4}e$, respectively.
The muffin-tin radii were set to $2.09$ bohr for the Co, Cu, Ag and Au atoms
and to $1.52$ bohr for the O impurities.
We set the coordinate system such that the chain axis is aligned along
the $z$ axis, as schematically shown in Fig.~\ref{setup}.
For the noble-metal chains we
consider one-atom unit cells (Fig.~\ref{setup}(a)) in a hexagonal lattice
with $a=b=10$ bohr
and $c=d_{NM-NM}^{eq}$, where $d_{NM-NM}^{eq}$ is the chain's equilibrium
lattice constant in each case ($d_{NM-NM}^{eq}=4.40$, $5.09$ and $4.93$ bohr
for Cu, Ag and Au, respectively).
For the O-doped NM chains we consider, for simplicity, that the presence of
O atoms results in \ldots NM-O-NM-O\ldots linear chain structures, in order
to make the doping effect on the noble-metal $d$ bands clear. The
noble-metal to O distances were relaxed along the $z$ direction in all
cases (Fig.~\ref{setup}(b)).
Finally, for the NM chains with the embedded Co impurity we consider an
eleven-atom unit cell (Fig.~\ref{setup}(c)), with the noble-metal to
noble-metal distances, $d_{NM-NM}$,
set equal to the chain's equilibrium lattice constant
($d_{NM-NM}=d_{NM-NM}^{eq}$), while the
noble-metal to Co impurity distance, $d_{NM-Co}$, is set equal to
$(d_{NM-NM}^{eq}+d_{Co-Co}^{eq})/2$, where $d_{Co-Co}^{eq}$ is the
optimal Co-Co distance in a pure Co chain.
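As an illustration only, the following minimal Python snippet reproduces the
geometry just described for the Cu host; the value used for $d_{Co-Co}^{eq}$
is a placeholder, since it is not quoted in the text.
\begin{verbatim}
# Sketch of the 11-atom supercell along z (lengths in bohr).
d_nm_nm = 4.40                       # equilibrium Cu-Cu distance (quoted above)
d_co_co = 4.00                       # placeholder for the optimal Co-Co distance
d_nm_co = 0.5 * (d_nm_nm + d_co_co)  # NM-Co spacing used in the cell

# Co at z = 0, with five NM atoms on each side along the chain axis.
z = [0.0]
for side in (+1, -1):
    pos = side * d_nm_co
    for _ in range(5):
        z.append(pos)
        pos += side * d_nm_nm
z.sort()

period = 2 * d_nm_co + 8 * d_nm_nm   # repeat distance between Co images
print(z, period)
\end{verbatim}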
\begin{figure}
\includegraphics[width=1.0\columnwidth]{Fig1.eps}
\caption{(Color online) Schematic representation of the systems studied. The noble-metal (NM) atoms
are represented by large grey spheres, the smaller green spheres represent the O atoms, while the blue sphere stands for
the Co magnetic impurity.}
\label{setup}
\end{figure}
To check whether Ag and Cu chains can give rise to physics similar to that
of Au chains in the presence of the Co impurities, we compare the band
structure of Cu and Ag linear chains with that obtained for the Au one.
The PDOS as well as the
band structure of the different
NM chains are presented in Fig.~2, where it is seen that
the $d$ bands of the Ag chain are
well below the Fermi level (second panel of Fig.~2), while the
Cu case is similar to the Au one,
showing $d_{xz,yz}$ ($\Delta_3$-symmetry) states
at the Fermi level.
For the three chains, the $\Delta_4$ orbitals are more localized than
the $\Delta_3$ ones, and the most extended
orbitals are those of $d_{z^2}$ ($\Delta_1$) symmetry.
From the spin-polarized calculation of the electronic structure of a Co atom
embedded in the Cu and Ag chains, we obtain that, similarly to what
happens with a Co impurity in the Au chain, the Co
atom presents a total spin $S=3/2$. The Co atom exhibits three
spin-down holes, two with $\Delta_3$ symmetry and the last one coming from
the half-filled degenerate $\Delta_4$ orbitals.
Since
the Co impurity has the same structure when embedded in the three studied
hosts, we can now return to the differences and similarities in the
band structures of the different NM chains already discussed above.
As a direct result of the electronic structure of Cu chains, we can
say that they should present
the same Kondo behavior as the one expected for the corresponding Au system.
The characteristic Kondo scale will, of course,
depend on the specific parameters of each system. On the contrary, the Ag
host does not provide the appropriate environment for the screening of the
local moment that is characteristic of the Kondo effect.
\begin{figure}
\begin{center}
\includegraphics[height=5cm]{Fig2-1.eps}
\includegraphics[height=5cm]{Fig2-2.eps}
\includegraphics[height=5cm]{Fig2-3.eps}
\caption{(Color online) Band structure and densities of states of Cu (top),
Ag (middle) and Au (bottom) chains. The densities
of states are additionally decomposed into the contributions coming from
$s$ and $d$ states of different symmetries.}
\end{center}
\label{densities}
\end{figure}
As mentioned in the introduction, impurity-assisted chain growth
enhances the tendency towards noble-metal chain formation.
The question then arises: what is the effect of O doping
on the Kondo trends of the NM chains with the embedded Co impurity?
After calculating the electronic structure of NM chains doped with
stabilizing oxygen atoms, we find that Co is again in an $S = 3/2$ spin
state, with the same hole symmetries as before. In the NM chains doped
with oxygen, we find that the $\Delta_3$ orbitals are pushed up towards the
Fermi level due to the hybridization of these states with the degenerate $p_{x,y}$
orbitals of the O atoms. This is in agreement with the
results presented in Ref.~\cite{Dinapoli12}, where the \textit{ab initio}
calculations were performed using a different code.
In Fig.~\ref{oxygen} we show the densities of states for the three
NM chains studied when doped with one O atom per NM atom.
We note that this represents a strong-doping limit, chosen to emphasize the
role of the O atoms in the behavior of the NM $d$ bands. Owing to the metallic character of the
chains, which stems from the partially filled $s$ bands (these screen the on-site Coulomb repulsion),
we expect the role of correlations to be
less important than, for example, in the CuO linear chains of CuGeO$_3$ and SrCuO$_2$~\cite{vekua} or in
superconducting cuprates~\cite{Garces}, and to have a small impact on the relative positions of the
different bands.
In the high-doping limit we observe that all the systems
present the $\Delta_3$ band crossing the Fermi level, opening the
possibility of hybridization of these states with
the $3d_{xz,yz}$ Co orbitals. In particular, it is clear from the middle
panel of Fig.~\ref{oxygen} that the doped Ag chain
could now provide the proper environment for the development of
Kondo physics. Although this seems to be the main result of this
contribution, we also note that the bond strength of the chain when doped
with O impurities is even larger than the corresponding
one for the pure Au chains (see Fig.~7 of Ref.~\cite{Dinapoli12}).
Therefore, the presence of O impurities plays a twofold role
in the case of Ag: it pushes the $\Delta_3$ orbitals up
towards the Fermi level, increasing the tendency
towards Kondo physics, and it strengthens the bonds within the Ag chain,
enhancing its feasibility. The same conclusions hold for the doped Au and Cu
chains.
For comparison with the previous study of Au chains, in the case of the pure Cu chain
we also added a four-fold-symmetry-breaking FCC lead.
We obtain that the splitting between the $3d_{x^2-y^2}$
and the $3d_{xy}$ orbitals is large enough to produce a positive value
of $D$, in agreement with the one obtained for the pure Au chains
reported in Ref.~\cite{Dinapoli13}. Thus we expect that, in this particular
case, the ground state will exhibit the physics of the
overscreened Kondo model.
\begin{figure}
\begin{center}
\includegraphics[height=5cm]{Fig3-1.eps}
\includegraphics[height=5cm]{Fig3-2.eps}
\includegraphics[height=5cm]{Fig3-3.eps}
\caption{(Color online) Band structure and densities of states of NM chains doped with the O atoms. The densities
of states are additionally decomposed into the contributions coming from $s$ and $d$ states of different symmetries.}
\end{center}
\label{oxygen}
\end{figure}
\section{Conclusions}
\label{conclusions}
In this contribution we analyze the conditions necessary to obtain Kondo physics in noble-metal chains with an embedded Co impurity.
We find that Cu and Au leads behave electronically in a similar way in the presence of a Co impurity. The characteristic Kondo scale
will depend on the specific parameters of each system. On the contrary, Ag chains lack the $\Delta_3$-symmetry states at the Fermi
level needed to screen the angular momentum characteristic of the Kondo effect.
In an atmosphere of O atoms, the likelihood of developing Kondo physics is enhanced in the Ag chains, while the Cu and Au chains increase
their mechanical stability and also improve the conditions for the existence of the two-channel Kondo (2CK) regime.
The physics of the overscreened Kondo model and non-Fermi-liquid properties are also foreseen in the case of pure Cu chains when a
four-fold-symmetry-breaking field is added in the leads.
This work was partially supported by PIP 11220080101821, PIP 00258 of
CONICET, and PICT R1776 of the ANPCyT, Argentina.
\bibliographystyle{IEEEtran}
Q: Complexity of infinitary satisfiability, part 2

This question is a follow-up to this one, which was almost entirely answered by Farmer S. Throughout, we work in $\mathsf{ZFC+V=L}$.
Given a "pre-admissible" (= admissible or limit of admissibles) ordinal $\kappa$, let $\mathsf{Sat}_\kappa$ be the set of $\mathcal{L}_{\infty,\omega}\cap L_\kappa$-sentences which are satisfiable - that is, which have a model, but not necessarily a model in or nicely-definable-over $L_\kappa$. The two easy cases are:
*If $\kappa$ is countable, then by the Barwise completeness theorem we get $\mathsf{Sat}_\kappa$ is $\Pi_1(L_\kappa)$.
*If $\kappa$ is an uncountable cardinal, then $\mathsf{Sat}_\kappa$ is $\Sigma_1(L_\kappa)$ by a downward Löwenheim–Skolem + Mostowski collapse argument: $\varphi\in\mathcal{L}_{\infty,\omega}\cap L_\kappa$ is satisfiable iff $L_\kappa\models$ "$\varphi$ is satisfiable."
At the above-linked question, Farmer S. handled most of the remaining cases:
*If $\kappa$ is a non-cardinal of uncountable cofinality and $L_\kappa$ does not correctly compute $\mathsf{Sat}_\kappa$ (this is the $\Sigma_1(L_\kappa)$-case), then $\mathsf{Sat}_\kappa$ is not definable with parameters over $L_\kappa$ at all.
This leaves open the situation of $\kappa$ satisfying $\omega=\mathit{cf}(\vert\kappa\vert)<\vert\kappa\vert<\kappa$, that is, the non-cardinals whose cardinality has countable cofinality. I suspect that this will be a bit tricky to analyze in full, so instead let me ask a more limited question:
Is there a pre-admissible $\kappa$ such that $\mathsf{Sat}_\kappa$ is definable-with-parameters over $L_\kappa$ but not in a $\Sigma_1$ or $\Pi_1$ way?
A: Here is a partial answer; it deals with those admissibles $\kappa$ large enough to see that $\mathrm{cof}(|\kappa|)=\omega$ and such that $L_\kappa$ has largest cardinal $\theta$.
That is, let $\kappa$ be as in the question, be admissible, and let $\theta=|\kappa|$, so $\mathrm{cof}(\theta)=\omega$. Suppose that $L_\kappa\models$"$\mathrm{cof}(\theta)=\omega$ and every set has cardinality $\leq\theta$"
(of course this is true for club many $\kappa<\theta^+$).
Then I claim that $\mathrm{Sat}_\kappa$ is $\Pi_1^{L_\kappa}(\{\theta\})$.
For let $f:\omega\to\theta$ be the $L$-least cofinal, strictly increasing
function with $\mathrm{range}(f)$ a set of cardinals; so by hypothesis, $f\in L_\kappa$. Note that $\{f\}$ is $\Sigma_1^{L_\kappa}(\{\theta\})$. So our $\Pi_1^{L_\kappa}(\{\theta\})$ definition can refer to $f$.
Working in $L_\kappa$, let $T$ be some $\mathcal{L}_{\infty\omega}$ theory. We want to determine whether $T$ is satisfiable. Since $L_\kappa\models$"every set has cardinality $\leq\theta$", and we can in a $\Sigma_1(\{T,\theta\})$ manner find the $L$-least surjection $g:\kappa\to T$, we may in fact assume that $T$ is (coded by) a subset of $\theta$. Let $\mathscr{T}_T$ be the tree of attempts to build a sequence $\left<N_n,M_n,T_n,f_n,\theta_n,\pi_n\right>$ such that:
*$N_n$ is a structure in the language of set theory with $N_n\models$"ZF$^-$+$V=L$", and $\mathrm{card}(N_n)=f(n)$,
*$f(n)+1\subseteq\mathrm{Ord}^{N_n}$ and $f(n)+1$ is an initial segment of $\mathrm{Ord}^{N_n}$,
*$f_n,M_n,T_n,\theta_n\in N_n$ and $N_n\models$"$\theta_n$ is a cardinal, $f_n:\omega\to\theta_n$ is cofinal, $T_n$ is a theory of $\mathcal{L}_{\infty\omega}$ and $M_n$ is a model such that $M_n\models T_n$",
*$f_n\upharpoonright (n+1)=f\upharpoonright(n+1)$,
*$\pi_n:N_n\to N_{n+1}$ is elementary, with $\pi_n(f_n,M_n,T_n,\theta_n)=(f_{n+1},M_{n+1},T_{n+1},\theta_{n+1})$ and $\mathrm{crit}(\pi_n)>f(n)$.
(The nodes of $\mathscr{T}_T$ should specify, say, $(\left<N_n,M_n,T_n,f_n,\theta_n\right>_{n<k},\left<\pi_n\right>_{n+1<k})$ for some $k<\omega$.)
Note that $\mathscr{T}_T\in L_\kappa$.
Working in $L$,
I claim that $T$ is satisfiable iff $\mathscr{T}_T$ has an infinite branch. For if $M\models T$ then let $\gamma$ be large enough with $M\in N=L_\gamma\models$ZF$^-$, and then form a sequence of elementary hulls $X_n$ of $N$ of cardinalities $f(n)$ etc, and use their transitive collapses $N_n$ etc to get an infinite branch. Conversely, if there is an infinite branch $\left<N_n,\ldots\right>_{n<\omega}$, then we can let $(N,f',M',T',\theta')$ be the natural direct limit, and then note that $f'=f$, $\theta'=\theta$, $T'=T$ (as $T\subseteq\theta$ in the codes), and $N\models$ ZF$^-$ + "$M\models T$", but because $T\in N$ and $T$ encodes the relevant ordinals into its own structure, $N$ must be sufficiently wellfounded that it is correct about the truth computation, so $M\models T$, so $T$ is satisfiable.
So since $L_\kappa\models$KP, the existence of a branch through $\mathscr{T}_T$ is $\Pi_1^{L_\kappa}(\{\theta,T\})$, uniformly in $T$, so $\mathrm{Sat}^{L_\kappa}$ is $\Pi_1^{L_\kappa}(\{\theta\})$.
Remark: Suppose $\kappa$ is also a successor admissible. Then $L_\kappa$ is not correct about satisfiability. For let $\gamma<\kappa$ be above all admissibles $<\kappa$ and such that $L_\gamma$ projects to $\theta$. Then consider the theory $T$ whose models $M$ must satisfy KP + $V=L$, and must have $\gamma+1\subseteq\mathrm{wfp}(M)$, and which satisfy "I have no proper segment of height $>\gamma$ modelling KP". There is no such $M\in L_\kappa$.
Remark 2: On the other hand, suppose $\kappa$ is a limit of admissibles, and the other hypotheses above hold. Then by the arguments above and considering the left-most branch through $\mathscr{T}_T$, $L_\kappa$ is correct about satisfiability.
So it remains to handle those pre-admissibles $\kappa$ such that either $L_\kappa\models$"$\mathrm{cof}(\theta)>\omega$" (and therefore $L_\kappa\models$"$\theta$ is inaccessible"), or $L_\kappa\models$"$\theta^+$ exists". It also remains to determine whether the parameter $\theta$ can be eliminated above.
Bates Motel is an American drama, psychological horror, and thriller television series developed by Carlton Cuse, Kerry Ehrin, and Anthony Cipriano, produced by Universal Television and broadcast by A&E.
The series is a "contemporary prequel" to the 1960 film Psycho (based on the novel of the same name by Robert Bloch), depicting the life of Norman Bates and his mother Norma before the events portrayed in Alfred Hitchcock's film. The series begins after the death of Norma's husband, when she buys a motel located in a coastal town called White Pine Bay, in the state of Oregon, United States, so that she and Norman can start a new life.
The series was filmed in Aldergrove, British Columbia, Canada, and premiered on March 18, 2013 on A&E. The network ordered 10 episodes for the first season. On April 8, 2013, following favorable reviews and strong ratings, the network renewed Bates Motel for a second season of another 10 episodes. On April 7, 2014, Bates Motel was renewed for a third season of 10 episodes. The fourth season premiered in 2016 and also consists of 10 episodes. Later in 2016 the end of the series was announced, with the final season premiering on February 20, 2017; this fifth and last season was the one that received the most positive reviews.
Synopsis
After the mysterious death of her husband, Norma Bates decides to start a new life far from Arizona in the small town of White Pine Bay, Oregon, taking her 17-year-old son Norman with her. She buys an old abandoned motel and the mansion next to it. Mother and son have always shared a complex, almost incestuous relationship. Tragic events push them even closer together. They all now share a dark secret.
Production
Cuse cited the drama series Twin Peaks as a key inspiration for Bates Motel, stating: "We practically ripped off Twin Peaks..... If you wanted a confession, the answer is yes, I loved that show, which only made 30 episodes. Kerry Ehrin and I thought about filling in what was missing."
Bates Motel was filmed in Aldergrove, but it is set in the fictional town of White Pine Bay, Oregon. The set of the original house is located at Universal Studios in Hollywood, Los Angeles; a replica, however, was built in Aldergrove, where the series is filmed. The set is located on 272 Street. Arri Alexa digital cameras were used, which brought out details of the scenes even in dark settings.
Series writer Bill Balas actually has cystic fibrosis, and he was the inspiration for the character Emma Decody being afflicted with the disease.
Episodes
Cast
Main cast
Ratings and reception
Season one
On its premiere night, the series broke ratings records for an original A&E drama series. It drew 3.04 million viewers, including a total of 1.6 million viewers in the 18–49 demographic. The first-season finale drew a total of 2.70 million viewers, with a 1.2 rating in the 18–49 demographic. Overall, the first season averaged 2.70 million viewers, with 1.5 million in both the 18–49 and 25–54 demographics.
The first season of Bates Motel scored 66 out of 100 on the review aggregator Metacritic, indicating "generally favorable reviews." Another aggregator, Rotten Tomatoes, reported that 83% of 37 critics gave the first season a positive review. The site's consensus reads: "Bates Motel utilizes mind manipulation and the fear tactics of suspense on top of consistently sharp character work and wonderfully uncomfortable family relationships." Vera Farmiga was nominated for the 2013 Emmy Award for Outstanding Lead Actress in a Drama Series, and won the Saturn Award for Best Actress on Television for her work in this season.
Season two
The second-season premiere drew a total of 3.07 million viewers, with 1.3 million watching in the coveted 18–49 demographic, and the season finale drew 2.30 million viewers, with 0.9 million tuning in from the 18–49 demographic. Overall, the second season saw a slight drop in ratings, averaging 2.30 million viewers with a 0.9 rating in the 18–49 demographic.
The second season of Bates Motel received a score of 67 out of 100 on Metacritic, based on 11 reviews. Rotten Tomatoes reported an 86% rating from 12 reviews for the second season. The site's consensus reads: "Bates Motel reinvents a suspense classic with believable performances and distinctive writing." Farmiga and Highmore were nominated for Satellite Awards and Critics' Choice Television Awards for Best Actress and Best Actor, respectively, for their work in this season.
Season three
The third-season premiere drew a total of 2.14 million viewers, with 0.9 million watching in the 18–49 demographic, and the finale drew 1.67 million viewers, with 0.6 tuning in from the 18–49 demographic. Overall, the third season also saw a drop in ratings, averaging 1.80 million viewers with a 0.7 rating in the 18–49 demographic.
The third season of Bates Motel received a score of 72 out of 100 from Metacritic, based on 5 reviews, indicating "favorable reviews." Rotten Tomatoes reported that 92% of 12 critical responses were positive. The site's consensus reads: "Bates Motel blurs more lines around TV's most taboo mother/son relationship, uncomfortably darkening an already fascinating tone."
Broadcast
In Brazil, the series first aired on pay TV on Universal Channel and later on free-to-air television on RecordTV, which showed it on two occasions: first in prime time in 2014 (only the first season), and again in January 2016, when the first three seasons were aired.
Universal Channel aired the series up to the fourth season, in 2016, and the fifth season was shown in Brazil in July 2017. All seasons were dubbed in Brazilian Portuguese.
In Portugal, the series is broadcast by TVSéries.
All seasons of the series can be watched on the streaming services Prime Video and Netflix.
Merchandising
NBCUniversal partnered with Hot Topic, an American pop-culture merchandise retailer, to offer a collection of clothing and accessories inspired by Bates Motel. The merchandise, including items such as bathrobes and blood-stained shower curtains, became available on Hot Topic's website and in selected stores on March 18, 2014.
DVD and Blu-ray
In Brazil, the first season of Bates Motel was released in November 2013 on DVD only. In the United Kingdom, it was released on Blu-ray with Brazilian Portuguese dubbing and subtitles on February 3, 2014. After complaints, the first and second seasons were released on Blu-ray in Brazil in October 2014. In October 2015 the Blu-ray release was discontinued in Brazil, with the third season released only on DVD in November 2015. Amazon.com allows the third season to be imported to Brazil on Blu-ray, with Portuguese subtitles.
Awards and nominations
External links
at Rotten Tomatoes
at Metacritic
Bates Motel
A&E programs
Canada's Digital Divide: How Wide is it and Can it be Solved?
Written by Kajeet on October 01, 2018
Educational technology—how we use it now, how we'll use it in the future—is of paramount importance to any discussion of education. Teachers, students, administrators, and other key stakeholders in Canada are passionate about the role technology plays in supporting how students learn.
An infographic on "Technology in the Classroom" created by Nelson, an educational publisher in Canada, revealed 71 percent of students feel positively about technology in the classroom.
Technology in the classroom has even more of an impact when students can continue their ed tech use at home. However, not all students have the same access to technology due to a lack of Internet access.
The Digital Divide in Canada
Writing on the digital divide for Friends of Canadian Broadcasting, author Brad Stollery notes that while 96 percent of Canadians had access to broadband Internet (with a minimum download speed of five Mbps) in 2015, that number drops to 79 percent for Canadians living in the North, only two percentage points higher than the figure in the 2014 infographic.
As he writes, "The 96-percent figure is as pernicious as it is impressive, moreover, because it will foster public complacency at the expense of those final few. There are people across the country who lack a utility that is vital for 21st-century life. Many of them live in indigenous communities, where gaining reliable and affordable broadband access is a matter of survival."
However, a different source, Statistics Canada, reports 42 percent of low-income households lack Internet access at home. Similar to the U.S. statistic from Pew Research that states, "Some 5 million school-age children do not have a broadband Internet connection at home, with low-income households accounting for a disproportionate share," low-income families in Canada face similar circumstances.
These school-aged children that lack Internet access at home feel the impact of the Digital Divide more and more as tech use continues to increase inside – and outside – the classroom.
The annual "State of the Nation: K-12 E-Learning in Canada" study by the Canadian eLearning Network (CANeLearn) examines K-12 online learning such as distance, online, and blended learning. This 2016-17 report revealed around 16 percent of K-12 students were engaged in E-learning. The report states, "The overall e-learning activity was based on the number of K-12 students engaged in distance and online learning, combined with the number of K-12 students engaged in blended learning."
As more students engage in distance learning, online learning, or blended learning, they will need Internet access at home to be successful in school.
The Impact on Digital Literacy
The Digital Divide also refers to students' technical skills. For example, in a recent report by Radio Canada International, there's a growing concern about the Digital Divide, especially among the country's population under 30 years old. Central to their story is an illuminating study by the Brookfield Institute that takes readers on a deep dive into digital literacy education in Canada.
The study, "Leveling Up: The Quest for Digital Literacy," pulls together nearly 100 interviews with Canadian tech trainers, teachers, school board representatives, academics, and policymakers to help explain the current state of digital literacy programs for K-12 and post-secondary students.
According to the study's authors,
There has been an exciting growth of programs across Canada supporting the development of digital literacy at all ages, both within the formal educational system and delivered by non- and for-profit actors working alongside and in partnership with schools, colleges, and universities. However, the landscape of opportunities for learning digital skills remains fragmented and difficult for some learners to navigate. Many people in Canada are at risk of falling through the cracks, uncertain of the skills they are missing, how to develop them, and how to make sure they are not left behind.
The study continues to describe the current state of digital literary in Canada, including the following topics:
Coding in Schools. In Canada's K-12 systems, coding education revolves primarily around building websites, creating games, and (in grades 10-12) computer science and robotics. However, some policymakers worry this heavy focus on coding may end up replacing traditional literacy and stream students into exclusively high-tech careers.
Tech-Infused Curriculums. Rather than focus on coding, some Canadian territories and provinces choose to infuse their entire school curriculums with technology. The goal is to have a student body that understands the responsible, ethical use of concepts and tools like data analysis and social media across different academic subjects.
Train the Trainers. According to the report's authors, some Canadian jurisdictions are "employing 'train-the-trainer' models, with digitally adept teachers receiving formal training and serving as peer mentors and facilitators within their schools or school districts."
Third-Party Partnerships. The non-profit organization Brilliant Labs is just one of the many ed tech organizations collaborating with Canadian schools and educators through the use of training programs, teacher development, summer camps, equipment, and technical expertise.
Despite the advancements and uses of technology in the classroom, this study also touches on bridging Canada's Digital Divide.
"A lack of consistent access to hardware, software, the Internet, and cellular data was reported by a number of interviewees as a core barrier to developing and maintaining digital literacy. This lack of access is deeply intertwined with income/wealth, geography, and other socioeconomic factors including housing stability. Without consistent access, learners can fail to progress or falter in their digital progression and in building confidence using technology."
Accelerating Canada's Learning Goals
Canadian schools currently employ a fascinating range of programs, tools, guidelines, initiatives, and other resources to help maintain a student body that's prepared for the future.
CANeLearn provided suggestions for teaching tools such as Cram.com and GoConqr for online flashcard sites; Infogram to create infographics, reports, and maps; and Plickers to make fun classroom quizzes. Some programs, like the coding boot camps from Lighthouse Labs, are based in Canada while others still, like the hour-long coding activities in The Hour of Code, are part of larger, global ed tech movements.
Last fall, EdSurge reported on how the Ottawa Catholic School Board had undertaken the mission to "ensure that the technology in classrooms is being used to accelerate the district's learning goals, including personalization, differentiation, and deeper learning." The school board adopted Google Chromebooks, used Google's educational suite as their platform, and incorporated Google apps including Dashboard and Workspace. Then the tech integration team for Ottawa measured results to see how students were using these devices and create an action plan for the future.
Bridging the Digital Divide with Kajeet
Designed to help bridge the Digital Divide and allow K-12 students to safely explore and learn in the online world, Kajeet provides schools and districts filtered connectivity and device management solutions.
And now, both of our leading solutions are available in Canada.
Kajeet SmartSpot®
Filtered Wi-Fi hotspots allow students to access the Internet anytime, anywhere, keeping students safe and on task while connected to the largest Canadian wireless network. These devices help close the Digital Divide by giving students without Internet at home the means to keep up with their well-connected peers.
Kajeet SmartBus™
School bus rides can span anywhere from 20 minutes to over an hour just going to school. When you add up all the time students spend on a school bus, you could turn their travel time into instructional time. Filtered Internet access on the school bus provides students additional time to work on homework, increasing their productivity and keeping their attention to help reduce behavior incidents.
The core of every Kajeet solution includes:
Safe, secure access to online learning.
Compliant, customizable Internet filters.
Visibility into student device usage.
Detailed data reporting and analytics.
For years, Kajeet has helped school districts across the U.S. Now, we're excited to extend a helping hand into Canada. If you are interested in learning more about Kajeet services in Canada or want to join our mailing list, please fill out this form here.
\section{Introduction}
The genus theory of algebraic number fields can be traced back to
Gauss's celebrated work on binary quadratic forms, and has its
roots in earlier work of Euler, Lagrange, and others.
It was Hasse who first defined the genus field of a quadratic extension when
he reproved a classical theorem of Gauss using class field theory~\cite{hasse},
and Leopoldt later defined the genus field of any absolutely abelian extension~\cite{leopoldt}.
The definition of the genus field of a general number field, which we give forthwith, is due to Fr\"ohlich~\cite{frohlich}.
The genus field of a number field $K$ is defined to be the maximal extension $K^*$ of $K$ that is unramified at all finite primes and is a compositum of the form $Kk^*$ where $k^*$ is absolutely abelian. The genus number of $K$ is defined as $g_K=[K^*:K]$.
It follows right away that $g_K$ divides $h^+_K$, the narrow class number of $K$.
Since the class number $h_K$ and the narrow class number $h_K^+$ differ by a power of $2$ and the genus number of a cubic field is a power of $3$ (see Theorem~\ref{T:frohlich}), it follows that $g_K$ divides $h_K$ when $K$ is cubic.
The class number is among the most important invariants associated to a number field,
but it is very difficult to study.
Conjecturally, its behavior at the ``good'' primes is
governed by the (modified) heuristics of Cohen--Lenstra--Martinet (see~\cite{cohen.lenstra, cohen.martinet}).
By contrast, the genus number (whose support is at ``bad'' primes) does not behave ``randomly'' and is therefore more amenable to study.
It is very natural to ask about the density of genus number one fields
among all number fields of a fixed degree and signature.
In the present investigation, we will discuss the situation for cubic fields,
as this is the simplest situation where this question has not been previously addressed.
Let $K$ be a cubic field.
If $K$ is cyclic, then $g_K=3^{e-1}$, where $e$ is the number of odd prime factors of the discriminant $\Delta$ of $K$;
it follows right away that $0\%$ of cyclic cubic fields have genus number one,
and that the average genus number in this setting is infinite.
These same statistical questions become more subtle when one does not impose the restriction
that $K$ is Galois. In fact, since $0\%$ of cubic fields are cyclic, the aforementioned facts have
little bearing on the answers when one considers the collection of all cubic fields.
In this paper, we show that roughly $96.23\%$ of cubic fields have $g_K=1$.
In addition, we prove that the average genus number is roughly $1.0785$.
Let $\mathcal{F}$ denote the collection of all cubic fields $K$
with $g_K=1$, and write $\mathcal{F}^+$, $\mathcal{F}^-$ to denote the subsets of $\mathcal{F}$
consisting of fields with positive and negative discriminants, respectively.
Set $N^\pm(X)=\#\{K\in\mathcal{F}^\pm: |\Delta|\leq X\}$ and define constants
$n^+=6$ and $n^-=2$. (Note that we will always count cubic fields up to isomorphism.)
In Section \ref{genus1}, we prove our main result:
\begin{theorem}\label{T:1}
\begin{align*}
N^\pm(X)
&=
\frac{29}{54n^\pm\zeta(2)}\prod_{\substack{p\equiv 2\pmod{3}}} \left(1+\frac{1}{p(p+1)}\right)X+O\left(X^{16/17+\varepsilon}\right)
\end{align*}
\end{theorem}
\begin{corollary}\label{C:1}
The proportion of cubic fields with genus number one
(of positive or negative discriminant)
equals
\[
\frac{29\,\zeta(3)}{27\,\zeta(2)}\prod_{\substack{p\equiv 2\pmod{3}}} \left(1+\frac{1}{p(p+1)}\right)
\,.
\]
Consequently, roughly $96.23009\%$ of totally real cubic fields and $96.23009\%$ of complex cubic fields have genus number one.
\end{corollary}
In Section \ref{ave}, we prove the following result regarding the average genus number of a cubic field:
\begin{theorem}\label{T:2}
The average genus number of a cubic field
(in the positive or negative discriminant case)
is given by
\begin{align*}
&
\lim_{X\to\infty}\frac{\displaystyle \sum_{0<\pm\Delta\leq X}g_K}{\displaystyle \sum_{0<\pm\Delta\leq X}1}
\\[2ex]
&\qquad
=
\frac{119\zeta(3)}{108\zeta(2)}
\prod_{p\equiv 1\pmod{3}} \left(1+\frac{3}{p(p+1)}\right)\prod_{p\equiv 2\pmod{3}} \left(1+\frac{1}{p(p+1)}\right)
\\[1ex]
&\qquad
\approx
1.078541
\end{align*}
The above sums are taken over all cubic fields $K$
where the discriminant $\Delta$ falls in the specified range.
\end{theorem}
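As a quick numerical sanity check (a sketch, not part of the proofs), the constants in Corollary~\ref{C:1} and Theorem~\ref{T:2} can be approximated by truncating the Euler products at $10^6$; the neglected tails are of size $O(\sum_{p>10^6}p^{-2})$ and do not affect the digits shown.
\begin{verbatim}
# Sketch: approximate the constants of Corollary 1 and Theorem 2.
from mpmath import mp, zeta
from sympy import primerange

mp.dps = 15
P1 = P2 = mp.mpf(1)     # products over p = 1, 2 (mod 3), truncated
for p in primerange(2, 10**6):
    if p % 3 == 1:
        P1 *= 1 + mp.mpf(3) / (p * (p + 1))
    elif p % 3 == 2:
        P2 *= 1 + mp.mpf(1) / (p * (p + 1))

print(29 * zeta(3) / (27 * zeta(2)) * P2)         # ~ 0.9623 (Corollary 1)
print(119 * zeta(3) / (108 * zeta(2)) * P1 * P2)  # ~ 1.0785 (Theorem 2)
\end{verbatim}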
In Section~\ref{dist}, we give the exact proportion of cubic fields with a given genus number.
\begin{theorem} \label{T:3}
A positive proportion of cubic fields (of positive or negative discriminant)
have $g_K=m$ iff $m$ is a power of $3$, and
the exact proportion with $g_K=3^k$ is given by
\[
\frac{\zeta(3)}{\zeta(2)}
\left[
\frac{29}{27}\sum_{f\in T_k} \prod_{p|f}\frac{1}{p(p+1)}
+
\frac{1}{108}\sum_{f\in T_{k-1}} \prod_{p|f}\frac{1}{p(p+1)}
\right]
\,,
\]
where $T_k$ denotes the collection of squarefree integers coprime to $3$ having exactly $k$ prime factors $p$ that satisfy
$p\equiv 1\pmod{3}$.\footnote{If we adopt the convention that $T_{-1}=\emptyset$, then the formula holds for $k=0$ as well.}
The approximate proportions are given in the following table:
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
$k$ & $0$ & $1$ & $2$ & $3$ \\
\hline
proportion & $96.23\%$ & $3.72\%$ & $0.05\%$ & really small! \\
\hline
\end{tabular}
\end{center}
\end{theorem}
Our initial interest in this question stemmed from the study of norm-Euclidean cubic fields.
There are only finitely many norm-Euclidean cubic fields with negative discriminant,
but it may very well be the case that there are infinitely many with positive discriminant
(see~\cite{davenport.euclidean, heilbronn.cubic}).
A norm-Euclidean field is necessarily class number one and, hence, genus number one;
thus, if $g_K \neq 1$, we can trivially conclude that $K$ is not norm-Euclidean.
Consequently, it is of greater interest to study fields that fail to be norm-Euclidean for reasons other than genus theory.
In Section \ref{NE}, we prove the following result:
\begin{theorem}\label{T:euclidean.genus}
A positive proportion of totally real cubic fields with genus number one fail to be norm-Euclidean.
\end{theorem}
Our starting point is a theorem of Fr\"ohlich which gives an explicit
description of $g_K$ when $K$ is cubic.
The main tool we employ is a powerful theorem,
proved independently by Taniguchi--Thorne
and Bhargava--Shankar--Tsimerman,
that allows one to compute the density of cubic discriminants
satisfying specified local conditions with a very precise error term.
This is a generalization of a classical theorem
of Davenport--Heilbronn, who were the first to accomplish the counting of cubic fields.
\section{Preliminaries}
Let $K$ be a cubic field. Then the discriminant $\Delta$ takes one of the three forms $df^2$, $9df^2$, $81df^2$, where $d$ is a fundamental discriminant and $f$ is a squarefree positive integer coprime to $3$.
A prime $p \neq 3$ is totally ramified in $K$ if and only if $p$ divides $f$,
and $3$ is totally ramified in $K$ if and only if $\Delta$ takes one of the forms $9df^2$, $81df^2$~\cite{cohen}. The following theorem of Fr\"ohlich gives an explicit expression for the genus number~\cite{frohlich};
see also~\cite{ishida}.
\begin{theorem}[Fr\"ohlich]\label{T:frohlich}
Let $e$ denote the number of odd primes $p$ such that $p$ is totally ramified in $K$ and $(d/p)=1$,
where $(d/p)$ is the usual Legendre symbol.
Then we have:
\[
g_K=\begin{cases}
3^{e-1} & \text{if $K$ is cyclic}\\
3^e & \text{if $K$ is not cyclic.}
\end{cases}
\]
\end{theorem}
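For instance, the pure cubic field $K=\mathbb{Q}(\sqrt[3]{2})$ has $\Delta=-108=9\cdot(-3)\cdot 2^2$, so that $d=-3$ and $f=2$; here $2$ and $3$ are totally ramified, the only odd totally ramified prime is $3$, and $(d/3)=0\neq 1$, so $e=0$ and $g_K=1$, consistent with the fact that $h_K=1$.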
Note that since the number of cyclic cubic fields with discriminant less than or equal to $X$ is $O(X^{1/2})$,
for our purposes we may neglect these fields~\cite{hasse3, cohn}.
Our main tool is the following theorem, which
is a strengthening of the classical Davenport--Heilbronn Theorem
(see~\cite{davenport.heilbronn, taniguchi.thorne, bhargava.shankar.tsimerman}).
For what follows, set $m^+=1$ and $m^-=\sqrt{3}$.
\begin{theorem}[Taniguchi--Thorne, Bhargava--Shankar--Tsimerman]\label{T:TTBST}
The number of cubic fields satisfying $0<\pm\Delta\leq X$ equals
\[
\frac{1}{2n^\pm\zeta(3)}X+\frac{4m^\pm\zeta(1/3)}{5\Gamma(2/3)^3\zeta(5/3)}X^{5/6}+O(X^{7/9+\varepsilon})
\,.
\]
\end{theorem}
We note in passing that the secondary term was conjectured
by Roberts~\cite{roberts} following computations carried out by Belabas~\cite{belabas1},
and that the existence of a power saving error term was first proved
by Belabas--Bhargava--Pomerance in~\cite{belabas.bhargava.pomerance}; this whole story is summarized
very nicely in~\cite{belabas.bhargava.pomerance}.
In fact, both papers~\cite{taniguchi.thorne, bhargava.shankar.tsimerman}
give a stronger version of Theorem~\ref{T:TTBST}
(which we will require) allowing one to specify local conditions.
If local conditions are imposed at finitely many primes $p$,
then the main term and the secondary term are multiplied by an additional
factor for each $p$; moreover, in this case,
the implicit constant in the $O$-term now depends upon the set of local conditions.
For our application we need to specify
infinitely many local conditions \emph{and} we require an explicit dependence on these local conditions.
In principle, either Theorem~7 of~\cite{bhargava.shankar.tsimerman} or Theorem~1.3 of~\cite{taniguchi.thorne} will suffice,
but the situation is such that neither accomplishes our aim ``out-of-the-box'' with no additional work.
We have chosen to use Theorem~1.3 of~\cite{taniguchi.thorne} as our main work horse.
At the appropriate juncture in our proofs, please see \S6 of~\cite{taniguchi.thorne}
and \S4 of~\cite{bhargava.shankar.tsimerman}
for information regarding local density calculations.
Finally, we mention that there is a forthcoming paper~\cite{bhargava.taniguchi.thorne}
that improves the error term in Theorem~\ref{T:TTBST} to $O(X^{2/3+\varepsilon})$.
Substituting this result into our arguments would result in an improvement of our error terms.
\section{Counting genus number one cubic fields}\label{genus1}
Recall that we write the discriminant of a cubic field $K$ as $\Delta = df^2$, $9df^2$, or $81df^2$ with $f$ coprime to $3$. Let $\mathcal{G}^\pm$ denote the collection of all cubic fields with $\sgn(\Delta)=\pm 1$, and let $\mathcal{F}^\pm \subseteq \mathcal{G}^\pm$ denote the collection of all such cubic fields $K$ with $g_K=1$. Let $N^\pm(X)=\#\{K\in\mathcal{F}^\pm: |\Delta|\leq X\}$. Define
\begin{align*}
\mathcal{F}^\pm_1&=\{K\in\mathcal{G}^\pm\mid p\equiv 2\pmod{3} \text{ for all $p$ dividing $f$}\}\\
\mathcal{F}^\pm_2&=\{K\in\mathcal{F}^\pm_1\mid \text{ $3$ is totally ramified and } d\equiv 1\pmod{3}\}.
\end{align*}
By Theorem~\ref{T:frohlich}, we have $\mathcal{F}^\pm=\mathcal{F}^\pm_1\setminus\mathcal{F}^\pm_2$,
and therefore $N^\pm(X)=N^\pm_1(X)-N^\pm_2(X)$ where
$N^\pm_i(X)=\#\{K\in\mathcal{F}^\pm_i : |\Delta|\leq X\}$.
Indeed, when $p\neq 2,3$ is totally ramified, the condition $(d/p)=1$ is equivalent to $p\equiv 1\pmod{3}$
(see~\cite{ishida}, Example 6.10),
and, of course, $(d/3)=1$ is equivalent to $d\equiv 1\pmod{3}$.
In what follows, we will establish asymptotic formulas for $N^\pm_1(X)$, $N^\pm_2(X)$ individually
and then obtain the desired result by subtraction.
For each squarefree $f$ coprime to $3$ we write $N^\pm(f;X)$
to denote the number of fields in $\mathcal{G}^\pm$ with $|\Delta|\leq X$ that are totally ramified at the primes dividing $f$ and at no other primes, except possibly $3$.
\begin{proposition}\label{incl-ex}
\[
N^\pm(f;X)
=
\frac{13}{24n^\pm\zeta(2)}\prod_{p|f}\frac{1}{p(p+1)} X
+
O(f^{-1}X^{16/17+\varepsilon})
\,.
\]
\end{proposition}
\begin{proof}
Write $M^\pm(f;X)$ to denote the number of cubic fields of positive (or negative) discriminant
with $|\Delta|\leq X$ where all the primes dividing $f$ are totally ramified (and no other restrictions).
Observe that this constitutes only finitely many local conditions.
Notice that
\[
N^\pm(f;X)=\sum_{(r,3f)=1}\mu(r)M^\pm(rf;X)
\,.
\]
We will split this sum
as $\sum_r=\sum_{r\leq Y}+\sum_{r>Y}$ where $Y$
is some parameter to be specified.
In all sums over $r$ we will only consider values where $(r,3f)=1$.
For $r\leq Y$, we will use Theorem~1.3 of~\cite{taniguchi.thorne} to obtain
\[
M^\pm(f;X)=
c_f^\pm X +
d_f^\pm X^{5/6} + O(f^{16/9}X^{7/9+\varepsilon})
\,,
\]
where the constant in the main term is
\begin{align*}
c_f^\pm
&=
\frac{1}{2n^\pm\zeta(3)}
\prod_{p|f}
\frac{1}{p^2+p+1}
\,,
\end{align*}
and the constant in the secondary term is
\begin{align*}
d_f^\pm=
\frac{4m^\pm\zeta(1/3)}{5\Gamma(2/3)^3\zeta(5/3)}
\prod_{p|f}\frac{p^{2/3}-1}{(p^{5/3}-1)(p-1)}
=
O(f^{-2})
\,.
\end{align*}
On the other hand, when $r>Y$ we will use the estimate
\[
M^\pm(f;X)=O(f^{-2+\varepsilon}X)
\,,
\]
which follows immediately from Lemma~3.4 of~\cite{taniguchi.thorne}, in light of the fact that $6^{\omega(f)}=O(f^\varepsilon)$.
Consequently,
\begin{align*}
N^\pm(f;X)
&=
\sum_{r\leq Y}
\left(
\mu(r)c_{rf}X+
O((rf)^{-2}X^{5/6})
+
O((rf)^{16/9}X^{7/9+\varepsilon})
\right)
+
\sum_{r>Y}O((rf)^{-2+\varepsilon}X)
\\
&=
X\sum_{r\leq Y}\mu(r)c_{rf}
+
O(f^{-2}X^{5/6})+O(Y^{25/9}f^{16/9}X^{7/9 +\varepsilon})+O(Y^{-1+\varepsilon}f^{-2+\varepsilon}X)
\,.
\end{align*}
We compute the constant in the main term:
\begin{align*}
\sum_{r\leq Y}\mu(r)c_{rf}
&=
\sum_{r}\mu(r)c_{rf} + O\left( \sum_{r> Y}c_{rf}\right)
\\
&=
\frac{1}{2\zeta(3)n^\pm}\prod_{p|f}\frac{1}{p^2+p+1}\sum_{(r,3f)=1}\mu(r)\prod_{p|r}\frac{1}{p^2+p+1}
+
O\left( \sum_{r> Y}(rf)^{-2+\varepsilon}\right)
\\
&=
\frac{1}{2\zeta(3)n^\pm}\prod_{p|f}\frac{1}{p^2+p+1}\prod_{p\,\nmid\, 3f}\left(1-\frac{1}{p^2+p+1}\right)
+
O(Y^{-1+\varepsilon}f^{-2+\varepsilon})
\\
&=
\frac{13}{24\zeta(3)n^\pm}\prod_{p|f}\frac{1}{p(p+1)}\prod_{p}\left(1-\frac{1}{p^2+p+1}\right)
+
O(Y^{-1+\varepsilon}f^{-2+\varepsilon})
\\
&=
\frac{13}{24\zeta(2)n^\pm}\prod_{p|f}\frac{1}{p(p+1)}
+
O(Y^{-1+\varepsilon}f^{-2+\varepsilon})
\end{align*}
Setting $Y=f^{-1}X^{1/17}$ and putting this all together yields the result.
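Indeed, with this choice of $Y$ we have
\[
Y^{25/9}f^{16/9}X^{7/9+\varepsilon}=f^{-1}X^{25/153+7/9+\varepsilon}=f^{-1}X^{16/17+\varepsilon}
\,,\qquad
Y^{-1+\varepsilon}f^{-2+\varepsilon}X=O(f^{-1+\varepsilon}X^{16/17+\varepsilon})
\,,
\]
and $f^{-2}X^{5/6}$ is smaller still, so all three error terms are $O(f^{-1}X^{16/17+\varepsilon})$.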
\end{proof}
Let $T$ denote the collection of all squarefree $f$ with the property that $p\equiv 2\pmod{3}$ for all primes $p$ dividing $f$.
We have
\begin{align}
\nonumber
N^\pm_1(X) &= \sum_{f\in T}N^\pm(f;X)
\\
\nonumber
&= \sum_{\substack{f\in T\\f\leq X^{1/2}} }
\left( \frac{13}{24\zeta(2)n^\pm}\prod_{p|f}\frac{1}{p(p+1)}\cdot X + O(f^{-1}X^{16/17+\varepsilon}) \right)
\\
\nonumber
&=
\frac{13}{24\zeta(2)n^\pm}\cdot X \sum_{f\in T}\prod_{p|f}\frac{1}{p(p+1)}
+
X\sum_{f>X^{1/2}} O(f^{-2})
+
\sum_{f\leq X^{1/2}}O(f^{-1}X^{16/17+\varepsilon})
\\
\label{E:manip}
&=
\frac{13}{24\zeta(2)n^\pm}\prod_{\substack{p\equiv 2\pmod{3}}}\left(1+\frac{1}{p(p+1)}\right)\cdot X + O(X^{16/17+\varepsilon}).
\end{align}
This establishes the desired formula for $N^\pm_1(X)$. In order to deal with $N^\pm_2(X)$ we require
a slight modification of the quantity $N(f;X)$.
For each squarefree $f$ coprime to $3$ we write $N'(f;X)$ to denote the number of such fields that are totally ramified at $3$ and the primes dividing $f$ but no other primes \emph{and}
that also satisfy the extra condition $d\equiv 1\pmod{3}$.
(For the remainder of this section we have dropped $\pm$ from most of the notation.)
\begin{proposition}\label{incl-ex2}
\[
N'(f;X)
=
\frac{1}{216\zeta(2)n^\pm}\prod_{p|f}\frac{1}{p(p+1)} X
+
O(f^{-1}X^{16/17+\varepsilon})
\,.
\]
\end{proposition}
\begin{proof}
As before, we can write
\[
N'(f;X)=\sum_{(r,3f)=1}\mu(r)M'(rf;X)
\]
and apply Theorem~1.3 of~\cite{taniguchi.thorne} to obtain
\[
M'(f;X)=
c'_f X
+ O(f^{-2}X^{5/6}) + O(f^{16/9}X^{7/9+\varepsilon})
\,.
\]
The calculation of $c'_f$ is identical to that of $c_f$ except for the local factor at the prime $3$.
We need to impose the additional conditions that $3$ is totally ramified and $d\equiv 1\pmod{3}$.
These conditions are definable in terms of congruence conditions on the coefficients of the corresponding cubic form;
indeed, this is equivalent to saying $\Delta\equiv 3^4\pmod{3^5}$.
At this juncture, computing the local densities on the ``forms side'' via the Delone--Fadeev correspondence
is more convenient; please see \S4 of~\cite{bhargava.shankar.tsimerman} for details regarding
this type of local computation.
Let $S$ denote the collection of integral binary cubic forms having a triple root modulo $3$,
satisfying the maximality condition at $3$,
and satisfying our congruence condition on the discriminant.
Let $\mu_3(S)$ denote the $3$-adic density of the $3$-adic closure of $S$ in $\mathbb{Z}_3^4$.
(Here the additive measure $\mu_3$ is normalized so that $\mu_3(\mathbb{Z}_3^4)=1$.)
In this notation, we have
\[
c'_f=\mu_3(S)(1-3^{-2})^{-1}(1-3^{-3})^{-1}c_f
\,.
\]
All that remains is to compute $\mu_3(S)$.
Since maximality is a condition modulo~$3^2$, we may do all our calculation modulo~$3^2$.
There are $8$ forms over $\mathbb{Z}/3\mathbb{Z}$ with a triple root
and $2/3$ of the lifts of these forms to $\mathbb{Z}/3^2\mathbb{Z}$ are maximal at $3$.
For each of these $432$ forms, we compute the discriminant via the standard formula and then check whether the fundamental part of the discriminant
satisfies $d\equiv 1\pmod{3}$. Precisely $48$ of these fit the bill, in other words, $1/9$ of the forms under consideration.
It follows that the $3$-adic density is
$\mu_3(S)=(8/3^4)(2/3)(1/9)=16/2187$.
Observe that
\[
\frac{16/2187}{(1-3^{-2})(1-3^{-3})}=1/117
\,,
\]
which leads to
\[
c_f'
=
\frac{1}{234\zeta(3)n^\pm}
\prod_{p|f}
\frac{1}{p^2+p+1}
\,.
\]
(The extra factor can also be computed as $(1/9)(3^2+3+1)^{-1}=1/117$.)
The rest of the proof proceeds exactly as in the proof of Proposition~\ref{incl-ex}.
\end{proof}
Applying the previous proposition and following the same procedure
we used to obtain our formula for $N_1(X)$
yields
\[
N_2(X)=
X
\frac{1}{216n^\pm \zeta(2)}
\prod_{\substack{p\equiv 2\pmod{3}}}\left(1+ \frac{1}{p(p+1)}\right) + O\left(X^{16/17+\varepsilon}\right).
\]
Finally, subtracting, we have
\[
N(X)
=
X
\frac{29}{54n^\pm\zeta(2)}
\prod_{\substack{p\equiv 2\pmod{3}}}\left(1+ \frac{1}{p(p+1)}\right) + O\left(X^{16/17+\varepsilon}\right).
\]
This proves Theorem~\ref{T:1}, and Corollary~\ref{C:1} follows.
\section{The average genus number}\label{ave}
First, we verify that the cyclic fields do not contribute to the average.
When $K$ is cyclic, we have $\Delta\in\{f^2, (3f)^2, (9f)^2\}$ and,
by Theorem~\ref{T:frohlich}, $ g_K = 3^{e-1} $;
in this case, $e$ is the number of odd primes that are (totally) ramified in $K$.
Therefore, we have
\[
\sum_{\substack{0<\pm \Delta\leq X\\ K\ \text{cyclic}}}
g_K
\leq
\sum_{k=1}^\infty \pi_k(\sqrt{X})3^{k-1}
\,,
\]
where $\pi_k(y)$ denotes the number of positive integers $\leq y$ with exactly $k$ prime factors.
Setting $y=\sqrt{X}$ and using the fact (see~\cite{landau}) that
\[
\pi_k(y)\sim\frac{y}{\log y}\frac{(\log\log y)^{k-1}}{(k-1)!}
\]
we obtain
\[
\sum_{\substack{0<\pm \Delta\leq X\\ K\ \text{cyclic}}}
g_K
=
O\left(
\frac{y}{\log y}
\sum_{k=1}^\infty
\frac{(3\log\log y)^{k-1}}{(k-1)!}
\right)
=
O\left(
X^{1/2}(\log X)^2
\right)
\,.
\]
The fact that the above expression is $o(X)$ tells us that
cyclic fields do not contribute to the average genus number.
We now turn to the main part of the proof.
Define $\psi(n)$ to be the number of primes $p$ dividing $n$ satisfying $p\equiv 1\pmod{3}$.
As we are ignoring cyclic fields, everything that follows holds up to an error of $O(X^{1/2+\varepsilon})$.
We have
\begin{align*}
\sum_{0<\pm \Delta\leq X}g_K
&=
\sum_{0<\pm \Delta\leq X} 3^{\psi(f)}
-\sum'_{\substack{0<\pm \Delta\leq X}} 3^{\psi(f)}
+\sum'_{\substack{0<\pm \Delta\leq X}} 3^{{\psi(f)}+1}
\\
&=
\sum_{0<\pm \Delta\leq X} 3^{\psi(f)}
+2\sum'_{\substack{0<\pm \Delta\leq X}} 3^{\psi(f)}
\\
&=
\sum_{(f,3)=1}^\flat 3^{\psi(f)} N(f;X)
+2
\sum_{(f,3)=1}^\flat 3^{\psi(f)} N'(f;X),
\end{align*}
where $\displaystyle \sum'$ denotes only summing over those fields where $3$ is totally ramified
with $d\equiv 1\pmod{3}$
and the $\displaystyle \sum^\flat$ denotes summing over squarefree $f$.
Applying Proposition~\ref{incl-ex} to compute the first sum above, we obtain:
\begin{align*}
\sum_{(f,3)=1}^\flat 3^{\psi(f)} N(f;X)
&=
\frac{13}{24\zeta(2)n^\pm} X
\sum_{\substack{(f,3)=1\\f\leq X^{1/2}}}^\flat 3^{\psi(f)} \prod_{p|f}\frac{1}{p(p+1)}
+
\sum_{\substack{(f,3)=1\\f\leq X^{1/2}}}^\flat 3^{\psi(f)}
O(f^{-1}X^{16/17+\varepsilon})\,
\end{align*}
After performing manipulations similar to (\ref{E:manip}) this yields
\[
\frac{13}{24n^\pm\zeta(2)}X
\prod_{p\neq 3} \left(1+\frac{3^{\psi(p)}}{p(p+1)}\right)
+
\sum_{f>X^{1/2}}3^{\psi(f)}O(f^{-2}X)+
\sum_{f\leq X^{1/2}} 3^{\psi(f)}O(f^{-1}X^{16/17+\varepsilon})\,.
\]
The main term is
\begin{align*}
\frac{13}{24n^\pm\zeta(2)}
X\prod_{p\equiv 1\pmod{3}} \left(1+\frac{3}{p(p+1)}\right)\prod_{p\equiv 2\pmod{3}} \left(1+\frac{1}{p(p+1)}\right).
\end{align*}
Because $3^{\psi(f)}=O(f^\varepsilon)$, the error term is
\begin{align*}
\sum_{f>X^{1/2}}O(f^{-2+\varepsilon}X)+
\sum_{f\leq X^{1/2}} O(f^{-1+\varepsilon}X^{16/17+\varepsilon})\,
=
O(X^{16/17+\varepsilon})
\,.
\end{align*}
In exactly the same manner, we apply Proposition~\ref{incl-ex2} to
compute the second term
\[
\sum_{(f,3)=1}^\flat 3^{\psi(f)} N'(f;X)
\,.
\]
This simply results in multiplying the first outcome by a factor of $1/117$. Thus the whole sum is the first term multiplied by $1+2/117=119/117$. Hence, we obtain
\[
\frac{119}{216n^\pm\zeta(2)}
X\prod_{p\equiv 1\pmod{3}} \left(1+\frac{3}{p(p+1)}\right)\prod_{p\equiv 2\pmod{3}} \left(1+\frac{1}{p(p+1)}\right)
+O(X^{16/17+\varepsilon})
\,.
\]
Dividing the above by $1/(2n^\pm\zeta(3))$ yields the desired expression,
thereby proving Theorem~\ref{T:2}.
\section{Counting cubic fields with given genus number} \label{dist}
As before, $\mathcal{G}^\pm$ will denote the collection of all cubic fields with $\sgn(\Delta)=\pm 1$.
We now let $\mathcal{F}^\pm \subseteq \mathcal{G}^\pm$ denote the collection of all cubic fields $K$ with $g_K=3^k$. As before, define $N^\pm(X)=\#\{K\in\mathcal{F}^\pm: |\Delta|\leq X\}$. Let $T_k$ denote the collection of squarefree integers $n$ coprime to $3$ with $\psi(n) = k$ (i.e., having exactly $k$ prime factors $p$ satisfying $p\equiv 1\pmod{3}$).
Define
\begin{align*}
\mathcal{F}^\pm_1&=\{K\in\mathcal{G}^\pm\mid f \in T_k\}\\
\mathcal{F}^\pm_2&=\{K\in\mathcal{G}^\pm\mid f\in T_k, \text{ $3$ is totally ramified, and } d\equiv 1\pmod{3}\}\\
\mathcal{F}^\pm_3&=\{K\in\mathcal{G}^\pm\mid f\in T_{k-1}, \text{ $3$ is totally ramified, and } d\equiv 1\pmod{3}\}.
\end{align*}
By Theorem~\ref{T:frohlich}, we have $\mathcal{F}^\pm=(\mathcal{F}^\pm_1\setminus\mathcal{F}^\pm_2)\cup\mathcal{F}^\pm_3$,
and therefore $N^\pm(X)=N^\pm_1(X) - N^\pm_2(X)+N^\pm_3(X)$ where
$N^\pm_i(X)=\#\{K\in\mathcal{F}^\pm_i : |\Delta|\leq X\}$.
Now we can proceed exactly as in Section~\ref{genus1} to find
\[
N^\pm_1(X) = \frac{13}{24\zeta(2)n^\pm}X \sum_{f\in T_k} \prod_{p|f}\frac{1}{p(p+1)}
+O(X^{16/17+\varepsilon})
\]
and we multiply by $1/117$ to obtain
\[
N^\pm_2(X) = \frac{1}{216\zeta(2)n^\pm}X \sum_{f\in T_k} \prod_{p|f}\frac{1}{p(p+1)}
+ O(X^{16/17+\varepsilon})
\,.
\]
Similarly, we obtain
\[
N^\pm_3(X) = \frac{1}{216\zeta(2)n^\pm}X \sum_{f\in T_{k-1}} \prod_{p|f}\frac{1}{p(p+1)}
+ O(X^{16/17+\varepsilon})
\,,
\]
and this makes the desired proportion equal to
\[
\frac{\zeta(3)}{\zeta(2)}
\left[
\frac{29}{27}\sum_{f\in T_k} \prod_{p|f}\frac{1}{p(p+1)}
+
\frac{1}{108}\sum_{f\in T_{k-1}} \prod_{p|f}\frac{1}{p(p+1)}
\right]
\, .
\]
We include here a table of approximations to the first few percentages.
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
$k$ & $0$ & $1$ & $2$ & $3$ \\
\hline
proportion & $96.23\%$ & $3.72\%$ & $0.05\%$ & really small! \\
\hline
\end{tabular}
\end{center}
This concludes the proof of Theorem~\ref{T:3}.
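As with the constants above, the entries of this table can be checked numerically (a sketch, truncating at $10^6$): writing $x_p=1/(p(p+1))$, one has $S_k:=\sum_{f\in T_k}\prod_{p|f}x_p=\big(\prod_{p\equiv 2\pmod{3}}(1+x_p)\big)\,\sigma_k$, where $\sigma_k$ is the $k$-th elementary symmetric function of $\{x_q : q\equiv 1\pmod{3}\}$.
\begin{verbatim}
# Sketch: approximate the proportions of Theorem 3 for k = 0, 1, 2, 3.
from mpmath import mp, zeta
from sympy import primerange

mp.dps = 15
A = mp.mpf(1)                        # product of (1 + x_p) over p = 2 (mod 3)
sig = [mp.mpf(1)] + [mp.mpf(0)] * 3  # sigma_0, ..., sigma_3
for p in primerange(2, 10**6):
    x = mp.mpf(1) / (p * (p + 1))
    if p % 3 == 2:
        A *= 1 + x
    elif p % 3 == 1:                 # update symmetric sums top-down
        for k in range(3, 0, -1):
            sig[k] += sig[k - 1] * x

S = [A * s for s in sig]             # S_k as in Theorem 3
for k in range(4):
    prev = S[k - 1] if k else mp.mpf(0)
    print(k, zeta(3) / zeta(2) * (mp.mpf(29) / 27 * S[k] + prev / 108))
\end{verbatim}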
We note that the formulas for $N^\pm_1$, $N^\pm_2$, $N^\pm_3$ just derived can
essentially be used to establish Theorems~\ref{T:1},~\ref{T:2}, and~\ref{T:3}
but we have chosen to structure the paper in this manner for clarity of exposition.
\section{Norm-Euclidean cubic fields} \label{NE}
Davenport showed that there are only finitely many norm-Euclidean cubic fields with
negative discriminant~\cite{davenport.euclidean}.
Heilbronn showed that there are only finitely many norm-Euclidean cyclic cubic fields with positive
discriminant~\cite{heilbronn.cubic},
and the first author completely determined these fields under
the GRH~\cite{mcgown.cyclic.cubic1, mcgown.cyclic.cubic2}.
This leaves open the case of non-cyclic totally real cubic fields.
In fact, Heilbronn says that he would ``be surprised to learn that the analogue of [the finiteness theorem] is true in this case''.
Lemmermeyer carried out computations (up to discriminant $1.3\cdot 10^4$)
in this setting (see \cite{lemmermeyer})
and observed that the percentage of norm-Euclidean fields was decreasing, and consequently he stated that
``it is tempting to conjecture that the norm-Euclidean cubic fields have density 0.''
This leads to the following problem: Give an upper bound
on the proportion of totally real cubic fields that are norm-Euclidean. To our knowledge,
no one has given a nontrivial upper bound in this setting, i.e., a bound less than $100\%$.
The first thing one might try to do is to use genus theory;
in light of Corollary~\ref{C:1}, one knows that less than $96.24\%$ of totally real cubic fields are norm-Euclidean.
The question then is whether one can improve on the upper bound coming from genus theory.
Theorem~\ref{T:euclidean.genus} accomplishes this, albeit very modestly.
In order to give our results in more detail, we must first state Heilbronn's criterion
in our situation.
Let $K$ be a totally real cubic field, and adopt the previous notation for $\Delta,d,f$.
Denote by $F$ the product of all the totally ramified primes in $K$. Notice that $F=f$ or $F=3f$
depending upon whether $3$ is totally ramified.
The following is the natural adaptation to our setting of a result of Heilbronn
on cyclic cubic fields, which has its roots in a theorem of Erd\"os--Ko~\cite{erdos.ko};
it is also a special case of a more general theorem due to Egami~\cite{egami} who attributed it to Lenstra.
\begin{lemma}[Heilbronn's criterion]\label{L:Heilbronn}
If we can write $F=a+b$ with $a,b\in\mathbb{Z}^+$ where
$a,b$ are not norms and $a$ is a cubic residue modulo $F$,
then $K$ is not norm-Euclidean.
\end{lemma}
One amusing observation is that if $p \not\equiv 1 \pmod{3}$ then every number is a cubic residue modulo $p$,
so Heilbronn's criterion is more easily verified in the genus number one setting (where all $p$ dividing $F$ have this property).
\begin{lemma}
Suppose $K$ has genus number one.
If we can write $F=a+b$ with $a,b\in\mathbb{Z}^+$ where
$a,b$ are not norms, then $K$ is not norm-Euclidean.
\end{lemma}
Let $H(X)$ denote the number of genus number one cubic fields with $0<\Delta\leq X$ to which Heilbronn's criterion applies,
and let $H(F;X)$ denote the number of such fields with fixed $F$. We have
\[
H(X)=\sum_{F\leq X^{1/2}}H(F;X)\,.
\]
\begin{proposition}\label{P:HFX}
We have
\[
H(F;X)=\frac{b_F}{12\zeta(2)}\prod_{p|F}\frac{1}{p(p+1)}X + O(e^{4F/17}X^{16/17+\varepsilon})
\]
for some explicitly computable $b_F\in\mathbb{Q}\cap[0,1]$.
\end{proposition}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$F$ & $1$ & $2$ & $3$ & $5$ & $6$ & $10$ & $11$ & $15$ & $17$ & $22$\\
\hline
&&&&&&&&&&\\[-1.5ex]
$b_F$ & $0$ & $0$ & $0$ & $\displaystyle\frac{1}{18}$& $0$& $\displaystyle\frac{7}{96}$ &
$\displaystyle\frac{55}{288}$ &
$\displaystyle\frac{1574}{15309}$ & $\displaystyle\frac{231205}{653184}$ & $\displaystyle\frac{1292771}{4354560}$ \\[2ex]
\hline
$\approx$ & $0$ & $0$ & $0$ & $0.0556$& $0$& $0.0729$ & $0.191$ & $0.103$ & $0.354$ & $0.297$ \\
\hline
\end{tabular}
\end{center}
Using the previous proposition, we will prove the main result of this section which
immediately implies Theorem~\ref{T:euclidean.genus}.
\begin{theorem}\label{P:HX}
We have $H(X)\sim B X$ with
$5.7\cdot 10^{-4}\leq B\leq 6.1\cdot 10^{-4}$.
Consequently, Heilbronn's criterion applies to strictly between $4/5$ of a percent and $1$ percent of all totally real cubic fields with genus number one.
\end{theorem}
Admittedly, the proportion
in the previous theorem is rather small.
The main interest here is to know that such a density exists and is positive.
We obtain the rather weak corollary that less than
$96\%$ of totally real cubic fields are norm-Euclidean.
However, this theorem does say something about the limitations of these methods;
in particular, one cannot hope to beat $95\%$ by using only genus theory and totally ramified primes
(via Lemma~\ref{L:Heilbronn}) --- one would need to inject some new ideas.
In principle, one could use the ideas presented here to compute $B$
more accurately, but we have not pursued this (because of the reasons just mentioned).
\begin{remark*}
Even though the total proportion given in Theorem~\ref{P:HX} is rather small,
if we only consider fields where $F$ is large we can give much stronger results.
For example, when $F=167$, we compute $b_F\approx 0.9421$. In fact, the precise number is:
\[
\hspace{-2ex}
\frac{5707366742127207720711393876905481748779979640006818006913}{6058037125307413601957148346537399067112071383363249766400}
\]
The upshot is the following:
Suppose we know that $167$ is the only totally ramified prime in $K$.
Then $K$ has less than a $5.8\%$ chance of being norm-Euclidean.
\end{remark*}
Before launching into the proofs, we recast Heilbronn's criterion into a form that
is more convenient for our purposes.
We define the set
\[
S=\{a\in\mathbb{Z}\mid 0<a<F\,,\;a\notin N_{K/\mathbb{Q}}(\mathcal{O}_K)\}
\, ,
\]
and rewrite the condition in Heilbronn's criterion as
\[
(\dagger)\; \exists \, a\in S\text{ such that } F-a\in S
\,.
\]
Recall that $n\neq 0$ is a norm if and only if $3 | v_p(n)$ for all inert $p | n$.
In light of this, we immediately see that $(\dagger)$ holds iff there exists an $a\in(0,F)$ and a pair
of inert primes $\{p,q\}$ such that $p|a$, $q|F-a$, and $3\not |\; v_p(a)v_q(F-a)$.
If this condition is satisfied, we call the set of two primes $\{p,q\}$ a Heilbronn pair for~$F$.
\begin{example*}
We find all the Heilbronn pairs for $F=11$.
By symmetry, there are $5$ possible choices of $(a,F-a)$ we must consider.
We can reject the values $(1,10)$ and $(3,8)$ because they contain cubes, which leaves three remaining choices for $(a,F-a)$.
The choice $(2,9)$ leads to the H-pair $\{2,3\}$, the choice $(4,7)$ leads to the H-pair $\{2,7\}$, and the choice
$(5,6)$ leads to the H-pairs $\{5,2\}$, $\{5,3\}$. In summary, the Heilbronn pairs
for $F=11$ are $\{2,3\}$, $\{2,5\}$, $\{2,7\}$, $\{3,5\}$. This means
Heilbronn's criterion applies to a cubic field $K$ with $F=11$ if and only if
the collection of primes that are inert in $K$ contains at least one of these four $H$-pairs.
\end{example*}
Let $\mathcal{I}$ be a subset of the primes $p$ with $p<F$.
We say that $\mathcal{I}$ is admissible if it contains both primes in a Heilbronn pair.
Let $H(F,\mathcal{I};X)$ denote the number of such fields
for which the set of inert primes less than $F$ is exactly $\mathcal{I}$.
In light of the discussion above, we have
\begin{equation}\label{E:finite}
H(F;X)=\sum_{\mathcal{I}\text{ admissible}}H(F,\mathcal{I};X)
\,.
\end{equation}
\begin{proposition}\label{P:HFIX}
Suppose $3$ is not totally ramified in $K$. Then
\[
H(f,\mathcal{I};X)=
\frac{1}{12\zeta(2)}\prod_{p|f}\frac{1}{p(p+1)}\prod_{\substack{p<f\\p\not |f}}\frac{a_p}{1+p^{-1}}X
+O(e^{4f/17}X^{16/17+\varepsilon})
\]
where
\[
a_p=
\begin{cases}
1/3 & p\in \mathcal{I}\\
2/3+1/p & p\not\in \mathcal{I}\\
\end{cases}
\]
\end{proposition}
\begin{proof}
Our hypothesis gives $\Delta=df^2$.
The constant in the main term is:
\begin{align*}
&
\frac{1}{12\zeta(3)}\prod_{p|f}\frac{1/p^2}{1+p^{-1}+p^{-2}}\prod_{\substack{p<f\\p\not |f}}\frac{a_p}{1+p^{-1}+p^{-2}}
\sum_{\substack{r> f\\(r,\prod_{p\leq f}p)=1}}\mu(r)\prod_{p|r}\frac{1}{p^2+p+1}
\\
&
=
\frac{1}{12\zeta(3)}\prod_{p|f}\frac{1}{p^2+p+1}\prod_{\substack{p<f\\p\not |f}}\frac{a_p}{1+p^{-1}+p^{-2}}
\prod_{p>f}\left(1-\frac{1}{p^2+p+1}\right)
\\
&
=
\frac{1}{12\zeta(2)}\prod_{p|f}\frac{1}{p^2+p+1}\prod_{\substack{p<f\\p\not |f}}\frac{a_p}{1+p^{-1}+p^{-2}}
\prod_{p\leq f}\left(1-\frac{1}{p^2+p+1}\right)^{-1}
\\
&
=
\frac{1}{12\zeta(2)}\prod_{p|f}\frac{1}{p(p+1)}\prod_{\substack{p<f\\p\not |f}}\frac{a_p}{1+p^{-1}}
\end{align*}
We follow the same procedure as in Section~\ref{genus1}.
However, this time there are many more local conditions being imposed.
When the smoke clears, the error term is equal to
\[
O(f^{-2}X^{5/6})
+
O(Y^{25/9}f^{8/9}e^{8f/9}X^{7/9+\varepsilon})
+
O(Y^{-1+\varepsilon}f^{-2+\varepsilon}X)
\,.
\]
Setting $Y=X^{1/17}f^{-13/17}e^{-4f/17}$ yields the error term
\[
O(f^{-21/17+\varepsilon}e^{4f/17}X^{16/17+\varepsilon})
\,.
\]
\end{proof}
If $3$ is totally ramified in $K$, then the previous proposition still holds, but with
$f$ replaced by $F=3f$ and the constant in the main term multiplied by a factor
of $8/9$.
\begin{example*}
We return to our example of $F=11$.
We saw that the four H-pairs in this situation are $\{2,3\}$, $\{2,5\}$, $\{2,7\}$, $\{3,5\}$.
Consequently, there are $9$ admissible sets $\mathcal{I}$; namely:
$\{2,3\}$, $\{2,3,5\}$, $\{2,3,7\}$, $\{2,3,5,7\}$, $\{2,5\}$, $\{2,5,7\}$, $\{2,7\}$, $\{3,5\}$, $\{3,5,7\}$.
Consider for the moment the choice $\mathcal{I}=\{2,3,5\}$. In this situation, the
extra factor is
\[
\frac{1/3}{1+2^{-1}}\cdot\frac{1/3}{1+3^{-1}}\cdot\frac{1/3}{1+5^{-1}}\cdot\frac{2/3+1/7}{1+7^{-1}}=\frac{85}{7776}
\]
and hence
Proposition~\ref{P:HFIX} yields
\[
H(F,\mathcal{I};X)\sim
\frac{85}{7776}
\cdot
\frac{1}{12\zeta(2)}\prod_{p|f}\frac{1}{p(p+1)}X
\]
For each admissible $\mathcal{I}$ we get an additional rational factor; summing over all
admissible $\mathcal{I}$ yields:
\[
H(F;X)\sim
\frac{55}{288}
\cdot
\frac{1}{12\zeta(2)}\prod_{p|f}\frac{1}{p(p+1)}X
\]
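As a check (the individual numerators below are our own computation, written over the common denominator $15552$), summing the nine factors in the order the admissible sets were listed above gives
\[
\frac{442+170+182+70+510+210+546+595+245}{15552}
=\frac{2970}{15552}=\frac{55}{288}
\,.
\]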
\end{example*}
\begin{proof}[Proof of Proposition~\ref{P:HFX}]
This follows immediately from Proposition~\ref{P:HFIX}
since the sum appearing in (\ref{E:finite}) is finite.
The calculation of the $b_F$ is along the lines of the previous example; namely, when $F$ is not divisible by $3$,
\[
b_F=
\sum_{\mathcal{I}\text{ admissible}}
\prod_{\substack{p<F\\p\not |F}}\frac{a_p}{1+p^{-1}}
\, ,
\]
and $b_F$ is equal to the same
expression times $8/9$ when $F$ is divisible by $3$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{P:HX}]
Using Proposition~\ref{P:HFX} for $F<Y$ we obtain:
\begin{align*}
H(X)
&=
\sum_{F<Y} H(F;X)
+
\sum_{F>Y}
O(F^{-2+\varepsilon}X)
\\
&=
\frac{1}{12\zeta(2)}X\sum_{F} b_F\prod_{p|F}\frac{1}{p(p+1)}
+
\sum_{F<Y} O(e^{4F/17}X^{16/17+\varepsilon})
+
\sum_{F>Y}
O(F^{-2+\varepsilon}X)
\\
&=
\frac{1}{12\zeta(2)}X\sum_{F} b_F\prod_{p|F}\frac{1}{p(p+1)}
+
O(e^{4Y/17}X^{16/17+\varepsilon})
+
O(Y^{-1+\varepsilon}X)
\end{align*}
Choosing $Y$ to be a small power of $\log X$ proves the result with
\[
B= \frac{1}{12\zeta(2)}\sum_{F} b_F\prod_{p|F}\frac{1}{p(p+1)}
\,.
\]
\end{proof}
\section*{Acknowledgements}
The authors wish to thank Manjul Bhargava, Henrik Lenstra, Carl Pomerance, Frank Thorne, and Melanie Matchett Wood for productive input.
\nocite{belabas1, belabas2, ishida}
\bibliographystyle{alpha}
find_package(Doxygen)
if(DOXYGEN_FOUND)
configure_file(${MY_CMAKE_INPUT}/Doxyfile.in ${CMAKE_CURRENT_BINARY_DIR}/Doxyfile @ONLY)
add_custom_target(
doc ALL ${DOXYGEN_EXECUTABLE} ${CMAKE_CURRENT_BINARY_DIR}/Doxyfile
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
COMMENT "Generating project documentation with Doxygen" VERBATIM
)
# Uncomment if you want to pack the Doxygen documentation
# install(DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/html
# DESTINATION doc
# )
else()
message(FATAL_ERROR "Doxygen was not found.")
endif()
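# For reference, a minimal Doxyfile.in consumed by the configure_file()
# call above might look like the following (illustrative sketch only;
# the real template lives at ${MY_CMAKE_INPUT}/Doxyfile.in and the
# settings below are assumptions, not its actual contents):
#
#   PROJECT_NAME     = "@PROJECT_NAME@"
#   OUTPUT_DIRECTORY = @CMAKE_CURRENT_BINARY_DIR@
#   INPUT            = @CMAKE_CURRENT_SOURCE_DIR@
#   GENERATE_HTML    = YES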
The Aston Martin Cygnet seems light years away from the company that just unveiled the DBS Superleggera, but it's back at the 2018 Goodwood Festival of Speed. Only it's been tweaked a little bit - and now has the 430bhp V8 engine from the previous-generation Vantage S.
Aston calls it the 'Ultimate City Car'. The one-off V8-Cygnet was created by the brand's 'Q' branch for a customer commission. Aston Martin says the 'Q' department exists to allow customers to fully express themselves – though what the owner of this car wants to convey might be beyond cognition.
The V8 Cygnet looks relatively untouched from the front – apart from a mesh grille, and those new wheel arches which we'll get to later – but it hides a very different powerplant under its bonnet. Instead of the 1.3-litre motor inherited from the Toyota iQ on which it's based, this Cygnet gets the 4.7-litre, 430bhp V8 engine from the Vantage S. It produces 430bhp and 361lb ft of torque, and will hit 60mph from a standstill in just 4.2 seconds – faster than the Vantage S. And with a top speed of 170mph, the V8 Cygnet is over 60mph faster than the standard supermini.
It breathes out of a new twin-exhaust system with twin underfloor mufflers and 'cats, and the relatively compact layout means none of the V8's roar will be lost on the way: This car promises to sound as distinctive as it looks.
Every other alteration in the car has essentially been added to help cope with the Cygnet's 430bhp. The V8 Cygnet has a rollcage, new front bulkhead and transmission tunnel, while the subframe and suspension system were also pulled out from the Vantage S donor car. That's the old Vantage, not the new one.
A large steel fuel tank is now stowed in the boot of the Cygnet, while a race-ready extinguisher system and racing harnesses suggest it was ultimately developed for the track.
There isn't much of an interior to speak of; most of the car has been ripped out to cut weight to 1375kg, and what has been added is carbon fibre. Flared wheel arches represent the V8 Cygnet's most obvious augmentation, added on in carbon fibre to accommodate the car's widened track. The wheels in those arches have grown in stature too, now measuring 19 inches in diameter, up from the standard 16.
"redpajama_set_name": "RedPajamaC4"
} | 4,067 |
'use strict';
var GetIntrinsic = require('get-intrinsic');
var $TypeError = GetIntrinsic('%TypeError%');
var $parseInt = GetIntrinsic('%parseInt%');
var inspect = require('object-inspect');
var regexTester = require('safe-regex-test');
var callBound = require('call-bind/callBound');
var every = require('../helpers/every');
var isDigit = regexTester(/^[0-9]$/);
var $charAt = callBound('String.prototype.charAt');
var $strSlice = callBound('String.prototype.slice');
var IsArray = require('./IsArray');
var IsInteger = require('./IsInteger');
var Type = require('./Type');
var canDistinguishSparseFromUndefined = 0 in [undefined]; // IE 6 - 8 have a bug where this returns false
var isStringOrHole = function (capture, index, arr) {
return Type(capture) === 'String' || (canDistinguishSparseFromUndefined ? !(index in arr) : Type(capture) === 'Undefined');
};
// https://ecma-international.org/ecma-262/6.0/#sec-getsubstitution
// eslint-disable-next-line max-statements, max-params, max-lines-per-function
module.exports = function GetSubstitution(matched, str, position, captures, replacement) {
if (Type(matched) !== 'String') {
throw new $TypeError('Assertion failed: `matched` must be a String');
}
var matchLength = matched.length;
if (Type(str) !== 'String') {
throw new $TypeError('Assertion failed: `str` must be a String');
}
var stringLength = str.length;
if (!IsInteger(position) || position < 0 || position > stringLength) {
throw new $TypeError('Assertion failed: `position` must be a nonnegative integer, and less than or equal to the length of `string`, got ' + inspect(position));
}
if (!IsArray(captures) || !every(captures, isStringOrHole)) {
throw new $TypeError('Assertion failed: `captures` must be a List of Strings, got ' + inspect(captures));
}
if (Type(replacement) !== 'String') {
throw new $TypeError('Assertion failed: `replacement` must be a String');
}
var tailPos = position + matchLength;
var m = captures.length;
var result = '';
for (var i = 0; i < replacement.length; i += 1) {
// if this is a $, and it's not the end of the replacement
var current = $charAt(replacement, i);
var isLast = (i + 1) >= replacement.length;
var nextIsLast = (i + 2) >= replacement.length;
if (current === '$' && !isLast) {
var next = $charAt(replacement, i + 1);
if (next === '$') {
result += '$';
i += 1;
} else if (next === '&') {
result += matched;
i += 1;
} else if (next === '`') {
result += position === 0 ? '' : $strSlice(str, 0, position - 1);
i += 1;
} else if (next === "'") {
result += tailPos >= stringLength ? '' : $strSlice(str, tailPos);
i += 1;
} else {
var nextNext = nextIsLast ? null : $charAt(replacement, i + 2);
if (isDigit(next) && next !== '0' && (nextIsLast || !isDigit(nextNext))) {
// $1 through $9, and not followed by a digit
var n = $parseInt(next, 10);
// if (n > m, impl-defined)
result += n <= m && Type(captures[n - 1]) === 'Undefined' ? '' : captures[n - 1];
i += 1;
} else if (isDigit(next) && (nextIsLast || isDigit(nextNext))) {
// $00 through $99
var nn = next + nextNext;
var nnI = $parseInt(nn, 10) - 1;
// if nn === '00' or nn > m, impl-defined
result += nn <= m && Type(captures[nnI]) === 'Undefined' ? '' : captures[nnI];
i += 2;
} else {
result += '$';
}
}
} else {
// the final $, or else not a $
result += $charAt(replacement, i);
}
}
return result;
};
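// Illustrative usage (a sketch, not part of the module; the require path
// is hypothetical). With matched = 'bc' found at position 1 of 'abcd' and
// a single capture 'b', the template '[$&-$1]' expands to '[bc-b]':
//
//   var GetSubstitution = require('./GetSubstitution');
//   GetSubstitution('bc', 'abcd', 1, ['b'], '[$&-$1]'); // => '[bc-b]'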
Track all your China Post packages: just enter your tracking number and get the delivery status online. For China Post shipments, just activate the CRM mailing feature on the home page. Contact China Post to get the REST API docs. One anonymous recipient who tracked his China Post package on Packagetrackr felt that China Post's delivery of the shipment (China Post tracking number #:************99IR) to the Beijing processing center (北京处理中心) was terrible. Another registered recipient who tracked his China Post package on Packagetrackr reported that it still hasn't come to the US: "Do you know if my package was shipped by airmail or surface mail? My tracking number is CPCN." 17TRACK is a powerful and inclusive package tracking platform; it can track shipments from many postal carriers, covering registered mail, parcels, EMS, and more. Air mail takes about 15 working days, SAL mail takes about 30 days, and surface mail takes one to two months. Find out where your package is right now: track your China Post parcel or EMS, Packet Plus, China Post Air Parcel, or China Post Registered Air Mail. The China Post tracking system sorts international mail into three categories: Air Parcel, Surface Air Lift (SAL) Parcel, and Surface Parcel. Ordinary mail is just as fast as registered air mail; the only difference is that registered mail can be tracked online.
"redpajama_set_name": "RedPajamaC4"
} | 4,822 |
\section{Introduction}
\IEEEPARstart{N}{etworked control} systems, wherein feedback loops are closed over communication networks, have received much attention in the last two decades \cite{antsaklis2007}. One of the main challenges in networked control systems is resource constraints, caused by various limitations in communication, computation, and energy, which can severely affect the overall system performance. Traditionally, in a networked control system, observations of the process are periodically sampled and transmitted to the controller because periodic sampling and transmission facilitate the design of such a system \cite{alur2007}. However, it has been conceived that not every sampled observation of the process has the same effect on the performance of a networked control system, and one can employ a transmission mechanism, i.e., an \emph{event trigger}, that transmits an observation only when a significant event occurs~\cite{astrom1999}. As a result, one can expect a reduction in the sampling rate for transmission in \emph{event-triggered control}. This elegant idea has led to the extensive development and employment of event triggers in different contexts including consensus of multi-agent systems~\cite{dimos2012}, distributed optimization~\cite{meinel2014}, medium access control~\cite{mamduhi2017}, and model predictive control~\cite{li2014}.
In this article, we investigate the impact of information on networked control systems, and illustrate how to quantify a fundamental property of stochastic processes that can enrich our understanding of such systems. Undoubtedly, transmission of an observation decreases the uncertainty of the controller, and hence increases the control performance in a networked control system. However, the amount of this increment in the control performance is still \emph{unknown}. Here, associated with optimal event-triggered control, we quantify the \emph{value of information} $\voi_k$ as the variation in the cost-to-go of the system given an observation at time~$k$. To that end, we develop a theoretical framework based on which the notion of value of information is conceivable.
In \emph{optimal} event-triggered control, one primarily deals with a distributed optimization problem with an event trigger and a controller as the decision makers. Yet, this problem for the joint design of the event trigger and controller is in general intractable (see e.g., \cite{wu2013, ramesh2013}). The reasons are that the optimal estimator at the controller is nonlinear with no analytical solution, estimation and control are coupled due to a dual effect, and the event trigger and controller have nonclassical information patterns. Nevertheless, one can characterize the solutions of this problem under a certain assumption, and study a trade-off between the sampling rate and control performance.
Here, following a game-theoretic analysis, we shed light on the structures of the optimal policies in optimal event-triggered control by restricting the information set of the controller to one with discarded negative information (i.e., information associated with non-transmitted observations). We cover two distinct information patterns: \emph{perfect information} where observations are the states of the process, and \emph{imperfect information} where observations are noisy outputs of the process. In both cases, observations are available at the event trigger instantly, but are transmitted to the controller sporadically with one-step delay.
\subsection{Related Work}
In a seminal work in 1999, {\AA}str\"om and Bernhardsson~\cite{astrom1999} showed for a first-order continuous-time stochastic process under a sampling rate constraint that event-triggered sampling outperforms periodic sampling in the sense of mean error variance. This work later fostered extensive research in event-triggered control. In general, in a networked control system an event trigger can be employed at the sensor side to reduce the sampling rate in the observation channel, or at the controller side to reduce the sampling rate in the control channel. We should point out that what we are interested in here is the former in which the event trigger and controller are distributed. In the joint design of the event trigger and controller in optimal event-triggered control, one makes a trade-off between the sampling rate and control performance. To elucidate the essence of this problem, we herein neglect network-induced effects such as quantization, packet dropouts, and time-varying delays.
Several works have addressed optimal event-triggered estimation, and characterized the optimal triggering policies~\cite{xu2004, rabi2012, lipsa2011, molin2017}. In particular, Xu and Hespanha~\cite{xu2004} studied optimal event-triggered estimation with perfect information by discarding the negative information. They searched in the space of stochastic triggering policies, and showed that the optimal triggering policy is indeed deterministic. Rabi and Baras~\cite{rabi2012} formulated optimal event-triggered estimation with perfect information as an optimal multiple stopping time problem by discarding the negative information, and showed that the optimal triggering policy for first-order systems is symmetric. Later, Lipsa and Martins~\cite{lipsa2011} used majorization theory to study optimal event-triggered estimation with perfect information without discarding the negative information, and proved for first-order systems that the optimal estimator is linear and the optimal triggering policy is symmetric. Moreover, Molin and Hirche~\cite{molin2017} developed an iterative algorithm for obtaining the optimal estimator and optimal triggering policy in optimal event-triggered estimation with perfect information that is applicable to systems with arbitrary noise distributions. They studied the convergence properties of the algorithm for first-order systems, and obtained a result that coincides with that in \cite{lipsa2011}. As explained before, in the joint design of the event trigger and controller a separation between estimation and control is not given a priori. Therefore, the results in the aforementioned studies do not apply directly to optimal event-triggered control.
However, there exist a number of studies that have addressed optimal event-triggered control, and characterized the optimal control policies~\cite{molin2013, molin2010, ramesh2013, demirel2017}. In particular, Molin and Hirche~\cite{molin2013, molin2010} investigated optimal event-triggered control with perfect and imperfect information, and showed that the optimal control policy is a certainty-equivalence policy while assuming that the triggering policy is a function of primitive variables. Ramesh~\emph{et~al.}~\cite{ramesh2013} studied dual effect in optimal event-triggered control with perfect information, and proved that the dual effect in general exists. In addition, they showed that the certainty-equivalence principle holds if and only if the triggering policy is independent of the control policy. Recently, Demirel~\emph{et~al.}~\cite{demirel2017} addressed optimal event-triggered control with imperfect information by adopting a stochastic triggering policy that is independent of the control policy, and proved that the optimal control policy is a certainty-equivalence policy. Unlike these studies, in addition to scrutinizing the notion of value of information in optimal event-triggered control, we herein characterize both optimal triggering policy and optimal control policy such that the corresponding policy profile represents a Nash equilibrium. Besides, we synthesize a closed-form suboptimal triggering policy with a performance guarantee that can readily be implemented. We cover both perfect and imperfect information, and show that our analysis is tractable and also extensible to high-order systems.
A special class of event-triggered estimation and event-triggered control is sensor scheduling in which open-loop triggering policies are employed. Sensor scheduling can be traced back to the 1970s. However, recently Trimpe and D'Andrea~\cite{trimpe2014} and Leong~\emph{et~al.}~\cite{leong2018} adopted sensor scheduling for networked control systems, and obtained open-loop triggering policies in terms of the estimation error covariances. It is also worth mentioning that in a rather different setup from what we consider in this study, Antunes and Heemels~\cite{antunes2014} considered a networked control system in which the event trigger and controller are both collocated with the sensor, and control inputs are to be transmitted to the process. They proposed an approximation algorithm, and showed that a performance improvement with respect to periodic control can be guaranteed. Our approximation algorithm is inspired by this idea. Nevertheless, herein unlike the above work, the event trigger and controller are distributed.
As mentioned earlier, in this article, we introduce the notion of value of information. Generally speaking, value of information is defined as the value that is assigned to the reduction of uncertainty from the decision maker's perspective given a piece of information~\cite{howard1966}. In other words, the value of information measures information beyond its probabilistic structure by considering the economic impact of uncertainty on the decision maker. The concept of value of information has widely been used in multiple disciplines including information economics~\cite{stigler1961}, risk management~\cite{gould1974}, and stochastic programming \cite{avriel1970}. Recently, closer to the applications of our work, value of information was adopted in sensor selection~\cite{zhang2010}, shortest path optimization~\cite{rinehart2011}, and prioritization in medium access control~\cite{molin2015}.
\subsection{Contributions and Outline}
Our main contributions, corresponding to each information pattern, are summarized as:
\begin{enumerate}
\item We show, under a certain assumption, that a separation in the optimal designs of the event trigger and controller is guaranteed.
\item We characterize the optimal triggering policy and optimal control policy such that the corresponding policy profile represents a Nash equilibrium.
\item We quantify the value of information, and demonstrate that the optimal triggering policy transmits an observation whenever the value of information is positive.
\item We provide an algorithm for approximation of the value of information, and synthesize a closed-form suboptimal triggering policy with a performance guarantee that can readily be implemented.
\end{enumerate}
The remainder of the article is organized in the following way. We formulate the problem in Section~\ref{c1:problem-formulation}. We provide the main results in Section~\ref{c1:main-results1}. We present numerical examples in Section~\ref{c1:examples}. Finally, we make concluding remarks in Section~\ref{c1:conclusion}.
\section{Problem Formulation}\label{c1:problem-formulation}
We first introduce notations and provide some definitions from stochastic control theory and game theory. Then, we describe the system model along with two distinct information patterns, and formulate the main problem of this study.
\subsection{Preliminaries}
In the sequel, vectors, matrices, and sets are represented by lower case, upper case, and Calligraphic letters like $x$, $X$, and $\mathcal{X}$ respectively. The sequence of all vectors $x_{t}, \ t=0,\dots,k$, is represented by $\mathbf{x}_k$, and the sequence of all vectors $x_{t}, \ t=k,\dots,N$ for a specific $N$, is represented by $\mathbf{x}^k$. The indicator function of a subset $\mathcal{A}$ of a set $\mathcal{X}$ is denoted by $f(x) = \mathbf{1}_\mathcal{A}$ where $x \in \mathcal{X}$. The identity matrix is denoted by $I$. For matrices $X$ and $Y$, the relations $X \succ 0$ and $Y \succeq 0$ denote that $X$ and $Y$ are positive definite and positive semi-definite respectively. The probability distribution of the stochastic variable $x$ is represented by $\mathsf{P}(x)$. The expected value and covariance of $x$ are represented by $\E[x]$ and $\Cov[x]$ respectively.
Let $\mathcal{I}_k^c$ be the information set of the controller and $\tilde{\mathcal{I}}_k^c$ be the information set of the controller when controls are equal to zero. The control has \emph{no dual effect}~\cite{bar1974dual} of order $r$ ($r \geq 2$)~if
\begin{align*}
\E[M_{k,i}^r | \mathcal{I}_k^c ] = \E[M_{k,i}^r | \tilde{\mathcal{I}}_k^c],
\end{align*}
where $M_{k,i}^r = (x_{k,i} - \E[x_{k,i} | \mathcal{I}_k^c])^r$ is the $r$th central moment of the $i$th component of the state conditioned on $\mathcal{I}_k^c$. In other words, the control has no dual effect if the expected future uncertainty is not affected by the prior controls.
Consider a team game with two decision makers. Let $\gamma^1 \in \mathcal{G}^1$ and $\gamma^2 \in \mathcal{G}^2$ be the policies of the first and the second decision makers respectively, and $J(\gamma^1,\gamma^2)$ be the cost function. A policy profile $(\gamma^{1*}, \gamma^{2*})$ represents a \emph{Nash equilibrium}~\cite{marden2018} if and only if
\begin{align*}
J(\gamma^{1*}, \gamma^{2*}) \leq J(\gamma^1, \gamma^{2*}), \ \text{for all } \gamma^1 \in \mathcal{G}^1,\\[2\jot]
J(\gamma^{1*}, \gamma^{2*}) \leq J(\gamma^{1*}, \gamma^2), \ \text{for all } \gamma^2 \in \mathcal{G}^2.
\end{align*}
The optimality considered in this study is in the above sense.
\subsection{Optimal Event-Triggered Control Problem}
Consider a stochastic process with linear discrete-time time-varying dynamics generated by the following state equation:
\begin{align}\label{c1:eq:sys}
x_{k+1} &= A_k x_k + B_k u_k + w_k,
\end{align}
for $k \geq 0$ with initial condition $x_0$ where $x_k \in \mathbb{R}^n$ is the state of the process, $A_k \in \mathbb{R}^{n \times n}$ is the state matrix, $B_k \in \mathbb{R}^{n \times m}$ is the input matrix, $u_k \in \mathbb{R}^m$ is the control input to be decided by a controller, and $w_k \in \mathbb{R}^n$ is an i.i.d.\! Gaussian white noise with zero mean and covariance $W_k \succ 0$. It is assumed that the initial state $x_0$ is a Gaussian vector with mean $m_0$ and covariance $M_0$, and that $(A_k,B_k)$ is controllable. We consider two distinct information patterns: perfect information and imperfect information. Accordingly, we employ an event trigger that determines whether an observation is transmitted or not. In particular, let $\delta_{k} \in \{0,1\}$ be an event. The observation at time $k$ is transmitted if $\delta_k = 1$; otherwise, it is not transmitted.
\begin{figure}[t]
\center
\includegraphics[width= 0.85\linewidth]{figures/lqg-etm2-bw.eps}
\caption{Event-triggered control with perfect information. The exact value of the state $x_k$ is accessible. The state $x_k$ is given at the event trigger instantly, and is transmitted to the controller sporadically with one-step delay.}
\label{c1:fig:perfect-info}
\end{figure}
\begin{figure}[t]
\center
\includegraphics[width= 0.85\linewidth]{figures/lqg-etm1-bw.eps}
\caption{Event-triggered control with imperfect information. The exact value of the state $x_k$ is not accessible. Instead, the noisy output $y_k$ is given at the event trigger instantly, and is transmitted to the controller sporadically with one-step delay.}
\label{c1:fig:imperfect-info}
\end{figure}
In event-triggered control with perfect information (see Fig.~\ref{c1:fig:perfect-info}), the exact value of the state is accessible. In this case, the state $x_k$ is given at the event trigger instantly, and is transmitted to the controller with one-step delay when $\delta_k = 1$. Hence, we have
\begin{align}\label{c1:eq:etm1}
z_k = \left\{
\begin{array}{l l}
x_k, & \ \text{if} \ \delta_k =1, \\
\varnothing, & \ \text{otherwise},
\end{array} \right.
\end{align}
where $z_k$ is the output of the event trigger with perfect information.
However, in event-triggered control with imperfect information (see Fig.~\ref{c1:fig:imperfect-info}), the exact value of the state is not accessible. Instead, a noisy output of the process is measured by a sensor, and is given~by
\begin{align}\label{c1:eq:output}
y_k &= C_k x_k + v_k,
\end{align}
for $k \geq 0$ where $y_k \in \mathbb{R}^p$ is the output of the process, $C_k \in \mathbb{R}^{p \times n}$ is the output matrix, and $v_k \in \mathbb{R}^p$ is an i.i.d.\! Gaussian white noise with zero mean and covariance $V_k \succ 0$. It is assumed that $(A_k,C_k)$ is observable. In this case, the observation $y_k$ is given at the event trigger instantly, and is transmitted to the controller with one-step delay when $\delta_k = 1$. Hence, we have
\begin{align}\label{c1:eq:etm2}
z_k = \left\{
\begin{array}{l l}
y_k, & \ \text{if} \ \delta_k =1, \\
\varnothing, & \ \text{otherwise},
\end{array} \right.
\end{align}
where $z_k$ is the output of the event trigger with imperfect information.
Consider a finite time horizon $N$, and let $\pi$ and $\mu$ denote a randomized triggering policy and a randomized control policy respectively. We measure the sampling rate by
\begin{equation}
R(\pi,\mu) = \frac{1}{N+1} \E\Big[ \textstyle\sum_{k=0}^{N} \ell_k \delta_k \Big],
\end{equation}
where $\ell_k$ is a weighting coefficient specifying the relative communication cost at each time. Moreover, we measure the control performance by
\begin{equation}
\begin{aligned}
J(\pi,\mu) = \frac{1}{N+1} \E \Big [& x_{N+1}^T Q_{N+1} x_{N+1}\\[1\jot]
&+ \textstyle\sum_{k=0}^{N} x_k^T Q_{k} x_k + u_k^T R_{k} u_k \Big],
\end{aligned}
\end{equation}
where $Q_k \succeq 0$ and $R_k \succ 0$ are weighting matrices.
Both the event trigger and the controller seek to maximize the control performance, i.e., to minimize $J(\pi,\mu)$, subject to the sampling rate being less than or equal to a level $R_0$. We study this problem under the following assumption:
\begin{assumption}\label{assum-neg}
The information associated with non-transmitted observations, i.e., when $\delta_k = 0$, is discarded at the controller.
\end{assumption}
For a system that satisfies the above assumption, let $\mathcal{I}^e_k$ and $\mathcal{I}^c_k$ denote the admissible information sets of the event trigger and controller at time $k$ respectively, and $\mathcal{P}$ and $\mathcal{M}$ denote the sets of the admissible triggering policies and admissible control policies respectively. Then, $\mathcal{I}^c_k$ satisfies Assumption~\ref{assum-neg}. Moreover, we have $\pi = \{ \delta_0, \dots, \delta_{N} \}$ where $\delta_k$ is a measurable function of $\mathcal{I}^e_k$, and $\mu= \{ u_0, \dots, u_{N} \}$ where $u_k$ is a measurable function of $\mathcal{I}^c_k$. For such a system, we have the following distributed optimization problem:
\begin{equation}\label{eq:main_problem1}
\begin{aligned}
&\minimize &&J(\pi,\mu)\\[2\jot]
&\subjectto &&R(\pi,\mu) \leq R_0.
\end{aligned}
\end{equation}
In the sequel, we shall characterize the Nash equilibria in this problem with perfect and imperfect information.
\section{Main Results}\label{c1:main-results1}
We here present the main results of our study. All proofs are provided in the Appendix. We first reformulate the problem of interest. Then, we characterize the Nash equilibria, treating the perfect and imperfect information patterns separately. Finally, we provide an approximation algorithm.
\subsection{Lagrange Multiplier and Riccati Equation}
In order to reformulate the problem, we need the following theorem, which shows the convexity of the constraint set specified by the sampling rate.
\begin{theorem}\label{thm-convexity}\emph{
The constraint set specified by $R(\pi,\mu) \leq R_0$ is convex.}
\end{theorem}
Following Theorem~\ref{thm-convexity} and the theory of Lagrange multipliers~\cite{bertsekas1999nonlinear}, the existence of a Lagrange multiplier $\lambda \geq 0$ is guaranteed. Hence, we can reformulate (\ref{eq:main_problem1}) as
\begin{align}\label{eq:main_problem2}
\minimize \quad J(\pi,\mu) + \lambda R(\pi,\mu).
\end{align}
We are interested in a general trade-off between the sampling rate and control performance. Therefore, we shall study this problem without specifying a particular Lagrange multiplier~$\lambda$ for now. Equivalently, we can study the following problem:
\begin{align}\label{c1:eq:team_problem2}
\minimize \quad \Psi(\pi,\mu),
\end{align}
where the cost function $\Psi(\pi,\mu)$ is given~by
\begin{equation}
\begin{aligned}
\Psi(\pi,\mu) = \E \Big[& x_{N+1}^T Q_{N+1} x_{N+1} \\[1\jot]
&+ \textstyle\sum_{k=0}^{N} x_k^T Q_k x_k + u_k^T R_k u_k + \theta_k \delta_k \Big],
\end{aligned}
\end{equation}
where $\theta_k = \ell_k \lambda$.
Besides, associated with (\ref{eq:main_problem1}), we define the matrix $S_k \succeq 0$ such that it satisfies the following Riccati equation:
\begin{align}
S_k &= Q_k + A_k^T S_{k+1} A_k - \Gamma_k,\\[2\jot]
\Gamma_k &= A_k^T S_{k+1} B_k (B_k^T S_{k+1} B_k + R_k)^{-1} B_k^T S_{k+1} A_k,
\end{align}
with initial condition $S_{N+1} = Q_{N+1}$ and with $\Gamma_{N+1} = 0$. This Riccati equation will play an essential role in the structures of the optimal policies.
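For concreteness, the backward recursion above admits a direct implementation. The following Python sketch (our own illustrative code, assuming NumPy; the function name and the convention that \texttt{A}, \texttt{B}, \texttt{Q}, \texttt{R} are lists of matrices indexed by time are ours) computes $S_k$ and $\Gamma_k$ for $k=N,\dots,0$:
\begin{verbatim}
import numpy as np

def riccati(A, B, Q, R, Q_terminal, N):
    # Backward Riccati recursion with S_{N+1} = Q_{N+1}
    # and Gamma_{N+1} = 0.
    n = Q_terminal.shape[0]
    S = [None] * (N + 2)
    Gamma = [None] * (N + 2)
    S[N + 1] = Q_terminal
    Gamma[N + 1] = np.zeros((n, n))
    for k in range(N, -1, -1):
        M = B[k].T @ S[k + 1] @ B[k] + R[k]
        Gamma[k] = (A[k].T @ S[k + 1] @ B[k]
                    @ np.linalg.solve(M, B[k].T @ S[k + 1] @ A[k]))
        S[k] = Q[k] + A[k].T @ S[k + 1] @ A[k] - Gamma[k]
    return S, Gamma
\end{verbatim}
The optimal control gains appearing in the theorems below can be recovered from the same quantities.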
\subsection{Perfect Information}
We first consider the basic case, which is event-triggered control with perfect information. In this case, only the controller needs to infer the state of the process. We derive the optimal estimator at the controller based on a Bayesian analysis. Let us define the admissible information set of the event trigger at time $k$ as the set of the current and prior states,~i.e.,
\begin{equation}\label{c1:eq:etm-infoset-1}
\mathcal{I}^e_k = \Big\{ x_t \ \Big| \ t \leq k \Big\},
\end{equation}
and the admissible information set of the controller at time $k$ as the set of the prior transmitted states,~i.e.,
\begin{equation}\label{c1:eq:con-infoset-1}
\mathcal{I}^c_k = \Big\{ x_t \ \Big| \ t < k, \delta_t = 1 \Big\}.
\end{equation}
The next proposition gives the optimal estimator with respect to the information set $\mathcal{I}_k^c$ that can be used at the controller, and shows that such an estimator is linear.
\begin{proposition}\label{c1:prop:perf-estimator}\emph{
The conditional expectation $\E[{x}_k | \mathcal{I}^c_k]$ with the following dynamics is the state estimate that minimizes the mean-square error at the controller:
\begin{align}\label{c1:eq:perfect-est-state}
\begin{split}
\hat{x}_{k} &= A_{k-1} \hat{x}_{k-1} + B_{k-1} u_{k-1}\\[2\jot]
&\qquad \qquad \qquad + \delta_{k-1} A_{k-1} (x_{k-1} - \hat{x}_{k-1}),
\end{split}
\end{align}
for $k \geq 1$ with initial condition $\hat{x}_0 = m_0$ where $\hat{x}_k = \E[{x}_k | \mathcal{I}^c_k]$. Moreover, the error covariance is given~by
\begin{align}
\begin{split}\label{eq:error_cov_perf}
P_{k} &= A_{k-1} P_{k-1} A_{k-1}^T + W_{k-1}\\[2\jot]
&\qquad \qquad \qquad - \delta_{k-1} A_{k-1} P_{k-1} A_{k-1}^T,
\end{split}
\end{align}
for $k \geq 1$ with initial condition $P_0 = M_0$ where $P_{k} = \Cov[x_k | \mathcal{I}^c_k]$.
}
\end{proposition}
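A minimal sketch of one step of this estimator, in the same illustrative Python style as above (the function name and argument convention are ours; the matrices passed in are those at time $k-1$), reads:
\begin{verbatim}
import numpy as np

def perfect_info_update(x_hat, P, x, u, delta, A, B, W):
    # Controller-side estimate under perfect information: the
    # innovation A(x - x_hat) enters only when delta = 1.
    e = x - x_hat
    x_hat_next = A @ x_hat + B @ u + delta * (A @ e)
    P_next = A @ P @ A.T + W - delta * (A @ P @ A.T)
    return x_hat_next, P_next
\end{verbatim}
Note that when $\delta_{k-1}=1$ the error covariance resets to $W_{k-1}$, reflecting the fact that the transmitted state removes all past uncertainty.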
\begin{remark}
One can show that the structure of the optimal estimator at the controller is the same as (\ref{c1:eq:perfect-est-state}) even without discarding the negative information (see, e.g., \cite{ramesh2013}). Our following results will still be valid in such a case. However, discarding the negative information yields a Gaussian conditional distribution at the controller, which helps us in the extension of our framework to the imperfect information pattern.
\end{remark}
We design the optimal policies using backward induction. Let $e_k = x_k - \hat{x}_k$ be the estimation error associated with the estimator at the controller. Note that, in addition to the controller, the event trigger can obtain $\hat{x}_k$. This is possible because $\mathcal{I}^c_k \subseteq \mathcal{I}^e_k$. The next theorem characterizes the structures of the optimal triggering policy and optimal control policy such that the corresponding policy profile represents a Nash equilibrium, and proves that there exists a separation in the optimal designs of the event trigger and controller.
\begin{theorem}\label{c1:thm:perf-optimality}\emph{
In event-triggered control with perfect information, the optimal triggering policy is a symmetric threshold policy given by
\begin{align}
\delta_{k}^* = \mathbf{1}_{\voi_k \geq 0},
\end{align}
where $\voi_k$ is the value of information at time $k$ defined as
\begin{align}
\voi_k &= e_k^T A_k^T \Gamma_{k+1} A_k e_k - \theta_k + \varrho_k,\label{eq:voi-perfect}
\end{align}
and $\varrho_k$ is a variable that depends on $e_k$, and the optimal control policy is a certainty-equivalence policy given by
\begin{align}
u^*_k = - L_k \hat{x}_k,
\end{align}
where
\begin{align}
L_k &= (B_k^T S_{k+1} B_k + R_k)^{-1} B_k^T S_{k+1} A_k.
\end{align}}
\end{theorem}
According to Theorem~\ref{c1:thm:perf-optimality}, the optimal triggering policy depends on $e_k$, and is independent of the control policy. Besides, the error covariance $P_k$ in (\ref{eq:error_cov_perf}) does not depend on $\mathbf{u}_{k-1}$. Hence, the control has no dual effect.
\begin{remark}
The value of information $\voi_k$ in (\ref{eq:voi-perfect}) quantifies the variation in the cost-to-go of the system with perfect information. In light of this definition, it is \emph{certified} that the optimal triggering policy transmits an observation whenever the value of information is \emph{positive}.
\end{remark}
\begin{remark}
It should be noted that the results here are consistent with those in \cite{molin2013, ramesh2013}. These works mainly studied the optimal control policy under different assumptions.
\end{remark}
\subsection{Imperfect Information}
Now, we extend the results presented above to event-triggered control with imperfect information. In this case, both event trigger and controller need to infer the state of the process. We derive the optimal estimators at the event trigger and controller based on a Bayesian analysis. Let us define the admissible information set of the event trigger at time $k$ as the set of the current and prior outputs,~i.e.,
\begin{equation}\label{c1:eq:imperf-etm-infoset}
\mathcal{I}^e_k = \Big\{ y_t \ \Big| \ t \leq k \Big\},
\end{equation}
and the admissible information set of the controller at time $k$ as the set of the prior transmitted outputs,~i.e.,
\begin{equation}\label{c1:eq:imperf-cont-infoset}
\mathcal{I}^c_k = \Big\{ y_t \ \Big| \ t < k, \delta_t = 1 \Big\}.
\end{equation}
The next two propositions give the optimal estimators with respect to the information sets $\mathcal{I}_k^c$ and $\mathcal{I}_k^e$ that can be used at the event trigger and controller respectively, and show that such estimators are linear.
\begin{proposition}\label{c1:prop:imperf-KF}\emph{
The conditional expectation $\E[{x}_k | \mathcal{I}^e_k]$ with the following dynamics minimizes the mean-square error at the event trigger:
\begin{align}
\begin{split}
\check{x}_{k} &= A_{k-1} \check{x}_{k-1} + B_{k-1} u_{k-1}\\[2\jot]
&\quad + H_{k} \big(y_{k} - C_{k} ( A_{k-1} \check{x}_{k-1} + B_{k-1} u_{k-1})\big),
\end{split}\\[2\jot]
\begin{split}
\Sigma_{k} &= \big( (A_{k-1} \Sigma_{k-1} A_{k-1}^T + W_{k-1})^{-1}\\[2\jot]
&\qquad \qquad \qquad \qquad \qquad \quad \ + C_{k}^T V_{k}^{-1} C_{k} \big)^{-1},
\end{split}
\end{align}
where
\begin{align}
H_{k} = \Sigma_{k} C_{k}^T V_{k}^{-1},
\end{align}
for $k \geq 1$ with initial conditions $\check{x}_0 = m_0 + \Sigma_{0} C_{0}^T V_{0}^{-1}(y_0 - C_0 m_0)$ and $\Sigma_0 = (M_0^{-1} + C_{0}^T V_{0}^{-1} C_{0})^{-1}$ where $\check{x}_k = \E[{x}_k | \mathcal{I}^e_k]$ and $\Sigma_{k} = \Cov[x_k | \mathcal{I}^e_k]$.}
\end{proposition}
\vspace{1mm}
\begin{proposition}\label{c1:prop:imperf-estimator}\emph{
The conditional expectation $\E[{x}_k | \mathcal{I}^c_k]$ with the following dynamics minimizes the mean-square error at the controller:
\begin{align}
\begin{split}
\hat{x}_{k} &= A_{k-1} \hat{x}_{k-1} + B_{k-1} u_{k-1}\\[2\jot]
&\qquad \qquad \ + \delta_{k-1} K_{k-1} (y_{k-1} - C_{k-1}\hat{x}_{k-1}),\label{c1:eq:imperf-estimate-dyn}
\end{split}\\[2\jot]
\begin{split}
P_{k} &= A_{k-1} P_{k-1} A_{k-1}^T + W_{k-1}\\[2\jot]
&\qquad \qquad \qquad - \delta_{k-1} K_{k-1} C_{k-1} P_{k-1} A_{k-1}^T,\label{c1:eq:imperf-cov-dyn}
\end{split}
\end{align}
where
\begin{align}
K_{k-1} = A_{k-1} P_{k-1} C_{k-1}^T (C_{k-1} P_{k-1} C_{k-1}^T + V_{k-1})^{-1},\label{c1:eq:imperf-est-gain}
\end{align}
for $k \geq 1$ with initial conditions $\hat{x}_0 = m_0$ and $P_0 = M_0$ where $\hat{x}_k = \E[{x}_k | \mathcal{I}^c_k]$ and $P_{k} = \Cov[x_k | \mathcal{I}^c_k]$.}
\end{proposition}
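The estimator of Proposition~\ref{c1:prop:imperf-KF} is a standard Kalman filter written in information form, so we only sketch the controller-side update of Proposition~\ref{c1:prop:imperf-estimator} (again illustrative Python with NumPy; all names are ours):
\begin{verbatim}
import numpy as np

def controller_update(x_hat, P, y, u, delta, A, B, C, W, V):
    # Intermittent Kalman-like update at the controller: the
    # innovation gain K acts only when the output was
    # transmitted (delta = 1).
    K = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + V)
    x_hat_next = A @ x_hat + B @ u + delta * (K @ (y - C @ x_hat))
    P_next = A @ P @ A.T + W - delta * (K @ C @ P @ A.T)
    return x_hat_next, P_next
\end{verbatim}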
\begin{remark}
We recall that given imperfect information, we need to employ two distinct estimators. Employing two distinct estimators for the event trigger and the controller under imperfect information was also noted in \cite{ramesh2009, molin2010, demirel2017}. Discarding the negative information here yields a Gaussian conditional distribution at the controller. Approximation of the optimal estimator at the controller when the negative information is not discarded was studied in~\cite{wu2013}.
\end{remark}
We design the optimal policies using backward induction. Let $e_k = x_k - \hat{x}_k$ be the estimation error and $\nu_k = y_k - C_k \hat{x}_k$ be the innovation both associated with the estimator at the controller. Note that, in addition to the controller, the event trigger can obtain $\hat{x}_k$. This is possible because $\mathcal{I}^c_k \subseteq \mathcal{I}^e_k$. Moreover, let $\varepsilon_k = \check{x}_k - \hat{x}_k$ be the mismatch estimation error associated with the estimators at the event trigger and controller. Consequently, we can obtain
\begin{align}
\E[e_k | \mathcal{I}^e_k] &= \E[x_k - \hat{x}_k | \mathcal{I}^e_k] = \check{x}_k - \hat{x}_k = \varepsilon_k, \label{c1:eq:ebar}\\[2\jot]
\Cov[e_k | \mathcal{I}^e_k] &= \Cov[x_k | \mathcal{I}^e_k] = \Sigma_k \label{c1:eq:Sigma}.
\end{align}
The next theorem characterizes the structures of the optimal triggering policy and optimal control policy such that the corresponding policy profile represents a Nash equilibrium, and proves that there exists a separation in the optimal designs of the event trigger and controller.
\begin{theorem}\label{c1:thm:imperf-optimality}\emph{
In event-triggered control with imperfect information, the optimal triggering policy is a symmetric threshold policy given~by
\begin{align}
\delta_{k}^* = \mathbf{1}_{\voi_k \geq 0},
\end{align}
where $\voi_k$ is the value of information at time $k$ defined as
\begin{align}
\voi_k &= \nu_k^T K_k^T \Gamma_{k+1} (2 A_k \varepsilon_k - K_k \nu_k) - \theta_k + \varrho_k,\label{eq:voi-imperfect}
\end{align}
where $\varrho_k$ is a variable that depends on $\varepsilon_k$ and $\nu_k$, and the optimal control policy is a certainty-equivalence policy given~by
\begin{align}
u^*_k = - L_k \hat{x}_k,
\end{align}
where
\begin{align}
L_k &= (B_k^T S_{k+1} B_k + R_k)^{-1} B_k^T S_{k+1} A_k.
\end{align}}
\end{theorem}
According to Theorem~\ref{c1:thm:imperf-optimality}, the optimal triggering policy depends on $\varepsilon_k$ and $\nu_k$, and is independent of the control policy. Besides, the error covariance $P_k$ in (\ref{c1:eq:imperf-cov-dyn}) does not depend on $\mathbf{u}_{k-1}$. Hence, the control has no dual effect.
\begin{remark}
The value of information $\voi_k$ in (\ref{eq:voi-imperfect}) quantifies the variation in the cost-to-go of the system with imperfect information. In light of this definition, it is \emph{certified} that the optimal triggering policy transmits an observation whenever the value of information is \emph{positive}.
\end{remark}
\begin{remark}
Related to our study here are the works in \cite{molin2010, demirel2017}. In these works, it is assumed that, instead of an observation, the state estimate at the event trigger is transmitted to the controller whenever an event occurs.
\end{remark}
\subsection{Approximation Algorithm with Guaranteed Performance}\label{c1:approx}
The optimal triggering policy provided above in each case depends on the variable $\varrho_k$. Although $\varrho_k$ can be computed recursively according to the procedure given in the proof of Theorem~\ref{c1:thm:perf-optimality} or Theorem~\ref{c1:thm:imperf-optimality}, its computation is in general expensive. We here provide a rollout algorithm~\cite{bertsekas1996neuro} for approximation of the variable $\varrho_k$ and the value of information $\voi_k$, and accordingly synthesize a closed-form suboptimal triggering policy with a performance guarantee that can readily be implemented. The following algorithm gives an approximation of the variable~$\varrho_k$.
\begin{algorithm1}\label{c1:alg1}\emph{
Let $\bar{\pi} = \{\bar{\delta}_0, \dots, \bar{\delta}_N\}$ be a periodic policy with period $k_p$, and $\delta_k$ be the event at time $k$. An approximation of the variable $\varrho_k$ associated with the periodic policy $\bar{\pi}$ is given by
\begin{align}
\varrho^{\bar{\pi}}_k = \E[V^{\bar{\pi}}_{k+1} | \mathcal{I}^e_k, \delta_k = 0] - \E[V^{\bar{\pi}}_{k+1} | \mathcal{I}^e_k, \delta_k = 1],
\end{align}
where
\begin{align*}
&\E[V^{\bar{\pi}}_{k+1} | \mathcal{I}^e_k, \delta_k] = \E \Big[ \textstyle\sum_{t=k+1}^{N} \theta_t \delta_t + e_{t+1}^T \Gamma_{t+1} e_{t+1} \Big| \mathcal{I}^e_k, \delta_k \Big],
\end{align*}
with $\boldsymbol{\delta}^{k+1} = \bar{\boldsymbol{\delta}}^{k+1}$ for both perfect and imperfect information.}
\end{algorithm1}
The next theorem guarantees that given any periodic policy it is possible to synthesize a suboptimal triggering policy that outperforms it.
\begin{theorem}\label{thm-approx}\emph{
Let $\bar{\pi}$ be a periodic policy with period $k_p$, and $\pi^+$ be a suboptimal triggering policy obtained based on Algorithm~\ref{c1:alg1} with periodic policy $\bar{\pi}$. Then,
\begin{align}
\Psi(\pi^+,\mu^*) \leq \Psi(\bar{\pi},\mu^*),
\end{align}
for both perfect and imperfect information.}
\end{theorem}
In the next two propositions, we synthesize a closed-form suboptimal triggering policy with a performance guarantee for perfect and imperfect information.
\begin{proposition}\label{c1:prop-Delta-perf}\emph{
Let $\bar{\pi}$ be the periodic policy with period $k_p = 1$. A suboptimal triggering policy that outperforms the periodic policy $\bar{\pi}$ in event-triggered control with perfect information is given by
\begin{align}
\delta_{k}^+ = \mathbf{1}_{\voi^{\bar{\pi}}_k \geq 0}
\end{align}
where
\begin{align}
\voi^{\bar{\pi}}_k &= e_k^T A_k^T \Gamma_{k+1} A_k e_k - \theta_k.
\end{align}
}
\end{proposition}
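This policy admits a one-line implementation; the following sketch (illustrative only, with \texttt{Gamma\_next} denoting $\Gamma_{k+1}$ from the Riccati recursion) transmits exactly when $\voi^{\bar{\pi}}_k \geq 0$:
\begin{verbatim}
import numpy as np

def should_transmit(e, A, Gamma_next, theta):
    # Suboptimal trigger: transmit iff
    # e^T A^T Gamma_{k+1} A e - theta_k >= 0.
    voi = float(e @ (A.T @ Gamma_next @ A) @ e) - theta
    return voi >= 0.0
\end{verbatim}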
\begin{proposition}\label{c1:prop-Delta-imperf}\emph{
Let $\bar{\pi}$ be the periodic policy with period $k_p = 1$. A suboptimal triggering policy that outperforms the periodic policy $\bar{\pi}$ in event-triggered control with imperfect information is given by
\begin{align}
\delta_{k}^+ = \mathbf{1}_{\voi^{\bar{\pi}}_k \geq 0},
\end{align}
where
\begin{equation}
\begin{aligned}
\voi^{\bar{\pi}}_k &= \nu_k^T K_k^T \Gamma_{k+1} (2 A_k \varepsilon_k - K_k \nu_k) - \theta_k\\[2\jot]
&\qquad + \textstyle \sum_{t=k+2}^{N} \bar{e}_t^{0T} \Gamma_t \bar{e}^0_t + \tr(\Gamma_t \bar{P}^0_t)\\[2\jot]
&\qquad - \textstyle \sum_{t=k+2}^{N} \bar{e}_t^{1T} \Gamma_t \bar{e}^1_t + \tr(\Gamma_t \bar{P}^1_t),
\end{aligned}
\end{equation}
and
\begin{align*}
\bar{e}^0_{t+1} &= (A_t - K^0_t C_t) \bar{e}^0_t,\\[2\jot]
\bar{P}^0_{t+1} &= (A_t - K^0_t C_t) \bar{P}^0_t (A_t - K^0_t C_t)^T + W_t + K^0_t V_t K_t^{0T},\\[2\jot]
P^0_{t+1} &= A_t P^0_t A_t^T + W_t - K^0_t C_t P^0_{t} A_t^T,\\[2\jot]
\bar{e}^1_{t+1} &= (A_t - K^1_t C_t) \bar{e}^1_t,\\[2\jot]
\bar{P}^1_{t+1} &= (A_t - K^1_t C_t) \bar{P}^1_t (A_t - K^1_t C_t)^T + W_t + K^1_t V_t K_t^{1T},\\[2\jot]
P^1_{t+1} &= A_t P^1_t A_t^T + W_t - K^1_t C_t P^1_{t} A_t^T,
\end{align*}
where
\begin{align*}
K^0_t &= A_t P^0_t C_t^T (C_t P^0_t C_t^T + V_t)^{-1},\\[2\jot]
K^1_t &= A_t P^1_t C_t^T (C_t P^1_t C_t^T + V_t)^{-1},
\end{align*}
for $t \geq k+1$ with initial condition $\bar{e}^0_{k+1} = A_k \varepsilon_k$, $\bar{P}^0_{k+1} = A_k \Sigma_k A_k^T + W_k$, $P^0_{k+1} = A_k P_k A_k^T + W_k$, $\bar{e}^1_{k+1} = A_k \varepsilon_k - K_k \nu_k$, $\bar{P}^1_{k+1} = A_k \Sigma_k A_k^T + W_k$, and $P^1_{k+1} = A_k P_k A_k^T + W_k - K_k C_k P_k A_k^T$.
}
\end{proposition}
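Although lengthier than in the perfect information case, this policy is still a finite forward recursion. A sketch of the computation of $\voi^{\bar{\pi}}_k$ (illustrative Python; the argument convention, in which \texttt{A}, \texttt{C}, \texttt{W}, \texttt{V}, \texttt{Gamma} are lists of matrices indexed by time, is ours) is:
\begin{verbatim}
import numpy as np

def voi_imperfect(k, N, eps, nu, P, Sigma, A, C, W, V, Gamma, theta):
    # eps and nu are the mismatch error and innovation at time k;
    # P and Sigma are the controller- and trigger-side covariances.
    K = A[k] @ P @ C[k].T @ np.linalg.inv(C[k] @ P @ C[k].T + V[k])
    voi = float(nu @ K.T @ Gamma[k + 1]
                @ (2 * A[k] @ eps - K @ nu)) - theta
    # Initial conditions of the two branches (delta_k = 0 / delta_k = 1).
    e0, e1 = A[k] @ eps, A[k] @ eps - K @ nu
    Pb0 = A[k] @ Sigma @ A[k].T + W[k]
    Pb1 = Pb0.copy()
    P0 = A[k] @ P @ A[k].T + W[k]
    P1 = P0 - K @ C[k] @ P @ A[k].T
    for t in range(k + 1, N):
        K0 = A[t] @ P0 @ C[t].T @ np.linalg.inv(C[t] @ P0 @ C[t].T + V[t])
        K1 = A[t] @ P1 @ C[t].T @ np.linalg.inv(C[t] @ P1 @ C[t].T + V[t])
        F0, F1 = A[t] - K0 @ C[t], A[t] - K1 @ C[t]
        Pb0 = F0 @ Pb0 @ F0.T + W[t] + K0 @ V[t] @ K0.T
        Pb1 = F1 @ Pb1 @ F1.T + W[t] + K1 @ V[t] @ K1.T
        P0 = A[t] @ P0 @ A[t].T + W[t] - K0 @ C[t] @ P0 @ A[t].T
        P1 = A[t] @ P1 @ A[t].T + W[t] - K1 @ C[t] @ P1 @ A[t].T
        e0, e1 = F0 @ e0, F1 @ e1
        voi += float(e0 @ Gamma[t + 1] @ e0) + np.trace(Gamma[t + 1] @ Pb0)
        voi -= float(e1 @ Gamma[t + 1] @ e1) + np.trace(Gamma[t + 1] @ Pb1)
    return voi
\end{verbatim}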
\section{Numerical Examples}\label{c1:examples}
We provide two examples for the theoretical framework that we developed. In the first example, we consider a scalar process with the following dynamics:
\begin{align*}
x_{k+1} = 1.1 x_k + u_k + w_k,
\end{align*}
with initial conditions $m_0 = 0$ and $M_0 = 1$ and the noise variance $W_k = 3$ for all $k$, in which the state is accessible. The time horizon is $N = 100$. We chose the weighting coefficients as $\ell_k = 1$, $Q_{N+1} = 1$, $Q_k = 1$, and $R_k = 0.1$ for all $k$. For this system, we obtained the suboptimal triggering policy and optimal control policy provided by Proposition~\ref{c1:prop-Delta-perf} and Theorem~\ref{c1:thm:perf-optimality} respectively. The trade-off curve between the sampling rate and control performance was numerically computed using different values of the Lagrange multiplier $\lambda$, and is depicted in Fig.~\ref{c1:fig:tradeoff}. The achievable region is specified by the area above the trade-off curve. Note that this trade-off curve should be regarded as an upper bound.
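To indicate how such a curve can be generated, the following Python sketch (entirely our own illustrative code; the Monte Carlo setup, sample sizes, and variable names are assumptions, not part of the design above) estimates one point of the trade-off for the scalar example, using the suboptimal trigger of Proposition~\ref{c1:prop-Delta-perf} and the optimal control policy of Theorem~\ref{c1:thm:perf-optimality}:
\begin{verbatim}
import numpy as np

def tradeoff_point(lam, N=100, trials=500, a=1.1, b=1.0,
                   W=3.0, M0=1.0, Q=1.0, R=0.1, seed=0):
    # Monte Carlo estimate of (sampling rate, control cost)
    # for one value of the Lagrange multiplier lam.
    rng = np.random.default_rng(seed)
    S = np.zeros(N + 2)     # scalar backward Riccati recursion
    G = np.zeros(N + 2)     # G[k] plays the role of Gamma_k
    S[N + 1] = Q
    for k in range(N, -1, -1):
        G[k] = (a * S[k + 1] * b) ** 2 / (b * b * S[k + 1] + R)
        S[k] = Q + a * a * S[k + 1] - G[k]
    rate_sum, cost_sum = 0.0, 0.0
    for _ in range(trials):
        x = rng.normal(0.0, np.sqrt(M0))
        x_hat, cost, n_tx = 0.0, 0.0, 0
        for k in range(N + 1):
            L = a * b * S[k + 1] / (b * b * S[k + 1] + R)
            u = -L * x_hat
            e = x - x_hat
            delta = 1 if G[k + 1] * (a * e) ** 2 >= lam else 0
            n_tx += delta
            cost += Q * x * x + R * u * u
            x_next = a * x + b * u + rng.normal(0.0, np.sqrt(W))
            x_hat = a * x_hat + b * u + delta * a * e
            x = x_next
        cost += Q * x * x
        rate_sum += n_tx / (N + 1)
        cost_sum += cost / (N + 1)
    return rate_sum / trials, cost_sum / trials
\end{verbatim}
Sweeping \texttt{lam} over a grid then traces out an approximation of the curve in Fig.~\ref{c1:fig:tradeoff}.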
In the second example, we consider an inverted pendulum on a cart observed by a sensor, which communicates with a controller through a controller area network. The continuous-time equations of motion linearized around the unstable equilibrium are given~by
\begin{align*}
(I + m l^2) \ddot{\phi} - m g l \phi = m l \ddot{x},\\[2\jot]
(M+m) \ddot{x} + b \dot{x} - m l \ddot{\phi} = u,
\end{align*}
where $\phi$ is the pitch angle of the pendulum and, with a slight reuse of earlier symbols, $x$ is the position of the cart, $u$ is the force applied to the cart, $I$ is the moment of inertia of the pendulum, $m$ is the mass of the pendulum, $l$ is the length to the pendulum's center of mass, $g$ is the gravitational acceleration, $M$ is the mass of the cart, and $b$ is the coefficient of friction for the cart. We chose these parameters as $I = 0.006 \ \text{kg.m$^2$}$, $m = 0.2 \ \text{kg}$, $l = 0.3 \ \text{m}$, $g = 9.81 \ \text{m/s$^2$}$, $M = 0.5 \ \text{kg}$, and $b = 0.1 \ \text{N/m/sec}$. The sensor can only measure the position and the pitch angle. The discrete-time dynamics of form~(\ref{c1:eq:sys}), obtained by a zero-order-hold transformation with a sampling frequency of $100 \ \text{Hz}$, and the sensor model of form (\ref{c1:eq:output}), together with the covariance matrices, are given~by
\begin{align*}
A_k &= \setlength\arraycolsep{3pt} \begin{bmatrix}
1.0000 & 0.0100 & 0.0001 & 0.0000\\
0.0000 & 0.9982 & 0.0267 & 0.0001\\
0.0000 & 0.0000 & 1.0016 & 0.0100\\
0.0000 &-0.0045 & 0.3122 & 1.0016
\end{bmatrix}\!,
B_k = \begin{bmatrix}
0.0001\\
0.0182\\
0.0002\\
0.0454
\end{bmatrix}\!,\\[1\jot]
C_k &= \setlength\arraycolsep{3pt} \begin{bmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 1 & 0
\end{bmatrix}\!,
V_k = \setlength\arraycolsep{3pt} \begin{bmatrix}
0.0020 & 0.0000\\
0.0000 & 0.0010
\end{bmatrix}\!,\\[1\jot]
W_k &= \setlength\arraycolsep{3pt} \begin{bmatrix}
0.0006 & 0.0003 & 0.0001 & 0.0006\\
0.0003 & 0.0008 & 0.0003 & 0.0004\\
0.0001 & 0.0003 & 0.0007 & 0.0006\\
0.0006 & 0.0004 & 0.0006 & 0.0031
\end{bmatrix}\!,
\end{align*}
for all $k$ with initial conditions $m_0 = [0 \ 0 \ 0.2 \ 0]^T$ and $M_0 = 10W_k$. The time horizon is $N= 500$. We chose the Lagrange multiplier as $\lambda = 0.0067$ and the weighting coefficients and matrices as $\ell_k = 1$, $Q_{N+1}= \diag\{1,1,1000,1\}$, $Q_k= \diag\{1,1,1000,1\}$, and $R_k=1$ for all $k$. For this system, we obtained the suboptimal triggering policy and optimal control policy provided by Proposition~\ref{c1:prop-Delta-imperf} and Theorem~\ref{c1:thm:imperf-optimality} respectively. For a realization of the system, we carried out a simulation experiment. The trajectories of the value of information, event, and control are shown in Fig.~\ref{c1:fig:trajectories2}. Moreover, the trajectories of the position, velocity, pitch angle, and pitch rate are shown in Fig.~\ref{c1:fig:trajectories1}. In this experiment, the value of information became positive only $18$ times, which led to the transmission of the observation at each of those times. Besides, we observe that the system could still achieve a good control performance while the sampling rate was reduced by $96.4\%$ with respect to the periodic policy with $k_p = 1$.
\begin{figure}[t]
\center
\vspace{-2.5mm}
\includegraphics[width= 0.95\linewidth]{figures/tradeoff.eps}
\caption{Trade-off curve between the sampling rate and control performance in event-triggered control. The control performance is scaled by one tenth. High values in the horizontal axis represent low control performance, and \emph{vice versa}.}
\label{c1:fig:tradeoff}
\end{figure}
\begin{figure}[t]
\center
\vspace{-2.5mm}
\includegraphics[width= 0.98\linewidth]{figures/control_bw.eps}
\caption{Trajectories of the value of information, event, and control. The value of information is scaled by one tenth. The dotted line in the diagram of the value of information represents the zero values.}
\label{c1:fig:trajectories2}
\end{figure}
\begin{figure}[t]
\center
\vspace{-2.5mm}
\includegraphics[width= 0.98\linewidth]{figures/state_bw.eps}
\caption{Trajectories of the position, velocity, pitch angle, and pitch rate. The solid lines represent the state components and the dotted lines represent the state estimate components at the controller.}
\label{c1:fig:trajectories1}
\end{figure}
\section{Conclusion}\label{c1:conclusion}
In this article, we quantified the value of information, which systematically gauges the instantaneous impact of information on a networked control system. In the course of our study, we developed a theoretical framework for the joint design of an event trigger and a controller in optimal event-triggered control with perfect and imperfect information. In each case, we characterized the optimal triggering policy and optimal control policy such that the corresponding policy profile represents a Nash equilibrium. In particular, we proved that the optimal triggering policy is a symmetric threshold policy and the optimal control policy is a certainty-equivalence policy. We demonstrated that the optimal triggering policy transmits an observation whenever the value of information is positive. Finally, we provided an algorithm for approximation of the value of information.
In general, our results may improve knowledge about decision making based on the value of information. Our ongoing research shows that we can exploit the framework developed here for studying the impact of reliability, resolution, and timeliness of information on a networked control system. In addition, the tractable framework developed here can be used as a foundation for future research in event-triggered control of complex systems. We propose that further research should be undertaken in the following directions. First, the framework should be extended to networks of interacting systems. Second, the team optimality gap of the Nash equilibria derived here should be investigated. An attempt in this direction was made in \cite{molin2013} based on a transformation to an equivalence class. However, as pointed out in \cite{ramesh2013}, there are subtleties in defining an equivalence class for a state-dependent triggering policy due to the existence of a dual effect, which must be taken into account.
\section{Appendix}\label{appendix}
Here we present a few lemmas and then the proofs of the main results of the article.
\begin{lemma}\label{lem:1}
Let $S_k \succeq 0$ be a matrix that satisfies the following algebraic Riccati equation:
\begin{align}
S_k &= Q_k + A_k^T S_{k+1} A_k - L_k^T(B_k^T S_{k+1} B_k + R_k) L_k,\label{eq:riccati}\\[2\jot]
L_k &= (B_k^T S_{k+1} B_k + R_k)^{-1} B_k^T S_{k+1} A_k,
\end{align}
for all $k$ with initial condition $S_{N+1} = Q_{N+1}$. Then, the cost function $\Psi(\pi,\mu)$ is equal to
\begin{equation}
\begin{aligned}
&\Psi(\pi,\mu) = \E \Big[ x_0^T S_0 x_0 + \textstyle \sum_{k=0}^{N} \theta_k \delta_k + w_k^T S_{k+1} w_k\\[1\jot]
&\ \ + (u_k + L_k x_k)^T (B_k^T S_{k+1} B_k + R_k) (u_k + L_k x_k)\Big].
\end{aligned}
\end{equation}
\begin{IEEEproof}
Using the process dynamics (\ref{c1:eq:sys}) and the Riccati equation (\ref{eq:riccati}), we can write
\begin{align}
\begin{split}\label{eq:sk1}
x_{k+1}^T S_{k+1} x_{k+1} &= (A_k x_k + B_k u_k + w_k)^T\\[2\jot]
&\qquad \times S_{k+1} (A_k x_k + B_k u_k + w_k),
\end{split}\\[2\jot]
\begin{split}\label{eq:sk}
x_k^T S_k x_k &= x_k^T \big(Q_k + A_k^T S_{k+1} A_k\\[2\jot]
&\qquad - L_k ^T (B_k^T S_{k+1} B_k + R_k) L_k\big) x_k.
\end{split}
\end{align}
Consequently, we find
\begin{align*}
&x_{N+1}^T S_{N+1} x_{N+1} - x_0^T S_0 x_0\\[2\jot]
&\qquad = \textstyle \sum_{k=0}^{N} x_{k+1}^T S_{k+1} x_{k+1} - x_k^T S_k x_k\\[1\jot]
&\qquad = \textstyle \sum_{k=0}^{N} \Big\{ w_k^T S_{k+1} w_k + 2 (A_k x_k + B_k u_k)^TS_{k+1} w_k \\[1\jot]
&\qquad \quad + x_k^T L_k ^T (B_k^T S_{k+1} B_k + R_k) L_k x_k\\[2\jot]
&\qquad \quad - x_k^T Q_k x_k - u_k^T R_k u_k + 2 x_k^T A_k^T S_{k+1} B_k u_k\\[1\jot]
&\qquad \quad + u_k^T (B_k^T S_{k+1} B_k + R_k) u_k \Big\},
\end{align*}
where the first equality is an identity, and in the second equality we used (\ref{eq:sk1}) and (\ref{eq:sk}) and also added and subtracted the term $\sum_{k=0}^{N} u_k^T R_k u_k$ to and from the right-hand side. Rearranging the terms in the above relation, we find
\begin{align*}
&x_{N+1}^T S_{N+1} x_{N+1} + \textstyle \sum_{k=0}^{N} x_k^T Q_k x_k + u_k^T R_k u_k\\[1\jot]
&\quad = x_0^T S_0 x_0 + \textstyle \sum_{k=0}^{N} \Big\{ w_k^T S_{k+1} w_k\\[1\jot]
&\qquad + 2 (A_k x_k + B_k u_k)^TS_{k+1} w_k\\[1\jot]
&\qquad + (u_k + L_k x_k)^T (B_k^T S_{k+1} B_k + R_k) (u_k + L_k x_k) \Big\}.
\end{align*}
Adding the term $\sum_{k=0}^{N} \theta_k \delta_k$ to both sides of the above relation and taking expectation, we obtain the result:
\begin{align*}
\Psi(\pi,\mu) &= \E \Big[ x_0^T S_0 x_0 + \textstyle \sum_{k=0}^{N} \Big\{\theta_k \delta_k + w_k^T S_{k+1} w_k\\[1\jot]
& + 2(A_k x_k + B_k u_k)^T S_{k+1} w_k\\[1\jot]
& + (u_k + L_k x_k)^T (B_k^T S_{k+1} B_k + R_k) (u_k + L_k x_k)\Big\} \Big]\\[1\jot]
& = \E \Big[ x_0^T S_0 x_0 + \textstyle \sum_{k=0}^{N} \Big\{\theta_k \delta_k + w_k^T S_{k+1} w_k\\[1\jot]
& + (u_k + L_k x_k)^T (B_k^T S_{k+1} B_k + R_k) (u_k + L_k x_k) \Big\} \Big],
\end{align*}
where in the second equality we used the fact that $w_k$ is independent of $x_k$ and $u_k$.
\end{IEEEproof}
\end{lemma}
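We note in passing a direct consequence of Lemma~\ref{lem:1}: if the control could be chosen as $u_k = -L_k x_k$ for all $k$, the last quadratic term in the cost would vanish, leaving $\Psi(\pi,\mu) = \E [ x_0^T S_0 x_0 + \textstyle \sum_{k=0}^{N} \{ \theta_k \delta_k + w_k^T S_{k+1} w_k \} ]$, i.e., the classical LQR cost plus the accumulated communication penalty. The proofs below exploit exactly this completed-square structure.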
\begin{lemma}\label{eq:lemma-zero-mean}
Let $w$ be a Gaussian variable with zero mean and covariance $W$, and $g(w)$ be any function of $w$. Then, $\E[g(-w)] = \E[g(w)]$.
\begin{IEEEproof}
From the definition of the expectation, we have (omitting the Gaussian normalization constant $((2\pi)^{n} \det W)^{-1/2}$ here and in the following displays, as it plays no role in the argument)
\begin{align}\label{eq:lemma-even-def}
\E[g(w)] = \textstyle \int_{-\infty}^{\infty} g(w) \exp(-\frac{1}{2} w^T W^{-1} w) \, dw.
\end{align}
Let us define the variable $\bar{w}$ as $\bar{w} = - w$. Then, $\bar{w}$ is also a Gaussian variable with zero mean and covariance $W$. Therefore, we have
\begin{align*}
\E[g(-w)] &= \textstyle \int_{-\infty}^{\infty} g(-w) \exp(-\frac{1}{2} w^T W^{-1} w) dw\\[2\jot]
&= - \textstyle \int_{\infty}^{-\infty} g(\bar{w}) \exp(-\frac{1}{2} \bar{w}^T W^{-1} \bar{w}) d\bar{w}\\[2\jot]
&= \textstyle \int_{-\infty}^{\infty} g(\bar{w}) \exp(-\frac{1}{2} \bar{w}^T W^{-1} \bar{w}) d\bar{w}\\[2\jot]
&= \E[g(w)],
\end{align*}
where the last equality comes from (\ref{eq:lemma-even-def}).
\end{IEEEproof}
\end{lemma}
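As a simple scalar illustration of Lemma~\ref{eq:lemma-zero-mean}, let $w \sim \mathcal{N}(0,\sigma^2)$ and $g(w) = w + w^2$. Then $\E[g(w)] = 0 + \sigma^2$ and $\E[g(-w)] = -\E[w] + \E[w^2] = \sigma^2$, so the two expectations coincide even though $g$ itself is not an even function.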
\begin{lemma}\label{c1:lemma-cond-expectation}
Let $x$ and $y$ be two random vectors that are jointly Gaussian with the following mean and covariance:
\begin{align*}
\E \hspace{-.27em}\begin{bmatrix}
x\\
y
\end{bmatrix}
=\begin{bmatrix}
m_x\\
m_y
\end{bmatrix}, \quad
\Cov \hspace{-.27em}\begin{bmatrix}
x\\
y
\end{bmatrix}
=\begin{bmatrix}
R_x & R_{xy}\\
R_{yx} & R_{y}
\end{bmatrix}.
\end{align*}
Then, the conditional distribution of $x$ given $y$ is also Gaussian with the following mean and covariance:
\begin{align}
\E[x|y] &= m_x + R_{xy} R_{y}^{-1} (y - m_y),\\[2\jot]
\Cov[x|y] &= R_{x} - R_{xy} R_{y}^{-1} R_{yx}.
\end{align}
\begin{IEEEproof}
Let us define the variables $\xi = x - m_x - R_{xy} R_{y}^{-1} (y - m_y)$ and $R_{\xi} = R_{x} - R_{xy} R_{y}^{-1} R_{yx}$. We have
\begin{align*}
\begin{bmatrix}
\xi\\
y - m_y
\end{bmatrix}
=\begin{bmatrix}
I & -R_{xy} R_{y}^{-1}\\
0 & I
\end{bmatrix}
\begin{bmatrix}
x - m_x\\
y - m_y
\end{bmatrix},
\end{align*}
and
\begin{align*}
\begin{bmatrix}
x - m_x\\
y -m_y
\end{bmatrix}
=\begin{bmatrix}
I & R_{xy} R_{y}^{-1}\\
0 & I
\end{bmatrix}
\begin{bmatrix}
\xi\\
y - m_y
\end{bmatrix}.
\end{align*}
As the Jacobian of the transformation is one, we find that the joint distribution of $x$ and $y$ can be written as
\begin{align*}
\Prob(x, y) &= (2\pi)^{-(n+p)/2} (\det R)^{-1/2}\\[1\jot]
& \times \exp \Big(-\textstyle\frac{1}{2} \xi^T R_{\xi}^{-1}\xi -\textstyle\frac{1}{2} (y-m_y)^T R_y^{-1} (y-m_y)\Big),
\end{align*}
where $n$ and $p$ denote the dimensions of $x$ and $y$, respectively, and
\begin{align*}
R
=\begin{bmatrix}
R_x & R_{xy}\\
R_{yx} & R_{y}
\end{bmatrix}.
\end{align*}
Moreover, the distribution of $y$ is
\begin{align*}
\Prob(y) &= (2\pi)^{-p/2} (\det R_y)^{-1/2}\\[1\jot]
&\qquad \times \exp \Big(-\textstyle\frac{1}{2} (y-m_y)^T R_y^{-1} (y-m_y)\Big).
\end{align*}
From determinant properties, we can write
\begin{align*}
\det R = \det(R_x - R_{xy} R_{y}^{-1} R_{yx}) \det R_y = \det R_{\xi} \det R_y.
\end{align*}
Now, we can compute the conditional distribution of $x$ given $y$ as
\begin{align*}
\Prob(x|y) &= (2\pi)^{-n/2} (\det R_{\xi})^{-1/2} \exp \Big(-\textstyle\frac{1}{2} \xi^T R_{\xi}^{-1} \xi\Big).
\end{align*}
This completes the proof.
\end{IEEEproof}
\end{lemma}
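As a scalar illustration of Lemma~\ref{c1:lemma-cond-expectation}, take $m_x = m_y = 0$, $R_x = R_y = 1$, and $R_{xy} = R_{yx} = \rho$. The lemma then gives $\E[x|y] = \rho y$ and $\Cov[x|y] = 1 - \rho^2$: the stronger the correlation between $x$ and $y$, the smaller the residual uncertainty about $x$ after observing $y$.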
\begin{IEEEproof}[Proof of Theorem~\ref{thm-convexity}]
First, note that $R(\pi,\mu)$ is an explicit function of $\delta_k$. Therefore, it is enough to prove the convexity of $R(\pi,\mu)$ with respect to $\pi$. The triggering policy $\pi$ is a randomized policy, i.e., a probability distribution. Let $\pi_1, \pi_2 \in \Pi$ be two admissible policies. For any $\alpha$ with $0 \leq \alpha \leq 1$, the policy $\pi_{\alpha} = \alpha \pi_1 + (1-\alpha) \pi_2$ is a mixture probability distribution and admissible. In fact, $\pi_{\alpha}$ can be realized by adopting $\pi_1$ for a fraction $\alpha$ of the experiments and $\pi_2$ for the remaining fraction $1-\alpha$. We achieve $R(\pi_1,\mu)$ in the former and $R(\pi_2,\mu)$ in the latter. Hence, we find
\begin{align}
R(\pi_{\alpha},\mu) = \alpha R(\pi_1,\mu) + (1-\alpha) R(\pi_2,\mu),
\end{align}
which verifies the convexity of $R(\pi,\mu)$.
\end{IEEEproof}
\begin{IEEEproof}[Proof of Proposition~\ref{c1:prop:perf-estimator}]
Given the information set $\mathcal{I}^c_{k}$ at the controller, the state estimate that minimizes the mean-square error is obviously the conditional expectation $\E[{x}_k | \mathcal{I}^c_{k}]$.
From the definition, we have $\hat{x}_{k+1} = \E[{x}_{k+1} | \mathcal{I}^c_{k+1}]$. Taking the conditional expectation of the process in (\ref{c1:eq:sys}), we can obtain the propagation of the state estimate and estimation error covariance as
\begin{align}
\hat{x}_{k+1} &= A_k \E[x_k | \mathcal{I}_{k+1}^c] + B_k u_{k},\label{c1:eq:perf-propag-mean}\\[2\jot]
P_{k+1} &= A_k \Cov[x_k | \mathcal{I}_{k+1}^c] A_k^T + W_k.\label{c1:eq:perf-propag-cov}
\end{align}
From Assumption~\ref{assum-neg}, the controller makes no inference when $\delta_k = 0$. However, the controller receives $z_k = x_k$ and subsequently can make an inference when $\delta_k = 1$:
\begin{align*}
\E[x_k | \mathcal{I}_{k}^c, x_k] &= x_{k},\\[2\jot]
\Cov[x_k | \mathcal{I}_{k}^c, x_k] &= 0.
\end{align*}
Therefore, from the definition of $\delta_k$ we can write
\begin{align}
\E[x_k | \mathcal{I}_{k}^c, x_k] &= \hat{x}_{k} + \delta_k (x_k - \hat{x}_k),\label{c1:eq:perf-update-mean}\\[2\jot]
\Cov[x_k | \mathcal{I}_{k}^c, x_k] &= (1-\delta_k) P_k.\label{c1:eq:perf-update-cov}
\end{align}
From the definition of $\mathcal{I}_{k}^c$ in (\ref{c1:eq:con-infoset-1}) and the fact that $u_{k}$ is a measurable function of $\mathcal{I}_{k}^c$, we can write $\E[x_k | \mathcal{I}_{k+1}^c] = \E[x_k | \mathcal{I}_{k}^c, x_{k}]$. We obtain the results by substituting~(\ref{c1:eq:perf-update-mean}), (\ref{c1:eq:perf-update-cov}) in~(\ref{c1:eq:perf-propag-mean}), (\ref{c1:eq:perf-propag-cov}).
\end{IEEEproof}
\begin{IEEEproof}[Proof of Theorem~\ref{c1:thm:perf-optimality}]
We need to show that $(\pi^*,\mu^*)$ represents a Nash equilibrium. Using the optimal control policy $\mu^*_k$ in the cost function $\Psi(\pi,\mu)$ given by Lemma~\ref{lem:1}, we obtain
\begin{align*}
\Psi(\pi,\mu^*) & = \E \Big[ x_0^T S_0 x_0 + \textstyle \sum_{k=0}^{N} \Big\{ \theta_k \delta_k + w_k^T S_{k+1} w_k\\[2\jot]
&\qquad \qquad \qquad + e_k^T L_k^T (B_k^T S_{k+1} B_k + R_k) L_k e_k \Big\} \Big],
\end{align*}
where we used the definition of the estimation error $e_k$. Since $x_0$ and $w_k$ are independent of the triggering policy, we define the value function $V^e_k$ associated with $\Psi(\pi,\mu^*)$ as
\begin{align*}
V^e_k &= \min_{\boldsymbol{\delta}^k} \E \Big[ \textstyle \sum_{t=k}^{N} \theta_t \delta_t + e_{t+1}^T \Gamma_{t+1} e_{t+1} \Big| \mathcal{I}^e_k \Big],
\end{align*}
where $\Gamma_k = L_k^T (B_k^T S_{k+1} B_k + R_k) L_k$ with the exception of $\Gamma_{N+1} = 0$. From the additivity of the value function $V^e_k$, we have
\begin{align*}
V^e_k &= \min_{\delta_k} \E\Big[ \theta_k \delta_k + e_{k+1}^T \Gamma_{k+1} e_{k+1} \\[1\jot]
&\quad +\min_{\delta_{k+1}} \E\Big[ \theta_{k+1} \delta_{k+1} + e_{k+2}^T \Gamma_{k+2} e_{k+2} + \dots \Big|\mathcal{I}^e_{k+1} \Big] \Big| \mathcal{I}^e_k\Big] \\[1\jot]
&=\min_{\delta_k} \E\Big[\theta_k \delta_k + e_{k+1}^T \Gamma_{k+1} e_{k+1} + V^e_{k+1} \Big|\mathcal{I}^e_k\Big],
\end{align*}
with initial condition $V^e_{N+1} = 0$. We prove by induction that the value function $V^e_k$ is an even function of $e_k$. Clearly, the claim is satisfied for time $N+1$. We assume that the claim holds at time $k+1$, and we shall prove that it also holds at time $k$. We can write the dynamics of the estimation error at the controller as
\begin{align}\label{eq:prove-error-dyn-perf}
e_{k+1} = (1-\delta_k) A_k e_k + w_k.
\end{align}
Thus, we find
\begin{align*}
\E[e_{k+1}^T &\Gamma_{k+1} e_{k+1} | \mathcal{I}^e_k]\\[2\jot]
&= \E \Big[ (1-\delta_k) e_k^T A_k^T \Gamma_{k+1} A_k e_k+ w_k^T \Gamma_{k+1} w_k \\[1\jot]
&\qquad + 2(1-\delta_k) e_k^T \Gamma_{k+1} w_k \Big| \mathcal{I}^e_k \Big]\\[1\jot]
&= (1-\delta_k) e_k^T A_k^T \Gamma_{k+1} A_k e_k + \tr(\Gamma_{k+1} W_k),
\end{align*}
where in the second equality we used the facts that $e_k$ is $\mathcal{I}^e_k$-measurable and that $w_k$ is independent of $\mathcal{I}^e_k$. Hence, we have
\begin{equation}\label{c1:eq:cost-to-go2-perfect}
\begin{aligned}
V^e_{k} &= \min_{\delta_k} \Big\{\theta_k \delta_k + (1-\delta_k) e_k^T A_k^T \Gamma_{k+1} A_k e_k \\[1\jot]
&\qquad \qquad \qquad \ + \tr(\Gamma_{k+1} W_k) + \E[V^e_{k+1}|\mathcal{I}^e_k] \Big\} .
\end{aligned}
\end{equation}
The minimizer in (\ref{c1:eq:cost-to-go2-perfect}) is obtained as $\delta_{k}^* = \mathbf{1}_{\voi_k \geq 0}$ where
\begin{align*}
\voi_k = e_k^T A_k^T \Gamma_{k+1} A_k e_k - \theta_k + \varrho_k,
\end{align*}
and $\varrho_k = \E[V^e_{k+1}|\mathcal{I}^e_k, \delta_k = 0] - \E[V^e_{k+1}|\mathcal{I}^e_k, \delta_k = 1]$. Moreover, we can find a matrix $F_k$ such that
\begin{align}\label{eq:perf-error-dyn}
e_{k+1} = F_k e_k + w_k.
\end{align}
It follows that
\begin{align*}
\E[V^e_{k+1}(e_{k+1})|\mathcal{I}^e_k, \delta_k] &= \E[V^e_{k+1}(F_k e_k + w_k)|\mathcal{I}^e_k, \delta_k]\\[2\jot]
&= \E[V^e_{k+1}(-F_k e_k - w_k)|\mathcal{I}^e_k, \delta_k]\\[2\jot]
&= \E[V^e_{k+1}(-F_k e_k + w_k)|\mathcal{I}^e_k, \delta_k],
\end{align*}
where the first equality comes from (\ref{eq:perf-error-dyn}), the second equality from the induction hypothesis, and the last equality from Lemma~\ref{eq:lemma-zero-mean}. Therefore, $\E[V^e_{k+1} |\mathcal{I}^e_k, \delta_k]$ is an even function of $e_k$. This means that $\varrho_k$ and $\voi_k$ are also even functions of $e_k$. Moreover, using (\ref{c1:eq:cost-to-go2-perfect}), we can write $V^e_k$ as
\begin{align}
V^e_k = \left\{
\begin{array}{l l}
V^{e1}_k, & \ \text{if} \ \voi_k \geq 0, \\
V^{e0}_k, & \ \text{otherwise},
\end{array} \right.
\end{align}
where $V^{e1}_k$ and $V^{e0}_k$ are even functions of $e_k$. Hence, we conclude that $V^e_k$ is an even function of $e_k$. This completes the induction.
Now, using the triggering policy $\pi^*$ in the cost function $\Psi(\pi,\mu)$ given by Lemma~\ref{lem:1}, we obtain
\begin{equation*}
\begin{aligned}
\Psi(\pi^*,\mu) &= \E \Big[ x_0^T S_0 x_0 + \textstyle \sum_{k=0}^{N} \Big\{ \theta_k \mathbf{1}_{\voi_k \geq 0} + w_k^T S_{k+1} w_k\\[1\jot]
&\qquad \qquad \qquad \ + (u_k + L_k x_k)^T \Lambda_k (u_k + L_k x_k) \Big\} \Big],
\end{aligned}
\end{equation*}
where $\Lambda_k = B_k^T S_{k+1} B_k + R_k$. Since $x_0$, $\voi_k$, and $w_k$ are independent of the control policy, we define the value function $V^c_k$ associated with $\Psi(\pi^*,\mu)$ as
\begin{align*}
V^c_k &= \min_{\mathbf{u}^k}\E \Big[ \textstyle \sum_{t=k}^{N} (u_t + L_t x_t)^T \Lambda_t (u_t + L_t x_t) \Big| \mathcal{I}^c_k \Big].
\end{align*}
From the additivity of the value function $V^c_k$, we obtain
\begin{align*}
V^c_k &= \min_{u_{k}} \E \Big[ (u_k + L_k x_k)^T \Lambda_k (u_k + L_k x_k) \\[1\jot]
&\quad + \min_{u_{k+1}} \E \Big[ (u_{k+1} + L_{k+1} x_{k+1})^T \Lambda_{k+1}\\[1\jot]
&\quad \times (u_{k+1} + L_{k+1} x_{k+1}) + \dots \Big| \mathcal{I}^c_{k+1}\Big] \Big| \mathcal{I}^c_{k} \Big]\\[1\jot]
& =\min_{u_{k}} \E \Big[ (u_k + L_k x_k)^T \Lambda_k (u_k + L_k x_k) + V^c_{k+1} \Big| \mathcal{I}^c_{k}\Big],
\end{align*}
with initial condition $V^c_{N+1} = 0$. We prove by induction that the value function $V^c_k$ is a function of $e_k$. Clearly, the claim is satisfied for time $N+1$. We assume that the claim holds at time $k+1$, and we shall prove that it also holds at time $k$. Using the identity $x_k = \hat{x}_k + e_k$, we find
\begin{align*}
\E \big[ (u_k &+ L_k x_k)^T \Lambda_k (u_k + L_k x_k) \big| \mathcal{I}^c_{k} \big]\\[1\jot]
&= \E\Big[(u_k + L_k \hat{x}_k)^T \Lambda_k (u_k + L_k \hat{x}_k)\\[1\jot]
&\quad +e_k^T L_k^T \Lambda_k L_k e_k + 2 (u_k + L_k \hat{x}_k)^T \Lambda_k L_k e_k \Big| \mathcal{I}^c_{k} \Big]\\[2\jot]
&= (u_k + L_k \hat{x}_k)^T \Lambda_k (u_k + L_k \hat{x}_k) + e_k^T L_k^T \Lambda_k L_k e_k,
\end{align*}
where in the second equality we used the fact that $u_k$ and $\hat{x}_k$ are $\mathcal{I}^c_k$-measurable and $\E[ e_k | \mathcal{I}^c_k] = 0$. Hence, we have
\begin{equation}
\begin{aligned}\label{c1:eq:cost-to-go1-perfect}
V^c_k &= \min_{u_k} \Big\{ (u_k + L_k \hat{x}_k)^T \Lambda_k (u_k + L_k \hat{x}_k)\\[1\jot]
&\qquad \qquad \qquad + e_k^T L_k^T \Lambda_k L_k e_k + \E[V^c_{k+1} | \mathcal{I}^c_k] \Big\}.
\end{aligned}
\end{equation}
The minimizer in (\ref{c1:eq:cost-to-go1-perfect}) is obtained as $u^*_k = -L_k \hat{x}_k$. Moreover, we conclude that $V^c_k$ is a function of $e_k$. This completes the induction and also the proof.
\end{IEEEproof}
\begin{IEEEproof}[Proof of Proposition~\ref{c1:prop:imperf-KF}]
The output $y_k$ is available at the event trigger at each time instant. Hence, it is clear that given the information set $\mathcal{I}^e_{k}$ at the event trigger, the state estimate that minimizes the mean-square error is the conditional expectation $\E[{x}_k | \mathcal{I}^e_{k}]$, and the optimal estimator is the Kalman filter (see e.g., \cite{stengel1994}).
\end{IEEEproof}
\begin{IEEEproof}[Proof of Proposition~\ref{c1:prop:imperf-estimator}]
Given the information set $\mathcal{I}^c_{k}$ at the controller, the state estimate that minimizes the mean-square error is clearly the conditional expectation $\E[{x}_k | \mathcal{I}^c_{k}]$.
From the definition, $\hat{x}_{k+1} = \E[{x}_{k+1} | \mathcal{I}^c_{k+1}]$ and $P_{k+1} = \Cov[{x}_{k+1} | \mathcal{I}^c_{k+1}]$. Taking the conditional expectation of the process in (\ref{c1:eq:sys}), we can obtain the propagation of the state estimate and estimation error covariance as
\begin{align}
\hat{x}_{k+1} &= A_k \E[x_k | \mathcal{I}_{k+1}^c] + B_k u_{k}\label{c1:eq:imperfect-propag-mean},\\[2\jot]
P_{k+1} &= A_k \Cov[x_k | \mathcal{I}_{k+1}^c] A_k^T + W_k\label{c1:eq:imperf-propag-cov}.
\end{align}
From Assumption~\ref{assum-neg}, the controller makes no inference when $\delta_k = 0$. However, the controller receives $z_k = y_k$ and subsequently can make an inference when $\delta_k = 1$. Now, let us define $\xi_k = [x_k^T \ \ y_k^T]^T$. We can easily show that
\begin{align}
\E[\xi_k|\mathcal{I}_k^c]
&=\begin{bmatrix}
\hat{x}_k\\
C_k \hat{x}_k
\end{bmatrix}\label{c1:eq:imperf-mean},\\[1\jot]
\Cov[\xi_k|\mathcal{I}_k^c]
&=\begin{bmatrix}
P_k & P_k C_k^T\\
C_k P_k & C_k P_k C_k^T + V_k
\end{bmatrix}\label{c1:eq:imperf-cov}.
\end{align}
Now, we can use Lemma~\ref{c1:lemma-cond-expectation} together with the conditional distribution specified by mean and covariance in (\ref{c1:eq:imperf-mean}), (\ref{c1:eq:imperf-cov}), and find the update of the state estimate and estimation error covariance when $\delta_k = 1$ as
\begin{align*}
\E[x_k | \mathcal{I}_{k}^c, y_k] &= \hat{x}_{k} + K'_k (y_k - C_k \hat{x}_{k}),\\[2\jot]
\Cov[x_k | \mathcal{I}_{k}^c, y_k] &= P_{k} - K'_k C_k P_{k},
\end{align*}
where $K'_k = P_k C_k^T (C_k P_k C_k^T + V_k)^{-1}$. Therefore, from the definition of $\delta_k$ we can write
\begin{align}
\E[x_k | \mathcal{I}_{k}^c, y_k] &= \hat{x}_{k} + \delta_k K'_k(y_k - C_k \hat{x}_{k})\label{c1:eq:imperf-update-mean},\\[2\jot]
\Cov[x_k | \mathcal{I}_{k}^c, y_k] &= P_{k} - \delta_k K'_k C_k P_{k}.\label{c1:eq:imperf-update-cov}
\end{align}
Following the definition of $\mathcal{I}_{k}^c$ in (\ref{c1:eq:imperf-cont-infoset}) and the fact that $u_{k}$ is a measurable function of $\mathcal{I}_{k}^c$, we can write $\E[x_k | \mathcal{I}_{k+1}^c] = \E[x_k | \mathcal{I}_{k}^c, y_{k}]$. We obtain the results by substituting (\ref{c1:eq:imperf-update-mean}), (\ref{c1:eq:imperf-update-cov}) in (\ref{c1:eq:imperfect-propag-mean}), (\ref{c1:eq:imperf-propag-cov}) respectively.
\end{IEEEproof}
\begin{IEEEproof}[Proof of Theorem~\ref{c1:thm:imperf-optimality}]
We need to show that $(\pi^*,\mu^*)$ represents a Nash equilibrium. Using the optimal control policy $\mu^*_k$ in the cost function $\Psi(\pi,\mu)$ given by Lemma~\ref{lem:1}, we obtain
\begin{align*}
\Psi(\pi,\mu^*) & = \E \Big[ x_0^T S_0 x_0 + \textstyle \sum_{k=0}^{N} \Big\{ \theta_k \delta_k + w_k^T S_{k+1} w_k\\[2\jot]
&\qquad \qquad \qquad + e_k^T L_k^T (B_k^T S_{k+1} B_k + R_k) L_k e_k \Big\} \Big],
\end{align*}
where we used the definition of the estimation error $e_k$. Since $x_0$ and $w_k$ are independent of the triggering policy, we define the value function $V^e_k$ associated with $\Psi(\pi,\mu^*)$ as
\begin{align*}
V^e_k &= \min_{\boldsymbol{\delta}^k} \E \Big[ \textstyle \sum_{t=k}^{N} \theta_t \delta_t + e_{t+1}^T \Gamma_{t+1} e_{t+1} \Big| \mathcal{I}^e_k \Big],
\end{align*}
where $\Gamma_k = L_k^T (B_k^T S_{k+1} B_k + R_k) L_k$ with the exception of $\Gamma_{N+1} = 0$. From the additivity of the value function $V^e_k$, we have
\begin{align*}
V^e_k &= \min_{\delta_k} \E\Big[ \theta_k \delta_k + e_{k+1}^T \Gamma_{k+1} e_{k+1} \\[1\jot]
&\quad +\min_{\delta_{k+1}} \E\Big[ \theta_{k+1} \delta_{k+1} + e_{k+2}^T \Gamma_{k+2} e_{k+2} + \dots \Big|\mathcal{I}^e_{k+1} \Big] \Big| \mathcal{I}^e_k\Big] \\[1\jot]
&=\min_{\delta_k} \E\Big[\theta_k \delta_k + e_{k+1}^T \Gamma_{k+1} e_{k+1} + V^e_{k+1} \Big|\mathcal{I}^e_k\Big],
\end{align*}
with initial condition $V^e_{N+1} = 0$. We prove by induction that the value function $V^e_k$ is an even function of $\xi_k$ where $\xi_k = [e_k^T \ \ \nu_k^T]^T$. Clearly, the claim is satisfied for time $N+1$. We assume that the claim holds at time $k+1$, and we shall prove that it also holds at time $k$. We can write the dynamics of the estimation error at the controller as
\begin{align}\label{eq:prove-error-dyn-imperf}
e_{k+1} = A_k e_k - \delta_k K_k \nu_k + w_k.
\end{align}
Thus, we find
\begin{align*}
\E[& e_{k+1}^T \Gamma_{k+1} e_{k+1} | \mathcal{I}^e_k]\\[2\jot]
&= \E \Big[ e_k^T A_k^T \Gamma_{k+1} A_k e_k + w_k^T \Gamma_{k+1} w_k \\[1\jot]
&\qquad + \delta_k \nu_k^T K_k^T \Gamma_{k+1} K_k \nu_k + 2 e_k^T A_k^T \Gamma_{k+1} w_k\\[2\jot]
&\qquad - 2 \delta_k \nu_k^T K_k^T \Gamma_{k+1} w_k - 2 \delta_k \nu_k^T K_k^T \Gamma_{k+1} A_k e_k \Big| \mathcal{I}^e_k \Big]\\[1\jot]
&= \varepsilon_k^T A_k^T \Gamma_{k+1} A_k \varepsilon_k + \tr(A_k^T \Gamma_{k+1} A_k \Sigma_k) +\tr(\Gamma_{k+1} W_k)\\[2\jot]
&\qquad + \delta_k \nu_k^T K_k^T \Gamma_{k+1} K_k \nu_k - 2 \delta_k \nu_k^T K_k^T \Gamma_{k+1} A_k \varepsilon_k,
\end{align*}
where in the second equality we used the definitions of $\varepsilon_k$ and $\Sigma_k$ and the facts that $\nu_k$ is $\mathcal{I}^e_k$-measurable and that $w_k$ is independent of $e_k$ and $\mathcal{I}^e_k$. Hence, we have
\begin{equation}\label{c1:eq:cost-to-go2-imperfect}
\begin{aligned}
V^e_{k} &= \min_{\delta_k} \Big\{\theta_k \delta_k + \varepsilon_k^T A_k^T \Gamma_{k+1} A_k \varepsilon_k +\tr(\Gamma_{k+1} W_k)\\[1\jot]
&\qquad + \tr(A_k^T \Gamma_{k+1} A_k \Sigma_k) + \delta_k \nu_k^T K_k^T \Gamma_{k+1} K_k \nu_k\\[2\jot]
&\qquad - 2 \delta_k \nu_k^T K_k^T \Gamma_{k+1} A_k \varepsilon_k + \E[V^e_{k+1}|\mathcal{I}^e_k] \Big\}.
\end{aligned}
\end{equation}
The minimizer in (\ref{c1:eq:cost-to-go2-imperfect}) is obtained as $\delta_{k}^* = \mathbf{1}_{\voi_k \geq 0}$ where
\begin{align*}
\voi_k = \nu_k^T K_k^T \Gamma_{k+1} (2 A_k \varepsilon_k - K_k \nu_k) - \theta_k + \varrho_k,
\end{align*}
and $\varrho_k = \E[V^e_{k+1}|\mathcal{I}^e_k, \delta_k = 0] - \E[V^e_{k+1}|\mathcal{I}^e_k, \delta_k = 1]$. Moreover, we can find matrices $F_k$ and $G_k$ such that
\begin{align}\label{eq:xi-dyn}
\xi_{k+1} = F_k \xi_k + G_k n_k,
\end{align}
where $n_k = [w_k^T \ \ v_{k+1}^T]^T$. It follows that
\begin{align*}
\E[V^e_{k+1}(\xi_{k+1})|\mathcal{I}^e_k, \delta_k] &= \E[V^e_{k+1}(F_k \xi_k + G_k n_k)|\mathcal{I}^e_k, \delta_k]\\[2\jot]
&= \E[V^e_{k+1}(-F_k \xi_k - G_k n_k)|\mathcal{I}^e_k, \delta_k]\\[2\jot]
&= \E[V^e_{k+1}(-F_k \xi_k + G_k n_k)|\mathcal{I}^e_k, \delta_k],
\end{align*}
where the first equality comes from (\ref{eq:xi-dyn}), the second equality from the induction hypothesis, and the last equality from Lemma~\ref{eq:lemma-zero-mean}. Therefore, $\E[V^e_{k+1} |\mathcal{I}^e_k, \delta_k]$ is an even function of $\xi_k$. This means that $\varrho_k$ and $\voi_k$ are also even functions of $\xi_k$. Moreover, using (\ref{c1:eq:cost-to-go2-imperfect}), we can write $V^e_k$ as
\begin{align}
V^e_k = \left\{
\begin{array}{l l}
V^{e1}_k, & \ \text{if} \ \voi_k \geq 0, \\
V^{e0}_k, & \ \text{otherwise},
\end{array} \right.
\end{align}
where $V^{e1}_k$ and $V^{e0}_k$ are even functions of $\xi_k$. Hence, we conclude that $V^e_k$ is an even function of $\xi_k$. This completes the induction.
Now, using the triggering policy $\pi^*$ in the cost function $\Psi(\pi,\mu)$ given by Lemma~\ref{lem:1}, we obtain
\begin{equation*}
\begin{aligned}
\Psi(\pi^*,\mu) &= \E \Big[ x_0^T S_0 x_0 + \textstyle \sum_{k=0}^{N} \Big\{ \theta_k \mathbf{1}_{\voi_k \geq 0} + w_k^T S_{k+1} w_k\\[1\jot]
&\qquad \qquad \qquad \ + (u_k + L_k x_k)^T \Lambda_k (u_k + L_k x_k) \Big\} \Big],
\end{aligned}
\end{equation*}
where $\Lambda_k = B_k^T S_{k+1} B_k + R_k$. Following the same lines as in the proof of Theorem~\ref{c1:thm:perf-optimality}, associated with $\Psi(\pi^*,\mu)$, we obtain $u^*_k = - L_k \hat{x}_k$. This completes the proof.
\end{IEEEproof}
\begin{IEEEproof}[Proof of Theorem~\ref{thm-approx}]
The proofs for the perfect and imperfect information cases are similar. We shall show that $\Psi_k(\pi^+,\mu^*) \leq \Psi_k(\bar{\pi},\mu^*)$ for any $k$ and all initial conditions. Following Theorem~\ref{c1:thm:perf-optimality} or Theorem~\ref{c1:thm:imperf-optimality}, the partial cost $\Psi_k(\pi,\mu^*)$ consists of a term independent of the triggering policy and terms dependent on the triggering policy. Hence, we can restrict ourselves to the latter, and use the value function $V^e_k$. Therefore, in order to show $\Psi_k(\pi^+,\mu^*) \leq \Psi_k(\bar{\pi},\mu^*)$, it is enough to show $V^{\pi^+}_k \leq V^{\bar{\pi}}_k$. We prove this by induction. Clearly, $V^{\pi^+}_{N+1} = V^{\bar{\pi}}_{N+1} = 0$. Assume that the claim holds for $k+1$. We have
\begin{align*}
V^{\pi^+}_k &= \E \Big[ \theta_k \delta_k^{+} + e_{k+1}^T \Gamma_{k+1} e_{k+1} + V^{\pi^+}_{k+1} \Big|\mathcal{I}^e_k\Big]\\[1\jot]
&\leq \E \Big[ \theta_k \delta_k^{+} + e_{k+1}^T \Gamma_{k+1} e_{k+1} + V^{\bar{\pi}}_{k+1} \Big|\mathcal{I}^e_k\Big]\\[1\jot]
&\leq \E \Big[ \theta_k \bar{\delta}_k + e_{k+1}^T \Gamma_{k+1} e_{k+1}+ V^{\bar{\pi}}_{k+1} \Big|\mathcal{I}^e_k\Big] = V^{\bar{\pi}}_k,
\end{align*}
where the first and second equalities come from backward induction, the first inequality from the induction hypothesis, and the second inequality from the definition of the suboptimal triggering policy $\pi^+$.
\end{IEEEproof}
\begin{IEEEproof}[Proof of Proposition~\ref{c1:prop-Delta-perf}]
For the proof, it is enough to derive $\varrho^{\bar{\pi}}_k$ based on the periodic policy $\bar{\pi}$. First, note that
\begin{align*}
\E[V^{\bar{\pi}}_{k+1} | \mathcal{I}_k^e, & \delta_k] = \E \Big[ \textstyle \sum_{t=k+1}^{N} \theta_t + e_{t+1}^T \Gamma_{t+1} e_{t+1} \Big| \mathcal{I}_k^e, \delta_k \Big]\\[2\jot]
&= \textstyle \sum_{t=k+1}^{N} \theta_t + \bar{e}_{t+1}^T \Gamma_{t+1} \bar{e}_{t+1} + \tr(\Gamma_{t+1} \bar{P}_{t+1}),
\end{align*}
where in the first equality we used the definition of $V^{\bar{\pi}}_{k+1}$ and the fact that $\delta_t = 1$ for all $t \geq k+1$ and in the second equality the definitions $\bar{e}_t = \E[ e_t | \mathcal{I}_k^e, \delta_k]$ and $\bar{P}_t = \Cov[ e_t | \mathcal{I}_k^e, \delta_k]$ for all $t \geq k+1$.
From the dynamics of the estimation error in (\ref{eq:prove-error-dyn-perf}), given the fact that $\delta_t = 1$ for all $t \geq k+1$, we obtain
\begin{align*}
e_{t+1} = w_t.
\end{align*}
Accordingly, we have $\bar{e}_{t+1} = 0$ and $\bar{P}_{t+1} = W_t$. Hence, we find
\begin{align*}
\E[V^{\bar{\pi}}_{k+1} | \mathcal{I}_k^e, \delta_k] = \textstyle \sum_{t=k+1}^{N} \theta_t + \tr(\Gamma_{t+1} W_t).
\end{align*}
Finally, following the definition of $\varrho^{\bar{\pi}}_k$, we have
\begin{align*}
\varrho^{\bar{\pi}}_k = \E[V^{\bar{\pi}}_{k+1} | \mathcal{I}_k^e, \delta_k = 0] - \E[V^{\bar{\pi}}_{k+1} | \mathcal{I}_k^e, \delta_k = 1] = 0.
\end{align*}
Incorporating this into (\ref{eq:voi-perfect}), we obtain the result.
\end{IEEEproof}
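In other words, under the periodic reference policy the continuation values coincide for $\delta_k = 0$ and $\delta_k = 1$, so in the perfect information case the value of information reduces to the purely quadratic test of $e_k^T A_k^T \Gamma_{k+1} A_k e_k$ against the communication price $\theta_k$.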
\begin{IEEEproof}[Proof of Proposition~\ref{c1:prop-Delta-imperf}]
For the proof, it is enough to derive $\varrho^{\bar{\pi}}_k$ based on the periodic policy $\bar{\pi}$. First, note that
\begin{align*}
\E[V^{\bar{\pi}}_{k+1} | \mathcal{I}_k^e, & \delta_k] = \E \Big[\textstyle \sum_{t=k+1}^{N} \theta_t + e_{t+1}^T \Gamma_{t+1} e_{t+1} \Big| \mathcal{I}_k^e, \delta_k \Big]\\[2\jot]
&= \textstyle \sum_{t=k+1}^{N} \theta_t + \bar{e}_{t+1}^T \Gamma_{t+1} \bar{e}_{t+1} + \tr(\Gamma_{t+1} \bar{P}_{t+1}),
\end{align*}
where in the first equality we used the definition of $V^{\bar{\pi}}_{k+1}$ and the fact that $\delta_t = 1$ for all $t \geq k+1$ and in the second equality the definitions $\bar{e}_t = \E[ e_t | \mathcal{I}_k^e, \delta_k]$ and $\bar{P}_t = \Cov[ e_t | \mathcal{I}_k^e, \delta_k]$ for all $t \geq k+1$.
From the dynamics of the estimation error in (\ref{eq:prove-error-dyn-imperf}), given the fact that $\delta_t = 1$ for all $t \geq k+1$, we obtain
\begin{align*}
e_{t+1} = (A_t - K_t C_t) e_t + w_t - K_t v_t.
\end{align*}
Accordingly, we have
\begin{align*}
\bar{e}_{t+1} &= (A_t - K_t C_t) \bar{e}_t,\\[2\jot]
\bar{P}_{t+1} &= (A_t - K_t C_t) \bar{P}_t (A_t - K_t C_t)^T + W_t + K_t V_t K_t^T.
\end{align*}
Now, we can calculate $\E[V^{\bar{\pi}}_{k+1} | \mathcal{I}_k^e, \delta_k]$ corresponding to each value of $\delta_k$. When $\delta_k = 0$, we have
\begin{align*}
\bar{e}^0_{t+1} &= (A_t - K^0_t C_t) \bar{e}^0_t,\\[2\jot]
\bar{P}^0_{t+1} &= (A_t - K^0_t C_t) \bar{P}^0_t (A_t - K^0_t C_t)^T + W_t + K^0_t V_t K_t^{0T},
\end{align*}
and
\begin{align*}
P^0_{t+1} &= A_t P^0_t A_t^T + W_t - K^0_t C_t P^0_{t} A_t^T,\\[2\jot]
K^0_t &= A_t P^0_t C_t^T (C_t P^0_t C_t^T + V_t)^{-1},
\end{align*}
for $t \geq k+1$ with initial conditions $\bar{e}^0_{k+1} = A_k \varepsilon_k$, $\bar{P}^0_{k+1} = A_k \Sigma_k A_k^T + W_k$, and $P^0_{k+1} = A_k P_k A_k^T + W_k$. Hence, we find
\begin{align*}
\E[V^{\bar{\pi}}_{k+1} | \mathcal{I}_k^e, \delta_k = 0] &= \textstyle \sum_{t=k+1}^{N} \theta_t + \bar{e}_{t+1}^{0T} \Gamma_{t+1} \bar{e}^0_{t+1}\\[2\jot]
&\qquad + \textstyle \sum_{t=k+1}^{N} \tr(\Gamma_{t+1} \bar{P}^0_{t+1}).
\end{align*}
Moreover, when $\delta_k = 1$, we have
\begin{align*}
\bar{e}^1_{t+1} &= (A_t - K^1_t C_t) \bar{e}^1_t,\\[2\jot]
\bar{P}^1_{t+1} &= (A_t - K^1_t C_t) \bar{P}^1_t (A_t - K^1_t C_t)^T + W_t + K^1_t V_t K_t^{1T},
\end{align*}
and
\begin{align*}
P^1_{t+1} &= A_t P^1_t A_t^T + W_t - K^1_t C_t P^1_{t} A_t^T,\\[2\jot]
K^1_t &= A_t P^1_t C_t^T (C_t P^1_t C_t^T + V_t)^{-1},
\end{align*}
for $t \geq k+1$ with initial conditions $\bar{e}^1_{k+1} = A_k \varepsilon_k - K_k \nu_k$, $\bar{P}^1_{k+1} = A_k \Sigma_k A_k^T + W_k$, and $P^1_{k+1} = A_k P_k A_k^T + W_k - K_k C_k P_k A_k^T$. Hence, we find
\begin{align*}
\E[V^{\bar{\pi}}_{k+1} | \mathcal{I}_k^e, \delta_k = 1] &= \textstyle \sum_{t=k+1}^{N} \theta_t + \bar{e}_{t+1}^{1T} \Gamma_{t+1} \bar{e}^1_{t+1}\\[2\jot]
&\qquad + \textstyle \sum_{t=k+1}^{N} \tr(\Gamma_{t+1} \bar{P}^1_{t+1}).
\end{align*}
Finally, following the definition of $\varrho^{\bar{\pi}}_k$, we have
\begin{align*}
\varrho^{\bar{\pi}}_k &= \E[V^{\bar{\pi}}_{k+1} | \mathcal{I}_k^e, \delta_k = 0] - \E[V^{\bar{\pi}}_{k+1} | \mathcal{I}_k^e, \delta_k = 1]\\[2\jot]
&= \textstyle \sum_{t=k+1}^{N-1} \bar{e}_{t+1}^{0T} \Gamma_{t+1} \bar{e}^0_{t+1} + \tr(\Gamma_{t+1} \bar{P}^0_{t+1})\\[2\jot]
&\qquad - \textstyle \sum_{t=k+1}^{N-1} \bar{e}_{t+1}^{1T} \Gamma_{t+1} \bar{e}^1_{t+1} + \tr(\Gamma_{t+1} \bar{P}^1_{t+1})\\[2\jot]
&= \textstyle \sum_{t=k+2}^{N} \bar{e}_t^{0T} \Gamma_t \bar{e}^0_t + \tr(\Gamma_t \bar{P}^0_t)\\[2\jot]
&\qquad - \textstyle \sum_{t=k+2}^{N} \bar{e}_t^{1T} \Gamma_t \bar{e}^1_t + \tr(\Gamma_t \bar{P}^1_t).
\end{align*}
Incorporating this into (\ref{eq:voi-imperfect}), we obtain the result.
\end{IEEEproof}
\section*{Acknowledgment}
This work has been carried out with the support of the Technical University of Munich - Institute for Advanced Study, and has been partially funded by the DFG Priority Program SPP1914 ``Cyber-Physical Networking,'' by DARPA through ARO grant W911NF1410384, by ARO grant W911NF-15-1-0646, and by NSF grant CNS-1544787.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 1,441 |
{"url":"http:\/\/fermatslibrary.com\/s\/isaac-newton-as-a-probabilist","text":"Instructions: To add a question\/comment to a specific line, equation, table or graph simply click on it.\nClick on the annotations on the left side of the paper to read and reply to the questions and comments.\n[Samuel Pepys](https:\/\/en.wikipedia.org\/wiki\/Samuel_Pepys \"samuel p...\nHarvard Professor Joe Blitzstein introducing the Newton-Pepys probl...\nNewton and Pepys exchanged a series of 6 letters in late 1693. Pepy...\nSamuel Pepys was a gambling man, and the Newton-Pepys problem is re...\n### The Problem * A. Throwing 6 dice, and getting at least one 6. ...\nAm I correct in saying that De Moivre's approximation here is just ...\nProbability table of obtaining **n** or more 6 when **n dice** are ...\nA binomial distribution with parameters $n$ and $p$ is the discrete...\nNewton gave Pepys the correct answer to the problem, although his e...\nThere is a problem with this part of Newton's argument. He assumes ...\nStatistical Science\n2006, Vol. 21, No. 3, 400\u2013403\nDOI: 10.1214\/088342306000000312\n\u00a9 Institute of Mathematical Statistics, 2006\nIsaac Newton as a Probabilist\nStephen M. Stigler\nAbstract. In 1693, Isaac Newton answered a query from Samuel Pepys\nabout a problem involving dice. Newton\u2019s analysis is discussed and atten-\ntion is drawn to an error he made.\nOn November 22, 1693, Samuel Pepys wrote a let-\nter to Isaac Newton posing a problem in probability.\nNewton responded with three letters, \ufb01rst answering\nthe question brie\ufb02y, and then offering more informa-\ntion as Pepys pressed for clari\ufb01cation. Pepys (1633\u2013\n1703) is best known today for his posthumously pub-\nlished diary covering the intimate details of his life over\nthe years 1660\u20131669, but Newton would not have been\naware of that diary. He would instead have known of\nPepys as a former Secretary of Admiralty Affairs who\nhad served as President of the Royal Society of Lon-\ndon from 1684 through November 30, 1686, the same\nperiod when Newton\u2019s great Principia was presented\nto the Royal Society and its preparation for the press\nbegun. But Pepys\u2019 letter did not concern scienti\ufb01c mat-\nters. He sought advice on the wisdom of a gamble.\n1. PEPYS\u2019 PROBLEM\nThe three letters Newton wrote to Pepys on this\nproblem, on November 26 and December 16 and 23,\n1693, are almost all we have bearing on Newton and\nprobability. Some of the letters were published with\nother private correspondence in Pepys (1825,Vol.2,\npages 129\u2013135; 1876\u20131879, Vol. 6, pages 177\u2013181)\nand more completely in Pepys (1926,Vol.1,pages\n72\u201394). The letters were cited in a textbook by Chrys-\ntal (1889, page 563), where he gave Pepys\u2019 problem as\nan exercise, but they were little known until they were\nbrought to a wide public attention when selections\nwere reprinted with commentary independently by Dan\nPedoe (1958, pages 43\u201348), Florence David (1959;\n1962, pages 125\u2013129) and Emil D. Schell (1960).\nThese authors and several others, notably Chaundy and\nBullard (1960), Mosteller (1965, pages 6, 33\u201335) and\nStephen M. Stigler is Ernest Dewitt Burton Distinguished\nService Professor of Statistics, Department of Statistics,\nUniversity of Chicago, Chicago, Illinois 60637, USA\n(e-mail: stigler@galton.uchicago.edu).\nGani (1982) have discussed the problem Pepys posed\nand Newton\u2019s solution. 
Others accorded it briefer no-\ntice, including Sheynin (1971), who dismissively rele-\ngated it to a footnote; Westfall (1980, pages 498\u2013499),\nwho gave unwarranted credence to the excuse Pepys\nopened his \ufb01rst letter with, that the problem had some\nconnection to a state lottery; and Gjertsen (1986, pages\n427\u2013428). But none of these or any other writer seems\nto have noted that a major portion of Newton\u2019s solution\nis wrong. The error casts an interesting light on how\nNewton thought about the matter, and it seems useful\nto revisit the question.\nSince Pepys\u2019 original statement was, as Newton no-\nticed, somewhat ambiguous, I will state the problem in\nparaphrase as it emerged in the correspondence:\nWhich of the following three propositions has the\ngreatest chance of success?\nA. Six fair dice are tossed independently and at least\none \u201c6\u201d appears.\nB. Twelve fair dice are tossed independently and at\nleast two \u201c6\u201ds appear.\nC. Eighteen fair dice are tossed independently and\nat least three \u201c6\u201ds appear.\nAs it emerged in the correspondence, Pepys initially\nthought that the third of these (C) was the most prob-\nable, but when Newton convinced him after repeated\nquestioning by Pepys that in fact A was the most prob-\nable, Pepys ended the correspondence and announced\nhe would, using Mosteller\u2019s (1965, page 35) colorful\nlater term, welsh on a bet he had made.\n2. NEWTON\u2019S SOLUTION\nNewton stated the solution three times during the\ncorrespondence: \ufb01rst he gave a simple logical reason\nfor concluding that A is the most probable, then he re-\nported a detailed exact enumeration of the chances in\neach of the three cases, and \ufb01nally he returned to the\nlogical argument and gave it in more detail.\n400\nISAAC NEWTON AS A PROBABILIST 401\nNewton\u2019s exact enumeration was elegant and \ufb02aw-\nless; it is equivalent to the solution as might be pre-\nsented in an elementary class today. Newton worked\nfrom \ufb01rst principles assuming no knowledge of the bi-\nnomial distribution; we can now express what he found\nby this calculation in terms of a random variable X\nwith a Binomial (N, p) distribution as follows:\nA. P(X1) = 31031\/46656 =0.665 when N =6\nand p =1\/6.\nB. P(X 2) = 1346704211\/2176782336 = 0.619\nwhen N =12 and p = 1\/6.\nC. Here Newton simply stated that, \u201cIn the third\ncase the value will be found still less.\nIn fact,\nP(X3) = 60666401980916\/101559956668416\n= 0.597\nwhen N = 18 and p = 1\/6, as another of Pepys\u2019 cor-\nrespondents (a Mr. George Tollet) found after much la-\nbor, while trying to duplicate Newton\u2019s results (Pepys,\n1926, Vol. 1, pages 92\u201394).\nPepys had originally thought that C was the most\nprobable; Newton\u2019s logical arguments and his careful\nenumeration of chances pointed in the contrary direc-\ntion. But while the conclusion Newton reached is cor-\nrect, only the enumeration stands up under scrutiny. To\nunderstand why, it will help to develop a heuristic un-\nderstanding of why A is the most probable.\n3. A HEURISTIC VIEW\nPepys\u2019 problem amounts to a comparison of three\nBinomial (N, p) distributions with p = 1\/6, namely\nthose with N = 6, 12 and 18. He desired a ranking\nof P(X Np) for the three cases. Now, in all Bino-\nmial distributions where the mean Np is an integer, Np\nis also the median of the distribution (and indeed the\nmode as well). 
This is always true, surprisingly even\nin cases like those under study here, where the dis-\ntributions are quite skewed and asymmetric. This is a\nbyproduct of a proof that for any N and any p, the dif-\nference between the mean and median of a binomial\ndistribution is strictly less than ln(2)<0.7 (Hamza,\n1995). So when the mean Np is an integer the two\nmust agree, and this implies in particular that in all\nthese cases,\nP(XNp)\n1\n2\nand P(X Np)\n1\n2\n,\nand so in each case P(X Np) exceeds 1\/2bya\nfraction of the probability P(X = Np).Infact,inthe\ncases Pepys considered we have to a fair approxima-\ntion P(X Np) 1\/2 +(0.4)P (X = Np). The rank-\ning Newton calculated then re\ufb02ects the fact that the\nsize of the modal probability for a binomial distribu-\ntion, P(X = Np), decreases as N increases and the\ndistribution spreads out, p being held constant. In-\ndeed, as De Moivre would \ufb01nd by the 1730s, P(X =\nNp) is well approximated by 1\/\n(2\u03c0Np(1 p))\n1.07\/\nN when p = 1\/6. So in particular, the proba-\nbilities in A, B, C are about 1\/2 + (0.4)(1.07)\/\nN,\nan approximation that would give values 0.67, 0.62,\n0.60, which agree with the exact values to two places.\nChaundy and Bullard (1960) provide a cumbersome\nrigorous proof that this sequence is decreasing, in some\ngenerality.\nNote that this approximation depends crucially upon\nthe probabilities P(X 1), P(X 2) and P(X 3)\nof A, B, C being P(X Np) [i.e. P(X E(X))]\nfor the three respective distributions, and the result de-\npends upon this as well. Franklin B. Evans observed\nthis sensitivity already in 1961, \ufb01nding, for example,\nthat P(X 1|N = 6,p = 1\/4) = 0.8220 <P(X\n2|N = 12,p = 1\/4) = 0.8416 (Evans, 1961). That is,\nthe ordering of A and B that Newton found for fair dice\ncan fail for weighted dice, and indeed will tend to fail\nwhen p is suf\ufb01ciently greater than 1\/6, even though\nthey be tossed fairly and independently.\n4. NEWTON\u2019S LOGICAL ARGUMENT\nIn his \ufb01rst letter to Pepys on November 26, 1693,\nNewton had been content to give a short logical argu-\nment for why the chance of A must be the largest. He\ndissected the problem carefully, and made it clear that\nthe proposition required that in each case at least the\ngiven number of \u201c6\u201ds should be thrown. Newton then\nrestated the question and gave an apparently clear argu-\nment as to why the chance for A had to be the largest:\n\u201cWhat is the expectation or hope of A to\nthrow every time one six at least with six\ndyes?\n\u201cWhat is the expectation or hope of B to\nthrow every time two sixes at least with\ntwelve dyes?\n\u201cWhat is the expectation or hope of C to\nthrow every time three sixes at least with 18\ndyes?\nAnd whether has not B and C as great an\nexpectation or hope to hit every time what\nthey throw for as A hath to hit his what he\nthrows for?\n402 S. STIGLER\n\u201cIf the question be thus stated, it appears\nby an easy computation that the expectation\nof A is greater than that of B or C; that is,\nthe task of A is the easiest. And the reason\nis because A has all the chances of sixes\non his dyes for his expectation, but B and\nC have not all the chances on theirs. For\nwhen B throws a single six or C but one or\ntwo sixes, they miss of their expectations.\n(Pepys, 1926, Vol. 1, 75\u201376; Schell, 1960)\nNewton\u2019s conclusion was of course correct but the\nargument is not. 
It is easy for us to see that it cannot\nwork because the argument applies equally well for\nweighted dice, and as we now know, the conclusion\nfails if, for example, p is 1\/4. Any correct argument\nmust explicitly use the fact that 1, 2, 3 are the expec-\ntations for A, B, C, and Newton\u2019s does not. His enu-\nmeration did do so, but A would equally well have \u201call\nthe chances of sixes on his dyes\u201d even if the chance of\na\u201c6\u201dis1\/4. Newton\u2019s proof refers only to the sample\nspace and makes no use of the probabilities of different\noutcomes other than that the dice are thrown indepen-\ndently, and so it must fail. But Newton does casually\nuse the word \u201cexpectations\u201d; might he not have had\nsomething deeper in mind? His subsequent correspon-\ndence con\ufb01rms that he did not.\nIn his third letter of December 23, 1693, Newton re-\nturned to this argument and expanded slightly on it.\nHe personi\ufb01ed the choices by naming the player faced\nwith bet A \u201cPeter\u201d and the player faced with bet B\n\u201cJames. He then considered a \u201cthrow\u201d to be six dice\ntossed at once, so then Peter was to make (at least) one\n\u201c6\u201d in a throw, while James was to make (at least) two\n\u201c6\u201ds in two throws.\nNewton then wrote, As the wager is stated, Peter\nmust win as often as he throws a six [i.e., makes at\nleast one \u201c6\u201d among the six dice], but James may of-\nten throw a six and yet win nothing, because he can\nnever win upon one six alone. If Peter \ufb02ings a six (for\ninstance) four times in eight throws, he must certainly\nwin four times, but James upon equal luck may throw\na six eight times in sixteen throws and yet win nothing.\nFor as the question in the wager is stated, he wins not\nupon every single throw with a six as Peter doth, but\nonly upon every two throws wherein he throws at least\ntwo sixes. And therefore if he \ufb02ings but one six in the\ntwo \ufb01rst throws, and one in the two next, and but one\nin the two next, and so on to sixteen throws, he wins\nnothing at all, though he throws a six twice as often as\nPeter doth, and by consequence have equal luck with\nPeter upon the dyes. (Pepys, 1926, Vol. 1, page 89;\nSchell, 1960)\nHere we can see more clearly how Newton was led\nastray: Even though in the \ufb01rst letter he had care-\nfully pointed out that \u201cthrowing a six\u201d must be read as\n\u201cthrowing at least one six, here he confused the two\nstatements. His argument might work if \u201cexactly one\nsix\u201d were understood, but then it would not correspond\nto the problem as he and Pepys had agreed it should be\nunderstood. Indeed, Peter will not necessarily register\nagainwithevery\u201c6:ifhehastwoormoreinthe\ufb01rst\n\u201cthrow\u201d of six dice, he wins the same as with just one.\nNewton reduced the problem to single \u201cthrows\u201d where\neach throw is a Binomial (N = 6,p = 1\/6), and he lost\nsight of the multiplicity of outcomes that could lead to\na win. Many of Peter\u2019s wins (those with at least two\n\u201c6\u201ds, which occurs in about 40% of the wins) would be\nwins for James as well. And in some of James\u2019s wins\n(those with at least two \u201c6\u201ds in one-half of tosses and\nnone in the other half, about 28% of James\u2019s wins) Pe-\nter would not have done so well on \u201cequal luck\u201d (he\nwould have won but half the time). Evidently to make\nNewton\u2019s argument correct would take as much work\nas an enumeration!\n5. 
CONCLUSION\nNewton\u2019s logical argument failed, but modern prob-\nabilists should admire the spirit of the attempt. It was a\nsimple appeal to dominance, a claim that all sequences\nof outcomes will favor Peter at least as often as they\nwill favor James. It had to fail because the truth of the\nproposition depends upon the probability measure as-\nsigned to the sequences and the argument did not. But\nthis was 1693, when probability was in its infancy.\nWhy has apparently no one commented upon this\nerror before? There are several possible explanations,\nand no doubt each held for at least one reader. (1) The\nletters were read super\ufb01cially, with no attempt to parse\nthe somewhat archaic language of the logical proof,\nwhich after all points in the right direction. (2) The\nlanguage was puzzling and unclear to the reader (and\nNewton was not available to ask), but it was accepted\nsince he was, after all, Isaac Newton, and the calcu-\nlation clearly showed he was sound on the important\nfundamentals. (3) The reader may even have seen that\nit was not a satisfactory argument, but drew back from\naccusing Newton of error, particularly since he got the\nnumbers right.\nIn a sense the argument is more interesting be-\ncause it is wrong. Newton was thinking like a great\nISAAC NEWTON AS A PROBABILIST 403\nprobabilist\u2014attempting a \u201ceureka\u201d proof that made the\nissue clear in a \ufb02ash. When successful, this is the high-\nest form of mathematical art. That it failed is no em-\nbarrassment; a simple argument can be wonderful, but\nit can also create an illusion of understanding when the\nmatter is, as here, deeper than it appears on the surface.\nIf Newton fooled himself, he evidently took with him\na succession of readers more than 250 years later. Yet\neven they should feel no embarrassment. As Augus-\ntus De Morgan once wrote, \u201cEveryone makes errors in\nprobabilities, at times, and big ones. (Graves, 1889,\npage 459)\nREFERENCES\nCHAUNDY,T.W.andBULLARD, J. E. (1960). John Smith\u2019s prob-\nlem. Mathematical Gazette 44 253\u2013260.\nC\nHRYSTAL, G. (1889). Algebra; An Elementary Text-Book for the\nHigher Classes of Secondary Schools and for Colleges 2.Adam\nand Charles Black, Edinburgh.\nD\nAV I D , F. N. (1959). Mr Newton, Mr Pepys & Dyse [sic]: A his-\ntorical note. Ann. Sci. 13 137\u2013147. (This is the volume for the\nyear 1957; this third issue, while nominally dated September\n1957, was published April 1959, as stated in the volume Table\nof Contents.)\nD\nAV I D , F. N. (1962). Games, Gods and Gambling. Grif\ufb01n, Lon-\ndon.\nE\nVANS, F. B. (1961). Pepys, Newton, and Bernoulli trials. Reader\nobservations on recent discussions, in the series Questions and\nanswers. Amer. Statist. 15 (1) 29.\nG\nANI, J. (1982). Newton on \u201ca question touching ye different odds\nupon certain given chances upon dice. Math. Sci. 7 61\u201366.\nMR0642167\nG\nJERTSEN, D. (1986). The Newton Handbook. Routledge and\nKegan Paul, London.\nG\nRAVES, R. P. (1889). Life of Sir William Rowan Hamilton 3.\nHodges, Figgis, Dublin. Reprinted 1975 by Arno Press, New\nYork .\nH\nAMZA, K. (1995). The smallest uniform upper bound on the\ndistance between the mean and the median of the binomial\nand Poisson distributions. Statist. Probab. Lett. 23 21\u201325.\nMR1333373\nM\nOSTELLER, F. (1965). Fifty Challenging Problems in Probability\nP\nEDOE, D. (1958). The Gentle Art of Mathematics. Macmil-\nlan, New York. (Reprints the \ufb01rst two of Newton\u2019s letters.)\nMR0102468\nP\nEPYS, S. 
(1825). Memoirs of Samuel Pepys, Esq. FRS 1, 2.Henry\nColburn, London. (Reprints the \ufb01rst of Pepys\u2019 letters and two of\nNewton\u2019s replies.)\nP\nEPYS, S. (1876\u20131879). Diary and Correspondence of Samuel\nPepys, Esq. F.R.S 16. Bickers, London. (Reprints the \ufb01rst of\nPepys\u2019 letters and two of Newton\u2019s replies.)\nP\nEPYS, S. (1926). Private Correspondence and Miscellaneous Pa-\npers of Samuel Pepys 1679\u20131703 in the Possession of J. Pepys\nCockerell 1, 2. G. Bell and Sons, London. [This is the fullest\nreprinting. The portion of this correspondence directly with\nNewton is fully reprinted in Turnbull (1961) 293\u2013303.]\nS\nCHELL, E. D. (1960). Samuel Pepys, Isaac Newton, and prob-\nability. Published as part of the series Questions and answers.\nAmer. Statist. 14 (4) 27\u201330. [Schell\u2019s article includes a reprint-\ning of the Newton-Pepys letters. Further comments by readers\nappeared in Amer. Statist. 15 (1) 29\u201330.]\nS\nHEYNIN, O. B. (1971). Newton and the classical theory of prob-\nability. Archive for History of Exact Sciences 7 217\u2013243.\nT\nURNBULL, H. W., ed. (1961). The Correspondence of Isaac New-\nton 3: 1688\u20131694. Cambridge Univ. Press. MR0126329\nW\nESTFALL, R. S. (1980). Never at Rest: A Biography of Isaac\nNewton. Cambridge Univ. Press. MR0741027","date":"2017-01-17 10:49:47","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7977899312973022, \"perplexity\": 5016.783100690299}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-04\/segments\/1484560279657.18\/warc\/CC-MAIN-20170116095119-00550-ip-10-171-10-70.ec2.internal.warc.gz\"}"} | null | null |
Q: Mouse cursor stuck but its movement is still registering When I first turn on the computer the mouse works as usual, but shortly after that the mouse cursor will stop moving and will stay stuck in one spot on the screen. This is hard to describe but although the picture of the cursor is stuck in one spot on the screen the position of the mouse seems to continue to update. So if I move the mouse around I'll see hyperlinks and buttons highlight to indicate that the mouse is moving over them. I can also click and the button or hyperlink activates as usual. The image of the cursor is stuck in one spot but it changes from pointer to cursor to text selection as I move the mouse around.
I've tried to reset the mouse but the behavior stays the same.
sudo rmmod psmouse
sudo modprobe psmouse
This behavior happened with Xubuntu 12.04, and after I installed Linux Mint 17 I see the same behavior. I have a wireless Logitech mouse. I tried plugging in a wired mouse and see the same behavior.
Any suggestions? I'm not even sure what to search for!
Thanks!
A: Your cursor problem persists while distros change, which indicates that it may be a hardware problem. Try disabling drawing the cursor via hardware and then restart the X server. This may be done by editing your xorg.conf, which is often located in /etc/X11/xorg.conf. Note that it may also be split into several files under /etc/X11/xorg.conf.d - see the xorg.conf(5) man page for more details.
*find
Section "Device"
which refers to the configuration of your graphics adapter.
*before the end of the section, add the following:
Option "HWCursor" "off"
(or change it appropriately, if already present; a complete example section is shown after this list)
*restart Xorg; this can be done in a number of ways:
*log out and in
*go into a virtual terminal and kill it, e.g. killall Xorg (usually has to be done with root privileges) - the X server should be respawned after being killed
*restart your computer
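For reference, the edited section might end up looking like this (the Identifier and Driver values below are placeholders - yours will differ):
Section "Device"
    Identifier "Card0"
    Driver     "intel"
    Option     "HWCursor" "off"
EndSection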
Source: http://www.pendrivelinux.com/mouse-pointer-disappears-after-switching-users/
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 7,733 |
// Ambient declarations for the globals this Express app relies on;
// everything is typed as `any`, so the compiler accepts the names without real typings.
declare var express: any;
declare var app: any;
declare var path: any;
declare var collectionRoutes: any;
declare var staticPublichPath: any;
declare var server: any;
| {
"redpajama_set_name": "RedPajamaGithub"
} | 4,533 |
Appearing on Good Morning Britain – behind the scenes
By Georgette Culley | January 6, 2015
Talk to the Press client Zoe Turner (the girl saved by her tight dress) appearing on Good Morning Britain (and we take you behind the scenes…)
You may have seen the press coverage of the 21-year-old woman whose £35 bodycon dress saved her life in a near-death car crash. When Zoe Turner contacted us with her story, we knew a number of publications would be interested in an interview. She told how her bodycon dress saved her life after she suffered multiple injuries in a car crash.
Incredibly, doctors told Zoe that her tight attire stopped her bones from popping out and piercing her vital organs. Had the 21-year-old graduate not been wearing the £34.99 garment from online retailer Missguided, she would not be here today. She said: "I couldn't believe my dress saved my life. That's the best £35.00 I've ever spent. Although I didn't come off lightly, the end result could have been a lot worse."
Doctors even told her that had she not been wearing the bodycon dress, she would be dead. "He went on to tell me that if I hadn't worn such a tight dress which held in place my bones as the car impacted," she continues, "I would have most definitely punctured vital organs as my bones went out of place."
We put Zoe's story out on the news wires and it made The Mirror, The Sun, The Mail, The Star, The Express, The Telegraph and The Times. Across the pond, it made the New York Post! This week, Zoe went on Good Morning Britain and told Susanna Reid and Ben Shephard about her amazing story. We are now in the process of securing her a magazine deal.
Our reporter Ben accompanied Zoe onto the Good Morning Britain set and took a few behind-the-scenes photos, including this cheeky selfie.
If you've ever considered appearing on Good Morning Britain, and would like to gain the maximum coverage for your story (and the maximum payments), but still maintain full control of your story, Talk to the Press are at your service. Our service is 100% free to you, so contact us today to find out more. You can also see more about the story selling process here: https://www.talktothepress.co.uk/sell-my-story
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 5,155 |
Q: How to efficiently spark read source data with custom schema with complex fields? Let us say that I have the following raw source data, delimited by commas, but with some number of fields that have a very custom format. For simplicity, I have minimized this example to 3 fields/columns. In this case, the custom field is the address with special formatting (key/values surrounded by braces). There may be other fields with a completely different format.
Bob,35,[street:75917;city:new york city;state:ny;zip:10000]
...
Roger,75,[street:81659;city:los angeles;state:ca;zip:99999]
Case classes:
case class Person(name: String, age: Int, address: Address)
case class Address(street: String, city: String, state: String, zip: Int)
What is the most efficient way to process the source data (including parsing of the address field) into Dataset[Person]?
Currently, there are two options that come to mind:
Option 1 - Perform row by row manual conversion:
val df = spark.read.csv(source)  // without an explicit schema, every column is read as a string
val dataset = df.map(row =>
  Person(row.getAs[String]("_c0"), row.getAs[String]("_c1").toInt, getAddress(row.getAs[String]("_c2")))
).as[Person]
Option 2 - Utilize UDF (user defined functions) for the custom formatted columns and use withColumn and withColumnRenamed:
val udfAddress : UserDefinedFunction = udf((address: String) => toAddressObject(address))
var df = spark.read.csv(source)
df = df.withColumnRenamed("_c0", "name").withColumn("name", col("name").cast(StringType))
.withColumnRenamed("_c1", "age").withColumn("age", col("age").cast(IntegerType))
.withColumnRenamed("_c2", "address").withColumn("address", udfAddress(col("address")))
val dataset = df.as[Person]
Generally, between Option 1 and Option 2, which is more efficient and why? Also, if there is another option that is more efficient at processing/parsing custom formatted fields, I am open to other options as well. Is there a better option that involves composing a StructType with StructFields manually? Thanks!
A: One of the alternatives might be the following.
Please note that I haven't conducted any performance testing.
Load the test data (assuming an active SparkSession named spark and the following imports):
import org.apache.spark.sql.catalyst.ScalaReflection
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.StructType
import spark.implicits._
val data =
"""
|Bob,35,[street:75917;city:new york city;state:ny;zip:10000]
|Roger,75,[street:81659;city:los angeles;state:ca;zip:99999]
""".stripMargin
val stringDS = data.split(System.lineSeparator())
.map(_.split("\\,").map(_.replaceAll("""^[ \t]+|[ \t]+$""", "")).mkString("|"))
.toSeq.toDS()
val df = spark.read
.option("sep", "|")
.option("inferSchema", "true")
// .option("header", "true")
// .option("nullValue", "null")
.csv(stringDS)
df.show(false)
df.printSchema()
/**
* +-----+---+----------------------------------------------------+
* |_c0 |_c1|_c2 |
* +-----+---+----------------------------------------------------+
* |Bob |35 |[street:75917;city:new york city;state:ny;zip:10000]|
* |Roger|75 |[street:81659;city:los angeles;state:ca;zip:99999] |
* +-----+---+----------------------------------------------------+
*
* root
* |-- _c0: string (nullable = true)
* |-- _c1: integer (nullable = true)
* |-- _c2: string (nullable = true)
*/
Convert Dataframe of Row to Person
val person = ScalaReflection.schemaFor[Person].dataType.asInstanceOf[StructType]
val toAddr = udf((map: Map[String, String]) => Address(map("street"), map("city"), map("state"), map("zip").toInt))
val p = df.withColumn("_c2", translate($"_c2", "[]",""))
.withColumn("_c2", expr("str_to_map(_c2, ';', ':')"))
.withColumn("_c2", toAddr($"_c2"))
.toDF(person.map(_.name): _*)
.as[Person]
p.show(false)
p.printSchema()
/**
* +-----+---+---------------------------------+
* |name |age|address |
* +-----+---+---------------------------------+
* |Bob |35 |[75917, new york city, ny, 10000]|
* |Roger|75 |[81659, los angeles, ca, 99999] |
* +-----+---+---------------------------------+
*
* root
* |-- name: string (nullable = true)
* |-- age: integer (nullable = true)
* |-- address: struct (nullable = true)
* | |-- street: string (nullable = true)
* | |-- city: string (nullable = true)
* | |-- state: string (nullable = true)
* | |-- zip: integer (nullable = false)
*/
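A note on the design choices above (I haven't benchmarked this particular job): translate and str_to_map are built-in Spark SQL functions, so Catalyst can optimize those two steps, while the final toAddr UDF - like the row-by-row map in Option 1 - is opaque to the optimizer. As a rule of thumb, keeping as much of the parsing as possible in built-in column functions tends to be the more efficient route.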
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 9,910 |
\section{Introduction}
The $k$-Schur functions of Lapointe, Lascoux and Morse \cite{LLM03} first arose
in the study of Macdonald polynomials. Since then, their study has flourished;
see for instance \cite{LM03, LM05, LM07, LS07, LLMS10, Lam10} and the
references therein. This is due, in part, to an important geometric
interpretation of the Hopf algebra $\Lambda_{(k)}$ of $k$-Schur functions and
its dual Hopf algebra $\Lambda^{(k)}$: these algebras are isomorphic to the
homology and cohomology of the affine Grassmannian in type~A \cite{Lam08}.
Under this isomorphism, the $k$-Schur functions map to the Schubert basis of
the homology and the dual $k$-Schur functions (also called the affine Schur
functions) map to the Schubert basis of the cohomology.
An important problem in the theory of $k$-Schur functions is to find a
$k$-Littlewood--Richardson rule, namely, a combinatorial interpretation for the
(nonnegative) coefficients in the expansion
\begin{equation}\label{eq:klr}
\kschur_{\mu} \kschur_{\nu} = \sum_{\lambda} c^{\lambda, (k)}_{\mu, \nu} \kschur_{\lambda}.
\end{equation}
The $c^{\lambda, (k)}_{\mu, \nu}$ are called the
$k$-Littlewood--Richardson coefficients, and are of high relevance in
combinatorics and geometry. It was proved by Lapointe and Morse \cite{LM08}
that special cases of these coefficients yield the 3-point Gromov--Witten
invariants, which are the structure constants of the quantum cohomology of the
Grassmannian; they count the number of rational curves of a fixed degree in the
Grassmannian.
As an approach to finding the $k$-Littlewood--Richardson coefficients, Lam
\cite{Lam06} identified $\Lambda_{(k)}$ with the affine Fomin--Stanley subalgebra
$\mathbb{B}$ of the affine nilCoxeter algebra $\mathbb{A}$ of the affine symmetric group $W$.
Specifically, he constructed a family of elements $\nckschur_{\lambda}\in\mathbb{B}$
that map under this isomorphism to the $k$-Schur functions
$\kschur_{\lambda}$. Furthermore, he proved \cite[Proposition 42]{Lam06} that
finding the $k$-Littlewood--Richardson rule is equivalent to finding the
expansion of the $\nckschur_{\lambda}$ in the ``standard basis'' $\mathbf u_w$ of
$\mathbb{A}$. Explicitly, he proved that the coefficients in \eqref{eq:klr} appear as
coefficients in the expansion
\begin{equation}\label{eq:expansion}
\nckschur_\lambda = \sum_{w\in W} d_\lambda^w \mathbf u_w.
\end{equation}
In this article, we develop a family of operators on $\mathbb{A}$, which will facilitate the study of the $\nckschur_\lambda$, and we prove certain conjectures regarding a family of functions that generalize the $k$-Schur functions $\kschur_\lambda$. Each of these is described in more detail below.
\subsection{The Pieri operators}
Lam, Lapointe, Morse, and Shimozono \cite{LLMS10} constructed a labelled
directed graph $\mathcal G^{\downarrow}$ on the elements of $W$, which encompasses the
strong order in $W$. In this article we study the operators on $\mathbb{A}$ induced by
the Pieri operators of $\mathcal G^{\downarrow}$ in the spirit of \cite{BMSvW00}. In Section \ref{sec:operators}, we develop
the main properties of these operators. More specifically, in Theorem
\ref{restriction} we prove that these operators are determined by their
restriction to $\mathbb{B}$, in Theorem \ref{PieriRulePerp} we determine this
restriction, and in Theorem \ref{cor:commute} we prove that the operators
commute pairwise.
\subsection{Properties of strong Schur functions}
Lam, Lapointe, Morse, and Shimozono \cite{LLMS10} generalized the
$\kschur_\lambda$ to a larger set of functions called the strong Schur
functions $\operatorname{Strong}_{u/v}$, where $u$ and $v$ are any pair of elements of $W$. In
Section \ref{sec:strong}, we use the Pieri operators to prove a series of
conjectures of Lam, Lapointe, Morse, and Shimozono
\cite[Conjecture~4.18]{LLMS10} regarding these functions. Specifically,
\begin{enumerate}
\item[(a)] in Theorem \ref{thm:LLMS1} we prove that the $\operatorname{Strong}_{u/v}$ are
symmetric functions;
\item[(b)] in Theorem \ref{thm:LLMS2} we prove that they belong to the algebra
$\Lambda_{(k)}$; and
\item[(c)] in Theorem \ref{thm:LLMS3} we describe the coefficient of
$\kschur_\lambda$ in $\operatorname{Strong}_{u/v}$, when $u$ and $v$ are $0$-Grassmannian
elements, in terms of the structure constants of the cohomology ring of the
affine flag variety.
\end{enumerate}
Note that (c) provides a combinatorial description of the \emph{skew} $k$-Schur
functions.
\subsection{Acknowledgements}
We would like to thank Nantel Bergeron, Sergey Fomin, Thomas Lam, Jennifer Morse, Anne Schilling, and Mike
Zabrocki for helpful discussions.
This research was facilitated by computer exploration using the open-source
mathematical software system \texttt{Sage}~\cite{sage} and its algebraic
combinatorics features developed by the \texttt{Sage-Combinat} community
\cite{sage-combinat}.
\section{Background and Notation}
\subsection{Affine symmetric group}
Fix a positive integer $k$. Let $W$ denote the affine symmetric group with
simple generators $s_0, s_1, \ldots, s_k$. There is an
interpretation of $W$ as the group of permutations $w: \mathbb Z \to \mathbb Z$
satisfying $w(i+k+1) = w(i) + k+1$ for all $i\in\mathbb Z$ and $\sum_{i=1}^{k+1}
w(i)=\sum_{i=1}^{k+1} i$. Let $t_{i,j}$ be the element of $W$ that interchanges the
integers $i$ and $j$ and fixes all integers not congruent to $i$ or $j$ modulo
$k+1$.
Let $W_0$ denote the subgroup of $W$ generated by $s_1, \dots, s_k$ and let
$W^0$ denote the set of minimal length coset representatives of $W/W_0$.
Elements of $W^0$ are called \emph{affine Grassmannian elements} or
\emph{$0$-Grassmannian elements}. There are bijections between $0$-Grassmannian
elements, $k$-bounded partitions, and $(k+1)$-cores. We will not review these
here, but refer the reader to \cite{LM05}. For a $k$-bounded partition $\lambda$,
we let $w_\lambda$ denote the corresponding element of $W^0$. Let $\mathcal B^{(k)}$
denote the set of $k$-bounded partitions.
\subsection{Affine nilCoxeter algebra}
Let $\mathbb{A}$ denote the \emph{affine nilCoxeter algebra} of $W$: this is the
algebra generated by $\mathbf u_0, \mathbf u_1, \dots, \mathbf u_k$ with relations:
\begin{gather*}
\mathbf u_i^2 = 0 \text{ for all $i$}; \\
\mathbf u_i \mathbf u_{i+1} \mathbf u_i = \mathbf u_{i+1} \mathbf u_i \mathbf u_{i+1}
\text{ with $i+1$ taken modulo $k+1$}; \\
\mathbf u_i \mathbf u_j = \mathbf u_j \mathbf u_i
\text{ if $i - j \neq \pm1$ modulo $k+1$.}
\end{gather*}
It follows that a basis of $\mathbb{A}$ is given by the elements
$\mathbf u_w = \mathbf u_{i_1}\mathbf u_{i_2}\cdots\mathbf u_{i_l}$,
where $s_{i_1}s_{i_2}\cdots s_{i_l}$ is a reduced word for $w \in W$.
We define an inner product on $\mathbb{A}$ by
$\langle \mathbf u_v, \mathbf u_w \rangle_\mathbb{A} = \delta_{v,w}$.
\subsection{Affine Fomin--Stanley subalgebra}
An element $w \in W$ is said to be \emph{cyclically decreasing} if there exists
a reduced factorization $s_{i_1}\cdots s_{i_j}$ of $w$ satisfying: each letter
occurs at most once; and, for all $m$, if $s_m$ and $s_{m+1}$ both appear in
the reduced factorization, then $s_{m+1}$ precedes $s_m$. If $D \subsetneq \{0,
1, \dots, k\}$, then there is a unique cyclically decreasing element $w_D$ with
letters $\{s_d : d \in D\}$. Let $\mathbf u_D = \mathbf u_{w_D}$ denote the corresponding
basis element of $\mathbb{A}$. For $i \in \{0, 1, \dots, k\}$, let
\begin{align*}
\mathbf h_i = \sum_{\substack{D \subsetneq \{0, 1, \dots, k\} \\ |D| = i}} \mathbf u_D \in \mathbb{A}.
\end{align*}
By a result of Thomas Lam \cite{Lam06}, the elements $\{\mathbf h_i\}_{i \leq k}$ commute
and freely generate a subalgebra $\mathbb{B}$ of $\mathbb{A}$ called the \emph{affine
Fomin--Stanley subalgebra}. The elements $\mathbf h_\lambda = \mathbf h_{\lambda_1} \dots
\mathbf h_{\lambda_t}$, for all $k$-bounded partitions $\lambda = (\lambda_1, \dots,
\lambda_t)$, form a basis of $\mathbb{B}$.
\subsection{Symmetric functions}
\label{kspace}
Let $\Lambda$ denote the ring of symmetric functions. For a partition
$\lambda$, we let $m_\lambda$, $h_\lambda$, $e_\lambda$, $p_\lambda$,
$s_\lambda$ denote the monomial, homogeneous, elementary, power sum and Schur
symmetric function, respectively, indexed by $\lambda$. Each of these families
forms a basis of $\Lambda$. We recall the following change of bases formulae:
\begin{align*}
h_\mu = \sum_{\lambda} K_{\lambda, \mu} s_\lambda
\qquad \text{and} \qquad
s_\lambda = \sum_{\mu} K_{\lambda, \mu} m_\mu
\end{align*}
where $K_{\lambda, \mu}$, called the \emph{Kostka number}, is the number
of semistandard tableaux of shape $\lambda$ and content $\mu$ \cite{Sta99}.
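For example, $h_{(2,1)} = s_{(3)} + s_{(2,1)}$ and
$s_{(2,1)} = m_{(2,1)} + 2\,m_{(1,1,1)}$, the coefficient $2$ counting the two
semistandard tableaux of shape $(2,1)$ and content $(1,1,1)$.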
Let $\Lambda_{(k)}$ denote the subalgebra of $\Lambda$ generated by $h_0$,
$h_1$, $\dots$, $h_k$. The elements $h_\lambda$ with $\lambda_1 \leq k$ form a
basis of $\Lambda_{(k)}$. Let $\Lambda^{(k)} = \Lambda/I_k$ denote the quotient
of $\Lambda$ by the ideal $I_k$ generated by $m_\lambda$ with $\lambda_1 > k$.
The equivalence classes in $\Lambda^{(k)}$ of the elements $m_\lambda$ with
$\lambda_1 \leq k$ form a basis of $\Lambda^{(k)}$.
The \emph{Hall inner product} of symmetric functions is defined by
$$
\langle h_\lambda, m_\mu \rangle_\Lambda =
\langle s_\lambda, s_\mu \rangle_\Lambda =
\delta_{\lambda,\mu}.
$$
Observe that every element of the ideal $I_k$ is orthogonal to every element of
$\Lambda_{(k)}$ with respect to this inner product. Hence, it induces a
pairing $\langle \cdot, \cdot \rangle$ between $\Lambda_{(k)}$ and
$\Lambda^{(k)}$. In particular, $\langle f, g \rangle = \langle f, \widetilde g
\rangle$ for $f \in \Lambda_{(k)}$, $g \in \Lambda^{(k)}$ and any preimage
$\widetilde g$ of $g$ under the quotient map $\Lambda \to \Lambda^{(k)}$.
For an element $f$ in $\Lambda^{(k)}$, write $f^\perp : \Lambda_{(k)} \to
\Lambda_{(k)}$ for the linear operator that is adjoint to multiplication by $f$
with respect to $\left\langle \cdot, \cdot \right\rangle$.
\subsection{Affine Schur functions}
The affine Schur functions form a distinguished basis of $\Lambda^{(k)}$.
For $w \in W$, the \emph{affine Stanley symmetric function} is defined as
\begin{gather}\label{eq:affstanley}
\dualkschur_w = \sum_{\lambda \in \mathcal B^{(k)}}
\big\langle \mathbf h_\lambda, \mathbf u_w \big\rangle_\mathbb{A} \, m_\lambda,
\end{gather}
where $m_\lambda$ is the monomial symmetric function indexed by $\lambda$.
These functions are elements of $\Lambda^{(k)}$, but they are not linearly
independent. For a $k$-bounded partition $\lambda$, let $\dualkschur_\lambda =
\dualkschur_{w_\lambda}$, where $w_\lambda$ denotes the $0$-Grassmannian
element corresponding to $\lambda$. The functions $\dualkschur_\lambda$ are
called \emph{affine Schur functions} (or \emph{dual $k$-Schur functions}) and
they form a basis of $\Lambda^{(k)}$. See for instance \cite{Lam06, LM08}.
\subsection{$k$-Schur functions}
\label{ss:kschurs}
The $k$-Schur functions are a distinguished basis of $\Lambda_{(k)}$.
They are defined as the duals of the affine Schur functions with respect
to the inner product $\langle \cdot, \cdot \rangle$ on $\Lambda_{(k)} \times
\Lambda^{(k)}$. That is, they satisfy
$\langle \kschur_\lambda, \dualkschur_\mu \rangle = \delta_{\lambda,\mu}$
for all $k$-bounded partitions $\lambda$ and $\mu$.
Equivalently, they are uniquely defined by the \emph{$k$-Pieri rule}:
\begin{align*}
h_i \kschur_\lambda = \sum \kschur_\nu
\end{align*}
where the sum ranges over all $k$-bounded partitions $\nu$ such that
$w_\nu w_\lambda^{-1}$ is cyclically decreasing of length $i$.
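For example, when $k \geq 2$ the $k$-Pieri rule gives
$h_1 \kschur_{(1)} = \kschur_{(2)} + \kschur_{(1,1)}$,
in agreement with the classical Pieri rule $h_1 s_{(1)} = s_{(2)} + s_{(1,1)}$.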
It follows from duality and \eqref{eq:affstanley} that
\begin{align*}
h_\mu = \sum_{\lambda\in\mathcal B^{(k)}}
\big\langle \mathbf h_\lambda, \mathbf u_{w_\mu} \big\rangle_\mathbb{A}
\, \kschur_\lambda.
\end{align*}
\subsection{Noncommutative $k$-Schur functions}
\label{ss:noncommkschurs}
The algebras $\Lambda_{(k)}$ and $\mathbb{B}$ are isomorphic with
isomorphism given by $h_\lambda \mapsto \mathbf h_\lambda$.
We denote by $\nckschur_\lambda$ the image of the
$k$-Schur function $\kschur_\lambda$ under this isomorphism.
In the literature, $\nckschur_\lambda$ is called
a \emph{noncommutative $k$-Schur function}.
They have the following expansion \cite[Proposition~42]{Lam06}:
\begin{gather}
\label{eq:kschurexpansioninA}
\nckschur_\lambda = \sum_{w \in W}
\left\langle \kschur_\lambda, \dualkschur_w \right\rangle
\, \mathbf u_w.
\end{gather}
That is, the coefficient of $\mathbf u_w$ in $\nckschur_\lambda$ is equal
to the coefficient of $\dualkschur_\lambda$ in $\dualkschur_w$:
\begin{gather}
\label{eq:relationbetweenpairings}
\left\langle \kschur_\lambda, \dualkschur_w \right\rangle
=
\left\langle \nckschur_\lambda, \mathbf u_w \right\rangle_\mathbb{A}
\end{gather}
and so
\begin{gather*}
\dualkschur_w = \sum_{\lambda\in\mathcal B^{(k)}}
\left\langle \nckschur_\lambda, \mathbf u_w \right\rangle_\mathbb{A} \dualkschur_\lambda.
\end{gather*}
Consequently, $\nckschur_\lambda$ contains exactly one term $\mathbf u_w$ with $w \in
W^0$ and its coefficient is $1$. Furthermore, if $\sum_{w} c_w \mathbf u_w$ is known
to lie in $\mathbb{B}$, then
$\sum_{w} c_w \mathbf u_w = \sum_{\lambda} c_{w_\lambda} \nckschur_{\lambda}$.
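For example, when $k = 2$ we have
$\nckschur_{(1)} = \mathbf h_1 = \mathbf u_0 + \mathbf u_1 + \mathbf u_2$,
and $s_0 = w_{(1)}$ is the only one of the three indices lying in $W^0$;
its coefficient is $1$, as asserted above.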
\section{Definition of the operators}
In this section, we define operators on the affine nilCoxeter algebra $\mathbb{A}$. The definitions are dependent upon the combinatorics introduced by Lam, Lapointe, Morse, and Shimozono in \cite{LLMS10}.
\subsection{Up operators}
Define an edge-labelled oriented graph $\mathcal G^{\uparrow}$ with vertex set $W$:
there is an edge from $v$ to $w$ labelled by $i$ whenever
$\ell(w) = \ell(v) + 1$
and
$s_iv = w$.
(See Figure \ref{fig:upgraph}.)
So, $\mathcal G^{\uparrow}$ is the graph for the \emph{(left) weak order} on $W$.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}[scale=0.60,>=latex,line join=bevel,]
\node (u0*u1*u2) at (74.359bp,212bp) [draw,draw=none] {$s_{0}s_{1}s_{2}$};
\node (u2*u1*u0) at (368.36bp,212bp) [draw,draw=none] {$s_{2}s_{1}s_{0}$};
\node (u2*u1) at (418.36bp,144bp) [draw,draw=none] {$s_{2}s_{1}$};
\node (u1*u2*u1*u0) at (238.36bp,280bp) [draw,draw=none] {$s_{1}s_{2}s_{1}s_{0}$};
\node (u1*u2) at (104.36bp,144bp) [draw,draw=none] {$s_{1}s_{2}$};
\node (u1*u0) at (327.36bp,144bp) [draw,draw=none] {$s_{1}s_{0}$};
\node (u2) at (118.36bp,76bp) [draw,draw=none] {$s_{2}$};
\node (u1) at (386.36bp,76bp) [draw,draw=none] {$s_{1}$};
\node (u0*u1*u0) at (484.36bp,212bp) [draw,draw=none] {$s_{0}s_{1}s_{0}$};
\node (u1*u2*u0) at (156.36bp,212bp) [draw,draw=none] {$s_{1}s_{2}s_{0}$};
\node (u0) at (272.36bp,76bp) [draw,draw=none] {$s_{0}$};
\node (1) at (272.36bp,7bp) [draw,draw=none] {$1$};
\node (u0*u1*u2*u0) at (87.359bp,280bp) [draw,draw=none] {$s_{0}s_{1}s_{2}s_{0}$};
\node (u2*u0) at (197.36bp,144bp) [draw,draw=none] {$s_{2}s_{0}$};
\node (u0*u1) at (464.36bp,144bp) [draw,draw=none] {$s_{0}s_{1}$};
\node (u0*u2) at (38.359bp,144bp) [draw,draw=none] {$s_{0}s_{2}$};
\node (u0*u2*u1*u0) at (470.36bp,280bp) [draw,draw=none] {$s_{0}s_{2}s_{1}s_{0}$};
\node (u1*u2*u1) at (238.36bp,212bp) [draw,draw=none] {$s_{1}s_{2}s_{1}$};
\node (u0*u2*u1) at (571.36bp,212bp) [draw,draw=none] {$s_{0}s_{2}s_{1}$};
\node (u0*u2*u0) at (19.359bp,212bp) [draw,draw=none] {$s_{0}s_{2}s_{0}$};
\definecolor{strokecol}{rgb}{0.0,0.0,0.0};
\pgfsetstrokecolor{strokecol}
\draw [->] (u2*u1*u0) -- (u0*u2*u1*u0);
\draw [->] (u1*u2*u0) -- (u0*u1*u2*u0);
\draw [->] (u1*u2*u0) -- (u1*u2*u1*u0);
\draw [->] (u2*u1*u0) -- (u1*u2*u1*u0);
\draw [->] (u2*u0) -- (u1*u2*u0);
\draw [->] (u1*u0) -- (u2*u1*u0);
\draw [->] (u0) -- (u1*u0);
\draw [->] (u0) -- (u2*u0);
\draw [->] (1) -- (u0);
\draw [->] (u1*u0) -- (u0*u1*u0);
\draw [->] (u0*u1) -- (u0*u1*u0);
\draw [->] (u1) -- (u0*u1);
\draw [->] (u2*u0) -- (u0*u2*u0);
\draw [->] (u0*u2) -- (u0*u2*u0);
\draw [->] (u2) -- (u0*u2);
\draw [->] (1) -- (u2);
\draw [->] (1) -- (u1);
\draw [->] (u1) -- (u2*u1);
\draw [->] (u2) -- (u1*u2);
\draw [->] (u2*u1) -- (u1*u2*u1);
\draw [->] (u1*u2) -- (u1*u2*u1);
\draw [->] (u2*u1) -- (u0*u2*u1);
\draw [->] (u1*u2) -- (u0*u1*u2);
\end{tikzpicture}
\end{center}
\caption{A subgraph of $\mathcal G^{\uparrow}$ for $k=2$; \textit{cf.} Figure \ref{fig:downgraph}.}
\label{fig:upgraph}
\end{figure}
A \emph{weak strip} of length $j$ from $w$ to $v$, denoted by $w
\leadsto v$, is a pair of elements $w, v \in W$ such that $w$ precedes
$v$ in weak order and $vw^{-1}$ is a cyclically decreasing word of
length $j$.
For any non-negative integer $j$, define a linear operator $U_j : \mathbb{A}
\to \mathbb{A}$ by
\begin{gather*}
U_j(\mathbf u_w)
= \sum_{ w \leadsto v \atop \operatorname{size}(w \leadsto v) = j } \mathbf u_{v}
= \mathbf h_j \mathbf u_w
\end{gather*}
where the sum ranges over all weak strips of length $j$ that begin at
$w$. Equivalently, $U_j$ is multiplication on the left by $\mathbf h_j$.
\begin{Example}
With $k=2$:
$U_{1}(\mathbf u_0) = \mathbf u_2\mathbf u_0 + \mathbf u_1\mathbf u_0$
and
$U_{2}(\mathbf u_0) = \mathbf u_0\mathbf u_2\mathbf u_0 + \mathbf u_2\mathbf u_1\mathbf u_0.$
\end{Example}
\subsection{Down operators}
Define a second edge-labelled oriented graph $\mathcal G^{\downarrow}$, the
\emph{marked strong order} graph, with vertex set $W$: there is an edge
from $x$ to $y$ labelled by $y(j)=x(i)$ whenever $\ell(x) = \ell(y)+1$ and
there exists $i \leq 0 < j$ such that $y \, t_{i,j} = x$.
\begin{Example} $(k=2)$
There are two edges from $x = s_0s_1s_2s_0$ to $y = s_1s_2s_0$ since $y^{-1} x$
can be written as $t_{i,j}$ with $i \leq 0 < j$ in two ways: $y^{-1} x =
t_{-4,1} = t_{-1,4}$. These edges are labelled by $y(1) = -2$ and $y(4) = 1$.
See Figure \ref{fig:downgraph}.
\end{Example}
\begin{Remark}
\cite{LLMS10} defined a similar graph except that they oriented their edges in
the opposite direction and labelled the edges by the pair $(i,j)$:
they write $y \buildrel{i,j}\over{\longrightarrow} x$ whereas we write $x
\buildrel{y(j)}\over{\longrightarrow} y$; and they call our label $y(j)$ the
\emph{marking} of the edge.
\end{Remark}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}[scale=0.60,>=latex,line join=bevel,]
\node (u0*u1*u2) at (74.359bp,212bp) [draw,draw=none] {$s_{0}s_{1}s_{2}$};
\node (u2*u1*u0) at (368.36bp,212bp) [draw,draw=none] {$s_{2}s_{1}s_{0}$};
\node (u2*u1) at (418.36bp,144bp) [draw,draw=none] {$s_{2}s_{1}$};
\node (u1*u2*u1*u0) at (238.36bp,280bp) [draw,draw=none] {$s_{1}s_{2}s_{1}s_{0}$};
\node (u1*u2) at (104.36bp,144bp) [draw,draw=none] {$s_{1}s_{2}$};
\node (u1*u0) at (327.36bp,144bp) [draw,draw=none] {$s_{1}s_{0}$};
\node (u2) at (118.36bp,76bp) [draw,draw=none] {$s_{2}$};
\node (u1) at (386.36bp,76bp) [draw,draw=none] {$s_{1}$};
\node (u0*u1*u0) at (484.36bp,212bp) [draw,draw=none] {$s_{0}s_{1}s_{0}$};
\node (u1*u2*u0) at (156.36bp,212bp) [draw,draw=none] {$s_{1}s_{2}s_{0}$};
\node (u0) at (272.36bp,76bp) [draw,draw=none] {$s_{0}$};
\node (1) at (272.36bp,7bp) [draw,draw=none] {$1$};
\node (u0*u1*u2*u0) at (87.359bp,280bp) [draw,draw=none] {$s_{0}s_{1}s_{2}s_{0}$};
\node (u2*u0) at (197.36bp,144bp) [draw,draw=none] {$s_{2}s_{0}$};
\node (u0*u1) at (464.36bp,144bp) [draw,draw=none] {$s_{0}s_{1}$};
\node (u0*u2) at (38.359bp,144bp) [draw,draw=none] {$s_{0}s_{2}$};
\node (u0*u2*u1*u0) at (470.36bp,280bp) [draw,draw=none] {$s_{0}s_{2}s_{1}s_{0}$};
\node (u1*u2*u1) at (238.36bp,212bp) [draw,draw=none] {$s_{1}s_{2}s_{1}$};
\node (u0*u2*u1) at (571.36bp,212bp) [draw,draw=none] {$s_{0}s_{2}s_{1}$};
\node (u0*u2*u0) at (19.359bp,212bp) [draw,draw=none] {$s_{0}s_{2}s_{0}$};
\draw [black,->] (u2*u0) ..controls (177.61bp,127bp) and (150.43bp,103.6bp) .. (u2);
\definecolor{strokecol}{rgb}{0.0,0.0,0.0};
\pgfsetstrokecolor{strokecol}
\tikzstyle{every node}=[font=\footnotesize]
\draw (175.36bp,110bp) node {$1$};
\draw [black,->] (u1*u2*u0) ..controls (186.42bp,201.68bp) and (202.25bp,195.37bp) .. (215.36bp,188bp) .. controls (228.38bp,180.68bp) and (228.95bp,174.58bp) .. (242.36bp,168bp) .. controls (262.19bp,158.26bp) and (286.63bp,151.88bp) .. (u1*u0);
\draw (251.36bp,178bp) node {$0$};
\draw [black,->] (u0*u1*u2*u0) ..controls (75.947bp,269.17bp) and (70.733bp,262.83bp) .. (68.359bp,256bp) .. controls (65.223bp,246.99bp) and (66.522bp,236.41bp) .. (u0*u1*u2);
\draw (77.359bp,246bp) node {$2$};
\draw [black,->] (u1*u0) ..controls (342.16bp,126.94bp) and (361.5bp,104.65bp) .. (u1);
\draw (372.36bp,110bp) node {$2$};
\draw [black,->] (u0*u2*u1) ..controls (549.49bp,196.54bp) and (522.98bp,178.91bp) .. (498.36bp,168bp) .. controls (488.13bp,163.47bp) and (461.18bp,155.68bp) .. (u2*u1);
\draw (543.36bp,178bp) node {$1$};
\draw [black,->] (u2*u1*u0) ..controls (342.34bp,202.2bp) and (333.24bp,196.4bp) .. (328.36bp,188bp) .. controls (323.55bp,179.73bp) and (323.21bp,169.03bp) .. (u1*u0);
\draw (337.36bp,178bp) node {$0$};
\draw [black,->] (u2*u1*u0) ..controls (361.97bp,197.09bp) and (354.52bp,180.83bp) .. (346.36bp,168bp) .. controls (344.18bp,164.57bp) and (341.6bp,161.05bp) .. (u1*u0);
\draw (366.36bp,178bp) node {$3$};
\draw [black,->] (u1*u2*u0) ..controls (165.75bp,201.24bp) and (171.25bp,194.46bp) .. (175.36bp,188bp) .. controls (181.16bp,178.88bp) and (186.6bp,168.03bp) .. (u2*u0);
\draw (198.86bp,178bp) node {$-1$};
\draw [black,->] (u1*u2*u0) ..controls (150.66bp,196.79bp) and (146.32bp,179.85bp) .. (153.36bp,168bp) .. controls (158.03bp,160.13bp) and (166.3bp,154.67bp) .. (u2*u0);
\draw (162.36bp,178bp) node {$2$};
\draw [black,->] (u0*u1) ..controls (444.86bp,127bp) and (418.02bp,103.6bp) .. (u1);
\draw (443.36bp,110bp) node {$1$};
\draw [black,->] (u0*u2*u0) ..controls (24.011bp,195.35bp) and (29.888bp,174.32bp) .. (u0*u2);
\draw (39.359bp,178bp) node {$0$};
\draw [black,->] (u1*u2*u1*u0) ..controls (252.78bp,262.89bp) and (271.74bp,241.14bp) .. (280.36bp,236bp) .. controls (298.67bp,225.09bp) and (322.05bp,219.02bp) .. (u2*u1*u0);
\draw (292.86bp,246bp) node {$-1$};
\draw [black,->] (u1*u0) ..controls (318bp,132.89bp) and (312.29bp,126.06bp) .. (307.36bp,120bp) .. controls (299.22bp,110bp) and (290.16bp,98.604bp) .. (u0);
\draw (316.36bp,110bp) node {$2$};
\draw [black,->] (u0*u1*u2*u0) ..controls (179.19bp,274.39bp) and (374.92bp,261.94bp) .. (387.36bp,256bp) .. controls (398.42bp,250.72bp) and (395.94bp,242.46bp) .. (406.36bp,236bp) .. controls (421.53bp,226.6bp) and (440.65bp,220.64bp) .. (u0*u1*u0);
\draw (415.36bp,246bp) node {$1$};
\draw [black,->] (u0*u1*u2*u0) ..controls (56.765bp,270.21bp) and (45.283bp,264.41bp) .. (37.359bp,256bp) .. controls (30.034bp,248.23bp) and (25.515bp,237.07bp) .. (u0*u2*u0);
\draw (49.859bp,246bp) node {$-1$};
\draw [black,->] (u0*u1*u2*u0) ..controls (42.039bp,272.73bp) and (10.711bp,265.75bp) .. (3.3585bp,256bp) .. controls (-3.2495bp,247.24bp) and (1.5614bp,235.48bp) .. (u0*u2*u0);
\draw (12.359bp,246bp) node {$2$};
\draw [black,->] (u0) ..controls (272.36bp,59.313bp) and (272.36bp,38.603bp) .. (1);
\draw (281.36bp,42bp) node {$1$};
\draw [black,->] (u0*u2) ..controls (58.353bp,127bp) and (85.886bp,103.6bp) .. (u2);
\draw (96.359bp,110bp) node {$1$};
\draw [black,->] (u0*u1*u2) ..controls (71.304bp,197.1bp) and (69.363bp,180.38bp) .. (75.359bp,168bp) .. controls (77.5bp,163.58bp) and (80.883bp,159.69bp) .. (u1*u2);
\draw (84.359bp,178bp) node {$1$};
\draw [black,->] (u1*u2*u0) ..controls (141.02bp,201.44bp) and (133.03bp,195.01bp) .. (127.36bp,188bp) .. controls (120.42bp,179.43bp) and (114.67bp,168.43bp) .. (u1*u2);
\draw (136.36bp,178bp) node {$2$};
\draw [black,->] (u2*u0) ..controls (216.1bp,127bp) and (241.92bp,103.6bp) .. (u0);
\draw (252.36bp,110bp) node {$0$};
\draw [black,->] (u0*u1*u2*u0) ..controls (91.912bp,264.62bp) and (98.414bp,247.04bp) .. (109.36bp,236bp) .. controls (114.86bp,230.45bp) and (121.91bp,225.94bp) .. (u1*u2*u0);
\draw (121.86bp,246bp) node {$-2$};
\draw [black,->] (u0*u1*u2*u0) ..controls (115.08bp,269.95bp) and (126.36bp,264.07bp) .. (134.36bp,256bp) .. controls (142.22bp,248.07bp) and (147.8bp,236.75bp) .. (u1*u2*u0);
\draw (156.36bp,246bp) node {$1$};
\draw [black,->] (u0*u2*u1*u0) ..controls (506.45bp,270.58bp) and (522.15bp,264.63bp) .. (534.36bp,256bp) .. controls (545.48bp,248.14bp) and (555.27bp,236.15bp) .. (u0*u2*u1);
\draw (563.36bp,246bp) node {$4$};
\draw [black,->] (u0*u1*u0) ..controls (479.46bp,195.35bp) and (473.28bp,174.32bp) .. (u0*u1);
\draw (485.36bp,178bp) node {$2$};
\draw [black,->] (u2*u1*u0) ..controls (380.83bp,195.04bp) and (396.98bp,173.07bp) .. (u2*u1);
\draw (407.36bp,178bp) node {$3$};
\draw [black,->] (u1*u2*u1*u0) ..controls (238.36bp,263.45bp) and (238.36bp,242.73bp) .. (u1*u2*u1);
\draw (247.36bp,246bp) node {$3$};
\draw [black,->] (u0*u2*u1*u0) ..controls (420.62bp,275.1bp) and (376.94bp,268.81bp) .. (341.36bp,256bp) .. controls (324.14bp,249.8bp) and (322.91bp,241.21bp) .. (305.36bp,236bp) .. controls (195.16bp,203.31bp) and (161.17bp,234.08bp) .. (47.359bp,218bp) .. controls (47.257bp,217.99bp) and (47.155bp,217.97bp) .. (u0*u2*u0);
\draw (350.36bp,246bp) node {$4$};
\draw [black,->] (u2*u1*u0) ..controls (336.51bp,201.85bp) and (317.49bp,195.2bp) .. (301.36bp,188bp) .. controls (284.29bp,180.38bp) and (281.43bp,175.62bp) .. (264.36bp,168bp) .. controls (249.94bp,161.56bp) and (233.21bp,155.57bp) .. (u2*u0);
\draw (310.36bp,178bp) node {$3$};
\draw [black,->] (u0*u2*u1*u0) ..controls (458.79bp,264.46bp) and (444.28bp,246.77bp) .. (428.36bp,236bp) .. controls (418.57bp,229.38bp) and (406.68bp,224.14bp) .. (u2*u1*u0);
\draw (459.36bp,246bp) node {$1$};
\draw [black,->] (u0*u2*u1*u0) ..controls (420.37bp,274.59bp) and (378.99bp,267.91bp) .. (369.36bp,256bp) .. controls (363.14bp,248.31bp) and (362.95bp,237.16bp) .. (u2*u1*u0);
\draw (378.36bp,246bp) node {$4$};
\draw [black,->] (u1*u2*u1*u0) ..controls (217.86bp,263bp) and (189.64bp,239.6bp) .. (u1*u2*u0);
\draw (215.36bp,246bp) node {$3$};
\draw [black,->] (u0*u2*u1*u0) ..controls (496.78bp,270.06bp) and (505.65bp,264.32bp) .. (510.36bp,256bp) .. controls (514.74bp,248.26bp) and (513.92bp,244.14bp) .. (510.36bp,236bp) .. controls (508.5bp,231.76bp) and (505.48bp,227.93bp) .. (u0*u1*u0);
\draw (521.36bp,246bp) node {$1$};
\draw [black,->] (u0*u2*u1*u0) ..controls (473.77bp,263.45bp) and (478.03bp,242.73bp) .. (u0*u1*u0);
\draw (487.36bp,246bp) node {$4$};
\end{tikzpicture}
\end{center}
\caption{$\mathcal G^{\downarrow}$ for $k=2$ truncated at the affine Grassmannian elements of
length $4$.}
\label{fig:downgraph}
\end{figure}
A \emph{strong strip} of length $i$ from $w$ to $v$, denoted by $w
\rightarrowtriangle v$, is a path
\begin{align*}
w
\stackrel{\ell_1}\longrightarrow w_1
\stackrel{\ell_2}\longrightarrow
\cdots
\stackrel{\ell_i}\longrightarrow w_i = v
\end{align*}
of length $i$ in $\mathcal G^{\downarrow}$ with decreasing edge labels:
$\ell_1 > \cdots > \ell_i$.
For non-negative integers $i$, define $D_i: \mathbb{A} \to \mathbb{A}$ as
\[
D_i(\mathbf u_w)
= \sum_{
w \rightarrowtriangle v
\atop
\operatorname{size}(w \rightarrowtriangle v) = i
} \mathbf u_{v}
\]
where the sum ranges over all strong strips of length $i$ that begin at
$w$. In particular, the coefficient of $\mathbf u_{v}$ in $D_i(\mathbf u_w)$ is the
number of strong strips of length $i$ that begin at $w$ and end at $v$.
\begin{Example}
With $k=2$, using the graph from Figure \ref{fig:downgraph}, one can verify that:
\begin{align*}
D_1(\mathbf u_0\mathbf u_1\mathbf u_2\mathbf u_0)
&= 2\mathbf u_0\mathbf u_2\mathbf u_0 + \mathbf u_0\mathbf u_1\mathbf u_2 + 2\mathbf u_1\mathbf u_2\mathbf u_0 + \mathbf u_0\mathbf u_1\mathbf u_0;
\\
D_2(\mathbf u_0\mathbf u_1\mathbf u_2\mathbf u_0)
&= \mathbf u_0\mathbf u_2 + \mathbf u_1\mathbf u_2 + \mathbf u_2\mathbf u_0 + \mathbf u_1\mathbf u_0.
\end{align*}
\end{Example}
More generally, we define an operator $D_J$ for any composition $J$ of positive
integers; the operator $D_i$ defined above is $D_J$ for the composition $J=[i]$.
We need some additional notation.
The \emph{ascent composition} of a sequence $\ell_1, \ell_2, \dots, \ell_m$ is
the composition
$[
i_1,
i_2 - i_{1},
\ldots,
i_j - i_{j-1},
m - i_j
]$,
where $i_1 < i_2 < \dots < i_j$ are the ascents of the sequence; that is, the positions $a \in \{1, \dots, m-1\}$ such
that $\ell_{a} < \ell_{a+1}$. For example, the ascent composition of the sequence $3,2,0,3,4,1$ is $[3,1,2]$ since the ascents are in positions $3$ and $4$.
If
$ w_0 \stackrel{\ell_1}\longrightarrow \cdots \stackrel{\ell_m}\longrightarrow w_m $
is a path in $\mathcal G^{\downarrow}$, then we let
$\operatorname{ascomp}(w_0 \stackrel{\ell_1}\longrightarrow \cdots \stackrel{\ell_m}\longrightarrow w_m)$
denote the ascent composition of the sequence of labels $\ell_1, \dots, \ell_m$.
It is a composition of the length of the path.
For a composition $J = [j_1, j_2, \dots, j_l]$ of positive integers, define
\[
D_J(\mathbf u_w)
= \sum_{
\operatorname{ascomp}\left(
w
\stackrel{\ell_1}\longrightarrow w_1
\stackrel{\ell_2}\longrightarrow
\cdots
\stackrel{\ell_m}\longrightarrow w_m
\right)
= J
} \mathbf u_{w_m}
\]
where the sum ranges over all paths in $\mathcal G^{\downarrow}$ of length $m = j_1 + \dots
+ j_l$ beginning at $w$ whose sequence of labels has ascent composition $J$.
\begin{Example}
With $k=2$ one can verify using Figure \ref{fig:downgraph} that:
\begin{align*}
D_{[3]}(\mathbf u_1\mathbf u_2\mathbf u_1\mathbf u_0) &= \mathbf u_2 + \mathbf u_0, \\
D_{[2,1]}(\mathbf u_1\mathbf u_2\mathbf u_1\mathbf u_0) & = \mathbf u_2 + 2\mathbf u_0 + \mathbf u_1, \\
D_{[1,2]}(\mathbf u_1\mathbf u_2\mathbf u_1\mathbf u_0) & = \mathbf u_2 + 2\mathbf u_0 + \mathbf u_1, \\
D_{[1,1,1]}(\mathbf u_1\mathbf u_2\mathbf u_1\mathbf u_0) & = \mathbf u_0 + \mathbf u_1.
\end{align*}
\end{Example}
For two compositions $I = [i_1, \dots, i_r]$ and $J = [j_1, \dots, j_s]$,
let
\begin{align*}
I \boxplus J &= [i_1, \dots, i_{r-1}, i_r+j_1, j_2, \dots, j_s] \\
I \boxdot J &= [i_1, \dots, i_r, j_1, \dots, j_s].
\end{align*}
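For instance, if $I = [2,1]$ and $J = [3,2]$, then
$I \boxplus J = [2,4,2]$ and $I \boxdot J = [2,1,3,2]$.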
\begin{Proposition}
If $I$ and $J$ are compositions, then
\begin{gather*}
D_I \circ D_J = D_{I \boxplus J} + D_{I \boxdot J}.
\end{gather*}
\end{Proposition}
\begin{proof}
If $w \to \dots \to v$ and $v \to \dots \to u$ are two paths in
$\mathcal G^{\downarrow}$ with ascent compositions $I$ and $J$, respectively,
then the concatenated path $w \to \dots \to v \to \dots \to u$ has ascent
composition $I \boxdot J$ if the label increases at the junction of the two
paths (so that a new part starts there), and $I \boxplus J$ otherwise.
\end{proof}
\begin{Corollary}
Suppose $I = [i_1, \dots, i_r]$ is a composition. Then
\begin{align*}
\sum_{J \preceq I} D_J = D_{i_1} \circ \cdots \circ D_{i_r}
\end{align*}
where $\preceq$ denotes reverse refinement order on
compositions\footnote{With respect to this order, the composition
$[1,1,\dots,1]$ is the maximal element and the composition $[n]$ is
the minimal element.}.
\end{Corollary}
\begin{proof}
Proceed by induction on $r$. This is trivially true for $r=1$.
Suppose the result holds for compositions of length less than $r$.
Then
\begin{align*}
D_{i_1} \circ \cdots \circ D_{i_r}
=
D_{i_1} \left(
\sum_{J' \preceq [i_2, \dots, i_r]} D_{J'}
\right)
=
\sum_{J' \preceq [i_2, \dots, i_r]}
\left(
D_{[i_1, j_1, \dots, j_s]} + D_{[i_1 + j_1, \dots, j_s]}
\right)
\end{align*}
where we write $J' = [j_1, \dots, j_s]$. This is $\sum_{J \preceq I} D_J$
since the first part of a composition $J$ that satisfies $J \preceq I$ is either
$i_1$ or $i_1 + (i_2 + \dots + i_l)$ for some $l \geq 2$.
\end{proof}
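For instance, if $I = [2,1]$, then the compositions $J \preceq I$ are $[2,1]$
and $[3]$, so the Corollary reads $D_2 \circ D_1 = D_{[2,1]} + D_{[3]}$,
matching the Proposition applied to $D_{[2]} \circ D_{[1]}$.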
\section{Properties of the operators} \label{sec:operators}
In this section we develop properties of the operators $U_j$ and $D_i$.
\subsection{Extensions of linear endomorphisms of $\mathbb{B}$ to $\mathbb{A}$}
\label{ss:extensions}
Since $W^0$ is a set of coset representatives of $W_0$ in $W$, every element
$w$ of $W$ factors uniquely as $w = w^{(0)} w_{(0)}$ with $w^{(0)} \in W^0$ and
$w_{(0)} \in W_0$. We call this the \emph{$0$-Grassmannian factorization} of
$w$. Since the elements of $W^0$ are in bijection with $k$-bounded partitions,
we can write this factorization as $w = w_\lambda w_{(0)}$, and we let
\begin{gather*}
\mathbf b_w = \nckschur_\lambda \mathbf u_{w_{(0)}}.
\end{gather*}
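For example, take $k=2$ and $w = s_0s_1$. The $0$-Grassmannian factorization is
$w = s_0 \cdot s_1$ with $s_0 = w_{(1)} \in W^0$ and $s_1 \in W_0$, so that
\begin{gather*}
\mathbf b_w = \nckschur_{(1)} \mathbf u_1
= (\mathbf u_0 + \mathbf u_1 + \mathbf u_2)\,\mathbf u_1
= \mathbf u_0\mathbf u_1 + \mathbf u_2\mathbf u_1,
\end{gather*}
using $\nckschur_{(1)} = \mathbf h_1$ and $\mathbf u_1^2 = 0$.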
\begin{Proposition}
\label{prop:newAbasis}
The set $\{\mathbf b_w : w \in W\}$ is a basis of $\mathbb{A}$.
\end{Proposition}
\begin{proof}
We will define a total order on the elements of $W$ in such a way that the
leading term of $\mathbf b_w$ is $\mathbf u_w$. Then, with respect to this ordering,
the transition matrix from $\{\mathbf b_w\}$ to $\{\mathbf u_w\}$ is uni-triangular,
from which the result follows.
Informally, we need an order in which $v$ precedes $u$ whenever $\ell(u) >
\ell(v)$ or the ``Grassmannian part'' of $u$ is bigger than that of $v$.
Define $v$ to precede $u$ if:
$\ell(u) > \ell(v)$; or
$\ell(u) = \ell(v)$ and $\ell(u^{(0)}) > \ell(v^{(0)})$.
Note that this is only a partial order, but any linear extension of this
partial order will suffice.
First we argue that the leading term of $\mathbf b_{w_\lambda} = \nckschur_\lambda$ is
$\mathbf u_{w_\lambda}$. Indeed, $\nckschur_\lambda$ expanded in the basis
$\{\mathbf u_v\}$ is a linear combination of terms $\mathbf u_v$ with the $v$ all of
the same length $|\lambda|$, and it contains exactly one term $\mathbf u_w$ with
$w \in W^0$, namely $w_\lambda$ (see \S\ref{ss:noncommkschurs}).
Next, we prove that the leading term of $\mathbf b_w$ is $\mathbf u_w$.
If $\mathbf u_v$ appears in $\mathbf b_w = \nckschur_\lambda \mathbf u_{w_{(0)}}$ with nonzero
coefficient, then
$\mathbf u_v = \mathbf u_{\tilde v} \mathbf u_{w_{(0)}}$
with $\mathbf u_{\tilde v}$ appearing in $\nckschur_\lambda$.
It follows that
$v = \tilde vw_{(0)}$ with $\ell(v) = \ell(\tilde v) + \ell(w_{(0)})$
and that
$v^{(0)} = {\tilde v}^{(0)}$.
Hence, to compare terms $\mathbf u_v$ and $\mathbf u_u$ of $\mathbf b_w$, it suffices to
compare the corresponding terms $\mathbf u_{\tilde v}$ and $\mathbf u_{\tilde u}$ of
$\nckschur_\lambda$. So, the leading term of $\mathbf b_w$ is the leading term
of $\nckschur_\lambda$ times $\mathbf u_{w_{(0)}}$, which is precisely $\mathbf u_{w_\lambda} \mathbf u_{w_{(0)}} = \mathbf u_w$.
\end{proof}
\begin{Corollary}
\label{coro:newAbasis}
The set $\{\mathbf h_\lambda \mathbf u_{w_{(0)}} : w = w_\lambda w_{(0)} \in
W\}$ is a basis of $\mathbb{A}$.
\end{Corollary}
\begin{proof}
Follows from Proposition \ref{prop:newAbasis} and the fact that
$\{\mathbf h_\lambda\}$ is a basis of $\mathbb{B}$.
\end{proof}
The above results allow us to extend linear endomorphisms of $\mathbb{B}$ to linear
endomorphisms of $\mathbb{A}$. Let $f: \mathbb{B} \to \mathbb{B}$ be a linear transformation of
$\mathbb{B}$. Then we get a linear transformation $\widehat{f}: \mathbb{A} \to \mathbb{A}$ by
defining $\widehat{f}$ on the basis $\{\mathbf b_w\}$ by
\begin{gather*}
\widehat{f}\left(\mathbf b_w\right) = \widehat{f}\left(\nckschur_\lambda\mathbf u_{w_{(0)}}\right)
= f\left(\nckschur_\lambda\right) \mathbf u_{w_{(0)}},
\end{gather*}
where $w = w_\lambda w_{(0)}$ is the $0$-Grassmannian factorization of $w$.
\subsection{Commutation relation}
We prove a commutation relation between the operators $U_j$ and $D_i$. This
relation will allow us to bootstrap properties of $D_1$ and $U_j$ to every
operator $D_i$ via an inductive argument.
\begin{Proposition}[Commutation Relation]
\begin{gather*}
D_i \circ U_j = \sum_{e \geq 0} U_{j-e} \circ D_{i-e}
\end{gather*}
\end{Proposition}
\begin{proof}
First note that the right hand side is a finite sum. The coefficient of $\mathbf u_v$ in
\begin{align*}
\left(D_i \circ U_j\right)(\mathbf u_u)
= \sum_{ u \leadsto w \atop \operatorname{size}(u \leadsto w) = j }
\sum_{ w \rightarrowtriangle v \atop \operatorname{size}(w \rightarrowtriangle v) = i } \mathbf u_{v}
\end{align*}
is the number of tuples $(u \leadsto w, w \rightarrowtriangle v)$
consisting of
a weak strip $u \leadsto w$ of length $j$ and
a strong strip $w \rightarrowtriangle v$ of length $i$.
The coefficient of $\mathbf u_v$ in
\begin{align*}
\left(\sum_{e \geq 0} U_{j-e} \circ D_{i-e}\right)(\mathbf u_u)
= \sum_{e \geq 0}
\sum_{ u \rightarrowtriangle x \atop \operatorname{size}(u \rightarrowtriangle x) = i-e }
\sum_{ x \leadsto v \atop \operatorname{size}(x \leadsto v) = j-e } \mathbf u_{v}
\end{align*}
is the number of triples $(e, u \rightarrowtriangle x, x \leadsto v)$
consisting of
a nonnegative integer $e$,
a strong strip $u \rightarrowtriangle x$ of length $i-e$ and
a weak strip $x \leadsto v$ of length $j-e$.
By \cite[Proposition~4.1]{LLMS10}, these two numbers are the same. Indeed,
that proposition establishes a bijection between the sets:
\begin{align*}
&\left\{
(W', S') :
\begin{array}{l}
\text{$W'$ is a weak strip beginning at $u$,} \\
\text{$S'$ is a strong strip ending at $v$}, \\
\text{with $W'$ ending where $S'$ begins.}
\end{array}
\right\}
%
\\
&\qquad\qquad\qquad\qquad
\longleftrightarrow
%
\left\{
(W, S, e) :
\begin{array}{l}
\text{$W$ is a weak strip ending at $v$,} \\
\text{$S$ is a strong strip beginning at $u$,} \\
\text{$e\geq0$ satisfies $\operatorname{size}(W)+e \leq k$,} \\
\text{with $S$ ending where $W$ begins.}
\end{array}
\right\}
\end{align*}
such that
\begin{align*}
\operatorname{size}(S) &= \operatorname{size}(S') - e \\
\operatorname{size}(W) &= \operatorname{size}(W') - e. \qedhere
\end{align*}
\end{proof}
\begin{Corollary}[Bracket]
\begin{align*}
D_i \circ U_j - U_j \circ D_i = D_{i-1} \circ U_{j-1}
\end{align*}
\end{Corollary}
\begin{proof}
\begin{gather*}
D_i \circ U_j
= \sum_{e \geq 0} U_{j-e} \circ D_{i-e}
= U_{j} \circ D_{i} + \sum_{e \geq 1} U_{j-e} \circ D_{i-e}
= U_{j} \circ D_{i} + D_{i-1} \circ U_{j-1}
\qedhere
\end{gather*}
\end{proof}
\subsection{$D_i$ stabilizes $\mathbb{B}$}
We use the commutation relation of the previous section to prove that $D_i(\mathbb{B})
\subseteq \mathbb{B}$. First we determine the image of $\mathbf h_r$ under $D_i$.
\begin{Lemma}
\label{Dh_r}
For $r \leq k$ and all $i$,
$$D_i(\mathbf h_r) = \mathbf h_{r-i}.$$
\end{Lemma}
\begin{proof}
If $i > r$, then $D_i(\mathbf h_r) = 0$, since a strong strip of size $i$
beginning at an element of length $r$ would end at an element of negative length.
Also, by definition, $\mathbf h_{r-i} = 0$. So suppose that $i \leq r$.
Proceed by induction on $i$. If $i = 1$, then
\begin{align*}
D_1(\mathbf h_r)
= (D_1 \circ U_r)(1_\mathbb{A})
= (U_r \circ D_1)(1_\mathbb{A}) + (D_0 \circ U_{r-1})(1_\mathbb{A})
= 0_\mathbb{A} + \mathbf h_{r-1}.
\end{align*}
Suppose the result holds for $i-1$. Then
\begin{align*}
D_i(\mathbf h_r)
&= (D_i \circ U_r)(1_\mathbb{A})
= (U_r \circ D_i)(1_\mathbb{A}) + (D_{i-1} \circ U_{r-1})(1_\mathbb{A}) \\
&= 0_\mathbb{A} + D_{i-1}(\mathbf h_{r-1}) = \mathbf h_{r-i}.
\qedhere
\end{align*}
\end{proof}
\begin{Theorem}
Let $J$ be a composition. Then $D_J$ stabilizes $\mathbb{B}$; that is,
$$D_J(\mathbb{B}) \subseteq \mathbb{B}.$$
\end{Theorem}
\begin{proof}
It suffices to prove this for the operators $D_i$ since $D_J$ is a linear
combination of compositions of these operators.
Since $\mathbb{B}$ is spanned by the products
$\mathbf h_{j_1} \mathbf h_{j_2} \cdots \mathbf h_{j_l}$,
it suffices to show that
$D_i(\mathbf h_{j_1} \mathbf h_{j_2} \cdots \mathbf h_{j_l}) \in \mathbb{B}$.
Proceed by induction on $i$ and $l$.
The base case $l=1$ holds for all $i$ by Lemma \ref{Dh_r}: $D_i(\mathbf h_{j_1}) = \mathbf h_{j_1-i} \in
\mathbb{B}$. The base case $i=1$ holds for all $l$ by \cite[Theorem~3.9]{BSS11}.
If $l > 1$, then
\begin{align*}
D_i(\mathbf h_{j_1} \mathbf h_{j_2} \cdots \mathbf h_{j_l})
&= \left(D_i \circ U_{j_1}\right)(\mathbf h_{j_2} \cdots \mathbf h_{j_l}) \\
&= \left(U_{j_1} \circ D_i\right)(\mathbf h_{j_2} \cdots \mathbf h_{j_l})
+ \left(D_{i-1} \circ U_{j_1-1}\right)(\mathbf h_{j_2} \cdots \mathbf h_{j_l}) \\
&= \mathbf h_{j_1} D_i(\mathbf h_{j_2} \cdots \mathbf h_{j_l})
+ D_{i-1}(\mathbf h_{j_1-1} \mathbf h_{j_2} \cdots \mathbf h_{j_l}) \in \mathbb{B}.
\qedhere
\end{align*}
\end{proof}
Since the noncommutative $k$-Schur functions form a basis of $\mathbb{B}$, it is
natural to ask for the expansion of $D_i(\nckschur_\lambda)$ in terms of
noncommutative $k$-Schur functions. We obtain the following combinatorial description in terms
of strong strips. Recall that $w_\lambda$ denotes the $0$-Grassmannian element
corresponding to the $k$-bounded partition $\lambda$ under the bijection
between $\mathcal B^{(k)}$ and $W^0$.
\begin{Theorem}
\label{PieriRulePerp}
$$D_i\left(\nckschur_\lambda\right)
= \sum_{\operatorname{size}(w_\lambda \rightarrowtriangle w_\mu)=i} \nckschur_\mu.$$
\end{Theorem}
\begin{proof}
Since $D_i(\nckschur_\lambda) \in \mathbb{B}$, to compute its expansion in terms
of $k$-Schur functions, it suffices to compute the coefficient of
$\mathbf u_{w}$ for $0$-Grassmannian elements $w$ (see \S\ref{ss:noncommkschurs}). This
is the number of strong strips $v \rightarrowtriangle w$ of length $i$ with
$\mathbf u_v$ appearing as a term in $\nckschur_\lambda$. But a strong strip that
ends at a $0$-Grassmannian element necessarily begins at a $0$-Grassmannian
element \cite[Proposition~2.6]{LLMS10}, and there is a unique term $\mathbf u_v$
appearing in $\nckschur_\lambda$ with $v$ a $0$-Grassmannian element, namely
$\mathbf u_{w_\lambda}$.
\end{proof}
\subsection{Restriction to $\mathbb{B}$}
We prove that $D_J$ is determined by its restriction to $\mathbb{B}$ and we
identify this restriction as a linear operator adjoint to multiplication by a
symmetric function with respect to the pairing on $\Lambda_{(k)} \times
\Lambda^{(k)}$.
\begin{Theorem}
\label{sym-module-morphism}
Suppose $w \in W$ and $v \in W_0$. Then
\begin{gather*}
U_j(\mathbf u_w \mathbf u_v) = U_j(\mathbf u_w) \mathbf u_v \\
D_i(\mathbf u_w \mathbf u_v) = D_i(\mathbf u_w) \mathbf u_v
\end{gather*}
Consequently, $U_j$ and $D_i$ are completely determined by their
restriction to $\mathbb{B}$.
\end{Theorem}
\begin{proof}
Since $U_j$ is left-multiplication by $\mathbf h_j$, associativity
implies that $U_j(\mathbf u_w \mathbf u_v) = U_j(\mathbf u_w) \mathbf u_v$, establishing
the first equality.
By Corollary \ref{coro:newAbasis}, it suffices to show that
$D_i(\mathbf h_{j_1} \mathbf h_{j_2} \cdots \mathbf h_{j_l} \mathbf u_v) =
D_i(\mathbf h_{j_1} \mathbf h_{j_2} \cdots \mathbf h_{j_l}) \mathbf u_v$.
Proceed by induction. The case $i=1$ was proved in \cite[Theorem~3.10]{BSS11}.
Suppose the result holds for $D_{i-1}$. We prove the result also holds
for $D_i$ by induction on $l$. If $l=1$, then
\begin{align*}
D_i(\mathbf h_{j_1} \mathbf u_v)
= (D_i \circ U_{j_1})(\mathbf u_v)
= (U_{j_1} \circ D_{i})(\mathbf u_v) + (D_{i-1} \circ U_{j_1-1})(\mathbf u_v).
\end{align*}
Note that $D_i(\mathbf u_v) = 0$ because there is no strong strip starting
from $v \in W_0$. And since the result holds for $D_{i-1}$, we have
\begin{align*}
(D_{i-1} \circ U_{j_1-1})(\mathbf u_v)
= D_{i-1}(\mathbf h_{j_1-1}\mathbf u_v)
= D_{i-1}(\mathbf h_{j_1-1})\mathbf u_v
= D_i(\mathbf h_{j_1})\mathbf u_v.
\end{align*}
For $l > 1$, use the identity
$D_i \circ U_{j_1} = U_{j_1} \circ D_i + D_{i-1} \circ U_{j_1-1}$
to write
\begin{align*}
D_i(\mathbf h_{j_1} \mathbf h_{j_2} \cdots \mathbf h_{j_l} \mathbf u_v)
&= (U_{j_1} \circ D_i)(\mathbf h_{j_2} \cdots \mathbf h_{j_l} \mathbf u_v)
+ D_{i-1} (\mathbf h_{j_1-1}\mathbf h_{j_2} \cdots \mathbf h_{j_l} \mathbf u_v).
\end{align*}
Since the product $\mathbf h_{j_2} \cdots \mathbf h_{j_l}$ involves fewer than $l$
factors, by induction we have that
\begin{align*}
(U_{j_1} \circ D_i)(\mathbf h_{j_2} \cdots \mathbf h_{j_l} \mathbf u_v)
= U_{j_1} \left( D_i(\mathbf h_{j_2} \cdots \mathbf h_{j_l}) \mathbf u_v \right)
= (U_{j_1} \circ D_i)(\mathbf h_{j_2} \cdots \mathbf h_{j_l}) \mathbf u_v.
\end{align*}
Since the result holds for $D_{i-1}$, we have that
\begin{align*}
D_{i-1} (\mathbf h_{j_1-1}\mathbf h_{j_2} \cdots \mathbf h_{j_l} \mathbf u_v)
= D_{i-1} (\mathbf h_{j_1-1}\mathbf h_{j_2} \cdots \mathbf h_{j_l}) \mathbf u_v.
\end{align*}
Hence,
$D_i(\mathbf h_{j_1} \mathbf h_{j_2} \cdots \mathbf h_{j_l} \mathbf u_v)
= D_i(\mathbf h_{j_1} \mathbf h_{j_2} \cdots \mathbf h_{j_l}) \mathbf u_v$,
as desired.
\end{proof}
We next identify the restriction of $D_J$ to $\mathbb{B}$. For a composition $J$, let
$s_J$ denote the \emph{ribbon Schur function} indexed by $J$ (for a good introduction to ribbon Schur functions, see for instance \cite{BTW06}) and
let $\ribbonfcn{J}$ denote its image in $\Lambda^{(k)}$.
Recall that
$\ribbonfcn{J}^\perp:\Lambda_{(k)} \to \Lambda_{(k)}$
is the linear operator adjoint to multiplication by $\ribbonfcn{J}$ in
$\Lambda^{(k)}$. We also denote the corresponding linear operator on $\mathbb{B}$ by
$\ribbonfcn{J}^\perp$.
\begin{Theorem}
\label{restriction}
The restriction of $D_J$ to $\mathbb{B}$ is $\ribbonfcn{J}^\perp$.
Consequently, $D_J$ is the extension to $\mathbb{A}$, as defined in
\S\ref{ss:extensions}, of $\ribbonfcn{J}^\perp:\mathbb{B} \to \mathbb{B}$.
\end{Theorem}
\begin{proof}
In the following, let $D_J(\kschur_\lambda)$ denote the image
$D_J(\nckschur_\lambda)$ under the isomorphism $\mathbb{B} \to \Lambda_{(k)}$.
We will prove, for all $\kschur_\lambda$ and $\dualkschur_\mu$,
$$\left\langle D_J\left( \kschur_\lambda \right), \dualkschur_{\mu} \right\rangle
= \left\langle \kschur_\lambda, \ribbonfcn{J}\dualkschur_{\mu} \right\rangle.$$
Proceed by induction on the length of $J = [j_1,j_2,\dots,j_l]$.
Suppose $l = 1$. Then it suffices to prove that
\begin{gather*}
\left\langle D_j\left( \kschur_\lambda \right), \dualkschur_\mu \right\rangle
= \left\langle \kschur_\lambda, \overline{h_j}\dualkschur_\mu \right\rangle.
\end{gather*}
But this follows immediately from Theorem \ref{PieriRulePerp} and
the Pieri rule: $\overline{h_j} \dualkschur_{w_\mu} = \sum
\dualkschur_{w_\lambda}$ with the sum running over all strong strips
${w_\lambda} \rightarrowtriangle w_\mu$ of size $j$ (see \cite[Theorem 4.13]{LLMS10}).
Now suppose the result holds for compositions of length less than
$l$. Let $J = [j_1, j_2, \dots, j_l]$. Observe that
\begin{align*}
D_J = D_{j_1} \circ D_{[j_2, \dots, j_l]} - D_{[j_1 + j_2, \dots, j_l]},
\end{align*}
so by induction and the product rule for ribbon Schur functions
\cite[\S 169]{Mac16},
\begin{align*}
D_J
&=
\ribbonfcn{j_1}^\perp \circ \ribbonfcn{[j_2, \dots, j_l]}^\perp -
\ribbonfcn{[j_1 + j_2, \dots, j_l]}^\perp, \\
&=
\overline{s_{j_1}s_{[j_2, \dots, j_l]}
- s_{[j_1 + j_2, \dots, j_l]}}^\perp
= \ribbonfcn{J}^\perp.
\qedhere
\end{align*}
\end{proof}
\begin{Theorem}\label{cor:commute}
The operators $D_J$ and $D_K$ commute.
\end{Theorem}
\begin{proof}
$\mathbb{A}$ is spanned by elements of the form $\mathbf b \mathbf u_w$ with $\mathbf b \in \mathbb{B}$ and
$w \in W_0$ (Proposition \ref{prop:newAbasis}), so it suffices to prove
this for these elements. Combining Theorems
\ref{sym-module-morphism} and \ref{restriction}, we have
\begin{align*}
&\left(D_K \circ D_J\right)\left(\mathbf b\mathbf u_w\right)
= \left(D_K \circ D_J\right)\left(\mathbf b\right)\mathbf u_w
= \left(\ribbonfcn{K}^\perp \circ
\ribbonfcn{J}^\perp\right)\left(\mathbf b\right)\mathbf u_w
\\
&= \left(\ribbonfcn{J}^\perp \circ
\ribbonfcn{K}^\perp\right)\left(\mathbf b\right)\mathbf u_w
= \left(D_J \circ D_K\right)\left(\mathbf b\right)\mathbf u_w
= \left(D_J \circ D_K\right)\left(\mathbf b\mathbf u_w\right),
\end{align*}
where the third equality holds because $\Lambda^{(k)}$ is commutative, so the
adjoint operators $\ribbonfcn{K}^\perp$ and $\ribbonfcn{J}^\perp$ commute.
\end{proof}
\section{Strong Schur functions}\label{sec:strong}
Lam, Lapointe, Morse, and Shimozono \cite{LLMS10} generalized the $k$-Schur
functions to a larger set of functions called the strong Schur functions. We
use the properties of the operators developed in the previous section to prove
a series of their conjectures \cite[Conjecture~4.18]{LLMS10} regarding these
functions.
\subsection{Strong Schur functions are symmetric functions}
For $u, v \in W$, define the \emph{strong Schur function}
\begin{align*}
\operatorname{Strong}_{u/v}
&= \sum_{ u \to \cdots \to v \in \mathcal G^{\downarrow} }
F_{\operatorname{ascomp}(u \to \dots \to v)},
\end{align*}
where $F_J$ denotes the fundamental quasi-symmetric function indexed by the
composition $J$. In \cite{LLMS10}, it was shown that $\operatorname{Strong}_{u/\id}$ is a
symmetric function; and that when $u$ is $0$-Grassmannian, it is a $k$-Schur
function.
\begin{Remark}
\label{remark:strong tableaux}
The definition given here is a reformulation of that in \cite{LLMS10}.
They defined $\operatorname{Strong}_{u/v}$ as the generating function of ``strong
tableaux''; the above definition is obtained from theirs by lumping
together tableaux of the same ``weight'', yielding the expansion in
terms of monomial quasi-symmetric functions below.
\end{Remark}
\begin{Theorem}[{\cite[Conjecture~4.18(1)]{LLMS10}}]\label{thm:LLMS1}
$\operatorname{Strong}_{u/v}$ is a symmetric function. Furthermore, it expands
positively in the monomial basis $m_\lambda$ of $\Lambda$:
\begin{align*}
\operatorname{Strong}_{u/v}
&= \sum_{\lambda}
\left\langle D^{\lambda}(\mathbf u_u), \mathbf u_v \right\rangle_\mathbb{A}
m_\lambda,
\end{align*}
where $D^\lambda = D_{\lambda_1} \circ \dots \circ D_{\lambda_l}$.
\end{Theorem}
\begin{proof}
The coefficient of the fundamental quasi-symmetric function $F_J$ in
$\operatorname{Strong}_{u/v}$ is the number of paths in $\mathcal G^{\downarrow}$ from $u$ to $v$
with ascent composition equal to $J$. This is precisely
the coefficient of $\mathbf u_v$ in $D_J(\mathbf u_u)$. Hence,
\begin{align*}
\operatorname{Strong}_{u/v}
&= \sum_{J \models \ell(u)-\ell(v)}
\langle D_J(\mathbf u_u), \mathbf u_v \rangle_\mathbb{A}
F_J.
\end{align*}
Recall that $F_J = \sum_{I \succeq J} M_I$, where $M_I$
denotes the monomial quasi-symmetric function indexed by the
composition $I = [i_1, \dots, i_r]$; for example,
$F_{[2,1]} = M_{[2,1]} + M_{[1,1,1]}$. Thus,
\begin{align*}
\operatorname{Strong}_{u/v}
&= \sum_{J}
\langle D_J(\mathbf u_u), \mathbf u_v \rangle_\mathbb{A}
\sum_{I \succeq J} M_I \\
&= \sum_{I}
\left(
\sum_{J \preceq I}
\langle D_J(\mathbf u_u), \mathbf u_v \rangle_\mathbb{A}
\right)
M_I \\
&= \sum_{I}
\langle D^I(\mathbf u_u), \mathbf u_v \rangle_\mathbb{A}
M_I
\end{align*}
where $D^I = D_{i_1} \circ \cdots \circ D_{i_r}$.
Since the operators $D_{i}$ and $D_{j}$ commute for all $i$ and $j$,
the operator $D^I$ depends only on the underlying partition
$\lambda(I)$ of $I$. Hence,
\begin{align*}
\operatorname{Strong}_{u/v}
&= \sum_{\lambda} \sum_{\lambda(I) = \lambda}
\langle D^I(\mathbf u_u), \mathbf u_v \rangle_\mathbb{A}
M_I \\
&= \sum_{\lambda}
\langle D^{\lambda}(\mathbf u_u), \mathbf u_v \rangle_\mathbb{A}
\sum_{\lambda(I) = \lambda} M_I \\
&= \sum_{\lambda}
\langle D^{\lambda}(\mathbf u_u), \mathbf u_v \rangle_\mathbb{A}
m_\lambda,
\end{align*}
where $m_\lambda$ is the monomial symmetric function. In particular
$\operatorname{Strong}_{u/v} \in \Lambda$.
\end{proof}
If $u$ and $v$ are $0$-Grassmannian elements, we write $\operatorname{Strong}_{\mu/\nu}$
instead of $\operatorname{Strong}_{u/v}$, where $\mu$ and $\nu$ are the $k$-bounded
partitions corresponding to $u$ and $v$, respectively.
It follows from \S\ref{ss:noncommkschurs} (as in the proof of Theorem
\ref{PieriRulePerp}) that the coefficient of $\mathbf u_v$ in $D^\lambda(\mathbf u_u)$ is
the coefficient of $\kschur_\nu$ in the expansion of $D^\lambda(\kschur_\mu)$
in terms of $k$-Schur functions. Thus,
\begin{gather*}
\left\langle \strut D^\lambda(\mathbf u_u), \mathbf u_v \right\rangle_\mathbb{A}
=
\left\langle D^\lambda(\kschur_\mu), \dualkschur_\nu \right\rangle
=
\left\langle \kschur_\mu, \overline{h_\lambda}\dualkschur_\nu \right\rangle
\end{gather*}
where the last equality follows from the fact that the restriction of
$D^\lambda$ to $\mathbb{B}$ is the adjoint to multiplication by $\overline{h_\lambda}$
(Theorem \ref{restriction}).
\begin{Corollary}
If $u$ and $v$ are $0$-Grassmannian elements corresponding to the
$k$-bounded partitions $\mu$ and $\nu$, respectively, then
\begin{align*}
\operatorname{Strong}_{\mu/\nu}
&= \sum_{\lambda}
\left\langle \kschur_\mu, \overline{h_{\lambda}} \dualkschur_\nu \right\rangle
m_\lambda.
\end{align*}
\end{Corollary}
\subsection{Strong Schur functions belong to $\Lambda_{(k)}$}
Next we verify the second part of Conjecture~4.18 from \cite{LLMS10}.
Recall that for a linear operator $f$ on $\mathbb{B}$, we denote by $\widehat{f}$ its
extension to $\mathbb{A}$ as defined in \S\ref{ss:extensions}.
\begin{Theorem}\label{thm:LLMS2}
Let $u, v \in W$. The strong Schur function
$\operatorname{Strong}_{u/v}$ lies in $\Lambda_{(k)}$.
Furthermore, we have the expansion in
homogeneous symmetric functions:
\begin{align*}
\operatorname{Strong}_{u/v}
=
\sum_{\lambda\in\mathcal B^{(k)}} \left\langle \widehat{m_\lambda^\perp} (\mathbf u_u), \mathbf u_v\right\rangle_\mathbb{A} h_\lambda.
\end{align*}
\end{Theorem}
\begin{proof}
Since the $m_\mu$ form a basis of $\Lambda$, there exist coefficients
$L_{\lambda,\mu}$ for which
$h_\lambda = \sum_{\mu} L_{\lambda,\mu} m_\mu$. Hence,
\begin{align*}
\operatorname{Strong}_{u/v}
&=
\sum_\lambda \left\langle D^\lambda(\mathbf u_u), \mathbf u_v\right\rangle_\mathbb{A} m_\lambda
\\&=
\sum_\lambda \left\langle \widehat{h_\lambda^\perp}(\mathbf u_u), \mathbf u_v\right\rangle_\mathbb{A} m_\lambda
\\&=
\sum_{\lambda,\mu} \left\langle L_{\lambda,\mu} \widehat{m_\mu^\perp} (\mathbf u_u), \mathbf u_v\right\rangle_\mathbb{A} m_\lambda
\\&=
\sum_{\mu} \left\langle \widehat{m_\mu^\perp} (\mathbf u_u), \mathbf u_v\right\rangle_\mathbb{A} \sum_{\lambda}L_{\lambda,\mu}m_\lambda
\\&=
\sum_{\mu} \left\langle \widehat{m_\mu^\perp} (\mathbf u_u), \mathbf u_v\right\rangle_\mathbb{A} h_\mu\,.
\end{align*}
Since $m_\mu^\perp = 0$ for any partition $\mu$ that is not
$k$-bounded, the above summation runs over $k$-bounded partitions.
\end{proof}
\subsection{Expansions of strong Schur functions}
Since $\operatorname{Strong}_{u/v}$ lies in $\Lambda_{(k)}$, it has an expansion in terms of
$k$-Schur functions. The third part of Conjecture~4.18 of \cite{LLMS10} deals
with the coefficients in this expansion.
\begin{Theorem}\label{thm:LLMS3}
Let $u, v \in W$.
\begin{align*}
\operatorname{Strong}_{u/v}
&=
\sum_{\lambda\in\mathcal B^{(k)}} \left\langle \widehat{\dualkschur_\lambda^\perp} (\mathbf u_u), \mathbf u_v\right\rangle_\mathbb{A} \kschur_\lambda
\end{align*}
\end{Theorem}
\begin{proof}
By using the expansions
$h_\lambda = \sum_\tau K^{(k)}_{\tau, \lambda} \kschur_\tau$
and
$\dualkschur_\tau = \sum_\lambda K^{(k)}_{\tau, \lambda} m_\lambda$,
\begin{align*}
\operatorname{Strong}_{u/v}
&=
\sum_{\lambda\in\mathcal B^{(k)}} \left\langle \widehat{m_\lambda^\perp} (\mathbf u_u), \mathbf u_v\right\rangle_\mathbb{A} h_\lambda
\\&=
\sum_{\lambda\in\mathcal B^{(k)}} \left\langle \widehat{m_\lambda^\perp} (\mathbf u_u), \mathbf u_v\right\rangle_\mathbb{A} \sum_{\tau\in\mathcal B^{(k)}} K^{(k)}_{\tau,\lambda} \kschur_\tau
\\&=
\sum_{\tau\in\mathcal B^{(k)}} \left\langle \sum_{\lambda\in\mathcal B^{(k)}} K^{(k)}_{\tau,\lambda} \widehat{m_\lambda^\perp} (\mathbf u_u), \mathbf u_v\right\rangle_\mathbb{A} \kschur_\tau
\\&=
\sum_{\tau\in\mathcal B^{(k)}} \left\langle \widehat{\dualkschur_\tau^\perp}(\mathbf u_u), \mathbf u_v\right\rangle_\mathbb{A} \kschur_\tau.
\qedhere
\end{align*}
\end{proof}
If $u$ and $v$ are $0$-Grassmannian, with $u = w_\mu$ and $v =
w_\nu$, then the coefficient in the above expression reduces
to
\begin{gather*}
\left\langle \widehat{\dualkschur_\lambda^\perp}(\mathbf u_u), \mathbf u_v\right\rangle_\mathbb{A}
=
\left\langle {\dualkschur_\lambda^\perp}\left( \kschur_\mu \right), \dualkschur_\nu\right\rangle
=
\left\langle \kschur_\mu, \dualkschur_\lambda \dualkschur_\nu\right\rangle.
\end{gather*}
This establishes the third part of Conjecture~4.18 of \cite{LLMS10} for
$0$-Grassmannian elements.
\begin{Corollary}[{\cite[Conjecture~4.18(3)]{LLMS10}}]
\label{Conj4.18c}
Let $\mu$ and $\nu$ be $k$-bounded partitions.
The coefficient of $\kschur_\lambda$ in $\operatorname{Strong}_{\mu/\nu}$ is the
coefficient of $\dualkschur_\mu$ in $\dualkschur_\lambda \dualkschur_\nu$:
\begin{align*}
\operatorname{Strong}_{\mu/\nu}
&=
\sum_{\lambda\in\mathcal B^{(k)}} \left\langle \kschur_\mu, \dualkschur_\lambda \dualkschur_\nu \right\rangle \kschur_\lambda.
\end{align*}
\end{Corollary}
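We remark that, by the stability of $k$-Schur functions, for $k$ sufficiently large both $\kschur_\lambda$ and $\dualkschur_\lambda$ reduce to the ordinary Schur function $s_\lambda$, so that Corollary~\ref{Conj4.18c} specializes to the classical expansion
\begin{align*}
s_{\mu/\nu}
=
\sum_{\lambda} \left\langle s_\mu, s_\lambda s_\nu \right\rangle s_\lambda
=
\sum_{\lambda} c^{\mu}_{\lambda\nu}\, s_\lambda
\end{align*}
of a skew Schur function in terms of the Littlewood--Richardson coefficients $c^{\mu}_{\lambda\nu}$.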
\begin{Corollary}
Let $\mu$ and $\nu$ be $k$-bounded partitions. Then the skew $k$-Schur function is:
\begin{align*}
\kschur_{\mu/\nu} := {\dualkschur_\nu^\perp}\left(\kschur_{\mu}\right)
&=
\operatorname{Strong}_{\mu/\nu}.
\end{align*}
\end{Corollary}
\begin{proof}
By Corollary \ref{Conj4.18c} applied with $\nu$ the empty partition, we have $\operatorname{Strong}_{\mu} = \kschur_\mu$.
Thus, the coefficient of $\kschur_\lambda$ in the left-hand side is
$
\langle
{\dualkschur_\nu^\perp}(\kschur_{\mu}),
\dualkschur_\lambda
\rangle
=
\langle
\kschur_{\mu},
\dualkschur_\nu \dualkschur_\lambda
\rangle,
$
which is the coefficient of $\kschur_\lambda$ in $\operatorname{Strong}_{\mu/\nu}$.
\end{proof}
Consequently, we obtain an explicit combinatorial description of the skew
$k$-Schur function $\kschur_{\mu/\nu}$, since the strong Schur function
$\operatorname{Strong}_{\mu/\nu}$ has an explicit combinatorial description in terms of
``strong tableaux'' (see \cite{LLMS10} for details).
\bibliographystyle{halpha}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 9,021 |
Q: Can foreach selectively utilize multiple back-ends (doParallel and doRedis) at the same time? I am processing files which are about 1 Gb in size. The processing can be broken into two steps, one which depends on the large files and a second which depends on a smaller amount of data. Can I queue some foreach commands for the doParallel back end while queuing other foreach commands with the doRedis back end?
foreach() %dopar:doParallel% {
    foreach() %dopar:doRedis% {}
}
The goal here is to keep the remote jobs small due to memory and network IO constraints while also utilizing the several local cores I have for the pre/post-processing.
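A minimal sketch of one possible workaround, under the assumption that foreach dispatches %dopar% to whichever backend is currently registered in the running R session: register doParallel in the master session, and re-register doRedis inside each local worker, since every PSOCK worker created by makeCluster is its own R process with its own backend registration. Here big_files, preprocess(), remote_task() and postprocess() are hypothetical placeholders, and remote doRedis workers are assumed to already be listening on the "jobs" queue. (The %:% nesting operator, by contrast, runs the whole nested loop on a single backend.)
library(foreach)
library(doParallel)                   # local multicore backend

cl <- makeCluster(4)                  # local cores for the 1 Gb files
registerDoParallel(cl)

results <- foreach(f = big_files, .packages = "doRedis") %dopar% {
    small <- preprocess(f)            # heavy local step (hypothetical helper)
    # re-register the backend inside this worker's own R session;
    # assumes a reachable Redis server with workers on the "jobs" queue
    registerDoRedis("jobs")
    out <- foreach(d = small) %dopar% remote_task(d)  # small remote jobs
    postprocess(out)                  # local post-processing (hypothetical helper)
}
stopCluster(cl)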
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 1,496 |
Business & IP Centre Leeds is a one-stop source for all your business and intellectual property needs.
We draw on a wide range of resources and have extensive experience in dealing with business and intellectual property enquiries. We are members of the Europe-wide PATLIB network and the British Library's Business & IP Centre network, and we work closely with the West and North Yorkshire Chamber of Commerce and other partners.
Whether you are taking your first steps in setting up in business, targeting new customers, increasing your knowledge of competitors or looking for new partners or markets we have a wealth of information at our fingertips.
We can guide you through the procedures required to help you protect your products and services using patents, trade marks, registered designs and copyright.
| {
"redpajama_set_name": "RedPajamaC4"
} | 4,031 |
The Selfie Is More Powerful Than We Realize
by Ali Choudhry, March 30, 2021
In 2016, Kim Kardashian broke the internet with a mother's day selfie. We've all seen the picture; she's standing in front of a mirror wearing pretty much her birthday suit. It becomes such a big deal that Emily Ratajkowski and Kim Kardashian go on to recreate the thing. Breaking the internet twice! Why is this such a big deal though?
Before I jump into the actual article, I want to highlight that this is written less from a place of "this is photography," "this is how you take a photograph," or "this camera is great," and more from a place of talking broadly about image-making practices.
There really isn't a right answer. Or a wrong answer. Heck, there might not even be any answer at all. Just a discussion.
So, with that, buckle up, take in what you can, reject what doesn't resonate with you, and let's have a discussion in the comments below. I look forward to reading your thoughts!
Prior to the invention of cameras, the only way to have a self-portrait was to either paint it yourself, or to commission an artist to create one for you. Of course, the former was difficult without the requisite tools and skill base; and the latter was only possible for those who could afford one, such as royalty, nobility, or later, the rising merchant classes.
Once cameras were invented, portraiture became more democratic in that it was more accessible. Photography wasn't nearly as time-consuming or expensive as a painting.
In both painting and photography, unless you had the required equipment and skills, self-portraits still weren't readily made. That is to say, only a painter could make a self-portrait painting. Only a photographer could photograph a self-portrait photograph. Outside of this, there was still the mediation from patron to artist to image.
The selfie (and by extension, self-portraiture) has only become accessible in a widespread and meaningful way within about the last 25 years.
Jean-Paul Sartre was a French philosopher whose work helped define philosophy for the 20th century. Sartre defines three states of being.
In-itself: is what it is and nothing else. A cup is a cup and nothing else. Glasses are only ever glasses.
For itself: exists in self. So, I might be a student. But I'm also a photographer. I cook. I go on hikes. These are ways in which my definition of self is internalized.
For others: having the self-awareness that your own "in-itself"-ness (as viewed by others) is reductive.
So, I might be "for itself" an entrepreneur. But my "in itself," as viewed by "for others," is based on their beliefs about me.
In Othered Body, Obscene Self(ie): A Sartrean Reading of Kim Kardashian-West, Else Dowden argues (and in my opinion, correctly so) that through the selfie, our sense of self of "for others" and "for itself" merges, in a way. For the first time, people, especially women, are able to take ownership of their own image.
As the image has traditionally been mediated through others (in both what the image looks like, as well as how it's disseminated), having access to both creating and sharing images of course changes the power paradigm of the self as well as the self-image.
A convention is a kind of like an unspoken "rule" for images. We intuitively know these rules (and we've all pretty much seen them at one point or another). As an example, stock photography is riddled with them.
You look at stock images of university or college students, and they're always the same: clean cut with not a piercing or tattoo in sight, traditional hair and clothes, almost as if they're about to ask you for a cucumber sandwich or something, and always so danged delightfully happy.
Go to a real university or college campus, and not only do students not look or dress like that, but who is that happy?
That's a convention.
Some of the FStoppers writers graciously offered up their selfies. Which ones, in your opinion, conform to conventions? Which ones break them? Why is that important?
Most stock photos are actually like that. Stock photos of office workers, families, tradespeople. All the same conventions of acceptability apply to the image. Clean cut. "Normal." Acceptable.
These images are created through the lens of Sartre's "being for others." So, any deviation from that upsets the framework.
I've very briefly touched a bunch of topics without actually talking about that selfie. So, let's circle back to it. Why is it such a big deal?
You have to take inventory of who Kim is as a person: a woman of color with a large social platform who has taken ownership of her own image. People of color were traditionally only photographed as the "other" — "exotic" or "savage."
Women were really only thought of in relation to others (wife, mother, daughter). This was actually a lot of the criticism the selfie receives as well. Celebrities, traditionally, had their image controlled by the media. Social media shifts some of that power back.
Once you consider all of these things (and probably so many more that I haven't mentioned here), you realize how many conventions were actually broken by a single image. And then, once you realize that, you can consider how the power shift that social media allows is simply just a catalyst for broader shifts in conversation.
What does all of this really mean?
For the longest time, we as humans didn't really have the power to control our own image or how we are presented. For the first time, we are experiencing a shift in the power dynamics of image-making.
The power of modern cell phones allows not only the creation of any image you can imagine but also the power to share it with any other person who has a phone and access to the internet. Because of this, we're beginning to see a shift (and almost a merging) of "being for itself" and "being for others."
I asked a few Fstoppers writers and staff to share some selfies for this article. These folks could have photographed themselves literally any which way, and this is what they did.
Thinking about conventions and Sartre's states of being, do any stand out more than others? Why or why not?
About Ali Choudhry
alichoudhry.com
Ali Choudhry is a freelance photographer whose practice finely straddles art projects and commercial work. He is based in Naarm / Melbourne. He is interested in the camera as an entity: a camera is able to warp space and time in a way the human eye can not.
Studio 403 - March 30, 2021 [Edited]
Oh no, I am off to more navel gazing. Am I self aware? Yes. Me, self centered, only think of me and what you think of me. My satire of the day, that is, I am the greatest photographer ever known. Please pass on my greatness, lol.
Ryan Handt - March 30, 2021
There's a clear delineation between a selfie and a self portrait in my mind. Technically, a selfie is when you are holding your camera in your own hand and taking the photo. A self portrait is when the camera is out of hand. Simple, but clear. There's also a conceptual difference for me, but that's too much to go into here.
Ali Choudhry Ryan Handt - March 31, 2021
To clarify, images of say Vivian Maier or Sarah Moon or Richard Avedon holding a camera and taking an image of themselves are selfies?
Ryan Handt Ali Choudhry - April 6, 2021
Charles Mercier Ryan Handt - March 31, 2021
Why is it too much to go into here. Explain away!
Ryan Handt Charles Mercier - April 6, 2021
Typing on a phone is not conducive to eloquence. But to put it simply, the time and thought put into a selfie is rarely adequate. Not saying all selfies aren't thought out, but the vast majority are not, whereas self-portraits are more likely to be thought out.
Charles Mercier Ryan Handt - April 7, 2021
You might be surprised to know that the most successful bloggers take a lot of time and effort in order to make their selfies come out just right.
Ryan Handt Charles Mercier - April 12, 2021
That's not surprising at all seeing as it's their job to be in front of the camera. This has nothing to do with what I said. I'm speaking about the conceptuality of a self portrait and selfie. And it's my opinion, nothing more.
Ali Choudhry Ryan Handt - April 7, 2021
I think that this is mostly an arbitrary delineation. I believe that "selfie" is just a shorter, vernacular usage of "self portrait". You can further elaborate on things like production value or specific qualities of the final image, but they're the same thing.
Ryan Handt Ali Choudhry - April 12, 2021
As I said, it's my opinion and the way I interpret the words. When I was younger, did I use a film camera to take self portraits or selfies? Yes I did, but the word selfie didn't exist then. And I don't consider those photos to be self portraits, just like I don't consider a snapshot to be art. It's the elevation of the word, it's an interpretation of its meaning. It's like the difference between a White Castle hamburger and a hamburger from Peter Luger's. Yes, they're both burgers, but they are not the same thing.
Sean Chadwell - March 31, 2021
Another thoughtful and interesting piece, Ali.
Ali Choudhry Sean Chadwell - March 31, 2021
Thanks Sean!
Charles Mercier - March 31, 2021
Seriously? Someone thinks that photo of jumping out of a plane is a selfie?
Ali Choudhry Charles Mercier - April 1, 2021
And you disagree?
Charles Mercier Ali Choudhry - April 1, 2021
So the person threw the camera out of his hand and it took the photo and then the person caught the camera? A selfie is taken by the person in the photo.
Terry Waggoner - April 1, 2021
It's powerful enough to get the individual trying it killed...
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 9,607 |
\section{Introduction}
Based on the pioneering work~\cite{Kasevich91PRL,Kasevich92APB} by Mark~Kasevich and Steve~Chu starting in 1991, light-pulse atom interferometry has grown into an extremely successful tool for precision measurements. Indeed, ground-breaking experiments have been performed in the fields of inertial sensing and tests of the foundations of physics. Inertial sensing covers measurements of the local gravitational acceleration~\cite{Peters99Nature,Peters01Metrologia,PhysRevA.86.043630,Malossi10PRA,0256-307X-28-1-013701,Schmidt2011,Zhou11GRG,McGuinness12APL,Debs11PRA,Charriere12PRA,Altin13NJP,
Bidel13APL,Hauth13APB,PhysRevA.88.023614,Andia13PRA,Bonnin13PRA,Biedermann15PRA,Baryshev2015,1402-4896-91-5-053006,LeGouet08APB,Hu13PRA}, or rotations, for example of the Earth~\cite{Gustavson97PRL,Gauguet09PRA,Stockton11PRL,Berg15PRL,Dutta16PRL,PhysRevLett.97.240801,PhysRevLett.99.173201}, as well as gravity gradiometry~\cite{McGuirk02PRA,Lamporesi08PRL}. The present lecture notes aim at providing an introduction into, and an overview over this rapidly moving field. Moreover, our article complements the corresponding lectures by Daniel M. Greenberger.
In this section we intend to motivate this branch of physics located at the interface of atomic physics, quantum optics and solid state research, and to give a preview of coming attractions. In order to focus on the essential ideas we keep this section brief and postpone more detailed discussions to the later sections.
We start in Sec.~\ref{sec:applications} by mentioning applications of interferometry with cold atoms ranging from tests of the foundations of physics to quantum sensors. We then outline in Sec.~\ref{sec:optical_elements} the realization of optical elements in atom optics such as beam splitters and mirrors leading us in Sec.~\ref{sec:sources_atom_optics} to various sources for our interferometers. Here we emphasize especially the use of an atom chip as a trap and a mirror for laser light opening the avenue towards a quantum tiltmeter and a gravimeter. An outline of our lecture notes in Sec.~\ref{sec:overview} concludes our introduction.
\subsection{Applications of atom interferometry}
\label{sec:applications}
The scope of testing fundamental physics with atom interferometry comprises on the one hand measurements of fundamental constants such as Newton's gravitational constant $G$~\cite{Snadden98PRL,Fixler07Science,Rosi14Nature,Biedermann15PRA} and Sommerfeld's fine-structure constant $\alpha$~\cite{Clade06PRA,Mueller06APB,Bouchendira11PRL,Estey15PRL,Parker191}. Indeed, the result for $\alpha$ obtained with photon-recoil measurements in recent years ~\cite{Bouchendira11PRL} has entered into the determination of the CODATA value. Moreover, a measurement of $\alpha$ has been reported this year~\cite{Parker191} with an accuracy of $2.0\cdot10^{-10}$, which is even more accurate than the best measurements to date, based on measuring the anomalous magnetic moment of the electron~\cite{Hanneke08PRL}.
On the other hand, testing the pillars of general relativity, for example, the universality of free fall (UFF) resulting from Einstein's equivalence principle~\cite{Damour96CQG}, is of particular interest. The most elementary test of the UFF is to compare the measurements of local gravity with a classical and an atomic~\cite{Peters99Nature,Merlet10Metrologia} gravimeter.
More elaborate set-ups use two different quantum objects, for instance, two isotopes of the same atomic species, or two different atomic species, and measure their free-fall rate within the same device~\cite{Fray04PRL,Bonnin13PRA,Kuhn14NJP,Tarallo14PRL,Schlippert14PRL,Zhou15PRL}. Future experiments of this kind are expected to catch up with, or even surpass, today's best classical tests of the UFF based on Lunar-Laser-Ranging~\cite{Williams12CQG}, torsion balance experiments~\cite{Adelberger09}, or space missions using freely-falling test masses~\cite{Touboul2017PRLMICROSCOPE}.
In addition, atom interferometers can also test different models in particle physics in the search for unknown forces or dark energy~\cite{Hamilton15Science,Tilburg15PRL,Elder16PRD}. Even more exotic experiments aim for the detection of gravitational waves~\cite{PhysRevLett.110.171102,PhysRevD.78.122002,DIMOPOULOS200937,Hogan2011}, new probes of the foundations of quantum mechanics, such as delayed-choice experiment~\cite{Manning15Nature}, or for the creation of atomic Einstein-Podolsky-Rosen pairs~\cite{Peise15NatComm,Engelsen17PRL}.
Especially in absolute gravimetry, the sensitivity of atomic sensors is competitive with classical devices~\cite{Middlemiss16Nature}. The conventional sensors used for geodesy~\cite{Timmen10Springer} can be categorized as absolute gravimeters, such as the falling-corner cube gravimeters~\cite{Zumberge82Met,Niebauer95Met}, and relative gravimeters like superconducting gravimeters~\cite{Prothero68RSI,Okubo97GRL,Imanishi04Science}, which have a changing bias over time.
State-of-the-art {\it atomic} gravimeters operate with Raman-type beam splitters and cold atoms, which are either dropped or launched from optical molasses -- a technique invented for Cesium fountain clocks~\cite{Salomon90EPL,Philips91PS}. State-of-the-art laboratory grade examples of these gravimeters~\cite{Hu13PRA,Bidel13APL,Freier16CS,Fang16CS} reach inaccuracies in the low $\upmu$Gal regime. The maturity of this technology has now arrived at a level that commercial products with a specified sensitivity of better than 10\,$\upmu$Gal~\cite{Bodart10APL,muquans,aosense} are available.
\subsection{Optical elements for atoms}
\label{sec:optical_elements}
The coherent manipulation of matter waves is a central element in every matter wave interferometer~\cite{Cronin09RevModPhys}. Two methods to realize beam splitters and mirrors based on light pulses offer themselves: Raman~\cite{Kasevich91PRL} and Bragg diffraction~\cite{Kozuma99PRL,Torii00PRA}. However, these techniques imply conceptual differences which have to be considered when constructing an atom interferometer aimed at measuring inertial effects~\cite{Antoine03}.
Indeed, Raman diffraction, where an atomic $\Lambda$-scheme is driven, requires a phase-stable microwave coupling between two hyperfine ground states of an atom usually established by two phase-locked lasers. Working with two different internal states of an atom from an ensemble with a wide velocity distribution has the advantage of velocity filtering with blow-away pulses and state-selective detection~\cite{Kasevich91PRL2,Altin13NJP}. These state-labeling features are described in detail by Christian~Bord\'e~\cite{Borde89PRA}.
In contrast, Bragg diffraction involves only a single atomic ground state and allows us to construct with a single laser system a pure momentum, or recoil beam splitter. However, due to the transition frequency being in the radio-frequency (RF) range, the detection needs to be spatially resolved, and, in order to distinguish different diffraction orders~\cite{Szigeti12NJP,Altin13NJP}, requires a momentum distribution below recoil.
\subsection{Sources for atom optics}
\label{sec:sources_atom_optics}
Today's generation of atomic inertial sensors typically operates with cold atoms released or launched from an optical molasses. This approach was taken in our simultaneous, dual-species Raman-type interferometer with molasses-cooled \textsuperscript{87}Rb and \textsuperscript{39}K ensembles which measured the E\"otv\"os ratio to $\eta_{\,{\mathrm{Rb},\mathrm{K}}} = (0.3 \pm 5.4) \cdot 10^{-7}$. The velocity distribution and finite size of these sources of atoms limit the efficiency of the beam splitters as well as complicate the analysis of systematic uncertainties.
These limitations can be overcome by the use of atomic ensembles with a typical average momentum well below the recoil of a photon, for example Bose-Einstein condensates (BECs)~\cite{Anderson198,PhysRevLett.75.3969}. The width of a momentum distribution corresponding to a BEC can be further reduced after reaching the regime of ballistic expansion, where all mean field energy is converted to the kinetic energy, by the application of the delta-kick collimation~(DKC) technique~\cite{Muentinga13PRL}.
Atom-chip technologies offer the possibility to generate a BEC and perform DKC in a fast and reliable way, resulting in miniaturized atomic devices. BECs are very useful for Bragg and double Bragg diffraction \cite{Ahlers16PRL,Abend16PRL} leading to high diffraction efficiencies. Indeed, such beam splitters and mirrors can reach an efficiency of above $95\%$ facilitating interferometry with high contrast.
Furthermore, BECs offer novel methods of coherent manipulation with high fidelity, to realize for example a tiltmeter~\cite{Ahlers16PRL}. A combination of double Bragg diffraction and Bloch oscillations gives rise to a relaunch procedure with an efficiency larger than $75\%$ for the diffraction of atoms in a retro-reflected optical lattice~\cite{Abend16PRL}. The novelty of this method originates from the fact that it relies on a single laser beam, which is also used as a beam splitter, and thus does not lead to an increased complexity of the setup.
We realize a Mach-Zehnder interferometer (MZI) by dropping ensembles directly after release, or accelerating them upwards after a certain time of free fall. The interferometry is performed as in a fountain, such that the total time~$2T$ of the interferometer can be extended. Here~$T$ is the time between the first beam-splitter pulse and the central mirror pulse.
We utilize an atom chip~\cite{Abend16PRL} for BEC generation and state preparation, including magnetic sub-state transfer, DKC and Stern-Gerlach-type deflection. A special feature of our setup is that the light field, which forms the MZI by Bragg diffraction, is reflected by the atom chip itself. In this way, the chip also serves as an inertial reference inside the vacuum chamber leading to a compact atom-chip gravimeter.
All atom-optics operations, the interferometry as well as the detection of the output states of the atom interferometer, are integrated into a volume of less than a cube of one centimeter side length. In the fountain mode, the MZI can be extended to a total interferometer time $2 T = 50$ ms with a large contrast $C = 0.8$, which yields an intrinsic sensitivity $\Delta g/g= 1.4\cdot 10^{-7}$. The state preparation, comprising DKC and Stern-Gerlach-type deflection, makes an important contribution to this achievement by improving the contrast and reducing the detection noise. An estimation of systematic uncertainties for the current setup and their projection onto a future device prove that it is possible to reach sub-$\upmu$Gal accuracies with a fountain-type geometry.
\subsection{Overview}
\label{sec:overview}
Our lecture notes are organized as follows. In Sec.~\ref{sec:Tools} we introduce the basic tools of atom interferometry such as beam splitters, mirrors, and optical lattices to construct a MZI for atoms. We then turn in Sec.~\ref{sec:EQP} to tests of the equivalence principle. In particular, we present a dual-species atom interferometer for \textup{\textsuperscript{87}Rb} and \textup{\textsuperscript{39}K} to investigate the UFF. Next, we present in Sec.~\ref{sec:BEC_interferometry} interferometers utilizing BECs on an atom chip. We introduce the technique of DKC and present a quantum tiltmeter as well as a gravimeter exploiting this technology. Finally, we conclude in Sec.~\ref{sec:Outlook} by providing an outlook on future devices such as the very long base line atom interferometry (VLBAI) facility and atom interferometers in space.
\section{Tools of atom interferometry}
\label{sec:Tools}
An atom interferometer requires the realization of a beam splitter and a mirror for atom waves. The underlying processes have to be coherent and phase-stable in order to establish an interference pattern.
Several options for such elements exist, and we analyze prominent ones exploited in current experiments below. In particular, in Sec. \ref{sec:beam_splitters_mirrors} we discuss beam splitters and mirrors based on Bragg and Raman diffraction. Next, in Sec. \ref{sec:Optical_lattice} we focus on the manipulation and accelerations of atoms by optical lattices. Concluding, we introduce in Sec.~\ref{sec:Mach_Zehnder} a common interferometer geometry based on beam splitters and mirrors.
\subsection{Beam splitters and mirrors}
\label{sec:beam_splitters_mirrors}
Beam splitters and mirrors for matter waves can be realized with the help of mechanical gratings~\cite{PhysRevLett.66.2693,PhysRevLett.66.2689}, or electromagnetic waves~\cite{Kasevich91PRL,Kasevich92APB,PhysRevLett.67.177,Rasel95PRL,Berman}. While single-photon electric or magnetic dipole transitions can implement a coherent electromagnetic coupling, the assessment in Sec.~\ref{sec:Rabi_oscillations} focuses on stimulated two-photon transitions. For this purpose, we first introduce the Rabi model which describes the atom-light interaction in an effective two-level system before we proceed to two-photon transitions in a three-level system. Here the absorption of a photon from a field with frequency $\omega_1$ is followed by stimulated emission into a field with frequency $\omega_2$, and {\it vice versa}.
Next, we consider in Sec.~\ref{sec:Bragg_Raman_diffraction} the momentum transfer due to the atom-light interaction which is at the very heart of the sensitivity of atom interferometers to inertial forces. We also discuss two standard approaches towards beam splitters. Raman diffraction is a widely used technique designed for atoms which have been laser-cooled in optical molasses without the application of additional cooling steps. Bragg diffraction is a powerful tool for delta-kick collimated Bose-Einstein condensates, since the velocity dispersion of the ensemble is of major relevance for the manipulation efficiency.
In Sec.~\ref{sec:Multi-photon_Bragg} we then concentrate on the generalization of a multi-photon coupling utilizing Bragg diffraction, and conclude in Sec.~\ref{sec:FiniteSize}, where we take into account the effects of the finite size of the atom cloud and the laser beam on the atom-light interaction.
\subsubsection{Rabi oscillations and two-photon coupling}
\label{sec:Rabi_oscillations}
We consider the atom as an effective two-level system consisting of the internal states $\ket{\mathrm{g}}$ and $\ket{\mathrm{e}}$ with energies $\hbar \omega_{\mathrm{g}}$ and $\hbar \omega_{\mathrm{e}}$, respectively, and a dipole moment $\vec{d}$. The time evolution of the state populations under the influence of a resonant ($\omega_0=\omega_{\mathrm{eg}}\equiv \omega_\mathrm{e}-\omega_\mathrm{g}$) electromagnetic field~$\vec{E}\equiv\vec{E}_0 \textnormal{cos}(\omega_{0}\tau + \phi)$ at time~$\tau$ with frequency~$\omega_{0}$ and phase~$\phi$ is determined by the Rabi frequency
\begin{equation}
\Omega_{\mathrm{eg}} \equiv \frac{\bra{\mathrm{e}} \vec{d} \cdot \vec{E}_0 \ket{\mathrm{g}} } {\hbar}= \Gamma \sqrt{\frac{I}{2I_{\mathrm{sat}}}}\,,
\label{eq:rabi}
\end{equation}
expressed in terms of the intensity~$I$ of the light field with the saturation intensity~$I_{\mathrm{sat}}$, and the natural linewidth~$\Gamma$ of the transition. Indeed,~$\Omega_{\mathrm{eg}}$ is assumed to be constant and is a measure of the coupling strength between the atom modeled by the two atomic states~$\ket{\mathrm{g}}$ and~$\ket{\mathrm{e}}$, and the electromagnetic field $\vec{E}$.
Off-resonant driving with a non-vanishing detuning~$\delta\equiv\omega_{\mathrm{eg}}-\omega_0$ is taken into account in the effective Rabi frequency
\begin{equation} \label{eq:rabieff}
\Omega_{\mathrm{eff}} \equiv \sqrt{|\Omega_{\mathrm{eg}}|^2 +\delta ^2} \,,
\end{equation}
which always leads to a faster oscillation of the probability
\begin{equation} \label{eq:probrabieff}
P_{\mathrm{e}} (\tau,\delta,\Omega_{\mathrm{eg}}) = \frac{1}{2}\left(\frac{\Omega_{\mathrm{eg}}}{\Omega_{\mathrm{eff}}}\right)^2[1 - \mathrm{cos}(\Omega_{\mathrm{eff}} \tau)]
\end{equation}
to find after an interaction time~$\tau$ the atom in the excited state $\ket{\mathrm{e}}$ if the atom is initially prepared in the ground state $\ket{\mathrm{g}}$.
The reduced amplitude of the oscillation is determined by the ratio $\Omega_{\mathrm{eg}}/\Omega_{\mathrm{eff}}$ of the resonant and the effective Rabi frequency. For a vanishing detuning, that is $\delta=0$, the amplitude of $P_{\mathrm{e}}$ is unity, whereas for a large detuning, $|\delta|\gg |\Omega_{\mathrm{eg}}|$, the amplitude of $P_{\mathrm{e}}$ tends towards zero.
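As a simple illustration of Eqs.~\eqref{eq:rabieff} and \eqref{eq:probrabieff}, we consider a detuning equal to the coupling strength, that is $\delta = |\Omega_{\mathrm{eg}}|$. In this case $\Omega_{\mathrm{eff}} = \sqrt{2}\,|\Omega_{\mathrm{eg}}|$, so the oscillation speeds up by a factor of $\sqrt{2}$, while its amplitude drops to $\left(\Omega_{\mathrm{eg}}/\Omega_{\mathrm{eff}}\right)^2 = 1/2$, and at most half of the population can be transferred.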
Rabi oscillations can be driven efficiently only for long-lived states, that is for states in which~$\Omega_{\mathrm{eff}}$ is large compared to the inverse lifetime of the working states $\ket{\mathrm{g}}$ and $\ket{\mathrm{e}}$. A common method to avoid decays from~$\ket{\mathrm{e}}$ to~$\ket{\mathrm{g}}$ is to choose these states such that the selection rules forbid a single-photon transition between them. The two states could then be coupled via a two-photon transition which requires an intermediate state~$\ket{\mathrm{i}}$.
As depicted in Fig.~\ref{fig:Lambda_Scheme}, light fields with frequencies $\omega_1$ and $\omega_2$ induce non-resonant transitions, where the transitions $\ket{\mathrm{g}}\longleftrightarrow \ket{\mathrm{i}}$ and $\ket{\mathrm{i}}\longleftrightarrow \ket{\mathrm{e}}$ are detuned by $\Delta$ and $\Delta+\delta^{(2)}$, respectively. Consequently, the frequency difference $\delta\omega\equiv\omega_1-\omega_2$ is equal to the frequency difference $\omega_{\mathrm{eg}}$ between the working states plus the two-photon detuning $\delta^{(2)}$, that is $\delta\omega\equiv\omega_{\mathrm{eg}}+\delta^{(2)}$. Here and in the following, the superscript $^{(2)}$ indicates a two-photon process.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\linewidth]{Lambda_Scheme.pdf}
\caption{Three-level atom interacting with two light fields. A two-photon coupling between the atomic states $\ket{\mathrm{g}}$ and $\ket{\mathrm{e}}$ is established by two electromagnetic fields of frequencies $\omega_1$ and $\omega_2$, with $\Omega_1$ and $\Omega_2$ being the Rabi frequencies of the corresponding one-photon transitions. Here $\Delta$ is the common detuning of the two-photon transition from the intermediate state $\ket{\mathrm{i}}$, and $\delta^{(2)}$ is the two-photon detuning.}
\label{fig:Lambda_Scheme}
\end{figure}
The intermediate state can now be short-lived itself, since it is only virtually populated, but enables sufficient coupling via simultaneous stimulated absorption and emission. The resulting two-photon Rabi frequency $\Omega_{12}$, which drives the transition $\ket{\mathrm{g}}\longleftrightarrow \ket{\mathrm{e}}$ in the case $\Delta \gg \Omega_j$ with~$j=1,2$, is governed by the product of both Rabi frequencies $\Omega_1$ and $\Omega_2$ as well as the common detuning~$\Delta$ of the two-photon transition to the intermediate state~$\ket{\mathrm{i}}$, that is
\begin{equation} \label{eq:twophotonrabi}
\Omega_{12} \equiv \frac{\Omega _1 ^* \Omega_2}{2\Delta} = \frac{\Gamma_1 \Gamma_2}{4 \Delta} \sqrt{\frac{I_1}{I_{\textrm{sat},1}}\frac{I_2}{ I_{\textrm{sat},2}}}\,.
\end{equation}
Here~$I_j$ and~$I_{\textrm{sat},j}$ denote the intensity and the saturation intensity of the corresponding light beam, and~$\Gamma_j$ is the natural linewidth of the corresponding transition.
In this case we return to an effective two-level system where the probability
\begin{equation} \label{eq:probrabieff_12}
P_{\mathrm{e}} \left(\tau,\delta^{(2)},\Omega_{12}\right) = \frac{1}{2}\left(\frac{\Omega_{12}}{\Omega_{\mathrm{eff}}^{(2)}}\right)^2\left[1 - \mathrm{cos}\left(\Omega_{\mathrm{eff}}^{(2)} \tau\right)\right]
\end{equation}
to find the atom in the excited state $\ket{\mathrm{e}}$ now depends on the two-photon Rabi frequency $\Omega_{12}$, and the corresponding effective Rabi frequency
\begin{equation} \label{eq:rabieff_12}
\Omega_{\mathrm{eff}}^{(2)}=\sqrt{\left|\Omega_{12}\right|^2+\left(\delta^{(2)}\right)^2}\,,
\end{equation}
determined by the two-photon detuning $\delta^{(2)}$.
A fundamental loss mechanism of the coherent dynamics is spontaneous emission which is fortunately suppressed due to the fact that the two-photon transition is off-resonant by the detuning~$\Delta$ relative to the intermediate state~$\ket{\mathrm{i}}$. The rate~$R_{\mathrm{sp}}$ of residual spontaneous decay then reads
\begin{equation}\label{eq:spontan}
R_{\mathrm{sp}} \equiv\frac{\sqrt{\Gamma_1\Gamma_2}}{2\Delta}\left|\Omega_{12}\right|=\frac{(\Gamma_1\Gamma_2)^{\frac{3}{2}}}{8\Delta^2}
\sqrt{\frac{I_1}{I_{\mathrm{sat},1}}\frac{I_2}{I_{\mathrm{sat},2}}}\,.
\end{equation}
For sufficiently large detuning it is possible to suppress spontaneous emission almost completely.
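Indeed, a comparison of Eq.~\eqref{eq:spontan} with Eq.~\eqref{eq:twophotonrabi} yields the ratio $R_{\mathrm{sp}}/\left|\Omega_{12}\right| = \sqrt{\Gamma_1\Gamma_2}/(2\Delta)$, that is the expected number of spontaneously scattered photons per Rabi cycle scales as $1/\Delta$, provided the intensities are increased proportionally to $\Delta$ so as to keep $\Omega_{12}$ fixed.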
Moreover, the presence of an off-resonant light field has an influence on the atomic energy structure. Indeed, the one-photon ac-Stark shift causes an energy shift
\begin{equation}\label{eq:acstark}
\delta E_j^{\mathrm{ac}}=-\frac{\hbar \left|\Omega_j\right|^2}{4\Delta_j}
\end{equation}
of the undisturbed atomic states~$\ket{\mathrm{g}} (j=1)$ and~$\ket{\mathrm{e}} (j=2)$, determined by the detuning~$\Delta_j$, and the Rabi frequency~$\Omega_j$ of the corresponding transition~\cite{Foot} with $\Delta_1\equiv \Delta$ and $\Delta_2\equiv\Delta+\delta^{(2)}$.
Furthermore, for high-precision measurements such as the ones discussed in these lectures also the two-photon light shift has to be considered, which depends on the details of the internal atomic structure, as well as on the polarization of the light fields~\cite{Berg14,Schlippert14,Giese16PRA}.
Finally, we consider two special cases of the Rabi dynamics given by Eq.~\eqref{eq:probrabieff_12}, namely \quot{$\pi/2$}- and \quot{$\pi$}-pulses, which are determined by their enclosed pulse areas. For fixed laser intensities~$I_1$ and~$I_2$, we define these pulses by their specific interaction times~$\tau_{\pi/2}\equiv\pi/\left(2\,\Omega_{\mathrm{eff}}^{(2)}\right)$ and~$\tau_{\pi}\equiv\pi/\Omega_{\mathrm{eff}} ^{(2)}$, leading us to the probabilities
\begin{equation}
P_{\mathrm{e}}\left(\tau_{\pi/2},\delta^{(2)},\Omega_{12}\right)=\frac{1}{2}\left(\frac{\Omega_{12}}{\Omega_{\mathrm{eff}}^{(2)}}\right)^2
\end{equation}
and
\begin{equation}
P_{\mathrm{e}}\left(\tau_{\pi},\delta^{(2)},\Omega_{12}\right)=\left(\frac{\Omega_{12}}{\Omega_{\mathrm{eff}}^{(2)}}\right)^2\,,
\end{equation}
where we have made use of Eq.~\eqref{eq:probrabieff_12}.
In the ideal case with $\delta^{(2)}=0$ a $\pi/2$-pulse creates an equally weighted superposition of $\ket{\mathrm{g}}$ and $\ket{\mathrm{e}}$ when starting in one of the two working states. In contrast, a $\pi$-pulse inverts the states $\ket{\mathrm{g}}$ and $\ket{\mathrm{e}}$. Due to their functions in an interferometer, these pulses are called \quot{beam splitter} and \quot{mirror} for atoms, in complete analogy to their counterparts in optics for light beams.
\subsubsection{Bragg and Raman diffraction}
\label{sec:Bragg_Raman_diffraction}
So far we have only discussed the dynamics of {\it internal} atomic states induced by the atom-light interaction. However, the use of an atom interferometer for inertial sensing requires sensitivity to {\it external} degrees of freedom, in particular, the atomic center-of-mass motion relative to a reference frame. We satisfy this requirement when we recall that during the atom-light interaction the electromagnetic field does not only transfer energy, but also momentum to the atoms.
We now consider a two-photon process induced by two light fields with the wave vectors $\vec{k}_1$ and $\vec{k}_2$. In the case of counter-propagating fields, that is~$\vec{k}\equiv\vec{k}_1\approx-\vec{k}_2$, the momentum transfer between atom and field is maximal and approximately $2\hbar \vec{k}$. For co-propagating beams, that is~$\vec{k}_1\approx\vec{k}_2$, the momentum transfer is minimal and almost zero.
Furthermore, due to the fact that the dispersion relation of a free particle is parabolic, a non-zero momentum~$\vec{p}_0$ of the atom and the resulting frequency shift have to be taken into account. Indeed, any offset~$\vec{p}_0$ results in a Doppler shift
\begin{equation} \label{eq:dopplerfrequency}
\omega_{\mathrm{D}} \equiv \frac{\vec{p}_0\cdot {\bf k}_{\mathrm{eff}}}{m}
\end{equation}
of the transition frequencies due to the motion of the atoms of mass~$m$ relative to the light fields. It vanishes only for atoms at rest.
These considerations also lead us to the definition
\begin{equation} \label{eq:recoildoppler}
\omega_{\mathrm{rec}} \equiv \frac{\hbar \left|{\bf k}_{\mathrm{eff}}\right|^2}{2m}
\end{equation}
of the recoil frequency associated with the light fields. Here, and in Eq.~\eqref{eq:dopplerfrequency} we have introduced the notation~${\bf k}_{\mathrm{eff}}\equiv\vec{k}_1-\vec{k}_2$ to identify the effective momentum transfer~$\hbar {\bf k}_{\mathrm{eff}}$ during a two-photon process.
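As a rough estimate, assuming the D$_2$ line of \textsuperscript{87}Rb with $\lambda \approx 780$\,nm and counter-propagating beams, that is $\left|{\bf k}_{\mathrm{eff}}\right| \approx 4\pi/\lambda$, Eq.~\eqref{eq:recoildoppler} yields $\omega_{\mathrm{rec}} \approx 2\pi\cdot 15$\,kHz, which is consistent with the spacing of the Bragg orders quoted in Sec.~\ref{sec:Multi-photon_Bragg}.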
In the general case of an $n$\textsuperscript{th}-order transition and counter-propagating light fields with $\vec{k}\equiv\vec{k}_1\approx-\vec{k}_2$, the total momentum transfer
\begin{equation}\label{eq:keff}
n \hbar {\bf k}_{\mathrm{eff}}\equiv n \hbar\left(\vec{k}_{\text{1}}-\vec{k}_{\text{2}}\right)\approx 2n\hbar \vec{k}
\end{equation}
is the sum of the momenta transferred by $n$ photon pairs and can achieve large values.
In a quantum mechanical treatment of the atomic center-of-mass motion, an $n$\textsuperscript{th}-order two-photon transition couples the momentum eigenstates $\ket{\vec{p}_0}$, corresponding to the momentum~$\vec{p}_0$ before the interaction, and $\ket{\vec{p}_n}$, representing the momentum $\vec{p}_n\equiv\vec{p}_0+n\hbar{\bf k}_{\mathrm{eff}}$ after the interaction. Based on the considerations of Sec.~\ref{sec:Rabi_oscillations}, we also have to include a coupling to the internal states. However, since a change of a momentum eigenstate does not necessarily require a change of an internal atomic state, different types of diffraction are possible~\cite{Borde89PRA}.
We call an atomic scattering process a \quot{Raman}-type diffraction if the atom-light interaction couples two different internal states of the atom, whereas we call it a \quot{Bragg}-type diffraction if the internal state is unchanged. More sophisticated schemes employ double Raman diffraction \cite{Leveque09PRL,Malossi10PRA}, or double Bragg diffraction \cite{Giese13PRA,Ahlers16PRL}. They lead to a larger momentum transfer, and hence provide us with an increased sensitivity as well as the elimination of certain noise sources due to the symmetric structure of the diffraction process.
\subsubsection{Multi-photon coupling by Bragg diffraction}
\label{sec:Multi-photon_Bragg}
Bragg diffraction of a matter wave is defined in complete analogy to the diffraction of an electromagnetic field by a crystal~\cite{Bragg12Nature,Friedrich13AP}. Here the roles of light and matter are interchanged.
Indeed, when an atomic beam or ensemble is diffracted from two counter-propagating light fields of the frequencies $\omega_1$ and $\omega_2$ the Bragg condition reads
\begin{equation}\label{eq:braggcondition}
\delta E_{\mathrm{kin}} = n \hbar\delta\omega\equiv n\hbar (\omega_1-\omega_2)\,,
\end{equation}
where
\begin{equation}\label{eq:delta_Ekin}
\delta E_{\mathrm{kin}}\equiv \frac{\left(\vec{p}_0+n\hbar \vec{k}_\mathrm{eff}\right)^2}{2m}-\frac{\vec{p}_0^2}{2m}
\end{equation}
is the change of the kinetic energy of the atom associated with its change in momentum.
For the case of first-order diffraction, $n=1$, Eqs.~\eqref{eq:braggcondition} and~\eqref{eq:delta_Ekin} give a very intuitive picture of the scattering process. An atom scatters two photons with momenta~$\hbar\vec{k}_1$ and~$\hbar\vec{k}_2$ from two traveling light waves if their energy difference~$\hbar\delta\omega$ matches the energy~$\delta E_{\mathrm{kin}}$ an atom has to absorb to climb the kinetic energy parabola, depicted in Fig.~\ref{fig:higherorderbragg}(a).
When we use Eqs.~\eqref{eq:braggcondition} and~\eqref{eq:delta_Ekin} and the definitions Eqs.~\eqref{eq:dopplerfrequency} and~\eqref{eq:recoildoppler} of the Doppler shift~$\omega_{\mathrm{D}}$ and the photon recoil~$\omega_{\mathrm{rec}}$, we arrive at the condition
\begin{equation}\label{eq:braggfrequency}
\omega_1-\omega_2 = n\, \omega_{\mathrm{rec}} + \omega_{\mathrm{D}}\,,
\end{equation}
for the frequency difference which drives the diffraction process of the~$n$-th order.
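Indeed, when we expand the square in Eq.~\eqref{eq:delta_Ekin} we find
\begin{equation*}
\delta E_{\mathrm{kin}} = n\hbar\,\frac{\vec{p}_0\cdot{\bf k}_{\mathrm{eff}}}{m} + n^2\hbar\,\frac{\hbar\left|{\bf k}_{\mathrm{eff}}\right|^2}{2m} = n\hbar\left(\omega_{\mathrm{D}} + n\,\omega_{\mathrm{rec}}\right)\,,
\end{equation*}
and a division of the Bragg condition, Eq.~\eqref{eq:braggcondition}, by $n\hbar$ reproduces Eq.~\eqref{eq:braggfrequency}.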
The case of scattering more than one photon pair at a time, $n>1$, leads to the population of higher-order momentum states~$\ket{\vec{p}_n}$ with $\vec{p}_n =\vec{p}_0+n\hbar\vec{k}_{\mathrm{eff}}$, since ideally the intermediate momentum states are off resonance and remain unpopulated, as shown in Fig.~\ref{fig:higherorderbragg}(a).
\begin{figure}[h]
\includegraphics[width=\linewidth]{higherorderbragg.pdf}
\caption{Momentum transfer of~$2n\hbar k$ in $n$\textsuperscript{th}-order Bragg diffraction represented by the level scheme~(a) and density plots~(b) measured experimentally for $n=1,2,3,4$ and $5$. This figure is an adaptation of Figs.~4.7 and 5.12 in Ref.~\cite{Abend17}.}
\label{fig:higherorderbragg}
\end{figure}
In Fig.~\ref{fig:higherorderbragg}(b) we present density plots for multi-photon Bragg diffraction with~$n=1,2,3,4$ and $5$, and note that the simultaneous scattering of $n$ pairs of photons has been realized experimentally up to $n=12$. This achievement allows us to construct a beam splitter~\cite{Mueller08PRL} with a momentum transfer of~$24\,\hbar k$, where $k=\left|\vec{k}\right|$. For \textsuperscript{87}Rb the spacing between subsequent Bragg orders is only 15\,kHz, which can easily lead to the population of multiple orders.
For the $n$\textsuperscript{th}-order Bragg transition the calculation of the transition probability $P_n$ requires considerations~\cite{Mueller08PRA} that go beyond the two-state assumption. The Rabi frequency~$\Omega_{\mathrm{eff}}$, governed by the laser intensity~$I$, and the duration~$\tau$ of the atom-light interaction are the major ingredients. Indeed, the product of these parameters determines if a clean Rabi oscillation into a single momentum state is possible, or if multiple states are populated.
In the Bragg regime where a single order is dominantly populated, the generalized transition probability
\begin{equation}
P_{n}(\tau)\equiv\textnormal{sin}^2\left[\frac{1}{2} \int_0^\tau \diff\tau'\Omega_n(\tau') \right]
\end{equation}
from the initial momentum state $\ket{\vec{p}_0}$ to the $n$\textsuperscript{th}-order momentum state $\ket{\vec{p}_n}$ is determined by a new effective Rabi frequency $\Omega_n$, as discussed in detail in Ref.~\cite{Mueller08PRA}.
In order to perform an $n$\textsuperscript{th}-order transition an increase in the laser power is required. An approximate solution obtained in Ref.~\cite{Mueller08PRA} presents conditions for a so-called \quot{quasi-Bragg} regime, in which short and intense pulses populate a single higher-order momentum state with a probability significantly larger than that of all other orders.
In order to achieve a large momentum transfer without the increase of laser power, one can use sequential pulses. Indeed, the same momentum transfer as in the case of a single $n$\textsuperscript{th}-order transition may be achieved at the cost of a larger total time of the beam splitting process, and a more complex waveform.
For a sequence consisting of $n_p$ sequential pulses the resonance condition to drive the $n_s$-th sequential transition with an $n$\textsuperscript{th}-order Bragg pulse reads
\begin{equation}\label{eq:seqtrans}
\omega_1-\omega_2=(2n_s-1)n\, \omega_{\mathrm{rec}}+\omega_{\mathrm{D}}\,,
\end{equation}
with $n_s=1,\ldots,n_p$. We emphasize that the frequency difference $\omega_1-\omega_2$ has to be adjusted for each sequential transition. In principle, there is no restriction on combining any number $n_p$ of sequential pulses with any achievable Bragg order $n$, to obtain a particular transfer efficiency. For example, a sequence of beam splitters transferring $6\, \hbar k$ each, leads to a total splitting~\cite{Chiow11PRL} of $102\, \hbar k$, or a sequence of first-order transitions results~\cite{Kovachy15Nature} in $90\, \hbar k$.
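We note that, according to Eq.~\eqref{eq:seqtrans}, the resonant frequency differences of two successive pulses of order $n$ differ by $2n\,\omega_{\mathrm{rec}}$, which reflects the Doppler shift the atoms acquire with each momentum kick of $2n\hbar k$.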
\subsubsection{Influence of atom cloud and beam size}
\label{sec:FiniteSize}
In the single-atom picture, or in the case of a highly monochromatic ensemble, one can always find, for fixed laser intensities~$I_1$, $I_2$ and a common detuning~$\Delta$, an interaction time~$\tau$ for which the amplitude of the transition probability given by Eqs. \eqref{eq:probrabieff_12} and \eqref{eq:rabieff_12} is unity. Here we assume that the two-photon detuning~$\delta^{(2)}$ vanishes. In this case the beam-splitter efficiency is only limited by the loss of atoms due to spontaneous decay with the rate~$R_{\mathrm{sp}}$, expressed by Eq. \eqref{eq:spontan}.
However, a non-zero temperature~$T_\mathrm{a}$ of the atoms, and therefore a spread~$\sigma_v$ in the velocity~$\vec{v}$ of the atomic ensemble, needs to be taken into account, as it induces a broadening of the transition frequency due to the Doppler shift $\vec{k}_{\mathrm{eff}}\cdot \vec{v}$, Eq. \eqref{eq:dopplerfrequency}. Even for BECs, the beam-splitter efficiency may change drastically when we employ, for example, higher-order Bragg diffraction~\cite{Szigeti12NJP}.
We estimate the influence of such a velocity distribution by the use of a Gaussian distribution
\begin{equation}\label{eq:3Dgaussian}
f_{3\mathrm{D}}(\vec{v})\equiv\frac{1}{(2\pi)^{3/2}\sigma_v^3}\mathrm{exp}\left[- \frac{(\vec{v}-\vec{v}_0)^2}{2\sigma_v^2}\right]
\end{equation}
of velocities $\vec{v}$ across the atomic ensemble which is isotropic in all three spatial dimensions and has a width
\begin{equation}
\sigma_v \equiv \sqrt{\frac{k_{\mathrm{B}} T_\mathrm{a}}{m}}
\end{equation}
determined by the temperature~$T_\mathrm{a}$ and the Boltzmann constant~$k_{\mathrm{B}}$. Here $\vec{v}_0$ is an arbitrary offset velocity.
The total probability~$P_{\mathrm{e}}$ to find the atom in the excited state $\ket{\mathrm{e}}$ then reads
\begin{equation}\label{eq:velocitydependence}
P_{\mathrm{e}}(\tau)\equiv \iiint \diff^3 v\, f_{3\mathrm{D}}(\vec{v})P_{\mathrm{e}}\left[\tau,\delta^{(2)}(\vec{v}),\Omega_{12}\right] \,,
\end{equation}
where $P_{\mathrm{e}}\left[\tau,\delta^{(2)}(\vec{v}),\Omega_{12}\right]$ is the excitation probability given by Eq.~\eqref{eq:probrabieff_12} for the atomic velocity~$\vec{v}$, and $\textrm{d}^3 v$ is the three-dimensional volume element in velocity space.
In order to evaluate the integral over~$\vec{v}$, we recall that according to Eq.~\eqref{eq:probrabieff_12} the probability $P_{\mathrm{e}}\left[\tau,\delta^{(2)}(\vec{v}),\Omega_{12}\right]$ is determined by the two-photon detuning~$\delta^{(2)}$ which enters into the effective Rabi frequency $\Omega_{\mathrm{eff}}^{(2)}$ given by Eq.~\eqref{eq:rabieff_12}. We assume here that the two-photon detuning~$\delta^{(2)}\equiv \vec{k}_{\mathrm{eff}}\cdot \vec{v}$ is solely induced by the Doppler shift, and hence depends only on the projection of the velocity~$\vec{v}$ onto the wave vector~${\bf k}_{\mathrm{eff}}$.
Thus, the three-dimensional integral, Eq. \eqref{eq:velocitydependence}, reduces to a one-dimensional integral over the $x$-component~$v_x$ of the velocity $\vec{v}$ if we align the $x$-axis with the direction of~${\bf k}_{\mathrm{eff}}$, giving rise to
\begin{equation} \label{eq:1Dprop}
P_{\mathrm{e}}(\tau) = \int\diff v_x\, f_{1\mathrm{D}}(v_x)P_{\mathrm{e}}\left[\tau,\delta^{(2)}(v_x),\Omega_{12} \right]
\end{equation}
with the one-dimensional Gaussian distribution
\begin{equation}
f_{1\mathrm{D}}(v_x)\equiv\frac{1}{(2\pi)^{1/2}\sigma_v}\mathrm{exp}\left[- \frac{(v_x-v_{x,0})^2}{2\sigma_v^2}\right]\,.
\end{equation}
We emphasize that the $x$-component~$v_{x,0}$ of the offset velocity~$\vec{v}_0$ can be compensated by an adjustment of the frequency difference $\delta\omega$ of the two light fields.
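To get a feeling for the orders of magnitude, we consider as an example \textsuperscript{87}Rb at a molasses temperature of $T_\mathrm{a}=1\,\upmu$K, corresponding to $\sigma_v\approx 1$\,cm/s, which for counter-propagating beams at $\lambda\approx 780$\,nm translates into a Doppler width $\left|{\bf k}_{\mathrm{eff}}\right|\sigma_v\approx 2\pi\cdot 25$\,kHz. Since this width exceeds the 15\,kHz spacing between neighbouring Bragg orders, such an ensemble cannot be diffracted into a single order with high efficiency, which illustrates the advantage of sub-recoil sources.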
Unfortunately, the velocity spread is not the only effect we need to account for. Indeed, the atomic ensemble also has a finite size and interacts with two laser beams having, for example, Gaussian intensity profiles
\begin{equation}
I_j(y,z)\equiv I_{0,j}\exp\left[-\frac{2(y^2+z^2)}{w_{j}^2}\right]
\end{equation}
in the $y-z$-plane, where~$I_{0,j}$ is the amplitude and~$w_{j}$ is the radius of the beam for $j=1,2$.
The finite size of the laser beams causes a spatial dependence~$\Omega_{12}=\Omega_{12}(\vec{r})$ of the Rabi frequency due to the dependence~$I_j=I_j(\vec{r})$ of the intensity on the position $\vec{r}$ of the atom within the two beams. As a result, for two collimated laser beams aligned along the $x$-direction, the excitation probability
\begin{equation}
P_{\mathrm{e}}\left[\tau,\delta^{(2)},\Omega_{12}(\vec{r})\right]=P_{\mathrm{e}}\left[\tau,\delta^{(2)},\Omega_{12}(y,z)\right]
\end{equation}
only depends on the coordinates $y$ and $z$ perpendicular to~${\bf k}_{\mathrm{eff}}$.
Moreover, we model the spatial transverse distribution of the atomic ensemble by a Gaussian
\begin{equation}\label{eq:spatialdistribution}
s_{2\textrm{D}}(y,z)\equiv\frac{1}{2\pi\sigma_y^2}\mathrm{exp}\left(-\frac{y^2+z^2}{2\sigma_y^2}\right)
\end{equation}
of width~$\sigma_y=\sigma_z$ centered at the maximum intensity of the beams ($y=z=0$) in the $y-z$-plane.
When we combine Eqs.~\eqref{eq:1Dprop} and \eqref{eq:spatialdistribution}, the total probability in three dimensions reads
\begin{equation}\label{eq:3Dprop}
P_{\textrm{e}}(\tau) \equiv \iint \diff y\diff z \int \diff v_x \,s_{2\mathrm{D}}(y,z) f_{1\mathrm{D}}(v_x) P_{\mathrm{e}}\left[\tau,\delta^{(2)}(v_x),\Omega_{12}(y,z)\right],
\end{equation}
and is determined by the widths~$\sigma_v$, $\sigma_y$, the laser beam radii~$w_{j}$ as well as the Rabi frequency~$\Omega_{\textrm{12}}$ depending on the maximum laser intensity~$I_{0,j}$.
Usually, the velocity distribution~$f_{1\mathrm{D}}$ can be assumed to be time-independent if no external force is acting. However, the spatial distribution~$s_{2\textrm{D}}$ is always a function of time, since the cloud spreads due to the non-vanishing width~$\sigma_v$ of the velocity distribution. Hence, finding the three-dimensional probability~$P_{\mathrm{e}}$ is more complicated if these assumptions are not valid. Even in the case of perfect monochromatic light fields, the efficiency of the coherent processes is fundamentally limited by the finite size and velocity of the cloud of atoms~\cite{Szigeti12NJP}.
\subsection{Optical lattices}
\label{sec:Optical_lattice}
In the preceding sections we have discussed the possibility of changing the atomic momentum with the help of the atom-light interaction leading to Bragg and Raman diffraction. However, there also exists the option of a {\it sequential} momentum transfer in an optical lattice. In particular, a large effect occurs due to Bloch oscillations in an accelerated optical lattice \cite{Dahan96PRL,Peik97PRA,Wilkinson96PRL,Raizen97PT}.
\subsubsection{Bloch theorem}
In one space dimension we can obtain an optical lattice by retroreflecting a light field, propagating in the $x$-direction with the wave vector $\vec{k}$, from a mirror. This process leads to the formation of a standing light wave, and hence, to an effective periodic potential
\begin{equation}\label{eq:standingwavepotential} V(x)\equiv 4 V_{\mathrm{dip}}\ \mathrm{sin}^2\left(k x\right) = \frac{1}{2} V_0 \left[1-\cos\left(2 k x\right)\right]
\end{equation}
for atoms with the amplitude~$V_0=4 V_{\mathrm{dip}}$, where $V_{\mathrm{dip}}$ is the magnitude of the atom-light interaction~\cite{Grimm00AAMOP}. The factor of four results from the amplification of the electric field~$\vec{E}$ by a factor of two due to the retro-reflection,
and the quadratic scaling of the light field intensity~$I \propto |\vec{E}|^2$.
The potential given by Eq.~\eqref{eq:standingwavepotential} is periodic with period~$d\equiv\pi/k$, where $k\equiv\left|\vec{k}\right|\equiv 2\pi/\lambda$ denotes the wave number, and the amplitude
\begin{equation}\label{eq:latticedepth}
V_0\equiv \frac{\hbar \Gamma^2}{2\Delta}\frac{I}{I_{\textrm{sat}}}
\end{equation}
of the potential~$V$ can be expressed in terms of the intensity~$I$, the saturation intensity~$I_{\textrm{sat}}$, the natural linewidth~$\Gamma$, and the detuning~$\Delta$. Here we have used Eq.~\eqref{eq:rabi} and Eq.~\eqref{eq:acstark} for the ac-Stark shift.
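As a numerical illustration of Eq.~\eqref{eq:latticedepth}, the following minimal Python sketch evaluates the lattice depth for the $^{87}$Rb D$_2$ line; the intensity and detuning are assumptions chosen for illustration, and the depth is expressed in single-photon recoil energies $E_{\mathrm{r}}=\hbar^2k^2/(2m)$ (convention assumed here).
\begin{verbatim}
import numpy as np

hbar  = 1.054571817e-34          # [J s]
m     = 1.4431609e-25            # 87Rb mass [kg]
lam   = 780.241e-9               # wavelength [m]
k     = 2*np.pi / lam
Gamma = 2*np.pi * 6.07e6         # natural linewidth [rad/s]
I_sat = 16.7                     # saturation intensity [W/m^2]
Delta = 2*np.pi * 100e9          # detuning [rad/s] (assumption)
I     = 1.0e4                    # intensity [W/m^2] (assumption)

V0    = hbar * Gamma**2/(2*Delta) * I/I_sat   # Eq. (latticedepth)
E_rec = hbar**2 * k**2 / (2*m)                # recoil energy

print(f"V0 = {V0:.2e} J = {V0/E_rec:.1f} recoil energies")
\end{verbatim}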
According to the Bloch theorem \cite{kittel05}, the wave function
\begin{equation}\label{eq:wavefunc_blochwave}
\psi_{\ell,q}(x)\equiv\textrm{e}^{\textrm{i} q x} u_{\ell,q}(x)
\end{equation}
of an atom in a periodic potential $V$, Eq.~\eqref{eq:standingwavepotential}, is the product of a plane wave~$\textrm{e}^{\textrm{i}qx}$ with the quasi-momentum~$\hbar q$, and an amplitude~$u_{\ell,q}(x)\equiv\langle x\vert u_{\ell,q}\rangle$ having the same period~$d$ as the original potential~$V$. Here~$\ell$ denotes the discrete band index.
The Bloch state $\ket{u_{\ell,q}}$ obeys the Schr\"odinger equation
\begin{equation} \label{eq:blochwave}
\left[\frac{(\hat{p}+\hbar q)^2}{2m} +V(\hat{x})\right]\ket{u_{\ell,q}} = E_\ell(q)\ket{u_{\ell,q}},
\end{equation}
where the corresponding quasi-energy
\begin{equation}\label{eq:quasi_energy}
E_\ell(q)=E_\ell\left(q+\frac{2\pi}{d}\right)
\end{equation}
has a period $2\pi/d=2k$ as a function of $q$. Therefore, following the convention of solid-state physics, the quasi-momentum $\hbar q$ can be restricted to the interval~$(-\pi\hbar/d,+\pi\hbar/d]=(-\hbar k,+\hbar k]$, that is the first Brillouin zone~\cite{kittel05}.
In order to work in this interval, it is natural to consider atomic ensembles with a narrow momentum distribution, that is $m\sigma_v\ll\hbar k$. As an example, both a BEC~\cite{Denschlag02JPB}, and a distribution of cold atoms prepared by a velocity filter in one dimension~\cite{Dahan96PRL,Peik97PRA} fulfill this condition.
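The band structure following from Eq.~\eqref{eq:blochwave} can be obtained by expanding the periodic amplitude $u_{\ell,q}$ in the truncated plane-wave basis $\mathrm{e}^{\mathrm{i}2kn x}$ with $n=-N,\dots,N$ and diagonalizing the resulting matrix. The following Python snippet is a minimal sketch of this standard procedure; the lattice depth is an illustrative assumption, and energies are measured in single-photon recoil units $E_{\mathrm{r}}=\hbar^2k^2/(2m)$.
\begin{verbatim}
import numpy as np

V0 = 10.0                  # lattice depth in units of E_r (assumption)
N  = 10                    # plane-wave cutoff
n  = np.arange(-N, N + 1)

def bands(q, n_bands=3):
    """Lowest Bloch bands at quasi-momentum q (units of hbar k)."""
    H = np.diag((q + 2*n)**2 + V0/2)        # kinetic + constant term
    off = -V0/4 * np.ones(2*N)              # cos(2kx) coupling
    H += np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:n_bands]

qs = np.linspace(-1.0, 1.0, 101)            # first Brillouin zone
E  = np.array([bands(q) for q in qs])
E_edge = bands(1.0)                         # edge of the zone
print(f"gap at the zone edge: {E_edge[1] - E_edge[0]:.2f} E_r")
print(f"lowest bandwidth: {E[:,0].max() - E[:,0].min():.3f} E_r")
\end{verbatim}
The band gap $\Delta E$ obtained in this way enters the adiabaticity and Landau-Zener estimates discussed below.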
\subsubsection{Bloch oscillations}
The Bloch states~$\ket{u_{\ell,q}}$ following from Eq.~\eqref{eq:blochwave} and describing an atom in an optical lattice are stationary. Therefore, dynamics only occurs if an additional force~$F$ is acting, which can be either an external force, such as gravity, or an acceleration of the lattice itself.
When an atom in an optical lattice is suddenly exposed to a spatially uniform force~$F$, the Bloch states~$\ket{u_{\ell,q}}$ are no longer eigenstates~\cite{Dahan96PRL,Peik97PRA} of the new Hamiltonian
\begin{equation} \label{eq:forceschroedinger}
\hat{H}_{F} \equiv \frac{\hat{p}^2}{2m} + V(\hat{x}) -F\hat{x}\,.
\end{equation}
Indeed, for a small value of $F$, such that there are no inter-band transitions and the adiabatic assumption is valid, the wave function $\psi_{\ell,q(0)}$, Eq. \eqref{eq:wavefunc_blochwave}, evolves into
\begin{equation}
\psi_{\ell,q(\tau)}(x)= \exp\left\{-\frac{\textrm{i}}{\hbar}\int_0^\tau E_\ell\left[q(t)\right]\diff t\right\} \textrm{e}^{\textrm{i}q(\tau)x} u_{\ell,q(\tau)}(x)\,,
\end{equation}
and, apart from a phase factor and a linear shift of the quasi-momentum
\begin{equation}\label{eq: time_dep_quasi_momentum}
\hbar q(\tau)\equiv\hbar q(0)+F\tau\,,
\end{equation}
preserves its original form.
Because~$\hbar q$ changes linearly in time $\tau$, the wave function~$\psi_{\ell,q(\tau)}$ has a temporal periodicity given by the Bloch period
\begin{equation}\label{eq:Bloch_period}
\tau_{\mathrm{Bloch}}\equiv\frac{2\pi\hbar}{|F|d}\,,
\end{equation}
which is the time after which the change $F\tau_{\mathrm{Bloch}}$ of the quasi-momentum is equal to the width $2\pi \hbar/d$ of the first Brillouin zone, displayed in Fig.~\ref{fig:blochoscillations}(a).
Moreover, since the quasi-energy $E_\ell=E_\ell(q)$, Eq.~\eqref{eq:quasi_energy}, is a periodic function with period~$2\pi/d$, the velocity
\begin{equation}
\langle v_\ell \rangle\left(q \right)\equiv\frac{1}{\hbar}\frac{\textrm{d} E_\ell(q)}{\textrm{d} q}
\end{equation}
of an atom in the Bloch wave $u_{\ell,q(\tau)}$ has the same periodicity.
Furthermore, according to Eq.~\eqref{eq: time_dep_quasi_momentum} the quasi-momentum~$\hbar q=\hbar q(\tau)$ is swept linearly in time~$\tau$. Hence, $\langle v_\ell \rangle\left[q(\tau) \right]$ is periodic in time with the Bloch period~$\tau_{\mathrm{Bloch}}$, Eq.~\eqref{eq:Bloch_period}. Its temporal average vanishes. This oscillation is called {\it Bloch oscillation} and Fig.~\ref{fig:blochoscillations}(a) shows its representation for the first Brillouin zone.
Bloch oscillations allow us to couple the momentum eigenstate~$\ket{p_0}$ of a free particle to the momentum eigenstate~$\ket{p_{n_p}=p_0+2 {n_p}\hbar k}$, by sequentially transferring $n_p$ times the momentum $2\hbar k$. The associated momentum transfer is the same as the one obtained when driving a sequence of $n_p$ first-order Bragg pulses as discussed in Sec.~\ref{sec:Multi-photon_Bragg}. For this purpose, $\ket{p_0}$ is adiabatically transferred to the lowest band of the Bloch lattice with $\ell=0$ by increasing the lattice depth $V_0$. In order to avoid inter-band transitions this transfer has to be adiabatic, that is the change $\diff V_0/\diff\tau$ of the potential amplitude~$V_0$, Eq.~\eqref{eq:latticedepth}, has to be much smaller than $\Delta E^2/\hbar$, where $\Delta E$ denotes the energy difference between the bands depicted in Fig.~\ref{fig:blochoscillations}(a).
Atoms in the fundamental band are then coherently accelerated by applying a linear chirp~$2\pi\alpha$ to the frequency difference~$\delta\omega\equiv\omega_1-\omega_2$ of the two counterpropagating light waves with frequencies~$\omega_1$ and~$\omega_2$,
that is~\cite{Peik97PRA}
\begin{equation}\label{eq:frequency_chirp}
\delta\omega(\tau)\equiv 2\pi\alpha \tau\,.
\end{equation}
This chirp is equivalent to the effective force
\begin{equation}\label{eq:bloch_force}
\left|F\right|=\frac{2\pi m}{k_\textrm{eff}} \left|\alpha\right|
\end{equation}
of Eq.~\eqref{eq:forceschroedinger} with~$k_{\textrm{eff}}\equiv 2 k$.
The accelerated lattice generated by this chirp is very well controllable and allows us to efficiently couple the momentum states~$\ket{p_0}$ and $\ket{p_{n_p}}$ by a sequential adiabatic transfer of the quasi-momentum~$\hbar q$ of the lattice to the atoms. Finally, the atoms are unloaded from the lattice by slowly decreasing the potential amplitude~$V_0$.
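As a rough numerical illustration of Eqs.~\eqref{eq:Bloch_period} and \eqref{eq:bloch_force}, the following sketch evaluates the Bloch period of $^{87}$Rb in a $780$\,nm lattice under gravity, and compares the chirp-induced force to the weight of the atom; the chirp rate is an assumption chosen for illustration.
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34
m    = 1.4431609e-25             # 87Rb mass [kg]
k    = 2*np.pi / 780.241e-9
d    = np.pi / k                 # lattice period
g    = 9.81

tau_B = 2*np.pi*hbar / (m*g*d)   # Eq. (Bloch_period) with |F| = m g
print(f"Bloch period under gravity: {tau_B*1e3:.2f} ms")

alpha = 25e6                     # chirp rate [Hz/s] (assumption)
F     = 2*np.pi*m*alpha / (2*k)  # Eq. (bloch_force), k_eff = 2k
print(f"effective force in units of m g: {F/(m*g):.2f}")
\end{verbatim}
For these values the chirp-induced force is comparable to the weight of the atom, as required to hold the atoms against gravity or to accelerate them.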
\subsubsection{Landau-Zener transitions}
In order to estimate the efficiency of driving Bloch oscillations and the associated momentum transfer, we have to take into account two main loss mechanisms: spontaneous emission and inter-band transitions.
Although the detuning~$\Delta$ is large, an appreciable fraction of atoms is still lost due to spontaneous emission. We characterize the surviving fraction of atoms after the short acceleration time~$\tau_{\mathrm{acc}}$ by the parameter
\begin{equation}\label{eq:spontanfraction}
\eta_{\mathrm{sp}}\equiv 1-R_{\mathrm{sp}}\tau_{\mathrm{acc}}\,,
\end{equation}
where $R_{\mathrm{sp}}$ is defined by Eq.~\eqref{eq:spontan}.
The second loss mechanism results from the fact that the lattice is not infinitely deep and the momentum transfer is not fast enough. These deficiencies give rise to inter-band transitions. Indeed, for a lattice generated by two counter-propagating light waves with the time-dependent frequency difference~$\delta\omega$ given by Eq.~\eqref{eq:frequency_chirp}, the chirp~$\alpha$ has to obey the adiabaticity criterion
\begin{equation}\label{eq:adiabticcrit}
\left|\alpha\right|\equiv\left|\frac{\diff}{\diff \tau}\frac{\delta\omega}{2\pi}\right| =n_p\frac{\omega_{\mathrm{rec}}}{\tau_{\mathrm{acc}}}\ll \frac{\Delta E^2}{\hbar^2}.
\end{equation}
Here~$\Delta E$ denotes the energy of the band gap shown in Fig.~\ref{fig:blochoscillations}(a) and we aim at transferring $n_p$ pairs of photons during the time~$\tau_{\mathrm{acc}}$.
We estimate the efficiency of the momentum transfer with the familiar Landau-Zener formula~\cite{Zener32}
\begin{equation}\label{eq:landauzener}
\eta_{\mathrm{LZ}} = \left[1- \exp\left(-\frac{\pi}{2}\frac{\Delta E^2}{\hbar^2\alpha}\right)\right]^{n_p} \,,
\end{equation}
which provides us with the surviving fraction of atoms for a given chirp rate~$\alpha$ and band gap energy~$\Delta E$. The power~$n_p$ in Eq.~\eqref{eq:landauzener} indicating the number of transitions reflects the fact that an inter-band transition may take place at each crossing of the Brillouin zone, that is whenever a photon pair is scattered.
The total fraction
\begin{equation}\label{eq:tot_fraction_surviving_atoms}
\eta_{\mathrm{tot}}\equiv \eta_{\mathrm{sp}}\eta_{\mathrm{LZ}}
\end{equation}
of surviving atoms after the acceleration is given by the product of $\eta_{\mathrm{sp}}$ and $\eta_{\mathrm{LZ}}$
defined by Eqs.~\eqref{eq:spontanfraction} and \eqref{eq:landauzener}.
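The loss budget of Eqs.~\eqref{eq:spontanfraction}--\eqref{eq:tot_fraction_surviving_atoms} can be evaluated with the following minimal Python sketch. The band gap, the spontaneous scattering rate, and the sweep rate, which we take as $\alpha=n_p\,\omega_{\mathrm{rec}}/\tau_{\mathrm{acc}}$ in accordance with Eq.~\eqref{eq:adiabticcrit}, are illustrative assumptions.
\begin{verbatim}
import numpy as np

hbar  = 1.054571817e-34
m     = 1.4431609e-25
k     = 2*np.pi / 780.241e-9
w_rec = hbar*(2*k)**2 / (2*m)    # recoil frequency [rad/s]

n_p     = 50                     # photon pairs (Delta p = 100 hbar k)
tau_acc = 1e-3                   # acceleration time [s]
alpha   = n_p * w_rec / tau_acc  # sweep rate, Eq. (adiabticcrit)
dE      = 3.0 * hbar * w_rec     # band gap [J] (assumption)
R_sp    = 10.0                   # scattering rate [1/s] (assumption)

eta_LZ = (1 - np.exp(-np.pi/2 * dE**2/(hbar**2*alpha)))**n_p
eta_sp = 1 - R_sp * tau_acc
print(f"eta_sp = {eta_sp:.4f}, eta_LZ = {eta_LZ:.6f}, "
      f"eta_tot = {eta_sp*eta_LZ:.4f}")
\end{verbatim}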
\begin{figure}[h]
\includegraphics[width=\linewidth]{blochoscillations.pdf}
\caption{Bloch oscillations in the first Brillouin zone (a) caused by a force $F$ acting on the center-of-mass motion of the atoms, and (b) transfer efficiency $\eta_{\mathrm{tot}}$ as a function of the lattice depth~$V_0$ for different momentum transfers~$\Delta p$. When the force is weak enough to avoid inter-band transitions, the atoms undergo an adiabatic acceleration for the time~$\tau_{\mathrm{acc}}=1$\,ms while they cross the individual Brillouin zones. Up to~$\Delta p=100\,\hbar k$ the fraction~$\eta_{\mathrm{tot}}$ of atoms remaining in the target momentum state
can be held above 0.9. The solid lines are given by the product~$\eta_{\mathrm{tot}}=\eta_{\mathrm{sp}}\eta_{\mathrm{LZ}}$ normalized to the measured maximum transfer efficiency, where $\eta_{\mathrm{sp}}$ and $\eta_{\mathrm{LZ}}$ are defined by Eqs.~\eqref{eq:spontanfraction} and \eqref{eq:landauzener}, respectively. This figure is an adaptation of Figs.~4.11 and 5.23 in Ref.~\cite{Abend17}.}
\label{fig:blochoscillations}
\end{figure}
In Fig.~\ref{fig:blochoscillations}(b) we show by dots the measured efficiencies~$\eta_{\textrm{tot}}$ to transfer the momentum $\Delta p$ as a function of the lattice depth~$V_0$ for a fixed acceleration period~$\tau_{\mathrm{acc}}=1$\,ms. The solid lines are based on Eq.~\eqref{eq:tot_fraction_surviving_atoms} normalized to the measured maximum transfer efficiency. For~$\Delta p=20\,\hbar k$ and 200\,$\hbar k$ the solid line fits almost perfectly. However, for the data points with~$\Delta p=50\,\hbar k$ and 100\,$\hbar k$, residual resonant tunneling is visible, which is not taken into account in the Landau-Zener formula, Eq.~\eqref{eq:landauzener}.
In principle, there are no fundamental limits on the amount~$\Delta p$ of momentum that can be transferred by Bloch oscillations during a fixed time; the limits are purely technical.
However, a limitation that is not easy to overcome is the spontaneous emission characterized by the rate~$R_{\mathrm{sp}}$, given by Eq.~\eqref{eq:spontan}. It effectively reduces the total fraction~$\eta_{\textrm{tot}}$ of atoms, since the laser detuning~$\Delta$ and the laser power~$P$ cannot be increased arbitrarily. For a laser detuning $\Delta=100$\,GHz, even at the largest lattice depth $V_0=23\, \hbar\omega_{\mathrm{rec}}$, the relative contribution of spontaneous emission still stays at the few percent level. Up to an acceleration of 100\,$\hbar k$/ms, the transfer efficiency per transferred photon momentum~$\hbar k$ can be held above $0.999$. However, for larger values of the acceleration, the relative transfer efficiency starts to decrease due to a violation of the adiabaticity criterion, Eq.~\eqref{eq:adiabticcrit}.
\subsection{Mach-Zehnder interferometer for gravity measurements}
\label{sec:Mach_Zehnder}
Measurements of gravity with atoms typically employ a MZI~\cite{Kasevich91PRL,Kasevich92APB,Peters99Nature,Peters01Metrologia,Borde01CRP,Storey,Schleich_NJP2013} which is analogous to its optical counterpart and consists of three elements: two beam splitters and one mirror.
\subsubsection{Set-up}
The first beam splitter generates a coherent superposition of two different momentum states~$\ket{\vec{p}_0}$ and $\ket{\vec{p}_1}$ of the atomic center-of-mass motion. This superposition results in two spatially separated trajectories associated with each of the two momentum states, as depicted in Fig.~\ref{fig:machzehnder}(a). After a time~$T$, a mirror exchanges these two momentum states, leading to a redirection of the trajectories. Finally, after a total duration of~$2T$, the two trajectories overlap again, and the second beam splitter is applied.
As discussed in Sec.~\ref{sec:Rabi_oscillations} we describe beam splitters and mirrors realized by the atom-light interaction in the Rabi formalism, and the corresponding pulse sequence for the MZI reads $\pi/2-\pi-\pi/2$. Here the two beam splitters ($\pi/2$-pulse) are separated from the mirror ($\pi$-pulse) by~$T$. Usually, the momentum transfer $\hbar {\bf k}_{\mathrm{eff}}$ induced by a beam splitter or mirror is parallel or antiparallel to the gravitational acceleration $\boldsymbol{g}$ as shown in Fig.~\ref{fig:machzehnder}(a).
\begin{figure}[h]
\includegraphics[width=\linewidth]{machzehnder.pdf}
\caption{Mach-Zehnder interferometer (MZI) for atoms. The spacetime diagram (a) shows the atomic trajectories due to a three-pulse sequence consisting of (i) an initial $\pi/2$-pulse to prepare a superposition of the two different momentum states $\ket{p_0}$ and $\ket{p_1}$ of the atomic center-of-mass motion, (ii) a $\pi$-pulse to exchange the imprinted momenta, and (iii) a final $\pi/2$-pulse to obtain the interference in the atomic population in an exit port. In (b) we depict a fringe scan for an adjusted laser phase~$\Delta\phi^{\mathrm{laser}}$ in the interval~$[0,2\pi]$. Figure (a) is reproduced from Fig.~4.15(b) in Ref.~\cite{Abend17}.}
\label{fig:machzehnder}
\end{figure}
The interference signal of the interferometer is determined by the atomic population
\begin{equation}\label{eq: interferometer_output_probability}
P (\Delta \phi)=\frac{1}{2}\left[1-C \cos(\Delta \phi)\right]
\end{equation}
in the momentum state~$\ket{\vec{p}_1}$ after the final beam splitter, where $C$ and $\Delta \phi$ are the contrast and the total phase shift, respectively.
In the remainder of these lectures we consider only a perfectly closed interferometer, where the two wave packets propagating along the two trajectories perfectly overlap after the total interferometer time~$2T$. In this case we obtain the maximal contrast $C=1$.
\subsubsection{Contributions to phase shift}
The total phase shift~$\Delta \phi$ of the MZI reads~\cite{KLEINERT20151,Louchet11NJP}
\begin{equation}\label{eq:ouputphase}
\Delta \phi=\Delta \phi^{\mathrm{laser}}+\Delta \phi^{\mathrm{ac}}+\Delta \phi^{\mathrm{2ph}} + \Delta \phi^{\mathrm{inert}}\,,
\end{equation}
where the laser phase
\begin{equation}\label{eq:laserphase}
\Delta \phi^{\mathrm{laser}}\equiv\phi_1-2\phi_2+\phi_3
\end{equation}
is determined by the laser phases~$\phi_1$, $\phi_2$, and $\phi_3$ imprinted on the atom wave during the atom-light interaction due to the first, second, and third pulse. The phase~$\Delta\phi^{\mathrm{laser}}$ is of the form of a discrete second derivative. Unless~$\phi_3$ is used to modulate the signal, as depicted in Fig.~\ref{fig:machzehnder}(b), the laser phase~$\Delta\phi^{\mathrm{laser}}$ vanishes.
The single- and two-photon light shifts~$\Delta \phi^{\mathrm{ac}}$ and~$\Delta \phi^{\mathrm{2ph}}$ may lead to an offset shift, which in the first order depends on the difference in phase imprinted during the first and last pulse. In contrast to Raman diffraction, where the ratio of the intensities of the two frequency components needs to be properly adjusted~\cite{Berg14,Schlippert14}, the phase $\Delta\phi^{\mathrm{ac}}$ can be intrinsically suppressed in interferometers based on Bragg diffraction~\cite{Giese16PRA}.
The two leading-order contributions to the phase shift
\begin{equation}
\Delta \phi^{\mathrm{inert}}\equiv\Delta \phi^{\mathrm{grav}}+\Delta \phi^{\mathrm{rot}}
\end{equation}
containing inertial effects, originate from the gravitational acceleration~$\boldsymbol{g}$
\begin{equation}\label{eq:phasegravity}
\Delta \phi^{\mathrm{grav}} \equiv\vec{k}_{\mathrm{eff}}\cdot\boldsymbol{g} \,T^2,
\end{equation}
and from the rotation of the Earth with frequency~$\vec{\Omega}_{\mathrm{E}}$
\begin{equation}\label{eq:phasesagnac}
\Delta \phi^{\mathrm{rot}} \equiv 2 \vec{k}_{\mathrm{eff}}\cdot(\vec{\Omega}_{\mathrm{E}} \times \vec{v}_0)T^2,
\end{equation}
representing the Sagnac effect.
Here~$\vec{v}_0$ is the velocity of the atoms before the first beam splitter.
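To get a feeling for the relative size of the two inertial contributions, the following sketch evaluates Eqs.~\eqref{eq:phasegravity} and \eqref{eq:phasesagnac} for a rubidium interferometer; the pulse separation time and the transverse atomic velocity are assumptions, and a worst-case alignment of the vectors is taken.
\begin{verbatim}
import numpy as np

k_eff   = 2 * 2*np.pi / 780.241e-9  # two-photon wave number [1/m]
g       = 9.81
T       = 20e-3                     # pulse separation [s] (assumption)
Omega_E = 7.292e-5                  # Earth's rotation rate [rad/s]
v0      = 1e-2                      # transv. velocity [m/s] (assumption)

dphi_grav = k_eff * g * T**2                  # Eq. (phasegravity)
dphi_rot  = 2 * k_eff * Omega_E * v0 * T**2   # Eq. (phasesagnac)
print(f"gravity phase: {dphi_grav:.3e} rad")
print(f"Sagnac phase : {dphi_rot:.3e} rad")
\end{verbatim}
For these parameters the gravitational phase exceeds the rotational one by many orders of magnitude.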
\subsubsection{Influence of non-zero pulse duration}
If the duration~$\tau_{\mathrm{p}}$ of a pulse is not negligibly short compared to the pulse separation time~$T$, a modification of~$\Delta\phi^{\mathrm{grav}}$ given by Eq.~\eqref{eq:phasegravity} is necessary~\cite{Antoine07PRA,Cheinet08IEEE,Stoner11JOSAB,Bonnin15PRA}. For a Gaussian-shaped pulse of width $\sigma_{\tau_\mathrm{p}}$ we find the new pulse separation time
\begin{equation}
T'\equiv T+\tau_\mathrm{p}-\tau'_\mathrm{p}\,,
\end{equation}
where
\begin{equation}
\tau'_{\mathrm{p}}\equiv \sqrt{2\pi}\sigma_{\tau_\mathrm{p}}
\end{equation}
is the duration of an equivalent box-shaped pulse covering the same area as the Gaussian-shaped one. Here we have assumed equally long $\pi$- and $\pi/2$-pulses, which, however, differ in intensity.
As a result the improved expression
\begin{equation}
\Delta\phi^{\mathrm{grav}} = {\bf k}_{\mathrm{eff}} \cdot \boldsymbol{g}\, T'^2\left[1+\left(1+\frac{2}{\pi}\right)\frac{\tau'_{\mathrm{p}}}{T'}+\dots\right]\,,
\end{equation}
for~$\Delta\phi^{\mathrm{grav}}$ represents a Taylor expansion in powers of $\tau_{\mathrm{p}}'/T'$.
Hence, the influence of the correction due to a non-zero pulse duration decreases for larger~$T$, since the duration~$\tau_{\mathrm{p}}$ is independent of the separation~$T$.
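The size of this correction is easily estimated numerically; in the following sketch the Gaussian pulse width and the nominal pulse duration are assumptions chosen for illustration.
\begin{verbatim}
import numpy as np

T        = 20e-3       # nominal pulse separation [s] (assumption)
sigma_tp = 10e-6       # Gaussian pulse width [s] (assumption)
tau_p    = 50e-6       # pulse duration [s] (assumption)

tau_p_eff = np.sqrt(2*np.pi) * sigma_tp  # equivalent box pulse
T_eff     = T + tau_p - tau_p_eff        # new separation time T'
corr      = 1 + (1 + 2/np.pi) * tau_p_eff/T_eff
print(f"T' = {T_eff*1e3:.4f} ms, correction factor = {corr:.6f}")
\end{verbatim}
As expected, the leading correction is at the level of $\tau'_{\mathrm{p}}/T'\sim 10^{-3}$ for these values and shrinks with growing~$T$.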
\subsubsection{Measurement of gravitational acceleration}
\label{sec:Chirping_laser_frequency}
The phase~$\Delta\phi$ accumulated in an atom interferometer depends not only on the laser phase~$\Delta\phi^\mathrm{laser}$, but also on the time dependence of the laser frequency~$\Delta\nu=\Delta \nu(\tau)$. In the following we restrict ourselves to a one-dimensional problem with the gravitational acceleration $g=\left|\boldsymbol{g}\right|$, and neglect for the time being the phase contributions~$\Delta\phi^\mathrm{laser}$ and~$\Delta\phi^\mathrm{rot}$.
During the free fall of the atom the resonance frequency of the transition changes due to the Doppler effect, Eq.~\eqref{eq:dopplerfrequency}. This effect needs to be taken into account by a proper change of the detuning~$\Delta \nu(\tau)$ after a certain free-fall time~$\tau$.
Indeed, the appropriate time dependence
\begin{equation}
\Delta \nu(\tau)=\frac{\delta\omega(\tau)}{2\pi}
\end{equation}
of the chirp originates from the requirement for~$\delta \omega$ to stay on resonance, Eq.~\eqref{eq:seqtrans}, where the Doppler frequency~$\omega_\mathrm{D}=\omega_\mathrm{D}(0)$, Eq.~\eqref{eq:dopplerfrequency}, is replaced by
\begin{equation}
\omega_\mathrm{D}(\tau)=\omega_\mathrm{D}(0)-k_{\mathrm{eff}} g \tau\,.
\end{equation}
Chirping the laser frequency is not only a necessity to remain on resonance, but the chirp rate~$\alpha$ also defines the acceleration $a=2\pi \alpha/k_{\mathrm{eff}}$, compare to Eq.~\eqref{eq:bloch_force}, of the wavefronts of the Bragg lattice during free fall. In particular, the adjustment of~$2\pi \alpha$ can be used to modulate the interferometer phase~$\Delta\phi$ such that
\begin{equation} \label{eq:chripratemeasurement}
\Delta \phi(\alpha,T) = \left(g-\frac{2\pi\alpha}{k_{\mathrm{eff}}}\right)k_{\mathrm{eff}} T^2\,.
\end{equation}
It is the control of this chirp rate that allows us to realize an atomic gravimeter. Indeed, Eq.~\eqref{eq:chripratemeasurement} relates the output of the atom interferometer~$P(\Delta\phi)$, Eq. \eqref{eq: interferometer_output_probability}, depending on the gravitational acceleration~$g$, to the two well-controlled parameters~$T$ and~$\alpha$. To extract~$g$, we evaluate the dependence of the output signal~$P(\Delta\phi)$ on one parameter with the other one fixed.
Figure~\ref{fig:chirpratemeasurement} displays~$P=P(\Delta\phi)$ versus the variation of the pulse separation time~$T$ for three different values of~$2\pi\alpha/k_{\mathrm{eff}}$, namely $0,\,3g/4,$ and $g$. The closer the ratio~$2\pi\alpha/k_{\mathrm{eff}}$ is to the gravitational acceleration~$g$, the slower the fringe oscillation becomes, and for~$2\pi\alpha/k_{\mathrm{eff}}=g$ the oscillations vanish completely.
When there is a significant mismatch between the effective acceleration~$2\pi\alpha/k_{\mathrm{eff}}$ and the gravitational acceleration~$g$, the excitation efficiency is reduced by the uncompensated Doppler shift. This effect was neglected in Eq.~\eqref{eq:chripratemeasurement} and Fig.~\ref{fig:chirpratemeasurement}.
We conclude by noting that in a real experiment the ratio~$2\pi\alpha/k_{\mathrm{eff}}$ differs from the gravitational acceleration by less than one percent.
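The extraction of~$g$ can also be mimicked numerically: the following minimal sketch scans the chirp rate for several pulse separation times and identifies the fringe that is common to all of them, cf. Eqs.~\eqref{eq: interferometer_output_probability} and \eqref{eq:chripratemeasurement}. The ``true'' value of~$g$ and all other numbers are hypothetical.
\begin{verbatim}
import numpy as np

k_eff = 2 * 2*np.pi / 780.241e-9
g     = 9.812345                 # hypothetical value to recover

def P(alpha, T, C=1.0):          # Eqs. (interferometer_output_
    dphi = (g - 2*np.pi*alpha/k_eff) * k_eff * T**2
    return 0.5 * (1 - C*np.cos(dphi))

alphas = np.linspace(25.0e6, 25.2e6, 20001)  # chirp rates [Hz/s]
Ts     = [10e-3, 15e-3, 20e-3]

# only at the central fringe is the signal independent of T:
spread = np.ptp([P(alphas, T) for T in Ts], axis=0)
alpha0 = alphas[np.argmin(spread)]
print(f"recovered g = {2*np.pi*alpha0/k_eff:.6f} m/s^2")
\end{verbatim}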
\begin{figure}[h]
\includegraphics[width=\linewidth]{chirpratemeasurement.pdf}
\caption{Local gravitational acceleration~$g$ obtained from the measurement of the chirp rate in an atomic gravimeter.
The number of atoms in the exit port of an atom interferometer governed by~$P=P(\Delta\phi)$ is counted for different chirp rates~$\alpha$ of the laser frequency displayed here for the three lattice wavefront accelerations $2\pi\alpha/k_{\mathrm{eff}}=0,\;3g/4,$ and $g$.
For increasing pulse separation time~$T$, a chirped sinusoidal oscillation is obtained. The fringe oscillation slows down as $2\pi\alpha/k_{\mathrm{eff}}$ approaches~$g$ and finally vanishes for $2\pi\alpha/k_{\mathrm{eff}}=g$.
This picture is an adaptation of Fig.~2.4 in Ref.~\cite{Schlippert14} and Fig.~4.19 in Ref.~\cite{Abend17}.}
\label{fig:chirpratemeasurement}
\end{figure}
\section{Equivalence principle and atom interferometry}
\label{sec:EQP}
A prime application of an inertially sensitive atom interferometer is its use as a probe of Einstein's equivalence principle (EEP). We devote the present section to this topic and emphasize that the results summarized therein have originally been published in Ref.~\cite{Schlippert14PRL}.
First, we outline in Sec.~\ref{sec:UFF_test} a framework for testing the universality of free fall (UFF). We then present in Sec.~\ref{sec:Simultaneous_interferometer} an experiment based on a dual-species rubidium and potassium interferometer using atoms released from an optical molasses together with Raman diffraction to probe the UFF. Finally, we summarize our results and perform the associated data analysis in Sec.~\ref{sec:UFF_data_analysis}.
\subsection{Frameworks for tests of the universality of free fall}
\label{sec:UFF_test}
In spite of being 36 orders of magnitude weaker than the Coulomb interaction, gravitation dominates on a cosmological scale. Since astrophysical objects are electrically neutral, gravitation governs the structure of our universe.
Einstein's metric theory of gravity~\cite{Einstein16AP}, that is general relativity, provides us with the tools necessary to understand a vast variety of astronomical phenomena and makes verifiable predictions, such as the existence of gravitational waves \cite{Ref:Gravity1,Ref:Gravity2}. As of today, however, a completely satisfactory microscopic theory of {\it quantum} gravity merging general relativity with quantum mechanics is still lacking.
The EEP is a cornerstone of general relativity, and unification attempts in general imply the violation of at least one of its three central assumptions: i) Local position invariance, ii) local Lorentz invariance, and iii) the UFF.
Ranging from the famous Pound-Rebka experiment~\cite{Pound60PRL} and gravity probe A~\cite{Vessot80PRL}, to the extremely sensitive torsion balance experiments~\cite{Schlamminger08PRL} and lunar laser ranging~\cite{Williams04PRL,Williams12CQG,Mueller12CQG}, the EEP has been tested extensively.
Experiments employing matter wave interferometry have recently extended the landscape of classical UFF tests by entering the quantum domain.
The UFF postulates the equality of inertial mass $m_{\text{in}}$ and gravitational mass $m_{\text{gr}}$.
Given two bodies A and B it can be tested by obtaining the E\"otv\"os ratio
\begin{equation}\label{eq:eotvos}
\eta_{\,\text{A,B}}\equiv 2\thickspace\frac{g_{\text{A}}-g_{\text{B}}}{g_{\text{A}}+g_{\text{B}}}
=2\thickspace\frac{\left(\frac{m_{\text{gr}}}{m_{\text{in}}}\right)_{\text{A}}
-\left(\frac{m_{\text{gr}}}{m_{\text{in}}}\right)_{\text{B}}}
{\left(\frac{m_{\text{gr}}}{m_{\text{in}}}\right)_{\text{A}}
+\left(\frac{m_{\text{gr}}}{m_{\text{in}}}\right)_{\text{B}}}
\end{equation}
for their respective gravitational accelerations $g_{\text{A}}$ and $g_{\text{B}}$.
Any non-zero measurement of $\eta_{\,\text{A,B}}$ would imply a composition-dependent inequality of inertial and gravitational mass. Indeed, the values
\begin{equation}
\eta_{\,\text{Earth,Moon}}=(-0.8\pm 1.3)\cdot 10^{-13}
\end{equation}
based on lunar laser ranging~\cite{Williams04PRL,Williams12CQG,Mueller12CQG} and
\begin{equation}
\eta_{\,\text{Be,Ti}}=(0.3\pm 1.8)\cdot 10^{-13}
\end{equation}
employing torsion balances~\cite{Schlamminger08PRL} have provided the best constraints on violations of the UFF for a number of years.
Alternatively, one can also compare pairs of freely falling test masses. The best result on the ground~\cite{Niebauer87PRL} corresponds to
\begin{equation}
\eta_{\,\text{Cu,U}}=(1.3\pm 5.0)\cdot 10^{-10}.
\end{equation}
Moreover, a recent test in space~\cite{Touboul12CQG,Touboul2017PRLMICROSCOPE} obtained
\begin{equation}
\eta_{\,\text{Ti,Pt}}=\left[-0.1\pm 0.9(\mathrm{stat})\pm0.9(\mathrm{syst})\right]\cdot 10^{-14}
\end{equation}
corresponding to an order of magnitude improvement of the bounds set by previous experiments.
The tests listed above employ classical test masses.
In analogy to the first observation of a gravitation-induced phase in a neutron interferometer~\cite{Colella75PRL}, the acceleration measured by the interference of a quantum object can be compared to that of a classical object~\cite{Peters99Nature,Merlet10Metrologia}, or the differential gravitational phase shifts between two quantum objects~\cite{Fray04PRL,Fray09SSRev,Bonnin13PRA,Tarallo14PRL,Zhou15PRL} can be exploited to search for violations of the UFF.
Beyond the comparison of two isotopes of strontium in 2014~\cite{Tarallo14PRL}, the latter approach was extended to comparing the free-fall acceleration of the two different elements rubidium and potassium~\cite{Schlippert14PRL}.
Experiments using quantum objects are beneficial because they are generally subject to systematic effects different from those dominating classical tests.
Moreover, unique properties of quantum objects, such as a macroscopic coherence length~\cite{Goeklue08CQG} and spin polarization~\cite{Leitner64PRL,Laemmerzahl98,Laemmerzahl98proceedings,Laemmerzahl06APB} can be tested as features possibly coupling to EEP-violating effects.
Furthermore, the set of available test masses is enlarged by those that can be laser-cooled and chosen so as to maximize the sensitivity to violations.
This approach can be clearly illustrated by dilaton models~\cite{Damour12CQG}, and the so-called Standard Model Extension (SME)~\cite{Kostelecky11,Hohensee13PRL}, which both provide consistent frameworks for parametrizing violations of the EEP.
Violations of the UFF can be naturally parameterized by writing the acceleration~$g_{\text{X}}$ of a species X as
\begin{equation}
g_{\text{X}}=\left(1+\beta_{\text{X}}\right)g\,,
\end{equation}
where $\beta_{\text{X}}$ is a small parameter which is species dependent, and vanishes in the absence of UFF violations.
For dilaton models one has
\begin{equation}
\beta_{\text{X}}=D_1 Q^{'1}_{\text{X}} +D_2 Q^{'2}_{\text{X}}\,,
\end{equation}
where $D_1$ and $D_2$ are fundamental violation parameters, whereas \Q{X} and \QQ{X} are effective charges that mainly depend on the proton and neutron numbers of species X~\cite{Damour12CQG}.
The E\"otv\"os ratio for two species A and B is then given by
\begin{equation}
\eta_{\,\text{A,B}}\approx \beta_{\text{A}}-\beta_{\text{B}}=D_1(Q^{'1}_{\text{A}}-Q^{'1}_{\text{B}})+D_2(Q^{'2}_{\text{A}}-Q^{'2}_{\text{B}})\,,
\end{equation}
and the differences of the effective charges, which determine the sensitivity to violations associated with $D_1$ or $D_2$, are listed in Table \ref{tab:Damour} for several test pairs.
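As an illustration of this parametrization, the following Python sketch evaluates the dilaton-model E\"otv\"os ratio for three test pairs using the effective-charge differences of Table~\ref{tab:Damour}; the violation parameters $D_1$ and $D_2$ are purely hypothetical and serve only to compare sensitivities.
\begin{verbatim}
# effective-charge differences (Q'1, Q'2) from Table (tab:Damour)
dQ = {("39K",  "87Rb"): (-6.69e-4, -23.69e-4),
      ("85Rb", "87Rb"): ( 0.84e-4,  -0.79e-4),
      ("9Be",  "Ti"):   (-15.46e-4, -71.20e-4)}

D1, D2 = 1e-9, 1e-9       # hypothetical violation parameters

for (A, B), (dQ1, dQ2) in dQ.items():
    eta = D1*dQ1 + D2*dQ2
    print(f"{A}-{B}: eta = {eta:+.2e}")
\end{verbatim}
The larger charge differences of the K-Rb pair compared to the isotope pairs translate directly into a larger sensitivity to~$D_1$ and~$D_2$.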
Similarly, for the SME one has
\begin{equation}
\beta_{X}= \thickspace f_{\beta^{e+p-n}_{\text{X}}}\beta^{e+p-n}+f_{\beta^{e+p+n}_{\text{X}}}\beta^{e+p+n}
+f_{\beta^{\bar{e}+\bar{p}-\bar{n}}_{\text{X}}}\beta^{\bar{e}+\bar{p}-\bar{n}}
+f_{\beta^{\bar{e}+\bar{p}+\bar{n}}_{\text{X}}}\beta^{\bar{e}+\bar{p}+\bar{n}}\,,
\end{equation}
where $\beta^{e+p-n}$, $\beta^{e+p+n}$, $\beta^{\bar{e}+\bar{p}-\bar{n}}$, and $\beta^{\bar{e}+\bar{p}+\bar{n}}$ parametrize the violations for various weighted combinations of elementary particles. Moreover, the sensitivity factors \fbminus{X} (\fbbarminus{X}) and \fbplus{X} (\fbbarplus{X}) are charges related to the neutron excess, and the overall baryon number in a given normal matter (antimatter) nucleus~\cite{Hohensee13PRL}.
\begin{table}[t!]
\centering
\small\renewcommand{\arraystretch}{1.4}
\begin{center}
\begin{tabular}{|l|c|c|c|c|}
\hline
A& B&Ref.&(\Q{A}$\thinspace -\thinspace$\Q{B})$\cdot 10^4$&(\QQ{A}$\thinspace -\thinspace$\QQ{B})$\cdot 10^4$\\
\hline
\textsuperscript{9}Be& Ti&\cite{Schlamminger08PRL} &$-15.46$ &$-71.20$\\
Cu& \textsuperscript{238}U&\cite{Niebauer87PRL} & $-19.09$ &$-28.62$\\
\textsuperscript{85}Rb&\textsuperscript{87}Rb&\cite{Fray04PRL,Bonnin13PRA} &$0.84$& $-0.79$\\
\textsuperscript{87}Sr&\textsuperscript{88}Sr &\cite{Tarallo14PRL} &$0.42$& $-0.39$ \\
\textsuperscript{6}Li&\textsuperscript{7}Li~\footnotemark[1] &\cite{Hamilton12DAMOP}&$0.79$& $-10.07$\\
\hline
\textsuperscript{39}K&\textsuperscript{87}Rb&\cite{Schlippert14PRL}& $-6.69$& $-23.69$\\
\hline
\end{tabular}
\end{center}
\caption[Comparison model of test masses \textup{A} and \textup{B} in the dilaton]{Comparison of test masses \textup{A} and \textup{B} analyzed in the dilaton model. The charges
\Q{\textup{X}}~and \QQ{\textup{X}}~with \textup{X} being either \textup{A} or \textup{B} are calculated according to Ref.~\cite{Damour12CQG}.
A larger absolute number corresponds to a larger anomalous acceleration, and thus a higher sensitivity to violations of the \textup{EEP}.
For \textup{Ti} and \textup{Cu} natural occurrence of isotopes is assumed. This table is a reproduction of Table~2.1 in Ref.~\cite{Schlippert14}.}
\label{tab:Damour}
\end{table}
\footnotetext[1]{A UFF test comparing \textsuperscript{6}Li {\it vs.} \textsuperscript{7}Li has not yet been performed.}
The corresponding E\"otv\"os ratio reads
\begin{equation}\label{eq:Damour}
\eta_{\,\text{A,B}}\approx\beta_{\text{A}}-\beta_{\text{B}}=\Delta f_{-n}\beta^{e+p-n}+\Delta f_{+n}\beta^{e+p+n}+\bar{\Delta f_{-n}}\beta^{\bar{e}+\bar{p}-\bar{n}}+\bar{\Delta f_{+n}}\beta^{\bar{e}+\bar{p}+\bar{n}}
\end{equation}
with
\begin{equation}
\begin{aligned}
\Delta f_{-n} \equiv f_{\beta^{e+p-n}_{\text{A}}}- f_{\beta^{e+p-n}_{\text{B}}}\\
\Delta f_{+n} \equiv f_{\beta^{e+p+n}_{\text{A}}}- f_{\beta^{e+p+n}_{\text{B}}}\\
\bar{\Delta f_{-n}} \equiv f_{\beta^{\bar{e}+\bar{p}-\bar{n}}_{\text{A}}} - f_{\beta^{\bar{e}+\bar{p}-\bar{n}}_{\text{B}}}\\
\bar{\Delta f_{+n}} \equiv f_{\beta^{\bar{e}+\bar{p}+\bar{n}}_{\text{A}}} -f_{\beta^{\bar{e}+\bar{p}+\bar{n}}_{\text{B}}}\text{.}
\end{aligned}
\end{equation}
These differences of sensitivity factors are listed in Table \ref{tab:SME} for relevant test pairs.
Tables \ref{tab:Damour} and \ref{tab:SME} clearly show that the choice of test masses heavily influences the achievable impact on global violation bounds.
Specifically, this information allows us to use novel test pairs that were previously unconstrained, or only weakly constrained, to improve the bounds on certain violation parameters~\cite{Hohensee13PRL,Mueller13varenna}.
Here, a new and independent result can have an enormous impact on the global model even if it does not reach the state-of-the-art sensitivity.
\begin{table}[ht]
\centering
\small\renewcommand{\arraystretch}{1.4}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
A& B&Ref.&$\Delta f_{-n}\cdot 10^2$&$\Delta f_{+n}\cdot 10^4$&$\bar{\Delta f_{-n}}\cdot 10^5$&$\bar{\Delta f_{+n}}\cdot 10^4$\\
\hline
\textsuperscript{9}Be& Ti&\cite{Schlamminger08PRL} & $1.48$ &$-4.16$ & $-0.24$ &$-16.24$\\
Cu& \textsuperscript{238}U&\cite{Niebauer87PRL} & $-7.08$ &$-8.31$ & $-89.89$ &$-2.38$\\
\textsuperscript{85}Rb&\textsuperscript{87}Rb&\cite{Fray04PRL,Bonnin13PRA} &$-1.01$& $1.81$ &$1.04$& $1.67$\\
\textsuperscript{87}Sr&\textsuperscript{88}Sr &\cite{Tarallo14PRL}&$-0.49$& $2.04$ &$0.81$& $1.85$\\
\textsuperscript{6}Li&\textsuperscript{7}Li~\footnotemark[2] &\cite{Hamilton12DAMOP}&$-7.26$& $7.79$ &$-72.05$& $5.82$\\
\hline
\textsuperscript{39}K&\textsuperscript{87}Rb&\cite{Schlippert14PRL}& $-6.31$& $1.90$& $-62.30$&$0.64$\\
\hline
\end{tabular}
\end{center}
\caption[Comparison of test masses \textup{A} and \textup{B} in the SME]{Comparison of test masses \textup{A} and \textup{B} analyzed in the Standard Model Extension.
The sensitivity factors $\Delta f_{-n}$, $\Delta f_{+n}$, $\bar{\Delta f_{-n}}$, and $\bar{\Delta f_{+n}}$~are calculated according to Ref.~\cite{Hohensee13PRL}.
Relevant nuclide data is taken from Ref.~\cite{Audi03}.
A larger absolute number corresponds to a larger anomalous acceleration, and thus higher sensitivity to violations of the \textup{EEP}.
For \textup{Ti} and \textup{Cu} natural occurrence of isotopes~\cite{Laeter09} is assumed. This table is a reproduction of Table~2.2 in Ref.~\cite{Schlippert14}.}
\label{tab:SME}
\end{table}
\footnotetext[2]{A UFF test comparing \textsuperscript{6}Li {\it vs.} \textsuperscript{7}Li has not yet been performed.}
\subsection{Simultaneous \textup{\textsuperscript{87}Rb}~and \textup{\textsuperscript{39}K}~interferometer}
\label{sec:Simultaneous_interferometer}
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{MZ_dual.pdf}
\caption[]{Spacetime diagram of a dual-species {Mach-Zehnder} atom interferometer in a constant gravitational field for the downward (thick lines) and upward (thin lines) direction of the momentum transfer. Stimulated {Raman} transitions at times $0$, $T$, and $2\,T$ couple the states $\ket{F_i=1,\,p}$ and $\ket{F_i=2,\,p\pm\hbar\,\keffi{i}}$, where $i$ stands for Rb (blue lines) or K (red lines). The velocity change induced by the {Raman} pulses is not to scale compared to the gravitational acceleration. This figure is reproduced from Fig.~5.1 in Ref.~\cite{Schlippert14}.}
\label{fig:mzdual}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.95\textwidth]{central_fringe.pdf}
\caption[Determination of the differential gravitational acceleration of rubidium and potassium]{Determination of the differential gravitational acceleration of rubidium and potassium. Typical interference signals and sinusoidal fits as a function of the effective Raman wavefront acceleration are shown for pulse separation times $T=\SI{8}{ms}$ (black squares and solid black line), $T=\SI{15}{ms}$ (red circles and dashed red line), and $T=\SI{20}{ms}$ (blue diamonds and dotted blue line) both for the upward $(+)$ and downward $(-)$ directions of the momentum transfer. The central fringe positions $a_i^{(\pm)}(g)$ (dashed vertical lines), where $i$ is Rb or K, are shifted symmetrically around $g_i=[a_i^{(+)}(g)-a_i^{(-)}(g)]/2$ (solid vertical line). The data sets are corrected for slow linear drifts and offsets. This figure is reproduced from Fig.~5.2 in Ref.~\cite{Schlippert14}.}
\label{fig:qtufffringes}
\end{figure}
To test the UFF, we operate two {Mach-Zehnder}-type matter wave interferometers~\cite{Kasevich91PRL} with laser-cooled \textsuperscript{87}Rb~and \textsuperscript{39}K~as shown in Fig.~\ref{fig:mzdual}.
To leading order, the phase shift in each interferometer reads
\begin{equation}\label{eq:phaseshift}
\Delta\phi_i=\left(g_i-\frac{\alpha_i}{\keffi{i}}\right)\keffi{i} T^2\,,
\end{equation}
and can be deduced by counting the atoms in the exit ports. Here the effective wavefront acceleration $\alpha_i/\keffi{i}$ introduced by linearly ramping the Raman laser frequency difference is utilized to null the phase shift induced by the gravitational acceleration. We emphasize that the definition of the chirp rate $ \alpha_i$ differs by a factor of $2\pi$ from the definition of the chirp rate $\alpha$ in Sec.~\ref{sec:Chirping_laser_frequency}.
An ensemble of $8\cdot 10^8$ rubidium atoms ($3\cdot 10^7$ potassium atoms) is collected from a two-dimensional magneto-optical trap, then sub-Doppler cooled~\cite{Landini11PRA,Chu98,Phillips98} to $T_{\text{Rb}}=\SI{27}{\micro K}$ ($T_{\text{K}}=\SI{32}{\micro K}$), and, after being optically pumped to the $\ket{F_i=1}$ manifold, released into free fall.
Three two-photon Raman pulses separated in time by $T$ are applied to coherently split, redirect, and recombine the atoms during free fall.
For detection, the population in $\ket{F_i=2}$ as well as the total population in $\ket{F_i=1}$ and $\ket{F_i=2}$ are obtained via state-selective fluorescence detection, yielding the normalized excitation probability.
The overall cycle time is about~$\SI{1.6}{s}$.
A global minimum of the interference fringes appears independently of the free evolution time $T$, when the condition $g_i-\alpha_i/\keffi{i}=0$ is fulfilled. We display this concept for the upward and downward directions of the momentum transfer\footnote[3]{The differential signal of the upward and downward directions of the momentum transfer allows us to suppress~\cite{McGuirk02PRA,Louchet11NJP} spurious phase shifts independent of the direction of $k_{\mathrm{eff}}$.} in Fig.~\ref{fig:qtufffringes}.
\subsection{Data analysis and result}
\label{sec:UFF_data_analysis}
Over a duration of about four hours we have tracked the central fringe position $a_i^{(\pm)}(g)$ by scanning across the minimum in 10 steps with alternating directions of the momentum transfer at a pulse separation time of $T=\SI{20}{ms}$.
Two scans yield $g_i=[a_i^{(+)}(g)-a_i^{(-)}(g)]/2$ and can be used to compute the E\"otv\"os ratio, Eq.~\eqref{eq:eotvos}.
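The following minimal sketch illustrates this evaluation with hypothetical fringe positions $a_i^{(\pm)}(g)$, not measured data; direction-independent shifts cancel in the half-difference, and the two species then yield the E\"otv\"os ratio of Eq.~\eqref{eq:eotvos}.
\begin{verbatim}
# hypothetical central fringe positions [m/s^2] for both
# directions of the momentum transfer (illustration only)
a_Rb = (+9.8132001, -9.8131999)
a_K  = (+9.8132010, -9.8131994)

g_Rb = (a_Rb[0] - a_Rb[1]) / 2   # k-reversal removes common shifts
g_K  = (a_K[0]  - a_K[1])  / 2
eta  = 2 * (g_Rb - g_K) / (g_Rb + g_K)
print(f"g_Rb = {g_Rb:.7f}, g_K = {g_K:.7f}, eta = {eta:+.1e}")
\end{verbatim}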
Systematic effects affecting our measurement are listed in Table~\ref{tab:systematics}.
The total bias $\Delta\eta_{\text{tot}}=-5.4\cdot 10^{-8}$ is subject to an uncertainty $\delta\eta_{\text{tot}}=3.1\cdot 10^{-8}$. Considering the systematic and statistical uncertainty as well as the bias from Table~\ref{tab:systematics}, an overall result of
\begin{equation}
\eta_{\mathrm{Rb},\mathrm{K}}=(0.3\pm 5.4)\cdot 10^{-7}
\end{equation}
is obtained.
The column $\delta\eta^{\text{adv}}$ points to possible future improvements (indicated in bold face) using a common optical dipole trap~\cite{Zaiser11PRA} as a source to further cool and collocate rubidium as well as potassium, and gain better control over their initial conditions.
\begin{table}[h]
\centering
\small\renewcommand{\arraystretch}{1.4}
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Contribution & $\Delta\eta$ &$\delta\eta$ & $\delta\eta^{\text{adv}}$ \\
\hline
Second-order Zeeman effect& $-5.8\cdot 10^{-8}$ &$2.6\cdot 10^{-8}$&$\mathbf{3.0\cdot 10^{-9}}$\\
Wavefront aberration & 0 &$1.2\cdot 10^{-8}$&$\mathbf{3.0\cdot 10^{-9}}$\\
Coriolis force& 0 &$9.1\cdot 10^{-9}$&$\mathbf{1.0\cdot 10^{-11}}$\\
Two-photon light shift& $4.1\cdot 10^{-9}$ &$8.2\cdot 10^{-11}$&$8.2\cdot 10^{-11}$\\
Effective wave vector& 0 &$1.3\cdot 10^{-9}$&$1.3\cdot 10^{-9}$\\
First-order gravity gradient& 0 &$9.5\cdot 10^{-11}$&$\mathbf{1.0\cdot 10^{-12}}$\\
\hline
Total &$-5.4\cdot 10^{-8}$&$3.1\cdot 10^{-8}$&$\mathbf{4.4\cdot 10^{-9}}$\\
\hline
\end{tabular}
\end{center}
\caption{Systematic biases $\Delta\eta$~and comparison between the uncertainties $\delta\eta$ and $\delta\eta^{\textup{adv}}$ of the E\"otv\"os ratio in the current, and in an advanced set-up. The improved values highlighted in bold face arise from the use of an optical dipole trap. The uncertainties are assumed to be uncorrelated at the level of the inaccuracy. This table is a reproduction of Table~5.1 in Ref.~\cite{Schlippert14}.}
\label{tab:systematics}
\end{table}
The statistical uncertainty~$\sigma_\eta=5.4\cdot 10^{-7}$ after $\SI{4096}{s}$ of integration time is dominated by technical noise in the potassium interferometer. This limitation can be improved in several ways.
For instance, the implementation of a sequence preparing a single $m_F$ state would reduce the number of background atoms that currently lower the contrast. Moreover, selecting a narrower velocity class, as well as lower temperatures to begin with, would improve the beam splitting efficiency and consequently the contrast, too.
\section{Atom-chip based BEC interferometry}
\label{sec:BEC_interferometry}
Today's generation of inertial sensors based on atom optics typically operates with cold atoms released or launched from an optical molasses, as exemplified by the experiment discussed in the previous section. The velocity distribution and the finite size of these sources limit
the efficiency of the beam splitters as well as the analysis of the systematic uncertainties.
We can overcome these limitations by employing ensembles with a momentum distribution well below the photon recoil limit, which can be achieved with BECs.
After reaching the regime of ballistic expansion, where the mean-field energy has been converted into kinetic energy, the momentum distribution of a BEC can be narrowed down even further by the technique of delta-kick collimation~(DKC)~\cite{Chu86OL,Ammann97PRL,Morinaga99PRL,Muentinga13PRL,Kovachy15PRL}. Moreover, atom-chip technology offers the possibility
to generate a BEC and perform DKC in a fast and reliable fashion, paving the way for miniaturized atomic devices.
We devote Sec.~\ref{sec:DKC} to an introduction to DKC and note that the results reported in this section have originally been published
in Ref.~\cite{Muentinga13PRL}.
The use of BECs allows us to implement Bragg and double Bragg diffraction with efficiencies
above 95\%, and thus to perform interferometry with high contrast.
In Sec.~\ref{sec:Tiltmeter} we demonstrate a quantum tiltmeter using a MZI based on double Bragg diffraction with a tilt precision of up to $4.5\,\upmu$rad. The results presented in this section have originally been published in Ref.~\cite{Ahlers16PRL}.
In Sec.~\ref{sec:Compact_baseline} we discuss an experiment combining double Bragg diffraction and Bloch oscillations,
where we have implemented \cite{Abend16PRL} a relaunch procedure with more than 75\% efficiency in a retro-reflected optical lattice.
We emphasize that here we rely on a {\it single} laser beam only, which also serves as the beam splitter, resulting in a set-up of significantly reduced complexity.
Our relaunch technique allowed us to build a gravimeter on a small baseline with a comparably large interferometry time of~$2T=50$\,ms
in the MZI for a fixed free-fall distance. At a high contrast of $C=0.8$ the interferometer reaches an intrinsic sensitivity to gravity of
\begin{equation}
\Delta g/g=1.4\cdot 10^{-7}\,.
\end{equation}
A key element of this result was the state preparation comprising DKC and Stern-Gerlach-type deflection,
which improved the contrast and reduced the detection noise. The results presented in this section have originally been reported in Ref.~\cite{Abend16PRL}.
\subsection{Delta-kick collimation}
\label{sec:DKC}
The desire to reach long expansion times serves as a motivation for DKC by a magnetic lens~\cite{Smith08JPB}.
Recent experiments have shown expansion rates corresponding to a few nK in 3D in the drop tower~\cite{Muentinga13PRL} with QUANTUS-1, or even pK in 2D in a 10\,m-fountain~\cite{Kovachy15PRL}. These widths in momentum space are smaller than those of
the coldest reported condensates~\cite{Leanhardt03Science}. For a detailed study of DKC using the QUANTUS-1 apparatus in the drop tower and also
on ground we refer to Refs.~\cite{Wenzlawski13,Krutzig14}.
After the release from the trap, the BEC starts to expand freely and falls away from the trap due to the gravitational acceleration.
During the first milliseconds after release most of the mean-field energy is converted into kinetic energy.
The required time for this conversion depends on the atomic density of the condensate, and hence, on the steepness of the trap
from which the BEC is released. The final expansion rates in the ballistic regime can be precisely evaluated by time-of-flight measurements.
Even a BEC has a non-zero velocity spread, which after some expansion time leads to an increased cloud size, and possibly
to a reduced performance of the atom interferometer due to increased detection noise or larger contributions to systematic uncertainties.
A condensate released from a shallow trap expands slowly enough to perform experiments with short free-fall times. Moreover, these condensates have a momentum width that is small enough to reach high Bragg diffraction efficiencies.
Indeed, for trapping frequencies of $f_{(x,y,z)}=(18,\,46,\,31)$\,Hz and $N=10^4$ atoms the resulting expansion rate along
the beam-splitting axis is $\sigma_v\approx\, 750\,\upmu\mathrm{m}/\mathrm{s}\approx\, 0.125\,\hbar k/m$,
which corresponds to an effective temperature in the beam direction of about 5 to 10\,nK.
Thus, for this number of atoms the mean-field conversion in a time of $t_{\mathrm{exp}}\gtrsim 10$\,ms is acceptable for interferometry.
However, for larger densities of the condensate, the use of DKC becomes necessary. In Fig.~\ref{fig:dkc} we illustrate the essential idea of DKC with the time evolution of a phase-space distribution.
The shearing in phase space caused by the free evolution of the condensate leads to a tilted ellipse that can be rotated by applying a harmonic potential
for a suitable time~$\tau_{\mathrm{DKC}}$, so as to align its major axis with the $x$-axis.
\begin{figure}[h]
\centering
\includegraphics[width=.7\linewidth]{dkc.pdf}
\caption{Principle of delta-kick collimation (DKC) explained in phase space. After release an ensemble of cold atoms has an initial distribution in the phase space
spanned by position $x$ and momentum $p$.
After a time $T_0$ the cloud has expanded in space giving rise to a tilted ellipse which is then rotated in phase space by a collimating pulse.
This figure is an adaptation of Fig.~1.3(c) in Ref.~\cite{Abend17}.}
\label{fig:dkc}
\end{figure}
Indeed, when we consider the potential~$V_{\mathrm{DKC}}(x)\equiv m \omega_{\mathrm{DKC}}^2x^2/2$
applied for a short time~$\tau_{\mathrm{DKC}}$, the resulting change~$\Delta p$ in the momentum~$p$ is approximately given by
\begin{equation}
\Delta p \cong F(x) \tau_{\mathrm{DKC}} = -\frac{\textrm{d}V_{\textrm{DKC}}}{\textrm{d} x} \tau_{\mathrm{DKC}} = -m\omega_{\mathrm{DKC}}^2x\,\tau_{\mathrm{DKC}}\,.
\end{equation}
Assuming that most of the expansion takes place in the ballistic regime, and that the spatial size~$x_{\mathrm{f}}$
when the DKC pulse is applied at time~$T_0$ is much larger than the initial size~$x_{\mathrm{i}}$,
such that $x_{\mathrm{f}}\approx p_{\mathrm{i}}T_0/m$, the induced momentum change reads
\begin{equation}
\Delta p=-p_{\mathrm{i}} \omega_{\mathrm{DKC}}^2 \tau_{\mathrm{DKC}}T_0\,.
\end{equation}
The initial momentum width $p_{\mathrm{i}}$ considered here includes the contribution from the mean-field energy that was converted into kinetic energy.
In complete analogy to the power $P$ of an optical lens, the strength of a delta kick is defined by $S\equiv\omega_{\mathrm{DKC}}^2\tau_{\mathrm{DKC}}$. For a given $T_0$ the optimal choice that leads to $\Delta p=-p_{\mathrm{i}}$ corresponds to $S=1/T_0$, or equivalently, to
\begin{equation}
\label{eq: dkc_optimal_choice}
\omega_{\mathrm{DKC}}^2\tau_{\mathrm{DKC}} T_0=1\,.
\end{equation}
However, even in this case the minimum value attainable for the final momentum width is limited by Liouville's theorem, that is
conservation of phase-space volume. This minimal value is determined by the ratio of the phase-space volume of the initial ensemble, and the spatial width $x_{\mathrm{f}}$
when the DKC pulse is applied. In principle this limitation can be reduced by increasing $T_0$ which leads to a larger $x_{\mathrm{f}}$.
In a realistic implementation uncertainties in the relevant parameters lead to an uncertainty $\delta p$ in the induced momentum change
which should be smaller than the targeted final momentum width $p_{\mathrm{f}}$, namely
\begin{equation}
\label{eq: dkc_momentum_requirement}
\delta p=p_{\mathrm{i}} \delta(\omega^2_{\mathrm{DKC}}\tau_{\mathrm{DKC}}T_0)< p_{\mathrm{f}}\,.
\end{equation}
When we take into account the optimal choice, Eq.~\eqref{eq: dkc_optimal_choice}, the requirement, Eq.~\eqref{eq: dkc_momentum_requirement}, can be expressed in terms of the relative errors for the individual parameters, that is
\begin{equation}
2\frac{\delta\omega_{\mathrm{DKC}}}{\omega_{\mathrm{DKC}}} + \frac{\delta\tau_{\mathrm{DKC}}}{\tau_{\mathrm{DKC}}} + \frac{\delta T_0}{T_0} < \frac{p_{\mathrm{f}}}{p_{\mathrm{i}}}\,.
\end{equation}
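A minimal numerical sketch combining the optimal-kick condition, Eq.~\eqref{eq: dkc_optimal_choice}, with this error budget reads as follows; the timing values and relative errors are assumptions chosen for illustration.
\begin{verbatim}
import numpy as np

T0, tau_DKC = 2.0, 2e-3     # expansion and kick times [s] (assumed)
omega_DKC   = np.sqrt(1.0 / (tau_DKC * T0))   # Eq. (dkc_optimal...)
print(f"required trap frequency: {omega_DKC/(2*np.pi):.2f} Hz")

# assumed relative errors of omega_DKC, tau_DKC and T0
rel_err = 2*0.01 + 0.005 + 0.001
ratio   = 0.1               # targeted p_f / p_i (assumption)
print(f"error budget satisfied: {rel_err < ratio}")
\end{verbatim}
For a long pre-expansion, the required trap frequencies drop to the Hz level, while the error budget is dominated by the uncertainty of the trap frequency.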
For ground-based experiments, the waiting time~$T_0$ and other quantities determining the errors are not only limited by technical means,
but also by the free fall away from the chip, which reduces the trap frequencies of the potential, and restricts~$T_0$ to times smaller than~$6$\,ms.
For a shallow trap the expansion after this time is not even in the ballistic regime, and the mean-field potential is still non-negligible.
Thus, a release from the trap with a faster initial expansion is required to increase~$x_{\mathrm{f}}$ prior to collimation.
We conclude our brief review of DKC by mentioning that the anharmonicities of the generated potential represent another source of errors. Indeed, they cause deformations when the condensates are too large.
\subsection{Quantum tiltmeter based on double Bragg diffraction}
\label{sec:Tiltmeter}
An interesting extension of Bragg diffraction occurs when a retro-reflected pair of laser beams interacts with atoms that are at rest
with respect to the mirror, such that the Doppler shift~$\omega_{\mathrm{D}}$ vanishes.
In this case the transitions with opposing effective wave vectors are degenerate in frequency, and simultaneously diffract
the atomic wave packets in both directions. The difference in momenta between both arms in an interferometer is then increased to $4\hbar k$.
This symmetric diffraction called \quot{double Bragg diffraction} was proposed in Ref.~\cite{Giese13PRA} as a generalization
of Bragg diffraction in complete analogy to double Raman diffraction~\cite{Leveque09PRL}, and experimentally demonstrated in Ref.~\cite{Ahlers16PRL}.
We emphasize that traditional Bragg diffraction can be referred to as single or uni-directional Bragg diffraction,
since only a single pair of laser beams drives the transition, while the other one is off-resonant.
\subsubsection{Rabi oscillations}
In Fig.~\ref{fig:ddiff_rabi}(a) we show the coupling scheme for first-order double Bragg diffraction.
The transition frequency is given by the Bragg condition, Eq.~\eqref{eq:braggcondition}, with the recoil frequency~$\omega_{\mathrm{rec}}$.
\begin{figure}[!h]
\includegraphics[width=\linewidth]{ddiff_rabi.pdf}
\caption{First-order double Bragg diffraction represented by the corresponding transitions~(a), the comparison~(b) between experimental observations and numerical simulations of the Rabi oscillations in the normalized populations
as a function of the square-pulse duration, and their spectral decomposition~(c).
The energy diagram in (a) shows the resonant (solid lines) and off-resonant (dashed lines)
light-induced transitions between the atomic momentum states $\left|0 \hbar k\right\rangle$ (black),
$\left|\pm 2 \hbar k\right\rangle$ (blue) and $\left|\pm 4 \hbar k\right\rangle$ (red).
The experimental values (squares) and numerical simulations (solid curves) based on Ref.~\cite{Giese13PRA} displayed in (b) are in good agreement.
The frequency spectrum (c) of the simulated population $N_1/N_\text{tot}$ displays components close to $2 \omega_\mathrm{rec}$
which stem from the off-resonant couplings depicted in (a) by dashed lines.
The broad double-peaked structure at the Rabi frequency and at twice the Rabi frequency, which is a consequence of the detuned three-level system
with non-vanishing $p_0$ and/or $\delta p$, leads to the modulation of the oscillations observed in (b).
This figure is reproduced from~\cite{Ahlers16PRL} with permission of the authors, copyright American Physical Society (2016).}
\label{fig:ddiff_rabi}
\end{figure}
Due to the intrinsic symmetry of the diffraction process, a beam splitter of this kind offers several advantages for atom interferometry. Most prominently, the populations of the output ports no longer depend on the laser phase~$\Delta\phi^{\mathrm{laser}}$, since the wave functions of the center-of-mass motion in the two arms
have the same laser phase imprinted during each pulse\footnote[4]{Needless to say, this feature is only present when we neglect off-resonant processes.}.
Moreover, we can choose the order of the diffraction process by matching the detuning $\delta$ with the kinetic energy gained during the scattering.
Rabi oscillations for double Bragg diffraction are more complicated than for single Bragg diffraction,
since more states and transitions have to be taken into account.
In the case of first-order double Bragg diffraction $\delta$ is chosen to correspond
to the recoil frequency $\omega_\text{rec}\equiv (2\hbar k)^2/(2m \hbar)$ inducing resonant transitions between the momentum states $\left|0\right\rangle$ and $\left|\pm 2 \hbar k\right\rangle$ depicted in Fig.~\ref{fig:ddiff_rabi}(a) by solid lines, together with off-resonant transitions to these, and higher momentum states indicated by dashed lines.
Being off-resonant, the latter transitions are substantially suppressed.
When the width of the momentum distribution is small enough we can observe Rabi-type oscillations~\cite{Ahlers16PRL} as a function of the atom-light interaction time, as shown in Fig.~\ref{fig:ddiff_rabi}(b). Here we also display numerical simulations~\cite{Giese13PRA}
of the Rabi oscillations; the normalized atom populations are defined as
\begin{equation}
\label{populations}
n_0 \equiv N_0/N_\text{tot} \quad \text{and} \quad n_j \equiv (N_{-j} + N_j)/N_\text{tot}\quad (j=1,2)\,,
\end{equation}
where $N_j$ is the number of atoms in the momentum state~$\ket{2j\hbar k}$ with $j=-2,-1,0,1$ and $2$, and $N_\text{tot} \equiv \sum_{j = -2}^2 N_j$.
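A heavily simplified numerical sketch of these Rabi oscillations is given below: we truncate the momentum ladder at $\ket{\pm 4\hbar k}$ and work in a rotating frame in which the first-order transitions $\ket{0}\leftrightarrow\ket{\pm 2\hbar k}$ are resonant while $\ket{\pm 4\hbar k}$ is detuned by $2\omega_{\mathrm{rec}}$, cf. Fig.~\ref{fig:ddiff_rabi}(a). Equal couplings, the chosen Rabi frequency, and the neglect of the finite momentum width are assumptions of this sketch, not features of the full model of Ref.~\cite{Giese13PRA}.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

w_rec = 2*np.pi * 15e3      # four-photon recoil frequency [rad/s]
Omega = 2*np.pi * 5e3       # two-photon Rabi frequency [rad/s]

j  = np.arange(-2, 3)       # momentum states |2 j hbar k>
H  = np.diag((j**2 - np.abs(j)) * w_rec).astype(complex)
H += Omega/2 * (np.eye(5, k=1) + np.eye(5, k=-1))

def rhs(t, c):              # Schroedinger equation, i dc/dt = H c
    return -1j * H @ c

c0  = np.zeros(5, complex); c0[2] = 1.0    # start in |0 hbar k>
ts  = np.linspace(0.0, 400e-6, 400)
sol = solve_ivp(rhs, (0.0, ts[-1]), c0, t_eval=ts,
                rtol=1e-8, atol=1e-10)

pop    = np.abs(sol.y)**2
n0, n1 = pop[2], pop[1] + pop[3]           # cf. Eq. (populations)
print(f"max transfer to |+-2 hbar k>: {n1.max():.3f} at "
      f"t = {ts[np.argmax(n1)]*1e6:.0f} us")
\end{verbatim}
Already this toy model shows the fast, small-amplitude components near $2\omega_{\mathrm{rec}}$ caused by the off-resonant coupling to $\ket{\pm 4\hbar k}$.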
\subsubsection{Tilt measurements}
In most atom interferometers tilt variations can be significant and lead to an uncertainty in the quantity being measured.
Hence, these devices are either designed to minimize the effect of tilts, or they do not allow one to distinguish a tilt from other sources inducing a phase shift~\cite{Altin13NJP}.
\begin{figure}[!h]
\includegraphics[width=\linewidth]{ddiff_scheme_fringes.pdf}
\caption{Three double Bragg interferometers employed as tiltmeters (a) and the interference signals corresponding to
first-order~(b), successive first-order~(c), and second-order Gaussian pulses~(d) as a function of the tilt angle $\Delta \alpha$.
For each angular step in $\Delta \alpha$, the normalized population~$n_0$ in the exit ports is measured 50 times.
The blue solid, black dashed and red dotted lines represent sinusoidal fits of those datasets.
Collecting histograms of~$n_0$ over a range of tilt settings corresponding to one or two complete fringe periods,
and fitting them with a theoretical distribution (black) which assumes that all noise sources combined
with the tilt scan lead to an approximately uniform phase-shift distribution, yields contrasts of 43\%, 29\%, and 23\%, respectively.
Further analysis reveals a tilt precision of $4.5\,\upmu$rad, $5.9\,\upmu$rad, and $4.6\,\upmu$rad, respectively.
This figure is reproduced from Ref.~\cite{Ahlers16PRL} with permission of the authors, copyright American Physical Society (2016).}
\label{fig:ddiff_scheme_fringes}
\end{figure}
We have designed an atom interferometer to measure slight deviations from the horizontal direction with respect to gravity.
In this quantum tiltmeter we diffract a delta-kick collimated BEC with small initial momentum and low expansion rate off laser beams and induce first- or higher-order double Bragg transitions.
Figure~\ref{fig:ddiff_scheme_fringes}(a) illustrates the corresponding symmetric geometry emerging from a first-order (blue solid lines) double Bragg process, where the initial wave packet is split, redirected and recombined. A stepwise tilt of the whole apparatus changes the orientation $\alpha$ of the interferometer with respect to gravity $g$ and induces a phase shift. As a result, the interference signal, that is, the normalized populations at the exit ports, exhibits oscillations as a function of the change $\Delta \alpha$ in the tilt angle, as exemplified by Fig.~\ref{fig:ddiff_scheme_fringes}(b).
This process can be extended to successive first-order (black dashed lines) and second-order (red dotted lines) double Bragg processes
as displayed in Fig.~\ref{fig:ddiff_scheme_fringes}(c) and (d).
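A minimal fringe model helps to see why the signal oscillates with the tilt. Assuming that a tilt $\Delta\alpha$ of the horizontal interferometry axis projects an acceleration $g\sin(\Delta\alpha)$ onto that axis, and that an $n$-th order double Bragg Mach-Zehnder geometry has an effective wave vector of $4nk$ (twice that of single diffraction), the population follows a sinusoid in $\Delta\alpha$. All numerical values in this sketch (wavelength, pulse separation time, contrast, offset) are illustrative assumptions, not the experimental parameters.
\begin{verbatim}
import numpy as np

def tilt_fringe(dalpha, n=1, T=10e-3, C=0.4, P0=0.5, phi0=0.0,
                lam=780e-9, g=9.81):
    """Illustrative signal n0(dalpha) of a double Bragg tiltmeter."""
    k = 2 * np.pi / lam
    keff = 4 * n * k               # double diffraction: arms at +-2n*hbar*k
    phase = keff * g * np.sin(dalpha) * T**2 + phi0
    return P0 + 0.5 * C * np.cos(phase)

dalpha = np.linspace(-300e-6, 300e-6, 601)   # tilt scan in rad
signal = tilt_fringe(dalpha)                 # a few fringes over ~600 urad
\end{verbatim}
Increasing the diffraction order $n$ shortens the fringe period, which is precisely the gain in sensitivity visible when going from panel (b) to panel (d).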
\subsection{Sensitive atom-chip gravimeter on a compact baseline}
\label{sec:Compact_baseline}
Quantum sensors for gravimetry based on cold atoms have been with us for more than two decades~\cite{Kasevich92APB,Peters99Nature,Hu13PRA}, and
today reach accuracies competitive with those of falling corner-cube gravimeters~\cite{Merlet10Metrologia}.
However, {\it compact} gravimeters using BECs have been demonstrated only recently~\cite{Debs11PRA,Abend16PRL}.
Due to their small size and low expansion rates, delta-kick collimated BECs have attracted attention for large-scale devices
on the ground~\cite{Dickerson13PRL} and in space missions~\cite{Aguilera14CQG}. In light of these experiments, systematic uncertainties specific to
BECs have been analyzed~\cite{Schubert13arxiv, Hartwig15NJP}, and novel techniques have been introduced~\cite{Sugarbaker13PRL}.
These achievements will allow sensors relying on BECs to target sub-$\upmu$Gal accuracies in the near future, and to overcome
current limitations set by cold atoms~\cite{Hu13PRA,Bidel13APL,Freier16CS,Fang16CS}.
The use of an atom chip for all preparation steps, and as a retro-reflector, is the novelty of our approach~\cite{Abend16PRL} summarized in this section.
Although our experiment is a proof-of-principle, it nevertheless represents an important pathway to the application of an atom-chip gravimeter
for precision measurements important for example in geodesy.
\subsubsection{Relaunch of atoms in a retro-reflected optical lattice}
In order to increase the observation time of a BEC on a small baseline, we have developed a simple but extremely valuable method to relaunch a BEC using Bloch oscillations in a retroreflected or dual lattice. Our procedure differs substantially from previous ones, which either rely on (i) two crossed beams reflected from a mirror
surface~\cite{Sugarbaker14PHD}, (ii) two opposing beams~\cite{Andia13PRA},
(iii) velocity selection from a molasses~\cite{Charriere12PRA,Altin13NJP,Mazzoni15PRA}, or (iv) the transfer of only a few photons
from a standing wave~\cite{Impens06APB,Hughes09PRL}. Instead, we employ a retro-reflected optical lattice, which is a common configuration in atomic sensors.
The novelty, and at the same time the challenge, of our method is that for the experiment we use only a {\it single} beam along the vertical axis with linearly polarized laser light of two co-propagating frequency components. They form in total four lattices:
two moving lattices with opposite velocity, and two additional ones at rest.
We perform the relaunch procedure in three steps to avoid a zero-crossing of the velocities of the lattices:
(i) a lattice deceleration, (ii) a momentum inversion pulse, and (iii) a lattice acceleration.
Our sequence starts by loading the atoms adiabatically into one of the lattices where the Bloch oscillations for deceleration are performed
until the atomic motion is almost stopped. The atoms are then adiabatically unloaded from the lattice with only a few~$\hbar k/m$ of residual velocity.
After a short waiting time to carefully match the resonance condition to the velocity of the atoms, a higher-order double Bragg diffraction pulse is applied, which inverts the momentum.
Finally, a second lattice acceleration sequence follows, which precisely speeds up the atomic ensemble to launch it on a parabolic trajectory with an adjustable apex close to the atom-chip surface.
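Two simple numbers make the division of labor between Bloch oscillations and the Bragg inversion concrete. Each Bloch oscillation transfers $2\hbar k$, while the inversion pulse only has to bridge the small residual velocity. The sketch below assumes $^{87}$Rb at 780\,nm and, for illustration, a $16\,\hbar k$ inversion pulse (a pulse of this order appears in the data discussed below) and a fall velocity of 1\,m/s.
\begin{verbatim}
import scipy.constants as const

lam = 780e-9                                # assumed wavelength (Rb)
m = 86.909 * const.atomic_mass              # assumed mass of 87Rb
hbark = const.hbar * 2 * const.pi / lam     # single-photon momentum

# (ii) a 16 hbar*k double Bragg pulse inverts -8 hbar*k/m to +8 hbar*k/m
v_res = 8 * hbark / m
print(f"residual velocity at inversion: {1e3 * v_res:.0f} mm/s")  # ~47 mm/s

# (i) and (iii): each Bloch oscillation transfers 2 hbar*k
v0 = 1.0                                    # assumed fall velocity in m/s
n_bloch = m * v0 / (2 * hbark)
print(f"Bloch oscillations to stop the fall: {n_bloch:.0f}")      # ~85
\end{verbatim}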
\begin{figure}[h]
\includegraphics[width=\linewidth]{relaunch_sequence.pdf}
\caption{Relaunch of atoms in a retro-reflected optical lattice represented by an amplitude (a) and frequency modulation (b) of the lattice and compared to a single-lattice acceleration together with density plots (c) of the final momentum distribution. The relaunch sequence circumvents the zero-velocity crossing of the dual lattice
by a double Bragg pulse inverting the momentum. The two frequency components of the dual lattice are depicted in red and blue. In (c) we compare the momentum inversion with a 16\,$\hbar k$
double Bragg diffraction pulse (middle) to the one after the deceleration (left), and a simple sweep through resonance
with Bloch oscillations (right). This figure is an adaptation of Figs.~5.17 and~5.25(b) in Ref.~\cite{Abend17}.}
\label{fig:relaunch_sequence}
\end{figure}
In Fig.~\ref{fig:relaunch_sequence}(a) and (b) we show the amplitude and frequency modulation of the dual-lattice light used to perform
the relaunch sequence. Compared to a single-lattice acceleration this scheme is rather complex,
since the additional lattices from the retro-reflection are shifted out of resonance by the Doppler effect of the falling atoms. Unfortunately, this effect does not allow the lattices to be accelerated in such a way that the atomic ensemble crosses the zero-momentum state
without losing a major fraction of the atoms, as depicted in Fig.~\ref{fig:relaunch_sequence}(b).
These losses result from the fact that when the atoms are at rest,
the two moving optical lattices are both on resonance with the atoms --- one attempting to move the atoms upwards, and the other one to move them downwards.
This feature, which resembles double Bragg diffraction, reduces the fraction of atoms that are launched upwards to about one-half of the total atom number.
Moreover, when the velocity of the atoms almost vanishes, non-adiabatic transitions arise due to parasitic acceleration in the non-resonant lattice. They remove atoms from the upward moving lattice, and further reduce the number of launched atoms to about one quarter.
Fortunately, a combination of Bloch oscillations in an optical lattice together with higher-order Bragg diffraction prevents these losses.
In this scheme most of the momentum needed to stop and launch the atoms is transferred via Bloch oscillations with an efficiency close to unity. Since only a small fraction of the momentum has to be transferred by a single Bragg pulse, this sequence maintains a high overall efficiency.
\subsubsection{Experimental sequence of the atom-chip gravimeter}
Bragg diffraction combined with the relaunch allows us to implement a sensitive gravimeter with the atom chip.
Figure~\ref{fig:fountaingravi_sequence} shows the spacetime diagram of the fountain geometry.
Subsequent to the adiabatic rapid passage, we also perform a Stern-Gerlach-type deflection by a magnetic field pulse with a duration of~$\tau_{\mathrm{SG}}=7$\,ms using the Z-wire on the chip.
In this way we remove the atoms remaining in magnetically sensitive states, leading to an enhanced contrast.
The maximum value of the preparation time~$\tau_{\mathrm{prep}}$ is limited to 34\,ms
due to the end of the detection region $7$\,mm below the chip.
The relaunch process has an overall duration of $\tau_{\mathrm{launch}}=2.9$\,ms.
With a relaunch realized after the largest waiting time of~$\tau_{\mathrm{prep}}=33.2$\,ms, the total time of flight~$\tau_\mathrm{ToF}$ after the initial release of the atoms is greatly increased to $\tau_\mathrm{ToF}=97.6$\,ms.
The interferometer sequences start after the atoms have been launched on their fountain trajectories.
The final waiting time~$\tau_{\mathrm{sep}}$ after~$\tau_\mathrm{ToF}>90$\,ms to separate the output ports can be reduced
from~$\tau_{\mathrm{sep}}\ge20$\,ms to~$\tau_{\mathrm{sep}}\ge10$\,ms provided DKC is used.
The remaining time~$2T\equiv\tau_{\mathrm{ToF}}-\tau_{\mathrm{prep}}-\tau_{\mathrm{launch}}-\tau_{\mathrm{sep}}<51$\,ms can be entirely used
for the interferometry, which allows us to use a pulse separation time as large as $T=25$\,ms.
The limit in $T$ on our current baseline of 7\,mm is reached with $T=25$\,ms. Any further extension beyond this value
would result in a reduced contrast due to insufficient port separation.
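The timing budget can be verified with one line of arithmetic; the following sketch merely recombines the numbers quoted above.
\begin{verbatim}
# all times in ms, taken from the text
tof, prep, launch, sep = 97.6, 33.2, 2.9, 10.0

two_T = tof - prep - launch - sep
print(f"2T = {two_T:.1f} ms, i.e. T = {two_T / 2:.2f} ms")
# -> 2T = 51.5 ms; the small margin beyond the quoted bound of 51 ms
#    is presumably absorbed by pulse durations and technical delays.
\end{verbatim}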
\begin{figure}[h]
\includegraphics[width=\linewidth]{fountaingravi_sequence.pdf}
\caption{Atom-chip gravimeter with an extended free-fall time of up to $\tau_{\mathrm{ToF}}\approx100$\,ms represented
in spacetime (left) and space (right). The preparation of the atomic ensemble is performed during $\tau_{\mathrm{prep}}$
before the relaunch. The elongated free-fall time allows us, after the expansion time $T_0$, to employ DKC (for the time $\tau_{\textrm{DKC}}$) as well as adiabatic rapid passage (for the time $\tau_{\textrm{ARP}}$) and Stern-Gerlach-type
deflection (for the time $\tau_{\textrm{SG}}$) to reduce the expansion rate, and to remove atoms remaining in magnetically sensitive states.
The relaunch in the retro-reflected optical lattice is realized as described.
The MZI features up to third-order Bragg diffraction, pulse separation times up to $T=25\,$ms, and a detection
after a separation time of~$\tau_{\mathrm{sep}}\ge10$\,ms. This figure is reproduced from Fig.~6.3 in Ref.~\cite{Abend17}.}
\label{fig:fountaingravi_sequence}
\end{figure}
State-of-the-art Raman-type gravimeters routinely operate with pulse separation times of 70\,ms~\cite{Fang16CS}
or larger~\cite{Hu13PRA,Freier16CS}. To further increase the scale factor, not only first-order but also higher-order Bragg diffraction
can be implemented in the MZI. At the moment third-order Bragg diffraction has an efficiency of above $90\%$.
With future improvements, fourth-order Bragg diffraction alone will compensate for a decrease of the pulse
separation time~$T$ by a factor of two.
To perform higher-order Bragg diffraction, we have used shorter beam-splitter pulses, but at larger laser powers.
More specifically, we have employed Gaussian-shaped pulses of widths~$\sigma_{\tau}=12.5\,\upmu$s.
The first $\pi/2$-pulse of the MZI follows with a short delay of one millisecond after the relaunch
to maximize the time available for interrogation. The timing of the MZI needs to be placed {\it asymmetrically} around the apex of the fountain to avoid a $\pi$-pulse at the apex. There, both lattices are on resonance, and losses
due to double Bragg diffraction and standing waves would disturb the $\pi$-pulse.
As a consequence, the Doppler detuning $\delta$ of the $\pi$-pulse should satisfy the condition $\delta > 100$\,kHz,
or, correspondingly, the time difference to the apex should be of the order of $7$ to $8$\,ms.
Consequently, the separation time of the outputs always satisfies~$\tau_{\mathrm{sep}} >14$\,ms.
A larger momentum transfer slightly reduces the free-fall time, because depending on the direction
of the momentum transfer~$\pm \hbar k_{\mathrm{eff}}$ the atoms are either kicked towards the atom chip,
or downwards such that they leave the detection region faster. By choosing the momentum transfer
of the second and final lattice acceleration in the relaunch sequence according to the direction of
the momentum transfer of the beam splitter, the height of the parabola can be maximized.
As a consequence, the pulse separation time~$T$ can be held constant, independently of $\pm \hbar k_{\mathrm{eff}}$,
as depicted in Fig.~\ref{fig:fountaingravi_sequence}. In both cases the free-fall time~$\tau_{\mathrm{ToF}}$ of the atoms is slightly reduced due to the recoil imprinted during the beam-splitting process.
\subsubsection{Analysis of the interferometer output}
Since for pulse separation times $T>5$\,ms the background vibration level
leads to a complete loss of the fringe pattern~\cite{Mcdonald13PRA}, we can no longer employ the common fringe-fit method, or an Allan deviation analysis.
Even when the laser phase and the chirp rate are identical in subsequent measurements,
the output phase scatters over multiple $2\pi$-intervals, as illustrated in Fig.~\ref{fig:fountaingravi_results}(a).
As a consequence, at these high levels of sensitivity the readout of a gravity-induced phase shift is impossible without
additional information about the vibrations during the interferometry, such as a correlation with seismic data.
We emphasize that the beam splitters still operate with high fidelity, and oscillations between the output ports are clearly visible.
However, a simple peak-to-valley or standard-deviation calculation over- or underestimates the contrast, and does not yield
a useful noise analysis for the output signals.
One method to solve the problem of distinguishing between useful contrast~$C$,
and technical noise~$\sigma_{\Delta\phi}\equiv\sigma_P/C$, rests on a histogram analysis revealing the contrast of an interferometer
without relying on fringe visibility~\cite{Geiger11NatureComm}. Here $\sigma_P$ denotes the fluctuation in the measured population.
In this approach the output signals of a data set are first split into bins of equal size containing the normalized population~$P$,
and then the data points within each interval are counted. The resulting histogram shows a characteristic double-peak structure
reflecting the sinusoidal dependence of the interference signal. This structure follows from a simple noise model:
for a completely random signal, the probability of finding an output state with a normalized population
at the top or bottom of a sinusoidal fringe pattern is larger than in the middle.
We emphasize that this method requires sufficient statistics over several hundred experimental cycles with a stable signal.
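The model can be illustrated with a few lines of code: propagating a uniformly distributed phase through a sinusoid yields an arcsine-like density that peaks at the turning points. The offset and the technical noise below are assumed values; the contrast corresponds to the one reported later for the fountain MZI.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=1)

P0, C = 0.5, 0.86      # assumed offset; contrast as reported below
sigma_P = 0.01         # assumed technical noise on the population

phi = rng.uniform(0.0, 2.0 * np.pi, size=1000)  # vibration-scrambled phase
P = P0 + 0.5 * C * np.sin(phi) + rng.normal(0.0, sigma_P, size=phi.size)

counts, edges = np.histogram(P, bins=30)
# counts peaks near P0 +/- C/2 and dips in between: the double-peak shape
\end{verbatim}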
\begin{figure}[h]
\includegraphics[width=0.8\linewidth]{fountaingravi_results.pdf}
\caption{Extraction of the interferometer phase from a noisy signal with the help of the probability density.
Vibrational noise completely washes out a sinusoidal signal~(a), and only fluctuations are left to be measured
in the normalized population between the two output ports. The corresponding histogram exhibits
a characteristic double-peak structure from which we extract the contrast~$C\equiv A/P_0$ given by the amplitude~$A$ of the sinusoidal signal and its mean $P_0$. The output (b) of an MZI in a fountain geometry
using delta-kick collimated ensembles with a pulse separation time of $T=25$\,ms and
first-order Bragg diffraction resembles noise after roughly 1\,000 measurements have been taken.
Nevertheless, we can obtain a contrast of $C=0.86$ and a technical noise~$\sigma_{\Delta\phi}$ close to the shot noise from the histogram. This figure is an adaptation of Figs.~6.11 and~6.17(c) in Ref.~\cite{Abend17}.}
\label{fig:fountaingravi_results}
\end{figure}
We extract the contrast~$C\equiv A/P_0$ from a fit of the distribution according to Fig.~\ref{fig:fountaingravi_results}(b), with the amplitude~$A$ of the signal and its mean $P_0$. As input for the fitting routine a kernel density estimation (KDE)
of the data points is used, rather than the histogram
itself\footnote[5]{This evaluation yields a slightly better intrinsic sensitivity compared to
the value published in Ref.~\cite{Abend16PRL}.}.
In a KDE each data point is weighted with a Gaussian function of a fixed width~$\sigma_{\mathrm{KDE}}=0.01$.
All Gaussian functions are then added and the signal is normalized such that in the end a continuous distribution
properly reflects the density of data points without loss of information.
For the width~$\sigma_{\mathrm{KDE}}$ it is only important to choose a value smaller than the expected technical noise.
If the histogram were instead fitted with a small number~$n_{\mathrm{bin}}$ of bins, it would yield insufficient information
for a proper fit, leading to a larger uncertainty in the extracted parameters.
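Such a KDE can be generated, for instance, with scipy. One caveat: scipy's gaussian_kde interprets a scalar bandwidth as a factor multiplying the standard deviation of the data, so the fixed absolute width $\sigma_{\mathrm{KDE}}=0.01$ has to be converted accordingly. The synthetic data set below merely stands in for measured normalized populations.
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(seed=2)
phi = rng.uniform(0.0, 2.0 * np.pi, 1000)     # stand-in for measured data
P = 0.5 + 0.43 * np.sin(phi) + rng.normal(0.0, 0.01, 1000)

sigma_kde = 0.01                              # fixed absolute kernel width
kde = gaussian_kde(P, bw_method=sigma_kde / P.std(ddof=1))

grid = np.linspace(0.0, 1.0, 500)
density = kde(grid)    # continuous distribution used as input for the fit
\end{verbatim}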
In our fountain geometry the expected gain in contrast is between 5\% and 10\% and results from two factors: (i) We can employ delta-kick collimated BECs with ultra-slow expansion and (ii) the Stern-Gerlach-type deflection purifies the magnetic sub-states. With this configuration the pulse separation time can even be extended to $T=25$\,ms.
At this time, the output ports are still separated due to the smaller final size of the clouds.
The measurements and evaluation depicted in Fig.~\ref{fig:fountaingravi_results}(b) are for our MZIs formed by
first-order Bragg diffraction and $T=25$\,ms. The histogram analysis reveals that the contrast remains at $C=0.86$.
The technical noise level~$\sigma_{\Delta\phi}=14$\,mrad is extracted from the widths of the outer peaks in the fit
to the density distribution, which is close to the calculated shot noise of~$11$\,mrad for $N=8\,000$ atoms.
This remarkable result is due to the interplay between the high-fidelity Bragg diffraction and the DKC.
These ultra-slow expansion rates allow for even longer flight times, and also give rise to a boost in sensitivity. Indeed, the largest intrinsic sensitivity~$\Delta g/g = 1.4\cdot 10^{-7}$ was observed
after a time of flight of $\tau_{\mathrm{ToF}}=97.6$\,ms at a noise level of $\sigma_{\Delta\phi}=14$\,mrad.
This achievement represents an important step towards compact but precise sensors.
\section{Outlook}
\label{sec:Outlook}
Atom interferometry is a cornerstone of precision measurements with a wealth of promising applications. In particular, we expect atomic gravimeters based on BECs to reach sub-$\upmu$Gal accuracies in the near future.
In Sec.~\ref{sec:systematics} we highlight the origins of measurement uncertainties, and present mitigation strategies
for future devices.
The tools and methods for BEC interferometry outlined in these lecture notes have opened the path towards significantly enhanced scale factors due to an extended free evolution time.
For this reason we focus in Sec.~\ref{sec:VLBAI} on very long baseline atom interferometers. Moreover, we devote Sec.~\ref{sec:space}
to space-borne devices and analyze their potential for future gravity measurements as well as tests of fundamental physics
such as the UFF.
\subsection{Reduced systematic uncertainties in future devices}
\label{sec:systematics}
The main drive for sensors based on BECs is the reduction of systematic uncertainties. We now assess the potential of an atom-chip gravimeter to reach sub-$\upmu$Gal accuracies. For this purpose, we identify in Table~\ref{tab:futuresystematics} the origins of the largest contributions to the measurement uncertainty and suggest mitigation strategies.
\begin{table}[h]
\centering
\small\renewcommand{\arraystretch}{1.4}
\begin{tabular}{| p{2.8cm} | p{5.45cm} | c | c|}
\hline
Contribution due to & Mitigation strategy & Noise & Bias\\
&&$(\Delta g/g)/\sqrt{\mathrm{Hz}}$& $\Delta g/g$\\
\hline
Intrinsic sensitivity &Next generation source~\cite{Rudolph15NJP}&$5.3\cdot 10^{-9}$&$0$\\
Mean-field shift &Tailored expansion and DKC~\cite{Muentinga13PRL,Kovachy15PRL} &$1.5\cdot 10^{-10}$&$6.4\cdot 10^{-11}$\\
Launch velocity &Scatter $70\, \upmu\mathrm{m}/\mathrm{s}$, stability $15\, \upmu\mathrm{m}/\mathrm{s}$~\cite{Louchet11NJP}&$1.5\cdot 10^{-12}$&$3.1\cdot 10^{-13}$\\
Wavefront quality &$\lambda / 10$ chip-coating, $\oslash = 2$\,cm beam~\cite{Tackmann12NJP} &$6.7\cdot 10^{-10}$&$2.8\cdot 10^{-10}$\\
Self gravity &Detailed modeling of chip mount~\cite{Dagostino11Met}&$1.2\cdot 10^{-12}$&$5\cdot 10^{-10}$\\
Light-shifts &Suppressed in Bragg diffraction~\cite{Giese16PRA}&$1.4\cdot 10^{-12}$&$1.4\cdot 10^{-10}$\\
Magnetic fields &Three-layer magnetic shield~\cite{Milke14RSI}&$1\cdot 10^{-10}$&$2.6\cdot 10^{-10}$\\
\hline
Target estimation &Uncertainty after less than 100\,s&\multicolumn{2}{r|}{$\approx 7.8\cdot 10^{-10}$}\\
\hline
\end{tabular}
\caption{Estimates of the major systematic uncertainties in the atom-chip gravimeter.
As a result, the determination of local gravity with a relative accuracy $\Delta g/g < 1 \cdot 10^{-9}$
in less than $100\,\textup{s}$ seems possible~\cite{Abend16PRL}. This table is a reproduction of Table~6.9 in Ref.~\cite{Abend17}.}
\label{tab:futuresystematics}
\end{table}
To get a realistic estimate of the dominant uncertainties for a future experiment, our calculations~\cite{Rudolph15NJP} use a state-of-the-art flux of $10^5$ atoms per second at a repetition rate of 1\,Hz. For a free-fall distance of 1\,cm the free-fall time increases to~$\tau_{\mathrm{ToF}}=135$\,ms. If the needed detection separation time stays at~$\tau_{\mathrm{sep}}>15$\,ms, a maximum pulse separation time of~$T=35\, \mathrm{ms}$
remains, which combined with a fourth-order beam splitter leads to the shot-noise
limited intrinsic sensitivity \cite{Abend16PRL}
\begin{equation}
(\Delta g/g)/\sqrt{\mathrm{Hz}} = 5.3\cdot 10^{-9}\,.
\end{equation}
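This estimate can be retraced from the shot-noise phase uncertainty $\sigma_{\Delta\phi}=1/(C\sqrt{N})$ and the scale factor $k_{\mathrm{eff}}T^2$ of an $n$-th order Bragg MZI. The sketch below uses the contrast of $C=0.7$ assumed for the detection estimate further down; the result lands within about ten percent of the quoted value, the remainder presumably reflecting details of the original estimate.
\begin{verbatim}
import numpy as np

lam = 780e-9
k = 2 * np.pi / lam
n = 4                      # fourth-order beam splitter
keff = 2 * n * k           # momentum transfer 2n*hbar*k
T = 35e-3                  # pulse separation time in s
N = 1e5                    # atoms per shot at 1 Hz repetition rate
C = 0.7                    # assumed contrast (see detection estimate below)
g = 9.81

sigma_phi = 1.0 / (C * np.sqrt(N))          # shot-noise phase uncertainty
dg_over_g = sigma_phi / (keff * T**2) / g   # per shot = per sqrt(Hz) at 1 Hz
print(f"dg/g per sqrt(Hz): {dg_over_g:.1e}")  # ~5.8e-9, close to 5.3e-9
\end{verbatim}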
The flux of $10^5$ atoms/s achieved in the QUANTUS-2 experiment is sufficient~\cite{Rudolph15NJP}
to reach this inferred sensitivity and a cycle time of roughly 1\,s.
In addition, we need to be able to detect atoms at the output ports at the shot noise limit,
which corresponds to~$4.5$\,mrad for this atom flux and cycle time assuming a contrast of $C=0.7$.
The vibrational background remains the crucial noise source to be mitigated.
A state-of-the-art vibration isolation would significantly improve the sensitivity, although maximum performance may only be reached
at a vibrationally quiet site~\cite{Tang14RSI}.
The mean-field shift can be reduced by first lowering the atomic density through a faster spreading of the wave packet
during the $45$\,ms after release from the trap but before the relaunch, and by then stopping the expansion via DKC~\cite{Muentinga13PRL,Kovachy15PRL}. For a final size of $300$\,$\upmu$m at the first pulse,
$10^5$ atoms, and a splitting-ratio stability of 1\%, phase shifts introduced by the mean field~\cite{Debs11PRA}
can be sufficiently suppressed below the $\upmu$Gal level, while expansion rates corresponding to nK temperatures,
which preserve the beam-splitter fidelity, are achievable.
Fluctuations in the launch velocity, which cause a bias due to the Coriolis effect
or gravity gradients~\cite{Louchet11NJP,NJP14}, can be characterized to the required level and optimized by the tested release procedure.
The measured scatter of 70\,$\upmu$m/s and the stability of 15\,$\upmu$m/s of the launch velocity are sufficient to suppress this shift.
The surface quality of the atom chip is crucial for preserving
the high efficiencies and contrasts obtained for lower and higher-order Bragg diffraction and for Bloch oscillations.
It must be significantly improved for a device of the next generation.
Indeed, a residual roughness of $\lambda/10$ typical for a standard mirror is assumed here.
For a beam with a diameter of 2\,cm the phase shifts resulting from the wavefront curvature are insignificant,
since BECs are smaller and expand more slowly than thermal clouds~\cite{Tackmann12NJP,Schkolnik15APB}.
Furthermore, the possibility of analyzing the fringe patterns in the density profiles at the exit ports~\cite{Muentinga13PRL,Sugarbaker13PRL,NJP14} may allow the characterization of systematic errors
arising from wavefront distortions.
The proximity of the atoms to the chip leads to a bias phase shift caused by the gravitational field~\cite{Dagostino11Met} of the chip.
A mass reduction of the chip mount by a factor of two, combined with a finite-element analysis of the mass distribution which calculates the self-gravity effect with an accuracy of 10$\%$, is sufficient
to reach the target level.
Compared to Raman diffraction the influence of light shifts is reduced in MZIs based on Bragg diffraction.
Since the two-photon light shift scales~\cite{Giese16PRA} with the third power of the inverse of the atomic velocity,
it is negligible in the fountain geometry.
Finally, a three-layer shield instead of a single-layer one,
resulting in a residual gradient below $10\pm 3$\,mG/m, should be sufficient to suppress any residual bias~\cite{Milke14RSI}.
\subsection{Very long baseline atom interferometry}
\label{sec:VLBAI}
Apart from high-contrast interferometry, delta-kick collimated BECs with effective temperatures below 1\,nK enable extended
free-evolution times, which can significantly increase the scale factor $kT^2$ for acceleration measurements.
Ground-based setups require a large vacuum vessel to venture into the regime of seconds~\cite{Kovachy15Nature,Hardman16PRL,Zhou11GRG}. For example, a device with a height of 10\,m implies a total free-evolution time $2T=2.8$\,s when operated in the fountain mode \cite{Hartwig15NJP}.
This scenario would increase the scale factor by a factor of 25 to 300 compared to the current generation of gravimeters.
Indeed, with $10^5$ atoms, first-order Bragg diffraction, and a cycle time of 5.3\,s, the shot noise limit for full contrast
would be at $0.2\,\mathrm{nm/(s^2}\sqrt{\mathrm{Hz}})$, which is competitive with the superconducting gravimeter
GWR iOSG~\cite{Prothero68RSI}. In contrast to the latter, VLBAI also provides us with absolute measurements and possible further enhancements via large momentum transfer.
Environmental vibrations impose a typical limit in gravimeters, preventing the utilization of larger scale factors.
Therefore, a sophisticated vibration isolation, correlation with external sensors,
or a combination of both is required~\cite{Geiger11NatureComm,Barret15NJP,LeGouet08APB}.
Measurement schemes using gravity gradiometers, or testing the UFF, intrinsically suppress
the impact of vibration noise, since the relevant quantity is encoded in the differential acceleration between two atom
interferometers~\cite{Bonnin13PRA,McGuirk02PRA}. Indeed, for gradiometry, ensembles from {\it two} different sources
can be injected into interferometers, or two ensembles can be generated from a {\it single} source
via a large momentum beam splitting process~\cite{Asenbaum17PRL}. The latter approach implies a well-defined distance between the two interferometers, which avoids noise contributions from relative position jitter due to the outcoupling from two different sources, and reduces systematic errors. Operating the interferometer with $10^5$ atoms divided into two ensembles, first-order Bragg diffraction in the interferometer,
a total interferometer time $2T=1$\,s, a cycle time of 4\,s, and a baseline of 5\,m between the interferometers
would lead to a shot noise limit of $6.3\cdot10^{-10}\,\mathrm{/(s^2}\sqrt{\mathrm{Hz}})$ for gravity gradients. Further improvements are possible by upgrading the first-order Bragg diffraction to large momentum transfer.
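The gradiometer number follows from the same shot-noise reasoning applied to two uncorrelated interferometers, here assuming full contrast:
\begin{verbatim}
import numpy as np

lam = 780e-9
keff = 2 * 2 * np.pi / lam     # first-order Bragg: 2*hbar*k transfer
T = 0.5                        # pulse separation time, 2T = 1 s
N = 5e4                        # 1e5 atoms split into two ensembles
Tc = 4.0                       # cycle time in s
L = 5.0                        # baseline between the interferometers in m

sigma_phi = np.sqrt(2) / np.sqrt(N)     # differential, two ensembles, C = 1
da = sigma_phi / (keff * T**2)          # differential acceleration per shot
gradient_asd = da * np.sqrt(Tc) / L     # amplitude spectral density per m
print(f"{gradient_asd:.1e} 1/(s^2 sqrt(Hz))")   # -> 6.3e-10
\end{verbatim}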
A test of the UFF with $^{87}$Rb and $^{170}$Yb may reach the shot-noise limit of~$0.1\,\mathrm{nm/(s^2}\sqrt{\mathrm{Hz}})$ for measurements of differential accelerations, implying a statistical uncertainty of~$4\cdot 10^{-14}$ in the E\"otv\"os parameter after 24\,h of integration. This is competitive with experiments on the ground achieving~$10^{-13}$~\cite{Williams12CQG,Wagner2012CQGtorsion,Mueller12CQG}, and in space
reaching~$10^{-14}$~\cite{Touboul2017PRLMICROSCOPE}, with the added benefit of using different species.
The assumptions are $2\cdot10^5$ ($1\cdot10^5$) atoms, eighth- (fourth-) order Bragg transitions
at 780\,nm (at 399\,nm) for $^{87}$Rb ($^{170}$Yb), a total free evolution time $2T=2.6$\,s, and a cycle time
of 12.6\,s~\cite{Hartwig15NJP}.
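Assuming white-noise averaging, the statistical uncertainty in the E\"otv\"os parameter follows directly from the quoted noise level:
\begin{verbatim}
import numpy as np

asd = 0.1e-9           # differential acceleration noise in m/(s^2 sqrt(Hz))
t_int = 24 * 3600.0    # integration time of 24 h in s
g = 9.81

sigma_eta = asd / np.sqrt(t_int) / g    # 1/sqrt(t) averaging assumed
print(f"sigma_eta = {sigma_eta:.1e}")   # ~3.5e-14, consistent with 4e-14
\end{verbatim}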
In this dual-species comparison, the transfer functions of the two interferometers will not be ideally matched due to the different wave vectors,
requiring an auxiliary sensor to suppress vibration noise via cross correlation~\cite{Barret15NJP,barrett2016dual}.
Relevant systematics~\cite{Hartwig15NJP} originate from (i) wavefront errors~\cite{Louchet11NJP,Schkolnik15APB}, suppressed by the low and matched expansion rates, which have to be traded off against residual mean-field contributions~\cite{Debs11PRA},
(ii) magnetic field inhomogeneities affecting $^{87}$Rb~\cite{Gauguet09PRA,PhysRevD.78.122002} and reduced by a magnetic shield,
(iii) rotations, countered by a tip-tilt stage~\cite{Freier16CS,Dickerson13PRL,PhysRevLett.108.090402}, and (iv) gravity gradients coupling to the relative displacements of the two elements,
which have to be assessed with the device itself~\cite{hogan2008light}. In fact, the requirements on the relative position and velocity
of the two initial wave packets due to gravity gradients can be substantially relaxed thanks to an effective compensation technique
proposed in Ref.~\cite{PRL118}, which has been experimentally demonstrated both
for UFF tests \cite{PRL120} and gradiometry measurements \cite{PRL119}.
\subsection{Space-borne atom interferometers}
\label{sec:space}
The microgravity environment of a space-borne atom interferometer provides us with access to even longer free-evolution times. Moreover, no seismic noise disturbs the measurement.
Indeed, since both the device and the atomic ensemble are in free-fall, the movement of the atoms with respect to the potentials
for trapping and DKC is decreased, opening up a different parameter range
for reducing residual expansion rates and mean-field contributions.
The measurement may even benefit from a much smaller gravitational sag and the absence of a lattice launch. We also have the possibility of a signal modulation to suppress systematics
if the apparatus is inertially pointing~\cite{Aguilera14CQG,Williams16NJP}.
A test of the UFF in space could profit from all these advantages and go beyond an accuracy of about~$10^{-14}$.
The updated Space-Time Explorer and Quantum Equivalence Principle Space Test~(STE-QUEST) scenario~\cite{Wolf15RM},
based on a previous version of a dual-species interferometer
with $^{87}$Rb and $^{85}$Rb~\cite{Aguilera14CQG,Milke14RSI,Altschul15ASR,Schuldt15EA},
proposes a dual-species interferometer with $^{87}$Rb and $^{41}$K and a target uncertainty
in the E\"otv\"os parameter of $2\cdot10^{-15}$.
This goal assumes $10^6$ atoms of each species, beam splitters based on double Bragg diffraction~\cite{Leveque09PRL,Ahlers16PRL},
a total interferometer time of $2T=10$\,s, a cycle time of 20\,s, and a highly elliptical orbit
for the clock comparison part of the mission with a perigee of $\sim$2500\,km, an apogee of $\sim$33600\,km and an orbital period
of 10.6\,h. Around perigee, the instrument observes a strong signal, whereas around apogee it almost vanishes. Additional measurements
in between are utilized for calibration.
The target uncertainty is reached after 1.2\,years, limited by the small part of the orbit close to Earth.
A circular orbit in a dedicated mission could reduce this time to a few months.
\acknowledgments
We are most grateful to our colleagues H.~Albers, H.~Ahlers, S.~Arnold, D.~Becker, K.~Bongs, H.~Dittus, H.~Duncker, W.~Ertmer,
A.~Friedrich, N.~Gaaloul, M.~Gebbe, C.~Gherasim, E.~Giese, C.~Grzeschik, T.W.~H\"ansch, J.~Hartwig, O.~Hellmig,
W.~Herr, S.~Herrmann, E.~Kajari, S.~Kleinert, M.~Krutzik, R.~Kuhl, C.~L\"ammerzahl, W.~Lewoczko-Adamczyk, J.~Malcolm, N.~Meyer,
H.~M\"untinga, R.~Nolte, A.~Peters, M.~Popp, J.~Reichel, L.L.~Richardson, J.~Rudolph, M.~Schiemangk, M.~Schneider, S.T.~Seidel,
K.~Sengstock, G.~Tackmann, V.~Tamma, T.~Valenzuela, A.~Vogel, R.~Walser, T.~Wendrich, A.~Wenzlawski, P.~Windpassinger, W.~Zeller,
T.~van Zoest, who have worked with us very closely over the years on the topics presented in these lecture notes.
We acknowledge the support by the CRC 1227 DQmat, the CRC 1128 geo-Q, and the QUEST-LFS. The German Space Agency (DLR)
with funds provided by the Federal Ministry of Economic Affairs and Energy (BMWi)
due to an enactment of the German Bundestag under Grant Nos. DLR 50WM1137 (QUANTUS-IV-Fallturm) and 50WM1641 (PRIMUS-III) was central to our work. We also acknowledge support by the Federal Ministry of Education and Research (BMBF), by ``Wege in die Forschung (II)'' of Leibniz Universit\"at Hannover, and by ``Nieders\"achsisches Vorab'' through the ``Quantum- and Nano-Metrology (QUANOMET)'' initiative.
M.A.E. thanks the Center for Integrated Quantum Science and Technology (IQ$^{ST}$) for financial support.
W.P.S. is most grateful to Texas A\&M University for a Faculty Fellowship at the Hagler Institute for
Advanced Study at Texas A\&M University, as well as to Texas A\&M AgriLife Research.
F.A.N. acknowledges a generous grant from the Office of
the Secretary of Defense (OSD) in the Quantum Science and Engineering Program (qSEP).
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 623 |
Arnolds Spekke (also Arnolds Speke; 14 June 1887 in Vecmuižas pagasts, Russian Empire – 27 July 1972 in Washington, D.C., USA) was a Latvian diplomat and author of non-fiction works on the history of Latvia.
Spekke received a doctorate in philology from the University of Latvia in 1927. In 1932 he received a scholarship from the Rockefeller Foundation and studied in Poland and Italy. From 1933 to 1939 he was the Latvian ambassador to Italy, Greece, Bulgaria, and Albania, with permanent residence in Rome, Italy.
On 27 July 1940 Spekke lodged his protest against the Soviet occupation of Latvia by delivering a note to the Italian government. On 9 August 1940 Spekke submitted his resignation, and 11 August 1940 was his last working day at the Latvian embassy in Rome. He subsequently worked as a teacher, librarian, and translator, and took on other odd jobs in Milan and Rome.
From 1945 to 1950 Spekke worked for the Latvian Committee in Rome, and in 1951 he took part in the founding meeting of the Latvian Freedom Committee. From April 1954 Spekke was chargé d'affaires and head of the Latvian embassy in Washington, D.C.; from early June 1954 he was also consul general of Latvia in the USA. From May 1963 Spekke headed the Latvian diplomatic and consular service. In 1970 Spekke retired from his post.
In 1928 Arnolds Spekke was made a Commander of the Order of the Three Stars, and in 1929 a Knight of the Order of the Polar Star; he also received French and Polish decorations. After the Second World War, Spekke wrote more than 15 important works on the history of Latvia and on humanists from Livonia.
Bibliography
(incomplete)
1951, "History of Latvia"
1955, "Latvia and the Baltic problem"
1957, "The ancient amber routes and geographical discovery of the Eastern Baltic"
1959, "Baltijas jūra senajās kartēs"
1961, "The Baltic Sea in ancient maps"
1962, "Some problems of Baltic-Slavic relations in prehistoric and early historical times"
1962, "Senie dzintara ceļi un Austrum-Baltijas g̀eografiska atklašana"
1965, "Balts and Slavs"
1965, "Ķēniņa Stefana ienākšana Rīgā un cīņas par Doma baznīcu"
1967, "Atminu brīži"
1995, "Latvieši un Livonija 16. gs."
References
Diplomats from Latvia
Non-fiction writers from Latvia
Commanders of the Order of the Three Stars
Knights First Class of the Order of the Polar Star | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 9,000 |
ALSO BY JAMES SIECKMANN
Your Short Game Solution
an imprint of Penguin Random House
375 Hudson Street
New York, New York 10014
Copyright © 2016 by James Sieckmann
Original photography copyright © by Angus Murray
Penguin supports copyright. Copyright fuels creativity, encourages diverse voices, promotes free speech, and creates a vibrant culture. Thank you for buying an authorized edition of this book and for complying with copyright laws by not reproducing, scanning, or distributing any part of it in any form without permission. You are supporting writers and allowing Penguin to continue to publish books for every reader.
Most Avery books are available at special quantity discounts for bulk purchase for sales promotions, premiums, fund-raising, and educational needs. Special books or book excerpts also can be created to fit specific needs. For details, write SpecialMarkets@penguinrandomhouse.com.
eBook ISBN 978-0-698-40410-6
Version_1
To Michele, Hannah, and Samuel; you fill my life with love and inspire me to do better.
# Contents
Also by James Sieckmann
Title Page
Copyright
Dedication
Acknowledgments
Foreword
Prologue
CHAPTER 1
What Comes First: A Study of Skill
CHAPTER 2
How to Assess Putting Skill and Avoid the Pitfalls That Thwart Improvement
CHAPTER 3
Skill 1: Starting the Ball On Line (Pre-Stroke Foundations)
CHAPTER 4
Skill 1: Starting the Ball On Line (In-Stroke Foundations)
CHAPTER 5
The Feedback Loop—Confirming Your Foundations
CHAPTER 6
Skill 2: Green-Reading Facts and Processes
CHAPTER 7
Skill 3: Touch and Feel
CHAPTER 8
Skill 4: The "It" Factor
CHAPTER 9
The Art of Effective Training
CHAPTER 10
Proven and Effective Skill-Training Programs
CHAPTER 11
Tour Confidential—Learning from the Best
Photos
# Acknowledgments
During my younger days, I dreamed of playing on the PGA Tour, filling TV screens across the country with images of me holing putts and raising major trophies. But God Almighty had another plan for me—he knew my calling, selected my path, and deserves all the praise and glory. My blessings as a husband, father, and coach are immeasurable. I love what I do and can't wait to take on the challenges and opportunities that each teaching day holds. Oddly, doing what I can to help others achieve their playing dreams—whether it's one of my professional clients winning a tour event, an amateur player earning an athletic scholarship, or one of my academy students holding the member-guest trophy—has been more fulfilling than any fleeting success I enjoyed as a pro.
The best part of being a coach and a golfer is clearly the great people I meet every day. Thank you to my many loyal clients and friends—I truly appreciate your trust. I owe a special thanks to my first pro student, Tom Pernice, Jr., who is a short-game genius and ardent supporter. Tom, you have a huge heart and I clearly wouldn't be where I am today without you.
Thank you to Dr. Greg Rose, co-founder of the Titleist Performance Institute; Steve Shanahan, owner of Shadow Ridge Country Club and godfather to my first born; and Ben Crane, Cameron Tringale, Charley Hoffman, Kevin Chappell, Charlie Wi, and all the PGA and LPGA Tour players who have given me their trust and loyalty. I appreciate each and every one of you.
I've received a tremendous amount of support from other coaches, many of whom have become close friends. Thank you for your willingness to share your thoughts and embrace my message. I am humbled by your friendships and support.
Special thanks to my editor, David DeNunzio—you have a true gift for prose and composition as well as great taste in music. It's amazing that every lyric in every song rhymes with "DeNunz." It was enjoyable having you by my side.
# Foreword
James Sieckmann has been my short-game coach since the summer of 1996. It's rare for a Tour player to have an uninterrupted twenty-year relationship in the golf business, whether it's with a manufacturer, trainer, caddie, or coach. We've both worked hard and have been good for each other. We both know how challenging golf can be. It hasn't always been easy for me—I've had some trying times as a Hogan (now Web.com) Tour, PGA Tour, and currently, Champions Tour player. Despite the struggles, I've carded six professional victories, and I owe a large part of this success to my relationship with James. Regardless of where I am or how I've played, James has always been someone I can rely on.
I'm excited that he's sharing his putting concepts with you, because it's so easy to get distracted and lost along the way. I see it all the time in the pro-ams that I play in each week. There are plenty of people with talent and experience that should do well but are suffering on the greens because they have no idea what to do, despite trying everything under the sun. Thankfully, James has kept me on the right path for nearly two decades. I know that if you have the correct information and approach to putting, you'll play better and enjoy the game more.
Because James knows what it's like to play for a paycheck and pay your bills with your performance, he's empathetic and "all in" for his students. He also has the unique gift of being able to understand the science and fundamentals without losing sense of the artistic part of being a great performer. Although he demands discipline from his students, he has a knack for boiling mechanics down to one or two simple keys, so you can free up your stroke and simply go "back and through." I've executed the same set of fundamentals to the best of my abilities since day one. Because of this, and because I've been organized in my training and thoughts, I've had weeks where I putted as well as anyone on any tour in the world.
Tom Pernice after sinking the winning putt at the 2014 Charles Schwab Cup Championship. Photo courtesy of PGA Tour.
It doesn't surprise me that my partnership with James has stood the test of time. He's completely committed to being part of the solution for me. I'm confident that you'll get a lot out of this book, and I'm even more sure that you'll improve your game. Good luck, and enjoy the process!
TOM PERNICE, JR.
Murrieta, California
April 17, 2015
# Prologue
Soon after I agreed to write _Your Putting Solution_ , I shared the news with PGA Tour player Cameron Tringale, whom I've taught since 2012. Cameron's response wasn't what I expected—he laughed. "James," he said, "how are _you_ going to fill up an entire book on putting? You always make it out to be so simple!" Panic and doubt immediately struck. Cameron was right, but as I began to draft an outline and jot down topics to discuss, I realized there was more to my putting system than either Cameron or I had realized. In the end, it was harder to decide what to omit than what to publish.
Now that the book is complete, I view Cameron's gut-check reaction as a compliment. I'm proud that my approach is considered "simple." As a coach and former playing professional, I know that simple is better when you're on the greens. Simplicity breeds clarity, and clarity fuels confidence. I could easily unleash a trove of knowledge and opinions on almost every topic that affects putting, but my job is to make you a great putter, not a great putting coach. As the great Bobby Jones once said, "the less you have to think about, the better you'll play."
Every golfer is different and has a unique set of problems. I have no idea what your stroke looks like. I couldn't fathom a guess as to your weaknesses or miss tendencies. But good putting isn't always about great technique. It's more about developing a clear approach to getting better. I've organized this book to reflect these truths. As such, you'll quickly learn what's necessary for _you_ to improve and grow in confidence on the greens. The plan? Refocus whatever putting concepts and misconceptions you have swirling in your brain into a simple, clear, effective, and easy-to-execute plan.
Welcome to _Your Putting Solution._
CHAPTER
1
What Comes First: A Study of Skill
When it comes down to it, the only thing that matters is making putts.
Although performance often follows form, and I'm excited by the opportunity to share my knowledge about the fundamentals of putting with you, this is not your typical how-to book. At no point will I say, "Master this special technique and you'll putt your best forever." To view putting from this perspective is a mistake that ultimately leads to inconsistency and frustration. Instead, think of _Your Putting Solution_ as a "Can-You?" book. Can you do the things that are necessary to excel on the greens? And if not, what can you do to change your fortune? These are questions of skill, not technique, and that's something that I failed to understand during my playing days. Nor did I fully grasp it in the first decade of my teaching career. I get it now, and this knowledge has made me a more effective coach while simplifying the learning process for my students. The moment you embrace putting as a set of "skills" instead of hard-set mechanics is the moment you open the door to long-term growth.
STEVE STRICKER'S "UGLY" STROKE
The following story should help prove my point. It was December of 2010, and I was sitting in the back row at a Titleist Performance Institute (T.P.I.) Level 3 Golf Professional Certification class in Orlando, Florida, listening to my good friend and T.P.I. co-founder Greg Rose lecture on the particulars of putting. (I was scheduled to share my opinions on wedge play following Greg.) About halfway through his talk, a hand shot up from the middle of the audience. With a heavy German accent, the person who raised his hand asked Greg to share his thoughts on the ideal putting-stroke shape. Specifically, he wanted to know if the correct path was one in which the putterhead traveled on an extension of the target line with the face remaining square from start to finish, or one that swung back and through on a slight arc, or one that followed the shaft plane with the face rotating noticeably open and closed.
Greg answered, "It doesn't matter. It's not about that." This wasn't the answer the man wanted or expected, so he pressed on. "No," he said, "one of those strokes has to be the best, which means the others must be wrong." Greg paused; I'm sure he was trying to think of the best way to get out of this mess. But then, instead of answering the question directly, he looked for me at the back of the room. "Hey, Sieck!" he asked. "Do you still have that video of Steve Stricker's stroke on your computer?"
Up to the podium I went, laptop in hand, unexpectedly thrust into the spotlight to make Greg's point for the group. I knew exactly what to say.
We all experience moments in life when we learn something new that profoundly affects our future, alters our perspective, and changes the way we think, plan, and react in everyday life. One of those moments happened for me on a warm Tuesday afternoon at the 2010 John Deere Classic in Moline, Illinois, six months before I took the stage in Orlando.
I had an appointment to meet one of my players on the TPC Deere Run practice putting green. As is the case with most professionals, he was running on "Tour-player time" (i.e., "I'll get there when I get there"). As I waited, I started to watch some of the other players work on their putting and noticed Steve Stricker on the far end of the green. Stricker is known as one of the best putters on the planet, so this was a prime opportunity to sneak in a little video and perhaps learn something from his genius. I focused my camera and recorded Stricker working his way around a hole, looking silky smooth as he rolled in a succession of eight-footers. I could have watched and recorded all day, but after a few minutes, my player showed up. I refocused my camera on my student and went to work.
That night in my hotel room, I downloaded the video into my coaching software program and replayed the Stricker frames. I couldn't believe my eyes. This was one of the best putters of all time?
Close-up video of Steve Stricker's putting stroke at the 2010 John Deere Classic.
Frame 1: Address.
Frame 2: End of backstroke.
Frame 3: Impact.
Frame 4: Finish.
The images above are pulled straight from the Stricker video. As you can see in Frame 1 (address), the putterhead is sitting toe down and the ball is aligned not with the sweet spot but toward the heel. Moreover, the putterhead is aimed (dotted line) well to the right of his intended target. His backstroke is perfect as the putterhead swings back in a beautiful little arc (Frame 2), but his forward-stroke travels inside, or to the left, of his backstroke path. But wait, it gets worse. At impact (Frame 3), the putterface is about 6 degrees closed compared to its position at setup, and he strikes the ball slightly off the toe. The off-center hit causes the ball to start a fraction to the right of where the putterface is pointing at impact (solid line) but considerably left of his original aim. My last thought before turning off the lights was, "Wow, that's a shockingly awful stroke!"
I left the tournament the next morning to finish out the week teaching at my academy at Shadow Ridge Country Club back in Omaha. Following my Sunday lessons, I tuned into the John Deere telecast and watched slack-jawed as Stricker holed everything in sight. He finished at 26-under and won the tournament by four strokes. My body may have been relaxed as I lay sprawled out on the couch, but my mind was doing somersaults. A sickening thought struck me: "If I was Steve Stricker's coach, I would have made him so uncomfortable making his stroke "look" pretty (correcting his fundamentals) that he not only wouldn't have won the tournament, he probably would have missed the cut." As I pondered the meaning of this uncomfortable reality, a new belief cemented itself in my mind: Technique doesn't come first, performance does, and it doesn't necessarily matter what you do to perform. What's important is that you can.
STATING THE NEW OBVIOUS
I relayed this story to the T.P.I. certification class, and as I closed my laptop, I stated what the Stricker video experience had taught me:
1. Being able to repeat your stroke so that it produces predictable results is more important than attaining perfect technique.
2. Performance is dependent on a melding of essential skills that may or may not be influenced by mechanics.
The 100-plus instructors in attendance must have either been lost in thought or in shock, because I didn't get a single follow-up comment, not even from the inquisitive German. Hopefully, they got my point: It doesn't matter what stroke shape you choose, as long as you can define it, understand it, and master it. I'm sure many of the engineers or type A personalities reading this book are uncomfortable with this conclusion, but it's true, and to have a healthy relationship with your putter going forward, you must look at putting performance from this perspective.
THE ESSENTIAL SKILLS OF PUTTING
If you study the great putters throughout history—including champions such as Billy Casper, Jack Nicklaus, Brad Faxon, and yes, Steve Stricker—you won't find much in common from a technical standpoint. I'm sure that if I got them all on a SAM PuttLab I'd find several measurable "mechanical flaws" in each of their strokes. But so what? They owned what they did and possessed the ability to repeat it.
While mechanics may differ, there are a few things that all great putters do extremely well:
1. They start the ball on their intended line.
2. They see or feel the correct line to the hole.
3. They match that line with perfect speed.
4. They believe the ball is destined to end up in the hole.
In other words, great putters have _repeatable mechanics_ , they start the ball _on line_ , they _read greens effectively_ , they roll the ball with _touch_ , and they _ooze confidence_ —the so-called "It" factor. These are the essential skills of putting. If you assume that Steve Stricker at the John Deere Classic was consistently starting the ball on the line he visualized (which he was), why would the particulars of the stroke itself matter? It's safe to say that Stricker can check off each of the above boxes; therefore, what I originally thought was an ugly stroke is actually a thing of pure beauty.
It's important for all players to understand that improving technique and becoming more skillful aren't necessarily the same things. Technical improvement often starts with an epiphany, while skill development is a process—an accumulation of experiences both successful and unsuccessful. Like bricks in a wall, the knowledge gained from achieving and failing stack on top of each other, creating something solid and strong. The natural state of technical execution is chaos, because we often change unwittingly. In contrast, once we develop a skill, it often endures.
The best lesson I learned as a putting coach was that performance comes first. So when I had the opportunity to work with all-time putting great Brad Faxon starting in early 2010, we skipped any discussion of technique and chose to work on the skill of matching line with speed.
It's the same as learning how to ride a bike. You start with some basic instruction—keep your feet on the pedals and push down to generate power—and learn that by gently turning the handlebar you can change directions. Gaining an understanding of these basic techniques gives you the confidence to climb on and begin making mistakes. The combination of falling down and having some success guides future actions, ultimately increasing your _skill_ as a bike rider. Eventually, you start showing off, popping wheelies and riding without holding on to the handlebar.
Does this mean that fundamentals don't matter? Not at all. Most players' fundamental flaws diminish the precision and consistency of execution, and small technical improvements often create huge jumps in performance. The players I coach work extremely hard on the fundamental techniques I lay out for them. Longtime client Charlie Wi works tirelessly on mastering his technical keys. In 2010, he led the PGA Tour in Total Putting. That isn't a coincidence. Now, don't take this as a license to shirk work if you're already a very good or even great putter. There are countless ways to refine and build skill. The key point is that when you practice putting, always prioritize skill development over some concept of "ideal." More important, when you make changes in your method, do so with the precise knowledge of how and why it will affect the essential skills listed above. Technique is a means to an end, not the end itself. Skill is the end game.
We'll explore the four essential skills in great detail throughout this book, beginning with Chapter 3. But first things first: Let's see what kind of putter you are. Better yet, let's assess your _skills_ and see what you need to do to improve. Your putting lesson starts now.
CHAPTER
2
How to Assess Putting Skill and Avoid the Pitfalls That Thwart Improvement
There are inherent problems associated with learning how to putt, and because most players don't know how to manage them, many of them are getting worse at putting, not better.
If you're one of the countless talented and hardworking golfers who, despite honest effort, putt worse now than you did a few years ago, take heart—this book is the solution. Trust me, I know why your practice time and experiences are causing poor performance on the greens. I suffered the same aggravation as a professional player, and I see it every week in my clinics and schools.
The biggest roadblock you're facing is that you don't know how to effectively train to improve your skills. When I'm working with one of my Tour students, we spend most of our time on skill training. If this is a priority for someone whose living depends on making putts, it should be for you, too.
The second mistake is that you tend to go it alone. Tell the truth: When was the last time you took a putting lesson from a qualified instructor? (Your buddy who three-putts every other green doesn't count.) Self-coaching on the greens is a confusing business. For instance, when you miss a makeable putt, you often don't know why, even if you make your best educated guess. This is a critical problem in the quest for long-term growth, because science tells us that learning is next to impossible without factual information about the result of an action. You can miss a putt a dozen different ways—you can choose the wrong line, aim poorly, commit an error in your stroke that creates off-center contact, apply too much or too little force, or use any combination of the above. It's also possible to miss putts by misjudging the slope, grain, or speed of the green, or even the wind. And even if you do get everything right, an imperfection in the green can cause the ball to do something that's impossible to predict. Try to find "factual information" in all that! Compounding the situation is the fact that you can miss a putt for one reason on one hole but for a completely different reason on the next.
Daunting, huh? This is the reality we all face, and because most of us don't have a clear way to assess our current skill set or own a mature enough approach to solve the riddle of how to improve, we make the ultimate fatal flaw: we tinker. We swap guesswork for logic, experiment on hunches, and when things don't go our way, we immediately try something new. After countless changes and trials that don't work, we find ourselves completely lost, unsure of anything despite years of experience. At this point, you are officially "getting in your own way."
Such a player shows up at my academy door with three putters in his bag, a fistful of frayed nerves, and a prescription for anxiety meds. There's a better way, and that's to develop a plan that embraces the specific intent for each step in the improvement process while providing confirmation that you're on the right track every time you practice. This eliminates the desire to guess and tinker. It's also essential that your approach is mature enough to set aside mechanics at the appropriate time and instead focus on skill development. Moreover, your plan must be simple to understand and simple to follow, yet effective.
This is the goal of _Your Putting Solution_: to formulate a plan that guarantees success, not one that hopes for it.
GETTING STARTED
Stop thinking of putting as a thing unto itself. There are too many parts and variables that must interrelate to produce desirable results. The correct approach is to think of putting in terms of the four essential skills I outlined above: starting the ball on your intended line, seeing or feeling the correct line to the hole, matching that line with perfect speed, and believing that the ball is destined to end up in the hole.
When you do this, you can more readily assess your proficiency in each. Only after you have factual information about your limitations or tendencies can you manage them and dive into the techniques and training that will improve performance within that area. As you improve each skill (independently of the others), the overall product (i.e., your ability to hole putts) improves along with it. In addition, you must avoid the common pitfalls to growth by understanding and believing in your method, as well as by committing to the keys of effective training. It takes discipline. Sustained improvement on the greens is the result of doing the right things the right way every time the putter comes out of your bag.
Step 1: Start a Journal
If you read my first book, _Your Short Game Solution,_ you know that I'm a big fan of journaling and writing out a detailed training strategy. The act of writing or verbalizing specific intent aids both your commitment and accountability to the plan going forward. Start your putting journal (any blank notebook will do) by writing down a long-term attainable goal on the first page—a true vision about the new and improved putter you're destined to become. Make it something like, "I will execute on the greens confidently and reduce my stroke average by three shots per round within one year," or whatever you deem appropriate as long as it is both aspirational and measurable. Next, separate your journal into four sections with the following labels: (1) Assessments, (2) Technical Plan, (3) Training Plan, and (4) Personal Growth. Throughout this book you'll be asked to perform regular testing and record the results, as well as write down technical keys, training structures, performance notes, changes you make over time, and the little nuggets you learn on your journey.
Step 2: Assess Your Skills
The second step in creating your plan is to assess the particulars of your current skill set by performing the following three tests. In the 30 to 40 minutes it takes to complete them, you'll discover where you stand in relation to each of the four essential skills and thus what you need to manage and prioritize as you proceed with your plan.
Assessment No. 1: Starting the Ball on Line
Assessing your ability to start the ball on your intended line is an obvious place to begin your improvement journey, because if you can get the ball rolling in the direction you want it to go and swing your putter with confidence, you'll at least make more than your fair share of short putts. This is a big deal when it comes to scoring, because sinking the short ones takes pressure off both your finesse-wedge game and your lag putting. If you prove to be adept at this skill, you can then move on to assessing and improving the other three skills. If you fail the assessment, take a step back to focus on and train the individual variables and fundamentals that affect your ability to start the ball on the correct line (which we will cover in Chapters 3–5). Once you've mastered this skill, you can move on to the next one with confidence.
To be considered proficient at starting the ball on line, you must prove your ability to do it in all putting conditions: downhill, uphill, sidehill, and straight, because you'll face them all multiple times during the course of normal play. Any improvement process begins with establishing a baseline. Let's find yours.
25-Ball Dime Test
Place a dime on a gentle slope on the practice putting green (you won't need a hole for this assessment). Find the straight putt to the dime and drop five balls in the same spot on the green two feet from the coin. Your goal? Roll all five balls straight over the dime and stop them about four feet beyond it (for a total putt length of six feet). Listen for the "click" of the ball rolling over the dime on each attempt. Keep track of your hits and misses and record them in your journal in the assessments section under the heading "Straight Short Putt."
Repeat the test from the same location, but this time roll the balls over the dime to a spot about 13 feet beyond the coin (for a total putt length of 15 feet). Record the results under "Straight Medium-Length Putt" in your journal. Next, pick a spot on the green relative to the dime that would represent a right-to-left putt, and again roll the balls over the coin. Stop these putts about eight feet beyond the dime. Record your results under "Right-to-Left Putt," then repeat for a left-to-righter. Important: When performing the right-to-left and left-to-right assessments, don't putt from such a severe slope that a ball putted at the ideal speed breaks off before it reaches the dime. If the slope is relatively gentle (a 1 to 2 percent grade), the ball shouldn't move much during the first two feet of its roll.
Finish your assessment by rolling five balls over the dime on a short downhiller. You've now hit twenty-five putts (five putts from five different locations). Your results should tell you everything you need to know about your ability to consistently start the ball on line and help you decipher an appropriate solution.
Assess your ability to start the ball on line by putting over a dime on a short straight putt, medium-length straight putt, medium-length putts that break in both directions, and on a short, straight downhiller. If you can't hit the dime (set two feet in front of the ball) at least 80 percent of the time from each putt location, your setup and stroke fundamentals are holding you back from performing on the greens.
If you were able to hit the coin at least four times from every location (twenty total "hits"), that's good enough to be the best putter at your club and even outperform some of the Tour players I coach. In this case, treat yourself like I treat Brad Faxon—leave well enough alone and focus on improving the other three skills. Remember, you need to be proficient at all of them to perform well. If you didn't consistently hit the dime from all five positions, you need to find out why by diving into the fundamentals that affect this skill (Chapters 3 and 4). Ultimately, your belief in and execution of proper fundamentals will allow you to pass any reassessment with regularity.
Note: There's some simple math to the 25-Ball Dime Test. Given a perfectly flat putting surface, a perfect read, and appropriate speed, a ball that's struck so that it rolls over a dime set two feet in front of it on a direct line to the center of the cup is hit straight enough to find the hole from as much as 12 feet away. When you consider that your average Tour player sinks approximately 28 percent of their putts from this distance, it's safe to assume that starting the ball on line to this standard 80 percent of the time is more than good enough for you to dominate on the greens. If you passed the 25-Ball Dime Test but still aren't making putts, then one of the other essential skills is to blame, not this one.
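If you like to check the math, here's the rough geometry, using standard dimensions (a dime is about 0.7 inches wide, so roughly 0.35 inches from center to edge; a regulation cup is 4.25 inches across, or about 2.1 inches from center to edge). A ball that rolls over the dime is off line by no more than about 0.35 inches after two feet, and that error grows in proportion to distance:

$$\text{offset at 12 feet} \approx 0.35\ \text{in} \times \frac{12\ \text{ft}}{2\ \text{ft}} = 2.1\ \text{in} \approx \text{half the width of the cup}$$

In other words, on a flat green, a putt accurate enough to catch the dime at two feet is still inside the edges of the cup at 12 feet.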
If you can roll the ball over a dime placed two feet in front of the ball on your starting line, your stroke is accurate enough to hit the hole from as far as 12 feet away on a flat green.
Assessment No. 2: Predicting the Correct Start Line
Green-reading is something that most players are notoriously bad at, even though they may tell you otherwise. It's a convoluted topic further complicated by the fact that, for most players, there are two reads for every putt: 1) the line they pick consciously and 2) their subconscious read or feel for aim and line once they settle over the ball. As you can guess, these reads often conflict. Completing the following test will help you decipher your overall green-reading skill, tendencies, and which type of read works best for you.
10-Ball Green-Reading Test
To assess your green-reading skill, you're going to hit five putts from five feet and another five from 12 feet. Explaining how to perform this assessment might take longer than it will for you to complete it, so hang in there and reread these instructions if you have to. On the practice green, find a slope with medium break (1 to 2 percent grade) and spread out five balls in a circle, each five feet from the cup. This will give you the variety of putts that you'll face in everyday rounds: uphill, downhill, right-to-left, and left-to-right. Your goal is to judge the line and putt all five balls with a speed that allows them to travel a foot past the hole in the event they miss. On each putt, run through your normal green-reading process and record your conscious, or stated, read for the perfect start line for each putt at the intended speed. Copy the table below and record your reads in the appropriate row. Title it "Green-Reading: Short Putts." It should look something like this:
Sample 10-Ball Green-Reading Test journal entry. Mark your initial read in the first row for each putt. You'll create charts for both five- and 12-foot putts.
Don't putt yet! The next step is to place a dime on your stated starting line about two feet in front of the ball, and push it down firmly into the green, just below the level of the putting surface, so it won't deflect the ball off line. For example, if your stated read for the first putt was "left edge," place the dime on a line that runs from the ball to the left edge of the hole. Repeat the process for the remaining putts from five feet.
Next, address each ball in succession as though you're going to putt it (but don't). Does the change in perspective (from read position to address position) make the dime appear as though it's on your stated line, or does your subconscious want you to aim above or below it? Note any "disagreement" in the second row of your green-reading assessment table, indicating what your subconscious wants you to do, whether it's to play the intended break, play more break, or play less break (it'll probably be more).
Mark your initial read on each five-foot putt with a coin (highlighted by the small circles), then look for any discrepancies between your first read and what your eyes and feet tell you as you stand over each ball at address. Any disagreement between your conscious (first) and subconscious (address position) reads creates conflict and impairs your ability to start the ball on line.
Finally, it's time to stroke all five putts and fill out the last two lines of your assessment table. For each putt, note if the ball hit the dime on its way to the hole and record the result (make, miss on the low side, or miss on the high side). Although this test is more pragmatic than scientific, you can draw some basic—yet powerful—conclusions from the results.
If you roll your ball over the coin but the ball misses the hole (assuming reasonable speed), then you know that you misread the putt in the direction of the miss. Make a note of where the dime should have been placed. Is this more break or less?
If you miss the coin and the hole in the same direction, your read was probably fine, but your stroke was poor.
If you miss the coin, but the ball goes in the hole, you'll not only learn what your read should have been, but that you read greens much better subconsciously than consciously, which is critical information for improving your green-reading skills (Chapter 6). Record where the dime _should have_ been placed (more break or less). Nailing your tendencies will help you better manage them going forward.
After you've rolled a ball from each of the five starting positions around the hole from five feet, move to a different hole and repeat the assessment from 12 feet. Again, record your results in a table. Title it "Green-Reading: Medium-Length Putts."
If you're a highly skilled green-reader who can also start the ball on line (i.e., over the dime), your tables should show agreement between your conscious and subconscious reads, and that a majority of your putts rolled over the dime _and_ went in the hole. Examine your recorded results critically and look for patterns. My guess is that most of you drastically under-read putts consciously, and then allow your subconscious to both misalign and pull or push the putt to get the ball somewhere near the correct line. A player of Steve Stricker's or Brad Faxon's caliber can probably hit all five coins and make all five putts from five feet, and hit the line at least four times in the 12-foot assessment with a couple of makes. And if they recorded their results in a table, they'd find very little internal conflict between where they decided to start the ball and their feel for the line as they addressed each putt.
A properly completed assessment table provides useful clues about your green-reading strengths and weaknesses. Look for discrepancies between your conscious and subconscious reads, and note which one allows you to hit the dime—and sink putts—most often. In this example, the player would be much better off listening to his inner voice, because every time he sensed a different break standing over the ball, he was closer to reading the true line to the putt.
Note: If your results tell you that you didn't set the coin in the correct place on at least seven of the ten trials, then you need to study the physics of how slope and green speed affect putts and develop a better process for visualizing roll. (More on this in Chapter 6.)
Putting Without Conflict
Have you ever stepped into a putt knowing with 100 percent certainty that it was going to go in? It's an amazing feeling that every player experiences at one point or another. Call it the "Zone," a "Higher Power," or whatever you want, but I see it as the rare alignment of three important factors. First, your conscious read is factually correct, and you can visualize every part of the putt clearly. Second, your subconscious feel for the putt as you settle into the ball is in complete agreement with your conscious read. Third, you're so confident in the visual picture that you act without any sense of self-awareness, reacting only to the external cues without interference or clutter. That's why performing the 10-Ball Green-Reading Test is so important. The more you do it, and the more you improve your green-reading skills, the more likely your conscious and subconscious reads will match up. _Boom, baby!_
Assessment No. 3: Putting Touch
Your ability to avoid three-putting from long distance and drain those difference-making medium-length putts between eight and 15 feet is largely dependent on your touch. Touch is the ability to sense all of the factors that influence how the ball is going to roll on the green, and to expertly impart the correct energy to the ball so that it rolls up to the target with the anticipated pace. To test your current proficiency at distance control, perform the following two-part assessment.
Long-Lag Test
With three balls in hand, step off uphill 30-, 40-, and 50-foot putts to the same cup. Set a ball on the green at each distance, and starting with the 30-footer, lag each ball up to the hole. Score your results as follows: five points for a make, three points for a ball that comes to rest within three feet of the cup, one point for a ball that comes to rest between three and four feet of the cup, and zero for everything else. Add up your points, then repeat for 30-, 40-, and 50-foot downhill putts, as well as sidehill putts (either left-to-right or right-to-left) from the same three distances. If your cumulative score for all nine putts is 18 points or less, your ability to control distance—that is, your touch—is deficient and probably costing you several strokes per round. Improving your touch will definitely move you closer to meeting your desired goal.
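To put the 18-point cutoff in perspective, consider the quick arithmetic behind the scoring: nine makes would be a perfect score of 45, and a player who simply gets every lag inside three feet scores

$$9\ \text{putts} \times 3\ \text{points} = 27\ \text{points}$$

so a score of 18 or less means you're averaging only two points per putt, with a meaningful share of your lags finishing outside three feet.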
Test your ability to lag putts from 30, 40, and 50 feet. If your nine-putt total using the scoring system above is 18 points or less, your touch is failing you on the greens.
Two-Hole Knockout Test
This test will assess your ability to control speed to the standard needed to make putts between eight and 20 feet with some regularity (those in the winner's circle are certainly "having their week" in this regard). Lay three balls down 10 feet from a cup and stick a tee in the ground one foot beyond the hole on the "through line" of your intended putt. Next, place a coin four inches beyond the tee on your through line (I prefer to put them off to the low side), and a second coin four inches in front of the tee.
Two-Hole Knockout Test setup.
Your goal with this drill is to roll all three balls either into the hole or into the eight-inch-long "ideal speed zone" bookended by the two coins. If you can't get all three balls to finish in the speed zone (or in the hole), pick them up and keep putting. Count the number of attempts it takes you to roll a total of three balls into the zone or the hole—in other words, to "knock out" that distance. That's your 10-foot score. Repeat the test on a completely different putt from 15 feet. If it takes you more than nine balls to knock out both holes, your sense of touch and ability to control distance are in need of serious work. Note your score and your tendencies in your journal, and commit to improving your touch fundamentals using the training methods discussed in Chapter 7.
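In case the layout of the speed zone isn't clear, here's how the distances work out: the tee sits one foot (12 inches) past the hole, with one coin four inches short of it and another four inches beyond it, so the zone runs from

$$12 - 4 = 8\ \text{inches} \quad\text{to}\quad 12 + 4 = 16\ \text{inches past the cup}$$

which means a ball that finishes in the zone has rolled at close to the ideal pace of about a foot past the hole.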
Two-Hole Knockout Test in action. The better you are at rolling the ball at the correct speed, the more likely you are to match the line to the speed, and to hole putts.
Assessment No. 4: "It" Factor
The last entry in my list of essential putting skills is mental prowess, or "it" factor, which allows you to execute with confidence. There's no assessment here, because growth in this area is something you should invest in daily, regardless of your current ability. Honestly, it's the master skill. Lacking belief in your abilities affects all putts, but it typically inflicts the most damage on the short putts you're supposed to make or during big moments, like when you need a putt to win your club championship or post your best score ever. Anyone who suffers from putting anxiety can tell you how difficult it is to perform on the greens when all you can think about is your next miss. In Chapter 8, we'll dive into the key elements of mental performance and embed proven strategies into your training regimen and process. As is the case with other putting skills, the more you improve your mental prowess, the more putts you'll actually hole.
NEXT STEPS
The assessments presented on these pages are powerful guides for customizing and fine-tuning your improvement plan. If you nailed one of the assessments, focus on the other three skills, consulting the appropriate chapters. If by some miracle you passed all of the assessments and still aren't making putts, then it's time to address the only possible area holding you back: the effectiveness of your training (Chapter 9). My guess is that you'll fail all three (I set high standards, and I hope you do, too), prompting you to begin your journey to better putting on the very next page by learning the foundational keys to starting the ball on line.
CHAPTER
3
Skill 1:
Starting the Ball On Line (Pre-Stroke Foundations)
The first step in learning how to roll putts in the direction you intend is to clearly define how you plan on doing it, leaving no variable or nuance unexplored. Your competency depends on understanding and believing in your method, coupled with effective training.
If you were to paint a picture of your putting stroke, what would it look like? If the image isn't clear and precise, or if it takes you a while to conjure it, you have a problem. How can you master something if you haven't defined it? How can you commit to a plan that even you—its creator—don't fully understand? You can't. Believing in and trusting your method is an act of will, not a feeling you get after a few made putts. In other words, if you don't own it, you'll never have it.
When the time is right to begin teaching a student technique, my first task is not only to present a clear picture of how the putter should move (based on his or her style), but also to make that player understand the benefits of moving it that way, as well as the fundamentals required to get it done. The "doing" part of putting is the easy part. Let's face it—you don't have to be a world-class athlete to stroke putts. As you'll learn, the quality and consistency of your stroke depends more on your ability to grasp _why_ you swing the putter a certain way—and creating a feel for this movement—than it does on your physical talents.
So let's get started. Lesson one: To consistently start the ball on line, you must:
1. Square the putterface to your intended line at impact
2. Contact the ball on the sweet spot of the putterface
3. Maintain a "neutral" putter path through impact
4. Strike the ball in the putter's center of mass
These are the building blocks of a successful stroke. Some are more critical than others, but all affect the start line in some way. As you begin the process, keep in mind that your stroke is a means to an end, not the end itself.
SQUARE PUTTERFACE
Contacting the ball so that it rolls in the direction you intend is mostly influenced by the angle of the putterface at impact. If your putter is pointing to the right of your intended line as you make contact with the ball, the putt will almost assuredly start to the right of your line, regardless of the other variables. The same goes if the putterface is pointing to the left at impact. Numerous studies indicate that face angle determines up to 85 percent of start direction. This isn't "coach-speak"—it's an absolute.
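Putting researchers often write this rule of thumb as a weighted blend of face and path. The exact weights vary from study to study, but using the 85 percent figure:

$$\text{start direction} \approx 0.85 \times \text{face angle} + 0.15 \times \text{path direction}$$

A putterface that's open just two degrees at impact, even with a perfectly neutral path, starts the ball roughly 1.7 degrees off line, more than enough to miss from mid-range.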
When it comes to starting the ball on line, nothing is more important than squaring the face at impact. Face angle is king.
Therefore, the first step in starting the ball on line is to visualize what a properly aimed putterface looks like at address and then figure out how to re-create the same face angle at impact.
SWEET-SPOT CONTACT
Step two is to contact the ball on the sweet spot. Errors here don't affect start line as heavily as face angle, but they should definitely be avoided. If you make contact with the ball out toward the toe of the putterhead, it's highly likely the ball will start slightly to the right of your intended line and end up short of your target. Why? Because every putter twists, or rotates, when contact is made outside of the sweet spot (named as such because it's the spot on the putterface that's most resistant to twisting). Toe contact forces the putterhead to rotate open, creating a face angle that points to the right of the intended line; the ball goes right and rolls with less energy. Impacting the ball with the heel area of the putterface does the opposite; the putterhead twists in the direction of the impact point and closes, resulting in a starting direction left of the intended line.
Putts struck on the sweet spot are more likely to start on line and with the correct speed. Missing the sweet spot causes the putter to twist, creating face-angle errors at impact as well as a loss of energy transferred to the ball.
Until recently, the sweet spots of most mass-marketed putters were often mislabeled, because manufacturers tended to position contact marks and alignment aids in the horizontal center of the putterhead, not in the location of the club's maximum _moment of inertia_ (the spot that offers the most resistance to twisting). Now, thankfully, manufacturers go to great pains to ensure that any putterhead markings coincide with the exact location of the sweet spot, so you can rest assured that if you strike the ball where you think the sweet spot resides, you'll be optimizing the stability of the putterface at impact. The research and development performed at outfits like Titleist's Scotty Cameron Studio is extremely advanced, using science to provide today's golfers with advantages other generations lacked.
STROKE PATH
Sweet spots are sweet spots, and any golfer can see the benefit of striking the ball with a square putterface. What's truly up for debate is the ideal putterhead path and stroke shape relative to the starting line. Path errors aren't as deadly as face errors, because the energy transfer and compression at impact with your putter (compared to the energy transfer and compression created when hitting a driver) are very low, providing some wiggle room for you to match the nuances of your body type, posture, and comfort level to your chosen path. The key, again, is to choose a particular shape, clearly define it, and then commit to it.
Unless there's strong disagreement, I try to get my players to swing the putter on a very slight, symmetrical arc—so slight, in fact, that the putter remains on (or nearly on) an extension of the target line with the face square to the line for two to three inches on either side of impact. An advantage to this stroke shape is that it doesn't require you to position the ball perfectly in the same place in your stance on every putt—you've got two or three inches on either side of the ball where the putter is moving straight down the line with the face square to the target. At any point in this zone, contact conditions fall within an acceptable standard, accommodating natural human error when it comes to positioning the ball in your stance.
Maintaining a neutral path four to six inches (two to three inches on each side of the ball) around the impact point helps you start the ball on line consistently and negates the need for precise placement of the ball.
What does this ideal stroke shape look like? Check out mine below. Although I don't practice much, I still manage to produce a fundamentally sound motion.
A great stroke shape runs down the target line through the impact zone and arcs symmetrically on the ends, with the face square to that arc at all times. The amount of arc may differ from player to player.
As you can see from these photos, my putterhead arcs slightly away from the target line (dotted) the farther it gets from its starting point. Also notice how square and straight everything is around the impact zone. Even though my stroke arcs, the face remains square to the arc from start to finish.
The exact amount of arc (and face rotation) you'll create in your stroke depends on your putting posture. In general, the more you bend from your hips at address (picture Michelle Wie), the less your stroke will arc, leading to less putterface rotation. Because I feel like less rotation and arc are beneficial for consistency, you'll notice that many of the players I coach on Tour bend over quite a bit from their hips at setup, with their weight evenly balanced over the arches of both feet.
Your body type and the amount you bend over from your hips at address determine how much your putter will arc throughout the stroke. If you stand tall, your stroke shape will arc more than if you bend over.
BALL CONTACT POINT
The final part of your stroke picture is the putterface contacting the ball in the putterhead's center of mass, neither too high nor too low on the face. In order for this to happen, the putterhead should be ascending slightly (one to two degrees) into impact. The combination of these variables, along with the loft built into your putterface, will create optimal launch conditions, which are necessary to get the ball rolling on your intended line.
Launch conditions? Yes, putts do launch. On perfect impact, you create just enough upward movement to keep the ball from spinning either backward or forward for the first four to six inches of its roll. The angle of attack and resulting effective loft of the putter lift the ball a smidgen into the air (and out of any depression that the ball may have settled into) while pushing it toward the target parallel to the green's surface. Once the ball comes back into contact with the green, its forward momentum and lack of spin allow it to roll on your line without bouncing.
Optimal launch: A slight ascending strike is one factor that helps create the ideal effective loft at impact, which lifts the ball slightly off the ground with no spin for the first four to six inches of the putt.
Note: Grooves on a putter change this dynamic slightly—a ball hit with this type of putter will start rolling with topspin immediately after impact. Manufacturers who offer putters with grooved faces claim that this is beneficial, because a ball that immediately spins forward is less likely to be deflected off line by the ground. The testing I've seen on this matter has been far from scientific and at best has offered circumstantial evidence. Plus, it doesn't take into account different grass types and grass heights. Twenty years of coaching and a lifetime of putting experience make me a bit skeptical of these claims, to say the least.
Your Putting-Picture Recap
* Putterface aligned to your intended start line at both setup and impact.
* Putterhead path tracing a slightly symmetrical arc, with the putterface remaining square to the arc throughout the stroke.
* Ball struck in the sweet spot of the putter, with the putterhead ascending slightly into impact.
NAILING THE FUNDAMENTALS
Now that the picture of your stroke and the key impact fundamentals are clear and complete, you need to come to terms with how you physically set up and execute your stroke to create the impact conditions that allow you to start the ball on your intended line without having to think about it. Although all fundamentals are interrelated to some degree, they're easier to understand and work on when you separate them into two distinct parts: those that happen pre-stroke, and those that occur during the stroke. For the remainder of this chapter, I'll focus on the pre-stroke fundamentals that fuel consistent impacts and accurate starting lines.
PRE-STROKE FUNDAMENTALS THAT AFFECT YOUR ABILITY TO START THE BALL ON LINE
Body and club alignment at address both affect in-stroke motion and, therefore, stroke shape. I like my students to be comfortable; the more relaxed you are at address, the smoother your stroke will be and the easier it will be to repeat. So get cozy—just make sure your posture adheres to the following structure:
Critical pre-stroke fundamentals from a face-on perspective.
1. Set up so that the shaft of the putter is straight up and down, or perpendicular to the putting green, and your dominant eye is positioned two inches behind the ball (that is, away from the target). Most manufacturers build three to four degrees of loft into their putters because this creates an optimum amount of effective loft (static loft plus the strike angle of the putterhead, plus or minus any shaft lean) at impact; see the worked example following this list. Manufacturers, however, assume that the user will set up with the shaft in a neutral position (straight up and down) at address and then return it to neutral at impact. If you feel more comfortable addressing the ball with your hands pressed forward or leaning back, you should have an equipment professional adjust the loft on your putter accordingly, so that you can maintain the ideal amount of effective loft.
2. Set up so your eyes are over an extension of the target line, or no more than one inch inside that line (in other words, no more than one inch closer to your body). This gives you the best vantage point to look down and see your intended line.
3. Set up so that the middle of your right hand sits directly under your right shoulder (or the shoulder on the same side of your body as the lowest hand on the grip). This will make it as easy as possible for you to swing the club on a neutral path.
4. Set up so that an imaginary line between your shoulders is parallel, or square, to your intended start line. This alignment will allow your hands to swing naturally along the target line. Notice that I said I wanted your shoulders square, but that I didn't say anything about your feet. That's often a matter of eye dominance and comfort. Generally, left-eye dominant players (right-eye dominant if you're a lefty) see the line better if they set up with a square stance and the ball slightly forward of center, so they can get their dominant eye just behind the ball. Right-eye dominant golfers (left-eye dominant if you're a lefty) tend to feel more comfortable with an open stance; this allows them to cock their head a little toward their trail shoulder and look down the line more easily with their dominant eye. The ideal ball position for this type of golfer is typically dead center.
Critical pre-stroke fundamentals from a down-the-line perspective.
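Here's the effective-loft relationship from point 1 spelled out as an equation, with some illustrative numbers (the lofts below are examples, not any manufacturer's specs):

$$\text{effective loft} = \text{static loft} + \text{strike angle} \pm \text{shaft lean}$$

A putter with 3.5 degrees of static loft delivered on a one-degree ascending strike with a neutral shaft produces about 4.5 degrees of effective loft; press your hands two degrees forward at impact and you're down to roughly 2.5 degrees, which is exactly why a player who prefers a forward press should have the putter's loft adjusted to compensate.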
MASTERING YOUR SETUP
These four setup fundamentals are simple to understand and require no real special ability to master, but how do you know if you're accurately executing them? Obviously, intent means nothing. There's a huge difference between what we feel and what's real. Your setup can't be subjective or up for debate—you need to know with 100 percent certainty that you've got them down pat. My advice? Check your setup fundamentals using my Mirror Setup Drill (explained below) at least once a week. It'll take no more than a minute, and you'll be using that time wisely.
Mirror Setup Drill
Step 1: Find a full-length mirror in your home and get into your address position with your body facing the mirror. From this vantage point, check that the shaft is sitting 90 degrees to the ground and that your dominant eye is approximately two inches behind the ball. Adjust both hand and upper-body positioning until you nail these important setup fundamentals.
Step 2: Rotate to your left a full 90 degrees so that you're setting up to putt away from the mirror. (If you putt left-hand low, set up as though you're putting into the mirror.) Check that your arms dangle freely from your shoulders and that the middle of your right hand sits directly below the center of your right shoulder. (If you putt left-hand low, make sure that the middle of your left hand sits directly below the center of your left shoulder.) If the image in the mirror doesn't reflect this, adjust your hip hinge or arm position until you get it right.
Work on your pre-stroke fundamentals using a mirror at least once a week. Even Tour players fall into bad habits. Keep them at bay with a quick and easy weekly checkup.
Step 3: Confirm that your shoulders look square to your intended line (when you look back into the mirror, your right shoulder should hide your left, and vice versa if you putt left-hand low). Adjust if necessary.
Step 4: Make sure the ball is under your eyes, or just outside them, and confirm that your putter is soled flat on the ground. If the heel is down and the toe is up, your putter is too upright for your posture and you'll need a fitting professional to adjust it for you. Obviously, the same can be said if your putter is too flat (toe down), or too long or too short. You're better off adjusting the fit of your putter to your most comfortable posture and the proper setup fundamentals than adjusting your setup to a putter that doesn't fit.
Now that you've attained a fundamentally sound setup, measure the distance from the ball to your toe line by marking an alignment stick or using the length of your putterhead to create a measurement. For example, when he's training, PGA Tour player Cameron Tringale puts a tee in the ground two and a half putterhead lengths from the inside of the ball to mark his toe line, or the distance he stands from the ball. Because he checks both this distance and his eye positions daily, his posture and body orientations never change—a key to mastery.
THE POWER OF STYLE CHOICES
Hopefully you realize that when it comes to the proper setup fundamentals, I'm not asking for much—you can nail them in a few minutes with the mirror drill. However, you may have address position questions I have yet to answer, such as "Which grip should I use?" or "How firmly should I grip the club?" I wouldn't think of trying to answer those, because there's a difference between fundamentals (things everyone should do) and style choices (things that _you_ should do because they work for you). Make any style choices that suit you, just make sure that they help you achieve the body and club alignments discussed in this chapter. Your style choices will differ from mine and from most golfers you know. Tiger Woods uses his trail arm to dominate his stroke and grips the putter very lightly. His good friend, Steve Stricker, grips his putter with maximum pressure and swings it using his lead arm and hand. They're completely opposite styles, but they both allow the user to comfortably control the club and, more important, to repeat the motion. Whether you hold the club in your fingers or palms, have a fat grip or a skinny one, putt cross-handed, traditionally, or with the claw—it doesn't matter as long as those choices allow you to achieve the setup fundamentals we've discussed and give you a high measure of control over the club.
Style choices, like which grip you use, are not fundamentals—go with what gives you the most control.
With these criteria in mind, think through the style choices you've already made and the ones you've been thinking about adopting, and commit to those that feel best. I, for example, feel more in control of the club with a skinny grip running through my fingers, which is the opposite of the current trend on Tour. Champions Tour player Tom Pernice, Jr. switched to a left-hand low, or cross-handed, grip a decade ago after constantly struggling to get his shoulders square at address with a traditional hand placement. This change has made a huge difference in his comfort level and consistency. Your style choices are unique to you. Make them for the right reasons, and then commit to them.
GETTING POINTED IN THE RIGHT DIRECTION
The scary thing about putting is that establishing and perfecting great fundamentals means nothing if you can't aim. Starting the ball on your intended line when the true line is three degrees left of it won't save you any strokes. In fact, it'll drive you from the game. I offer three effective strategies in this regard. The first is to choose the right putter. Its design—head shape, hosel type, shaft offset, loft, alignment markings, etc.—has a tremendous effect on where your eyes tell you to aim the putter. If you have an aiming error tendency and a miss tendency in the same direction, finding a putter that you can aim is often the simplest and most effective fix.
The best putter for you isn't the one that looks the sharpest or the model used by your favorite Tour player; it's the one designed to augment your in-stroke strengths and minimize your mistakes, including poor aim. If, for example, your putterface often rotates too much relative to the path, consider a face-balanced putter (far right). If the putterface either rotates closed in the backstroke or opens up in the through-stroke, a heel-shafted putter with more mass toward the toe (far left) may help you. Grip shape and size is an issue of comfort and control and needs to be worked through individually. As with any equipment choice, your best bet is to pay a visit to a trusted clubfitter.
FINDING THE PERFECT PUTTER
When choosing a putter, prioritize the following criteria in this order:
1. Your ability to aim it
2. Solidness of the hit
3. Proper length and lie angle for your setup fundamentals
4. Weight
5. Balancing (toward the toe, heel, or face)
Recently, an aspiring Tour professional called me in a panic because of his bad putting leading up to the second stage of PGA Tour Q-School. Even though he was missing all of his putts to the right, I knew that his stroke fundamentals were relatively sound, because he had sent me some video days earlier. So when I arranged to meet him at the tournament site, I told him to bring every good putter he owned. Sure enough, four of the five putters he brought, including the one he had been using leading up to the event, biased his aim well to the right. The one that didn't, ironically, was devoid of any alignment markings, the absence of which forced him to look at the face to aim instead of the line on the putterhead. By finding the right putter design, and making a slight change in his training to boost his confidence, he putted lights-out and made it through qualifying with flying colors.
When choosing a putter, forget about what it looks like. Can you aim it? Test different designs until you find one that fits your eye by setting up to a straight-in six-footer and asking a buddy to stand behind you and give you feedback on your aim.
The second strategy to improve aim is to actually practice it in a learning environment. Without intelligent practice, aiming errors can and will pop up, mostly as a response to poor stroke fundamentals. For example, if you have a stroke that consistently produces a left miss and you hit enough putts, you'll subconsciously develop a right-aim error. Your subconscious is taking care of you, so to speak, but it's a short-term solution beset by long-term inconsistency. The good news is that you can effectively change the way you see targets and aim at them through proper training in as little as three weeks. You'll be doing this as part of _Your Putting Solution_ training plan (Chapters 9–11), so where you currently stand with this ability is irrelevant. Once you get going with your putting workouts, you'll soon be aiming perfectly without any doubts.
Lastly, when all else fails, you can mark your ball with a line, stand behind it and use your binocular vision (in other words, while looking at it with both eyes—this is how we see in our day-to-day life) to align it toward your intended line, and then set the putterface perpendicular to the ball line. This eschews the traditional aiming process altogether, and has both pros and cons. We'll get into this technique more thoroughly in Chapter 6 when we broach green-reading skills.
CHAPTER
4
Skill 1:
Starting the Ball On Line (In-Stroke Foundations)
The address-position fundamentals outlined in the previous chapter are designed to put you in the best possible position to start your putts on line. The ball, however, can't start rolling by itself. That's determined by how you generate the motion of your stroke.
Once your setup fundamentals are in order, the act of stroking the ball on your intended line is nothing more than maintaining a suspension point and staying stable throughout your motion. See—I told you it was going to be easy! But as with anything in this book, it's critical for you to understand _why_ I'm asking you to maintain a suspension point (whatever that is) and remain stable. Coming to terms with these concepts and what it feels like to execute them is critical to the improvement process and allows for the best possible results. Let's get started.
IN-STROKE FOUNDATION NO. 1: MAINTAINING YOUR SUSPENSION POINT
A great putting stroke is similar to a pendulum in that it moves around a fulcrum, or a fixed point. Your ability to establish that fulcrum, or suspension point, at setup and maintain it throughout the stroke determines the shape of your stroke. Move the fulcrum and your stroke goes haywire.
The rhythmic, consistent swing of a pendulum results from the establishment of a fixed suspension point, or fulcrum (F). In this diagram, H is the height of the swing (or stroke), D is the displacement (length of backstroke), B is the bob shape and mass (the putterhead), and L is the length of the pivot arm (arm hang plus shaft length).
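If you remember your high-school physics, there's a tidy reason a pendulum stroke feels so rhythmic. For an idealized simple pendulum, the time one full swing takes depends only on the length of the pivot arm (L) and gravity (g), not on the displacement (D):

$$T \approx 2\pi\sqrt{\frac{L}{g}}$$

As long as the suspension point (F) stays fixed and L doesn't change, a long backstroke and a short one take about the same amount of time, so your tempo stays constant and only D changes with the length of the putt.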
It's easy to visualize the concept of a suspension point when a player is putting with a long or belly putter—the butt of the grip is touching the body, creating a fixed point about which the putter swings. Because the suspension point is physically fixed to the body with a belly or long putter, it's much easier to coordinate the movement of the different body parts involved with executing a proper stroke. No doubt, belly and long putters have helped a lot of players better control the forces applied to the club and sink more putts, but since the USGA has deemed this advantage illegal (Rule 14-1b, effective January 1, 2016), you must now come to terms with this principle and learn correct movement using a traditional-length putter (or a belly or long model that isn't fixed to your body).
Just like a swinging pendulum, your stroke must move back and through around a fixed point—a "virtual" suspension point that you establish at address and maintain throughout your stroke.
With a short putter in your hands and both arms swinging freely, the pivot arm of the pendulum is not actually an extension of the club itself, but rather a vertical line connecting a point on your upper body to your hands. The suspension point of your pendulum is more of a "virtual" one. As such, it's trickier to control compared to putting with a belly or long putter. When assessing suspension point using video, I mark and extend the shaft upward with a vertical line in four separate frames: 1) setup position, 2) end of the backstroke, 3) impact, and 4) the completion of the stroke. If the player I'm analyzing properly maintains his or her suspension point, all four lines will converge at a fixed point somewhere near the golfer's sternum.
An imaginary line running up from the butt of the grip to your body should point at the same place on your sternum at address, at the end of the backstroke, at impact, and at the end of your forward-stroke. This ideal pendulum putting motion results from blending all of the moving parts in your stroke in a coordinated fashion.
Expert at Work! For additional clarification on the suspension-point fundamental, watch a special video detailing this concept at jsegolfacademy.com/index.php/james-suspension-point.
With a short putter, maintaining the suspension point of your pendulum stroke is easier to understand than execute, especially if you've never felt the correct motion. A lot of accepted wisdom on how to accomplish this fundamental does more harm than good. Contrary to popular opinion, it's not about "locking your elbows and wrists," "rocking your shoulders," "keeping your triangle," or any version of "not breaking down." Rather, maintaining your suspension point is achieved by coordinating and blending all the moving parts in your stroke and using them in the correct ratio to one another. (By moving parts, I mean everything above the belt save for the top of your spine, your head, and your eyes. More on that later in the chapter.) Put into even simpler terms, learning and mastering your suspension point is about being able to "feel" the correct motion on a consistent basis. Luckily, I have a magical drill to help you get it right.
How to Properly Maintain Your Suspension Point
Indoors, set up to an imaginary ball while facing a wall. Make sure you get into your address posture using the perfect body alignments described in the Mirror Setup Drill in Chapter 3. Next, shuffle your feet forward until the crown of your forehead rests gently against the wall. Once you're set, slowly inch the puttershaft up through your hands until the butt of the grip touches the center of your sternum (or slightly left of center if you're left-eye dominant or grip the putter left-hand low). Keep your arms hanging in the same position as you raise the shaft toward your body, so that the shaft remains perpendicular to the ground (resist the temptation to lift your hands outside your shoulders and get the putterhead under your eyes). If you do it correctly, your hands will block your view of the putterhead.
With the butt of the club fixed in position, swing your hands, arms, and chest in unison back and forth, keeping your elbows soft and your grip pressure constant (photos, here). Swinging in this manner with your forehead against the wall will not only give you the feeling of disassociating your shoulder and chest movement from your spine (no side bend), but will also create the proper blend of movement between your wrists, elbows, arms, chest, and shoulders.
Take inventory of the sensations, because this drill gets every body segment moving in the correct sequence and with the correct speed and force relative to the others, allowing you to maintain the suspension point with very little effort.
Executing this key fundamental results in great stroke shape and control, setting you well on your way to consistently starting your putts on line. After you get a feel for the motion, soften your hold on the shaft so the grip falls down into your hands. With your forehead still resting against the wall, assume your normal setup and repeat the drill. Continue as necessary or until you can transfer the motion to real strokes on the practice putting green or out on the course.
SUSPENSION-POINT FAULTS AND FIXES
As a coach, I like to think positively, as in "If you just do the Suspension-Point Drill and forget everything else, you'll be fine." But I know that old tendencies die hard and can creep into your motion at any moment, despite your best intentions. Reviewing these "death moves" and the fixes that allow you to maintain your suspension point if and when they work their way back into your stroke can go a long way toward helping you execute on the greens.
Suspension-Point Drill
1. Get into your putting stance with your forehead resting against a wall.
2. "Shuttle" the grip up through your hands until the butt of the handle touches your sternum.
3. Make your backstroke keeping your forehead on the wall and your grip snug against your sternum.
4. Do the same thing on your forward-stroke. Note the feels that maintaining your suspension point creates.
Suspension-Point Error No. 1: Early Rotation (Block Finish)
NO!
Rotating your chest ahead of the swinging action of your arms and hands tends to open the face at impact.
What It Is: Your chest rotates faster than the speed of your arm swing in the transition from backstroke to forward-stroke.
What It Does: Creates a loop to the outside in your transition, an open putterface at impact, and weak pushes to the right of your intended line.
The Fix: Try My Trail-Arm-Only Drill.
Grip your putter using your right hand only. With your left hand, hold the clubhead of your longest iron. Settle into your putting address posture, and extend your left arm forward so that you can rest the grip of the iron on the ground just outside your intended line of putt. This will stabilize your chest and keep you from rotating too early and/or blocking your finish. Next, make as normal a putting stroke as possible using only your right arm, putting underneath the "bridge" created by your left arm and the iron (photos, following). The focus here is to sense how your right arm "closes" against your chest at the start of the forward-stroke, allowing the putterhead to release. This drill will teach you to "putt past your center," which all great putters do. Two of my PGA Tour students, Charley Hoffman and Tom Pernice, Jr., use a "back-and-through-past-my-head" mantra almost every week to manage this tendency and boost their performance on the greens.
Trail-Arm-Only Drill: Create a "gate" to putt through by holding an iron upside down with your left hand. Use your right hand to "putt past your center" without prematurely rotating your chest.
Suspension-Point Error No. 2: Handle Carry
What It Is: Moving the handle of the club more than the putterhead during the takeaway, resulting in a backward-leaning shaft at impact.
What It Does: Launches the ball with too much backspin, which creates an unpredictable roll; it also produces a closed putterface at impact and pulled putts.
The Fix: Try My Stacked-Quarter Drill.
Stacked-Quarter Drill: Stack two quarters about four inches behind the ball on an extension of your target line. Your goal? Keep the stack intact as you swing back and through.
I can't imagine worse advice than "take the putterhead back low and slow," yet you hear it on TV and read about it in books and magazines all the time. Think about it: If your stroke is meant to copy the swing of a pendulum, it has to rise at the far ends of the arc (the "H" in the diagram here). Why would you fight this natural arc by keeping the putterhead low to the ground? (For those of you keeping track at home, your stroke actually arcs in two directions: one relative to the target line and the other relative to the horizon.)
To ensure that you start your stroke properly, punch a small hole in the green using a tee and place your ball in the resulting depression. Grab two quarters and stack them on top of each other about four inches behind the ball on an extension of your target line. Rehearse the motions outlined in the Suspension-Point Drill. After a few strokes, drop the putter grip into your hands and make a stroke. If you maintain your suspension point and don't carry the handle, the putterhead will trace the correct vertical arc and just miss the stack of quarters both going back and coming through. (Make sure you're missing the quarters because you're swinging the putter on an arc and not simply lifting it off the ground.) The drill is complete after five successful strokes.
Suspension-Point Error No. 3: Trail-Hand Flip
What It Is: Your chest and left arm prematurely decelerate in the forward stroke while your right hand continues to apply force.
What It Does: Creates poor contact, makes distance control difficult, and produces putts that start to the left of the intended line.
The Fix: Try My Lead-Arm-Only Pause Drill.
Allowing your bottom hand (the right hand for most players using a traditional grip) to dominate your stroke is usually a result of poor rhythm, a premature deceleration of your chest and arms, and an unhealthy anticipation of impact. To rid yourself of this affliction, find a straight three-foot putt and lay an alignment stick on the green with the tip nearly touching the inside edge of the cup. Set up to the inside of the stick and set a ball on the opposite side of it so when it comes time, you can stroke the ball into the hole without any interference.
While holding your putter with your lead hand only, make five practice strokes just inside the stick, focusing on the spot on the ground where the ball would be. On each stroke, pause at the end of your motion. Note the distance that the toe of the putter has moved away from the alignment stick, as well as any face rotation. Because this is a three-footer and the swing is relatively small, the toe of the putter should remain close to the alignment stick from start to finish. (Remember, the putterhead travels on a _slight_ arc, with the putterface remaining square to the arc throughout the motion.) If you make the mistake of allowing your bottom right hand to "flip," the face will look noticeably closed at the finish and the putterhead will be substantially inside the rod. Following these five practice strokes, make five more with both hands on the club and remember to check your finish. They should look identical to those made with just the lead hand on the club.
NO! Flipping the putterhead ahead of your hands through impact is a big-time putting no-no. The loss of suspension point resulting from this error closes the face. You'll pop the ball up, have difficulty controlling distance, and miss left most of the time.
After completing these strokes, address the ball on the other side of the stick and make the putt. Once again, you will be checking the finish while using the stick as a reference. The drill is complete after you make five three-footers with the correct clubhead orientation in the finish. Ten practice swings and five or so putts shouldn't take more than a few minutes, but give it full focus and pay attention to the details for maximum benefit.
Lead-Arm-Only Pause Drill:
Part 1: Make practice strokes using an alignment stick as a guide. Check that the putterface hasn't over-rotated or swung too far to the inside at the finish. It will be difficult to consistently make short putts if it does.
Lead-Arm-Only Pause Drill:
Part 2: After assessing your finish position relative to the alignment stick on your practice strokes, putt for real.
ONE MORE DRILL FOR GOOD MEASURE
Roll practice putts with your palms open and facing each other. This will teach you to maintain a constant grip pressure throughout your stroke and activate your "big" muscles to help you control the putterhead and maintain suspension point.
Another great exercise that will solve practically any suspension-point error is the Prayer Drill. It's one of my favorites because it creates an awareness of the muscles you need to use in order to maintain your suspension point throughout the stroke—the key to swinging your putter like a pendulum.
To start, get into your normal address position, then open your hands and press your palms against the sides of the grip. Your palms should be directly facing each other, as though you're praying. Now make a few practice strokes, and then roll some putts for real. Because your hold on the putter has been weakened, you're forced to use the bigger muscles in your chest, shoulders, and core to move the putter, which allows you to keep your grip pressure constant throughout the stroke. If you can sink putts performing this drill, you're on the right track.
For those of you who prefer working with a training aid rather than a drill, I like one called True Pendulum Motion—TPM. It's a device that hooks onto your putter, is easy to use and effective, and creates the same sensations the Prayer Drill does.
IN-STROKE FOUNDATION NO. 2: STABILITY
The second in-stroke fundamental that affects your ability to start the ball on line is stroke stability. For ultimate performance, you need to stabilize in two very different ways. The first is by engaging your core, a process that entails aligning your pelvis to your spine and holding everything in place as you putt. This is critical because it allows you to disassociate your upper and lower body, a necessary step for maintaining your suspension point. Any change in balance in any direction or any lower-body rotation will wreak havoc on the shape of your stroke. As such, when you take your address, you must hinge at the hips and set your core in a manner that promotes stability.
In Chapter 3, I mentioned that your weight should be evenly balanced between the heels and balls of each foot. This is important, because if you set up with your weight too far back toward your heels, it's nearly impossible to get your hips and spine in a neutral alignment so you can properly engage your core. The telltale sign of this error is a rounded upper back at address, which inhibits arm swing and promotes recurring back pain during practice.
The simplest way for me to help you learn the feel for the correct balance and hip stability at address (which will carry forward into your stroke) is to create _instability_ and then let your mind and body's natural inclination for balance take care of the rest. With that in mind, I created the following drill.
Worm Stability Drill
Step 1: Roll up a towel into a "worm" shape.
Step 2: Place the towel under the arches of both feet, then get into your address position and rock forward.
Step 3: Rock backward toward your heels.
Step 4: Settle your weight over your arches. Now you're balanced and stable.
Worm Stability Drill
Take a towel and turn it into a worm-like roll approximately two or three inches in diameter. (If the towel lacks the necessary thickness, you may have to fold it once before rolling it to get the proper size.) Once you've made your "worm," lay it on the putting green and grab your putter. Set up for a practice stroke while standing on the towel. Make sure the towel is directly under the arches of both feet. You're now unstable. Rock onto your toes and you'll immediately recognize that this position doesn't feel strong or athletic. Rock in the opposite direction, all the way back to your heels, and you'll experience even more instability. After a second, shift your balance to find the middle ground between your heels and toes, right over your arches, and then hold your pose. As your body fights for stability with that balance, it will automatically shift your pelvis to a position in which it aligns with your spine, and your core muscles will engage. This is the correct amount of hip hinge and posture for you. Finish the drill by making ten continuous strokes back and through to learn the true feel of disassociating your hips from your chest and shoulders.
Hip and spine muscular stability are important if you want to be able to perform your fundamentals with ease. The specific muscle groups called into play are the gluteals (maximus and medius), deep core stabilizers (multifidus and transverse abdominis), and outer core muscles (rectus abdominis, quadratus lumborum, erector spinae, etc.). If you're going to leave no stone unturned on your way to putting greatness, then you should add a few simple exercises to your weekly routine to make it easy. The foremost expert in golf biomechanics, Dr. Greg Rose of the Titleist Performance Institute, shows you how in a special video. Watch the clip and get after it.
HOW TO STABILIZE YOUR STROKE WITH YOUR EYES
The second way to create in-stroke stability involves your eyes. How you use them before, during, and after your stroke is critical to performance on the greens. When most players think of vision's role in putting, the first thing that comes to mind is quality of perception. Am I 20/20? Can I perceive the nuances and cues provided by the green in order to read it correctly? For the first thirty-three years of my playing and coaching career this was my view, but now I know that it's more appropriate to think of vision in terms of how your eyes best communicate with your brain so that your muscles get the information they need to stroke the ball into the cup. There are tricks to it, and it's a topic I broach with every student, because it's often a clear line of demarcation between great and poor putters. Eye movements are both physical fundamentals and beacons of mental strength. And as proof that life is often stranger than fiction, I learned all this one sunny day on the way to the loo.
Expert at Work! Dr. Greg Rose of the Titleist Performance Institute explains how to strengthen and train the muscles responsible for in-stroke stability in this special video at jsegolfacademy.com/index.php/rose-stability.
It was around 2000, and I was at Colonial Country Club in Fort Worth, Texas, for what was then the PGA Tour's MasterCard Colonial Tournament. My busiest day when I'm out on Tour is usually Tuesday, and I remember being out on the putting green with one of my clients, Skip Kendall, whom I started coaching in 1998. After finishing with Skip and before tracking down my next client, I walked into the players' locker room and was met with the most unexpected sight: In the hallway, just outside the bathroom, was a man sitting at a small desk housing a sizable computer and several strange-looking gadgets. Next to him was a Tour player stroking putts on the carpeted floor wearing a weird-looking helmet with wires connecting it to the equipment on the desk. Generally, I'm a "mind my own business" kind of guy, but this setup was too bizarre to ignore. I walked over to the line to eavesdrop.
Standing in front of me was PGA Tour player Loren Roberts, known as the "Boss of the Moss" for his legendary putting ability. Within minutes, a technician began applying sensors to various parts of Roberts's scalp, then handed him the helmet. It looked like something a Hells Angel would wear, except for two small cameras mounted on top (think GoPro) and two small mirrors on each side that protruded out like the eyes of an insect. When given the green light, Roberts addressed a ball and stroked it into a fake cup approximately ten feet away. As he putted, an electroencephalogram (EEG) recorded his brain's electrical activity on a scrolling piece of paper while the helmet cameras tracked and recorded his eye movements reflected in the mirrors. At the conclusion of the test, Roberts shed the helmet and wires and sauntered off. I was the only bystander left and since no one else was around, I did what anybody would do in that circumstance: I stepped in to take my turn.
On went the sensors and helmet. I addressed the ball and hit my best putt. As the gear was removed, I glanced down at my EEG graph. It looked like it had just recorded an earthquake, with wild spikes running up and down across the paper readout. What interested me was that my EEG was completely different from Roberts's. His readings were nearly flat, with just a few gentle waves. Not understanding the significance, I asked the man sitting at the desk what it all meant. He told me that when your eyes change focus from one object to another, when you blink, or when you take on a new thought, your brain generates neural electrical impulses that result in spikes in the EEG printout. An epiphany hit like a ton of bricks raining down from above: Great putters (and few are greater than Loren Roberts) putt with quiet eyes and a calm spirit, and despite 30-plus years of experience and immeasurable training, I had neither.
Ironically, I chose to study and dedicate my professional coaching life to the short game, largely because when I played in college and on the Asian and South American tours, my struggle with this aspect of the game kept me from fulfilling my dreams. Deep down, I craved answers to my short-game problems. My stroke looked good, but I often played without courage, and never putted to Tour standards. Unlike the Boss of the Moss, I had a "noisy" brain. Perhaps that's the real difference between those who are meant to play and those who are destined to coach: The former execute with internal coherence, the latter in complete chaos.
Honestly, I don't know the name of the organization responsible for the testing at Colonial. I never thought to ask, which seems a bit stupid now, but the only thing that mattered as I walked out of the locker room and back out onto the practice putting green was that there was a key attribute to putting that I knew nothing about. Once I returned home, I immediately sought out and read as much research as I could on vision and eye movement.
The most interesting thing I came across was a process called the "Quiet Eye" technique. The term was coined in 1996 by Dr. Joan Vickers, head of the Neuro-Motor Psychology Laboratory at the University of Calgary. Quiet Eye refers to the gaze behavior just before, during, and after movement in aiming tasks such as shooting a basketball or putting. Almost all elite putters have highly efficient gaze patterns, built from the rapid, precise eye movements scientists call "saccades." In an efficient scan, the golfer looks toward the target, locates it, fixates, then turns back to the ball, settling his or her eyes at address for at least one second before starting the stroke. (In other sports such as tennis or basketball, a successful scan lasts just fractions of a second—think Steph Curry hitting a fadeaway jumper. Most successful golfers are much more deliberate with their eye scans because they're afforded the luxury of time.) The communication flow from the eyes to the brain and muscles is subconscious and clear, creating zero doubt about the target's position in space as well as where the club is going to contact the ball.
Your eyes are the window of the soul—and the cup. Training methods such as the Quiet Eye technique not only help you get a better picture of the line and expected roll, but allow your innate athleticism to propel the ball down the starting line.
Once good putters settle their eyes, they react to the information sent to their brain. Their physical focus remains constant during the motion and for a count after. Poor putters, on the other hand, often have imprecise scan paths. Their gaze to the target is often too short in duration, inconsistent, and far too general. In addition, their physical focus often changes during their putting motion, usually moving toward their left foot as the putter nears impact. They're not sure where the target is in space, and the neural electrical pulses generated by the change in focus can create involuntary muscular contractions at precisely the wrong time.
The goal of the Quiet Eye technique is effective communication between your eyes and brain as they sense the target and the ball, as well as maintaining a quiet mind throughout your stroke. As such, I view the physical skills involved in utilizing the Quiet Eye technique to be synonymous with the mental attribute of having a "calm spirit," because if you're overly concerned or worried about a putt, it's very difficult to keep your gaze quiet during your motion. The eyes are the window of the soul; liars often look down and to the left just after lying, and highly trained poker players can detect stress in their opponents just by watching their eyes. Clearly, it takes a measure of trust to send the ball on its way without changing your focus to anticipate impact or the impending result. More on this in Chapter 8.
Note: At numerous times in this book, particularly when discussing eye movement, the physics of green-reading, and visual perception, I'll be offering an amateurish summation of research done by extremely smart and highly trained people. For a more accurate and complete view of this scholarship, I suggest you search out and read the works of Dr. Joan Vickers, Dr. Debbie Crews, Léon Foucault, Joel Pearson, H. A. Templeton, Mark Sweeney, and Dr. Carol Dweck. My degree is from the School of Hard Knocks and three decades of coaching, not from a highfalutin graduate program. My advantage is that as a tournament player, I have both yipped putts under pressure and come through in the clutch, and as a coach my simplified understanding of academic research has helped me lead several players from a sense of helpless affliction on the greens to a rediscovery of greatness. If my summations are dumbed down a bit, it's not to diminish the quality of these individuals' research, but to help my students improve as easily as possible. Great coaching not only doesn't have to be complicated, it can't be.
HOW TO TRAIN YOUR EYES FOR STABILITY: THREE-POINTS DRILL
Find a gently breaking six-foot putt on the practice green. You'll need a ball, a coin, a tee, and your putter for this exercise. Place the ball on the green. Next, read the putt and mark the line on which you think the putt should start with a coin, about a foot or two in front of the ball. Then, mark where you think the putt will enter the hole, and insert the tee into the lip of the cup on an angle so that it points toward the entry point. (I'm using a training aid called the Putt Pocket by SKLZ to mark the entry point in the photo below, but an angled tee works just as well.) You now have the three critical points for your eyes to scan and focus on: the ball, the starting-line coin, and the entry-point tee.
Three-Points Drill: On every putt, focus on the ball, the starting line, and the entry point, scanning your eyes back and forth between all three objects. This is the best way for your brain to know the precise location of the target. Visualizing three points is the only way to properly visualize line and speed. If you only look at two points, you may get the line right, but probably not the speed.
Using binocular vision from behind the ball, visualize and connect your three points. Maintain this focus as you walk toward your ball and set up to it. Slowly turn your head and shift your gaze so that your eyes trace a line from the ball to the coin. Hold for a count, then continue your eye scan to the precise point at which the ball will enter the hole (the tee). Stare at the tee for one to three counts, or until you're certain you're locked onto the target. Next, reverse your eye scan, from the entry point to the coin and back to the ball. Once your eyes settle softly on the ball, make your stroke, reacting to the picture in your mind and maintaining that focus throughout your motion and for a full count after it ends. The drill is finished when you complete five strokes with perfect eye scans and quiet eyes. (If you scanned a right-to-left putt in this training session, switch to a left-to-right putt on the next one.) Not only will this drill give you laser-like focus, it'll train you to calmly stroke putts, as Charley Hoffman and Tom Pernice say, "back and through past your head," which is where your eyes are focused.
The secret power of the Three-Points Drill is the addition of the third point—most drills and most coaches only require you to scan two (usually the ball and the start line). By connecting the ball, start line, and entry point, you're defining not only your line, but also your intended speed, because there's only one speed at which the ball will roll over both points on a breaking putt. If you only look at the ball and the start line, it's extremely difficult to visualize—and produce—the correct speed; putts that start on line can still miss high or low if the speed of the ball is too fast or too slow, respectively. Getting the ball into the hole on breaking putts demands that you match the speed to the line. Likewise, if you only look at the ball and the hole (and ignore the start line), you'll undervalue break and miss disproportionately to the low or "amateur" side of the cup.
Vision Quest
Students often ask what part of the ball they should look at. Dr. Vickers suggests the back, but the research appears to be inconclusive. I've had different students putt well looking at the back of the ball, the top, the target side, and even at a blade of grass in front of the ball with either a hard or a soft focus. The last choice works best for me, but experience tells me that it won't necessarily work for you. Or it might. You'll need to experiment with each to find the one that allows you to execute the correct fundamentals on a consistent basis.
JOURNAL WORK
Now that you've worked through the fundamentals of starting the ball on the correct line, take a moment to write down pertinent technical keys and feels in your journal. Note how far you stand from the ball when you confirm your setup in a mirror, along with any other particulars in your stance. Describe the sensations you feel when you're performing the Suspension-Point, Worm Stability, and Three-Points drills. In no time at all, you should have a bullet-point list of the keys that you'll commit to mastering. Use the example below to help you.
MY PUTTING SOLUTION TECHNICAL KEYS
* Stable athletic stance.
* Toe line two putter widths from inside of ball.
* Nose just behind putter.
* Relaxed arms, square shoulders, shaft straight up and down (and pointing at my sternum).
* Suspension-point maintenance: "feel as if my hands and chest work together."
* Quiet eyes, precise three-point scan. I feel most calm looking softly at the top of the ball.
CHAPTER
5
The Feedback Loop—Confirming Your Foundations
The power to perform doesn't come from knowledge. It comes from execution.
Knowing how to set up and get the ball started on line is an important first step in becoming a more effective putter. Unfortunately, this knowledge is useless unless you can actually apply it to your setup and stroke when you're out on the course.
I learned this valuable lesson from one of the first students to enroll at my academy at Shadow Ridge Country Club. He was a very successful and respected entrepreneur, and after helping him work on his putting one day, I invited him to my office so we could write out his technical keys and training structure. As we wrote in his journal, he began sharing some of the secrets to his financial success. At one point he said, "Regardless of what you're trying to accomplish, James, remember that you can't improve what you can't measure." It resonated with me immediately—just knowing what to do isn't good enough, and not having a way to confirm that you're doing what you think you are leaves enormous room for doubt. Moreover, resting on the laurels of previous success and believing it'll automatically carry over to future execution often leads to disappointment.
As a player, you need to know with absolute certainty that you can execute to standard before you compete. In other words, everything you deem crucial to performance must be quantifiable. This is as important for mental strength as it is for mechanics—regular confirmation will make you more resilient as your round unfolds. When you miss a putt—and you will miss, because we are all imperfect and putting is difficult—you need to know that your misses are the result of errors in judgment, timing, or commitment. They're temporary. Misses that result from faulty mechanics, on the other hand, are not. These will lead to recurring disappointment.
SIGHTING YOUR "GUN"
I liken this process to what a professional rifleman goes through before a competitive event. He takes time to sight his gun beforehand, so he can't blame it later if and when he happens to miss. Similarly, your first task when starting a new training session is to take five to ten minutes to confirm that your fundamentals are in order and reaffirm your ability to start the ball on line. There will be days when this runs smoothly; other times you'll have to make a small adjustment or two, which will give you your "feel" for the day. Regardless, it's a critical first step, as regular fundamental checkups prevent you from playing with a false sense of confidence or relying on a feel for too long. Great putters realize that fundamentals never change, but the feels for executing them often do.
The type of practice you're going to employ to effectively confirm your putting foundations is called guided block practice, and your goal is to perform these checks as quickly and efficiently as possible. The Suspension-Point, Mirror, and Worm Stability drills from Chapter 4 are great examples of this type of exercise and should make up a good portion of the first part of any practice session. The second part should involve checking the elements that directly affect your ability to start putts on line: stance, aim, path, face angle, and centeredness of hit. Here's where a guided block-practice station comes in handy. A practice station is a practical arrangement of lines and checkpoints that allows you to confirm any setup or stroke fundamental on the spot. Tour pros use them all the time. Some are simple, while others are elaborate. Here are a few of my favorites. Experiment with each station until you find the one that works best, or as some of my students have done, develop your own.
Guided Block-Practice Station No. 1: Richard Lee
PGA Tour player Richard Lee's guided block-practice station.
In this block-practice station, PGA Tour player Richard Lee has read a putt and snapped a chalk line on the green to mark his aim line to the hole. To confirm that he's starting the ball on the perfect line when he practices, he putts through a ball-width gate that he's created with two tees set about a foot in front of the ball. He's also erected a second gate through which his putter can barely fit to groove a neutral path through impact and sweet-spot contact. An alignment stick helps him confirm that his feet are in the correct position relative to the ball, while a tee marks his ideal distance from it.
With the station in place, Richard completes ten perfect block-training reps, as follows:
1. He aligns the ball to the specific point on the cup that represents his start line.
2. He stands behind the ball and visualizes three points (ball, start line, and entry point).
3. He holds an image of the ball's roll and speed and walks into the putt clear and committed.
4. He engages with the target and makes two practice strokes for speed.
5. He addresses the ball, checking foot position and matching the line on his putter to the line on the ball.
6. He scans his eyes through all three points.
7. He makes a calm, committed, and reactive stroke in which the putter cleanly passes through the rails.
8. He rolls the ball through the tees while keeping his eyes quiet for a full count past impact.
This may sound like a lot, but the entire process takes less than twenty seconds to complete (not including the time required to set up the station), which means that you can finish your block-practice training within six minutes before moving on to the next phase of your training. It doesn't take much time to train effectively as long as you're disciplined and focused. Richard and other players I coach will tell you that I don't get riled very easily. The exception is when a player chooses to train in a half-assed or unfocused manner. Then it gets ugly.
Six minutes—a small investment for such a huge reward. In this small time frame you'll know with 100 percent certainty that your stance is correct, your aim is perfect, your path is neutral, the ball is being struck on the sweet spot of your putter, and the face is returning to square at impact (as evidenced by the ball passing through the tees). That's a ton of confirmation. In addition, you'll run through ten great reps that train your eyes to scan all three targets, communicate where the target is to your brain, and execute your stroke with quiet eyes and a quiet mind. Wow!
Does your current training do all of that? Probably not. I'm sorry, but plopping three balls down on the practice green and rolling putts from the same spot to the same hole without so much as glancing at the break is not effective training. There's no feedback when you practice this way, which means that you can't measure your fundamentals and ability to execute. Therefore, when you make mistakes, you have no way to learn from and correct them. In fact, you'll end up repeating them.
BONUS: TRAINING YOUR AIMER
Most players, even good ones, often misalign or incorrectly aim their putter, causing good strokes to miss their mark. The good news is that poor aimers aren't doomed to aim poorly forever; your ability to execute this skill is malleable and can be trained. To get it done, your brain and eyes just need help consistently seeing the truth: assessing the line from two different perspectives with instant feedback. In Richard's execution of a perfect rep during his block station practice, he first sights his start line from behind the ball using binocular vision (step 2) and then views the same image from his setup perspective after aligning his putter (step 6). As he sees how his putter is aligned and then scans down his start line, his eyes and brain retrain to see reality. In my experience, it takes approximately three weeks of consistent work to "retrain your aimer."
Guided Block-Practice Station No. 2: James Sieckmann
The block-practice station I use is quicker to set up than Richard's, so if your time is limited, try this one.
Step 1: On a relatively flat lie, mark the ball's position on the green with a Sharpie and place a dime on the ground two feet in front of it.
Step 2: Measure the ideal distance of your toe line from the ball and mark it with a tee (as determined in the Mirror Drill from the last chapter).
Step 3: Place two ball sleeves slightly wider than putter-width apart on opposite sides of the ball.
Step 4: Roll a few putts over the dime, stopping them in the same spot. Mark this position with a phony hole or tee.
My process for completing a successful rep is identical to the one listed in Richard Lee's block-practice station, but because I don't putt for a living (and neither do you), I end the session after five balls instead of ten. I prefer my setup to Richard's, because laying down a chalk line on the green—and doing it in just the right place—can be difficult and messy. And because I don't use an actual hole, I can use any portion of the practice green or putt with any break. As long as I hit the dime, I know I'm executing the right things, but as I said earlier, every golfer is different. If you prefer Lee's station over mine, I won't be offended. My goal is to make you a better putter, whatever it takes.
My personal guided block-practice station. Quick and dirty—but it works.
Guided Block-Practice Station No. 3: Dave Pelz Putting Tutor
If you read my first book, _Your Short Game Solution_, you know that Dave Pelz gave me my first coaching job after I decided to quit life as a playing professional. Dave's guided block-practice station makes use of his popular Pelz Putting Tutor training aid (pelzgolf.com). The Pelz Tutor is a fantastic learning tool. It uses steel marbles instead of tees to create a gate for the ball to roll through so you can check your ability to start putts on line. If you knock the marble on the left out of place, you know that the putt was pulled; if you dislodge the marble on the right, you know you pushed the putt. A thick, white sight line makes it easy to aim the Tutor in any direction you want, and you can change the width of the marble gate to increase or lessen the challenge.
Guided block-practice station with Pelz Putting Tutor.
You can't cheat the Pelz Tutor—either start the ball on line or you'll end up chasing marbles. I'll often set up my practice station using just the Tutor. A lot of golfers putt under a string anchored by two pencils or two tees to check that they're starting the ball on line like I'm doing in the photo here, but I think aids like the Pelz Tutor are much more effective. (If I use a string line, I will almost always do so in tandem with the Pelz Tutor.) My longest-running Tour client, Tom Pernice, Jr., was a big "putt-under-the-string" guy, but the feedback wasn't specific enough (though neither of us knew it at the time), and Tom would commonly push or pull the ball ever so slightly toward the high side of the putt without realizing it. His putting frustrated him for years—we all know the agony of striking a putt dead in the sweet spot and watching it burn the edge or lip out on the "pro" side. When we finally added the Pelz Tutor to his block-practice session, Tom hit the marble on the high side of the putt for two weeks straight. You can't buy that kind of feedback. Eventually, Tom learned how to fight his subconscious tendency to fudge his start lines to the high side and instead commit to the line he selected during his read. He had to feel as though he was going to miss low in order to make a breaking putt. During the final round of the 2014 Charles Schwab Cup Championship on the Champions Tour, Tom drained several clutch breaking putts down the stretch to win and grab his biggest prize check in more than eight years. He looked calm as I watched him on the television screen, but I knew how hard he was battling on the inside. Tom's commitment to process and his desire to work both hard and intelligently is truly an inspiration.
Guided Block-Practice Station No. 4: Julieta Granada
In the photos below, you'll find LPGA Tour player Julieta Granada checking her foundations. She uses a Perfect Path Trainer (eyelinegolf.com) to create three gates for her putter to travel through. (Her big error is taking the putter back to the inside, which creates contact out toward the toe of the putterface.) Her block-practice station nips both errors in the bud. The Path Trainer also has a cutout line for perfect face alignment, which Julieta marks with a Sharpie. This allows her to quantify her starting line, which she identifies in this example by pegging a tee just beyond the inside right edge of the cup (you can see it if you look closely).
LPGA Tour player Julieta Granada's guided block-practice station.
Although she starts her process by visualizing her three points and walking in with that picture in mind, I added a step where she stands up straight to "stack" her nose over her sternum and belt buckle before she hinges from her hips to settle into her posture. Adding this move into her process ensures that she sets up with her dominant eye two inches behind the ball and with her shoulders square instead of in front and open, which is her most natural position. The alignment rod marks both her ideal distance from the ball (determined in the Mirror Drill) and the perfect ball position relative to her feet.
Julieta is talented and dedicated, and she made great strides during the 2014 LPGA Tour season (the first full year we worked together). Her putting rank climbed from 49th to 7th, and her Tour earnings more than tripled from the year before. Follow her example and you can reap the same rewards.
CREATING YOUR BLOCK-PRACTICE STATION
Ultimately, you need to trust your fundamental mechanics and developed skills in order to free up your stroke and simply react to what you see. However, it's impossible to trust something over time if it shows itself to be unworthy. For experienced players, the work has to come first. Some measure of trust has to be earned.
Gather the tools you'll need to build an effective block-practice station and to perform the various drills throughout this book. Whether it's a household item or specifically designed training aid, these tools provide the feedback necessary for you to confirm your fundamentals so you can take them to the next level.
Open your journal and revisit the list of technical keys you created at the end of Chapter 4. Now that you have a few practical examples of how the pros train for success, follow their lead and experiment with building your own feedback station with the goal of solidifying the critical skill of starting the ball on line. Think of the different drills presented in this chapter and the tools you'll need to efficiently—and correctly—measure your execution of each key. Many of these items are already in your bag (balls, tees, coins, etc.), but a few (alignment sticks, training aids) may require a trip to your local golf retailer or e-commerce store. For the tech geeks out there, visit www.blastmotion.com/products/golf or the site I most commonly use, www.eyelinegolf.com.
Keep in mind that your guided block-practice station will likely evolve as you learn or grow in your practice. In the meantime, kick-start your improvement by investing about ten minutes per training day drilling your foundations until you execute them consistently enough to "win the station" (a 70 to 80 percent success rate over ten minutes of practice). In my experience, it takes two to three weeks for your old motor pattern to evolve into a new one that produces consistent results. Be fair to yourself—stay patient and give it time.
JOURNAL WORK
My Block Practice—
* Worm Stability Drill (one minute)
* Suspension-Point Drill (one minute)
* Practice Station: guided feedback for starting the ball on line and confirming setup (eight minutes)
* Tutor with gate for putter path
* Tee to mark distance from the ball
* Alignment stick for foot placement
Expert at Work! Watch me run through a sample block practice station session to check my foundations in a special video at jsegolfacademy.com/index.php/james-foundation-check.
CHAPTER
6
Skill 2:
Green-Reading Facts and Processes
Starting the ball on line does little good if you can't predict the correct line to begin with, and unfortunately, it's a skill most players aren't very good at.
I spend a significant portion of most days watching people roll putts, and it's been this way for twenty straight years. As you might imagine, I've seen it all—a constant variety of motions, both good and bad. Every player is unique, and the differences in body type and function, ingrained motor programs, physical and mental experiences, and personality make sorting through it all both challenging and fun. My vantage point and objectivity often allow me to notice the small things in players' strokes that are hidden even to them. But despite this melting pot of motions, there are several commonalities, one of which truly defies logic: Most amateur players have no idea how much putts actually break, even after hitting them.
Take the following example:
Me: "Read this putt and tell me your strategy."
Student: "I read it as a little left-to-right, so I'm going to aim at the left edge of the cup."
At this point I'm thinking, "I'd play more break," but of course I say nothing. The student strikes a pretty good putt on their stated line or even a bit higher, but the ball breaks across the hole and misses low.
Student: "I pushed it."
Me: "Slow down a second. That's _not_ what happened. You hit a nice putt, it just broke more than you thought it was going to. Pushing the putt wasn't the problem. Judging the break was."
The above exchange is extremely typical. Most amateurs blame misses on mechanics, not reads, which is a huge roadblock to long-term success. Poor results and our innate desire for immediate success tempt many of us to fiddle with our method instead of digging deeper. We search without any direct knowledge regarding how our changes impact the individual variables that affect skill, which is a well-worn path into the wilderness. For the player in the above exchange, the real issue is that "he doesn't know that he doesn't know" how to read greens.
One of the strengths of my putting system is that once you prove to yourself that you have the skill to start the ball on line—whether it's simply hitting the coin in the 25-Ball Dime Test or using one of the training confirmations in Chapter 5—you're free from second-guessing your stroke fundamentals after a miss (unless a painfully obvious pattern arises). From now on, when you miss a putt, the blame falls on either your judgment about the read, the energy transferred to the ball at impact, or the process you went through that did or did not allow complete focus and commitment.
THE PHYSICS OF ROLL
There are some hardcore physics at work as your ball traverses the green, so it's best to consult science to gain a better understanding of break—undisputed facts will make you a better green-reader. This foundational understanding expedites the learning process and takes the mystery out of this important skill. At the end of the day, however, green-reading is an art. Beyond accounting for the slope and speed, you have to be able to sense the appropriate line and matching energy for the putt at hand, which may be influenced by other factors, such as wind, grain (the direction in which the grass is growing), the moisture content of the green, and how you feel physically and emotionally at the moment. In short, to be an exceptionally skilled green-reader, you need to embrace both worlds. The first order of business is to understand the basic science behind why balls roll the way they do. Then you can move on to the art: the touch, feel, and creativity that expert green-reading demands.
THE SCIENCE OF GREEN-READING
The scientific approach to green-reading began in earnest in 1984, when a United States Air Force colonel with a passion for golf, H. A. Templeton, showed in a self-published book entitled _Vector Putting_ how two variables, gravity and friction (green speed), could be used to compute break on a green. The motivation for his groundbreaking research—beyond helping his own game—was to create a computer model that television networks could use to predict the break of putts, which he felt would make telecasts more interesting to watch. Considering the state of the computer technology available to him at the time, this was a monumental task. Clearly, he was ahead of his time.
Decades later, Mark Sweeney, the founder of AimPoint Technologies, applied modern computing power to Templeton's discoveries, then partnered with the Golf Channel to insert a live graphical overlay of the optimal putt line into its broadcasts starting in 2007. If you've seen AimPoint's predictive putt technology in action, you know its calculations are very accurate. But because it relied on precise laser measurements of the greens as well as speed and slope charts, it was, in my opinion, too complicated and time-consuming to be useful for all but elite players. Subsequently, Sweeney developed a more practical way to implement his scientific approach called AimPoint Express. Personally, I'm a fan; anything that's simple to use and helps players at any level improve their skill is a winner.
I'm not a certified Vector Putting or AimPoint Express teacher, and I don't feel the need to elaborate much on either method except to share some basic facts that I think you'll find helpful. If you're interested in learning more about their methods, a quick Internet search will yield a long list of PGA professionals who are certified and would be glad to help. My interest is solely to open your eyes to a few indisputable green-reading facts discerned from science that you can implement into an artistic process in order to improve your skill. It's a process that I've shared with all my Tour clients, and I'm confident it will help you as much as it has helped them—probably more.
GREEN-READING FACTS
Thanks to research by Templeton, Sweeney, and others, golfers can now separate fact from fiction when it comes to understanding roll.
Fact No. 1: On any planar surface (with the slope only in one direction), every putt that starts the same distance from the hole shares a common aim point. Rolling your putt at this point with the correct speed will result in a make, regardless of where the ball lies on the green.
To find that one point, imagine you're putting on an oversized pool table raised up on one end (creating a two-degree slope) with a hole cut in the middle. In this scenario, on a green of average speed, read the putt for a ball sitting by the side pocket, taking into account two factors: the amount of tilt (slope) and speed of the table. Whatever point you align to above the cup from this position is the same point you'd align to from any other location on the table of equal distance.
Fact 1: There's only one aim point for all putts of the same distance on a planar surface.
Fact No. 2: The more the putting surface is tilted, the higher the common aim point shifts above the hole, and vice versa.
This is a no-brainer. The greater the slope, the greater the break.
Once you determine the aim point, you can use it for all remaining putts (I've shown five, but picture hundreds). If you hit the putt with the correct speed on pure surface conditions, it _has_ to go in.
Fact No. 3: The faster the green (the less friction acting on the ball to hold it on line), the more you must move the aim point toward the high end of the slope.
Picture the Masters. Putts hit on Augusta National's greens break dramatically, partly because the putting surfaces during tournament week run so fast.
Downhill putts (because they're faster) break a bit more than uphill putts across the same grade, so on extremely fast greens (with less friction) the area where maximum break occurs moves slightly uphill (up to thirty degrees) above the side pocket.
Fact No. 4: There are at least two "fall lines" on every green. Balls putted from anywhere on these lines must start at the middle of the hole to go in. In our pool table example, the fall lines are directly north and south of the cup.
In the real world these lines curve or meander with the slope, but the overall read for balls resting on them remains straight.
Fact No. 5: Maximum break generally occurs when your ball lies directly between neighboring fall lines. Going back to the pool table example, maximum break occurs at or slightly above the side pockets.
On a true planar surface, anything pin high is going to break a lot.
Knowing these five facts will enhance what you see and feel and will help you make sense of why balls roll the way they do. Of course, greens are rarely perfect planar surfaces—they roll, flatten out, and change direction at the fancy of the architect, which requires you to add a good measure of feel to your reads.
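If you're curious how slope and green speed trade off numerically, here is a minimal simulation sketch in Python. To be clear, this is a toy model of my own, not Templeton's or Sweeney's: it assumes a perfectly uniform slope, treats rolling resistance as the constant deceleration implied by the Stimpmeter's release speed (roughly 1.83 m/s), and the putt speed and tolerances are placeholder values chosen for illustration.

```python
import math

G = 9.81            # gravity, m/s^2
STIMP_SPEED = 1.83  # Stimpmeter release speed, roughly 6 ft/s, in m/s

def friction_decel(stimp_feet):
    """Constant rolling deceleration implied by a Stimpmeter reading on a flat green."""
    stimp_m = stimp_feet * 0.3048
    return STIMP_SPEED ** 2 / (2 * stimp_m)

def simulate_break(slope_percent, stimp_feet, putt_speed, dt=0.001):
    """Roll a ball aimed straight across a uniform slope.

    x points along the aim line (across the slope), y points downhill.
    Assumes friction exceeds the slope pull, so the ball eventually stops.
    Returns (forward roll, downhill break), both in meters.
    """
    a_fric = friction_decel(stimp_feet)
    a_slope = G * slope_percent / 100.0   # downhill pull, small-angle approximation
    x = y = 0.0
    vx, vy = putt_speed, 0.0
    while True:
        v = math.hypot(vx, vy)
        if v < 1e-3:                      # ball has effectively stopped
            break
        ax = -a_fric * vx / v             # friction opposes the velocity...
        ay = -a_fric * vy / v + a_slope   # ...while gravity keeps pulling downhill
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x, y

for stimp in (8, 11):
    for slope in (1.0, 2.0):
        roll, brk = simulate_break(slope, stimp, putt_speed=2.2)
        print(f"Stimp {stimp:>2}, slope {slope:.0f}%: "
              f"rolls {roll:4.1f} m, breaks {brk * 39.37:5.1f} in downhill")
```

Run it and you'll see the downhill drift grow when the slope doubles, and grow again on the faster green, which is exactly the behavior Facts No. 2 and 3 describe.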
There are two additional variables beyond slope and speed that can influence the roll of a ball: the grain (direction in which the grass lies) and the wind. The strength and influence of the grain is species-dependent. On bent-grass greens, which are common in colder climates, the grain tends to influence speed more than it does line. With Bermuda and other thicker-blade grasses, the effects on line are greater, although still not as prominent as the effect on speed. Improved green-maintenance processes and equipment have reduced grain's effect on the ball's roll in general, which has removed a layer of judgment. You're lucky. When I played on the Asian Golf Circuit from 1989 to 1993, courses such as Wack Wack Golf & Country Club outside Manila featured grain so strong it could literally "push" putts up a slight slope. This made choosing the correct line extremely difficult, especially for a guy who grew up putting in Nebraska.
Regardless, my advice is to think mainly in terms of speed, not line, when assessing grain. Look down the line of your putt and determine if the grass looks shiny or dark in hue. If it's shiny, you're putting down-grain—the putt will be fast and break more. If the grass looks darker in color or flat, you're putting into the grain, and the ball will stop more quickly and break less (and break late).
Wind, on the other hand, can have a much more dramatic effect on line. My testing with the Perfect Putter (pictured opposite) indicates that a 20 mph crosswind can move the ball up to four inches on a straight 20-foot putt on a green measuring 11 on the Stimpmeter. That's almost a full cup! My studies also show that the faster the speed of the green (that is, the less friction there is), the more dramatic the wind effect. Of course, you should also account for the wind's effect on speed, which can be significant, especially if the greens are fast. Note that if the grain, wind, and slope are all going in the same direction, the overall effect is tripled. Expect the ball to move a lot!
Rolling balls on the green with the Perfect Putter. Objective tests like these make discerning break easy and fun.
Clearly, there's a lot to consider when predicting the line of a putt, including how you feel at the moment, which ultimately makes the act of green-reading an artful exercise. Science is a solid base, but in the end, sensing just how much break to play is a right-brain activity—there is definitely need for added touch and feel. The question is, how do you learn art?
THE ART OF GREEN-READING
If the thought of assembling all of the pieces of the green-reading puzzle sounds daunting, you should feel confident, for two reasons. First, the fact that it's difficult means that most players won't make the effort to master it, giving you a competitive advantage when you apply these concepts correctly and consistently. Second, I believe meaningful learning is self-learning, and that it ultimately takes place in the subconscious. I don't want you overthinking or "doing math" on the greens—I want you to react to what you see and feel. Tom Pernice, Jr. and Brad Faxon could easily recite the scientific facts of green-reading, but that's not what makes them great. Elite green-readers simply sense what's required, nothing more. Michelangelo was a great scientist, and he used that knowledge to enhance his art, but he didn't paint by numbers.
Skills develop incrementally over time, not all at once. This one's no different. Any incremental improvement in your ability to see and feel putting lines as an artist will show up on your scorecard. Growth is the key, not perfection.
HOW TO BUILD AN EFFECTIVE GREEN-READING PROCESS
Every shot needs a process—an action plan that takes you from cluelessness to a clear and committed strategy that gets you ready to play. On the putting green, you'll execute two processes, one that starts when you walk onto the green to determine the break and speed of the putt, and another to keep you focused as you walk into the ball. Your green-reading steps (Process 1) should be organized and well-defined, so that the manner in which you look for and feel the cues that the green yields unfolds in the same order and features the same actions every time. The continuity of this process, along with your base knowledge of green-reading facts, will allow you to improve your green-reading skills with each experience. (We'll get to Process 2 in the next chapter.)
When building your green-reading process, keep in mind that you should read the green or decide on the line only once. As simple as this sounds, it's not common. Many players look from behind the hole and read the putt as right edge, and then walk behind the ball and see the same putt as a cup out. Now what? Clearly this is a bad strategy. A better alternative is to gather facts in a step-by-step fashion until you have all the information you need to read the putt—then read it once. Now you can step in clear and committed to your vision.
If your green-reading cues are defined and accounted for in each step, you won't forget to look and feel for them on any green in any round. The information gathered will allow you to tap into your subconscious instinct and give you a definitive aim line, putt shape, and speed. This continuity is critical. Meaningful experiences build on one another until they become second nature. At some point, you'll simply sense what to do—your subconscious will take over. This is the art of putting.
The following is the green-reading process I teach my Tour clients. Consider adopting it as your own, or edit it as you see fit. As was the case with building a block-practice station, anything you come up with that works for you works for me.
The _Your Putting Solution_ Green-Reading Process
Step 1: As you walk up to the green complex, locate the fall lines. All putts are ultimately straight along this axis.
Step 2: Walk so you can see your upcoming putt from behind the hole. Look at the surface just beyond the cup, and in your mind's eye, place a quarter two feet to the right of your approximate line and a penny two feet to the left. Scan horizontally between the two coins and compare their elevations. Note which coin is lower and by how much. This is the direction the putt will break as it approaches the cup. Reading what happens near the hole first is critical, because this is where the ball will be rolling the slowest and, as a result, where it will be affected the most by the slope. Also, it's harder to see this area around the hole when you're standing behind the ball, because you're at the farthest point from the cup. Many players also make the mistake of looking vertically from the hole to the ball. I like that gaze to be horizontal in nature, from coin to coin.
Step 3: Walk toward your ball and stop at the halfway point between the ball and hole. Did you walk uphill or downhill? If you sensed more pressure under one foot than the other, it's a strong clue that the green is tilting in that direction. "Listen" to your feet—sometimes your eyes can deceive you.
Step 4: Walk behind your ball and picture its speed as it rolls toward the hole. The speed of the ball determines the line and must be considered prior to deciding on an aim point. If you feel like allowing the putt to "crawl" in, then choose a slightly higher line. If you feel like rolling the putt firmly and taking out some of the break, go with a more aggressive line.
Step 5: Squat down and run your eyes horizontally (coin to coin) along the full length of your putt (photo, here). Since you've already determined how the putt will behave as it approaches the hole in Step 2, you should have a pretty clear idea about the overall shape as you put the pieces together.
Step 6: If you read any break, gently shift your vantage point toward the low side of the slope until you're looking directly up the start line of the putt. Listen to your subconscious—it will take care of you. Once you decide on a start line, quantify it. You can either point the line on your ball toward the aim point (see below), pick a discernable mark on the green on which to focus your aim, or use the hole as reference (for example, "two cup-widths left of the hole"). I prefer the first two options.
Your green-reading process is complete. Again, this is an example of how many of my Tour students read putts, but don't be afraid to develop your own version. We're all different. The key is that you have one (your competitors don't) and that you stick to it and master it. Continuity will breed success.
The green gives you all the clues you need to dial in the perfect read. Having an organized process to discern them is key to your success.
SHOULD I ALIGN THE BALL TO MY TARGET?
This is an interesting question, and my answer differs depending on your current skill set. If you're a good green-reader (meaning your conscious read is often the correct read), then go for it. Point the line on the ball down your aim line and commit to it. Also, if you're constantly second-guessing your aim, aligning the ball will help you because it removes doubt. However, if you're like most players and your subconscious reads tend to be more accurate than your conscious reads, place your ball down on the green without using a line and feel your way to the correct alignment. You may be best served by lining up your ball on relatively simple short putts and not on longer big breakers, which can require a more intuitive feel.
Since one of the goals of this book is to make you a highly skilled green-reader, at some point you'll be lining up the ball like Tom Pernice, Jr., Tiger Woods, and hundreds of other Tour pros. No need to rush the process. You'll know the time is right when you're passing the 10-Ball Green-Reading Test (here) with flying colors. In the meantime, continue to develop your skills.
Aligning your ball to the target line only makes sense if your initial conscious read is consistently correct, or if you tend to second-guess your aiming ability.
GREEN-READING SKILL TRAINING
The effectiveness of your training is the key to unlocking the great green-reader within. To help you develop a training plan, I suggest a mixture of both guided and unguided trials. This combination confirms that what you're doing is correct while also giving you the freedom to grow from your mistakes.
Guided Training
Jump-start your visual training by investing a bit of time with a feedback device that confirms your ability to start the ball on line, such as the Pelz Putting Tutor or similar device (check out the Slot Trainer at eyelinegolf.com), and visualizing your three points from both behind the ball and as you stand at address (here). This is a form of developmental vision training, which involves teaching your eyes and your brain to communicate effectively when assessing objects in space. Recall that Tom Pernice, Jr. performs a minimum of ten successful repetitions using the Pelz Tutor for left-to-right, right-to-left, and straight putts on his training days. Even though he reads the green once from each location, he visualizes and walks in on all thirty putts, feeding his brain a steady diet of green-reading truth. The certainty of that truth helps train his eye and build confidence in what he sees.
Unguided Training
You need repetitions without a guide or feedback to train your green-reading process and improve your skill. Don't expect perfection here—this type of training is all about growth. I developed an exercise called the Star Drill to make it easy for you. Here's how it works.
Star Drill
Place five balls randomly around a cup anywhere between three and ten feet from the hole (five points of a star with the cup in the middle). Choose a putt to read, and start your green-reading process (as always) from the opposite side of the cup. Use the green-reading process outlined here (or your own version) by locating the fall line, then look left and right of your approximate line and imagine the coins on the green to see the late break. Continue your process, choose the line, and stroke the putt. If you miss, don't turn away in disgust or berate yourself. Use the outcome to help you correct your read. Although it's impossible without feedback to know precisely what you did incorrectly, you can make the corrections in a general sense. If you read the putt as "left edge" and it broke across the cup, use a post-shot process to make a positive correction. Begin by revisiting your visual and kinesthetic cues, then internally state the solution, i.e., "a bit more break." Then take a few seconds to reimagine the putt breaking more. The whole process shouldn't take more than a few seconds, but with each repetition, your green-reading skills will grow.
Move on to the next ball in the star and repeat. The great thing about this drill is that there's always a ball lying opposite the hole, so you can get right into the process of reading the next putt with just a few quick steps in one direction. The drill is complete after you've executed your full process for all five putts.
Unguided practice drills such as the Star Drill are critical to your green-reading development because they best replicate what you'll experience when you play. Here I'm visualizing the late break from the opposite side of the hole as in step 2 of my green-reading process.
Training in a random fashion such as this emphasizes the "judging" aspect of green-reading—every putt in the Star Drill is different, so you're seeing it for the first time. If you decide to run through more than one repetition of the drill, make sure you do it at a completely different location on the green so you can up the green-reading challenge. The PGA Tour players I coach all make the Star Drill a regular part of their training routine. They know that each green-reading "miss" is an opportunity to learn and grow.
_Careful!_ Whenever you read a putt, don't look for the "apex of the break" as some instructors or high-profile players suggest. Putts often break early, especially those that follow a significant slope, which means you have to start the ball well up the hill from the apex to get the ball to roll over it on its way to the hole. If you only visualize the top of the curve, you're not looking at the correct aim line and you'll probably miss low.
Aiming at the apex of the ball's curve is a popular technique, but it's inherently flawed. As you can tell from this diagram, the correct aim line on a breaking putt points well above the apex of the curve. Looking at the apex instead of down the correct starting line will result in a low miss.
JOURNAL WORK
Open your journal to the Technical Section and add a Green-Reading tab. Write down the basic scientific facts of green reading—just a few key bullet points to cement them into your memory. Then write down the steps in your green-reading process in chronological order, as well as your intent in each one. Give every step a clear and defined purpose.
CHAPTER
7
Skill 3:
Touch and Feel
The short-game skill that most dramatically affects your ability to score is the ability to consistently putt the ball at the correct speed.
An inability to control putt distance makes scoring next to impossible, especially on "difference-making" putts between eight and 20 feet, which require perfectly matching the line with the appropriate speed. To develop great touch, not only for these putts but from any distance, the first step is to come to terms with what touch actually is, and the best way to do that is to come to terms with what it isn't. Touch _isn't_ the ability to hit the ball "hard" or "easy" depending on putt length or slope, yet that's how most recreational players think of it and talk about it. This mentality is fundamentally flawed, and it's the reason why many golfers struggle with touch and distance control despite obvious talent and years of experience.
A healthier way to think about touch is to imagine the putterhead swinging back and through like a pendulum. (We touched on the dynamics of pendulum motion when discussing the value of maintaining a suspension point in Chapter 4, but it's even more applicable to the skill of touch.) Picture a pendulum in your mind's eye right now—think of a grandfather clock or one of those "perpetual motion" desk toys, and replace the bob (pendulum mass) with your putterhead and the fulcrum arm with your hands, arms, and puttershaft. If you focus hard enough, you'll see that a pendulum:
* Swings back and forth without stopping. (Theoretically, it would swing forever if it weren't for damping.)
* Takes the same amount of time to complete each swing. This period for the bob's oscillation is the same regardless of the length of the overall motion.
* Swings faster or slower depending on the length of the displacement (the backstroke). When the displacement is small, the putterhead moves comparatively slower than when the displacement is large. Although speed changes with displacement, the overall time it takes the bob (putterhead) to complete a full cycle is a constant.
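For the physics-curious, all three of these bullet points follow from the standard small-angle pendulum formula. Here's a minimal sketch; the length value is purely illustrative, not a spec for any actual putter:

```latex
% Small-angle pendulum: the period T depends only on the effective
% length L and gravity g, not on how far the pendulum swings.
T = 2\pi\sqrt{\frac{L}{g}}
% Illustrative numbers (an assumption for this example): with L = 1 m
% and g = 9.81 m/s^2, T \approx 2.0 s for a full back-and-through cycle,
% whether the swing is five inches long or fifty.
```

That independence from swing length is exactly why a stroke made at a constant rhythm can vary in length without varying in timing.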
Picturing your stroke as a pendulum and copying its consistent oscillation period means you're putting in rhythm, which is the key technical variable related to distance control. Your rhythm, which is particular to you, never changes regardless of circumstance, which means that the energy transferred to the ball (assuming a solid hit) is dependent solely on the time (period) and the distance of the backstroke (displacement). If that's your picture, the desire to "hit" the ball is completely removed, because your lone directive is to match the length of your stroke to the energy you deem appropriate for the putt at hand and simply let the ball get in the way. Once again, this is the key concept to internalize: regardless of the putt you're facing (uphill, downhill, short, far, fast, slow, etc.), there's one perfect stroke length (made at your rhythm) for every situation. That's touch!
TOUCH: A picture in your mind or a sense of judgment for the perfect length stroke, at rhythm, given the conditions at hand. There's no hitting "hard" or "easy."
IN-STROKE FOUNDATION NO. 3: RHYTHM
We covered the first two in-stroke fundamentals earlier when discussing how to putt on the correct line. But the third fundamental, rhythm, is just as important. Any vibrating object has a resonant frequency, or state, that's coherent and most efficient. This frequency is not only influenced by the length and weight of your putter, but by your whole being (some people like to do things fast, and others like to do things slow). We can define this natural state in several ways, but the one I like best for putting is "a comfortable rate that requires the least amount of muscular effort to swing the putter."
How to Find Your Perfect Rhythm
Matching backstroke and through-stroke lengths is essential to great rhythm and phenomenal touch.
Making continuous periodic swings is the key to finding your natural resonance. Do this: Get into your address position without a ball and swing your putter back and through uninterrupted, mimicking the motion of a pendulum. Make your strokes equal in pace and length on both sides, keeping tabs on how comfortable your stroke feels and the muscular effort that you're exerting to swing the club. After about ten swings or so, slow the putter down to a crawl. Notice how much extra muscular effort this requires. Strange, right? You're actually working harder to swing the club slower. Now speed it up. As it did for the slow swings, this rapid motion requires quite a bit more effort. Pull back on the throttle and look for a rhythm somewhere between these extremes, searching for maximum comfort and minimal muscular tension. After a little trial and error, you'll settle into a pace that requires the least amount of work and feels the most natural. Congratulations! You've found your optimal rhythm.
Now that you've found your rhythm, you need to master it, which you should know by now necessitates quantifying it as well as having a way to ingrain the feel and confirm its execution. The best way to do this is with a metronome, which is a device musicians use to keep time. Both the Apple and Android stores offer dozens of metronome apps for your smartphone, many for free, which makes it a no-brainer. No excuses—download one now.
Rhythm is measured in beats per minute (bpm), and almost all proficient players that I've coached have a putting rhythm that falls within the range of 70 to 85 bpm. Let's assign a bpm value to your resonant rhythm so that you can practice it more easily and effectively. Here's what to do:
Step 1: Start your metronome app, set it to 70 bpm, and lay your phone on the green or the floor.
Step 2: Address the phone as though it's the ball, then take a half step back so you can make swings without hitting it. The position of your phone (metronome) should mark the bottom of your stroke.
Step 3: Begin swinging your putter back and forth to the beat of the metronome, making sure that the putterhead passes in front of the phone on each audible "beep." Keep your stroke equal in both pace and length on both sides of the "ball."
Step 4: Switch the setting on the metronome to 85 bpm and repeat Steps 2 and 3. Now that you've experienced both extremes of the standard rhythm scale, continue changing the beat of the metronome every few swings until it matches the optimal rhythm that you discovered in the previous drill. Your rhythm is now quantified. Start a "Touch" tab in the Technical Keys section of your journal and note the bpm number. Mine is 78 bpm, while PGA Tour player Charlie Wi's, for example, is 72 bpm.
Swinging naturally at your optimal rhythm takes the least amount of effort but gives you the maximum benefit when it comes to touch and feel. Master it by practicing to the beat of a metronome.
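To make the bpm numbers concrete, here's the arithmetic for turning a metronome setting into real time, assuming (as in the steps above) that the putterhead passes the ball on every beep, so each half of the stroke lasts one beat:

```latex
% Seconds per beat = 60 / bpm.
60/70 \approx 0.86\ \text{s} \qquad 60/85 \approx 0.71\ \text{s} \qquad 60/78 \approx 0.77\ \text{s}
% At 78 bpm (my number from above), one beat back plus one beat through
% gives a full stroke of roughly 2 \times 0.77 \approx 1.5 s,
% regardless of stroke length.
```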
Using Rhythm to Improve Your Touch
Now that you have your rhythm number, the trick to mastering your touch is to make strokes to the beat of this count while varying your stroke length, and do it comfortably. This is my Resonant Rhythm Drill, which should be a mainstay in the block-practice portion of your training. There are several benefits to performing the drill regularly. First, you can do it just as easily indoors as out, and it takes less than a minute to complete, which means you can improve this foundation during a snowstorm, a mini-break from work, or during a commercial when you're at home watching TV. Second, it ensures that you'll stroke putts at the same rhythm next week, next month, and five years from now, and that continuity—along with the feedback provided by the drill in practice—will allow you to master it. Putting in rhythm will become second nature. Humans are rhythmic creatures, and the beat of the metronome will soak into your subconscious (a phenomenon known as _entrainment_), regulating your whole process so that it becomes smooth, graceful, and unrushed. I told you it was a no-brainer.
The Resonant Rhythm Drill
Set your metronome to your beat and lay it on the ground to mark the bottom of your stroke. Your goal is to make continuous swings with the putterhead passing the metronome on each audible beat. Start with relatively small strokes—think of the energy needed to sink a five-foot putt. Once you're swinging in sync with the beat of the metronome, lengthen your stroke to create the energy required to sink a 15-foot putt, without stopping your motion. Notice how you must pick up the overall pace of your stroke to keep time with the beat of the metronome. After four or five swings at this stroke length, continue the process for a 25-foot putt and then a 40-foot putt. Finish the drill by returning to the stroke pace and length appropriate for the five-footer (think shorter and slower). This whole process should take less than a minute—not bad considering you've just nailed the key technical element that allows for great touch.
The Resonant Rhythm Drill also is a great way to prepare yourself for an upcoming round. In fact, it'll do you a lot more good than slamming a few drives on the range if you're running late for your tee time. Good rhythm affects everything.
Expert at Work! Watch me demonstrate how swinging in rhythm makes distance control intuitive and easy in a special video available at jsegolfacademy.com/index.php/james-metronome.
RHYTHM FAULTS AND FIXES
If you feel uncomfortable when you perform the Resonant Rhythm Drill, or notice that any stroke pace you make in the 70- to 85-bpm range is far different than your normal movement pattern, your current technique for distance control contains one or more of the following fatal rhythm flaws. Nip them in the bud right now! Poor rhythm is one of the most common and destructive errors in putting, and if your rhythm is off, you're adding unnecessary strokes to your score.
Rhythm Error No. 1: Length Rigidity
What It Is: Making the same length backstroke for every putt.
What It Does: Causes the putterhead to decelerate on short putts and creates a "hit" impulse at impact on longer putts, making both distance control and returning the face to square extremely difficult.
The Fix: Try my Color-Coded-Tee Drill.
Grab three sets of tees of varying colors (two white, two yellow, and two black, for example). On the green, mark the start position of your ball with a tee hole or dot it using a Sharpie. Place one tee five inches behind the starting point and just outside your stroke path line (so you don't knock the tee when you putt) and a tee of the same color five inches in front of the starting point. Put two tees of the same color five inches beyond the first two tees, and two more five inches beyond those. Step into your address position and begin making continuous strokes to get into your resonant rhythm. Once you have it down, begin to putt balls from your starting point by stroking from one tee to the same-colored tee. Start with the small stroke and end with the longest. As you roll these putts, note the feel of each swing and the distance the ball travels as a result. Don't worry about the Quiet Eye technique or the line—the goal of this drill is to help you understand the relationship between stroke lengths at rhythm and roll distance for the conditions at hand. The drill is complete after you putt two balls at rhythm for each tee position.
Performing the Color-Coded-Tee drill will train you to understand that because rhythm never changes, energy imparted to the ball at impact is dictated by the length and speed of your backstroke. Prove it by putting "tee-to-tee" for each color code (inset photos)—you'll produce three distinct distances.
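If you want a rough idea of why the three tee gates produce three distinct distances, treat the green as applying a roughly constant deceleration to the rolling ball (the same idealization behind the Stimpmeter). This is a sketch under that assumption, not a measured model:

```latex
% d = roll distance, v_0 = ball speed off the putter,
% a = constant deceleration from green friction.
d = \frac{v_0^{2}}{2a}
% At a fixed rhythm, v_0 scales roughly with backstroke length, so
% d grows with the square of the backstroke: doubling it (the 10-inch
% tees vs. the 5-inch tees) roughly quadruples the roll, under these
% idealizations.
```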
Rhythm Error No. 2: Stroke Acceleration
What It Is: Making a backswing that's too slow and short for the situation, and then over-accelerating the putterhead into the ball.
What It Does: Causes poor touch, difficulty returning the face to square, and yipped putts.
The Fix: Try the Luke Donald Drill.
The name of this drill is derived from the fact that almost every Tuesday when I'm out teaching on Tour, I see Luke Donald investing time on the putting green performing this exercise. Place a ball on a fairly flat portion of the practice green between eight and 12 feet from a hole. Push two tees into the ground, one on each side of the equator of the ball and perpendicular to the line of your putt. This "gate" will prohibit the forward motion of the putterhead after impact. Knowing this, think about how much potential energy (length of backstroke) you'll need to roll the ball to the hole with the correct speed. Don't try to "hit and stop" at impact. Relax, make your stroke, and allow the tees to stop your putter. Not only will this drill teach you to make the appropriate-length backstroke to roll the ball the correct distance without hitting at it, you'll ingrain the feel of returning the face to square at impact (the face makes contact with both tees at the same time). After five putts through the tees, complete the drill by removing the tees and stroking three more putts with the same stroke length and feel.
The length of your backstroke—not your forward-stroke—and gravity conspire to create the perfect amount of potential energy for the ball to roll the correct distance. That's what makes the Luke Donald Drill so effective: Because the tees prevent you from adding energy as you anticipate impact, the resultant roll is entirely dependent on the amount of energy you establish at the top of your backstroke. In other words, if you make a backstroke that's too short for the putt you're facing (a common error), the ball won't reach the hole. If you break the tees or push them forward aggressively, you have an unhealthy "hit" impulse at impact.
IMPLEMENTING RHYTHM INTO YOUR PROCESS
Your process is the ordered steps you go through to get yourself physically and mentally ready to play a shot, and having a great one is critical if you're going to be consistently clear and committed when you execute. For putting, I suggest you follow the green-reading process already laid out for you in Chapter 6, with a few extra steps that will allow you to putt at rhythm. For starters, experiment with each of the following four process examples, then commit to using the one that has the greatest positive impact on both your stroke and results.
Rhythm Process No. 1
After you read the green and create a mental picture of the putt's roll, walk into the ball holding the image in your mind's eye. (We'll add a physical cue to begin your walk into the ball in Chapter 8, but for now let's just worry about the overall process.) Set up just inside the ball, and swivel your head to mentally engage with the target. Begin rehearsing the appropriate-length stroke at rhythm, searching for the swing that your subconscious deems appropriate for the circumstance. (Remember, it's easiest to find your resonant rhythm using continuous, pendulum-like swings.) After you lock in the feel, turn your eyes back to the ball. Align the putterhead behind it and reposition your feet. Scan down the start line to your predicted entry point and then back to the ball, finishing your scan by settling your eyes on your Quiet Eye spot (Chapter 4). Once you're settled, react to the picture of the ball's roll you created during your read and confirmed during your scan, and repeat the swing deemed appropriate by your subconscious during your practice strokes, letting the putterhead swing under and past your eyes.
_Process used by: Dozens of PGA Tour players, including Rickie Fowler, Cameron Tringale, and Henrik Stenson._
A Tour-proven rhythm process.
1. Visualize your three points.
2. Walk in clear and committed.
3. Get into your putting posture.
4. Make practice strokes inside the ball with external focus, feeling the required energy.
5. Settle your eyes back to your Quiet Eye spot.
6. React and execute.
Rhythm Process No. 2
Some golfers tend to lose the picture of their start line when they perform their eye scan if they wait until they're next to the ball to begin making practice strokes. If this sounds like you, then alter Rhythm Process No. 1 by taking your rehearsal swings from behind the ball. Again, read the green and create your picture, but instead of walking in toward the ball, get into your putting address position while facing your aim line. This provides a binocular view of the target, which some golfers find helpful. As you look down your line and imagine your putt, start your continuous rhythmic swings and allow your stroke to match the energy deemed appropriate by your subconscious. Now walk in (still maintaining an image of the ball's roll) and immediately align your putterhead to the start line. Set your feet, scan to the target, focus your gaze on your Quiet Eye spot and react.
_Used by: PGA Tour player and longtime student Charlie Wi, who finished five of the last six seasons ranked inside the top 25 in putts per round._
If you feel like you lose your starting-line point when you rehearse your stroke standing to the side of the ball, rehearse it from behind the ball while facing the target before you step in to the ball. Make sure to hold the vision of your line in your mind's eye until you complete your stroke.
Rhythm Process Nos. 3 and 4
Some players I coach add an auditory cue to their rhythm process in order to initiate and time their stroke. Once Tom Pernice, Jr. finishes his resonant rhythm practice swings and settles over the ball, he takes one hard look at the target. When he feels ready, he turns his gaze back to the ball and counts out "one" in his head. In rhythm, he looks back at the target on an internal count of "two." On "three" he swivels his head back to the ball and on "four" he settles his eyes. He swings back during "five" and swings through to the finish during "six," maintaining his resonant rhythm on all counts. Tom has putted this way for more than two decades. Not only does counting aid his rhythm, it ensures that he always feels ready when he pulls the trigger.
Charlie Wi uses the words "perfect pace" to the cadence of his internal metronome to time his stroke. These cue words are terrific—they're a positive affirmation of what's about to happen. They're also the correct cadence. Unlike Tom Pernice, Charlie counts a beat to start his backstroke and another to impact, which is why the first word is two syllables and the second word is one syllable. It works because his backstroke is half the distance of his total stroke. If he timed his motion from backstroke to finish like Pernice, he'd use something like "smooth-smooth."
PUTT SPEED AND CAPTURE RATE
Even when armed with an understanding of rhythm and a healthy grasp of touch, many students often ask, "What speed should the ball be traveling as it reaches the hole in order for me to sink the most putts?" Differences in speed (measured in revolutions per second) change the effective size of the hole and your overall make percentage. I could explain this with some complicated math, but the concept is easier to understand with a simple image. Think of the hole as a hungry animal, sitting patiently and wanting nothing more than to capture the ball for dinner once it nears its edge. If the ball is moving so fast that it finishes six feet past the hole on a green of average speed, it's impossible for the critter to pull the ball into the hole even if it travels over its center. Gravity just isn't strong enough in that circumstance to overcome the putt's lateral velocity, making the effective size of the hole zero inches. Conversely, a ball crawling up to the hole with just enough energy to get more than half of it over the edge means it's dinner time. Gravity has its way and the lack of linear speed makes the effective size of the cup its full 4.25 inches.
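The "effective size" idea can even be made quantitative with a toy model; this is my back-of-the-envelope sketch, not a citation of any particular study. A ball rolling dead-center across the open hole is captured only if gravity can drop it below the far lip before it finishes crossing:

```latex
% D = 0.108 m (4.25-in. hole), r = 0.021 m (ball radius), g = 9.81 m/s^2.
% Crossing time: t = D/v. Fall in that time: h = (1/2) g t^2.
% Requiring a fall of about one ball radius gives the capture limit:
v_{\max} = D\sqrt{\frac{g}{2r}} \approx 1.6\ \text{m/s}
% Any faster at the hole and even a center hit skips across; off-center
% hits have lower limits still, which is what shrinks the effective cup.
```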
With these facts in mind, it certainly seems advisable to "die" every putt into the hole, but that's not what you see the pros do on TV. Why? The answer is twofold. First, the judgment and skill needed to control distance to within inches is beyond any human, and the penalty for leaving the putt short of the hole is severe. You're better off giving yourself a reasonable margin for error by trying to roll the ball at least six inches past the cup, so that a small error in speed doesn't reduce your chance of making the putt to zero. As the saying goes, "Never up, never in." Second, slope and imperfections in the green caused most commonly by footprints—which are most substantial within three feet of the cup—exert greater influence on a slow-moving ball than they do on one moving with some pace. In the real world, when a ball dies just as it reaches the lip of the cup, it tends to fall off quickly to the low side or get bumped agonizingly off line. I always chuckle at the players who react in disbelief when this happens. Surfaces aren't perfect, and if they had rolled the ball up to the hole with a little more speed, it probably would have dropped in, even with the slight reduction in the effective diameter of the cup.
PUTT SPEED STRATEGY AND TRAINING
On any putt, the goal is to find the middle ground between rolling the ball slow enough to keep the effective size of the cup at a maximum and rolling the ball fast enough to minimize bumps in the green. The ideal pace, however, varies depending on the overall speed of the green and the degree of slope. In essence, the perfect speed is one that propels the ball farther past the hole on fast greens and downhill putts than it does on slow greens and uphill putts. Barring extreme situations, here's all you need to remember:
_If you're putting on a slow green or uphill, roll the putt with enough speed to make it travel a foot past the hole if it misses._
_If you're putting on a fast green or downhill, roll the putt with enough speed to make it travel two feet past the hole if it misses._
PUTT-SPEED TRAINING SOLUTIONS
_1. Matching Line and Speed_
The truth is, you can hole any breaking putt with at least three different speeds, as long as you pick a start line that matches the speed you've chosen. Distance control is the primary skill, but learning to see the matching line is its bedfellow. To train and improve your matchmaking skill, try PGA Tour player Brad Faxon's Three-Speed Drill. He performs this exercise regularly, and is amazing at it. No wonder he's one of the best putters in the professional game.
Brad Faxon's Three-Speed Drill
Hit the practice green with three balls in hand and choose an eight-foot putt with significant break. The goal is to make three putts from the same place using three different speeds and lines. Roll the first putt with enough speed to make the ball travel four feet past the hole (maximum speed, minimum break). Roll the second putt on a pace that allows it to just drip over the high side of the hole (minimum speed, maximum break). Roll the last putt at your normal "playing" speed, so that it breaks somewhere between the first two putting lines. (To make it easier at first, place a tee on the aim line that indicates your middle-line, normal-speed putt.) Once you make all three putts, switch to the opposite side of the hole and repeat.
Expert at Work! PGA Tour player Brad Faxon demonstrates how he visualizes and works on line and speed in this special video at jsegolfacademy.com/index.php/faxon-three-speeds.
_2. Mid-Range Work_
When I first started working with PGA Tour player Skip Kendall in 1997, he told me that he putted great inside six feet but hadn't holed a putt over twenty feet in a year. (Perhaps he was exaggerating, but I got his point.) You'd think that if your stroke was good enough to consistently start the ball on line from six feet and in, you'd make your share of putts from any reasonable distance, but it doesn't necessarily work that way. Skip's problem was that he used a slow pace for his backstroke for all putts, which matched well with his ideal rhythm on putts close to the hole, but forced him to lose rhythm on longer ones. Poor rhythm makes touch and returning the face to square at impact difficult, so not only was his touch inconsistent, he was missing his start lines as well.
I consider making mid-range putts a numbers game. If you face ten putts from 15 to 25 feet in a round, and you roll all ten somewhere close to the hole with perfect speed, you'll make more than your fair share. If, on the other hand, you roll only three out of ten with perfect speed, you'll probably get shut out. The solution: Train your touch by regularly performing my 20-Foot Touch Drill. Skip bought into it, and he promptly went on a four-year putting tear.
20-Foot Touch Drill
Find a 20-foot putt with a bit of a slope and mark the starting point with a tee. Do the same on the opposite side of the hole. The goal is to hit ten great-speed putts in a row. A great-speed putt is one that goes in or finishes past the front edge of the cup but no farther than a club length past the hole. If you leave a putt short or hit it more than a club length beyond the hole, you have to restart the drill. Roll the first two balls from the first tee and the next two from the second tee, alternating locations every two putts. If you have difficulty rolling ten great-speed putts in a row, simply keep track of the number of attempts it takes you to hit ten total, and log your results in your journal.
(The Two-Hole Knockout Drill from Chapter 2 is another great drill to help you produce ideal speed on every putt.)
BREAKING DOWN THE GREAT RHYTHM DISRUPTER: THE YIPS
A yip—a spasm-like flinch that immediately sends the ball off line and at the wrong speed—is clinically known as an "occupational focal dystonia," and unfortunately, it can't be cured with a subtle change in technique or a slight enhancement of mental skill. The neural pathways used to execute your stroke are permanently damaged. Your only solution, if indeed you have the putting yips, is to change your motor pattern so dramatically that you form new pathways to carry out the act. For decades, many of those afflicted with the yips successfully "started over" by anchoring a long putter to their sternum (or a belly putter to their midsection), which fixed their suspension point and allowed them to feel a pendulum-like rhythm, both proven yip-busters. But since this method has been deemed illegal by the USGA as of January 2016, you'll have to think outside the box.
Consider starting anew by changing to the claw grip, which positions the handle in the crook between your right thumb and forefinger. Positioning your bottom hand like this effectively limits the hinging action of your wrist, creating a fixed point in your stroke and eliminating the right-hand hit impulse. Another idea is to stand on the opposite side of the ball and putt left-handed. Either strategy is a fresh start—a shock to your system that sets the foundation for the creation of all-new neural pathways. Whatever method you choose, work on developing it using the fundamentals, training protocols, and mental keys outlined in this book. Soon, you'll once again feel calm, confident, and competent on the greens. Starting over the right way may actually be an advantage.
If you have a true focal dystonia (the yips), you have no choice but to start over with a completely different grip or putting method than you're using now. This will force you to create and use new neurological pathways to execute the motion.
HOW TO BEAT PERFORMANCE ANXIETY
But before you panic, hold tight—you may not have the yips! Poor putting and uncontrolled flinches are common, but an occupational focal dystonia is not. I would estimate that 90 percent of the golfers that see themselves as having "the yips" have healthy neurology but are suffering from severe performance anxiety brought on by the toxic mix of the wrong stuff: poor technique, poor outcomes over time, and an unhealthy emotional attachment to their results.
It's a self-perpetuating process. Shoddy technique creates poor outcomes relative to known ability and expectation level. Emotionally, these results are embarrassing, and the player internalizes and imprints this embarrassment into their subconscious. Humans are naturally programmed to avoid such events (fight or flight), so when the environment is right for it to recur (similar putt, potential embarrassment looming), a player becomes racked with fear and anxiety. Here we go again! In this physiological state, the subconscious is rendered useless, forcing you to consciously command your body to execute the act. As a result, your natural fluid motion disappears and is replaced by a spastic, herky-jerky one. The more frequently these spasms occur, the more you worry; the more you worry, the more you imprint fear into your subconscious, elevating tension levels to an unbearable high. It's a natural yet vicious cycle, and it often bites even the most talented players. I understand it. I've lived it, and more importantly, I've helped hundreds of players break the cycle and perform like they know they can.
The solution to putting with greater confidence and trust is multifaceted, but it can be most simply described as "training like an adult and playing with the mind of a child." To wit, children (and any beginning golfer, regardless of age) never hyperventilate over the ball or feel untimely and unwanted muscle contractions, because even though their motor control and other skills are undeveloped, so are their expectations, which makes the results nonthreatening. The rare 10-footer that goes in is an occasion to high-five or dance with joy; missed putts—even short ones—are accepted. The player's self-image is independent of outcome and always tilted toward the positive. They enjoy the act for what it is, embracing the heroic while ignoring the pedestrian. Their healthy perspective and ignorance about the details regarding the "how" of putting leave their conscious mind quiet and clutter-free. When the "how" is removed, there's only the ball and the hole, rendering the act of putting a simple see-and-do exercise. This is the goal.
Ultimately, your ability to trust your stroke, especially when something is on the line, is dependent on its worthiness. The first step is to make it worthy of your expectation (not perfect), and the second is to relearn how to let it go and just play. To me that is the definition of the "right stuff."
THE RIGHT STUFF
There are two technical mistakes that directly lead to unwanted muscle contractions during the stroke. To boost your confidence and trust, you must eliminate these before you dive into the mental keys.
The first of these fatal flaws is what I call an "acceleration flinch," which can be best understood, once again, by re-creating the image of a pendulum. As the bob swings back and forth, velocity is least when displacement is greatest; the putterhead is momentarily motionless with a velocity of zero at the end of the backstroke and at the completion of your forward-stroke, representing potential energy and nothing else. The start of the forward-stroke marks the point where maximum acceleration occurs. At the bottom, velocity and kinetic energy reach peak levels, and then the restoring force of gravity slows the velocity back to zero at the finish. Relatively speaking, there's a flat spot of speed surrounding impact in which the putterhead is neither accelerating nor decelerating. I call this the "unbroken zone" of putting, and becoming aware of it is the first step in solving the problem of performance anxiety.
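If you like seeing the math, the unbroken zone falls right out of modeling the stroke as simple harmonic motion. This is a modeling assumption for illustration, not a measurement of anyone's actual stroke:

```latex
% A = half-stroke length, omega = 2*pi/T (T = stroke period).
x(t) = A\sin(\omega t), \quad v(t) = A\omega\cos(\omega t), \quad a(t) = -A\omega^{2}\sin(\omega t)
% At the bottom of the arc (x = 0, roughly impact): a = 0 while speed
% is at its maximum, so velocity is momentarily flat. At the ends of
% the stroke (x = +/- A): v = 0 and |a| is greatest, just as described.
```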
An acceleration flinch occurs when you speed up or slow down the putter in the unbroken zone by changing grip pressure and exerting extra force on the club. The good news is that you've already learned how to relax and swing the putter tension free at optimal rhythm. Rekindle that feeling and refrain from flinching by performing the Resonant Rhythm, Color-Coded-Tee, and Luke Donald drills daily (or as frequently as possible). The Luke Donald drill seems to be especially effective. When the zone around impact remains unbroken by velocity change, your putter will move like a warm knife through butter—smooth, uninterrupted, and flowing.
As your putter swings through the impact zone, it should be moving rhythmically and at a reasonably constant rate of speed—no acceleration or deceleration allowed. This is the "unbroken zone" of putting. Maintaining it is the key to stopping the flinching in your stroke.
The second fatal technical flaw that creates unwanted muscle contraction near impact has to do with rotation, and it results not from poor rhythm but from incorrect eye movement. As you'll recall from when I introduced the Quiet Eye technique in Chapter 4, your eyes communicate with your brain to provide the locations of both the target and the ball in space. When you see putting as nonthreatening and perform in a calm, confident state, efficient eye scans and a calm gaze throughout your stroke are both intuitive and simple to execute. Add in stress and the inability to effectively manage it, however, and everything changes. Under duress, your body experiences approximately 1,200 physiological changes. These changes cause your tension level, self-awareness, and anticipation of outcome to spike, and these in turn begin to alter your eye-movement patterns. If your foveal focus—the fovea being the part of the retina that allows for sharp visual detail—jumps quickly from the ball at setup to the putter during your motion and then back to the ball as you near the moment of truth, a rotational flinch in your hands may result at just the wrong time. (If you're interested in learning more about effectively dealing with the physiology of the stress response, check out the work of Bruce Wilson, M.D.; HeartMath.org, wilsonheartcare.com.)
Rotational flinches are the most debilitating since they tend to misalign the face at impact, causing you to miss from even very short distances. And as you probably know, misses from short range are the toughest to take because you assume they should be easy. A bad result forces you back to the downward-spiraling, self-perpetuating cycle of performance anxiety.
The solution? Improving the Quiet Eye technique described in Chapter 4 as well as creating a mental framework that allows you to execute it. You'll know you're cured when the hole looks big and you can keep your gaze steady without concerted thought or effort in any putting environment.
CHAPTER
8
Skill 4:
The "It" Factor
Having the feeling that any putt you roll is destined to find the bottom of the cup is the master skill on the greens. This "It" factor isn't mysticism or reserved for a chosen few, but is rather a collection of championship attributes that, with the right direction, any player can possess, cultivate, and develop.
Putting is a simple act that requires little athleticism. It can be performed equally well by a child, a pro at the top of his or her game, or an ailing senior, which makes it even more aggravating for those who suffer its folly. It's mainly a mental exercise, which explains why the simple act of rolling the ball into the cup has driven many to utter distraction.
We've all heard stories about players "losing their marbles" on the greens. Maybe you have your own personal saga to tell. My favorite took place at Royal St. George's on the southeastern coast of England at the 1985 British Open. Sandy Lyle captured the Claret Jug that year, but most fans remember the event as the one where Peter Jacobsen tackled a streaker on the 18th green during the final round. A bizarre sight, for sure, but not as strange as the actions of South African Tour regular Simon Hobday, who had made it into the Open field that year.
Hobday's play itself was unremarkable (he missed the 36-hole cut). The putter he used for the week, however, was not; the putterhead was battered and bruised beyond recognition. The story goes that Hobday had tied it to the rear bumper of his car following a bout of bad putting at the Lawrence Batley International Golf Classic played at The Belfry in central England the week prior. According to Hobday, the putter needed to be "taught a lesson." (Hobday had first snapped the putter over his knee before tying both pieces to his car.) He dragged the poor misbehaving putter all 198 miles between the Belfry and Royal St. George's. Despite obvious damage, Hobday had the putter head reshafted upon his arrival at the Open and used it that week, only to repeat his abysmal performance on the greens. I assume that, like a petulant child, the putter had failed to learn its lesson.
The lesson, of course, was not the putter's to learn. When you consider the fact that the very best players in the world miss half of their putts from eight feet, and that each missed putt presents an opportunity for self-ridicule, emotional judgment, and a loss of confidence, it takes a mature mental approach to establish and maintain a healthy relationship with your putter. Without one, your experiences on the greens will make performing more difficult, not easier. Although it's impossible to turn back the hands of time and unlearn the process that leads to fear and self-loathing, it's very possible to use reason, knowledge, and self-discipline to reinvent yourself. You have the ability to develop the championship attributes that collectively make up the last essential skill in putting: confidence. As humans, volition (our ability to choose) arms us with the power to fully decide our perspective, attitude, mindset, daily habits, and what we deem important. The ultimate by-product of these choices is the aforementioned "It" factor, which will ultimately allow you to train with the intelligence of an adult but play like a child. If you want the hole to look as big as a bucket and feel engaged and relaxed over the ball, read on.
THE ROOT OF MENTAL PUTTING PROBLEMS
We all enter this world the same way: naked and unburdened by expectation. If we're lucky, parental love and a safe home allow us to maintain this sense of innocence throughout our preschool years. Children do things simply for the sake of doing them, with little sense of outside judgment or opinion.
An example: naked gymnastics. Let me explain. When my daughter was four years old, she went for a playdate with a boy her age down the street. After several hours, the little boy's mother phoned my wife, her voice noticeably uneasy. She reported that she had gone upstairs to check on the kids and found them tumbling, twirling, and laughing with all of their clothes off—performing naked gymnastics. My wife and I thought it was adorable, although we appreciated that the boy's mother demanded they put their clothes back on. What does this have to do with putting? Whatever my daughter's skill level as a gymnast was at that point in time, she was unburdened by expectation, fear of judgment, or inhibition, and she performed up to her athletic potential. Can the same be said about you?
Our loss of innocence starts when we are indoctrinated into the formal education process, which reels in our freedom to do things just for the sake of doing them. Suddenly, we're scrutinized and judged at every turn, whether it's a test score, a behavior, or the way we dress. The social structure of the lunchroom reigns supreme, and our evolution toward typical adult insecurities begins in earnest. The process of feeling judged forces us to consider and appreciate possible outcomes and their social ramifications. We stop thinking in the present and begin to either worry about how things are going to turn out, or fret about things that have already happened. After just a few years of this socialization process, it's no longer okay to play naked gymnastics with your best friend, even if it's innocent fun.
It's a simple yet sinister process. Negative social interactions lead to emotional insecurity, which shapes our attitudes and perspective, creating a "warped" reality.
At one point in your life you attempted putts for the pure enjoyment of it; now you attempt putts under perceived threat and duress. "What if I miss?" And because we're programmed to avoid threats through the fight-or-flight response, our palms sweat, our hearts race, our hands tremble, and we approach the ball racked by anxiety.
Of course, there's nothing that takes place on a putting green that's a literal threat. If you strip away the emotion, there's only a ball, a hole, and a simple directive. The goal of this chapter is to help you relearn how to strip putting down to its basic act and eliminate all emotional judgment or consideration of consequence in the process. In other words, the goal is to help you putt like a child. The first step is to understand the genesis of the problem so that you can recognize harmful patterns. After this step, the solution is to choose to develop the six following championship attributes. They are the armor that'll protect you from mental harm and allow you to perform up to your full athletic potential.
CHAMPIONSHIP ATTRIBUTE NO. 1: HEALTHY PERSPECTIVE
Former First Lady Eleanor Roosevelt once said, "No one can make you feel inferior without your consent." The problem, of course, is that most of us have not only consented to let everyone judge us, but somehow we've decided that if we beat ourselves up first when things go wrong, then the barbs slung by others won't hurt as much. You simply can't allow this. Championship putters are consistently building themselves up, not beating themselves down.
The "It" factor starts with the proper perspective and a daily commitment to feed your psyche the positive nutrition it needs to live in the present. You must first realize and then commit to the fact that golf is something that you like to do and not who you are, and that someone's perception of you or your putting outcomes doesn't change the so-called "man in the mirror." Holing a putt doesn't make you a great person, and missing one doesn't mean you're a deadbeat. This perspective will allow you to protect your self-image and remain humble in victory and upbeat in defeat.
Unfortunately, just "knowing it" or "thinking it" isn't good enough; you have to "live it." Here are two exercises you can perform daily to help you maintain a healthy perspective and stay out of your own way.
Affirmations and Neuro-Linguistic Programming
An affirmation is a positive or encouraging statement you make to yourself: "I love to putt." Neuro-Linguistic Programming, in simple terms, is the use of a physical cue to bring about a strong positive emotion. It can be something as basic as shrugging your shoulders or a prolonged and deep cleansing breath at the start of your process to remind you to relax and accept the great things about to unfold. Using the two together gives your psyche the daily positive nutrition it needs to start a substantive change.
Begin by defining how you would ideally like to feel over a short putt. Get your journal out, and in the Personal Growth section create an "Affirmations" heading. Write the answer to that question in one or two sentences. Refer to yourself as "I" and write your statement in the present tense, e.g., "I love the challenge that putting presents. I am calm, clear, and committed over the ball." Another example: "The hole looks like a bucket and I'm great at trusting my natural gifts." Be true to your goals. There are no wrong answers here, as long as they're positive and in the present tense. Once you have a few to choose from, find a quiet place where you can sit or lie back. Take a few deep, cleansing breaths—inhale for five to seven seconds through your nose, then exhale for five to seven seconds. (If you can't control your breath, how are you going to control your muscles and emotions when you putt?) After three breaths, use your physical cue to initiate the process and recite your affirmation calmly to yourself twice. As you do this, see yourself living the affirmation and rolling the ball into the heart of the cup. Take a moment to experience the joy and fulfillment you feel, not necessarily because of the result, but because of the way you approached the process. Go back to taking deep breaths and repeat the process a second time to complete the drill. It should take no longer than two minutes to finish. Recite your affirmation three times a day for three weeks.
In my first book, _Your Short Game Solution,_ I wrote that one of the main differences between many of the talented pros I work with and most everyone else is that the champion is willing to do whatever it takes to be great. Success doesn't come for free—you have to pay for it. If the steps to greatness feel awkward or uncomfortable, great; challenge equals change. I know more than a few Tour players (Jason Day comes to mind) who diligently work on their breathing and focus daily. If you want to be the "Boss of the Moss" and you're coming from a bad place, you have to be willing to do the little things daily.
Define Your Process
After your neurology begins to rewire, you'll use your deep-breathing pattern and neuro-linguistic cue not only in daily affirmations, but to start your process when you play. I suggest you start your breathing as you walk up to the green complex and execute your cue just before you walk into the ball. This timing should allow you to closely replicate what you practice during your affirmation exercise.
It's important that your process is completely defined and clear. As you recall, I offered a six-step green-reading process in Chapter 6, and four pre-putt process options that incorporated resonant rhythm and target focus in Chapter 7. Now it's time to finish it off by adding a post-putt process. Combining all three and clearly defining the final product is important for four reasons:
1. Your mind will take comfort in going through the steps in the same order and in the same way each time, leading to both focus and clarity.
2. You'll avoid the tendency to either slow down or rush when pressure mounts. Instead, you'll simply do what you always do: execute your steps one at a time.
3. Focusing on something that you can control helps you stay in the present so you don't try too hard or worry about things you can't control.
4. A clearly defined process can be measured and evaluated, which allows for accountability. Answering for your actions is important for growth.
_Your Putting Solution_ is holistic. When you combine the mastery of simple techniques with daily skill development and a great process, there's no need to worry about results. They are merely by-products of these actions and will take care of themselves.
Evaluating Process and Making It Priority No. 1
Expert at Work! Watch Ben discuss process and the "Pinnacle of Trust" in this special video at jsegolfacademy.com/index.php/crain-trust.
Very few PGA Tour players are more inspiring than Ben Crane. Beyond the victories, he's honest, humble, and has a huge, giving heart. One of the things I've learned through working with Ben is how to properly evaluate a putt (any shot, really). Most players assess using their emotions based on the result. This is dangerous for your self-image, because bad putts are coming no matter who you are. Ben's approach is much more mature; he ignores emotion and results and instead evaluates and scores the execution of his process using a 1-to-5 scale. If he skips a step or feels doubt or fear, it's a "1." If he focuses on the target, is engrossed in the process, and reacts with complete trust, it's a "5." On the practice green, Ben and I are constantly working on helping him get to a place where all he has to do is trust his natural gifts.
You can't serve two masters. You're either going to pay homage to the results or to the process. Making the process _the most important thing_ is a huge benefit, because it allows you to focus intently without "over-trying," or forcing an outcome. Your goal should be to "try" the right amount and not 1 percent more. At my academy at Shadow Ridge, I created a "Process Scorecard" that my students fill out in addition to their regular scorecard when they play. They must post two scores: the total number of strokes it took to complete the round, and the total number of shots they played where they executed their process with intention and conviction (a "4" or "5" on Ben Crane's scale). Ideally, your process score and your score relative to par should be a perfect match.
Your process is one of the few things you can control. By making it the most important thing on every shot and scoring it, you will play in the present and try just the right amount. As your ability to execute the steps of your process improves, so will the results, and soon you won't have to worry about your results at all; they'll take care of themselves.
CHAMPIONSHIP ATTRIBUTE NO. 2: GROWTH MINDSET
In her groundbreaking book _Mindset: The New Psychology of Success,_ Stanford University professor and psychologist Carol Dweck, Ph.D., suggests that a person's belief systems regarding their own abilities and potential fuels their behavior and predicts their success. Moreover, the ideal mindset creates motivation and productivity and is a much better predictor of success than talent or intelligence. Even more important to me as a coach of young people is the idea that a changed mindset can have a profound effect on a person's future and career.
There are essentially two mindsets: fixed and growth. In a fixed mindset, people believe that their basic qualities, such as their intelligence and talent, are fixed traits. In a growth mindset, people believe their abilities can be developed through dedication and hard work—brains and talent are just the starting point. This view creates a love of learning and a resilience that's essential for accomplishment. Virtually all successful people have these qualities.
A player with a growth mindset sees skills as attainable, not set in stone, and uses motivation to achieve goals. Brains and talent are overrated; improvement starts with hard work and effort. Credit: Trevor Ragan, trainugly.com, adapted from _Mindset: The New Psychology of Success_ by Carol Dweck, Ph.D.
As you peruse the table above, look critically at yourself in each category. How do you think and react for each action item? Create a heading in the Personal Growth section in your journal and label it "The Right Stuff." Underneath, make two columns with the labels "Obstacles" and "Solutions." Next, write down all the roadblocks and pressures you typically feel on the course, and then use a growth mindset to write down a solution for each. Intellectually, you must accept that playing golf is hard and that obstacles are always lying in wait. Embrace them as opportunities to grow, because they're never going away.
In my opinion, the genesis for developing a growth mindset is in having a dream—a long-term vision of what you want to accomplish in the future—and the guts to not let anything or anyone stop you from attaining it. A person with a growth mindset cares little about the emotional judgment of others; only progress matters. As passion fuels your approach, you naturally create resolve, courage, strength of character, fortitude, toughness, tenacity, perseverance, endurance, and resilience—some people call this "grit." When you have it, you'll use obstacles, challenges, and poor shots to direct future actions, not throw a pity party. The obstacles standing in your way aren't roadblocks—they're the road map to success.
Mike Krzyzewski, head coach of the men's basketball team at Duke University, once said, "If you find a road without any obstacles on it, it's probably not worth taking."
This quote captures the growth mindset perfectly. I had the privilege of watching one of Coach K's practices at Cameron Indoor Stadium as he prepared his veteran players and incoming freshman for the start of Duke's 2014–15 championship season—a bucket-list moment for me. Coach K has created a clear culture of growth in his program. Fold into that culture a mixture of hard work, talent, and accountability, and it's not at all surprising that the Blue Devils are perennially relevant.
I've tried hard to establish a similar culture at my academy and with my clients on Tour. I demand that each student react unemotionally to their misses and self-coach by stating the solution in a positive way with zero negative talk. (Of course, I'm there to guide them if they misdiagnose.) We write out technical, training, and mental management keys at the completion of each session. I demand that they endeavor to stay on task and be accountable to the plan by completing homework, scheduling follow-up e-mails, and journaling training results. Many of the Tour players I coach send out an audio report after each tournament round so everyone on the team knows what happened that day and what solutions are to be implemented going forward. We're all in, which is one reason they play so well. Is that the culture you're currently operating in?
Unfortunately, the traditional approach to golf instruction and improvement isn't helping you. It's broken. Golf is the only sport I can think of where players are commonly taught and not coached. To wit, a student shows up for a lesson once in a while, generally when they're at the end of their rope. The teacher tells them everything they're doing wrong from a technical standpoint. They came to the lesson feeling bad about their game and now they feel worse. They leave with concepts on how to improve mechanics, but there's no discussion of mindset, the keys to effective training, mental management, skill development, or structure to keep them accountable. They're paying for a ray of hope and little else.
It's time for all golf professionals to get with the times and establish coaching protocols for performance. I'm happy to report that I'm not the only instructor who feels this way. Many of the coaches I meet on the road, including my fellow _Golf Magazine_ Top 100 Teachers in America, _Golf Digest's_ 50 Best Teachers, and my friends at the Titleist Performance Institute, are on board. There are also a lot of really great, young, up-and-coming coaches who embrace the growth mindset and are getting great results with their students. The teaching profession is evolving for the better—now's the time for a mature, disciplined approach from the student.
CHAMPIONSHIP ATTRIBUTE NO. 3: POST-SHOT ROUTINE
As mentioned above, one key strategy in employing a growth mindset is having a disciplined and effective post-shot routine. It's a completely mental exercise and should take no more than five seconds. The time directly following the execution of an action is when your self-image is most malleable, which is either really good news or bad news, depending on your mental habits. After you hit a putt, the results will meet your expectations or they won't. The question is, how are you reacting to it and what should you do to grow at the fastest rate?
On a putt that feels great, take ownership of it with a bit of emotion. Give it a fist pump like the Tiger Woods of old, or, if you're not wired that way, just smile. As you do, take a few seconds to embrace what you did well, e.g., "that was great rhythm," "that was a great read," or "I stroked that with great focus and conviction." That's called "imprinting." The ultimate goal after a great putt is to create a high-energy positive emotional state to play in, and regular positive imprints will grow your self-image, increase your confidence, and put a little swagger in your step.
Don't undersell your successes. Embracing positive results seems logical and easy enough, but it's not all that common at the club level. Many of the weekend warriors I coach tend to "ho-hum" their good shots, or even deride themselves despite their good fortune, as in "It's about time I made a putt," "That was lucky," or "If I'd only putted that way last week!" Never self-pity, self-mock, or self-loathe. When you do well, take the credit.
On a putt that feels lousy, the five seconds following your miss are some of the most important you'll have that day, and they can have a profound effect on your career as a golfer. To borrow from the great poet Robert Frost, you can choose a road that leads to sorrow or one that leads to growth and fulfillment, but the right choice isn't necessarily intuitive or easy to discern.
PATH A: THE ONE MORE INVITING
On this path, you choose to embrace your poor results emotionally and wallow in self-pity, whine, bellyache, hang your head, or use the offending club as a tomahawk. Ugly stuff. With this mindset, problems appear impossible to solve as you replay them over and over in your mind. If this describes you, then not only is your future bleak, but you're no fun to play with even if you look like a pro and have a swing to match. This road is fraught with danger, since every round is a roller coaster of emotion and you never know what's around the next corner.
PATH B: THE ONE LESS TRAVELED
On this path, you choose to look at the result unemotionally and use the information you discern to help direct future actions in a positive way. Your first step on this path is to ask "What's the solution?" or "If I had a do-over, what would I do differently?" Once you determine the correction, state it in a positive way—"Play more break," "Swing with rhythm," "Maintain my suspension point," or "Run a cleaner process"—and then replay that correction in your mind while visualizing the improved outcome. Then leave it. The shot is over. Don't take it with you to the next shot, the next hole, or to dinner. Focus on the first step of the next shot in your round.
When you choose this path, your poor shots won't cause you to lose confidence or second-guess your mechanics. Instead, they'll help you recommit to your foundational beliefs and use any negative energy produced by the result to steel your resolve for the next putt or swing. Determination born from mistakes is the greatest of all competitive traits. You see it in the face of champions in any sport at the biggest moments. It's the will to win. It's another gear, but it's mental, not physical.
Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference.
FROM "THE ROAD NOT TAKEN" BY ROBERT FROST
The great actress Katharine Hepburn once said that her secret to success was to "never complain or explain," and the same can be said for behavior on the road less taken. The championship mind owes no one an explanation. I recall the late, great Seve Ballesteros's response to a reporter when asked to clarify how he four-putted a hole: "I miss, I miss, I miss, I make."
The inviting path doesn't lend itself to sustained growth or fulfilled potential. One January afternoon a few years back I met up with Champions Tour player Duffy Waldorf to help him get ready for his upcoming season. He had taken some time off, and to be honest, his performance in the first couple of hours of our session was pretty lame—definitely not up to his standards or mine. Through analysis, trial and error, and needed repetitions, his shots gradually improved until he got to a point at the end of the day where he was firing on all cylinders. After the session, as we were writing key concepts into his journal and summing up the day, I told him, "I'm really proud of you for acting like a true pro through the first couple of hours. You didn't bitch and moan, get frustrated or make excuses. You kept your head in it and accepted the bad start as a challenge, and I think that maturity was the key to the day." Duffy replied, "Well, for the most part, those types of players never make it out on Tour—it's a barrier to entry." Then, after a slight pause, he added, "Though I suppose a Scott Hoch or two slips through every once in a while." I literally fell out of the cart I laughed so hard. No offense to Mr. Hoch—if anything it shows how much talent he had, because he was a heck of a player in his prime. But usually the players who "complain and explain"—as Hoch was famous for—fail to reach great heights.
CHAMPIONSHIP ATTRIBUTE NO. 4: UNDERSTANDING THAT ATTITUDE SHAPES REALITY
One thing I'm constantly reminded of on Tour is how a player's attitude shapes his or her reality and predicts the future. I may have two different players in the middle of the pack in the Strokes Gained Putting statistic (the same reality), yet one player claims he "hasn't made a putt all year" while the other feels encouraged because he's "stroking it really good." It doesn't take a rocket scientist to predict which of these two players will exhibit resiliency when the inevitable lip-out occurs, or who is more likely to execute with confidence in the big moment.
Fellow PGA Tour coach Sean Foley and I were discussing attitude at a recent tournament, and at one point he offered the following lesson. "Two players are driving to the course in two separate cars. There's more traffic than expected—both are going to be late. Player one becomes stressed, angry, impatient, and aggravated by anyone who pulls into his lane or any extra slowing of traffic. He's obsessed with the negative consequences of his misfortune. At one point, he angrily shakes his fist at a Hells Angel driving between lanes. His attitude has allowed the environment to ruin his day and cloud his judgment. This could end badly.
"Player two, on the other hand, understands his predicament but chooses to focus on a solution. He calls his playing partner to let him know he'll be late and he accepts the environment for what it is: an opportunity to accomplish something he hasn't had time for recently. He decides it's the ideal time to call his mom and tell her how much he loves and appreciates her. The conversation uplifts him—he feels blessed, which makes it easier to tackle the obstacles he'll encounter when he eventually arrives at his destination. Stress-free, he feels ready for the challenge."
Again, it's the same environment for both players, but the attitude of the actor creates a different reality for each. Fortunately, you can choose the attitude you bring to the car ride, and it's the precursor to everything that follows. At the end of the day, we will all meet our maker. How would you like to be remembered by those who love you? Choose to meet obstacles with a positive attitude and change your reality for the better.
CHAMPIONSHIP ATTRIBUTE NO. 5: EMBRACING THE RISK OF EXCELLENCE
To perform at a high level, you must value and willingly accept the risks required to be excellent more than you value "not failing." Call it "playing with courage," "playing like you don't care," or simply believing in your abilities. In sport, as in life, fortune favors the brave. Think about it: The reward for performing at the top compared to merely playing well is highly disproportionate in golf, whether you're talking about prize money, better playing opportunities, legacy, sponsorship, or scholarship dollars. On the PGA Tour, you could make every cut in a calendar year and lose your card, yet if you can manage just one win, you'll find yourself riding down Magnolia Lane during Masters week with fans praising your name. As such, you have to embrace the risk of going for it and failing as a natural part of being excellent. If you don't, there's someone else who will, and if it's their week, you'll find yourself looking up at them on the leaderboard.
To drive the point home for young players whose parents are usually telling them to "play it safe," I ask them the following question (in front of their parents, when possible): "Do you ever want to three-putt?" The normal response is a quizzical look and a firm "no," but of course it's a trap. The truth is that if you don't suffer the occasional three-putt, then you're not playing aggressively enough to one-putt often, and it takes more than an occasional one-putt to stand apart from the crowd. One-putts need to occur regularly.
Elite putters know that you have to be willing to lose—and know you'll be okay when you do—in order to win.
For those who still don't quite get it, answer this question: If you're playing basketball in a rivalry game and you finish the game with zero fouls, did you play good defense? Obviously, fouling the other team is bad if you look at it through a narrow lens, but from a champion's perspective, it's a necessary part of winning. A player who plays a full game without fouling didn't play aggressively enough to maximize their effect on the outcome. There's obviously a sweet zone of aggressiveness to operate in, but that zone is rarely occupied by the timid.
PGA Tour player Brad Faxon, one of the best putters of the modern era, once told me that during his best putting rounds, he noticed he would finish all 18 holes without a single tap-in. Essentially, every putt was hit so firmly that if it didn't go in (which it did a lot), it went far enough past the hole that he had to mark his ball before he could finish. This isn't a comment on the optimal pace to roll the ball, but rather a testimony to the mindset of a master putter.
Get the Odds in Your Favor
Be bold, not reckless. Certain putts deserve more caution than others, forcing you to alter your definition of success. I suggest you categorize the putts you face like a stoplight: "green" for go (most putts), "yellow" for caution (downhill sliders, for example), and "red" for simply cozying the ball up to the hole (you know the situation when you see it). A red-light putt and your subsequent decision to play it safe is a reflection of your prudence, not fear.
Prudence aside, the easiest way to reduce the number of putts it takes you to complete a round is to hit your approach shots into more green-light zones than yellow or red ones, even if it means aiming away from the flag. In my wedge book, _Your Short Game Solution_, I called this "quality position," and it applies to putting as well. A quick study of standard make percentages will help you see the light. Assume that your average make percentage from six feet over the course of a year is exactly 50 percent. That doesn't mean there's a 50 percent chance you'll make every six-footer you face. Some putts are easier than others; for example, you'll probably make more straight uphill putts from six feet than straight downhill ones, or those with substantial break. The table below is a generic illustration of make percentages for a club-level golfer from six feet on a moderately fast green with a significant slope.
All putts are not equal, even those struck from the same distance. As the make percentages for a six-foot putt by a low-handicap player prove, the more a putt breaks or runs downhill, the less likely the player will make it.
From the illustration it's easy to see that downhill breaking putts require more adept judgment and skill as you attempt to match line and speed, which lowers the make percentage. In addition, downhill putts rolling slowly across a bumpy green are much less likely to hold their line. For this reason, when you face a cup location cut on a slope, you'd be wise to value quality position when playing a wedge shot, bunker shot, lag putt, or approach shot into a green. Let's call the area from which you have a better than 50 percent chance to convert the putt the "scoring zone." On a green with slope, the scoring zone isn't a circle around the cup as you'd expect, but an oblong oval biased toward the low side of the hole, as shown below.
On any sloping green, the area where the expected make percentage is 50 percent or higher is biased toward the low side of the cup. You may be just as likely to hole a straight eight-footer from below the hole as you are a four-foot putt from the high side of the green.
(Note: These pictures are for illustrative purposes only, and do not reflect data for any individual. Actual make percentages differ greatly based on individual skill, stroke tendencies, severity of slope, speed of the greens, and quality of the surface.)
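For the analytically inclined, here's a minimal sketch (in Python, with make percentages and putt-type frequencies invented purely for illustration—the note above applies to these numbers, too) of how a single season-long "average" from six feet is really a weighted blend of very different putts:

```python
# Hypothetical six-foot make percentages by putt type (illustration only;
# real numbers vary with skill, slope severity, green speed, and surface).
make_pct = {
    "straight uphill": 0.62,
    "straight downhill": 0.42,
    "big-breaking downhill": 0.30,
}

# Hypothetical share of six-footers of each type faced over a season.
share = {
    "straight uphill": 0.50,
    "straight downhill": 0.30,
    "big-breaking downhill": 0.20,
}

# The season-long "average" is the share-weighted blend of per-type odds.
average = sum(make_pct[t] * share[t] for t in make_pct)
print(f"Blended six-foot make percentage: {average:.0%}")  # roughly 50%
```

The sketch makes the same point as the illustrations: a 50 percent "average" from six feet tells you very little about the odds of the specific putt in front of you.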
Statistics show that players make a slightly smaller percentage of birdie putts from a given distance than they do when the same putt is for par, and I have heard more than a few academics suggest that this is due to our normal tendency toward "loss aversion," but I'm not completely sold on the concept. Without proper controls, you can make data suggest almost anything, and as Mark Twain said, "There are lies, damned lies, and statistics." Plus, it's simply a matter of fact that most par putts are easier to make than birdie putts, if only because birdie putts follow longer approach shots, which typically end up in a random pattern around the hole, while par putts are typically preceded by short-game shots, which tend to roll across the green toward the pin and are influenced by slope to end up below the hole more often. I think this is why Jack Nicklaus said that the key to playing Pebble Beach, with its severe back-to-front-sloped Poa annua greens, was being able to control your approach shots so that they ended up on the front of every green and below the hole. Valuing quality position puts the odds in your favor!
CHAMPIONSHIP ATTRIBUTE NO. 6: PREPAREDNESS
A major goal with any player is to free up, trust, and let it go, but trust that's sustainable over time has to be earned. The work must come first. Being and feeling prepared allows confident belief to occur naturally. Effective preparation requires a long-term perspective, because what you do today doesn't affect tomorrow's performance, but rather, your performance a month from now. There is no "cramming" for an upcoming event. In fact, the one factor that correlates most directly to performance is to be well-rested, which means that the evening before a tournament, you're better off going to a movie and turning in early than running out to the putting green to get in some "extra work."
You gain confidence only when you pay your dues on time and in full. A recent experience crystallized this truth for me. Not long ago I was one of the keynote speakers at the World Golf Fitness Summit in San Diego, a three-day event attended by more than seven hundred golf, medical, and fitness professionals from around the world. My daughter, who was eighteen at the time and showing interest in a sports-medicine career, made the trip with me, which made for a great couple of days. Waiting for my time to speak, we sat in the audience as former NFL coaching great Dick Vermeil shared the motivational keys he used to lead the St. Louis Rams to victory in Super Bowl XXXIV. Despite no more than a casual glance at my presentation earlier in the week, I felt at ease and confident as I walked onstage, which was a bit surprising because I anticipated being nervous. Although I don't speak in front of big groups often, I talk and coach short-game performance every day. I live it. And because I "own" my content, I felt completely prepared and comfortable. The words flowed naturally despite following a coaching legend and the intense environment of the summit (I made my daughter proud that day).
Just a week later, I was working with two of my longest-tenured professional clients, Tom Pernice, Jr. and Charlie Wi, at Bear Creek Golf Club in Southern California. After a full morning of short-game practice, we played nine holes. It was a beautiful, serene evening—I was playing with good friends with little more than a friendly wager on the line and absolutely nothing to stress out about. But as I addressed a short putt for par on the first hole, I could feel nervous energy entering my mind and body. It's ridiculous—I began playing golf as a three-year-old, competed as a collegian for four years and as a professional for another five, have specialized in the short game for decades, and teach putting nearly every day. Shouldn't I have been calm and confident?
No, because knowledge doesn't create confidence. The work you've put in does. The reality as I stood over that short putt at Bear Creek is that I hadn't hit a putt in months, nor invested any meaningful time training in my process, so I wasn't physically or mentally ready to perform. You can't rest on what you've done or what you know. Being prepared is a cumulative by-product of doing the right things the right way every day. The work has to come first. Trust has to be earned.
JOURNAL WORK
Open your journal to the Personal Growth section and write in the six "It" factor traits discussed in this chapter, then make the decision to do the "right stuff" every day. Begin by writing down one simple affirmation about how you can feel, think, and act like a champion for each trait. Example: "I am going to lead the Tour in attitude" or "I love overcoming the obstacles in my way because they lead me down the path of sustainable improvement." Recite them once a day for three weeks.
In the same personal growth area, create a tab with the heading, "Items I Choose to Make Important: My Putting Process." Write out the three stages of your overall process in complete detail. These are the organized steps that allow you to read the green, be confident and engaged, make your stroke a childlike reaction to the target, and grow from the result. Of course, tailor your steps to fit your personality and tendencies. Use the summation of the process that I've laid out for you (Chapters 6–8) as a guide. It should look something like this:
Pre-Putt
1. Begin to cycle deep cleansing breaths as I approach the green and identify the fall lines relative to the cup and ball. Recite my affirmation: "I'm the boss of the greens. I love to putt."
2. Crouch down behind the hole on the opposite side of the ball and scan horizontally (coins) to discern the slope of the last three to four feet of the putt.
3. Walk to the ball side, stop halfway, and use the side view to discern if the putt is uphill or downhill and how much. Use the feels from my feet to help determine the degree of slope—they never lie.
4. Finish the walk and crouch down behind the ball and piece together the first part of the putt using horizontal eye scans. Take note of the shiny and dark hues in the green between the ball and the hole (having seen the grass from three positions, the grain affecting the putt should be clear). Visualize the intended speed, and if break is detected, shift my body and perspective slowly down the slope and allow my subconscious to choose both a start line and an entry point into the hole, clearly defining my three points.
5. Use my cue to initiate my walk in to the ball and signal my brain that it's "go" time. Stay clear and committed to the chosen strategy.
6. Focus externally and engage with the target. Use the picture of the ball's predicted roll to guide my rehearsal swings, feeling optimal rhythm and appropriate energy.
7. Align the face to the start line and address the ball, scan the three points deliberately, settle eyes on my Quiet Eye spot, and . . .
Your Putt
8. . . . let it go. React subconsciously. Putt back and through past my eyes with a slight awareness of my stroke key and nothing more.
Post-Putt
9. Imprint "That's like me" (good putts) or unemotionally state the solution in a positive way (bad putts) so that I remain committed to my foundations and grow from the experience.
Now own it. Stay in the present and commit to making the process—not your results—the most important thing!
CHAPTER
9
The Art of Effective Training
Combining a high level of understanding with disciplined, effective training turns the slow road to skill development into a speedway.
So far in your putting-improvement journey you've learned how to assess your current skill set and demarcate areas for improvement. Within those areas, I've explained the technical and factual foundational knowledge under which you need to operate. The remaining piece of the puzzle—the one that completes your development—is creating an effective training plan to ensure skill development and long-term success on the greens.
Before getting into the hows and whys of training, it's important that you don't confuse the amount of time you invest with hard work—the two don't necessarily correlate. Improvement is fueled by the quality of training, not the quantity. Honestly, if you're not going to approach training with intelligence and discipline, you're better off not practicing at all, because you'll risk ingraining bad habits and eroding your confidence. The key to quality training is to clearly define the intent of each session, drill, and repetition. I call it "paying attention to your intention," and it should occupy the forefront of your mind every time you walk out onto the practice green.
The goal of any practice action is to improve one of the four essential skills (starting the ball on line, green-reading, touch, and belief). Within that context, there are only three possible things you can work on: 1) a technical foundation that will allow you to have a better chance to either start the ball on line or control your speed; 2) your ability to read the green and evoke touch; or 3) the mental process that allows you to feel prepared and committed over the ball. Because each of these tasks is different, it should be completely obvious—even to the untrained eye—exactly what you're working on as you do it. Choose a skill, focus on and evaluate it, and then move on to the next. This concentrated effort improves efficiency and creates certainty regarding execution.
MALPRACTICE—THE BANE OF IMPROVEMENT
Inefficient trainers typically fall into one of two camps. The first group is what I like to call the "technicians." These are the players who delude themselves with notions of technical perfection, the existence of "magic bullets" or a putting Excalibur that'll completely change their fortune. When technicians train (and play), technique rules—they rarely pay attention to process and judgment skills. Their lack of wisdom produces streaky successes at best, forcing them to continually move on to the next tip or fix du jour. As they continue to bog themselves down with technical clutter, they fail to master anything and putt worse the harder they work.
The second camp is made up of "putt-and-rakers." They're the ones who plop a handful of balls down on the practice green and mindlessly roll putts without paying any attention to fundamentals or process. They're working on—and accomplishing—nothing. These two futile approaches are the fatal flaws of training, and if you're anything like the players I see daily, there's a very good chance you're following at least one of them.
SEEING PAST THE LIE: PERFORMANCE IN PRACTICE EQUATES TO LEARNING
A few years ago, I started coaching a good college player (she now competes on the LPGA Tour) who putted well except on very short putts. I asked her about her normal training regimen. She told me her college coach made everyone on the team sink 100 three-footers in a row before they could leave practice, and that she hated it. I immediately knew what the problem was—the type of training she was doing was not developing the skills needed to make short putts out on the course when she competed. I changed her practice plan, not her mechanics, and her putting problem disappeared.
You hear a lot about the "100-short-putts-in-practice" routine, and I'm dumbfounded by its popularity. There are so many things wrong with the drill that I don't know where to start. First, it's too difficult to accomplish. A player can hole 65 putts in a row yet feel like a failure with a miss on No. 66. Restarting the drill only to miss again is sure to bring on frustration and other negative emotions. Second, it takes too long to complete, so there's no time to run a normal pre-putt process, which allows you to rehearse what you do to get ready to putt on the course. In addition, there's zero feedback, so you're ingraining mistakes instead of eliminating them. What exactly are you practicing? Let's see . . . uncertain technique, no feedback, no process to allow for growth, and inevitable frustration. No, thank you.
Trying to make a hundred short putts in a row is an example of block practice used incorrectly. To begin with, a drill like this is wildly inefficient, because rehearsing the same skill over and over fails to imitate what you experience on the course. According to Drs. Richard A. Schmidt and Craig A. Wrisberg in their book _Motor Learning and Performance,_ "since the task and goal are exactly the same on each attempt, the learner simply uses the solution generated on early trials in performing the next shot. So [block] practice eliminates the learner's need to solve the problem on every trial and the need to practice the decision-making required when playing golf."
Moreover, the feeling of "greatness" that this kind of block practice creates by design is an illusion, tempting the player to overindulge in mindless drills. With less judgment and skill needed, the results are sure to look and feel impressive, which makes it alluring both to players who want to feel good about themselves and coaches who want to make it seem like progress is being made. Unfortunately, those great results lead to a false sense of competency and confidence—a bubble sure to burst on the course when you face everything except straight-in uphill three-footers. The emotional cost is high, since you're forced to realize that holing those hundreds—perhaps thousands—of short putts in practice meant little.
Because golf is random, your ability to judge and adapt your motor patterns to the situation at hand has more value than the rote memory of a given technique. There's nothing wrong with rolling lots of putts, as long as you simultaneously embrace all the skills required to be great on the greens. Starting the ball on line isn't enough; you must also judge environmental elements such as the wind, grain, and slope to read the green; match the line chosen with the perfect amount of energy (touch); and commit to your strategy and stroke. Again, you need all four skills—not just one or two—to putt well. Given this fact, it's easy to understand why your on-course results may be poor despite bouts of genius during these types of block practice sessions.
Even though training results during random practice—which will make up the majority of your training time during your putting-improvement journey—often won't feel as impressive as those generated in block practice, the added difficulty of judging each putt separately is desirable and will accelerate your learning and allow you to retain skills longer.
BLOCK VERSUS RANDOM PRACTICE
Despite its diminished transfer value, block practice still plays a significant role for both the beginner and the advanced player, but it's governed by the intent and duration of the session. When you're a beginner or are first learning a new motor pattern, a larger number of repetitions in a safe learning environment is beneficial for establishing new neural pathways and obtaining a certain level of comfort. Because you'll rarely make wholesale changes in your technique, this should also be a rare occurrence and, as stated in Chapter 5, shouldn't last more than three weeks. For an experienced player, block practice serves a different purpose—it's not about learning fundamentals but rather confirming that you're actually executing them. In this case, a well-designed block drill can verify that you're executing correctly in less than a minute (one example is checking your setup in a mirror three times before you tee off).
With these truths in mind, ideal training includes both random and block elements, but favors random practice since it better prepares you for what you'll face on the course. I suggest the following:
* Block practice (technical foundational work): 10 to 20 percent of practice time invested.
* Random and variable practice (skill development): 80 to 90 percent of practice time invested.
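If it helps to see those percentages as actual minutes, here's a quick sketch (in Python, purely illustrative; the 30-minute session length matches the limit I suggest in the next chapter):

```python
def practice_split(session_minutes):
    """Convert the recommended block/random split into minute ranges."""
    block = (0.10 * session_minutes, 0.20 * session_minutes)
    random_var = (0.80 * session_minutes, 0.90 * session_minutes)
    return block, random_var

block, random_var = practice_split(30)  # a typical 30-minute session
print(f"Block practice: {block[0]:.0f} to {block[1]:.0f} minutes")                    # 3 to 6
print(f"Random/variable practice: {random_var[0]:.0f} to {random_var[1]:.0f} minutes")  # 24 to 27
```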
To ensure the effectiveness of any block-practice drill, it's essential to create a learning environment in which you receive confirmation that you're executing the fundamentals you believe in. In Chapters 3–5, I provided several examples of this when discussing the training methods for building pre-stroke and in-stroke fundamentals, including those that deal with rhythm. The way you feel as you execute may change from time to time, but the actual fundamentals never do. An organized practice station preempts changes in technique that commonly occur without our knowledge or intent. As such, block practice delivers two benefits for the price of one: it helps you establish a feel for the day and certainty that your fundamentals are sound. On playing days, this is huge—confidence in your fundamentals means you can let go of technique and simply trust your abilities.
INTERNAL AND EXTERNAL FOCUS
Block practice can seriously benefit your development, but it requires the right approach to avoid its pitfalls. Many players misconstrue the intent of block practice. Block practice doesn't directly improve skill—it merely sharpens the tools that may help you be skillful. That's a huge difference. More importantly, paying attention to the details of your technique requires you to focus "internally" on each putt, which I define as consciously thinking of a body part in order to set up or move the club a certain way. Although it's true that great players are often buoyed by a simple foundational thought during a round, great putting lies on the other end of the spectrum with "external focus" and subconscious action. There's a difference between having an awareness of a feel and "thinking" about it. The worst thing you can do when you play is to emphasize internal thoughts on how to swing, which means that block practice is training your mind to work in a harmful way. This is another reason why you must limit your block-practice time and prioritize random and variable practice at a ratio of at least four to one, in line with the split above.
When we switch to random practice it's critical to let go of technique, fill your mind with external imagery (outside your body), and attempt to react to your picture of the ball's roll with a "see-and-do" attitude. You're deluding yourself if you think you can invest a lot of time thinking internally about "how" to putt during practice and then flip the switch to think externally when you play. It doesn't work that way. You are what you do most of the time.
Focusing externally is the fast track to mastery, and you have to train for it. To get you started, here are two drills I ask my students to perform to help them create a vivid external focus as well as the mentality to react with a see-and-do attitude.
Hammer-and-Nail Drill
Find a straight, uphill, five-foot putt and set three balls down on the green. Carefully insert a tee into the far lip of the cup; set it parallel to the surface of the green and make sure it points back toward the expected entry point. This is the "nail." Address a ball—this is the hammer you're going to use to drive the nail further into the lip of the cup. Focus on the nail for three seconds, and without turning your gaze back to the ball, immediately get the hammer moving. This drill of putting while looking at the target ensures the action has little to do with technique and everything to do with athleticism and a subconscious reaction to the target. Hammer the nail two more times to complete the drill and purge yourself of internal thoughts.
Charlie Wi's Three-Ball Drill
Obviously, I learned this drill and its benefits from Charlie. On the practice green, set down three balls, pick a hole, read the putt, and imagine it in its entirety. Address the first ball, and then turn just your head to stare at the cup for three seconds. Without rotating your head back to its normal address position, try to make the putt, trusting your eyes to give you the feel for producing the right energy and direction. Next, address the second ball and again turn to stare at the target. Drink deeply. This time, turn your eyes back to the ball, but close them before starting your backstroke, holding the picture of the target in your mind's eye and allowing your subconscious to guide you as you stroke the putt. Guess the result before opening your eyes—this will heighten your senses and awareness of the target. On the last ball, use your normal routine: stare at the target and then settle your eyes on your Quiet Eye spot. Even though you're looking at the ball on this last attempt, you should still be picturing the cup and its expected roll in your mind. Let the putter swing as a reaction to your mental picture.
The goal is to make all three putts feel exactly the same emotionally and physically and eliminate any thoughts of "how" to putt while prioritizing the target. This drill is powerful. For some students, knowing how to tap into their athleticism is the only thing they need to elicit positive, long-lasting change.
Envisioning the ball as a hammer and using it to drive a nail into the back of the cup is external focus at its best. By focusing on the act of hammering, you avoid thinking internally about "how" to putt. You simply react athletically to what you see.
A QUESTION OF TIME
It should be pretty clear at this point that the key to effective practice isn't total time invested. Discipline is the vital component, and there's a correct way to allocate your time based on what you need to accomplish. If technical deficiencies are making it difficult for you to putt well, then a little science and properly structured block practice is essential. But you must understand that if a little is good, more isn't better—it's worse.
The solution: During block practice, have a clear intent about what you are going to accomplish in a set amount of time. For example: "I'm going to execute ten great repetitions of a fundamental setup, a truthful read of the green, efficient eye scans, and return the face to square at impact in six minutes or less." Note that I said ten, not ten in a row. If it takes you thirteen trials to generate ten successes, then so be it. In golf, you need to give yourself permission to make mistakes. Perfection is a game you simply can't win; it's a path to frustration, not progress. If the success you crave doesn't happen today, take solace in the fact that you're one step closer to finding it tomorrow. Hang in there, focus on confirming fundamentals, and then move on to the important task of developing skill.
RANDOMIZE!
The four skills that determine your putting performance mainly entail focus, feel, and judgment, and the only way to become proficient in these art-based disciplines is to first get your mind right and then do a lot of art. During this phase of your training, varying the environment is critical—each rep you make should entail original judgment and have its own process so that it can be critiqued with a fresh perspective from start to finish. The more random and varied the repetitions, the bigger the opportunity for growth via your post-shot process. Remember, imprint after the good putts and learn from the bad ones. With a mature approach, every outcome becomes capable of directing future actions and improving skill.
This phase of your training can be unstructured as long as there's variety, or organized as a game. Game-based practice closely mimics on-course reality. The pressure of posting a score or beating your buddy increases the focus and intensity of training, and demands that you effectively deal with setbacks and distractions, as you would in real rounds. This gives game-playing a very high transfer value. I challenge both the Tour players and the amateurs I coach to tap into their competitive spirit and "win their way" off every area of the practice facility every training day. I'll introduce some games in the next chapter, but feel free to develop your own. Just make sure they force you to assess each and every putt, have a set time limit, and challenge you just enough to let you win a little more than half the time given your current skill set.
Below is the training guideline I've presented to every player I've coached over the past decade. I believe it to be the correct mix of practice for maximum growth. Use it as your blueprint as you design your own practice plan going forward.
YOUR BALANCED, FOCUSED PRACTICE PLAN
Block Practice: Fundamentals
10 to 20 Percent of Your Daily Dedicated Practice Time
* Pay attention to your intention. What are you going to accomplish in this time frame?
* Put yourself in a learning environment and practice with immediate, accurate, and reliable feedback.
* Own your content. Commit to your foundations.
* Focus internally on simple details: distance from the ball, alignment, ball position, balance, stability, suspension point, rhythm, and Quiet Eye technique.
* Do what's required to effect change, even if it makes you feel uncomfortable.
* Focus on the action, not the result.
* Avoid looking at a target and thinking about mechanics. Hit into open space or remove the ball from the equation when possible.
* Once you have the confirmation you need, quit. More isn't better!
Unstructured, Random and Variable Practice: Skill Development
60 to 70 Percent of Your Daily Dedicated Practice Time
* The goal of this type of training is to learn to get "lost" in the shot and perform the action subconsciously, like you do when you tie your shoes. Embrace the external focus and mental discipline required for this "see-and-do" attitude.
* Practice every variety and length of putt (uphill, downhill, into the grain, down-grain, breaking, straight, short, medium, and long). No do-overs.
* Use your full process, including green-reading for every trial. Practice "trusting" with external focus and full commitment.
* Imagine yourself in tournament situations; take the time to get focused on the target and the shot, and then simply react to what you see.
* Run your full mental program, including post-shot error detection and positive imprinting.
Structured Random Competition: Games
10 to 20 Percent of Your Daily Dedicated Practice Time
* Compete against set goals or other like-minded individuals.
* Win your way off the practice facility by playing simple games like Three in a Row, Tornado, Short, Medium, Long, Brad Faxon's Three-Speed Drill, or Five-Ball 25.
* Play with high-intensity focus and record your results as well as what you've learned in your _Your Putting Solution_ journal.
CHAPTER
10
Proven and Effective Skill-Training Programs
Before adulthood robbed you of your imagination and burdened you with self-awareness and expectation, you were training to near perfection. The putting green was your Augusta National or U.S. Open, and the putt you faced was to win it all. Other than a bit of foundational work beforehand, you were doing everything right to reach your dreams.
Creating your own training program to fit your personality, tendencies, facility, and lifestyle is at the heart of _Your Putting Solution_. Using the perfectly balanced practice guide laid out for you in the previous chapter, your goal now is to design a practice plan based on the results of your skill assessments in Chapter 2, keeping the known tendencies of your stroke in mind. Usually, I try to limit my students' training sessions to a half hour, because it's extremely difficult to maintain a high level of focus beyond that, and "going through the motions" is never allowed.
So what should your half hour of training look like? The balanced guide presented in the previous chapter began with block practice (allowing time to confirm foundations) and then transitioned to random practice (to develop skill). Your half hour of training should follow the same structure. In other words, start by creating certainty with regard to the science of your stroke, and then trust that you have it and focus on the art. To wrap up your session, tap into your competitive spirit and play one of the games described below. Game playing is a great way to get your mind right and to learn to perform under pressure. The transfer value of "winning your way off the green" is sky-high.
PRACTICE GAMES
I've mentioned the importance of games throughout this book, and have even mentioned some by name. Here's a list—and description—of the most effective, but feel free to create your own. Again, a game should have a time limit, should stress at least one of the four essential putting skills, and should be accomplishable and fun.
North, South, East, West
Place four balls on the green, each three feet from the same cup and arranged like the points of a compass. Using your full process, attempt to hole each putt, keeping track of your makes and misses. Repeat the process at four feet and then at five feet, for a total of twelve putts. If you're a competitive player, consider the game won if you can hole ten out of the twelve putts; less-skilled putters should consider eight a victory. Give yourself no more than two chances to win the game per training session. In case you're wondering, the average one-putt percentage from three to five feet on the PGA Tour is 88 percent.
Short, Medium, Long
This game adapts the Long Lag Test from Chapter 2 into a scored competition. Place balls approximately 20, 30, and 40 feet from a hole. Lag each ball, and score each attempt as follows:
Hole out: 3 points
Within one club length beyond the hole: 2 points
Within one club length short of the hole: 1 point
More than one club length from the hole: 0 points
Change holes or green location and repeat the process six times for a total of eighteen putts. A winning score for a competitive player is 23 points or greater.
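If you like to track your results digitally, here's a minimal scoring sketch (in Python; the outcome labels are my own shorthand, not official terminology) that implements the point values above:

```python
# Point values per lag attempt, matching the rules above.
POINTS = {
    "holed": 3,         # hole out
    "long_inside": 2,   # beyond the hole, within one club length
    "short_inside": 1,  # short of the hole, within one club length
    "outside": 0,       # more than one club length from the hole
}

def score_round(outcomes, winning_score=23):
    """Total an 18-putt Short, Medium, Long round and report the verdict."""
    total = sum(POINTS[o] for o in outcomes)
    verdict = "winning round" if total >= winning_score else "keep training"
    return total, verdict

# Example: six cycles of 20-, 30-, and 40-foot lags (18 putts in all).
outcomes = ["holed", "long_inside", "short_inside"] * 6
print(score_round(outcomes))  # (36, 'winning round')
```

Out of a possible 54 points, the 23-point winning threshold asks you to average a little better than one point per lag.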
Tornado
Mark six putt locations with coins or tees, starting at two feet away from the hole and then at one-foot intervals after that. Angle the marks so they form a widening curve, which will create a unique read for each putt and an unobstructed path to the hole. Start by making the two-footer, then cycle through the other five putts. The goal? Don't miss. If you do, start over. If you can make six in a row within five restarts, consider the game won. Lesser-skilled players are allowed one mulligan at the five-, six- and seven-foot distances. The best part of this game is its realism—to win the game, you'll have to make a pressure-packed seven-footer.
Games like Tornado, which force you to run through your process on a wide variety of putts with added pressure as you progress, train you to focus intently and perform when it matters most—on the course.
Five-Ball 25
Mark putts at five, 10, 15, 20, and 25 feet from a hole, like the rungs on a ladder. Place a ball at the five-foot mark, two at the 10-foot mark, three at 15, four balls at 20 feet, and five balls at 25. The goal is to make one putt from each distance with the balls allotted, starting with the lone five-footer. If you whiff at any distance, reset the game and start again, giving yourself no more than ten minutes to make a ball from 25 feet.
There's more to this game than rolling putts from different distances—it ingrains the value of one-putting over avoiding a mistake. Champions aren't fearful. They stay and play in the present, think positively, and make putts. Win here and you'll play to win when the putts are for real.
Two-Star
This game's great for green-reading and matching line to speed on the difference-making putts from three to 10 feet. Take five balls and spread them around a hole, creating five random putts between three and 10 feet. Use your full green-reading process and attempt to make each of the five balls. After one hole is completed, move to a different cup so you can't use previous experience to help you with the read. Changing holes to make it more of a challenge to get the correct read is a desirable difficulty. Set a winning score commensurate with your skill level. The Tour players I coach usually win by making eight of the ten trials. Be fair to yourself.
Three in a Row
As the name suggests, you win this quick and simple game by successfully executing three putts in a row of a certain distance or challenge.
Short Putting: Sink three random putts in a row from seven feet.
Medium-Length Touch Putting: Roll three 20-footers in a row so they come to rest either in the hole or within one club length beyond it.
Long Lag and Short Putting Combination: Play three random holes in a row of 40 feet or more without a three-putt. If you two-putt from beyond 33 feet on Tour, you're gaining strokes on the field. Imagine the gains against your competition.
Up-and-Down 20s
Find two holes on a sloping part of the practice putting green approximately 20 feet apart. Lay down two alignment sticks a club length behind each hole—you'll use these to determine if you hit a putt with too much speed. Place two balls on the green near one of the holes so you're putting straight uphill toward the other hole. Roll both balls and give yourself a point for each putt that travels beyond the hole but stops short of the stick. Give yourself a bonus point in the event of a make. After two successful attempts in a row, putt downhill toward the other hole. If you come up short or hit the stick on these putts, reset your score to zero. Alternate between uphill and downhill putts until you tally 10 points. You can conceivably win this game in as few as five putts (five makes at two points apiece)—PGA Tour player Nick Watney did it once when I was his short-game coach back in 2013. Stop after ten minutes, whether you win the game or not, and record your score in your journal. You don't need to win every time out. Use your high score as motivation to improve the next time you practice. This is a great game to train for touch and to learn the value of slope or grain, especially if you're on an unfamiliar course.
Big Foot
Start with ten balls and lay the first one down three feet from any hole on the practice green. Continue to place balls on the green at three-foot increments until you reach 30 feet from the hole. Putt each ball, starting with the three-footer. Keep track of your makes, and after your attempt from 30 feet, add the total distance of putts made. A perfect score is 165 (3 + 6 + 9 + 12 + 15 + 18 + 21 + 24 + 27 + 30). Your goal? Set a personal scoring record every time out. This game improves your touch and teaches you to focus on the positive value of making putts.
Drawback
This is a great game that will tighten up your distance control on longer putts and sharpen those critical three- to 10-footers that often turn bad full-swing days into decent scoring days. Choose a random putt on the practice green between 15 and 30 feet from the hole. The goal is to either make the putt or strike it so that it travels beyond the hole but within a club length of the cup (a great speed putt). If your putt finishes with great speed, simply tap it in for your score on the hole; if it doesn't, you receive a one-putter-length distance penalty. Measure one putter length from where your ball lies, draw it back to the new distance, and putt until the ball is holed. Your total score for the game is the number of strokes it takes you to play nine random holes. Kevin Chappell, another one of my Tour clients, believes that Drawback is especially effective in preparing him for the stress of competitive rounds.
Reverse Leapfrog
This is a touch game that teaches you how to manage risk, tap into your creativity, and putt with feel. Start by pegging a tee on the green; consider it the horizontal back boundary of an imaginary "distance zone" (you can also use the edge of the green). Lay fifteen balls on the green about 25 feet from the back boundary, then place a coin on the green about 10 feet in front of the balls. The area between the two markers is your zone. The goal of this game is to see how many of the fifteen balls you can roll into the zone without a miss. A miss is defined as a ball that rolls past the far tee, ends up short of the near coin, or rolls past the putt that preceded it. Strategically, your intention should be to putt the first ball as near as possible to the far boundary without touching it, and leave each putt about a foot short of the one that came before it—in reverse-leapfrog order. A perfect score would require you to leave all fifteen balls in the zone. Keep track of your high score and aim to improve it each time out.
Seeing how many balls you can roll into a 15-foot distance zone in reverse-leapfrog order eliminates all thought of line and shifts your focus for touch to a horizontal target. Thinking about the line while you're over the ball is often a distraction that leads to poor feel and speed control, while the right external focus helps touch from every distance.
PRACTICE PLANS AT WORK
If you fix your practice by working efficiently and effectively, your skills will improve and you may indeed live out those childhood dreams of holing putts in the biggest moments. Here are two examples of practice plans to guide you.
Player A:
Assessments passed: 25-Ball Dime Test (starting the ball on line)
Assessments failed: 10-Ball Green-reading and Touch
Training Plan
BLOCK PRACTICE
Drill 1: Hitting the Start Line
Confirm your ability to hit your start lines by placing a dime on the green two feet in front of the ball and rolling five putts over it. Allow for one miss out of five (three minutes).
(Any consistent struggles here demand a review of the fundamentals for starting the ball on line in Chapters 3 to 5, and altering the block-practice portion of the training plan.)
Drill 2: Green-reading
Twelve reps of Dot-to-Dot Drill with full eye scans (see here; five minutes).
Drill 3: Touch
Swing putter to resonant rhythm beat using a metronome with varying swing lengths (one minute).
RANDOM PRACTICE
Exercise 1: Green-reading and Hitting Start Line
Star Drill with full process to two different holes (five minutes).
Exercise 2: Green-reading and Touch
One-ball putting with full process to nine different holes (five minutes).
GAMES
Game 1: Touch
Short, Medium, Long (seven minutes; see here).
Game 2: Green-reading and Mindset
Tornado (five minutes; see here).
Player B:
Assessments passed: None
Assessments failed: All
Pre-Steps: Consult journal and take note of the technical deficiencies you experienced while performing the assessments. Set specific goals to improve core stability, rhythm, Quiet Eye technique and the path of the stroke by maintaining your suspension point, allocating your 30 minutes as follows:
BLOCK PRACTICE
Drill 1: Hitting the Start Line
Worm Stability and Suspension-Point drills performed in tandem (one minute).
Drill 2: Touch
Putt to resonant rhythm beat using a metronome with varying swing lengths—two sets (one minute).
Drill 3: Hitting the Start Line and Green-reading
Employ Pelz Putting Tutor at six feet and add two ball sleeves to ensure a neutral path and sweet-spot contact. Execute five perfect repetitions for straight, left-to-right, and right-to-left putts, focusing on settling the eyes on the Quiet Eye spot and maintaining a quiet focus for a full second after impact (eight minutes).
RANDOM PRACTICE
Exercise 1: Green-reading, Hitting Start Line, and Grit
Star Drill with dime down on start line employing full process, including post-shot imprinting for one hole (three minutes).
Exercise 2: Green-reading and Touch
One-ball putting with full process for nine different holes with the metronome beating in your back pocket (five minutes).
Exercise 3: External Focus
Perform the Charlie Wi Three-Ball Drill (here) for an uphill, a flat, and a downhill mid-range putt (five minutes).
GAMES
Game 1: Touch and External Focus
Up-and-Down 20s—Win a different game every session (seven-minute time limit; see here).
After completing these drills, exercises, and games, either player can walk away from the practice green confident that their thirty minutes was well invested, because they worked on their foundations and every skill for a manageable amount of time and in an effective manner to maximize benefit. Regardless of the particulars of your training sessions, always make notes in your journal (what you did well, what you learned, what you are going to do about what you learned) and record game results. If this sounds like a lot of work, consider that elite putters such as Tom Pernice, Jr., Ben Crane, Charley Hoffman, Cameron Tringale, and others train like this every day. Sure, putting is part of their job, but it's more of a testament to what's required to be great. And let's be honest—thirty minutes a few days a week isn't asking that much. It's a TV sitcom. What's more important to you?
The above examples are the tip of the iceberg; an effective training program can take on thousands of forms depending on what—and who—needs training. As you construct yours, keep in mind that the goal is to master simple foundational elements, a confident subconscious reaction to the target, and the feel and judgment skills of being great on the greens. As long as your training goals meet these criteria, you'll make substantial progress.
Despite the importance of crafting a customized training plan based on your needs, never feel bogged down by its structure—every day is different, and you may not have the luxury of a full half hour every time you train. When time is limited, opt for an abbreviated block-practice session, skip random practice altogether, and play a game instead (win one, actually). This especially makes sense if your time is limited before teeing off, because playing golf is the ultimate form of random training.
JOURNAL ENTRY
Your training program is yours and yours alone. As you construct it, keep in mind that the goal of practicing is to build skill, not look pretty. Train to master simple foundational elements, a clear confident mind, and the feel and judgment skills of green-reading and touch. As long as your program hits on these areas, any time you spend practicing is time well invested.
Review your entries in the Assessments and Technical Plan portion of your journal—these are your clues for constructing an effective regimen that emphasizes improving the areas where you need it most. Next, turn to the Training Plan tab and rough out thirty minutes of perfect practice, keeping the optimal ratio of block to random training intact. Don't worry about getting it perfect the first time out. The key is to start the process in an organized, deliberate manner. Your training plan will grow and evolve right along with you.
SAMPLE JOURNAL ENTRY
"My 30-Minute Perfect Practice Plan"
BLOCK PRACTICE
* Stability strokes on towel (five reps)
* Suspension-Point Drill (five reps)
* Setup checks with eye scans (ten perfect reps)
* Rhythm strokes to my metronome beat of 78 bpm (thirty seconds)
RANDOM PRACTICE
* Star Drill (two holes)
* One-ball random putting (ten minutes)
GAME
* North, South, East, West (record score in journal)
* Short, Medium, Long Touch Game (record score in journal)
CHAPTER
11
Tour Confidential—Learning from the Best
When putting performance is the thing that pays the mortgage, it's normal to have a plan that is well thought out and completely dialed in to the skills that matter. Learn from the best to be your best.
The exciting part about coaching the best players in the world is that each week on Tour is a master class in the business of improving. Every day I seem to learn something new as my players go through the self-discovery process of practicing and playing, or sometimes they share something they learned long ago or from another player or coach who has helped them along the way. Players aren't the only professors in this high-level training forum; caddies also provide great insight, because they have the best seat in the house, live their players' performances daily, and see what works and what doesn't under pressure. The reality is that working at Tour events creates an amazing environment to learn and grow. I love my academy at Shadow Ridge Country Club in Omaha, but I realized long ago that leaving "my little patch of grass" now and then was critical to my growth as a coach. I've grown my knowledge and gained wisdom exponentially by paying attention to what many of the Tour players do and why they do it, and it can work the same way for you.
Logically, PGA Tour players' training routines vary based on personality, schedule, foundational beliefs, and an awareness of what they need to prioritize to achieve greatness, as should yours. They don't necessarily "watch the clock," but they do pay attention to their intention, focus intently, and stay disciplined to their improvement plan, as should you. Here are a few examples of how some of the best putters in the world structure their training—this is your inside access to the Tour practice putting green.
TOM PERNICE, JR.'S TRAINING SCHEDULE
Block Practice
* Putting Professor: ten to twenty putts per week (hotel room or on-site; see below).
* Practice Station with the Pelz Putting Tutor and aim line (three stations): ten reps on a straight putt, ten reps on a left-to-right putt, and ten reps on a right-to-left putt. Then, one rep at each station in succession, cycled through three times. Performed on every training day (or three times a week).
Random Practice
* Star Drill (two holes, or as time allows)
* One-Ball Random Putting (nine holes)
* Forty-Fifty-Sixty speed work (ten minutes; see below)
Games
Perform two or three of the following, depending on time commitments:
* Short, Medium, Long
* Up-and-Down 20s
* Three-Hole Knockout
* North, South, East, West
What I Like Most About Tom's Training
Pernice uses an interesting combination of block-practice stations each week to ensure that his fundamentals and feels are sound before he works on touch and process. To re-create his ideal stroke shape, with the shaft staying on the plane that he establishes at setup and the face staying square to it throughout the motion, he trains on a device called the Putting Professor. There are both positives and potential negatives for using a guided device like this. On the positive side, this training aid gives instant feedback and easily allows you to feel and produce the perfect stroke shape time after time, and continuity over a long period of time is one of the keys to mastery.
Tom Pernice, Jr. uses the Putting Professor training aid to consistently re-create his ideal stroke shape (path, face, and ascent/descent angles). Aids like this are ideal for learning a new motor pattern, but to perform your best, you ultimately must focus on skill and be able to execute without a guide.
There are risks with guided devices, however, and Tom and I learned the hard way that tools can distract you and hinder performance if the work isn't properly compartmentalized. Remember that internal thoughts about how to move or control your putter during block practice tend to ruin fluid and athletic movement and generally don't hold up well under duress. On the course, it's advantageous to focus on items that are "freeing," like rhythm, Quiet Eye, and process; consciously trying to reproduce a perfect, aid-guided stroke shape has the opposite effect. Regardless of the devices you use, be disciplined to relegate them to your block practice only and let their positive effects "bleed" into your subconscious over time instead of forcing associated feels when you play.
Over the past year, Tom has prioritized skill over stroke shape in his block practice by confirming the correct read and start line by using a Pelz Putting Tutor / aim-line combination and cycling through three different putting locations. Being more skill-driven has allowed him to focus externally, which has freed him up, and as a result he's made huge strides in his performance. I love the way he utilizes the Putting Tutor on three putt types, because as imperfect humans, we often change unwittingly, whether it's how we see, feel, or execute the fundamentals. Without proper feedback, downhill right-to-left putts may feel awkward one day, while straight putts may feel awkward the next. Tom's block-practice stations feed him a steady diet of truth on all putt types, which has greatly improved his consistency.
Tom has a great pendulum arc to his swing, and because he has counted and swung his putter to a six count during his routine for the past twenty years, his rhythm is impeccable. Touch, for him, is nothing more than training his eyes to sense the speed of the green and the distance to the hole. To further develop his touch, he finds a downhill putt on the practice green and sets out tees at 40, 50, and 60 feet from the hole (Forty-Fifty-Sixty random speed work). He rolls three lag putts from each tee to the hole, then repeats for an uphill putt by rolling three balls to each tee. He'll often make Forty-Fifty-Sixty a game by putting one additional ball from each location and scoring his results. (In order to proceed from downhill putts to uphill putts, he must roll all three putts to within a club length of the hole, and complete the game by doing likewise on the uphill attempts.) I suggest you set any standard you feel is appropriate for your skill level and play Tom's touch game to win your way off the green.
BEN CRANE'S TRAINING SCHEDULE
Block Practice
* Center Face, Center Ball Drill (five reps once a week; see below)
* Dime Test (complete five to ten successful rolls over the dime daily)
_If he's able to confirm his ability to start the ball on line, Ben moves on to random practice. If not, he performs the . . ._
* Black-Line Drill (ten successful putts from three different locations—straight in, right-to-left, and left-to-right; see below)
Random Practice
* One-Ball Random Putting (as time allows)
* Read & Putt (nine holes; see below)
Games
* Two-Star
* Three in a Row
What I Like Most About Ben's Training
Ben keeps it simple and to the point. He lays a dime down on the green about four feet away from the ball and uses his eyes and his athleticism to roll putts over it to confirm that he's mastered the skill of starting the ball on line, allowing him to focus his practice efforts elsewhere. For Crane, it's rarely about technique—he's proven over the last decade that he's one of the best putters in the world. If for some reason he feels like he's missing his start lines, he'll calibrate his eyes and ability to aim by performing his Black-Line Drill. This entails setting up three putting stations to the same hole, in which the reads are straight in, a ball outside left, and a ball outside right. He then uses a Sharpie to draw a black line on the green perpendicular to his start line and to mark a dot a foot in front of the ball. At setup, he'll match the leading edge of his putter to the black line to make sure the face is square, then calibrate his eyes by scanning through the dot and into the cup. With the guides in place, it doesn't take long for Ben to rekindle the feels he needs to hit his start line on every type of putt.
After a handful of successful repetitions from each location, he moves on to training his sense of touch, judgment, and process via random practice. Ben views the Black-Line Drill as nothing more than a shooter taking time to sight his gun before hunting. A "sighted gun" instills the confidence he needs to avoid second-guessing his aim or technique on the course when he misses, which makes him resilient and allows him to think about the ball and the target and nothing else.
Read & Putt is essentially random practice with feedback, provided in this case by a video camera, caddie, or trusted friend. With just one ball in hand, select any putt on the practice green of 15 feet or less. Read the putt, then put a tee into the ground on your start line even with or just beyond the cup. Set up your video camera on a tripod behind the ball (or ask your caddie or friend to stand in the same place) and focus the lens so that it can record both your stroke and the putt in full. Read, putt, and record on nine different holes, then review the tape. Video is truth, so watch and learn. You'll discern quite a lot about your ability to read putts correctly, your aim, your stroke shape, and whether you're hitting your start lines. Every player has his or her tendencies, and knowing yours will prove to be a huge advantage. I didn't teach Read & Putt to Ben; one of his former coaches, Carl Welty, did. I love its effectiveness. Anyone can execute with the help of training aids; it's a different story when you're on your own. Your Read & Putt tapes will expose your flaws and tendencies. Use them as blueprints for crafting future training plans that address your specific weaknesses.
Ben Crane performing his Center-Face, Center-Ball Drill, an effective exercise for players who struggle with contact above the sweet spot, high on the putterface. To perform this drill, Ben pushes a tee into the ground directly behind the ball so that only the crown is exposed. The goal? Deliver the putterhead to the ball on a slightly ascending angle of attack so that the putter just misses the tee. If you hit the tee, your angle of attack is too steep—you'll make contact above the sweet spot and fail to optimize roll.
CHARLEY HOFFMAN'S TRAINING SCHEDULE
Block Practice
* T-Square on straight putts (two minutes; see below)
* Dot-to-Dot (seven minutes; see below)
Random Practice
* One-Ball Random Putting (five minutes)
Games (As Time Allows)
* Short, Medium, Long
* Two-Hole Knockout Test
* Read & Putt (nine holes)
What I Like Most About Charley's Training
Charley is not only talented and hardworking, he has the perfect attitude to be an elite performer. He pays attention to the details when he trains his mechanics, but he doesn't let the technical stuff or results affect him on the course. He's a "surfer dude" at heart, which is a huge asset, because it's critical to remain calm when facing pressure or when your "A" game just isn't there. Charley starts the block section of his training by holing ten straight-in four-footers, checking his distance from the ball, ball position, stance, and shoulder and putterface alignment using a T-square in front of a mirror on each one (photo, here).
The T-square allows Charley to confirm in just two minutes that he's executing nearly every main pre-stroke fundamental correctly, and because it's both simple and effective, he hasn't strayed from it in the six years I've been coaching him.
He follows up his T-square work with a drill designed to train his eyes to scan down the correct line on breaking putts, called "Dot-to-Dot." I learned this drill from another one of my Tour clients, James Driscoll. Here's how it works: Find a severely breaking short putt and mark the ball's position at address by dotting the green with a Sharpie. Read the putt, choose a starting line to the best of your ability, and mark it by punching a small hole in the green with a tee about 18 inches in front of the first dot. Putt a few balls over the mark and adjust the spot based on your results. Once you have the read nailed, mark the starting line with another Sharpie dot. Next, place an alignment stick parallel to the two dots so that you can confirm your stance and aim. It should look like this:
Charley Hoffman uses a T-square to check his pre-stroke fundamentals on straight putts to kick off the block section of his regular training program. Like all elite putters, Charley knows that setting up properly is the key to all the moves that follow and, ultimately, his consistency for starting the ball on his intended line.
To complete the drill, place a ball on the first dot, then stand behind the ball and visualize three points: the ball (Dot 1), your start line (Dot 2), and the expected entry point into the hole. Holding this picture in your mind's eye, walk in and align your putter to the start line using the stick for confirmation. After settling into your stance, cleanly and efficiently scan your eyes from dot to dot and over the entry point, and then back again. Settle your eyes on your Quiet Eye spot, react to what you see by letting the putter swing back and through beneath your steady head, and find and focus on the dot underneath the ball for a full second before looking up. That's one repetition. Hoffman and Driscoll perform three reps each for left-to-right and right-to-left putts, and often repeat the cycle in full, which takes them about five to seven minutes.
Charley Hoffman performing the Dot-to-Dot drill to groove the proper aim fundamentals, as well as efficiently train his eyes to scan three targets.
After completing the Dot-to-Dot drill and a little One-Ball Random Putting, Charley will either win his way off the green playing a touch game or play nine holes of Read & Putt with his caddie, Bret Waldman, acting as the camera.
CHARLIE WI'S TRAINING SCHEDULE
Block Practice
* Putting Arc (three minutes)
* Resonant Rhythm Drill (one minute)
* Metronome work with aim line (three minutes)
Random Practice
* Three-Ball Drill (five minutes)
* One-Ball Random Putting (as time allows)
Games
* Three-Hole Knockout (five minutes)
* Short, Medium, Long (ten minutes)
* Read & Putt (nine holes if time allows)
What I Like Most About Charlie's Training
Charlie is disciplined about using his metronome and training his rhythm every day. We've worked hard on this since I started coaching him two decades ago—that's commitment! The thing I like most about Charlie's practice is that he transitions quickly from the internal focus of his block practice to an external focus by performing his Three-Ball Drill. If you can see the target clearly in your mind's eye and react with complete commitment like an athlete, you have a huge advantage over the army of overthinkers you'll compete against. In addition, it'll help you rekindle your childlike approach to putting—free, unjudged, and fearless.
CAMERON TRINGALE'S TRAINING SCHEDULE
Block Practice
* Aim line with gate / Prayer Drill (ten reps)
* Aim line with gate / Normal Grip (ten reps; see below)
Random Practice
* Star Drill (minimum two holes)
* One-Ball Random Putting (distance putts for touch as time allows)
Games
* Three in a Row
What I like Most About Cameron's Training
Cameron's gift as a player is his perspective, attitude, and commitment to process. I hate to say this, because I may live to regret it later, but he's my favorite player to work with for these very reasons. He just doesn't ever seem to get in his own way. Cam opens his block practice by performing the Prayer Drill with an aim line and gate (see Chapter 4, here). The Prayer Drill not only activates the proper muscles and coordinates their movement so that the player's suspension point is maintained, it also creates the feelings of connection and free-flowing rhythm, which Cameron loves. After creating the sensation with the prayer grip, he rekindles it using his normal grip.
Cameron Tringale confirms his setup alignments, stroke shape and suspension point with a simple but effective feedback station, first using a "prayer" grip followed immediately with his usual hold. It takes him just a few minutes to confirm that he's executing all the technical foundations of his plan.
Following this block-practice sequence, his training schedule is mostly about process—training his pre-, in- and post-stroke steps to build trust and let the results take care of themselves.
JOURNAL WORK
Expert at Work! Watch Cameron Tringale perform the Prayer Drill and explain its benefits in a special video at jsegolfacademy.com/index.php/cameron-prayer.
At this point, you know just about everything I know about putting and training—you're ready to finalize your PGA Tour–level practice program. Open your journal to the first page and revisit your stated goals. Your passion for achieving them will serve as the driving force for you to pay attention to the details as you work both hard and smart. Turn to the "Training" tab in your journal and review the roughed-out version you created after reading Chapter 10. Review your notes, consider what the best players in the world do to stay organized and on point, and make final adjustments, if needed. You're ready to train like a pro.
PARTING THOUGHTS
_Whew!_ That was a lot of information, but the goal of _Your Putting Solution_ was to arm you with everything you need to dramatically improve your putting, leaving no lingering doubts or unanswered questions about what you must do to change your fortune on the greens. If you boil it all down to a quest to grow the four essential skills, and see your organized half-hour practice sessions and mental exercises as the means to get it done, it's a fairly uncomplicated process. _You can do this!_
I've helped hundreds—if not thousands—of players sift through this process, organize their thoughts and actions, and improve their putting performances beyond what even they thought was possible. It simply takes a willingness to do the right kind of work. You can't think that the tasks of affirming your skills or journaling are "silly," or that training your eyes to be "quiet" and scan a precise line is mental hocus-pocus. Nor can you scrap your foundations for fads or "the next big thing" after one bad putting round. Stay strong, remain committed, and endeavor to improve every time you practice and play. Magic bullets do not exist.
If you've read this far, you know what the solutions are, and that there will be obstacles in your path that can either push you forward or derail you, depending on your perspective and mindset. I'm optimistic that you'll get the job done. Personal growth derived from wisdom and self-discipline is among the best things you can experience in life.
This is a journey worth taking. Enjoy it.
PERFECT ROLL
Optimal roll depends on proper impact conditions: returning the shaft to the 90-degree angle established at setup and delivering the putter on a slightly ascending angle so that the middle of the putterface contacts the middle of the ball with approximately four degrees of effective loft. With great impact, the ball will lift slightly out of any depression and launch horizontally along the ground, devoid of spin, for several inches before beginning its forward roll toward the target.
PRE-STROKE FUNDAMENTALS
Your setup foundations are the precursor to all movement to follow, and executing them is key to consistency and overall performance. Always check and confirm the following:
**1.** Your dominant eye is approximately two inches behind the ball.
**2.** The shaft sits 90 degrees from the horizon (no lean in any direction).
**3.** Your shoulders are square to your intended start line.
**4.** Your eyes are over the ball or one inch inside of it.
**5.** The bottom of your trail hand hangs directly under the ball joint of the same shoulder.
**6.** You're maintaining a stable athletic stance, with your weight evenly balanced over your arches.
PENDULUM SUSPENSION POINT
Your in-stroke foundations are controlled by the physics of a pendulum. To maintain your suspension point, you must coordinate the movement between all the moving segments so that the butt of the club points at the same place near your sternum from start to finish. Learning this feel is all about feedback; training aids such as the Putting Pendulum Rod allow you to confirm the execution of the above, which builds trust and ultimately frees you up to simply react to the target during actual rounds.
YOUR STROKE: EVEN LENGTH AND ARCING
A great pendulum stroke is perfectly symmetrical. Notice in the photos how I remain stable as I disassociate my shoulder and chest movement from my hips and spine. My backstroke and through-stroke are even in both pace and length, while my putter swings on a very slight arc with the face remaining square to the path throughout the motion. Returning the putterface to square at impact ensures that all putts start on line. A final key: my gaze remains quiet and I look and feel tension free.
SCAN THREE POINTS
Seeing and connecting three points gives your brain the critical information it needs to send the ball down your chosen line at the perfect speed. Once over the ball, scan from the ball (point 1) to your start line (point 2) and finally to the entry line of the ball into the hole (point 3). When you complete this scan, settle your eyes on your Quiet Eye spot and "let it go."
The Belgian A10 motorway (Autoroute A10) runs from Ostend to Brussels. The motorway is 104 km long.
Route
See also
Motorways in Belgium
Other projects
External links
{"url":"https:\/\/www.nature.com\/articles\/s41598-021-96532-z","text":"Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.\n\n# Microendoscopic calcium imaging of the primary visual cortex of behaving macaques\n\n## Abstract\n\nIn vivo calcium imaging with genetically encoded indicators has recently been applied to macaque brains to monitor neural activities from a large population of cells simultaneously. Microendoscopic calcium imaging combined with implantable gradient index lenses captures neural activities from deep brain areas with a compact and convenient setup; however, this has been limited to rodents and marmosets. Here, we developed miniature fluorescent microscopy to image neural activities from the primary visual cortex of behaving macaques. We found tens of clear fluorescent signals from three of the six brain hemispheres. A subset of these neurons showed clear retinotopy and orientation tuning. Moreover, we successfully decoded the stimulus orientation and tracked the cells across days. These results indicate that microendoscopic calcium imaging is feasible and reasonable for investigating neural circuits in the macaque brain by monitoring fluorescent signals from a large number of neurons.\n\n## Introduction\n\nMacaque monkeys have long been used as experimental models to understand the neural mechanisms of the human brain. Extracellular recordings using microelectrodes have been traditionally used to record neural activity in the macaque brain. Recently, calcium imaging with fluorescence microscopy using acutely injected calcium-sensitive dye1,2,3,4 or genetically encoded calcium indicator5,6,7,8,9,10,11,12,13,14 has been applied. Calcium imaging allows the simultaneous recording of a number of neurons and thus is a powerful tool for understanding the function of local neural circuits at the mesoscopic scale. In primates, two-photon microscopy has been mainly used in calcium imaging studies to record neural activity from the cortical surface. In rodents, microendoscopic calcium imaging, which implants a gradient index (GRIN) lens into the brain and observes neural activity through a miniaturized fluorescent microscope, is also widely used, producing a number of innovative results concerning various brain areas, including deep neural nuclei15,16,17,18,19,20,21. Although there also has been a report on the application of microendoscopic calcium imaging to marmosets22, its application to macaque monkeys has not yet been reported.\n\nMicroendoscopy has several advantages over two-photon microscopy despite its lower temporal and spatial resolutions. In two-photon microscopy, adjusting the viewing angle to the observation plane is labor-intensive, and the animal must be completely immobilized against the experimental apparatus during an imaging session. For large animal species such as macaque monkeys, it is necessary to use an extra auxiliary fixture to achieve complete fixation of the animal\u2019s head7. On the other hand, in microendoscopy, imaging data can be obtained simply by snapping the miniaturized microscope to a baseplate. 
The light weight and small size of the miniaturized microscope allow the observation of animals under free-moving conditions, although wireless data transmission will be necessary in the case of macaque monkeys, as has been done in large-scale microelectrode recordings [23]. Furthermore, by implanting GRIN lenses at multiple locations in the same brain, microendoscopy can be used for simultaneous recording from multiple sites. Given these advantages and potential utilities, it is useful to establish the viability of microendoscopy in the macaque brain.

In this study, we applied microendoscopic calcium imaging to macaque monkeys, targeting the primary visual cortex (V1), where neurons can easily be activated by presenting visual stimuli. We incorporated the calcium indicator GCaMP6s into a mosaic adeno-associated virus vector packaged with AAV1 and AAV2 capsid proteins (AAV2.1) [24]. We injected this virus into the bilateral V1 of three macaques and observed highly efficient infection in these regions. Prism lenses, which were equipped with a triangular prism at the tip of a rod lens, were implanted into the vector injection sites. Miniaturized microscopy through the prism lenses revealed fluorescence changes in four hemispheres from two of the three monkeys and identified dozens of neurons in three of these four hemispheres. For these two monkeys, Gabor patches with different orientations were presented in the peripheral field of view during fixation to examine the stimulus response. The observed V1 neurons had receptive fields at the corresponding positions on the retinotopic map and showed significant selective responses to specific orientations. We also successfully decoded the orientation of the visual stimulus and tracked the cells across days. In summary, we successfully recorded neuronal populations simultaneously in the macaque V1 using microendoscopy, and the observed cell populations showed response patterns consistent with established V1 neuronal properties. These results indicate that microendoscopic calcium imaging is a powerful recording method for the macaque brain.

## Results

### Virus injection and prism lens implantation into the macaque V1

In this study, we used three macaque monkeys (Monkeys U, J, and O). To conduct microendoscopic calcium imaging from the bilateral V1 of the macaque brain during the monkeys' performance of a behavioral task, we sequentially performed four surgeries: (1) attachment of a headpost to immobilize the monkeys in a monkey chair, (2) injection of the viral vector, (3) implantation of prism lenses, and (4) placement of the baseplate. We used GCaMP6s, whose fluorescence intensity integrates several spikes occurring in close temporal proximity [25], to obtain a strong fluorescence signal in response to visual stimuli. A PrismProbe (Inscopix, 1 mm diameter, 9.1 mm length) was used in this study to facilitate cortical penetration. The PrismProbe was equipped with a triangular prism at the tip, and signals were collected from the side of the tip. We targeted positions 12–14 mm lateral from the midline and 4–6 mm ventral from the lunate sulcus in V1 (Fig. 1a).
This location is known to have a receptive field at 4° to 8° (visual angle) from the fovea and at approximately −45° in the lower quadrant of the visual field [26,27,28,29].

For vector injection surgery (Fig. 1b), a 10-mm circular craniotomy was performed at the targeted locations, and an incision made in the dura mater was widened to expose the cortical surface. Vascularity was observed with a surgical microscope, and viral vectors were injected using glass micropipettes at positions that avoided large blood vessels. In Monkey U, the viral vector was injected into tracks 0.3 mm dorsal and ventral from the center of the position where the prism lens was planned to be inserted. In Monkeys J and O, in addition to these two tracks, the viral vector was injected into another two tracks, 0.5 mm lateral and medial from the center of the lens position (Fig. 1c). In each track, 1.0 µL of vector solution was injected at three different depths (Fig. 1b). Monkey U underwent vector injection and lens implantation surgeries on the same day. On the other hand, the craniotomy sites of Monkeys J and O were closed after vector injection, and surgeries for lens implantation were performed approximately 1 month later to allow the expression of GCaMP6s. To determine the sites for lens implantation in Monkeys J and O, the positions of viral vector injection were identified based on vascularity. Strong fluorescence signals were confirmed by handheld fluorescence microscopy at all four injection sites (Fig. 1d). Then, a scalpel blade was used to make a cut on the cortical surface in advance, and the lens was inserted through the cut and advanced until it was 0.5 mm beyond the position where the prism tip completely entered the cortex. The observation plane of the lens (1 mm × 1 mm) was set to face the dorsal side. Handheld fluorescence microscopy was also applied to the postmortem brain of Monkey U, showing a strong fluorescence signal around the lens trace in the right hemisphere, but not in the left hemisphere (Fig. 1d). After a sufficient recovery period following lens implantation, baseplates were placed to mount the miniature fluorescent microscope.

Immunohistochemical staining was performed on Monkeys U and O to confirm GCaMP expression and lens positions after the completion of the imaging sessions. In Monkey U, we observed a high frequency of GCaMP expression in the V1 of the right hemisphere, and the population of GCaMP-positive cells was located close to the observation plane of the lens (Fig. 2d–f). The left hemisphere showed a similarly high frequency of GCaMP expression, but its position was in V2, close to the lunate sulcus, and the population of GCaMP-positive cells was located slightly away from the observation plane (Fig. 2a–c). In Monkey O, a high frequency of GCaMP expression was observed in wide V1 areas of both the left and right hemispheres, and the observation plane of the lenses was located in V1 close to GCaMP-positive cells (Fig. 2g–l). The lens insertion position in the left hemisphere was slightly closer to the midline than in the right hemisphere (Fig. 2g, j). As will be discussed in the next section, fluorescent signals with cellular-level resolution were observed in the right V1 of Monkey U and in the bilateral V1 of Monkey O, whereas only vague and diffuse signals were observed in the left V1 of Monkey U. The success and failure of fluorescence observations in these hemispheres were consistent with the relationship between the lens positions and the locations of GCaMP expression.

### Specific receptive fields of the detected V1 neurons

The monkeys were trained to perform a fixation task with a brief presentation of a peripheral visual stimulus (Fig. 3a). During this task, the monkeys were required to maintain fixation on the central fixation point (FP). A Gabor patch was briefly presented at the periphery of the field of view 500 ms after the start of fixation. If the monkeys succeeded in continuing to fixate on the FP, a correct sound and a drop of water as a reward were delivered. We first conducted sessions to identify receptive fields. During these sessions, Gabor patches were sequentially presented at 15 locations in the field of view opposite to the recorded hemisphere (Fig. 3b). In subsequent sessions, Gabor patches were presented at a fixed position where the receptive field had been identified. In all of these sessions, Gabor patches had six different orientations, and they were presented repeatedly in sequence (Fig. 3c).
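A stimulus of this kind is straightforward to reproduce. The sketch below builds a Gabor patch in numpy; the patch size, spatial frequency, envelope width, and phase are placeholder values, since the excerpt does not state the stimulus parameters used in the experiments.

```python
import numpy as np

def gabor_patch(size_px=128, cycles_per_patch=4.0, theta_deg=30.0,
                sigma_frac=0.15, phase=0.0):
    """Oriented sinusoidal grating windowed by a Gaussian envelope.
    Returns values in [-1, 1]; scale to screen luminance as needed."""
    half = size_px // 2
    y, x = np.mgrid[-half:half, -half:half] / float(size_px)  # normalized coords
    theta = np.deg2rad(theta_deg)
    xr = x * np.cos(theta) + y * np.sin(theta)   # axis of luminance modulation
    grating = np.cos(2.0 * np.pi * cycles_per_patch * xr + phase)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma_frac**2))
    return grating * envelope

# Six evenly spaced orientations, as in the stimulus protocol described above
patches = [gabor_patch(theta_deg=o) for o in np.arange(0, 180, 30)]
```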
After injection of the virus vector and installation of the lens and baseplate following task training, microendoscopic calcium imaging was performed from the bilateral V1 of the three monkeys under awake conditions. We observed fluorescence dynamics in the bilateral V1 of two of the three monkeys (four hemispheres of Monkeys U and O). In the right hemisphere of Monkey U and both hemispheres of Monkey O, multiple fluorescence signals with cellular-level resolution were observed. Only a vague signal of change was observed in the left hemisphere of Monkey U, and no fluorescence changes were detected in either hemisphere of Monkey J.

We then conducted recording sessions in Monkeys U and O to identify the receptive fields of neural populations (see "Methods" section). We used constrained nonnegative matrix factorization for microendoscopic data (CNMF-E) [30] to detect putative neurons as regions of interest (ROIs) (see "Methods" section). Because neurons observed through each lens showed very similar receptive fields, the analysis here used a time series of calcium dynamics from one representative ROI for each hemisphere that showed stimulus responses to many orientations. In the right hemisphere of Monkey U, stimulus responses were strongest at 5° eccentricity at 8 o'clock (Fig. 4a, b). In the left hemisphere of Monkey O, stimulus responses were strongest at 7° eccentricity at 4 o'clock (Fig. 4c, d), while in the right hemisphere, stimulus responses were strongest at 5° eccentricity at 8 o'clock (Fig. 4e, f). These receptive fields were in the vicinity of where we expected them to be located based on the known retinotopic map in macaque V1 [26,27,28,29].

### Detecting putative neurons

We then performed recording sessions to examine the orientation selectivity of the detected putative neurons by presenting Gabor patches to the identified receptive fields in each of the three hemispheres. Each of the six different orientations was presented 20 times, and the orientation selectivity index (OSI) was calculated from the data obtained for each detected putative neuron (see "Methods" section). The OSI is highest (OSI = 1) when an ROI responds to only one orientation and not to the others. The response strength to each orientation was also used to calculate the preferred orientation of the ROI.
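The paper defers its exact OSI formula to the Methods, which are truncated in this excerpt, so the sketch below uses a common convention: OSI as the length of the resultant vector in orientation space divided by the summed response (equivalently, 1 minus the circular variance), which equals 1 when a single orientation drives all of the response, matching the property stated above. The preferred orientation is the direction of the vector sum, with angles doubled because orientation has a 180° period. Variable names are illustrative.

```python
import numpy as np

def osi_and_preferred(responses, orientations_deg):
    """responses: mean stimulus response per orientation (length 6 here).
    Returns (OSI in [0, 1], preferred orientation in degrees).
    This is a common convention; the paper's own definition may differ."""
    r = np.clip(np.asarray(responses, dtype=float), 0.0, None)
    theta = np.deg2rad(2.0 * np.asarray(orientations_deg))  # double: 180 deg period
    vec = np.sum(r * np.exp(1j * theta))                     # resultant vector
    osi = np.abs(vec) / np.sum(r) if np.sum(r) > 0 else 0.0
    pref = (np.rad2deg(np.angle(vec)) / 2.0) % 180.0
    return osi, pref

# Example: an ROI responding mostly to the 60-degree grating
osi, pref = osi_and_preferred([0.1, 0.2, 1.0, 0.3, 0.1, 0.1],
                              [0, 30, 60, 90, 120, 150])
```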
In a representative session recorded from the right hemisphere of Monkey U, 12 ROIs were detected on the medial side of the field of view (Fig. 5a, Supplementary Movie 1). For Monkey O, 37 and 45 ROIs were detected on the ventromedial side of the field of view in sessions recorded from the left (Fig. 5c, Supplementary Movie 2) and right (Fig. 5e, Supplementary Movie 3) hemispheres, respectively. Figure 5b, d, f shows the time series of the fluorescence signals of typical ROIs in Fig. 5a, c, e, respectively. Many of these putative neurons were significantly responsive to visual stimuli presented to the receptive field (75/94, 79.8%).

### Orientation tuning in the detected ROIs

To examine whether the detected ROIs showed selectivity for a specific orientation, we calculated the OSI value for each ROI. The distribution of OSIs presented in Fig. 6a shows that they vary from large to small (Fig. 6a, inset). Figure 6b shows polar-coordinate displays of the stimulus responses of the ROIs with the largest OSI values among those detected from each hemisphere. In addition, we calculated the preferred orientation of each ROI as the direction of the vector sum of the responses to the six orientations of the visual stimulus. The distributions of preferred orientations are shown in Fig. 6c. Except in the right hemisphere of Monkey U, where the number of ROIs was relatively small, the distributions of preferred orientations were not highly biased toward any particular orientation (Fig. 6c, inset).

### Decoding orientations of presented stimuli from imaging data

Next, we calculated the decoding performance using a support vector machine to determine whether we could decode the orientation of the visual stimuli from the calcium dynamics data obtained by microendoscopy. In this analysis, data from the representative sessions for each of the three hemispheres were merged (94 ROIs in total). We examined the decoding performance over time with a sliding window applied to time-series data aligned to stimulus onset and found that the performance increased after stimulus onset and became significantly higher than that for shuffled data at approximately 500 ms (Fig. 7a). Then, to investigate how decoding performance after stimulus onset is affected by the number of ROIs, we gradually increased the number of ROIs used in this analysis. We used both random selection of ROIs from the 94 ROIs and the addition of ROIs in order, starting with the one with the highest OSI. As a result, the decoding performance of the random-selection method increased gradually as the number of ROIs increased, and the decoding performance of the highest-to-lowest method was not significantly different from that of the random-selection method (Fig. 7b). Additionally, the peaks in decoding performance occurred at the points where all or most ROIs were used, indicating that information from ROIs other than those with high OSI values is also necessary to obtain high decoding performance.
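A minimal sketch of this style of sliding-window decoder follows, assuming a trials × ROIs × frames array of fluorescence traces aligned to stimulus onset and integer orientation labels. The frame rate, window length, cross-validation scheme, and linear-kernel SVM here are placeholder choices, not the paper's exact settings.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def sliding_window_decoding(traces, labels, fps=20.0, win_s=0.5, step_s=0.1):
    """traces: (n_trials, n_rois, n_frames), aligned to stimulus onset.
    labels: (n_trials,) orientation indices (0-5).
    Returns window-center times (s) and cross-validated accuracy."""
    n_trials, n_rois, n_frames = traces.shape
    win, step = int(win_s * fps), max(1, int(step_s * fps))
    times, scores = [], []
    for start in range(0, n_frames - win + 1, step):
        # Feature per trial: each ROI's mean fluorescence within the window
        X = traces[:, :, start:start + win].mean(axis=2)
        clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        scores.append(cross_val_score(clf, X, labels, cv=5).mean())
        times.append((start + win / 2) / fps)
    return np.array(times), np.array(scores)

# Chance level for six orientations is 1/6; significance is assessed by
# comparing against runs with label-shuffled data, as in the text above.
```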
### Cell tracing across days

With microendoscopic calcium imaging, the same cells can be observed across days through a GRIN lens implanted deep in the brain [15,16,19,31,32,33]. We were able to maintain observations at cellular-level resolution across days for the right hemisphere of Monkey O. Here, we used imaging data from Days 1 and 3 to track the detected ROIs. We applied a cell-tracking method based on a probabilistic model that uses the centroid distance and spatial correlation between the footprints of the ROIs [34] (see "Methods" section). As a result, we were able to track a large portion of the cells detected on Day 1 in the imaging data from Day 3 (Fig. 8a–c; 33/45 ROIs, 73.3%).

To determine whether the identified ROIs had similar cellular characteristics, we first calculated the Pearson correlation of the stimulus response between the two recording sessions. We calculated the correlation of the fluorescence time-series data (0–2000 ms after stimulus onset) for each orientation between the identified ROIs, and the average of the correlation coefficients was obtained for each pair. Examples of the top three pairs by correlation coefficient are shown in Supplementary Fig. 1a. The correlation coefficients for the identified pairs were significantly higher than those for shuffled pairs (Fig. 8d; t(361) = 8.20, p = 4.29E-15, two-sample t-test). We then calculated the correlation between the OSI values of the identified pairs. Only ROIs that showed a significant stimulus response on both days (Days 1 and 3) were used in this analysis (21/33 ROIs). However, no significant correlation was found (Fig. 8e; r = 0.158, p = 0.495, Pearson correlation). Finally, we calculated the difference in preferred orientation between the two recording sessions (Fig. 8f). The proportion of ROIs with small differences of less than 10° was the largest, and the proportion of ROIs decreased as the differences increased. When the ROI pairs were shuffled, the differences were uniformly distributed. Therefore, we performed a uniformity test on the identified ROIs. The distribution of observed values was significantly different from the uniform distribution (p = 0.019, two-sample Kolmogorov–Smirnov test) but not significantly different from a fitted exponential distribution (p = 0.958), suggesting that many of the tracked cells retained their response properties.

As an additional analysis, we performed the same cell tracing between two consecutive recordings on the same day (Rec1 and Rec2 on Day 1). As a result, 84.4% of the ROIs in Rec1 were tracked in Rec2 (Supplementary Fig. 2a–c; 37/45 ROIs, 84.4%). The Pearson correlation of the stimulus response between the identified pairs was significantly higher than that of shuffled pairs (Supplementary Fig. 1b and 2d; t(416) = 8.86, p = 2.41E-17, two-sample t-test). While the correlation between the OSI values was not significant (Supplementary Fig. 2e; 26/37 ROIs, r = 0.149, p = 0.469, Pearson correlation), the difference in preferred orientation between identified pairs was biased toward zero (Supplementary Fig. 2f), as was the case across days. The distribution was significantly different from a uniform distribution (p = 0.019, two-sample Kolmogorov–Smirnov test) but not significantly different from an exponential distribution (p = 0.603).
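For intuition, the sketch below pairs ROI footprints across sessions using the same two cues (centroid distance and spatial correlation), but with a simple greedy rule and hand-picked thresholds rather than the probabilistic model of the cited method [34]. It assumes the fields of view are already registered to each other.

```python
import numpy as np

def match_rois(footprints_a, footprints_b, max_dist_px=6.0, min_corr=0.5):
    """footprints_*: arrays (n_rois, H, W) of nonnegative spatial footprints
    from two sessions. Returns greedy (index_in_a, index_in_b) pairs."""
    def centroid(fp):
        ys, xs = np.nonzero(fp > 0)
        w = fp[ys, xs]
        return np.array([np.average(ys, weights=w), np.average(xs, weights=w)])

    cents_a = [centroid(fp) for fp in footprints_a]
    cents_b = [centroid(fp) for fp in footprints_b]
    pairs, used_b = [], set()
    for i, ca in enumerate(cents_a):
        best, best_corr = None, min_corr
        for j, cb in enumerate(cents_b):
            if j in used_b or np.linalg.norm(ca - cb) > max_dist_px:
                continue  # too far apart to be the same cell
            corr = np.corrcoef(footprints_a[i].ravel(),
                               footprints_b[j].ravel())[0, 1]
            if corr > best_corr:
                best, best_corr = j, corr
        if best is not None:
            pairs.append((i, best))
            used_b.add(best)
    return pairs
```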
## Discussion

Microendoscopic calcium imaging is rapidly gaining widespread use in systems neuroscience, particularly in rodents, and has generated numerous new insights through observations of various brain regions, including deep nuclei. However, this technique had not yet been applied to macaque monkeys, despite their status as a powerful model for elucidating human brain function. To establish the feasibility of microendoscopic calcium imaging in the macaque brain, we tested it with bilateral V1 as a target area. As a result, we detected fluorescent signals with cellular resolution in response to visual stimuli in three of the six hemispheres used in this study. The detected neurons had receptive fields at locations consistent with the known retinotopic map and showed tuning to specific orientations, which are characteristic of V1 cells. We also successfully decoded the orientation of the stimuli presented to the monkeys and tracked the cells across days using the calcium dynamics data. These results show that microendoscopic calcium imaging is an effective observation method, even in macaque monkeys.

The properties of macaque V1 neurons have traditionally been explored using electrophysiological techniques, and the results of our study are consistent with those of previous studies in many respects. It has been shown that the eccentricity of the receptive field is greater in regions closer to the midline on the V1 surface [26,27,28,29]. Consistent with this, in Monkey O, the lens insertion position in the left hemisphere was closer to the midline than that in the right hemisphere, and the eccentricity of the receptive field was greater in the left hemisphere than in the right hemisphere. The magnification factor (the cortical surface distance between two points representing visual field positions 1° apart) has been found to be 1–2 mm/degree around the eccentricity (5°–7°) of the receptive fields we identified [27,28,29]. Based on this, the maximum change in receptive field position across the field of view of the prism lens we used is expected to be less than 1°. This is consistent with our observation that the fluorescence signals in the field of view recorded by each lens had almost the same receptive field.
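This bound is simple geometry; as a back-of-the-envelope check (the arithmetic is ours, combining the 1 mm × 1 mm field of view stated earlier with the quoted magnification factor):

$$
\Delta\theta_{\max} \approx \frac{\mathrm{FOV}}{M} = \frac{1\ \mathrm{mm}}{1\text{–}2\ \mathrm{mm/deg}} \approx 0.5^{\circ}\text{–}1^{\circ},
$$

consistent with the stated bound of less than 1°.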
This may be due to the large variation in the orientation selectivity of the observed cells.

We performed viral vector injection and prism lens implantation in a total of six hemispheres of three monkeys. Although handheld fluorescence microscopy and immunohistochemical staining showed GCaMP6s expression in all six hemispheres, calcium dynamics at cellular resolution were successfully observed in three of them. Postmortem fluorescent staining of the brain of Monkey U showed that GCaMP6s expression was close to the observation plane of the lens in the right hemisphere, where neurons were visible with good resolution, while expression was slightly away from the observation plane in the left hemisphere, where only diffuse signals were visible with poor resolution. This suggests that the success or failure of imaging depends primarily on whether the lens can be implanted in the vicinity of the expressed calcium indicators. Future technical improvements in both wider viral infection and more precise lens implantation will be needed to increase the detection rate of calcium signals.

In this study, we reproduced the well-established characteristics of V1 cells, including receptive fields and orientation tuning, using the time-series data of the detected ROIs. We also successfully decoded the orientation of visual stimuli from microendoscopic data, despite the relatively low temporal resolution. These results demonstrate the potential of microendoscopic calcium imaging for elucidating the functions of brain areas other than V1, such as the prefrontal cortex. In particular, it is suitable for investigating the dynamics of local neural circuits during long-term learning. Moreover, we successfully tracked many cells across days. Rodent studies have succeeded in tracing the same cells over weeks with high recording quality using microendoscopy [31]. With this method, it is possible to examine how the response properties of individual neurons and the organization of neural circuits change during the learning process. Furthermore, one of the advantages of calcium imaging is the minimal bias in the sampling of recorded cells. Even cells with low firing rates, which would be discarded in extracellular recording, can still be recorded if they are within the field of view and express sufficient amounts of calcium indicators. When such cells change their response properties as a result of learning, it may be possible to review past data and retrospectively examine the same cells. Thus, microendoscopic calcium imaging allows for a multifaceted analysis of the dynamics of local neural circuits in non-human primates.

## Methods

### Subjects

Three male Japanese monkeys (Macaca fuscata) were used as experimental subjects (Monkey U, 12.8 kg; Monkey J, 8.2 kg; and Monkey O, 8.6 kg). All experimental protocols in this study were approved by the Animal Care and Use Committee and the Safety Committee for Genetic Modification Research at Tamagawa University and were in accordance with the National Institutes of Health's Guide for the Care and Use of Laboratory Animals and with the ARRIVE guidelines. The monkeys were kept in individual primate cages in an air-conditioned room where food was available ad libitum. The body weight and appetite of the monkeys were checked, and vegetables and fruits were provided daily.
In the breeding room, many cages face each other so that the monkeys can see and hear one another.

### Virus vector

The AAV2.1-CaMKIIα-GCaMP6s vector (6.0 × 10^13 genome copies per mL) was produced by the helper-free triple transfection procedure and purified by affinity chromatography (GE Healthcare). Viral titers were determined by quantitative PCR using TaqMan technology (Life Technologies). The transfer plasmid (pAAV-CaMKIIα-GCaMP6s-WPRE) was constructed by inserting the mouse CaMKIIα promoter, the GCaMP6s gene, and the WPRE sequence into an AAV backbone plasmid (pAAV-CMV, Stratagene).

### Surgery

To perform microendoscopic calcium imaging, we performed a series of surgeries on each monkey: (1) installation of a head holder; (2) injection of viral vectors, once in each of the left and right hemispheres; (3) implantation of prism lenses; and (4) installation of a baseplate to hold the miniaturized microscope. In each surgery, the monkeys were anesthetized by intramuscular injections of ketamine hydrochloride (10 mg/kg) and xylazine (1 mg/kg) and maintained under general anesthesia with isoflurane (1.0–2.0%). In the head holder surgery, after the skull was exposed, a head holder placed at the midline and ear-bar zero was attached to the dental acrylic head implant, which was fastened to the skull by acrylic screws.

After the task training was finished, the second and third surgeries were performed to inject the viral vector into V1. The dental cement around the injection site, which was determined based on MRI images, was removed, and a craniotomy of 10 mm diameter was made in the skull. The dura at the craniotomy site was then cut longitudinally and sutured open on both sides to expose the cortex. The injection site was chosen to avoid large blood vessels, and the virus was injected using a glass micropipette connected to a 10-µL Hamilton syringe. The micropipette was manually lowered using a micromanipulator (SM-11, Narishige, Tokyo, Japan). We injected 1.0 µL of vector solution at depths of 2.5, 1.5, and 0.5 mm from the cortical surface. Injections were made using an electric syringe pump (Legato210P, KD Scientific, MA, USA) at a rate of 0.2 µL/min. After each injection, we waited 5 min to prevent diffusion and backflow. Two injection tracks were made around the lens implantation site for Monkey U and four tracks for Monkeys J and O. Viral vector injection and lens implantation were performed on the same day for Monkey U and on different days, about a month apart, for Monkeys J and O. After the vector injection surgery for Monkeys J and O, an artificial dura was placed under the native dura, covered with medical silicone adhesive, and sealed with dental cement.

For the prism lens implantation surgery, the vector-injected holes were first re-exposed, and the locations for lens placement were determined based on the vascularity and fluorescence of the cortical surface observed with a handheld fluorescence microscope (Dino-Lite, Opto Science, Tokyo, Japan). The prism lens was inserted into the cortex with the observation plane facing the dorsal side. Prior to lens insertion, a scalpel blade was used to make an incision on the cortical surface to ensure smooth insertion of the lens. The scalpel blade (No. 11) was pierced to the required depth (2 mm) and moved 1 mm parallel to the blade.
The blade was then withdrawn and reinserted facing the opposite direction to make another incision. The prism lens was inserted in line with this incision and advanced, by repeatedly moving 1 mm forward and 0.5 mm backward, until the upper edge of the observation plane was completely hidden within the cortical tissue. After the prism lens was advanced to a sufficient depth, an artificial dura mater was placed around the lens, medical silicone adhesive (Kwik-Sil, World Precision Instruments, FL, USA) was applied, and the periphery of the lens was secured with dental cement. The top of the lens was covered with a small piece of plastic paraffin film and sealed with another silicone adhesive (Kwik-Cast, World Precision Instruments, FL, USA). Finally, a chamber with a lid was attached with dental cement to protect the lens from being touched by the monkeys.

One month after lens implantation, the final surgery was performed to place the baseplate. After the angle was adjusted, the baseplate was slowly moved closer to the lens while observing through the miniaturized microscope. The baseplate was fixed at the position where the blood vessels observed through the lens were most clearly visible. The dental cement securing the baseplate was mixed with black acrylic paint to block light. Imaging sessions, conducted while the monkeys performed the fixation task, began after a sufficient recovery period.

### Behavioral task

The monkeys were trained to perform a fixation task with Gabor patch presentation in a dark, sound-attenuated room. The monkeys were seated in a primate chair in front of a 20-inch LCD monitor (2005FPW, Dell, TX, USA) with their heads fixed. The distance between their eyes and the display was 57 cm. A trial started with the onset of the central fixation point (FP), a white circle 1.0° in diameter on the monitor. Then, 500 ms after the monkeys started fixating on the FP, a Gabor patch was presented around the FP for 300 ms. If the monkeys kept their gaze within a 3° fixation window around the FP, a success tone (1000 Hz) and water (0.3 mL) were given; otherwise, no reward or an error tone (200 Hz) was delivered. After the inter-trial interval (4000 ± 1000 ms), the next trial started. During the sessions to identify the receptive field, Gabor patches were presented in sequence at viewing angles of 5°, 7°, and 9° from the FP, at azimuths of 1, 2, 3, 4, and 5 o'clock (when recording from the left hemisphere) or 11, 10, 9, 8, and 7 o'clock (when recording from the right hemisphere). At each position, a Gabor patch (3° in diameter, 1 cycle/degree) was presented sequentially at −90°, −60°, −30°, 0°, 30°, and 60°. Once the receptive field was identified, Gabor patches were presented at the identified receptive field position in the subsequent recording sessions. The task was controlled using the TEMPO system (Reflective Computing, MO, USA). The visual stimulus presentation on the monitor was programmed with a custom-made program using an application programming interface (OpenGL).

### Data acquisition

Calcium imaging data were acquired using a miniaturized fluorescence microscope (nVISTA 3.0, Inscopix, CA, USA) with 0.2–0.5 mW/mm² of irradiance at 6–10 fps.
Eye movements were monitored using an infrared camera system at a sampling rate of 240 Hz (Eye-Trac 6000, Applied Science Laboratories, MA, USA).

### Data analysis

#### ROI detection

The recorded imaging data were processed using the Inscopix Data Processing Software (Inscopix, CA, USA). For pre-processing, the recorded images were spatially downsampled by a factor of four, spatially filtered, and motion corrected. To extract cells showing fluorescence changes as ROIs, we used constrained nonnegative matrix factorization for microendoscopic data (CNMF-E) [30], a cell detection method based on a constrained matrix factorization approach and optimized for single-photon data obtained by microendoscopy. The resulting cell candidates were narrowed down by checking them against the video data. The resulting normalized time-series data of the ROIs were then analyzed using custom code in MATLAB (R2017b, MathWorks, MA, USA).

#### Retinotopy and orientation selectivity

To examine the receptive fields of the extracted cell populations, we used the data obtained by presenting six orientations of Gabor patches at each of the 15 peripheral stimulus presentation locations (12 trials per location over two rounds of stimulus presentation) contralateral to the recorded hemisphere. Because the cell populations observed from each lens showed nearly identical receptive fields, we used time-series data from representative ROIs that responded to many of the six orientations. The obtained fluorescence time series were normalized using the mean and standard deviation of the entire recording session. Only data from successful trials were used in this and subsequent analyses. Data from each trial were aligned to the stimulus onset and combined into data matrices for each stimulus position. The average intensity from −500 to 0 ms before stimulus onset was subtracted to adjust the baseline. The Tukey–Kramer test was used for multiple comparisons of the fluorescent signals averaged over the post-stimulus period (500–1500 ms). The position whose stimulus response was the strongest among all positions and significantly different from those of the other, non-neighboring positions was identified as the receptive field. To visualize the location of the receptive field, we performed 3D fitting using a linear model in MATLAB's curve fitting tool.

Next, we used the data obtained by presenting Gabor patches (20 trials per orientation) at the identified receptive fields to examine the orientation selectivity of the cell population. To investigate whether a detected putative neuron responded to the presented stimulus, we used the data from the pre-stimulus period (−1000 to 0 ms from stimulus onset) as a baseline and compared it with the mean signal intensity in the post-stimulus period (500–1500 ms) separately for each orientation. Multiple comparisons were corrected using Bonferroni's method, and a putative neuron was considered stimulus-responsive if its response was significantly different from baseline (α = 0.05).
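The analyses in this paper were implemented in MATLAB; as an illustration only, here is a minimal Python sketch of the responsiveness test just described. The exact test statistic is not specified above, so the paired t-test, the array names, and the made-up data are assumptions.

```python
import numpy as np
from scipy import stats

# Hypothetical data: trials x orientations arrays of mean signal intensity
# in the baseline (-1000 to 0 ms) and post-stimulus (500-1500 ms) windows.
rng = np.random.default_rng(0)
n_trials, n_orientations = 20, 6
baseline = rng.normal(0.0, 1.0, (n_trials, n_orientations))
response = rng.normal(0.5, 1.0, (n_trials, n_orientations))

alpha = 0.05
alpha_bonf = alpha / n_orientations  # Bonferroni correction over orientations

# Compare baseline vs. post-stimulus separately for each orientation
p_values = np.array([
    stats.ttest_rel(response[:, k], baseline[:, k]).pvalue
    for k in range(n_orientations)
])

# The ROI counts as stimulus-responsive if any orientation survives correction
is_responsive = np.any(p_values < alpha_bonf)
print(p_values, is_responsive)
```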
To calculate the OSI, the peak signal intensity after stimulus presentation (500–1500 ms) was used in the following function:

$$
\mathrm{OSI}=\frac{\sqrt{\left(\sum_{\theta} R(\theta)\sin(2\theta)\right)^{2}+\left(\sum_{\theta} R(\theta)\cos(2\theta)\right)^{2}}}{\sum_{\theta} R(\theta)}
$$

where θ represents the stimulus orientation and R(θ) represents the intensity of the evoked response at θ. Because the signal can take negative values, we added a constant to all response values to set the minimum response to zero [3]. We also calculated a negative OSI value for each ROI, because some ROIs respond negatively to stimuli. The absolute values of the positive and negative OSIs were compared, and the greater value was taken as the OSI for that ROI. The preferred orientation of each ROI was determined as the orientation of the vector sum calculated from the peak signal intensities after stimulus presentation (500–1500 ms).
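As a concrete reading of the formula above, the following Python sketch computes the OSI and the vector-sum preferred orientation. The example responses are hypothetical, and the baseline shift that sets the minimum response to zero follows the description above.

```python
import numpy as np

def osi_and_preferred(R, theta_deg):
    """OSI and vector-sum preferred orientation from responses R(theta)."""
    theta = np.deg2rad(theta_deg)
    R = R - R.min()                      # shift so the minimum response is zero
    vs = np.sum(R * np.sin(2 * theta))   # vector-sum components in 2*theta space
    vc = np.sum(R * np.cos(2 * theta))
    osi = np.sqrt(vs**2 + vc**2) / np.sum(R)
    # angle of the vector sum, mapped back from 2*theta to an orientation
    pref = np.rad2deg(np.arctan2(vs, vc)) / 2
    return osi, pref

theta_deg = np.array([-90, -60, -30, 0, 30, 60])  # orientations used in the task
R = np.array([0.1, 0.3, 0.9, 1.5, 0.8, 0.2])      # hypothetical peak responses
print(osi_and_preferred(R, theta_deg))
```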
#### Decoding

We used the time-series data of the detected ROIs to decode the orientation of the Gabor patch presented in each trial. A multi-class error-correcting output code model consisting of binary support vector machines was trained to decode the six orientations, using the fitcecoc function in MATLAB. The trained model was used to obtain error estimates by tenfold cross-validation, and the decoding performance was calculated from the classification loss. To calculate the time course of the decoding performance, we applied a 500-ms sliding window moving in 166-ms steps to the time-series data aligned to the stimulus onset. As a control, we used the same procedure to obtain the decoding performance with shuffled orientation labels, repeating the decoding procedure 100 times at each step. For each time bin, two-tailed two-sample t-tests (α = 0.05) were used to determine whether the obtained decoding performance differed significantly from that of the shuffled data. Next, to examine how decoding performance depends on the number of ROIs used, we either randomly selected n ROIs from the pool of ROIs and decoded the stimulus orientation using them, or selected n ROIs in order of their OSI. As a control, we also obtained the decoding performance for the randomly selected n ROIs with shuffled orientation labels. For the random selection and shuffling processes, we repeated the decoding procedure 100 times for each number of ROIs. In this analysis, we used the data from 500 to 1500 ms after stimulus onset.

#### Cell tracking across sessions

To track cells between different imaging data sets, we used a cell tracking method with a probabilistic model that uses the centroid distance and spatial correlation between ROI footprints [34]. The fields of view obtained in the two imaging sessions were first aligned, and the centroid distance between the centers of gravity of the ROI footprints and the Pearson correlation of the footprint shapes were calculated. Probabilistic models for these features were then fitted in a Bayesian framework, and ROI tracking scores were evaluated using a mixture model. ROI pairs scoring above a threshold (default value: P_same = 0.5) were registered as identical.

To examine the similarity of the stimulus responses, Pearson correlations of the stimulus responses (0–2000 ms after stimulus onset) at each orientation were calculated, and their average over orientations was compared with that of shuffled pairs (10 times the number of tracked pairs) using a two-sample t-test. Pearson correlations were also calculated for the OSI values of the tracked pairs. In addition, we calculated the difference in preferred orientation between the two sessions for each tracked ROI pair. For comparison, the difference in preferred orientation was also calculated for shuffled pairs; this shuffling was repeated 10,000 times. Histograms of each difference measure were obtained with a bin width of 10°; for the shuffled data, the counts were divided by 10,000. Finally, we tested whether the distribution obtained from the actual ROI pairs differed from the uniform and exponential distributions using the two-sample Kolmogorov–Smirnov test (the former was set using theoretical values, and the latter was set by fitting the actual data to an exponential curve with the fit function in MATLAB).
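A minimal Python sketch of the distribution test just described, assuming a vector of preferred-orientation differences; the data, sample sizes, and fitting choices here are hypothetical stand-ins for the MATLAB analysis above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical preferred-orientation differences (degrees) for tracked pairs
diffs = np.abs(rng.exponential(scale=15.0, size=33))
diffs = diffs[diffs <= 90]

# Uniform reference sample over the possible range of differences (0-90 deg)
uniform_ref = rng.uniform(0, 90, size=10_000)

# Exponential reference fitted to the observed differences (location fixed at 0)
loc, scale = stats.expon.fit(diffs, floc=0)
expon_ref = stats.expon.rvs(loc=loc, scale=scale, size=10_000, random_state=2)

print(stats.ks_2samp(diffs, uniform_ref))  # expect a small p-value
print(stats.ks_2samp(diffs, expon_ref))    # expect a large p-value
```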
### Immunohistochemistry

Following the recording sessions, the monkeys were deeply anesthetized with an intravenous injection of sodium pentobarbital (70 mg/kg, i.v.) and transcardially perfused with 0.01 M PBS followed by 4% paraformaldehyde in 0.1 M phosphate buffer (pH 7.4). The brains were extracted, post-fixed in 4% paraformaldehyde overnight at 4 °C, and cryoprotected with increasing gradients of sucrose (5%, 10%, and 20%). The frozen brains were then sliced into 40-µm-thick coronal sections using a cryostat.

One in four successive sections was immunohistochemically stained. Free-floating sections were washed with PBS and permeabilized in PBS containing 0.3% Triton X-100 (PBST). After blocking for 1 h in 3% normal goat serum in PBST containing 1% bovine serum albumin (BSA-PBST), sections were incubated for two nights at 4 °C with a mouse anti-GFP antibody (1:250, Millipore, MA, USA) in BSA-PBST. After washing in PBST, sections were incubated for 4 h at 20 °C with Alexa-488-labeled goat anti-mouse IgG (1:1000, Molecular Probes, OR, USA) in BSA-PBST. After washing in PBS, the sections were mounted on glass slides with Fluoromount (Diagnostic BioSystems, CA, USA). GCaMP6s fluorescence images were acquired using a camera lucida attached to an epifluorescence microscope (BX51, Olympus, Tokyo, Japan) with 10×, 20×, and 40× objective lenses.

### Statistics

No statistical method was used to predetermine sample sizes, but our sample sizes were similar to those of previous V1 imaging studies using macaque monkeys [7,13]. Statistical analyses were conducted using MATLAB (R2017b). The Tukey–Kramer test was performed on the responses to visual stimuli presented at 15 locations in the visual field contralateral to the recording hemisphere to locate the receptive fields. For the signal time series before and after stimulus onset, two-tailed two-sample t-tests were used to compare the decoding performance of the original data with that of the shuffled data. Correlation coefficients of stimulus responses between tracked ROI pairs and those between shuffled pairs were also compared using two-tailed two-sample t-tests. Correlations of OSI values between the tracked ROI pairs were calculated using the Pearson correlation. The two-sample Kolmogorov–Smirnov test was performed to verify whether the distributions of the differences in preferred orientation for ROI pairs tracked across different recordings deviated from the uniform and exponential distributions.

## Data availability

The data used in the analysis of this study are available from the corresponding author upon reasonable request. A reporting summary of this article is available in the Supplementary Information.

## Code availability

The MATLAB code used in the analysis of this study is available from the corresponding author upon reasonable request.

## References

1. Nauhaus, I., Nielsen, K. J., Disney, A. A. & Callaway, E. M. Orthogonal micro-organization of orientation and spatial frequency in primate primary visual cortex. Nat. Neurosci. 15, 1683–1690 (2012).
2. Nauhaus, I., Nielsen, K. J. & Callaway, E. M. Efficient receptive field tiling in primate V1. Neuron 91, 893–904 (2016).
3. Ikezoe, K., Mori, Y., Kitamura, K., Tamura, H. & Fujita, I. Relationship between the local structure of orientation map and the strength of orientation tuning of neurons in monkey V1: A 2-photon calcium imaging study. J. Neurosci. 33, 16818–16827 (2013).
4. Ikezoe, K., Amano, M., Nishimoto, S. & Fujita, I. Mapping stimulus feature selectivity in macaque V1 by two-photon Ca2+ imaging: Encoding-model analysis of fluorescence responses to natural movies. Neuroimage 180, 312–323 (2018).
5. Heider, B., Nathanson, J. L., Isacoff, E. Y., Callaway, E. M. & Siegel, R. M. Two-photon imaging of calcium in virally transfected striate cortical neurons of behaving monkey. PLoS ONE 5, 1–13 (2010).
6. Seidemann, E. et al. Calcium imaging with genetically encoded indicators in behaving primates. Elife 5, 1–19 (2016).
7. Li, M., Liu, F., Jiang, H., Lee, T. S. & Tang, S. Long-term two-photon imaging in awake macaque monkey. Neuron 93, 1049–1057.e3 (2017).
8. Ju, N., Jiang, R., Macknik, S. L., Martinez-Conde, S. & Tang, S. Long-term all-optical interrogation of cortical neurons in awake-behaving non-human primates. PLoS Biol. 16, e2005839 (2018).
9. Tang, S. et al. Complex pattern selectivity in macaque primary visual cortex revealed by large-scale two-photon imaging. Curr. Biol. 28, 38–48.e3 (2018).
10. Tang, S. et al. Large-scale two-photon imaging revealed super-sparse population codes in V1 superficial layer of awake monkeys. Elife 7, e33370 (2018).
11. Liu, Y. et al. Hierarchical representation for chromatic processing across macaque V1, V2, and V4. Neuron 108, 538–550.e5 (2020).
12. Garg, A. K., Li, P., Rashid, M. S. & Callaway, E. M. Color and orientation are jointly coded and spatially organized in primate primary visual cortex. Science 364, 1275–1279 (2019).
13. Ju, N.-S., Guan, S.-C., Tao, L., Tang, S.-M. & Yu, C. Orientation tuning and end-stopping in macaque V1 studied with two-photon calcium imaging. Cereb. Cortex https://doi.org/10.1093/cercor/bhaa346 (2020).
14. Tang, R. et al. Curvature-processing domains in primate V4. Elife 9, 1–21 (2020).
15. Ziv, Y. et al. Long-term dynamics of CA1 hippocampal place codes. Nat. Neurosci. 16, 264–266 (2013).
16. Jennings, J. H. et al. Visualizing hypothalamic network dynamics for appetitive and consummatory behaviors. Cell 160, 516–527 (2015).
17. Pinto, L. & Dan, Y. Cell-type-specific activity in prefrontal cortex during goal-directed behavior. Neuron 87, 437–450 (2015).
18. Vander Weele, C. M. et al. Dopamine enhances signal-to-noise ratio in cortical-brainstem encoding of aversive stimuli. Nature 563, 397–401 (2018).
19. Cai, D. J. et al. A shared neural ensemble links distinct contextual memories encoded close in time. Nature 534, 115–118 (2016).
20. Ghosh, K. K. et al. Miniaturized integration of a fluorescence microscope. Nat. Methods 8, 871–878 (2011).
21. Yoshizawa, T., Ito, M. & Doya, K. Reward-predictive neural activities in striatal striosome compartments. eNeuro 5, 1–14 (2018).
22. Kondo, T. et al. Calcium transient dynamics of neural ensembles in the primary motor cortex of naturally behaving monkeys. Cell Rep. 24, 2191–2195.e4 (2018).
23. Schwarz, D. A. et al. Chronic, wireless recordings of large-scale brain activity in freely moving rhesus monkeys. Nat. Methods 11, 670–676 (2014).
24. Kimura, K. et al. A mosaic adeno-associated virus vector as a versatile tool that exhibits high levels of transgene expression and neuron specificity in primate brain. bioRxiv 2021.07.18.452859 (2021).
25. Chen, T. W. et al. Ultrasensitive fluorescent proteins for imaging neuronal activity. Nature 499, 295–300 (2013).
26. Blasdel, G. & Campbell, D. Functional retinotopy of monkey visual cortex. J. Neurosci. 21, 8286–8301 (2001).
27. Van Essen, D. C., Newsome, W. T. & Maunsell, J. H. R. The visual field representation in striate cortex of the macaque monkey: Asymmetries, anisotropies, and individual variability. Vis. Res. 24, 429–448 (1984).
28. Tootell, R. B. H., Silverman, M. S., Hamilton, S. L., De Valois, R. L. & Switkes, E. Functional anatomy of macaque striate cortex. III. Color. J. Neurosci. 8, 1569–1593 (1988).
29. Daniel, P. M. & Whitteridge, D. The representation of the visual field on the cerebral cortex in monkeys. J. Physiol. 159, 203–221 (1961).
30. Zhou, P. et al. Efficient and accurate extraction of in vivo calcium signals from microendoscopic video data. Elife 7, 1–37 (2018).
31. Rubin, A., Geva, N., Sheintuch, L. & Ziv, Y. Hippocampal ensemble dynamics timestamp events in long-term memory. Elife 4, 1–16 (2015).
32. Liberti, W. A. et al. Unstable neurons underlie a stable learned behavior. Nat. Neurosci. 19, 1665–1671 (2016).
33. Kitamura, T. et al. Engrams and circuits crucial for systems consolidation of a memory. Science 356, 73–78 (2017).
34. Sheintuch, L. et al. Tracking the same neurons across multiple days in Ca2+ imaging data. Cell Rep. 21, 1102–1115 (2017).
35. Hubel, D. H. & Wiesel, T. N. Sequence regularity and geometry of orientation columns in the monkey striate cortex. J. Comp. Neurol. 158, 267–293 (1974).
36. Ringach, D. L., Shapley, R. M. & Hawken, M. J. Orientation selectivity in macaque V1: Diversity and laminar dependence. J. Neurosci. 22, 5639–5651 (2002).
37. Gur, M., Kagan, I. & Snodderly, D. M. Orientation and direction selectivity of neurons in V1 of alert monkeys: Functional relationships and laminar distributions. Cereb. Cortex 15, 1207–1221 (2005).
38. Xing, D., Yeh, C. I., Burns, S. & Shapley, R. M. Laminar analysis of visually evoked activity in the primary visual cortex. Proc. Natl. Acad. Sci. USA 109, 13871–13876 (2012).
39. Wang, T. et al. Laminar subnetworks of response suppression in macaque primary visual cortex. J. Neurosci. 40, 7436–7450 (2020).

## Acknowledgements

The authors are grateful to Jonathan Nassi for technical advice, and to Maki Fujiwara and Mayuko Nakano for technical assistance. This paper was proofread in English by a native English speaker from Editage (https://www.editage.jp). This work was supported by a Grant-in-Aid for Scientific Research on Innovative Areas (4303 and 4805), by JSPS KAKENHI Grant Numbers JP18H03662 and 19H04984 from the Ministry of Education, Culture, Sports, Science and Technology of Japan, and by Grant Numbers JP20dm0307021 and JP18dm0207003 from the Japan Agency for Medical Research and Development.

## Author information

### Contributions

M.O., Y.R.T., T.K., K.N., and M.S. developed the surgical and imaging procedures. M.O., J.J., and T.W.Y. performed the animal surgeries. K.I. and M.T. produced the viral vector. M.O. designed and performed the animal experiments. M.O. analyzed the data. M.O. and K.I. wrote the manuscript. All authors commented on the manuscript.

### Corresponding author

Correspondence to Masamichi Sakagami.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Supplementary Information

Supplementary Video 1.

Supplementary Video 2.

Supplementary Video 3.

Oguchi, M., Jiasen, J., Yoshioka, T.W. et al. Microendoscopic calcium imaging of the primary visual cortex of behaving macaques. Sci. Rep. 11, 17021 (2021). https://doi.org/10.1038/s41598-021-96532-z
Thurman Alexander Green II (Longview, August 12, 1940 – Los Angeles, June 19, 1997) was an American jazz trombonist, composer, and arranger.
Biography
Green learned to play the trombone in junior high school and moved to Los Angeles in 1958 to study at Compton College. From 1961 to 1965 he served as an enlisted bandsman in the Navy. In 1962 he attended the Navy School of Music in Washington, D.C., where he met Hamiet Bluiett. He then returned to Los Angeles, where he took part as a studio musician in numerous recordings (with Benny Golson and Gene Harris, among others) and film soundtracks. In 1981 he received a composition grant from the National Endowment for the Arts. After earning a Bachelor of Arts at California State University, Dominguez Hills, he completed his music studies in 1986 with a Master of Arts in composition. The symphony he composed for the degree was performed with a philharmonic orchestra.
In 1987 he went on a European tour with Mercer Ellington. He subsequently performed with Benny Carter in 1988/1989 and in Japan in 1991. During the 1990s he also worked in the bands of Gerald Wilson, Louie Bellson, the Clayton-Hamilton Big Band, Bill Berry/Frank Capp, and Horace Tapscott. In 1991 he was an ensemble player on the Miles Davis album Dingo. Between 1988 and 1992 he co-led a quintet with Buster Cooper, with which he had appeared as a guest at the 1987 Monterey Jazz Festival. His trio album Cross Current appeared in 1989.
In 1994 the album Dance of the Night Creatures followed on Mapleshade Records, featuring contributions from Hamiet Bluiett, John Hicks, and Walter Booker, among others. In 1995 he formed the group BoneSoir with his friends Maurice Spears, Garnett Brown, and George Bohanon. After his sudden death in 1997, following a European tour, Phil Ranelin and other colleagues established the Thurman Green Scholarship Festival.
Green's compositional work ranged from jazz to classical music. His best-known jazz compositions include Cross Current, One for Lately, and Dance of the Night Creatures. Scott Yanow emphasized Green's musical range, which extended from big band music to free jazz. Thurman Green was 56 years old when he died.
Literature
Richard Cook, Brian Morton: The Penguin Guide to Jazz on CD. 6th Edition. Penguin, London 2002, ISBN 0-14-051521-6.
Leonard Feather, Ira Gitler: The Biographical Encyclopedia of Jazz. Oxford University Press, New York 1999, ISBN 0-19-532000-X.
{"url":"https:\/\/python.quantecon.org\/svd_intro.html","text":"# 7. Singular Value Decomposition (SVD)\u00b6\n\nIn addition to regular packages contained in Anaconda by default, this lecture also requires:\n\n!pip install quandl\n\nCollecting quandl\n\n Downloading Quandl-3.7.0-py2.py3-none-any.whl (26 kB)\n\nRequirement already satisfied: pandas>=0.14 in \/__w\/lecture-python.myst\/lecture-python.myst\/3\/envs\/quantecon\/lib\/python3.9\/site-packages (from quandl) (1.4.4)\nRequirement already satisfied: numpy>=1.8 in \/__w\/lecture-python.myst\/lecture-python.myst\/3\/envs\/quantecon\/lib\/python3.9\/site-packages (from quandl) (1.23.5)\nRequirement already satisfied: inflection>=0.3.1 in \/__w\/lecture-python.myst\/lecture-python.myst\/3\/envs\/quantecon\/lib\/python3.9\/site-packages (from quandl) (0.5.1)\nRequirement already satisfied: python-dateutil in \/__w\/lecture-python.myst\/lecture-python.myst\/3\/envs\/quantecon\/lib\/python3.9\/site-packages (from quandl) (2.8.2)\nRequirement already satisfied: six in \/__w\/lecture-python.myst\/lecture-python.myst\/3\/envs\/quantecon\/lib\/python3.9\/site-packages (from quandl) (1.16.0)\n\nCollecting more-itertools\n?25l \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 0.0\/52.8 kB ? eta -:--:--\n\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 52.8\/52.8 kB 10.9 MB\/s eta 0:00:00\n?25hRequirement already satisfied: requests>=2.7.0 in \/__w\/lecture-python.myst\/lecture-python.myst\/3\/envs\/quantecon\/lib\/python3.9\/site-packages (from quandl) (2.28.1)\nRequirement already satisfied: pytz>=2020.1 in \/__w\/lecture-python.myst\/lecture-python.myst\/3\/envs\/quantecon\/lib\/python3.9\/site-packages (from pandas>=0.14->quandl) (2022.7)\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in \/__w\/lecture-python.myst\/lecture-python.myst\/3\/envs\/quantecon\/lib\/python3.9\/site-packages (from requests>=2.7.0->quandl) (1.26.11)\nRequirement already satisfied: charset-normalizer<3,>=2 in \/__w\/lecture-python.myst\/lecture-python.myst\/3\/envs\/quantecon\/lib\/python3.9\/site-packages (from requests>=2.7.0->quandl) (2.0.4)\n\nRequirement already satisfied: idna<4,>=2.5 in \/__w\/lecture-python.myst\/lecture-python.myst\/3\/envs\/quantecon\/lib\/python3.9\/site-packages (from requests>=2.7.0->quandl) (3.3)\nRequirement already satisfied: certifi>=2017.4.17 in \/__w\/lecture-python.myst\/lecture-python.myst\/3\/envs\/quantecon\/lib\/python3.9\/site-packages (from requests>=2.7.0->quandl) (2022.9.14)\n\nInstalling collected packages: more-itertools, quandl\n\nSuccessfully installed more-itertools-9.0.0 quandl-3.7.0\nWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https:\/\/pip.pypa.io\/warnings\/venv\n\n\nimport numpy as np\nimport numpy.linalg as LA\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport quandl as ql\nimport pandas as pd\n\n\n## 7.1. 
## 7.1. Overview

The singular value decomposition (SVD) is a work-horse in applications of least squares projection that form foundations for many statistical and machine learning methods.

After defining the SVD, we'll describe how it connects to

- the four fundamental spaces of linear algebra
- under-determined and over-determined least squares regressions
- principal components analysis (PCA)

We'll also describe the essential role that the SVD plays in

- dynamic mode decomposition (DMD)

Like principal components analysis (PCA), DMD can be thought of as a data-reduction procedure that represents salient patterns by projecting data onto a limited set of factors.

## 7.2. The Setting

Let $X$ be an $m \times n$ matrix of rank $p$.

Necessarily, $p \leq \min(m,n)$.

In much of this lecture, we'll think of $X$ as a matrix of data in which

- each column is an individual – a time period or person, depending on the application
- each row is a random variable describing an attribute of a time period or a person, depending on the application

We'll be interested in two situations:

- A short and fat case in which $m << n$, so that there are many more columns (individuals) than rows (attributes).
- A tall and skinny case in which $m >> n$, so that there are many more rows (attributes) than columns (individuals).

We'll apply a singular value decomposition of $X$ in both situations.

In the $m << n$ case in which there are many more individuals $n$ than attributes $m$, we can calculate sample moments of a joint distribution by taking averages across observations of functions of the observations (see the short sketch at the end of this section).

In this $m << n$ case, we'll look for patterns by using a singular value decomposition to do a principal components analysis (PCA).

In the $m >> n$ case in which there are many more attributes $m$ than individuals $n$, and when we are in a time-series setting in which $n$ equals the number of time periods covered in the data set $X$, we'll proceed in a different way.

We'll again use a singular value decomposition, but now to construct a dynamic mode decomposition (DMD).
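Here is the sketch referred to above: a minimal illustration, with made-up data, of the two shapes of $X$ and of estimating sample moments by averaging across columns in the short-and-fat case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Short and fat: m = 3 attributes, n = 1000 individuals
X_fat = rng.normal(size=(3, 1000))
print(np.linalg.matrix_rank(X_fat))           # rank p <= min(m, n) = 3

# Sample moments computed by averaging across observations (columns)
mean_hat = X_fat.mean(axis=1)                 # sample means of the 3 attributes
cov_hat = (X_fat @ X_fat.T) / X_fat.shape[1]  # sample second-moment matrix

# Tall and skinny: m = 1000 attributes, n = 3 time periods
X_tall = rng.normal(size=(1000, 3))
print(np.linalg.matrix_rank(X_tall))          # rank p <= min(m, n) = 3
```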
## 7.3. Singular Value Decomposition

A singular value decomposition of an $m \times n$ matrix $X$ of rank $p \leq \min(m,n)$ is

$$
X = U \Sigma V^T
\tag{7.1}
$$

where

$$
\begin{aligned}
UU^T & = I & \quad U^T U = I \cr
VV^T & = I & \quad V^T V = I
\end{aligned}
$$

and

- $U$ is an $m \times m$ orthogonal matrix of left singular vectors of $X$
- Columns of $U$ are eigenvectors of $X X^T$
- $V$ is an $n \times n$ orthogonal matrix of right singular vectors of $X$
- Columns of $V$ are eigenvectors of $X^T X$
- $\Sigma$ is an $m \times n$ matrix in which the first $p$ places on its main diagonal are positive numbers $\sigma_1, \sigma_2, \ldots, \sigma_p$ called singular values; the remaining entries of $\Sigma$ are all zero
- The $p$ singular values are positive square roots of the eigenvalues of the $m \times m$ matrix $X X^T$ and also of the $n \times n$ matrix $X^T X$
- We adopt the convention that when $U$ is a complex valued matrix, $U^T$ denotes the conjugate-transpose or Hermitian-transpose of $U$, meaning that $U_{ij}^T$ is the complex conjugate of $U_{ji}$
- Similarly, when $V$ is a complex valued matrix, $V^T$ denotes the conjugate-transpose or Hermitian-transpose of $V$

The matrices $U, \Sigma, V$ entail linear transformations that reshape vectors in the following ways:

- multiplying vectors by the unitary matrices $U$ and $V$ rotates them, but leaves angles between vectors and lengths of vectors unchanged
- multiplying vectors by the diagonal matrix $\Sigma$ leaves angles between vectors unchanged but rescales vectors

Thus, representation (7.1) asserts that multiplying an $n \times 1$ vector $y$ by the $m \times n$ matrix $X$ amounts to performing the following three multiplications of $y$ sequentially:

- rotating $y$ by computing $V^T y$
- rescaling $V^T y$ by multiplying it by $\Sigma$
- rotating $\Sigma V^T y$ by multiplying it by $U$

This structure of the $m \times n$ matrix $X$ opens the door to constructing systems of data encoders and decoders.

Thus,

- $V^T y$ is an encoder
- $\Sigma$ is an operator to be applied to the encoded data
- $U$ is a decoder to be applied to the output from applying operator $\Sigma$ to the encoded data

We'll apply this circle of ideas later in this lecture when we study Dynamic Mode Decomposition. (A small numerical check of the rotate, rescale, rotate sequence appears below.)

What we have described above is called a full SVD.

In a full SVD, the shapes of $U$, $\Sigma$, and $V$ are $\left(m, m\right)$, $\left(m, n\right)$, $\left(n, n\right)$, respectively.

Later we'll also describe an economy or reduced SVD.

Before we study a reduced SVD we'll say a little more about properties of a full SVD.
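The following sketch, with an arbitrary random matrix, checks the three-step reading of (7.1): applying $X$ to a vector $y$ gives the same result as encoding with $V^T$, scaling with $\Sigma$, and decoding with $U$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
X = rng.normal(size=(m, n))
y = rng.normal(size=n)

U, s, Vh = np.linalg.svd(X, full_matrices=True)  # note: numpy returns V^T
Sigma = np.zeros((m, n))
Sigma[:len(s), :len(s)] = np.diag(s)

step1 = Vh @ y          # rotate: V^T y
step2 = Sigma @ step1   # rescale by the singular values
step3 = U @ step2       # rotate back into the column space of X

print(np.allclose(X @ y, step3))  # True
```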
## 7.4. Four Fundamental Subspaces

Let ${\mathcal C}$ denote a column space, ${\mathcal N}$ denote a null space, and ${\mathcal R}$ denote a row space.

Let's start by recalling the four fundamental subspaces of an $m \times n$ matrix $X$ of rank $p$.

- The column space of $X$, denoted ${\mathcal C}(X)$, is the span of the columns of $X$, i.e., all vectors $y$ that can be written as linear combinations of columns of $X$. Its dimension is $p$.
- The null space of $X$, denoted ${\mathcal N}(X)$, consists of all vectors $y$ that satisfy $X y = 0$. Its dimension is $n-p$.
- The row space of $X$, denoted ${\mathcal R}(X)$, is the column space of $X^T$. It consists of all vectors $z$ that can be written as linear combinations of rows of $X$. Its dimension is $p$.
- The left null space of $X$, denoted ${\mathcal N}(X^T)$, consists of all vectors $z$ such that $X^T z = 0$. Its dimension is $m-p$.

For a full SVD of a matrix $X$, the matrix $U$ of left singular vectors and the matrix $V$ of right singular vectors contain orthogonal bases for all four subspaces.

They form two pairs of orthogonal subspaces that we'll describe now.

Let $u_i, i = 1, \ldots, m$ be the $m$ column vectors of $U$ and let $v_i, i = 1, \ldots, n$ be the $n$ column vectors of $V$.

Let's write the full SVD of $X$ as

$$
X = \begin{bmatrix} U_L & U_R \end{bmatrix} \begin{bmatrix} \Sigma_p & 0 \cr 0 & 0 \end{bmatrix} \begin{bmatrix} V_L & V_R \end{bmatrix}^T
\tag{7.2}
$$

where $\Sigma_p$ is a $p \times p$ diagonal matrix with the $p$ singular values on the diagonal and

$$
\begin{aligned}
U_L & = \begin{bmatrix} u_1 & \cdots & u_p \end{bmatrix}, \quad U_R = \begin{bmatrix} u_{p+1} & \cdots & u_m \end{bmatrix} \cr
V_L & = \begin{bmatrix} v_1 & \cdots & v_p \end{bmatrix}, \quad V_R = \begin{bmatrix} v_{p+1} & \cdots & v_n \end{bmatrix}
\end{aligned}
$$

Representation (7.2) implies that

$$
X \begin{bmatrix} V_L & V_R \end{bmatrix} = \begin{bmatrix} U_L & U_R \end{bmatrix} \begin{bmatrix} \Sigma_p & 0 \cr 0 & 0 \end{bmatrix}
$$

or

$$
\begin{aligned}
X V_L & = U_L \Sigma_p \cr
X V_R & = 0
\end{aligned}
\tag{7.3}
$$

or

$$
\begin{aligned}
X v_i & = \sigma_i u_i , \quad i = 1, \ldots, p \cr
X v_i & = 0 , \quad i = p+1, \ldots, n
\end{aligned}
\tag{7.4}
$$

Equations (7.4) tell how the transformation $X$ maps a pair of orthonormal vectors $v_i, v_j$ for $i$ and $j$ both less than or equal to the rank $p$ of $X$ into a pair of orthonormal vectors $u_i, u_j$.

Equations (7.3) assert that

$$
\begin{aligned}
{\mathcal C}(X) & = {\mathcal C}(U_L) \cr
{\mathcal N}(X) & = {\mathcal C}(V_R)
\end{aligned}
$$

Taking transposes on both sides of representation (7.2) implies

$$
X^T \begin{bmatrix} U_L & U_R \end{bmatrix} = \begin{bmatrix} V_L & V_R \end{bmatrix} \begin{bmatrix} \Sigma_p & 0 \cr 0 & 0 \end{bmatrix}
$$

or

$$
\begin{aligned}
X^T U_L & = V_L \Sigma_p \cr
X^T U_R & = 0
\end{aligned}
\tag{7.5}
$$

or

$$
\begin{aligned}
X^T u_i & = \sigma_i v_i, \quad i = 1, \ldots, p \cr
X^T u_i & = 0, \quad i = p+1, \ldots, m
\end{aligned}
\tag{7.6}
$$

Notice how equations (7.6) assert that the transformation $X^T$ maps a pair of distinct orthonormal vectors $u_i, u_j$ for $i$ and $j$ both less than or equal to the rank $p$ of $X$ into a pair of distinct orthonormal vectors $v_i, v_j$.

Equations (7.5) assert that

$$
\begin{aligned}
{\mathcal R}(X) & \equiv {\mathcal C}(X^T) = {\mathcal C}(V_L) \cr
{\mathcal N}(X^T) & = {\mathcal C}(U_R)
\end{aligned}
$$

Thus, taken together, the systems of equations (7.3) and (7.5) describe the four fundamental subspaces of $X$ in the following ways:

$$
\begin{aligned}
{\mathcal C}(X) & = {\mathcal C}(U_L) \cr
{\mathcal N}(X^T) & = {\mathcal C}(U_R) \cr
{\mathcal R}(X) & \equiv {\mathcal C}(X^T) = {\mathcal C}(V_L) \cr
{\mathcal N}(X) & = {\mathcal C}(V_R)
\end{aligned}
\tag{7.7}
$$
Since $U$ and $V$ are both orthonormal matrices, collection (7.7) asserts that

- $U_L$ is an orthonormal basis for the column space of $X$
- $U_R$ is an orthonormal basis for the null space of $X^T$
- $V_L$ is an orthonormal basis for the row space of $X$
- $V_R$ is an orthonormal basis for the null space of $X$

We have verified the four claims in (7.7) simply by performing the multiplications called for by the right side of (7.2) and reading the results.

The claims in (7.7) and the fact that $U$ and $V$ are both unitary (i.e., orthonormal) matrices imply that

- the column space of $X$ is orthogonal to the null space of $X^T$
- the null space of $X$ is orthogonal to the row space of $X$

Sometimes these properties are described with the following two pairs of orthogonal complement subspaces:

- ${\mathcal C}(X)$ is the orthogonal complement of ${\mathcal N}(X^T)$
- ${\mathcal R}(X)$ is the orthogonal complement of ${\mathcal N}(X)$

Let's do an example.

```python
np.set_printoptions(precision=2)

# Define the matrix
A = np.array([[1, 2, 3, 4, 5],
              [2, 3, 4, 5, 6],
              [3, 4, 5, 6, 7],
              [4, 5, 6, 7, 8],
              [5, 6, 7, 8, 9]])

# Compute the SVD; note that numpy returns V^T as the third output
U, S, Vh = np.linalg.svd(A, full_matrices=True)

# Compute the rank of the matrix
rank = np.linalg.matrix_rank(A)

print("Rank of matrix:\n", rank)
print("S:\n", S)

# Orthonormal bases for the four fundamental subspaces
col_space = U[:, :rank]        # column space of A: first p columns of U
left_null_space = U[:, rank:]  # null space of A^T: last m-p columns of U
row_space = Vh[:rank, :]       # row space of A: first p rows of V^T
null_space = Vh[rank:, :]      # null space of A: last n-p rows of V^T

print("U:\n", U)
print("Column space:\n", col_space)
print("Left null space:\n", left_null_space)
print("V:\n", Vh.T)
print("Row space:\n", row_space)
print("Right null space:\n", null_space)
```

```
Rank of matrix:
 2
S:
 [2.69e+01 1.86e+00 7.83e-16 3.27e-16 4.69e-17]
U:
 [[-0.27 -0.73 -0.47  0.06 -0.42]
 [-0.35 -0.42  0.1  -0.18  0.81]
 [-0.43 -0.11  0.75 -0.27 -0.4 ]
 [-0.51  0.19  0.06  0.83  0.05]
 [-0.59  0.5  -0.45 -0.45 -0.04]]
Column space:
 [[-0.27 -0.73]
 [-0.35 -0.42]
 [-0.43 -0.11]
 [-0.51  0.19]
 [-0.59  0.5 ]]
Left null space:
 [[-0.47  0.06 -0.42]
 [ 0.1  -0.18  0.81]
 [ 0.75 -0.27 -0.4 ]
 [ 0.06  0.83  0.05]
 [-0.45 -0.45 -0.04]]
V:
 [[-0.27  0.73  0.02  0.52 -0.37]
 [-0.35  0.42  0.06 -0.83 -0.08]
 [-0.43  0.11  0.29  0.18  0.82]
 [-0.51 -0.19 -0.83  0.06  0.04]
 [-0.59 -0.5   0.46  0.07 -0.42]]
Row space:
 [[-0.27 -0.35 -0.43 -0.51 -0.59]
 [ 0.73  0.42  0.11 -0.19 -0.5 ]]
Right null space:
 [[ 0.02  0.06  0.29 -0.83  0.46]
 [ 0.52 -0.83  0.18  0.06  0.07]
 [-0.37 -0.08  0.82  0.04 -0.42]]
```
## 7.5. Eckart-Young Theorem

Suppose that we want to construct the best rank $r$ approximation of an $m \times n$ matrix $X$.

By best we mean a matrix $X_r$ of rank $r < p$ that, among all rank $r$ matrices, minimizes

$$
|| X - X_r ||
$$

where $|| \cdot ||$ denotes a norm of a matrix $X$ and where $X_r$ belongs to the space of all rank $r$ matrices of dimension $m \times n$.

Three popular matrix norms of an $m \times n$ matrix $X$ can be expressed in terms of the singular values of $X$:

- the spectral or $l^2$ norm $|| X ||_2 = \max_{y \in \textbf{R}^n} \frac{||X y ||}{||y||} = \sigma_1$
- the Frobenius norm $||X ||_F = \sqrt{\sigma_1^2 + \cdots + \sigma_p^2}$
- the nuclear norm $|| X ||_N = \sigma_1 + \cdots + \sigma_p$

The Eckart-Young theorem states that for each of these three norms, the same rank $r$ matrix is best, and that it equals

$$
\hat X_r = \sigma_1 U_1 V_1^T + \sigma_2 U_2 V_2^T + \cdots + \sigma_r U_r V_r^T
\tag{7.8}
$$

You can read about the Eckart-Young theorem and some of its uses at https://en.wikipedia.org/wiki/Low-rank_approximation.

We'll make use of this theorem when we discuss principal components analysis (PCA) and also dynamic mode decomposition (DMD).
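As a quick numerical illustration of (7.8), not part of the original lecture code, the sketch below builds the rank-$r$ truncation from the SVD and compares its Frobenius error with that of a random rank-$r$ matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 6, 5, 2
X = rng.normal(size=(m, n))

U, s, Vh = np.linalg.svd(X, full_matrices=False)

# Best rank-r approximation per Eckart-Young: keep the r largest singular values
X_r = U[:, :r] @ np.diag(s[:r]) @ Vh[:r, :]

# Frobenius error of the truncation equals the root sum of discarded sigma^2
err = np.linalg.norm(X - X_r, 'fro')
print(np.isclose(err, np.sqrt(np.sum(s[r:]**2))))  # True

# Any other rank-r matrix should do no better
B = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))
print(np.linalg.norm(X - B, 'fro') >= err)         # True
```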
## 7.6. Full and Reduced SVDs

Up to now we have described properties of a full SVD in which the shapes of $U$, $\Sigma$, and $V$ are $\left(m, m\right)$, $\left(m, n\right)$, $\left(n, n\right)$, respectively.

There is an alternative bookkeeping convention called an economy or reduced SVD in which the shapes of $U$, $\Sigma$, and $V$ are different from what they are in a full SVD.

Note that because we assume that $X$ has rank $p$, there are only $p$ nonzero singular values, where $p = \textrm{rank}(X) \leq \min\left(m, n\right)$.

A reduced SVD uses this fact to express $U$, $\Sigma$, and $V$ as matrices with shapes $\left(m, p\right)$, $\left(p, p\right)$, $\left(n, p\right)$.

For a full SVD,

$$
\begin{aligned}
UU^T & = I & \quad U^T U = I \cr
VV^T & = I & \quad V^T V = I
\end{aligned}
$$

But not all of these properties hold for a reduced SVD.

Which properties hold depends on whether we are in a tall-skinny case or a short-fat case.

- In a tall-skinny case in which $m >> n$, for a reduced SVD

$$
\begin{aligned}
UU^T & \neq I & \quad U^T U = I \cr
VV^T & = I & \quad V^T V = I
\end{aligned}
$$

- In a short-fat case in which $m << n$, for a reduced SVD

$$
\begin{aligned}
UU^T & = I & \quad U^T U = I \cr
VV^T & = I & \quad V^T V \neq I
\end{aligned}
$$

When we study Dynamic Mode Decomposition below, we shall want to remember these properties when we use a reduced SVD to compute some DMD representations.

Let's do an exercise to compare full and reduced SVDs.

To review,

- in a full SVD
  - $U$ is $m \times m$
  - $\Sigma$ is $m \times n$
  - $V$ is $n \times n$
- in a reduced SVD
  - $U$ is $m \times p$
  - $\Sigma$ is $p \times p$
  - $V$ is $n \times p$

First, let's study a case in which $m = 5 > n = 2$.

(This is a small example of the tall-skinny case that will concern us when we study Dynamic Mode Decompositions below.)

```python
import numpy as np
X = np.random.rand(5, 2)
U, S, V = np.linalg.svd(X, full_matrices=True)            # full SVD
Uhat, Shat, Vhat = np.linalg.svd(X, full_matrices=False)  # economy SVD
print('U, S, V =')
U, S, V
```

```
U, S, V =

(array([[-0.36, -0.48, -0.51, -0.51, -0.35],
        [-0.5 , -0.51,  0.6 ,  0.33, -0.15],
        [-0.51,  0.62,  0.34, -0.49, -0.1 ],
        [-0.5 ,  0.33, -0.5 ,  0.61, -0.11],
        [-0.34, -0.16, -0.12, -0.13,  0.91]]),
 array([2.22, 0.29]),
 array([[-0.65, -0.76],
        [-0.76,  0.65]]))
```

```python
print('Uhat, Shat, Vhat = ')
Uhat, Shat, Vhat
```

```
Uhat, Shat, Vhat =

(array([[-0.36, -0.48],
        [-0.5 , -0.51],
        [-0.51,  0.62],
        [-0.5 ,  0.33],
        [-0.34, -0.16]]),
 array([2.22, 0.29]),
 array([[-0.65, -0.76],
        [-0.76,  0.65]]))
```

```python
rr = np.linalg.matrix_rank(X)
print(f'rank of X = {rr}')
```

```
rank of X = 2
```

Properties:

- Where $U$ is constructed via a full SVD, $U^T U = I_{p \times p}$ and $U U^T = I_{m \times m}$
- Where $\hat U$ is constructed via a reduced SVD, although $\hat U^T \hat U = I_{p \times p}$, it happens that $\hat U \hat U^T \neq I_{m \times m}$

We illustrate these properties for our example with the following code cells.

```python
UTU = U.T @ U
UUT = U @ U.T
print('UUT, UTU = ')
UUT, UTU
```

```
UUT, UTU =

(array([[ 1.00e+00,  1.24e-16,  9.25e-17,  2.25e-16,  1.38e-16],
        [ 1.24e-16,  1.00e+00, -5.67e-17, -7.72e-17, -2.87e-17],
        [ 9.25e-17, -5.67e-17,  1.00e+00, -1.31e-17,  3.44e-17],
        [ 2.25e-16, -7.72e-17, -1.31e-17,  1.00e+00,  5.61e-17],
        [ 1.38e-16, -2.87e-17,  3.44e-17,  5.61e-17,  1.00e+00]]),
 array([[ 1.00e+00,  2.36e-16,  2.86e-16,  1.56e-16,  9.77e-17],
        [ 2.36e-16,  1.00e+00, -5.50e-20,  5.82e-17,  9.95e-17],
        [ 2.86e-16, -5.50e-20,  1.00e+00,  8.35e-17,  4.01e-17],
        [ 1.56e-16,  5.82e-17,  8.35e-17,  1.00e+00,  4.13e-17],
        [ 9.77e-17,  9.95e-17,  4.01e-17,  4.13e-17,  1.00e+00]]))
```

```python
UhatUhatT = Uhat @ Uhat.T
UhatTUhat = Uhat.T @ Uhat
print('UhatUhatT, UhatTUhat= ')
UhatUhatT, UhatTUhat
```

```
UhatUhatT, UhatTUhat=

(array([[ 0.36,  0.42, -0.11,  0.02,  0.2 ],
        [ 0.42,  0.51, -0.06,  0.08,  0.25],
        [-0.11, -0.06,  0.64,  0.46,  0.07],
        [ 0.02,  0.08,  0.46,  0.36,  0.11],
        [ 0.2 ,  0.25,  0.07,  0.11,  0.14]]),
 array([[1.00e+00, 2.36e-16],
        [2.36e-16, 1.00e+00]]))
```

Remarks:

The cells above illustrate the application of the full_matrices=True and full_matrices=False options. Using full_matrices=False returns a reduced singular value decomposition.

The full and reduced SVDs both accurately decompose an $m \times n$ matrix $X$.

When we study Dynamic Mode Decompositions below, it will be important for us to remember the preceding properties of full and reduced SVDs in such tall-skinny cases.

Now let's turn to a short-fat case.

To illustrate this case, we'll set $m = 2 < 5 = n$ and compute both full and reduced SVDs.

```python
import numpy as np
X = np.random.rand(2, 5)
U, S, V = np.linalg.svd(X, full_matrices=True)            # full SVD
Uhat, Shat, Vhat = np.linalg.svd(X, full_matrices=False)  # economy SVD
print('U, S, V = ')
U, S, V
```

```
U, S, V =

(array([[-0.54, -0.84],
        [-0.84,  0.54]]),
 array([1.72, 0.5 ]),
 array([[-0.46, -0.68, -0.37, -0.4 , -0.19],
        [ 0.63, -0.63,  0.43,  0.  , -0.14],
        [-0.55,  0.03,  0.82, -0.15, -0.06],
        [-0.3 , -0.3 , -0.02,  0.91, -0.07],
        [-0.05, -0.24,  0.04, -0.02,  0.97]]))
```

```python
print('Uhat, Shat, Vhat = ')
Uhat, Shat, Vhat
```

```
Uhat, Shat, Vhat =

(array([[-0.54, -0.84],
        [-0.84,  0.54]]),
 array([1.72, 0.5 ]),
 array([[-0.46, -0.68, -0.37, -0.4 , -0.19],
        [ 0.63, -0.63,  0.43,  0.  , -0.14]]))
```

Let's verify that our reduced SVD accurately represents $X$:

```python
SShat = np.diag(Shat)
np.allclose(X, Uhat @ SShat @ Vhat)
```

```
True
```
## 7.7. Polar Decomposition

A reduced singular value decomposition (SVD) of $X$ is related to a polar decomposition of $X$,

$$
X = SQ
$$

where

$$
\begin{aligned}
S & = U \Sigma U^T \cr
Q & = U V^T
\end{aligned}
$$

Here

- $S$ is an $m \times m$ symmetric matrix
- $Q$ is an $m \times n$ orthogonal matrix

and in our reduced SVD

- $U$ is an $m \times p$ orthonormal matrix
- $\Sigma$ is a $p \times p$ diagonal matrix
- $V$ is an $n \times p$ orthonormal matrix

A short numerical check of this factorization appears at the end of this section.

## 7.8. Principal Components Analysis (PCA)

Let's begin with a case in which $n >> m$, so that we have many more individuals $n$ than attributes $m$.

The matrix $X$ is short and fat in an $n >> m$ case, as opposed to the tall and skinny case with $m >> n$ to be discussed later.

We regard $X$ as an $m \times n$ matrix of data:

$$
X = \begin{bmatrix} X_1 \mid X_2 \mid \cdots \mid X_n \end{bmatrix}
$$

where for $j = 1, \ldots, n$ the column vector $X_j = \begin{bmatrix} X_{1j} \\ X_{2j} \\ \vdots \\ X_{mj} \end{bmatrix}$ is a vector of observations on variables $\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}$.

In a time series setting, we would think of columns $j$ as indexing different times at which random variables are observed, while rows index different random variables.

In a cross-section setting, we would think of columns $j$ as indexing different individuals for which random variables are observed, while rows index different attributes.

The number of positive singular values equals the rank of the matrix $X$.

Arrange the singular values in decreasing order, place the positive singular values on the main diagonal of the matrix $\Sigma$, and set all other entries of $\Sigma$ to zero.

## 7.9. Relationship of PCA to SVD

To relate a SVD to a PCA (principal component analysis) of data set $X$, first construct the SVD of the data matrix $X$:

$$
X = U \Sigma V^T = \sigma_1 U_1 V_1^T + \sigma_2 U_2 V_2^T + \cdots + \sigma_p U_p V_p^T
\tag{7.9}
$$

where

$$
U = \begin{bmatrix} U_1 | U_2 | \ldots | U_m \end{bmatrix}
$$

$$
V^T = \begin{bmatrix} V_1^T \\ V_2^T \\ \vdots \\ V_n^T \end{bmatrix}
$$

In equation (7.9), each of the $m \times n$ matrices $U_{j} V_{j}^T$ is evidently of rank $1$.

Thus, we have

$$
X = \sigma_1 \begin{pmatrix} U_{11} V_{1}^T \\ U_{21} V_{1}^T \\ \vdots \\ U_{m1} V_{1}^T \end{pmatrix} + \sigma_2 \begin{pmatrix} U_{12} V_{2}^T \\ U_{22} V_{2}^T \\ \vdots \\ U_{m2} V_{2}^T \end{pmatrix} + \ldots + \sigma_p \begin{pmatrix} U_{1p} V_{p}^T \\ U_{2p} V_{p}^T \\ \vdots \\ U_{mp} V_{p}^T \end{pmatrix}
\tag{7.10}
$$

Here is how we would interpret the objects in the matrix equation (7.10) in a time series context:

- for each $k = 1, \ldots, p$, the row vector $V_k^T = \begin{bmatrix} V_{k1} & V_{k2} & \cdots & V_{kn} \end{bmatrix}$ is a time series for the $k$th principal component
- $U_k = \begin{bmatrix} U_{1k} \\ U_{2k} \\ \vdots \\ U_{mk} \end{bmatrix}$ is a vector of loadings of the variables $X_i$, $i = 1, \ldots, m$, on the $k$th principal component
- $\sigma_k$, for each $k = 1, \ldots, p$, is the strength of the $k$th principal component, where strength means contribution to the overall covariance of $X$
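Here is the promised check of the polar decomposition formulas, using a random tall-skinny matrix: $S$ comes out symmetric and $SQ$ reproduces $X$.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))  # a tall-skinny example

U, s, Vh = np.linalg.svd(X, full_matrices=False)  # reduced SVD
Sigma = np.diag(s)

S = U @ Sigma @ U.T   # m x m symmetric factor
Q = U @ Vh            # m x n factor with orthonormal columns

print(np.allclose(S, S.T))               # S is symmetric
print(np.allclose(S @ Q, X))             # X = S Q
print(np.allclose(Q.T @ Q, np.eye(3)))   # Q has orthonormal columns
```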
## 7.10. PCA with Eigenvalues and Eigenvectors

We now use an eigen decomposition of a sample covariance matrix to do PCA.

Let $$X_{m \times n}$$ be our $$m \times n$$ data matrix.

Let's assume that sample means of all variables are zero.

We can assure this by pre-processing the data by subtracting sample means.

Define a sample covariance matrix $$\Omega$$ as

$\Omega = XX^T$

Then use an eigen decomposition to represent $$\Omega$$ as follows:

$\Omega =P\Lambda P^T$

Here

• $$P$$ is an $$m\times m$$ matrix of eigenvectors of $$\Omega$$

• $$\Lambda$$ is a diagonal matrix of eigenvalues of $$\Omega$$

We can then represent $$X$$ as

$X=P\epsilon$

where

$\epsilon = P^{-1} X$

and

$\epsilon\epsilon^T=\Lambda .$

We can verify that

(7.11) $XX^T=P\Lambda P^T .$

It follows that we can represent the data matrix $$X$$ as

$X=\begin{bmatrix}X_1|X_2|\ldots|X_m\end{bmatrix} =\begin{bmatrix}P_1|P_2|\ldots|P_m\end{bmatrix} \begin{bmatrix}\epsilon_1\\\epsilon_2\\\vdots\\\epsilon_m\end{bmatrix} = P_1\epsilon_1+P_2\epsilon_2+\ldots+P_m\epsilon_m$

To reconcile the preceding representation with the PCA that we had obtained earlier through the SVD, we first note that $$\epsilon_j\epsilon_j^T=\lambda_j\equiv\sigma^2_j$$.

Now define $$\tilde{\epsilon}_j = \frac{\epsilon_j}{\sqrt{\lambda_j}}$$, which implies that $$\tilde{\epsilon}_j\tilde{\epsilon}_j^T=1$$.

Therefore

\begin{aligned} X&=\sqrt{\lambda_1}P_1\tilde{\epsilon}_1+\sqrt{\lambda_2}P_2\tilde{\epsilon}_2+\ldots+\sqrt{\lambda_m}P_m\tilde{\epsilon}_m\\ &=\sigma_1P_1\tilde{\epsilon}_1+\sigma_2P_2\tilde{\epsilon}_2+\ldots+\sigma_mP_m\tilde{\epsilon}_m , \end{aligned}

which agrees with

$X=\sigma_1U_1{V_1}^{T}+\sigma_2 U_2{V_2}^{T}+\ldots+\sigma_{r} U_{r}{V_{r}}^{T}$

provided that we set

• $$U_j=P_j$$ (a vector of loadings of variables on principal component $$j$$)

• $${V_k}^{T}=\tilde{\epsilon}_k$$ (the $$k$$th principal component)

Because there are alternative algorithms for computing $$P$$ and $$U$$ for a given data matrix $$X$$, depending on the algorithms used, we might have sign differences or different orders between eigenvectors.

We can resolve such ambiguities about $$U$$ and $$P$$ by

1. sorting eigenvalues and singular values in descending order

2. imposing positive diagonals on $$P$$ and $$U$$ and adjusting signs in $$V^T$$ accordingly
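The agreement between the two routes can be verified numerically; the following sketch is ours (illustrative names, random data, not a cell from the lecture):

import numpy as np
X = np.random.rand(3, 10)
X = X - X.mean(axis=1, keepdims=True)    # subtract sample means, as assumed above
lam, P = np.linalg.eigh(X @ X.T)         # eigen decomposition of Omega = X X^T
U, S, V = np.linalg.svd(X, full_matrices=False)
print(np.allclose(np.sort(lam)[::-1], S**2))  # eigenvalues of Omega equal squared singular values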
## 7.11. Connections

To pull things together, it is useful to assemble and compare some formulas presented above.

First, consider an SVD of an $$m \times n$$ matrix:

$X = U\Sigma V^T$

Compute:

(7.12) \begin{aligned} XX^T&=U\Sigma V^TV\Sigma^T U^T\cr &\equiv U\Sigma\Sigma^TU^T\cr &\equiv U\Lambda U^T \end{aligned}

Compare representation (7.12) with equation (7.11) above.

Evidently, $$U$$ in the SVD is the matrix $$P$$ of eigenvectors of $$XX^T$$ and $$\Sigma \Sigma^T$$ is the matrix $$\Lambda$$ of eigenvalues.

Second, let's compute

\begin{aligned} X^TX &=V\Sigma^T U^TU\Sigma V^T\\ &=V\Sigma^T{\Sigma}V^T \end{aligned}

Thus, the matrix $$V$$ in the SVD is the matrix of eigenvectors of $$X^TX$$.

Summarizing and fitting things together, we have the eigen decomposition of the sample covariance matrix

$X X^T = P \Lambda P^T$

where $$P$$ is an orthogonal matrix.

Further, from the SVD of $$X$$, we know that

$X X^T = U \Sigma \Sigma^T U^T$

where $$U$$ is an orthogonal matrix.

Thus, $$P = U$$ and we have the representation of $$X$$

$X = P \epsilon = U \Sigma V^T$

It follows that

$U^T X = \Sigma V^T = \epsilon$

Note that the preceding implies that

$\epsilon \epsilon^T = \Sigma V^T V \Sigma^T = \Sigma \Sigma^T = \Lambda ,$

so that everything fits together.

Below we define a class DecomAnalysis that wraps PCA and SVD for a given data matrix X.

import numpy as np              # (imports assumed earlier in the lecture)
from numpy import linalg as LA

class DecomAnalysis:
    """
    A class for conducting PCA and SVD.
    """

    def __init__(self, X, n_component=None):
        self.X = X
        self.Ω = (X @ X.T)
        self.m, self.n = X.shape
        self.r = LA.matrix_rank(X)

        if n_component:
            self.n_component = n_component
        else:
            self.n_component = self.m

    def pca(self):
        λ, P = LA.eigh(self.Ω)    # columns of P are eigenvectors

        ind = sorted(range(λ.size), key=lambda x: λ[x], reverse=True)

        # sort by eigenvalues
        self.λ = λ[ind]
        P = P[:, ind]
        self.P = P @ diag_sign(P)

        self.Λ = np.diag(self.λ)

        self.explained_ratio_pca = np.cumsum(self.λ) / self.λ.sum()

        # compute the N by T matrix of principal components
        self.ε = self.P.T @ self.X

        P = self.P[:, :self.n_component]
        ε = self.ε[:self.n_component, :]

        # transform data
        self.X_pca = P @ ε

    def svd(self):
        U, σ, VT = LA.svd(self.X)

        ind = sorted(range(σ.size), key=lambda x: σ[x], reverse=True)

        # sort by singular values
        d = min(self.m, self.n)

        self.σ = σ[ind]
        U = U[:, ind]
        D = diag_sign(U)
        self.U = U @ D
        VT[:d, :] = D @ VT[ind, :]
        self.VT = VT

        self.Σ = np.zeros((self.m, self.n))
        self.Σ[:d, :d] = np.diag(self.σ)

        σ_sq = self.σ ** 2
        self.explained_ratio_svd = np.cumsum(σ_sq) / σ_sq.sum()

        # slicing matrices by the number of components to use
        U = self.U[:, :self.n_component]
        Σ = self.Σ[:self.n_component, :self.n_component]
        VT = self.VT[:self.n_component, :]

        # transform data
        self.X_svd = U @ Σ @ VT

    def fit(self, n_component):
        # pca
        P = self.P[:, :n_component]
        ε = self.ε[:n_component, :]

        # transform data
        self.X_pca = P @ ε

        # svd
        U = self.U[:, :n_component]
        Σ = self.Σ[:n_component, :n_component]
        VT = self.VT[:n_component, :]

        # transform data
        self.X_svd = U @ Σ @ VT
def diag_sign(A):
    "Compute the signs of the diagonal of matrix A"

    D = np.diag(np.sign(np.diag(A)))

    return D

We also define a function that prints out information so that we can compare decompositions obtained by different algorithms.

import matplotlib.pyplot as plt    # (import assumed earlier in the lecture)

def compare_pca_svd(da):
    """
    Compare the outcomes of PCA and SVD.
    """

    da.pca()
    da.svd()

    print('Eigenvalues and Singular values\n')
    print(f'λ = {da.λ}\n')
    print(f'σ^2 = {da.σ**2}\n')
    print('\n')

    fig, axs = plt.subplots(1, 2, figsize=(14, 5))
    axs[0].plot(da.P.T)
    axs[0].set_title('P')
    axs[0].set_xlabel('m')
    axs[1].plot(da.U.T)
    axs[1].set_title('U')
    axs[1].set_xlabel('m')
    plt.show()

    # principal components
    fig, axs = plt.subplots(1, 2, figsize=(14, 5))
    plt.suptitle('principal components')
    axs[0].plot(da.ε.T)
    axs[0].set_title('ε')
    axs[0].set_xlabel('n')
    axs[1].plot(da.VT[:da.r, :].T * np.sqrt(da.λ))
    axs[1].set_title(r'$V^T*\sqrt{\lambda}$')
    axs[1].set_xlabel('n')
    plt.show()

For an example of PCA applied to analyzing the structure of intelligence tests, see this lecture: Multivariable Normal Distribution.

Look at the parts of that lecture that describe and illustrate the classic factor analysis model.

## 7.12. Vector Autoregressions

We want to fit a first-order vector autoregression

(7.13) $X_{t+1} = A X_t + C \epsilon_{t+1}$

where $$\epsilon_{t+1}$$ is the time $$t+1$$ instance of an i.i.d. $$m \times 1$$ random vector with mean vector zero and identity covariance matrix and where the $$m \times 1$$ vector $$X_t$$ is

(7.14) $X_t = \begin{bmatrix} X_{1,t} & X_{2,t} & \cdots & X_{m,t} \end{bmatrix}^T$

and where $$T$$ again denotes complex transposition and $$X_{i,t}$$ is an observation on variable $$i$$ at time $$t$$.

We want to fit equation (7.13).

Our data are organized in an $$m \times (n+1)$$ matrix $$\tilde X$$

$\tilde X = \begin{bmatrix} X_1 \mid X_2 \mid \cdots \mid X_n \mid X_{n+1} \end{bmatrix}$

where for $$t = 1, \ldots, n+1$$, the $$m \times 1$$ vector $$X_t$$ is given by (7.14).

Thus, we want to estimate a system (7.13) that consists of $$m$$ least squares regressions of everything on one lagged value of everything.

The $$i$$'th equation of (7.13) is a regression of $$X_{i,t+1}$$ on the vector $$X_t$$.

We proceed as follows.

From $$\tilde X$$, we form two $$m \times n$$ matrices

$X = \begin{bmatrix} X_1 \mid X_2 \mid \cdots \mid X_{n}\end{bmatrix}$

and

$X' = \begin{bmatrix} X_2 \mid X_3 \mid \cdots \mid X_{n+1}\end{bmatrix}$

Here $$'$$ is part of the name of the matrix $$X'$$ and does not indicate matrix transposition.

We continue to use $$\cdot^T$$ to denote matrix transposition or its extension to complex matrices.

In forming $$X$$ and $$X'$$, we have in each case dropped a column from $$\tilde X$$, the last column in the case of $$X$$, and the first column in the case of $$X'$$.

Evidently, $$X$$ and $$X'$$ are both $$m \times n$$ matrices.

We denote the rank of $$X$$ as $$p \leq \min(m, n)$$.

Two possible cases are

• $$n >> m$$, so that we have many more time series observations $$n$$ than variables $$m$$

• $$m >> n$$, so that we have many more variables $$m$$ than time series observations $$n$$

At a general level that includes both of these special cases, a common formula describes the least squares estimator $$\hat A$$ of $$A$$.

But important details differ.

The common formula is

(7.15) $\hat A = X' X^+$

where $$X^+$$ is the pseudo-inverse of $$X$$.
Applicable formulas for the pseudo-inverse differ for our two cases.

Short-Fat Case:

When $$n >> m$$, so that we have many more time series observations $$n$$ than variables $$m$$ and when $$X$$ has linearly independent rows, $$X X^T$$ has an inverse and the pseudo-inverse $$X^+$$ is

$X^+ = X^T (X X^T)^{-1}$

Here $$X^+$$ is a right-inverse that verifies $$X X^+ = I_{m \times m}$$.

In this case, our formula (7.15) for the least-squares estimator of the population matrix of regression coefficients $$A$$ becomes

(7.16) $\hat A = X' X^T (X X^T)^{-1}$

This formula for least-squares regression coefficients is widely used in econometrics.

For example, it is used to estimate vector autoregressions.

The right side of formula (7.16) is proportional to the empirical cross second moment matrix of $$X_{t+1}$$ and $$X_t$$ times the inverse of the second moment matrix of $$X_t$$.
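Before turning to the tall-skinny case, note that formula (7.16) takes only a few lines to compute; this sketch is ours (random data, illustrative names):

import numpy as np
m, n = 3, 50                                       # short-fat: many observations
X_tilde = np.random.rand(m, n + 1)
X, Xp = X_tilde[:, :-1], X_tilde[:, 1:]            # Xp plays the role of X'
A_hat = Xp @ X.T @ np.linalg.inv(X @ X.T)          # formula (7.16)
print(np.allclose(A_hat, Xp @ np.linalg.pinv(X)))  # agrees with A_hat = X' X^+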
Tall-Skinny Case:

When $$m >> n$$, so that we have many more attributes $$m$$ than time series observations $$n$$ and when $$X$$ has linearly independent columns, $$X^T X$$ has an inverse and the pseudo-inverse $$X^+$$ is

$X^+ = (X^T X)^{-1} X^T$

Here $$X^+$$ is a left-inverse that verifies $$X^+ X = I_{n \times n}$$.

In this case, our formula (7.15) for a least-squares estimator of $$A$$ becomes

(7.17) $\hat A = X' (X^T X)^{-1} X^T$

Please compare formulas (7.16) and (7.17) for $$\hat A$$.

Here we are interested in formula (7.17).

The $$i$$th row of $$\hat A$$ is an $$m \times 1$$ vector of regression coefficients of $$X_{i,t+1}$$ on $$X_{j,t}, j = 1, \ldots, m$$.

If we use formula (7.17) to calculate $$\hat A X$$ we find that

$\hat A X = X'$

so that the regression equation fits perfectly.

This is the usual outcome in an underdetermined least-squares model.

To reiterate, in our tall-skinny case in which we have a number $$n$$ of observations that is small relative to the number $$m$$ of attributes that appear in the vector $$X_t$$, we want to fit equation (7.13).

To offer ideas about how we can efficiently calculate the pseudo-inverse $$X^+$$, as our estimator $$\hat A$$ of $$A$$ we form an $$m \times m$$ matrix that solves the least-squares best-fit problem

(7.18) $\hat A = \textrm{argmin}_{\check A} || X' - \check A X ||_F$

where $$|| \cdot ||_F$$ denotes the Frobenius (or Euclidean) norm of a matrix.

The Frobenius norm is defined as

$||A||_F = \sqrt{ \sum_{i=1}^m \sum_{j=1}^m |A_{ij}|^2 }$

The minimizer of the right side of equation (7.18) is

(7.19) $\hat A = X' X^{+}$

where the (possibly huge) $$n \times m$$ matrix $$X^{+} = (X^T X)^{-1} X^T$$ is again a pseudo-inverse of $$X$$.

For some situations that we are interested in, $$X^T X$$ can be close to singular, a situation that can make some numerical algorithms error-prone.

To acknowledge that possibility, we'll use efficient algorithms for computing and for constructing reduced-rank approximations of $$\hat A$$ in formula (7.17).

An efficient way to compute the pseudo-inverse $$X^+$$ is to start with a singular value decomposition

(7.20) $X = U \Sigma V^T$

where we remind ourselves that for a reduced SVD, $$X$$ is an $$m \times n$$ matrix of data, $$U$$ is an $$m \times p$$ matrix, $$\Sigma$$ is a $$p \times p$$ matrix, and $$V$$ is an $$n \times p$$ matrix.

We can efficiently construct the pertinent pseudo-inverse $$X^+$$ by recognizing the following string of equalities.

(7.21) \begin{aligned} X^{+} & = (X^T X)^{-1} X^T \\ & = (V \Sigma U^T U \Sigma V^T)^{-1} V \Sigma U^T \\ & = (V \Sigma \Sigma V^T)^{-1} V \Sigma U^T \\ & = V \Sigma^{-1} \Sigma^{-1} V^T V \Sigma U^T \\ & = V \Sigma^{-1} U^T \end{aligned}

(Since we are in the $$m >> n$$ case in which $$V^T V = I_{p \times p}$$ in a reduced SVD, we can use the preceding string of equalities for a reduced SVD as well as for a full SVD.)

Thus, we shall construct a pseudo-inverse $$X^+$$ of $$X$$ by using a singular value decomposition of $$X$$ in equation (7.20) to compute

(7.22) $X^{+} = V \Sigma^{-1} U^T$

where the matrix $$\Sigma^{-1}$$ is constructed by replacing each non-zero element of $$\Sigma$$ with $$\sigma_j^{-1}$$.

We can use formula (7.22) together with formula (7.19) to compute the matrix $$\hat A$$ of regression coefficients.

Thus, our estimator $$\hat A = X' X^+$$ of the $$m \times m$$ matrix of coefficients $$A$$ is

(7.23) $\hat A = X' V \Sigma^{-1} U^T$
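Formulas (7.22) and (7.23) can be checked in a few lines; again this sketch is ours, not the lecture's code:

import numpy as np
m, n = 50, 5                                      # tall-skinny: many attributes
X_tilde = np.random.rand(m, n + 1)
X, Xp = X_tilde[:, :-1], X_tilde[:, 1:]
U, S, V = np.linalg.svd(X, full_matrices=False)   # reduced SVD; V holds V^T
X_plus = V.T @ np.diag(1 / S) @ U.T               # formula (7.22)
print(np.allclose(X_plus, np.linalg.pinv(X)))     # matches NumPy's pseudo-inverse
A_hat = Xp @ X_plus                               # formula (7.23)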
## 7.13. Dynamic Mode Decomposition (DMD)

We turn to the tall and skinny case associated with Dynamic Mode Decomposition, the case in which $$m >> n$$.

Here an $$m \times n$$ data matrix $$\tilde X$$ contains many more attributes $$m$$ than individuals $$n$$.

Dynamic mode decomposition was introduced by [Sch10].

You can read about Dynamic Mode Decomposition here [KBBWP16] and here [BK19] (section 7.2).

The key idea underlying Dynamic Mode Decomposition (DMD) is to compute a rank $$r < p$$ approximation to the least squares regression coefficients $$\hat A$$ that we described above by formula (7.23).

We'll build up gradually to a formulation that is typically used in applications of DMD.

We'll do this by describing three alternative representations of our first-order linear dynamic system, i.e., our vector autoregression.

Guide to three representations: In practice, we'll be interested in Representation 3. We present the first 2 in order to set the stage for some intermediate steps that might help us understand what is under the hood of Representation 3. In applications, we'll use only a small subset of the DMD modes to approximate dynamics. To do that, we'll want to use the reduced SVDs affiliated with representation 3, not the full SVDs affiliated with representations 1 and 2.

Guide to impatient reader: In our applications, we'll be using Representation 3. You might want to skip the stage-setting representations 1 and 2 on first reading.

## 7.14. Representation 1

In this representation, we shall use a full SVD of $$X$$.

We use the $$m$$ columns of $$U$$, and thus the $$m$$ rows of $$U^T$$, to define an $$m \times 1$$ vector $$\tilde b_t$$ as

(7.24) $\tilde b_t = U^T X_t .$

The original data $$X_t$$ can be represented as

(7.25) $X_t = U \tilde b_t$

(Here we use $$b$$ to remind ourselves that we are creating a basis vector.)

Since we are now using a full SVD, $$U U^T = I_{m \times m}$$.

So it follows from equation (7.24) that we can reconstruct $$X_t$$ from $$\tilde b_t$$.

In particular,

• Equation (7.24) serves as an encoder that rotates the $$m \times 1$$ vector $$X_t$$ to become an $$m \times 1$$ vector $$\tilde b_t$$

• Equation (7.25) serves as a decoder that reconstructs the $$m \times 1$$ vector $$X_t$$ by rotating the $$m \times 1$$ vector $$\tilde b_t$$

Define a transition matrix for an $$m \times 1$$ basis vector $$\tilde b_t$$ by

(7.26) $\tilde A = U^T \hat A U$

We can recover $$\hat A$$ from

$\hat A = U \tilde A U^T$

Dynamics of the $$m \times 1$$ basis vector $$\tilde b_t$$ are governed by

$\tilde b_{t+1} = \tilde A \tilde b_t$

To construct forecasts $$\overline X_t$$ of future values of $$X_t$$ conditional on $$X_1$$, we can apply decoders (i.e., rotators) to both sides of this equation and deduce

$\overline X_{t+1} = U \tilde A^t U^T X_1$

where we use $$\overline X_{t+1}, t \geq 1$$ to denote a forecast.

## 7.15. Representation 2

This representation is related to one originally proposed by [Sch10].

It can be regarded as an intermediate step on the way to obtaining a related representation 3 to be presented later.

As with Representation 1, we continue to

• use a full SVD and not a reduced SVD

As we observed and illustrated earlier in this lecture,

• (a) for a full SVD, $$U U^T = I_{m \times m}$$ and $$U^T U = I_{m \times m}$$ are both identity matrices

• (b) for a reduced SVD of $$X$$, $$U U^T$$ is not an identity matrix.

As we shall see later, a full SVD is too confining for what we ultimately want to do, namely, cope with situations in which $$U U^T$$ is not an identity matrix because we use a reduced SVD of $$X$$.

But for now, let's proceed under the assumption that we are using a full SVD so that both of the preceding two requirements (a) and (b) are satisfied.

Form an eigendecomposition of the $$m \times m$$ matrix $$\tilde A = U^T \hat A U$$ defined in equation (7.26):

(7.27) $\tilde A = W \Lambda W^{-1}$

where $$\Lambda$$ is a diagonal matrix of eigenvalues and $$W$$ is an $$m \times m$$ matrix whose columns are eigenvectors corresponding to the eigenvalues in $$\Lambda$$.

When $$U U^T = I_{m \times m}$$, as is true with a full SVD of $$X$$, it follows that

(7.28) $\hat A = U \tilde A U^T = U W \Lambda W^{-1} U^T$

According to equation (7.28), the diagonal matrix $$\Lambda$$ contains eigenvalues of $$\hat A$$ and corresponding eigenvectors of $$\hat A$$ are columns of the matrix $$UW$$.

It follows that the systematic (i.e., not random) parts of the $$X_t$$ dynamics captured by our first-order vector autoregressions are described by

$X_{t+1} = U W \Lambda W^{-1} U^T X_t$

Multiplying both sides of the above equation by $$W^{-1} U^T$$ gives

$W^{-1} U^T X_{t+1} = \Lambda W^{-1} U^T X_t$

or

$\hat b_{t+1} = \Lambda \hat b_t$

where our encoder is now

$\hat b_t = W^{-1} U^T X_t$
and our decoder is

$X_t = U W \hat b_t$

We can use this representation to construct a predictor $$\overline X_{t+1}$$ of $$X_{t+1}$$ conditional on $$X_1$$ via:

(7.29) $\overline X_{t+1} = U W \Lambda^t W^{-1} U^T X_1$

In effect, [Sch10] defined an $$m \times m$$ matrix $$\Phi_s$$ as

(7.30) $\Phi_s = UW$

and a generalized inverse

(7.31) $\Phi_s^+ = W^{-1}U^T$

[Sch10] then represented equation (7.29) as

(7.32) $\overline X_{t+1} = \Phi_s \Lambda^t \Phi_s^+ X_1$

Components of the basis vector $$\hat b_t = W^{-1} U^T X_t \equiv \Phi_s^+ X_t$$ are often called DMD modes, or sometimes also DMD projected modes.

To understand why they are called projected modes, notice that

$\Phi_s^+ = ( \Phi_s^T \Phi_s)^{-1} \Phi_s^T$

so that the $$m \times n$$ matrix

$\hat b = \Phi_s^+ X$

is a matrix of regression coefficients of the $$m \times n$$ matrix $$X$$ on the $$m \times m$$ matrix $$\Phi_s$$.

We turn next to an alternative representation suggested by Tu et al. [TRL+14].

It is more appropriate to use this alternative representation when, as is typically the case in practice, we use a reduced SVD.

## 7.16. Representation 3

Departing from the procedures used to construct Representations 1 and 2, each of which deployed a full SVD, we now use a reduced SVD.

Again, we let $$p \leq \textrm{min}(m,n)$$ be the rank of $$X$$.

Construct a reduced SVD

$X = \tilde U \tilde \Sigma \tilde V^T,$

where now $$\tilde U$$ is $$m \times p$$, $$\tilde \Sigma$$ is $$p \times p$$, and $$\tilde V^T$$ is $$p \times n$$.

Our minimum-norm least-squares estimator of $$A$$ now has the representation

(7.33) $\hat A = X' \tilde V \tilde \Sigma^{-1} \tilde U^T$

Paralleling a step in Representation 1, define a transition matrix for a rotated $$p \times 1$$ state $$\tilde b_t$$ by

(7.34) $\tilde A =\tilde U^T \hat A \tilde U$

Interpretation as projection coefficients

[BK22] remark that $$\tilde A$$ can be interpreted in terms of a projection of $$\hat A$$ onto the $$p$$ modes in $$\tilde U$$.

To verify this, first note that, because $$\tilde U^T \tilde U = I$$, it follows that

(7.35) $\tilde A = \tilde U^T \hat A \tilde U = \tilde U^T X' \tilde V \tilde \Sigma^{-1} \tilde U^T \tilde U = \tilde U^T X' \tilde V \tilde \Sigma^{-1}$

Next, we'll just compute the regression coefficients in a projection of $$\hat A \tilde U$$ on $$\tilde U$$ using the standard least-squares formula

$(\tilde U^T \tilde U)^{-1} \tilde U^T \hat A \tilde U = (\tilde U^T \tilde U)^{-1} \tilde U^T X' \tilde V \tilde \Sigma^{-1} \tilde U^T \tilde U = \tilde U^T X' \tilde V \tilde \Sigma^{-1} = \tilde A .$

Note that because we are now working with a reduced SVD, $$\tilde U \tilde U^T \neq I$$.

Consequently,

$\hat A \neq \tilde U \tilde A \tilde U^T,$

and we can't simply recover $$\hat A$$ from $$\tilde A$$ and $$\tilde U$$.

Nevertheless, we hope for the best and proceed to construct an eigendecomposition of the $$p \times p$$ matrix $$\tilde A$$:

(7.36) $\tilde A = \tilde W \Lambda \tilde W^{-1} .$

Mimicking our procedure in Representation 2, we cross our fingers and compute an $$m \times p$$ matrix

(7.37) $\tilde \Phi_s = \tilde U \tilde W$

that corresponds to (7.30) for a full SVD.

At this point, where $$\hat A$$ is given by formula (7.33), it is interesting to compute $$\hat A \tilde \Phi_s$$:
A \\tilde \\Phi_s & = (X' \\tilde V \\tilde \\Sigma^{-1} \\tilde U^T) (\\tilde U \\tilde W) \\\\ & = X' \\tilde V \\tilde \\Sigma^{-1} \\tilde W \\\\ & \\neq (\\tilde U \\tilde W) \\Lambda \\\\ & = \\tilde \\Phi_s \\Lambda \\end{aligned} \\end{split}\n\nThat $$\\hat A \\tilde \\Phi_s \\neq \\tilde \\Phi_s \\Lambda$$ means, that unlike the corresponding situation in Representation 2, columns of $$\\tilde \\Phi_s = \\tilde U \\tilde W$$ are not eigenvectors of $$\\hat A$$ corresponding to eigenvalues on the diagonal of matix $$\\Lambda$$.\n\nBut in a quest for eigenvectors of $$\\hat A$$ that we can compute with a reduced SVD, let\u2019s define the $$m \\times p$$ matrix $$\\Phi$$ as\n\n(7.38)$\\Phi \\equiv \\hat A \\tilde \\Phi_s = X' \\tilde V \\tilde \\Sigma^{-1} \\tilde W$\n\nIt turns out that columns of $$\\Phi$$ are eigenvectors of $$\\hat A$$.\n\nThis is a consequence of a result established by Tu et al. [TRL+14], which we now present.\n\nProposition The $$p$$ columns of $$\\Phi$$ are eigenvectors of $$\\hat A$$.\n\nProof: From formula (7.38) we have\n\n\\begin{aligned} \\hat A \\Phi & = (X' \\tilde V \\tilde \\Sigma^{-1} \\tilde U^T) (X' \\tilde V \\Sigma^{-1} \\tilde W) \\cr & = X' \\tilde V \\tilde \\Sigma^{-1} \\tilde A \\tilde W \\cr & = X' \\tilde V \\tilde \\Sigma^{-1}\\tilde W \\Lambda \\cr & = \\Phi \\Lambda \\end{aligned}\n\nThus, we have deduced that\n\n(7.39)$\\hat A \\Phi = \\Phi \\Lambda$\n\nLet $$\\phi_i$$ be the $$i$$th column of $$\\Phi$$ and $$\\lambda_i$$ be the corresponding $$i$$ eigenvalue of $$\\tilde A$$ from decomposition (7.36).\n\nWriting out the $$m \\times 1$$ vectors on both sides of equation (7.39) and equating them gives\n\n$\\hat A \\phi_i = \\lambda_i \\phi_i .$\n\nThis equation confirms that $$\\phi_i$$ is an eigenvector of $$\\hat A$$ that corresponds to eigenvalue $$\\lambda_i$$ of both $$\\tilde A$$ and $$\\hat A$$.\n\nThis concludes the proof.\n\nAlso see [BK22] (p. 238)\n\n### 7.16.1. 
### 7.16.1. Decoder of $$X$$ as a linear projection

From eigendecomposition (7.39) we can represent $$\hat A$$ as

(7.40) $\hat A = \Phi \Lambda \Phi^+ .$

From formula (7.40) we can deduce the reduced-dimension dynamics

$\check b_{t+1} = \Lambda \check b_t$

where

(7.41) $\check b_t = \Phi^+ X_t$

Since the $$m \times p$$ matrix $$\Phi$$ has $$p$$ linearly independent columns, the generalized inverse of $$\Phi$$ is

$\Phi^{+} = (\Phi^T \Phi)^{-1} \Phi^T$

and so

(7.42) $\check b = (\Phi^T \Phi)^{-1} \Phi^T X$

The $$p \times n$$ matrix $$\check b$$ is recognizable as a matrix of least squares regression coefficients of the $$m \times n$$ matrix $$X$$ on the $$m \times p$$ matrix $$\Phi$$ and consequently

(7.43) $\check X = \Phi \check b$

is an $$m \times n$$ matrix of least squares projections of $$X$$ on $$\Phi$$.

By virtue of the least-squares projection theory discussed in this quantecon lecture https://python-advanced.quantecon.org/orth_proj.html, we can represent $$X$$ as the sum of the projection $$\check X$$ of $$X$$ on $$\Phi$$ plus a matrix of errors.

To verify this, note that the least squares projection $$\check X$$ is related to $$X$$ by

$X = \check X + \epsilon$

or

(7.44) $X = \Phi \check b + \epsilon$

where $$\epsilon$$ is an $$m \times n$$ matrix of least squares errors satisfying the least squares orthogonality conditions $$\epsilon^T \Phi =0$$ or

(7.45) $(X - \Phi \check b)^T \Phi = 0_{n \times p}$

Rearranging the orthogonality conditions (7.45) gives $$X^T \Phi = \check b^T \Phi^T \Phi$$, which implies formula (7.42).
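Continuing the sketch above (reusing its illustrative variables X and Phi), the decoder amounts to a least-squares projection:

b_check = np.linalg.pinv(Phi) @ X        # equations (7.41)-(7.42)
X_check = Phi @ b_check                  # equation (7.43): projection of X on Phi
print(np.allclose(Phi.conj().T @ (X - X_check), 0))  # residuals orthogonal to Phi, cf. (7.45)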
### 7.16.2. A useful approximation

There is a useful way to approximate the $$p \times 1$$ vector $$\check b_t$$ instead of using formula (7.41).

In particular, the following argument adapted from [BK22] (page 240) provides a computationally efficient way to approximate $$\check b_t$$.

For convenience, we'll do this first for time $$t=1$$.

For $$t=1$$, from equation (7.44) we have

(7.46) $\check X_1 = \Phi \check b_1$

where $$\check b_1$$ is a $$p \times 1$$ vector.

Recall from representation 1 above that $$X_1 = U \tilde b_1$$, where $$\tilde b_1$$ is a time $$1$$ basis vector for representation 1 and $$U$$ is from a full SVD of $$X$$.

It then follows from equation (7.44) that

$U \tilde b_1 = X' \tilde V \tilde \Sigma^{-1} \tilde W \check b_1 + \epsilon_1$

where $$\epsilon_1$$ is a least-squares error vector from equation (7.44).

It follows that

$\tilde b_1 = U^T X' \tilde V \tilde \Sigma^{-1} \tilde W \check b_1 + U^T \epsilon_1$

Replacing the error term $$U^T \epsilon_1$$ by zero, and replacing $$U$$ from a full SVD of $$X$$ with $$\tilde U$$ from a reduced SVD, we obtain an approximation $$\hat b_1$$ to $$\tilde b_1$$:

$\hat b_1 = \tilde U^T X' \tilde V \tilde \Sigma^{-1} \tilde W \check b_1$

Recall that from equation (7.35), $$\tilde A = \tilde U^T X' \tilde V \tilde \Sigma^{-1}$$.

It then follows that

$\hat b_1 = \tilde A \tilde W \check b_1$

and therefore, by the eigendecomposition (7.36) of $$\tilde A$$, we have

$\hat b_1 = \tilde W \Lambda \check b_1$

Consequently,

$\check b_1 = ( \tilde W \Lambda)^{-1} \hat b_1$

or

(7.47) $\check b_1 = ( \tilde W \Lambda)^{-1} \tilde U^T X_1 ,$

which is a computationally efficient approximation to the following instance of equation (7.41) for the initial vector $$\check b_1$$:

(7.48) $\check b_1= \Phi^{+} X_1$

(To highlight that (7.47) is an approximation, users of DMD sometimes call components of the basis vector $$\check b_t = \Phi^+ X_t$$ the exact DMD modes.)

Conditional on $$X_t$$, we can compute our decoded $$\check X_{t+j}, j = 1, 2, \ldots$$ from either

(7.49) $\check X_{t+j} = \Phi \Lambda^j \Phi^{+} X_t$

or use the approximation

(7.50) $\hat X_{t+j} = \Phi \Lambda^j (\tilde W \Lambda)^{-1} \tilde U^T X_t .$

We can then use $$\check X_{t+j}$$ or $$\hat X_{t+j}$$ to forecast $$X_{t+j}$$.

### 7.16.3. Using Fewer Modes

In applications, we'll actually want to use just a few modes, often three or fewer.

Some of the preceding formulas assume that we have retained all $$p$$ modes associated with the positive singular values of $$X$$.

We can adjust our formulas to describe a situation in which we instead retain only the $$r < p$$ largest singular values.

In that case, we simply replace $$\tilde \Sigma$$ with the appropriate $$r\times r$$ matrix of singular values, $$\tilde U$$ with the $$m \times r$$ matrix whose columns correspond to the $$r$$ largest singular values, and $$\tilde V$$ with the $$n \times r$$ matrix whose columns correspond to the $$r$$ largest singular values.

Counterparts of all of the salient formulas above then apply.

## 7.17. Source for Some Python Code

You can find a Python implementation of DMD here:

https://mathlab.github.io/PyDMD/
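In code, retaining only the $$r$$ largest singular values amounts to slicing the factors of the reduced SVD; continuing our earlier sketch (not PyDMD's API):

r = 3                                   # keep only the r largest singular values
Ur, Sr, Vr = U[:, :r], S[:r], V[:r, :]
A_tilde_r = Ur.T @ Xp @ Vr.T @ np.diag(1 / Sr)
Lam_r, W_r = np.linalg.eig(A_tilde_r)
Phi_r = Xp @ Vr.T @ np.diag(1 / Sr) @ W_r   # rank-r DMD modes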
Modravské slatě (the Modrava Bogs) are a natural monument in the territory of the municipality of Modrava in the Šumava National Park. They are the largest complex of peat bogs in the Šumava, most of which is, however, closed to the public. The Modravské slatě represent a uniquely preserved complex of raised bogs, which have been developing on spring areas and in shallow basins since the last ice age (8,000 to 10,000 years ago). Their total area exceeds 2,000 hectares. The site lies in the first zone of the Šumava National Park, which above all protects its unique vegetation and fauna. The endangered western capercaillie lives here, as do the Eurasian lynx and the northern birch mouse. The Modravské slatě have been included in the international network of globally important peatlands and wetlands under the Ramsar Convention.
Description
The Modravské slatě are the highest-lying part of the Šumava Plains, in the headwater area of the Vydra river between Modrava and the Czech-German state border. The area lies at elevations around 1,000 metres on the windward side of the Šumava, where the climate is humid with abundant precipitation. The Modravské slatě are characterized by numerous structured raised bogs with many small lakes and varied treeless patches. Across this whole territory there are extensive complexes of bog spruce forests and waterlogged spruce forests. These complexes are mostly surrounded by raised bogs, which link everything together into larger peatland units. The Modrava peat bogs are for the most part sloping bogs. They are fed mainly by seeping water and by water from springs on the upper course of the Vydra river.
Individual bogs
According to old forest-district maps, a total of 28 bogs were located in the territory of the Modrava Plains.
Rokytská slať
Roklanská slať
Přední a Zadní Mlynářská slať
Šárecká slať
Novohuťské močály
Blatenská slať
Drainage
In the past, demands for planting forest stands, whether to cultivate existing forests or to establish new ones, led to ill-considered drainage interventions in the peat bog complexes. Interference with the water regime (the construction of networks of drainage ditches and channels) also affected the development of the Modravské slatě in undesirable ways. Insights reached at the beginning of the 21st century about the importance of retaining water in the landscape, of creating a local microclimate, and of substantially slowing the runoff of flood waters resulted in concrete revitalization measures: the original drainage ditches were newly dammed with cascades of wooden barriers in order to raise the groundwater level again and retain water in the affected biotopes of the revitalized area.
The Iron Curtain
The life of the Modravské slatě was also negatively affected by the so-called Iron Curtain. In the 1950s and 1960s it ran across the peat bog complexes in the form of fences with wire barriers, which from 1953 were kept under a life-threatening high voltage of 3,000 to 6,000 volts. Although the Iron Curtain was removed after the Velvet Revolution (in 1991), even today (as of 2019) its relics can still be found on the bogs, besides the cleared strips: wooden posts, ceramic insulators, and tangles of barbed wire.
Related articles
Jezerní slať
Chalupská slať
Tříjezerní slať
Cikánská slať
Natural monuments in the Klatovy District
Peat bogs in the Šumava
Šumava National Park
Otava river basin
Modrava
Czech-German state border
Protected areas declared in 1989
Abolished natural monuments in the Czech Republic | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 5,382 |
The 2019 Alfa Romeo Giulietta is a much-loved, trusted model that has been among the best sellers of its time. A rumor recently surfaced that the company is preparing the next generation of this crossover. The new model is said to bring a number of changes in various areas, outside and in, with a more powerful engine and better fuel economy reportedly a focus of its development; there are also claims that the 2019 Alfa Romeo Giulietta may get a more eco-friendly hybrid powertrain.
Allegedly, the exterior of the 2019 Alfa Romeo Giulietta will see a change of style: the newest edition is said to be slightly more compact and lighter, dimensions that are supposedly best for aerodynamics. The headlights are also said to be revised to look more refined. To preserve the car's character, the size of the wheels and rims at the rear remains the same. The new look is quite striking, and some commenters have said they are very enthusiastic about the more advanced design.
The interior of the Alfa Romeo Giulietta also changes from the current model. Judging by available photos, the dashboard gets a touch of the modern: a generously sized LCD touchscreen can display precise GPS navigation, and the speedometer is very modern. The seats are luxurious and partly trimmed in genuine leather, which makes the cabin feel sophisticated, and a variety of color options are available. Passenger capacity is very generous; the car can comfortably carry five passengers, possibly even six. The trunk is also very roomy, large enough to hold the gear you need for camping, commuting to work, or a picnic. This is a car you can count on for looks.
To do its job as a multi-purpose car, the 2019 Alfa Romeo Giulietta appears to use a better engine than the previous model, though we still have no clear details. Our guess is that this crossover would use a 1.5 L V6 in the base version; with some of the newest technology fitted to this car, we believe it will see an increase in power. The acceleration it produces should be excellent both on city streets and off-road.
There are also rumors claiming that a diesel version will be offered with more horsepower as well as higher fuel efficiency. A hybrid model is also thought to be a logical option, given that today's consumers are well aware of the importance of eco-friendly cars.
The price and release date of the 2019 Alfa Romeo Giulietta have not been announced, but consumers certainly have an interest in this car. Looking at the price of the previous model when it first launched, we estimate the price will not be less than $25,000, and it may be available at the end of this year or early next year. We can only wait.
"redpajama_set_name": "RedPajamaC4"
} | 9,035 |
On Friday, I ranted a bit about "Are Lulz our best practice?" The biggest pushback I heard was that management doesn't listen, or doesn't make decisions in the best interests of the company. I think there's a lot going on there, and I want to unpack it.
First, a quick model of getting executives to do what you want. I'll grossly oversimplify to 3 ordered parts.
You need a goal. Some decision you think is in the best interests of the organization, and reasons you think that's the case.
You need a way to communicate about the goal and the supporting facts and arguments.
You need management who will make decisions in the best interests of the organization.
The essence of my argument on Friday is that 1 & 2 are often missing or under-supported. Either the decisions are too expensive given the normal (not outlying) costs of a breach, or the communication is not convincing.
Let me expand on insufficient facts, the best interests of the organization and insufficient communication.
Sufficient facts mean that you have the data you need to convince an impartial or even a somewhat partial world that there's a risk tradeoff worth making: that if you invest in A over B, the expected cost to the organization will fall. And if B is an investment in raising revenues, then the odds that A happens are sufficiently higher than B that it's worth taking the risk of not raising revenue and accepting the loss from A. Insufficient facts describes what happens because we keep most security problems secret. It happens in several ways; prominent amongst them are that we can't really do a good job of calculating probabilities or losses, and that we have distorted views of those probabilities and losses.
Shifting to insufficient communication, this is what I meant by the lulzy statement "We're being out-communicated by people who can't spell." Communication is a two-way street. It involves (amongst many other things) formulating arguments that are designed to be understood, and actively listening to objections and questions raised.
There are two ways I can interpret this. The first is that "Hmmm's" idea of simple isn't really simple (insofar as it breaks something else). Perhaps fixing the breach is as cheap and easy as fixing the configurations, but there are other, higher impact things on the configuration management todo list. I don't know how long that implementation schedule is, nor how long he's been waiting. And perhaps his management went to clown school, not MBA school. I have no way to tell.
What I do know is that often the security professionals I've worked with don't engage in active listening. They believe their path is the right one, and when issues like competing activities in configuration management are brought up, they dismiss the issue and the person who raised it. And you might be right to do so. But does it help you achieve your goal?
Feel free to call me a management apologist, if that's easier than learning how to get stuff done in your organization. | {
"redpajama_set_name": "RedPajamaC4"
} | 4,698 |
Q: How do you show how to get this infinite series for sec?

$$
\pi \sec{\left(\frac{\pi x}{2} \right)} = 4 \sum_{k=0}^{\infty} (-1)^k\frac{2k+1}{(2k+1)^2-x^2} .
$$
Preferably avoiding complex analysis or zig-zag numbers. This is my missing link in understanding the 1/(4k+1) series. I can get the similar series for tan (by differentiating log(sin)), but I have become really stuck with this one, trying similar approaches and looking for trig identities.
A: Start from the partial fraction decomposition $\frac{2k+1}{(2k+1)^2-x^2}=\frac{1}{2}\left(\frac{1}{2k+1-x}+\frac{1}{2k+1+x}\right)$, which splits the sum as follows:
$S= 4\sum\limits_{k=0}^{\infty} (-1)^k\frac{2k+1}{(2k+1)^2-x^2}=2 \sum\limits_{k=0}^{\infty} \frac{(-1)^k}{2k+1-x}+2\sum\limits_{k=0}^{\infty}\frac{(-1)^k}{2k+1+x}$
Split each sum by the parity of $k$, writing $k=2m$ when $k$ is even and $k=2m+1$ when $k$ is odd, so we get:
$S=2 \sum\limits_{m=0}^{\infty} \frac{1}{4m+1-x}-2\sum\limits_{m=0}^{\infty}\frac{1}{4m+3-x}+2\sum\limits_{m=0}^{\infty} \frac{1}{4m+1+x}-2\sum\limits_{m=0}^{\infty}\frac{1}{4m+3+x}$
Introduce the digamma function: $\psi(1+z)=-\gamma+\sum\limits_{k=1}^\infty\big(\frac{1}{k}-\frac{1}{z+k}\big)$
From this series it follows that $\sum\limits_{m=0}^\infty\left(\frac{1}{4m+a}-\frac{1}{4m+b}\right)=\frac{1}{4}\left(\psi\left(\frac{b}{4}\right)-\psi\left(\frac{a}{4}\right)\right)$, so we have:
$S=\frac{1}{2}\big(-\psi(\frac{x+1}{4})+\psi(\frac{x+3}{4})-\psi(\frac{-x+1}{4})+\psi(\frac{-x+3}{4})\big)$
Using the reflection formula for the digamma function:
$\psi(1-z)-\psi(z)=\pi \cot(\pi z)$
We get:
$S=\frac{\pi}{2}\big(\cot(\frac{\pi}{4}(x+1))+\cot(\frac{\pi}{4}(1-x))\big)$
Since the two arguments add up to $\frac{\pi}{2}$, we can use the identity $\cot\theta+\tan\theta=\frac{2}{\sin 2\theta}$; after simplifying:
$S=\frac{\pi}{2}\frac{2}{\cos(\frac{x\pi}{2})}=\pi \sec(x\frac{\pi}{2})$
The statement is proved.
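A quick numerical sanity check (not part of the original answer; the choices of N and x below are illustrative):

import math
def S_partial(x, N=200000):
    # 4 * sum over k of (-1)^k (2k+1) / ((2k+1)^2 - x^2), truncated at N terms
    return 4 * sum((-1)**k * (2*k + 1) / ((2*k + 1)**2 - x**2) for k in range(N))
x = 0.3
print(S_partial(x), math.pi / math.cos(math.pi * x / 2))  # agree to several digits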
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 2,447 |
package thecave.matrixcontrol.sbridge;
import com.fazecast.jSerialComm.SerialPort;
import java.util.Arrays;
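/**
 * A SerialPortFactory that opens the port whose system name matches the
 * name supplied at construction time.
 */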
public class NamedSerialPortFactory implements SerialPortFactory {
private final String portName;
public NamedSerialPortFactory(String portName) {
this.portName = portName;
}
@Override
public SerialPort createPort() {
SerialPort port = null;
for (SerialPort p : SerialPort.getCommPorts()) {
if (portName.equals(p.getSystemPortName())) {
port = p;
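// configure the link: 115200 baud, 8 data bits, 1 stop bit, no parity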
port.setComPortParameters(115200, 8, 1, SerialPort.NO_PARITY);
break;
}
}
return port;
}
public static String[] getPortNames() {
SerialPort[] ports = SerialPort.getCommPorts();
String[] result = new String[ports.length];
for (int i = 0; i < ports.length; i++) {
result[i] = ports[i].getSystemPortName();
}
return result;
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 2,149 |
from tkinter import *
from tkinter import messagebox
from tkinter import simpledialog
import random
#All the chess piece images and the backgrounds are not my original work,
#the creators have been attributed right above where these have been created.
#http://creativecommons.org/licenses/by/4.0/legalcode
#All files have been unaltered besides a change in size and conversion to .gif
#Only the images used for check have been modified to include "CHECK" as text
#The use of any of these pictures has not been endorsed by the creators
class Animation(object):
# Override these methods when creating your own animation
def mousePressed(self, event): pass
def keyPressed(self, event): pass
def timerFired(self): pass
def init(self): pass
def redrawAll(self): pass
# Call app.run(width,height) to get your app started
def run(self, width=300, height=300):
# create the root and the canvas
root = Tk()
self.width = width
self.height = height
self.canvas = Canvas(root, width=width, height=height)
self.canvas.pack()
# set up events
def redrawAllWrapper():
self.canvas.delete(ALL)
self.redrawAll()
self.canvas.update()
def mousePressedWrapper(event):
self.mousePressed(event)
redrawAllWrapper()
def keyPressedWrapper(event):
self.keyPressed(event)
redrawAllWrapper()
root.bind("<Button-1>", mousePressedWrapper)
root.bind("<Key>", keyPressedWrapper)
# set up timerFired events
self.timerFiredDelay = 250 # milliseconds
def timerFiredWrapper():
self.timerFired()
redrawAllWrapper()
# pause, then call timerFired again
self.canvas.after(self.timerFiredDelay, timerFiredWrapper)
# init and get timerFired running
self.init()
timerFiredWrapper()
# and launch the app
root.mainloop()
print("Bye")
class Chess(Animation):
def mousePressed(self, event):
if self.mode == "SplashScreen":
self.SplashScreenmousePressed(event)
if self.mode == "Chess":
self.ChessmousePressed(event)
def keyPressed(self, event):
if self.mode == "SplashScreen":
self.SplashScreenkeyPressed(event)
if self.mode == "Chess":
self.ChesskeyPressed(event)
def timerFired(self):
if self.mode == "SplashScreen":
self.SplashScreentimerFired()
if self.mode == "Chess":
self.ChesstimerFired()
def redrawAll(self):
if self.mode == "SplashScreen":
self.SplashScreenredrawAll()
if self.mode == "Chess":
self.ChessredrawAll()
def SplashScreenmousePressed(self, event):
if self.splashPlayButton.containsPoint(event.x, event.y):
self.mode = "Chess"
def SplashScreenkeyPressed(self, event):
if event.keysym == "p" or event.keysym == "P":
self.mode = "Chess"
def SplashScreentimerFired(self):
pass
def SplashScreenredrawAll(self):
self.canvas.create_image(self.width/2, self.height/2,
image = self.background)
self.canvas.create_image(self.width/2, self.height/2,
image = self.splashPhoto)
self.splashPlayButton.draw(self)
extraGap = 30
self.canvas.create_text(self.width/2, self.height/2 + extraGap, text =
"It doesn't matter who we are...",
fill = "white", font = "Calibiri 30")
gap = 40
self.canvas.create_text(self.width/2, self.height/2 + extraGap + gap,
text = "what matters is our plan.",
fill = "white", font = "Calibiri 30")
Ypos = 2*self.height/5
self.canvas.create_image(self.width/2, Ypos, image = self.title)
def init(self):
filename = "Title.gif"
self.title = PhotoImage(file = filename)
self.mode = "SplashScreen"
filename = "knightBG.gif"
#https://pixabay.com/p-147065/?no_redirect
#public domain
self.splashPhoto = PhotoImage(file = filename)
#http://www.freestockphotos.biz/stockphoto/9090
#By Petr Kratochvil
self.background = PhotoImage(file = "Blue.gif").zoom(2,2)
#the photo's used to indicate Check
#https://pixabay.com/static/uploads/photo/2013/07/12/13/27/knight-147065_640.png
self.whiteCheck = PhotoImage(file = "Checkwhite.gif").subsample(3,3)
#https://pixabay.com/static/uploads/photo/2013/07/12/13/27/knight-147065_640.png
self.blackCheck = PhotoImage(file = "Checkblack.gif").subsample(3,3)
#initializing boards with initial positions
Ypos = 5*self.height/6
Xpos = self.width/12
sizeX, sizeY = 200, 50
self.splashPlayButton = Button("PLAY", Xpos, Ypos, sizeX, sizeY)
self.whiteBoard = Board(0,1)
self.neutralBoard = Board(2,3)
self.blackBoard = Board(4,5)
self.white1 = MovableBoard(self.whiteBoard, 0, 0)
self.white2 = MovableBoard(self.whiteBoard, 0, 4)
self.black1 = MovableBoard(self.blackBoard, 4, 0)
self.black2 = MovableBoard(self.blackBoard, 4, 4)
self.boards = [self.whiteBoard, self.neutralBoard, self.blackBoard,
self.white1, self.white2, self.black1, self.black2]
self.movableBoards = [self.white1, self.white2, self.black1,self.black2]
self.fixedBoards = [self.whiteBoard, self.neutralBoard, self.blackBoard]
#initializing all the buttons at their positions
self.margin = 20
tutorialButtonX, tutorialButtonY = 20, 150
tutorialButtonWidth = 160
tutorialButtonHeight = 30
self.tutorialButton = Button("TUTORIAL MODE", tutorialButtonX,
tutorialButtonY, tutorialButtonWidth, tutorialButtonHeight)
(restartButtonX, restartButtonY) = (tutorialButtonX,
tutorialButtonY + tutorialButtonHeight + 20)
(restartButtonWidth, restartButtonHeight) = (tutorialButtonWidth,
tutorialButtonHeight)
self.restartButton = Button("RESTART", restartButtonX, restartButtonY,
restartButtonWidth, restartButtonHeight)
(helpButtonX, helpButtonY) = (restartButtonX,
restartButtonY + restartButtonHeight + 20)
(helpButtonWidth, helpButtonHeight) = (restartButtonWidth,
restartButtonHeight)
self.helpButton = Button("INSTRUCTIONS", helpButtonX, helpButtonY,
helpButtonWidth, helpButtonHeight)
(AIButtonX, AIButtonY) = (helpButtonX,
helpButtonY + helpButtonHeight + 20)
(AIButtonWidth, AIButtonHeight) = (helpButtonWidth,
helpButtonHeight)
self.AIButton = Button("AI", AIButtonX, AIButtonY,
AIButtonWidth, AIButtonHeight)
self.buttons = [self.tutorialButton, self.restartButton,
self.helpButton, self.AIButton]
self.time = 0 #timer so that AI doesn't move instantaneously
self.lastPiece = None #keep last move highlighted for clarity
self.isGameOver = None
#place pieces on the board
pawnRowWhite, pawnRowBlack = 1, 2
movablePawnRowWhite, movablePawnRowBlack = 1, 0
pieceRowWhite, pieceRowBlack = 0, 3
movablePieceRowWhite, movablePieceRowBlack = 0, 1
self.wGap = self.width/12
self.hGap = self.height/40
fixedRows = fixedCols = len(self.whiteBoard.board)
movableRows = movableCols = len(self.white1.board)
#game states
self.selectedPiece = None
self.selectedBoard = None
self.checked = False
#To display dead pieces
trashCenter = 3*self.margin
trashHeightMargin = 15*self.margin
self.trashCans = [TrashCan(self.width - trashCenter,
self.height- trashHeightMargin, self, "white"),
TrashCan(self.width - trashCenter,
self.height-self.margin, self, "black")]
pieceOrder = (Knight, Bishop, Bishop, Knight)
self.turn = "white"
pieceOrderMovable1 = (Rook, Queen)
pieceOrderMovable2 = (King, Rook)
for col in range(fixedCols):
self.whiteBoard.board[pawnRowWhite][col] = Pawn("white", self)
self.blackBoard.board[pawnRowBlack][col] = Pawn("black", self)
piece = pieceOrder[col]
self.whiteBoard.board[pieceRowWhite][col] = piece("white", self)
self.blackBoard.board[pieceRowBlack][col] = piece("black", self)
for col in range(movableCols):
self.white1.board[movablePawnRowWhite][col] = Pawn("white", self)
self.white2.board[movablePawnRowWhite][col] = Pawn("white", self)
self.black1.board[movablePawnRowBlack][col] = Pawn("black", self)
self.black2.board[movablePawnRowBlack][col] = Pawn("black", self)
piece = pieceOrderMovable1[col]
self.white1.board[movablePieceRowWhite][col] = piece("white", self)
self.black1.board[movablePieceRowBlack][col] = piece("black", self)
piece = pieceOrderMovable2[col]
self.white2.board[movablePieceRowWhite][col] = piece("white", self)
self.black2.board[movablePieceRowBlack][col] = piece("black", self)
#keep track of pieces on and off the board
self.whitePieces, self.blackPieces = [], []
self.removedPiecesWhite, self.removedPiecesBlack = [], []
for board in self.boards:
for row in range(len(board.board)):
for col in range(len(board.board[row])):
piece = board.board[row][col]
if piece != None:
if piece.color == "white":
self.whitePieces.append(piece)
elif piece.color == "black":
self.blackPieces.append(piece)
def ChessmousePressed(self, event):
#movement of attack boards
if self.findMovableBoard(event) != None:
self.selectedBoard = self.findMovableBoard(event)
elif self.selectedBoard != None:
if self.findRowAndCol(event) != None:
(row, col, board) = self.findRowAndCol(event)
if self.selectedBoard.isLegal(row, col, board, self):
self.selectedBoard.moveBoard(row, col, board, self)
self.isChecked()
else:
self.selectedBoard = None
#selecting a piece
elif self.findRowAndCol(event) != None:
(row, col, board) = self.findRowAndCol(event)
piece = board.board[row][col]
#making sure the player who's turn it is selects a piece
if (self.selectedPiece == None and self.selectedBoard == None and
piece != None and piece.color == self.turn):
self.selectedPiece = piece
self.selectedPiece.row, self.selectedPiece.col = row, col
self.selectedPiece.board = board
elif self.selectedPiece != None:
#moving a piece only if the move is valid
if (self.selectedPiece.isLegal(row, col, board, self) and
(piece == None or self.selectedPiece.color != piece.color)):
self.selectedPiece.move(board, row, col, self)
#if a piece is killed, make changes in the lists keeping
#track of the pieces
if piece != None and self.checked == False:
if piece.color == "white":
self.whitePieces.remove(piece)
self.removedPiecesWhite.append(piece)
elif piece.color == "black":
self.blackPieces.remove(piece)
self.removedPiecesBlack.append(piece)
#see if the move resulted in a check
self.isChecked()
if self.checked == True:
#if yes check if the king is in checkmate
self.checkmate()
#otherwise check if the game is in stalemate
else:
self.stalemate()
#check if the menu buttons are clicked, and if yes, act accordingly
elif self.tutorialButton.containsPoint(event.x, event.y):
self.tutorialButton.selected = not(self.tutorialButton.selected)
elif self.helpButton.containsPoint(event.x, event.y):
helpWidth, helpHeight = 800, 500
HelpMenu.run(helpWidth,helpHeight)
elif self.restartButton.containsPoint(event.x, event.y):
answer = messagebox.askyesno("Warning",
"Are you sure you want to restart?")
if answer == True:
self.init()
elif self.AIButton.containsPoint(event.x, event.y):
self.AIButton.selected = not(self.AIButton.selected)
else: #any click outside deselects a piece
self.selectedPiece = None
def ChesskeyPressed(self, event):
#keyboard shortcuts for menu buttons
if event.keysym == "r":
answer = messagebox.askyesno("Warning",
"Are you sure you want to restart?")
if answer == True:
self.init()
if event.keysym == "t":
self.tutorialButton.selected = not(self.tutorialButton.selected)
if event.keysym == "i":
helpWidth, helpHeight = 800, 500
HelpMenu.run(helpWidth,helpHeight)
if event.keysym == "a":
self.AIButton.selected = not(self.AIButton.selected)
def ChesstimerFired(self):
#if AI is enabled, the computer plays as black
if self.AIButton.selected == True:
pieceIndex, boardIndex, rowIndex, colIndex = 0, 1, 2, 3
#finds the best move for the AI and if it's a tie, chooses a
#random one
if self.turn == "black":
self.time += 1
possibleMoves = self.listMoves()
if self.isGameOver != None:
return
elif len(possibleMoves) == 0:
self.isGameOver = "stalemate"
return
bestMoves = self.lowestCost(possibleMoves)
move = random.choice(bestMoves)
#move is a tuple containing info needed to make the move
piece, board = move[pieceIndex], move[boardIndex]
row, col = move[rowIndex], move[colIndex]
if self.time % 4 == 0: #takes a second before every move
otherPiece = board.board[row][col]
#move the piece
piece.move(board, row, col, self)
#update the lists keeping track of the pieces
if otherPiece != None and self.checked == False:
if otherPiece.color == "white":
self.whitePieces.remove(otherPiece)
self.removedPiecesWhite.append(otherPiece)
elif otherPiece.color == "black":
self.blackPieces.remove(otherPiece)
self.removedPiecesBlack.append(otherPiece)
#check if the move results in a check and then for
#checkmate and stalemate
self.isChecked()
if self.checked == True:
self.checkmate()
else:
self.stalemate()
self.time = 0
def ChessredrawAll(self):
#background
self.canvas.create_image(self.width/2, self.height/2,
image = self.background)
#draw boards in the correct order for overlaps to be visually accurate
#The boards furthest away from the user are drawn first and the closer
#ones on top
self.boards.sort(key = lambda x: x.start, reverse = True)
for board in self.boards:
board.draw(self)
if (board == self.whiteBoard or board == self.neutralBoard):
board.drawLines(self)
#lines are used to display the overlap of boards
for trashCan in self.trashCans:
trashCan.draw(self)
self.createTitleBar()
for button in self.buttons:
button.draw(self)
if self.isGameOver != None:
#On a checkmate or stalemate
self.canvas.create_text(self.width/2, self.height/2,
text = self.isGameOver.upper(), font = "Arial 50")
def findRowAndCol(self, event):
#find out the row and col that was clicked. The col is found using
#a linear equation as the col at a particular x coordinate changes
#with respect to the y value at that point.
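#(the boards are drawn in a skewed perspective, so each row is shifted
#horizontally; hence the x-bounds below depend linearly on the y value)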
heightMargin = 6*self.hGap
for board in self.fixedBoards:
(y2, y1) = (self.height - self.margin - board.height*heightMargin,
self.height - self.margin - (board.height*heightMargin) -
board.rows*self.hGap)
(x1, x2) = (-self.wGap/2*(event.y-y2)/self.hGap + self.margin +
(board.start/2 + 1)*self.wGap, -self.wGap/2*(event.y-y2)/
self.hGap + self.margin + (board.start/2 + 1 +
board.rows)*self.wGap)
#hgap and wgap are the spaceing between rows and cols respectively
if y1 < event.y < y2:
if x1 < event.x < x2:
row = int((event.y - y2)*(-1)/self.hGap)
col = int((event.x -x1)/self.wGap)
return (row, col, board)
else:
#check the same for movable boards
for board in self.movableBoards:
(y1, y2) = (self.height-self.margin-board.height*heightMargin
- (board.start - board.fixedBoard.start + 2)*self.hGap,
self.height - self.margin - board.height*heightMargin
- (board.start - board.fixedBoard.start)*self.hGap)
(x1, x2) = (-self.wGap/2*(event.y - y2)/self.hGap + self.margin
+ (board.colStart + board.start/2)*self.wGap,
-self.wGap/2*(event.y - y2)/self.hGap + self.margin +
(board.colStart + board.start/2 + 2)*self.wGap)
if y1 < event.y < y2:
if x1 < event.x < x2:
row = int((event.y - y2)*(-1)/self.hGap)
col = int((event.x - x1)/self.wGap)
return(row, col, board)
def findMovableBoard(self, event):
#to see if a player clicked on the movable/attack board itself
if self.selectedBoard == None and self.selectedPiece == None:
for board in self.movableBoards:
if board.isSelected(event.x, event.y):
return board
def isChecked(self):
#find the position of the king
for board in self.boards:
for row in range(board.rows):
for col in range(board.cols):
piece = board.board[row][col]
if isinstance(piece, King) and piece.color == self.turn:
kingRow, kingCol, kingBoard = row, col, board
for board in self.boards:
#check if any piece of the opposing color can kill the king
for row in range(board.rows):
for col in range(board.cols):
piece = board.board[row][col]
if piece != None and piece.color != self.turn:
piece.row, piece.col, piece.board = row, col, board
if piece.isLegal(kingRow, kingCol, kingBoard, self):
self.checked = True
return
self.checked = False
def checkmate(self):
#if no possible move can be made, only called if the king is checked
possibleMoves = self.listMoves()
if len(possibleMoves) == 0:
self.isGameOver = "checkmate"
return True
else:
return False
def stalemate(self):
#if no possible move can be made and the king is not checked
possibleMoves = self.listMoves()
if len(possibleMoves) == 0:
            self.isGameOver = "stalemate"
return True
else:
return False
def listMoves(self):
#for every piece check if it can legally move to a spot on the board
possibleMoves = list()
pieces = self.whitePieces if self.turn == "white" else self.blackPieces
for piece in pieces:
for board in self.boards:
for row in range(board.rows):
for col in range(board.cols):
if piece.isLegal(row, col, board, self):
                            #make a pseudo-legal move - one that is legal when
                            #check is ignored. Then, if it doesn't result in a
                            #check, add it to the list of possible moves
piece.move(board, row, col, self, trial = True)
#trial makes sure the move is undone after testing
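                            #after a trial move, self.checked still reflects
                            #the trial position; the isChecked() call after
                            #the loops restores the real state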
otherPiece = board.board[row][col]
if self.checked == False:
#calculate the cost of a move(useful for AI)
cost = self.calculateCost(piece, otherPiece)
#store the move as tuple of information
currentMove = (piece, board, row, col, cost)
possibleMoves.append(currentMove)
self.isChecked()
return possibleMoves
def calculateCost(self, piece, otherPiece):
        #each piece has a value associated with it; return the (negated)
        #value of the piece captured, or 0 if no piece is captured
if otherPiece == None:
return 0
if otherPiece.color != piece.color:
return -otherPiece.points
else:
return otherPiece.points
def lowestCost(self, possibleMoves):
#for every possible move, calculate the cost of that move minus the
#best move of the opponent corresponding to that move
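        #This is effectively a two-ply, material-only minimax: for each black
        #move, assume white replies with its own lowest-cost move, and keep
        #the black moves that minimise their cost minus white's best reply;
        #ties are collected so the caller can pick one at random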
pieceIndex, boardIndex, rowIndex, colIndex = 0, 1, 2, 3
costIndex = 4
bestCost = None
bestMoves = []
for i in range(len(possibleMoves)):
#calculate cost of current move
currentMove = possibleMoves[i]
cost = currentMove[costIndex]
piece, board = currentMove[pieceIndex], currentMove[boardIndex]
row, col = currentMove[rowIndex], currentMove[colIndex]
self.turn = "black"
piece.move(board, row, col, self)
self.isChecked()
            if self.checked == True: #incentive to check
                if self.checkmate() == True: #always play a checkmating move
                    #undo the trial move and restore the state so the caller
                    #can make the checkmating move for real
                    self.isGameOver = None
                    piece.undoMove(row, col, board)
                    self.turn = "black"
                    self.isChecked()
                    return [currentMove]
possibleMovesWhite = self.listMoves()
bestNewCost = None
for j in range(len(possibleMovesWhite)): #calculate opponents cost
nextMove = possibleMovesWhite[j]
newCost = nextMove[costIndex]
if bestNewCost == None or newCost < bestNewCost:
bestNewCost = newCost
#subtract it from cost to get the cost over 2 moves
totalCost = cost - bestNewCost if bestNewCost != None else cost
if bestCost == None or totalCost < bestCost:
bestCost = totalCost
bestMoves = [currentMove]
elif totalCost == bestCost:
bestMoves.append(currentMove)
piece.undoMove(row, col, board)
#take the board back to the original state
self.turn = "black"
self.isChecked()
return bestMoves
def createTitleBar(self):
        #The title bar shows the player name and the relative strength of each
#player based on remaining pieces
x = y = 20
height = 30
margin = 10
totalWidth = 240
whitePoints = self.getWhitePoints()
blackPoints = self.getBlackPoints()
        totalPoints = max(whitePoints + blackPoints, 1) #avoid dividing by 0
        whiteWidth = totalWidth*whitePoints/totalPoints
#relative strength
color1 = "white" if self.turn == "black" else "red"
color2 = "black" if self.turn == "white" else "red"
        #The player whose turn it is, is displayed in red
self.canvas.create_rectangle(x, y, x + totalWidth, y + height)
self.canvas.create_rectangle(x, y, x + whiteWidth, y + height,
width = 0, fill = "thistle4")
self.canvas.create_rectangle(x + whiteWidth, y, x + totalWidth,
y + height, width = 0, fill = "thistle3")
self.canvas.create_text(x + margin, y + height/2,
fill = color1, text = "Player W", anchor = W)
self.canvas.create_text(x + totalWidth - margin, y + height/2,
fill = color2, text = "Player B", anchor = E)
if self.checked == True:
#image of the knight taken from:
#https://pixabay.com/en/knight-black-chess-figure-game-147065/
#modified to include the "CHECK"
img = self.whiteCheck if self.turn == "white" else self.blackCheck
self.canvas.create_image(x + totalWidth/2, y + height, image = img,
anchor = N)
def getWhitePoints(self):
totalPoints = 0
for piece in self.whitePieces:
if isinstance(piece, King) == False:
totalPoints += piece.points
return totalPoints
def getBlackPoints(self):
totalPoints = 0
for piece in self.blackPieces:
if isinstance(piece, King) == False:
totalPoints += piece.points
return totalPoints
class Board(object):
def __init__(self, height, start):
self.height = height
self.start = start
self.colStart = 1
self.rows = self.cols = 4
self.board = [[None] * self.cols for row in range(self.rows)]
#initialize an empty board
def draw(self, game):
startMargin = self.start*game.wGap/2 + self.colStart*game.wGap
heightMargin = self.height*6*game.hGap
if isinstance(self, MovableBoard):
start = self.start - self.fixedBoard.start
#adjust position of movable board if it isn't fixed at 0,0
else:
start = 0 #for fixed boards
for row in range(self.rows):
for col in range(self.cols + 1):
self.fill = "wheat" if (row+col)%2 == 1 else "brown"
#give a slanted 3D look
(x0,y0) = (startMargin + game.margin + (col + row/2)*game.wGap,
game.height - (row+start)*game.hGap - game.margin -
heightMargin)
x1,y1 = x0 + game.wGap/2, y0 - game.hGap
x2,y2 = x1 + game.wGap, y1
x3,y3 = x2 - game.wGap/2, y0
if col == self.cols: #numbers to help identify rows
game.canvas.create_text(x0 + game.wGap/2, (y0 + y2)/2,
text = str(row + self.start), anchor = N)
continue
piece = self.board[row][col]
#display all possible moves of a selected piece in tutorial mode
if (game.tutorialButton.selected == True and
game.selectedPiece != None and
game.selectedPiece.isLegal(row, col, self, game) and
(piece == None or piece.color != game.selectedPiece.color)):
game.selectedPiece.move(self, row, col, game, trial = True)
if game.checked == False:
#account for pseudo legal moves
self.fill = "blue"
game.isChecked()
game.canvas.create_polygon(x0, y0, x1, y1, x2, y2, x3, y3,
fill = self.fill, activefill = "red", outline = "black", width = 2)
#create each block/square
for row in range(self.rows): #draw all pieces at the correct position
for col in range(self.cols):
(x0,y0) = (startMargin + game.margin + (col + row/2)*game.wGap,
game.height - (row + start)*game.hGap - game.margin -
heightMargin)
piece = self.board[row][col]
if piece != None:
x, y = (2*x0 + 1.5*game.wGap)/2, (2*y0 - game.hGap)/2
piece.draw(game, x, y)
piece.board, piece.row, piece.col = self, row, col
def drawLines(self, game):
#lines help the user see overlapping rows
startMargin = self.start*game.wGap/2 + self.colStart*game.wGap
heightMargin = self.height*6*game.hGap
(leftGap, rightGap) = self.findGap(game)
for row in range(self.rows):
for col in range(self.cols):
(x0,y0) = (startMargin + game.margin + (col + row/2)*game.wGap,
game.height - (row)*game.hGap - game.margin - heightMargin)
x1,y1 = x0 + game.wGap/2, y0 - game.hGap
x2,y2 = x1 + game.wGap, y1
x3,y3 = x2 - game.wGap/2, y0
if type(self) == Board:
if (row, col) == (2, 0):
game.canvas.create_line(x0, y0, x0, y0 - 10*game.hGap)
elif (row, col) == (3, 0):
game.canvas.create_line(x1, y1 - leftGap, x1, y1 -
8*game.hGap)
game.canvas.create_line(x1, y1 - 8*game.hGap, x1,
y1 - 10*game.hGap, fill = "grey")
#The colour is different as this part of the line is behind the other board
elif (row, col) == (3, 3):
game.canvas.create_line(x2, y2 - rightGap, x2, y2 -
10*game.hGap)
elif (row, col) == (2, 3):
game.canvas.create_line(x3, y3, x3, y3 - 10*game.hGap)
def findGap(self, game):
#make changes to the line length so that it doesn't cross a
#movable board on top of it
leftGap, rightGap = 0, 0
for board in game.movableBoards:
if board.fixedBoard == self:
start = board.start - board.fixedBoard.start + 1
if start == 4:
leftGap = 6*game.hGap if board.colStart == 0 else leftGap
rightGap = 6*game.hGap if board.colStart == 4 else rightGap
return (leftGap, rightGap)
class MovableBoard(Board):
def __init__(self, fixedBoard, start, colStart):
self.fixedBoard = fixedBoard #keep a reference to a fixed board
self.height = self.fixedBoard.height + 1
self.start = start + self.fixedBoard.start - 1
self.colStart = colStart
self.rows = self.cols = 2
self.board = [[None] * self.cols for row in range(self.rows)]
def draw(self, game): #draw a thick line for selection purposes
super().draw(game)
startMargin = self.start*game.wGap/2 + self.colStart*game.wGap
heightMargin = self.height*6*game.hGap
start = self.start - self.fixedBoard.start
self.lineX = startMargin + game.margin + 1.5*game.wGap
self.lineY = game.height - start*game.hGap - game.margin - heightMargin
self.lineHeight = 5*game.hGap
self.lineWidth = 2.5
self.lineColor = "black" if self != game.selectedBoard else "red"
game.canvas.create_rectangle(self.lineX - self.lineWidth,
self.lineY, self.lineX + self.lineWidth,
self.lineY + self.lineHeight, fill = self.lineColor)
def isSelected(self, x, y): #click on the line for movement
try:
return ((self.lineX - self.lineWidth < x < self.lineX +
self.lineWidth) and (self.lineY < y < self.lineY + self.lineHeight))
        except AttributeError: #lineX etc. don't exist before the first draw
            return False
def hasOnePiece(self, game):
#can only move it if has one piece of the player making the move and
#nothing else
pieceCount = 0
for row in range(self.rows):
for col in range(self.cols):
piece = self.board[row][col]
if piece != None and piece.color != game.turn:
return False
elif piece != None and piece.color == game.turn:
pieceCount += 1
return pieceCount == 1
def isLegal(self, row, col, fixedBoard, game):
#can only move at adjacent corners of boards
if (self.hasOnePiece(game) == True):
if (row, col) == (0, 0):
start = row + fixedBoard.start - 1
colStart = col
elif (row, col) == (3,3):
start = row + fixedBoard.start
colStart = col + 1
elif (row, col) == (3, 0):
start = row + fixedBoard.start
colStart = col
elif (row, col) == (0, 3):
start = row + fixedBoard.start - 1
colStart = col + 1
else:
return False
if self.fixedBoard == fixedBoard:
return (abs(start - self.start) +
abs(colStart - self.colStart) == 4)
elif self.fixedBoard != fixedBoard:
return (abs(start - self.start) +
abs(colStart - self.colStart) == 2)
def moveBoard(self, row, col, fixedBoard, game):
#move the board to the new position
oldBoard, oldStart, oldColstart = self.board, self.start, self.colStart
self.fixedBoard = fixedBoard
self.height = self.fixedBoard.height + 1
if row == 0: start = 0
elif row == 3: start = 4
if col == 0: colStart = 0
elif col == 3: colStart = 4
self.start = start + self.fixedBoard.start - 1
self.colStart = colStart
game.isChecked() #account for pseudo legal moves so that check is taken
#into account
if game.checked == True:
self.start, self.colStart = oldStart, oldColstart
self.board = oldBoard
return
game.selectedBoard = None
game.turn = "black" if game.turn == "white" else "white"
#NEVER CREATE A PIECE, ALWAYS CREATE A SUBCLASS
class Piece(object):
def __init__(self, color, game):
self.row = self.col = self.board = None
self.color = color
self.moves = 0
def draw(self, game):
#the last moved piece is green and the selected piece is red
self.drawColor = "red" if self == game.selectedPiece else self.color
self.drawColor="lime green" if self==game.lastPiece else self.drawColor
filename = "ChessPieces/Chess_%s_%s.gif" % (self.name, self.color)
self.photo = PhotoImage(file = filename).subsample(2, 2)
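        #keeping the PhotoImage as an attribute matters: Tkinter only shows
        #an image while a Python reference to it is still alive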
def __repr__(self):
return self.name
def noPieceInWay(self, TgtRow, TgtCol, drow, dcol, row, col, game,
selfBoard = None):
#recursively check each position in a given direction to see that a
#clear path to the destination exists
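        #In this 3D variant a sliding path may continue across boards: when
        #the path leaves the current board, look for another board holding
        #the same global (row, col) with an empty square, since "vertical"
        #transfers between overlapping boards do not count as moves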
if selfBoard == None:
selfBoard = self.board
row += drow
col += dcol
if row == TgtRow and col == TgtCol: #reached position
return True
try: #because it fails on certain moves made by a movable board where
#the row or col is outside the main board
if (row >= selfBoard.start + selfBoard.rows or
row < selfBoard.start or col >= selfBoard.colStart +
selfBoard.cols or col < selfBoard.colStart or
selfBoard.board[row - selfBoard.start][col -selfBoard.colStart]
!= None):
#find new boards with the same row and col, as vertical
#movements don't count as moves
flag = False
for board in game.boards:
if flag == False and board != selfBoard:
for boardRow in range(len(board.board)):
if (board.colStart > col or col - board.colStart
>= board.cols): #account for movable boards
continue
if (row == boardRow + board.start and
(board.board[boardRow][col - board.colStart] == None)):
#found a board with the same row and col!
selfBoard = board
flag = True #don't check the rest
if flag == False:
return False
        except Exception: #off-board geometry from movable-board moves
return False
if row == TgtRow and col == TgtCol:
return True #reached
else:
return self.noPieceInWay(TgtRow, TgtCol, drow, dcol, row, col,
game, selfBoard) #test the next block/square in the same way
def move(self, board, row, col, game, trial = False):
#trial is for AI and check functions
(self.oldRow, self.oldCol, self.oldBoard, self.oldPiece) = (
self.row, self.col, self.board, board.board[row][col])
#in case we need to undo the move
flag = True
if ((board.board[row][col] == None or
self.color != board.board[row][col].color) and
type(board.board[row][col]) != King): #make a pseudo legal move
self.oldBoard.board[self.row][self.col] = None
board.board[row][col] = self
self.row, self.col = row, col
self.board = board
game.isChecked()
if (game.checked == True) or (trial == True):
self.undoMove(row, col, board) #undo the move if its a trial or
#isn't possible because of check
return
else:
self.moves += 1 #to keep track of pawn double moves
game.lastPiece = self #to display the last moved piece
flag = False
game.selectedPiece = None
game.turn = "black" if game.turn == "white" else "white"
#change turn when an actual move is made
if flag == True:
game.isChecked() #restore the check state to the original value
#if move conditions weren't met
def undoMove(self, row, col, board):
#take the piece to its old place and restore the value of the new place
#to what it previously was
self.oldBoard.board[self.oldRow][self.oldCol] = self
board.board[row][col] = self.oldPiece
(self.row, self.col, self.board) = (self.oldRow, self.oldCol,
self.oldBoard)
class Pawn(Piece):
def __init__(self, color, game):
super().__init__(color, game)
self.name = "Pawn"
self.points = 1
#"Chess pdt60" by File:Chess pdt45.svg. Licensed under CC BY-SA 3.0 via
#Wikimedia Commons -
#https://commons.wikimedia.org/wiki/File:Chess_pdt60.png#/media/
#File:Chess_pdt60.png
#that's all one url
#"Chess plt60" by File:Chess plt45.svg. Licensed under CC BY-SA 3.0 via
#Wikimedia Commons -
#https://commons.wikimedia.org/wiki/File:Chess_plt60.png#/media/
#File:Chess_plt60.png
#that's all one url
def draw(self, game, x, y):
super().draw(game)
game.canvas.create_image(x, y + game.hGap/2, image = self.photo,
anchor = S)
game.canvas.create_text(x,y, text = "P", fill = self.drawColor)
if self.color == "white": #Pawn reached the end! Upgrade to Queen
if type(self.board) == MovableBoard:
if self.row + self.board.start == 9:
self.convertPawn(game)
elif type(self.board) == Board:
if self.row + self.board.start == 8:
self.convertPawn(game)
if self.color == "black":
if type(self.board) == MovableBoard:
if self.row + self.board.start == 0:
self.convertPawn(game)
elif type(self.board) == Board:
if self.row + self.board.start == 1:
self.convertPawn(game)
def convertPawn(self, game):
self.board.board[self.row][self.col] = Queen(self.color, game)
pieces= game.whitePieces if self.color == "white" else game.blackPieces
pieces.append(self.board.board[self.row][self.col])
pieces.remove(self)
def isLegal(self, boardRow, boardCol, board, game):
otherPiece = board.board[boardRow][boardCol]
if otherPiece != None and otherPiece.color == self.color:
return False #only move to an empty spot or kill an opponent
self.prevBoard = self.board
if self == board.board[boardRow][boardCol]: #can't move on itself
return False
row = boardRow + board.start
col = boardCol + board.colStart
if self.color == "white":
#move 2 places on the first move
            if self.moves == 0 and row - (self.row + self.board.start) == 2:
drow, dcol = 1, 0
return self.noPieceInWay(row, col, drow, dcol,
self.row + self.board.start, self.col + self.board.colStart, game)
#move only 1 on the other moves
if row - (self.row + self.board.start) != 1:
return False
elif self.color == "black":
#move 2 places on the first move
            if self.moves == 0 and (self.row + self.board.start) - row == 2:
drow, dcol = -1, 0
return self.noPieceInWay(row, col, drow, dcol,
self.row + self.board.start, self.col + self.board.colStart, game)
#move only 1 on the other moves
if (self.row + self.board.start) - row != 1:
return False
#col changes if it kills a piece and doesn't otherwise
if board.board[boardRow][boardCol] != None:
return abs(col - (self.col + self.board.colStart)) == 1
else:
return col == self.col + self.board.colStart
class King(Piece):
def __init__(self, color, game):
super().__init__(color, game)
self.name = "King"
self.points = 0
#"Chess kdt60" by File:Chess kdt45.svg. Licensed under CC BY-SA 3.0 via
#Wikimedia Commons -
#https://commons.wikimedia.org/wiki/File:Chess_kdt60.png#/media/
#File:Chess_kdt60.png
#"Chess klt60" by File:Chess klt45.svg. Licensed under CC BY-SA 3.0 via
#Wikimedia Commons -
#https://commons.wikimedia.org/wiki/File:Chess_klt60.png#/media/
#File:Chess_klt60.png
def draw(self, game, x, y):
super().draw(game)
game.canvas.create_image(x, y + game.hGap/2, image = self.photo,
anchor = S)
game.canvas.create_text(x, y, text = "K", fill = self.drawColor)
def isLegal(self, boardRow, boardCol, board, game):
#Move one place in any direction
otherPiece = board.board[boardRow][boardCol]
if otherPiece != None and otherPiece.color == self.color:
return False
self.prevBoard = self.board
if self == board.board[boardRow][boardCol]:
return False
row = boardRow + board.start
col = boardCol + board.colStart
return (abs(col - (self.col + self.board.colStart)) <= 1 and
abs(row - (self.row + self.board.start)) <= 1)
class Queen(Piece):
def __init__(self, color, game):
super().__init__(color, game)
self.name = "Queen"
self.points = 9
#"Chess qdt60" by File:Chess qdt45.svg. Licensed under CC BY-SA 3.0 via
#Wikimedia Commons -
#https://commons.wikimedia.org/wiki/File:Chess_qdt60.png#/media/
#File:Chess_qdt60.png
#"Chess qlt60" by File:Chess qlt45.svg. Licensed under CC BY-SA 3.0 via
#Wikimedia Commons -
#https://commons.wikimedia.org/wiki/File:Chess_qlt60.png#/media/
#File:Chess_qlt60.png
def draw(self, game, x, y):
super().draw(game)
game.canvas.create_image(x, y + game.hGap/2, image = self.photo,
anchor = S)
game.canvas.create_text(x, y, text = "Q", fill = self.drawColor)
def isLegal(self, boardRow, boardCol, board, game):
otherPiece = board.board[boardRow][boardCol]
if otherPiece != None and otherPiece.color == self.color:
return False
self.prevBoard = self.board
if self == board.board[boardRow][boardCol]:
return False
row = boardRow + board.start
col = boardCol + board.colStart
#Rook movement, i.e. move in a straight line. Find the direction of
#movement
if self.col + self.board.colStart - col == 0:
dcol = 0
if self.row + self.board.start - row > 0:
drow = -1
elif self.row + self.board.start - row < 0:
drow = 1
else:
return False
elif self.row + self.board.start - row == 0:
drow = 0
if self.col + self.board.colStart - col > 0:
dcol = -1
elif self.col + self.board.colStart - col < 0:
dcol = 1
else:
return False
else:
#Bishop movement, i.e. diagonal movement. Find the direction.
if (self.row + self.board.start - row == self.col +
self.board.colStart - col):
if self.col + self.board.colStart - col < 0:
drow, dcol = 1, 1
else:
drow, dcol = -1, -1
elif (self.row + self.board.start - row == col -
self.col - self.board.colStart):
if col - self.col - self.board.colStart > 0:
drow, dcol = -1, 1
else:
drow, dcol = 1, -1
else:
return False
#check if there is a free path to the destination
return self.noPieceInWay(row, col, drow, dcol,
self.row + self.board.start, self.col + self.board.colStart, game)
class Bishop(Piece):
def __init__(self, color, game):
super().__init__(color, game)
self.name = "Bishop"
self.points = 3
#"Chess bdt60" by File:Chess bdt45.svg. Licensed under CC BY-SA 3.0 via
#Wikimedia Commons -
#https://commons.wikimedia.org/wiki/File:Chess_bdt60.png#/media/
#File:Chess_bdt60.png
#"Chess blt60" by File:Chess blt45.svg. Licensed under CC BY-SA 3.0 via
#Wikimedia Commons -
#https://commons.wikimedia.org/wiki/File:Chess_blt60.png#/media/
#File:Chess_blt60.png
def draw(self, game, x, y):
super().draw(game)
game.canvas.create_image(x, y + game.hGap/2, image = self.photo,
anchor = S)
game.canvas.create_text(x, y, text = "B", fill = self.drawColor)
def isLegal(self, boardRow, boardCol, board, game):
otherPiece = board.board[boardRow][boardCol]
if otherPiece != None and otherPiece.color == self.color:
return False
self.prevBoard = self.board
if self == board.board[boardRow][boardCol]:
return False
row = boardRow + board.start
col = boardCol + board.colStart
if (self.row + self.board.start - row ==
self.col + self.board.colStart - col):
if self.col + self.board.colStart - col < 0:
drow, dcol = 1, 1
else:
drow, dcol = -1, -1
elif (self.row + self.board.start - row ==
col - self.col - self.board.colStart):
if col - self.col - self.board.colStart > 0:
drow, dcol = -1, 1
else:
drow, dcol = 1, -1
else:
return False
return self.noPieceInWay(row, col, drow, dcol,
self.row + self.board.start, self.col + self.board.colStart, game)
class Knight(Piece):
def __init__(self, color, game):
super().__init__(color, game)
self.name = "Knight"
self.points = 3
#"Chess ndt60" by File:Chess ndt45.svg. Licensed under CC BY-SA 3.0 via
#Wikimedia Commons -
#https://commons.wikimedia.org/wiki/File:Chess_ndt60.png#/media/
#File:Chess_ndt60.png
#"Chess nlt60" by File:Chess nlt45.svg. Licensed under CC BY-SA 3.0 via
#Wikimedia Commons -
#https://commons.wikimedia.org/wiki/File:Chess_nlt60.png#/media/
#File:Chess_nlt60.png
def draw(self, game, x, y):
super().draw(game)
game.canvas.create_image(x, y + game.hGap/2, image = self.photo,
anchor = S)
game.canvas.create_text(x, y, text = "N", fill = self.drawColor)
def isLegal(self, boardRow, boardCol, board, game):
otherPiece = board.board[boardRow][boardCol]
if otherPiece != None and otherPiece.color == self.color:
return False
self.prevBoard = self.board
if self == board.board[boardRow][boardCol]:
return False
row = boardRow + board.start
col = boardCol + board.colStart
#Move 3 places but not in a straight line
return ((abs(col - (self.col + self.board.colStart)) +
abs(row - (self.row + self.board.start)) == 3) and
(abs(col - (self.col + self.board.colStart)) != 0) and
(abs(row - (self.row + self.board.start)) != 0))
class Rook(Piece):
def __init__(self, color, game):
super().__init__(color, game)
self.name = "Rook"
self.points = 5
#"Chess rdt60" by File:Chess rdt45.svg. Licensed under CC BY-SA 3.0 via
#Wikimedia Commons -
#https://commons.wikimedia.org/wiki/File:Chess_rdt60.png#/media/
#File:Chess_rdt60.png
#"Chess rlt60" by File:Chess rlt45.svg. Licensed under CC BY-SA 3.0 via
#Wikimedia Commons -
#https://commons.wikimedia.org/wiki/File:Chess_rlt60.png#/media/
#File:Chess_rlt60.png
def draw(self, game, x, y):
super().draw(game)
game.canvas.create_image(x, y + game.hGap/2, image = self.photo,
anchor = S)
game.canvas.create_text(x, y, text = "R", fill = self.drawColor)
def isLegal(self, boardRow, boardCol, board, game):
otherPiece = board.board[boardRow][boardCol]
if otherPiece != None and otherPiece.color == self.color:
return False
self.prevBoard = self.board
if self == board.board[boardRow][boardCol]:
return False
row = boardRow + board.start
col = boardCol + board.colStart
if self.col + self.board.colStart - col == 0:
dcol = 0
if self.row + self.board.start - row > 0:
drow = -1
elif self.row + self.board.start - row < 0:
drow = 1
else:
return False
elif self.row + self.board.start - row == 0:
drow = 0
if self.col + self.board.colStart - col > 0:
dcol = -1
elif self.col + self.board.colStart - col < 0:
dcol = 1
else:
return False
else:
return False
return self.noPieceInWay(row, col, drow, dcol,
self.row + self.board.start, self.col + self.board.colStart, game)
class TrashCan(object): #display the dead
def __init__(self, x, y, game, color):
self.x = x
self.y = y
self.color = color
def draw(self, game):
height = 180
width = 80
deltaWidth = 5
deltaHeight = 10
deltaArc = 40
game.canvas.create_rectangle(self.x - width/2, self.y, self.x +
width/2, self.y - height, fill = self.color, outline = self.color)
game.canvas.create_rectangle(self.x - width/2 - deltaWidth, self.y -
deltaHeight, self.x + width/2 + deltaWidth, self.y + deltaHeight,
fill = self.color, outline = self.color)
game.canvas.create_arc(self.x - width/2 - deltaWidth, self.y - height -
deltaArc, self.x + width/2 + deltaWidth, self.y - height +
deltaArc ,fill = self.color, outline = self.color,
extent = 180)
colGap = 20
rowGap = 40
piecesPerRow = 3
leftEnd = self.x - colGap
        #iterate through the list of removed pieces, and draw them in the
#correct trashcan
if self.color == "white":
for i in range(len(game.removedPiecesBlack)):
piece = game.removedPiecesBlack[i]
row = i//piecesPerRow
col = i%piecesPerRow
piece.draw(game, leftEnd + col*colGap, self.y - row*rowGap)
elif self.color == "black":
for i in range(len(game.removedPiecesWhite)):
piece = game.removedPiecesWhite[i]
row = i//piecesPerRow
col = i%piecesPerRow
piece.draw(game, leftEnd + col*colGap, self.y - row*rowGap)
class Button(object):
def __init__(self, name, x, y, width, height):
self.name = name
self.x = x
self.y = y
self.width = width
self.height = height
self.selected = False
def containsPoint(self, x, y):
#if the x and y coordinate lie in the area covered by the button
if (self.x < x < self.x + self.width and self.y < y <
self.y + self.height):
return True
else:
return False
def draw(self, game):
#add spaces to the text so that activefill works properly
totalChars = round(self.width/10)
spaces = int((totalChars - len(self.name))/2)
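        #the padding widens the canvas text item so its hover (activefill)
        #region covers most of the button, not just the label's characters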
self.fill = "green" if self.selected == True else "pink"
if self.name == "PLAY": self.fill = "thistle3"
game.canvas.create_rectangle(self.x, self.y, self.x + self.width,
self.y + self.height, fill = self.fill)
game.canvas.create_text(self.x + self.width/2, self.y + self.height/2,
fill = "blue", text = " "*spaces + self.name + " "*spaces,
font = "Menlo 16", activefill = "red",)
class Instructions(Animation):
#new window so people can learn and play side by side
def run(self, width=300, height=300):
# create the root and the canvas
root = Toplevel()
self.width = width
self.height = height
self.canvas = Canvas(root, width=width, height=height)
self.canvas.pack()
# set up events
def redrawAllWrapper():
self.canvas.delete(ALL)
self.redrawAll()
self.canvas.update()
def mousePressedWrapper(event):
self.mousePressed(event)
redrawAllWrapper()
def keyPressedWrapper(event):
self.keyPressed(event)
redrawAllWrapper()
root.bind("<Button-1>", mousePressedWrapper)
root.bind("<Key>", keyPressedWrapper)
# set up timerFired events
self.timerFiredDelay = 250 # milliseconds
def timerFiredWrapper():
self.timerFired()
redrawAllWrapper()
# pause, then call timerFired again
self.canvas.after(self.timerFiredDelay, timerFiredWrapper)
# init and get timerFired running
self.init()
timerFiredWrapper()
# and launch the app
root.mainloop()
print("Bye")
def init(self):
#image- https://www.flickr.com/photos/finlap/3684612986 #blue vignette
#By- sure2talk
#https://creativecommons.org/licenses/by/2.0/legalcode
#This picture has been used as a background and no other modification
#has been made.
self.pages = 6
self.page = 1
self.pageDict = {1: "Home", 2: "RulesPt1", 3: "RulesPt2",
4: "GameplayMvmt", 5: "GameplayButtons", 6: "GameplayWidgets"}
#a dictionary with filenames for the image corresponding to the page
#number
self.photoList = list()
#store the photoimage for all the images
for i in range(1, self.pages + 1):
            filename = "attachments/" + self.pageDict[i] + ".gif"
photo = PhotoImage(file = filename).subsample(2,2)
self.photoList.append(photo)
self.margin = 20
buttonWidth = 90
buttonHeight = 30
#to move through pages
self.nextButton = Button("NEXT", self.width - self.margin -
buttonWidth, self.height - self.margin - buttonHeight,
buttonWidth, buttonHeight)
self.previousButton = Button("PREVIOUS", self.margin, self.height -
self.margin - buttonHeight, buttonWidth, buttonHeight)
def mousePressed(self, event):
#change page number on clicking buttons
if (self.nextButton.containsPoint(event.x, event.y) and
self.page < self.pages):
self.page += 1
if (self.previousButton.containsPoint(event.x, event.y) and
self.page > 1):
self.page -= 1
def keyPressed(self, event):
#change page number on pressing arrow keys
if event.keysym == "Right" and self.page < self.pages:
self.page += 1
if event.keysym == "Left" and self.page > 1:
self.page -= 1
def timerFired(self):
pass
def redrawAll(self):
#draw the photo corresponding to the page number
self.photo = self.photoList[self.page - 1]
self.canvas.create_image(0, 0, image = self.photo, anchor = NW)
#don't draw previous on the first page and next on the last page
if self.page < self.pages:
self.nextButton.draw(self)
if self.page > 1:
self.previousButton.draw(self)
HelpMenu = Instructions()
App = Chess()
App.run(1000, 700)
{"url":"https:\/\/www.cableizer.com\/documentation\/JR_p\/","text":"# Jam ratio in duct\n\nJamming is the wedging of three or more cables when pulled into a conduit. This usually occurs because of crossovers when the cables twist or are pulled around bends. The jam ratio is the ratio of the conduit inner diameter $Dd_i$ and the cable outside diameter $D_e$.\n\nIn calculating jamming probabilities, a 5% factor was used to account for the oval cross-section of conduit bends.\n\nThe cable diameters should be measured, since actual diameters may vary from the published nominal values.\n\nSymbol\n$JR_{p}$\nUnit\n%\nFormulae\n$\\frac{1.05 Dd_{i}}{D_{e}}$\nRelated\n$D_{e}$\nChoices\nProbability for jamminglower rangeupper range\nvery small$JR_p$ < 2.4$JR_p$ \u2265 3.2\nsmall2.4 \u2264 $JR_p$ < 2.53.0 \u2264 $JR_p$ < 3.2\nmoderate2.5 \u2264 $JR_p$ < 2.62.9 \u2264 $JR_p$ < 3.0\nsignificant2.6 \u2264 $JR_p$$JR_p$ < 2.9","date":"2020-08-05 18:42:07","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.30048930644989014, \"perplexity\": 8162.23517632244}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-34\/segments\/1596439735964.82\/warc\/CC-MAIN-20200805183003-20200805213003-00577.warc.gz\"}"} | null | null |
{"url":"https:\/\/www.springerprofessional.de\/en\/advances-in-design-simulation-and-manufacturing-ii\/16777912","text":"main-content\n\nThis book reports on topics at the interface between manufacturing, mechanical and chemical engineering. It gives special emphasis to CAD\/CAE systems, information management systems, advanced numerical simulation methods and computational modeling techniques, and their use in product design, industrial process optimization and in the study of the properties of solids, structures, and fluids. Control theory, ICT for engineering education as well as ecological design, and food technologies are also among the topics discussed in the book. Based on the 2nd International Conference on Design, Simulation, Manufacturing: The Innovation Exchange (DSMIE-2019), held on June 11-14, 2019, in Lutsk, Ukraine, the book provides academics and professionals with a timely overview and extensive information on trends and technologies behind current and future developments of Industry 4.0, innovative design and renewable energy generation.\n\n### Case Study of Model-Based Definition and Mixed Reality Implementation in Product Lifecycle\n\nIn the new era of Industry 4.0, new strategies and approaches for the use and storage of product and data processing over the entire product life cycle are developed. One of these approaches is Model-Based Enterprise\/Definition. This approach attempts to place the relevant information in a single model, avoiding unnecessary interfaces between different tools and documents and redundant data. Another big trend of the recent time in the industry is the use of virtual and mixed reality in different phases of the product life cycle. The images of product geometry and information can be used virtually in the real environment and the design and processes can be validated before the production begins. This research work aims to research the possibility to combine and practically use the both mentioned technologies in the industrial enterprise processes for certain phases in the product life cycle. The present work examines the possibilities of integrating the advantages of model-based definition into mixed reality and making them usable for business processes. The proposed approach was validated for the valve quality assurance process at Siemens P&G.\n\nDmytro Adamenko, Robin Pluhnau, Arun Nagarajah\n\n### Design Optimization Techniques in Mechanical Design and Education of Engineers\n\nWith each year computerized tools for mechanical design become more comprehensive and contain a great variety of tools for engineers. Using the tools as black box solutions contains a lot of risks. In the field of CAD\/CAE in mechanical design one of the hot topics is the optimization of the shape of a structure complying to certain boundary conditions to make a structure lighter and thus more economic. Although these techniques have been described a long time ago, through to recent developments in additive manufacturing and other prototyping techniques which make it possible to make the resulting structures, the tools are currently gaining a lot of importance and are being implemented. In the article we consider a case study to show the differences between topological and parametrical optimization for the same task. 
Based on this example the authors would like to stress the importance of the correct implementation of these two approaches and the importance of teaching the methodology and not only tools in the engineering study.\n\nPeter Arras, Galyna Tabunshchyk\n\n### Prospects of the Implementation of Modular Charging Stations Based on IoT to the Infrastructure of the Automotive Industry\n\nThe article is focused on the investigation of the modular hybrid type of intelligent charging stations and their application in the automotive market infrastructure. A study of this market has shown that customer requirements are not fully satisfied. Besides, the current market state of charging stations requires the best service to be provided to its customers. The importance of supporting demand on such vehicles is caused by the current state of ecology in the world. The performed analysis has revealed an insufficient level of providing the market with the necessary infrastructure, in particular, the lack of a sufficient number of charging stations for electric vehicles, especially in the European market. The evaluation of the activity of the main suppliers of charging stations in the world market was carried out in order to find the best solution. According to the analysis, it was proposed to introduce the modular hybrid charging station that allows to extend the demand for electrical vehicles (EV) and to satisfy the expectations of manufacturers and customers. The strength of the proposed device is to integrate existing advanced technologies in order to create a completely new product that corresponds to the trends of Industry 4.0.\n\nMichal Balog, Angelina Iakovets, Hanna Sokhatska\n\n### Creative, Quality Oriented Rethinking of the Assessment Strategy at the University Level Courses. A Case Study\n\nOne of the main success factors of higher education institutions (but not the only one) is the constant focus on the quality and the continuous improvement of the teaching and learning evaluation process. Orientation towards the innovation, increased attention to the needs and interests of the education customers and stakeholders becomes imperative when universities want to become or remain competitive on the education services market. In this respect, the real involvement of students in their dual quality as internal and external clients in improving the quality of the educational process by considering their opinions and suggestions is proof of the student-centered education and contributes to the motivation and the increase of their satisfaction. This paper represents just a sequence of a more extensive program of the course redesign, carried out at one of the Master\u2019s degree programs from the Technical University of Cluj-Napoca, focusing exclusively on the evaluation aspects. The process of the course redesigning was focused on both teaching and learning processes and followed a series of steps, according to a model previously promoted by the author, using a number of innovative methods and tools for each stage.\n\nAlina Narcisa Crisan, Grigore Marian Pop\n\n### Numerical Deflections Analysis of Variable Low Stiffness of Thin-Walled Parts During Milling\n\nThe milling of parts with variable low stiffness requires consideration of a number of factors that are an obstacle to achieving technological requirements for the product. One of the most important factors of geometrical deviations in the milling process is the elastic deformation of the thin-walled parts. 
The review of prevention methods of undesirable deviations occurrence in the machining process of the variable stiffness parts is made. A detailed preliminary analysis with the help of engineering automation is proposed as a technological solution for the milling of thin-walled parts with a complex geometry. The forces that occurs during the material removal are calculated, directional force that acts on the face surface of the thin-walled element is defined. The dynamic process of material removing is modeled. The critical points of the thin-walled variable sample in deflections model are defined. The forces in the removal zone have been processed and included in the deflections model. Comparison between oscillation amplitudes in the process of conventional and high speed machining is made.\n\nSergey Dobrotvorskiy, Yevheniia Basova, Serhii Kononenko, Ludmila Dobrovolska, Maryna Ivanova\n\n### Modeling of Mixing Bulk Materials\n\nBased on the analysis of the mixing methods of bulk materials and mixer designs, the method of continuous mixing of bulk materials and the design of a spiral mixer for the implementation of the method is substantiated. The method involves the formation of a multi-layer flow of components in the desired ratio, with subsequent separation of the flow into portions of a small volume and mixing the components in a portion. After that, the mixing of portions of the finished mixture is carried out. The development of a new mixing method is due to the fact that known methods not providing a uniform distribution of components in the volume of the mixture are time-consuming and energy-intensive. Modeling different ways of mixing dragees has proved the effectiveness of the developed mixing method. Experiment has determined values of qualitative indicators of dragee mixes, in particular, the average contents of sweets of different colors of the mixture and the value of the heterogeneity of the mixture. Mixing of bulk materials in the developed way ensures uniform distribution of them in volume of mixture. Theoretical dependences are obtained for substantiation of rational structural and technological parameters of the equipment implementing the proposed method. It has been established that, in addition to improving the quality of the mixture, the developed method and the spiral mixer provide a reduction in the duration of the mixing process, do not cause damage to the components of the mixture and reduce the energy consumption for mixing process.\n\nIgor Dudarev, Ruslan Kirchuk, Yurii Hunko, Svitlana Panasyuk\n\n### Information Support for the Quality Management System Assessment of Engineering Enterprises\n\nIn the current conditions of production activities, more enterprises are working on the development and implementation of process-oriented management systems that correspond with international standards for management systems. The main purpose of the implementation of such systems is the satisfaction of the requirements of various stakeholders, and the index of their satisfaction becomes the criterion of perfection of the enterprise. Therefore, the task of its quantitative assessment is topical. In the paper for the determination of generalized satisfaction index of stakeholders, on the basis of fuzzy sets theory a scale of values of the linguistic variable \u201cSatisfaction\u201d was developed. 
This approach allows to assess the compliance degree of stakeholders\u2019 requirements and to present it as a linguistic value for further determination of the directions of improving the quality of the enterprise processes. On the basis of the principles of creating an information system for an engineering enterprise, the paper proposes the information support for the process of quality management system assessment, the main task of which is to create conditions that ensure rational processing and timely provision of necessary information on the functioning assessment results of the system that are under consideration. In the development of information support, the paper takes into account the possibility of using it as one of the blocks of a single information system of the enterprise, which makes it possible to create an information databank, as well as carry out a comparative analysis of the indexes under consideration for any period of time.\n\nOksana Dynnyk, Yuliia Denysenko, Viliam Zaloga, Oleksandr Ivchenko, Tetiana Yashyna\n\n### Programs to Boost IT-Readiness of the Machine Building Enterprises\n\nOne of the important aspects of providing the high level of the enterprises competitiveness on the market is existence of the necessary level of IT-readiness. By using the term \u201cIT-readiness\u201d we mean the ability of the enterprise to reach the mission by the most effective use of modern information technologies. There are contradictions between the need of the enterprise operatively to reconstruct design and production structures accordingly to the market condition change and the level of the modern IT use for maintenance of science intensive samples creation projects. The problem situation becomes complicated because of the lack of possibility of the fast development of expensive information support systems through the absence of big financial resources for the machine building enterprises in the conditions of an unstable investment climate and a low level of profitability. It leads to the need of the stage-by-stage IT introduction in the course of a life cycle support of a new equipment creation project that is also a characteristic for the machine building enterprises. It turned out that the high technology development is carried out now under the conditions of essential restrictions of financial resources all around the world. In these conditions one of progressive ideologies is Lean Manufacturing methodology. World experience shows that the success of this ideology introduction is directly connected with an effective use of modern information technologies of the high technology samples design and business management. Thus, an important question is the compliance of the enterprise to the necessary level of IT-readiness which is directly connected with a technological maturity.\n\nBohdan Haidabrus, Eugen Druzhinin, Mattias Elg, Martin Jason, Janis Grabis\n\n### Finite-Element Model of Bimetal Billet Strain Obtaining Box-Shaped Parts by Means of Drawing\n\nThe article shows data on determining billet shape while obtaining a box-shaped part from aluminum-copper bimetallic composition. Special attention in the course of finite-element modeling of strain process is paid to the choice of mechanical characteristics of each layer and the nature of the relationship between layers. It is shown that the optimal billet shape for drawing is a \u201crectangle with cut angles\u201d and copper layer outer position. 
This billet shape ensures the absence of corrugations with a single junction drawing of aluminum-copper bimetal box-shaped parts, and also provides the least deformation force with minimum intensity of stresses and strains. The absence of folds allows to judge on sufficiently proportionate layer-by-layer strain and preservation of the indissoluble layer-by-layer engagement. This makes possible to design a technological process for the production of bimetallic contacts with the required set of electromechanical characteristics and to recommend it for manufacturing. It is also noted that a more solid and consistent material for obtaining the optimum product handling properties should be located on the outer layer of the part. At the same time, it is advisable to use drawing in the manufacture of bimetallic parts to select, whenever it is possible, materials with approximately the same strength and plastic characteristics, avoiding the occurrence of different stresses in the product layers and, accordingly, distortion or other negative consequences for the finished item. The presence of lamination in a bimetallic composition increases the electrical resistance hundredfold and leads to the product rejection.\n\nTetiana Haikova, Ruslan Puzyr, Vladimir Dragobetsky, Anastasiya Symonova, Roman Vakylenko\n\n### Estimation of Temperature Levels in the Area of Polishing with Polymer-Abrasive Brushes\n\nA technique for measuring and monitoring the temperature in the area of processing with polymer-abrasive (PA) brush tools of rotary action, which have thermal performance limitations due to the low melting temperature of the fiber polymer base, is proposed. There is very contradictory information, which has been obtained mostly theoretically, about temperature level in the working area when polishing with PA brushes. Therefore, experimental thermal studies directly in the \u201cbrush-sample\u201d contact zone are relevant. It is important for maintaining high tool-life of brush PA tools. The temperature in the processing zone is influenced by the polishing modes (feed, tension, speed), tool parameters (diameter and length of fibers, which characterize the brush stiffness). The samples were made of different materials (steel 3, aluminum AM12 and titanium VT8-M alloys) in order to evaluate the effect of thermal conductivity on the temperature in the processing area. It was found that the temperatures measured during processing without lubricant-cooling agents were 30\u2026130 \u00b0C depending on modes and brush rigidity. The dependence of the maximum temperatures in the \u201ccontact patch\u201d on the modes and parameters of the tool when polishing with brushes of various materials was established. Most PA brushes have a thermal limit of 100\u2026120 \u00b0C. Most polishing work can be done without lubricant-cooling agents. However, to work on \u201chard\u201d modes or on materials with low thermal conductivity it is necessary to apply lubricant-cooling agents or brushes with high thermal resistance of fibers (200 \u00b0C); such brushes have recently appeared on the market of tools for finishing operations.\n\nNatalia Honchar, Eduard Kondratiuk, Dmytro Stepanov, Pavlo Tryshyn, Olena Khavkina\n\n### Analysis of the Involute and Sinusoidal Gears by the Operating Parameters and a New Method of Its Cutting\n\nToday, involute spur gears are the most common in mechanical engineering. However, besides of the benefits, involute teeth present several disadvantages. 
To overcome those disadvantages, designers resort to changes that increase the complexity of equipment, cutting tools that lead to the expensive cost. On another hand, gear and transmission, namely sinusoidal gear and transmission is known by higher properties. The results of the simulation of involute and sinusoidal gearing are described in the article. Proved that, sinusoidal gears have higher performance parameters. Simulation confirmed that sinusoidal gear have higher bending strength, lower contact stress, reduced contact friction and tension in the edging contact, improved performance indicators of transmission. The advantages are due to the following features of the sinusoidal gears geometry: gear tooth profile outlined by a smooth sine wave curve; greater teeth thickness on the pitch circle; wide range of coast flank pressure angle. A new method of gears machining is described. This technology makes it possible to reduce number of expensive and complex gears cutting tools and gears machine tools, greatly simplifies technology of tooth cutting and reduces the cost of gears manufacturing by a numerous times. The method has a wide versatility, provides the opportunity to produce a variety of gears types including, gears with asymmetric teeth.\n\n### Technological Assurance and Features of Fork-Type Parts Machining\n\nTo provide the machining accuracy of parts on metal-cutting machine-tools fixtures appointed for accurately locating and reliable workpiece clamping are used. The expansion of technological capabilities of modern CNC machine tools necessitates the improvement of design procedures in production planning is needed. The variety of parts and the tendency to increase their complexity, as well as the desire to reduce the cost of time, makes it necessary to find new approaches to the design of tooling. The article proposes the design of a flexible fixture, which provides sufficient tool availability and allows multiaxis machining of fork-type parts at one setup. The ways of intensification and manufacturing process of fork-type parts machining with a significant reduction of auxiliary and preparatory time are offered. Studies performed by numerical simulation methods confirmed that the proposed design meets all the accuracy parameters. The results of static structural, modal, and harmonic analyses confirmed that the proposed fixture had sufficient strength and dynamic stiffness, which allows it to be used with intensive cutting modes that are characteristic of modern machines and cutting tools. The oscillation amplitudes in places of the work surfaces in the proposed device do not exceed the tolerances for the manufacturing of these surfaces.\n\nVitalii Ivanov, Ivan Dehtiarov, Ivan Pavlenko, Mykyta Kosov, Michal Hatala\n\n### Increasing the Efficiency of the Production Process Due to Using Methods of Industrial Engineering\n\nThe thesis deals with the increase of production productivity using industrial engineering methods. Application of lean methods in production process and logistics. (predicate should be added). In the theoretical part of this thesis the knowledge (information in this case is better) drawn from technical literature in the field of industrial engineering is presented. In particular the problem of balancing the production line is described here. The right production cycle is a basic requirement for serial production. 
Process analysis and the right detail analysis of production operations is the main indicator for batching of the production cycle and the detection of bottlenecks. In the analytical part, an approach to resolving changes on the production line is described. Time analysis of layout changes and decision analysis are described. Narrow spots have been removed by various principles. There has been a change in the layout of the production line and the application of new technology in the control room. The entire study shows the practical verification of the acquired knowledge in the field of industrial engineering and subsequent application in real operation. The project was built on the REFA methodology that was important for balancing manufacturing operations. In order to achieve the optimum production cycle, the layout had to be changed. The new layout contained a decision analysis to determine the right solution. The whole study shows a systematic approach to applying the above methods.\n\nPavel K\u00e1bele, Milan Edl\n\n### Recruitment Web-Service Management System Using Competence-Based Approach for Manufacturing Enterprises\n\nToday there are many different web services for employment, but only a few of them have a focus on IT professionals for manufacturing enterprises. There is a need to select applicants with the required competencies with minimal time expenditures. The goal of the paper is to develop the architecture of a web service for the recruitment of employees at enterprises using a competent approach according to the international standards of eCF. In present there is no analogue of our recruitment web-service management system using competency-based approach for both manufacturing enterprises and employees. We prepared review and analysis the existing analogues of web-based job placement services, functional and non-functional requirements for web-based job placement for enterprises using competency approach. The high-level architecture and technical tasks for the participants of our web-based job placement service were also developed and described in our research. Data analysis of employers\u2019 requirements was prepared for decision making of employee of manufacturing enterprises through software package.\n\nVitaliy Kobets, Nikita Tsiuriuta, Valerii Lytvynenko, Mykola Novikov, Sergey Chizhik\n\n### The Application of the Uncorrected Tool with a Negative Rake Angle for Tapered Thread Turning\n\nBased on geometric modeling and FEM, it is developed a technique for choosing the values of geometrical parameters of a threaded turning tool (rake angle, clearance angle, inclination angle of the cutting edge). This technique depends on the allowable values of kinematic rake and clearance angles at different points of the cutting edge and reliability characteristics of the threaded connection\u2014fatigue strength and contact pressure in the thread. Using this technique, it is reasonable to use tools with an uncorrected profile and a negative value of the static rake angle at the nose point \u221210\u00b0 for turning the pin thread of the tubing with a diameter of 114 mm, which is made of difficult-to-machine steel. The calculated values of the kinematic rake angles (\u22124.4\u00b0\u2026\u22125.5\u00b0) indicate improved cutting conditions. 
Fatigue strength of the threaded connection almost does not change, but in order to avoid gaps in the connection, the coupling thread must be made by taking into account the difference in the flank angles of the pin and coupling.\n\nVolodymyr Kopei, Oleh Onysko, Vitalii Panchuk\n\n### Simulation of Induction Heating for Railway Wheel Set Elements During Assembly and Disassembly\n\nInduction heating process during wheel assembly, as well as the heater design during railway wheel set axle equipment disassembly, are suggested. Simulation of railway wheel thermal elasticity is carried out in SolidWorks Simulation. Deformation processes at local induction heating are fast and affect the obtained joint dimensions. The obtained data make it possible to set the time and dimensional parameters of the assembly technological process. The design of an induction heating unit for axle equipment elements removal is proposed. In the view of workability, the inductors with cylindrical encircling coil made of a solid copper conductor or copper tube are considered the best. In order to make heating the most effective, the inner surface of coil must be as close as possible to the heated surface. At the same time, the heater moves freely upon the part. It is necessary to use inductors with magnetically conductive system to decrease the dispersion of the magnetic field in the air and to enrich its concentration in the part material. The part or the parts group being heated is an element of inductor magnetic system. It is recommended to perform heating of the whole package simultaneously\u2014two bearing rings and a labyrinth sealing\u2014to increase the process capacity of induction heating unit.\n\nOleksandr Kupriyanov, Serhey Romanov\n\nThe problem of improvements in machining large dimension flat surfaces of width 400 mm and closed ones, which are under the presence of engaging step on the sides, of unfinished workpieces productivity is considered. It is proposed to use milling heads with adjusting the milling width for the mentioned surfaces machining. A variant of the mill design has been developed, where the spindle block containing two face mills with intersecting cutter trajectories can be rotated by an angle in the range of 0\u00b0\u2013360\u00b0. The dynamic characteristics studies into dedicated CMH with adjusting the milling width have been conducted. It has been established based on the three-mass dynamic model of the mill that rigidity changes in the feed plane even by 10% leads to a change in the main semi-axis of the ellipse of the movement trajectory along the yi axis, while the change in the second axis can be considered as insignificant (up to 3%). It has been established that the proper selection of the mills\u2019 system resulting rigidity can achieve tool operation stability and accuracy in size and geometry of the workpiece surface based on the study of the design of dynamic characteristics.\n\nPavlo Kushnirov, Dmytro Zhyhylii, Oleksandr Ivchenko, Artem Yevtukhov, Oksana Dynnyk\n\n### Experimental and Analytical Study of CBN Grinding of Welded Martensitic Aging Steel\n\nMartensitic aging steels (Marging steels\u2014MS) are high-alloyed low-carbon (0.03% C) structural steels based on the Fe\u2013Ni and Fe\u2013Cr\u2013Ni systems, additionally doped with cobalt, molybdenumtitaniumetc. The article presents the results of studying of the grinding process of martensitic aging steels. 
The physical nature of the transformations occurring in the surface layer of the ground surface under the influence of the contact grinding temperature is considered. For hardening, the steels are heated to temperatures of approximately 1200 °C. At this temperature, the intermetallic compounds of the alloying elements (usually fine and solid) dissolve in the solid solution. With rapid cooling at rates above the critical hardening rates, a decarburized "soft" martensite is formed in which the intermetallic compounds remain dissolved. This is followed by aging at temperatures of about 480–520 °C. Under the effect of the tempering temperature, precipitation hardening of the steel occurs: finely dispersed intermetallic compounds precipitate from the solid solution and block dislocation movement, owing to which the steel acquires high mechanical properties. Under the action of contact grinding temperatures of 550–600 °C, these properties can be lost. The dependence of the contact temperature on the modes of borazon grinding is shown. The research is aimed at creating a database of permissible grinding conditions, the use of which provides optimal contact temperatures and a defect-free surface layer.

### Effects of the Combined Laser-Ultrasonic Surface Hardening Induced Microstructure and Phase State on Mechanical Properties of AISI D2 Tool Steel

The surface layers of AISI D2 tool steel were hardened by laser heat treatment (LHT), by ultrasonic impact treatment (UIT), and by a combined laser-ultrasonic treatment (LHT + UIT). The peculiarities of microstructure and phase formation in the surface layers were analyzed after the above-mentioned surface treatments performed in the optimum regimes. Microstructural changes were studied using optical and transmission electron microscopy to corroborate the results of XRD analysis. Based on the experimentally obtained data regarding the grain/subgrain size, the dislocation density, and the volume fraction and size of carbides, the differentiated contributions of the various hardening mechanisms to the mechanical characteristics (σ0.2, HV) were theoretically assessed. The results indicated that grain boundary hardening is the most influential hardening mechanism in the cases of the LHT (~47%) and combined LHT + UIT (~51%) processes. Conversely, dislocation hardening (~34%) is the main contributor to the UIT-induced hardening. The yield strength values calculated from the microstructural studies correlate well with the experimental data describing the surface microhardness.

Dmytro Lesyk, Silvia Martinez, Bohdan Mordyuk, Vitaliy Dzhemelinskyi, Oleksandr Danyleiko

### Temperature Field Analysis in Grinding

The paper is devoted to solving the important scientific problem of determining the profile grinding temperature by choosing a simple yet adequate solution from the available analytical ones. The initial premise of the concept developed in the paper is that of a moving heat source. In engineering applications, the moving heat source is often represented as a moving contact zone between the grinding wheel and the workpiece surface. The source forms around itself a three-, two-, or one-dimensional temperature field in the Cartesian coordinate system, respectively with (three-dimensional) or without (two- or one-dimensional) taking into account the influence of the source length in the direction perpendicular to the direction of source motion. A further simplification in determining the grinding temperature is to choose a one-dimensional solution of the differential heat conduction equation in which the moving heat source is absent, replaced by an unmoving (stationary) heat source acting for a finite time. This time is equal to the ratio of the contact length (in the direction of motion) to the velocity of movement. Due to the high speeds of the discontinuous profile grinding process, replacing the moving source with the unmoving (stationary) one often does not affect the accuracy of determining the profile grinding temperature on the surface and in a thin surface layer.

Natalia Lishchenko, Vasily Larshin
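The stationary-source simplification described in this abstract reduces to a classical closed-form result: for a semi-infinite body under a constant heat flux applied for the time t = l/v (contact length over workpiece speed), the temperature rise follows the one-dimensional heat conduction solution. The sketch below uses this textbook formula with hypothetical steel and regime values; it illustrates the simplification, not the authors' full profile-grinding model.

```python
import math

def temp_rise(q, lam, a, l_c, v_w, z=0.0):
    """Temperature rise (K) at depth z (m) for a 1D stationary heat source:
    constant flux q (W/m^2) on a semi-infinite body with conductivity lam
    (W/m/K) and diffusivity a (m^2/s), acting for t = l_c / v_w seconds,
    i.e. contact length divided by workpiece speed."""
    t = l_c / v_w
    x = z / (2.0 * math.sqrt(a * t))
    # ierfc(x) = exp(-x^2)/sqrt(pi) - x * erfc(x)
    ierfc = math.exp(-x * x) / math.sqrt(math.pi) - x * math.erfc(x)
    return (2.0 * q / lam) * math.sqrt(a * t) * ierfc

# Hypothetical values: q = 5e7 W/m^2, steel (lam = 40, a = 8e-6),
# 2 mm contact length, 0.2 m/s workpiece speed.
print(round(temp_rise(q=5.0e7, lam=40.0, a=8.0e-6, l_c=2.0e-3, v_w=0.2)))
```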
### Development of a System for Supporting Industrial Management

Mass customization is the prevailing production paradigm in organizations that depend heavily on the demands of their customers and aim to stand out in a highly competitive market. However, given the increasing diversity of products that this type of production implies, implementing it in a company involves challenges, mainly in Product Data Management (PDM). Information technology and systems, more specifically Enterprise Resource Planning (ERP), are thus further determining factors for the success of organizations, allowing them to be more efficient through the integration of information. Seeking better production planning and control (PPC) amid its continuing expansion, the company Be Stitch, which produces textile articles for the home market, decided to innovate by investing in an information system, allowing it to adapt the way it operates and generates the required PPC information. With the analysis, selection, and requirements survey phases carried out initially, the present project followed up by developing a software application, Silex, whose main required functionalities are presented, specified, implemented, and tested. After being developed and implemented in the company, this software has shown an improvement in the flow of information and is very beneficial in cases where information is not centralized at a single point, as at Be Stitch, where it has streamlined and improved communication between units.

Susana Martins, Maria Leonilde Rocha Varela, José Machado

### Technology Support for Protecting Contacting Surfaces of Half-Coupling–Shaft Press Joints Against Fretting Wear

The paper describes the problem of destruction caused by fretting wear (FW) of the contacting surfaces of elastic coupling (EC) parts, among which the most vulnerable connection is a tension fit joint of the half-coupling–shaft type, wherein the shaft's outer cylindrical surface makes contact with the half-coupling's inner cylindrical surface. The essence of the best-known methods for improving the quality of press joints (increasing bearing capacity, raising joint tightness and shaft strength, and reducing FW) lies in introducing certain intermediate layers between the mating surfaces of the parts.
In contact, these intermediate layers acquire properties significantly different from the original ones; i.e., the feature of sliding ability is transferred into the intermediate medium. As a novelty, the paper shows the application of the electric spark alloying (ESA) method to create such layers, as the most promising, eco-friendly, and energy-efficient option. The paper presents the ESA processes of aluminizing, sulfidizing, and carburizing, which occur simultaneously on the internal surfaces of the half-coupling (hub) in the areas of its ends and make it possible to improve atmospheric corrosion (fretting corrosion) resistance, prevent adhesion between contacting surfaces, improve surface microhardness and wear resistance, and increase the joint tightness.

Vasyl Martsynkovskyy, Viacheslav Tarelnyk, Ievgen Konoplianchenko, Oksana Gaponova, Mykhailo Dumanchuk

### Numerical Prediction of the Elastic and Strength Properties of Woven Composites

The article is devoted to the development of a procedure for the numerical determination of the effective elastic constants and strength criterion parameters of woven composites based on the known properties of the matrix and fibers. The mechanical characteristics of carbon plastics were obtained by numerical analysis of the stress-strain state of a representative volume using the finite element software package ANSYS. A series of numerical experiments is performed in which the local stress state of a representative volume is modeled under the conditions of a uniform average stress state of an equivalent homogeneous material. Periodicity requirements are used as boundary conditions. The calculations are performed for tension and compression in two directions, shear in two planes, and simultaneous tension in two directions. All the parameters of the quadratic strength criterion are determined for the composites. The proposed criterion takes into account the orthotropy of the material and the differences between the tension and compression strengths.

Andriy Mikhalkin, Oleksii Petrov, Igor Kravchenko, Gennadiy Lvov, Olga Kostromytska
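Quadratic strength criteria of the kind determined here have a standard plane-stress form. As an illustration only, the sketch below evaluates the classical Tsai–Wu failure index, whose coefficients are built from tensile/compressive and shear strengths; the strength values and the common default for the interaction term F12 are assumptions, not the paper's identified parameters.

```python
import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Plane-stress quadratic (Tsai-Wu) failure index for an orthotropic ply.
    s1, s2: normal stresses along/across the fibers; t12: in-plane shear.
    Xt, Xc, Yt, Yc: tensile/compressive strengths (positive); S: shear strength.
    Failure is predicted when the index reaches 1."""
    F1 = 1.0 / Xt - 1.0 / Xc
    F2 = 1.0 / Yt - 1.0 / Yc
    F11 = 1.0 / (Xt * Xc)
    F22 = 1.0 / (Yt * Yc)
    F66 = 1.0 / S**2
    F12 = -0.5 * math.sqrt(F11 * F22)  # common default for the interaction term
    return (F1 * s1 + F2 * s2 + F11 * s1**2 + F22 * s2**2
            + F66 * t12**2 + 2.0 * F12 * s1 * s2)

# Hypothetical fabric-ply strengths (MPa) and a trial stress state:
print(round(tsai_wu_index(s1=400.0, s2=30.0, t12=40.0,
                          Xt=600.0, Xc=570.0, Yt=550.0, Yc=520.0, S=90.0), 3))
```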
### Technical-Economic Aspects of the Use of Technological Process of Deforming Broaching

The article defines the technical and economic potential of applying the deforming broaching process. Research into the consequences of introducing deforming broaching into technological processes at manufacturing enterprises is carried out on the basis of a system resource and matrix approach. On the basis of the performed research, a methodological basis for the economic evaluation of the results of applying deforming broaching in production has been developed. The article improves the well-known scientific and methodological foundations for determining the technical and economic results of applying deforming broaching through the complex identification of production-organizational decisions in the technical sphere and forms of evaluation in the economic sphere. The developed approach makes it possible to assess more accurately the economic effect of introducing deforming broaching in the mining, metallurgical, and machine-building industries. It can also be used to assess the economic effect of introducing other parts-processing processes.

Yakiv Nemyrovskyi, Eduard Posvyatenko, Sergii Sardak

### Data Integration Technology of Industrial Information Systems

Research results on the development of an information support system for data integration from integrated production systems are presented, and an approach to information systems integration is offered. The corresponding method is based on the data integration of information systems both for production purposes and for generating a subsystem implementing the developed methods. As a result, this approach allows adopting optimal design and manufacturing solutions. The method is implemented using a universal PDM system and an inter-module software interface. Formal information models of the data and methods of maintaining a full information life cycle are created. This approach allows implementing the information support technology for the integration of production data. A functional-structural scheme for the implementation of the proposed method is described, and recommendations for practical use in the conditions of operating industrial enterprises are given. Developing this approach, the authors conducted a study on the formalization and modeling of the design, production, and normative reference data of a machine-building enterprise. Semantic modeling of the data is carried out, and design procedures for establishing their interconnections are developed.

Petro Pavlenko, Vira Shendryk, Kostyantyn Balushok, Stanislav Doroshenko

### Challenges and Issues of ICT in Industry 4.0

Information and communication technology continues to positively impact many stages of the manufacturing environment. Intelligence is about to be shared from the start to the end of the supply chain: the Internet of Things (IoT) is adding intelligence to endpoints, big data is becoming the new way of running a business, and Cloud Computing (CC) is becoming the new data center. The advancement that this technology brings to manufacturing is fundamentally changing individual companies and transforming market dynamics. The fourth industrial revolution (Industry 4.0) is all about including contemporary technologies for automation and real-time data exchange in manufacturing organizations. This paper presents the basis for designing the communication layer of the ecosystem's value chain depending on the usage scenario within the Industry 4.0 concept. Depending on the usage scenario, different service classes are grouped, and the coverage of the currently available communication networks in the Republic of Croatia is shown. The characteristics and capabilities of Industry 4.0 enable efficient business in the fields of logistics, manufacturing, tourism, and smart cities. With increased connectivity, it is also possible to build smarter supply chains, processes, and end-to-end ecosystems.

Dragan Peraković, Marko Periša, Petra Zorić

### The Effectiveness of ICT Tools for Engineering Education: ISO Checker

One of the main advantages of mobile applications is that they are available around the clock, offering a variety of ways of learning, communicating, and collaborating. The proposal of this research is the design and implementation of a mobile-learning application to improve the quality of the teaching of Tolerances and Dimensional Control for engineering degrees (design, robotics, industrial, and mechanical engineering).
The studies focused on the impact of mobile learning on student achievement and showed that this method could be one of the promising educational technologies for development in educational environments. The use of mobile devices as educational tools has a positive effect: ISO Checker is available in four languages (German, English, French, and Romanian) and offers the main information presented in the ISO system of limits and fits. Given current technological innovations and the fact that students are now hi-tech learners, we consider that mobile applications integrated into the study of tolerances and dimensional control stimulate students' involvement, judging by the positive feedback received during the first semester of the 2018–2019 academic year.

Grigore Marian Pop, Liviu Adrian Crisan, Mihai Tripa

### Improvement of the Technology of Tribostatic Application of Powder Paints Using Fractal Analysis of Spray Quality

The paper proposes a method of quantitative fractal evaluation of the quality of powder material application on metal surfaces by the tribostatic method. A sprayer design is developed that provides efficient charging of powders with different particle fineness and moisture content. A computer implementation of the proposed algorithms for assessing and controlling the quality of powder coating is carried out. Laboratory experiments have shown that, with an increase in the number of helical elements in the design of the sprayer, the fractal dimension of the corresponding sample images of spraying spots increased. The results support the hypothesis that increasing the number of screw elements increases the length of the powder's path in the tribosprayer. This significantly increases the number of collisions between the individual powder particles and the walls of the spray gun, contributes to a stronger charging of particles of different types of powder paints and, as a result, reduces the shedding of powder from experimental samples of different materials. The conducted research can significantly optimize the technological processes of tribostatic spraying of powder paints at small enterprises in the machine-building and automotive industries.

Serhii Pustiulha, Ihor Holovachuk, Volodymyr Samchuk, Viktor Samostian, Valentyn Prydiuk
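The quantitative fractal evaluation mentioned above is commonly done by box counting. The sketch below estimates the box-counting dimension of a binary spray-spot image on synthetic data; the box sizes and the random test pattern are arbitrary choices, not the paper's algorithm or data.

```python
import numpy as np

def box_counting_dimension(image: np.ndarray) -> float:
    """Estimate the fractal dimension of a binary spray-spot image by box
    counting: count occupied boxes N(s) at several box sizes s and fit
    log N(s) = -D log s + c."""
    sizes = [2, 4, 8, 16, 32]
    counts = []
    for s in sizes:
        h, w = (image.shape[0] // s) * s, (image.shape[1] // s) * s
        blocks = image[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Synthetic random spot pattern standing in for a spray image:
rng = np.random.default_rng(0)
img = rng.random((256, 256)) > 0.7
print(round(box_counting_dimension(img), 2))
```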
### Improvement of Manufacturing Technology and Recovery of Clamping Collets for Lathe Automats

On the basis of theoretical and experimental investigations, new advanced technologies for manufacturing clamping collets are offered. The technology for manufacturing non-adjustable and cast collets, as well as the technology for their multiple recovery, have been developed. Using these technologies can increase the durability of clamping collets for lathe automats by 2–5 times while increasing the metal utilization coefficient from 22–26% to 70–95%. This is possible due to the choice of an optimal collet geometry, improved contact conditions between the collet, spindle, and rod, an improved metal structure, and reduced stress concentration at the transitions of the collet's petals. The research presents the main features, the principal casting scheme, and the operation sequence of the cast clamping collet manufacturing technology. Adaptations and formulas for determining the allowance for re-grinding the working hole are presented, and the number of recoveries available by grinding the outer cone and inner working hole of clamping collets is calculated.

Rostyslav Redko, Oleg Zabolotnyi, Olha Redko, Serhii Savchuk, Volodymyr Kovalchuk

### The Study of Surface Microgeometry and Morphology of Plasma Electrolytic Oxidation Dielectric Coatings on Aluminum Alloys

The article focuses on the results of research into the surface microgeometry and morphology of plasma electrolytic oxidation (PEO) coatings produced in alkaline-silicate electrolytes in various electrical modes: galvanostatic (GS) and arbitrarily falling power (AFP) under alternating current. PEO coatings were formed on samples of wrought aluminum alloys, which are normally used for manufacturing diamond grinding wheel bodies. The influence of PEO factors on the Ra roughness index as well as on the porosity, shape, and size of the structural particles of the coating surface was studied. It was established that PEO increases the reference Ra value by a factor of 2…6, depending on the electrolyte composition and the processing mode. The "liquid glass" (a technical-grade sodium silicate solution) concentration is the key driver of the Ra index; its decrease from 12 to 6 g/L leads to a roughness reduction of 25…40%. An extremal dependence of the Ra index on the anode current density in GS mode is identified in the electrolytes with the minimal concentration of the alkaline component (KOH). It was demonstrated that the morphology research results qualitatively correlate with the microgeometric and functional coating characteristics.

Elena Sevidova, Yuriy Gutsalenko, Aleksandr Rudnev, Larisa Pupan, Oksana Titarenko

### Forecasting of Overloading Volumes in Transport Systems Based on the Fuzzy-Neural Model

The article deals with the expediency of applying evolutionary models for obtaining a forecast with minimal error. The research analyzes modern approaches to creating qualitative models for forecasting overloading volumes of cargo in ports. The relevance of using a network such as ANFIS for forecasting future grain delivery volumes to the port is proved by calculation. The conclusion that the ANFIS model gives the best forecast is based on a comparison with the results of an ARX model, which yields a larger error than the fuzzy-neural model. The input data were preprocessed; this information is presented as a time series containing 1095 values. The selection procedure allowed adjusting the basic data in terms of the informative ability of each value in the time series. Based on the selection results, the number of actual input parameters (nodes) in the model was decreased from 7 to 4. At the same time, the forecasting error on a control sample was 4.99%.

Natalya Shramenko, Dmitriy Muzylyov
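A full ANFIS implementation is beyond a short sketch, but the comparison workflow the abstract describes (lag selection, a linear ARX-style baseline, and a percentage error on a control sample) can be illustrated compactly. Everything below runs on synthetic data; the lag sets and series are assumptions, and the linear least-squares model merely stands in for the ARX baseline against which ANFIS was compared.

```python
import numpy as np

def make_lagged(series, lags):
    """Regression matrix of lagged values for an ARX-style linear baseline."""
    p = max(lags)
    X = np.column_stack([series[p - l:-l] for l in lags])
    return X, series[p:]

def fit_and_score(series, lags, n_test):
    """Fit by least squares on the training part, return MAPE (%) on the
    held-out control sample."""
    X, y = make_lagged(series, lags)
    Xtr, ytr, Xte, yte = X[:-n_test], y[:-n_test], X[-n_test:], y[-n_test:]
    coef, *_ = np.linalg.lstsq(np.column_stack([Xtr, np.ones(len(ytr))]),
                               ytr, rcond=None)
    pred = np.column_stack([Xte, np.ones(len(yte))]) @ coef
    return 100.0 * np.mean(np.abs((yte - pred) / yte))

# Synthetic daily series of 1095 values standing in for the port data:
rng = np.random.default_rng(1)
t = np.arange(1095)
series = 100 + 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 2, t.size)
print(round(fit_and_score(series, [1, 2, 3, 4, 5, 6, 7], n_test=100), 2))
print(round(fit_and_score(series, [1, 2, 3, 7], n_test=100), 2))  # reduced inputs
```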
### Experimental Vibrating Complex for the Research of Pressing Processes of Powder Materials

An experimental complex based on a high-pressure casing was developed for research into vibration-assisted pressing of powder materials. High-pressure hydraulic hoses pressed into an elliptical state were used as the high-pressure casing. The hydraulic pump (a pulsator) is an upgraded gear pump in which a quarter of one tooth was cut away. The modes of transition of a hydraulic motor through resonance were investigated. It is proposed to use the mode of oscillation switching from coordinate X to Y and vice versa to create periodic-action vibrating actuators with a large vibration traction effort due to massive energy storage. The developed experimental complex can be used both for one-sided and for dry isostatic pressing. The complex allows implementing spatial variations of the press mould. The frequency and oscillation amplitude can be regulated: the frequency by changing the speed of the hydromotor, and the amplitude by changing the pressure in the system. A method for calculating vibration modules based on a high-pressure casing is developed.

Dmytro Somov, Oleg Zabolotnyi, Roman Polinkevich, Bohdan Valetskyi, Viktor Sychuk

### Determination of Parameters of Cylindrical Grinding with Additional Intermediate Dressing

A search for technological solutions to the problem of reducing the power consumption of metal-cutting machine tools during grinding was performed. For this purpose, it is proposed to perform grinding with additional intermediate dressings of the grinding wheels, which should reduce the intensity of heat generation in the cutting zone. Mathematical formulas for the tangential cutting force, effective power, quantity of heat, grinding time between additional intermediate dressings, and dressing time are given for the conventional and proposed dressing schemes. Based on these formulas, computer software for modeling thermal regime parameters in grinding was updated by introducing a module for calculating the parameters of grinding with additional intermediate dressings. A multivariate calculation of the dressing process parameters was performed for conventional grinding and for grinding with additional intermediate dressings. An advisable number of additional intermediate dressings is found, depending on the allowable values of the tangential cutting force, the quantity of heat, and the grinding time between the additional dressings.

Mykhaylo Stepanov, Larysa Ivanova, Petro Litovchenko, Maryna Ivanova, Yevheniia Basova
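The scheduling logic behind an "advisable number of intermediate dressings" can be sketched under a deliberately simple assumption: the tangential cutting force grows linearly with machining time as the wheel dulls, and a dressing is inserted before the force reaches its allowable value. The growth law and all numbers below are hypothetical, standing in for the paper's mathematical formulas.

```python
import math

def intermediate_dressings(F0, k, F_allow, T_cycle):
    """Number of additional intermediate dressings for one grinding cycle,
    assuming (hypothetically) that wheel dulling makes the tangential force
    grow linearly with time since the last dressing: F(t) = F0 * (1 + k*t).
    A dressing is scheduled before F(t) exceeds the allowable value F_allow."""
    t_allow = (F_allow / F0 - 1.0) / k      # allowable time between dressings
    n_intervals = math.ceil(T_cycle / t_allow)
    return max(n_intervals - 1, 0), t_allow

# Hypothetical values: 40 N with a fresh wheel, 2%/min force growth,
# 60 N allowable force, 30 min of grinding per cycle.
n, t_allow = intermediate_dressings(F0=40.0, k=0.02, F_allow=60.0, T_cycle=30.0)
print(n, round(t_allow, 1))  # -> 1 intermediate dressing, 25.0 min spacing
```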
### Simulation Study of Cutting-Induced Residual Stress

The use of finite element analysis (FEA) is an effective method for studying the surface layer deformation arising from inherited residual stresses. This paper is devoted to the analysis of the effect of residual stresses on the service properties of parts and to the development of a cutting-induced residual stress simulation using the DEFORM software. The influence of residual stresses on the operational properties of machined parts is investigated. The fatigue strength of the product, which results from the surface layer structure, residual stresses, and deformations formed in the cutting process, is used as the criterion for decision-making about the optimal structure and parameters of the functionally oriented technological process. The causes of the occurrence of machining-induced residual stresses for different workpiece materials have been analyzed, and the simulation model of residual machining-induced stresses is described. The functional dependence of the stress-strain state reflects the interference of frictional and force loads and the variable process of deep thermal effects. It is proved that the compressive part of this cycle is determined by the external load, and the tensile part by residual stresses.

### Preventive Maintenance System in a Company from the Printing Industry

In the face of increasing demand for cardboard packages, proper organisation of production to facilitate rapid response to changing customer requirements has become of utmost importance. The key issue is to maximise equipment uptime and prevent unexpected breakdowns, which may halt production and cause delivery delays that propagate further along the supply chain. Proper asset management can be supported through the implementation of a Total Productive Maintenance system. This paper presents an original Total Productive Maintenance system developed for a company operating in the packaging industry. The designed documentation and the organisation of information flow among the company units aimed at supporting the TPM system are presented and analysed. The importance of the human factor for the success of the implementation is highlighted, and certain setbacks which may be experienced during the implementation are listed.

Marta Szczepaniak, Justyna Trojanowska

### ICT Support for Industry 4.0 Innovation Networks: Education and Technology Transfer Issues

In order to ensure effective participation in the global research and innovation space, the development of scientific digital infrastructure according to priority directions is important. For universities, the development of digital infrastructure is crucial for providing open access to scientific data and knowledge and for the further commercialization of research, innovation, products, and services. Therefore, the purpose of this study is to consider the education and technology transfer issues of ICT support for Industry 4.0 innovation networks. The study methodology is based on a system approach to innovation network development. The study rests on a broad understanding of technology transfer as an exchange of technology and technological knowledge between individuals, enterprises, universities, research centers, and government structures at all levels. The proposed research idea is also based on the concept of integrated technology and the concept of promising (innovation) needs. It is the definition of innovative needs that is a prerequisite for creating a competitive economy, since they determine its qualitative changes. It was shown that the application of Industry 4.0 digital and virtual engineering tools allows conducting R&D processes in computer-aided design systems, which is reflected in the quality of the innovation product and the product launch timing. From this perspective, we list the main technological areas of ICT that form the basis for high-tech sector development within Industry 4.0.

Teofilo Tirto, Yuriy Ossik, Vitaliy Omelyanenko

### Directed Formation of Quality, as a Way of Improving the Durability of Conjugated Parts of Friction Pairs

The article presents research into the possibilities of the directed formation of quality indicators by selecting technological operations and assigning corresponding processing regimes.
The studied approach to the directed formation of quality is based on elements of the theories of technological inheritance and quality interference; it can reduce the role of random factors and their complex combinations in providing regulated quality indicators, which in turn narrows the dispersion field of performance values and improves the quality of processed parts. It is noted that implementing directed formation requires determining the technological parameters whose values can be changed to control the quality indicators that evolve in the course of machining. This makes it possible to completely eliminate or reduce the impact of those indicators that worsen the performance properties of parts. It was revealed that the nature of the manifestations of technological heredity is influenced by the conditions of implementation and the type of technological operations. It is advisable to apply this approach in the initial machining operations in order to initialize the manifestations of those indicators that improve the performance of the parts in general.

Anatolii Tkachuk, Valentyn Zablotskyi, Andriy Kononenko, Serhii Moroz, Stanislav Prystupa

### Energy Criterion for Metal Machining Methods

The optimization criteria of manufacturing processes are reviewed, and the technical, economic, and energy criteria actually used in mechanical engineering technology are analyzed. A new energy criterion, the "action of the technological system", is introduced. The proposed criterion is defined as the work of shaping done over a certain time interval and is developed for evaluating and selecting technological processes by their energy parameters. Mathematical simulation of external turning has been carried out, and analytic relations between the "action of the technological system" criterion and the cutting parameters have been determined. It has been theoretically established that the cutting speed and the feed have the most influence on the investigated criterion. Experimental research on the proposed criterion for rough and finish turning of an external cylindrical surface has been carried out, and empirical relations between the criterion and the cutting parameters have been determined from the experimental data. The theoretical and experimental data are compared; the results confirm that the feed and the cutting speed have the most influence on the "action of the technological system" criterion.

Yurii Yarovyi, Inna Yarova
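As an illustration of an "action of the technological system"-type quantity, the sketch below computes the work of shaping for one external turning pass as cutting power times machining time. The empirical force model and its coefficients are generic placeholders; the paper's own definition and relations may differ in detail.

```python
import math

def shaping_work(v_c, f, a_p, d_mm, L_mm, C=2000.0, x=1.0, y=0.75, n=-0.15):
    """Work of shaping for one external turning pass: cutting power times
    machining time. The tangential force uses a generic empirical model
    F = C * a_p^x * f^y * v_c^n (placeholder coefficients, not the paper's).
    v_c in m/min, f in mm/rev, a_p in mm, d and L in mm."""
    F = C * a_p**x * f**y * v_c**n             # tangential cutting force, N
    P = F * v_c / 60.0                         # cutting power, W
    n_rev = 1000.0 * v_c / (math.pi * d_mm)    # spindle speed, rev/min
    t = L_mm / (n_rev * f) * 60.0              # machining time, s
    return P * t                               # work of shaping, J

print(round(shaping_work(v_c=120.0, f=0.3, a_p=2.0, d_mm=60.0, L_mm=200.0)))
```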
### Functional Properties of PTFE-Composites Produced by Mechanical Activation

The influence of matrix mechanical activation and of fillers of various nature and composition on the structure and functional properties of polytetrafluoroethylene composites is explored. The greatest increase in wear resistance, while preserving high physical and mechanical properties of the PTFE-composites, is observed under the synergetic effect of matrix mechanical activation, fillers, their two-stage mixing, and the use of a binary filler of differing chemical nature. It is revealed that the introduction of the binary filler increases the wear resistance of the developed composites by 2.6–4.1 times in comparison with two-component composites. The feature of the developed manufacturing technology of PTFE-composites is the preliminary separate preparation of the matrix and fillers before their mixing, by mechanical activation in various equipment modes; as a result, their breaking strength increases by 1.4 times and wear resistance by 3.7–6.0 times in comparison with industrial analogs, which increases the service life of compressor friction units by 1.8–2.3 times.

Kristina Berladir, Oleksandr Gusak, Maryna Demianenko, Jozef Zajac, Anatoliy Ruban

### Physical-Mechanical Properties and Structural-Phase State of Nanostructure Wear-Resistant Coatings Based on Nitrides of Metals W and Cr

Methods of obtaining wear-resistant coatings based on W and Cr metal nitrides are considered in this work, and their advantages and disadvantages are characterized. The condensation modes for such coatings with different component ratios in a magnetron system with permanent magnets are chosen. The microstructure and structural-phase composition were studied. It has been established that the deposition temperature used in this work (600 °C) led to faster grain growth during the deposition of Cr-W-N coatings. From the analysis of the elemental composition of the coatings, a clear correlation between the composition of the target and the composition of the coating can be observed. Scanning electron microscopy studies of the composite Cr-W-N coatings showed a nanocrystalline structure with a grain size of 50 nm. The microhardness of the obtained coatings was investigated and the results analyzed: the microhardness of the nanosized coatings was about 3 GPa for the Cr39W11N50 coating and 5 GPa for the Cr75W1N24 coating.

### The Effect of the Hardfacing Processes Parameters on the Carbide Volume Fraction

The paper presents the results of research into the possibility of shaping the hardfacing structure by changing the conditions of the surfacing process. The material used in the research was a high-carbon, high-chromium self-shielding cored wire giving a hardness of 760–840 HV (62–65 HRC) according to the manufacturer. The obtained hardness of the hardfacing was at the same level or significantly higher. The test results show significant differences in the structure and hardness of the deposits, where differences in the amount of carbide precipitates reach 30% and differences in hardness reach up to 200 HV. The erosion tests showed that an impingement angle of 30° gives a lower erosion rate than an angle of 60°. It is possible to shape the structure and properties of the hardfacing to a certain extent by selecting appropriate parameters of the surfacing process. Under the conditions of this experiment, the decisive effect on the properties is exerted by parameters such as heat input and heat dissipation.

Marek Gucwa, Jerzy Winczek, Sławomir Parzych, Marcin Kukuryk

### Implementation of Pipe Steel Grade X52M Manufacturing According to API-5L Requirements Applied to Hot Rolling Mills "1700"

For the first time at the rolling mill 1700 facilities of PJSC "Ilyich Iron and Steel Works" of Mariupol, a technology has been developed and a batch of hot-rolled coils (8 × 1260 mm) of steel grade X52M has been produced by thermo-mechanical controlled rolling for the further manufacturing of electric-welded pipes in accordance with API-5L.
The paper confirms the advantages of the thermo-mechanical rolling method, owing to special features of the chemical composition and a lower strength level during production in comparison with other rolling methods, and the possibility of applying this method on equipment that was not designed for manufacturing products of such strength categories. The positive influence of Nb on microstructure formation and the properties of the rolled products under thermo-mechanical rolling has been confirmed. Additionally, controlled air cooling of the coils down to 450 °C after coiling has been applied during production. This reduces the thickness of the air scale layer and improves the surface quality, including during the further manufacturing of electric-welded pipes. The developed technology makes it possible to produce coils that meet present-day world requirements and the demands of domestic and foreign producers of electric-welded pipes. The next stages of the research, aimed at improving quality and further developing rolled product production for the manufacturing of electric-welded pipes in accordance with API-5L, have been determined.

Oleksandr Kurpe, Volodymyr Kukhar, Eduard Klimov, Sergii Chernenko, Elena Balalayeva

### A New Technology for Producing the Polystyrene Foam Molds Including Implants at Foundry Industry

A description is provided of the possibility of processing technogenic polystyrene waste into binder materials for the foundry industry, and the possibility of dissolving polystyrene foam in acetone is studied. Regardless of the amount of acetone analyzed, polystyrene absorbs it in a 1:1 ratio, with the formation of a swollen precipitate. The data obtained in the "polystyrene–acetone" study have been successfully used as the basic elements of a technology for cellular polystyrene models with implants. The prime factors here are the swelling kinetics of polystyrene foam in acetone and the composition of the swollen foamy polystyrene precipitate. The precipitate can be used as a binder for molding compounds. The kinetics of swelling of polystyrene foam and beaded polystyrene in acetone was studied, and the data obtained for the "polystyrene foam–acetone" system were used in the manufacture of polystyrene models with implants. The conventional technology for producing polystyrene foam models with implants requires a special binder for fixing the implanted granules. The research results allowed proposing a new technology for producing foam polystyrene models with implants without using a special binder.

Olga Ponomarenko, Natalya Yevtushenko, Tatiana Lysenko, Liudmyla Solonenko, Vladimir Shynsky

### Prediction of Lankford Coefficients for AA1050 and AA5754 Aluminum Sheets Using Uniaxial Tensile Tests and Cup Drawing Experiments

Rolled sheet metals show different mechanical properties and stretching ability in different directions. The anisotropy of these metals should be defined in order to model the behavior of the material in different directions in forming operations. The plastic anisotropy of sheet metals is characterized by Lankford coefficients (R-values). Uniaxial tensile tests are usually employed to determine Lankford coefficients. However, some metals, like aluminum, have limited elongation under uniaxial tension, and the accuracy of tensile tests may be adversely affected by the low elongation before fracture.
Recently, instead of uniaxial tensile tests, the cup drawing approach has been preferred for determining the anisotropic properties of sheet metals with low uniaxial elongation. In this study, uniaxial tensile tests were carried out to obtain the Lankford coefficients of AA1050 and AA5754 aluminum sheets in the rolling direction (RD), the diagonal direction (DD) at an angle of 45° to rolling, and the perpendicular (transverse) direction (TD) to rolling. Subsequently, cup drawing experiments were conducted to predict the Lankford coefficients using an analytical approach based on the relevant literature. The results obtained from the uniaxial tensile tests were compared with those obtained from the cup drawings.
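The tensile-test route to R-values reduces to a small calculation: the Lankford coefficient is the ratio of width strain to thickness strain, with the thickness strain usually recovered from volume constancy. The sketch below uses hypothetical specimen measurements; the combination into normal and planar anisotropy follows the standard definitions.

```python
import math

def lankford_r(w0, w, l0, l):
    """Lankford coefficient from a uniaxial tensile test: width strain over
    thickness strain, with the thickness strain taken from volume constancy,
    eps_t = -(eps_l + eps_w), which avoids measuring thickness directly."""
    eps_w = math.log(w / w0)
    eps_l = math.log(l / l0)
    return eps_w / -(eps_l + eps_w)

# Hypothetical specimen measurements (mm) at 0, 45, and 90 deg to rolling:
r0 = lankford_r(12.5, 11.9, 50.0, 60.0)
r45 = lankford_r(12.5, 11.8, 50.0, 60.0)
r90 = lankford_r(12.5, 12.0, 50.0, 60.0)
r_bar = (r0 + 2 * r45 + r90) / 4      # normal anisotropy
delta_r = (r0 - 2 * r45 + r90) / 2    # planar anisotropy (earing tendency)
print(round(r0, 2), round(r_bar, 2), round(delta_r, 2))
```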
### Simulation of the Influence of High-Voltage Pulsed Potential Supplied During the Deposition on the Structure and Properties of the Vacuum-Arc Nitride Coatings

TiN films have been deposited on stainless steel plates using plasma-based ion implantation and deposition (PBII&D) with a negative pulse voltage from 850 to 2000 V. According to the results of X-ray structural analysis, titanium nitride with a cubic crystal lattice of the NaCl structural type is formed. Computer simulation allows determining the depth of the layer exposed to the radiation, taking into account all the cascade damage. The depth of the layer varies from 3 to 4.4 nm as the negative impulse potential (Uip) increases from 850 to 2000 V. A transition of the texture from [111] to [110] is observed in the TiN coatings with increasing Uip. For pulse durations of 10 and 16 μs over the entire range of Uip used, the following dependences are observed: with increasing Uip, the lattice deformation of crystallites with the [111] texture axis decreases, while the corresponding deformation of crystallites with the [110] texture axis increases.

Nataliya Pinchuk, Oleg Sobol

### Modeling of Processes for Creation New Porous Permeable Materials with Adjustable Properties

Forecasting and modeling the patterns of formation of the structure and properties of materials, taking into account the size of the structural elements of the charge, and establishing physical connections between the components, structure, and properties of the finished product and its operational properties is a topical problem of materials science. Sustained modern trends in industrial development keep raising the quality requirements for all types of products. The practice of designing new porous materials based on metal powders shows that fully realizing their strength and service characteristics requires a significant increase in the level of prediction of the materials' physical and mechanical properties and the development of new modeling methods, including a complex analysis of the material formation processes. Therefore, the focus is on model experiments predicting the dependence of material properties on the technological parameters of product manufacture, using analytical, numerical, and numerical-analytical methods with the help of 3D modeling.

Oleksandr Povstyanoy, Oleg Zabolotnyi, Victor Rud, Andriy Kuzmov, Halyna Herasymchuk

### Investigation of Properties of Mg and Al Based Nano Hybrid-Metallic Composites Processed Through Liquid Processing Technique

This research paper presents a comprehensive effort to carry out mechanical behaviour testing, corrosion testing, and metallurgical characterization of aluminium alloy (Al-6061) and magnesium alloy (Mg-AZ91D) reinforced nano-hybrid metallic composites processed using the liquid-processing stir casting technique. These stir-cast hybrid nano-metallic composites were manufactured using nano-sized reinforcements, namely SiC, graphite, and alumina with a size of ~100 nm. Comparative results for the cast composites were obtained using potentiodynamic polarization tests in the form of the capacitance performance of the dielectric properties. The best results were achieved for the graphite-reinforced aluminium composites, with better corrosion behaviour and high hardness and tensile values of the fabricated composites. The experimental data for the Mg-Graphite/SiC/Al2O3 alloy show good arc-like performance over the frequency range with low impedance. The results also illustrate good arc-like/Weber behaviour over the frequency range examined and indicate decent corrosion behaviour.

Shubham Sharma, Mandeep Singh, N. Jayaram-Babu, Kalagadda Venkateswara Rao, Jujhar Singh

### Application of Microphotogrammetric and Material Science Techniques in the Study of Materials on the Example of Alloy AlZnMgCu

The dominant trend of our time is the development of nanotechnology research, which is impossible without scanning electron microscopy for obtaining qualitative and quantitative characteristics of the studied micro-objects at the micron and submicron levels. When objects have a complex organization (microrelief) whose spatial structure is a priori unknown, it is not possible to correctly interpret the spatial organization or configuration based only on visual qualitative research. Therefore, there is a need to develop new methods that would allow the three-dimensional reconstruction of micro-objects. The article proposes a method for calculating the fractal dimension of the fracture surface microrelief based on a digital relief model, and solves the problem of the spatial orientation of the investigated plane for the correct analysis of the chemical composition of the specimens using the energy-dispersive method (EDX). AlZnMgCu aluminum alloys were investigated here. After preparation, they were subjected to heat treatment. At the end of the heat treatment, tensile tests (DIN EN 10002) and impact tests (DIN EN 130148) were followed by further metallographic, microscopic (SEM), and energy-dispersive (EDX) investigations.
The proposed method and the results shown in this study confirm the presence of the hardening effect under the heat treatment conditions used, which in turn allows the effective prediction of the mechanical properties of various products of the AlZnMgCu system.

Anna Uhl, Yuliia Melnyk, Oleksandr Melnyk, Inna Boyarska, Mykola Melnychuk

### Optimal Parameters of Q&P Heat Treatment for High-Si Steels Found by Modeling Based on "Constrained Paraequilibrium" Concept

The article is dedicated to designing the regime of Q&P (Quenching and Partitioning) heat treatment for the medium-carbon high-silicon steels 60Si2CrVA and 55Si3Mn2CrVMoNbA in order to improve their mechanical properties. The temperature at which quench cooling is interrupted during Q&P treatment was calculated by modeling based on the "constrained paraequilibrium" concept proposed by J. Speer. The Ms temperatures and the kinetics of the martensitic transformation for both steels were found experimentally and incorporated into the model. The modeling showed that the quenching stage should be finished at a steel temperature in the range of 150–220 °C, which guarantees the highest volume fraction of retained austenite in the microstructure (together with tempered martensite). The calculation results were verified by XRD measurements of the retained austenite in Q&P-treated specimens, found to be 17 vol% for steel 60Si2CrVA and 28.5 vol% for steel 55Si3Mn2CrVMoNbA, which are lower than the predicted values. The probable reasons for this discrepancy are outlined.

### Method for Determination of Flow Characteristic in the Gas Turbine System

The flow coefficient and the hydraulic resistance coefficient are widely used in the simulation of flow in various turbines. Widely used methods for computing the metering characteristics of the openings designed for gas turbine cooling systems are reviewed. The methods are based on such notions as the discharge coefficient and the hydraulic resistance coefficient. The use of the latter is preferable for the design of gas turbine cooling systems, because it correlates the air mass flow rate with the total pressure drop in the channels; however, in the general algorithm for calculating cooling systems, it is necessary to use the discharge coefficient. The work establishes a relationship between the discharge coefficient and the hydraulic resistance coefficient. The proposed method consists in partitioning the overall total pressure losses in the hole into elements, in particular inlet, outlet, and friction pressure losses. The air density and the Mach number were defined for each element. It was proposed to take into account the influence of the setting angles of the openings on the hydraulic resistance. The method used for computing the metering characteristics of holes showed sufficiently good agreement with experimental data for pressure ratios in the range P1*/P2 = 1–2.5, relative channel lengths in the range l/d = 6.4–24.3, and opening setting angles of 30°, 45°, and 90°.

Olena Avdieieva, Oksana Lytvynenko, Iryna Mykhailova, Oleksandr Tarasov
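For incompressible flow, the link between the two coefficients the abstract discusses is short enough to sketch: if the hydraulic resistance coefficient ζ collects all losses referred to the dynamic head in the hole, the discharge coefficient follows as μ = 1/√(1 + ζ). This is one common convention and ignores compressibility; the paper's element-by-element method, with density and Mach number per element, is more detailed.

```python
import math

def discharge_coefficient(zeta: float) -> float:
    """Incompressible-flow link between the hydraulic resistance coefficient
    zeta (losses referred to the dynamic head in the hole, exit velocity head
    excluded) and the discharge coefficient mu. Convention-dependent; the
    paper derives a more detailed, compressible relationship."""
    return 1.0 / math.sqrt(1.0 + zeta)

def mass_flow(zeta, area_m2, rho, dp_total):
    """Air mass flow through a cooling hole: m_dot = mu * A * sqrt(2*rho*dp)."""
    return discharge_coefficient(zeta) * area_m2 * math.sqrt(2.0 * rho * dp_total)

# Hypothetical cooling hole: zeta = 1.5, d = 2 mm, air at 4 kg/m^3, dp = 50 kPa.
A = math.pi * (2e-3 / 2) ** 2
print(round(discharge_coefficient(1.5), 3), round(mass_flow(1.5, A, 4.0, 5e4), 5))
```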
### Cutting Stone Building Materials and Ceramic Tiles with Diamond Disc

During the repair and restoration of buildings, ceramic tiles and blocks of Al2O3 and ZrO2 often have to be cut. At present, diamond abrasive discs are widely used for these purposes. The cutting process is accompanied by considerable heat release and heating of the diamond disc. At a temperature of about 600 °C, the tensile strength of a disc is reduced by a factor of 2 and graphitization of the diamond grains occurs. Thus, when cutting stone building materials with a diamond disc, the disc heating temperature should not exceed 600 °C. In this work, mathematical modeling of the heating of a diamond cutting disc on a metal base while cutting ceramic materials was performed to determine the time of continuous operation before the critical temperature of 600 °C is reached. The simulation results showed the dependence of the disc heating temperature on the disc diameter, the rotation speed, the feed per minute, the grain size, and the disc thickness. It is shown that, by selecting appropriate process characteristics, the time of continuous operation can be of the order of 10–12 min without the use of forced cooling.

### Cavitation in Nozzle: The Effect of Pressure on the Vapor Content

Two-phase nozzles can work in jet injectors for various applications, including jet heat pumps (steam-water injectors) and thermocompressors. The lack of a reliable description of the mechanism of evaporating liquid flow limits their use as energy-efficient working bodies. Estimating the effect of the initial pressure and temperature on the vapor content makes it possible to determine the variant of initial parameters at which the overproduction of vapor is greatest. The goal of this work is to investigate the effect of the pressure and temperature at the nozzle inlet on the outlet vapor content. We use a model of a compressible two-phase medium and a kinetic model of evaporation/condensation; the model also includes the dynamic and mechanical equilibrium of the process. The mathematical model, implemented with the CFD package Ansys CFX, considers the dynamic growth of the vapor bubble. The obtained results show an average deviation from the experimental values of 2% for pressure and 10% for velocity. Increasing the pressure and temperature at the nozzle inlet increases the vapor mass fraction at the nozzle outlet.

Oleh Chekh, Serhii Sharapov, Maxim Prokopov, Viktor Kozin, Dariusz Butrymowicz

### Control of Operation Modes Efficiency of Complex Technological Facilities Based on the Energy Efficiency Monitoring

The article proposes an approach to controlling the operation mode efficiency of a technological facility based on the simultaneous control of power consumption efficiency and technological parameters in order to identify the causes of inefficient operation. A procedure comparing the actual power consumption with its planned value is used to control the power consumption efficiency, and Shewhart control charts are applied to control the technological parameters. The standards for the controlled parameters are formed from the monitoring system data in order to take the facility's operating conditions into consideration. The confidence interval of the expected power consumption, determined from the mathematical model of power consumption, is selected as the power consumption standard. The power consumption standard is determined for every day, taking into consideration the actual values of the technological parameters. The control limits of the Shewhart control charts serve as the standards for the technological parameters. The proposed control procedure takes into consideration the actual operation modes of the facility, which ensures the correct determination of the control limits and correct control results, and provides the possibility of adjusting the standards of the controlled parameters. A joint analysis of the control charts allows determining the periods of time when the operation mode of the technological facility was inefficient in terms of power consumption and identifying the reasons that led to it.

Liudmyla Davydenko, Viktor Rozen, Volodymyr Davydenko, Nina Davydenko
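The Shewhart charts used for the technological parameters can be sketched directly: for an individuals chart, the centre line is the mean of in-control history and the control limits sit at ±3σ, with σ estimated from the average moving range. The data below are synthetic; the "standards" in the paper are additionally tied to facility operating conditions.

```python
import numpy as np

def individuals_chart_limits(x):
    """Shewhart individuals (I-MR) chart limits for a monitored technological
    parameter: centre line at the mean, control limits at +/- 3 sigma, with
    sigma estimated from the average moving range (MR-bar / d2, d2 = 1.128)."""
    mr = np.abs(np.diff(x))
    sigma_hat = mr.mean() / 1.128
    cl = x.mean()
    return cl - 3 * sigma_hat, cl, cl + 3 * sigma_hat

# Hypothetical daily values of a parameter during known-good operation:
rng = np.random.default_rng(2)
history = rng.normal(72.0, 1.5, 60)
lcl, cl, ucl = individuals_chart_limits(history)
new_value = 78.2
print(round(lcl, 1), round(cl, 1), round(ucl, 1), new_value > ucl)  # out of control?
```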
### Performance Comparison of Two Guidance Systems for Agricultural Equipment Navigation

Cover crops have been gaining popularity in the Northern Great Plains as an effective practice for improving soil health. Planting cover crops between rows of standing corn at the V6–V8 stage requires precise navigation to avoid damaging the standing crop, but field observations showed that the grain cart often trampled rows of plants. Developing accurate and efficient methods for finding the correct path to guide farm equipment automatically during these operations is an important need. In this project, the capabilities of ultrasonic and tactile navigation sensors (Reichhardt® Electronic Innovation Products) were studied and compared. The objective of this study was to consider the difference between the performances of the two navigation systems in guiding the grain cart tires between rows under different operating conditions, speeds, row patterns, and terrain conditions. The comparison tests were conducted with five different fixed speeds, variable speeds, and four different row patterns under laboratory conditions on a purpose-designed test bench. The study results show that both sensors can successfully locate and identify row patterns if they are appropriately adjusted; however, the steering system of the test bench failed to respond to the rows identified by the sensors due to a missing feedback controller. Moreover, appropriate digital filters were designed to remove undesired noise from the signals read by the sensors, and a fitted model was found to compute the exact physical location of the system with respect to the rows.

Nadia Delavarpour, Sulaymon Eshkabilov, Thomas Bon, John Nowatzki, Sreekala Bajwa

### Influence of Discrete Electromechanical Hardening on the Wear Resistance of Steels

A new method of discrete electromechanical treatment has been developed, in which a cylindrical surface is hardened by creating linear zones of increased hardness using a roller tool. Analytical dependences and numerical models have been proposed for determining the contact characteristics of the tool-part interaction during electromechanical strengthening. Numerical simulations are carried out by the finite element method; as a result, the stress-strain state of a surface after treatment by the discrete electromechanical method has been obtained. The treatment provides different geometrical layouts of the locally hardened areas. A wear model and a method for determining wear characteristics from experimental tests have been developed for predicting wear resistance.
To analyze the influence of the stress-strain state of a discrete surface on wear resistance, experimental tests with discrete electromechanical strengthening have been conducted. The obtained results indicate that the wear of the samples depends significantly on the slip speed: the examined electromechanically treated samples have high wear resistance at high slip speeds.

Aleksandr Dykha, Oleg Makovkin, Maksym Dykha

### Parallel Solution of Dynamic Elasticity Problems

An algorithm for the parallel solution of dynamic elasticity problems for axisymmetric objects, treated as three-dimensional problems of elasticity theory, has been proposed. Semidiscrete approximations reduce the problem to the solution of a Cauchy problem for a system of linear differential equations of the second order. The matrix elements are determined with the help of the semi-analytical finite element method (FEM), using the analytical Fourier series expansion in trigonometric functions of the angular coordinate and the numerical expansion of isoparametric approximations on serendipity quadrilaterals in the meridional section. The Cauchy problem is solved by decomposing the solution into eigenfunctions, which are found using the subspace iteration method. The method has been parallelized with domain decomposition and the message passing interface (MPI), and the parallelized method has been scaled to over 20 processors with high parallel performance. Numerical examples have demonstrated the performance of the proposed algorithm; the numerical results indicate that the method is very accurate and that its parallelization is efficient for both types of problems.

Ivan Dyyak, Vitaliy Horlatch, Marianna Salamakha
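The core of the algorithm (a second-order Cauchy problem solved by eigenfunction decomposition) can be shown in a few lines. The sketch below uses a dense generalized eigensolver on a tiny three-DOF system as a stand-in for the subspace iteration and MPI parallelization used on real FEM matrices.

```python
import numpy as np
from scipy.linalg import eigh

def modal_solution(M, K, x0, v0, t):
    """Solve the semidiscrete Cauchy problem M x'' + K x = 0 by modal
    decomposition: generalized eigenproblem K phi = w^2 M phi, after which
    each modal coordinate evolves as an independent oscillator."""
    w2, Phi = eigh(K, M)                 # mass-orthonormal modes: Phi.T M Phi = I
    w = np.sqrt(w2)
    q0 = Phi.T @ M @ x0                  # project initial conditions onto modes
    p0 = Phi.T @ M @ v0
    q = (q0[:, None] * np.cos(np.outer(w, t))
         + (p0 / w)[:, None] * np.sin(np.outer(w, t)))
    return Phi @ q                       # back to physical coordinates

# Tiny 3-DOF example standing in for an assembled FEM system:
M = np.diag([2.0, 1.0, 1.0])
K = np.array([[4.0, -2.0, 0.0], [-2.0, 4.0, -2.0], [0.0, -2.0, 2.0]])
x = modal_solution(M, K, x0=np.array([0.0, 0.0, 0.1]),
                   v0=np.zeros(3), t=np.linspace(0.0, 5.0, 6))
print(np.round(x[:, -1], 4))
```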
### Influence of Discrete Electromechanical Hardening on the Wear Resistance of Steels

A new method of discrete electromechanical treatment has been developed. The method of hardening a cylindrical surface is based on the creation of linear hardened zones of increased hardness using a roller tool. Analytical dependences and numerical models have been proposed for determining the contact characteristics of the interaction between the tool and the workpiece during electromechanical strengthening. Numerical simulations are carried out by the finite element method; as a result, the stress-strain states of a surface after treatment by the discrete electromechanical method have been obtained. The treatment provides different geometrical layouts of the locally hardened areas. A wear model and a method for determining wear characteristics from experimental tests have been developed for predicting wear resistance. To analyze the influence of the stress-strain state of a discrete surface on wear resistance, experimental tests with discrete electromechanical strengthening have been conducted. The obtained results indicate that the wear of samples depends significantly on the slip speed, and the electromechanically treated samples show high wear resistance at high slip speeds.

Aleksandr Dykha, Oleg Makovkin, Maksym Dykha

### Parallel Solution of Dynamic Elasticity Problems

An algorithm for the parallel solution of dynamic problems of elasticity theory for axisymmetric objects, treated as three-dimensional problems of elasticity theory, is proposed. Semidiscrete approximations reduce the problem to the solution of the Cauchy problem for a system of linear differential equations of the second order. The matrix elements are determined with the help of the semi-analytical finite element method (FEM), using an analytical Fourier-series expansion in trigonometric functions of the angular coordinate and a numerical expansion over isoparametric serendipity quadrilaterals in the meridional section. The Cauchy problem is solved by decomposing the solution into eigenfunctions, which are found using the subspace iteration method. The method has been parallelized with domain decomposition and the message passing interface (MPI), and the parallelized method has been scaled to over 20 processors with high parallel performance. Numerical examples demonstrate the performance of the proposed algorithm; the results indicate that the method is accurate and that its parallelization is efficient for both types of problems.

Ivan Dyyak, Vitaliy Horlatch, Marianna Salamakha
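The step from the semidiscrete FEM system to decoupled modal equations can be shown in miniature. The sketch below uses a toy 3-DOF mass and stiffness pair standing in for the FEM matrices, and SciPy's dense `eigh` rather than the paper's subspace iteration, to solve M q̈ + K q = 0 by eigenfunction decomposition.

```python
import numpy as np
from scipy.linalg import eigh

# Toy stand-ins for the FEM mass and stiffness matrices (assumed values).
M = np.diag([2.0, 1.0, 1.0])
K = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])

# Generalized eigenproblem K v = lambda M v; columns of V are
# mass-orthonormal mode shapes, lam the squared natural frequencies.
lam, V = eigh(K, M)
omega = np.sqrt(lam)

# Modal superposition for M q'' + K q = 0 with q(0) = q0, q'(0) = 0:
q0 = np.array([1.0, 0.0, 0.0])
eta0 = V.T @ M @ q0              # project the initial state onto the modes
t = 0.5
q_t = V @ (eta0 * np.cos(omega * t))
print(omega, q_t)
```

Because each mode evolves independently, the modal equations can be distributed across MPI ranks with essentially no communication, which is one reason such decompositions parallelize well.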
### Wear Resistance of Hardened Nanocrystalline Structures in the Course of Friction of Steel-Grey Cast Iron Pair in Oil-Abrasive Medium

The results of research on the influence of the modified hardened surface layer after friction hardening on wear resistance in the course of oil-abrasive wear of steel-grey cast iron friction pairs are presented. Friction hardening is one of the surface hardening methods using highly concentrated energy sources. A nanocrystalline hardened (white) layer is formed in the surface layers after friction hardening. The thickness and microhardness of the hardened layer depend on the carbon content of the steel and its preliminary heat treatment. Thus, the thickness of the hardened layer was 120 µm and its microhardness 5.6 GPa, against an initial structure hardness of 3.2 GPa, in hardened and high-tempered test pieces of Steel C45 (EN) after friction hardening. The grain size of the hardened surface layer was 20-40 nm near the treated surface. It is shown that the hardened layer significantly increases the performance of the pair "Steel 41Cr4 (EN)-Grey cast iron EN-GJL-200" during sliding friction in an oil-abrasive medium. When the unit load was increased from 2 to 6 MPa, the wear rate of the hardened pair was 2.1-3.7 times lower than that of an unhardened pair. Only one component of the friction pair was hardened.

Ihor Hurey, Tetyana Hurey, Volodymyr Gurey

### Efficiency Analysis of Gas Turbine Plant Cycles with Water Injection by the Aerothermopressor

Improving the efficiency of gas turbine plants has been addressed in two main directions: by intercooling the cyclic air between compressor stages, and by increasing the amount of working fluid in the cycle with heat recuperation. The technology of cyclic air cooling in gas turbine plants is based on the hypothesis of thermo-gas-dynamic compression and cooling, which consists in increasing the pressure as a result of the instantaneous evaporation of a dispersed liquid injected into an accelerated flow of superheated steam or gas. The paper presents the basic schemes for installing the aerothermopressor along the gas turbine plant path. Seven variants of gas turbine plant cycles are analyzed, and scheme-technical solutions using the aerothermopressor are determined to obtain optimal operating parameters of the gas turbine plant. The achievable efficiency increase has been determined: in the cycle with intercooling of the cyclic air the efficiency is 46.9%, and in the cycle with heat recuperation it is 55.2%.

Dmytro Konovalov, Halina Kobalava

### A Simulation Tool for Kinematics Analysis of a Serial Robot

Robot programming is a very significant task in the field of robotics. Off-line programming (OLP) is a method performed before robot manipulation: the robot code is edited manually using computer software that simulates the real robotic scenarios. Task sequence planning, short-term production, flexibility during operation, and anticipating the real behaviour of the robots are some of the reasons that make users prefer OLP. Operations can be visualized in many processes such as welding, cutting, and even medical applications. In this study, off-line models are offered that include the forward and inverse kinematics of a six degree-of-freedom (DOF) serial robot manipulator (Denso VP-6242G). The Robotics Toolbox combined with the GUI Development Environment in Matlab® is used for the forward kinematics solution, and a Matlab® Simulink model with SimMechanics blocks is used in the inverse kinematic analysis. Visualization is enriched by 3D SolidWorks® models of the robot parts. Basic motion examples that can be used in many areas are presented.

M. Erkan Kütük, L. Canan Dülger, M. Taylan Das
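Forward kinematics of a serial arm reduces to chaining one homogeneous transform per joint. A minimal NumPy sketch of the standard Denavit-Hartenberg convention follows; the link table is illustrative only and is not the VP-6242G's actual geometry.

```python
import numpy as np

def dh(theta, d, a, alpha):
    """Homogeneous transform of one link in the standard DH convention."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(q, table):
    """Chain the link transforms for joint angles q (radians)."""
    T = np.eye(4)
    for qi, (d, a, alpha) in zip(q, table):
        T = T @ dh(qi, d, a, alpha)
    return T

# Illustrative 6-DOF link table (d, a, alpha per joint), not the real robot's.
table = [(0.28, 0.0, -np.pi / 2), (0.0, 0.21, 0.0), (0.0, 0.075, -np.pi / 2),
         (0.21, 0.0, np.pi / 2), (0.0, 0.0, -np.pi / 2), (0.07, 0.0, 0.0)]
T = forward_kinematics(np.deg2rad([10, -30, 45, 0, 20, 0]), table)
print(T[:3, 3])   # end-effector position in the base frame
```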
### The Imitation Study of Taper Connections Stiffness of Face Milling Cutter Shank Using Machine Spindle in the SolidWorks Simulation Environment

The article deals with increasing the radial stiffness of the connection of a face milling cutter with a 7:24 taper shank and a machine spindle. Preliminary investigations prove that a stiffness increase of such connections is possible by designing the face milling cutter shank with two contact centering faces; in this case, the smaller centering face is proposed to be made hollow, with reduced radial stiffness. In the paper, we carry out an imitation stiffness study and consider the stress-strain state of a face milling cutter taper connection with an improved shank under loading with a machine spindle, using SolidWorks. For the simulation, we use a parametric 3D model of the static behavior of the taper connection, in which the external and internal taper dimensions are associated with certain deviation limits through the SolidWorks equation tool. The parameters of the computational process of nonlinear static analysis in the simulation module have been determined, and the boundary and kinematic conditions of the parametric model have been considered. It has been determined that using standard simulation tools to simulate the clamping force leads to an artificial increase in system stiffness; therefore, to simulate the clamping action, we propose to use thermosetting forces of a specially created orthotropic material. The imitation study shows that a face milling cutter shank with two centering faces leads, for all deviation limits, to a higher connection stiffness (smaller radial displacements).

Oleksandr Melnyk, Larysa Hlembotska, Nataliia Balytska, Viacheslav Holovnia, Mykola Plysak

### Mathematical Modeling of Operating Process and Technological Features for Designing the Vortex Type Liquid-Vapor Jet Apparatus

The article discusses the design features of vortex-type liquid-vapor jet devices with an obliquely cut nozzle of the motive flow at the entry to the vortex chamber. Mathematical modeling proves the influence of the oblique cut in deflecting the flow from the nozzle axis by a certain angle. The model is based on a continuity equation in the modified Baer form for the adiabatic discharge process from an expanding nozzle with an oblique cut, as well as on the fundamental laws of thermodynamics for the operating process. The proposed mathematical model yields the analytical dependence between the deflection angle of the flow from the nozzle axis in the oblique cut of an expanding nozzle and the following geometrical and physical parameters: the oblique angle of the nozzle, the taper angle of the nozzle, the initial pressure in front of the nozzle, the medium pressure at the outlet of the nozzle, the maximum nozzle expansion, and the physical properties of the flow. The results of the mathematical modeling of the flow deflection in the oblique cut of the expanding nozzle are presented analytically and graphically. Finally, a methodology for the numerical calculation of the geometrical and operating parameters ensuring the proper operating process is proposed, and the technological features of designing the vortex-type liquid-vapor jet apparatus are described.

Iurii Merzliakov, Ivan Pavlenko, Oleh Chekh, Serhii Sharapov, Vitalii Ivanov

### Dynamic Stress State of Auxetic Foam Medium Under the Action of Impulse Load

The paper presents studies on the application of the boundary integral equation method to the dynamic stress state of negative Poisson's ratio foam media with a tunnel cavity under non-stationary loads. For the plane problem, the distributions of normalized hoop and radial stresses were obtained for a non-stationary impulse load applied to the boundary of the cavity cross-section. For the solution of the non-stationary problem, the Fourier transform in the time variable was used. In Cosserat elasticity, for the application of the boundary integral equation method, the Fourier-transform potential representations of displacements and microrotations were written, and the fundamental functions of displacements and microrotations for the two-dimensional case of the Cosserat continuum were built. For solving the time-domain problem, a system of singular integral equations was written, and the method of mechanical quadratures was applied for the numerical calculations. A numerical example compares the dynamic stress distributions in foam media with negative and positive Poisson's ratio under impulse load.

Olena Mikulich, Lyudmila Samchuk, Yulia Povstiana
### Improvement the Performance of Liquid Purification by Dynamic Rotary Filters

Liquid purification from solid mechanical admixtures is a topical problem in many technical applications. A promising way to improve the performance and reduce the cost of purification is to use the dynamic filtration principle based on a rotating filter element. This principle is known to be successfully utilized in thin-membrane filtration applications. Wide use of rotational filtration in the liquid systems of vehicles and industrial equipment requires theoretical study at larger rotational and filtration rates, as well as consideration of porous baffles with lower hydraulic resistance. A numerical approach is elaborated for simulating liquid-phase motion in a rotary filter with a porous filtering cylinder and support framework. Vortical flow motion in the gap is found to be a factor leading to a significant local increase in filtration velocity and even to the exclusion of part of the filtering surface from operation. A modeling approach for studying particle motion in the stable flow outside the rotating porous cylinder is substantiated. It is demonstrated, in a deterministic treatment, that contact of particles with the filtering surface can be prevented under definite conditions. It is also shown that the maximum influence of the centrifugal force on suspended particle motion is achieved when the particles are an order of magnitude smaller than the boundary layer thickness.

Ievgen Mochalin, Suosheng Zheng, Jinyu Liu

### Calculation Optimization of Complex Shape Shells by Numerical Method

The article presents the results of theoretical and experimental studies of complex-shape shells performed by the method of curvilinear grids in order to optimize the calculation of strength and stability. Existing numerical methods for calculating shells, such as the finite difference method, the variational difference method, and the finite element method, are analyzed. To improve the convergence of the finite difference method by reducing the approximation error of rigid displacement functions, the finite element method was used for the first time. With this method, the finite difference approximation was obtained by averaging the tangential strains over a differential interval using integration by Simpson's formula. The new finite difference scheme was called the method of curvilinear grids; its essence is that the vector differential relations are first replaced by their finite difference analogues, and the transition to scalar relations is then performed by projection onto the local basis. The method of curvilinear grids is applied to calculate a complex shell formed by a combination of four hypars. The result of the calculation is a graph of the dependence of the critical buckling load on the cross-sectional area of the edges. The convergence of the obtained results was studied with different methods at different mesh densities.

Ruslan Pasichnyk, Oksana Pasichnyk, Olga Uzhegova, Olexandr Andriichuk, Olexandr Bondarskii

### Improvement of the Hydraulic Units Design Based on CFD Modeling

The development of new hydraulic units for hydraulic drives, or the improvement of the characteristics of existing units, can be performed on the basis of results obtained by computer simulation. The possibility of using the CFD module of Flow Simulation to study hydraulic pressure losses in the design of a hydraulic lock is investigated here. A 3D model of the working section of the hydraulic distributor has been developed in the SolidWorks CAD system. Hydraulic pressure losses during the flow of fluid through the hydraulic lock occur at the output of the injection channel and at the input to the working channel of the working section of the hydraulic distributor. The pressure loss is due to the design peculiarities of the locking and regulating elements of the hydraulic lock. Based on the computer simulation of the hydrodynamic processes of fluid flow under pressure through the hydraulic lock, the hydraulic pressure losses are determined. To reduce them, changes in the design of the hydraulic lock that do not degrade its performance are proposed. These allow reducing the hydraulic pressure losses in the working section of the hydraulic distributor, which reduces the overall pressure loss in the hydraulic drive.

Oleksandr Petrov, Leonid Kozlov, Dmytro Lozinskiy, Oleh Piontkevych
### Calculation of Hydrostatic Forces of Multi-gap Seals and Its Dependence on Shaft Displacement

Using the finite volume method, the problem of three-dimensional fluid flow through a three-gap seal of a high-pressure centrifugal pump at different values of radial shaft displacement has been solved. The numerical calculations were carried out without taking into account the deformation of the seals under the influence of the uneven pressure distribution. Pressure distributions, leakage values, and hydrodynamic radial forces were obtained as functions of the eccentricity. From the numerical calculations, the dependences of the radial forces on the radial shaft displacement were constructed and the hydrostatic stiffness coefficient was determined. These dependences were also obtained using analytical equations; the numerical results differ from the theoretical ones by less than 5%. It should also be noted that the application of the three-gap seal significantly reduces leakage compared to homogeneous and double-gap seals, and increases the radial force stiffness, which in turn provides a low level of rotor vibration.

Oleksandr Pozovnyi, Andrii Deineka, Dmytro Lisovenko

### The Investigation of Particle Movement on a Helical Surface

Differential equations of particle movement on the rough surface of a spiral gutter under the particle's own weight are derived in the article. The cross-section curve of the gutter, in a vertical plane passing through the axis of the surface, is given by parametric equations in general form. Special cases for individual cross-sectional lines (a straight line and a circle) are considered. If the section is an arc of a circle, a spiral gutter is formed; in the particular case when the cross section is a straight line inclined upwards to the axis, the helical surface is an oblique helicoid. The equations are solved by numerical methods and the trajectories of the particle movement along the helical surface are constructed. After the motion stabilizes, the particle has a constant speed and its trajectory is a helical curve. For this particular case, analytical dependencies were found that allow calculating the speed of the particle and its distance from the axis of the surface. The case when the elevation angle of the lowest helical curve of the gutter is equal to the angle of friction of the particle on the surface is also considered. In the case of a spiral gutter, the elevation angle of its lower helical line should be greater than the friction angle in order to avoid congestion during transportation of particles of the technological material.

Sergiy Pylypaka, Viktor Nesvidomin, Tatiana Zaharova, Olexandr Pavlenko, Mikola Klendiy
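The stabilization to a constant speed can be reproduced with a much simpler stand-in model: a bead sliding down a helical wire with Coulomb friction. The sketch below is that reduced model, with an assumed helix radius, pitch angle, and friction coefficient; it is not the authors' full gutter equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

g, mu = 9.81, 0.3                    # gravity, friction coefficient (assumed)
R, beta = 0.2, np.deg2rad(35)        # helix radius and pitch angle (assumed)
kappa = np.cos(beta) ** 2 / R        # curvature of a helical line

def rhs(t, y):
    v = y[0]
    # For a helix, the centripetal demand and the gravity component normal
    # to the wire are orthogonal, so the reaction is their vector sum.
    N = np.hypot(v * v * kappa, g * np.cos(beta))
    return [g * np.sin(beta) - mu * N]

sol = solve_ivp(rhs, (0.0, 10.0), [0.0], max_step=0.01)
print("speed approaches", round(float(sol.y[0, -1]), 3), "m/s")
```

The friction term grows with speed through the centripetal part of the reaction, so the speed saturates; a steady motion exists only when tan(beta) exceeds the friction coefficient, mirroring the abstract's condition that the elevation angle exceed the friction angle.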
### The Wall Erosion in a Vortex Chamber Supercharger Due to Pumping Abrasive Mediums

Rapid wall erosion of pump elements occurs during the pumping of two-phase mediums in hydraulic and pneumatic transport systems. In these circumstances, it is reasonable to use jet technology in general and vortex chamber superchargers in particular. The vortex chamber superchargers have the best energy-efficiency indicators among jet superchargers for the pumping of bulk materials. The purpose of the article is to study the wall erosion of the vortex chamber. The mathematical modeling of the flow is carried out by solving the averaged Reynolds equations using a corrected SST turbulence model. Simultaneously with the hydrodynamic calculations, the trajectories of the solid particles of the abrasive material were calculated, and Finnie's model was used to model the wall erosion. It is found that for all values of the flow rate and, accordingly, of the concentration of solid particles, uniform wear of the vortex chamber is observed. To ensure the durability of the superchargers, it is necessary to increase the thickness of the chamber's walls. In the process of wear, the ratio of the diameters of the inflow channels to the diameter of the vortex chamber will increase. This affects the energy characteristics of the supercharger: the efficiency, the amount of medium at the outflow of the device, and the vacuum value near the axis. By setting minimum acceptable parameters it is possible to predict the wear of the chamber and calculate the service life of the supercharger without expensive experimental investigations.

Andrii Rogovyi, Sergey Khovanskyy, Irina Grechka, Jan Pitel

### Data Acquisition Procedures for A&DM Systems Dedicated for the Foundry Industry

The article presents the effects of cooperation with Polish and European foundries on the design of procedures useful in acquisition and data mining (A&DM) systems. The authors' procedures for collecting data from foundry processes, including the topography of data sources, are presented. These procedures have been associated with the possibilities of extended data analysis, which should be implemented in dedicated A&DM-type systems. Specialized systems seem to be the most appropriate tools for rapid analyses of complex, multivariate production processes. These systems allow assessing the stability of selected process parameters and subsequently identifying cause-and-effect relationships related to the quality of castings. The choice of the number and type of parameters that can be associated with process anomalies depends on the system user, his knowledge and experience. This paper indicates the importance of dedicated A&DM systems built from scratch, developed in cooperation with a specific foundry.

Robert Sika, Zenon Ignaszak

### Choice of Correcting Link for Electrohydraulic Servo Drive of Technological Equipment

Volodymyr Sokolov, Oleg Krol, Oksana Stepanova
### Simulating the Process of a Bird Striking a Rigid Target

A model was developed to simulate the process of a bird striking a rigid target. The target is a hinge-supported steel plate which, in a first approximation, emulates aircraft structural components. The dynamic behavior of the plate is considered within a generalized model to allow for the spatial character of deformation of the structure. The method for solving the equation of plate motion consists in representing the solution as a double trigonometric series; the equation of motion is thereby transformed into a system of ordinary second-order differential equations, which are integrated by expanding the solution into a Taylor series. The model of a bird's impact on a plate was developed based on experimental research, and the influence of the plate's angle of impact on the plate strain was studied for a bird-strike case. A comparison of the theoretical results with experimental data showed a close fit. The suggested model of a bird striking a plate is used for evaluating the strength of different aircraft components.

Natalia Smetankina, Sergey Ugrimov, Igor Kravchenko, Dmitry Ivchenko
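For a hinge-supported (simply supported) rectangular plate, the double trigonometric series the abstract refers to is, in classical thin-plate theory, the Navier expansion. A sketch of the standard reduction, with plate sides a and b, bending stiffness D, density ρ, and thickness h, is

$$w(x,y,t)=\sum_{m=1}^{\infty}\sum_{n=1}^{\infty} q_{mn}(t)\,\sin\frac{m\pi x}{a}\,\sin\frac{n\pi y}{b},$$

and substituting this into the plate equation $$D\nabla^4 w+\rho h\,\ddot w=p(x,y,t)$$ decouples it into independent ordinary equations for the coefficients,

$$\ddot q_{mn}+\omega_{mn}^2\,q_{mn}=\frac{P_{mn}(t)}{\rho h},\qquad \omega_{mn}=\pi^2\left(\frac{m^2}{a^2}+\frac{n^2}{b^2}\right)\sqrt{\frac{D}{\rho h}},$$

where $$P_{mn}(t)$$ is the double-sine Fourier coefficient of the impact pressure. This is the textbook route; the paper's generalized model and its Taylor-series time integration add further structure on top of it.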
### Static and Flow-Rate Characteristics of Centrifugal Pump's Balancing Device with Considering the Random Changes of Its Main Parameters

The automatic balancing device is one of the basic units of many multi-stage centrifugal pumps. The operating characteristics of such a device are determined by the geometrical characteristics of its cylindrical and face throttles, which are stochastic by nature; a deterministic approach therefore cannot give valid results. The purpose of this paper is to determine the probabilistic static and flow-rate characteristics of a centrifugal pump's automatic balancing device. Random changes of the mean value of the radial cylindrical gap, the face throttle taper, the eccentricity, and the loss coefficients are taken into account in the presented calculation model. It is shown that the actual values of the balancing force and of the flow rate in the chamber of the automatic balancing device can differ substantially from the calculated ones. The obtained results allow estimating the possible values of the flow rate and axial force in the automatic balancing device due to changes in manufacturing and installation tolerances, as well as ensuring stable operation of the pump.

Yuliia Tarasevych, Ievgen Savchenko, Nataliia Sovenko
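The general idea of propagating random geometry into a flow characteristic is easy to sketch with a Monte Carlo draw. The toy model below propagates only the classic cubic dependence of a laminar annular throttle's conductance on its gap height; the tolerance numbers are invented, and the authors' full device model is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical manufacturing scatter of the radial cylindrical gap (metres).
h = rng.normal(150e-6, 10e-6, n)

# For a laminar annular throttle the conductance scales as h**3, so small
# gap scatter produces much larger scatter of the relative flow rate.
q_rel = h ** 3 / (150e-6) ** 3

print("mean", float(q_rel.mean()),
      "95% interval", np.percentile(q_rel, [2.5, 97.5]))
```

Small geometric tolerances translate into a disproportionately large flow-rate spread, which is the qualitative point of treating the characteristics stochastically.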
### Improvement of Manufacture Workability for Distribution Systems of Planetary Hydraulic Machines

Planetary hydraulic machines are most often used in mechatronic hydraulic drives. The efficiency of the distribution system of a planetary hydraulic motor is determined by the manufacturability of its elements, in particular the distribution windows. To solve the problem of improving the manufacture workability of the distribution system elements, the shape of the distributor and sleeve valve passages is substantiated. A design model, a mathematical apparatus, and a calculation algorithm were developed; they made it possible to investigate how changes in the geometric parameters of the distribution system affect the throughput of a planetary hydraulic motor with passages in the form of a circle. The initial data and initial conditions for modeling the operation of distribution systems with various kinematic diagrams were substantiated, and the change in throughput for the different kinematic diagrams was investigated. It was established that an increase in the number of working passages of the distributor causes a decrease in the flow area of the distribution system; the amplitude of the flow area oscillations decreases as well. When the discharge passages of the distributor were used as additional working windows, the throughput of the distribution system increased and the amplitude of the area oscillations was reduced. The critical parameter determining the operability of the distribution system is the oscillation of the flow area; therefore, when designing distribution systems for hydraulic motors, it is recommended to use additional discharge windows as working passages.

Angela Voloshina, Anatolii Panchenko, Oleg Boltyansky, Olena Titova

### Signal Processing and Conditioning Tools and Methods for Road Profile Assessment

This work presents a comparative analysis of several methods and tools for road roughness assessment and highlights some practical aspects of using sensors and data acquisition tools, including experimental data analyses. For the assessment of road profile roughness, geometrical and response-type measurement (RTM) approaches with class 1, 2 and 3 tools (Total Station 06 by Leica Geo System™, a laser profilometer by DYNATEST™, Dytran™ accelerometers, a smartphone, a GY-61 with an in-house designed and developed data acquisition system and analog-to-digital converter on an Arduino™ Uno board, and a Roughometer III from the ARRB Group Ltd) are employed. Road tests were performed at vehicle velocities of 20, 30, 40, 50, 60, 70 and 80 km/h. The comparative studies demonstrate that the RTM approaches with accelerometers assess the road profile and evaluate the IRI with sufficiently high quality and can be comparable in accuracy with the class 1 geometrical static profilers and class 2 mobile profilometers such as the DYNATEST™ laser profiler.

Abduvokhid Yunusov, Davron Riskaliev, Nurmukhammad Abdukarimov, Sulaymon Eshkabilov

### Low-Frequency Ultrasound as an Effective Method of Energy Saving During Forming of Reactoplastic Composite Materials

Various aspects of the application of low-frequency ultrasound with the aim of achieving energy savings in the molding of reactoplastic polymer composite materials are analyzed. The efficiency of ultrasonic technology in the molding of traditional and nanomodified polymers on an epoxy matrix is considered, and the features of the origin and development of ultrasonic cavitation in liquid epoxy binders are described. The main parameters of ultrasonic treatment are discussed. Experimental curing cyclograms of "cold"- and "hot"-curing epoxy binders used in the manufacture of reinforced plastics, obtained with and without ultrasonic treatment, are analyzed. The improved designs of the impregnation, dosing, and winding units used on serial impregnating and drying equipment intended for the preparation of prepregs are considered. An improvement in the structure of cured reinforced composites obtained with ultrasonic treatment is established through a comparative electron-microscopic examination of the microsections and fracture locations of composites produced with and without the treatment.

Aleksandr Kolosov, Elena Kolosova, Dmitro Sidorov, Anish Khan
### Numerical Simulation of Aeroelastic Interaction Between Gas-Liquid Flow and Deformable Elements in Modular Separation Devices

This paper considers modular dynamic separation devices as automatic control systems in which the hydraulic resistance is the regulation object and elastic forces are the control impact. A way to extend the range of their efficient operation is proposed using a vibration-inertia separation process, and the related mathematical model of this separation process is realized in three stages. The paper deals with the numerical simulation of the aeroelastic interaction between gas flow and deformable elements in modular separation devices using the ANSYS Workbench software package; the Transient Structural and Fluent modules are coupled through System Coupling. The two-way FSI method is chosen for the simulation of the aeroelasticity problem, and symmetry boundary conditions are used to reduce the computational cost of the solution. Methods of dynamic mesh deformation are considered to prevent the occurrence of negative volumes, and other settings significant for this type of problem are described. The oscillation frequencies of the deformable elements are determined from the results of a numerical experiment for gas flow inlet velocities in the range from 3 to 6 m/s. The highest oscillation frequency, 153 Hz, is observed for an inlet flow velocity of 4 m/s.

Oleksandr Liaposhchenko, Ivan Pavlenko, Katarina Monkova, Maryna Demianenko, Oleksandr Starynskyi

### Significance of Swirl Flow Separator Modification in Rainwater Treatment Technology

The main objective of this paper was to measure the purification efficiency of the liquid stream in a modified swirl settler with a baffle inside the tank. It was found that the separation efficiency depended on the size of the contamination particles and on the hydraulic load: it decreased with increasing hydraulic load and rose with larger diameters of the solid particles. Analysis of liquid damming led to measurement of the pressure drop as a function of the water volumetric flow rate, and the resistance coefficient was calculated on the basis of the conducted research. A theoretical introduction describes the sedimentation phenomenon and the gravity-driven interactions between liquid and solid particles. Based on the literature and manufacturer catalogues, different technical solutions and constructions of purifying devices such as sedimentation tanks are presented. Emphasis is given to swirl settling tanks, since they are one of the newest pretreatment solutions and exhibit high potential for modification, which can result in higher purification efficiency.

Małgorzata Markowska, Marek Ochowiak, Sylwia Włodarczak, Szymon Woziwodzki, Magdalena Matuszak

### Development of Technology for Utilization of Sulphate Waste Water of Detergents Production

The possibility of using sulphate wastewaters containing a surface-active agent, which are a waste of detergent production, for the regeneration of cation-exchange filters is investigated. Dependencies are given that allow calculating the residual content of the surface-active agent in the water entering the boilers. It is theoretically calculated and practically confirmed that only a small amount of the surface-active agent can get into the soft water. Over 25 filter cycles were carried out using a regeneration solution prepared from the wastewater of a detergent-production installation. The filter cycles showed that the working capacity is preserved completely and amounts to 270-300 g-equiv/m³. A technological scheme is proposed for the utilization of the sulphate wastewater of the detergent-production installation. This scheme ensures stable regeneration of the cation-exchange filters using the sulphate solution.

Zurab Megrelishvili, Ibraim Didmanidze, Vladimir Zaslavskiy
### Properties of Heat and Mass Transfer Processes in the Tubular Grids with the Heat Exchanger as a Stabilizer

The article considers the hydrodynamic and heat-and-mass-transfer performance of simultaneous heat and mass transfer processes on tubular gratings with a stabilizer and a heat exchanger. The optimal service conditions for the absorber are determined. Analysis of the obtained data shows the high efficiency of foam devices with stabilization by built-in heat exchangers at the stage of absorption of sulfur trioxide in sulfuric acid production. Efficient heat dissipation with the help of internal coolers provides the desired temperature mode of absorption, which allows eliminating the bulky heat-exchange equipment of existing systems. The high performance of an absorber of the investigated construction is exhibited during the implementation of the simultaneous processes. The industrial implementation of the stabilization method for the gas-liquid layer greatly extends the scope of foaming devices and opens up new possibilities for the intensification of technological processes, creating low-waste technologies in the chemical and other industries.

Viktor Moiseev, Oleksandr Liaposhchenko, Peter Trebuna, Eugenia Manoilo, Oleg Khukhryanskiy

### Production of Pumpkin Pectin Paste

The expediency of using vegetable-fruit pastes with a high pectin content for therapeutic and prophylactic nutrition and for the excretion of radionuclides, heavy metals, and toxins is substantiated. The aim of the research is the development of a technology and a small-sized line for the production of concentrated pumpkin pectin pastes that preserve the plant nutrients responsible for removing radionuclides, heavy metals, and toxins from the human body. The pectin pastes were made from the pumpkin variety Hybrid 75. In the experiments, the pectin-containing pastes were prepared by hydrolyzing the plant material with lactic acid and curd whey, which simultaneously act as preserving agents. The technological operations for the production of the pectin-containing pumpkin paste were optimized with respect to the hydromodule of hydrolysis of the raw material, the temperature, the duration of the process, and the method of evaporating the reaction mass. This provides better extraction of the complex of natural vitamins, tannins, and sugars from the raw material together with the pectin and a more complete preservation of the native properties of the pectin substances, which helps to achieve a high quality of the finished product. The technology and the line for the production of pumpkin pectin pastes, applicable to other pectin-containing raw materials as well, preserve the native properties of the plant fruit. The developed technological and hardware-technological schemes of pectin paste production make it possible to organize production close to the places where the raw materials are grown and to reduce transportation costs.

### Calculation of the Residence Time of Dispersed Phase in Sectioned Devices: Theoretical Basics and Software Implementation

The article deals with a model for calculating the residence time of particles in granulating and drying devices with vertical sectioning of the working space. The algorithm for calculating the residence time in the workspace of the granulating and drying devices is described. The model is implemented in the authors' software product Multistage Fluidizer®, which automates the calculation simultaneously by several optimization criteria and visualizes the calculation results in the form of 3D images. The impact of the constructive parameters of the perforated shelves and of the fluidized flow on the residence time in the device is established. The research proposes a way of defining the particle residence time in the workspace of the granulating and drying devices in the free motion regime (without considering interaction with other granules and with the granulator's elements) and in the constrained motion regime. An engineering methodology for the computation of sectioned devices with a fluidized bed of particles is based on the calculation results, which provide a basis for designing industrial granulating and drying devices.

Viktor Obodiak, Nadiia Artyukhova, Artem Artyukhov
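A common reduced description of residence time in a vertically sectioned apparatus is the classical tanks-in-series distribution; this is a generic stand-in, not the authors' Multistage Fluidizer® model. N well-mixed stages in series give the dimensionless residence-time density sketched below, which narrows as shelves are added.

```python
import numpy as np
from math import factorial

def rtd(theta, n_stages):
    """E(theta) for n ideal mixed stages in series; theta = t / mean time."""
    n = n_stages
    return n ** n * theta ** (n - 1) * np.exp(-n * theta) / factorial(n - 1)

theta = np.linspace(0.0, 3.0, 601)
for n in (1, 3, 6):
    E = rtd(theta, n)
    # Each density integrates to ~1; its spread shrinks roughly as 1/sqrt(n),
    # which is why sectioning narrows the particle residence-time spectrum.
    print(n, round(float(np.trapz(E, theta)), 3))
```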
### Influence of Difference in Density of Solids on Mixing Efficiency in the Designed Static Mixer

A new design concept of a universal set-up for the examination of the mixing of solids using static mixer elements is proposed. The designed mixer allows mixing of solids in the measuring range dm ≤ 17 mm. Owing to the complexity of the construction, many combinations of settings can be applied. A study of mixing efficiency for two components differing in density was performed. The following granular materials of different particle size were used: polypropylene, quartz sand, silicon carbide, and aloxide. It was shown that an increase in the difference between the densities of the mixed solids causes a deterioration of the final effect of the studied process. Selected construction systems were suitable for mixing granular solids of comparable density. Depending on the applied mixing elements and their arrangement, a satisfactory mixing ratio of the solids can be achieved. The efficiency of the mixing process depends primarily on the construction type of the selected mixing element (segment). To ensure satisfactory mixing efficiency for other systems, it is necessary to apply new constructions of mixing segments.

Marek Ochowiak, Andżelika Krupińska, Sylwia Włodarczak, Magdalena Matuszak, Małgorzata Markowska

### An Updated Portrait of Numerical Analyses on Spout-Fluidized Bed Incineration Systems

Biodegradable wastes are becoming a serious problem for health and ecological balance in parallel with the growth of local and global populations in recent years. These wastes should be disposed of in ways that are both efficient and eco-friendly. Considering biodegradable wastes as an energy source, disposal methods should be focused on energy recovery; however, the existing waste disposal methods have not yet reached their technological targets. It is very important that waste-to-energy recovery systems have high energy conversion efficiency. Nowadays there are many studies on waste incineration methods, and one of the most significant systems is the fluidized or spout-fluidized bed incineration system. Unfortunately, the targeted level of technological development of these systems in view of efficient energy recovery has not been reached yet. Existing incineration systems and current studies are generally concentrated on conventional fluidized bed systems; there are few studies of new-generation spout-fluidized bed incineration systems, which increase homogeneity and prevent waste from adhering to the inner wall of the combustor. This study focuses on the numerical studies conducted in this research field. The latest developments and research on both fluidized and spout-fluidized bed incineration systems are reviewed and discussed, and the remarkable results are compared in order to identify the gaps in the scientific literature.

Emrah Özahi, Arif Çutay, Ayşegül Abuşoğlu, Alperen Tozlu

### The Process of Environmentally Safe Biochemical Recycling of Phosphogypsum

This paper focuses on determining the feasibility of biochemical treatment of phosphogypsum with the extraction of useful components, particularly rare earth metals. The possibility of using phosphogypsum as a mineral substrate by various groups of microorganisms in environmental protection technologies allows the application of bioleaching. The results of the research show that biochemical leaching is carried out by aerobic bacteria and archaea capable of oxidizing sulfide minerals; the representatives of the genera Acidithiobacillus, Leptospirillum, Sulfobacillus, Sulfolobus, Acidianus, Metallosphaera, and Ferroplasma are the leaders in these processes. A biochemical formalization of the process kinetics and a study of the data bank of current developments dealing with such waste treatment processes have been carried out. The main ecological and biochemical studies, various mechanisms of microbiological investigation, and biochemical modelling have been examined for the assessment of the biomass productivity of phosphogypsum. A technological scheme of biological leaching of rare-earth metals from phosphogypsum dumps has been developed. The optimal parameters have been determined as pH = 1.5-2.5 and T = 278-308 K, and the efficiency of the bioleaching has been estimated.

Leonid Plyatsuk, Magdalena Balintova, Yelizaveta Chernysh, Iryna Ablieieva, Oleksiy Ablieiev
### Semi-Empirical Correlations of Pollution Processes on the Condensation Surfaces of Exhaust Gas Boilers with Water-Fuel Emulsion Combustion

Experimental research on the kinetics of low-temperature pollution of exhaust gas boiler condensation surfaces under water-fuel emulsion combustion was carried out to obtain approximation equations for predicting the development of these processes. Thermotechnical measurements of the parameters were carried out by standard methods; the theory of test modeling and planning was used for experiment design, and statistical analysis for the treatment of the experimental data. Based on the experimental data, semi-empirical correlations were developed for the dependence of the specific pollution mass on the water content of the water-fuel emulsion, the sulfur content of the fuel oil, and the excess air factor, at wall temperatures below the dew point of sulfuric acid vapor. A regression equation is obtained for determining the specific pollution mass on the low-temperature heating surfaces. The regression equations make it possible to estimate the influence of these factors on the pollution intensity; the obtained model describes the pollution process adequately and determines the values of the factors at which the lowest pollution intensity is observed. The obtained correlations and the equation for the specific pollution mass can be used to determine the thermal resistance of the pollution layer, which is necessary for designing and operating the condensation surfaces of exhaust gas boilers.
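Fitting such a multi-factor correlation is a small least-squares exercise. The sketch below fits a power-law form m = C · W^b1 · S^b2 · α^b3 to made-up measurements; all numbers are invented for illustration, and the paper's actual factors, exponents, and data are not reproduced here.

```python
import numpy as np

# Invented measurements: water content W (%), sulfur content S (%),
# excess air factor alpha, and specific pollution mass m (arbitrary units).
W = np.array([10., 20., 30., 10., 20., 30., 10., 20., 30.])
S = np.array([1., 1., 1., 2., 2., 2., 3., 3., 3.])
alpha = np.array([1.1, 1.2, 1.3, 1.1, 1.2, 1.3, 1.1, 1.2, 1.3])
m = np.array([5.1, 4.2, 3.5, 7.9, 6.4, 5.3, 10.8, 9.1, 7.6])

# Linearize m = C * W**b1 * S**b2 * alpha**b3 by taking logarithms.
X = np.column_stack([np.ones_like(W), np.log(W), np.log(S), np.log(alpha)])
coef, *_ = np.linalg.lstsq(X, np.log(m), rcond=None)
C = np.exp(coef[0])
print("C, b1, b2, b3 =", C, coef[1], coef[2], coef[3])
```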
### CFD Assessment of Jet Flow Behavior in an Alternative Design of a Spray Dryer Chamber

The performance of a spray dryer depends mostly on the flow field of the drying air, which is strongly influenced by the configuration of the spray dryer chamber. This paper numerically simulates the jet flow behaviour with no swirl at the entry in an alternative chamber design of a co-current pilot-plant spray dryer. To explore the hydrodynamics of the air jet flow in the spray dryer, a mathematical model was developed using the transient three-dimensional Reynolds-averaged Navier-Stokes equations, closed via the RNG $$k - \varepsilon$$ turbulence model and solved using Computational Fluid Dynamics (CFD) software (ANSYS Fluent). The CFD simulation quantitatively captured many features of the jet flow behaviour, such as the velocity profile and the characteristics of the turbulent jet, in a chamber configuration with an expansion ratio of 20 at a Reynolds number of $$2.07 \times 10^{5}$$. The simulation revealed that the jet behaves as a free turbulent jet at these conditions, and the results give a good prediction in comparison with the experimental data reported in the literature.

Saad N. Saleh, Omer Saaed, Maksym Skydanenko

### Assessment of the Quality of Alternative Fuels for Gasoline Engines

The use of alternative fuels is a strict requirement of the present day. The threatening ecological situation of the environment, the constant growth of the vehicle fleet, the high price of petroleum products, and Ukraine's import dependence in oil fuels leave no doubt about the need to develop biofuels, improve their properties, and expand their range. One possible solution to these problems is the use of bioethanol, both in pure form and as an additive to gasoline of oil origin. The purpose of this article is to study the effect of ethanol on the performance properties of traditional gasoline and to search for the optimal ratio of alcohol and gasoline for use in internal combustion engines. Experimental studies have shown that at an ethanol concentration in gasoline of 5-7%, the physical and chemical properties of the fuel do not change at all. With 30% ethanol added, the corresponding indicators change seriously, and to ensure the physical stability of the fuel and the required octane number, an appropriate additive must be used. Therefore, the use of alcohol, even in high concentrations, in the production of alternative fuels for gasoline engines is justified.

Valentyna Tkachuk, Taras Bozhydarnik, Oksana Rechun, Taras Karavayev, Nina Merezhko

### Kinetics of Sodium Chloride Dissolution in Condensates Containing Ammonia and Ammonium Carbonates

The kinetics of sodium chloride dissolution under conditions of gravity deposition in the condensates of the gas-cooling apparatus of soda production, containing ammonia and ammonium carbonates, was studied. A method for calculating the dissolution rate is proposed, based on measuring the deposition time of dissolving salt crystals at two points. The method relies on a mathematical model characterizing the change in the particle deposition rate during dissolution and taking into account the value of the dissolution rate coefficient. The impact of the temperature, the solvent composition, and the crystal shape of the solute on the dissolution rate was investigated. It was found that with an increase in temperature by 10 °C the dissolution rate increases 1.3 times, which indicates that the process is limited by solute diffusion from the surface of a solid particle into the bulk of the liquid. In addition, it was determined that the dissolution rate coefficient decreases with increasing sodium chloride concentration and carbonation rate of the solution, and increases with increasing ammonia concentration. Testing the impact of the salt crystal shape on the kinetics of its dissolution showed that it is not crucial within the limits of the experimental accuracy. Using the methodology of experiment planning and regression analysis, an equation was obtained for predicting the dissolution rate of sodium chloride depending on the temperature and composition of the solution.

Michael Tseitlin, Valentina Raiko, Aleksei Shestopalov

### Patterns of Pollutants Distribution from Vehicles to the Roadside Ecosystems

This paper focuses on the determination of the emission and the spatial and temporal distribution of motor vehicle pollutants within roadside ecosystems. The need to improve the mathematical model of emission distribution and of the spreading of pollutants from highways is related to the non-stationary character of the transport flow. The results show that the spread of pollutants in the roadside ecosystem is carried out by the transfer of air flows (advective and convective components) and by diffusion (fluctuation movements superimposed on the transfer process). The obtained analytical dependences for forecasting pollutant concentrations in the air allow designing a spatial concentration field for any atmospheric state and air flow velocity. The adequacy of the developed mathematical model was checked using instrumental methods. Using sulfur dioxide as an example, it was determined that its concentration in the air varies exponentially with the distance from the road and at a distance of 30 m is reduced three-fold, reaching a level of 0.8 mg/m³. The Ansys 17.0 software visualized the distribution of exhaust gases from a truck and showed that the bulk of the pollutants is deposited within a distance of 30 m from the road. The concentration fields of the respective harmful substances in the roadside zone were obtained, and the places of secondary entry of the settled harmful impurities from the roadway into the atmospheric air were determined.
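The quoted numbers pin down the exponential profile completely. Under the stated three-fold decay over 30 m,

$$C(x)=C_0\,e^{-kx},\qquad \frac{C(30\,\text{m})}{C_0}=\frac{1}{3}\;\Rightarrow\; k=\frac{\ln 3}{30\ \text{m}}\approx 0.037\ \text{m}^{-1},$$

and, with C(30 m) = 0.8 mg/m³, a near-road concentration of C₀ ≈ 2.4 mg/m³. These values follow directly from the abstract's figures and are worked out here only as a consistency check.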
### Studies on Simplex Pressure-Swirl Atomizers with a Different Spin Chamber Shape

Unflagging interest in pressure-swirl atomizers forces the continuous development of knowledge about their operation and construction. Although this type of atomizer has been described comprehensively and the number of correlation equations describing the atomization process is substantial, some construction features and their influence on the process have still not been described satisfactorily. In this study, the influence of modifications of the pressure-swirl atomizer construction on the values of the discharge coefficient and the pressure drop was analyzed. A transitory cone in three different variants (cylindrical, conical, profiled) was applied, and the construction of a second group of atomizers was enriched with a blind hole in the swirl chamber. The blind hole resulted in higher values of the discharge coefficient in comparison with atomizers without one. In the turbulent flow range, a constant value of the discharge coefficient, independent of the Reynolds number, was obtained both for atomizers with and without a blind hole, although these values differed for the individual pressure-swirl atomizers. The obtained results can be useful in the analysis and comparison of atomization effects for atomizers of various constructions.

Sylwia Włodarczak, Marek Ochowiak, Andżelika Krupińska, Marcin Janczarek, Magdalena Matuszak

### Evaluation of Energy and Ecological Indicators of Motor Biofuels

The purpose of the study is to evaluate the energy and environmental indicators of an engine working on biofuels and on petroleum diesel. The research methods are computational and experimental. The article presents a method for evaluating and forecasting the energy and fuel-economic indexes of an engine using biofuels. With the developed method, the authors determine the engine power and fuel consumption from the quantity and heat of combustion of the fuel-air mixtures for operation on biodiesel, biogas, and diesel oil; the lower heating values of the fuels are used to determine the heat of combustion of the fuel-air mixtures. The analysis shows that the engine working on petroleum diesel has the highest power and the lowest fuel consumption, while the engine working on biofuels has correspondingly worse indicators. The emissions of harmful substances of the D-243 engine at various speed and load modes, working on the different fuels, were determined experimentally. Quantitative values of fuel consumption and of emissions of harmful substances were obtained by mathematical modeling of the motion of a technological vehicle over a driving cycle using the various types of fuel.

Victor Zaharchuk, Oleh Zaharchuk, Valerij Dembitskij, Vasiliy Ivanciv, Sergiy Pankevich

### Regularities of Solid-Phase Continuous Vibration Extraction and Prospects for Its Industrial Use

The results of the substantiation and hardware design of continuous vibroextraction for solid-liquid systems with a small difference in phase density are presented; they make it possible to determine the rational constructive and technological parameters of vibroextractors and the modes of their industrial exploitation. The mathematical modeling and the methods of experimental evaluation of the mass transfer efficiency are based on the phenomena of non-stationary mass transfer and hydrodynamics. The mechanism of counter-current phase separation during the continuous process and the features of mass transfer at all scale levels are described. A theoretical substantiation is given for convective mass transfer that takes into account the accumulative component of the substance and the mass return in the mixing zone, which is what discloses the content of this component. The realization of the obtained results allowed the development of an engineering calculation method for vibroextraction in the food industry, of high-efficiency energy-saving continuous vibroextractors and, based on them, of apparatus-technological schemes for the rational deep processing of plant raw materials for a number of industries.

Volodymyr Zavialov, Taras Mysiura, Nataliia Popova, Valerii Sukmanov, Valentyn Chornyi
A Latina Secretary-General
July 28, 2016
By Evan Fagan, Research Associate at the Council on Hemispheric Affairs
On July 12, ten of the twelve candidates running for the Secretary-General (SG) position of the United Nations took center stage before the U.N. General Assembly. In an attempt to promote transparency in the Secretary-General election process, the U.N. publicly announced the contestants for the first time in its history, instead of keeping the election confidential within the Security Council itself. Among the candidates were two prominent women from the Western Hemisphere, Christiana Figueres of Costa Rica and Susana Malcorra of Argentina. Although both have enjoyed successful careers in the U.N. over the past decade, their achievements and departments are markedly different. Figueres served as Executive Secretary of the U.N. Framework Convention on Climate Change, where she was partially responsible for adopting the groundbreaking Paris Agreement of December 2015. Malcorra served as chief-of-staff to current SG Ban Ki Moon, as well as his chief of logistics for U.N. peacekeeping missions.[1]
Having a Latin American woman at the head of the most important international organization in the world would be an important turning point in the role of the international body. Not only has the U.N. never had a female SG, but the organization has only had one Latin American SG, Javier Pérez de Cuéllar of Peru (1982-1991), known for his peace negotiations between Israel and Lebanon, the United Kingdom and Argentina, Afghanistan and the Soviet Union, and Iran and Iraq.[2] Figueres and Malcorra are considered to be very strong candidates for the position, despite general consensus within the General Assembly that the next SG should come from Eastern Europe, as there has never been an SG from that region.[3] Although both fell short in the preliminary straw poll on July 21, with Malcorra coming in sixth place and Figueres coming in twelfth, the veto power of the permanent members of the Security Council has the potential to change everything.[4] If either were to win, it would signify that the five permanent members of the Security Council recognize the importance of Latin America as a region in world affairs. It would also represent a win for the Campaign to Elect a Woman U.N. Secretary-General, a group founded in early 2015.[5]
Christiana Figueres
Christiana Figueres' most notable achievement in the U.N. bureaucracy is her role in facilitating the Paris Agreement, signed at the Paris Climate Conference (COP21) in December 2015. The Paris Agreement is a legally binding, multilateral policy that sets out a "global action plan" to keep the global temperature increase below 2°C. The Agreement is considered a milestone in the fight against climate change and an important achievement for Figueres, who began her work with the Convention on Climate Change after the failure of the Copenhagen Climate Change Conference in 2009. Figueres was able to turn the organization around through strong leadership, an emphasis on teamwork, and an aptitude for facilitating cooperation between different sectors and countries at various stages of development.[7] Her ability to deliver results through her dedication to the issue and willingness to work multilaterally could well push her ahead of other candidates.
It should be noted, however, that not everyone sees the Paris Agreement as a success. As climate scientist James Hansen contends, "It's just worthless words. There is no action, just promises."[8] Other scientists have likened it to the Kyoto Protocol of 1997, originally labeled a major success but later regarded as a failure due to lack of implementation. Scientist Kevin Anderson contends that waiting until 2020 to reduce emissions and implement changes will be too late. Still others see the Paris Agreement as unrealistic due to the magnitude of its proposed changes. As Michael Grubb of University College London claims, "Actually delivering 1.5°C is simply incompatible with democracy."[9] Still, even scientists admit that the Agreement was politically successful. Anderson, for instance, acknowledges that now "the bureaucrats have a better grasp of what is politically possible."[10]
Christiana Figueres comes from a prominent political family in Costa Rica. Her father, José "Don Pepe" Figueres Ferrer, served as president of Costa Rica three times and is credited with having instituted democracy in the country in 1948. As a social democrat, he oversaw a number of major reforms including the nationalization of banks and many industries, and the implementation of social security.[11] These achievements were tainted by allegations of corruption in 1972, though he was never convicted.[12] Figueres' brother, José María Figueres, also served as president of the country and was indicted for having received $900,000 from the French telecommunications company Alcatel in 2004. The charges were eventually dropped due to lack of evidence.
For her part, Figueres has been congratulated on her past work with the U.N. and her ability to foster cooperation among its members. In fact, Americas Quarterly journalist Guy Edwards posits that "both the U.S. and France could decide to back her given that both President Barack Obama and President François Hollande hope to secure favorable legacies on climate change."[13] During the first U.N. debate, Figueres indicated that she would not be the passive SG that some expect her to be—a trait that actually made current SG Ban Ki Moon so attractive to the Security Council.
Figueres' outspokenness was evident when al Jazeera asked several candidates if, as SG, they would apologize on behalf of the U.N. for the outbreak of cholera in Haiti believed to have been brought in by U.N. peacekeepers and responsible for the deaths of over 9,200 people. Figueres was the only candidate to raise her hand in affirmation, in what many reporters and observers described as the most dramatic moment of the evening.[14] She proclaimed, "I believe that the integrity of the United Nations […] has actually been tarnished. That was an unintended consequence of a very important goal of the United Nations—unintended consequence. But we have to be responsible even for unintended consequences."[15] She went on to pledge that, as SG, she would do everything in her power to eliminate cholera in Haiti and improve the country's health and sanitation programs. Upon further questioning with regard to U.N. responsibility for financial compensation in Haiti, Figueres contended that while the U.N. should not have to pay for the consequences, it should do everything in its power to prevent similar cases from developing in the future.[16] In all, her performance during the debate was among the most dynamic of the ten participants.
Susana Malcorra
Argentinian Susana Malcorra, current foreign minister in the administration of Mauricio Macri, entered the race for SG with arguably the most relevant experience of any of the candidates. In fact, some believe that this is exactly what could prevent her from becoming an effective SG. As Ban Ki-moon's chief-of-staff from 2012 to 2015, Malcorra already has years of experience in the U.N. secretariat she is now intent on leading. Her other experience at the U.N. includes serving as a senior administrator at the World Food Program and as chief of logistics for U.N. peacekeeping operations. She has received repeated praise from peers and coworkers as well. In reflecting on her time as chief-of-staff, Richard Gowan, a U.N. expert at the European Council on Foreign Relations, commented, "She saw her role as getting shit done. She was one of the few adults in Ban's office, and she has a strong sense of how to use the limited power you have in the U.N. system."[17] During her time as a U.N. diplomat, Malcorra has gained a reputation as a decisive fixer and bureaucrat and has made many influential friends, including current SG Ban Ki-moon, former U.S. envoy to the U.N. Susan Rice, and Russian ambassador Vitaly Churkin.[18]
At the same time, critics point out that Malcorra has gained a reputation for "accommodating" powerful government envoys. They claim that she is too willing to "sacrifice the U.N.'s independence and its core principles, including a commitment to human rights, for the sake of political expediency."[19] This tendency could well turn many members of the General Assembly, as well as Security Council members China and Russia, against her. In fact, Malcorra has repeatedly accommodated the United States and other influential actors in order to further her bid for SG, drawing accusations of lavishing Rice with VIP treatment and of procuring positions within the organization for other U.S. officials.[20] Reportedly, at one point, Rice expressed a desire that the United States have a U.S. diplomat as the No. 2 in the U.N. mission to Afghanistan. U.S. diplomat Peter Galbraith eventually got the job despite the objections of other U.N. officials, who saw it as representing a major conflict of interest. Malcorra, however, maintained that no favoritism was shown and that she had followed all U.N. hiring procedures.[21]
According to Thant Myint-U, a former U.N. political officer, "There is an unfortunate perception, extremely unhelpful, of a secretariat that is in the pocket of the big powers." As another U.N. source says about Malcorra,
"Susana's big weakness is she looks at everything through the prism of the member states—she will always make the Russians, the Americans, or anyone else walk away with a feeling that they have a sympathetic ear. I have never seen her in all my time as an observer challenging a powerful member state. Sometimes you have to say no."[22]
In fact, the ability to resist the demands of the Security Council is no easy task. Denying member states, especially the U.S., what they want can destroy a career as evidenced by the fate of Boutros Boutros-Ghali, who was blocked by the U.S. from serving a second term as SG after refusing to authorize airstrikes against Bosnian Serbs.
Other worrisome signs regarding Malcorra's candidacy include her lack of support in Latin America. Recently, she has been accused of ignoring human rights violations committed by President Nicolás Maduro's administration in Venezuela when she countered efforts to remove Venezuela from the Organization of American States (OAS). Malcorra's actions were seen as a play for support from Caracas for her U.N. candidacy. In her words,
"Say for a minute that we expel Venezuela. What will that achieve? There is need to decompress the humanitarian crisis. We have spoken about human rights; we have said it loud and clear. But in the end what we need is to have both parties sitting at the table to find a solution. That you will not get by expelling Venezuela from the OAS."[23]
This statement shows an admirable prudence on her part, which could serve to enhance her bid for SG. The decision to promote dialogue rather than voting Venezuela out of the OAS also counters critics' allegations that she is in the pocket of the Security Council. Instead, it demonstrates that Malcorra is capable of challenging powerful actors in order to maintain institutional principles.
One of the most serious issues that Malcorra will face from the Security Council has to do with her Argentinian heritage. Britain may well veto her nomination in the first round of elimination of candidates due to tensions between Argentina and the U.K. over the ongoing Falkland Islands sovereignty dispute. Russia could also veto her candidacy (and Figueres' for that matter) due to a strong preference for an Eastern European SG.[24] There is speculation, however, that Russia and the U.S. will be unable to agree upon an Eastern European due to bitter diplomatic disagreements with regard to Syria and Ukraine. This opens up the possibility of electing someone from another region such as Latin America.[25]
One major advantage that Malcorra holds in the race is her strong administrative skills, honed in the private sector when she worked for IBM and Telecom Argentina. With these skills at her disposal, she is said to have increased productivity and bureaucratic efficiency in her U.N. years. She also has experience dealing with major strategic issues including the removal of chemical weapons from Syria, curbing the Ebola outbreak, and negotiating peace in the Democratic Republic of the Congo.
While Malcorra's performance during the U.N. debate was decidedly less impressive than that of Figueres, she still managed to make a positive impression with her thoughtful and knowledgeable responses during questioning.
Despite greater transparency, the SG election process is still largely secretive. In the end, the five permanent members of the Security Council – the United States, Britain, France, Russia, and China – will likely have the final say. Still, public opinion could have an impact and, if this is the case, Figueres and Malcorra, being among the most popular candidates in the running, will have a much greater chance of being elected.[26] That the majority of the members of the Security Council have few political issues with either Argentina or Costa Rica further increases those chances.
At this point, the election of a female candidate appears likely. It is widely considered to be the right time for a female SG, and there is strong support from the U.S., Britain, and other members of the Security Council.[27] In the words of Croatian candidate Vesna Pusić, the "UN has been for seventy years dominated by the male worldview, but that is only 50 percent of life experience, and now it is time for the other 50 percent."[28]
Having a Latin American woman as Secretary-General of the United Nations could have far reaching effects. Not only would it offer a voice to that half of the world, but it would also provide a voice for a region where poverty, displacement and violence are prevalent. In fact, the poverty rate in Latin America is 25.3 percent, while eight of the 10 most violent countries and 40 of the 50 most violent cities are located there. Furthermore, one in four violent killings in the world takes place in Brazil, Colombia, Mexico, and Venezuela.[29]
Both Christiana Figueres and Susana Malcorra are well qualified for the position of Secretary-General. Just as importantly, both are armed with the skills and experience necessary to address the world's most pressing issues, such as systemic violence and structural inequality, through a uniquely Latin American perspective.
[1] Edwards, Guy. "A Closer Look at the Latin American Women that Could Lead the UN." Americas Quarterly, July 11, 2016. Accessed July 20, 2016. http://www.americasquarterly.org/content/closer-look-latin-american-women-could-lead-un
[2] "Former Secretary General Javier Perez de Cuellar." Global Policy Forum. Accessed July 20, 2016. https://www.globalpolicy.org/component/content/article/150/32778.html
[3] Guy Edwards. "A Closer Look at the Latin American Women that Could Lead the UN."
[4] Gowan, Richard. "Big Power Politics Threaten to Upset U.N. Secretary-General Race." World Politics Review, July 25, 2016. Accessed July 28, 2016. http://www.worldpoliticsreview.com/articles/19458/big-power-politics-threaten-to-upset-u-n-secretary-general-race
[5] "Campaign to Elect a Woman UN Secretary-General: About." Campaign to Elect a Woman Secretary-General." Accessed July 20, 2016. http://www.womansg.org/#!about/cee5
[6] "Paris Agreement." European Commission: Climate Action. Accessed July 20, 2016. http://ec.europa.eu/clima/policies/international/negotiations/paris/index_en.htm
[8] Le Page, Michael. "Paris climate deal is agreed—but is it really good enough?" New Scientist, December 12, 2015. Accessed July 20, 2016. https://www.newscientist.com/article/dn28663-paris-climate-deal-is-agreed-but-is-it-really-good-enough/
[11] Pace, Eric. "Jose Figueres Ferrer Is Dead at 83; Led Costa Ricans to Democracy." The New York Times, June 9, 1990. Accessed July 25, 2016. http://www.nytimes.com/1990/06/09/obituaries/jose-figueres-ferrer-is-dead-at-83-led-costa-ricans-to-democracy.html
[12] "Political and Economic History of Costa Rica." San José State University Department of Economics. Accessed July 20, 2016. http://www.sjsu.edu/faculty/watkins/costarica.htm
[13] Guy Edwards. "A Closer Look at the Latin American Women that Could Lead the UN."
[14] Goldberg, Mark Leon. "The First-Ever UN Secretary General Debate Made for Some Great TV." UN Dispatch, July 13, 2016. Accessed July 20, 2016. http://www.undispatch.com/first-ever-un-secretary-general-debate-made-great-tv/
[15] Al Jazeera English. "The UN debate." Filmed July 2016. Youtube video, 2:25:35. Posted July 2016. https://www.youtube.com/watch?v=3WqFUVpoZQs
[17] Lynch, Colum. "Can a Consummate Insider Bring the Change the U.N. Desperately Needs?" Foreign Policy, July 5, 2016. Accessed July 20, 2016. http://foreignpolicy.com/2016/07/05/can-a-consummate-insider-bring-the-change-the-u-n-desperately-needs/
[26] "The UN debate." Al Jazeera Interactive, July 12, 2016. Accessed July 20, 2016. http://interactive.aljazeera.com/aje/2016/un-debate-secretary-general/index.html
[27] Colum Lynch. "Can a Consummate Insider Bring the Change the U.N. Desperately Needs?"
[28] Pavlic, Vedran. "Vesna Pusić Takes Part in UN Secretary General Candidates Debate." Total Croatia News, July 13, 2016. Accessed July 20, 2016. https://www.total-croatia-news.com/item/13024-vesna-pusic-takes-part-in-un-secretary-general-candidates-debate
[29] Muggah, Robert. "Latin America's Poverty Is Down, But Violence Is Up. Why?" Americas Quarterly, October 20, 2015. Accessed July 20, 2016. http://www.americasquarterly.org/content/latin-americas-poverty-down-violence-why
← "I would not rape you because you do not deserve to be raped:" A Closer Look at Institutionalized Sexism in Brazil
Illegal Gold Mining in Peru →
The Fabulous Life of the Ravenous Vultures
April 9, 2010 COHA Comments Off on The Fabulous Life of the Ravenous Vultures
The Big China and Taiwan Tussle: Dollar Diplomacy Returns to Latin America
September 19, 2008 COHA Comments Off on The Big China and Taiwan Tussle: Dollar Diplomacy Returns to Latin America
Indigenous Leader Killed in Land Dispute in Brazil
June 20, 2016 COHA Comments Off on Indigenous Leader Killed in Land Dispute in Brazil | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 9,733 |
<div class='collapse navbar-collapse' id='mainnav'>
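  <!-- Main navigation (AngularJS/Bootstrap): items are shown or hidden per user
       role via iSvc.isAuthorized(...) / iSvc.isAuthenticated(), and labels are
       resolved with the translate filter; a few German labels ("Meine
       Reservierungen", "Artikel", "Ausleihe", "Kalender") are hard-coded. -->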
<ul class='nav navbar-nav'>
<li><a id='nav_link_info' href='/#/info'>{{'NAV.INFO' | translate}}</a></li>
<li><a id='nav_link_agb' href='/#/agb'>{{'NAV.AGB' | translate}}</a></li>
<li><a id='nav_link_faq' href='/#/faq'>{{'NAV.FAQ' | translate}}</a></li>
<li><a id='nav_link_programme' href='/#/'>{{'NAV.PROGRAM' | translate}}</a></li>
<li ng-show='iSvc.isAuthenticated()'><a id='nav_link_myres' href='/#/myRegistrations'>{{'NAV.MYRES' | translate}}</a></li>
<li ng-show='iSvc.isAuthorized("admin") || iSvc.isAuthorized("fadmin") || iSvc.isAuthorized("ladmin")'><a href='/#/shop/calendar'>Meine Reservierungen</a></li>
<li ng-show='iSvc.isAuthorized("fadmin") || iSvc.isAuthorized("admin")'><a href='/#/myCommitments'>{{'NAV.MYCOMMITS' | translate}}</a></li>
<li ng-show='iSvc.isAuthorized("fadmin") || iSvc.isAuthorized("admin")' class="dropdown">
<a class="dropdown-toggle" data-toggle="dropdown" href="#" role="button" aria-haspopup="true" aria-expanded="false">
{{'NAV.REPORT' | translate}} <span class="caret"></span>
</a>
<ul class="dropdown-menu">
<li ng-show='iSvc.isAuthorized("admin")'><a href='/#/report'>{{'NAV.RESERVATIONS' | translate}}</a></li>
<li ng-show='iSvc.isAuthorized("admin") && !isKiso'><a href='/#/payment'>{{'NAV.PAYMENT' | translate}}</a></li>
<li ng-show='iSvc.isAuthorized("admin") || iSvc.isAuthorized("fadmin")'><a href='/#/reportPresence'>{{'NAV.PRESENCE' | translate}}</a></li>
<li ng-show='iSvc.isAuthorized("admin")'><a href='/#/reportOverview'>{{'NAV.OVERVIEW' | translate}}</a></li>
<li ng-show='iSvc.isAuthorized("admin")'><a href='/#/filterConfirmation'>{{'NAV.FILTERCONFIRM' | translate}}</a></li>
</ul>
</li>
<li ng-show='iSvc.isAuthorized("admin") || iSvc.isAuthorized("fadmin")' class="dropdown">
<a class="dropdown-toggle" data-toggle="dropdown" href="#" role="button" aria-haspopup="true" aria-expanded="false">
{{'NAV.LENDINGS' | translate}} <span class="caret"></span>
</a>
<ul class="dropdown-menu">
<li ng-show='iSvc.isAuthorized("admin") || iSvc.isAuthorized("fadmin") || iSvc.isAuthorized("ladmin")'><a href='/#/shop/articles'>Artikel</a></li>
<li ng-show='iSvc.isAuthorized("admin") || iSvc.isAuthorized("fadmin") || iSvc.isAuthorized("ladmin")'><a href='/#/shop/reservation'>Ausleihe</a></li>
<li ng-show='iSvc.isAuthorized("admin")'><a href='/#/shop/calendar'>Kalender</a></li>
</ul>
</li>
<!-- <li ng-show='iSvc.isAuthorized("fadmin")'><a target='_blank' href='http://www2.jugendsommer.com/'>{{'NAV.LENDINGS' | translate}}</a></li> -->
<li ng-show='iSvc.isAuthorized("admin")'><a href='/#/userRolesSearch'>{{'NAV.USERSEARCH' | translate}}</a></li>
</ul>
</div> | {
"redpajama_set_name": "RedPajamaGithub"
} | 2,075 |
\section{I. Introduction}
Event-by-event fluctuations play a crucial role in the hydrodynamic description of the relativistic heavy ion collisions~\cite{sph-review-1,hydro-review-04,hydro-review-05,hydro-review-06,hydro-review-09,hydro-review-10}.
The experimental observations of the enhancement in correlations at intermediate and low transverse momenta~\cite{RHIC-star-ridge-3,RHIC-phenix-ridge-5,RHIC-phobos-ridge-7,LHC-alice-vn-4,LHC-cms-ridge-7,LHC-atlas-vn-4}, compared with those at high transverse momenta~\cite{RHIC-star-jet-1,RHIC-star-jet-2}, strongly support the hydrodynamic/transport character of such phenomena~\cite{sph-corr-1,hydro-v3-1,hydro-vn-ph-3,hydro-corr-ph-1,hydro-corr-1,ampt-4,ampt-5}.
In particular, the observed features, referred to as ``ridge'' and ``shoulders'', were shown to be successfully reproduced by hydrodynamical simulations with event-by-event fluctuating initial conditions (IC) but not by averaged IC~\cite{sph-corr-1}.
Subsequently, this led to the current understanding, through extensive studies of event-by-event hydrodynamic/transport analyses~\cite{hydro-v3-2,hydro-v3-4,hydro-v3-5,hydro-v3-6,hydro-v3-7,hydro-v3-8}, that the two-particle correlations at the lower transverse momenta can be mostly interpreted in terms of flow harmonics $v_n$.
Notably, the triangular flow, $v_3$, is mostly responsible for the appearance of the ``shoulder'' structure on the away side of the trigger particle~\cite{hydro-v3-1,hydro-v3-2,hydro-v3-3}.
Moreover, it is understood that these parameters are closely associated with the corresponding $\epsilon_n$, the anisotropies of the initial energy distribution~\cite{hydro-v3-1,hydro-v3-2}.
In fact, the observed behavior of the anisotropic parameters as functions of centrality and transverse momentum has been studied at a very high quantitative level~\cite{hydro-v2-heinz-2,hydro-v2-voloshin-1}, and the correlation between $v_{n}$ and $\epsilon_n$ has also been established (see~\cite{hydro-review-08,hydro-review-04,hydro-review-09,hydro-review-06} for recent reviews on this topic).
In spite of the success of the statistical analysis of event-by-event hydrodynamic simulations, the linearity between the event-averaged $v_n$ and $\epsilon_n$ becomes less evident for harmonics greater than $n=2$.
To be specific, it was shown that the correlations among $v_n$ and $\epsilon_n$ become weaker for larger harmonics~\cite{hydro-vn-3,sph-vn-4}.
This suggests that the event-by-event fluctuations themselves carry important information.
Subsequently, if one restricts oneself only to the analysis of the event-averaged relations and correlations among $v_n$ and $\epsilon_n$, then some genuine hydrodynamic signals from the local fluctuations in each individual event might be washed out, or hidden behind some very complicated correlations among the harmonics~\cite{hydro-corr-ph-7,hydro-corr-ph-8,hydro-vn-10}.
That is, the genuine local hydrodynamic evolution, e.g., turbulence, local shock waves, etc., while sensitive to the properties of the matter, might not be encoded in a simple way in terms of event-averaged correlations.
In this context, in the present paper, we explore an alternative viewpoint which may explain in a simple way the physical origin of the anisotropic flow pattern.
Before the explanation regarding the triangular flow was first suggested~\cite{hydro-v3-1}, we proposed a peripheral tube model~\cite{sph-corr-2,sph-corr-3,sph-corr-4,sph-corr-7} which provides a straightforward and reasonable picture for the generation of the triangular flow and the consequent two-particle correlations.
It is an approach within the general event-by-event hydrodynamic scheme.
The model views the fluctuations in the IC as consisting of independent high energy tubes close to the surface of the system.
Thereby, each tube affects the hydrodynamical evolution of the system independently, and their contributions are summed up linearly to the resultant two-particle correlations.
In this approach, one substitutes the complex bulk of the hot matter by an average distribution over many events from the same centrality class.
The above picture attempts to interpret the physical content of fluctuating IC regarding the granularity represented by peripheral high energy tubes.
To be specific, if a tube is located deep inside the hot matter, the effect of its hydrodynamic expansion would be quickly absorbed by its surroundings, causing relatively little inhomogeneity in the medium.
On the contrary, a tube staying close to the surface leads to a significant disturbance to the one-particle distribution, resulting in an azimuthal two-particle correlation structure similar in shape and magnitude to the observed data.
By numerical simulations, as shown in Fig.\ref{figevo}, one finds that the fluid is deflected to both sides of the tube, causing two peaks separated by $\sim 120$ degrees in the one-particle azimuthal distribution.
Subsequently, this leads to the desired two-particle distribution where a double peak is formed on the away side, whose height is approximately half of the single peak located on the near side.
It is shown that the resultant correlation structure is robust against the variation of the model parameters~\cite{sph-corr-2,sph-corr-4}.
Furthermore, simulations carried out with multiple peripheral tubes still show such robust feature of the two-particle correlation structure, which strongly suggests that the emergence of the two particle correlations can be naturally interpreted as a superposition of those of independent peripheral tubes~\cite{sph-corr-7}.
It is interesting to compare the present model to the interpretation frequently employed in the literature, where random but systematic emergence of octupole deformation in the initial energy distribution is considered to lead to the final triangular flow, and consequently, the double peak nature of the two particle correlations in the final state.
The mostly linear relation between the initial anisotropy parameters $\epsilon_n$ and the flow harmonics $v_n$, observed in numerical simulations~\cite{hydro-vn-2,hydro-vn-3,hydro-vn-4,hydro-vn-9}, is considered as evidence for this interpretation.
We note that it is meaningful to compare the present model to another approach frequently employed in the literature, where the one-particle distribution is decomposed into different flow harmonics.
Subsequently, the collective flow related to the harmonic coefficients are understood to be mostly independent and associated with the corresponding eccentricity components in the IC.
Note that the physical image associated with the above interpretation is drastically different from that of our approach regarding the emergence of octupole moment of the initial energy distribution.
In our model, the octupole moment does appear in IC, but it is not directly related to the formation of the triangular flow.
In both cases, by and large, the hydrodynamic evolution linearly transforms the initial state geometric inhomogeneity into the final state anisotropy in momentum space.
The main point is that in our present model the effect is considered to be local, resulting from how the expansion is affected by a high energy tube close to the surface of the fluid.
Therefore, it is largely independent of the fluid dynamics of the rest of the system, being localized at a specific azimuthal angle $\phi_\mathrm{tube}$ associated with the peripheral tube in question.
As hydrodynamics is understood as an effective theory in the long wavelength limit, the peripheral tube model interprets the two-particle correlation in terms of phenomena whose characteristic length is comparable to the system size.
Concerning harmonic coefficients, the event planes of the elliptic and triangular flow coefficients are both generated by the tube and are therefore both correlated to the location of the tube $\phi_\mathrm{tube}$.
However, as one carries out the event-by-event average, $\phi_\mathrm{tube}$ is averaged out, and the resultant expression (for instance, see Eqs.(\ref{ccumulant}) and (\ref{ccumulant2}) below) does not explicitly depend on it.
\begin{figure}
\begin{tabular}{cc}
\vspace{-50pt}
\begin{minipage}{225pt}
\centerline{\includegraphics[width=350pt]{fig2_t1}}
\end{minipage}
&
\begin{minipage}{225pt}
\centerline{\includegraphics[width=350pt]{fig2_t2}}
\end{minipage}
\\
\vspace{-50pt}
\begin{minipage}{225pt}
\centerline{\includegraphics[width=350pt]{fig2_t3}}
\end{minipage}
&
\begin{minipage}{225pt}
\centerline{\includegraphics[width=350pt]{fig2_t4}}
\end{minipage}
\\
\begin{minipage}{225pt}
\centerline{\includegraphics[width=350pt]{fig2_t5}}
\end{minipage}
&
\begin{minipage}{225pt}
\centerline{\includegraphics[width=350pt]{fig2_t6}}
\end{minipage}
\end{tabular}
\caption{(Color online) The temporal evolution of the IC consisting of one peripheral tube placed on top of an elliptic smoothed background energy distribution.
The parameters for the IC used in calculation are discussed in section III.}
\label{figevo}
\end{figure}
The present work is organized as follows.
In the next section, we briefly review the peripheral tube model and discuss its main results on the observed two-particle correlations.
Subsequently, in section III, we show that the experimental data can be reasonably reproduced by appropriately constructing the IC with the peripheral tube.
Furthermore, we extract the model parameters using event-by-event simulations with various devised IC reflecting the average background and event-by-event fluctuations.
It is shown that those extracted values are indeed in accordance with the observed two-particle correlation data.
The last section is devoted to discussions and concluding remarks.
\section{II. The peripheral tube model for two-particle correlations}
The underlying assumption of the peripheral tube model is that the hydrodynamic evolution which generates the characteristic double-peaked two-particle correlation is mainly composed of two parts.
The first one is associated with the bulk energy distribution of the IC, which generates the dominant part of the resultant collective flow.
For simplicity, this bulk energy distribution is assumed to generate mostly the elliptic flow.
Owing to the multiplicity fluctuations, the contributions from the proper events do not cancel out against those from the mixed events, and the residual is proportional to the standard deviation of the multiplicities.
The second contribution comes from that of a peripheral tube.
It produces aligned elliptic and triangular flow.
This fact is related to the event plane dependence as well as centrality dependence of the observed two-particle correlation~\cite{sph-corr-ev-4,sph-corr-ev-6}.
To be specific, in this model, the two-particle correlation is entirely determined by the one-particle distribution.
Instead of writing the latter down directly in terms of Fourier expansion~\cite{hydro-corr-ph-2}, we write down the one-particle distribution as a sum of two terms, namely, the distribution of the background and that of the tube.
\begin{eqnarray}
\frac{dN}{d\phi}(\phi,\phi_\mathrm{tube}) =\frac{dN_{\mathrm{bgd}}}{d\phi}(\phi) +\frac{dN_{\mathrm{tube}}}{d\phi}(\phi,\phi_\mathrm{tube}),
\label{eq-sec5-1-1}
\end{eqnarray}
where
\begin{eqnarray}
\frac{dN_{\mathrm{bgd}}}{d\phi}(\phi)&=&\frac{N_\mathrm{bgd}}{2\pi}(1+2v_2^\mathrm{bgd}\cos(2\phi)), \label{eq-sec5-1-2}\\
\frac{dN_{\mathrm{tube}}}{d\phi}(\phi,\phi_\mathrm{tube})&=&\frac{N_\mathrm{tube}}{2\pi}\sum_{n=2,3}2v_n^\mathrm{tube}\cos(n[\phi-\phi_\mathrm{tube}]) . \label{eq-sec5-1-3}
\end{eqnarray}
In Eq.(\ref{eq-sec5-1-2}) we consider the simplest case for the background flow, parametrizing it with the elliptic flow parameter $v_2^\mathrm{bgd}$ and the overall multiplicity, denoted by $N_\mathrm{bgd}$.
The contributions from the tube are measured with respect to its angular position $\phi_\mathrm{tube}$, and a minimal number of Fourier components are introduced to reproduce the desired two-particle correlation~\cite{sph-corr-4}, that is to say, only $v_2^\mathrm{tube}$ and $v_3^\mathrm{tube}$ are retained in Eq.(\ref{eq-sec5-1-3}).
It is worth noting that although both the contributions from the background and the tube are written in Fourier expansion, they are intrinsically independent distributions.
In particular, the triangular flow in our model is completely generated by the tube, and since its symmetry axis is correlated to the tube location $\phi_\mathrm{tube}$, the variation of the latter is related to the event-by-event fluctuations.
We also assume that the flow components from the background are much more significant than those generated by the tube, so that $\Psi_2$ is mainly determined by the elliptic flow of the background, $v_2^\mathrm{bgd}$.
Following the methods used by the STAR experiment~\cite{RHIC-star-plane-1,RHIC-star-plane-2,RHIC-star-plane-3}, the subtracted di-hadron correlation is given by
\begin{eqnarray}
\left<\frac{dN_{\mathrm{pair}}}{d\Delta\phi}(\phi_s)\right> =\left<\frac{dN_{\mathrm{pair}}}{d\Delta\phi}(\phi_s)\right>^{\mathrm{prop}} -\left<\frac{dN_{\mathrm{pair}}}{d\Delta\phi}(\phi_s)\right>^{\mathrm{mix}} , \nonumber \\
\label{eq-sec5-1-4}
\end{eqnarray}
where $\phi_s$ is the trigger angle ($\phi_s=0$ for in-plane and $\phi_s=\pi/2$ for out-of-plane trigger).
By using Eq.(\ref{eq-sec5-1-3}), one finds the proper two-particle correlation
\begin{eqnarray}
\left<\frac{dN_{\mathrm{pair}}}{d\Delta\phi}\right>^{\mathrm{prop}} =
\int\frac{d\phi_\mathrm{tube}}{2\pi}f(\phi_\mathrm{tube})
\frac{dN^T}{d\phi}(\phi_s,\phi_\mathrm{tube})
\frac{dN^A}{d\phi} (\phi_s+\Delta\phi,\phi_\mathrm{tube}), \nonumber \\ \label{eq-sec5-1-corr-proper}
\end{eqnarray}
where $f(\phi_\mathrm{tube})$ is the distribution function of the tube, and the superscripts ``$T$'' and ``$A$'' indicate ``trigger'' and ``associated'' particles, respectively (not to be confused with the subscript ``$T$'', which is shorthand for ``transverse'').
For simplicity, we take $f(\phi_\mathrm{tube})=1$.
In a more realistic approach, however, $f(\phi_\mathrm{tube})$ might contain a small elliptic modulation owing to the elliptic shape of the IC.
Nonetheless, as can be shown straightforwardly, such correction leads to additional contributions which are smaller than the results discussed below, and therefore are ignored in the present study.
The combinatorial background $\left<{dN_{\mathrm{pair}}}/{d\Delta\phi}\right>^{\mathrm{mix}}$ can be calculated by using either cumulant or ZYAM method \cite{zyam-1}.
Though both methods yield very similar results in our model, it is more illustrative to evaluate the cumulant.
Following similar arguments presented in Ref.~\cite{sph-corr-ev-4}, it is straightforward to show that the resultant correlation reads
\begin{eqnarray}
\langle \frac{dN_\mathrm{pair}}{d\Delta\phi}(\phi_s)\rangle ^{(\mathrm{cmlt})}
&=&\frac{\langle N_\mathrm{bgd}^T N_\mathrm{bgd}^A\rangle -\langle N_\mathrm{bgd}^T\rangle \langle N_\mathrm{bgd}^A\rangle }{(2\pi)^2}(1+2v_2^{\mathrm{bgd},T}\cos(2\phi_s))
(1+2v_2^{\mathrm{bgd},A}\cos(2(\Delta\phi+\phi_s))) \nonumber\\
&+&\frac{\langle N_\mathrm{tube}^T N_\mathrm{tube}^A\rangle }{(2\pi)^2}\sum_{n=2,3}2v_n^{\mathrm{tube},T}v_n^{\mathrm{tube},A}
\cos(n\Delta\phi) \ ,
\label{ccumulant}
\end{eqnarray}
where the event average is carried out by integration in $\phi_\mathrm{tube}$.
The above expression explicitly depends on $\phi_s$, which can be used to study the event plane dependence of the correlation.
In particular, the trigonometric dependence of the background contribution on $\phi_s$ indicates that its contribution to the out-of-plane triggers is opposite to that for the in-plane ones.
As a result, for the out-of-plane correlation, it leads to an overall suppression in the amplitude, as well as forms a double peak structure on the away side.
Indeed, experimental data~\cite{RHIC-star-plane-1,RHIC-star-plane-2,RHIC-star-plane-3} shows that the overall correlation decreases while the away-side correlation evolves from a single peak to a double peak as $\phi_s$ increases.
Since these observed features are in agreement with the analytically derived results, the peripheral tube model is shown to be meaningful despite its simplicity.
By further averaging out $\phi_s$, one obtains
\begin{eqnarray}
\langle \frac{dN_\mathrm{pair}}{d\Delta\phi}\rangle ^{(\mathrm{cmlt})}
&=&\frac{\langle N_\mathrm{bgd}^T N_\mathrm{bgd}^A\rangle -\langle N_\mathrm{bgd}^T\rangle \langle N_\mathrm{bgd}^A\rangle }{(2\pi)^2}(1+2v_2^{\mathrm{bgd},T}v_2^{\mathrm{bgd},A}\cos(2\Delta\phi)) \nonumber\\
&+&\frac{\langle N_\mathrm{tube}^T N_\mathrm{tube}^A\rangle }{(2\pi)^2}\sum_{n=2,3}2v_n^{\mathrm{tube},T}v_n^{\mathrm{tube},A}
\cos(n\Delta\phi).
\label{ccumulant2}
\end{eqnarray}
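Before proceeding, Eq.(\ref{ccumulant2}) can be checked numerically. The short Python sketch below is a sanity check of our own (all parameter values are arbitrary illustrations, not the fitted values of this work): it builds the one-particle distribution of Eqs.(\ref{eq-sec5-1-1}--\ref{eq-sec5-1-3}) on a grid, forms the proper and mixed pair distributions, and compares their difference with the closed form. Since the multiplicities are held fixed here, the background term of Eq.(\ref{ccumulant2}) vanishes and only the tube contribution survives.
\begin{verbatim}
import numpy as np

# Illustrative parameters only (not the fitted values of this work)
N_b, N_t = 10.0, 2.0            # background and tube multiplicities
v2b, v2t, v3t = 0.2, 0.1, 0.1   # background v2; tube v2, v3

def dNdphi(phi, phi_tube):
    # One-particle distribution, cf. Eqs. (eq-sec5-1-1)-(eq-sec5-1-3)
    bgd = N_b/(2*np.pi)*(1 + 2*v2b*np.cos(2*phi))
    tube = N_t/(2*np.pi)*(2*v2t*np.cos(2*(phi - phi_tube))
                          + 2*v3t*np.cos(3*(phi - phi_tube)))
    return bgd + tube

n = 256
phis = np.linspace(0, 2*np.pi, n, endpoint=False)   # trigger angles
dphi = np.linspace(0, 2*np.pi, n, endpoint=False)   # relative angles

proper = np.zeros(n)
for pt in phis:                                     # average over phi_tube
    f = dNdphi(phis, pt)                            # trigger yield
    g = dNdphi(phis[:, None] + dphi, pt)            # associated yield
    proper += (f[:, None]*g).mean(axis=0)           # average over phi_s
proper /= n

# Mixed events: the phi_tube average leaves only the background term
bgd = N_b/(2*np.pi)*(1 + 2*v2b*np.cos(2*phis))
mixed = (bgd[:, None]*(N_b/(2*np.pi))
         *(1 + 2*v2b*np.cos(2*(phis[:, None] + dphi)))).mean(axis=0)

# Tube term of Eq. (ccumulant2); the background term vanishes here
exact = (N_t/(2*np.pi))**2*(2*v2t*v2t*np.cos(2*dphi)
                            + 2*v3t*v3t*np.cos(3*dphi))
assert np.allclose(proper - mixed, exact, atol=1e-10)
\end{verbatim}
The agreement is exact up to floating-point error, since the uniform grids integrate the trigonometric products exactly.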
If the model is indeed realistic, one should be able to obtain the parameters in Eq.(\ref{ccumulant2}) according to their respective physical content, while the resulting correlations should still be qualitatively in agreement with the data.
This is the principal object of the present work.
In what follows, by carrying out numerical simulations, we first show that an appropriately constructed IC can reasonably reproduce the observed data.
Furthermore, we attempt to calculate the model parameters according to their definitions.
This is done by using various IC tailored to match the respective physical properties of IC in question.
In particular, we study the multiplicity fluctuations as well as the flow harmonics of different IC associated with the background as well as the peripheral tube.
The two-particle correlations are then evaluated by using the obtained values.
\begin{figure}
\begin{tabular}{cc}
\begin{minipage}{225pt}
\centerline{\includegraphics[width=200pt]{fig1_BGEnergy}}
\end{minipage}
&
\begin{minipage}{225pt}
\centerline{\includegraphics[width=200pt]{fig1_BGtube}}
\end{minipage}
\end{tabular}
\caption{(Color online) Left: the background energy distribution obtained by averaging over many NeXuS events. Right: one random event with a high energy peripheral tube sitting on top of the background.
The parametrization of the IC is discussed in the text.}
\label{figIC}
\end{figure}
\section{III. The model parameters extracted by comparing hydrodynamic simulations against the data}
For the present study, we focus on mid-central 200 AGeV Au+Au collisions.
We first show that one can devise the IC according to the peripheral tube model and subsequently evaluate the two-particle correlation.
This is done by estimating the background energy distribution by averaging over the 343 events generated by a microscopic event generator, NeXuS~\cite{nexus-rept}, for the centrality window 20\%-40\% as shown in Fig.\ref{figIC}.
The obtained almond shaped energy distribution is then fitted by the following parametrization
\begin{eqnarray}
\epsilon_\mathrm{bgd} &=& (9.33+7r^2+2r^4)e^{-r^{1.8}} \ \ \text{fm}^{-3}\, , \label{energyb}
\end{eqnarray}
with
\begin{eqnarray}
r &=& \sqrt{0.41x^2+0.186y^2} \ \ \text{fm} . \nonumber
\end{eqnarray}
The profile of the high energy tube is calibrated to that of a typical peripheral tube in NeXuS IC as follows
\begin{eqnarray}
\epsilon_\mathrm{tube}&=& 12e^{-(x-x_\mathrm{tube})^2-(y-y_\mathrm{tube})^2} \ \ \text{fm}^{-3} \, , \label{energyt}
\end{eqnarray}
where the tube is located at a given value of energy density close to the surface, determined by a free parameter $r_\mathrm{tube}$, so that its coordinates on the transverse plane read
\begin{eqnarray}
x_\mathrm{tube} &=& \frac{r_\mathrm{tube}\cos\theta}{\sqrt{0.41\cos^2\theta+0.186\sin^2\theta}} \ \ \text{fm}\\
y_\mathrm{tube} &=& \frac{r_\mathrm{tube}\sin\theta}{\sqrt{0.41\cos^2\theta+0.186\sin^2\theta}} \ \ \text{fm}. \nonumber
\end{eqnarray}
Here $r_\mathrm{tube}$ is used as a free parameter whose value is determined below, and the azimuthal angle of the tube $\theta$ is randomized among different events.
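For concreteness, the IC described above can be assembled directly from Eqs.(\ref{energyb}) and (\ref{energyt}). The following minimal Python sketch does so; the grid extent and resolution are arbitrary choices of ours, not taken from the actual simulation setup.
\begin{verbatim}
import numpy as np

def ic_profile(theta, r_tube=2.3, half=10.0, n=200):
    # Transverse-plane energy density (fm^-3): the elliptic background of
    # Eq. (energyb) plus one peripheral tube, Eq. (energyt), at angle theta
    x, y = np.meshgrid(np.linspace(-half, half, n),
                       np.linspace(-half, half, n))
    r = np.sqrt(0.41*x**2 + 0.186*y**2)
    e_bgd = (9.33 + 7*r**2 + 2*r**4)*np.exp(-r**1.8)
    d = np.sqrt(0.41*np.cos(theta)**2 + 0.186*np.sin(theta)**2)
    xt, yt = r_tube*np.cos(theta)/d, r_tube*np.sin(theta)/d
    e_tube = 12*np.exp(-(x - xt)**2 - (y - yt)**2)
    return e_bgd + e_tube

# One event: the tube angle is randomized, as described above
eps = ic_profile(np.random.default_rng(7).uniform(0.0, 2.0*np.pi))
\end{verbatim}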
\begin{figure}
\begin{tabular}{cc}
\begin{minipage}{250pt}
\centerline{\includegraphics[width=250pt]{fig3_pt1cz}}
\end{minipage}
&
\begin{minipage}{250pt}
\centerline{\includegraphics[width=250pt]{fig3_pt2cz}}
\end{minipage}
\end{tabular}
\caption{(Color online) The calculated two-particle correlations using one-tube IC for the $20\% - 40\%$ centrality window, in comparison with the corresponding data by the PHENIX Collaboration~\cite{RHIC-phenix-ridge-5}, and with those obtained by using the extracted parameters in Table \ref{tubeparameters} and Eqs.(\ref{lbNbNt3}) and (\ref{lbNbNt4}).
The SPheRIO results using the cumulant method are shown as red solid curves, the data are shown as solid squares, and the results obtained from the estimated parameters are shown as blue hatched curves.
Left: the results for the momentum intervals $0.4<p_T^A<1$ GeV and $2<p_T^T<3$ GeV.
Right: those for the momentum intervals $1<p_T^A<2$ GeV and $2<p_T^T<3$ GeV. }
\label{fig2p}
\end{figure}
By combining the two pieces together, the IC for the present model read
\begin{eqnarray}
\epsilon = \epsilon_\mathrm{bgd}+\epsilon_\mathrm{tube} .
\end{eqnarray}
Subsequently, we carry out hydrodynamical simulations by using the SPheRIO code~\cite{sph-review-1,sph-corr-1,sph-corr-2,sph-corr-4,sph-eos-2,sph-cfo-1,sph-eos-3,sph-corr-ev-8}, which is an ideal hydrodynamics code based on the smoothed particle hydrodynamics algorithm~\cite{sph-algorithm-review-01,sph-1st,sph-algorithm-09}.
For the present study, only the temporal evolution on the transverse plane is considered as Bjorken scaling is assumed.
We generate a total of 2000 events according to IC profile discussed above.
Then the hydrodynamical simulation is carried out for each event, at the end of which a Monte-Carlo generator is invoked 200 times for hadronization in order to further increase the statistics.
The resultant two-particle correlations, evaluated by cumulant method, are shown in Fig.\ref{fig2p}, in comparison with the PHENIX data~\cite{RHIC-phenix-ridge-5}.
The parameter $r_\mathrm{tube}$ is chosen to be $2.3$ fm in the calculations.
As the hydrodynamic simulations are two-dimensional, the obtained correlations are multiplied by a factor related to the longitudinal scaling of the system.
Now, it is interesting to verify that the parameters given in Eq.(\ref{ccumulant2}) are indeed quantitatively meaningful.
To achieve this, we extract the model parameters in Eq.(\ref{ccumulant2}) using mostly the same arguments that lead to that expression.
We first estimate those for the background distribution $\frac{dN_{\mathrm{bgd}}}{d\phi}$.
The background flow coefficients $v_2^\mathrm{bgd}$ can be obtained directly by investigating the hydrodynamic evolution of IC solely determined by $\epsilon_\mathrm{bgd}$.
A total of 2000 events with 200 Monte Carlo each are considered in the evaluation.
To estimate the multiplicity fluctuations of the background, $\langle N_\mathrm{bgd}^TN_\mathrm{bgd}^A\rangle - \langle N_\mathrm{bgd}^T\rangle\langle N_\mathrm{bgd}^A\rangle$, we count, on an event-by-event basis, the number of particles in the corresponding momentum intervals.
The events in question are those generated by NeXuS of 20\% - 40\% centrality window, and we made use of a total of 1000 events.
By using a Fourier expansion of the two-particle correlation, and extracting the second and third order coefficients, one obtains the parameters related to $v_2^{\mathrm{tube},T}$, $v_2^{\mathrm{tube},A}$, $v_3^{\mathrm{tube},T}$, and $v_3^{\mathrm{tube},A}$.
The obtained values are summarized in Table \ref{tubeparameters} and Eqs.(\ref{lbNbNt3}) and (\ref{lbNbNt4}).
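This Fourier projection is straightforward to carry out numerically. The following minimal Python sketch is our own illustration, not the analysis code used in this work; it extracts the $n$-th cosine coefficient from a correlation function sampled on a uniform grid in $\Delta\phi$.
\begin{verbatim}
import numpy as np

def cosine_coeff(C, n):
    # C: correlation sampled on a uniform grid over [0, 2*pi).
    # If C(dphi) = c0 + sum_m c_m cos(m*dphi), this returns c_n (n >= 1),
    # exactly on the grid as long as m and n are well below len(C).
    dphi = np.linspace(0.0, 2.0*np.pi, len(C), endpoint=False)
    return 2.0*np.mean(C*np.cos(n*dphi))
\end{verbatim}
Applied to the tube term of Eq.(\ref{ccumulant2}), the extracted coefficient equals $2\langle N_\mathrm{tube}^T N_\mathrm{tube}^A\rangle v_n^{\mathrm{tube},T}v_n^{\mathrm{tube},A}/(2\pi)^2$, which is how the combinations quoted below can be read off.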
\begin{table}[ht]
\begin{center}
\caption{The calculated background as well as overall elliptic flow coefficients for the corresponding transverse momentum intervals of trigger and associated particles.
The calculations are carried out by using IC with a smooth elliptic energy distribution, as described in the text.}\vspace{0.5cm}
\begin{tabular}{c|c|c|c}
& $0.4<p_T<1$ & $1<p_T<2$ & $2<p_T<3$\\
\hline
$v_{2}^\mathrm{bgd}$ & 0.11& 0.21& 0.36\\
\hline
$v_{2}^\mathrm{all}$ & 0.09& 0.17& 0.26\\
\end{tabular}
\label{tubeparameters}
\end{center}
\end{table}
On the other hand, however, some of the parameters can also be inferred straightforwardly from the experimental data.
Therefore, we carry out these comparisons with the published data as follows.
The overall elliptic flow, $v_2^\mathrm{all}$, obtained from simulations of event-by-event fluctuating IC consisting of the background and the tube, should be consistent with measurements for the same centrality window.
This is confirmed by comparing the value of $v_{2}^\mathrm{all}$ against the data for 20\%-60\% Au+Au collisions obtained by PHENIX~\cite{RHIC-phenix-v2-4}.
The multiplicity fluctuations estimated from simulations and for the corresponding momentum intervals are found to be
\begin{eqnarray}
\langle N^T_\mathrm{bgd}N^A_\mathrm{bgd}\rangle -\langle N^T_\mathrm{bgd}\rangle \langle N^A_\mathrm{bgd}\rangle =14.67 \, , \label{lbNbNt3} \\
\langle N^T_\mathrm{tube}N^A_\mathrm{tube}\rangle v_2^{\mathrm{tube},T}v_2^{\mathrm{tube},A}=1.62 , \nonumber\\
\langle N^T_\mathrm{tube}N^A_\mathrm{tube}\rangle v_3^{\mathrm{tube},T}v_3^{\mathrm{tube},A}=1.63 , \nonumber
\end{eqnarray}
for $0.4<p_T^A<1$, $2<p_T^T<3$, and
\begin{eqnarray}
\langle N^T_\mathrm{bgd}N^A_\mathrm{bgd}\rangle -\langle N^T_\mathrm{bgd}\rangle \langle N^A_\mathrm{bgd}\rangle =5.07 \, , \label{lbNbNt4} \\
\langle N^T_\mathrm{tube}N^A_\mathrm{tube}\rangle v_2^{\mathrm{tube},T}v_2^{\mathrm{tube},A}=1.36 , \nonumber\\
\langle N^T_\mathrm{tube}N^A_\mathrm{tube}\rangle v_3^{\mathrm{tube},T}v_3^{\mathrm{tube},A}=1.48 , \nonumber
\end{eqnarray}
for $1<p_T^A<2$, $2<p_T^T<3$ respectively.
Finally, by substituting the above parameters back into Eq.(\ref{ccumulant2}), one obtains the two-particle correlation as also shown in Fig.\ref{fig2p}.
It is found that the two approaches are in good agreement with each other.
\begin{figure}
\begin{tabular}{ccc}
\vspace{0pt}
\begin{minipage}{150pt}
\centerline{\includegraphics[width=180pt]{fig4_2tube_3000N}}
\end{minipage}
&
\begin{minipage}{150pt}
\centerline{\includegraphics[width=180pt]{fig4_3tube}}
\end{minipage}
&
\begin{minipage}{150pt}
\centerline{\includegraphics[width=180pt]{fig4_234}}
\end{minipage}
\\
\vspace{0pt}
\end{tabular}
\caption{(Color online) The relative angle distributions between $\Psi_2$ and $\Psi_3$ in the tube model.
Left: The relative angle distribution (filled black circles) evaluated as the ratio of the foreground (red empty circles) to the background (blue empty squares) signals, for IC with two high energy tubes.
Middle: The results evaluated by using different numbers of events for IC with three high energy tubes.
Right: The relative angle distributions calculated by using IC with different numbers of tubes, where each curve is obtained by using 3000 events.}
\label{figureCPsi23}
\end{figure}
Also, we investigated the correlation between different event planes.
In particular, since the elliptic flow and triangular flow from the tube are correlated by definition, it is interesting to verify whether the overall event planes $\Psi_2$ and $\Psi_3$ are correlated in comparison to existing data~\cite{LHC-atlas-vn-3}.
If one adds two high-energy tubes onto an isotropic background and assumes that the resultant distribution is just the superposition of those individual contributions, one observes that the resultant event planes might behave nonlinearly.
Assuming that the azimuthal angles of the two tubes are $0$ and $\phi_t$, and that their harmonic coefficients are $v_n^{i}$ with $i=1, 2$, it is straightforward to find that
\begin{eqnarray}
\Psi_2 = \frac12\arctan\frac{v_2^2\sin 2\phi_t}{v_2^1+v_2^2\cos 2\phi_t} \ ,\\
\Psi_3 = \frac13\arctan\frac{v_3^2\sin 3\phi_t}{v_3^1+v_3^2\cos 3\phi_t} \ .
\end{eqnarray}
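These expressions follow directly from the assumed linear superposition of the two tubes' contributions: the $n$-th order flow vector of the combined distribution is
\[
v_n e^{in\Psi_n} \propto v_n^1 + v_n^2 e^{in\phi_t} ,
\]
and $n\Psi_n$ is the phase of the right-hand side, whose tangent is $v_n^2\sin(n\phi_t)/\left(v_n^1+v_n^2\cos(n\phi_t)\right)$; setting $n=2,3$ reproduces the formulas above.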
Thus in general $\Psi_2 \ne \Psi_3$ if the harmonic coefficients of the two tubes are not precisely identical.
This has been demonstrated in our numerical study presented in Ref.~\cite{sph-corr-7}.
There the background is assumed to be isotropic and all the tubes are identical; even so, the numerical study shows that the event planes are uncorrelated.
To confirm these results with better statistics, we carried out calculations in accordance with Ref.~\cite{LHC-atlas-vn-3} using a larger number of events, where the relative angle distribution was obtained as the ratio of foreground to background signals.
Calculations were carried out with IC constructed using different numbers of tubes added to an isotropic background with randomized azimuthal angles.
As shown in Fig.\ref{figureCPsi23}, the relative angle distribution is found to approach that of a random distribution as the number of events increases.
The feature is observed for IC with two, three and four high-energy tubes and found to be consistent with our previous findings.
It indicates that the tube model is consistent with the observed event plane correlation.
\section{IV. Concluding remarks}
The hydrodynamical approach has been shown to provide a successful description of many experimental data on relativistic heavy ion collisions, demonstrating the emergence of collective flow dynamics.
It is widely accepted that the flow dynamics are expressed by a set of flow harmonics, including correlations among them~\cite{hydro-corr-ph-2,hydro-corr-ph-4,hydro-corr-ph-6,hydro-corr-ph-7,hydro-corr-ph-8} on an event-by-event basis.
On the other hand, if one wishes to investigate the properties of the matter, such as the degree of local thermal equilibrium, the equation of state, and the transport coefficients, one needs to study observables sensitive to specific aspects of hydrodynamics which are genuinely nonlinear.
Such behavior is, of course, contained in the intrinsically nonlinear, higher order correlators~\cite{hydro-corr-ph-7,hydro-corr-ph-8,hydro-vn-10}.
We understand that they may manifest themselves in a nontrivial way, and it will not be straightforward to extract a simple physical picture from them.
Here, we note that several microscopic transport models, such as AMPT or PHSD, have been shown to have properties similar to viscous hydrodynamic calculations~\cite{ampt-4,ampt-5,hydro-review-06}.
In particular, it is worth mentioning that in the event-by-event realizations, a state close to local thermal equilibrium only corresponds to a tiny space-time domain during the entire dynamical evolution~\cite{hydro-review-06,phsd-01}.
To clarify up to what extent the genuine event-by-event hydrodynamics is valid, we should seek a new set of observables which are sensitive to the nonlinear evolution of the system.
The primary objective of the present paper is that, as a complementary effort to the current statistical analysis of flow harmonics, we propose a straightforward physical picture which describes the two-particle angular correlations.
It may serve as a possible candidate for the signals of the emergence of the local non-linear dynamics.
In the standard approach, the double-peaked behavior is associated with the octupole inhomogeneity in the initial energy distribution.
Such a vision is relatively well-accepted, particularly when one considers the linear correlations between the event-averaged initial energy inhomogeneity $\langle \epsilon_n\rangle$ and the flow harmonics $\langle v_n\rangle$.
On the other hand, if we push the above arguments to the extreme, the concept of ``local thermal equilibrium'' becomes meaningless, since the correlations between $\langle v_3\rangle$ and $\langle \epsilon_3\rangle$ are only valid on an event-averaged basis.
Therefore, the nonlinear regime and fluctuations are of fundamental significance~\cite{hydro-corr-ph-7,hydro-corr-ph-8}.
It is true that the observed two-particle correlations suffer from the ambiguities of the subtraction process via, for example, the ZYAM method.
However, once we establish a reasonable physically simple model, we can investigate the nonlinear properties and fluctuations within the framework of the standard flow harmonics analysis.
Regarding the above points, we develop the peripheral tube model for the two-particle correlations and show that the data are consistent with those obtained by event-by-event hydrodynamical simulations using appropriately devised IC.
The latter is further shown to be in qualitative agreement with those by using the analytic result of a simplified model with the parameters extracted from flow coefficients and multiplicity fluctuations.
The model can be employed to study the event plane dependence of the two-particle correlations~\cite{sph-corr-ev-4}, which can be derived as the modulation of the background evolves from the in-plane to the out-of-plane direction.
Also, the centrality dependence of the correlations has been discussed~\cite{sph-corr-ev-6}, where the model parameters were extracted from the NeXuS IC and were shown to give consistent results.
These results indicate that the peripheral tube model offers a simple physical vision for the interpretation of the observed features of the two-particle correlations in nuclear collisions.
Note that the peripheral tube model, in general, generates the octupole anisotropy of the energy distribution, so that the correlations among $\epsilon_3$ and $v_3$ also appear.
In this context, the analysis of fluctuations using higher order correlations may reveal the signal of the genuine nonlinear hydrodynamics.
It is interesting to propose further and investigate observables which carry such signals.
Quantities associated with the event plane correlations~\cite{LHC-atlas-vn-3,LHC-atlas-vn-5,LHC-alice-vn-5} and the 2+1 correlations~\cite{RHIC-star-corr-2plus1-1} may provide vital information.
We believe that our approach, to be proven right or not, will offer a meaningful reference to investigate the nonlinear regime of the flow dynamics.
In this regard, many specific aspects of the model should be further clarified and explored in terms of standard flow harmonics.
Works in this direction are under progress.
\section*{Acknowledgments}
We gratefully acknowledge the financial support from
Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP),
Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado do Rio de Janeiro (FAPERJ),
Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico (CNPq),
and Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'ivel Superior (CAPES).
A part of the work was developed under the project INCT-FNA Proc. No. 464898/2014-5, the Center for Scientific Computing (NCC/GridUNESP) of the S\~ao Paulo State University (UNESP),
and National Natural Science Foundation of China (NNSFC) under contract No.11805166.
\bibliographystyle{h-physrev}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 1,516 |
Looking for wedding photography in Melbourne, VIC? At Lensure, we know that choosing the right photographer is just as vital as getting the best pictures, and this is what makes wedding photography such a demanding and thorough job. Above all, the wedding photos remain to record history and document the special day. We deliver the finest wedding photography in Melbourne, VIC: our professional wedding photographers are available throughout Melbourne and capture timeless memories. Call 04 4734 4238 today for affordable wedding photography in Melbourne.
"redpajama_set_name": "RedPajamaC4"
} | 2,392 |
Yosif Ivanov Tsankov (Йосиф Иванов Цанков; 7 November 1911, Ruse, Third Bulgarian Tsardom – 21 October 1971, Sofia, People's Republic of Bulgaria) was a Bulgarian composer, a founder of modern Bulgarian light music, of the estrada song and of pop music in Bulgaria, as well as a sportsman and a basketball coach.
Biography
He was born into the family of a wealthy merchant. From childhood he spoke French, German and English, and he played the piano from the age of six. At the age of eleven he created his first musical composition. In the 1924/1925 and 1925/1926 academic years he studied at the American Robert College in Constantinople. In 1930 he entered the law faculty of Sofia University. While studying in Constantinople he became interested in basketball, and after moving to Sofia he was one of the founders of the basketball team AS-23. He was a co-founder of the men's basketball team, in which he played as captain and centre forward, and he founded and coached the women's basketball team. He was a member of the Bulgarian men's national basketball team and took part in the 1935 European championship.
He worked as the representative of Osram in Bulgaria, and was also a co-owner and manager of the country's first gramophone record factory, "Berlin Gramophone", later "German Gramophone".
He was the chief music director and composer of the Odeon theatre. He was one of the founders of Bulgarian National Radio (initially "Radio Sofia") in 1936. A well-known philatelist, he had one of the richest collections in Bulgaria.
He wrote more than 500 musical compositions: the first Bulgarian operettas, waltzes, tangos, foxtrots, rumbas and more. During the bombing of Sofia in the Second World War, most of his works were destroyed.
He was the author of the operettas "Sold Love" (1937), "Zhuana" (1940, Cooperative Operetta Theatre), "Hold On, Zhuzhi!" (1941) and "The Golden Widow" (1945).
He died in his sleep of a heart attack.
Awards
Order of Stara Planina (posthumously, 22 May 2002)
Legacy
Since 2002, a street in Sofia has been named after him.
Literature
Teatralnaya Entsiklopediya (Theatre Encyclopedia), Vol. 5, ed. P. A. Markov. Moscow: Sovetskaya Entsiklopediya, 1967.
External links
Yosif Tsankov
Categories: songwriter-composers, operetta composers, deaths from myocardial infarction, Bulgarian basketball players, Bulgarian philatelists, Sofia University alumni, Bulgarian basketball coaches
"redpajama_set_name": "RedPajamaWikipedia"
} | 2,388 |
A great online advertising campaign should focus on spending as little as possible while getting as much as possible in return. Skilled marketers understand this. Amateur marketers ordinarily don't grasp it until they have sunk thousands of dollars into campaigns that only generate a handful of dollars, if they are lucky. That's called going into the red, but let's review. OK, you know what a banner ad is and you feel you know what you want it to say. That is a start, but it is only a start. There are other, more pressing points to take into account. The banner ad is a very important piece of the advertising puzzle, but where you place it makes all the difference in the world. In the brick-and-mortar world of real estate, businesses seek out buildings for their operations based on where they are located. Businessmen and businesswomen generally attribute success to location, location, location.
Now, in the brick-and-mortar world, hot locations are very expensive to acquire. This is simply because the owner knows it is a hot property and sets prices accordingly. Mega-hot online properties are no different. The laws of supply and demand drive their prices upward to the point that small businesses simply can't afford to compete for these spots. So, if you have tried to get noticed online with a budget of less than 100 dollars, you have undoubtedly found that you are out of luck.
Getting the world to take notice requires a certain level of advertising innovation, and most marketers don't have it and subsequently do it wrong. As previously stated, supply and demand drive the price of advertising up very quickly. You have to find the sites that are destined to take off and get in BEFORE they go big. Many small advertisers wait around and end up competing with the big players once the property goes hot. Stop doing that. You can't afford it.
Marketing online is the first thing marketers think of when their enterprise is ready to launch. However, building a website that puts their product or service on display for the world to see is a lot easier than getting the world to actually want to see it. The way to beat unfriendly advertising pricing structures is to be innovative in your ad placements. Look for sites that are just starting out and scoop up the ad spaces like a stock ready to blast off.
"redpajama_set_name": "RedPajamaC4"
} | 4,934 |
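// Minimal declaration of the 'choose' class: only the default constructor and
// the destructor are declared; their definitions are expected elsewhere.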
class choose
{
public:
choose();
~choose();
};
| {
"redpajama_set_name": "RedPajamaGithub"
} | 8,532 |
{"url":"https:\/\/discuss.codechef.com\/t\/tandem-editorial\/12592","text":"# TANDEM - Editorial\n\n#1\n\nAuthor: Ke Bi\nTester: Kevin Atienza\nTranslators: Vasya Antoniuk (Russian), Team VNOI (Vietnamese) and Hu Zecong (Mandarin)\nEditorialist: Kevin Atienza\n\nMedium-Hard\n\n### PREREQUISITES:\n\nSuffix array, longest common prefix, string hashing, range minimum query, segment tree\n\n### PROBLEM:\n\nA tandem is a nonempty string repeated thrice, i.e., a string of the form AAA for a nonempty string A. A tandem is interesting if the character following each instance of A are not all the same, otherwise it is boring. Given a string s of length N, how many interesting and boring tandems does s have?\n\n### QUICK EXPLANATION:\n\nWe enumerate interesting and boring tandems per length of A. Let\u2019s say |A| = L. Thus, the length of the tandem is 3L. Fix this L.\n\nConsider the positions 0, L, 2L, 3L, \\ldots. A tandem of length 3L will cover exactly three such positions. So we count boring and interesting tandems per three positions i, j, k with (i,j,k) = (i,i+L,i+2L).\n\n\u2022 Let LCP(i,j,k) be the length of the longest common prefix of s[i\\ldots N-1], s[j\\ldots N-1] and s[k\\ldots N-1].\n\u2022 Let LCS(i,j,k) be the length of the longest common suffix of s[0\\ldots i], s[0\\ldots j] and s[0\\ldots k].\n\nLet V = \\min(LCP(i,j,k),L) + \\min(LCS(i,j,k),L) - 1. Then:\n\n\u2022 If V < L, then there are no tandems.\n\u2022 If V \\ge L and LCP(i,j,k) > L, then there are V-L+1 boring tandems and 0 interesting tandems.\n\u2022 If V \\ge L and LCP(i,j,k) \\le L, then there are V-L boring tandems and 1 interesting tandem.\n\nThe answers are the sums of all these contributions across all $L$s and across all $(i,j,k)$s.\n\n### EXPLANATION:\n\nWe will use the notation s_{i, j} to denote the substring from s_i to s_j. Thus, |s_{i,j}| = j-i+1.\n\n# Faster than O(N^3)\n\nThe obvious O(N^3) brute-force algorithm here doesn\u2019t pass, because N is very large. (And actually, even O(N^2) shouldn\u2019t pass.) So let\u2019s try to improve on O(N^3). Here we\u2019ll try to show a solution that will lead to the intended one.\n\nLet\u2019s enumerate the tandems according to the length of A. Suppose |A| = L, so the tandem AAA has length 3L. For now, let\u2019s fix this L and then count all boring and interesting tandems of length 3L. Ideally our solution will be fast enough so we can do this for each L \\in [1,N\/3].\n\nConsider the characters s_0, s_L, s_{2L}, s_{3L}, \\ldots. There are approximately N\/L such positions. But since these are spaced L characters apart from each other, it follows that every tandem of length 3L covers exactly three of these. Therefore, let\u2019s fix three adjacent characters, say s_i, s_j, s_k where (i,j) and (j,k) are spaced L characters away from each other, then count all boring and interesting tandems covering these three.\n\nThe tandem must start at some position in [i-L+1,i]. Otherwise, if it starts at some position > i it will not be able to cover s_i, and if it starts at some position \\le i-L, it will not be able to cover s_k. But it can\u2019t just start at any such position! If we want the string s_{a, a+3L-1} to be a tandem, the following must hold (by definition):\n\ns_{a,a+L-1} = s_{a+L,a+2L-1} = s_{a+2L,a+3L-1}\n\nBut due to the way we selected the position a, we find that a \\le i \\le a+L-1, a+L \\le j \\le a+2L-1 and a+2L \\le k \\le a+3L-1. 
It means that we can decompose the equality above into two parts:\n\ns_{a,i} = s_{a+L,j} = s_{a+2L,k}\ns_{i,a+L-1} = s_{j,a+2L-1} = s_{k,a+3L-1}\n\nWe can restate these two equalities in another way. The first one is the same as:\n\nThe strings s_{0, i}, s_{0, j} and s_{0, k} must have a common suffix of length i-a+1.\n\nThe second one is the same as:\n\nThe strings s_{i, N-1}, s_{j, N-1} and s_{k, N-1} must have a common prefix of length a+L-i.\n\nThis now gives us a way to compute the number of tandems covering s_i, s_j and s_k. Let\u2019s first define two things:\n\n\u2022 LCP(i,j,k) is the length of the longest common prefix of s_{i, N-1}, s_{j, N-1} and s_{k, N-1}.\n\u2022 LCS(i,j,k) is the length of the longest common suffix of s_{0, i}, s_{0, j} and s_{0, k},. (Not to be confused with the longest common subsequence.)\n\nThere is a tandem starting at a if i-a+1 \\le LCS(i,j,k), a+L-i \\le LCP(i,j,k), and a\\in [i-L+1,i]. Rearranging these inequalities gives:\n\n\\max(i-LCS(i,j,k)+1, i-L+1) \\le a \\le \\min(LCP(i,j,k)-L+i, i)\n\nwhich means the number of valid $a$s is\n\n\\min(LCP(i,j,k)-L+i, i) - \\max(i-LCS(i,j,k)+1, i-L+1) + 1\n\nor 0 if this number is negative. We can actually simplify that number to the following:\n\n\\min(LCP(i,j,k),L) + \\min(LCS(i,j,k),L) - L\n\nBut which among these are boring, and which are interesting? Notice that if s_{a,a+3L-1} and s_{a+1,a+3L} are both tandems, then s_{a,a+3L-1} is automatically boring, because the following characters are the same. It means that only the last tandem, s_{i,i+3L-1} can be interesting. To check if it\u2019s interesting, simply check if s_{i+L} = s_{j+L} = s_{k+L}. If this is true, then the string is boring, otherwise it is interesting!\n\nComputing \\min(LCP(i,j,k),L) and \\min(LCS(i,j,k),L) na\u00efvely takes O(L) time each. Since we will do this approximately N\/L times for each length L, the running time is therefore O(N\/L\\cdot L) = O(N) per L. So if we do this for each L\\in [1,N\/3], the running time is O(N^2). This is much better than O(N^3)!\n\n# Faster solution\n\nTo improve that solution, we need a faster way to compute LCP(i,j,k) and LCS(i,j,k). We\u2019ll describe various solutions here.\n\n## Solution 1: Hashing + Binary Search\n\nIf we can compare any two substrings for equality in O(1) time, then we can improve computing LCP and LCS quickly using binary search. One way to do substring comparisons quickly is with hashing!\n\nSpecifically, we will use polynomial hashing. We fix a base B and a modulus M. Let\u2019s now define the hash function H(S). For the string s = s_0s_1s_2\\ldots s_{N-1}, its hash will be the following value:\n\nH(s) = (v(s_0) + v(s_1)B^1 + v(s_2)B^2 + v(s_3)B^3 + \\ldots + v(s_{N-1})B^{N-1}) \\bmod M\n\nwhere v(c) is another function that maps a character to a number.\n\nWe will use this hash to compare substrings in O(1):\n\n\u2022 Preprocess the string so that H(s) can be computed in O(1) time for any substring s.\n\u2022 To compare two substrings s and t, simply compare their hashes, H(s) and H(t).\n\nIf two strings are equal, then their hashes must be the same. Unfortunately, if they are not equal, the hashes might still be the same. For this reason, this hashing algorithm fails sometimes. But if you choose B, M and v(c) well enough, then you\u2019ll probably still get accepted\n\nSo how do we preprocess the string? The key advantage of polynomial hashing is the following properties: (assuming we have precomputed powers of B modulo M)\n\n\u2022 Concatenation. 
Given H(s) and H(t), H(st) can be computed in O(1) time as H(st) = (H(s) + H(t)B^{|s|})\\bmod M.\n\u2022 Splitting. Given H(st) and H(t), H(s) can be computed in O(1) time as H(s) = (H(st) - H(t)B^{|s|})\\bmod M.\n\nThese properties are what we can use to preprocess. We simply compute the hashes of all suffixes of the string. This can be done in O(N) by using the concatenation property above. Then, to compute the hash of a substring s, find the appropriate suffixes st and t, then compute H(s) from H(st) and H(t) using the splitting property!\n\nSo what\u2019s the running time of this algorithm? Notice that LCP and LCS are computed with binary search, which takes O(\\log N) time each, so doing that N\/L times gives O(N\/L \\log N). Summing from 1 to L gives\n\nO\\left(\\sum_{L=1}^{N\/3} N\/L \\log N\\right) = O\\left(N \\log N \\sum_{L} 1\/L\\right) = O(N \\log^2 N)\n\nowing to the fact that the Harmonic series grows as O(\\log N).\n\n## Solution 2: Suffix array + segment tree\n\nComputing longest common prefixes is a standard step when you already have the suffix array. After constructing the suffix array, constructing the LCP array takes just O(N). Afterwards, to find the longest common prefix of two suffixes, s_{i,N-1} and s_{j,N-1}:\n\n\u2022 Find the positions of i and j in the suffix array. Let\u2019s say they are i' and j'. Without loss of generality, assume i' \\le j'. (We can swap i and j otherwise.)\n\u2022 The LCP is now \\min(LCP[i'+1],LCP[i'+2],\\ldots,LCP[j']).\n\nModifying this for three suffixes is straightforward. Thus, we have reduced LCP to a range minimum query, which is pretty standard Similarly, you can construct the \"LCS array\" by constructing a suffix array for the reverse of s, so LCS can be reduced to a range minimum query as well.\n\nHow do we compute range minimum queries? The most standard way is to use segment trees. This way, each range minimum query takes O(\\log N) time to compute. Thus, the running time is again O(N \\log^2 N) like above. (Note that the suffix array can be constructed in O(N \\log^2 N) time.) But if you implement a segment tree normally, you might get TLE, even though the time complexity is correct! If this happens to you, here\u2019s an improvement: Notice that we actually only need \\min(LCP(i,j,k),L). Thus, we can add a third argument to our range_min query call, which denotes the current upper bound. range_min now looks like this:\n\ndef range_min(i, j, L):\nif this.min >= L: \/\/ special pruning.\nreturn L \/\/ if this.min >= L, it's impossible to improve upon 'L'.\nif i <= this.i and this.j <= j:\nreturn this.min\nif this.j < i or j < this.i:\nreturn L\nreturn left.range_min(i, j, right.range_min(i, j, L))\n\n\nNotice that the partial results are getting passed everywhere, so that the pruning done by this.min >= L is maximized. With this, I was able to get the segment tree solution accepted!\n\n## Solution 3: Suffix array + sparse table\n\nAnother way to compute range minimum queries is by computing a table of minimums. Let M*[k] be the range minimum in the range [i,i+2^k-1]. We can build the table M*[k] using the following property:\n\n\\begin{align*} M*[0] &= A_i \\\\\\ M*[k] &= \\min(M*[k-1], M[i+2^{k-1}][k-1]) \\end{align*}\n\nThus, this construction takes O(N \\log N) time. Afterwards, each range query runs in O(1) time, because the minimum in the range [i,j] is \\min(M*[k], M[j-2^k+1][k]), where 2^k is the largest power of two \\le j-i+1. 
(Now, computing k takes O(\\log N), but you can just precompute it in O(N) time for each length j-i+1.)\n\nNotice that we moved the \\log N factor from query time to construction time; whereas segment trees take O(N) time to construct and has O(\\log N) query time, this solution takes O(N \\log N) time to construct and has O(1) query time. But this change is significant, because the number of queries that we have is actually:\n\nO\\left(\\sum_L N\/L\\right) = O(N \\log N)\n\nwhich means that the overall running time (after constructing the suffix array) is\n\nO(N \\log N) + O(N \\log N) = O(N \\log N)\n\nwhich is an improvement over O(N \\log^2 N)! (Note that there are O(N\\log N), and even O(N) algorithms to construct the suffix array.)\n\nO(N \\log^2 N)\n\n### AUTHOR\u2019S AND TESTER\u2019S SOLUTIONS:\n\n#2\n\nThat was a good one!","date":"2019-05-23 22:20:43","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8951706886291504, \"perplexity\": 2693.4338884657486}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 5, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-22\/segments\/1558232257396.96\/warc\/CC-MAIN-20190523204120-20190523230120-00441.warc.gz\"}"} | null | null |
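To make Solution 1 concrete, here is a minimal Python sketch of the hashing-plus-binary-search primitive. It is not the author's or tester's code: B, M and all helper names are illustrative, and it stores prefix hashes with the highest power of B on the earliest character, a common variant of the scheme described above. Any substring hash then comes out in O(1), and the LCP of two suffixes in O(log N); the three-way LCP(i,j,k) used by the editorial is min(lcp(i,j), lcp(j,k)), and LCS is the same computation on the reversed string.

```python
B, M = 131, (1 << 61) - 1  # illustrative base and (Mersenne-prime) modulus

def build(s):
    """Prefix hashes h[i] = hash of s[0..i-1], plus precomputed powers of B."""
    n = len(s)
    h = [0] * (n + 1)
    pw = [1] * (n + 1)
    for i, c in enumerate(s):
        h[i + 1] = (h[i] * B + ord(c)) % M
        pw[i + 1] = pw[i] * B % M
    return h, pw

def sub_hash(h, pw, i, j):
    """Hash of s[i..j-1] in O(1), via the splitting property."""
    return (h[j] - h[i] * pw[j - i]) % M

def lcp(s, h, pw, i, j):
    """Longest common prefix of s[i..] and s[j..], by binary search in O(log n)."""
    lo, hi = 0, len(s) - max(i, j)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if sub_hash(h, pw, i, i + mid) == sub_hash(h, pw, j, j + mid):
            lo = mid  # hashes match: the common prefix is at least mid long
        else:
            hi = mid - 1
    return lo

s = "abaabaabaab"
h, pw = build(s)
print(lcp(s, h, pw, 0, 3))  # both suffixes start with "abaabaab" -> prints 8
```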
{"url":"https:\/\/www.appropedia.org\/Appropedia:To_do","text":"## For Staff\n\n### Shortnames\n\nDone - November 06.\n\n\u2022 Try to implement short names:\n\u2022 http:\/\/meta.wikimedia.org\/wiki\/Rewrite_rules\n\u2022 See test working at buildcapacity.org (a test time-lagged copy of appropedia)\n\u2022 So far only problem is titles with ampersands break at the & sign.\n\u2022 Why do people caution against a short name that reads the directory your mediawiki is installed in. Why do most opt for a directory that does not exist, such as www.appropedia.org\/wiki\/, where wiki is not an actual folder?\n\nThe thought that comes up is that there might be some search order behavior that might actually find a hit if there is a real directory involved? --Curtbeckmann 17:02, 29 September 2006 (PDT)\n\n### Regular backup\n\n\u2022 Regular backup schedule and instructions [1]\n1. Click Databases\n3. Click Export\n4. Click appropedia\n5. Select Save as file\n6. Click Go\n\u2022 Look for new scheme based on new server allowing shell access.\n\n### Help Pages\n\nDone\n\n\u2022 Posting less spammable email addy:\n\u2022 If you want to post your own spam resistant email address, use something like:\n\u2022 name[[File:atsymbol.png]]organization[[File:dot.png]]org\n\u2022 to show\n\u2022 nameorganizationorg\nThis seems cool. Can it be a template? I'll get to work on it\u00a0:-)\nSeems to work. See Template:Em. Takes three arguments. Where to offer this to users? --Curtbeckmann 19:35, 4 October 2006 (PDT)\n\n## How to mark notes for tech admin work\n\n\u2022 Use {{deeptechadmin}} that puts a marker and includes it in a category?\n\n\u2022 Add interwiki links as necessary (note: when we have shell access we can add all quickly with REPLACE INTO interwiki (iw_prefix,iw_url,iw_local) VALUES[2], although many are already done):\n\u2022 Login to phpMyAdmin and navigate to Server: Localhost -> Database: appropedia -> Table: w1interwiki.\n\u2022 Click the browse tab for a list of all rows in the table.\n\u2022 At the bottom of the page, select insert new row.\n\u2022 On the following page, fill in the keyword for interwiki linking in the value field of iw_prefix. Then type the wiki URL into the field iw_url. Make sure to append $1 to the URL (this will be replaced with the article name). e.g. http:\/\/en.wikipedia.org\/wiki\/$1 for Wikipedia.\n\u2022 NOTE: do not select a function for either fields. This will convert the values you have entered. Ignore the fields iw_local and iw_trans.\n\u2022 Click go, and check to see if a new entry has been added to the table.\n\u2022 Additional note: when I first tried this, I clicked the insert tab at the top of the page and did the same as above. It did not work, but also did not appear to cause any problems. If you experience problems with interwiki linking, please have an experienced phpMyAdmin user check it out.\n\u2022 add http:\/\/en.wiktionary.org\/wiki\/ the wiki dictionary.\n\n### Encourage more Spanish\n\n\u2022 Encourage some Spanish projects and sites... here is some language from wikipedia en espanol, para describir este sitio.\n\u2022 Wikipedia es una enciclopedia libre escrita por voluntarios de todo el mundo. 
Los siguientes art\u00edculos pueden servir de ayuda ya sea para leer, editar o participar de los objetivos de este sitio.\n\n## For BOD\n\n### Templates\n\n\u2022 Template for Topics\n\u2022 Introductory Description (brief)\n\u2022 How it works (in Depth)\n\u2022 Examples\n\u2022 Pros\n\u2022 Cons\n\u2022 Common Errors\n\u2022 Projects\n\u2022 Template for Projects\n\u2022 see Category:Templates\n\u2022 banners, userboxes or infoboxes to differentiate between projects, programs, topics, tools, etc.\n\n### Converting\n\nDone --Chriswaterguy 10:38, 24 February 2010 (UTC)\n\nFind the best then add to help pages.\n\n## For All\n\n### Portals and categories\n\nSee Appropedia:Portals and categories KVDP 02:21, 26 October 2012 (PDT)\n\n### Topics\n\nDone --Chriswaterguy 10:38, 24 February 2010 (UTC)\n\n\u2022 Topics are now called categories to match with inherent wiki format\n\u2022 keep building list of topics, but insert them as categories\n\u2022 e.g. Kirsten Thompson - Male contraception ~ Category:Male contraception\n\u2022 Keep talking about namespaces versus other ways to differentiate the different areas of Appropedia.\n\n### Other media\n\nDone --Chriswaterguy 10:38, 24 February 2010 (UTC)\n\nDone --Chriswaterguy 10:38, 24 February 2010 (UTC)\n\n### Suggestions for other areas\n\n#### Reviews\n\n\u2022 resource, books, tools reviews format suggestion:\n\u2022 title\/ suitable unique descriptor\n\u2022 author\/source\n\u2022 review (personal comment), signed? Reviews have value when they come from somebody's experience\n\u2022 relevant picture, maybe not the cover\n\u2022 relevant quote(s) quotes or a single picture for reviews are generally allowed under copyright rules\n\nUnlike the main articles, which should attempt unified result through peer review, I believe reviews themselves should show personal opinions, with some editing as needed for neatness\/spelling\/appropriatedness, to avoid a \"from the pulpit\" endorsement of a given resource.\n\n#### Topics\n\nDone --Chriswaterguy 10:38, 24 February 2010 (UTC)\n\n\u2022 Consider distancing the categories from encylopedic entries and\/or just doing some type of transclussion.\n\n#### Curricula\n\n\u2022 Once we have a curricula area, encourage teachers with language such as:\nHey teachers - Post your curricullum in the Curricula area\nStudents can make comments, or ask questions (or consider a seperate Q&A page)\nSee ____________ for an example\nTo start just search for something like Myname_mycourse\nStart editting and do not forget to add the [[Category:Curricula]] at the bottom of the page","date":"2022-08-08 22:45:57","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.26879340410232544, \"perplexity\": 11703.015451106507}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, 
\"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-33\/segments\/1659882570879.1\/warc\/CC-MAIN-20220808213349-20220809003349-00551.warc.gz\"}"} | null | null |
Q: Record is not added to the database using JSP and MySQL. I am creating a simple CRUD application using JSP and MySQL, but the record is not added to the database. I have been trying to solve this for 3 days but couldn't find the problem. What I have tried so far is attached below: the form design, the Student.java class that holds the data, the StudentDAO class that talks to the database, and Add.jsp, which adds the data to the database.
Form
<div class="form-group" align="left">
<label class="form-label">Student Name</label>
<input type="text" class="form-control" placeholder="Student Name" id="stname" name="stname" size="30px" required>
</div>
<div class="form-group" align="left">
<label class="form-label">Course</label>
<input type="text" class="form-control" placeholder="Course" id="course" name="course" size="30px" required>
</div>
<div class="form-group" align="left">
<label class="form-label">Fee</label>
<input type="text" class="form-control" placeholder="Fee" id="fee" name="fee" size="30px" required>
</div>
<div class="card" align="right">
<button type="button" class=" btn btn-info" id="save" onclick="addProject()">Registation
</button>
<button type="button" class="btn btn-warning" id="clear" onclick="clear()">Close</button>
</div>
Student.java
public class Student {
    private String stname;
    private String course;
    private int fee;

    public Student(String stname, String course, int fee) {
        this.stname = stname;
        this.course = course;
        this.fee = fee;
    }

    public String getStname() {
        return stname;
    }

    public void setStname(String stname) {
        this.stname = stname;
    }

    public String getCourse() {
        return course;
    }

    public void setCourse(String course) {
        this.course = course;
    }

    public int getFee() {
        return fee;
    }

    public void setFee(int fee) {
        this.fee = fee;
    }
}
StudentDAO.java
public class StudentDAO {
    Connection connection;
    PreparedStatement pst;

    public StudentDAO() {
        String jdbcURL = "jdbc:mysql://localhost/studcrud";
        String dbUser = "root";
        String dbPassword = "";
        try {
            Class.forName("com.mysql.jdbc.Driver");
            connection = DriverManager.getConnection(jdbcURL, dbUser, dbPassword);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    public int insertUser(Student stud) throws SQLException, ClassNotFoundException {
        String sql = "insert into records(name,course,fee) values(?,?,?)";
        PreparedStatement statement = connection.prepareStatement(sql);
        statement.setString(1, stud.getStname());
        statement.setString(2, stud.getCourse());
        statement.setInt(3, stud.getFee());
        int row = statement.executeUpdate();
        return row;
    }
}
Add.jsp
<%@page import="org.json.simple.JSONArray"%>
<%@page import="org.json.simple.JSONObject"%>
<%@page import="java.sql.*" %>
<%@page contentType="text/html" pageEncoding="UTF-8"%>
<%
String stname = request.getParameter("stname");
String course = request.getParameter("course");
int fee = Integer.parseInt(request.getParameter("fee"));
//fill it up in a Student Bean
Student stud = new Student(stname,course,fee);
StudentDAO stu = new StudentDAO();
int rows = stu.insertUser(stud);
String message;
if(rows==0)
{
message = "success";
}
else
{
message = "Failed";
}
{"url":"https:\/\/physics.stackexchange.com\/questions\/188945\/sound-source-not-in-a-straight-line-with-the-sound-receiver-does-that-make-a-d","text":"# Sound source not in a straight line with the sound receiver - does that make a difference?\n\nHope the graphics will help me explaining my question better. Let's say the box would be a room in the fourth floor, and the sound source would come from cars in the street. It is clear that in case A the sound would enter the room and would be clearly heard from the human ear.\n\nWhat I am interested to inspect is case B. The red line there represents a sound-proof shield. The shield would block the direct sound geometrically as you see. In front of the flat there would be no nearby building that could bounce the sound back to the room from the other direction. So, there is just air. I know sound is not a bullet that travels in straight line, so my question is: Would there be a significant decrease in the sound heard inside the room if a sound-proof shield would be placed as in the picture in case B? Can we calculate a rough percentage?\n\n\u2022 It may be good to realize that sound is a longitudinal air pressure wave. That is to say, sound is air moving back and forth. But just as a fan cannot push air straight forward, so can't sound. \u2013\u00a0MSalters Jun 11 '15 at 14:30\n\nThe situation you are describing is an example of Fresnel diffraction (or near-field diffraction).\n\nIn general, when a wave propagates every point of the wave front can be thought of as its own source of waves traveling in all directions (called Huygens construction). It turns out that neighboring point sources along an infinite straight wave front reinforce the \"forward\" direction only, but if you put an obstacle in the way you can see this diffraction.\n\nThe mathematics needed is simplified when you look at the effect of this diffraction \"far away\" (far compared to the wavelength of the wave). In the case of sound, a frequency of 55 Hz (low end of the range of sounds you hear) has a wavelength of about 6 m, so on the scale of your drawing diffraction would occur.\n\nThis explains why you can hear the thumping bass of a loud car stereo before the car turns the corner, and only make out the song when the car is in sight.\n\nThe calculation of relative sound level into the room as drawn is tricky - it involves an integral that is usually evaluated using a graphical technique called the Cornu spiral, and strongly depends on dimensions and frequency. But as a rule of thumb, \"high frequencies travel straighter\". And \"sound barriers\" do work (somewhat) to reduce nuisance noise (for example the noise of cars speeding along a highway).\n\nIf you want to estimate the attenuation, you will find this link has some helpful equations and graphs.\n\nUPDATE\n\nThere is a problem with the link given: it defines the Fresnel number as\n\n$$N = \\frac{2d}{\\lambda}$$\n\nBut has a confusing definition of $d$. To make things work, you need to set the straight line distance from source to receiver to $D$ (not $d$). If you do that, then\n\n$$d = A + B - D$$\n\nand the Fresnel number is\n\n$$N = 2\\;\\frac{A+B-D}{\\lambda}$$\n\n\u2022 So, according the Fresnel-Barrier graph in the link you provided, we're talking about an attenuation of somewhere between 5 and 25 dB, correct? \u2013\u00a0multigoodverse Jun 11 '15 at 12:32\n\u2022 @ArditS. It depends on the Fresnel number. 
The plot shows the result for values between 0.01 and 100, which is why they show attenuation a from 5 to 25 dB, but you should calculate it (the Fresnel number) for your geometry and the wavelength of the sound of interest (from the $1\/\\lambda$ relationship you can see that longer wavelengths attenuate less). Of course the real situation is further complicated because of the window - but we are ignoring that for now. \u2013\u00a0Floris Jun 11 '15 at 13:07\n\u2022 Got it! In my case, d is 20 meters, and traffic noise frequency is around 3000 Hz, so a wavelength of 0.1 m. The Fresnel number would be: N = 2d\/\u03bb = 2*20 \/ 0.1m = 400 With a Fresnel number of 400, the attenuation would be near 30 dB then. Thanks! \u2013\u00a0multigoodverse Jun 11 '15 at 13:30\n\u2022 Note - $d$ is the difference between the direct line from source to receiver, and the indirect line going around the obstacle. For a reasonable setup, I don't think it is as much as 20 m. Look at the diagram in the link. If you can't figure it out I will update the answer... \u2013\u00a0Floris Jun 11 '15 at 13:35\n\u2022 I just re-read the link and realize that VERY UNHELPFULLY they use the symbol $d$ twice - once for the total distance, and once again in the expression $d = A+B-d$. The $d$ on the left hand side of that equation really ought to be a different symbol than on the right. Call distance between source and receiver $D$, then $d = A+B-D$. And use $d$ to calculate the Fresnel number. \u2013\u00a0Floris Jun 11 '15 at 13:38\n\nYou will benefit by finding some tutorials on wave theory. In brief, assuming a spherical wavefront from the emitter, you are correct there's no direct path to the receiver. However, the edge of yourabsorber there causes diffraction (Huygen's principle), so thatsome of the sound wave (energy) will make its way to the receiver. You can see a demo of this, e.g., at mike-willis tutorial .","date":"2020-08-08 11:11:00","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5910632610321045, \"perplexity\": 448.26655162504085}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 5, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-34\/segments\/1596439737645.2\/warc\/CC-MAIN-20200808110257-20200808140257-00200.warc.gz\"}"} | null | null |
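To make the estimate discussed in the comments reproducible, here is a small Python sketch. It computes the Fresnel number $N = 2(A+B-D)/\lambda$ from the UPDATE above and converts it to a rough barrier attenuation using Maekawa's empirical approximation, roughly $10\log_{10}(3+20N)$ dB, which stands in here for the graph on the linked page; the geometry numbers are illustrative and not taken from the thread.

```python
import math

def fresnel_number(a, b, d_direct, wavelength):
    """N = 2 * (A + B - D) / lambda, with A, B the path lengths around the
    barrier edge and D the direct source-receiver distance (all in meters)."""
    detour = a + b - d_direct
    return 2.0 * detour / wavelength

def barrier_attenuation_db(n):
    """Maekawa's empirical estimate for a thin barrier (assumed here as a
    stand-in for the Fresnel-barrier graph referenced in the answer)."""
    return 10.0 * math.log10(3.0 + 20.0 * n)

# Illustrative geometry: 1 kHz traffic noise, paths of 12 m + 9 m around
# the shield versus a 20 m direct line of sight.
wavelength = 343.0 / 1000.0  # speed of sound (m/s) divided by frequency (Hz)
n = fresnel_number(12.0, 9.0, 20.0, wavelength)
print(f"N = {n:.1f}, attenuation ~ {barrier_attenuation_db(n):.1f} dB")
```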
{"url":"https:\/\/enacademic.com\/dic.nsf\/enwiki\/628650","text":"Phasor (sine waves)\n\n\ufeff\nPhasor (sine waves)\n\nIn physics and engineering, a phase vector (\"phasor\") is a representation of a sine wave whose amplitude (A), phase (\u03b8), and frequency (\u03c9) are time-invariant. It is a subset of a more general concept called analytic representation. Phasors reduce the dependencies on these parameters to three independent factors, thereby simplifying certain kinds of calculations. In particular, the frequency factor, which also includes the time-dependence of the sine wave, is often common to all the components of a linear combination of sine waves. Using phasors, it can be factored out, leaving just the static amplitude and phase information to be combined algebraically (rather than trigonometrically). Similarly, linear differential equations can be reduced to algebraic ones. The term \"phasor\" therefore often refers to just those two factors. In older texts, a phasor is also referred to as a sinor.\n\nDefinition\n\nEuler's formula indicates that sine waves can be represented mathematically as the sum of two complex-valued functions:\n\n:$Acdot cos\\left(omega t + heta\\right) = A\/2cdot e^\\left\\{i\\left(omega t + heta\\right)\\right\\} + A\/2cdot e^\\left\\{-i\\left(omega t + heta\\right)\\right\\},$ [\n\n*i is the Imaginary unit ($i^2 = -1$).\n\n*In electrical engineering texts, the imaginary unit is often symbolized by j.\n*The frequency of the wave, in Hz, is given by $omega\/2pi$.\n]\n\nor as the real part of one of the functions:\n\n:\n\nAs indicated above, \"phasor\" can refer to either $A e^\\left\\{i heta\\right\\} e^\\left\\{iomega t\\right\\},$ or just the complex constant, $A e^\\left\\{i heta\\right\\},$ . In the latter case, it is understood to be a shorthand notation, encoding the amplitude and phase of an underlying sinusoid.\n\nAn even more compact shorthand is angle notation: $A angle heta.,$\n\nPhasor arithmetic\n\nMultiplication by a constant (scalar)\n\nMultiplication of the phasor $A e^\\left\\{i heta\\right\\} e^\\left\\{iomega t\\right\\},$ by a complex constant, $B e^\\left\\{iphi\\right\\},$ produces another phasor. That means its only effect is to change the amplitude and phase of the underlying sinusoid:\n\n:\n\nIn electronics, $B e^\\left\\{iphi\\right\\},$ would represent an impedance, which is independent of time. Multiplying a phasor current by an impedance produces a phasor voltage. Note that if $B e^\\left\\{iphi\\right\\},$ represents another phasor (in shorthand notation), the notation hides the time dependence. But the product of two phasors (or squaring a phasor) would represent the product of two sine waves, which is a non-linear operation and does not produce another phasor. [Multiplying two sinusoids produces an expression with other frequencies, which cannot be represented as a phasor.]\n\nDifferentiation and integration\n\nThe time derivative or integral of a phasor produces another phasor [\n\nThis results from: $frac\\left\\{d\\right\\}\\left\\{dt\\right\\}\\left(e^\\left\\{i omega t\\right\\}\\right) = i omega e^\\left\\{i omega t\\right\\}$\n\nwhich means that the complex exponential is the eigenfunction of the derivative operation.] . 
For example:\n\n:\n\nTherefore, in phasor representation, the time derivative of a sinusoid becomes just multiplication by the constant, $i omega = \\left(e^\\left\\{ipi\/2\\right\\} cdot omega\\right).,$ Similarly, integrating a phasor corresponds to multiplication by $frac\\left\\{1\\right\\}\\left\\{iomega\\right\\} = frac\\left\\{e^\\left\\{-ipi\/2\\left\\{omega\\right\\}.,$ The time-dependent factor, $e^\\left\\{iomega t\\right\\},$, is unaffected. When we solve a linear differential equation with phasor arithmetic, we are merely factoring $e^\\left\\{iomega t\\right\\},$ out of all terms of the equation, and reinserting it into the answer. For example, consider the following differential equation for the voltage across the capacitor in an RC circuit:\n\n:$frac\\left\\{d v_C\\left(t\\right)\\right\\}\\left\\{dt\\right\\} + frac\\left\\{1\\right\\}\\left\\{RC\\right\\}v_C\\left(t\\right) = frac\\left\\{1\\right\\}\\left\\{RC\\right\\}v_S\\left(t\\right)$\n\nWhen the voltage source in this circuit is sinusoidal:\n\n:$v_S\\left(t\\right) = V_Pcdot cos\\left(omega t + heta\\right),,$\n\nwe may substitute:\n\n::$v_C\\left(t\\right) = operatorname\\left\\{Re\\right\\} \\left\\{V_c cdot e^\\left\\{iomega t\\right\\}\\right\\},$\n\nwhere phasor $V_s = V_P e^\\left\\{i heta\\right\\},,$ and phasor $V_c,$ is the unknown quantity to be determined.\n\nIn the phasor shorthand notation, the differential equation reduces to [Proof:\n\nNumBlk|:\n$frac\\left\\{d operatorname\\left\\{Re\\right\\} \\left\\{V_c cdot e^\\left\\{iomega t\\right\\}$\n{dt} + frac{1}{RC}operatorname{Re} {V_c cdot e^{iomega t}} = frac{1}{RC}operatorname{Re} {V_s cdot e^{iomega t}}\nEquationRef|Eq.1\n\nSince this must hold for all $t,$, specifically: $t-frac\\left\\{pi\\right\\}\\left\\{2omega \\right\\},,$ it follows that:\n\nNumBlk|:\n$frac\\left\\{d operatorname\\left\\{Im\\right\\} \\left\\{V_c cdot e^\\left\\{iomega t\\right\\}$\n{dt} + frac{1}{RC}operatorname{Im} {V_c cdot e^{iomega t}} = frac{1}{RC}operatorname{Im} {V_s cdot e^{iomega t}}\nEquationRef|Eq.2\n\nIt is also readily seen that:\n\n:$frac\\left\\{d operatorname\\left\\{Re\\right\\} \\left\\{V_c cdot e^\\left\\{iomega t\\right\\}\\left\\{dt\\right\\} = operatorname\\left\\{Re\\right\\} left\\left\\{ frac\\left\\{dleft\\left( V_c cdot e^\\left\\{iomega t\\right\\} ight\\right)\\right\\}\\left\\{dt\\right\\} ight\\right\\}= operatorname\\left\\{Re\\right\\} left\\left\\{ iomega V_c cdot e^\\left\\{iomega t\\right\\} ight\\right\\}$\n\n:$frac\\left\\{d operatorname\\left\\{Im\\right\\} \\left\\{V_c cdot e^\\left\\{iomega t\\right\\}\\left\\{dt\\right\\} = operatorname\\left\\{Im\\right\\} left\\left\\{ frac\\left\\{dleft\\left( V_c cdot e^\\left\\{iomega t\\right\\} ight\\right)\\right\\}\\left\\{dt\\right\\} ight\\right\\}= operatorname\\left\\{Im\\right\\} left\\left\\{ iomega V_c cdot e^\\left\\{iomega t\\right\\} ight\\right\\}$\n\nSubstituting these into EquationNote|Eq.1 and EquationNote|Eq.2, multiplying EquationNote|Eq.2 by $i,,$ and adding both equations gives:\n\n:$iomega V_c cdot e^\\left\\{iomega t\\right\\} + frac\\left\\{1\\right\\}\\left\\{RC\\right\\}V_c cdot e^\\left\\{iomega t\\right\\} = frac\\left\\{1\\right\\}\\left\\{RC\\right\\}V_s cdot e^\\left\\{iomega t\\right\\}$\n\n:$left\\left(iomega V_c + frac\\left\\{1\\right\\}\\left\\{RC\\right\\}V_c = frac\\left\\{1\\right\\}\\left\\{RC\\right\\}V_s ight\\right) cdot e^\\left\\{iomega t\\right\\}$\n\n:$iomega V_c + frac\\left\\{1\\right\\}\\left\\{RC\\right\\}V_c = 
frac\\left\\{1\\right\\}\\left\\{RC\\right\\}V_s quadquad\\left(QED\\right)$] :\n\n:$i omega V_c + frac\\left\\{1\\right\\}\\left\\{RC\\right\\} V_c = frac\\left\\{1\\right\\}\\left\\{RC\\right\\}V_s$\n\nSolving for the phasor capacitor voltage gives:\n\n:$V_c = frac\\left\\{1\\right\\}\\left\\{1 + i omega RC\\right\\} cdot \\left(V_s\\right) = frac\\left\\{1-iomega R C\\right\\}\\left\\{1+\\left(omega R C\\right)^2\\right\\} cdot \\left(V_P e^\\left\\{i heta\\right\\}\\right),$\n\nAs we have see, the complex constant factor represents differences of the amplitude and phase of $v_C\\left(t\\right),$ relative to $V_P,$ and $heta.,$\n\nIn polar coordinate form, the factor is:\n\n:$frac\\left\\{1\\right\\}\\left\\{sqrt\\left\\{1 + \\left(omega RC\\right)^2cdot e^\\left\\{-i phi\\left(omega\\right)\\right\\},,$ where $phi\\left(omega\\right) = arctan\\left(omega RC\\right).,$\n\nTherefore:\n\n:$v_C\\left(t\\right) = frac\\left\\{1\\right\\}\\left\\{sqrt\\left\\{1 + \\left(omega RC\\right)^2cdot V_P cos\\left(omega t + heta- phi\\left(omega\\right)\\right)$\n\nThe sum of multiple phasors produces another phasor. That is because the sum of sine waves of one frequency is also a sine wave:\n\n:\n\nwhere:\n\n:$A_3^2 = \\left(A_1 cos\\left( heta_1\\right)+A_2 cos\\left( heta_2\\right)\\right)^2 + \\left(A_1 sin\\left( heta_1\\right)+A_2 sin\\left( heta_2\\right)\\right)^2$:$an\\left( heta_3\\right) = frac\\left\\{A_1 sin\\left( heta_1\\right)+A_2 sin\\left( heta_2\\right)\\right\\}\\left\\{A_1 cos\\left( heta_1\\right)+A_2 cos\\left( heta_2\\right)\\right\\}.$\n\nIn physics, this sort of addition occurs when sine waves \"interfere\" with each other, constructively or destructively. Another way to view the calculations above is that two vectors with coordinates $\\left[A_1 cos\\left( heta_1\\right), A_1 sin\\left( heta_1\\right)\\right] ,$ and $\\left[A_2 cos\\left( heta_2\\right), A_2 sin\\left( heta_2\\right)\\right] ,$ are added (see vector addition) to produce a resultant vector with coordinates $\\left[A_3 cos\\left( heta_3\\right), A_3 sin\\left( heta_3\\right)\\right] .,$\n\nThe vector concept provides useful insight into questions like this: \"What phase difference would be required between three identical waves for perfect cancellation?\" In this case, simply imagine taking three vectors of equal length and placing them head to tail such that the last head matches up with the first tail. Clearly, the shape which satisfies these conditions is an equilateral triangle, and the angle between each phasor to the next is 120\u00b0 (2\u03c0\/3 radians), or one third of a wavelength $lambda\/3.$ So the phase difference between each wave must also be 120\u00b0.\n\nIn other words, what this shows is:\n\n:$cos\\left(omega t\\right) + cos\\left(omega t + 2pi\/3\\right) + cos\\left(omega t +4pi\/3\\right) = 0.,$\n\nIn the example of three waves, the phase difference between the first and the last wave was 240 degrees, while for two waves destructive interference happens at 180 degrees. In the limit of many waves, the phasors must form a circle for destructive interference, so that the first phasor is nearly parallel with the last. This means that for many sources, destructive interference happens when the first and last wave differ by 360 degrees, a full wavelength $lambda$. 
This is why in single slit diffraction, the minima occurs when light from the far edge travels a full wavelength further than the light from the near edge.----\n\nPhasor diagrams\n\nElectrical engineers, electronics engineers, and electronic engineering technicians use phasor diagrams to visualize complex constants and variables (phasors). Like vectors, arrows drawn on graph paper or computer displays represent phasors. Cartesian and polar representations each have advantages.----\n\nCircuit laws\n\nWith phasors, the techniques for solving DC circuits can be applied to solve AC circuits. A list of the basic laws is given below.\n\n* Ohm's law for resistors: a resistor has no time delays and therefore doesn't change the phase of a signal therefore \"V\"=\"IR\" remains valid.\n* Ohm's law for resistors, inductors, and capacitors: \"V\"=\"IZ\" where \"Z\" is the complex impedance.\n* In an AC circuit we have real power (\"P\") which is a representation of the average power into the circuit and reactive power (\"Q\") which indicates power flowing back and forward. We can also define the complex power \"S\"=\"P\"+\"iQ\" and the apparent power which is the magnitude of \"S\". The power law for an AC circuit expressed in phasors is then \"S\"=\"VI\"* (where \"I\"* is the complex conjugate of \"I\").\n* Kirchhoff's circuit laws work with phasors in complex form\n\nGiven this we can apply the techniques of analysis of resistive circuits with phasors to analyze single frequency AC circuits containing resistors, capacitors, and inductors. Multiple frequency linear AC circuits and AC circuits with different waveforms can be analyzed to find voltages and currents by transforming all waveforms to sine wave components with magnitude and phase then analyzing each frequency separately.\n\nPower engineering\n\nIn analysis of three phase AC power systems, usually a set of phasors is defined as the three complex cube roots of unity, graphically represented as unit magnitudes at angles of 0, 120 and 240 degrees. By treating polyphase AC circuit quantities as phasors, balanced circuits can be simplified and unbalanced circuits can be treated as an algebraic combination of symmetrical circuits. This approach greatly simplifies the work required in electrical calculations of voltage drop, power flow, and short-circuit currents. In the context of power systems analysis, the phase angle is often given in degrees, and the magnitude in rms value rather than the peak amplitude of the sinusoid.\n\nThe technique of synchrophasors uses digital instruments to measure the phasors representing transmission system voltages at widespread points in a transmission network. Small changes in the phasors are sensitive indicators of power flow and system stability.\n\nFootnotes\n\nReferences\n\n*\n\n* [http:\/\/www.jhu.edu\/~signals\/phasorapplet2\/phasorappletindex.htm Phasor Phactory]\n* [http:\/\/resonanceswavesandfields.blogspot.com\/2007\/08\/phasors.html Visual Representation of Phasors]\n* [http:\/\/www.allaboutcircuits.com\/vol_2\/chpt_2\/5.html Polar and Rectangular Notation]\n\nWikimedia Foundation. 2010.\n\nLook at other dictionaries:\n\n\u2022 Phase angle \u2014 In the context of vectors and phasors, the term phase angle refers to the angular component of the polar coordinate representation. The notation Aang ! 
heta, for a vector with magnitude (or amplitude ) A and phase angle \u03b8, is called angle\u2026 \u2026 \u00a0 Wikipedia\n\n\u2022 Real part \u2014 In mathematics, the real part of a complex number z, is the first element of the ordered pair of real numbers representing z, i.e. if z = (x, y) , or equivalently, z = x+iy, then the real part of z is x. It is denoted by Re{ z } or Re{ z }, where \u2026 \u00a0 Wikipedia\n\n\u2022 List of trigonometric identities \u2014 Cosines and sines around the unit circle \u2026 \u00a0 Wikipedia\n\n\u2022 Alternating current \u2014 (green curve). The horizontal axis measures time; the vertical, current or voltage. In alternating current (AC, also ac) the movement of electric charge periodically reverses direction. In direct current (DC, also dc), the flow of electric charge \u2026 \u00a0 Wikipedia\n\n\u2022 Fourier series \u2014 Fourier transforms Continuous Fourier transform Fourier series Discrete Fourier transform Discrete time Fourier transform Related transforms \u2026 \u00a0 Wikipedia\n\n\u2022 List of mathematics articles (P) \u2014 NOTOC P P = NP problem P adic analysis P adic number P adic order P compact group P group P\u00b2 irreducible P Laplacian P matrix P rep P value P vector P y method Pacific Journal of Mathematics Package merge algorithm Packed storage matrix Packing\u2026 \u2026 \u00a0 Wikipedia","date":"2019-12-13 11:36:34","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 46, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8765119910240173, \"perplexity\": 981.9436515739021}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-51\/segments\/1575540553486.23\/warc\/CC-MAIN-20191213094833-20191213122833-00529.warc.gz\"}"} | null | null |
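As a quick numerical sanity check of the RC example above (not part of the original article; the component values are made up), the following Python sketch verifies that the phasor solution $V_c = V_s/(1+i\omega RC)$ reproduces the time-domain formula $v_C(t) = \frac{V_P}{\sqrt{1+(\omega RC)^2}}\cos(\omega t + \theta - \phi(\omega))$:

```python
import cmath
import math

# Illustrative circuit and source parameters
R, C = 1e3, 1e-6            # 1 kOhm, 1 uF
omega = 2 * math.pi * 500   # 500 Hz drive
V_P, theta = 5.0, 0.3       # source amplitude and phase

# Phasor solution: V_c = V_s / (1 + i*omega*R*C)
V_s = V_P * cmath.exp(1j * theta)
V_c = V_s / (1 + 1j * omega * R * C)

# Time-domain formula from the article
phi = math.atan(omega * R * C)
amp = V_P / math.sqrt(1 + (omega * R * C) ** 2)

for t in (0.0, 1e-4, 3e-4):
    from_phasor = (V_c * cmath.exp(1j * omega * t)).real
    from_formula = amp * math.cos(omega * t + theta - phi)
    assert abs(from_phasor - from_formula) < 1e-9  # should agree to rounding
print("phasor and time-domain solutions agree")
```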
The Evangelical Lutheran Church of St. Matthew is the oldest Lutheran congregation in North America. The congregation is a member of the Lutheran Church–Missouri Synod. Since 2006, the congregation has been located at the Cornerstone Center, 178 Bennett Avenue in Manhattan, New York City. The congregation has been known by different names, only acquiring the name St. Matthew in 1822 and using it exclusively since 1838.
History
The congregation was founded in 1643 by Dutch Lutherans in New Amsterdam but the church was not chartered until December 6, 1664, when the new governor, Richard Nicolls, issued a charter after the British had taken control of the colony in April 1664.
The first church building was constructed in 1671 on the present Broadway site of Trinity Episcopal Church, outside the walls of the city. This building was destroyed in 1673, and the congregation constructed a new church to the south of Rector Street and Broadway. This structure was later described as a "cattle shed" and replaced with a new stone edifice known as Trinity Church, dedicated on June 29, 1729.
German-speaking members seceded from the congregation in 1750 and purchased a brewery on Cliff Street, which became Christ Church Lutheran. That year, the Rev. Henry Melchior Muhlenberg began to serve the church, starting a parish school in 1752. Beginning in 1770, Rev. Bernard Michael Houseal served the church for 14 years. His home and Trinity Church were destroyed during the New York Fire of 1776. The church records escaped the fire, and the congregation thereafter worshiped in the Cedar Street Scotch Presbyterian Church. Houseal, a Loyalist, escaped New York in 1784. That same year, Christ Church Lutheran united with Trinity Lutheran Church as the United German Lutheran Churches in New York City. After the merger, services were held in the former Christ Church building at Frankfort and William streets, which had been built in 1767 and was known as The Old Swamp Church.
The congregation was one of the founders of the New York Ministerium in 1786.
An English-language Lutheran church was founded and built in 1822 on Walker Street, at the east end of Broadway, and named Saint Matthew's Church. Always in debt, it was sold in 1826 for $22,750 after the United German Lutheran Churches declined to help the church. Shortly thereafter, the building was resold at the same price to the United German Lutheran Churches, and the result was referred to as "Christ and Old Trinity". The congregation maintained both buildings, with Christ Church conducting services in German and St. Matthew's Church in English. The Christ Church building was sold in 1831 and the congregation met in St. Matthew's until 1838, when the congregation assumed the name St. Matthew's with predominantly German services. Starting in May 1840, English services were no longer held due to German immigration and the huge turnout for the German services.
Other congregations which branched from St. Matthew's around this time include:
St. Paul's German Evangelical Lutheran Church, now located on West 22nd Street, was founded in 1841 by St. Matthew's former pastor.
St Mark's Evangelical Lutheran Church was founded in 1847 as a branch church of St. Matthew's. It was subsidized by Trinity, the Old Church.
In 1852, the United Lutheran Churches of New York celebrated their 100th Anniversary.
In 1868, St. Matthew's sold its Walker Street church and, as the German Evangelical Lutheran Church of Saint Matthew in the City of New York, acquired the former First Baptist Church at Broome and Elizabeth streets. In 1881, the Concordia Collegiate Institute of the Missouri Synod (subsequently known as Concordia College) was founded at the Broome Street church. St. Matthew's subsidized the institute until its 1893 move to Hawthorne, New York.
In 1885, St. Matthew's left the New York Ministerium to join the more conservative Evangelical Lutheran Synod of Missouri, Ohio, and Other States, now known as the Lutheran Church–Missouri Synod (LCMS), to which it still belongs.
In 1903, St. Matthew's built a brick and stone church and a three-story residence for $25,000 at 300 West 9th Avenue at 44th Street to designs by architect John Boese of 280 Broadway.
In 1906, St. Matthew's erected a mission chapel at 145th Street and Convent Avenue. The Broome Street church closed in 1913, and the congregation moved to the mission chapel. Nearby, the church had built a four-story brick and stone parish house in 1908 at a cost of $50,000, to designs by architect John Boese.
St. Matthew's merged in 1945 with Messiah Mission Church in the Inwood, Manhattan, neighborhood and moved into that church building. In 1956, construction of a new church building was begun at 202 Sherman Avenue in Inwood, and completed in 1957. This building was sold in 2006, and demolished along with its elementary school, being replaced by an apartment building. The congregation moved south to the Cornerstone Center on 178 Bennett Avenue.
Religious organizations established in 1643
Demolished churches in New York City
Demolished buildings and structures in Manhattan
Dutch-American culture in New York City
German-American culture in New York City
Churches completed in 1671
Churches completed in 1729
Churches completed in 1906
Churches completed in 1957
Lutheran churches in New York City
Former Lutheran churches in the United States
Churches in Manhattan
1643 establishments in the Dutch Empire
1643 establishments in North America
Inwood, Manhattan
Establishments in New Netherland
Lutheran Church–Missouri Synod churches
17th-century Lutheran churches in the United States
"redpajama_set_name": "RedPajamaWikipedia"
} | 303 |
\section{Introduction}\label{sec:1}
In the present paper, we consider the following model problem
\emph{(distributed velocity tracking problem)}.
Let $\Omega\subseteq\mathbb{R}^d$ be a bounded domain with $d\in\{2,3\}$.
Find a velocity field $u\in [H^1(\Omega)]^d$, a
pressure distribution
$p\in L^2(\Omega)$ and a control (force field) $f\in [L^2(\Omega)]^d$ such that
the tracking functional
\begin{equation}\nonumber
J(u,f) = \frac12 \|u-u_D\|_{L^2(\Omega)}^2 +
\frac{\alpha}{2} \|f\|_{L^2(\Omega)}^2
\end{equation}
is minimized subject to the Stokes equations
\begin{equation}\nonumber
\begin{aligned}
-\Delta u + \nabla p &= f \mbox{ in } \Omega, \\
\nabla \cdot u &= 0 \mbox{ in } \Omega, \\
u &= 0
\mbox{ on } \partial \Omega. \\
\end{aligned}
\end{equation}
The cost parameter or regularization
parameter~$\alpha>0$ and the desired state (desired velocity field) $u_D\in [L^2(\Omega)]^d$ are assumed
to be given. To enforce uniqueness of the solution, we additionally require
$\int_{\Omega} p \mbox{ d}x=0$.
Here and in what follows, $L^2(\Omega)$ and~$H^1(\Omega)$ denote the
standard Lebesgue and Sobolev spaces with associated standard
norms~$\|\cdot\|_{L^2(\Omega)}$ and~$\|\cdot\|_{H^1(\Omega)}$,
respectively.
The main goal of this work is to construct and to analyze numerical
methods that produce an approximate solution
to the optimization problem, where the computational complexity can be
bounded by the number of unknowns times a
constant which is independent of the grid level
and the choice of the parameter~$\alpha$, in particular for
small values of~$\alpha$.
The solution of the optimization problem is characterized by the
Karush-Kuhn-Tucker system (KKT-system).
As we are interested in good approximations of the solution, the
discretization of the KKT-system leads to a large-scale linear system. This linear
system will be solved with multigrid methods, which are among the fastest known
methods for such problems.
Originally, multigrid methods have been designed and analyzed for
elliptic problems.
They also work well for saddle point problems (like KKT-systems)
and have gained growing interest in this area, see,
e.g.,~\cite{Borzi:Schulz:2009} and the references cited there.
The unknowns of the discretized KKT-system for a PDE-constrained
optimization problem can be partitioned into primal variables and
dual variables. In our case, the primal variables
consist of state variables (the velocity field~$u$ and the pressure distribution~$p$) and control
variables (the force field~$f$). The dual variables are the Lagrange multipliers which
are introduced to incorporate the constraints.
One approach to solving such problems is to apply multigrid methods, in
every step of an overall block-structured iterative method, to the equations
in just one of these blocks of variables.
Such methods have been proposed, e.g., in~\cite{Zulehner:2010,Kollmann:Zulehner:2012,Pearson:2012}.
Another approach, which we will follow here, is to apply the multigrid
idea directly to the (reduced or not reduced) KKT-system,
which is called an all-at-once approach. Such methods have been
proposed and discussed for the elliptic
optimal control problem, e.g., in
\cite{Schoeberl:Simon:Zulehner:2010,Takacs:Zulehner:2012}.
In this paper we present a convergence proof for multigrid methods
based on the classical splitting of the analysis
into smoothing property and approximation property, see~\cite{Hackbusch:1985}.
This paper is organized as follows. In Section~\ref{sec:2} we will
introduce the optimality system and discuss its discretization.
In Section~\ref{sec:3} we will introduce an all-at-once multigrid approach.
Its convergence will be proven in Section~\ref{sec:3a}.
Numerical results which illustrate the convergence result will be presented
in Section~\ref{sec:4}. In Section~\ref{sec:5} we will close with conclusions.
\section{Optimality system and discretization}\label{sec:2}
For setting up the optimality system we need the
space $H^1_0(\Omega)$, the space of functions in $H^1(\Omega)$ vanishing on the boundary.
Moreover, we need the space $L^2_0(\Omega)$, which is the space of functions in $L^2(\Omega)$ with mean value $0$, i.e.,
\begin{equation*}
L^2_0(\Omega) := \left\{ v \in L^2(\Omega) \;:\; \int_{\Omega} v \;\mbox{d}\xi = 0\right\}.
\end{equation*}
Both spaces are equipped with standard norms, i.e., $\|\cdot\|_{H^1_0(\Omega)}:=\|\cdot\|_{H^1(\Omega)}$ and
$\|\cdot\|_{L^2_0(\Omega)}:=\|\cdot\|_{L^2(\Omega)}$.
The solution of the problem is characterized by the Karush-Kuhn-Tucker
system (KKT-system), which reads as follows, cf.~\cite{Zulehner:2010} and others.
Find
$(u,p,f,\lambda,\mu)\in [H^1_0(\Omega)]^d\times
L^2_0(\Omega)\times [L^2(\Omega)]^d \times [H^1_0(\Omega)]^d\times
L^2_0(\Omega) $ such that
\begin{align*}
(u,\tilde{u})_{L^2(\Omega)}
+ (\nabla \lambda, \nabla \tilde{u})_{L^2(\Omega)}
+ (\mu,\nabla\cdot \tilde{u})_{L^2(\Omega)}
&= (u_D,\tilde{u})_{L^2(\Omega)}\\
(\nabla \cdot \lambda,\tilde{p})_{L^2(\Omega)}
&= 0\\
\alpha (f,\tilde{f})_{L^2(\Omega)}
-(\lambda,\tilde{f})_{L^2(\Omega)}
&=0\\
(\nabla u,\nabla\tilde{\lambda})_{L^2(\Omega)}
+(p,\nabla\cdot\tilde{\lambda})_{L^2(\Omega)}
-(f,\tilde{\lambda})_{L^2(\Omega)}
&=0\\
(\nabla\cdot u,\tilde{\mu})_{L^2(\Omega)}
&= 0
\end{align*}
holds for all $(\tilde{u},\tilde{p},\tilde{f},\tilde{\lambda},\tilde{\mu})\in
[H^1_0(\Omega)]^d\times L^2_0(\Omega)\times [L^2(\Omega)]^d \times
[H^1_0(\Omega)]^d\times L^2_0(\Omega)$.
The third line of the KKT-system directly implies $f = \alpha^{-1}
\lambda$.
This allows us to eliminate the control $f$, which
leads to the reduced KKT-system, which reads as follows. Find
$x:=(u,p,\lambda,\mu)\in X:=[H^1_0(\Omega)]^d\times L^2_0(\Omega) \times
[H^1_0(\Omega)]^d\times L^2_0(\Omega) $ such that
\begin{equation}\nonumber
\begin{aligned}
(u,\tilde{u})_{L^2(\Omega)}
+ (\nabla \lambda,\nabla \tilde{u})_{L^2(\Omega)}
+ (\mu,\nabla \cdot \tilde{u})_{L^2(\Omega)}
&= (u_D,\tilde{u})_{L^2(\Omega)}\\
(\nabla \cdot \lambda,\tilde{p})_{L^2(\Omega)}
&= 0\\
(\nabla u,\nabla \tilde{\lambda})_{L^2(\Omega)}
+ (p,\nabla \cdot \tilde{\lambda})_{L^2(\Omega)}
- \alpha^{-1} (\lambda,\tilde{\lambda})_{L^2(\Omega)}
&=0\\
(\nabla\cdot u,\tilde{\mu})_{L^2(\Omega)}
&= 0\\
\end{aligned}
\end{equation}
holds for all $(\tilde{u},\tilde{p},\tilde{\lambda},\tilde{\mu})\in X$.
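Compared to the KKT-system above, the only structural change is the new term
$-\alpha^{-1}(\lambda,\tilde{\lambda})_{L^2(\Omega)}$. Explicitly, the
elimination step reads
\begin{equation*}
  \alpha (f,\tilde{f})_{L^2(\Omega)} - (\lambda,\tilde{f})_{L^2(\Omega)}
  = (\alpha f - \lambda,\tilde{f})_{L^2(\Omega)} = 0
  \quad \mbox{for all } \tilde{f}\in [L^2(\Omega)]^d
  \qquad\Longrightarrow\qquad f = \alpha^{-1}\lambda,
\end{equation*}
and inserting $f = \alpha^{-1}\lambda$ into the term
$-(f,\tilde{\lambda})_{L^2(\Omega)}$ yields
$-\alpha^{-1}(\lambda,\tilde{\lambda})_{L^2(\Omega)}$.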
Of course, the KKT-system can be rewritten as one variational equation
as follows. Find $x \in X$ such that
\begin{equation}\label{eq:gen:bil}
\mathcal{B}(x,\tilde{x}) = \mathcal{F}(\tilde{x}) \qquad \mbox{for all }\tilde{x} \in X,
\end{equation}
where
\begin{align*}\nonumber
& \mathcal{B}((u,p,\lambda,\mu),(\tilde{u},\tilde{p},\tilde{\lambda},\tilde{\mu})) :=
(u,\tilde{u})_{L^2(\Omega)}
+(\nabla \lambda,\nabla \tilde{u})_{L^2(\Omega)}
+(\mu,\nabla\cdot\tilde{u})_{L^2(\Omega)}
\\&\qquad
+(\nabla \cdot\lambda,\tilde{p})_{L^2(\Omega)}
+(\nabla u,\nabla \tilde{\lambda})_{L^2(\Omega)}
+(p,\nabla\cdot\tilde{\lambda})_{L^2(\Omega)}
- \alpha^{-1}(\lambda,\tilde{\lambda})_{L^2(\Omega)}
\\&\qquad
+ (\nabla\cdot u,\tilde{\mu})_{L^2(\Omega)}\mbox{ and}
\\
& \mathcal{F}(\tilde{u},\tilde{p},\tilde{\lambda},\tilde{\mu}) := (u_D,\tilde{u})_{L^2(\Omega)}.
\end{align*}
We are interested in finding an approximate solution of equation~\eqref{eq:gen:bil}. Both the
proposed solution strategy and the convergence analysis
follow the abstract framework introduced in~\cite{Takacs:Zulehner:2012}. The
conditions, \citecond{(A1)}, \citecond{(A1a)}, \citecond{(A3)} and
\citecond{(A4)}, mentioned in the present paper are the same conditions
as in~\cite{Takacs:Zulehner:2012}.
For simplicity, we introduce the following notation.
\begin{notation}
Throughout this paper,~$C>0$ is a generic constant, independent of
the grid level~$k$ and the choice of the parameter~$\alpha$.
For any scalars~$a$ and~$b$, we write~$a \lesssim b$ (or $b \gtrsim a$) if there is a
constant~$C>0$ such that~$a < C\,b$. We write~$a\eqsim b$ if~$a\lesssim b\lesssim a$.
\end{notation}
The following property guarantees existence and uniqueness of the solution.
\begin{description}
\item[\citecond{(A1)}] The relation
\begin{equation*}
\|x\|_{X} \lesssim \nnsup{\tilde{x}}{X}
\frac{\mathcal{B}(x,\tilde{x})}{\|\tilde{x}\|_{X}}
\lesssim \|x\|_{X}
\end{equation*}
holds for all $x\in X$.
\end{description}
In~\cite{Zulehner:2010}
it was shown that condition~\citecond{(A1)} is satisfied
for $X:=Y\times Y$, where $Y:=U\times P$, $U:=[H^1_0(\Omega)]^d$, $P:=L^2_0(\Omega)$,
equipped with norms
\begin{equation*}
\|x\|_X^2 := \|(u,p,\lambda,\mu)\|_X^2 := \|(u,p)\|_Y^2 +
\alpha^{-1} \|(\lambda,\mu)\|_Y^2,
\end{equation*}
where
\begin{align*}
\|(u,p)\|_Y^2 &:= \|u\|_U^2 + \|p\|_{P}^2,\\
\|u\|_U^2 &:= \|u\|_{L^2(\Omega)}^2+\alpha^{1/2} \|u\|_{H^1(\Omega)}^2\mbox{ and}\\
\|p\|_P^2 &:= \nnsup{w}{[H^1_0(\Omega)]^d} \frac{\alpha (p,\nabla \cdot
w)_{L^2(\Omega)}^2}{\|w\|_{L^2(\Omega)}^2 + \alpha^{1/2}
\|w\|_{H^1(\Omega)}^2}.
\end{align*}
Using the following notation, we can express the norms more compactly.
\begin{notation}
For any Hilbert space~$A$, the symbol $A^*$ denotes its dual space equipped
with the dual norm
\begin{equation*}
\|u\|_{A^*} := \nnsup{w}{A}\frac{\langle u,w \rangle}{\|w\|_A},
\end{equation*}
where $\langle u,w\rangle := u(w)$ denotes the duality pairing.
For any Hilbert space~$A$ and any scalar $a>0$, the symbol~$a\, A$
denotes the space on the underlying set of the Hilbert space~$A$ equipped with the norm
\begin{equation*}
\|u\|_{a\,A}^2 := a \|u\|_A^2.
\end{equation*}
For any two Hilbert spaces~$A$ and~$B$, the symbol
$A\cap B$ denotes the space on the intersection of the underlying sets,
$\{u \;:\; u\in A \mbox{ and } u\in B\}$, equipped with the norm
\begin{equation*}
\|u\|_{A\cap B}^2 := \|u\|_A^2 + \|u\|_B^2,
\end{equation*}
and the symbol~$A+B$ denotes the space on the algebraic sum of the
underlying sets, $\{u_1+u_2\;:\; u_1\in A, u_2\in B\}$, equipped with the norm
\begin{equation*}
\|u\|_{A+B}^2 := \inf_{u_1\in A, u_2\in B, u=u_1+u_2}
\|u_1\|_A^2 + \|u_2\|_B^2.
\end{equation*}
\end{notation}
The spaces $A^*$, $a\, A$, $A\cap B$ and $A+B$ are Hilbert spaces. The fact that $A^*$
is a Hilbert space follows directly from the Riesz representation theorem, see, e.g.,
Theorem~1.2 in~\cite{Adams:Fournier}. The fact that $a\, A$ is a Hilbert space
is obvious, and for the latter two see, e.g., Lemma~2.3.1 in~\cite{Bergh:Loefstroem:1976}.
We immediately see that the norm on $U$ can be rewritten as follows:
\begin{equation*}
\|u\|_U = \|u\|_{L^2(\Omega)\cap \alpha^{1/2} H^1(\Omega)}.
\end{equation*}
To reformulate the norm $\|\cdot\|_P$, we need a regularity assumption.
\begin{description}
\item[\citecond{(R)}] \emph{Regularity of the generalized Stokes problem.}
Let $f\in [L^2(\Omega)]^d$ and $g\in H^1_0(\Omega)\cap L^2_0(\Omega)$ be arbitrary
but fixed and let
$(u,p)\in [H^1_0(\Omega)]^d\times L^2_0(\Omega)$ be the solution
of the Stokes problem, i.e., such that
\begin{equation*}
\begin{array}{lclclcl}
(\nabla u,\nabla \tilde{u})_{L^2(\Omega)}
&+&(p,\nabla\cdot\tilde{u})_{L^2(\Omega)} &=&(f,\tilde{u})_{L^2(\Omega)}\\
(\nabla\cdot u,\tilde{p})_{L^2(\Omega)} &&& =& (g,\tilde{p})_{L^2(\Omega)} \\
\end{array}
\end{equation*}
holds for all $(\tilde{u},\tilde{p}) \in [H^1_0(\Omega)]^d\times L^2_0(\Omega)$.
Then $(u,p)\in [H^2(\Omega)]^d\times H^1(\Omega)$ and
\begin{equation*}
\|u\|_{H^2(\Omega)}^2 + \|p\|_{H^1(\Omega)}^2 \lesssim \|f\|_{L^2(\Omega)}^2+\|g\|_{H^1(\Omega)}^2.
\end{equation*}
\end{description}
This condition is satisfied for convex polygonal domains, see Lemma~2.1 in~\cite{Takacs:2013} which
is a direct consequence of Theorem~2 in~\cite{Kellogg:Osborn:1976}.
\begin{lemma}\label{lem:pe}
If~\citecond{(R)} is satisfied, then
\begin{equation*}
\|p\|_P \eqsim \|p\|_{\alpha H^1(\Omega)+\alpha^{1/2} L^2(\Omega)}
\end{equation*}
holds for all $p\in L^2_0(\Omega)$.
\end{lemma}
This lemma was shown in Theorem~3.2 in~\cite{Olshanskii:Peters:Reusken:2005}
under a regularity assumption, which is weaker than regularity
assumption~\citecond{(R)}.
The discretization of problem~\eqref{eq:gen:bil} is done using
standard finite element techniques.
We assume that we are given a sequence of grids obtained
by uniform refinement. On each
grid level~$k$, we discretize the problem using the Galerkin approach,
i.e., we have finite dimensional
spaces $X_k \subseteq X$ and consider the following problem. Find $x_k
\in X_k $ such that
\begin{equation}\label{eq:galerkin}
\mathcal{B}(x_k,\tilde{x}_k) = \mathcal{F}(\tilde{x}_k) \qquad \mbox{for all
}\tilde{x}_k \in X_k.
\end{equation}
Using a nodal basis, we can represent this problem in matrix-vector
notation as follows:
\begin{equation}\label{eq:gen:bil:mv}
\mathcal{A}_k\,\underline{x}_k = \underline{\mathpzc{f}}_k.
\end{equation}
Here and in what follows, any underlined quantity, like~$\underline{x}_k$, is
the representation of the corresponding
non-underlined quantity, here~$x_k$, with respect to a nodal basis of
the corresponding Hilbert space, here~$X_k$.
Existence and uniqueness of the
discretized problem is guaranteed
by the following condition.
\begin{description}
\item[\citecond{(A1a)}] The relation
\begin{equation*}
\|x_k\|_{X} \lesssim \nnsup{\tilde{x}_k}{X_k}
\frac{\mathcal{B}(x_k,\tilde{x}_k)}{\|\tilde{x}_k\|_{X}}
\lesssim \|x_k\|_{X}
\end{equation*}
holds for all $x_k\in X_k$.
\end{description}
Due to the fact that the model problem is indefinite,
condition~\citecond{(A1)} does not imply condition~\citecond{(A1a)}.
For the Stokes problem itself, it is well-known that such a condition
(also known as \emph{discrete inf-sup condition})
can only be guaranteed if the discretization is chosen appropriately. The same
is true for the Stokes control problem. Fortunately, we can show
the discrete inf-sup condition~\citecond{(A1a)} for
the Stokes control problem based on pre-existing knowledge on the
discrete inf-sup condition
for the Stokes problem. This allows us to show that all discretizations which are
suitable for the Stokes flow problem are also suitable
for the Stokes control problem.
We choose the space~$X_k$ as follows:
\begin{equation*}
X_k := Y_k \times Y_k\qquad \mbox{where} \qquad Y_k:=U_k \times P_k
\end{equation*}
and the choice of $U_k\subseteq U=[H^1_0(\Omega)]^d$
and~$P_k \subseteq P = L^2_0(\Omega)$ is discussed below. Note that $X_k$ has product
structure and that the state and the
adjoint state (Lagrange multipliers) are
discretized the same way. The same has already been done for
optimal control problems with elliptic state equation, cf.~\cite{Takacs:Zulehner:2012} and
many others.
Due to the fact that the grids are obtained by uniform refinement, the
discrete subspaces are nested, i.e.,
$U_{k} \subseteq U_{k+1}$ and $P_{k} \subseteq P_{k+1}$. Therefore,
also $X_{k} \subseteq X_{k+1}$ holds.
The next step is to show condition~\citecond{(A1a)}. We have seen that
the analysis done in~\cite{Zulehner:2010}, applied
to the infinite dimensional spaces, shows condition~\citecond{(A1)}.
If the analysis done
in~\cite{Zulehner:2010} is applied to the discretized spaces, we obtain that
\begin{equation}\label{eq:a1pseudo}
\|x_k\|_{X_k} \lesssim \nnsup{\tilde{x}_k}{X_k}
\frac{\mathcal{B}(x_k,\tilde{x}_k)}{\|\tilde{x}_k\|_{X_k}}
\lesssim \|x_k\|_{X_k}
\end{equation}
is satisfied for all $x_k \in X_k$, where
\begin{align*}
\|x_k\|_{X_k}^2 &:= \|(u_k,p_k,\lambda_k,\mu_k)\|_{X_k}^2 :=
\|(u_k,p_k)\|_{Y_k}^2 + \alpha^{-1}
\|(\lambda_k,\mu_k)\|_{Y_k}^2,\\
\|(u_k,p_k)\|_{Y_k}^2& := \|u_k\|_{U}^2+ \|p_k\|_{P_k}^2,\\
\|p_k\|_{P_k}^2 &:= \nnsup{w_k}{U_k} \frac{\alpha (p_k,\nabla
\cdot w_k)_{L^2(\Omega)}^2}{\|w_k\|_{L^2(\Omega)}^2 +
\alpha^{1/2} \|w_k\|_{H^1(\Omega)}^2}\mbox{ and}
\end{align*}
$\|\cdot\|_{U}^2$ is as above.
Note that this is not condition~\citecond{(A1a)}, as the norms
$\|\cdot\|_P$ and $\|\cdot\|_{P_k}$ are not equal. For
showing condition~\citecond{(A1a)}, it suffices to show that these two
norms are equivalent which implies also the equivalence of the norms $\|\cdot\|_X$ and
$\|\cdot\|_{X_k}$. This can be shown using the following condition.
\begin{description}
\item[\citecond{(S)}]
The discretization of $P$ is $H^1$-conforming, i.e., $P_k\subseteq H^1(\Omega)$, and
the \emph{weak inf-sup condition}
\begin{equation*}
\nnsup{u_k}{U_k} \frac{(\nabla\cdot u_k,p_k)_{L^2(\Omega)}
}{\|u_k\|_{L^2(\Omega)} } \gtrsim \|\nabla p_k\|_{L^2(\Omega)}
\end{equation*}
holds for all $p_k\in P_k$.
\end{description}
\begin{lemma}
Assume that the discretization satisfies condition~\citecond{(S)}.
Then condition~\citecond{(A1a)} is satisfied for the model problem.
\end{lemma}
\begin{proof}
Lemma~2.2 in~\cite{Olshanskii:2012} states (provided that \citecond{(S)}
is satisfied) that ${\|\cdot\|_P}\eqsim{\|\cdot\|_{P_k}}$ is satisfied.
A direct consequence is $\|\cdot\|_X\eqsim\|\cdot\|_{X_k}$. Therefore,
condition~\eqref{eq:a1pseudo} implies condition~\citecond{(A1a)}.
\qed\end{proof}
Note that condition~\citecond{(S)} is a standard condition which ensures that the
chosen discretization is stable for the Stokes problem.
In~\cite{Bercovier:Pironneau:1979,Verfuerth:1984} it was shown that
condition~\citecond{(S)} is satisfied for the Taylor-Hood element
($P_2$-$P_1$ element) for polygonal
domains where at least one vertex of each element is located in the interior
of the domain. Here and in what follows we assume that the problem is
discretized with the Taylor-Hood element and that the mesh satisfies
the named condition.
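For completeness, the named mesh condition is easy to verify algorithmically. A small Python sketch (purely illustrative, assuming the mesh is given as index arrays):
\begin{verbatim}
def mesh_ok_for_taylor_hood(triangles, boundary_vertices):
    # triangles: list of 3-tuples of vertex indices,
    # boundary_vertices: set of vertex indices on the boundary.
    # Returns True if every element has at least one interior vertex.
    return all(any(v not in boundary_vertices for v in tri)
               for tri in triangles)
\end{verbatim}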
\section{An all-at-once multigrid method}\label{sec:3}
The problem shall be solved with an all-at-once multigrid method. The
abstract algorithm for solving the discretized
equation~\eqref{eq:gen:bil:mv} on grid level~$k$ reads as follows.
Starting from an initial approximation~$\underline{x}^{(0)}_k$,
one iterate of the multigrid method is given by the following two steps:
\begin{itemize}
\item \emph{Smoothing procedure:} Compute
\begin{equation} \nonumber
\underline{x}^{(0,m)}_k := \underline{x}^{(0,m-1)}_k + \hat{\mathcal{A}}_k^{-1}
\left(\underline{\mathpzc{f}}_k -\mathcal{A}_k\;\underline{x}^{(0,m-1)}_k\right)
\qquad \mbox{for } m=1,\ldots,\nu
\end{equation}
with $\underline{x}^{(0,0)}_k=\underline{x}^{(0)}_k$. The choice of
the smoother (or, in other words, of the preconditioning matrix
$\hat{\mathcal{A}}_k^{-1}$) will be discussed below. \vspace{.2cm}
\item \emph{Coarse-grid correction:}
\begin{itemize}
\item Compute the defect
$\underline{r}_k^{(1)} := \underline{\mathpzc{f}}_k -\mathcal{A}_k\;\underline{x}^{(0,\nu)}_k$
and restrict it to grid level $k-1$ using
a restriction matrix $I_k^{k-1}$:
\begin{equation}\nonumber
\underline{r}_{k-1}^{(1)} := I_k^{k-1} \left(\underline{\mathpzc{f}}_k -\mathcal{A}_k
\;\underline{x}^{(0,\nu)}_k\right).
\end{equation}
\item Solve the coarse-grid problem
\begin{equation}\label{eq:coarse:grid:problem}
\mathcal{A}_{k-1} \,\underline{p}_{k-1}^{(1)} =\underline{r}_{k-1}^{(1)}
\end{equation}
approximately.
\item Prolongate $\underline{p}_{k-1}^{(1)}$ to
grid level $k$ using a prolongation
matrix $I_{k-1}^k$ and add
the result to the previous iterate:
\begin{equation}\nonumber
\underline{x}_{k}^{(1)} := \underline{x}^{(0,\nu)}_k +
I_{k-1}^k \, \underline{p}_{k-1}^{(1)}.
\end{equation}
\end{itemize}
\end{itemize}
As the spaces are nested, the intergrid-transfer
matrices $I_{k-1}^k$ and $I_k^{k-1}$ can be chosen in
a canonical way:
$I_{k-1}^k$ is the canonical embedding and the restriction $I_k^{k-1}$
is its transpose.
If the problem on the coarser grid is solved exactly (two-grid
method), the coarse-grid correction is given by
\begin{equation} \label{eq:method:cga}
\underline{x}_k^{(1)} := \underline{x}_k^{(0,\nu)} +
I_{k-1}^{k} \, \mathcal{A}_{k-1}^{-1} \, I_{k}^{k-1}
\left( \underline{\mathpzc{f}}_k - \mathcal{A}_k \;\underline{x}_k^{(0,\nu)}\right).
\end{equation}
In practice, the problem~\eqref{eq:coarse:grid:problem} is
solved approximately by applying one step (V-cycle)
or two steps (W-cycle) of the multigrid method, recursively. On
the coarsest grid level ($k=0$) the problem~\eqref{eq:coarse:grid:problem} is
solved exactly.
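For illustration, the complete cycle can be sketched in a few lines of Python (a generic sketch under the assumptions above; \texttt{A} is the list of sparse level matrices, \texttt{P} the list of canonical embeddings $I_{k-1}^k$, and \texttt{smooth} applies one smoothing step):
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import spsolve

def multigrid(k, A, P, smooth, x, b, nu, cycles=2):
    # cycles=1 gives the V-cycle, cycles=2 the W-cycle.
    if k == 0:
        return spsolve(A[0].tocsc(), b)      # exact solve on coarsest level
    for _ in range(nu):                      # smoothing procedure
        x = smooth(k, x, b)
    r = P[k-1].T @ (b - A[k] @ x)            # restricted defect
    p = np.zeros(P[k-1].shape[1])
    for _ in range(cycles):                  # recursive coarse-grid solve
        p = multigrid(k-1, A, P, smooth, p, r, nu, cycles)
    return x + P[k-1] @ p                    # prolongate and correct
\end{verbatim}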
To construct a multigrid convergence result based on Hackbusch's
splitting of the analysis into smoothing property and approximation
property, we have to introduce an appropriate framework.
Convergence is shown on the spaces $X_k$, which are equipped with $L^2$-like
norms $|\hspace{-.1em}|\hspace{-.1em}|\cdot|\hspace{-.1em}|\hspace{-.1em}|_{0,k}$, defined as follows:
\begin{equation*}
|\hspace{-.1em}|\hspace{-.1em}| x_k|\hspace{-.1em}|\hspace{-.1em}|_{0,k}^2 :=\|\underline{x}_k\|_{\mathcal{L}_k}^2
:=(\mathcal{L}_k \underline{x}_k,\underline{x}_k)_{\ell^2},
\end{equation*}
where
\begin{equation}\label{eq:defMcL}
\mathcal{L}_k:=
\left(
\begin{array}{cccc}
\varphi_{\alpha,k} M_{U,k} \\
&\alpha h_k^{-2} \varphi_{\alpha,k}^{-1}M_{P,k}\\
&&\alpha^{-1} \varphi_{\alpha,k} M_{U,k}\\
&&& h_k^{-2} \varphi_{\alpha,k}^{-1} M_{P,k}
\end{array}
\right),
\end{equation}
and $\varphi_{\alpha,k}:=1+\alpha^{1/2} h_k^{-2}$ and $M_{U,k}$ and $M_{P,k}$ are the mass matrices,
representing the $L^2$-inner product on $U_k$ and $P_k$, respectively.
Based on the norm $|\hspace{-.1em}|\hspace{-.1em}| \cdot|\hspace{-.1em}|\hspace{-.1em}|_{0,k}$, we can introduce the residual norm $|\hspace{-.1em}|\hspace{-.1em}| \cdot|\hspace{-.1em}|\hspace{-.1em}|_{2,k}$
using
\begin{equation*}
|\hspace{-.1em}|\hspace{-.1em}| x_k |\hspace{-.1em}|\hspace{-.1em}|_{2,k} := \sup_{\tilde{x}_k\in X_k}
\frac{\mathcal{B} \left(x_k, \tilde{x}_k\right)}{|\hspace{-.1em}|\hspace{-.1em}| \tilde{x}_k|\hspace{-.1em}|\hspace{-.1em}|_{0,k}}.
\end{equation*}
Smoothing property and approximation property read as follows.
\begin{itemize}
\item \emph{Smoothing property:}
\begin{equation} \label{eq:smp}
|\hspace{-.1em}|\hspace{-.1em}| x_k^{(0,\nu)}-x_k^* |\hspace{-.1em}|\hspace{-.1em}|_{2,k}
\le \eta(\nu) |\hspace{-.1em}|\hspace{-.1em}| x_k^{(0)}-x_k^*|\hspace{-.1em}|\hspace{-.1em}|_{0,k}
\end{equation}
should hold for some function $\eta(\nu)$
with $\lim_{\nu\rightarrow\infty}\eta(\nu)= 0$. Here and in what follows,
$x_k^*\in X_k$ is the exact solution of the discretized problem~\eqref{eq:gen:bil:mv}.
\item \emph{Approximation property:}
\begin{equation} \label{eq:apprp}
|\hspace{-.1em}|\hspace{-.1em}| x_k^{(1)}-x_k^*|\hspace{-.1em}|\hspace{-.1em}|_{0,k} \le
C_A |\hspace{-.1em}|\hspace{-.1em}| x_k^{(0,\nu)}-x_k^*|\hspace{-.1em}|\hspace{-.1em}|_{2,k}
\end{equation}
should hold for some constant $C_A>0$.
\end{itemize}
It is easy to see that, if we combine both conditions, we obtain
\begin{equation}\nonumber
|\hspace{-.1em}|\hspace{-.1em}| x_k^{(1)}-x_k^*|\hspace{-.1em}|\hspace{-.1em}|_{0,k} \le
q(\nu) |\hspace{-.1em}|\hspace{-.1em}| x_k^{(0)}-x_k^*|\hspace{-.1em}|\hspace{-.1em}|_{0,k},
\end{equation}
where $q(\nu)=C_A\eta(\nu)$, i.e., that the two-grid method converges
for $\nu$ large enough.
The convergence of the W-cycle multigrid method can be shown under
mild assumptions, see e.g.~\cite{Hackbusch:1985}.
The choice of an appropriate smoother is a key issue in constructing
such a multigrid method.
Here, we introduce one smoother which is appropriate for a
large class of problems including the model problem:
\emph{normal equation smoothers}, cf.~\cite{Brenner:1996}, which read as follows.
\begin{equation}\nonumber
\underline{x}^{(0,m)}_k := \underline{x}^{(0,m-1)}_k + \tau
\underbrace{\mathcal{L}_k^{-1} \mathcal{A}_k \mathcal{L}_k^{-1}}_{\displaystyle
\hat{\mathcal{A}}_k^{-1}:=}
\left(\underline{\mathpzc{f}}_k -\mathcal{A}_k \;\underline{x}^{(0,m-1)}_k\right)
\qquad \mbox{for } m=1,\ldots,\nu.
\end{equation}
Here, a fixed~$\tau>0$ has to be chosen such that the
spectral radius~$\rho(\tau \hat{\mathcal{A}}_k^{-1}\mathcal{A}_k)$ is bounded
by a constant strictly smaller than~$2$ on all grid
levels~$k$ and for all choices of the parameter~$\alpha$.
Using a standard inverse inequality, one can show that
\begin{equation*}
\|x_k\|_{X} \lesssim |\hspace{-.1em}|\hspace{-.1em}| x_k|\hspace{-.1em}|\hspace{-.1em}|_{0,k}
\end{equation*}
is satisfied for all $x_k \in X_k$. Based on this result, using an eigenvalue
analysis one can show the following lemma, cf.~\cite{Brenner:1996}.
\begin{lemma}
The damping parameter $\tau>0$ can be chosen independent
of grid level~$k$ and the choice of the parameter~$\alpha$ such that
\begin{equation*}
\tau\, \rho(\hat{\mathcal{A}}_k^{-1}\mathcal{A}_k) \le 2-\epsilon < 2,
\end{equation*}
holds for some constant $\epsilon>0$. For this choice of~$\tau$, there is a
constant~$C_S>0$, independent of the grid level~$k$ and the choice of the
parameter~$\alpha$,
such that the smoothing property~\eqref{eq:smp} is satisfied with rate
\begin{equation*}
\eta(\nu):=C_S \nu^{-1/2}.
\end{equation*}
\end{lemma}
Certainly, one step of the smoothing iteration should be cheap to apply.
Since the mass matrices $M_{U,k}$ and $M_{P,k}$ in~\eqref{eq:defMcL} are
spectrally equivalent to their diagonals under weak assumptions, these
matrices can be replaced by their diagonals in the practical realization of the smoother.
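A minimal Python sketch of the resulting smoother, with $\mathcal{L}_k$ replaced by a diagonal matrix stored as a vector \texttt{L\_diag} (again only an illustration of the formulas above):
\begin{verbatim}
import numpy as np

def normal_equation_smoother(A_k, L_diag, tau):
    # One step  x -> x + tau * L^{-1} A L^{-1} (b - A x),
    # exploiting the symmetry of A_k.
    def smooth(x, b):
        r = b - A_k @ x
        return x + tau * ((A_k @ (r / L_diag)) / L_diag)
    return smooth

def estimate_tau(A_k, L_diag, n_iter=50, safety=0.9):
    # Estimate rho(A_hat^{-1} A) = rho(L^{-1} A L^{-1} A) by power
    # iteration and choose tau such that tau * rho <= 2*safety < 2.
    x = np.random.default_rng(0).standard_normal(A_k.shape[0])
    for _ in range(n_iter):
        x = (A_k @ ((A_k @ x) / L_diag)) / L_diag
        x /= np.linalg.norm(x)
    rho = np.linalg.norm((A_k @ ((A_k @ x) / L_diag)) / L_diag)
    return safety * 2.0 / rho
\end{verbatim}
In practice, $\tau$ is of course fixed once for all levels and all~$\alpha$, as required by the lemma above; the power iteration merely indicates how a suitable value can be found experimentally.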
\section{A convergence proof}\label{sec:3a}
The proof of the approximation property is done using the
approximation theorem introduced in~\cite{Takacs:Zulehner:2012}
which requires besides the conditions~\citecond{(A1)}
and~\citecond{(A1a)} two more conditions (conditions~\citecond{(A3)} and
\citecond{(A4)}) involving, besides the Hilbert space $X$, two more Hilbert
spaces $X_{-,k}:=(X_-,\|\cdot\|_{X_{-,k}})$ and
$X_{+,k}:=(X_+,\|\cdot\|_{X_{+,k}})$, which are chosen as follows.
As weaker space, we choose $X_-:= Y_- \times
Y_-$, where $Y_-:=U_-\times P_-$, $U_-:=[L^2(\Omega)]^d$ and $P_-:= [H^1_0(\Omega)\cap L^2_0(\Omega)]^*$,
equipped with norms
\begin{align*}
\|x\|_{X_{-,k}}^2 &:= \|(u,p,\lambda,\mu)\|_{X_{-,k}}^2 :=
\|(u,p)\|_{Y_{-,k}}^2 + \alpha^{-1} \|(\lambda,\mu)\|_{Y_{-,k}}^2,\\
\|(u,p)\|_{Y_{-,k}}^2 &:= \|u\|_{U_{-,k}}^2 +\|p\|_{P_{-,k}}^2,\\
\|u\|_{U_{-,k}}^2 &:=h_k^{-2} \|u\|_{[H^1_0(\Omega)+ \alpha^{-1/2} L^2(\Omega)]^*}^2
\mbox{ and}\\
\|p\|_{P_{-,k}}^2 &:=h_k^{-2} \|p\|_{[\alpha^{-1} L^2_0(\Omega)\cap\alpha^{-1/2} H^1_0(\Omega)]^*}^2.
\end{align*}
Note that the dual spaces are $(X_-)^* = (Y_-)^*\times (Y_-)^*$, where $(Y_-)^*=(U_-)^*\times (P_-)^*$,
$(U_-)^*=[L^2(\Omega)]^d$ and $(P_-)^* = H^1_0(\Omega)\cap L^2_0(\Omega)$, equipped with norms
\begin{align*}
\|\mathcal{F}\|_{(X_{-,k})^*}^2 &:= \|(f,g,\zeta,\chi)\|_{(X_{-,k})^*}^2 :=
\|(f,g)\|_{(Y_{-,k})^*}^2 + \alpha \|(\zeta,\chi)\|_{(Y_{-,k})^*}^2,\\
\|(f,g)\|_{(Y_{-,k})^*}^2 &:= \|f\|_{(U_{-,k})^*}^2 +\|g\|_{(P_{-,k})^*}^2,\\
\|f\|_{(U_{-,k})^*}^2 &:=h_k^{2} \|f\|_{H^1_0(\Omega)+ \alpha^{-1/2} L^2(\Omega)}^2
\mbox{ and}\\
\|g\|_{(P_{-,k})^*}^2 &:=h_k^{2} \|g\|_{\alpha^{-1} L^2_0(\Omega)\cap\alpha^{-1/2} H^1_0(\Omega)}^2.
\end{align*}
As stronger space, we choose
$X_{+}:=Y_{+}\times Y_{+}$, where $Y_{+}:=U_{+}\times P_{+}$, $U_{+}:=[H^2(\Omega)\cap H^1_0(\Omega)]^d$ and
$P_{+}:=H^1(\Omega)\cap L^2_0(\Omega)$, equipped with norms
\begin{align*}
\|x\|_{X_{+,k}}^2 &:= \|(u,p,\lambda,\mu)\|_{X_{+,k}}^2 :=
\|(u,p)\|_{Y_{+,k}}^2 + \alpha^{-1} \|(\lambda,\mu)\|_{Y_{+,k}}^2,\\
\|(u,p)\|_{Y_{+,k}}^2 &:= \|u\|_{U_{+,k}}^2+ \|p\|_{P_{+,k}}^2,\\
\|u\|_{U_{+,k}}^2 &:= h_k^2 \|u\|_{H^1(\Omega)\cap \alpha^{1/2} H^2(\Omega)}^2
\mbox{ and}\\
\|p\|_{P_{+,k}}^2 &:= h_k^2 \|p\|_{\alpha H^2(\Omega)+\alpha^{1/2} H^1(\Omega)}^2.
\end{align*}
The additional conditions read as follows.
\begin{description}
\item[\citecond{(A3)}] On all grid levels $k$, the approximation error
result
\begin{equation*}
\inf_{x_k\in X_k} \|x-x_k\|_X \lesssim \| x \|_{X_{+,k}}\qquad \mbox{for all } x\in X_+
\end{equation*}
is satisfied.
\item[\citecond{(A4)}] For all grid levels $k$, all $\mathcal{F}\in (X_-)^*$ the solution
$x_{\mathcal{F}}\in X$ of the problem,
\begin{equation}\label{a4:problem}
\mbox{find $x\in X$ such that} \qquad \mathcal{B}(x,\tilde{x}) = \mathcal{F}(\tilde{x}) \qquad\mbox{ for all } \tilde{x}\in X,
\end{equation}
satisfies $x_{\mathcal{F}}\in X_+$ and the inequality
\begin{equation}\label{eq:a4}
\|x_{\mathcal{F}}\|_{X_{+,k}} \lesssim \|\mathcal{F}\|_{(X_{-,k})^*}.
\end{equation}
\end{description}
Based on these assumptions, the following theorem shows the approximation property.
\begin{theorem}\label{thrm:main}
For $k=0,1,2,\ldots$, let the symmetric matrices $\mathcal{A}_k$ be obtained from the
Galerkin discretization~\eqref{eq:galerkin} of problem~\eqref{eq:gen:bil}, using a sequence of finite-dimensional nested
subspaces $X_{k-1}\subseteq X_k\subset X$.
Assume that there are Hilbert spaces $X_+\subseteq X\subseteq X_-$
with mesh-dependent norms ${\|\cdot\|_{X_{+,k}}}$, ${\|\cdot\|_{X}}$
and ${\|\cdot\|_{X_{-,k}}}$ such that the conditions~\citecond{(A1)},
\citecond{(A1a)}, \citecond{(A3)} and~\citecond{(A4)} are satisfied.
Then the coarse-grid correction~\eqref{eq:method:cga} satisfies the approximation
property
\begin{equation} \label{eq:apprp:thrm1}
\| x_k^{(1)}-x_k^*\|_{X_{-,k}} \le
C_A \sup_{\tilde{x}_k\in X_k}
\frac{\mathcal{B}\left(
x_k^{(0,\nu)}-x_k^*,\tilde{x}_k\right)}
{\|\tilde{x}_k\|_{X_{-,k}}},
\end{equation}
where the constant $C_A$ only depends on the constants
that appear (implicitly) in the named conditions.
\end{theorem}
For a proof, see~\cite{Takacs:Zulehner:2012}, Theorem~4.1.
\begin{theorem}
Condition~\citecond{(A3)} is satisfied.
\end{theorem}
\begin{proof}
This proof is analogous to the proof of Theorem~4.2 in~\cite{Takacs:2013}. However, to keep
this paper as self-contained as possible, we give a proof of this theorem.
Note that it suffices
to show approximation error results for the individual variables separately.
Using a standard interpolation operator $\Pi_k:[H^2(\Omega)]^d\rightarrow U_k$,
we obtain for the velocity field~$u$
\begin{equation*}
\|u - \Pi_k u\|_{L^2(\Omega)}^2 \lesssim h_k^2 \|u \|_{H^1(\Omega)}^2
\quad\mbox{and}\quad
\|u - \Pi_k u\|_{H^1(\Omega)}^2 \lesssim h_k^2 \|u \|_{H^2(\Omega)}^2,
\end{equation*}
for all $u\in [H^2(\Omega)]^d$ and therefore
\begin{align*}
& \inf_{u_k\in U_k} \|u-u_k\|_{U}^2 \le \|u-\Pi_ku\|_{U}^2
=\|u-\Pi_ku\|_{L^2(\Omega)}^2+\alpha^{1/2} \|u-\Pi_ku\|_{H^1(\Omega)}^2
\\&\qquad \lesssim h_k^2 \left(\|u\|_{H^1(\Omega)}^2+ \alpha^{1/2}\|u\|_{H^2(\Omega)}^2\right)
= \|u\|_{U_{+,k}}^2.
\end{align*}
The same can be done for the adjoint velocity~$\lambda$.
A similar estimate can be shown for the pressure~$p$. The estimates
\begin{equation*}
\inf_{p_k\in P_k} \|p - p_k\|_{L^2(\Omega)}^2 \lesssim h_k^2 \|p\|_{H^1(\Omega)}^2
\quad \mbox{and} \quad
\inf_{p_k\in P_k} \|p - p_k\|_{H^1(\Omega)}^2 \lesssim h_k^2 \|p\|_{H^2(\Omega)}^2
\end{equation*}
are standard approximation error results which imply
\begin{align*}
&\inf_{p_k\in P_k} \|p-p_k\|_P^2 = \inf_{p_k\in P_k} \|p-p_k\|_{\alpha H^1(\Omega)+\alpha^{1/2} L^2(\Omega)}^2 \\
&\quad=
\inf_{\substack{p_k\in P_k\\q_1\in H^1(\Omega)\\q_2\in L^2(\Omega)\\q_1+q_2=p-p_k}} \|q_1\|_{\alpha H^1(\Omega)}^2
+\|q_2\|_{\alpha^{1/2} L^2(\Omega)}^2\\
&\quad=
\inf_{\substack{p_1\in H^1(\Omega)\\p_2\in L^2(\Omega)\\p_1+p_2=p}}
\inf_{p_{1,k}\in P_k} \|p_1-p_{1,k}\|_{\alpha H^1(\Omega)}^2
+\inf_{p_{2,k}\in P_k} \|p_2-p_{2,k}\|_{\alpha^{1/2} L^2(\Omega)}^2\\
&\quad\le
\inf_{\substack{p_1\in H^2(\Omega)\\p_2\in H^1(\Omega)\\p_1+p_2=p}}
\inf_{p_{1,k}\in P_k} \|p_1-p_{1,k}\|_{\alpha H^1(\Omega)}^2
+\inf_{p_{2,k}\in P_k} \|p_2-p_{2,k}\|_{\alpha^{1/2} L^2(\Omega)}^2\\
&\quad\lesssim h_k^2
\inf_{\substack{p_1\in H^2(\Omega)\\p_2\in H^1(\Omega)\\p_1+p_2=p}}
\|p_1\|_{\alpha H^2(\Omega)}^2
+ \|p_2\|_{\alpha^{1/2} H^1(\Omega)}^2
= h_k^2 \|p\|_{\alpha H^2(\Omega)+\alpha^{1/2} H^1(\Omega)}^2.
\end{align*}
The same can be done for the adjoint pressure~$\mu$. This finishes the proof.
\qed\end{proof}
For showing~\citecond{(A4)}, we recall Theorem~4.6
in~\cite{Takacs:2013} on the regularity of the generalized Stokes problem. For this
purpose, we need a regularity assumption for the
Poisson problem with homogeneous Neumann boundary conditions.
\begin{description}
\item[\citecond{(R1)}] \emph{Regularity of the Poisson problem.}
Let $g\in L^2(\Omega)$ and $p\in H^1(\Omega)\cap L^2_0(\Omega)$ be such that
\begin{equation*}
(\nabla p,\nabla \tilde{p})_{L^2(\Omega)} = (g,\tilde{p})_{L^2(\Omega)} \mbox{ for all }\tilde{p} \in H^1(\Omega)\cap L^2_0(\Omega).
\end{equation*}
Then $p\in H^2(\Omega)$ and
$
\|p\|_{H^2(\Omega)} \lesssim \|g\|_{L^2(\Omega)}
$.
\end{description}
Such a regularity assumption can be guaranteed for convex polygonal domains (see, e.g.,~\cite{Dauge:1988}).
Theorem~4.6 in~\cite{Takacs:2013} directly implies the following theorem.
\begin{theorem}\label{thrm:td:stokes}
Suppose that the regularity assumptions~\citecond{(R)} and~\citecond{(R1)} are satisfied.
Let $f \in [L^2(\Omega)]^d$ and $g\in H^1_0(\Omega)\cap L^2_0(\Omega)$.
The solution of the problem, find $(u,p) \in Y$ such that
\begin{align*}
\alpha^{-1/2} ( u,\tilde{u})_{L^2(\Omega)}
+ (\nabla u,\nabla\tilde{u})_{L^2(\Omega)}
+(p,\nabla\cdot\tilde{u})_{L^2(\Omega)}
&=(f,\tilde{u})_{L^2(\Omega)}\\
(\nabla\cdot u,\tilde{p})_{L^2(\Omega)}
&= (g,\tilde{p})_{L^2(\Omega)}
\end{align*}
for all $(\tilde{u},\tilde{p}) \in Y$, satisfies $(u,p)\in Y_{+}$ and the inequality
\begin{equation*}
\begin{aligned}
&\|u\|_{\alpha^{-1/2} H^1(\Omega)\cap H^2(\Omega) }^2
+ \|p\|_{\alpha^{1/2} H^2(\Omega)+H^1(\Omega)}^2\\
&\qquad\lesssim
\|f\|_{\alpha^{1/2} H^1_0(\Omega)+ L^2(\Omega) }^2
+ \|g\|_{\alpha^{-1/2} L^2_0(\Omega)\cap H^1_0(\Omega)}^2
\end{aligned}
\end{equation*}
is satisfied.
\end{theorem}
\begin{proof}
We choose the parameter $\beta$ (which occurs in~\cite{Takacs:2013})
to be $\beta:=\alpha^{-1/2}$.
\qed\end{proof}
\begin{lemma}\label{lem:step2}
Suppose that assumptions~\citecond{(R)} and~\citecond{(R1)} are satisfied. Let
$\mathcal{F}\in (X_-)^*$ be arbitrary but fixed.
Then, $x_{\mathcal{F}}$, the solution of~\eqref{a4:problem},
satisfies $x_{\mathcal{F}}\in X_{+}$ and the bound
\begin{equation}\label{eq:aux1}
\|x_{\mathcal{F}} \|_{X_{+,k}}^2 \lesssim\|\mathcal{F}\|_{(X_{-,k})^*}^2
+ h_k^2 \left( \|u_{\mathcal{F}} \|_{ H^1(\Omega) }^2 + \alpha^{-1} \|\lambda_{\mathcal{F}}\|_{ H^1(\Omega) }^2\right).
\end{equation}
\end{lemma}
\begin{proof}
Let $\mathcal{F}(\tilde{u},\tilde{p},\tilde{\lambda},\tilde{\mu}):=
(f,\tilde{u})_{L^2(\Omega)} + (g,\tilde{p})_{L^2(\Omega)}+
(\zeta,\tilde{\lambda})_{L^2(\Omega)} + (\chi,\tilde{\mu})_{L^2(\Omega)}$, where
$f, \zeta \in [L^2(\Omega)]^d$ and $g,\chi \in H^1_0(\Omega)\cap L^2_0(\Omega)$.
Let $\hat{f}:= f-u_{\mathcal{F}}+ \alpha^{-1/2}\lambda_{\mathcal{F}}$ and
$\hat{\zeta}:=\zeta+\alpha^{-1} \lambda_{\mathcal{F}}+\alpha^{-1/2} u_{\mathcal{F}}$.
Then we can rewrite the KKT-system as follows:
\begin{align*}
(\nabla \lambda_{\mathcal{F}},\nabla \tilde{u})_{L^2(\Omega)} + \alpha^{-1/2}
( \lambda_{\mathcal{F}}, \tilde{u})_{L^2(\Omega)} +
(\mu_{\mathcal{F}},\nabla \cdot \tilde{u})_{L^2(\Omega)} &=
(\hat{f}, \tilde{u})_{L^2(\Omega)} \\
(\nabla \cdot \lambda_{\mathcal{F}},\tilde{p})_{L^2(\Omega)}
& = (g,\tilde{p})_{L^2(\Omega)}
\end{align*}
and
\begin{align*}
(\nabla u_{\mathcal{F}},\nabla \tilde{\lambda})_{L^2(\Omega)}
+\alpha^{-1/2} ( u_{\mathcal{F}}, \tilde{\lambda})_{L^2(\Omega)}
+(p_{\mathcal{F}},\nabla\cdot \tilde{\lambda})_{L^2(\Omega)}
&=(\hat{\zeta}, \tilde{\lambda})_{L^2(\Omega)}\\
(\nabla \cdot u_{\mathcal{F}},\tilde{\mu})_{L^2(\Omega)}
&= (\chi,\tilde{\mu})_{L^2(\Omega)}.
\end{align*}
As $\hat{f} \in [L^2(\Omega)]^d$,
$g\in H^1_0(\Omega)\cap L^2_0(\Omega)$,
$\hat{\zeta} \in [L^2(\Omega)]^d$
and $\chi \in H^1_0(\Omega)\cap L^2_0(\Omega)$,
we obtain using Theorem~\ref{thrm:td:stokes} that $x_{\mathcal{F}} \in X_+$ and the following bounds are satisfied:
\begin{align*}
&\|\lambda_{\mathcal{F}}\|_{\alpha^{-1/2} H^1(\Omega) \cap H^2(\Omega) }^2 +
\|\mu_{\mathcal{F}}\|_{\alpha^{1/2} H^2(\Omega) + H^1(\Omega)}^2 \\
&\qquad \lesssim \|f-u_{\mathcal{F}}+\alpha^{-1/2}\lambda_{\mathcal{F}} \|_{\alpha^{1/2} H^1_0(\Omega) + L^2(\Omega) }^2 +
\|g\|_{ \alpha^{-1/2} L^2_0(\Omega) \cap H^1_0(\Omega) }^2
\end{align*}
and
\begin{align*}
&\|u_{\mathcal{F}}\|_{\alpha^{-1/2} H^1(\Omega) \cap H^2(\Omega) }^2 +
\|p_{\mathcal{F}}\|_{ \alpha^{1/2} H^2(\Omega) + H^1(\Omega) }^2 \\
&\qquad \lesssim \|\zeta+\alpha^{-1}\lambda_{\mathcal{F}}+\alpha^{-1/2}u_{\mathcal{F}} \|_{\alpha^{1/2} H^1_0(\Omega) + L^2(\Omega) }^2 +
\|\chi\|_{ \alpha^{-1/2} L^2_0(\Omega) \cap H^1_0(\Omega) }^2.
\end{align*}
We can combine these two estimates and obtain
\begin{align*}
&\|u_{\mathcal{F}}\|_{ H^1(\Omega) \cap \alpha^{1/2} H^2(\Omega) }^2
+ \|p_{\mathcal{F}}\|_{ \alpha H^2(\Omega) + \alpha^{1/2} H^1(\Omega) }^2
\\&\qquad
+ \alpha^{-1} \|\lambda_{\mathcal{F}}\|_{ H^1(\Omega) \cap \alpha^{1/2} H^2(\Omega) }^2
+ \alpha^{-1} \|\mu_{\mathcal{F}}\|_{ \alpha H^2(\Omega) + \alpha^{1/2} H^1(\Omega) }^2
\\&\qquad\lesssim
\|f\|_{ H^1_0(\Omega) + \alpha^{-1/2} L^2(\Omega) }^2
+ \|g\|_{ \alpha^{-1} L^2_0(\Omega) \cap \alpha^{-1/2} H^1_0(\Omega) }^2
\\&\qquad\qquad
+ \alpha \|\zeta\|_{ H^1_0(\Omega) + \alpha^{-1/2} L^2(\Omega) }^2
+ \alpha \|\chi\|_{ \alpha^{-1} L^2_0(\Omega) \cap \alpha^{-1/2} H^1_0(\Omega) }^2
\\&\qquad\qquad
+ \|u_{\mathcal{F}} \|_{ H^1_0(\Omega) + \alpha^{-1/2} L^2(\Omega) }^2
+ \alpha^{-1} \|\lambda_{\mathcal{F}}\|_{ H^1_0(\Omega) + \alpha^{-1/2} L^2(\Omega) }^2.
\end{align*}
Note that $\|u_{\mathcal{F}} \|_{ H^1_0(\Omega) + \alpha^{-1/2} L^2(\Omega) } \le \|u_{\mathcal{F}} \|_{ H^1_0(\Omega)}=\|u_{\mathcal{F}} \|_{ H^1(\Omega)}$
holds because of $u_{\mathcal{F}}\in [H^1_0(\Omega)]^d$. As the analogous estimate also holds for $\lambda_{\mathcal{F}}$,
this finishes the proof.
\qed\end{proof}
To show condition~\citecond{(A4)}, we have to bound
$\|u_{\mathcal{F}} \|_{H^1(\Omega)}^2 + \alpha^{-1} \|\lambda_{\mathcal{F}}\|_{H^1(\Omega)}^2$ from above.
For showing such a result, we need some notation.
As $[H^1_0(\Omega)]^d$ is dense in $[L^2(\Omega)]^d$, for $u\in [H^2(\Omega)]^d$ the function $-\Delta u \in [L^2(\Omega)]^d$
can be approximated by some function $w^{\epsilon} \in [H^1_0(\Omega)]^d$ such that
\begin{equation*}
\|-\Delta u - w^{\epsilon}\|_{L^2(\Omega)}^2 \le \epsilon.
\end{equation*}
So, we can introduce an operator $-\Delta^{\epsilon}:[H^2(\Omega)]^d\rightarrow [H^1_0(\Omega)]^d$ such that
\begin{equation*}
\|-\Delta u - (-\Delta^{\epsilon}) u \|_{L^2(\Omega)}^2 \le \epsilon.
\end{equation*}
Analogously, we introduce the operator $\nabla^{\epsilon}:H^1(\Omega)
\rightarrow [H^1_0(\Omega)]^d$ such that
\begin{equation*}
\|\nabla p - \nabla^{\epsilon}p \|_{L^2(\Omega)}^2 \le \epsilon.
\end{equation*}
\begin{lemma}\label{lem:aux2}
Let $\mathcal{F} \in (X_-)^*$ and let
$x_{\mathcal{F}}=(u_{\mathcal{F}},p_{\mathcal{F}},\lambda_{\mathcal{F}},\mu_{\mathcal{F}})$
be the solution of~\eqref{a4:problem}. Then $x_{\mathcal{F}}$ satisfies the estimate
\begin{equation}\label{eq:aux2}
h_k^2\left(\|u_{\mathcal{F}}\|_{H^1(\Omega)}^2 + \alpha^{-1} \|\lambda_{\mathcal{F}}\|_{H^1(\Omega)}^2\right)
\lesssim \|\mathcal{F}\|_{(X_{-,k})^*} \|x_{\mathcal{F}}\|_{X_{+,k}}.
\end{equation}
\end{lemma}
\begin{proof}
Let $\mathcal{F}(\tilde{u},\tilde{p},\tilde{\lambda},\tilde{\mu}):=
(f,\tilde{u})_{L^2(\Omega)} + (g,\tilde{p})_{L^2(\Omega)}+
(\zeta,\tilde{\lambda})_{L^2(\Omega)} + (\chi,\tilde{\mu})_{L^2(\Omega)}$, where
$f, \zeta \in [L^2(\Omega)]^d$ and $g,\chi \in H^1_0(\Omega)\cap L^2_0(\Omega)$.
The idea of this proof is to show that for all $\epsilon>0$ there
is some $\tilde{x}^{\epsilon}\in X$ such that
\begin{align}\nonumber
&\mathcal{F}(\tilde{x}^{\epsilon})-\mathcal{B}(x_{\mathcal{F}},\tilde{x}^{\epsilon})\\
&\quad\lesssim h_k^{-2} \|\mathcal{F}\|_{(X_{-,k})^*}
\|x_{\mathcal{F}}\|_{X_{+,k}} -\|u_{\mathcal{F}}\|_{H^1(\Omega)}^2
- \alpha^{-1} \|\lambda_{\mathcal{F}}\|_{H^1(\Omega)}^2 \label{eq:aux2a}\\
&\quad\quad+ \epsilon (\alpha^{1/2} + \alpha^{-1/2}) h_k^{-1}
(\|x_{\mathcal{F}}\|_{X_{+,k}} + \|\mathcal{F}\|_{(X_{-,k})^*})+\epsilon^2.\nonumber
\end{align}
Note that the left-hand side of the inequality is $0$. Therefore,
establishing this inequality is sufficient to prove the statement of the lemma, as
$\epsilon>0$ can be chosen arbitrarily small.
In the following, we show that~\eqref{eq:aux2a} is satisfied for the choice
$\tilde{x}^{\epsilon}:=(-\Delta^{\epsilon} u_{\mathcal{F}}, $ $
-\nabla\cdot\nabla^{\epsilon} p_{\mathcal{F}},
\Delta^{\epsilon} \lambda_{\mathcal{F}},
\nabla\cdot\nabla^{\epsilon} \mu_{\mathcal{F}})$.
We estimate the individual summands of $\mathcal{F}(\tilde{x}^{\epsilon})-
\mathcal{B}(x_{\mathcal{F}},\tilde{x}^{\epsilon})$ separately. For the first one, we obtain
\begin{align*}
& -(u_{\mathcal{F}},-\Delta^{\epsilon} u_{\mathcal{F}})_{L^2(\Omega)}
\le- (u_{\mathcal{F}},-\Delta u_{\mathcal{F}})_{L^2(\Omega)} + \epsilon \|u_{\mathcal{F}}\|_{L^2(\Omega)}\\
&\quad= - (\nabla u_{\mathcal{F}},\nabla u_{\mathcal{F}})_{L^2(\Omega)} + \epsilon \|u_{\mathcal{F}}\|_{L^2(\Omega)}
\lesssim - \|u_{\mathcal{F}}\|_{H^1(\Omega)}^2 + \epsilon h_k^{-1} \|x_{\mathcal{F}}\|_{X_{+,k}}
\end{align*}
due to the fact that $u_{\mathcal{F}}\in [H^2(\Omega)\cap H^1_0(\Omega)]^d$ and
due to Friedrichs' inequality. The same can be done for
$\alpha^{-1} (\lambda_{\mathcal{F}},\Delta^{\epsilon} \lambda_{\mathcal{F}})_{L^2(\Omega)}$.
For the next two summands,
\begin{align*}
&-(\nabla u_{\mathcal{F}},\nabla \Delta^{\epsilon} \lambda_{\mathcal{F}})_{L^2(\Omega)} - (\nabla \lambda_{\mathcal{F}},\nabla (-\Delta^{\epsilon}) u_{\mathcal{F}})_{L^2(\Omega)}\\
&\quad = (\Delta u_{\mathcal{F}}, \Delta^{\epsilon} \lambda_{\mathcal{F}})_{L^2(\Omega)} - (\Delta \lambda_{\mathcal{F}}, \Delta^{\epsilon} u_{\mathcal{F}})_{L^2(\Omega)}\\
&\quad \le (\Delta u_{\mathcal{F}}, \Delta \lambda_{\mathcal{F}})_{L^2(\Omega)} - (\Delta \lambda_{\mathcal{F}}, \Delta u_{\mathcal{F}})_{L^2(\Omega)} + \epsilon(\|\Delta u_{\mathcal{F}}\|_{L^2(\Omega)}+\|\Delta \lambda_{\mathcal{F}}\|_{L^2(\Omega)}) \\
&\quad \le \epsilon(\| u_{\mathcal{F}} \|_{H^2(\Omega)}+\| \lambda_{\mathcal{F}} \|_{H^2(\Omega)})
\le \epsilon h_k^{-1} (\alpha^{1/4} + \alpha^{-1/4}) \|x_{\mathcal{F}}\|_{X_{+,k}}
\end{align*}
is satisfied due to the fact that $\Delta^{\epsilon}$ maps into $[H^1_0(\Omega)]^d$.
For the next two summands, we obtain
\begin{equation}\nonumber
\begin{aligned}
&-(\nabla\cdot u_{\mathcal{F}}, \nabla\cdot\nabla^{\epsilon} \mu_{\mathcal{F}})_{L^2(\Omega)} - (\nabla\cdot(-\Delta^{\epsilon}) u_{\mathcal{F}},\mu_{\mathcal{F}})_{L^2(\Omega)}\\
&\quad= (\nabla \nabla\cdot u_{\mathcal{F}},\nabla^{\epsilon}\mu_{\mathcal{F}})_{L^2(\Omega)}- (\Delta^{\epsilon}u_{\mathcal{F}},\nabla \mu_{\mathcal{F}})_{L^2(\Omega)} \\
&\quad\le (\nabla \nabla\cdot u_{\mathcal{F}},\nabla^{\epsilon}\mu_{\mathcal{F}})_{L^2(\Omega)} - (\Delta^{\epsilon}u_{\mathcal{F}},\nabla^{\epsilon} \mu_{\mathcal{F}})_{L^2(\Omega)} + \epsilon \|\Delta^{\epsilon} u_{\mathcal{F}}\|_{L^2(\Omega)} \\
&\quad\le -(\nabla u_{\mathcal{F}},\nabla\nabla^{\epsilon}\mu_{\mathcal{F}})_{L^2} - (\Delta u_{\mathcal{F}},\nabla^{\epsilon} \mu_{\mathcal{F}})_{L^2}+ \epsilon (\|\Delta u_{\mathcal{F}}\|_{L^2} +\|\nabla^{\epsilon} \mu_{\mathcal{F}}\|_{L^2}+\epsilon) \\
&\quad= -(\nabla u_{\mathcal{F}},\nabla\nabla^{\epsilon}\mu_{\mathcal{F}})_{L^2} + (\nabla u_{\mathcal{F}},\nabla \nabla^{\epsilon} \mu_{\mathcal{F}})_{L^2} + \epsilon (\|\Delta u_{\mathcal{F}}\|_{L^2} +\|\nabla^{\epsilon} \mu_{\mathcal{F}}\|_{L^2}+\epsilon) \\
&\quad\le \epsilon (\|u_{\mathcal{F}}\|_{H^2(\Omega)} +\|\mu_{\mathcal{F}}\|_{H^1(\Omega)}+2\epsilon)
\lesssim \epsilon h_k^{-1} (\alpha^{-1/4} + \alpha^{-1/2}) \|x_{\mathcal{F}}\|_{X_{+,k}} + \epsilon^2.
\end{aligned}
\end{equation}
The same can be done for $-(\nabla\cdot \lambda_{\mathcal{F}},- \nabla\cdot\nabla^{\epsilon} p_{\mathcal{F}})_{L^2(\Omega)} - (\nabla\cdot\Delta^{\epsilon} \lambda_{\mathcal{F}},p_{\mathcal{F}})_{L^2(\Omega)}$.
Let $f_2\in [H^1_0(\Omega)]^d$ and $f_1:=f-f_2$. Then
\begin{equation*}
(f_1,-\Delta^{\epsilon} u_{\mathcal{F}})_{L^2(\Omega)}
\lesssim
\| f_1\|_{\alpha^{-1/2} L^2(\Omega)} \|u_{\mathcal{F}}\|_{\alpha^{1/2}
H^2(\Omega)} + \epsilon\alpha^{1/4} \| f_1\|_{\alpha^{-1/2} L^2(\Omega)}
\end{equation*}
holds as well as
\begin{align*}
(f_2,-\Delta^{\epsilon} u_{\mathcal{F}})_{L^2(\Omega)}
&\lesssim (\nabla f_2,\nabla u_{\mathcal{F}})_{L^2(\Omega)}+ \epsilon\| f_2\|_{L^2(\Omega)}\\
&\lesssim
\| f_2\|_{H^1(\Omega)} \|u_{\mathcal{F}}\|_{H^1(\Omega)} + \epsilon\| f_2\|_{H^1(\Omega)}.
\end{align*}
This implies
\begin{align*}
&(f,-\Delta^{\epsilon} u_{\mathcal{F}})_{L^2(\Omega)}\\
&\qquad \lesssim \|f\|_{H^1_0(\Omega)+ \alpha^{-1/2} L^2(\Omega)}
\|u_{\mathcal{F}}\|_{H^1(\Omega)\cap \alpha^{1/2} H^2(\Omega)} \\
&\qquad\qquad+ \epsilon (1+\alpha^{1/4})
\|f\|_{H^1_0(\Omega)+ \alpha^{-1/2} L^2(\Omega)}\\
&\qquad\lesssim \|f\|_{H^1_0(\Omega)+ \alpha^{-1/2} L^2(\Omega)}
\|u_{\mathcal{F}}\|_{H^1(\Omega)\cap \alpha^{1/2} H^2(\Omega)} + \epsilon h_k^{-1}
(1+\alpha^{1/4})\|\mathcal{F}\|_{(X_{-,k})^*}.
\end{align*}
Let $p_2\in H^2(\Omega)$ and $p_1:=p_{\mathcal{F}}-p_2\in H^1(\Omega)$.
We have
\begin{align*}
(g,-\nabla\cdot\nabla^{\epsilon} p_1)_{L^2(\Omega)} &=
(\nabla g,\nabla^{\epsilon} p_1)_{L^2(\Omega)}\\
&\lesssim \|g\|_{ \alpha^{-1/2} H^1(\Omega) }
\|p_1\|_{ \alpha^{1/2} H^1(\Omega) } + \epsilon \|g\|_{H^1(\Omega)}.
\end{align*}
Moreover, using $g\in H^1_0(\Omega)$, we have also
\begin{align*}
&(g,-\nabla\cdot\nabla^{\epsilon} p_2)_{L^2(\Omega)}
= (\nabla g,\nabla^{\epsilon} p_2)_{L^2(\Omega)}
\lesssim (\nabla g,\nabla p_2)_{L^2(\Omega)} + \epsilon \|g\|_{H^1(\Omega)}\\
&\quad =-( g,\nabla \cdot \nabla p_2)_{L^2(\Omega)} + \epsilon \|g\|_{H^1(\Omega)}
\le \|g\|_{ \alpha^{-1} L^2(\Omega) }
\|p_2\|_{ \alpha H^2(\Omega) } + \epsilon \|g\|_{H^1(\Omega)}
\end{align*}
and therefore
\begin{align*}
&(g,-\nabla\cdot\nabla^{\epsilon} p_{\mathcal{F}})_{L^2(\Omega)}\\
& \qquad \lesssim \| g\|_{\alpha^{-1} L^2(\Omega)\cap \alpha^{-1/2} H^1(\Omega) }
\|p_{\mathcal{F}}\|_{\alpha^{1/2} H^1(\Omega) + \alpha H^2(\Omega)} + \epsilon \|g\|_{H^1(\Omega)}\\
& \qquad \lesssim \| g\|_{\alpha^{-1} L^2(\Omega)\cap \alpha^{-1/2} H^1(\Omega) }
\|p_{\mathcal{F}}\|_{\alpha^{1/2} H^1(\Omega) + \alpha H^2(\Omega)} + \epsilon h_k^{-1}\alpha^{1/4}
\|\mathcal{F}\|_{(X_{-,k})^*}
\end{align*}
is satisfied. The same can be done for $(\zeta,\Delta^{\epsilon}\lambda_{\mathcal{F}})_{L^2(\Omega)}$ and
$(\chi,\nabla\cdot\nabla^{\epsilon}\mu_{\mathcal{F}})_{L^2(\Omega)}$.
Combining these results, we immediately obtain~\eqref{eq:aux2a}, which
finishes the proof.
\qed\end{proof}
\begin{theorem}
Condition~\citecond{(A4)} is satisfied.
\end{theorem}
\begin{proof}
By combining~\eqref{eq:aux1} and~\eqref{eq:aux2}, we obtain
\begin{align*}
\|x_{\mathcal{F}}\|_{X_{+,k}}^2 \le C \left( \|\mathcal{F}\|_{(X_{-,k})^*}^2 +
\|\mathcal{F}\|_{(X_{-,k})^*} \|x_{\mathcal{F}}\|_{X_{+,k}} \right)
\end{align*}
for some constant $C>0$ (independent of $k$ and $\alpha$), which implies
\begin{align*}
\|x_{\mathcal{F}}\|_{X_{+,k}} \le \frac12 \left(C+\sqrt{4C+C^2}\right) \|\mathcal{F}\|_{(X_{-,k})^*},
\end{align*}
i.e.~\eqref{eq:a4}, which finishes the proof.
\qed\end{proof}
Thus, we have shown condition~\citecond{(A4)}, and Theorem~\ref{thrm:main}
implies the approximation property.
Note that we have now shown the approximation property in the
norm $\|\cdot\|_{X_{-,k}}$, i.e.,~\eqref{eq:apprp:thrm1}. The
next step is to show the approximation property
in the norm-pair $|\hspace{-.1em}|\hspace{-.1em}| \cdot|\hspace{-.1em}|\hspace{-.1em}|_{0,k}$ and $|\hspace{-.1em}|\hspace{-.1em}| \cdot|\hspace{-.1em}|\hspace{-.1em}|_{2,k}$, i.e.,~\eqref{eq:apprp}.
To show~\eqref{eq:apprp}, the following lemma is sufficient.
\begin{lemma}\label{lem:equiv}
The inequality
\begin{equation}\label{eq:equiv}
|\hspace{-.1em}|\hspace{-.1em}| x_k|\hspace{-.1em}|\hspace{-.1em}|_{0,k} \lesssim \|x_k\|_{X_{-,k}}
\end{equation}
is satisfied for all $x_k \in X_k$.
\end{lemma}
\begin{proof}
The proof of this lemma is based on Lemma~4.7 in~\cite{Takacs:2013}.
Lemma~4.7 states (in the notation of the present paper and for
the choice $\beta:=\alpha^{-1/2}$) that
\begin{equation}\nonumber
\alpha^{-1/2} |\hspace{-.1em}|\hspace{-.1em}| y_k|\hspace{-.1em}|\hspace{-.1em}|_{Y,0,k}^2 \lesssim \alpha^{-1/2} \|y_k\|_{Y_{-,k}}^2
\end{equation}
is satisfied for all $y_k \in Y_k$. As, for $x_k=(y_k,\psi_k)\in X_k=Y_k \times Y_k$, both
\begin{equation}\nonumber
|\hspace{-.1em}|\hspace{-.1em}| (y_k,\psi_k) |\hspace{-.1em}|\hspace{-.1em}|_{0,k}^2 = |\hspace{-.1em}|\hspace{-.1em}| y_k |\hspace{-.1em}|\hspace{-.1em}|_{Y,0,k}^2
+\alpha^{-1} |\hspace{-.1em}|\hspace{-.1em}| \psi_k |\hspace{-.1em}|\hspace{-.1em}|_{Y,0,k}^2
\end{equation}
and
\begin{equation}\nonumber
\| (y_k,\psi_k) \|_{X_{-,k}}^2 = \| y_k \|_{Y_{-,k}}^2
+\alpha^{-1} \| \psi_k \|_{Y_{-,k}}^2,
\end{equation}
are satisfied by definition, \eqref{eq:equiv} follows immediately.
\qed\end{proof}
Thus, we have shown the approximation property~\eqref{eq:apprp} and obtain the following
overall convergence result.
\begin{theorem}
Assume that
\begin{itemize}
\item the regularity assumptions~\citecond{(R)} and~\citecond{(R1)} are
satisfied on the domain~$\Omega$,
\item the problem is discretized using the Taylor-Hood element and
\item the normal equation smoother introduced above is used as smoother.
\end{itemize}
Then the two-grid method converges if sufficiently many smoothing
steps are applied, i.e., we have
\begin{equation*}
|\hspace{-.1em}|\hspace{-.1em}| x_k^{(1)}-x_k^*|\hspace{-.1em}|\hspace{-.1em}|_{0,k} \le q(\nu) |\hspace{-.1em}|\hspace{-.1em}| x_k^{(0)}-x_k^*|\hspace{-.1em}|\hspace{-.1em}|_{0,k},
\end{equation*}
with $q(\nu):=C_S\, C_A\, \nu^{-1/2}$,
where the constants~$C_A$ and~$C_S$ are independent of the grid
level~$k$ and the choice of the parameter~$\alpha$.
\end{theorem}
Note that we have shown that the method converges in the norm $|\hspace{-.1em}|\hspace{-.1em}| \cdot |\hspace{-.1em}|\hspace{-.1em}|_{0,k}$ if sufficiently
many pre-smoothing steps are applied. The application of post-smoothing steps does not impair
the convergence because the proposed smoother is power-bounded. Moreover, if we assume that only post-smoothing steps
are applied, the combination of smoothing property and approximation property (which now have to be combined
in the inverse order) leads to convergence in the residual norm $|\hspace{-.1em}|\hspace{-.1em}| \cdot |\hspace{-.1em}|\hspace{-.1em}|_{2,k}$. Again, due
to power-boundedness of the smoother, the method stays convergent if, besides sufficiently many post-smoothing
steps, also pre-smoothing steps are applied.
For all the mentioned cases, the convergence of the W-cycle multigrid method follows under weak
assumptions, cf.~\cite{Hackbusch:1985}.
\section{Numerical Results}\label{sec:4}
In this section, we illustrate the convergence theory presented within
this paper with numerical experiments.
The domain~$\Omega$ was chosen to be the unit square $\Omega:=(0,1)^2$.
As mentioned in Section~\ref{sec:2}, the weak inf-sup-condition~\citecond{(S)}
can be shown for the Taylor-Hood element only if at least one vertex of each element
is located in the interior of the domain~$\Omega$. As this is not satisfied for the standard
decomposition of the unit square into two triangular elements, we have chosen the coarsest
grid (grid level~$k=0$) to be a decomposition of the domain~$\Omega$ into $8$ triangles,
cf. Fig.~\ref{fig:1}. The grid levels~$k=1,2,\ldots$ were constructed
by uniform refinement, i.e., every triangle was decomposed into four
subtriangles.
The desired velocity field (desired state) $u_D$ was chosen to be
\begin{equation*}
u_D(\xi_1,\xi_2) := \left\{
\begin{array}{ll}
\left(\begin{array}{c}\xi_2-\tfrac12\\\tfrac12-\xi_1\end{array}\right) &
\quad\mbox{for } \sqrt{\left(\xi_1-\tfrac12\right)^2+\left(\xi_2-\tfrac12\right)^2} < \frac{4}{5}\\
0 & \quad\mbox{otherwise.}
\end{array}
\right.
\end{equation*}
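In code, this desired state reads as follows (a direct transcription of the formula above, purely for illustration):
\begin{verbatim}
import numpy as np

def u_D(xi1, xi2):
    # Rigid rotation around (1/2, 1/2) inside the circle, zero outside;
    # the jump makes u_D a genuine L^2-function.
    r = np.hypot(xi1 - 0.5, xi2 - 0.5)
    if r < 4.0 / 5.0:
        return np.array([xi2 - 0.5, 0.5 - xi1])
    return np.zeros(2)
\end{verbatim}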
The desired velocity field is visualized in the left-hand picture of both Fig.~\ref{fig:3} and Fig.~\ref{fig:4}.
For solving the discretized KKT-system, we have used the proposed W-cycle multigrid method. We have applied
$\nu$ pre- and $\nu$ post-smoothing steps using the normal equation smoother. The
matrix $\mathcal{L}_k$ was chosen as follows
\begin{equation}\label{eq:defLnr}
\mathcal{L}_k :=
\left(
\begin{array}{cccc}
\hat{A}_k \\
& \hat{S}_k \\
&& \alpha^{-1} \hat{A}_k \\
&&& \alpha^{-1} \hat{S}_k
\end{array}
\right),
\end{equation}
where $\hat{A}_k := \mbox{diag }(M_{U,k}+\alpha^{1/2}K_{U,k})$ and $\hat{S}_k:=\alpha \,\mbox{diag }(D_k\hat{A}_k^{-1}D_k^T)$.
Here, $M_{U,k}$ and $K_{U,k}$ are the mass matrix and the stiffness matrix, representing the $L^2$-inner product
and the $H^1$-inner product in $U_k$, respectively. The matrix $D_k$ represents the bilinear form $d(u_k,p_k)=(\nabla\cdot u_k,p_k)_{L^2(\Omega)}$
on $U_k \times P_k$. Note that the matrix $\mathcal{L}_k$, introduced above, is spectrally equivalent to the matrix
$\mathcal{L}_k$, introduced in Section~\ref{sec:3}. Therefore, the choice proposed above is also covered by the
convergence theory. The damping parameter was chosen to be~$\tau=0.35$ for all grid levels~$k$ and all choices of~$\alpha$.
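The diagonal blocks of this choice of $\mathcal{L}_k$ can be set up cheaply, since $\hat{A}_k$ is diagonal. A short Python sketch (our illustration; \texttt{M\_U}, \texttt{K\_U} and \texttt{D} are the matrices named above):
\begin{verbatim}
import numpy as np

def build_L_diag(M_U, K_U, D, alpha):
    # A_hat = diag(M_U + alpha^{1/2} K_U), stored as a vector:
    A_hat = (M_U + np.sqrt(alpha) * K_U).diagonal()
    # diag(D A_hat^{-1} D^T)_ii = sum_j D_ij^2 / A_hat_jj:
    S_hat = alpha * np.asarray(D.multiply(D) @ (1.0 / A_hat)).ravel()
    # Diagonal of L_k in the block order (u, p, lambda, mu):
    return np.concatenate([A_hat, S_hat, A_hat / alpha, S_hat / alpha])
\end{verbatim}
The returned vector can be used directly as \texttt{L\_diag} in the smoother sketched in Section~\ref{sec:3}.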
\begin{figure}[ht]%
\begin{center}
\includegraphics[scale=.4]{discretization}\qquad
\includegraphics[scale=.4]{discretization2}
\caption{Discretization on grid levels $k=1$ and $k=2$, where the squares denote the degrees of freedom of (the components of)
$u$ and $\lambda$ and the dots denote the degrees of freedom of $p$ and $\mu$}
\label{fig:1}
\end{center}
\end{figure}%
\begin{figure}[ht]%
\begin{center}
\includegraphics[scale=.8]{desired}
\includegraphics[scale=.8]{state0}
\includegraphics[scale=.8]{control0}
\caption{Desired velocity field $u_D$, optimal velocity field $u$ and optimal control $f$ for $\alpha = 1$ on grid level $k=3$}
\label{fig:3}
\end{center}
\end{figure}%
\begin{figure}[ht]%
\begin{center}
\includegraphics[scale=.8]{desired}
\includegraphics[scale=.8]{state12}
\includegraphics[scale=.8]{control12}
\caption{Desired velocity field $u_D$, optimal velocity field $u$ and optimal control $f$ for $\alpha = 10^{-12}$ on grid level $k=3$}
\label{fig:4}
\end{center}
\end{figure}%
The solution of the optimal control problem can be seen in Fig.~\ref{fig:3} and \ref{fig:4}. Note that the desired
velocity field is merely an $L^2$-function (due to the jump) and therefore cannot be matched exactly by the optimal velocity
field, which is an $H^1$-function. For the case $\alpha=1$, we observe that the optimal velocity field and the control
are rather smooth. (The control does not take large values.) For small values of $\alpha$, like $\alpha=10^{-12}$,
the desired velocity field is approximated quite well, cf. Fig.~\ref{fig:4}. This is achieved
by rather large values of the control (high forces) which are concentrated on the region where the desired velocity field has its
jump. Forces have to be applied with the same orientation as the desired state, as well as with the opposite orientation. (In
the picture mainly the forces with opposite orientation can be seen.) As mentioned
in the introduction, we are interested in a fast linear solver which also works well for such small choices of $\alpha$.
The number of iterations and the convergence rate were measured as
follows: we start with $ x_k^{(0)}=0$ and measure the reduction of the error in each step
using the residual norm $|\hspace{-.1em}|\hspace{-.1em}| \cdot |\hspace{-.1em}|\hspace{-.1em}|_{2,k}$. The iteration was stopped when
the initial error was reduced by a factor of $\epsilon = 10^{-6}$. The convergence rate
$q$ is the mean convergence rate of this iteration, i.e.,
\begin{equation*}
q = \left(\frac{|\hspace{-.1em}|\hspace{-.1em}| x_k^{(n)}-x_k^*|\hspace{-.1em}|\hspace{-.1em}|_{2,k}}{|\hspace{-.1em}|\hspace{-.1em}| x_k^{(0)}-x_k^*|\hspace{-.1em}|\hspace{-.1em}|_{2,k}}\right)^{1/n},
\end{equation*}
where $n$ is the number of iterations needed to reach the stopping
criterion. Here, $x_k^*$ is the exact solution and $x_k^{(i)}$ is the $i$-th iterate.
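Since $\mathcal{A}_k \underline{x}_k^* = \underline{\mathpzc{f}}_k$, the error in the residual norm can be computed without knowing the exact solution: $|\hspace{-.1em}|\hspace{-.1em}| x_k^{(i)}-x_k^*|\hspace{-.1em}|\hspace{-.1em}|_{2,k} = \|\mathcal{A}_k \underline{x}_k^{(i)} - \underline{\mathpzc{f}}_k\|_{\mathcal{L}_k^{-1}}$. A short Python sketch of this measurement (illustrative only, for a diagonal choice of $\mathcal{L}_k$):
\begin{verbatim}
import numpy as np

def error_norm_2k(A_k, L_diag, x, b):
    # |||x - x*|||_{2,k} = || A x - b ||_{L^{-1}} for diagonal L.
    r = A_k @ x - b
    return np.sqrt(r @ (r / L_diag))

def measure_rate(A_k, L_diag, b, mg_step, eps=1e-6):
    x = np.zeros_like(b)
    e0 = error_norm_2k(A_k, L_diag, x, b)
    n, e = 0, e0
    while e > eps * e0:
        x = mg_step(x, b)   # one W-cycle iterate as described above
        n += 1
        e = error_norm_2k(A_k, L_diag, x, b)
    return n, (e / e0) ** (1.0 / n)
\end{verbatim}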
\begin{table}[ht]%
\begin{center}
\begin{tabular}{p{0.2cm}p{1.cm}p{0.2cm}p{1.cm}p{0.2cm}p{1.cm}p{0.2cm}p{1.cm}}
\hline\noalign{\smallskip}
\multicolumn{2}{l}{$\nu=1+1$} &
\multicolumn{2}{l}{$\nu=2+2$} &
\multicolumn{2}{l}{$\nu=4+4$} &
\multicolumn{2}{l}{$\nu=8+8$} \\
$n$ & $q$& $n$ & $q$& $n$ & $q$& $n$ & $q$\\
\noalign{\smallskip}\hline\noalign{\smallskip}
$61$&$0.796$&$32$&$0.647$&$21$&$0.507$&$15$&$0.390$\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\end{center}
\caption{Number of iterations $n$ and convergence rate $q$ depending on $\nu=\nu_{pre}+\nu_{post}$, the
number of pre- and post-smoothing steps, on grid level $k=4$ for $\alpha=1$}
\label{tab:0}
\end{table}%
\begin{table}[ht]%
\begin{center}
\begin{tabular}{p{1.00cm}p{0.2cm}p{1.cm}p{0.2cm}p{1.cm}p{0.2cm}p{1.cm}p{0.2cm}p{1.cm}p{0.2cm}p{1.cm}}
\hline\noalign{\smallskip}
& \multicolumn{2}{l}{$\alpha=1$} &
\multicolumn{2}{l}{$\alpha=10^{-3}$} &
\multicolumn{2}{l}{$\alpha=10^{-6}$} &
\multicolumn{2}{l}{$\alpha=10^{-9}$} &
\multicolumn{2}{l}{$\alpha=10^{-12}$} \\
& $n$ & $q$& $n$ & $q$& $n$ & $q$& $n$ & $q$ &$n$ & $q$\\
\noalign{\smallskip}\hline\noalign{\smallskip}
$k=3$ &$32$&$0.648$&$33$&$0.651$&$35$&$0.673$&$48$&$0.749$&$51$&$0.760$\\
$k=4$ &$32$&$0.647$&$32$&$0.646$&$33$&$0.657$&$46$&$0.738$&$73$&$0.827$\\
$k=5$ &$32$&$0.645$&$32$&$0.644$&$32$&$0.644$&$39$&$0.697$&$60$&$0.793$\\
$k=6$ &$31$&$0.636$&$31$&$0.636$&$31$&$0.635$&$32$&$0.647$&$46$&$0.739$\\
$k=7$ &$29$&$0.620$&$29$&$0.620$&$29$&$0.618$&$29$&$0.621$&$42$&$0.716$\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\end{center}
\caption{Number of iterations $n$ and convergence rate $q$ for $\nu=2+2$ pre- and post-smoothing steps}
\label{tab:1}
\end{table}%
In Table~\ref{tab:0} we compare for a fixed grid level (level~$k=4$) and a fixed choice
$\alpha=1$ the convergence rates for several choices of $\nu$, the number of pre- and
post-smoothing steps. We see that the convergence rates behave approximately like~$\nu^{-1/2}$.
This is consistent with the theory, which
guarantees that the convergence rate is bounded by $C\,\nu^{-1/2}$; note that this bound only describes
the asymptotic behavior.
In Table~\ref{tab:1} we compare various grid levels~$k$ and choices of the parameter~$\alpha$.
Here, we have used a fixed choice of $\nu=2+2$ pre- and post-smoothing steps. First we observe
that the number of iterations seems to be bounded uniformly in the grid level~$k$, which indicates
an optimal convergence behavior. Moreover, we see that the number of iterations is also
well-bounded for a wide range of choices of the parameter~$\alpha$, i.e., we observe
also robust convergence as predicted by the convergence theory.
It has to be mentioned that, for the model problem, also the (more efficient) V-cycle multigrid method
converges with rates comparable to the convergence rates of the W-cycle multigrid method. However,
the V-cycle is not covered by the convergence theory.
\section{Conclusions and Further Work}\label{sec:5}
In the present paper we have shown that the construction of an
all-at-once multigrid method for a Stokes control problem is possible. Here, a preconditioned
normal equation smoother was chosen. The overall numerical complexity of this method seems to be
comparable to block-preconditioned MINRES iterations, cf., e.g., Table~4.2
in~\cite{Zulehner:2010}, which shows the number of MINRES iterations needed. (Note that in each
MINRES step one multigrid cycle is applied to each component of the overall block-matrix, i.e.,
to the velocity~$u$, the pressure~$p$, the adjoint velocity~$\lambda$ and the adjoint pressure~$\mu$.)
One advantage of the all-at-once multigrid method, introduced in the present paper, is the fact
that no outer iteration is necessary: the multigrid iteration is a linear iteration scheme
which can be applied directly to
solve the problem. As we could show the approximation property for a particular choice of
norms, the construction of other smoothers is of particular interest. The convergence rates we
have observed in this paper for a multigrid method with normal equation smoothing
are comparable with the convergence rates observed in~\cite{Takacs:Zulehner:2012}
for a multigrid method with normal equation smoothing applied to an
optimal control problem with elliptic state equation. For that problem we have
seen that other smoothers are available which lead to much faster convergence rates,
cf.~\cite{Takacs:Zulehner:2011} and others. Similar improvements were possible
for the generalized Stokes problem, cf.~\cite{Takacs:2013}. Therefore, it seems to be reasonable
to construct faster smoothers also for the Stokes control problem.
\textbf{Acknowledgements.} The author thanks Markus Kollmann for providing parts of the code used to compute
the numerical results presented in this paper. Moreover, the support of the numerical analysis
group of the Mathematical Institute, University of Oxford, is gratefully acknowledged.
\bibliographystyle{plain}