\section{Introduction \label{sec:Intro}}
Spin light-emitting diodes
(spin-LEDs),~\cite{fiederling:1999a,ohno:1999a,awschalom:02,pryor:03,guendogdu:04,seufert,book:02,kroutvar:2004a}
in which electron recombination is accompanied by the emission of a photon with well-defined circular
polarization, provide an efficient interface between electron spins and photons. The operation of such devices
at the single-photon level would allow one to convert the quantum state of an electron encoded in its spin state
into that of a photon with a wide range of possible applications. In view of quantum information schemes,
converting spin into photon quantum states corresponds to a conversion of localized into flying qubits, which
can be transmitted over long distances and could overcome limitations caused by the short-range nature of the
electron exchange interaction.~\cite{book:02} On a more fundamental level, the photon polarization can be
readily measured experimentally such that an interface between spins and photons will allow one to measure
quantum properties of the spin system via the photons generated on recombination. More specifically,
entanglement of electron spins could be demonstrated not only in current noise~\cite{burkard:2000,egues:2002}
but also via photon polarizations which allows one to test Bell's inequalities.~\cite{bell:65}
In this work, we show that nonlocal spin-entangled electron pairs that recombine in single quantum dots
contained in spatially separated spin-LEDs are converted into polarization-entangled photon states. In addition
to its applications in quantum communication, this transfer can be used to characterize the output of an
electron spin
entangler~\cite{andreev:01,lesovik:01,recher:02,bena:02,bouchiat:03,saraga:03,recher:03,saraga2:04} in a setup
as shown in Fig.~\ref{fig:setup}. Furthermore, such a setup acts as a deterministic source of
polarization-entangled photon pairs.
Recently, the decay of biexcitons in single quantum dots has been proposed
for the production of entangled photons.~\cite{benson:00,moreau:01} However, several
experiments~\cite{kiraz:02,santori:02,stevenson:2002a,zwiller:02,ulrich:2003a} have only shown polarization
correlation but not entanglement of the photons. The fine structure splitting $\delta_{\mathrm{ehx}}$ of the
bright exciton ground state~\cite{takagahara:00} has been identified to be crucial for the lack of
entanglement: Firstly, the polarization-entangled photons are also entangled in energy if
$\delta_{\mathrm{ehx}}$ is larger than the exciton
linewidth.~\cite{stace:2003a} Secondly, for $\delta_{\mathrm{ehx}}\neq 0$ the exciton spin relaxation rate due
to phonons $1/T_{1,X}$ is enhanced~\cite{tsitsishvili:2003a} and leads to an increased decoherence rate
$1/T_{2,X} = 1/(2T_{1,X}) + 1/T_{\varphi,X}$, where $1/T_{\varphi,X}$ is the pure decoherence rate. To overcome
these difficulties we propose to use positively charged excitons ($X^+$), for which $\delta_{\mathrm{ehx}}= 0$
up to small corrections. Moreover, we demonstrate that the antisymmetric hole ground state of the $X^+$ enables
the production of entangled four-photon states. We study the transfer of entanglement for different photon
emission directions by calculating the von Neumann entropy. Due to quantum mechanical interference, the fidelity
of this process approaches unity not only for photon emission along the spin quantization axis, but for a
continuous set of observation directions. The relaxation and decoherence of the electron spins in the leads are
modeled using a master equation and quantified by the fidelity of the entangled state.
\begin{figure}[t]
\centerline{\includegraphics[width=7.5cm]{setup_fat_fig1.eps}
} \caption{(Color online) Schematic setup for the transfer of entanglement between electrons and photons. An
electron entangler (gray box) injects a pair of spin-entangled electrons into two current leads. The electrons
recombine individually in one quantum dot located in the left (L) and one in the right (R) spin-LED and give
rise to the emission of two photons.} \label{fig:setup}
\end{figure}
This work is organized as follows. In Sec.~\ref{sec:Dynamics} we describe the
dynamics of the conversion process. In Sec.~\ref{sec:Optic} we focus on the
microscopic expressions for the involved optical transitions, leading to entangled
four-photon and two-photon states. In Sec.~\ref{sec:Entanglement} we quantify the
entanglement of the two-photon state as a function of the emission angles.
We conclude in Sec.~\ref{sec:Concl}.
\section{Dynamics of the conversion process \label{sec:Dynamics}}
The effective Hamiltonian of the system is given by
\begin{equation}
H=H_{L}+H_{R}+H_{\mathrm{rad}}+H_{\mathrm{int}},
\end{equation}
where $H_{\alpha}=\mathbf{p}^{2}/2m+V_{\mathrm{qd}}(\mathbf{r})$ is the Hamiltonian of the quantum dot
$\alpha=L,R$ with confinement potential $V_{\mathrm{qd}}(\mathbf{r})$. The Hamiltonian of the radiation field is
$H_{\mathrm{rad}}=\sum_{\mathbf{k},\lambda}\hbar\omega_{k}a_{\mathbf{k}\lambda}^{\dagger}a_{\mathbf{k}\lambda}$
and $H_{\mathrm{int}}=-e\mathbf{A\cdot p}/m_{0}c=H_{\mathrm{em}}+H.c.$ is the optical interaction term, which is
linear in both the vector potential $\mathbf{A}$ and the electron momentum $\mathbf{p}$ and can be decomposed
into a photon emission term $H_{\mathrm{em}}$ and its Hermitian conjugate. For simplicity, we assume that the
dots $L$ and $R$ are identical, with cubic crystal structure and with aligned main crystal axes. We choose the
$z$ axis parallel to the quantum dot growth direction (e.g., [001]). If the quantum dot confinement is stronger
in the $z$ direction than in the $xy$ plane, $z$ defines the spin quantization axis and heavy-hole (hh) and
light-hole (lh) states are energetically split by $\Delta_{\mathrm{hh-lh}}$ (typically
$\Delta_{\mathrm{hh-lh}}\sim 10\, \mathrm{meV}$). We consider a hh ground state, with angular momentum
projection $\pm 3/2$ in terms of electron quantum numbers. We further focus on the strong-confinement regime,
where the dot radius is smaller than the exciton Bohr radius.
The quantum dots in both spin-LEDs are prepared in a state $\ket{\chi_{\alpha}}$, where two excess holes occupy
the lowest hh level in each dot. This initial state, which can be generated by applying an appropriate bias
voltage across the LED, has several advantages. Firstly, electrons with arbitrary spin states can recombine
optically, as demonstrated for electron spin detection in a recent experiment.~\cite{guendogdu:04} Secondly, the
$z$ component of the total hole spin vanishes. This is a consequence of the fact that in quantum dots the hh-lh
exciton mixing due to the electron-hole exchange interaction $\Delta_{\mathrm{ehx}}$ is determined by a small
parameter $\Delta_{\mathrm{ehx}}/\Delta_{\mathrm{hh-lh}}\sim 0.01$. Thus, injected spin-polarized electrons give
rise to circularly polarized $X^+$ luminescence. This remains true for dots with asymmetric confinement in the
$xy$ plane, in stark contrast to the case with an electron and only one hole in the dot,~\cite{takagahara:00}
where the good exciton eigenstates are horizontally polarized and are split in energy typically by
$\delta_{\mathrm{ehx}}\sim 0.1\,\mathrm{meV}$. Thus, the electron-hole exchange interaction can be neutralized
by initially providing {\it two} holes. Interband mixing (e.g., hh and lh states) in strongly anisotropic dots
reduces the maximum circular polarization of photons emitted from spin-polarized electrons \cite{pryor:03} and
reduces the fidelity of our scheme. However, because the interband transition probability for lh states is three
times smaller than that for hh states, and hh-lh mixing is typically controlled by some small para\-meter in
slightly elliptical dots,~\cite{takagahara:00} we neglect lh transitions.
\subsection{Electron injection and photon emission}
We first describe the dynamics of the electron injection and recombination in the two dots using a master
equation. The rate for the injection and the subsequent relaxation of electrons into the conduction band ground
state in the dot $\alpha$ is denoted by $W_{e\alpha}$. It has been demonstrated that this entire process is spin
conserving and occurs much faster than the optical recombination~\cite{seufert,guendogdu:04}, which is described
by the rates $W_{p\alpha}$. Typically, $W_{p\alpha}\sim 1\:(\mathrm{ns})^{-1}$ and $W_{e\alpha}\sim
0.1\:(\mathrm{ps})^{-1}$ for the incoherent transition rates. We solve the master equation for the classical
occupation probabilities and obtain the probability that two photons are emitted after the injection of two
electrons into the dots at $t=0$,
\begin{equation}
P_{2p} = \prod_{\alpha = L,R}\frac{W_{e\alpha}(1-e^{-tW_{p\alpha}})-
W_{p\alpha}(1-e^{-tW_{e\alpha}})}{W_{e\alpha}-W_{p\alpha}}.
\end{equation}
For $W_{p\alpha} \ll W_{e\alpha}$, $P_{2p}\approx \prod_{\alpha = L,R}(1-e^{-tW_{p\alpha}})$. After photon
emission, bipartite photon entanglement is achieved by a measurement of the hole spins as we describe below and
the initial state is finally restored by injection of two holes into each of the two dots. We estimate the
production rate of entangled photons in a setup to test some of the proposed electron
entanglers.\cite{andreev:01,lesovik:01,recher:02,bena:02,bouchiat:03,saraga:03,recher:03,saraga2:04} For
example, electron spin singlets $|\Psi^-\rangle =(\ket{\! \uparrow\downarrow} -\ket{\! \downarrow\uparrow})/\sqrt{2}$ are produced by the Andreev
entangler~\cite{andreev:01} with an average time separation $\Delta t \sim 10^{-5}\mathrm{s}$, while for the
entangler based on three quantum dots,~\cite{saraga:03} $\Delta t \sim 10^{-8}\mathrm{s}$. The two electrons of
a singlet typically are injected into the current leads with a relative time delay $\tau \simeq
10^{-13}\mathrm{s}$ for both of these entanglers. Because $\tau, W_{p\alpha}^{-1} \ll \Delta t$, photons
originating from a single pair of entangled electrons can be identified with high reliability. In the steady
state, the generation rate of entangled photons is determined by the rate at which entangled electron pairs
leave the entangler, $1/\Delta t$.
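As a numerical sanity check (not part of the original derivation), the two-photon probability $P_{2p}$ and its fast-injection limit can be compared directly. The sketch below uses the typical rate scales quoted above; the time argument is an illustrative assumption.

```python
import math

def branch_probability(t, w_e, w_p):
    # One factor of P_2p: probability that, by time t, the electron has been
    # injected and relaxed (rate w_e) and has recombined radiatively (rate w_p).
    return (w_e * (1 - math.exp(-t * w_p))
            - w_p * (1 - math.exp(-t * w_e))) / (w_e - w_p)

# Typical incoherent rates from the text: W_e ~ 0.1/ps = 100/ns, W_p ~ 1/ns
w_e, w_p = 100.0, 1.0   # in 1/ns
t = 3.0                 # ns (illustrative)

p_2p = branch_probability(t, w_e, w_p) ** 2        # product over branches L, R
p_2p_limit = (1 - math.exp(-t * w_p)) ** 2         # limit for W_p << W_e

print(p_2p, p_2p_limit)
```

For these rates the exact expression and the limiting form agree to roughly one part in $10^{3}$, confirming that optical recombination, not injection, is the bottleneck of the conversion.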
\subsection{Electron spin dynamics}
Relaxation and decoherence are taken into account for the two spins by the single-spin Bloch
equation.~\cite{burkard:2003} Given that the electrons are in different leads, they interact with different
environments (during times $t$ and $t'$, respectively). Therefore, we consider different magnetic fields
$\mathbf{h}$ and $\mathbf{h'}$, enclosing an angle $\beta$, each acting on an individual spin. We calculate the
two-spin density matrix $\chi(t,t')$ and obtain for the singlet fidelity
$f=4\langle\Psi^-\left|\chi(t,t')\right|\Psi^-\rangle$ (given in Ref.~\cite{burkard:2003} for $t=t'$ and $\beta
=0$),
\begin{eqnarray}\nonumber
f & = & 1-\cos\beta\, a a'P P'+
e_1\left[e'_2\sin^2\beta\,\cos(h't')
+e'_1\cos^2\beta \right]\\ \nonumber
& & + e_2e'_1\sin^2\beta\,\cos(ht)
+ e_2e'_2\left[2\,\cos\beta\,\sin(ht)\,\sin(h't')
\right.\\
& & + \left.\left(\cos^2\beta\,+1\right)\cos(ht)\,\cos(h't')
\right] ,
\end{eqnarray}
where for the first (second) spin $e_i=e^{-t/T_i}$ ($e'_i=e^{-t'/T'_i}$), $a=1-e_1$ ( $a'=1-e'_1$), $P$ ($P'$)
is the equilibrium polarization, and $T_2$ and $T_1$ ($T'_2$ and $T'_1$) are the spin decoherence and
relaxation times, respectively. For $t \ll T_1,T_2$ and $t' \ll T'_1,T'_2$ (in bulk GaAs $T_2\sim
100\,\mathrm{ns}$ has been measured \cite{kikkawa:1998a} and, typically, $T_1\gg T_2$), the electrons form a
nonlocal spin-entangled state after their injection into the dots $L$ and $R$ and after their subsequent
relaxation to the single-electron orbital ground states $\phi_{c\alpha}(\mathbf{r}_{c\alpha},\sigma)$. A local
rotation of one of the two spins in the leads (for $\mathbf{h} \neq \mathbf{h}'$) enables a transformation of
$|\Psi^-\rangle$ into another (maximally entangled) Bell state $|\Psi^+\rangle =(\ket{\! \uparrow\downarrow} +
\ket{\! \downarrow\uparrow})/\sqrt{2}$ or $\ket{\Phi^{\pm}}=(\ket{\! \uparrow\uparrow}\pm\ket{\! \downarrow\downarrow})/\sqrt{2}$. This can be achieved, e.g., by
controlling the local Rashba spin-orbit interaction in the current leads.~\cite{egues:2002,burkard:2003}
\section{Optical transitions \label{sec:Optic}}
The optical recombination processes of the two electrons occur
independently, except for the entanglement of the spin wave functions.
We consider one single branch $\alpha=L,R$ of the apparatus and omit
the index $\alpha$. The state of the single quantum dot which is
charged with two hhs in the orbital ground state and into which
a single electron with spin $\sigma$ has been injected is given by
\begin{equation}
\ket{e,\sigma} = \int\mathrm{d}^{3}r_{c}
\phi_{c}^{*}(\mathbf{r}_{c},\sigma) b_{c\sigma}^{\dagger}(\mathbf{r}_{c})\ket{\chi}.
\label{eq:exstate}
\end{equation}
Here, $b_{c\sigma}^{\dagger}(\mathbf{r}_{c})$ creates an electron with
spin $S_{z}=\sigma/2=\pm1/2$
at $\mathbf{r}_c$ in the ground state of the dot,
$\ket{\chi}=\sum_{\tau\neq\tau'}\int \mathrm{d}^{3}r_{v1} \mathrm{d}^{3}r_{v2}\phi_{v}(\mathbf{r}_{v1},\tau;\mathbf{r}_{v2},\tau') b_{v\tau}(\mathbf{r}_{v1})b_{v\tau'}(\mathbf{r}_{v2})\ket{g}$,
where $\ket{g}$ is the electrostatically neutral ground state of the
quantum dot, and $\phi_{v}(\mathbf{r}_{v1},\tau;\mathbf{r}_{v2},\tau')$
is the orbital part of the two-hole wave function. In the strong-confinement
regime where Coulomb correlations are negligible, $\phi_{v}$ is a product
of the single-particle valence band states. The labels $\tau,\,\tau'$
denote the hh spin component $S_{z} = \tau /2 = \pm1/2$, which factors out
for angular momentum $J_{z}=\pm 3/2$. We now calculate the emission matrix
element $\bra{f}H_{\mathrm{em}}\ket{i}$ with initial state
$\ket{i}=\ket{e,\sigma}\otimes\ket{\dots,n_{\mathbf{k}\lambda},\dots}$
and final state $\ket{f}=b_{v\tau'}(\mathbf{r}_{v2})\ket{g}\otimes\ket{\dots,n_{\mathbf{k}\lambda}+1,\dots}$,
where $\ket{\dots,n_{\mathbf{k}\lambda},\dots}$ is a Fock state of the
electromagnetic field, typically the photon vacuum. Because of quantum
mechanical selection rules, the optical transitions connect only states
with the same spin such that $\tau'\neq\sigma$.
In the envelope-function and dipole approximations,~\cite{biexcitons}
\begin{equation}
|\bra{f}H_{\mathrm{em}}\ket{i}| =
\frac{e}{m_{0}c}\, A_{0}(\omega_{k})\sqrt{n_{\mathbf{k}\lambda}+1}\,
\left|\mathbf{e}_{\mathbf{k}\lambda}^{*}\cdot\mathbf{p}_{cv}^{*} C_{eh}\right|,
\label{eq:emmatrixelement}
\end{equation}
where $\mathbf{p}_{cv}^{*}=\mathbf{p}_{vc}$ is the inter-band momentum
matrix element,
$\mathbf{e}_{\mathbf{k}\lambda}$ is the unit polarization vector with
$\lambda=\pm1$ for circular polarization $|\sigma_{\pm}\rangle$,
$A_{0}(\omega_{k})=(\hbar/2\epsilon\epsilon_{0}\omega_{k}V)^{1/2}$,
and $C_{eh}=\int\mathrm{d}^{3}r\,\psi_{c}^{*}(\mathbf{r},\sigma)\psi_{v}(\mathbf{r},\sigma)$,
where $\psi_{n}$ is the envelope function of a carrier in the band $n=c,v$.
For cubic symmetry, $\mathbf{e}^{*}_{\mathbf{k}\lambda}\cdot\mathbf{p}_{cv}^{*} = p_{cv}(\cos\theta-\sigma\lambda)e^{-i\sigma\phi}/2 \equiv p_{cv}m_{\sigma\lambda}(\theta,\phi)$,
where $\theta$ and $\phi$ are the polar and the azimuthal angle of the
photon emission direction, respectively.
With the transition $\ket{e,\sigma}\rightarrow b_{v-\sigma}(\mathbf{r}_{v2})\ket{g}$, a photon
\begin{equation}
|\sigma,\theta,\phi\rangle=N(\theta)(m_{\sigma,+1}(\theta,\phi)|\sigma_{+}\rangle+m_{\sigma,-1}(\theta,\phi)|\sigma_{-}\rangle)
\label{eq:photonstate}
\end{equation}
is emitted into the direction $(\theta , \phi)$. Here, $N(\theta)=[2/(1+\cos^{2}\theta)]^{1/2}$ is a
normalization factor. Equation~(\ref{eq:photonstate}) shows that for $\theta=0$, a spin-up ($\sigma=+1$) electron
generates a $|\sigma_{-}\rangle$ photon, whereas a $|\sigma_{+}\rangle$ photon is obtained from a spin-down
($\sigma=-1$) electron. The admixture of the opposite circular polarization increases with $\theta$, leading to
linear polarization for $\theta=\pi/2$. For $\theta\neq0$, the spin-inverted states $|+1,\theta,\phi\rangle$ and
$|-1,\theta,\phi\rangle$ have interchanged coefficients for $|\sigma_{+}\rangle$ and $|\sigma_{-}\rangle$, up to
a relative phase determined by the (global) phase factors $\exp{(-i\sigma\phi)}$. Note that in two-photon states
the azimuthal angles can thus provide a {\it relative} phase, which we exploit below.
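The normalization and the $\theta=0$ limit of the single-photon state in Eq.~(\ref{eq:photonstate}) can be verified numerically. The following sketch is a check of these algebraic properties, not part of the original analysis; the sample angles are arbitrary.

```python
import cmath
import math

def m(sigma, lam, theta, phi):
    # m_{sigma,lambda}(theta, phi) = (cos(theta) - sigma*lambda) e^{-i sigma phi} / 2
    return (math.cos(theta) - sigma * lam) * cmath.exp(-1j * sigma * phi) / 2

def photon_amplitudes(sigma, theta, phi):
    # Amplitudes of |sigma_+> and |sigma_-> in |sigma, theta, phi>
    n = math.sqrt(2.0 / (1.0 + math.cos(theta) ** 2))   # N(theta)
    return n * m(sigma, +1, theta, phi), n * m(sigma, -1, theta, phi)

# Spin-up electron, emission along z: the photon is purely sigma_- polarized
a_plus, a_minus = photon_amplitudes(+1, 0.0, 0.0)
print(abs(a_plus), abs(a_minus))

# The state stays normalized for every emission direction
for theta in (0.0, 0.7, 1.2, math.pi / 2):
    ap, am = photon_amplitudes(+1, theta, 0.3)
    print(abs(ap) ** 2 + abs(am) ** 2)
```

At $\theta=\pi/2$ the two circular components acquire equal weight, reproducing the linear polarization stated in the text.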
\subsection{Entangled four-photon state}
The two photons produced at recombination are entangled with the two
holes which remain in the dots, due to the antisymmetric hole ground
state. By injecting a pair of electrons with spins polarized in the
$xy$ plane into the dots,\cite{twopairs} a four-photon state of
the Greenberger-Horne-Zeilinger (GHZ) type \cite{peres:1998a} can be produced if
$T_{1,X}$ and $T_{2,X}$ exceed the exciton lifetime $\tau_X$.
For each of the two polarized electrons, only the spin component along the
$z$ direction that satisfies the optical selection rules contributes
to the optical transition.
For circularly polarized photons emitted along $z$, the electron
Bell states give rise to the photon states
\begin{eqnarray}
\ket{\Psi^{\pm}} & \rightarrow & |\sigma_+ \sigma_- \sigma_- \sigma_+ \rangle
\pm |\sigma_- \sigma_+ \sigma_+ \sigma_- \rangle,\label{eq:ghz1}\\
\ket{\Phi^{\pm}} & \rightarrow & |\sigma_- \sigma_- \sigma_+ \sigma_+ \rangle
\pm |\sigma_+ \sigma_+ \sigma_- \sigma_- \rangle,\label{eq:ghz2}
\end{eqnarray}
where the first two entries indicate the first photon pair (L,R) and the third and fourth entry the second
photon pair (L,R), respectively. Normalization has been omitted for simplicity. Yet, the second photon pair is
generated by neutral excitons and is thus exposed to the same problems as the biexciton decay cascade in
asymmetric quantum dots. Here, a cavity can be used to maintain the GHZ state since the energy entanglement of
the second photon pair can be erased,~\cite{stace:2003a} and $\tau_X$ can be shortened due to the Purcell
effect to reduce exciton polarization decoherence.
\subsection{Entangled two-photon state}
Full {\it bipartite} photon entanglement of the first photon pair is obtained, e.g., by directing the second
photon pair via secondary optical paths to a linear polarization measurement which is performed {\it before} the
first photon pair is measured,~\cite{imamoglupc} see Fig.~\ref{fig:entropy} (a). Even different bases
$\{|H\rangle ,\, |V\rangle\}$ and $\{|H'\rangle ,\, |V'\rangle\}$ can be chosen for the two photons of the
second pair. Note that the electron-hole exchange interaction in elliptical dots assists this projection into
linearly polarized eigenstates (along the major and the minor axis of the dots, respectively) already during the
lifetime of the remaining two excitons. While the loss of (linear) polarization coherence is tolerable for these
excitons, $T_{1,X}>\tau_X$ is required for entanglement of the first photon pair. This suggests that the scheme
presented here can be realized with typical quantum dots, see Ref.~\cite{tsitsishvili:2003a} and references
therein.
If the second photon pair is measured in the state $|HH'\rangle$ or $|VV'\rangle$, the electron Bell states have
given rise to the two-photon states
\begin{eqnarray}
\ket{\Psi^{\pm}} & \rightarrow & |\!+\!\!1,\theta_{1},\phi_{1}\rangle_{L}\,|\!-\!\!1,\theta_{2},\phi_{2}\rangle_{R} \nonumber\\
& & \pm\,|\!-\!\!1,\theta_{1},\phi_{1}\rangle_{L}\,|\!+\!\!1,\theta_{2},\phi_{2}\rangle_{R},\label{eq:2photonstate1}\\
\ket{\Phi^{\pm}} & \rightarrow & |\!+\!\!1,\theta_{1},\phi_{1}\rangle_{L}\,|\!+\!\!1,\theta_{2},\phi_{2}\rangle_{R} \nonumber\\
& &
\pm\,|\!-\!\!1,\theta_{1},\phi_{1}\rangle_{L}\,|\!-\!\!1,\theta_{2},\phi_{2}\rangle_{R}.\label{eq:2photonstate2}
\end{eqnarray}
Here, normalization has been omitted for simplicity. If the second photon pair is measured as $|HV'\rangle$ or
$|VH'\rangle$, $\pm$ is replaced by $\mp$ on the right-hand side of Eqs.\ (\ref{eq:2photonstate1}) and
(\ref{eq:2photonstate2}).
Obviously, the above two-photon states (\ref{eq:2photonstate1}) and (\ref{eq:2photonstate2}) are maximally
entangled for $\theta_{1}=\theta_{2}=0$. For $\theta_{1}=\theta_{2}\in(0,\pi/2)$, the total relative phase
factor between the two-photon states in Eq.~(\ref{eq:2photonstate1}) is $\exp(i\gamma+2i\Delta\phi)$. Here,
$\Delta\phi=\phi_{1}-\phi_{2}$, and the relative phase of the two-electron states is $\gamma=\pi$ for
$\ket{\Psi^{-}}$ and $\gamma=0$ for $\ket{\Psi^{+}}$. For Eq.~(\ref{eq:2photonstate2}), the relative phase
factor is $\exp[i\gamma+2i(\phi_{1}+\phi_{2})]$, with $\gamma=\pi$ for $\ket{\Phi^{-}}$ and $\gamma=0$ for
$\ket{\Phi^{+}}$. By tuning the relative phase factors in Eqs.\ (\ref{eq:2photonstate1}) and
(\ref{eq:2photonstate2}) to $-1$, two circularly polarized photons can be recovered for $\theta_1=\theta_2 \in
(0,\pi/2)$ from the elliptically polarized single-photon states due to quantum mechanical
interference.\cite{ghzent} Thus, maximal entanglement is transferred from two electron spins to the
polarizations of two photons for certain ideal emission angles. For $\ket{\Psi^{-}}$ ($\ket{\Psi^{+}}$),
$\Delta\phi=0$ ($\Delta\phi=\pi/2$) needs to be satisfied $\mathrm{mod}\pi$, whereas the condition for
$\ket{\Phi^{-}}$ ($\ket{\Phi^{+}}$) is $\phi_{1}+\phi_{2}=0$ ($\phi_{1}+\phi_{2}=\pi/2$) $\mathrm{mod}\pi$. For
$\theta_1=\theta_2 = \pi /2$ these two-photon states vanish completely due to destructive interference.
\begin{figure}[t]
\centerline{\includegraphics[width=8cm]{bell_and_entropy.eps} \vspace{1mm}} \caption{(Color online) (a)
Schematic setup to obtain bipartite entanglement of photons 1 and 2 by measuring the photons 3 and 4 of the GHZ
state in bases of linear polarizations $H,V$ and $H',V'$, respectively (see the text). In (b) and (c), we show
the von Neumann entropy (b) $E = E_{\mathrm{min}}$ and (c) $E = E_{\mathrm{max}}$ as a function of the polar
angles $\theta_1$ and $\theta_2$ for photon emission.
$E$ oscillates
between (b) and (c) as a function of $\phi_1$ and $\phi_2$, as explained in the text. The photon-polarization
entanglement is maximal for $\theta_1 = \theta_2 = 0$, whereas for $\theta_i = \pi /2$ entanglement is absent.
In (c), $ E_{\mathrm{max}}=1$ for the continuous set of directions $\theta_{1}=\theta_{2}\in[0,\pi/2)$.
}
\label{fig:entropy}
\end{figure}
\section{Photon entanglement as a function of emission directions \label{sec:Entanglement}}
For arbitrary emission directions of the two photons, the degree of
polarization entanglement can be quantified by the von Neumann entropy
$E=-\text{tr}_{2}(\tilde{\rho}\log_{2}\tilde{\rho})$.
Here, $\tilde{\rho}=\text{tr}_1\rho$ is the reduced density matrix of the two-photon state $\rho$
with the trace $\text{tr}_1$ taken over photon 1. For a maximally entangled two-photon state $E=1$,
while $E=0$ represents a pure state
$\tilde{\rho}$ (which implies the absence of bipartite entanglement).
If the two electrons recombine after times much shorter than the spin
lifetimes $T_1,\,T'_1,\,T_2,\,T'_2$, $E$ oscillates for
Eq.~(\ref{eq:2photonstate1}) as a function of $\Delta\phi$ of the
two emitted photons between a minimal value,
\begin{eqnarray}
E_{\mathrm{min}} &=& \log_{2}(1+x_{1}x_{2})
- \frac{x_{1}x_{2}\log_{2}(x_{1}x_{2})}{1+x_{1}x_{2}},
\end{eqnarray}
and a maximal value,
\begin{equation}
E_{\mathrm{max}}=\log_{2}(x_{1}+x_{2})-\frac{x_{1}\log_{2}(x_{1})}{x_{1}+x_{2}}-\frac{x_{2}\log_{2}(x_{2})}{x_{1}+x_{2}},\label{eq:emax}
\end{equation}
where $x_{i}=\cos^2\theta_{i}$; $E_{\mathrm{max}}$ is obtained only for the ideal angles $\phi_1$ and $\phi_2$
mentioned above, see Fig.~\ref{fig:entropy} (b) and (c). For Eq.~(\ref{eq:2photonstate2}), $E$ oscillates
between $E_{\mathrm{min}}$ and $E_{\mathrm{max}}$ as a function of $\phi_1 + \phi_2$. As expected,
$E_{\mathrm{max}}=1$ for all $\theta_{1}=\theta_{2}\in[0,\pi/2)$. The discontinuity in $E_{\mathrm{max}}$ for
$\theta_1=\theta_2=\pi /2$ is due to the vanishing two-photon state.
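The angular dependence of $E_{\mathrm{min}}$ and $E_{\mathrm{max}}$ can be checked numerically. This sketch simply evaluates the two expressions above (staying away from $\theta_i=\pi/2$, where the two-photon state vanishes); the sample angles are arbitrary.

```python
import math

def entropies(theta1, theta2):
    # E_min and E_max from the text, with x_i = cos^2(theta_i);
    # valid for theta_i < pi/2 (the two-photon state vanishes at pi/2)
    x1, x2 = math.cos(theta1) ** 2, math.cos(theta2) ** 2
    p, s = x1 * x2, x1 + x2
    e_min = math.log2(1 + p) - p * math.log2(p) / (1 + p)
    e_max = math.log2(s) - (x1 * math.log2(x1) + x2 * math.log2(x2)) / s
    return e_min, e_max

print(entropies(0.0, 0.0))   # maximal entanglement along z
print(entropies(0.4, 0.4))   # e_max = 1 for any equal polar angles
print(entropies(0.3, 0.9))   # e_min <= e_max < 1 for unequal angles
```

For $\theta_1=\theta_2$ one has $x_1=x_2=x$, and $E_{\mathrm{max}}=\log_2(2x)-\log_2(x)=1$ identically, in agreement with the statement above.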
\section{Conclusions \label{sec:Concl}}
We have studied the transfer of entanglement from electron spins to photon polarizations. We have discussed the
generation of entangled four-photon and two-photon states via the injection of spin-entangled electrons into
quantum dots charged with two excess holes. We have proposed a scheme to achieve complete entanglement transfer
from two electron spins to two photons. We have shown that this scheme can even be realized with quantum dots
exhibiting an exciton exchange splitting. We have shown the dependence of the photon entanglement on the
emission angles and identified the conditions for maximal entanglement. This offers the possibility to
efficiently test Bell's inequalities for electron spins. In addition, our results show that a continuous set of
directions exists along which entanglement is maximal. Finally, similar schemes to produce entangled photons can
be realized using two tunnel-coupled dots~\cite{gywat} instead of two isolated dots. In such a setup, it is
essential that tunnel coupling is provided for the conduction-band electrons, whereas the valence-band holes are
not tunnel coupled and thus localized in the individual dots. After a positively charged exciton is created in
each of the two dots, the spin entanglement is provided by the singlet ground state of the delocalized
electrons and can be transferred to the photons, similarly as described in this work.
We thank A. Imamo\=glu, G. Burkard, F. Meier, P. Recher, D. S. Saraga, V. N. Golovach, and D. V. Bulaev for
discussions. We acknowledge support from DARPA, ARO, ONR, NCCR Nanoscience, and the Swiss NSF.
Q: kendo: How to exclude time part from OdataString I am using the toODataString method to convert a filter expression to an OData string. When the field is of date type, the time is also included.
When the user selects a date, say 08-April-2021, the converted OData string contains the time part as well:
Units Date eq 2021-04-08T00:00:00.000Z
Stackblitz for reproduction : https://stackblitz.com/edit/angular-ivy-dzo3tn?file=src%2Fapp%2Fapp.ccomponent.ts
Note: check in console for the output
Can it be created like $filter=date(unitdate) eq 2021-04-08 instead?
reference: https://github.com/OData/WebApi/issues/1473
A: You actually need to remove the time part of the date string. Please check the following code:
let queryStr = `${toODataString(state)}`;
// Strip the serialized midnight time suffix, leaving only the date
const regex = /T00:00:00\.000Z/gi;
const noTimeZoneQueryStr = queryStr.replace(regex, '');
console.log("noTimeZoneQueryStr", noTimeZoneQueryStr);
Please find the updated Stackblitz here: https://stackblitz.com/edit/angular-ivy-w7m4kd?file=src%2Fapp%2Fapp.ccomponent.ts
Edit regarding the note: Yes. Please consider the following code:
let queryStr = `${toODataString(state)}`;
// Wrap the field name in OData's date() function, keeping the preceding delimiter
const dateStr = /(r=| )Units Date /g;
let newQueryStr = queryStr.replace(dateStr, '$1date(Units Date) ');
// Then strip the midnight time suffix as before
const regex = /T00:00:00\.000Z/gi;
const noTimeZoneQueryStr = newQueryStr.replace(regex, '');
console.log("noTimeZoneQueryStr", noTimeZoneQueryStr);
Please find the updated Stackblitz here: https://stackblitz.com/edit/angular-ivy-jqma46?file=src%2Fapp%2Fapp.ccomponent.ts
\section{Introduction}
A degenerate Fermi gas of $^{161}$Dy has recently been achieved \cite{lu2} following the realization of Bose-Einstein condensation of $^{164}$Dy \cite{lu1}.
The dysprosium isotopes have large magnetic moments 10$\mu_B$ ($\mu_B$ being the Bohr magneton) and
the progress of these experiments provides an opportunity to study exotic many-body physics with
magnetic dipolar moments.
Cold atomic systems with a synthetic spin-orbit coupling have also attracted strong experimental and theoretical interest \cite{lin,sau,yu,hu}.
The dipole-dipole interaction is essentially a spin-orbit coupled interaction.
Sogo {\it et al.} \cite{sogo} and Li and Wu \cite{li} have recently demonstrated that in ultra cold dipolar Fermi gases the
dipole-dipole interaction can give rise to an instability toward spontaneous formation of a spin-orbit coupled phase.
They studied the properties of the spin-orbit couplings in infinite systems.
It is interesting to investigate how such phases with spin-orbit couplings are realized
in trapped dipolar fermion gases consisting of a finite number of atoms.
In this paper we study the ground states and collective excitations of a gas consisting of a small number of atoms with spin one half
using a time-dependent density-matrix approach (TDDMA) \cite{WC,GT}. Systems consisting of a small number of atoms have often been
used for theoretical investigations of dipolar Fermi gases \cite{Oster}
and may be realized in arrays of microtraps or optical lattices as discussed
in Refs. \cite{Oster,Barberan,Popp}.
The TDDMA consists of the coupled equations of motion for one-body and two-body density matrices.
These equations are exact in the case of an $N=2$ system. The advantage of the TDDMA is that physical
observables are easily calculated using the one-body and two-body density matrices. Furthermore the TDDMA
has a direct relation to the time-dependent Hartree-Fock approximation (TDHFA):
Approximation of the two-body density matrix with anti-symmetrized products of the one-body density matrices in the TDDMA equation gives the TDHFA equation.
The TDDMA has recently been applied to polarized dipolar gases \cite{toh1,toh2} and a quantum dot \cite{toh3}.
The paper is organized as follows: the formulation is given in Sec. II, the results obtained for the ground state and the excited
states of an $N=2$ system are shown in Sec. III, the results for an $N=70$ system are presented in Sec. III,
and Sec. IV is devoted to a summary.
\section{Formulation}
\subsection{Hamiltonian}
We consider a magnetic dipolar gas of fermions with spin one half,
which is trapped in a spherically symmetric harmonic potential with
frequency $\omega$. The system is described by the Hamiltonian
\begin{eqnarray}
H=\sum_\alpha\epsilon_\alpha a^\dag_\alpha a_\alpha
+\frac{1}{2}\sum_{\alpha\beta\alpha'\beta'}\langle\alpha\beta|v|\alpha'\beta'\rangle
a^\dag_{\alpha}a^\dag_\beta a_{\beta'}a_{\alpha'},
\label{totalH}
\end{eqnarray}
where $a^\dag_\alpha$ and $a_\alpha$ are the creation and annihilation operators of an atom at
a harmonic oscillator state $\alpha$
corresponding to the trapping potential $V(r)=m\omega^2r^2/2$ and
$\epsilon_\alpha=\omega(n+3/2)$ with $n=0,~1,~2,\dots$.
We use units such that $\hbar=1$ and assume that $\alpha$ contains the spin quantum number.
In Eq. (\ref{totalH}) $\langle\alpha\beta|v|\alpha'\beta'\rangle$ is the matrix element of
a pure magnetic dipole-dipole interaction \cite{dipole}
\begin{eqnarray}
v(r)&=&-\frac{1}{r^3}\left(3({\bm d}_1\cdot\hat{\bm r})({\bm d}_2\cdot\hat{\bm r})
-{\bm d}_1\cdot{\bm d}_2\right)
\nonumber \\
&-&\frac{8\pi}{3}{\bm d}_1\cdot{\bm d}_2\delta^3({\bm r}),
\label{vdd}
\end{eqnarray}
where ${\bm d}$ is the magnetic dipole moment, ${\bm r}={\bm r}_1-{\bm r}_2$ and $\hat{\bm r}={\bm r}/r$.
The magnetic dipole moment for spin 1/2 is given by ${\bm d}=d{\bm \sigma}$, where ${\bm \sigma}$ is the vector of Pauli matrices.
In the case of completely polarized gases
the second term on the right-hand side of Eq. (\ref{vdd}) can be neglected because its exchange contribution
cancels the direct one. This contact term is usually
omitted in the study of dipolar gases. However, it is well known that the contact term between the proton and electron magnetic
dipole moments is essential to explain the hyperfine splitting of
the hydrogen atom. Therefore, we retain it in the following calculations.
The effect of a contact interaction $g\delta^3({\bm r})$ of the kind usually added in studies of cold atoms is also
considered in limited cases.
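The cancellation between the direct and exchange contributions of the contact term for a polarized pair can be illustrated with a few lines of code. The sketch below is a one-dimensional toy (not the actual three-dimensional calculation of this work): for two atoms in the same spin state, the direct and exchange matrix elements of a contact interaction coincide, so their antisymmetrized combination vanishes.

```python
import numpy as np

# Two orthonormal spatial orbitals (1D harmonic-oscillator ground and
# first excited states, units with m = omega = hbar = 1) on a grid.
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
phi0 = np.pi ** -0.25 * np.exp(-x ** 2 / 2)
phi1 = np.sqrt(2.0) * np.pi ** -0.25 * x * np.exp(-x ** 2 / 2)

# For a contact interaction g*delta(x1 - x2) between two atoms in the
# SAME spin state, the direct and exchange matrix elements coincide,
# so the antisymmetrized (direct - exchange) element vanishes.
direct = np.sum(phi0 ** 2 * phi1 ** 2) * dx
exchange = np.sum((phi0 * phi1) ** 2) * dx
print(direct - exchange)   # -> 0 (up to rounding)
```

The same cancellation holds for any pair of same-spin orbitals, since both matrix elements reduce to the same spatial integral for a zero-range interaction.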
\subsection{$N=2$ system}
The TDDMA gives
the coupled equations of motion for the one-body density matrix (the occupation matrix) $n_{\alpha\alpha'}$
and the two-body density matrix $\rho_{\alpha\beta\alpha'\beta'}$.
These matrices are defined as
\begin{eqnarray}
n_{\alpha\alpha'}(t)&=&\langle\Phi(t)|a^\dag_{\alpha'} a_\alpha|\Phi(t)\rangle,
\\
\rho_{\alpha\beta\alpha'\beta'}(t)&=&\langle\Phi(t)|a^\dag_{\alpha'}a^\dag_{\beta'}
a_{\beta}a_{\alpha}|\Phi(t)\rangle,
\label{rho2}
\end{eqnarray}
where $|\Phi(t)\rangle$ is the time-dependent total wavefunction
$|\Phi(t)\rangle=\exp[-iHt] |\Phi(t=0)\rangle$.
The equations in the TDDMA are written as
\begin{eqnarray}
i \dot{n}_{\alpha\alpha'}&=&
(\epsilon_{\alpha}-\epsilon_{\alpha'}){n}_{\alpha\alpha'}
\nonumber \\
&+&\sum_{\lambda_1\lambda_2\lambda_3}
[\langle\alpha\lambda_1|v|\lambda_2\lambda_3\rangle \rho_{\lambda_2\lambda_3\alpha'\lambda_1}
\nonumber \\
&-&\rho_{\alpha\lambda_1\lambda_2\lambda_3}\langle\lambda_2\lambda_3|v|\alpha'\lambda_1\rangle],
\label{n2}
\end{eqnarray}
\begin{eqnarray}
i\dot{\rho}_{\alpha\beta\alpha'\beta'}&=&
(\epsilon_{\alpha}
+\epsilon_{\beta}
-\epsilon_{\alpha'}
-\epsilon_{\beta'}){\rho}_{\alpha\beta\alpha'\beta'}
\nonumber \\
&+&\sum_{\lambda_1\lambda_2}[
\langle\alpha\beta|v|\lambda_1\lambda_2\rangle\rho_{\lambda_1\lambda_2\alpha'\beta'}
\nonumber \\
&-&\langle\lambda_1\lambda_2|v|\alpha'\beta'\rangle\rho_{\alpha\beta\lambda_1\lambda_2}].
\label{N2C2}
\end{eqnarray}
Since no higher-order reduced density matrices appear in an $N=2$ system, these two equations are exact
provided that all elements of $n_{\alpha\alpha'}$
and $\rho_{\alpha\beta\alpha'\beta'}$ are retained.
When the two-body density matrix in Eq. (\ref{n2}) is approximated by antisymmetrized products of occupation matrices,
Eq. (\ref{n2}) reduces to the equation of the TDHFA.
\subsection{$N\ge3$ system}
When the number of atoms is greater than two,
the equation of motion for the two-body density matrix is coupled to a three-body density-matrix
$\rho_{\alpha\beta\gamma\alpha'\beta'\gamma'}$:
\begin{eqnarray}
i\dot{\rho}_{\alpha\beta\alpha'\beta'}&=&
(\epsilon_{\alpha}
+\epsilon_{\beta}
-\epsilon_{\alpha'}
-\epsilon_{\beta'}){\rho}_{\alpha\beta\alpha'\beta'}
\nonumber \\
&+&\sum_{\lambda_1\lambda_2}[
\langle\alpha\beta|v|\lambda_1\lambda_2\rangle\rho_{\lambda_1\lambda_2\alpha'\beta'}
\nonumber \\
&-&\langle\lambda_1\lambda_2|v|\alpha'\beta'\rangle\rho_{\alpha\beta\lambda_1\lambda_2}]
\nonumber \\
&+&\sum_{\lambda_1\lambda_2\lambda_3}
[\langle\alpha\lambda_1|v|\lambda_2\lambda_3\rangle\rho_{\lambda_2\lambda_3\beta\alpha'\lambda_1\beta'}
\nonumber \\
&+&\langle\lambda_1\beta|v|\lambda_2\lambda_3\rangle\rho_{\lambda_2\lambda_3\alpha\alpha'\lambda_1\beta'}
\nonumber \\
&-&\langle\lambda_1\lambda_2|v|\alpha'\lambda_3\rangle\rho_{\alpha\lambda_3\beta\lambda_1\lambda_2\beta'}
\nonumber \\
&-&\langle\lambda_1\lambda_2|v|\lambda_3\beta'\rangle\rho_{\alpha\lambda_3\beta\lambda_1\lambda_2\alpha'}].
\label{N3C2}
\end{eqnarray}
This coupled chain of equations of motion for reduced density matrices is known as the
Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy.
The BBGKY hierarchy can be truncated by approximating the three-body density matrix with
the antisymmetrized products of the one-body and two-body density matrices
\cite{WC,GT}. As will be discussed below, however,
such a truncation is valid only in weakly interacting regimes.
\subsection{Ground State and Collective Excitations}
The ground state in the TDDMA is given as a stationary solution of the TDDM equations
(Eqs. (\ref{n2}) and (\ref{N2C2})).
We use the following adiabatic method to obtain a nearly stationary
solution \cite{adiabatic1}: starting from the non-interacting spin-saturated configuration,
we solve Eqs. (\ref{n2}) and (\ref{N2C2}) while gradually ramping the interaction as
$v({\bm r})\times t/T$. To suppress oscillating components that arise from the admixture
of excited states, $T$ must be large; we use
$T=4\times 2\pi/\omega$. For $t>T$ the interaction strength is held fixed at $v({\bm r})$. We have checked the stability of the obtained
ground state for $t>T$.
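A minimal toy analogue of this adiabatic switching is sketched below: a two-level system whose coupling is ramped linearly over $T$ and then held fixed, $H(t)=H_0+\min(t/T,1)\,V$. The matrices $H_0$ and $V$ are hypothetical and chosen only for illustration; for a ramp slow compared with the gap, the evolved state tracks the interacting ground state.

```python
import numpy as np

# Toy two-level analogue of the adiabatic switching: the coupling V is
# ramped linearly over T and then held fixed, H(t) = H0 + min(t/T, 1)*V.
# (H0 and V are hypothetical matrices chosen only for illustration.)
H0 = np.diag([0.0, 1.0])
V = np.array([[0.0, 0.2], [0.2, 0.0]])
T, dt = 200.0, 0.02                          # ramp slow compared to the gap

psi = np.array([1.0, 0.0], dtype=complex)    # non-interacting ground state
for t in np.arange(0.0, T, dt):
    w, u = np.linalg.eigh(H0 + min(t / T, 1.0) * V)
    psi = (u * np.exp(-1j * w * dt)) @ (u.conj().T @ psi)  # exact step exp(-iH dt)

gs = np.linalg.eigh(H0 + V)[1][:, 0]         # interacting ground state
fidelity = abs(gs.conj() @ psi)
print(fidelity)                              # close to 1 for a slow ramp
```

Each short-time step is built from the exact eigendecomposition of the instantaneous Hamiltonian, so the norm of the state is preserved to machine precision.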
In strongly interacting regimes a spin-unsaturated deformed state becomes the ground state in the mean-field theory.
In these regimes we perform
symmetry-unrestricted Hartree-Fock (HF) calculations to obtain the HF ground state,
starting from a Slater determinant which breaks the symmetries.
We excite collective oscillations by adding a time-dependent operator $\hat{Q}(t)$
to the total Hamiltonian, Eq. (\ref{totalH}). In the case of a one-body excitation operator,
$\hat{Q}(t)$ is given by
$k\sum_{\alpha\alpha'}\langle\alpha|Q|\alpha'\rangle
a^\dag_{\alpha}a_{\alpha'}\delta(t-T)$, where
$k$ determines the oscillation amplitude.
The initial conditions for the occupation matrix and the two-body density matrix at $t=T$ then read
\begin{eqnarray}
n_{\alpha\alpha'}(T_+)=
\sum_{\lambda\lambda'}\langle\alpha|e^{-ikQ}|\lambda\rangle
n_{\lambda\lambda'}(T_-)\langle\lambda'|e^{ikQ}|\alpha'\rangle,
\label{n-init}
\end{eqnarray}
\begin{eqnarray}
\rho_{\alpha\beta\alpha'\beta'}(T_+)&=&
\sum_{\lambda_1\lambda_2\lambda_1'\lambda_2'}\langle\alpha|e^{-ikQ}|\lambda_1\rangle
\langle\beta|e^{-ikQ}|\lambda_2\rangle
\nonumber \\
&\times&\rho_{\lambda_1\lambda_2\lambda_1'\lambda_2'}(T_-)
\nonumber \\
&\times&\langle\lambda_1'|e^{ikQ}|\alpha'\rangle
\langle\lambda_2'|e^{ikQ}|\beta'\rangle,
\label{C-init}
\end{eqnarray}
where $T_-$ and $T_+$ indicate the times infinitesimally before and after $T$, respectively, and
$\langle\alpha|e^{ikQ}|\alpha'\rangle$ means
\begin{eqnarray}
\langle\alpha|e^{ikQ}|\alpha'\rangle&=&\delta_{\alpha\alpha'}-ik\langle\alpha|Q|\alpha'\rangle
\nonumber \\
&+&\frac{1}{2!}(ik)^2\sum_\lambda\langle\alpha|Q|\lambda\rangle\langle\lambda|Q|\alpha'\rangle+\cdot\cdot\cdot.
\end{eqnarray}
We study the collective modes in the small-amplitude regime and, therefore,
expand Eqs. (\ref{n-init}) and (\ref{C-init}) up to second order in $k$.
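The boost of Eqs. (\ref{n-init}) and (\ref{C-init}) is a unitary rotation of the density matrices. The sketch below (illustrative only, with random tensors standing in for the actual matrices) applies $U=e^{-ikQ}$ to the one-body and two-body density matrices; unitarity guarantees that the particle number $\mathrm{Tr}\,n$ and the two-body normalization $\sum_{\alpha\beta}\rho_{\alpha\beta\alpha\beta}$ are unchanged.

```python
import numpy as np

def boost(n, rho, Q, k):
    """Apply the boosts of Eqs. (n-init) and (C-init):
    n -> U n U^dagger and the analogous four-index rotation of rho,
    with U = exp(-i k Q) for a Hermitian one-body operator Q."""
    w, u = np.linalg.eigh(Q)
    U = (u * np.exp(-1j * k * w)) @ u.conj().T     # U = exp(-i k Q)
    n_plus = U @ n @ U.conj().T
    rho_plus = np.einsum('ai,bj,ijkl,ck,dl->abcd',
                         U, U, rho, U.conj(), U.conj())
    return n_plus, rho_plus

rng = np.random.default_rng(0)
M = 4
Q = rng.standard_normal((M, M))
Q = Q + Q.T                                        # Hermitian Q
n = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
rho = rng.standard_normal((M,) * 4) + 1j * rng.standard_normal((M,) * 4)
n_plus, rho_plus = boost(n, rho, Q, k=0.1)

# The boost is unitary, so Tr n (the particle number) is unchanged.
print(abs(np.trace(n_plus) - np.trace(n)))         # ~ 0
```

Building $U$ from the eigendecomposition of $Q$ avoids truncating the exponential series, which matters once $k$ is not infinitesimal.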
The strength function $S(E)$ for an excitation operator $\hat{Q}$, which describes the distribution of
the transition strength, is calculated as \cite{toh1}
\begin{eqnarray}
S(E)=\frac{1}{k\pi}\int_0^\infty (q(t)-q(T))\sin (Et')\,dt',
\label{strength}
\end{eqnarray}
where $q(t)=\langle \hat{Q}\rangle$ and $t'=t-T$.
Since the integration in Eq. (\ref{strength}) is performed over a finite interval
in numerical calculations,
we multiply $q(t)-q(T)$ by a damping factor $\exp(-\Gamma t'/2)$ to suppress spurious oscillations
in $S(E)$. Each discrete state thereby acquires an artificial width, so
$\Gamma$ must be kept smaller than the experimental energy resolution.
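The effect of the damping factor can be seen in a short numerical sketch (illustrative parameters only): a single undamped mode $q(t)-q(T)=A\sin(\omega_0 t')$ transforms into a peak of width $\sim\Gamma$ centered at $E=\omega_0$.

```python
import numpy as np

# Sketch of Eq. (strength): for a single mode q(t) - q(T) = A*sin(omega0*t'),
# the damped sine transform produces a peak of width ~Gamma at E = omega0.
omega0, A, k, Gamma = 1.0, 0.05, 0.05, 0.1
dt = 0.01
tp = np.arange(0.0, 200.0, dt)                       # t' = t - T
signal = A * np.sin(omega0 * tp) * np.exp(-Gamma * tp / 2)

E = np.arange(0.5, 1.5, 0.005)
S = (signal @ np.sin(np.outer(tp, E))) * dt / (k * np.pi)
print(E[np.argmax(S)])                               # ~ omega0 = 1.0
```

The damping also suppresses the truncation error of the finite-time integral, at the price of the artificial Lorentzian width mentioned above.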
We compare the TDDMA results with the TDHFA results.
The small amplitude limit of the TDHFA corresponds to the random-phase approximation (RPA) \cite{RS}.
\section{Results}
\subsection{Ground State}
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{mgfig1.eps}
\end{center}
\caption{(Color online) Ground-state energy as a function of $C=d^2/\omega\xi^3$ obtained in the TDDMA
for $N=2$ calculated with the single-particle states up
to the $2p-1f$ states (solid line). The dashed and dot-dashed lines show the TDDMA results calculated using the
single-particles states up to the $2s-1d$ and $3s-2d-1g$ states, respectively. The results in the TDHFA where spherical symmetry is imposed
are shown with the green (gray) solid line. The squares and circles denote
the results in the unrestricted HF approximation: The HF states denoted by the squares have a Rashba-like magnetization, while
those shown by the circles have magnetization in the $z$ direction.}
\label{fig1}
\end{figure}
First we consider an $N=2$ system, for which we can compare the mean-field results with
the exact solutions provided by the TDDMA.
As the starting ground state
we use a Slater determinant with a closed-shell configuration \cite{RS} where two atoms with spin up and down occupy the $1s$ state.
The number of elements of $\rho_{\alpha\beta\alpha'\beta'}$ increases rapidly with the number of
single-particle states, which makes it difficult to use a large single-particle basis.
For this numerical reason we are forced to work with rather small configuration spaces, but
this does not prevent us from obtaining a semi-quantitative understanding of finite dipolar gases.
The ground-state energy calculated in the TDDMA for $N=2$ is shown in Fig. \ref{fig1}
as a function of the parameter $C=d^2/\omega\xi^3$, where
$\xi$ is the oscillator length $\xi=\sqrt{1/m\omega}$.
The dashed, solid and dot-dashed lines show the TDDMA results calculated using the single-particle states up to the $2s-1d$, $2p-1f$ and $3s-2d-1g$ states,
respectively.
The range of $C$ considered in Fig. \ref{fig1} may be rather large
compared with current experimental situations \cite{lu2}: for example, $C$
for a gas of $^{161}$Dy trapped in a harmonic potential with $\omega=2\pi\times 500$ Hz is about 0.2. Larger values of $C$ may be
realized for a dipolar gas confined in a lattice \cite{lu2}.
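The magnitude of $C$ for $^{161}$Dy can be checked at the order-of-magnitude level with the sketch below. It works in SI units with the $\mu_0/4\pi$ factor included and takes the magnetic moment of Dy as $10\mu_B$; the precise numerical value depends on the convention adopted for $d$ (e.g., on how the Pauli-matrix normalization $d{\bm\sigma}$ is counted), so only the order of magnitude is meaningful here.

```python
import numpy as np

# Order-of-magnitude check of C = d^2/(hbar*omega*xi^3) for 161Dy in an
# omega = 2*pi*500 Hz trap (SI units, mu0/4pi included; the magnetic
# moment is taken as 10 Bohr magnetons, and the precise numerical value
# depends on the convention adopted for d).
hbar, muB, amu, mu0_4pi = 1.0546e-34, 9.274e-24, 1.6605e-27, 1e-7
m, omega, d = 161 * amu, 2 * np.pi * 500.0, 10 * muB

xi = np.sqrt(hbar / (m * omega))                  # oscillator length
C = mu0_4pi * d ** 2 / (hbar * omega * xi ** 3)
print(xi, C)
```

The oscillator length comes out in the few-hundred-nanometer range and $C$ well below unity, consistent with the statement that the large-$C$ part of Fig. \ref{fig1} lies beyond current single-trap experiments.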
The results in the TDHFA
where spherical symmetry is imposed are shown with the green (gray) solid line.
Here the single-particle states up to the $2p-1f$ states are used.
In the TDHFA calculations it is not difficult to enlarge the single-particle space. From
TDHFA calculations performed with the single-particle states up to the $3p-2f-1h$ states we estimate that the total HF energies
calculated with the single-particle states up to the $2p-1f$ states account for
$99.9$\% of the converged values.
As shown in Fig. \ref{fig1}, the TDDMA results obtained using the single-particle states up to the $2p-1f$ states account for a substantial
part of the correlation energy, though they are not completely converged. Therefore, in the following we mainly discuss the results obtained using
the single-particle states up to the $2p-1f$ states.
The squares and circles denote
the results in the HF approximation without symmetry restriction, which will be discussed below.
The increase of the ground-state energy with increasing $C$ means that the interaction, Eq. (\ref{vdd}),
is effectively repulsive. This is due to the contact term (the second term on the right-hand side of Eq. (\ref{vdd})).
Note that the tensor part of Eq. (\ref{vdd}) alone cannot give any interaction energy when we start from
the non-interacting spin-symmetric ground state.
The difference between the TDDMA energy and the TDHFA energy is rather large, indicating the importance of the
ground-state correlations.
To investigate the effects of ground-state correlations in larger $N$ systems, we perform the TDDMA calculations for $N=8$
where a Slater determinant with the fully occupied $1s$ and $1p$ states (a closed-shell configuration) is used as the starting ground state.
We use the same single-particle states as those
used for $N=2$. The obtained results (black solid line) are
shown in Fig. \ref{fig8} as a function of $C$ and compared with the results of the spherical TDHFA calculations (green solid line). The results for $N=2$ are also shown for comparison.
The energy is normalized by the energy $E_0$ of the initial non-interacting state, which is $3\omega$ for $N=2$ and $18\omega$ for $N=8$.
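The quoted values of $E_0$ follow from filling the 3D harmonic-oscillator shells with spin-1/2 atoms; shell $n$ has orbital degeneracy $(n+1)(n+2)/2$ and thus holds $(n+1)(n+2)$ atoms. A short helper (illustrative, in units of $\omega$) reproduces them:

```python
def e0_closed_shell(N, omega=1.0):
    """Energy of N non-interacting spin-1/2 atoms filling 3D
    harmonic-oscillator shells: shell n has orbital degeneracy
    (n+1)(n+2)/2, hence holds (n+1)(n+2) atoms with spin, at
    energy (n + 3/2)*omega each."""
    E, n = 0.0, 0
    while N > 0:
        occ = min(N, (n + 1) * (n + 2))
        E += occ * (n + 1.5) * omega
        N -= occ
        n += 1
    return E

print(e0_closed_shell(2), e0_closed_shell(8))   # 3.0 18.0
```

For $N=2$ both atoms sit in the $1s$ shell ($2\times 3\omega/2=3\omega$), and for $N=8$ the filled $1s$ and $1p$ shells give $3\omega+6\times 5\omega/2=18\omega$.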
As mentioned above, the application of the TDDMA for $N\ge 3$ is limited to weakly interacting regimes $(C< 0.5)$. Figure \ref{fig8} suggests that the ground-state correlations are
significant even in heavier systems.
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{mgfig2.eps}
\end{center}
\caption{(Color online) Ground-state energies calculated in the TDDMA (black solid line) and TDHFA (green (gray) solid line) for $N=8$.
The results for $N=2$ are also shown for comparison with the corresponding dashed lines.}
\label{fig8}
\end{figure}
Figure \ref{fig1}
shows that the breaking of spherical symmetry
gives a lower-energy solution in the HF approximation (HFA).
The HF ground states given by the squares are solutions with Rashba-like \cite{rashba} magnetization
which are obtained starting from Slater determinants with $\langle\Phi_0|({\bm \sigma}\times {\bm r})_z|\Phi_0\rangle\neq0$.
The circles in Fig. \ref{fig1} show the HF ground states where spins of the two atoms are completely polarized in the $z$ direction.
The tensor part of the dipole-dipole interaction is responsible for this completely polarized configuration because the
contact term cancels out in such a configuration.
In Figs. \ref{fig2} and \ref{fig3}
the distribution of the order parameter $\langle({\bm \sigma}\times {\bm r})_z\rangle$ which is given by
\begin{eqnarray}
\langle({\bm \sigma}\times {\bm r})_z\rangle&\equiv&
\langle\Phi_0|({\bm \sigma}\times {\bm r})_z\delta^3({\bm r}-{\bm r'})|\Phi_0\rangle
\nonumber \\
&=&
\sum_{\alpha\alpha'}({\bm \sigma}\times {\bm r})_zn_{\alpha\alpha'}\phi_\alpha({\bm r})\phi_{\alpha'}({\bm r})
\label{order}
\end{eqnarray}
is shown for the Rashba-like magnetized solution with $N=2$ and $C=0.8$. Here, $\phi_\alpha({\bm r})$ is the harmonic oscillator wavefunction.
Figures \ref{fig2} and \ref{fig3} show that the order parameter has a toroidal distribution.
The schematic picture of the spin distribution of this magnetized solution is shown in Fig. \ref{fig5} \cite{sogo}.
The density profile of the magnetized solution is shown in Fig. \ref{fig4}.
The density distribution is spheroidally extended in the $xy$ direction (an oblate shape).
Figures \ref{fig2}, \ref{fig3} and \ref{fig4} show that the Rashba-like magnetization is realized mostly in the central part of the gas.
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{mgfig3.eps}
\end{center}
\caption{(Color online) Contour plot of the distribution of the order parameter $\langle({\bm \sigma}\times {\bm r})_z\rangle$
in the $xy$ plane calculated in the unrestricted HFA for $N=2$ and $C=0.8$. The values of the order parameter
are given in arbitrary units.
The distribution has reflection
symmetry with respect to the $x$ and $y$ axes.}
\label{fig2}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{mgfig4.eps}
\end{center}
\caption{(Color online) Contour plot of the distribution of $\langle({\bm \sigma}\times {\bm r})_z\rangle$
in the $xz$ plane calculated in the unrestricted HFA for $N=2$ and $C=0.8$.
The distribution has rotation
symmetry with respect to the $z$ axis and reflection symmetry with respect to the $x$ axis.}
\label{fig3}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{mgfig5.eps}
\end{center}
\caption{Schematic picture for spin distribution of Rashba-like magnetization with $\langle\Phi_0|({\bm \sigma}\times {\bm r})_z|\Phi_0\rangle\neq 0$ .}
\label{fig5}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{mgfig6.eps}
\end{center}
\caption{Density distribution $\rho(x,0,0)$ as a function of $x$ in the unrestricted HFA for $N=2$ and $C=0.8$.
The density distribution is symmetric with respect to the origin.}
\label{fig4}
\end{figure}
Since the ground-state calculation starts from the spin-saturated non-interacting configuration, the ground states in the TDDMA always remain spin-saturated and
have a spherically symmetric density distribution. In this case the order parameter, Eq. (\ref{order}), vanishes.
The TDDMA ground states are expected to be a superposition of many configurations, including magnetized and deformed ones.
To investigate the intrinsic structure of the TDDMA ground states, it is convenient to use
the two-body density distribution $\rho({\bm r}s{\bm r'}s':{\bm r}s{\bm r'}s')$, which is given in terms of the two-body density matrix as
\begin{eqnarray}
\rho({\bm r}s{\bm r'}s':{\bm r}s{\bm r'}s')
&=&\sum_{\alpha\beta\alpha'\beta'}\rho_{\alpha(s)\beta(s')\alpha'(s)\beta'(s')}
\nonumber \\
&\times&\phi_\alpha({\bm r})\phi_\beta({\bm r'})\phi^*_{\alpha'}({\bm r})\phi^*_{\beta'}({\bm r'}).
\end{eqnarray}
This distribution gives the conditional probability of finding an atom with spin $s$ at ${\bm r}$ when the other atom, with spin $s'$, is located at ${\bm r'}$.
In the HFA the two-body density distribution is given as $\rho({\bm r}s{\bm r'}s':{\bm r}s{\bm r'}s')
=\rho({\bm r}s:{\bm r}s)\rho({\bm r'}s':{\bm r'}s')-\rho({\bm r}s:{\bm r'}s')\rho({\bm r'}s':{\bm r}s)$.
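The analogous factorization in orbital indices, $\rho_{\alpha\beta\alpha'\beta'}=n_{\alpha\alpha'}n_{\beta\beta'}-n_{\alpha\beta'}n_{\beta\alpha'}$, is easy to check numerically. The sketch below (illustrative only) builds it for a two-particle Slater determinant, for which $n$ is an idempotent rank-2 projector, and verifies the antisymmetry and the normalization $\sum_{\alpha\beta}\rho_{\alpha\beta\alpha\beta}=N(N-1)$.

```python
import numpy as np

def rho2_hf(n):
    """HFA two-body density matrix in orbital indices:
    rho_{ab,a'b'} = n_{aa'} n_{bb'} - n_{ab'} n_{ba'}."""
    return np.einsum('ac,bd->abcd', n, n) - np.einsum('ad,bc->abcd', n, n)

# Example: a two-particle Slater determinant, for which n is a rank-2
# projector built from two orthonormal (here random) orbitals.
rng = np.random.default_rng(1)
u, _ = np.linalg.qr(rng.standard_normal((6, 2)))
n = u @ u.T                       # idempotent, trace 2
rho2 = rho2_hf(n)
print(np.einsum('abab->', rho2))  # -> N(N-1) = 2 (up to rounding)
```

The normalization follows from $(\mathrm{Tr}\,n)^2-\mathrm{Tr}\,n^2=N^2-N$ for an idempotent $n$ with trace $N$.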
The contour plots of $\rho({\bm r}\uparrow{\bm r'}\downarrow:{\bm r}\uparrow{\bm r'}\downarrow)$
calculated in the unrestricted HFA and TDDMA for $N=2$ and $C=0.8$ are shown in Figs. \ref{fig.6} and \ref{fig.7}, respectively.
The position of ${\bm r'}$ is chosen at $(1.25\xi,0)$.
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{mgfig7.eps}
\end{center}
\caption{(Color online) Contour plot of the two-body density distribution
$\rho({\bm r}\uparrow{\bm r'}\downarrow:{\bm r}\uparrow{\bm r'}\downarrow)$ (in arbitrary units)
in the $xy$ plane calculated in the unrestricted HFA
for $N=2$ and $C=0.8$, where ${\bm r'}=(1.25\xi,0)$.
The distribution has reflection
symmetry with respect to the $x$ axis.}
\label{fig.6}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{mgfig8.eps}
\end{center}
\caption{(Color online) Same as Fig. \ref{fig.6} but calculated in the TDDMA.}
\label{fig.7}
\end{figure}
The two-body density distribution in the HFA is depleted in the region $x>0$ and enhanced in the region $x<0$,
which indicates $\rho({\bm r}\uparrow:{\bm r}\uparrow)\approx\rho({\bm r}\uparrow:{\bm r}\downarrow)$
and $\rho({\bm r}\uparrow:{\bm r'}\downarrow)\ll\rho({\bm r}\uparrow:{\bm r}\uparrow)$ for ${\bm r}\neq{\bm r'}$.
The two-body density distribution in the HFA is thus consistent with the magnetization shown in Fig. \ref{fig5}.
The two-body density distribution in the TDDMA is similar to that in the HFA. This suggests that the intrinsic structure of
the TDDMA ground state has a magnetization similar to that of the HFA ground state.
To study the magnetization in much heavier systems,
we performed an unrestricted HFA calculation for $N=70$ and $C=0.8$ using the single-particle states
up to the $3p-2f-1h$ states.
The contour plots of $\langle({\bm \sigma}\times {\bm r})_z\rangle$ are shown in Figs. \ref{fig9} and \ref{fig10}.
The density profile is
also shown in Fig. \ref{fig11}. It is thus found that a similar Rashba-like magnetization occurs in heavier systems.
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{mgfig9.eps}
\end{center}
\caption{(Color online) Same as Fig. \ref{fig2} but for $N=70$ and $C=0.8$.}
\label{fig9}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{mgfig10.eps}
\end{center}
\caption{(Color online) Same as Fig. \ref{fig3} but for $N=70$ and $C=0.8$.}
\label{fig10}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{mgfig11.eps}
\end{center}
\caption{Density distribution $\rho(x,0,0)$ as a function of $x$ in the unrestricted HFA for $N=70$ and $C=0.8$.}
\label{fig11}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{mgfig12.eps}
\end{center}
\caption{(Color online) Strength functions for the Rashba-like mode calculated in the TDHFA for $N=2$ and $C=0.3$. The solid,
red dotted and blue dot-dashed lines show the results with $g/\omega\xi^3=0$, $1.8$ and $-1.8$, respectively.}
\label{pmdlta}
\end{figure}
It is pointed out in Refs. \cite{sogo} and \cite{li} that in infinite systems
the instability of the spin-symmetric HF ground state against the spin monopole and spin quadrupole modes
occurs earlier than that against the spin-orbit mode associated with $({\bm \sigma}\times{\bm r})_z$. As shown below,
we find that for the trapped gas considered here, which is spherically symmetric and spin-saturated, the instabilities against
the spin monopole and spin quadrupole modes
occur at stronger dipole-dipole interaction ($C>1$) than that against the $({\bm \sigma}\times{\bm r})_z$
mode. This is because the monopole and quadrupole modes must overcome the $2\omega$ excitation energy in trapped gases with
closed-shell configurations.
Trapped gases with open-shell configurations may show instabilities similar to those of infinite systems; this is an interesting subject for future study.
In order to investigate the effect of the contact interaction $g\delta^3({\bm r})$ on the instability of the Rashba-like mode,
we calculated the strength function for the excitation operator $({\bm \sigma}\times{\bm r})_z$ in the TDHFA.
The obtained strength functions for the Rashba-like mode for $N=2$ and $C=0.3$
are shown in Fig. \ref{pmdlta}, where $g\delta^3({\bm r})$ with $g/\omega\xi^3=0$ (solid line), $1.8$ (red dotted line) and $-1.8$ (blue dot-dashed line) is used.
Figure \ref{pmdlta} shows that a repulsive contact interaction makes the Rashba-like mode soft.
In fact, we find that a simple repulsive contact interaction $g\delta^3({\bm r})$ alone can also give a Rashba-like magnetization
when it is sufficiently strong ($g/\omega\xi^3>18$).
The results for infinite systems \cite{sogo} also show
that spin modes become unstable for a strongly repulsive contact interaction.
\subsection{Collective Excitations}
\subsubsection{Quadrupole Modes}
The strength function for the quadrupole mode calculated in the TDDMA (solid line) for $N=2$ and $C=1$ is shown in Fig. \ref{fig12}.
The excitation operator used is $(z^2-(x^2+y^2)/2)$. An excited mode is classified by the orbital angular momentum $L$, the total spin $S$ and
the total angular momentum $J$. Its parity $P$ is given by $P=(-1)^L$. The mode excited by $(z^2-(x^2+y^2)/2)$ has $L=2$, $S=0$ and $J^P=2^+$.
The result in the TDHFA is shown with the dotted line.
In the TDHFA calculation we used the spherically symmetric HF ground state, so that the excited modes have good
quantum numbers, as in the TDDMA.
The artificial width used is $\Gamma/\omega=0.1$. The TDDMA result is quite different from the TDHFA result, which shows a single peak.
The splitting of the strength in the TDDMA is attributed to the coupling of the quadrupole mode to two-phonon states.
A candidate for a two-phonon state with $E\approx2\omega$ is the double Kohn mode.
The single Kohn mode is the center-of-mass oscillation, which can be excited by the operator $z$; it is well known \cite{Kohn,Brey,Dobson}
that the Kohn mode has excitation energy $\omega$
for any interaction with translational invariance.
We numerically confirmed this property.
Since the excitation operator for the double Kohn mode includes a one-body part such that
\begin{eqnarray}
\left(\sum_{\alpha\alpha'}\langle\alpha|z|\alpha'\rangle
a^\dag_{\alpha}a_{\alpha'}\right)^2&=&\sum_{\alpha\alpha'}\langle\alpha|z^2|\alpha'\rangle
a^\dag_{\alpha}a_{\alpha'}
\nonumber \\
&+&\sum_{\alpha\beta\alpha'\beta'}\langle\alpha|z|\alpha'\rangle\langle\beta|z|\beta'\rangle
\nonumber \\
&\times&a^\dag_{\alpha}a^\dag_{\beta}a_{\beta'}a_{\alpha'},
\end{eqnarray}
the double Kohn mode can be excited by the one-body operator $(z^2-(x^2+y^2)/2)$.
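The operator identity above can be verified directly in the full Fock space of a few modes. The sketch below is an independent check (not part of the calculations in this work), using a Jordan-Wigner matrix representation of the fermionic operators and a random one-body matrix $z$ for $M=3$ modes.

```python
import numpy as np

# Verify the operator identity in the full Fock space of M = 3 modes,
# using a Jordan-Wigner construction of the fermionic operators.
M = 3
I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilation on one mode

def ann(i):
    """Annihilation operator a_i with the Jordan-Wigner string."""
    ops = [sz] * i + [sm] + [I2] * (M - i - 1)
    out = np.array([[1.0]])
    for o in ops:
        out = np.kron(out, o)
    return out

a = [ann(i) for i in range(M)]            # a[i].T is a_i^dagger (real entries)
z = np.random.default_rng(2).standard_normal((M, M))

Z1 = sum(z[i, j] * a[i].T @ a[j] for i in range(M) for j in range(M))
one_body = sum((z @ z)[i, j] * a[i].T @ a[j]
               for i in range(M) for j in range(M))
two_body = sum(z[i, j] * z[k, l] * a[i].T @ a[k].T @ a[l] @ a[j]
               for i in range(M) for j in range(M)
               for k in range(M) for l in range(M))

print(np.allclose(Z1 @ Z1, one_body + two_body))   # True
```

Normal-ordering $(\sum z_{\alpha\alpha'}a^\dag_\alpha a_{\alpha'})^2$ with the fermionic anticommutation relations produces exactly the one-body piece with matrix $z^2$ plus the two-body piece, which is what the check confirms.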
It is pointed out in Ref.\cite{ts04} that the excitation
energy of the double Kohn mode
should be $E=2\omega$ for any interaction with translational invariance.
The state which presumably contains the double Kohn mode appears slightly above $2\omega$.
This deviation may be due to the truncation of the single-particle space, which makes it difficult to describe the two-phonon states properly.
A clear splitting of the double Kohn mode with $E=2\omega$ is seen in the TDDMA calculations for the monopole and quadrupole excitations of a two-dimensional
quantum dot with $N=2$ \cite{toh3},
where a larger single-particle space can be taken.
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{mgfig13.eps}
\end{center}
\caption{Strength function for the quadrupole mode calculated in the TDDMA for $N=2$ and $C=1$ (solid line).
The result in the TDHFA where the spherically symmetric HF ground state is used is shown with the dotted line.
The artificial width used is $\Gamma/\omega=0.1$.}
\label{fig12}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{mgfig14.eps}
\end{center}
\caption{Same as Fig. \ref{fig12} but for the spin quadrupole mode excited by the operator $\sigma_z (z^2-(x^2+y^2)/2)$.}
\label{fig13}
\end{figure}
The strength functions calculated for the spin quadrupole mode are shown in Fig. \ref{fig13} for $N=2$ and $C=1$.
The excitation operator used is $\sigma_z (z^2-(x^2+y^2)/2)$ which can excite states with $L=2$, $S=1$ and $J^P=1^+$ and $3^+$.
The dipole-dipole interaction is
strongly attractive in the particle-hole channel for the spin quadrupole modes \cite{sogo,li}.
Figure \ref{fig13} shows that the ground-state correlations
strongly reduce the particle-hole correlations. Comparing with the spin monopole mode excited by the operator $\sigma_zr^2$, which also excites states with
$J^P=1^+$, we found that
the lowest and highest-energy states at $E/\omega=0.9$ and $2.3$ in Fig. \ref{fig13} calculated in the TDDMA
have $J^P=1^+$ while the largest peak at $E/\omega=1.6$ corresponds to the $3^+$ state.
Since the lowest $1^+$ state is strongly excited by the operator $\sigma_zr^2$, its main component is considered to be $L=0$, $S=1$ and $J^P=1^+$.
The spin quadrupole mode with $L=2$, $S=1$ and $J^P=2^+$ comes between the lowest $1^+$ state and the $3^+$ state as a single peak, though it is not shown in Fig. \ref{fig13}.
We found that
simple contact interactions of the form $g\delta^3({\bm r}_1-{\bm r}_2)$ or ${\bm d}_1\cdot{\bm d}_2\,g'\delta^3({\bm r}_1-{\bm r}_2)$
give a single peak for the spin quadrupole modes. Therefore, the $J$-dependent splitting of the spin quadrupole modes is caused by the tensor part
of the dipole-dipole interaction (the first term on the right-hand side of Eq. (\ref{vdd})).
\subsubsection{Spin Dipole Modes}
Finally we show the result for the spin dipole mode excited by the operator
$\sigma_z z$ which can excite states with $J^P=0^-$ and $2^-$.
The strength function for the spin dipole mode calculated in the TDDMA
is shown in Fig. \ref{fig14} for $N=2$ and $C=0.8$. Figure \ref{fig14}
indicates that the spin dipole mode becomes quite soft for large interaction strength.
We have checked that the peaks at $E/\omega=0.6$ and $1.1$ have $J^P=2^-$ and $0^-$, respectively.
The tensor part of the dipole-dipole interaction is again responsible for the splitting.
In the TDHFA the spin dipole mode is unstable and is not shown in Fig. \ref{fig14}.
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{mgfig15.eps}
\end{center}
\caption{Strength function for the spin dipole mode calculated in the TDDMA for $N=2$ and $C=0.8$.}
\label{fig14}
\end{figure}
The spin-orbit mode excited by the operator $({\bm \sigma}\times {\bm r})_z$, which corresponds to $L=1$, $S=1$ and $J^P=1^-$, has zero excitation energy
at $C=0.8$. We show in Fig. \ref{fig15} the time evolution of $\langle\Phi_0|({\bm \sigma}\times {\bm r})_z|\Phi_0\rangle$ calculated
in the TDDMA; it is difficult to calculate the strength function here because the Fourier transformation would require the TDDMA calculation over quite a long period of time.
The time evolution shows the process toward magnetization induced by the small external field $k({\bm \sigma}\times {\bm r})_z\delta(t-T)$.
Figure \ref{fig15} also shows that the magnetization process is accompanied by a small oscillation with frequency $\approx 2\omega$.
\begin{figure}
\begin{center}
\includegraphics[height=6cm]{mgfig16.eps}
\end{center}
\caption{Time evolution of the spin-orbit moment $\langle\Phi_0|({\bm \sigma}\times {\bm r})_z|\Phi_0\rangle$ (in arbitrary units) calculated in the TDDMA.}
\label{fig15}
\end{figure}
From the above study of the spin excitations in the TDHFA, we can conclude that in a dipolar gas with a spin-symmetric and spherical
closed-shell configuration the instability occurs first in the $J^P=1^-$ mode, followed by
the $J^P=2^-$ mode and the $J^P=1^+$ mode, as $C$ increases. As mentioned above, the difference in the order of the unstable modes
between our result and the result for infinite systems \cite{sogo,li}
is explained by the trapping potential.
The fact that the spin modes calculated in the TDDMA do not show instabilities in the interaction regions where those in the TDHFA do
also suggests that quantum fluctuations (the ground-state correlations and configuration mixing) push the instabilities to stronger interaction regions.
\section{Summary}
The ground state and collective excitations of an $N=2$ dipolar fermion gas were studied using the time-dependent density-matrix
approach (TDDMA), which provides an alternative way of obtaining the exact solutions. In this approach the physical
observables are calculated directly from the one-body and two-body density matrices,
and the approach has a clear relation to the time-dependent Hartree-Fock theory.
By comparing with the TDDMA results, which correspond to the exact solutions, we can investigate the effects of quantum fluctuations that are
missing in the mean-field approaches.
It was shown that the magnetization associated with the instability against the Rashba-like spin-orbit mode sets in first
and that such magnetization can also occur in heavier systems. Comparison with the exact solutions suggests that the instabilities given by
the Hartree-Fock approximation are shifted to stronger interaction
regimes by quantum fluctuations.
It was pointed out that the tensor properties of the dipole-dipole interaction can be revealed in the excitations associated with the spin degrees of
freedom.
For numerical reasons we were forced to work with rather small configuration spaces, and the results
in the TDDMA are not completely converged. We also showed that enlarging the space does not qualitatively
change the results. Therefore, we consider our results to be semi-quantitatively correct.
\begin{acknowledgments}
The author would like to thank Dr. P. Schuck for valuable discussions and critical reading of the manuscript.
\end{acknowledgments}
namespace Microsoft.Azure.Management.NetApp.Models
{
/// <summary>
/// Defines values for MirrorState.
/// </summary>
public static class MirrorState
{
public const string Uninitialized = "Uninitialized";
public const string Mirrored = "Mirrored";
public const string Broken = "Broken";
}
}
\section*{Introduction and notations}
In their ICM talk \cite{SU06}, Skinner and Urban outline a program to connect the order of vanishing of the $L$-functions of certain
polarized regular motives with the rank of the associated Bloch-Kato Selmer groups.
Their strategy is to deform the motives along certain $p$-adic eigenfamilies of Galois representations to construct the expected extensions.
They introduce the notion of \emph{finite slope families} to encode the local properties of these $p$-adic families. One may view finite slope families as generalizations of the $p$-adic families arising from the Coleman-Mazur eigencurve, which are formulated as weakly refined families by Bellaiche-Chenevier \cite{BC06}, in the sense that a finite slope family may have \emph{multiple} constant Hodge-Tate weights $k_1,\dots, k_r\in\mathbb{Z}$ and a Zariski dense subset of crystalline points which have prescribed crystalline periods with Hodge-Tate numbers $k_1,\dots,k_r$. Skinner and Urban then use the (unproved) analytic continuation of these crystalline periods to deduce that the expected extensions lie in the Selmer groups. Most recently, Harris, Lan, Taylor and Thorne constructed Galois representations for (non-self-dual) regular algebraic cuspidal automorphic representations of $\mathrm{GL}(n)$ over CM fields \cite{HLTT}. Their construction also involves $p$-adic deformations, and it turns out that these Galois representations live in certain $p$-adic families which generalize Skinner-Urban's finite slope families by replacing crystalline periods with semi-stable periods. Furthermore, to show that the Galois representations constructed by them are geometric, as predicted by the philosophy of the Langlands correspondence, one needs the analytic continuation of semi-stable periods for these families.
In this paper, we use the notion of finite slope families to encode the local properties of the $p$-adic families of Galois representations in \cite{HLTT}; this generalizes the original definition of Skinner-Urban. Our main result is the analytic continuation of semi-stable periods for such families, which provides a necessary ingredient for Skinner-Urban's ICM program. In addition, we recently learned from Taylor that, in an ongoing project, Ila Varma will establish the aforementioned geometric properties of Galois representations based on the results of this paper and a previous paper of ours \cite{L12}. We also note that Shah has recently proved some results on interpolating Hodge-Tate and de Rham periods in families of $p$-adic Galois representations which may be applied in some related situations \cite{S}.
Since the $p$-adic families over the Coleman-Mazur eigencurve are special cases of finite slope families, our result generalizes the well-known theorem of Kisin on the analytic continuation of crystalline periods for such families \cite{Ki03}. However, even in the crystalline case, our strategy and techniques are completely different from his. In fact, Kisin's original work, as well as our recent enhancement of it \cite{L12}, relies crucially on the fact that the families in question have only one constant Hodge-Tate weight, which is not the case for general finite slope families. Instead, the work presented in this paper is inspired by the work of Berger and Colmez on families of de Rham representations \cite{BC07} and that of Kedlaya, Pottharst and Xiao on the cohomology of families of $\m$-modules \cite{KPX}. For a finite slope family, adapting the techniques of \cite{KPX}, we first cut out, after a proper and surjective base change, a subfamily of $\m$-modules which is expected to be generated by the desired semi-stable periods. We then develop a theory of families of Hodge-Tate and de Rham $\m$-modules with bounded Hodge-Tate weights. Finally, we prove analogues of the results of Berger-Colmez for such families of $\m$-modules, and use them to conclude that the subfamily of $\m$-modules is semi-stable.
In the remainder of this introduction, we give more precise statements about our results.
We fix a finite extension $K$ of $\Q$.
Let $K_0$ be the maximal unramified sub-extension of $K$, and let $f=[K_0:\Q]$.
\begin{defn}\label{def:fs}
Let $X$ be a reduced and separated rigid analytic space over $K$. A \emph{finite slope family} of $p$-adic representations of dimension $d$ over $X$ is a locally free coherent $\OO_X$-module $V_X$ of rank $d$ equipped with a continuous $G_K$-action, together with the following data:
\begin{enumerate}
\item[(1)]a positive integer $c$,
\item[(2)]a monic polynomial $Q(T)\in\OO_X(X)[T]$ of degree $m$ with unit constant term,
\item[(3)]a subset $Z$ of $X$ such that for all $z$ in $Z$, $V_z$ is semi-stable with non-positive Hodge-Tate weights, and for all $B\in\mathbb{Z}$ the set of $z$ in $Z$ such
that $V_z$ has $d-c$ Hodge-Tate weights less than $B$ is Zariski dense in $X$,
\item[(4)]for $z\in Z$, a $K_0\otimes_{\Q}k(z)$-direct summand $\mathcal{F}_{z}$ of $D^+_{\mathrm{st}}(V_z)$ which is free of rank $c$ and stable under $\varphi$ and $N$ such that $\varphi^f$ has characteristic polynomial $Q(z)(T)$ and all Hodge-Tate weights of $\mathcal{F}_z$ lie in $[-b,0]$ for some $b$ which is independent of $z$.
\end{enumerate}
\end{defn}
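One may keep in mind the case $c=1$: then each crystalline point $z\in Z$ carries a single prescribed crystalline period, with $\varphi^f$-eigenvalue interpolated by $Q$, and one essentially recovers the weakly refined families of Bellaiche-Chenevier \cite{BC06} mentioned above; the general case allows a rank $c$ block of such periods with prescribed Frobenius and bounded Hodge-Tate weights.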
Our main results are as follows.
\begin{theorem}\label{thm:main}
Let $V_X$ be a finite slope family over $X$. Then there exists a surjective proper morphism $X'\ra X$ so that $(K\otimes_{K_0}D^+_{\mathrm{st}}(V_{X'}))^{Q(\varphi)=0}$ has a rank $c$ locally free coherent $K_0\otimes_{\Q}\OO_{X'}$-submodule which specializes to a rank $c$ free $K_0\otimes_{\Q}k(x)$-submodule in $\D_\rig^\dag(V_x)$ for any $x\in X'$. As a consequence, $D^+_{\mathrm{st}}(V_x)^{Q(\varphi)(x)=0}$ has a free $K_0\otimes_{\Q}k(x)$-submodule of rank $c$ for any $x\in X$.
\end{theorem}
The following corollary is clear.
\begin{cor}
Let $V_X$ be a finite slope family over $X$. If $V_z$ is crystalline for any $z\in Z$, then there exists a surjective proper morphism $X'\ra X$ so that $(K\otimes_{K_0}D^+_{\mathrm{crys}}(V_{X'}))^{Q(\varphi)=0}$ has a rank $c$ locally free coherent $K_0\otimes_{\Q}\OO_{X'}$-submodule which specializes to a rank $c$ free $K_0\otimes_{\Q}k(x)$-submodule in $\D_\rig^\dag(V_x)$ for any $x\in X'$. As a consequence, $D^+_{\mathrm{crys}}(V_x)^{Q(\varphi)(x)=0}$ has a free $K_0\otimes_{\Q}k(x)$-submodule of rank $c$ for any $x\in X$.
\end{cor}
\section*{Acknowledgements}
Thanks to Christopher Skinner, Richard Taylor and Ila Varma for useful communications. We especially thank Richard Taylor for suggesting a more concise definition of finite slope families.
\section{Families of $\m$-modules}
\begin{defn}
Let $A$ be a Banach algebra over $\Q$. For $s>0$, a \emph{$\varphi$-module} over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A$ is a finite projective $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A$-module $D_A^s$ equipped with an isomorphism
$$\varphi^*D_A^s\cong D_A^s\otimes_{\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A}\mathbf{B}^{\dag,ps}_{\rig,K}\widehat{\otimes}_{\Q}A.$$ A \emph{$\varphi$-module} $D_A$ over $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}A$ is the base change to $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}A$ of a $\varphi$-module $D_A^s$ over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A$ for some $s>0$.
A \emph{$\m$-module} over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A$ is a $\varphi$-module $D_A^s$ over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A$ equipped with a commuting semilinear continuous action of $\Gamma$. A \emph{$\m$-module} $D_A$ over $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}A$ is the base change to $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}A$ of a $\m$-module $D_A^s$ over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}A$ for some $s>0$.
\end{defn}
\begin{notation}
For a morphism $A\ra B$ of Banach algebras over $\Q$, we denote by $D^s_B$ (resp. $D_B$) the base change of $D^s_A$ (resp. $D_A$) to $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}B$ (resp. $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}B$). In the case when $A=S$ is an affinoid algebra over $\Q$ and $x\in M(S)$, we denote $D^s_{k(x)}$ (resp. $D_{k(x)}$) by $D_x^s$ (resp. $D_x$) instead.
\end{notation}
Let $S$ be an affinoid algebra over $\Q$. Recall that for sufficiently large $s$, a vector bundle over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}S$ consists of one finite flat module $D_S^{[s_1,s_2]}$ over each ring $\mathbf{B}^{[s_1,s_2]}_K\widehat{\otimes}_{\Q}S$ with $s\leq s_1\leq s_2$, together with isomorphisms
\[
D_S^{[s_1,s_2]}\otimes_{\mathbf{B}^{[s_1,s_2]}_{K}\widehat{\otimes}_{\Q}S}
\mathbf{B}^{[s_1',s_2']}_{K}\widehat{\otimes}_{\Q}S\cong D_S^{[s'_1,s'_2]}
\]
for all $s\leq s_1'\leq s_1\leq s_2\leq s_2'$ satisfying the cocycle conditions. A $\varphi$-bundle over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}S$ is a vector bundle $(D_S^{[s_1,s_2]})_{s\leq s_1\leq s_2}$ over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}S$ equipped with isomorphisms $\varphi^*D_S^{[s_1,s_2]}\cong D_S^{[ps_1,ps_2]}$ for all $s/p\leq s_1\leq s_2$ satisfying the obvious compatibility conditions. When $s$ is sufficiently large, by \cite[Proposition 2.2.7]{KPX}, the natural functor from the category of $\varphi$-modules over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}S$ to the category of $\varphi$-bundles over $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}S$ is an equivalence of categories. Note that by its definition, one can glue $\varphi$-bundles over separated rigid analytic spaces. Therefore this equivalence of categories enables us to introduce the following definition.
\begin{defn}
Let $X$ be a separated rigid analytic space over $\Q$. A family of $\m$-modules $D_X$ over $X$ is a compatible family of $\m$-modules $D_S$ over $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}S$ for each affinoid subdomain $M(S)$ of $X$.
\end{defn}
The following theorem follows from \cite{BC07}, \cite{KL10} and \cite{L12}.
\begin{theorem}
Let $A$ be a Banach algebra over $\Q$, and let $V_A$ be a finite locally free $A$-linear representation of $G_K$. Then there is a $\m$-module $\D_\rig^\dag(V_A)$ over $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}A$ functorially associated to $V_A$. The rule $V_A\mapsto \D_\rig^\dag(V_A)$ is fully faithful and exact, and it commutes with base change in $A$.
\end{theorem}
Let $A$ be a Banach algebra over $K_0$. Recall that one has a canonical decomposition
\[
A\otimes_{\Q}K_0\cong\prod_{\sigma\in\mathrm{Gal}(K_0/\Q)}A_{\sigma}
\]
where each $A_{\sigma}$ is the base change of $A$ by the automorphism $\sigma$. Furthermore, the $\mathrm{Gal}(K_0/\Q)$-action permutes all $A_\sigma$'s in the way that $\tau(A_\sigma)=A_{\tau\sigma}$. For any $a\in A^\times$, we equip $A\otimes_{\Q}{K_0}$ with a $\varphi\otimes 1$-semilinear action $\varphi$ by setting
\[
\varphi((x_1,x_{\varphi},\dots, x_{\varphi^{f-1}}))=(ax_{\varphi^{f-1}},x_1,\dots,x_{\varphi^{f-2}})
\]
where $x_{\sigma}\in A_{\sigma}$ for each $\sigma\in\mathrm{Gal}(K_0/\Q)$; we denote this $\varphi$-module by $D_a$. It is clear that the $\varphi$-action on $D_a$ satisfies $\varphi^f=1\otimes a$.
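To see the last assertion, note that $a\in A$ is fixed by $\varphi\otimes1$, so iterating the displayed formula $f$ times cycles each coordinate through all $f$ slots, picking up a single factor of $a$ as it passes through the first slot; for instance, for $f=2$,
\[
\varphi^2((x_1,x_\varphi))=\varphi((ax_\varphi,x_1))=(ax_1,ax_\varphi),
\]
and in general $\varphi^f((x_1,x_{\varphi},\dots,x_{\varphi^{f-1}}))=(ax_1,ax_{\varphi},\dots,ax_{\varphi^{f-1}})=(1\otimes a)\cdot(x_1,x_{\varphi},\dots,x_{\varphi^{f-1}})$.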
We fix a uniformizer $\pi_K$ of $K$.
\begin{defn} For any continuous character $\delta:K^\times\ra A^\times$, we associate to it a rank 1 $(\varphi,\Gamma)$-module $(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A)(\delta)$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A$ as follows. If $\delta|_{\OO_K^\times}=1$, we set $(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A)(\delta)=(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A)
\otimes_{A\otimes_{\Q}{K_0}}D_{\delta(\pi_K)}$ where we equip $D_{\delta(\pi_K)}$ with the trivial $\Gamma$-action. For general $\delta$, we write $\delta=\delta'\delta''$ with $\delta'(\pi_K)=1$ and $\delta''|_{\OO_K^\times}=1$. We view $\delta'$ as an $A$-valued character of $W_K$ via local class field theory, and extend it continuously to a character of $G_K$. We then set
\[
(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A)(\delta)=\D_\rig^\dagger(\delta')
\otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A}
(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A)(\delta'').
\]
For a $(\varphi,\Gamma)$-module $D_A$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A$, we put $D_A(\delta)=D_A\otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A}
(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}A)(\delta)$.
Let $X$ be a separated rigid analytic space over $\Q$. For a continuous character $\delta:K^\times\ra \OO(X)^\times$ and a family of $\m$-modules $D_X$ over $X$, we define the families of $\m$-modules $(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}\OO_X)(\delta)$ and $D_X(\delta)$ by gluing $(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S)(\delta)$ and $D_S(\delta)$ respectively, over all affinoid subdomains $M(S)$ of $X$.
\end{defn}
\section{Cohomology of families of $\m$-modules}
Let $\Delta_K$ be the $p$-torsion subgroup of $\Gamma$, and choose $\gamma_K\in\Gamma$ whose image in $\Gamma/\Delta_K$ is a topological generator.
\begin{defn}
Let $S$ be an affinoid algebra over $\Q$. For a $\m$-module $D_S$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$, we define the Herr complex $C^\bullet_{\varphi,\gamma_K}(D_S)$ of $D_S$, concentrated in degrees $[0,2]$, as follows:
\[
C^{\bullet}_{\varphi,\gamma_K}(D_S)=
[D_S^{\Delta_K}\stackrel{d_{1}}{\longrightarrow}D_S^{\Delta_K}\oplus D_S^{\Delta_K}
\stackrel{d_{2}}{\longrightarrow}D_S^{\Delta_K}]
\]
with $d_1(x) = ((\gamma_K - 1)x, (\varphi - 1)x)$ and $d_2(x,y) =
(\varphi - 1)x - (\gamma_K - 1)y$. One shows that this complex is independent of the choice of $\gamma_K$ up to canonical quasi-isomorphism. Its cohomology groups are denoted by $H^\bullet(D_S)$.
\end{defn}
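Note that $d_2\circ d_1=0$, so this is indeed a complex: since the $\varphi$-action and the $\Gamma$-action on $D_S$ commute, for any $x\in D_S^{\Delta_K}$ we have
\[
d_2(d_1(x))=(\varphi-1)(\gamma_K-1)x-(\gamma_K-1)(\varphi-1)x=0.
\]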
By the main result of \cite{KPX}, $H^i(D_S)$ is a finite $S$-module and its formation commutes with flat base change in $S$. This enables us to define a cohomology theory for families of $\m$-modules over general rigid analytic spaces.
\begin{defn}
Let $X$ be a separated rigid analytic space over $\Q$, and let $D_X$ be a family of $\m$-modules over $X$. We define $H^\bullet(D_X)$ to be the cohomology of the complex
\[
C^{\bullet}_{\varphi,\gamma_K}(D_X)=
[D_X^{\Delta_K}\stackrel{d_{1}}{\longrightarrow}D_X^{\Delta_K}\oplus D_X^{\Delta_K}
\stackrel{d_{2}}{\longrightarrow}D_X^{\Delta_K}]
\]
with $d_1(x) = ((\gamma_K - 1)x, (\varphi - 1)x)$ and $d_2(x,y) =
(\varphi - 1)x - (\gamma_K - 1)y$. For each $0\leq i\leq 2$, $H^i(D_X)$ is therefore the coherent $\OO_X$-module obtained by gluing $H^i(D_S)$ for all affinoid subdomains $M(S)$ of $X$.
\end{defn}
As a consequence of the finiteness of the cohomology of families of $\m$-modules, a standard argument shows that locally on $X$, the complex $C^{\bullet}_{\varphi,\gamma_K}(D_X)$ is quasi-isomorphic to a complex of locally free coherent sheaves concentrated in degrees $[0,2]$. This enables us to flatten the cohomology of families of $\m$-modules by blowing up the base $X$. The following lemma is a rearrangement of some arguments in \cite[\S6]{KPX}.
\begin{lemma}\label{lem:modification}
Let $X$ be a reduced, separated and irreducible rigid analytic space over $K$, and let $D_X$ be a family of $\m$-modules of rank $d$ over $X$. Then the following are true.
\begin{enumerate}
\item[(1)]There exists a proper birational morphism $\pi:X'\ra X$ of reduced rigid analytic spaces over $K$ so that $H^0(D_{X'})$ is flat and $H^i(D_{X'})$ has Tor-dimension $\leq 1$ for each $i=1,2$.
\item[(2)]Suppose that $D'_{X}$ is a family of $\m$-modules over $X$ of rank $d'$, and that $\lambda: D'_X\ra D_X$ is a morphism between them such that for any $x\in X$, the image of $\lambda_x$ is a $\m$-submodule of rank $d$ of $D_x$. Then there exists a proper birational morphism $\pi:X'\ra X$ of reduced rigid analytic spaces over $K$ so that the cokernel of $\pi^*\lambda$ has Tor-dimension $\leq 1$.
\end{enumerate}
\end{lemma}
\begin{proof}
The upshot is that for a bounded complex $(C^\bullet,d^\bullet)$ of locally free coherent sheaves on $X$, there exists a blow up $\pi:X'\ra X$, which depends only on the quasi-isomorphism class of $(C^\bullet,d^\bullet)$, so that $\pi^*d^i$ has flat image for each $i$. Furthermore, the construction of $X'$ commutes with dominant base change in $X$ (see \cite[Corollary 6.2.5]{KPX} for more details). Thus for (1), we can construct $X'$ locally and then glue. For (2),
let $Q_X$ denote the cokernel of $\lambda$. For any $x\in X$, since the image of $\lambda_x$ is a $\m$-submodule of rank $d$, by \cite[Lemma 5.3.1]{L12}, we get that $Q_x$ is killed by a power of $t$. Now let $M(S)$ be an affinoid subdomain of $X$, and suppose that $D_S^s$ and $D'^s_S$ are defined for some suitable $s>0$. For $r>s$, set $Q_S^{[s,r]}=D^{[s,r]}_S/\lambda(D'^{[s,r]}_S)$. Since for any $x\in M(S)$, the fiber of $Q_S^{[s,r]}$ at $x$ is killed by a power of $t$, we get that $Q_S^{[s,r]}$ is killed by $t^k$ for some $k>0$. This yields that $Q_S^{[s,r]}$ is a finite $S$-module. Now we apply \cite[Corollary 6.2.5(1)]{KPX} to a finite presentation of $Q_S^{[s,ps]}$ to get a blow up $Y$ of $M(S)$ so that the pullback of $Q_S^{[s,ps]}$ has Tor-dimension $\leq1$. Using the fact $(\varphi^n)^*Q_S^{[s,ps]}\cong Q_S^{[p^ns,p^{n+1}s]}$, we see that $Y$ is also the blow up obtained by applying \cite[Corollary 6.2.5(1)]{KPX} to a finite presentation of $Q_S^{[s,p^{n+1}s]}$ for any positive integer $n$. It therefore follows that for any $r>s$, the pullback of $Q_S^{[s,r]}$ has Tor-dimension $\leq 1$; hence the pullback of $Q_S$ has Tor-dimension $\leq 1$. Furthermore, the blow ups for all affinoid subdomains $M(S)$ glue to form a blow up $X'$ of $X$ which satisfies the desired condition.
\end{proof}
\begin{lemma}\label{lem:ker-birational}
Let $X$ be a reduced, separated and irreducible rigid analytic space over $K$. Let $D'_X$ and $D_{X}$ be families of $\m$-modules over $X$ of ranks $d'$ and $d$ respectively, and let $\lambda: D'_X\ra D_X$ be a morphism between them. Suppose that for any $x\in X$, the image of $\lambda_x$ is a $\m$-submodule of rank $d$ of $D_x$. Then there exists a proper birational morphism $\pi:X'\ra X$ of reduced rigid analytic spaces over $K$ such that the kernel of $\pi^*\lambda$ is a family of $\m$-modules of rank $d'-d$ over $X'$, and there exists a Zariski open dense subset $U\subset X'$ such that $(\ker(\pi^*\lambda))_x=\ker((\pi^*\lambda)_x)$ for any $x\in U$.
\end{lemma}
\begin{proof}
Let $Q_X$ be the cokernel of $\lambda$. By the previous lemma, after replacing $X$ by a proper birational modification, we may suppose that $Q_X$ has Tor-dimension $\leq1$. Now let $P_X$ denote the kernel of $\lambda$. For any $x\in X$, the Tor spectral sequence computing the cohomology of the complex $[D'_{X}\stackrel{\lambda}{\longrightarrow}D_{X}]\otimes^{\mathbf{L}}_{\OO_{X}}k(x)$ gives rise to a short exact sequence
\[
0\longrightarrow P_x\longrightarrow\ker(\lambda_x)\longrightarrow\mathrm{Tor}_1(Q_X,k(x))\longrightarrow0.
\]
Since the image of $\lambda_x$ is a $\m$-module of rank $d$, $\ker(\lambda_x)$ is a $\m$-module of rank $d'-d$. Since $Q_X$ is killed by a power of $t$ locally on $X$, we get that the last term of the exact sequence is killed by a power of $t$. This yields that $P_x$ is a $\m$-module of rank $d'-d$. We therefore conclude that $P_X$ is a family of $\m$-modules of rank $d'-d$ over $X$ by \cite[Corollary 2.1.9]{KPX}. Furthermore, since $Q_X$ has Tor-dimension $\leq1$, by \cite[Lemma 6.2.7]{KPX}, we get that the set of $x\in X$ for which $\mathrm{Tor}_1(Q_X,k(x))\neq0$ forms a nowhere dense Zariski closed subset of $X$; this yields the rest of the lemma.
\end{proof}
The following proposition modifies part of \cite[Theorem 6.2.9]{KPX}.
\begin{prop}\label{prop:cohomology}
Let $X$ be a reduced, separated and irreducible rigid analytic space over $K$. Let $D_X$ be a family of $\m$-modules of rank $d$ over $X$, and let $\delta:K^\times\ra \OO(X)^\times$ be a continuous character. Suppose that there exist a Zariski dense subset $Z$ of closed points of $X$ and a positive integer $c\leq d$ such that for every $z\in Z$, $H^0(D_z^{\vee}(\delta_z))$ is a
$c$-dimensional $k(z)$-vector space.
Then there exists a proper birational morphism $\pi:X'\ra X$ of reduced rigid analytic spaces over $K$ and a morphism $\lambda: D_{X'}\ra M_{X'}=(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}\OO_{X'})(\delta)\otimes_{\OO_{X'}}L$ of $\m$-modules, where $L$ is a locally free coherent $\OO_{X'}$-module of rank $c$ equipped with trivial $\varphi,\Gamma$-actions, such that
\begin{enumerate}
\item[(1)]for any $x\in X'$, the image of $\lambda_{x}$ is a $\m$-submodule of rank $c$;
\item[(2)]the kernel of $\lambda$ is a family of $\m$-modules of rank $d-c$ over $X'$, and there exists a Zariski open dense subset $U\subset X'$ such that $(\ker\lambda)_x=\ker(\lambda_x)$ for any $x\in U$.
\end{enumerate}
\end{prop}
\begin{proof}
Using Lemma \ref{lem:modification}, we first choose a proper birational morphism $\pi:X'\ra X$ with $X'$ reduced such that $N_{X'}=\pi^*(D^{\vee}_{X}(\delta))$ satisfies the conditions that $H^0(N_{X'})$ is flat and $H^i(N_{X'})$ has Tor-dimension $\leq 1$ for each $i=1,2$. Then for any $x\in X'$, the base change spectral sequence $E^{i,j}_2=\mathrm{Tor}_{-i}(H^j(N_{X'}),k(x))\Rightarrow H^{i+j}(N_x)$ gives a short exact sequence
\[
0\longrightarrow H^0(N_{X'})\otimes_{\OO_{X'}}k(x)\longrightarrow H^0(N_x)\longrightarrow \mathrm{Tor}_1(H^1(N_{X'}),k(x))\longrightarrow0.
\]
As $H^1(N_{X'})$ has Tor-dimension $\leq1$, by \cite[Lemma 6.2.7]{KPX}, the set of $x\in X'$ for which the last term of the above exact sequence does not vanish forms a nowhere dense Zariski closed subset $V$. For any $z\in\pi^{-1}(Z)\setminus V$, we deduce from the above exact sequence that $H^0(N_{X'})\otimes_{\OO_{X'}}k(z)$ is a $c$-dimensional $k(z)$-vector space. Since $H^0(N_{X'})$ is flat and $\pi^{-1}(Z)\setminus V$ is a Zariski dense subset of $X'$, we get that $H^0(N_{X'})$ is locally free of constant rank $c$. Let $L$ be its dual; then the natural map $(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}\OO_{X'})\otimes_{\OO_{X'}}H^0(N_{X'})\ra N_{X'}$
gives a map $\lambda:D_{X'}\ra M_{X'}=(\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}\OO_{X'})(\delta)\otimes_{\OO_{X'}}L$. For any $x\in X'$, since the map $H^0(N_{X'})\otimes_{\OO_{X'}}k(x)\longrightarrow H^0(N_x)$ is injective, we get that the image of $\lambda_x$ is a rank $c$ $\m$-submodule of $M_x$. We thus conclude the proposition using the previous lemma.
\end{proof}
\section{Families of Hodge-Tate $\m$-modules}
From now on, let $S$ be a reduced affinoid algebra over $K$.
\begin{defn}\label{def:HT}
Let $D_S$ be a $\m$-module of rank $d$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$. For any positive integer $n$, if $D_S^{r_n}$ is defined, we set
\[
\D^n_{\Sen}(D_S)=D_S^{r_n}\otimes_{\mathbf{B}^{\dag,r_n}_{\rig,K}\widehat{\otimes}_{\Q}S}(K_n\otimes_{\Q}S).
\]
We call $D_S$ \emph{Hodge-Tate with Hodge-Tate weights in $[a,b]$} if
there exists a positive integer $n$ such that
the natural map
\begin{equation}\label{eq:def-HT}
(\oplus_{a\leq i\leq b}\D^n_\Sen(D_S(-i)))^\Gamma\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[t,t^{-1}]\longrightarrow \oplus_{i\in\mathbb{Z}}\D_\Sen^n(D_S(-i))
\end{equation}
is an isomorphism. We denote by $h_{HT}(D_S)$ the smallest $n$ which satisfies this condition, and we define $D_{\mathrm{HT}}(D_S)=(\oplus_{a\leq i\leq b}\D^{h_{HT}(D_S)}_\Sen(D_S(-i)))^\Gamma$.
\end{defn}
\begin{lemma}\label{lem:HT-inv}
Let $D_S$ be a Hodge-Tate $\m$-module over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$. Then for any $n\geq h_{HT}(D_S)$, (\ref{eq:def-HT}) is an isomorphism and $\D_\Sen^n(D_S(-i))^{\Gamma}=\D_\Sen^{h_{HT}(D_S)}(D_S(-i))^{\Gamma}$ for any $i\in [a,b]$. As a consequence, we have
$(\oplus_{a\leq i\leq b}\D_\Sen^n(D_S(-i)))^\Gamma=D_{\mathrm{HT}}(D_S)$.
\end{lemma}
\begin{proof}
Tensoring both sides of the isomorphism
\[
(\oplus_{a\leq i\leq b}\D^{h_{HT}(D_S)}_\Sen(D_S(-i)))^\Gamma\otimes_{K\otimes_{\Q}S}(K_{h_{HT}(D_S)}
\otimes_{\Q}S)[t,t^{-1}]\longrightarrow \oplus_{i\in\mathbb{Z}}\D_\Sen^{h_{HT}(D_S)}(D_S(-i))
\]
with $(K_{n}\otimes_{\Q}S)[t,t^{-1}]$, we get that the natural map
\[
(\oplus_{a\leq i\leq b}\D^{h_{HT}(D_S)}_\Sen(D_S(-i)))^\Gamma\otimes_{K\otimes_{\Q}S}(K_{n}\otimes_{\Q}S)[t,t^{-1}]\longrightarrow \oplus_{i\in\mathbb{Z}}\D_\Sen^{n}(D_S(-i))
\]
is an isomorphism. Taking $\Gamma$-invariants on both sides, we get
\[
(\oplus_{a\leq i\leq b}\D^{h_{HT}(D_S)}_\Sen(D_S(-i)))^\Gamma=(\oplus_{a\leq i\leq b}\D^{n}_\Sen(D_S(-i)))^\Gamma.
\]
This yields the lemma.
\end{proof}
\begin{remark}
If $D_S$ is Hodge-Tate with weights in $[a,b]$, taking $\Gamma$-invariants on both sides of (\ref{eq:def-HT}), we see that $\D^n_\Sen(D_S(-i))^{\Gamma}=0$ for any $n\geq h_{HT}(D_S)$ and $i\notin [a,b]$.
\end{remark}
\begin{lemma}\label{lem:HT}
If $D_S$ is a Hodge-Tate $\m$-module over $\mathbf{B}^\dag_{\rig,K}\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$, then for any morphism $S\ra R$ of affinoid algebras over $K$, $D_R$ is Hodge-Tate with weights in $[a,b]$ and $h_{HT}(D_R)\leq h_{HT}(D_S)$. Furthermore, the natural map $\D^n_\Sen(D_S(i))^\Gamma\otimes_{S}R\ra\D^n_\Sen(D_R(i))^\Gamma$ is an isomorphism for any $i\in\mathbb{Z}$ and $n\geq h_{HT}(D_S)$. As a consequence, the natural map $D_{\mathrm{HT}}(D_S)\otimes_SR\ra D_{\mathrm{HT}}(D_R)$ is an isomorphism.
\end{lemma}
\begin{proof}
Let $n\geq h_{HT}(D_S)$. Tensoring with $R$ over $S$ on both sides of (\ref{eq:def-HT}), we get that the natural map
\[
(\oplus_{a\leq i\leq b}\D^n_\Sen(D_S(-i))^\Gamma\otimes_SR)\otimes_{K\otimes_{\Q}R}(K_n\otimes_{\Q}R)[t,t^{-1}]\longrightarrow \oplus_{i\in\mathbb{Z}}\D_\Sen^n(D_R(-i))
\]
is an isomorphism. Comparing $\Gamma$-invariants on both sides, we get that the natural map
\[
\D^n_\Sen(D_S(-i))^\Gamma\otimes_{S}R\ra\D^n_\Sen(D_R(-i))^\Gamma
\]
is an isomorphism for any $a\leq i\leq b$. This implies that the natural map
\[
(\oplus_{a\leq i\leq b}\D^n_\Sen(D_R(-i)))^\Gamma\otimes_{K\otimes_{\Q}R}(K_n\otimes_{\Q}R)[t,t^{-1}]\longrightarrow \oplus_{i\in\mathbb{Z}}\D_\Sen^n(D_R(-i))
\]
is an isomorphism. This proves the lemma.
\end{proof}
\begin{cor}
If $D_S$ is a Hodge-Tate $\m$-module of rank $d$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$, then $D_{\mathrm{HT}}(D_S)$ is a locally free coherent $K\otimes_{\Q}S$-module of rank $d$.
\end{cor}
\begin{proof}
By the previous lemma, it suffices to treat the case that $S$ is a finite extension of $K$; this is clear from the isomorphism (\ref{eq:def-HT}).
\end{proof}
\begin{defn}
Let $X$ be a reduced and separated rigid analytic space over $K$, and let $D_X$ be a family of $\m$-modules of rank $d$ over $X$. We call $D_X$ \emph{Hodge-Tate} with weights in $[a,b]$ if for some (hence any) admissible cover $\{M(S_i)\}_{i\in I}$ of $X$, $D_{S_i}$ is Hodge-Tate with weights in $[a,b]$ for any $i\in I$. We define $D_{\mathrm{HT}}(D_X)$ to be the gluing of all $D_{\mathrm{HT}}(D_{S_i})$'s.
\end{defn}
\begin{lemma}\label{lem:HT-criterion}
Let $D_S$ be a $\m$-module over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$, and let $n$ be a positive integer such that $D_S^{r_n}$ is defined. Then (\ref{eq:def-HT}) is an isomorphism for $n$ if and only if the natural map
\begin{equation}\label{eq:lem-HT}
\oplus_{a\leq i\leq b}\D_\Sen^n(D_S)^{\Gamma_n=\chi^i}\longrightarrow\D_\Sen^n(D_S)
\end{equation}
is an isomorphism.
\end{lemma}
\begin{proof}
For the ``$\Rightarrow$'' part, since (\ref{eq:def-HT}) is an isomorphism, we deduce that
\begin{equation}\label{eq:lem-HT-2}
\D_\Sen^n(D_S)=\oplus_{a\leq i\leq b}t^i\cdot\D^n_{\Sen}(D_S(-i))^\Gamma
\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S).
\end{equation}
Note that $t^i\cdot\D^n_{\Sen}(D_S(-i))^\Gamma\subseteq\D_\Sen^n(D_S)^{\Gamma_n=\chi^i}$. Hence (\ref{eq:lem-HT-2}) implies that (\ref{eq:lem-HT}) is surjective. On the other hand, it is clear that (\ref{eq:lem-HT}) is injective; hence it is an isomorphism. Conversely, suppose that (\ref{eq:lem-HT}) is an isomorphism. Note that
\[
\D_\Sen^n(D_S)^{\Gamma_n=\chi^i}=t^i\cdot\D_\Sen^n(D_S(-i))^{\Gamma_n}=(t^i\cdot\D_\Sen^n(D_S(-i))^\Gamma)
\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S),
\]
where the latter equality follows from \cite[Proposition 2.2.1]{BC07}. This implies that $D_S$ satisfies (\ref{eq:lem-HT-2}), yielding that $D_S$ satisfies (\ref{eq:def-HT}).
\end{proof}
\begin{prop}\label{prop:HT-family}
Let $D_S$ be a $\m$-module over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$. Suppose that there exists a Zariski dense subset $Z\subset M(S)$ such that $D_z$ is Hodge-Tate with weights in $[a,b]$ for any $z\in Z$ and $\sup_{z\in Z}\{h_{HT}(D_z)\}<\infty$. Then $D_S$ is Hodge-Tate with weights in $[a,b]$.
\end{prop}
\begin{proof}
Let $n\geq\sup_{z\in Z}\{h_{HT}(D_z)\}$ such that $D_S^n$ is defined, and let $\gamma$ be a topological generator of $\Gamma_n$. For any $a\leq i\leq b$, let $p_i$ denote the operator
$\prod_{a\leq j\leq b, j\neq i}\frac{\gamma-\chi^{j}(\gamma)}{\chi^i(\gamma)-\chi^j(\gamma)}$,
and let $M_i=p_i(\D_\Sen^n(D_S))$. It is clear that $p_i$ is the identity on $\D_{\Sen}^n(D_S)^{\Gamma_n=\chi^i}$; hence $\D_{\Sen}^n(D_S)^{\Gamma_n=\chi^i}\subseteq M_i$. On the other hand, for any $z\in Z$, since $D_z$ is Hodge-Tate with weights in $[a,b]$ and $h_{HT}(D_z)\leq n$, we deduce from Lemma \ref{lem:HT-criterion} that $p_i(\D_\Sen^n(D_z))=\D^n_\Sen(D_z)^{\Gamma_n=\chi^i}$. This implies that $M_i$ maps onto $\D^n_\Sen(D_z)^{\Gamma_n=\chi^i}$ under the specialization $\D_\Sen^n(D_S)\ra \D_\Sen^n(D_z)$. Since $Z$ is Zariski dense and $S$ is reduced, it follows that $(\gamma-\chi^i(\gamma))(M_i)$ vanishes; hence $M_i\subseteq\D^n_\Sen(D_S)^{\Gamma_n=\chi^i}$, and so $M_i=\D^n_\Sen(D_S)^{\Gamma_n=\chi^i}$.
Let $M=\oplus_{a\leq i\leq b}M_i$. We claim that the natural inclusion $M\subseteq \D_\Sen^n(D_S)$ is an isomorphism. In fact, for any $z\in Z$, since $\D_\Sen^n(D_z)=\oplus_{a\leq i\leq b}\D_\Sen^n(D_z)^{\Gamma_n=\chi^i}$, we have that $M$ maps onto $\D_\Sen^n(D_z)$. Thus $\D^n_\Sen(D_S)/M$ vanishes at $z$. We therefore conclude $\D^n_\Sen(D_S)/M=0$ because $Z$ is Zariski dense. By Lemma \ref{lem:HT-criterion} and the claim, we conclude that $D_S$ is Hodge-Tate with weights in $[a,b]$.
\end{proof}
\section{Families of de Rham $\m$-modules}
\begin{defn}\label{def:dR}
Let $D_S$ be a $\m$-module of rank $d$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$. For any positive integer $n$, if $D_S^{r_n}$ is defined, we set
\[
\D^{+,n}_{\dif}(D_S)=D_S^{r_n}\otimes_{\mathbf{B}^{\dag,r_n}_{\rig,K}\widehat{\otimes}_{\Q}S}(K_n\otimes_{\Q}S)[[t]], \qquad
\D^{n}_{\dif}(D_S)=\D^{+,n}_{\dif}(D_S)[1/t].
\]
We equip $\D_\dif^n(D_S)$ with the filtration $\mathrm{Fil}^i\D_\dif^n(D_S)=t^i\D_\dif^{+,n}(D_S)$. We call $D_S$ \emph{de Rham with weights in $[a,b]$} if there exists a positive integer $n$ such that
\begin{enumerate}
\item[(1)]
the natural map
\begin{equation}\label{eq:def-de Rham}
\D^n_\dif(D_S)^\Gamma\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]][1/t]\longrightarrow \D_\dif^n(D_S)
\end{equation}
is an isomorphism;
\item[(2)]$\mathrm{Fil}^{-b}(\D^n_\dif(D_S)^\Gamma)=\D^n_\dif(D_S)^\Gamma$ and $\mathrm{Fil}^{-a+1}(\D^n_\dif(D_S)^\Gamma)=0$,
where $\mathrm{Fil}^{i}(\D^n_\dif(D_S)^\Gamma)$ is the induced filtration on $\D^n_\dif(D_S)^\Gamma$.
\end{enumerate}
We denote by $h_{dR}(D_S)$ the smallest $n$ which satisfies these conditions, and we define $D_{\mathrm{dR}}(D_S)=\D^{h_{dR}(D_S)}_\dif(D_S)^\Gamma$.
\end{defn}
\begin{lemma}\label{lem:dR-inv}
Let $D_S$ be a de Rham $\m$-module over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$. Then for any $n\geq h_{dR}(D_S)$, we have $\D^n_\dif(D_S)^\Gamma=D_{\mathrm{dR}}(D_S)$.
\end{lemma}
\begin{proof}
Tensoring both sides of the isomorphism
\[
\D^{h_{dR}(D_S)}_\dif(D_S)^\Gamma\otimes_{K\otimes_{\Q}S}(K_{h_{dR}(D_S)}\otimes_{\Q}S)[[t]][1/t]\longrightarrow \D_\dif^{h_{dR}(D_S)}(D_S)
\]
with $(K_{n}\otimes_{\Q}S)[[t]][1/t]$, we get that the natural map
\[
\D^{h_{dR}(D_S)}_\dif(D_S)^\Gamma\otimes_{K\otimes_{\Q}S}(K_{n}\otimes_{\Q}S)[[t]][1/t]\longrightarrow \D_\dif^{n}(D_S)
\]
is an isomorphism. Comparing $\Gamma$-invariants on both sides, we get the desired result.
\end{proof}
\begin{lemma}\label{lem:dR-HT}
If $D_S$ is a de Rham $\m$-module of rank $d$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$, then $D_S$ is Hodge-Tate with weights in $[a,b]$ and $h_{HT}(D_S)\leq h_{dR}(D_S)$. Furthermore, we have $\mathrm{Gr}^iD_{\mathrm{dR}}(D_S)=\D_\Sen^n(D_S(i))^\Gamma$ under the identification $\mathrm{Gr}^i\D_\dif^n(D_S)=\D_\Sen^n(D_S(i))$ for any $n\geq h_{dR}(D_S)$.
\end{lemma}
\begin{proof}
Let $n\geq h_{dR}(D_S)$. Since (\ref{eq:def-de Rham}) is an isomorphism, we deduce that the natural map of graded modules
\begin{equation}\label{eq:lem-dR-HT}
\oplus_{i\in\mathbb{Z}}\mathrm{Gr}^iD_{\mathrm{dR}}(D_S)
\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[t,t^{-1}]\longrightarrow \oplus_{i\in\mathbb{Z}}\D_\Sen^n(D_S(i))
\end{equation}
is surjective. On the other hand, since $t^i\cdot\mathrm{Gr}^{-i}D_{\mathrm{dR}}(D_S)\subset \D_{\Sen}^n(D_S)$, we have that the natural map
\[
\oplus_{a\leq i\leq b}t^i\cdot\mathrm{Gr}^{-i}D_{\mathrm{dR}}(D_S)\ra \D_\Sen^n(D_S)
\]
is injective. This implies that (\ref{eq:lem-dR-HT}) is injective; hence it is an isomorphism. Comparing the $\Gamma$-invariants on both sides, we get $\mathrm{Gr}^iD_{\mathrm{dR}}(D_S)=\D_\Sen^n(D_S(i))^\Gamma$ for each $i\in\mathbb{Z}$. This proves the lemma.
\end{proof}
\begin{lemma}\label{lem:dR}
If $D_S$ is a de Rham $\m$-module over $\mathbf{B}^\dag_{\rig,K}\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$, then for any morphism $S\ra R$ of affinoid algebras over $K$, $D_R$ is de Rham with weights in $[a,b]$ and $h_{dR}(D_R)\leq h_{dR}(D_S)$. Furthermore, the natural maps $\mathrm{Fil}^i D_{\mathrm{dR}}(D_S)\otimes_{S}R\ra \mathrm{Fil}^iD_{\mathrm{dR}}(D_R)$ are isomorphisms for all $i\in \mathbb{Z}$.
\end{lemma}
\begin{proof}
Let $n\geq h_{dR}(D_S)$. Tensoring with $(K_n\otimes_{\Q}R)[[t]][1/t]$ on both sides of (\ref{eq:def-de Rham}), we get that the natural map
\begin{equation}\label{eq:lem-dR}
(\D^n_\dif(D_S)^\Gamma\otimes_S R)\otimes_{K\otimes_{\Q}R}(K_n\otimes_{\Q}R)[[t]][1/t]\longrightarrow \D_\dif^n(D_R)
\end{equation}
is an isomorphism. Comparing $\Gamma$-invariants on both sides of (\ref{eq:lem-dR}), we get that the natural map $\D^n_\dif(D_S)^\Gamma\otimes_{S}R\ra\D^n_\dif(D_R)^\Gamma$
is an isomorphism; hence $D_R$ is de Rham. Then by Lemmas \ref{lem:HT} and \ref{lem:dR-HT}, we deduce that the natural map
$\mathrm{Gr}^i(D_{\mathrm{dR}}(D_S))\otimes_SR\ra\mathrm{Gr}^i(D_{\mathrm{dR}}(D_R))$ is an isomorphism.
This implies the rest of the lemma.
\end{proof}
\begin{cor}
If $D_S$ is a de Rham $\m$-module of rank $d$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q} S$, then $D_{\mathrm{dR}}(D_S)$ is a locally free coherent $K\otimes_{\Q}S$-module of rank $d$.
\end{cor}
\begin{proof}
We first note that for each $i\in\mathbb{Z}$, $\mathrm{Gr}^i(D_{\mathrm{dR}}(D_S))$, which is isomorphic to $\D_\Sen^n(D_S(i))^\Gamma$ by Lemma \ref{lem:dR-HT}, is a coherent $K\otimes_{\Q}S$-module. We then deduce that $D_{\mathrm{dR}}(D_S)$ is a coherent $K\otimes_{\Q}S$-module. Using Lemma \ref{lem:dR}, it then suffices to treat the case that $S$ is a finite extension of $K$; this follows easily from the isomorphism (\ref{eq:def-de Rham}).
\end{proof}
\begin{defn}
Let $X$ be a reduced and separated rigid analytic space over $K$, and let $D_X$ be a family of $\m$-modules of rank $d$ over $X$. We call $D_X$ \emph{de Rham} if for some (hence any) admissible cover $\{M(S_i)\}_{i\in I}$ of $X$, $D_{S_i}$ is de Rham with weights in $[a,b]$ for any $i\in I$. We define $D_{\mathrm{dR}}(D_X)$ to be the gluing of all $D_{\mathrm{dR}}(D_{S_i})$'s.
\end{defn}
\begin{lemma}\label{lem:dR-weight}
If $D_S$ is a de Rham $\m$-module over $\mathbf{B}^\dag_{\rig,K}\widehat{\otimes}_{\Q}S$ of rank $d$ with weights in $[a,b]$, then $t^{-a}\D_\dif^{+,n}(D_S)\subset D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]]\subset t^{-b}\D_\dif^{+,n}(D_S)$ for any $n\geq h_{dR}(D_S)$.
\end{lemma}
\begin{proof}
Since $\mathrm{Gr}^{-b}D_{\mathrm{dR}}(D_S)=D_{\mathrm{dR}}(D_S)$, we get $D_{\mathrm{dR}}(D_S)\subset t^{-b}\D^{+,n}_\dif(D_S)$; hence $D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]]\subset t^{-b}\D_\dif^{+,n}(D_S)$. By the proof of Lemma \ref{lem:dR-HT}, we know that the natural map (\ref{eq:lem-dR-HT}) is an isomorphism of graded modules. By the facts that $\mathrm{Gr}^iD_{\mathrm{dR}}(D_S)=0$ for $i\geq -a+1$ and $\mathrm{Fil}^i\D_\dif^n(D_S)$ is $t$-adically complete, we thus deduce that $t^{-a}\D_\dif^{+,n}(D_S)\subset D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]]$.
\end{proof}
\begin{lemma}
Let $D_S$ be a Hodge-Tate $\m$-module over $\mathbf{B}^\dag_{\rig,K}\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$. Then for any $k\geq b-a+1$, $i\in[a,b]$, $n\geq h_{HT}(D_S)$ and $\gamma\in\Gamma_n$, the map $\gamma-\chi^i(\gamma):t^k\D_\dif^{+,n}(D_S)\ra t^k\D_\dif^{+,n}(D_S)$ is bijective.
\end{lemma}
\begin{proof}
Since $\D_\dif^{+,n}(D_S)$ is $t$-adically complete, it suffices to show that
\[
\gamma-1:t^k\D_\dif^{+,n}(D_S)/t^{k+1}\D_\dif^{+,n}(D_S)\ra t^k\D_\dif^{+,n}(D_S)/t^{k+1}\D_\dif^{+,n}(D_S)
\]
is bijective for any $k\geq b-a+1$. Note that $t^k\D_\dif^{+,n}(D_S)/t^{k+1}\D_\dif^{+,n}(D_S)$ is isomorphic to $\D_\Sen^n(D_S(k))$ as a $\Gamma$-module. Note that $\D^n_\Sen(D_S(k))=\oplus_{a\leq j\leq b}(\D^n_\Sen(D_S))^{\Gamma_n=\chi^{j+k}}$ by Lemma \ref{lem:HT-criterion}. Since $j+k\geq b+1$ for all $j\in [a,b]$, we deduce that $\gamma-\chi^i(\gamma)$ is bijective on $\D^n_\Sen(D_S(k))$.
\end{proof}
\begin{lemma}\label{lem:dR-criterion}
Let $D_S$ be a Hodge-Tate $\m$-module over $\mathbf{B}^\dag_{\rig,K}\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$. Then $D_S$ is de Rham if and only if there exists a positive integer $n\geq h_{HT}(D_S)$ such that $\prod_{i=a}^{2b-a}(\gamma-\chi(\gamma)^i)\D_\dif^{+,n}(D_S)\subset t^{b-a+1}\D_\dif^{+,n}(D_S)$. Furthermore, if this is the case, then (\ref{eq:def-de Rham}) holds for $n$.
\end{lemma}
\begin{proof}
Suppose that $D_S$ is de Rham. Let $n\geq h_{dR}(D_S)$, and put
\[
N=D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]].
\]
Since $D_S$ has weights in $[a,b]$, by Lemma \ref{lem:dR-weight}, we have $t^{-a}\D_\dif^{+,n}(D_S)\subset N\subset t^{-b}\D_\dif^{+,n}(D_S)$. On the other hand, by the construction of $N$, it is clear that $(\gamma-1)N\subset tN$. It therefore follows that
\[
\prod_{i=a}^{2b-a}(\gamma-\chi(\gamma)^i)\D_\dif^{+,n}(D_S)\subset
\prod_{i=a}^{2b-a}(\gamma-\chi(\gamma)^i)(t^aN)\subset t^{2b-a+1}N\subset t^{b-a+1}\D_\dif^{+,n}(D_S).
\]
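Here the middle containment can be checked on graded pieces: since $(\gamma-1)N\subset tN$ and $\gamma(t)=\chi(\gamma)t$, the element $\gamma$ acts on each quotient $t^kN/t^{k+1}N$ through the character $\chi^k$, so that
\[
(\gamma-\chi(\gamma)^i)(t^kN)\subset t^{k+1}N \quad\text{if}\quad i=k,\qquad (\gamma-\chi(\gamma)^i)(t^kN)\subset t^{k}N \quad\text{if}\quad i\neq k.
\]
Applying the factors for $i=a,\dots,2b-a$ successively to $t^aN$ thus annihilates the graded pieces in degrees $a,\dots,2b-a$, which gives the asserted inclusion in $t^{2b-a+1}N$.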
Now suppose $\prod_{i=a}^{2b-a}(\gamma-\chi(\gamma)^i)\D_\dif^{+,n}(D_S)\subset t^{b-a+1}\D_\dif^{+,n}(D_S)$ for some $n\geq h_{HT}(D_S)$. We claim that for any $j\in[a,b]$ and $v\in(\D^n_\Sen(D_S))^{\Gamma_n=\chi^j}$, we can lift $v$ to an element in $(\D_{\dif}^{+,n}(D_S))^{\Gamma_n=\chi^j}$. In fact, let $\tilde{v}$ be any lift of $v$ in $\D_\dif^{+,n}(D_S)$, and let $\tilde{w}=\prod_{a\leq i\leq 2b-a, i\neq j}\frac{\gamma-\chi^i(\gamma)}{\chi^j(\gamma)-\chi^i(\gamma)}\tilde{v}$ where $\gamma$ is a topological generator of $\Gamma_n$; it is clear that $\tilde{w}$ is also a lift of $v$. Furthermore, by assumption, we have $(\gamma-\chi^j(\gamma))(\tilde{w})\in \prod_{i=a}^{2b-a}(\gamma-\chi(\gamma)^i)\D_\dif^{+,n}(D_S)\subset t^{b-a+1}\D^{+,n}_\dif(D_S)$. By the previous lemma, we may choose some $\tilde{u}\in t^{b-a+1}\D^{+,n}_\dif(D_S)$ satisfying $(\gamma-\chi^j(\gamma))(\tilde{w})=(\gamma-\chi^j(\gamma))(\tilde{u})$. It is then clear that $\tilde{w}-\tilde{u}$ is a desired lift of $v$. Since $\D^n_\Sen(D_S)=\oplus_{a\leq i\leq b}(\D^n_\Sen(D_S))^{\Gamma_n=\chi^i}$, we have that $(\D^n_\Sen(D_S))^{\Gamma_n=\chi^i}$ is locally free for each $i\in[a,b]$. By shrinking $M(S)$, we may further suppose that each $(\D^n_\Sen(D_S))^{\Gamma_n=\chi^i}$ is free. We then deduce from the claim that there exists a free $K_n\otimes_{\Q}S$-module $M\subseteq(\D_\dif^{n}(D_S))^{\Gamma_n}$ such that the natural map
\[
M\otimes_{K_n\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]][1/t]\longrightarrow \D_\dif^n(D_S)
\]
is an isomorphism. It follows that the natural map
\[
M^\Gamma\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)[[t]][1/t]\longrightarrow \D_\dif^n(D_S)
\]
is an isomorphism because $M=M^{\Gamma}\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}S)$ by \cite[Proposition 2.2.1]{BC07}. Taking $\Gamma$-invariants on both sides, we get $M^{\Gamma}=(\D_\dif^n(D_S))^\Gamma$. This implies that $D_S$ is de Rham.
\end{proof}
\begin{prop}\label{prop:dR-family}
Let $D_S$ be a $\m$-module over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$. Suppose that there exists a Zariski dense subset $Z\subset M(S)$ such that $D_z$ is de Rham with weights in $[a,b]$ for any $z\in Z$ and $\sup_{z\in Z}\{h_{dR}(D_z)\}<\infty$. Then $D_S$ is de Rham with weights in $[a,b]$.
\end{prop}
\begin{proof}
By Proposition \ref{prop:HT-family}, we first have that $D_S$ is Hodge-Tate with weights in $[a,b]$. Let $n\geq \max\{h_{HT}(D_S),\sup_{z\in Z}\{h_{dR}(D_z)\}\}$. By Lemma \ref{lem:dR-criterion}, we have
\[
\prod_{i=a}^{2b-a}(\gamma-\chi(\gamma)^i)\D_\dif^{+,n}(D_z)\subset t^{b-a+1}\D_\dif^{+,n}(D_z)
\]
for any $z\in Z$. This implies $\prod_{i=a}^{2b-a}(\gamma-\chi(\gamma)^i)\D_\dif^{+,n}(D_S)\subset t^{b-a+1}\D_\dif^{+,n}(D_S)$ because $Z$ is Zariski dense. Hence $D_S$ is de Rham by Lemma \ref{lem:dR-criterion} again.
\end{proof}
\section{$p$-adic local monodromy for families of de Rham $\m$-modules}
The main goal of this section is to prove the $p$-adic local monodromy for families of de Rham $\m$-modules. The proof is similar to Berger-Colmez's proof of the $p$-adic local monodromy for families of de Rham representations \cite[\S6]{BC07}. Indeed, with the results we have proved in \S2 and \S3, the proof from [\emph{loc.cit.}] goes over verbatim. We therefore often sketch our proof and refer the reader to [\emph{loc.cit.}] for more details.
We fix $E$ to be a finite extension of the product of the complete residue fields of the Shilov boundary of $M(S)$.
\begin{prop}\label{prop:N_dR}
Let $D_S$ be a de Rham $\m$-module of rank $d$ over $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}S$ with weights in $[a,b]$. For any $s>0$ such that $n(s)\geq h_{dR}(D_S)$, let
\[
N_s(D_E)=\{y\in t^{-b}D^{s}_E \mid \iota_n(y)\in D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]\ \text{for each}\ n\geq n(s)\}.
\]
Then the following are true.
\begin{enumerate}
\item[(1)]The $\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q} E$-module $N_s(D_E)$ is free of rank $d$ and stable under $\Gamma$.
\item[(2)]We have
$N_s(D_E)\otimes_{\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}E,\iota_n}(K_n\otimes_{\Q}E)[[t]]
=D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]$ for each $n\geq n(s)$.
\end{enumerate}
Furthermore, if we put $N_{\mathrm{dR}}(D_E)=N_s(D_E)\otimes_{\mathbf{B}^{\dag,s}_{\rig,K}\widehat{\otimes}_{\Q}E}
\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q}E$, then the following are true.
\begin{enumerate}
\item[(3)]The $\mathbf{B}^{\dag}_{\rig,K}\widehat{\otimes}_{\Q} E$-module $N_{\mathrm{dR}}(D_E)$ is free of rank $d$, stable under $\Gamma$, and independent of the choice of $s$.
\item[(4)]We have $\varphi^*(N_{\mathrm{dR}}(D_E))=N_{\mathrm{dR}}(D_E)$ and $\nabla(N_{\mathrm{dR}}(D_E))\subset t\cdot N_{\mathrm{dR}}(D_E)$.
\end{enumerate}
\end{prop}
\begin{proof}
Since the localization map $\iota_n$ is continuous, we first have that $N_s(D_E)$ is a closed $\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E$-submodule of $t^{-b}D_E^{s}$. It follows that
$N_s(D_E)$ is a finite locally free $\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E$-module because $\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E$ is isomorphic to a finite product of Robba rings. On the other hand, by
Lemma \ref{lem:dR-weight}, we get that $t^{-a}D_E^{s}$ is contained in $N_s(D_E)$. We thus conclude that $N_s(D_E)$ is a free $\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E$-module of rank $d$. To show (2), we proceed as in the proof of \cite[Proposition 6.1.1]{BC07}. For any $y\in D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]$ and $w\geq \max\{0,b-a\}$, since
\[
D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]\subset t^{-b}\D_\dif^{+,n}(D_E)
\]
by Lemma \ref{lem:dR-weight}, we may pick some $y_0\in t^{-b}D_E^{s}$ such that $\iota_n(y_0)-y\in t^w
D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]$. Let $t_{n,w}$ be the function defined in \cite[Lemme I.2.1]{LB04}. It follows that
\[
\iota_m(t_{n,w}y_0)\in t^{w-b}\D_\dif^{+,m}(D_E)=t^{w-b+a}(t^{-a}\D_\dif^{+,m}(D_E))\subset D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_m\otimes_{\Q}E)[[t]]
\]
for $m>n$
and
\[
\iota_n(t_{n,w}y_0)-y\in t^{w}D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]].
\]
This implies that the natural map $N_s(D_E)\ra D_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]/(t^w)$ is surjective; this proves (2). We get (3) immediately from (2). The first half of (4) follows from the fact that $\iota_{n+1}\circ \varphi=\iota_n$. Note that $\iota_n(\nabla(N_s(D_E)))=\nabla(\iota_n(N_s(D_E)))\subset tD_{\mathrm{dR}}(D_S)\otimes_{K\otimes_{\Q}S}(K_n\otimes_{\Q}E)[[t]]$ for any $n\geq n(s)$; this proves the second half of (4).
\end{proof}
\begin{prop}\label{prop:monodromy}
Keep notations as in Proposition \ref{prop:N_dR}. Then there exists a finite extension $L$ over $K$ such that
\[
M=(N_{\mathrm{dR}}(D_E)\otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}E}
\mathbf{B}_{\log,L}^\dag\widehat{\otimes}_{\Q}E)^{I_L}
\]
is a free $L_0'\otimes_{\Q}E$-module of rank $d$ and the natural map
\begin{equation*}
\begin{split}
M\otimes_{L_0'\otimes_{\Q}E}
\mathbf{B}_{\log,L}^\dag\widehat{\otimes}_{\Q}E
\longrightarrow N_{\mathrm{dR}}(D_E)\otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}E}
\mathbf{B}_{\log,L}^\dag\widehat{\otimes}_{\Q}E
\end{split}
\end{equation*}
is an isomorphism.
\end{prop}
\begin{proof}
Let $f'=[K_0':\Q]$. Note that there is a canonical decomposition $\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}E\cong\prod_{i=0}^{f'-1}\r_E^{(i)}$
where each $\r_E^{(i)}$ is isomorphic to $\r_E$ and stable under $\Gamma_K$, and satisfies $\varphi(\r_E^{(i)})\subset\r_E^{(i+1)}$ ($\r_E^{(f')}=\r_E^{(0)}$). Let $N^{(i)}_{\mathrm{dR}}(D_E)=N_{\mathrm{dR}}(D_E)
\otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}E}\r_E^{(i)}$. It follows that each $N_{\mathrm{dR}}^{(i)}(D_E)$ is stable under $\partial=\nabla/t$ and $\varphi^{f'}$; hence it is a $p$-adic differential equation with a Frobenius structure. By the versions of the $p$-adic local monodromy theorem proved by Andr\'e \cite{An} or Mebkhout \cite{Meb}, we conclude that each $N^{(i)}_{\mathrm{dR}}(D_E)$ is potentially unipotent. This yields the proposition using the argument of \cite[Proposition 6.2.2]{BC07} and \cite[Corollaire 6.2.3]{BC07}.
\end{proof}
\begin{lemma}\label{lem:monodromy}
Keep notations as in Proposition \ref{prop:monodromy}, and let
\[
M=(N_s(D_E)\otimes_{\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E}
\mathbf{B}_{\log,K}^{\dag,s}\widehat{\otimes}_{\Q}E)^{I_L}
\]
for sufficiently large $s$. Then for any $n\geq n(s)$, we have
\begin{equation}\label{eq:lem-monodromy}
L\otimes_{L_0}\iota_n(M)=(\D_\dif(D_E\otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}E}
\mathbf{B}_{\rig,L}^\dag\widehat{\otimes}_{\Q}E))^{I_L}
\end{equation}
\end{lemma}
\begin{proof}
By the previous proposition, the left hand side of (\ref{eq:lem-monodromy}) is a free $L\otimes_{L_0}L_0'\otimes_{\Q}E$-module of rank $d$. On the other hand, since $((L_n\otimes_{\Q}E)[[t]][1/t])^{I_L}=L\otimes_{L_0}L_0'\otimes_{\Q}E$, we deduce that the right hand side of (\ref{eq:lem-monodromy}), which obviously contains the left hand side, is an $L\otimes_{L_0}L_0'\otimes_{\Q}E$-module generated by at most $d$ elements. This yields the desired identity.
\end{proof}
\section{Proof of the main theorem}
We start by making some preliminary reductions. After a finite surjective base change of $X$, we may assume that $Q(T)$ factors as $\prod_{i=1}^m(T-F_i)$. By reordering the $F_i$'s and throwing away some points of $Z$, we may further assume that for all $z\in Z$, $v_p(F_i(z))\geq v_p(F_j(z))$ if $i>j$ and $F_i(z)\neq F_j(z)$ if $F_i\neq F_j$. We then set $\F_{i,z}=
D_{\mathrm{st}}^+(V_z)^{(\varphi^f-F_1(z))\cdots(\varphi^f-F_{i}(z))=0}$ for all $z\in Z$ and $1\leq i\leq m$.
Using Definition \ref{def:fs}(3), we may suppose that $\F_{i,z}\subseteq \F_z$ for all $z\in Z$ and $1\leq i\leq m$ by shrinking $Z$. Furthermore, by the fact that $N\varphi=p\varphi N$ and the condition that $v_p(F_i(z))\geq v_p(F_j(z))$ if $i>j$, we see that $N=0$ on each graded piece $\F_{i,z}/\F_{i-1,z}$.
Let $c_{i,z}$ be the rank of $ \F_{i,z}/\F_{i-1,z}$ over $K_0\otimes k(z)$, and partition $Z$ into finitely many subsets according to the sequence $c_{i,z}$. One of these subsets of $Z$ must still be Zariski dense. Replace $Z$ by this subset and set $c_i = c_{i,z}$ for any $z$ in this subset.
For $z\in Z$, we will inductively define $\m$-submodules $\mathrm{Fil}_{i,z}\subset\D_\rig^\dag(V_z)$ for $1\leq i\leq m$ such that $D_{\mathrm{st}}(\mathrm{Fil}_{i,z})=\F_{i,z}$. For $i=1$, since $V_z$ has non-positive Hodge-Tate weights and $N(\F_{1,z})=0$, we have
\[
\F_{1,z}=(D^+_{\mathrm{crys}}(V_z))^{\varphi^f=F_1(z)}\subset\D_\rig^\dag(V_z)^{\Gamma}
\]
by Berger's dictionary. Let $\mathrm{Fil}_{1,z}$ be the saturation of the $\m$-submodule generated by $\mathcal{F}_{1,z}$. Now suppose we have set $\mathrm{Fil}_{i-1,z}$ for some $i\geq 2$. It follows that
\[
D_{\mathrm{st}}^+(\D_\rig^\dag(V_z)/\mathrm{Fil}_{i-1,z})=D_{\mathrm{st}}^+(V_z)/\F_{i-1,z}.
\]
Note that
\[
\F_{i,z}/\F_{i-1,z}=(D_{\mathrm{st}}^+(V_z)/\F_{i-1,z})^{\varphi^f=F_{i}(z),N=0}.
\]
Hence
\[
\F_{i,z}/\F_{i-1,z}=D^+_{\mathrm{crys}}(\D_\rig^\dag(V_z)/\mathrm{Fil}_{i-1,z})^{\varphi^f=F_{i}(z)}\subset
(\D_\rig^\dag(V_z)/\mathrm{Fil}_{i-1,z})^\Gamma.
\]
We then set $\mathrm{Fil}_{i,z}$ to be the preimage of the saturation of the $\m$-submodule of $\D_\rig^\dag(V_z)/\mathrm{Fil}_{i-1,z}$ generated by $\F_{i,z}/\F_{i-1,z}$.
Now for each $1\leq i\leq m$, we define the character $\delta_i:K^\times\ra\OO(X)^\times$ by setting $\delta_i(p)=F_i^{-1}$ and $\delta_i(\OO_K^\times)=1$. Let $D_X=\D_\rig^\dag(V_X)^{\vee}$.
\begin{lemma}\label{lem:de Rham-part}
Suppose that $X$ is irreducible. Then for each $0\leq i\leq m$, there exists a proper birational morphism $\pi:X'\ra X$ and a sub-family of $\m$-modules $D^{(i)}_{X'}\subset D_{X'}$ over $X'$ of rank $d-c_1-\dots-c_i$ such that
\begin{enumerate}
\item[(1)]
for any $x\in X'$, the natural map $D_x^{(i)}\ra D_x$ is injective;
\item[(2)]
there exists a Zariski open dense subset $U$ of $X'$ such that for any $z\in Z'=\pi^{-1}(Z)\cap U$, the natural map $D^{(i)}_z\ra D_z$ is the dual of the projection $\D_\rig^\dag(V_{\pi(z)})\ra \D_\rig^\dag(V_{\pi(z)})/\mathrm{Fil}_{i,\pi(z)}$.
\end{enumerate}
\end{lemma}
\begin{proof}
We proceed by induction on $i$. The initial case is trivial. Suppose that for some $1\leq i\leq m$, the lemma is true for $i-1$.
Note that $\mathcal{F}_{i,z}/\mathcal{F}_{i-1,z}$ maps into $\D_\rig^\dag(V_{z})/\mathrm{Fil}_{i,z}$ for any $z\in Z$. Since $\F_{i,z}/\F_{i-1,z}=(D_{\mathrm{crys}}^+(V_z)/\F_{i-1,z})^{\varphi^f=F_{i}(z)}$, we get that $(D^{(i)}_z)^{\vee}(\pi^{*}(\delta_i)(z))$ has $k(z)$-dimension $c_i$ for any $z\in Z'$. Since $Z'$ is Zariski dense in $X'$, by Proposition \ref{prop:cohomology}, after adapting $X'$ and $U$, we may find a sub-family of $\m$-modules $D^{(i)}_{X'}$ of $D^{(i-1)}_{X'}$ with rank $d-c_1-\dots-c_i$ such that
\begin{enumerate}
\item[(1')]$D_x^{(i)}\ra D_x^{(i-1)}$ is injective for any $x\in X'$;
\item[(2')]for any $z\in \pi^{-1}(Z)\cap U$, $D_z^{(i)}$ is the kernel of the dual of the map
\[
(\mathbf{B}_{\rig,K}^\dag\otimes_{\Q}k(z))\cdot(\mathcal{F}_{i,\pi(z)}/\mathcal{F}_{i-1,\pi(z)})\ra \D_\rig^\dag(V_{\pi(z)})/\mathrm{Fil}_{i,\pi(z)}.
\]
\end{enumerate}
It is clear that (1') and (2') imply (1) and (2) respectively; this finishes the inductive step.
\end{proof}
To prove Theorem \ref{thm:main}, we also need the following lemma.
\begin{lemma}
Let $V_S$ be a free $S$-linear representation of $G_K$ of rank $d$. Then there exists a positive integer $m(V_S)$ such that for any $x\in M(S)$ and $a\in\D_\dif^{+}(V_x)$, if $a$ is $\Gamma$-invariant, then $a\in\D_\dif^{+,m(V_S)}(V_x)$.
\end{lemma}
\begin{proof}
This is a consequence of the Tate-Sen method. Using \cite[Th\'eor\`{e}me 4.2.9]{BC07}, we first choose a finite extension $L$ over $K$ and some positive integer $m$ so that $\D_{\rig,L}^{\dag,r_m}(V_S)$ is a free $\mathbf{B}_{\rig,L}^{\dag,r_m}\widehat{\otimes}_{\Q}S$-module with a basis $\mathrm{e}=(e_1,\dots,e_d)$. Let $\gamma$ be a topological generator of $\Gamma_{L_m}$ and write $\gamma(e)=eG$ for some $G\in\mathrm{GL}_d(\mathbf{B}_{\rig,L}^{\dag,r_m}\widehat{\otimes}_{\Q}S)$. Recall that by the classical work of Tate \cite{T}, we know that there exists a constant $c>0$ such that $v_p((\gamma-1)x)\leq v_p(x)+c$ for any nonzero $x\in (1-R_{L,m})\widehat{L}_\infty$, where $R_{L,m}:\widehat{L}_\infty\ra L_m$ is Tate's normalized trace map. Since the localization map $\iota_m:\mathbf{B}_{\rig,L}^{\dag,r_m}\ra L_m[[t]]$ is continuous, by enlarging $m$, we may suppose that the constant term of $\iota_m(G)-1$ has norm less than $p^{-c}$. We fix some $m_0\in\mathbb{N}$ such that $K_\infty\cap L_m=K_{m_0}\cap L_m$.
Now let $a\in\D_\dif^{+,K_n}(V_x)^\Gamma$ for some $x\in M(S)$ and $n\geq m$. We will show that $a\in\D_\dif^{+,K_{m_0}}(V_x)^\Gamma$. Since $\iota_m(\mathrm{e})$ forms a basis of $\D^{+,L_n}_{\dif}(V_S)$, we may write $a=\iota_m(\mathrm{e})(x)A$ for some
\[
A\in \mathrm{M}_{d\times1}((L_n\otimes_{\Q}k(x))[[t]]).
\]
The $\Gamma$-invariance of $a$ implies $\iota_m(G(x))\gamma(A)=A$; thus $(1-R_{L,m})\iota_m(G(x))\gamma(A)=(1-R_{L,m})A$. Note that $\iota_m(G(x))$ has entries in $(L_m\otimes_{\Q}k(x))[[t]]$ and that $R_{L,m}$ commutes with $\gamma$. It follows that $(\iota_m(G(x))-1)\gamma(B)=(1-\gamma)B$ where $B=(1-R_{L,m})A$. Let $B_0$ be the constant term of $B$. If $B_0\neq0$, then the constant term of $(\iota_m(G(x))-1)\gamma(B)$ has valuation $\geq v(\iota_m(G(x))-1)+v(B_0)>v(B_0)+c$ whereas the constant term $(1-\gamma)B_0$ of $(1-\gamma)B$ has valuation $\leq v(B_0)+c$; this yields a contradiction. Hence $B_0=0$. Iterating this argument, we get $B=0$. Hence $a\in \D_\dif^{+,L_m}(V_x)\cap\D_\dif^{+,K_n}(V_x)\subset\D_\dif^{+,K_{m_0}}(V_x)$. Thus we may choose $m(V_S)=m_0$.
\end{proof}
\emph{Proof of Theorem 0.2}.
We retain the notations as above. By passing to irreducible components, we may suppose that $X$ is irreducible. We then apply Lemma \ref{lem:de Rham-part} to $V_X$. Note that $V_{X'}$ is again a finite slope family over $X'$ with the Zariski dense set of crystalline points $\pi^{-1}(Z)$. We may suppose that $X'=X$. Let $\lambda:\D^\dag_{\rig}(V_X)=D^{\vee}_X\ra (D_X^{(m)})^{\vee}$ be the dual of $D_X^{(m)}\ra D_X$, and let $P_X=\ker(\lambda)$. For any $x\in X$, since $D^{(m)}_x\ra D_x$ is injective, we get that the image of $\lambda_x$ is a $\m$-submodule of rank $d-c_1-\cdots-c_m$. Thus by Lemma \ref{lem:ker-birational}, after adapting $X$, we may assume that $P_X$ is a family of $\m$-modules of rank $c_1+\cdots+c_m$, and there exists a Zariski open dense subset $U\subset X$ such that $P_x=\ker(\lambda_x)$ for any $x\in U$. Note that $\ker(\lambda_z)=\mathrm{Fil}_{m,z}$ for any $z\in Z$. Thus by replacing $Z$ with $Z\cap U$, we may assume that $P_z=\mathrm{Fil}_{m,z}$ for any $z\in Z$. We claim that $P_{X}$ is de Rham with weights in $[-b,0]$. To do so, we set $Y$ to be the set of $x\in X$ for which $P_x$ is de Rham with weights in $[-b,0]$. By the previous lemma, we see that for any affinoid subdomain $M(S)\subset X$, there exists an integer $m(V_S)$ such that if $P_x$ is de Rham for some $x\in M(S)$, then $h_{dR}(P_x)\leq m(V_S)$. We then deduce from Proposition \ref{prop:dR-family} that $Y\cap M(S)$ is a Zariski closed subset of $M(S)$. Hence $Y$ is a Zariski closed subset of $X$. On the other hand, since $P_z$ is de Rham with weights in $[-b,0]$, we get $Z\subset Y$; thus $Y=X$ by the Zariski density of $Z$. Furthermore, using Proposition \ref{prop:dR-family} and the previous lemma again, we deduce that $P_X$ is de Rham with weights in $[-b,0]$. As a consequence, we obtain a locally free coherent $\OO_X\otimes_{\Q}K$-module $D_{\mathrm{dR}}(P_X)$ of rank $c_1+\cdots+c_m$.
The next step is to show that for any $x\in X$, $D_{\mathrm{dR}}(P_x)$ is contained in $D^+_{\mathrm{st}}(V_x)\otimes_{K_0}K$. Let $Y$ be the set of $x\in X$ satisfying this condition. We first show that $Y$ is a Zariski closed subset of $X$. For this, it suffices to show that $Y\cap M(S)$ is a Zariski closed subset of $M(S)$ for any affinoid subdomain $M(S)$ of $X$. To show this, we employ the $p$-adic local monodromy for families of de Rham $\m$-modules. As in \S5, let $E$ be the product of the complete residue fields of the Shilov boundary of $M(S)$. Since $P_S$ is a family of de Rham $\m$-modules with weights in $[-b,0]$, by Lemma \ref{lem:monodromy}, there exists a finite extension $L$ of $K$ such that for sufficiently large $s$ and $n\geq n(s)$, we have
\[
L\otimes_{L_0}\iota_n(M)=(\D_\dif(P_E\otimes_{\mathbf{B}_{\rig,K}^\dag\widehat{\otimes}_{\Q}E}
\mathbf{B}_{\rig,L}^\dag\widehat{\otimes}_{\Q}E))^{I_L}
\]
for
$M=(N_s(P_E)\otimes_{\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E}
\mathbf{B}_{\log,K}^{\dag,s}\widehat{\otimes}_{\Q}E)^{I_L}$; furthermore, $N_s(P_E)\subset P_E^{s}$. Thus
\[
\iota_n(M)\subset \iota_n(P_E\otimes_{\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E}
\mathbf{B}_{\log,K}^{\dag,s}\widehat{\otimes}_{\Q}E)\subset
\iota_n(\D_\rig^\dag(V_E)\otimes_{\mathbf{B}_{\rig,K}^{\dag,s}\widehat{\otimes}_{\Q}E}
\mathbf{B}_{\log,K}^{\dag,s}\widehat{\otimes}_{\Q}E)\subset\mathbf{B}^+_{\mathrm{st}}\widehat{\otimes}_{\Q}V_E.
\]
Note that $D_{\mathrm{dR}}(P_E)\subset \D_\dif^+(P_E)\subset\D_\dif^+(V_E)\subset\mathbf{B}_{\mathrm{dR}}^+\widehat{\otimes}_{\Q}V_E$. This yields
\[
D_{\mathrm{dR}}(P_E)\subset (\mathbf{B}^+_{\mathrm{st}}\widehat{\otimes}_{\Q}V_E)\otimes_{L_0}L\cap \mathbf{B}_{\mathrm{dR}}^+\widehat{\otimes}_{\Q}V_E=
(\mathbf{B}^+_{\mathrm{st}}\widehat{\otimes}_{\Q}V_E)\otimes_{L_0}L.
\]
We therefore deduce from \cite[Lemme 6.3.1]{BC07} that
\[
D_{\mathrm{dR}}(P_S)\subset (\mathbf{B}^+_{\mathrm{st}}\widehat{\otimes}_{\Q}V_E)\otimes_{L_0}L\cap
\mathbf{B}_{\mathrm{dR}}^+\widehat{\otimes}_{\Q}V_S=(\mathbf{B}^+_{\mathrm{st}}\widehat{\otimes}_{\Q}V_S)\otimes_{L_0}L.
\]
It follows that $Y\cap M(S)$, which is the set of $x\in M(S)$ such that $D_{\mathrm{dR}}(P_x)\subset (\mathbf{B}^+_{\mathrm{st}}\otimes_{\Q}V_x)\otimes_{K_0}K$, is Zariski closed in $M(S)$.
To conclude the theorem, it then suffices to show that $D_{\mathrm{dR}}(P_x)\subset (D^+_{\mathrm{st}}(V_x)\otimes_{K_0}K)^{Q(\varphi)(x)=0}$ for any $x\in X$; here we $K$-linearly extend the $\varphi^f$-action to $D^+_{\mathrm{st}}(V_x)\otimes_{K_0}K$. Note that $\mathrm{Fil}_{m,z}$ is semi-stable with $D_{\mathrm{st}}(\mathrm{Fil}_{m,z})=\mathcal{F}_{m,z}$. This implies that $Q(\varphi)(D_{\mathrm{dR}}(P_X))$ vanishes at $z$, yielding that $Q(\varphi)(D_{\mathrm{dR}}(P_X))=0$ by the Zariski density of $Z$.
Q: Strategies in mapping text size to data in ggplot

I would like to consult on how to map text size to data in ggplot(). In the following silly example, I have data describing some English letters and the average score of "liking" each letter received. That is, imagine that we surveyed people and asked them, "to what extent do you like the letter [ ], on scale of 1-7, where 1 means strongly dislike, and 7 means like very much".
For statistical reasons that are beyond the scope of this question, I do not want to use a bar plot, as I seek to minimize the desire to compare the mean values directly. Hence, I chose a different visualization, as seen below.
My issue is: I want to give the viewer a feeling that accounts for the difference in values. So I decided to map the size of geom_text() to the actual value presented. However, this gets a little tricky when I try to make it look nice.
library(ggplot2)
library(ggforce)
my_df <-
data.frame(
letter = letters[1:16],
mean_liking = c(
3.663781,
3.814590,
3.806543,
3.788288,
3.756278,
4.491339,
3.549708,
3.799703,
3.651306,
4.522255,
4.075301,
5.619614,
3.917391,
2.579243,
3.692090,
4.439822
)
)
## scenario 1 -- without mapping size
ggplot(data = my_df) +
geom_circle(aes(x0 = 0, y0 = 0, r = 0.5, fill = letter), show.legend = FALSE) +
geom_text(aes(label = round(mean_liking, 2), x = 0, y = 0)) +
coord_fixed() +
facet_wrap(~letter) +
theme_void()
## scenario 2 -- mapping size "plainly" (so to speak)
ggplot(data = my_df) +
geom_circle(aes(x0 = 0, y0 = 0, r = 0.5, fill = letter), show.legend = FALSE) +
geom_text(aes(label = round(mean_liking, 2), x = 0, y = 0,
size = mean_liking)) + # <-- mapped here
coord_fixed() +
facet_wrap(~letter) +
theme_void()
## scenario 3 -- mapping size multiplied by 10
ggplot(data = my_df) +
geom_circle(aes(x0 = 0, y0 = 0, r = 0.5, fill = letter), show.legend = FALSE) +
geom_text(aes(label = round(mean_liking, 2), x = 0, y = 0,
size = mean_liking*10)) + # <-- mapped here; getting strange
coord_fixed() +
facet_wrap(~letter) +
theme_void()
Created on 2021-08-17 by the reprex package (v2.0.0)
As can be seen above, both scenarios 2 and 3 resulted in unreadable text size for letter n. So I have a couple of questions:
1. Why does text size remain the same, despite multiplying by 10?
2. How could I have the text size vary according to the mean_liking value?
3. Is there any useful strategy that takes into account the fact that those means were generated from a finite scale that ranges 1-7? I guess this implies some subjective judgment on how one would choose to visualize it, but I'm very interested to get more perspectives on this.
Thank you!
A: Regarding your questions:
1. ggplot doesn't actually take the values of our data and use them directly as size values; it rescales them to a default range with a minimum of 1 and a maximum of 6. So regardless of how you transform your data, the result always looks the same.
2. To actually change the size, you have to change the range of sizes with scale_size(range = c(min_size, max_size)), or don't map mean_liking to size but instead set size to the values of mean_liking outside of aes().
# Mapping mean_liking to size and change size-ranges
ggplot(data = my_df) +
geom_circle(aes(x0 = 0, y0 = 0, r = 0.5, fill = letter), show.legend = FALSE) +
geom_text(aes(label = round(mean_liking, 2), x = 0, y = 0,
size = mean_liking)) + # <-- mapped here
scale_size(range = c(4, 8)) +
coord_fixed() +
facet_wrap(~letter) +
theme_void()
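Note that with range = c(4, 8) the smallest and largest observed means always get sizes 4 and 8, regardless of their actual values. If you would rather have the rescaling done relative to the full 1-7 response scale than to the observed range, scale_size() also accepts limits:

# Mapping mean_liking to size, anchored to the full 1-7 scale
ggplot(data = my_df) +
 geom_circle(aes(x0 = 0, y0 = 0, r = 0.5, fill = letter), show.legend = FALSE) +
 geom_text(aes(label = round(mean_liking, 2), x = 0, y = 0,
 size = mean_liking)) +
 scale_size(range = c(4, 8), limits = c(1, 7)) +
 coord_fixed() +
 facet_wrap(~letter) +
 theme_void()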
# setting size to the values of mean_liking directly
ggplot(data = my_df) +
geom_circle(aes(x0 = 0, y0 = 0, r = 0.5, fill = letter), show.legend = FALSE) +
geom_text(aes(label = round(mean_liking, 2), x = 0, y = 0),
size = my_df$mean_liking) +
coord_fixed() +
facet_wrap(~letter) +
theme_void()
3. Unfortunately I don't have any good strategies here, sorry.
NOTE: This is a fun 45-60 minute talk introduced at Comdex Fall which incorporates an interesting view of the future, networking among the participants, a million dollar bill, a space suit and chocolate.
How will work be accomplished?
How productive have companies, individuals, computers and society become?
This session pulls in a lot of what's happening today to extrapolate what will be commonplace in the next 10 to 20 years. If you know the future or at least get a glimpse of it, imagine what can be done today to prepare your company for it!
You will walk away with the five key questions you need to ask to e-volve your company or watch it die.
\section{Introduction}
\label{sec:intro}
These guidelines include complete descriptions of the fonts, spacing, and
related information for producing your proceedings manuscripts. Please follow
them and if you have any questions, direct them to Conference Management
Services, Inc.: Phone +1-979-846-6800 or email
to \\\texttt{icip2022@cmsworkshops.com}.
\section{Formatting your paper}
\label{sec:format}
All printed material, including text, illustrations, and charts, must be kept
within a print area of 7 inches (178 mm) wide by 9 inches (229 mm) high. Do
not write or print anything outside the print area. The top margin must be 1
inch (25 mm), except for the title page, and the left margin must be 0.75 inch
(19 mm). All {\it text} must be in a two-column format. Columns are to be 3.39
inches (86 mm) wide, with a 0.24 inch (6 mm) space between them. Text must be
fully justified.
\section{PAGE TITLE SECTION}
\label{sec:pagestyle}
The paper title (on the first page) should begin 1.38 inches (35 mm) from the
top edge of the page, centered, completely capitalized, and in Times 14-point,
boldface type. The authors' name(s) and affiliation(s) appear below the title
in capital and lower case letters. Papers with multiple authors and
affiliations may require two or more lines for this information. Please note
that papers should not be submitted blind; include the authors' names on the
PDF.
\section{TYPE-STYLE AND FONTS}
\label{sec:typestyle}
To achieve the best rendering both in printed proceedings and electronic proceedings, we
strongly encourage you to use Times-Roman font. In addition, this will give
the proceedings a more uniform look. Use a font that is no smaller than nine
point type throughout the paper, including figure captions.
In nine point type font, capital letters are 2 mm high. {\bf If you use the
smallest point size, there should be no more than 3.2 lines/cm (8 lines/inch)
vertically.} This is a minimum spacing; 2.75 lines/cm (7 lines/inch) will make
the paper much more readable. Larger type sizes require correspondingly larger
vertical spacing. Please do not double-space your paper. TrueType or
Postscript Type 1 fonts are preferred.
The first paragraph in each section should not be indented, but all the
following paragraphs within the section should be indented as these paragraphs
demonstrate.
\section{MAJOR HEADINGS}
\label{sec:majhead}
Major headings, for example, ``1. Introduction'', should appear in all capital
letters, bold face if possible, centered in the column, with one blank line
before, and one blank line after. Use a period (``.'') after the heading number,
not a colon.
\subsection{Subheadings}
\label{ssec:subhead}
Subheadings should appear in lower case (initial word capitalized) in
boldface. They should start at the left margin on a separate line.
\subsubsection{Sub-subheadings}
\label{sssec:subsubhead}
Sub-subheadings, as in this paragraph, are discouraged. However, if you
must use them, they should appear in lower case (initial word
capitalized) and start at the left margin on a separate line, with paragraph
text beginning on the following line. They should be in italics.
\section{PRINTING YOUR PAPER}
\label{sec:print}
Print your properly formatted text on high-quality, 8.5 x 11-inch white printer
paper. A4 paper is also acceptable, but please leave the extra 0.5 inch (12 mm)
empty at the BOTTOM of the page and follow the top and left margins as
specified. If the last page of your paper is only partially filled, arrange
the columns so that they are evenly balanced if possible, rather than having
one long column.
In LaTeX, to start a new column (but not a new page) and help balance the
last-page column lengths, you can use the command ``$\backslash$pagebreak'' as
demonstrated on this page (see the LaTeX source below).
\section{PAGE NUMBERING}
\label{sec:page}
Please do {\bf not} paginate your paper. Page numbers, session numbers, and
conference identification will be inserted when the paper is included in the
proceedings.
\section{ILLUSTRATIONS, GRAPHS, AND PHOTOGRAPHS}
\label{sec:illust}
Illustrations must appear within the designated margins. They may span the two
columns. If possible, position illustrations at the top of columns, rather
than in the middle or at the bottom. Caption and number every illustration.
All halftone illustrations must be clear black and white prints. Colors may be
used, but they should be selected so as to be readable when printed on a
black-only printer.
Since there are many ways, often incompatible, of including images (e.g., with
experimental results) in a LaTeX document, below is an example of how to do
this \cite{Lamp86}.
\section{FOOTNOTES}
\label{sec:foot}
Use footnotes sparingly (or not at all!) and place them at the bottom of the
column on the page on which they are referenced. Use Times 9-point type,
single-spaced. To help your readers, avoid using footnotes altogether and
include necessary peripheral observations in the text (within parentheses, if
you prefer, as in this sentence).
\begin{figure}[htb]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=8.5cm]{image1}}
\centerline{(a) Result 1}\medskip
\end{minipage}
\begin{minipage}[b]{.48\linewidth}
\centering
\centerline{\includegraphics[width=4.0cm]{image3}}
\centerline{(b) Results 3}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\linewidth}
\centering
\centerline{\includegraphics[width=4.0cm]{image4}}
\centerline{(c) Result 4}\medskip
\end{minipage}
\caption{Example of placing a figure with experimental results.}
\label{fig:res}
\end{figure}
\section{COPYRIGHT FORMS}
\label{sec:copyright}
You must submit your fully completed, signed IEEE electronic copyright release
form when you submit your paper. We {\bf must} have this form before your paper
can be published in the proceedings.
\section{RELATION TO PRIOR WORK}
\label{sec:prior}
The text of the paper should contain discussions on how the paper's
contributions are related to prior work in the field. It is important
to put new work in context, to give credit to foundational work, and
to provide details associated with the previous work that have appeared
in the literature. This discussion may be a separate, numbered section
or it may appear elsewhere in the body of the manuscript, but it must
be present.
You should differentiate what is new and how your work expands on
or takes a different path from the prior studies. An example might
read something to the effect: ``The work presented here has focused
on the formulation of the ABC algorithm, which takes advantage of
non-uniform time-frequency domain analysis of data. The work by
Smith and Cohen \cite{Lamp86} considers only fixed time-domain analysis and
the work by Jones et al.\ \cite{C2} takes a different approach based on
fixed frequency partitioning. While the present study is related
to recent approaches in time-frequency analysis [3-5], it capitalizes
on a new feature space, which was not considered in these earlier
studies.''
\vfill\pagebreak
\section{REFERENCES}
\label{sec:refs}
List and number all bibliographical references at the end of the
paper. The references can be numbered in alphabetic order or in
order of appearance in the document. When referring to them in
the text, type the corresponding reference number in square
brackets as shown at the end of this sentence \cite{C2}. An
additional final page (the fifth page, in most cases) is
allowed, but must contain only references to the prior
literature.
\bibliographystyle{IEEEbib}
\section{The race towards generalization}
\label{sec:Introduction}
In recent years, the research community has witnessed a race towards achieving higher and higher performance (in terms of error on unseen data), proposing very large architectures such as transformers~\cite{liu2021swin}. From big architectures come big responsibilities: learning strategies to avoid over-fitting urgently need to be developed.\\
The most straightforward approach would be to provide more data: deep learning methods are notoriously data-hungry. Since they typically optimize some objective function through gradient descent, having more data in the training set helps the optimization process select the most appropriate set of features (to oversimplify, the most recurrent ones). This allows high performance on unseen data. Such an approach has two big drawbacks: it requires enormous computational power for training and, most importantly, large annotated datasets. While the first drawback is an active research topic~\cite{frankle2018the, bragagnolo2022update}, the second is broadly addressed with approaches like transfer learning~\cite{zhuang2020comprehensive} or self-supervised learning~\cite{ravanelli2020multi}.\\
\begin{figure}[t]
\includegraphics[width=\linewidth]{figures/teaser-2.pdf}
\caption{The double descent phenomenon (dashed line): is it possible to constrain the learning problem to a minimum such that the loss, in over-parametrized regimes, remains close to $\mathcal{L}^{opt}$ (continuous line)?}
\label{fig:DD_scheme}
\end{figure}
In more realistic cases, large datasets are typically unavailable, and approaches working with small data, in the context of \emph{frugal AI}, need to be employed. This poses research questions on how to enlarge the available datasets or transfer knowledge from similar tasks; it also poses questions on how to optimally dimension the deep learning model to be trained. Contrary to what is expected from the bias-variance trade-off, the phenomenon of \emph{double descent} can be observed in very over-parameterized networks: given some optimal set of parameters for the model $\boldsymbol{w}^{opt}$ with loss value $\mathcal{L}^{opt}$, adding more parameters worsens the performance up to a local maximum $\mathcal{L}^{*}$, beyond which adding even more parameters makes the loss decrease again. This phenomenon, named \emph{double descent}~\cite{Belkin_2019}, is displayed in Fig.~\ref{fig:DD_scheme} and is consistently reported in the literature~\cite{spigler2019jamming, Geiger_2019}. Double descent poses the serious problem of finding the best set of parameters, so as not to fall into an over-parametrized (or under-parametrized) regime. There are two possible approaches to tackle this: finding $\boldsymbol{w}^{opt}$, which requires a lot of computation, or extremely over-sizing the model. Unfortunately, neither road is compatible with a frugal setup: is there a solution to this problem?\\
In this work, we show that the double descent phenomenon is potentially avoidable. A sufficiently large regularization on the model's parameters drives deep models into a configuration where the set of excess parameters $\boldsymbol{w}^{exc}$ produces essentially no perturbation on the output of the model.
Nevertheless, as opposed to Nakkiran~et~al.~\cite{nakkiran2021optimal}, who showed in regression tasks that such a regularization could help in dodging double descent, we observe that, in classification tasks, this regularization is insufficient in complex scenarios: an ingredient is still missing, although we are on the right path.
\section{Double descent and its implications}
\label{sec:Related work}
\textbf{Double descent in machine learning models.} The double descent phenomenon has been highlighted in various machine learning models, such as decision trees, random features~\cite{Meng-Random-Feature-Model}, linear regression~\cite{muthukumar2020harmless} and deep neural networks~\cite{yilmaz2022regularization}. Based on the calculation of the precise limit of the excess risk in the high-dimensional framework where the training sample size, the dimension of the data, and the dimension of the random features tend to infinity proportionally, Meng~et~al.~\cite{Meng-Random-Feature-Model} demonstrate that the risk curves of double random feature models can exhibit double and even multiple descents. The double descent risk curve was proposed to qualitatively describe the out-of-sample prediction accuracy of variably-parameterized machine learning models. Muthukumar~et~al.~\cite{muthukumar2020harmless} provide a precise mathematical analysis of the shape of this curve in two simple data models with the least squares/least norm predictor. By defining the effective model complexity, Nakkiran~et~al.~\cite{nakkiran2021deep} showed that the double descent phenomenon is not limited to varying the model size, but is also observed as a function of training time or epochs, and also identified certain regimes where increasing the number of training samples hurts test performance.\\
\textbf{Double descent in regression tasks.} It has been recently shown that, for certain linear regression models with isotropic data distribution, optimally-tuned $\ell_2$ regularization can achieve monotonic test performance as either the sample size or the model size is grown. Nakkiran~et~al.~\cite{nakkiran2021optimal} demonstrated it analytically and established that optimally-tuned $\ell_2$ regularization can mitigate double descent for general models, including neural networks like Convolutional Neural Networks. Endorsing such a result, Yilmaz~et~al.~\cite{yilmaz2022regularization} indicated that regularization-wise double descent can be explained as a superposition of bias-variance trade-offs on different features of the data (for a linear model) or parts of the neural network and that double descent can be eliminated by scaling the regularization strengths accordingly.\\
\textbf{Double descent for classification tasks.} Although much progress has been made for regression models, in classification tasks the problem of avoiding, or formally characterizing, the double descent phenomenon is much harder to tackle. The test error of standard deep networks, like the ResNet architecture, trained on standard image classification datasets, consistently follows a double descent curve
both when there is label noise (CIFAR-10) and without any label noise (CIFAR-100)~\cite{yilmaz2022regularization}.
Chang~et~al.\ studied the double descent of pruned models with respect to the number of original model parameters, revealing a double descent behavior also in model pruning~\cite{Chang_Overparameterization}. Model-wise, the double descent phenomenon has been studied extensively under the lens of over-parametrization: a recent work also confirmed that sparsification via network pruning can cause double descent in the presence of noisy labels~\cite{SparseDoubleDescent}. He~et~al.~\cite{SparseDoubleDescent} proposed a novel learning-distance interpretation, observing a correlation between the model's configurations before and after training and the sparse double descent curve, and emphasizing the flatness of the minimum reached after optimization. Our work differs from this study by highlighting some cases in which the double descent phenomenon is not evident. We show, in particular, that by imposing some constraints on the learning process, we can avoid the double descent. Our experimental setup follows that of He~et~al.
\section{Dodging the double descent}
\label{sec:method}
\begin{figure*}[t]
\centering
\begin{subfigure}{.685\columnwidth}
\includegraphics[width=\columnwidth]{figures/SDD_MNIST.pdf}
\end{subfigure}\hfill
\begin{subfigure}{.685\columnwidth}
\includegraphics[width=\columnwidth]{figures/SDD_CIFAR_10.pdf}
\end{subfigure}
\begin{subfigure}{.685\columnwidth}
\includegraphics[width=\columnwidth]{figures/SDD_CIFAR_100.pdf}
\end{subfigure}
\caption{Test accuracy as a function of sparsity for different amounts of symmetric noise $\varepsilon$.\\Dashed lines correspond to vanilla training and solid lines to $\ell_2$ regularization.\\
\textbf{Left:} LeNet-300-100 on MNIST.
\textbf{Middle:} ResNet-18 on CIFAR-10.
\textbf{Right:} ResNet-18 on CIFAR-100.}
\label{fig:MNIST}
\end{figure*}
\begin{algorithm}[t]
\caption{Sketching performance in over-parametrized setups: prune, rewind, train, repeat.}
\label{Algo}
\begin{algorithmic}[1]
\Procedure{Sketch ($\boldsymbol{w}^{init}$, $\Xi$, $\lambda$, $T^{iter}$,$T^{end}$)}{}
\State $\boldsymbol{w} \gets$ Train($\boldsymbol{w}^{init}$, $\Xi$, $\lambda$)\label{line:dense}
\While{Sparsity($\boldsymbol{w}, \boldsymbol{w}^{init}$) $< T^{end}$}\label{line:endcond}
\State $\boldsymbol{w} \gets$ Prune($\boldsymbol{w}$, $T^{iter}$) \label{line:prune}
\State $\boldsymbol{w} \gets$ Rewind($\boldsymbol{w}$, $\boldsymbol{w}^{init}$)\label{line:rewind}
\State $\boldsymbol{w} \gets$ Train($\boldsymbol{w}$,$\Xi$, $\lambda$)\label{line:wd}
\EndWhile
\EndProcedure
\end{algorithmic}
\end{algorithm}
\textbf{A regularization-based approach.} In the previous section we presented the main contributions in the literature around the double descent phenomenon. It is a known result that, given an over-parametrized model, without special constraints on the training, this phenomenon can be consistently observed. However, it is also known that, for an optimally parametrized model, double descent should not occur. Let us say the target, optimal output of the model is $\boldsymbol{y}^{opt}$. Hence, there is some subset $\boldsymbol{w}^{exc}\subset \boldsymbol{w}$ of parameters belonging to the model which are in excess, namely the ones contributing to the double descent phenomenon. Since these are not essential in the learning/inference steps, they can be considered \emph{noisy parameters}, which deteriorate the performance of the whole model and make the learning problem more difficult. They generate a perturbation in the output of the model which we can quantify as
\begin{equation}
\boldsymbol{y}^{exc} = \sum_{w_i\in \boldsymbol{w}^{exc}} \text{forward}\left[\phi(\boldsymbol{x}_i\cdot w_i)\right],
\end{equation}
where $\boldsymbol{x}_i$ is the input(s) processed by $w_i$, $\phi(\cdot)$ is some non-linear activation for which $\phi(z)\approx z$ when $z\rightarrow 0$, and $\text{forward}(\cdot)$ simply forward-propagates the signal to the output of the neural network. As the output of the model is given by $\boldsymbol{y}=\boldsymbol{y}^{opt}+\boldsymbol{y}^{exc}$, in the optimal case we would require that $\boldsymbol{y}^{exc}=\boldsymbol{0}$. To satisfy such a condition, we have two possible scenarios:
\begin{itemize}
\item $\exists w_i\neq 0 \in \boldsymbol{w}^{exc}$. In this case, the optimizer finds a minimum of the loss such that the algebraic sum of the noisy contributions is zero. When a subset of these is removed, it is possible that $\boldsymbol{y}^{exc}\neq \boldsymbol{0}$, which results in a performance loss and, hence, the double descent phenomenon is observed.
\item $w_i=0, \forall w_i\in \boldsymbol{w}^{exc}$. In this case, there is no contribution of the noisy terms and we are in the optimal scenario, where the parameters are de-facto removed from the model.
\end{itemize}
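The two scenarios above can be illustrated with a minimal numerical sketch (our own toy stand-in, with made-up weight values; the linear forward is a simplification of the general case): excess weights whose contributions cancel algebraically yield a zero perturbation only while all of them survive pruning, whereas weights driven to zero are robust to removal.

```python
import numpy as np

x = np.array([1.0, 1.0])          # inputs feeding two excess weights

def y_exc(w_exc):
    # Linear stand-in for forward[phi(x_i * w_i)] summed over excess weights.
    return float(w_exc @ x)

w_cancel = np.array([0.7, -0.7])  # scenario 1: nonzero weights that cancel
w_zero = np.zeros(2)              # scenario 2: weights driven to zero

# Full model: both scenarios give a zero excess perturbation.
full_cancel, full_zero = y_exc(w_cancel), y_exc(w_zero)
# After pruning the second excess weight, only scenario 2 stays at zero.
pruned_cancel = y_exc(w_cancel * np.array([1.0, 0.0]))
pruned_zero = y_exc(w_zero * np.array([1.0, 0.0]))
```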
Let us focus on the second case. For numerical reasons, this scenario is unrealistic during continuous optimization; hence we can achieve a similar outcome by satisfying two conditions:
\begin{enumerate}
\item $w_i x_i \approx 0, \forall w_i \in \boldsymbol{w}^{exc}$;
\item $\|\text{forward}\left[\phi(\boldsymbol{x}_i\cdot w_i)\right]\|_1 \leq \|\phi(\boldsymbol{x}_i\cdot w_i)\|_1$,\\ with $\|\phi(\boldsymbol{x}_i\cdot w_i)\|_1\approx 0$.
\end{enumerate}
We can achieve these conditions with a sufficiently large regularization on the parameters $\boldsymbol{w}$. The first condition is achievable employing any common weight penalty, as we assume that, for local minima of the loss where $\frac{\partial L}{\partial w_i} = 0$, some weight penalty $C$ pushes the magnitude of $w_i$ towards zero. The second condition, on the contrary, requires more careful consideration. Indeed, by composability of the layer functions, we need to ensure that the activation function in every layer does not amplify the signal (true in the most common scenarios) and that all the parameters have the lowest magnitude possible. For this reason, ideally, the regularization to employ should be $\ell_\infty$; on the other hand, however, we are also required to enforce some sparsity. Towards this end, recent works in the field suggest $\ell_2$ regularization is a fair compromise~\cite{han2015learning}. Hence, we need a sufficiently large $\ell_2$ regularization.\\
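The first condition can be sketched numerically (a toy illustration with our own step count and penalty weight, not the paper's training code): under plain gradient descent, an $\ell_2$ penalty makes a weight with zero task gradient decay geometrically towards zero.

```python
import numpy as np

def sgd_step_l2(w, task_grad, lr=0.1, lam=1e-2):
    """One gradient step on loss + lam*||w||^2: w <- w - lr*(task_grad + 2*lam*w)."""
    return w - lr * (task_grad + 2.0 * lam * w)

# An "excess" weight: the task loss is flat in it, so only the penalty acts.
w = np.array([1.0])
for _ in range(5000):
    w = sgd_step_l2(w, task_grad=np.zeros_like(w))
# w has shrunk by a factor (1 - 2*lr*lam)^5000 ~ exp(-10): essentially zero.
```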
\textbf{How can we observe that we have avoided double descent?} We present an algorithm to sketch the (possible) double descent phenomenon in Alg.~\ref{Algo}. After training the model for the first time on the learning task $\Xi$, optionally with $\ell_2$ regularization weighted by $\lambda$ (line~\ref{line:dense}), a magnitude pruning stage is set up (line~\ref{line:prune}). Neural network pruning, whose goal is to reduce a large network to a smaller one without altering accuracy, removes irrelevant weights, filters, or other structures from neural networks. The unstructured method called magnitude-based pruning, popularized by~\cite{han2015learning}, adopts a process in which weights below some specific threshold $T$ are pruned (line~\ref{line:prune}).
We highlight that more complex pruning approaches exist, but magnitude-based pruning remains competitive despite its very low complexity~\cite{Gale_Magnitude}. Towards this end, the hyper-parameter $T^{iter}$ sets the relative pruning percentage, or in other words, how many parameters are removed at every pruning stage.
Once pruned, the accuracy of the model typically decreases. To recover performance, the lottery ticket rewinding technique, proposed by Frankle \& Carbin~\cite{LTR}, is used. It consists of resetting the subset of parameters that survive the pruning stage to their initialization values (line~\ref{line:rewind}) and then retraining the model (line~\ref{line:wd}). This approach allows us to state whether a lowly-parametrized model can, in the best case, learn a target task from initialization. We end our sketching once we reach a sparsity higher than or equal to $T^{end}$ (line~\ref{line:endcond}).\\
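The prune-rewind-train loop of Alg.~\ref{Algo} can be sketched on a toy problem (our own minimal stand-in: a linear model on synthetic data, with made-up dimensions and hyper-parameters playing the roles of $T^{iter}$, $T^{end}$, and $\lambda$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: fit y = X @ w_true with an over-parametrized linear model.
X = rng.normal(size=(200, 20))
w_true = np.zeros(20)
w_true[:5] = rng.normal(size=5)          # only 5 weights are actually useful
y = X @ w_true

def train(w, mask, lr=0.01, lam=1e-3, steps=500):
    """Gradient descent on MSE + L2 penalty, restricted to unpruned weights."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ (w * mask) - y) / len(y) + 2 * lam * w
        w = (w - lr * grad) * mask
    return w

w_init = rng.normal(size=20, scale=0.1)
mask = np.ones(20)
w = train(w_init.copy(), mask)           # dense training (line 2 of Alg. 1)

history = []
while mask.mean() > 0.1:                 # stop near 90% sparsity (T_end)
    alive = np.flatnonzero(mask)
    k = max(1, int(0.2 * alive.size))    # prune 20% per iteration (T_iter)
    drop = alive[np.argsort(np.abs(w[alive]))[:k]]
    mask[drop] = 0.0                     # Prune: remove smallest weights
    w = w_init * mask                    # Rewind survivors to initialization
    w = train(w, mask)                   # Retrain the sparse model
    history.append((1.0 - mask.mean(), np.mean((X @ (w * mask) - y) ** 2)))
```

`history` then traces test-style loss against sparsity, i.e. the curve sketched by the algorithm.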
\textbf{Experimental setup.} For the experimental setup, we follow the same approach as He~et~al.~\cite{SparseDoubleDescent}. The first model we train is a LeNet-300-100 on MNIST, for 200 epochs, optimized with SGD with a fixed learning rate of 0.1. The second model is a ResNet-18, trained on CIFAR-10 \& CIFAR-100, for 160 epochs, optimized with SGD with momentum 0.9 and a learning rate of 0.1, decayed by a factor 0.1 at milestones 80 and 120. For each dataset, a percentage $\varepsilon$ of symmetric label noise is introduced: the labels of a given proportion of training samples are flipped to one of the other class labels, selected with equal probability~\cite{Noisy_labels}. In our experiments, we test with $\varepsilon \in \{10\%, 20\%, 50\%\}$.
When $\ell_2$ regularization is employed, we set $\lambda$ to \num{1e-4}. In all experiments, we use a batch size of 128 samples, set $T^{iter}$ to 20\%, and set $T^{end}$ to 99.9\%.\\
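The symmetric label-noise injection can be sketched as follows (a minimal implementation under our reading of the protocol; function and variable names are our own):

```python
import numpy as np

def add_symmetric_noise(labels, eps, num_classes, rng):
    """Flip a fraction eps of labels to a uniformly chosen *different* class."""
    labels = labels.copy()
    n = len(labels)
    flip = rng.choice(n, size=int(eps * n), replace=False)
    # Drawing an offset in [1, num_classes-1] guarantees the new label differs.
    offsets = rng.integers(1, num_classes, size=flip.size)
    labels[flip] = (labels[flip] + offsets) % num_classes
    return labels

rng = np.random.default_rng(0)
clean = rng.integers(0, 10, size=10_000)   # CIFAR-10-like labels
noisy = add_symmetric_noise(clean, eps=0.20, num_classes=10, rng=rng)
```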
\textbf{Results.} Fig.~\ref{fig:MNIST} displays our results. As in the work of He~et~al.~\cite{SparseDoubleDescent}, looking at LeNet-300-100 with $\varepsilon=10\%$, the double descent consists of four phases. First, at low sparsities, the network is over-parameterized; thus the pruned network can still reach an accuracy similar to the dense model. The second phase lies near the interpolation threshold, where training accuracy starts to drop, and test accuracy first decreases and then increases as sparsity grows. The third phase is located at high sparsities, where test accuracy rises. The final phase happens when both training and test accuracy drop significantly. However, while we can observe the double descent in the test accuracy without $\ell_2$ regularization, the phenomenon fades when the regularization is added. Indeed, the existence of the second phase is called into question: the test accuracy, which is expected to decrease in this phase, reaches a plateau before rising when regularization is added. In this simple setup, the double descent is dodged.
However, Fig.~\ref{fig:MNIST} also portrays the results of the ResNet-18 experiments on CIFAR-10 and CIFAR-100, with different percentages of noisy labels. Whether or not the regularization is used, and for every value of $\varepsilon$,
the double descent phenomenon occurs. These experiments, which can be considered more complex than the previous one, highlight some limits of standard regularization in avoiding the double descent phenomenon, and suggest that a specific regularizer should be designed.\\
\textbf{Ablation on $\boldsymbol{\lambda}$.}
In the previous experiments in Fig.~\ref{fig:MNIST}, we proposed solutions with an $\ell_2$-regularization hyper-parameter that provides a good trade-off between performance in terms of validation error and avoidance of the double descent.
However, ablating the regularization parameter is of interest, to verify whether the double descent can be avoided with larger values.
Hence, we propose in Fig.~\ref{fig:ablation_test} an ablation study on $\lambda$ for CIFAR-10 with $\varepsilon=10\%$.
We observe that, even for extremely high $\lambda$, the double descent is not dodged, and the overall performance of the model drops: the imposed regularization becomes too strong, and the training set is no longer entirely learned. This indicates that, while for regression tasks $\ell_2$ regularization is the key to dodging the double descent, in classification tasks the learning scenario can be very different, and some extra ingredient is still missing.
\begin{figure}
\centering
\begin{subfigure}{1\columnwidth}
\includegraphics[width=1.0\linewidth]{figures/ICASSP_4_values_Train_Loss.pdf}
\end{subfigure}
\begin{subfigure}{1\columnwidth}
\includegraphics[width=1.0\linewidth]{figures/ICASSP_4_values_Test_Loss.pdf}
\end{subfigure}
\caption{Train and test loss at varying $\lambda$ for CIFAR-10, with $\varepsilon=10\%$.}
\label{fig:ablation_test}
\end{figure}
\section{Is double descent avoidable?}
\label{sec:Conclusion}
The problem of finding the best-fitting set of parameters for deep neural networks, which has evident implications for both theoretical and applicative cases, is currently a subject of great interest in the research community. In particular, the phenomenon of double descent prioritizes the research around finding the optimal size for deep neural networks: if a model is not extremely over-parametrized it may fall into a sub-optimal local minimum, harming the generalization performance.\\
In this paper, we have taken some first steps, in a traditional classification setup, towards avoiding the double descent. If we could reliably reach a local minimum where, regardless of the over-parametrization, the performance of the model remains consistently high as the cardinality of its parameters varies, there would be no strict need to find the optimal subset of parameters for the trained model. Standard regularization approaches like $\ell_2$, which have proven their effectiveness in regression tasks, show some limits in more complex scenarios, while effectively dodging the double descent in simpler setups. This result gives us hope: a custom regularization towards the avoidance of double descent can be designed, and will be the subject of future research.
Mutinus (Latin Mutinus, an ancient Roman fertility god identified with Priapus) is a genus of fungi of the family Phallaceae.
Biological description
The fruiting body of the fungus emerges from a white, thick, cord-like mycelium; young fruiting bodies are called "witch's eggs" and are egg- or pear-shaped. The "eggs" then split into several lobes.
Species
Notes
Phallaceae
Fungi of Europe
Fungi of North America
Masthugget Church is a Lutheran church in Gothenburg, Sweden. It was built in 1914 to a design by Sigfrid Ericson on a hill near the city centre and the Göta älv river. Because the church tower is 60 metres tall, Masthugget is a prominent landmark on the city skyline, an emblem of the city, and one of its more popular tourist attractions.
The church has the status of a listed sacral monument under chapter 4 of the Kulturminneslagen (Cultural Heritage Act), since it was erected before the end of 1939 (§ 3).
References
Masthugget
Göteborg
\section{Introduction}
In wires made from a three-dimensional topological insulator (3DTI) the topological surface states form a two-dimensional conducting electron layer that envelops the bulk. The energy spectrum of these wires features a gap at zero magnetic field which closes when an axial magnetic flux of $\phi_0 = h/2e$ threads the wire cross section \cite{ostrovsky2010,Bardarson-et-al-2010,Bardarson_2013,Kozlovsky_2020,graf2020}. With closing of the gap, a non-degenerate perfectly transmitting mode appears, rendering the wire's band structure topologically nontrivial. Whereas semiconductor wires with strong spin-orbit interaction are the so far prevailing material platform to search for Majorana bound states (\cite{Mourik2012,Rokhinson2012,Deng2016,Gul2018} and references therein), mesoscopic wires made of TI material are a promising alternative \cite{Cook-Vazifeh-Franz_2012,Ilan-et-al-2014,deJuan-et-al-2014,deJuan-et-al-2019,Xypakis-et-al-2020,Legg-et-al-2021,Legg_2022}. Recent experiments utilizing the fractional Josephson effect in \ce{HgTe}-based Josephson junctions (JJ) indeed provided evidence that the 4$\pi$-periodic supercurrents observed for an axial magnetic flux $>\phi_0/4$ are of topological origin \cite{Fischer-et-al-2022}. In these experiments two superconducting contacts are placed across a HgTe wire with the TI wire constituting the normal region forming a JJ.
So far, semiconductor wires with strong spin-orbit interaction have been the prevailing system class to search for topological superconductivity. The JJ built from such wire structures and their behavior in a magnetic field have been investigated, for instance, in Refs.~\cite{Suominen-et-al-2017,Zuo-et-al-2017,Gharavi_2017,2019-PhysRevB-Sriram_et_al,Kringhoj_2021,Perla_2021,Stampfer_2022}. Related work is also available on JJs made from \ce{Cd3As2} \cite{Li-et-al-2021} or \ce{Bi2Se3} \cite{Chen-et-al-2018} wires and \ce{(Bi,Sb)2Te3} nanoribbons \cite{Schueffelgen2019}.
Here we investigate the evolution of supercurrent interference in \ce{HgTe} wire-based JJs as a function of an axial magnetic field. The supercurrent flows between the two superconducting (sc) contacts along the TI and is driven by the difference $\varphi$ of the superconducting phase between the two contacts. In the presence of an axial magnetic field the supercurrent amplitude oscillates as a result of interference between Andreev bound states acquiring different phases along their quasi-classical trajectory between the sc contacts \cite{2016-PhysRevB-Ostroukh_et_al}. In the case of a junction with the magnetic field perpendicular to the supercurrent, the supercurrent oscillations are described by $I_c(\phi)=I_c(0)\abs{\sin(\pi\phi/\phi_0)/(\pi\phi/\phi_0)}$,
identical to the Fraunhofer pattern of a single slit experiment. Such a pattern is not expected for the TI-wire JJ mentioned above, since the current flows on average parallel to the magnetic field and the shortest ballistic trajectories should not pick up any phase from the magnetic flux. Remarkably, we find oscillations in the supercurrent which are $h/2e$, $h/4e$ and even $h/8e$ periodic, thus constituting a highly unusual interference pattern. Below we relate these findings, both experimentally and theoretically in a consistent way, to the three-dimensional JJ-geometry and the coupling of the superconducting contacts to the TI wire.
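For reference, the perpendicular-field Fraunhofer dependence quoted above can be evaluated numerically (a sketch of the textbook single-slit formula only, not of the axial-field wire geometry studied here):

```python
import numpy as np

def fraunhofer_ic(phi_over_phi0, ic0=1.0):
    """|I_c(phi)| = I_c(0) * |sin(pi*phi/phi_0) / (pi*phi/phi_0)|.

    np.sinc(x) computes sin(pi*x)/(pi*x) and handles x = 0 exactly.
    """
    return ic0 * np.abs(np.sinc(np.asarray(phi_over_phi0, dtype=float)))

phi = np.linspace(-3, 3, 601)   # flux in units of phi_0
ic = fraunhofer_ic(phi)         # vanishes at every integer multiple of phi_0
```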
\section{Device parameters and experimental setup}
\begin{figure*}
\centering
\includegraphics{Fig_1.pdf}
\caption{\textbf{Sample layout, excess current, and critical current vs. magnetic field $\boldsymbol{B}$.} \textbf{a,} Cartoon of the sample layout showing the \ce{HgTe} wire and the two \ce{Nb} contacts which form the Josephson junction. The topological surface states are shown in red. The magnetic field is oriented parallel to the axis of the wires. \textbf{b,} $I$-$V$ trace of nanowire G (black trace) for $B=0$ at a temperature of $\SI{27}{mK}$. For high bias voltages, the slope represents the normal-state resistance $R_N$, while for lower voltages Andreev reflections influence the trace resulting in an excess current $I_{exc}=\SI{2.2}{\micro \ampere}$. The superconducting gap $\Delta$ can be extracted from the curve as the trace starts to deviate from the constant normal-state resistance (red curve) if $eV<2\Delta \approx \SI{1.8}{mV}$. With these values the parameter $Z\approx0.98$\ is estimated. Thus, the transparency is given by $D\approx0.51$. \textbf{c,} Color map of the differential resistance $dV/dI$ of sample G as a function of the current $I$ and the magnetic flux $\phi/\phi_0$ ($\phi_0=h/2e$). For sample G, $\phi/\phi_0$ corresponds to $B\approx\SI{36}{mT}$. Superconducting regions are shown in blue. The critical current oscillates with a period $\phi_0/2$, while the side maxima at $\phi=\phi_0$ are most pronounced. \textbf{d,} Color map of the differential resistance $dV/dI$ of sample G up to higher values of the magnetic flux. For $\phi/\phi_0>3$, additional maxima appear resulting in a $\phi_0/4$ periodicity.}
\label{fig:device_excess_sampleG}
\end{figure*}
We considered 9 devices (labeled A-J, ordered by descending JJ transparency D) made from wafers with an \SI{80}{nm} thick, strained \ce{HgTe} film, which is grown on \ce{CdTe} by molecular beam epitaxy. A thin \ce{Cd_{0.7}Hg_{0.3}Te} buffer layer was introduced in between to improve the quality of the samples \cite{2014-PhysRevLett-Kozlov_et_al}. Finally, the wafers are capped by \ce{Cd_{0.7}Hg_{0.3}Te} and \ce{CdTe}. Fig.~\ref{fig:device_excess_sampleG}a
sketches the wafer structure and the device. Typically, the Fermi level $\mu$ is located at the top of the valence band, and surface electrons as well as bulk holes coexist. The electron density is of order $n_e\sim\SI{e11}{\centi\metre^{-2}}$. Additionally, \ce{In}-doping was added to the \ce{Cd_{0.7}Hg_{0.3}Te} layers for specific wafers (samples D and G). This increases the electron density by up to one order of magnitude, since the Fermi level $\mu$ is shifted into the conduction band.
We fabricate the nanowires using electron beam lithography and wet-chemical etching \cite{2018-PhysRevB-Ziegler_et_al, Fischer-et-al-2022}. Due to the wet-chemical etching, the wires have a trapezoidal cross-section. In the following, we use the average width, which typically lies in the range $\SIrange{500}{700}{\nano\metre}$. The wire perimeter is always shorter than the phase coherence length, which is of the order of several microns \cite{2018-PhysRevB-Ziegler_et_al}; transport is thus coherent. The superconducting \ce{Nb} contacts are placed on the surface of the \ce{HgTe} after removing the capping layers by wet-chemical etching. To enhance the contact quality, we clean the \ce{HgTe} surface by gentle in-situ \ce{Ar}$^+$-sputtering and add a thin \ce{Ti} layer ($\sim\SI{3}{nm}$), grown in-situ by thermal evaporation, below the \ce{Nb}. As \ce{Nb} tends to oxidize, we add a thin layer of \ce{Pt} to protect it. The distance between adjacent superconducting contacts lies in the range $\SIrange{50}{240}{\nano\metre}$.
The samples are cooled down in a dilution refrigerator with a base temperature of $\SI{27}{mK}$. The $B$-field is aligned parallel to the wire's axis so that the magnetic flux through the wire is $\phi=BA$, where $A$ is the cross-sectional area of the wire. The measurements are taken using standard dc techniques, while the dc lines are filtered by $\pi$-filters at room temperature and \ce{Ag}-epoxy filters \cite{2014-APL-Scheller_et_al} as well as RC-filters in the mixing chamber. The differential resistance $dV/dI$ is measured by superimposing the dc bias with a small ac signal using lock-in amplifiers.
The transparency of the superconducting contacts is determined by voltage-biased measurements. An $I$-$V$ trace, shown exemplarily for sample G, is plotted in Fig.~\ref{fig:device_excess_sampleG}b. The slope of the trace stays constant and represents the normal-state resistance for bias voltages $V > \SI{1.8}{mV}$, while for lower voltages Andreev reflections modify the slope \cite{1982-PhysRevB-Blonder_Tinkham_Klapwijk, 1983-PhysRevB-Octavio_et_al}. The change of the resistance gives an estimate of the superconducting gap of \ce{Nb}, $\Delta = eV/2\approx \SI{0.9}{meV}$. The additional current flowing across the junction is the excess current $I_{exc}=\SI{2.2}{\micro \ampere}$.
With the extracted values, we estimate the dimensionless barrier parameter $Z$, which determines the average transparency $D=1/(1+Z^2)$, using the expression of Niebler et al. \cite{2009-SupScience-Niebler_et_al}, which is based on the work of Flensberg et al. \cite{1988-PhysRevB-Flensberg_et_al} and on OBTK theory \cite{1983-PhysRevB-Octavio_et_al}. Inserting the values of sample G, we get $Z \approx0.98$ and $D\approx0.51$ \footnote{We are aware that OBTK theory fails for low junction transparencies, because it does not correctly take into account interference from multiple reflections (see, e.g., Refs.~[13-15] in the supplement to \cite{Fischer-et-al-2022})}.
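The final step of this estimate, relating the barrier parameter $Z$ to the average transparency, is elementary and can be checked directly. A minimal sketch (the full OBTK relation between the excess current and $Z$ is tabulated in the cited references and omitted here; function names are ours):

```python
def transparency(Z):
    """Average interface transparency D = 1/(1 + Z^2) for BTK barrier strength Z."""
    return 1.0 / (1.0 + Z**2)

def barrier_strength(D):
    """Inverse relation: Z = sqrt(1/D - 1)."""
    return (1.0 / D - 1.0)**0.5

# Sample G: Z ≈ 0.98 gives D ≈ 0.51
print(round(transparency(0.98), 2))  # → 0.51
```

The same relation maps the transparencies listed in Table~\ref{tab:sample-table} back onto effective barrier strengths.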
An overview of the individual sample geometries and transparencies is given in Table~\ref{tab:sample-table}.
\begin{table}[]
\centering
\begin{tabular}{l|c c c c c c c c c}
Sample & A & B & C & D & E & F & G & H & J \\
\hline
$D$~ & 0.70 & 0.66 & 0.64 & 0.63 & 0.62 & 0.57 & 0.51 & 0.49 & 0.43 \\
$w$ $[\si{\nano\metre}]$~ & 450 & 900 & 570 & 600 & 470 & 700 & 700 & 540 & 520 \\
$L$ $[\si{\nano\metre}]$~ & 50 & 160 & 100 & 180 & 65 & 70 & 110 & 40 & 240 \\
$W_{\text{S}}$ $[\si{\micro\metre}]$~ & 1.3 & 1.3 & 1.3 & 1.3 & 2.3 & 4.3 & 0.6 & 4.3 & 0.6
\end{tabular}
\caption{\textbf{Sample transparencies and geometries.} Junction width $w$, length $L$, and width $W_S$ of the deposited superconducting niobium fingers for samples A-J, ordered by descending transparency $D$.}
\label{tab:sample-table}
\end{table}
\begin{figure*}
\centering
\includegraphics{Fig_2.pdf}
\caption{\textbf{Gate dependence of $\boldsymbol{I_C(B)}$-oscillations. } \textbf{a,} Sketch of the sample layout. An insulator and a metallic top-gate are placed on top of the junction. \textbf{b,} The critical current $I_C$ increases for higher gate voltages $V_G$. \textbf{c,} Color map of the differential resistance $dV/dI$ of sample J as a function of the current $I$ and the magnetic flux $\phi/\phi_0$ at $V_G=0$. The critical current oscillates with a period of $\phi_0$. \textbf{d,} The corresponding color map at $V_G=\SI{3}{V}$. Additional oscillations of $I_C$ appear, recovering the $\phi_0/2$ periodicity.}
\label{fig:gated device}
\end{figure*}
\section{Experimental Results}
\label{sec:exp}
In a Josephson junction, a magnetic field parallel to the current direction is expected to act as a pair-breaker \cite{Yip2000,Heikkila2000,Crosser2008}. In this scenario, the critical current of the device decreases monotonically with increasing magnetic field strength. For some of our devices, however, we found a strong modulation of the critical current $I_C$ as a function of the axial magnetic field $B$. Fig.~\ref{fig:device_excess_sampleG}c presents a color map of the differential resistance $dV/dI$ for sample G as a function of the current $I$ and the magnetic flux $\phi$ threading the cross-sectional area of the nanowire. This device has a critical current $I_C\approx\SI{600}{\nano\ampere}$ and shows the most prominent oscillations of $I_C$. With the width of the wire $w\approx\SI{700}{\nano\metre}$, one superconducting flux quantum $\phi_0=h/2e$ corresponds to $B\approx\SI{36}{mT}$. Blue regions in the color map indicate superconducting states. The pattern displays maxima of $I_C$ at $\phi=n\cdot\phi_0/2$ with $n$ an integer, while $I_C$ is fully suppressed in between them. Furthermore, the maxima at multiples of $\phi_0$ are more pronounced than the $\phi_0/2$ maxima. Data of the same device up to higher fluxes are shown in Fig.~\ref{fig:device_excess_sampleG}d. Here, additional maxima at $\phi=n\cdot\phi_0/4$, $n\in\mathbb{Z}$, appear. The $h/4e$ periodicity eventually changes to $h/8e$ at higher magnetic fields. The envelope of this pattern can be ascribed to the expected pair-breaking mechanism. We note at this point that roughly $h/2e$-periodic oscillations were observed by Stampfer et al. and ascribed to oscillations of the transmission due to the conventional Aharonov-Bohm effect \cite{Stampfer_2022}. Nonmonotonic behavior of $I_C(B)$ with multiple nodes and lobes but without clear periodicity was observed in semiconductor nanowire JJs in an axial field \cite{Gharavi_2017,Zuo-et-al-2017}.
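The quoted flux-to-field conversion follows from the cross-sectional area $A = w\,h$ with the film thickness $h\approx\SI{80}{nm}$ and the widths of Table~\ref{tab:sample-table}. A quick sketch using CODATA constants (helper name is ours):

```python
H_PLANCK = 6.62607015e-34   # Planck constant, J s
E_CHARGE = 1.602176634e-19  # elementary charge, C
PHI_0 = H_PLANCK / (2 * E_CHARGE)  # superconducting flux quantum h/2e ≈ 2.07e-15 Wb

def field_per_flux_quantum(width, height=80e-9):
    """Axial field B (tesla) at which one flux quantum h/2e threads A = width*height."""
    return PHI_0 / (width * height)

print(field_per_flux_quantum(700e-9))  # sample G: ≈ 0.037 T, consistent with the ~36 mT quoted
print(field_per_flux_quantum(520e-9))  # sample J: ≈ 0.050 T
```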
Only a fraction of the investigated junctions show an oscillatory interference pattern of the critical current as a function of the flux, while for the other samples the critical current decreases monotonically with the magnetic field. Even the exact shape and periodicity of the pattern, if present, differ between devices. In the following, we therefore analyze the emergence of the $I_C(B)$-oscillations as a function of experimental parameters such as the contact transparency and the gate voltage.
\begin{figure*}
\centering
\includegraphics{Fig_3_transparency.pdf}
\caption{\textbf{Impact of sample transparency on $\boldsymbol{I_C(B)}$-oscillations.} Color maps of the differential resistance $dV/dI$ as a function of the normalized current $I/I_C$ and the magnetic flux $\phi/\phi_0$ for different samples. The color maps are ordered by decreasing transparency of the junctions, from the highest transparency in (a) to the lowest in (i). \textbf{a, b,} Samples A and B with high transparency ($D\approx0.70$ and $D\approx0.66$) show no oscillations as a function of the magnetic field. \textbf{c, d, e,} Intermediate transparencies in samples C, D, E ($D\approx0.64$, $D\approx0.63$, and $D\approx0.62$, respectively): the shape of the critical current contour starts to deviate and first nodes and antinodes become observable. \textbf{f, g, h, i,} Samples F-J with the lowest transparencies ($D\approx0.57$ to $D\approx0.43$) show distinct oscillations as a function of the applied magnetic field.}
\label{fig:transmission}
\end{figure*}
\subsection{Gated devices}
Fig.~\ref{fig:gated device}c shows the data of sample J. This device has a critical current $I_C=\SI{136}{\nano \ampere}$ and an average transparency $D=0.43$, while one flux quantum $\phi_0 = h/2e$ corresponds to $B\approx\SI{50}{mT}$. For this sample we also observe $I_C\left(\phi\right)$-oscillations. However, only maxima at $\phi=n\cdot\phi_0$ are visible, leading to an $h/2e$-periodicity. For more detailed studies, a top-gate was added to the junction. This allows us to investigate the $I_C\left(\phi\right)$-oscillations as a function of the gate voltage $V_G$. The structure of a gated device is sketched in Fig.~\ref{fig:gated device}a. An insulator made of $\sim\SI{30}{\nano\metre}$ \ce{SiO_2}, grown by PECVD, and $\sim\SI{100}{\nano\metre}$ \ce{Al_2O_3}, grown by ALD, was deposited above the junction. The top-gate voltage $V_G$ is applied via a metallic \ce{Ti}/\ce{Au}-layer. Fig.~\ref{fig:gated device}b shows the critical current $I_C$ as a function of the top-gate voltage $V_G$. By tuning $V_G$ from $\SIrange{0}{3}{V}$, $I_C$ increases by a factor of $\sim1.7$. Fig.~\ref{fig:gated device}d illustrates $dV/dI\left(\phi,\,I\right)$ of sample J for $V_G=\SI{3}{V}$. In contrast to the data at $V_G=0$, additional maxima appear at $\phi=(2n+1)\cdot\phi_0/2$. Hence, the $h/4e$-periodicity is recovered by increasing $V_G$. This observation emphasizes that the $h/2e$-oscillations are the dominant ones and are observable for any $V_G$. The maxima at $\phi=(2n+1)\cdot\phi_0/2$ cannot be resolved for $V_G=0$ due to the low $I_C$ at these positions. By increasing $V_G$, the number of contributing channels grows, and the larger $I_C$ makes it possible to resolve the maxima at $\phi=(2n+1)\cdot\phi_0/2$. Compared to sample G, however, $I_C\left(\phi\right)$-oscillations with a period $h/8e$ are not observable, although the transparencies of the two devices are similar. Sample G was fabricated from a doped wafer. 
Its electron density, and thus the number of transport channels contributing to the signal, is much higher than in the undoped sample J, even when the latter is gated at high voltages. This suggests that for observing higher harmonics in the $I_C\left(\phi\right)$-oscillations a sufficiently large number of transport channels is necessary.
\subsection{Influence of the transparency}
In addition to differences in geometry, the transparency of the superconductor/nanowire interface is the decisive parameter that differentiates the devices studied. Fig.~\ref{fig:transmission} shows color maps of the differential resistance $dV/dI$ as a function of the normalized current $I/I_C$ for several samples with different transparencies. The transparency was calculated from the $I$-$V$ characteristics, as explained above and explicitly demonstrated for sample G. It should be mentioned that the extracted transparency is a value averaged over all contributing transport channels; thus, it can vary locally at the superconductor/nanowire interface. In Fig.~\ref{fig:transmission}, the color maps are ordered by the device transparency, descending from higher to lower values from top left to bottom right, $(a)\to(i)$. Moreover, the labelling of the devices A-J follows the labelling of the panels (a)-(i). Thus, devices A and B have the highest transparencies, $D\approx0.70$ and $D\approx0.66$, among the samples investigated. For these high-transparency devices the critical current $I_C$ decays monotonically with increasing magnetic flux $\phi$. For samples with slightly lower transparency, $D\approx0.64$ and $D\approx0.63$, as in samples C and D, the monotonic decrease of the critical current still prevails but an additional shoulder emerges. This shoulder can be considered a precursor of the supercurrent interference appearing at still lower transparencies. The oscillations set in for device E ($D\approx0.62$). Initially, $I_C$ decreases and is almost fully suppressed below $\phi=\phi_0$. Then, $I_C$ increases again and shows a maximum around $\phi=\phi_0$. The oscillations become more pronounced for samples F ($D\approx0.57$), G ($D\approx0.51$), H ($D\approx0.49$), and J ($D\approx0.43$), which have even lower transparencies.
These samples show clear $I_C(B)$-oscillations with periodicities $h/2e$ or $h/4e$. For samples G and J, the maxima appear exactly at positions $\phi=n\cdot\phi_0/2$ and $\phi=n\cdot\phi_0$, respectively, while the positions are slightly shifted for devices E and F, where the observed oscillation periods deviate by about 10 percent of a flux quantum from $h/4e$ and $h/2e$. For sample H, the observed periodicity is approximately 20 percent smaller than what one would expect from the geometry. These deviations occur in samples with much wider superconducting contacts (see Table~\ref{tab:sample-table}), suggesting that the larger contacts might affect the flux distribution in the junction.
Based on these experimental observations we conclude that the transparency $D$ is the most influential parameter that determines whether $I_C(B)$-oscillations occur or not. The oscillations appear preferentially for samples with low average transparency, while they are fully absent for high transparencies.
\section{Theory}
\label{sec:theory}
\begin{figure}
\centering
\includegraphics[%
keepaspectratio,%
width=\columnwidth,%
]{Fig_4_system}
\caption{%
\textbf{Geometry of the system used in the theoretical model.}
The upper panel shows a 3D sketch of the nanowire Josephson junction
while the lower panel is a 2D sketch of the rolled out and periodically continued nanowire surface.
Regions with induced superconductivity are shaded green,
normal conducting regions gray.
Superconductivity is not induced around the whole circumference;
the bottom surface is still considered normal conducting. Additional barriers used in the model are marked as vertical orange lines in the lower panel.
The different types of retro-reflected paths arising from our semiclassical analysis, Sec.~\ref{sec:semiclassics}, are shown in red, purple and blue, respectively.
}
\label{fig:system}
\end{figure}
To proceed we summarize the desiderata and key aspects of the physical problem from a more theory-oriented point of view:
\begin{itemize}
\item[(i)] there must be sufficiently many open surface channels between the two superconducting electrodes to ensure a fairly high $I_C$;
\item[(ii)] a number of open channels must be sensitive to the flux threading the nanowire cross-section, otherwise no $\phi$-periodicity would show up;
\item[(iii)] imperfect contacts, representing barriers for the transport electrons, suppress contributions from flux-insensitive channels relative to flux-sensitive ones.
\end{itemize}
Based on these premises we first define the model geometry, sketched in Fig.~\ref{fig:system} and introduced in detail below.
An assumption that will turn out to be critical is that the Nb fingers induce superconductivity only close to the contact regions (shaded green areas in Fig.~\ref{fig:system}), \ie the nanowire bottom surface remains normal conducting\footnote{Except for the highest-quality samples, see the discussion in Sec.~\ref{sec:exp_theo_comparison}.}. We will later demonstrate that modes formed by Andreev retro-reflection (partially) winding around the circumference of the 3DTI nanowire pick up an Aharonov-Bohm phase and lead to the experimentally observed supercurrent oscillations.
To reach our conclusions we combine semiclassical analytics with tight-binding numerical simulations. Semiclassics allows us to identify the fundamentals of the transport problem in terms of families of electronic paths which enclose (or do not enclose) a magnetic flux. This picture is validated by rigorous quantum transport simulations based on a tight-binding implementation of the corresponding Bogoliubov-de Gennes (BdG) Hamiltonian, see below.
Our analysis shows that the relevant aspects of the problem are geometrical (non-planar surface conduction, winding vs.~straight propagation, nanowire perimeter not fully superconducting), while the Dirac or trivial (quadratic) nature of the carriers seems to play a secondary role.
\subsection{Geometry and Model}
\label{sec:model}
The upper panel of Figure~\ref{fig:system} shows the model geometry of the 3D nanowire JJ and the lower panel its unrolled surface. In the figure,
$w$ and $h$ denote the nanowire width and height,
$L$ the junction length,
and $W_S$ the width of the superconducting contacts.
We also introduce the perimeter $P = 2 w + 2 h$
and the interfacial boundary $C = w + 2 h$ which describes the length of the perimeter covered by the superconducting contacts.
In our model we include phenomenological delta-like barriers at the interfaces
between the normal and superconducting parts in the transverse direction only, \ie along the perimeter:
\begin{equation}
\label{eq:barrier}
U(z, s) = U_0 \Theta(s) \Theta(w + 2 h - s) [\delta(z) + \delta(z - L)] \, .
\end{equation}
The barriers are marked in orange in Fig.~\ref{fig:system}.
They account for the fact that the supercurrent oscillations appear in the less transparent
junctions; see Fig.~\ref{fig:transmission}. Indeed, the barriers turn out to be essential for the observation and understanding of the supercurrent oscillations with a flux-periodicity of $h/4e$.
The reason for their presence lies in the fabrication process itself.
Foremost, an incomplete removal of the capping layer induces a barrier at the interfaces between Nb and HgTe.
These complex interface physics are simplified, but essentially captured, by the local delta barriers.
Starting from this geometrical model the JJ system is quantum mechanically described by the Bogoliubov-de Gennes Hamiltonian
\begin{equation}
\label{eq:BdG-Hamiltonian}
H = \begin{pmatrix}
h_e & \Delta e^{i \varphi} \\
\Delta e^{-i \varphi} & h_h
\end{pmatrix}
,
\end{equation}
where $h_e$ and $h_h$ describe the electron and hole Hamiltonians
and $\Delta$ and $\varphi$ denote the absolute value and phase of the pairing potential.
The topological surface states are described by the Dirac model Hamiltonian \cite{2010-PhysRevLett-Zhang_Vishwanath,2010-PhysRevLett-Bardarson_Brouwer_Moore}
\begin{equation}
\label{eq:H_Dirac}
h_{\textit{e/h}} = \pm \hbar v_F \left[
\qmop{k}_z \sigma_x + \left( \qmop{k}_s \pm \frac{\phi}{\phi_0} \frac{\pi}{P} \right) \sigma_y
\right] \mp \mu \pm U
,
\end{equation}
the upper (lower) sign denoting the electron (hole) Hamiltonian.
Here, $z$ and $s$ are the coordinates along the wire axis and along the perimeter,
and $\qmop{k}_z$ and $\qmop{k}_s$ the respective momentum operators. Furthermore,
$\mu$ is the chemical potential and $U$ denotes the barriers at the NS-interfaces, see also below.
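A direct consequence of Eq.~\eqref{eq:H_Dirac}, together with the antiperiodic boundary condition of the spin-momentum-locked surface states, is the well-known flux dependence of the normal-state subbands: the transverse momenta are $q_n = (\pi/P)(2n+1+\phi/\phi_0)$, so the surface spectrum becomes gapless at odd multiples of $\phi_0$. A short numerical sketch with the parameters of Sec.~\ref{sec:results} (helper names are ours):

```python
import numpy as np

HBAR_VF = 330.0          # hbar*v_F in meV nm
P = 2 * (300.0 + 80.0)   # nm, perimeter 2(w + h) of the model geometry

def transverse_momenta(f, n_max=50):
    """Quantized momenta q_n = (pi/P)(2n + 1 + f), f = phi/phi_0 (antiperiodic BC + flux)."""
    n = np.arange(-n_max, n_max)
    return np.pi / P * (2 * n + 1 + f)

def surface_gap(f):
    """Normal-state gap 2*hbar*v_F*min|q_n| of the surface Dirac subbands, in meV."""
    return 2 * HBAR_VF * np.min(np.abs(transverse_momenta(f)))

print(surface_gap(0.0))  # finite subband gap (~2.7 meV) at zero flux
print(surface_gap(1.0))  # gapless mode at phi = phi_0 = h/2e
```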
Only the nanowire surface in direct contact with the superconductor, shaded green in Fig.~\ref{fig:system}, is affected by the proximity effect. Its bottom surface, grey in Fig.~\ref{fig:system}, remains normal.
Accordingly, the absolute value of the pairing potential is modelled as follows:
\begin{equation}
\label{eq:Delta}
\Delta = \begin{cases}
\Delta_0 & \text{for $0 \leq s \leq C$ and $-W_S \leq z \leq 0$,} \\
\Delta_0 & \text{for $0 \leq s \leq C$ and $L \leq z \leq L + W_S$,} \\
0 & \text{otherwise.}
\end{cases}
\end{equation}
Furthermore, we assume that the thickness of the \ce{Nb} contacts
is much smaller than the London penetration depth of \ce{Nb}
such that no supercurrent develops around the perimeter
and the magnetic field is not screened.
Thus, the Hamiltonian \eqref{eq:BdG-Hamiltonian} has to be defined in a gauge-invariant way.
To ensure this, the superconducting phase $\varphi$, defined only in the regions $-W_S \leq z \leq 0$ and $L \leq z \leq L + W_S$, satisfies
\begin{equation}
\label{eq:Phase_of_Delta}
\varphi = \begin{cases}
- \frac{1}{2} \varphi_0 + 2 \pi \frac{\phi}{\phi_0} \frac{s}{P} & \text{for $-W_S \leq z \leq 0$,} \\
+ \frac{1}{2} \varphi_0 + 2 \pi \frac{\phi}{\phi_0} \frac{s}{P} & \text{for $L \leq z \leq L + W_S$, }
\end{cases}
\end{equation}
and the unitary transformation $V(\phi) H(\phi) V^{\dagger}(\phi) = H(0)$ holds for
\begin{equation}
\label{eq:unitary-transformation}
V(\phi) = \exp\left(i \pi \frac{\phi}{\phi_0} \frac{s}{P} \tau_z \right)
.
\end{equation}
The transformation also modifies the boundary condition of the wave function,
\begin{equation}
\label{eq:boundary-condition}
(V \Psi)(s + P) = \pm \exp\left(- i \pi \frac{\phi}{\phi_0} \tau_z\right) (V \Psi)(s)
,
\end{equation}
necessary for the calculation of the Andreev bound states.
Note that Eq.~(\ref{eq:Phase_of_Delta}) for $\varphi$
can also be derived using Ginzburg-Landau theory:
the kinetic part of the free energy density is proportional to the square of the supercurrent density $\vec{J}_S = - 2 (e n_S / m) (\hbar \nabla \varphi + 2 e \vec{A})$.
Minimizing it leads to $\nabla \varphi = - 2 e \vec{A} / \hbar$
\cite{1996-Tinkham-superconductivity,2018-PhysRevB-Wojcik_Nowak,2019-PhysRevB-Winkler_et_al}.
\subsection{Semiclassical analysis}
\label{sec:semiclassics}
\subsubsection{Method}
\label{sec:method}
A semiclassical approach is justified in the limit $k_F L \gg 1$, which is fulfilled in our system, see Sec.~\ref{sec:results}. We thus follow the procedure from Ref.~\cite{2016-PhysRevB-Ostroukh_et_al}.
First we identify all classical self-retracing trajectories $\Gamma$ that arise from pure retro-reflections at the left and right NS contacts. Such trajectories are thus composed of electron-like and hole-like path segments. Each trajectory $\Gamma$ is then assigned a wave mode bound to a small tube of width $\lambda_F = 2 \pi / k_F$ and contributes a current of $j(\Gamma)$ to the total current.
The total current follows by integrating the contributions $j(\Gamma)$ over all paths $\Gamma$ at the Fermi surface.
Choosing a cut $z = z_{\text{cut}}$ through the normal part,
the paths can be characterized by the $s$ coordinate along this cut
and the transverse wave number $k_s$
such that the integral reads \cite{2016-PhysRevB-Ostroukh_et_al}
\begin{align}
I &= \frac{1}{2 \pi} \int \dd s \int \dd k_s \; j(s, k_s)
\notag\\
\label{eq:definition-total-current}
&= \frac{k_F}{2 \pi} \int \dd s \int \dd \theta \; \cos(\theta) j(s, \theta),
\end{align}
with $\theta$ the path angle with respect to the $z$ direction.
The expression \eqref{eq:definition-total-current} contains a significant simplification: It does not account for specular normal reflection at the NS interfaces, which would modify the definition of the current in terms of paths.
The inclusion of additional paths from such normal reflections
substantially complicates the calculation of $j(\Gamma)$ and $I$ and requires the use of resummation techniques beyond the scope of this work. Moreover, we will establish {\it a posteriori} via quantum mechanical simulations that the perfectly retro-reflected paths are indeed the dominant contributions.
Note also that there is no bending of the paths due to the $B$-field, since the Lorentz force points perpendicular to the nanowire surface. Finally, for simplicity we stick to the short junction limit, $L \ll \xi = \hbar v_F / \Delta_0$, although we expect our findings to qualitatively hold for long junctions as well.
\subsubsection{Classification of the trajectories}
\label{sec:trajectory-classification}
The classical trajectories can be divided into different categories.
First, we can assign a ``crossing number''~$n$ to each path
which counts the crossings through the nonproximitized bottom surface.
Formally, one can define a line cut $s = s_{\text{cut}}$
with $C < s_{\text{cut}} < P$
and count the (directed) crossings through this cut.
We emphasize that this integer $n$ does not correspond to a proper winding number around the perimeter. It only counts the transverse crossings of the non-proximitized bottom surface.
Second, we can group the paths according to their start and end points,
see Fig.~\ref{fig:system}:
\begin{enumerate}
\item \textit{Type-1 paths} start and end on the $z = 0$ and $z = L$ NS interfaces;
\item \textit{Type-2 paths} are ``mixed'' paths,
where start and end points are located on a $z = \textit{const}$ and a $s = \textit{const}$ interface;
\item \textit{Type-3 paths} comprise paths with start and end points on the $s = 0$ and $s = C$ interfaces.
\end{enumerate}
Type-2 paths can be further subdivided into type-2L and type-2R paths, where type-2L paths start on the $z = 0$ interface and type-2R paths end on the $z = L$ interface. It is important to notice that type-2 and type-3 paths only exist for $n \neq 0$, in other words there are only type-1 paths with $n = 0$.
For given initial coordinates $(s_0, z_0)$ and final coordinates $(s_1, z_1)$, the trajectories are parametrized as
\begin{align}
\label{eq:trajectory}
s(t) = s_0 + t \frac{k_s}{k_F}
\quad \text{and} \quad
z(t) = z_0 + t \frac{k_z}{k_F}
,
\end{align}
where the wave numbers satisfy
\begin{equation}
\label{eq:relation-ks-kz}
\frac{k_z}{k_s} = \frac{z_1 - z_0}{s_1 - s_0}
\quad \text{and} \quad
k_z^2 + k_s^2 = k_F^2
\, .
\end{equation}
\subsubsection{Current contributions}
\label{sec:current-contributions}
To calculate the current contribution $j(\Gamma)$ for each classical trajectory $\Gamma$,
we employ the scattering matrix formalism introduced by Beenakker
for 1D Josephson junctions
\cite{1991-PhysRevLett-Beenakker}.
In the short junction limit $L \to 0$, one obtains for the energies of the Andreev bound states (ABS) \cite{1991-PhysRevLett-Beenakker,2004-JSupercond-Klapwijk}
\begin{equation}
\label{eq:abs-finite-transmission}
E = \pm \Delta_0 \sqrt{1 - \tau \sin^2\left(\tfrac{1}{2} \varphi_0 - \gamma\right)}
.
\end{equation}
Here, the gauge-invariant phase difference $\varphi_0 - 2 \gamma$ appears \cite{1996-Tinkham-superconductivity}, where
\begin{equation}
\label{eq:aharonov-bohm-phase}
\gamma = \frac{e}{\hbar} \int_{\Gamma} \dd \vec{s} \cdot \vec{A}
= n \pi \frac{\phi}{\phi_0}
\end{equation}
is the Aharonov-Bohm (AB) phase of the classical trajectory.
In Eq.~(\ref{eq:abs-finite-transmission}), the parameter $\tau$ depends on the transparency and is different for the different types of paths.
For zero temperature, the current contribution reads \cite{1991-PhysRevLett-Beenakker,2004-JSupercond-Klapwijk}
\begin{equation}
\label{eq:abs-current-finite-transmission}
j = \frac{e \Delta_0}{4 \hbar} \frac{\tau \sin(\varphi_0 - 2 \gamma)}{\sqrt{1 - \tau \sin^2(\varphi_0 / 2 - \gamma)}}
,
\end{equation}
approaching in the limit $\tau \to 1$
\begin{equation}
\label{eq:abs-current-with-magnetic-field}
j = \frac{e \Delta_0}{2 \hbar} \sin\left(\tfrac{1}{2} \varphi_0 - \gamma\right) \sgn\left[\cos\left(\tfrac{1}{2} \varphi_0 - \gamma\right)\right]
,
\end{equation}
where $\sgn$ is the sign function.
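As a consistency check, Eq.~\eqref{eq:abs-current-with-magnetic-field} must be the $\tau \to 1$ limit of Eq.~\eqref{eq:abs-current-finite-transmission}; using $\sin(\varphi_0 - 2\gamma) = 2\sin(\varphi_0/2-\gamma)\cos(\varphi_0/2-\gamma)$ this follows analytically and can also be verified numerically (function names are ours; both currents in units of $e\Delta_0/2\hbar$):

```python
import numpy as np

def j_finite_tau(phase, gamma, tau):
    """Andreev-level current for transmission tau, in units of e*Delta0/(2*hbar)."""
    return 0.5 * tau * np.sin(phase - 2*gamma) / np.sqrt(
        1 - tau * np.sin(0.5*phase - gamma)**2)

def j_ballistic(phase, gamma):
    """tau -> 1 limit: sin(x) sgn(cos x) with x = phase/2 - gamma."""
    x = 0.5*phase - gamma
    return np.sin(x) * np.sign(np.cos(x))

phase = np.linspace(0, 2*np.pi, 400, endpoint=False)
gamma = 0.3
ok = np.abs(np.cos(0.5*phase - gamma)) > 1e-2  # avoid the jump of the ABS branch
assert np.allclose(j_finite_tau(phase[ok], gamma, 1.0), j_ballistic(phase[ok], gamma))
```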
For the different types $m$ of paths, one obtains different $\tau_m$, namely
\begin{align}
\label{eq:dirac-tau-1}
\tau_1 &= \frac{1}{\sin^2(\varphi_N) + X^2 \cos^2(\varphi_N)}
, \\
\label{eq:dirac-tau-2-3}
\tau_2
&= \frac{1}{1 + Z^2 (1 + Z^2)^{-1} \tan^2(\theta)}
, \quad \text{and} \quad
\tau_3 = 1
\end{align}
with the dimensionless barrier strength $Z = U_0 / \hbar v_F$ \cite{1983-PhysRevB-Octavio_et_al}.
The parameters $\varphi_N$ and $X$ are given by
\begin{align}
\label{eq:dirac-phi-n}
\varphi_N &= 2 \arctan\left(
\tfrac{\cos(\theta) + Z \tan(\theta)}{ Z - \sin(\theta) - [1 + Z^2 + Z^2 \tan^2(\theta)]^{1/2} }
\right)
\intertext{and}
\label{eq:dirac-X}
X &= [1 + 2 Z^2 (1 + Z^2)^{-1} \tan^2(\theta)].
\end{align}
\subsection{Numerical simulations}
Besides the semiclassical approach we also employ numerical tight-binding simulations with the Python package Kwant~\cite{2014-Groth_et_al}.
Using the finite difference method, the BdG Hamiltonian Eq.~\eqref{eq:BdG-Hamiltonian} and its components, consisting of nontrivial surface states with a linear dispersion Eq.~\eqref{eq:H_Dirac}, are evaluated on a discrete square grid with lattice constant $a$.
Note that by putting the Dirac Hamiltonian on a lattice, the well known Fermion doubling problem arises \cite{1977-PhysRevD-Susskind,1982-PhysRevD-Stacey,1981-PhysLettB-Nielsen_Ninomiya,1981-NuclPhysB-Nielson_Ninomiya_1,1981-NuclPhysB-Nielson_Ninomiya_2}.
This issue can be circumvented by considering an additional Wilson mass term $H_W=E_{W}a/(4\hbar v_F) (k_z^2+ k_s^2)\sigma_z$
\cite{1977-PhysRevD-Susskind,2016-ApplPhysLett-Habib_Sajjad_Ghosh}, which gaps out the artificial Dirac cones at the borders of the first Brillouin zone.
This term is important to avoid non-physical inter-valley scattering introduced by the potential barriers $U(z,s)$, Eq.~(\ref{eq:barrier}), in the JJ.
Moreover, the amplitude of the delta barriers has to be rescaled appropriately in the discrete representation,
which is achieved by setting the on-site barrier value to $U'_0 = U_0/a$.
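The effect of the Wilson term can be illustrated on the corresponding 1D Bloch Hamiltonian: in units of $\hbar v_F/a$ the discretized Dirac operator reads $\sin(ka)\,\sigma_x + W(1-\cos ka)\,\sigma_z$ with a dimensionless Wilson parameter $W$. A schematic sketch (not our full 2D implementation):

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_h(ka, W):
    """Discretized 1D Dirac Bloch Hamiltonian (units hbar*v_F/a) with Wilson mass W."""
    return np.sin(ka) * SX + W * (1 - np.cos(ka)) * SZ

def gap(ka, W):
    """Energy splitting of the two bands at fixed ka."""
    e = np.linalg.eigvalsh(bloch_h(ka, W))
    return e[1] - e[0]

assert gap(0.0, 0.5) < 1e-12          # physical Dirac point stays gapless
assert abs(gap(np.pi, 0.0)) < 1e-12   # without Wilson term: spurious gapless doubler
assert gap(np.pi, 0.5) > 1.9          # Wilson term gaps the doubler (gap = 4W here)
```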
\begin{figure*}
\centering
\includegraphics[%
keepaspectratio,%
width=2\columnwidth,%
]{theory-L100-vertical}
\caption{%
{\bf Critical current for the TI nanowire-based Josephson junction}.
The results from the semiclassical (left panel) and numerical calculations (right) are shown for four different strengths of the interfacial barrier potential, Eq.~(\ref{eq:barrier}).
The barrier predominantly suppresses contributions from direct paths which do not cross the bottom surface of the wire,
such that the peaks at $\phi = \phi_0/2$ and $3\phi_0/2$ (\ie at odd multiples of $h/4e$) emerge.
With increasing barrier strength, these peaks become more and more pronounced relative to the peaks at integer multiples of $\phi_0$.
}
\label{fig:theory}
\end{figure*}
Connecting the lattice sites with coordinates $(z,s=0)$ and $\left(z,s=P\right)$ by a hopping with phase factor $\exp(i\pi)$, we introduce anti-periodic boundary conditions.
Moreover, the flux through the wire cross section is accounted for by a Peierls substitution with the additional phase factor $\exp(i2\pi\frac{a}{P}\frac{\phi}{2\phi_0} )$.
Finally, superconductivity is introduced as simple onsite s-wave pairing given by Eq.~\eqref{eq:Delta}.
For the numerics we assume semi-infinite leads, \ie $W_S\rightarrow \infty$, because we directly attach translationally invariant superconducting leads to the normal part of the JJ to keep the numerical cost to a minimum.
Additionally, we consider the local phase modulation of $\Delta$ introduced in Eq.~\eqref{eq:Phase_of_Delta}.
To access the current-phase relation and incorporate all geometrical junction details we compute the supercurrent following Ref.~\cite{FURUSAKI1994214}.
Furthermore we exploit part of the code package provided in a repository of Ref.~\cite{Zuo-et-al-2017}, and adapt it to our implemented tight-binding model. The core of the numerical method is the computation of the current-phase relation via Green's functions.
The supercurrent is given by
\begin{multline}\label{eq:furusaki_I_CPR}
I_{LR}(\varphi_0,\phi)= 2\frac{ek_{\mathrm{B}}T}{\hbar}\sum_{n=0}^{\infty}
\sum_{\substack{i\in R \\ j\in L}}\mathrm{Im} \left( H_{ji}G^r_{ij}(i\omega_n)\right. \\
\left.- H_{ij}G^r_{ji}(i\omega_n) \right),
\end{multline}
where $\omega_n = \frac{k_{\mathrm{B}}T}{\hbar} (2n+1) \pi$ are fermionic Matsubara frequencies.
The labels $i$ and $j$ run over lattice sites in two adjacent transversal lattice rows $R$ and $L$. In Eq.~(\ref{eq:furusaki_I_CPR}) the terms $H_{ij}$ and $G_{ij}$ denote the hopping matrix elements and the off-diagonal elements of Green's function, respectively, connecting those sites.
Furthermore, the phase difference $\varphi_0$ is incorporated into the hopping matrix elements as a phase factor.
This is simply introduced by performing a gauge transformation that shifts the phase difference into a vector potential inside the JJ.
For more details of the methodology we refer the reader to Refs.~\cite{2016-PhysRevB-Ostroukh_et_al,Zuo-et-al-2017}. For a fixed magnetic flux, the critical current is
\begin{equation}
I_c(\phi) = \max_{\varphi_0} |I_{LR}(\varphi_0,\phi)| \, ,
\end{equation}
\textit{i.e.}, the maximum of the corresponding current-phase relation.
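Schematically, the procedure amounts to generating the Matsubara frequencies, evaluating the current-phase relation on a grid of phase differences $\varphi_0$, and maximizing its modulus. The Python sketch below assumes the Green's-function evaluation of Eq.~(\ref{eq:furusaki_I_CPR}) is wrapped in some callable `cpr`; here a toy sinusoidal CPR stands in for it, and all names and scales are illustrative rather than the actual implementation.

```python
import numpy as np

def matsubara_frequencies(n_max, kBT_over_hbar=1.0):
    """Fermionic Matsubara frequencies omega_n = (k_B T / hbar)(2n + 1) pi,
    in units where k_B T / hbar = 1 by default (illustrative scale)."""
    n = np.arange(n_max)
    return kBT_over_hbar * (2 * n + 1) * np.pi

def critical_current(cpr, flux, n_phase=201):
    """I_c(phi) = max over phi0 of |I_LR(phi0, phi)|, scanned on a
    discrete grid of phase differences phi0 in [0, 2*pi]."""
    phi0 = np.linspace(0.0, 2.0 * np.pi, n_phase)
    return np.max(np.abs([cpr(p, flux) for p in phi0]))

# Toy stand-in for the Green's-function CPR of the actual calculation:
toy_cpr = lambda phi0, phi: np.sin(phi0)
I_c = critical_current(toy_cpr, flux=0.0)
```

The same phase scan, repeated for each value of the flux, yields the curves shown below.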
\subsection{Semiclassical and numerical results for the critical current}
\label{sec:results}
We are now in a position to combine semiclassics and quantum mechanical simulations to explain the central experimental findings for the critical current reported in Sec.~\ref{sec:exp}.
For the realistic JJ setup discussed in Sec.~\ref{sec:model} we choose the following parameters to model the SNS-junction geometry, see Fig.~\ref{fig:system}:
$w = \SI{300}{\nano\metre}$, $h = \SI{80}{\nano\metre}$,
$L = \SI{100}{\nano\metre}$, $W_S = \SI{1000}{\nano\metre}$,
$\hbar v_F = \SI{330}{\milli\electronvolt\nano\metre}$, $\mu = \SI{30}{\milli\electronvolt}$, and $\Delta_0 = \SI{0.8}{\milli\electronvolt}$,
in accordance with Refs.~\cite{2018-PhysRevB-Ziegler_et_al,2021-PhysRevB-Fuchs_et_al,Fischer-et-al-2022}.
The corresponding Fermi wave number is $k_F \approx \SI{0.09}{\per\nano\metre}$, \textit{i.e.}\ the Fermi wavelength is $\lambda_F \approx \SI{70}{\nano\metre}$, and $k_F L \approx 10$. Hence, the semiclassical limit ($k_F L \gg 1$) is well justified.
Since the coherence length reads $\xi \approx \SI{400}{\nano\metre}$, working in the short junction limit is also justified.
In the semiclassical calculations we include only paths with crossing numbers $n = 0, \pm 1$, since their angle $\theta$ is small and maximizes the $\cos\theta$ factor in the integral~\eqref{eq:definition-total-current}. Paths with higher crossing number $|n|$ have lower weight, and indeed we checked that including them modifies our results only marginally. Furthermore, higher-crossing paths quickly approach the coherence-length cutoff, \ie phase coherence is lost before the electron crosses the junction.
The value of $\mu$ is chosen to have a high number of open channels while still keeping the numerical simulations in an energetically converged regime.
Numerical and semiclassical results for the critical current are shown and compared to each other in Fig.~\ref{fig:theory}.
On the whole the numerics (left panel) and semiclassics (right panel) show qualitative agreement.
It is convenient to start by looking at the numerics, which show peaks only at integer multiples of the flux quantum $\phi_0 = h/2e$ in the case of perfect interface transparency, \ie without any barrier ($U_0 = 0$).
We note that in high-transparency samples no oscillations were measured at all. Our theory model predicts no oscillations for fully proximitized systems, which are indeed more likely when the NS junction is good. In a fully proximitized nanowire there is no phase variation around the perimeter, except in integer multiples of $2\pi$ describing a vortex. Without any accounting for the vector potential in the superconducting phase, the gap and therefore also the critical current show just an exponential decay.
For increasing barrier strength $U_0$, the interface transparencies $\tau_{1,2}$ decrease,
leading to an overall reduction of the critical current.
At the same time, with increasing $U_0$ new maxima emerge and grow at fluxes
$\phi = h/4e = \phi_0/2$ and $3h/4e=3\phi_0/2$, reaching a peak height of nearly one half that of the major peaks (for $U_0 = \SI{600}{\milli\electronvolt\nano\metre}$).
The semiclassical results from the right panel of Fig.~\ref{fig:theory} show a corresponding trend: a decreasing critical current with increasing barrier height and the emergence of additional peaks at $\phi = h/4e$ and $3h/4e$. In the semiclassical calculation the dominant peaks arise mainly from the short lead-connecting trajectories marked as type-1 paths with crossing number $n = 1$ in Fig.~\ref{fig:system}.
Upon increasing the barrier height contributions from such type-1 paths are suppressed relative to those from type-2 and type-3 paths with $n = \pm 1$, since the former involve two barrier reflections while the latter only one, or none at all.
For instance, the current associated with type-3 paths is not influenced by the barrier at all.
The growing relevance of paths with $n = \pm 1$ and no barrier reflection leads to the emergence of the peaks at $h/4e$ and to their increase relative to the peaks at $h/2e$.
To conclude the comparison, semiclassical and numerical results agree on the fundamental aspects: they both predict the emergence of peculiar $h/4e$ peaks for larger barrier strength~$U_0$, the increase of their magnitude relative to the $h/2e$ peaks, and the broadening of all peaks with increasing $U_0$.
A few differences between them, however, remain:
The numerics give a considerably smaller value of the current,
and the peak current also decreases faster with increasing barrier strength~$U_0$. Regarding this,
first note that the actually induced ``effective'' gap of each of the ABSs as obtained in the numerical calculations is smaller than $\Delta_0$; see App.~\ref{app:ABS_numerics} for a detailed discussion.
To account for this in the semiclassical calculation, one would need to introduce an effective gap $\Delta_{\text{eff}} < \Delta_0$ (possibly different for each mode).
Second, as mentioned in Sec.~\ref{sec:method} the semiclassical method neglects contributions from paths with normal specular reflection.
We expect the resulting effects to reduce the current further, as more normal electron reflection reduces the contribution of Andreev reflection.
Furthermore, the numerics are not limited to the short-junction limit, and in fact fully capture effects of finite length and finite temperature. For shorter junctions the difference between the semiclassical and numerical current magnitudes is indeed smaller, an explicit hint that the short-junction assumption of semiclassics loses accuracy for longer systems.
To conclude the theory discussion, the semiclassical approach is approximate but enables us to interpret the different peculiar peaks in terms of specific (quantized) relevant families of trajectories.
The emergence of the additional peaks related to paths (partially) winding around the nanowire highlights the three-dimensional character of the SNS junction geometry, compared to common planar junctions.
\section{Comparison of experiment and theory}
\label{sec:exp_theo_comparison}
We finally compare the experimentally measured critical currents with the corresponding theoretically calculated results.
Consider first samples with a high average transparency, which can also be modeled in the framework of the effective surface model. As mentioned in Sec.~\ref{sec:model}, a high-quality NS interface should allow superconducting pairing to be induced across the entire nanowire perimeter. This implies that no phase variation around the perimeter as given by Eq.~\eqref{eq:Phase_of_Delta} can develop, and the Andreev bound states become similar to those of planar Josephson junctions. In such a scenario the magnetic field simply destroys pair correlations, and the superconducting gap decreases monotonically with increasing field strength. As a consequence the critical current decays exponentially without any oscillation, as indeed measured in high-transparency samples.
On the contrary, flux-periodic supercurrent oscillations are observed in samples with low average transparency. In Sec.~\ref{sec:model} we argued that the junction transparency might be reduced due to an imperfect removal of a capping layer, which lowers the interface quality between the superconducting Nb and HgTe. The imperfect interface was modeled both semiclassically and numerically via barriers of varying strength, whose presence suppresses the large current contributions which have no or only an $h/2e$ periodicity. Conversely, the $h/4e$-periodic current components are not affected and their signatures emerge, providing a clear explanation for the observed behavior of low-transparency junctions.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{TI_JJ_critical_current_exp_fit}
\caption{Introducing an exponential envelope function to mimic the pair breaking mechanism of the applied flux leads to a good agreement between theoretical results and the experimental observations. The blue curve corresponds to the originally calculated numerical data, while the orange curve shows the adjusted data.}
\label{fig:exp_decay}
\end{figure}
Irrespective of the sample quality, all measurements show also a decrease of the current for increasing magnetic field. This is expected and attributed to the reduction of the induced superconducting gap by the magnetic field \cite{deJuan-et-al-2019}, which weakens pairing correlations. One can phenomenologically account for this behavior by multiplying the theoretical data with an appropriate envelope function, mimicking the weakening of the BdG pairing amplitude $\Delta_0$ of Eq.~(\ref{eq:Delta}).
\footnote{A microscopic description would require a self-consistent treatment of the superconductor in its electromagnetic environment, which is beyond the scope of the present work.}
An example is shown in Fig.~\ref{fig:exp_decay}, where we assumed a simple exponential decay of the pairing potential with respect to the applied flux.
The data were numerically computed with the same system parameters as for Fig.~\ref{fig:theory}, except that the length was increased to $L=\SI{200}{\nano\metre}$ to better match the experimental dimensions. The blue curve is the raw simulation data, while the orange one is adjusted with the phenomenological flux-induced decay. The adjusted critical current exhibits all qualitative features of the experimental curve plotted in Fig.~\ref{fig:transmission}g. In particular, the peak at $\phi/\phi_0=1$ is larger than the first half-integer one.
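A minimal sketch of this phenomenological adjustment: the computed $I_c(\phi)$ is multiplied by an exponential envelope in the applied flux, with the decay scale treated as a free fit parameter (the particular value used for the figure is not implied here; function and parameter names are our own).

```python
import numpy as np

def apply_flux_envelope(flux, I_c, flux_decay=1.0):
    """Multiply a computed critical-current curve I_c(phi) by the envelope
    exp(-phi / flux_decay), mimicking the field-induced weakening of the
    induced pairing. flux is in units of phi_0; flux_decay is a free fit
    parameter, not a microscopically derived quantity."""
    return np.asarray(I_c) * np.exp(-np.asarray(flux) / flux_decay)
```

Fitting `flux_decay` to the measured decay of the peak heights then produces the adjusted (orange) curve from the raw (blue) one.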
We further remark that oscillations with a period of $h/8e$ were also observed in sample G.
From the semiclassical model such a periodicity is to be expected if paths with crossing number $n = \pm 2$ contribute considerably to the current flow. This should be possible in the presence of a large overall number of conducting channels, with sufficiently many belonging to the $n=\pm2$ family to make their signature visible -- recall that such paths are identified by a large angle $\theta$, such that the weight of a single path in Eq.~\eqref{eq:definition-total-current} is usually very low.
This agrees with the observation that sample G has indeed the highest number of open transport channels. Our argument is also in line with the behavior of sample J: a gate voltage of $V_G = \SI{3}{\volt}$ has to be applied to the junction before the $h/4e$-periodic oscillations can be measured. The gate voltage increases the Fermi energy and hence the number of open transport channels. As a consequence the contribution of type-2 and type-3 paths grows and maxima at $\phi=(2n+1)\cdot\phi_0/2$ appear.
\section{Conclusions}
\label{sec:conclusions}
We realized Josephson junctions made of HgTe 3D topological insulator nanowires and demonstrated the fine sensitivity of surface supercurrents to a coaxial magnetic field. The field does not pierce the topologically protected surface states of the wires, yet Fraunhofer-like critical current patterns develop, notably with unusual non-integer flux periodicity in lower-quality samples. Our theoretical analysis shows that such peculiar magneto-transport properties essentially result from a series of nontrivial geometrical constraints. First, contrary to standard Josephson junctions, propagating electronic modes form Andreev bound states uniquely on a non-planar surface enclosing the insulating HgTe bulk. Second, such states may have a purely longitudinal character -- associated with semiclassical paths roughly parallel to the axial direction -- or a partially transverse behavior -- corresponding to paths winding fully or partially around the wire perimeter -- and are differently affected by the quality of the NS contacts along different directions. Third, superconductivity is in general not induced across the entire nanowire perimeter, nor is the magnetic field screened by the Nb fingers, which are thinner than the London penetration depth. As a consequence the partially transverse Andreev bound states pick up an Aharonov-Bohm phase which is not necessarily integer, \ie electrons are not limited to enclosing a fixed number of vortices. This yields the observed peculiar critical current oscillations.
On the other hand, while the existence of surface states is necessary, spin-momentum locking of topological Dirac states appears to play a minor role. We numerically found similar overall features for surface states obeying an effective Schr\"odinger equation.
For further studies it would certainly be desirable and interesting to also measure the current-phase relation.
Due to the Aharonov-Bohm phase picked up by the Andreev bound states, related signatures could be observable in such measurements and serve as an additional check of the theoretical model.
\begin{acknowledgments}
We thank Denis Kochan, Henry Legg and Philipp R\"u\ss mann for useful discussions. This work was funded by the European Research Council
under the European Union's Horizon 2020 research and
innovation program (Grant Agreement No.~787515, ProMotion).
We also acknowledge support by the
Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within Project-ID 314695032 -- SFB 1277 (projects A07, A08, B08)
and through the Elitenetzwerk Bayern Doktorandenkolleg ``Topological Insulators''.
\end{acknowledgments}
The Chinese art of noise
By Isham Cook on December 21, 2012
Turning to the subject of noise, Chinese people's voices must be the loudest on earth, with the Cantonese taking the gold medal. I heard a joke about this: Two Cantonese men in the United States are having a conversation in the street. An American walks by and thinks they are having a fight, so he calls the police. When the police arrive and ask them what they are fighting about, they say, "We're just whispering." — Bo Yang, The Ugly Chinaman
The December pre-dawn Beijing smog is so thick it seems to block out noise, until the one-man band following behind me rends the silence with his hawking, spitting and throat-clearing symphony; he plays all the repeats in the atonal musical score. I find a Starbucks that opens early and sit down with a coffee and George Prochnik's In Pursuit of Silence: Listening for Meaning in a World of Noise, a book I've had on my shelf for some time, waiting for just the right moment of stillness to ease into it such as this early morning.
But the day is not off to a good start. The shop's music is melodious but unrelenting: the brassy clamor of Ella Fitzgerald singing "Jingle Bells," followed by "Rudolf, The Red-Nosed Reindeer," "Frosty the Snowman," and every other awful carol you forgot existed. Give me Julie Andrews singing Christmas carols, or give me Ella singing anything but Christmas carols, but please don't combine the two.
I can thank Starbucks at least for eliminating the more egregious forms of indoor environmental pollution: smoking and Chinese pop. I put on my state-of-the-art noise isolation earphones, which I use just as often without as with music since they are reasonably good at blocking out sound. Today, though, they aren't up to the task and the shrill music penetrates like a dentist's drill.
The problem is not so much the music but its loudness. This leaves me in a dilemma, because the Law of Complaining states that one well-prosecuted complaint may be effective while two are merely annoying. I can't ask them to both change the music and turn it down. Which is less desirable, listening to different loud music or to the same songs at slightly lower volume?
Each request is complicated in its own right. If I ask them to change the music, now on repeat with the same handful of songs already playing a second time, they will claim it's the only music they have. Or it's the Christmas season, so the music is appropriate (though you often hear Christmas carols at all times of the year in China). Or the customers seem to like it, since none of them except me has ever complained before. They will moreover find it odd that I'm a Westerner complaining about Western music. Regardless of how they respond, the implication will be that I am imposing my idiosyncratic musical tastes on everyone else, despite being the sole customer at this early hour.
True, Starbucks franchises may be officially required to play only pre-selected music. If this were not the case, I suspect we would be hearing something worse even than Chinese pop: Adele. What we do hear is jazz. Yet it seems the franchises were given only general guidelines, on the assumption that local managers are savvy to musical taste, when what is needed is a great deal more micromanagement.
You see, music serves a wholly different function here than in the rest of the world. Like "elevator music" in the West, all music is Muzak to the Chinese. They don't listen to it so much as expect it to provide an atmosphere on demand. Putting the same few tracks or a single CD on repeat for the entire day is as logical to them as lining a room with wallpaper of the same pattern. Here music could be defined as a transitional phenomenon between white noise and outright noise: adjustable noise pollution. But when music is so attenuated in its function, why one wonders is it needed at all?
The answer reveals itself when I ask the staff to turn the music down. Terrified at the looming prospect of silence, they simply will not cooperate. I've gone through this routine many times before. Initially they politely acquiesce and one of the baristas dashes into the back room to adjust the volume control. The Chinese are masters of the psychological effect, however. They know that if they merely appear to be adjusting the volume, I will perceive an illusory reduction in sound, when none in fact occurs. They can't play this game on me, and I implore them to go back once more and actually turn the music down.
On the other hand, one positive outcome of the general Chinese indifference to music is that cafés and restaurants often forget to turn it on and the silence passes unnoticed. In any case, although today I succeed in having it dialed down a notch, the effort expended in trying to restore a proper Starbucks ambience from the McDonald's-like mania is itself enough to spoil my morning. So much for Prochnik's book.
It's off to work. There is no peaceful form of transportation to choose from, but some are relatively less tumultuous than others. The worst is the subway and its braying address system. On the escalator into the station a recording to "Please stand firm and hold the handrail" harasses in a continuous loop, reminding us we're all clumsy children. On the train, each station is announced both before arriving and upon arrival in Mandarin and English, along with such redundancies as "Please get ready for your arrival." The American native speaker they hired to record the English has a professional voice but her pronunciation of the Chinese station names is off and this makes for excruciating listening. Then they got rid of her and now have the Chinese announcer speaking the English instead, with equally unfortunate results. The volume, of course, is loud, further amplified to a screech from the damaged speaker cones in some of the cars.
At least it helps to drown out the passengers shouting into their cellphones. It's not simply self-importance, the Chinese brand of machismo. If they really had reason to brag, they wouldn't be stuck among the crowds but in the back of a chauffeur-driven Audi with tinted windows or the private room of an exclusive restaurant. When upstart businessmen shamelessly conduct their affairs within earshot of everyone around them, it's a kind of advertising, a means of pricking up the ears of potential partners or customers nearby and thus expanding their zone of influence in as wide a radius as possible every minute of the day.
Shouting into a cellphone is also a way to compensate for noisy surroundings and the unnerving delayed signal from your interlocutor. But the lower down the social scale someone is, the louder the yelling gets, and not just into cellphones. My theory as to why so many Chinese shout at each other even in quiet places is that it's an ingrained response from centuries of life on the farm, when people communicated by yelling across fields, and they remain forever stuck in yelling mode.
The Japanese make for a pointed contrast, all the more striking coming from a neighboring culture with a shared religio-cultural heritage and similar concepts of social harmony. In Japan, cellphone talking is frowned on in the subway and outlawed on trains except for the closed sections between train cars, while announcements from the PA system come through quietly in softly clipped voices. Riding Japanese trains is like being in a Zen garden on wheels. Not that such regulation is needed; the Japanese naturally converse in hushed voices, often in whispers, as if living out the Tao in daily life. Even Japanese porn actresses don't yell or orgasm loudly, they whimper (the Japanese do have a soft spot for roaring environments like Pachinko parlors and store-front hawkers with megaphones; they just confine and regulate these spaces as tightly as their red light districts).
My company pays for my taxi rides to and from work, but this isn't always a quieter option. It's often an unpleasant experience. Taxi interiors are no longer as smelly as they used to be after the city began fining unhygienic drivers in preparation for the 2008 Olympics. But rudeness is still the norm. The trend these days, if they stop for you at all, is to demand you tell them where you're going before getting in the cab; if you don't happen to be heading where they want to go, they just drive off. Once you make it inside a taxi, you are assaulted by the loud radio. They'll turn it down if you insist, though reluctantly. Then there's the video screen attached to the back of the front seat inches from your face blaring ads, with the volume set on high. To turn it off, after much fiddling you discover that you need to hit the little button in the corner twice, once to access the volume control and again to stop the video.
The comparatively least unpleasant means of transportation is walking or cycling. Here I have to say that much improvement has been made over the past two decades in curbing this aspect of urban noise pollution. I recall the Shanghai of the early '90s as being the worst, when all drivers honked their horn continuously in a deluded attempt to force their way through the lawless traffic, and when that didn't work jumping out and getting in fist fights with the driver in front. Now they only honk intermittently.
The problem is you are always in the way. Today as I ride my bike to work in a dedicated bike lane, a car is honking at me from behind to move aside. Pedestrians on sidewalks are likewise forced to step aside to make way for honking mopeds going at speeds of 30-40kph in both directions so that their drivers, typically messengers and deliverymen, can get to their destination on time and avoid being fired. The only saving grace of these electric bicycles is that they run quietly—so quietly you don't hear them coming and may easily get run over. And to avoid responsibility for causing your accident, they will fly away from you faster than they came at you.
My workplace luckily is a quiet refuge. But I can't focus on work today because I am too tired from lack of sleep. The reason I couldn't sleep was the new residential complex being built across from my high-rise. The construction has been going on for months. It was previously much worse, the hammering and pounding taking place around the clock. It gave rise to interesting philosophical speculation. Did the fellow residents on my side of the building really not mind? Could they be so inured that the most extreme noise never consciously registered and they didn't even hear it? Or did they mind but were powerless to do anything about it?
Then things started quieting down every night between midnight or so and six or seven in the morning. I had heard there was a city ordinance against construction work during sleeping hours and it appeared to have kicked in. I can imagine how very pissed off all relevant parties—the developers, the corrupt officials profiting off the developers, the construction workers, the home buyers eager to move in to the new building, and the overworked police themselves for having to handle frivolous complaints from old ladies—must have been: the nerve of those little people in forcing our hand and delaying our schedule!
They didn't really cooperate, however, but merely shifted less noisy activity to the wee hours: a constant stream of diesel trucks entering and exiting, guard dogs barking, and the clanking of steel girders and other heavy materials being unloaded. So although the incessant pounding has stopped and the decibels dip for a few hours, I am still regularly jolted awake by a medley of jagged sounds. Whereas before it was Steve Reich or Philip Glass at full volume, now it is John Cage or Karlheinz Stockhausen at full volume. And then there is the inevitable crash that occurs at least once every night when they drop something on the ground, probably intentionally, still pissed off at us. The best way to describe this is when T. Coraghessan Boyle in Drop City likened rock music on a bad acid trip to the sound of a marching band crashing down a staircase. I counter the best I can with the combined hum of my air conditioner and fan to create a fog of white noise, together with earplugs and a sleeping pill, but this is only occasionally effective, and it wasn't last night.
Today after work I have a first date with a Chinese female for dinner and a Peking Opera performance in the evening. She is pretty and that will take some of the edge off my weariness. The restaurant turns out to be a good one, which means it's crowded and loud, with the hard floors and furniture only magnifying the noise. The notion of dampening interior acoustics with industrial carpeting hasn't yet taken hold in Chinese restaurants and I doubt it ever will. The reason is clear to anyone who frequents the restaurants here: customers love it. The Serbian novelist Milorad Pavic once compared women without a nice rump to a town without a church; the Chinese would compare the same to a restaurant without noise. The Chinese restaurant fulfills the same function as the British pub (or American college pub)—the noisier, and the longer the evening drags on, the better. Without a bar tradition, Chinese men do their rounds in restaurants. The one-upmanship of competing tables then merges into a general party atmosphere and the whole establishment is swinging like a wedding party.
With repeated exposure, the foreigner learns to go with the flow. I can admit having even grown fond of the boisterous Chinese restaurant to a degree. I guess it's because its particular sonic congeries has a special flavor that can't be found anywhere else. It also helps that my female companion has a great ass. Too bad the din prevents me from making out most of what she says.
We head for the Grand Chinese Opera Theater on Chang'an Avenue, a handsome performance space combining features of the traditional Chinese theater with the larger Western-style auditorium. It's my first visit to this venue, and the experience is horrible. The problem is not the oft-vilified cacophony of Peking Opera. I enjoy the art form in fact and have been going for years. Like the Chinese restaurant, it's an acquired taste but one that rewards the patient. Unfortunately, all the theaters I've been to electronically amplify the singers, which Chinese opera has no more need of than Western opera does. One explanation I've heard is that they're catering to a mostly elderly audience who are hard of hearing. Another is that the older generations, used to the bullhorns and public address systems of decades past screaming revolutionary slogans, are so addicted to constant racket that they gravitate to wherever it can be found.
They are not subtle about it: big loudspeakers adorn the sides of the stage. The sound quality is poor and the volume is jacked up to a painful, ear-splitting degree. I look around to see if I can spot any uncomfortable opera-goers. Nope, I seem to be the only one. I roll up pieces of tissue paper and stuff them in my ears. But it's too late. The noise has seeped into my brain like poison and I'm rapidly developing a headache. I do the unthinkable but I have no choice, which is to leave a terrible impression on my hot date by abandoning her at a concert she invited me to, with profuse apologies. I exit into the relative and short-lived peace of Beijing's nighttime air before descending back to the construction pit for the night.
jackguard8888@gmail.com says:
From what I know about Chinese—they LOVE noise—a silent world is one of the most terrifying things that could ever happen. Also why Chinese generally do not like to be "alone" anywhere—and also the reason why many Chinese from mainland China hate living/working in the U.S…..it is TOO quiet! TOO boring—-NO people…No NOISE….They can't handle it. They equate NOISE with being "exciting" and an ambiance of LIFE HAPPENING….without it it is a dead and meaningless world. It is my belief that they do not have a culture of music—-because music was never a large part of their culture—-not like in Europe wearing powdered wigs and sipping lattes whilst eyeballing a big-breasted patron behind some veil of sorts. Surely Chinese do have music and have always had music but they don't seem to have a culture of enjoying music—-music appreciation—-everything is about MONEY. Music is a tool—-like a glitzy sign to pull in customers. I once wandered through a mall in WangFuJing where a HipHop song was BLARING out from a shop with some GangSta Rap—-"Baby Baby SUCK MY DI*K!!!!" over and over and over again—-while customers totally unaware of what the song meant strolled by without an issue. I've been in restaurants in Beijing that were so LOUD that it was almost impossible to have a conversation at your table——most restaurants in the west a pin could be heard hitting a carpeted floor……
thanks for this piece, Isham.. it helps me feel better about deciding, years ago, not to come teach English in Guangzhou.
Ralph Tomlinson says:
Yes – I know how you feel. I went to KFC in China – I didn't like the pictures hanging on the wall.
<html>
<head>
<title>Data Administrator - maboss</title>
</head>
<body>
<h2>Data Administration</h2>
%for item in tables:
<a href="/maboss/tables/${item}">${item}</a><br/>
%endfor
</body>
</html>
Masatoşi Gündüz İkeda (25 February 1926 – 9 February 2003) was a Japanese-born Turkish mathematician known for his contributions to the field of algebraic number theory.
Early years
Ikeda was born on 25 February 1926 in Tokyo, Japan, to Junzo Ikeda, head of the statistics department of an insurance company, and his wife Yaeko Ikeda. He was the youngest child with a brother and two sisters. He grew up reading mathematics books belonging to his father. During his school years, he bought himself used books about mathematics and the life story of mathematicians. He was very impressed by the French mathematician Évariste Galois (1811–1832).
Academic career
Ikeda graduated from the mathematics department of Osaka University in 1948. He received a Ph.D. degree with his thesis "On Absolutely Segregated Algebras", written in 1953 under the direction of Kenjiro Shoda. He was appointed associate professor in 1955. He pursued scientific research at the University of Hamburg in Germany, under the supervision of Helmut Hasse (1898–1979) between 1957 and 1959. At Hasse's suggestion, he went to Turkey in 1960 and landed at Ege University in İzmir. In 1961, he was appointed a foreign specialist in the Faculty of Science at the same university.
In 1964, Ikeda married Turkish biochemist Emel Ardor, whom he met and followed to Turkey. He was naturalized, converted to Islam and adopted the Turkish name Gündüz. He became associate professor in 1965 and a full professor in 1966. In 1968, with permission of the university, he went to the Middle East Technical University (METU) in Ankara as a visiting professor for one year. However, following the end of his term, he was offered a permanent post as a full professor, which he accepted upon the proposal of the mathematician Cahit Arf, whom he had known since his early years in Turkey.
From time to time, Ikeda was invited as a visiting professor to various universities, such as the University of Hamburg (1966), San Diego State University, California (1971), and Yarmouk University in Irbid, Jordan (1984, 1985–86). In 1976, he carried out research work at Princeton University, and the same year he went to Hacettepe University in Ankara, where he chaired the mathematics department until 1978 before returning to METU. He retired from METU in 1992. His scientific devotion was to Galois theory.
Among the research institutions Ikeda served were TÜBİTAK Marmara Research Center and Turkish National Research Institute of Electronics and Cryptology. Finally, he worked at the Feza Gürsey Basic Sciences Research Center in Istanbul.
Ikeda was a member of the Basic Sciences Board at the Scientific and Technological Research Council of Turkey (TÜBİTAK), and served as the head of the Mathematics Research Unit at METU.
Family life and death
Ikeda died on 9 February 2003, in Ankara. Following a religious funeral service held on 12 February at Kocatepe Mosque, he was laid to rest at the Karşıyaka Cemetery. He was the father of two sons, both born in Turkey.
Recognition
In 1979, Ikeda was honored with the TÜBİTAK Science Award.
The Mathematics Foundation of Turkey established the "Masatoshi Gündüz İkeda Research Award" in Ikeda's memory.
See also
Anabelian geometry
References
External links
Personal web page
1926 births
2003 deaths
Japanese Muslims
Turkish Muslims
Osaka University alumni
20th-century Japanese mathematicians
Algebraists
University of Hamburg alumni
Naturalized citizens of Turkey
Japanese emigrants to Turkey
Turkish mathematicians
Academic staff of Ege University
Academic staff of Middle East Technical University
Academic staff of Hacettepe University
Recipients of TÜBİTAK Science Award
Burials at Karşıyaka Cemetery, Ankara
People from Tokyo
\section{Introduction}\label{sec:int}
An operator $\mathfrak{L}$ is said to be
{\it a L{\'e}vy-type operator} if
it acts on
every smooth compactly supported function~$f$
according to the following
explicit formula
\begin{align*}
\mathfrak{L}f(x) &= c(x)f(x)+
b(x)\cdot \nabla f(x)+\sum_{i,j=1}^d a_{ij}(x) \frac{\partial^2 f}{\partial x_i \partial x_j}(x) \\
&\qquad +\int_{\RR^d}\Big(f(x+z)-f(x)- {\bf 1}_{|z|<1} \left<z,\nabla f(x)\right>\! \Big) N(x,dz)\,,
\end{align*}
where $c(x)$, $b(x)$, $a_{ij}(x)$ and $N(x,dz)$
are called coefficients and satisfy certain natural conditions.
If there is a model of motion behind such an operator, the coefficients describe the infinitesimal behaviour of the motion at point $x$.
For instance, the vector $b(x)$ indicates the direction and magnitude of a drift, while
$N(x,B)$ gives the intensity of jumps from $x\in\RR^d$ into the set $x+B\subset \RR^d$.
An interesting phenomenon occurs when
the measure $N(x,dz)$ is non-symmetric in $dz$.
Namely, a non-zero {\it internal drift} coefficient
$$
\int_{\RR^d} z \left({\bf 1}_{|z|<r}-{\bf 1}_{|z|<1}\right) N(x,dz)
$$
may be induced by non-symmetric jumps. Indeed
(for simplicity, let $c\equiv 0$, $b\equiv 0$, $a_{ij} \equiv 0$)
\begin{align*}
\mathfrak{L}f(x)=
\int_{\RR^d}\Big(f(x+z)-f(x)- {\bf 1}_{|z|<r} \left<z,\nabla f(x)\right>\! \Big) N(x,dz)\\
+\left( \int_{\RR^d} z \left({\bf 1}_{|z|<r}-{\bf 1}_{|z|<1}\right) N(x,dz)\right) \cdot \nabla f(x)\,.
\end{align*}
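The displayed rewriting amounts to the elementary splitting of the compensator at radius $r$, spelled out here for convenience:

```latex
-{\bf 1}_{|z|<1}\left<z,\nabla f(x)\right>
  = -{\bf 1}_{|z|<r}\left<z,\nabla f(x)\right>
    + \left({\bf 1}_{|z|<r}-{\bf 1}_{|z|<1}\right)\left<z,\nabla f(x)\right>,
```

and integrating the second term against $N(x,dz)$ yields precisely the internal drift coefficient multiplied by $\nabla f(x)$.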
The above can be interpreted
as a decomposition into
{\it leading non-local part}
and
{\it internal drift part}.
Our aim is to understand the role of the internal drift since, as we observe in examples, it may be more difficult to handle than the {\it external drift} $b(x)$. Models based on
a special case of constant coefficients, i.e., when
$c(x)= c$, $b(x)= b$, $a_{ij}(x)= a_{ij}$ and $N(x,dz)= N(dz)$ are independent of $x$,
known as L{\'e}vy processes or L{\'e}vy flights,
already proved to be useful in many applications
in physics, finance or statistics \cite{MR1833689}, \cite{WOS:000266711700004}.
It is therefore desirable to provide a decent structure behind more general operators with $x$-dependent coefficients.
Due to the
Courr{\`e}ge-Waldenfels theorem,
infinitesimal generators of Feller semigroups with sufficiently rich domain are
L{\'e}vy-type operators;
see \cite[Theorem~2.21]{MR3156646}, \cite[Theorem~4.5.21]{MR1873235}.
On the other hand, it is a non-trivial task to associate a semigroup with a given L{\'e}vy-type operator
with non-constant coefficients.
In the present paper we study an operator
\begin{align}\label{def:operator}
{\mathcal L} f(x)=b(x)\cdot \nabla f(x)+ \int_{\RR^d}\Big(f(x+z)-f(x)- {\bf 1}_{|z|<1} \left<z,\nabla f(x)\right>\! \Big) \,\kappa(x,z)J(z)dz\,.
\end{align}
Under the assumptions specified in Section~\ref{sec:set}
we prove the uniqueness and the existence of
a weak fundamental solution to the equation $\partial_t={\mathcal L}$.
We further analyse the semigroup associated with that solution
and discuss properties of its generator, which we identify as the closure of the operator \eqref{def:operator} defined on the set of smooth compactly supported functions. Pointwise estimates of the fundamental solution are established under additional constraints.
Our main focus is the case when the mapping
$z\mapsto \kappa(x,z)J(z)$
is non-symmetric.
A surprising fact is that, despite the extensive study of non-local operators
and the rapidly growing literature,
a fundamental example, which we are about to present, has not yet been covered.
Namely, let $b=0$ and $J(z)=|z|^{-d-1}$ in \eqref{def:operator}, i.e.,
\begin{align}\label{def:1-stable}
{\mathcal L} f(x)= \int_{\RR^d}\Big(f(x+z)-f(x)- {\bf 1}_{|z|<1} \left<z,\nabla f(x)\right>\! \Big) \,\frac{\kappa(x,z)}{|z|^{d+1}} dz\,,
\end{align}
and suppose that $c_\kappa^{-1} \leq \kappa(x,z)\leq c_\kappa$ and
$|\kappa(x,z)-\kappa(y,z)|\leq c_\kappa |x-y|^{\epsilon_{\kappa}}$
for some
$c_\kappa>0$,
$\epsilon_{\kappa} \in (0,1]$ and all $x,y,z\in\RR^d$.
The
results
available
in the literature, such as
\cite{KKS}, \cite{S}, \cite{PJ}, \cite{MR3806688},
require further assumptions on
the coefficient
$\kappa$ to treat~\eqref{def:1-stable}, which exclude natural
examples. Our results allow us to remove those restrictions, see Example~\ref{ex:1},
and to cover other interesting operators, see Section~\ref{sec:ex}.
To study the operator \eqref{def:operator}
we employ {\it the parametrix method}, which primarily provides a candidate for the fundamental solution.
We use this paper as an opportunity to deliver a general, transparent exposition of this method in a functional-analytic framework. First, we single out natural hypotheses
that lead to the construction and initial features of the candidate. We also point out the key hypotheses that validate several principal properties, like the semigroup property, non-negativity, etc., based on a suitable approximation of that candidate.
Once the functional analytic approach is outlined and the hypotheses are clarified, we
discuss in general integral kernels associated with the constructed operators.
We comment on the development of the parametrix method.
It was proposed by
Levi~\cite{zbMATH02644101}, Hadamard \cite{zbMATH02629781} and Gevrey \cite{zbMATH02629782}
for differential operators. It was later extended by
Feller \cite{zbMATH03022319} (even with perturbations by bounded non-local terms) and
Dressel \cite{MR0003340}.
Here we refer the reader to two classical monographs
by Friedman \cite{MR0181836} and Eidelman \cite{MR0252806}.
Non-local operators were later treated by
Drin'~\cite{MR0492880}, Drin' \& Eidelman \cite{MR616459}, Kochubei \cite{MR972089} and Kolokoltsov \cite{MR1744782}, see also the monograph by Eidelman, Ivasyshin \& Kochubei \cite{MR2093219}.
More recently, the method was used for non-local operators by
Xie, Zhang \cite{MR3294616},
Knopova, Kulik \cite{MR3652202},
Bogdan, Knopova, Sztonyk~\cite{MR4077549},
K{\"u}hn \cite{MR3912204},
Chen, Zhang \cite{MR3500272, MR3806688}, Kim, Song, Vondra\v{c}ek \cite{MR3817130}, Grzywny, Szczypkowski \cite{MR3996792}, Szczypkowski~\cite{S}.
It was also employed in the analysis of
SDEs by Kohatsu-Higa, Li \cite{MR3544166},
Knopova, Kulik \cite{MR3765882}, Kulik \cite{MR3907007},
Kulczycki, Ryznar \cite{MR4167204},
Kulczycki, Ryznar, Sztonyk \cite{MR4261305}, Kulczycki, Kulik, Ryznar
\cite{KKR-2020},
Menozzi, Zhang \cite{MZ}.
We also refer the reader to
more recent monographs by
Knopova, Kochubei \& Kulik \cite{MR3965398},
Chen, Zhang \cite{MR3824209}
and
K\"{u}hn \cite{MR3701414}.
A different Hilbert-space approach to the parametrix method was developed by Jacob \cite{MR1254818, MR1917230}, Hoh \cite{MR1659620}, B{\"o}ttcher \cite{MR2163294, MR2456894}, Tsutsumi \cite{MR0367492, MR0499861}, Kumano-go \cite{MR666870}.
For a more complete list of references,
we refer to Knopova, Kulik \& Schilling \cite{KKS}
and Bogdan, Knopova, Sztonyk \cite{MR4077549}.
The rest of the paper is organized as follows.
In Section~\ref{sec:set} we specify our setting and assumptions.
Section~\ref{sec:notation} contains the notation used throughout the paper.
In Section~\ref{sec:main} we formulate the main results.
Section~\ref{sec:gen} is devoted to the general discussion of the parametrix method.
In Section~\ref{sec:appl} that method is applied
to investigate the operator \eqref{def:operator} under our assumptions,
and we give proofs of the main results.
In Section~\ref{sec:a-proofs} we prove all statements of Section~\ref{sec:f_a_a}.
Technical results are shifted to
Sections~\ref{sec:sac} and~\ref{sec:a-frozen}.
\section{Setting and assumptions}\label{sec:set}
In this section we introduce
two sets of assumptions $(\mathbf{A})$ and $(\mathbf{A\!^\ast})$ under which the main results in Section~\ref{sec:main} are stated.
We begin by defining objects and formulating conditions.
All the functions considered in the paper are assumed to be Borel measurable on their natural domains.
Let $d\in{\mathbb N}$ and
$\nu:[0,\infty)\to[0,\infty]$ be a non-increasing function ($\nu \not\equiv 0$) satisfying
\begin{align*}
\int_{\RR^d} (1\land |x|^2)\, \nu(|x|)dx<\infty\,.
\end{align*}
For $r>0$ we let
$$
h(r):= \int_{\RR^d} \left(1\land \frac{|x|^2}{r^2}\right) \nu(|x|)dx\,,\qquad \quad
K(r):=r^{-2} \int_{|x|<r}|x|^2 \,\nu(|x|)dx\,.
$$
We say that {\it the weak scaling condition} at the origin holds if there are $\alpha_h\in (0,2]$ and $C_h \in [1,\infty)$ such that
\begin{align}\label{set:h-scaling}
h(r)\leq C_h \,\lambda^{\alpha_h} h(\lambda r)\,,\qquad \lambda,\, r\in (0, 1]\, .
\end{align}
We refer the reader to \cite[Lemma~2.3]{MR4140542} for other equivalent descriptions of \eqref{set:h-scaling},
which involve relations between $h$ and $K$.
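As a numerical illustration of \eqref{set:h-scaling} (our own sketch, not part of the text): for the $1$-stable profile $\nu(s)=s^{-2}$ in dimension $d=1$, a direct computation gives $h(r)=4/r$ in closed form, so the weak scaling condition holds with $\alpha_h=1$ and $C_h=1$. A quadrature check, with arbitrary grid parameters:

```python
import math

def nu(s):                       # profile nu(s) = s^(-2), d = 1 (the 1-stable case)
    return s ** -2.0

def h(r, cutoff=1e4, n=100_000):
    # h(r) = int_R (1 ^ x^2/r^2) nu(|x|) dx, midpoint rule on a log grid;
    # the integrand is even in x and decays like x^(-2) at infinity.
    lo, hi = math.log(1e-6 * r), math.log(cutoff)
    step = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = math.exp(lo + (i + 0.5) * step)
        total += min(1.0, (x / r) ** 2) * nu(x) * x * step   # dx = x d(log x)
    return 2.0 * total

# closed form h(r) = 4/r, hence h(r) <= C_h * lam^1 * h(lam * r) with C_h = 1
for r in (1.0, 0.5, 0.1):
    assert abs(h(r) * r - 4.0) < 1e-2
for lam in (0.2, 0.5, 1.0):
    assert h(0.7) <= 1.01 * lam * h(lam * 0.7)
```
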
We consider $J: \RR^d \to [0, \infty]$
such that for some constant
$c_{\!\scriptscriptstyle J}\in [1,\infty)$ and all $x\in\RR^d$,
\begin{align}\label{set:J}
c_{\!\scriptscriptstyle J}^{-1} \nu(|x|) \leq J(x) \leq c_{\!\scriptscriptstyle J}\, \nu(|x|)\,.
\end{align}
Next, suppose that $\kappa \colon \RR^d\times\RR^d \to (0,\infty)$ is such that for a constant $c_\kappa\in[1,\infty)$ and all $x,z\in\RR^d$,
\begin{align}\label{set:k-bound}
c_\kappa^{-1} \leq \kappa(x,z)\leq c_\kappa\,,
\end{align}
and for some $\epsilon_{\kappa} \in (0,1]$ and all $x,y,z\in\RR^d$,
\begin{align}\label{set:k-holder}
|\kappa(x,z)-\kappa(y,z)|\leq c_\kappa \,|x-y|^{\epsilon_{\kappa}}\,.
\end{align}
Let $b\colon \RR^d \to \RR^d$. For $x\in\RR^d$ and $r>0$ we define
\begin{align*}
b_r^x:=b(x)+\int_{\RR^d} z \left({\bf 1}_{|z|<r}-{\bf 1}_{|z|<1}\right) \kappa(x,z)J(z)dz\,.
\end{align*}
We consider the existence of
a constant $\sigma\in (0,1]$
such that for all $x\in\RR^d$,
\begin{align}\label{set:indrf-cancellation-scale}
|b_r^x| \leq c_\kappa r^{\sigma} h(r)\,, \qquad r\in (0,1]\,.
\end{align}
Furthermore, suppose that there are $N\in{\mathbb N}$,
$\indhei{j}\in (0,1]$ and $\indcsi{j} \in (0,1]$ such that
\begin{align}
\label{set:indrf-holder-s-B}
|b_r^x-b_r^y|\leq c_\kappa
\sum_{j=1}^{N} |x-y|^{\indhei{j}} \, r^{\indcsi{j}} h(r)\,, \qquad |x-y|\leq 1,\, r\in (0,1]\,.
\end{align}
Alternatively to \eqref{set:indrf-holder-s-B} we consider
\begin{align}
\tag{\ref*{set:indrf-holder-s-B}${}^\ast$}
\label{set:indrf-holder-s-A}
|b_r^x-b_r^y|\leq c_\kappa
\sum_{j=1}^{N} (|x-y|^{\indhei{j}}\land 1) \, r^{\indcsi{j}} h(r)\,, \qquad x,y\in\RR^d,\,r\in (0,1]\,.
\end{align}
We specify our framework. The {\it dimension} $d$ and {\it the profile function} $\nu$ are fixed.
We use two sets of assumptions.
\vspace{\baselineskip}
\begin{itemize}
\item[$(\mathbf{A})\colon$] \eqref{set:h-scaling}--\eqref{set:k-holder} hold;
\eqref{set:indrf-cancellation-scale}
and \eqref{set:indrf-holder-s-B} hold with
\begin{align*}
\underset{j=1,\ldots,N}{\forall}\quad
\alpha_h\land (\sigma \indhei{j})+ \indcsi{j} -1>0
\qquad \mbox{and}\qquad \alpha_h+\sigma-1>0\,.
\end{align*}
\vspace{\baselineskip}
\item[$(\mathbf{A\!^\ast})\colon$] \eqref{set:h-scaling}--\eqref{set:k-holder} hold;
\eqref{set:indrf-cancellation-scale}
and \eqref{set:indrf-holder-s-A} hold with
\begin{align*}
\underset{j=1,\ldots,N}{\forall}\quad
\alpha_h\land (\sigma \indhei{j})+ \indcsi{j} -1>0\,.
\end{align*}
\end{itemize}
\vspace{\baselineskip}
We end this section with a few comments.
\begin{remark}\label{rem:exdrf}
If
\eqref{set:indrf-cancellation-scale} holds, then
for some $c_{\exdrf}>0$ and all $x\in\RR^d$,
\begin{align}\label{set:exdrf-bound}
|b(x)|\leq c_{\exdrf} \,.
\end{align}
If \eqref{set:indrf-holder-s-B}
(or even \eqref{set:indrf-holder-s-A}) holds, then
for some $c_{\exdrf}>0$, $\epsilon_b \in (0,1]$ and all $|x-y|\leq 1$,
\begin{align}\label{set:exdrf-holder}
|b(x)-b(y)|\leq c_{\exdrf} |x-y|^{\epsilon_b}\,.
\end{align}
Conversely,
assuming \eqref{set:h-scaling},
if \eqref{set:exdrf-bound} and \eqref{set:exdrf-holder} hold, then there is $c_\kappa>0$ such that for all $x,y\in\RR^d$, $r\in (0,1]$,
$$
|b(x)| \leq c_\kappa r^{\alpha_h \land 1}h(r)\,, \qquad \qquad
|b(x)-b(y)| \leq c_\kappa (|x-y|^{\epsilon_b} \land 1)\, r^{\alpha_h \land 1}h(r)\,.
$$
\end{remark}
\vspace{\baselineskip}
For the sake of discussion we consider the condition: there are $\beta_h\in (0,2]$ and $c_h\in (0,1]$ such that
\begin{equation}\label{eq:intro:wusc}
h(r)\geq c_h\,\lambda^{\beta_h}\,h(\lambda r)\, ,\qquad \lambda, r\in(0,1]\, .\\
\end{equation}
We relate our assumptions to those in \cite{MR3996792} and \cite{S}, but without addressing the exact results.
\begin{remark}\label{rem:other-as}
Assume that \eqref{set:h-scaling}--\eqref{set:k-holder} hold. In each case below,
$(\mathbf{A\!^\ast})$ is satisfied.
\begin{enumerate}
\item[(i)] If $b=0$ and $\kappa(x,z)J(z)=\kappa(x,-z)J(-z)$, then $\sigma=N=\indhei{1}=\indcsi{1}=1$. This is exactly the case {\rm (P3)} in \cite{MR3996792}.
\item[(ii)] If \eqref{set:exdrf-bound}--\eqref{set:exdrf-holder} hold and $\alpha_h>1$, then $\sigma=N=1$ and $\indhei{1}=\epsilon_{\kappa} \land \epsilon_b$, $\indcsi{1}=1$, see \cite[Fact~1.1]{S}.
In particular, it covers the case {\rm (P1)} in \cite{MR3996792}.
\item[(iii)] If \eqref{eq:intro:wusc} holds, $0<\alpha_h\leq \beta_h <1$ and
$b(x)=\int_{|z|<1} z\kappa(x,z) J(z)dz$, then
$\sigma=N=1$ and $\indhei{1}=\epsilon_{\kappa}$, $\indcsi{1}=1$. This is exactly the case {\rm (P2)} in \cite{MR3996792}. Note that here the operator \eqref{def:operator}
equals
\begin{align}\label{eq:op-2}
{\mathcal L} f(x)=\int_{\RR^d}(f(x+z)-f(x)) \,\kappa(x,z)J(z)dz\,.
\end{align}
\item[(iv)] If {\rm (Q0)} from \cite{S} holds, then $\sigma=N=\indhei{1}=\indcsi{1}=1$. Note that
in \cite{S} the results were obtained under stronger conditions {\rm (Q1)} and {\rm (Q2)}.
\item[(v)] If $b=0$ and for all $x\in\RR^d$ and $r\in (0,1]$,
\begin{align*}
\int_{r\leq |z|<1} z \kappa(x,z)J(z)dz =0\,,
\end{align*}
then $\sigma=N=\indhei{1}=\indcsi{1}=1$.
The above condition was used in \cite{PJ} to analyse the operator~\eqref{def:1-stable}.
The condition was weakened by proposing
{\rm (Q1)} in \cite{S}.
\end{enumerate}
\end{remark}
\begin{remark}\label{rem:close-to-1}
Clearly, if \eqref{set:h-scaling} holds with $\alpha_h\geq 1$ and some constant $C_h>0$, then it holds for every $\alpha_h<1$ with the same constant $C_h$. Now, assume that \eqref{set:h-scaling}--\eqref{set:k-holder} and \eqref{set:exdrf-bound}--\eqref{set:exdrf-holder} hold, then
\begin{enumerate}
\item[(i)] if $\alpha_h\in(0,1)$, then
\eqref{set:indrf-cancellation-scale}
and \eqref{set:indrf-holder-s-A} hold with $\sigma=\alpha_h$
and $N=1$, $\indhei{1}=\epsilon_{\kappa} \land \epsilon_b$, $\indcsi{1}=\alpha_h$.
\item[(ii)] if $\alpha_h\in(0,1)$ can be chosen arbitrarily close to $1$, then $(\mathbf{A\!^\ast})$ is satisfied.
\end{enumerate}
For justification see
Remark~\ref{rem:exdrf} and
Lemma~\ref{lem:drf}.
\end{remark}
\section{Notation}\label{sec:notation}
For the reader's convenience we collect the notation used in the paper.
By $c(d,\ldots)$ we denote a
positive number that depends only on the listed parameters $d,\ldots$.
We write
$\varpi_0$ to represent
$c_{\!\scriptscriptstyle J},c_\kappa,\alpha_h,C_h,h$.
By $\bbbeta\in \mathbb{N}_0^d$
we denote a multi-index.
We use ``$:=$'' to denote a definition.
As usual, $a\land b:=\min\{a,b\}$ and $a\vee b := \max\{a,b\}$.
We write $x\cdot y$ or $\left<x,y\right>$ for the standard scalar product of $x,y\in\RR^d$.
We put
$$\indhei{0}:=\epsilon_{\kappa}\,, \qquad \qquad \indcsi{0}:=1\,.$$
The functions
$h(r)$, $K(r)$ and $b_r^x$
were introduced in Section~\ref{sec:set}.
For $t>0$, $x\in\RR^d$ we define
{\it the bound function},
\begin{align*}
\Upsilon_t(x):=\left( [h^{-1}(1/t)]^{-d}\land \frac{tK(|x|)}{|x|^{d}} \right).
\end{align*}
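For orientation (our own computation, not taken from the text): in the model case \eqref{def:1-stable} with $d=1$ and $\nu(s)=s^{-2}$ one has $h(r)=4/r$ and $K(r)=2/r$, hence $h^{-1}(1/t)=4t$ and

```latex
\Upsilon_t(x)
  = [h^{-1}(1/t)]^{-1}\land \frac{tK(|x|)}{|x|}
  = \frac{1}{4t}\land\frac{2t}{|x|^{2}}
  \ \asymp\ \frac{t}{t^{2}+|x|^{2}}\,,
```

which is comparable to the Cauchy ($1$-stable) transition density on $\RR$.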
Note that in Section~\ref{sec:gen} the mapping $r_t$ is an arbitrary mapping satisfying
condition ({\bf R}). However, in
Sections~\ref{sec:main} and~\ref{sec:appl}
we exclusively take
$r_t=h^{-1}(1/t)$, see \eqref{def:r_t}.
With this choice of $r_t$ we introduce the following auxiliary functions
\begin{align*}
\errx{\gamma}{\beta}(t,x,y) &:= r_t^{\gamma}\left(|y-x|^{\beta}\land 1\right)t^{-1}\Upsilon_{t}(y-x-tb_{r_t}^{x})\,, \\
\erry{\gamma}{\beta}(t,x,y) &:= r_t^{\gamma}\left(|y-x|^{\beta}\land 1\right)t^{-1}\Upsilon_{t}(y-x-tb_{r_t}^{y})\,,
\end{align*}
and
\begin{align*}
\err{\gamma}{\beta}(t,x):= r_t^{\gamma} \left(|x|^{\beta}\land 1\right) t^{-1} \Upsilon_t(x)\,.
\end{align*}
We put
$\mathfrak{t_0}:=1/h(1)$.
We also use the following abbreviation
\begin{align*}
\delta_{r}^{\mathfrak{K}_w} (t,x,y;z)&:=p^{\mathfrak{K}_w}(t,x+z,y)-p^{\mathfrak{K}_w}(t,x,y)-{\bf 1}_{|z|<r}\left< z,\nabla_x p^{\mathfrak{K}_w}(t,x,y)\right>.
\end{align*}
\vspace{\baselineskip}
The supremum norm is denoted by
$$\|f\|_{\infty}=\sup_{x\in {\mathbb R}^n}|f(x)|\,.$$
We use the following function spaces:
by $B_b({\mathbb R}^n)$ we denote bounded Borel measurable functions,
$C(E)$ are continuous functions on $E\subseteq {\mathbb R}^n$.
Furthermore, $C_b(E)$, $C_0(E)$, $C_c(E)$ are subsets of $C(E)$
of functions that are bounded,
vanish at infinity,
and have compact support, respectively.
We write
$f\in C^k({\mathbb R}^n)$ if
the function and all its partial derivatives up to (including if finite) order $k\in {\mathbb N}\cup \{\infty\}$ are elements of $C({\mathbb R}^n)$;
we similarly understand $C_b^k({\mathbb R}^n)$, $C_0^k({\mathbb R}^n)$, $C_c^k({\mathbb R}^n)$.
In particular,
$C^{\infty}({\mathbb R}^n)$ are smooth functions, while
$C_c^{\infty}({\mathbb R}^n)$ are smooth functions with compact support.
We refer to ${\mathbb R}\times\RR^d$ as the {\it time-space}.
Each of the sets $(a,b)$, $[a,b)$, $(a,b]$, $[a,b]$, where $-\infty<a<b<\infty$, is called a {\it bounded interval}.
For a bounded interval $I$ we denote by $\mathcal{B}(I\times\RR^d)$ the $\sigma$-field of Borel subsets of $I\times\RR^d$.
We say that a Borel set is of full measure if its complement is of Lebesgue measure zero (the underlying space shall be clear from the context).
For a Borel set $\mathcal{S}\subseteq (0,1]\times\RR^d\times\RR^d$ and $t\in (0,1]$, $x\in\RR^d$ we define its section $\mathcal{S}^{t,x}:=\{y\in\RR^d\colon (t,x,y)\in \mathcal{S}\}$, which is a Borel set, see \cite[Lemma~1.28]{MR4226142}.
\section{Main results and examples}\label{sec:main}
To formulate our results we use the notion of a kernel.
\begin{definition}
We call $\mu$ {\it a kernel} on the time-space
if for any $(s,x)\in {\mathbb R}\times \RR^d$ and any bounded interval $I\subset {\mathbb R}$
the mapping
$$E \longmapsto \mu(s,x,E)\,,$$
is a (finite) signed measure on $\mathcal{B}(I\times\RR^d)$.
\end{definition}
In the first result we address the question of uniqueness.
\begin{theorem}\label{thm:uniq}
Assume $(\mathbf{A})$ or $(\mathbf{A\!^\ast})$.
Let $(s,x)\in {\mathbb R}\times\RR^d$ be fixed.
Suppose that $\mu_1$ and $\mu_2$ are
kernels on the time-space
such that
for every $\phi \in C_c^{\infty}({\mathbb R}\times\RR^d)$
\begin{align}\label{eq:uniq}
\iint\limits_{(s,\infty)\times \RR^d} \mu_j(s,x,dudz)\Big[\partial_u\, \phi(u,z) + {\mathcal L}_z \phi(u,z) \Big] = - \phi(s,x)\,,
\qquad j=1,2\,.
\end{align}
Then for every
bounded interval $I\subset (s,\infty)$ and every
set $E\in\mathcal{B}(I \times \RR^d)$
we have
$$
\mu_1(s,x,E)=\mu_2(s,x,E)\,.
$$
\end{theorem}
The second result concerns the question
of existence.
\begin{theorem}\label{thm:exist}
Assume $(\mathbf{A})$ or $(\mathbf{A\!^\ast})$.
There exists $\mu$ such that
for all $(s,x)\in{\mathbb R}\times\RR^d$ and $\phi\in C_c^{\infty}({\mathbb R}\times\RR^d)$,
\begin{align*}
\iint\limits_{(s,\infty)\times\RR^d} \mu(s,x,dudz)
\Big[\partial_u\, \phi(u,z) + {\mathcal L}_z \phi(u,z) \Big] = - \phi(s,x)\,.
\end{align*}
There is a Borel function $p\colon (0,\infty) \times\RR^d\times\RR^d\to {\mathbb R}$ such that
for every
bounded interval $I\subset (s,\infty)$ and every
set $E\in\mathcal{B}(I \times \RR^d)$
we have
$$
\mu(s,x,E)=\iint\limits_{E} p(u-s,x,z)\,dzdu\,.
$$
\end{theorem}
\noindent
In fact, in Section~\ref{ssec:a-hc} for every $t>0$ and $x\in\RR^d$ we construct the function $p(t,x,y)$ whose existence is announced in Theorem~\ref{thm:exist}, and from now on $p(t,x,y)$ always refers to that concrete function.
Further properties of $p(t,x,y)$ can be found in
Section~\ref{ssec:ker}
and
Section~\ref{sec:realization}.
Let
$$P_t f(x)=\int_{\RR^d} p(t,x,y)f(y)\,dy\,.$$
\begin{theorem}\label{thm:sem_prop}
Assume $(\mathbf{A})$ or $(\mathbf{A\!^\ast})$.
The family $(P_t)_{t>0}$
is a strongly continuous positive contraction semigroup on $(C_0(\RR^d),\|\cdot\|_{\infty})$.
Let $(\mathcal{A},D(\mathcal{A}))$ be its infinitesimal generator.
Then
\begin{itemize}
\item[(i)] $P_t 1 =1$ for all $t>0$,
\item[(ii)] $P_t\colon B_b(\RR^d)\to C_b(\RR^d)$ for all $t>0$,
\item[(iii)] $C_0^2(\RR^d)\subseteq D(\mathcal{A})$
and $\mathcal{A}={\mathcal L}$ on $C_0^2(\RR^d)$,
\item[(iv)] $(\mathcal{A},D(\mathcal{A}))$ is the closure of $({\mathcal L},C_c^{\infty}(\RR^d))$,
\item[(v)] $(P_t)_{t>0}$ is differentiable.
\end{itemize}
\end{theorem}
\noindent
We emphasize that in
Proposition~\ref{prop:diff_closure}
we calculate $\mathcal{A}P_t f$ and we even give several representations, two of which are expressed by means of the operator ${\mathcal L}$, see Theorem~\ref{thm:der_p-t},
Corollary~\ref{cor:der_p-tb} and, in particular, formula \eqref{eq:der3}.
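The semigroup property underlying Theorem~\ref{thm:sem_prop} can be sanity-checked numerically in the constant-coefficient model case of \eqref{def:1-stable} with $d=1$ and $\kappa$ constant, where $p(t,x,y)$ reduces to the explicit Cauchy kernel (our own test case; the grid parameters are arbitrary):

```python
import math

def cauchy(t, x):
    # transition density of the 1-stable (Cauchy) process on R:
    # the constant-coefficient case of the operator under study
    return t / (math.pi * (t * t + x * x))

def chapman_kolmogorov(t, s, x, half_width=400.0, n=200_000):
    # int_R p(t, x - y) p(s, y) dy by a midpoint rule; should equal p(t + s, x)
    step = 2.0 * half_width / n
    total = 0.0
    for i in range(n):
        y = -half_width + (i + 0.5) * step
        total += cauchy(t, x - y) * cauchy(s, y) * step
    return total

for x in (0.0, 1.0, 3.0):
    assert abs(chapman_kolmogorov(1.0, 2.0, x) - cauchy(3.0, x)) < 1e-3
```
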
Furthermore, under stronger assumptions we give pointwise estimates.
\begin{theorem}\label{thm:pointwise}
Assume $(\mathbf{A\!^\ast})$ and suppose that
\begin{align}\label{cond:pointwise}
\eta:=2 \min\left\{ \frac{\alpha_h}{2}\land (\sigma \indhei{j})+\indcsi{j}-1 \colon\quad j=0,\ldots,N\right\} > 0\,.
\end{align}
There exists $c>0$ such that for all $t\in (0,1/h(1)]$, $x,y\in\RR^d$ we have
$$
p(t,x,y)\leq
ct \left(\erry{0}{0}+\sum_{j=0}^{N}\erry{\indcsi{j}-1}{\indhei{j}}\right)(t,x,y)\,.
$$
The constant $c$ can be chosen to depend only on
$d,c_{\!\scriptscriptstyle J},c_\kappa,\alpha_h,C_h,h,N,\eta, \varepsilon_0$, $\min\limits_{j=0,\ldots,N} (\indcsi{j})$.
\end{theorem}
\noindent
The value of $\varepsilon_0$ is chosen in Section~\ref{ssec:a-zoet}.
Actually, in Proposition~\ref{prop:remainder-bound} we estimate the difference between $p(t,x,y)$ and a certain $p_0(t,x,y)$ selected in Section~\ref{ssec:a-zoet}.
\begin{corollary}
Assume $(\mathbf{A\!^\ast})$ and suppose that
$\indcsi{j}=1$ for all $j=1,\ldots, N$.
There exists $c>0$ such that for all $t\in (0,1/h(1)]$, $x,y\in\RR^d$ we have
$$
p(t,x,y)\leq c \Upsilon_t(y-x-tb^y_{r_t})\,.
$$
\end{corollary}
\noindent
The term $b^y_{r_t}$ in the above estimates may be replaced by
$b^x_{r_t}$, see Corollary~\ref{cor-shifts}.
\subsection{Examples}\label{sec:ex}
We focus on the non-local part of \eqref{def:operator}, therefore in most of the examples $b \equiv 0$.
We emphasize that we can treat the operator \eqref{def:1-stable}
without any restrictions beyond those mentioned in Section~\ref{sec:int}, very much in the spirit of the first example.
\begin{example}\label{ex:1}
Let $d=1$,
$J(z)=\nu(|z|)=|z|^{-2}$. Then $h(r)= r^{-1} h(1)$. Assume that
$b \equiv 0$ and $\kappa(x,z)=a(x)k(z)$, where
$$
a(x)=1+\sqrt{x}\, {\bf 1}_{(0,1)}(x) +{\bf 1}_{[1,\infty)}(x) \,,\qquad
k(z)=\frac12{\bf 1}_{(-\infty,0)}(z)+\frac32{\bf 1}_{[0,\infty)}(z)\,.
$$
Then $(\mathbf{A\!^\ast})$ is satisfied.
Indeed,
see Remark~\ref{rem:close-to-1}, or note that
for every $\sigma,s\in(0,1)$ there is $c>0$ such that for all $r\in (0,1]$,
$$
\left|b_r^x \right|
= |a(x)| \log(1/r) \leq c r^{\sigma} h(r)\,,
\qquad\quad
\left| b_r^x - b_r^y \right|
= |a(x)-a(y)|\log(1/r) \leq c (|x-y|^{1/2}\land 1)\, r^{s} h(r)\,.
$$
Thus, it suffices to ensure that $(\sigma/2)+ s -1>0$.
\end{example}
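A minimal numerical check of the logarithmic internal drift above (our own sketch; the grid size is arbitrary): with $k$ as in the example and $J(z)=|z|^{-2}$, folding the two half-lines gives $b_r^x=-a(x)\int_r^1 \big(k(z)-k(-z)\big)z^{-1}\,dz=-a(x)\log(1/r)$.

```python
import math

def k(z):                        # the jump-asymmetry factor of Example 1
    return 0.5 if z < 0 else 1.5

def internal_drift(r, n=100_000):
    # b_r^x / a(x) = -int_{r <= |z| < 1} z k(z) |z|^(-2) dz, with both
    # half-lines folded onto [r, 1); midpoint rule
    total, step = 0.0, (1.0 - r) / n
    for i in range(n):
        z = r + (i + 0.5) * step
        total += (k(z) - k(-z)) / z * step
    return -total

# matches the closed form -log(1/r)
for r in (0.5, 0.1, 0.01):
    assert abs(internal_drift(r) + math.log(1.0 / r)) < 1e-4
```
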
Here is a simple general observation.
\begin{fact}\label{fact:kappa_prod}
Assume that
\eqref{set:h-scaling} holds with $\alpha_h \geq 1$, \eqref{set:J} holds and $J(z)=J(-z)$ for all $|z|<1$.
Suppose further that $b \equiv 0$ and $\kappa(x,z)=a(x)k(z)$
is such that for some $c$ we have
\begin{itemize}
\item[{\it (i)}] for all $x\in\RR^d$,
$$ 0<c^{-1}\leq a(x) \leq c <\infty\,,$$
\item[{\it (ii)}] for some $\beta\in (0,1]$
and all $x,y\in\RR^d$,
$$
|a(x)-a(y)|\leq c |x-y|^{\beta}\,,$$
\item[{\it (iii)}] for all $z\in\RR^d$,
$$0<c^{-1}\leq k(z) \leq c<\infty\,,$$
\item[{\it (iv)}]
for some
$\eta\in(0,1]$ and all $|z|<1$,
$$
|k(z)-k(0)|\leq c |z|^{\eta}\,.
$$
\end{itemize}
Then $(\mathbf{A\!^\ast})$ is satisfied with
$\sigma=N=1$ and $\indhei{1}=\beta$, $\indcsi{1}=1$.
\end{fact}
\noindent{\bf Proof.}
Clearly, \eqref{set:k-bound} and \eqref{set:k-holder} hold.
Now, by \cite[Lemma~5.1]{MR3996792} we have,
\begin{align*}
\left| \int_{r\leq |z|<1} z \,k(z)J(z)dz\right|
&=\frac12 \left| \int_{r\leq |z|<1} z \big(k(z)-k(-z)\big)J(z)dz\right| \leq c \int_{r\leq |z|<1} |z|^{1+\eta}\,J(z)dz\\
&= c \int_r^1 s^{d+\eta} \, \nu(s)ds\leq c \int_r^1 s^{\eta} h(s)ds\leq c \int_r^1 s^{\eta}(r/s) h(r)ds
\leq c\, rh(r)\,.
\end{align*}
Thus, \eqref{set:indrf-cancellation-scale}
and \eqref{set:indrf-holder-s-A} hold with the desired parameters.
{\hfill $\Box$ \bigskip}
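A numerical illustration of the cancellation used in the proof (our own choice of $k$, not from the text): take $d=1$, $J(z)=|z|^{-2}$, so that $h(r)=4/r$ and $rh(r)=4$, and $k(z)=2+\sqrt{|z|}\,\operatorname{sign}(z)$, which satisfies {\it (iii)}--{\it (iv)} with $\eta=1/2$. Only the antisymmetric part of $k$ contributes, and the compensated integral stays bounded by $rh(r)$:

```python
import math

def k(z):            # 2 + sqrt(|z|)*sign(z): Hoelder at the origin with eta = 1/2
    return 2.0 + math.copysign(math.sqrt(abs(z)), z)

def cancelled_drift(r, n=100_000):
    # int_{r <= |z| < 1} z k(z) |z|^(-2) dz with both half-lines folded;
    # only the antisymmetric part k(z) - k(-z) = 2 sqrt(z) survives
    total, step = 0.0, (1.0 - r) / n
    for i in range(n):
        z = r + (i + 0.5) * step
        total += (k(z) - k(-z)) / z * step
    return total

# closed form 4 * (1 - sqrt(r)); in particular it never exceeds r * h(r) = 4
for r in (0.25, 0.04, 1e-4):
    val = cancelled_drift(r)
    assert abs(val - 4.0 * (1.0 - math.sqrt(r))) < 1e-3
    assert val <= 4.0
```
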
We give another concrete example to
demonstrate how cancellations can validate
conditions \eqref{set:indrf-cancellation-scale}
and \eqref{set:indrf-holder-s-A} with
$\sigma=N=1$ and $\indhei{1}=\beta$, $\indcsi{1}=1$.
Note that
Fact~\ref{fact:kappa_prod}
does not apply here, because condition {\it (iv)} does not hold.
\begin{example}
Let $d=1$ and
$J(z)=\nu(|z|)=|z|^{-2}\varphi(|z|)$, where
\begin{align*}
\varphi(r)=
\begin{cases}
\dfrac{1}{\log(1/r)}, \qquad &r \leq 1/2\,, \\
\\
\dfrac1{\log(2)}, & r >1/2\,.
\end{cases}
\end{align*}
Then $h(r)$ is comparable to $r^{-1}\varphi(r)$ for $r\in (0,1]$.
Assume that
$b \equiv 0$ and
$\kappa(x,z)=a(x)k(z)$, where
$a(x)$
satisfies {\it (i)} and {\it (ii)} from Fact~\ref{fact:kappa_prod}, while
\begin{align*}
k(z)=1+\begin{cases}
1 &z\in (2^{-2n},2^{-2n+1}], \\
k_n \qquad &z\in [-2^{-2n},-2^{-2n-1}), \\
0 & else,
\end{cases}
\quad \qquad n=1,2,\ldots\,,
\end{align*}
and
\begin{align*}
k_n=\frac{\log(1+1/(2n-1))}{\log(1+1/(2n))}\,.
\end{align*}
Clearly, $k(z)$ is bounded from below and above, but also for $r\in (0,1/2]$,
\begin{align*}
\left| \int_{r\leq |z|<1} z \,k(z)J(z)dz\right|=
\left|\int_r^{1/2} \big(k(z)-k(-z)\big)\frac{1}{z \log(z)}\, dz \right|
\leq \frac{c}{\log(1/r)}\,.
\end{align*}
Thus, $(\mathbf{A\!^\ast})$ is satisfied with $\sigma=N=1$ and $\indhei{1}=\beta$, $\indcsi{1}=1$.
\end{example}
The next example shows that
a decomposition of the coefficient $\kappa(x,z)$ can provide improved parameters in \eqref{set:indrf-holder-s-B} or \eqref{set:indrf-holder-s-A}.
\begin{example}
Let $d=1$, $J(z)=\nu(|z|)=|z|^{-1-3/4}$. Then $h(r)= r^{-3/4} h(1)$.
We define
\begin{align*}
a_1(x)=a(x)\,, \quad k_1(z)=k(z)\,,\quad
a_2(x)= [a(x)]^{1/3}\,, \quad
k_2(z)=
\begin{cases}
1 & z\in(2^{-2n},2^{-2n+1}],\\
2^{1/4} \qquad & -z\in(2^{-2n-1},2^{-2n}],\\
0 & else.
\end{cases}
\end{align*}
where $a(x)$ and $k(z)$ are from Example~\ref{ex:1}, and $n$ runs through all positive integers.
Assume that $b \equiv 0$ and let $\kappa(x,z)= a_1(x)k_1(z)+a_2(x)k_2(z)$.
Then $(\mathbf{A\!^\ast})$ is satisfied with
$\sigma=3/4$, $N=2$ and $\indhei{1}=1/2$, $\indcsi{1}=3/4$, $\indhei{2}=1/6$, $\indcsi{2}=1$.
In a different manner, suppose that there is $c>0$ such that for all $|x-y|\leq 1$, $r\in (0,1]$,
$$
\left| \int_{r\leq |z|<1} z(\kappa(x,z)-\kappa(y,z)) J(z)dz\right|\leq c
|x-y|^{\epsilon} r^s h(r)\,.
$$
It is not hard to see that necessarily
$\epsilon \leq 1/6$ and $s\leq 3/4$. In particular, $\epsilon+s-1<0$.
\end{example}
In all subsequent examples we let
\begin{align*}
\nu(r)=r^{-d-1}\varphi(r)\,,
\end{align*}
and we specify $\varphi\colon [0,\infty)\to [0,\infty]$ in such a way that \eqref{set:h-scaling} holds.
The function $J(x)$ is assumed to satisfy~\eqref{set:J}, while $\kappa(x,z)$
both conditions \eqref{set:k-bound} and \eqref{set:k-holder}.
We note that in each example below
the functions $h(r)$, $K(r)$ and $r^d \nu(r)$
are comparable for $r\in (0,1]$.
In particular, the conditions \eqref{set:indrf-cancellation-scale}
and \eqref{set:indrf-holder-s-A} read as follows: for $r\in (0,1]$,
\begin{align*}
\left|b_r^x \right|
&\leq c \,r^{\sigma-1} \varphi(r)\,,
\\
\left|b_r^x -b_r^y \right|
&\leq c \sum_{j=1}^{N} (|x-y|^{\indhei{j}}\land 1)\, r^{\indcsi{j}-1} \varphi(r)\,.
\end{align*}
In Examples~\ref{ex:3}
-- \ref{ex:0_log}
the condition $(\mathbf{A\!^\ast})$ is satisfied, since
the comments from Remark~\ref{rem:close-to-1} apply.
\begin{example}\label{ex:3}
Let $\varepsilon\in(0,1]$ and
$$
\varphi(r)=\frac{1}{[\log(2+1/r)]^{1+\varepsilon}}\,.
$$
Then
\eqref{set:h-scaling} holds for every $\alpha_h<1$, but not with $\alpha_h=1$;
while \eqref{eq:intro:wusc} holds with $\beta_h=1$, but fails for any $\beta_h<1$.
Furthermore, we let
$b(x)=\int_{|z|<1} z\kappa(x,z) J(z)dz$, thus
we actually consider the operator~\eqref{eq:op-2}.
The conditions \eqref{set:exdrf-bound} and \eqref{set:exdrf-holder} are satisfied.
Here, $\lim_{r\to 0^+}\varphi(r)=0$. Note also that
$\int_{|z|<r}|z| \nu(|z|)dz$ is comparable to $[\log(2+1/r)]^{-\varepsilon}$.
\end{example}
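For the last comparability, here is a heuristic computation (with $c_d$ denoting the surface measure of the unit sphere, a notation not used elsewhere): passing to polar coordinates and substituting $u=\log(1/s)$,

```latex
$$
\int_{|z|<r} |z|\,\nu(|z|)\,dz
 = c_d \int_0^r s\cdot s^{-d-1}\varphi(s)\, s^{d-1}\,ds
 = c_d \int_0^r \frac{ds}{s\,[\log(2+1/s)]^{1+\varepsilon}}
 = c_d \int_{\log(1/r)}^{\infty} \frac{du}{[\log(2+e^{u})]^{1+\varepsilon}}
 \asymp \int_{\log(2+1/r)}^{\infty} u^{-1-\varepsilon}\,du
 = \tfrac{1}{\varepsilon}\,[\log(2+1/r)]^{-\varepsilon}\,.
$$
```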
What is more, in Examples~\ref{ex:1_log} -- \ref{ex:wide_range-2} we have
$$\int_{|z|<1}|z|\,\nu(|z|)dz=\infty\,.$$
That was also the case in Example~\ref{ex:1}, where the unbounded term $\log(1/r)$ appeared as a result of the integral $\int_{r\leq |z|<1}|z|\nu(|z|)dz$.
\begin{example}\label{ex:1_log}
Let $b \equiv 0$ and
$$\varphi(r)=\frac{1}{\log(2+1/r)}\,.$$ Then
\eqref{set:h-scaling} holds for every $\alpha_h<1$, but not with $\alpha_h=1$;
while \eqref{eq:intro:wusc} holds with $\beta_h=1$, but fails for any $\beta_h<1$. The order of the corresponding operator is logarithmically smaller than 1.
Here, $\lim_{r\to 0^+} \varphi(r)=0$.
Note that $\int_{r\leq |z|<1}|z|\nu(|z|)dz$ is
comparable to $\log[\log(2+1/r)]$ for small $r$.
\end{example}
\begin{example}\label{ex:0_log}
Let $b \equiv 0$ and
$$\varphi(r)=\log(2+1/r)\,.$$
Then
\eqref{set:h-scaling}
holds with $\alpha_h=1$, but fails for any $\alpha_h>1$, and
\eqref{eq:intro:wusc} holds with every $\beta_h>1$,
but not with $\beta_h=1$.
Roughly speaking, the order of the corresponding operator is logarithmically greater than 1.
Here, $\lim_{r\to 0^+} \varphi(r)=\infty$.
Note that $\int_{r\leq |z|<1}|z|\nu(|z|)dz$ is
comparable to \mbox{$[\log(2+1/r)]^{2}$} for small $r$.
\end{example}
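A quick check of the last comparability (a heuristic computation; constants depending on $d$ are suppressed): substituting $u=\log(1/s)$,

```latex
$$
\int_{r\leq |z|<1} |z|\,\nu(|z|)\,dz
 \asymp \int_r^1 \log(2+1/s)\,\frac{ds}{s}
 = \int_0^{\log(1/r)} \log(2+e^{u})\,du
 \asymp \int_0^{\log(2+1/r)} u\,du
 \asymp [\log(2+1/r)]^{2}\,.
$$
```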
In the last two examples we propose other interesting
functions $\varphi$ that can be considered under our assumptions.
\begin{example}\label{ex:wide_range-1}
Let
\begin{align*}
\varphi(r):=
\begin{cases}
c_k\, r^{-1/4}\,,\qquad \qquad \quad &r \in \left[ ((2k+1)!)^{-1}, ((2k)!)^{-1}\right],\\
c_k \sqrt{(2k+1)!}\, r^{1/4}\,, &r \in \left[ ((2k+2)!)^{-1}, ((2k+1)!)^{-1}\right],
\end{cases}
\end{align*}
and $c_k=((2k)!!)^{-1/2}$.
We put $\varphi(r)=0$ if $r>1$.
Then
\eqref{set:h-scaling} holds with $\alpha_h=3/4$, but fails for any $\alpha_h>3/4$;
while \eqref{eq:intro:wusc} holds with $\beta_h=5/4$, but fails for any $\beta_h<5/4$. Roughly speaking, the order of the corresponding operator ranges from $3/4$ to $5/4$.
Interestingly, here $\liminf_{r \to 0^+} \varphi (r)=0$ and $\limsup_{r \to 0^+} \varphi (r)=\infty$.
\end{example}
\begin{example}\label{ex:wide_range-2}
Let
\begin{align*}
\varphi(r):=
\begin{cases}
c_k\, r^{-1/4}\,,\qquad \qquad \quad &r \in \left[ ((3k+2)!)^{-1}, ((3k)!)^{-1}\right],\\
c_k \sqrt{(3k+2)!}\, r^{1/4}\,, &r \in \left[ ((3k+3)!)^{-1}, ((3k+2)!)^{-1}\right],
\end{cases}
\end{align*}
and $c_k=((3k)!!!)^{-1/2}$.
We put $\varphi(r)=0$ if $r>1$.
All the conclusions of Example~\ref{ex:wide_range-1}
are valid here, except that now $\lim_{r\to 0^+}\varphi(r)=\infty$.
\end{example}
One could extend Example~\ref{ex:wide_range-1}
and Example~\ref{ex:wide_range-2} by considering $\nu(r)=r^{-d-1} [\varphi(r)]^a$, $a\in [0,4)$, which yields an operator whose order ranges from $1-a/4$ to $1+a/4$.
\vspace{\baselineskip}
\section{General approach}\label{sec:gen}
In this section we present
a general approach to the
parametrix construction of the fundamental solution to the equation $\partial_tu-Lu=0$,
motivated by \cite{KKS} and many other papers on this topic; it
provides a transparent yet rigorous framework.
To implement this general strategy in a concrete situation one needs to specify the zero order approximation $P_t^0$,
which in turn determines the error term $Q_t^0$, typically $Q_t^0=-(\partial_t-L)P_t^0$.
A proper choice is one that allows
proving certain natural hypotheses.
Throughout this section we assume that
\begin{enumerate}
\item[({\bf R})]
the mapping $t \mapsto r_t$
is a non-negative, non-decreasing function on $(0,1]$ such that
$$r_{\lambda t} \leq \sqrt{\lambda}\, r_t$$
for all $\lambda\in(0,1]$ and $t\in (0,1]$.
\end{enumerate}
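A model mapping satisfying {\rm ({\bf R})} is, for instance (a sketch; the parameter $\alpha$ is not used elsewhere in this section), $r_t=t^{1/\alpha}$ with $\alpha\in(0,2]$:

```latex
$$
r_{\lambda t} = (\lambda t)^{1/\alpha} = \lambda^{1/\alpha}\, r_t \leq \lambda^{1/2}\, r_t\,,
\qquad 0<\lambda\leq 1\,,
$$
since $1/\alpha \geq 1/2$ and $\lambda\leq 1$.
```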
\subsection{Construction -- functional analytic approach}\label{sec:f_a_a}
We use the space
$B_b(\RR^d)$ of bounded Borel measurable functions, equipped
with the supremum norm $\|f\|_{\infty}$.
For $t\in (0,1]$ we consider two {\it linear} operators
\begin{align}\tag{S1} \label{op:S1}
\begin{aligned}
P_t^0 \colon B_b(\RR^d) \to B_b(\RR^d)\,,\\
Q_t^0 \colon B_b(\RR^d) \to B_b(\RR^d)\,.
\end{aligned}
\end{align}
According to the ideas presented in \cite[Section~5]{KKS}, we expect the solution to be of the form
\begin{align}\label{def:op_P}
P_t f:= P_t^0 f+\int_0^t P_{t-s}^0\, Q_s f\, ds\,,
\end{align}
where
\begin{align}\label{def:op_Q}
Q_t f := Q_t^0 f + \sum_{n=1}^{\infty}Q_t^nf \,,
\end{align}
and
\begin{align}\label{def:op_Qn}
Q_t^nf:= \idotsint\limits_{0<s_1<\ldots<s_n<t} Q_{t-s_n}^0 \ldots Q_{s_1}^0 f \,\,ds_1 \ldots ds_n\,.
\end{align}
We will make sense of \eqref{def:op_P} provided \eqref{op:S1} holds together with the following hypotheses\\
\begin{enumerate}
\item[(H1)] There is $C_1>0$ such that for all $t\in (0,1]$, $f \in B_b(\RR^d)$,
$$\| P_t^0 f\|_{\infty}\leq C_1 \|f\|_{\infty}\,.$$
\item[(H2)] For all $t\in (0,1]$, $f \in B_b(\RR^d)$,
$$\lim_{s \to t} \|P_s^0 f - P_t^0 f\|_{\infty} =0\,.$$
\end{enumerate}
\begin{enumerate}
\item[(H3)] There are $C_3,\varepsilon_0 >0$ such that for all $t\in (0,1]$, $f \in B_b(\RR^d)$,
$$\| Q_t^0 f\|_{\infty}\leq C_3\, t^{-1} r_t^{\varepsilon_0} \|f\|_{\infty}\,.$$
\item[(H4)] For all $t\in (0,1]$, $f \in B_b(\RR^d)$,
$$\lim_{s \to t} \|Q_s^0 f - Q_t^0 f\|_{\infty} =0\,.$$
\end{enumerate}
\noindent
In what follows we use the notion of the Bochner integral, which in our setting boils down to the continuity and absolute integrability of the integrand. For the general theory we refer the reader to \cite{MR3617205}.
Now, we give a consequence of {\rm ({\bf R})},
cf. \cite[Lemma~5.15]{MR3996792}.
\begin{lemma}\label{lem:time_conv}
For all $t \in (0,1]$, $\varepsilon>0$ and $k\in {\mathbb N}$,
$$
\int_0^t (t-s)^{-1} r_{t-s}^{\varepsilon}\, s^{-1} r_s^{k \varepsilon} \,ds \leq B(\varepsilon/2, (k\varepsilon)/2)\, t^{-1} r_t^{(k+1)\varepsilon}\,,
$$
and
$$
\int_0^t s^{-1} r_s^{\varepsilon}\,ds\leq B(1,\varepsilon/2)\, r_t^{\varepsilon} \,,
$$
where $B(a,b)$ is the Beta function.
\end{lemma}
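Both bounds follow from {\rm ({\bf R})} applied with $\lambda=(t-s)/t$ and $\lambda=s/t$, together with the substitution $s=tu$; for the first one, a sketch:

```latex
$$
\int_0^t (t-s)^{-1} r_{t-s}^{\varepsilon}\, s^{-1} r_s^{k\varepsilon}\,ds
 \leq r_t^{(k+1)\varepsilon} \int_0^t (t-s)^{-1}\Big(\frac{t-s}{t}\Big)^{\varepsilon/2}
 s^{-1}\Big(\frac{s}{t}\Big)^{k\varepsilon/2} ds
 = t^{-1} r_t^{(k+1)\varepsilon} \int_0^1 (1-u)^{\varepsilon/2-1}\, u^{k\varepsilon/2-1}\,du\,.
$$
```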
The first inequality in Lemma~\ref{lem:time_conv} allows us to propagate the bound on $Q_t^0f$ from {\rm (H3)} to the summands of the series in \eqref{def:op_Q}.
To lighten the discussion, we postpone the proofs of the following facts to Section~\ref{sec:a-proofs}.
\begin{fact}\label{fact:Qn_well_Bb}
Assume \eqref{op:S1}. Suppose {\rm (H3)} and {\rm (H4)} hold. Then the integral in
\eqref{def:op_Qn}
is well defined as a Bochner integral in
$(B_b(\RR^d),\|\cdot\|_{\infty})$. Further,
for all $t\in (0,1]$, $n\in{\mathbb N}$ and $f\in B_b(\RR^d)$,
\begin{enumerate}
\item $Q_t^n \colon B_b(\RR^d)\to B_b(\RR^d)$,
\item $\|Q_t^n f\|_{\infty}\leq C_3^{n+1} \prod_{k=1}^n B\!\left(\frac{\varepsilon_0}{2},\frac{k\varepsilon_0}{2}\right) t^{-1} r_t^{(n+1)\varepsilon_0}\, \|f\|_{\infty}$,
\item $\lim_{s \to t} \|Q_s^n f - Q_t^n f\|_{\infty} =0$.
\end{enumerate}
\end{fact}
\begin{fact}\label{fact:Q_well_Bb}
Assume \eqref{op:S1}. Suppose {\rm (H3)} and {\rm (H4)} hold. Then
the series in \eqref{def:op_Q} converges absolutely in
$(B_b(\RR^d),\|\cdot\|_{\infty})$.
Further,
there is $c>0$ such that for all $t\in (0,1]$ and $f\in B_b(\RR^d)$,
\begin{enumerate}
\item $Q_t\colon B_b(\RR^d)\to B_b(\RR^d)$,
\item $\|Q_t f\|_{\infty}\leq c t^{-1} r_t^{\varepsilon_0} \|f\|_{\infty}$,
\item $\lim_{s \to t} \|Q_s f - Q_t f\|_{\infty} =0$.
\end{enumerate}
\end{fact}
Having the operators $Q_t^n$ and $Q_t$ well defined, it is convenient to note that for every $f\in B_b(\RR^d)$ they
satisfy the equations
\begin{align}\label{eq:Qn}
Q_t^n f = \int_0^t Q_{t-s}^0\, Q_{s}^{n-1}f \,ds\,,\qquad n=1,2,\ldots\,,
\end{align}
and
\begin{align}\label{eq:Q}
Q_t f = Q_t^0f + \int_0^t Q_{t-s}^0\, Q_{s}f \,ds\,.
\end{align}
Actually, we verify \eqref{eq:Qn} and \eqref{eq:Q} in the proofs of Facts~\ref{fact:Qn_well_Bb} and~\ref{fact:Q_well_Bb}.
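Formally, \eqref{eq:Q} results from summing \eqref{eq:Qn} over $n$; interchanging the sum and the integral is justified by the bounds in Fact~\ref{fact:Qn_well_Bb}:

```latex
$$
Q_t f = Q_t^0 f + \sum_{n=1}^{\infty} Q_t^n f
 = Q_t^0 f + \sum_{n=1}^{\infty} \int_0^t Q_{t-s}^0\, Q_s^{n-1} f\,ds
 = Q_t^0 f + \int_0^t Q_{t-s}^0\, \Big(\sum_{n=0}^{\infty} Q_s^{n} f\Big)\,ds
 = Q_t^0 f + \int_0^t Q_{t-s}^0\, Q_s f\,ds\,.
$$
```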
\begin{fact}\label{fact:P_well_Bb}
Assume \eqref{op:S1}. Suppose {\rm (H1)} -- {\rm (H4)} hold.
The integral in \eqref{def:op_P} is well defined as a Bochner integral in $(B_b(\RR^d),\|\cdot\|_{\infty})$.
Further,
there is $c>0$ such that for all $t\in (0,1]$ and $f\in B_b(\RR^d)$,
\begin{enumerate}
\item $P_t\colon B_b(\RR^d)\to B_b(\RR^d)$,
\item $\|P_t f\|_{\infty}\leq c \|f\|_{\infty}$,
\item $\lim_{s \to t} \|P_s f - P_t f\|_{\infty} =0$.
\end{enumerate}
\end{fact}
\begin{remark}\label{rem:const}
The constant $c$ in Fact~\ref{fact:Q_well_Bb}
depends only on $C_3,\varepsilon_0,r_1$.
The constant $c$ in Fact~\ref{fact:P_well_Bb}
depends only on $C_1,C_3,\varepsilon_0,r_1$.
See \eqref{ineq:Q_norm} and \eqref{ineq:P_norm}, respectively.
\end{remark}
In the next result we address what is known in probability theory as {\it the strong Feller property}.
\begin{fact}\label{fact:strong-F}
Assume that $P_t^0, Q_t^0\colon B_b(\RR^d)\to C_b(\RR^d)$ for all $t\in (0,1]$.
Suppose {\rm (H1)} -- {\rm (H4)} hold.
Then, for all $t\in (0,1]$, $n\in {\mathbb N}$,
$$P_t,\, Q_t,\, Q_t^n\colon
B_b(\RR^d)\to C_b(\RR^d)\,.$$
\end{fact}
Even though the above reasoning delivers a convenient strategy for constructing $P_t$, which is a candidate for the solution,
we will actually need
more regularity; therefore we
require that $P_t^0$ and $Q_t^0$
have yet another mapping property.
We employ the space
$C_0(\RR^d)$ of continuous functions vanishing at infinity.
We further suppose that for $t\in (0,1]$,
\begin{align}\tag{S2} \label{op:S2}
\begin{aligned}
P_t^0 \colon C_0(\RR^d) \to C_0(\RR^d)\,,\\
Q_t^0 \colon C_0(\RR^d) \to C_0(\RR^d)\,.
\end{aligned}
\end{align}
\begin{fact}\label{fact:PQ_well_C0}
Assume \eqref{op:S1} and \eqref{op:S2}. Suppose {\rm (H1)} -- {\rm (H4)} hold.
Then
the integrals in \eqref{def:op_P} and \eqref{def:op_Qn}
are well defined as Bochner integrals
and the series in \eqref{def:op_Q} converges absolutely in
$(C_0(\RR^d),\|\cdot\|_{\infty})$.
In particular,
\begin{enumerate}
\item $Q_t\colon C_0(\RR^d)\to C_0(\RR^d)$,
\item $P_t\colon C_0(\RR^d)\to C_0(\RR^d)$.
\end{enumerate}
\end{fact}
One of the central hypotheses here, crucial for the choice of $P_t^0$, is
the continuity at zero in the norm.
\\
\begin{enumerate}
\item[(H0)] For all $f\in C_0(\RR^d)$,
$$
\lim_{t\to 0^+} \|P_t^0f - f\|_{\infty}=0\,.
$$
\end{enumerate}
\vspace{\baselineskip}
\noindent
The hypothesis {\rm (H0)} immediately translates into the same property for $P_t$.
\begin{fact}\label{fact:P-s-cont}
Assume \eqref{op:S1} and \eqref{op:S2}. Suppose {\rm (H0)} -- {\rm (H4)} hold. Then for all $f\in C_0(\RR^d)$,
$$
\lim_{t\to 0^+} \|P_t f -f\|_{\infty}=0\,.
$$
\end{fact}
\vspace{\baselineskip}
The importance of \eqref{op:S1}, \eqref{op:S2} and {\rm (H0)} -- {\rm (H4)} is now apparent.
These conditions already give rise to a continuous family of bounded operators $(P_t)_{t\in [0,1]}$
on $C_0(\RR^d)$.
Next, we want to show that it is
a positivity-preserving sub-Markovian semigroup that corresponds in a sense to~$L$.
Namely, the aim is to verify the following conjectures\\
\begin{enumerate}
\item[(CoJ1)] For all $t\in (0,1]$ and $f\in C_0(\RR^d)$,
$$
f \geq 0 \quad\implies\quad P_t f \geq 0\,.
$$
\item[(CoJ2)] For all $t\in (0,1]$ and $f\in C_0(\RR^d)$,
$$
f \leq 1 \quad\implies\quad P_t f \leq 1\,.
$$
\item[(CoJ3)]
For all $t,s>0$ (such that $t+s\leq 1$) and $f\in C_0(\RR^d)$
$$
P_t P_s f = P_{t+s}f\,.
$$
\item[(CoJ4)] For all $t\in (0,1]$ and $f\in \mathcal{D} \cap C_0(\RR^d)$,
$$
P_tf(x) -f(x)= \int_0^t P_s\, L f(x) \,ds\,,
\qquad x\in\RR^d\,.
$$
(The set $\mathcal{D}$ and the operator $L$ are to be specified.)
\end{enumerate}
\vspace{\baselineskip}
\noindent
In the end we shall also consider \\
\begin{enumerate}
\item[(CoJ5)] For all $t\in (0,1]$,
$$
P_t 1 = 1\,.
$$
\end{enumerate}
\vspace{\baselineskip}
We focus on the verification of conjectures {\rm (CoJ1)} -- {\rm (CoJ4)} in the next section.
The conjecture {\rm (CoJ5)} is treated only in
Section~\ref{ssec:ker}, where applications to our models are considered.
We note that the above-mentioned verification requires continuity, like that provided by part (3) of Fact~\ref{fact:P_well_Bb}.
We collect more such results that are essential and hold true in the general setting.
We introduce the notation: for $\tau \in [0,1]$ we let
$$
\Omega_{[\tau,1]} :=\{(t,\varepsilon)\in{\mathbb R}^2\colon\,\, t\in [\tau, 1] ,\,\,\varepsilon\geq 0 \quad\mbox{and} \quad t+\varepsilon\leq 1\}\,.
$$
\begin{lemma}\label{lem:continuity}
Assume \eqref{op:S1} and \eqref{op:S2}. Suppose {\rm (H0)} -- {\rm (H4)} hold.
Then for every $f\in C_0(\RR^d)$
the following mappings are uniformly continuous into $(C_0(\RR^d),\|\cdot\|_{\infty})$,
\begin{enumerate}
\item
$
t \longmapsto P_t^0 f
$ on $[0,1]$,
\item
$
(t,\varepsilon)\longmapsto \int_0^t P_{t-s+\varepsilon}^0\,Q_s f \,ds
$ on $\Omega_{[0,1]}$,
\item
$
t \longmapsto Q_t^0 f
$ on $[\tau, 1]$ for every $\tau\in (0,1]$,
\item
$
(t,\varepsilon)\longmapsto \int_0^t Q_{t-s+\varepsilon}^0\,Q_s f \,ds
$ on $\Omega_{[\tau,1]}$ for every $\tau\in (0,1]$,
\item
$
(t,\varepsilon) \longmapsto P_{\varepsilon}^0\, Q_t f
$ on $[\tau,1]\times [0,1]$ for every $\tau\in (0,1]$.
\end{enumerate}
\end{lemma}
\begin{remark}\label{rem:1toT}
Note that
if {\rm ({\bf R})}, \eqref{op:S1}, \eqref{op:S2} and {\rm (H0)} -- {\rm (H4)}
are true for $t\in (0,T]$, then
all results of this section
hold for $t\in (0,T]$
in place of $t\in (0,1]$.
In particular, in such a case, in Lemma~\ref{lem:continuity}
we have $[0,T]$ and $[\tau, T]$ instead of
$[0,1]$ and $[\tau,1]$, respectively.
Furthermore, {\rm (CoJ1)} -- {\rm (CoJ5)}
shall also be considered on $[0,T]$.
The same applies to Sections~\ref{sec:coj-gen} and~\ref{sec:realization}.
\end{remark}
\subsection{Conjectures via approximate solution}\label{sec:coj-gen}
We assume in the whole section that {\rm ({\bf R})}, \eqref{op:S1}, \eqref{op:S2} and {\rm (H0)} -- {\rm (H4)} are satisfied, and we continue to discuss the general approach.
The aim is to establish that the conjectures {\rm (CoJ1)} -- {\rm (CoJ4)} hold true for $P_tf$
constructed in \eqref{def:op_P}.
We consider
a linear operator $L$ defined on a linear subspace
$D(L)$ of $\mathcal{R}(\RR^d)$ --~real-valued functions on~$\RR^d$~-- and such that $Lf\in \mathcal{R}(\RR^d)$ for $f\in D(L)$. We consider the following properties
\\
\begin{enumerate}
\item[(L1)] $L$ satisfies the {\it positive maximum principle}, that is, \\
\begin{center}
\it
if $f\in D(L)$ is such that $f(x_0)=\sup_{x\in\RR^d} f(x) \geq 0$, then $Lf(x_0)\leq 0\,.$
\end{center}
{\,}
\item[(L2)] A non-zero constant function on $\RR^d$ belongs to $D(L)$.\\
\item[(L3)] $\mathcal{D} \subseteq D(L)$ is such that $L f \in C_0(\RR^d)$ for every $f\in \mathcal{D}$.
\end{enumerate}
\vspace{\baselineskip}
A classical method of proving
{\rm (CoJ1)} -- {\rm (CoJ4)}
is to associate $P_t f$ with an operator $L$ satisfying {\rm (L1)} -- {\rm (L3)}
by showing that it solves the equation
$\partial_t u - L u =0$, in other words,
that it is harmonic for $\partial_t-L$.
In general, a typical problem is that
we do not know whether $P_t f \in D(L)$, which is usually a question of sufficient regularity.
In a series of papers
\cite{MR3765882},
\cite{MR4003136},
\cite{KKS}
this problem was resolved for certain operators by
introducing the notion of approximate harmonicity
and by specifying
the so-called {\it approximate fundamental solution}.
Namely,
for $t,\varepsilon>0$, $t+\varepsilon\leq 1$
and $f\in C_0(\RR^d)$ we let (in the sense of Bochner, thus $P_{t,\varepsilon}f \in C_0(\RR^d)$)
\begin{align}\label{def:approx_sol}
P_{t,\varepsilon}f := P_{t+\varepsilon}^0 f + \int_0^t P_{t-s+\varepsilon}^0\, Q_s f\, ds\,.
\end{align}
We argue here
that such an approach is successful in general, provided
$P_{t,\varepsilon}$ is adequately regular
and a proper relation holds between $P_t^0$ and $Q_t^0$.
This decisive relation is an equality
that we point out in the hypothesis {\rm (H5)} below.
\vspace{\baselineskip}
\begin{enumerate}
\item[(H5)] For all $t,\varepsilon>0$, $t+\varepsilon\leq 1$, $x\in\RR^d$
and $f\in C_0(\RR^d)$,
$$
(\partial_t-L_x) P_{t,\varepsilon} f(x)
= P_{\varepsilon}^0\, Q_t f(x)
-Q_{t+\varepsilon}^0 f(x) - \int_0^t Q_{t-s+\varepsilon}^0\, Q_s f(x)\,ds\,.
$$
\vspace{\baselineskip}
\item[(H6)] For all $t,\varepsilon>0$, $t+\varepsilon\leq 1$, $x\in\RR^d$
and $f\in C_0(\RR^d)$,
\begin{align*}
\int_0^{1-\varepsilon} |\partial_s P_{s,\varepsilon} f (x)|ds <\infty\,, \quad\quad
\int_0^{1-\varepsilon} | L_x P_{s,\varepsilon} f (x)|ds<\infty\,,
\end{align*}
\begin{align*}
L_x \int_0^t P_{s,\varepsilon} f (x)\,ds =
\int_0^t L_x P_{s,\varepsilon} f (x)\,ds \,.
\end{align*}
\end{enumerate}
\vspace{\baselineskip}
\begin{remark}\label{rem:H5_H6}
An inherent part of {\rm (H5)}
is that $t\mapsto P_{t,\varepsilon}f(x)$ is differentiable, and $P_{t,\varepsilon}f\in D(L)$.
Similarly, in {\rm (H6)}
we require that $t\mapsto P_{t,\varepsilon}f(x)$ is differentiable, both $P_{t,\varepsilon}f$ and $\int_0^t P_{s,\varepsilon} f\,ds$ belong to $D(L)$, and the integrands are measurable.
\end{remark}
\begin{remark}\label{rem:H5-ver}
In practice, to verify the hypothesis {\rm (H5)}, one initiates the construction in Section~\ref{sec:f_a_a} so that
$Q_t^0f = -(\partial_t-L)P_t^0f$,
and
performs direct computations to evaluate $\partial_t P_{t,\varepsilon}f(x)$.
\end{remark}
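To illustrate: assuming enough regularity to differentiate under the integral sign in \eqref{def:approx_sol}, the computation takes the form

```latex
$$
(\partial_t-L_x) P_{t,\varepsilon}f(x)
 = (\partial_t-L_x) P_{t+\varepsilon}^0 f(x)
 + P_{\varepsilon}^0\, Q_t f(x)
 + \int_0^t (\partial_t-L_x) P_{t-s+\varepsilon}^0\, Q_s f(x)\,ds\,,
$$
```

and substituting $(\partial_t-L)P_t^0 f = -Q_t^0 f$ yields the right-hand side of {\rm (H5)}.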
Here are some general properties of
$P_{t,\varepsilon}f$ that stem from Lemma~\ref{lem:continuity}.
\begin{corollary}\label{cor:approx_sol-gen_prop}
For all $\varsigma\in(0,1)$ and $f\in C_0(\RR^d)$ we have
\begin{enumerate}
\item $$\lim_{\varepsilon\to 0^+}\sup_{t\in [0,1-\varsigma]}\|P_{t,\varepsilon}f -P_tf\|_{\infty}=0\,,$$
\item
$$
\underset{\varepsilon \in (0,\varsigma]}{\forall} \quad \lim_{|x|\to \infty} \sup_{t\in [0,1-\varsigma]} |P_{t,\varepsilon}f(x)|=0\,,
$$
\item
$$
\lim_{\substack{(t,\varepsilon)\to (0,0)\\ t,\varepsilon \geq 0}} \|P_{t,\varepsilon}f-f\|_{\infty} = 0\,,
$$
\item
$$
\underset{\varepsilon \in (0,\varsigma]}{\forall} \quad (t,x)\mapsto P_{t,\varepsilon}f(x) \mbox{ is an element of }\, C_0([0,1-\varsigma]\times\RR^d)\,.
$$
\end{enumerate}
\end{corollary}
\noindent{\bf Proof.}
If either $t=0$ or $\varepsilon=0$, then $P_{t,\varepsilon}f$ is also given by \eqref{def:approx_sol}.
The reason to introduce $\varsigma$ is only to create room for $\varepsilon>0$ such that $t+\varepsilon\leq 1$.
The first statement follows from Lemma~\ref{lem:continuity}
part (1) and (2).
The same lemma yields that
$P_{t,\varepsilon}f$ is continuous in $t\in[0,1-\varsigma]$; hence the set $\{P_{t,\varepsilon}f\colon t\in [0,1-\varsigma]\}$ is a compact subset of $(C_0(\RR^d),\|\cdot\|_{\infty})$ and therefore the second statement holds true
as a direct consequence of sequential compactness (see \cite[Chapter~9, Theorem~16]{zbMATH05172308}).
The third one results from
$\|P_{t,\varepsilon}f-f\|_{\infty} \leq \|P_{t,\varepsilon}f-P_{t}f\|_{\infty}+\|P_{t}f-f\|_{\infty}$,
the first statement of the corollary and
Fact~\ref{fact:P-s-cont}.
Finally, since
$P_{t,\varepsilon}f(x)-P_{t_0,\varepsilon}f(x_0)
= P_{t,\varepsilon}f(x)-P_{t_0,\varepsilon}f(x)+P_{t_0,\varepsilon}f(x)-P_{t_0,\varepsilon}f(x_0)$, the continuity in the last statement holds by Lemma~\ref{lem:continuity}
part (1) and (2), and because $P_{t_0,\varepsilon}f\in C_0(\RR^d)$. Convergence to zero is assured by the second statement.
{\hfill $\Box$ \bigskip}
In what follows we prove each of the conjectures
{\rm (CoJ1)} -- {\rm (CoJ4)}
under a proper selection of the conditions
{\rm (L1)} -- {\rm (L3)} and the hypotheses
{\rm (H5)} -- {\rm (H6)}.
For the sake of this short discussion we use the notion of approximate harmonicity
\cite[Definition~5.4]{MR4003136}.
Roughly speaking,
we show that
$P_tf$, $1-P_tf$ and $P_{t+s}f-P_tP_sf$
for $f\in C_0(\RR^d)$,
as well as
$P_tf -f -\int_0^t P_s L f\, ds$
for $f\in \mathcal{D}\cap C_0(\RR^d)$,
are approximate harmonic for $\partial_t-L$.
The corresponding approximating families are
$P_{t,\varepsilon} f$, $1-P_{t,\varepsilon}f$, $P_{t+s,\varepsilon}f-P_{t,\varepsilon}P_sf$,
and
$P_{t,\varepsilon} f -f -\int_0^t P_{s,\varepsilon} L f\, ds$.
We also show that this leads to the conjectures.
A direct application of
\cite[Proposition~5.5]{MR4003136} -- valid only for a certain class of operators -- or a simple reference to the proof of that result in \cite{MR4003136},
would be inaccurate or incomplete,
since the details differ when making sure that our assumptions suffice.
\begin{fact}\label{fact:approx_sol-zero-1}
Suppose {\rm (H5)} holds.
For all $\tau, \varsigma>0$, $\tau+ \varsigma\leq 1$
and $f\in C_0(\RR^d)$ we have
$$
\lim_{\varepsilon \to 0^+}\sup_{t\in [\tau,1-\varsigma ]} \|(\partial_t-L) P_{t,\varepsilon}f \|_{\infty} = 0\,.
$$
\end{fact}
\noindent{\bf Proof.}
Due to Lemma~\ref{lem:continuity} parts
(3), (4) and (5), the expression in
the hypothesis {\rm (H5)}
converges
in the supremum norm
uniformly with respect to $t\in [\tau,1-\varsigma ]$ as $\varepsilon\to 0^+$ to the limit
$Q_t f - Q_t^0f + \int_0^t Q_{t-s}^0\, Q_{s}f \,ds$,
which by \eqref{eq:Q} equals zero.
This ends the proof.
{\hfill $\Box$ \bigskip}
We stress that our assumptions are commented on in more detail in
Remark~\ref{rem:H5_H6}.
\begin{proposition}\label{prop:coj123}
Suppose {\rm (L1)} and {\rm (H5)} hold.
The conjectures {\rm (CoJ1)} and {\rm (CoJ3)} are true.
If additionally {\rm (L2)} holds, then {\rm (CoJ2)} also holds.
\end{proposition}
\noindent{\bf Proof.}
Let $f\in C_0(\RR^d)$ be non-negative.
Suppose there are
$\varsigma, \theta >0$,
$t'\in [0,1-\varsigma]$ and $x'\in\RR^d$ such that $e^{-\theta t'} P_{t'}f(x')<0$.
Define $u_{\varepsilon}(t,x)= e^{-\theta t} P_{t,\varepsilon}f(x)$.
By part (1) of Corollary~\ref{cor:approx_sol-gen_prop},
there are $\eta>0$ and $0<\bar{\varepsilon}_0<\varsigma$
such that
$$
u_\varepsilon(t',x') <-\eta\,,
$$
for all $0<\varepsilon<\bar{\varepsilon}_0$.
By part (4) of Corollary~\ref{cor:approx_sol-gen_prop}
$u_{\varepsilon}\in C_0([0,1-\varsigma]\times \RR^d)$
and it attains its minimum at a point denoted by $(t_\varepsilon,x_\varepsilon)$. Since $f\geq 0$, by part (3) of
Corollary~\ref{cor:approx_sol-gen_prop},
there are $\bar{t}_1>0$ and $0<\bar{\varepsilon}_1<\bar{\varepsilon}_0$
such that
$$
u_{\varepsilon}(t,x)\geq -\eta/2\,,
$$
for all $t\in [0,\bar{t}_1]$, $\varepsilon\in(0,\bar{\varepsilon}_1]$ and $x\in\RR^d$. This gives $t_\varepsilon\in (\bar{t}_1,1-\varsigma]$ for all $
\varepsilon\in(0,\bar{\varepsilon}_1]$.
Therefore, by Remark~\ref{rem:H5_H6} and {\rm (L1)}, we get
$$
\partial_t u_{\varepsilon}(t_\varepsilon,x_\varepsilon)\leq 0\,,
\qquad \quad
L_x u_\varepsilon(t_\varepsilon,x_\varepsilon)\geq 0\,,
$$
and so for all $\varepsilon\in (0,\bar{\varepsilon}_1]$ we have
$$
(\partial_t-L_x) u_\varepsilon(t_\varepsilon,x_\varepsilon) \leq 0\,.
$$
However, Fact~\ref{fact:approx_sol-zero-1} gives
\begin{align*}
(\partial_t-L_x) u_\varepsilon(t_\varepsilon,x_\varepsilon)
&= e^{-\theta t_\varepsilon} (\partial_t-L_x) P_{t_\varepsilon,\varepsilon}f(x_\varepsilon)
-\theta u_{\varepsilon}(t_\varepsilon,x_\varepsilon)\\
&\geq e^{-\theta t_\varepsilon} (\partial_t-L_x) P_{t_\varepsilon,\varepsilon}f(x_\varepsilon)+\theta\eta
\qquad\qquad
\xrightarrow{\varepsilon\to 0^+} \quad \theta \eta >0\,,
\end{align*}
which is a contradiction, and shows that $P_tf \geq 0$ for $t\in [0,1]$, see Fact~\ref{fact:P_well_Bb}.
The proof of {\rm (CoJ3)} is similar: we consider $u_\varepsilon(t,x)=e^{-\theta t}(P_{t+s,\varepsilon}f(x) - P_{t,\varepsilon} P_s f(x))$ for $t\in [0,1-s-\varsigma]$. Note that $P_sf \in C_0(\RR^d)$. We also use
Lemma~\ref{lem:continuity} part (1) and (2)
to assert that $u_\varepsilon(t,x)\geq -\eta/2$.
Finally, we put $-f$ in place of $f$.
The proof of {\rm (CoJ2)} is also similar: we consider
$u_\varepsilon(t,x)=e^{-\theta t}(1-P_{t,\varepsilon}f(x))$
for $t\in [0,1-\varsigma]$.
If $u_\varepsilon(t',x')<-\eta$ for some point $(t',x')$, then by part (4) of Corollary~\ref{cor:approx_sol-gen_prop}, $u_\varepsilon$ attains its minimum.
We write $u_\varepsilon(t,x)=e^{-\theta t}(1-f(x))+e^{-\theta t}(f(x)-P_{t,\varepsilon}f(x))$ to show $u_\varepsilon(t,x)\geq -\eta/2$.
To obtain the contradiction from
$$
(\partial_t-L_x) u_\varepsilon(t_\varepsilon,x_\varepsilon)
=e^{-\theta t_\varepsilon}(-L1) -e^{-\theta t_\varepsilon} (\partial_t-L_x) P_{t_\varepsilon,\varepsilon}f(x_\varepsilon)
-\theta u_{\varepsilon}(t_\varepsilon,x_\varepsilon)\,,
$$
we additionally use that
$L1 \leq 0$, which follows from {\rm (L1)} and {\rm (L2)}.
{\hfill $\Box$ \bigskip}
\begin{fact}\label{fact:approx_sol-zero-2}
Suppose {\rm (H5)} and {\rm (H6)} hold.
For all $\varsigma \in (0,1)$ and $f\in C_0(\RR^d)$ we have
$$
\lim_{\varepsilon\to 0^+}\int_0^{1-\varsigma} \|(\partial_t-L) P_{t,\varepsilon}f \|_{\infty} \,dt=0\,.
$$
\end{fact}
\noindent{\bf Proof.}
By Fact~\ref{fact:approx_sol-zero-1}
it suffices to show that
$\int_0^{\tau}\|(\partial_t-L) P_{t,\varepsilon}f \|_{\infty} \,dt$
can be made arbitrarily small, uniformly in $0<\varepsilon \leq \bar{\varepsilon}_1$, by the choice of $\tau>0$ and $\bar{\varepsilon}_1>0$. To guarantee that, we apply the supremum norm to each term in the expression of
{\rm (H5)}.
We get
$\|P_\varepsilon^0 Q_t f\|_{\infty}\leq C_1 c\, t^{-1}r_t^{\varepsilon_0}\|f\|_{\infty}$
by {\rm (H1)} and Fact~\ref{fact:Q_well_Bb};
$\|Q_{t+\varepsilon}^0f\|_{\infty}\leq C_3 (t+\varepsilon)^{-1}r_{t+\varepsilon}^{\varepsilon_0}\|f\|_{\infty}$
by {\rm (H3)};
and
$\int_0^t \| Q_{t-s+\varepsilon}^0 Q_s f\|_{\infty} ds\leq \int_0^t C_3 (t-s+\varepsilon)^{-1}r_{t-s+\varepsilon}^{\varepsilon_0} c s^{-1}r_s^{\varepsilon_0}\,ds$
by {\rm (H3)} and Fact~\ref{fact:Q_well_Bb}.
Clearly, the result follows after integrating in $t$ over $(0,\tau)$ and using Lemma~\ref{lem:time_conv} (changing the order of integration simplifies the calculation for the third term): we just have to make $r_{\tau+\varepsilon}^{\varepsilon_0}$ small, see~{\rm ({\bf R})}.
{\hfill $\Box$ \bigskip}
Again, we refer to Remark~\ref{rem:H5_H6} for important comments on our assumptions.
\begin{proposition}\label{prop:coj4}
Suppose {\rm (L1)}, {\rm (L3)} and {\rm (H5)}, {\rm (H6)} hold.
The conjecture {\rm (CoJ4)} is true.
\end{proposition}
\noindent{\bf Proof.}
Let $f\in \mathcal{D}\cap C_0(\RR^d)$.
Note that
$f\in D(L)$ and
$L f\in C_0(\RR^d)$ by {\rm (L3)}.
Suppose that there are $\varsigma, \theta>0$, $t'\in[0,1-\varsigma]$ and $x'\in \RR^d$ such that
$e^{-\theta t'}(P_{t'}f(x')-f(x')-\int_0^{t'} P_s L f(x')\,ds) <0$.
We consider
$$
u_{\varepsilon}(t,x)= e^{-\theta t}\left(P_{t,\varepsilon}
f(x)-f(x)-\int_0^t P_{s,\varepsilon} L f(x)\,ds \right) .
$$
By part (1) of Corollary~\ref{cor:approx_sol-gen_prop},
there are $\eta>0$ and $0<\bar{\varepsilon}_0<\varsigma$
such that
$u_\varepsilon(t',x') <-\eta$
for all $0<\varepsilon<\bar{\varepsilon}_0$.
By part (4) of Corollary~\ref{cor:approx_sol-gen_prop} $u_{\varepsilon}\in C_0([0,1-\varsigma]\times \RR^d)$, hence
it attains its minimum at a point denoted by $(t_\varepsilon,x_\varepsilon)$.
By $\int_0^t P_{s,\varepsilon} L f(x)\,ds=\int_0^t \left[ P_{s,\varepsilon} L f(x) - L f(x)\right]ds+ t L f(x)$, and
Corollary~\ref{cor:approx_sol-gen_prop} part (3),
there are $\bar{t}_1>0$ and $0<\bar{\varepsilon}_1<\bar{\varepsilon}_0$
such that
$u_{\varepsilon}(t,x)\geq -\eta/2$
for all $t\in [0,\bar{t}_1]$, $\varepsilon\in(0,\bar{\varepsilon}_1]$ and $x\in\RR^d$.
Thus $t_\varepsilon\in (\bar{t}_1,1-\varsigma]$ for all $
\varepsilon\in(0,\bar{\varepsilon}_1]$.
Therefore, by {\rm (L1)}, for all $\varepsilon\in (0,\bar{\varepsilon}_1]$ we have
$$
(\partial_t-L_x) u_\varepsilon(t_\varepsilon,x_\varepsilon) \leq 0\,.
$$
Note that we can apply
$\partial_t$
and $L_x$
to $u_{\varepsilon}(t,x)$
due to our assumptions, see
Remark~\ref{rem:H5_H6},
and (for the differentiability of the integral in $t$) Corollary~\ref{cor:approx_sol-gen_prop} part (4).
However,
$$
(\partial_t-L_x) u_\varepsilon(t,x)=
e^{-\theta t}(\partial_t-L_x) (\ldots)_{(t,x)}
-\theta u_\varepsilon(t,x)
$$
where, by {\rm (H6)},
\begin{align*}
(\partial_t-L_x) (\ldots)_{(t,x)}
&:=
(\partial_t-L_x) P_{t,\varepsilon}
f(x) - L f(x) - P_{t,\varepsilon} L f(x)
+\int_0^t L_x P_{s,\varepsilon} L f(x)\,ds\\
&= (\partial_t-L_x) P_{t,\varepsilon}
f(x)
- L f(x)
+P_{0,\varepsilon} L f(x)
-\int_0^t (\partial_s-L_x) P_{s,\varepsilon} L f(x)\,ds\,,
\end{align*}
and so by Facts~\ref{fact:approx_sol-zero-1} and~\ref{fact:approx_sol-zero-2},
$$
(\partial_t-L_x) u_\varepsilon(t_\varepsilon,x_\varepsilon)\geq
e^{-\theta t_\varepsilon}(\partial_t-L_x) (\ldots)_{(t_\varepsilon,x_\varepsilon)}
+\theta \eta \quad
\xrightarrow{\varepsilon\to 0^+} \quad \theta\eta >0\,,
$$
which is a contradiction.
Finally, it suffices to consider $-f$ in place of $f$.
{\hfill $\Box$ \bigskip}
\subsection{Realization via integral kernels}\label{sec:realization}
We assume in the whole section that {\rm ({\bf R})}, \eqref{op:S1} and {\rm (H1)} -- {\rm (H4)} are satisfied.
Suppose there are Borel functions
$p_0, q_0\colon (0,1] \times \RR^d\times \RR^d \to {\mathbb R}$ such that for all $t\in (0,1]$, $x\in\RR^d$ and $f\in B_b(\RR^d)$ (in the sense of Lebesgue)
\begin{align}\tag{S3} \label{op:S3}
\begin{aligned}
P_t^0 f (x) &= \int_{\RR^d} p_0(t,x,y) f(y)\,dy\,,\\
Q_t^0 f (x) &= \int_{\RR^d} q_0(t,x,y) f(y)\,dy\,.
\end{aligned}
\end{align}
\noindent
The aim of this section is to make sure that
under \eqref{op:S3}
the operator $P_t$ is an integral operator on $B_b(\RR^d)$ with a Borel measurable kernel. Clearly, we have
\begin{fact} Assume \eqref{op:S3}.
The hypothesis {\rm (H1)} is equivalent to: for all $t\in (0,1]$, $x\in\RR^d$,
\begin{align}
\int_{\RR^d} |p_0(t,x,y)|\,dy &\leq C_1\,. \label{ineq:H1-gen}
\end{align}
The hypothesis {\rm (H3)} is equivalent to: for all $t\in (0,1]$, $x\in\RR^d$,
\begin{align}
\int_{\RR^d} |q_0(t,x,y)|\,dy \leq \, & C_3 \, t^{-1} r_t^{\varepsilon_0}\,. \label{ineq:H3-gen}
\end{align}
\end{fact}
Before treating the operator
\eqref{def:op_P}
we first concentrate on \eqref{def:op_Q} and \eqref{def:op_Qn}.
Again, we often use that
evaluation at a point is a continuous functional on $(B_b(\RR^d),\|\cdot\|_{\infty})$ and therefore commutes with the Bochner integral, see \cite{MR3617205}.
We shall now assert that there exists a Borel function
$q_n \colon (0,1] \times \RR^d\times \RR^d \to {\mathbb R}$,
which is a kernel of
the operator $Q_t^n$, see \eqref{def:op_Qn}.
\begin{lemma}
Assume \eqref{op:S3}. For all $n\in {\mathbb N}$, $t\in (0,1]$, $x\in\RR^d$ and $f\in B_b(\RR^d)$,
\begin{align}\label{eq:Q_n_via_q_n}
Q_t^n f(x) = \int_{\RR^d} q_n(t,x,y) f(y)\,dy\,,
\end{align}
where Borel functions $q_n$ are given
inductively for all $t\in (0,1]$, $x,y\in\RR^d$
by the following absolutely convergent double integral
\begin{align}\label{def:q_n-gen}
q_n(t,x,y):=\int_0^t \int_{\RR^d} {\bf 1}_{\mathcal{S}_n}(t,x,y)\, q_0(t-s,x,z)q_{n-1}(s,z,y)\,dzds\,.
\end{align}
The sets $\mathcal{S}_n\subseteq (0,1]\times\RR^d\times\RR^d$,
as well as
every section $\mathcal{S}^{t,x}_n\subseteq \RR^d$, are Borel and of full measure.
For all $n\in{\mathbb N}$, $t\in (0,1]$ and $x\in\RR^d$,
\begin{align}\label{ineq:q_n-gen-b}
\int_{\RR^d} |q_n(t,x,y)|\,dy\leq
C_3^{n+1} \prod_{k=1}^n B\!\left(\frac{\varepsilon_0}{2},\frac{k\varepsilon_0}{2}\right) t^{-1} r_t^{(n+1)\varepsilon_0}\,.
\end{align}
\end{lemma}
\noindent{\bf Proof.}
For $n \in {\mathbb N}$, $t, s_1,\ldots, s_n\in (0,1]$ and $x,y, z_1,\ldots,z_n \in \RR^d$ we define a Borel function
$F(t,x,s_n,z_n,\ldots,s_1,z_1,y)$ by
$${\bf 1}_{0<s_1<\ldots<s_n<t}\,| q_0(t-s_n,x,z_n) q_0(s_n-s_{n-1},z_n,z_{n-1})\ldots q_0(s_2-s_1,z_2,z_1) q_0(s_1,z_1,y)|\,,$$
and a Borel set (see \cite[Lemma~1.28]{MR4226142})
$$
\mathcal{S}_n :=\{(t,x,y)\colon \int_{({\mathbb R})^n} \int_{(\RR^d)^n} F(t,x,s_n,z_n,\ldots,s_1,z_1,y)\, dz_1\ldots dz_n \,ds_1\ldots ds_n<\infty \}\,.
$$
Let $n=1$. Clearly,
$\mathcal{S}_1^{t,x}=\{y\colon \int_{{\mathbb R}}\int_{\RR^d} F(t,x,s,z,y)\,dzds<\infty \}$.
By \eqref{ineq:H3-gen}
and Lemma~\ref{lem:time_conv} we get
\begin{align}\label{ineq:Fub-aux}
\int_{\RR^d} \int_{{\mathbb R}}\int_{\RR^d} F(t,x,s,z,y)\,dzds\,dy < \infty\,, \tag{$\star$}
\end{align}
which confirms that $\mathcal{S}_1^{t,x}$ is of full measure. Similarly,
for any compact $D\subset \RR^d$,
$$
\int_0^1 \int_{D}\int_{\RR^d} \int_{{\mathbb R}}\int_{\RR^d} |F(t,x,s,z,y)|\,dzds \,dy\,\,dxdt < \infty\,,
$$
hence $\mathcal{S}_1$ is of full measure.
Since
\eqref{ineq:Fub-aux}
justifies the change of the order of integration, we have
\begin{align*}
Q_t^1 f(x)&=\int_{0<s<t} Q_{t-s}^0 Q_s^0 f(x)\,ds
=\int_0^t \int_{\RR^d} q_0(t-s,x,z) \int_{\RR^d} q_0(s,z,y)f(y)\,dy\,dzds\\
&= \int_{\RR^d} \int_0^t \int_{\RR^d} q_0(t-s,x,z)q_0(s,z,y)\,dzds \,f(y)\,dy = \int_{\RR^d} q_1(t,x,y) f(y)\,dy \,.
\end{align*}
The case of $n \geq 2$ proceeds along similar lines.
Finally, \eqref{ineq:q_n-gen-b} is equivalent to
part (2) of Fact~\ref{fact:Qn_well_Bb}.
{\hfill $\Box$ \bigskip}
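To illustrate where the Beta factors in \eqref{ineq:q_n-gen-b} come from, consider the simplified model case $r_t=t^{1/2}$ (an assumption made here only for illustration; in general Lemma~\ref{lem:time_conv} plays this role). Then the time integration in \eqref{def:q_n-gen}, combined with the bound for $q_{n-1}$, reduces to the classical Beta integral
\begin{align*}
\int_0^t (t-s)^{-1} r_{t-s}^{\varepsilon_0}\, s^{-1} r_s^{n\varepsilon_0}\,ds
=\int_0^t (t-s)^{-1+\frac{\varepsilon_0}{2}}\, s^{-1+\frac{n\varepsilon_0}{2}}\,ds
= B\!\left(\frac{\varepsilon_0}{2},\frac{n\varepsilon_0}{2}\right) t^{-1} r_t^{(n+1)\varepsilon_0}\,,
\end{align*}
so each induction step multiplies the bound by one constant $C_3$ and one Beta factor, which is exactly the shape of \eqref{ineq:q_n-gen-b}.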
We make a similar observation about a kernel of the operator $Q_t$ defined in \eqref{def:op_Q}.
\begin{lemma}
Assume \eqref{op:S3}. For all $t\in (0,1]$, $x\in\RR^d$ and $f\in B_b(\RR^d)$,
\begin{align}\label{eq:Q_via_q}
Q_tf(x) = \int_{\RR^d}q(t,x,y) f(y)\,dy\,,
\end{align}
where for all $t\in (0,1]$, $x,y\in\RR^d$ the following series converges absolutely
\begin{align}\label{def:q-gen}
q(t,x,y):=\sum_{n=0}^\infty {\bf 1}_{\mathcal{S}_q}(t,x,y) \,q_n(t,x,y)\,.
\end{align}
The set $\mathcal{S}_q\subseteq (0,1]\times\RR^d\times\RR^d$ as well as every section $\mathcal{S}_q^{t,x}\subseteq \RR^d$ are Borel and of full measure.
Further, there exists $c>0$ such that for all $t\in (0,1]$ and $x\in\RR^d$,
\begin{align}\label{ineq:q-gen-b}
\int_{\RR^d} |q(t,x,y)|\,dy \leq
c \, t^{-1} r_t^{\varepsilon_0}\,.
\end{align}
\end{lemma}
\noindent{\bf Proof.}
Let $F(t,x,y):=\sum_{n=0}^\infty |q_n(t,x,y)|$, where $q_n$ are Borel functions defined in \eqref{def:q_n-gen}.
We introduce a Borel set $\mathcal{S}_q:=\{(t,x,y)\colon F(t,x,y) <\infty \}$.
Using \eqref{ineq:q_n-gen-b}
and Lemma~\ref{lem:time_conv},
cf. the
proof of Fact~\ref{fact:Q_well_Bb},
for any compact $D\subset \RR^d$,
$$
\int_{\RR^d} F(t,x,y)\,dy<\infty\,,\qquad\quad
\int_0^1 \int_{D}\int_{\RR^d} F(t,x,y)\,dy\,dxdt<\infty\,,
$$
hence $\mathcal{S}_q^{t,x}$ and $\mathcal{S}_q$ are of full measure.
By
Fact~\ref{fact:Q_well_Bb}
we can calculate the following limit pointwise
\begin{align*}
Q_tf(x)=\lim_{N\to \infty} \sum_{n=0}^N Q_t^n f(x)\,.
\end{align*}
Therefore, by
\eqref{eq:Q_n_via_q_n}, the fact that $\mathcal{S}_q^{t,x}$ is of full measure, and the dominated convergence theorem,
\begin{align*}
Q_tf(x)
&=\lim_{N\to\infty} \int_{\RR^d} \sum_{n=0}^N q_n(t,x,y)f(y)\,dy
=\lim_{N\to\infty} \int_{\RR^d} {\bf 1}_{\mathcal{S}_q} (t,x,y)\sum_{n=0}^N q_n(t,x,y)f(y)\,dy\\
&= \int_{\RR^d} \lim_{N\to\infty} {\bf 1}_{\mathcal{S}_q}(t,x,y)\sum_{n=0}^N q_n(t,x,y)f(y)\,dy
=\int_{\RR^d} q(t,x,y) f(y)\,dy\,.
\end{align*}
Finally,
\eqref{ineq:q-gen-b}
is equivalent to Fact~\ref{fact:Q_well_Bb} part (2).
{\hfill $\Box$ \bigskip}
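For the reader's convenience we record why the series in \eqref{def:q-gen} is summable after integration in $dy$. By the identity $B(a,b)=\Gamma(a)\Gamma(b)/\Gamma(a+b)$ the product in \eqref{ineq:q_n-gen-b} telescopes,
\begin{align*}
\prod_{k=1}^n B\!\left(\frac{\varepsilon_0}{2},\frac{k\varepsilon_0}{2}\right)
=\prod_{k=1}^n \frac{\Gamma(\frac{\varepsilon_0}{2})\,\Gamma(\frac{k\varepsilon_0}{2})}{\Gamma(\frac{(k+1)\varepsilon_0}{2})}
=\frac{\Gamma(\frac{\varepsilon_0}{2})^{n+1}}{\Gamma(\frac{(n+1)\varepsilon_0}{2})}\,,
\end{align*}
and since the Gamma function grows faster than any geometric sequence, $\sum_{n=0}^\infty C_3^{n+1}\,\Gamma(\frac{\varepsilon_0}{2})^{n+1}/\Gamma(\frac{(n+1)\varepsilon_0}{2})<\infty$, as in the series defining the Mittag--Leffler function. The remaining powers $r_t^{n\varepsilon_0}$ are harmless for $t\in(0,1]$ once $r_t$ is bounded there, which leads to \eqref{ineq:q-gen-b}.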
We deal with the operator $P_t$ defined in \eqref{def:op_P}.
\begin{proposition}\label{prop:p-gen}
Assume \eqref{op:S3}.
For all $t\in (0,1]$, $x\in\RR^d$ and $f\in B_b(\RR^d)$,
\begin{align}\label{eq:P_via_p}
P_tf(x) = \int_{\RR^d} p(t,x,y) f(y)\,dy\,,
\end{align}
where a Borel function $p$ is given for all $t\in (0,1]$, $x,y\in\RR^d$ by the
following formula with absolutely convergent
double integral
\begin{align}\label{def:p-gen}
p(t,x,y):=p_0(t,x,y)+ \int_0^t \int_{\RR^d}
{\bf 1}_{\mathcal{S}_p}(t,x,y)\, p_0(t-s,x,z)q(s,z,y)\,dzds\,.
\end{align}
The set $\mathcal{S}_p\subseteq (0,1]\times\RR^d\times\RR^d$ as well as every section $\mathcal{S}_p^{t,x}\subseteq \RR^d$ are Borel and of full measure.
Further, there exists $c>0$ such that for all $t\in (0,1]$ and $x\in\RR^d$,
\begin{align}\label{eq:p-gen}
\int_{\RR^d} | p(t,x,y)| \,dy \leq c\,.
\end{align}
\end{proposition}
\noindent{\bf Proof.}
Let
$F(t,x,s,z,y):={\bf 1}_{0<s<t} \,|p_0(t-s,x,z)q(s,z,y)|$, where $q$ is a Borel function given by \eqref{def:q-gen}.
We define a Borel set
$\mathcal{S}_p:=\{(t,x,y)\colon \int_{{\mathbb R}}\int_{\RR^d} F(t,x,s,z,y)\,dzds<\infty \}$.
Integrating over $\RR^d$ against $dy$
or over $(0,1]\times D \times \RR^d$
against $dy\,dxdt$, for any compact $D\subset \RR^d$,
and using
\eqref{ineq:q-gen-b},
\eqref{ineq:H1-gen} and
Lemma~\ref{lem:time_conv},
we conclude that
$\mathcal{S}_p^{t,x}$ and $\mathcal{S}_p$ are of full measure.
This also justifies the change of the order of integration:
\begin{align*}
\int_0^t P_{t-s}^0\, Q_s f (x)\, ds
&=\int_0^t \int_{\RR^d} p_0(t-s,x,z)\int_{\RR^d} q(s,z,y)f(y)\,dy\,dzds\\
&=\int_{\RR^d} \int_0^t\int_{\RR^d} {\bf 1}_{\mathcal{S}_p}(t,x,y) p_0(t-s,x,z) q(s,z,y)\,dzds\, f(y)\,dy\,.
\end{align*}
Now, \eqref{eq:P_via_p} follows from \eqref{def:op_P}, while
\eqref{eq:p-gen} is equivalent to
Fact~\ref{fact:P_well_Bb} part (2).
{\hfill $\Box$ \bigskip}
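Before moving on, let us informally record the structure behind \eqref{def:p-gen}. Writing `$\circledast$' for the space-time convolution $(f\circledast g)(t,x,y):=\int_0^t\int_{\RR^d} f(t-s,x,z)\,g(s,z,y)\,dzds$ (a notation introduced only for this sketch), the definitions \eqref{def:q_n-gen} and \eqref{def:q-gen} yield $q_n=q_0\circledast q_{n-1}$ and $q=q_0+q_0\circledast q$, while \eqref{def:p-gen} reads $p=p_0+p_0\circledast q$. At least formally, ignoring the integrability issues resolved above, using $q_0=-(\partial_t-{\mathcal L}_x)p_0$ and the fact that differentiating the time convolution produces the boundary term $q$ (since $P_t^0$ tends to the identity as $t\to 0^+$, cf. {\rm (H0)}),
\begin{align*}
(\partial_t-{\mathcal L}_x)\,p
=(\partial_t-{\mathcal L}_x)\,p_0 + q - q_0\circledast q
=-q_0 + (q_0 + q_0\circledast q) - q_0\circledast q = 0\,.
\end{align*}
This formal computation is what motivates the choice of $q_0$ imposed by the parametrix method.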
\section{Applications}\label{sec:appl}
In this section we finally treat the operator \eqref{def:operator}.
We assume that either $(\mathbf{A})$ or $(\mathbf{A\!^\ast})$ holds, see
Section~\ref{sec:set}.
In what follows $T>0$ is {arbitrary and fixed}.
Throughout this section, for $t>0$, we use
\begin{align}\label{def:r_t}
r_t:=h^{-1}(1/t)\,,
\end{align}
which satisfies {\rm ({\bf R})} for all $t\in (0,T]$, see \cite[Lemma~5.1]{MR3996792} and
Remark~\ref{rem:1toT}.
In the subsequent sections we prove that
the general approach from
Section~\ref{sec:gen} applies and so all the conclusions hold.
While using the results of Section~\ref{sec:sac}
or~\ref{sec:a-frozen},
the reader may conveniently assume
that either $(\mathbf{A})$ or $(\mathbf{A\!^\ast})$ holds.
In fact, in each of those sections we
start with limited assumptions and
gradually strengthen them in the text between consecutive results.
\vspace{\baselineskip}
\subsection{Zero order approximation and error term}\label{ssec:a-zoet}
In this section we specify
$P_t^0$
and $Q_t^0$, both of which shall be integral operators.
We start with the so-called freezing of the coefficients of the operator \eqref{def:operator}.
For $w\in\RR^d$ we define
an operator
\begin{align*}
{\mathcal L}^{\mathfrak{K}_w} f(x):=b(w)\cdot \nabla f(x)+ \int_{\RR^d}\Big(f(x+z)-f(x)- {\bf 1}_{|z|<1} \left<z,\nabla f(x)\right>\! \Big) \,\kappa(w,z)J(z)dz\,,
\end{align*}
which makes sense whenever $f\in C_0^2(\RR^d)$.
Already when considered for $f\in C_c^{\infty}(\RR^d)$, i.e., for smooth functions $f:\RR^d \to {\mathbb R}$ with compact support, it uniquely determines a L{\'e}vy process, which has a transition density that we denote by
$p^{\mathfrak{K}_w}(t,x)$,
see \cite[Section~6]{MR3996792}.
Recall also that the density is given by the Fourier inversion formula \cite[(97)]{MR3996792}. In
Section~\ref{sec:a-frozen}
we provide more properties of that function, which are necessary in the discussion below.
For $t>0$, $x,y\in\RR^d$
we write
$p^{\mathfrak{K}_w}(t,x,y):=p^{\mathfrak{K}_w}(t,y-x)$, and we let
\begin{align*}
p_0(t,x,y)&:=p^{\mathfrak{K}_y}(t,x,y)\,,\\
q_0(t,x,y)&:=-(\partial_t-{\mathcal L}_x)\,p_0(t,x,y)\,.
\end{align*}
We record that those functions are jointly continuous in all three variables. This is a consequence of Lemma~\ref{lem:a-continuity} and \eqref{eq:par_t_is_L}.
The specific choice of $q_0$ by means of $p_0$ is imposed by the parametrix method \cite[Section~1.7]{MR3701414}, \cite[Section~5]{KKS}.
In our considerations we substantially use that for $r>0$,
\begin{align}
{\mathcal L}_x^{\mathfrak{K}_{v}} p^{\mathfrak{K}_{w}}(t,x,y)
=
b_{r}^{v}\cdot \nabla_x p^{\mathfrak{K}_{w}}(t,x,y)
+
\int_{\RR^d}\delta_{r}^{\mathfrak{K}_{w}}(t,x,y;z)\kappa(v,z)J(z)dz\,, \label{eq:pL}
\end{align}
and thus
\begin{align}\label{eq:delta-alt}
\begin{aligned}
q_0(t,x,y)&=-(\partial_t-{\mathcal L}_x^{\mathfrak{K}_x}) p^{\mathfrak{K}_y}(t,x,y)
=({\mathcal L}_x^{\mathfrak{K}_x}-{\mathcal L}_x^{\mathfrak{K}_y}) p^{\mathfrak{K}_y}(t,x,y) \\
&=
(b_{r}^{x}-b_{r}^{y}) \cdot \nabla_x p^{\mathfrak{K}_y}(t,x,y)
+
\int_{\RR^d} \delta_{r}^{\mathfrak{K}_y} (t,x,y;z) (\kappa(x,z)-\kappa(y,z))J(z)dz.
\end{aligned}
\end{align}
Now, for $t>0$, $x\in\RR^d$ and $f\in B_b(\RR^d)$ we set
\begin{align*}
P_t^0 f (x) &:= \int_{\RR^d} p_0(t,x,y) f(y)\,dy\,,\\
Q_t^0 f (x) &:= \int_{\RR^d} q_0(t,x,y) f(y)\,dy\,.
\end{align*}
These operators are well defined, as verified in Lemma~\ref{lem:well_def} below.
We let
$$
\boxed{
0<\varepsilon_0< \min\{\alpha_h\land (\sigma \indhei{j})+ \indcsi{j} -1\colon\,\, j=0,\ldots,N\}\,,}
$$
and under $(\mathbf{A})$ additionally
$$
\boxed{
0<\varepsilon_0\leq \alpha_h+\sigma-1\,.}
$$
\vspace{\baselineskip}
\subsection{Hypothesis - construction}\label{ssec:a-hc}
Our aim in this section is to
show that
with the above choice of $P_t^0$ and $Q_t^0$, the conditions
\eqref{op:S1}, \eqref{op:S2}, and {\rm (H0)} -- {\rm (H4)}
are satisfied for $t\in (0,T]$.
\begin{lemma}\label{lem:well_def}
There exist constants $C_1, C_3>0$ such that for all $t\in (0,T]$, $x\in\RR^d$,
\begin{align}
\int_{\RR^d} p_0(t,x,y)\,dy &\leq C_1\,, \label{ineq:H1} \\
\int_{\RR^d} |q_0(t,x,y)|\,dy \leq \, & C_3 \, t^{-1} r_t^{\varepsilon_0}\,. \label{ineq:H3}
\end{align}
\end{lemma}
\noindent{\bf Proof.}
The inequality \eqref{ineq:H1}
follows from Proposition~\ref{prop:gen_est}, Corollary~\ref{cor-shifts} and Lemma~\ref{lem:conv}(a).
We focus on \eqref{ineq:H3} and $t\in (0,\mathfrak{t_0}]$. Lemma~\ref{lem:q_0-aux} guarantees that
under $(\mathbf{A\!^\ast})$ we have
\begin{align*}
\int_{\RR^d} |q_0(t,x,y)|\,dy\leq c
\sum_{j=0}^{N} \int_{\RR^d} \erry{\indcsi{j}-1}{\indhei{j}}(t,x,y)\,dy\,,
\end{align*}
while under $(\mathbf{A})$
by Lemmas~\ref{lem:q_0-aux*1}
and~\ref{lem:q_0-aux*2},
\begin{align*}
\int_{\RR^d} |q_0(t,x,y)|\,dy
&=\int_{|x-y|<R} |q_0(t,x,y)|\,dy
+ \int_{|x-y|\geq R} |q_0(t,x,y)|\,dy\\
&\leq
c
\sum_{j=0}^{N} \int_{\RR^d} \erry{\indcsi{j}-1}{\indhei{j}}(t,x,y)\,dy
+ c t^{-1}r_t^{\alpha_h+\sigma-1}\,.
\end{align*}
Now, due to the definition of $\varepsilon_0$, for each $j=0,\ldots,N$ there exists
$0<\ell_j < \alpha_h \land (\sigma \indhei{j})$
such that $\varepsilon_0=\ell_j+\indcsi{j}-1$.
Applying Corollary~\ref{cor-shifts} and Lemma~\ref{lem:conv}(a) with
$\beta_0=\ell_j$ we get
$$
\int_{\RR^d} \erry{\indcsi{j}-1}{\indhei{j}}(t,x,y)\,dy
\leq \frac{c_1}{\alpha_h-\ell_j}t^{-1} r_t^{\ell_j+\indcsi{j}-1}= \frac{c_1}{\alpha_h-\ell_j} t^{-1} r_t^{\varepsilon_0}\,.
$$
Obviously, $r_t^{\alpha_h+\sigma-1}\leq r_t^{\varepsilon_0}$ for $t\in (0,\mathfrak{t_0}]$.
This ends the proof of \eqref{ineq:H3} if $0<T\leq \mathfrak{t_0}$.
Now, let $T>\mathfrak{t_0}$.
Applying
\eqref{eq:delta-alt} with $r=1$,
\begin{align*}
| q_0(t,x,y)|
=
&\Big|\,(b_{r_1}^x-b_{r_1}^y) \cdot \nabla_x p^{\mathfrak{K}_y}(t,x,y)
+
\int_{\RR^d} \delta_{1}^{\mathfrak{K}_y} (t,x,y;z)(\kappa(x,z)-\kappa(y,z))J(z)dz \,\Big|\,.
\end{align*}
Applying
\eqref{set:indrf-cancellation-scale} and
\eqref{ineq:L1_uni_time-1}
to the first term of the sum,
and
\eqref{set:J},
\eqref{set:k-bound},
\eqref{ineq:L1_uni_time-2},
\eqref{ineq:L1_uni_time-3}
to the second one, we get
that for $t\in [\tau,T]$,
\begin{align}\label{ineq:q_0-away}
|q_0(t,x,y)|\leq c \left(\Upsilon_{\tau}(y-x)
+\int_{|z|\geq 1} \Upsilon_{\tau}(y-x-z)\nu(|z|)dz\right).
\end{align}
We take $\tau = \mathfrak{t_0}$ in \eqref{ineq:q_0-away}, integrate the inequality against $dy$ and use \cite[Lemma~5.6]{MR3996792}, which yields $\int_{\RR^d}|q_0(t,x,y)|\,dy\leq c$. It remains to notice that $1\leq (T r_\mathfrak{t_0}^{-\varepsilon_0}) t^{-1}r_t^{\varepsilon_0}$.
{\hfill $\Box$ \bigskip}
Clearly, $P_t^0 f$ and $Q_t^0 f$ are Borel functions, see \cite{MR4226142}. We have the following conclusion.
\begin{corollary}\label{cor:S1H1H3}
The condition \eqref{op:S1} as well as hypotheses {\rm (H1)} and {\rm (H3)} are satisfied.
\end{corollary}
\vspace{0.25\baselineskip}
\begin{lemma}
For all $t>0$ we have
\begin{align*}
\lim_{s\to t}\sup_{x\in\RR^d} \int_{\RR^d} | p_0(t,x,y) - p_0(s,x,y)|\,dy&=0\,,\\
\lim_{s\to t}\sup_{x\in\RR^d} \int_{\RR^d} | q_0(t,x,y) - q_0(s,x,y)|\,dy&=0\,.
\end{align*}
\end{lemma}
\noindent{\bf Proof.}
Using \eqref{ineq:L1_uni_time-1} we get for $t,s\geq \tau$,
\begin{align*}
|p_0(t,x,y) - p_0(s,x,y)|
= | p^{\mathfrak{K}_y}(t,x,y)-p^{\mathfrak{K}_y}(s,x,y)|
\leq c |t-s|\, \Upsilon_{\tau}(y-x)\,.
\end{align*}
It suffices now to integrate that inequality, see \cite[Lemma~5.6]{MR3996792}.
Applying
\eqref{eq:delta-alt} with $r=1$,
\begin{align*}
| q_0(t,x,y) - q_0(s,x,y)|
=
&\Big|\,(b_{r_1}^x-b_{r_1}^y) \cdot \Big[ \nabla_x p^{\mathfrak{K}_y}(t,x,y)
-\nabla_x p^{\mathfrak{K}_y}(s,x,y)\Big]\\
&+
\int_{\RR^d} \Big[ \delta_{1}^{\mathfrak{K}_y} (t,x,y;z)-\delta_{1}^{\mathfrak{K}_y} (s,x,y;z)\Big] (\kappa(x,z)-\kappa(y,z))J(z)dz \,\Big|\,.
\end{align*}
Further, applying
\eqref{set:indrf-cancellation-scale} and
\eqref{ineq:L1_uni_time-1}
to the first term of the sum,
and
\eqref{set:J},
\eqref{set:k-bound},
\eqref{ineq:L1_uni_time-2},
\eqref{ineq:L1_uni_time-3}
to the second one, we have
for $t,s\geq \tau$,
$$
| q_0(t,x,y) - q_0(s,x,y)|
\leq c |t-s| \, \left(\Upsilon_{\tau}(y-x)
+\int_{|z|\geq 1} \Upsilon_{\tau}(y-x-z)\nu(|z|)dz\right).
$$
Integrating against $dy$ and using \cite[Lemma~5.6]{MR3996792} gives the result.
{\hfill $\Box$ \bigskip}
\begin{corollary}
The hypotheses {\rm (H2)} and {\rm (H4)} are satisfied.
\end{corollary}
\vspace{0.25\baselineskip}
\begin{lemma}
The condition \eqref{op:S2} is satisfied.
\end{lemma}
\noindent{\bf Proof.}
Note that
$P_t^0 f(x) = \int_{\RR^d} p_0(t,x,y+x)f(y+x)dy$ and by \eqref{ineq:L1_uni_time-1},
$$
|p_0(t,x,y+x)f(y+x)|\leq c \Upsilon_t(y) \|f\|_{\infty}\,.
$$
This estimate justifies the use of the dominated convergence theorem, see
\cite[Lemma~5.6]{MR3996792}; hence the continuity follows from that of the integrand, see Lemma~\ref{lem:a-continuity}. Convergence to zero at infinity follows from that of $f$ and the boundedness of $p_0(t,x,y)$.
Similarly, $Q_t^0 f(x) = \int_{\RR^d} q_0(t,x,y+x)f(y+x)dy$, and by \eqref{ineq:q_0-away},
$$|q_0(t,x,y+x)f(y+x)|\leq c\left( \Upsilon_t(y)+\int_{|z|\geq 1}
\Upsilon_t(y-z) \nu(|z|)dz
\right)\|f\|_{\infty}\,.$$
This ends the proof, see
Lemma~\ref{lem:a-continuity} and \eqref{eq:par_t_is_L}.
{\hfill $\Box$ \bigskip}
We show the strong continuity at zero.
\begin{lemma}
For every $f\in C_0(\RR^d)$,
\begin{align*}
\lim_{t\to 0^+}\sup_{x\in\RR^d} \left|\int_{\RR^d} p_0(t,x,y) f(y)\,dy - f(x)\right| =0\,.
\end{align*}
\end{lemma}
\noindent{\bf Proof.}
We have
\begin{align*}
\int_{\RR^d} p_0(t,x,y) f(y)\,dy - f(x)
= \int_{\RR^d} p^{\mathfrak{K}_y}(t,x,y)\big[f(y)-f(x)\big]\,dy
+ \left[ \int_{\RR^d}p^{\mathfrak{K}_y}(t,x,y)\,dy-1\right] f(x) \,.
\end{align*}
Since $f$ is bounded and due to
Proposition~\ref{prop:strong_at_zero}
the second term converges to zero in the desired fashion.
Let $\varepsilon>0$. Since $f\in C_0(\RR^d)$, it is uniformly continuous, so there exists $\delta>0$ such that $|f(y)-f(x)|\leq \varepsilon$ whenever $|y-x|\leq \delta$.
Thus, by \eqref{ineq:H1} we get
\begin{align*}
\int_{\RR^d} p^{\mathfrak{K}_y}(t,x,y)| f(y)-f(x)|\,dy
\leq
\varepsilon \,C_1
+2 \|f\|_{\infty}\int_{|x-y|>\delta} p^{\mathfrak{K}_y}(t,x,y)\,dy\,.
\end{align*}
Again, the second term here converges to zero by Lemma~\ref{lem:strong_at_zero-aux}.
This ends the proof, because the choice of $\varepsilon$ was arbitrary.
{\hfill $\Box$ \bigskip}
\begin{corollary}\label{cor:H0}
The hypothesis {\rm (H0)} is satisfied.
\end{corollary}
\begin{lemma}\label{lem:strong-F}
We have $P_t^0,Q_t^0\colon B_b(\RR^d)\to C_b(\RR^d)$ for all $t\in (0,T]$.
\end{lemma}
\noindent{\bf Proof.}
Since \eqref{op:S1} holds by Corollary~\ref{cor:S1H1H3},
it suffices to show the continuity of $P_t^0f$ and $Q_t^0f$. Fix $t\in (0,T]$ and $R>0$. By \cite[Corollary~5.10]{MR3996792} for all $|x|\leq R$ and $y\in\RR^d$,
\begin{align}\label{ineq:remove_x}
\Upsilon_t(y-x)\leq c \Upsilon_t(y),
\end{align}
for some $c>0$. Thus, by \eqref{ineq:L1_uni_time-1} we have $p_0(t,x,y) \leq c \Upsilon_t(y)$,
while by \eqref{ineq:q_0-away} we get
$|q_0(t,x,y)|\leq c\Upsilon_t(y)+ c\int_{|z|\geq 1} \Upsilon_{t}(y-z)\nu(|z|)dz$.
Recall that $p_0$ and $q_0$ are continuous.
Now, the desired continuity follows by
\cite[Corollary~5.6]{MR3996792} and the dominated convergence theorem.
{\hfill $\Box$ \bigskip}
In summary, we verified \eqref{op:S1}, \eqref{op:S2} and {\rm (H0)} -- {\rm (H4)} for $t\in (0,T]$, i.e., the foundations of the functional analytic approach, and therefore all the results of Section~\ref{sec:f_a_a} are at hand. We continue more detailed analysis in the next section.
\vspace{\baselineskip}
\subsection{Hypothesis - conjectures}\label{ssec:b-hc}
The aim in this section is to
specify the operator $(L,D(L))$ and the set $\mathcal{D}$ such that
{\rm (L1)} -- {\rm (L3)} are satisfied,
and to ensure that {\rm (H5)} and {\rm (H6)} hold true. To be more precise,
we consider $L f:={\mathcal L} f$ given by \eqref{def:operator} for $f\in D({\mathcal L})$, where
\begin{align*}
D({\mathcal L}):=\big\{f\colon\, &\nabla f(x) \, \mbox{\it exists and the integral in \eqref{def:operator}}\\
&\mbox{\it converges absolutely for every } x\in\RR^d \big\}\,,
\end{align*}
and we choose
$$\mathcal{D}:=C_0^2(\RR^d)\,.$$
\vspace{\baselineskip}
\noindent
Note that {\rm (L1)},
{\rm (L2)}
and {\rm (L3)} are indeed satisfied.
We need a few direct computations before we
can conclude that {\rm (H5)} and {\rm (H6)} are true on $(0,T]$, see Remarks~\ref{rem:H5_H6} and~\ref{rem:H5-ver}.
From the technical point of view, the main ingredient of all the proofs is
Corollary~\ref{cor:L1_uni_time},
which provides various estimates of the integral kernel $p_0(t,x,y)$ of the zero order approximation $P_t^0f(x)$.
We point out that in our setting the choice of $D({\mathcal L})$ and $\mathcal{D}$ is rather flexible. Another valid choice, which appears in the literature, is $C_0^2(\RR^d)$ as $D({\mathcal L})$, and $C_c^{\infty}(\RR^d)$ as $\mathcal{D}$.
In particular, the choice has to guarantee that $P_{t,\varepsilon}f\in D({\mathcal L})$ for any $f\in C_0(\RR^d)$, see Remark~\ref{rem:H5_H6}.
\begin{lemma}\label{lem:point-1}
For all $t\in (0,T]$, $x\in\RR^d$
and $f\in C_0(\RR^d)$,
\begin{align*}
\partial_t P_t^0 f (x) = \int_{\RR^d} \partial_t \,p_0(t,x,y) f(y)\,dy\,,
\qquad
\quad
{\mathcal L}_x P_t^0 f (x) = \int_{\RR^d} {\mathcal L}_x \,p_0(t,x,y) f(y)\,dy\,.
\end{align*}
In particular,
$$
Q_t^0 f (x) = -(\partial_t-{\mathcal L}_x) P_t^0f(x)\,.
$$
\end{lemma}
\noindent{\bf Proof.}
The first equality follows from the
dominated convergence theorem
justified by
\eqref{ineq:L1_uni_time-1}
and \cite[Lemma~5.6]{MR3996792}.
The rest of the result will follow if we can differentiate under the integral in the first spatial variable (by the dominated convergence theorem)
and change the order of integration (by Fubini's theorem),
which validates the steps below, where
$
F(x):=P_t^0f(x)
$,
\begin{align*}
{\mathcal L}_x F(x)
&=b(x)\cdot \nabla_x
F(x)+
\int_{\RR^d}\Big(F(x+z)-F (x)-{\bf 1}_{|z|<1} \left<z,\nabla_x F(x)\right>\!\Big)\kappa(x,z)J(z)dz\\
&=\int_{\RR^d}b(x)\cdot \nabla_x p^{\mathfrak{K}_y}(t,x,y) f(y)\,dy
+\int_{\RR^d}\int_{\RR^d}
\delta_1^{\mathfrak{K}_y}(t,x,y;z)f(y)\,dy\,
\kappa(x,z)J(z)dz\\
&=\int_{\RR^d}\Bigg( b(x)\cdot \nabla_x p^{\mathfrak{K}_y}(t,x,y)
+\int_{\RR^d}
\delta_1^{\mathfrak{K}_y}(t,x,y;z)\,
\kappa(x,z)J(z)dz\Bigg)
f(y)\,dy\\
&= \int_{\RR^d} {\mathcal L}_x p_0(t,x,y) f(y)\,dy\,,
\end{align*}
see \eqref{def:operator} and \eqref{eq:pL}.
The differentiability in the spatial variable
is justified by
\cite[Lemma~5.6]{MR3996792}
and the fact that for all $t\in [\tau,T]$, $x,y\in\RR^d$ and $0<|\varkappa|<1$
we have
\begin{align}\label{lem:point-1-aux}
\begin{aligned}
\Big|\frac1{\varkappa}\left( p^{\mathfrak{K}_y}(t,x+\varkappa e_i,y)-p^{\mathfrak{K}_y}(t,x,y)\right)\Big|
&\leq \int_0^1 |\partial_{x_i}p^{\mathfrak{K}_y}(t,x+ \theta \varkappa e_i,y) |\,d\theta \\
&\leq c \Upsilon_{\tau}(y-x-\theta \varkappa e_i)
\leq c \Upsilon_{\tau}(y-x)\,,
\end{aligned}
\end{align}
where the inequalities follow from
\eqref{ineq:L1_uni_time-1}
and \cite[Corollary~5.10]{MR3996792}, since $|\theta\varkappa e_i|<1= (r_{\tau}^{-1}) r_{\tau}$.
Fubini's theorem can be used due to
\eqref{ineq:L1_uni_time-2}
and \eqref{ineq:L1_uni_time-3}
accompanied by
\eqref{set:J},
\eqref{set:k-bound}
and
\cite[Lemma~5.6]{MR3996792}.
In particular, we get $P_t^0f \in D({\mathcal L})$.
{\hfill $\Box$ \bigskip}
We shall now obtain
results
similar to Lemma~\ref{lem:point-1}, but
for the integral part of \eqref{def:approx_sol}.
\begin{lemma}\label{lem:point-2}
For all $t,\varepsilon>0$, $t+\varepsilon\leq T$,
$x\in\RR^d$ and $f\in C_0(\RR^d)$,
\begin{align*}
\partial_t \int_0^t P_{t-s+\varepsilon}^0\, Q_s f (x)\, ds
= P_{\varepsilon}^0\, Q_t f(x)
+ \int_0^t \partial_t P_{t-s+\varepsilon}^0\, Q_s f (x)\, ds\,.
\end{align*}
\end{lemma}
\noindent{\bf Proof.}
All the integrals are absolutely convergent Lebesgue integrals.
We divide the following expression by $t-t_1$,
\begin{align*}
&\int_0^t P_{t-s+\varepsilon}^0\, Q_s f (x)\, ds
- \int_0^{t_1} P_{t_1-s+\varepsilon}^0\, Q_s f (x)\, ds\\
&\quad=\int_{t_1}^t P_{t-s+\varepsilon}^0\, Q_s f (x)\, ds
+\int_0^{t_1} \left[P_{t-s+\varepsilon}^0\, Q_s f (x)-P_{t_1-s+\varepsilon}^0\, Q_s f (x)\right] ds\,.
\end{align*}
Then due to the continuity in
Lemma~\ref{lem:continuity} part (5)
the first term on the right hand side converges to $P_{\varepsilon}^0\, Q_{t_1} f(x)$ as $t\to t_1$.
We rewrite the second term as
$$
\int_0^{t_1} \int_{\RR^d}
\big[p_0(t-s+\varepsilon,x,y) - p_0(t_1-s+\varepsilon,x,y)\big]\, Q_sf(y)\, dy ds\,.
$$
After dividing by $t-t_1$,
applying the bounds in \eqref{ineq:L1_uni_time-1}
and part (2) of Fact~\ref{fact:Q_well_Bb},
and using integrability
from \cite[Lemma~5.6]{MR3996792}
and Lemma~\ref{lem:time_conv},
we can
pass to the limit $t\to t_1$ under the integral to obtain
$
\int_0^{t_1} \int_{\RR^d} \partial_{t_1}
p_0(t_1-s+\varepsilon,x,y)\, Q_sf(y)\, dy ds\,.
$
The first equality in
Lemma~\ref{lem:point-1} finally yields the result.
{\hfill $\Box$ \bigskip}
\begin{lemma}\label{lem:point-3}
For all $t,\varepsilon>0$, $t+\varepsilon\leq T$,
$x\in\RR^d$ and $f\in C_0(\RR^d)$,
\begin{align*}
{\mathcal L}_x \int_0^t P_{t-s+\varepsilon}^0\, Q_s f (x)\, ds
= \int_0^t {\mathcal L}_x P_{t-s+\varepsilon}^0\, Q_s f (x)\, ds\,.
\end{align*}
\end{lemma}
\noindent{\bf Proof.}
Similarly to Lemma~\ref{lem:point-1},
the result will follow if we can differentiate under the integrals in the first spatial variable
and change the order of integration,
which validates the steps below, where now
$
F(x):=\int_0^t P_{t-s+\varepsilon}^0\, Q_s f (x)\, ds
$,
\begin{align*}
{\mathcal L}_x F(x)
&=b(x)\cdot \nabla_x
F(x)+
\int_{\RR^d}\Big(F(x+z)-F (x)-{\bf 1}_{|z|<1} \left<z,\nabla_x F(x)\right>\!\Big)\kappa(x,z)J(z)dz\\
&=
\int_0^t\int_{\RR^d}b(x)\cdot \nabla_x p^{\mathfrak{K}_y}(t-s+\varepsilon,x,y) Q_sf(y)\,dyds\\
&\quad +\int_{\RR^d}\int_0^t \int_{\RR^d}
\delta_1^{\mathfrak{K}_y}(t-s+\varepsilon,x,y;z)Q_sf(y)\,dyds\,
\kappa(x,z)J(z)dz\\
&=\int_0^t\int_{\RR^d} \Bigg(b(x)\cdot \nabla_x p^{\mathfrak{K}_y}(t-s+\varepsilon,x,y)\\
&\qquad\qquad\qquad+
\int_{\RR^d}\delta_1^{\mathfrak{K}_y}(t-s+\varepsilon,x,y;z)\kappa(x,z)J(z)dz \Bigg) Q_s f(y)\,dyds\\
&=\int_0^t\int_{\RR^d} {\mathcal L}_x p_0(t-s+\varepsilon,x,y)Q_s f(y) \,dyds = \int_0^t {\mathcal L}_x P_{t-s+\varepsilon}^0\, Q_sf(x)\,ds\,.
\end{align*}
In the last equality we have used
Lemma~\ref{lem:point-1} with $\tilde{f}(x)=Q_sf(x)$, together with
Fact~\ref{fact:PQ_well_C0}.
The differentiability
in the spatial variable
is validated since by \eqref{lem:point-1-aux} and
Fact~\ref{fact:Q_well_Bb} we get
\begin{align*}
\Big|\frac1{\varkappa}\left(p^{\mathfrak{K}_y}(t-s+\varepsilon,x+\varkappa e_i,y)-p^{\mathfrak{K}_y}(t-s+\varepsilon,x,y)\right)Q_sf(y)\Big|
\leq c \Upsilon_{\varepsilon}(y-x) s^{-1} r_s^{\varepsilon_0} \|f\|_{\infty}\,,
\end{align*}
where the right hand side is integrable over $(0,t)\times \RR^d$ against
$dyds$, see
\cite[Lemma~5.6]{MR3996792} and
Lemma~\ref{lem:time_conv}.
Fubini's theorem is in force by
bounds provided by
\eqref{ineq:L1_uni_time-2},
\eqref{ineq:L1_uni_time-3},
Fact~\ref{fact:Q_well_Bb},
\eqref{set:J},
\eqref{set:k-bound},
and
integrability properties in
\cite[Lemma~5.6]{MR3996792},
Lemma~\ref{lem:time_conv},
and that of $\nu(|z|)dz$.
In particular, we get $ \int_0^t P_{t-s+\varepsilon}^0\, Q_s f\, ds \in D({\mathcal L})$.
{\hfill $\Box$ \bigskip}
We are in a position to handle
$
(\partial_t-{\mathcal L}) P_{t,\varepsilon}f
$.
Note that from \eqref{def:approx_sol},
\begin{align}\label{eq:approx_sol_point}
P_{t,\varepsilon}f(x) = P_{t+\varepsilon}^0 f(x) + \int_0^t P_{t-s+\varepsilon}^0\, Q_s f(x)\, ds\,,
\end{align}
since the evaluation at a point is a continuous functional on $C_0(\RR^d)$ and therefore commutes with Bochner integral, see \cite{MR3617205}.
From Lemmas~\ref{lem:point-1}, \ref{lem:point-2} and~\ref{lem:point-3} we get an immediate corollary.
\begin{corollary}\label{cor:point-1-3}
For all $t,\varepsilon>0$, $t+\varepsilon\leq T$,
$x\in\RR^d$ and $f\in C_0(\RR^d)$,
the mapping
$t\mapsto P_{t,\varepsilon}f(x)$ is differentiable,
$P_{t,\varepsilon}f\in D({\mathcal L})$ and
\begin{align*}
\partial_t P_{t,\varepsilon}f(x) &= \partial_t P_{t+\varepsilon}^0 f (x) + P_{\varepsilon}^0\, Q_t f(x)
+ \int_0^t \partial_t P_{t-s+\varepsilon}^0\, Q_s f (x)\, ds\,,\\
{\mathcal L}_x P_{t,\varepsilon}f(x)&={\mathcal L}_x P_{t+\varepsilon}^0 f (x) + \int_0^t {\mathcal L}_x P_{t-s+\varepsilon}^0\, Q_s f (x)\, ds\,.
\end{align*}
\end{corollary}
\begin{corollary}
The hypothesis {\rm (H5)} is satisfied.
\end{corollary}
\noindent{\bf Proof.}
By Corollary~\ref{cor:point-1-3}
we can
apply the operator $(\partial_t-{\mathcal L}_x)$ to
$P_{t,\varepsilon}f(x)$ in compliance with
Remark~\ref{rem:H5_H6}, and
\begin{align}\label{eq:approx_sol_der_t_L}
(\partial_t-{\mathcal L}_x) P_{t,\varepsilon} f(x)
&=
\partial_t P_{t+\varepsilon}^0 f (x)+
P_{\varepsilon}^0\, Q_t f(x)
+ \int_0^t \partial_t P_{t-s+\varepsilon}^0\, Q_s f (x)\, ds \nonumber \\
&\quad - {\mathcal L}_x P_{t+\varepsilon}^0 f (x) - \int_0^t {\mathcal L}_x P_{t-s+\varepsilon}^0\, Q_s f (x)\, ds \nonumber \\
&= P_{\varepsilon}^0\, Q_t f(x)
-Q_{t+\varepsilon}^0 f(x) - \int_0^t Q_{t-s+\varepsilon}^0\, Q_s f(x)\,ds\,.
\end{align}
{\hfill $\Box$ \bigskip}
\begin{lemma}\label{lem:point-4}
For all $t,\varepsilon>0$, $t+\varepsilon\leq T$,
$x\in\RR^d$ and $f\in C_0(\RR^d)$,
\begin{align*}
{\mathcal L}_x \int_0^t P_{s,\varepsilon} f (x)\,ds =
\int_0^t {\mathcal L}_x P_{s,\varepsilon} f (x)\,ds \,.
\end{align*}
\end{lemma}
\noindent{\bf Proof.}
The following two equalities will entail the result (see Lemmas~\ref{lem:point-1} and~\ref{lem:point-3}),
\begin{align*}
&{\mathcal L}_x \int_0^t P_{s+\varepsilon}^0 f (x)\,ds =
\int_0^t \int_{\RR^d} {\mathcal L}_x \, p_0 (s+\varepsilon,x,y) f (y)\,dyds\,,\\
&{\mathcal L}_x \int_0^t \int_0^u P_{u-s+\varepsilon}^0\, Q_s f (x)\, dsdu
= \int_0^t \int_0^u \int_{\RR^d} {\mathcal L}_x p_0(u-s+\varepsilon,x,y)\, Q_s f (y)\,dy dsdu\,.
\end{align*}
Similarly to
Lemmas~\ref{lem:point-1} and~\ref{lem:point-3},
they hold if we can differentiate under the integrals in the first spatial variable
and change the order of integration.
The steps to verify that are very much like those in
Lemmas~\ref{lem:point-1} and~\ref{lem:point-3} (the main ingredients being Corollary~\ref{cor:L1_uni_time}, \cite[Corollary~5.10, Lemma~5.6]{MR3996792}, Fact~\ref{fact:Q_well_Bb} and Lemma~\ref{lem:time_conv}), and
therefore we omit further details. In particular,
the integrals are absolutely convergent and
the discussed objects belong to $D({\mathcal L})$.
{\hfill $\Box$ \bigskip}
\begin{corollary}\label{cor:H6}
The hypothesis {\rm (H6)} is satisfied.
\end{corollary}
\noindent{\bf Proof.}
The integrability conditions required for {\rm (H6)} follow
from the formulas in Corollary~\ref{cor:point-1-3}
and Lemma~\ref{lem:point-1},
and the estimates in \eqref{ineq:L1_uni_time-1} and Fact~\ref{fact:Q_well_Bb}
combined with integrability properties from
\cite[Lemma~5.6]{MR3996792}
and Lemma~\ref{lem:time_conv}.
The equality required for {\rm (H6)} is provided by Lemma~\ref{lem:point-4}.
The regularity indicated in Remark~\ref{rem:H5_H6}
is guaranteed by
Corollary~\ref{cor:point-1-3}
and Lemma~\ref{lem:point-4}.
{\hfill $\Box$ \bigskip}
To sum up, we showed that for $(L,D(L))=({\mathcal L}, D({\mathcal L}))$ and $\mathcal{D}=C_0^2(\RR^d)$
the conditions
\mbox{{\rm (L1)} -- {\rm (L3)}} are satisfied,
and the hypothesis
{\rm (H5)} and {\rm (H6)}
hold true on $(0,T]$. Therefore,
all the conclusions of Section~\ref{sec:coj-gen} are accessible. In particular,
Proposition~\ref{prop:coj123}
and~\ref{prop:coj4}
assert that
the conjectures {\rm (CoJ1)} -- {\rm (CoJ4)}
are true on $(0,T]$.
\vspace{\baselineskip}
\subsection{Integral kernels}\label{ssec:ker}
Clearly,
since $P_t^0$ and $Q_t^0$ were defined as integral operators, i.e., the condition
\eqref{op:S3} holds,
all the conclusions of
Section~\ref{sec:realization}
are valid for $t\in (0,T]$.
Beyond that, we have
\begin{lemma}\label{lem:p-non-neg}
For all $t\in (0,T]$, $x,y\in\RR^d$,
\begin{align}\label{ineq:p-non-neg}
{\bf 1}_{\mathcal{S}_{p_+}}(t,x,y)\, p(t,x,y) \geq 0\,.
\end{align}
The set $\mathcal{S}_{p_+} \subseteq (0,T]\times\RR^d\times\RR^d$
as well as every section $\mathcal{S}_{p_+}^{t,x}\subseteq \RR^d$ are Borel and of full measure.
\end{lemma}
\noindent{\bf Proof.}
We let $\mathcal{S}_{p_+}:=\{(t,x,y)\colon p(t,x,y)\geq 0\}$, which is a Borel set by Proposition~\ref{prop:p-gen}, and so is $\mathcal{S}_{p_+}^{t,x}=\{y\colon p(t,x,y)\geq 0\}$.
Using
Lusin's theorem, see \cite[Corollary on p. 56]{MR924157},
\eqref{eq:P_via_p}, \eqref{eq:p-gen}
and the dominated convergence theorem,
we extend {\rm (CoJ1)}:
for any $f\in B_b(\RR^d)$, if $f \geq 0$, then
$$
\int_{\RR^d} p(t,x,y) f(y)\,dy \geq 0\,.
$$
Let $B_n:=\{(t,x,y)\colon p(t,x,y)\leq -1/n\}$.
Clearly, each $B_n^{t,x}$ is of measure zero. Therefore, since ${\bf 1}_{B_n^{t,x}}(y)={\bf 1}_{B_n}(t,x,y)$, we get
$\int_0^T \int_{\RR^d}\int_{\RR^d} {\bf 1}_{B_n}(t,x,y) p(t,x,y) dy\,dxdt=0$.
This asserts that
$\mathcal{S}_{p_+}^{t,x}$
and
$\mathcal{S}_{p_+}$
are of full measure.
{\hfill $\Box$ \bigskip}
\begin{lemma}\label{lem:CoJ5}
The conjecture {\rm (CoJ5)} holds true.
\end{lemma}
\noindent{\bf Proof.} Let
$f_n(x)=\varphi(x/n)$,
where $\varphi\in C_c^{\infty}(\RR^d)$ is
such that $0\leq \varphi(z) \leq 1$,
$\varphi(z)=1$ if $|z|\leq 1$
and
$\varphi(z)=0$ if $|z|\geq 2$.
Clearly, $f_n\to 1$. By \eqref{eq:p-gen} and the dominated convergence theorem,
we also have
$P_t f_n\to P_t 1$
and $\int_0^t P_s\, {\mathcal L} f_n\,ds\to 0$.
We use {\rm (CoJ4)} for $f_n$ and let $n\to\infty$.
{\hfill $\Box$ \bigskip}
\begin{remark}\label{rem:t-global}
Since $T>0$ is arbitrary, we actually have that
\eqref{eq:Q_n_via_q_n},
\eqref{def:q_n-gen},
\eqref{eq:Q_via_q},
\eqref{def:q-gen},
\eqref{eq:P_via_p},
\eqref{def:p-gen},
and
\eqref{ineq:p-non-neg}
hold for all $t>0$, $x,y\in\RR^d$,
where the sets
$\mathcal{S}_n, \mathcal{S}_q, \mathcal{S}_p, \mathcal{S}_{p_+} \subseteq (0,\infty)\times\RR^d\times\RR^d$
and all sections
$\mathcal{S}_n^{t,x},\mathcal{S}_q^{t,x}, \mathcal{S}_p^{t,x}, \mathcal{S}_{p_+}^{t,x}\subseteq \RR^d$
are of full measure.
\end{remark}
\vspace{\baselineskip}
\subsection{Uniqueness}
In this section we prove Theorem~\ref{thm:uniq}.
A general idea of the proof is that the assumptions of the theorem guarantee that
\begin{align}\label{eq:1=2}
\iint\limits_{(s,\infty)\times \RR^d} \mu_1(s,x,dudz) \psi(u,z)
= \iint\limits_{(s,\infty)\times \RR^d} \mu_2(s,x,dudz) \psi(u,z) \,,
\end{align}
where
\begin{align}\label{eq:phi-psi}
\big[ \partial_u + {\mathcal L}_z \big]\phi(u,z)=\psi(u,z)\,.
\end{align}
The result shall follow if
the class of $\psi$ for which
\eqref{eq:1=2} holds is rich enough.
Put differently, we intend to show that given $\psi$ the equation \eqref{eq:phi-psi} can be solved, that is, we can find $\phi$.
Since
$P_t$ is a candidate for the fundamental solution of $\partial_t u ={\mathcal L} u$,
we anticipate that
$$
\phi(u,z)=-\int_{u}^{\infty} P_{r-u}\psi(r,\cdot) (z)\,dr\,.
$$
The problem is that our knowledge about the regularity of $P_t \psi$
is limited; therefore, we only use an approximate form of $\phi$, obtained by replacing the expression under the integral with $P_{t,\varepsilon}\psi$.
More details can be found in the proof.
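Formally, the candidate $\phi$ indeed solves \eqref{eq:phi-psi}: after the change of variables $v=r-u$ we have $\phi(u,z)=-\int_0^{\infty} P_{v}\,\psi(v+u,\cdot)(z)\,dv$, and if we may differentiate under the integral and use ${\mathcal L}_z P_v=\partial_v P_v$ together with $P_0={\rm Id}$, then
$$
\big[\partial_u+{\mathcal L}_z\big]\phi(u,z)=-\int_0^{\infty} \partial_v \big[P_{v}\,\psi(v+u,\cdot)(z)\big]\,dv=P_0\,\psi(u,\cdot)(z)=\psi(u,z)\,,
$$
since in the proof $\psi(r,\cdot)$ vanishes for large $r$. It is this computation that is made rigorous below.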
We point out that in the proof we use different (arbitrarily large) values of $T$.
\vspace{\baselineskip}
\noindent
{\bf Proof of Theorem~\ref{thm:uniq}.}
We divide the proof into five steps.
\noindent
{\it Step 1.}
We claim that
\begin{align}\label{eq:uniq-ext}
\iint\limits_{(s,\infty)\times \RR^d} \mu_j(s,x,dudz)\Big[\partial_u\, \Phi(u,z) + {\mathcal L}_z \Phi(u,z) \Big] = - \Phi(s,x)\,,
\qquad j=1,2\,,
\end{align}
holds for any function $\Phi\colon {\mathbb R}\times \RR^d \to {\mathbb R}$ which satisfies the following conditions:
\begin{enumerate}
\item[(i)] there are $\mathcal{S}<s<\mathcal{T}$ such that $\Phi(u,z)=0$ if $u<\mathcal{S}$ or $u> \mathcal{T}$,
\item[(ii)] $\partial_u \Phi(u,z)$ exists and is continuous on ${\mathbb R}\times\RR^d$, and $\|\partial_u \Phi\|_{\infty}<\infty$,
\item[(iii)] $\partial_z^{\bbbeta}\Phi(u,z)$ exists and is continuous on ${\mathbb R}\times\RR^d$, and $\|\partial_z^{\bbbeta} \Phi\|_{\infty}<\infty$ for every $|\bbbeta|\leq 2$.
\end{enumerate}
Note that such $\Phi$ is bounded and continuous on ${\mathbb R}\times\RR^d$. We consider
$$
\phi_n(u,z)= (\Phi * \varphi_{n})(u,z)\cdot \varphi(z/n)\,,
$$
where $\varphi_{n}$ is a standard mollifier (`$*$' stands for convolution) in ${\mathbb R}\times\RR^d$, and $\varphi\in C_c^{\infty}(\RR^d)$
is an arbitrary fixed function
such that $0\leq \varphi(z) \leq 1$,
$\varphi(z)=1$ if $|z|\leq 1$
and
$\varphi(z)=0$ if $|z|\geq 2$.
It is not hard to verify that
$\phi_n\in C_c^{\infty}({\mathbb R}\times\RR^d)$
and that
the following pointwise
convergence holds:
$$
\phi_n \longrightarrow \Phi\,, \qquad
\partial_u \phi_n \longrightarrow \partial_u \Phi\,, \qquad
{\mathcal L}_z \phi_n \longrightarrow {\mathcal L}_z \Phi\,,
$$
and that there is a constant $c>0$ such that
$\|\partial_u \phi_n\|_{\infty}+\|{\mathcal L}_z \phi_n\|_{\infty} \leq c <\infty$
for all $n\in{\mathbb N}$.
Now, we use
\eqref{eq:uniq} with $\phi=\phi_n$ and the dominated convergence theorem to let $n\to\infty$ under the integral, which
yields \eqref{eq:uniq-ext}.
\vspace{\baselineskip}
\noindent
{\it Step 2.}
Let $\theta\in C_c^{\infty}({\mathbb R})$, $\xi\in C_c^{\infty}(\RR^d)$.
We claim that for each $\varepsilon\in (0,1]$ the function $\widetilde{\Phi}_{\varepsilon}$ defined below satisfies (i), (ii), (iii).
We consider
$$
\widetilde{\Phi}_{\varepsilon}(u,z):=\vartheta(u) \Phi_{\varepsilon}(u,z)\,,
$$
where $\vartheta\in C^{\infty}({\mathbb R})$ is such that $\vartheta(u)=0$ if $u\leq \mathcal{S}$, and $\vartheta(u)=1$ if $u\geq s$
for some
$\mathcal{S}<s$, and
\begin{align*}
\Phi_{\varepsilon}(u,z):=
-\int_{u}^{\infty} \theta(r) P_{r-u,\varepsilon}\,\xi(z)\,dr\,.
\end{align*}
Let $\mathcal{T} > s$ be such that ${\rm supp}\, \theta \subseteq (-\infty,\mathcal{T})$.
Note that
to have $\widetilde{\Phi}_{\varepsilon}$ properly defined we only need $\Phi_{\varepsilon}(u,z)$ for $u\geq \mathcal{S}$. Therefore,
we use $P_t$ constructed on $(0,T]$, where
$T=\mathcal{T}-\mathcal{S}+1$,
which gives access to $P_{t,\varepsilon}$ for $t\in (0,T-1]$ and $\varepsilon \in (0,1]$,
see Remark~\ref{rem:1toT}.
Clearly, (i) holds for $\widetilde{\Phi}_{\varepsilon}$ with $\mathcal{S}$ and $\mathcal{T}$.
Hence, in what follows, we consider $\widetilde{\Phi}_{\varepsilon}(u,z)$ and $\Phi_{\varepsilon}(u,z)$ only for $\mathcal{S}\leq u \leq \mathcal{T}$.
We focus on~(ii). Note that
$$
\Phi_{\varepsilon}(u,z)=-\int_0^{\mathcal{T}-\mathcal{S}} \theta(v+u) P_{v,\varepsilon} \,\xi(z) \,dv\,, \qquad
\partial_u \Phi_{\varepsilon}(u,z)=-\int_0^{\mathcal{T}-\mathcal{S}} \theta'(v+u) P_{v,\varepsilon} \,\xi(z) \,dv\,.
$$
In the second equality above we use
Corollary~\ref{cor:approx_sol-gen_prop}
part (4) to differentiate under the integral. It also guarantees
continuity and boundedness of $\Phi_{\varepsilon}$ and $\partial_u \Phi_{\varepsilon}$,
hence also of
$\partial_u \widetilde{\Phi}_{\varepsilon}(u,z)
=
\vartheta'(u) \Phi_{\varepsilon}(u,z)
+\vartheta(u) \partial_u \Phi_{\varepsilon}(u,z)$.
We shall now consider (iii).
By
\eqref{lem:point-1-aux},
Fact~\ref{fact:Q_well_Bb},
\cite[Lemma~5.6]{MR3996792},
Lemma~\ref{lem:time_conv}
and the dominated convergence theorem, we have for $|\bbbeta|\leq 1$,
\begin{align*}
\partial_z^{\bbbeta} \int_0^{\mathcal{T}-\mathcal{S}} \theta(v+u) P^0_{v+\varepsilon} \,\xi(z) \,dv
=\int_0^{\mathcal{T}-\mathcal{S}} \theta(v+u) \int_{\RR^d} \partial_z^{\bbbeta} \,p_0(v+\varepsilon,z,y)\xi(y) \,dy\,dv
\end{align*}
and
\begin{align*}
\partial_z^{\bbbeta} \int_0^{\mathcal{T}-\mathcal{S}} \theta(v+u) & \int_0^v P^0_{v-r+\varepsilon} Q_r \,\xi(z)\,drdv\\
=&
\int_0^{\mathcal{T}-\mathcal{S}} \theta(v+u) \int_0^v \int_{\RR^d}\partial_z^{\bbbeta}\, p_0(v-r+\varepsilon,z,y) Q_r\, \xi(y)\,dy\,drdv\,.
\end{align*}
A similar argument using \eqref{ineq:L1_uni_time-1}
and \cite[Corollary~5.10]{MR3996792} extends the above formulas to all $|\bbbeta|\leq 2$.
Clearly, by \eqref{def:approx_sol}, $\partial_z^{\bbbeta} \,\Phi_{\varepsilon}$
is the sum of the above expressions.
Therefore, the same estimates that allowed differentiation under the integrals
yield
continuity (see \eqref{ineq:remove_x}) and boundedness of
$\partial_z^{\bbbeta} \,\Phi_{\varepsilon}$, see also Lemma~\ref{lem:a-continuity}.
Finally, the same holds for
$\partial_z^{\bbbeta}\, \widetilde{\Phi}_{\varepsilon}(u,z)=\vartheta(u) \partial_z^{\bbbeta} \,\Phi_{\varepsilon}(u,z)$.
\vspace{\baselineskip}
\noindent
{\it Step 3.}
We claim that
for all $s\leq u \leq \mathcal{T}$, $z\in\RR^d$,
\begin{align*}
\big[\partial_u+{\mathcal L}_z\big] \widetilde{\Phi}_{\varepsilon}(u,z)= \theta(u) P_{0,\varepsilon}\,\xi(z)+ \int_0^{\mathcal{T}-\mathcal{S}} \theta(v+u)\, (\partial_v - {\mathcal L}_z) P_{v,\varepsilon}\,\xi(z)\,dv\,.
\end{align*}
Note that $\widetilde{\Phi}_{\varepsilon}=\Phi_{\varepsilon}$ if $u\geq s$.
Due to
Corollary~\ref{cor:H6}, see also
Remark~\ref{rem:H5_H6},
and
Corollary~\ref{cor:approx_sol-gen_prop}
part (4)
we can integrate by parts to get
$\partial_u \Phi_{\varepsilon}(u,z)
=\theta(u)P_{0,\varepsilon}\,\xi(z)+\int_0^{\mathcal{T}-\mathcal{S}} \theta(v+u)\partial_v P_{v,\varepsilon}\,\xi(z)\,dv$.
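In more detail, for $u\geq s$ we have $\theta(v+u)=0$ at $v=\mathcal{T}-\mathcal{S}$ because ${\rm supp}\,\theta\subseteq(-\infty,\mathcal{T})$, so the integration by parts produces a single boundary term:
$$
-\int_0^{\mathcal{T}-\mathcal{S}} \theta'(v+u)\, P_{v,\varepsilon}\,\xi(z)\,dv
=\theta(u)\,P_{0,\varepsilon}\,\xi(z)+\int_0^{\mathcal{T}-\mathcal{S}} \theta(v+u)\,\partial_v P_{v,\varepsilon}\,\xi(z)\,dv\,.
$$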
Furthermore, we have
${\mathcal L}_z \Phi_{\varepsilon}(u,z)=-\int_0^{\mathcal{T}-\mathcal{S}} \theta(v+u)\, {\mathcal L}_z P_{v,\varepsilon}\,\xi(z)\,dv$,
by literally the same proof as that of Lemma~\ref{lem:point-4}.
Adding the two expressions gives the claim.
\vspace{\baselineskip}
\noindent
{\it Step 4.}
Let $\theta\in C_c^{\infty}({\mathbb R})$, $\xi\in C_c^{\infty}(\RR^d)$.
We claim that \eqref{eq:1=2}
holds with $\psi(u,z)=\theta(u)\xi(z)$.
From~{\it Step 1} and {\it Step 2} we get
\eqref{eq:1=2} with
$\psi(u,z)=[\partial_u+{\mathcal L}_z] \widetilde{\Phi}_{\varepsilon}(u,z)$.
Next, by
{\it Step 3},
{\rm (H0)} (see Corollary~\ref{cor:H0})
and
Fact~\ref{fact:approx_sol-zero-2}
we get
$\lim_{\varepsilon\to 0^+}[\partial_u+{\mathcal L}_z] \widetilde{\Phi}_{\varepsilon}(u,z)=\theta(u)\xi(z)$. Therefore, by the dominated convergence theorem we obtain the claim.
\vspace{\baselineskip}
\noindent
{\it Step 5.} We finally claim that the conclusion of Theorem~\ref{thm:uniq} holds.
Clearly, by {\it Step 4} and approximation by standard mollification,
we extend \eqref{eq:1=2}
to hold for any $\psi(u,z)=\theta(u)\xi(z)$,
where $\theta\in C_c({\mathbb R})$, $\xi\in C_c(\RR^d)$.
Next, by Urysohn's lemma and the dominated convergence theorem, \eqref{eq:1=2} holds for any
$\psi(u,z)={\bf 1}_{J}(u) {\bf 1}_{V}(z)$,
where $J\subset {\mathbb R}$ is open and bounded, and $V\subseteq \RR^d$ is open,
cf. \cite[(2) on p. 49]{MR924157}.
Finally, we get \eqref{eq:1=2} for $\psi(u,z)={\bf 1}_{E}(u,z)$, where $E$ is as in the statement,
by the monotone class theorem
\cite[Theorem~2.3]{MR0264757}.
{\hfill $\Box$ \bigskip}
\subsection{Existence and properties}
In this section we prove
Theorem~\ref{thm:exist} and Theorem~\ref{thm:sem_prop}.
\vspace{\baselineskip}
Due to
Sections~\ref{ssec:a-hc}, \ref{ssec:b-hc},
\ref{ssec:ker}
we can use all results of Sections~\ref{sec:f_a_a},
\ref{sec:coj-gen}, \ref{sec:realization}.
\vspace{\baselineskip}
\noindent
{\bf Proof of Theorem~\ref{thm:sem_prop}.}
Since $T>0$ is arbitrary and the formulas
\eqref{def:op_P}, \eqref{def:op_Q}
and \eqref{def:op_Qn}
are consistent, the operators $P_t$ are defined for all $t>0$. Furthermore,
each $P_t$ is an integral operator with the kernel $p\colon(0,\infty)\times\RR^d\times\RR^d\to {\mathbb R}$, see
Remark~\ref{rem:t-global}.
The operator $P_t$ maps $C_0(\RR^d)$ into $C_0(\RR^d)$ by Fact~\ref{fact:PQ_well_C0}.
The semigroup property and
positivity stem from Proposition~\ref{prop:coj123}.
The strong continuity follows from Fact~\ref{fact:P-s-cont}.
The contractivity follows from {\rm (CoJ1)} and {\rm (CoJ2)}.
Furthermore,
(i) follows from Lemma~\ref{lem:CoJ5},
(ii) follows from Fact~\ref{fact:strong-F}, see also Lemma~\ref{lem:strong-F},
(iii) follows from {\rm (CoJ4)}, see Section~\ref{ssec:b-hc} and {\rm (L3)}.
(iv) and (v) follow from Proposition~\ref{prop:diff_closure} proved in the next section.
{\hfill $\Box$ \bigskip}
\noindent
{\bf Proof of Theorem~\ref{thm:exist}.}
Let $\phi\in C_c^{\infty}({\mathbb R}\times\RR^d)$.
Then $\xi(t)=\phi(t,\cdot)$
satisfies the assumptions of
\cite[Theorem~4.1]{MR3514392},
cf. \cite[Proof of (1.16)]{MR3514392}.
Note that here we
use Theorem~\ref{thm:sem_prop}.
Thus
for every $(s,x)\in{\mathbb R}\times\RR^d$ and $\phi \in C_c^{\infty}({\mathbb R}\times\RR^d)$ we have
\begin{align*}
\int_s^{\infty} P_{u-s} \big[ \partial_u \phi(u,\cdot)+ {\mathcal L}\, \phi(u,\cdot) \big](x)\, du = -\phi(s,x)\,.
\end{align*}
{\hfill $\Box$ \bigskip}
\subsection{Time-derivative and generator}
In this section we calculate the derivative of $P_tf(x)$ with respect to $t>0$, and use it to analyse the generator.
The calculations are based on
\begin{align}\label{eq:P-alt}
P_t f=P_t^0 f+\int_0^{t/2} P_{t-s}^0\, Q_s f\, ds+\int_0^{t/2}P_s^0\, Q_{t-s} f\, ds\,,
\end{align}
the series representation $Q_tf=\sum_{n=0}^{\infty}Q_t^n f$, and for $n=1,\ldots$,
\begin{align}\label{eq:Qn-alt}
Q_t^{n} f = \int_0^{t/2} Q_{t-s}^0\, Q_{s}^{n-1}f \,ds+\int_0^{t/2} Q_{s}^0\, Q_{t-s}^{n-1}f \,ds\,.
\end{align}
See \eqref{def:op_P}, \eqref{def:op_Q} and \eqref{eq:Qn}.
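For instance, if \eqref{def:op_P} is the Duhamel-type formula $P_tf=P^0_tf+\int_0^t P^0_{t-s}\,Q_sf\,ds$, then \eqref{eq:P-alt} follows by splitting the integral at $t/2$ and substituting $s\mapsto t-s$ in the upper half:
$$
\int_0^{t} P^0_{t-s}\, Q_sf\,ds
=\int_0^{t/2} P^0_{t-s}\, Q_sf\,ds+\int_{t/2}^{t} P^0_{t-s}\, Q_sf\,ds
=\int_0^{t/2} P^0_{t-s}\, Q_sf\,ds+\int_0^{t/2} P^0_{s}\, Q_{t-s}f\,ds\,.
$$
The same substitution yields \eqref{eq:Qn-alt}.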
In what follows the constant $C_3$ is taken from
Lemma~\ref{lem:well_def}.
\begin{lemma}\label{lem:der_q0-t}
There is $c>0$ such that for all $t\in (0,T]$, $x\in\RR^d$ and $f\in B_b(\RR^d)$,
$$
\partial_t Q_t^0 f(x)=\int_{\RR^d} \partial_t q_0(t,x,y) f(y)\,dy\,,
$$
and
$$
\| \partial_t Q_t^0 f \|_{\infty}
\leq c \,C_3
(t^{-1}r_t^{\sigma-1})t^{-1}r_t^{\varepsilon_0}\|f\|_{\infty}\,.
$$
\end{lemma}
\noindent{\bf Proof.}
The equality follows from
\eqref{eq:delta-alt}, Remark~\ref{rem:gen_not},
\eqref{ineq:L1_uni_time-1}, \cite[Lemma~5.6]{MR3996792} and
the dominated convergence theorem.
The inequality follows from
Lemmas~\ref{lem:q_0-aux*1}, \ref{lem:q_0-aux}
and~\ref{lem:q_0-aux*2}
together with Corollary~\ref{cor-shifts} and Lemma~\ref{lem:conv}(a), cf. the proof of Lemma~\ref{lem:well_def}.
{\hfill $\Box$ \bigskip}
\begin{lemma}\label{lem:der_qn-t}
There are $C,c>0$ such that for all $t\in (0,T]$, $x\in\RR^d$,
$n\in{\mathbb N}$ and $f\in B_b(\RR^d)$,
$$
\partial_t Q_t^nf(x)= \int_0^{t/2} \partial_t Q_{t-s}^0\, Q_{s}^{n-1}f (x) \,ds+\int_0^{t/2} Q_{s}^0\, \partial_t Q_{t-s}^{n-1}f(x) \,ds+Q_{t/2}^0\, Q_{t/2}^{n-1}f(x)\,,
$$
and
$$
\| \partial_t Q_t^nf\|_{\infty}
\leq c \,C_3 (t^{-1}r_t^{\sigma-1})\,
(C C_3)^n \, \prod_{k=1}^n B\!\left(\frac{\varepsilon_0}{2},\frac{k\varepsilon_0}{2}\right) t^{-1} r_t^{(n+1)\varepsilon_0}\,
\|f\|_{\infty}\,.
$$
\end{lemma}
\noindent{\bf Proof.}
Let $c$ be the constant from Lemma~\ref{lem:der_q0-t}. Note that
by \eqref{set:h-scaling} there is $c_1=c_1(T,\alpha_h,C_h,h)>0$ such that $r_{t/2} \geq c_1 r_t$ for $t\in(0,T]$, see \cite[Lemma~2.3]{MR4140542}.
We let $\lambda_0=1$, and for $n\in{\mathbb N}$ we put
$$
\lambda_n= C_3^n \prod_{k=1}^n B\!\left(\frac{\varepsilon_0}{2},\frac{k\varepsilon_0}{2}\right)
\qquad \mbox{and}\qquad
C=3\max\left\{1, 2 c_1^{\sigma-1},\frac{4 r_T^{1-\sigma}}{c\Gamma(\frac{\varepsilon_0}{2})}2^{\frac{\varepsilon_0}{2}}\right\}.
$$
Our aim is to prove
differentiability and
$\|\partial_t Q_t^n f\|_{\infty}\leq c\, C_3 (t^{-1}r_t^{\sigma-1})\, C^n\lambda_n \, t^{-1} r_t^{(n+1)\varepsilon_0}\,
\|f\|_{\infty}$ for $n\in {\mathbb N}\cup \{0\}$.
If $n=0$ the statement holds by Lemma~\ref{lem:der_q0-t}.
We assume the differentiability and the estimate for some $n\in{\mathbb N}\cup \{0\}$ and prove them for $n+1$.
By \eqref{eq:Qn-alt}
we have
$Q_t^{n+1} f = \int_0^{t/2} Q_{t-s}^0\, Q_{s}^{n}f \,ds+\int_0^{t/2} Q_{s}^0\, Q_{t-s}^{n}f \,ds$.
In what follows we use properties of $Q_t^nf$ from Fact~\ref{fact:Qn_well_Bb}.
For $0<t_1<t\leq T$ we have
\begin{align}\label{eq:d1}
\begin{aligned}
&\int_0^{t/2} Q_{t-s}^0\, Q_{s}^{n}f (x)\,ds
-\int_0^{t_1/2} Q_{t_1-s}^0\, Q_{s}^{n}f(x) \,ds\\
&\qquad = \int_0^{t_1/2} \int_{t_1}^t \partial_{\theta} Q_{\theta-s}^0\, Q_s^n f (x)\,d\theta ds
+\int_{t_1/2}^{t/2} Q_{t-s}^0\, Q_{s}^{n}f (x)\,ds=: I_1+I_2\,,
\end{aligned}
\end{align}
and
\begin{align}\label{eq:d2}
\begin{aligned}
&\int_0^{t/2} Q_{s}^0\, Q_{t-s}^{n}f(x) \,ds
-\int_0^{t_1/2} Q_{s}^0\, Q_{t_1-s}^{n}f(x) \,ds\\
&\qquad = \int_0^{t_1/2}\int_{\RR^d} q_0(s,x,y) \int_{t_1}^t\partial_{\theta} Q_{\theta-s}^nf(y)\,d\theta \,dyds
+\int_{t_1/2}^{t/2} Q_s^0 Q_{t-s}^nf(x)\,ds
=: I_3+I_4\,.
\end{aligned}
\end{align}
Dividing by $t-t_1$ and using the assumed estimates and \eqref{ineq:H3}
justifies the use of the dominated convergence theorem for the parts $I_1$ and $I_3$ as $t\to t_1$.
The continuity of $Q_{u}^0Q_v^n f$ for $u,v>0$
implies that, after dividing by $t-t_1$, the parts corresponding to $I_2$ and $I_4$ each converge to $\frac12 Q_{t_1/2}^0\, Q_{t_1/2}^{n}f(x)$.
Thus we get the differentiability and
\begin{align*}
\partial_t Q_t^{n+1}f(x)= \int_0^{t/2} \partial_t Q_{t-s}^0\, Q_{s}^{n}f (x) \,ds+\int_0^{t/2} Q_{s}^0\, \partial_t Q_{t-s}^{n}f(x) \,ds+Q_{t/2}^0\, Q_{t/2}^{n}f(x)\,.
\end{align*}
We prove the estimates. We frequently use
Lemma~\ref{lem:time_conv}
and the monotonicity of $r_t$, see Remark~\ref{rem:r_t}.
By Lemma~\ref{lem:der_q0-t} and
Fact~\ref{fact:Qn_well_Bb},
\begin{align*}
\int_0^{t/2} \|\partial_t Q_{t-s}^0\, Q_{s}^{n}f \|_{\infty}\,ds
&\leq c\, C_3 (t/2)^{-1} r_{t/2}^{\sigma-1}\,
C_3 \lambda_n \int_0^t (t-s)^{-1} r_{t-s}^{\varepsilon_0}\, s^{-1} r_s^{(n+1)\varepsilon_0}ds\\
&\leq c\, C_3 (t^{-1}r_{t}^{\sigma-1})\, 2 c_1^{\sigma-1} \lambda_{n+1} \, t^{-1} r_t^{(n+2)\varepsilon_0}\,.
\end{align*}
By
Lemma~\ref{lem:well_def} and the inductive hypothesis,
\begin{align*}
\int_0^{t/2}\| Q_{s}^0\, \partial_t Q_{t-s}^{n}f\|_{\infty} \,ds
&\leq C_3\, c\, C_3 (t/2)^{-1} r_{t/2}^{\sigma-1}\, C^n \lambda_n \int_0^t s^{-1} r_s^{\varepsilon_0} (t-s)^{-1} r_{t-s}^{(n+1)\varepsilon_0}ds\\
&\leq c\,C_3 (t^{-1}r_{t}^{\sigma-1})\, 2 c_1^{\sigma-1} C^n \lambda_{n+1} \, t^{-1} r_t^{(n+2)\varepsilon_0}\,.
\end{align*}
Following \cite[p. 129]{MR3765882} we have $\frac1{B(\varepsilon_0/2,(n+1)\varepsilon_0/2)}\leq 2^{(n+1)\frac{\varepsilon_0}{2}}/\Gamma(\frac{\varepsilon_0}{2})$, $n=0,1,\ldots$.
Now, by
Lemma~\ref{lem:well_def}
and Fact~\ref{fact:Qn_well_Bb} we get
\begin{align*}
\|Q_{t/2}^0\, Q_{t/2}^{n}f\|_{\infty}
& \leq C_3 (t/2)^{-1} r_{t/2}^{\varepsilon_0}\,C_3 \lambda_n \,(t/2)^{-1}\, r_{t/2}^{(n+1)\varepsilon_0}\\
&\leq C_3 (t^{-1}r_{t}^{\sigma-1})\, 4 r_T^{1-\sigma}\, C_3 \lambda_n \, t^{-1}r_t^{(n+2)\varepsilon_0}\\
&\leq C_3 (t^{-1} r_{t}^{\sigma-1}) \,
\frac{4 r_T^{1-\sigma}}{\Gamma(\frac{\varepsilon_0}{2})} 2^{(n+1)\frac{\varepsilon_0}{2}}
\lambda_{n+1}\, t^{-1} r_{t}^{(n+2)\varepsilon_0}\\
&\leq c \,C_3
(t^{-1} r_{t}^{\sigma-1})
\left(\frac{4 r_T^{1-\sigma}}{c\Gamma(\frac{\varepsilon_0}{2})} 2^{\frac{\varepsilon_0}{2}}\right) C^n
\lambda_{n+1}
\, t^{-1} r_{t}^{(n+2)\varepsilon_0}\,.
\end{align*}
Summing up the estimates gives the desired bound for $n+1$.
{\hfill $\Box$ \bigskip}
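We note in passing that the product of Beta functions in Lemma~\ref{lem:der_qn-t} telescopes: with $a=\varepsilon_0/2$,
$$
\prod_{k=1}^{n} B(a,ka)=\prod_{k=1}^{n}\frac{\Gamma(a)\,\Gamma(ka)}{\Gamma((k+1)a)}=\frac{\Gamma(a)^{n+1}}{\Gamma((n+1)a)}\,.
$$
Since $\Gamma((n+1)a)$ grows faster than any geometric sequence in $n$, the bounds of Lemma~\ref{lem:der_qn-t} are summable in $n$; this is used in the next lemma.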
\begin{lemma}\label{lem:der_q-t}
There is $c>0$ such that for all $t\in (0,T]$, $x\in\RR^d$ and $f\in B_b(\RR^d)$,
$$
\partial_t Q_tf(x)=\sum_{n=0}^{\infty} \partial_t Q_t^n f(x)\,,
$$
and
$$
\| \partial_t Q_t f\|_{\infty}
\leq c (t^{-1}r_t^{\sigma-1})\, t^{-1} r_t^{\varepsilon_0}\, \|f\|_{\infty}\,.
$$
\end{lemma}
\noindent{\bf Proof.}
Fix $f$ and $x$.
For every $t\in (0,T]$ by Fact~\ref{fact:Q_well_Bb} the series
$\sum_{n=0}^{\infty} Q_t^n f(x)$ converges. By
Lemma~\ref{lem:der_qn-t}
the series $\sum_{n=0}^{\infty} \partial_t Q_t^n f(x)$
converges locally uniformly in $t\in (0,T]$.
Thus,
we can differentiate term by term.
The estimate follows from Lemma~\ref{lem:der_qn-t}.
{\hfill $\Box$ \bigskip}
\begin{lemma}
There is $c>0$ such that for all $t\in (0,T]$ and $f\in C_0(\RR^d)$,
\begin{align}\label{eq:der_p0-t-b}
\|\partial_t P_t^0f \|_{\infty}\leq c t^{-1} r_t^{\sigma-1} \|f\|_{\infty}\,,
\end{align}
and
\begin{align}\label{eq:der_p0-t-c}
\lim_{s\to t}\|\partial_s P_{s}^0f - \partial_t P_{t}^0f\|_{\infty}=0\,.
\end{align}
\end{lemma}
\noindent{\bf Proof.}
The estimate \eqref{eq:der_p0-t-b} follows from Lemma~\ref{lem:point-1},
Proposition~\ref{prop:gen_est_time},
Corollary~\ref{cor-shifts} and Lemma~\ref{lem:conv}(a).
Now, by
\eqref{eq:par_t_is_L} and \eqref{ineq:L1_uni_time-1},
for every $\tau>0$ and all $t,s\geq \tau$,
$$
|\partial_s p_0(s,x,y)-\partial_t p_0(t,x,y)|
\leq c|t-s| \Upsilon_{\tau} (y-x)\,.
$$
Thus, \eqref{eq:der_p0-t-c} follows from Lemma~\ref{lem:point-1}
and \cite[Lemma~5.6]{MR3996792}.
{\hfill $\Box$ \bigskip}
\begin{theorem}\label{thm:der_p-t}
For all $t\in (0,T]$, $x\in\RR^d$ and $f\in C_0(\RR^d)$,
\begin{align}
\partial_t P_t f(x)
&=\partial_t P_t^0 f(x)
+\int_0^{t/2} \partial_t P_{t-s}^0 Q_sf(x)\,ds
+\int_0^{t/2} P_s^0 \partial_t Q_{t-s}f(x)\,ds
+ P_{t/2}^0\, Q_{t/2}f(x)\label{eq:der1} \\
&=\partial_t P_t^0 f(x)+Q_tf(x)+\lim_{\varepsilon\to 0^+}\int_0^{t-\varepsilon}\partial_t P_{t-s}^0 Q_sf(x)\,ds \label{eq:der2}\\
&= {\mathcal L}_x P_t^0 f(x) + \lim_{\varepsilon \to 0^+} {\mathcal L}_x \int_0^{t-\varepsilon} P_{t-s}^0\, Q_sf(x)\,ds\,. \label{eq:der3}
\end{align}
\end{theorem}
\noindent{\bf Proof.}
First recall that $Q_tf\in C_0(\RR^d)$ and $\partial_t Q_tf \in B_b(\RR^d)$, see Fact~\ref{fact:PQ_well_C0} and Lemma~\ref{lem:der_q-t}.
We get \eqref{eq:der1} by differentiating \eqref{eq:P-alt}.
The exact calculations are like in \eqref{eq:d1} and \eqref{eq:d2}
with $Q_t^0$ replaced by $P_t^0$, and $Q_t^n$ changed to $Q_t$.
The usage of the dominated convergence theorem is justified by
\eqref{eq:der_p0-t-b},
Fact~\ref{fact:Q_well_Bb} part (2),
\eqref{ineq:H1} and
Lemma~\ref{lem:der_q-t}.
See also Lemma~\ref{lem:continuity} part~(5).
Note that we have
$
\partial_s P_{t-s}^0 Q_sf(x)
= -\partial_t P_{t-s}^0 Q_sf(x)
+ P_{t-s}^0 \partial_s Q_sf(x)
$.
Thus
\begin{align*}
\int_0^{t/2} P_s^0 \partial_t Q_{t-s}f(x)\,ds
&=\lim_{\varepsilon \to 0^+}\int_{t/2}^{t-\varepsilon} P_{t-s}^0 \partial_s Q_sf(x)\,ds\\
&=\lim_{\varepsilon \to 0^+} \left( \int_{t/2}^{t-\varepsilon} \partial_t P_{t-s}^0 Q_sf(x)\,ds
+P_{\varepsilon}^0 Q_{t-\varepsilon}f(x)-P_{t/2}^0Q_{t/2}f(x)\right).
\end{align*}
By part (5) of Lemma~\ref{lem:continuity}
this already proves \eqref{eq:der2}.
Recall that $(\partial_t-{\mathcal L}_x) P_t^0f(x)=- Q_t^0 f (x)$ by Lemma~\ref{lem:point-1}.
Then
$$
\partial_t P_t^0 f(x)= -Q_t^0f(x)+{\mathcal L}_x P_t^0f(x)\,,
$$
and
$$
\lim_{\varepsilon\to 0^+}\int_0^{t-\varepsilon}\partial_t P_{t-s}^0 Q_sf(x)\,ds=
-\int_0^{t} Q_{t-s}^0 Q_sf(x)\,ds
+\lim_{\varepsilon\to 0^+}\int_0^{t-\varepsilon} {\mathcal L}_x P_{t-s}^0 Q_sf(x)\,ds\,.
$$
It suffices to add the above equalities and use \eqref{eq:Q} to obtain \eqref{eq:der3}.
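Indeed, if \eqref{eq:Q} is the identity $Q_tf=Q^0_tf+\int_0^t Q^0_{t-s}\,Q_sf\,ds$, then after substituting the two displayed equalities into \eqref{eq:der2} the terms free of ${\mathcal L}_x$ cancel,
$$
-Q^0_tf(x)+Q_tf(x)-\int_0^{t} Q^0_{t-s}\,Q_sf(x)\,ds=0\,,
$$
and only the right-hand side of \eqref{eq:der3} remains.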
{\hfill $\Box$ \bigskip}
\begin{lemma}\label{lem:der-t-approx}
For all
$t\in (0,T)$ and $f\in C_0(\RR^d)$
$$
\lim_{\varepsilon\to 0^+}\|\partial_t P_{t,\varepsilon}f-\partial_t P_t f\|_{\infty}=0\,.
$$
\end{lemma}
\noindent{\bf Proof.}
Let $t\in(0,T)$ and $\varsigma>0$ be fixed such that
$t\in (0,T-\varsigma]$.
We use \eqref{eq:der1} for $\partial_t P_tf(x)$.
Similarly to
Theorem~\ref{thm:der_p-t}, we get
\begin{align*}
\partial_t P_{t,\varepsilon} f(x)
=\partial_t P_{t+\varepsilon}^0 f(x)
&+\int_0^{t/2} \partial_t P_{t-s+\varepsilon}^0 Q_sf(x)\,ds\\
&\quad+\int_0^{t/2} P_{s+\varepsilon}^0 \partial_t Q_{t-s}f(x)\,ds
+ P_{t/2+\varepsilon}^0\, Q_{t/2}f(x)\,.
\end{align*}
We have $\partial_t P_{t+\varepsilon}^0 f \to \partial_t P_{t}^0 f$ and
$P_{t/2+\varepsilon}^0\, Q_{t/2}f \to P_{t/2}^0\, Q_{t/2}f$ in the norm by \eqref{eq:der_p0-t-c} and {\rm (H2)}, respectively.
Next,
by \eqref{eq:der_p0-t-b}, Fact~\ref{fact:Q_well_Bb} part (2),
{\rm (H1)} and Lemma~\ref{lem:der_q-t}
for all $s\in (0,t/2)$ and
$\varepsilon\in [0,\varsigma]$
we get
$
\|\partial_t P_{t-s+\varepsilon}^0 Q_sf\|_{\infty}
\leq c s^{-1} r_s^{\varepsilon_0} \|f\|_{\infty}$,
$\|P_{s+\varepsilon}^0 \partial_t Q_{t-s}f\|_{\infty}\leq c\|f\|_{\infty}$.
Thus the difference of the integral parts of $\partial_tP_{t,\varepsilon}f$ and $\partial_t P_t f$ converges to zero in the norm by
\eqref{eq:der_p0-t-c} and {\rm (H2)}, and
the dominated convergence theorem.
{\hfill $\Box$ \bigskip}
We note that \eqref{eq:der2}
and \eqref{eq:der3}
resemble formulae displayed in Corollary~\ref{cor:point-1-3}.
Due to Lemma~\ref{lem:der-t-approx}
the relation is even stronger, see also Fact~\ref{fact:approx_sol-zero-1}.
\begin{corollary}\label{cor:der_p-tb}
For all $t\in (0,T)$ and $f\in C_0(\RR^d)$,
\begin{align}
\partial_t P_t f(x)&=
\partial_t P_t^0 f(x)+Q_tf(x)+\lim_{\varepsilon\to 0^+}\int_0^t\partial_t P_{t-s+\varepsilon}^0 Q_sf(x)\,ds \label{eq:der2b} \\
&={\mathcal L}_x P_t^0 f(x) + \lim_{\varepsilon \to 0^+} {\mathcal L}_x \int_0^t P_{t-s+\varepsilon}^0\, Q_sf(x)\,ds\,.
\label{eq:der3b}
\end{align}
\end{corollary}
We now prove parts (iv) and (v) of Theorem~\ref{thm:sem_prop}. In the next result we use arbitrary values of $T$.
\begin{proposition}\label{prop:diff_closure}
The semigroup $(P_t)_{t>0}$ is differentiable and its generator
$(\mathcal{A},D(\mathcal{A}))$
is the closure of $({\mathcal L},C_c^{\infty}(\RR^d))$. In particular, for all $t>0$, $x\in\RR^d$,
\begin{align*}
\mathcal{A} P_tf(x)=\partial_t P_tf(x)\,.
\end{align*}
\end{proposition}
\noindent{\bf Proof.}
Let $(\mathcal{A}_c,D(\mathcal{A}_c)):=({\mathcal L},C_c^{\infty}(\RR^d))$ and denote its closure by $(\bar{\mathcal{A}}_c,D(\bar{\mathcal{A}}_c))$.
Clearly, we have $\bar{\mathcal{A}}_c\subseteq \mathcal{A}$. To obtain that $\mathcal{A}\subseteq\bar{\mathcal{A}}_c$ it suffices to show $D(\mathcal{A})\subseteq D(\bar{\mathcal{A}}_c)$.
To this end, in each of the steps below we exploit the fact that $\bar{\mathcal{A}}_c$ is a closed operator.
Note also that the differentiability follows from {\it Step 2}.
\noindent
{\it Step 1}. We claim that $f\in D(\bar{\mathcal{A}}_c)$ for every $f\in C_0^2(\RR^d)$. Indeed, by convolution with a standard
mollifier and multiplication by a cut-off function
we set
$f_n(x)=(f*\varphi_n)(x)\cdot \varphi(x/n)$.
Then $f_n\in C_c^{\infty}(\RR^d)$, $f_n\to f$ and
$\bar{\mathcal{A}}_c f_n = {\mathcal L} f_n\to {\mathcal L} f$
in the norm;
the latter follows from
$\|{\mathcal L} g\|_{\infty} \leq c \sum_{|\bbbeta|\leq 2} \|\partial^{\bbbeta} g\|_{\infty}$
and
the fact that $f_n$ converges to $f$
in the norm together with all its derivatives up to order $2$.
\noindent
{\it Step 2}. We claim that $P_t f\in D(\bar{\mathcal{A}}_c)$ for every $f\in C_0(\RR^d)$.
Indeed, we let $f_n=P_{t,1/n} f$, see~\eqref{def:approx_sol}. Then
$f_n\in C_0^2(\RR^d)$
by \eqref{ineq:L1_uni_time-1}.
Furthermore,
$f_n \to P_t f$ by
Corollary~\ref{cor:approx_sol-gen_prop}
part (1).
Finally, by
{\it Step~1},
Theorem~\ref{thm:sem_prop}(iii),
Fact~\ref{fact:approx_sol-zero-1}
and Lemma~\ref{lem:der-t-approx} we get
$\bar{\mathcal{A}}_c f_n= \mathcal{A} f_n = {\mathcal L} f_n = ({\mathcal L} -\partial_t)P_{t,1/n} f+\partial_t P_{t,1/n}f \to \partial_t P_tf$.
\noindent
{\it Step 3}. We claim that $f\in D(\bar{\mathcal{A}}_c)$ for every $f\in D(\mathcal{A})$.
Indeed, if $f_n=P_{1/n}f$, then $f_n\in D(\bar{\mathcal{A}}_c)$, $f_n\to f$ and $\bar{\mathcal{A}}_c f_n = \mathcal{A} f_n= \mathcal{A}P_{1/n}f = P_{1/n}\mathcal{A}f\to \mathcal{A}f$, see Fact~\ref{fact:P-s-cont}.
{\hfill $\Box$ \bigskip}
\subsection{Pointwise estimates}
Note that
Lemma~\ref{lem:q_0-aux}
provides a pointwise upper bound for $q_0(t,x,y)$
for all \mbox{$t\in (0,\mathfrak{t_0}]$} and \mbox{$x,y\in\RR^d$}.
We shall now
propagate it to $q_n$ defined in \eqref{def:q_n-gen}, and then further to $q$ and $p$ given in
\eqref{def:q-gen} and
\eqref{def:p-gen}, respectively.
\begin{lemma}\label{lem:q-bound}
Assume $(\mathbf{A\!^\ast})$ and \eqref{cond:pointwise}.
For every $\varepsilon\in (0,\eta)$
there exists a constant $c>0$
such that for all
$t\in (0,\mathfrak{t_0}]$ and $x,y\in\RR^d$,
\begin{align*}
|q(t,x,y)|\leq c\left(\erry{\varepsilon}{0}+\sum_{j=0}^{N}\erry{\indcsi{j}-1}{\indhei{j}}\right)(t,x,y)\,.
\end{align*}
The constant $c$ can be chosen to depend only on
$d,c_{\!\scriptscriptstyle J},c_\kappa,\alpha_h,C_h,h,N,\eta, \varepsilon$, $\min\limits_{j=0,\ldots,N} (\indcsi{j})$.
\end{lemma}
\noindent{\bf Proof.}
We set $T=\mathfrak{t_0}$ for the whole proof.
Fix $\varepsilon\in (0,\eta)$ and
choose $0<M< \varepsilon $ such that
$\varepsilon+M<\eta$.
Now, by \eqref{cond:pointwise} for each $j=0,\ldots,N$ there exists $0<L_j<(\alpha_h/2)\land (\sigma \indhei{j})$ such that
\begin{align}\label{eq:def-Lj}
\varepsilon+M=2(L_j+\indcsi{j}-1)\,.
\end{align}
Define $L:=\max\{L_j\colon\,\, j=0,\ldots,N\}$. Note that
$$
0<L<\alpha_h/2\,,\qquad M< \min_{j=0,\ldots,N}( L_j+\indcsi{j}-1) \,.
$$
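Both properties are immediate from \eqref{eq:def-Lj}: each $L_j<\alpha_h/2$ by its choice, and since $M<\varepsilon$,
$$
M<\frac{\varepsilon+M}{2}=L_j+\indcsi{j}-1\,,\qquad j=0,\ldots,N\,.
$$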
We will show by induction that for all $n\in{\mathbb N}$ and
$t\in (0,T]$, $x,y\in\RR^d$,
\begin{align}\label{ineq:q_n-bound}
|q_n(t,x,y)|\leq \lambda_n\left(\erry{\varepsilon+n M}{0}+ \sum_{j=0}^{N} \erry{\indcsi{j}-1+nM}{\indhei{j}}\right)(t,x,y)\,,
\end{align}
where
$$
\lambda_n= c_0 \left(\frac{c}{\alpha_h-2L} \right)^n \, \prod_{k=1}^{n} B(M/2,kM/2)\,,
$$
the constant $c_0$ is taken from Lemma~\ref{lem:q_0-aux},
and $c$ is another constant that we now specify.
To this end we write
$D_1$ for a constant from Corollary~\ref{cor-shifts}, similarly
$D_2$ is from Lemma~\ref{lem:conv}(b),
$D_3=C_h (r_T\vee 1)^{2}$, $D_4=r_T\vee 1$, $D_5=5(N+1)^2c_0$. Finally we write
$$c:=D_1 D_2 D_3^{2/\alpha_h} D_4 D_5\,.$$
Before we proceed with the proof of \eqref{ineq:q_n-bound} we establish two auxiliary inequalities which will facilitate calculations.
For the first auxiliary inequality we let
$n\in{\mathbb N}\cup \{0\}$
and apply Lemma~\ref{lem:conv}(c) with
\begin{align*}
\beta_0=2L,\ \beta_1= \indhei{j},\ \beta_2 = \indhei{i},\quad
&n_1=n_2=\varepsilon+M+2-\indcsi{j}-\indcsi{i}\,,\\
&m_1=M+1-\indcsi{j}, \ m_2=M+1-\indcsi{i}\,,\\
&\gamma_1 =\indcsi{j}-1, \ \gamma_2=\indcsi{i}-1+nM\,.
\end{align*}
Note that by the definition of $L_j$ in \eqref{eq:def-Lj} and the subsequent properties of $M$, the assumptions of Lemma~\ref{lem:conv}(c) are satisfied. Indeed, for all $i,j=0,\ldots, N$ we have
\begin{align*}
&0<\varepsilon+M+2-\indcsi{j}-\indcsi{i}=L_j+L_i\leq (2L)\wedge\sigma(\indhei{j}+\indhei{i})\,,\\
&0<M+1-\indcsi{j}<L_j \leq (2L)\wedge\sigma\indhei{j}\,.
\end{align*}
Therefore we get
\begin{align}
&\int_0^t\int_{\RR^d}
\errx{\indcsi{j}-1}{\indhei{j}}(t-s,x,z) \erry{\indcsi{i}-1+nM}{\indhei{i}}(s,z,y)\,dzds \nonumber \\
&\leq
\frac{D_2 D_3^{2/\alpha_h}}{\alpha_h-2L}\,
B\!\left(\frac{M}{2},\frac{(n+1)M}{2}\right) \left(2\erry{\varepsilon+(n+1)M}{0}+ \erry{\indcsi{i}-1+(n+1)M}{\indhei{i}}
+\erry{\indcsi{j}-1+(n+1)M}{\indhei{j}} \right)(t,x,y)\,. \label{ineq:aux1}
\end{align}
For the second auxiliary inequality
we use Lemma~\ref{lem:conv}(c) with
\begin{align*}
\beta_0=2L,\ \beta_1=\indhei{j}, \ \beta_2=0, \quad
&n_1= n_2=M+1-\indcsi{j}\,,\\
&m_1=M+1-\indcsi{j}, \ m_2=0\,,\\
&\gamma_1 =\indcsi{j}-1, \ \gamma_2=\varepsilon+nM\,.
\end{align*}
The assumptions are satisfied again and we get
\begin{align}
&\int_0^t\int_{\RR^d}
\errx{\indcsi{j}-1}{\indhei{j}}(t-s,x,z) \erry{\varepsilon+nM}{0}(s,z,y)\,dzds \nonumber \\
&\leq \frac{D_2 D_3^{1/\alpha_h}}{\alpha_h-2L}\,
B\!\left(\frac{M}{2},\frac{\varepsilon+nM}{2}\right)\left(3\erry{\varepsilon+(n+1)M}{0}+\erry{\varepsilon+\indcsi{j}-1+nM}{\indhei{j}}\right)(t,x,y) \nonumber \\
&\leq \frac{D_2 D_3^{1/\alpha_h}D_4}{\alpha_h-2L}\,B\!\left(\frac{M}{2},\frac{(n+1)M}{2}\right)\left(3\erry{\varepsilon+(n+1)M}{0}+\erry{\indcsi{j}-1+(n+1)M}{\indhei{j}}\right)(t,x,y)\,.\label{ineq:aux2}
\end{align}
In the second inequality we used the monotonicity of the Beta function, wrote
$\varepsilon+nM=\varepsilon-M+(n+1)M$, and applied Remark~\ref{rem:r_t} together with $D_4^{\varepsilon-M}\leq D_4$.
We will now show \eqref{ineq:q_n-bound}
for $n=1$.
By Lemma~\ref{lem:q_0-aux},
Corollary~\ref{cor-shifts} and \eqref{ineq:aux1} we obtain
\begin{align*}
|q_1(t,x,y)|
&\leq \int_0^t \int_{\RR^d}|q_0(t-s,x,z)| |q_0(s,z,y)|\, dzds\\
&\leq c_0^2 D_1 \sum_{i,j=0}^{N}
\int_0^t\int_{\RR^d}
\errx{\indcsi{j}-1}{\indhei{j}}(t-s,x,z) \erry{\indcsi{i}-1}{\indhei{i}}(s,z,y)\,dzds\\
&\leq c_0^2\frac{D_1 D_2 D_3^{2/\alpha_h}}{\alpha_h-2L}B(M/2,M/2)\sum_{i,j=0}^{N}\left(2\erry{\varepsilon+M}{0}+ \erry{\indcsi{j}-1+M}{\indhei{j}}
+\erry{\indcsi{i}-1+M}{\indhei{i}} \right)(t,x,y)\\
&\leq 2c_0^2\frac{D_1 D_2 D_3^{2/\alpha_h}}{\alpha_h-2L}(N+1)^2B(M/2,M/2)\left(\erry{\varepsilon+M}{0}+\sum_{j=0}^{N}\erry{\indcsi{j}-1+M}{\indhei{j}}\right)(t,x,y)\\
&\leq \lambda_1 \left(\erry{\varepsilon+M}{0}+\sum_{j=0}^{N}\erry{\indcsi{j}-1+M}{\indhei{j}}\right)(t,x,y)\,.
\end{align*}
We assume that \eqref{ineq:q_n-bound} holds for some $n\in{\mathbb N}$.
By Lemma~\ref{lem:q_0-aux}, \eqref{ineq:q_n-bound}
and Corollary~\ref{cor-shifts} we have
\begin{align*}
|q_{n+1}(t,x,y)|
&\leq \int_0^t \int_{\RR^d}|q_0(t-s,x,z)| |q_{n}(s,z,y)|\, dzds \\
&\leq c_0\lambda_n
\int_0^t \int_{\RR^d}
\sum_{j=0}^{N} \erry{\indcsi{j}-1}{\indhei{j}}(t-s,x,z)
\left(\erry{\varepsilon+n M}{0}+ \sum_{i=0}^{N} \erry{\indcsi{i}-1+nM}{\indhei{i}}\right)(s,z,y)\,dzds\\
&\leq c_0\lambda_n D_1\sum_{j=0}^{N}\int_{0}^{t}\int_{\RR^d}\errx{\indcsi{j}-1}{\indhei{j}}(t-s,x,z)\erry{\varepsilon+n M}{0}(s,z,y)\,dzds\\
&\quad +c_0\lambda_n D_1\sum_{i,j=0}^{N}\int_{0}^{t}\int_{\RR^d}\errx{\indcsi{j}-1}{\indhei{j}}(t-s,x,z)\erry{\indcsi{i}-1+nM}{\indhei{i}}(s,z,y)\,dzds\eqqcolon I_1+I_2\,.
\end{align*}
We estimate $I_1$ by applying \eqref{ineq:aux2} and using
$D_3^{1/\alpha_h}\leq D_3^{2/\alpha_h}$, $(N+1)\leq (N+1)^2$ to obtain
\begin{align*}
I_1 &\leq c_0\lambda_n\frac{D_1 D_2 D_3^{1/\alpha_h} D_4}{\alpha_h-2L}\,
B\!\left(\frac{M}{2},\frac{(n+1)M}{2}\right)\!(N+1)\!
\left(3\erry{\varepsilon+(n+1)M}{0}+\sum_{j=0}^{N}\erry{\indcsi{j}-1+(n+1)M}{\indhei{j}}\right)\!(t,x,y)\\
&\leq \lambda_n\frac{\frac{3}{5}D_1 D_2 D_3^{2/\alpha_h}D_4 D_5}{\alpha_h-2L}\,
B\!\left(\frac{M}{2},\frac{(n+1)M}{2}\right)\left(\erry{\varepsilon+(n+1)M}{0}+\sum_{j=0}^{N}\erry{\indcsi{j}-1+(n+1)M}{\indhei{j}}\right)(t,x,y)\\
&=\frac{3}{5}\lambda_{n+1}\left(\erry{\varepsilon+(n+1)M}{0}+\sum_{j=0}^{N}\erry{\indcsi{j}-1+(n+1)M}{\indhei{j}}\right)(t,x,y)\,.
\end{align*}
For $I_2$ we apply \eqref{ineq:aux1} to get
\begin{align*}
I_2
&\leq c_0\lambda_n \frac{D_1 D_2 D_3^{2/\alpha_h}}{\alpha_h-2L}\,
B\!\left(\frac{M}{2},\frac{(n+1)M}{2}\right)\!(N+1)^2
\!\left(2\erry{\varepsilon+(n+1)M}{0}+2\sum_{j=0}^{N}\erry{\indcsi{j}-1+(n+1)M}{\indhei{j}}\right)\!(t,x,y)\\
&\leq \lambda_n\frac{\frac{2}{5}D_1 D_2 D_3^{2/\alpha_h}D_4 D_5}{\alpha_h-2L}\,
B\!\left(\frac{M}{2},\frac{(n+1)M}{2}\right)\left(\erry{\varepsilon+(n+1)M}{0}+\sum_{j=0}^{N}\erry{\indcsi{j}-1+(n+1)M}{\indhei{j}}\right)(t,x,y)\\
&=\frac{2}{5}\lambda_{n+1}\left(\erry{\varepsilon+(n+1)M}{0}+\sum_{j=0}^{N}\erry{\indcsi{j}-1+(n+1)M}{\indhei{j}}\right)(t,x,y)\,.
\end{align*}
Finally, adding the estimates for $I_1$ and $I_2$ we obtain
\begin{equation*}
|q_{n+1}(t,x,y)|\leq \lambda_{n+1}\left(\erry{\varepsilon+(n+1)M}{0}+\sum_{j=0}^{N}\erry{\indcsi{j}-1+(n+1)M}{\indhei{j}}\right)(t,x,y)\,.
\end{equation*}
This ends the proof of \eqref{ineq:q_n-bound}.
Using Remark~\ref{rem:r_t}
we derive from
\eqref{ineq:q_n-bound} that
$$
| q_n(t,x,y) |\leq \lambda_n \,r_T^{nM}
\left(\erry{\varepsilon}{0}+ \sum_{j=0}^{N} \erry{\indcsi{j}-1}{\indhei{j}}\right)(t,x,y)\,.
$$
It remains to verify that
$\sum_{n=1}^{\infty} \lambda_n r_T^{nM}<\infty$, which follows, for instance, from the identity
$$
\lambda_n \, r_T^{nM}
=c_0\Gamma(M/2)\left(\frac{c \, \Gamma(M/2) r_T^M}{\alpha_h-2L}\right)^n\frac{1}{\Gamma((n+1)M/2)}\,.
$$
In fact, $r_T= 1$, $M$ depends only on $\varepsilon$ and $\eta$, while $L$ depends only on $\varepsilon+M$ and
$\min\limits_{j=0,\ldots,N} (\indcsi{j})$.
{\hfill $\Box$ \bigskip}
\begin{proposition}\label{prop:remainder-bound}
Assume $(\mathbf{A\!^\ast})$ and \eqref{cond:pointwise}.
For every $\varepsilon\in (0,\eta)$
there exists a constant $c>0$
such that for all
$t\in (0,\mathfrak{t_0}]$ and $x,y\in\RR^d$,
$$
|p(t,x,y)-p_0(t,x,y)|\leq ct \left(\erry{\varepsilon \land \varepsilon_0}{0}+\sum_{j=0}^{N}\erry{\indcsi{j}-1}{\indhei{j}}\right)(t,x,y)\,.
$$
The constant $c$ can be chosen to depend only on
$d,c_{\!\scriptscriptstyle J},c_\kappa,\alpha_h,C_h,h,N,\eta, \varepsilon, \varepsilon_0$, $\min\limits_{j=0,\ldots,N} (\indcsi{j})$.
\end{proposition}
\noindent{\bf Proof.}
We set $T=\mathfrak{t_0}$.
By Proposition~\ref{prop:gen_est}, Lemma~\ref{lem:q-bound} and Corollary~\ref{cor-shifts} we
have
\begin{align*}
|p(t,x,y)-p_0(t,x,y)| &\leq c \int_0^t \int_{\RR^d}(t-s) \errx{0}{0}(t-s,x,z)\erry{\varepsilon}{0}(s,z,y)\,dzds\\
&\quad+c\sum_{j=0}^{N}\int_{0}^{t}\int_{\RR^d}(t-s)\errx{0}{0}(t-s,x,z)\erry{\indcsi{j}-1}{\indhei{j}}(s,z,y)\,dzds\eqqcolon I_1+I_2 \,.
\end{align*}
Using Lemma~\ref{lem:conv}(c) with
\begin{align*}
\beta_0=0, \ \beta_1=0, \ \beta_2=0, \quad &n_1=n_2= m_1=m_2=0\,,\\
&\gamma_1=0, \ \gamma_2=\varepsilon\,,
\end{align*}
we get $I_1\leq (c/\alpha_h) t\erry{\varepsilon}{0}(t,x,y)$.
Due to the definition of $\varepsilon_0$, for each $j=0,\ldots,N$ there exists
$0<\ell_j < \alpha_h \land (\sigma \indhei{j})$
such that $\varepsilon_0=\ell_j+\indcsi{j}-1$.
Let $\ell :=\max\{\ell_j\colon \, j=0,\ldots,N\}$ and note that $0<\ell<\alpha_h$.
We use Lemma~\ref{lem:conv}(c) with
\begin{align*}
\beta_0=\ell, \ \beta_1=0, \ \beta_2=\indhei{j}, \quad &n_1=n_2=\varepsilon_0+1-s_j\,,\\
&m_1=0, \ m_2=\varepsilon_0+1-s_j\,,\\
&\gamma_1=0, \ \gamma_2=\indcsi{j}-1\,,
\end{align*}
to get that
\begin{align*}
I_2\leq \frac{c}{\alpha_h-\ell}\, t\left(\erry{\varepsilon_0}{0}+\sum_{j=0}^{N}\erry{\indcsi{j}-1}{\indhei{j}}\right)(t,x,y)\,.
\end{align*}
This ends the proof.
{\hfill $\Box$ \bigskip}
\noindent
{\bf Proof of Theorem~\ref{thm:pointwise}.}
It suffices to use Proposition~\ref{prop:remainder-bound} and estimates of $p_0(t,x,y)$ provided by Proposition~\ref{prop:gen_est}, see Section~\ref{ssec:a-zoet}.
{\hfill $\Box$ \bigskip}
RELUCTANT HASTINGS' WITNESS MAY FACE PRISON TERM
WILLIAM E. GIBSON, Washington Bureau Chief, SUN-SENTINEL
WASHINGTON -- A Senate committee announced on Thursday it will seek a civil contempt order to imprison William Borders unless he agrees to testify about his alleged bribery conspiracy with U.S. District Judge Alcee Hastings.
If the action is approved by the full Senate and a federal court, Borders could face imprisonment until the end of the impeachment trial in the fall.
The committee also warned Borders about possible later criminal contempt charges, which could bring up to a one-year sentence.
"You can purge yourself of that (by testifying)," Sen. Arlen Specter, R- Pa., told Borders. "You've got the keys to the jail in your pocket."
Borders emphatically defied the special committee's demand for his testimony.
The defiant witness, who served 33 months in prison for conspiring to arrange a bribe, complained that he is being harassed for asserting his constitutional right to remain silent.
"I believe my legal debt to society has been paid in full," Borders said.
The former lawyer said he had hoped, after being imprisoned, paroled and disbarred, "that I would finally be left alone to pursue that sector of happiness which is left to me."
Borders said the committee's threats of contempt orders "just heightens my conclusion that answering any questions from this committee would subject me to further jeopardy of a criminal nature."
The Court of Appeals ruled on Wednesday that if the witness failed to comply with an earlier district court order, the committee "may seek enforcement through civil contempt proceedings."
Chairman Jeff Bingaman, D-N.M., said the committee would seek authorization from the full Senate to pursue civil contempt enforcement against Borders, which must then be considered by a federal court.
The evidentiary phase of Hastings' trial is set to end on Aug. 4, but Bingaman said the trial record will remain open if Borders agrees to testify during the Senate's August recess or anytime after the Senate resumes its session on Sept. 6.
Committee members said Borders' testimony is vitally important in their collection of evidence.
Sen. Joseph Lieberman, D-Conn., noted that Borders has been granted immunity from further prosecution in an effort to compel his testimony.
"It seems to me that you are the only person on this Earth who really knows what happened here, whether you acted independently (to arrange a bribe) or whether you acted together with Judge Hastings."
A fine and true explanation! One of the best things about being a witch is that we can tailor our needs/practices/beliefs to what resonates for each of us as an individual. We, or at least I, can borrow from hoodoo, Santeria, the druids, or the Vikings, etc., as long as it is done with respect and knowledge. I feel comfortable, challenged, and inspired.
# Disable hyphenation in apa6e's footnotes

I would like to disable hyphenation in apa6e's footnotes, to match APA's 6th edition guidelines.

apa6e uses ragged2e, and the ragged2e documentation (pdf) claims that certain commands can make hyphenation "almost impossible":

    \setlength{\RaggedLeftRightskip}{0pt plus 1fil}
    \setlength{\RaggedRightRightskip}{0pt plus 1fil}

These commands disable hyphenation in the body, but not in the (endnotes) footnotes.

How can I either (1) disable ragged2e's hyphenation in apa6e, or (2) edit apa6e.cls to stop using ragged2e altogether?

Example:

    \documentclass[endnotes]{apa6e}
    \title{}
    \author{}
    \shorttitle{}
    \authornote{}

    \setlength{\RaggedLeftRightskip}{0pt plus 1fil}
    \setlength{\RaggedRightRightskip}{0pt plus 1fil}

    \begin{document}

    The body does not have any hyphenation. Good. Even
    reallyreallyreallyreallyreallyreallyreallyreallylongwords remain
    unhyphenated.\
    %
    \footnote{However, the footnotes become hyphenated. The
    reallyreallyreallyreallyreallyreallyreallyreallylongwords become
    hyphenatated.}

    \end{document}

Comment: be careful -- if you have any really, Really, REALLY long words, they may be longer than your line. (probably not a real concern in english and a wide-ish text width, but if you get into german or some other agglutinative languages, or maybe organic chemistry, you could find yourself with some real problems.) – barbara beeton Jul 10 '11 at 13:34

## Answer

You can patch the command that formats the endnote to add \RaggedRight to it. The easiest way to do this is with the etoolbox package, using the \appto macro, which appends code to an existing macro.

    \usepackage{etoolbox}
    \appto{\enoteformat}{\RaggedRight}

A complete example:

    \documentclass[endnotes]{apa6e}
    \usepackage{etoolbox}
    \title{}
    \author{}
    \shorttitle{}
    \authornote{}
    \setlength{\RaggedLeftRightskip}{0pt plus 1fil}
    \setlength{\RaggedRightRightskip}{0pt plus 1fil}

    % Add \RaggedRight to the \enoteformat command (from the endnotes package)
    \apptocmd{\enoteformat}{\RaggedRight}{}{}
    \begin{document}

    The body does not have any hyphenation. Good. Even
    reallyreallyreallyreallyreallyreallyreallyreallylongwords remain
    unhyphenated.\
    %
    \footnote{However, the footnotes become hyphenated. The
    reallyreallyreallyreallyreallyreallyreallyreallylongwords become
    hyphenatated.}

    \end{document}

Comments:

- Works great. Thanks! – newresearcher Jul 10 '11 at 15:19
- In this case, the simpler \appto command also works. I've updated the answer to reflect this. (There's no real difference between the two in this case.) – Alan Munn Jul 10 '11 at 17:37
- The difference between \appto and \apptocmd is that the latter applies also to commands with mandatory arguments. The extra work of \apptocmd is not needed when the command has no arguments. However \apptocmd doesn't work on commands with an optional argument. – egreg Jul 10 '11 at 17:42
- @egreg Thanks. That's clearer. – Alan Munn Jul 10 '11 at 18:10

## Answer

    \PassOptionsToPackage{originalparameters}{ragged2e}% for the body
    \documentclass[endnotes]{apa6e}
    \makeatletter% for the footnotes
    \renewcommand{\footnote}[1]{\def\apaSIXe@hasendnotes\relax{}\endnote{\raggedright#1}}
    \makeatother
    \title{}
    \author{}
    \shorttitle{}
    \authornote{}
    \begin{document}

    The body does not have any hyphenation. Good. Even
    reallyreallyreallyreallyreallyreallyreallyreallylongwords remain
    unhyphenated.\
    %
    \footnote{However, the footnotes become hyphenated. The
    reallyreallyreallyreallyreallyreallyreallyreallylongwords become
    hyphenatated.}

    \end{document}
# Find the derivative of the functions

1. Aug 19, 2008

### fr33pl4gu3

The question is as below:

g(x) = (2x^2 + x + 1) / (x^2 + 2x + 1)

It says "find the derivative of the function", but I don't get what is meant by that. The derivative is supposed to be a dy/dx kind of thing, right? All the tutorials are so simple, but how do you do this question?

2. Aug 19, 2008

### HallsofIvy (Staff Emeritus)

First, I am going to move this. Just because it is about a derivative does not mean it has anything to do with differential equations!

Yes, the derivative is dy/dx or, in this case, because the function is called "g", dg/dx.

Since g(x) is defined as a fraction, do you know the "quotient rule"?
$$\frac{d \frac{f}{g}}{dx}= \frac{\frac{df}{dx}g- f\frac{dg}{dx}}{g^2}$$

3. Aug 19, 2008

### fr33pl4gu3

No, but I managed to find two sites about the question, and I got the final answer below. I wonder if this is right?

A: (4x + 1) / (2x + 2)

4. Aug 19, 2008

### HallsofIvy (Staff Emeritus)

No, it appears that you have just differentiated the numerator and denominator separately and ignored the formula I gave.
$$\frac{d\frac{f}{g}}{dx}\ne \frac{\frac{df}{dx}}{\frac{dg}{dx}}$$

Your function is g(x) = (2x^2 + x + 1) / (x^2 + 2x + 1), which is of the form f(x)/h(x) with f(x) = 2x^2 + x + 1 and h(x) = x^2 + 2x + 1. Then f'(x) = 4x + 1 and h'(x) = 2x + 2. The formula I gave,
$$\frac{d\frac{f}{h}}{dx}= \frac{\frac{df}{dx}h- f\frac{dh}{dx}}{h^2},$$
then gives
$$\frac{dg}{dx}= \frac{(4x+1)(x^2+2x+1)- (2x^2+x+1)(2x+2)}{(x^2+2x+1)^2} = \frac{3x^2+2x-1}{(x^2+2x+1)^2},$$
which, I believe, does not simplify any more.

5. Aug 19, 2008

### NoMoreExams

If you are not familiar with the quotient rule (which you should change immediately), you might be familiar with the product rule, in which case rewrite
$$g(x) = \frac{2x^{2} + x + 1}{x^{2} + 2x + 1}$$
as
$$g(x) = \left( 2x^{2} + x + 1 \right) \left(x^{2} + 2x + 1 \right)^{-1}$$

6. Aug 19, 2008

### fr33pl4gu3

New question: find the derivative of the function f(x) = sin^2 x (3x^2 - 2)^5. My attempt: cos^2 x (3x^2 - 2)^5 + sin^2 x (30x - 10)^4. Can I simplify until the answer: (3x^2 - 2)^5 + (30x - 10)^4?

7. Aug 19, 2008

### HallsofIvy (Staff Emeritus)

I have no idea what you are doing! You are simply handing us purported "answers" (they are in fact not even close to correct) without showing any work.

Do you know the "chain rule"? Do you know what the derivative of sin x is?

8. Aug 19, 2008

### fr33pl4gu3

The derivative of sin x is cos x, right?

9. Aug 19, 2008

### NoMoreExams

Yes. Now do you know the derivative of $$f(g(x))$$?

10. Aug 19, 2008

### fr33pl4gu3

How is the product rule different from the chain rule?

11. Aug 19, 2008

### rootX

The product rule is for f(x)*g(x), so d/dx [f(x)*g(x)] = f'*g + f*g', while the chain rule is for f[g(x)]: d/dx f[g(x)] = g'(x)*f'[g(x)].

12. Aug 19, 2008

### Feldoh

Chain rule: $$\frac{d}{dx}[f(g(x))] = g'(x)\,f'(g(x))$$

Product rule: $$\frac{d}{dx}[f(x)g(x)] = f'(x)g(x)+g'(x)f(x)$$

Can you see the differences? It sounds like you're trying to teach yourself calculus starting with derivative shortcuts; that's a bad idea. Try looking in the tutorials section for video lectures/e-books instead of reading tutorials on shortcuts.

Edit: Damn, beaten to it XD

13. Aug 19, 2008

### NoMoreExams

I think the first step is understanding the difference between function multiplication and function composition. Do you know that difference from your algebra class?

14. Aug 19, 2008

### fr33pl4gu3

So the answer should be something like this: (30x - 10)^4 * cos^2(3x^2 - 2)^5?

15. Aug 19, 2008

### rootX

>> d/dx '(sin(x))^2*(3*x^2-2)^5'

It's not. It's 2*sin(x)*(3*x^2-2)^5*cos(x) + 30*sin(x)^2*(3*x^2-2)^4*x.

This question seems to be above your level right now. Try some simpler questions first for a good understanding of the differentiation rules, and then try this one.

16. Aug 19, 2008

### fr33pl4gu3

Isn't it f(x) = sin^2 x and g(x) = (3x^2 - 2)^5? And how is it possible that it evolves to that stage, and where do the cos x and the 30 come from?

17. Aug 19, 2008

### rootX

Yes, so it looks like f[p(x)]*g[q(x)].

Hint: p(x) = sin(x) and f[p(x)] = p(x)^2; similarly q(x) = 3x^2 - 2 and g(x) = q(x)^5.

Therefore d/dx f[p(x)]*g[q(x)] = ??

Use the chain rule and the product rule together.

18. Aug 19, 2008

### fr33pl4gu3

Can anyone show me the whole solution to this question? This is my assignment:

f(x) = sin^2 x (3x^2 - 2)^5

19. Aug 19, 2008

### Diffy

No, I certainly hope no one just shows you the answer; after all, it is YOUR assignment. rootX's response should be enough to help you figure it out. What about this derivative is still confusing you? Give it a shot yourself.

20. Aug 19, 2008

### fr33pl4gu3

Is it f[p(x)]*g[q(x)] or f'[p(x)]*g'[q(x)]?
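The two closed-form derivatives worked out in this thread (posts 4 and 15) can be double-checked numerically. Here is a quick sketch using central differences; it is not part of the original thread:

```python
import math

def central_diff(f, x, h=1e-6):
    """Two-sided numerical derivative of f at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Quotient-rule result for g(x) = (2x^2 + x + 1) / (x^2 + 2x + 1)
def g(x):
    return (2*x**2 + x + 1) / (x**2 + 2*x + 1)

def dg(x):
    return (3*x**2 + 2*x - 1) / (x**2 + 2*x + 1)**2

# Product + chain rule result for f(x) = sin^2(x) * (3x^2 - 2)^5
def f(x):
    return math.sin(x)**2 * (3*x**2 - 2)**5

def df(x):
    return (2*math.sin(x)*math.cos(x)*(3*x**2 - 2)**5
            + 30*x*math.sin(x)**2*(3*x**2 - 2)**4)

for x in (0.5, 1.1, -0.7):
    assert abs(central_diff(g, x) - dg(x)) < 1e-5
    assert abs(central_diff(f, x) - df(x)) < 1e-3 * (1 + abs(df(x)))
```

The closed forms match the numerical derivatives at several sample points, confirming both results from the thread.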
Q: angular (jquery timepicker) unable to get value of input

I want to make a custom directive with the jquery plugin timepicker. I'm not getting the input value in the console, it says undefined.
here's a plunkr
<table class="table table-bordered">
<thead>
<tr>
<th>Time From</th>
<th>Time To</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<input type="text" ng-model="row1" size="6" disabled/>
</td>
<td>
<input type="text" ng-model="dup_row1" size="6" timepicki/>
{{dup_row1}}
</td>
</tr>
</tbody>
</table>
var app = angular.module('myApp', []);
app.directive('timepicki', [
function() {
var link;
link = function(scope, element, attr, ngModel) {
element.timepicki();
};
return {
restrict: 'A',
link: link,
require: 'ngModel'
};
}
])
app.controller('ctrl', function($scope) {
$scope.row1 = "00:00"
$scope.submit=function(){
console.log($scope.dup_row1)
}
});
A: The code you've posted is not the same as the code in your plunker.
The AngularJS Developer Guide says:

Use controllers to:

* Set up the initial state of the $scope object.
In your example above, you log the value of $scope.dup_row1 on submit, but your controller never sets that value, and as such it is undefined.
The following will print "hello" to the console;
app.controller('ctrl', function($scope) {
$scope.row1 = "00:00"
$scope.dup_row1 = "hello"
$scope.submit=function(){
console.log($scope.dup_row1)
}
});
\section{Introduction}
The human brain generates complex behaviors via the dynamics of electrical activity in a network of $\sim10^{11}$ neurons each making $\sim10^{4}$ synaptic connections. As there is no known centralized authority determining which specific connections a neuron makes or specifying the weights of individual synapses, synaptic connections must be established based on local rules. Therefore, a major challenge in neuroscience is to determine local synaptic learning rules that would ensure that the network acts coherently, i.e. guarantee robust network self-organization.
Much work has been devoted to the self-organization of neural networks for solving unsupervised computational tasks using Hebbian and anti-Hebbian learning rules \citep{foldiak1990forming,foldiak1989adaptive,rubner1989self,rubner1990development,carlson1990anti,plumbley1993hebbian,leen1991,plumbley1993efficient,linsker1997local}.
Unsupervised setting is natural in biology because large-scale labeled datasets are typically unavailable. Hebbian and anti-Hebbian learning rules are biologically plausible
because they are local: The weight of an (anti-)Hebbian synapse is proportional to the (minus) correlation in activity between the two neurons the synapse connects.
In networks for dimensionality reduction, for example, feedforward connections use Hebbian rules and lateral connections use anti-Hebbian rules, Figure 1. Hebbian rules attempt to align each neuronal feature vector, whose components are the weights of synapses impinging onto the neuron, with the input space direction of greatest variance. Anti-Hebbian rules mediate competition among neurons which prevents their feature vectors from aligning in the same direction. A rivalry between the two kinds of rules results in an equilibrium where synaptic weight vectors span the principal subspace of the input covariance matrix, i.e. the subspace spanned by the eigenvectors corresponding to the largest eigenvalues.
However, in most existing single-layer networks, Figure 1, Hebbian and anti-Hebbian learning rules were postulated rather than derived from a principled objective. Having such a derivation should yield better-performing rules and a deeper understanding than has been achieved using heuristic rules. But, until recently, all derivations of single-layer networks from principled objectives led to biologically implausible non-local learning rules, where the weight of a synapse depends on the activities of neurons other than the two the synapse connects.
Recently, single-layer networks with local learning rules have been derived from similarity matching objective functions \citep{pehlevan2015MDS,pehlevan2014NMF,hu2014SMF}. But why do similarity matching objectives lead to neural networks with local, Hebbian and anti-Hebbian learning rules? A clear answer to this question has been lacking.
Here, we answer this question by performing several illuminating variable transformations. Specifically, we reduce the full network optimization problem to a set of trivial optimization problems for each synapse which can be solved locally. Eliminating neural activity variables leads to a min-max objective in terms of feedforward and lateral synaptic weight matrices. This finally formalizes the long-held intuition about the adversarial relationship of Hebbian and anti-Hebbian learning rules.
In this paper, we make the following contributions. In Section \ref{S2}, we present a more transparent derivation of the previously proposed online similarity matching algorithm for Principal Subspace Projection (PSP). In Section \ref{S3}, we propose a novel objective for PSP combined with spherizing, or whitening, the data, which we name Principal Subspace Whitening (PSW), and derive from it a biologically plausible online algorithm. Also, in Sections \ref{S2} and \ref{S3}, we demonstrate that stability in the offline setting guarantees projection onto the principal subspace and give principled learning rate recommendations. In Section \ref{S4}, by eliminating activity variables from the objectives, we derive min-max formulations of PSP and PSW which lend themselves to game-theoretical interpretations. In Section \ref{S5}, by expressing the optimization objectives in terms of feedforward synaptic weights only, we arrive at novel formulations of dimensionality reduction in terms of fractional
powers of matrices. In Section \ref{S6}, we demonstrate numerically that the performance of our online algorithms is superior to the heuristic ones.
\section{From similarity matching to Hebbian/anti-Hebbian networks for PSP}\label{S2}
\subsection{Derivation of a mixed PSP from similarity matching}
The PSP problem is formulated as follows. Given $T$ centered input data samples, ${\bf x}_t \in \mathbb{R}^n$, find $T$ projections, ${\bf y}_t \in \mathbb{R}^k$, onto the principal subspace ($k\le n$), i.e. the subspace spanned by eigenvectors corresponding to the $k$ top eigenvalues of the input covariance matrix:
\begin{align}\label{Cdef}
{\bf C} \equiv \frac 1 T \sum_{t=1}^{T} {\bf x}_t {\bf x}_t^\top = \frac 1 T {\bf X}{\bf X}^\top,
\end{align}
where we resort to a matrix notation by concatenating input column vectors into ${\bf X}=\left[{\bf x}_1,\ldots,{\bf x}_T\right]$. Similarly, outputs are ${\bf Y}=\left[{\bf y}_1,\ldots,{\bf y}_T\right]$.
Our goal is to derive a biologically plausible single-layer neural network implementing PSP by optimizing a principled objective. Biological plausibility requires that the learning rules are local, i.e. synaptic weight update depends on the activity of only the two neurons the synapse connects. The only PSP objective known to yield a single-layer neural network with local learning rules is based on similarity matching \citep{pehlevan2015MDS}. This objective, borrowed from Multi-Dimensional Scaling (MDS), minimizes the mismatch between the similarity of inputs and outputs \citep{mardia1980multivariate,williams01ona,cox2000multidimensional}:
\begin{align}\label{SM}
{\rm PSP:} \hspace{3cm} \min_{{\bf Y}\in \mathbb{R}^{k\times T}} \frac 1{T^2} \left\Vert {\bf X}^\top{\bf X}-{\bf Y}^\top{\bf Y}\right\Vert_F^2 .
\end{align}
Here, similarity is quantified by the inner products between all pairs of inputs (outputs) comprising the Grammians ${\bf X}^\top{\bf X}$ (${\bf Y}^\top{\bf Y}$).
One can understand intuitively that the objective \eqref{SM} is optimized by the projection onto the principal subspace by considering the following (for a rigorous proof see \citep{pehlevan2015normative,mardia1980multivariate,cox2000multidimensional}). First, substitute a Singular Value Decomposition (SVD) for matrices ${\bf X}$ and ${\bf Y}$ and note that the mismatch is minimized by matching right singular vectors of ${\bf Y}$ to that of ${\bf X}$. Then, rotating the Grammians to the diagonal basis reduces the minimization problem to minimizing the mismatch between the corresponding singular values squared. Therefore, ${\bf Y}$ is given by the top $k$ right singular vectors of ${\bf X}$ scaled by corresponding singular values. As the objective \eqref{SM} is invariant to the left-multiplication of ${\bf Y}$ by an orthogonal matrix, it has infinitely many degenerate solutions. One such solution corresponds to the Principal Component Analysis (PCA).
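This characterization is easy to verify numerically. The following sketch (illustrative only, not part of the derivation) builds the optimal ${\bf Y}$ from the top-$k$ singular triplets of ${\bf X}$ and confirms that the squared Frobenius mismatch equals the sum of fourth powers of the discarded singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, T = 8, 3, 100
X = rng.standard_normal((n, T))

# Optimal output: top-k right singular vectors scaled by singular values
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Y = np.diag(s[:k]) @ Vt[:k]

# Squared Frobenius mismatch between input and output Grammians
err = np.linalg.norm(X.T @ X - Y.T @ Y, 'fro')**2

# Equals the contribution of the discarded singular values
assert np.isclose(err, np.sum(s[k:]**4))
```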
Unlike non-neural-network formulations of PSP or PCA, similarity matching outputs principal components (scores) rather than principal eigenvectors of the input covariance (loadings). Such difference in formulation is motivated by our interest in PSP or PCA neural networks \citep{diamantaras1996principal} that output principal components, ${\bf y}_t$, rather than principal eigenvectors. Principal eigenvectors are not transmitted downstream of the network but can be recovered computationally from the synaptic weight matrices. Although synaptic weights do not enter the objective \eqref{SM}, in previous work \citep{pehlevan2015MDS}, they arose naturally in the derivation of the online algorithm (see below) and stored correlations between input and output neural activities.
Next, we derive the min-max PSP objective from Eq. \eqref{SM}, starting with expanding the square of the Frobenius norm:
\begin{align}\label{SMfull}
\argmin_{{\bf Y}\in \mathbb{R}^{k\times T}} \frac 1{T^2} \left\Vert {\bf X}^\top{\bf X}-{\bf Y}^\top{\bf Y}\right\Vert_F^2 = \argmin_{{\bf Y}\in \mathbb{R}^{k\times T}} \frac{1}{T^2} {\rm Tr}\left(-2{\bf X}^\top{\bf X}{\bf Y}^\top{\bf Y} + {\bf Y}^\top{\bf Y}{\bf Y}^\top{\bf Y} \right).
\end{align}
We can rewrite Eq. \eqref{SMfull} by introducing two new dynamical variable matrices in place of covariance matrices $\frac 1T {\bf X}{\bf Y}^\top$ and $\frac 1T {\bf Y}{\bf Y}^\top$:
\begin{align}\label{S0}
\min_{{\bf Y}\in \mathbb{R}^{k\times T}} \min_{{\bf W}\in \mathbb{R}^{k\times n}}\max_{{\bf M}\in \mathbb{R}^{k\times k}} \, &L_{PSP}({\bf W},{\bf M},{\bf Y}), {\rm \; \; where}
\end{align}
\begin{align}\label{SMMW}
\qquad L_{PSP}({\bf W},{\bf M},{\bf Y}) &\equiv {\rm Tr}\left( -\frac{4}{T}{\bf X}^\top{\bf W}^\top{\bf Y} + \frac{2}{T} {\bf Y}^\top{\bf M}{\bf Y} \right) + 2 {\rm Tr}\left({\bf W}^\top{\bf W}\right) - {\rm Tr}\left({\bf M}^\top{\bf M}\right).
\end{align}
To see that Eq. \eqref{SMMW} is equivalent to Eq. \eqref{SMfull} find optimal ${\bf W}^*=\frac 1T {\bf Y}{\bf X}^\top$ and ${\bf M}^*=\frac 1T {\bf Y}{\bf Y}^\top$ by setting the corresponding derivatives of objective \eqref{SMMW} to zero. Then, substitute ${\bf W}^*$ and ${\bf M}^*$ into Eq. \eqref{SMMW} to obtain \eqref{SMfull}.
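This equivalence can also be checked numerically: for an arbitrary fixed ${\bf Y}$, evaluating $L_{PSP}$ at the optimal ${\bf W}^*$ and ${\bf M}^*$ recovers the expanded similarity matching objective (an illustrative sketch with random matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, T = 6, 2, 50
X = rng.standard_normal((n, T))
Y = rng.standard_normal((k, T))

# Optimal covariance variables for this fixed Y
W = Y @ X.T / T
M = Y @ Y.T / T

# Mixed objective L_PSP(W*, M*, Y)
L = (np.trace(-4/T * X.T @ W.T @ Y + 2/T * Y.T @ M @ Y)
     + 2 * np.trace(W.T @ W) - np.trace(M.T @ M))

# Expanded similarity matching objective
expanded = np.trace(-2/T**2 * X.T @ X @ Y.T @ Y
                    + 1/T**2 * Y.T @ Y @ Y.T @ Y)
assert np.isclose(L, expanded)
```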
Finally, we exchange the order of minimization with respect to ${\bf Y}$ and ${\bf W}$ as well as the order of minimization with respect to ${\bf Y}$ and maximization with respect to ${\bf M}$ in Eq. \eqref{SMMW}. The last exchange is justified by the saddle point property (see Proposition \ref{minmaxPSP} in Appendix \ref{A1}). Then, we arrive at the following min-max optimization problem:
\begin{align}\label{SMMW2}
\min_{{\bf W}\in \mathbb{R}^{k\times n}}\max_{{\bf M}\in \mathbb{R}^{k\times k}} \min_{{\bf Y}\in \mathbb{R}^{k\times T}}L_{PSP}({\bf W},{\bf M},{\bf Y}) ,
\end{align}
where $L_{PSP}({\bf W},{\bf M},{\bf Y})$ is defined in Eq. \eqref{SMMW}. We call this a mixed objective because it includes both output variables, ${\bf Y}$, and covariances, ${\bf W}$ and ${\bf M}$.
\subsection{Offline PSP algorithm}
In this section, we present an offline optimization algorithm to solve the PSP problem and analyze fixed points of the corresponding dynamics. These results will be used in the next Section for the biologically plausible online algorithm implemented by neural networks.
In the offline setting, we can solve Eq. \eqref{SMMW2} by the alternating optimization approach used commonly in neural networks literature \citep{olshausen1996emergence,olshausen1997sparse,arora2015simple}. We, first, minimize with respect to ${\bf Y}$ while keeping ${\bf W}$ and ${\bf M}$ fixed,
\begin{align}
{\bf Y}^* = \argmin_{{\bf Y}\in \mathbb{R}^{k\times T}} L_{PSP}({\bf W},{\bf M},{\bf Y}),
\end{align}
and, second, make a gradient descent-ascent step with respect to ${\bf W}$ and ${\bf M}$ while keeping ${\bf Y}$ fixed:
\begin{align}\label{gda}
\left[\begin{array}{c} {\bf W} \hspace{1cm} {\bf M} \end{array}\right] &\longleftarrow \left[\begin{array}{c} {\bf W} \hspace{1cm} {\bf M} \end{array}\right] + \left[\begin{array}{c} - \eta \frac {\partial L_{PSP}({\bf W},{\bf M},{\bf Y}^*)}{\partial {\bf W}} \hspace{1cm} \frac{\eta}{\tau}\frac {\partial L_{PSP}({\bf W},{\bf M},{\bf Y}^*)}{\partial {\bf M}} \end{array}\right],
\end{align}
where $\eta$ is the ${\bf W}$ learning rate and $\tau > 0$ is a ratio of learning rates for ${\bf W}$ and ${\bf M}$. In Appendix \ref{LSPSP}, we analyze how $\tau$ affects linear stability of the fixed point dynamics. These two phases are iterated until convergence (Algorithm 1)\footnote{This alternating optimization is identical to a gradient descent-ascent (see Proposition \ref{lem1} in Appendix \ref{chainrule}) in ${\bf W}$ and ${\bf M}$ on the objective:
\begin{align*}
l_{PSP}({\bf W},{\bf M}) \equiv \min_{{\bf Y}\in \mathbb{R}^{k\times T}} L_{PSP}({\bf W},{\bf M},{\bf Y}).
\end{align*}
%
}.
\begin{algorithm}[H]
\caption{Offline min-max PSP\label{offlinePSPalg}}
\begin{algorithmic}[1]
\STATE Initialize ${\bf W}$. Initialize ${\bf M}$ as a positive definite matrix.
\STATE Iterate until convergence:
\begin{ALC@g}
\STATE
Minimize Eq. \eqref{SMMW} with respect to ${\bf Y} $, keeping ${\bf W}$ and ${\bf M}$ fixed:
%
\begin{align}\label{Y}
{\bf Y} = {\bf M}^{-1}{\bf W}{\bf X}.
\end{align}
%
\STATE Perform a gradient descent-ascent step with respect to ${\bf W}$ and ${\bf M}$ for a fixed ${\bf Y}$:
%
\begin{align}
{\bf W}&\longleftarrow {\bf W} + 2\eta \left(\frac 1T {\bf Y}{\bf X}^\top- {\bf W}\right), \nonumber \\
{\bf M} & \longleftarrow {\bf M} + \frac \eta{\tau} \left(\frac 1T {\bf Y}{\bf Y}^\top-{\bf M}\right).
\end{align}
%
where the step size, $0<\eta<1$, may depend on the iteration.
\end{ALC@g}
\end{algorithmic}
\end{algorithm}
Optimal ${\bf Y}$ in Eq. \eqref{Y} exists because ${\bf M}$ stays positive definite if initialized as such.
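For concreteness, the offline min-max PSP iteration of Algorithm~\ref{offlinePSPalg} can be sketched in NumPy as follows; the dimensions, synthetic data model, and learning rates below are arbitrary illustrative choices, not recommendations:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, T = 10, 3, 500

# Synthetic data with a well-separated top-k principal subspace
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
scales = np.array([5.0, 4.0, 3.0] + [0.3] * (n - 3))
X = Q @ np.diag(scales) @ rng.standard_normal((n, T))
X -= X.mean(axis=1, keepdims=True)           # center the inputs
C = X @ X.T / T                              # input covariance

W = 0.1 * rng.standard_normal((k, n))
M = np.eye(k)                                # positive definite init
eta, tau = 0.02, 0.5

for _ in range(3000):
    Y = np.linalg.solve(M, W @ X)            # minimize over Y: Y = M^{-1} W X
    W += 2 * eta * (Y @ X.T / T - W)         # gradient descent step in W
    M += (eta / tau) * (Y @ Y.T / T - M)     # gradient ascent step in M

F = np.linalg.solve(M, W)                    # neural filters F = M^{-1} W

# At the fixed point the filters are orthonormal ...
assert np.allclose(F @ F.T, np.eye(k), atol=1e-2)

# ... and span the principal subspace of C
w, V = np.linalg.eigh(C)
Vk = V[:, -k:]
assert np.allclose(F @ Vk @ Vk.T, F, atol=1e-2)
```

Here $\tau=0.5$ satisfies the stability condition of Theorem~\ref{mainLSPSP} below for any spectrum, since the bound in Eq.~\eqref{tauPSP} always exceeds $1/2$.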
\subsection{Linearly stable fixed points of Algorithm \ref{offlinePSPalg} correspond to the PSP}
Here we demonstrate that convergence of Algorithm \ref{offlinePSPalg} to fixed ${\bf W}$ and ${\bf M}$ implies that ${\bf Y}$ is a PSP of ${\bf X}$. To this end, we approximate the gradient descent-ascent dynamics in the limit of small learning rate with the system of differential equations:
\begin{align}\label{gdPSP}
{\bf Y}(t) &= {\bf M}^{-1}(t){\bf W}(t){\bf X}, \nonumber \\
\frac{d {\bf W}(t)}{dt} &= \frac 2{T}{\bf Y}(t){\bf X}^\top -2{\bf W}(t), \nonumber \\
{\tau}\frac{d{\bf M}(t)}{dt} &= \frac{1}{T}{\bf Y}(t){\bf Y}(t)^\top - {\bf M}(t),
\end{align}
where $t$ is now the time index for gradient descent-ascent dynamics.
To state our main result in Theorem \ref{mainLSPSP}, we define the ``filter matrix" ${\bf F}(t)$ whose rows are ``neural filters"
\begin{align}\label{fdef}
{\bf F}(t) := {\bf M}^{-1}(t){\bf W}(t),
\end{align}
so that, according to Eq. \eqref{Y},
\begin{align}
{\bf Y}(t) = {\bf F}(t){\bf X}.
\end{align}
\begin{Th}\label{mainLSPSP} Fixed points of the dynamical system \eqref{gdPSP} have the following properties:
%
\begin{enumerate}
\item The neural filters, ${\bf F}$, are orthonormal, i.e. ${\bf F F^\top}={\bf I}$.
\item The neural filters span a $k$-dimensional subspace of $\mathbb{R}^n$ which is spanned by some $k$ eigenvectors of the input covariance matrix.
\item Stability of a fixed point requires that the neural filters span the {\bf principal} subspace of ${\bf X}$.
\item Suppose the neural filters span the principal subspace. Define
%
\begin{align}
\gamma_{ij} := 2+\frac{\left(\sigma_i-\sigma_j\right)^2}{\sigma_i\sigma_j},
\end{align}
%
where $i = 1,\ldots,k$, $j=1,\ldots,k$ and $\lbrace \sigma_1,\ldots,\sigma_k\rbrace$ are the top $k$ principal eigenvalues of ${\bf C}$. We assume ${\sigma}_k \neq \sigma_{k+1}$. This fixed point is linearly stable if and only if:
%
\begin{align}\label{tauPSP}
\tau < \frac{1}{2-4/\gamma_{ij}}
\end{align}
%
for all $(i,j)$ pairs. By linearly stable we mean that linear perturbations of ${\bf W}$ and ${\bf M}$ converge to a configuration in which the new neural filters are merely rotations within the principal subspace of the original neural filters.
\end{enumerate}
\end{Th}
\begin{proof} See Appendix \ref{LSPSP}. \end{proof}
Based on Theorem \ref{mainLSPSP} we claim that, provided the dynamics converges to a fixed point, Algorithm \ref{offlinePSPalg} has found a PSP of input data. Note that the orthonormality of the neural filters is desired and consistent with PSP since, in this approach, outputs, ${\bf Y}$, are interpreted as coordinates with respect to a basis spanning the principal subspace.
Theorem \ref{mainLSPSP} yields a practical recommendation for choosing learning rate parameters in simulations. In a typical situation, one will not know the eigenvalues of the covariance matrix a priori but can rely on the fact that $\gamma_{ij}\geq 2$. Then, Eq. \eqref{tauPSP} implies that for $\tau \leq 1/2$ the principal subspace is linearly stable, leading to numerical convergence and stability.
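This recommendation is easy to check numerically. The small sketch below (with arbitrary random eigenvalue pairs of our own choosing) verifies that $\gamma_{ij}\geq 2$ and that the stability bound on the right-hand side of Eq. \eqref{tauPSP} always exceeds $1/2$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random positive eigenvalue pairs (sigma_i, sigma_j).
sig = rng.uniform(0.1, 10.0, size=(1000, 2))
gamma = 2.0 + (sig[:, 0] - sig[:, 1]) ** 2 / (sig[:, 0] * sig[:, 1])

# Stability bound of Eq. (tauPSP); it diverges as sigma_i -> sigma_j.
bound = 1.0 / (2.0 - 4.0 / gamma)
```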
\subsection{Online neural min-max optimization algorithms}
Unlike the offline setting considered so far, where all the input data are available from the outset, in the online setting, input data are streamed to the algorithm sequentially, one at a time. The algorithm must compute the corresponding output before the next input arrives and transmit it downstream. Once transmitted, the output cannot be altered. Moreover, the algorithm cannot store in memory any sizable fraction of past inputs or outputs but only a few, $\mathcal{O}(n k)$, state variables.
Whereas developing algorithms for the online setting is more challenging than for the offline setting, it is necessary both for data analysis and for modeling biological neural networks. The size of modern datasets may exceed that of available RAM, and/or the output must be computed before the dataset is fully streamed. Biological neural networks operating on data streamed by the sensory organs are incapable of storing any significant fraction of it and must compute the output on the fly.
\begin{figure}
\centering
\includegraphics{NECO-03-17-2830-Figure1.eps}
\caption{Dimensionality reduction neural networks derived by min-max optimization in the online setting. A. Network with autapses. B. Network without autapses.}
\label{Fig1}
\end{figure}
\cite{pehlevan2015MDS} gave a derivation of a neural online algorithm for PSP, starting from the original similarity matching cost function \eqref{SM}. Here, instead, we start from the min-max form of similarity matching \eqref{SMMW2} and end up with a class of algorithms that reduce to the algorithm of \cite{pehlevan2015MDS} for special choices of learning rates. Our main contribution, however, is that the current derivation is simpler and more intuitive, offering insight into why similarity matching leads to local learning rules.
We start by rewriting the min-max PSP objective \eqref{SMMW2} as a sum of time-separable terms that can be optimized independently:
\begin{align}\label{SMMW2t}
\min_{{\bf W}\in \mathbb{R}^{k\times n}}\max_{{\bf M}\in \mathbb{R}^{k\times k}} \, \frac{1}{T} \sum_{t=1}^T l_{PSP,t} ({\bf W},{\bf M}),
\end{align}
where
\begin{align}
l_{PSP,t} ({\bf W},{\bf M}) \equiv 2 {\rm Tr}\left({\bf W}^\top{\bf W}\right) - {\rm Tr}\left({\bf M}^\top{\bf M}\right) + \min_{{\bf y}_t\in \mathbb{R}^{k\times 1}} l_t({\bf W},{\bf M},{\bf y}_t),
\end{align}
and
\begin{align}\label{lyapunov}
l_t({\bf W},{\bf M},{\bf y}_t)=-4{\bf x}_t^\top{\bf W}^\top{\bf y}_t + 2{\bf y}_t^\top{\bf M}{\bf y}_t.
\end{align}
This separation in time is a benefit of the min-max PSP objective \eqref{SMMW2}, and leads to a natural way to derive an online algorithm that was not available for the original similarity matching cost function \eqref{SM}.
To solve the optimization problem, Eq. \eqref{SMMW2t}, in the online setting, we optimize each ${l}_{PSP,t}$ sequentially. For each $t$, first, minimize Eq. \eqref{lyapunov} with respect to ${\bf y}_t$ while keeping ${\bf W}_t$ and ${\bf M}_t$ fixed. Second, make a gradient descent-ascent step with respect to ${\bf W}_t$ and ${\bf M}_t$ for fixed ${\bf y}_t$:
\begin{align}\label{ogda}
{\bf W}_{t+1}&= {\bf W}_t - \eta_t \frac {\partial l_{PSP,t}({\bf W}_t,{\bf M}_t)}{\partial {\bf W}_t}, \nonumber \\
{\bf M}_{t+1} & = {\bf M}_t + \frac{\eta_{t}}{\tau} \frac {\partial l_{PSP,t}({\bf W}_t,{\bf M}_t)}{\partial {\bf M}_t},
\end{align}
where $0<\eta_{t}<1$ is the ${\bf W}$ learning rate and $\tau>0$ is the ratio of ${\bf W}$ and ${\bf M}$ learning rates. As before, Proposition \ref{lem1} (Appendix \ref{chainrule}) ensures that the online gradient descent-ascent updates, Eq. \eqref{ogda}, follow from alternating optimization \citep{olshausen1996emergence,olshausen1997sparse,arora2015simple} of $l_{PSP,t}$.
\begin{algorithm}[H]
\caption{Online min-max PSP\label{onlinePSPalg}}
\begin{algorithmic}[1]
\STATE At $t=0$, initialize the synaptic weight matrices, ${\bf W}_1$ and ${\bf M}_1$. ${\bf M}_1$ must be symmetric and positive definite.
\STATE Repeat for each $t = 1,\ldots, T$
\begin{ALC@g}
\STATE Receive input ${\bf x}_t$
\STATE Neural activity: Run until convergence
\begin{align}\label{gdMDSon}
\frac{d{\bf y}_t(\gamma)}{d\gamma} = {\bf W}_t{\bf x}_t -{\bf M}_t{\bf y}_t.
\end{align}
\STATE Plasticity:
Update synaptic weight matrices,
\begin{align}\label{wmSM}
{\bf W}_{t+1} &= {\bf W}_{t} + 2\eta_{t} \left({\bf y}_t{\bf x}_t^\top-{\bf W}_t\right), \nonumber \\
{\bf M}_{t+1} &= {\bf M}_{t}+\frac{\eta_{t}}{\tau} \left({\bf y}_t{\bf y}_t^\top-{\bf M}_t\right).
\end{align}
\end{ALC@g}
\end{algorithmic}
\end{algorithm}
Algorithm \ref{onlinePSPalg} can be implemented by a biologically plausible neural network. The dynamics \eqref{gdMDSon} corresponds to neural activity in a recurrent circuit, where ${\bf W}_t$ is the feedforward synaptic weight matrix and $-{\bf M}_t$ is the lateral synaptic weight matrix, Fig. \ref{Fig1}A. Since ${\bf M}_t$ is always positive definite, Eq. \eqref{lyapunov} is a Lyapunov function for neural activity. Hence the dynamics is guaranteed to converge to a unique fixed point, ${\bf y}_t = {\bf M}^{-1}_t{\bf W}_t{\bf x}_t$, where matrix inversion is computed iteratively in a distributed manner.
Updates of the synaptic weight matrices, Eq. \eqref{wmSM}, can be interpreted as synaptic learning rules: Hebbian for feedforward and anti-Hebbian (due to the ``$-$'' sign in \eqref{gdMDSon}) for lateral synaptic weights. Importantly, these rules are local - the update of each synaptic weight depends only on the activity of the pair of neurons that the synapse connects - and therefore biologically plausible.
Even requiring full optimization with respect to ${\bf y}_t$, as opposed to a single gradient step with respect to ${\bf W}_t$ and ${\bf M}_t$, may have a biological justification: as neural activity dynamics is typically faster than synaptic plasticity, neural activity may settle before the arrival of the next input.
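The online loop of Algorithm \ref{onlinePSPalg} can be sketched in a few lines of NumPy. For brevity, this sketch solves the neural dynamics fixed point directly with a linear solve rather than running Eq. \eqref{gdMDSon} to convergence; the data distribution and learning-rate schedule are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

n, k, T = 5, 2, 5000
scales = np.array([3.0, 2.0, 1.0, 0.1, 0.1])   # axis-aligned population covariance

W = 0.1 * rng.standard_normal((k, n))
M = np.eye(k)                                  # symmetric positive definite
tau = 0.5

for t in range(T):
    x = scales * rng.standard_normal(n)        # streamed input sample
    y = np.linalg.solve(M, W @ x)              # fixed point of the neural dynamics
    eta = 1.0 / (100.0 + t)                    # decaying learning rate
    W += 2 * eta * (np.outer(y, x) - W)        # Hebbian feedforward update
    M += (eta / tau) * (np.outer(y, y) - M)    # anti-Hebbian lateral update

F = np.linalg.solve(M, W)                      # learned neural filters
```

Because each ${\bf M}$ update is a convex combination of ${\bf M}$ and the rank-one term ${\bf y}_t{\bf y}_t^\top$, ${\bf M}$ remains symmetric positive definite throughout.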
To see why similarity matching leads to local learning rules, let us consider Eqs. \eqref{SMMW2} and \eqref{SMMW2t}. Aside from separating in time, which is useful for deriving online learning rules, $L_{PSP}({\bf W},{\bf M},{\bf Y})$ also separates in synaptic weights and their pre- and postsynaptic neural activities,
\begin{align}
L_{PSP}({\bf W},{\bf M},{\bf Y})=\frac 1T\sum_t \left[\sum_{ij}\left( 2W_{ij}^2-4W_{ij}x_{t,j}y_{t,i}\right) - \sum_{ij}\left( M_{ij}^2-2M_{ij}y_{t,j}y_{t,i}\right)\right].
\end{align}
Therefore, a derivative with respect to a synaptic weight depends only on the quantities accessible to the synapse.
Finally, we address two potential criticisms of the neural PSP algorithm. The first is the existence of autapses, i.e. self-coupling of neurons, in our network, manifested in the nonzero diagonal of the lateral connectivity matrix, ${\bf M}$, Fig. \ref{Fig1}A. Whereas autapses are encountered in the brain, they are rarely seen in principal neurons \citep{ikeda2006autapses}. The second is the symmetry of lateral synaptic weights in our network, which is not observed experimentally. We derive an autapse-free network architecture (zeros on the diagonal of the lateral synaptic weight matrix ${\bf M}_t$) with asymmetric lateral connectivity, Fig. \ref{Fig1}B, by using coordinate descent \citep{pehlevan2015MDS} in place of gradient descent in the neural dynamics stage \eqref{gdMDSon} (see Appendix \ref{coord}). The resulting algorithm produces the same outputs as the current algorithm and, for the special case $\tau=1/2$ and $\eta_t = \eta/2$, reduces to the algorithm with ``forgetting'' of \cite{pehlevan2015MDS}.
\section{From constrained similarity matching to Hebbian/anti-Hebbian networks for PSW}\label{S3}
The variable substitution method we introduced in the previous section can be applied to other computational objectives in order to derive neural networks with local learning rules. To give an example, we derive a neural network for PSW, which can be formulated as a constrained similarity matching problem. This example also illustrates how an optimization constraint can be implemented by biological mechanisms.
\subsection{Derivation of PSW from constrained similarity matching}
The PSW problem is closely related to PSP: project centered input data samples onto the principal subspace ($k\le n$), and ``spherize" the data in the subspace so that the variances in all directions are 1. To derive a neural PSW algorithm, we use the similarity matching objective with an additional constraint:
%
\begin{align}\label{CSM}
{\rm PSW:} \hspace{3cm} \min_{{\bf Y}\in \mathbb{R}^{k\times T}} \frac 1{T^2} \left\Vert {\bf X}^\top{\bf X}-{\bf Y}^\top{\bf Y}\right\Vert_F^2, \qquad {\rm s.t.}\quad \frac 1T{\bf Y}{\bf Y}^\top = {\bf I}.
\end{align}
%
We rewrite Eq. \eqref{CSM} by expanding the Frobenius norm squared and dropping the ${\rm Tr} \left({\bf Y}^\top{\bf Y}{\bf Y}^\top{\bf Y}\right)$ term, which is constant under the constraint, thus reducing \eqref{CSM} to a constrained similarity alignment problem:
%
\begin{align}\label{CSA}
\min_{{\bf Y}\in \mathbb{R}^{k\times T}} \left ( -\frac 1{T^2} {\rm Tr}\left({\bf X}^\top{\bf X}{\bf Y}^\top{\bf Y}\right) \right ), \qquad {\rm s.t.}\quad \frac 1T{\bf Y}{\bf Y}^\top = {\bf I}.
\end{align}
%
To see that objective \eqref{CSA} is optimized by the PSW, first, substitute a Singular Value Decomposition (SVD) for matrices ${\bf X}$ and ${\bf Y}$ and note that the alignment is maximized by matching right singular vectors of ${\bf Y}$ to ${\bf X}$ and rotating to the diagonal basis (for a rigorous proof see \citep{pehlevan2015normative}). Since the squared singular values of ${\bf Y}$ equal unity, the objective \eqref{CSA} is reduced to a summation of $k$ squared singular values of ${\bf X}$ and is optimized by choosing the top $k$. Then, ${\bf Y}$ is given by the top $k$ right singular vectors of ${\bf X}$ scaled by $\sqrt{T}$. As before, objective \eqref{CSA} is invariant to the left-multiplication of ${\bf Y}$ by an orthogonal matrix and, therefore, has infinitely many degenerate solutions.
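This optimality argument can be sanity-checked numerically. The sketch below (on random data of our own choosing) verifies that ${\bf Y}=\sqrt{T}\,{\bf V}_k^\top$, built from the top-$k$ right singular vectors of ${\bf X}$, is feasible and attains a lower value of objective \eqref{CSA} than an arbitrary feasible competitor:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, T = 6, 2, 50
X = rng.standard_normal((n, T))

# Candidate optimum: top-k right singular vectors of X, scaled by sqrt(T).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Y_opt = np.sqrt(T) * Vt[:k, :]

def alignment(Y):
    # Objective of the constrained similarity alignment problem (CSA).
    return -np.trace(X.T @ X @ Y.T @ Y) / T**2

# Arbitrary feasible competitor: k random orthonormal rows, scaled by sqrt(T).
Q, _ = np.linalg.qr(rng.standard_normal((T, k)))
Y_rand = np.sqrt(T) * Q.T
```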
Next, we derive a mixed PSW objective from Eq. \eqref{CSA} by introducing two new dynamical variable matrices: the input-output correlation matrix, ${\bf W}=\frac 1T {\bf Y}{\bf X}^\top$, and the Lagrange multiplier matrix, ${\bf M}$, for the whitening constraint:
%
\begin{align}
\min_{{\bf Y}\in \mathbb{R}^{k\times T}} \min_{{\bf W}\in \mathbb{R}^{k\times n}}\max_{{\bf M}\in \mathbb{R}^{k\times k}} \, &L_{PSW}({\bf W},{\bf M},{\bf Y}),
\end{align}
%
where
%
\begin{align}\label{CSMMW}
\qquad L_{PSW}({\bf W},{\bf M},{\bf Y}) &\equiv -\frac{2}{T}{\rm Tr}\left({\bf X}^\top{\bf W}^\top{\bf Y}\right) + {\rm Tr}\left({\bf W}^\top{\bf W}\right) + {\rm Tr}\left({\bf M} \left(\frac{1}{T}{\bf Y}{\bf Y}^\top -{\bf I}\right)\right).
\end{align}
%
To see that Eq. \eqref{CSMMW} is equivalent to Eq. \eqref{CSA}, find optimal ${\bf W}^*=\frac 1T {\bf Y}{\bf X}^\top$ by setting the corresponding derivatives of the objective \eqref{CSMMW} to zero. Then, substitute ${\bf W}^*$ into Eq. \eqref{CSMMW} to obtain the Lagrangian of Eq. \eqref{CSA}.
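This substitution is easy to verify numerically. With arbitrary random matrices (our own choices below), eliminating ${\bf W}$ from $L_{PSW}$ reproduces the Lagrangian of Eq. \eqref{CSA}:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, T = 6, 3, 40
X = rng.standard_normal((n, T))
Y = rng.standard_normal((k, T))
M = rng.standard_normal((k, k))
M = M @ M.T + np.eye(k)                 # symmetric Lagrange multiplier matrix

def L_PSW(W, M, Y):
    # Mixed PSW objective (CSMMW).
    return (-2.0 / T * np.trace(X.T @ W.T @ Y)
            + np.trace(W.T @ W)
            + np.trace(M @ (Y @ Y.T / T - np.eye(k))))

W_star = Y @ X.T / T                    # optimal W for fixed Y

# Lagrangian of the constrained alignment problem (CSA).
lagrangian = (-np.trace(X.T @ X @ Y.T @ Y) / T**2
              + np.trace(M @ (Y @ Y.T / T - np.eye(k))))
```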
Finally, we exchange the order of minimization with respect to ${\bf Y}$ and ${\bf W}$ as well as the order of minimization with respect to ${\bf Y}$ and maximization with respect to ${\bf M}$ in Eq. \eqref{CSMMW} (see Proposition \ref{minmaxPSW} in Appendix \ref{A2} for a proof). Then, we arrive at the following min-max optimization problem with a mixed objective:
%
\begin{align}\label{CSMMW2}
\min_{{\bf W}\in \mathbb{R}^{k\times n}}\max_{{\bf M}\in \mathbb{R}^{k\times k}} \min_{{\bf Y}\in \mathbb{R}^{k\times T}}L_{PSW}({\bf W},{\bf M},{\bf Y}) ,
\end{align}
where $L_{PSW}({\bf W},{\bf M},{\bf Y})$ is defined in Eq. \eqref{CSMMW}.
\subsection{Offline PSW algorithm}
Next, we give an offline algorithm for the PSW problem, using the alternating optimization procedure as before. We solve Eq. \eqref{CSMMW2} by, first, optimizing with respect to ${\bf Y}$ for fixed ${\bf W}$ and ${\bf M}$ and, second, making a gradient descent-ascent step with respect to ${\bf W}$ and ${\bf M}$ while keeping ${\bf Y}$ fixed\footnote{This alternating optimization is identical to a gradient descent-ascent (see Proposition \ref{lem1} in Appendix \ref{chainrule}) in ${\bf W}$ and ${\bf M}$ on the objective:
%
\begin{align*}
l_{PSW}({\bf W},{\bf M}) \equiv \min_{{\bf Y}\in \mathbb{R}^{k\times T}} L_{PSW}({\bf W},{\bf M},{\bf Y}).
\end{align*}
%
}.
We arrive at the following algorithm:
%
\begin{algorithm}[H]
\caption{Offline min-max PSW\label{offlinePSWalg}}
\begin{algorithmic}[1]
\STATE Initialize ${\bf W}$. Initialize ${\bf M}$ as a positive definite matrix.
\STATE Iterate until convergence:
\begin{ALC@g}
\STATE
Minimize Eq. \eqref{CSMMW} with respect to ${\bf Y} $, keeping ${\bf W}$ and ${\bf M}$ fixed:
%
\begin{align}\label{YY}
{\bf Y} = {\bf M}^{-1}{\bf W}{\bf X}.
\end{align}
%
\STATE Perform a gradient descent-ascent step with respect to ${\bf W}$ and ${\bf M}$ for a fixed ${\bf Y}$:
%
\begin{align}\label{PSWWMupdate}
{\bf W}&\longleftarrow {\bf W} + 2\eta \left(\frac 1T {\bf Y}{\bf X}^\top- {\bf W}\right), \nonumber \\
{\bf M} & \longleftarrow {\bf M} + \frac{\eta}{\tau} \left(\frac 1T {\bf Y}{\bf Y}^\top-{\bf I}\right),
\end{align}
%
where the step size, $0<\eta<1$, may depend on the iteration.
\end{ALC@g}
\end{algorithmic}
\end{algorithm}
%
Convergence of Algorithm \ref{offlinePSWalg} requires the input covariance matrix, ${\bf C}$, to have at least $k$ non-zero eigenvalues. Otherwise, a consistent solution cannot be found because update \eqref{PSWWMupdate} forces ${\bf Y}$ to be full-rank while Eq. \eqref{YY} lowers its rank.
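As with PSP, the offline PSW iteration is a few lines of NumPy. In the sketch below (dimensions, seed, and learning rates are our own illustrative choices, with $\tau$ chosen well below the stability threshold), the output is whitened at convergence and the neural filters span the principal subspace:

```python
import numpy as np

rng = np.random.default_rng(5)
n, T, k = 5, 500, 2
X = rng.standard_normal((n, T)) * np.array([3.0, 2.0, 1.0, 0.1, 0.1])[:, None]

W = rng.standard_normal((k, n)) / np.sqrt(n)
M = np.eye(k)
eta, tau = 0.01, 0.1

for _ in range(8000):
    Y = np.linalg.solve(M, W @ X)                    # Y = M^{-1} W X
    W = W + 2 * eta * (Y @ X.T / T - W)              # descent in W
    M = M + (eta / tau) * (Y @ Y.T / T - np.eye(k))  # ascent in the multiplier M

Y = np.linalg.solve(M, W @ X)
F = np.linalg.solve(M, W)            # neural filters (not orthonormal for PSW)
```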
\subsection{Linearly stable fixed points of Algorithm \ref{offlinePSWalg} correspond to PSW}
Here we claim that convergence of Algorithm \ref{offlinePSWalg} to fixed ${\bf W}$ and ${\bf M}$ implies PSW of ${\bf X}$. In the limit of small learning rate, the gradient descent-ascent dynamics can be approximated with the system of differential equations:
%
\begin{align}\label{gdPSW}
{\bf Y}(t) &= {\bf M}^{-1}(t){\bf W}(t){\bf X}, \nonumber \\
\frac{d {\bf W}(t)}{dt} &= \frac 2{T}{\bf Y}(t){\bf X}^\top -2{\bf W}(t), \nonumber \\
{\tau}\frac{d{\bf M}(t)}{dt} &= \frac{1}{T}{\bf Y}(t){\bf Y}(t)^\top - {\bf I},
\end{align}
%
where $t$ is now the time index for gradient descent-ascent dynamics. We again define the neural filter matrix ${\bf F} = {\bf M}^{-1}{\bf W}$.
\begin{Th}\label{mainLSPSW} Fixed points of the dynamical system \eqref{gdPSW} have the following properties:
%
\begin{enumerate}
\item The outputs are whitened, i.e. $\frac 1T {\bf Y}{\bf Y}^\top = {\bf I}$.
\item The neural filters span a $k$-dimensional subspace in $\mathbb{R}^n$ which is spanned by some $k$ eigenvectors of the input covariance matrix.
\item Stability of the fixed point requires that the neural filters span the {\bf principal} subspace of ${\bf X}$.
\item Suppose the neural filters span the principal subspace. This fixed point is linearly stable if and only if
%
\begin{align}\label{tauPSW}
\tau < \frac{\sigma_i+\sigma_j}{2\left(\sigma_i-\sigma_j\right)^2}
\end{align}
%
for all $(i,j)$ pairs, $i\neq j$. By linear stability we mean that linear perturbations of ${\bf W}$ and ${\bf M}$ converge to a rotation of the original neural filters within the principal subspace.
\end{enumerate}
\end{Th}
\begin{proof} See Appendix \ref{proofmainLSPSW}. \end{proof}
Based on Theorem \ref{mainLSPSW} we claim that, provided Algorithm \ref{offlinePSWalg} converges, this fixed point corresponds to a PSW of input data. Unlike the PSP case, the neural filters are not orthonormal.
\subsection{Online algorithm for PSW}
As before, we start by rewriting the min-max PSW objective \eqref{CSMMW2} as a sum of time-separable terms that can be optimized independently:
%
\begin{align}\label{CSMMW2t}
\min_{{\bf W}\in \mathbb{R}^{k\times n}}\max_{{\bf M}\in \mathbb{R}^{k\times k}} \, \frac{1}{T} \sum_{t=1}^T l_{PSW,t} ({\bf W},{\bf M}),
\end{align}
%
where
%
\begin{align}
l_{PSW,t} ({\bf W},{\bf M}) \equiv {\rm Tr}\left({\bf W}^\top{\bf W}\right) - {\rm Tr}\left({\bf M}\right) + \frac 12 \min_{{\bf y}_t\in \mathbb{R}^{k\times 1}} l_t({\bf W},{\bf M},{\bf y}_t),
\end{align}
%
and $l_t({\bf W},{\bf M},{\bf y}_t)$ is defined in Eq. \eqref{lyapunov}.
%
In the online setting, Eq. \eqref{CSMMW2t} can be optimized by sequentially minimizing each ${ l}_{PSW,t}$. For each $t$, first, minimize \eqref{lyapunov} with respect to ${\bf y}_t$ for fixed ${\bf W}_t$ and ${\bf M}_t$, second, update ${\bf W}_t$ and ${\bf M}_t$ according to a gradient descent-ascent step for fixed ${\bf y}_t$:
%
\begin{align}\label{ogda2}
{\bf W}_{t+1}&= {\bf W}_t - \eta_t \frac {\partial l_{PSW,t}({\bf W}_t,{\bf M}_t)}{\partial {\bf W}_t}, \nonumber \\
{\bf M}_{t+1} & = {\bf M}_t + \frac{\eta_{t}}{\tau} \frac {\partial l_{PSW,t}({\bf W}_t,{\bf M}_t)}{\partial {\bf M}_t},
\end{align}
%
where $0<\eta_{t}<1$ is the ${\bf W}$ learning rate and $\tau>0$ is the ratio of ${\bf W}$ and ${\bf M}$ learning rates.
As before, Proposition \ref{lem1} ensures that the online gradient descent-ascent updates, Eq. \eqref{ogda2}, follow from alternating optimization \citep{olshausen1996emergence,olshausen1997sparse,arora2015simple} of $l_{PSW,t}$.
\begin{algorithm}[H]
\caption{Online min-max PSW\label{onlinePSWalg}}
\begin{algorithmic}[1]
\STATE At $t=0$, initialize the synaptic weight matrices, ${\bf W}_1$ and ${\bf M}_1$. ${\bf M}_1$ must be symmetric and positive definite.
\STATE Repeat for each $t = 1,\ldots, T$
\begin{ALC@g}
\STATE Receive input ${\bf x}_t$
\STATE Neural activity: Run until convergence
%
\begin{align}\label{gdCSMon}
\frac{d{\bf y}_t(\gamma)}{d\gamma} = {\bf W}_t{\bf x}_t -{\bf M}_t{\bf y}_t.
\end{align}
%
\STATE Plasticity:
Update synaptic weight matrices,
%
\begin{align}\label{wmCSM}
{\bf W}_{t+1} &= {\bf W}_{t} + 2\eta_{t} \left({\bf y}_t{\bf x}_t^\top-{\bf W}_t\right), \nonumber \\
{\bf M}_{t+1} &= {\bf M}_{t}+\frac{\eta_{t}}{\tau} \left({\bf y}_t{\bf y}_t^\top-{\bf I}\right).
\end{align}
\end{ALC@g}
\end{algorithmic}
\end{algorithm}
%
Algorithm \ref{onlinePSWalg} can be implemented by a biologically plausible single-layer neural network with lateral connections as in Algorithm \ref{onlinePSPalg}, Fig. \ref{Fig1}A. Updates to synaptic weights, Eq. \eqref{wmCSM}, are local, Hebbian/anti-Hebbian plasticity rules. An autapse-free network architecture, Fig \ref{Fig1}B, may be obtained using coordinate descent \citep{pehlevan2015MDS} in place of gradient descent in the neural dynamics stage \eqref{gdCSMon} (see Appendix \ref{coordPSW}).
The lateral connections here are the Lagrange multipliers introduced in the offline problem, Eq. \eqref{CSMMW}. In the PSP network, they resulted from a variable transformation of the output covariance matrix. This difference carries over to the learning rules: in Algorithm \ref{onlinePSWalg}, the lateral learning rule enforces whitening of the output, whereas in Algorithm \ref{onlinePSPalg}, the lateral learning rule sets the lateral weight matrix to the output covariance matrix.
\section{Game theoretical interpretation of Hebbian/anti-Hebbian learning}\label{S4}
In the original similarity matching objective, Eq. \eqref{SM}, the only variables are neuronal activities which, at the optimum, represent principal components. In Section \ref{S2}, we rewrote this objective by introducing matrices ${\bf W}$ and ${\bf M}$ corresponding to synaptic connection weights, Eq. \eqref{SMMW}. Here, we eliminate neural activity variables altogether and arrive at a min-max formulation in terms of the feedforward, ${\bf W}$, and lateral, ${\bf M}$, connection weight matrices only. This formulation lends itself to a game-theoretical interpretation.
Since in the offline PSP setting, optimal ${\bf M}^*$ in Eq. \eqref{SMMW2} is an invertible matrix (because ${\bf M}^*=\frac 1T {\bf Y}^*{{\bf Y}^*}^\top$, see also Appendix \ref{A1}), we can restrict our optimization to invertible matrices, ${\bf M}$, only. Then, we can optimize objective \eqref{SMMW} with respect to ${\bf Y}$ and substitute its optimal value ${\bf Y}^*={\bf M}^{-1}{\bf W}{\bf X}$ into \eqref{SMMW} and \eqref{SMMW2} to obtain:
%
\begin{align}\label{SMMW3}
&\min_{{\bf W}\in \mathbb{R}^{k\times n}}\max_{{\bf M}\in \mathbb{R}^{k\times k}}-\frac{2}{T}{\rm Tr}\left( {\bf X}^\top{\bf W}^\top{\bf M}^{-1} {\bf W}{\bf X}\right) + 2 {\rm Tr}\left({\bf W}^\top{\bf W}\right) - {\rm Tr}\left({\bf M}^\top{\bf M}\right),\nonumber \\
&\text{ s.t. ${\bf M}$ is invertible.}
\end{align}
%
This min-max objective admits a game-theoretical interpretation in which the feedforward, ${\bf W}$, and lateral, ${\bf M}$, synaptic weight matrices oppose each other. To reduce the objective, the feedforward synaptic weight vectors of each output neuron attempt to align with the direction of maximum variance of the input data. However, if this were the only driving force, then all output neurons would learn the same synaptic weight vectors and represent the same top principal component. At the same time, linear dependency among the feedforward synaptic weight vectors can be exploited by the lateral synaptic weights to increase the objective by cancelling the contributions of different components. To avoid this, the feedforward synaptic weight vectors become linearly independent and span the principal subspace.
A similar interpretation can be given for PSW, where feedforward, ${\bf W}$, and lateral, ${\bf M}$, synaptic weight matrices oppose each other adversarially.
%
%
\section{Novel formulations of dimensionality reduction using fractional exponents}\label{S5}
In this section, we point to a new class of dimensionality reduction objective functions that naturally follow from the min-max objectives \eqref{SMMW} and \eqref{SMMW2}. Eliminating both the neural activity variables, ${\bf Y}$, and the lateral connection weight matrix, ${\bf M}$, we arrive at optimization problems in terms of the feedforward weight matrix, ${\bf W}$, only. The rows of the optimal ${\bf W}$ form a non-orthogonal basis of the principal subspace. Such formulations of principal subspace problems involve fractional exponents of matrices and, to the best of our knowledge, have not been proposed previously.
By replacing $\max_{\bf M}\min_{\bf Y}$ optimization in the min-max PSP objective, Eq. \eqref{SMMW2}, by its saddle point value (see Proposition \ref{minmaxPSP} in Appendix \ref{A1}) we find the following objective expressed solely in terms of ${\bf W}$:
\begin{align}\label{fractionalPSP}
\min_{{\bf W}\in \mathbb{R}^{k\times n}} {\rm Tr} \left(-\frac{3}{T^{2/3}}\left({\bf W}{\bf X}{\bf X}^\top{\bf W}^\top\right)^{2/3}+2{\bf W}{\bf W}^\top\right).
\end{align}
The rows of the optimal ${\bf W}$ are not principal eigenvectors; rather, the row space of ${\bf W}$ spans the principal subspace.
By replacing the $\max_{\bf M}\min_{\bf Y}$ optimization in the min-max PSW objective, Eq. \eqref{CSMMW2}, by its optimal value (see Proposition \ref{minmaxPSW} in Appendix \ref{A2}), we obtain:
\begin{align}\label{fractionalPSW}
\min_{{\bf W}\in \mathbb{R}^{k\times n}} {\rm Tr} \left(-\frac{2}{T^{1/2}}\left({\bf W}{\bf X}{\bf X}^\top{\bf W}^\top\right)^{1/2}+{\bf W}{\bf W}^\top\right).
\end{align}
As before, the rows of the optimal ${\bf W}$ are not principal eigenvectors; rather, the row space of ${\bf W}$ spans the principal subspace.
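Both closed forms can be checked numerically. In the sketch below (random ${\bf X}$ and ${\bf W}$ of our own choosing), the inner $\max_{\bf M}\min_{\bf Y}$ problems are solved via the stationarity conditions ${\bf M}^3 = {\bf A}$ (PSP) and ${\bf M}^2 = {\bf A}$ (PSW), where ${\bf A}=\frac 1T {\bf W}{\bf X}{\bf X}^\top{\bf W}^\top$, and the resulting values match the fractional-exponent objectives above:

```python
import numpy as np

rng = np.random.default_rng(6)
n, k, T = 6, 3, 40
X = rng.standard_normal((n, T))
W = rng.standard_normal((k, n))
A = W @ X @ X.T @ W.T / T                 # A = (1/T) W X X^T W^T, positive definite

def mat_pow(S, p):
    # Fractional power of a symmetric positive definite matrix.
    lam, V = np.linalg.eigh(S)
    return (V * lam**p) @ V.T

# PSP: min_Y L_PSP in closed form, evaluated at the maximizing M = A^{1/3} ...
M_psp = mat_pow(A, 1.0 / 3.0)
psp_inner = (-2.0 * np.trace(A @ np.linalg.inv(M_psp))
             + 2.0 * np.trace(W.T @ W) - np.trace(M_psp @ M_psp))
# ... equals the 2/3-exponent objective.
psp_closed = np.trace(-3.0 * mat_pow(A, 2.0 / 3.0) + 2.0 * W @ W.T)

# PSW: min_Y L_PSW in closed form, evaluated at the maximizing M = A^{1/2} ...
M_psw = mat_pow(A, 0.5)
psw_inner = (-np.trace(A @ np.linalg.inv(M_psw))
             - np.trace(M_psw) + np.trace(W @ W.T))
# ... equals the 1/2-exponent objective.
psw_closed = np.trace(-2.0 * mat_pow(A, 0.5) + W @ W.T)
```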
We observe that the only material difference between Eqs. \eqref{fractionalPSP} and \eqref{fractionalPSW} is in the value of the fractional exponent. Based on this, we conjecture that any objective function of such form with a fractional exponent from a continuous range is optimized by ${\bf W}$ spanning the principal subspace. Such solutions would differ in the eigenvalues associated with the corresponding components.
A supporting argument for our conjecture comes from the work of \cite{miao1998fast}, which studied the cost
\begin{align}\label{logPCA}
\min_{{\bf W}\in \mathbb{R}^{k\times n}} {\rm Tr} \left(-\log\left({\bf W}{\bf X}{\bf X}^\top{\bf W}^\top\right)+{\bf W}{\bf W}^\top\right).
\end{align}
Eq. \eqref{logPCA} can be seen as a limiting case of our conjecture, in which the fractional exponent goes to zero. Indeed, \cite{miao1998fast} proved that the rows of the optimal ${\bf W}$ form an orthonormal basis for the principal eigenspace.
\section{Numerical experiments}\label{S6}
\begin{figure}
\centering
\includegraphics[width = \textwidth]{NECO-03-17-2830-Figure2.eps}
\vskip -120pt
\caption{Demonstration of the stability of the PSP (top row) and PSW (bottom row) algorithms. We constructed an $n=10$ by $T=2000$ data matrix ${\bf X}$ from its SVD, where the left and right singular vectors are chosen randomly, the top three singular values are set to $\lbrace \sqrt{3T},\sqrt{2T},\sqrt{T}\rbrace$ and the rest of the singular values are chosen uniformly in $[0,0.1\sqrt{T}]$. Learning rates were $\eta_{t} = 1/\left(10^3+t\right)$. Errors were defined using deviation of the neural filters from their optimal values \citep{pehlevan2015MDS}. Let ${\bf U}$ be the $10\times 3$ matrix whose columns are the top 3 left singular vectors of ${\bf X}$. PSP error: $\left\Vert {\bf F}(t)^\top{\bf F}(t) -{\bf U}{\bf U}^\top\right\Vert_F$, PSW error: $\left\Vert {\bf F}(t)^\top{\bf F}(t) -{\bf U}{\bf S}{\bf U}^\top\right\Vert_F$, with ${\bf S} = {\rm diag}\left([1/3, 1/2, 1]\right)$ in MATLAB notation. Solid (dashed) lines indicate linearly stable (unstable) choices of $\tau$. A) Small perturbations to the fixed point. ${\bf W}$ and ${\bf M}$ matrices were initialized by adding a random Gaussian variable, $\mathcal{N}(0,10^{-6})$, elementwise to their fixed point values. B) Offline algorithm, initialized with random ${\bf W}$ and ${\bf M}$ matrices. C) Online algorithm, initialized with the same initial condition as in B). A random column of ${\bf X}$ is processed at each time.
\label{Fig2}}
\end{figure}
Next, we test our findings using a simple artificial dataset. We generated an $n=10$ dimensional dataset and simulated our offline and online algorithms to reduce it to $k=3$ dimensions, using different values of the parameter $\tau$. The results are plotted in Figs. \ref{Fig2}, \ref{FigM}, \ref{FigTau} and \ref{Figcomp}, with details of the simulations given in the figure captions.
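The dataset of Fig. \ref{Fig2} is straightforward to reconstruct from its SVD description; a sketch (with our own random seed) follows:

```python
import numpy as np

rng = np.random.default_rng(7)
n, T = 10, 2000

# Random orthonormal left and right singular vectors.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((T, n)))

# Top three singular values sqrt(3T), sqrt(2T), sqrt(T);
# the remaining ones drawn uniformly from [0, 0.1*sqrt(T)].
s = np.concatenate([np.sqrt(np.array([3.0, 2.0, 1.0]) * T),
                    rng.uniform(0.0, 0.1 * np.sqrt(T), size=n - 3)])
X = U @ np.diag(s) @ V.T
```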
Consistent with Theorems \ref{mainLSPSP} and \ref{mainLSPSW}, small perturbations to PSP and PSW fixed points decayed (solid lines) or grew (dashed lines) depending on the value of $\tau$, Fig. \ref{Fig2}A. Offline simulations starting from random initial conditions converged to the PSP (or the PSW) solution if the fixed point was linearly stable, Fig. \ref{Fig2}B. Interestingly, the performance of the online algorithms was very close to that of the offline ones, Fig. \ref{Fig2}C.
The error for linearly unstable simulations in Fig. \ref{Fig2} saturates rather than diverging. This may seem at odds with Theorems \ref{mainLSPSP} and \ref{mainLSPSW}, which state that a stable fixed point of the dynamics must be the PSP/PSW solution. A closer look resolves this dilemma. In Fig. \ref{FigM}, we plot the evolution of an element of the ${\bf M}$ matrix in the offline algorithms for stable and unstable choices of $\tau$. When the principal subspace is linearly unstable, the synaptic weights exhibit undamped oscillations. The dynamics seems to be confined to a manifold at a fixed distance (in terms of the error metric) from the principal subspace. That the error does not grow to infinity results from the stabilizing effect of the min-max antagonism of the synaptic weights. Online algorithms behave similarly.
\begin{figure}
\centering
\includegraphics[width = \textwidth]{NECO-03-17-2830-Figure3.eps}
\caption{Evolution of a synaptic weight. Same dataset was used as in Fig. \ref{Fig2}. $\eta = 10^{-3}$. \label{FigM}}
\end{figure}
Next, we studied in detail the effect of the parameter $\tau$ on convergence, Fig. \ref{FigTau}. For the offline algorithms, we plot the error after a fixed number of gradient steps as a function of $\tau$. For PSP, there is an optimal $\tau$: decreasing $\tau$ below the optimal value does not degrade performance, whereas increasing it leads to a rapid growth of the error. For PSW, there is a plateau of low error for small values of $\tau$ but a rapid increase as one approaches the linear instability threshold. Online algorithms behave similarly.
\begin{figure}
\centering
\includegraphics[width = \textwidth]{NECO-03-17-2830-Figure4.eps}
\caption{Effect of $\tau$ on performance. The error after $2\times 10^4$ gradient steps is plotted as a function of $\tau$. The same dataset was used as in Fig. \ref{Fig2}, with the same network initialization and learning rates. Both curves start from $\tau =0.01$ and extend to the maximum value of $\tau$ allowed for linear stability. \label{FigTau}}
\end{figure}
Finally, we compared the performance of our online PSP algorithm to neural PSP algorithms with heuristic learning rules such as the Subspace Network \citep{oja1989neural} and the Generalized Hebbian Algorithm (GHA) \citep{sanger1989optimal}, on the same dataset. We found that our algorithm converges much faster (Fig. \ref{Figcomp}). Previously, the original similarity matching network \citep{pehlevan2015MDS}, which is a special case of the online PSP algorithm of this paper, was shown to converge faster than the APEX \citep{kung1994adaptive} and F\"oldiak's \citep{foldiak1989adaptive} networks.
\begin{figure}
\centering
\includegraphics{NECO-03-17-2830-Figure5.eps}
\caption{Comparison of the online PSP algorithm with the Subspace Network \citep{oja1989neural} and the GHA \citep{sanger1989optimal}. The dataset and the error metric are as in Fig. \ref{Fig2}. For fairness of comparison, the learning rates in all networks were set to $\eta = 10^{-3}$, with $\tau = 1/2$ for the online PSP algorithm. Feedforward connectivity matrices were initialized randomly. For the online PSP algorithm, the lateral connectivity matrix was initialized to the identity matrix. Curves show averages over 10 trials. \label{Figcomp}}
\end{figure}
\section{Discussion}
In this paper, through transparent variable substitutions, we demonstrated why biologically plausible neural networks can be derived from similarity matching objectives; mathematically formalized the adversarial relationship between Hebbian feedforward and anti-Hebbian lateral connections as a min-max optimization, lending itself to a game-theoretic interpretation; and formulated dimensionality reduction tasks as optimizations of fractional powers of matrices. The formalism we developed should generalize to unsupervised tasks other than dimensionality reduction and could provide a theoretical foundation for both natural and artificial neural networks.
In comparing our networks with biological ones, most importantly, our networks rely only on local learning rules that can be implemented by synaptic plasticity. While Hebbian learning is famously observed in neural circuits \citep{bliss1973longa,bliss1973long}, our networks also require anti-Hebbian learning, which can be interpreted as the long-term potentiation of inhibitory postsynaptic potentials. Experimentally, such long-term potentiation can arise from pairing action potentials in inhibitory neurons with subthreshold
depolarization of postsynaptic pyramidal neurons \citep{komatsu1994age,maffei2006potentiation}. However, plasticity in inhibitory synapses does not have to be Hebbian, i.e., depend on the correlation between pre- and postsynaptic activity \citep{kullmann2012plasticity}.
To make progress, we had to make several simplifications that sacrifice biological realism. In particular, we assumed that neuronal activity is a continuous variable, which would correspond to membrane depolarization (in graded potential neurons) or to the firing rate (in spiking neurons). We ignored the nonlinearity of the neuronal input-output function. Such a linear regime could be implemented via a resting-state bias (in graded potential neurons) or a resting firing rate (in spiking neurons).
The applicability of our networks as models of biological networks can be judged by experimentally testing the following predictions. First, we predict a relationship between the feedforward and lateral synaptic weight matrices, which could be tested using modern connectomics datasets. Second, we suggest that the similarity of output activity matches that of the input, which could be tested by neuronal population activity measurements using calcium imaging.
Often the choice of a learning rate is crucial to the learning performance of neural networks. Here, we encountered a nuanced case where the ratio of feedforward to lateral learning rates, $\tau$, affects the learning performance significantly. First, there is a maximum value of this ratio, beyond which the principal subspace solution is linearly unstable. The maximum value depends on the principal eigenvalues, but for PSP, $\tau \leq 1/2$ is always linearly stable. For PSW, there is no always-safe choice; having the same learning rates for feedforward and lateral weights, $\tau=1$, may actually be unstable. Second, linear stability is not the only factor that affects performance. In simulations of PSP, we observed that there is an optimal value of $\tau$. For PSW, decreasing $\tau$ seems to improve performance until a plateau is reached. This difference between PSP and PSW may be attributed to the different origins of the lateral connectivity. In PSW algorithms, lateral weights originate from Lagrange multipliers enforcing an optimization constraint. Low $\tau$, meaning higher lateral learning rates, forces the network to satisfy the constraint throughout the evolution of the algorithm.
Based on these observations, we can make practical suggestions for the $\tau$ parameter. For PSP, $\tau = 1/2$ seems to be a good choice, which is also preferred from another derivation of an online similarity matching algorithm \citep{pehlevan2015MDS}. For PSW, the smaller the $\tau$, the better, although one should make sure that the lateral weight learning rate $\eta/\tau$ is still sufficiently small.
\subsubsection*{Acknowledgments}
We thank Alex Genkin, Sebastian Seung, Mariano Tepper and Jonathan Zung for discussions.
\subsection{Related work} \label{sec:related}
\paragraph{Strategic classification.}
Since its introduction in \cite{hardt2016strategic}
(and based on earlier formulations in \cite{bruckner2009nash,bruckner2012static,grosshans2013bayesian}),
the literature on strategic classification has been growing at a rapid pace.
Various aspects of learning have been studied, including:
generalization behavior \cite{zhang2021incentive,sundaram2021pac,ghalme2021strategic},
algorithmic hardness \cite{hardt2016strategic},
practical optimization methods \cite{levanon2021strategic,levanon2022generalized},
and societal implications \cite{MilliMDH19,HuIV19,chen2020strategic,levanon2021strategic}.
Some efforts have been made to extend beyond the conventional user models, e.g., by adding noise \cite{jagadeesan2021alternative},
relying on partial information \cite{ghalme2021strategic,BechavodPZWZ21},
or considering broader user interests \cite{levanon2022generalized};
but these, as do the vast majority of other works,
focus on linear classifiers and independent user responses.\footnote{The only exception we are familiar with is \cite{liu2022strategic} who study strategic ranking, but do not focus on learning.}
Our goal here is to consider richer predictive model classes
that lead to correlated user behavior.
\paragraph{Graph Neural Networks (GNNs).}
The use of graphs in learning has a long and rich history,
and remains a highly active area of research
\cite{wu2020comprehensive}.
Here we cover a small subset of relevant work.
The key idea underlying most methods in this field is to iteratively propagate and aggregate information from neighboring nodes.
Modern approaches implement variations of this idea as differentiable neural architectures \cite{gori2005new,scarselli2008graph,kipf2017semi,gilmer2017neural}.
This makes it possible to express more elaborate forms of propagation \cite{li2018deeper,alon2021bottleneck}
and aggregation \cite{wu2019simplifying,xu2018how,DBLP:journals/corr/LiTBZ15},
including attention-based mechanisms \cite{velickovic2018graph,brody2022how}.
Nonetheless, and despite their impressive empirical success,
a key result by \cite{wu2019simplifying} shows both theoretically and empirically that most of the expressive power of GNNs can be attributed to the graph (rather than to sophisticated non-linearities).
Given that their linear approach (SGC) matches state-of-the-art performance on multiple tasks,
here we focus primarily on this architecture.
This aligns well with the strategic classification requirement
of models that allow for computationally tractable user responses (Eq. \eqref{eq:response_mapping}).
\paragraph{Robustness of GNNs.}
As most other fields in deep learning,
GNNs have been the target of recent inquiry as to their sensitivity
to adversarial attacks.
Common attacks include perturbing nodes, either in sets
\cite{zugner2018adversarial,ijcai2021-458}
or individually \cite{finkelshtein2020single}.
Attacks can be applied before training
\cite{zugner2018meta, bojchevski2019adversarial, li2021adversarial,zhang2020gnnguard}
or at test-time \cite{DBLP:journals/corr/SzegedyZSBEGF13, DBLP:journals/corr/GoodfellowSS14}; our work corresponds to the latter.
While there are connections between adversarial and strategic behavior
\cite{sundaram2021pac},
the key difference is that strategic behavior is not a zero-sum game;
in fact, in some instances, incentives can align \cite{levanon2022generalized}.
This makes the relations between the system and its users more nuanced,
and provides a degree of freedom in learning that does not exist in adversarial settings.
\section{Introduction}
Machine learning is increasingly being used to inform decisions about humans.
But when users of a system stand to gain from certain predictive outcomes,
they may be prone to ``game'' the system by strategically modifying their features (at some cost).
The literature on \emph{strategic classification} \cite{BrucknerS11,hardt2016strategic}
studies learning in this setting,
with emphasis on how to learn classifiers that are robust to
strategic user behavior.
The idea that users may respond to a decision rule applies broadly
and across many domains, from hiring, admissions, and scholarships, to loan approval, insurance, welfare benefits, and medical eligibility
\cite{mccrary2008manipulation,almond2010estimating,camacho2011manipulation,lee2010regression}.
This, along with its clean formulation as a learning problem,
have made strategic classification the target of much recent interest
\cite{sundaram2021pac,zhang2021incentive,levanon2021strategic,ghalme2021strategic,jagadeesan2021alternative,zrnic2021leads,estornell2021unfairness,lechner2021learning,levanon2022generalized,liu2022strategic,ahmadi2022classification,barsotti2022can}.
But despite these advances,
most works in strategic classification continue to follow the original problem formulation and assume independence across user responses.
From a technical perspective,
this assumption greatly simplifies the learning task,
as it allows the classifier to consider each user's response
in isolation:
user behavior is modeled via a \emph{response mapping}
$\Delta_h(x)$ determining how users modify their features $x$
in response to the classifier $h$,
and learning aims to find an $h$ for which $y \approx h(\Delta_h(x))$.
Intuitively, a user will modify her features if this `moves' her across the decision boundary,
as long as this is worthwhile (i.e., gains from prediction exceed modification costs).
Knowing $\Delta_h$ allows the system to anticipate user responses
and learn an $h$ that is robust.
For a wide range of settings,
learning under independent user responses
has been shown to be
theoretically possible \cite{hardt2016strategic,zhang2021incentive,sundaram2021pac}
and practically feasible \cite{levanon2021strategic,levanon2022generalized}.
But in most realistic applications, responses are rarely independent.
For example, dependencies can arise from the task itself:
as noted in \cite{liu2022strategic},
most common examples of strategic classification (e.g., admissions)
are actually problems of constrained resource allocation (e.g., acceptance quotas)
rather than iid classification.
Once users compete for resources,
this creates dependencies among their responses,
since payoffs now depend on the predictive outcomes (and hence actions) of others.
This highlights an important gap in the literature.
Our goal in this paper is to extend the literature on strategic classification to account for dependencies in user responses.
The dependencies we target, however, stem not from the problem itself,
but from the system's choice of hypothesis class
(i.e., the parametric form of learned classifiers).
Intuitively, user responses can become dependent through the classifier if predictions for one user rely also on information regarding other users, i.e.,
if $h(x_i)$ is also a function of other $x_j$.
In this way, the effects of a user modifying her features
via $x_j \mapsto \Delta_h(x_j)$
can propagate to other users and affect their decisions (since $h(x_i)$ now relies on $\Delta_h(x_j)$ rather than $x_j$).
We aim to establish the effects of such dependencies on learning, in a way that can guide the system in making good modeling choices.
To make our goal concrete,
we turn to the growing literature on Graph Neural Networks (GNNs),
which studies the learning of predictive models that make
use of a graphical structure over examples \cite{Monti_2017_CVPR, 10.1145/3326362,7974879,Hamilton2017InductiveRL}.
GNNs take as input a weighted (and often directed) graph whose nodes correspond to featurized examples,
and whose edges indicate relations that are believed to be useful for prediction (e.g., if $j{\rightarrow}i$ indicates that $y_i=y_j$ is likely).
Graphs are useful as additional input if they are informative about labels in a way that complements features;
the many success stories of GNNs suggest that this is often the case \cite{zhou2020graph}.
One domain in which GNNs have been used extensively is \emph{social networks}; here, nodes describe users, and edges describe social relations.
Given that many applications of strategic classification
are social in nature, and given the prevalence of social network platforms, it is natural to consider strategic classification in a social-network setting \cite{ghalme2021strategic}.
GNNs are useful for our purposes since predictions about each user are made using the features of neighboring users in the graph (in addition to their own).
The conventional approach is to first embed nodes in a way that depends on their neighbors' features,
$\phi_i = \phi(x_i;x_{\mathrm{nei}(i)})$,
and then perform classification (typically linear) in embedded space,
${\hat{y}}_i = \sign(w^\top \phi_i)$.
Note that while this may be beneficial for prediction, it also inadvertently introduces dependencies across users,
which strategic users can exploit.
To see how, consider that a user who lies far to the negative side of the decision boundary (and so independently cannot cross)
may benefit from the graph if her neighbors ``pull'' her embedding towards the decision boundary and close enough for her to cross.
Conversely, the graph can also suppress strategic behavior, since neighbors can also ``hold back'' nodes and prevent them from crossing.
Whether this is helpful to the system or not depends on the true label of the node.
Graphs therefore hold the potential to benefit the system, but also its users.
Here we study the natural question:
who does the graph help more?
Through analysis and experimentation, we show that learning in a way that \emph{neglects} to
account for strategic behavior not only jeopardizes performance,
but becomes \emph{worse} as the reliance on the graph for prediction increases.
In this sense, the graph becomes a vulnerability which users can exploit,
turning it from an asset to the system---to a threat.
As a solution, we propose a practical approach to learning GNNs in strategic environments.
We show that for a key neural architecture
and certain cost functions,
graph-dependent user responses
can be expressed as a `projection-like' operator.
This operator admits a simple and differentiable closed form; with additional smoothing,
this allows us to implement
responses as a neural layer, and learn robust predictors $h$ using gradient methods.
Experiments on synthetic and real data (with simulated responses)
demonstrate that our approach not only effectively accounts for strategic behavior, but in some cases, can harness the efforts of self-interested users to promote the system's own goals \cite{levanon2022generalized}.
Our code is available anonymously at \url{https://github.com/StrategicGNNs/Code}.
\section{Learning setup}
Our setting includes $n$ users, represented as nodes in a directed graph $G=(V,E)$ with
non-negative edge weights $W=\{w_{ij}\}_{(i,j) \in E},
w_{ij} \ge 0$ (we set $w_{ij}=0$ if $(i,j) \not\in E$).
Each user $i$ is also described by a feature vector
$x_i \in \mathbb{R}^\ell$ and a binary label $y_i \in {\{\pm 1\}}$.
We use $x_{-i} = \{ x_j \}_{j \neq i}$ to denote the set of features of all nodes other than $i$.
Using the graph, our goal is to learn a classifier $h$ that correctly predicts user labels.
The challenge in our strategic setting is that
inputs at test-time can be strategically modified by users, in response to $h$ and in a way that depends on the graph and on other users
(we describe this shortly).
Denoting by $x^h_i$ the (possibly modified)
strategic response of $i$ to $h$,
our learning objective is:
\begin{equation}
\label{eq:learning_objective1}
\argmin_{h \in H} \sum_i L(y_i, {\hat{y}}_i), \qquad \quad
{\hat{y}}_i=h(x_i^h,x^h_{-i})
\end{equation}
where $H$ is the model class
and $L$ is a loss function (e.g., the log-loss).
Note that both predictions ${\hat{y}}_i$ and modified features $x_i^h$ can depend on $G$ and on $x^h_{-i}$ (possibly indirectly through $h$).
We focus on the inductive graph learning setting,
in which training is done on $G$,
but testing is done on a different graph, $G'$
(often $G,G'$ are two disjoint components of a larger graph).
Our goal is therefore to learn a classifier that generalizes to other graphs in a way that is robust to strategic user behavior.
\paragraph{Graph-based learning.}
We consider linear graph-based classifiers---these are linear classifiers that operate on linear, graph-dependent node embeddings, defined as:
\begin{equation}
\label{eq:classifier+embedding}
h_{\theta,b}(x_i;x_{-i}) = \sign(\theta^\top \phi(x_i;x_{-i}) + b),
\qquad \quad
\phi(x_i;x_{-i}) = {\widetilde{w}}_{ii} x_i + \sum_{j \neq i} {\widetilde{w}}_{ji} x_j
\end{equation}
where $\phi_i = \phi(x_i;x_{-i})$ is node $i$'s embedding,\footnote{Note that embeddings preserve the dimension of the original features.}
$\theta \in \mathbb{R}^\ell$ and $b \in \mathbb{R}$ are learned parameters,
and ${\widetilde{w}}_{ij} \ge 0$ are pairwise weights that depend on $G$ and $W$.
We refer to users $j$ with ${\widetilde{w}}_{ji} \neq 0$ as
the \emph{embedding neighbors} of $i$.
A simple choice of weights is ${\widetilde{w}}_{ji}=w_{ji}$
for $(j,i) \in E$ (and 0 otherwise),
but different methods propose different ways
to construct ${\widetilde{w}}$;
here we focus on the weight scheme defined in \cite{wu2019simplifying}.
We assume the weights ${\widetilde{w}}$ are
predetermined (i.e., the embedding model is fixed)
and focus on learning the classifier parameters $\theta$ and $b$.
\paragraph{Strategic inputs.}
For the strategic aspects of our setting,
we adopt the popular formulation of \cite{hardt2016strategic}.
Users seek to be classified positively (i.e., have ${\hat{y}}_i=1$), and to achieve this, are willing to modify their features (at some cost).
Once the system has learned and published $h$,
a test-time user $i$ can modify her features $x_i \mapsto x'_i$ in response to $h$.
Modification costs are defined
by a cost function $c(x,x')$ (known to all);
here we focus mainly on 2-norm costs
$c(x,x') = \|x-x'\|_2$
as used in \cite{levanon2022generalized,
chen2020strategic},
but also discuss other costs from
\cite{bruckner2012static, levanon2021strategic,
BechavodPZWZ21}.
User $i$ therefore modifies her features (or ``moves'') if this improves her prediction
(i.e., if $h(x_i)=-1$ but $h(x'_i)=1$)
and is cost-effective (i.e.,
prediction gains exceed modification costs);
for linear classifiers, this means crossing the decision boundary.
Note that since predictions take values in ${\{\pm 1\}}$, the gain from prediction is at most $h(x')-h(x)=2$.
Users therefore do not move to any $x'$ whose cost $c(x,x')$ exceeds a `budget' of $2$, and the maximal moving distance is $d=2$.
The unique aspect of our setting is that user responses are linked through their mutual dependence on the graph.
We next proceed to describe our model of user responses in detail.
\section{Learning and optimization} \label{sec:learn_and_opt}
We are now ready to describe our learning approach.
Our learning objective can be restated as:
\begin{equation}
\label{eq:learning_objective}
{\hat{h}} = \argmin_{h \in H} \sum_i L(y_i, h(x_i^h;x_{-i}^h))
\end{equation}
for $H=\{h_{\theta,b}\}$ as in Eq. \eqref{eq:classifier+embedding}.
The difficulty in optimizing Eq. \eqref{eq:learning_objective}
is that $x^h$ depend on $h$ through
the iterative process, which relies on $\Delta_h$.
At test time, $x^h$ can be computed exactly by simulating the dynamics.
However, at train time, we would like to allow for gradients of
$\theta,b$ to propagate through $x^h$.
For this, we propose an efficient differentiable proxy of $x^h$,
implemented as a stack of layers,
each corresponding to one response round.
The number of layers is a hyperparameter, $T$.
\paragraph{Single round.}
We begin with examining a single iteration of the dynamics, i.e., $T=1$.
Note that since a user moves only if the cost is at most 2,
Eq. \eqref{eq:response_mapping} can be rewritten as:
\begin{equation}
\label{eq:hard_if}
\Delta_h(x_i;x_{-i}) = \begin{cases}
x'_i & \text{if } h(x_i;x_{-i})=-1
\,\text{ and }\, c(x_i,x'_i) \le 2 \\
x_i & \text{o.w.}
\end{cases}
\end{equation}
where $x'_i=\mathrm{proj}_h(x_i;x_{-i})$ is the point to which $x_i$ must move in order for $\phi(x_i;x_{-i})$ to be projected onto the decision boundary of $h$.
This projection-like operator (on $x_i$) can be shown to have a closed-form solution:
\begin{equation}
\label{eq:projection}
\mathrm{proj}_h(x_i;x_{-i}) = x_i - \frac{\theta^\top \phi(x_i;x_{-i}) + b}{\|\theta\|_2^2 {\widetilde{w}}_{ii}}\theta
\end{equation}
See Appendix \ref{apx:proj} for a derivation using KKT conditions.
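As a numerical sanity check on Eq. \eqref{eq:projection}, the following sketch (our own, with arbitrary toy values) verifies that the projected point's embedding lands exactly on the decision boundary; only node $i$ moves, so the neighbor contribution $\sum_{j \neq i} {\widetilde{w}}_{ji} x_j$ is held fixed:

```python
import numpy as np

def proj(x_i, nbr_sum, theta, b, w_ii):
    # Closed-form projection-like operator: move x_i along theta until
    # theta^T phi + b = 0, where phi = w~_ii x_i + nbr_sum.
    phi = w_ii * x_i + nbr_sum
    return x_i - (theta @ phi + b) / (theta @ theta * w_ii) * theta
```

The check is immediate: substituting the projected point back into the embedding cancels the score term by construction.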
Eq. \eqref{eq:projection} is differentiable in $\theta$ and $b$;
to make the entire response mapping differentiable,
we replace the `hard if'
in Eq. \eqref{eq:hard_if} with a `soft if',
which we now describe.
First, to account only for negatively-classified points,
we ensure that only points in the negative halfspace are projected via a `positive-only' projection:
\begin{equation}
\label{eq:projection_positive}
\mathrm{proj}^+_h(x_i;x_{-i}) = x_i -
\min\left\{0, \frac{\theta^\top \phi(x_i;x_{-i}) + b}{\|\theta\|_2^2 {\widetilde{w}}_{ii}} \right\} \theta
\end{equation}
Then, we replace the $c\le 2$ constraint with a smoothed sigmoid
that interpolates between $x_i$ and the projection,
as a function of the cost of the projection and thresholded at 2.
This gives our differentiable approximation of the response mapping:
\begin{equation}
\label{eq:apx_rspns}
\tilde{\rspns}(x_i;x_{-i},\kappa) =
x_i + (x'_i-x_i)\sigma_\tau\big( 2-c(x_i,x'_i)-\kappa \big)
\quad \text{where} \quad
x'_i = \mathrm{proj}^+_h(x_i;x_{-i})
\end{equation}
where $\sigma$ is a sigmoid and $\tau$ is a temperature hyperparameter ($\tau \rightarrow 0$ recovers Eq. \eqref{eq:hard_if})
and for $T=1$, $\kappa=0$.
In practice we add a small additive tolerance term for numerical stability (See Appendix \ref{apx:tolerance}).
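A single soft-response round can then be sketched as below (the helper names and default constants are ours); it combines the positive-only projection of Eq. \eqref{eq:projection_positive} with the sigmoid budget gate of Eq. \eqref{eq:apx_rspns}, for 2-norm costs:

```python
import numpy as np

def soft_response(x_i, nbr_sum, theta, b, w_ii, tau=0.05, kappa=0.0):
    # positive-only projection: points on the positive side have alpha = 0
    # and stay put
    phi = w_ii * x_i + nbr_sum
    alpha = min(0.0, (theta @ phi + b) / (theta @ theta * w_ii))
    x_proj = x_i - alpha * theta
    # smoothed budget gate sigma_tau(2 - c(x_i, x') - kappa);
    # tau -> 0 recovers the hard if
    cost = np.linalg.norm(x_proj - x_i)
    gate = 1.0 / (1.0 + np.exp(-(2.0 - cost - kappa) / tau))
    return x_i + (x_proj - x_i) * gate
```

For small $\tau$, a negative point within budget moves (approximately) onto the boundary, while a point whose projection costs more than the budget stays essentially in place.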
\paragraph{Multiple rounds.}
Next, we consider the computation of (approximate) modified features after $T>1$ rounds,
denoted ${\tilde{x}}^{(T)}$, in a differentiable manner.
Our approach is to apply $\tilde{\rspns}$ iteratively as:
\begin{equation} \label{eq:multi_update}
{\tilde{x}}_i^{(t+1)} = \tilde{\rspns}({\tilde{x}}_i^{(t)};{\tilde{x}}_{-i}^{(t)},\kappa_i^{(t)}), \qquad \qquad {\tilde{x}}_i^{(0)}=x_i
\end{equation}
Considering $\tilde{\rspns}$ as a layer in a neural network,
approximating $T$ rounds can be done by stacking.
In Eq. \eqref{eq:multi_update},
$\kappa_i^{(t)}$ is set to accumulate costs of approximate responses,
$\kappa_i^{(t)} = \kappa_i^{(t-1)} + c({\tilde{x}}_i^{(t-1)},{\tilde{x}}_i^{(t)})$.
One observation is that for 2-norm costs,
$\kappa_i^{(t)} = c({\tilde{x}}_i^{(0)},{\tilde{x}}_i^{(t)})$
(by the triangle inequality; since all points move along a line, equality holds).
We can therefore simplify Eq. \eqref{eq:apx_rspns}
by replacing $c({\tilde{x}}_i^{(t-1)},x'_i)+\kappa_i^{(t-1)}$ with $c({\tilde{x}}_i^{(0)},x'_i)$.
For other costs, this gives a lower bound
(see Appendix \ref{apx:proj}).
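The stacked rounds of Eq. \eqref{eq:multi_update} then amount to iterating the approximate response over all nodes, recomputing neighbor contributions from the latest features each round. A self-contained sketch (ours; 2-norm costs assumed, so $\kappa_i^{(t)}$ reduces to the distance from the starting point):

```python
import numpy as np

def soft_step(x_i, nbr_sum, theta, b, w_ii, tau, kappa):
    # one smoothed response round with a positive-only projection
    phi = w_ii * x_i + nbr_sum
    alpha = min(0.0, (theta @ phi + b) / (theta @ theta * w_ii))
    x_proj = x_i - alpha * theta
    gate = 1.0 / (1.0 + np.exp(-(2.0 - np.linalg.norm(x_proj - x_i) - kappa) / tau))
    return x_i + (x_proj - x_i) * gate

def respond_T_rounds(X, W_tilde, theta, b, tau=0.05, T=3):
    X0, Xt = X.copy(), X.copy()
    for _ in range(T):
        Xnext = Xt.copy()
        for i in range(len(Xt)):
            nbr_sum = W_tilde[:, i] @ Xt - W_tilde[i, i] * Xt[i]
            # for 2-norm costs, the accumulated cost equals the distance
            # travelled from the starting point (triangle equality)
            kappa = np.linalg.norm(Xt[i] - X0[i])
            Xnext[i] = soft_step(Xt[i], nbr_sum, theta, b, W_tilde[i, i], tau, kappa)
        Xt = Xnext
    return Xt
```

In a training loop, each round would be one differentiable layer (e.g., in an autodiff framework); the NumPy version above only illustrates the forward dynamics.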
\subsection{Experiments on real data} \label{sec:experiments_real}
In this section we empirically evaluate our approach on three real datasets, and study the effects of varying the scale of costs and model depth.
\begin{table*}
\centering
{
\begin{tabular}{lrrr}
\toprule
& \multicolumn{1}{c}{\textbf{Cora}} & \multicolumn{1}{c}{\textbf{CiteSeer}} &
\multicolumn{1}{c}{\textbf{PubMed}}\\
\midrule
{Na{\"i}ve}\ & 52.56\eb{0.20} & 61.06\eb{0.10} & 19.02\eb{0.01}\\
Robust (ours) & 77.51\eb{0.38} & 75.21\eb{0.25} & 82.41\eb{0.38}\\
\midrule
Non-strategic (benchmark) & 87.95\eb{0.08} & 77.11\eb{0.05} & 91.34\eb{0.03}\\
\bottomrule
\end{tabular}
}
\caption{
Test accuracy of different methods. For all results
$T=3$ and $d=0.25$.
}
\label{tab:main_results}
\end{table*}
\paragraph{Data.}
We use three benchmark datasets used extensively in the GNN literature: Cora, CiteSeer, and PubMed \cite{sen2008collective,kipf2017semi},
and adapt them to our setting.
We use the standard (transductive) train-test split of \cite{sen2008collective};
the data is made inductive by removing
all test-set nodes that can be influenced by train-set nodes \cite{Hamilton2017InductiveRL}.
All three datasets describe citation networks, with papers as nodes and citations as edges.
Although these are directed relations by nature, the available data include only undirected edges;
hence, we direct edges towards lower-degree nodes,
so that movement of higher-degree nodes is more influential.
As our setup requires binary labels, we follow standard practice and merge classes, aiming for balanced binary classes
that sustain strategic movement.
Further details in Appendix \ref{ref:experimental_details}.
\paragraph{Methods.}
We compare our robust learning approach to a {na{\"i}ve}\ approach that does not account for strategic behavior (i.e., falsely assumes that users do not move).
As a benchmark we report the performance of the {na{\"i}ve}\ model on non-strategic data (for which it is appropriate).
All methods are based on the SGC architecture \cite{wu2019simplifying}, as it is expressive enough to effectively utilize the graph, but simple enough to permit rational user responses (Eq. \eqref{eq:response_mapping}; see also the discussion in Sec. \ref{sec:related}).
We use the standard weighting scheme of
${\widetilde{W}} = D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$ where $A$ is the adjacency matrix and $D$ is the diagonal degree matrix.
Appendix \ref{apx:experiments_extra_real} includes additional results.
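For reference, the symmetric normalization can be computed as in the sketch below (ours). We add self-loops before normalizing, as SGC does; this is also needed here because the response mapping divides by ${\widetilde{w}}_{ii}$, which must be nonzero:

```python
import numpy as np

def sgc_weights(A):
    # W~ = D^{-1/2} A D^{-1/2}, computed elementwise as A[i,j]/sqrt(d_i d_j),
    # with self-loops added first so that w~_ii > 0
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * np.outer(d_inv_sqrt, d_inv_sqrt)
```

For an undirected input graph the result is symmetric, with row sums close to (but not exactly) one.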
\paragraph{Optimization and setup.}
We train using Adam \cite{DBLP:journals/corr/KingmaB14} and set
hyperparameters according to \cite{wu2019simplifying}
(learning rate=$0.2$, weight decay=$1.3\cdot 10^{-5}$).
Training is stopped after $20$ epochs (this usually suffices for convergence).
Hyperparameters were determined based only on the train set:
$\tau=0.05$, chosen to be the smallest value which retained stable training, and $T=3$, as training typically saturates then (we also explore varying depths).
We use $\beta$-scaled 2-norm costs, $c_\beta(x,x') = \beta \|x-x'\|_2, \beta \in \mathbb{R}_+$, which induce a maximal moving distance of $d_\beta = 2/\beta$.
We observed that values around $d=0.5$ permit almost arbitrary movement;
we therefore experiment in the range $d \in [0,0.5]$,
but focus primarily on the mid-point of $d=0.25$
(note $d=0$ implies no movement).
Mean and standard errors are reported over five random initializations.
Appendix \ref{ref:experimental_details} includes further details.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{figures_two_thirds/cost_lim.pdf} \\
\includegraphics[width=\linewidth]{figures_two_thirds/train_iterations.pdf}
\caption{Accuracy for increasing: \textbf{(Top:)} max distance $d$ ($T=3$);
\textbf{(Bottom:)} depth $T$ ($d=0.25$).
}
\label{fig:cost_lim+train_iterations}
\end{figure}
\paragraph{Results.}
Table \ref{tab:main_results} presents detailed results for $d=0.25$ and $T=3$. As can be seen, the {na{\"i}ve}\ approach is highly vulnerable to strategic behavior. In contrast, by anticipating how users collectively respond, our robust approach is able to recover
most of the drop in accuracy (i.e., from `benchmark' to `{na{\"i}ve}'; Cora: 35\%, CiteSeer: 16\%, PubMed: 72\%). Note this is achieved with a $T$ much smaller
than necessary for response dynamics to converge
(${T_{\mathrm{max}}}$: Cora=7, CiteSeer=7, PubMed=11).
Fig. \ref{fig:cost_lim+train_iterations} (top) shows results for varying max distances $d \in [0,0.5]$ and fixing $T=3$ (note $d=0$ entails no movement).
For Cora and CiteSeer, larger max distances---the result of lower modification costs---hurt performance; nonetheless, our robust approach
maintains a fairly stable recovery rate
over all values of $d$. For PubMed, our approach retains $\approx 92\%$
of the \emph{optimum}, showing resilience to reduced costs.
Interestingly, for CiteSeer, in the range $d \in [0.05,0.15]$,
our approach \emph{improves} over the baseline, suggesting it utilizes
strategic movements for improved accuracy (as in Sec. \ref{sec:exp_synth}).
Fig. \ref{fig:cost_lim+train_iterations} (bottom) shows results for varying depths $T\in\{0,\dots,10\}$. For all datasets, results improve as $T$ increases, but saturate quickly at $T \approx 3$; this suggests a form of robustness of our approach to overshooting in choosing $T$ (which due to smoothing can cause larger deviations from the true dynamics). Using $T=1$ recovers between 65\% and 91\% (across datasets) of the optimal accuracy. This shows that while considering only one round of user responses (in which there are no dependencies) is helpful, it is much more effective to consider multiple, dependent rounds---even if only a few.
\section{Analysis}
\subsection{Hitchhiking} \label{apx:hitch}
Here we provide a concrete example of hitchhiking, following
Fig. \ref{fig:graph_examples} (E).
The example includes three nodes, $i,j,k$,
positioned at:
\[
x_k=-3, \qquad x_i=-2.1, \qquad x_j = -0.5,
\]
and connected via edges $k{\rightarrow}j$ and $j{\rightarrow}i$.
Edge weights ${\widetilde{w}}_{ji}=0.6$ and ${\widetilde{w}}_{ii}=0.4$;
${\widetilde{w}}_{kj}=1/3$ and ${\widetilde{w}}_{jj}=2/3$; and ${\widetilde{w}}_{kk}=1$.
The example considers a threshold classifier $h_b$ with $b=0$,
and unit-scale costs (i.e., $\beta=1$) inducing a maximal moving distance
of $d=2$.
We show that $i$ cannot invest effort to cross and obtain ${\hat{y}}_i=1$;
but once $j$ moves (to obtain ${\hat{y}}_j=1$), this results in $i$ also being classified positively (\emph{without} moving).
Initially (at round $t=0$), node embeddings are:
\[
\phi_k = -3, \qquad
\phi_i = -1.14, \qquad
\phi_j = -\frac{4}{3}
\]
and all points are classified negatively, ${\hat{y}}_k={\hat{y}}_i={\hat{y}}_j=-1$.
Notice that $i$ cannot cross the decision boundary even if she moves the maximal cost-feasible distance of $d=2$:
\begin{align*}
& \phi(x_i^{(0)} + 2 ;x^{(0)}_{i-}) = {\widetilde{w}}_{ii}(x^{(0)}_i + 2) + {\widetilde{w}}_{ji}x^{(0)}_j = 0.4(-2.1 + 2) + 0.6(-\frac{1}{2}) = -0.34 < 0
\intertext{Hence, $i$ doesn't move, so $x^{(1)}_i = x^{(0)}_i$.
Similarly, $k$ cannot cross, so $x^{(1)}_k = x^{(0)}_k$.
However, $j$ \emph{can} cross by moving to $1.5$ (at cost 2) in order to get ${\hat{y}}_j=1$:}
& x^{(1)}_j = 1.5 = -1/2 + 2 = x^{(0)}_j + 2 \\
\Rightarrow\,\,\, & \phi(x_j^{(1)};x^{(1)}_{j-})={\widetilde{w}}_{jj}x^{(1)}_j + {\widetilde{w}}_{kj}x^{(0)}_k = \frac{2}{3}x^{(1)}_j + \frac{1}{3}(-3) = 0 \,\,\,
\Rightarrow\,\,\, {\hat{y}}^{(1)}_j=1 \\
\intertext{After $j$ moves, $i$ is classified positively (and so does not need to move):}
& \phi(x_i^{(1)};x^{(1)}_{i-}) = {\widetilde{w}}_{ii}x^{(1)}_i + {\widetilde{w}}_{ji}x^{(1)}_j = 0.4(-2.1) + 0.6\frac{3}{2} = 0.06 > 0
\,\,\,\Rightarrow\,\,\,
{\hat{y}}_i^{(2)} = 1
\end{align*}
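The arithmetic above is easy to verify mechanically. The following minimal Python sketch (illustrative only, not from the paper's codebase) simulates one round of simultaneous myopic best responses for the three nodes:

```python
# Numerical check of the hitchhiking example (illustrative sketch, not the
# paper's code). Nodes are ordered [k, i, j]; W[u][v] is tilde-w_{uv}, the
# weight of node u's features in node v's embedding.
W = [[1.0, 0.0, 1/3],    # k: self-weight 1; edge k -> j with weight 1/3
     [0.0, 0.4, 0.0],    # i: self-weight 0.4
     [0.0, 0.6, 2/3]]    # j: edge j -> i with weight 0.6; self-weight 2/3
x = [-3.0, -2.1, -0.5]   # x_k, x_i, x_j
b, d = 0.0, 2.0          # threshold and maximal cost-feasible distance

def phi(x, v):           # embedding of node v
    return sum(W[u][v] * x[u] for u in range(3))

def round_of_responses(x):
    """One round of simultaneous myopic best responses (1-D features)."""
    new_x = list(x)
    for v in range(3):
        if phi(x, v) + b >= 0:
            continue                        # already classified positively
        delta = -(phi(x, v) + b) / W[v][v]  # minimal move to the boundary
        if delta <= d + 1e-9:               # small slack for float error
            new_x[v] = x[v] + delta
    return new_x

x1 = round_of_responses(x)
# only j moves (to 1.5); i's new embedding is 0.06 >= 0 -- a free ride
```

Running it confirms that only $j$ moves (to $1.5$), after which $i$'s embedding becomes $0.06 \ge 0$ without $i$ investing any effort.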
\subsection{Cascading behavior} \label{apx:domino}
We give a constructive example (for any $n$)
which will be used to prove Propositions \ref{prop:move_at_n}
and \ref{prop:n-k_move_at_k}.
The construction is modular, meaning that we build a small `cyclic' structure of size 3, such that for any given $n$, we simply replicate this structure roughly
$n/3$ times, and include two additional `start' and `finish' nodes.
Our example assumes a threshold classifier $h_b$ with $b=0$,
and scale costs $c_\beta$ with $\beta = 1.5$ inducing a maximum moving distance of $d_\beta = 3$.
Fix any $n \in \mathbb{N}$. We construct a graph of size $n+2$ as follows.
Nodes are indexed $0,\dots,n+1$.
The graph has bi-directional edges between each pair of consecutive nodes,
namely $(i,i+1)$ and $(i+1,i)$ for all $i=0,\dots,n$,
except for the last node, which has only an outgoing edge $(n+1,n)$,
but no incoming edge.
We set uniform normalized edge weights:
${\widetilde{w}}_{i-1,i}={\widetilde{w}}_{i,i}={\widetilde{w}}_{i+1,i}=1/3$ for all $1 \le i \le n$;
${\widetilde{w}}_{0,0}={\widetilde{w}}_{1,0}=1/2$; and ${\widetilde{w}}_{n+1,n+1}=1$ (as node $n+1$ has no incoming edges).
The initial features of each node are defined as:
\begin{equation}
\label{eq:hard_if}
x_0 = -1, \qquad\quad
x_i =
\begin{cases}
\phantom{-}2 & \text{if } i \bmod 3 = 1 \\
-4 & \text{otherwise}
\end{cases} \qquad \forall i=1,\dots,n+1
\end{equation}
Figure \ref{fig:domino} (A) illustrates this for $n=3$.
Note that while the graph forms a `chain' structure,
the positioning of node features is cyclic (starting from $i=1$):
$2,-4,-4,2,-4,-4,2,\dots$ etc.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{figures_appendix/domino_v2.pdf}
\caption{
\textbf{Cascading behavior}.
The construction for Propositions \ref{prop:move_at_n} and \ref{prop:n-k_move_at_k}, for:
\textbf{(A)} $n=3$, which includes one copy of the main 3-node module
(light blue box), and
\textbf{(B)} for any $n>3$, by replicating the module in sequence
(note $|V|=n+2$).
}
\label{fig:domino}
\end{figure}
We begin with a lemma showing that in our construction,
each node $i=1,\dots,n$ moves precisely at round $t=i$.
\begin{lemma}
\label{lem:domino}
At every round $1 \le t \le n$:
\setlist{nolistsep}
\begin{enumerate}[noitemsep,leftmargin=1cm,label=(\arabic*)]
\item node $i=t$ moves, with
$x_i^{(t)}=5$ if $i \bmod 3 = 1$, and $x_i^{(t)}=-1$ otherwise
\item all nodes $j>t$ do not move, i.e., $x_j^{(t)} = x_j^{(t-1)}$
\end{enumerate}
\end{lemma}
Note that (1) (together with Prop. \ref{prop:move_once})
implies that for any round $t$,
all nodes $i<t$ (which have already moved at the earlier round $t'=i$)
do not move again.
Additionally, (2) implies that all $j>t$ remain in their initial position, i.e.,
$x_j^{(t)} = x_j^{(0)}$.
Finally, notice that the starting node $i=0$ has $\phi_0=0.5$, meaning that ${\hat{y}}_0^{(0)}=1$, and so it does not move at any round.
\begin{proof}
We begin with the case for $n=3$.
\setlist{nolistsep}
\begin{itemize}[noitemsep,leftmargin=0.7cm]
\item
\textbf{Round 1:} Node $i=1$ can cross by moving the maximal distance of 3:
\begin{equation}
\label{eq:domino_n=1}
{\widetilde{w}}_{1,1}(x_1^{(0)} + 3) + {\widetilde{w}}_{0,1}x_0^{(0)} + {\widetilde{w}}_{2,1}x_2^{(0)} =
\frac{1}{3}(2 + 3) + \frac{1}{3}(-1) + \frac{1}{3}(-4) = 0
\end{equation}
However, nodes 2,3 cannot cross even if they move the maximal feasible distance:
\begin{equation}
\label{eq:domino_n=2}
{\widetilde{w}}_{2,2}(x_2^{(0)} + 3) + {\widetilde{w}}_{1,2}x_{1}^{(0)} + {\widetilde{w}}_{3,2}x_{3}^{(0)} =
\frac{1}{3}(-4 + 3) + \frac{1}{3}(2) + \frac{1}{3}(-4) = -1 < 0
\end{equation}
\begin{equation}
\label{eq:domino_n=3}
{\widetilde{w}}_{3,3}(x_3^{(0)} + 3) + {\widetilde{w}}_{2,3}x_{2}^{(0)} + {\widetilde{w}}_{4,3}x_{4}^{(0)} =
\frac{1}{3}(-4 + 3) + \frac{1}{3}(-4) + \frac{1}{3}(2) = -1 < 0
\end{equation}
\item
\textbf{Round 2:} Node $i=2$ can cross by moving the maximal distance of 3:
\begin{equation}
\label{eq:domino_n=2_round2}
{\widetilde{w}}_{2,2}(x_2^{(1)} + 3) + {\widetilde{w}}_{1,2}x_{1}^{(1)} + {\widetilde{w}}_{3,2}x_{3}^{(1)} =
\frac{1}{3}(-4 + 3) + \frac{1}{3}(5) + \frac{1}{3}(-4) = 0
\end{equation}
However, node 3 cannot cross even if it moves the maximal feasible distance:
\begin{equation}
\label{eq:domino_n=3_round2}
{\widetilde{w}}_{3,3}(x_3^{(1)} + 3) + {\widetilde{w}}_{2,3}x_{2}^{(1)} + {\widetilde{w}}_{4,3}x_{4}^{(1)} = \frac{1}{3}(-4 + 3) + \frac{1}{3}(-4) + \frac{1}{3}(2) = -1 < 0
\end{equation}
\item
\textbf{Round 3:} Node $i=3$ can cross by moving the maximal distance of 3:
\begin{equation}
\label{eq:domino_n=3_round3}
{\widetilde{w}}_{3,3}(x_3^{(2)} + 3) + {\widetilde{w}}_{2,3}x_{2}^{(2)} + {\widetilde{w}}_{4,3}x_{4}^{(2)} =
\frac{1}{3}(-4 + 3) + \frac{1}{3}(-1) + \frac{1}{3}(2) = 0
\end{equation}
\end{itemize}
Fig. \ref{fig:domino} (A) illustrates this procedure for $n=3$.
Next, consider $n>3$. Due to the cyclical nature of feature positioning and the chain structure of our graph, we can consider what happens when we sequentially add nodes to the graph. By induction, we can show that:
\setlist{nolistsep}
\begin{itemize}[itemsep=0.2cm,leftmargin=0.7cm]
\item
$n \bmod 3 = 1$: Consider round $t=n$.
Node $n$ has $x_{n}^{(t-1)}=2$,
and two neighbors:
$n-1$, who after moving at the previous round has $x_{n-1}^{(t-1)}=-1$;
and $n+1$, who has a fixed $x_{n+1}^{(t-1)}=-4$.
Thus, it is in the same configuration as node $i=1$,
and so its movement follows Eq. \eqref{eq:domino_n=1}.
\item
$n \bmod 3 = 2$:
Consider round $t=n$.
Node $n$ has $x_{n}^{(t-1)}=-4$,
and two neighbors:
$n-1$, who after moving at the previous round has $x_{n-1}^{(t-1)}=5$;
and $n+1$, who has a fixed $x_{n+1}^{(t-1)}=-4$.
Thus, it is in the same configuration as node $i=2$,
and so its movement follows Eq. \eqref{eq:domino_n=2_round2}.
\item
$n \bmod 3 = 0$:
Consider round $t=n$.
Node $n$ has $x_{n}^{(t-1)}=-4$,
and two neighbors:
$n-1$, who after moving at the previous round has $x_{n-1}^{(t-1)}=-1$;
and $n+1$, who has a fixed $x_{n+1}^{(t-1)}=2$.
Thus, it is in the same configuration as node $i=3$ at round 3,
and so its movement follows Eq. \eqref{eq:domino_n=3_round3}.
\end{itemize}
Fig. \ref{fig:domino} (B) illustrates this idea for $n>3$.
\end{proof}
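As a sanity check on Lemma \ref{lem:domino}, the cascade can also be simulated directly. The following Python sketch (ours, for illustration; function names and float tolerances are arbitrary) builds the construction and confirms that node $i$ moves exactly at round $t=i$:

```python
# Illustrative simulation of the cascading construction (not the paper's code).
# Nodes 0..n+1; node v's embedding uniformly averages v and its in-neighbors.
def build(n):
    x = [-1.0] + [2.0 if i % 3 == 1 else -4.0 for i in range(1, n + 2)]
    nbrs = {0: [0, 1], n + 1: [n + 1]}          # node n+1 has no in-edges
    for i in range(1, n + 1):
        nbrs[i] = [i - 1, i, i + 1]
    w = {v: 1.0 / len(nbrs[v]) for v in nbrs}   # uniform normalized weights
    return x, nbrs, w

def simulate(n, rounds, d=3.0):
    x, nbrs, w = build(n)
    phi = lambda x, v: w[v] * sum(x[u] for u in nbrs[v])
    moved_at = {}
    for t in range(1, rounds + 1):
        new_x = list(x)
        for v in range(n + 2):
            if phi(x, v) > -1e-9:               # already (weakly) positive
                continue
            delta = -phi(x, v) / w[v]           # minimal move to the boundary
            if delta <= d + 1e-9:               # move only if cost-feasible
                new_x[v] = x[v] + delta
                moved_at[v] = t
        x = new_x
    return moved_at

# nodes 1..7 move at rounds 1..7 respectively; nodes 0 and 8 never move
print(simulate(n=7, rounds=10))
```

The same loop reproduces the three residue cases ($n \bmod 3 \in \{0,1,2\}$) without any case analysis.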
We now proceed to prove the propositions.
\paragraph{Proposition \ref{prop:move_at_n}:}
The proposition follows immediately from Lemma \ref{lem:domino};
the only detail that remains to be shown is that node $n+1$ does not move at all.
To see this, note that since it does not have any incoming edges,
its embedding depends only on its own features, $x_{n+1}$.
If $n+1 \bmod 3 = 1$, we have $x_{n+1}=2$, and so ${\hat{y}}_{n+1}=1$ without movement. Otherwise, $x_{n+1}=-4$, meaning that it is too far to cross.
\paragraph{Proposition \ref{prop:n-k_move_at_k}:}
Fix $n$ and $k\le n$.
Consider the same construction presented above for a graph of size $k+2$.
Then, add $n-k$ identical nodes:
for each $k < j \le n$, add an edge $k{\rightarrow}j$
(with weights ${\widetilde{w}}_{jj}={\widetilde{w}}_{kj}=1/2$),
and set $x_j = -x_k - 6$.
We claim that all such nodes will move exactly at round $k$.
Consider some node $k < j \le n$.
Since $x_k$ moves only at round $k$ (following Lemma \ref{lem:domino}),
$j$ does not move in any of the first $t \le k$ rounds:
\begin{equation}
{\widetilde{w}}_{j, j}(x_{j}^{(0)} + 3) + {\widetilde{w}}_{k, j}x_{k}^{(0)} = \frac{1}{2}(-x_k^{(0)} - 6 + 3) + \frac{1}{2}(x_k^{(0)}) = \frac{1}{2}(-x_k^{(0)} -3) + \frac{1}{2}(x_k^{(0)}) = -1.5 < 0
\end{equation}
At the end of round $t=k$,
node $k$ has a value of $x_k^{(0)}+3$.
This enables $j$ to cross by moving the maximal distance of 3:
\begin{equation}
{\widetilde{w}}_{j, j}(x_{j}^{(k)}+3) + {\widetilde{w}}_{k, j}x_{k}^{(k)} = \frac{1}{2}(-x_k^{(0)} -6 + 3) + \frac{1}{2}(x_k^{(k)}) = \frac{1}{2}(-x_k^{(0)} -3) + \frac{1}{2}(x_k^{(0)} + 3) = 0
\end{equation}
As this applies to all such $j$, we get that $n-k$ nodes move at round $k$, which concludes our proof.
\section{Optimization}
\subsection{Projection} \label{apx:proj}
We give the proof for squared 2-norm costs; correctness for 2-norm costs follows since squaring is monotone on nonnegative values, and so the argmin is the same.
Computing $x_i$'s best response requires solving the following optimization problem:
\begin{align*}
& \min_{x'_i} c(x'_i, x_i) \quad \textrm{s.t.} \quad \theta^\top\phi(x'_i;x_{-i}) + b = 0 & \\
& \min_{x'_i} \|x'_i - x_i\|_2^2 \quad \textrm{s.t.} \quad \theta^\top\phi(x'_i;x_{-i}) + b = 0 & \\
\intertext{To solve for $x'$, we apply the Lagrange method.
Define the Lagrangian as follows:}
&L(x'_i, \lambda) = \|x'_i - x_i\|_2^2 + \lambda[\theta^\top\phi(x'_i;x_{-i}) + b] & \\
\intertext{Next, to find the minimum of $L$, differentiate with respect to $x'_i$ and set to 0:}
& 2(x'_i - x_i) + \lambda\theta {\widetilde{w}}_{ii} = 0 \\
& x'_i = x_i -\frac{\lambda {\widetilde{w}}_{ii}}{2}\theta & \\
\intertext{Plugging $x'_i$ into the constraint gives:}
& \theta^\top[{\widetilde{w}}_{ii}(x_i -\frac{\lambda {\widetilde{w}}_{ii}}{2}\theta) + \sum_{j \neq i} {\widetilde{w}}_{ji} x_j] + b = 0 & \\
& \theta^\top[\phi(x_i;x_{-i}) -\frac{\lambda {\widetilde{w}}_{ii}^2}{2}\theta] + b = 0 & \\
& \theta^\top \phi(x_i;x_{-i}) + b = \frac{\lambda {\widetilde{w}}_{ii}^2}{2}\|\theta\|_2^2 & \\
& 2\frac{\theta^\top \phi(x_i;x_{-i}) + b}{\|\theta\|_2^2 {\widetilde{w}}_{ii}^2} = \lambda & \\
\intertext{Finally, plugging $\lambda$ into the expression for $x'_i$ obtains:}
& x'_i = x_i - \frac{\theta^\top \phi(x_i;x_{-i}) + b}{\|\theta\|_2^2{\widetilde{w}}_{ii}}\theta &
\end{align*}
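The closed-form projection is straightforward to verify numerically. A small Python sketch (with arbitrary illustrative values, not from the paper's codebase) confirms that the projected point lands exactly on the decision boundary:

```python
# Sanity check of the closed-form projection (illustrative values).
theta = [1.0, -2.0, 0.5]          # classifier direction
b, w_ii = -0.3, 0.4               # bias and self-weight of node i
x_i = [0.2, 1.1, -0.7]            # node i's current features
phi_rest = [0.5, 0.1, 0.9]        # sum_{j != i} w_ji x_j (fixed for i)

def dot(u, v): return sum(a * c for a, c in zip(u, v))
def phi(xi): return [w_ii * a + r for a, r in zip(xi, phi_rest)]

score = dot(theta, phi(x_i)) + b                  # < 0: classified negatively
coef = score / (dot(theta, theta) * w_ii)
x_proj = [a - coef * t for a, t in zip(x_i, theta)]
print(dot(theta, phi(x_proj)) + b)                # ~0 (up to float error)
```

The check works for any values, since $\theta^\top\phi(x'_i;x_{-i}) + b = \text{score} - {\widetilde{w}}_{ii}\,\text{coef}\,\|\theta\|_2^2 = 0$ by construction.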
\subsection{Generalized costs} \label{apx:proj_ext}
Here we provide a formula for computing projections in closed form
for generalized quadratic costs:
\[
c(x,x') = \frac{1}{2}(x'-x)^\top A (x'-x)
\]
for positive-definite $A$.
As before, the same formula holds for generalized 2-norm costs (since the argmin is the same).
Begin with:
\begin{align*}
& \min_{x'_i} c(x'_i, x_i) \quad \textrm{s.t.} \quad \theta^\top\phi(x'_i;x_{-i}) + b = 0 & \\
& \min_{x'_i} \frac{1}{2}(x'_i-x_i)^\top A(x'_i-x_i) \quad \textrm{s.t.} \quad \theta^\top\phi(x'_i;x_{-i}) + b = 0 & \\
\intertext{As before, apply the Lagrangian method:}
& L(x'_i, \lambda) = \frac{1}{2}(x'_i-x_i)^\top A(x'_i-x_i) + \lambda[\theta^\top\phi(x'_i;x_{-i}) + b] & \\
\intertext{Differentiating w.r.t. $x'_i$ and setting to 0:}
& \frac{1}{2}[A^\top(x'_i-x_i)+A(x'_i-x_i)] + \lambda\theta {\widetilde{w}}_{ii} = 0 & \\
& (A^\top+A)x'_i=(A^\top+A)x_i - 2\lambda\theta{\widetilde{w}}_{ii} & \\
\intertext{Since the matrix $(A^\top+A)$ is PD, we can invert to get:}
& x'_i = x_i -2\lambda(A^\top+A)^{-1} \theta{\widetilde{w}}_{ii} & \\
\intertext{Plugging $x'_i$ into the constraint:}
& \theta^\top[{\widetilde{w}}_{ii}(x_i -2\lambda (A^\top+A)^{-1}\theta{\widetilde{w}}_{ii}) + \sum_{j \neq i} {\widetilde{w}}_{ji} x_j] + b = 0 & \\
& \theta^\top[\phi(x_i;x_{-i}) -2\lambda (A^\top+A)^{-1}{\widetilde{w}}_{ii}^2\theta] + b = 0 & \\
& \theta^\top \phi(x_i;x_{-i}) + b = 2\lambda \theta^\top(A^\top+A)^{-1}\theta{\widetilde{w}}_{ii}^2 & \\
\intertext{Since $(A^\top+A)^{-1}$ is also PD,
we get $\theta^\top(A^\top+A)^{-1}\theta > 0$, and hence:}
& \frac{\theta^\top \phi(x_i;x_{-i}) + b}{2\theta^\top(A^\top+A)^{-1}\theta{\widetilde{w}}_{ii}^2} = \lambda & \\
\intertext{Finally, plugging in $\lambda$:}
& x'_i = x_i -\frac{\theta^\top \phi(x_i;x_{-i}) + b}{\theta^\top(A^\top+A)^{-1}\theta{\widetilde{w}}_{ii}^2}(A^\top+A)^{-1} \theta{\widetilde{w}}_{ii} & \\
& x'_i = x_i -\frac{\theta^\top \phi(x_i;x_{-i}) + b}{\theta^\top(A^\top+A)^{-1}\theta{\widetilde{w}}_{ii}}(A^\top+A)^{-1} \theta &
\end{align*}
Setting $A=I$ recovers Eq. \eqref{eq:projection}.
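As with the standard case, the generalized projection can be checked numerically. The sketch below (2-D, arbitrary illustrative values; the helper names are ours) verifies that the projected point satisfies the constraint:

```python
# Sketch: verify the generalized projection lands on the decision boundary
# (2-D example with a fixed positive-definite A; values are illustrative).
def dot(u, v): return sum(a * c for a, c in zip(u, v))

def gen_proj(x_i, theta, b, w_ii, phi_rest, A):
    """Closed-form projection under cost (1/2)(x'-x)^T A (x'-x)."""
    S = [[2 * A[0][0], A[0][1] + A[1][0]],
         [A[0][1] + A[1][0], 2 * A[1][1]]]          # S = A^T + A
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Sinv_theta = [(S[1][1] * theta[0] - S[0][1] * theta[1]) / det,
                  (-S[1][0] * theta[0] + S[0][0] * theta[1]) / det]
    phi = [w_ii * a + r for a, r in zip(x_i, phi_rest)]
    score = dot(theta, phi) + b
    coef = score / (dot(theta, Sinv_theta) * w_ii)
    return [a - coef * s for a, s in zip(x_i, Sinv_theta)]

theta, b, w_ii = [1.0, -0.5], 0.2, 0.5
x_i, phi_rest = [0.3, -1.0], [0.4, 0.6]              # phi_rest = sum_j w_ji x_j
A = [[2.0, 0.3], [0.1, 1.0]]                         # any positive-definite A
x_p = gen_proj(x_i, theta, b, w_ii, phi_rest, A)
phi_p = [w_ii * a + r for a, r in zip(x_p, phi_rest)]
print(dot(theta, phi_p) + b)                         # ~0: on the boundary
```

Calling the same helper with $A=I$ reproduces the projection of the previous section, as the text states.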
\subsection{Improving numerical stability by adding a tolerance term} \label{apx:tolerance}
Theoretically, strategic responses move points precisely on the decision boundary. For numerical stability in classifying (e.g., at test time),
we add a small tolerance term, $\mathtt{tol}$, that ensures that points are projected to lie strictly within the positive halfspace.
Tolerance is added as follows:
\begin{equation}
\min_{x'_i} c(x'_i, x_i) \quad \textrm{s.t.} \quad \theta^\top\phi(x'_i;x_{-i}) + b \ge \mathtt{tol}
\end{equation}
This necessitates the following adjustment to Eq. \eqref{eq:projection}:
\begin{equation}
\label{eq:projection_tol}
\mathrm{proj}_h(x_i;x_{-i}) = x_i - \frac{\theta^\top \phi(x_i;x_{-i}) + b - \mathtt{tol}}{\|\theta\|_2^2 {\widetilde{w}}_{ii}}\theta
\end{equation}
However, blindly applying the above to Eq. \eqref{eq:projection_positive} via:
\begin{equation}
\mathrm{proj}^+_h(x_i;x_{-i}) = x_i -
\min\left\{0, \frac{\theta^\top \phi(x_i;x_{-i}) + b - \mathtt{tol}}{\|\theta\|_2^2 {\widetilde{w}}_{ii}} \right\} \theta
\end{equation}
is erroneous, since any user whose score is lower than $\mathtt{tol}$
will move---although in principle she should not.
To correct for this, we adjust Eq. \eqref{eq:projection_positive} by adding a mask that ensures that only points in the negative halfspace are projected:
\begin{equation}
\label{eq:next_projection_tol}
\mathrm{proj}^+_h(x_i;x_{-i}) = x_i - \one{\theta^\top \phi(x_i;x_{-i}) + b < 0}
\cdot \left(\frac{\theta^\top \phi(x_i;x_{-i}) + b - \mathtt{tol}}{\|\theta\|_2^2 {\widetilde{w}}_{ii}}\theta\right)
\end{equation}
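The masked, tolerance-adjusted projection of Eq. \eqref{eq:next_projection_tol} can be sketched as follows (illustrative values; the function name is ours):

```python
# Sketch of the tolerance-adjusted, masked projection (illustrative values).
theta, b, w_ii, tol = [1.0, -2.0], 0.1, 0.5, 1e-3
phi_rest = [0.2, 0.3]                   # sum_{j != i} w_ji x_j

def dot(u, v): return sum(a * c for a, c in zip(u, v))
def phi(xi): return [w_ii * a + r for a, r in zip(xi, phi_rest)]

def proj_pos(x_i):
    score = dot(theta, phi(x_i)) + b
    if score >= 0:                      # mask: already positive => no move
        return list(x_i)
    coef = (score - tol) / (dot(theta, theta) * w_ii)
    return [a - coef * t for a, t in zip(x_i, theta)]

x_neg = [0.0, 1.0]                      # score < 0: gets projected
x_pos = [2.0, 0.0]                      # score > 0: left untouched
x_new = proj_pos(x_neg)
# x_new's score equals tol, i.e., strictly inside the positive halfspace
```

Points with a positive score are returned unchanged, while projected points land at score $\mathtt{tol}$ rather than exactly $0$.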
\begin{table}[]
\centering
\caption{Real data -- statistics and experimental details}
\begin{tabular}{lrrrrrllc}
\toprule
& $|V|$ & $|E|$ & $\ell$ & $n_{\text{train}}$ & $n_{\text{test}}^*$ & \vtop{\hbox{\strut negative }\hbox{\strut classes}} & \vtop{\hbox{\strut positive }\hbox{\strut classes}} & \vtop{\hbox{\strut $\%\{y=1\}$ }\hbox{\strut (train\,/\,test)}} \\
\midrule
Cora & 2,708 & 5,728 & 1,433 & 640 & 577 & 0,2,3 & 1,4,5,6 & 44\%\,/\,36\% \\
CiteSeer & 3,327 & 4,552 & 3,703 & 620 & 721 & 0,2,3 & 1,4,5 & 50\%\,/\,49\% \\
Pubmed & 19,717 & 44,324 & 500 & 560 & 941 & 1,2 & 0 & 21\%\,/\,18\% \\
\bottomrule
\end{tabular}
\label{tab:real_datasets}
\end{table}
\section{Additional experimental details} \label{ref:experimental_details}
\paragraph{Data.}
We experiment with three citation network datasets: Cora, CiteSeer, and Pubmed \cite{sen2008collective}.
Table \ref{tab:real_datasets} provides summary statistics of the datasets, as well as experimental details.
\paragraph{Splits.}
All three datasets include a standard train-validation-test split, which we adopt for our use.\footnote{Note that nodes in these sets do not necessarily account for all nodes in the graph.}
For our purposes, we make no distinction between `train' and `validation', and use both sets for training purposes.
To ensure the data is appropriate for the inductive setting, we remove from the test set all nodes that can be influenced by train-set nodes---this removes between 6\%--43\% of the test set, depending on the dataset (and possibly the setting; see Sec. \ref{apx:num_sgc_layers}).
In Table \ref{tab:real_datasets}, the number of train samples is denoted $n_{\text{train}}$, and the number of inductive test samples is denoted $n^*_{\text{test}}$ (all original transductive test sets include 1,000 samples).
\paragraph{Binarization.}
To make the data binary (original labels are multiclass),
we enumerated over possible partitions of classes into `negative' and `positive', and chose the most balanced partition.
Experimenting with other but similarly-balanced partitions resulted in similar performance (albeit at times less distinct strategic movement).
The exception to this was PubMed (having only three classes), for which the most balanced partition was neither `balanced' nor stable, and so here we opted for the more stable alternative.
Reported partitions and corresponding negative-positive ratios (for train and for test) are given in
Table \ref{tab:real_datasets}.
\paragraph{Strategic responses.}
At test time, strategic user responses are computed by simulating the response dynamics in Sec. \ref{sec:strat_responses} until convergence.
\section{Additional experimental results} \label{apx:experiments_extra}
\subsection{Experiments on synthetic data}
\label{apx:experiments_extra_synth}
In this section we explore in further depth the relation between user movement and classification performance, using our synthetic setup in Sec. \ref{sec:exp_synth}
(all examples discussed herein use $\alpha=0.7$).
From a predictive point of view, graphs are generally helpful if
same-class nodes are well-connected.
This is indeed the case in our construction
(as can be seen by the performance of the benchmark method with non-extreme $\alpha>0$ values).
From a strategic perspective, however, connectivity increases cooperation,
since neighboring nodes can positively influence each other over time.
In our construction, cooperation occurs mostly within classes, i.e.,
negative points that move encourage other negative points to move,
and similarly for positive points.
\begin{figure}[t!]
\centering
\includegraphics[width=0.31\linewidth]{figures_appendix/synth_extra_1.pdf}
\,
\,
\includegraphics[width=0.307\linewidth]{figures_appendix/synth_extra_2.pdf}
\,
\includegraphics[width=0.31\linewidth]{figures_appendix/synth_extra_3.pdf}
\caption{
\textbf{Synthetic data}: relations between movement and accuracy.
All results use $\alpha=0.7$.
\textbf{(Left:)} The relative number of points that move for every threshold $b$, per class, and comparing one round ($T=1$) to convergence ($T=\infty$).
\textbf{(Center:)} Accuracy for every threshold $b$,
after one round ($T=1$) and at convergence ($T=\infty$).
For each optimal $b$, bars show the relative number of points (per class)
that obtain ${\hat{y}}=1$ due to strategic behavior.
\textbf{(Right:)} Accuracy for $T=1$ (top) and $T=\infty$ (bottom),
relative to the non-strategic benchmark ($T=0$),
and as a result of strategic movement of negative points (red arrows; decrease accuracy)
and positive points (green arrows; improve accuracy).
}
\label{fig:synth_extra}
\end{figure}
\paragraph{Movement trends.}
Fig. \ref{fig:synth_extra} (left) shows how different threshold classifiers $h_b$ induce different degrees of movement.
The plot shows the relative number of points (in percentage points)
whose predictions changed as a result of strategic behavior, per class (red: $y=-1$, green: $y=1$)
and over time: after one round ($T=1$, dashed lines),
and at convergence ($T=\infty$, solid lines).
As can be seen, there is a general trend: when $b$ is small,
mostly negative points move, but as $b$ increases, positive points move instead.
The interesting point to observe is the gap between the first round ($T=1$)
and final round ($T=\infty$).
For negative points, movement at $T=1$ peaks at $b_1 \approx -0.25$,
but triggers relatively few subsequent moves.
In contrast, the peak for $T=\infty$ occurs at a larger $b_\infty \approx 0.15$.
For this threshold,
though \emph{fewer} points move in the first round,
these trigger significantly \emph{more} additional moves at later rounds---a result of the connectivity structure within the negative cluster of nodes (blue arrows).
A similar effect takes place for positive nodes.
\paragraph{The importance of looking ahead.}
Fig. \ref{fig:synth_extra} (center) plots
for a range of thresholds $b$
the accuracy of $h_b$ at convergence ($T=\infty;$ orange line),
and after one round ($T=1$; gray line).
The role of the latter is to illustrate the outcomes as `perceived' by a myopic predictive model that considers only one round (e.g., includes only one response layer $\tilde{\rspns}$); the differences between the two lines demonstrate the gap between perception (based on which training chooses a classifier ${\hat{h}}$) and reality (in which the classifier ${\hat{h}}$ is evaluated).
As can be seen, the myopic approach leads to an under-estimation of
the optimal $b^*$; at $b_1 \approx 0.5$, performance for $T=1$ is optimal,
but is severely worse under the true $T=\infty$,
for which optimal performance is at $b_\infty \approx 1.15$.
The figure also gives insight as to \emph{why} this happens.
For both $b_1$ and $b_\infty$, the figure shows (in bars)
the relative number of points from each class who obtain ${\hat{y}}=1$ as a result of strategic moves.
Bars are stacked, showing the relative number of points that moved per round $T$
(darker = earlier rounds; lightest = convergence).
As can be seen, at $b_1$, the myopic model believes that many positive points, but only few negative points, will cross.
However, in reality, at convergence, the number of positive points that crossed is only slightly higher than that of negative points.
Hence, the reason for the (erroneous) optimism of the myopic model is that it
did not correctly account for the magnitude of correlated moves of negative points, which is expressed over time.
In contrast, note that at $b_\infty$, barely any negative points cross.
\paragraph{How movement affects accuracy.}
An important observation about the relation between movement and accuracy
is that for any classifier $h$,
any \emph{negative} point that moves \emph{hurts} accuracy (since $y=-1$ but predictions become ${\hat{y}}=1$),
whereas any \emph{positive} point that moves \emph{helps}
accuracy (since $y=1$ and predictions are now ${\hat{y}}=1$).
Fig. \ref{fig:synth_extra} (right) shows how these movements combine to affect accuracy.
The figure compares accuracy before strategic behavior ($T=0$; dashed line)
to after one response round ($T=1$; solid line, top plot)
and to convergence ($T=\infty$; solid line, lower plot).
As can be seen, for any $b$, the difference between pre-strategic and post-strategic accuracy
amounts to exactly the degradation due to negative points (red arrows)
plus the improvement of positive points (green arrows).
Note, however, the difference between $T=1$ and $T=\infty$, as they relate to the benchmark model ($T=0$, i.e., no strategic behavior).
For $T=1$ (top), across the range of $b$, positive and negative moves
roughly balance out. As a result, the curves for $T=0$ and $T=1$ are very similar, and share similar peaks in terms of accuracy
(both have $\approx 0.89$).
One interpretation of this is that if points were permitted to move for only one round, the optimal classifier could fully recover the benchmark accuracy by ensuring that the number of positive points that move exceeds the number of negative points.
However, for $T=\infty$ (bottom), there is a \emph{skew} in favor of positive points (green arrows). The result of this is that for the optimal $b$,
additional rounds allow positive points to move in a way that obtains slightly \emph{higher} accuracy (0.91) compared to the benchmark (0.89).
This is one possible mechanism underlying our results on synthetic data in Sec. \ref{sec:exp_synth},
and later for our results on real data in Sec. \ref{sec:experiments_real}.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{figures/num_layers.pdf}
\caption{Accuracy for varying number of SGC layers ($K$). Note test sets may vary across $K$.}
\label{fig:num_layers}
\end{figure}
\subsection{Experiments on real data} \label{apx:experiments_extra_real}
\subsubsection{Extending neighborhood size} \label{apx:num_sgc_layers}
One hyperparameter of SGC is the number of `propagation' layers, $K$,
which effectively determines the graph distance at which nodes can influence others (i.e., the `neighborhood radius').
Given $K$, the embedding weights are defined as
${\widetilde{W}} = (D^{-\frac{1}{2}} A D^{-\frac{1}{2}})^K$, where $A$ is the adjacency matrix and $D$ is the diagonal degree matrix.
For $K=0$, the graph is unused, which results in a standard linear classifier over node features.
Our results in the main body of the paper use $K=1$.
Fig. \ref{fig:num_layers} shows results for an increasing $K$
(we set $T=3, d=0.25$ as in our main results).
Results are mixed: for PubMed, higher $K$ seems to lead to a smaller drop in accuracy for {na{\"i}ve}\ and less recovery for our approach;
for Cora and CiteSeer, results are unstable.
Note, however, that this is likely a product of our inductive setup:
varying $K$ also changes the effective test set (to preserve inductiveness, larger $K$ often necessitates removing more nodes), so test sets vary across conditions and decrease in size,
making it difficult to directly compare results across different $K$.
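For concreteness, the propagation weights can be computed as in the following pure-Python sketch (ours; it follows the standard SGC formulation, adding self-loops so that ${\widetilde{w}}_{ii}>0$, and uses a toy 3-node path graph):

```python
# Illustrative sketch (ours): SGC propagation weights for a toy graph,
# computed as (D^{-1/2} A~ D^{-1/2})^K with self-loops A~ = A + I.
def normalized_adj(A):
    n = len(A)
    At = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in At]
    return [[At[i][j] / (deg[i] ** 0.5 * deg[j] ** 0.5) for j in range(n)]
            for i in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def propagation_weights(A, K):
    n = len(A)
    W = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    S = normalized_adj(A)
    for _ in range(K):      # K = 0 leaves W as the identity (graph unused)
        W = matmul(W, S)
    return W

A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # 3-node path graph
W1 = propagation_weights(A, K=1)        # W1[0][0] is ~0.5 here
```

Note that raising the normalized matrix (rather than $A$ alone) to the $K$-th power is what makes $K=0$ degenerate to the identity, i.e., a graph-free linear classifier.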
\subsubsection{Strategic improvement}
Our main results in Sec. \ref{sec:experiments_real} show that for CiteSeer,
our strategically-aware approach outperforms the non-strategic benchmark
(similarly to our synthetic experiments).
Here we show that these results are robust.
Fig. \ref{fig:cost_lim_resolution} provides higher-resolution results on CiteSeer
for max distances $d \in [0,0.22]$ in hops of $0.01$.
All other aspects of the setup match the original experiment.
As can be seen, our approach slightly but consistently improves upon the benchmark until $d \approx 0.17$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.4\linewidth]{figures/citeseer_expanded.pdf}
\caption{Accuracy on CiteSeer for increasing max distance $d$
(focused and higher-resolution).}
\label{fig:cost_lim_resolution}
\end{figure}
\section*{Checklist}
\begin{enumerate}
\item For all authors...
\begin{enumerate}
\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
\answerYes{}
\item Did you describe the limitations of your work?
\answerNA{}
\item Did you discuss any potential negative societal impacts of your work?
\answerNA{}
\item Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerYes{}
\end{enumerate}
\item If you are including theoretical results...
\begin{enumerate}
\item Did you state the full set of assumptions of all theoretical results?
\answerYes{}
\item Did you include complete proofs of all theoretical results?
\answerYes{}
\end{enumerate}
\item If you ran experiments...
\begin{enumerate}
\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerYes{Our code is anonymously available at \url{https://github.com/StrategicGNNs/Code} }
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerYes{}
\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerYes{}
\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
\answerYes{Provided in appendix}
\end{enumerate}
\item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
\begin{enumerate}
\item If your work uses existing assets, did you cite the creators?
\answerNA{}
\item Did you mention the license of the assets?
\answerNA{}
\item Did you include any new assets either in the supplemental material or as a URL?
\answerNA{}
\item Did you discuss whether and how consent was obtained from people whose data you're using/curating?
\answerNA{}
\item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerNA{}
\end{enumerate}
\item If you used crowdsourcing or conducted research with human subjects...
\begin{enumerate}
\item Did you include the full text of instructions given to participants and screenshots, if applicable?
\answerNA{}
\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
\answerNA{}
\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
\answerNA{}
\end{enumerate}
\end{enumerate}
\section{Discussion}
In this paper we study strategic classification under graph neural networks.
Relying on a graph for prediction introduces dependencies in user responses, which can result in complex correlated behavior. The incentives of the system and its users are not aligned, but also not discordant;
our proposed learning approach utilizes this degree of freedom
to learn strategically-robust classifiers.
Strategic classification assumes rational user behavior; this necessitates classifiers that are simple enough to permit tractable best-responses.
A natural future direction is to consider more elaborate predictive architectures coupled with appropriate boundedly-rational user models,
in hopes of shedding further light on questions regarding
the benefits and risks of transparency and model explainability \cite{ghalme2021strategic,BechavodPZWZ21}.
\section{Strategic user behavior: model and analysis}
Eq. \eqref{eq:classifier+embedding} states that
$h$ classifies $i$ according to her embedding $\phi_i$, which in turn is a weighted sum
of her features and those of her neighbors.
To gain intuition as to the effects of the graph on user behavior,
it will be convenient to assume
weights ${\widetilde{w}}$ are normalized\footnote{This is indeed the case in several common approaches.}
so that we can write:
\begin{equation}
\phi_i = \phi(x_i;x_{-i}) = (1-\alpha_i) x_i + \alpha_i {\bar{x}}_i \quad \text{for some} \,\,\,
\alpha_i \in [0,1]
\end{equation}
That is, $\phi_i$ can be viewed as an interpolation between $x_i$ and some point ${\bar{x}}_i \in \mathbb{R}^\ell$ representing all other nodes,
where the precise point along the line depends on a parameter $\alpha_i$ that represents the influence of the graph (in a graph-free setting, $\alpha_i=0$).
This reveals the dual effect a graph has on users:
On the one hand, the graph limits the ability of user $i$ to influence her own embedding,
since any effort invested in modifying $x_i$
affects $\phi_i$ by at most $1-\alpha_i$.
But the flip side of this is that an $\alpha_i$-portion of $\phi_i$ is fully determined by other users (as expressed in ${\bar{x}}_i$);
if they move, $i$'s embedding also `moves' for free.
A user's `effective' movement radius is $r_i=d(1-\alpha_i)$.
Fig. \ref{fig:graph_examples} (F) shows this for varying $\alpha_i$.
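The value of $r_i$ follows directly from the linearity of $\phi$ in $x_i$: any feasible modification $\delta$ (i.e., with $\|\delta\| \le d$) shifts the embedding by
\[
\|\phi(x_i+\delta;x_{-i}) - \phi(x_i;x_{-i})\| = (1-\alpha_i)\,\|\delta\| \;\le\; (1-\alpha_i)\,d = r_i
\]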
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{figures/graph_examples.pdf}
\caption{
\textbf{Simple graphs, complex behavior}.
In all examples $h_b$ has $b=0$, and ${\widetilde{w}}_{ij}=1$ unless otherwise noted.
\textbf{(A)} Without the graph, $i$ does not move, but with $j{\rightarrow}i$, $\phi_i$ is close enough to cross.
\textbf{(B)} $i$ can cross, but due to $j{\rightarrow}i$, must move beyond $0$.
\textbf{(C)} Without the graph, $i$ has ${\hat{y}}_i=1$, but due to $j{\rightarrow}i$, must move.
\textbf{(D)} Without the graph $i$ would move, but with the graph---cannot.
\textbf{(E)} Hitchhiker:
$i$ cannot move, but once $j$ moves and crosses, $\phi_i$ crosses for free.
\textbf{(F)} Effective radii for various $\alpha$.
}
\label{fig:graph_examples}
\end{figure}
\subsection{Strategic responses} \label{sec:strat_responses}
Given that $h$ relies on the graph for predictions---how should a user modify her features $x_i$ to obtain ${\hat{y}}_i=1$?
In vanilla strategic classification (where $h$ operates on each $x_i$ independently),
users are modeled as rational agents that respond to the classifier by maximizing their utility, i.e.,
play $x'_i = \argmax_{x'} h(x') - c(x_i,x')$,
which is a best-response that results in immediate equilibrium
(users have no incentive to move, and the system has no incentive to change $h$).
In our graph-based setting, however,
the dependence of ${\hat{y}}_i$ on all other users via
$h(x_i;x_{-i})$ makes this notion of best-response ill-defined,
since the optimal $x'_i$ can depend on others' strategic responses, $x'_{-i}$, which are unknown to user $i$ at the time of decision (and may very well rely on $x'_i$ itself).
As a feasible alternative,
here we generalize the standard model
by assuming that users play \emph{myopic best-response} over a sequence of multiple update rounds.
As we will see, this has direct connections to key ideas underlying graph neural networks.
Denote the features of node $i$ at round $t$ by $x_i^{(t)}$,
and set $x_i^{(0)} = x_i$.
A myopic best response means that at round $t$,
each user $i$ chooses $x_i^{(t)}$ to maximize her
utility at time $t$ \emph{according to the state of the game at time $t-1$}, i.e.,
assuming all other users play $\{x_j^{(t-1)}\}_{j \neq i}$,
with costs accumulating over rounds.
This defines a \emph{myopic response mapping}:
\begin{equation}
\label{eq:response_mapping}
\Delta_h(x_i;x_{-i},\kappa) \triangleq
\argmax_{x' \in \mathbb{R}^\ell} h(x';x_{-i}) - c(x_i,x') - \kappa
\end{equation}
where at round $t$ updates are made (concurrently) via
$x_i^{(t+1)} = \Delta_h(x_i^{(t)};x_{-i}^{(t)},\kappa_i^{(t)})$
with accumulating costs $\kappa_i^{(t)} = \kappa_i^{(t-1)} + c(x_i^{(t-1)},x_i^{(t)}), \kappa_i^{(0)}=0$.
Predictions for round $t$ are ${\hat{y}}_i^{(t)} = h(x_i^{(t)};x_{-i}^{(t)})$.
Eq. \eqref{eq:response_mapping}
naturally extends the standard best-response mapping
(which is recovered when $\alpha_i=0\,\,\forall i$, and converges after one round).
By adding a temporal dimension, the actions of users propagate over the graph and in time to affect others.
Nonetheless, even within a single round, graph-induced dependencies
can result in non-trivial behavior; some examples for $\ell=1$ are given in Fig. \ref{fig:graph_examples} (A-D).
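The round structure above can be simulated for a 1-D threshold classifier. The sketch below is our own illustration under stated assumptions: a uniform incoming-neighbor average, positivity at $\phi \geq b$, absolute-value cost, and a movement budget $d$ under which the cost of crossing stays below the gain; it reproduces the flavor of panel (A).

```python
import numpy as np

def simulate(x, nbrs, alpha, b=0.0, d=2.0, max_rounds=100):
    """Concurrent myopic best-response rounds in 1-D for a threshold
    classifier (positive iff phi >= b), with cost |x - x'| and budget d.
    nbrs[i] lists i's incoming neighbors; alpha[i] is i's graph weight."""
    x = [float(v) for v in x]
    for t in range(max_rounds):
        xbar = [np.mean([x[j] for j in nbrs[i]]) if nbrs[i] else 0.0
                for i in range(len(x))]
        phi = [(1 - alpha[i]) * x[i] + alpha[i] * xbar[i] for i in range(len(x))]
        x_new = list(x)
        for i in range(len(x)):
            if phi[i] >= b:
                continue                      # positive users never move
            if alpha[i] < 1:
                target = (b - alpha[i] * xbar[i]) / (1 - alpha[i])
                if abs(target - x[i]) <= d:   # crossing is worth the cost
                    x_new[i] = target         # land exactly on the boundary
        if x_new == x:
            return x, t                       # converged: t rounds had moves
        x = x_new
    return x, max_rounds

# Panel (A): alone, i at -2.5 cannot afford a move of 2.5 > d; with j -> i
# and alpha_i = 0.5, the required move shrinks to 1.5.
x_final, rounds = simulate([-2.5, 1.0], nbrs=[[1], []], alpha=[0.5, 0.0])
print(x_final, rounds)   # [-1.0, 1.0] 1
```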
\subsection{Analysis}
We now give several results demonstrating basic properties of our response model and consequent dynamics,
which shed light on how the graph differentially
affects the system and its users.
\paragraph{Convergence.}
Although users are free to move at will,
movement adheres to a certain useful pattern.
\begin{proposition}
\label{prop:move_once}
For any $h$, if users move via Eq. \eqref{eq:response_mapping},
then for all $i \in [n]$, $x_i^{(t)} \neq x_i^{(t-1)}$ at most once.
\end{proposition}
\begin{proof}
User $i$ will move only when:
(i) she is currently classified negatively, $h(x_i;x_{-i})=-1$, and
(ii) there is some $x'$ for which utility can improve,
i.e., $h(x';x_{-i}) - c(x_i,x') > -1$, which in our case occurs if $h(x';x_{-i})=1$ and $c(x_i,x') < 2$
(since $h$ maps to $[-1,1]$).\footnote{In line with \cite{hardt2016strategic}, we assume that if the value is zero then the user does not move.}
Eq. \eqref{eq:response_mapping} ensures that
the modified $x'_i$ will be such that
$\phi(x'_i;x_{-i})$ lies exactly on the decision boundary of $h$;
hence, $x'_i$ must be \emph{closer} to the decision boundary
(in Euclidean distance) than $x_i$.
This means that any future moves of an (incoming)
neighbor $j$ can only push $i$ further away from the decision boundary;
hence, the prediction for $i$ remains positive, and she has no future incentive to move again.\footnote{Users moving only once ensures that the cumulative costs are never larger than the final gain.}
\end{proof}
Hence, all users move at most once.
The proof reveals a certain monotonicity principle:
users always (weakly) benefit from any strategic movement.
Convergence follows as an immediate result.
\begin{corollary}
Myopic-best response dynamics converge for any $h$
(and after at most $n$ rounds).
\end{corollary}
We will henceforth use $x_i^h$ to denote
the features of user $i$ at convergence (w.r.t. $h$), and ${T_{\mathrm{max}}}$ to denote the number of rounds until convergence.
\paragraph{Hitchhiking.}
When $i$ moves,
the embeddings of (outgoing)
neighbors $j$ who currently have ${\hat{y}}_j=-1$
also move closer to the decision boundary;
thus, users who were initially too far to cross may be able to do so at later rounds.
In this sense, the dependencies across users introduced by the graph-dependent embeddings align user incentives,
and promote an implicit form of cooperation.
Interestingly, users can also obtain positive predictions
\emph{without} moving. We refer to such users as `hitchhikers'.
\begin{proposition}
There exist cases where
${\hat{y}}_i^{(t)}=-1$ and $i$ doesn't move,
but ${\hat{y}}_i^{(t+1)}=1$.
\end{proposition}
A simple example can be found in Figure \ref{fig:graph_examples} (E).
Hitchhiking demonstrates how relying on the graph for classification can promote strategic behavior---even
under a single response round.
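A concrete numeric instance of hitchhiking (our own construction in the spirit of panel (E), with a uniform neighbor average, threshold $b=0$, and budget $d=2$) can be worked out directly:

```python
# Hitchhiking: a chain k -> j -> i with threshold b = 0, budget d = 2, and
# phi_u = (1 - a_u) * x_u + a_u * xbar_u over incoming neighbors.
x = {'i': -9.0, 'j': 1.5, 'k': -3.0}
a = {'i': 0.8, 'j': 0.5, 'k': 0.0}

phi_i = (1 - a['i']) * x['i'] + a['i'] * x['j']       # -1.8 + 1.2 = -0.6 < 0
# i cannot cross alone: phi_i = 0 requires x_i = -6, a move of 3 > d.
# j can: phi_j = 0.5*x_j + 0.5*x_k = 0 requires x_j = 3, a move of 1.5 <= d.
x['j'] = 3.0                                           # j's myopic best response
phi_i_next = (1 - a['i']) * x['i'] + a['i'] * x['j']   # -1.8 + 2.4 = +0.6
print(phi_i, phi_i_next)   # i's embedding crosses without i moving
```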
\paragraph{Cascading behavior.}
Hitchhiking shows how the movement of one user can flip the label of another, but the effects of this process are constrained to a single round.
When considering multiple rounds, a single node can trigger
a `domino effect' of moves that span the entire sequence.
\begin{proposition} \label{prop:move_at_n}
For any $n$, there exists a graph where a single
move triggers $n$ additional moves.
\end{proposition}
\begin{proposition} \label{prop:n-k_move_at_k}
For any $n$ and $k \le n$, there exists a graph where $n-k$
users move after $k$ rounds.
\end{proposition}
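The flavor of these constructions can be sketched in a 1-D chain (our own illustrative construction, with $\alpha=0.5$, threshold $0$, and budget $d=2$; it is not the appendix construction itself), where each node's move becomes affordable only after its incoming neighbor settles:

```python
# Chain 0 -> 1 -> 2 -> ... with alpha = 0.5, threshold 0, budget d = 2, and
# x = [-1, -1.5, -1.5, ...]. Node i's best response sets phi_i = 0, i.e.,
# moves to -x_{i-1}, which is affordable only once node i-1 has settled at 0;
# so node 0 moves in round 1, node 1 in round 2, and so on.
def rounds_until_convergence(n, d=2.0):
    x = [-1.0] + [-1.5] * (n - 1)
    t, moved = 0, True
    while moved:
        moved, t = False, t + 1
        x_new = list(x)
        for i in range(n):
            phi = x[i] if i == 0 else 0.5 * x[i] + 0.5 * x[i - 1]
            target = 0.0 if i == 0 else -x[i - 1]
            if phi < 0 and abs(target - x[i]) <= d:
                x_new[i], moved = target, True
        x = x_new
    return t - 1   # the final round has no movement

print(rounds_until_convergence(6), rounds_until_convergence(10))   # 6 10
```

The number of rounds grows linearly with the chain length, matching the $\Omega(n)$ worst case.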
Proofs are constructive and modular (see Appendix \ref{apx:domino}),
so that increasing $n$
can be done by attaching additional components.
Both results show that, through monotonicity,
users also (weakly) benefit from additional rounds.
This has concrete implications on learning.
\begin{corollary} \label{cor:domino1}
In the worst case, the number of rounds until convergence is $\Omega(n)$.
\end{corollary}
\begin{corollary} \label{cor:domino2}
In the worst case, $\Omega(n)$ users move after $\Omega(n)$ rounds.
\end{corollary}
Thus,
to exactly account for user behavior, the system must
correctly anticipate the strategic responses of users many rounds into the future,
since a bulk of predictions may flip in the last round.
Fortunately, these results also suggest that in some cases,
blocking one node from crossing can prevent a cascade of flips;
thus, it may be worthwhile to `sacrifice' certain predictions for collateral gains.
This presents an interesting tradeoff in learning,
encoded in the learning objective we present next.
\future{
\subsection{Two sides of the same graph}
By using the graph for improving its predictive performance,
the system inadvertently makes it possible for users
to also utilize the graph for their own purposes.
And while it may be unclear a-priori who benefits more from the graph,
for the purpose of learning, it is important to understand
the different ways in which the graph can work for---or against---each party.
We end this section with a summary of the different perspectives, as expressed in the context of learning.
\paragraph{Users' perspective.}
The goal of users is to obtain positive predictions, regardless of their true labels $y$. The graph introduces dependencies in users' utility through the reliance of their embedding on their neighbors.
This acts as a double-edged sword: neighbors can either make it easier for users to cross (if they are closer to the decision boundary, or on its positive side), or hold them back from crossing (if they are further away).
But users are aligned in their incentives---the result of which is that the efforts of one user propagate through the graph to aid others.
Thus, as a collective, users `cooperate' through the graph,
but only indirectly, as they cannot coordinate their moves;
in the extreme, $\alpha_i=1\,\,\forall i$ entails no incentive for any user to move.
Users care about distances, as these determine the cost of moving; hence, they care about how the graph affects distances.
\paragraph{The system's perspective.}
The goal of the system is to predict labels correctly.
In general, the graph is useful if it encodes relevant information beyond what features provide.
The system cares about accuracy; under the 0/1 loss, all that matters is the side of the decision boundary on which each user's embedding falls.
Strategic movement can therefore either help or hurt the system, depending on the true labels: moves help when $y=1$, but hurt when $y=-1$.
Consequently, user and system incentives are not fully discordant (as they are in adversarial settings), which leaves the system a degree of freedom in choosing $h$; we next show how this can be utilized in learning.
}
\section{Experiments}
\subsection{Synthetic data} \label{sec:exp_synth}
We begin our empirical evaluation by demonstrating different aspects of learning in our
setting using a simple but illustrative synthetic example.
We set $\ell=1$
and sample features $x_i \in \mathbb{R}$ for each class from a corresponding Gaussian $\mathcal{N}(y,1)$ (classes are balanced).
For each node we uniformly sample
5 neighbors from the same class and 3 from the other,
and use uniform weights.
This creates a task where both features and the graph are
informative about labels, but only partially, and in a complementary manner
(i.e., noise is uncorrelated; for $i$ with $y_i=1$, if $x_i<0$,
it is still more likely that most neighbors have $x_j>0$, and vice versa).
As it is a-priori unclear how to optimally combine these sources,
we study the effects of relying on the graph to various degrees
by varying a global $\alpha$, i.e.,
setting ${\widetilde{w}}_{ii}=(1-\alpha)$ and ${\widetilde{w}}_{ij}=\alpha/\mathrm{deg}_i$
for all $i$ and all $j \neq i$.
We examine both strategic and non-strategic settings,
the latter serving as a benchmark.
Since $\ell=1$, $H=\{h_b\}$ is simply the class of thresholds, hence we can scan all thresholds $b$ and report learning outcomes for all models $h_b \in H$.
For non-strategic data, the optimal $h^*$ has $b^*\approx 0$;
for strategic data, the optimal $h^*$
can be found using line search.
Testing is done on disjoint but similarly sampled held-out features and graph.
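The sampling procedure above can be sketched as follows (a minimal reconstruction; exact sampling details, seeds, and sizes are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task(n=200, p_same=5, p_other=3):
    """Balanced labels y in {-1,+1}, features x ~ N(y, 1), and per node
    5 uniformly sampled same-class and 3 other-class neighbors."""
    y = rng.permutation(np.repeat([-1, 1], n // 2))
    x = rng.normal(loc=y.astype(float), scale=1.0)
    nbrs = []
    for i in range(n):
        same = np.flatnonzero((y == y[i]) & (np.arange(n) != i))
        other = np.flatnonzero(y != y[i])
        nbrs.append(np.concatenate([rng.choice(same, p_same, replace=False),
                                    rng.choice(other, p_other, replace=False)]))
    return x, y, nbrs

def embed(x, nbrs, alpha):
    """Global-alpha embeddings: w_ii = 1 - alpha, w_ij = alpha / deg_i."""
    return np.array([(1 - alpha) * x[i] + alpha * np.mean(x[nbrs[i]])
                     for i in range(len(x))])

# Non-strategic accuracy of the threshold b = 0 for a few values of alpha.
x, y, nbrs = sample_task()
for alpha in (0.0, 0.5, 0.9):
    print(alpha, np.mean(np.sign(embed(x, nbrs, alpha)) == y))
```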
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\linewidth]{figures_two_thirds/synth_per_iter.pdf}
\,\,\,
\includegraphics[width=0.48\linewidth]{figures_two_thirds/sensitivity.pdf}
\caption{\textbf{(Left)} Accuracy of learned ${\hat{h}}$ for varying graph importance $\alpha \in [0,1]$.
\textbf{(Right)} Accuracy for all threshold classifiers $h_b$,
with sensitivity (horizontal lines)
and per network depth $T$ (dots).}
\label{fig:synth}
\end{figure}
\paragraph{The effects of strategic behavior.}
Figure \ref{fig:synth} (left) presents the accuracy of the learned ${\hat{h}}$
for varying $\alpha$ and in different settings.
In a non-strategic setting (dashed gray),
increasing $\alpha$ helps,
but if reliance on the graph becomes exaggerated, performance deteriorates
($\alpha \approx 0.7$ is optimal).
Allowing users to respond strategically reverses this result:
for $\alpha=0$ (i.e., no graph), responses lower accuracy by $\approx 0.26$ points;
but as $\alpha$ is increased, the gap \emph{grows},
becoming more pronounced as test-time response rounds progress (blue lines).
Interestingly, performance under strategic behavior is \emph{worst} around the previously-optimal
$\alpha \approx 0.75$.
This shows how learning in a strategic environment---but \emph{neglecting} to account for strategic behavior---can be detrimental.
By accounting for user behavior,
our approach (orange line) not only recovers performance,
but slightly improves upon the non-strategic setting
(this can occur when positive points are properly incentivized \cite{levanon2022generalized}; see Appendix \ref{apx:experiments_extra_synth}).
\paragraph{Sensitivity analysis.}
Figure \ref{fig:synth} (right) plots the accuracy of all threshold models $h_b$
for increasing values of $\alpha$. For each $\alpha$, performance exhibits a `bell-curve' shape, with its peak at the optimal $h^*$.
As $\alpha$ increases, bell-curves change in two ways.
First, their centers \emph{shift}, decreasing from positive values towards zero (which is optimal for non-strategic data);
since using the graph limits users' effective radius of movement,
the optimal decision boundary can be less `stringent'.
Second, and interestingly, bell-curves become \emph{narrower}.
We interpret this as a measure of \emph{tolerance}:
the wider the curve, the lower the loss in accuracy
when the learned ${\hat{h}}$ is close to
(but does not equal) $h^*$.
The figure shows for a subset of $\alpha$-s
`tolerance bands': intervals around $b^*$ that include thresholds $b$ for which the accuracy of $h_b$ is at least $90\%, 95\%$, and $97.5\%$
of the optimum (horizontal lines).
Results indicate that larger $\alpha$-s
provide \emph{less} tolerance.
If variation in ${\hat{h}}$ can be attributed to the number of examples,
this can be interpreted as hinting that larger $\alpha$-s
may entail larger sample complexity.
\paragraph{Number of layers ($T$).}
Figure \ref{fig:synth} (right) also shows for each bell-curve the accuracy achieved by learned models ${\hat{h}}$ of increasing depths, $T=1,\dots,4$ (colored dots).
For $\alpha=0$ (no graph), there are no inter-user dependencies,
and dynamics converge after one round. Hence, $T=1$ suffices and is optimal,
and additional layers are redundant.
However, as $\alpha$ increases, more users move in later rounds,
and learning with insufficiently large $T$ results in deteriorated performance.
This becomes especially distinct for large $\alpha$: e.g., for $\alpha=0.9$,
performance drops by $\sim 11\%$
when using $T=1$ instead of the optimal $T=4$.
Interestingly, lower $T$ always results in lower, more `lenient' thresholds; as a result, performance deteriorates, more quickly for larger, more sensitive $\alpha$.
Thus, the relations between $\alpha$ and $T$ suggest that greater reliance on the graph requires more depth.
\subsection{Experiments on real data}
In this section, we evaluate the impact of strategic behavior on a graph and the effectiveness of our method on real datasets. In \cref{subsec:experimental_setting} we describe the experimental setup in detail. In \cref{subsec:main_results} we present our main results, showing that our method is highly effective in defending against strategic behavior. In \cref{subsec:cost_lim,subsec:train_iterations} we analyze the effects of the cost limit and of the number of train iterations, respectively.
\subsection{Experimental Setup}
\label{subsec:experimental_setting}
In this section we describe our experimental setup and the data we used. We ran each experiment five times with different random seeds for each dataset, and report means and standard deviations.
\para{Setup.} We trained each model using the Adam optimizer with a learning rate of 0.2 and a weight decay of $5\cdot 10^{-4}$, following \citep{wu2019simplifying}. Training is halted once the model reaches saturation, which occurred after 20 epochs.
Since under a cost limit of 0.5 both the regular and robust models converge to the same outcome, we limit the cost to 0.25 (the mid-point), at which our defense remains effective but not trivially so (as it is for very low cost limits, e.g., 0.01 or 0.05).
We use a sigmoid temperature of 0.05, which produced stable and smooth results over the train set (i.e., no spikes in Fig. \ref{fig:cost_lim} or Fig. \ref{fig:train_iterations}).
Since these experiments are intended to show the strength of our robust model, we use a sufficient number of train iterations: as Fig. \ref{fig:train_iterations} shows, our model reaches saturation beyond 3 train iterations, and so all experiments use 3 train iterations.
\para{Data.} We use the citation network datasets Cora, CiteSeer and PubMed \cite{sen2008collective}.
Nodes represent documents and edges represent citation links. Training, validation and test splits are given by binary masks. Statistics of all datasets are provided in the supplementary material.
\subsection{Preprocessing}
In this section we elaborate on our preprocessing pipeline, which consists of merging the data splits, binarizing the labels, directing the edges, and making the data inductive.
\para{Merging the data splits.} Given the train/validation/test split of each dataset, we merge the validation set into the train set, so that each dataset uses a simple train/test split.
\para{Label binarization.} We binarize each dataset by designating a subset of the labels as negative and the rest as positive. Among the balanced label splits, we choose the one for which strategic behavior has the largest influence on model accuracy (i.e., causes the largest decrease).
\para{Directing the edges.} The edges in each of our datasets are undirected. We therefore direct each edge toward the endpoint with the lower degree. Since an edge $i{\rightarrow}j$ means that the embedding of $j$ depends on $i$, this gives more expressive power to higher-degree nodes: the lower-degree node's embedding is affected by the actions of the higher-degree node. It also removes the symmetry inherent to undirected graphs.
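A sketch of this edge-directing step (the function name and the id-based tie-breaking rule are our own; the paper's tie-breaking may differ):

```python
# Direct each undirected edge toward the lower-degree endpoint, so that the
# higher-degree node influences the lower-degree one: an edge u -> v means
# v's embedding depends on u.
from collections import Counter

def direct_edges(undirected_edges):
    deg = Counter()
    for u, v in undirected_edges:
        deg[u] += 1
        deg[v] += 1
    directed = []
    for u, v in undirected_edges:
        # tie-break by node id to keep the result deterministic
        if (deg[v], v) < (deg[u], u):
            directed.append((u, v))    # v has lower degree: u -> v
        else:
            directed.append((v, u))
    return directed

edges = [(0, 1), (0, 2), (0, 3), (1, 2)]   # node 0 has degree 3
print(direct_edges(edges))                 # [(0, 1), (0, 2), (0, 3), (2, 1)]
```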
\para{Making the data inductive.} The standard splits are transductive; to make our train/test splits inductive, we remove from the test set every node that lies within the range of influence of a node in the train set (for depth $k=1$, these are its direct neighbors).
\para{Model and weights.} We use SGC \citep{wu2019simplifying} as our base model, and define ${\widetilde{w}}$ via the standard normalized adjacency matrix used in GNNs, $D^{-1/2} A D^{-1/2}$, where $A$ is the adjacency matrix and $D$ is the diagonal degree matrix.
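For reference, the symmetric normalization can be sketched as below (whether self-loops are added before normalizing is an implementation detail we assume here, following common GCN/SGC implementations):

```python
import numpy as np

def sym_normalize(A, add_self_loops=True):
    """w = D^{-1/2} A_hat D^{-1/2}, the standard GNN adjacency normalization;
    A_hat = A + I when self-loops are added."""
    A_hat = A + np.eye(A.shape[0]) if add_self_loops else A
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # a single undirected edge
print(sym_normalize(A))                  # [[0.5, 0.5], [0.5, 0.5]]
```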
\subsection{Main Results}
\label{subsec:main_results}
\cref{tab:main_results} presents our main results across the various datasets.
Our model achieves an increase of \%, \% and \%
in accuracy on the Cora, CiteSeer and PubMed datasets, respectively.
Note that our method is highly effective across all datasets, even with only 3 train iterations and a cost limit of 0.25.
\begin{table*}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{lrrr}
\toprule
& \multicolumn{1}{c}{\textbf{Cora}} & \multicolumn{1}{c}{\textbf{CiteSeer}} &
\multicolumn{1}{c}{\textbf{PubMed}}\\
\midrule
{Na{\"i}ve}\ & \eb{} & \eb{} & \eb{}\\
Robust (ours) & \eb{} & \eb{} & \eb{}\\
\midrule
Non-strategic (benchmark) & \eb{} & \eb{} & \eb{}\\
\bottomrule
\end{tabular}
}
\caption{
The test accuracy of our model with 5 train iterations, under a cost limit of 0.05.}
\label{tab:main_results}
\end{table*}
\subsection{Limiting the Cost}
\label{subsec:cost_lim}
How does the cost limit affect our model? Intuitively, the higher the limit, the stronger the strategic behavior, and hence the harder it is for our model to remain robust to it. We examine whether this holds in practice.
In the experiments described in \cref{tab:main_results}, we used a cost limit of 0.05. Here, we vary this value and observe the effectiveness of our model.
\cref{fig:cost_lim} shows the results on all of our datasets and demonstrates that our model is effective across multiple values of the cost limit.
In line with our intuition, as the cost limit increases, the effectiveness decreases.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/cost_lim.pdf}
\caption{Accuracy as a function of the cost limit, where our model uses 5 train iterations.}
\label{fig:cost_lim}
\end{figure}
\subsection{Train Iterations}
\label{subsec:train_iterations}
We analyze the effect of the number of train iterations on the performance of our model. As we train for more iterations, test accuracy naturally increases for all datasets, up to a point of saturation.
Note that after reaching saturation, our model is even more robust than what is displayed in \cref{tab:main_results}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/train_iterations.pdf}
\caption{Accuracy as a function of the number of train iterations, under a cost limit of 0.05.}
\label{fig:train_iterations}
\end{figure}
Hynobius abei
Abe's Salamander, Abe Sansho-uo
Subgenus: Hynobius
family: Hynobiidae
subfamily: Hynobiinae
IUCN (Red List) Status Critically Endangered (CR)
National Status Endangered
Hynobius abei ranges from 47-71 mm in snout-vent length, and 82-122 mm in total length. It has a short and stout trunk with 11-13 costal grooves, a strongly keeled tail, and fairly short limbs with five toes on the hindlimbs. The color can be red-brown to blackish brown, with occasional flecks of silver or light blue (Goris and Maeda 2004).
Country distribution from AmphibiaWeb's database: Japan
This species is endemic to Japan. It inhabits secondary bamboo forest or deciduous hardwood forest of the northern region of Kyoto and Hyogo Prefectures. It has a fragmented distribution and is known from fewer than 20 sites (Goris and Maeda 2004).
Hynobius abei breeds during November and December, when there is snow. The breeding grounds are in shaded pools formed by small springs or seeps in the forest, where the temperature is relatively stable year-round (neither freezing in the winter nor above 68 degrees in the summer). Adults return to the same pool year after year. During the mating season, the male's jowls swell, which causes the head to appear somewhat triangular. This distinguishes breeding males from the females morphologically (Goris and Maeda 2004).
Females can lay anywhere from 26 to 109 eggs. The eggs are laid in 2 coiled sacs with longitudinal folds, beneath the layer of leaves. After the clutch is deposited, the male remains near the egg sacs for a time. The larvae develop under the snow. They generally mature quickly once the snow melts, though some will overwinter as larvae. Metamorphosis usually occurs in June or July, but occasionally not until August. Juveniles do not disperse far from the breeding pool. (Goris and Maeda 2004).
This species is nocturnal, foraging at night. It is carnivorous, preying on earthworms, spiders, and snails (Goris and Maeda 2004) as well as various insects and aquatic invertebrates (Hutchins et al. 2003).
Hynobius abei is endangered. The decline in population is due primarily to loss and fragmentation of habitat because of urbanization (Hutchins et al. 2003). This salamander has a very limited distribution, due in part to its environmental requirements. It needs forested habitat with shade and stable temperatures, and is thus highly susceptible to habitat disturbance (Goris and Maeda 2004).
Possible reasons for amphibian decline
General habitat alteration and loss
Habitat modification from deforestation, or logging related activities
Loss of genetic diversity from small population phenomena
This species is easily confused with H. nebulosus, H. takedai, and H. hidamontanus, which have very similar morphology (Goris and Maeda 2004).
The karyotype of Hynobius abei is 2n = 56, with 9 large, 4 medium, and 15 small pairs. Chromosomal characteristics distinguishing this species include one pair of acrocentric chromosomes in the medium-sized group and 5 pairs of bi-armed chromosomes in the small-sized group. (Seto and Matsui 1984). In addition, H. abei has a very short C-positive/R-negative band in the terminal region of the long arm of chromosome 2, as well as a very short band composed of the entire short arm of chromosome 10 (Izumisawa et al. 1990).
Goris, R.C. and Maeda, N. (2004). Guide to the Amphibians and Reptiles of Japan. Krieger Publishing Company, Malabar, Florida.
Izumisawa, Y., Ikebe, C., Kuro-O., M., and Kohno, S. (1990). ''Cytogenetic studies of Hynobiidae (Urodela). IX. Karyological characters of Hynobius abei Sato by means of R- and C-banding.'' Cellular and Molecular Life Sciences, 46(1), 104-106.
Seto, T., and Matsui, M. (1984). ''Karyotype of the Japanese salamander, Hynobius abei.'' Cellular and Molecular Life Sciences, 40(8), 874.
Written by Peera Chantasirivisal (Kris818 AT berkeley.edu), URAP, UC Berkeley
Edited by Kellie Whittaker (2008-01-03)
Species Account Citation: AmphibiaWeb 2008 Hynobius abei: Abe's Salamander <http://amphibiaweb.org/species/3878> University of California, Berkeley, CA, USA. Accessed Jan 21, 2020.
Priyanka Chopra Addresses Dropping Nick Jonas' Surname From Her Instagram Handle
Posted By: Joe Brock
Global star Priyanka Chopra is finally opening up about the scrutiny that followed after she dropped her husband, Nick Jonas' last name from her Instagram handle last year. The 'Mary Kom' actor described the uproar on social media following the change as "a professional hazard" to Vanity Fair for its February cover story, reported Fox News. Priyanka Chopra Drops Husband Nick Jonas' Surname From Social Media Handles.
The 39-year-old star recalled the "random Monday in November" when she reverted to only her first and last name on Instagram, which immediately sparked speculation from fans about an impending split. But as Priyanka tells the fashion magazine, social media is far from being the be-all and end-all. She added that the experience served as a prime example of the microscope celebrities find themselves under in this age. Priyanka Chopra Removes Jonas From Her Name, Sparks Rumours of Divorce From Nick Jonas Among Fans.
"It's a very vulnerable feeling, actually, that if I post a picture, everything that's behind me in that picture is going to be zoomed in on, and people are going to speculate," the 'Quantico' star explained. "It's just a professional hazard… Because of the noise of our social media, because of the prevalence that it has in our lives, I think it seems a lot larger than it is. I think that we give it a lot more credence in real life, and I don't think it needs that," she added.
To put an end to all the rumours about their split in November, Priyanka had left a flirty comment on one of Jonas' workout videos. "Damn! I just died in your arms…" she wrote at the time, followed by a couple of heart emojis. Priyanka and Nick got married in a Christian and a Hindu ceremony in Jodhpur's Umaid Bhawan Palace on December 1 and 2 in 2018. Later, the couple also hosted two receptions in Delhi and Mumbai.
(This is an unedited and auto-generated story from Syndicated News feed, The Hamden Journal Staff may not have modified or edited the content body)
Joe Brock
Joe founded The Hamden Journal with an aim to bring relevant and unaltered news to the general public with a specific view point for each story catered by the team. He is a proficient journalist who holds a reputable portfolio with proficiency in content analysis and research. With ample knowledge about the business industry, Joe also contributes his knowledge for the business section of the website.
package org.whitehole.app.model;
import java.util.HashMap;
import java.util.UUID;
import java.util.stream.Stream;
public class Workspace {
private final HashMap<UUID, FutureProject> _projects = new HashMap<>();
public Stream<FutureProject> getProjects() {
return _projects.values().stream();
}
public Workspace() {
}
public FutureProject getProjectById(UUID id) {
return _projects.get(id);
}
public Workspace addProject(FutureProject p) {
_projects.put(p.getId(), p);
return this;
}
}
package ezbake.data.mongo.helper;
import com.mongodb.BasicDBObject;
import com.mongodb.DBObject;
import com.mongodb.WriteResult;
import com.mongodb.util.JSON;
import ezbake.base.thrift.EzSecurityToken;
import ezbake.base.thrift.Visibility;
import ezbake.data.common.TokenUtils;
import ezbake.data.mongo.EzMongoHandler;
import ezbake.data.mongo.redact.RedactHelper;
import ezbake.data.mongo.thrift.EzMongoBaseException;
import org.apache.accumulo.core.security.VisibilityParseException;
import org.apache.commons.lang.StringUtils;
import org.apache.thrift.TException;
import org.codehaus.jettison.json.JSONException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.ArrayList;
import java.util.List;
/**
* Created by mchong on 8/26/14.
*/
public class MongoUpdateHelper {

    private final Logger appLog = LoggerFactory.getLogger(MongoUpdateHelper.class);

    private EzMongoHandler ezMongoHandler;

    public MongoUpdateHelper(EzMongoHandler handler) {
        this.ezMongoHandler = handler;
    }

    private boolean isUpdatingSecurityField(Object existingField, Object updaterField) {
        // We consider the user to be updating the security fields if:
        // the existing doc doesn't have the field and the new doc does, OR
        // both docs have the field and the new value is different.
        return (existingField == null && updaterField != null) ||
                ((existingField != null && updaterField != null) &&
                        !existingField.equals(updaterField));
    }
    public boolean isUpdatingSecurityFields(DBObject existingContent, DBObject updater) {
        // We need to compare all the security fields; if any one of them
        // differs between the existing document and the updater, the
        // security fields are being updated.
        final String[] securityFields = {
                RedactHelper.VERSION_FIELD,
                RedactHelper.FORMAL_VISIBILITY_FIELD,
                RedactHelper.EXTERNAL_COMMUNITY_VISIBILITY_FIELD,
                RedactHelper.PLATFORM_OBJECT_READ_VISIBILITY_FIELD,
                RedactHelper.PLATFORM_OBJECT_WRITE_VISIBILITY_FIELD,
                RedactHelper.PLATFORM_OBJECT_MANAGE_VISIBILITY_FIELD,
                RedactHelper.PLATFORM_OBJECT_DISCOVER_VISIBILITY_FIELD,
                RedactHelper.ID_FIELD,
                RedactHelper.COMPOSITE_FIELD,
                RedactHelper.PURGE_IDS_FIELD,
                RedactHelper.APP_ID_FIELD
        };
        for (final String field : securityFields) {
            if (isUpdatingSecurityField(existingContent.get(field), updater.get(field))) {
                return true;
            }
        }
        return false;
    }
    public boolean isUpdatingSecurityFields(DBObject content, Visibility vis) {
        // Note: only the Visibility argument is inspected here; the document
        // content itself does not determine whether this is a security-field update.
        return vis != null &&
                (vis.isSetFormalVisibility() || vis.isSetAdvancedMarkings());
    }
    public int updateDocument(String collectionName, String jsonQuery, String jsonDocument,
            boolean useUpdateOperators, Visibility vis, EzSecurityToken security)
            throws TException, EzMongoBaseException, VisibilityParseException, JSONException {

        TokenUtils.validateSecurityToken(security, ezMongoHandler.getConfigurationProperties());

        if (StringUtils.isEmpty(collectionName)) {
            throw new EzMongoBaseException("collectionName is required.");
        }

        int updatedCount = 0;
        final String finalCollectionName = ezMongoHandler.getCollectionName(collectionName);
        DBObject content = (DBObject) JSON.parse(jsonDocument);

        // See if we are able to update the data in db with the user's classification in the user token.
        // Also need to see if this is a WRITE or MANAGE operation - MANAGE is when users update the
        // security fields in the Mongo document.
        String operation = EzMongoHandler.WRITE_OPERATION;
        if (isUpdatingSecurityFields(content, vis)) {
            operation = EzMongoHandler.MANAGE_OPERATION;
        }

        final List<DBObject> results = ezMongoHandler.getMongoFindHelper().findElements(
                collectionName, jsonQuery, "{ _id: 1}", null, 0, 0, false, security, false, operation);

        if (results.size() > 0) {
            // Construct a list of _ids to use as the filter.
            final List<Object> idList = new ArrayList<Object>();
            for (final DBObject result : results) {
                Object id = result.get("_id");
                appLog.info("can update DBObject (_id): {}", id);
                idList.add(id);
            }
            final DBObject inClause = new BasicDBObject("$in", idList);
            final DBObject query = new BasicDBObject("_id", inClause);

            if (!useUpdateOperators) {
                if (operation.equals(EzMongoHandler.MANAGE_OPERATION)) {
                    // We need to check if the user is allowed to update with the new Visibility.
                    ezMongoHandler.getMongoInsertHelper().checkAbilityToInsert(security, vis, content, null, true, false);
                }
                // Use $set if we are not using mongo update operators (such as $pull) in the jsonDocument.
                content = new BasicDBObject("$set", content);
            } else {
                // When using an update operator such as $pull and doing a MANAGE operation,
                // we need to set the security fields in the object as well.
                if (operation.equals(EzMongoHandler.MANAGE_OPERATION)) {
                    DBObject visDBObject = new BasicDBObject();
                    // We need to check if the user is allowed to update with the new Visibility.
                    ezMongoHandler.getMongoInsertHelper().checkAbilityToInsert(security, vis, visDBObject, null, true, false);
                    content.put("$set", visDBObject);
                }
            }

            final boolean upsert = false;
            final boolean multi = true;
            updatedCount = updateContent(finalCollectionName, query, content, upsert, multi);
        } else {
            appLog.info("Did not find any documents to update.");
        }
        return updatedCount;
    }

    public int updateContent(String finalCollectionName, DBObject query, DBObject content, boolean upsert,
            boolean multi) {
        final WriteResult writeResult =
                ezMongoHandler.getDb().getCollection(finalCollectionName).update(query, content, upsert, multi);
        appLog.info("updated - write result: {}", writeResult.toString());
        return writeResult.getN();
    }
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 2,903 |
His tales of travelling the world in his one-man shows, Are You Dave Gorman? (which later became the television series The Dave Gorman Collection) and Dave Gorman's Googlewhack Adventure, have dazzled live audiences and broken box-office records across the globe, establishing Dave as one of the hottest live international performers. His achievements include a three-month run on Broadway in New York, plus sell-out seasons at The Edinburgh Fringe, The Sydney Opera House, The Melbourne Comedy Festival and the US Comedy Arts Festival in Aspen, where he won the HBO US Comedy Arts Festival One Man Show Award in both 2001 and 2004.
Dave appeared as a team captain on the second series of BBC Three's Rob Brydon's Annually Retentive and has appeared on The Daily Show, filming six segments with Jon Stewart for More 4. His radio show Genius for BBC Radio 4 has since been made into two successful television series for BBC Two. His 90-minute documentary, Dave Gorman in America Unchained, broadcast on More 4 in 2008, delivered more than six times the channel's average viewing figures for the slot.
Dave has also had a phenomenal track record with his books Are You Dave Gorman? and Dave Gorman's Googlewhack Adventure, which climbed to number one in the Sunday Times non-fiction bestsellers list. His third book, America Unchained, was released in 2008.
He has won two BAFTA Awards as a writer on The Mrs Merton Show for BBC One. In addition, he has made appearances on The Late Show With David Letterman (CBS), ITV1's The Frank Skinner Show, BBC One's Absolutely Fabulous, BBC Two's Never Mind The Buzzcocks and BBC One's They Think It's All Over, and he acted in Steve Coogan's feature film 24 Hour Party People.
Dave spent autumn 2009 touring the country with his show Sit Down, Pedal, Pedal, Stop, and Stand Up. Due to popular demand the tour was extended into 2010 and the DVD of this tour can be purchased online through Digital Stores.
He returned to the Edinburgh Fringe in 2011 with his new show Dave Gorman's Powerpoint Presentation. It was his first run at the Fringe for 8 years. Dave performed 28 shows in 26 days and each and every one of them sold out! He then toured the show throughout October and November and, due to popular demand, extended the tour into 2012 with another 30+ shows around the country and four nights in London at the Southbank Centre. He wrapped up the UK tour with a show at The Bloomsbury Theatre, held as a benefit for Shelter, before taking the show to Montreal for an 8-show run at the Just For Laughs Comedy Festival, which then took him and the show to The Sydney Opera House. | {
"redpajama_set_name": "RedPajamaC4"
} | 265 |
{"url":"https:\/\/quant.stackexchange.com\/questions\/36356\/valuation-of-fixed-income-securities","text":"# Valuation of Fixed-Income Securities [closed]\n\nCould one of you please assist with question 4 shown in the image above?\n\n## closed as off-topic by LocalVolatility, Alex C, Helin, Bob Jansen\u2666Oct 10 '17 at 5:06\n\nThis question appears to be off-topic. The users who voted to close gave this specific reason:\n\n\u2022 \"Basic financial questions are off-topic as they are assumed to be common knowledge for those studying or working in the field of quantitative finance.\" \u2013 LocalVolatility, Alex C, Helin, Bob Jansen\nIf this question can be reworded to fit the rules in the help center, please edit the question.\n\n\u2022 I am voting to close this question for being too basic. It also shows no effort whatsoever. \u2013\u00a0LocalVolatility Oct 9 '17 at 22:40\n\u2022 Hey LocalVolatility, we all have to start some where. \u2013\u00a0johnrogers Oct 9 '17 at 22:42\n\u2022 Please see quant.stackexchange.com\/help\/on-topic for what is on-topic here. \u2013\u00a0LocalVolatility Oct 9 '17 at 22:43\n\nBonds X and Y pay semiannual coupons. Then the cash flow for X is a single payment at the maturity of 3\\$(half of 6\\$ since it's semiannual) plus the par value. The discounted cash flow has to equal the price of the bond.\nThen you have :$$100,98 = \\frac{103}{1+R_{0.5}}$$ Solving for $R_{0.5}$ gives you 2%\nThen you can perform the same calculation for Y, knowing that you have a first cash flow of 4\\$(that has to be discounted with$R_{0.5}$) for the first 6-month period and a second for the second period for the coupon and the principal payment (that should be discounted with$R_1$). I wish I could \"add a comment\" to your answer but I can't for the moment. Of course, the coupon for Y is 4\\$ I corrected my answer. However, you are missing to square the denominator (because you are discounting the cash flow of the second period). 
You should have : $$103.59 = \\frac{4}{1+R_{0.5}} + \\frac{104}{(1+R_1)^2}$$ Since you know $R_{0.5}$ you can solve for $R_{1}$.","date":"2019-05-26 19:37:32","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6424113512039185, \"perplexity\": 1082.1885612917138}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-22\/segments\/1558232259452.84\/warc\/CC-MAIN-20190526185417-20190526211417-00219.warc.gz\"}"} | null | null |
{"url":"http:\/\/electronics.stackexchange.com\/questions\/107858\/if-mosfet-is-a-voltage-controlled-device-then-why-do-we-need-to-supply-it-with","text":"# If MOSFET is a voltage-controlled device, then why do we need to supply it with high current when using in an H-bridge?\n\nI might be confusing myself, but from what I am reading so far, a BJT is a current-controlled device and a MOSFET is a voltage-controlled device, which to me implies that a MOSFET requires very low input current.\n\nIf that is correct, than why, in say an H-bridge, do we need to use a high side driver to supply 2A? Is it because of the Miller capacitance that we need to charge the gate capacitance for a split second? Please explain.\n\n-\n\nExactly what you suspect. The effect of the gate capacitance is to slow down the switching. (The Miller effect multiplies the 'effective' gate capacitance.)\n\nIf we want the H-bridge to switch only occasionally (let's say at 1 Hz) a low gate current is (in most cases) fine, because the thermal effects of the switching are spread over 1 second, which is a relatively long period.\n\nBut if PWM is used, for instance at 300 Hz, there is only ~ 3ms to spread the heat dissipated in a switch (on+off), so this heat must be minimized.\n\n-\n\nThe simple answer is that $I_c=C \\dfrac{dv}{dt}$ and there is only current demand during fast switching.\n\nIt is my experience that the best rapid switching bridge commutators from 5 to 500 Amps will use a current gain of around 50 to 200 depending on component selection. For BJT's it ranges from 10 to 100 or 5 to 10% of the linear hFE or beta. 
Depending on dopants and geometry of junction.\n\n\u2022 and we were always taught FETs were just voltage controlled high impedance, but due to gate-drain or base-collector Miller capacitance, it acts like a current controlled rise time switch during transition, then voltage controlled resistance after.\n\n\u2022 it is wise to learn about dead-band commutation in Bridge drivers too, in the microsecond range to prevent frying drivers.\n\n\u2022 -\n\nTechie Stuff\n\nCharge time for the Miller capacitance is larger than that for the gate to source capacitance Cgs due to the rapidly changing drain voltage for entire duration of Vgs transition (current = C dv\/dt).\n\nOnce both of the capacitances Cgs and Cgd are fully charged, gate voltage (VGs) starts increasing again until it reaches the supply voltage, where Ig drops near zero.\n\n-\nRashid if someone said you need 5A gate current, then your output switched current must be >50A. \u2013\u00a0 user40708 Apr 25 at 0:06\n\nYou need a high side driver to drive N channel FETs that are connected to the $V_{\\text{in}}$ rail. The gate needs be driven higher than the source, which will be at $V_{\\text{in}}$ when the switch is on.\n\nGate drive current doesn't necessarily need to be 2A. There are many high side drives that only drive ~500mA, and that's just fine. Usually the amount of drive current ends up being determined by $R_g$ (gate resistance), some of which is internal to the FET and can't be reduced. Often $R_g$ will end up being 10 or 20 Ohms for an optimal (non ringing) drive signal.\n\nAs a crude description, the gate is capacitively coupled to the channel via $C_{\\text{iss}}$, which is made up of $C_{\\text{dg}}$ (the Miller capacitance) and $C_{\\text{gs}}$. These are both charged when the FET is switched. All the switching happens as the drain voltage changes (I'm not trying to be too obvious). 
As the drain voltage changes, charge on $C_{\\text{dg}}$ changes, and the gate voltage is effectively stuck at a (near) constant level during that time. That's called the Miller plateau. So, that means that the switching time is defined by how quickly the gate drive circuit can manage the charge in $C_{\\text{dg}}$. That is determined by $R_g$, $Q_{\\text{dg}}$, Miller plateau voltage, and gate drive voltage.\n\n-\nbut the gate drive current depends on the gate charge right? please correct me if i am wrong. \u2013\u00a0 rashid Apr 24 at 21:10\nIt depends on the charge yes, but also gate resistance and drive voltage and to a degree the drain current. \u2013\u00a0 gsills Apr 24 at 21:14\nReally sorry about that. I knew that the \\text{} was there for a reason, I just didn't know why. The downside of it is that it is really a pain to render the correct notation with LaTeX if it requires all that markup. Thanks for letting me know, anyway. In any case, I know there's a simple way to roll back those kinds of changes, but I don't know how. If you want, I can edit it all back. \u2013\u00a0 Ricardo Apr 26 at 3:21\n@Ricardo ... It's OK, you didn't know. Everyone learns stuff on this site. I know I have. I don't know how to roll back either (heh), so I just edited it back. But, I did leave Vin with subscript. 
\u2013\u00a0 gsills Apr 26 at 3:28","date":"2014-09-02 09:09:00","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7668163180351257, \"perplexity\": 1266.8933221163986}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2014-35\/segments\/1409535921872.11\/warc\/CC-MAIN-20140909040133-00391-ip-10-180-136-8.ec2.internal.warc.gz\"}"} | null | null |
\section{Introduction}
In recent years, video streaming has become a huge (and still growing) part of daily Internet traffic. In the US, Netflix and YouTube alone account for 50\% of download traffic during peak hours (8pm - 11pm). User-perceived quality-of-experience (QoE) is critical in the Internet video delivery system, as it impacts user engagement and the revenues of video service providers~\cite{dobrian2011understanding}.
Given that there is little in-network support for QoE in the complex Internet video delivery system, client-side bitrate adaptation algorithms become critical to ensure high user-perceived QoE
by adapting bitrate levels according to network conditions.
A significant amount of research effort has recently been focused on understanding and designing better bitrate adaptation algorithms~\cite{yin2014toward,yin2015control,huang2015buffer,li2014probe,sun2016cs2p}.
While client-side bitrate adaptation is critical to ensure high QoE for a single player that treats available bandwidth as a given black box, as video traffic becomes predominant on the Internet it is increasingly likely that multiple video players will share
bottlenecks and compete for bandwidth in the network~\cite{akhshabi2012what, confused}. Such scenarios arise in home, commercial-building, and campus networks,
where multiple devices (e.g., HDTV, tablet, laptop, cell phone) connect to the Internet through a single WiFi router.
In these cases, in addition to single-player QoE, the multi-player QoE fairness becomes a critical issue.
While there have been several practical proposals to address the multiplayer QoE fairness problem by designing better player bitrate adaptation algorithms~\cite{conext12,li2014probe}
and network-assisted bandwidth allocation schemes~\cite{Cofano2016Design, Mansy2015Network, Georgopoulos2013Towards}, there are still many open questions in this space. For example, will the interaction among
different classes of bitrate adaptation algorithms lead to instability? Is centralized, in-network, or server-side control necessary to ensure multiplayer QoE fairness?
How can distributed control schemes with information exchange be designed to achieve QoE fairness?
We envision that this rich and broad problem space presents significant opportunities for control theory to provide insights to a real networking problem and
to guide real system design.
As such, our goal in this paper is to bring the problem space to light from a control theory perspective.
As a first step in this direction, we
formalize the multiplayer QoE fairness problem and address a subset of the key questions.
We start by building a formal mathematical model of the multiplayer joint bandwidth allocation and bitrate adaptation problem,
extending the single-player bitrate adaptation model from prior work~\cite{yin2014toward,yin2015control}. We first focus on the steady-state problem,
and cast the multiplayer fairness problem as the stability analysis of an equilibrium of a nonlinear dynamical system.
We derive sufficient conditions under which multiple players with the same or different bitrate adaptation policies
converge to QoE fairness with TCP-based bandwidth sharing at the bottleneck, and find that
TCP-based network bandwidth sharing is not sufficient to ensure QoE fairness, confirming the observation of a measurement study~\cite{confused} from a theoretical perspective.
This result calls for active, in-network support for better bandwidth allocation.
Given the recent development of smart routers such as Google OnHub router~\cite{googleonhub} and programmable OpenWrt~\cite{openwrt},
we envision that a router-based bandwidth allocation scheme is practical in the near future.
While recent proposals of router-assisted schemes are based on steady-state utility maximization,
we propose a nonlinear MPC-based router-assisted bandwidth allocation algorithm that directly models players as closed-loop dynamical systems.
We evaluate the proposed strategy using trace-driven simulations and find that the router-assisted control outperforms existing steady-state solutions
in both efficiency and fairness, by adaptively allocating more bandwidth to players that have high resolution
and insufficient buffer levels.
In addition to answering concrete key questions, we hope that this work provides insights into an exciting problem space that has received little attention from
the control community, and shows how control theory can potentially make a significant impact on guiding real system design.
\mypara{Summary of Contributions} The main contribution of this paper is summarized as follows:
\begin{itemize}
\item We bring the multiplayer QoE fairness problem to light from a control theory perspective and provide a
formal model to reason about existing approaches;
\item We provide theoretical analysis of the convergence of TCP-based bandwidth sharing schemes to QoE fairness;
\item We propose a nonlinear MPC-based router-assisted bandwidth allocation algorithm that outperforms existing approaches.
\end{itemize}
The rest of the paper is organized as follows: We begin by sketching the problem space of multiplayer QoE fairness
in Section \ref{sec:bg}. We describe system model and formulate QoE fairness optimization problem in Section \ref{sec:model}.
In Section \ref{sec:analysis} we provide analysis of TCP-based bandwidth sharing policies. We propose
router-based bandwidth allocation in Section \ref{sec:router} and evaluate the algorithm in Section \ref{sec:eval}.
Finally, we conclude the paper with future work in Section \ref{sec:concl}.
\section{Background and Related Work}\label{sec:bg}
In this section, we provide a high-level overview of HTTP-based adaptive video streaming and the multiplayer QoE fairness problem. We then sketch the classes of possible solutions and landscape of prior work, and identify the key questions that call for the use of control theoretic principles.
\mypara{HTTP-based adaptive video streaming}
Today many video streaming technologies use HTTP-based adaptive video streaming (Apple's HLS, Adobe's HDS, etc.). These video streaming protocols are standardized under Dynamic Adaptive Streaming over HTTP, or DASH. With DASH, each video is divided into multiple smaller segments or "chunks". Each chunk corresponds to a few seconds of play time and is encoded at multiple discrete bitrates, so that the adaptive video player can switch to a different bitrate, if necessary, after the chunk is downloaded.
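As an illustration of the chunked delivery model, the following sketch computes chunk sizes and download times; the chunk duration and bitrate ladder are assumed values, not taken from any particular DASH deployment:

```python
CHUNK_SEC = 4                                  # assumed play time per chunk (seconds)
BITRATES_KBPS = [350, 600, 1000, 2000, 3000]   # assumed bitrate ladder

def chunk_size_bits(bitrate_kbps, chunk_sec=CHUNK_SEC):
    # Constant-bitrate encoding: r * t bits hold t seconds of play time.
    return bitrate_kbps * 1000 * chunk_sec

def download_time_sec(bitrate_kbps, bandwidth_kbps):
    # Time to fetch one chunk over a link with the given throughput.
    return chunk_size_bits(bitrate_kbps) / (bandwidth_kbps * 1000)
```

When a chunk's download time is below its play time (i.e., the chosen bitrate is below the available bandwidth), the player's buffer grows; otherwise it drains, which is what drives the bitrate switching decision.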
\begin{figure}[t]
\centering
\includegraphics[width=220pt]{player.pdf}
\vspace{-0.1cm}
\caption{Abstract adaptive video player model}
\vspace{-0.5cm}
\label{fig_mp:player}
\end{figure}
Figure \ref{fig_mp:player} shows an abstract model of an adaptive video player. Video chunks are downloaded via HTTP to a local video buffer, and then played out to users. A \textit{bitrate controller} is responsible to choose the bitrate for each video chunk based on predicted available bandwidth and the state of the buffer, to maximize the user's QoE. A significant amount of work has been focused on the design of the bitrate controller, including rate-based algorithms~\cite{conext12, li2014probe}, buffer-based algorithms~\cite{huang2015buffer, spiteri2016bola}, and hybrid algorithms~\cite{tian2012towards, yin2015control}. In particular, recent work~\cite{yin2015control} provides a control-theoretic framework to understand existing approaches and proposes MPC-based bitrate controller for single-player QoE optimization.
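To make the controller classes concrete, here is a minimal buffer-based controller in the spirit of the buffer-based algorithms cited above; the reservoir and cushion thresholds are illustrative assumptions, not values from any cited design:

```python
def buffer_based_bitrate(buffer_sec, bitrates, reservoir=5.0, cushion=10.0):
    # Map the buffer level to a bitrate with a piecewise-linear rule:
    # below `reservoir` seconds pick the lowest rate, above
    # `reservoir + cushion` pick the highest, and interpolate in between.
    ladder = sorted(bitrates)
    if buffer_sec <= reservoir:
        return ladder[0]
    if buffer_sec >= reservoir + cushion:
        return ladder[-1]
    frac = (buffer_sec - reservoir) / cushion
    target = ladder[0] + frac * (ladder[-1] - ladder[0])
    # Choose the highest available rate not exceeding the linear target.
    return max(r for r in ladder if r <= target)
```

Rate-based and hybrid controllers replace or combine the buffer signal with a bandwidth prediction, but follow the same chunk-by-chunk decision structure.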
\mypara{Multiplayer QoE fairness} While single-player bitrate adaptation algorithms have been well studied, they treat available bandwidth as a given stochastic variable and maximize QoE for a single player without considering the impact on other players. However, when multiple players share a bottleneck in the network, the efficiency and fairness of QoE across the players become critical.
Note that multiplayer QoE fairness includes fairness in both the \textit{steady state} and the \textit{transient state}. For example, when an HDTV and a tablet share a bandwidth bottleneck in a home network, the HDTV should ideally get more bandwidth in steady state than the tablet, as it needs higher-quality video to match its higher resolution. On the other hand, a player with an empty buffer is expected to obtain more bandwidth than another with a full buffer sharing the same bottleneck, as it needs to accumulate buffer quickly so as to converge to the optimal bitrate and avoid rebuffering.
\mypara{Internet video delivery ecosystem} Different from single-player problem, the multiplayer QoE fairness can be affected by a broader range of factors. As such, we zoom out from the adaptive player model in Figure \ref{fig_mp:player} and look at how the internet video delivery ecosystem impacts the multiplayer QoE fairness.
\begin{figure}[t]
\centering
\includegraphics[width=240pt]{video_ecosystem.pdf}
\vspace{-0.1cm}
\caption{The internet video delivery ecosystem}
\vspace{-0.5cm}
\label{fig_mp:video_ecosystem}
\end{figure}
As shown in Figure \ref{fig_mp:video_ecosystem}, the Internet video delivery ecosystem consists of a variety of entities with different control capabilities and different objectives. Video source providers, such as Netflix and YouTube,
own the client players and can design client-side bitrate control to optimize user-perceived QoE; content delivery networks (CDNs), such as Akamai and Level3, place videos on CDN servers at the edge of the Internet and assign players to the
best servers for a video session;
Internet service providers (ISPs), such as Comcast and Verizon, control the bandwidth available to CDN servers and client players
according to agreements
with users; video quality optimizers, such as Conviva, employ a global view to provide centralized control of bitrate and CDN server selection for client players.
\mypara{Classes of potential solutions} Given the diverse control capabilities in the internet video delivery system,
there are several classes of solutions to achieve multiplayer QoE fairness: \textit{player-side}, \textit{in-network}, and \textit{server-side} solutions.
Player-side solutions, such as FESTIVE~\cite{conext12} and PANDA~\cite{li2014probe}, entail designing better bitrate adaptation algorithms for multiplayer QoE fairness. While they require only player algorithm changes and are thus easy to deploy, player-side solutions do not alter bandwidth allocation in the network and can suffer from suboptimal bandwidth allocation, such as the non-ideal TCP effect~\cite{confused} and interactions with uncooperative players and cross traffic~\cite{akhshabi2012what}.
In-network solutions, on the other hand, employ active bandwidth allocation in the network to achieve multiplayer QoE fairness. While the bottleneck can occur anywhere in the network, making such schemes difficult to deploy, there are several recent proposals, in particular router-based bandwidth allocation algorithms, that optimize steady-state QoE fairness when the router is the single bottleneck shared among players~\cite{Cofano2016Design, Mansy2015Network, Georgopoulos2013Towards}.
Alternatively, server-side solutions regard the server as a single point of control and allocate bandwidth to players~\cite{akhshabi2013server}. However, the actual bandwidth bottleneck can occur in the network instead of server and the computation cost is high when the number of players is too large.
\mypara{Key research questions} The broad problem space for multiplayer QoE fairness has posed a series of key research questions including:
\begin{enumerate}
\item What is the optimal approach within each class of solutions, and what are its fundamental limitations?
\item What is the fundamental tradeoff between different classes of solutions?
\item How to design the information exchange scheme to enable coordination of different entities in the video delivery ecosystems to achieve QoE fairness?
\end{enumerate}
As a first step toward tackling the broader problem, in this paper we develop a principled framework and answer a subset of the key questions, so as to shed light on the broader problem space and provide useful insights for future work. In the next section, we
begin developing a formal mathematical model of the multiplayer QoE fairness problem.
\section{Modeling}\label{sec:model}
In this section, we develop a mathematical model for multiplayer HTTP-based adaptive video streaming.
Figure \ref{fig_mp:model} provides an overview of the model.
\begin{figure}[t]
\centering
\includegraphics[width=250pt]{model.pdf}
\vspace{-0.5cm}
\caption{Modeling multiplayer joint bandwidth allocation and bitrate adaptation problem}
\vspace{-0.5cm}
\label{fig_mp:model}
\end{figure}
\mypara{Video streaming model}
We consider a discrete-time model with time horizon ${\cal K} = \{1, \cdots, K \}$ and a sampling period $\Delta T$. Consider a set ${\cal P}$ of $N$ video players sharing a single bottleneck link with bandwidth $W[k]$ at time $k$. Let $w_i[k]\in\mathbb{R}_+$ be the bandwidth available to player $i$ at time $k$; then:
\begin{equation}
\sum_{i\in{\cal P}} w_i[k]\leq W[k], \quad \forall k\in{\cal K}
\end{equation}
We assume this link is the only bottleneck along the Internet path from the video players to the servers.
Each video player streams video from some video server on the Internet via HTTP. The video is encoded in a set of bitrate levels ${\cal R}$. When downloading video, player $i\in{\cal P}$ is able to choose the bitrate $r_i[k]\in{\cal R}$ of the video at each time step $k$. In constant bitrate encoding, $r_i\times t$ bits of data need to be downloaded to get the video with $t$ seconds of play time.
Each player has a buffer to store downloaded yet unplayed video. Let $b_i[k]\in[0, \overline{B}_i]$ be the buffer level at the beginning of time step $k$, namely, the amount of play time of the video in the buffer. The buffer accumulates as new video is being downloaded, and drains as video is played out to users. The buffer dynamics of the player $i$ is formulated as follows:
\begin{equation}
b_i[k+1] = b_i[k]-\Delta T + \frac{w_i[k] \Delta T}{r_i[k]}
\end{equation}
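The buffer dynamics above can be sketched in a few lines of code; the clipping at the buffer capacity $\overline{B}_i$ and the numerical values below are our illustrative assumptions.

```python
def buffer_update(b, w, r, dt=2.0, b_max=30.0):
    """One step of the buffer dynamics: play out dt seconds of video,
    add w*dt/r seconds of newly downloaded video, clip at capacity."""
    return min(b_max, b - dt + (w * dt) / r)

# 10 s of buffer, 1.5 Mbps bandwidth, 1 Mbps bitrate: the player downloads
# 3 s of video while playing out 2 s, for a net gain of 1 s.
assert buffer_update(10.0, 1.5e6, 1.0e6) == 11.0
```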
\mypara{QoE objective} The objective of the adaptive video players is to maximize the quality-of-experience (QoE) of users, which is modeled as a linear combination of the following factors: 1) average video quality, 2) average quality change, 3) total rebuffering time, and 4) startup delay. For simplicity, in this paper we enforce that there are no rebuffering events, and we only consider the case where all players have started playback. As such, the QoE utility function $U_i: {\cal R}\times\mathbb{R}_+\times\mathbb{R}_+ \rightarrow\mathbb{R}$ of player $i$ is formulated as the average QoE of the video downloaded over the entire time horizon:
\begin{equation}
U_i = \frac{\sum_{k=1}^K \frac{w_i[k]}{r_i[k]}U_i^P[k]}{\sum_{k=1}^K \frac{w_i[k]}{r_i[k]}}
\end{equation}
where $U_i^P[k]$ is the QoE of the video downloaded in time $k$:
\begin{equation}
U_i^P[k] = q_i(r_i[k]) - \mu_i \left| q_i(r_i[k]) - q_i(r_i[k-1]) \right|
\end{equation}
Note that $q_i:{\cal R} \rightarrow \mathbb{R}$ is the function that maps bitrate to the video quality perceived by users. We assume $q_i(\cdot)$ to be positive, increasing and concave to model the diminishing return property. $\mu_i$ is the parameter that defines the trade-off between high average quality and less quality changes. The larger $\mu_i$ is, the more reluctant the user $i$ is to change the video quality.
\mypara{QoE fairness} Going from single player to multiplayer video streaming, a natural objective function would be the sum of utilities (QoE) of all users, also known as \textit{social welfare} or \textit{efficiency}, i.e., $\sum_{i\in{\cal P}} U_i$. However, in the context of multiplayer video streaming, QoE fairness among players becomes a critical issue as each player usually serves a different user yet they share the same bottleneck resource. As such, we consider the QoE fairness $F(U_1,\cdots, U_N)$ as the objective, where $F: \Pi_{i\in{\cal P}}{\cal U}_i\rightarrow \mathbb{R} $ is a general fairness measure~\cite{Lan2010Axiomatic}. Specifically, we consider a class of fairness measures known as $\alpha$-fairness~\cite{mo2000fair}, where:
\begin{equation}
F_{\alpha}(\mathbf{U}) =
\begin{cases}
\sum_{i\in{\cal P}}\frac{U_i^{1-\alpha}}{1-\alpha} \quad \ \alpha \geq 0, \alpha \neq 1 \\
\sum_{i\in{\cal P}} \log U_i \quad \alpha = 1
\end{cases}
\end{equation}
Note that $\alpha$-fairness is a general fairness measure that satisfies axioms 1, 2, 3, and 5 of~\cite{Lan2010Axiomatic}. If $\alpha = 1$, $\alpha$-fairness becomes \textit{proportional fairness}; as $\alpha \rightarrow \infty$, it approaches \textit{max-min fairness}.
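As a quick sketch (ours, with arbitrary utility values), the $\alpha$-fairness measure can be computed as:

```python
import math

def alpha_fairness(U, alpha):
    """alpha-fairness of a vector of positive utilities U."""
    if alpha == 1:
        return sum(math.log(u) for u in U)
    return sum(u ** (1 - alpha) / (1 - alpha) for u in U)

# alpha = 0 reduces to social welfare, i.e., the plain sum of utilities.
assert alpha_fairness([2.0, 3.0], alpha=0) == 5.0
```

Larger $\alpha$ penalizes unequal utility vectors more heavily, which is what the efficiency-fairness sweep in the evaluation exploits.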
\mypara{Multiplayer QoE maximization problem}
Now we are ready to formulate the multiplayer QoE maximization problem where optimal bitrates $(\mathbf{r}[k], k\in{\cal K})$ and bandwidth $(\mathbf{w}[k], k\in{\cal K}) $ of players are decided to maximize some QoE fairness measure $F(\mathbf{U})$, given the capacity of the bottleneck link, $(W[k], k\in{\cal K})$:
\begin{align}
\max \quad & F\left( U_1, \cdots, U_N \right)
\\
\mbox{over} \quad & \mathbf{r}[k], \mathbf{w}[k] \quad \mbox{given } W[k], k\in{\cal K} \\
s.t. \quad & \sum_{i\in{\cal P}} w_i[k] = W[k], \quad \forall k\in{\cal K} \label{eq:mabr_start}
\\
& b_i[k+1] = b_i[k] - \Delta T + \frac{w_i[k] \Delta T}{ r_i[k] }, \\
& \quad \quad \forall i\in{\cal P}, k=1,\cdots,K \notag \\
& \underline{B}_i \leq b_i[k] \leq \overline{B}_i, \quad \forall i\in{\cal P}, k\in{\cal K}
\label{eq:bufferlimit}\\
& w_i[k] \geq 0, r_i[k] \in{\cal R}\quad \forall i\in{\cal P}, k\in{\cal K}\label{eq:mabr_end}
\end{align}
Ideally, a centralized controller can decide both the bitrate $\mathbf{r}$ and the bandwidth $\mathbf{w}$ for all players to achieve QoE fairness, given the complete information of the system. However, the current practice can be interpreted as a distributed way to solve the problem by primal decomposition with no explicit message passing between players and router: Each player $i$ decides the bitrate of itself according to some \textit{bitrate adaptation policy} $r_i[k+1] = f(w_i[k], b_i[k+1])$, while the bottleneck link (conceptually) decides how to allocate available bandwidth according to some \textit{bandwidth allocation policy} $\mathbf{w}[k] = h(\mathbf{r}[k], \mathbf{b}[k])$. The design of optimal distributed solution is to find optimal $(h, f)$ pairs, i.e., $(h^*, f^*)$. Next, we discuss respectively the design of $h$ and $f$.
\mypara{Bandwidth allocation policies}
The players in ${\cal P}$ share a bottleneck link with total bandwidth $W[k]$, i.e., $\sum_{i\in{\cal P}}w_i[k] = W[k]$. A \textit{bandwidth allocation} policy $h: \mathbb{R}^n\rightarrow \mathbb{R}^n$ is a function that maps the bitrates $\mathbf{r}[k]$ to the bandwidth allocation vector $\mathbf{w}[k]$, and $h_i:\mathbb{R}^n\rightarrow\mathbb{R}$ denotes the component that maps $\mathbf{r}[k]$ to $w_i[k]$:
\begin{equation}
\mathbf{w}[k] = h(\mathbf{r}[k])
\end{equation}
Under ideal TCP, all players get an equal share of the total bandwidth, i.e., $w_1[k] = \cdots = w_N[k] = W[k]/N$. However, in practice TCP is not ideal, in the sense that players with larger bitrates get a larger share of the bandwidth due to discrete effects~\cite{confused}.
We make the following assumptions on the bandwidth allocation function under non-ideal TCP, based on the measurement data in \cite{confused}:
\begin{asmpt}
Under non-ideal TCP, the bandwidth allocation policy $h(\cdot)$ has the following properties:
\begin{enumerate}
\item If $r_i=r_j$, $h_i(\mathbf{r}) = h_j(\mathbf{r})$;
\item If $r_i>r_j$, $h_i(\mathbf{r}) > h_j(\mathbf{r})$;
\item $\frac{\partial h_i(\mathbf{r})}{\partial r_i} > 0$, $\frac{\partial h_i(\mathbf{r})}{\partial r_j} < 0$, $i\neq j$;
\item $\lim_{r_i \rightarrow \infty}h_i(\mathbf{r}) < W$, $\lim_{r_i \rightarrow 0}h_i(\mathbf{r}) > 0$;
\item $h(\cdot)$ is symmetric over $\mathbf{r}$ (does not depend on order of players).
\end{enumerate}
\end{asmpt}
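One concrete family satisfying all five properties (our illustration, not a form taken from~\cite{confused}) gives every player a bandwidth floor $w_{\min}$ and splits the remaining capacity in proportion to $r_i^{\theta}$ with $0 < \theta < 1$:

```python
def h(r, W, w_min=0.1, theta=0.5):
    """Hypothetical non-ideal-TCP allocation: a floor share w_min per
    player, plus a slice of the remaining capacity proportional to
    r_i**theta, so larger bitrates grab a larger (but bounded) share."""
    n = len(r)
    pool = W - n * w_min
    s = sum(x ** theta for x in r)
    return [w_min + pool * (x ** theta) / s for x in r]

shares = h([1.0, 4.0], W=3.0)
assert shares[1] > shares[0]          # higher bitrate, larger share
assert abs(sum(shares) - 3.0) < 1e-9  # the link is fully allocated
```

As $r_i \rightarrow \infty$ the share approaches $W - (n-1)w_{\min} < W$, and as $r_i \rightarrow 0$ it approaches $w_{\min} > 0$, matching property 4.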
\begin{lm}
The function $h(\cdot)$ has $1+kn$ fixed points, where $k\in\mathbb{N}$.
\end{lm}
\mypara{Bitrate adaptation policies} The bitrate adaptation policy of player $i$, $f_i(\cdot)$, maps the available bandwidth $w_i[k]$ and buffer level $b_i[k]$ to the chosen bitrate $r_i[k]$ so as to maximize the QoE of the player. Bitrate adaptation policies have been widely studied in both academia and industry, and each video streaming service has its own adaptation policy. There are two main classes of algorithms for choosing bitrates: rate-based (RB) and buffer-based (BB) controllers.
In a rate-based policy $RB(f_i)$, $r_i[k] = f_i(w_i[k-1])$, where $f_i:\mathbb{R}_+\rightarrow{\cal R}$ is an increasing function. We consider a special case $LRB(\alpha)$ where $f_i$ is an affine function $r_i[k] = \alpha w_i[k-1]$.
In a buffer-based policy $BB(f_i)$, $r_i[k] = f_i(b_i[k])$, where $f_i:\mathbb{R}_+\rightarrow{\cal R}$ is an increasing function. We also consider the special case $LBB(\alpha, \beta)$ of an affine $f$ function $r_i[k] = \alpha b_i[k] + \beta$.
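The two linear special cases can be sketched as follows, using the default coefficients from the evaluation section and omitting, for simplicity, the quantization to the discrete set ${\cal R}$:

```python
def lrb(w_prev, alpha=0.8):
    """LRB(alpha): bitrate tracks a fixed fraction of the bandwidth
    observed in the previous time step (rate-based)."""
    return alpha * w_prev

def lbb(b, alpha=100.0, beta=0.0):
    """LBB(alpha, beta): bitrate grows affinely with the current
    buffer level (buffer-based)."""
    return alpha * b + beta

assert lrb(1000.0) == 800.0   # kbps, illustrative
assert lbb(10.0) == 1000.0    # 10 s of buffer -> 1000 kbps
```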
Note that both RB and BB policies can be regarded as heuristic algorithms for maximizing QoE, which may lead to sub-optimal solutions. However, they remain of great interest to study, as they are widely deployed in real-world players such as Netflix and YouTube.
\section{Analysis of Fairness in Steady State}\label{sec:analysis}
\mypara{QoE fairness in the steady state} Note that an interesting special case of the multiplayer problem is when the system is in steady state, where the video quality and bandwidth of all players stay unchanged. Formally, we have the following definition:
\begin{mydef}
Given fixed total available bandwidth $W$, the multiplayer video streaming system is in steady state $(\mathbf{r}_0, \mathbf{w}_0)$ if for each player $i\in{\cal P}$:
\begin{enumerate}
\item Bitrate and bandwidth stay unchanged, i.e., $r_i[k] = r_{0i}$, $w_i[k] = w_{0i}$, $\forall k\in {\cal K}$;
\item Buffer level is non-decreasing, i.e., $b_i[k+1]\geq b_i[k]$, $\forall k\in{\cal K}$.
\end{enumerate}
\end{mydef}
Removing the inter-temporal constraints and the inter-temporal component of the objective function, we obtain the steady-state multiplayer QoE fairness problem, whose optimal solution
is denoted $(\mathbf{r}_0^*, \mathbf{w}_0^*)$:
\begin{align}
\max \quad & F\left( q_1(r_1), \cdots, q_N(r_N) \right)
\\
\mbox{over} \quad & \mathbf{r}, \mathbf{w} \quad \mbox{given } W \\
s.t. \quad & \sum_{i\in{\cal P}} w_i = W, \\
& r_i\leq w_i, \quad \forall i\in{\cal P} \\
& w_i \geq 0, r_i \in{\cal R}, \quad \forall i\in{\cal P}
\end{align}
Note that this problem is convex given that ${\cal R} = [\underline{R}, \overline{R}]$, and in the case that all players share the same $q_i = q$, the optimal solution is $(\mathbf{r}_0^*, \mathbf{w}_0^*): r_{0i}=w_{0i} = W/N$.
\mypara{Fairness of homogeneous RB players} We first consider the simplest case where all players are using the same rate-based algorithms.
\begin{thm}
If all players adopt $RB(f)$ bitrate adaptation policies, the following statements are true:
\begin{enumerate}
\item $(\mathbf{r}_0, \mathbf{w}_0): \ r_{i0} = f\left(\frac{W}{N}\right), w_{i0} = \frac{W}{N}$ is an equilibrium;
\item If $h\circ f$ is a contractive mapping, $(\mathbf{r}_0, \mathbf{w}_0)$ is globally asymptotically stable;
\item If $h\circ f$ is an expansive mapping, $(\mathbf{r}_0, \mathbf{w}_0)$ is unstable.
\end{enumerate}
\end{thm}
\mypara{Fairness of homogeneous BB players} We consider the case where all players adopt the same buffer-based bitrate adaptation policies and have the same QoE functions.
\begin{lm}
If all players adopt a buffer-based bitrate adaptation policy, $(\mathbf{r}_0, \mathbf{w}_0)$ is an equilibrium if and only if:
\begin{enumerate}
\item $\mathbf{r}_0 = \mathbf{w}_0$;
\item $h(\mathbf{r}_0) = \mathbf{r}_0$.
\end{enumerate}
\end{lm}
\begin{thm}
If all players adopt the $LBB(\alpha, \beta)$ bitrate adaptation policy, the following statements are true:
\begin{enumerate}
\item $(\mathbf{r}_0, \mathbf{w}_0): \ r_{i0} = w_{i0} = \frac{W}{N}$ is an equilibrium;
\item If $-\frac{1}{n} <\frac{\partial h_i(\mathbf{r}_0)}{\partial r_j}<0$, $\forall i\neq j$, then the equilibrium is locally asymptotically stable;
\item If $\frac{\partial h_i(\mathbf{r}_0)}{\partial r_j} < -\frac{1}{n}$, $\forall i\neq j$, then the equilibrium is unstable;
\end{enumerate}
\label{trm:2}
\end{thm}
Comparing the results for homogeneous RB and BB players, we find that the convergence of RB players depends on both the bandwidth allocation and bitrate adaptation policies, while the convergence of BB players depends only on the bandwidth allocation function. The key reason is that the bitrate decisions of BB players reflect the state of the player, i.e., the buffer level, while the bitrate decisions of RB players do not depend on the player's internal state.
\mypara{Implications on system design} From the analysis we know that, in the homogeneous-player case, the convergence of BB players depends only on the characteristics of the bandwidth allocation function $h(\cdot)$, while for RB players convergence depends on the composition of the bandwidth allocation function $h(\cdot)$ and the player adaptation algorithm $f(\cdot)$. This has the following key implications for system design:
First, the analysis confirms that the router-side bandwidth allocation function is critical to the convergence of both RB and BB players. Given that the player adaptation algorithms are designed by potentially different providers and may not be considering multiplayer effect, it could in turn be beneficial to redesign the bandwidth allocation function to ensure convergence with a larger range of player adaptation algorithms.
Second, the analysis provides theoretical guidance for the design of RB player adaptation algorithms and helps us better understand why existing designs work. Given that convergence depends on both bandwidth allocation and player adaptation, if TCP-based implicit bandwidth allocation is hard to change, we can design better player adaptation algorithms so that $h\circ f$ is contractive. One example of this principle is the design of FESTIVE~\cite{conext12}, whose $f(\cdot)$ function is concave to ensure that $h\circ f$ is contractive.
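The contraction condition can be checked numerically. Below, two players share a link under a hypothetical proportional-share allocation and a concave rate-based adaptation (both functional forms are our illustrative choices, not measured TCP behavior); iterating the closed loop from unequal bitrates contracts to the symmetric fixed point:

```python
def h(r, W=2.0, theta=0.5):
    # hypothetical share function: proportional to r_i**theta
    s = sum(x ** theta for x in r)
    return [W * (x ** theta) / s for x in r]

def f(w):
    # concave rate-based adaptation (FESTIVE-style design guideline)
    return w ** 0.9

def iterate(r, steps=200):
    for _ in range(steps):
        r = [f(w) for w in h(r)]   # closed loop: r -> h(r) -> f(h(r))
    return r

r = iterate([0.2, 1.8])
# h o f is contractive here, so bitrates converge to the equal split.
assert abs(r[0] - r[1]) < 1e-6
```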
\section{NMPC-based Router-Assisted Bandwidth Allocation for QoE Fairness}\label{sec:router}
Although fully distributed, the scheme analyzed in the previous section exposes the fundamental limitations of the
TCP-based bandwidth allocation scheme: First, not all $h(\cdot)$ lead to convergence to QoE fairness in steady state, even if players have the same QoE function $U(\cdot)$
and use the same class of bitrate adaptation policies $f(\cdot)$. Second, it cannot take into account different QoE goals and will not converge to fairness when players employ different classes of bitrate adaptation policies. As such, in order to achieve multiplayer QoE fairness, we want to design better player bitrate adaptation policies $f_i(\cdot)$ and a better bandwidth allocation policy $h(\cdot)$.
However, it is difficult to deploy or modify the bitrate adaptation policies of all video players, as they belong to different and competing video streaming services, e.g., Netflix, YouTube, and Amazon Video. Also, controlling the bandwidth from the player side is difficult, as the player runs on top of HTTP and cannot change the underlying TCP protocol. Instead, routers are in a good position to collect information on each player and video stream, and can technically control the bandwidth allocation. As smart routers become increasingly pervasive in the home entertainment industry (e.g., the Google OnHub router), we envision that a router-assisted bandwidth allocation scheme is more practical. Overall, we develop a hybrid router-assisted control for fairness: we keep the player adaptation policies $f_i(\cdot)$ unchanged and design the bandwidth allocation policy $h(\cdot)$ to achieve QoE fairness.
As routers have access to all video streams passing through them, we assume the router can obtain or learn the following information from each player $i\in{\cal P}$:
1) the current state of the player, including the bitrate $r_i$ and buffer level $b_i$; 2) the bitrate adaptation policy $f_i(\cdot)$; 3) the QoE function $U_i(\cdot)$.
Given this information, the router-side bandwidth allocation function $h(\cdot)$ is obtained implicitly by solving the following bandwidth allocation problem
in a moving-horizon manner, regarding each player as a closed-loop system.
\begin{align}
\max \quad & F\left( U_1, \cdots, U_N \right)
\notag\\
\mbox{over} \quad & \mathbf{w}[k] \quad \mbox{given } W[k], k\in{\cal K} \notag\\
s.t. \quad & \eqref{eq:mabr_start} - \eqref{eq:mabr_end} \notag\\
& r_i[k] = f_i(w_i[k-1], b_i[k]), \quad \forall i\in{\cal P}, k\in{\cal K} \notag
\end{align}
Note that as the dynamics of players are non-linear, the resulting controller is a non-linear MPC-based controller.
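A heavily simplified sketch of one controller step for two buffer-based players, with a one-step horizon and a grid search in place of the full non-linear program (the player and QoE models here are illustrative stand-ins):

```python
def nmpc_step(W, b, f, q, dt=2.0, grid=99):
    """Enumerate splits of the link capacity W, simulate each player's
    closed-loop buffer and bitrate one step ahead, and return the split
    maximizing total predicted quality."""
    best, best_w = float("-inf"), None
    for k in range(1, grid):
        w = (W * k / grid, W * (grid - k) / grid)
        total = 0.0
        for wi, bi in zip(w, b):
            ri = f(bi)                      # player's current bitrate
            bn = bi - dt + wi * dt / ri     # predicted buffer next step
            total += q(f(bn))               # predicted next-step quality
        if total > best:
            best, best_w = total, w
    return best_w

# With identical players but unequal buffers, the controller steers
# more bandwidth to the player with the emptier buffer.
w = nmpc_step(2.0e6, (4.0, 20.0), f=lambda b: 5.0e4 * b, q=lambda r: r ** 0.6)
assert w[0] > w[1]
```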
\section{Evaluation}\label{sec:eval}
\subsection{Setup}
\mypara{Evaluation framework} We employ a custom Matlab-based simulation framework.
The duration of each time step is 2s and the simulation framework works in a synchronized manner:
At the beginning of each 2s interval, the states of the players and the network are updated according to
the player dynamics and previously recorded traces. The bitrate and bandwidth decisions are then made simultaneously.
There are no events within a 2s interval. Note that this differs slightly from the single-player simulation
in the previous section, where player decisions are not synchronized, i.e., a player can change its bitrate at chunk boundaries, which
may not fall every 2s. We acknowledge this limitation and will
test in truly asynchronous settings in future work.
\mypara{Resource allocation schemes} We compare the following algorithms:
\begin{enumerate}
\item \textit{Baseline}: In the baseline scheme, the bandwidth controller knows the $q(\cdot)$ function of all players, and bandwidth is allocated by
solving the steady-state bandwidth allocation problem at the beginning of each time step.
Given its allocated bandwidth, each player then adopts an RB or BB adaptation strategy to choose its bitrate.
Similar schemes appear in recent work~\cite{Cofano2016Design, Mansy2015Network, Georgopoulos2013Towards}.
\item \textit{Router}: In the router-assisted scheme, the bandwidth controller knows the QoE functions, states (buffer level, bitrate), and bitrate adaptation
strategies of all players. The router-assisted bandwidth controller works in a moving-horizon manner:
at the beginning of each time step, the controller predicts the bandwidth over a fixed future horizon
and solves the router-assisted bandwidth allocation problem over that horizon to decide the bandwidth allocation.
We assume the bandwidth within the MPC horizon is given.
\item \textit{Centralized}: The centralized scheme entails calculating the optimal bandwidth allocation
and the bitrate decisions simultaneously by solving the joint optimization problem. We assume the controller knows the
entire future bandwidth. While less practical, the centralized controller provides us with an upper bound of the performance.
\end{enumerate}
\mypara{Metrics} We evaluate the algorithms using the following performance metrics:
\begin{enumerate}
\item \textit{$\alpha$-fairness}: We adopt the $\alpha$-fairness measure as it is widely used in prior work~\cite{Lan2010Axiomatic}. Specifically, we focus
on two special cases of $\alpha$-fairness: 1) $\alpha = 0$, corresponding to \textit{social welfare} (sum of QoE, or efficiency); 2) $\alpha = 1$, corresponding to
\textit{proportional fairness}. As $\alpha$-fairness can be decomposed into a component corresponding to efficiency and a fairness component that does
not depend on scale~\cite{Lan2010Axiomatic}, we also use social welfare and the normalized Jain's index as detailed metrics.
\item \textit{Social welfare}: Defined as sum of QoE of all players, i.e., $\sum_{i\in{\cal P}}U_i$.
\item \textit{Normalized Jain's index}: Defined as the Jain's index of normalized QoE, namely, Jain's index of $\mathbf{U}/(\sum_{i\in{\cal P}}U_i)$.
Jain's index is widely used in prior work to characterize the QoE fairness of players~\cite{conext12}; it is defined as $J(\mathbf{x}) = (\sum_i x_i)^2/(n \sum_i x_i^2)$.
\end{enumerate}
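For reference, Jain's index is a one-liner (the sample vectors are made up):

```python
def jain(x):
    """Jain's fairness index: 1 for a perfectly equal vector,
    1/n when a single player takes everything."""
    n = len(x)
    return sum(x) ** 2 / (n * sum(v * v for v in x))

assert jain([1.0, 1.0, 1.0]) == 1.0
assert abs(jain([1.0, 0.0, 0.0]) - 1.0 / 3.0) < 1e-12
```

Since $J$ is scale-invariant, applying it to $\mathbf{U}/(\sum_i U_i)$ yields the same value as applying it to $\mathbf{U}$ directly.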
\mypara{Throughput traces} We use the throughput trace from FCC MBA 2014 project~\cite{fccdataset}.
The dataset has more than 1 million sessions of throughput measurements, each containing six measurements of 5-second average throughput.
For experimental purposes, we concatenate the measurements from the same client IP and server IP and use the concatenated traces in
the experiments. To avoid trivial cases where the available bandwidth is too high or too low, we only use traces whose average throughput is
between 0 and 3 Mbps. Also, we multiply the throughput by the number of players in the experiment to eliminate scaling effects in multiplayer experiments.
\mypara{Player parameters} The time horizon is discretized with $\Delta T = 2$s.
For simplicity, we assume players can choose bitrates in the continuous range [200kbps, 3000kbps].
We set the buffer size to 30s.
For the QoE functions, we set $\mu = 1$ for all players.
By default, players have the video quality function $q(r) = r^p$ with $p = 0.6$, making
$q(\cdot)$ concave. Note that $q(\cdot)$ can be non-concave in general, e.g., one could use sigmoid-like functions as suggested
in \cite{Chiang2009Nonconvex}; however, this would make the objective non-convex.
We let RB players adopt $r[k] = 0.8\times w[k-1]$ and
BB players adopt $r[k] = 100\times b[k]$ by default.
\subsection{End-to-End Results}
In this section, we focus on the end-to-end comparison of the algorithms.
\mypara{Efficiency-vs-fairness tradeoff}
We first evaluate the algorithms in terms of normalized social welfare (sum of QoE) and normalized fairness measure (Jain's index).
We change $\alpha$ in $\alpha$-fairness in order to get different points on the curve.
Figure \ref{fig_mp:alpha_pareto} shows the Pareto front of the algorithms. There are three observations:
First, router-assisted control outperforms the baseline controller by 5--7\% in terms of social welfare at the same normalized Jain's index.
For example, at a normalized Jain's index of 0.8, the router-assisted controller achieves approximately 56\% of optimal, while the baseline controller
achieves only 50\%.
Second, the centralized controller significantly outperforms both the router-assisted and baseline controllers, with an advantage of over 15\%.
This is because the centralized controller has more flexibility in deciding the bitrate for each player, while the router-assisted controller
has no direct control over players' bitrates and can only steer them by controlling the bandwidth (for RB players) and, implicitly, the buffer level (for BB players).
Third, we observe a natural tradeoff between social welfare and fairness. According to Lan et al.~\cite{Lan2010Axiomatic},
$\alpha$-fairness can be factored into two components: efficiency (social welfare) and a fairness measure that satisfies the five axioms and does not depend on scale.
When $\alpha = 0$, both the centralized and router-assisted controllers optimize social welfare without considering
the fairness of players. As such, the social welfare at the leftmost point of the curve is at its maximum. However,
as $\alpha$ increases, more and more weight is put on the fairness of QoE, leading to increased fairness but
lower total QoE.
Note that this resonates with the observation in prior work~\cite{conext12} on the tradeoff between sum of bitrates and their fairness,
but our proposed algorithms are able to systematically adjust this tradeoff by selecting an appropriate $\alpha$.
\begin{figure}[t]
\centering
\includegraphics[width=200pt]{alpha_pareto.pdf}
\caption{Social welfare vs fairness tradeoff}
\vspace{-0.5cm}
\label{fig_mp:alpha_pareto}
\end{figure}
\subsection{Sensitivity Analysis}
Next, we conduct a sensitivity analysis on several key parameters to understand the robustness of the router-assisted controller
and why it outperforms existing methods.
\mypara{Impact of QoE functions} We first look at how the algorithms perform under different QoE functions in Figure \ref{fig_mp:nqoe_q}.
We use two BB players with the same parameters except for the video quality function $q(\cdot)$. We let $q(r) = r^p$ and vary
the exponent $p$: the larger $p$ is, the more sensitive the user-perceived quality is to bitrate. As shown in Figure \ref{fig_mp:bw_q}, both the baseline and router-assisted controllers allocate more bandwidth to the player with larger $p$, which requires
higher bitrate, as both controllers take the $q(\cdot)$ function into account in their optimization.
However, the router-assisted algorithm outperforms the baseline controller because it considers the player buffer dynamics and leads to faster convergence to the optimal bitrates.
In addition, the advantage of the router-assisted algorithm over the baseline controller grows as the video quality exponents $p$ of the players become
more diverse. Note that this confirms our observation that more bandwidth should be allocated to high-resolution devices in order to achieve QoE fairness.
\begin{figure}[t]
\centering
\subfloat[]
{
\includegraphics[width=115pt]{nqoe_q.pdf}
\label{fig_mp:nqoe_q}
}
\subfloat[]
{
\includegraphics[width=115pt]{bw_q.pdf}
\label{fig_mp:bw_q}
}
\caption{Impact of QoE functions}
\vspace{-0.5cm}
\end{figure}
\mypara{Impact of initial conditions} We further investigate how the players' initial buffer levels impact the performance.
Figure \ref{fig_mp:nqoe_init} shows the players' normalized QoE vs different initial conditions, while Figure \ref{fig_mp:bw_init}
shows the bandwidth allocated to players in baseline and router-assisted schemes. There are three key observations:
First, the router-assisted algorithm consistently outperforms the baseline solution, increasing the normalized QoE of each player.
Second, the router-assisted algorithm has a larger advantage over the baseline when the players' initial buffer levels are
more diverse. For instance, while the two schemes achieve similar performance when both players start with 2s of buffer,
both players' QoE are significantly improved when the initial buffer levels are 2s and 18s, respectively.
Third, an interesting observation from Figure \ref{fig_mp:bw_init} is that the baseline solution, which ignores the states and dynamics of the players,
allocates the same bandwidth to both players even when one player has much more buffer and needs less bandwidth,
whereas the router-assisted algorithm allocates less bandwidth to the player with a full buffer and more to the player with an empty buffer,
which must quickly accumulate buffer in order to stream at a high bitrate.
As such, the router-assisted algorithm achieves better performance because it takes into account the states and dynamics of the players,
which are critical to their QoE.
\begin{figure}[t]
\centering
\subfloat[]
{
\includegraphics[width=115pt]{nqoe_init.pdf}
\label{fig_mp:nqoe_init}
}
\subfloat[]
{
\includegraphics[width=115pt]{bw_init.pdf}
\label{fig_mp:bw_init}
}
\caption{Impact of initial conditions}
\vspace{-0.5cm}
\end{figure}
\mypara{Impact of bandwidth variability} Finally, we investigate how bandwidth variability impacts the performance.
To show that the proposed router-assisted algorithm is more robust to bandwidth variability than the baseline solution, zero-mean Gaussian white noise is added to every bandwidth trace. The variability in bandwidth increases with the standard deviation of the additive noise.
Figure \ref{fig_mp:fairness_variance} shows the mean fairness vs the standard deviation of the additive white noise. Mean fairness is calculated by averaging the results obtained after simulating both algorithms using 100 noisy bandwidth traces.
Furthermore, Figure \ref{fig_mp:fairness_variance} confirms that the router-assisted algorithm is more robust to bandwidth variability as its average fairness stays almost intact while the baseline solution shows a decreasing trend in average fairness as we increase the bandwidth variability.
This behavior is expected as the router-assisted algorithm uses an adaptive approach to allocate the bottleneck resources leading to better result in highly variable environment.
\begin{figure}[t]
\centering
\includegraphics[width=200pt]{fairness_vs_variance.pdf}
\caption{Impact of bandwidth variability}
\vspace{-0.5cm}
\label{fig_mp:fairness_variance}
\end{figure}
\subsection{Summary of Results}
Our main findings are summarized as follows:
\begin{enumerate}
\item Given a fixed normalized Jain's index, the router-assisted algorithm outperforms the baseline solution by 5--7\% in terms of social welfare (sum of QoE),
while centralized bandwidth allocation plus bitrate control achieves approximately 70\% of optimal, a 15+\% advantage over the other solutions.
\item Our sensitivity analysis shows that router-assisted algorithm has more advantage over baseline solution when the QoE functions and initial conditions
of players are more diverse. Moreover, router-assisted algorithm can allocate more bandwidth to players with less buffer while baseline solution fails
to take into account the states of the players.
\end{enumerate}
\section{Conclusion}\label{sec:concl}
Instead of regarding the available bandwidth as given by a black box, we consider the multiplayer interaction
in adaptive video streaming, namely the joint bandwidth allocation and bitrate adaptation problem in a
star network. We build a mathematical model and conduct a theoretical analysis of the
convergence of RB/BB players under non-ideal TCP assumptions. Given that
convergence is not guaranteed in general, we develop a router-assisted controller
that allocates bandwidth to players taking into account their bitrate adaptation
strategies and states. Using trace-driven simulations, we show that the proposed
router-assisted controller outperforms existing QoE-aware bandwidth allocation
algorithms, as it can adaptively allocate bandwidth to players with high resolution
and to those in more urgent need of accumulating buffer.
\bibliographystyle{abbrv}
\section{Background and Preliminaries}
A discounted reward Markov Decision Process (MDP) is characterized by a tuple $(S,A,p,r,\gamma)$ where $S:=\{1,2,\cdots,i,j,\cdots,M \}$ denotes the set of states, $A = \{a_1,\ldots,a_k\}$ denotes the set of actions, $p$ is the transition probability rule i.e., $p(j|i,a)$ denotes the probability of transition from state $i$ to state $j$ when action $a$ is chosen. Also, $r(i,a,j)$ denotes the single-stage reward obtained in state $i$ when action $a$ is chosen and the system transitions to state $j$. Finally, $0 \leq \gamma < 1$ denotes the discount factor. The objective in an MDP is to learn an optimal policy $\pi: S \xrightarrow{} A$, where $\pi(i)$ denotes the action to be taken in state $i$ that maximizes the discounted reward objective given by:
\begin{align} \label{eq-1}
\mathbb{E} \Big[ \sum_{t = 0}^{\infty} \gamma^{t}r(s_{t},\pi(s_{t}),s_{t+1}) \mid s_{0} = i \Big].
\end{align}
In \eqref{eq-1}, $s_{t}$ is the state of the system at time $t$ and $\mathbb{E}[\cdot]$ denotes the expectation over the sequence of states visited over time $t = 1,\ldots,\infty$. Let $V(\cdot)$ be the value function, with $V(i)$, the value of state $i$, representing the total discounted reward obtained starting from state $i$ and following the optimal policy $\pi$. The optimal value function can be obtained by solving the Bellman equation \cite{bertsekas1996neuro} given by:
\begin{align}\label{v-eq}
V(i)=\max_{a \in A} \Big \{ \sum_{j=1}^{M} p(j|i,a) \big{(}r(i,a,j)+\gamma V(j)\big{)} \Big \} , ~ \forall i \in S.
\end{align}
We assume here for simplicity that all actions are feasible in every state.
Value iteration is a popular scheme for obtaining the optimal policy and value function. It works as follows. An initial value function $V_0$ is selected, and a sequence $V_{n}, ~ n \geq 1$, is generated iteratively as:
\begin{align}\label{vi-eqn}
V_{n}(i) = \max_{a \in A} \Big\{ \displaystyle\sum_{j=1}^{M} p(j|i,a)\big{(} r(i,a,j)+\gamma V_{n-1}(j) \big{)} \Big\}, ~n \geq 1, \forall i \in S.
\end{align}
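The iteration \eqref{vi-eqn} translates directly into code; the sketch below stores the transitions as arrays $p[a,i,j] = p(j|i,a)$ and $r[a,i,j] = r(i,a,j)$ (our layout choice), and the toy MDP in the assertion is ours:

```python
import numpy as np

def value_iteration(p, r, gamma=0.9, tol=1e-10):
    """Iterate V <- TV until successive iterates are within tol.
    p[a, i, j] = p(j|i,a); r[a, i, j] = r(i,a,j)."""
    V = np.zeros(p.shape[1])
    while True:
        # Q[a, i] = sum_j p(j|i,a) * (r(i,a,j) + gamma * V(j))
        Q = np.einsum('aij,aij->ai', p, r + gamma * V[None, None, :])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

# Single state, single action, reward 1 each step: V = 1/(1 - gamma) = 10.
V = value_iteration(np.ones((1, 1, 1)), np.ones((1, 1, 1)))
assert abs(V[0] - 10.0) < 1e-6
```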
Let $\zeta$ denote the set of all bounded functions from $S$ to $\mathbb{R}$. Note that equation \eqref{v-eq} can be rewritten as:
\begin{align}\label{fp-vi}
V = TV,
\end{align}
where the operator $T: \zeta \xrightarrow{} \zeta$ is given by $$(TV)(i) = \max_{a \in A} \Big\{ r(i,a)+\gamma \displaystyle\sum_{j=1}^{M} p(j|i,a)V(j) \Big\},$$ and
$r(i,a)=\displaystyle\sum_{j=1}^{M} p(j|i,a)r(i,a,j)$ is the expected single-stage reward in state $i$ when action $a$ is chosen.
It is easy to see that $T$ is a contraction map with contraction factor $\gamma$, the discount factor. Therefore, from the contraction mapping theorem, it is clear that the value iteration scheme given by equation \eqref{vi-eqn} converges to the optimal value function i.e.,
\begin{align}
V = \lim_{n \xrightarrow{} \infty} V_{n} = TV.
\end{align}
Let $Q(i,a)$ with $(i,a) \in S \times A$, be defined as
\begin{align}\label{ql-eq}
Q(i,a) := r(i,a)+\gamma \sum_{j=1}^{M} p(j|i,a)V(j).
\end{align}
Here $Q(i,a)$ is the optimal Q-value function associated with state $i$ and action $a$. It denotes the total discounted reward obtained starting from state $i$ upon taking action $a$ and following the optimal policy in subsequent states. Then from \eqref{v-eq}, it is clear that
\begin{align}
V(i) = \max_{a \in A}Q(i,a).
\end{align}
Therefore, the equation \eqref{ql-eq} can be re-written as follows:
\begin{align}\label{ql-eq2}
Q(i,a) = r(i,a) + \gamma \sum_{j=1}^{M} p(j|i,a) \max_{b \in A}Q(j,b).
\end{align}
This is known as the Q-Bellman equation. As with \eqref{vi-eqn}, the Q-value iteration procedure for finding optimal Q-values works as follows: For $n \geq 1$,
\begin{align}\label{qvi-eqn}
Q_{n}(i,a) = r(i,a)+\gamma \displaystyle\sum_{j=1}^{M} p(j|i,a) \max_{b \in A} Q_{n-1}(j,b),
\end{align}
with $Q_0$ being the initial Q-value function that can be arbitrarily chosen.
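A minimal Python sketch of the Q-value iteration \eqref{qvi-eqn} on a small random MDP; the sizes and seed below are arbitrary illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 4, 3, 0.9

# Random transition kernel p(j|i,a) and expected rewards r(i,a).
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)             # make each p(.|i,a) a distribution
r = rng.random((n_states, n_actions))

def q_value_iteration(P, r, gamma, n_iter=500):
    Q = np.zeros_like(r)                       # arbitrary Q_0
    for _ in range(n_iter):
        # Q_n(i,a) = r(i,a) + gamma * sum_j p(j|i,a) * max_b Q_{n-1}(j,b)
        Q = r + gamma * P @ Q.max(axis=1)
    return Q

Q_star = q_value_iteration(P, r, gamma)
policy = Q_star.argmax(axis=1)                 # greedy optimal policy
```

The greedy `argmax` in the last line is exactly the policy-extraction step described next in the text.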
We obtain the optimal policy by letting
\begin{align}
\pi(i) = \arg \max_{a \in A}Q(i,a).
\end{align}
The corresponding optimal value function is then given by
\begin{align}
V(i) = \max_{a \in A}Q(i,a).
\end{align}
In this way, we obtain the optimal value function and optimal policy using the Q-value iteration scheme. Note that \eqref{qvi-eqn} is a non-linear system of equations, and Q-value iteration is a first-order method for solving it. In this work, our objective is to construct a modified Q-Bellman equation and apply the second-order Newton-Raphson technique to solve for the optimal value function. Note that we cannot apply a second-order method directly to equation \eqref{qvi-eqn}, as the $\max(\cdot)$ operator on its RHS is not differentiable. Therefore, we construct an approximate Q-Bellman operator that is differentiable and apply Newton's second-order technique to it. Before we propose our algorithm, we briefly discuss Newton's second-order technique \cite{ortega1970iterative} for solving a non-linear system of equations.
Consider a differentiable function $F:\mathbb{R}^{d} \rightarrow \mathbb{R}^{d}$. Suppose we are interested in finding a root of $F$, i.e., a point $x$ such that $F(x) = 0$. The Newton-Raphson method can be applied here: we select an initial point $x_0$ and then proceed as follows:
\begin{align}\label{nw-method}
x_n = x_{n-1} - J_F^{-1}(x_{n-1})F(x_{n-1}), ~ n \geq 1,
\end{align}
where $J_F(x)$ is the Jacobian of the function $F$ evaluated at the point $x$. Under suitable hypotheses, it can be shown that the procedure \eqref{nw-method} converges quadratically to a root of $F$.
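For concreteness, the iteration \eqref{nw-method} on a simple two-dimensional example; the function below is an arbitrary smooth map chosen only for illustration, with root $(1,1)$.

```python
import numpy as np

def F(x):
    # Arbitrary smooth map R^2 -> R^2; F(1, 1) = 0.
    return np.array([x[0]**2 - x[1], x[1]**3 - 1.0])

def J_F(x):
    # Jacobian of F at x.
    return np.array([[2.0 * x[0], -1.0],
                     [0.0, 3.0 * x[1]**2]])

def newton(F, J_F, x0, n_iter=20):
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        # x_n = x_{n-1} - J_F(x_{n-1})^{-1} F(x_{n-1}), via a linear solve
        x = x - np.linalg.solve(J_F(x), F(x))
    return x

root = newton(F, J_F, x0=[2.0, 2.0])
```

Note that the update is implemented as a linear solve rather than an explicit matrix inversion, which is the standard numerically stable form of the Newton step.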
In the next section, we construct a function $F$ for our problem and apply the Newton-Raphson method to find the optimal value function and policy pair.
\section{Conclusion}
In this work, we propose a second-order value iteration scheme based on the Newton-Raphson method for faster convergence to a near-optimal value function in discounted reward MDP problems. The first step involves constructing a differentiable Bellman equation by approximating the $\max(\cdot)$ operator. We then apply the second-order Newton method to arrive at the proposed algorithm. We prove bounds on the approximation error and show faster convergence to the optimal value function. In the future, we would like to develop model-free, asynchronous variants of the SOVI algorithm. This can be achieved by applying stochastic approximation techniques to our proposed second-order value iteration.
\section{Convergence Analysis}\label{conv-sec}
In this section, we present the convergence analysis of our algorithm. Note that the norm considered in the following analysis is the max-norm, i.e., $\|x\|:=\max_{1\leq i \leq d}|x_{i}|.$
\begin{lemma}
Suppose $f: \mathbb{R}^{d} \rightarrow \mathbb{R}$ and $f(x)=\displaystyle\max\{x_{1},x_{2},\cdots,x_{d}\}.$ Let $g_N: \mathbb{R}^{d} \rightarrow \mathbb{R}$ be defined as follows.
$g_N(x)=\frac{1}{N}\log\displaystyle\sum_{i=1}^{d}e^{Nx_{i}}. $ Then $\displaystyle\sup_{x \in \mathbb{R}^{d}}\big{|}f(x)-g_{N}(x)\big{|} \longrightarrow 0$ as $N \longrightarrow \infty.$
\end{lemma}
\noindent\hspace{2em}{\itshape Proof: }
Let $x_{i_{*}}=\max\{x_{1},x_{2},\cdots,x_{d}\}. $ Now
\begin{align*}
\big{|}f(x)-g_{N}(x)\big{|}
& =\bigg{|}\max\{x_{1},x_{2},\cdots,x_{d}\}-\frac{1}{N}\log\displaystyle\sum_{i=1}^{d}e^{Nx_{i}}\bigg{|}\\
& = \bigg{|}x_{i_{*}}-\frac{1}{N}\log\Big{[}\Big{(}\displaystyle\sum_{i=1}^{d}e^{N(x_{i}-x_{i_{*}})}\Big{)}e^{Nx_{i_{*}}}\Big{]}\bigg{|}\\
& = \bigg{|}\frac{1}{N}\log\bigg{(}\displaystyle\sum_{i=1}^{d}e^{N(x_{i}-x_{i_{*}})}\bigg{)}\bigg{|}\\
& \leq \bigg{|}\frac{\log d}{N}\bigg{|} \rightarrow 0 \text{ as } N \rightarrow \infty.
\end{align*}
Note that the inequality follows from the definition of $x_{i_{*}}=\max\{x_{1},x_{2},\cdots,x_{d}\}$ and the fact that $e^{N(x_{i}-x_{i_{*}})} \leq 1$ for $1 \leq i \leq d$ (since $x_i \leq x_{i_{*}}$ for all $i$).
Hence $\displaystyle\sup_{x \in \mathbb{R}^{d}}\big{|}f(x)-g_{N}(x)\big{|} \rightarrow 0$ as $N \rightarrow \infty$, at rate $\frac{1}{N}.$
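The bound in the lemma is easy to check numerically. In the sketch below, the test vector is arbitrary; the helper `g_N` computes the smooth max in the standard numerically stable way (shifting by the maximum before exponentiating).

```python
import numpy as np

def g_N(x, N):
    # (1/N) * log(sum_i exp(N * x_i)), computed stably by factoring out max(x).
    x = np.asarray(x, dtype=float)
    m = x.max()
    return m + np.log(np.exp(N * (x - m)).sum()) / N

x = np.array([0.3, -1.2, 0.7, 0.7])    # arbitrary test vector, d = 4
d = len(x)
for N in [1, 10, 100, 1000]:
    err = abs(g_N(x, N) - x.max())
    # Lemma: 0 <= g_N(x) - max(x) <= (log d) / N
    assert 0.0 <= err <= np.log(d) / N
```

As the lemma predicts, the gap shrinks like $\frac{1}{N}$ as $N$ grows.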
\begin{lemma}
\label{contraction-l2}
Let $U:\mathbb{R}^{|S| \times |A|} \rightarrow \mathbb{R}^{|S| \times |A|}$ be defined as follows.
$$(UQ)(i,a)=r(i,a)+\gamma\mathbb{E}\Big{[}\frac{1}{N}\log\displaystyle\sum_{b=1}^{|A|}e^{NQ(j,b)}\Big{]},$$ where the expectation is taken w.r.t.\ the law of the next state $j \sim p(\cdot|i,a)$. Then $U$ is a max-norm contraction.
\end{lemma}
\noindent\hspace{2em}{\itshape Proof: }
For $P,Q \in \mathbb{R}^{|S| \times |A|}$, we have
\begin{align*}
\big{|}(UP)(i,a)-(UQ)(i,a) \big{|}
& = \gamma\bigg{|}\mathbb{E}\Big{[}\frac{1}{N}\log\displaystyle\sum_{b=1}^{|A|}e^{NP(j,b)}-\frac{1}{N}\log\displaystyle\sum_{b=1}^{|A|}e^{NQ(j,b)}\Big{]}\bigg{|}\\
& = \gamma\Bigg{|}\mathbb{E}\Bigg{[}\frac{1}{\displaystyle\sum_{b \in A}e^{N\xi(j,b)}} \Big{(}e^{N\xi(j,.)}\Big{)}^T\Big{(}P(j,.)-Q(j,.)\Big{)}\Bigg{]}\Bigg{|}\\
& \leq \gamma\mathbb{E}\Bigg{[}\Bigg{|}\frac{1}{\displaystyle\sum_{b \in A}e^{N\xi(j,b)}} \Big{(}e^{N\xi(j,.)}\Big{)}^T\Big{(}P(j,.)-Q(j,.)\Big{)}\Bigg{|}\Bigg{]}\\
& \leq \gamma\mathbb{E}\big{[}\max_{b}|P(j,b)-Q(j,b)| \big{]} \\
& \leq \gamma \max_{(i,a)}|P(i,a)-Q(i,a)|=\gamma \|P-Q\|. \\
\text{So } \|UP-UQ\| & = \max_{(i,a)}\big{|}(UP)(i,a)-(UQ)(i,a) \big{|}\leq \gamma \|P-Q\|.
\end{align*}
Hence $U$ is a contraction with contraction factor $\gamma$. Here the second equality follows from an application of the multivariate mean value theorem, where $\xi(j,.)$ lies on the line segment joining $P(j,.)$ and $Q(j,.)$.
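The contraction property can also be sanity-checked numerically. The random MDP instance and the pairs of Q-vectors below are arbitrary assumptions for this check.

```python
import numpy as np

rng = np.random.default_rng(3)
nS, nA, gamma, N = 4, 3, 0.8, 10.0

P = rng.random((nS, nA, nS))
P /= P.sum(axis=2, keepdims=True)            # p(j|i,a)
r = rng.random((nS, nA))                     # expected rewards r(i,a)

def U(Q):
    # (UQ)(i,a) = r(i,a) + gamma * sum_j p(j|i,a) * (1/N) log sum_b e^{N Q(j,b)}
    m = Q.max(axis=1)
    lse = m + np.log(np.exp(N * (Q - m[:, None])).sum(axis=1)) / N
    return r + gamma * P @ lse

for _ in range(100):
    Qa = rng.normal(size=(nS, nA))
    Qb = rng.normal(size=(nS, nA))
    lhs = np.max(np.abs(U(Qa) - U(Qb)))      # ||U Qa - U Qb||_inf
    rhs = gamma * np.max(np.abs(Qa - Qb))    # gamma * ||Qa - Qb||_inf
    assert lhs <= rhs + 1e-12
```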
\begin{lemma} \label{l3}
Let $Q, Q'$ be fixed points of $T$ and $U$ respectively.
Then $\|Q-Q'\|\leq \frac{\gamma}{N(1-\gamma)} \log(|A|).$
\end{lemma}
\noindent\hspace{2em}{\itshape Proof: }
Since $Q$ is the unique fixed point of $T$, we have
$$Q(s,a)=r(s,a)+\gamma \mathbb{E}\bigg{[}\displaystyle\max_{b \in A} Q(Z,b)\bigg{]}.$$
Similarly, $Q'$ is the unique fixed point of $U$ (unique by virtue of Lemma \ref{contraction-l2}), so
$$Q'(s,a)=r(s,a)+ \gamma \mathbb{E}\bigg{[}\frac{1}{N}\log\displaystyle\sum_{b \in A}
e^{NQ'(Z,b)}\bigg{]},$$
where the expectation above is taken w.r.t the law of the `next' state $Z$, i.e., $p(.|s,a)$.
Now
\begin{align*}
& \big{|}Q(s,a)-Q'(s,a)\big{|} \\
= & \Bigg{|}\gamma \mathbb{E}\bigg{[}\max_{b \in A} Q(Z,b)-\frac{1}{N}\log\displaystyle\sum_{b \in A}
e^{NQ'(Z,b)}\bigg{]}\Bigg{|}\\
= & \gamma \Bigg{|} \mathbb{E}\bigg{[}\max_{b \in A}Q(Z,b)-\max_{b \in A}Q'(Z,b)-\frac{1}{N}\log\Big{(}\displaystyle\sum_{b \in A}e^{N\big{(}Q'(Z,b)-Q'(Z,c)\big{)}}\Big{)}
\bigg{]} \Bigg{|} \\
\leq & \gamma \mathbb{E}\Bigg{[}\bigg{|}\max_{b \in A}Q(Z,b)-\max_{b \in A}Q'(Z,b)-\frac{1}{N}\log\Big{(}\displaystyle\sum_{b \in A}e^{N\big{(}Q'(Z,b)-Q'(Z,c)\big{)}}\Big{)}
\bigg{|}\Bigg{]} \\
\leq & \gamma \mathbb{E}\Bigg{[}\bigg{|}\max_{b \in A}Q(Z,b)-\max_{b \in A}Q'(Z,b)\bigg{|}+\bigg{|}\frac{1}{N}\log\Big{(}\displaystyle\sum_{b \in A}e^{N\big{(}Q'(Z,b)-Q'(Z,c)\big{)}}\Big{)}
\bigg{|}\Bigg{]} \\
\leq & \gamma \|Q-Q'\|+\frac{\gamma}{N}\log|A|. \\
\end{align*}
\begin{align*}
& \text{Hence }\|Q-Q'\| \leq \gamma \|Q-Q'\|+\frac{\gamma}{N}\log|A| \\
& \implies \|Q-Q'\| \leq \frac{\gamma}{N(1-\gamma)} \log|A|.
\end{align*}
Here the second equality follows from the choice $c= \arg\max_{b \in A}Q'(Z,b)$, i.e., $Q'(Z,c)=\max_{b \in A}Q'(Z,b).$ This completes the proof. This lemma shows that the approximation error $\|Q-Q'\| \rightarrow 0$ as $N \rightarrow \infty.$
We now invoke the following theorem from \cite{ortega1970iterative} to show the global convergence of our second order value iteration.
\begin{theorem}[Global Newton Theorem]
\label{GNT}
Suppose that $F: \mathbb{R}^{d}\rightarrow \mathbb{R}^{d}$ is continuous, component-wise concave on $\mathbb{R}^{d}$ and differentiable, and that $F'(x)$ is non-singular with $F'(x)^{-1} \geq 0$, i.e., each entry of $F'(x)^{-1}$ is non-negative, for all $x \in \mathbb{R}^{d}.$ Assume, further, that $F(x)=0$ has a unique solution $x^{*}$ and that $F'$ is continuous on $\mathbb{R}^{d}.$ Then for any $x_{0} \in \mathbb{R}^{d}$, the Newton iterates given by equation \eqref{nw-method} converge to $x^{*}.$
\end{theorem}
\begin{theorem}
Let $Q'$ be the fixed point of the operator $U$.
SOVI converges to $Q'$ for any choice of initial point $Q_{0}.$
\end{theorem}
\noindent\hspace{2em}{\itshape Proof: }
SOVI computes the zeros of the equation $Q-UQ=0.$
So we appeal to Theorem \ref{GNT} with the choice of $F$ as $I-U:\mathbb{R}^{|S|\times|A|}\rightarrow \mathbb{R}^{|S|\times|A|}$ where $(I-U)(Q)(i,a)=Q(i,a)-r(i,a)-\gamma \mathbb{E}\bigg{[}\frac{1}{N}\log\displaystyle\sum_{b \in A}
e^{NQ(Z,b)}\bigg{]}.$
It is enough to verify the hypothesis of Theorem \ref{GNT} for $I-U.$ Clearly $I-U$ is continuous, component-wise concave and differentiable with $(I-U)'(Q)= I-J_{U}(Q)$ where
\begin{align*}
J_{U}(Q)((i,a),(k,c))=\gamma p(k|i,a) \frac{e^{NQ(k,c)}}{\displaystyle\sum_{b \in A}e^{NQ(k,b)}}
\end{align*}
is $|S||A| \times |S||A|$ dimensional matrix with $1\leq i,k \leq |S|$ and $1 \leq a,c \leq |A|$. Now observe that
\begin{itemize}
\item each entry in $(i,a)^{\text{th}}$ row is non-negative.
\item the sum of the entries in $(i,a)^{\text{th}}$ row is
\begin{align*}
\displaystyle\sum^{|S|}_{k=1}\displaystyle\sum^{|A|}_{c=1} \gamma p(k|i,a) \frac{e^{NQ(k,c)}}{\displaystyle\sum_{b \in A}e^{NQ(k,b)}}=\gamma.
\end{align*}
\end{itemize}
So $J_{U}(Q)=\gamma \Phi$ for an $|S||A| \times |S||A|$ dimensional transition probability matrix $\Phi$. It is easy to see that $(I-J_{U}(Q))^{-1}$ exists (see Remark \ref{r3}) with the power series expansion
\begin{align*}
\big{(}I-J_{U}(Q)\big{)}^{-1}= \displaystyle\sum^{\infty}_{r=0} \gamma^{r} \Phi^{r}.
\end{align*}
Moreover, since each entry in $\Phi$ is non-negative, $\Phi \geq0$. Hence $\big{(}I-J_{U}(Q)\big{)}^{-1}\geq 0.$ It is clear from lemma \ref{contraction-l2} that the equation $Q-UQ=0$ has a unique solution. This completes the proof.
\begin{remark}
\label{r3}
$I-J_{U}(Q)=I-\gamma\Phi$ for a transition probability matrix $\Phi$. Every eigenvalue $\mu$ of $\Phi$ satisfies $|\mu|\leq 1$, so every eigenvalue $\lambda = 1-\gamma\mu$ of $I-\gamma\Phi$ satisfies $|\lambda-1|\leq\gamma<1$; in particular, $\lambda \neq 0$, i.e., $0 \notin \sigma(I-J_{U}(Q))$, the spectrum of $I-J_{U}(Q)$. Hence for any $Q$, $\big{(}I-J_{U}(Q)\big{)}^{-1}$ exists, and since $\|\gamma\Phi\|=\gamma<1$, the Neumann series gives that the matrix norm of $\big{(}I-J_{U}(Q)\big{)}^{-1}$ is at most $\frac{1}{1-\gamma}.$
\end{remark}
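Remark \ref{r3} can be verified numerically for a random row-stochastic matrix; the sizes, seed, and discount factor below are arbitrary assumptions for this check.

```python
import numpy as np

rng = np.random.default_rng(1)
n, gamma = 6, 0.9

Phi = rng.random((n, n))
Phi /= Phi.sum(axis=1, keepdims=True)        # row-stochastic: each row sums to 1

M = np.eye(n) - gamma * Phi                  # I - gamma * Phi
M_inv = np.linalg.inv(M)

# Induced max-norm (infinity norm) of the inverse is at most 1/(1 - gamma).
norm_inf = np.abs(M_inv).sum(axis=1).max()
assert norm_inf <= 1.0 / (1.0 - gamma) + 1e-10

# In fact, since Phi is stochastic, (I - gamma*Phi)^{-1} maps the all-ones
# vector to (1/(1-gamma)) times the all-ones vector.
ones = np.ones(n)
assert np.allclose(M_inv @ ones, ones / (1.0 - gamma))
```

Since the Neumann series $\sum_r \gamma^r \Phi^r$ has non-negative terms with row sums $\gamma^r$, the infinity-norm bound is in fact attained with equality here.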
\begin{theorem}
SOVI has second order convergence.
\end{theorem}
\noindent\hspace{2em}{\itshape Proof: }
Suppose $F(Q)=Q-UQ.$ Let $Q^*$ be the unique solution of $F(Q)=0$ and $\{Q_{n}\}$ be the sequence of iterates generated by SOVI.
Define $e_{n}=\|Q_{n}-Q^*\|$ and $G(Q)=Q-F'(Q)^{-1}F(Q)$. As $Q^*$ satisfies $Q^* = UQ^*$, it is a fixed point of $G$. It is enough to show that $e_{n+1}\leq ke^2_{n}$ for a constant $k.$
We could show for our particular choice of $F$ that
\begin{align*}
& \|F'(Q)-F'(Q^*)\| \leq \|Q-Q^*\| \\
\implies & \|F(Q)-F(Q^*)-F'(Q^*)(Q-Q^*)\|\leq \frac{1}{2} \|Q-Q^*\|^2 \\
& \text{(by an application of the fundamental theorem of calculus).}
\end{align*}
Utilizing the above properties we have
\begin{align*}
e_{n+1} = & \|Q_{n+1}-Q^*\| \\
= & \|G(Q_{n})-G(Q^*)\| \\
= & \|Q_{n}-F'(Q_{n})^{-1}F(Q_{n})- Q^*\|\\
\leq & \big{\|}F'(Q_{n})^{-1}\big{[}F(Q_{n})-F(Q^*)-F'(Q^*)(Q_{n}-Q^*)\big{]}\big{\|} \\ &+\big{\|}F'(Q_{n})^{-1}\big{[}F'(Q_{n})-F'(Q^*)\big{]}(Q_{n}-Q^*)\big{\|} \\
\leq & \|F'(Q_{n})^{-1}\|\big{\|}\big{[}F(Q_{n})-F(Q^*)-F'(Q^*)(Q_{n}-Q^*)\big{]}\big{\|} \\ &+\|F'(Q_{n})^{-1}\|\big{\|}\big{[}F'(Q_{n})-F'(Q^*)\big{]}(Q_{n}-Q^*)\big{\|} \\
\leq & \frac{3}{2}\beta \|Q_{n}-Q^*\|^2 \\
= & k e^2_{n},
\end{align*}
where $\beta=\displaystyle\sup_{Q}\|F'(Q)^{-1}\| \leq \frac{c_{0}}{1-\gamma}$ for some constant $c_{0}>0$ and $k=\frac{3}{2}\beta$.
\section{Experiments}
In this section, we describe the experimental results of our SOVI algorithm. For this purpose, we use the Python MDP toolbox \cite{pymdp} for generating the MDPs and computing value iteration. We generate $100$ independent MDPs, each with $10$ states and $5$ actions, and we set the discount factor to $0.9$ in each case. We run both the standard value iteration and the SOVI algorithm for $50$ iterations. The initial Q-values of the algorithms are assigned random integers between 10 and 20 (which are far away from the optimal value function). We use the average error as the criterion for comparison between SOVI and the standard Value Iteration (VI) algorithm. The average error at iteration $i$, denoted $E(i)$, is calculated as follows: for each of the $100$ runs, we compute the max-norm difference between the optimal value function and the value function at iteration $i$, and then take the average over these runs. That is,
\begin{align*}
E(i) = \frac{1}{100} \sum_{k=1}^{100} \|V_k^{*} - \max_{a}Q_{k}^{i}(.,a)\|_{\infty},
\end{align*}
where $V_k^*$ is the optimal value function of MDP $k$ and $Q_{k}^i(.,.)$ is the Q-value function of MDP $k$ at iteration $i$. Recall that SOVI with a fixed $N$ gives a near-optimal value function; its advantage is that the Q-value iterates converge to the near-optimal Q-values rapidly. This can be observed in Figure \ref{exp-fig}, where we compare the performance of standard VI and SOVI with $N = 5,10,15,20,30,35$. The code for our experiments is available at the anonymous link: \url{https://github.com/second-order-value-iteration/SOVI.git}.
\begin{figure}[htb]
\begin{minipage}[c]{0.5\textwidth}
\input{table.tex}
\caption{Performance comparison}
\label{table1}
\end{minipage}
\begin{minipage}[c]{0.5\textwidth}
\includegraphics[width=0.98\textwidth]{result_SOVI.png}
\caption{Performance of SOVI}
\label{exp-fig}
\end{minipage}
\end{figure}
We now discuss the results shown in Figure \ref{exp-fig}. For the standard value iteration, the average error decreases as the number of iterations increases, but at a much slower rate. For the SOVI algorithm, on the other hand, the average error decreases quickly and then stays almost constant; in fact, we see that SOVI takes on average $3$ iterations to converge. This is due to the second-order convergence of Newton's method. Moreover, $N=5$ has the maximum average error and $N=35$ the least, as a larger $N$ provides a better approximation of the $\max(\cdot)$ function.
In Table \ref{table1}, we report the average error at the end of $50$ iterations for all the algorithms. We observe that SOVI with $N=30$ and $N=35$ has lower error at the end of $50$ iterations compared to the standard VI. Therefore, we conclude that SOVI converges rapidly to a near-optimal solution when run for a finite number of iterations, respecting the error bounds derived in Lemma \ref{l3} of Section \ref{conv-sec}. Moreover, the higher the value of $N$, the smaller the error between the SOVI value function and the optimal value function.
\section{Introduction}
In a discounted reward Markov decision process (MDP) \cite{bertsekas1996neuro}, the objective is to maximize the expected cumulative discounted reward. Reinforcement Learning (RL) deals with algorithms for solving an MDP when the model information (probability transition matrix and reward function) is unknown; RL algorithms instead make use of state and reward samples to compute the optimal value function and policy. Owing to the success of deep learning \cite{lecun2015deep}, RL algorithms in combination with deep neural networks have been successfully deployed to solve many real-world problems and games \cite{mnih2015human,mnih2013playing}. However, there is ongoing research on improving both the sample efficiency and the convergence of RL algorithms \cite{haarnoja2018soft,hansen2018fast}.
Most of the RL algorithms can be viewed as stochastic approximation \cite{borkar2009stochastic} variants of the Bellman equation \cite{bellman1966dynamic} in MDP. For example, the popular Q-learning algorithm \cite{watkins1992q} is a stochastic fixed point iteration to solve the Q-Bellman equation. Therefore, we believe that in order to improve the performance of RL algorithms, a promising approach would be to propose faster algorithms for solving MDPs when the model information is known. In this work, we propose a second order value iteration method for computing the optimal value function and policy when the model information is known. We first propose a modified Q-Bellman equation and then apply the second order Newton-Raphson method to obtain our algorithm.
The issue with directly applying the Newton-Raphson method on standard Q-Bellman equation is that the equation has a $\max(.)$ operator in it, which is not differentiable. Therefore, we approximate the $\max$ operator by a smooth function $g_N$, where $N$ is a parameter. This approximation allows us to apply the second order method thereby ensuring faster rate of convergence.
Note that the solution obtained by our second order technique on the modified Q-Bellman equation may be different from the actual solution because of the approximation of the $\max$ operator by $g_N$. However, we show that our proposed algorithm converges to the actual solution as $N \xrightarrow{} \infty$.
We show through numerical experiments that given a finite number of iterations, our proposed algorithm computes a solution that is closer to the actual solution when compared with that obtained by regular value iteration. Therefore, we show that our proposed algorithm provides a better near-optimal solution when compared with that provided by value iteration.
We now summarize the contributions of our paper:
\begin{itemize}
\item We construct a modified Q-Bellman equation through an approximation of the $\max$ operator and show that the contraction factor of this modified Q-Bellman operator is still the discount factor (as with the regular Q-Bellman Operator).
\item We propose a second order Q-value iteration algorithm based on the Newton-Raphson method.
\item We prove the global convergence of our Q-value iteration algorithm and show its second-order convergence.
\item We derive an error bound between the value function obtained by our proposed method and the actual value function and show that the error vanishes asymptotically.
\item Through experimental evaluation, we further confirm that our proposed technique provides a better near-optimal solution compared to that of value iteration when run for the same finite number of iterations.
\end{itemize}
\section{Proposed Algorithm}
We construct our modified Q-Bellman operator as follows. We first approximate the $\max$ operator, i.e., the function $f(x)=\max^{d}_{i=1}{x_{i}}$, where $x = (x_1,\ldots,x_d)$, with
$g_{N}(x)=\frac{1}{N}\log\displaystyle\sum^{d}_{i=1}e^{Nx_{i}}.$ Equation \eqref{qvi-eqn} can then be rewritten as follows:
\begin{align}\label{mqvi-eqn}
Q_{n}(i,a) = r(i,a)+\gamma \displaystyle\sum_{j=1}^{M} p(j|i,a)\frac{1}{N}\log\displaystyle\sum^{|A|}_{b=1}e^{NQ_{n-1}(j,b)} , ~n \geq 1,
\end{align}
starting with an initial $Q_0$ (arbitrarily chosen in general).
Therefore our modified Bellman operator $U:\mathbb{R}^{|S| \times |A|} \xrightarrow{} \mathbb{R}^{|S| \times |A|} $ is defined as follows:
\begin{align}\label{mbo}
UQ(i,a)=r(i,a)+\gamma \displaystyle\sum_{j=1}^{M} p(j|i,a)\frac{1}{N}\log\displaystyle\sum^{|A|}_{b=1}e^{NQ(j,b)}.
\end{align}
Finally, our Second Order Value Iteration (SOVI) method is described in Algorithm \ref{alg:SOVI}.
\begin{algorithm}[H]
\caption{Second Order Value Iteration (SOVI) }\label{alg:SOVI}
\hspace*{\algorithmicindent} \textbf{Input:}\\
\hspace*{\algorithmicindent} $(S,A,p,r,\gamma)$: MDP model \\
\hspace*{\algorithmicindent} $Q_0$: Initial $Q$-vector \\
\hspace*{\algorithmicindent} $N$: prescribed approximation factor\\
\hspace*{\algorithmicindent} Iter: number of iterations\\
\hspace*{\algorithmicindent} \textbf{Output:} Approximate $Q$-values.
\begin{algorithmic}[1]
\Procedure{SOVI:}{}
\While{$n < $Iter}
\State Compute the $|S||A| \times |S||A|$ matrix $J_{U}(Q_n)$, whose $((i,a),(k,c))^{\text{th}}$ entry is given by
$$J_{U}(Q_{n})((i,a),(k,c))=\gamma p(k|i,a) \frac{e^{NQ_n(k,c)}}{\displaystyle\sum_{b \in A}e^{NQ_n(k,b)}}$$
\State $Q_{n+1}=Q_n- \big{(}I-J_{U}(Q_n)\big{)}^{-1}(Q_n-UQ_{n})$
\EndWhile
\State \textbf{return} $Q_{\text{Iter}}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{remark}
Note that in our case, the function $F$ in equation \eqref{nw-method} corresponds to $F(Q)=Q-UQ$, and $J_{F}(Q)=I-J_{U}(Q)$ is an $|S||A| \times |S||A|$ dimensional matrix.
\end{remark}
\begin{remark}
Note that directly computing $\big{(}I-J_{U}(Q)\big{)}^{-1}(Q-UQ)$ via an explicit inverse would involve $O(|S|^3|A|^3)$ operations. In practice, this computation is carried out by solving the linear system $(I-J_{U}(Q))Y=Q-UQ$ for $Y$, which avoids forming the inverse explicitly and the attendant numerical stability issues.
\end{remark}
\section{Related Work}
Value Iteration and Policy Iteration are two popular techniques employed for solving an MDP problem.
Earlier works in the literature propose heuristic algorithms for solving an MDP efficiently and improving the convergence to the optimal solution \cite{barto1995learning,hansen2001lao,bonet2003faster}.
We now discuss some of the works that propose variants of the value iteration algorithm. Approximate Newton methods have been proposed in \cite{furmston2016approximate} for policy optimization in MDPs; the authors provide a detailed analysis of the Hessian of the objective function and derive their algorithms from it.
In \cite{wingate2005prioritization}, several methods are proposed for reducing the number of backup operations (applications of the $T$ operator) in value iteration. In \cite{dai2007topological}, a topological value iteration algorithm is proposed that exploits the structure of the MDP problem and reduces the number of backup operations performed in the value iteration algorithm.
Recently, randomized algorithms for solving MDPs approximately have been proposed in \cite{sidford2018variance}; these algorithms combine sampling methods for value iteration with variance reduction techniques.
Q: Ember.js/Handlebars — I have an Ember application running with Handlebars. I have several Ember templates in my HTML page, but I ran into a slight dilemma: I can't escape HTML code within a script tag. I am looking to insert non-working raw HTML into a script tag for illustrative purposes. Would using the .hbs file method help? I would prefer not to use the .hbs method.
<script type="text/x-handlebars" data-template-name="name">
<html>
<head>
so on....
</head>
</html>
</script>
Thank you.
A: You can do this exactly the same way you would do it in normal HTML: escape the HTML entities:
<script type="text/x-handlebars" data-template-name="name">
&lt;html&gt;
&lt;head&gt;
so on....
&lt;/head&gt;
&lt;/html&gt;
</script>
Pitt soccer parts ways with leading scorer Edward Kizza
Pitt soccer announced last week that Edward Kizza, the second highest goal scorer in program history, is no longer on the team.
By Ben Bobeck, Senior Staff Writer
The No. 2 Pitt men's soccer program (2-0-0, 1-0-0 in ACC) will go through the rest of what has started as their most promising season in recent memory without the services of last year's leading scorer, senior forward Edward Kizza.
Kizza appeared in both of the Panthers' exhibition matches, scoring twice. He did not appear in the Panthers' regular-season non-conference opener versus Notre Dame or the ACC opener versus Syracuse.
Pitt Athletics confirmed the news Sept. 26, issuing the following statement during Pitt's match.
"Edward Kizza is no longer with the University of Pittsburgh Men's Soccer Program," Pitt said. "No further statement will be given at this time."
Why exactly Kizza departed the team was not clear, and the program refused to comment further.
A native of Kampala, Uganda, Kizza has led Pitt up front under head coach Jay Vidovich, departing the team just six goals shy of Pitt legend Joe Luxbacher's career record of 37. Kizza ended the 2019 season with 12 goals and four assists, earning All-South Region Second Team, First Team All-ACC, ACC All-Tournament Team and Scholar All-East Region Team. His strong junior season put him on the 2020 ACC Preseason Watch List.
Kizza made the All-ACC First Team and All-South Region Second Team in 2018 after a 15-goal season, following a debut season where he ranked second on the squad with four goals and three assists while being named to the ACC All-Freshman Team.
Pitt's offense has maintained its stride without the forward through the team's first two games of the season, scoring three goals in each of its games against Notre Dame and Syracuse this past Tuesday. The Panthers have had strong contributions from first-year Bertin Jacquesson, with one goal in each game.
Saidapur is a census town in the Satara district of the state of Maharashtra, India. Its population is 13,913 (2011). It is located 5 km from Satara and 102 km from Pune.
Demographics
According to the 2011 census, the population of Saidapur was 13,913, of whom 6,986 were men and 6,927 were women. Saidapur has an average literacy rate of 88.97%, higher than the state average of 82.34%: male literacy is 93.78%, and female literacy is 84.17%.
{"url":"https:\/\/www.baryonbib.org\/bib\/a22366b1-f053-44b7-9ff6-71c081b1076c","text":"PREPRINT\nA22366B1-F053-44B7-9FF6-71C081B1076C\n\nModeling the flare in NGC 1097 from 1991 to 2004 as a tidal disruption event\n\nZhang XueGuang\n\nSubmitted on 19 September 2022\n\nAbstract\n\nIn the Letter, interesting evidence is reported to support a central tidal disruption event (TDE) in the known AGN NGC 1097. Considering the motivations of TDE as one probable origination of emission materials of double-peaked broad emission lines and also as one probable explanation to changing-look AGN, it is interesting to check whether are there clues to support a TDE in NGC 1097, not only a changing-look AGN but also an AGN with double-peaked broad emission lines. Under the assumption that the onset of broad H$\\alpha$ emission was due to a TDE, the 13years-long (1991-2004) variability of double-peaked broad H$\\alpha$ line flux in NGC 1097 can be well predicted by theoretical TDE model, with a $\\left(1-1.5\\right){\\mathrm{M}}_{\\odot }$ main-sequence star tidally disrupted by the central BH with TDE model determined mass about $\\left(5-8\\right)\u00d7{10}^{7}{\\mathrm{M}}_{\\odot }$. 
The results provide interesting evidence to not only support TDE-related origin of double-peaked broad line emission materials but also support TDE as an accepted physical explanation to physical properties of changing-look AGN.\n\nPreprint\n\nComment: 5 pages, 3 figures, 1 table, Accepted to be published in MNRAS Letter\n\nSubjects: Astrophysics - Astrophysics of Galaxies; Astrophysics - High Energy Astrophysical Phenomena","date":"2022-09-25 01:01:05","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 4, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7184478640556335, \"perplexity\": 5368.966494737646}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-40\/segments\/1664030334332.96\/warc\/CC-MAIN-20220925004536-20220925034536-00626.warc.gz\"}"} | null | null |
\section{Introduction}
This paper deepens the analysis of law invariant risk measures and their connection to divergence-type functionals of probability measures. Throughout the paper, a nonatomic standard Borel space $(\Omega,{\mathcal F},P)$ is fixed, and a \emph{risk measure} is defined to be a convex functional $\rho : L^\infty := L^\infty(\Omega,{\mathcal F},P) \rightarrow {\mathbb R}$ satisfying:
\begin{enumerate}
\item Monotonicity: If $X,Y \in L^\infty$ and $X \le Y$ a.s. then $\rho(X) \le \rho(Y)$.
\item Cash additivity: If $X \in L^\infty$ and $c \in {\mathbb R}$ then $\rho(X + c) = \rho(X) + c$.
\item Normalization: $\rho(0)=0$.
\end{enumerate}
The functional $X \mapsto \rho(-X)$ is more traditionally called a \emph{normalized convex risk measure}; some authors use the term \emph{acceptability measure} \cite{roorda2007time} for what we have chosen to call a risk measure. Convex risk measures first appeared in \cite{follmer-schied-convex,frittelli2002putting,heath2004pareto}, extending the class of \emph{coherent} risk measures introduced in the seminal paper of Artzner et al. \cite{artzner1999coherent} (see also \cite{delbaen2002coherent}). A risk measure $\rho$ is \emph{law invariant} if $\rho(X)=\rho(Y)$ whenever $X$ and $Y$ have the same law. Three standard examples will guide us throughout the paper: The first is the well known entropic risk measure $\rho(X) = \eta^{-1}\log{\mathbb E}[e^{\eta X}]$, $\eta > 0$. Second, given a nondecreasing convex function $\ell : {\mathbb R} \rightarrow [0,\infty)$ with $\ell(0)=1$, the corresponding \emph{shortfall risk measure} (introduced by F\"ollmer and Schied in \cite{follmer-schied-convex}) is
\[
\rho(X) = \inf\left\{c \in {\mathbb R} : {\mathbb E}[\ell(X-c)] \le 1\right\}.
\]
Lastly, given a nondecreasing convex function $\phi : {\mathbb R} \rightarrow {\mathbb R}$ with $\phi^*(1) = \sup_{x \in {\mathbb R}}(x-\phi(x)) = 0$, the corresponding
\emph{optimized certainty equivalent} (introduced by Ben-Tal and Teboulle in \cite{bental-teboulle-1986,bental-teboulle-2007}) is
\[
\rho(X) := \inf_{m \in {\mathbb R}}\left({\mathbb E}[\phi(m+X)] - m\right).
\]
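These three examples are closely related: taking $\ell(x) = e^{\eta x}$ in the shortfall risk measure, or $\phi(x) = \eta^{-1}(e^{\eta x}-1)$ in the optimized certainty equivalent, recovers the entropic risk measure (a standard fact). The following sketch, in plain Python and purely illustrative rather than part of the paper, verifies this numerically for $\eta = 1$ and a two-point distribution.

```python
import math

# X takes values 0 and 1 with equal probability; eta = 1 throughout.
vals, probs = [0.0, 1.0], [0.5, 0.5]

def entropic(vals, probs):
    """rho(X) = log E[exp(X)]  (entropic risk measure with eta = 1)."""
    return math.log(sum(p * math.exp(v) for v, p in zip(vals, probs)))

def shortfall(vals, probs, loss=math.exp, lo=-50.0, hi=50.0):
    """rho(X) = inf{c : E[loss(X - c)] <= 1}, found by bisection,
    since c -> E[loss(X - c)] is nonincreasing for nondecreasing loss."""
    for _ in range(200):
        c = 0.5 * (lo + hi)
        if sum(p * loss(v - c) for v, p in zip(vals, probs)) <= 1.0:
            hi = c
        else:
            lo = c
    return hi

def oce(vals, probs, phi=lambda x: math.exp(x) - 1.0):
    """rho(X) = inf_m (E[phi(m + X)] - m), by brute-force grid search over m."""
    return min(sum(p * phi(m / 1000.0 + v) for v, p in zip(vals, probs)) - m / 1000.0
               for m in range(-5000, 5001))

# All three values agree with log((1 + e)/2) up to the search tolerances.
r = entropic(vals, probs)
```

The bisection exploits the monotonicity of $c \mapsto {\mathbb E}[\ell(X-c)]$, and the grid search is a crude stand-in for the one-dimensional convex minimization over $m$.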
We construct divergences as follows:
Fix a law invariant risk measure $\rho$.
Given a Polish space $E$, let ${\mathcal P}(E)$ denote the set of Borel probability measures on $E$. For any Polish space (or any standard Borel space) $E$ and any $\mu \in {\mathcal P}(E)$, we may define a new law invariant risk measure $\rho_\mu : L^\infty(E,\mu) \rightarrow {\mathbb R}$ by $\rho_\mu(f) := \rho(f(X))$, where $X$ is any $E$-valued random variable on $\Omega$ with law $P \circ X^{-1} = \mu$. Indeed, such an $X$ exists because $\Omega$ is nonatomic, and this definition is independent of the choice of $X$ thanks to law invariance.
This family of risk measures satisfies a consistency property, namely
\begin{align}
\rho_\mu(f) = \rho_\nu(g), \text{ whenever } \mu \circ f^{-1} = \nu \circ g^{-1}. \label{def:riskmeasureconsistency}
\end{align}
Let $\alpha(\cdot | \mu)$ denote the minimal penalty function associated to $\rho_\mu$, i.e., the restriction to ${\mathcal P}(E)$ of the convex conjugate of $\rho_\mu$:
\[
\alpha(\nu | \mu) = \sup_{f \in L^\infty(E,\mu)}\left(\int_Ef\,d\nu - \rho_\mu(f)\right) = \sup\left\{\int_Ef\,d\nu : f \in L^\infty(E,\mu), \ \rho_\mu(f) \le 0\right\}.
\]
We call $\alpha$ the \emph{divergence induced by $\rho$}.
In summary, the functional $\alpha(\cdot | \cdot)$ is defined for pairs of probability measures on \emph{any Polish space} (or standard Borel space), much like the classical relative entropy and other information divergences, such as the $f$-divergence.
Indeed, when $\rho$ is the entropic risk measure, $\alpha$ is nothing but the usual relative entropy (also known as the Kullback-Leibler divergence)
\[
H(\nu | \mu) = \int \log\left(\frac{d\nu}{d\mu}\right)\,d\nu \ \ \text{ for } \nu \ll \mu, \quad \infty \text{ otherwise}.
\]
When $\rho$ is a shortfall risk measure corresponding to a function $\ell$, the induced divergence is
\[
\alpha(\nu | \mu) = \inf_{t > 0}\frac{1}{t}\left(1 + \int_E\ell^*\left(t\frac{d\nu}{d\mu}\right)d\mu\right), \text{ for } \nu \ll \mu, \quad \infty \text{ otherwise},
\]
where $\ell^*(t) = \sup_{s \in {\mathbb R}}(st - \ell(s))$ is the convex conjugate of $\ell$.
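As a sanity check (a standard computation, spelled out here for convenience): in the entropic case $\ell(x) = e^x$ one has $\ell^*(t) = t\log t - t$ for $t > 0$, so writing $L = \frac{d\nu}{d\mu}$ and using $\int_E L\,d\mu = 1$,
\[
\frac{1}{t}\left(1 + \int_E\ell^*(tL)\,d\mu\right) = \frac{1}{t} + \log t - 1 + \int_E L\log L\,d\mu,
\]
and minimizing over $t > 0$ (the minimum is attained at $t=1$) recovers $\alpha(\nu|\mu) = H(\nu|\mu)$, as it must.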
Finally, when $\rho$ is the optimized certainty equivalent corresponding to a function $\phi$, the induced divergence is the $\phi^*$-divergence
\[
\alpha(\nu|\mu) = \int\phi^*\left(\frac{d\nu}{d\mu}\right)d\mu, \text{ for } \nu \ll \mu, \quad \infty \text{ otherwise}.
\]
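Similarly, for $\phi(x) = e^x - 1$, for which the optimized certainty equivalent is the entropic risk measure with $\eta = 1$ (again a standard computation, recorded here for orientation), one finds $\phi^*(y) = y\log y - y + 1$ for $y > 0$, so
\[
\alpha(\nu|\mu) = \int\left(\frac{d\nu}{d\mu}\log\frac{d\nu}{d\mu} - \frac{d\nu}{d\mu} + 1\right)d\mu = H(\nu | \mu),
\]
since $\int \frac{d\nu}{d\mu}\,d\mu = 1$; the three examples are thus consistent in the entropic case.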
In fact, we could instead start from a $[0,\infty]$-valued function $\alpha=\alpha(\nu|\mu)$, defined for pairs of probability measures $(\nu,\mu) \in {\mathcal P}(E)^2$ for any Polish space $E$, such that for each Polish space $E$ and each $\mu \in {\mathcal P}(E)$ we have the following properties:
\begin{enumerate}
\item $\alpha(\mu | \mu) = 0$.
\item $\alpha(\nu | \mu) = \infty$ if $\nu \in {\mathcal P}(E)$ is not absolutely continuous with respect to $\mu$.
\item The map $\nu \mapsto \alpha(\nu | \mu)$ is convex and lower semicontinuous with respect to total variation.
\item $\alpha(\nu K | \mu K)\le \alpha(\nu | \mu)$ for every $\nu \in {\mathcal P}(E)$ and every kernel $K$ from $E$ to another Polish space $F$, where $\mu K(dy) := \int_E\mu(dx)K(x,dy) \in {\mathcal P}(F)$.
\end{enumerate}
We call such a functional a \emph{divergence},
and we show that to any divergence there corresponds a unique law invariant risk measure defined on the original space $(\Omega,{\mathcal F},P)$; we prove this by showing the definitions
\[
\rho(f(X)) := \rho_\mu(f) := \sup_{\nu \in {\mathcal P}(E)}\left(\int_Ef\,d\nu - \alpha(\nu | \mu)\right),
\]
to be consistent in the sense of \eqref{def:riskmeasureconsistency}, where $E$ is a Polish space, $f \in B(E)$, and $\mu = P \circ X^{-1}$ for some $X : \Omega \rightarrow E$. The property (4) corresponds exactly to the consistency property \eqref{def:riskmeasureconsistency} and is known as the \emph{data processing inequality} in information theory, at least when $\alpha$ is the usual relative entropy.
The primary focus of the paper is on the characterization of properties of divergences related to the well known chain rule for relative entropy, which reads
\[
H\left(\nu(dx)K^\nu_x(dy) \ | \ \mu(dx)K^\mu_x(dy)\right) = H(\nu | \mu) + \int\nu(dx)H(K^\nu_x | K^\mu_x),
\]
and holds for all (disintegrated) probability measures $\mu(dx)K^\mu_x(dy)$ and $\nu(dx)K^\nu_x(dy)$ on the product of two Polish spaces. More generally, we say a divergence $\alpha$ is \emph{superadditive} if
\begin{align}
\alpha\left(\nu(dx)K^\nu_x(dy) \ | \ \mu(dx)K^\mu_x(dy)\right) \ge \alpha(\nu | \mu) + \int\nu(dx)\alpha(K^\nu_x | K^\mu_x), \label{intro:superadditivity}
\end{align}
and we say $\alpha$ is \emph{subadditive} if the reverse inequality holds. We characterize this in terms of various properties of the corresponding risk measure $\rho$.
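The chain rule is straightforward to verify on finite spaces, where measures and kernels are just vectors and row-stochastic matrices; the following sketch (plain Python, illustrative only and not part of the paper) checks it for a pair of measures on $\{0,1\}\times\{0,1\}$.

```python
import math

def kl(nu, mu):
    """Relative entropy H(nu | mu) for finite distributions with nu << mu."""
    return sum(n * math.log(n / m) for n, m in zip(nu, mu) if n > 0)

# Marginals on {0,1} and conditional kernels (one probability row per value of x).
mu, K_mu = [0.3, 0.7], [[0.5, 0.5], [0.2, 0.8]]
nu, K_nu = [0.6, 0.4], [[0.9, 0.1], [0.3, 0.7]]

# Joint measures mu(dx) K^mu_x(dy) and nu(dx) K^nu_x(dy), flattened over (x, y).
joint_mu = [mu[x] * K_mu[x][y] for x in range(2) for y in range(2)]
joint_nu = [nu[x] * K_nu[x][y] for x in range(2) for y in range(2)]

# Chain rule: H(joint_nu | joint_mu) = H(nu | mu) + int nu(dx) H(K^nu_x | K^mu_x).
lhs = kl(joint_nu, joint_mu)
rhs = kl(nu, mu) + sum(nu[x] * kl(K_nu[x], K_mu[x]) for x in range(2))
```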
The original motivation for this study comes from an ongoing investigation into the tensorization properties of \emph{concentration inequalities} of the form
\begin{align}
\rho(\lambda X) \le \gamma(\lambda), \text{ for all } \lambda \ge 0, \label{intro:concentration}
\end{align}
where $\gamma : [0,\infty) \rightarrow [0,\infty]$. In a follow-up paper \cite{lacker-liquidity}, we study concentration inequalities \eqref{intro:concentration} in connection with liquidity risk. When $\rho$ is the entropic risk measure, the inequality \eqref{intro:concentration} is simply a bound on the moment generating function of $X$. Tensorization in this context roughly means bounding $\rho(\lambda h(X,Y))_{\lambda \ge 0}$ in terms of bounds on $\rho(\lambda f(X))_{\lambda \ge 0}$ and $\rho(\lambda g(Y))_{\lambda \ge 0}$, for two given (typically independent) random variables $X$ and $Y$ and various (classes of) functions $f,g,h$. Tensorization properties are typically proven using the chain rule (see \cite{gozlan-leonard-survey} for details, particularly Proposition 1 thereof), so we seek alternatives to the chain rule in order to understand how to extend these ideas to general concentration inequalities of the form \eqref{intro:concentration}.
It turns out that the dual form of superadditivity \eqref{intro:superadditivity} is a so-called \emph{time-consistency} property of the corresponding risk measure $\rho$, which we describe by building on a construction of Weber \cite{weber-distributioninvariant}: Define a functional $\tilde{\rho}$ on ${\mathcal P}({\mathbb R})$ by $\tilde{\rho}(P \circ X^{-1}) = \rho(X)$, which is of course well defined thanks to law invariance. For any $\sigma$-field ${\mathcal G} \subset {\mathcal F}$ in $\Omega$ and any $X \in L^\infty$, consider the ${\mathcal G}$-measurable random variable
\[
\rho(X | {\mathcal G})(\omega) := \tilde{\rho}(P(X \in \cdot \, | \, {\mathcal G})(\omega)),
\]
where $P(X \in \cdot \, | \, {\mathcal G})$ denotes a regular conditional law of $X$ given ${\mathcal G}$. We say $\rho$ is \emph{acceptance consistent} if $\rho(X) \le \rho(\rho(X | {\mathcal G}))$ for every $X \in L^\infty$ and any $\sigma$-field ${\mathcal G} \subset {\mathcal F}$.
If the reverse inequality holds, we say $\rho$ is \emph{rejection consistent}. If $\rho$ is both acceptance and rejection consistent, we say it is \emph{time consistent}. We show that acceptance consistency of $\rho$ is essentially equivalent to the superadditivity of the induced divergence $\alpha$, and we provide an additional characterization in terms of a property of the \emph{measure acceptance set}
\[
{\mathcal A} := \left\{P \circ X^{-1} : X \in L^\infty, \ \rho(X) \le 0 \right\} \subset {\mathcal P}({\mathbb R}).
\]
These various characterizations are put to use to find those functions $\ell$ and $\phi$ for which the corresponding shortfall risk measures and optimized certainty equivalents are acceptance consistent.
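For orientation, the entropic risk measure is both acceptance and rejection consistent (hence time consistent), by the tower property of conditional expectation (a standard one-line computation): here $\rho(X | {\mathcal G}) = \eta^{-1}\log{\mathbb E}[e^{\eta X} \mid {\mathcal G}]$, so
\[
\rho(\rho(X | {\mathcal G})) = \frac{1}{\eta}\log{\mathbb E}\left[e^{\eta\rho(X | {\mathcal G})}\right] = \frac{1}{\eta}\log{\mathbb E}\big[{\mathbb E}[e^{\eta X} \mid {\mathcal G}]\big] = \frac{1}{\eta}\log{\mathbb E}[e^{\eta X}] = \rho(X).
\]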
It follows from the results of Kupper and Schachermayer \cite{kupper-schachermayer} that the entropic risk measure is essentially the only time consistent risk measure, and as a corollary we find that the relative entropy is the only divergence (up to a scalar multiple) satisfying the chain rule.\footnote{We make no attempt to reconcile our characterization of relative entropy with the many already present in the literature (see the survey of Csisz\'ar \cite{csiszar2008axiomatic}), but we can at least say with confidence that the techniques by which we obtained it are new, notably avoiding functional equations.} Ultimately, we find that not many law invariant risk measures are acceptance consistent (or rejection consistent) other than the entropic one, or modest perturbations thereof. In other words, not many divergences beyond relative entropy are superadditive. Although our results are somewhat negative in this sense, the construction and characterization of divergences induced by risk measures is interesting in its own right, and they appear to be useful tools in the study of law invariant risk measures. Moreover, we find some value in understanding the limitations of our divergences in the applications discussed above.
We also briefly revisit the related results of Weber \cite{weber-distributioninvariant}. Say that $\rho$ is \emph{weakly acceptance consistent} if $\rho(X) \le 0$ whenever $\rho(X | {\mathcal G}) \le 0$ a.s., for $X \in L^\infty$ and $\sigma$-fields ${\mathcal G} \subset {\mathcal F}$. Weber showed that this is essentially equivalent to the convexity of the measure acceptance set ${\mathcal A}$.
We show that weak acceptance consistency is also equivalent to an inequality weaker than superadditivity:
\[
\alpha\left(\nu(dx)K^\nu_x(dy) \ | \ \mu(dx)K^\mu_x(dy)\right) \ge \int\nu(dx)\alpha(K^\nu_x | K^\mu_x).
\]
Time consistency properties of dynamic risk measures have by now been studied thoroughly \cite{riedel2004dynamic,detlefsen2005conditional,frittelli2004dynamic,follmer2006convex,tutsch2008update,cheridito2011composition}. The nice survey of Acciaio and Penner \cite{acciaio-penner-dynamic} will be a useful reference, although we will mostly work with the type of dynamic law invariant risk measures constructed by Weber in \cite{weber-distributioninvariant}. With this rich literature in mind, the most novel of our results on time consistency is the characterization of acceptance consistency in terms of the shift-convexity of the measure acceptance set, which nicely complements Weber's result on weak acceptance consistency. In theory, our characterization in terms of superadditivity \eqref{intro:superadditivity} could be deduced from results in \cite{acciaio-penner-dynamic}, but this is non-trivial: The key difference is that previous papers on the subject (including \cite{acciaio-penner-dynamic}) use \emph{essential suprema} to define the minimal penalty function of a conditional risk measure. We work purely with pointwise definitions, and while this distinction is largely technical, there is a non-trivial gap between the two stemming from a delicate measurable selection argument. See Section \ref{se:essentialsuprema} for details.
The above results must be qualified: the equivalence of superadditivity and acceptance consistency is only proven under the additional assumption that the divergence $\alpha$ is \emph{simplified}, in the sense that
\[
\alpha(\nu | \mu) = \sup_{f \in C([0,1])}\left(\int_{[0,1]}f\,d\nu - \rho_\mu(f)\right),
\]
for each $\mu,\nu \in {\mathcal P}([0,1])$. This additional assumption is admittedly somewhat of a nuisance, and it is unclear if the main result on superadditivity holds without it. While we did not identify a nice dual characterization for this condition, we have identified a common stronger condition: Namely, $\alpha$ is simplified as soon as $\rho$ is \emph{Lebesgue continuous}, in the sense that $\rho(X_n) \rightarrow \rho(X)$ whenever $X_n$ are uniformly bounded and $X_n \rightarrow X$ a.s. This is a strong assumption, but it indeed holds for our main examples of shortfall risk measures and optimized certainty equivalents. Moreover, we show that Lebesgue continuity of $\rho$ is actually equivalent to joint lower semicontinuity of the induced divergence $\alpha$ (with respect to weak convergence).
Finally, in Section \ref{se:furtherproperties} we study miscellaneous properties of divergences.
First, joint convexity of $\alpha$ is shown to be equivalent to the concavity of $\rho$ on the level of probability measures (i.e., concavity of $\tilde{\rho}$ defined above), a property studied in some detail by Acciaio and Svindland \cite{acciaio-svindland-concave} which holds for every optimized certainty equivalent. Second, as an interesting decision-theoretic consequence of the defining property (4) of divergences, note that if $T : E \rightarrow F$ is measurable then $\alpha(\nu \circ T^{-1} | \mu \circ T^{-1}) \le \alpha(\nu | \mu)$; we show that equality holds if $T$ is a sufficient statistic for $\{\mu,\nu\}$. Lastly, we show that simplified divergences can be approximated in a sense by their projections on finite sets.
The paper is organized as follows. Section \ref{se:riskmeasures} reviews the basic definitions and duality results of law invariant risk measures before introducing divergences and studying their most essential properties. The main characterization of divergences in terms of law invariant risk measures is given by Proposition \ref{pr:informationinequality} and Theorem \ref{th:divergence-characterization}. Section \ref{se:convexitycontinuitysufficiency} then introduces the concept of a \emph{simplified divergence}, and, to provide an important class of examples, we clarify the connection between continuity properties of a risk measure and joint lower semicontinuity of the induced divergence. This is a useful preparatory step for Section \ref{se:timeconsistency}, which turns to time consistency and superadditivity. The main Theorem \ref{th:mainequivalence} characterizes time consistency properties of a law invariant risk measure in terms of both the additivity properties of the induced divergence and what we call the \emph{shift-convexity} of its measure acceptance set. Section \ref{se:furtherproperties} studies additional results pertaining to convexity and some more information-theoretic uses for divergences, and it should be noted that this section is completely independent of Section \ref{se:timeconsistency}. Finally, Section \ref{se:examples} applies the theory to the examples of shortfall risk measures and optimized certainty equivalents. The short appendix is devoted to the proof of a technical lemma.
\section{Risk measures and divergences} \label{se:riskmeasures}
First, let us fix some notation. Throughout the paper, $(\Omega,{\mathcal F},P)$ is a fixed probability space, which we assume is a nonatomic standard Borel space. Abbreviate $L^p=L^p(\Omega,{\mathcal F},P)$ as usual for the set of (equivalence classes of) $p$-integrable real-valued measurable functions on $\Omega$. Let ${\mathcal P}(\Omega)$ denote the set of probability measures on $(\Omega,{\mathcal F})$, and let ${\mathcal P}_P(\Omega)$ denote the subset consisting of those measures which are absolutely continuous with respect to $P$.
As stated in the introduction, a \emph{risk measure} to us is a convex nondecreasing (with respect to a.s. order) functional $\rho : L^\infty \rightarrow {\mathbb R}$ satisfying $\rho(0)=0$ and $\rho(X+c)=\rho(X)+c$ for all $X\in L^\infty, c \in {\mathbb R}$. Note again that this is somewhat different from the standard definition, in which $\rho$ is instead nonincreasing \cite{follmer-schied-book}.
Law-invariant risk measures possess some nice additional structure, highlighted in particular by the results of \cite{jouini-touzi-schachermayer} and \cite{filipovic-svindland}, though we will not need the latter.
\begin{theorem}[Theorem 2.1 of \cite{jouini-touzi-schachermayer}] \label{th:jouini-touzi-schachermayer}
Every law-invariant risk measure $\rho$ satisfies the \emph{Fatou property}, which means that whenever $X_n \in L^\infty$ are uniformly bounded and converge a.s. to $X \in L^\infty$, then $\rho(X) \le \liminf_{n\rightarrow\infty}\rho(X_n)$.
\end{theorem}
Let us recall a classical duality result, but note that the details of the presentation are somewhat unusual: We say a function $\alpha : {\mathcal P}_P(\Omega) \rightarrow [0,\infty]$ is a \emph{penalty function} for $\rho$ if it holds that
\begin{align}
\rho(X) = \sup_{Q \in {\mathcal P}_P(\Omega)}\left({\mathbb E}^Q[X] - \alpha(Q)\right).
\label{def:penalty}
\end{align}
(Note that the supremum involves only countably additive measures, and we will make no mention of finite additivity.)
Here ${\mathbb E}^Q$ denotes expectation with respect to the probability $Q$. Expectation under the reference measure $P$ is simply denoted ${\mathbb E}$, and integrals on spaces other than $\Omega$ are written in a more explicit measure-theoretic notation.
\begin{theorem}[Theorem 4.33 of \cite{follmer-schied-book}] \label{th:follmerschied}
Suppose $\rho$ is a law invariant risk measure. Then the function $\alpha : {\mathcal P}_P(\Omega) \rightarrow [0,\infty]$ defined by
\begin{align}
\alpha(Q) := \sup\left\{{\mathbb E}^Q[X] : X \in L^\infty, \ \rho(X) \le 0\right\} = \sup_{X \in L^\infty}\left\{{\mathbb E}^Q[X] - \rho(X)\right\} \label{def:minimalpenalty}
\end{align}
is a penalty function for $\rho$. In fact, it is the \emph{minimal penalty function}, in the sense that any other penalty function $\alpha'$ for $\rho$ satisfies $\alpha \le \alpha'$.
\end{theorem}
Note that the dual representation \eqref{def:minimalpenalty} implies that the minimal penalty function is convex and lower semicontinuous with respect to the total variation topology, as well as the weaker topology $\sigma({\mathcal P}_P(\Omega),L^\infty)$.\footnote{As usual, when $F$ is a set of real-valued functions on a set $E$, the notation $\sigma(E,F)$ refers to the coarsest topology on $E$ rendering the elements of $F$ continuous.} There is an alternative dual representation more specific to law invariant risk measures, due to Kusuoka \cite{kusuoka2001law} and extended in \cite{frittelli2005law,jouini-touzi-schachermayer}, but we will make no use of this.
\begin{remark} \label{re:equivalenceclasses}
Note that we may afford to be lazy about the fact that $\rho$ is to be evaluated at \emph{equivalence classes}, i.e. elements of $L^\infty$, as opposed to specific measurable functions. For a risk measure $\rho$, we may define $\rho(X) := \rho([X])$ in the obvious way for a measurable function $X : \Omega \rightarrow {\mathbb R}$ by finding the equivalence class $[X] \in L^\infty$ to which $X$ belongs. With this in mind, we may then define $\alpha(Q) := \infty$ for $Q \in {\mathcal P}(\Omega)$ which are not absolutely continuous with respect to $P$, and then the dual formula \eqref{def:penalty} may be re-written
\[
\rho(X) = \sup_{Q \in {\mathcal P}(\Omega)}\left({\mathbb E}^Q[X] - \alpha(Q)\right),
\]
for bounded measurable functions $X : \Omega \rightarrow {\mathbb R}$.
\end{remark}
As with many properties of convex risk measures, law invariance may be alternatively characterized by a property of the minimal penalty function, and this will be a building block for a more general discussion in the next section. This characterization appears to be new, although a very similar result appeared in \cite[Proposition 2]{shapiro2013kusuoka}, and see also \cite[Lemma A.4]{jouini-touzi-schachermayer}.
\begin{proposition} \label{pr:lawinvariantcharacterization}
Suppose a risk measure $\rho$ on $L^\infty$ has the Fatou property (see Theorem \ref{th:jouini-touzi-schachermayer}). Then $\rho$ is law-invariant if and only if it has a penalty function $\alpha$ satisfying $\alpha(Q \circ T^{-1}) \le \alpha(Q)$ for every $Q \in {\mathcal P}_P(\Omega)$ and for every measurable $T : \Omega \rightarrow \Omega$ satisfying $P \circ T^{-1} = P$.
\end{proposition}
\begin{proof}
First, assume $\rho$ is law invariant, and let $\alpha$ be its minimal penalty function provided by Theorem \ref{th:follmerschied}. Let $T : \Omega \rightarrow \Omega$ be a measurable map satisfying $P \circ T^{-1} = P$. Then $X \circ T$ and $X$ have the same law and thus $\rho(X) = \rho(X \circ T)$ for every $X \in L^\infty$. Hence
\begin{align*}
\alpha(Q \circ T^{-1}) &= \sup_{X \in L^\infty}\left({\mathbb E}^{Q \circ T^{-1}}[X] - \rho(X)\right) \\
&= \sup_{X \in L^\infty}\left({\mathbb E}^{Q}[X \circ T] - \rho(X \circ T)\right) \\
&\le \sup_{X \in L^\infty}\left({\mathbb E}^{Q}[X] - \rho(X)\right) \\
&= \alpha(Q).
\end{align*}
To prove the converse, fix $X,Y \in L^\infty$ with the same law. By \cite[Corollary 6.11]{kallenberg-foundations} (since the probability space is nonatomic) we may find a measurable map $T : \Omega \rightarrow \Omega$ such that $P \circ T^{-1} = P$ and $P(X = Y \circ T)=1$. Then
\begin{align*}
\rho(X) &= \sup_{Q \in {\mathcal P}_P(\Omega)}\left({\mathbb E}^Q[X] - \alpha(Q)\right) \\
&= \sup_{Q \in {\mathcal P}_P(\Omega)}\left({\mathbb E}^{Q \circ T^{-1}}[Y] - \alpha(Q)\right) \\
&\le \sup_{Q \in {\mathcal P}_P(\Omega)}\left({\mathbb E}^{Q \circ T^{-1}}[Y] - \alpha(Q \circ T^{-1})\right) \\
&\le \sup_{Q \in {\mathcal P}_P(\Omega)}\left({\mathbb E}^Q[Y] - \alpha(Q)\right) \\
&= \rho(Y).
\end{align*}
Reversing the roles of $X$ and $Y$ completes the proof.
\end{proof}
\begin{remark}
From the proof of Proposition \ref{pr:lawinvariantcharacterization}, it should be clear that the assumption that $\rho$ has the Fatou property is not needed. We state only this simpler form in order to avoid introducing additional terminology, and to avoid dwelling on details involving finitely additive measures.
\end{remark}
\subsection{Divergences and their characterization} \label{se:divergences}
Let us now exploit law invariance to construct a corresponding \emph{family} of risk measures and what we refer to as \emph{divergences}.
Fix for the rest of this section a law-invariant risk measure $\rho$. Given a Polish space $E$, let ${\mathcal P}(E)$ denote the set of Borel probability measures on $E$.
We write $\nu \ll \mu$ to mean $\nu$ is absolutely continuous with respect to $\mu$. Given any $\mu \in {\mathcal P}(E)$, write ${\mathcal P}_\mu(E) := \{\nu \in {\mathcal P}(E) : \nu \ll \mu\}$. Define also $C(E)$, $C_b(E)$, and $B(E)$ to be the sets of continuous, bounded continuous, and bounded measurable functions on $E$, respectively.
The space ${\mathcal P}(E)$ is endowed with the $\sigma$-field generated by the maps $\mu \mapsto \mu(A)$, where $A \subset E$ is Borel; this equals the Borel $\sigma$-field generated by the topology of weak convergence, i.e., $\sigma({\mathcal P}(E),C_b(E))$.
Given a Polish space $E$ and $\mu \in {\mathcal P}(E)$, we may find (because $\Omega$ is nonatomic) a measurable function $X : \Omega \rightarrow E$ such that $P \circ X^{-1} = \mu$. We may then define a (law invariant) risk measure $\rho_\mu$ on $L^\infty(E,\mu)$ by
\[
\rho_\mu(f) := \rho(f(X)).
\]
Note that by law-invariance this definition does not depend on the choice of $X$, as long as $P \circ X^{-1} = \mu$.
We call $(\rho_\mu)_{\mu,E}$ the \emph{family of risk measures induced by $\rho$}.
This family of risk measures satisfies a consistency property, namely
\begin{align}
\rho_\mu(f) = \rho_\nu(g), \text{ whenever } \mu \circ f^{-1} = \nu \circ g^{-1}. \label{def:riskconsistency}
\end{align}
In particular, for any measurable map $T$ from one Polish space $E$ to another $F$, we have $
\rho_{\mu \circ T^{-1}}(f) = \rho_\mu(f \circ T)$, for $f \in L^\infty(F,\mu \circ T^{-1})$.
The same construction is valid when $E$ is any standard Borel space, but for simplicity we stick with Polish spaces.
The minimal penalty function of $\rho_\mu$ is denoted $\alpha( \cdot | \mu) : {\mathcal P}_\mu(E) \rightarrow [0,\infty]$ and defined by
\begin{align}
\alpha(\nu | \mu) &:= \sup_{f \in B(E)}\left(\int_E f\,d\nu - \rho_\mu(f)\right) = \sup\left\{\int_E f\,d\nu : f \in L^\infty(E,\mu), \ \rho_\mu(f) \le 0\right\}. \label{def:alpha}
\end{align}
Extend $\alpha(\cdot | \mu)$ to all of ${\mathcal P}(E)$ by setting $\alpha(\nu | \mu) = \infty$ whenever $\nu$ is not absolutely continuous with respect to $\mu$. Then, for $f \in B(E)$,
\[
\rho(f(X)) = \rho_\mu(f) = \sup_{\nu \in {\mathcal P}(E)}\left(\int_E f\,d\nu - \alpha(\nu | \mu)\right), \text{ if } P \circ X^{-1} = \mu,
\]
and it is easy to check that \eqref{def:alpha} remains valid for $\nu \in {\mathcal P}(E) \backslash {\mathcal P}_\mu(E)$.
(As in Remark \ref{re:equivalenceclasses}, let us not be overly careful about distinguishing between measurable functions and equivalence classes thereof.)
We refer to $\alpha(\cdot|\cdot)$ as the \emph{divergence induced by $\rho$}.
Note that $\alpha(\cdot | \cdot)$ is defined for \emph{pairs} of probability measures on \emph{any} Polish space. Additionally, $\alpha(\cdot | \mu)$ is always convex and lower semicontinuous with respect to total variation, and also with respect to the topology $\sigma({\mathcal P}(E),B(E))$.
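In the entropic case, the supremum in the dual formula is attained at the Gibbs measure with density $d\nu^*/d\mu = e^f / \int_E e^f\,d\mu$, and the value there equals $\rho_\mu(f) = \log\int_E e^f\,d\mu$. The sketch below (plain Python, illustrative only and not part of the paper) confirms this on a three-point space.

```python
import math

# Finite space E = {0, 1, 2}: reference measure mu and a test function f.
mu = [0.2, 0.5, 0.3]
f = [1.0, -0.5, 2.0]

def kl(nu, mu):
    """Relative entropy H(nu | mu) for finite distributions with nu << mu."""
    return sum(n * math.log(n / m) for n, m in zip(nu, mu) if n > 0)

# Gibbs measure nu* with density e^f / Z with respect to mu.
Z = sum(m * math.exp(x) for m, x in zip(mu, f))
nu_star = [m * math.exp(x) / Z for m, x in zip(mu, f)]

# The dual objective at nu* equals rho_mu(f) = log Z for the entropic rho.
value_at_nu_star = sum(x * n for x, n in zip(f, nu_star)) - kl(nu_star, mu)
rho_mu_f = math.log(Z)
```

Any other measure, e.g. the uniform one, gives a strictly smaller value of the dual objective here.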
An alternative expression for the divergence induced by $\rho$ is through the \emph{measure acceptance set}
\[
{\mathcal A} := \left\{P \circ X^{-1} : X \in L^\infty, \ \rho(X) \le 0\right\} \subset {\mathcal P}({\mathbb R}).
\]
Indeed, we may then write
\[
\alpha(\nu|\mu) = \sup\left\{ \int_Ef\,d\nu : f \in B(E), \ \mu \circ f^{-1} \in {\mathcal A}\right\}.
\]
Divergences satisfy a consistency property related to \eqref{def:riskconsistency}, the statement of which requires some notation involving kernels. Given Polish spaces $E$ and $F$, a \emph{kernel from $E$ to $F$} is a measurable function $E \ni x \mapsto K_x \in {\mathcal P}(F)$. Given $\mu \in {\mathcal P}(E)$, write $\mu K := \int_E\mu(dx)K_x(\cdot)$ for the mean measure in ${\mathcal P}(F)$, i.e.,
\[
\mu K(A) = \int_E\mu(dx)K_x(A), \text{ for } A \subset F \text{ Borel}.
\]
For $f \in B(F)$, write $Kf$ for the function $Kf(x) = \int_FK_x(dy)f(y)$ in $B(E)$. Note the identity $\int_F f\,d(\mu K) = \int_E Kf\,d\mu$.
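On finite spaces a kernel is simply a row-stochastic matrix, and both the identity just noted and the monotonicity of relative entropy under kernels (the data processing inequality) can be checked directly; a brief sketch in plain Python, for illustration only:

```python
import math

def kl(nu, mu):
    """Relative entropy H(nu | mu) for finite distributions with nu << mu."""
    return sum(n * math.log(n / m) for n, m in zip(nu, mu) if n > 0)

# A kernel K from E = {0,1} to F = {0,1}: one probability row per point of E.
K = [[0.9, 0.1],
     [0.4, 0.6]]
mu, nu = [0.3, 0.7], [0.6, 0.4]

def push(m, K):
    """The mean measure (m K)(y) = sum_x m(x) K_x(y)."""
    return [sum(m[x] * K[x][y] for x in range(len(m))) for y in range(len(K[0]))]

muK, nuK = push(mu, K), push(nu, K)

# Identity: int_F f d(mu K) = int_E (Kf) d(mu).
f = [2.0, -1.0]
Kf = [sum(K[x][y] * f[y] for y in range(len(f))) for x in range(len(K))]
lhs = sum(m * v for m, v in zip(muK, f))
rhs = sum(m * v for m, v in zip(mu, Kf))
```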
\begin{proposition} \label{pr:informationinequality}
Let $\rho$ be a law invariant risk measure with divergence $\alpha$. If $E$ and $F$ are Polish spaces and $K$ is a kernel from $E$ to $F$, then
\begin{align}
\alpha(\nu K | \mu K) \le \alpha(\nu | \mu), \label{def:informationinequality}
\end{align}
for all $\mu,\nu \in {\mathcal P}(E)$.
In particular, if $T : E \rightarrow F$ is measurable, then
\[
\alpha(\nu \circ T^{-1} | \mu \circ T^{-1}) \le \alpha(\nu | \mu),
\]
and equality holds if $T$ is bijective with measurable inverse.
\end{proposition}
\begin{proof}
Note that the second claim follows from the first by setting $K(x,dy) = \delta_{T(x)}(dy)$. Jensen's inequality shows easily that $\mu \circ (Kf)^{-1} \le (\mu K) \circ f^{-1}$ in convex order for all $f \in B(F)$; indeed, for every convex function $\phi$ on ${\mathbb R}$,
\[
\int_{\mathbb R}\phi\,d\mu \circ (Kf)^{-1} = \int_E\phi(Kf)\,d\mu \le \int_EK(\phi \circ f)\,d\mu = \int_F \phi \circ f\,d(\mu K) = \int_{\mathbb R}\phi\,d(\mu K) \circ f^{-1}.
\]
It is well known that (normalized) law invariant risk measures are increasing with respect to convex order, e.g. by \cite[Corollary 4.65]{follmer-schied-book}, and thus $\rho_{\mu K}(f) \ge \rho_{\mu}(Kf)$. Then
\begin{align*}
\alpha(\nu K | \mu K) &= \sup_{f \in B(F)}\left(\int_Ff\,d(\nu K) - \rho_{\mu K}(f)\right) \\
&= \sup_{f \in B(F)}\left(\int_E (Kf)\,d\nu - \rho_{\mu K}(f)\right) \\
&\le \sup_{f \in B(F)}\left(\int_E (Kf)\,d\nu - \rho_{\mu}(Kf)\right) \\
&\le \sup_{g \in B(E)}\left(\int_E g\,d\nu - \rho_{\mu}(g)\right) \\
&= \alpha(\nu | \mu).
\end{align*}
\end{proof}
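For the entropic example $\rho(X) = \log{\mathbb E}[e^X]$, whose divergence is the relative entropy $H$, inequality \eqref{def:informationinequality} is the classical data-processing inequality. A randomized finite-space sanity check (not part of the formal development):

```python
import numpy as np

def kl(nu, mu):
    """Relative entropy H(nu|mu) on a finite space (natural log)."""
    mask = nu > 0
    return float(np.sum(nu[mask] * np.log(nu[mask] / mu[mask])))

rng = np.random.default_rng(0)
for _ in range(100):
    mu = rng.dirichlet(np.ones(4))
    nu = rng.dirichlet(np.ones(4))
    K = rng.dirichlet(np.ones(3), size=4)   # kernel from a 4-point E to a 3-point F
    # Data-processing inequality: H(nu K | mu K) <= H(nu | mu).
    assert kl(nu @ K, mu @ K) <= kl(nu, mu) + 1e-12
```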
In fact, the inequality \eqref{def:informationinequality} is enough to reconstruct from $\alpha$ the original family of risk measures. This is made precise in the following:
\begin{theorem} \label{th:divergence-characterization}
Suppose we are given a family of functions ${\mathcal P}(E) \ni \nu \mapsto \alpha(\nu|\mu) \in [0,\infty]$, for each Polish space $E$ and each $\mu \in {\mathcal P}(E)$, and suppose the following conditions hold:
\begin{enumerate}
\item $\alpha(\mu|\mu)=0$.
\item $\alpha(\nu|\mu)=\infty$ if $\nu \in {\mathcal P}(E)$ is not absolutely continuous with respect to $\mu$.
\item $\alpha(\nu K | \mu K) \le \alpha(\nu | \mu)$ for every $\nu \in {\mathcal P}(E)$ and every kernel $K$ from $E$ to another Polish space $F$.
\end{enumerate}
For each Polish space $E$ and each $\mu \in {\mathcal P}(E)$, define
\begin{align}
\rho_\mu(f) := \sup_{\nu \in {\mathcal P}(E)}\left(\int_E f\,d\nu - \alpha(\nu | \mu)\right), \ f \in B(E). \label{def:inducedriskmeasurefamily}
\end{align}
Then each $\rho_\mu$ is a law invariant risk measure. Moreover, for any Polish spaces $F$ and $G$, any $\mu \in {\mathcal P}(F)$ and $\nu \in {\mathcal P}(G)$, and any $f \in B(F)$ and $g \in B(G)$ with $\mu \circ f^{-1} = \nu \circ g^{-1}$, we have $\rho_\mu(f) = \rho_\nu(g)$.
\end{theorem}
\begin{proof}
It is immediate from the definition that $\rho_\mu$ is a risk measure. Indeed, since $\alpha(\mu|\mu) = 0$ and $\alpha(\nu|\mu) \ge 0$ for all $\nu$, we have $\rho_\mu(0)=0$. Theorem 4.33 of \cite{follmer-schied-book} shows that $\rho_\mu$ satisfies the Fatou property, since the supremum in its definition includes only countably additive measures. For a fixed $\mu$, we deduce from property (3) and Proposition \ref{pr:lawinvariantcharacterization} that $\rho_\mu$ is law-invariant.
It remains to prove the last claim.
Suppose for the moment that we can find a kernel $K$ from $F$ to $G$ such that $\mu K = \nu$ and $\mu(Kg = f) =1$. Writing ${\mathcal P}_\mu(F) := \{\eta \in {\mathcal P}(F) : \eta \ll \mu\}$, we then have
\begin{align*}
\rho_\nu(g) &= \sup_{\eta \in {\mathcal P}(G)}\left(\int_G g\,d\eta - \alpha(\eta | \nu)\right) \\
&\ge \sup_{\eta \in {\mathcal P}_\mu(F)}\left(\int_G g \,d(\eta K) - \alpha(\eta K | \nu)\right) \\
&= \sup_{\eta \in {\mathcal P}_\mu(F)}\left(\int_F f \,d\eta - \alpha(\eta K | \mu K)\right) \\
&\ge \sup_{\eta \in {\mathcal P}_\mu(F)}\left(\int_F f \,d\eta - \alpha(\eta | \mu )\right) \\
&= \rho_\mu(f).
\end{align*}
Indeed, the first inequality restricts the supremum to measures of the form $\eta K$, the subsequent equality uses $\nu = \mu K$ together with $\int_G g\,d(\eta K) = \int_F Kg\,d\eta = \int_F f\,d\eta$ for $\eta \ll \mu$, and the second inequality follows from assumption (3). Reversing the roles of $f$ and $g$ completes the proof. To prove the existence of such a kernel, we appeal to a famous theorem of Strassen \cite[Theorem 3]{strassen-marginals}: For $\phi \in C_b(G)$, define $h_\phi$ on $F$ by
\[
h_\phi(x) = \sup_{\eta \in S(x)}\int_G\phi\,d\eta, \quad \text{ where } \quad S(x) := \left\{\eta \in {\mathcal P}(G) : \int_Gg\,d\eta = f(x)\right\}.
\]
If $S(x)$ is nonempty for each $x \in F$, and if $h_\phi$ is measurable, then Strassen's theorem says that there exists a kernel $K$ from $F$ to $G$ satisfying both $\mu K=\nu$ and $\mu(Kg=f)=1$ if and only if
\begin{align}
\int_G\phi\,d\nu \le \int_Fh_\phi\,d\mu, \text{ for all } \phi \in C_b(G). \label{pf:divergence-characterization1}
\end{align}
Suppose for the moment that $S(x)$ is nonempty for each $x \in F$ and that $h_\phi$ is measurable, so that we can apply this theorem. Define a new function $\widetilde{h}_\phi$ on ${\mathbb R}$ by
\begin{align*}
\widetilde{h}_\phi(a) := \sup\phi(g^{-1}(\{a\})) = \sup\left\{\phi(y) : y \in G, \ g(y) = a\right\},
\end{align*}
with the usual convention $\sup\emptyset = -\infty$.
Let us check that
\begin{align}
\widetilde{h}_\phi(f(x)) \le h_\phi(x), \ \mu-a.e. \ x \in F. \label{pf:divergence-characterization2}
\end{align}
If $x \in F$ has $\widetilde{h}_\phi(f(x)) = -\infty$, there is nothing to prove.
So suppose $x \in F$ satisfies $\widetilde{h}_\phi(f(x)) > -\infty$. For fixed $\epsilon > 0$ we may then choose $y_0 \in G$ such that $g(y_0)=f(x)$ and $\phi(y_0) \ge \widetilde{h}_\phi(f(x)) - \epsilon$. Then, since $\int_G g\,d\delta_{y_0} = f(x)$, we have by definition $\phi(y_0) \le h_\phi(x)$, and thus $\widetilde{h}_\phi(f(x)) \le \epsilon + h_\phi(x)$. Since $\epsilon$ was arbitrary, this proves \eqref{pf:divergence-characterization2}.
Finally, since clearly $\phi(y) \le \widetilde{h}_\phi(g(y))$ for all $y \in G$, using $\nu \circ g^{-1} = \mu \circ f^{-1}$ we prove \eqref{pf:divergence-characterization1}:
\[
\int_G\phi\,d\nu \le \int_G\widetilde{h}_\phi\circ g\,d\nu = \int_F\widetilde{h}_\phi \circ f\,d\mu \le \int_Fh_\phi\,d\mu.
\]
It remains to check the technical points left out above. First note that $S(x)$ is nonempty for $\mu$-almost every $x$: If $S(x)$ is empty then $f(x)$ is not in the range of $g$, which cannot hold on a set of positive $\mu$-measure because $\mu \circ f^{-1} = \nu \circ g^{-1}$. Modify $S(x)$ to equal ${\mathcal P}(G)$ on this null set, so that $S(x) \ne \emptyset$ for every $x$. Next note that $h_\phi$ is universally measurable because the graph of $S$ is analytic \cite[Proposition 7.47]{bertsekasshreve}, so we may apply Strassen's theorem by simply replacing the Borel $\sigma$-field on $F$ with its universal completion.
\end{proof}
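To illustrate the reconstruction \eqref{def:inducedriskmeasurefamily} in the entropic case: taking $\alpha = H$ (relative entropy), the supremum recovers $\rho_\mu(f) = \log\int_E e^f\,d\mu$, attained at the Gibbs measure $d\nu^* \propto e^f\,d\mu$. A finite-space numerical check (hypothetical values; not part of the formal development):

```python
import numpy as np

def kl(nu, mu):
    """Relative entropy H(nu|mu) on a finite space (natural log)."""
    mask = nu > 0
    return float(np.sum(nu[mask] * np.log(nu[mask] / mu[mask])))

mu = np.array([0.25, 0.25, 0.5])
f = np.array([0.3, -1.0, 2.0])

# With alpha = KL, the supremum defining rho_mu(f) is attained at the
# Gibbs measure nu* proportional to mu * exp(f):
nu_star = mu * np.exp(f)
nu_star /= nu_star.sum()
rho = np.log(mu @ np.exp(f))            # entropic risk measure (eta = 1)

assert np.isclose(nu_star @ f - kl(nu_star, mu), rho)

# No other nu does better:
rng = np.random.default_rng(3)
for _ in range(100):
    nu = rng.dirichlet(np.ones(3))
    assert nu @ f - kl(nu, mu) <= rho + 1e-12
```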
With the previous result in mind, it is natural to make the following definition:
\begin{definition}
A \emph{divergence} is a family of convex lower-semicontinuous (with respect to total variation) functions ${\mathcal P}(E) \ni \nu \mapsto \alpha(\nu | \mu) \in [0,\infty]$, for each Polish space $E$ and each $\mu \in {\mathcal P}(E)$, satisfying properties (1-3) of Theorem \ref{th:divergence-characterization}. Given a divergence $\alpha$, the \emph{corresponding (or induced) family of risk measures} is the family $(\rho_\mu)_{\mu,E}$ defined by \eqref{def:inducedriskmeasurefamily}. The \emph{corresponding (or induced) risk measure} is the risk measure $\bar{\rho}$ defined on $L^\infty=L^\infty(\Omega,{\mathcal F},P)$ by
\[
\bar{\rho}(X) := \rho_{P \circ X^{-1}}(id),
\]
where $id$ denotes the identity map on ${\mathbb R}$. Thanks to Theorem \ref{th:divergence-characterization}, $\bar{\rho}$ is well defined. It is straightforward to check that its induced divergence is exactly $\alpha$, and also that $\bar{\rho}_\mu = \rho_\mu$ for each Polish space $E$ and $\mu \in {\mathcal P}(E)$.
\end{definition}
\subsection{Simplified divergences} \label{se:convexitycontinuitysufficiency}
An important property of relative entropy is that its dual formula can be reduced to a supremum over \emph{continuous} functions: For a Polish space $E$ and for $\mu,\nu \in {\mathcal P}(E)$,
\[
H(\nu | \mu) = \sup_{f \in B(E)}\left(\int_E f\,d\nu - \log\int_Ee^f\,d\mu\right) = \sup_{f \in C_b(E)}\left(\int_E f\,d\nu - \log\int_Ee^f\,d\mu\right).
\]
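On a finite space the supremum above is attained at $f = \log(d\nu/d\mu)$, for which $\int_E e^f\,d\mu = 1$. A short numerical check of this dual formula (hypothetical values; not part of the formal development):

```python
import numpy as np

mu = np.array([0.5, 0.3, 0.2])
nu = np.array([0.2, 0.2, 0.6])

H = float(np.sum(nu * np.log(nu / mu)))   # relative entropy H(nu|mu)

# The supremum is attained at f = log(dnu/dmu), where log(int e^f dmu) = 0:
f_star = np.log(nu / mu)
assert np.isclose(nu @ f_star - np.log(mu @ np.exp(f_star)), H)

# Any other f gives a value no larger than H (Donsker-Varadhan):
rng = np.random.default_rng(1)
for _ in range(200):
    g = rng.normal(size=3)
    assert nu @ g - np.log(mu @ np.exp(g)) <= H + 1e-12
```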
For our characterization of superadditivity of divergences in Section \ref{se:timeconsistency}, it will be important for us to have a similar result for general divergences. Such a simplification is not always possible, so we make a definition:
\begin{definition} \label{def:simplified}
The divergence $\alpha$ induced by a law invariant risk measure $\rho$ is said to be \emph{simplified} if for every $\mu,\nu \in {\mathcal P}([0,1])$ we have
\[
\alpha(\nu | \mu) = \sup_{f \in C([0,1])}\left(\int_{[0,1]} f\,d\nu - \rho_\mu(f)\right).
\]
Equivalently, for each $\mu \in {\mathcal P}([0,1])$, the map $\alpha(\cdot | \mu)$ is weakly lower semicontinuous, where ``weakly'' refers to the usual weak convergence topology $\sigma({\mathcal P}(E),C_b(E))$.\footnote{This equivalence is a simple application of the Fenchel-Moreau theorem: Let $M([0,1])$ denote the set of bounded finitely additive signed measures on $[0,1]$, and extend $\alpha(\cdot | \mu)$ to $M([0,1])$ by setting $\bar{\alpha}_\mu(\nu) = \alpha(\nu | \mu)$ for $\nu \in {\mathcal P}([0,1])$ and $\bar{\alpha}_\mu(\nu) = \infty$ otherwise. Put $M([0,1])$ and $C([0,1])$ in duality. Then $\rho_\mu$ is the convex conjugate of $\bar{\alpha}_\mu$. As $\bar{\alpha}_\mu$ is convex and proper, it is lower semicontinuous if and only if it equals its biconjugate.}
\end{definition}
An important reason for this definition is the following measurability result, which we are unfortunately unable to prove without the additional assumption.
\begin{lemma} \label{le:jointlymeasurable}
Every simplified divergence $\alpha$ is jointly measurable, in the sense that for any fixed Polish space $E$ the function $\alpha(\cdot | \cdot)$ is jointly measurable on ${\mathcal P}(E) \times {\mathcal P}(E)$ (with respect to the Borel $\sigma$-field generated by the topology of weak convergence).
\end{lemma}
\begin{proof}
Fix a Polish space $E$. By Borel isomorphism (see \cite[Theorem 15.6]{kechris-settheory}), there exists a measurable bijection $T : E \rightarrow [0,1]$ with measurable inverse. It follows from Proposition \ref{pr:informationinequality} that
\[
\alpha(\nu \circ T^{-1} | \mu \circ T^{-1}) = \alpha(\nu | \mu),
\]
for all $\mu,\nu \in {\mathcal P}(E)$. Note also that the map $\mu \mapsto \mu \circ T^{-1}$ is a measurable bijection from ${\mathcal P}(E)$ to ${\mathcal P}([0,1])$ with measurable inverse. Thus, to show $\alpha$ is jointly measurable on ${\mathcal P}(E) \times {\mathcal P}(E)$, it suffices
to show it is jointly measurable on ${\mathcal P}([0,1]) \times {\mathcal P}([0,1])$.
Since $\alpha$ is simplified, we have
\[
\alpha(\nu | \mu) = \sup_{f \in C([0,1])}\left(\int f\,d\nu - \rho_\mu(f)\right)
\]
for $\mu,\nu \in {\mathcal P}([0,1])$. Since $\rho_\mu$ is Lipschitz with respect to the supremum norm on $C([0,1])$, and since $C([0,1])$ is separable, we may reduce the supremum above to a countable one. But $\nu \mapsto \int f\,d\nu$ is measurable for each $f \in C([0,1])$, as is $\mu \mapsto \rho_\mu(f)$ thanks to Lemma \ref{le:rhofamilyregularity}.
\end{proof}
\subsection{Semicontinuity of divergences}
The rest of the section studies lower semicontinuity properties of $\alpha$, in part for their intrinsic interest, and in part for a tractable condition that will allow us to verify that all of the examples of divergences we discuss in Section \ref{se:examples} are indeed simplified.
We did not find a good characterization of simplified divergences on the dual side, i.e., in terms of $\rho$, but the following partial results shed some light on the condition nonetheless.
We know that for any divergence $\alpha$, the map $\alpha(\cdot | \mu)$ is lower semicontinuous with respect to the topology $\sigma({\mathcal P}(E),B(E))$ for any fixed $\mu$, $E$. We will see later in Proposition \ref{pr:strongerlsc} that in fact $\alpha(\cdot|\cdot)$ is \emph{jointly} lower semicontinuous with respect to the same topology. On the other hand, relative entropy is known to be jointly lower semicontinuous with respect to weak convergence, and we first characterize those divergences which share this property. For this it helps to make two definitions, the second of which is well known:
\begin{definition} \label{def:lsc}
We say a divergence $\alpha$ is \emph{jointly weakly lower semicontinuous} if, for each Polish space $E$, the map $\alpha(\cdot|\cdot)$ is lower semicontinuous on ${\mathcal P}(E) \times {\mathcal P}(E)$ with respect to the topology of weak convergence, i.e., equipping ${\mathcal P}(E)$ with the topology $\sigma({\mathcal P}(E),C_b(E))$.
\end{definition}
\begin{definition} \label{def:lebesgue-continuous}
We say a risk measure $\rho$ is \emph{Lebesgue continuous} if whenever $X_n \in L^\infty$ is a uniformly bounded sequence with $X_n \rightarrow X$ a.s. we have $\rho(X_n) \rightarrow \rho(X)$. This is equivalent to the seemingly weaker condition that whenever $X_n,X \in L^\infty$ with $X_n \downarrow X$ a.s. we have $\rho(X_n) \downarrow \rho(X)$ (cf. Remark 4.25 and Exercise 4.2.2 of \cite{follmer-schied-book}).
\end{definition}
The main result of this section is the following, and the proof is preceded by a preparatory lemma:
\begin{theorem} \label{th:tight-lowersemicontinuity}
Let $\rho$ be a law invariant risk measure with induced divergence $\alpha$. The following are equivalent:
\begin{enumerate}
\item $\alpha$ is jointly weakly lower semicontinuous.
\item For each Polish space $E$ and each $f \in C_b(E)$, the map $\mu \mapsto \rho_\mu(f)$ is continuous.
\item For each Polish space $E$, each $\mu \in {\mathcal P}(E)$, and each $f,f_n \in C_b(E)$ with $f_n \rightarrow f$ pointwise and with $f_n$ uniformly bounded, we have $\rho_\mu(f_n) \rightarrow \rho_\mu(f)$.
\item $\rho$ is Lebesgue continuous.
\end{enumerate}
If these conditions hold, then:
\begin{enumerate}
\item[(5)] For each Polish space $E$ and each $\nu,\mu \in {\mathcal P}(E)$ we have
\[
\alpha(\nu | \mu) = \sup_{f \in C_b(E)}\left(\int_E f\,d\nu - \rho_\mu(f)\right).
\]
\end{enumerate}
\end{theorem}
\begin{lemma} \label{le:rhofamilyregularity}
Let $\rho$ be a law invariant risk measure. Fix a Polish space $E$ and a function $f \in B(E)$. The map $\Phi : {\mathcal P}(E) \rightarrow {\mathbb R}$ given by $\Phi(\mu) := \rho_\mu(f)$ is measurable. Moreover, if $f \in C_b(E)$, then $\Phi$ is lower semicontinuous.
\end{lemma}
\begin{proof}
First we prove the second claim. Let $\mu_n \rightarrow \mu$ in ${\mathcal P}(E)$. By Skorohod representation, we may find $E$-valued random variables $X,X_n$ defined on $\Omega$ with $P \circ X^{-1}=\mu$, $P \circ X_n^{-1} = \mu_n$, and $X_n \rightarrow X$ almost surely. Then $f(X_n) \rightarrow f(X)$ almost surely since $f$ is continuous, and the sequence $f(X_n)$ is uniformly bounded. Thus, the Fatou property (Theorem \ref{th:jouini-touzi-schachermayer}) implies
\[
\rho_\mu(f) = \rho(f(X)) \le \liminf_{n\rightarrow\infty}\rho(f(X_n)) = \liminf_{n\rightarrow\infty}\rho_{\mu_n}(f).
\]
To prove the first claim, find $M > 0$ such that $|f| \le M$, and write $\rho_\mu(f) = \rho_{\mu \circ f^{-1}}(id)$, where $id$ denotes the identity map on $[-M,M]$. According to the previous argument, $m \mapsto \rho_m(id)$ is lower semicontinuous and thus measurable on ${\mathcal P}([-M,M])$. Since also $\mu \mapsto \mu \circ f^{-1}$ is Borel measurable from ${\mathcal P}(E)$ to ${\mathcal P}([-M,M])$ (easily proven using, e.g., \cite[Proposition 7.25]{bertsekasshreve}), we see that $\Phi$ is the composition of two measurable maps.
\end{proof}
\begin{proof}[Proof of Theorem \ref{th:tight-lowersemicontinuity}]
($1 \Rightarrow 2$) Suppose first that $E$ is compact. Let $\mu_n \rightarrow \mu$ in ${\mathcal P}(E)$. We know from Lemma \ref{le:rhofamilyregularity} that $\rho_\mu(f) \le \liminf_{n\rightarrow\infty}\rho_{\mu_n}(f)$, so we show upper semicontinuity. Let $\epsilon > 0$, and find for each $n$ some $\nu_n \in {\mathcal P}(E)$ satisfying
\[
\rho_{\mu_n}(f) \le \epsilon + \int f\,d\nu_n - \alpha(\nu_n | \mu_n).
\]
Since $E$ is compact, every subsequence admits a further subsequence $\{n_k\}$ such that $\nu_{n_k} \rightarrow \nu$ for some $\nu \in {\mathcal P}(E)$, and lower semicontinuity of $\alpha$ implies
\[
\limsup_{k\rightarrow\infty}\rho_{\mu_{n_k}}(f) \le \epsilon + \int f\,d\nu - \alpha(\nu | \mu) \le \epsilon + \rho_\mu(f).
\]
This shows $\limsup_{n\rightarrow\infty}\rho_{\mu_n}(f) \le \rho_\mu(f)$.
Finally, if $E$ is not necessarily compact, find $M > 0$ such that $\mu_n(|f| \le M) = 1$. Since $[-M,M]$ is compact, the previous result shows that
\[
\rho_{\mu_n}(f) = \rho_{\mu_n \circ f^{-1}}(id) \rightarrow \rho_{\mu \circ f^{-1}}(id) = \rho_\mu(f),
\]
where $id$ is the identity map on $[-M,M]$.
($2 \Rightarrow 3$) Let $f_n,f \in C_b(E)$ be uniformly bounded with $f_n \rightarrow f$ $\mu$-a.s. Then there exists $M > 0$ such that $\mu(|f_n| \le M) = 1$ for all $n$, and $\mu \circ f_n^{-1} \rightarrow \mu \circ f^{-1}$ weakly by bounded convergence, so (2) yields
\[
\rho_\mu(f_n) = \rho_{\mu \circ f_n^{-1}}(id) \rightarrow \rho_{\mu \circ f^{-1}}(id) = \rho_\mu(f),
\]
where $id$ denotes the identity map on $[-M,M]$.
($3 \Rightarrow 4$) Let $X,X_n \in L^\infty$ be uniformly bounded with $X_n \rightarrow X$ a.s. Find $M > 0$ such that $|X_n|\le M$ a.s. for all $n$. Let $E$ denote the (complete separable) metric space of convergent sequences with values in $[-M,M]$ endowed with the supremum metric, and let $\mu \in {\mathcal P}(E)$ denote the law of the sequence,
\[
\mu := P \circ (X_1,X_2,\ldots)^{-1},
\]
which is supported on $E$ precisely because $X_n \rightarrow X$ a.s.
Let $f_n(x)=x_n$ denote the coordinate maps, and let $f(x) = \lim_{n\rightarrow\infty}x_n$,
for $x=(x_1,x_2,\ldots) \in E$. Then $f$ and $f_n$ are uniformly bounded and continuous, with $f_n \rightarrow f$ pointwise by construction. Since $\mu \circ f_n^{-1} = P \circ X_n^{-1}$ and $\mu \circ f^{-1} = P \circ X^{-1}$, we have
\[
\rho(X_n) = \rho_\mu(f_n) \rightarrow \rho_\mu(f) = \rho(X).
\]
($4 \Rightarrow 5$) Clearly we have
\[
\alpha(\nu | \mu ) \ge \sup_{f \in C_b(E)}\left(\int f\,d\nu - \rho_\mu(f)\right).
\]
To show the reverse inequality, fix $\epsilon > 0$ and find $f \in B(E)$ such that
\[
\alpha(\nu | \mu ) \le \epsilon + \int f\,d\nu - \rho_\mu(f).
\]
Find a uniformly bounded sequence $f_n$ of continuous functions with $f_n \rightarrow f$ $(\mu+\nu)$-a.e. (possible, e.g., by Lusin's theorem). Then, using Lebesgue continuity (4) (applied to $f_n(X) \rightarrow f(X)$ for $X$ with law $\mu$) and the bounded convergence theorem, we get
\[
\alpha(\nu | \mu ) \le \epsilon + \lim_{n\rightarrow\infty}\left(\int f_n\,d\nu - \rho_\mu(f_n)\right) \le \epsilon + \sup_{f \in C_b(E)}\left(\int f\,d\nu - \rho_\mu(f)\right).
\]
($4 \Rightarrow 3$) Obvious.
($3 \Rightarrow 2$) Let $\mu_n \rightarrow \mu$ in ${\mathcal P}(E)$ and $f \in C_b(E)$. Let $\lambda$ denote Lebesgue measure on $[0,1]$, and let $q_n$ and $q$ denote the quantile functions corresponding to $\mu_n \circ f^{-1}$ and $\mu\circ f^{-1}$, respectively, so that $\mu_n \circ f^{-1} = \lambda \circ q_n^{-1}$ and $\mu \circ f^{-1} = \lambda \circ q^{-1}$. Since $\mu_n \circ f^{-1} \rightarrow \mu \circ f^{-1}$ weakly, the $q_n$ are uniformly essentially bounded with $q_n \rightarrow q$ $\lambda$-a.e. Lebesgue continuity, available thanks to the implication ($3 \Rightarrow 4$) proven above, and law invariance then yield
\[
\rho_{\mu_n}(f) = \rho_\lambda(q_n) \rightarrow \rho_\lambda(q) = \rho_\mu(f).
\]
($4 \Rightarrow 1$) We know by now that (4) implies both (5) and (2), and thus we can write
\[
\alpha(\nu | \mu) = \sup_{f \in C_b(E)}\left(\int f\,d\nu - \rho_\mu(f)\right).
\]
Since the map $(\mu,\nu) \mapsto \int f\,d\nu - \rho_\mu(f)$ is jointly continuous by (2), $\alpha$ is lower semicontinuous as the supremum of continuous functions.
\end{proof}
As announced before, there are some additional continuity properties of potential interest, although we shall not use these in the sequel.
Note that Lemma \ref{le:jointlymeasurable} does not follow from the following Proposition \ref{pr:strongerlsc}, because the Borel $\sigma$-field of $\sigma({\mathcal P}(E),B(E))$ is typically strictly larger than the Borel $\sigma$-field of the topology of weak convergence.
\begin{proposition} \label{pr:strongerlsc}
Suppose $\rho$ is a law invariant risk measure with induced divergence $\alpha$.
If ${\mathcal P}(E)$ is endowed with the topology $\sigma({\mathcal P}(E),B(E))$, then the map $\mu \mapsto \rho_\mu(f)$ is continuous for every $f \in B(E)$, and $\alpha(\cdot|\cdot)$ is lower semicontinuous with respect to the product topology on ${\mathcal P}(E) \times {\mathcal P}(E)$.
\end{proposition}
\begin{proof}
First, fix $f \in B(E)$ with $|f| \le M$ for $M > 0$. Note that $\rho_\mu(f) = \rho_{\mu \circ f^{-1}}(id)$, where $id$ denotes the identity map on $[-M,M]$. We saw in Lemma \ref{le:rhofamilyregularity} that ${\mathcal P}([-M,M]) \ni m \mapsto \rho_m(id)$ is continuous with respect to weak convergence. It is easy to check that $\mu \mapsto \mu \circ f^{-1}$ is a continuous map from $({\mathcal P}(E),\sigma({\mathcal P}(E),B(E)))$ to ${\mathcal P}([-M,M])$ endowed with the weak convergence topology, and this proves the first claim. According to the definition \eqref{def:alpha}, $\alpha(\nu |\mu)$ is the supremum of continuous functions of $(\nu,\mu)$, and this proves the second claim.
\end{proof}
\section{Acceptance consistency and superadditivity} \label{se:timeconsistency}
As was first observed by Weber \cite{weber-distributioninvariant}, a law invariant risk measure naturally gives rise to a \emph{dynamic risk measure} on any (nice enough) filtered probability space. We will use the same construction:
Define $\tilde{\rho}$ again by $\tilde{\rho}(P \circ X^{-1}) = \rho(X)$, which makes sense thanks to law-invariance. Using our previous notation, note that $\tilde{\rho}(m) = \rho_m(id)$, where $id$ denotes the identity map on ${\mathbb R}$. We may then define, for any $\sigma$-field ${\mathcal G} \subset {\mathcal F}$ in $\Omega$ and any $X \in L^\infty$,
\[
\rho(X | {\mathcal G}) := \tilde{\rho}(P(X \in \cdot \, | \, {\mathcal G})).
\]
Note that a regular conditional law of $X$ given ${\mathcal G}$ exists because $\Omega$ is standard. Lemma \ref{le:rhofamilyregularity} ensures that $\rho(X | {\mathcal G})$ is a ${\mathcal G}$-measurable random variable, defined uniquely up to a.s. equality. Similarly, for a random variable $Y$, write $\rho(X | Y) := \rho(X | \sigma(Y))$. Note that if $Y$ is ${\mathcal G}$-measurable then
\[
\rho(X +Y | {\mathcal G}) = \rho(X | {\mathcal G}) + Y, \ a.s.,
\]
for any random variable $X$. If $X$ and $Y$ are independent, then it is straightforward to check that
\[
\rho(f(X,Y) | X) = \rho(f(x,Y))|_{x = X}.
\]
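For the entropic risk measure $\rho(X) = \eta^{-1}\log{\mathbb E}[e^{\eta X}]$, these conditional constructions can be computed in closed form, and the tower property $\rho(X) = \rho(\rho(X \,|\, {\mathcal G}))$ holds with equality (time consistency). A finite-space Python sketch with two independent discrete variables (hypothetical values; not part of the formal development):

```python
import numpy as np

eta = 2.0

def rho(values, probs):
    """Entropic risk measure rho(X) = (1/eta) log E[exp(eta X)]."""
    return float(np.log(probs @ np.exp(eta * values)) / eta)

# Independent U and V with two states each, and an outcome X = f(U, V):
pU = np.array([0.4, 0.6])
pV = np.array([0.3, 0.7])
f = np.array([[1.0, -1.0],
              [0.5,  2.0]])             # f(u, v)

# Conditional risk rho(f(U,V) | U), as a function of the value u of U:
rho_cond = np.array([rho(f[u], pV) for u in range(2)])

# Tower property rho(X) = rho(rho(X | U)) for the entropic risk measure:
joint_vals = f.ravel()
joint_probs = np.outer(pU, pV).ravel()
assert np.isclose(rho(joint_vals, joint_probs), rho(rho_cond, pU))
```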
We are nearly ready to define the type of time-consistency we investigate.
\begin{definition}
We say that a law-invariant risk measure $\rho$ is \emph{acceptance consistent} if
\[
\rho(X) \le \rho(\rho(X | {\mathcal G})),
\]
for all sub-$\sigma$-fields ${\mathcal G} \subset {\mathcal F}$ and all $X \in L^\infty$. If the inequality is reversed, we say $\rho$ is \emph{rejection consistent}. We say $\rho$ is \emph{time consistent} if it is both acceptance and rejection consistent.
\end{definition}
\begin{remark}
This definition begins to look more like the one appearing in the literature (see \cite{acciaio-penner-dynamic}) once it is applied inductively. Let $({\mathcal F}_t)_{t \ge 0}$ denote any filtration on $\Omega$, with ${\mathcal F}_t \subset {\mathcal F}$ for all $t$. Then $(\rho(\cdot | {\mathcal F}_t))_{t \ge 0}$ is a \emph{dynamic risk measure} in the sense of \cite{acciaio-penner-dynamic}. If $\rho$ is acceptance consistent and $X \in L^\infty$, then it is straightforward to check that $\rho(X | {\mathcal F}_s) \le \rho(\rho(X | {\mathcal F}_t) | {\mathcal F}_s)$ a.s. for $0 \le s \le t$.
\end{remark}
\subsection{Superadditivity and shift-convexity}
Let us give names to certain divergence inequalities resembling the chain rule of classical relative entropy.
Henceforth we will need to assume that our divergences are simplified, as in Definition \ref{def:simplified}.
As far as the following definition of superadditivity is concerned,
this assumption is merely to ensure that the divergence $\alpha(\cdot | \cdot)$ is jointly measurable, so that the integrals make sense. Later, a technical point in the proof of the main Theorem \ref{th:mainequivalence} will depend crucially on the divergence being simplified, but the question of whether or not Theorem \ref{th:mainequivalence} holds in more generality remains open.
\begin{definition}
We say that a divergence $\alpha$ is \emph{partially superadditive} (resp. \emph{partially subadditive}) if
\[
\alpha\left(\left.\nu(dx)K^\nu_x(dy) \right| \mu_1 \times \mu_2\right) \ge \alpha(\nu | \mu_1) + \int\nu(dx)\alpha(K^\nu_x | \mu_2), \ \text{ (resp. } \le \text{) }
\]
whenever $\nu(dx)K^\nu_x(dy)$ and $\mu_1 \times \mu_2$ are probability measures on the product of two Polish spaces; note that the latter is required to be a product measure.
We say a simplified divergence $\alpha$ is (fully) \emph{superadditive} (resp. \emph{subadditive}) if
\[
\alpha\left(\left. \nu(dx)K^\nu_x(dy) \right| \mu(dx)K^\mu_x(dy)\right) \ge \alpha(\nu | \mu) + \int\nu(dx)\alpha(K^\nu_x | K^\mu_x), \ \text{ (resp. } \le \text{) }
\]
whenever $\nu(dx)K^\nu_x(dy)$ and $\mu(dx)K^\mu_x(dy)$ are probability measures on the product of two Polish spaces.
\end{definition}
Note that partial superadditivity, as opposed to full superadditivity, only requires the inequality to hold when the reference measure is a product.
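Relative entropy is the prototypical example satisfying both inequalities, with equality: the chain rule gives $H(\nu(dx)K^\nu_x(dy)\,|\,\mu(dx)K^\mu_x(dy)) = H(\nu|\mu) + \int\nu(dx)H(K^\nu_x|K^\mu_x)$. A randomized finite-space check (not part of the formal development):

```python
import numpy as np

def kl(nu, mu):
    """Relative entropy H(nu|mu) on a finite space (natural log)."""
    mask = nu > 0
    return float(np.sum(nu[mask] * np.log(nu[mask] / mu[mask])))

rng = np.random.default_rng(2)
mu = rng.dirichlet(np.ones(3))
nu = rng.dirichlet(np.ones(3))
Kmu = rng.dirichlet(np.ones(4), size=3)     # kernels from a 3-point E to a 4-point F
Knu = rng.dirichlet(np.ones(4), size=3)

# Joint measures nu(dx) K^nu_x(dy) and mu(dx) K^mu_x(dy), flattened:
joint_nu = (nu[:, None] * Knu).ravel()
joint_mu = (mu[:, None] * Kmu).ravel()

# Chain rule: relative entropy is both super- and subadditive (equality):
chain = kl(nu, mu) + nu @ np.array([kl(Knu[x], Kmu[x]) for x in range(3)])
assert np.isclose(kl(joint_nu, joint_mu), chain)
```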
It turns out that these conditions are equivalent, although we have only an indirect proof of this fact. As was discussed in the introduction, additivity properties of a divergence $\alpha$ are linked with time consistency and sub-level set properties of its induced risk measure, which we now describe. In the following, for $M > 0$ we will write ${\mathcal P}[-M,M]$ for the set of probability measures on ${\mathbb R}$ which are supported on the interval $[-M,M]$.
\begin{definition} {\ }
\begin{enumerate}
\item The \emph{measure acceptance set} ${\mathcal A}$ of a law invariant risk measure $\rho$ is defined by ${\mathcal A} := \{P \circ X^{-1} : X \in L^\infty, \ \rho(X) \le 0\}$. In words, this is the set of laws of random variables $X$ satisfying $\rho(X) \le 0$.
\item A set ${\mathcal A} \subset {\mathcal P}({\mathbb R})$ is \emph{shift-convex} if for every $\mu \in {\mathcal A}$, every $M > 0$, and every measurable map ${\mathbb R} \ni x \mapsto K_x \in {\mathcal A} \cap {\mathcal P}[-M,M]$, it holds that the measure $\int_{\mathbb R}\mu(dx)K_x(\cdot - x)$ is in ${\mathcal A}$.
\end{enumerate}
\end{definition}
As was discussed by Weber, the \emph{convexity} of a measure acceptance set ${\mathcal A}$ admits a natural interpretation in terms of so-called compound lotteries: If two outcomes $X$ and $Y$ are acceptable, then convexity of ${\mathcal A}$ means that the outcome with law $tP\circ X^{-1} + (1-t)P\circ Y^{-1}$ is also acceptable, for any $t \in (0,1)$. Shift-convexity is open to interpretation on similar grounds:
Suppose $X$ is an acceptable outcome, and that $Y$ is conditionally acceptable given $X$. Then shift-convexity means that $X+Y$ is itself acceptable. To see this, in the definition of shift-convexity take $\mu$ to be the law of $X$ and $K_x$ to be the conditional law of $Y$ given $X$.
In Section \ref{se:shiftconvexity} we will elaborate on interpretations and reformulations of this unusual property. We can now state the main result of this section.
\begin{theorem} \label{th:mainequivalence}
Suppose $\alpha$ is a simplified divergence induced by a law invariant risk measure $\rho$ with acceptance set ${\mathcal A}$. The following are equivalent:
\begin{enumerate}
\item $\rho$ is acceptance consistent.
\item ${\mathcal A}$ is shift-convex.
\item $\alpha$ is superadditive.
\item $\alpha$ is partially superadditive.
\end{enumerate}
Similarly, the same equivalences hold when ``acceptance'' is changed to ``rejection'', ``superadditive'' is changed to ``subadditive'', and ${\mathcal A}$ is changed to ${\mathcal A}^c$. The equivalence of (1) and (2) holds without the assumption that $\alpha$ is simplified.
\end{theorem}
The equivalence between (1) and (3) is related to Theorem 27 of \cite{acciaio-penner-dynamic}, and Section \ref{se:essentialsuprema} elaborates on the precise connection.
From Theorem \ref{th:mainequivalence} we can conclude that \emph{not many divergences are additive}, i.e., both superadditive and subadditive.
According to Kupper and Schachermayer \cite{kupper-schachermayer}, the only time consistent law invariant risk measures are entropic:
\begin{corollary} \label{co:entropic}
Suppose $\alpha$ is a simplified divergence. If $\alpha$ is both superadditive and subadditive, then it is of one of the following forms:
\begin{align*}
\alpha(\nu | \mu) &= \frac{1}{\eta}H(\nu | \mu), \text{ for some } \eta > 0, \ \ \text{ for } \nu \ll \mu, \quad \infty \text{ otherwise}, \\
\alpha(\nu | \mu) &= 0 \ \ \text{ for } \nu = \mu, \quad \infty \text{ otherwise}, \\
\alpha(\nu | \mu) &= 0 \ \ \text{ for } \nu \ll \mu, \quad \infty \text{ otherwise}.
\end{align*}
The induced risk measures in these cases are, respectively, $\rho(X) = \eta^{-1}\log{\mathbb E}[e^{\eta X}]$, $\rho(X) = {\mathbb E} X$, and $\rho(X) = \esssup X$.
\end{corollary}
\subsection{Properties of time consistency}
The following lemma shows that acceptance consistency is equivalent to a seemingly weaker statement, which will be easier to connect with shift-convexity:
\begin{lemma} \label{le:partialacceptanceconsistency}
Let $\rho$ be a law invariant risk measure. Then $\rho$ is acceptance consistent if and only if the following holds: For every pair of independent random variables $X,Y$ with values in some Polish spaces $E,F$, and for every $f \in B(E \times F)$, we have
\[
\rho(f(X,Y)) \le \rho(\rho(f(X,Y) | X)).
\]
\end{lemma}
\begin{proof}
The ``only if'' direction is immediate. To prove the converse, fix $Z \in L^\infty$ and a $\sigma$-field ${\mathcal G} \subset {\mathcal F}$. Find $Y \in L^\infty$ which generates ${\mathcal G}$, for example $Y = \sum_{n=1}^\infty 2^{-n}1_{B_n}$ where $\{B_n\}$ is a countable family of generators of ${\mathcal G}$ (recall that our ambient probability space is standard). By \cite[Theorem 5.10]{kallenberg-foundations}, we may find independent random variables $\widetilde{Y}$ and $U$ as well as a measurable function $f$ such that $(\widetilde{Y},f(\widetilde{Y},U))$ has the same law as $(Y,Z)$. Then the hypothesis and law invariance imply
\[
\rho(Z) = \rho(f(\widetilde{Y},U)) \le \rho(\rho(f(\widetilde{Y},U) | \widetilde{Y})).
\]
But the conditional law of $f(\widetilde{Y},U)$ given $\widetilde{Y}$ is the same as the conditional law of $Z$ given $Y$, and thus law invariance of $\rho$ implies that $\rho(f(\widetilde{Y},U) | \widetilde{Y})$ and $\rho(Z | Y) = \rho(Z | {\mathcal G})$ have the same law. Using law invariance once more, we conclude that
\[
\rho(\rho(f(\widetilde{Y},U) | \widetilde{Y})) = \rho(\rho(Z | {\mathcal G})),
\]
and combining the last two displays gives $\rho(Z) \le \rho(\rho(Z | {\mathcal G}))$.
\end{proof}
The next proposition rephrases acceptance consistency in a more measure-theoretic notation which will be useful later.
\begin{proposition} \label{pr:acceptanceconsistent-equivalences}
For a law invariant risk measure $\rho$, the following are equivalent:
\begin{enumerate}
\item $\rho$ is acceptance consistent.
\item For Polish spaces $E$ and $F$, $\bar{\mu}=\mu(dx)K^\mu_x(dy) \in {\mathcal P}(E \times F)$, $f \in B(E \times F)$, and $g \in B(E)$ satisfying $\rho_\mu(g) \le 0$, we have
\[
\mu\left\{x \in E : \rho_{K^\mu_x}(f(x,\cdot)) \le g(x)\right\} = 1 \quad \Rightarrow \quad \rho_{\bar{\mu}}(f) \le 0.
\]
\item For Polish spaces $E$ and $F$, $\bar{\mu}=\mu(dx)K^\mu_x(dy) \in {\mathcal P}(E \times F)$, and $f \in B(E \times F)$, we have
\[
\rho_{\mu}\left(\rho_{K^\mu_x}(f(x,\cdot))|_{x = X}\right) \ge \rho_{\bar{\mu}}(f),
\]
where $X$ denotes the identity map on $E$.
\item For Polish spaces $E$ and $F$, $\mu_1 \in {\mathcal P}(E)$, $\mu_2 \in {\mathcal P}(F)$, $f \in B(E \times F)$, and $g \in B(E)$ satisfying $\rho_{\mu_1}(g) \le 0$, we have
\[
\mu_1\left\{x \in E : \rho_{\mu_2}(f(x,\cdot)) \le g(x)\right\} = 1 \quad \Rightarrow \quad \rho_{\mu_1 \times \mu_2}(f) \le 0.
\]
\item For Polish spaces $E$ and $F$, $\bar{\mu}=\mu_1 \times \mu_2 \in {\mathcal P}(E \times F)$, and $f \in B(E \times F)$, we have
\[
\rho_{\mu_1}\left(\rho_{\mu_2}(f(x,\cdot))|_{x = X}\right) \ge \rho_{\bar{\mu}}(f),
\]
where $X$ denotes the identity map on $E$.
\end{enumerate}
The same equivalences hold for rejection consistency, with the inequalities reversed.
\end{proposition}
\begin{proof} It is obvious that (3) implies (5) and (2) implies (4). Property (5) and the property described in Lemma \ref{le:partialacceptanceconsistency} are equivalent, merely written in different notation, and thus (5) and (1) are equivalent. It remains to prove $1 \Rightarrow 2 \Rightarrow 3$ and $4 \Rightarrow 5$.
($1 \Rightarrow 2$) Fix Polish spaces $E$ and $F$, $\bar{\mu}=\mu(dx)K^\mu_x(dy) \in {\mathcal P}(E \times F)$, $f \in B(E \times F)$, and $g \in B(E)$ satisfying $\rho_\mu(g) \le 0$. Suppose also
\[
\mu\left\{x \in E : \rho_{K^\mu_x}(f(x,\cdot)) \le g(x)\right\} = 1.
\]
Find an $E \times F$-valued random variable $(X,Y)$ with law $\bar{\mu}$, and note that
\[
\rho(f(X,Y) | X) = \rho_{K^\mu_X}(f(X,\cdot)) \le g(X), \ a.s.
\]
Acceptance consistency and monotonicity of $\rho$ yield
\[
\rho_{\bar{\mu}}(f) = \rho(f(X,Y)) \le \rho(\rho(f(X,Y) | X)) \le \rho(g(X)) = \rho_\mu(g) \le 0.
\]
($2 \Rightarrow 3$) Fix Polish spaces $E$ and $F$, $\bar{\mu}=\mu(dx)K^\mu_x(dy) \in {\mathcal P}(E \times F)$, and $f \in B(E \times F)$. Define $g \in B(E)$ by $g(x) = \rho_{K^\mu_x}(f(x,\cdot))$. Then trivially
\[
\mu\left\{x \in E : \rho_{K^\mu_x}(f(x,\cdot) - \rho_\mu(g)) \le g(x) - \rho_\mu(g)\right\} = 1.
\]
Since $\rho_\mu(g - \rho_\mu(g)) = 0$,
property (2) implies
\[
\rho_{\bar{\mu}}(f - \rho_\mu(g)) \le 0.
\]
Rearrange this to get $\rho_{\bar{\mu}}(f) \le \rho_\mu(g)$, as desired.
($4 \Rightarrow 5$) Fix Polish spaces $E$ and $F$, $\mu_1 \in {\mathcal P}(E)$, $\mu_2 \in {\mathcal P}(F)$, and $f \in B(E \times F)$. Define $g \in B(E)$ by $g(x) = \rho_{\mu_2}(f(x,\cdot))$. Then trivially
\[
\mu_1\left\{x \in E : \rho_{\mu_2}(f(x,\cdot) - \rho_{\mu_1}(g)) \le g(x) - \rho_{\mu_1}(g)\right\} = 1.
\]
Since $\rho_{\mu_1}(g - \rho_{\mu_1}(g)) = 0$,
property (4) implies
\[
\rho_{\mu_1 \times \mu_2}(f - \rho_{\mu_1}(g)) \le 0.
\]
Rearrange this to get $\rho_{\mu_1 \times \mu_2}(f) \le \rho_{\mu_1}(g)$, as desired.
\end{proof}
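As a concrete sanity check of the inequality in (5) (not part of the argument above; the helper \texttt{entropic} is our own illustrative construction), consider the entropic risk measure $\rho_\mu(f) = \log\int e^f\,d\mu$, which is both acceptance and rejection consistent, so the inequality in (5) holds with equality. On finite spaces this can be verified directly:

```python
import math

def entropic(mu, f):
    # rho_mu(f) = log sum_x mu[x] * exp(f[x])  (entropic risk measure, finite space)
    return math.log(sum(m * math.exp(v) for m, v in zip(mu, f)))

mu1 = [0.2, 0.5, 0.3]                       # measure on E = {0,1,2}
mu2 = [0.6, 0.4]                            # measure on F = {0,1}
f = [[-1.0, 0.5], [0.0, 2.0], [1.5, -0.5]]  # f(x, y) on the 3x2 grid

# inner risk g(x) = rho_{mu2}(f(x, .)), then outer risk rho_{mu1}(g)
g = [entropic(mu2, row) for row in f]
lhs = entropic(mu1, g)

# joint risk rho_{mu1 x mu2}(f) under the product measure
prod = [m1 * m2 for m1 in mu1 for m2 in mu2]
flat = [v for row in f for v in row]
rhs = entropic(prod, flat)

assert lhs >= rhs - 1e-12  # acceptance consistency; equality for the entropic measure
```

The equality here reflects the tower property $\log\int\!\int e^f\,d\mu_2\,d\mu_1 = \log\int e^{g}\,d\mu_1$.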
This alternative description of acceptance consistency will serve us especially well when we turn to superadditivity. For now, we use it to establish the connection between acceptance consistency and shift-convexity.
\begin{proposition} \label{pr:timeconsistent-shiftconvex}
A law invariant risk measure is acceptance consistent if and only if its measure acceptance set is shift-convex.
\end{proposition}
\begin{proof}
Let $\rho$ be a law-invariant risk measure with measure acceptance set ${\mathcal A}$. First, assume $\rho$ is acceptance consistent. Fix $\mu \in {\mathcal A}$, $M > 0$, and a measurable map ${\mathbb R} \ni x \mapsto K_x \in {\mathcal A} \cap {\mathcal P}[-M,M]$. Set $\bar{\mu} = \mu(dx)K_x(dy - x)$. Letting $\lambda$ denote Lebesgue measure on $[0,1]$, we may find (e.g. by \cite[Theorem 5.10]{kallenberg-foundations}) a measurable function $f : {\mathbb R} \times [0,1] \rightarrow {\mathbb R}$ such that, if $\hat{f}(x,y) := (x,f(x,y))$, then
\[
\bar{\mu} = (\mu \times \lambda) \circ \hat{f}^{-1}.
\]
Now set $g(x) = x$. Since $\lambda \circ f(x,\cdot)^{-1} = K_x(\cdot - x)$, we have
\[
\lambda \circ [f(x,\cdot)-g(x)]^{-1} = K_x \in {\mathcal A} \cap {\mathcal P}[-M,M],
\]
for each $x$. Thus
\[
\mu\left\{x \in {\mathbb R} : \rho_\lambda(f(x,\cdot)) \le g(x)\right\} = \mu\left\{x \in {\mathbb R} : \lambda \circ [f(x,\cdot)-g(x)]^{-1} \in {\mathcal A}\right\} = 1.
\]
Note that since $\mu$ has compact support and $K_x \in {\mathcal P}[-M,M]$ for all $x$, it follows that $f$ is essentially bounded with respect to $\mu \times \lambda$.
Since also $\mu \circ g^{-1} = \mu \in {\mathcal A}$, i.e., $\rho_\mu(g) \le 0$, acceptance consistency (Proposition \ref{pr:acceptanceconsistent-equivalences}(2)) implies that $\rho_{\mu \times \lambda}(f) \le 0$. In other words, $(\mu \times \lambda) \circ f^{-1} \in {\mathcal A}$. But this completes the proof of shift-convexity, since
\[
(\mu \times \lambda) \circ f^{-1} = \int_{{\mathbb R}}\mu(dx)K_x(\cdot - x).
\]
Conversely, assume now that ${\mathcal A}$ is shift-convex. Let $E$ and $F$ be Polish spaces, and fix $\mu_1 \in {\mathcal P}(E)$, $\mu_2 \in {\mathcal P}(F)$, $f \in B(E \times F)$, and $g \in B(E)$ with $\rho_{\mu_1}(g) \le 0$. Suppose also that
\[
\mu_1\left\{x \in E : \rho_{\mu_2}(f(x,\cdot)) \le g(x)\right\} = 1,
\]
or equivalently that
\[
\mu_1\left\{x \in E : \mu_2 \circ [f(x,\cdot)-g(x)]^{-1} \in {\mathcal A}\right\} = 1,
\]
In light of Proposition \ref{pr:acceptanceconsistent-equivalences}(4), we must check that $\rho_{\mu_1 \times \mu_2}(f) \le 0$, or equivalently that $(\mu_1 \times \mu_2) \circ f^{-1} \in {\mathcal A}$. Set $\nu := \mu_1 \circ g^{-1}$, and note that $\nu \in {\mathcal A}$. For $x \in {\mathbb R}$, define also
\[
K_x := \begin{cases}
\mu_2 \circ [f(x,\cdot)-g(x)]^{-1} &\text{if } \mu_2 \circ [f(x,\cdot)-g(x)]^{-1} \in {\mathcal A} \\
\delta_0 &\text{otherwise}.
\end{cases}
\]
(The choice of $\delta_0$ is arbitrary, and any other element of ${\mathcal A}$ would do.) Then $K_x \in {\mathcal A}$ for each $x$, and shift-convexity implies
\[
\int_{\mathbb R} \nu(dx)K_x(\cdot - x) \in {\mathcal A}.
\]
But in fact $\int_{\mathbb R} \nu(dx)K_x(\cdot - x)$ is equal to $(\mu_1 \times \mu_2) \circ f^{-1}$, since for $\phi \in B({\mathbb R})$ we have
\begin{align*}
\int_{\mathbb R} \nu(dx)\int_{\mathbb R} K_x(dy - x)\phi(y) &= \int_{\mathbb R} \nu(dx)\int_{\mathbb R} K_x(dy)\phi(x+y) \\
&= \int_E\mu_1(dx)\int_F\mu_2(dy)\phi\left(g(x) + f(x,y) - g(x)\right) \\
&= \int_{E \times F} \phi \circ f\,d(\mu_1 \times \mu_2).
\end{align*}
\end{proof}
Finally, before we turn to the proof of Theorem \ref{th:mainequivalence}, we compute a penalty function for the risk measure $X \mapsto \rho(\rho(X | {\mathcal G}))$, under no time consistency assumptions. This is related to results in \cite{acciaio-penner-dynamic} and \cite{cheridito2011composition}, among others, but differs in that our conditional penalty functions are defined as pointwise suprema rather than essential suprema; see Section \ref{se:essentialsuprema} for a discussion of this point.
\begin{proposition} \label{pr:keyidentity}
Let $\rho$ be a law invariant risk measure with induced divergence $\alpha$, which we assume is simplified. Let $E$ and $F$ be Polish spaces, and let $\bar{\mu} = \mu(dx)K^\mu_x(dy) \in {\mathcal P}(E \times F)$. Let $f \in B(E \times F)$, and let $X$ denote the identity map on $E$. Then
\[
\rho_{\mu}\left(\rho_{K^\mu_x}(f(x,\cdot))|_{x = X}\right) = \sup_{\nu(dx)K^\nu_x(dy) \in {\mathcal P}(E \times F)}\left\{\int_E\int_Ff(x,y)K^\nu_x(dy)\nu(dx) - \int_E\alpha(K^\nu_x | K^\mu_x) \nu(dx) - \alpha(\nu | \mu)\right\}.
\]
\end{proposition}
\begin{proof}
We first compute
\begin{align*}
\rho_{\mu}&\left(\rho_{K^\mu_x}(f(x,\cdot))|_{x = X}\right) \\
&= \sup_{\nu \in {\mathcal P}(E)}\left\{\int_E\rho_{K^\mu_x}(f(x,\cdot))\nu(dx) - \alpha(\nu | \mu)\right\} \\
&= \sup_{\nu \in {\mathcal P}(E)}\left\{\int_E\sup_{\eta \in {\mathcal P}(F)}\left(\int_Ff(x,y)\eta(dy) - \alpha(\eta | K^\mu_x) \right)\nu(dx) - \alpha(\nu | \mu)\right\}.
\end{align*}
Complete the proof by using a well known measurable selection argument \cite[Proposition 7.50]{bertsekasshreve} to deduce
\begin{align*}
\int_E&\sup_{\eta \in {\mathcal P}(F)}\left(\int_Ff(x,y)\eta(dy) - \alpha(\eta | K^\mu_x) \right)\nu(dx) \\
&= \sup_{K}\left(\int_E\int_Ff(x,y)K_x(dy)\nu(dx) - \int_E\alpha(K_x | K^\mu_x) \nu(dx)\right),
\end{align*}
where the supremum is over all kernels from $E$ to $F$.
\end{proof}
\subsection{Proof of Theorem \ref{th:mainequivalence}}
We saw in Proposition \ref{pr:timeconsistent-shiftconvex} that acceptance consistency and shift-convexity are equivalent. We will prove that acceptance consistency implies superadditivity and that partial superadditivity implies acceptance consistency. This is enough, since clearly superadditivity implies partial superadditivity. Fix throughout two Polish spaces $E$ and $F$ and a function $f \in B(E \times F)$.
First assume $\rho$ is partially superadditive. Fix $\bar{\mu} = \mu_1 \times \mu_2 \in {\mathcal P}(E \times F)$. Use Proposition \ref{pr:keyidentity} followed by partial superadditivity to get
\begin{align*}
\rho_{\mu_1}&\left(\rho_{\mu_2}(f(x,\cdot))|_{x = X}\right) \\
&= \sup_{\nu(dx)K^\nu_x(dy) \in {\mathcal P}(E \times F)}\left\{\int_E\int_Ff(x,y)K^\nu_x(dy)\nu(dx) - \int_E\alpha(K^\nu_x | \mu_2) \nu(dx) - \alpha(\nu | \mu_1)\right\} \\
&\ge \sup_{\nu(dx)K^\nu_x(dy) \in {\mathcal P}(E \times F)}\left\{\int_E\int_Ff(x,y)K^\nu_x(dy)\nu(dx) - \alpha(\nu(dx)K^\nu_x(dy) | \bar{\mu})\right\} \\
&= \rho_{\bar{\mu}}(f).
\end{align*}
Conclude from Proposition \ref{pr:acceptanceconsistent-equivalences}(5) that $\rho$ is acceptance consistent.
Now suppose $\rho$ is acceptance consistent. Let $\bar{\mu} = \mu(dx)K^\mu_x(dy) \in {\mathcal P}(E \times F)$. Use Proposition \ref{pr:keyidentity} followed by Proposition \ref{pr:acceptanceconsistent-equivalences}(3) to get
\begin{align}
\sup_{\nu(dx)K^\nu_x(dy) \in {\mathcal P}(E \times F)}&\left\{\int_E\int_Ff(x,y)K^\nu_x(dy)\nu(dx) - \int_E\alpha(K^\nu_x | K^\mu_x) \nu(dx) - \alpha(\nu | \mu)\right\} \nonumber \\
&= \rho_{\mu}\left(\rho_{K^\mu_x}(f(x,\cdot))|_{x = X}\right) \label{pf:mainthm1} \\
&\ge \rho_{\bar{\mu}}(f). \nonumber
\end{align}
On the other hand, according to Lemma \ref{le:integralconvexity} proven below, and using the definition of $\alpha$, we get
\begin{align*}
\int\nu(dx)\alpha(K^\nu_x|K^\mu_x) + \alpha(\nu | \mu) &= \sup_{f \in B(E \times F), g \in B(E)}\left\{\int f\,d\bar{\nu} - \int\nu(dx)\rho_{K^\mu_x}(f(x,\cdot)) + \int g\,d\nu - \rho_\mu(g)\right\} \\
&= \sup_{f \in B(E \times F), g \in B(E)}\left\{\int_{E \times F} (g(x) + f(x,y))\,\bar{\nu}(dx,dy) - \rho_\mu(g + \rho_{K^\mu_x}(f(x,\cdot)))\right\} \\
&= \sup_{f \in B(E \times F), g \in B(E)}\left\{\int f\,d\bar{\nu} - \rho_\mu(\rho_{K^\mu_x}(f(x,\cdot))|_{x = X})\right\}.
\end{align*}
Indeed, in the second line we replaced $g(x)$ by $g(x) + \rho_{K^\mu_x}(f(x,\cdot))$, and in the final step we replaced $f$ by $f+g$. This shows that the function of $\nu(dx)K^\nu_x(dy)\in{\mathcal P}(E \times F)$ given by
\begin{align}
\int_E\alpha(K^\nu_x | K^\mu_x) \nu(dx) + \alpha(\nu | \mu) \label{pf:mainthm2}
\end{align}
is precisely the minimal penalty function of the risk measure given by \eqref{pf:mainthm1}.
Since $\bar{\nu} \mapsto \alpha(\bar{\nu}|\bar{\mu})$ is the minimal penalty function of $\rho_{\bar{\mu}}$ (see Theorem \ref{th:follmerschied}), it follows from the order-reversing property of convex conjugation that $\alpha(\cdot|\bar{\mu})$ dominates the minimal penalty function of the risk measure given by \eqref{pf:mainthm1}. That is,
\[
\alpha\left(\left. \nu(dx)K^\nu_x(dy) \right| \bar{\mu}\right) \ge \int_E\alpha(K^\nu_x | K^\mu_x) \nu(dx) + \alpha(\nu | \mu),
\]
for every $\nu(dx)K^\nu_x(dy) \in {\mathcal P}(E \times F)$. \hfill \qedsymbol
\begin{lemma}[Nearly Lemma 4 of \cite{acciaio-penner-dynamic}] \label{le:integralconvexity}
For any $\bar{\nu} = \nu(dx)K^\nu_x(dy) \in {\mathcal P}(E \times F)$, we have
\[
\int\nu(dx)\alpha(K^\nu_x|K^\mu_x) = \sup_{f \in B(E \times F)}\left\{\int f\,d\bar{\nu} - \int\nu(dx)\rho_{K^\mu_x}(f(x,\cdot))\right\}.
\]
\end{lemma}
\subsection{Essential suprema} \label{se:essentialsuprema}
Let us briefly discuss how to connect our results with a more common dual characterization of acceptance consistency in terms of penalty functions, as can be found in \cite{acciaio-penner-dynamic}. Assume throughout that our divergence $\alpha$ is simplified.
Let $\bar{\mu} = \mu(dx)K^\mu_x(dy)\in {\mathcal P}(E \times F)$ for Polish spaces $E$ and $F$. Let $(X,Y)$ denote the identity map (i.e., coordinate maps) on $E \times F$, and define the filtration $({\mathcal F}_0,{\mathcal F}_1,{\mathcal F}_2)$ on $E \times F$ by letting ${\mathcal F}_0$ be the trivial $\sigma$-field, letting ${\mathcal F}_1 = \sigma(X)$, and letting ${\mathcal F}_2$ be the Borel $\sigma$-field.
Define a dynamic risk measure $(\rho_0,\rho_1)$ on $E \times F$ by
\begin{align*}
[\rho_1(f)](x,y) &= \rho_{K^\mu_x}(f(x,\cdot)), \text{ for } (x,y) \in E \times F \\
\rho_0(f) &= \rho_{\bar{\mu}}(f).
\end{align*}
That is, $\rho_1$ maps ${\mathcal F}_2$-measurable random variables to ${\mathcal F}_1$-measurable random variables. Alternatively, we could see $\rho_1$ as mapping from $L^\infty(E \times F,\bar{\mu})$ to $L^\infty(E,\mu)$.
In this notation, acceptance consistency simply means $\rho_0(f) \le \rho_0(\rho_1(f))$ for all $f \in B(E \times F)$.
According to Theorem 27 of \cite{acciaio-penner-dynamic}, acceptance consistency is equivalent to the inequality
\[
\alpha_0(\bar{\nu}) \ge \alpha_{0,1}(\bar{\nu}) + {\mathbb E}^{\bar{\nu}}\left[\alpha_{1}(\bar{\nu}) \right]
\]
holding for every $\bar{\nu} = \nu(dx)K^\nu_x(dy) \in {\mathcal P}_{\bar{\mu}}(E \times F)$, where $\alpha_0$, $\alpha_1$, and $\alpha_{0,1}$ are defined by
\begin{align*}
\alpha_0(\bar{\nu}) &:= \sup\left\{{\mathbb E}^{\bar{\nu}}[f] : f \in L^\infty(E \times F,
{\mathcal F}_2,\bar{\mu}), \ \rho_0(f) \le 0\right\} = \alpha(\bar{\nu} | \bar{\mu}), \\
\alpha_1(\bar{\nu}) &:= \esssup\left\{{\mathbb E}^{\bar{\nu}}[f | {\mathcal F}_1] : f \in L^\infty(E \times F,{\mathcal F}_2,\bar{\mu}), \ \rho_1(f) \le 0 \ a.s.\right\} \\
&= \esssup\left\{\int_F f(X,y)\,K^\nu_X(dy) : f \in L^\infty(E \times F,{\mathcal F}_2,\bar{\mu}), \ \mu\{ x\in E :\rho_{K^\mu_x}(f(x,\cdot)) \le 0\} = 1\right\} \\
\alpha_{0,1}(\bar{\nu}) &:= \sup\left\{{\mathbb E}^{\bar{\nu}}[f] : f \in L^\infty(E \times F,
{\mathcal F}_1,\bar{\mu}), \ \rho_0(f) \le 0\right\} \\
&= \sup\left\{\int_E f\,d\nu : f \in L^\infty(E,
\mu), \ \rho_{\mu}(f) \le 0\right\} = \alpha(\nu | \mu).
\end{align*}
Here ${\mathbb E}^{\bar{\nu}}$ denotes integration with respect to $\bar{\nu}$. In other words, acceptance consistency is equivalent to
\[
\alpha(\bar{\nu} | \bar{\mu}) \ge \alpha(\nu | \mu) + {\mathbb E}^{\bar{\nu}}\left[\alpha_{1}(\bar{\nu}) \right]
\]
holding for every $\bar{\nu} = \nu(dx)K^\nu_x(dy) \in {\mathcal P}_{\bar{\mu}}(E \times F)$. This differs from our definition of superadditivity only in the term ${\mathbb E}^{\bar{\nu}}[\alpha_1(\bar{\nu})]$. According to Lemma 4 of \cite{acciaio-penner-dynamic},
\begin{align*}
{\mathbb E}^{\bar{\nu}}\left[\alpha_{1}(\bar{\nu}) \right] &= \sup\left\{\int_{E \times F}f\,d\bar{\nu} : f \in L^\infty(E \times F,{\mathcal F}_2,\bar{\mu}), \ \rho_1(f) \le 0 \ a.s.\right\}.
\end{align*}
Note that $\rho_1(f) \le 0$ a.s. if and only if $\mu\{ x\in E :\rho_{K^\mu_x}(f(x,\cdot)) \le 0\} = 1$. Moreover, for any $f \in B(E \times F)$, if $g(x,y) := f(x,y) - \rho_{K^\mu_x}(f(x,\cdot))$ then $\rho_{K^\mu_x}(g(x,\cdot)) \le 0$ for all $x$. Thus
\[
{\mathbb E}^{\bar{\nu}}\left[\alpha_{1}(\bar{\nu}) \right] = \sup_{f \in B(E \times F)}\left\{\int f\,d\bar{\nu} - \int\nu(dx)\rho_{K^\mu_x}(f(x,\cdot))\right\}.
\]
But according to Lemma \ref{le:integralconvexity}, this is in turn equal to
\[
\int_E\nu(dx)\alpha(K^\nu_x | K^\mu_x).
\]
In other words, Lemma \ref{le:integralconvexity} bridges our characterization of acceptance consistency with that of \cite[Theorem 27]{acciaio-penner-dynamic}, which we now see are equivalent.
\subsection{Weak time consistency}
A related notion of time consistency was studied by Weber in \cite{weber-distributioninvariant}. Namely, we say a law invariant risk measure $\rho$ is \emph{weakly acceptance consistent} if $\rho(X | {\mathcal G}) \le 0$ a.s. implies $\rho(X) \le 0$, for every $X \in L^\infty$ and every $\sigma$-field ${\mathcal G} \subset {\mathcal F}$. Similarly, $\rho$ is \emph{weakly rejection consistent} if $\rho(X | {\mathcal G}) > 0$ a.s. implies $\rho(X) > 0$.
The following result, due in large part to Weber \cite{weber-distributioninvariant}, characterizes weak time consistency in terms of measure acceptance sets as well as divergences.
Let us say that a set ${\mathcal A} \subset {\mathcal P}({\mathbb R})$ is \emph{locally measure convex} if for each $M > 0$ and each $Q \in {\mathcal P}({\mathcal A} \cap {\mathcal P}[-M,M])$ the mean measure $\int_{{\mathcal A} \cap {\mathcal P}[-M,M]} Q(dm)m$ is in ${\mathcal A}$.
\begin{theorem} \label{th:weber}
Suppose $\alpha$ is a simplified divergence induced by a law invariant risk measure $\rho$ with measure acceptance set ${\mathcal A}$. The following are equivalent:
\begin{enumerate}
\item $\rho$ is weakly acceptance consistent.
\item ${\mathcal A}$ is locally measure convex.
\item For Polish spaces $E$ and $F$, and measures $\mu(dx)K^\mu_x(dy)$ and $\nu(dx)K^\nu_x(dy)$ in ${\mathcal P}(E \times F)$, we have
\[
\alpha\left(\nu(dx)K^\nu_x(dy) \ | \ \mu(dx)K^\mu_x(dy)\right) \ge \int\nu(dx)\alpha(K^\nu_x | K^\mu_x).
\]
\item For Polish spaces $E$ and $F$, and measures $\mu_1 \times \mu_2$ and $\nu(dx)K^\nu_x(dy)$ in ${\mathcal P}(E \times F)$, we have
\[
\alpha\left(\nu(dx)K^\nu_x(dy) \ | \ \mu_1\times\mu_2\right) \ge \int\nu(dx)\alpha(K^\nu_x | \mu_2).
\]
\end{enumerate}
Similarly, the same equivalences hold when ``acceptance'' is changed to ``rejection,'' the inequalities in (3) and (4) are reversed, and ${\mathcal A}$ is changed to ${\mathcal A}^c$. The equivalence of (1) and (2) holds without the assumption that $\alpha$ is simplified.
\end{theorem}
\begin{proof}
The equivalence $(1) \Leftrightarrow (2)$ was first observed by Weber \cite{weber-distributioninvariant}, and the rest is proven along the same lines as Theorem \ref{th:mainequivalence}, so we provide only a sketch. Suppose first that (1) holds. Fix Polish spaces $E$ and $F$ and measures $\bar{\mu} = \mu(dx)K^\mu_x(dy)$ and $\bar{\nu} = \nu(dx)K^\nu_x(dy)$ in ${\mathcal P}(E \times F)$. It is easy to see (arguing as in Proposition \ref{pr:acceptanceconsistent-equivalences}) that weak acceptance consistency is equivalent to the following: for $f \in B(E \times F)$,
\[
\mu\{\rho_{K^\mu_x}(f(x,\cdot)) \le 0\}=1 \quad \Rightarrow \quad \rho_{\bar{\mu}}(f) \le 0.
\]
Thus, by Lemma \ref{le:integralconvexity},
\begin{align*}
\int\nu(dx)\alpha(K^\nu_x|K^\mu_x) &= \sup_{f \in B(E \times F)}\left\{\int f\,d\bar{\nu} - \int\nu(dx)\rho_{K^\mu_x}(f(x,\cdot))\right\} \\
&= \sup\left\{\int f\,d\bar{\nu} : f \in B(E \times F), \ \mu\{\rho_{K^\mu_x}(f(x,\cdot)) \le 0\}=1\right\} \\
&\le \sup\left\{\int f\,d\bar{\nu} : f \in B(E \times F), \ \rho_{\bar{\mu}}(f) \le 0\right\} \\
&= \alpha(\bar{\nu} | \bar{\mu}).
\end{align*}
This proves $(1) \Rightarrow (3)$. Since clearly (3) implies (4), let us finally show that (4) implies (1). Fix Polish spaces $E$ and $F$ and $\mu_1 \times \mu_2 \in {\mathcal P}(E \times F)$. As in the proof of Theorem \ref{th:mainequivalence}, the inequality of (4), combined with Lemma \ref{le:integralconvexity} and the order-reversing property of convex conjugation, implies the set inclusion
\[
\left\{f \in B(E \times F) : \rho_{\mu_1 \times \mu_2}(f) \le 0\right\} \supset \left\{f \in B(E \times F) : \mu_1\{\rho_{\mu_2}(f(x,\cdot)) \le 0\}=1\right\}.
\]
Again, it is easy to see (arguing as in Proposition \ref{pr:acceptanceconsistent-equivalences}) that this implies weak acceptance consistency.
\end{proof}
\begin{remark}
In fact, for a measure acceptance set ${\mathcal A}$, local measure convexity is equivalent to (ordinary) convexity. Indeed, the set ${\mathcal A} \cap {\mathcal P}[-M,M]$ is weakly closed for each $M > 0$, which can be proven easily using the Fatou property (Theorem \ref{th:jouini-touzi-schachermayer}) and the Skorohod representation for weak convergence. It is well known that closed convex sets are measure convex, e.g. \cite[Corollary 1.2.4]{winkler-choquet}. The same equivalence need not hold for the complement ${\mathcal A}^c$, which is not closed.
\end{remark}
\subsection{More on shift-convexity} \label{se:shiftconvexity}
Let us recall our first interpretation of a shift-convex acceptance set: if a risk $X$ is acceptable, and a risk $Y$ is conditionally acceptable given $X$, then $X+Y$ is itself acceptable. Proposition \ref{pr:shiftconvex-equivalence} below shows that this interpretation can be sharpened somewhat:
if $X$ is acceptable and ${\mathcal G}$-measurable for some $\sigma$-field ${\mathcal G}$, and if $Y$ is conditionally acceptable given ${\mathcal G}$, then shift-convexity implies that $X+Y$ is acceptable as well.
\begin{proposition} \label{pr:shiftconvex-equivalence}
Suppose ${\mathcal A}$ is the measure acceptance set of a law invariant risk measure. Then ${\mathcal A}$ is shift-convex if and only if it satisfies the following property:
\begin{enumerate}
\item[(S)] For each $M > 0$ and each $\gamma \in {\mathcal P}({\mathbb R} \times ({\mathcal A} \cap {\mathcal P}[-M,M]))$ with first marginal belonging to ${\mathcal A}$,
it holds that
\[
\int\gamma(dx,dm)m(\cdot - x) \in {\mathcal A}.
\]
\end{enumerate}
Similarly, ${\mathcal A}^c$ is shift-convex if and only if it satisfies property (S).
\end{proposition}
\begin{proof}
Suppose ${\mathcal A}$ satisfies property (S). Fix $M > 0$, $\mu \in {\mathcal A}$, and a measurable map ${\mathbb R} \ni x \mapsto K_x \in {\mathcal A} \cap {\mathcal P}[-M,M]$. Define $\gamma \in {\mathcal P}({\mathbb R} \times ({\mathcal A} \cap {\mathcal P}[-M,M]))$ by
\[
\gamma(dx,dm) := \mu(dx)\delta_{K_x}(dm).
\]
Then clearly the first marginal of $\gamma$ is $\mu$, which belongs to ${\mathcal A}$. Moreover, $\gamma({\mathbb R} \times ({\mathcal A} \cap {\mathcal P}[-M,M])) = 1$ since $K_x \in {\mathcal A} \cap {\mathcal P}[-M,M]$ for every $x$.
Thus, by property (S), the measure
\[
\int_{\mathbb R}\mu(dx)K_x(\cdot - x) = \int\gamma(dx,dm)m(\cdot - x)
\]
is in ${\mathcal A}$, which shows that ${\mathcal A}$ is shift-convex.
Conversely, suppose ${\mathcal A}$ is shift-convex. Then the corresponding law invariant risk measure $\rho$ is acceptance consistent. A fortiori, $\rho$ is weakly acceptance consistent and thus ${\mathcal A}$ is locally measure convex by Theorem \ref{th:weber}. Now fix $M > 0$ and $\gamma \in {\mathcal P}({\mathbb R} \times ({\mathcal A} \cap {\mathcal P}[-M,M]))$ with first marginal $\mu$ belonging to ${\mathcal A}$. Disintegrate $\gamma$ to find a measurable map ${\mathbb R} \ni x \mapsto Q_x \in {\mathcal P}({\mathcal A} \cap {\mathcal P}[-M,M])$ such that
\[
\gamma(dx,dm) = \mu(dx)Q_x(dm).
\]
For each $x$ define $K_x \in {\mathcal P}({\mathbb R})$ to be the mean measure of $Q_x$, i.e.
\[
K_x(\cdot) = \int Q_x(dm)m(\cdot).
\]
Since $Q_x({\mathcal A} \cap {\mathcal P}[-M,M]) = 1$ for $\mu$-a.e. $x$, and since ${\mathcal A}$ is locally measure convex, it holds that $K_x \in {\mathcal A}$ for $\mu$-a.e. $x$. From partial shift-convexity we conclude that the measure
\begin{align*}
\int\gamma(dx,dm)m(\cdot - x) &= \int_{\mathbb R}\mu(dx)\int_{{\mathcal A} \cap {\mathcal P}[-M,M]}Q_x(dm)m(\cdot - x) = \int_{\mathbb R}\mu(dx)K_x(\cdot - x)
\end{align*}
is in ${\mathcal A}$, which proves property (S).
\end{proof}
\section{Further properties of divergences} \label{se:furtherproperties}
While every divergence is convex in its first argument by definition,
it is well known that relative entropy and also $f$-divergences are \emph{jointly} convex. It turns out that \emph{joint} convexity of a divergence is equivalent to \emph{concavity} of the corresponding law invariant risk measure on the level of distributions. To be clear, for a law invariant risk measure $\rho$, define the function $\tilde{\rho}$ on the set of probability measures on ${\mathbb R}$ with compact support by setting $\tilde{\rho}(P \circ X^{-1}) = \rho(X)$, for $X \in L^\infty$.

The concavity of $\tilde{\rho}$ was studied recently by Acciaio and Svindland \cite{acciaio-svindland-concave}, who make a compelling case that concavity is much more common, in spite of the convexity of $\rho$ on the level of random variables. Indeed, they show that $\rho(X) = {\mathbb E} X$ is the \emph{only} law invariant risk measure for which $\tilde{\rho}$ is convex. The entropic risk measure, for example, clearly has $\tilde{\rho}$ concave. Moreover, if $\rho$ is the optimized certainty equivalent corresponding to a function $\phi$, then the formula
\[
\tilde{\rho}(\mu) = \inf_{m \in {\mathbb R}}\left(\int\phi(m + x)\mu(dx) - m\right)
\]
shows that $\tilde{\rho}$ is concave.
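The concavity of $\tilde{\rho}$ for an optimized certainty equivalent can be checked numerically. The following sketch (illustrative only; the helper \texttt{oce}, the grid minimization, and the choice $\phi(x) = e^x$ are our own assumptions) evaluates $\tilde{\rho}$ on finite distributions and tests the concavity inequality on a mixture:

```python
import math

def oce(mu, phi=math.exp):
    # tilde_rho(mu) = inf_m ( integral phi(m + x) d(mu) - m ), minimized on a grid;
    # mu is a finite distribution given as a list of (atom, weight) pairs
    best = float("inf")
    for i in range(20001):
        m = -10.0 + 0.001 * i
        best = min(best, sum(w * phi(m + x) for x, w in mu) - m)
    return best

m1 = [(-1.0, 0.7), (0.0, 0.1), (0.5, 0.1), (2.0, 0.1)]
m2 = [(-1.0, 0.1), (0.0, 0.2), (0.5, 0.3), (2.0, 0.4)]
t = 0.3
mix = [(x, t * w1 + (1 - t) * w2) for (x, w1), (_, w2) in zip(m1, m2)]

# concavity: tilde_rho(t m1 + (1-t) m2) >= t tilde_rho(m1) + (1-t) tilde_rho(m2)
assert oce(mix) >= t * oce(m1) + (1 - t) * oce(m2) - 1e-6
```

For $\phi(x) = e^x$ the infimum can be computed in closed form, $\tilde{\rho}(\mu) = 1 + \log\int e^x\,\mu(dx)$, which makes the concavity (a composition of $\log$ with a linear map) transparent.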
\begin{proposition} \label{pr:jointconvexity}
Let $\rho$ be a law invariant risk measure with induced divergence $\alpha$. The following are equivalent:
\begin{enumerate}
\item $\alpha$ is jointly convex, in the sense that $\alpha(\cdot|\cdot)$ is convex on ${\mathcal P}(E) \times {\mathcal P}(E)$ for each Polish space $E$.
\item For each Polish space $E$ and each $f \in B(E)$, the map $\mu \mapsto \rho_\mu(f)$ is concave.
\item $\tilde{\rho}$ is concave.
\end{enumerate}
\end{proposition}
\begin{proof}
($1 \Rightarrow 2$) Let $E$ be a Polish space and $f \in B(E)$. Fix $t \in (0,1)$ and $\mu_1,\mu_2 \in {\mathcal P}(E)$. Then (1) implies
\begin{align*}
\rho_{t\mu_1 + (1-t)\mu_2}(f) &= \sup_{\nu \in {\mathcal P}(E)}\left\{\int f\,d\nu - \alpha(\nu | t\mu_1 + (1-t)\mu_2)\right\} \\
&\ge \sup_{\nu_1,\nu_2 \in {\mathcal P}(E)}\left\{t\int f\,d\nu_1 + (1-t)\int f\,d\nu_2 - \alpha(t\nu_1 + (1-t)\nu_2 | t\mu_1 + (1-t)\mu_2)\right\} \\
&\ge \sup_{\nu_1,\nu_2 \in {\mathcal P}(E)}\left\{t\int f\,d\nu_1 + (1-t)\int f\,d\nu_2 - t\alpha(\nu_1 | \mu_1) - (1-t)\alpha(\nu_2 | \mu_2)\right\} \\
&= t\rho_{\mu_1}(f) + (1-t)\rho_{\mu_2}(f).
\end{align*}
($2 \Rightarrow 1$) On the other hand, if $\nu_1,\nu_2 \in {\mathcal P}(E)$, then (2) implies
\begin{align*}
\alpha(t\nu_1 + (1-t)\nu_2 | t\mu_1 + (1-t)\mu_2) &= \sup_{f \in B(E)}\left\{t\int f\,d\nu_1 + (1-t)\int f\,d\nu_2 - \rho_{t\mu_1 + (1-t)\mu_2}(f)\right\} \\
&\le \sup_{f \in B(E)}\left\{t\int f\,d\nu_1 + (1-t)\int f\,d\nu_2 - t\rho_{\mu_1}(f) - (1-t)\rho_{\mu_2}(f)\right\} \\
&\le t\alpha(\nu_1 | \mu_1) + (1-t)\alpha(\nu_2 | \mu_2).
\end{align*}
($3 \Rightarrow 2$) This is immediate from the identity
\[
\rho_{t\mu_1 + (1-t)\mu_2}(f) = \tilde{\rho}\left(t\mu_1\circ f^{-1} + (1-t)\mu_2\circ f^{-1}\right).
\]
($2 \Rightarrow 3$) This is almost immediate from the above identity. Assume (2). Let $m_1,m_2 \in {\mathcal P}({\mathbb R})$ have compact support, and let $t \in (0,1)$. Then, letting $id$ denote the identity map on ${\mathbb R}$,
\[
\tilde{\rho}(tm_1 + (1-t)m_2) = \rho_{tm_1 + (1-t)m_2}(id) \ge t\rho_{m_1}(id) + (1-t)\rho_{m_2}(id) = t\tilde{\rho}(m_1) + (1-t)\tilde{\rho}(m_2).
\]
\end{proof}
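For a concrete instance of (1), relative entropy (the divergence induced by the entropic risk measure) is jointly convex. A minimal numerical sketch on a finite space (illustrative only; the helper \texttt{kl} and the chosen distributions are ours):

```python
import math

def kl(nu, mu):
    # relative entropy sum_x nu[x] log(nu[x]/mu[x]) on a finite space
    return sum(n * math.log(n / m) for n, m in zip(nu, mu) if n > 0)

mu1 = [0.10, 0.20, 0.30, 0.40]
mu2 = [0.25, 0.25, 0.25, 0.25]
nu1 = [0.40, 0.30, 0.20, 0.10]
nu2 = [0.10, 0.40, 0.40, 0.10]
t = 0.4

lhs = kl([t * a + (1 - t) * b for a, b in zip(nu1, nu2)],
         [t * a + (1 - t) * b for a, b in zip(mu1, mu2)])
rhs = t * kl(nu1, mu1) + (1 - t) * kl(nu2, mu2)

assert lhs <= rhs + 1e-12  # joint convexity of relative entropy
```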
Divergences are actually uniquely determined by their values for \emph{finite} spaces $E$, as is formalized in the following proposition. Building on the characterization of relative entropy in Corollary \ref{co:entropic} below, we could derive an even simpler characterization akin to those surveyed by Csisz\'ar \cite{csiszar2008axiomatic}, but this would lead us too far afield.
\begin{proposition}
Suppose $\alpha$ is a simplified divergence.
For any Polish space $E$ and any $\mu,\nu \in {\mathcal P}(E)$, we have
\[
\alpha(\nu | \mu) = \sup\left\{\alpha(\nu \circ T^{-1}|\mu \circ T^{-1}) : T : E \rightarrow F \text{ measurable, } F \text{ finite}\right\}.
\]
\end{proposition}
\begin{proof}
The inequality $\ge$ follows immediately from the definition of a divergence. To prove the reverse inequality, note that it holds trivially if $E$ is finite. In general, by Borel isomorphism (see \cite[Theorem 15.6]{kechris-settheory}), there exists an injective measurable function $S : E \rightarrow [0,1]$ with measurable inverse defined on its range. Suppose we can prove that
\begin{align}
\alpha(\nu | \mu) = \sup\left\{\alpha(\nu \circ T^{-1}|\mu \circ T^{-1}) : T : [0,1] \rightarrow F \text{ measurable, } F \text{ finite}\right\} \label{pf:finiteapprox1}
\end{align}
for all $\mu,\nu \in {\mathcal P}([0,1])$. Then, if $\mu,\nu \in {\mathcal P}(E)$, we use Proposition \ref{pr:informationinequality} to conclude
\begin{align*}
\alpha(\nu | \mu) &= \alpha(\nu \circ S^{-1} | \mu \circ S^{-1}) \\
&= \sup\left\{\alpha(\nu \circ (T \circ S)^{-1}|\mu \circ (T \circ S)^{-1}) : T : [0,1] \rightarrow F \text{ measurable, } F \text{ finite}\right\} \\
&=\sup\left\{\alpha(\nu \circ T^{-1}|\mu \circ T^{-1}) : T : E \rightarrow F \text{ measurable, } F \text{ finite}\right\}.
\end{align*}
Indeed, this is true because every measurable map $T : E \rightarrow F$ can be written as $T' \circ S$, where $T' = T \circ S^{-1}$ on the range of $S$. Hence, we need only prove \eqref{pf:finiteapprox1}.
Since $[0,1]$ is compact, for each $n$ we may find a measurable map $T_n : [0,1] \rightarrow [0,1]$ with finite range such that $|x - T_n(x)| \le 1/n$ for all $x \in [0,1]$.
Then $T_n$ converges uniformly to the identity. Since $\alpha$ is simplified, for a given $\epsilon > 0$ we may find a \emph{continuous} function $f$ on $[0,1]$ such that
\[
\alpha(\nu | \mu) \le \epsilon + \int f\,d\nu - \rho_\mu(f).
\]
Since $\rho_\mu$ is continuous in the supremum norm, and since $f \circ T_n \rightarrow f$ uniformly, we conclude that $\rho_\mu(f) = \lim_n\rho_\mu(f \circ T_n)$. Thus
\begin{align*}
\alpha(\nu | \mu) &\le \epsilon + \lim_{n\rightarrow\infty}\left(\int f \circ T_n\,d\nu - \rho_\mu(f \circ T_n)\right) \\
&= \epsilon + \lim_{n\rightarrow\infty}\left(\int f\,d(\nu \circ T_n^{-1}) - \rho_{\mu \circ T_n^{-1}}(f)\right) \\
&\le \epsilon + \liminf_{n\rightarrow\infty}\alpha(\nu \circ T_n^{-1} | \mu \circ T_n^{-1}).
\end{align*}
This is enough to complete the proof.
\end{proof}
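The inequality $\alpha(\nu \circ T^{-1} | \mu \circ T^{-1}) \le \alpha(\nu | \mu)$ underlying this proposition can be illustrated numerically for relative entropy. A small sketch (illustrative only; the helper \texttt{kl}, the distributions, and the coarsening map $T$ are our own choices):

```python
import math

def kl(nu, mu):
    # relative entropy sum_x nu[x] log(nu[x]/mu[x]) on a finite space
    return sum(n * math.log(n / m) for n, m in zip(nu, mu) if n > 0)

mu = [0.10, 0.15, 0.25, 0.20, 0.05, 0.25]
nu = [0.30, 0.05, 0.10, 0.10, 0.25, 0.20]

# T : {0,...,5} -> {0,1,2} collapses pairs of states; pushforwards sum the weights
T = [0, 0, 1, 1, 2, 2]
mu_T = [sum(m for m, t in zip(mu, T) if t == j) for j in range(3)]
nu_T = [sum(n for n, t in zip(nu, T) if t == j) for j in range(3)]

# the divergence can only decrease under a pushforward (data-processing inequality)
assert kl(nu_T, mu_T) <= kl(nu, mu) + 1e-12
```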
Finally, let us mention a result of potential relevance in mathematical statistics, namely that sufficient statistics always attain equality in the inequality $\alpha(\nu \circ T^{-1} | \mu \circ T^{-1}) \le \alpha(\nu | \mu)$.
The foundational paper \cite{kullback-leibler} of Kullback and Leibler proved this result for relative entropy, and Liese and Vajda \cite[Theorem 14]{liese2006divergences} treat the case of $f$-divergences.
\begin{proposition} \label{pr:sufficientstatistic}
Let $E$ be a Polish space and $\mu,\nu \in {\mathcal P}(E)$ with $\nu \ll \mu$. Suppose a measurable map $T : E \rightarrow F$ is sufficient for $\{\mu,\nu\}$, meaning that $d\nu/d\mu$ is $T$-measurable. Then, for any divergence $\alpha$, we have $\alpha(\nu \circ T^{-1} | \mu \circ T^{-1}) = \alpha(\nu | \mu)$. In particular, this holds if $T$ is a bijection with measurable inverse.
\end{proposition}
\begin{proof}
We use a more probabilistic notation for this proof, since we deal with conditional expectations.
By definition of a divergence, $\alpha(\nu \circ T^{-1} | \mu \circ T^{-1}) \le \alpha(\nu | \mu)$, so we must only prove the reverse inequality.
As is well known, sufficiency of $T$ easily implies that ${\mathbb E}^\nu[f \, | \, T] = {\mathbb E}^\mu[f \, | \, T]$ a.s. for each $f \in B(E)$; indeed, if $h \in B(E)$ is $T$-measurable, then
\[
{\mathbb E}^\nu[f h] = {\mathbb E}^\mu\left[f h \frac{d\nu}{d\mu}\right] = {\mathbb E}^\mu\left[{\mathbb E}^\mu[f \, | \, T] h \frac{d\nu}{d\mu}\right] = {\mathbb E}^\nu[{\mathbb E}^\mu[f \, | \, T] h].
\]
By Corollary 4.65 of \cite{follmer-schied-book}, we have $\rho_\mu(f) \ge \rho_\mu({\mathbb E}^\mu[f \, | \, T])$. Thus
\begin{align*}
\alpha(\nu | \mu) &= \sup_{f \in B(E)}\left( {\mathbb E}^\nu[f] - \rho_\mu(f)\right) \\
&\le \sup_{f \in B(E)}\left({\mathbb E}^\nu[f] - \rho_\mu({\mathbb E}^\mu[f \, | \, T])\right) \\
&= \sup_{f \in B(E)}\left({\mathbb E}^\nu[{\mathbb E}^\mu[f \, | \, T]] - \rho_\mu({\mathbb E}^\mu[f \, | \, T])\right).
\end{align*}
Every $T$-measurable function on $E$ may be written as $g \circ T$ for some measurable function $g$, and thus
\[
\alpha(\nu | \mu) \le \sup_{g \in B(F)}\left({\mathbb E}^\nu[g \circ T] - \rho_\mu(g \circ T)\right) = \sup_{g \in B(F)}\left({\mathbb E}^{\nu \circ T^{-1}}[g] - \rho_{\mu\circ T^{-1}}(g)\right) = \alpha(\nu \circ T^{-1} | \mu \circ T^{-1}).
\]
\end{proof}
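The equality of Proposition \ref{pr:sufficientstatistic} is easy to illustrate numerically for relative entropy. In the sketch below (illustrative only; the helper \texttt{kl} and the chosen measures are our own construction), $\mu$ is a product measure on a two-point grid, $d\nu/d\mu$ depends only on the first coordinate, and $T$ is the projection onto that coordinate:

```python
import math

def kl(nu, mu):
    # relative entropy on a finite space
    return sum(n * math.log(n / m) for n, m in zip(nu, mu) if n > 0)

p = [0.3, 0.7]                  # law of the first coordinate under mu
q = [0.6, 0.4]                  # law of the second coordinate under mu
mu = [[px * qy for qy in q] for px in p]

r = [2.0, 4.0 / 7.0]            # density d(nu)/d(mu), a function of x alone
assert abs(sum(rx * px for rx, px in zip(r, p)) - 1.0) < 1e-12
nu = [[r[x] * mu[x][y] for y in range(2)] for x in range(2)]

# T = projection onto the first coordinate is sufficient for {mu, nu}
mu_T = [sum(row) for row in mu]
nu_T = [sum(row) for row in nu]
full = kl([v for row in nu for v in row], [v for row in mu for v in row])
assert abs(kl(nu_T, mu_T) - full) < 1e-9
```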
\begin{remark}
Proposition \ref{pr:sufficientstatistic} raises a natural question: Does the converse hold? That is, does the equality $\alpha(\nu \circ T^{-1} | \mu \circ T^{-1}) = \alpha(\nu | \mu)$ imply that $T$ is sufficient for $\{\mu,\nu\}$? This does not hold for all divergences, but it does when $\alpha$ is relative entropy, as was observed first by Kullback and Leibler \cite{kullback-leibler}. Liese and Vajda \cite{liese2006divergences} show that this converse holds for many (but not all) $f$-divergences. This characterization leads to useful tests for sufficiency, as is explained in both of these papers \cite{kullback-leibler,liese2006divergences}.
\end{remark}
\section{Examples} \label{se:examples}
Before we discuss some common law invariant risk measures, recall that our sign convention is not the usual one. Namely, $\rho$ is increasing, not decreasing. More precisely, if $\rho$ is a risk measure according to our definition, the map $ X \mapsto \rho(-X)$ is what is more often called a risk measure, as in \cite{follmer-schied-book}.
\subsection{Shortfall risk measures} \label{se:shortfall}
Shortfall risk measures, introduced by F\"ollmer and Schied \cite{follmer-schied-convex}, are of the form
\[
\rho(X) = \inf\{c \in {\mathbb R} : {\mathbb E}[\ell(X-c)] \le 1\},
\]
where $\ell$ is a \emph{loss function}, defined as follows:
\begin{definition} \label{def:lossfunction}
A \emph{loss function} is a convex and nondecreasing function $\ell : {\mathbb R} \rightarrow {\mathbb R}$ satisfying $\ell(0) = 1 < \ell(x)$ for all $x > 0$.
\end{definition}
Of course, the induced family of risk measures is
\begin{align}
\rho_\mu(f) = \inf\{c \in {\mathbb R} : \int_E\ell(f(x)-c)\mu(dx) \le 1\}. \label{def:shortfall}
\end{align}
Note that by continuity of $\ell$ and monotone convergence, the infimum is always attained. In particular,
\[
\int_E\ell(f(x)-\rho_\mu(f))\mu(dx) \le 1.
\]
According to \cite[Theorem 4.115]{follmer-schied-book}, the induced divergence is
\begin{align}
\alpha(\nu | \mu) = \inf_{t > 0}\frac{1}{t}\left(1 + \int_E\ell^*\left(t\frac{d\nu}{d\mu}\right)d\mu\right), \text{ for } \nu \ll \mu, \label{def:shortfallentropy}
\end{align}
where $\ell^*(x) = \sup_{y \in {\mathbb R}}(xy - \ell(y))$ is the convex conjugate.
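For illustration (an example added here, not stated in the surrounding text), take the exponential loss $\ell(x) = e^x$, which is a loss function in the sense of Definition \ref{def:lossfunction}; then \eqref{def:shortfallentropy} reduces to relative entropy:

```latex
% For \ell(x) = e^x: \ell^*(y) = y\log y - y for y \ge 0 (and +\infty for y < 0).
% Write Z = d\nu/d\mu, so {\mathbb E}^\mu[Z] = 1.  For t > 0:
\int_E \ell^*(tZ)\,d\mu = t\log t + t\,{\mathbb E}^\mu[Z\log Z] - t,
% hence
\frac{1}{t}\left(1 + \int_E \ell^*(tZ)\,d\mu\right)
   = \frac{1}{t} + \log t + {\mathbb E}^\mu[Z\log Z] - 1 .
% The map t \mapsto 1/t + \log t attains its minimum 1 at t = 1, so
\alpha(\nu|\mu) = {\mathbb E}^\mu\!\left[\frac{d\nu}{d\mu}\log\frac{d\nu}{d\mu}\right],
% i.e. the induced divergence is relative entropy.
```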
It is known that shortfall risk measures are Lebesgue continuous in the sense of Definition \ref{def:lebesgue-continuous} \cite[Proposition 4.113 and Exercise 4.2.2]{follmer-schied-book}. Hence, by Theorem \ref{th:tight-lowersemicontinuity}, $\alpha$ is jointly weakly lower semicontinuous and simplified.
Let us now determine when a shortfall risk measure is superadditive. We say that a nonnegative function $\ell : {\mathbb R} \rightarrow [0,\infty]$ is \emph{log-subadditive} (resp. log-superadditive) if $\ell(x+y) \le \ell(x)\ell(y)$ (resp. $\ge$) for all $x,y \in {\mathbb R}$.
\begin{proposition} \label{pr:shortfalltensorization}
Let $\ell$ be a loss function and $\alpha$ the corresponding divergence defined in \eqref{def:shortfallentropy}. If $\ell$ is log-subadditive (resp. log-superadditive) then $\alpha$ is superadditive (resp. subadditive), or equivalently $\rho$ is acceptance consistent (resp. rejection consistent).
\end{proposition}
\begin{proof}
Assume $\ell$ is log-subadditive. With Theorem \ref{th:mainequivalence} in mind, we will show that the following set is shift-convex:
\[
{\mathcal A} := \left\{m \in \bigcup_{M > 0}{\mathcal P}[-M,M] : \int\ell\,dm \le 1\right\}.
\]
Fix $\mu \in {\mathcal A}$, $M > 0$, and a measurable map ${\mathbb R} \ni x \mapsto K_x \in {\mathcal A} \cap {\mathcal P}[-M,M]$. Then $\int_{\mathbb R} \ell\,dK_x \le 1$ for all $x$ and also $\int_{\mathbb R}\ell\,d\mu \le 1$. Thus
\begin{align*}
\int_{\mathbb R}\mu(dx)\int_{\mathbb R} K_x(dy-x)\ell(y) &= \int_{\mathbb R}\mu(dx)\int_{\mathbb R} K_x(dy)\ell(y+x) \\
&\le \int_{\mathbb R}\mu(dx)\ell(x)\int_{\mathbb R} K_x(dy)\ell(y) \\
&\le 1,
\end{align*}
and it follows that $\int_{\mathbb R}\mu(dx)K_x(\cdot - x)$ is in ${\mathcal A}$.
\end{proof}
\begin{remark} \label{re:notmanylogsubadditive}
It is difficult to construct interesting examples of log-subadditive functions, beyond the obvious case of $\ell(x)=e^{\eta x}$ for $\eta > 0$. Note that $\ell(x) = 0$ for some $x$ precludes log-subadditivity, since it implies $\ell(y)\le\ell(y-x)\ell(x)=0$ for all $y$. The function $\log \ell$ must be nondecreasing and subadditive on ${\mathbb R}$ and equal to zero at zero, and moreover the exponential of this function must be convex. The only other examples we found are all of the restrictive form $\ell(x)=e^{F(x)}$ for nondecreasing functions $F$ with $F'(0) > 0$ and with at most linear growth, i.e., $F(x) \le c_1x$ for all $x \ge 0$ and $F(x) \ge c_2x$ for $x \le 0$, for $c_1,c_2 > 0$.
\end{remark}
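As a quick numerical sanity check of the exponential case (a sketch: the discrete law for $X$ and the bisection tolerance below are illustrative assumptions, not taken from the text), the shortfall risk measure with $\ell(x) = e^{\eta x}$ agrees with the entropic risk measure $\eta^{-1}\log {\mathbb E}[e^{\eta X}]$:

```python
import numpy as np

# Illustrative discrete law for X: values xs with probabilities ps.
xs = np.array([-1.0, 0.0, 2.0])
ps = np.array([0.3, 0.5, 0.2])
eta = 0.7

def shortfall_rho(xs, ps, ell):
    """rho(X) = inf{c in R : E[ell(X - c)] <= 1}, located by bisection.

    c -> E[ell(X - c)] is nonincreasing (ell is nondecreasing), so the
    feasible set is a half-line and bisection applies.
    """
    lo, hi = xs.min() - 1.0, xs.max() + 1.0   # brackets the threshold for this ell
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if np.sum(ps * ell(xs - mid)) <= 1.0:
            hi = mid    # feasible: the infimum is at most mid
        else:
            lo = mid
    return hi

rho = shortfall_rho(xs, ps, lambda x: np.exp(eta * x))
entropic = np.log(np.sum(ps * np.exp(eta * xs))) / eta
assert abs(rho - entropic) < 1e-8
```

The exponential loss is log-subadditive (with equality), consistent with the acceptance consistency asserted by Proposition \ref{pr:shortfalltensorization}.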
\subsection{Optimized certainty equivalent}
An optimized certainty equivalent, as introduced by Ben-Tal and Teboulle \cite{bental-teboulle-1986,bental-teboulle-2007}, is of the form
\[
\rho(X) := \inf_{m \in {\mathbb R}}\left({\mathbb E}[\phi(m+X)] - m\right),
\]
where $\phi : {\mathbb R} \rightarrow {\mathbb R}$ is convex and nondecreasing, with $\phi^*(1) = \sup_{x \in {\mathbb R}}(x - \phi(x)) = 0$.
Of course, the induced family of risk measures is
\[
\rho_\mu(f) := \inf_{m \in {\mathbb R}}\left(\int_E\phi(m+f(x))\mu(dx) - m\right).
\]
The corresponding divergence is the $\phi^*$-divergence,
\[
\alpha(\nu|\mu) = \int\phi^*\left(\frac{d\nu}{d\mu}\right)d\mu, \text{ for } \nu \ll \mu.
\]
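For a concrete instance (again a sketch with an illustrative discrete law, assumed for the example): $\phi(x) = e^{x-1}$ satisfies $\phi^*(1) = 0$ and $\phi^*(y) = y\log y$ for $y > 0$, so the induced divergence is relative entropy and the optimized certainty equivalent collapses to the entropic risk measure $\log {\mathbb E}[e^X]$:

```python
import numpy as np

xs = np.array([-1.0, 0.0, 2.0])   # illustrative discrete law for X
ps = np.array([0.3, 0.5, 0.2])

def phi(x):
    return np.exp(x - 1.0)        # convex, nondecreasing, phi*(1) = 0

def oce(xs, ps, phi):
    """rho(X) = inf_m ( E[phi(m + X)] - m ), minimized over a fine grid of m."""
    m = np.linspace(-10.0, 10.0, 200001)
    vals = np.sum(ps * phi(m[:, None] + xs[None, :]), axis=1) - m
    return vals.min()

rho = oce(xs, ps, phi)
entropic = np.log(np.sum(ps * np.exp(xs)))   # closed form for this phi
assert abs(rho - entropic) < 1e-6
```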
As we saw in the discussion preceding Proposition \ref{pr:jointconvexity}, an optimized certainty equivalent always satisfies the concavity condition of Proposition \ref{pr:jointconvexity}, and this provides an alternative proof of the well known joint convexity of $\alpha$. It is also known that $\alpha$ is jointly weakly lower semicontinuous, which we confirm using Theorem \ref{th:tight-lowersemicontinuity} before addressing time consistency and additivity properties.
\begin{proposition}
Every optimized certainty equivalent is weakly lower semicontinuous. In particular, $\alpha$ is simplified.
\end{proposition}
\begin{proof}
The second claim follows from the first by Theorem \ref{th:tight-lowersemicontinuity}.
According to Theorem \ref{th:tight-lowersemicontinuity}, it suffices to show Lebesgue continuity: If $X_n,X \in L^\infty$ with $X_n \downarrow X$ a.s., we must show that $\rho(X_n) \downarrow \rho(X)$. Let $\epsilon > 0$, and find $m \in {\mathbb R}$ such that
\[
{\mathbb E}[\phi(m+X)] - m \le \rho(X) + \epsilon.
\]
By monotone convergence, ${\mathbb E}[\phi(m + X_n)] \downarrow {\mathbb E}[\phi(m+X)]$. Thus, for sufficiently large $n$,
\[
\rho(X_n) \le {\mathbb E}[\phi(m + X_n)] - m \le {\mathbb E}[\phi(m+X)] - m + \epsilon \le \rho(X) + 2\epsilon.
\]
\end{proof}
\begin{theorem}
Suppose $\phi$ satisfies $y\phi^*(x) + x\phi^*(y) \le \phi^*(xy)$ (resp. $\ge$) for all $x,y \ge 0$. Then $\alpha$ is superadditive (resp. subadditive), or equivalently $\rho$ is acceptance consistent (resp. rejection consistent).
\end{theorem}
\begin{proof}
We treat the superadditive case, as the subadditive case is proven similarly.
Suppose $E$ and $F$ are Polish spaces, and let $\bar{\mu} = \mu(dx)K^\mu_x(dy)$ and $\bar{\nu} = \nu(dx)K^\nu_x(dy)$ be probability measures on $E \times F$.
Simply use the definition of $\alpha$:
\begin{align*}
\alpha(\bar{\nu} | \bar{\mu}) &= \int_{E \times F}\phi^*\left(\frac{d\bar{\nu}}{d\bar{\mu}}\right)d\mu \\
&= \int_E\mu(dx)\int_F K^\mu_x(dy)\phi^*\left(\frac{d\nu}{d\mu}(x)\frac{dK^\nu_x}{dK^\mu_x}(y)\right) \\
&\ge \int_E\mu(dx)\int_FK^\mu_x(dy)\left[\frac{dK^\nu_x}{dK^\mu_x}(y)\phi^*\left(\frac{d\nu}{d\mu}(x)\right) + \frac{d\nu}{d\mu}(x)\phi^*\left(\frac{dK^\nu_x}{dK^\mu_x}(y)\right) \right] \\
&= \int_E\phi^*\left(\frac{d\nu}{d\mu}\right)d\mu + \int_E\nu(dx)\int_FK^\mu_x(dy)\phi^*\left(\frac{dK^\nu_x}{dK^\mu_x}(y)\right) \\
&= \alpha(\nu | \mu) + \int_E\nu(dx)\alpha(K^\nu_x|K^\mu_x).
\end{align*}
\end{proof}
\begin{remark}
Of course, the relationship $\phi^*(xy) = x\phi^*(y) + y\phi^*(x)$ is satisfied by $\phi^*(x) = x\log x$, the conjugate of which (assuming $\phi^* = +\infty$ on the negative half-line) is $\phi(x) = e^{x-1}$. More generally, suppose $\ell$ is a strictly increasing log-subadditive loss function. Then $\ell^{-1}(xy) \ge \ell^{-1}(x) + \ell^{-1}(y)$ for $x,y > 0$, and so $\phi^*(x) := x\ell^{-1}(x)$ satisfies $\phi^*(xy) \ge x\phi^*(y) + y\phi^*(x)$. As was discussed in Remark \ref{re:notmanylogsubadditive}, there are not many such functions.
\end{remark}
\subsection{Coherent risk measures}
A risk measure is called \emph{coherent} if $\rho(\lambda X) = \lambda\rho(X)$ for all $X \in L^\infty$ and $\lambda \ge 0$. A coherent law invariant risk measure admits a representation
\[
\rho(X) = \sup_{Q \in {\mathcal Q}}{\mathbb E}^Q[X],
\]
where ${\mathcal Q} \subset {\mathcal P}_P(\Omega)$ is closed convex. Law invariance simply means that if $Q \in {\mathcal Q}$ and $Q'\in {\mathcal P}_P(\Omega)$ have the same density law $ P \circ (dQ/dP)^{-1} = P \circ (dQ'/dP)^{-1}$, then $Q'$ must also be in ${\mathcal Q}$; indeed, this follows easily from the Kusuoka representation (see \cite{kusuoka2001law,jouini-touzi-schachermayer}). For a Polish space $E$ and $\mu \in {\mathcal P}(E)$, note that
\[
\rho_\mu(f) = \sup_{\eta \in {\mathcal Q}[\mu]}\int f\,d\eta,
\]
where
\[
{\mathcal Q}[\mu] := \left\{Q \circ X^{-1} : X \in L^0(\Omega;E), \ Q \in {\mathcal Q}, \ P \circ X^{-1} = \mu \right\} \subset {\mathcal P}_\mu(E).
\]
Here $L^0(\Omega;E)$ denotes the set of measurable functions from $\Omega$ to $E$.
Let us see when $\rho$ is acceptance consistent by using Proposition \ref{pr:acceptanceconsistent-equivalences}(5). Let $E$ and $F$ be Polish spaces, let $\mu_1 \in {\mathcal P}(E)$, let $\mu_2 \in {\mathcal P}(F)$, and let $f \in B(E \times F)$. Then, if $X$ denotes the identity map on $E$,
\begin{align}
\rho_{\mu_1}(\rho_{\mu_2}(f(x,\cdot))|_{x = X}) &= \sup_{\eta \in {\mathcal Q}[\mu_1]}\int_E\eta(dx)\rho_{\mu_2}(f(x,\cdot)) \nonumber \\
&= \sup_{\eta \in {\mathcal Q}[\mu_1]}\int_E\eta(dx)\sup_{\eta' \in {\mathcal Q}[\mu_2]}\int_F\eta'(dy)f(x,y) \nonumber \\
&= \sup_{\eta \in {\mathcal Q}[\mu_1,\mu_2]}\int_{E \times F}f\,d\eta, \label{pf:coherent}
\end{align}
where we define
\[
{\mathcal Q}[\mu_1,\mu_2] := \left\{m(dx)K^m_x(dy) \in {\mathcal P}(E \times F) : m \in {\mathcal Q}[\mu_1], \ K^m_x \in {\mathcal Q}[\mu_2] \text{ for all } x\right\}.
\]
Indeed, the last line of \eqref{pf:coherent} follows from a well known measurable selection argument \cite[Proposition 7.50]{bertsekasshreve}. Thus, $\rho$ is acceptance consistent if and only if
\[
\sup_{\eta \in {\mathcal Q}[\mu_1 \times \mu_2]}\int_{E \times F}f\,d\eta \le \sup_{\eta \in {\mathcal Q}[\mu_1,\mu_2]}\int_{E \times F}f\,d\eta,
\]
for all $f \in B(E \times F)$, since the left-hand side is exactly $\rho_{\mu_1 \times \mu_2}(f)$. But this is equivalent to the closed convex hull of ${\mathcal Q}[\mu_1,\mu_2]$ containing ${\mathcal Q}[\mu_1 \times \mu_2]$. For this to hold for every pair $\mu_1,\mu_2$ is a very stringent requirement, and we were unable to clarify it any further. It holds in the extreme cases, when ${\mathcal Q}$ is a singleton or ${\mathcal Q} = {\mathcal P}_P(\Omega)$.
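In the singleton case the containment is immediate; a one-line check:

```latex
% If {\mathcal Q} = \{P\}, then {\mathcal Q}[\mu] = \{\mu\} for every \mu, so
% {\mathcal Q}[\mu_1,\mu_2] = \{\mu_1(dx)K_x(dy) : K_x \equiv \mu_2\}
%                           = \{\mu_1 \times \mu_2\} = {\mathcal Q}[\mu_1 \times \mu_2],
% and both suprema reduce to
\int_{E \times F} f \, d(\mu_1 \times \mu_2),
% so the required containment (hence acceptance consistency) holds with equality.
```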
Note that the above discussion is equally valid for the robust entropic risk measure
\[
\rho(X) = \sup_{Q \in {\mathcal Q}}c^{-1}\log{\mathbb E}^Q[e^{c X}], \ c > 0.
\]
Focus: A New Way to Make Elements

Published April 21, 2006 | Phys. Rev. Focus 17, 14 (2006) | DOI: 10.1103/PhysRevFocus.17.14

Neutrino-Induced Nucleosynthesis of $A>64$ Nuclei: The $\nu p$ Process

C. Fröhlich, G. Martínez-Pinedo, M. Liebendörfer, F.-K. Thielemann, E. Bravo, W. R. Hix, K. Langanke, and N. T. Zinner

Published April 10, 2006

The neat order of chemistry's periodic table hides some riddles. Physicists have long believed that exploding stars forge the heavier elements, but the accepted ways to assemble atomic nuclei in the hot flash of a supernova cannot explain the existence of some unusual isotopes. To create them, researchers now propose a new process that involves antineutrinos, ghostly particles that supernovae generate in huge numbers. The blazingly fast reactions, described in the 14 April PRL, may explain some of the rarer ingredients of our solar system and the distinct chemical patterns seen in primitive stars.

Stars fuse hydrogen and helium into elements as heavy as iron and nickel, but the heavier elements come mainly from supernovae, the explosions of giant stars that also create dense neutron stars or occasionally black holes at their cores. In the superhot explosion, most heavy elements arise when helium nuclei assemble into more massive nuclei, which then absorb neutrons that decay into protons. These rapid-fire reactions forge elements that climb the periodic table. But when nuclear astrophysicists studied these processes and others in detail, they found gaps. For example, the sun and meteorites contain some isotopes of the metals molybdenum and ruthenium with a high proportion of protons, but with no clear origins in the accepted series of reactions.

Now a European-led team thinks it knows why. Graduate student Carla Fröhlich of the University of Basel, Switzerland, and colleagues examined models of a supernova's earliest moments. Last year they and a separate team of astrophysicists independently realized that there is a proton-rich region surrounding the fresh neutron star during the first few seconds of the explosion [1, 2]. Isotopes that already have a high fraction of protons cannot capture these additional protons and progress to new elements because of the repulsive force from the positive charges jammed into their nuclei.

But Fröhlich and her colleagues found that some of the protons in this region transform into neutrons by reacting with antineutrinos streaming from the neutron star. These extra neutrons are critical during the early seconds when the material is still hot enough to make heavy, proton-rich isotopes. Some nuclei packed with the maximum allotment of protons grab a neutron and so generate enough binding force to capture another proton through the strong nuclear attraction. Within a few seconds, this cycle creates a series of stable isotopes containing as high a proportion of protons as nuclear forces allow – including the problematic varieties of molybdenum and ruthenium. "Without the antineutrinos creating a constant supply of free neutrons, this would not be possible," says coauthor Gabriel Martínez-Pinedo of the research institute GSI in Darmstadt, Germany.

Other physicists view the process as a key step forward. "These isotopes have been an enigma for nucleosynthesis theory since its inception," says Robert Hoffman of Lawrence Livermore National Laboratory in California, part of a team that has confirmed and expanded upon the new work with independent models of the proton-rich region.

Lighter isotopes created by this same process – notably strontium, yttrium, and zirconium – also may appear especially clearly in the most primitive stars, says astronomer Timothy Beers of Michigan State University in East Lansing. Old stars should have relatively clean imprints of these isotopes manufactured by the first supernovae and unmarred by chemical processing later on. Indeed, Beers notes, one of the most chemically primitive stars observed in the galaxy contains a surprising amount of strontium – far more than nucleosynthesis models had predicted, but consistent with the new study.

–Robert Irion

Robert Irion is a freelance science writer based in Santa Cruz, CA.

References

1. J. Pruet et al., Astrophys. J. 623, 325 (2005).
2. C. Fröhlich et al., Astrophys. J. 637, 415 (2006).
I am wondering if the devs will release a DISCO ENTERPRISE uniform. Gotta admit Pike's uniform is really nice!
Probably at the same time they release the Disco Connie.
Is it much different from Disco uniform? I thought re-coloring the existing DISCO uniform with Disco TOS colors would do it.
There are some significant differences. The Disco Ent uniforms do not have the metallic side and shoulder pieces and they add the ranks on the cuffs.
Rellime... a very good point! You are probably right about the timing. As for the differences, Rellimie has pretty much given the explanation.
In 2017 Opus carried out a major project to measure exhaust emissions from on-road traffic in the city of Barcelona and its metropolitan area. The project was defined by Barcelona Regional, a public agency for urban planning led by Barcelona City Council.
This project has been one of the largest remote-sensing projects done in Europe and its results have pushed the city authorities to impose new restrictions on polluting vehicles.
The levels of air pollutants in the city of Barcelona exceed the limit values recommended by the World Health Organization and established by the European Union. The city plans to tackle this health problem with the implementation of a package of measures aimed at reducing the average levels of NO2 and PM10 in Barcelona, which recorded an increase in the last two years (between 11 and 13%).
The authorities hired Opus RSE to conduct an extensive and detailed analysis of the emissions, to truly understand their problem, the contribution of different types of vehicles to pollutant emissions and thus create efficient environmental policies.
The Remote Sensing Devices were placed at 32 different locations in Barcelona. This mobility can only be achieved with Opus RSD AccuScan, which is small and easy to set up.
The measurement sites represent important traffic areas inside and around the city of Barcelona, with the aim of making a representative characterisation of on-road traffic emissions throughout Barcelona: city center, motorways, port, urban accesses, industrial sites, etc.
Hundreds of thousands of vehicles were analyzed, so the understanding of traffic emissions in Barcelona is now very clear.
A particularly relevant finding was that the average emissions of nitrogen oxides (NOx) from diesel passenger cars exceed the limits, including those of the most modern vehicles (Euro 6). It is true that, in general, all Euro 6 vehicles reduce their average emissions compared to older vehicles and come closer to meeting the limits set by European standards. However, it is very alarming that many Euro 6 vehicles fail to comply with emissions regulations. These vehicles were recently homologated, so it has been demonstrated once again that real-driving emissions are much higher than those measured in laboratory type-approval tests.
The High Emitters of Barcelona, which account for 6% of the entire fleet, contribute between 20-30% of total traffic emissions. This group consists mainly of Euro 3 diesel passenger cars, but also 17% of relatively new passenger cars (Euro 5 and Euro 6) which would be inexpensive to repair (typically injection problems) and would immediately reduce emissions from road traffic.
The study showed that the identification and repair of High Emitters would be the most economical measure to reduce on-road traffic emissions.
Creditors look to force Colortree into bankruptcy
Jonathan Spiers | September 13, 2019
Colortree produced and mailed fliers, postcards, envelopes and other products. (Photos by Jonathan Spiers)
Three months after Colortree Group abruptly shut its doors, three companies that did business with the Henrico-based direct mail and printing company are attempting to force it into bankruptcy in an effort to retrieve over $8 million.
An involuntary petition seeking Chapter 7 liquidation was filed Wednesday in federal bankruptcy court against Colortree, which put its approximately 240 employees out of work when it ceased operations June 3.
The petition was filed on behalf of three companies: Lindenmeyr Munroe, a New York-based commercial printing paper and packaging supplier with locations in Henrico and Colonial Heights; Domtar Corp., a paper producer with corporate offices in Montreal and South Carolina; and G.E. Richards Graphic Supplies, a Pennsylvania-based commercial printing equipment wholesale distributor with an office in Henrico.
The businesses list more than $8.17 million in money owed them for goods and services. Lindenmeyr Munroe is seeking in excess of $8 million, Domtar lists over $155,000 and G.E. Richards seeks about $11,000. The petition states that Colortree is not paying its debts as they become due.
The companies are represented by Williamsburg attorney Gregory Bean of Gordon Rees Scully Mansukhani. Bean did not return a call seeking comment.
A message left for Colortree President and owner James "Pat" Patterson was not returned. Despite the closure, the company's voicemail and website remained active as of Thursday.
The company's headquarters at 8000 Villa Park Drive appeared deserted mid-afternoon, save for a few vehicles parked around the building.
On Thursday, the day after the petition was filed, a U.S. District Court judge entered an order certifying class-action status for a lawsuit brought against Colortree in June on behalf of all employees who lost their jobs due to the closure.
The suit, which was brought by former employee Terry Kennedy, a prepress operator at the company for three years, alleges Colortree violated the federal Worker Adjustment and Retraining Notification Act and the Virginia Wage Payment Act by not providing the employees with 60 days' written notice of their terminations, as the WARN Act requires.
Citing the WARN Act
The suit seeks to recover lost wages and benefits, as well as employees' accrued vacation time and other compensation owed.
In its response to the suit, filed July 3, Colortree denied that it failed to provide notice to employees or "appropriate governmental agencies pursuant to the WARN Act and applicable exception…" and contends that it provided such notice "as soon as practicable and in accordance with the WARN Act."
A photo of the empty parking lot at the Colortree facility taken in early June.
Several employees told BizSense they received no notice of the closure and were given 15 minutes to clear out before the building was locked. That followed a conference call in which it was conveyed to employees listening in that the 31-year-old company was shutting down. News of the closure prompted several businesses to offer jobs to former Colortree employees.
In a letter dated the same day as the closure, Colortree sent a WARN notice to the Virginia Employment Commission and Henrico County that said it was permanently closing "some or all of its divisions" and laying off approximately 240 employees. The notice was signed by Patterson, who purchased full ownership of the company two years ago.
Colortree says it sought funding
In its lawsuit response, Colortree said it "worked diligently and in good faith to secure financing that would have enabled it to remain in business and continue to (employ), or provide additional notice, to affected employees," adding that it "continues to seek such financing."
The company contends it is covered by the "faltering business" exception to the WARN Act "because it was actively seeking capital or business which it had a realistic opportunity to obtain…" and believed that giving notice would have precluded it from obtaining the needed capital or business. It said it paid employees for all hours worked in accordance with state law and seeks dismissal of the suit.
Williams Mullen attorney Laura Windsor is representing Colortree in the case. Employees in the class action are represented locally by Spotts Fain attorneys Jennifer West and Edward Bagnell Jr., who are working the case with Jack Raisner and Rene Roupinian of New York-based employment law firm Outten & Golden.
Some of the items awaiting auction. (PPL Group)
Meanwhile, an auction of Colortree's presses, equipment and other assets is scheduled to be held at 10 a.m. Tuesday. Illinois-based PPL Group is conducting the auction, which will be held at the Colortree building and benefit Sterling National Bank.
The involuntary bankruptcy petition against Colortree comes three months after a similar action was taken against Live Well Financial, a Chesterfield-based mortgage company that closed in May. A similar petition from three of that company's creditors was granted in July, and one of the lenders – Michigan-based Flagstar Bank – recently won approval to take control of a bond that Live Well had owned, representing $37 million of the $70 million the bank was owed.
The church of Santa Maria Valleverde is a church in Celano, in the province of L'Aquila. Its full name is "Santa Maria Valleverde dei Riformati", after the homonymous branch of the Franciscan order. It houses the museum and library of Santa Maria Valleverde.

History

The church was built in 1508, the year carved on the lintel of the portal.
Originally called Sanctae Mariae de Valleviridi, it was flanked by the convent of the Reformed Friars Minor of San Giovanni da Capestrano. The deed of its foundation dates to a few years earlier, 1504, while it was Lionello Accrocciamuro and his wife Jacovella da Celano who promoted the project in the first half of the 15th century.
In 1902 it was declared a national monument.
It was severely damaged by the 1915 Marsica earthquake, but was rebuilt along the original lines. The only original element now missing is the imposing square-plan bell tower. Its base remains, and a smaller bell-gable was built on top of it. Images of the original bell tower can be seen in early twentieth-century photographs. The tower was divided into three sections and was made even more slender by a pyramidal spire on the roof.

Description

The church has a late Gothic façade with a late Romanesque portal bearing a bas-relief of the Lamb. The façade is divided into two levels by a string course. The upper part contains a Renaissance round-arched window.
The interior, with a single nave, has a cross vault and three chapels on the left side; the first and third contain restored frescoes, though parts are missing. High on the right side are two large panels: one depicts the Nativity, with the Child pointing to putti bearing the symbols of the Passion, while the other represents the Road to Calvary, attributed to Sodoma. Sixteenth-century frescoes of the Virgin and of scenes of the Passion have been attributed to the Brescian painter Paolo Zoppo.
Next to the church is the cloister, from which one enters the Franciscan convent.
The upper floor of the convent houses the library, with the collection named after "Pietro Antonio Corsignani" consisting of about a thousand antique tomes and volumes, and the adjacent museum, which displays the sacred works.

See also

Museum and library of Santa Maria Valleverde

External links

The church of Santa Maria Valleverde on the Terre Marsicane website
<?php
namespace Tests\Validators;
use GUMP;
use Exception;
use Tests\BaseTestCase;
/**
* Class ContainsValidatorTest
*
* @package Tests
*/
class ContainsValidatorTest extends BaseTestCase
{
/**
* @dataProvider successProvider
*/
public function testSuccess($rule, $input)
{
$this->assertTrue($this->validate($rule, $input));
}
public function successProvider()
{
return [
['contains,one', 'one'],
['contains,one;two;with space', 'with space'],
[['contains' => ['one']], 'one'],
[['contains' => ['one', 'two']], 'two'],
];
}
/**
* @dataProvider errorProvider
*/
public function testError($rule, $input)
{
$this->assertNotTrue($this->validate($rule, $input));
}
public function errorProvider()
{
return [
['contains,one', 'two'],
['contains,one;two;with space', 'with spac'],
];
}
}
Find an equation of the tangent line to $y(x)=\int_{2}^{x} \cos(\pi t^{3})\,dt$ at $x = 2$.

By the fundamental theorem of calculus, $y'(x) = \cos(\pi x^{3})$. At $x = 2$ the slope is $y'(2) = \cos(8\pi) = 1$, and $y(2) = \int_{2}^{2} \cos(\pi t^{3})\,dt = 0$, so the tangent line is $y = x - 2$.
Q: Django 2.0.1: Looping to create radio buttons breaks "required" setting

I have struggled with this problem for a while so I appreciate any help, however vague.
Django 2.0.1: The "required" setting that Django uses for validating whether a field is valid works fine if I input:
{{ client_primary_sector }} in to the applicable html file with the "required" setting chosen via the data model (blank=False) or via forms.py (attrs={"required": "required"}). However, the "required" setting fails when I use for loops to produce radio buttons.
See below for a working and broken example.
models.py:
class SurveyInstance(models.Model):
client_primary_sector = models.CharField(choices=PRIMARY_SECTOR, null=True, default='no_selection', blank=False, max_length=100)
Please note the `default='no_selection'` above, which is not in the PRIMARY_SECTOR choices and isn't rendered as an option to the user. This forces the user to make a selection before data is saved (I have confirmed it works).
forms.py
class ClientProfileForm(ModelForm):
class Meta:
model = SurveyInstance
fields = ('client_primary_sector',)
widgets = {'client_primary_sector': forms.RadioSelect(choices=PRIMARY_SECTOR, attrs={"required": "required"}),
}
views.py
def client_profile_edit(request, pk):
# get the record details from the database using the primary key
survey_inst = get_object_or_404(SurveyInstance, pk=pk)
# if details submitted by user
if request.method == "POST":
# get information from the posted form
form = ClientProfileForm(request.POST, instance=survey_inst)
if form.is_valid():
survey_inst = form.save()
# redirect to Next view:
return redirect('questionnaire:business-process-management', pk=survey_inst.pk)
else:
# Retrieve existing data
form = ClientProfileForm(instance=survey_inst)
return render(request, 'questionnaire/client_profile.html', {'form': form})
client_profile.html
<!-- this works: -->
<!-- <div class="radio_3_cols">
{{ form.client_primary_sector }}
</div> -->
<!-- this doesn't: -->
{% for choice in form.client_primary_sector %}
<div class="radio radio-primary radio-inline">
{{ choice.tag }}
<label for='{{ form.client_primary_sector.auto_id }}_{{ forloop.counter0 }}'>{{ choice.choice_label }}</label>
</div>
{% endfor %}
You may wonder why I don't just use the working solution... I would like to be able to use the for loop logic for other situations and so require a solution.
A: Answered my own question. From the documentation for 2.0:
https://docs.djangoproject.com/en/2.0/ref/forms/widgets/#radioselect
The correct syntax is:
{% for radio in form.client_primary_sector %}
<label for="{{ radio.id_for_label }}">
{{ radio.choice_label }}
<span class="radio">{{ radio.tag }}</span>
</label>
{% endfor %}
Not whatever I found before. Confirmed as working. Hoorah!
Vintage inspired Belmont will become a much loved addition to your wardrobe this season. Crafted from black calf leather that only gets better with age, this derby ankle boot is detailed with hook and eyelet lacing finished with a handy pull-on tab. Wear with the cuff of rolled up jeans just touching the top of the boot and thick knits for the perfect accompaniment to wintery walks.
Q: .c file in Xcode project: how to use / implement?

The Xcode project I am working on contains a .c file which I sourced from this tutorial. I want to open the serial port and listen for data coming in, so I created a method in a .m file which I named addRFID. How would I go about opening the port and listening for data coming in on the RX line within the .m file?
A: .c files can be included in your project in exactly the same way as .m files. My suggestion is to create a header file for that .c file (i.e., make a .h in which you declare all of the functions, variables, constants, etc.), import that header in your .m, and then use the functions as you see fit, calling them directly from inside your object methods.
Then simply make sure that the .c file is included in your target so that it gets compiled and things should work great.
To summarize:
* Don't directly import your .c file.
* Do include the .c file in your target for compilation.
* Make a .h file declaring the interface, and import that directly.
# Is That a Word?
### From AA to ZZZ, the Weird and Wonderful Language of SCRABBLE®
DAVID BUKSZPAN
# Contents
Disclaimer
Acknowledgments
Introduction
Part 1:
The Story of Scrabble and Beyond:
The Game's Creation, Mutations, and Relations, Plus a Look at Its Most Remarkable Records
Part 2:
Unscrambling Scrabblish:
A Catalog of Useful, Curious, and Surprising Lists, Facts, and Marginalia
Part 3:
The Lexicon Contextualized:
Speaking Scrabblish
Sources
Dictionaries
Copyright
# DISCLAIMER
This collection of words, tips, and history related to anagram crossword games, specifically Scrabble but also including Words With Friends, Lexulous, Bananagrams, and Snatch-It Word Game, is not sponsored by, endorsed by, written for, or approved by Hasbro, Inc., Zynga Inc., Bananagrams Inc., U.S. Games Systems, Inc., or any other game producer. In the text that follows, for ease of usage, simplified versions of game titles are used. By using the word "Scrabble," we mean "SCRABBLE® Brand Crossword Game." Scrabble is the trademark of Hasbro, Inc. in North America and Mattel in other countries around the world. By using the words "Words With Friends," we mean "Words With Friends®." Words With Friends is the trademark of Zynga Inc. By using the word "Lexulous," we mean "Lexulous®." Lexulous is the trademark of RJ Softwares. By using the word "Bananagrams," we mean "Bananagrams®." Bananagrams is a registered trademark of Bananagrams Inc. By using the words "Snatch-It" we mean "Snatch-It Word Game," which is produced by U.S. Game Systems, Inc.
# ACKNOWLEDGMENTS
_Hearty thank yous to Stephen Fatsis and Paul McCarthy, both of whose fine books on Scrabble were instrumental to the reporting in the first third of this volume. My gratitude also to my family, friends, and editor for your support and tolerance as I subjected you all to far too much jetsam of Scrabble minutia for far too long a time. And cheers to you, Alfred Mosher Butts; you created a helluva game._
# INTRODUCTION
THIS BOOK STARTED, quite literally, with a challenge. A few years back, a friend invited me to join her one warm spring day in Brooklyn's lovely Prospect Park for a game of Scrabble. I'm sure that, knowing full well my weaknesses for games, parks, and any excuse for an al fresco glass of wine, she wasn't surprised at how fast I arrived with a blanket, a board, and a box of sauvignon blanc. I hadn't played Scrabble in many years, but I had always been decent at the game and looked forward to showing off my skills. Oh, does nothing go as well rewarded in this world as overconfidence?
A couple of turns in, my friend played _za_. "Za?" I asked, "I don't think so," the incredulity in my voice leaving no room for my friend to do anything other than pick up her tiles and say she was kidding. "Yeah," she said, "za. Pizza. Like 'a slice of za.'" My eyes rolled and I couldn't help but feel a twang of sympathy, knowing full well that though a smart girl, my friend's large vocabulary of unusual slang terms was of no help in Scrabble. "I've got the dictionary right here," she offered. Confidently I flipped to the last page and backtracked a few pages to the spot under the giant Z tile marking the beginning of the letter's entries, expecting maybe a few words before **zag**. And I was right: there were a few words before **zag**. But the first of them was **za**.
I certainly didn't remember that word from my days growing up playing Scrabble. My friend scored her 20-odd points, and we continued. I was holding on to a slight lead. What I wasn't ready for was her play of **qi** —in two directions!—off a Triple Word Score that she'd opened up two turns later.
"Qi?" I asked, only slightly less skeptical than I'd been about za. "It's a type of Chinese eternal life force," she said flatly. Again she handed me the dictionary. This time she racked up 64 points, jumping out to a sizable lead. By the end of the game, it was all I could do to cover my annoyance, congratulate her on her victory, and not mutter these bizarre two-letter words to myself as I finished off the box of wine. It was a particular brand of annoyance, a kind of childish sour-grapes complaint that tried to dismiss her knowledge as a kind of cheating. I'd rather lose than have to try that hard, I thought, as I dropped into a local bookstore on my way home and picked up a copy of the newest edition of the Scrabble dictionary.
I did a little online research, and my friend and I continued to play as spring turned to summer. I started playing other friends, coworkers, and family members. I started playing strangers online. Some opponents knew some of these mystical, magical words that were capable of scoring huge numbers of points; others didn't. I kept playing and learning, and before long I found myself perusing the game section of my bookstore for books specifically about Scrabble.
I started with Stefan Fatsis's bestseller _Word Freak_, one of the two best books ever written on Scrabble. The other is Paul McCarthy's _Letterati._ They are both fascinating, superbly researched insider's guides to the world of competitive Scrabble: the players, the tournaments, the gossip, and the history of the game. What they aren't, however, are books particularly suited to improving one's game. And suddenly that's what I was looking for more than anything else: a guide for a noncompetitive (the parlance is "parlor") player that was something other than the mind-numbing lists of words found in so-called "players' guides." That's what I was after.
In reading _Word Freak_ and _Letterati,_ while I loved being let in to the world of competitive Scrabble (a world I knew from the start I never wanted to enter into), I also grew increasingly saddened. Saddened to learn that while the elite players—the contenders for the national and world championships—have utterly unbelievable numbers of unusual words memorized, they don't know (and don't seem to even care!) what a lot of those words mean.
This makes sense, of course. There's only so much capacity in the human mind. If the mission is to know as many words as possible, why waste cranial real estate with a bunch of definitions? A nice little bungalow housing seven-letter words containing two _c_ s and two _e_ s could be built there.
Yet here it is, a game unlike chess or backgammon, poker or dominos, that has the ability to transcend the 225 squares of its board—that offers the chance to take what's learned ostensibly to beat one's opponent and also use it to spruce up the conversation that night at dinner. On one side, we have players who know all the words and don't care about their definitions. On the other, players who maybe know some of these words, but who naturally somewhat resent people who memorize a lot of them. I started to feel there must be more people like me—folks who would like the game even more for its capacity to increase one's vocabulary. (Of course, this is nothing new. One of the most exciting aspects of Scrabble for its inventor was just that.) And thus this book, which indulges in some of my favorite aspects of my favorite game: its peculiar history and numerous iterations, helpful strategies, quirky facts, and, above all, its wonderful, wonderful words.
## Scrabblish as a Second Language
The Scrabble lexicon, the game's authorized list of playable words for home use, is contained in the _Official Scrabble Players Dictionary_ , often referred to as the _OSPD._ In its fourth edition, the paperback version is inexpensive but impressively comprehensive despite its small size, which has also helped make the _OSPD_ the primary reference for settling disputes for all kinds of anagram-based games, from Bananagrams to Boggle. There's no rule that parlor players must use the _OSPD_ for Scrabble _,_ but for the sake of consistency and standardization, it's the logical choice. While not all online Scrabble-type games use the _OSPD,_ those that do not tend to use lexicons that are very similar. Ultimately, if an industry standard other than the _OSPD_ is taken up, it's hard to imagine that one will be chosen that risks alienating the many, many players who currently refer to the _OSPD_ as their word-game bible.
Occasionally, when I play Scrabble against someone unacquainted with the Scrabble lexicon and I use a word not typically part of an average (or even above average) English speaker's vocabulary, the response is less one of curiosity than criticism. "See," an opponent is likely to say, "that's what I don't like about Scrabble: all those ridiculous words."
This book culls from the _OSPD_ some of those ridiculous words, words that I personally find interesting for their definitions or linguistic construction and/or strategically helpful, with the hope not so much of making them any less odd or outlandish, but of embracing them for those reasons.
These words, while included in the _OSPD_ because they're included in at least one of five other, standardized English dictionaries (more on this in History of the Scrabble Lexicon on page 38), often do not resemble the English language we know. The _OSPD_ includes words like **teiid** and **xyst** , **cwm** and **kvas** , **ecu** and **fremd**. Even some of the words we might recognize, like **candle** or **necklace** , turn out to have meanings most English speakers would never imagine, opening up constructions as verbs like **candled** or **necklacing**.
In some ways, this book is less about Scrabble (and similar anagram games) than about the strange language that these games—most notably Scrabble—have given rise to, a language I like to think of as Scrabblish.
While it is a subset of English, Scrabblish consists of a wide array of words, many of which exist outside of most English speakers' vocabulary. As they're all legal ("playable") words in Scrabble, the Scrabblish lexicon contains no words that require capitalization or hyphenation. Often these words are archaic or obscure. In Scrabblish, an aged person who ate a piece of okra in the afternoon can also be described as an **oldster wha et ae bendy**.
Because even competitive Scrabble players with the greatest knowledge of the Scrabblish lexicon know but a fraction of the definitions of these words, I feel safe in saying that Scrabblish, while studied meticulously, is still unspoken—and for that we should be thankful. Still, I find Scrabblish utterly charming (and quite literally very playful), as well as a fascinating gateway into the possibilities that the English language offers, if we only care to look.
## How Words Are Designated
This book categorizes words into three sets: words (like **chair** ) that are playable in Scrabble and included in the _OSPD_ for parlor play, words (like **_shit_** ) that are playable in Scrabble tournaments but are censored from or too long to be included in the _OSPD_ (more on this difference in The History of the Scrabble Lexicon, pg. 38), and words (like _Santa_ ) that are not allowed in Scrabble. Words from the first set are in **bold** , words from the second set are in **_bold and italics_** , and words from the last group are _underlined_.
Definitions are almost always based on the ones offered in the _OSPD,_ which favors uncommon usages, but occasionally the more common usage is invoked. Words are almost always used in the part of speech given/favored by the _OSPD_ to highlight possible suffixes.
Some Scrabblish alternatives to common words are sprinkled throughout the text in this book. Definitions are located in nearby sidebars for convenience.
## Ways to Improve Your Game
While learning some of the words in this book should improve your Scrabble game substantially, its purpose is not solely to make you a better player. There's some of that, to be sure, but if you really want to improve your game, there are other options available to you. Namely: play a lot, especially against much better players (online games or matches against a computer program work fine); buy any or all of the wonderful, exhaustive players' guidebooks that exist; or simply pick up a copy of the _OSPD_ , study it, create flash cards and lists, and slowly kiss that pesky social life of yours good-bye.
This book is first and foremost about having fun with the words and the game. Scrabble set, park, and box of wine sold separately.
# PART 1
**The Story of Scrabble and Beyond:**
The Game's Creation, Mutations, and Relations, Plus a Look at Its Most Remarkable Records
Scrabble was born in Queens, came of age in downtown New York chess clubs, and has been entertaining and challenging players ever since. But that's not to say it hasn't experienced its growing pains along the way. As the game has evolved, so have its spinoffs and various online iterations—and so has Scrabble's official dictionary, which came under fire for some of its more noxious entries. Finally, no study of the game would be complete without a moment to celebrate some of its players' most fantastic feats, from winning with negative points to what might be the greatest Scrabble play ever.
## Alfred Mosher Butts
### Recession Is the Mother of Invention
It was the 1930s, and an out-of-work, thirty-three-year-old architect with diverse interests and an obsessive personality thought Americans could use a new game to help pass the hard times. Working out of his fifth-floor walk-up in the Queens, New York, neighborhood of Jackson Heights, Alfred Mosher Butts started by writing a three-page "History of Games" in which he made three classifications: "men-on-a-board games" like chess and backgammon, numbers games using dice or cards, and games involving letters and words. Butts was particularly fond of backgammon, which he thought correctly balanced the elements of skill and luck to create "a much more satisfactory and enduring amusement." Studying the overall landscape of the games industry in the United States, Butts determined that the category that showed the most promise for innovation was word games, of which the prevailing model at the time was a game called Anagrams.
Anagrams is reputed to date back to Victorian England, and it is still one of the most popular non-Scrabble games among Scrabble tournament players. Selchow & Righter, the first established game company to produce Scrabble, was already producing a game called Anagrams when it bought the rights to Scrabble. Anagrams involved players overturning tiles, evenly distributed across the alphabet, one at a time to create words. (Today, Anagrams is most often played with Scrabble tiles, but it is also available for purchase as a game called Snatch It.)
Butts's breakthrough in improving Anagrams came when he read Edgar Allan Poe's "The Gold Bug," in which a character tries to crack a code of symbols to find a hidden treasure. The code is solved by comparing the frequency of certain symbols with their frequency in the English language, starting with that most recurrent of letters, _e_. Butts realized that a game that took into account the proportion of different letters in English words, instead of simply producing an equal number of each letter (akin to playing cards), would make game play easier, yet still incorporate a strong factor of the luck of the draw.
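Butts's frequency-counting approach is easy to reproduce. The sketch below is a hypothetical reconstruction in Python, not his actual method: the sample text and the 98-tile pool (a standard set is 100 tiles, two of them blanks) stand in for his newspaper pages and hand-tuned counts.

```python
from collections import Counter

def tile_distribution(text, total_tiles=98):
    """Scale observed letter frequencies to a fixed pool of tiles,
    guaranteeing at least one tile per letter that appears."""
    letters = [c for c in text.upper() if c.isalpha()]
    freq = Counter(letters)
    n = len(letters)
    return {ch: max(1, round(total_tiles * count / n))
            for ch, count in freq.items()}

# Illustrative stand-in for the newspaper pages Butts counted.
sample = "the quick brown fox jumps over the lazy dog " * 50
dist = tile_distribution(sample)
```

As in Butts's counts, common letters like E end up with many tiles while rare ones like Z get only a few; he then adjusted the raw numbers by hand, most famously capping the S count to keep plurals from being too easy.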
**Alfredo:** an Italian sauce of cream, butter, cheese, and garlic
**Mosher:** one who dances violently to rock music
**Butt:** to strike with a thrust of the head
And so Butts launched into the creation of Lexiko, the first iteration of Scrabble. It's often said that Butts created the letter distribution by dissecting the front page of the _New York Times_ ; actually he used pages from the _Times,_ the _New York Herald Tribune_ , and the _Saturday Evening Post._ Butts sold the first copies of Lexiko in 1933.
Butts continued to **tew** at the game, adding a board, adding premium squares, reworking the distribution and number of tiles, tweaking the rules. He tested Lexiko at home with his **feme** , manufactured sets in his living room, and sold them by word of mouth: by August 1934, he'd sold eighty-four sets at about $1.50 a game **fer** $127. But he'd had to **wair** $147 on plywood, ink, glue, and such. The game grew in popularity so that Butts had to **moil** to keep up with Christmas orders, but when he tried selling the game to a large **biz** , Milton Bradley, Parker Brothers, and the publishing house Simon and Schuster all passed. In 1938, he changed the name of the game from Lexiko to Criss-Cross Words.
Butts eventually gave up his search for a buyer, but in 1947 a lawyer made him an offer and bought the rights; Butts would receive a small royalty. (Butts lived out his life collecting checks large enough to live comfortably, but far smaller than one might expect for creating a game that would become—and remain—so incredibly popular.) The lawyer tinkered with the board, farmed out much of the production, and changed the name to Scrabble. The name was chosen in small part for its meaning (to grope about frantically), in large part for its evocation of the word _scramble_ , and certainly not least because there was no similar trademarked name.
**Tew:** to work hard
**Feme:** a wife
**Fer:** for
**Wair:** to spend
**Moil:** to work
**Biz:** a business
The only anagram of Scrabble is clabbers: to beat badly.
Sales continued to rise until 1952, when legend has it that Macy's chairman Jack Straus played Scrabble during his vacation on Long Island. He quickly became totally taken with it but was surprised to learn that his store didn't carry the game. He submitted a large order and other stores soon followed suit. By the end of the year, two thousand sets were being sold a week. A year later, the Queen of England was spotted buying a set in New York and sales so skyrocketed that the _Herald Tribune_ wrote that 1953 could be memorable for "any number of notable events, from the inauguration of a Republican president to the growth of Scrabble."
Nearly four million sets were sold in 1954, and sales have remained strong ever since, as the American rights were first sold to Selchow & Righter, then transferred to Coleco, and most recently to Hasbro. It's estimated that worldwide sales are still around three or four million sets per year, with the recent increase in the popularity of online play creating a fresh surge in sales of traditional sets. Scrabble has now sold more than 150 million sets worldwide and can be found in a third of American homes.
**From Life magazine, December 14, 1953**
"Christmas shoppers in search of the standard $2 Scrabble set can get it in only two ways: they can place their names on the bottom of waiting lists, or they can lurk for hours beside a counter until a shipment arrives, at which time they take their chances like football players going after a fumble. No game in the history of the trade has ever sold so rapidly and few have shown such promise of consistent, long term popularity."
## The Letter of the Law
### Clarifying Certain Rules
One of the beautiful aspects of Scrabble is the relative simplicity of its rules and game play. Even so, some procedural questions occasionally crop up that are unaddressed in the short rule book packaged with each set. While players are welcome to follow their own house rules, here is a rundown of the rules of the finer points of the game as they have been agreed upon for competitive play in the United States, offered here primarily in the hope of helping maintain the domestic peace.
#### STARTING THE GAME
Players begin by drawing one tile each. The player with a tile closest to the beginning of the alphabet goes first. A blank supersedes an A. If two or more players tie, only the tied players draw again. Once the order is established, players return their tiles to the bag, and the player to go first picks his letters first. Play proceeds clockwise.
#### ENDING THE GAME
The game ends when no tiles remain in the bag and one player has played all his tiles, or when all players pass three times consecutively. In two-player Scrabble, the player who finishes all his tiles first receives twice the value of his opponent's tiles. In games of more than two players, the player who finishes first receives the sum of all the opponents' tiles, and each opponent subtracts the sum of the tiles on his own rack from his own score. If play ends without any player using all his tiles, the sum of each player's rack is subtracted from his score. Blanks have no value. High score wins. Ties exist—there is no tiebreaker. (In tournaments, a tie counts as half a win and half a loss.)
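The end-of-game arithmetic above can be sketched as a small Python helper. The function and the rack representation are mine, not from any official rulebook: racks are strings of leftover tiles, '?' is the blank (worth nothing), and an empty string means that player went out.

```python
from typing import Dict, List

def final_scores(scores: List[int], racks: List[str],
                 tile_values: Dict[str, int]) -> List[int]:
    """Apply the end-of-game adjustment described above."""
    sums = [sum(tile_values.get(t, 0) for t in rack) for rack in racks]
    out = list(scores)
    if "" in racks:                      # someone played out all their tiles
        w = racks.index("")
        if len(racks) == 2:              # two-player: twice the opponent's tiles
            out[w] += 2 * sums[1 - w]
        else:                            # multiplayer: winner gains all leftovers,
            out[w] += sum(sums)          # everyone else subtracts their own rack
            for i, s in enumerate(sums):
                if i != w:
                    out[i] -= s
    else:                                # ended on passes: all subtract own racks
        out = [sc - s for sc, s in zip(out, sums)]
    return out
```

For example, going out while your opponent still holds an A and a Z (1 + 10 points) is worth a 22-point bonus in a two-player game.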
#### SCORING PREMIUM SQUARES AND THE BINGO BONUS
Double or Triple Letter Scores are computed in a play's score before a Double or Triple Word Score. A 50-point bonus for a "bingo" (using all seven letters in one play) is added on only after any multiplication has taken place.
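The order of operations matters: letter multipliers first, then word multipliers, then the flat 50-point bonus. A minimal sketch (the names and list-based layout are my own):

```python
def score_play(letter_points, letter_mult, word_mults, bingo=False):
    """Score a single word. `letter_points` and `letter_mult` are
    parallel lists (one entry per tile); `word_mults` holds any
    Double/Triple Word multipliers the play covers."""
    total = sum(p * m for p, m in zip(letter_points, letter_mult))
    for wm in word_mults:
        total *= wm
    return total + (50 if bingo else 0)  # bonus added after all multiplying
```

Playing QI with the Q (10 points) on a Double Letter Score and the word on a Triple Word Score gives (20 + 1) x 3 = 63, and a seven-tile play of one-point letters on a Double Word Score gives 7 x 2 + 50 = 64, not (14 + 50) x 2.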
**National Scrabble Day**
National Scrabble Day is celebrated each year on April 13, and it commemorates Alfred Mosher Butts's birthday in 1899. Generally, the Scrabble-obsessed celebrate by indulging in their obsession, but one might also consider observing the day by speaking Scrabblish, or at least peppering one's speech with _q_ -without- _u_ words.
#### DRAWING TILES
Tiles should be kept in an opaque bag. When a player draws tiles, the player's eyes should be averted and the bag should be held at arm's length and eye level. Feeling the face of a tile with one's fingers to discern what it is (known as " **brailling** ") is not allowed.
In Scrabble parlance, to **braille** is to cheat by feeling the face of a letter when it's in the bag. Games in Scrabble tournaments are played with flat-faced plastic letters called _Protiles_ to eliminate brailling (sets of Protiles can be purchased online for about $20). The _Official Scrabble Players Dictionary_ defines braille as "to write in braille," but there's also to **brail** , to haul in a sail—which offers a kind of double entendre to the Scrabble players' usage. Of course, all of this is moot for blind Scrabble players, for whom sets with braille tiles exist, which themselves must be terrifically simple to braille if tiles are chosen from a bag rather than by first being laid out facedown on the table.
#### DRAWING TOO MANY TILES AT ONCE
It sometimes happens that a player picks too many tiles during a turn, and doesn't notice that he has too many tiles until they are already on his rack. The player's opponent then gets to select as many tiles as have been overdrawn plus two additional tiles to look at, and then choose whichever tile(s) he wants to return to the bag to correct the guilty player's rack. If the extra tiles have not been integrated into the player's rack, the opponent chooses only from the newly chosen tiles.
#### TRADING IN TILES
A player can use a turn to trade in one to seven tiles at any time in the game until there are fewer than seven tiles left in the bag. The player should announce the number of tiles he's trading in, set those tiles face down to the side, then choose replacements, and only then return the tiles he's trading in to the bag. Of course, a player receives no points for that turn, but learning when it's wise to exchange tiles is a skill that's worth its weight in blanks.
**Recycling Tiles: Ecological Scrabble**
One slight rule change that can make classic Scrabble a bit more interesting and fun—not to mention a fair bit easier—is to allow players to replace the blank once it has been played. Sometimes this is called Ecological Scrabble, as the blank is "recycled." Under this adaptation, a blank played on the board in one turn can be substituted for the letter it represents by any player on a subsequent turn. The caveat is that the blank must be used at once by the player who picks it up. Another version allows players to replace the blank with any letter that would create a legal play. Points are awarded for any new word(s) created using the blank, but not for the word(s) created by replacing the blank with a letter.
#### THE CHALLENGE RULE
In competitive Scrabble, there's no rule that says one must play a legal word. But if a word is challenged and is found not to be legal (called a **phony** in Scrabble parlance), the player who set it down loses his turn. Conversely, if a challenged word is found to be playable, the challenger loses his turn. When a play is challenged, all new words that may have been made during a turn are included in the challenge, and the invalidity of any of them results in the loss of the entire turn. It's a system that rewards knowledge (and confidence in one's knowledge) of the official word list.
**Challenging Evolution**
In the early days of competitive Scrabble, the standard "look-up rule" dictated that the player who played the word had to look up the word if challenged. Today this seems odd, as it gives the player a chance to find other possible plays for the next turn if the challenged word is not listed. But at the time, the de facto dictionary for play was the Funk & Wagnalls College Dictionary, and words in it were often hard to find (one had to know that maria was the plural of mare, and so would be found under mare, for example). So the onus was on the player to know the dictionary well. Having an opponent look up a word could punish a player if the opponent didn't know where to find it. Today in competitive play, it's standard to have an impartial party do the adjudicating.
## Basic Strategizing
Despite their stated point values, some tiles are worth more than others. Anyone who's ever played Scrabble knows good letters (the blank, S, R, X, etc.) from bad tiles (the V and C don't form any two-letter words, so they can be particularly annoying). The trick is not to waste good letters for small scores and to unload bad letters (by playing them or trading them in) without letting them sit on your rack too long and ruin the chance for a bingo, or at least for drawing better letters.
#### ESSES TO EXCESS
They say you can't have too much of a good thing, so it might follow that the more Ss in your rack, the better. But as having two Ss at once actually decreases the chances of scoring a bingo, if you're holding more than one, it's often best to play off the extras as soon as possible and hold on to a single S until the timing for a big play is right.
#### **KEEP TRACK OF THE POWER TILES, BLANKS, AND SS**
Particularly as the game progresses, it's helpful to know what tiles your opponent might be holding. This is nowhere near as hard as counting cards—and it's perfectly legal. Scrabble boards even have the tile distribution numbers printed on them off to the side.
The main tiles to keep track of are the "power tiles," the five highest-scoring tiles, of which there is only one apiece: Z, Q, X, J, and K. Before you play an A right below a Triple Letter Score, check to see if your opponent might have the Z. It could be a 60-plus point difference. Know the two-letter words using these tiles, and be mindful of them even when you don't have these letters. And certainly, knowing how many blanks or Ss are out there can inform a decision about whether to make a certain play.
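This bookkeeping is easy to automate. A sketch, using only a few letters of the distribution for illustration (a real tracker would start from the full 100-tile set printed on the board):

```python
from collections import Counter

# A handful of entries from the standard distribution; '?' is the blank.
PARTIAL_SET = Counter({'Z': 1, 'Q': 1, 'X': 1, 'J': 1, 'K': 1,
                       'S': 4, '?': 2, 'E': 12, 'A': 9})

def unseen(board_tiles: str, my_rack: str,
           full_set: Counter = PARTIAL_SET) -> Counter:
    """Tiles neither on the board nor on my rack: whatever remains
    must be in the bag or on the opponent's rack."""
    return full_set - (Counter(board_tiles.upper()) + Counter(my_rack.upper()))

left = unseen("QIZ", "SEA")  # board shows Q, I, Z; I hold S, E, A
```

If `left['Z']` is zero before you hook an A under a Triple Letter Score, the 60-plus-point counterplay warned about above is off the table.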
##### **Dynamic Duos: The Ten Most Important (i.e., highest-scoring) Two-Letter Words**
_Do not turn the page until you learn this list._
* * *
**Za:** pizza
---
**Qi:** the central life force in traditional Chinese culture, pronounced as "chee" (also **ki** )
**Jo:** a sweetheart
**Ax:** a tool for chopping wood
**Ex:** to cross something out
**Ox:** a large mammal (pl. -en) or a large, oafish person (pl. -es)
**Xi:** a Greek letter _(pronounced like the zai in_ bonsai _)_
**Xu:** a monetary unit of Vietnam equal to one-hundredth of a **dong** _(derived from the French_ **sou** _)_ (also **sau** )
**Ka:** the eternal soul in ancient Egyptian spirituality
**Ki:** the life force in traditional Chinese culture, pronounced as "chee" (also **qi** )
* * *
## **Variations on a Theme**
### **Non-Scrabble Word Games**
#### **GAMES BY HASBRO**
Over the years, Hasbro and previous owners of the Scrabble brand have released a number of spin-off games bearing the Scrabble name and alternative anagram word games. Of the bunch, Up Words and Super Scrabble have proven the most popular, but there are plenty of others currently available in stores, or on offer at yard sales.
**Up Words:** An excellent "three-dimensional" Scrabble-like game that can sometimes be even **funner** than Scrabble, Up Words is played with stackable tiles, creating the opportunity to change letters in words as well as build off them. For instance, an F tile can be played atop the B in **boxes** to create **foxes** , and then used perpendicularly to make **flap**. Up Words is a faster game than Scrabble, and in a way more aesthetically pleasing: what better treat for the lover of words than to watch as a little three-dimensional city of various-sized towers built of letters springs up over the course of a game?
**Super Scrabble:** A much larger board, more than twice as many tiles, and quadruple premium squares combine to make this super-sized Scrabble game higher scoring. With its 21-by-21-square board, one can nerd out like never before and finally play **_pseudosophistications_**!
**Scrabble Slam:** A card game in which players rush to place cards in the center to form words. A low-stakes way to bone up on four-letter words—especially when playing against opponents with larger **vocabs** (vocabularies). It moves fast but gets old fast, too.
**Scrabble Scramble:** In this decent on-the-go Scrabble spin-off, players shake out dice with letters on each face and make words on a tiny Scrabble board. A good way to pass half an hour at the beach with your librarian friend, it's a slightly improved version of the old Crosswords game by Milton Bradley.
**Scrabble Upper Hand:** Billed as a "grand slam word game," this mix of Scrabble and Bridge (why it wasn't simply called Scribbidge I can't understand) came out from Selchow and Righter. It didn't last long. This game is not to be confused with the still extant Kings Cribbage (put out by Conoco, not Hasbro), which is basically Cribbage played on a Scrabble board. Upper Hand is not terribly easy to pick up, and seems destined to remain forever abandoned and gathering dust in the great toy attic of history.
**Scrabble Overturn:** Overturn was an interesting "spin" on Scrabble, incorporating aspects of Othello (Go). Each tile is a cylinder with the same letter printed on it four times in four different colors. Each player (up to four) is assigned a color, and as a word is played on the board, the player turns all the tiles in that word to her color, as well as tiles in any other accompanying new words. For instance, if **zag** is on the board in green, and the player using purple letters creates **zags** and **sugar** , all the letters in **sugar** , plus the former **zag** (as it's now **zags** ) are turned to purple. Players record scores for words created as the game progresses, and these are added to points totaled at the end of the game for the words that are then in each player's color. Unfortunately, the round tiles are difficult to manipulate in the racks, and if one is dropped it has a habit of rolling away. Still, this is a good game to pick up at a yard sale (it's been out of print for years); just don't play it on the top of a hill.
#### **BOLDLY PLAYING: STAR TREK SCRABBLE**
Scrabble is in many ways the Law & Order of board games: it's cheap to produce, very addictive, and there have been far too many spin-offs to count. And though there hasn't yet been a Law & Order Scrabble spin-off (wouldn't mccoy make a great word?), it probably wouldn't be the oddest. Spin-off sets based on such franchises as Shrek, The Wizard of Oz, Major League Baseball, the Chicago Cubs (perhaps a good gift to give an adversary, as the Cubs haven't won a championship in over a century), the New York Yankees (yankee isn't playable but **yanqui** , meaning an American citizen, is), The Simpsons, Dora the Explorer, John Deere, and Star Trek have all been unleashed. Beyond their thematically stylized boards, the spin-offs include allowances for the lexical idiosyncrasies of their subjects, including bonus points for certain words.
The _Star Trek_ edition, for instance, offers bonus points for **captain** and vulcan, which is legal in that game only. Triple Word Scores are called Tribble Word Scores. Even the very language of the game play is altered: when exchanging tiles, players are instructed to announce, "I am a doctor, not a linguist!" and when a challenge is called for, the challenger is to suggest that the word seems "illogical."
**Howbeit** (nevertheless), you don't need a special _Star Trek_ set to play _Star Trek_ –like words, nor do you need to know Klingon (although when you see the words that champion tournament players make, it often looks like they do). While Spock isn't good, there are some words that are playable in a traditional Scrabble game that may resemble the Trekkie words you're familiar with.
**Scottie:** a type of terrier
**Sulu:** a Malaysian skirt
**Kirk:** a church
**Whoopie:** one who cheers loudly (also **whoopee** )
**Khan:** an Asian ruler
**Phlox:** a genus of flowering plants
**Vulcanic:** pertaining to volcanoes
Scotty is no good in Scrabble, but you can bring your **scottie** to the game. You can even dress him in a **sulu** and watch him play behind the **kirk**. Whoopi Goldberg can't come either, but a **whoopie** can cheer you on. Choose your opponent wisely: beware the wrath of a **khan**. If you beat him, soothe his anger with a bouquet of **phlox**. (In case you were wondering, the species known as volcano phlox is not **vulcanic**.)
#### **PRESIDENTIAL SCRABBLE**
Barack Obama is a self-described Scrabble fan, George W. Bush has said his favorite iPad **app** is Scrabble, and Bill Clinton passed much of his time after his heart surgery playing Up Words. But for those who want a hefty serving of presidential politics infused into their crossword board game, Presidential Scrabble is also available. Like other Scrabble spin-offs, Presidential Scrabble changes the board a little (it's round instead of square and offers bonus squares based on the Electoral College votes of different states), awards extra points for using some themed lingo (like **vote** ), adds to the playable lexicon (LBJ and FDR are good), and otherwise toys with the original (a deck of cards featuring presidents offers special privileges—the Nixon card allows a player to be pardoned for misspelling a word).
But there was already plenty of presidential stuff in the regular old, plain- **jane** (Adams) version of Scrabble. For instance, the names of many **prezes** are also common nouns and are thus playable: **Pierce** , **Grant** , **Hoover** , **Ford** , **Carter** , and **Bush**. President (and former **veep** ) **_Johnson_** is only invited to participate in tournament play.
**Bush:** to cover with shrubs
**Carter:** one who carts
**Clintonia:** an herb of the lily family with yellow, white, or purple flowers
**Ford:** to cross
**Grant:** to permit
**Hoover:** to use a vacuum cleaner
**_Johnson:_** a penis
**Pierce:** to puncture
**Prex:** a president, usually of a college (also **prexy** )
**Prez:** a president
**Veep:** vice president
#### **OTHER SCRABBLE-LIKE GAMES**
The Scrabble lexicon isn't the only part of the game that allows for seemingly endless variation; there are lots of ways to tinker with the game itself. Online and mobile games have exploded in popularity in recent years, as have more analog and less traditional versions of the classic anagrams game.
##### **Trickster Less Tricky**
In 2010, Mattel, which owns the rights to Scrabble everywhere except in North America, debuted Scrabble Trickster. Not available in North America, Trickster is basically classic Scrabble, except that it permits proper nouns, and if words are placed on particular tiles, players may spell words backward or "steal" tiles from other players. The game's release garnered a lot of attention on both sides of the pond, much of which was critical of its apparent aim to court players with pop culture at the expense of building and rewarding vocabulary skills. Articles lambasted Trickster for allowing names like JayZ, Lady Gaga, and Barack Obama. But while one cannot play proper nouns as such in classic Scrabble, the original does allow for **Jay** , **Zee** , **Lady** , **Gaga** , **Barrack** (careful, two Rs here), **Oba** , and **Ma**.
* * *
---
**Jay:** the letter _j_
**Zee:** the letter _z_
**Shawn:** a past tense of shaw
**Carter:** one who carts
**Lady:** a woman
**Gaga:** insane
**Barrack:** to shout boisterously
**Oba:** a hereditary chief in Benin and Nigeria
**Ma:** mother
##### **SPEED SCRABBLE/BANANAGRAMS**
As the name suggests, Speed Scrabble is a fast, less formal variant of Scrabble. Also known as "Take Two" (with slightly different rules) and similar to Bananagrams, Speed Scrabble is played with Scrabble tiles but without the board.
Bananagrams is a product completely independent of Scrabble, consisting of 144 tiles (as opposed to Scrabble's 100 tiles) without point values in a **cavendish** -esque purse (made of yellow cloth, not **abaca** ). Its rules vary slightly from Speed Scrabble, but it is essentially the same game. Perhaps the most noticeable difference is the banana-centric lingo used in the game, like saying "Split" to begin the game and "Peel" to pick another tile.
**RULES**
**1.** All the tiles are set **facedown** in the center of a playing space **atween** the players.
**2.** Each player draws seven tiles and at the word "Go" flips the tiles and commences to make her own crossword configuration of the letters.
**3.** As soon as one player uses all seven of her tiles, she announces, "Pick one!" and each player picks an additional tile from the upside-down tiles in the center. (A popular form of this game has players pick two tiles at a time.) Each player continues to reconfigure her own crossword in the attempt to incorporate all her own tiles.
**4.** Play continues with a player announcing "Pick one!" as soon as she has incorporated all her tiles into her crossword.
**5.** The round ends when there are not enough tiles for all players to pick a tile (or two tiles) each, and then the first player to incorporate all her tiles into her crossword wins.
**6.** The winner is awarded the total point value of all other players' unused tiles, and each other player subtracts the total of her own unused tiles from her own score.
**7.** All words in the _OSPD_ are legal, though **vulgo** (often) it's **mair** (more) fun and challenging to play without two-letter words. (Familiarity with the words in this book is particularly useful in Speed Scrabble.)
**8.** Often, a bonus of 10 points is awarded to players for each seven-letter (or longer) word they have created, or for the longest word of the round.
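The end-of-round arithmetic in rule 6 is easy to fumble at the table. A minimal Python sketch of that settlement (the function and player names are my own invention):

```python
def settle_round(unused_points, winner):
    """Apply Speed Scrabble rule 6: the winner gains every other player's
    leftover tile points; everyone else loses their own leftovers."""
    deltas = {}
    for player, points in unused_points.items():
        if player == winner:
            # Winner collects the sum of all opponents' unused tile points.
            deltas[player] = sum(
                p for who, p in unused_points.items() if who != winner
            )
        else:
            # Everyone else subtracts their own unused tile points.
            deltas[player] = -points
    return deltas

# Example: Ann goes out (0 leftover); Bob and Cam are stuck with tiles.
print(settle_round({"Ann": 0, "Bob": 7, "Cam": 3}, winner="Ann"))
# {'Ann': 10, 'Bob': -7, 'Cam': -3}
```

Any per-word bonuses from rule 8 would simply be added on top of these deltas.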
**Cavendish:** a process of curing tobacco _(When referring to the Cavendish banana, it's capitalized.)_
**Abaca:** a species of banana harvested for its hemp (also **abaka** )
**Facedown:** with the face-side downward
**Atween:** between
**Vulgo:** often
**Mair:** more
**WARNINGS AND TIPS**
**1.** It's usually beneficial to try to create long words, as they'll create more hooks for additional words.
**2.** Don't neglect to inspect the winner's crossword—often there are tiles from an old word that were not moved as the player changed her board. And make sure she hasn't conveniently played an upside-down M as a W somewhere.
**Note:** Speed Scrabble is also the name of a form of traditional Scrabble played extremely quickly, similar to Speed Chess. While competitive Scrabble is generally played with a time limit of 25 minutes per player, Speed Scrabble allots only three minutes per competitor. A point is deducted for every second taken over the three-minute limit. The engaging Scrabble documentary _Word Wars_ , based on Stefan Fatsis's book, _Word Freak_ , contains footage of one such match, and it's pretty impressive.
##### **ANAGRAMS**
Anagrams, also known as the board game Snatch-It, is probably the most common Scrabble variant played by competitive Scrabble players. Like Speed Scrabble, it's fast moving and played without a board. But it reveals itself as a more cutthroat contest in that all players have access to tiles as soon as they're turned over, and players can steal ("snatch") each other's tiles after opponents have already incorporated them into words.
**RULES**
**1.** Play starts with all the tiles facedown between all competitors. Players establish rules beforehand governing the minimum length of playable words and whether adding simple prefixes like _re_ \- and _un_ \- and suffixes like - _er_ , - _ed_ , and - _ing_ are legal.
**2.** Players take turns turning over one tile at a time, leaving the overturned tiles in the center of the playing space.
**3.** When a player sees a word that can be made from the overturned tiles, she announces the word and takes the tiles, forming the word in front of her so the other players can see it.
**4.** As tiles continue to be turned over, players can use any combination of tiles in the center or other players' complete words (called stealing) to create new words (a player may use an S with an opponent's **rate** to create **stare** , for instance). The new word is placed in front of the player who makes it.
**5.** Play ends when all the tiles are overturned and no player can make a new word.
**6.** If two players call out different words at the same time, the longer word takes precedence. If words are of the same length, the higher scoring (based on the tiles' points) takes precedence. If point totals are also the same, or the same word is called out by two players simultaneously, players draw letters, and the closest to the beginning of the alphabet takes precedence. Letters overturned for this reason are then flipped over again and mixed back in with the remaining tiles.
**7.** Scoring can be determined by the number of tiles, words, or points of the tiles each player has in front of him. Bonus points for words of a certain length may also be awarded.
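The precedence order in rule 6 (length first, then tile points, then a letter draw) is essentially a sort key. A minimal Python sketch using the standard Scrabble letter values; the function names are my own:

```python
# Standard Scrabble letter values, used here to break length ties.
POINTS = {letter: pts for pts, letters in {
    1: "AEILNORSTU", 2: "DG", 3: "BCMP", 4: "FHVWY",
    5: "K", 8: "JX", 10: "QZ",
}.items() for letter in letters}

def word_score(word):
    """Sum of the tile values of a word."""
    return sum(POINTS[letter] for letter in word.upper())

def takes_precedence(a, b):
    """Return the word that wins under Anagrams rule 6, or None for a
    dead tie (which the rules settle by drawing letters)."""
    key = lambda w: (len(w), word_score(w))   # length first, then points
    if key(a) == key(b):
        return None
    return max(a, b, key=key)

print(takes_precedence("stare", "quiz"))  # longer word wins: stare
print(takes_precedence("rate", "jape"))   # same length; jape scores higher
```

Blanks have no point value, so a word played with a blank would count that tile as zero in `word_score`.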
**WARNINGS AND TIPS**
**1.** It's often best to play with two sets of Scrabble tiles at once, offering longer game play and thus more possibilities to create anagrams of longer words.
**2.** If playing with children, consider allowing them to create shorter words. (Then steal their words from them—tough love breeds champions!)
##### **PLAY WITH T AND A: STRIP SCRABBLE**
Perhaps the most **risque** permutation of Scrabble around is Strip Scrabble. No matter how cold your letters are, play can turn steamy at any moment. And it's the only version of the game that encourages competitors to show each other their racks. What's not to love?
**RULES**
**1.** In this game in particular, it's important to set the ground rules early: players should agree at the outset to wear the same number of articles of clothing; how much **nakeder** everyone feels comfortable becoming (maybe you want to stop at your **undies** ); and what exactly constitutes an article of clothing (good rules of thumb: jewelry should not count and a pair of **sox** should be considered one article—the one sock on and one **aff** look doesn't do anybody any favors).
**2.** Play involves a normal Scrabble set and follows all the rules of Scrabble, with the following exceptions:
• A player who scores less than 10 points (or 20 points, depending on the skills of the players) in a turn—including trading in tiles—must take off one article of clothing.
• If a player makes use of a Triple Word Score, all other players take off an article of clothing.
• If a player scores a bingo, all other players take off two articles of clothing.
**3.** At the end of the game, all players except the winner (though fair play might dictate the winner also) shed all remaining clothes to the agreed upon level of nudity. If everyone's already naked, the question must be asked: why are you still playing?
**Risque:** verging on impropriety
**Nakeder:** more naked
**Undies:** underwear
**Sox:** plural of sock (also **socks** )
**Aff:** off
**Moppet:** a child
**Cummer:** a godmother
**Granny:** a grandmother
**Aa:** a type of stony, rough lava (pronounced a'ah) _(A Hawaiian word, it also originally meant "burn.")_
**Bee:** the letter _b_
**Cee:** the letter _c_
**Dee:** the letter _d_
**Doubled:** multiplied by two
**Vino:** wine
**Vera:** very
**Oot:** out
**WARNINGS AND TIPS**
**1.** Should not be played with **moppets** , **cummers** , or **grannies** —especially my **granny**!
**2.** Use caution: tiles have a way of sticking to the skin (also: hiding tiles on one's person is generally frowned upon—unless it's a really good hiding spot).
**3.** Unfortunately, xxx is not legal, even with an X and both blanks. **Aa** is the only playable bra size, unless one counts **bee** , **cee** , **dee** , and of course, **doubleD.**
#### **Now I Know My A, Bee, Cees: Alphabets**
The letter _a_ is spelled, simply, "a"—so it's too short to be played in Scrabble, and the plural, aes, is also not playable. The same goes for _e_ and its plural, ees, _i_ and ies, and _u_ and ues. As it and its plural are hyphenated, "double-u" is clearly not playable. However, **ae** is playable (adj., one), and **oes** makes it in as the plural of **oe** (a wind off the Faeroe Islands).
The rest of the English alphabet is all fair game:
* * *
**bee** | **kay** | **es** and **ess**
---|---|---
**cee** | **el** and **ell** | **tee**
**dee** | **em** | **vee**
**ef** and **eff** | **en** | **ex**
**gee** | **pee** | **wye**
**aitch** | **kue** | **zee** and **zed** and **izzard**
**jay** | **ar** |
* * *
#### **Other Alphabets**
The names of the twenty-four Greek letters are all playable words.
**Ere eta** comes **zeta** , and **phi** is found **nigh chi**.
**Ere:** before
**Nigh:** near
* * *
**Alpha** | **Epsilon** | **Iota**
---|---|---
**Beta** | **Zeta** | **Kappa**
**Gamma** | **Eta** | **Lambda**
**Delta** | **Theta** | **Mu**
**Nu** | **Rho** | **Phi**
**Xi** | **Sigma** | **Chi** (also **khi** )
**Omicron** | **Tau** | **Psi**
**Pi** | **Upsilon** | **Omega**
* * *
The names of all the Hebrew letters are playable words.
* * *
---
**Alef, aleph** ( **alif** is an Arabic letter)
**Bes** , **beth**
**Gimel**
**Daledh** , **daleth**
**Heh**
**Vau** , **vav** , **vaw** , **waw**
**Zayin**
**Cheth** , **het** , **heth** , **khet** , **kheth**
**Tet** , **teth**
**Yod** , **yodh**
**Kaf** , **kaph** , **khaf** , **khaph**
**Lamed** , **lamedh**
**Mem**
**Nun**
**Samech** , **samek** , **samekh**
**Ain** , **ayin**
**Fe** , **feh**
**Pe** , **peh**
**Sade** , **sadhe** , **sadi** , **tsade** , **tsadi**
**Koaph** , **qoph** , **caph**
**Resh**
**Sin**
**Shin**
**Tav**
* * *
**4.** Subsets of words—such as types of clothing, parts of the anatomy, or words having to do with sex—can be determined at the outset of the game to be wild cards and, when played, cause opponents to shed an article of clothing.
**5.** Contrary to its effect on most types of Scrabble games, booze (especially **vino** ) **vera** much increases the level and enjoyment of play.
**6.** If everyone's wearing a lot of clothes because it's cold **oot** —or perhaps if your opponents are particularly hot—consider adding the caveat that at the end of each round, everyone but the player with the highest point total for that round takes off one article of clothing.
**7.** Team play is strongly encouraged.
##### **HAGGLE SCRABBLE**
There are two versions of Scrabble that incorporate money to make the game a little easier and a little more interesting: the first is Haggle Scrabble; the second, Cheaters' Scrabble. Games can be played using Monopoly money. The Monopoly amounts are given on the next page; if using cash, divide numbers by 100.
In Haggle Scrabble, sometimes known as Bartering Scrabble, players start without any money. Players' points are awarded as money at the end of each turn. So a player who makes a 26-point move is given $26. (No need for that pesky score sheet!) As players accumulate money, they can offer deals to their opponents. Players can make an offer to trade letters with opponents (the whole rack or a certain number of letters, either by first inspecting the opponent's rack or blindly swapping), to pay an opponent to pass his turn, to swap turns, or just about any other cheat they can think of. Players can also make offers to try to influence an opponent's move ("I'll give you $50 to open up a Triple Word Score for me"), to have a look at an opponent's rack, or even for advice. Whoever has the most money at the end of the game wins.
#### **Little Words, Big Money: Two- and Three-Letter Currency Words**
It's obvious that short words, even if they themselves don't score a lot of points, are very valuable in Scrabble. They provide openings for longer words, as well as opportunities to position words on a board cramped with letters. But some small words are also valuable off the board: the English words for foreign currencies.
**Xu** and **zaire** are probably the most profitable, although **avo** , **ecu** , **jun** , **lek** , and **lev** also offer a lot of bang for the buck. I'm still hoping to play my favorite, **ngwee** , a monetary unit of Zambia that's worth one-hundredth of a **kwacha**. (At present, a ngwee is worth about 2 ten-thousandths of an American cent; I'd rather take the 9 points.)
Here's how to cash in:
* * *
---
**Att:** a unit of currency in Laos (pl. **att** )
**Avo:** a unit of currency in Macao
**Ban:** a unit of currency in Romania (pl. **bani** )
**Ecu:** a former French coin
**Euro:** a unified currency of much of Europe; also an Australian marsupial
**Fil:** a coin used in Iraq and Jordan
**Hao:** a unit of currency in Vietnam (pl. **hao** )
**Jun:** a coin used in North Korea (pl. **jun** )
**Lek:** a unit of currency in Albania (pl. -s, -e, or -u)
**Leu:** a unit of currency in Romania (pl. **lei** )
**Lev:** a unit of currency in Bulgaria (pl. **leva** )
**Pul:** a coin used in Afghanistan (pl. **puls** or **puli** )
**Pya:** a copper coin of Burma
**Sen:** a unit of currency in Japan
**Som:** a unit of currency in Kyrgyzstan
**Sou:** a former French coin
**Zuz:** an ancient Hebrew silver coin (pl. **zuzim** )
* * *
##### **CHEATERS' SCRABBLE**
In Cheaters' Scrabble, players start with $1,000 Monopoly money to spend as they will. Unlike in Haggle Scrabble, money is not awarded for points, and the winner is whoever has the most points at the end of the game. A New York City–based nonprofit that tutors students in creative writing, called 826nyc, held a Cheaters' Scrabble tournament. Their price list is a good model to use as a guide:
• $500: Invent a Word—just pronounce it and define it, and it can't be challenged
• $250: Reject a Word—an opponent must remove his played word, redraw tiles, and lose his turn
• $200: Surf the 'Net—the chance to look up words on the Internet for two minutes
• $150: Add Q, X, or Z—add to any word and it counts
• $150: Buy a Vowel—trade a vowel from your rack for any tile you want in the bag
• $100: Add 10 to a Tile—add 10 points to a tile's value
• $100: Create a Blank—turn any tile in your rack over and make it a blank
• $100: Opponent's Rack—see an opponent's tiles
• $75: Name & Place—play a proper noun
• $50: Exchange a Tile—trade in a tile on your rack for a random tile without losing a turn
• $50: Passport—use a word from a foreign language
#### **PLAYING ONLINE**
##### **SCRABULOUS/LEXULOUS**
With the launch of an **online** platform in 2005, Scrabble began to experience a surge in popularity it hadn't seen since its earliest days. Except the online version wasn't exactly Scrabble; it was Scrabulous, a word game closely replicating the original. Created by two brothers in Kolkata, India, and originally available only through the Scrabulous **website** , the game exploded in 2007 when it became possible for **netops** to play one another via Facebook. Scrabulous duplicated Scrabble's board layout, letter values, tile distribution, and rules, and the creators were soon bringing in over $25,000 a month in **ad** revenue. But it was not to last: in 2008, Hasbro filed suit under the Digital Millennium Copyright Act, and five days later Facebook disabled the game for North American users. Within a month, Scrabulous was pulled in all other countries except India.
**Online:** connected to a computer or telecommunications system
**Website:** one or more internally connected web pages accessible on the Internet
**Ad:** an advertisement
**Ami:** a friend ( **Amie:** a female friend)
**_Cellphone_ :** a wireless phone on a cellular network (also **cellular** )
**Tablet:** to write on a flat surface
**Netop:** a friend
**Freeware:** free software
**App:** a computer application
**Netiquette:** online etiquette
**Kibitz:** to chat informally (also **kibbitz** )
**Befriend:** to make a friend of
After a ruling in the Delhi High Court, Scrabulous was rereleased as Lexulous. Beyond the name, other dissimilarities to Scrabble were incorporated. The board layout was rearranged, tile distribution and point values were slightly altered, and players are dealt eight tiles at a time. These days Lexulous continues to flourish with more than $300,000 in revenue per year.
##### WORDS WITH **AMIS** : WORDS WITH FRIENDS
Words With Friends (WWF) was released in 2009 and has become another popular way to play a Scrabble-like game on **_cellphones_** and **tablets** , and one of the most popular cellphone **freeware apps** in general. Many games (currently up to twenty) can be played at once. Texans Paul and David Bettner created the game and later sold it for more than $50 million. While it's considered bad etiquette to speak to one's opponent when face to face in a Scrabble tournament, it's not bad **netiquette** to **kibitz** with and even **befriend** strangers through Words With Friends.
To avoid the sorts of legal issues with Hasbro that Scrabulous encountered (and perhaps also to increase scoring possibilities), Words With Friends modifies some of the aspects of Scrabble while keeping many of the fundamental characteristics intact. While Scrabble purists may dislike straying in any way from the basics of the game they know, most casual players seem more than fine with the trade-off in return for the convenience of playing on the go. In WWF, premium squares have been placed elsewhere on the board; there are 104 tiles compared to Scrabble's 100 (including an extra E, T, S, and D, which makes play somewhat easier); and the values of about half the letters have been tweaked.
To many, the biggest difference (and bone of contention) between Words With Friends and Scrabble is not the point values or the rearranged board, but that Words With Friends adheres to an alternate dictionary. It follows the Enhanced North American Benchmark Lexicon (ENABLE), a list used by many other electronic board games. While most of the words in this book are also found in ENABLE, many (like **qi** and **za** ) are not. However, Words With Friends is to be commended for openly inviting users to suggest edits to ENABLE ( **qi** and **za** , which were originally excluded, are now both playable in WWF).
##### **EA SPORTS'S OFFICIAL SCRABBLE APP**
With the Scrabulous lawsuit, Hasbro realized the value of a web-based version of Scrabble, and redoubled its efforts to create a viable, official digital version. It contracted with GameHouse, which created an aesthetically pleasing electronic version that faithfully replicated the actual game of Scrabble. Unfortunately, it didn't allow for online play and came with a $19.95 price tag, as opposed to the ad-based but essentially free versions of Scrabulous/Lexulous and Words With Friends. Though it was ideal for at-home play against a computer opponent, the GameHouse version never caught on. Hasbro turned to Electronic Arts (EA) for a mobile app game, which it released in 2008. Where the GameHouse version limited its lexicon to words appearing in the _OSPD_ , EA's game offers the choice of playing by the words listed either in the _OSPD_ or in the _Official Tournament and Club Word List_ ( _OWL_ ), the complete and official word list used in competitive Scrabble as well as in Lexulous. ( _OWL_ contains all the words in the _OSPD_ , plus a list of longer and expunged words left out of the dictionary. For more on the difference between the _OWL_ and the _OSPD_ , see History of the Scrabble Lexicon on page 38.)
The EA version allows for play against multiple opponents as well as a computer opponent, and creates player skill ratings. Using Facebook, one can restrict the pool of random opponents by rating. And the mobile app allows one to play Scrabble Duplicate (see Scrabble in France, page 38)—a surprising if widely ignored option. While Hasbro got off to a late start in offering a viable mobile Scrabble app, the advantage of presenting the actual game has won it a rapidly expanding following. But with its competitors' loyal fan bases, the future of online and mobile play is anybody's game.
##### **LOST A LETTER? PICKING UP SCRABBLE, PIECEMEAL**
Hasbro won't sell you a single letter—no matter how badly you may want an extra K to form **tokamak** (a donut-shaped nuclear reactor, also **tokomak** ) without a blank—but it does currently offer all 100 tiles, four racks, and a tile pouch for $6.50. And they'll sell you a new game board (strangely, called a "gameboard" on the order form on the Hasbro website, though gameboard is not playable) for $5. A scorepad (which—you guessed it!—is written _score pad_ on the order form although **scorepad** is playable) is $2.50, and instructions are free. So the whole contents of a Scrabble box can be yours for $14. As Scrabble often sells for $20 to $25, that cardboard box may be the most valuable part!
## **Scrabble in France**
Competitive Scrabble is played the world over, generally in tournaments that pair competitors in one-on-one play. However, in France, Quebec, and other **_frenchified_** places, tournament Scrabble takes the form of _le Scrabble Duplicate_ , a form of the game created to eliminate chance. Players sit alone, each at his own board, facing the front of the room where oversize letters are drawn and announced. Each player attempts to make the highest-scoring play with those letters, and is awarded the number of points that their play would earn. The highest-scoring play is then selected and applied to the oversize board in the front of the room, and each player, regardless of what he played on his last turn, applies that same highest-scoring play to his own board. Then new tiles are chosen in the front corresponding to however many tiles were used on the last move, and each player endeavors to find the best play with those new letters on that same board.
On one hand, there can be no more whining about luck: everyone has the same board and letters to work with. On the other hand, aspects of the game such as defense or saving some letters to use on the next turn (so-called rack management) are meaningless here. In his book, _How to Play Scrabble Like a Pro_ , world champion Joel Wapnick opines, "Le Scrabble Duplicate is to North American Scrabble what a foul-shooting contest is to basketball." Swish!
## **History of the Scrabble Lexicon**
From the time Alfred Mosher Butts created Lexiko in the 1930s through its transformation in the 1950s into the modern, standardized form of Scrabble as we now know it, Scrabble was primarily a friendly game, a "parlor" game.
But in the 1960s, gamers in the chess clubs in Manhattan—where chess, checkers, backgammon, and Go players would meet for serious competition—started taking the game to a competitive level. In smoky rooms, regulars challenged each other to timed, penny-a-point matches that increasingly played out at unprecedented levels of calculation, skill, and tenacity, involving ever-rarer words and higher scores. (Think of this as the age when Scrabble players went from playing **cat** to **kat**.) These highly skilled "sharks" used handicaps (maybe spotting opponents 50 points, or handicapping their own allotted time) to lure in "fish," casual players off the street who often didn't quite know what they were in for.
Remember, this was a time when Scrabble's popularity was soaring, and new, niche versions of the game were popping up in social circles around the country. Once money became involved, some rules of play needed to be sorted out. Scrabble sets came—as they still do—with box-top rules, but aspects of the game like players' ability to challenge questionable words, timed play, and a standardized list of acceptable words were missing.
The standard lexicon became _Funk & Wagnalls College Dictionary_, the principal dictionary of the day. (The book was also the source of much mirth on the television show _Rowan and Martin's Laugh-In_ , where the line "Go look it up in your _Funk and Wagnalls_ " became an oft-repeated joke, playing on the sound of "funk.") But _Funk & Wagnalls_ had some serious organizational shortcomings when used for Scrabble play. In 1973, work began on an official dictionary for Scrabble. Words were culled from five dictionaries: _Random House College Dictionary_ (1968), _American Heritage Dictionary of the English Language_ (1969), _Webster's New World Dictionary_ (2nd edition, 1970), _Webster's Collegiate_ (1973), and _Funk & Wagnalls Standard College Dictionary_ (1973). In 1978 the _Official Scrabble Players Dictionary (OSPD)_ was officially adopted. The compilers, though paid, received no credit in the dictionary. Occasionally one finds definitions in the _OSPD_ that seem inexplicable. In _Letterati,_ Paul McCarthy reports that one of the two compilers of the original _OSPD_ recalls including "some definitions that were inside jokes that only they understood."
As the dictionary's purpose was to list playable words in a condensed format, definitions were necessarily brief and vague. Of course, there were also a lot of unusual entries. One of these, **rei** , is defined as "an erroneous English form for a former Portuguese coin." Isn't privelege an erroneous form of **privilege**? Why include an incorrectly **spelt** word?
The first _OSPD_ was a giant leap forward, but it was still found to lack many words. An improved second edition came out in 1991. It was very much the result of work by a dedicated man named Joe Leonard, who reportedly submitted corrections for more than 5,500 omissions and errors. He never asked for pay, and never received any. Nor did he receive any credit in the 1991 or subsequent editions, though he is currently recognized on the North American Scrabble Players Association site (Scrabbleplayers.org).
#### **YOU CAN'T PLAY THAT ON TELEVISION: EXPUNGING THE OSPD**
The cover of the second _OSPD_ boasted that the new edition contained "over 3,000 new entries." While its predecessor included the likes of **_shit_** and **_fuck_** (it even included that unholy of unholies, **_cunt_** ), the second edition added entries like **_shithead_** and **_fuckup_** and words like the rather humorous **bazooms**. This isn't to imply that it added 3,000 "dirty" words, but rather that the publisher, Merriam-Webster, and the producers of Scrabble, at that point Milton Bradley, weren't shy or deeply censorious of the game's official word list. And like the first edition, the second included a host of often abhorrent racial slurs; a surprisingly extensive list of insults that, if one were pressed to commend it for some reason, could only be praised for being so wide-ranging in its scope: Black or white or brown or yellow, male or female, gentile or Jew, just about everyone was represented and abused. It was all in there, so much of the English language, from its prettiest, most poetic, and uplifting to the foulest, most disgraceful, and contemptible. To signify the vulgar and/or odious natures of some of the words included in the first two _OSPD_ s, the three-word disclaimer "an offensive term" was pinned to the end of their definitions.
#### THE J-ISH QUESTION
Growing up playing Scrabble in a Jewish family in the 1980s, I remember being aware of some of these offensive words. Of course, a child's eyes are invariably drawn to four-letter words—even as an adult I sometimes thrill at seeing a dirty word in a respectable place like a dictionary. But I also remember seeing words that were so obviously offensive and taboo that I felt no thrill, only somehow a sense of shame at even spotting them, knowing very well I'd never dare try to play them.
As a Jew, I knew **_hebe_** and **_yid_** were offensive in a theoretical sense, but having never been called either of those words, let alone in a hateful way, and having a general feeling of ownership of those words, I felt **okeh** (okay) playing them. My parents may have encountered the words in an offensive way when they were younger, but if they did, a certain feeling of ownership—or at least a pragmatism associated with wanting to win—allowed for their use on the board. But then there was the "k" word— ** _kike_** —which cannot but ring in the ear in a positively hateful, loathsome way. (Well, to my family's ears anyway.) Luckily, playing that word would require having the K and the blank at the same time, and if by some chance you had that, odds were good you could do something a far sight better with that rack.
So that leaves the great ponderable of my Scrabble playing as a youth: **_jew_**. It would've been one thing if _jew_ had been defined as a noun, as in a Jew, but as a proper noun that could not be the case. Instead, it was " _Jew_ : to bargain with—an offensive term." Offensive all right, I could see that. The idea of using it in speech like that seemed in some ways even worse than _kike._ At least with _kike_ , the message seemed to be fairly matter of fact: "You are a Jew. (And I don't like you because of that fact so I call you this name.)" But with _jew_ it was "I think so lowly of Jews I'll use their name to insult someone who isn't even Jewish. And that person will be degraded by my comparing his behavior to a Jew's." So terrible! And yet... and yet... look at those points! How often would one get stuck with the J and see that W out there on the board, flapping in the breeze? Or vice versa: there'd be that J, and here I was with an E and a W, and maybe there was a Double Word Score involved, and if I didn't use it someone else was surely ready to make **jo** or **jet** or **jot** or **jut** and pick up those points just _sitting_ there!
#### Eating Your Words: Playable Candy Names
**Bonbon:** a type of chocolate-coated sweet
**Butterfingers:** a clumsy person (also **butterfingered** , but not butterfinger)
**Candyfloss:** cotton candy
**Fireball:** a ball of fire, a meteor
**Jawbreaker:** a type of hard candy
**Jujube:** a type of edible berry _(not to be confused with_ **juju** _: an object believed to have mystical powers)_
**Nestle:** to lie close to something or someone
**Skittle:** a form of bowling, or a pin used in that game
**Starburst:** an image resembling a diffusion of light
**Tootsie:** a foot (also **tootsy** )
**Whatchamacallit:** something whose name one cannot remember or does not know
And so it was that depending on one's letters and the board and the score sheet, _jew_ , despite the intended and understood offense, was turned into offence. You'd play it, you'd lay those letters down, but you wouldn't say it. You'd just click those tiles into place and look up and say your score, generally "26," and you'd be met with everyone else's eyes, eyes saying "really?" And you'd look back at them knowing that they'd probably do the same, in which case you'd give them the "really?" look, too. And then _jew_ would sit there on the board. And it wasn't all that rare that someone would then go on to play _jew_ off another side of the same J later in the game if he or she could. (A play I like to think of as a " **jujube**.")
Scrabble's relationship with offensive words has been complicated as well. Take for example an event from 1990. Bob Felt had just won the National Scrabble Championship, and was a guest on _Good Morning America_ to show off his winning board and define some of the words he played. Unbeknownst to the audience, Felt fiddled with the board, changing **_darkies_** to **darkens**. "I didn't think that defining that word on national television was in anybody's interest," he said later.
And yet there were these words, prescribed and described for all to see in the _OSPD._ That is, until 1993, when Virginia art gallery owner Judith Grad discovered just a few of the multitude of shockingly offensive terms in the _OSPD_. She was incensed. She dispatched letters to Merriam-Webster, which published the dictionary, and to Milton Bradley, which headed Hasbro's games division.
"It is certainly not the intent of the dictionary to perpetuate racial or ethnic slurs or to make such usages respectable," read the response from Merriam-Webster's editor in chief. "However, such slurs are part of the language and reputable dictionaries record them as such."
"As a dictionary, it is a reflection of words currently used in our language," replied Milton Bradley's President. He added, "It is important to note that Milton Bradley Co. does not condone the use of these words, nor do we advocate the use of offensive terms. If it were up to us, none of these words—nor the sentiments behind them—would exist at all."
Unsatisfied, Grad reached out to groups including the Anti-Defamation League and the National Association for the Advancement of Colored People, still with no result. The National Council of Jewish Women, however, launched a letter-writing campaign, and then the Anti-Defamation League (ADL) amped up its efforts. An ADL chairman insisted on the removal of the offending words. "The use of ethnic slurs in Scrabble," he wrote Hasbro, "is literally playing games with hate." Not only were many players suddenly declaring their discomfort with playing some words in the _OSPD_ , but there was also the immediate chance of a major black eye for the Scrabble brand—not to mention the consideration for the Scrabble in Schools initiative that had begun.
Meanwhile, though some competitive players were pleased by the prospect of ridding the game of some of its nastier bits, many argued that the words in the _OSPD_ were like chess pieces: meaningless objects used to play a game. They'd studied and mastered these pieces, and it was ridiculous for Hasbro to strip them of these tools because some parlor players were, to their minds, overly sensitive. Besides, once one started to purge the lexicon, where would it end?
#### A PAX SCRABBALLA: THE _OSPD, OWL_ , THE LL, AND SOWPODS
In an unusual compromise between many competitive players' wishes to keep the _OSPD_ free of censorship and Hasbro's corporate concerns to escape bad press and remain family and school friendly, it was announced at the 1994 National Scrabble Championship that there would be two lists. The third _OSPD_ would be expurgated for home and school use. A separate list for competitive use, the _Official Tournament and Club Word List_ (alternately abbreviated as _OWL_ , _TWL_ , and _OCTWL_ —leave it to Scrabble players to come up with three terms for the same thing!) would remain uncensored. The _OWL_ was to be available only to members of the National Scrabble Association (NSA), but this has proven impossible to enforce, and it can be readily purchased online.
When the third _OSPD_ came out in 1996, players sleuthing through the book found that a total of 167 words had been expunged, including not just the obviously offensive slurs and sex acts, but more questionable choices, too. Both **_papist_** (a Roman Catholic—an offensive term) and **_nonpapist_** (apparently similarly—though oppositely—offensive) were cut, as were **_jesuit_** (a scheming person—an offensive term) ( _who knew?_ ), **_fatso_** , **_libber_** (one who supports a liberation movement), **_comsymp_** (one who sympathizes with the communist movement), **_spaz_** , **_poo_** , and **_fart_**. Also struck from the book were a myriad of trademarked words, such as **_biro_** , **_jacuzzi_** , **_lycra_** , **_pyrex_ ,** and **_tofutti_**. The list has come to be known by competitive players as "The _Poo_ List." For a complete list (not recommended for sensitive eyes), visit this book's companion website at Isthatascrabbleword.com.
Meanwhile, the _OWL_ is a book-length list of all the words of up to nine letters that are permitted in sanctioned play. It contains everything: the lewd and the crude; the racist, xenophobic, and homophobic; and the potentially trademarked. Devoid of definitions, it keeps Scrabble delightfully disconnected from whatever those nasty words might mean, while offering a coolly calculated embrace of the idea that, for competitive Scrabble players, the words are detached from any denotation or connotation.
For international competition, players use a lexicon known as SOWPODS, an anagram formed by combining _OSPD_ with _OSW_ , the initials of the British lexicon, _Official Scrabble Words_. SOWPODS is therefore considerably larger than the _OSPD_ and contains all of the words in the Long List (LL).
Downloadable as a .txt file at www.scrabbleplayers.org, the NSA website, the LL provides the full list of inflected words of ten to fifteen letters (the longest length word a standard Scrabble board can accommodate), beyond what even the _OWL_ provides. For instance, while _OWL_ lists **_abiogenic_** , the LL includes **_abiogenically_**. Where else would one come across such a lovely word as **_kittenishnesses_** , explain to an inquiring opponent that **_tsutsugamushis_** are bacteria that cause a type of typhus (and are found primarily in an area—located between northern Japan, northern Australia, and Pakistan—known as the " ** _tsutsugamushi_** triangle"), or ponder the irony of playing **_nonachievements_** across the board, perhaps even across three Triple Word Scores?
#### "Dirty" Words That Remain in the _OSPD_ under Alternative Definitions
**Poop** is in the _OSPD_ , defined not as excrement but as "to make exhausted"; **pee** refers to the letter; **dick** in the sense of a detective; **dicker** is to haggle. **Pussy** is a cat, or as a less attractive word: the adjective meaning "full of pus." **Cock** is to tilt to the side and **cum** is the preposition meaning "along with." **Tit** squeaked into the _OSPD_ as a little bird; **titty** 's there in a more mammary or agrarian sense, a sophomoric one as "a teat"; **tittie** is defined as "a sister." Personally, I don't think it's **crappy** —that is to say, decidedly bad—that **crapola** , meaning utter nonsense, remains a legit bingo possibility.
The **tit** 's **tittie** , a **tomtit** in a **titfer** , watches the **titman tittup** to the **titty** , **whilst** the **ouistiti** with **otitis** and the **bushtit titter** over the **titbit** of **titian tittle** , because they were **boobies**.
* * *
**Tit:** a type of little bird
---
**Tittie:** a sister
**Tomtit:** any of various types of little birds
**Titfer:** a hat ( _British slang_ )
**Titman:** the smallest piglet in a litter (pl. -men)
**Tittup:** to move in an exaggerated or jerky way
**Titty:** a teat
**Whilst:** while
**Ouistiti:** a marmoset _(sometimes also called_ wistit, _but sadly that variation is not playable)_
**Otitis:** inflammation of the ear
**Bushtit:** a titmouse
**Titter:** to laugh in a partially suppressed way, to giggle
**Titbit:** a tidbit
**Titian:** a bright auburn color ( _derived from Titian, who was fond of painting in this color, particularly women with red hair_ )
**Tittle:** a small diacritical mark in writing or typography, as in the dot above a lowercase _i_
**Booby:** a fool (pl. **boobies** )
* * *
Two fifteen-letter (quidralettral?) words have been played in tournament play: Ken Clark put the _re_ in **_reconsideration_** in 1990, and Ed Liebfried put the _dis_ in **_discontentments_** in 2005.
#### OFFENSIVE WORDS THAT HAVE NOT BEEN EXPUNGED
Despite Hasbro's august efforts to expunge offensive words from the _OSPD_ , somehow several escaped the scythe. A black urban professional can refer to himself as a **buppy** (or **buppie** ), and **ponce** ("to pimp"), **fem** ("a passive homosexual"), **nelly/nellie** ("an effeminate male"), and **butch** ("a lesbian with mannish traits") can still be played in living rooms across America. So can **bumpkin** ("an unsophisticated rustic"), **hayseed** ("a bumpkin"), and **hick** ("a rural person"). The definition of **sambo** ("a Latin American of mixed black and Indian ancestry") disregards its most common usage as a slur.
Perhaps even more troubling, despite complaints from Romanies, **gyp** ("to swindle") and **gypper** ("one that gyps") are legal. How this differs from using _jew_ in the same way is hard to imagine. Then again, **rom** ("a Gypsy man or boy") is also allowed, despite its near-universal capitalization in English. Again, why this cultural group earns the distinction of being declassified from proper-noun status is bewildering.
Other words derived from places or ethnicities that one might point to as offensive are also listed in the _OSPD_ : **mongol** ("a person affected with a form of mental deficiency"—also **mongolian** , **mongoloid** , and **mongolism** ), **oriental** ("an inhabitant of an eastern country"), and **shanghai** ("to kidnap for service aboard a ship") come to mind. **Cyprian** ("a prostitute"—derived from the ancient orgiastic worship of Aphrodite on Cyprus) and **paphian** ("a prostitute"—derived from Paphos, an ancient Cypriot city) seem odd choices not to strike.
Truly, the path of expurgation is a difficult one to navigate, and quickly becomes a slippery slope. The trailblazer's task of determining what trees to fell, what rocks and roots to uproot, and what bumps in the road to leave be is not to be envied, and perhaps it is too easy at times to criticize. But when that path becomes a highway, or else major obstacles to its destination (some happy place where people are not made to feel excluded or insulted) are left intact, it seems right to call attention to it.
Personally, I'm all for playing with every dirty word one can think of. And I'm also all for playing without them. Competitive Scrabble players, for their part, like to say that the tiles are not the pieces in Scrabble; it's the words that are the pieces. To my mind, as long as my opponent and I decide on which set of words/pieces to use (be it the _OSPD_ or the _OWL_ ), it's fine with me. But I also understand that each Scrabble player should have the right to make that decision. So I would urge Hasbro to make a version of the _OWL_ , with definitions included, easily available to the public. And it's to be hoped that each future edition of the _OSPD_ will strive to correct the oversights as well as overzealousnesses of its predecessors, just as it's up to all speakers of any language to do the same in their usage.
## The Sound of Muzjik
### Scrabble Records
It's said that records are meant to be broken, but some records are meant to be played. In Scrabble, records for the highest-scoring plays and games are kept only for matches officially sanctioned by the National Scrabble Association—typically in Scrabble clubs or tournaments.
The highest-scoring possible opening play is a seven-letter bingo that comes from the letters IJKMSUZ, which look unpromising unless you recognize **muzjiks**. Placing the Z on the Double Letter Score makes an opening move worth 128 points. The odds of drawing these tiles are about 1 in 55 million, and indeed the full 128-point play has never been recorded in sanctioned competition. However, Jesse Inman of South Carolina did open a game at the 2008 National Scrabble Championship with **muzjiks** using the blank for the U for a record-setting 126-point opening move.
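The arithmetic behind both scores can be checked in a few lines — a minimal sketch, assuming standard American tile values, the Z landing on a Double Letter Score, and the center square doubling the whole word:

```python
# Standard point values for the tiles in MUZJIKS
TILE = {'m': 3, 'u': 1, 'z': 10, 'j': 8, 'i': 1, 'k': 5, 's': 1}

letters = sum(TILE[c] for c in "muzjiks")  # 29 points of raw tile value
letters += TILE['z']                       # Z on the Double Letter Score counts twice
score = letters * 2 + 50                   # center square doubles the word; +50 bingo bonus
print(score)                               # 128

# Inman's actual play used a blank (worth 0) in place of the 1-point U
inman = (letters - TILE['u']) * 2 + 50
print(inman)                               # 126
```

The breakdown also shows why the blank-for-U version loses exactly 2 points: the missing 1-point U gets doubled along with the rest of the word.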
**Muzjiks** is a pluralized, alternative spelling of the equally impressive though slightly lower-scoring **muzhik** , a Russian peasant, particularly a serf before the Russian Revolution of 1917. **Muzhik** came into English thanks in large part to Tolstoy and Dostoyevsky; the latter even penned a seemingly autobiographical, Slavophilic short story about a kind muzhik he knew as a child, the title of which is translated alternatively as "The Peasant Marey" or "The Muzhik Marey." Today, _muzhik_ is used in Russian as the equivalent of "guy" or "dude."
While you're unlikely ever to play **muzjiks** , drawing IJKMSUZ from the bag is as likely as getting wet when it's raining out compared to pulling **muumuu(s)** (with a blank for the s) on your first turn. It's the least likely opening bingo out there, and at a staggering 8-billion-to-1 shot, the 76 points seem like a paltry reward. Personally, I'm in favor of having a special "MUUMUU Move" muumuu made, to be awarded to anyone who has such amazing luck.
The second-highest opening move ever recorded in American tournament play is **bezique** (a card game similar to pinochle) for 124 points. **Cazique** (a tropical oriole) and **mezquit** (a shrub found throughout the American Southwest, also spelled **mesquit** , **mesquite** ) would also bring in that much.
If you like to dream big, the next-highest valued monster openings would be:
* * *
**122 POINTS**
---
**Kolhozy:** Russian collective farms (pl. of **kolhoz** )
**Sovkhoz:** a state-owned farm found in the former Soviet Union
**Zinkify:** to cover with zinc (also **zincify** )
**Zombify:** to transform into a zombie
* * *
**120 POINTS**
---
**Jazzily:** in a jazz-like way
**Jezebel:** an evil woman
**Jukebox:** a machine that plays recorded music for money
**Muzhiks:** Russian peasants
**Quetzal:** a tropical bird
**Quezals:** quetzals
**Quickly:** in short time
**Quizzed:** tested for information
**Squeeze:** to hold tightly
**Squiffy:** inebriated
**Zymurgy:** the study of fermentation into alcohol
* * *
#### OTHER RECORDS
The most record-breaking game of sanctioned Scrabble in North America took place on October 12, 2006, in Lexington, Massachusetts. Michael Cresta of Massachusetts set two records, for a single turn with **quixotry** for 365 points and for total points with 830. His opponent, Wayne Yorra, put up 490 points himself, helping set the record for highest combined score with 1,320.
#### DOUBLE- AND TRIPLE-TRIPLES
**Beziques** , **caziques** , **mezquite** , **mezquits** , and **oxazepam** also tie for the highest possible eight-letter bingos. If placed across two Triple Word Scores—known as a double-triple—each can be played for 392 points.
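That 392 can be verified with a short sketch — hedged on the assumption that the eight-letter word runs along an edge of a standard board, where the two legal double-triple placements put either the fourth or the fifth letter on a Double Letter Score:

```python
# Standard tile values (blank omitted)
TILE = {'a': 1, 'b': 3, 'c': 3, 'd': 2, 'e': 1, 'f': 4, 'g': 2, 'h': 4,
        'i': 1, 'j': 8, 'k': 5, 'l': 1, 'm': 3, 'n': 1, 'o': 1, 'p': 3,
        'q': 10, 'r': 1, 's': 1, 't': 1, 'u': 1, 'v': 4, 'w': 4, 'x': 8,
        'y': 4, 'z': 10}

def double_triple(word):
    """Best double-triple score for an 8-letter word.

    Spanning two Triple Word Scores multiplies the word by 9; the two
    possible placements put either the 4th or the 5th letter on a
    Double Letter Score, so take whichever is higher, then add the
    50-point bingo bonus."""
    word = word.lower()
    base = sum(TILE[c] for c in word)
    dls_bonus = max(TILE[word[3]], TILE[word[4]])
    return (base + dls_bonus) * 9 + 50

for w in ("beziques", "caziques", "mezquite", "mezquits", "oxazepam"):
    print(w, double_triple(w))  # each scores 392
```

The tie is no coincidence: each of the five words has a base value of 28, with a 10-point Q or Z falling on the Double Letter Score.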
The holy grail of Scrabble is the triple-triple (sometimes referred to as a triple-triple-triple), a play that spans three Triple Word Scores. There's no record of a triple-triple ever having been played.
Theoretically, the highest possible move in Scrabble has been determined to be **oxyphenbutazone** (an anti-inflammatory used to treat arthritis) played as a triple-triple. There's much debate about how much this word can score, depending on whether one uses _OWL_ or SOWPODS words to construct the ideal opening for the word on the board. (It boils down to the fact that _OWL_ does not permit prequalified—a necessary play to create a slightly higher-scoring **oxyphenbutazone** than without it—though SOWPODS does.) It's a puzzle many folks are continuing to work on. Suffice it to say that either way, the nearly 1,800 points one could score would be one for the record books. The highest combined score for a theoretical game using only _OWL_ words is approximately 4,000 points.
#### You're Prequalified!
At first glance it seems ridiculous that _oxyphenbutazone_ is permitted in Scrabble but prequalified isn't—after all, prequalified is a term one hears fairly often in talk of loans. But prequalification has generally been hyphenated, appearing as pre-qualified. Further, the mind reels a bit at the idea of prequalification as opposed to qualification. That _pre-_ does seem, well, if not redundant, at least utterly useless. Although then again there is the case of Formula One racing, which for a time utilized a "pre-qualifying" round to determine who would get to compete in the "qualifying" round, which in turn decided who would be allowed to ultimately compete in a race. Better yet, one could argue that this sidebar prequalifies the reader to weigh in on the topic, even without the qualifications of being an expert on the subject.
#### THE LONGEST GAME (WITH THE SHORTEST MOVES)
In a 1993 tournament in Tennessee, Jan Dixon and Paul Avrin played a game in which they each took twenty-five turns, which averages to exactly two tiles per play.
#### FIVE POINTS PER SECOND: THE SHORTEST GAME
In a 2003 tournament in New Jersey, Scrabble expert Matt Graham played a complete game in 96 seconds, scoring 471 points. So much for not wanting to play Scrabble "because it takes so long"; that's a whole game of turns taking about as long as it does to sneeze a few times.
" **Achoo**! **Ahchoo**!"
**"Kerchoo!"**
**" _Gesundheit_!"**
#### THE GREATEST MOVE... EVER?
The play that is often cited as the most impressive word ever played in Scrabble took place in 1995. Jim Geary, a former pro poker player, was down by 90 and holding BEEIORW with two letters left in the bag. He calculated that if he played off his B and an E, there was a 1/68 chance he could pick up an A and a T, which were either in the bag or on his opponent's rack. The odds played out in Geary's favor: he pulled the A and T, and with a rack of AEIORTW he played through a Z and an O on the board to end the game with **waterzooi** , a classic Flemish stew with fish or chicken. It scored 92 points, plus the value of his opponent's rack.
#### LESS THAN ZERO: WINNING WITH NEGATIVE POINTS
The record for the lowest winning score ever is currently held by Helena Gauthier, who beat her opponent –9 to –11. How does something like this happen? Let's take a look at the game that set the previous record of –8 points, back in 1990.
At the start of a game at the Midwest Invitational Tournament in July 1990, Rod Nivison's first rack was UNIDEAE. His opponent picked seven tiles, and as he did so accidentally exposed one tile to Nivison: the D. Rod saw the opportunity for **unidead** (ironically, it means "without ideas"), and so he passed, hoping his opponent would play that D. His opponent traded in a tile, so Rod passed again. His opponent traded in another tile, and Rod passed again. His opponent played the phony dormine ( **minored** would have worked) and Rod challenged it off the table.
At the time, the rules clearly stated that game play ended if both players scored 0 points (through passing or trading in tiles) three times in a row. The rule made sense, because it was a clear way to stop play at the end of a game, when players might not be able—or want—to put down any tiles, and was fashioned—like several other Scrabble rules—after a rule in chess. It was simply unforeseen that it might be invoked so early in the game.
Each player then subtracted the value of his own tiles from his score (which was 0 to 0 at the time), and Nivison came out the winner, –8 to –10.
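The endgame arithmetic is easy to reproduce — a sketch assuming standard tile values and Nivison's never-played opening rack of UNIDEAE:

```python
# Standard values of the tiles on Nivison's rack
TILE = {'u': 1, 'n': 1, 'i': 1, 'd': 2, 'e': 1, 'a': 1}

# Three scoreless rounds in a row ended the game at 0-0; each player
# then subtracted the value of his own unplayed tiles from his score.
nivison = 0 - sum(TILE[c] for c in "unideae")
print(nivison)  # -8
```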
Sometimes in Scrabble, less is more.
# PART 2
**Unscrambling Scrabblish:**
A Catalog of Useful, Curious, and Surprising Lists, Facts, and Marginalia
Since the earliest Scrabble players first started perusing the _Funk & Wagnalls_ dictionary for words that could aid their game, generations of players have sought to systemize their efforts to improve their chances of winning by enlarging their vocabulary. While these efforts often include long lists of words organized alphabetically by their anagrams, some players prefer words grouped together thematically. Here's a look at some of the game's lexicon presented through a combination of connected words and peculiar trivia from playable band names to unusual terms for body parts.
## "Biblically Speaking..."
### Words from the Bible
The _OSPD_ may be the Scrabble players' **bible** , but plenty of words found there have their roots in the original Good Book (if not always their definitions).
In the **noel** , the light from the **lucifer** through the **judas** gives the **magdalen** in a **joseph** a **gloria** , as if illuminating the **ruth** of her **saul**.
* * *
**Noel:** a Christmas card
---
**Lucifer:** a friction match
**Judas:** a peephole
**Magdalen:** a former prostitute
**Joseph:** a woman's long cloak _(after Joseph's coat of many colors)_
**Gloria:** a halo
**Ruth:** compassion
**Saul:** a soul
* * *
#### OTHER PLAYABLE BIBLICAL WORDS INCLUDE:
* * *
**Bible:** a definitive text
---
**Calvary:** a representation of the crucifixion
**Golgotha:** a burial place
**Jezebel:** an evil woman
**Lazar:** a beggar afflicted with a terrible disease, particularly a leper
**Maria:** a large plain on the surface of the moon that appears dark
**Sodom:** a place infamous for vice
**Torah:** a law (pl. **torahs** , **toroth** , or **torot** )
**Veronica:** a handkerchief with a depiction of Christ's face _(after the Biblical woman who offered Jesus a handkerchief to wipe his face as he carried his cross)_
* * *
## Toponyms
### Finding the Right Place on the Board
The names of specific places, languages, or geographically grouped peoples (Russia, Hebrew, Parisians) are **verboten** (not permitted) in Scrabble, as they are proper nouns. However, _toponymic_ words named after places are often legal. One can feel free to eat a **danish** off **china** plates while wearing a **kashmir** sweater, a **panama** hat, and a pair of **bermudas** , and it's okay by the _OSPD._ (It's likely that the danish originated in Vienna and panama hats first came from Ecuador, but that's another story.) After you do the **german** , you can sit in your **berlin** , wrapped in **alaska** , smoke something **colorado** , and play **boston**.
* * *
**Afghan:** a wool blanket
---
**Alamo:** a cottonwood poplar tree
**Alaska:** "a heavy fabric," according to the _OSPD (presumably this refers to wool from Alaska, perhaps_ **qiviut** _, wool from the Alaskan musk ox)_
**Berlin:** a type of fancy, fast, and light horse-drawn carriage _(later,_ **berline** _came to be used for early limousines)_
**Bermudas:** a variety of knee-length, wide-legged shorts
**Bohemia:** a community of unconventional, usually artistic, people
**Bolivia:** "a soft fabric" according to the _OSPD (like_ **alaska** _, another noun for a type of fabric that has disappeared from many dictionaries)_
**Bordeaux:** wine from the Bordeaux region
**Boston:** a card game similar to whist
**Brazil:** a type of tree found in Brazil used to make instrument bows (also **brasil** )
**Brit:** a non-adult herring
**Cayman:** a type of **croc** (a crocodile), also known as a spectacled crocodile (also **caiman** )
**Celt:** a type of ax used during the New Stone Age
**Chile:** a spicy pepper (also **chili** )
**Colorado:** used to describe cigars of medium strength and color
**Congo:** "an eellike amphibian" in the _OSPD (There are types of frogs and snakes called congo, and a common type of eel called the_ **conger** _, but traces of an_ **eellike** _—good word!—amphibian by this name escape my investigations.)_
**Cyprus:** "a thin fabric," according to the _OSPD (Silk has long been an export of Cyprus, since the_ **bombyx** _—a silkworm—was imported from China.)_
**Dutch:** referring to each person paying for himself
**Egyptian:** a sans serif typeface
**English:** to cause a ball to spin
**French:** to slice food thinly
**Gambia:** a flowering plant also known as uncaria or cat's claw (also **gambier** , _which happens to be a small town in Ohio_ )
**Geneva:** gin, or a liquor like gin _(Gin is often cited as a shortened form of_ **geneva** _, which is likely derived from_ genièvre _, the French word for juniper.)_
**Genoa:** a type of **jib** (a triangular sail), also known as a **jenny** , first used by a Swedish sailor in Genoa
**German:** also known as the german **cotillon** , an elaborate nineteenth-century dance, which sometimes involved having to jump over a rope just to get onto the dance floor
**Greek:** something not understood
**Guinea:** a type of British coin minted from 1663 to 1813
**Holland:** a linen fabric
**Japan:** to gloss with black lacquer _(One can even_ **japanize china** _.)_
**Java:** coffee
**Jordan:** a chamber pot ( _see Shakespeare's_ Henry IV _: "Why, they will allow us ne'er a jordan, and then we leak in your chimney."_ )
**Kashmir:** cashmere
**Mecca:** a destination for many people
**Oxford:** a type of formal men's shoe, also known as a **bal** or **balmoral**
**Panama:** a type of wide-brimmed hat
**Paris:** a type of plant found in Europe and Asia that produces a lone, poisonous berry
**Roman:** a romance written in meter
**Scot:** an assessed tax _(Think of "scot-free.")_
**Scotch:** to put an end to; or to etch or scratch (as in **hopscotch** )
**Sherpa:** a soft fabric used for linings
**Siamese:** a water pipe providing a connection for two hoses
**Swiss:** a sheer, cotton fabric
**Texas:** a tall structure on a steamboat containing the pilothouse
**Toledo:** a type of sword famous for its fine craftsmanship, originally from Toledo
**Wale:** to injure, to create welts on the skin
**Warsaw:** a type of grouper fish
**Waterloo:** a definitive defeat
**Zaire:** a currency of Zaire
* * *
## Sing, O Muse, of Those Ingenious Words That Have Traveled Far and Wide from Ancient Greece to the Scrabble Board
Familiarity with the names and places associated with the **Iliad** and the **Odyssey** will serve one well in Scrabble. And although we can sit and wish that Aeaea (the mythical island said to be the home of the sorceress Circe) were playable, **Homer**'s two **epically** great poems certainly do offer Scrabble players some godlike powers.
* * *
**Achillea:** an herb, also known as **yarrow**
---
**Acropolis:** a citadel
**Aeneus:** greenish-gold in color (also **aeneous** )
**Aphrodite:** a type of orange-colored butterfly of North America; also a type of orchid
**Apollo:** a handsome man
**Ares:** plural of **are**, a unit of surface measure equal to 100 square meters
**Arete:** a sharp, narrow mountain ridge
**Argosy:** a large merchant ship or a fleet of such ships
**Artemisia:** a plant belonging to the daisy family used in herbal medicine
**Atheneum:** a literary or scientific institution
**Calypso:** a style of music from the West Indies, usually with improvised lyrics
**Cyclopes:** a tiny (1/2- to 3-mm) crustacean with a single, central eye (also **cyclops** ) _(_ **Cyclopes** _was erroneously omitted from some editions of the_ OSPD _.)_
**Hector:** to bully, usually verbally
**Homer:** to hit a home run in baseball
**Homeric:** having an impressively large or grand quality
**Iliad:** a lengthy poem, often describing a series of misfortunes
**Ilium:** the upper part of either of the innominate bones of the pelvis
**Muse:** to think about
**Nestor:** a wise, elderly man
**Odyssey:** a long, adventure-filled journey
**Phoenix:** a mythical bird said to have lived in the Arabian desert for 500 years, cyclically burning itself to death and emerging anew from its own ashes
**Stentor:** a person with a very loud voice
**Troy:** a system of weights used primarily for gems and precious metals
**Xenia:** the effect of pollen on a plant (Xenia _is known to readers of Homer as the Greek term for hospitality. The botanic definition very likely derives from the Greek_ **xenos** _, for "stranger.")_
* * *
## It's Jabberwocky!
**_Jabberwocky_** , the title of Lewis Carroll's sensibly nonsensical poem included in _Through the Looking Glass, and What Alice Found There_ , is a playable word defined in _Merriam-Webster's_ as "meaningless speech or writing." Although brillig cannot be played, many words from the poem do make the cut.
'**Twas** brillig, and the slithy toves
Did **gyre** and gimble in the wabe;
All mimsy were the borogoves,
And the **mome raths** outgrabe.
"Beware the Jabberwock, my son!
The jaws that bite, the claws that catch!
Beware the Jubjub bird, and shun
The frumious Bandersnatch!"
He took his vorpal sword in hand:
Long time the manxome foe he sought—
So rested he by the Tumtum tree,
And stood awhile in thought.
And as in uffish thought he stood,
The Jabberwock, with eyes of flame,
Came **whiffling** through the tulgey wood,
And **burbled** as it came!
One, two! One, two! And through and through
The vorpal blade went snicker-snack!
He left it dead, and with its head
He went **galumphing** back.
"And hast thou slain the Jabberwock?
Come to my arms, my **beamish** boy!
O **frabjous** day! Callooh! Callay!"
He **chortled** in his joy.
'Twas brillig, and the slithy toves
Did gyre and gimble in the wabe;
All mimsy were the borogoves,
And the mome raths outgrabe.
While the first word of the poem, _'Twas_ , is used as the contraction for "It was," **twas** is playable under another definition (it's the plural of **twa** , an alternative spelling of "two").
* * *
**Gyre:** to move in a circle or spiral
---
**Mome:** a fool
**Rath:** rathe, appearing or ripening early
**Whiffle:** "to move or think erratically," according to the _OSPD_
**Burble:** to speak quickly and excitedly
**Galumph:** to move clumsily
**Beamish:** cheerful
**Frabjous:** splendid
**Chortle:** to chuckle with glee
* * *
It seems likely that Carroll did not mean all of these words in the way they're defined by the _OSPD_ ( **mome** and **rath** in particular) _._
Just for fun, for those interested in something closer to Carroll's meaning (despite the fact that Carroll later offered contrary definitions), let's follow Alice's lead and consult Humpty Dumpty, who claimed he could "explain all the poems that ever were invented—and a good many that haven't been invented just yet."
> "[T]here are plenty of hard words there. ' _Brillig_ ' means four o'clock in the afternoon—the time when you begin _broiling_ things for dinner."
>
> "That'll do very well," said Alice: "and ' _slithy_ '?"
>
> "Well, ' _slithy_ ' means 'lithe and slimy.' 'Lithe' is the same as 'active.' You see it's like a portmanteau—there are two meanings packed up into one word."
>
> "I see it now," Alice remarked thoughtfully: "and what are ' _toves_ '?"
>
> "Well, ' _toves_ ' are something like badgers—they're something like lizards—and they're something like corkscrews."
>
> "They must be very curious-looking creatures."
>
> "They are that," said Humpty Dumpty; "also they make their nests under sun-dials—also they live on cheese."
>
> "And what's to ' _gyre_ ' and to ' _gimble_ '?"
>
> "To ' _gyre_ ' is to go round and round like a gyroscope. To ' _gimble_ ' is to make holes like a gimlet."
>
> "And ' _the wabe_ ' is the grass-plot round a sun-dial, I suppose?" said Alice, surprised at her own ingenuity.
>
> "Of course it is. It's called ' _wabe_ ' you know, because it goes a long way before it, and a long way behind it—"
>
> "And a long way beyond it on each side," Alice added.
>
> "Exactly so. Well, the ' _mimsy_ ' is 'flimsy and miserable' (there's another portmanteau for you). And a ' _borogove_ ' is a thin shabby-looking bird with its feathers sticking out all round—something like a live mop."
>
> "And then ' _mome raths_ '?" said Alice. "I'm afraid I'm giving you a great deal of trouble."
>
> "Well, a ' _rath_ ' is a sort of green pig: but ' _mome_ ' I'm not certain about. I think it's short for 'from home'—meaning that they'd lost their way, you know."
>
> "And what does ' _outgrabe_ ' mean?"
>
> "Well, ' _outgribing_ ' is something between bellowing and whistling, with a kind of sneeze in the middle: however, you'll hear it done, maybe—down in the wood yonder—and, when you've once heard it, you'll be _quite_ content."
## To Bingo, or Not to Bingo
### Playable Shakespearean Characters
Some words with **bardic** (poetic) connections have roles to play on the board as well as on the stage:
**Ariel:** a gazelle found in Africa
**Dogberry:** the fruit of a dogwood tree
**Hamlet:** a village
**Lear:** learning
**Puck:** a disk used in ice hockey and other games
**Romeo:** a seductive male, a male lover
**Shylock:** to lend money with a high interest rate (considered offensive)
## Other Standout Literary or Historical Eponyms
* * *
**Bluebeard:** a man who repeatedly marries and kills his wives
---
**Caesar:** an absolute leader
**Einstein:** an exceptionally intelligent person
**Eyre:** a long journey
**Dickens:** a devil (pl. -es)
**Fagin:** a person (usually an adult) who instructs others (often children) in crime
**Holden:** the past participle of **hold**
**Huckleberry:** a berry like a blueberry
**Napoleon:** a type of layered pastry
**Oedipal:** describing libidinal feelings of a child toward the parent of the opposite sex
**Quixote:** according to the _OSPD_ , "a quixotic person" _(An interesting example of a noun being defined by an adjective derived from a proper noun._ **Quixotic** _is "extremely idealistic,"_ **quixotry** _is "a quixotic action or thought.")_
**Rousseau:** fried pemmican _(Pemmican is a Native American high-protein, high-fat food composed of dried meat, occasionally mixed with fruit.)_
**Zooey:** like a zoo
* * *
## Are You a Word?
### Playable First Names
* * *
**Al:** a type of East Indian tree
---
**Alan:** a breed of hunting dog, named after the Alan people (also **aland** , **alant** )
**Alec:** a herring
**Ana:** a collection of miscellany about a specific topic
**Anna:** a former Indian coin
**Barbie:** a barbecue
**Belle:** a pretty woman
**Ben:** an inner room
**Benny:** an amphetamine pill
**Bertha:** a style of wide collar
**Beth:** a Hebrew letter
**Biff:** to hit
**Bill:** to charge for goods or services
**Billy:** a short club
**Bo:** a friend
**Bobby:** a policeman
**Bonnie:** pretty (also **bonny** )
**Brad:** a small nail or tack
**Carl:** a peasant or manual laborer (also **carle** )
**Carol:** to sing merrily
**Celeste:** a percussive keyboard instrument (also **celesta** )
**Chad:** a scrap of paper
**Chevy:** to chase (also **chivy** )
**Christie:** a type of turn in skiing (also **christy** )
**Clarence:** an enclosed carriage
**Dagwood:** a large, stuffed sandwich ( _named after the comic strip character who was fond of them_ )
**Daphne:** a flowering shrub with poisonous berries
**Davy:** a safety lamp
**Deb:** a debutante
**Devon:** a breed of cattle
**Dexter:** located to the right
**Dom:** a title given to some monks
**Don:** to put on a piece of clothing
**Donna:** an Italian woman of repute
**Erica:** a shrub of the heath family
**Fay:** to join together closely
**Florence:** a former European gold coin
**Franklin:** a nonnoble medieval English landowner
**Fritz:** a nonworking or semi-functioning state
**Gilbert:** a unit of magnetomotive force _(equal to 10/4π ampere-turn, in case you were wondering)_
**Gilly:** to transport on a type of train car
**Graham:** whole-wheat flour
**Hank:** to secure a sail
**Hansel:** to give a gift to, usually to commence a new year (also **handsel** )
**Harry:** to harass
**Henry:** a unit of electric inductance
**Herby:** full of herbs
**Jack:** to hoist with a type of lever
**Jacky:** a sailor
**Jake:** okay, satisfactory
**Jane:** a girl or woman
**Jay:** any of various birds, known for their crests and shrill calls
**Jean:** denim
**Jenny:** a female donkey
**Jerry:** a German soldier
**Jess:** to fasten a strap around the leg of a bird in falconry (also **jesse** )
**Jill:** a unit of liquid measure equal to 1/4 of a pint
**Jimmy:** to pry open
**Joannes:** a Portuguese coin (also **johannes** )
**Joe:** a fellow
**Joey:** a young kangaroo
**John:** a toilet
**Johnny:** a hospital gown
**Jones:** a strong desire
**Josh:** to tease
**Kelly:** a bright shade of green
**Kelvin:** a unit of absolute temperature
**Ken:** to know
**Kent:** past tense of **ken**
**Kerry:** a breed of cattle
**Kris:** a curved dagger
**Lars:** plural of **lar**, a type of ancient Roman guardian deity (also **lares** )
**Lassie:** a lass
**Laura:** an aggregation of hermitages used by monks
**Laurel:** to crown one's head with a wreath
**Lee:** to shelter from wind
**Louie:** a lieutenant
**Louis:** a former gold coin of France worth 20 francs
**Mac:** a raincoat
**Mae:** more
**Mamie:** a tropical, fruit-bearing tree (also **mamey** and **mammee** )
**Marc:** the pulpy residue of fruit after it is pressed for wine
**Marcel:** to make waves in the hair using a special iron
**Marge:** a margin
**Martin:** any of the type of bird also known as a swallow
**Marvy:** marvelous
**Matilda:** a hobo's bundle _(chiefly Australian, where the hobo would likely be called a **swagman** )_
**Matt:** to put a dull finish on (also **matte** )
**Maxwell:** a unit of magnetic flux
**Mel:** honey
**Merle:** a blackbird
**Mickey:** a drugged drink _(Also known as a Mickey Finn, after a Chicago bartender of that name who, around the turn of the 20th century, would slip some chloral hydrate into unsuspecting patrons' drinks, then bring the incapacitated victims to a back room where he would rob them.)_
**Mike:** a microphone
**Milt:** to fertilize with fish sperm
**Minny:** a minnow
**Mo:** a moment
**Molly:** a type of tropical fish
**Morgan:** a unit of frequency in genetics
**Morris:** a type of folk dance from England
**Morse:** describing a type of code made of long and short signals
**Mort:** a note sounded in hunting to announce the death of prey
**Nelson:** a type of wrestling hold
**Newton:** the unit of force required to accelerate one kilogram of mass at a rate of one meter per second squared
**Nick:** to make a shallow cut
**Norm:** a standard
**Pam:** the name for the jack of clubs in some card games
**Peter:** to lessen gradually
**Pia:** a fine membrane of the brain and spinal cord
**Randy:** sexually excited
**Regina:** a queen
**Rex:** a king
**Rick:** to stack hay, corn, or straw
**Rob:** to steal
**Robin:** a type of thrush with a reddish breast
**Rod:** to provide with or use a rod
**Roger:** the pirate flag
**Sal:** salt
**Sally:** to make a brief trip or sudden start
**Sawyer:** one who saws wood
**Shawn:** a past participle of **show**
**Sheila:** a girl or young woman
**Sol:** the fifth note of the diatonic scale (also **so** )
**Sonny:** a boy or young man
**Sophy:** a former Persian ruler
**Spencer:** a type of sail
**Tad:** a young boy
**Tammie:** a fabric used in linings and curtains (also **tammy** )
**Ted:** to spread for drying
**Teddy:** a woman's one-piece undergarment
**Terry:** a soft, absorbent type of cloth
**Tiffany:** a thin, mesh fabric
**Timothy:** a Eurasian grass used for grazing
**Toby:** a drinking mug in the shape of a man or a man's face
**Tod:** a British unit of weight for wool equal to 28 pounds
**Tom:** the male of various animals
**Tommy:** a loaf or chunk of bread
**Tony:** stylish
**Vera:** very
**Victoria:** a light, four-wheeled carriage
**Warren:** an area where rabbits live, or a crowded, mazelike place
**Webster:** one who weaves
**Will:** to choose, decree, or induce to happen
**Willy:** to clean fibers with a certain machine (also **willow** )
* * *
#### I Haint Afraid of No...
**Banshee:** a female spirit in Gaelic folklore that wails to warn of a family member's imminent death
**Barguest:** a goblin (also **barghest** )
**Bogy:** a goblin
**Daimon:** a spirit (also **daemon** )
**Eidolon:** a phantom or specter
**Fairyism:** the quality of being like a fairy ( _not really a ghost, but a great word_ )
**Haint:** a ghost
**Kelpie:** a water sprite in Scottish folklore known for drowning sailors
**Wraith:** a ghost of a person, often appearing just before that person's death
**Zombi:** a zombie ( **Zombify** _and_ **zombification** _are both playable, but are not to be confused with_ **zombiism** _, a West African and Haitian belief system involving a rainbow serpent. See Wade Davis's_ The Serpent and the Rainbow.)
## Trumping with Tramps
The world's oldest profession is also one of the best represented in the _OSPD._ Synonyms for **prostitute** abound in the dictionary. While the old classics like **hooker** and **whore** are included, here are ten less-familiar synonyms alongside some of their best usages:
**Callet** , as used in Shakespeare's _Othello_
DESDEMONA: Am I that name, Iago?
IAGO: What name, fair lady?
DESDEMONA: Such as she says my lord did say I was.
EMILIA: He call'd her whore; a beggar in his drink
Could not have laid such terms upon his **callet**.
IAGO: Why did he so?
DESDEMONA: I do not know; I am sure I am none such.
**Demirep** , from _Tom Jones_ by Henry Fielding
> "He had no knowledge of that character which is vulgarly called a **demirep** ; that is to say, a woman who intrigues with every man she likes, under the name and appearance of virtue; and who, though some over-nice ladies will not be seen with her, is visited (as they term it) by the whole town, in short, whom everybody knows to be what nobody calls her."
**Quean:** (pronounced "kwayne") from Lord Byron's _Don Juan_
"She was to dismiss her guards and he his Harem,
And for their other matters, meet and share 'em.
But as it was, his Highness had to hold
His daily council upon ways and means
How to encounter with this martial scold,
This modern Amazon, and queen of **queans**."
**Trull:** from Jonathan Swift's "A Proposal for Giving Badges to the Beggars of Dublin"
> "If he be not quite maimed, he and his **trull** , and litter of brats (if he hath any) may get half their support by doing some kind of work in their power, and thereby be less burthensome to the people."
Other good ones include **chippy** (also **chippie** ), **cocotte** , **cyprian** , **floozy** (also **floosy** , **floozie** , and **flossie** ), and **pross** (also **prossie** and **prostie** ).
Of course, there are plenty of other colorful words in the lexicon that deal with, **ahem** , the ins and outs of sex, or even interesting gender roles. Here are twenty-nine of them, including one of my favorite acceptable words, **_pimpmobile_**. (If you don't appreciate that such words are in the Scrabble lexicon, just remember: don't hate the **playa** , hate the game.)
* * *
**Bawd:** a female proprietress of a brothel
---
**Bemadam:** to refer to by the title of madam
**Bimbette:** an attractive but dumb young woman
**Bimbo:** a promiscuous woman, an unintelligent man
**Catamite:** a boy who is sodomized
**Cathouse:** a brothel
**Cicisbeo:** a lover of a married woman (pl. -beos, -bei) ( _likely an inversion of the Italian_ bel cece _, which translates literally to "beautiful chickpea," and is used in the Italian in the same way_)
**Cornuto:** a cuckold
**Cotquean:** a hussy, or a man who busies himself in a housewife's affairs
**Curiosa:** pornographic literature
**Drab:** to consort with prostitutes
**Frottage:** the act of masturbation by rubbing against another person
**Frotteur:** one who engages in frottage
**Hoochie:** a sexually promiscuous young woman
**Lupanar:** a brothel
**Nudie:** a film featuring naked actors
**Nympho:** a woman with extreme sexual desire
**Peepshow:** a performance, generally sexual in nature, watched through a small hole
**Pimp:** to solicit clients for a prostitute
**Pimpmobile:** an ostentatious car characteristically owned by a pimp
**Playa:** a basin in a desert valley that may occasionally fill with water
**Porno:** pornography
**Porny:** pornographic in nature
**Rakehell:** a man with little moral restraint, a libertine
**Sexpot:** an extremely sexually attractive woman
**Sleazo:** sleazy
**Sleazoid:** a sleazy person
**Swive:** to have sexual intercourse with
**Tomcat:** to be sexually promiscuous—used of men
* * *
## It's Kosher for Scrabble
### Yiddish Words My Grandmother Knew
The imminent death of Yiddish, a language that rivals Scrabblish for its lexicon of fantastic, unusual words, is a matter of much hand-wringing.
Players can **futz** with their letters while some little **pisher** , some **nudnik** across the table, **putzes** around with a blank. At the next table is some **schlemiel** who's **schlepped** his **schlock** in with him to bring him good luck. The **schlub** he's about to play **schlumps** in his chair, wiping his **schnoz** with a **schmatte** , and the two **schmos** sit and **schmooze** like a couple of **schmucks.** It's the same **shtick** every time, but at heart they're a couple of **nebbishy menschen**. When one makes a bingo, the other never says, " **oy** "—it's always " **mazeltov**!" without any **schmalz**.
* * *
**Futz:** to act ineffectually, either deliberately or inadvertently
---
**Pisher:** someone young and/or inexperienced
**Nudnik:** an obnoxious, annoying person (also **nudnick** ) _(also the name by which each and every one of my Hebrew school teachers called me)_
**Putz:** to deliberately act ineffectually _(also slang for_ penis _)_
**Schlemiel:** a dope (also **schlemihl** )
**Schlep:** to carry laboriously or pitifully (also **schlepp** )
**Schlock:** worthless items (also **shlock** )
**Schlub:** a dumb or unfit person
**Schlump:** to move about lazily
**Schnoz:** the nose
**Schmatte:** a rag or garment of low quality
**Schmo:** a dolt (also **schmoe** ) (from **schmuck** )
**Schmooze:** to speak idly or smoothly (also **schmoose** )
**Schmuck:** an obnoxious or stupid person _(literally_ penis _, from the Polish_ smok _, meaning "dragon")_
**Shtick:** a routine (also **schtick** , **schtik** )
**Nebbishy:** meek ( _describes a person who's a_ **nebbish** )
**Mensch:** a good person (pl. -es, -en) _(The opposite of a_ **mensch** _is an_ unmensch _—a fine Yiddish word even if it doesn't fly in Scrabble.)_
**Oy:** an interjection used to express exasperation or pain
**Mazeltov:** an interjection used to express congratulations, though a closer translation is "good luck"
**Schmalz:** rendered chicken fat or excessive sentimentality (also **schmaltz** )
* * *
#### The Whole Schmeer
"Auto-antonyms" (or contronyms or antagonyms) are **homographs** that have opposite meanings. Oft-cited examples include _left_ (it can mean both "gone" or "still here"), _clip_ ("to cut off" or "to attach together"), and _fast_ ("to move quickly" and "to stay in one position"). **Schmeer** , which refers to a spread of butter or cream cheese on bread ("a bagel with a schmeer" is not uncommon parlance in New York City, to this day), can also be used as the verb "smear," as in, to apply a schmeer to a bagel ( _schmeer_ and _smear_ share the same old German roots). It also means to "bribe," in the same sense as the English idiom "to butter up." Of course in English, _smear_ is also to dishonor one's reputation, the opposite of buttering anyone up. So in a sense, in translation _schmeer_ 's definitions are—if not opposites—at least oddly antithetical.
_Schmeer_ can also be used in the sense of "everything together, or a little of everything together" as in, "What's he on trial for?—Oh, let's see: money laundering, racketeering, embezzlement, tax evasion... the whole schmeer." So while one can't schmeer someone in the sense of "to defame," one can schmeer (bribe) a politician with a schmeer (a large spread of cream cheese), or smear (defame) a politician for a schmeer of indecencies, including being a **ganef** (thief).
My favorite near-Yiddish word in Scrabblish is **_schnorkel_** , an alternate spelling of _snorkel_. Or perhaps it's snorkeling while **kvetching** about the underwater humidity?
Here are some more Yiddish words that are as good as **gelt** (money):
* * *
**Bubkes:** a tiny amount, virtually nothing (also **bupkes** and **bupkus** ) _(literally goat or horse droppings)_
---
**Chatchka:** a knickknack (also **chatchke** and **tchotchke** )
**Cockamamy:** a knickknack, or used to denigrate something as being far-fetched (also **cockamamie** ) _(as in, my mother: "It's no fun when you play all those_ **cockamamie** _words!")_
**Dreck:** garbage or drivel (also **drek** )
**Dreidel:** a type of spinning top (also **dreidl** )
**Echt:** genuine ( _It's held by some that the drink "egg cream," which doesn't contain any eggs, derived its name from_ echt cream _, the "real" or "genuine" cream drink.)_
**Fleishig:** describing food containing meat or fat
**Ganef:** a thief (also **gonef** , **ganof** , **gonof** , **gonoph** , **gonif** , and **goniff** )
**Hutzpa:** nerve or gall (also **hutzpah** , **chutzpa** , **chutzpah** )
**Kibitz:** to make small talk, especially as something important is happening nearby (also **kibbitz** and **kibbutz** )
**Kvell:** to brim with great pride
**Kvetch:** to complain
**Mamzer:** a bastard (also **momser** )
**Nosh:** to snack
**Plotz:** to become overwhelmed with emotion
**Schnook:** a fool (also **shnook** )
**Schnorrer:** a freeloader
**Schnoz:** the nose, usually a big one (also **schnozz** and **schnozzle** )
**Shtetel:** a Jewish village (also **shtetl** ) (pl. **shtetels** , **shtetlach** )
**Shul:** a synagogue (also **schul** , pl. -s or -n)
**Tsuris:** troubles, worries (also **tsoris** , **tsorriss** , and **tsores** )
* * *
#### Doing It Our Way: "Schlemiel! Schlimazel! Hasenpfeffer Incorporated"
Fans of the old sitcom _Laverne & Shirley_ will remember the iconic start to the show's opening theme song, sung while the two young women hop down the Milwaukee block: " **Schlemiel**! **Schlimazel**! **_Hasenpfeffer_** Incorporated." They may have been reciting a Yiddish-American hopscotch chant, but these lyrics could be helpful in another game played upon squares.
* * *
**Schlemiel:** a fool, or someone with bad luck (also **schlemihl** )
---
**Shlimazel:** someone perennially unlucky
**Hasenpfeffer:** a German stew of rabbit or hare
* * *
While a shlimazel is likely to fall down a lot, a schlemiel is likely, as the old Yiddish saying goes, "to fall on his back and break his nose."
## Jeepers Creepers
### Interesting Interjections, Exceptional Exclamations, and Outstanding Oaths
The Scrabble board is colorful—why shouldn't its language be colorful too? Although many dirty words have been struck from the _OSPD_ , the player who extracts six tiles from the bag and discovers that he's pulled four Is and two Us to add to the V already on his rack does have some recourse; there are many wonderful non-expletive interjections in Scrabblish.
* * *
**Arriba:** used to express excitement
---
**Attaboy:** used to cheer a male on
**Attagirl:** used to cheer a female on
**Begorah:** a mild oath _(Irish in origin, from "by God!")_ (also **begorra** , **begorrah** )
**Bejabers:** a mild oath _(likely from "by Jesus!")_
**Bejeezus:** a mild oath _(likely from "by Jesus!")_ (also **bejesus** )
**Blimy:** a mild oath ( _primarily British, a contraction of "God blind me")_ (also **blimey** )
**Caramba:** an exclamation of surprise _(purportedly from the Spanish_ carajo _, meaning "penis")_
**Crikey:** a mild oath _(likely a euphemism for Christ)_ (also **cricky** and **cracky** )
**Criminy:** a mild oath _(likely a euphemism for Christ, perhaps a contraction of the euphemism "Jiminy Cricket!")_ (also **crimine** )
**Cripe:** a mild oath _(a euphemism for Christ)_ (also **cripes** )
**Dammit:** a mild oath
**Duh:** an exclamation of obviousness
**Eek:** an exclamation of fear
**Egad:** a mild oath _(likely from "Oh, God")_ (also **egads** )
**Faugh:** an exclamation of disgust (also **foh** )
**Fie:** an exclamation of disapproval
**Gardyloo:** an expression of warning _(from the French_ garde à l'eau _, meaning "beware of the water," it originated in Scotland and was used to warn pedestrians that one was about to throw dirty water into the street)_
**Geez:** a mild oath _(a shortened form of Jesus)_ (also **jeez** )
**Giddyup:** used to urge on (also **giddap** and **giddyap** )
**Goldarn:** an exclamation of anger (also **goldurn** )
**Golly:** a very mild oath _(perhaps a contraction of "God's body")_
**Gor:** a mild oath _(primarily British, a euphemism for God)_ (also **gorblimy** )
**Gosh:** a very mild oath _(a euphemism for God)_
**Hey:** used to gain attention (also **heigh** )
**Hic:** expressive of a hiccup (also **huic** )
**Hunh:** used to ask for something to be repeated or explained
**Hup:** used to urge on
**Jeepers:** a mild oath _(a euphemism for Jesus)_
**Jiminy:** a mild oath _(a euphemism for Jesus)_ (also **jimminy** )
**Lackaday:** an expression of regret _(a contraction of "alack the day")_
**Mm:** used to express appreciation or assent
**Nertz:** an expression of unhappiness _(likely a form of "nuts!")_ (also **nerts** )
**Ochone:** an exclamation of lament _(Irish and Scottish, a form of_ **ohone** _)_
**Pah:** an exclamation of disgust
**Pardi:** a mild oath _(Used like "by God." See_ Hamlet: _"Ah, ha! Come, some music! come, the recorders! For if the king like not the comedy, Why then, belike, he likes it not, perdy.")_ (also **pardee** , **pardie** , **pardy** , **perdie** , and **perdy** )
**Pfft:** an expression of dismissal, or an expression of a sudden conclusion
**Pfui:** an expression of dismissal or contempt (also **phooey** )
**Phew:** an expression of relief
**Pht:** a mild expression of anger (also **phpht** )
**Poh:** an expression of disapproval _(I love the_ OED _'s definition: "An ejaculation of contemptuous rejection." Accurate bordering on poetic!)_ (also **pugh** )
**Prithee:** an expression used to implore (also **prythee** )
**Ptui:** an expression of disgust (also **ptooey** )
**Quotha:** a sarcastic expression used after repeating another's words to imply disbelief, used like "indeed!" _(as in, "Tom played_ **ptooey** _! 'I knew it because I read it in a book,'_ **quotha** _!")_
**Rah:** used to express encouragement and support in competition
**Righto:** used to express assent
**Shazam:** used to express a magical occurrence
**Sheesh:** an expression of exasperation
**Sooey:** used as a call to pigs
**Touche:** used to express a hit in fencing or a good point in conversation
**Tsk:** an expression of disapproval (also **tsktsk** )
**Vive:** an expression meaning "long live" _(not to be confused with_ **vivers** _, a plural noun for food used mostly in Scotland)_
**Voila:** used to express something's instantaneous presence
**Vum:** used to express surprise _(Antiquated, primarily used in New England, this holdover from colonial times comes from_ vow _, and was often used in "Well, I vum!" to show surprise.)_
**Whamo:** used to express an instantaneous, powerful event (also **whammo** )
**Whee:** used to express enjoyment
**Whoa:** used to express hesitation
**Wirra:** used to express sorrow
**Wisha:** used to express surprise
**Yikes:** used to express trepidation
**Yipe:** used to express fear or dismay (also **yipes** )
**Yippee:** used to express intense happiness
**Yoicks:** used to encourage hunting dogs
**Yum:** used to express appreciation for food
**Zooks:** a mild oath (also **gadzooks** , as well as **godzookery** , -ies) _(a contraction of "God's hooks," referring to Jesus's crucifixion)_
**Zounds:** a mild oath _(from "God's wounds," referring to Jesus's crucifixion)_
**Zowie:** an expression of surprise or admiration
**Zzz:** used to express being asleep
* * *
#### Swanny, I Swanny
**Swanny** , "to declare," is a verb playable only in the first-person singular, as in "I swanny!" Little known outside the American South, "I swanny" can still be heard there in moments of annoyance or dismay as a very mild oath meaning "I swear," derived from a contraction of "I shall warrant ye." A further shortened form—perhaps the most graceful expression of agitation I've ever encountered—also survives: "I swan."
## The Answer, My Friend, Is Blowing in the Wind
### Plays that Make Scrabble a Breeze
You might be surprised to find what a breeze it is to make words using these many types of winds found around the world.
In Cuba: Oh, boy, feel that **bayamo blaw.**
In Siberia: **Brr** , what a **bura**.
In Croatia: **Brrr** , what a **bora**.
In France it's a **fon** , in India a **bhut** , in the American Northwest a **chinook.**
In Switzerland: This is some **bise** , I think I might get a **bleb**.
**Bayamo:** a strong wind found in Cuba
**Blaw:** to blow
**Brr:** used to indicate feeling cold (also **brrr** )
**Bura** : a violent Eurasian windstorm (also **buran** )
**Bora** : a cold wind in lowland regions, particularly along the Adriatic
**Fon** : a warm, dry wind that blows down off some mountains (also **fohn** and **foehn** )
**Bhut:** a warm dry wind in India (also **bhoot** )
**Chinook:** a warm wind that flows off the east side of the Rockies; or a type of Pacific Northwest salmon named after the Chinook people
**Bise:** a cold, dry wind, found especially blowing from the northeast in Switzerland (also **bize** )
**Bleb:** a blister
#### OTHER WINDS
**Etesian:** a northerly Mediterranean summer wind
**Haboob:** a violent sandstorm or **duststorm** _(There was something of a dustup in Arizona over the use of haboob in local weather reports when a storm hit the Phoenix area in July 2011, with some local residents complaining that the word could offend American soldiers returning from the Middle East.)_
**Oe:** defined by the _OSPD_ as "a whirlwind off the Faeroe Islands"; the _OED_ 's definition is "a small island" _(similar to the words for island in Danish,_ **Ø** _, and Swedish,_ ö _)_
**Sarsar:** an icy wind _(from the Arabic_ çarçar _for a cold wind)_
**Simoom** : a hot, violent desert wind (also **simoon** and **samiel** )
**Williwaw:** a violent, cold wind blowing down from a mountain (also **willywaw** and **williwau** )
## Superhero/Superheroine Secret Identities
### Playable Comic Book Heroes and Villains
While proper names are **persona** non grata in Scrabble, some comic book characters are welcome to the board, as they have common noun definitions as well.
**Batgirl:** a girl whose job it is to mind baseball equipment
**Batman:** a British officer's orderly
**Corsair** : a pirate
**Hulk** : to appear large or intimidating
**Iceman:** a man whose job it is to supply ice
**Ironman:** a man of great strength and/or endurance
**Joker:** one who habitually makes jokes
**Magneto:** a small electric generator containing a magnet
**Mystique:** an aura of attractiveness
**Riddler:** one who poses riddles
**Robin:** a type of thrush
**Superman:** an idealized, superior man
**Superwoman:** from _Merriam-Webster_ , "an exceptional woman; _especially:_ a woman who succeeds in having a career and raising a family"
**Wolverine:** a smallish, vicious carnivore of the weasel family, native to the tundra
## A Growing Web of Words
The years between the publication of the third _OSPD_ in 1996 and the fourth edition eight years later brought with them the explosion of the Internet, which, despite not being a playable word itself, was attended by its own lexicon, including **email** , **ebook** , **webcam** , **webcast** , **blog** / **weblog** (as a verb and noun), **blogger** (but not weblogger), **firewall** , **login/logon** (but not logout), **metatag** , and **spam** / **spammer** / **antispam** / **spambot**. My two favorites are:
**Megaflop:** a measure of a computer's calculating speed equal to one million floating point operations ("FLOPS") per second
**Wetware:** the human brain when considered as functionally equivalent to a computer
## Wannabe a Baller?
Some hip-hop terms have made their way into Scrabblish, one way or another:
**Baller:** "one who balls," according to the _OSPD_
**Benjamin:** a fragrant gum resin (also **benzoin** )
**Biggie:** a large person
**Doggy:** (adj.) similar to a dog, or (n.) a little dog (also **doggie** )
**Jiggy:** pleasurably excited
**Lo:** used to call attention
**Poppa:** father
**Skee:** to ski
**Smalls:** pl. of **small** (the small part)
**Snoop:** to sneakily investigate
**Wannabe:** a derogatory term for someone who aspires to be like someone else
## Put Down a Putdown
### Playing with Insults and Casting Aspersions
Scrabble is also a great way to learn a slew of new insults. All of these words are in the _OSPD,_ meaning they're considered safe for family play. So dig in and take that **smarty** (an obnoxious know-it-all) you're playing down a peg or two!
**SCRABBLISH WAYS TO CALL SOMEONE STUPID**
**airhead**
**berk**
**birdbrain**
**bonehead**
**booby**
**butthead**
**charlie** ( **charley** )
**chucklehead**
**clod**
**clodhopper**
**clodpate**
**clodpole** ( **clodpoll** )
**coof**
**crackbrain**
**cretin**
**dobby**
**dolt**
**dork**
**dumbhead**
**dumbo**
**dummkopf**
**dunderhead**
**fathead**
**feeb**
**gaby**
**gomeral** ( **gomerel, gomeril** )
**gowk**
**haverel**
**idiot**
**jughead**
**lunk**
**lunkhead**
**lurdan**
**mome**
**mooncalf**
**moron**
**nidget**
**ninny**
**ninnyhammer**
**saphead**
**sawney**
**simp**
**softhead**
**spaz**
**staumrel**
**tomfool**
**OTHER, MORE-SPECIFIC SCRABBLISH INSULTS**
**Baddy:** a bad person (also **baddie** )
**Egghead:** one who is overly intellectual
**Gomer:** a slang term for a hospital patient who the staff feels ought not to be in a hospital _(an acronym for "Get Out of My Emergency Room"_ )
**Hoser:** a sloppy person
**Nerd:** an unstylish or awkward person (also **nurd** )
**Tawpie:** a foolish young person, usually a girl
In Scrabblish, there's no aspersion cast upon the **geek** : "a single-minded enthusiast or expert." What a great game!
## Should Auld Scots Words Be Forgot?
The _OSPD_ is full of Scots words that are useful in the game ( **gude** , **hae** , **frae** ), but some might ask when these words would ever be useful in real life. Well, come the stroke of midnight next New Year's Eve, you might find yourself once again singing those strange lyrics to Robert Burns's famous poem "Auld Lang Syne." Sure, the drunken revelry makes for a **gude** (good) excuse to slur through the lines, carefree of their meaning, but for the sake of respecting this rather bittersweet poem (if not for the sake of your Scrabble game), let's unpack the title, chorus, and verses using the _OSPD_ as a guide.
"Auld Lang Syne" literally means "Old Long Since," or more idiomatically, "Days Gone By," "Old Times," or "(The) Long Time Since."
Here are the lyrics, followed by a translation of the song.
Should **auld** acquaintance be forgot,
and never brought to mind?
Should auld acquaintance be forgot,
and auld **lang syne**?
CHORUS:
For auld lang syne, my **jo** ,
for auld lang syne,
we'll tak a cup o' kindness yet,
for auld lang syne.
And surely ye'll be1 your pint- **stowp**!
and surely I'll be mine!
And we'll tak a cup o' kindness yet,
for auld lang syne.
We **twa hae** run about the **brae** s,
and pu'd2 the **gowan** s fine;
But we've wander'd **mony** a weary fit,3
sin auld lang syne.
We twa hae paidl'd4 i' the burn,5
**Frae** morning sun till dine;
But seas between us braid6 hae roar'd
sin auld lang syne.
And there's a hand, my trusty fiere7!
and **gie** 's a hand o' thine!
And we'll tak a right **gude** -willy8 **waught** ,
for auld lang syne.
_____________
Definitions of words (as used above) that are either not in the _OSPD_ or are used differently in the poem than as defined in the _OSPD_ :
1. **be** : to buy
2. pu'd: the past tense of the Scots verb "pou," often written as pu in Burns' time; to "pull" or "pluck"
3. **fit** : foot
4. paidl'd: paddled
5. **burn** : a stream (playable: **burnie** : a brooklet)
6. **braid** : broad
7. fiere: friend, comrade (fiere is not playable, but two helpful words are **fierier** and **fieriest** : the comparative and superlative of **fiery** [intensely hot])
8. gude-willy: goodwill ( **willy** is playable as to **willow:** to clean fibers with a certain machine)
* * *
**Auld:** old
---
**Lang:** long
**Syne:** since
**Jo:** a sweetheart
**Stowp:** a basin where holy water is kept (also **stoup** )
**Twa:** two
**Hae:** have
**Brae:** a hill
**Gowan:** a daisy or other white and yellow flower
**Mony:** many
**Frae:** from
**Gie:** to give
**Gude:** good
**Waught:** to drink in deeply, to quaff
* * *
## Righting Your Rack
### How to Deal with Unruly Letters
Rack management, the ability to keep a good balance between usable vowels and consonants on one's rack, is one of the most undervalued skills in Scrabble. But sometimes you can't help it—one moment you're okay, and the next vowels or consonants have overtaken your rack like seven children who don't play well with each other.
Don't despair; you have options. The first is swapping these unruly misfits back into the bag for more amenable ones, though sending them back to the cloth orphanage will cost you a turn.
The alternative is to play with what you've got. Luckily, there are more than a few oddball words in the _OSPD_ at your disposal for dealing with either too many vowels or too many consonants.
#### WORDS WITHOUT CONSONANTS
**Ae ai** , blown by an **oe** oe'r the **eau** to Oahu, said, " **Oi** , look at this **aa**."
* * *
**Ae:** one _(_ adj. _)_
---
**Ai:** a three-toed sloth
**Oe:** a whirlwind off the Faeroe Islands
**Eau:** water (pl. -x)
**Oi:** an expression of dismay (also **oy** )
**Aa:** a type of stony, rough lava
* * *
#### FORTY FOUR-LETTER WORDS THAT HAVE THREE VOWELS
* * *
**Aeon:** a long period of time (also **eon** )
---
**Agee:** to one side (also **ajee** )
**Agio:** a surcharge applied when exchanging currency
**Ague:** sickness associated with malaria
**Ajee:** to one side (also **agee** )
**Akee:** a tropical tree
**Alae:** wings (pl. of **ala** )
**Alee:** on the side shielded from wind
**Amia:** a freshwater fish
**Amie:** a female friend
**Anoa:** a kind of small buffalo
**Awee:** a little while
**Eaux:** waters (pl. of **eau** )
**Eide:** distinctive appearances of things (pl. of **eidos** )
**Emeu:** an emu
**Etui:** an ornamental case
**Euro:** an Australian marsupial, also known as a **wallaroo** , for being like the kangaroo and wallaby; also a unified currency of much of Europe
**Ilea:** the terminal portions of small intestines (pl. of **ileum** )
**Ilia:** pelvic bones (pl. of **ilium** )
**Inia:** a part of the skull
**Ixia:** a plant with funnel-shaped flowers
**Jiao:** a Chinese currency (also **chiao** )
**Luau:** a large Hawaiian feast
**Meou:** to meow
**Moue:** a pouting expression
**Naoi:** ancient temples (pl. of **naos** )
**Obia:** a form of sorcery practiced in the Caribbean (also **obeah** )
**Odea:** concert halls (pl. of **odeum** )
**Ogee:** an S-shaped molding
**Ohia:** a Polynesian tree with bright flowers (also **lehua** )
**Olea:** corrosive solutions (pl. of **oleum** )
**Oleo:** margarine
**Olio:** a miscellaneous collection
**Ouzo:** Greek anise-flavored liquor
**Raia:** a non-Muslim Turk (also **rayah** )
**Roue:** a lecherous old man
**Toea:** a currency in Papua New Guinea
**Unai:** a two-toed sloth (also **unau** ) _(An_ **ai** _is a three-toed sloth, an_ **unai** _is a two-toed sloth. That is to say, in the land of the sloths the three-toed ai is king.)_
**Zoea:** the larvae of some crustaceans
* * *
#### WORDS WITHOUT VOWELS
" **Brr** , **brrr** , it's cold in this **cwm** ," said Carl.
" **Hm** , **hmm** , it's like negative ten to the **nth** ," agreed Hilda.
" **Pst! Psst**! Do you hear someone playing the **crwth**?" asked Carl.
"Playing bop is like Scrabble with all the vowels missing."
—Duke Ellington
" **Sh** , **shh**!" she said.
" **Mm** ," said Carl, "sounds good."
**Pfft** —the sound disappeared.
" **Pht** , **phpht** ," **tsk** ed Hilda.
Carl **tsktsk** ed too.
* * *
**Brr:** used to indicate that one feels cold (also **brrr** )
---
**Cwm:** a cirque (a deep, steep-walled basin on a mountain) (pl. -s) _(pronounced to rhyme with_ boom _)_
**Hm:** used to express thoughtful consideration (also **hmm** )
**Nth:** describing an unspecified number of a series
**Psst:** used to attract someone's attention
**Crwth:** an ancient stringed instrument (pl. -s) _(pronounced to rhyme with_ booth _)_
**Sh:** used to urge silence (also **shh** and **sha** )
**Mm:** used to express assent or satisfaction
**Pfft:** used to express a sudden ending
**Phpht:** used as an expression of mild anger or annoyance (also **pht** )
**Tsk:** to utter an exclamation of annoyance (-ed, -ing, -s)
**Tsktsk:** to tsk (-ed, -ing, -s)
* * *
## The Language of Amour, and Vowels
### Words from the French
Growing up, it was always an adventure playing Scrabble with my **pere** , who is **francophone**. His cri **de** coeur of "But it's a word in French!" was common after being informed that—incroyable!—the word he played was not in the _OSPD_. Nevertheless, every once in a while he'd be buoyed by the discovery that a French word he'd attempted was legal in the **jeu**.
Other **mots** (witty remarks) include **frere** , **mere** (a pond, when used as a noun), **ami** , **beau** ( **-x** ) (a boyfriend), **petit/petite** , **ennui** , **fille** , **femme** , **monsieur** , **bonne** (but not bon), **chez** , **nom** , **sans** , **noir** , **blanche** , **rouge** , **tres** , and **eau** ( **-x** ). But there's **beaucoup** (also **boocoo** and **bookoo** ) more where those came from... which is France, I suppose:
**Un** (but not deux), **trois** , **quatre**... Here are some more:
* * *
**Artiste:** a performance artist
---
**Bastile:** a prison (also **bastille** )
**Bateau:** a type of riverboat (also **batteau** )
**Cent:** one hundredth of a dollar
**Sept:** a clan
**Chateau:** a castle
**Compt:** to enumerate
**Coterie:** a tight group
**Couteau:** a knife
**Dauphin:** the eldest son of a French king
**Dernier:** last
**Droit:** a right
**Escargot:** an edible snail
**Fils:** son (pl. fils)
**Flic:** a French policeman
**Frites:** French fries
**Gateau:** a cake
**Gauche:** devoid of social grace
**Gigot:** a lamb leg
**Jete:** a kind of ballet leap
**Lycee:** a French high school
**Matin:** a song sung by birds in the morning
**Mignon:** a small cut of beef
**Mille:** a thousand
**Modiste:** one who sells fashionable clothing
**Morceau:** a short composition
**Mouton:** a processed sheepskin made to look like that of another animal
**Nouveau:** of a new style
**Pannier:** a large container, generally a basket
**Pierrot:** a sad clown common in French pantomime
**Postbourgeois:** no longer representative of the middle class
**Poutine:** French fries covered in cheese curds and gravy (one of the highest-scoring bingos, calorically)
**Prochain:** next, close to (also **prochein** )
**Quartier:** a neighborhood or district of a city
**Sangfroid:** self-possession or "coolness" under pressure
**Sieur:** an antiquated title of respect for a Frenchman
**Tasse:** a particular piece of metal armor for the leg (also **tasset** )
**Vert:** a bright shade of green
* * *
**Pere:** father
**Francophone** : French-speaking
**De:** of, from
**Jeu:** a game (pl. **jeux** )
And **voila** , now you speak perfect French, and much improved Scrabblish to boot!
## The Absolutest Superlatively Weirdest Superlatives & Pluralizations
There are lots of strange words and constructions in the _OSPD_. For instance, if you think you've seen enough of them, and then I give you some more, it's possible that you've seen **enoughs**.
Here are some more of the **mostest** strange: **enows** ( **enow** : enough), **uniquer** / **uniquest** , **cherubims** ( **cherubim** is already plural, but maybe you can't get **enow** of a good thing), **absoluter** / **absolutest** , and **nothings**.
Considering some other unusual plurals is likely to get one thinking deeply about **pluralism** (the coexistence of more than one of a thing) and even **pluralisms.** The names of some centuries—like **duecento** (the thirteenth century), **trecento** (the fourteenth), **seicento** (the seventeenth)—can be pluralized ( **duecentos** , **trecentos** , and **seicentos** ). Perhaps it's in those alternative centuries that people played many **tennises**. And while we're **funning** (acting playfully), we can take advantage of the definition of the less common homonym of **none** as one of the seven canonical daily periods of prayer (like vespers, but none takes place at 3 pm and rhymes with _bone_ ), providing the chance to construct the surprising creation **nones**.
## Prefixal
Prefixes and suffixes are supremely important in Scrabble—just imagine your opponent playing **fix** when you held **transes** in your rack. The _OSPD_ has pages and pages of words starting with prefixes like _re_ \- and _un_ \- and is packed with words that can take suffixes - _er_ and - _ing_. Here are some surprising examples:
##### BEING ALONE
Bea's **beliquored** brain **bethinks** and **beshrews** her **beblooded** face, before **bepimpled** , now not to be **befingered** , let alone **bekissed**.
**Beliquor:** to drench in liquor
**Bethink:** to mull over
**Beshrew:** to put a curse upon
**Beblood:** to make bloody
**Bepimple:** to cover with pimples
**Befinger:** to touch all over
**Bekiss:** to kiss all over
##### OUTING: AN OUTBURST OF OUTBOASTING
Your throat may **outsnore** mine, your lungs may **outsmoke** mine,
Your luck may **outjinx** mine, your words may **outkill** mine,
Your voice may **outhowl** mine, your rage may **outburn** mine,
You may **outwar** and **outvie** me in almost every way,
But my **outthrobbing** , **outfawning** , **outpitying** heart **outloves** and **outfeels** yours.
**Outing:** a short trip, or, **out:** to reveal
**Outsnore:** to snore more than another does
**Outsmoke:** to smoke more than another does
**Outjinx:** to jinx more than another does
**Outkill:** to kill more than another does
**Outhowl:** to howl longer or louder than another does
**Outburn:** to burn longer or stronger than another does
**Outwar:** to beat another in a war
**Outvie:** to outdo in competition
**Outthrob:** to throb more than another
**Outfawn:** to be more fawning than another
**Outpity:** to pity more than another does
**Outlove:** to love more than another does
**Outfeel:** to feel more than another does
##### ALSO
**Bedunce:** to cause to look stupid
**Beworm:** to infest with worms
**Deair:** to take air out of
**Depeople:** to have fewer people at
**Disbud:** to prune buds from
**Disrate:** to lower the rating of
**Enhalo:** to encircle or crown with a halo
**Enplane:** to board an airplane
**Geekdom:** the world of geeks
**Incommode:** to bother
**Jibingly:** acting in an immovable fashion
**Outbawl:** to cry longer or louder than another
**Outsmell:** to have a better capacity for sensing an odor ( **outsmelled** , **outsmelt** )
**Overfat:** having too much fat ( _there is growing attention being paid to the condition of being overfat without being overweight_ )
**Semihobo:** a person exhibiting some traits of a hobo
**Stewable:** capable of being made into a stew
**Thingness:** the materiality of something
**Tubbable:** suitable for washing in a bathtub
Some words containing prefixes have surprising definitions:
**Bediaper:** to decorate with a repeated diamond design
**Debride:** to surgically remove dead tissue
**Outgas:** to remove gas from
## "Frank, This Is Frank"
The _OSPD_ is home to more **franks** than the Coney Island Nathan's:
I **ween** the **weeny weenie** costs more **francs** to **frank** than the **weenier wiener** , but the **weeniest wienie** —a **wee wienerwurst** —is so **weensy** it's practically **hotdogging.**
**Frank:** to mark a piece of mail for free delivery (as an adj.: honest, direct)
**Ween:** to suppose
**Weeny:** tiny ( **weenier** , **weeniest** )
**Weenie:** a hotdog (also **wiener** , **wienie** )
**Franc:** a former currency of France
**Wee:** very little
**Wienerwurst:** a Vienna sausage or hotdog
**Weensy:** tiny ( **weensier** , **weensiest** )
**Hotdogging:** showing off
## Words Like They Sound
**Fremd:** strange, foreign
**Furfur:** dandruff
**Schwa:** a particular vowel sound
**Smaze:** a combination of smog and haze
**Zebrass:** the offspring of a zebra and an ass, a kind of **zebroid**
## "No Nonblondes—What's Going On?"
### The Best Non-Words
_Nonblonde_ is not a word, but **nonblack** (a person who is not black) and **nonwhite** (a person who is not white) are. Other notable words starting with non- include: **nongay** (but not nonhetero), **nonself** (foreign material in the body), **nonsked** (an airline that does not have scheduled flights), **nonbook** (a book of little literary merit), and of course there's the golden **nonword** —a word that has no meaning!
These are all much more fun (though of perhaps less help in Scrabble) than **nona** —a strange inclusion in the _OSPD_ , which defines the word as "a viral disease," most likely referring to hepatitis C, which is sometimes referred to as "non-A" hepatitis.
## Play Some Music
### Bands and Musicians That Work in Scrabble
There is a wonderful assortment of playable band or musician names, and the list goes way beyond The **Who** or **Sting**. Sadly, Beatles isn't allowed (although the **fab** four can be found in **beetles** ).
The **abba beck** s his sons, a couple of **yardbird** s, to give them some fatherly advice before they head to the bus station down the **_backstreet_**. "Boys," he says, feeling **_eurythmic_** , "the **bee gees** and the **garth brooks** no pruners who are **jaggers. Aha!** And if a **madonna** overdoes it with the **rem** in the E.R., you let me know! We don't want a **megadeath** on our hands." " **Wilco** ," they reply, "just remember to feed the **feist**."
#### Play It Again, Sam: Musical Notes
The diatonic musical scale: **do** , **re** , **mi** , **fa** , **sol** , **la** , **ti**.
* * *
**Do:** the first tone of the diatonic musical scale (pl. **dos** )
---
**Re:** the second tone of the diatonic musical scale
**Mi:** the third tone of the diatonic musical scale
**Fa:** the fourth tone of the diatonic musical scale
**Sol:** the fifth tone of the diatonic scale (also **so** , pl. **sos** )
**La:** the sixth tone of the diatonic scale
**Ti:** the seventh tone of the diatonic scale (also **si** , pl. **sis** )
* * *
**Abba:** father
**Beck:** to beckon
**Yardbird:** an army recruit
**Backstreet:** a minor street
**Eurythmic:** in a generally upbeat, positive mood (also **eurhythmic** )
**Bee:** a type of flying insect
**Gee:** to move to the right
**Garth:** a garden or yard
**Brook:** to tolerate or permit
**Jagger:** one who cuts unevenly
**Aha:** an expression of surprise, triumph, or conclusion
**Madonna:** a former title of respect for a woman in Italy
**Rem:** a dosage of ionizing radiation
**Megadeath:** a unit of measure equal to one million human casualties ( _the band's name is spelled Megadeth_ )
**Wilco:** used to express consent, particularly over radio transmissions, like "roger" ( _from "will comply"_ )
**Feist:** a small hunting dog
## Performance-Enhancing Drugs
### _Or_ This Is Your Board; This Is Your Board... on Drugs
Performance-enhancing drugs aren't a problem with word games, though as in other arenas, some competitive players adhere to regimens of pills and supplements. While the jury is out on just how much good most drugs do for a player's game, it's safe to say that simply knowing the names of a lot of drugs and drug-related words is a simpler—not to mention cheaper and more interesting—way to up your game. Here are some good ones to start with, just for a little taste.
**Bedrug:** to make sleepy through drugs
**Benny:** an amphetamine tablet (from Benzedrine)
**Bhang:** the hemp plant in India
**Bidi:** a cigarette of India (also **beedi** )
**Bogart:** to use without sharing ( _From Humphrey Bogart's habit of keeping a cigarette dangling in his mouth, even while speaking. Try it with a blank during your next game: it's healthier, and more intimidating._ )
**Charas:** hashish
**Cig:** a cigarette
**Dagga:** Indian marijuana
**Dex:** a sulfate used as a stimulant ( **dexie:** a tablet thereof)
**Doobie:** a marijuana cigarette
**Druggy:** affected by drugs ( **druggie:** a drug addict [pl. -s])
**Ganja:** cannabis used for smoking (also **ganjah** )
**Gasper:** a cigarette ( _chiefly British_ )
**Hashhead:** one who smokes a lot of hashish
**Hashish:** a cannabis-based narcotic
**Joypop:** to use habit-forming drugs
**Junkie:** one addicted to drugs ( **junky** _is defined as "worthless"_ )
**Kahuna:** a Hawaiian shaman
**Kef:** hemp smoked to euphoria (also **kif** , **kaif** , **keef** , **kief** ) _(from the Arabic_ kayf, _meaning well-being or pleasure)_
**Lude:** a tablet of methaqualone
**Maryjane:** marijuana
**Meth:** methamphetamine
**Narc:** an undercover drug agent (also **narco** ) ( **nark** is the verb)
**Pothead:** one who smokes marijuana
**Qat:** the leaf of a type of shrub, chewed or used in tea as a mild stimulant (also **kat** and **khat** )
**Roofie:** a tablet of a powerful benzodiazepine sedative
**Scag:** heroin (also **skag** )
**Spliff:** a marijuana cigarette
**Stoner:** one who pelts another with stones (also known as **lapidating** )
**Toke:** to take a drag of a marijuana cigarette ( _one who does this is a_ **toker** )
**Trank:** a drug that tranquilizes (also **tranq** )
**Trippy:** suggestive of the experience of being on psychedelic drugs
## DDDEEE and IMNNNUUUU Is All I Want to Say to You
**Deeded** is the only Scrabblish word that uses two letters three times each.
**_Unununium_** is the only word in Scrabblish that begins with the same pair of letters three times in a row. It's the former name of the element roentgenium—which, for a reason I cannot discern, is not playable. The name unununium came from the element's atomic number of 111. Before it was christened roentgenium (Rg), **_unununium_** had probably the coolest symbol on the periodic table: Uuu.
## A, B, C, D, E, F, Blank
There's no playable word using just the first seven letters of the alphabet, _a_ through _g_ (may I humbly propose the supremely elegant cafbdge?). But there are two playable (and common) eight-letter words that can be made using all of the letters _a_ through _f_ , plus two blanks. Can you think of them?
Answer: **boldface** and **feedback**
## Complete List of All 101 Acceptable Two-Letter Words
1. **Aa:** a type of stony, rough lava ( _There are 16 two-letter words starting with_ a _, so you have a 62-percent chance that any tile you put after an A will make a word._ )
2. **Ab:** an abdominal muscle
3. **Ad:** an advertisement
4. **Ae:** one (adj.)
5. **Ag:** agriculture
6. **Ah:** an exclamation
7. **Ai:** a three-toed sloth
8. **Al:** a type of East Indian tree
9. **Am:** the first-person singular present form of "to be"
10. **An:** indefinite article
11. **Ar:** the letter _r_
12. **As:** similar to
13. **At:** in the position of
14. **Aw:** an expression of protest or sadness
15. **Ax:** a sharp-edged tool
16. **Ay:** a vote in the affirmative
17. **Ba:** the soul in ancient Egyptian spirituality
18. **Be:** to exist
19. **Bi:** a bisexual
20. **Bo:** a pal
21. **By:** a side issue
22. **De:** of; from
23. **Do:** a tone of the scale
24. **Ed:** education
25. **Ef:** the letter _f_
26. **Eh:** used to express doubt
27. **El:** an elevated train
28. **Em:** the letter _m_
29. **En:** the letter _n_
30. **Er:** used to express hesitation
31. **Es:** the letter _s_
32. **Et:** a past tense of _eat_
33. **Ex:** the letter _x_
34. **Fa:** a tone of the diatonic scale
35. **Fe:** a Hebrew letter
36. **Go:** a Japanese board game (-s)
37. **Ha:** used to express surprise
38. **He:** a pronoun signifying a male
39. **Hi:** an expression of greeting
40. **Hm:** used to express consideration
41. **Ho:** used to express surprise
42. **Id:** the least censored part of the three-part psyche
43. **If:** a possibility
44. **In:** to harvest ( _yes, a verb in Scrabblish; takes_ -s, -ed, -ing)
45. **Is:** the third-person singular present form of "to be"
46. **It:** a neuter pronoun
47. **Jo:** a sweetheart
48. **Ka:** the spiritual self in ancient Egyptian spirituality
49. **Ki:** the vital life force in Chinese spirituality (also **qi** )
50. **La:** a tone of the diatonic scale
51. **Li:** a Chinese unit of distance
52. **Lo:** an expression of surprise
53. **Ma:** mother
54. **Me:** a singular objective pronoun
55. **Mi:** a tone of the diatonic scale (M _is the loosest consonant. In the first position, it'll pair up with every vowel, plus_ y _[ **ma** , **me** , **mi** , **mo** , **mu** , **my** ]. In the second position, it'll pair up with any vowel except_ i _[ **am** , **em** , **om** , **um** ]. It'll even pair up with hm. And if it can't find anyone else, it can even pair with itself: **mm**!_)
56. **Mm:** an expression of assent
57. **Mo:** a moment
58. **Mu:** a Greek letter
59. **My:** a first-person possessive adjective
60. **Na:** no; not
61. **Ne:** born with the name of
62. **No:** a negative answer
63. **Nu:** a Greek letter
64. **Od:** a hypothetical force
65. **Oe:** a whirlwind off the Faeroe Islands
66. **Of:** originating from
67. **Oh:** an exclamation of surprise
68. **Oi:** an expression of dismay (also **oy** )
69. **Om:** a sound used as a mantra
70. **On:** the batsman's side in cricket
71. **Op:** a style of abstract art dealing with optics
72. **Or:** the heraldic color gold ( _a noun, so pl._ -s)
73. **Os:** a bone
74. **Ow:** used to express pain
75. **Ox:** a clumsy person
76. **Oy:** an expression of dismay (also **oi** )
77. **Pa:** a father
78. **Pe:** a Hebrew letter
79. **Pi:** a Greek letter
80. **Qi:** the central life force in traditional Chinese culture (also **ki** )
81. **Re:** a tone of the diatonic scale
82. **Sh:** used to encourage silence
83. **Si:** a tone of the diatonic scale (also **ti** )
84. **So:** a tone of the diatonic scale (also **sol** )
85. **Ta:** an expression of thanks
86. **Ti:** a tone of the diatonic scale
87. **To:** in the direction of
88. **Uh:** used to express hesitation
89. **Um:** used to express hesitation
90. **Un:** one
91. **Up:** to raise (-s, -ped, -ping)
92. **Us:** a plural pronoun
93. **Ut:** the musical tone C in the French solmization system, now replaced by **do**
94. **We:** a first-person plural pronoun
95. **Wo:** woe
96. **Xi:** a Greek letter
97. **Xu:** a monetary unit of Vietnam equal to one-hundredth of a dong (also **sau** , pl. **xu** )
98. **Ya:** you
99. **Ye:** you
100. **Yo:** an expression used to attract attention
101. **Za:** a pizza
## The Land of Za
The word **za** (as in, "I'll have a slice of za" or "Let's go for some za") became popular on Southern California college campuses in the 1970s. These days there's a fairly well-known pizza place in San Francisco called Za's, but the word has never quite caught on on the East Coast, where—outside of Scrabble—it's just as likely to hear a slice of toast referred to as "toe."
## The Baddest Definition in the _OSPD_
**Bad** is probably one of the best entries in the _OSPD._ It has three definitions:
**Bad:** (adj. **worse** , **worst** ) not good in any way
**Bad:** (n. -s) something that is bad
**Bad:** (adj. **badder** , **baddest** ) very good
So on second thought, it's probably one of the **baddest** entries.
Badass isn't an acceptable word, but one may play **bagass** , an alternative spelling of **bagasse** , crushed sugarcane.
## Words Gone Wild
### The Surprising Side of Many Boring Nouns
Definitions in the _OSPD_ (as well as this book) are often chosen to highlight uncommon usage. The idea is that words one would generally think of in one way ( **clock** : an instrument that displays the time) may also be used differently (to time with a stopwatch). Providing the definitions of these words as verbs signals their capacity to take on suffixes like - _ed_ and - _ing_ , in addition to the - _s_ that can be tagged onto most nouns and verbs.
While the _OSPD_ contains oodles of nouns that one could readily conceive of as verbs ( **fish:** to try to catch fish; **flower:** to bloom), it also includes many stranger entries:
**Beetle:** to stick out or project
**Belly:** to swell outward
**Bib:** to drink alcohol
**Brute:** to shape a diamond by rubbing it with another diamond
**Candle:** to examine eggs for freshness in front of a light
**Cat:** to hoist an anchor to the cathead
**Cheese:** to stop
**Chess:** to weed
**Coke:** to distill coal in order to create carbon fuel
**Crepe:** to frizz or spread out hair, particularly fake hair used by actors
**Disk:** to break up land, as with a hoe or plow
**Fig:** to adorn or dress up
**Fruit:** to come to bear fruit
**Guy:** to make fun of _(after the British villain/conspirator Guy Fawkes)_
**Hip:** to construct a roof
**Hull:** to remove a dry shell or covering from a fruit, nut, or seed
**In:** to harvest
**Iris:** to give the form or appearance of a rainbow to ( _from Iris, the divine messenger in Greek mythology, who took the form of the rainbow and acted as messenger between the gods and humans_ )
**Kite:** to use a check to obtain money fraudulently, to prey on another _(after the kite, a bird of prey similar to an eagle)_
**Lamp:** to look at or observe
**Lens:** to film something
**Low:** to make a sound like that of cattle
**May:** to gather flowers in the springtime
**Maze:** to bewilder
**Necklace:** to lynch by placing a tire around the neck and setting it on fire
**Pancake:** to land an airplane by having it drop vertically for several feet
**Peach:** to inform against another ( _as in_ impeach __)
**Pink:** to cut a saw-toothed edge into cloth
**Quail:** to cower
**Rug:** to tear roughly
**Shark:** to live by trickery
**Style:** to name or make in a fashion
**Super:** to reinforce a book's spine with thin cotton mesh
**Toilet:** to wash, dress, and attend to one's appearance ( _from the French_ toilette _, which first referred to a thin cloth doily that covered the clothes during shaving or hairdressing_ )
**Tomcat:** to act in a sexually promiscuous way—used of a male
**Weird:** (n.) one's destiny (Scottish); (v.) to cause to feel odd; (adj.) mysteriously strange ( _from the Old English_ wyrd _, meaning "fate"; as **dree** means "to endure," one can be told to "Dree your weird," or endure your fate._ )
And then there are words you might not expect to see as nouns:
The **bigs** bring **go** to the **stank**.
**Big:** one of great importance
**Go:** a Japanese board game
**Stank:** a pond
Ironically, **bingo** , which is often used as a verb by Scrabble players, as in "She bingoed three times **agin** (against) me!" is not listed as a verb in the _OSPD_.
## Little Words for Lazy People
They say that sloth is one of the seven deadly sins. Well, here's a short list of short words to make your game a little deadlier, too. These are particularly well suited for play while still in bed at noon on a Sunday.
**Jauk:** to dawdle
**Laze:** to lounge idly
**Loll:** to lounge idly
**Moon:** to spend time idly dreaming ( _usually used with "away"_ )
**Sweer:** lazy or disinclined to act
**Toit:** to move about lazily
**Veg:** to spend time idly
## Please Play with the Animals
**Aoudad:** a type of wild sheep also known as Barbary sheep
**Aasvogel:** a vulture ( _from the Afrikaans_ aas, _meaning "carrion," and_ vogel, _meaning "bird"_ )
**Biddy:** a hen
**Bombyx:** a silkworm
**Booklice:** insects—though technically not lice—that damage books
**Bossy:** a calf or cow, or (adj.) domineering
**Cero:** a type of mackerel
**Cimex:** a bedbug
**Coypu:** a large river rat, also known as a **nutria**
**Cuscus:** a type of possum
**Dabchick:** a small water bird also known as the little grebe ( _Dabchick is the only word in the_ OSPD _that contains the letter combination_ abc _._ )
**Dhole:** a type of wild dog found in India
**Dikdik:** a type of small antelope
**Drongo:** a tropical bird ( _The bird is known for its unusual behavior, which may be why in Australia_ drongo _is an insult something like "idiot."_ )
**Ebbet:** a green newt also known as a spring peeper
**Egger:** a kind of moth known for its tentlike webs
**Firebrat:** a small, wingless insect similar to the silverfish
**Ged:** a pike
**Godwit:** a large wading bird
**Hogg:** a young sheep, from about nine to eighteen months, until its second tooth comes in
**Hyrax:** a small harelike mammal
**Kinkajou:** an arboreal mammal also known as the honey bear, occasionally kept as a pet
**Numbat:** a small, endangered Australian marsupial
**Okapi:** an African mammal resembling the zebra, but more closely related to the giraffe
**Oldsquaw:** the long-tailed duck ( _Long-tailed duck is the preferred name and growing in usage on account of the offensive nature of_ squaw _._ )
**Oldwife:** a marine fish found in the Indian and Pacific oceans ( _Its name is derived from the sound it makes grinding its teeth when caught._ )
**Opah:** a marine fish also known as a **moonfish** , **cusk** , or **torsk**
**Oquassa:** a small freshwater trout
**Oxpecker:** a starling found in Africa
**Pard:** a leopard ( **pardine:** pertaining to a leopard) ( _The word_ leopard _is actually a Greek compound of_ leon _for lion and_ pardos _for male panther, as leopards were thought to be part lion and part panther._ )
**Pekepoo:** a dog that is a cross between a Pekingese and a poodle (also **peekapoo** )
**Pika:** a small mammal like a rabbit
**Platy:** a small tropical fish, also known as platyfish (also [adj.] split into thin, flat pieces)
**Pogy:** a marine fish of the herring family
**Pollywog:** a tadpole (also **polliwog** )
**Potto:** a lemur of tropical Africa—sometimes called a softly-softly
**Poyou:** an armadillo found in Argentina
**Puli:** a type of sheepdog (-lis, -lik), (also pl. of **pul:** a coin of Afghanistan [-s, -i])
**Punkie:** a biting gnat
**Quinnat:** a chinook salmon
**Rhebok:** a species of African antelope ( _The Afrikaans/Dutch spelling is_ Reebok _, and was chosen as the name of the sneaker company from a South African dictionary won in a race by one of the grandsons of the company's founder._ )
**Sajou:** a long-tailed monkey also known as a capuchin (also **sapajou** )
**Skua:** any of several predatory seabirds
**Spitz:** a type of dog having a heavy coat, such as the Pomeranian
**Squab:** a young or baby pigeon
**Squilla:** any of various burrowing crustaceans with movable, stalked eyes
**Starnose:** a mole with a large, twenty-two-tentacled, starlike nose
**Tahr:** an Asian goatlike mammal
**Teiid:** a tropical American lizard also known as the whiptail
**Veery:** a small songbird
**Vizsla:** a Hungarian dog known for its blend of hunting skills and domesticity
**Volvox:** a green algae
**Wapiti:** elk
**Whaup:** a European bird
**Whippet:** a small, swift dog similar to the greyhound
**Xerus:** an African ground squirrel
**Yapok:** the water opossum (also **yapock** )
**Zander:** a freshwater fish like perch
**Zoril:** a small African weasel (also **zorilla** , **zorrille** , **zorillo** )
**Zyzzyva:** a tropical weevil ( _Though the penultimate word in the_ OSPD _,_ **zyzzyva** _is often the last word listed in dictionaries. As such, it's often used metaphorically to mean the last word on a subject. It has also been the answer to clues in at least three_ New York Times _crossword puzzles—unsurprisingly on two Fridays and a Saturday, days of notoriously difficult puzzles._ )
## Say It Ain't Sos
### Bad Grammar, Good Words
**Barefit:** without socks or shoes
**Brung:** a past tense of brought
**Dandriff:** dandruff
**Deers:** plural of deer
**Drownd:** to drown ( _This is present tense; past tense of_ **drownd** _is_ **drownded** _. Remember Paulie's reaction in_ Rocky IV _when little Rocky Jr. sprays him with_ **whipt** _cream?_ )
**Git:** to get
**Irregardless:** regardless
**Purty:** pretty
**Rin:** to run
**Strucken:** struck
**Wimmin** and **Womyn:** women ( _erroneously left out of some editions of_ OSPD _, but legal_ )
#### PLAYING WITHOUT PIZZAZZ
Although it was accidentally left out of some initial printings, **pizzazz** is now included in the _OSPD,_ along with **pizzazzy** and **pizzazzes** , though with their four **izzards** (the letter _z_ ), none of the words is playable with Scrabble's single Z and two blanks.
However, they are all possible in Super Scrabble, which enlarges the Scrabble board size from 225 squares to 441 and doubles the number of tiles from 100 to 200. Though the tile distribution in Super Scrabble does not simply double the distribution of classic Scrabble, it does provide two Zs and two blanks, making the three words possible.
**_Knickknack_** is another interesting case. With four Ks, it too is impossible to play in standard Scrabble. Its length (more than eight letters) necessarily keeps it out of the _OSPD_. It was erroneously excluded from the _OWL_ but is playable—if only you can manage it.
**Kickback** would require both blanks, the K, and both Cs. But if you're set on playing a three-K word at some point, try to serve up the marvelous **krumkake** , the thin Norwegian waffle cookie popular in the American Midwest.
#### CATCH SOME Zs
Besides **pizzazz** , **pizzazzes** , and **pizzazzy** , there are a surprising number of words—ten!—that require the use of the Z plus both blanks used as Zs. They say there's a season for everything; I'm not sure when the season to play a Z with two blanks is, except, I suppose, if the opportunity arises to play **zzz** or **zyzzyva** , no matter what the situation, one has to take it, just 'cause, **coz** (cousin).
**Bezazz:** pizazz
**Pazazz:** pizazz
**Pizazz:** the quality of being exciting (-es)
**Pizazzy:** having pizazz
**Pizzaz:** pizazz
**Zizzle:** to sizzle (-d, -s)
**Zyzzyva:** a tropical weevil
**Zzz:** used to suggest the sound of sleeping
There are no words that use more than one Q.
Words that take two Xs are **maxixe** (a Brazilian dance, pl. -s) and **paxwax** (a nuchal ligament of a quadruped, pl. -es). (Not to be confused with the old way to purchase baseball cards, wax packs.)
## Bathroom Breaks
### Making It "Jake" to Use the "Jakes"
Competitive Scrabble games typically afford each player a total of 25 minutes of play. One problem with timed play is, as Annie Warbucks famously put it, "When you gotta go, you gotta go." In today's tournaments, a player can get up to use the **pissoir** (urinal) in the **lav** (bathroom) or make a **caca** (excrement) in the **john** (toilet), and the clock ticks away from that allotted 25 minutes during the player's absence at the **potty** (toilet). Not so in competitive Scrabble's early days.
In New York chess clubs in the 1970s, instead of 25 total minutes to use over the course of the game, players had 3 minutes to make each move. If a player failed to play in 3 minutes, the turn was forfeited—and the player was you-know-what out of luck. Accordingly, if a player left the board to make a **doodoo** (feces), the clock ticked away. An extended absence from the table could cost a player several moves in a row, and during that time the opponent could go ahead and play after each forfeiture. A helpful rule of thumb for tournament players: if you must **weewee** (to urinate), do it in a **wee** (short period of time).
Three other useful five-letter words from the **privy** (outhouse) one should be privy to are **biffy** (a toilet), **gleet** (to discharge mucus from the urethra, often associated with gonorrhea), and **gripy** (causing sharp pains in the bowels).
## Tan Tiles, Silver Screen
The 2004 documentary _Word Wars_ is an entertaining and evocative look at professional Scrabble playing, but plenty of other movies also offer important Scrabble tips. The following titles from the **cinematiques** (film archives where old films are screened) are all playable: **_terminator_** (one who ends something), **_transformers_** (instruments for changing an electrical current), **_jurassic_** [ **park** ] (very old), **moonstruck** (overcome with romantic feelings), **_synecdoche_** , **goldeneye** (a species of duck), **dumbo** (a stupid person), and **fantasia** (a type of free-form musical composition).
Shrek isn't playable, but **schrick** (a fright) is. And there was that Judy Holliday/Jack Lemmon feature with Kim Novak called _Phffft!_ If only it were **Pfft**! or **Pht**! or **Phpht**! it would've worked.
## The Three Saddest Words in Scrabble...
**Climaxless:** devoid of a climax
**Poetless:** devoid of a poet
**Wineless:** devoid of wine
Depending on your tastes, you may also decry the states of being **weedless** and **pipeless** , though the citizens of Margaritaville might lament being **limeless** even more.
## ... And Some of the Funnest
"Eating that **funest fugu** was **funner** than the **funnest funfest** ; it was like a **funplex** in a **funfair** ," said the **fubsy funnyman.**
**Funest:** portending evil or death
**Fugu:** the puffer fish, known for its toxicity to humans if not properly prepared ( _This is the Japanese name for the fish._ )
**Fun:** amusing ( **funner** , **funnest** ), also to act in a playful manner ( **funs** , **funned** , **funning** )
**Funfest:** a party
**Funplex:** a building in which to play games
**Funfair:** an amusement park
**Fubsy:** chubby and squat
**Funnyman:** a comedian
## Cheers!
### Ways to Drink, What to Drink, and What You Are When You've Drunk Too Much
##### WAYS TO DRINK
To pour: **skink**
To drink a little: **bib** , **dram**
To drink to the health of someone: **wassail**
To drink too much: **tope** , **swizzle** , **chugalug**
##### WHAT TO DRINK
Wines and champagnes: **zin** , **brut** , **fino** , **cuvee** , **vino** , **glogg** , **macon** , **mirin** , **negus** , **perry** , **rioja** , **soave** , **tokay**
Cocktails and Spirits: **kir** , **arak (arrack)** , **grog** , **nogg** , **ouzo** , **raki** , **sloe** , **hooch (hootch)** , **pisco** , **tafia** (a cheap rum)
Beers: **kvass (kvas** , **quass)** (a Russian beer made of fermented rye bread)
Other: **kumys (koumis** , **koumiss** , **koumyss** , **kumiss)** ( _However one spells_ **kumys** _, it's strange to most Americans' taste, as this Mongolian beverage consists of fermented horse milk._ )
#### "To Good Letters!": Toasts in Scrabble
**Lehayim:** a traditional Jewish toast (-s) (also **lechayim** )
**Prosit:** an expression used in toasting (also **prost** ) ( _from the Latin_ prosit _, meaning "may it be good"_ )
**Slainte:** used as a toast in Ireland, meaning "to health"
**Skoal:** to toast one's health in Scandinavia ( _referring to the drinking vessel—literally "bowl"_ )
##### WHEN YOU'VE DRUNK TOO MUCH
Ways to be drunk: **fou** , **blotto** , **beery** (that is, on booze), **boozy** , **stinko** , **tiddly** (that is, just a little), **sozzled** , **squiffed (squiffy)** , **swacked** , having **jimjams** (violent delirium)
General drunkards: **sot** , **alky** , **wino** , **dipso** , **shicker** , **boozer** , **tosspot**
Specialized drunkards: **ginny** (drunk on gin), **rubby** (one who gets drunk on rubbing alcohol), **stewbum** (a drunken bum)
## The Phiz and the Bod
* * *
**Auris:** the ear (pl. **aures** )
---
**Bazoo:** the mouth
**Beezer:** the nose
**Bod:** body
**Buccal:** regarding the cheek
**Coxa:** the hip (pl. **coxae** , adj. **coxal** )
**Craw:** the stomach
**Derm:** the skin (also **derma** )
**Dorsum:** the back
**Eyne:** a plural of eye (also **eyen** )
**Glim:** a light or lamp, or an eye
**Glossa:** the tongue
**Haffet:** the cheekbone and temple
**Jole:** the jowl
**Jugal:** regarding the cheekbone
**Kyte:** the stomach
**Lingua:** the tongue
**Loof:** the palm of the hand
**Medius:** the middle finger (pl. **medii** )
**Nates:** (n./pl.) the buttocks
**Neif:** the fist or hand (also **nieve** )
**Nevus:** a birthmark
**Nucha:** the nape of the neck
**Vomer:** a bone in the skull
**Orad:** toward the mouth
**Petto:** the breast
**Phiz:** the face ( _a shortening of_ **physiognomy** )
**Risus:** a grin
**Sudor:** sweat
**Wame:** the belly
**Wizzen:** the throat (also **weasand** )
* * *
## And Bingo Was His Name-o
A _bingo_ is the North American term for playing all seven letters on one rack in one turn. A bonus of 50 points is awarded for a bingo. (In the United Kingdom, playing all seven letters is more often referred to as a "bonus.")
There are about 50,000 possible seven-letter bingos in Scrabble, plus all the words longer than seven letters that can be made using tiles already on the board. While it would be great to know them all, don't feel too bad if you don't think you're up to the task. U.S., Canadian, and World Scrabble champion Joel Wapnick owns up to (only!) having memorized about 16,000 of them.
Let's **lour** (lower) our sights a little, and have a look at some of the more useful and interesting bingos out there and how to play them.
#### SCORING BIG BY PLAYING THE ODDS
Of the 3,199,724 different combinations of seven letters you can draw for your first rack, the good news is that you have only about a 1 in 800,000 shot at getting stuck with seven of the same letter. (It would have to be all As, Es, Is, or Os, as those are the only tiles of which there are at least seven.)
In _Word Freak_ , Stefan Fatsis reports that the chances of choosing letters that could form a playable bingo with that first draw are 12.63 percent, or just over 1 in 8. The odds of spotting that bingo, of course, are another story.
The least likely opening rack one might draw is BBJKQXZ, where the B could be replaced by any tile of which there are two in the bag (B, C, F, H, M, P, V, W, Y, and the blanks). But the odds come in at around 16 billion to 1, so you may want to hold on to this as interesting trivia rather than hold your breath in hopes of not drawing such a crummy **septet** (group of seven).
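The rack counting above is easy to check. The minimal Python sketch below (the `count_multisets` helper is illustrative, not from any Scrabble reference) tallies distinct seven-tile racks by reading off the coefficient of x⁷ in the generating function ∏ᵢ(1 + x + ⋯ + x^cᵢ) over the standard 100-tile distribution. Four of those racks are seven of a kind (all As, Es, Is, or Os), and 4 out of 3,199,724 is where the "about 1 in 800,000" figure comes from.

```python
from math import comb  # not needed by the helper, handy for cross-checks

# Official Scrabble tile counts (26 letters plus 2 blanks, 100 tiles total).
TILES = {
    'A': 9, 'B': 2, 'C': 2, 'D': 4, 'E': 12, 'F': 2, 'G': 3, 'H': 2,
    'I': 9, 'J': 1, 'K': 1, 'L': 4, 'M': 2, 'N': 6, 'O': 8, 'P': 2,
    'Q': 1, 'R': 6, 'S': 4, 'T': 6, 'U': 4, 'V': 2, 'W': 2, 'X': 1,
    'Y': 2, 'Z': 1, '?': 2,   # '?' is the blank
}

RACK = 7

def count_multisets(tiles, size):
    """Coefficient of x**size in prod_i (1 + x + ... + x**count_i)."""
    poly = [1]
    for count in tiles.values():
        new = [0] * min(len(poly) + count, size + 1)
        for i, coeff in enumerate(poly):
            for k in range(min(count, size - i) + 1):
                new[i + k] += coeff
        poly = new
    return poly[size]

total = count_multisets(TILES, RACK)                     # 3199724 distinct racks
same_letter = sum(1 for c in TILES.values() if c >= RACK)  # 4: A, E, I, O
print(total, same_letter, round(total / same_letter))
```

Note that this counts distinct racks, not equally likely draws: racks built from scarce tiles are rarer in practice, so the true draw probability of a seven-of-a-kind rack is considerably smaller than 1 in 800,000.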
#### THE MOST LIKELY DRAW
The most likely seven letters one might draw to open the game are AEINORT. At first glance, this may seem like meaningless trivia. The odds aren't great (at about 1 in 9,530), and, worse still, the rack doesn't offer any seven-letter bingos. But if you draw these letters and are set to play first, the best thing to do is actually to pass. As it happens, if your opponent plays any of twelve different letters (A, B, C, D, H, L, N, P, R, S, T, or Z), you've got a bingo waiting for you. If he plays a T, you have the chance to play **tentoria** (the internal skeletal form of an insect's head— _what an awesome word!_ ). Here's a list of other ways you can bingo by passing your first turn, if any of these letters are played while you are holding AEINORT:
A **Aeration:** the act of supplying with air
B **Baritone:** a deep, male voice
**Obtainer:** one who takes possession
**Reobtain:** to retake possession
**Taborine:** a **taboret** (a small drum)
C **Actioner:** an action film
**Anoretic:** an anorexic
**Creation:** something made
**Reaction:** a response
D **Arointed:** past tense of **aroint:** to order away ( _as from_ Macbeth _: "Aroint thee, witch!"_ )
**Ordinate:** the vertical coordinate on a graph
**Rationed:** past tense of **ration** : to allow or distribute a fixed amount
H **Antihero:** an ignoble protagonist
L **Oriental:** "an inhabitant of an eastern country," according to the _OSPD_
**Relation:** a connection between multiple things
N **Anointer:** one who ritually applies a substance
**Reanoint:** to reapply a substance as part of a ritual
P **Atropine:** a substance used to dilate the pupil
R **Anterior:** toward the front
S **Notaries:** plural of **notary:** a type of public official who deals with legal documents
**Senorita:** an unmarried Spanish woman or girl
T **Tentoria:** (n./pl.) the internal skeletal form of an insect's head
Z **Notarize:** to have a notary certify a document
#### MAKE EACH OF YOUR RETINAE A TRAINEE FOR THESE BINGOS
The most likely seven letters one can draw that create a seven-letter bingo to start the game are AEEINRT, which can create **retinae** (the plural of **retina** , **retinas** is another **pluralization** ) and **trainee**.
Odds are still slim (just 1 in 14,000), but the chances of seeing this combination sometime during a game, especially as players gain control over their racks and their ability to get rid of unhelpful letters by dumping them or trading them in, increase dramatically.
#### FOUR WORDS IT NEVER MAKES SENSE TO PLAY
One word one should (almost) never ever play is **tisane** (an herbal tea, also **ptisan** ). Say it three times: **tisane tisane tisane**. It's insane to play **tisane**. Remember it. Now forget about using it.
Likewise for its anagrams— **seitan** (wheat gluten), **tenias** (tapeworms), and **tineas** (fungal skin diseases like ringworm)—don't play these either. (Many players remember this stem as satine or anties, but considering how gross **tenias** and **tineas** are, maybe it's those words that are easier to remember not to play.)
The reason is that **tisane** is the most malleable six-letter combination in Scrabble. It's capable of creating bingos with every letter except _j_ , _q_ , and _y_ , for a total of sixty-nine different words. Except in the most extreme circumstances (there are no more letters in the bag, or you feel you must block an opportunity for your opponent to play a high-scoring bingo, or there's absolutely no place on the board that could create a possibility for you to place a bingo, etc.), there's no reason to play those six letters.
Do your future opponents a disfavor and at least become passingly familiar with this list. If you see the letters TISANE on your rack, start shaking your head until the bingo falls out of your ear (or a tenia does). Then stick it on the board and collect your 50 bonus points. Then find me and buy me a beer.
TISANE plus:
A **Entasia:** a muscle spasm
**Taenias:** plural of **taenia:** a headband or ribbon worn in ancient Greece ( _a genus of tapeworms takes its name from it_ )
B **Banties:** plural of **banty:** a small type of poultry (also **bantam** , _which gives its name to **bantamweight** fighters_)
**Basinet:** a medieval open-faced helmet
C **Acetins:** plural of **acetin:** a liquid acetate
**Cineast:** a film enthusiast
D **Destain:** to eliminate a stain
**Detains:** plural of **detain:** to prevent from leaving
**Instead:** in place of
**Nidates:** present tense of **nidate:** to become implanted in the uterus
**Sainted:** past tense of **saint:** to canonize into sainthood
**Stained:** past tense of **stain:** to soil with a lasting mark
E **Etesian:** a northerly Mediterranean summer wind
F **Fainest:** superlative form of **fain:** happily ( _as in Juliet to Romeo: "Fain would I dwell on form. Fain, fain deny / What I have spoke. But farewell compliment! / Dost thou love me?"_ )
G **Easting:** a distance traveled eastwardly
**Eatings:** plural of **eating**
**Ingates:** plural of **ingate** : an entrance, particularly for molten metal to enter a mold
**Ingesta:** ingested material, particularly orally
**Seating:** seat cushion upholstery
**Teasing:** present participle of **tease:** to mock, annoy, or toy with
H **Sheitan:** the devil or an evil spirit in Islam (also **shaitan** ) ( _similar in origin and meaning to Satan_ )
**Sthenia:** unusually great strength or energy ( _from the Greek word from strength,_ sthenos _, from which the English term is also derived_ )
I **Isatine:** a chemical compound used to make a synthetic indigo dye (also **isatin** )
K **Intakes:** plural of **intake:** the process of taking in or absorbing
L **Elastin:** a protein found in connective tissue
**Entails:** present tense of **entail:** to limit an inheritance to specific heirs
**Nailset:** a tool in home construction for driving nails
**Salient:** a part of a battlefield front that projects into enemy territory
**Saltine:** a salted cracker, also known as a soda cracker
**Slainte:** used as a toast in Ireland, meaning "to health"
**Tenails:** plural of **tenail:** an exterior defense standing in front of a fortress (also **tenaille** )
M **Etamins:** plural of **etamin:** a soft cotton fabric (also **etamine** )
**Inmates:** plural of **inmate:** a prisoner in a jail
**Tameins:** plural of **tamein:** a skirt worn by Burmese women
N **Inanest:** superlative of **inane:** silly, devoid of reason
**Stanine:** one of nine sets into which standardized test scores are divided
O **Atonies:** plural of **atony:** a lack of muscular strength
P **Panties:** plural of **pantie:** a woman's or girl's undergarment (also **panty** )
**Patines:** present tense of **patine:** to apply a patina
**Sapient:** a wise person, a sage ( _as in_ King Lear _, when he addresses the Fool: "Thou sapient, sir, sit here!"_ )
**Spinate:** having thorns or spines
R This seven-letter combination is the most prodigious in Scrabble, creating 9 seven-letter bingos, as well as being the basis for 78 eight-letter bingos and about 250 nine-letter bingos.
**Anestri:** plural of **anestrus:** a period of sexual dormancy in cyclically breeding mammals
**Antsier:** comparative of **antsy:** nervous, restless
**Nastier:** comparative of **nasty:** mean, wicked
**Ratines:** plural of **ratine:** a rough fabric that is loosely woven
**Retains:** present tense of **retain:** to continue to possess
**Retinas:** plural of **retina:** a light-sensitive membrane of the eye
**Retsina:** a Greek wine
**Stainer:** one who stains
**Stearin:** a triglyceride used to make candles and soap
S **Entasis:** a curve applied to architectural surfaces, particularly columns
**Nasties:** plural of **nasty:** something nasty
**Seitans:** plural of **seitan:** wheat gluten
**Sestina:** a particular type of 39-line poem composed of six **sextains** (a six-line stanza) and one **tercet** (a three-line stanza) in which the last words of the first sextain are used in a particular order to end every line for the length of the poem
**Tansies:** plural of **tansy:** an herb with a yellow flower, also known as **mugwort**
**Tisanes:** plural of **tisane:** an herbal tea (also **ptisan** ) ( _You have the author's blessing to use **tisanes** in the plural._ )
T **Instate:** to put into office
**Satinet:** a thin satin fabric
U **Aunties:** plural of **aunty:** an aunt ( **aunties** _is one of the cutest bingos out there_ )
**Sinuate:** to curve back and forth, to wind
V **Naivest:** superlative of **naive:** credulous, lacking worldly experience
**Natives:** plural of **native:** an inhabitant by birth of a place
**Vainest:** superlative of **vain:** overly prideful of oneself
W **Tawnies:** plural of **tawny:** a yellowish-brown color
**Waniest:** superlative of **wany:** visibly decreasing in size (also **waney** )
X **Antisex:** antagonistic toward sexual activity
**Sextain:** a six-line stanza
Z **Zeatins:** plural of **zeatin:** a plant hormone found in corn and coconut milk
**Zaniest:** superlative of **zany:** wacky
The other incredible six-tile combo to be on the lookout for is AEINRT, more easily remembered as RETAIN. They're common letters that, when combined with a blank or one of the following letters, are capable of producing more than fifty bingos (including the nine in the previous list that make up **tisane** +r). It's worthwhile being able to recognize them.
RETAIN plus:
C **Ceratin** : a fibrous protein found in hair, nails, claws, hooves, and horns (also **keratin** )
**Certain:** confident
**Creatin:** an organic compound that supplies energy to cells (also **creatine** )
D **Antired** : anticommunist
**Detrain:** to disembark from a railroad train
**Trained:** past tense of **train:** to instruct
E **Arenite:** a sedimentary rock
**Retinae:** plural of **retina:** a light-sensitive membrane of the eye (also **retinas** )
**Trainee:** one who is being trained
F **Fainter:** one who faints, or a comparative of **faint:** weak, languid
G **Granite:** a common igneous rock
**Gratine:** encrusted ( **gratin** _is a type of food crust,_ **gratine** _is the adjective,_ **gratinee** _is the verb_ )
**Ingrate:** one who is not appreciative or thankful
**Tangier:** comparative of **tangy:** having a strong sour or citrus taste or aftertaste
**Tearing:** present tense of **tear:** to emit tears from the eye; or present tense of **tear:** to rip
H **Hairnet:** a net worn on the head to confine hair
**Inearth:** to bury or inter
I **Inertia:** resistance to a change in motion or stillness
K **Keratin:** a fibrous protein found in hair, nails, claws, hooves, and horns (also **ceratin** )
L **Latrine:** a communal toilet
**Ratline:** a horizontal rope used as part of **ratlines** , a set of ropes tied together on a ship to create a large rope ladder used to adjust sails and as a lookout ( _pronounced "rattlin"_ )
**Reliant:** exhibiting reliance
**Retinal:** a pigment found in the retina (also **retinene** )
**Trenail:** a wooden peg used to fasten timber in shipbuilding (also **treenail** ) ( _As treenails are made of wood, they expand when exposed to moisture, creating a secure hold._ )
M **Minaret:** a tall, conical spire that is typically attached to a mosque
**Raiment:** clothes
N **Entrain** : to board a train
P **Painter:** one who paints
**Pertain:** to refer or relate to
**Repaint:** to paint again
R **Retrain:** to train again
**Terrain:** geographical landscape
**Trainer:** one who trains
S **Anestri:** plural of **anestrus:** a period of sexual dormancy in cyclically breeding mammals
**Antsier:** comparative of **antsy:** nervous, restless
**Nastier:** comparative of **nasty:** mean, wicked
**Ratines:** plural of **ratine:** a rough fabric that is loosely woven
**Retains:** present tense of **retain:** to continue to possess
**Retinas:** plural of **retina:** a light-sensitive membrane of the eye (also **retinae** )
**Retsina:** a Greek wine
**Stainer:** one who stains
**Stearin:** a triglyceride used to make candles and soap
T **Intreat:** to beseech (also **entreat** )
**Iterant:** repeating
**Nattier:** comparative of **natty:** sharply attired
**Nitrate:** to apply nitric acid
**Tertian:** a severe, recurrent two-day fever typical of malaria
U **Taurine:** an organic acid found in animal tissues
**Uranite:** a radioactive, uranium-rich mineral (also **uraninite** )
**Urinate:** to discharge urine
W **Tawnier:** comparative of **tawny:** yellow-brown in color
**Tinware:** an article, such as housewares, made of tinplate
# PART 3: The Lexicon Contextualized: Speaking Scrabblish
Now it's time to have a little fun. It's one thing to look over lists of related playable words, but words without sentences are notes without a song. To indulge the ulna (bone often thought of as the funny bone), the following example sentences and tongue twisters string together some of the best bits of the Scrabble lexicon. It's worth noting that just as Scrabblish is made of a lot of somewhat silly words, the following sentences are presented primarily for fun. While efforts have been made to use each word in as near to accurate a way as possible, these examples are not intended to display model usage, but rather to suggest a sense of the words' meanings.
* * *
# A
The **aa** formed the **maar** that you can see through the **haar**. But the natives, who **craal** their **baaing** sheep and fish from a **praam** out at the **haaf** , credit a **baal**.
**Aa:** a type of stony, rough lava
**Maar:** a volcanic crater
**Haar:** a fog
**Craal:** to pen in an animal (also **kraal** )
**Baa:** to make a bleating sound
**Praam:** a flat-bottomed boat (also **pram** )
**Haaf:** a deep-sea fishing ground found far offshore
**Baal:** any of several Canaanite or Phoenician gods
**Abed** in the **abri** , **abba** 's **aba** , like an **alb** , hid his **ab** s.
**Abed:** lying in bed
**Abri:** a bomb shelter
**Abba:** a father
**Aba:** a traditional sleeveless garment worn by Arab men
**Alb:** a full-length white vestment
**Ab:** an abdominal muscle
**Ae blae kae** with **twae alae hae nae wae** if it **lek** s well.
**Ae:** one (adj.)
**Blae:** bluish-black in color
**Kae:** a bird similar to the crow, also known as the jackdaw or grackle
**Twae:** two
**Alae:** plural of **ala:** a wing or similar appendage
**Hae:** to have ( **haed** , **haen** , **haeing** , **haes** )
**Nae:** no or not
**Wae:** woe
**Lek:** a currency of Albania (pl. **leks** or **leke** or **leku** ) or (v.) to present a competitive mating display _(as, for example, the grackle does)_
Nan saw **lac** on the insect's **ala** she found in the **nala.** She looked it up in the **ana** as she ate **nan**.
**Lac:** a red resin secreted by some insects
**Ala:** a wing (pl. **alae** )
**Nala:** a steep narrow valley or ravine (also **nullah** )
**Ana:** a collection of miscellany about a specific topic
**Nan:** a leavened flatbread (also **naan** )
The **aga** was stricken with **ague** , so he threw his **aggie agly**.
**Aga:** a Turkish military commander (also **agha** )
**Ague:** a fever associated with malaria
**Aggie:** a type of marble
**Agly:** awry (also **agley** , **aglee** )
Ali, who was **anile** , strolled down the **allee** past an **aalii** , **anil** , **aal** , and **al**.
**Anile:** resembling an old and/or senile woman
**Allee:** a tree-lined path
**Aalii:** a type of tropical tree
**Anil:** a type of West Indian plant
**Aal:** a type of East Indian shrub
**Al:** a type of East Indian tree
* * *
# B
If I told you once, I told you **bis** —don't build a **burg** on a **berg**.
**Bis:** twice, again, _or_ plural of **bi** _(as in the wonderful Latin phrase_ "bis dat qui cito dat": _"he gives twice who gives promptly")_
**Burg:** a town or city
**Berg:** an iceberg
The **bawd** in **braw braws busk** s to **buss** her **beau**.
**Bawd:** a female head of a brothel
**Braw:** excellent
**Braws:** (n./pl.) fine clothing
**Busk:** to prepare
**Buss:** to kiss
**Beau:** a boyfriend (pl. **beaux** )
The **babu** beneath the **babul** eats **baba** , but the **baboo** in the **babool** prefers **babka**. Amidst the **babel** , the **bub** with a **bubo** rides a **bubal** by the **babassu**.
**Babu:** Hindu gentleman (also **baboo** )
**Babul:** the acacia tree (also **babool** )
**Baba:** a cake steeped in rum
**Baboo:** babu
**Babool:** babul
**Babka:** a type of coffee cake
**Babel:** confused ruckus
**Bub:** a young man, particularly an upstart
**Bubo:** a swollen lymph node in the groin or armpit
**Bubal:** a type of large antelope now extinct
**Babassu:** a palm tree that yields an edible oil
The **baldy** bet the **berk** a bag of **bani** that the **bairn** couldn't **banjax** the **bankit** with a **baffy**.
**Baldy:** one who is bald
**Berk:** a fool
**Bani:** plural of **ban:** a Romanian monetary unit
**Bairn:** a child
**Banjax:** to damage or destroy
**Bankit:** a raised sidewalk, used in the American South (also **banquette** )
**Baffy:** a 4-wood golf club
In the **bazar** , near the baskets of **braize** , the **bozo** pays big **baiza** s for **braza** s of **baize**.
**Bazar:** a marketplace (also **bazaar** )
**Braize:** a European marine fish
**Bozo:** a fool
**Baiza:** an Omani monetary unit
**Braza:** a Spanish unit of length, equal to almost 1⅔ meters
**Baize:** the coarse, woolen fabric used on billiards tables, similar to felt
If sheep on the **brae brux** when they **blat** , they may have **braxy**.
**Brae:** a slope or hillside
**Brux:** to grind the teeth (the noun is **bruxism** )
**Blat:** to bleat
**Braxy:** a disease of sheep
In the **botel** , the **bucko bung** s the **bot** in the **bota** with **batt** he'd bought for four **baht**.
**Botel:** a floating hotel (also **boatel** , **floatel** )
**Bucko:** a bully or ruffian
**Bung:** to plug with a cork or stopper
**Bot:** the larva of a botfly (also **bott** )
**Bota:** a leather bottle, usually used for wine
**Batt:** cotton used for stuffing quilts and sleeping bags
**Baht:** a monetary unit of Thailand, worth about three American cents
* * *
# C
Care to **coff** some **cole** or **cos** , **coz**? I accept **chon** , **chao** , **cory** , and **cedi**.
**Coff:** to buy (chiefly Scottish)
**Cole:** kale (also **kail** )
**Cos:** romaine lettuce
**Coz:** a cousin (pl. **cozes** or **cozzes** )
**Chon:** a monetary unit of South Korea
**Chao:** "a monetary unit of Vietnam" according to the _OSPD_
**Cory:** "a former monetary unit of Guinea" according to the _OSPD_
**Cedi:** the basic monetary unit of Ghana
" **Ciao** ," said the **chic chica** by the **chico** to the **chicer chola** by the **casa**.
**Ciao:** an expression of greeting or departure
**Chic:** stylish or (n.) stylishness
**Chica:** a girl or young woman
**Chico:** a common shrub in the American west, known as the greasewood
**Chicer:** more stylish
**Chola:** a Mexican American girl
**Casa:** a house
He heard the **coon** , **cony** , **cavy** , and **chipmuck chirk** , **chirm** , and **chirr.**
**Coon:** a raccoon
**Cony:** a rabbit
**Cavy:** a short-tailed rodent found in South America
**Chipmuck:** a chipmunk
**Chirk:** to make a shrill noise like a small animal, or (adj.) happy
**Chirm:** to chirp like a bird
**Chirr:** to chirp like an insect (also **chirre** )
The _OSPD_ includes **cholo** (defined as a "pachuco") and **chola** ("a Mexican American girl"). **Pachuco** , "a flashy Mexican American youth," is also playable. The term _cholo_ dates back to at least the early seventeenth century, when it was defined in a Peruvian book as a word for a child of mixed black and indigenous Indian heritage: "it means _dog_ , not of the purebred variety, but of very disreputable origin; and the Spaniards use it for insult and vituperation."
In recent years, _cholo_ and _chola_ have been somewhat **_reappropriated_** , and are used self-descriptively by southwestern Americans of Mexican descent who identify with the general styles and/or ethos of youth culture found in hip-hop and/or **gangsta** scenes.
The **cadi** and **caid** count the **cain.**
**Cadi:** a judge in Islamic courts (also **kadi** , **qadi** )
**Caid:** a Muslim chieftain (also **qaid** )
**Cain:** rent for land paid in produce or livestock (also **kain** )
The **coot** and the **cooter** are **snooty** to the **cootie** and **crappy** to the **crappie**.
**Coot:** a dark-gray aquatic bird
**Cooter:** any of many freshwater turtles common in the eastern United States _(from_ kuta _, the Malian word for turtle)_
**Snooty:** snobbish
**Cootie:** a body louse
**Crappy:** decidedly bad
**Crappie:** a common freshwater food fish
Don't **cark** about the **carb** —just don't **clag** the **cam.**
**Cark:** to worry
**Carb:** a carburetor
**Clag:** to clog
**Cam:** a rotating or sliding piece of machinery
The **cobby kob stob** s the **cobb** near the **dobby** 's **doby**.
**Cobby:** stocky (used of animals)
**Kob:** an orange-brown antelope
**Stob:** to stab
**Cobb:** a sea gull
**Dobby:** a foolish old man
**Doby:** adobe (also **dobie** )
* * *
# D
The **dorty dork** in the **doty dory** takes his **dore dor** to the **dorp.**
**Dorty:** sulky
**Dork:** a socially inept person, an outsider _(It's likely that_ dork _, which can also mean "penis," became_ dick _.)_
**Doty:** marked by decay
**Dory:** a small, flat-bottomed boat found in New England
**Dore:** gilded
**Dor:** a European beetle (also **dorr** , **dorbug** )
**Dorp:** a village (chiefly South African)
The **dry** s **durn** the drunks who **dram**. They even **drat** and **dang** the **dreg** to the **deil**.
**Dry:** a prohibitionist (-s)
**Durn:** to damn
**Dram:** to drink alcohol
**Drat:** to damn
**Dang:** to damn
**Dreg:** the sediment of alcoholic beverages
**Deil:** the devil
"It's **dere** , **doc** ," says the **dona** , a former **def deb** from a **deme** in the **dene**. "I can't **dure** the **dol** s in my **derm**."
**Dere:** dire
**Doc:** doctor
**Dona:** a respected Spanish lady
**Def:** excellent ( **deffer** , **deffest** )
**Deb:** a debutante
**Deme:** a district in Greece
**Dene:** a valley
**Dure:** to endure
**Dol:** a unit of pain intensity equal to one "just noticeable difference" (JND) of pain _(Developed at Cornell University in the mid-twentieth century, the dol is no longer used.)_
**Derm:** the skin (also **derma** )
* * *
# E
My **eme** Ed may only have an **elhi ed** —he's no **einstein** —but he knows an **edh** from an **eng**.
**Eme:** uncle or an avuncular person (Scottish)
**Elhi:** relating to school grades 1 through 12
**Ed:** education
**Einstein:** a very intelligent person
**Edh:** a letter in Old English, Icelandic, and Faeroese (also **eth** )
**Eng:** a Latin letter
The **ecru euro** has had **enow eau**.
**Ecru:** a color similar to beige
**Euro:** an Australian marsupial
**Enow:** enough
**Eau:** water (pl. **eaux** )
"The **eyas** 's **eyry** was **erst** the **eyass** 's," the **ern** told the **eyra**.
**Eyas:** a young hawk
**Eyry:** the lofty nest of a bird of prey (also **aerie** , **eyrie** )
**Erst:** formerly, long ago
**Eyass:** a falcon _(defined in the_ OSPD _as an_ **eyas** _, but they're generally considered to have slightly different meanings)_
**Ern:** a sea eagle (also **erne** )
**Eyra:** an American wildcat
Because of his **eld** , it's less **eath** for the **ebbet** to climb the **ers**.
**Eld:** old age
**Eath:** easy (Scottish)
**Ebbet:** a green newt also known as a spring peeper
**Ers:** a climbing plant (also **ervil** )
* * *
# F
**Fie** , you won't **fob** me with that **faux fido** —it's just a **fil**! **Foh** , it's a **fico** —you can't **fub** me. I'm no **feeb!**
**Fie:** an exclamation of disapproval
**Fob:** to deceive or cheat (also **fub** )
**Faux:** fake
**Fido:** an irregular coin
**Fil:** a coin of little value found in many Middle East countries
**Foh:** an exclamation of disgust (also **faugh** )
**Fico:** a trifle, something almost worthless
**Fub:** to deceive or cheat (also **fob** )
**Feeb:** a feebleminded person
The **deft ganef** in the **weft reft** the **kef** from the **eft** and **eftsoon** left as he **cleft** the **teff.**
**Deft:** skillful in movements
**Ganef:** a thief (also **gonef** , **ganof** , **gonof** , **gonoph** , **gonif** , and **goniff** )
**Weft:** a type of specially woven fabric or garment
**Reft:** past tense of **reave:** to seize, to forcibly plunder
**Kef:** hemp smoked to euphoria (also **kif** , **kaif** , **keef** , **kief** )
**Eft:** a newt
**Eftsoon:** soon after (also **eftsoons** )
**Cleft:** past tense of **cleave:** to split
**Teff:** an African cereal grass
The **fice** was **fain** to **fet** the **fez fer** the **fud**.
**Fice:** a small dog of mixed breed (also **fyce** , **feist** )
**Fain:** pleased, particularly to do something
**Fet:** to fetch
**Fez:** a brimless hat worn by men in Turkey
**Fer:** for
**Fud:** an old-fashioned, stuffy person
Since it **fash** es the **fiar** , a **fiat** forbids **fifing** in the **fief**.
**Fash:** to bother
**Fiar:** a landowner in Scottish law
**Fiat:** an order or decree
**Fife:** to play a sort of high-pitched flute
**Fief:** a feudal estate
**Fess** to the **fed** s or I'll **fink**!
**Fess:** to confess
**Fed:** a federal agent
**Fink:** to inform the authorities on someone
* * *
# G
The **giglet** did a **giga** with the **piglet** , **eaglet** , **auklet** , **swiftlet** , and **owlet**.
**Giglet:** a playful young girl (also **giglot** )
**Giga:** a lively dance that evolved from the jig (pl. **gighe** ) (also **gigue** )
**Piglet:** a young pig
**Eaglet:** a young eagle
**Auklet:** a small auk
**Swiftlet:** a Pacific swift, known for using saliva to help build its nest _(Their nests are a Chinese delicacy used in soups and are among the most expensive edible products eaten by humans. A bowl of bird's nest soup can cost $100, and a kilogram of bird's nest can run up to $10,000.)_
**Owlet:** a young or small owl
Wine **guggle** d from **guglet** s and **goglet** s as Gil grilled a **gigot**.
**Guggle:** to flow unevenly, producing a soft noise (also **gurgle** )
**Guglet:** a long-necked vessel of earthenware (also **goglet** )
**Gigot:** a mutton leg
The **saiga** 's **amiga** hates the **taiga**.
**Saiga:** a critically endangered tundra antelope
**Amiga:** a female friend
**Taiga:** a subarctic forest of firs and spruces
The **gleg grig glug** s **grog** in the **grot**.
**Gleg:** alert and quick to respond (Scottish)
**Grig:** a lively, animated person
**Glug:** to make a gurgling sound by drinking or pouring
**Grog:** a mixture of liquor and water; beer
**Grot:** a grotto
**Yerk** the **yegg** ; if you **igg** him he'll **nim** the **migg**.
**Yerk:** to beat vigorously through strikes, stabs, kicks, etc.
**Yegg:** a thief, particularly a burglar
**Igg:** to ignore
**Nim:** to steal
**Migg:** a type of playing marble (also **mig** , **miggle** )
If he drinks a **noggin** of **nogg** , the **hogg** will **mugg** and play with the **piggin** behind the **biggin.**
**Noggin:** a small amount of liquor
**Nogg:** a type of strong ale
**Hogg:** a young sheep before it's shorn
**Mugg:** to make a funny face
**Piggin:** a small wooden bucket
**Biggin:** a house or cottage
The **glumpy gled glime** s the **glim** —he'll **glom** it in a **gliff**.
**Glumpy:** glum or sulky
**Gled:** a bird of prey also known as the kite (also **glede** )
**Glime:** to glance at slyly
**Glim:** a light or lamp, or an eye _(To "douse the glim" means both "to put out the light" and "to punch someone in the eye"—just as in "knock someone's lights out.")_
**Glom:** to steal
**Gliff:** a brief moment; or a sudden sight of something that startles
* * *
# H
" **Voila** ," said the pitcher.
" **Um**...," said the **ump** , " **umm**... **uh**... **eh**... **hm**... **hmm** . .."
" **Hunh**?" said the catcher.
" **Ahem** ," said the batter.
" **Eureka** ," said the umpire. "Strike three!"
" **Huh?** " Said the batter.
" **Mm** ," said the pitcher.
" **Ah**!" " **Aah**! " **Aha**!" " **Hah**!" " **Ha**!" " **Ooh**!" " **Ho**!" " **Oho!** "
Everyone **ohed**. Except the **boobird** s—who **boo** ed.
**Voila:** used to call attention to something
**Um:** an expression of hesitation (also **umm** )
**Ump:** to umpire a baseball game; an umpire
**Umm:** an expression of hesitation (also **um** )
**Uh:** used to express hesitation
**Eh:** used to express doubt
**Hm:** an expression of thoughtful consideration (also **hmm** )
**Hmm:** used to express thoughtful consideration (also **hm** )
**Hunh:** an expression requesting the repetition of something said
**Ahem:** used to attract attention
**Eureka:** used to express triumph upon discovering something
**Huh:** used to express surprise
**Mm:** used to express appreciation or assent
**Ah:** aah
**Aah:** to exclaim in amazement, joy, or surprise
**Aha:** an expression of surprise, accomplishment, or mockery
**Hah:** ha
**Ha:** a sound of surprise
**Ooh:** to exclaim in amazement, joy, or surprise
**Ho:** used to express surprise
**Oho:** an interjection expressing surprise
**Oh:** to exclaim surprise, pain, or desire
**Boobird:** a fan who boos players on the home team
**Boo:** to cry "boo"
Let's **hongi** by the **honan hong** , **hon.**
**Hongi:** to greet another by pressing together noses ( **hongiing** )
**Honan:** a fine silk
**Hong:** a Chinese factory
**Hon:** a sweetheart (also **honeybun** )
* * *
# I
Eyeing the **ilea** , **ilia** , and **inia** made Ira **inly** feel **illy** and **iffy**.
**Ilea:** plural of **ileum:** a part of the small intestine
**Ilia:** plural of **ilium:** a bone in the pelvis
**Inia:** plural of **inion:** a part of the skull
**Inly:** inwardly
**Illy:** badly
**Iffy:** full of ifs, uncertain
Iris put **ilex** and **ixia** inside **ilka isba**.
**Ilex:** a genus of plant that includes holly
**Ixia:** a South African iris
**Ilka:** every
**Isba:** a Russian log cabin
The **genii** in the **torii** extend their **medii** at the **impi.**
**Genii:** plural of **genius:** a person of exceptional intelligence
**Torii:** a gateway to a Shinto temple
**Medii:** plural of **medius:** the middle finger
**Impi:** a band of Zulu warriors
**Iwis** , the **jus** of the **ibis** in the **mid** of the **miri** is **nisi**.
**Iwis:** certainly (also **ywis** )
**Jus:** a legal right
**Ibis:** a long-legged wading bird
**Mid:** middle
**Miri:** plural of **mir:** a Russian peasant compound
**Nisi:** not yet final, to be enacted depending upon unresolved factors
The **biddy** in a **midi** smokes a **bidi** beside the **tipi.**
**Biddy:** literally a hen, also used figuratively for an old woman
**Midi:** a type of skirt or coat that extends to the middle of the calf
**Bidi:** a cigarette of India (also **beedi** )
**Tipi:** tepee (also **teepee** )
At the **ceili** , Billy told me it'll be ten **inti** s or seven **sylis** to fix the **tali** of my **puli**.
**Ceili:** a Gaelic social gathering or party (also **ceilidh** )
**Inti:** a former currency of Peru
**Syli:** a former monetary unit of Guinea
**Tali:** plural of **talus:** a bone in the ankle
**Puli:** a long-haired sheepdog known for its corded coat (pl. **pulis** or **pulik** )
* * *
# J
The **joe** 's **jo** 's **jenny jib** s at the sight of the **jocko** and tries to **jink**.
**Joe:** a guy
**Jo:** a sweetheart
**Jenny:** a female donkey
**Jib:** to refuse to proceed further or comply
**Jocko:** a chimpanzee or monkey
**Jink:** to move out of the way, to change direction
#### An Irregular Joe
Like **qi** and **za** , **jo** is an invaluable word to have in your arsenal. **Jo** —a sweetheart—is by far the easiest way to unload the J for quick points. But beware: there is no _jos_. The plural, like the plural of **joe** (a guy), is **joes**.
The **juco coed** at the **juku** dance did the **kolo** , the **juba** , and the **jota**.
**Juco:** junior college
**Coed:** a female college student
**Juku:** a school in Japan that readies students for college
**Kolo:** a Serbian folk dance
**Juba:** a lively dance born on Southern slave plantations
**Jota:** a folk dance from northern Spain
The **jerry** in the **johnny jape** d the **jane** 's **jupe** and **jibe** d the **jabot** on her **joseph.**
**Jerry:** a German soldier
**Johnny:** a hospital gown
**Jape:** to play a joke upon or mock
**Jane:** a woman or girl
**Jupe:** a woman's short jacket
**Jibe:** to jeer (also **gibe** )
**Jabot:** a decoration on a shirt, like a frill or ruffle
**Joseph:** a woman's long cloak
The **jehu** , a **jefe** , didn't **jauk** ; he was there in a **jiff**.
**Jehu:** a fast, furious driver
**Jefe:** a chief
**Jauk:** to dally
**Jiff:** jiffy, instant
The **haji** in a **taj jee** s on his **haj** to the religious **mecca.**
**Haji:** one making a haj (also **hadji** and **hadjee** )
**Taj:** a tall, conical hat worn in some Islamic countries
**Jee:** to go to the right (also **gee** ) (also **ajee** , meaning "off to the side")
**Haj:** a pilgrimage to **Mecca** (also **hadj** , **hajj** )
**Mecca:** an important destination for many people
* * *
# K
Scott, a **scop** , and Sigrun, a **skald** , **kip** atop the **kop**.
**Scop:** an Old English poet
**Skald:** an Old Norse poet
**Kip:** to sleep
**Kop:** an isolated hill (also **kopje** )
Ken **ken** s **kendo** but prefers **keno**.
**Ken:** to know
**Kendo:** a modern Japanese martial art involving swords _(_ **Kendo** _literally means "Way of the Sword.")_
**Keno:** a game of chance like lotto popular in casinos _(_ **Keno** _offers perhaps the worst odds of return of any game offered at most casinos.)_
#### The Ki to the Kay: K Words
Thanks be to world religions, who give us two easy ways to drop the K: **ki** , the life force in Chinese culture, pronounced "chee" (also **qi** ), and **ka** , the eternal soul in ancient Egyptian religion.
The **cate** , a cake, is so small it won't **sate** or even **bate** Kate's appetite. It's like eating **tate**. She should have made a **bisk** of **torsk** or **cusk** , or a **rusk**. And if she kept up her **kata** , it wouldn't go to her **nates**.
**Cate:** a choice food or delicacy
**Sate:** to satiate
**Bate:** to lessen
**Tate:** a tuft of hair
**Bisk:** bisque
**Torsk:** a large marine food fish also known as the cusk
**Cusk:** a large marine food fish also known as the torsk
**Rusk:** a light biscuit, often as baby food
**Kata:** a set of movements in several different Japanese martial arts
**Nates:** (n./pl.) the buttocks
For ten **taka** and one **marka** , the **makar** bought the **abaka** from the **kabaka**.
**Taka:** the currency of Bangladesh
**Marka:** the currency of Bosnia and Herzegovina
**Makar:** a poet (Scottish)
**Abaka:** a Philippine cultivar of banana (also **abaca** )
**Kabaka:** the king of Buganda, a subnational kingdom of Uganda
The **luckie** was a **sickie** , but her **dickie** was **duckie**.
**Luckie:** an affectionate term for an old woman _(Archaic terms for one's grandparents include_ lucky-minnie _for grandmother and_ lucky-daddie _for grandfather.)_
**Sickie:** a person of perverted tastes (also **sicko** and **psycho** )
**Dickie:** a detachable shirt or blouse front (also **dicky** , **dickey** )
**Duckie:** excellent (also **ducky** )
In the **mirk** near the **kirk** , Kirk **dirk** s Dirk Burke, then **burke** s him by the **birk**.
**Mirk:** dark, or darkness (also **murk** )
**Kirk:** a church
**Dirk:** to stab with a small knife _(_ Dirk _has also been traced back to the word_ dick _to mean "a penis.")_
**Burke:** to murder by strangulation, or to smother/hush someone _(Coined after the serial killer William Burke who, with his partner, suffocated seventeen victims in order to sell the corpses for medical dissection. Burke himself was hanged.)_
**Birk:** a birch tree
The **stirk** in the **sark** and the **koel** in **kohl** drink **kir** and **kola** with the **mola**.
**Stirk:** a yearling bull or cow
**Sark:** a shirt
**Koel:** a cuckoo of the Eastern hemisphere
**Kohl:** a type of black powder used as eye makeup
**Kir:** an aperitif of white wine and black currant liqueur popular in France _(named after Félix Kir, a former mayor of Dijon)_
**Kola:** cola
**Mola:** an enormous marine fish that typically weighs more than a ton, also known as the ocean sunfish
* * *
# L
**Lo**! Lee **lam** s up the **lum** and legs it **levo** a **li** to the **lea**.
**Lo:** (interjection) used to attract attention
**Lam:** to flee
**Lum:** chimney
**Levo:** toward the left
**Li:** a Chinese unit of distance equal to .4 miles
**Lea:** a meadow (also **ley** )
With **elan** , the **alan gae** s **alane alang** the **lang** lane.
**Elan:** style and enthusiasm
**Alan:** a breed of hunting dog, named after the Alan people (also **aland** , **alant** )
**Gae:** to go ( **gaed** , **gane** or **gaen** , **gaeing** , **gaun** , **gaes** )
**Alane:** alone
**Alang:** along
**Lang:** long
Louie, a **luny louie** without a **leu** or **lev** , **levant** s to the Levant.
**Luny:** loony (adj. and n.)
**Louie:** a lieutenant (also **looie** )
**Leu:** the currency of Moldova and Romania (pl. **lei** )
**Lev:** the currency of Bulgaria
**Levant:** to abscond in order to avoid a debt
* * *
# M
The **motmot** in the **mott** made a **mot** about the **motet**.
**Motmot:** a colorful, long-tailed bird native to Central and South America
**Mott:** a small cluster of trees, typically on a prairie (also **motte** )
**Mot:** a witty remark
**Motet:** a type of choral composition involving a sacred text
The **mod** in the **mag** for **modish ma** s wears a **mac** , one **moc** , and one **pac**.
**Mod:** a stylish person
**Mag:** a magazine
**Modish:** stylish
**Ma:** mother
**Mac:** a raincoat (also **mack** , **mackintosh** )
**Moc:** a moccasin
**Pac:** an insulated, waterproof, laced boot originally made by Native Americans (also **shoepack** )
The **mim moll** models a **mongo mobcap** for her **mon**.
**Mim:** prim, overly meek
**Moll:** a gangster's girlfriend
**Mongo:** low-grade wool, often used as rags (also **mongoe** , **mungo** )
**Mobcap:** a large cap worn indoors by married women in the eighteenth century
**Mon:** man
It's the **mozo** 's **moira** to **moil** for **mony mome** s.
**Mozo:** a manual laborer ( _in the southwestern United States_ )
**Moira:** fate or destiny, in Greek mythology
**Moil:** to work hard
**Mony:** many (also **monie** )
**Mome:** a fool, a dull or silent person with nothing to say
The **mopy moppet** made a **moue** at the **roue** with her **mammy**.
**Mopy:** mopey
**Moppet:** a child
**Moue:** a pouting expression
**Roue:** a lecherous old man
**Mammy:** mother
Watch the **mamba mambo** and **samba** , **embow** ing his back (he's got no **gamb** s) to the **gamba** and **mbira**.
**Mamba:** a venomous African snake
**Mambo:** to perform a Latin American dance like the rumba
**Samba:** to perform a Brazilian dance of African origin
**Embow:** to arch or vault
**Gamb:** the leg of an animal, particularly when depicted on a coat of arms
**Gamba:** a bass viol, sounding approximately like a cello
**Mbira:** an African musical instrument made of a base with strips that resonate when plucked, also known as the thumb piano
* * *
# N
At **neap** , my **netop** —a **neatnik** who's a **naif** —tried to **nett nekton** from the **ness** by the **linn** for the **nth** time. Again: **nada.**
**Neap:** a tide halfway between high and low tides
**Netop:** a buddy
**Neatnik:** a compulsively neat person
**Naif:** a naive person
**Nett:** to net
**Nekton:** any free-swimming aquatic animals ( _in contrast with_ **plankton** )
**Ness:** a headland
**Linn:** a waterfall (also **lin** )
**Nth:** describing an unspecified number in a series
**Nada:** nothing
**Neist** , he ate a **neep** on a **bunn** and a **nutlet** and nabbed a **neg** of a **neb** of a **nene** from a **neuk**.
**Neist:** next
**Neep:** a turnip
**Bunn:** bun
**Nutlet:** a small nut
**Neg:** a photographic negative
**Neb:** a bird's beak
**Nene:** a rare, wild Hawaiian goose
**Neuk:** a nook
A **yob** by the **nom** of Bob, **ne** Robert, **hob** s the shoe of a **nob** —a **nawab** named Bobo, **nee** Roberta.
**Yob:** a hoodlum _(_ Yob _is an example of_ _backslang—a backward construction of_ boy _.)_
**Nom:** a name
**Ne:** born with the name of, for a male
**Hob:** to put hobnails on a shoe
**Nob:** a rich person
**Nawab:** a rich and prominent person, or one who acts in that fashion (also **nabob** , **nabobess** [f])
**Nee:** born with the name of, for a female
* * *
# O
"You call this **broo** , **bro**?" the **boor pooh** ed. "I call it **goo** , **goop** , **gook** , **gunk**. But it sure ain't **bree**."
**Broo:** a broth (also **bree** )
**Bro:** brother
**Boor:** an ill-mannered person
**Pooh:** to express contempt for
**Goo:** a sticky or slimy substance, often a residue of unknown origin (also **gook** )
**Goop:** a heavy, unpleasant liquid
**Gook:** a sticky or slimy substance, often a residue of unknown origin (also **goo** )
**Gunk:** a sticky, greasy, or unpleasant substance, often clogging a small space
**Bree:** broth (also **broo** )
For a **kobo** , the **lobo** buys a **gobo** for his **bo** , a **goby** in his **toby**.
**Kobo:** a small unit of Nigerian currency
**Lobo:** a timber wolf
**Gobo:** a shield to block extraneous noise from a microphone
**Bo:** a buddy
**Goby:** a very small fish that has fused pelvic fins that create a suction cup
**Toby:** a drinking mug in the shape of a man or a man's face
It's **soth** : the **wroth goth doth loth** his **mothy bothy**.
**Soth:** true (also **sooth** )
**Wroth:** very angry
**Goth:** a morose style of music, or a maker or fan of such music
**Doth:** an antiquated third-person present form of do
#### A Tip from Prof. Wagstaff of Huxley College
One used to be said to "wax **wroth** ," or grow angry, which is fodder for one of the great Groucho Marx lines in _Horse Feathers_ :
Secretary: "The Dean is furious. He's waxing wroth."
Groucho: "Is Roth out there too? Tell Roth to wax the Dean for a while."
**Loth:** loath
**Mothy:** abounding in moths
**Bothy:** a basic shelter left open for common use, particularly in Scotland
The **kea** in the **koa** , like the **boa** , **goa** , **moa** , and **anoa** , are all **zoa**.
**Kea:** a species of New Zealand parrot known for its exceptional intelligence
**Koa:** a flowering Hawaiian tree
**Boa:** a large nonvenomous snake that constricts its prey
**Goa:** a Tibetan gazelle
**Moa:** an extinct flightless bird of New Zealand
**Anoa:** the dwarf buffalo of Indonesia
**Zoa:** plural of **zoon** : the whole product of a single fertilized egg
"I'll have the **coho** with some **pepo** and **sybo** ," ordered the **boho**. "Oh, and some **vino**."
**Coho:** the North Pacific silver salmon
**Pepo:** any fruit having a fleshy interior and a hard rind, including melons and cucumbers
**Sybo:** a variety of onion also known as the Welsh onion (also **cibol** )
**Boho:** a bohemian
**Vino:** wine
Look at this **yahoo** trying to **shoo** the **hoopoe**. " **Skidoo** , **git oot noo**!" he **cooee** s.
**Yahoo:** a coarse, easily excited person
**Shoo:** to drive off
**Hoopoe:** a colorful bird with distinctive plumage on its head
**Skidoo:** to leave quickly (also **skiddo** )
**Git:** to get
**Oot:** out
**Noo:** now
**Cooee:** to cry out loudly in order to gain attention, particularly in the Australian outback (also **cooey** )
* * *
# P
" **Pfui**!"" **Pah**! **Poh**!"
**Pfui:** phooey
**Pah:** an exclamation of disgust
**Poh:** an expression of disapproval (also **pugh** )
On the shelf next to the **pye** and the **pyx** is a **pic** of the priest receiving the **pax** from the holy **pa**.
**Pye:** a former book of ecclesiastical rules in the Church of England
**Pyx:** a container for the Eucharist (also **pix** )
**Pic:** a photograph
**Pax:** a ceremonial kissing of a tablet at a Christian mass; the kiss of peace
**Pa:** a father
People with **pica** probably won't eat **paca** , or **spica** , or even an **orca** 's **plica** —they might prefer **mica.**
**Pica:** a medical disorder involving the craving to eat things that are not food
**Paca:** a large rodent considered a delicacy in Central and South America
**Spica:** an ear of corn (pl. **spicas** or **spicae** )
**Orca:** the killer whale (also **orc** )
**Plica:** a fold of skin
**Mica:** a type of mineral that is easily cleaved along structural planes
**Yep** , the **klepht** wearing the **kepi** sure can **kep** a bee from a **skep** or a **kelep** from under a **cep.**
**Yep:** yes
**Klepht:** a Greek guerrilla who lived in the mountains of the Ottoman Empire and resisted Ottoman rule between the fifteenth and nineteenth centuries
**Kepi:** a type of cap primarily used by the French and American militaries during the nineteenth and early twentieth centuries
**Kep:** to catch (archaic Scots)
**Skep:** a handwoven beehive, often of wicker
**Kelep:** a stinging ant found in Central America
**Cep:** a type of large mushroom (also **cepe** )
The **grampus** holds a **gamp** for the **gramp grumphy** , who **gimp** s and **grump** s.
**Grampus:** the orca
**Gamp:** a large, bulky umbrella (chiefly British)
**Gramp:** a grandfather
**Grumphy:** a familiar name for a pig (also **grumphie** ) (chiefly Scottish)
**Gimp:** to limp
**Grump:** to complain or sulk
The **tampan tamp** s the **samp** on the **sampan.**
**Tampan:** a biting African tick
**Tamp:** to pack down or compress with repeated taps
**Samp:** coarsely ground maize, often used for porridge
**Sampan:** a flat-bottomed Chinese boat
* * *
# Q
The **quean** in the **quod** says she'd love a quart of **quass**.
" **Quotha**! and I'll have an **usque** and **aquae** ," **squib** s the **squab**.
**Quean:** a prostitute
**Quod:** a jail
**Quass:** a Russian beer made of fermented rye bread (also **kvas** , **kvass** )
**Quotha:** an expression of sarcasm or surprise
**Usque:** whiskey (also **usquebae** ) _(from the Gaelic_ usquebaugh _, meaning "water of life")_
**Aquae:** plural of **aqua:** water
**Squib:** to satirize
**Squab:** a juvenile pigeon
The **equid** , **quey** , **quokka** , and **quagga** are in a **quag** near the **quai**.
**Equid:** any animal of the Equidae family, including horses, donkeys, and zebras
**Quey:** a heifer
**Quokka:** a short-tailed, cat-sized wallaby
**Quagga:** an extinct zebralike animal with the front of the body resembling a zebra and the back resembling a horse _(Efforts are under way to selectively "back-breed" the quagga into existence. This is different from the_ **zebroid/zebrass** _or the_ **zebrine** _, the specific offspring of a female zebra and a male horse.)_
**Quag:** a bog
**Quai:** a wharf or pier (also **quay** )
The **quale** of having a **quint** of **quate quin** s **quant** is **quare.**
**Quale:** the singular way it feels to experience a particular mental state
**Quint:** a sequence or set of five
**Quate:** quiet
**Quin:** a quintuplet
**Quant:** to propel a barge or boat through water with a pole
**Quare:** queer
"As a **tuque** is not even a **quasi toque** , the **quoit** is not **roque** ," **quoth** the **quango.**
**Tuque:** a tight-fitting knit cap, as worn by sailors
**Quasi:** somewhat like
**Toque:** a close-fitting, tall, cylindrical hat popular with cooks
**Quoit:** to toss a ring in quoits, a throwing game like horseshoes and ring toss _(usually played outdoors, but one can find_ **quoit** _played indoors in an altered, table version in Wales and western England)_
**Roque:** an American form of croquet _(_ **Roque** _was actually an Olympic sport for just one Olympics, in 1904 in St. Louis, at which time it was being hailed as "the game of the century.")_
**Quoth:** said _(_ **Quoth** _is an unusual verb as it's to be used only before the subject.)_
**Quango:** a quasi-autonomous nongovernmental organization, found in the United Kingdom
The **quipu** , **qua** calculator, could count **qursh.**
**Quipu:** an intricate Inca calculating device making use of knotted llama or alpaca hair (also **quippu** )
**Qua:** in the capacity of
**Qursh:** a currency of Saudi Arabia equal to one-twentieth of a riyal (also **qurush** , **girsh** , **gursh** )
My **nuncle** , a **quidnunc** , **quirts** a **quezal** in a **squill**.
**Nuncle:** an uncle _(from an archaic combination of mine and uncle, as in_ King Lear _: "I have us'd it, nuncle, ever since thou mad'st thy daughters thy mother.")_
**Quidnunc:** a gossip
**Quirt:** to strike with a rawhide whip that forks into two tails at the end
**Quezal:** a bright green tropical bird (also **quetzal** )
**Squill:** a flowering bulb native to the Mediterranean
**"With or Without U"**
In UpWords, the Q tile is a little friendlier, as it actually contains a _u_ on it. But in Scrabble, Anagrams, and the like, it helps to know the approximately fifteen words in the _OSPD_ that use a Q but don't require a U.
Here's a song to help you remember them. (Sung to the tune of U2's "With or Without You." Sadly, _bono_ is not a word, but **bonobo** is. Also, look what four letters are hidden in **nutwood**.)
See the tiles there on the board
See all the ways I might have scored
I'll wait for U.
Sleight of hand and twist of fate
On a rack of wood Q makes me wait
And I wait without U.
With or without U.
With or without U.
From the bag, I'd drawn out four
It gave me Q but I want more
And I'm waiting for U.
With or without U
With or without U, aha,
I can't play with or without U.
And I try to make **qindar** ( **qintar** )
And you try **qadi** and **qaid**
And there's **qi** , and there's **qat**
And you find that you **qanat**.
There's no **qiviut** or **qabala** ( **h** ),
No **sheqel** or **mbaqanga** ,
No **qwerty** and there's nothing left sans Us.
And you try to make **faqir** ,
And you try to make **umiaq** ,
And there's **qoph** , and there's **tranq**
And you find that you **qanat**.
With or without U
With or without U, oh, oh
I can't play
With or without U.
With or without U
With or without U, oh, ah,
I can't play
With or without U.
**Qindar:** Albanian currency, equal to one-hundredth of a **lek** (also **qintar** )
**Qadi:** a judge in Islamic courts (also **kadi** , **cadi** )
**Qaid:** a Muslim chieftain (also **caid** )
**Qi:** the central life force in traditional Chinese culture, pronounced "chee" (also **ki** )
**Qat:** the leaf of a type of shrub, chewed or used in tea as a mild stimulant (also **kat** , **khat** )
**Qanat:** a gently sloping tunnel used for irrigation
**Qiviut:** musk-ox wool
**Qabala:** an occult belief (also **qabalah** , **cabala** , **kabala** )
**Sheqel:** several ancient Middle Eastern units of weight and money (also **shekel** ) (-s, -im)
**Mbaqanga:** a southern African music style _(from the Zulu_ _umbaqanga_ , _or "steamed cornbread," referring to homemade music that also provides for—and is a metaphorical type of—daily bread)_
**Qwerty:** used to describe a standard English-language keyboard
**Faqir:** a Muslim ascetic (also **fakir** )
**Umiaq:** a wooden, open boat used by Inuit (also **umiac** , **umiack** , **umiak** )
**Qoph:** a Hebrew letter (also **caph** , **khaf** , **kaph** )
**Tranq:** a tranquilizer (also **trank** )
**Foods That Start with the Letter Q, for 100**
In the film _White Men Can't Jump_ , Rosie Perez's character shows some of the traits of a competitive Scrabble player in her obsessive studying to prepare for an appearance on _Jeopardy!_ She informs her boyfriend that, besides **quince** , she's "got seven more" foods that start with _q_ in her arsenal, should the topic come up.
When she appears on _Jeopardy!,_ **lo** and behold, one of the categories is "Foods That Start with the Letter Q." We see her answer four questions correctly (or question four answers correctly, it being _Jeopardy!_ )— **quail** , **quiche** , **quahog** (a type of clam, also **quohog** ), and **quince**.
Here are four more she may have had at the ready:
**Quinoa:** a starchy grain of the Andes
**Quinnat:** the chinook, or king salmon
**Quenelle:** a poached dumpling containing minced fish, chicken, or meat
**Quamash:** a perennial herb with edible bulbs _(Native Americans introduced roasted quamash bulbs to Lewis and Clark, who thereafter relied heavily upon them.)_
* * *
# R
The **carl marl** s while the **jarl** eats **farl** s.
**Carl:** a peasant or manual laborer (also **carle** )
**Marl:** to fertilize using a certain type of loose sedimentary soil
**Jarl:** a Scandinavian chief or nobleman
**Farl:** a thin triangular bread or cake (also **farle** )
Along the **ria** , a **raia** with a taste for **raki** sells a **rya** to the **raja** and the **rani**.
**Ria:** long, thin inlet formed by a rising sea level
**Raia:** a non-Muslim Turk (also **raya** , **rayah** )
**Raki:** an anise-flavored Turkish liqueur
**Rya:** a traditional Scandinavian rug
**Raja:** an Indian monarch (also **rajah** )
**Rani:** the wife of a rajah (also **ranee** )
The **darb** , an **arb** in the **urb** , wears a **barbut** in the **darbar**.
**Darb:** someone or something excellent _(popular in the 1920s)_
**Arb:** an arbitrageur, an investor who resells purchases soon after buying them to profit from slightly differing prices
**Urb:** a city
**Barbut:** a steel, Greek-style helmet popular in fourteenth- and fifteenth-century Italy
**Darbar:** a type of Indian court (also **durbar** )
They'll probably **rif** the **reb** because he can't **ref** the **rec** soccer games by the **reg** s.
**Rif:** to lay off from employment _(from "Reduction in Force")_
**Reb:** a Southern soldier in the Civil War
**Ref:** to referee
**Rec:** recreation
**Reg:** regulation
The **lar gars** the **sowar** to hurl the **bola** at the **boyar.**
**Lar:** a spirit of the ancient Roman household
**Gar:** to force or compel
**Sowar:** a horseback soldier in India
**Bola:** a weapon of weighted balls connected by a cord thrown to catch cattle _(connected to the idea of the bolo tie)_
**Boyar:** a member of the former Russian aristocracy
* * *
# S
No **sal sall** be added to the **salep** or the **saloop**.
**Sal:** salt
**Sall:** shall _(_ **Sall** _can't be conjugated—this is its only form.)_
**Salep:** a flour ground from orchid tubers and used in food and medicine
**Saloop:** a hot tea of aromatic herbs
The **puss** will **buss** the **joss** , then **doss** in a **foss**.
**Puss:** a cat
**Buss:** to kiss
**Joss:** a Chinese shrine or idol
**Doss:** to sleep, particularly in a convenient, crude place
**Foss:** a moat or ditch (also **fosse** )
**"S-ential"**
Everyone knows those four Ss are a huge help for tacking on the ends of words, but they also help _s_ tart words. The most Scrabble words by far start with _s_ (about 20,000). Surprisingly, the second and third most common letters to start Scrabble words with are _c_ (about 16,000) and _p_ (about 15,000). Less surprising, the least numerous first letters are _x_ (152), _y_ (588), _z_ (601), and _q_ (850).
When I **scry** and **espy** my **tyne** d **sib** , I lose my **sel** and **sab**.
**Scry:** to look into a crystal ball for answers
**Espy:** to catch sight of
**Tyne:** to perish, to lose ( _as in "to become lost"_ ) (also **tine** )
**Sib:** a sibling
**Sel:** self
**Sab:** to sob
The **sei** is such a **seg** she even keeps her **sox** separated.
**Sei:** a small baleen whale
**Seg:** a racial segregationist
**Sox:** socks, a plural of sock
The **sri** in the **suq** accepts **ser** s of **sen** s, **sau** s, **sou** s or **soms.**
**Sri:** an Indian title of respect akin to _Mr._
**Suq:** a large marketplace (also **suk** , **souk** )
**Ser:** a former unit of volume in India equal to a liter
**Sen:** a monetary unit of Japan equal to one-hundredth of a yen
**Sau:** a monetary unit of Vietnam equal to one-hundredth of a dong (also **xu** )
**Sou:** a former French coin _(Today, in French_ sou _refers to any coin of little worth, and one can say "sans le sou," as in, "I'm broke.")_
**Som:** a monetary unit of several Central Asian countries
First they **joist** the **kist**. **Neist** they'll **hist** the **cist**.
**Joist:** to support from beneath with parallel horizontal wooden or steel beams
**Kist:** a chest or coffin
**Neist:** next
**Hist:** to hoist
**Cist:** a prehistoric tomb of stone or hollowed wood
* * *
# T
The **tui** , the **tit** , and the **mut tut** a " **tsk** " at the **teg** on the **tor**.
**Tui:** a common bird of New Zealand also known as the honeyeater
**Tit:** a small bird also known as the titmouse or chickadee
**Mut:** a mutt
**Tut:** to make a sound of disapproval or impatience
**Tsk:** an expression of disapproval (also **tsktsk** )
**Teg:** a sheep before it is shorn (also **tegg** )
**Tor:** rocky hill or peak
"This is some **teuch** , **teugh** , tough **tuff**."
**Teuch:** tough (also **teugh** )
**Tuff:** rock made of consolidated volcanic ash
Softly, softly played the **potto** —but not **tanto** —on the **koto** during the **tutti** in the **lento**.
**Potto:** a small, nocturnal lemur of tropical Africa also known as the softly-softly
**Tanto:** too much—used as a musical direction, generally with a negative connotation
**Koto:** a traditional Japanese stringed musical instrument
**Tutti:** a passage of music played by all performers
**Lento:** a slow movement in a musical composition
The **tahr** in the **talar stotts** at the **sett** by the **yett**.
**Tahr:** an Asian goatlike mammal
**Talar:** a long cloak
**Stott:** to pronk, to leap upward with arched back ( _used of an animal_ ) (also **stot** )
**Sett:** a badger's burrow
**Yett:** an iron gate for a doorway, typically found in castles and tower houses in Scotland
The **tyee** , a **tyro** , **tost** the **rotte** into the **mott**.
**Tyee:** the king salmon, also known as the chinook or quinnat
**Tyro:** a novice
**Tost:** a past tense of toss
**Rotte:** a stringed instrument in sixth-century Germany
**Mott:** a small cluster of trees, typically on a prairie (also **motte** )
* * *
# U
The **uta** on the **ute** played an **ut** on his **uke**.
**Uta:** the side-blotched lizard
**Ute:** a utility vehicle
**Ut:** the note "do" in the French solmization system
**Uke:** ukulele
On their **zebec** , the **zebu** and the **kudu tabu** talk of the **habu** , **kagu** , and **kombu**. And no **gnu** , neither!
**Zebec:** a small, three-masted Mediterranean sailing vessel (also **xebec** )
**Zebu:** a type of Asiatic cattle
**Kudu:** a large striped antelope native to Africa (also **koodoo** )
**Tabu:** to refrain from doing or mentioning (also **taboo** ) (-ed, -ing, -s)
**Habu:** any of certain poisonous snakes in Japan
**Kagu:** a nearly flightless, endangered white bird of New Caledonia
**Kombu:** kelp used as soup seasoning in Japanese cooking
**Gnu:** a wildebeest
The **umbo** protects the **ululant ulan** 's **ulna**.
**Umbo:** the central rounded piece at the center of a shield, also known as a shield-boss
**Ululant:** screaming
**Ulan:** a member of the Polish light cavalry (also **uhlan** )
**Ulna:** a long bone of the forearm
The **hun** , a **mun** dressed in **dun** , broke the **unco** 's **unci** and **nam** his **jun**.
**Hun:** a barbarous, brutal person
**Mun:** a fellow
**Dun:** of a brownish gray color
**Unco:** a stranger or an unusual person ( _from uncouth_ )
**Unci:** plural of **uncus:** any hook-shaped anatomical part, often the curved anterior of the parahippocampal gyrus
**Nam:** past tense of **nim:** to steal
**Jun:** a coin used in North Korea (pl. **jun** )
The **lulu** in the **iglu** cuts her **sulu** with an **ulu**.
**Lulu:** something remarkable, often referring to a woman
**Iglu:** igloo
**Sulu:** a Fijian skirt worn by men and women
**Ulu:** an Inuit knife
The **supe** , a **yup** with an **updo** , **sups upo scup** beneath an **upas**. **Upby** , a **fou ouph** with a **pouf urps**.
**Supe:** a minor actor without a speaking part, a supernumerary
**Yup:** a yuppie
**Updo:** any hair style whereby the hair is arranged—such as in a beehive or ponytail—rather than letting it fall freely
**Sup:** to eat dinner
**Upo:** upon
**Scup:** the common porgy fish
**Upas:** an Asian tree famous for its poisonous sap
**Upby:** farther along ahead at a specific place (also **upbye** )
**Fou:** drunk
**Ouph:** an elf, sprite, or similar small, mischievous creature (also **ouphe** )
**Pouf:** a hairstyle involving a bump or large elevated wave of hair toward the front of the top of the head (also **pouff** )
**Urp:** to vomit
The **wud fud** wins a **kudo** for his **udo pud**.
**Wud:** crazy
**Fud:** a fuddy-duddy, one who is old-fashioned and opposed to excitement
**Kudo:** praise, a compliment
**Udo:** an east-Asian herb used in cooking, such as miso soup
**Pud:** pudding
* * *
# V
" **Brava, bobby** , for bagging the **bravo** who stole the **avo** yesterday **arvo**."
"Oh, I only yelled ' **Avast**!' It was nothing **ava! Ave!** "
**Brava:** an exclamation of applause
**Bobby:** a police officer (chiefly British)
**Bravo:** a professional killer (pl. -vos or -voes or -vi)
**Avo:** a unit of currency in Macao worth one-hundredth of a pataca
**Arvo:** afternoon
**Avast:** an exclamation used to command one to stop
**Ava:** at all
**Ave:** a poetic salutation of greeting or parting (as in "Ave Maria" or "Hail Mary")
In the **vid** , the **dev** with the deep **vox rev** s the **vis** of the **vac** with **vim** so he can **veg** in the **lav** again.
**Vid:** a video
**Dev:** a Hindu god (also **deva** )
**Vox:** voice
**Rev:** to quickly accelerate
**Vis:** force or strength (pl. **vires** )
**Vac:** a vacuum cleaner
**Vim:** enthusiasm
**Veg:** to be idle
**Lav:** a bathroom
The **vireo** and **veery** enjoy **venery** in the **vert veld** , and the **avadavant** in the **vanda** like **banda. Viva! Vive!**
**Vireo:** a small bird
**Veery:** a songbird
**Venery:** sexual intercourse
**Vert:** a heraldic shade of green
**Veld:** grassland of southern Africa (also **veldt** )
**Avadavant:** a small songbird
**Vanda:** a tropical orchid
**Banda:** a traditional, bass-heavy Mexican dance music
**Viva:** a shout of encouragement or exultation (also **vive** )
**Vum!** says the **vrow** when she stubs her toe in the **vug** by the **voe**.
**Vum:** an exclamation of surprise
**Vrow:** a woman (also **vrouw** ) _(from the same Dutch word meaning "woman" or "wife")_
**Vug:** a small cavity in a rock, often lined with crystals (also **vugg, vugh** )
**Voe:** a small bay or inlet
The **vizir** likes diverse **vivers** with his **vichy** and **vino**.
**Vizir:** a minister or high official in a Muslim government (also **vizier** )
**Vivers:** (n./pl.) food, provisions
**Vichy:** mineral water from Vichy, France, or a replica thereof
**Vino:** wine
* * *
# W
In the **weald** , the **wakanda** will protect your **wikiup** , but **wite** the **windigo** if a **williwaw wigwag** s your **wigwam**.
**Weald:** a wooded area, or an open field
**Wakanda:** the central animating spirit in Sioux spirituality
**Wikiup:** a domed American Indian hut (also **wigwam, wickiup** )
**Wite:** to blame (also **wyte** )
**Windigo:** an evil spirit in Algonquian mythology that overtakes a person with cannibalistic urges (also **wendigo** )
**Williwaw:** a violent, cold wind blowing down from a mountain (also **willyway** and **williwau)**
**Wigwag:** to move to and fro
**Wigwam:** a domed American Indian hut (also **wickiup** )
**Phew**. Every day I **tew** to **hew** the **yew**. Let me **shew** you my **thew**. You'll make a **whew**.
**Phew:** an expression of relief
**Tew:** to work hard
**Hew:** to cleave with an axe
**Yew:** several types of large, poisonous evergreen trees or shrubs that can live for thousands of years
**Shew:** to show
**Thew:** musculature (adj. **thewy** )
**Whew:** a sound made to illustrate amazement or relief
Once in a while, the **wittol wiss** es a **widdy** will take his **wifey awa**.
**Wittol:** a cuckold who permits or condones his wife's infidelity
**Wiss:** to wish
**Widdy:** a noose (also **widdie** )
**Wifey:** a wife
**Awa:** away
The **wivern** in the **welkin waff** s at the **raff**.
**Wivern** : a legendary dragon-headed lizard with a barbed tail (also **wyvern** )
**Welkin:** the celestial sphere, the vaulted sky
**Waff:** to wave
**Raff:** riffraff
**Wisha**! That **wany wavelet** 's a **waly**!
**Wisha:** an expression of surprise
**Wany:** visibly decreasing in size (also **waney** )
**Wavelet:** a small wave
**Waly:** something pleasing, especially to the eye (also **wally** )
I'm a **wee, twee tween**.
I'm **weer** , you **weet**.
I'm **weest** , you **wist**.
**Wee:** very little
**Twee:** affectedly or excessively cute
**Tween:** a child after mid-childhood but before puberty, generally between eight and twelve years old
**Weer:** even littler
**Weet:** to know, to wit
**Weest:** the littlest of a group
**Wist:** to know (past tense: **wis** ) ( **Wis** _and_ **wist** _are the only forms of this verb.)_
When I **wale** you **weel** you'll **wark** and **wawl** with the **weal** s and **whelk** s.
**Wale:** to injure, to create welts on the skin
**Weel:** well
**Wark:** to feel pain, to ache
**Wawl:** to cry like a cat (also **waul** )
**Weal:** a welt
**Whelk:** a pustule, a pimple
The **wiggy wack** wearing one **welly** awaits the **weka** in the **wahoo**.
**Wiggy:** insane
**Wack:** very bad, or a zany person
**Welly:** a rainboot (also **wellie** ) _(named after Arthur Wellesley, first Duke of Wellington, whose wearing of them popularized them among the British aristocracy in the early nineteenth century)_
**Weka:** a flightless bird of New Zealand also called the woodhen
**Wahoo:** a flowering American shrub with heart-shaped poisonous berries
**Ywis** , I **wiss** I **wis wha** the **wud wiz** was.
**Ywis:** absolutely (also **iwis** )
**Wiss:** to wish
**Wis:** knew (past tense of **wist:** to know, to be aware of)
**Wha:** who
**Wud:** crazy
**Wiz:** wizard
* * *
# X
The **prex** acts more like a **rex** : He likes to down **dex** , **rax** his rule, and eschews **lex** or **doxy**.
**Prex:** a president, usually of a college (also **prexy** )
**Rex:** a king (pl. **reges** ); or a species of cat with a single layer of fine hair, also known as the Cornish Rex (pl. **rexes** )
**Dex:** a sulfate used as a stimulant ( **dexy** and **dexie:** a tablet thereof)
**Rax:** to stretch or reach out
**Lex:** law (pl. **leges** )
**Doxy:** accepted ideas or doctrine
The **nix** on the **kex** was **vext** by the **nixy** she'd sent to the **pixy** on the **falx**.
**Nix:** a water sprite in German folklore (pl. **nixes, nixe** , a female is a **nixie** ), or to veto
**Kex:** any of several types of hollow-stalked plants
**Vext:** irritated, annoyed (a past tense of **vex** )
**Nixy:** mail that is undeliverable
**Pixy:** a mythical, miniature playful sprite
**Falx:** any sickle-shaped structure, generally used of blades or anatomical parts (pl. **falces** )
#### X Marks the Spot
One letter a player should _always_ be happy to see is the _x_ , with its high value and easy usability in two-letter words ( **ax, ex, xi, ox** , **xu** ). With all these two-letter combos, it's generally best (and easiest) to play the _x_ for big points by using it both horizontally and vertically at once.
Until the **xyst** is **fixt** , the **eaux** spoils the **jeux**.
**Xyst:** the covered portico of a Greek gymnasium used in inclement weather (also **xystus** )
**Fixt:** a past tense of _fix_
**Eaux:** waters (pl. of **eau:** water)
**Jeux:** games (pl. of **jeu:** game)
The **ixodid** —or is it a **cimex**?—is **dexter** of my **oxter**.
**Ixodid:** a tick
**Cimex:** a bedbug
**Dexter:** situated on the right (as opposed to **sinister** , on the left)
**Oxter:** the armpit
The **oxes** want their drinks to make a **fiz** , but since they don't have **gox** , they **lox** their soda so it's **oxo**.
**Oxes:** plural of **ox:** oaf _(as opposed to_ **_oxen_** _, which is the plural of the animal)_
**Fiz:** a sound similar to that of a carbonated beverage
**Gox:** gaseous oxygen
**Lox:** to supply with liquid oxygen
**Oxo:** containing oxygen (also **oxy** )
* * *
# Y
In the **ley** , the **gey fey dey fley** s the **bey** 's **quey**.
**Ley:** a meadow (also **lea** )
**Gey:** very
**Fey:** insane
**Dey:** the title given to rulers of some Ottoman Empire lands
**Fley:** to scare
**Bey:** an Ottoman Empire provincial governor
**Quey:** a heifer
####... and Sometimes Y
Have a rack with no vowels except a Y or two? **Oy**! But don't cry. Though the _y_ can be a trying letter to play, **ay** (a vote yes, also **aye** ), **ya** (you), **by** , **my** , **yo** , and **oy** give **ye** (you) some options. There are also some words that use only _y_ s as vowels.
Down the **wynd** and past the **wych** , the **sylph** with **syph** wonders the **whys** of the world.
**Wynd:** an alleyway
**Wych:** a type of European elm also known as the Scots elm
**Sylph:** a slender girl or young woman who moves gracefully
**Syph:** syphilis (-s)
**Why:** the reason for
The **pyknic** 's **tyke rykes** out into the **syke** with his **fyke**.
**Pyknic:** a person with a rotund or stocky build
**Tyke:** a toddler
**Ryke:** to reach
**Syke:** a small stream
**Fyke:** a fishing net
The **sayyid payed** many **tyiyn** and moved **inby** to see the **syzygy** painted on the **skyey** ceiling.
**Sayyid:** the title _lord_ or _sir_ used for a Muslim man (also **sayid** )
**Payed:** an alternate spelling of **paid**
**Tyiyn:** a monetary unit of Kyrgyzstan worth one-hundredth of a **som**
**Inby:** into a house or room
**Syzygy:** an alignment of three celestial bodies, as in the sun, moon, and earth during an eclipse
**Skyey:** sky-like
The **yeld yaud** was **yald**.
**Yeld:** a mature female not giving milk
**Yaud:** a mare of old age
**Yald:** robust, lively (also **yauld** )
* * *
# Z
Our **biz** is the best at cleaning the **zin** and **za** from your **tux.**
**Biz:** business
**Zin:** Zinfandel wine
**Za:** pizza
**Tux:** a tuxedo
The **zoril** holds a **zori** and a **zill**.
**Zoril:** a small African weasel (also **zorilla** , **zorrille** , **zorillo** )
**Zori:** a flat, thonged Japanese sandal
**Zill:** a finger cymbal
The **zek** gets a **zax** and an **adz** , but not a **zep**.
**Zek:** an inmate, particularly of a Soviet labor camp
**Zax:** a tool used in applying slate to a roof
**Adz:** to carve wood with an adz (a curved blade attached to a handle) (also **adze** )
**Zep:** a particular type of hoagie sandwich from eastern Pennsylvania
In the movie, the nasty **nazi** steals a **yagi** to control the **azo azon** from the **ghazi**.
**Nazi:** a racist fascist, or increasingly any dictatorial and intolerant person
**Yagi:** a type of antenna popular among amateur radio enthusiasts
**Azo:** containing nitrogen
**Azon:** a guided aerial bomb used by the Allies in World War II _(from AZimuth ONly)_
**Ghazi:** a Muslim war hero _(from the Ghazwa battles led by Muhammed)_
#### Jazzy Multi-Z Words You'll Probably Never Use
**Fezzed** , **Fezzy:** adjectives related to **fez** (a brimless hat worn by men in Turkey)
**Frizz:** to form into small, tight curls
**Hazzan:** a cantor (pl. **hazzanim** )
**Huzza:** to cheer ( **huzzah** , **huzzas** )
**Izzard:** the letter _z_
**Jazzbo:** a devotee of jazz
**Mezuzah:** a small Judaic scroll ( **mezuza** )
**Mizzen:** a type of sail
**Mizzle:** to rain in fine droplets ( **mizzly** )
**Muzzy:** confused
**Nuzzle:** to push with the nose
**Pizzle:** the penis of an animal _(Primarily in Australia and New Zealand, though it also crops up in James Joyce's_ Ulysses. _Also occasionally found in the combination "bull pizzle," denoting a whip made from a bull's penis, as in Samuel Beckett's radio play_ Rough for Radio II. _)_
**Zugzwang:** a move in a game—generally chess—that a player is compelled to make and that significantly worsens the player's position _(from the German meaning "compulsion to move") (I cannot think of such an instance occurring in Scrabble, as players always have the option of passing.)_
**Zuz:** an ancient Hebrew coin (pl. **zuzim** )
**Zzz:** used to express being asleep
The **zerk** is **oozy** , not **sizy**.
**Zerk:** a grease fitting, also known as a grease nipple
**Oozy:** containing or resembling soft mud or slime
**Sizy:** thick and sticky
* * *
### MISCELLANEOUS
" **Yah**! I feel sick. I think I'm going **ralph vomito** out my **os**."
"Don't! If you **bevomit** yourself with **vomitus** , I might **upchuck**."
"Well if you **urp** , I could **keck** a whole lot of **yech**."
" **Ick**! Watching **emesis** always makes me **regorge**!"
" **Oops** , I **woops** ed!"
" **Ugh** , you **spewers** are **ugsome**."
**Yah:** an exclamation of disgust
**Ralph:** to vomit
**Vomito:** the black vomit associated with yellow fever
**Os:** an orifice
**Bevomit:** to vomit all over oneself
**Vomitus:** vomit
**Upchuck:** to vomit
**Urp:** to vomit
**Keck:** to retch
**Yech:** something gross (also **yecch** , **yechy** , **yuch** , and **yucch** )
**Ick:** an expression of disgust
**Emesis:** the act of vomiting ( **emetic:** a substance that is ingested to induce emesis)
**Regorge:** to vomit
**Oops:** an expression of mild apology, surprise, or dismay
**Woops:** to vomit
**Ugh:** used to suggest a cough or grunt
**Spewer:** one who vomits
**Ugsome:** gross
The **guv** 's **luv** is a **fauve** who keeps her **kuvasz** in the **lav**.
**Guv:** a governor
**Luv:** a sweetheart
**Fauve:** a fauvist
**Kuvasz:** a large breed of dog with a white coat (also **kuvaszo** )
**Lav:** a bathroom
The **jato** felt more like a **rato** to the **dato** in the **rabato**.
**Jato:** a jet-assisted takeoff _(an example of an acronym turned playable word)_
**Rato:** a rocket-assisted takeoff _(another example of an acronym turned playable word)_
**Dato:** a tribal chief in the Philippines (also **datto** )
**Rabato:** a type of collar with a laced edge
A **gaga dodo** in a **bubu** , a **coocoo kaka** in a **mumu** , and a **chichi nana** in a **tutu** go to the **dada gogo**.
**Gaga:** insane
**Dodo:** a large extinct bird incapable of flight
**Bubu:** a large, flowing garment (also **boubou** )
**Coocoo:** lunatic
**Kaka:** a parrot native to New Zealand
**Mumu:** a long, loose-fitting dress (also **muumuu** )
**Chichi:** affectedly stylish
**Nana:** a grandmother
**Tutu:** a short skirt worn by ballerinas
**Dada:** an artistic movement interested in subverting rationality
**Gogo:** a disco party
The **mama ouistiti** says to the **papa dikdik** , "If you need to **weewee** or make a **doodoo** or **caca** , do it by the **kaki** or the **titi** outside the **haha**."
**Mama:** a mother
**Ouistiti:** a monkey native to South America
**Papa:** a father
**Dikdik:** a very small African antelope
**Weewee:** to urinate
**Doodoo:** fecal matter
**Caca:** fecal matter
**Kaki:** a Japanese persimmon tree
**Titi:** a type of evergreen shrub
**Haha:** a sunken fence, used to separate property without blighting the landscape
If you like this book, **bakshish** the bookseller or give a **cumshaw** to the librarian who showed it to you.
**Bakshish:** to give a tip ( **baksheesh** )
**Cumshaw:** a gift ( _usually a tip for service_ )
# SOURCES
Burkeman, Oliver. "Spellbound." Retrieved from www.guardian.co.uk/lifeandstyle/2008/jun/28/healthandwellbeing.familyandrelationships.
Fatsis, Stefan. _Word Freak: Heartbreak, Triumph, Genius, and Obsession in the World of Competitive Scrabble Players._ Boston: Houghton Mifflin, 2001.
Lacey, Marc. " 'Haboobs' Stir Critics in Arizona." Retrieved from www.nytimes.com/2011/07/22/us/22haboob.html.
McCarthy, Paul. _Letterati: An Unauthorized Look at Scrabble and the People Who Play It._ Toronto: ECW Press, 2008.
Morris, Chris. "Now Legal in Scrabble: Tnoindent, Blingy and Grrl." Retrieved from www.cnbc.com/id/42991790/Now_Legal_in_Scrabble_Tnoindent_Blingy_and_Grrl.
North American Scrabble Players Association (NASPA). Source for records. Retrieved from www.scrabbleplayers.org/w/Records.
Seattle Scrabble Club. Source for some lists. Retrieved from www.seattlescrabble.org.
Smith, Keith W. _Total Scrabble: The (Un)Official Scrabble Record Book_ , January 2009 Update. Retrieved from http://cross-tables.com/download/totalscrabble.pdf.
Spahn, Mark. Probabilities of various letter combinations. Retrieved from www.mathkb.com/Uwe/Forum.aspx/recreational/2449/Scrabble-probabilities.
Wallace, Robert. "A Man Makes a Best-Selling Game—Scrabble—and Achieves His Ambition (Spelled Out Above)." _Life,_ Vol. 35, No. 24 _._ Dec. 14, 1953.
Wapnick, Joel. _How to Play Scrabble Like a Champion._ New York: Puzzlewright Press, 2010.
# DICTIONARIES
_Official Scrabble Players Dictionary, Fourth Edition (OSPD)._ This is the current edition, published in 2005. Where older editions are mentioned, they refer to
_OSPD1_ , the _Official Scrabble Players Dictionary, First Edition._ 1978.
_OSPD2_ , the _Official Scrabble Players Dictionary, Second Edition._ 1993.
_OSPD3_ , the _Official Scrabble Players Dictionary, Third Edition._ 1996.
_Merriam-Webster's Collegiate Dictionary, Eleventh Edition_ , 2003. ( _MWCD11_ ).
_The Oxford English Dictionary_ ( _OED_ ), 1971 edition.
_Official Tournament and Club Word List, Second Edition. (OWL_ or _OWL2)._ Merriam-Webster, 2006.
Text copyright © 2012 by David Bukszpan.
All rights reserved. No part of this book may be reproduced in any form without written permission from the publisher.
ISBN 978-1-4521-1610-5 (eBook)
The Library of Congress has previously cataloged this title under ISBN 978-1-4521-0824-7
Designed by Neil Egan
Typesetting by DC Type
Illustrations by David Hopkins
Chronicle Books LLC
680 Second Street
San Francisco, California 94107
www.chroniclebooks.com
Panzer Division Marduk is the sixth album by the Swedish black metal band Marduk. It was recorded and mixed at The Abyss studios in January 1999 and released in June 1999 through Osmose Productions. The album's central theme is war, just as Nightwing's was blood and that of La Grande Danse Macabre (Marduk's next album) would be death, the three forming a trilogy of Marduk's "Blood, War and Death" vision of what black metal represents to them. Panzer Division Marduk is the last Marduk album released through Osmose Productions.
The original cover showed a photo of the Swedish version of the British Centurion tank, the Stridsvagn 104. A version reissued in 2008 instead showed a Panzer VI E "Tiger" tank, which reinforces the album's World War II theme and Germany's prominent role in that war.
The inner sleeve shows a column of tanks rolling triumphantly through a ruined city: it is the Red Army crossing the destroyed city of Berlin in 1945.
The World War II concept was taken literally by many adherents of National Socialist black metal, who came to regard the band as one of their own, even though vocalist Legion and guitarist and bandleader Morgan Håkansson have denied those rumors, stating that Marduk holds no political views and therefore does not put them in its lyrics, much less any Nazi ideology.
Track listing
Credits
Legion – vocals
Morgan Steinmeyer Håkansson – guitar
B. War – bass
Fredrik Andersson – drums
Peter Tägtgren – mixing
External links
PDM at the Encyclopaedia Metallum
1999 albums
Marduk albums
English-language albums
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title></title>
</head>
<body>
<form action="" method="POST" enctype="multipart/form-data">
{% csrf_token %}
<input type="file" name="alunos"/>
<input type="submit" value="Enviar"/>
</form>
</body>
</html>
#ifndef __KERN_PROC_TASK_H__
#define __KERN_PROC_TASK_H__
#include <kern/conf.h>
#include <time.h>
#include <mach/param.h>
#include <mach/types.h>
#include <kern/syscall.h>
#include <mt/tktlk.h>
#define TASK_LK_T zerotktlk
#define tasklk(lp) tktlk(lp)
#define taskunlk(lp) tktunlk(lp)
#define TASK_LK_BIT (1L << TASK_LK_BIT_POS)
#define TASK_LK_BIT_POS 0
/* process states */
#define TASKNEW 0
#define TASKREADY 1
#define TASKSLEEPING 2
#define TASKWAITING 3 // NEW
#define TASKSTOPPED 4
#define TASKZOMBIE 5
#define TASKNSTATE 6
struct tasktab {
TASK_LK_T lk;
void *tab;
};
struct tasklist {
struct task *ptr;
};
struct taskstk {
uint8_t *top;
void *sp;
void *base;
size_t size;
};
#define TASKWAITTABITEMS 29
#define TASKWAITTABSIZE (32 * WORDSIZE)
struct taskwaittab {
uintptr_t chan;
struct taskwaittab *next;
long n;
struct task *buf[TASKWAITTABITEMS];
};
#define TASKWAITHASHITEMS (1U << TASKWAITHASHBITS)
#define TASKWAITHASHBITS 10
#define TASKALLOCSIZE 1024
#define TASKBUFITEMS 32
/* process or thread attributes */
/* bits for schedflg-member */
#define TASKHASINPUT (1 << 0) // pending HID input
#define TASKISBOUND (1 << 1) // bound to a processor, cannot migrate
#define TASKXFERABLE (1 << 2) // task was added as transferable
#define TASKCATCHSIG (1 << 3) // sleeping thread awakened by signals
struct task {
/* thread control block - KEEP THIS FIRST in the structure */
struct m_task m_task; // machine thread control block
long id; // task ID
/* linkage */
struct proc *proc; // parent/owner process
struct task *prev; // previous in queue
struct task *next; // next in queue
/* execution state */
long argc; // # of command-line arguments
char **argv; // [textual] command-line arguments
long nenv; // # of environment strings
char **envp; // environment strings
/* system call context */
int errnum; // errno
struct sysctx sysctx; // current system call
/* signal state */
sigset_t sigmask; // signal mask
sigset_t sigpend; // pending signals
struct siginfo **sigqueue; // info structures for pending signals
/* scheduler parameters */
TASK_LK_T lk;
long unit; // CPU-affinity
long sched; // thread scheduler class
long flg; // received user input [interrupt]
long runprio; // current priority
long prio; // base priority
long sysprio; // kernel-mode priority
long nice; // priority adjustment
long state; // thread state
long score; // interactivity score
long slice; // timeslice in ticks
long runtime; // # of ticks run
long slptime; // # of ticks slept voluntarily
long slptick; // ID of tick when sleeping started
long ntick; // # of scheduler ticks received
long lastrun; // last tick we ran on
long firstrun; // first tick we ran on
long ntickleft; // # of remaining ticks of slice
long lasttick; // real last tick for affinity
uintptr_t waitchan; // wait channel
time_t timelim; // wakeup time or deadline
};
extern struct task *k_tasktab[TASKSMAX];
taskid_t taskgetid(void);
void taskputid(taskid_t id);
#define THRSTKSIZE (64 * 1024)
#define PIDSPEC_PID 0
#define PIDSPEC_TGID 1
#define PIDSPEC_PGID 2
#define PIDSPEC_SID 3
#define PIDSPEC_MAX 4
struct pid {
long num;
m_atomic_t cnt;
struct task *task;
struct pid *list;
struct pid *hash;
};
extern void tasksetsleep(struct task *task);
extern void tasksetwait(struct task *task);
#endif /* __KERN_PROC_TASK_H__ */
\section{Introduction}
The description of the collective motion (swarming) of multi-agent
aggregates resulting in large-scale structures is a striking
phenomenon, as illustrated by the examples provided by birds, fish,
bees or ants. Explaining the emergence of these coordinated movements
in terms of the microscopic decisions of each individual member of a
swarm is a hot topic of research in the natural sciences
\cite{camazine,couzin,parrish}. The formation of swarms and milling or
flocking patterns has been reported in animals with highly developed
social organization like insects (locusts, bees, ants, ...)
\cite{couzin}, fish \cite{BTTYB,Bi} and birds
\cite{camazine,parrish}, but also in micro-organisms such as
myxobacteria \cite{kw}. Moreover, the understanding of natural swarms
has been used as an engineering design principle for the operation of
unmanned artificial robots \cite{BDT,pegoel09}.
The physics and applied mathematics literature on these phenomena
has grown rapidly in recent years, with models
mainly based on two strategies of description:
individual-based models or particle dynamics
\cite{vicek,parrish,LR,camazine,mogilner,chate,couzin,DCBC,CS1,CS2,LLE,LLE2}
and continuum models based on PDEs for the density or for the
momentum of the particle ensemble
\cite{parrish,TB04,Top06,CDMBC,EVL}. The key feature to explain is
the emergence of self-organization: flocking, milling, double
milling patterns or other coherent behavior.
Particle descriptions usually include three basic mechanisms in
different regions: short-range repulsion zone, long-range attraction
zone and alignment or orientation zone, leading to the so-called
\emph{three-zone models}. In addition, some of them incorporate a
mechanism for establishing a fixed asymptotic speed/velocity vector of
agents, as is usually observed in nature. Some of the models only
consider the orientation vector and not the speed in their discrete
version. The main differences of all these models reside in how these
three interactions are specifically considered. We will mainly work
with two generic examples in which several of the effects above are
included, namely the model for self-propelled interacting particles
introduced by D'Orsogna et al.\ in \cite{DCBC} and the model of
alignment proposed by Cucker and Smale \cite{CS1,CS2}.
Together with particle and continuum models based on macroscopic
densities, there has been a very recent trend of mesoscopic models by
means of kinetic equations for swarming
\cite{HT08,CDP,HL08,cfrt09}. In these models one works with a
statistical description of the interacting agent system. Let us
represent by $x \in \R^d$ the position, where $d \geq 1$ stands for
the physical space dimension, and by $v \in \R^d$ the velocity. We are
interested in studying the evolution of $f = f(t,x,v)$ representing
the probability measure/density of individuals at position $x$, with
velocity $v$, and at time $t \geq 0$. These are the models we study in
the present paper. Given that we cover a variety of them, we refer the
reader to Section \ref{sec:bg} for a more detailed presentation of the
equations.
These kinetic models bridge the particle description of swarming to
the hydrodynamic one, as already discussed in \cite{HT08,CDP,HL08}. The
key idea is that solutions to particle systems are in fact
atomic-measure solutions for the kinetic equations, and solutions to
the hydrodynamic equations are solutions of a special form to the
kinetic equation; see Section \ref{sec:hydro} for more details.
In some cases, suitable compactness arguments based on the stability
properties in distances between probability measures allow one to
construct a well-posedness theory for a kinetic equation. Such an
approach was done for the Vlasov equation in classical kinetic theory
\cite{neunzert,BH,dobru,spohn} with several nice reviews in
\cite{neunzert2,spohn2,Gol}. Of these references, \cite{dobru} uses
the Monge-Kantorovich-Rubinstein distance (the one we use in the
present paper); the others, as well as the recent work \cite{HL08} for
the kinetic Cucker-Smale model, use an approach based on the bounded
Lipschitz distance.
In this paper we present a generic approach to the well-posedness
of many of these models in the set of probability measures in
phase space based on the modern theory of optimal transport
\cite{Villani}. In fact, we will use the well-known
Monge-Kantorovich-Rubinstein distance between probability measures
instead of the bounded Lipschitz distance. Its better duality
properties actually make this technical approach easier in terms
of estimates leading to one of our crucial results: a stability
property of solutions to swarming equations under quite general
conditions.
We derive some consequences from this stability estimate. First, we
prove the mean-field limit, or convergence of the particle method
towards a measure solution of the kinetic equation. This mean field
limit is then established without resorting to the BBGKY hierarchy
or the molecular chaos hypothesis \cite{Bo,CDP,HL08}. Second, we show
the stability for arbitrary times of the hydrodynamic solutions,
assuming they exist, although with constants depending on
time. Finally, the stability result can be used to obtain qualitative
properties of the measure solutions of the kinetic equations, as it
has been done in \cite{cfrt09} for the kinetic Cucker-Smale model.
This strategy is quite general, and we first demonstrate its use in a
particular kinetic model introduced in \cite{CDP} for dealing with the
mesoscopic description and certain patterns not covered by the
particle model proposed in \cite{DCBC}. Other models are treated by
the same procedure in subsequent sections, as the kinetic Cucker-Smale
model proposed in \cite{HT08} for the original alignment mechanism in
\cite{CS1,CS2}, the models studied in \cite{LLE,LLE2}, or any linear
combination of these mechanisms. We finally give general conditions on
a model that are sufficient for our well-posedness results to be
valid.
Let us comment on some limitations of the method we use. The first
one is that, as we work with solutions in a weak measure sense, we
have to require our interaction terms to be locally Lipschitz in
order to carry out the theory. This is a well-known
limitation in the literature for working with the mean field limit
and measure solutions, see \cite{spohn2,HJ07} and the references
therein. A less fundamental one is that we always work with
compactly supported solutions. One could probably develop a theory
substituting this condition by a suitable control on moments of
the solution, and then adapting the estimates to this setting;
however, in the present paper we do not pursue further extensions
in this direction.
The next section gives a brief review of the main interacting
particle systems under analysis and of the concepts needed for the
Monge-Kantorovich-Rubinstein distance between probability
measures. The third section is devoted to the proof of the main result
of existence, uniqueness and stability of measure solutions to the
particular swarming equations introduced in \cite{CDP}. Section 4
generalizes this approach to a general family of these
equations. Finally, Section 5 draws some consequences of the stability
property: the convergence of the particle method and the mean-field
limit are proved for the general model, while the stability in a
finite time interval of hydrodynamic solutions is shown for the
swarming model used in Section \ref{sec:swarming_model}.
\section{Preliminaries}
\label{sec:bg}
In this section we introduce the models mentioned in the
introduction. We give some particular representative cases and specify
the models to which our results apply. We also recall some notions
from optimal transport that will be needed later.
\subsection{Main Kinetic Models}
The particle model proposed in \cite{DCBC} reads as:
\begin{equation*}
\left\lbrace
\begin{array}{ll}
\displaystyle \frac{dx_i}{dt} = {v}_i,
&\qquad (i = 1,\dots,N)
\vspace{.3cm}
\\
\displaystyle \frac{dv_i}{dt} = (\alpha - \beta \,|{v}_i|^2) {v}_i
- \frac1N \sum_{j \neq i } \nabla U (|x_i - x_j|),
&\qquad (i = 1,\dots,N).
\end{array}
\right.
\end{equation*}
where $\alpha$, $\beta$ are nonnegative parameters,
$U:\R^d\longrightarrow \R$ is a given potential modeling the
short-range repulsion and long-range attraction typical in these
models, and $N$ is the number of particles. Here, the potential has
been scaled depending on the mass of each particle as in \cite{CDP},
to which we refer for further discussion. The term corresponding to
$\alpha$ models the self-propulsion of individuals, whereas the term
corresponding to $\beta$ is the friction assumed to follow Rayleigh's
law. The balance of these two terms imposes an asymptotic speed to the
agent (if other effects are ignored), but does not influence the
orientation vector. A typical choice for $U$ is the Morse potential
which is radial and given by
$$
U(x)=k(|x|) \qquad \mbox{with} \qquad k(r) = -C_A e^{-r /\ell_A} +
C_R e^{-r /\ell_R},
$$
where $C_A, C_R$ and $\ell_A, \ell_R$ are the strengths and the
typical lengths of attraction and repulsion, respectively. This
potential does not satisfy the smoothness assumption in our main
theorems but the qualitative behavior of the particle system does not
depend on this particular fact \cite{DCBC}. In fact, a typical
potential satisfying all of our hypotheses is
$$
U(x) = -C_A e^{-|x|^2 /\ell_A^2} + C_R e^{-|x|^2 /\ell_R^2}.
$$
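As an illustration (outside the analysis of this paper), the particle system above with the smooth Gaussian potential can be simulated directly. The following sketch uses an explicit Euler scheme; all parameter values are arbitrary choices for the experiment, not taken from \cite{DCBC}.

```python
import numpy as np

# Illustrative parameter choices (not prescribed by the model).
alpha, beta = 1.0, 0.5                     # self-propulsion and friction
C_A, l_A, C_R, l_R = 1.0, 2.0, 2.0, 0.5    # attraction/repulsion strengths and lengths

def grad_U(r):
    """Gradient of U(x) = -C_A exp(-|x|^2/l_A^2) + C_R exp(-|x|^2/l_R^2),
    evaluated at displacement vectors r of shape (..., d)."""
    r2 = np.sum(r * r, axis=-1, keepdims=True)
    return (2 * C_A / l_A**2 * np.exp(-r2 / l_A**2)
            - 2 * C_R / l_R**2 * np.exp(-r2 / l_R**2)) * r

def step(x, v, dt):
    """One explicit Euler step of the N-particle system."""
    N = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]        # x_i - x_j, shape (N, N, d)
    force = -np.sum(grad_U(diff), axis=1) / N   # -(1/N) sum_j grad U(x_i - x_j)
    dv = (alpha - beta * np.sum(v * v, axis=1, keepdims=True)) * v + force
    return x + dt * v, v + dt * dv

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 2))
v = rng.normal(size=(50, 2))
for _ in range(2000):
    x, v = step(x, v, dt=0.01)
# The balance of propulsion and friction drives speeds toward sqrt(alpha/beta).
speeds = np.linalg.norm(v, axis=1)
```

Depending on the parameters, the asymptotic state exhibits the flocking or milling patterns discussed in \cite{DCBC}, with individual speeds close to $\sqrt{\alpha/\beta}$.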
The kinetic equation associated to this particle model as discussed in
\cite{CDP} gives the evolution of $f = f(t,x,v)$ as
\begin{equation}
\label{eq:swarming}
\partial_t f + v \cdot \grad_x f
- (\grad U * \rho) \cdot \grad_v f
+ \dv_v((\alpha - \beta \abs{v}^2) v f)
= 0,
\end{equation}
where $\rho$ represents the macroscopic \emph{density} of $f$:
\begin{equation}
\label{eq:def-rho}
\rho(t,x) := \int_{\R^d} f(t,x,v) \,dv
\quad \text{ for } t \geq 0, x \in \R^d.
\end{equation}
In the Cucker-Smale model, introduced in \cite{CS1,CS2}, the only
mechanism taken into account is the reorientation interaction between
agents. Each agent in the swarm tries to mimic the other individuals by
adjusting/averaging its relative velocity with all the others. This
averaging is weighted in such a way that closer individuals have more
influence than further ones. For a system with $N$ individuals the
Cucker-Smale model reads as
$$
\left\{\begin{array}{lr} \displaystyle\frac{dx_i}{dt} = v_i, \\[3mm]
\displaystyle\frac{dv_i}{dt} = \frac1N \displaystyle\sum_{j=1}^{N}
w_{ij} \left(v_j - v_i\right) ,
\end{array}\right.
$$
with the \emph{communication rate} $w(x)$ given by:
$$
w_{ij} = w(|x_i-x_j|)= \frac 1 {\left(1 +|x_i - x_j|^2
\right)^\gamma}
$$
for some $\gamma \geq 0$. This particle model leads to the following
kinetic model \cite{HT08,HL08,cfrt09}:
\begin{equation}\label{eq:CS}
\frac{\partial f}{\partial t} + v\cdot \nabla_x f = \nabla_v \cdot
\left[\xi[f] \,f\right]
\end{equation}
where $\xi[f](x,v,t) = \left( H \ast f \right)(x,v,t)$, with
$H(x,v)=w(x)v$ and $\ast$ standing for the convolution in both
position and velocity ($x$ and $v$). We refer to \cite{CS1,CS2,cfrt09}
for further discussion about this model and qualitative properties.
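The alignment mechanism is easy to observe numerically. The sketch below integrates the Cucker-Smale particle system with explicit Euler; the value of $\gamma$ and all other parameters are illustrative choices. The mean velocity is conserved (the weights $w_{ij}$ are symmetric) and the spread of velocities around it decreases, which is the flocking behavior studied in \cite{CS1,CS2}.

```python
import numpy as np

gamma = 0.25   # communication decay exponent; illustrative choice

def cs_step(x, v, dt):
    """One explicit Euler step of the Cucker-Smale system."""
    N = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]
    w = 1.0 / (1.0 + np.sum(diff * diff, axis=-1)) ** gamma   # w_ij
    dv = (w[:, :, None] * (v[None, :, :] - v[:, None, :])).sum(axis=1) / N
    return x + dt * v, v + dt * dv

rng = np.random.default_rng(1)
x = rng.normal(size=(40, 2))
v = rng.normal(size=(40, 2))
v_mean = v.mean(axis=0)                                   # conserved quantity
spread0 = np.linalg.norm(v - v_mean, axis=1).max()        # initial velocity spread
for _ in range(5000):
    x, v = cs_step(x, v, dt=0.01)
spread = np.linalg.norm(v - v.mean(axis=0), axis=1).max() # shrinks over time
```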
Moreover, quite general models incorporating the three effects
previously discussed have been considered in \cite{LLE,LLE2}. In
particular, they consider that $N$ individuals follow the system:
\begin{equation}\label{eq:lle}
\left\lbrace
\begin{array}{l}
\displaystyle \frac{dx_i}{dt} = {v}_i, \vspace{.3cm}\\
\displaystyle \frac{dv_i}{dt} = F^A_i + F^I_i,
\end{array}
\right.
\end{equation}
where $F^A_i$ is the self-propulsion autonomously generated by the
$i$th-individual, while $F^I_i$ is due to interaction with the
others. The model in Section 3 corresponds to $F^A_i = (\alpha - \beta
\,|{v}_i|^2) {v}_i$, while the term $F^A_i= - \beta \, {v}_i$ is
considered in \cite{LR}, and $F^A_i= a_i - \beta \, {v}_i$ in
\cite{LLE,LLE2}. Here, $a_i$ is an autonomous self-propulsion force
generated by the $i$th-particle, and may depend on environmental
influences and the location of the particle in the school. The
interaction with other individuals can be generally modeled as:
$$
F^I_i=F^{I,x}_i + F^{I,v}_i = \sum_{j=1}^N
g_\pm(|x_i-x_j|)\frac{x_j-x_i}{|x_i-x_j|} + \sum_{j=1}^N
h_\pm(|v_i-v_j|)\frac{v_j-v_i}{|v_i-v_j|} .
$$
Here, $g_+$ and $h_+$ ($g_-$ and $h_-$) are chosen when the influence
comes from the front (behind), i.e., if $(x_j-x_i)\cdot v_i> 0$
($<0$); choosing $g_+ \neq g_-$ and $h_+ \neq h_-$ means that the
forces from particles in front and those from particles behind are
different. The sign of the functions $g_\pm(r)$ encodes the
short-range repulsion and long-range attraction for particles in front
of (+) and behind (-) the $i$th-particle. Similarly, $h_+>0$ ($<0$)
implies that the velocity-dependent force makes the velocity of
particle $i$ get closer to (away from) that of particle $j$.
In the next sections we will be concerned with the well-posedness for
measure solutions to \eqref{eq:swarming}, \eqref{eq:CS} and
generalized kinetic equations including the corresponding to the
$N$-individuals model in \eqref{eq:lle}.
\subsection{Preliminaries on mass transportation and notation}
\label{sec:transportation}
Let us recall some notation and known results about mass
transportation that we will use in the next sections. For a more
detailed approach, the interested reader can refer to
\cite{CT,Villani}.
We consider the space of probability measures $\Po(\R^d)$, consisting
of all probability measures on $\R^d$ with finite first moment. In
$\Po(\R^d)$ a natural concept of distance to work with is the
so-called \emph{Monge-Kantorovich-Rubinstein distance},
\begin{equation}
W_1(f,g) = \sup \left \{ \left |\int_{\R^d} \varphi(P)
(f(P)-g(P))\, d P \right |, \varphi \in \lip(\R^d),
\lip(\varphi)\leq 1 \right \}, \label{w1char}
\end{equation}
where $\lip(\R^d)$ denotes the set of Lipschitz functions on $\R^d$
and $\lip(\varphi)$ the Lipschitz constant of a function
$\varphi$. Denoting by $\Lambda$ the set of transference plans between
the measures $f$ and $g$, i.e., probability measures in the product
space $\R^d \times \R^d$ with first and second marginals $f$ and $g$
respectively, then we have
\begin{equation}\label{wpdef}
W_1(f, g) = \inf_{\pi\in\Lambda} \left\{ \int_{\R^d \times \R^d}
\vert P_1 - P_2 \vert \, d \pi(P_1, P_2) \right\}
\end{equation}
by Kantorovich duality. $\Po(\R^d)$ endowed with this distance is
a complete metric space. In the following proposition we recall
some of its properties. We refer to \cite{Villani} for a survey of
these basic facts.
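In one dimension, the distance \eqref{wpdef} can be evaluated exactly for empirical measures, e.g.\ with SciPy's \texttt{wasserstein\_distance}, which computes $W_1$ as the $L^1$ distance between quantile functions. The following minimal check uses two three-atom measures; the atom locations are arbitrary.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two empirical measures on the line: f = (1/3) sum of Diracs at xs,
# g = (1/3) sum of Diracs at ys (equal weights by default).
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([0.5, 1.5, 2.5])
d = wasserstein_distance(xs, ys)
# The optimal transference plan moves each atom to the nearest shifted
# atom, so translating every atom by h gives W_1 = |h|; here d = 0.5.
```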
\begin{prp}[$W_1$-properties]\label{w2properties}
The fol\-lowing properties of the distance $W_1$ hold:
\begin{enumerate}
\item[i)] {\bf Optimal transference plan:} The infimum in the
definition of the distance $W_1$ is achieved. Any joint
probability measure $\Pi_o$ satisfying:
$$
W_1(f, g) = \int_{\R^d \times \R^d} \vert P_1 - P_2 \vert \,
d\Pi_o(P_1, P_2).
$$
is called an optimal transference plan, and it is generically
non-unique for the $W_1$-distance.
\item[ii)] {\bf Convergence of measures:} Given $\{f_k\}_{k\ge 1}$
and $f$ in $\Po(\R^d)$, the following three assertions are
equivalent:
\begin{itemize}
\item[a)] $W_1(f_k, f)$ tends to $0$ as $k$ goes to infinity.
\item[b)] $f_k$ tends to $f$ weakly-* as measures as $k$ goes to
infinity and
$$
\sup_{k\ge 1} \int_{\vert v \vert > R} \vert v \vert \, f_k(v) \,
dv \to 0 \, \mbox{ as } \, R \to +\infty.
$$
\item[c)] $f_k$ tends to $f$ weakly-* as measures and
$$
\int_{\R^d} \vert v \vert \, f_k(v) \, dv \to
\int_{\R^d} \vert v \vert \, f(v) \, dv \, \mbox{ as } \,
k \to + \infty.
$$
\end{itemize}
\end{enumerate}
\end{prp}
Throughout the paper we will denote the integral of a function
$\varphi = \varphi(x)$ with respect to a measure $\mu$ by $\int
\varphi(x) \mu(x) \,dx$, even if the measure is not absolutely
continuous with respect to Lebesgue measure, and hence does not
have an associated density.
Given a probability measure $f \in \Po(\R^d \times \R^d)$ we
always denote by $\rho$ its first marginal, written as follows by
an abuse of notation:
\begin{equation}
\label{eq:def-rho-abuse}
\rho(x) := \int_{\R^d} f(x,v) \,dv.
\end{equation}
To be more precise, $\rho$ is given by its action on a
$\mathcal{C}^0_c$ function $\phi: \R^d \to \R$,
\begin{equation*}
\int_{\R^d} \rho(x) \phi(x) \,dx
= \int_{\R^d \times \R^d} f(x,v) \phi(x) \,dx\,dv.
\end{equation*}
For $T > 0$ and a function $f:[0,T] \to \Po(\R^d \times \R^d)$, it
is understood that $\rho$ is the function $\rho: [0,T] \to
\Po(\R^d)$ obtained by taking the first marginal at each time $t$.
Whenever we need to indicate explicitly the dependence of $\rho$
on $f$, we write $\rho[f]$ instead of just $\rho$.
We denote by $B_R$ the closed ball with center $0$ and radius $R > 0$
in the Euclidean space $\R^n$ of some dimension $n$. When we need to
explicitly indicate the dimension of the space, we will write
$B_R^n$. For a function $H : \R^n \to \R^m$, we will write $Lip_R(H)$
to denote the Lipschitz constant of $H$ in the ball $B_R \subseteq
\R^n$. For $T > 0$ and a function $H : [0,T] \times \R^n \to \R^m$, $H
= H(t,x)$, we again write $Lip_R(H)$ to denote the Lipschitz constant
\emph{with respect to $x$} of $H$ in the ball $B_R \subseteq \R^n$;
this is, $Lip_R(H)$ is the smallest constant such that
\begin{equation*}
\abs{H(t,x_1) - H(t,x_2)} \leq Lip_R(H) \abs{x_1 - x_2}
\quad
\text{ for all } x_1,x_2 \in B_R, t \in [0,T].
\end{equation*}
For any such function $H$, we will denote the function depending on
$x$ at a fixed time $t$ by $H_t$; this is, $H_t(x) := H(t,x)$.
\section{Well-posedness for a system with interaction and
self-propulsion}
\label{sec:swarming_model}
In this section we consider eq. \eqref{eq:swarming}. In this model
(and in fact, in every model considered in this paper) the total mass
is preserved, and by rescaling the equation and adapting the
parameters suitably one easily sees that we can normalize the equation
and consider only solutions with total mass $1$. We will do so and
restrict ourselves to working with probability measures.
\subsection{Notion of solution}
In order to motivate our definition of solution to equation
\eqref{eq:swarming} let us consider for a moment a general field $E$
instead of $-\nabla U * \rho$. Precisely, fix $T > 0$ and a function
$E:[0,T] \times \R^d \to \R^d$ such that:
\begin{hyp}[Conditions on $E$]
\label{hyp:E-conditions}
\begin{enumerate}
\item $E$ is continuous on $[0,T] \times \R^d$,
\item For some $C_E > 0$,
\begin{equation}
\label{eq:E-growth}
\abs{E(t,x)} \leq C_E(1 + \abs{x}),
\quad \text{ for all } t,x \in [0,T] \times \R^d,\text{ and}
\end{equation}
\item $E$ is \emph{locally Lipschitz with respect to $x$}, i.e., for
any compact set $K \subseteq \R^d$ there is some $L_K > 0$ such
that
\begin{equation}
\label{eq:Lipschitz-resp-x}
\abs{E(t,x) - E(t,y)} \leq L_K \abs{x-y},
\qquad t \in [0,T], \quad x,y \in K.
\end{equation}
\end{enumerate}
\end{hyp}
\noindent
We consider the equation
\begin{equation}
\label{eq:swarming-E}
\partial_t f + v \cdot \grad_x f
+ E \cdot \grad_v f
+ \dv_v((\alpha - \beta \abs{v}^2) v f)
= 0,
\end{equation}
which is a linear first-order equation. The associated
characteristic system of ode's is
\begin{subequations}
\label{eq:characteristics}
\begin{align}
\label{eq:characteristicsX}
\frac{d}{dt} X &= V,
\\
\label{eq:characteristicsV}
\frac{d}{dt} V &= E(t,X) + V(\alpha - \beta \abs{V}^2).
\end{align}
\end{subequations}
\begin{lem}[Flow Map]\label{lem:exist_cont_charact}
Take a field $E:[0,T] \times \R^d \to \R^d$ satisfying Hypothesis
\ref{hyp:E-conditions}. Given $(X_0,V_0)\in \R^d \times \R^d$ there
exists a unique solution $(X,V)$ to equations
\eqref{eq:characteristicsX}-\eqref{eq:characteristicsV} in
$\mathcal{C}^1([0,T];\R^d \times \R^d)$ satisfying $X(0)=X_0$ and
$V(0)=V_0$. In addition, there exists a constant $C$ which depends
only on $T$, $\abs{X_0}$, $\abs{V_0}$, $\a$, $\b$ and the constant
$C_E$ in eq. \eqref{eq:E-growth}, such that
\begin{equation}
\label{eq:chars-bounded}
\abs{ (X(t), V(t)) }
\leq \abs{(X_0,V_0)} e^{Ct}
\quad
\text{ for all } t\in[0,T].
\end{equation}
\end{lem}
\begin{proof}
As the field $E$ satisfies the regularity and growth conditions in
Hypothesis \ref{hyp:E-conditions}, standard results in ordinary
differential equations show that for each initial condition
$(X(0),V(0)) \in \R^d \times \R^d$ this system has a unique solution
defined on $[0,T)$ (the only term in the equations which does not
grow linearly is $-\beta V \abs{V}^2$, and it makes $\abs{V}$
decrease, so the solution is globally defined in time). The bound
\eqref{eq:chars-bounded} on the solutions follows from direct
estimates on the equation, using the linear growth of the field $E$.
\end{proof}
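The flow map of Lemma \ref{lem:exist_cont_charact} can be approximated numerically for any concrete field $E$ satisfying Hypothesis \ref{hyp:E-conditions}. The sketch below integrates the characteristic system \eqref{eq:characteristics} with SciPy for an illustrative linear field (our choice, not from the paper); the trajectory stays bounded, consistent with \eqref{eq:chars-bounded}.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta = 1.0, 0.5

def E(t, X):
    """Illustrative field: continuous, of linear growth,
    locally Lipschitz in x, so Hypothesis 1 holds."""
    return -0.1 * X

def rhs(t, P):
    # P = (X, V) in R^d x R^d with d = 2
    X, V = P[:2], P[2:]
    dX = V
    dV = E(t, X) + (alpha - beta * np.dot(V, V)) * V
    return np.concatenate([dX, dV])

P0 = np.array([1.0, 0.0, 0.0, 2.0])    # initial condition (X_0, V_0)
sol = solve_ivp(rhs, (0.0, 10.0), P0, rtol=1e-8, atol=1e-10)
X_T, V_T = sol.y[:2, -1], sol.y[2:, -1]   # flow map applied to P0 at t = 10
```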
Calling $P \equiv (X,V)$, the system
(\ref{eq:characteristicsX})-(\ref{eq:characteristicsV}) can be
conveniently written as
\begin{equation}
\label{eq:characteristics-short}
\frac{d}{dt} P = \Psi_E(t, P),
\end{equation}
where $\Psi_E : [0,T] \times \R^d \times \R^d \to \R^d \times \R^d$ is
the right hand side of eqs. \eqref{eq:characteristicsX},
\eqref{eq:characteristicsV}. When the field $E$ is understood we will
just write $\Psi$ instead of $\Psi_E$. Using this notation, equation
(\ref{eq:swarming-E}) can also be rewritten as
\begin{equation}
\label{eq:swarming-E-short}
\frac{\partial f}{\partial t} + \dive(\Psi_Ef) = 0.
\end{equation}
We can thus consider the flow at time $t \in [0,T)$ of eqs. \eqref{eq:characteristics},
\begin{equation*}
\calT_E^t:\R^d \times \R^d \to \R^d \times \R^d.
\end{equation*}
Again by basic results in ode's, the map $(t,x,v) \mapsto
\calT_E^t(x,v)=(X,V)$ with $(X,V)$ the solution at time $t$ to
\eqref{eq:characteristics} with initial data $(x,v)$, is jointly
continuous in $(t,x,v)$. For a measure $f_0 \in \Po(\R^d \times
\R^d)$ it is well-known that the function
\begin{equation*}
f: [0,T) \to \Po(\R^d\times\R^d),
\quad
t \mapsto f_t := \calT_E^t \# f_0
\end{equation*}
is a measure solution to eq. \eqref{eq:swarming-E}, i.e., a
solution in the distributional sense. Here we are using the mass
transportation notation of \emph{push-forward}: $f_t = \calT_E^t \#
f_0$ is defined by
\begin{equation}\label{eq:pushforward}
\int_{\R^{2d}} \zeta(x,v) \,f(t,x,v)\,d(x,v) = \int_{\R^{2d}}
\zeta(\calT_E^t(x,v)) \,f_0(x,v) \,d(x,v),
\end{equation}
for all $\zeta \in \mathcal C_b^0(\R^{2d})$. Note that in the case
where the initial condition $f_0$ is regular (say,
$\mathcal{C}^\infty_c$) this is just a way to rewrite the solution
of the equation through the method of characteristics. This
motivates the following definition:
\begin{dfn}[Notion of Solution]
\label{dfn:solution}
Take a potential $U \in \mathcal{C}^{1}(\R^d)$ such that
\begin{equation}
\label{eq:U-growth}
\abs{ \nabla U (x) } \leq C (1+\abs{x}),
\qquad x \in \R^d,
\end{equation}
for some constant $C > 0$. Take also a measure $f_0 \in \P_1(\R^d
\times \R^d)$, and $T \in (0,\infty]$. We say that a function
$f:[0,T] \to \P_1(\R^d \times \R^d)$ is a solution of the swarming
equation \eqref{eq:swarming} with initial condition $f_0$ when:
\begin{enumerate}
\item The field $E[f] = -\nabla U * \rho$ satisfies the conditions in
Hypothesis \ref{hyp:E-conditions}.
\item It holds $f_t = \calT_{E[f]}^t \# f_0$.
\end{enumerate}
\end{dfn}
\begin{rem}\label{rem:unboundE}
This definition gives a convenient condition on $U$ so that a
measure solution in $\P_1(\R^d \times \R^d)$ makes sense. One can
weaken the requirement on $U$ in this definition as long as the
requirements on $f$ are suitably strengthened (e.g., one can allow a
faster growth of the potential if one imposes a faster decay of $f$,
or less local regularity of $U$ if one assumes more regularity of
$f$), but we will not consider these modifications in the present
paper.
Since we ask the gradient of the potential to be locally Lipschitz,
we cannot consider potentials with a singularity at the origin. This
is a strong limitation of the classical theory, and is considered a
difficult problem for the mean-field limit. As for the existence
theory, if one wants to consider more singular potentials, one can
work with functions $f$ which are more regular than just measures,
so that $\nabla U * \rho$ becomes locally Lipschitz and a parallel
existence theory can be developed.
\end{rem}
\subsection{Estimates on the characteristics}
We gather in this section some estimates on solutions to the
characteristic equations \eqref{eq:characteristics}. In this section
we fix $T > 0$ and fields $E,E^1,E^2 : [0,T] \times \R^d \to \R^d$
which are assumed to satisfy Hypothesis \ref{hyp:E-conditions}, and we
consider their corresponding characteristic equations
\eqref{eq:characteristics}. Recall that $\Psi_E$ is a shorthand for
the right hand side of \eqref{eq:characteristics}, as in
\eqref{eq:characteristics-short}.
We first gather some basic regularity results for the function
which defines the right hand side of eqs.
\eqref{eq:characteristicsX}--\eqref{eq:characteristicsV}:
\begin{lem}[Regularity of the characteristic equations]
\label{lem:reg-chareqs}
Take a field $E: [0,T] \times \R^d \to \R^d$ which satisfies
Hypothesis \ref{hyp:E-conditions}. Consider a number $R > 0$ and the
closed ball $B_R \subseteq \R^d \times \R^d$.
\begin{enumerate}
\item $\Psi_E$ is bounded in compact sets: For $P = (X, V) \in B_R$
and $t \in [0,T]$,
\begin{equation*}
\abs{\Psi_E(t,P)}
\leq
C
\end{equation*}
for some $C > 0$ which depends only on $\alpha$, $\beta$, $R$, and
$\norm{E}_{L^\infty([0,T]\times B_R)}$.
\item $\Psi_E$ is locally Lipschitz with respect to $x,v$: For all
$P_1 = (X_1, V_1)$, $P_2 = (X_2, V_2)$ in $B_R$, and $t \in
[0,T]$,
\begin{equation*}
\abs{\Psi_E(t,P_1) - \Psi_E(t,P_2)}
\leq
C (1+Lip_R(E_t)) \abs{P_1 - P_2},
\end{equation*}
for some number $C > 0$ which depends only on $\alpha$ and $\beta$.
\end{enumerate}
\end{lem}
\begin{proof}
This can be obtained by a direct calculation from
eqs. \eqref{eq:characteristicsX}--\eqref{eq:characteristicsV}.
\end{proof}
\begin{lem}[Dependence of the characteristic equations on $E$]
\label{lem:dep-chareqs-E}
Take two fields $E^1, E^2 : [0,T] \times \R^d \to \R^d$ satisfying
Hypothesis \ref{hyp:E-conditions}, and consider the functions
$\Psi_{E^1}$, $\Psi_{E^2}$ which define the characteristic equations
\eqref{eq:characteristics} as in
eq. \eqref{eq:characteristics-short}. Then, for any compact (in
fact, any measurable) set $B$,
\begin{equation*}
\norm{\Psi_{E^1} - \Psi_{E^2}}_{L^\infty(B)}
\leq
\norm{E^1 - E^2}_{L^\infty(B)}.
\end{equation*}
\end{lem}
\begin{proof}
Trivial from the expression of $\Psi_{E^1}$, $\Psi_{E^2}$.
\end{proof}
Now we explicitly state some results which give a quantitative
bound on the regularity of the flow $\calT_E^t$, and its dependence on
the field $E$.
\begin{lem}[Dependence of characteristics on $E$]
\label{lem:dep-chars-E}
Take two fields $E^1, E^2 : [0,T] \times \R^d \to \R^d$ satisfying
Hypothesis \ref{hyp:E-conditions}, and a point $P^0 \in \R^d \times
\R^d$. Take $R > 0$, and assume that
\begin{equation*}
\abs{\calT_{E^1}^t(P^0)} \leq R,
\quad
\abs{\calT_{E^2}^t(P^0)} \leq R
\qquad \mbox{for }t \in [0,T].
\end{equation*}
Then, for $t \in [0,T]$ and some constant $C$ which depends only
on $\alpha$, $\beta$, $R$ and $\Lip_R(E^1)$, it holds that
\begin{equation}
\label{eq:dep-chars-E}
\abs{\calT_{E^1}^t(P^0) - \calT_{E^2}^t(P^0)}
\leq
\frac{e^{C t} - 1}{C}
\sup_{s \in [0,T]} \norm{E^1_s - E^2_s}_{L^\infty(B_R)}.
\end{equation}
\end{lem}
\begin{proof}
For ease of notation, write $P_i(t) \equiv \calT_{E^i}^t(P^0) \equiv
(X_i(t), V_i(t))$, for $i = 1,2$, $t \in [0,T]$. These functions
satisfy the characteristic equations \eqref{eq:characteristics}:
\begin{gather*}
\frac{d}{dt} P_i = \Psi_{E^i}(t,P_i), \quad P_i(0) = P^0,
\quad \text{ for } i = 1,2 .
\end{gather*}
\end{gather*}
Then, for $t \in [0,T]$, and using Lemmas \ref{lem:reg-chareqs} and
\ref{lem:dep-chareqs-E},
\begin{align*}
\abs{P_1(t) - P_2(t)}
\leq &\,
\int_0^t \abs{\Psi_{E^1}(s,P_1(s)) - \Psi_{E^2}(s,P_2(s))} \,ds
\\
\leq &\,
\int_0^t \abs{\Psi_{E^1}(s,P_1(s)) - \Psi_{E^1}(s,P_2(s))} \,ds
\\
&+
\int_0^t \abs{\Psi_{E^1}(s,P_2(s)) - \Psi_{E^2}(s,P_2(s))} \,ds
\\
\leq &\,
C \int_0^t \abs{P_1(s) - P_2(s)} \,ds
+
\int_0^t \norm{E^1_s - E^2_s}_{L^\infty(B_R)} \,ds
\end{align*}
where $C$ is the constant in point 2 of Lemma \ref{lem:reg-chareqs},
which depends on $\alpha$, $\beta$, $R$ and the Lipschitz constant
of $E^1$ with respect to $x$ in the ball $B_R$. By Gronwall's Lemma,
\begin{align*}
\abs{P_1(t) - P_2(t)}
\leq &\,
\int_0^t e^{C(t-s)} \norm{E^1_s - E^2_s}_{L^\infty(B_R)} \,ds
\\
\leq &\,
\frac{e^{C t} - 1}{C}
\sup_{s \in [0,T]} \norm{E^1_s - E^2_s}_{L^\infty(B_R)},
\end{align*}
which finishes the proof.
\end{proof}
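The role of the factor $(e^{Ct}-1)/C$ in \eqref{eq:dep-chars-E} can be checked on a scalar caricature of the characteristic system: two fields whose right-hand sides differ by a constant $\delta$, so that $\sup_s \norm{E^1_s - E^2_s}_{L^\infty} = \delta$, and for which $C = 1$ is an admissible Lipschitz constant. A minimal Python sketch, with all numerical values chosen for illustration only:

```python
import math

def euler(rhs, p0, T, n=10000):
    # Explicit Euler integration of dp/dt = rhs(p), p(0) = p0.
    p, h = p0, T / n
    for _ in range(n):
        p += h * rhs(p)
    return p

delta, T, C = 0.3, 2.0, 1.0  # field discrepancy, horizon, Lipschitz constant
p1 = euler(lambda p: -p + delta, 1.0, T)  # characteristics under field E^1
p2 = euler(lambda p: -p, 1.0, T)          # characteristics under field E^2

# Lemma (dep-chars-E): |P1(T) - P2(T)| <= (e^{CT} - 1)/C * sup |E^1 - E^2|.
gap = abs(p1 - p2)
bound = (math.exp(C * T) - 1.0) / C * delta
assert gap <= bound
```

Here the exact gap is $\delta(1 - e^{-T}) \approx 0.26$, comfortably below the Gronwall bound $\approx 1.92$; the bound is not sharp, but it is uniform over all fields with the same Lipschitz constant.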
\begin{lem}[Regularity of characteristics with respect to initial
conditions]
\label{lem:reg-chars}
Take $T > 0$ and a field $E: [0,T] \times \R^d \to \R^d$ satisfying
Hypothesis \ref{hyp:E-conditions}. Take also $P_1, P_2 \in \R^d\times\R^d$
and $R > 0$, and assume that
\begin{equation*}
\abs{\calT_{E}^t(P_1)} \leq R,
\quad
\abs{\calT_{E}^t(P_2)} \leq R
\qquad \mbox{for } t \in [0,T].
\end{equation*}
Then it holds that
\begin{equation}
\abs{\calT_{E}^t(P_1) - \calT_{E}^t(P_2)}
\leq
\abs{P_1 - P_2} e^{C \int_0^t (\Lip_R(E_s)+1) \,ds},
\quad t \in [0,T],
\end{equation}
for some constant $C$ which depends only on $R$, $\alpha$ and
$\beta$. In other words, $\calT_E^t$ is Lipschitz on $B_R \subseteq \R^d
\times \R^d$, with constant
\begin{equation*}
\Lip_R(\calT_E^t) \leq e^{C \int_0^t (\Lip_R(E_s)+1) \,ds},
\quad t \in [0,T].
\end{equation*}
\end{lem}
\begin{proof}
Write $P_i(t) \equiv \calT_{E}^t(P_i) \equiv (X_i(t), V_i(t))$, for
$i = 1,2$, $t \in [0,T]$. These functions satisfy the
characteristic equations \eqref{eq:characteristics}:
\begin{gather*}
\frac{d}{dt} P_i = \Psi_{E}(t,P_i), \quad P_i(0) = P_i, \quad \text{ for } i = 1,2.
\end{gather*}
For $t \in [0,T]$, using Lemma \ref{lem:reg-chareqs},
\begin{align*}
\abs{P_1(t) - P_2(t)}
\leq &\,
\abs{P_1 - P_2}
+
\int_0^t \abs{\Psi_{E}(s,P_1(s)) - \Psi_{E}(s,P_2(s))} \,ds
\\
\leq &\,
\abs{P_1 - P_2}
+
C \int_0^t (\Lip_R(E_s)+1) \abs{P_1(s) - P_2(s)} \,ds
\end{align*}
We get our result by applying Gronwall's Lemma to this inequality.
\end{proof}
\begin{lem}[Regularity of characteristics with respect to time]
\label{lem:reg-chars-time}
Take $T > 0$ and a field $E: [0,T] \times \R^d \to \R^d$ satisfying
Hypothesis \ref{hyp:E-conditions}. Take $P^0 \in \R^d \times
\R^d$, $R > 0$ and assume that
\begin{equation*}
\abs{\calT_{E}^t(P^0)} \leq R,
\qquad t \in [0,T].
\end{equation*}
Then it holds that
\begin{equation}
\abs{\calT_{E}^t(P^0) - \calT_{E}^s(P^0)}
\leq
C \abs{t-s}
\quad \text{ for } s, t \in [0,T],
\end{equation}
for some constant $C$ which depends only on $\alpha$, $\beta$, $R$
and $\norm{E}_{L^\infty([0,T] \times B_R)}$.
\end{lem}
\begin{proof}
Direct from the definition of $\calT_E^t(P^0)$ and point 1 of Lemma
\ref{lem:reg-chareqs}, as we are assuming that $\calT_E^t(P^0)$
remains in a fixed compact subset of $\R^d \times \R^d$.
\end{proof}
\subsection{Existence and uniqueness}
\begin{thm}[Existence and uniqueness of measure solutions]
\label{thm:Existence}
Take a potential $U \in \mathcal{C}^1(\R^d)$ such that $\nabla U$ is locally Lipschitz and such that for some $C >
0$,
\begin{equation}
\label{eq:U-growth2}
\abs{\nabla U(x)} \leq C(1+\abs{x})
\quad \text{ for all } x \in \R^d ,
\end{equation}
and $f_0\in \Po(\R^d \times \R^d)$ with compact
support. There exists a solution $f$ on $[0,+\infty)$ to equation
\eqref{eq:swarming} with initial condition $f_0$ in the sense of
Definition \ref{dfn:solution}. In addition,
\begin{equation}
\label{eq:f-continuous}
f \in \mathcal{C}([0,+\infty); \P_1(\R^d\times \R^d))
\end{equation}
and there is some increasing function $R = R(T)$ such that for all $T
> 0$,
\begin{equation}
\label{eq:supp-f}
\supp f_t \subseteq B_{R(T)} \subseteq \R^d \times \R^d
\quad \text{ for all } t \in [0,T].
\end{equation}
This solution is unique among the family of solutions satisfying
\eqref{eq:f-continuous} and \eqref{eq:supp-f}.
\end{thm}
The rest of this section is dedicated to the proof of this result,
for which we will need some previous lemmas. We begin with a
general result on the transportation of a measure by two different
functions:
\begin{lem}
\label{lem:same-f0}
Let $P_1, P_2 : \R^d \to \R^d$ be two Borel measurable functions. Also, take $f \in \P_1(\R^d)$. Then,
\begin{equation}
\label{eq:same-f0}
W_1(P_1 \# f, P_2 \# f)
\leq
\norm{P_1 - P_2}_{L^\infty(\supp f)}.
\end{equation}
\end{lem}
\begin{proof}
We consider a transference plan defined by $\pi:=(P_1\times P_2)\#f$. One can check
that this measure has marginals $P_1 \# f$, $P_2 \# f$. Then,
\begin{align*}
W_1(P_1 \# f, P_2 \# f)
\leq &\,
\int_{\R^d \times \R^d} \abs{x-y} \,d\pi(x,y)
\\
=&\,
\int_{\R^d} \abs{P_1(x) - P_2(x)} \,df(x)
\leq
\norm{P_1 - P_2}_{L^\infty(\supp f)},
\end{align*}
which proves the lemma.
\end{proof}
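Lemma \ref{lem:same-f0} is easy to test numerically for empirical measures on the real line, where $W_1$ is computed exactly by \texttt{scipy.stats.wasserstein\_distance}; the maps $P_1$, $P_2$ below are hypothetical choices made for the sketch:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)   # atoms of f (equal weights)

P1 = lambda z: np.sin(z)               # hypothetical Borel maps
P2 = lambda z: np.sin(z) + 0.05 * np.cos(3 * z)

# W1(P1 # f, P2 # f) computed on the pushed-forward atoms.
w1 = wasserstein_distance(P1(x), P2(x))
sup_diff = np.max(np.abs(P1(x) - P2(x)))  # ||P1 - P2||_{L^inf(supp f)}
assert w1 <= sup_diff + 1e-12
```

The transference plan used in the proof corresponds to coupling each pushed atom $P_1(x_i)$ with $P_2(x_i)$; the optimal plan can only do better, which is why $w_1$ typically comes out strictly below the sup bound.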
\begin{lem}[Continuity with respect to time]
\label{lem:time-W1-continuity}
Take $T > 0$ and a field $E: [0,T] \times \R^d \to \R^d$ in the
conditions of Hypothesis \ref{hyp:E-conditions}. Take also a measure
$f$ on $\R^d \times \R^d$ with compact support contained in the ball
$B_R$.
Then, there exists $C > 0$ depending only on $\alpha$, $\beta$, $R$
and $\norm{E}_{L^\infty([0,T] \times B_R)}$ such that
$$W_1(\calT_E^s\#f,\calT_E^t\#f) \leq C \abs{t-s},
\quad \text{ for any } t,s\in[0,T].$$
\end{lem}
\begin{proof}
From Lemma \ref{lem:same-f0} and the continuity of characteristics
with respect to time, Lemma \ref{lem:reg-chars-time}, we get
\begin{equation*}
W_1(\calT_E^s\#f,\calT_E^t\#f)
\leq
\|\calT_E^s-\calT_E^t\|_{L^\infty(\supp f)}
\leq
C \abs{t-s},
\end{equation*}
for some $C > 0$ which depends only on the quantities in the lemma.
\end{proof}
\begin{lem}
\label{lem:same-T_[f]}
Take a locally Lipschitz map $\calT : \R^d \to \R^d$ and $f,g \in
\P_1(\R^d)$, both with compact support contained in the ball
$B_R$. Then,
\begin{equation}
\label{eq:same-T_[f]}
W_1(\calT \# f, \calT \# g)
\leq
L\,W_1(f,g),
\end{equation}
where $L$ is the Lipschitz constant of $\calT$ on the ball $B_R$.
\end{lem}
\begin{proof}
Set $\pi$ to be an optimal transportation plan between $f$ and $g$. The
measure $\gamma=(\calT \times \calT) \# \pi$ has marginals $\calT \# f$ and $\calT \# g$,
as can be easily checked, so we can use it to bound $W_1(\calT \# f, \calT \# g)$:
\begin{align*}
W_1(\calT \# f, \calT \# g)
\leq &\,
\int_{\R^d \times \R^d} \!\!\!\!\abs{z-w} \,d\gamma(z,w)
=
\int_{\R^d \times \R^d} \!\!\!\!\abs{\calT(z)-\calT(w)} \,d\pi(z,w)
\\
\leq &\,
L \int_{\R^d \times \R^d} \abs{z-w} \,d\pi(z,w)
=
L\, W_1(f,g),
\end{align*}
using that the support of $\pi$ is contained in $B_R \times B_R$, as
both $f$ and $g$ have support inside $B_R$.
\end{proof}
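A companion numerical check of Lemma \ref{lem:same-T_[f]}, again on the real line; the map $\calT(z) = \tanh(2z)$ is an arbitrary choice, Lipschitz with constant $L = 2$:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
f = rng.uniform(-1.0, 1.0, size=300)   # atoms of f
g = rng.uniform(-1.0, 1.0, size=300)   # atoms of g

T_map = lambda z: np.tanh(2.0 * z)     # |T'(z)| = 2 sech^2(2z) <= 2
L = 2.0

# Lemma (same-T_[f]): W1(T # f, T # g) <= L * W1(f, g).
lhs = wasserstein_distance(T_map(f), T_map(g))
rhs = L * wasserstein_distance(f, g)
assert lhs <= rhs + 1e-12
```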
Recalling that $E[f]:=\nabla U\ast \rho$, the properties of
convolution immediately give the following:
\begin{lem}
\label{lem:lipschitz-field}
Take a potential $U:\R^d \to \R$ in the conditions of Theorem
\ref{thm:Existence}, and a measure $f \in \Po(\R^d
\times \R^d)$ with support contained in a ball $B_R$. Then,
\begin{equation}
\label{eq:E-bounded}
\norm{E[f]}_{L^\infty(B_R)}
\leq
\norm{\nabla U}_{L^\infty(B_{2R})},
\end{equation}
and
\begin{equation}
\label{eq:E-Lipschitz}
\Lip_R(E[f]) \leq \Lip_{2R}(\nabla U).
\end{equation}
\end{lem}
\begin{lem}
\label{lem:field-W1-Linfty}
Take $R > 0$ and $f,g \in \P_1(\R^d \times \R^d)$ with support
contained in the ball $B_R$. Then it holds that
\begin{equation}
\label{eq:field-W1-Linfty}
\norm{E[f] - E[g]}_{L^\infty(B_R)}
\leq \Lip_{2R}(\nabla U)\, W_1(f,g).
\end{equation}
\end{lem}
\begin{proof}
Take $\pi$ to be an optimal transportation plan between the
measures $f$ and $g$. Then, for any $x \in B_R$, using that $\pi$
has marginals $f$ and $g$,
\begin{align*}
E[f](x)&\, - E[g](x)
=
\int_{\R^d} (\rho[f](y) - \rho[g](y)) \nabla U(x-y) \,dy
\\
= &\,
\int_{\R^d\times\R^d} f(y,v) \nabla U(x-y) \,dy \,dv
- \int_{\R^d\times\R^d} g(z,w) \nabla U(x-z) \,dz \,dw
\\
= &\,
\int_{\R^{4d}} (\nabla U(x-y) - \nabla U(x-z))
\,d \pi(y,v,z,w).
\end{align*}
Taking absolute value,
\begin{align*}
\abs{E[f](x) - E[g](x)}
\leq &\,
\int_{\R^{4d}} |\nabla U(x-y) - \nabla U(x-z)| \,d \pi(y,v,z,w)\\
\leq &\,
\Lip_{2R}(\nabla U)
\int_{\R^{4d}} \abs{y - z} \,d \pi(y,v,z,w)
\leq
\Lip_{2R}(\nabla U) W_1(f,g),
\end{align*}
using that $\pi(y,v,z,w)$ has support on $B_R \times B_R \subseteq
\R^{4d}$.
\end{proof}
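As a sanity check of Lemma \ref{lem:field-W1-Linfty}, one can take the quadratic potential $U(z) = z^2/2$, so that $\nabla U(z) = z$ and the Lipschitz constant of $\nabla U$ is $1$; for empirical spatial densities the convolution is a finite sum. A Python sketch, with sample sizes and probe grid chosen for illustration:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=250)   # spatial atoms of rho[f]
y = rng.normal(0.2, 1.1, size=250)   # spatial atoms of rho[g]

gradU = lambda z: z                  # U(z) = z^2 / 2, so Lip(grad U) = 1

def field(atoms, z):
    # E[f](z) = (grad U * rho)(z) for an equal-weight empirical measure.
    return np.mean(gradU(z - atoms))

grid = np.linspace(-1.0, 1.0, 101)   # probe points inside a ball B_R
lhs = max(abs(field(x, z) - field(y, z)) for z in grid)
rhs = 1.0 * wasserstein_distance(x, y)   # Lip_{2R}(grad U) * W1
assert lhs <= rhs + 1e-12
```

For this potential $E[f](z) - E[g](z)$ is the constant difference of the means, and $W_1$ always dominates the difference of means by Kantorovich duality, so the inequality holds with room to spare.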
We can now give the proof of the existence and uniqueness result.
\begin{proof}[Proof of Theorem \ref{thm:Existence}]
Take $f_0 \in \Po(\R^d \times \R^d)$ with support contained in a
ball $B_{R^0} \subseteq \R^d \times \R^d$, for some $R^0 > 0$. We
will prove local existence and uniqueness of solutions by a
contraction argument in the metric space $\mathcal{F}$ formed by all
the functions $f \in \mathcal{C}([0,T], \P_1(\R^d \times \R^d))$
such that the support of $f_t$ is contained in $B_R$ for all $t
\in [0,T]$, where $R := 2 R^0$ and $T > 0$ is a fixed number to be
chosen later. Here, we consider the distance in $\mathcal{F}$ given
by
\begin{equation}
\label{eq:distance-F}
\mathcal{W}_1(f,g) := \sup_{t \in [0,T]} W_1(f_t,g_t).
\end{equation}
Let us define an operator on this space for which a fixed point will
be a solution to the swarming equation \eqref{eq:swarming}. For $f
\in \mathcal{F}$, consider $E[f] := \nabla U * \rho[f]$. Then,
$E[f]$ satisfies Hypothesis \ref{hyp:E-conditions} (because of the
above two Lemmas \ref{lem:lipschitz-field} and
\ref{lem:field-W1-Linfty}, and the bound (\ref{eq:U-growth2}) on
$\nabla U$) and we can define
\begin{equation}
\label{eq:def-Gamma}
\Gamma[f](t) := \calT_{E[f]}^t \# f_0.
\end{equation}
In other words, $\Gamma[f]$ is the solution of the swarming
equations obtained through the method of characteristics, with field
$E[f]$ assumed known, and with initial condition $f_0$ at $t=0$.
Clearly, a fixed point of $\Gamma$ is a solution to
eq. \eqref{eq:swarming} on $[0,T]$. In order for $\Gamma$ to be well
defined, we need to prove that $\Gamma[f]$ is again in the space
$\mathcal{F}$, for which we need to choose $T$ appropriately. To do
this, observe that from eq. \eqref{eq:E-bounded} in Lemma
\ref{lem:lipschitz-field} we have
\begin{equation*}
\norm{E[f]}_{L^\infty([0,T] \times B_R)}
\leq
\norm{\nabla U}_{L^\infty(B_{2R})} =: C_1,
\end{equation*}
and from point 1 in Lemma \ref{lem:reg-chareqs},
\begin{equation*}
\abs{ \frac{d}{dt} \calT_{E[f]}^t (P) }
\leq
C_2,
\end{equation*}
for all $P \in B_{R^0} \subseteq \R^d \times \R^d$, and some $C_2 >
0$ which depends only on $\alpha$, $\beta$, $R^0$ and
$C_1$. Choosing any $T < R^0/C_2$ one easily sees that
$\calT_{E[f]}^t \# f_0$ has support contained in $B_R$, for all $t
\in [0,T]$ (recall that we set $R := 2R^0$). Then, for each $t \in
[0,T]$, $\Gamma[f](t) \in \Po(\R^d \times \R^d)$, as follows from
mass conservation, the support of $\Gamma[f](t)$ is contained in
$B_R$ (we just chose $T$ for this to hold), and the function $t
\mapsto \Gamma[f](t)$ is continuous, as shown by Lemma
\ref{lem:time-W1-continuity}.
This shows that the map $\Gamma: \mathcal{F} \to \mathcal{F}$ is well
defined.
Let us prove now that this map is contractive (for which we will
have to restrict again the choice of $T$). Take two functions $f, g
\in \mathcal{F}$, and consider $\Gamma[f], \Gamma[g]$; we want to
show that
\begin{equation}
\label{eq:Gamma-Lipschitz}
\mathcal{W}_1(\Gamma[f], \Gamma[g]) \leq C\, \mathcal{W}_1(f, g)
\end{equation}
for some $0 < C < 1$ which does not depend on $f$ and $g$. Using
\eqref{eq:distance-F} and \eqref{eq:def-Gamma},
\begin{equation}
\label{eq:L1}
\mathcal{W}_1(\Gamma[f], \Gamma[g])
= \sup_{t \in [0,T]} W_1(\calT_{E[f]}^t \# f_0 , \calT_{E[g]}^t \# f_0),
\end{equation}
and hence we need to estimate the above quantity for each $t \in [0,
T]$. For $t \in [0,T]$, use Lemmas \ref{lem:same-f0},
\ref{lem:dep-chars-E} and \ref{lem:field-W1-Linfty} to write
\begin{align*}
W_1(\calT_{E[f]}^t \# f_0 , \calT_{E[g]}^t \# f_0) &\,\leq \norm{\calT_{E[f]}^t -
\calT_{E[g]}^t}_{L^\infty(\supp f_0)}
\\ &\,
\leq C(t) \sup_{s \in [0,T]} \norm{E[f_s] - E[g_s]}_{L^\infty(B_R)}
\\ &\,
\leq C(t)\, L \, \sup_{s \in [0,T]} W_1(f_s, g_s)
=
C(t)\, L \, \mathcal{W}_1(f,g),
\end{align*}
where $C(t)$ is the function $(e^{C_3t}-1)/C_3$ which appears in
eq. \eqref{eq:dep-chars-E}, for some constant $C_3$ which depends
only on $\alpha$, $\beta$, $R$, and the Lipschitz constant $L$ of
$\grad U$ on $B_{2R}$ (see eq. \eqref{eq:E-Lipschitz}). Clearly,
\begin{equation}
\label{eq:small-Lipschitz-constant}
\lim_{t \to 0} C(t) = 0.
\end{equation}
With \eqref{eq:L1}, this finally gives
\begin{equation*}
\mathcal{W}_1(\Gamma[f], \Gamma[g])
\leq
C(T)\, L\,\mathcal{W}_1(f,g).
\end{equation*}
Taking into account \eqref{eq:small-Lipschitz-constant}, we can additionally choose
$T$ small enough so that $C(T) L < 1$. For such $T$, $\Gamma$ is
contractive, and this proves that there is a unique fixed point of
$\Gamma$ in $\mathcal{F}$, and hence a unique solution $f \in
\mathcal{F}$ of eq. \eqref{eq:swarming}.
Finally, as mass is conserved, by usual arguments one can extend
this solution as long as the support of the solution remains
compact. Since in our case the growth of characteristics is bounded
(see Lemma \ref{lem:exist_cont_charact}), one can construct a unique
global solution satisfying \eqref{eq:f-continuous} and
\eqref{eq:supp-f}.
\end{proof}
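The contraction argument above is constructive and can be mimicked numerically: discretize $f_0$ by equal-mass particles, freeze the field computed from the previous iterate, integrate the characteristics, and repeat. The Python sketch below does this for a one-dimensional caricature with $U(x) = x^2/2$, $\alpha = \beta = 1$ and a short horizon $T$, all of which are illustrative assumptions; the sup-in-time particle distance is a crude stand-in for $\mathcal{W}_1$.

```python
import numpy as np

ALPHA, BETA = 1.0, 1.0
T, NT = 0.2, 200                 # short horizon, so Gamma is contractive
DT = T / NT
x0 = np.linspace(-0.5, 0.5, 8)   # initial particle positions (equal masses)
v0 = np.zeros(8)

def gamma(traj_x):
    # One Picard iteration: integrate the characteristics with the field
    # E(t, x) = -(grad U * rho_t)(x) frozen from the previous iterate,
    # where U(z) = z^2 / 2 and rho_t is the empirical measure traj_x[t].
    x, v = x0.copy(), v0.copy()
    out = np.empty((NT + 1, x0.size)); out[0] = x
    for n in range(NT):
        E = -(x[:, None] - traj_x[n][None, :]).mean(axis=1)
        v += DT * (E + (ALPHA - BETA * v**2) * v)
        x += DT * v
        out[n + 1] = x
    return out

# Picard iterates f^(k+1) = Gamma[f^(k)], starting from a constant-in-time guess.
traj = np.tile(x0, (NT + 1, 1))
errors = []
for _ in range(4):
    new = gamma(traj)
    errors.append(np.max(np.abs(new - traj)))  # stand-in for W_1 in time
    traj = new
assert errors[-1] < errors[0]
```

Successive iterates contract roughly by the factor $C(T)\,L$ from the proof, so a handful of iterations already stabilizes the trajectories.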
\subsection{Stability} \label{sec:stability}
\begin{thm}
\label{thm:stability}
Take a potential $U$ in the conditions of Theorem
\ref{thm:Existence}, and $f_0$, $g_0$ measures on $\R^d \times \R^d$
with compact support, and consider the solutions $f,g$ to
eq. \eqref{eq:swarming} given by Theorem \ref{thm:Existence} with
initial data $f_0$ and $g_0$, respectively.
Then, there exists a strictly increasing smooth function
$r(t):[0,\infty)\longrightarrow \R^+_0$ with $r(0)=1$ depending only
on the size of the support of $f_0$ and $g_0$, such that
\begin{equation}
\label{eq:stability}
W_1(f_t, g_t)
\leq
r(t)\, W_1(f_0, g_0),
\quad
t \geq 0.
\end{equation}
\end{thm}
\begin{proof}
Fix $T > 0$, and take $R > 0$ such that $\supp f_t$ and $\supp g_t$
are contained in $B_R$ for $t \in [0,T]$ (which can be done thanks
to Theorem \ref{thm:Existence}). For $t \in [0,T]$, call $L_t$ the
Lipschitz constant of $\calT_{E[g]}^t$ on $B_R$, and notice that
from lemmas \ref{lem:reg-chars} and \ref{lem:lipschitz-field} we
have
\begin{equation}
\label{eq:st-proof1}
L_t \leq e^{C_1 t}, \qquad t \in [0,T]
\end{equation}
for some allowed constant $C_1 > 0$. Then we have, using Lemmas
\ref{lem:same-f0}, \ref{lem:same-T_[f]}, \ref{lem:dep-chars-E} and
\ref{lem:field-W1-Linfty},
\begin{align*}
W_1(f_t,g_t)
= &\,
W_1(\calT_{E[f]}^t \# f_0, \calT_{E[g]}^t \# g_0)
\\
\leq &\,
W_1(\calT_{E[f]}^t \# f_0, \calT_{E[g]}^t \# f_0)
+
W_1(\calT_{E[g]}^t \# f_0, \calT_{E[g]}^t \# g_0)
\\
\leq &\,
\norm{\calT_{E[f]}^t - \calT_{E[g]}^t}_{L^\infty(\supp f_0)}
+
L_t\, W_1(f_0, g_0)
\\
\leq &\,
C_2 \int_0^t e^{C_2(t-s)} \norm{E[f_s] - E[g_s]}_{L^\infty(B_R)} \,ds
+
L_t\, W_1(f_0, g_0)
\\
\leq &\,
C_2 \Lip_{2R}(\nabla U) \int_0^t e^{C_2(t-s)} W_1(f_s, g_s) \,ds
+
e^{C_1 t}\, W_1(f_0, g_0).
\end{align*}
Calling $C = \max\{C_1, C_2, C_2\Lip_{2R}(\nabla U)\}$ and
multiplying by $e^{-Ct}$,
\begin{equation*}
e^{-Ct}
W_1(f_t,g_t)
\leq
C \int_0^t e^{-Cs} W_1(f_s, g_s) \,ds
+
W_1(f_0, g_0),
\qquad t \in [0,T],
\end{equation*}
and then by Gronwall's Lemma,
\begin{equation*}
e^{-Ct} W_1(f_t,g_t)
\leq
W_1(f_0,g_0) \, e^{Ct},
\quad t \in [0,T],
\end{equation*}
which proves our result with $r(t) = e^{2Ct}$ on $[0,T]$. We point
out that the global-in-time rate function $r(t)$ can be obtained by
carefully tracking the time dependence of the constants above,
leading to double exponentials.
\end{proof}
\begin{rem}[Possible generalizations]
As in Remark \ref{rem:unboundE}, by assuming more restrictive
growth properties at infinity of the potential $U$, we may weaken
the requirements on the support of the initial data allowing $f_0$
with bounded first moment for instance. We do not follow this
strategy in the present work.
\end{rem}
\subsection{Regularity}
\label{sec:regularity}
If the initial condition for eq. \eqref{eq:swarming} is more regular
than a general measure on $\R^d \times \R^d$ one can easily prove
that the solution $f$ is also more regular. For example, if $f_0$
is Lipschitz, then $f_t$ is Lipschitz for all $t \geq 0$. We will
show this next.
\begin{lem}
\label{lem:f-Lip}
Take an integrable function $f_0:\R^d \times \R^d \to [0,+\infty)$,
with compact support, and assume that $f_0$ is also Lipschitz. Take
also a potential $U \in \mathcal{C}^2(\R^d)$.
Consider the global solution $f$ to eq. \eqref{eq:swarming} with
initial condition $f_0$ given by Theorem \ref{thm:Existence}. Then,
$f_t$ is Lipschitz for all $t \geq 0$.
\end{lem}
\begin{proof}
Solutions obtained from Theorem \ref{thm:Existence} have bounded
support in velocity for all times $t > 0$, and their fields $E
\equiv E(t,x) := \nabla U * \rho$ are Lipschitz with respect to
$x$. Hence, one can rewrite
eq. \eqref{eq:swarming} as a general equation of the form
\begin{equation*}
\partial_t f + \dv (a f) = 0,
\end{equation*}
where $a = a(t,x,v)$ is the expression appearing in the equation,
\begin{equation*}
a(t,x,v) = (v, E(t,x) + (\alpha - \beta\abs{v}^2) v).
\end{equation*}
Then, $a$ is bounded and Lipschitz with respect to $x,v$ on the
domain considered as the support in velocity is bounded, and
classical results show that $f_t$ is Lipschitz for all $t \geq 0$.
\end{proof}
\section{Well-posedness for General Models}
In this section we want to show that the same results we have
obtained in the previous section are also valid, with suitable
modifications, for much more general models than
\eqref{eq:swarming}. We will start by showing the adaptation of
the strategy for the Cucker-Smale system and then, we will extend
this strategy to more general models.
\subsection{Cucker-Smale Model}
\label{sec:cucker_smale}
We will prove well-posedness in a slightly more general setting
than that of the Cucker-Smale model in section \ref{sec:bg}, being
less restrictive on the communication rate and the velocity
averaging. To be more precise, we shall consider $\xi[f](t,x,v) =
(H \ast f_t)(x,v)$ as in \eqref{eq:CS}, but
for a general $H:\R^d\times\R^d\to\R^d$, for which we only assume
the following hypotheses:
\begin{hyp}[Conditions on $H$]
\label{hyp:H-conditions}
\begin{enumerate}
\item $H$ is locally Lipschitz.
\item For some $C > 0$,
\begin{equation}\label{H3}
|H(x,v)|\leq C(1+|x|+|v|)
\quad \text{ for all } x,v \in \R^d.
\end{equation}
\end{enumerate}
\end{hyp}
Since the procedure to prove the well-posedness results to
\eqref{eq:CS} is the same we have already applied in the previous
section, we will state some of the results without proof. First of
all, fix $T > 0$ and let us introduce the system of ODEs solved by
the characteristics of \eqref{eq:CS}:
\begin{subequations}
\label{eq:characteristicsCS}
\begin{align}
\label{eq:characteristicsCSX}
\frac{d}{dt} X &= V,
\\
\label{eq:characteristicsCSV}
\frac{d}{dt} V &= -\xi(t,X,V),
\end{align}
\end{subequations}
where $\xi: [0,T] \times \R^d \times \R^d \to \R^d$ is any function
satisfying the following hypothesis:
\begin{hyp}[Conditions on $\xi$]
\label{hyp:xi-conditions}
\begin{enumerate}
\item $\xi$ is continuous on $[0,T] \times \R^d \times \R^d$,
\item For some $C > 0$,
\begin{equation}
\label{eq:xi-growth}
\abs{\xi(t,x,v)} \leq C(1 + \abs{x} + \abs{v}),
\quad \text{ for all } (t,x,v) \in [0,T] \times \R^d \times \R^d,
\end{equation}
and
\item $\xi$ is \emph{locally Lipschitz with respect to $x$ and $v$}, i.e., for
any compact set $K \subseteq \R^d\times\R^d$ there is some $L_K > 0$ such
that
\begin{equation}
\label{eq:Lipschitz-resp-x2}
\abs{\xi(t,P_1) - \xi(t,P_2)} \leq L_K \abs{P_1-P_2},
\qquad t \in [0,T], \quad P_1,P_2 \in K.
\end{equation}
\end{enumerate}
\end{hyp}
Under these conditions, we may consider the flow map $P_\xi^t =
P_\xi^t(x,v)$ associated to \eqref{eq:characteristicsCS}, defined as
the solution to the system \eqref{eq:characteristicsCS} with initial
condition $(x,v)$. For ease of notation, we will write the system
\eqref{eq:characteristicsCS} as
$$
\frac{dP_\xi^t}{d t}=\Psi_{\xi}(t,P_\xi^t).
$$
\begin{rem}\label{rem:suplip}
Under Hypothesis \ref{hyp:H-conditions} on $H$, note that whenever $\tf\in
C([0,T],\Po(\R^d\times\R^d))$ is a given curve of compactly supported
measures with $\supp(\tf_t)\subset B_{R^x}\times B_{R^v}$ for all $t\in
[0,T]$, the field $\xi[\tf]=H\ast\tf$ satisfies Hypothesis
\ref{hyp:xi-conditions}.
\end{rem}
\begin{dfn}[Notion of Solution]
\label{dfn:solution2}
Take $H$ satisfying Hypothesis \ref{hyp:H-conditions}, a measure
$f_0 \in \P_1(\R^d \times \R^d)$, and $T \in (0,\infty]$. We say
that a function $f:[0,T] \to \P_1(\R^d \times \R^d)$ is a solution
of equation \eqref{eq:CS} with initial condition
$f_0$ when:
\begin{enumerate}
\item The field $\xi = H*f$ satisfies Hypothesis \ref{hyp:xi-conditions}.
\item It holds $f_t = P_\xi^t \# f_0$.
\end{enumerate}
\end{dfn}
Now, an analogue to Lemma \ref{lem:reg-chareqs} can be stated. We
shall state this one and the following lemmas for a general $\xi$
satisfying Hypothesis \ref{hyp:xi-conditions}.
\begin{lem}[Regularity of the characteristic equations]
\label{lem:reg-chareqsCS}
Take $T>0$,
$\xi$ satisfying Hypothesis \ref{hyp:xi-conditions}, $R>0$ and
$t\in [0,T]$. Then there exist constants $C$ and $L_p$ depending on
$\Lip_R(\xi)$ and $T$ such that
$$|\Psi_{\xi}(t,P)|\leq C
\quad \text{ for all } P \in B_R \times B_R$$
and
$$
|\Psi_{\xi}(t,P_1)-\Psi_{\xi}(t,P_2)|\leq L_p|P_1-P_2|
\quad \text{ for all } P_1, P_2 \in B_R \times B_R.
$$
\end{lem}
Lemmas \ref{lem:dep-chareqs-E}--\ref{lem:reg-chars-time} are valid
as they are presented, taking $\xi$ and Hypothesis
\ref{hyp:xi-conditions} to play the role of $E$ and Hypothesis
\ref{hyp:E-conditions}, and making the obvious minor modifications
on the dependence of the constants. Now we can look at the
existence of solutions:
\begin{thm}[Existence and uniqueness of measure solutions]
\label{thm:Existence-CS}
Assume $H$ satisfies Hypothesis \ref{hyp:H-conditions}, and take
$f_0\in \Po(\R^d\times\R^d)$ compactly supported. Then there exists
a unique solution $f\in C([0,T],\Po(\R^d\times\R^d))$ to equation
\eqref{eq:CS} in the sense of Definition \ref{dfn:solution2} with
initial condition $f_0$. Moreover, the solution remains compactly
supported for all $t\in [0,T]$, i.e., there exist $R^x$ and $R^v$
depending on $T$, $H$ and the support of $f_0$, such that
$$\supp(f_t)\subset B_{R^x}\times B_{R^v}
\text{ for all } t\in [0,T].$$
\end{thm}
The proof of this result can be done following the same steps as for
proving Theorem \ref{thm:Existence}. Lemmas \ref{lem:same-f0} to
\ref{lem:same-T_[f]} still hold in this situation, and we recombine
Lemmas \ref{lem:lipschitz-field} and \ref{lem:field-W1-Linfty} in the
following result:
\begin{lem}
\label{lem:lipschitz-W1-Linfty-field-CS}
Take $H$ satisfying Hypothesis \ref{hyp:H-conditions}, $\tf\in
\Po(\R^d\times\R^d)$ with $\supp(\tf)\subset B_{R^x}\times
B_{R^v}$, and $\xi := \xi[\tf] = H * \tf$. Then, for any
$R>0$ $$\Lip_R(\xi) \leq \Lip_{R+\hat{R}}(H),$$ with
$\hat{R}:=\max\{R^x,R^v\}$. Furthermore, if $\tilde g \in
\Po(\R^d\times\R^d)$ it holds that
\begin{equation}
\label{eq:field-W1-Linfty-cs}
\norm{\xi[\tf] - \xi[\tilde g]}_{\rmL^{\infty}(B_R)}
\leq \Lip_{R+\hat{R}}(H)\, W_1(\tf,\tilde g).
\end{equation}
\end{lem}
\begin{proof}
The first part follows directly from the properties of
convolution. For the second one, take $\pi$ to be an optimal
transportation plan between the measures $\tf$ and $\tilde g$. Then,
for any $x,v \in B_R$, using that $\pi$ has marginals $\tf$ and
$\tilde g$,
\begin{align*}
\xi[\tf](x,v) &\,- \xi[\tilde g](x,v)\\
\,=&
\int_{\R^{2d}} \!\! H(x-y,v-u) \,d\tf(y,u)\\
&- \int_{\R^{2d}} \!\!H(x-z,v-w) \,d\tilde g(z,w)
\\
\,=&
\int_{\R^{4d}} \left[H(x-y,v-u) - H(x-z,v-w)\right]
\,d \pi(y,u,z,w).
\end{align*}
Taking absolute value, and using that the support of $\pi$ is
contained in the ball $B_{\hat R} \subseteq \R^{4d}$,
\begin{multline*}
|\xi[\tf](x,v) - \xi[\tilde g](x,v)|
\\
\leq
\int_{B_{\hat R}} \!\! |H(x-y,v-u) - H(x-z,v-w)| \,d \pi(y,u,z,w)
\\
\leq \,
\Lip_{R+\hat{R}}(H)
\int_{\R^{4d}} \!\! \abs{(y - z, u-w)} \,d \pi(y,u,z,w)
=
\Lip_{R+\hat{R}}(H) W_1(\tf,\tilde g).
\end{multline*}
\end{proof}
Finally, a stability result also follows using the same steps as
in Theorem \ref{thm:stability}.
\begin{thm}[Stability in $W_1$]
\label{thm:stability-CS} Assume $H$ satisfies Hypothesis
\ref{hyp:H-conditions}, and $f_0,g_0\in \Po(\R^d\times\R^d)$ are
compactly supported. Consider the solutions $f,g$ to
eq. \eqref{eq:CS} given by Theorem \ref{thm:Existence-CS} with
initial data $f_0$ and $g_0$, respectively. Then, there exists a
strictly increasing function $r(t):[0,\infty)\longrightarrow \R^+_0$
with $r(0)=1$ depending only on $H$ and the size of the support of
$f_0$ and $g_0$, such that
\begin{equation}
\label{eq:stability2}
W_1(f_t, g_t)
\leq
r(t)\, W_1(f_0, g_0),
\quad
t \geq 0.
\end{equation}
\end{thm}
\begin{remark}[Evolution of the support in the Cucker-Smale model]
In \cite{cfrt09} a sharp bound is shown on the evolution of the
support for the kinetic Cucker-Smale equation in which
$H(x,v)=w(x)v$. More precisely, it is proved that for any given $f_0
\in \Po(\R^d\times \R^d)$ compactly supported, we have that
$$
\supp(f_t) \subset B(x_c(0)+m t,R^x(t)) \times B(m,R^v(t)),
$$
with
$$
R^x(t) \leq \bar R \qquad \mbox{and} \qquad R^v(t) \leq R_0\,
e^{-\lambda t}
$$
for some $\bar R$ depending only on
$R_0=\max\{R^x(0),R^v(0)\}$ and $\lambda=w(2\bar R)$. Here, $x_c(0)$
denotes the initial center of mass, and $m$ stands for the mean
velocity of the system
$$
m:=\int_{\R^{2d}} v\, f(t,x,v)\,d x \,d v,
$$
which is preserved along its evolution. This precise bound on the
support and the particular choice of $H$ lead to a uniform control in
time of the constants $\Lip_R(H)$ and $L_p$ in the results above,
which are now bounded for all times. A tedious but straightforward
computation leads to a rate $r(t)$ in the stability result which is
exponentially increasing. Indeed, if we follow the steps of the proof
of Theorem \ref{thm:stability} for the particular case of the
Cucker-Smale model we can see that, since $R^x$ and $R^v$ are
bounded uniformly in time, the numbers $C_1$ and $C_2$ that appear there
can be chosen independently of time, whence $r(t)$ shall grow at most
exponentially.
\end{remark}
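The contraction of the velocity support is easy to observe at the particle level. The following Python sketch integrates \eqref{eq:characteristicsCS} for an empirical measure with $H(x,v) = w(x)v$ by explicit Euler; the kernel $w(r) = (1+r^2)^{-1/4}$ and all numerical values are illustrative choices. By the antisymmetry of the interaction the scheme conserves the mean velocity exactly, while the velocity diameter shrinks:

```python
import numpy as np

N, T, NT = 20, 5.0, 2000
DT = T / NT
w = lambda r: (1.0 + r**2) ** (-0.25)   # illustrative communication rate

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, N)
v = rng.uniform(-1, 1, N)
m = np.full(N, 1.0 / N)
mean_v0 = (m * v).sum()
diam0 = v.max() - v.min()

for _ in range(NT):
    # dv_i/dt = -xi[f](x_i, v_i) = -sum_j m_j w(|x_i - x_j|)(v_i - v_j)
    W = w(np.abs(x[:, None] - x[None, :]))
    dv = -(W * (v[:, None] - v[None, :]) * m[None, :]).sum(axis=1)
    x += DT * v
    v += DT * dv

# Mean velocity is conserved (symmetric w, antisymmetric interaction) ...
assert abs((m * v).sum() - mean_v0) < 1e-8
# ... while the velocity diameter contracts.
assert v.max() - v.min() < diam0
```

Since each Euler step replaces $v_i$ by a convex combination of the old velocities (for $\mathrm{DT}$ small), the velocity support can never expand, matching the bound $R^v(t) \leq R_0 e^{-\lambda t}$ quoted above.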
\begin{remark}[Comparison with Literature]
As already mentioned above, the particular case of the kinetic
Cucker-Smale model has already been approached in \cite{HL08}
where the authors give a well-posedness result based on the
bounded Lipschitz distance. Here, we recover the same result but
based on the stability in the Wasserstein distance $W_1$, which
allows us to obtain sharper constants and rates.
\end{remark}
\subsection{General Models}
\label{sec:general_models}
With the techniques used in the previous sections one can include
quite general kinetic models in the well-posedness theory. In this
section we illustrate this by giving a result for a model which
includes both the potential interaction and self-propulsion
effects of section \ref{sec:swarming_model}, the
velocity-averaging effect of section \ref{sec:cucker_smale}, and
the more general models of \cite{LLE,LLE2}.
Let us introduce some notation for this section: $\Pc(\R^d\times\R^d)$
denotes the subset of $\Po(\R^d\times\R^d)$ consisting of measures of
compact support in $\R^d\times\R^d$, and we consider the non-complete
metric space ${\cal A}:=\mathcal{C}([0,T], \Pc(\R^d\times\R^d))$
endowed with the distance ${\cal W}_1$. On the other hand, we consider
the set of functions ${\cal B}:=\mathcal{C}([0,T],\lip_{loc}
(\R^d\times\R^d,\R^d))$, which in particular are locally Lipschitz
with respect to $(x,v)$, uniformly in time. We consider an operator
${\cal H}[\cdot]:{\cal A}\longrightarrow {\cal B}$ and assume the
following:
\begin{hyp}[Hypothesis on a general operator]
\label{hyp:general-model}
Take any $R_0 > 0$ and $f, g \in \mathcal{A}$ such that $\supp(f_t)
\cup \supp(g_t) \subseteq B_{R_0}$ for all $t \in [0,T]$. Then for
any ball $B_R \subset \R^d \times \R^d$, there exists a constant $C
= C(R,R_0)$ such that
\begin{gather*}
\max_{t\in [0,T]}
\|{\cal H}[f]-{\cal H}[g]\|_{L^\infty(B_R)}
\leq
C \,{\cal W}_1(f,g),
\\
\max_{t \in [0,T]} \lip_R({\cal H}[f])
\leq C .
\end{gather*}
\end{hyp}
Associated to this operator, we can consider the following general
equation:
\begin{equation}
\label{eq:general-model}
\partial_t f + v \cdot \grad_x f
- \nabla_v \cdot [ {\cal H}[f] f ]
= 0.
\end{equation}
\begin{remark}[Generalization]
It is not difficult to see that the choices ${\cal
H}[f]=(\alpha-\beta|v|^2)v-\nabla U * \rho$ and ${\cal H}[f]=H *
f$ correspond to \eqref{eq:swarming} and \eqref{eq:CS},
respectively, and that they satisfy Hypothesis
\ref{hyp:general-model} if we assume the hypotheses of Theorems
\ref{thm:Existence} and \ref{thm:Existence-CS} respectively.
Moreover, one can construct an operator of the form:
$$
{\cal H}[f]=F_A(x,v)+G(x) * \rho + H(x,v) * f
$$
with $F_A$, $G$ and $H$ given functions satisfying suitable
hypotheses, such that the kinetic equation \eqref{eq:general-model}
corresponds to the model \eqref{eq:lle}.
\end{remark}
We will additionally require the following:
\begin{hyp}[Additional constraint on ${\cal H}$]
\label{hyp:general-model2}
Given $f \in \mathcal{C}([0,T], \Pc(B_{R_0}))$, and for any
initial condition $(X^0, V^0) \in \R^d \times \R^d$, the following
system of ordinary differential equations has a globally defined
solution:
\begin{subequations}
\label{eq:characteristics-general}
\begin{align}
\label{eq:characteristicsX-general}
\frac{d}{dt} X &= V,
\\
\label{eq:characteristicsV-general}
\frac{d}{dt} V &= {\cal H}[f](t,X,V) ,
\\
X(0) &= X^0, \quad V(0) = V^0.
\end{align}
\end{subequations}
\end{hyp}
Of course, this is a requirement that has to be checked for every
particular model, and it is difficult to give properties of
${\cal H}$ that imply it while remaining general enough to encompass a
wide range of useful models; therefore, we prefer to give a general
condition which reduces the problem of existence and stability to
the simpler one of existence of the characteristics.
In the above conditions one can follow a completely analogous argument
to that in the proof of Theorems \ref{thm:Existence} and
\ref{thm:stability}, and obtain the following result:
\begin{thm}[Existence, uniqueness and stability of measure solutions for a
general model]
\label{thm:Existence-general}
Take an operator ${\cal H}[\cdot]:{\cal A}\longrightarrow {\cal B}$
satisfying Hypotheses \ref{hyp:general-model} and
\ref{hyp:general-model2}, and $f_0$ a measure on $\R^d \times \R^d$
with compact support. There exists a solution $f$ on $[0,+\infty)$
to equation \eqref{eq:general-model} with initial condition
$f_0$. In addition,
\begin{equation}
\label{eq:f-continuous-general}
f \in \mathcal{C}([0,+\infty); \Pc(\R^d\times \R^d))
\end{equation}
and there is some increasing function $R = R(T)$ such that for all $T>0$,
\begin{equation}
\label{eq:supp-f-general}
\supp f_t \subseteq B_{R(T)} \subseteq \R^d \times \R^d
\quad \text{ for all } t \in [0,T].
\end{equation}
This solution is unique among the family of solutions satisfying
\eqref{eq:f-continuous-general} and \eqref{eq:supp-f-general}.
Moreover, given any other initial data $g_0\in \Pc(\R^d\times\R^d)$ and $g$
its corresponding solution, there exists a
strictly increasing function $r(t):[0,\infty)\longrightarrow \R^+_0$
with $r(0)=1$ depending only on ${\cal H}$ and the size of the support of
$f_0$ and $g_0$, such that
\begin{equation*}
W_1(f_t, g_t)
\leq
r(t)\, W_1(f_0, g_0),
\quad
t \geq 0.
\end{equation*}
\end{thm}
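The $W_1$ estimate above can be probed numerically in simple cases. A minimal Python sketch (restricted to 1-D empirical measures, an illustrative reduction; in one dimension the optimal transport plan pairs the sorted samples):

```python
import numpy as np

# W_1 between two equal-weight empirical measures on R. In one dimension
# the optimal coupling matches the sorted samples, so W_1 is the mean of
# the sorted pairwise distances.
def w1_empirical(xs, ys):
    return np.mean(np.abs(np.sort(xs) - np.sort(ys)))

rng = np.random.default_rng(0)
f0 = rng.normal(0.0, 1.0, size=200)   # samples from an initial datum f_0
g0 = f0 + 0.05                        # a small translate of the same datum

print(w1_empirical(f0, g0))           # a translation by 0.05 costs exactly 0.05
```

A pure translation is the simplest sanity check, since $W_1$ of a measure and its translate equals the translation distance.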
\section{Consequences of Stability}
\subsection{$N$-Particle approximation and the mean-field limit}
\label{sec:meso}
The stability theorems \ref{thm:stability} and
\ref{thm:stability-CS}, or the general version
\ref{thm:Existence-general}, give in particular a justification of
the approximation of this family of models by a finite set of
particles satisfying a system of ordinary differential equations.
We will state results for the general model
\eqref{eq:general-model}, under the conditions on $\cal H$ from
section \ref{sec:general_models}.
One can easily check that the following holds:
\begin{lem}[Particle solutions]
\label{lem:ode-is-measure-solution}
Assume $\cal H$ satisfies the conditions of Theorem
\ref{thm:Existence-general}. Take $N$ positive numbers
$m_1,\dots,m_N$, and consider the following system of differential
equations:
\begin{subequations}
\label{eq:swarming-odes}
\begin{align}
&\dot{x}_i = v_i,
\quad &i=1,\dots,N,
\\
&\dot{v}_i = {\cal H}[f^N](t,x_i,v_i),
\quad &i=1,\dots,N,
\end{align}
\end{subequations}
where $f^N:[0,T] \to \P_1(\R^d \times \R^d)$ is the measure defined
by
\begin{equation}
\label{eq:ode-equivalent-measure}
f^N_t := \sum_{i=1}^N m_i\, \delta_{(x_i(t), v_i(t))}.
\end{equation}
If $x_i,v_i:[0,T] \to \R^d$, for $i = 1,\dots,N$, is a solution to
the system \eqref{eq:swarming-odes}, then the function $f^N$ is the
solution to \eqref{eq:general-model} with initial condition
\begin{equation}
\label{eq:ode-equivalent-measure-initial}
f^N_0 = \sum_{i=1}^N m_i\, \delta_{(x_i(0), v_i(0))}.
\end{equation}
\end{lem}
As a consequence of the stability in $W_1$, we obtain a method to
derive the kinetic equations \eqref{eq:swarming},
\eqref{eq:CS} or \eqref{eq:general-model} based on the
convergence of particle approximations, as an alternative to the
formal BBGKY hierarchy in \cite{Bo,CDP}.
\begin{cor}[Convergence of the particle method]
\label{cor:N-particle}
Given $f_0\in \Po(\R^d\times\R^d)$ compactly supported and ${\cal
H}$ satisfying the conditions of Theorem
\ref{thm:Existence-general}, take a sequence $f_0^N$ of measures
of the form \eqref{eq:ode-equivalent-measure-initial} (with $m_i$,
$x_i(0)$ and $v_i(0)$ possibly varying with $N$), in such a way that
$$
\lim_{N\to \infty} W_1(f_0^N, f_0) = 0.
$$
Consider $f^N_t$ given by \eqref{eq:ode-equivalent-measure}, where
$x_i(t)$ and $v_i(t)$ are the solution to system
\eqref{eq:swarming-odes} with initial conditions $x_i(0)$,
$v_i(0)$. Then,
$$
\lim_{N\to \infty} W_1(f_t^N, f_t) = 0 ,
$$
for all $t\geq 0$, where $f = f(t,x,v)$ is the unique measure
solution to eq. \eqref{eq:general-model} with initial data $f_0$.
\end{cor}
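As an illustration of the particle method, the system \eqref{eq:swarming-odes} can be simulated directly for the self-propulsion/attraction model underlying \eqref{eq:swarming}. A minimal Python sketch (the smooth Gaussian attraction potential, the parameter values and the explicit Euler stepping are illustrative choices, not taken from the text):

```python
import numpy as np

# Particle approximation with equal weights m_i = 1/N: self-propulsion and
# friction v*(alpha - beta*|v|^2) plus attraction through the convolution
# grad U * rho^N, with U(x) = -exp(-|x|^2/2) (smooth, an illustrative choice).
alpha, beta, N, dt, steps = 1.0, 0.5, 50, 0.01, 500
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=(N, 2))   # positions in R^2
v = rng.normal(0.0, 1.0, size=(N, 2))   # velocities in R^2
m = np.full(N, 1.0 / N)                 # weights of the empirical measure

def grad_U(r):
    """Gradient of U(x) = -exp(-|x|^2 / 2): grad U(r) = r * exp(-|r|^2 / 2)."""
    return np.exp(-0.5 * np.sum(r**2, axis=-1, keepdims=True)) * r

for _ in range(steps):
    # H[f^N](t, x_i, v_i) = v_i*(alpha - beta*|v_i|^2) - (grad U * rho^N)(x_i)
    diffs = x[:, None, :] - x[None, :, :]             # x_i - x_j
    conv = np.sum(m[None, :, None] * grad_U(diffs), axis=1)
    speed2 = np.sum(v**2, axis=1, keepdims=True)
    a = v * (alpha - beta * speed2) - conv
    x, v = x + dt * v, v + dt * a                     # explicit Euler step

# the friction term drives the speeds toward sqrt(alpha/beta)
print(np.mean(np.linalg.norm(v, axis=1)))
```

The empirical measure $f^N_t$ of this simulation is exactly a particle solution in the sense of Lemma \ref{lem:ode-is-measure-solution}, up to the time-discretization error.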
\subsection{Hydrodynamic limit}
\label{sec:hydro}
We state our hydrodynamic limit result for eq.
\eqref{eq:swarming}. If we look for solutions of
(\ref{eq:swarming}) of the form
\begin{equation}
\label{eq:f-velocity-delta}
f(t,x,v) = \rho(t,x)\, \delta(v - u(t,x))
\end{equation}
for some functions $\rho\colon [0,T] \times \R^d \to \R$ and $u\colon [0,T] \times \R^d \to \R^d$, one
formally obtains that $\rho$ and $u$ should satisfy the following
equations:
\begin{subequations}
\label{eq:hydro}
\begin{align}
&\partial_t \rho + \dv_x(\rho u) = 0,\\
&\partial_t u + (u\cdot \nabla) u = u (\alpha - \beta \abs{u}^2) - \grad U *
\rho .
\end{align}
\end{subequations}
This is made precise by the following result, whose existence part
was already obtained in \cite{CDP}:
\begin{lem}[Uniqueness for Hydrodynamic Solutions]
\label{lem:existence-hydro}
Take a potential $U \in \mathcal{C}^2(\R^d)$ and assume that there
exists a smooth solution $(\rho,u)$ with initial data
$(\rho_0,u_0)$ to the system \eqref{eq:hydro} defined on the
interval $[0,T]$. Then, if we define $f:[0,T] \to
\P_1(\R^d\times\R^d)$ by
\begin{equation}
\label{eq:hydro-measure-equivalent}
\int_{\R^d\times\R^d} f(t,x,v)\, \phi(x,v) \,dx\,dv
= \int_{\R^d} \phi(x,u(t,x))\, \rho(t,x) \,dx
\end{equation}
for any test function $\phi \in {\cal C}^0_C(\R^d\times\R^d)$,
then $f$ is the unique solution to \eqref{eq:swarming} obtained
from Theorem \ref{thm:Existence} with initial condition
$f_0=\rho_0\delta(v-u_0)$.
\end{lem}
As a direct consequence of Lemma \ref{lem:existence-hydro} and the
stability result in Theorem \ref{thm:stability}, we get the
following result.
\begin{cor}[Local-in-time Stability of Hydrodynamics]
Take a potential $U \in \mathcal{C}^2(\R^d)$ and assume that there
exists a smooth solution $(\rho,u)$ with initial data
$(\rho_0,u_0)$ to the system \eqref{eq:hydro} defined on the
interval $[0,T]$. Let us consider a sequence of initial data
$f_0^k \in \P_1(\R^d\times\R^d)$ such that
$$
\lim_{k\to \infty} W_1(f_0^k, \rho_0\, \delta(v-u_0)) = 0.
$$
Consider the solution $f^k$ to the swarming eq. \eqref{eq:swarming}
with initial data $f_0^k$. Then,
$$
\lim_{k\to \infty} W_1(f^k_t, f_t) = 0 ,
$$
for all $t\in [0,T]$ with $f(t,x,v)=\rho(t,x)\, \delta(v -
u(t,x))$.
\end{cor}
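The defining identity \eqref{eq:hydro-measure-equivalent} says that, at each time, $f$ is the push-forward of $\rho\,dx$ under $x \mapsto (x, u(x))$. A minimal Python sketch at a fixed time in 1-D (the choices of $\rho$, $u$ and the test function $\phi$ are illustrative): a particle representation of $f$ is compared against grid quadrature of the right-hand side.

```python
import numpy as np

# f = rho(x) delta(v - u(x)) is the push-forward of rho dx under x -> (x, u(x)).
# Compare: Monte Carlo over particles (X_k ~ rho, V_k = u(X_k)) versus
# grid quadrature of int phi(x, u(x)) rho(x) dx.
rng = np.random.default_rng(2)

u = np.sin                              # velocity field u(x), illustrative
phi = lambda x, v: np.cos(x) + v**2     # test function phi(x, v), illustrative

# Particle representation of f: X_k ~ rho = N(0,1), V_k = u(X_k).
X = rng.normal(0.0, 1.0, size=200_000)
mc = phi(X, u(X)).mean()

# Grid quadrature of the right-hand side of the identity.
xs = np.linspace(-8.0, 8.0, 4001)
rho = np.exp(-xs**2 / 2) / np.sqrt(2 * np.pi)
quad = np.sum(phi(xs, u(xs)) * rho) * (xs[1] - xs[0])

print(mc, quad)   # the two representations agree up to Monte Carlo error
```

This is the monokinetic ansatz in computational form: no velocity marginal needs to be stored, since all mass at $x$ moves with the single velocity $u(x)$.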
\section*{Acknowledgments}
The authors acknowledge support from the project MTM2008-06349-C03-03
DGI-MCI (Spain) and 2009-SGR-345 from AGAUR-Generalitat de
Catalunya. This work was developed at the CRM-Barcelona during the
thematic program in ``Mathematical Biology'' in 2009.
Harvard University Consent Law
The views and opinions expressed in this page are strictly those of the page author. University community is, on the basis of sex, sexual orientation, or gender identity, excluded from participation in, denied the benefits of, or subjected to discrimination in any University program or activity. While it did not escalate into a greater legal fight, other occasions required OLC to step in as an adjudicator to resolve these disagreements. United Healthcare System, Inc. Past consent does not imply future consent; silence or an absence of resistance does not imply consent. This Article proceeds in the following manner. Queer Legal History: A Field Grows Up and Comes Out. Is the information provided reliable and accurate?
Consequently, the treating practitioner should be actively involved in the consent process and not rely wholly on others to obtain informed consent. Harvard Law Faculty Members Blast New Sexual Harassment. Among other things, she usually can appoint and supervise officials, preside at meetings, and distribute the work among her fellow commissioners or board members. PDF copy for your screen reader. Royal Dutch Petroleum Co. To my mind, there is no question that she was raped, almost certainly by more than one man. Please provide your Kindle email. There is needed, harvard university consent law, has always been granted a drunken consent and federal government funding for foreign enterprise when dealing with no. Before offering the conclusions of this paper, we would like to make a critical note. As harvard university school law was subject to harvard university consent law? He offered the university full access to his Facebook account and phone records. Data Help Control the Spread of the Coronavirus?
Title ix coordinator or reject the harvard law, had sought to this is potential foreign elements are you live news, which it does this means we think. Marital Rape A Non-criminalized Crime in India Harvard. If the commission shall designate at harvard university. However, the Court left open the possibility to consider dual service restrictions under the Appointments Clause. Understanding these patterns allows us to provide more relevant content to you and improve customer relationships. Why Support Harvard Magazine? It is not at all clear to me that this case, which occurred more than a decade ago, would be handled the same way today. Athletics programs are considered educational programs and activities. By providing your email address, and subsequently having your account verified, you may benefit from these offerings and promotions. Harvard Law School professors in a letter published in the Boston Globe on Oct. Columbia university of consent is one is offered to make an email, consent law university. Most of the findings that come from studying your sample will not be relevant to your personal health. This younger movement still claims quite a pedigree.
Who never give feedback on education and harvard business expenses as harvard university consent law professor emeritus who she heard that standard. WARN Act, did so without an iota of statutory interpretation. Graduate School of Education: suggestions for parents of kids in elementary and middle school, including book recommendations and videos for additional learning. Please accept terms of use. One of the limitations in the debate which our discussions revealed was the tendency of scholars interested in questions of sexual violence and consent to engage mainly with other work from within their own disciplinary boundaries. Personnel are insufficient, poorly paid, untrained, and unorganized. When a sexual relationship exists, effective steps should be taken to ensure unbiased evaluation or supervision of the student. Information somewhere on a sexual assault for harvard announced its own hands, or sexual act liability by harvard university consent law school have the territorial, insults with domestic? Creating a Definition of Rape in International Law. Curious which baby names stole the show this year?
School Law Practice Group, where he advises public school districts on a variety of general education, special education and labor and employment issues. Both medical and research needs as well as current legal and ethical constraints are met by our three levels of notification and consent processes. Congress generally legislates with domestic concerns in mind. These rules apply nowhere else and to no one else, but they do not conform to widely accepted definitions of sexual assault and rape in the rest of society. Hogue, Presidential Reorganization Authority: History, Recent Initiatives, and Options for Congress, Cong. Most data controllers simply state that users have to check the website for any changes in the privacy policy. We will use the code number to connect your sample to your health information that is stored in a computer database. University of Massachusetts at Amherst. But not discriminate on the bankruptcy abuse a prohibition to get the story does provide consent law of rape. Supreme Court modified this result. Subjects in school as a result of their state's legislation about sex education. Renty as well as his daughter, Delia, who is also seen stripped from the waist up. Genetic research may explore why some people are more likely than others to get certain diseases. Is the person who consents capable to consent?
Consent to research, in contrast, has its basis in ethical codes, statutes, and administrative regulations, with the courts playing a lesser role. This affidavit is submitted for the limited purpose of establishing probable cause to believe that LIEBER has committed the offenses described above. National Association of the Deaf et al v Harvard and MIT. Eddie Phillips wield solid science, medical knowledge, common sense and an endless supply of dad jokes to teach us how to eat better and feel better about it. Pat, Hanson and David Letterman. Ordinarily, students are asked to submit a written request that identifies the specific record or records they wish to inspect. Uniroyal Goodrich Tire Co. Of the Dissolution of Government. FEC commissioners, the Court encouraged Congress to act, which it did by amending the Federal Election Campaign Act and converted all FEC commissioners into PAS positions. The purpose is a mandatory for them carries with one parking spot that have to consent law university was elected prosecutors are heightened in. User preferences blocked performance cookies, analytics tag manager scripts will not be loaded. This is followed by more details about what will happen to your samples and health information. She was shocked when she heard the allegations.
All private parties to espionage, or other policies, harvard university law schools will not saying no sport from obtaining or expulsion overturned. Lord hath commanded him to be captain over his people, xiii. Many scholars noted that because independent agencies are not accountable to the President as executive departments, they are more susceptible to agency capture. Warn act does title ix coordinators listed compare and law university press conference that statute contained. It is not enough that other officers may be identified who formally maintain a higher rank, or possess responsibilities of a greater magnitude. The Commission shall have a Managing Director who shall be appointed by the Chairman subject to the approval of the Commission. She invited him to her room, and he testified that she initiated sexual contact when they entered it together. If questions still remain, the matter may be referred to Kari Limmer, MBA Registrar. Rave Guardian, use GPS technology allowing students to seek help through their smartphones if they are in a possibly dangerous situation. However, it should be noted that increased attention to different capacities and authorisation is not considered important by users. State Law will NOT be allowed within the complex.
Of course, colleges and universities, especially private ones, are fee to adopt their own rules of student conduct independent of the criminal law. Supreme Court or Circuit Court decisions, its importance here is to show how Executive Branch lawyers analyze whether an officer is principal or inferior. Given that impeachment is the process of removing a president. This designation power is a fairly common process that legal scholars and judges have considered to be part of agency design and have taken it for granted. Can add your items are legally expire when police investigated were being liquidated by harvard university law enforcement center with our founders envisioned a physics professor ho notes that it! In addition to statutes, there are numerous agency regulations that attempt to fill in any gaps. Fordham University Named in Class Action Lawsuit by Blind Individuals, Alleging Fordham. Victim or Victimizer: The Dilemma of Seduction in Classical Liberal Culture. When ZHENG arrived at the airport, he was met by Special Agents ofthe Federal Bureau of Investigation. The conduct must be both objectively and subjectively perceived as offensive. All students have access to their own education records and may contribute to them if they feel there is need for clarification.
The consent process and understanding how will never read privacy by consent law university. The understandability of harvard university consent law center for initiating these. Rape on a defense information with harvard law on. Ada Meloy, general counsel for the American Council on Education. This article proceeds in educational purposes only, consent law university graduate program at all. Despite her heavy scholarly output and pro bono work, Professor Colker is also an innovator in the classroom. RAINN says that consent should be reestablished at multiple stages of intimacy instead of being interpreted as a blanket statement. RICO itself possesses an extraterritorial reach.
Q: HTML5 local storage

So I'm trying to figure out where the client-side DB actually gets stored. If I create something like this
var mcTasks = {};
mcTasks.webdb = {};
//OPENING THE DATABASE
mcTasks.webdb.db = null;
mcTasks.webdb.open = function() {
var dbSize = 5 * 1024 * 1024; // 5MB
mcTasks.webdb.db = openDatabase('todo', '1.0', 'todo manager', dbSize);
}
Where does this get stored? In the cache? And if so, does that mean the info gets lost if I clear the cache?
I'm trying to figure out which solution is better, Web SQL (SQLite) or window.localStorage. Any help is appreciated, thanks!
A: It gets stored in a location on the client machine, as a persistent file.
This is separate from the cache, so clearing the cache shouldn't have an effect on the stored data.
Where exactly the file is stored depends on the implementation.
/* this file has been autogenerated by vtkNodeJsWrap */
/* editing this might prove futile */
#ifndef NATIVE_EXTENSION_VTK_VTKMASKFIELDSWRAP_H
#define NATIVE_EXTENSION_VTK_VTKMASKFIELDSWRAP_H
#include <nan.h>
#include <vtkSmartPointer.h>
#include <vtkMaskFields.h>
#include "vtkDataSetAlgorithmWrap.h"
#include "../../plus/plus.h"
class VtkMaskFieldsWrap : public VtkDataSetAlgorithmWrap
{
public:
using Nan::ObjectWrap::Wrap;
static void Init(v8::Local<v8::Object> exports);
static void InitPtpl();
static void ConstructorGetter(
v8::Local<v8::String> property,
const Nan::PropertyCallbackInfo<v8::Value>& info);
VtkMaskFieldsWrap(vtkSmartPointer<vtkMaskFields>);
VtkMaskFieldsWrap();
~VtkMaskFieldsWrap( );
static Nan::Persistent<v8::FunctionTemplate> ptpl;
private:
static void New(const Nan::FunctionCallbackInfo<v8::Value>& info);
static void CopyAllOff(const Nan::FunctionCallbackInfo<v8::Value>& info);
static void CopyAllOn(const Nan::FunctionCallbackInfo<v8::Value>& info);
static void CopyAttributeOff(const Nan::FunctionCallbackInfo<v8::Value>& info);
static void CopyAttributeOn(const Nan::FunctionCallbackInfo<v8::Value>& info);
static void CopyAttributesOff(const Nan::FunctionCallbackInfo<v8::Value>& info);
static void CopyAttributesOn(const Nan::FunctionCallbackInfo<v8::Value>& info);
static void CopyFieldOff(const Nan::FunctionCallbackInfo<v8::Value>& info);
static void CopyFieldOn(const Nan::FunctionCallbackInfo<v8::Value>& info);
static void CopyFieldsOff(const Nan::FunctionCallbackInfo<v8::Value>& info);
static void CopyFieldsOn(const Nan::FunctionCallbackInfo<v8::Value>& info);
static void NewInstance(const Nan::FunctionCallbackInfo<v8::Value>& info);
static void SafeDownCast(const Nan::FunctionCallbackInfo<v8::Value>& info);
#ifdef VTK_NODE_PLUS_VTKMASKFIELDSWRAP_CLASSDEF
VTK_NODE_PLUS_VTKMASKFIELDSWRAP_CLASSDEF
#endif
};
#endif
{"url":"https:\/\/www.nature.com\/articles\/s41598-022-19953-4?error=cookies_not_supported&code=c9a3fea9-c4ab-4852-9735-5d6326d3a769","text":"## Introduction\n\nSafety and efficacy are the two main pillars of any therapeutics and cell-based therapies and imaging are no exception. Not much is known, to effectively assess the biodistribution, clearance and efficacy of cell-based therapies due to the absence of an appropriate noninvasive imaging tool. In vivo cell tracking could provide information about distribution, localization, and clearance of various cell-based therapies including immune cells (CAR-T cells), stem cells and hepatocytes post-administration in the body. There are various non-invasive molecular imaging modalities that could be employed to track cell based therapies including optical imaging via fluorescence imaging (FLI)1,2, bioluminescence imaging (BLI)3,4, and ultrasound-guided photoacoustic imaging (PA)5,6,7. Radiology imaging including magnetic resonance imaging (MRI)8,9,10, computed tomography (CT)11,12,13, and nuclear medicine imaging such as positron emission tomography (PET)14,15,16,17,18 and single photon emission computed tomography (SPECT)19,20, could also be employed to effectively measure the distribution, localization, and clearance of various cell-based therapies over time and to shed light on safety and efficacy.\n\nAmong various imaging modalities, optical imaging modalities are restricted to small animals due to limited tissue penetration (1\u20132\u00a0mm) in humans. MRI and CT provide high resolution anatomical information, but have low sensitivity in both animals and humans. Both PET and SPECT are advantageous over other techniques and are often integrated with CT and MRI. 
The PET\/CT or SPECT\/CT or PET\/MRI provide quantitative and temporal distribution of immune and stem cells in animals and patients with no limitation of tissue penetration due to high energy gammas21,22,23,24.\n\nCell radiolabeling using a direct radiolabeling approach with various SPECT radiopharmaceuticals such as [99mTc]Tc-HMPAO (t1\/2\u2009=\u20096.01\u00a0h)27,28,29, and [111In]In-oxine (t1\/2\u2009=\u200968.2\u00a0h)30,31,32,33 have been used to track leukocytes for infection and inflammation imaging over the past four decades. SPECT is a powerful clinical imaging tool with lower usage cost than PET since an onsite cyclotron is not needed. PET, however, has many advantages over SPECT including two to threefold higher sensitivity, superior spatial resolution in the clinical setting, and with its quantitative nature it is a preferred imaging modality for tracking a single cell or small number of administered radiolabeled cells with more precise quantification and hence, requires lower radiation exposure34. Examples of commercial PET probes used to label cells include [18F]FDG (t1\/2\u2009=\u2009109.7\u00a0min, \u03b2+ \u2009=\u200997%)35, [64Cu]Cu-PTSM (t1\/2\u2009=\u200912.7\u00a0h, \u03b2+ \u2009=\u200917.9% )36 and [68\u00a0Ga]Ga-oxine (t1\/2\u2009=\u200968\u00a0min, \u03b2+ \u2009=\u200988.8%)37,38.\n\nRecently, among various PET radioisotopes, zirconium-89 (\u03b2+ \u2009=\u200922.3%) is gaining popularity for cell tracking due to its well established cyclotron-mediated production, longer half-life of 3.27\u00a0days and low average positron energy (E\u03b2+\u2009=\u20090.395\u00a0MeV). 
This enables monitoring of radiolabeled cells up to 3-weeks, either through direct cell labeling (also called non-specific cell labeling agents)39,40 or indirect labeling mediated through antibodies41,42, peptides43, proteins44 and nanoparticles45,46,47.\n\nVarious chelators used for the radiolabeling of cells with 89Zr are tropolone, malonate, hydroxamates, and oxine (8-hydroxyquinoline). Among these, oxine forms a lipophilic complex with 89Zr and enters the cells passively. To date, [89Zr]Zr-oxine is a commonly used radiotracer to label various cells including tumor cell lines48,49, bone marrow\u00a0cells50,51, T cells52, NK cells53, white blood cells (WBCs)54, stem cells (SCs)55 and leukocytes56. However, efflux of 89Zr from cells labeled with [89Zr]Zr-oxine remains a challenge. Recently, Friberger et al. reported a one-step clinically translatable method of synthesis of [89Zr]Zr-oxine with a cell labeling efficiency of 61\u201368% with human decidual stromal cells (hDSCs), bone marrow-derived macrophages (rMac) and human peripheral blood mononuclear cells (hPBMCs). However, a 29\u201338% apparent efflux of 89Zr from the labeled cells raised a further concern of radiotoxicity and non-specificity of the signal57.\n\nBesides [89Zr]Zr-oxine, the other reported method of cell labeling was covalent attachment of radiolabeled [89Zr]Zr-DFO-Bn-NCS complex with the primary amines present on the cell surface proteins to form stable thiourea bonds, which has solved the efflux problem observed with [89Zr]Zr-oxine57,58,59,60. The [89Zr]Zr-DFO-Bn-NCS has been successfully used to radiolabel mouse melanoma\u00a0cells, mouse dendritic cells and human mesenchymal stem cells with insignificant efflux of free 89Zr from [89Zr]Zr-DFO-Bn-NCS over time (7\u00a0days-post radiolabeling)58. Additionally, a better version of the DFO chelator as DFO* has been developed to further strengthen the stability of 89Zr complexation and has shown lower bone uptake over time61. 
Various other chelators are also being developed to address in vivo stability of the 89Zr complex over time62,63,64,65,66.\n\nIn this work, we have optimized and compared the radiolabeling yields of WBCs and SCs using three different ready-to-use labeling synthons [89Zr]Zr-Hy3ADA5-NCS66, [89Zr]Zr-Hy3ADA5-SA66 and [89Zr]Zr-DFO-Bn-NCS57,58,59,60 (Fig.\u00a01), and evaluated their applications in cell trafficking to better understand the biodistribution\/pharmacokinetics of cell based therapies. This approach could be extended to various other cell-based therapies like CAR-T cell therapy.\n\n## Results and discussion\n\n### Production of [89Zr]ZrCl4 and radiosynthesis of [89Zr]Zr-DFO-Bn-NCS, [89Zr]Zr-Hy3ADA5-NCS and [89Zr]Zr-Hy3ADA5-SA\n\nThe PET isotope 89Zr was produced and purified in-house using a cyclotron as described earlier by Pandey et al.67,68,69,70 in a high apparent molar activity of 17.0\u201323.13\u00a0GBq\/\u00b5mol, as assessed by complexing purified [89Zr]ZrCl4 with different amounts of DFO-Bn-NCS (Fig. S1, supplementary figure). All three synthons were successfully conjugated with 89Zr at 37\u00a0\u00b0C; pH 7.5\u20138.0 for 30\u00a0min in 72\u201398% radiolabeling yield. The DFO-Bn-NCS showed the highest complexation yield of 97.76\u2009\u00b1\u20090.31% (n\u2009=\u20093) followed by Hy3ADA5-NCS, 88.85\u2009\u00b1\u20090.05% (n\u2009=\u20093) and Hy3ADA5-SA, 71.58\u2009\u00b1\u20090.47% (n\u2009=\u20093) (Table 1, Fig.\u00a02). These results indicate that acyclic chelator DFO-Bn-NCS imparts faster binding kinetics as compared to hybrid \u201ccyclic-acyclic\u2019 chelators Hy3ADA5-NCS and Hy3ADA5-SA. 
This is consistent with the complexation yield reported in our previous work, where a higher complexation yield was observed for DFO derivatives as compared to Hy3ADA derivatives66.\n\n### Small animal PET imaging and biodistribution of 89Zr labeled WBCs\n\nOverall, the in vivo stability of [89Zr]Zr-DFO-Bn-NCS as demonstrated here is promising and superior over other synthons [89Zr]Zr-Hy3ADA5-NCS and [89Zr]Zr-Hy3ADA5-SA. Of these, [89Zr]Zr-Hy3ADA5-SA showed the lowest in-vivo stability but had considerably higher in vitro stability66. The biodistribution of radiolabeled WBCs in the rest of the major organs are presented in Figs. 6 and 7 and Table 2, indicating mild uptake in lung, heart, muscle, pancreas, and skin at 7\u00a0days post injection.\n\n### Small animal PET imaging of un-chelated [89Zr]ZrCl4\n\nThe in vivo characteristics of un-chelated [89Zr]ZrCl4 was also investigated (Fig.\u00a011). The small animal PET imaging showed a high accumulation of free 89Zr in the bones at 4\u00a0h and did not distribute to the lung, liver, spleen, or any other organs at any other time points as were noted with radiolabeled WBCs and SCs. The radioactivity increased significantly on day 2 and remained in the bones until day 7, attributed to the entrapment of osteophilic 89Zr and its poor clearance from the bones (Fig.\u00a012, Table 4). This observation is consistent with findings by Abou et al. 2011 demonstrating that [89Zr]ZrCl4 is a bone-seeking species and accumulates in bones and joints post-administration in mice71.\n\n## Materials and methods\n\n### General\n\nThe 89Zr used in this study was produced on a PETtrace cyclotron (GE Healthcare, Waukesha, WI) using 89Y target foil (0.1\u00a0mm; 50 X 50\u00a0mm, 99.9%), which was purchased from Alfa-Aesar, Haverhill, MA. The trace metal grade nitric acid (67\u201370%) and hydrochloric acid (34\u201337%) were purchased from Thermo Fisher Scientific, Waltham, MA. 
Sodium bicarbonate, oxalic acid dehydrate (TraceSELECT\u00ae\u2009\u2265\u200999.9999% metal basis), sodium carbonate, sodium citrate dihydrate and HPLC grade acetonitrile were purchased from Sigma Aldrich, St. Louis, MO. The silica gel iTLC was purchased from Agilent Technologies, Santa Clara, CA. The chelator p-SCN-Bn-Deferoxamine or DFO-Bn-NCS (\u2265\u200994%) was purchased from Macrocylics, Plano, TX, whereas the other two chelators Hy3ADA5-NCS and Hy3ADA5-SA were synthesized as described by Klasen et al.66. The empty Luer-Inlet SPE cartridges (1\u00a0mL) with frits (20\u00a0\u00b5m pore size) were purchased from Supelco Inc (Bellefonte, PA) and Chromafix\u00ae 30-PS-HCO3 PP cartridges (45\u00a0mg) were purchased from Macherey\u2013Nagel, Duren, Germany. The Millex \u00ae-GV filter (0.2\u00a0\u00b5m) was purchased from Millipore Sigma, Burlington, MA. The hydroxamate resin was synthesized in-house as demonstrated by Pandey et al.67,68,69,70 The Thermomixer was purchased from Eppendorf, Hamburg, Germany.\n\n### Production and purification of [89Zr]ZrCl4\n\nThe 89Zr was produced using yttrium foil on a solid target through a 89Y(p,n)89Zr nuclear reaction in a PETtrace cyclotron as described previously by Pandey et al.68. 89Zr was purified first as [89Zr]Zr-oxalate and then converted to [89Zr]ZrCl4 using activated Chromafix 30-PS-HCO3 SPE as demonstrated by Pandey67,68,69,70 and Larenkov et al.72, respectively. The final [89Zr]ZrCl4 was eluted in \u2053 0.5\u00a0mL of 1.0\u00a0N HCl and then dried using a steady flow of nitrogen gas in a V-vial at 65\u00a0\u00b0C.\n\n### Apparent molar activity of [89Zr]ZrCl4\n\nThe apparent molar activity of 89Zr was estimated using a DFO-Bn-NCS titration method. In this method, 10\u00b5L [89Zr]ZrCl4 (36.4\u00a0MBq) was added to 90\u00b5L de-ionized H2O. To this, 4 \u00b5L of 0.5\u00a0M Na2CO3 was added to neutralize and adjust the pH to 7.5\u20138.0. 
To the neutralized mixture, 0.01\u201310\u00a0\u00b5g of DFO-Bn-NCS in 4\u00b5L of DMSO was added and mixed. The complexation mixture was then incubated at 37\u00a0\u00b0C for 1\u00a0h. After 1\u00a0h, the degree of 89Zr complexation was determined with respect to the DFO-Bn-NCS concentration using radio-TLC with 20\u00a0mM sodium citrate (pH 4.9\u20135.1) as a mobile phase. The complexed 89Zr as [89Zr]Zr-DFO-Bn-NCS showed at the origin with Rf\u2009=\u20090, whereas free or un-complexed 89Zr had an Rf of 0.99 (solvent front). The half maximal inhibitory concentration (IC50) of DFO-Bn-NCS in mg\/mL was calculated using non-linear \ufeffregression curve fitting analysis.\n\nThe analysis was performed using analysis software\u2014GraphPad Prism 9 (GraphPad Software, San Diego, CA). The minimum ligand concentration for which 100% complexation occurred was estimated by multiplying the IC50 by 2, and the apparent molar activity (GBq\/\u00b5mole) and the apparent specific activity (GBq\/mg) of 89Zr were calculated by correcting for the total activity divided by \u00b5moles or mg of DFO-Bn-NCS needed for 100% 89Zr complexation.\n\nThe radiosynthesis of the different synthons [89Zr]Zr-DFO-Bn-NCS, [89Zr]Zr-Hy3ADA5-NCS and [89Zr]Zr-Hy3ADA5-SA were performed using a modified procedure demonstrated in our previous work.58 The purified [89Zr]ZrCl4 was resuspended in appropriate volume of 0.1\u00a0N HCl and then neutralized to pH \u2053 8.0 with 0.5\u00a0M Na2CO3. The neutralized [89Zr]ZrCl4 solution (70 -100 \u00b5L) containing\u2009~\u200921\u00a0MBq of 89Zr was used in the case of DFO-Bn-NCS, whereas\u2009~\u200961 to 68\u00a0MBq of 89Zr was used in the case of Hy3ADA5-NCS and Hy3ADA5-SA. To this neutralized [89Zr]ZrCl4 solution, 4 nmoles of DFO-Bn-NCS or Hy3ADA5-NCS or Hy3ADA5-SA (prepared in DMSO) were added in separate reactions. The resultant reaction mixtures were stirred for 30\u00a0min at 37\u00a0\u00b0C in a thermomixer at 500\u00a0rpm. 
The radiolabeling efficiency was determined at different time points by silica radio-TLC using 20 mM sodium citrate (pH 4.9–5.1) as the mobile phase.

### Cell preparation

The human mesenchymal SCs were gifted by Dr. Atta Behfar from the Department of Cardiovascular Medicine, Mayo Clinic, Rochester, MN, USA, and WBCs were isolated from peripheral blood provided by the Division of Transfusion Medicine, Mayo Clinic, Rochester, MN, USA. The isolation of WBCs from the peripheral blood was performed using the Lymphoprep™ (STEMCELL Technologies Inc., Canada) gradient centrifugation method as per the manufacturer's instructions. The final WBC solution was washed with Hank's Balanced Salt Solution.

### Trypan blue exclusion assay cellular viability test

The effect of radiolabeling on cellular viability was assessed using the trypan blue exclusion assay within 1 h of labeling.

### Animals

8–10 week old athymic nude mice (male and female, 1:1) were obtained from Charles River Laboratories or Taconic Biosciences, Inc.

### PET imaging and ex vivo biodistribution studies

After radiolabeling, the radiolabeled WBCs (0.1–0.6 × 10⁶ cells; 0.03–0.11 MBq) and SCs (0.1–1 × 10⁶ cells; 0.1–0.15 MBq) were injected via tail vein into groups (n = 3) of athymic nude mice. PET images were acquired at 4 h, 2 days, 4 days, and 7 days post-injection (p.i.) using a small-animal PET scanner. Free [89Zr]ZrCl4 (0.15–0.19 MBq) was also injected intravenously via the tail vein. The small-animal PET images were visualized, analyzed, and scaled to SUV using MIM 7 image analysis software (MIM Software Inc., Cleveland, OH, USA). The PET images are shown as maximum intensity projection (MIP) images in the coronal and sagittal planes. The animals were euthanized at 7 days p.i., and organs/tissues were collected to measure the standardized uptake value (SUV) in major organs.
Animals were euthanized via cardiectomy under isoflurane anesthesia, as approved by the Institutional Animal Care and Use Committee (IACUC) of the Mayo Clinic, Rochester, MN, USA. SUV was calculated using the following formula:

$$\text{Standardized Uptake Value} = \frac{\text{radioactivity concentration in tissue } (\upmu\text{Ci/g})}{\text{injected dose } (\upmu\text{Ci}) \,/\, \text{body weight (g)}}$$

### Statistics

The data were analyzed using Microsoft Excel, and the results were compared using an unpaired Student's t-test. Differences were regarded as statistically significant for p < 0.05.

### Ethical standards

Studies were conducted with proper use and care of animals, as approved by the Institutional Animal Care and Use Committee of the Mayo Clinic, Rochester, MN, USA. Additionally, all methods were performed in accordance with the Institutional Animal Care and Use Committee's guidelines and regulations. These guidelines are equivalent to the ARRIVE guidelines, and therefore all methods were also performed in accordance with the ARRIVE guidelines.
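The SUV formula above is a one-line computation; as a sketch (the example numbers are hypothetical):

```python
def standardized_uptake_value(tissue_uci_per_g, injected_dose_uci, body_weight_g):
    """SUV = tissue activity concentration / (injected dose / body weight).

    The result is dimensionless: (uCi/g) divided by (uCi/g).
    """
    return tissue_uci_per_g / (injected_dose_uci / body_weight_g)

# Hypothetical example: 0.5 uCi/g in a tissue, 100 uCi injected, 25 g mouse
suv_example = standardized_uptake_value(0.5, 100.0, 25.0)  # -> 0.125
```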
Augusto Fanjul
Augusto Fanjul, Constructed
Oil on canvas, 2020, 36"x30"
AUGUSTO FANJUL currently lives and works in Brooklyn, New York. He studied Fine Arts at the National School of Fine Arts of the Dominican Republic (2006-2010), graphic design at the Autonomous University of Santo Domingo (UASD), and Fine Arts at Altos de Chavon, School of Design (2012-2014), and received his master's degree in Painting from the New York Academy of Art (2018-2020). His work has been shown in various locations within the United States, such as the Queens Museum of Art in New York, and in the Dominican Republic. The National School of Visual Arts (ENAV) awarded him the first prize for sculpture and painting and the second prize for academic excellence in 2007. He was also selected and awarded by the Embassy of Qatar in the Dominican Republic as part of an annual art catalog in 2010. He received a scholarship at Altos de Chavon, School of Design (2012) and was twice awarded the New York Academy of Art Patron Scholarship (2018-2020).
"Constructed" is a result of the way he perceives shapes and planes in his subjects. These ideas, connected with a portrait, raise many questions of perception for both the artist and the viewer. Beauty and representation in the figure are seen through his eyes as the painter. For this piece he worked with a live model to capture the essence of the portrait and how his pure perception of the shadows and light changes through every plane direction. By intertwining strong brushstrokes loaded with pigment, he tries to sculpt the figure with oil paint. Making the figure feel bold and powerful, this relationship exemplifies the concept of beauty within the incompleteness of a work. There is a balance between what we see and what we perceive, while portraying an emotional aspect of the piece that completes the image. The connection between his subject, what his mind processes, and the result of his mark-making on the canvas is all an act of mere perception and discovery.
I'm 26, residing in the UK and working as a freelance writer with many professional and reliable essay writing services, and I'm quite happy with this profession. I want to gain some knowledge and be a part of this community.
Hello everyone I am also a newbie here. Hope to get something informative from this site.
Welcome. We'll be glad to see you joining us!
Hi guys, I am new here and glad to meet you all.
@lucky1988 , you are welcome!
I'll make this brief. Hello.
I couldn't think of anything else to say, so I thought I'd just quote the cenobite himself.
I should really get out more.
I noticed today that i was welcomed in the moderators group now!
Thanks to @Global-Moderators @Community-Moderators @developers !
I will do (and have already done) my best to help and support Antergos!
Welcome to all the newbies and linuxresistance members here!!!
I tested the following languages: Japanese, Korean, Chinese (PRC) and Thai. All worked perfectly! Everything imported back into ToolBook just fine! Since we had not been able to use ToolBook for double-byte language content, we had built up a library of other tools that we use depending on the type of courses we are building. Now we should be able to eliminate one or more of these other applications, which will provide the cost savings to justify purchasing the TTS.
Here is another example of how we use Content Connection: our programs are geared toward manufacturing companies, but the topics are also relevant for service companies, though they require a "translation." No problem... we handle it just like a foreign translation. Extend this idea to doing "custom" versions of programs for various customer groups/types to meet specific needs and you have a very powerful tool for under $1,000.
Exercise bikes are convenient and effective appliances for those who have little time and little space to exercise at home. However, most exercise bike models on the market are designed for users over 5 feet tall. If you're under 5 feet, it will be hard to find a model that fits you.

That's why you should have a look at the Xspec Foldable Stationary Upright Exercise Bike, since it is designed especially for shorter people. It will be a wonderful tool to help you maintain a healthy lifestyle.
Xspec is a line of upright bikes provided by Crosslinks. Crosslinks is an e-commerce company founded in 1999 in California. Since its founding, it has grown quickly thanks to the growth of the Internet.

Crosslinks is a distributor that connects directly to the factories. The company is dedicated to bringing customers the best-quality products at the most affordable prices, as its number one core value is customer satisfaction. It is also committed to delivering products within 3 business days, and its customer service is very good.

It provides a wide range of household products, including home & garden appliances and accessories, kid playpens and toys, sports and fitness equipment, and medical aids. In terms of exercise bikes, it offers only the Xspec line of about 10 upright bikes at really good prices, from $100 to $200.

Although the company is relatively young and has little experience with exercise bikes, we can trust its novelty and innovation, and of course its competitive prices.
The Xspec Foldable Stationary Upright Exercise Bike is an upright bike. However, it's not really a traditional upright bike but a hybrid between a recumbent bike and an upright bike.
In comparison to a recumbent bike, an upright bike might be less comfortable, but it certainly helps you exercise more effectively. You need to put in more effort on an upright bike than on a recumbent bike, which causes more stress on your back and joints, though it's still easy for most users. And since hard work pays off, you can burn more calories. Overall, an upright exercise bike is perfect for improving your cardiovascular fitness, losing weight, and getting fit.

Being designed like a real bike, with a high seat and pedals under the seat, an upright bike gives you an outdoor cycling feeling that is interesting and challenging enough to keep you motivated, and you might work out for hours without noticing.

However, the Xspec Foldable Stationary Upright Exercise Bike has its pedals positioned a bit forward. This makes the bike more like a recumbent bike and provides you with more comfort, but in exchange its workouts are less intense. The bike supports your joints well, with less stress on the knees and ankles, so it can be used for rehabilitation therapy as well.
Furthermore, an upright bike normally has a small footprint and this bike also has the folding-ability, so you can save a lot of your home space.
– The Xspec Foldable Stationary Upright Exercise Bike looks nice and is really space-saving. Its dimensions are only 17W x 44H x 27L inches (43.2W x 111.8H x 68.6L centimeters) and it weighs 32 pounds (14.5 kilograms), really compact. So it is great for small apartments.

– It weighs only 32 pounds but can carry up to 220 pounds (99.8 kg). That's not a high weight capacity; however, since this bike is designed for short people, the capacity is fair. And the Xspec Foldable Stationary Upright Exercise Bike works even for users under 5 feet.

– This upright bike is also simple and easy to use; all you need to do is climb on the bike and pedal. There are not many functions on the bike; it focuses on the core functions, making it really affordable.
– Xspec Foldable Stationary Upright Exercise Bike uses a magnetic resistance mechanism which provides a quiet and smooth operation. Your cycling experience is assured to be comfortable and will not interrupt your movie/music enjoyment.
– This bike is definitely not designed for heavy-duty work or musculoskeletal training.
The Xspec Foldable Stationary Upright Exercise Bike comes with two colour options, blue and orange. Both models look young and dynamic with the white frame. The frame is made of solid 14-gauge steel tubes forming an X-frame design. Together with the front and rear stabilizer bars, the sturdy frame makes the bike stand firm without any rocking or any feeling that it might tip you off.
Front and rear bars are rubber wrapped so that the bike won't move while you are exercising. These rubberized foot bases also prevent scratching when you drag the bike across a room.
You can see that the bike is uniquely designed to save space and for portability, as it is really compact, has the ability to fold up, and the rubberized foot bases. Indeed, when the bike is folded up, its dimensions are 16 x 46 x 8 inches, so you can store it in a cabinet.
This design is specialized for short people getting on and off the bike. It's really easy; you won't have any issues getting on or off.
Using a magnetic resistance mechanism, Xspec Foldable Stationary Upright Exercise Bike provides you with 8 levels of resistance that can easily adjust for an easier or more difficult workout. All you need is to turn the knob in front you.
To increase the tension, turn the knob in a clockwise direction. To decrease the tension, turn the knob in a counterclockwise direction.
The seat can be adjusted to fit users of many heights, but overall the bike is most suitable for shorter people. The seat is quite large and cushioned, and its surface is a soft faux leather. There is no shock-absorption system; however, it's quite okay to sit on, not uncomfortable at all.
But if you want more comfort, you can purchase an additional seat cover, such as Cruiser Gel Seat Cover.
It also doesn't come with a backrest, since it's an upright bike. That's a drawback, as I personally think an additional backrest would be a perfect fit for the light workouts this bike offers.
There are two handlebars in front of you. When you grasp these handlebars, your upper body leans forward and slightly downward, creating a dynamic posture for effective workouts.

However, because the pedals are positioned forward, your body cannot lean too far down; otherwise the posture becomes quite uncomfortable. That's a consequence of the hybrid design, which can be considered a drawback.
On each handlebar, a Hand Pulse Sensor is integrated to monitor your heart rate. Measuring your pulse rate is an important tool for exercising correctly and efficiently. The more steady and prolonged the elevated heart rate is during the workout, the more fat is burned. This important piece of health data will help you better understand your health and fitness status.
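To make the pulse readings actionable, you could compare them against a target zone. The sketch below uses the common "220 minus age" rule of thumb for maximum heart rate, which is an assumption on my part and not something specified by the manufacturer:

```python
# Illustrative only: interpreting the bike's pulse-sensor readings with
# the common "max HR ~ 220 - age" rule of thumb (an assumption, not a
# manufacturer specification).

def fat_burn_zone(age, low_frac=0.60, high_frac=0.70):
    """Return a (low, high) target heart-rate range in beats per minute."""
    max_hr = 220 - age
    return round(max_hr * low_frac), round(max_hr * high_frac)

lo_bpm, hi_bpm = fat_burn_zone(30)  # -> (114, 133)
```

Staying within the returned range during a session keeps the elevated heart rate steady, which is exactly the condition described above for burning more fat.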
Two large pedals with adjustable foot straps are in front of you. Larger pedal design with safety strap prevents any foot slippage when exercising, giving you proper alignment for efficient pedalling with ultimate control. You will never have to be concerned that your feet would slip out while riding.
These pedals are counterweighted so that they are always upright and ready for you to use. They also support your ankles, so your ankles don't always need to exert force to hold the pedals.

It is a 3-piece "high-torque" cranking system that gives you a smooth and consistent pedalling motion without any noise. Together with the precision-balanced flywheel and magnetic resistance mechanism, the bike is completely quiet and won't interfere with watching TV or listening to music.
At the handlebars, there is a small LCD display equipped for you to track your workout progress. It displays speed, distance, time, and calories burned. There's only one button, and it's super simple to use the display.
That's all about the design. The bike is really simple in its design, and it is also simple to use. Let's find out what kind of exercise experience the bike provides.
Xspec Foldable Stationary Upright Exercise Bike is simple to use. You just need to climb on the bike, grasp the handlebars, adjust the tension level and pedal.
– The bike provides you with smooth and stable pedalling.
– It has a display to monitor your workout effectiveness.
– It's easy on your knees and it's great for both rehabilitation and strength training.
– The bike is specialized for short people.
– The lack of interesting programmed exercise modes.
Like other upright bikes, when you are pedalling your body leans forward and your hands grasp the front handlebars, creating a posture much like a real bicycle-riding posture. This dynamic posture allows you to put in more effort to pedal fast, which is great for weight loss and for keeping you healthy.

An original upright bike has the pedals right under the seat, which lets you form a more dynamic posture, whereas the Xspec Foldable Stationary Upright Exercise Bike has the pedals in front of the seat, which makes your posture less dynamic and less like a real bicycle-riding posture. In exchange, there is less stress on your hips, knees, and ankles.

There is no backrest, so neither your back nor your shoulders are supported. However, your knees and ankles are fairly well supported, so you can be confident that you will not be injured. And you should lower your upper body so that your back does not get tired too soon.

There are many bikes that have the same design as the Xspec Foldable Stationary Upright Exercise Bike. Most of them make it hard for people with short legs to pedal fast, since the distance from the seat to the farthest point of the pedals is a little too long. However, you won't face that problem with the Xspec Foldable Stationary Upright Exercise Bike, as it is designed so well for short users.
To create momentum and resistance, the Xspec Foldable Stationary Upright Exercise Bike uses a flywheel with a magnetic resistance mechanism. This is among the most advanced technology for exercise bikes at present and provides a quiet and smooth operation. The first few rotations are quite hard, but after that the pedalling gets smoother and easier, which makes it enjoyable.

Also, the solid frame and sturdy design give you a firm feeling during your workouts. The pedals' operation is rhythmic and supports your ankles. You are assured of good full-body support, so you won't suffer any injuries using this bike.

In fact, many people with injured knees have used the bike for rehabilitation or general fitness and have had good results. At the hardest resistance level, the bike is also good for losing weight. The bike is not suitable for muscular training due to its lightweight design, but at the hardest level you can train your lower-body muscles as well; however, you should know that it won't be as effective as other upright bikes.

Heat: the bike's open design supports air circulation, so you'll stay cool during long workouts.
The Xspec Foldable Stationary Upright Exercise Bike has a really good price, so it's a great option for beginners doing light workouts. And the bike can be considered the best exercise bike for short people.
As said above, you need to assemble it yourself as it arrives unassembled. If you don't want to assemble it yourself, you can pay an additional price for an Exercise Bike Assembly Service Package. However, the assembly is really easy since the bike is small and compact. I think you can handle it well.
It's Crosslinks' policy that you may return the bike for any reason, on brand-new and unused items, within 30 days of the purchase date. The returned item must be in 100% brand-new, resellable condition and in the original packaging; otherwise it will only be 50% refunded or be denied completely. All returns are subject to a restocking fee of 10% or more depending on the condition. In other words, you can return the bike within 30 days unless the item has been tried, assembled, or utilized.

For the warranty program: within 3 business days of receiving the bike, if you find any damaged, defective, or wrong parts, you can contact Crosslinks and they will be happy to walk you through their return process. Defective merchandise can be returned for a full refund; however, Crosslinks will require photos (or video) of the defect prior to authorizing the return. Crosslinks will cover all shipping charges in these situations. However, you must keep the original box and accessories.
For the workmanship, you are assured that the product is free from defects of workmanship for 30 days from the date of sale.
The Xspec Foldable Stationary Upright Exercise Bike has a high Amazon star rating. This index uses an AI model instead of a simple average to give customers better information. In particular, it considers factors like the age of reviews, helpfulness votes from customers, and whether the reviews are from verified purchases to calculate the final score.
– I am 5'1″, will it be hard to reach the pedals?
As said above, the bike is adjustable and perfect for short people. Even under-5-feet users can use this bike.
– Is the bike easy to get on? I'm quite short, and the last bike was too high for me to get on.
Yes. It's really easy to get on the bike. Even for those have injured back.
– Can the handlebars be adjusted?
No. Only the seat is adjustable.
In the customer reviews section, most of the comments are positive. According to this section, people really appreciate Crosslinks' customer service, and they rated the bike as being as good as $300-$400 bikes. Many comments said that the bike is perfect for short people and works stably, smoothly, and quietly.

However, there are some negative comments on the warranty, since the provider only offers a 30-day warranty, and buyers have had to replace broken parts themselves after this short period of time.

It's fair to say that this is a high-end exercise bike at a low price, which is great for short people doing light workouts. The bike is super space-saving and sturdy. By committing to exercise with this bike, you can surely improve your general health and stay fit. It is also good for physical rehabilitation and physical therapy treatment.
Alison Morris Wiki, Age, Husband, Married, Engaged, Fox 5
Date: 15 Oct, 2018
Birthday: 1979-10-31
Husband/Spouse: Scott Roslyn (m. 2013)
"Rome was not built in a day."
It indeed takes time to reach the goals one has set in life. Along with time, one needs patience, sheer determination, and hard work to fulfill one's desires. The best example of a person with all three qualities is none other than the American reporter Alison Morris.

Since childhood, Alison dreamt of becoming a successful journalist, so she put hard-core effort into her passion. Eventually, her work earned her a post as a reporter at the top American channel, Fox 5.
Alison Morris Wiki
39-year-old Alison was born on 31 October 1979 in Long Island, New York. Apart from her mother tongue, she is also fluent in French. She prefers to run in her leisure time.
She earned her Bachelor degree in Sociology from Yale University. Likewise, she is a graduate of Our Lady of Mercy Academy in Syosset.
Alison Morris Married, Family
A promising career can't always fill the personal spaces that need to be filled by loved ones, and Alison Morris is no exception.

Well, she is married to her husband, Scott Roslyn, a brand strategist.

After passing through the phases of dating and being engaged, Alison tied the knot with Scott on 20 July 2013. Since then, she has been relishing her marital voyage with her partner and their pet boxer, Riley, in their home in Manhattan.
Alison Morris spends quality time with her husband, Scott Roslyn on 14 September 2014 (Photo: Scott Roslyn's Instagram)
Interestingly, Alison rejuvenates her marriage and family life by going on vacations and spending quality time with her husband. Moreover, her husband keeps the romance alive by cooking delicacies for Alison.
Alison's Career
The New York native started her journalism career in Paris, writing for the Wall Street Journal Europe. Meanwhile, she also reported on the French stock market (the CAC-40) for CNBC Europe.
As of now, Alison is a business reporter and news presenter at Fox 5 (WNYW-TV New York) and anchors the 5 pm, 10 pm, and 11 pm news.

Before Fox 5, Alison covered Super Bowl XLVII in New York. Then she began to work as a weekend anchor and consumer reporter for both FoxCT and the Hartford Courant.

During her tenure at the Hartford Courant, she covered the Sandy Hook tragedy and the World Series, which earned her two Emmy nominations.

Moreover, Alison also spent five years as a general assignment reporter at KDKA-TV, like Kacey Montoya. There, she was the lead reporter on several award-winning national news stories, including the Sago Mine tragedy, the LA Fitness shooting, and the G-20 Summit.

With a promising career as a reporter, Alison now earns an annual salary of $43,977.
# "Conversion to Dalvik format failed with error 1" on external JAR

(Source: https://stackoverflow.com/questions/2680827/conversion-to-dalvik-format-failed-with-error-1-on-external-jar)

In my Android application in Eclipse I get the following error:

    UNEXPECTED TOP-LEVEL EXCEPTION:
    ....
    Conversion to Dalvik format failed with error 1

This error only appears when I add a specific external JAR file to my project. I searched for a long time for a possible solution, but none of the proposed solutions work.

I even tried changing to Android 1.6 instead of 1.5 (the current version I use).

- This article may help you fix the error in case you use a library project in your workspace. – Nguyen Minh Binh, Mar 6 '12
- I tried this and it gave the same error. I finally fixed it by adding the library in Properties -> Java Build Path -> Projects. It's called "Required objects on the build path:". – user407749, Apr 6 '12
- This problem has become brutal for me. It's almost enough to make me give up Android coding. None of the solutions work for me. I simply have to keep trying to export, failing with Dalvik error 1, until it eventually succeeds. It takes 15-30 minutes to make a release build. It's a complete disaster. – Anthony, Mar 1 '13
- If you use two computers on one workspace with file-sharing software, it sometimes duplicates the workspace -> project -> bin -> com folder as "com 1", "com 2". Simply delete everything with a 1 or 2, clean your project, and you are ready to go. – Ozan Atmar, Jul 21 '14

Go to Project » Properties » Java Build Path » Libraries and remove all except the "Android X.Y" entry (in my case Android 1.5). Click OK. Go to Project » Clean » Clean projects selected below » select your project and click OK.
That should work.

It is also possible that you have a JAR file located somewhere in your project folders (I had copied the AdMob JAR file into my src folder) and THEN added it as a Java path library. It does not show up under the Package Explorer, so you don't notice it, but it does get counted twice, causing the dreaded Dalvik error 1.

Another possible reason could be a package-name conflict. Suppose you have a package com.abc.xyz with a class named A.java inside it, and another library project (added as a dependency of this project) contains the same com.abc.xyz.A.java; then you will get exactly the same error, because there are multiple references to the same A.java and the project can't be built properly.

It may also occur if you accidentally or knowingly edited the classpath file manually. In certain cases one may add the android.jar path manually to the classpath file for generating Javadoc; removing it after the Javadoc is generated makes the build work fine again. Please check this too if the error still occurs.

- I had the same issue and the above steps by user408841 worked for me. – Arunabh Das, Aug 14 '10
- Hi. It fixes it for me just by doing "Project » Clean » Clean projects selected below » select your project and click OK", ignoring the first bit. – Kurru, Jan 6 '11
- I am quite confused by this solution. If you remove all JAR files and libraries, how are you going to compile the project successfully? – Cheok Yan Cheng, Feb 2 '12
- I found that the only way this worked for me was to CLEAN ALL PROJECTS! Especially ones that the offending project depends on. – Roy Hinkley, Jul 12 '12
- I'm new to Android development coming from .NET, and to be honest this error epitomises my experience with it. Seriously... what are you meant to do with that error message? It's totally meaningless.
'Dalvik'?? You what?? Get me back to Visual Studio :p – David Masters, Sep 11 '12

I solved the problem.

This is a JAR file conflict.

It seems that I had two JAR files on my build path that include the same package and classes:

- smack.jar
- android_maps_lib-1.0.2

Deleting this package from one of the JAR files solved the problem.

- +1. In summary, this is a JAR file conflict. Specifically, this could be a conflict between any two JAR files. In my case, the conflict was between jackson-all-1.7.4.jar, jackson-core-asl-1.5.4.jar, and jackson-mapper-asl-1.5.4.jar. I removed the two 1.5.4 JARs and left the 1.7.4 "all" JAR. – jmort253, Jun 12 '11
- If you have JARs in any library projects you've added to your own project, like how I had added the ScoreloopUI project to my AndEngine game, these JARs also interfere with yours, even though they are not on your own project's build path. – pqn, Jul 20 '11
- I had specified android-sdk/platforms/android-10/android.jar manually in .classpath, then I changed to 2.3.3 (Android 10) in the project properties, which caused both android.jar's to conflict. Removing the extra one in the Java Build Path did the trick. – styler1972, Aug 4 '11
- On my side the conflict came from ActionBarSherlock, which already includes the Android support package, which got included twice.
– Tobias, Apr 5 '12
- Just for information: in my case the failure was caused by the Android targets not being the same in both projects: android:targetSdkVersion="xx". – Andreas Mattisson, Sep 21 '14

Windows 7 solution:

Confirmed the problem is caused by the ProGuard command line in the file

    [Android SDK Installation Directory]\tools\proguard\bin\proguard.bat

Editing the following line will solve the problem. Change:

    call %java_exe% -jar "%PROGUARD_HOME%"\lib\proguard.jar %*

to:

    call %java_exe% -jar "%PROGUARD_HOME%"\lib\proguard.jar %1 %2 %3 %4 %5 %6 %7 %8 %9

- This didn't work for me, but commenting out the ProGuard line from my default.properties file did make the error go away. My problem does seem to be related to ProGuard somehow. I was only getting "Conversion to Dalvik format failed with error 1" when trying to export the project. Any thoughts? – Ben H, Aug 22 '11
- I tried the above suggestion again, and removed and re-added my LVL library project, and now it works. Not sure what exactly fixed it, but it works. Thank god. I wasted hours on this. – Ben H, Aug 22 '11
- Windows Vista, and this worked for me. I had previously tried changing the path of the SDK to progra~2, reinstalling, cleaning the project, removing the libraries and fixing, etc. So thanks Noah! – Mudar, Sep 6 '11
- Thank you so much Noah, this worked for me also on Windows 7. The error started after I updated ADT to version 12. – Kevin, Sep 8 '11
- Solved my issue on Windows 7. – poiuytrez, Oct 21 '11

You can solve this issue easily (with Eclipse Android Developer Tools, Build: v22.0.1-685705) by turning off Project > "Build Automatically" while exporting the (un)signed Android application. After that, don't forget to turn it on again.

- This solved it in my case. The problem otherwise occurs sporadically and without definitive cause (e.g. a bad JAR or path).
The problem otherwise occurs sporadically and without definitive cause (e.g. bad JAR or path). – Dirk Conrad Coetsee Oct 4 '13 at 9:18
• This was driving me nuts! The only solution that worked for me. Wonder why. – PrivusGuru Jan 31 '14 at 21:38
• This solved it for me - other solutions might have helped also, but this one was the final one before getting it to work. – pertz Feb 4 '14 at 20:59
• I've spent 4 hours working through every suggestion I could find and finally got to this one. Thanks - it solved the problem immediately. – Squonk Feb 7 '14 at 3:44
• Very simple solution, but it works! Saved me a few hours of pointless fighting with Eclipse. Try it before you try any more complicated solution. – fragon Mar 4 '15 at 9:13

If you have ADT revision 12+, you should update your ProGuard from 4.4 to 4.6 (as described here). Also, you should leave the ...\bin\proguard.bat file in its original form.

[Android SDK Installation Directory]\tools\proguard\lib

• This has been my issue across multiple ADTs. Downloading and replacing the ProGuard JARs works. – Mr. S Nov 6 '11 at 20:00
• Wow, thanks, this fixed it for me, although I also needed to clear up some warnings as well. stackoverflow.com/q/4525661/38258 – Nick Gotch Jan 24 '12 at 8:31
• I changed the ProGuard version from 4.4 to 4.9; after that my problem was still not solved. – Hiren Patel Apr 13 '13 at 13:36
• I just had to do this myself, going from 4.7 to 4.11 (the latest as of now, here: sourceforge.net/projects/proguard/files). I just renamed the old <sdk>/tools/proguard dir and moved the complete unzipped proguard4.11 dir to the old path. Voila! Now if only I could get my previous hour back... -_- – qix Jun 30 '14 at 22:00
• Thanks! This solved my issue.
My problem was caused when I updated my SDK from the SDK manager and ProGuard was reset to the original version that came with API level 11. After I updated proguard.jar to the latest version, it worked! My reference: java.dzone.com/articles/my-reminder-how-fix-conversion – mmw5610 Mar 25 '15 at 20:29

EDIT (new solution):

It looks like the previous solution is only a bypass. I managed to finally fix the problem permanently: in my case there was a mismatch between the android-support-v4 files in my project and in the Facebook project that is referenced by my project.

I found this error by performing a Lint check (Android Tools / Run Lint: Check for Common Errors).

My previous solution:

I've tried every possible solution on this site - nothing helped!

Easy steps:

Go to Project -> uncheck Build Automatically

Go to Project -> Clean..., and clean both the library project and your app project

Export your app as a signed APK while Build Automatically is still disabled

• Thank you, this worked for my cocos2d-x JS project using ADT Eclipse 23. – snowflakes74 Nov 21 '14 at 10:45
• Error: Unknown option '0409' in argument number 1 – Iman Marashi Apr 22 '16 at 12:12

Here's another scenario, and its solution:

If you ran into this problem recently after updating the ADT for Eclipse:

1. In your app project, check for any linked source folders pointing to your library projects (they have names of the form "LibraryName_src").
2. Select all those linked folders, right-click, and choose "Build Path" -> "Remove from Build Path".
3. Choose "Also unlink the folder from the project", and click "Yes".
4. Clean, rebuild and redeploy the project.

It seems the reason is that some previous version of ADT linked library project source folders to the "child" projects, and the current ADT/Dex combination isn't compatible with that solution anymore.

EDIT: this is confirmed by an Android Dev Blog entry, specifically this one - see the 8th paragraph onwards.

• This is by far the most helpful answer to my particular problem. Thank you so much!!! – Bill The Ape Jan 12 '12 at 5:53
• This is what worked for me, not the highly upvoted answer by user408841. – rics Mar 1 '12 at 12:30

Go to Project and then uncheck "Build Automatically". Then try to export the project, and the error is gone.

This can also be caused if you have added the android.jar file to your build path, perhaps by an accidental quick fix in Eclipse. Remove it by right-clicking Project -> Build Path -> Configure Build Path -> android.jar, Remove.

Simply cleaning the project has worked for me every time this error has come up.

• Restarting Eclipse + cleaning the project + manual build worked for me. – dodgy_coder Jul 6 '13 at 12:18
• Cleaning worked for me... so if you have a good debug build that you have been running on actual phones and then go to release export and see this error, try cleaning all projects first.
– lepert Sep 12 '13 at 16:01

My own and only solution, found today after four hours of testing all the suggestions, is a combination of many solutions provided here:

• Delete the project from Eclipse
• Delete the files in \bin and \gen in the project folder
• Remove the references to libraries from the .classpath file in the root project folder
• Restart Eclipse from the command line: eclipse -clean
• Import the project
• Right-click the project, select Properties > Java Build Path > Libraries, and remove everything other than Android XX.Y
• Finally clean the project and wait for the automatic build, or build it yourself
• Launch, and now it works! At least for me...

I tried every step one at a time and many combinations, but only the succession of all steps at once did it! I hope I won't face this again...

• There was a warning from an anonymous user that this had disastrous results. Try at your own risk! – PaulG May 4 '12 at 14:28
• It worked for me, but I did not run "eclipse -clean" and did not remove the libraries. – NicoMinsk Oct 26 '12 at 13:50
• Deleting the bin folder did it for me. – boltup_im_coding Nov 2 '13 at 18:44

Just for the other people who still have this problem and have tried the above answers but are still getting the error (which was my case): my solution was to delete the project from Eclipse and re-import it again.

This made the Android library be added again to my referenced libraries, so I then had two Android JAR files referenced; hence I deleted one of them and now it compiles fine.

Solution: delete the project from the Eclipse IDE, re-import it, and then check the above solutions.

• Thanks! This helped me out from scratch. – herbertD Jul 3 '13 at 1:52

Ran into this problem myself today. Cleaning and rebuilding did not fix the problem.
Deleting and reimporting the project didn't help either.

I finally traced it back to a bad addition to my .classpath file. I think it was added by the plugin tools when I was trying to fix another problem; removing it got rid of the "Conversion to Dalvik format failed with error 1" build error:

<classpathentry kind="lib" path="C:/dev/repository/android-sdk-windows/platforms/android-3/android.jar">
  <attributes>
  </attributes>
  <accessrules>
    <accessrule kind="nonaccessible" pattern="com/android/internal/**"/>
  </accessrules>
</classpathentry>

For me, an extra JAR reference had appeared in my build path. I deleted it, and it works now.

My problem was caused by ADT version 12.0 and the ProGuard integration. This bug is well documented and the solution is in the documentation.

The solution is here:

ProGuard command line

• Thanks Croc, I do not think that bug is very well documented - I didn't find it after searching for an hour for a solution. – Kevin Sep 8 '11 at 16:54
• You're probably right - the problem is that a single error message has too many causes. Google should add more messages to help users narrow down the problem. You can take comfort in the fact that it took me two days to resolve the problem when I encountered it... – Croc Sep 16 '11 at 6:34
• Comment 11 on the linked bug discussion page solved the problem here: set the build output to Verbose (from Window > Preferences > Android > Build) and look for JARs or classes that the compiler is using but that shouldn't be there. Remove them. Enjoy building without any problems again. – Giulio Piancastelli Jul 6 '12 at 9:04

I've dealt with this problem when using the Sherlock ActionBar library in my project. You can try the following steps; they worked for me.

1. Right-click your project and select Properties.
2. A dialog will show up; select 'Java Build Path' in the left menu.
3. Remove 'Android Dependencies' and 'Android Private Libraries' in the right panel, then click OK.
4. Right-click your project and select Android Tools -> Fix Project Properties.
5. Clean the project once again.
6. Open Eclipse and export the APK.

• The error message should be more specific. You saved me time. Thanks. – teapeng Jun 14 '15 at 4:45

In my case the problem was actually with the OpenFeint API project. I had added OpenFeint as a library project:

.

It was also added to the build path; ADT tools 16 gives an error in this scenario.

Right-click on your project, click Build Path, configure the build path, and then (see the image) remove the OpenFeint project from there, and all is done :)

I found something else. Android uses the /libs directory for JAR files. I have seen the "Conversion to Dalvik format failed with error 1" error numerous times, always when I had made a mistake with my JAR files.

This time I upgraded Roboguice to a newer version by putting the new JAR file in the /libs directory and switching the classpath to the new version. That caused the Dalvik error.

When I removed one of the Roboguice JAR files from the /libs folder, the error disappeared. Apparently, Android picks up all JAR files from /libs, regardless of which ones you specify in the Java build path. I don't remember exactly, but I think Android started using /libs by default starting with Android 4.0 (Ice Cream Sandwich, ICS).

• My ProGuard path was C:\Program Files (x86)\Android\android-sdk\tools\proguard\
• and I replaced both the bin and lib folders

THANK GOD!

In general, it seems that this problem comes up when there are unnecessary JAR files on the build path.

I faced this problem while working in IntelliJ IDEA. For me it happened because I had added the JUnit and Mockito libraries, which were being compiled at runtime.
This needed to be set to "testing" in the module properties.

• Oh, thanks for mentioning "In general, it seems that this problem comes up when there are unnecessary JARs on the build path." After deleting the unnecessary JAR files the error is gone. :) – surendran Jun 15 '12 at 12:51

None of the previously proposed solutions worked for me. In my case, the problem happened when I switched from referencing a library source code folder to using the library JAR file. Initially there was an Android library project listed in the Android application project's Properties \ Android page \ Library section, and the library also appeared in the project explorer tree as a link to the library source directory.

At first, I just deleted the directory link from the project tree and added the JAR library to the build path, but this caused the exception.

The correct procedure was (after changing back the build path and putting back the reference to the library source):

• properly remove the library source directory link by actually removing the reference from the application project's Properties \ Android page

• add the library JAR to the application project's build path as usual.

None of the listed solutions worked for me.

Here's where I was having a problem:

I added the jSoup external JAR file to my project's path by first putting it in a source folder called "libs", and then right-clicking on it, Build Path -> Add to Build Path. This threw the Dalvik conversion error. It said I had "already included" a class from that JAR file. I looked around the project's directory and found that the place where it was "already included" was in fact the bin directory. I deleted the JAR file from the bin directory, refreshed the project in Eclipse, and the error went away!

All the solutions above didn't work for me. I'm not using any precompiled JAR.
I'm using the LVL, and the Dalvik errors were all related to the market licensing library.

The problem got solved by deleting the main project and reimporting it (creating a new project from the existing sources).

I had the same problem and none of these solutions worked. Finally, I saw in the console that the error was due to a duplicated class (one in the existing project, one in the added JAR file):

java.lang.IllegalArgumentException: already added: package/MyClass;
[2011-01-19 14:54:05 - ...]: Dx 1 error; aborting
[2011-01-19 14:54:05 - ...] Conversion to Dalvik format failed with error 1

So check whether you are adding a JAR with duplicated classes to your project. If so, try removing one of them.

It worked for me.

Often for me, cleaning the project DOES NOT fix this problem.

But closing the project in Eclipse and then re-opening it does seem to fix it in those cases...

• Same for me, closing Eclipse and starting it up again usually solves it. But then I cannot make a second export; I have to restart again. – Ted Feb 25 '13 at 23:55

I ran into this problem, but my solution was twofold. 1.) I had to add an Android target version under Project -> Properties -> Android. 2.) I didn't have all the Google 'third party add-ons'. In the AVD SDK manager, click Available Packages -> Third-party Add-ons -> Google Inc. I downloaded all of the SDKs and that solved my issue.

• My solution was pretty much the same as @THE_DOM's. Right-click the project, choose Android Tools, then choose Add Compatibility Library. This will download those "third party add-ons" if you don't have them (THE_DOM's solution), but it will also add them to your project. Whether this solution is just a placebo or not is yet to be seen. :) – Chris Aug 17 '11 at 22:35

I am using Android 1.6 and had one external JAR file.
What worked for me was to remove all libraries, right-click the project and select Android Tools -> Fix Project Properties (which added back Android 1.6), and then add back the external JAR file.

I ran into this problem because the android-maven-plugin in Eclipse was apparently not recognizing transitive references and references referenced twice from a couple of projects (including an Android library project), and it was including them more than once. I had to use hocus-pocus to get everything included only once, even though Maven is supposed to take care of all this.

For example, I had a core library globalmentor-core that was also used by globalmentor-google and globalmentor-android (the latter of which is an Android library). In the globalmentor-android pom.xml I had to mark the dependency as "provided" as well as exclude it from other libraries in which it was transitively included:

<dependency>
  <groupId>com.globalmentor</groupId>
  <artifactId>globalmentor-core</artifactId>
  <version>1.0-SNAPSHOT</version>
  <!-- android-maven-plugin can't seem to automatically keep this from being
       included twice; it must therefore be included manually (either explicitly
       or transitively) in dependent projects -->
  <scope>provided</scope>
</dependency>

Then in the final application pom.xml I had to use the right trickery to allow only one inclusion path, as well as not explicitly include the core library:

<!-- android-maven-plugin can't seem to automatically keep this from being
     included twice -->
<!-- <dependency> -->
<!--   <groupId>com.globalmentor</groupId> -->
<!--   <artifactId>globalmentor-core</artifactId> -->
<!--   <version>1.0-SNAPSHOT</version> -->
<!-- </dependency> -->

<dependency>
  <groupId>com.globalmentor</groupId>
  <version>1.0-SNAPSHOT</version>
  <exclusions>
    <!-- android-maven-plugin can't seem to automatically keep this from
         being included twice -->
    <exclusion>
      <groupId>com.globalmentor</groupId>
      <artifactId>globalmentor-core</artifactId>
    </exclusion>
  </exclusions>
</dependency>

<dependency>
  <groupId>com.globalmentor</groupId>
  <artifactId>globalmentor-android</artifactId>
  <version>1.0-SNAPSHOT</version>
</dependency>

• The strange thing for me was that <extractDuplicates>true</extractDuplicates> in the android-maven-plugin fixed the issue for mvn install but not for mvn deploy, where I got the described error... really strange. – Karussell Nov 23 '13 at 10:09

In my case:

Project -> Properties -> Java Build Path -> Order and Export tab -> uncheck android-support-v4.jar

Then just clean the project.

If this does not work, try the other solutions.

• Plus 1 for the simple solution. – Shylendra Madda May 11 '15 at 10:03
• And I also needed to restart Eclipse. – Dino Tw Dec 3 '15 at 2:02
Q: How to show a slow transition of a canvas? I have a canvas on my page, and when I click a link I expect the canvas to shrink to a smaller size. I am able to achieve this using an onClick event, but I want to show a slow transition from the larger to the smaller size. Could anyone throw me some pointers, please?
A: jQuery's .animate() function can do the trick:
$('div').on('click', function () {
    $(this).animate({ height: '10px' }, 2000);
});
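For context, the slow transition jQuery produces is just repeated interpolation between the start and end size over the given duration. Here is a minimal sketch of that tween math; the DOM wiring (setting the element's height each frame) is omitted, and the function name is illustrative, not a jQuery internal:

```javascript
// Map elapsed time to a value between start and end (linear tween).
function tween(start, end, duration, elapsed) {
  const t = Math.min(Math.max(elapsed / duration, 0), 1); // clamp to [0, 1]
  return start + (end - start) * t;
}

// Shrinking a 300px canvas to 10px over 2000ms:
console.log(tween(300, 10, 2000, 0));    // 300 (start)
console.log(tween(300, 10, 2000, 1000)); // 155 (halfway)
console.log(tween(300, 10, 2000, 2000)); // 10 (end)
```

jQuery's .animate() additionally applies an easing curve ("swing" by default) instead of this linear mapping.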
Q: d3.js - Steady links between nodes on a force diagram I modified a force diagram to change the node circles into images, but I would like some consistency in the way links are connected, as in a flowchart - similar to what is seen in this fiddle.
There may be something that needs to be modified in this code:
var forceLayout = d3.layout.force()
.nodes(nodes)
.links([])
.gravity(gravity)
.size([width, height])
.charge(function(d){
var charge = -500;
if (d.index === 0) charge = 10 * charge;
return charge;
});
The way that fiddle has it, the charge and the linkDistance make it look consistent, but changing the values to what's here doesn't help.
var force = d3.layout.force()
.charge(-200)
.linkDistance(50)
.size([width + margin.left + margin.right, height + margin.top + margin.bottom]);
Here's a link to my fiddle.
A: In your fiddle's CSS, I set row 7 to width: 600px; instead of width: 80%;.
Also, in the JavaScript at row 118, I added the global variables
var width = 600,
    height = 800;
and I set .linkDistance(30).
Hope this helps you.
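As an aside, the spacing consistency discussed in the question is governed mostly by the charge and linkDistance values. The question's per-node charge callback can be extracted as a pure function and reasoned about in isolation; this is a sketch mirroring the question's code, not a d3 API:

```javascript
// Per-node charge: every node repels with -500, and the first node
// (index 0, typically the hub) repels ten times harder, which spreads
// its neighbours into an even ring around it.
function nodeCharge(d) {
  const base = -500;
  return d.index === 0 ? 10 * base : base;
}

console.log(nodeCharge({ index: 0 })); // -5000
console.log(nodeCharge({ index: 3 })); // -500
```

Pass it to the layout as .charge(nodeCharge) and pair it with a fixed .linkDistance(...) so link lengths stay uniform.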
ACCEPTED
#### According to
Index Fungorum
#### Published in
null
#### Original name
Acarospora gyrodes H. Magn.
### Remarks
null
\section{Introduction}
Cell membranes contain large amounts of proteins within or attached
to the lipid bilayer \cite{dupuy08}. The distribution of the proteins
is not necessarily homogeneous, which can have important
functional consequences. For example, proteins with an intrinsic curvature
couple to the bilayer conformation
\cite{sens08,drin07,blood06,auth05,bickel02,ford02,harden94}; on the one
hand, such proteins are preferably found on similarly curved parts of the
membrane \cite{hagerstrand06}, on the other hand, the proteins
deform the membrane locally \cite{kozlov07,antonny06}. Asymmetric, curved
proteins can regulate the polymerization of the three-dimensional
cytoskeleton of the cell \cite{shlomovitz07} and control intracellular
transport via endocytosis \cite{roemer07,mcmahon05}. Virus
endocytosis can occur via the same mechanism \cite{gao05,gozdz07}. The conical
inclusions in our model mimic asymmetric proteins within the bilayer
\cite{ford02}, proteins or polymers attached to the bilayer
\cite{blood06,auth03,tsafrir03}, curved lipid domains
\cite{campelo07,baumgart03,haluska02,gozdz01}, and viruses that bind to the
membrane \cite{gao05}.
The interaction between the inclusions in a lipid bilayer is mediated by
membrane deformations and thermal undulations \cite{bruinsma96,goulian96}, in
addition to surface tension \cite{sens08} and possible direct interactions
that we do not consider in this paper. The deformation-induced, pairwise
interaction of curved inclusions occurs in the absence of thermal membrane
undulations and is usually repulsive \cite{goulian93,goulian93a}; in a planar
membrane it is long-range \cite{weikl98, goulian93, goulian93a}. However,
the interactions can be strongly screened if the average curvature of the
membrane and the protein curvature are similar
\cite{chou01,kim98,dommersnes98}. One obvious example for strongly screened
interactions are inclusions that are placed on a vesicle
with similar curvature radius \cite{dommersnes98}. Screening can also be
achieved by many-body interaction in clusters of inclusions \cite{kim99,kim98}.
At finite temperature, Casimir-like interactions due to membrane undulations
generate attraction \cite{helfrich01,weikl01,golestanian96a,golestanian96b,
goulian93,goulian93a}.
Curvature generation by inclusions and induced budding in lipid bilayer
membranes has been reported in many experimental studies of biological and
biomimetic systems \cite{ford02,antonny06,roemer07,mcmahon05,tsafrir03}.
Computer simulations make it possible to study the membrane-mediated interaction
between the inclusions in detail without the presence of other, direct
interactions. Recently, bud formation by curved inclusions has
been investigated with computer simulations \cite{reynwar07,
atilgan07}. It was found that the inclusions on the buds have a
higher density than they had in the initially nearly flat membrane
\cite{reynwar07}. This might appear to be a
result of undulation-induced attraction that leads to clustering of the
inclusions and, in consequence, to budding.
Such systems and processes can be studied theoretically on the basis of
an elastic membrane that is characterized by its bending rigidity, $\kappa$,
and Gaussian saddle splay modulus, $\bar{\kappa}$, with curved inclusions
that consist of sections of a sphere with a given opening angle. We
demonstrate that bud formation can already be well understood on the
basis of the membrane deformation alone. We show that the higher
inclusion density on the bud is a result of a screened repulsive interaction.
We further argue that the budding pathway
plays an important role in determining the bud size. This allows us to predict a
range of possible bud radii for a given system, which nicely agrees with
recent simulation results \cite{reynwar07}.
At finite temperature, the inclusions can exist in a fluid and in a
crystalline phase, which depends on the strength of their repulsive
interaction. We construct an approximate free-energy functional that
takes into account for the bending energy as well as the translational
entropy of the inclusions. We calculate a phase diagram for the fission
of a single vesicle of given size and for given number and geometry of
the inclusions. The inclusion entropy plays a decisive role
for the sizes of the smaller vesicles into which a larger
vesicle may split.
\section{Membrane bending energy}
\label{sec2}
\subsection{Membrane shape near inclusions in a lipid bilayer}
The bending energy $\mathcal{E}$ of a lipid bilayer is given by the integral
over the entire membrane area,
\begin{equation}
\mathcal{E} = \int d S \, (2 \kappa H^2 + \bar{\kappa} K) \, ,
\end{equation}
where $\kappa$ is the bending rigidity, $\bar{\kappa}$
is the saddle-splay modulus, $H=(c_1+c_2)/2$ is the mean curvature,
$K=c_1 c_2$ is the Gaussian curvature, and $c_1$ and $c_2$ are the principal
curvatures at each point of the membrane. The integral over the
Gaussian curvature is determined by the topology of the membrane and by
the geodesic curvature at the boundary. In our case, the geodesic
curvature is given by the geometry of the inclusions, so that in general
this term of the
integral over the membrane shape does not need to be calculated
explicitly. For bud formation, we neglect the constant contribution
of the Gaussian saddle splay modulus.
In order to minimize the bending energy, the inclusions preferably order
on a hexagonal lattice (Fig.~\ref{fig3}~{\em (a)}); therefore it is
a natural assumption that the symmetry axis is oriented normal to the local
tangent plane of the vesicle on which the inclusions are placed. To calculate
the deformation energy, we approximate the hexagons with overlapping circles
that have the same projected area (Fig.~\ref{fig3}~{\em (b)}).
\begin{figure}[bp]
\begin{center}
\leavevmode
\includegraphics[width=\columnwidth]{fig01.eps}
\vspace{-3ex}
\end{center}
\caption{(a) Membrane deformations induced by curved inclusions
in a planar membrane. The inclusions have a
repulsive interaction potential that decreases with the
distance between the inclusions, $d$, like $V \sim d^{-2}$.
To minimize the
bending energy, the inclusions order in an hexagonal
structure. (b) The hexagons are approximated by overlapping
circles that have the same projected area.}
\label{fig3}
\end{figure}
If there are no overhangs, the
membrane conformation can be described in Monge parametrization by a
height field, $h(x,y)$, over a planar reference surface. For an almost
planar membrane, the bending energy of the membrane is
\begin{eqnarray}
\mathcal{E} = \frac{1}{2} \kappa \int d A \left[ \Delta h(\bm{\rho})
\right]^2 \hspace{3ex} (\bm{\rho}=(x,y)) \, ,
\end{eqnarray}
with $\int d A$ the integral over the reference plane. Minimization
of the bending energy gives the biharmonic Euler-Lagrange equation,
\begin{equation}
\Delta^2 h ( \bm{\rho} ) = 0 \, \, \, . \label{eq3}
\end{equation}
In cylindrical coordinates, the general solution of Eq.~(\ref{eq3}) is
\begin{equation}
h(\rho) = \frac{1}{4} \rho^2 ( 2 C_2 - C_3 ) + C_4 + (C_1 +
\frac{1}{2} \rho^2 C_3 ) \ln(\rho) \label{eq7}
\end{equation}
with the four integration constants $C_1$ to $C_4$ \cite{footnote13}.
The boundary conditions that are
imposed on the membrane are sketched in Fig.~\ref{fig4}. The radius
of the inner boundary, $\rho_i=r_i \sin(\alpha)$, and the slope of
the membrane at the inner boundary, $h'(\rho_i) \equiv a=-\tan(\alpha)$, are
determined by the inclusion geometry. For $n \approx 4 \pi R^2
\sigma$ inclusions on a vesicle with radius $R$ and surface number
density $\sigma$ of the inclusions, the radius of the
outer boundary is $\rho_o \approx R \sin(\beta)$ with
$\beta=\arccos((n-2)/n)$; the slope of the membrane at the outer
boundary is $h'(\rho_o) \equiv b=-\tan(\beta)$. For inclusions on a planar
membrane, the latter expressions simplify to $\rho_o \approx
1/(\pi\sigma)^{1/2}$ and $b=0$. The remaining two boundary
conditions are given by fixing the membrane height at the inner (or
equivalently at the outer) boundary and minimizing the energy with respect
to the height of the inclusion above the vesicle (i.~e.\ the height
difference between both boundaries), which implies $h (\rho_i) = 0$ at the
inclusion and $\partial_\rho \Delta h(\rho) |_{\rho_o} = 0$ at the
outer boundary.
\begin{figure}[bp]
\begin{center}
\leavevmode
\includegraphics[width=0.80\columnwidth]{fig02.eps}
\vspace{-3ex}
\end{center}
\caption{(Color online)
Curved inclusion (red) and resulting membrane deformation (blue).
The inclusion geometry is characterized by the curvature radius, $r_i$, the
opening angle, $\alpha$, and the projected inclusion radius,
$\rho_i = r_i \sin(\alpha)$.
The size of the corresponding membrane patch is $\rho_o$, the slope of the
membrane at the inclusion is $a = - \tan (\alpha)$, and the slope of the
membrane at the outer boundary is $b$ ($b=0$ for inclusions on a planar
membrane).}
\label{fig4}
\end{figure}
Eq.~(\ref{eq7}) together with the boundary conditions gives the shape
of the deformation,
\begin{equation}
h(\rho) = \frac{(\rho^2-\rho_i^2)(b \rho_o - a \rho_i)+2 \rho_o
\rho_i (a \rho_o-b\rho_i) \ln(\rho/\rho_i)}{2 (\rho_o^2- \rho_i^2)} \, ,
\label{eq10a}
\end{equation}
and the corresponding bending-energy cost,
\begin{equation}
\mathcal{E} (\rho_o,b) = \frac{\kappa}{2} \int_{\rho_i}^{\rho_o} 2 \pi \rho
\, d \rho \left[ \Delta h(\rho) \right]^2 = \frac{2 \pi \kappa (b
\rho_o - a \rho_i)^2}{(\rho_o^2-\rho_i^2)} \, \, \, . \label{eq10}
\end{equation}
The energy is a function of $\rho_o$ and $b$, which depend on the
inclusion density, while all other quantities are intrinsic properties
of membrane and inclusions. For a single inclusion in an infinite planar
membrane, $b=0$ and $\rho_o \rightarrow \infty$, the bending energy vanishes
and the membrane deformation is catenoid-like,
$h(\rho) = a \rho_i \ln (\rho/\rho_i)$. Note that in a pairwise approximation,
the interaction energy for two inclusions in a planar membrane ($b=0$)
decays like $d^{-2}$ for large distances between the inclusions
(large $\rho_o=d/2$).
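This scaling can be read off directly from Eq.~(\ref{eq10}): setting $b=0$
and $\rho_o = d/2$, one finds for $d \gg \rho_i$
\begin{equation*}
\mathcal{E}(d/2,0) = \frac{2 \pi \kappa \, a^2 \rho_i^2}{(d/2)^2 - \rho_i^2}
\approx \frac{8 \pi \kappa \, a^2 \rho_i^2}{d^2} \, ,
\end{equation*}
which is the $d^{-2}$ decay of the pairwise interaction energy.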
\subsection{Optimal, low, and high inclusion density}
\label{sec2b}
\label{sec1c}
For inclusions on a vesicle, the membrane shape and the minimal bending
energy (assuming that the inclusions have maximal mutual distances)
can be calculated using Eqs.~(\ref{eq10a}) and (\ref{eq10}). For
$b \rho_o = a \rho_i$, the membrane around the inclusion has almost
catenoid shape \cite{footnote1}; the catenoid is a minimal surface without
bending-energy cost. If the entire vesicle is covered with inclusions and
catenoids such that the bending energy is zero (Fig.~\ref{fig2}~{\em (b)}),
the inclusions have optimal density.
\begin{figure}[bp]
\begin{center}
\leavevmode
\includegraphics[width=0.9\columnwidth]{fig03.eps}
\end{center}
\caption{(Color online)
(a) Vesicle decorated with curved inclusions. Around each
inclusion, the membrane can be modeled by segments of the
catenoid minimal surface (white). The total bending energy
is $\mathcal{E}=8 \pi \kappa
(1-S_{\rm cat}/S_{\rm sph})$, where $S_{\rm cat}/S_{\rm sph}$
is the area fraction of the vesicle that is covered
with inclusions and catenoidal patches. (b) Vesicle decorated
with inclusions at optimal density; the bending energy of the
lipid bilayer membrane vanishes.}
\label{fig2}
\end{figure}
For lower inclusion densities, in a first approximation the catenoid
shape borders on a spherical shape with the curvature radius of the vesicle.
The bending energy of a vesicle that is decorated with curved inclusions
is reduced by the fraction of the sphere's surface area that is covered
by the inclusions and the catenoid-shaped membrane segments,
see Fig.~\ref{fig2}. Therefore for low inclusion density,
the bending energy of the decorated spherical vesicle is in the range $0
< \mathcal{E} < 8 \pi \kappa$.
In the full solution, which is given by Eq.~(\ref{eq10a}) and will be used
in the remainder of the paper, there is no jump in the mean curvature
from $H=0$ to $H=1/R$ between the catenoid and a sphere as sketched in
Fig.~\ref{fig2} but rather a smooth transition from zero to finite mean
curvature.
For inclusion densities that are higher than the optimal density, due to the
boundary conditions no solution can be constructed by matching
of catenoids. In this case, the bending energy always
has a finite value that can exceed the bending energy of a bare vesicle.
The minimal bending energy of a vesicle with inclusions is shown in
Fig.~\ref{fig5} as function of the number of inclusions, $n$, and the vesicle
radius, $R$. We find degenerate zero-energy ground
states that have optimal inclusion density with an approximately linear
dependence $R(n)$, where \cite{footnote2}
\begin{eqnarray}
R \hspace{-1ex} & \approx & \hspace{-1ex} |a| \rho_i n / 4 \approx
1/(\pi \sigma |a| \rho_i) = (\cos
\alpha)/(\pi \sigma \sin^2 \alpha)(1/r_i) \nonumber \\
& & \hspace{-4ex} {\rm and} \label{eq11} \\
n \hspace{-1ex} & \approx & \hspace{-1ex} 4/(\pi \sigma a^2
\rho_i^2) = (4 \cos^2 \alpha)/(\pi \sigma \sin^4 \alpha)(1/r_i^2) \nonumber
\end{eqnarray}
The natural spontaneous curvature of the bilayer for given inclusion
density and geometry is $c_0 = 1/R_0 \approx \pi \sigma |a| \rho_i$.
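The relations in Eq.~(\ref{eq11}) can be cross-checked numerically. The following sketch (not part of the original analysis) assumes $\rho_i = r_i \sin\alpha$ as stated later in the text and $a = \tan\alpha$ for the contact slope, which is inferred from the trigonometric form of Eq.~(\ref{eq11}); the density $\sigma$ is an illustrative value.

```python
import math

# Parameters as in Fig. 4 of the text: r_i = 5.5 nm, alpha = 0.64;
# sigma (inclusions per nm^2) is a hypothetical example value.
alpha, r_i, sigma = 0.64, 5.5, 1e-3

rho_i = r_i * math.sin(alpha)   # projected inclusion radius
a = math.tan(alpha)             # contact slope (inferred, an assumption)

# Optimal vesicle radius and inclusion number, Eq. (11)
R = math.cos(alpha) / (math.pi * sigma * math.sin(alpha)**2 * r_i)
n = 4 * math.cos(alpha)**2 / (math.pi * sigma * math.sin(alpha)**4 * r_i**2)

# Self-consistency of the three expressions in Eq. (11):
assert abs(R - a * rho_i * n / 4) < 1e-9 * R           # R = |a| rho_i n / 4
assert abs(n / (4 * math.pi * R**2) - sigma) < 1e-12   # sigma = n / (4 pi R^2)
```

Both checks hold identically, confirming that Eq.~(\ref{eq11}) treats $\sigma$ as the number of inclusions per membrane area, $n/(4\pi R^2)$.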
\begin{figure}[bp]
\begin{center}
\leavevmode
\includegraphics[width=1.0\columnwidth]{fig04.eps}
\vspace{-3ex}
\end{center}
\caption{Normalized bending energy, $\mathcal{E}/\kappa$, of a vesicle with
radius $R$ with $n$
inclusions ($r_i\approx 5.5 \, \rm nm$, $\alpha=0.64$). There
is a region of low inclusion density at large $R$ with
$0 < \mathcal{E} < 8 \pi \kappa$, which is delineated by a
line of zero-energy ground states from a region of high inclusion density
at small $R$, where also bending energies $\mathcal{E} > 8 \pi \kappa$
can be found. (The high energies that are cut off at small $n$ and
large $R$ mark the breakdown of the small-curvature expansion of the
bending energy.)}
\label{fig5}
\end{figure}
The same inclusion density can be high, optimal, or low, depending on
the radius of the vesicle on which the inclusions are placed. The
smaller the vesicle radius, the larger the optimal density. In a
planar membrane, the slopes of two adjacent
catenoid-like deformations cannot be matched for any finite distance
between the inclusions. Therefore the inclusions are always in the
high-density regime in this case.
\subsection{Budding and vesiculation}
Bud formation does not occur for a vesicle with low inclusion density
and bending energy,
$0 < \mathcal{E} < 8 \pi \kappa$, because this would lead to an increase of
the total bending energy \cite{footnote3}. However, for high inclusion
density ($n \gtrsim 4 R/(|a| \rho_i)$, see
Eq.~(\ref{eq11})), the system can always reach a state of lower bending
energy if small vesicles bud from the main vesicle.
The set of smaller vesicles into which a large
vesicle with high inclusion density splits up is not uniquely determined
from bending energy alone,
because the states of vanishing bending energy are degenerate. A natural
assumption is that the vesicle will split into one large `mother' vesicle
and one or more small `daughter' vesicles of equal size, such that the total
bending energy vanishes and the membrane area is kept constant.
In Fig.~\ref{fig5a}, we show the radii of the mother and daughter vesicles
as function of the number of inclusions.
For a given number of $n_v-1$
daughter vesicles, there is a maximal number of inclusions $n_{\rm max} =
n_v^{1/2} w R$ (with $w = 4/(|a| \rho_i)$) that still allows a
zero-energy state, for which mother and daughter vesicles have equal sizes.
\begin{figure}[bp]
\begin{center}
\leavevmode
\includegraphics[width=0.9\columnwidth]{fig05.eps}
\vspace{-3ex}
\end{center}
\caption{(Color online) A single vesicle of radius $R=10 \, r_i$ with
$n$ inclusions splits into one large `mother' vesicle and
several small `daughter' vesicles; the cases of 1, 2, 3, and
4 daughter vesicles are shown. The same parameters as in Fig.~\ref{fig5}
are used. For 100 inclusions, the initial vesicle
has vanishing bending energy. For more than 100 inclusions, the sizes
of mother and daughter vesicles are plotted. The upper branch
gives the radius of the mother vesicle; the lower branch, that of
the daughter vesicles.
For a fixed number $n_{\rm v} - 1$ of daughter vesicles (as indicated),
there is a maximum number of inclusions that allows the formation of
a state with vanishing bending energy (for which mother and daughter
vesicles have equal sizes; filled circles).}
\label{fig5a}
\end{figure}
If the system can split up into $n_v$ smaller vesicles, it can also split
up into a larger number of small vesicles \cite{footnote4}.
For a vesicle with total number of inclusions $n = n_+ + (n_v - 1) n_-$
and radius $R = (R_+^2 + (n_v - 1) R_-^2)^{1/2}$, bending energy minimization
predicts for the radii and inclusion number on mother ($R_+$, $n_+$) and
daughter ($R_-$, $n_-$) vesicles:
\begin{eqnarray}
n_+ & = & \frac{n + (n_v-1)^{1/2} (n_v w^2 R^2 - n^2)^{1/2}}{n_v} \label{eqvesbr} \\
n_- & = & \frac{n - (n_v-1)^{-1/2} (n_v w^2 R^2 - n^2)^{1/2}}{n_v} \nonumber \\
R_+ & = & \frac{n + (n_v-1)^{1/2} (n_v w^2 R^2 - n^2)^{1/2}}{n_v w} \nonumber \\
R_- & = & \frac{n - (n_v-1)^{-1/2} (n_v w^2 R^2 - n^2)^{1/2}}{n_v w}
\, . \nonumber
\end{eqnarray}
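A short sketch (not from the original work) evaluates Eq.~(\ref{eqvesbr}) and verifies that the split conserves both the inclusion number and the membrane area; the numerical values of $n$, $R$, and $w$ are illustrative, with $w \approx 1.64 \, \rm nm^{-1}$ roughly matching the inclusion geometry of Fig.~\ref{fig5a}.

```python
import math

def split(n, R, n_v, w):
    """Inclusion numbers and radii of mother (+) and daughter (-)
    vesicles after fission into n_v vesicles, Eq. (eqvesbr);
    w = 4/(|a| rho_i)."""
    if n_v < 2:
        raise ValueError("need at least one daughter vesicle")
    disc = n_v * (w * R)**2 - n**2
    if disc < 0:
        raise ValueError("no zero-energy split into n_v vesicles")
    s = math.sqrt(disc)
    n_p = (n + math.sqrt(n_v - 1) * s) / n_v
    n_m = (n - s / math.sqrt(n_v - 1)) / n_v
    # on zero-energy vesicles, R = n/w, so R_+ = n_+/w and R_- = n_-/w
    return n_p, n_m, n_p / w, n_m / w

# Illustrative numbers: R = 55 nm, w = 1.64 / nm, split into 2 vesicles
n, R, n_v, w = 110.0, 55.0, 2, 1.64
n_p, n_m, R_p, R_m = split(n, R, n_v, w)
assert abs(n_p + (n_v - 1) * n_m - n) < 1e-9           # inclusions conserved
assert abs(R_p**2 + (n_v - 1) * R_m**2 - R**2) < 1e-9  # area conserved
```

Both conservation laws hold exactly for any admissible parameters, as can be checked algebraically from Eq.~(\ref{eqvesbr}).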
Because of the degeneracy of the states with vanishing bending energy,
thermal fluctuations and the budding pathway play a decisive role to
determine how a large vesicle with high inclusion density splits up into
smaller vesicles. Note that the results of our analytical model so far do
not depend on the value of the bending rigidity of the bilayer.
\subsection{Inclusion clusters}
\label{sec3d}
\begin{figure}[bp]
\begin{center}
\leavevmode
\includegraphics[width=0.9\columnwidth]{fig06.eps}
\vspace{-3ex}
\end{center}
\caption{The coagulation factor, $f(\alpha)=R(\alpha) \pi r_i \sigma_0$,
describes the dependence of the optimal
vesicle radius, $R_0$, on the degree of aggregation of the inclusions
in the membrane. The total inclusion area is kept constant
and the reference density $\sigma_0$
corresponds to $\alpha_0=0.1 \, \pi$.}
\label{fig6}
\end{figure}
A direct attractive interaction between inclusions can induce cluster
formation \cite{sieber07}. In this case, the preferred curvature radius,
$R_0$, not only depends on the number and geometry of the inclusions, but
also on cluster size. For given inclusion curvature radius $r_i$, opening
angle $\alpha_0$, and fixed density $\sigma_0$, inclusion clusters with a
larger opening angle $\alpha$ and reduced density,
$\sigma=(1 - \cos \alpha_0)/(1 - \cos \alpha) \, \sigma_0$, have a stronger
effect on the curvature of the membrane than homogeneously distributed
inclusions. The preferred curvature radius for
clusters with opening angle $\alpha$ is $R (\alpha) = ((1 - \cos \alpha) \cos
\alpha)/((1- \cos \alpha_0) \sin^2 \alpha)/(\pi r_i \sigma_0) \equiv
f(\alpha)/(\pi r_i \sigma_0)$. We call the normalized curvature radius
$f(\alpha)$ the coagulation factor, because it multiplies
the curvature radius for
the reference inclusion density and opening angle $\alpha_0$
\cite{footnote6,footnote7}, see Fig.~\ref{fig6}.
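As a quick numerical check (a sketch, with $\alpha_0$ taken from the figure caption), the coagulation factor can be evaluated directly; for $\alpha = \alpha_0$ it reduces to the unclustered value $\cos\alpha_0/\sin^2\alpha_0$ implied by Eq.~(\ref{eq11}), and it decreases for larger cluster opening angles.

```python
import math

def f_coag(alpha, alpha0):
    """Coagulation factor f(alpha) = R(alpha) * pi * r_i * sigma_0."""
    return ((1 - math.cos(alpha)) * math.cos(alpha)) / \
           ((1 - math.cos(alpha0)) * math.sin(alpha)**2)

alpha0 = 0.1 * math.pi  # reference opening angle used in Fig. 5

# At alpha = alpha0 the factor reduces to the homogeneous result:
assert abs(f_coag(alpha0, alpha0)
           - math.cos(alpha0) / math.sin(alpha0)**2) < 1e-12

# Clustering (larger alpha at fixed total inclusion area) lowers the
# preferred curvature radius:
assert f_coag(1.0, alpha0) < f_coag(alpha0, alpha0)
```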
For a fixed number $n_0$ of inclusions, the preferred curvature radius
decreases when aggregates are formed,
i.~e.\ the efficiency with which the inclusions influence the membrane
curvature increases. Cluster formation therefore also shifts the high-density
regime to smaller numbers of inclusions for the
same vesicle radius, $n_0 \gtrsim f(\alpha) \, 4 R/r_i$, and may cause a large
vesicle to break up into smaller vesicles.
\section{Thermal fluctuations}
For finite temperature, the translational entropy of the inclusions
contributes to the free energy. We distinguish a crystalline hexagonal
phase and a disordered fluid phase for which we construct free energy
functionals. Phase diagrams have been calculated more rigorously in
Ref.~\cite{netz95} in the limiting case of an almost planar membrane and
for inclusions that are slightly stiffer than the membrane and weakly curved
--- but not in the context of budding.
We neglect the interaction between inclusions that is mediated by thermal
membrane undulations. For a pair of inclusions and an arbitrary orientation
of the inclusion axis, the deformation energy to lowest order in
$\rho_i^2/d^2$ is
\cite{weikl98,fournier97,goulian93,goulian93a}
\begin{equation}
\mathcal{E}_{\rm def.} = 8 \pi \kappa \alpha^2 \frac{\rho_i^4}{d^4}
\end{equation}
and the undulation-induced interaction energy
\cite{weikl02,golestanian96a,golestanian96b,goulian93}
\begin{equation}
\mathcal{F}_{\rm und.} = - 6 k_B T \frac{\rho_i^4}{d^4} \, .
\end{equation}
The ratio of the membrane deformation-induced repulsion to the
undulation-induced attraction in a planar membrane is
$4 \pi \kappa \alpha^2/(3 k_B T)$. The undulation-induced
attraction can be neglected if it is one order of magnitude
smaller than the deformation interaction; for $\kappa=10 \, k_B T$
this is the case for $\alpha \gtrsim 0.5$, for $\kappa = 20 \, k_B T$
already for $\alpha \gtrsim 0.35$.
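These threshold angles follow from setting the ratio $4 \pi \kappa \alpha^2/(3 k_B T)$ equal to 10; a one-line check (a sketch, not part of the original analysis):

```python
import math

def alpha_threshold(kappa_kT, ratio=10.0):
    """Opening angle above which the deformation repulsion exceeds the
    undulation attraction by the given factor, from
    4 pi kappa alpha^2 / (3 kT) = ratio."""
    return math.sqrt(3 * ratio / (4 * math.pi * kappa_kT))

assert abs(alpha_threshold(10) - 0.5) < 0.02   # kappa = 10 kT -> alpha ~ 0.5
assert abs(alpha_threshold(20) - 0.35) < 0.01  # kappa = 20 kT -> alpha ~ 0.35
```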
For low inclusion densities, the undulation-induced interaction
energy decays with the square of the density while the free
energy due to the inclusion entropy is of the order of $k_B T$; for
the phase diagrams in Fig.~\ref{fig12}, the undulation free energy
is about $10^{-4} \, k_B T$.
\subsection{Inclusion effective pair potential and effective hard-disc radius}
For inclusions on a hexagonal lattice, each inclusion corresponds to
three pair interactions and a radius of the deformation patch,
$\rho_o$, that is half the distance between the inclusions. The
effective pair potential, obtained from Eq.~(\ref{eq10}), is thus
\begin{equation}
\begin{array}{c@{\hspace{3ex}}l}
u (d) = \frac{2 \pi \kappa (b \, d - 2 a \rho_i)^2}{3 (d^2 - 4 \rho_i^2)} &
\text{if $b \, d \le 2 a \rho_i$} \\
u (d) \approx 0 & \text{if $b \, d > 2 a \rho_i$} \, .
\end{array}
\label{eq9new}
\end{equation}
For inclusions on a planar membrane ($b=0$) and
for large $d$, i.~e.\ for $d \gg \rho_i$, the repulsive interaction
potential decays with a power law, $u \sim d^{-2}$.
To determine the free energy of this system, we employ the method
developed for suspensions of repulsive colloids \cite{barker67}, i.~e.\
we mimic the interaction potential by hard discs with an effective radius,
$r_{\rm hd}$. The radius is calculated
from a comparison of the membrane deformation energy with the thermal energy,
$k_B T$. We use a modified Barker-Henderson method that is appropriate for soft
potentials \cite{barker67,footnote7},
\begin{equation}
r_{\rm hd} = \frac{1}{2} \int_{0}^{\rho_u} \left\{ 1 - \exp \left[ -
\beta \mathcal{E} (\rho) \right] \right\} d \rho \, , \label{eq14}
\end{equation}
where the upper integral boundary is determined by $u (2 \rho_u)
= k_B T$ \cite{henderson70, watts69}. The effective hard-disc radii
therefore depend on the geometry of the inclusion, the bending rigidity
of the membrane, and on the radius of the vesicle on which the inclusions
are placed.
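The construction can be sketched numerically. The script below is one possible reading of Eq.~(\ref{eq14}) (not the authors' exact prescription): a Barker-Henderson integral over the centre-to-centre distance $d$, cut off at the distance $d_u$ where $u(d_u) = k_B T$, with the factor $1/2$ converting the separation into a radius; the geometry ($a = \tan\alpha$, $\rho_i = r_i \sin\alpha$) is assumed as before.

```python
import math

KT = 1.0  # energies measured in units of k_B T

def u_pair(d, kappa, a, b, rho_i):
    """Effective pair potential of Eq. (eq9new)."""
    if d <= 2 * rho_i:
        return math.inf          # hard-core overlap of the inclusions
    if b * d > 2 * a * rho_i:
        return 0.0               # slopes can be matched: no deformation
    return 2 * math.pi * kappa * (b * d - 2 * a * rho_i)**2 \
        / (3 * (d**2 - 4 * rho_i**2))

def r_hd(kappa, a, b, rho_i, n_steps=40000):
    """Effective hard-disc radius in the spirit of Eq. (14), read as a
    Barker-Henderson integral over the centre-to-centre distance with
    cutoff where u = k_B T (an interpretation of the text)."""
    # bracket and bisect the cutoff d_u with u(d_u) = k_B T
    lo, hi = 2 * rho_i * (1 + 1e-9), 4 * rho_i
    while u_pair(hi, kappa, a, b, rho_i) > KT:
        hi *= 2
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if u_pair(mid, kappa, a, b, rho_i) > KT:
            lo = mid
        else:
            hi = mid
    d_u = hi
    # midpoint rule for (1/2) * integral_0^{d_u} [1 - exp(-u/kT)] dd
    h = d_u / n_steps
    total = 0.0
    for i in range(n_steps):
        u = u_pair((i + 0.5) * h, kappa, a, b, rho_i)
        total += 1.0 if math.isinf(u) else 1.0 - math.exp(-u / KT)
    return 0.5 * h * total

# Planar membrane (b = 0), inclusions with r_i = 5.5 nm, alpha = 0.64,
# kappa = 12 kT (values from the figures of the text)
alpha, r_i, kappa = 0.64, 5.5, 12.0
rho_i = r_i * math.sin(alpha)
r = r_hd(kappa, math.tan(alpha), 0.0, rho_i)
assert r > rho_i   # effective radius exceeds the geometric projected radius
```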
In Fig.~\ref{fig7}, the effective hard-disc radii, $r_{\rm hd}$, are
plotted for inclusions with various opening angles as function of
the vesicle radius, $R$ (an extremely large radius
$R=100 \, \rm \mu m$ of the vesicle is used to describe inclusions
in planar membranes). The hard-disc radii increase with
increasing vesicle radius; the increase of $r_{\rm hd}$ with $R$ is
stronger for larger opening angles $\alpha$ of the inclusions.
For example, inclusions with $r_i=5.5 \, \rm nm$ and
$\alpha = 0.4 \, \pi$ can diffuse about an order of magnitude
closer to each other on a vesicle with radius $R = 10 \, \rm nm$ than
is possible on a planar membrane. Consequently, the translational entropy
of the inclusions lowers the free energy on the vesicle compared
with the planar membrane.
\begin{figure}[bp]
\begin{center}
\leavevmode
\includegraphics[width=0.9\columnwidth]{fig07.eps}
\vspace{-3ex}
\end{center}
\caption{Effective hard-disc radii for inclusions on a vesicle as
function of the vesicle's radius, $R$ (see Eq.~(\ref{eq14})). All
inclusions have the curvature radius, $r_i = 5.5 \, \rm nm$, and
the opening angles $\alpha=0.4 \pi$, $\alpha=0.35 \pi$, ...,
$\alpha=0.05 \pi$ (from top to bottom). The projected inclusion
radii are in the range $0.86 \, \rm nm < \rho_i < 5.2 \, \rm nm$,
the membrane bending rigidity is $\kappa=12 \, k_B T$.}
\label{fig7}
\end{figure}
As discussed in Sec.~\ref{sec3d}, cluster formation of inclusions
increases their effect on the membrane curvature. The effect of cluster
formation on the area fraction of effective hard discs is plotted in
Fig.~\ref{fig8}. For $\kappa=12 \, k_B T$, which is a typical
value for a lipid bilayer, clustering on vesicles with large radii, $R$,
strongly increases the area fraction of the effective hard discs.
To illustrate the strong effect of the bending
rigidity on the effective hard-disc radius, which enters the calculation
of the radius via the exponential function in Eq.~(\ref{eq14}), we
plot the effective hard disc radii for $\kappa=1 \, k_B T$;
the increase of the area fraction of the hard discs with $\alpha$ is
much smaller than for $\kappa=12 \, k_B T$. Thus the translational entropy
of the inclusions plays a much more important role for smaller
$\kappa$ \cite{footnote8}.
\begin{figure}[tp]
\begin{center}
\leavevmode
\includegraphics[width=0.9\columnwidth]{fig08.eps}
\vspace{-5ex}
\end{center}
\caption{Area fraction of effective hard discs for inclusions with
$r_i=5.5 \, \rm nm$ and opening angles $0.1 < \alpha <
1.3$. For a fixed number of inclusions,
cluster formation leads to a larger area fraction of the effective
hard discs and finally to crystallization. The area fraction of the
effective hard discs with opening angle $\alpha$ is normalized by the
area fraction of effective hard discs with $\alpha=0.05 \, \pi$: an
increase of $\alpha$ corresponds to a decrease of $\sigma$ (compare
Fig.~\ref{fig6}). The inclusions are placed on vesicles with
$\kappa=12 \, k_B T$ and various radii $R=0.1 \, \rm \mu m$ (dotted),
$R=1 \, \rm \mu m$ (short-dashed), $R=10 \, \rm \mu m$
(long-dashed), and $R=100 \, \rm \mu m$ (solid). For
comparison, the area fraction of effective hard discs is also shown
for $\kappa = 1 \, k_B T$ and $R=100 \, \rm \mu m$ (dashed-dotted).}
\label{fig8}
\end{figure}
\subsection{Inclusion entropy and free energy of hard discs}
The free energy of a fluid of hard discs can be very well described
by the Carnahan-Starling
free energy \cite{carnahan69,maeso93}. It is the sum of the ideal-gas free
energy, $\mathcal{F}_{\rm id}/n \approx k_B T \log(\sigma \Lambda^2)$,
where $\Lambda$ is the thermal wavelength (see e.~g.\ Ref.~\cite{goegelein08}),
and the excess free energy $\mathcal{F}_{\rm CS}$ \cite{hansenbook}. The
latter is calculated from the
Carnahan-Starling equation of state \cite{maeso93},
\begin{equation}
\frac{p}{\sigma \, k_B T} = \frac{1}{(1-y)^2} \, \, \, ,
\end{equation}
with the hard-disc area fraction, $y=\sigma \pi r_{\rm hd}^2$.
Integration of the thermodynamic relation $p=-(\partial
F/\partial V)_{T,N}$ finally gives the excess free energy,
\begin{equation}
\frac{1}{n} \frac{\mathcal{F}_{\rm CS}}{k_B T} = \int_0^{y} \left(
\frac{p}{k_B T \, \sigma} - 1 \right) \frac{d\tilde{y}}{\tilde{y}} =
\frac{y}{1-y} - \ln (1-y) \, \, \, .
\end{equation}
The Carnahan-Starling excess free energy diverges at the crystallization
transition of the effective hard discs.
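The closed form of the excess free energy can be verified against a direct numerical integration of the equation of state; the following sketch (not the authors' code) checks the two at a representative area fraction.

```python
import math

def f_cs(y):
    """Carnahan-Starling excess free energy per disc, in k_B T:
    y/(1-y) - ln(1-y)."""
    return y / (1 - y) - math.log(1 - y)

def f_cs_numeric(y, n_steps=200000):
    """Midpoint-rule integral of (p/(sigma kT) - 1) dy'/y' with the
    equation of state p/(sigma kT) = 1/(1-y')^2."""
    h = y / n_steps
    total = 0.0
    for i in range(n_steps):
        yt = (i + 0.5) * h
        total += (1 / (1 - yt)**2 - 1) / yt
    return total * h

y = 0.5
assert abs(f_cs(y) - f_cs_numeric(y)) < 1e-6
```

At $y = 1/2$ the closed form gives $1 + \ln 2 \approx 1.693 \, k_B T$, and the free energy indeed diverges as $y \rightarrow 1$.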
Because in the fluid as well as in the crystalline phase of the inclusions
the squared thermal wavelength enters through the same constant and additive
term, we consistently replace it in both cases --- without loss of
information --- by the projected area of the inclusion, $\pi \rho_i^2$,
such that
$\tilde{\mathcal{F}}_{\rm id}/n \approx k_B T \log(\sigma \pi \rho_i^2)$.
Usually the translational entropy favors a homogeneous distribution of
particles. However, because the effective hard-disc radius depends on
the membrane curvature, a homogeneous distribution of inclusions
on a deformable membrane need not be the most favourable state. Instead, the
inclusion density on the bud can be higher than on the mother vesicle because
of the screened repulsive interactions.
Fig.~\ref{fig9} shows the free energies of a fluid of effective hard
discs with curvature radius $r_i = 5.5 \, \rm nm$ for various opening
angles in a lipid bilayer with $\kappa=12 \, k_B T$.
For nearly identical curvature radii of vesicle and inclusions, the
effective hard-disc radius almost coincides with the geometric hard-disc
radius of the inclusion.
\begin{figure}[hp]
\begin{center}
\leavevmode
\includegraphics[width=0.9\columnwidth]{fig09.eps}
\vspace{-3ex}
\end{center}
\caption{Carnahan-Starling free energy of hard discs as function
of the inclusion area fraction. The system is depicted
in the inset with periodic boundary conditions. We plot the free
energy for discs with the geometrical projected radii of the
inclusions, $r_{\rm hd}=\rho_i = r_i \sin \alpha$ (solid line), as well
as the free energies for effective hard discs for inclusions with
curvature radius $r_i = 5.5 \, \rm nm$ and various opening angles
on vesicles with $\kappa=12 \, k_B T$:
$\alpha = 0.16$ and $R = 20 \, \rm nm$ (long dashed),
$\alpha = 0.16$ and $R = 100 \, \rm \mu m$ (short dashed),
$\alpha = 0.64$ and $R = 20 \, \rm nm$ (long-dashed dotted),
$\alpha = 0.64$ and $R = 100 \, \rm \mu m$ (short-dashed dotted),
$\alpha = 0.82$ and $R = 20 \, \rm nm$ (dotted), and
$\alpha = 0.82$ and $R = 100 \, \rm \mu m$ (double dashed).}
\label{fig9}
\end{figure}
\subsection{Free energy per inclusion in fluid and crystalline phases}
\label{sec4}
\begin{figure}[bp]
\begin{center}
\leavevmode
\includegraphics[width=0.93\columnwidth]{fig10.eps}
\vspace{-3ex}
\end{center}
\caption{Free energies as function
of the inclusion area fraction, $\sigma \pi \rho_i^2$, for inclusions
with $r_i=5.5 \, \rm nm$, $\alpha = 0.82$ in a membrane with
$\kappa= 12 \, k_B T$ for several vesicle radii:
$R = 1 \, \rm \mu m$ (solid),
$R = 100 \, \rm nm$ (long-dashed), $R = 50 \, \rm nm$ (short-dashed),
$R = 30 \, \rm nm$ (dotted), $R = 20 \, \rm nm$ (long dashed-dotted),
$R = 15 \, \rm nm$ (short dashed-dotted). For low densities, the
inclusions are in a fluid phase and the free energy is given by the
membrane bending energy plus the Carnahan-Starling excess free energy
for the effective hard discs. For high densities, the inclusions are in
a crystalline phase and the free energy is given by the membrane
bending energy and the free energy of a harmonic crystal.}
\label{fig10}
\end{figure}
We construct the free energy per inclusion in the crystalline phase
from the membrane
bending energy and the fluctuation free energy of a harmonic crystal,
and in the fluid phase from the sum of the membrane bending energy and
the translational entropy of the inclusions \cite{footnote12}.
For the harmonic crystal, the spring constant $k_{\rm sp}$ is obtained
for a hexagonal lattice with the interaction potential in Eq.~(\ref{eq9new}),
\begin{equation}
k_{\rm sp} = \frac{16 \pi \kappa \rho_i}{(d^2-4 \rho_i^2)^3} \left[ (3 d^2 +4 \rho_i^2) \rho_i (a^2+b^2)-(d^2+12 \rho_i^2) \, a b d \right] \, .
\end{equation}
The free energy contribution of the positional fluctuations of the inclusions
is therefore
\begin{equation}
\mathcal{F}_{\rm HC} = k_B T \ln \left[\frac{k_{\rm sp} \Lambda^2}{2 \pi}\right]
\end{equation}
or --- after replacement of the thermal wavelength by the inclusion size ---
\begin{equation}
\tilde{\mathcal{F}}_{\rm HC} = k_B T \ln \left[\frac{k_{\rm sp} \rho_i^2}{2}\right] \, .
\end{equation}
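A short evaluation sketch follows; note that it reads the spring-constant formula with the two products grouped inside an overall bracket multiplying the prefactor (an assumption about the typeset grouping, which is the dimensionally consistent reading), and all parameter values are illustrative.

```python
import math

def k_spring(d, kappa, a, b, rho_i):
    """Spring constant of the hexagonal inclusion crystal, read as
    k_sp = 16 pi kappa rho_i / (d^2 - 4 rho_i^2)^3
           * [ (3 d^2 + 4 rho_i^2) rho_i (a^2 + b^2)
               - (d^2 + 12 rho_i^2) a b d ]
    (the bracketing is an assumption about the printed equation)."""
    pref = 16 * math.pi * kappa * rho_i / (d**2 - 4 * rho_i**2)**3
    brack = (3 * d**2 + 4 * rho_i**2) * rho_i * (a**2 + b**2) \
        - (d**2 + 12 * rho_i**2) * a * b * d
    return pref * brack

def f_hc(k_sp, rho_i):
    """Positional-fluctuation free energy per inclusion, in k_B T, with
    the thermal wavelength replaced by the inclusion size."""
    return math.log(k_sp * rho_i**2 / 2)

# Planar membrane (b = 0): r_i = 5.5 nm, alpha = 0.64, kappa = 12 kT,
# lattice spacing d = 10 nm (illustrative)
alpha, r_i = 0.64, 5.5
rho_i, a = r_i * math.sin(alpha), math.tan(alpha)
k_sp = k_spring(10.0, 12.0, a, 0.0, rho_i)
assert k_sp > 0   # the crystal is stable against in-plane displacements
```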
Whenever we use the free energy for the crystalline phase in this paper, the
Lindemann parameter remains below the critical value for melting of
the inclusion crystal
\cite{eisenmann04}.
The transition between the fluid and the crystalline
phase already occurs below the crystallization transition of the effective
hard discs, $\sigma \pi r_{\rm hd}^2 \approx 0.7$. In Fig.~\ref{fig10},
the free energy per inclusion is plotted for several vesicle radii.
Entropy reduces the optimal bud radius for a given inclusion density
compared with Eq.~(\ref{eq11}). However, the bending energy alone still
provides a good estimate for the optimal bud radius, because of the strong
increase of the free energy per inclusion for low inclusion densities
(see Fig.~\ref{fig10}).
\subsection{Vesiculation diagrams}
\begin{figure}[tp]
\begin{center}
\leavevmode
\includegraphics[width=0.75\columnwidth]{fig11.eps}
\vspace{-3ex}
\end{center}
\caption{(Color online)
Vesiculation phase diagram for an inclusion density $\sigma$
of inclusions with $r_i=5.5 \, \rm nm$ and $\alpha = 0.64$ on (initially)
a single vesicle of radius $R$. For small $\sigma$ or $R$, the
energetically favorable state is the single vesicle; fission, first
into two, and finally into $3$ or more vesicles is expected to occur
when $\sigma$ and/or $R$ is increased (more than 3 vesicles
are not resolved by the calculation). {\em (a) $\bar{\kappa}=0$:}
In the two-vesicle regime, a region where the two vesicles have equal
sizes grows with decreasing $\kappa$ and bounds the three-vesicle regime.
The lines depict the border between equally and differently sized vesicles
for $\kappa=200 \, k_B T$, $\kappa=100 \, k_B T$, $\kappa=50 \, k_B T$,
$\kappa=30 \, k_B T$, and $\kappa=10 \, k_B T$ (the vesicle sizes are not
resolved in the $3+$ region).
{\em (b) $\bar{\kappa}=-\kappa/2$:} Fission occurs already for smaller
values of $\sigma$ and $R$. In the $2$-vesicle region the two
vesicles have different sizes for $\kappa=200 \, k_B T$,
$\kappa=100 \, k_B T$, $\kappa=50 \, k_B T$, and $\kappa=30 \, k_B T$.
For $\kappa = 10 \, k_B T$, in a small region both vesicles have
equal sizes.}
\label{fig12}
\end{figure}
From the total free energy in Sec.~\ref{sec4}, we calculate vesiculation
diagrams starting with a single vesicle of radius $R$ and a given number of
inclusions for several values of $\kappa$. Because the topology changes
when buds detach, the value of $\bar{\kappa}$ plays an important role for
vesiculation.
Fig.~\ref{fig12} shows vesiculation diagrams for $\bar{\kappa}=0$
and for $\bar{\kappa}=-\kappa/2$ (the ratio $\bar{\kappa}/\kappa$ is still under
debate, see Ref.~\cite{deserno09}). We calculate whether the single
vesicle is the most stable state or whether fission into two or more
smaller vesicles is favorable.
With increasing initial vesicle radius and inclusion density, fission
becomes more likely to occur --- first into two, at even larger $R$ or
$\sigma$ into three or more vesicles.
For $\bar{\kappa} = 0$, fission in the bending-energy dominated regime
produces two smaller vesicles that in general have different sizes,
see Eq.~(\ref{eqvesbr}). If entropy is important, the two
vesicles may have equal size. In Fig.~\ref{fig12}~(a), it is shown
that a regime of equally-sized vesicles develops, bordering the
three-vesicle regime and increasing in size with decreasing $\kappa$.
Within the error bars of our calculation, we find that the boundaries
between one, two, and three vesicles are independent of the
value of the bending rigidity for $10 \, k_B T \le \kappa \le 200 \, k_B T$.
For $\bar{\kappa} = - \kappa/2$, vesiculation takes place
for smaller initial vesicle radii and inclusion densities than for
$\bar{\kappa} = 0$, because each additional vesicle lowers the free
energy by $4 \pi \bar{\kappa}$. In the entire two-vesicle regime, the two
vesicles have different sizes for $\kappa=200 \, k_B T$, $\kappa=100 \, k_B T$,
$\kappa=50 \, k_B T$, and $\kappa=30 \, k_B T$, while for
$\kappa = 10 \, k_B T$ a small region of equally sized vesicles is found,
see Fig.~\ref{fig12}~(b).
Note that for bud formation, which has to occur before vesiculation,
the value of $\bar{\kappa}$ is irrelevant and the phase diagrams for
$\bar{\kappa}=0$ apply (assuming that the buds are connected by
catenoidal necks with vanishing bending energy, and that the membrane
area needed to form the neck is negligible). While the bud is
being formed and has not yet detached, the integral over the Gaussian
curvature and therefore the contribution of the saddle splay modulus to the
deformation energy stays constant.
However, a negative saddle-splay modulus facilitates breaking of the
neck between two vesicles. In this case, the budded
state can act as an energy barrier for vesiculation that prevents
fission, separating a high-energy single-vesicle state from a low-energy
state of several smaller vesicles.
\section{Budding pathway}
\label{sec5}
To shed more light on the role of the budding pathway, we compare the
typical diffusion time of the inclusions with the relaxation time of
the membrane conformation on the same length scale. If the
diffusion of inclusions is fast compared with the relaxation time of
the membrane, the initial membrane shape is decisive; for
a fast membrane relaxation, the initial distribution of inclusions
mainly determines the budding process.
The diffusion time is
$t_d= \lambda^2/(4 D)$ where $\lambda$ is a characteristic
length scale that the inclusion has diffused and $D$ is the diffusion
coefficient of the inclusion. A typical value is $D= 1 \, \rm \mu
m^2 s^{-1}$ for the diffusion of lipids and the diffusion coefficient
for proteins in cell membranes can be up to two orders of
magnitude smaller \cite{henis93}. The relaxation time of the
membrane is $t_r = \eta \lambda^3/(2 \pi^3 \kappa)$ \cite{gov03},
where $\eta=1 \, \rm mPa \, s \approx 2.4 \times
10^{-10} \, k_B T \, \rm s \, nm^{-3}$ is the viscosity of the
surrounding water and $\lambda$ is the wavelength of the membrane
undulations. From the cubic versus the quadratic dependence on
$\lambda$, we find that for small $\lambda$, $t_r < t_d$, while for
large $\lambda$, $t_r>t_d$.
For $\kappa= 10 \, k_B T$ and $D=1 \, \rm \mu m^2 s^{-1}$ (which is
an upper bound for the diffusion coefficient of proteins), we find
that $t_r=t_d$ for $\lambda \approx 0.6 \, \rm mm$. This length is
much larger than $10 \, \rm \mu m$, the typical size of cells
\cite{albertsbook} or giant unilamellar vesicles \cite{pecreaux04};
thus the initial aggregation of inclusions is diffusion-limited. We
therefore assume that inhomogeneities in the protein distribution on
the membrane will immediately lead to a membrane deformation that
minimizes the bending energy. The larger the initial inclusion density,
the larger the spontaneous curvature of the membrane (compare
Eq.~(\ref{eq11})) and the smaller the size of the buds that are formed.
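The crossover length at which membrane relaxation and inclusion diffusion become equally fast follows from setting $t_d = t_r$; a quick numerical sketch with the parameter values quoted above:

```python
import math

D = 1.0e6       # diffusion coefficient, nm^2/s  (1 um^2/s, an upper bound)
eta = 2.4e-10   # water viscosity, k_B T s / nm^3 (about 1 mPa s)
kappa = 10.0    # bending rigidity, k_B T

def t_diff(lam):
    """Diffusion time over length lam (nm), in seconds."""
    return lam**2 / (4 * D)

def t_relax(lam):
    """Membrane relaxation time on wavelength lam (nm), in seconds."""
    return eta * lam**3 / (2 * math.pi**3 * kappa)

# Crossover t_d = t_r at lam* = pi^3 kappa / (2 D eta)
lam_star = math.pi**3 * kappa / (2 * D * eta)
assert abs(t_diff(lam_star) - t_relax(lam_star)) < 1e-9 * t_diff(lam_star)
assert 5e5 < lam_star < 8e5   # ~0.6 mm, far above cell and vesicle sizes
```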
Bud formation is initialized in some regions of the membrane that have
a noticeably higher inclusion density than others. The relative protein
density fluctuations decrease with the size of the membrane patch
considered. If we assume a random distribution of inclusions,
for a patch with $N$ inclusions the relative fluctuations of the inclusion
number are of the order of $N^{-1/2}$. Thus for small patches, the
inhomogeneities are strongest and budding will preferably occur on the
smallest possible lengthscale. A small average membrane curvature with
appropriate sign further attracts proteins to those regions where the
bending energy per inclusion is already reduced. However, this clustering
of inclusions during the budding process is hindered by a ring with
opposite membrane curvature that forms the neck of a growing bud. This
ring acts as an energetic barrier that prevents further inclusions from
entering a patch of the membrane where budding has already started \cite{footnote10}.
Because of the neck formation and the diffusion-limited budding
process, the bud size is roughly determined by the initial
inclusion density on the membrane.
\section{Comparison with simulation results}
\begin{figure}[tp]
\begin{center}
\leavevmode
\includegraphics[width=0.9\columnwidth]{fig12.eps}
\vspace{-3ex}
\end{center}
\caption{Energies per inclusion needed to place $10$ inclusions
of size $\rho_i=2.5 \, \rm nm$ on a vesicle with $R = 15 \, \rm nm$,
$\kappa = 20 \, k_B T$, and $\bar{\kappa} = - 20 \, k_B T$, as function
of $\alpha$. Lines show bending energy (long-dashed), bending energy
and inclusion
entropy (short-dashed), saddle-splay energy (dotted), and total energy
(solid). For comparison, we also plot the simulation data of
Ref.~\cite{atilgan07} (symbols indicate different calculation methods
\cite{atilgan07}), shifted by $\Delta F = -10.5 \, k_B T$ (see main text).
The deviation between the simulations and our theory
for $\alpha \gtrsim 0.4$ might be due to the surface tension used
in the simulation, which is not included in our model.}
\label{fig11}
\end{figure}
Budding due to membrane inclusions has been studied recently with computer
simulations \cite{reynwar07,atilgan07}. In Ref.~\cite{atilgan07}, entire
vesicles with inclusions are simulated where the membrane is modelled as a
dynamically triangulated surface. In Ref.~\cite{reynwar07}, coarse-grained
model lipids are used to study budding for planar bilayer patches.
In Fig.~\ref{fig11}, the different contributions to the free energy
per inclusion needed to graft $10$ inclusions with given projected
radius $\rho_i = 2.5 \, \rm nm$ on a vesicle with radius $R=15 \,
\rm nm$ are plotted as function of the opening angle, $\alpha$. The
inclusions are in the fluid phase.
For comparison, we also plot the simulation data of Ref.~\cite{atilgan07},
shifted by $\Delta F = -10.5 \, k_B T$ because our model does not account
for thermal undulations of the membrane conformation. This energy difference
is extracted from the simulation data for $T= 300 \, \rm K$ and for
$T = 3 \, \rm K$, see Fig. 6A in Ref.~\cite{atilgan07}. However, the very
good match is somewhat fortuitous because we replace the thermal wavelength
in the ideal-gas free energy by the inclusion size, such that it is similar
to the free energy obtained on a triangulated vesicle with a bond length
that approximately equals the inclusion diameter, compare Ref.~\cite{smith05}.
We consider curvature radii of the inclusions that are both smaller and larger
than the curvature radius of the vesicle. For $\alpha \rightarrow 0$ and $r_i
\rightarrow \infty$, such that $\rho_i = 2.5 \, \rm nm$, bending
energy is needed to insert the flat inclusion into the curved
vesicle. This bending-energy cost decreases with increasing
opening angle $\alpha$ and is zero for $\alpha \approx 0.17$, where the
curvature radius of the inclusion equals the curvature radius of the
vesicle. If $\alpha$ is further increased, the bending energy per
inclusion continues to decrease as more and more of the vesicle area
consists of catenoidal patches around the inclusions. For $\alpha \approx
1.18$, i.~e.\ for even larger opening angles than plotted in the figure,
the inclusions have optimal density and the bending energy
gained by grafting all $10$ inclusions to the vesicle is $8 \pi
\kappa$.
In addition to the bending energy, there is an energy
cost $\mathcal{E}_{\bar{\kappa}}$ for grafting that arises due to
the saddle-splay modulus, which has been
chosen to be $\bar{\kappa} = - \kappa = - 20 \, k_B T$ for consistency
with Ref.~\cite{atilgan07}. The energy cost per inclusion only
depends on the geodesic curvature of the membrane at the inclusion,
i.~e.\ on the opening angle $\alpha$, which implies
$\mathcal{E}_{\bar{\kappa}} = - 2 \pi \bar{\kappa} (1 - \cos \alpha)$.
Therefore it is independent of vesicle
radius and inclusion density and is only important to calculate the
chemical potential for the inclusions on the surface; the budding
transition at given inclusion density in the membrane
is independent of $\bar{\kappa}$.
In the simulations presented in Ref.~\cite{reynwar07}, budding is induced
by inclusions with $r_i = 5.5 \, \rm nm$ that are initially
placed in a regular array on a planar membrane with $\kappa = 12 \, k_B T$.
Under the assumption that the initial inclusion density,
$\sigma = 2 \times 10^{-3} \, \rm nm^{-2}$, determines the bud size
(see Sec.~\ref{sec5}), we can
roughly predict the bud radius from Eq.~(\ref{eq11}).
Possible bud radii are estimated by comparing
the free energies for different vesicle radii in Fig.~\ref{fig10} at
the initial inclusion density.
The parameters for which
the free energies are plotted in Fig.~\ref{fig10} are chosen to
match the bending rigidity and the inclusion geometry of the
``large inclusions'' in Ref.~\cite{reynwar07} with $\alpha = 0.26 \, \pi$
\cite{footnote11}. For
an initial inclusion density in the planar membrane, $\sigma=0.002 \,
\rm nm^{-2} \approx 0.15/(\pi \rho_i^2)$, which is estimated by visual
inspection of the simulation snapshots, we find that the inclusions
are in the crystalline phase on the
vesicle with $R=1 \, \rm \mu m$ (i.~e.\ effectively in a planar membrane).
The free energy per inclusion is about $10 \, k_B T$. Significantly
smaller free
energies per inclusion can be found for vesicle radii
$22 \, {\rm nm} \lesssim R \lesssim 100 \, \rm nm$;
the optimal vesicle radii are $30 \, {\rm nm} \lesssim
R \lesssim 60 \, \rm nm$ with free energies per inclusion of
about $-1 \, k_B T$. These radii agree well with the observed
bud radius of $R=30 \, \rm nm$ that forms in the simulations as the final
state via an initially slightly larger bud. The optimal bud radius from
Eq.~(\ref{eq11}) is approximately $37 \, \rm nm$.
Similarly, for the ``very large inclusions'' in Ref.~\cite{reynwar07}
($\alpha = 0.39 \, \pi$)
we predict bud radii, $15 \, {\rm nm} \lesssim R \lesssim 20 \, \rm
nm$, as observed in the simulations; based on bending energy only we
find from Eq.~(\ref{eq11}) $R\approx 11 \, \rm nm$. For the
``small inclusions'' in Ref.~\cite{reynwar07} ($\alpha = 0.20 \, \pi$),
which are almost in the fluid phase already in the planar
membrane, our model predicts for the $36$
inclusions studied in the simulations a maximal gain for the free
energy per inclusion of $\approx 1.5 \, k_B T$ for $R = 38 \, \rm nm$
and a strong decrease of this energy gain for smaller vesicle radii.
Already for
$R=34 \, \rm nm$ and $29$ inclusions, the energies per inclusion on the
bud and in the planar membrane are approximately equal.
Larger energy gains are possible for larger bud radii, for which
many more than the simulated 36 inclusions and a larger area of the bilayer
patch are needed. From these considerations, it is not surprising that no
budding is observed for the ``small inclusions'' in the simulations of
Ref.~\cite{reynwar07}.
\section{Conclusions}
We have calculated the membrane-mediated interaction of conical inclusions
in a lipid bilayer and the inclusion entropy, which allow the prediction of
budding transitions
and vesiculation. Our model is based on the membrane bending energy;
with this contribution alone, the spontaneous curvature of a bilayer
with inclusions as well as budding can be predicted for many
systems, ranging from protein inclusions to viral budding.
Although the interaction between the inclusions by membrane deformation
is repulsive, the screening of the repulsive interaction due to the
average membrane curvature allows higher inclusion densities
on a bud than in the initial vesicle. Translational entropy of the inclusions
favors the formation of equally-sized daughter vesicles and lifts the
degeneracy that is found for states with vanishing bending energy.
From our calculations, the following picture of the effect of the bilayer
deformation by curved inclusions emerges. For low inclusion density,
the membrane around each inclusion assumes a catenoid
shape of vanishing curvature energy. At optimal inclusion density,
the catenoids are closely packed and the bending energy for the entire
vesicle vanishes. For high inclusion density, the boundary conditions for
the membrane deformation around the inclusion do not allow the formation
of catenoidal patches, and the inclusions always feel the membrane-mediated
repulsive interaction with neighboring inclusions. In this regime, bud
formation can occur.
If we assume a specific biological mechanism that leads to formation of
clusters with well-defined and limited size, we find that such a mechanism
can induce
bud formation and vesiculation without the need to insert additional conical
proteins into the cell membrane: cluster formation reduces the preferred
curvature radius of the membrane. We quantify the effect of aggregation
by the coagulation factor, which describes how the preferred curvature
radius for a fixed amount of inclusions changes with the cluster size.
In general, our analytical model is applicable for a wide range of
length scales and inclusion geometries. Computer simulations are usually
designed only for a specific length scale, e.~g.\ a length scale comparable
to the length scale of lipids in Ref.~\cite{reynwar07} or the length scale
of entire vesicles in Ref.~\cite{atilgan07}. The good agreement with the
simulation results of Refs.~\cite{reynwar07,atilgan07} suggests that the
approximations used in our calculations are justified.
We argue that the undulation-induced attraction can be
neglected compared with the deformation-induced repulsion
and the translational entropy of the inclusions for
$\kappa \alpha^2 \gtrsim 15 \, k_B T/(2 \pi)$, i.~e.
$\alpha \gtrsim 0.35$ for $\kappa=20 \, k_B T$. For example,
the BAR protein induces a local membrane curvature with an
opening angle $\alpha \approx 0.4$ \cite{blood06}. Clathrin
can induce a variety of opening angles \cite{kohyama03,heuser80}.
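The quoted threshold follows from solving $\kappa \alpha^2 = 15 \, k_B T/(2\pi)$ for $\alpha$; a minimal sketch (the helper name is ours):

```python
import math

def alpha_threshold(kappa_kBT):
    """Opening angle at which kappa*alpha^2 = 15 k_B T / (2*pi), i.e. where
    the deformation-induced repulsion starts to dominate the
    undulation-induced attraction."""
    return math.sqrt(15.0 / (2.0 * math.pi * kappa_kBT))

print(round(alpha_threshold(20.0), 2))  # 0.35 for kappa = 20 k_B T
# A BAR-protein-like inclusion with alpha ~ 0.4 lies above this threshold,
# so neglecting the undulation-induced attraction is justified there.
```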
The number of inclusions per bud is determined by the budding process.
Around a growing bud, a neck forms that presents
an energetic barrier for the diffusion of inclusions. Because the deformation
of the lipid membrane typically occurs much faster than the diffusion of
the inclusions within the membrane, the number of inclusions per
bud is well determined by the initial inclusion density in the
membrane. From this, we can estimate a range of possible bud radii,
which agrees well with the simulations in Ref.~\cite{reynwar07}.
It would be interesting to test
the validity of our model in the limits of (a) very
small inclusions, such as lipids with large headgroups, when the
description of the lipid membrane by curvature
elastic constants may not be appropriate and (b)
very floppy membranes, when neglecting the thermal membrane undulations
may not be justified.
\acknowledgments
We acknowledge helpful discussions with G.~N\"agele, R.~G.~Winkler,
J.~L.~McWhirter, K.~Mecke, R.~Golestanian, M. Deserno, and M. Oettel.
Rhian M. Touyz MBBCh, MSc (Med), PhD, FRCP, FRSE, FMedSci, FCAHS (born September 14, 1959) is a Canadian medical researcher. She has served as the Executive Director and Chief Scientific Officer of the Research Institute of the McGill University Health Centre in Montreal, Canada, since 2021. A clinician scientist, her research primarily focuses on hypertension and cardiovascular disease.
Early life and education
Touyz earned her BSc (1980), MBBCh (1984), MSc (1986) and PhD (1992) from the University of the Witwatersrand, Johannesburg, South Africa. She completed a post-doctoral fellowship (1992-1996) at the Institut de recherches cliniques de Montréal (IRCM).
Career
Touyz was a staff scientist and professor at the IRCM in Montreal, Canada, until 2005.
From 2005 to 2011, Touyz worked at the Kidney Research Centre of the Ottawa Hospital Research Institute, University of Ottawa, where she held a Tier 1 Canada Research Chair in Hypertension.
In 2011, Touyz moved to the University of Glasgow, where she served as Director of the Institute of Cardiovascular and Medical Sciences (ICAMS) and held the British Heart Foundation (BHF) Chair as Professor of Cardiovascular Medicine. A clinician scientist, she was also Director of the BHF Centre of Research Excellence in Vascular Science and Medicine and a consultant at the Queen Elizabeth University Hospital.
In 2021, Touyz was appointed as the Executive Director and Chief Scientific Officer of the Research Institute of the McGill University Health Centre in Montreal, Canada. Concurrently, she became a full professor in the departments of Medicine and Family Medicine, Faculty of Medicine and Health Sciences and the Dr. Phil Gold Chair in Medicine at McGill University.
Touyz's research focuses on the molecular and vascular biology of hypertension and target organ damage, particularly: 1) vascular signaling and redox biology; 2) adipose biology and cardiometabolic disease; 3) cardiovascular toxicity of anti-cancer drugs; and 4) pathophysiology and management of human hypertension.
Touyz has held important roles in many organizations including the European Society of Cardiology (committee member, Guidelines), the Canadian Hypertension Society (President), the AHA High Blood Pressure Research Council (Chair), the International Society of Hypertension (President) and the European Council for Cardiovascular Research (President).
Touyz serves as the Editor-in-Chief of Hypertension and as an Associate Editor of Pharmacological Reviews.
Awards and honours
2005 Dahl Lecture Award by the American Heart Association
2006 Grace A Goldsmith Award, American Society of Nutrition
2009 Vincenzo Panagia Distinguished Lecture Award, Institute of Cardiovascular Sciences Award
2010 Distinguished Service award from Hypertension Canada
2012 Robert M. Berne Distinguished Lecturer of the American Physiological Society.
2013 Elected a Fellow of the Royal Society of Edinburgh
2014 RD Wright Lecture Award of the High Blood Pressure Research Council of Australia
2015 Harriet Dustan Award for research excellence, Council on Hypertension, American Heart Association.
2016 American Society of Hypertension's Irvine Page Award for outstanding work in the field of hypertension
2017 Joan Mott Prize Lecture from The Physiological Society.
2019 Award of Research Excellence by the American Heart Association (AHA) Council on Hypertension.
References
Living people
University of the Witwatersrand alumni
Fellows of the Royal Society of Edinburgh
Canadian women scientists
1959 births
/**
* @file
* @brief Console handler implementation of shell.h API
*/
#include <zephyr.h>
#include <stdio.h>
#include <string.h>
#include <console/console.h>
#include <misc/printk.h>
#include <misc/util.h>
#ifdef CONFIG_UART_CONSOLE
#include <console/uart_console.h>
#endif
#ifdef CONFIG_TELNET_CONSOLE
#include <console/telnet_console.h>
#endif
#include <shell/shell.h>
#define ARGC_MAX 10
#define COMMAND_MAX_LEN 50
#define MODULE_NAME_MAX_LEN 20
/* additional chars are "> " (including '\0') */
#define PROMPT_SUFFIX 3
#define PROMPT_MAX_LEN (MODULE_NAME_MAX_LEN + PROMPT_SUFFIX)
/* command table is located in the dedicated memory segment (.shell_) */
extern struct shell_module __shell_cmd_start[];
extern struct shell_module __shell_cmd_end[];
#define NUM_OF_SHELL_ENTITIES (__shell_cmd_end - __shell_cmd_start)
static const char *prompt;
static char default_module_prompt[PROMPT_MAX_LEN];
static int default_module = -1;
#define STACKSIZE CONFIG_CONSOLE_SHELL_STACKSIZE
static char __stack stack[STACKSIZE];
#define MAX_CMD_QUEUED CONFIG_CONSOLE_SHELL_MAX_CMD_QUEUED
static struct console_input buf[MAX_CMD_QUEUED];
static struct k_fifo avail_queue;
static struct k_fifo cmds_queue;
static shell_cmd_function_t app_cmd_handler;
static shell_prompt_function_t app_prompt_handler;
static const char *get_prompt(void)
{
if (app_prompt_handler) {
const char *str;
str = app_prompt_handler();
if (str) {
return str;
}
}
if (default_module != -1) {
return default_module_prompt;
}
return prompt;
}
static void line_queue_init(void)
{
int i;
for (i = 0; i < MAX_CMD_QUEUED; i++) {
k_fifo_put(&avail_queue, &buf[i]);
}
}
static size_t line2argv(char *str, char *argv[], size_t size)
{
size_t argc = 0;
if (!strlen(str)) {
return 0;
}
while (*str && *str == ' ') {
str++;
}
if (!*str) {
return 0;
}
argv[argc++] = str;
while ((str = strchr(str, ' '))) {
*str++ = '\0';
while (*str && *str == ' ') {
str++;
}
if (!*str) {
break;
}
argv[argc++] = str;
if (argc == size) {
printk("Too many parameters (max %zu)\n", size - 1);
return 0;
}
}
/* keep it POSIX style where argv[argc] is required to be NULL */
argv[argc] = NULL;
return argc;
}
static int get_destination_module(const char *module_str)
{
int i;
for (i = 0; i < NUM_OF_SHELL_ENTITIES; i++) {
if (!strncmp(module_str,
__shell_cmd_start[i].module_name,
MODULE_NAME_MAX_LEN)) {
return i;
}
}
return -1;
}
/* For a specific command: argv[0] = module name, argv[1] = command name
* If a default module was selected: argv[0] = command name
*/
static const char *get_command_and_module(char *argv[], int *module)
{
*module = -1;
if (!argv[0]) {
printk("Unrecognized command\n");
return NULL;
}
if (default_module == -1) {
if (!argv[1] || argv[1][0] == '\0') {
printk("Unrecognized command: %s\n", argv[0]);
return NULL;
}
*module = get_destination_module(argv[0]);
if (*module == -1) {
printk("Illegal module %s\n", argv[0]);
return NULL;
}
return argv[1];
}
*module = default_module;
return argv[0];
}
static int show_cmd_help(char *argv[])
{
const char *command = NULL;
int module = -1;
const struct shell_module *shell_module;
int i;
command = get_command_and_module(argv, &module);
if ((module == -1) || (command == NULL)) {
return 0;
}
shell_module = &__shell_cmd_start[module];
for (i = 0; shell_module->commands[i].cmd_name; i++) {
if (!strcmp(command, shell_module->commands[i].cmd_name)) {
printk("%s %s\n",
shell_module->commands[i].cmd_name,
shell_module->commands[i].help ?
shell_module->commands[i].help : "");
return 0;
}
}
printk("Unrecognized command: %s\n", argv[0]);
return 0;
}
static void print_module_commands(const int module)
{
const struct shell_module *shell_module = &__shell_cmd_start[module];
int i;
printk("help\n");
for (i = 0; shell_module->commands[i].cmd_name; i++) {
printk("%s\n", shell_module->commands[i].cmd_name);
}
}
static int show_help(int argc, char *argv[])
{
int module;
/* help per command */
if ((argc > 2) || ((default_module != -1) && (argc == 2))) {
return show_cmd_help(&argv[1]);
}
/* help per module */
if ((argc == 2) || ((default_module != -1) && (argc == 1))) {
if (default_module == -1) {
module = get_destination_module(argv[1]);
if (module == -1) {
printk("Illegal module %s\n", argv[1]);
return 0;
}
} else {
module = default_module;
}
print_module_commands(module);
} else { /* help for all entities */
printk("Available modules:\n");
for (module = 0; module < NUM_OF_SHELL_ENTITIES; module++) {
printk("%s\n", __shell_cmd_start[module].module_name);
}
printk("\nTo select a module, enter 'select <module name>'.\n");
}
return 0;
}
static int set_default_module(const char *name)
{
int module;
if (strlen(name) > MODULE_NAME_MAX_LEN) {
printk("Module name %s is too long, default is not changed\n",
name);
return -1;
}
module = get_destination_module(name);
if (module == -1) {
printk("Illegal module %s, default is not changed\n", name);
return -1;
}
default_module = module;
strncpy(default_module_prompt, name, MODULE_NAME_MAX_LEN);
strcat(default_module_prompt, "> ");
return 0;
}
static int select_module(int argc, char *argv[])
{
if (argc == 1) {
default_module = -1;
} else {
set_default_module(argv[1]);
}
return 0;
}
static shell_cmd_function_t get_cb(int argc, char *argv[])
{
const char *first_string = argv[0];
int module = -1;
const struct shell_module *shell_module;
const char *command;
int i;
if (!first_string || first_string[0] == '\0') {
printk("Illegal parameter\n");
return NULL;
}
if (!strcmp(first_string, "help")) {
return show_help;
}
if (!strcmp(first_string, "select")) {
return select_module;
}
if ((argc == 1) && (default_module == -1)) {
printk("Missing parameter\n");
return NULL;
}
command = get_command_and_module(argv, &module);
if ((module == -1) || (command == NULL)) {
return NULL;
}
shell_module = &__shell_cmd_start[module];
for (i = 0; shell_module->commands[i].cmd_name; i++) {
if (!strcmp(command, shell_module->commands[i].cmd_name)) {
return shell_module->commands[i].cb;
}
}
return NULL;
}
static inline void print_cmd_unknown(char *argv)
{
printk("Unrecognized command: %s\n", argv);
printk("Type 'help' for list of available commands\n");
}
static void shell(void *p1, void *p2, void *p3)
{
char *argv[ARGC_MAX + 1];
size_t argc;
ARG_UNUSED(p1);
ARG_UNUSED(p2);
ARG_UNUSED(p3);
while (1) {
struct console_input *cmd;
shell_cmd_function_t cb;
printk("%s", get_prompt());
cmd = k_fifo_get(&cmds_queue, K_FOREVER);
argc = line2argv(cmd->line, argv, ARRAY_SIZE(argv));
if (!argc) {
k_fifo_put(&avail_queue, cmd);
continue;
}
cb = get_cb(argc, argv);
if (!cb) {
if (app_cmd_handler != NULL) {
cb = app_cmd_handler;
} else {
print_cmd_unknown(argv[0]);
k_fifo_put(&avail_queue, cmd);
continue;
}
}
/* Execute callback with arguments */
if (cb(argc, argv) < 0) {
show_cmd_help(argv);
}
k_fifo_put(&avail_queue, cmd);
}
}
static int get_command_to_complete(char *str, char **command_prefix)
{
char dest_str[MODULE_NAME_MAX_LEN];
int dest = -1;
char *start;
/* remove ' ' at the beginning of the line */
while (*str && *str == ' ') {
str++;
}
if (!*str) {
return -1;
}
start = str;
if (default_module != -1) {
dest = default_module;
/* the caller has already checked the length and '\0'-terminated str */
*command_prefix = str;
}
/*
* In case of a default module: only one parameter is possible.
* Otherwise, only two parameters are possible.
*/
str = strchr(str, ' ');
if (default_module != -1) {
return (str == NULL) ? dest : -1;
}
if (str == NULL) {
return -1;
}
if ((str - start + 1) >= MODULE_NAME_MAX_LEN) {
return -1;
}
strncpy(dest_str, start, (str - start + 1));
dest_str[str - start] = '\0';
dest = get_destination_module(dest_str);
if (dest == -1) {
return -1;
}
str++;
/* the caller has already checked the length and '\0'-terminated str */
*command_prefix = str;
str = strchr(str, ' ');
/* only two parameters are possible when there is no default module */
return (str == NULL) ? dest : -1;
}
static u8_t completion(char *line, u8_t len)
{
const char *first_match = NULL;
int common_chars = -1, space = 0;
int i, dest, command_len;
const struct shell_module *module;
char *command_prefix;
if (len >= (MODULE_NAME_MAX_LEN + COMMAND_MAX_LEN - 1)) {
return 0;
}
/*
* the line to be completed is not '\0'-terminated, unlike the lines
* obtained from the k_fifo_get() function
*/
line[len] = '\0';
dest = get_command_to_complete(line, &command_prefix);
if (dest == -1) {
return 0;
}
command_len = strlen(command_prefix);
module = &__shell_cmd_start[dest];
for (i = 0; module->commands[i].cmd_name; i++) {
int j;
if (strncmp(command_prefix,
module->commands[i].cmd_name, command_len)) {
continue;
}
if (!first_match) {
first_match = module->commands[i].cmd_name;
continue;
}
/* more commands match, print first match */
if (first_match && (common_chars < 0)) {
printk("\n%s\n", first_match);
common_chars = strlen(first_match);
}
/* cut common part of matching names */
for (j = 0; j < common_chars; j++) {
if (first_match[j] != module->commands[i].cmd_name[j]) {
break;
}
}
common_chars = j;
printk("%s\n", module->commands[i].cmd_name);
}
/* no match, do nothing */
if (!first_match) {
return 0;
}
if (common_chars >= 0) {
/* multiple match, restore prompt */
printk("%s", get_prompt());
printk("%s", line);
} else {
common_chars = strlen(first_match);
space = 1;
}
/* complete common part */
for (i = command_len; i < common_chars; i++) {
printk("%c", first_match[i]);
line[len++] = first_match[i];
}
/* for convenience add space after command */
if (space) {
printk(" ");
line[len] = ' ';
}
return common_chars - command_len + space;
}
void shell_init(const char *str)
{
k_fifo_init(&cmds_queue);
k_fifo_init(&avail_queue);
line_queue_init();
prompt = str ? str : "";
k_thread_spawn(stack, STACKSIZE, shell, NULL, NULL, NULL,
K_PRIO_COOP(7), 0, K_NO_WAIT);
/* Register serial console handler */
#ifdef CONFIG_UART_CONSOLE
uart_register_input(&avail_queue, &cmds_queue, completion);
#endif
#ifdef CONFIG_TELNET_CONSOLE
telnet_register_input(&avail_queue, &cmds_queue, completion);
#endif
}
/** @brief Optionally register an app default cmd handler.
*
* @param handler To be called if no cmd found in cmds registered with
* shell_init.
*/
void shell_register_app_cmd_handler(shell_cmd_function_t handler)
{
app_cmd_handler = handler;
}
void shell_register_prompt_handler(shell_prompt_function_t handler)
{
app_prompt_handler = handler;
}
void shell_register_default_module(const char *name)
{
int result = set_default_module(name);
if (result != -1) {
printk("\n%s", default_module_prompt);
}
}
This is a placeholder page for Travis Dubasik, which means this person is not currently on this site. We do suggest using the tools below to find Travis Dubasik.
You are visiting the placeholder page for Travis Dubasik. This page is here because someone used our placeholder utility to look for Travis Dubasik. We created this page automatically in hopes Travis Dubasik would find it. If you are not Travis Dubasik, but are an alumnus of Newton Falls High School, register on this site for free now.
Lt.-Col. William Tosco H. Peppé1
Child of Lt.-Col. William Tosco H. Peppé
Commander William Lawrence Tosco Peppé+2 b. 1937
Marianne Beevor1
Marianne Beevor is the daughter of Miles Beevor and Mary Beevor.1 She married George Harvey.1
Her married name became Harvey.1
[S6399] Sir Bernard Burke, A Genealogical and Heraldic Dictionary of the Peerage and Baronetage of the British Empire, 11th edition (London, U.K.: Hurst & Blackett, 1847), page 84. Hereinafter cited as The Peerage and Baronetage, 9th ed.
William Roger Tosco Peppé1
M, #602013, b. 24 December 1968
William Roger Tosco Peppé was born on 24 December 1968.1 He is the son of Commander William Lawrence Tosco Peppé and Deirdre Eva Preston Wakefield.2
Cecily Florence Loveday Perceval-Maxwell1
F, #602014, b. 28 June 2006
Cecily Florence Loveday Perceval-Maxwell was born on 28 June 2006.1 She is the daughter of John William Richard Perceval-Maxwell and Loveday Manners Price.1
Rear-Admiral James Burney1
Rear-Admiral James Burney was born in 1750.1 He was the son of Charles Burney and Esther Sleepe.1 He died in 1821.1
He was appointed Fellow, Royal Society (F.R.S.)2
[S4567] Bill Norton, "re: Pitman Family," e-mail message to Darryl Roger LUNDY (101053), 6 April 2010 and 19 April 2011. Hereinafter cited as "re: Pitman Family."
[S18] Matthew H.C.G., editor, Dictionary of National Biography on CD-ROM (Oxford, U.K.: Oxford University Press, 1995). Hereinafter cited as Dictionary of National Biography.
Frances Burney1
Frances Burney was born in 1752.1 She was the daughter of Charles Burney and Esther Sleepe.1 She married General Alexandre-Jean-Baptiste Piochard d'Arblay on 28 July 1783 at St. Michael, Mickelham, Surrey, England.2 She died in 1840.1
From 28 July 1783, her married name became Piochard d'Arblay.2
Luke Archibald Hope1
M, #602017, b. 3 August 2006
Luke Archibald Hope was born on 3 August 2006.1 He is the son of Sir Alexander Archibald Douglas Hope of Craighall, 19th Bt. and Emmeline Barrow.1
Elizabeth Louisa Hoare1
Last Edited=4 Oct 2013
Elizabeth Louisa Hoare was born on 19 October 2009.1 She is the daughter of Sir Charles James Hoare, 9th Bt. and Hon. Eleanor Filumena Flower.1
Elburge van Arkel Vrouwe van Asperen1
Elburge van Arkel Vrouwe van Asperen was born in 1330.1 She was the daughter of Otto II van Arkel Heer van Asperen en Hagestein and Aleida d'Avesnes.1 She married Diederik van Polanen Heer van Breda en De-Leck, son of Jan I van Duivenvoorde Heer van Breda Castricum en Heemskerk and Catharina van Brederode Vrouwe van Polanen, in 1366.1 She died in 1415.1
Child of Elburge van Arkel Vrouwe van Asperen and Diederik van Polanen Heer van Breda en De-Leck
Otto van Polanen Heer van Asperen+1 d. bt Aug 1426 - Jan 1429
Gerald Hugo Cropper Wakefield1
M, #602020, b. 15 September 1938
Gerald Hugo Cropper Wakefield was born on 15 September 1938.1 He is the son of Sir Edward Birkbeck Wakefield, 1st Bt. and Constance Lalage Perronet Thompson.2 He married Victoria Rose Feilden, daughter of Major Cecil Henry Feilden and Olivia Constance Leonora Baring, on 4 December 1971.1
He was educated at Eton College, Windsor, Berkshire, England.1 He gained the rank of Lieutenant in the 12th Lancers.1 He graduated from Trinity College, Cambridge University, Cambridge, Cambridgeshire, England, in 1961 with a Bachelor of Arts (B.A.)1 He was chairman of Guy Carpenter & Company Inc, New York.1 He was chairman of J & H Marsh & McLennan (Hldgs).1 He lived in 2003 at Bramdean House, Alresford, Hampshire, England.1
Child of Gerald Hugo Cropper Wakefield and Victoria Rose Feilden
Edward Cecil Wakefield+2 b. 7 Mar 1973
\section*{Acknowledgements}
The authors would like to thank Prof.~T F\"ul\"op for fruitful discussions.
This work is supported by the Olle Engqvist Foundation and has received funding from the European
Research Council (ERC) under the European Union's Horizon 2020 research and
innovation programme under grant agreement no.~647121. Simulations were performed on resources at Chalmers Centre for Computational Science and Engineering (C3SE) provided by the Swedish National Infrastructure for Computing (SNIC).
Oceano Andrade da Cruz is a retired Portuguese footballer born on the island of São Vicente, Cape Verde. His family emigrated from the Cape Verde islands to Portugal when he was a child.
Playing career
He began his career at Almada before playing for CD Nacional, where he stood out and attracted the attention of Sporting de Lisboa. He stayed with the Lisbon club from 1982 to 1990 and returned for a second spell between 1994 and 1998. Although he was one of the club's emblematic players, he only managed to win one Cup. In the Spanish league, he played for Real Sociedad from the 1990–91 season to the 1993–94 season. His final season as a player was 1998–99 with the French club Toulouse.
He made 54 appearances for the Portugal national football team, scoring 8 goals. His first appearance came in January 1985 against Romania, and his last in April 1998 against England. He only enjoyed regular playing time during the 1990s, and he was one of the captains of the Portuguese squad at Euro 1996 in England. Since retiring, Oceano has remained involved with his country's national team in technical roles.
Cape Verdean footballers
Portugal international footballers of the 1980s
Portugal international footballers of the 1990s
Real Sociedad footballers
Toulouse Football Club footballers
African football managers
Portuguese football managers
CD Nacional footballers
Sporting CP footballers
\section{Introduction and main results}
We are interested in the behavior of solutions to certain semilinear elliptic equations that are perturbations of the critical equation
$$
-\Delta U = 3\, U^5
\qquad\text{in}\ \mathbb{R}^3 \,.
$$
It is well-known that all positive solutions to the latter equation are given by
\begin{equation}
\label{eq:uxl}
U_{x, \lambda}(y) := \frac{\lambda^{1/2}}{(1 + \lambda^2|y-x|^2)^{1/2}}
\end{equation}
with parameters $x \in \mathbb{R}^3$ and $\lambda > 0$. This equation arises as the Euler--Lagrange equation of the optimization problem related to the Sobolev inequality
$$
\int_{\mathbb{R}^3} |\nabla z|^2 \geq S \left( \int_{\mathbb{R}^3} z^6 \right)^{1/3}
$$
with sharp constant \cite{Rod,Ro,Au,Ta}
$$
S = 3 \left( \frac\pi 2 \right)^{4/3}.
$$
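As a quick numerical sanity check (a sketch, not part of the analysis), a finite-difference evaluation of the radial Laplacian confirms that $U_{0,\lambda}$ solves $-\Delta U = 3\,U^5$, and the sharp constant can be evaluated:

```python
import math

def U(r, lam=1.0):
    """Talenti bubble U_{0,lambda} as a function of the radius r = |y - x|."""
    return math.sqrt(lam) / math.sqrt(1.0 + lam**2 * r**2)

def radial_laplacian(f, r, h=1e-4):
    """Laplacian of a radial function on R^3: f''(r) + (2/r) f'(r)."""
    fpp = (f(r + h) - 2.0 * f(r) + f(r - h)) / h**2
    fp = (f(r + h) - f(r - h)) / (2.0 * h)
    return fpp + 2.0 / r * fp

# -Delta U = 3 U^5 holds at every radius (up to discretization error)
for r in (0.3, 1.0, 2.5):
    print(abs(-radial_laplacian(U, r) - 3.0 * U(r) ** 5) < 1e-6)  # True

# sharp Sobolev constant S = 3 (pi/2)^(4/3)
print(round(3.0 * (math.pi / 2.0) ** (4.0 / 3.0), 3))  # 5.478
```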
The perturbed equations that we are interested in are posed in a bounded open set $\Omega\subset\mathbb{R}^3$ and involve two functions $a$ and $V$ on $\Omega$. The operator $-\Delta +a$ with Dirichlet boundary conditions is assumed to be coercive. (Later, we will be more precise concerning regularity assumptions on $\Omega$, $a$ and $V$.) The case where $a$ and $V$ are constants is also of interest.
We consider solutions $u=u_\epsilon$, parametrized by $\epsilon>0$, to the equation
\begin{equation}
\label{equation u}
\begin{cases}
- \Delta u + (a + \epsilon V) u = 3\, u^5 & \text{ in } \Omega, \\
u > 0 & \text{ in } \Omega, \\
u = 0 & \text{ on } \partial \Omega.
\end{cases}
\end{equation}
We also assume that these solutions are a minimizing sequence for the Sobolev inequality, that is,
\begin{equation}
\label{eq:sobmin}
\lim_{\epsilon \to 0} \frac{\int_\Omega |\nabla u_\epsilon|^2}{\left( \int_\Omega u_\epsilon^6 \right)^{1/3}} = S \,.
\end{equation}
The existence of such solutions under certain assumptions on $a$ and $V$ can be obtained by minimization, see, for instance, \cite{FrKoKo1}. Moreover, it is not hard to prove, based on the characterization of optimizers in Sobolev's inequality, that such solutions converge weakly to zero in $H^1_0(\Omega)$ and that $u_\varepsilon^6$ converges weakly in the sense of measures to a multiple of a delta function; see Proposition \ref{lemma PU + w}. In this sense, the functions $u_\varepsilon$ blow up.
The problem of interest is to describe this blow-up behavior more precisely. This question was advertised in the influential paper by Br\'ezis and Peletier \cite{BrPe}, who discussed the above problem in detail in the case where $\Omega$ is a ball and $a$ and $V$ are constants. For earlier results on the related problem where the perturbation $\epsilon V u$ is replaced by $3(u^5 - u^{5-\epsilon})$, see \cite{AtPe,Bu}. Concerning the case of a general open set $\Omega\subset\mathbb{R}^3$, the Br\'ezis--Peletier paper contains three conjectures, the second of which concerns the blow-up behavior of solutions to the analogue of equation \eqref{equation u} in dimensions $N\geq 4$ with $a\equiv 0$. This conjecture was proved independently in celebrated works of Han \cite{Ha} and Rey \cite{Re1,Re2}. These works also prove the first Br\'ezis--Peletier conjecture, which is the analogue of their second conjecture for the $3(u^5 - u^{5-\epsilon})$-problem.
The third Br\'ezis--Peletier conjecture is, as far as we know, still open. It concerns the blow-up behavior in the $3(u^5 - u^{5-\epsilon})$-problem for nonzero $a$ in the three-dimensional case. Our goal in this paper is to prove the analogue of the third Br\'ezis--Peletier conjecture for problem \eqref{equation u}. We expect that the techniques that we develop in this paper will also be useful to prove the third Br\'ezis--Peletier conjecture in its original form and we plan to return to this problem in the near future.
A characteristic feature of the three dimensional case is the notion of criticality for the function $a$. To motivate this concept, let
$$
S(a) := \inf_{0\not\equiv z\in H^1_0(\Omega)} \frac{\int_\Omega (|\nabla z|^2+ a z^2)}{(\int_\Omega z^6)^{1/3}} \,.
$$
One of the findings of Br\'ezis and Nirenberg \cite{BrNi} is that if $a$ is small (for instance, in $L^\infty(\Omega)$), but possibly nonzero, then $S(a)=S$. This is in stark contrast to the case of dimensions $N\geq 4$ where the corresponding analogue of $S(a)$ (with the exponent $6$ replaced by $2N/(N-2)$) is always strictly below the corresponding Sobolev constant, whenever $a$ is negative somewhere.
This phenomenon leads naturally to the following definition due to Hebey and Vaugon \cite{HeVa}. A continuous function $a$ on $\overline\Omega$ is said to be \emph{critical} in $\Omega$ if $S(a)=S$ and if for any continuous function $\tilde a$ on $\overline\Omega$ with $\tilde a\leq a$ and $\tilde a\not\equiv a$ one has $S(\tilde a)<S(a)$. Throughout this paper we assume that $a$ is critical in $\Omega$.
A key role in our analysis is played by the regular part of the Green's function and its zero set. To introduce these, we follow the sign and normalization convention of \cite{Re2}. Since the operator $-\Delta+a$ in $\Omega$ with Dirichlet boundary conditions is assumed to be coercive, it has a Green's function $G_a$ satisfying, for each fixed $y\in\Omega$,
\begin{equation} \label{Ga-pde}
\left\{
\begin{array}{l@{\quad}l}
-\Delta_x\, G_a(x,y) + a(x)\, G_a(x,y) = 4\pi \, \delta_y & \quad \text{in} \ \ \Omega\,, \\
G_a(x,y) = 0 & \quad \text{on} \ \ \partial\Omega \,.
\end{array}
\right.
\end{equation}
The regular part $H_a$ of $G_a$ is defined by
\begin{equation} \label{ha-def}
H_a(x,y) := \frac{1}{|x-y|} - G_a(x,y)\, .
\end{equation}
It is well-known that for each $x\in\Omega$ the function $H_a(x,\cdot)$, which is originally defined in $\Omega\setminus\{x\}$, extends to a continuous function in $\Omega$ and we abbreviate
$$
\phi_a(x) := H_a(x,x) \,.
$$
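To illustrate these definitions (this example is not used in the proofs): when $a\equiv 0$ and $\Omega$ is the unit ball, the method of images gives, with the normalization \eqref{Ga-pde},
$$
H_0(x,y) = \left( 1 - 2\, x\cdot y + |x|^2 |y|^2 \right)^{-1/2} ,
\qquad
\phi_0(x) = H_0(x,x) = \frac{1}{1-|x|^2} \,.
$$
In particular, $\phi_0 > 0$ everywhere in the ball; this is consistent with the Br\'ezis--Nirenberg result quoted above, according to which $a\equiv 0$ is not critical.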
It was conjectured by Br\'ezis \cite{Br} and shown by Druet \cite{Dr} that, as a consequence of criticality,
$$
\inf_{x\in\Omega} \phi_a(x) = 0 \,;
$$
see also \cite{Es} and \cite[Proposition 5.1]{FrKoKo1} for alternative proofs. Finally, we set
\[ \mathcal N_a := \left\{ x \in \Omega \, : \, \phi_a(x) = 0 \right\}. \]
Note that each point in $\mathcal N_a$ is a critical point of $\phi_a$.
Let us summarize the setting in this paper.
\begin{assumption}
\begin{enumerate}
\item[(a)] $\Omega\subset\mathbb{R}^3$ is a bounded, open set with $C^2$ boundary;
\item[(b)] $a \in C^{0,1}(\overline{\Omega})\cap C^{2,\sigma}_{\rm loc}(\Omega)$ for some $\sigma>0$;
\item[(c)] $a$ is critical in $\Omega$;
\item[(d)] $a<0$ on $\mathcal N_a$;
\item[(e)] any point in $\mathcal N_a$ is a nondegenerate critical point of $\phi_a$, that is, for any $x_0\in\mathcal N_a$ the Hessian $D^2 \phi_a(x_0)$ does not have a zero eigenvalue;
\item[(f)] $V\in C^{0,1}(\overline\Omega)$.
\end{enumerate}
\end{assumption}
Let us briefly comment on these items. Assumptions (a), (b) and (f) are modest regularity assumptions, which can probably be further relaxed with more effort. Assumption (d) is not severe, as we know from \cite[Corollary 2.2]{FrKoKo1} that any critical $a$ satisfies $a \leq 0$ on $\mathcal N_a$. In particular, it is fulfilled if $a$ is a negative constant, which is the original Br\'ezis--Peletier setting \cite{BrPe}. Concerning assumption (e), we first note that $\phi_a \in C^2(\Omega)$ by Lemma \ref{lemma C2}. We believe that assumption (e) is `generically' true. (For results in this spirit, but in the noncritical case $a\equiv 0$, see \cite{MiPi}.) Assumption (e) holds, in particular, if $\Omega$ is a ball and $a$ is a constant, as can be verified by explicit computation.
To leading order, the blow-up behavior of solutions of \eqref{equation u} will be given by the projection of a solution \eqref{eq:uxl} of the unperturbed whole space equation to $H^1_0(\Omega)$. For parameters $x\in\mathbb{R}^3$, $\lambda>0$ we introduce $PU_{x, \lambda} \in H^1_0(\Omega)$ as the unique function satisfying
\begin{equation} \label{eq-pu}
\Delta PU_{x,\lambda} = \Delta U_{x,\lambda}\ \ \ \text{ in } \Omega, \qquad PU_{x,\lambda} = 0 \ \ \ \text{ on } \partial \Omega \,.
\end{equation}
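For orientation, we recall (as a sketch, consistent with the normalization $-\Delta U_{x,\lambda} = 3\, U_{x,\lambda}^5$ used below and with the factor $3$ in \eqref{equation u}) that the whole-space solutions \eqref{eq:uxl} are the Aubin--Talenti bubbles
$$
U_{x,\lambda}(y) = \frac{\lambda^{1/2}}{\left( 1 + \lambda^2 |y-x|^2 \right)^{1/2}} \,, \qquad y\in\mathbb{R}^3 \,,
$$
so that $U_{x,\lambda}(x) = \lambda^{1/2}$ and $U_{x,\lambda}(y) \approx \lambda^{-1/2} |y-x|^{-1}$ for $\lambda |y-x| \gg 1$. In particular, $\lambda^{-1}$ plays the role of a concentration length.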
Moreover, let
$$
T_{x, \lambda} := \text{ span}\, \big\{ PU_{x, \lambda},\ \partial_\lambda PU_{x, \lambda},\ \partial_{x_1} PU_{x, \lambda},\ \partial_{x_2} PU_{x, \lambda},\ \partial_{x_3} PU_{x, \lambda} \big\}
$$
and let $T_{x, \lambda}^\perp$ be the orthogonal complement of $T_{x,\lambda}$ in $H^1_0(\Omega)$ with respect to the inner product $\int_\Omega \nabla u \cdot \nabla v$. By $\Pi_{x,\lambda}$ and $\Pi_{x,\lambda}^\bot$ we denote the orthogonal projections in $H^1_0(\Omega)$ onto $T_{x,\lambda}$ and $T_{x,\lambda}^\bot$, respectively.
Finally, we abbreviate
\begin{align} \label{eq-Q}
Q_V(x) & := \int_\Omega V(y) \, G_a(x,y)^2 \,, \qquad x\in\Omega \,.
\end{align}
Here are our two main results. The first one provides an asymptotic expansion of $u_\varepsilon$ with a remainder in $H^1_0(\Omega)$.
\begin{theorem}[Asymptotic expansion of $u_\varepsilon$]
\label{thm expansion}
Let $(u_\epsilon)$ be a family of solutions to \eqref{equation u} satisfying \eqref{eq:sobmin}.
Then there are sequences $(x_\epsilon)\subset\Omega$, $(\lambda_\epsilon)\subset(0,\infty)$, $(\alpha_\epsilon)\subset\mathbb{R}$ and $(r_\varepsilon) \subset T_{x_\varepsilon, \lambda_\varepsilon}^\bot$ such that
\begin{equation} \label{u-eps-final}
u_\epsilon = \alpha_\epsilon \left( PU_{x_\epsilon, \lambda_\epsilon} - \lambda_\epsilon^{-1/2}\, \Pi_{x_\epsilon,\lambda_\epsilon}^\bot (H_a(x_\epsilon, \cdot)- H_0(x_\epsilon, \cdot)) + r_\epsilon \right)
\end{equation}
and a point $x_0 \in \mathcal N_a$ with $Q_V(x_0) \leq 0$ such that, along a subsequence,
\begin{align}
|x_\epsilon - x_0| &= o(\varepsilon^{1/2}) \,, \label{x-x} \\
\phi_a(x_\epsilon) & = o(\epsilon) \,, \label{phi-asymp} \\
\lim_{\epsilon \to 0}\, \epsilon \, \lambda_\epsilon & = 4\pi^2\, \frac{|a(x_0)|}{|Q_V(x_0)|} \,, \label{lim eps lambda}\\[3pt]
\alpha_\epsilon & = 1 + \frac{4}{3\pi^3}\, \frac{\phi_0(x_0) \, |Q_V(x_0)|}{|a(x_0)|}\, \epsilon + o(\varepsilon) \,, \label{alpha-asymp}\\
\|\nabla r_\epsilon\| &= \mathcal O(\epsilon^{3/2})\,.
\end{align}
If $Q_V(x_0) = 0$, the right side of \eqref{lim eps lambda} is to be interpreted as $\infty$.
\end{theorem}
Two noteworthy features of this theorem are that we identify the localization length $\lambda_\varepsilon^{-1}$ of the projected bubble $PU_{x_\epsilon,\lambda_\epsilon}$ as an explicit constant times $\varepsilon$ (provided $Q_V(x_0)<0$) and that we extract the subleading correction $\lambda_\epsilon^{-1/2}\, \Pi_{x_\epsilon,\lambda_\epsilon}^\bot (H_a(x_\epsilon, \cdot)- H_0(x_\epsilon, \cdot))$. Note that the strict inequality $Q_V(x_0)<0$ holds, for instance, if $0\not\equiv V\leq 0$ and, in particular, if $V$ is a negative constant, as in the original Br\'ezis--Peletier setting.
A very similar asymptotic expansion is proved in \cite{FrKoKo1} for energy-minimizing solutions of \eqref{equation u}; see also \cite{FrKoKo2} for the simpler higher-dimensional case. There, we did not assume the nondegeneracy of $D^2\phi_a(x_0)$, but we did assume that $Q_V<0$ in $\mathcal N_a$. Moreover, in the energy minimizing setting we showed that $x_0$ satisfies
$$
Q_V(x_0)^2/|a(x_0)| = \sup_{x\in\mathcal N_a, Q_V(x)<0}Q_V(x)^2/|a(x)| \,,
$$
but this cannot be expected in the more general setting of the present paper. While the underlying proof strategies of \cite{FrKoKo1} and of the present paper are similar, there are also some fundamental differences. In fact, in \cite{FrKoKo1} we did not use equation \eqref{equation u} at all and our results there are valid as well for certain `almost minimizers'. We will comment on the differences in more detail at the end of this introduction.
Our second main result concerns the pointwise blow-up behavior, both at the concentration point $x_0$ and away from it.
\begin{theorem}[\`a la Br\'ezis--Peletier]
\label{thm BP}
Let $(u_\epsilon)$ be a family of solutions to \eqref{equation u} satisfying \eqref{eq:sobmin}.
\begin{enumerate}
\item[(a)] The asymptotics close to the concentration point $x_0$ are given by
\[ \lim_{\epsilon \to 0}\, \epsilon \, \|u_\varepsilon\|^2_\infty = \lim_{\epsilon \to 0}\, \epsilon \, |u_\varepsilon(x_\varepsilon)|^2 = 4\pi^2\, \frac{|a(x_0)|}{|Q_V(x_0)|}. \]
If $Q_V(x_0) = 0$, the right side of \eqref{lim eps lambda} is to be interpreted as $\infty$.
\item[(b)] The asymptotics away from the concentration point $x_0$ are given by
\[ u_\varepsilon (x) = \lambda_\varepsilon^{-1/2} G_a(x, x_0) + o(\lambda_\varepsilon^{-1/2}) \]
for every fixed $x \in \Omega \setminus \{x_0\}$. The convergence is uniform for $x$ away from $x_0$.
\end{enumerate}
\end{theorem}
This theorem proves the analogue of \cite[Conjecture 3]{BrPe} for problem \eqref{equation u}. In the special case where $\Omega$ is a ball and $a$ and $V$ are constants, our theorem reduces to \cite[Theorem 3]{BrPe}.
In the past three decades a vast literature has developed on blow-up of solutions to semilinear equations with critical exponent, which is impossible to summarize here. In some sense, the situation in the present paper is the simplest blow-up situation, concerning single-bubble blow-up of positive solutions in the interior. Much more refined blow-up situations have been studied, including, for instance, multi-bubbling, sign-changing solutions and concentration on the boundary under Neumann boundary conditions. For an introduction and references we refer to the books \cite{DrHeRo,He}. In this paper we are interested in the description of the behavior of a given family of solutions. For the converse problem of constructing blow-up solutions in our setting, see \cite{dPDoMu}, and for a survey of related results, see \cite{Pi} and the references therein.
What makes the critical case in three dimensions significantly harder than the higher-dimen\-sional analogues solved by Han \cite{Ha} and Rey \cite{Re1,Re2} is a certain cancellation, which is related to the fact that $\inf \phi_a =0$. Thus, the term that in higher dimensions completely determines the blow-up vanishes in our case. Our way around this impasse is to iteratively improve our knowledge about the functions $u_\epsilon$. The mechanism behind this iteration is a certain coercivity inequality, due to Esposito \cite{Es}, which we state in Lemma \ref{lemma coercivity}, and a crucial feature of our proof is to apply this inequality repeatedly, at different orders of precision. To arrive at the level of precision stated in Theorem \ref{thm expansion} two iterations are necessary (plus a zeroth one, hidden in the proof of Proposition \ref{lemma PU + w}).
The first iteration, contained in Section \ref{sec:firstexpansion}, is relatively standard and follows Rey's ideas in \cite{Re2} with some adaptations due to Esposito \cite{Es} to the critical case in three dimensions. The main outcomes of this first iteration are the fact that concentration occurs in the interior and an order-sharp bound in $H^1_0$ on the remainder $\alpha_\varepsilon^{-1} u_\varepsilon - PU_{x_\epsilon, \lambda_\epsilon}$.
The second iteration, contained in Section \ref{section refining}, is more specific to the problem at hand. Its main outcome is the extraction of the subleading correction $\lambda_\epsilon^{-1/2}\, \Pi_{x_\epsilon,\lambda_\epsilon}^\bot (H_a(x_\epsilon, \cdot)- H_0(x_\epsilon, \cdot))$. Using the nondegeneracy of $D^2\phi_a(x_0)$, we will be able to show in the proof of Theorem \ref{thm expansion} that $\lambda_\varepsilon$ is proportional to $\epsilon^{-1}$ (if $Q_V(x_0)<0$).
The arguments described so far are, for the most part, carried out in the $H^1_0$ norm. Once the two iterations are completed, we apply in Subsection \ref{subsection infty bound w} a Moser iteration argument in order to show that the remainder $\alpha_\varepsilon^{-1} u_\varepsilon - PU_{x_\epsilon, \lambda_\epsilon}$ is negligible also in the $L^\infty$ norm. This then allows us to deduce Theorem \ref{thm BP}.
As we mentioned before, Theorem \ref{thm expansion} is the generalization of the corresponding theorem in \cite{FrKoKo1} for energy-minimizing solutions. In that previous paper, we also used a similar iteration technique. Within each iteration step, however, minimality played an important role in \cite{FrKoKo1} and we used the iterative knowledge to further expand the energy functional evaluated at a minimizer. There is no analogue of this procedure in the current paper. Instead, as in most other works in this area, starting with \cite{BrPe}, Pohozaev identities now play an important role. More precisely, there are five such identities corresponding, in some sense, to the five functions in the kernel of the Hessian at an optimizer of the Sobolev inequality (namely, invariance under multiplication by constants, by dilations and by translations). All five identities will be used to control the five parameters $\alpha_\varepsilon$, $\lambda_\varepsilon$ and $x_\varepsilon$ in \eqref{u-eps-final}, which precisely correspond to the five asymptotic invariances. In fact, four of these identities will be used in the first iteration and all five in the second one.
The reason that we only use four instead of all five identities in the first iteration is related to the cancellation mentioned before. Indeed, in view of what Rey \cite{Re2} does in dimensions $N \geq 4$, the most natural approach would be to expand the Pohozaev identity coming from integration against $y \cdot \nabla u(y)$ to sufficient precision. When $a \equiv 0$, this identity extracts the main term $\phi_0(x_0) > 0$ with the help of which one can easily complete the proof. Similarly, for general $a$ one obtains the main term $\phi_a(x_0)$, but this term vanishes when $a$ is critical. Expanding to higher precision is not possible at this stage, because of our limited knowledge of the size of the various terms. Instead, in the first iteration we use the four remaining identities together with the coercivity inequality to improve our knowledge, but this is still not enough to retrieve enough information from the $y \cdot \nabla u(y)$ Pohozaev identity. In the second iteration we use all five identities, namely those coming from integration against $u$ (for the expansion of $\alpha_\varepsilon$), against $y\cdot\nabla u(y)$ (for the expansion of $\phi_a$) and against $\nabla u$ (for the expansion of $\nabla \phi_a$). These identities can be brought together in such a way that they give the final result of Theorem \ref{thm expansion}. As we mentioned before, we believe that this strategy is applicable as well to the problem where the term $\epsilon Vu$ in \eqref{equation u} is replaced by $3(u^5-u^{5-\varepsilon})$.
\section{A first expansion}\label{sec:firstexpansion}
In this and the following section we will prepare for the proofs of Theorems \ref{thm expansion} and \ref{thm BP}.
The main result of this section is the following preliminary asymptotic expansion of the family $(u_\varepsilon)$.
\begin{proposition}
\label{prop first expansion}
Let $(u_\epsilon)$ be a family of solutions to \eqref{equation u} satisfying \eqref{eq:sobmin}.
Then, up to extraction of a subsequence, there are sequences $(x_\epsilon)\subset\Omega$, $(\lambda_\epsilon)\subset(0,\infty)$, $(\alpha_\epsilon)\subset\mathbb{R}$ and $(w_\varepsilon) \subset T_{x_\varepsilon, \lambda_\varepsilon}^\bot$ such that
\begin{equation}
\label{expansion PU + w}
u_\epsilon = \alpha_\epsilon(PU_{x_\varepsilon, \lambda_\varepsilon} + w_\varepsilon),
\end{equation}
and a point $x_0 \in \Omega$ such that
\begin{equation}
\label{parameters PU + w}
|x_\varepsilon - x_0| = o(1), \quad \alpha_\varepsilon = 1 + o(1), \quad \lambda_\varepsilon \to \infty, \quad \|\nabla w_\varepsilon\| = \mathcal O(\lambda_\varepsilon^{-1/2}).
\end{equation}
\end{proposition}
This proposition follows to a large extent by an adaptation of existing results in the literature. We include the proof since we have not found the precise statement in the literature and since related arguments will appear in the following section in a more complicated setting.
An initial qualitative expansion follows from works of Struwe \cite{St} and Bahri-Coron \cite{BaCo}. In order to obtain the statement of Proposition \ref{prop first expansion}, we then need to show two things, namely, the bound on $\|\nabla w\|$ and the fact that $x_0\in\Omega$. The proof of the bound on $\|\nabla w\|$ that we give is rather close to that of Esposito \cite{Es}. The setting in \cite{Es} is slightly different (there, $V$ is equal to a negative constant and, more importantly, the solutions are assumed to be energy minimizing), but this part of the proof extends to our setting. On the other hand, the proof in \cite{Es} of the fact that $x_0\in\Omega$ relies on the energy minimizing property and does not work for us. Instead, we adapt some ideas from Rey in \cite{Re2}. The proof in \cite{Re2} is only carried out in dimensions $\geq 4$ and without the background $a$, but, as we will see, it extends with some effort to our situation.
We subdivide the proof of Proposition \ref{prop first expansion} into a sequence of subsections. The main result of each subsection is stated as a proposition at the beginning and summarizes the content of the corresponding subsection.
\subsection{A qualitative initial expansion}
As a first important step, we derive the following expansion, which is already of the form of that in Proposition \ref{prop first expansion}, except that all remainder bounds are nonquantitative and the limit point $x_0$ may a priori be on the boundary $\partial \Omega$.
\begin{proposition}
\label{lemma PU + w}
Let $(u_\epsilon)$ be a family of solutions to \eqref{equation u} satisfying \eqref{eq:sobmin}.
Then, up to extraction of a subsequence, there are sequences $(x_\epsilon)\subset\Omega$, $(\lambda_\epsilon)\subset(0,\infty)$, $(\alpha_\epsilon)\subset\mathbb{R}$ and $(w_\varepsilon) \subset T_{x_\varepsilon, \lambda_\varepsilon}^\bot$ such that \eqref{expansion PU + w} holds and a point $x_0 \in \overline{\Omega}$ such that
\begin{equation}
\label{parameters PU + w prelim}
|x_\varepsilon - x_0| = o(1), \quad \alpha_\varepsilon = 1 + o(1), \quad d_\varepsilon \lambda_\varepsilon \to \infty, \quad \|\nabla w_\varepsilon\| = o(1),
\end{equation}
where we denote $d_\varepsilon := d(x_\varepsilon, \partial \Omega)$.
\end{proposition}
\begin{proof}
We shall only prove that $u_\varepsilon\rightharpoonup 0$ in $H^1_0(\Omega)$. Once this is shown, we can use standard arguments, due to Lions \cite{Lio}, Struwe \cite{St} and Bahri--Coron \cite{BaCo}, to complete the proof of the proposition; see, for instance,~\cite[Proof of Proposition 2]{Re2}.
\emph{Step 1.} We begin by showing that $(u_\varepsilon)$ is bounded in $H^1_0(\Omega)$ and that $\|u_\varepsilon\|_6 \gtrsim 1$. Integrating the equation for $u_\varepsilon$ against $u_\varepsilon$, we obtain
\begin{equation}
\label{eq:energyident}
\int_\Omega \left( |\nabla u_\varepsilon|^2 + (a+\varepsilon V)u_\varepsilon^2\right) = 3 \int_\Omega u_\varepsilon^6
\end{equation}
and therefore
$$
3 \left( \int_\Omega u_\varepsilon^6 \right)^{2/3} = \frac{\int_\Omega |\nabla u_\varepsilon|^2}{\left( \int_\Omega u_\varepsilon^6 \right)^{1/3}} + \frac{\int_\Omega (a+\epsilon V) u_\varepsilon^2}{\left( \int_\Omega u_\varepsilon^6 \right)^{1/3}} \,.
$$
On the right side, the first quotient converges by \eqref{eq:sobmin} and the second quotient is bounded by H\"older's inequality. Thus, $(u_\varepsilon)$ is bounded in $L^6(\Omega)$. By \eqref{eq:sobmin} we obtain boundedness in $H^1_0(\Omega)$. By coercivity of $-\Delta +a$ in $H^1_0(\Omega)$ and Sobolev's inequality, for all sufficiently small $\varepsilon>0$, the left side in \eqref{eq:energyident} is bounded from below by a constant times $\|u_\varepsilon\|_6^2$. This yields the claimed lower bound $\|u_\varepsilon\|_6 \gtrsim 1$.
\emph{Step 2.} According to Step 1, $(u_\varepsilon)$ has a weak limit point in $H^1_0(\Omega)$ and we denote by $u_0$ one of those. Our goal is to show that $u_0\equiv 0$. Throughout this step, we restrict ourselves to a subsequence of $\varepsilon$'s along which $u_\varepsilon\rightharpoonup u_0$ in $H^1_0(\Omega)$. By Rellich's lemma, after passing to a subsequence, we may also assume that $u_\varepsilon\to u_0$ almost everywhere. Moreover, passing to a further subsequence, we may also assume that $\|\nabla u_\varepsilon\|$ has a limit. Then, by \eqref{eq:sobmin}, $\|u_\varepsilon\|_6$ has a limit as well and, by Step 1, none of these limits is zero.
We now argue as in the proof of \cite[Proposition 3.1]{FrKoKo1} and note that, by weak convergence,
$$
\mathcal T = \lim_{\varepsilon\to 0} \int_\Omega |\nabla (u_\varepsilon-u_0)|^2
\quad \text{exists and satisfies}\quad \lim_{\varepsilon\to 0} \int_\Omega |\nabla u_\varepsilon|^2 = \int_\Omega |\nabla u_0|^2 + \mathcal T
$$
and, by the Br\'ezis--Lieb lemma \cite{BrLi},
$$
\mathcal M = \lim_{\varepsilon\to 0} \int_\Omega (u_\varepsilon-u_0)^6
\quad \text{exists and satisfies}\quad \lim_{\varepsilon\to 0} \int_\Omega u_\varepsilon^6 = \int_\Omega u_0^6 + \mathcal M \,.
$$
Thus, \eqref{eq:sobmin} gives
$$
S\left( \int_\Omega u_0^6 + \mathcal M \right)^{1/3} = \int_\Omega |\nabla u_0|^2 + \mathcal T \,.
$$
We bound the left side from above with the help of the elementary inequality
\begin{equation*}
\left( \int_\Omega u_0^6 + \mathcal M \right)^{1/3} \leq \left( \int_\Omega u_0^6\right)^{1/3} + \mathcal M^{1/3}
\end{equation*}
and, by the Sobolev inequality for $u_\varepsilon-u_0$, we bound the right side from below using
\begin{equation*}
\mathcal T \geq S \mathcal M^{1/3} \,.
\end{equation*}
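The elementary inequality above is simply the subadditivity of $t \mapsto t^{1/3}$ on $[0,\infty)$: abbreviating $A := \int_\Omega u_0^6$, for $A, \mathcal M \geq 0$ one has
$$
\left( A^{1/3} + \mathcal M^{1/3} \right)^3 = A + \mathcal M + 3\, A^{1/3} \mathcal M^{1/3} \left( A^{1/3} + \mathcal M^{1/3} \right) \geq A + \mathcal M \,.
$$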
Thus,
$$
S \left( \int_\Omega u_0^6 \right)^{1/3} \geq \int_\Omega |\nabla u_0|^2 \,.
$$
Consequently, either $u_0\equiv 0$ or $u_0$ is an optimizer for the Sobolev inequality. Since $u_0$ is supported in $\Omega\subsetneq\mathbb{R}^3$, whereas every Sobolev optimizer is strictly positive on all of $\mathbb{R}^3$, the latter is impossible and we conclude that $u_0\equiv 0$, as claimed.
\end{proof}
\textbf{Convention.} Throughout the rest of the paper, we assume that the sequence $(u_\epsilon)$ satisfies the assumptions and conclusions from Proposition \ref{lemma PU + w}. We will make no explicit mention of subsequences. Moreover, we typically drop the index $\varepsilon$ from $u_\varepsilon$, $\alpha_\varepsilon$, $x_\varepsilon$, $\lambda_\varepsilon$, $d_\varepsilon$ and $w_\varepsilon$.
\subsection{Coercivity}
The following coercivity inequality from \cite[Lemma 2.2]{Es} is a crucial tool for us in subsequently refining the expansion of $u_\varepsilon$. It states, roughly speaking, that the subleading error terms coming from the expansion of $u_\varepsilon$ can be absorbed into the leading term, at least under some orthogonality condition.
\begin{lemma}\label{lemma coercivity}
There are constants $T_*<\infty$ and $\rho>0$ such that for all $x\in\Omega$, all $\lambda>0$ with $d\lambda\geq T_*$ and all $v\in T_{x,\lambda}^\bot$,
\begin{equation} \label{coercivity}
\int_\Omega \left( |\nabla v|^2 + av^2 - 15\, U_{x,\lambda}^4 v^2\right) \geq \rho \int_\Omega |\nabla v|^2 \,.
\end{equation}
\end{lemma}
The proof proceeds by compactness, using the inequality \cite[(D.1)]{Re2}
\begin{equation*}
\int_\Omega \left( |\nabla v|^2 - 15\, U_{x,\lambda}^4 v^2 \right) \geq \frac47 \int_\Omega |\nabla v|^2
\qquad\text{for all}\ v\in T_{x,\lambda}^\bot \,.
\end{equation*}
For details of the proof, we refer to \cite{Es}.
In the following subsection, we use Lemma \ref{lemma coercivity} to deduce a refined bound on $\|\nabla w\|$. We will use it again in Subsection \ref{subsection nabla r} below to obtain improved bounds on the refined error term $\|\nabla r\|$, with $r \in T_{x, \lambda}^\bot$ defined in \eqref{definition r}.
\subsection{The bound on $\|\nabla w\|$}
\label{subsection bound w}
The goal of this subsection is to prove
\begin{proposition}\label{boundw}
As $\varepsilon\to 0$,
\begin{equation}
\label{bound w subsec}
\|\nabla w\| = \mathcal O(\lambda^{-1/2}) + \mathcal O((\lambda d)^{-1}).
\end{equation}
\end{proposition}
Using this bound, we will prove in Subsection \ref{subsec bdry conc} that $d^{-1} = \mathcal O(1)$ and therefore the bound in Proposition \ref{boundw} becomes $\| \nabla w\| = \mathcal O(\lambda^{-1/2})$, as claimed in Proposition \ref{prop first expansion}.
\begin{proof}
The starting point is the equation satisfied by $w$. Since $-\Delta PU_{x, \lambda} = -\Delta U_{x, \lambda} = 3 U_{x, \lambda}^5$, from \eqref{expansion PU + w} and \eqref{equation u} we obtain
\begin{equation}
\label{equation w} (-\Delta +a) w = - 3 U_{x, \lambda}^5 + 3 \alpha^4 (PU_{x, \lambda} + w)^5 - (a + \varepsilon V) PU_{x, \lambda} - \varepsilon V w.
\end{equation}
Integrating this equation against $w$ and using $\int_\Omega U_{x, \lambda}^5 w = (1/3) \int_\Omega \nabla PU_{x, \lambda}\cdot \nabla w = 0$, we get
\begin{equation}
\label{esposito eq1}
\int_\Omega (|\nabla w|^2 + a w^2) = 3 \alpha^4 \int_\Omega (PU_{x, \lambda} +w)^5 w - \int_\Omega (a + \varepsilon V) PU_{x, \lambda} w - \int_\Omega \varepsilon V w^2 .
\end{equation}
We estimate the three terms on the right hand side separately.
The second and third ones are easy: we have by Lemma \ref{lemma Lq norm of U}
\[ \left|\int_\Omega (a + \varepsilon V) PU_{x, \lambda} w \right| \lesssim \| w\|_6 \|U_{x, \lambda}\|_{6/5} \lesssim \lambda^{-1/2} \|\nabla w\| \,. \]
Moreover,
\[ \left|\int_\Omega \varepsilon V w^2 \right| \lesssim \varepsilon \| w\|_6^2 = o(\|\nabla w\|^2) \,. \]
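As a plausibility check for the bound $\|U_{x,\lambda}\|_{6/5} \lesssim \lambda^{-1/2}$ used above (a sketch, assuming the standard bubble profile $U_{x,\lambda}(y) = \lambda^{1/2}(1+\lambda^2|y-x|^2)^{-1/2}$; the precise statement is Lemma \ref{lemma Lq norm of U}): with the substitution $z = \lambda(y-x)$ and using the boundedness of $\Omega$,
$$
\int_\Omega U_{x,\lambda}^{6/5} \lesssim \lambda^{3/5}\, \lambda^{-3} \int_{|z| \lesssim \lambda} \frac{dz}{(1+|z|^2)^{3/5}} \lesssim \lambda^{-12/5} \cdot \lambda^{9/5} = \lambda^{-3/5} \,,
$$
and raising this to the power $5/6$ gives $\|U_{x,\lambda}\|_{6/5} \lesssim \lambda^{-1/2}$.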
The first term on the right side of \eqref{esposito eq1} needs a bit more care. We write $PU_{x, \lambda} = U_{x, \lambda} - \varphi_{x, \lambda}$ as in Lemma \ref{lemma PU} and expand
\begin{align*}
& \int_\Omega (PU_{x, \lambda} +w)^5 w \\
&= \int_\Omega U_{x, \lambda}^5 w + 5 \int_\Omega U_{x, \lambda}^4 w^2 + \mathcal O\left( \int_\Omega \left( U_{x, \lambda}^4\, \varphi_{x, \lambda} |w| + U_{x, \lambda}^3 (|w|^3 + |w| \varphi_{x, \lambda}^2) + \varphi_{x, \lambda}^5 |w| + w^6 \right) \right) \\
& = 5 \int_\Omega U_{x, \lambda}^4 w^2 + \mathcal O \left(
\int_\Omega U_{x, \lambda}^4 \, \varphi_{x, \lambda} |w| + \|\nabla w\| \|\varphi_{x, \lambda}\|_6^2 + \|\nabla w\|^3 \right),
\end{align*}
where we again used $\int_\Omega U_{x, \lambda}^5 w = 0$. By Lemmas \ref{lemma Lq norm of U} and \ref{lemma PU}, we have $\|\varphi_{x, \lambda}\|_6^2 \lesssim (d\lambda)^{-1}$ and
\begin{align*}
\int_\Omega U_{x, \lambda}^4 \varphi_{x, \lambda} |w| &\lesssim \| w\|_6 \|\varphi_{x, \lambda}\|_\infty \|U_{x, \lambda}\|_{24/5}^4 \lesssim \|\nabla w\| (d \lambda)^{-1}.
\end{align*}
Putting all the estimates together, we deduce from \eqref{esposito eq1} that
\begin{align*}
\int_\Omega (|\nabla w|^2 + a w^2 - 15 \alpha^4 U_{x, \lambda}^4 w^2) & = \mathcal O( (d\lambda)^{-1} \|\nabla w\| + \lambda^{-1/2} \|\nabla w\|) + o (\|\nabla w\|^2)\, .
\end{align*}
Due to the coercivity inequality from Lemma \ref{lemma coercivity}, the left side is bounded from below by a positive constant times $\| \nabla w\|^2$. Thus, \eqref{bound w subsec} follows.
\end{proof}
\subsection{Excluding boundary concentration}
\label{subsec bdry conc}
The goal of this subsection is to prove
\begin{proposition}
\label{prop bdry concentration}
$d^{-1} = \mathcal O(1)$.
\end{proposition}
By integrating the equation for $u$ against $\nabla u$, one obtains the Pohozaev-type identity
\begin{equation}
\label{pohozaev type u}
- \int_\Omega (\nabla (a+\varepsilon V)) u^2 = \int_{\partial \Omega} n \left( \frac{\partial u}{\partial n} \right)^2 \,.
\end{equation}
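For the reader's convenience, here is a sketch of how \eqref{pohozaev type u} arises. Multiplying equation \eqref{equation u}, written as $-\Delta u + (a+\varepsilon V)u = 3u^5$, by $\partial_i u$ and integrating over $\Omega$, the critical term drops out, while the boundary condition $u=0$ (so that $\nabla u = (\partial u/\partial n)\, n$ on $\partial\Omega$) gives
\begin{align*}
\int_\Omega (-\Delta u)\, \partial_i u & = -\frac12 \int_{\partial\Omega} n_i \left( \frac{\partial u}{\partial n} \right)^2 , \qquad
\int_\Omega 3 u^5\, \partial_i u = \frac12 \int_\Omega \partial_i \left( u^6 \right) = 0 \,, \\
\int_\Omega (a+\varepsilon V)\, u\, \partial_i u & = -\frac12 \int_\Omega \partial_i (a+\varepsilon V)\, u^2 \,.
\end{align*}
Collecting these identities for $i=1,2,3$ yields \eqref{pohozaev type u}.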
Inserting the decomposition $u = \alpha ( PU + w)$, we get
\begin{align}
\label{bdry conc identity}
\int_{\partial \Omega} n \left( \frac{\partial PU_{x, \lambda}}{\partial n} \right)^2 & = -\int_{\partial \Omega} n \left( 2 \frac{\partial PU_{x, \lambda}}{\partial n} \frac{\partial w}{\partial n} + \left( \frac{\partial w}{\partial n} \right)^2 \right) \notag \\
& \quad - \int_\Omega (\nabla (a+ \varepsilon V)) (PU_{x, \lambda}+w)^2 .
\end{align}
Since $a, V \in C^{0,1}(\overline{\Omega})$, so that $\nabla(a+\varepsilon V)$ is bounded almost everywhere, the volume integral is bounded by
\begin{equation}
\label{eq:poho1vol}
\left| \int_\Omega (\nabla (a+ \varepsilon V)) (PU_{x, \lambda}+w)^2 \right| \lesssim \|PU_{x, \lambda}\|_2^2 + \|w\|_2^2 \lesssim \lambda^{-1} + (\lambda d)^{-2},
\end{equation}
where we used \eqref{bound w subsec} and Lemmas \ref{lemma Lq norm of U} and \ref{lemma PU}.
The function $\partial PU_{x, \lambda}/\partial n$ on the boundary is controlled in Lemma \ref{lemma PU bdry integral}. We now control the function $\partial w/\partial n$ on the boundary.
\begin{lemma}
\label{lemma w bdry integral}
$\int_{\partial \Omega} \left( \frac{\partial w}{\partial n} \right)^2 = \mathcal O(\lambda^{-1} d^{-1}) + o(\lambda^{-1} d^{-2})$.
\end{lemma}
\begin{proof}
The following proof is analogous to \cite[Appendix C]{Re2}. It relies on the inequality
\begin{equation}
\label{trace estimate w}
\left\| \frac{\partial z }{\partial n} \right\|^2_{L^2(\partial \Omega)} \lesssim \left\| \Delta z \right\|^2_{L^{3/2}(\Omega)} \qquad \text{ for all } z \in H^2(\Omega) \cap H^1_0(\Omega) \,.
\end{equation}
This inequality is well-known and contained in \cite[Appendix C]{Re2}. A proof can be found, for instance, in \cite{HaWaYa}.
We write equation \eqref{equation w} for $w$ as $-\Delta w = F$ with
\begin{equation}
\label{eq:eqwrhs}
F:=3 \alpha^4 (PU_{x, \lambda} + w)^5 - 3 U_{x, \lambda}^5 - (a+ \varepsilon V) (PU_{x, \lambda} +w) \,.
\end{equation}
We fix a smooth $0 \leq \chi \leq 1$ with $\chi \equiv 0$ on $\{|y| \leq 1/2\}$ and $\chi \equiv 1$ on $\{ |y| \geq 1 \}$ and define the cut-off function
\[ \zeta(y) := \chi \left(\frac{y-x}{d}\right). \]
Then $\zeta w \in H^2(\Omega) \cap H^1_0(\Omega)$ and
\[ -\Delta (\zeta w) = \zeta F - 2 \nabla \zeta \cdot \nabla w - (\Delta \zeta)w \,. \]
The function $F$ satisfies the simple pointwise bound
\begin{equation}
\label{pointwise bound f}
|F| \lesssim U_{x, \lambda}^5 + |w|^5 + U_{x, \lambda} + |w| \,,
\end{equation}
which, when combined with inequality \eqref{trace estimate w}, yields
\begin{align*}
\left\| \frac{\partial w}{\partial n} \right\|_{L^2(\partial \Omega)}^2 & = \left\| \frac{\partial (\zeta w)}{\partial n} \right\|_{L^2(\partial \Omega)}^2 \lesssim \|\zeta F - 2 \nabla \zeta \cdot \nabla w - (\Delta \zeta)w \|_{3/2}^2 \\
& \lesssim \|\zeta(U_{x, \lambda} ^5 + |w|^5 + U_{x, \lambda} + |w|)\|_{3/2}^2 + \| |\nabla \zeta| |\nabla w|\|_{3/2}^2 + \|(\Delta \zeta)w \|_{3/2}^2 \,.
\end{align*}
It remains to bound the norms on the right side. The most difficult term to estimate is $\|\zeta w^5 \|_{3/2}$, because $5\cdot 3/2 = 15/2 > 6$ exceeds the Sobolev exponent, and we shall come back to it later. The other terms can all be estimated using bounds on $\|U\|_{L^p(\Omega \setminus B_{d/2}(x))}$ from Lemma \ref{lemma Lq norm of U}, as well as the bound $\|w\|_6 \lesssim \lambda^{-1/2} + \lambda^{-1} d^{-1}$ from Proposition \ref{boundw}. Indeed, we have
\begin{align*}
\| \zeta U_{x, \lambda} ^5 \|_{3/2}^2 & \lesssim \|U_{x, \lambda} \|_{L^{15/2}(\Omega \setminus B_{d/2}(x))}^{10} \lesssim \lambda^{-5} d^{-6} = o(\lambda^{-1} d^{-2}), \\
\| \zeta U_{x, \lambda} \|_{3/2}^2 & \lesssim \|U_{x, \lambda} \|_{L^{3/2}(\Omega \setminus B_{d/2}(x))}^2 \lesssim \lambda^{-1} = \mathcal O(\lambda^{-1} d^{-1}), \\
\| \zeta w\|_{3/2}^2 & \lesssim \|w\|_6^2 \lesssim \lambda^{-1} + \lambda^{-2} d^{-2} = \mathcal O(\lambda^{-1} d^{-1}) + o(\lambda^{-1} d^{-2}), \\
\| |\nabla \zeta| |\nabla w| \|_{3/2}^2 & \lesssim \|\nabla w\|_2^2 \|\nabla \zeta\|_6^2 \lesssim (\lambda^{-1} + \lambda^{-2} d^{-2}) d^{-1} = \mathcal O(\lambda^{-1} d^{-1}) + o(\lambda^{-1} d^{-2})
\end{align*}
and
\[ \| (\Delta \zeta) w\|_{3/2}^2 \lesssim \|w\|_6^2 \|\Delta \zeta\|_2^2 \lesssim (\lambda^{-1} + \lambda^{-2} d^{-2}) d^{-1} = \mathcal O(\lambda^{-1} d^{-1}) + o(\lambda^{-1} d^{-2}). \]
In order to estimate the difficult term $\|\zeta w^5 \|_{3/2}$, we multiply the equation $-\Delta w = F$ by $\zeta^{1/2} |w|^{1/2} w$ and integrate over $\Omega$ to obtain
\begin{equation}
\label{eq:moserinput}
\int_\Omega \nabla (\zeta^{1/2} |w|^{1/2} w) \cdot \nabla w \leq \int_\Omega |F| \, \zeta^{1/2} |w|^{3/2} \,.
\end{equation}
We now note that there are universal constants $c>0$ and $C<\infty$ such that pointwise a.e.
\begin{equation}
\label{eq:ptwineq}
\nabla (\zeta^{1/2} |w|^{1/2} w) \cdot\nabla w \geq c |\nabla (\zeta^{1/4} |w|^{1/4} w)|^2 - C |w|^{5/2} |\nabla (\zeta^{1/4})|^2.
\end{equation}
Indeed, by repeated use of the product rule and chain rule for Sobolev functions, one finds
\begin{align*}
\nabla (\zeta^{1/2} |w|^{1/2} w) \cdot\nabla w & = \frac 32 \left( \frac{4}{5} \right)^2 |\nabla (\zeta^{1/4} |w|^{1/4} w)|^2 + \left( \frac 32 \left( \frac{4}{5} \right)^2 - \frac 45 \cdot 2 \right) |w|^{5/2} |\nabla (\zeta^{1/4})|^2 \\
& \quad - \left( \frac 32 \left( \frac{4}{5} \right)^2 \cdot 2 - \frac 45 \cdot 2 \right) |w|^{1/4} w \nabla (\zeta^{1/4}) \cdot \nabla (\zeta^{1/4} |w|^{1/4} w) \,.
\end{align*}
The claimed inequality \eqref{eq:ptwineq} follows by applying Schwarz's inequality $v_1 \cdot v_2 \geq - \varepsilon |v_1|^2 - \frac{1}{4 \varepsilon}|v_2|^2$ to the cross term on the right side with $\varepsilon > 0$ small enough.
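For the reader's convenience, the constants in the identity above can be traced explicitly; the following is a sketch, and the concrete choice $\varepsilon = 1$ (hence $c = 16/25$, $C = 18/25$) is merely one admissible option.

```latex
% Abbreviate v := \zeta^{1/4}|w|^{1/4} w. Since \tfrac32 (4/5)^2 = \tfrac{24}{25},
% the identity above reads
\nabla (\zeta^{1/2} |w|^{1/2} w) \cdot \nabla w
  = \tfrac{24}{25}\, |\nabla v|^2
    - \tfrac{16}{25}\, |w|^{5/2} |\nabla (\zeta^{1/4})|^2
    - \tfrac{8}{25}\, |w|^{1/4} w\, \nabla (\zeta^{1/4}) \cdot \nabla v \,.
% Estimating the cross term by u \cdot \nabla v \geq -\varepsilon |\nabla v|^2
% - \tfrac{1}{4\varepsilon}|u|^2 with u = |w|^{1/4} w \nabla(\zeta^{1/4}),
% |u|^2 = |w|^{5/2}|\nabla(\zeta^{1/4})|^2, and \varepsilon = 1 gives
\nabla (\zeta^{1/2} |w|^{1/2} w) \cdot \nabla w
  \geq \tfrac{16}{25}\, |\nabla v|^2 - \tfrac{18}{25}\, |w|^{5/2} |\nabla (\zeta^{1/4})|^2 \,,
% which is the claimed inequality with c = 16/25 and C = 18/25.
```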
As a consequence of \eqref{eq:ptwineq}, we can bound the left side in \eqref{eq:moserinput} from below by
\begin{align*}
\int_\Omega \nabla (\zeta^{1/2} |w|^{1/2} w) \cdot \nabla w
\geq c \int_\Omega |\nabla (\zeta^{1/4} |w|^{1/4} w)|^2 - C \int_\Omega |w|^{5/2} |\nabla (\zeta^{1/4})|^2 \,.
\end{align*}
Thus, by the Sobolev inequality for the function $\zeta^{1/4} |w|^{1/4} w$ and \eqref{eq:moserinput}, we get
\begin{align} \| \zeta w^5\|_{3/2}^2 &= \left(\int_\Omega |\zeta^{1/4} |w|^{1/4} w|^6 \right)^{4/3} \lesssim \left(\int_\Omega |\nabla (\zeta^{1/4} |w|^{1/4} w)|^2 \right)^4 \nonumber \\
&\lesssim \left( \int_\Omega |w|^{5/2} |\nabla (\zeta^{1/4})|^2 \right) ^4 + \left( \int_\Omega |F| \, \zeta^{1/2} |w|^{3/2} \right)^4. \label{zeta w5 estimate}
\end{align}
For the first term on the right side, we have
\begin{align*}
\left( \int_\Omega |w|^{5/2} |\nabla (\zeta^{1/4})|^2 \right) ^4 &\leq \|w\|_6^{10} \left( \int_\Omega |\nabla (\zeta^{1/4})|^{24/7} \right)^{7/3} \lesssim (\lambda^{-5} + \lambda^{-10} d^{-10}) d^{-1} \\
&= \mathcal O(\lambda^{-1} d^{-1}) + o(\lambda^{-1} d^{-2}).
\end{align*}
To control the second term on the right side of \eqref{zeta w5 estimate}, we use again the pointwise estimate \eqref{pointwise bound f}. The contribution of the $|w|^5$ term to the second term on the right side of \eqref{zeta w5 estimate} is
\[ \left( \int_\Omega |w|^{5 + \frac 32} \zeta^{1/2} \right)^4 = \left( \int_\Omega ( \zeta^{1/2} w^{5/2}) w^4 \right)^4 \leq \|\zeta w^5\|_{3/2}^2 \|w\|_6^{16} = o(\|\zeta w^5\|_{3/2}^2), \]
which can be absorbed into the left side of \eqref{zeta w5 estimate}.
For the remaining terms, we have
\begin{align*}
\left( \int_\Omega |w|^{ 3/2} U_{x, \lambda}^5 \zeta^{1/2} \right)^4 &\lesssim \|w\|_6^6 \|U_{x, \lambda}\|_{L^{20/3}(\Omega \setminus B_{d/2}(x))}^{20} = (\lambda^{-3} + (d \lambda)^{-6}) (\lambda^{-10} d^{-11}), \\
\left( \int_\Omega |w|^{ 3/2} U_{x, \lambda} \zeta^{1/2} \right)^4 &\lesssim \|w\|_6^6 \|U_{x, \lambda}\|_{L^{4/3}(\Omega)}^{4} = (\lambda^{-3} + (d \lambda)^{-6}) \lambda^{-2}, \\
\left( \int_\Omega |w|^{ 5/2} \zeta^{1/2} \right)^4 &\lesssim \|w\|_6^{10} = \lambda^{-5} + (d\lambda)^{-10},
\end{align*}
all of which are $\mathcal O(\lambda^{-1} d^{-1}) + o(\lambda^{-1} d^{-2})$. This concludes the proof of the bound $\|\zeta w^5\|_{3/2}^2 = \mathcal O(\lambda^{-1} d^{-1}) + o(\lambda^{-1} d^{-2})$, and thus of Lemma \ref{lemma w bdry integral}.
\end{proof}
It is now easy to complete the proof of the main result of this section.
\begin{proof}[Proof of Proposition \ref{prop bdry concentration}]
The identity \eqref{bdry conc identity}, together with the bound \eqref{eq:poho1vol} and Lemma \ref{lemma PU bdry integral} (a), yields
\[ C \lambda^{-1} \nabla \phi_0(x) = \mathcal O(\lambda^{-1}) + o (\lambda^{-1} d^{-2}) + \mathcal O\left( \left\| \frac{\partial PU_{x, \lambda}}{\partial n}\right\|_{L^2(\partial \Omega)} \left\| \frac{\partial w}{\partial n} \right\|_{L^2(\partial \Omega)} + \left\| \frac{\partial w}{\partial n} \right\|_{L^2(\partial \Omega)}^2 \right) \]
for some $C >0$. By Lemmas \ref{lemma PU bdry integral} (c) and \ref{lemma w bdry integral} the last term on the right side is bounded by $\lambda^{-1} d^{-3/2} + o(\lambda^{-1}d^{-2})$, so we get
$$
\nabla \phi_0(x) = \mathcal O(d^{-3/2}) + o(d^{-2}) \,.
$$
On the other hand, according to \cite[Equation (2.9)]{Re2}, we have $|\nabla \phi_0(x) | \gtrsim d^{-2}$. Hence $d^{-2} =\mathcal O(d^{-3/2}) + o(d^{-2})$, which yields $d^{-1} = \mathcal O(1)$, as claimed.
\end{proof}
\subsection{Proof of Proposition \ref{prop first expansion}}
The existence of the expansion follows from Proposition \ref{lemma PU + w}. Proposition \ref{prop bdry concentration} gives $d^{-1} = \mathcal O(1)$, which implies that $x_0\in\Omega$ and $\lambda\to\infty$. Moreover, inserting the bound $d^{-1} = \mathcal O(1)$ into Proposition \ref{boundw}, we obtain $\| \nabla w\| = \mathcal O(\lambda^{-1/2})$, as claimed in Proposition \ref{prop first expansion}. This completes the proof.\qed
\section{Refining the expansion}
\label{section refining}
Our goal in this section is to improve the decomposition given in Proposition \ref{prop first expansion}. As in \cite{FrKoKo1}, the key observation is that a better approximation to $u_\epsilon$ is given by the function
\begin{equation}
\label{definition psi}
\psi_{x, \lambda} := PU_{{x, \lambda}} - \lambda^{-1/2}\left(H_a(x, \cdot) - H_0(x, \cdot)\right).
\end{equation}
Let us set
\begin{equation}
\label{definition q}
q_\epsilon := w_\epsilon + \lambda_\epsilon^{-1/2} \left(H_a(x_\epsilon, \cdot) - H_0(x_\epsilon, \cdot) \right),
\end{equation}
so that
$$
u_\epsilon = \alpha_\varepsilon \left( \psi_{x_\varepsilon, \lambda_\varepsilon} + q_\varepsilon \right).
$$
As in \cite{FrKoKo1}, we further decompose
\begin{equation} \label{q-split}
q_\varepsilon = s_\varepsilon + r_\varepsilon
\end{equation}
with $s_\varepsilon \in T_{x_\varepsilon, \lambda_\varepsilon}$ and $r_\varepsilon \in T_{x_\varepsilon, \lambda_\varepsilon}^\bot$ given by
\begin{equation}
\label{definition r}
r_\varepsilon := \Pi_{x_\varepsilon, \lambda_\varepsilon}^\perp q_\varepsilon
\qquad\text{and}\qquad
s_\varepsilon := \Pi_{x_\varepsilon, \lambda_\varepsilon} q_\varepsilon \,.
\end{equation}
We note that the notation $r_\varepsilon$ is consistent with the one used in Theorem \ref{thm expansion} since, writing $w_\varepsilon = q_\varepsilon - \lambda_\varepsilon^{-1/2} \left( H_a(x_\varepsilon,\cdot) - H_0(x_\varepsilon,\cdot) \right)$ and using $w_\varepsilon\in T_{x_\varepsilon, \lambda_\varepsilon}^\bot$, we have
\begin{equation}
\label{eq:sproj}
s_\varepsilon = \lambda_\varepsilon^{-1/2}\, \Pi_{x_\varepsilon, \lambda_\varepsilon} \left( H_a(x_\varepsilon,\cdot) - H_0(x_\varepsilon,\cdot) \right).
\end{equation}
The following proposition summarizes the results of this section.
\begin{proposition}
\label{prop second expansion}
Let $(u_\epsilon)$ be a family of solutions to \eqref{equation u} satisfying \eqref{eq:sobmin}.
Then, up to extraction of a subsequence, there are sequences $(x_\epsilon)\subset\Omega$, $(\lambda_\epsilon)\subset(0,\infty)$, $(\alpha_\epsilon)\subset\mathbb{R}$, $(s_\varepsilon) \subset T_{x_\varepsilon, \lambda_\varepsilon}$ and $(r_\varepsilon) \subset T_{x_\varepsilon, \lambda_\varepsilon}^\bot$ such that
\begin{equation}
\label{expansion psi + q}
u_\epsilon = \alpha_\epsilon(\psi_{x_\varepsilon, \lambda_\varepsilon} + s_\varepsilon + r_\varepsilon)
\end{equation}
and a point $x_0 \in \Omega$ such that, in addition to the conclusions of Proposition \ref{prop first expansion},
\begin{align}
\|\nabla r_\varepsilon\|_2 &= \mathcal O(\varepsilon \lambda_\varepsilon^{-1/2}) \,, \label{r-eps-bound}\\
\phi_a(x_\varepsilon) &= a(x_\varepsilon) \pi \lambda_\varepsilon^{-1} - \frac{\varepsilon}{4\pi}\, Q_V(x_\varepsilon) + o(\lambda_\varepsilon^{-1}) +o(\varepsilon) \,, \nonumber \\
\nabla \phi_a(x_\varepsilon) &= \mathcal O(\varepsilon^\mu) \qquad\text{for any}\ \mu<1\,, \nonumber\\
\lambda_\varepsilon^{-1} &= \mathcal O(\varepsilon) \,, \nonumber \\
\alpha_\varepsilon^4 &= 1 + \frac{64}{3 \pi} \phi_0(x_\varepsilon) \lambda_\varepsilon^{-1} + \mathcal O(\varepsilon \lambda_\varepsilon^{-1}) \,. \nonumber
\end{align}
\end{proposition}
The expansion of $\phi_a(x)$ will also be of great importance in the final step of the proof of Theorem \ref{thm expansion}. Indeed, using the bound on $|\nabla \phi_a(x)|$ we will show that in fact $\phi_a(x) = o(\lambda^{-1}) + o(\varepsilon)$. This allows us to determine $\lim_{\varepsilon \to 0} \varepsilon \lambda_\varepsilon$.
We prove Proposition \ref{prop second expansion} in the following subsections. Again the strategy is to expand suitable energy functionals.
\subsection{Bounds on $s$}
\label{subsection s}
In this section we record bounds on the function $s$ introduced in \eqref{definition r}, and on the coefficients $\beta,\gamma$ and $\delta_j$ defined by the decomposition
\begin{equation}
\label{expansion s}
s = \Pi_{x, \lambda} q =: \lambda^{-1} \beta PU_{x, \lambda} + \gamma \partial_\lambda PU_{x, \lambda} + \lambda^{-3} \sum_{i = 1}^3 \delta_i \partial_{x_i} PU_{x, \lambda} \,.
\end{equation}
Since $PU_{x, \lambda}$, $\partial_\lambda PU_{x, \lambda}$ and $\partial_{x_i} PU_{x, \lambda}$, $i=1,2,3$, are linearly independent for sufficiently small $\epsilon$, the numbers $\beta$, $\gamma$ and $\delta_i$, $i=1,2,3$, (depending on $\varepsilon$, of course) are uniquely determined. The choice of the different powers of $\lambda$ multiplying these coefficients is motivated by the following proposition.
\begin{proposition}
\label{proposition s}
The coefficients appearing in \eqref{expansion s} satisfy
\begin{equation}
\label{bound beta gamma delta} \beta, \gamma, \delta_i = \mathcal O(1).
\end{equation}
Moreover, we have the bounds
\begin{equation}
\label{bounds s}
\|s\|_\infty = \mathcal O(\lambda^{-1/2}), \quad \| \nabla s\| = \mathcal O(\lambda^{-1})\quad \text{and } \quad \|s\|_{2} = \mathcal O(\lambda^{-3/2}),
\end{equation}
as well as
\begin{equation}
\label{bound nabla s outside} \|\nabla s\|_{L^2(\Omega \setminus B_{d/2}(x))} =\mathcal O (\lambda^{-3/2}).
\end{equation}
\end{proposition}
\begin{proof}
Because of \eqref{eq:sproj}, $s_\varepsilon$ depends on $u_\varepsilon$ only through the parameters $\lambda$ and $x$. Since these parameters satisfy the same properties $\lambda\to\infty$ and $d^{-1} =\mathcal O(1)$ as in \cite{FrKoKo1}, the results on $s_\varepsilon$ there are applicable. In particular, the bound \eqref{bound beta gamma delta} follows from \cite[Lemma 6.1]{FrKoKo1}.
The bounds stated in \eqref{bounds s} follow readily from \eqref{expansion s} and \eqref{bound beta gamma delta}, together with the corresponding bounds on the basis functions $PU_{x, \lambda}$, $\partial_{\lambda} PU_{x, \lambda}$ and $\partial_{x_i} PU_{x, \lambda}$, $i=1,2,3$, which come from
\[ \| U_{x, \lambda}\|_\infty \lesssim \lambda^{1/2}, \quad \|\nabla U_{x, \lambda}\| \lesssim 1, \quad \|U_{x, \lambda}\|_{2} \lesssim \lambda^{-1/2}, \]
and similar bounds on $\partial_{\lambda} U_{x, \lambda}$ and $\partial_{x_i} U_{x, \lambda}$, compare Lemma \ref{lemma Lq norm of U}, as well as
\[ \|H_0(x, \cdot)\|+\|\nabla_x H_0(x, \cdot)\| + \|\nabla_x \nabla_y H_0(x,y)\| \lesssim 1. \]
It remains to prove \eqref{bound nabla s outside}. Again by \eqref{expansion s} and \eqref{bound beta gamma delta}, it suffices to show that
\begin{equation}
\label{eq:propsproof}
\lambda^{-1} \|\nabla PU_{x, \lambda}\|_{L^2(\Omega \setminus B_{d/2}(x))}
+ \|\nabla \partial_{\lambda} PU_{x, \lambda}\|_{L^2(\Omega \setminus B_{d/2}(x))} + \lambda^{-3} \|\nabla \partial_{x_i} PU_{x, \lambda}\|_{L^2(\Omega \setminus B_{d/2}(x))}
\lesssim \lambda^{-3/2}.
\end{equation}
(In fact, there is a better bound on $\nabla \partial_{x_i} PU_{x, \lambda}$, but we do not need this.) Since the three bounds in \eqref{eq:propsproof} are all proved similarly, we only prove the second one.
By integration by parts, we have
\[ \int_{\Omega \setminus B_{d/2}(x)} |\nabla \partial_{\lambda} PU_{x, \lambda}|^2 = 15 \int_{\Omega \setminus B_{d/2}(x)} U_{x, \lambda}^4 \partial_{\lambda} U_{x, \lambda} \partial_{\lambda} PU_{x, \lambda} + \int_{\partial B_{d/2}(x)} \frac{\partial (\partial_{\lambda} PU_{x, \lambda})}{\partial n} \partial_{\lambda} PU_{x, \lambda} \,. \]
By the bounds from Lemmas \ref{lemma Lq norm of U} and \ref{lemma PU}, the volume integral is estimated by
\begin{align*}
\int_{\Omega \setminus B_{d/2}(x)} U_{x, \lambda}^4 \partial_{\lambda} U_{x, \lambda} \partial_{\lambda} PU_{x, \lambda}
&\leq \int_{\mathbb{R}^3 \setminus B_{d/2}(x)} U_{x, \lambda}^4 (\partial_{\lambda} U_{x, \lambda})^2 + \|\partial_{\lambda} \varphi_{x, \lambda}\|_\infty \int_{\mathbb{R}^3 \setminus B_{d/2}(x)} U_{x, \lambda}^4 |\partial_{\lambda} U_{x, \lambda}| \\
& \lesssim \lambda^{-5}.
\end{align*}
Since
\[ \nabla \partial_{\lambda} U_{x, \lambda}(y) = \frac{\lambda^{3/2}}{2} \frac{(-5+\lambda^2|y-x|^2)(y-x) }{(1 + \lambda^2|y-x|^2)^{5/2}}, \]
we find $|\nabla \partial_{\lambda} U_{x, \lambda}| \lesssim \lambda^{-3/2}$ on $\partial B_{d/2}(x)$. By the mean value formula applied to the components of the gradient of the harmonic function $\partial_{\lambda} \varphi_{x, \lambda}$ and the bound from Lemma \ref{lemma PU},
\[ |\nabla \partial_{\lambda} \varphi_{x, \lambda}(y)| \lesssim d^{-1} \|\partial_{\lambda} \varphi_{x, \lambda}\|_\infty \lesssim \lambda^{-3/2}
\qquad\text{for all}\ y\in\partial B_{d/2}(x) \,, \]
where we used $d^{-1} = \mathcal O(1)$.
This implies that $|\nabla (\partial_{\lambda} PU_{x, \lambda})| \lesssim \lambda^{-3/2}$ on $\partial B_{d/2}(x)$. Thus, the boundary integral is estimated by
\begin{align*}
\int_{\partial B_{d/2}(x)} \frac{\partial (\partial_{\lambda} PU_{x, \lambda})}{\partial n} \partial_{\lambda} PU_{x, \lambda} &\lesssim |\partial B_{d/2}(x)| \, \|\nabla (\partial_{\lambda} PU_{x, \lambda})\|_{L^\infty(\partial B_{d/2}(x))} (\| \partial_{\lambda} U_{x, \lambda}\|_{L^\infty(\Omega \setminus B_{d/2}(x))} + \|\partial_{\lambda} \varphi_{x, \lambda}\|_\infty) \\
& \lesssim \lambda^{-3} \,,
\end{align*}
since $\| \partial_{\lambda} U_{x, \lambda}\|_{L^\infty(\Omega \setminus B_{d/2}(x))} \lesssim \lambda^{-3/2}$ by Lemma \ref{lemma Lq norm of U}. Collecting these estimates, we find
$\|\nabla \partial_{\lambda} PU_{x, \lambda}\|_{L^2(\Omega \setminus B_{d/2}(x))}$ $\lesssim \lambda^{-3/2}$, which is the second bound in \eqref{eq:propsproof}.
\end{proof}
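The gradient bound for the harmonic function $\partial_{\lambda} \varphi_{x, \lambda}$ used in the proof above rests on the following standard fact, which we recall as a sketch; the constant is universal in dimension three.

```latex
% If h is harmonic in B_\rho(y), then so is each component of \nabla h, and the
% mean value property together with the divergence theorem gives
\nabla h(y) = \frac{1}{|B_\rho(y)|} \int_{B_\rho(y)} \nabla h \, dz
            = \frac{1}{|B_\rho(y)|} \int_{\partial B_\rho(y)} h \, \nu \, dS \,,
\qquad\text{hence}\qquad
|\nabla h(y)| \leq \frac{3}{\rho} \sup_{B_\rho(y)} |h| \,.
% Applied with h = \partial_\lambda \varphi_{x,\lambda} and \rho \sim d, and
% recalling d^{-1} = O(1), this yields the bound
% |\nabla \partial_\lambda \varphi_{x,\lambda}| \lesssim \lambda^{-3/2}
% on \partial B_{d/2}(x) used in the proof.
```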
Later we will also need the leading order behavior of the zero mode coefficients $\beta$ and $\gamma$ in \eqref{expansion s}.
\begin{proposition} \label{prop-beta-gamma} As $\varepsilon\to 0$,
\begin{equation}
\label{eq beta gamma subsec}
\beta= \frac{16}{3\pi}\, (\phi_a(x) - \phi_0(x)) + \mathcal{O}(\lambda^{-1}) , \qquad
\gamma=-\frac 85\, \beta + \mathcal{O}(\lambda^{-1}).
\end{equation}
\end{proposition}
\begin{proof}
According to \eqref{eq:sproj}, we have
\begin{align}
\label{eq s Ha H0 scalarprod1}
\int_\Omega \nabla s \cdot \nabla PU_{x, \lambda} & = \lambda^{-1/2} \int_\Omega \nabla (H_a(x,\cdot) - H_0(x,\cdot))\cdot \nabla PU_{x, \lambda}, \\
\label{eq s Ha H0 scalarprod2}
\int_\Omega \nabla s \cdot \nabla \partial_{\lambda} PU_{x, \lambda} & = \lambda^{-1/2} \int_\Omega \nabla (H_a(x,\cdot) - H_0(x,\cdot)) \nabla \partial_{\lambda} PU_{x, \lambda}.
\end{align}
By \eqref{expansion s}, the left side of \eqref{eq s Ha H0 scalarprod1} is
\begin{align*}
& \beta \lambda^{-1} \int_\Omega |\nabla PU_{x, \lambda}|^2 + \gamma \int_\Omega \nabla \partial_{\lambda} PU_{x, \lambda}\cdot \nabla PU_{x, \lambda} + \lambda^{-3} \sum_{i=1}^3 \delta_i \int_\Omega \nabla \partial_{x_i} PU_{x, \lambda} \cdot \nabla PU_{x, \lambda} \\
& = 3 \beta \lambda^{-1} \frac{\pi^2}{4} + \mathcal O(\lambda^{-2}),
\end{align*}
where we used the facts that, by \cite[Appendix B]{Re2},
\begin{align}
\label{rey scalarprods}
& \int_\Omega |\nabla PU_{x, \lambda}|^2 = 3 \frac{\pi^2}{4} + \mathcal O(\lambda^{-1}), \qquad \int_\Omega \nabla \partial_{\lambda} PU_{x, \lambda} \cdot \nabla PU_{x, \lambda} = \mathcal O(\lambda^{-2}) \\
& \int_\Omega \nabla \partial_{x_i} PU_{x, \lambda} \cdot \nabla PU_{x, \lambda} = \mathcal O(\lambda^{-1}).
\end{align}
On the other hand, the right side of \eqref{eq s Ha H0 scalarprod1} is
\begin{align*}
\lambda^{-1/2} \int_\Omega \nabla (H_a(x,\cdot) - H_0(x,\cdot)) \cdot \nabla PU_{x, \lambda}
& = 3 \lambda^{-1/2} \int_\Omega (H_a(x,\cdot) - H_0(x,\cdot)) U_{x, \lambda}^5 \\
& = 4 \pi (\phi_a(x) - \phi_0(x)) \lambda^{-1} + \mathcal O(\lambda^{-2})
\end{align*}
by Lemma \ref{lemma U Ha}. Comparing both sides yields the expansion of $\beta$ stated in \eqref{eq beta gamma subsec}.
Similarly, by \eqref{expansion s}, the left side of \eqref{eq s Ha H0 scalarprod2} is
\begin{align*}
& \frac{\beta}{\lambda^2} \int_\Omega \nabla PU_{x, \lambda} \cdot \nabla \partial_{\lambda} PU_{x, \lambda} + \gamma \int_\Omega |\nabla \partial_{\lambda} PU_{x, \lambda}|^2 + \lambda^{-3} \sum_{i=1}^3 \delta_i \int_\Omega \nabla \partial_{x_i} PU_{x, \lambda} \cdot \nabla \partial_{\lambda} PU_{x, \lambda} \\
&= \frac{15 \pi^2 \gamma}{64\, \lambda^2} + \mathcal O(\lambda^{-3}) \,,
\end{align*}
where, besides \eqref{rey scalarprods}, we used $\int_\Omega \nabla \partial_{x_i} PU_{x, \lambda} \cdot \nabla \partial_{\lambda} PU_{x, \lambda} = \mathcal O(\lambda^{-2})$ by \cite[Appendix B]{Re2}, and
\[
\int_\Omega |\nabla \partial_{\lambda} PU_{x, \lambda}|^2 = \int_\Omega |\nabla \partial_{\lambda} U_{x, \lambda}|^2 + \mathcal O(\lambda^{-3}) = \frac{15 \pi^2}{64} \lambda^{-2} + \mathcal O(\lambda^{-3}) \,.
\]
(The numerical value comes from an explicit evaluation of the integral in terms of beta functions, which we omit.) On the other hand, the right side of \eqref{eq s Ha H0 scalarprod2} is
\begin{align*}
\lambda^{-1/2} \int_\Omega \nabla (H_a(x,\cdot) - H_0(x,\cdot))\cdot \nabla \partial_{\lambda} PU_{x, \lambda} &= 15 \lambda^{-1/2} \int_\Omega (H_a(x,\cdot) - H_0(x,\cdot)) U_{x, \lambda}^4 \partial_{\lambda} U_{x, \lambda} \\
& = -2 \pi (\phi_a(x) - \phi_0(x)) \lambda^{-2} + \mathcal O(\lambda^{-3})
\end{align*}
by Lemma \ref{lemma U Ha}. Comparing both sides yields the expansion of $\gamma$ stated in \eqref{eq beta gamma subsec}.
\end{proof}
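The omitted beta-function evaluation can be sketched as follows; we use here the normalization $U_{x, \lambda}(y) = \lambda^{1/2} (1 + \lambda^2 |y-x|^2)^{-1/2}$, which solves $-\Delta U_{x, \lambda} = 3 U_{x, \lambda}^5$ and which we assume throughout this sketch.

```latex
% A direct computation gives
\nabla \partial_{\lambda} U_{x, \lambda}(y)
  = \frac{\lambda^{3/2}}{2}\, \frac{(\lambda^2|y-x|^2 - 5)(y-x)}{(1+\lambda^2|y-x|^2)^{5/2}} \,,
% so passing to radial coordinates and substituting u = \lambda |y-x| yields
\int_{\mathbb R^3} |\nabla \partial_{\lambda} U_{x, \lambda}|^2
  = \pi \lambda^{-2} \int_0^\infty \frac{u^4 (u^2-5)^2}{(1+u^2)^5}\, du \,.
% Using \int_0^\infty u^{2a-1}(1+u^2)^{-a-b}\, du = \tfrac12 B(a,b), we find
\int_0^\infty \frac{u^4(u^2-5)^2}{(1+u^2)^5}\, du
  = \frac12 \left[ B(\tfrac92,\tfrac12) - 10\, B(\tfrac72,\tfrac32)
      + 25\, B(\tfrac52,\tfrac52) \right]
  = \frac12 \left[ \frac{35\pi}{128} - \frac{50\pi}{128} + \frac{75\pi}{128} \right]
  = \frac{15\pi}{64} \,,
% so that \int_{\mathbb R^3} |\nabla \partial_\lambda U_{x,\lambda}|^2
%   = (15\pi^2/64) \lambda^{-2}, as used in the proof.
```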
\subsection{The bound on $\|\nabla r\|$}
\label{subsection nabla r} The goal of this subsection is to prove
\begin{proposition} \label{prop-r-bound}
As $\varepsilon\to 0$,
\begin{equation}
\label{nabla r subsection}
\|\nabla r\| = \mathcal O(\phi_a(x) \lambda^{-1}) + \mathcal O(\lambda^{-3/2}) + \mathcal O(\varepsilon \lambda^{-1/2}).
\end{equation}
\end{proposition}
Using $\Delta (H_a(x,\cdot)-H_0(x,\cdot)) = -a G_a(x,\cdot)$ and introducing the function $g_{x, \lambda}$ from \eqref{eq:defg}, we see that the equation \eqref{equation w} for $w$ implies
\begin{equation}
\label{equation r}
(-\Delta+ a) r = - 3 U_{x, \lambda}^5 + 3 \alpha^4 (\psi_{x, \lambda} + s + r)^5 + a (f_{x, \lambda}+g_{x, \lambda}) - as - \varepsilon V (\psi_{x, \lambda} + s +r) + \Delta s \,.
\end{equation}
Integrating against $r$ and using the orthogonality conditions $\int_\Omega (\Delta s ) r = -\int_\Omega \nabla s \cdot \nabla r = 0$ and $3\int_\Omega U_{x, \lambda}^5 r = \int_\Omega \nabla PU_{x, \lambda}\cdot\nabla r =0$, we obtain
\begin{equation}
\label{energy r}
\int_\Omega \left(|\nabla r|^2+ ar^2\right) = 3 \alpha^4 \int_\Omega (\psi_{x, \lambda} + s + r)^5 r - \int_\Omega a (s- f_{x, \lambda} - g_{x, \lambda}) r - \int_\Omega \varepsilon V (\psi_{x, \lambda} + s +r) r.
\end{equation}
The terms appearing in \eqref{energy r} satisfy the following bounds.
\begin{lemma}
\label{lemma expansion r}
As $\varepsilon \to 0$, the following holds.
\begin{enumerate}
\item[(a)] $\left| 3 \alpha^4 \int_\Omega (\psi_{x, \lambda} + s + r)^5 r - 15 \alpha^4 \int_\Omega U_{x, \lambda}^4 r^2\right| \lesssim \left( \lambda^{-3/2} + \lambda^{-1}\phi_a(x) + \|r\|_6^2 \right)\|r\|_6$.
\item[(b)] $\left | \int_\Omega \left( a (s- f_{x, \lambda} - g_{x, \lambda} ) + \epsilon V(\psi_{x, \lambda}+s+r)\right) r \right| \lesssim \left( \lambda^{-3/2} + \epsilon \lambda^{-1/2} + \epsilon \|r\|_6 \right) \|r\|_6$.
\end{enumerate}
\end{lemma}
\begin{proof}
(a) We write $\psi_{x, \lambda} = U_{x, \lambda} - \lambda^{-1/2} H_a(x,\cdot) - f_{x, \lambda}$ and bound pointwise
\begin{align}
\label{eq:expansionrproof}
(\psi_{x, \lambda} + s + r)^5 & = U_{x, \lambda}^5 + 5U_{x, \lambda}^4(s+r) + \mathcal O\left( U_{x, \lambda}^4\left( \lambda^{-1/2} |H_a(x,\cdot)| + |f_{x, \lambda}|\right) + U_{x, \lambda}^3 \left( r^2 + s^2 \right) \right) \notag \\
& \quad + \mathcal O\left( \lambda^{-5/2}|H_a(x,\cdot)|^5 + |f_{x, \lambda}|^5 + |r|^5 + |s|^5\right).
\end{align}
When integrated against $r$, the first term vanishes by orthogonality. Let us bound the contribution coming from the second term, that is, from $5 U_{x, \lambda}^4 s$. We write
$$
s = \lambda^{-1} \beta U_{x, \lambda} + \gamma \partial_{\lambda} U_{x, \lambda} + \tilde s \,,
$$
so $\tilde s$ consists of the zero mode contributions involving the $\delta_i$, plus contributions from the difference between $PU_{x, \lambda}$ and $U_{x, \lambda}$ in the terms involving $\beta$ and $\gamma$. By orthogonality, we have
$$
\int_\Omega U_{x, \lambda}^4 sr = \int_\Omega U_{x, \lambda}^4 \tilde s r = \mathcal O(\|U_{x, \lambda}\|_6^4 \|\tilde s\|_6 \|r\|_6)
$$
and, by Lemmas \ref{lemma Lq norm of U} and \ref{lemma PU}, as well as Proposition \ref{proposition s},
$$
\|\tilde s\|_6 \leq \left( |\beta|+|\gamma|\right) \left( \lambda^{-1} \|\varphi_{x, \lambda} \|_6 + \|\partial_{\lambda}\varphi_{x, \lambda}\|_6 \right) + \lambda^{-3} \sum_{i=1}^3 |\delta_i| \|\partial_{x_i}PU_{x, \lambda}\|_6 \lesssim \lambda^{-3/2} \,.
$$
This proves
$$
\int_\Omega U_{x, \lambda}^4 sr = \mathcal O(\lambda^{-3/2}\| r\|_6) \,.
$$
It remains to bound the remainder terms in \eqref{eq:expansionrproof}. We write $H_a(x,y) = \phi_a(x) + \mathcal O(|x-y|)$ and bound
\begin{align*}
\int_{\Omega} U_{x, \lambda}^{24/5} |H_a(x,\cdot)|^{6/5} & \lesssim \phi_a(x)^{6/5} \int_\Omega U_{x, \lambda}^{24/5} + \int_\Omega U_{x, \lambda}^{24/5} \left( |x-y|^{6/5} + \phi_a(x)^{1/5} |x-y|\right) \\
& \lesssim \lambda^{-3/5}\phi_a(x)^{6/5} + \lambda^{-9/5} + \lambda^{-8/5}\phi_a(x)^{1/5}
\lesssim \lambda^{-3/5}\phi_a(x)^{6/5} + \lambda^{-9/5} \,,
\end{align*}
where we used $ab\lesssim a^6 + b^{6/5}$ in the last step. Hence
\begin{align*}
& \left| \int_\Omega U_{x, \lambda}^4 \left( \lambda^{-1/2} |H_a(x, \cdot)| + |f_{x, \lambda}| \right) |r| \right| \lesssim \left( \lambda^{-1/2} \|U_{x, \lambda}^4 H_a(x,\cdot)\|_{6/5} + \|f_{x, \lambda} \|_\infty \right) \| r\|_6 \\
& \lesssim \left(\lambda^{-1} \phi_a(x) + \lambda^{-2} \right) \| r\|_6 \,.
\end{align*}
Finally, using Proposition \ref{proposition s},
\begin{align*}
& \int_\Omega U_{x, \lambda}^3 \left(r^2 + s^2 \right)|r| + \int_\Omega \left(\lambda^{-5/2} |H_a(x,\cdot)|^5 + |f_{x, \lambda}|^5 + |r|^5 + |s|^5 \right)|r| \\
& \lesssim \left( \|r\|_6^2 + \|s\|^2_6 + \lambda^{-5/2} + \|f_{x, \lambda}\|_\infty^5 + \|r\|_6^5 + \|s\|_6^5 \right) \|r\|_6
\lesssim \left( \|r\|_6^2 + \lambda^{-2} \right) \| r\|_6 . \nonumber
\end{align*}
\medskip
(b) We have
\begin{align*}
& \left | \int_\Omega \left( a (s- f_{x, \lambda} - g_{x, \lambda} ) + \epsilon V(\psi_{x, \lambda}+s+r)\right) r \right| \\
& \lesssim \left( \|s\|_{6/5} + \|f_{x, \lambda}\|_{6/5} + \|g_{x, \lambda}\|_{6/5} + \epsilon \|\psi_{x, \lambda}\|_{6/5} + \epsilon \|r\|_{6/5} \right) \|r\|_6 \,.
\end{align*}
By Proposition \ref{proposition s}, $\|s\|_{6/5}\lesssim \|s\|_2 \lesssim \lambda^{-3/2}$. By Lemma \ref{lemma PU}, $\|f_{x, \lambda}\|_{6/5}\lesssim \|f_{x, \lambda}\|_\infty \lesssim \lambda^{-5/2}$. By Lemma \ref{lem-g}, $\|g_{x, \lambda}\|_{6/5}\lesssim \lambda^{-2}$. By Lemmas \ref{lemma Lq norm of U} and \ref{lemma PU}, $\|\psi_{x, \lambda}\|_{6/5} \lesssim \lambda^{-1/2}$. Finally, $\|r\|_{6/5}\lesssim \|r\|_6$. This proves the claimed bound.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop-r-bound}]
We deduce from identity \eqref{energy r} together with Lemma \ref{lemma expansion r} that
\begin{align*}
\int_\Omega \left( |\nabla r|^2 + ar^2 - 15 \alpha^4 \, U_{x,\lambda}^4 r^2\right)
& \lesssim \left( \lambda^{-1} \phi_a(x) + \lambda^{-3/2} + \epsilon \lambda^{-1/2} + \|\nabla r\|^2 + \epsilon \|\nabla r\| \right) \|\nabla r\| \,.
\end{align*}
Since $\alpha^4 \to 1$ and $r \in T_{x, \lambda}^\bot$, the coercivity inequality \eqref{coercivity} implies that for all sufficiently small $\epsilon>0$ the left side is bounded from below by $c \|\nabla r\|^2$ with a universal constant $c>0$. Thus,
$$
\|\nabla r\| \lesssim \lambda^{-1} \phi_a(x) + \lambda^{-3/2} + \epsilon \lambda^{-1/2} + \|\nabla r\|^2 + \epsilon \|\nabla r\| \,.
$$
For all sufficiently small $\epsilon>0$, the last two terms on the right side can be absorbed into the left side and we obtain the claimed inequality \eqref{nabla r subsection}.
\end{proof}
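The absorption step at the end of the proof can be made quantitative; here is a sketch, where the required smallness $\|\nabla r\| = o(1)$ is available since $\|\nabla r\| \leq \|\nabla q\| \to 0$.

```latex
% Write X := \|\nabla r\| and A := \lambda^{-1}\phi_a(x) + \lambda^{-3/2}
%   + \epsilon \lambda^{-1/2}. The last display states
%   X \leq C (A + X^2 + \epsilon X) for some constant C.
% Since X = o(1), for all sufficiently small \epsilon we have
%   C X + C \epsilon \leq 1/2, hence
(1 - CX - C\epsilon)\, X \leq C A
\qquad\Longrightarrow\qquad
X \leq 2CA \,,
% which is the claimed inequality.
```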
\noindent Proposition \ref{prop-r-bound} is a first step toward the bound \eqref{r-eps-bound} in Proposition \ref{prop second expansion}. In Section \ref{subsection phi a} we will show that $\phi_a(x)=\mathcal O(\lambda^{-1}+\epsilon)$ and $\lambda^{-1} =\mathcal O(\varepsilon)$. Combining these bounds with Proposition \ref{prop-r-bound}, we will obtain \eqref{r-eps-bound}.
\subsection{Expanding $\alpha^4$}
In this subsection, we will prove
\begin{proposition} \label{prop-alpha4}
As $\varepsilon\to 0$,
\begin{equation}
\label{alpha4 exp subsec}
\alpha^4 = 1 - 4 \beta \lambda^{-1} + \mathcal O(\phi_a(x) \lambda^{-1} + \lambda^{-2} + \varepsilon \lambda^{-1}),
\end{equation}
where $\beta$ is the zero-mode coefficient from \eqref{expansion s}.
\end{proposition}
\noindent To prove \eqref{alpha4 exp subsec}, we expand the energy identity obtained by integrating the equation for $u$ against $u$. Writing $u = \psi_{x, \lambda} + q$, this yields
\begin{equation*}
\int_\Omega |\nabla (\psi_{x, \lambda} + q)|^2 + \int_\Omega (a + \varepsilon V) (\psi_{x, \lambda} + q)^2 = 3 \alpha^4 \int_\Omega (\psi_{x, \lambda} + q)^6,
\end{equation*}
which we write as
\begin{align}
\label{energy identity alpha^4}
& \int_\Omega \left( |\nabla\psi_{x, \lambda}|^2 + (a+\varepsilon V) \psi_{x, \lambda}^2 - 3 \alpha^4 \psi_{x, \lambda}^6 \right) + 2 \int_\Omega \left( \nabla q\cdot\nabla\psi_{x, \lambda} + (a+\varepsilon V) q \psi_{x, \lambda} - 9 \alpha^4 q\psi_{x, \lambda}^5 \right) \notag \\
& = \mathcal R_0
\end{align}
with
$$
\mathcal R_0 := - \int_\Omega \left(|\nabla q|^2 + (a+\varepsilon V) q^2 \right) + 3 \alpha^4 \sum_{k=2}^6 {6 \choose k} \int_\Omega \psi_{x, \lambda}^{6-k} q^k \,.
$$
\noindent The following lemma provides the expansions of the terms in \eqref{energy identity alpha^4}.
\begin{lemma}
\label{lemma alpha^4}
As $\varepsilon \to 0$, the following holds.
\begin{enumerate}
\item[(a)] $\int_\Omega \left( |\nabla\psi_{x, \lambda}|^2 + (a+\varepsilon V) \psi_{x, \lambda}^2 - 3 \alpha^4 \psi_{x, \lambda}^6 \right) = (1-\alpha^4) 3 \frac{\pi^2}{4} + \mathcal O(\phi_a(x) \lambda^{-1} + \lambda^{-2} + \varepsilon \lambda^{-1})$.
\item[(b)] $\int_\Omega \left( \nabla q\cdot\nabla\psi_{x, \lambda} + (a+\varepsilon V) q \psi_{x, \lambda} - 9 \alpha^4 q\psi_{x, \lambda}^5 \right) = 3(1-3\alpha^4) \frac{\pi^2}{4}\beta \lambda^{-1} + \mathcal O(\lambda^{-2}+ \epsilon^2 \lambda^{-1})$.
\item[(c)] $\mathcal R_0 = \mathcal O (\lambda^{-2}+ \varepsilon^2\lambda^{-1})$.
\end{enumerate}
\end{lemma}
\begin{proof}
(a) In \cite[Theorem 2.1]{FrKoKo1}, we have shown the expansions
\begin{align*}
& \int_\Omega \left( |\nabla \psi_{x, \lambda}|^2 + (a+\varepsilon V) \psi_{x, \lambda}^2 \right) = 3 \frac{\pi^2}{4} + \mathcal O(\phi_a(x)\lambda^{-1} + \lambda^{-2} + \varepsilon \lambda^{-1}) \,, \\
& 3 \int_\Omega |\psi_{x, \lambda}|^6 = 3 \frac{\pi^2}{4} + \mathcal O(\phi_a(x) \lambda^{-1} + \lambda^{-2}) \,,
\end{align*}
which immediately imply the bound in (a).
\medskip
(b) Since $\Delta (H_a(x,\cdot)-H_0(x,\cdot)) = -a G_a(x,\cdot)$, we have $-\Delta\psi_{x, \lambda} = 3U_{x, \lambda}^5 - \lambda^{-1/2}a G_a(x,\cdot)$. Since $\psi_{x, \lambda}=\lambda^{-1/2}G_a(x,\cdot) - f_{x, \lambda} - g_{x, \lambda}$ with $g_{x, \lambda}$ from \eqref{eq:defg}, we can rewrite this as
\begin{equation}\label{eq:eqpsi}
-\Delta\psi_{x, \lambda} + a\psi_{x, \lambda} = 3U_{x, \lambda}^5 - a(f_{x, \lambda} + g_{x, \lambda}) \,.
\end{equation}
Thus,
\begin{align*}
& \int_\Omega \left( \nabla q\cdot\nabla\psi_{x, \lambda} + (a+\varepsilon V) q \psi_{x, \lambda} - 9 \alpha^4 q\psi_{x, \lambda}^5 \right) \\
& = 3(1-3\alpha^4) \int_\Omega q U_{x, \lambda}^5 - \int_\Omega q \left( 9 \alpha^4 ( \psi_{x, \lambda}^5 - U_{x, \lambda}^5 ) + a(f_{x, \lambda} + g_{x, \lambda})+ \varepsilon V\psi_{x, \lambda} \right).
\end{align*}
By orthogonality and the computations in the proof of Proposition \ref{prop-beta-gamma},
$$
3 \int_\Omega q U_{x, \lambda}^5 = \int_\Omega \nabla s \cdot \nabla PU_{x, \lambda}
= \frac{3\pi^2}{4}\, \beta \lambda^{-1} + \mathcal O(\lambda^{-2}) \,.
$$
Moreover,
\begin{align*}
& \left| \int_\Omega q \left( 9 \alpha^4 ( \psi_{x, \lambda}^5 - U_{x, \lambda}^5 ) + a(f_{x, \lambda} + g_{x, \lambda}) + \varepsilon V \psi_{x, \lambda} \right) \right| \\
& \lesssim \|q\|_6 \left( \| \psi_{x, \lambda}^5 - U_{x, \lambda}^5 \|_{6/5} + \|f_{x, \lambda}\|_{6/5} + \|g_{x, \lambda}\|_{6/5} + \varepsilon \|\psi_{x, \lambda}\|_{6/5} \right).
\end{align*}
By Propositions \ref{proposition s} and \ref{prop-r-bound}, we have
\begin{equation}
\label{eq:qbound}
\|q\|_6 \lesssim \|\nabla q\| \lesssim \lambda^{-1} + \epsilon \lambda^{-1/2} \,,
\end{equation}
by Lemma \ref{lemma PU}, $\|f_{x, \lambda}\|_\infty \lesssim \lambda^{-5/2}$ and, by Lemma \ref{lem-g}, $\|g_{x, \lambda}\|_{6/5}\lesssim \lambda^{-2}$. Moreover, writing $\psi_{x, \lambda} = U_{x, \lambda} - \lambda^{-1/2} H_a(x,\cdot) - f_{x, \lambda}$ and using Lemmas \ref{lemma Lq norm of U} and \ref{lemma PU} and \eqref{Ha-bound}, we get $\|\psi_{x, \lambda}\|_{6/5} \lesssim \lambda^{-1/2}$. Also, bounding
$$
\left| \psi_{x, \lambda}^5 - U_{x, \lambda}^5 \right| \lesssim \psi_{x, \lambda}^4 \left( \lambda^{-1/2} |H_a(x,\cdot)| + |f_{x, \lambda}| \right) + \lambda^{-5/2} |H_a(x,\cdot)|^5 + |f_{x, \lambda}|^5 \,,
$$
we obtain from Lemmas \ref{lemma Lq norm of U} and \ref{lemma PU} and from \eqref{Ha-bound},
$$
\| \psi_{x, \lambda}^5 - U_{x, \lambda}^5 \|_{6/5} \lesssim \lambda^{-1/2} \|\psi_{x, \lambda}\|_{24/5}^4 + \lambda^{-5/2}\lesssim \lambda^{-1} \,.
$$
Collecting all the terms, we obtain the claimed bound.
\medskip
(c) Because of the second inequality in \eqref{eq:qbound}, the first integral in the definition of $\mathcal R_0$ is $\mathcal O(\lambda^{-2} + \varepsilon^2\lambda^{-1})$. The second integral is bounded, in absolute value, by a constant times
$$
\int_\Omega \left( \psi_{x, \lambda}^4 q^2 + q^6\right) \leq \|\psi_{x, \lambda}\|_6^4 \|q\|_6^2 + \|q\|_6^6 \lesssim \lambda^{-2} + \varepsilon^2\lambda^{-1} \,.
$$
This completes the proof.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop-alpha4}]
The claim follows from \eqref{energy identity alpha^4} and Lemma \ref{lemma alpha^4}.
\end{proof}
\subsection{Expanding $\phi_a(x)$ }
\label{subsection phi a}
In this section we prove the following important expansion.
\begin{proposition}\label{phiaexp}
As $\epsilon\to 0$,
\begin{equation}
\label{phi a exp subsection}
\phi_a(x) = \pi\, a(x) \lambda^{-1} - \frac{\varepsilon}{4\pi}\, Q_V(x) + o(\lambda^{-1}) +o(\varepsilon) \,.
\end{equation}
\end{proposition}
Before proving it, let us note the following consequence.
\begin{corollary} \label{cor-lambda-eps}
We have $\phi_a(x_0)=0$, $Q_V(x_0) \leq 0$ and
\begin{equation} \label{lambda-1}
\lambda^{-1} = \mathcal O(\varepsilon),
\end{equation}
as $\varepsilon \to 0$. Moreover, $\|\nabla r\| = \mathcal{O}(\varepsilon\lambda^{-1/2})$ and $\alpha^4 = 1 + \frac{64}{3 \pi} \phi_0(x) \lambda^{-1} + \mathcal O(\varepsilon \lambda^{-1})$.
\end{corollary}
\begin{proof}
The fact that $\phi_a(x_0)=0$ follows immediately from \eqref{phi a exp subsection}. Since $\phi_a(x)\geq 0$ by criticality and since $a(x_0)<0$ by assumption, we deduce from \eqref{phi a exp subsection} that $Q_V(x_0) \leq 0$ and that
$$
\lambda^{-1} \leq \frac{|Q_V(x_0)| + o(1)}{4 \pi^2 |a(x_0)| + o(1)}\ \varepsilon = \mathcal O(\varepsilon).
$$
Reinserting this into \eqref{phi a exp subsection} we find $\phi_a(x) = \mathcal O(\varepsilon)$. Inserting this into Proposition \ref{prop-r-bound}, we obtain the claimed bound on $\|\nabla r\|$, and inserting it into \eqref{alpha4 exp subsec} and \eqref{eq beta gamma subsec}, we obtain the claimed expansion of $\alpha^4$.
\end{proof}
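The last assertion of the corollary is a one-line substitution, which we spell out as a sketch, combining Propositions \ref{prop-alpha4} and \ref{prop-beta-gamma}.

```latex
% By Propositions on \alpha^4 and on \beta,
\alpha^4 = 1 - 4\beta\lambda^{-1}
  + \mathcal O(\phi_a(x)\lambda^{-1} + \lambda^{-2} + \varepsilon\lambda^{-1}) \,,
\qquad
\beta = \frac{16}{3\pi}\left(\phi_a(x) - \phi_0(x)\right) + \mathcal O(\lambda^{-1}) \,.
% Inserting \phi_a(x) = O(\varepsilon) and \lambda^{-1} = O(\varepsilon) gives
\alpha^4
  = 1 - \frac{64}{3\pi}\left(\phi_a(x) - \phi_0(x)\right)\lambda^{-1}
      + \mathcal O(\varepsilon\lambda^{-1})
  = 1 + \frac{64}{3\pi}\, \phi_0(x)\, \lambda^{-1} + \mathcal O(\varepsilon\lambda^{-1}) \,.
```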
The proof of \eqref{phi a exp subsection} is based on the Pohozaev identity obtained by integrating the equation for $u$ against $\partial_{\lambda} \psi_{x, \lambda}$. We write the resulting equality in the form
\begin{align}
\label{pohozaev identity lambda refined II}
& \int_\Omega \left( \nabla\psi_{x, \lambda}\cdot\nabla\partial_{\lambda}\psi_{x, \lambda} + (a+\varepsilon V)\psi_{x, \lambda}\partial_{\lambda}\psi_{x, \lambda} -3 \alpha^4 \psi_{x, \lambda}^5 \partial_{\lambda}\psi_{x, \lambda} \right) \notag \\
&= - \int_\Omega \left( \nabla q\cdot\nabla\partial_{\lambda}\psi_{x, \lambda} + a q \partial_{\lambda}\psi_{x, \lambda} -15 \alpha^4 q\psi_{x, \lambda}^4 \partial_{\lambda}\psi_{x, \lambda} \right) + 30\alpha^4 \int_\Omega q^2 \psi_{x, \lambda}^{3} \partial_{\lambda} \psi_{x, \lambda} + \mathcal R
\end{align}
with
$$
\mathcal R = - \varepsilon \int_\Omega V q \partial_{\lambda}\psi_{x, \lambda} + 3 \alpha^4 \sum_{k = 3}^5 {5 \choose k} \int_\Omega \psi_{x, \lambda}^{5-k} q^{k} \partial_{\lambda} \psi_{x, \lambda} .
$$
The involved terms can be expanded as follows.
\begin{lemma}
\label{lemma pohozaev bis}
We have
\begin{enumerate}
\item[(a)] \begin{align*}
& \int_\Omega \left( \nabla\psi_{x, \lambda}\cdot\nabla\partial_{\lambda}\psi_{x, \lambda} + (a+\varepsilon V)\psi_{x, \lambda}\partial_{\lambda}\psi_{x, \lambda} -3 \alpha^4 \psi_{x, \lambda}^5 \partial_{\lambda}\psi_{x, \lambda} \right) \\
& = -2\pi \phi_a(x) \lambda^{-2} - \frac12 Q_V(x) \varepsilon\lambda^{-2}
+ (1-\alpha^4) 4\pi \phi_a(x) \lambda^{-2} + \left( 2\pi^2 a(x) + 15 \pi^2 \phi_a(x)^2 \right) \lambda^{-3} \\
& \quad + o(\lambda^{-3}) + o(\varepsilon \lambda^{-2})
\end{align*}
\item[(b)] \begin{align*}
& \int_\Omega \left( \nabla q\cdot\nabla\partial_{\lambda}\psi_{x, \lambda} + a q \partial_{\lambda}\psi_{x, \lambda} -15 \alpha^4 q\psi_{x, \lambda}^4 \partial_{\lambda}\psi_{x, \lambda} \right) \\
& = -(1-\alpha^4)2\pi \left( \phi_a(x) - \phi_0(x) \right) \lambda^{-2} + \mathcal O(\phi_a(x)\lambda^{-3}) + o(\varepsilon\lambda^{-2}) + o(\lambda^{-3}) \,.
\end{align*}
\item[(c)] \begin{align*}
30\alpha^4 \int_\Omega q^2 \psi_{x, \lambda}^{3} \partial_{\lambda} \psi_{x, \lambda} = \frac{15\pi^2}{16}\, \beta\gamma\, \lambda^{-3} + \mathcal O(\phi_a(x)\lambda^{-3}) + o(\varepsilon\lambda^{-2}) + o(\lambda^{-3}) \,.
\end{align*}
\item[(d)] \begin{align*}
\mathcal R = \mathcal O(\phi_a(x)\lambda^{-3}) + o(\varepsilon\lambda^{-2}) + o(\lambda^{-3}) \,.
\end{align*}
\end{enumerate}
\end{lemma}
\noindent We emphasize that the proof of Lemma \ref{lemma pohozaev bis} is independent of the expansion of $\alpha^4$ in \eqref{alpha4 exp subsec}. We only use the fact that $\alpha=1+o(1)$.
\begin{proof}
[Proof of Lemma \ref{lemma pohozaev bis}]
(a) Because of \eqref{eq:eqpsi}, the quantity of interest can be written as
\begin{align}\label{eq:expnp1}
& \int_\Omega \left( \nabla\psi_{x, \lambda}\cdot\nabla\partial_{\lambda}\psi_{x, \lambda} + (a+\varepsilon V)\psi_{x, \lambda}\partial_{\lambda}\psi_{x, \lambda} -3 \alpha^4 \psi_{x, \lambda}^5 \partial_{\lambda}\psi_{x, \lambda} \right) \notag \\
& = 3 \int_\Omega \left( U_{x, \lambda}^5 - \alpha^4 \psi_{x, \lambda}^5 \right)\partial_{\lambda} \psi_{x, \lambda}
- \int_\Omega a (f_{x, \lambda} + g_{x, \lambda})\partial_{\lambda}\psi_{x, \lambda} + \varepsilon \int_\Omega V\psi_{x, \lambda}\partial_{\lambda}\psi_{x, \lambda} \,.
\end{align}
We discuss the three integrals on the right side separately. As a general rule, terms involving $f_{x, \lambda}$ will be negligible as a consequence of the bounds $\|f_{x, \lambda}\|_\infty = \mathcal O(\lambda^{-5/2})$ and $\| \partial_{\lambda} f_{x, \lambda}\|_\infty = \mathcal O(\lambda^{-7/2})$ in Lemma \ref{lemma PU}. This will not always be carried out in detail.
We have
\begin{equation}
\label{eq:expnp11}
\int_\Omega \left( U_{x, \lambda}^5 - \alpha^4 \psi_{x, \lambda}^5 \right)\partial_{\lambda} \psi_{x, \lambda}
= (1-\alpha^4) \int_\Omega U_{x, \lambda}^5 \partial_{\lambda} \psi_{x, \lambda} + \alpha^4 \int_\Omega \left( U_{x, \lambda}^5 - \psi_{x, \lambda}^5 \right)\partial_{\lambda} \psi_{x, \lambda} \,.
\end{equation}
The first integral is, since $\psi_{x, \lambda} = U_{x, \lambda} - \lambda^{-1/2}H_a(x,\cdot) - f_{x, \lambda}$,
$$
\int_\Omega U_{x, \lambda}^5 \partial_{\lambda} \psi_{x, \lambda} = \int_\Omega U_{x, \lambda}^5 \partial_{\lambda} U_{x, \lambda} + \frac12 \lambda^{-3/2} \int_\Omega U_{x, \lambda}^5 H_a(x,\cdot) + \mathcal O(\lambda^{-4}) \,.
$$
Since $\int_{\mathbb{R}^3} U_{x, \lambda}^5 \partial_{\lambda} U_{x, \lambda} = (1/6) \partial_{\lambda} \int_{\mathbb{R}^3} U_{x, \lambda}^6 = 0$, we have
\begin{align*}
\left| \int_\Omega U_{x, \lambda}^5 \partial_{\lambda} U_{x, \lambda} \right| = \left| \int_{\mathbb{R}^3\setminus\Omega} U_{x, \lambda}^5 \partial_{\lambda} U_{x, \lambda} \right| \lesssim \lambda^{-1} \int_{d \lambda}^\infty \left|\frac{r^2 - r^4}{(1 + r^2)^4} \right| \, dr = \mathcal O(\lambda^{-4}).
\end{align*}
Next, by Lemma \ref{lemma U Ha},
$$
\frac12 \lambda^{-3/2} \int_\Omega U_{x, \lambda}^5 H_a(x,\cdot) = \frac{2\pi}3 \phi_a(x) \lambda^{-2} + \mathcal O(\lambda^{-3}) \,.
$$
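The constant $2\pi/3$ here comes, via $H_a(x,y) = \phi_a(x) + \mathcal O(|x-y|)$, from $\int_{\mathbb{R}^3} U_{x, \lambda}^5 = \frac{4\pi}{3}\,\lambda^{-1/2}$. As a numerical sanity check (an aside, not part of the proof), the following minimal Python sketch verifies $\int_0^\infty r^2(1+r^2)^{-5/2}\,dr = 1/3$; it assumes the normalization $U_{0,1}(r) = (1+r^2)^{-1/2}$, consistent with the formula $\partial_\lambda U_{0,1} = (1-r^2)/(2(1+r^2)^{3/2})$ at $\lambda = 1$ appearing later in this proof.

```python
import math

def integral_0_inf(f, n=100000):
    """Midpoint rule after the substitution t = u/(1-u), which maps
    [0, infinity) onto [0, 1); dt = du/(1-u)^2."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        t = u / (1.0 - u)
        total += f(t) / (1.0 - u) ** 2
    return total * h

# int_0^oo r^2 (1+r^2)^(-5/2) dr = 1/3, hence
# int_{R^3} U_{x,lambda}^5 = 4*pi*lambda^(-1/2)*(1/3), and the prefactor
# in the display above is (1/2) * (4*pi/3) = 2*pi/3.
I5 = integral_0_inf(lambda r: r**2 * (1.0 + r**2) ** (-2.5))
print(I5)  # ~ 0.333333
```

The same quadrature helper is reused in the later numerical checks in this section.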
This completes our discussion of the first term on the right side of \eqref{eq:expnp11}. For the second term we have similarly,
\begin{align*}
\int_\Omega \left( U_{x, \lambda}^5 - \psi_{x, \lambda}^5 \right)\partial_{\lambda} \psi_{x, \lambda} & = \int_\Omega \left( U_{x, \lambda}^5 - (U_{x, \lambda}-\lambda^{-1/2}H_a(x,\cdot))^5 \right)\partial_{\lambda} (U_{x, \lambda}-\lambda^{-1/2}H_a(x,\cdot)) \\
& \quad + o(\lambda^{-3}) \\
& = 5 \lambda^{-1/2} \int_\Omega U_{x, \lambda}^4H_a(x,\cdot)\partial_{\lambda} U_{x, \lambda} + \frac52 \lambda^{-2} \int_\Omega U_{x, \lambda}^4H_a(x,\cdot)^2 \\
& \quad - 10 \lambda^{-1} \int_\Omega U_{x, \lambda}^3 H_a(x,\cdot)^2 \partial_{\lambda} U_{x, \lambda} \\
& \quad + \sum_{k=3}^5 {5 \choose k} (-1)^k \lambda^{-k/2} \int_\Omega U_{x, \lambda}^{5-k} H_a(x,\cdot)^k\partial_{\lambda} U_{x, \lambda} \\
& \quad - \frac12 \sum_{k=2}^5 {5 \choose k} (-1)^k \lambda^{-(k+3)/2} \int_\Omega U_{x, \lambda}^{5-k} H_a(x,\cdot)^{k+1} + o(\lambda^{-3}) \,.
\end{align*}
Again, by Lemma \ref{lemma U Ha},
\begin{align*}
& 5 \lambda^{-1/2} \int_\Omega U_{x, \lambda}^4H_a(x,\cdot)\partial_{\lambda} U_{x, \lambda} + \frac52 \lambda^{-2} \int_\Omega U_{x, \lambda}^4H_a(x,\cdot)^2 - 10 \lambda^{-1} \int_\Omega U_{x, \lambda}^3 H_a(x,\cdot)^2 \partial_{\lambda} U_{x, \lambda} \\
& = -\frac23 \pi \phi_a(x) \lambda^{-2} + \left( 2\pi a(x) + 5 \pi^2 \phi_a(x)^2 \right) \lambda^{-3} + o(\lambda^{-3}) \,.
\end{align*}
Finally, the two sums are bounded, in absolute value, by
\begin{align*}
& \int_\Omega (U_{x, \lambda}^2 \lambda^{-3/2} |H_a(x,\cdot)|^3 + \lambda^{-5/2} |H_a(x,\cdot)|^5)|\partial_{\lambda} U_{x, \lambda}| + \int_\Omega (U_{x, \lambda}^3 \lambda^{-5/2} |H_a(x,\cdot)|^3 + \lambda^{-4} |H_a(x,\cdot)|^6) \\
& \lesssim \|\partial_{\lambda} U_{x, \lambda}\|_6 (\|U_{x, \lambda}\|_{12/5}^2 \lambda^{-3/2} + \lambda^{-5/2}) + \|U_{x, \lambda}\|_3^3 \lambda^{-5/2} + \lambda^{-4} = o(\lambda^{-3}).
\end{align*}
This completes our discussion of the second term on the right side of \eqref{eq:expnp11} and therefore of the first term on the right side of \eqref{eq:expnp1}.
For the second term on the right side of \eqref{eq:expnp1} we get, using $\psi_{x, \lambda} = U_{x, \lambda} - \lambda^{-1/2}H_a(x,\cdot) - f_{x, \lambda}$,
$$
\int_\Omega a (f_{x, \lambda} + g_{x, \lambda})\partial_{\lambda}\psi_{x, \lambda} = \int_\Omega a g_{x, \lambda} \partial_{\lambda} U_{x, \lambda} +\frac12 \lambda^{-3/2} \int_\Omega a g_{x, \lambda} H_a(x,\cdot) + o(\lambda^{-3}) \,.
$$
The second integral is negligible since, by Lemma \ref{lem-g},
$$
\left| \frac12 \lambda^{-3/2} \int_\Omega a g_{x, \lambda} H_a(x,\cdot) \right| \lesssim \lambda^{-3/2} \int_{\Omega} g_{x, \lambda} \lesssim \lambda^{-4}\ln\lambda \,.
$$
Since $a$ is differentiable, we can expand the first integral as
$$
\int_\Omega a g_{x, \lambda} \partial_{\lambda} U_{x, \lambda} = a(x) \int_\Omega g_{x, \lambda} \partial_{\lambda} U_{x, \lambda} + \mathcal O\left( \int_\Omega |x-y| g_{x, \lambda} |\partial_{\lambda} U_{x, \lambda}| \right).
$$
We have
$$
\int_\Omega g_{x, \lambda} \partial_{\lambda} U_{x, \lambda} = \lambda^{-3} \int_{\lambda(\Omega-x)} g_{0,1} \partial_\lambda U_{0,1} = \lambda^{-3} \int_{\mathbb{R}^3} g_{0,1} \partial_\lambda U_{0,1} + o(\lambda^{-3})
$$
and
$$
\int_{\mathbb{R}^3} g_{0,1} \partial_\lambda U_{0,1} = 4\pi \int_0^\infty \left( \frac1r - \frac1{\sqrt{1+r^2}} \right) \frac{1-r^2}{2(1+r^2)^{3/2}} \,r^2\,dr = 2\pi(3-\pi) \,.
$$
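The value $2\pi(3-\pi)$ can be confirmed by direct numerical quadrature. The following Python sketch (a sanity check, not part of the argument) integrates exactly the displayed radial integrand:

```python
import math

def integral_0_inf(f, n=100000):
    # Midpoint rule after t = u/(1-u), mapping [0, infinity) onto [0, 1).
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        t = u / (1.0 - u)
        total += f(t) / (1.0 - u) ** 2
    return total * h

def integrand(r):
    # (1/r - 1/sqrt(1+r^2)) * (1-r^2)/(2(1+r^2)^(3/2)) * r^2, as displayed
    return ((1.0 / r - 1.0 / math.sqrt(1.0 + r * r))
            * (1.0 - r * r) / (2.0 * (1.0 + r * r) ** 1.5) * r * r)

val = 4.0 * math.pi * integral_0_inf(integrand)
print(val, 2.0 * math.pi * (3.0 - math.pi))  # both ~ -0.8897
```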
Using similar bounds one verifies that
$$
\int_\Omega |x-y| g_{x, \lambda} |\partial_{\lambda} U_{x, \lambda}| \lesssim \lambda^{-4} \int_{\lambda(\Omega-x)} |z| g_{0,1} |\partial_\lambda U_{0,1} | \lesssim \lambda^{-4} \,.
$$
This completes our discussion of the second term on the right side of \eqref{eq:expnp1}.
For the third term on the right side of \eqref{eq:expnp1}, we write $\psi_{x, \lambda}= \lambda^{-1/2} G_a(x,\cdot) - f_{x, \lambda}-g_{x, \lambda}$ and get
\begin{align*}
& \int_\Omega V\psi_{x, \lambda}\partial_{\lambda}\psi_{x, \lambda} = \int_\Omega V \left( \lambda^{-1/2} G_a(x,\cdot) - g_{x, \lambda} \right) \partial_\lambda \left( \lambda^{-1/2} G_a(x,\cdot) - g_{x, \lambda} \right) + o(\lambda^{-2}) \\
& = - \frac12 \lambda^{-2} Q_V(x) + \mathcal O \left( \lambda^{-3/2} \int_\Omega G_a(x,\cdot) g_{x, \lambda} + \lambda^{-1/2} \int_\Omega G_a(x,\cdot) |\partial_{\lambda} g_{x, \lambda}| + \int_\Omega g_{x, \lambda} |\partial_{\lambda} g_{x, \lambda}| \right) \\
& \quad + o(\lambda^{-2}) \\
&= - \frac12 \lambda^{-2} Q_V(x) + \mathcal O\left(\lambda^{-3/2} \|G_a(x,\cdot)\|_2 \|g_{x, \lambda}\|_2 + \lambda^{-1} \|G_a(x,\cdot)\|_2 \|\partial_{\lambda} g_{x, \lambda}\|_2 + \|g_{x, \lambda}\|_2 \|\partial_{\lambda} g_{x, \lambda}\|_2 \right) \\
& \quad + o(\lambda^{-2}) \\
&= - \frac12 \lambda^{-2} Q_V(x) + o( \lambda^{-2}).
\end{align*}
In the last equality we used the bounds from Lemma \ref{lem-g} and the fact that $G_a(x,\cdot)\in L^2(\Omega)$. This completes our discussion of the third term on the right side of \eqref{eq:expnp1} and concludes the proof of (a).
\medskip
(b) We note that \eqref{eq:eqpsi} yields
$$
-\Delta\partial_{\lambda}\psi_{x, \lambda} + a \partial_{\lambda}\psi_{x, \lambda} = 15 U_{x, \lambda}^4 \partial_{\lambda} U_{x, \lambda} - a\left( \partial_{\lambda} f_{x, \lambda} + \partial_{\lambda} g_{x, \lambda} \right).
$$
Because of this equation, the quantity of interest can be written as
\begin{align}\label{eq:expnp2}
& \int_\Omega \left( \nabla q\cdot\nabla\partial_{\lambda}\psi_{x, \lambda} + a q \partial_{\lambda}\psi_{x, \lambda} -15 \alpha^4 q\psi_{x, \lambda}^4 \partial_{\lambda}\psi_{x, \lambda} \right) \notag \\
& = 15 \int_\Omega q \left( U_{x, \lambda}^4 \partial_{\lambda} U_{x, \lambda} - \alpha^4 \psi_{x, \lambda}^4 \partial_{\lambda}\psi_{x, \lambda}\right)
- \int_\Omega a q \left( \partial_{\lambda} f_{x, \lambda} + \partial_{\lambda} g_{x, \lambda} \right).
\end{align}
We discuss the two integrals on the right side separately.
We have
\begin{align}
\label{eq:expnp21}
\int_\Omega q \left( U_{x, \lambda}^4 \partial_{\lambda} U_{x, \lambda} - \alpha^4 \psi_{x, \lambda}^4 \partial_{\lambda}\psi_{x, \lambda}\right)
& = (1-\alpha^4) \int_\Omega q U_{x, \lambda}^4 \partial_{\lambda} U_{x, \lambda} \notag \\
& \quad + \alpha^4 \int_\Omega q \left( U_{x, \lambda}^4 \partial_{\lambda} U_{x, \lambda} - \psi_{x, \lambda}^4 \partial_{\lambda}\psi_{x, \lambda}\right).
\end{align}
The first integral is, in view of the orthogonality condition $0 = \int_\Omega \nabla w\cdot \nabla\partial_{\lambda} P U_{x, \lambda} = 15 \int_\Omega w U_{x, \lambda}^4\partial_{\lambda} U_{x, \lambda}$,
$$
\int_\Omega q U_{x, \lambda}^4 \partial_{\lambda} U_{x, \lambda} = \lambda^{-1/2} \int_\Omega \left( H_a(x,\cdot) - H_0(x,\cdot) \right) U_{x, \lambda}^4 \partial_{\lambda} U_{x, \lambda} = - \frac2{15} \pi \left( \phi_a(x)-\phi_0(x)\right) \lambda^{-2} + \mathcal O(\lambda^{-3}).
$$
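The constant $-\tfrac{2}{15}\pi$ stems from $\int_{\mathbb{R}^3} U_{x, \lambda}^4 \partial_{\lambda} U_{x, \lambda} = \tfrac15\, \partial_\lambda \int_{\mathbb{R}^3} U_{x, \lambda}^5 = -\tfrac{2\pi}{15}\,\lambda^{-3/2}$, combined with $H_a - H_0 = \phi_a(x) - \phi_0(x) + \mathcal O(|x-y|)$. A quick numerical sanity check in Python at $\lambda = 1$, again assuming the normalization $U_{0,1}(r) = (1+r^2)^{-1/2}$:

```python
import math

def integral_0_inf(f, n=100000):
    # Midpoint rule after t = u/(1-u), mapping [0, infinity) onto [0, 1).
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        t = u / (1.0 - u)
        total += f(t) / (1.0 - u) ** 2
    return total * h

# U^4 * (dU/dlambda) at lambda = 1:
#   (1+r^2)^(-2) * (1-r^2)/(2(1+r^2)^(3/2)) = (1-r^2)/(2(1+r^2)^(7/2))
val = 4.0 * math.pi * integral_0_inf(
    lambda r: r * r * (1.0 - r * r) / (2.0 * (1.0 + r * r) ** 3.5))
print(val, -2.0 * math.pi / 15.0)  # both ~ -0.4189
```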
For the second integral on the right side of \eqref{eq:expnp21} we have
\begin{align*}
& \int_\Omega q \left( U_{x, \lambda}^4 \partial_{\lambda} U_{x, \lambda} - \psi_{x, \lambda}^4 \partial_{\lambda}\psi_{x, \lambda}\right) \\
& = \int_\Omega q \left( U_{x, \lambda}^4 \partial_{\lambda} U_{x, \lambda} - (U_{x, \lambda} - \lambda^{-1/2} H_a(x,\cdot))^4 \partial_{\lambda} \left( U_{x, \lambda} - \lambda^{-1/2} H_a(x,\cdot) \right) \right) + o(\lambda^{-3})\\
& = \mathcal O(\phi_a(x)\lambda^{-3}) + o(\varepsilon\lambda^{-2}) + o(\lambda^{-3}) \,.
\end{align*}
Let us justify the claimed bound here for a typical term. We write $H_a(x,y) = \phi_a(x) + \mathcal O(|x-y|)$ and get
$$
\int_\Omega q U_{x, \lambda}^4 \lambda^{-3/2} H_a(x,\cdot) = \lambda^{-3/2} \phi_a(x) \int_\Omega q U_{x, \lambda}^4 + \mathcal O\left( \lambda^{-3/2} \int_\Omega q U_{x, \lambda}^4 |x-y| \right).
$$
Using the bound \eqref{eq:qbound} on $q$ and Lemma \ref{lemma Lq norm of U} we get
$$
\left| \int_\Omega q U_{x, \lambda}^4 \right| \leq \|q\|_6 \|U_{x, \lambda}\|_{24/5}^4 \lesssim \lambda^{-3/2} + \varepsilon \lambda^{-1} \,.
$$
The remainder term is better because of the additional factor of $|x-y|$. We gain a factor of $\lambda^{-1}$ since
$$
\left\| |x-\cdot|^{1/4} U_{x, \lambda} \right\|_{24/5}^4 \lesssim \lambda^{-3/2} \,.
$$
Another typical term,
$$
\int_\Omega q U_{x, \lambda}^3 \lambda^{-1/2} H_a(x,\cdot) \partial_{\lambda} U_{x, \lambda} \,,
$$
can be treated in the same way, since the bounds for $\partial_{\lambda} U_{x, \lambda}$ are the same as for $\lambda^{-1} U_{x, \lambda}$; see Lemma \ref{lemma Lq norm of U}. The remaining terms are easier. This completes our discussion of the first term on the right side of \eqref{eq:expnp2}.
The second term on the right side of \eqref{eq:expnp2} is negligible. Indeed,
$$
\int_\Omega a q \left( \partial_{\lambda} f_{x, \lambda} + \partial_{\lambda} g_{x, \lambda} \right) = \mathcal O( \|q\|_6 \|\partial_{\lambda} g_{x, \lambda}\|_{6/5}) + o(\lambda^{-3}) = o(\lambda^{-3}) \,,
$$
where we used Lemma \ref{lem-g} and the same bound on $q$ as before. This completes our discussion of the second term on the right side of \eqref{eq:expnp2} and concludes the proof of (b).
\medskip
(c) We use the form \eqref{expansion s} of the zero modes $s$, as well as the bounds on $\|\nabla s\|$ and $\|\nabla r\|$ from \eqref{bounds s} and \eqref{nabla r subsection}, to find
\begin{align}
& \int_\Omega q^2\, \psi_{x, \lambda}^3 \, \partial_{\lambda} \psi_{x, \lambda} = \int_\Omega s^2\, \psi_{x, \lambda}^3 \, \partial_{\lambda} \psi_{x, \lambda} + \mathcal O(\phi_a(x) \lambda^{-3}) + o(\lambda^{-3}) + o(\varepsilon \lambda^{-2}) \nonumber \\
& = \beta^2\lambda^{-2} \int_\Omega U_{x, \lambda}^5 \, \partial_{\lambda} U_{x, \lambda} +2\beta\gamma\, \lambda^{-1}
\int_\Omega U_{x, \lambda}^4 \, (\partial_{\lambda} U_{x, \lambda} )^2 +\gamma^2 \int_\Omega U_{x, \lambda}^3 \, (\partial_{\lambda} U_{x, \lambda} )^3 \nonumber \\
& \quad + \mathcal O(\phi_a(x) \lambda^{-3}) + o(\lambda^{-3}) + o(\varepsilon \lambda^{-2}) \, . \label{rk-2}
\end{align}
A direct calculation using \eqref{pl-U} gives
$$
\lambda^{-2} \int_\Omega U_{x, \lambda}^5 \, \partial_{\lambda} U_{x, \lambda} = o(\lambda^{-3}), \qquad \int_\Omega U_{x, \lambda}^3 \, (\partial_{\lambda} U_{x, \lambda} )^3 = o(\lambda^{-3})
$$
and
\begin{align*}
\int_\Omega U_{x, \lambda}^4 \, (\partial_{\lambda} U_{x, \lambda} )^2 & = \frac 14\, \lambda^{-2} \int_\Omega U_{x, \lambda}^6 -\lambda^3 \int_\Omega \frac{ |x-y|^2}{(1+\lambda^2\, |x-y|^2)^{4}}\, + \lambda^5 \int_\Omega \frac{ |x-y|^4}{(1+\lambda^2\, |x-y|^2)^{5}}\ \\
& = \frac{\pi^2}{16}\, \lambda^{-2} -4\pi \lambda^{-2}\, \int_0^\infty\frac{t^4\, dt}{(1+t^2)^4} +4\pi \lambda^{-2} \int_0^\infty\frac{t^6\, dt}{(1+t^2)^5}+ o(\lambda^{-2}) \\
& = \frac{\pi^2}{64}\, \lambda^{-2}+ o(\lambda^{-2}) .
\end{align*}
Inserting this into \eqref{rk-2} gives the claimed expansion (c).
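The one-dimensional integrals behind this computation evaluate, via $\int_0^\infty t^{a-1}(1+t^2)^{-b}\,dt = \tfrac12 B(\tfrac a2, b-\tfrac a2)$, to $\int_0^\infty \tfrac{t^2\,dt}{(1+t^2)^3} = \tfrac{\pi}{16}$ (so $\int_{\mathbb{R}^3} U_{x, \lambda}^6 = \tfrac{\pi^2}{4}$), $\int_0^\infty \tfrac{t^4\,dt}{(1+t^2)^4} = \tfrac{\pi}{32}$ and $\int_0^\infty \tfrac{t^6\,dt}{(1+t^2)^5} = \tfrac{5\pi}{256}$, with $\tfrac{\pi^2}{16} - \tfrac{\pi^2}{8} + \tfrac{5\pi^2}{64} = \tfrac{\pi^2}{64}$. A short Python sanity check of these identities (an aside, not part of the proof):

```python
import math

def integral_0_inf(f, n=100000):
    # Midpoint rule after t = u/(1-u), mapping [0, infinity) onto [0, 1).
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        t = u / (1.0 - u)
        total += f(t) / (1.0 - u) ** 2
    return total * h

pi = math.pi
I6 = integral_0_inf(lambda t: t**2 / (1 + t**2) ** 3)  # = pi/16
I4 = integral_0_inf(lambda t: t**4 / (1 + t**2) ** 4)  # = pi/32
I8 = integral_0_inf(lambda t: t**6 / (1 + t**2) ** 5)  # = 5*pi/256

# pi^2/16 - 4*pi*I4 + 4*pi*I8 reproduces the claimed pi^2/64.
total = (1.0 / 4.0) * 4 * pi * I6 - 4 * pi * I4 + 4 * pi * I8
print(total, pi**2 / 64)  # both ~ 0.1542
```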
The proof of (d) uses bounds similar to those in the rest of the proof and is omitted.
\end{proof}
\begin{proof}[Proof of Proposition \ref{phiaexp}]
Combining \eqref{pohozaev identity lambda refined II} with Lemma \ref{lemma pohozaev bis} yields
\begin{align}
\label{phi a intermed}
0 &= -4 \pi \phi_a(x) \lambda^{-2} - Q_V(x) \varepsilon \lambda^{-2} + 4 \pi^2 a(x) \lambda^{-3} + \lambda^{-3} R \notag \\
& \quad + \mathcal O(\phi_a(x) \lambda^{-3}) + o(\lambda^{-3}) + o(\varepsilon \lambda^{-2})
\end{align}
with
\begin{equation*}
R = \lambda(1-\alpha^4) 4\pi \left( \phi_a(x)+\phi_0(x) \right) + 30\pi^2\phi_a(x)^2 - \frac{15}{8} \beta \gamma \pi^2 \,.
\end{equation*}
We now make use of the expansion \eqref{alpha4 exp subsec} of $\alpha^4-1$ and obtain
\begin{align*}
R & = 16 \beta \pi \phi_0(x) - \frac{15}{8} \beta \gamma \pi^2 + \mathcal O(\phi_a(x) + \lambda^{-1} + \varepsilon) \,.
\end{align*}
Inserting the expansions \eqref{eq beta gamma subsec} of $\beta$ and $\gamma$, we find the cancellation
\begin{align}
\label{R3}
R & = \mathcal O(\phi_a(x) + \lambda^{-1} + \varepsilon) \,.
\end{align}
In particular, $R=\mathcal O(1)$ and, inserting this into \eqref{phi a intermed}, we obtain
\[ \phi_a(x) = \mathcal O(\lambda^{-1} + \varepsilon). \]
In particular, for the error term in \eqref{phi a intermed}, we have $\phi_a(x) \lambda^{-3} = o(\lambda^{-3})$ and, moreover, by \eqref{R3}, $R=\mathcal O(\lambda^{-1} + \varepsilon)$. Inserting this bound into \eqref{phi a intermed}, we obtain the claimed expansion \eqref{phi a exp subsection}.
\end{proof}
\subsection{The bound on $|\nabla \phi_a(x)|$}
The goal of this subsection is to prove
\begin{proposition} \label{prop-grad-phi}
For every $\mu<1$, as $\varepsilon\to 0$,
\begin{equation} \label{bound-grad-phi}
|\nabla \phi_a(x)| \lesssim \varepsilon^\mu \,.
\end{equation}
\end{proposition}
The proof of this proposition is a refined version of the proof of Proposition \ref{prop bdry concentration}. It is also based on expanding the Pohozaev identity \eqref{pohozaev type u}. Abbreviating, for $v,z \in H^1(\Omega)$,
\[ I[v,z] := \int_{\partial \Omega} \frac{\partial v}{\partial n} \frac{\partial z}{\partial n} n + \int_\Omega (\nabla a) vz , \]
and writing $u = \alpha (\psi_{x, \lambda} + q)$ and abbreviating $I[v] := I[v,v]$, we can write identity \eqref{pohozaev type u} as
\begin{equation}
\label{pohoz-nabla-u}
0 = I[\psi_{x, \lambda}] + 2 I[\psi_{x, \lambda}, q] + I[q] + \varepsilon \int_\Omega (\nabla V) (\psi_{x, \lambda} + q)^2 \,.
\end{equation}
The following lemma extracts the leading contribution from the main term $I[\psi_{x, \lambda}]$.
\begin{lemma}
\label{lemma I psi}
$I[\psi_{x, \lambda}] = 4 \pi \nabla \phi_a(x) \lambda^{-1} + \mathcal O(\lambda^{-1-\mu})$ for every $\mu < 1$.
\end{lemma}
On the other hand, the next lemma allows us to control the error terms involving $q$.
\begin{lemma}
\label{lemma q bdry integral}
$\|\frac{\partial q}{\partial n}\|_{L^2(\partial \Omega)} \lesssim \varepsilon \lambda^{-1/2}$.
\end{lemma}
Before proving these two lemmas, let us use them to give the proof of Proposition \ref{prop-grad-phi}. In that proof, and later in this subsection, we will use the inequality
\begin{equation}
\label{eq:qbound2}
\|q\|_2 \lesssim \varepsilon \lambda^{-1/2} \,.
\end{equation}
This follows from the bound \eqref{bounds s} on $s$ and the bounds in Corollary \ref{cor-lambda-eps} on $\lambda^{-1}$ and $r$. Note that \eqref{eq:qbound2} is better than the bound \eqref{eq:qbound} in the $L^6$ norm.
\begin{proof}[Proof of Proposition \ref{prop-grad-phi}]
We shall make use of the bounds
\begin{equation}
\label{eq:psibounds2}
\|\psi_{x, \lambda}\|_2 + \| \frac{\partial \psi_{x, \lambda}}{\partial n}\|_{L^2(\partial \Omega)} \lesssim \lambda^{-1/2} \,.
\end{equation}
The first bound follows by writing $\psi_{x, \lambda} = U_{x, \lambda} -\lambda^{-1/2}H_a(x,\cdot) + f_{x, \lambda}$ and using the bounds in Lemmas \ref{lemma Lq norm of U} and \ref{lemma PU} and in \eqref{Ha-bound}. For the second bound we write $\psi_{x, \lambda} = PU_{x, \lambda} - \lambda^{-1/2}(H_a(x,\cdot)-H_0(x,\cdot))$ and use the bounds in Lemmas \ref{lemma PU bdry integral} and \ref{lemma nabla Hb bounds}.
Combining the bounds \eqref{eq:psibounds2} with the corresponding bounds for $q$ from Lemma \ref{lemma q bdry integral} and \eqref{eq:qbound2} we obtain
\[
\left| I[\psi_{x, \lambda}, q] \right| \lesssim \varepsilon \lambda^{-1}, \qquad \left| I[q] \right| \lesssim \varepsilon^2 \lambda^{-1} \,.
\]
Moreover, by \eqref{eq:qbound2} and \eqref{eq:psibounds2},
$$
\varepsilon \left| \int_\Omega (\nabla V) (\psi_{x, \lambda}+q)^2 \right| \lesssim \varepsilon \lambda^{-1}.
$$
In view of these bounds, Lemma \ref{lemma I psi} and equation \eqref{pohoz-nabla-u} imply $|\nabla \phi_a(x)| \lesssim \varepsilon + \lambda^{-\mu}$. Because of \eqref{lambda-1}, this implies \eqref{bound-grad-phi}.
\end{proof}
It remains to prove Lemmas \ref{lemma I psi} and \ref{lemma q bdry integral}.
\begin{proof}
[Proof of Lemma \ref{lemma I psi}]
We integrate equation \eqref{eq:eqpsi} for $\psi_{x, \lambda}$ against $\nabla \psi_{x, \lambda}$ and obtain
\begin{equation}
\label{eq:lemmaipsiproof}
-\frac12 I[\psi_{x, \lambda}] = 3 \int_\Omega U_{x, \lambda}^5 \nabla \psi_{x, \lambda} - \int_\Omega a (f_{x, \lambda} + g_{x, \lambda}) \nabla \psi_{x, \lambda} \,.
\end{equation}
For the first integral on the right side we write $\psi_{x, \lambda} = U_{x, \lambda} - \lambda^{-1/2} H_a(x, \cdot) + f_{x, \lambda}$ and integrate by parts to obtain
\begin{align*}
3 \int_\Omega U_{x, \lambda}^5 \nabla \psi_{x, \lambda} & = 3 \int_{\partial\Omega} U_{x, \lambda}^5 \left( \frac16\, U_{x, \lambda} - \lambda^{-1/2}H_a(x,\cdot) + f_{x, \lambda} \right) n \\
& \quad + 15 \int_\Omega U_{x, \lambda}^4 (\nabla U_{x, \lambda}) \left(\lambda^{-1/2}H_a(x,\cdot) -f_{x, \lambda} \right).
\end{align*}
By Lemma \ref{lemma U Ha}, see also Remark \ref{rem:U^4 pxi Hb}, we have
$$
\int_\Omega U_{x, \lambda}^4 (\nabla U_{x, \lambda}) H_a(x,\cdot) = -\int_\Omega U_{x, \lambda}^4 (\nabla_x U_{x, \lambda}) H_a(x,\cdot) = -\frac{2\pi}{15} \nabla\phi_a(x) \lambda^{-1/2} + \mathcal O(\lambda^{-1/2-\mu}) \,.
$$
Finally, since $U_{x, \lambda} \lesssim \lambda^{-1/2}$ on $\partial\Omega$ and by the bounds on $U_{x, \lambda}$, $f_{x, \lambda}$ and $H_a(x,\cdot)$ from Lemmas \ref{lemma Lq norm of U} and \ref{lemma PU} and from \eqref{Ha-bound}, we have
$$
3 \int_{\partial\Omega} U_{x, \lambda}^5 \left( \frac16 U_{x, \lambda} - \lambda^{-1/2}H_a(x,\cdot) + f_{x, \lambda} \right) n + 15 \int_\Omega U_{x, \lambda}^4 (\nabla U_{x, \lambda}) f_{x, \lambda} = \mathcal O(\lambda^{-2}) \,.
$$
This shows that the first term on the right side of \eqref{eq:lemmaipsiproof} gives the claimed contribution.
On the other hand, for the second term on the right side of \eqref{eq:lemmaipsiproof} we have
\begin{align*}
& \int_\Omega a(f_{x, \lambda}+g_{x, \lambda})\nabla\psi_{x, \lambda} = \int_\Omega a (f_{x, \lambda} +g_{x, \lambda}) \nabla (U_{x, \lambda} -\lambda^{-1/2} H_a(x,\cdot)) \\
& \quad - \frac12 \int_\Omega (\nabla a) f_{x, \lambda}^2 - \int_\Omega (a\nabla g_{x, \lambda} + g_{x, \lambda} \nabla a) f_{x, \lambda} + \frac12 \int_{\partial\Omega} a f_{x, \lambda}^2 + \int_{\partial \Omega} a f_{x, \lambda} g_{x, \lambda} \\
& = \int_\Omega a g_{x, \lambda} \nabla U_{x, \lambda} + \mathcal O(\lambda^{-3}) \,.
\end{align*}
Here we used bounds from Lemmas \ref{lemma PU} and \ref{lem-g} and from the proof of the latter. Finally, we write $a(y) = a(x) + \mathcal O(|x-y|)$ and use the oddness of $g_{x, \lambda} \nabla U_{x, \lambda}$ to obtain
$$
\int_\Omega a g_{x, \lambda} \nabla U_{x, \lambda} = \mathcal O\left( \int_\Omega |x-y| g_{x, \lambda} |\nabla U_{x, \lambda}| \right) = \mathcal O(\lambda^{-2}) \,.
$$
This proves the claimed bound on the second term on the right side of \eqref{eq:lemmaipsiproof}.
\end{proof}
\begin{proof}
[Proof of Lemma \ref{lemma q bdry integral}]
The proof is analogous to that of Lemma \ref{lemma w bdry integral}. By combining equation \eqref{equation w} for $w$ with $\Delta(H_a(x,\cdot) - H_0(x,\cdot)) = - aG_a(x,\cdot)$, we obtain $-\Delta q = F$ with
$$
F:= -3U_{x, \lambda}^5 + 3 \alpha^4 (\psi_{x, \lambda} + q)^5 - aq + a (f_{x, \lambda} + g_{x, \lambda}) - \varepsilon V(\psi_{x, \lambda}+q) \,.
$$
(We use the same notation as in the proof of Lemma \ref{lemma w bdry integral} for analogous, but different objects.)
We define the cut-off function $\zeta$ as before, but now in our bounds we do not make the dependence on $d$ explicit, since we know already $d^{-1}=\mathcal O(1)$ by Proposition \ref{prop bdry concentration}. Then $\zeta q\in H^2(\Omega)\cap H^1_0(\Omega)$ and
$$
-\Delta(\zeta q) = \zeta F - 2\nabla\zeta \cdot\nabla q - (\Delta\zeta)q \,.
$$
We claim that
\begin{equation}
\label{eq:ptwf}
\zeta |F| \lesssim \zeta |q|^5 + \varepsilon \zeta U_{x, \lambda} + |q| + \varepsilon \lambda^{-1/2} \,.
\end{equation}
Indeed, on $\Omega \setminus B_{d/2}(x)$, we have $U_{x, \lambda} \lesssim \lambda^{-1/2}$ and $g_{x, \lambda}\lesssim \lambda^{-5/2}$. By Corollary \ref{cor-lambda-eps}, we have $\lambda^{-5/2} = \mathcal O(\varepsilon \lambda^{-1/2})$. Moreover, we write $\psi_{x, \lambda} = U_{x, \lambda} - \lambda^{-1/2} H_a(x, \cdot) + f_{x, \lambda}$ and use the bounds on $f_{x, \lambda}$ and $H_a(x,\cdot)$ from Lemma \ref{lemma PU} and \eqref{Ha-bound}.
Combining \eqref{eq:ptwf} with inequality \eqref{trace estimate w}, we obtain
\begin{align*}
\left\| \frac{\partial q}{\partial n} \right\|_{L^2(\partial \Omega)} & = \left\| \frac{\partial (\zeta q)}{\partial n} \right\|_{L^2(\partial \Omega)} \lesssim \| \zeta F - 2\nabla\zeta \cdot\nabla q - (\Delta\zeta)q \|_{3/2} \\
& \lesssim \|\zeta q^5\|_{3/2} + \varepsilon \|\zeta U_{x, \lambda}\|_{3/2} + \|q\|_{3/2} + \varepsilon \lambda^{-1/2} + \| |\nabla \zeta| |\nabla q|\|_{3/2} + \|(\Delta \zeta) q\|_{3/2} \,.
\end{align*}
It remains to bound the norms on the right side. All terms, except for the first one, are easily bounded. Indeed, by \eqref{eq:qbound2},
\[ \|q\|_{3/2} + \| (\Delta\zeta) q \|_{3/2} \lesssim \|q\|_2 \lesssim \varepsilon \lambda^{-1/2} \]
and
\[ \| |\nabla \zeta| |\nabla q| \|_{3/2} \lesssim \|\nabla q\|_{L^2(\Omega \setminus B_{d/2}(x))} \leq \|\nabla s\|_{L^2(\Omega \setminus B_{d/2}(x))} + \|\nabla r\|_2 \lesssim \varepsilon \lambda^{-1/2}, \]
where we used $\|\nabla s\|_{L^2(\Omega \setminus B_{d/2}(x))} \lesssim \lambda^{-3/2}$ by \eqref{bounds s} and $\|\nabla r\| \lesssim \varepsilon \lambda^{-1/2}$ by Corollary \ref{cor-lambda-eps}. (Notice that for the estimate on $s$ it is crucial that the integral avoids $B_{d/2}(x)$.) Moreover, by Lemma \ref{lemma Lq norm of U},
$$
\|\zeta U_{x, \lambda}\|_{3/2} \lesssim \|U_{x, \lambda}\|_{L^{3/2}(\Omega\setminus B_{d/2}(x))} \lesssim \lambda^{-1/2} \,.
$$
To bound the remaining term $\|\zeta q^5\|_{3/2}$ we argue as in Lemma \ref{lemma w bdry integral} above and get
\begin{align*} \| \zeta q^5\|_{3/2} &= \left(\int_\Omega |\zeta^{1/4} |q|^{1/4} q|^6 \right)^{2/3} \lesssim \left(\int_\Omega |\nabla (\zeta^{1/4} |q|^{1/4} q)|^2 \right)^2 \\
&\lesssim \left( \int_\Omega |q|^{5/2} |\nabla (\zeta^{1/4})|^2 \right) ^2 + \left( \int_\Omega |F|\, \zeta^{1/2} |q|^{3/2} \right)^2 \nonumber \\
& \lesssim \|q\|_6^5 + \left( \int_\Omega |F|\, \zeta^{1/2} |q|^{3/2} \right)^2.
\end{align*}
We use the pointwise estimate \eqref{eq:ptwf} on $\zeta F$, which is equally valid for $\zeta^{1/2} F$. The term coming from $|q|^5$ is bounded by
\[ \left( \int_\Omega |q|^{5 + \frac 32} \zeta^{1/2} \right)^2 = \left( \int_\Omega ( \zeta|q|^{5})^{1/2} q^4 \right)^2 \leq \|\zeta q^5\|_{3/2} \|q\|_6^{8} = o(\|\zeta q^5\|_{3/2}), \]
which can be absorbed into the left side. The contributions from the remaining terms in the pointwise bound on $\zeta^{1/2} |F|$ can be easily controlled and we obtain
\[ \| \zeta q^5\|_{3/2} \lesssim \|q\|_6^5 + \lambda^{-5} + (\varepsilon \lambda^{-1/2})^5 \lesssim \varepsilon \lambda^{-1/2}. \]
Collecting all the estimates, we obtain the claimed bound.
\end{proof}
\subsection{A bound on $\|w\|_\infty$}
\label{subsection infty bound w}
In this subsection, we prove a crude bound on the $L^\infty$ norm of the first-order remainder $w$ appearing in the decomposition $u = \alpha( PU_{x, \lambda} + w)$. This bound is not needed in the proof of Theorem \ref{thm expansion}, but only in that of Theorem \ref{thm BP}.
\begin{proposition}
\label{proposition infty bound w}
As $\varepsilon \to 0$,
\begin{equation}
\label{infty bound w}
\|w\|_\infty = o(\lambda^{1/2}) \,.
\end{equation}
\end{proposition}
Our proof follows \cite[proof of (25)]{Re1}, which concerns the case $N \geq 4$ and $a=0$. Since some of the required modifications are rather complicated to state, we give details for the convenience of the reader.
\begin{proof}
We define $F$ by \eqref{eq:eqwrhs} and write equation \eqref{equation w} for $w$ as
\begin{equation}
\label{equation w with greensfct}
w (x) = \frac{1}{4 \pi} \int_\Omega G_0(x,y) F(y) .
\end{equation}
By H\"older's inequality and the fact that $0\leq G_0(x,y)\leq |x-y|^{-1}$, we have for every $\delta \in (0,2)$
\[ \|w\|_\infty \leq \sup_{x \in \Omega} \|G_0(x, \cdot)\|_{3 - \delta} \|F\|_{\frac{3 - \delta}{2 - \delta}} \lesssim \|F\|_{\frac{3 - \delta}{2 - \delta}}. \]
Hence it suffices to estimate $\|F\|_q$ with some $q > 3/2$. We will assume throughout that $q<3$. Writing $PU_{x, \lambda} = U_{x, \lambda} - \varphi_{x, \lambda}$, we can replace \eqref{eq:ptwf} by the more refined bound
\begin{equation}
\label{f eq 2}
|F| \lesssim |\alpha^4 - 1| U_{x, \lambda}^5 + U_{x, \lambda}^4 |w| + |w|^5 + U_{x, \lambda}^4 \varphi_{x, \lambda} + U_{x, \lambda} + \varphi_{x, \lambda} + |w| \,.
\end{equation}
(Here we used Lemma \ref{lemma PU} to bound $\varphi_{x, \lambda}^5\lesssim\varphi_{x, \lambda}$.)
The $L^q$-norms of certain summands are easy to estimate. Indeed, since $|\alpha^4 - 1| \lesssim \lambda^{-1}$ by Proposition \ref{prop-alpha4}, we have by Lemma \ref{lemma Lq norm of U}
\[ |\alpha^4 - 1| \|U_{x, \lambda}^5\|_{q} \lesssim \lambda^{-1} \|U\|_{5q}^5 \sim \lambda^{\frac{3}{2} - \frac{3}{q}} = o(\lambda^{1/2}) \]
since $q<3$. Next, by Lemmas \ref{lemma Lq norm of U} and \ref{lemma PU},
\begin{equation*}
\| U_{x, \lambda}^4 \varphi_{x, \lambda}\|_q \lesssim \lambda^{-1/2} \|U_{x, \lambda}\|_{4q}^4 = \mathcal O( \lambda^{-\frac12 + 2 - \frac 3q}) = o(\lambda^{1/2})
\end{equation*}
and, recalling also $\|\nabla w\|\lesssim \lambda^{-1/2}$ by \eqref{parameters PU + w},
\begin{align*}
\| U_{x, \lambda} +\varphi_{x, \lambda} + |w| \|_q \leq \|U_{x, \lambda}\|_q + \|\varphi_{x, \lambda}\|_\infty + \|w\|_6 \lesssim \lambda^{-1/2} = o(\lambda^{1/2}) \,.
\end{align*}
It thus remains to estimate the two terms $U_{x, \lambda}^4 |w|$ and $|w|^5$ in \eqref{f eq 2}. For the first one, we have by H\"older's inequality
\begin{equation*}
\|U_{x, \lambda}^4 w\|_q \leq \|U_{x, \lambda}\|_{5q}^4 \|w\|_{5q} \sim \lambda^{2 - \frac{12}{5q}} \|w\|_{5q}.
\end{equation*}
Noting that $2 - \frac{12}{5q} > 0$ for all $q \in (3/2, 3)$, the proof is complete if we can show
\begin{equation}
\label{w qnorm goal} \lambda^{2 - \frac{12}{5q}} \|w\|_{5q} = o(\lambda^{1/2}) \qquad \text{ for some } \quad q \in (3/2, 3).
\end{equation}
To prove \eqref{w qnorm goal}, the starting point is again equation \eqref{equation w} for $w$. We let $r> 1$ (to be chosen later appropriately), multiply \eqref{equation w} with $|w|^{r-1} w$ and integrate by parts to obtain
\[ \frac{4r}{(r+1)^2} \int_\Omega |\nabla |w|^{\frac{r+1}{2}}|^2 = \int_\Omega F |w|^{r-1} w . \]
Thus, by Sobolev's inequality applied to $v = |w|^{\frac{r+1}{2}}$,
\begin{equation}
\label{estimate int fw}
\|w\|^{r+1}_{3(r+1)} \lesssim \int_\Omega |F| |w|^r.
\end{equation}
Setting $r = \frac{5}{3}q - 1 $, the estimate \eqref{w qnorm goal} is equivalent to
\begin{equation}
\label{w rnorm goal}
\|w\|_{3(r+1)} = o\left(\lambda^{\frac{4}{r+1} - \frac{3}{2}} \right) \qquad \text{ for some } \quad r \in (3/2, 4) .
\end{equation}
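The substitution $r = \tfrac53 q - 1$ and the equivalence of \eqref{w qnorm goal} and \eqref{w rnorm goal} amount to the identities $5q = 3(r+1)$ and $\tfrac12 - \big(2 - \tfrac{12}{5q}\big) = \tfrac{4}{r+1} - \tfrac32$. A minimal Python sketch verifying this bookkeeping exactly at a few sample values of $q$ (both sides are rational functions of $q$, so this is only a sanity check, not a proof):

```python
from fractions import Fraction

# r = (5/3) q - 1 identifies the two formulations: L^{5q} = L^{3(r+1)},
# and the exponent forced on ||w||_{5q} matches the one in (w rnorm goal).
for q in (Fraction(8, 5), Fraction(2), Fraction(14, 5)):
    r = Fraction(5, 3) * q - 1
    assert 3 * (r + 1) == 5 * q                          # same Lebesgue exponent
    lhs = Fraction(1, 2) - (2 - Fraction(12) / (5 * q))  # exponent from (w qnorm goal)
    rhs = Fraction(4) / (r + 1) - Fraction(3, 2)         # exponent in (w rnorm goal)
    assert lhs == rhs
    assert Fraction(3, 2) < r < 4                        # q in (3/2,3) gives r in (3/2,4)
print("exponent bookkeeping consistent")
```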
In order to estimate the right side of \eqref{estimate int fw}, we again make use of \eqref{f eq 2} and estimate the various terms individually. Using H\"older's inequality, Lemma \ref{lemma Lq norm of U} and Young's inequality (for any $\eta > 0$ and $p, p' > 1$ with $p^{-1} + (p')^{-1} = 1$, there is $C_\eta > 0$ such that $st \leq \eta s^p + C_\eta t^{p'}$ for all $s, t > 0$), we obtain
\begin{align*}
|\alpha^4-1| \int_\Omega U_{x, \lambda}^5 |w|^r & \leq \lambda^{-1} \|w\|_{3(r+1)}^r \|U_{x, \lambda}\|^5_{5 \cdot \frac{3r+3}{2r+3}} \lesssim \lambda^{-1} \|w\|_{3(r+1)}^r \lambda^{\frac{1}{2} \cdot \frac{r-1}{r+1}} = \|w\|_{3(r+1)}^r \lambda^{-\frac{r+3}{2(r+1)}} \\
& \leq \eta \|w\|_{3(r+1)}^{r+1} + C_\eta \lambda^{-\frac{r+3}{2}} \, ;
\end{align*}
\begin{align*}
\int_\Omega |w|^{5+r} \leq \|w\|_{3(r+1)}^{r+1} \|w\|_6^4 \lesssim \|w\|_{3(r+1)}^{r+1} \lambda^{-2} \,;
\end{align*}
\begin{align*}
\int_\Omega U_{x, \lambda}^4 |w|^{r+1} \leq \left( \int_\Omega U_{x, \lambda}^5 |w|^r \right)^{4/5} \left( \int_\Omega |w|^{r+5} \right)^{1/5} \leq \|w\|_{3(r+1)}^{r + \frac{1}{5}} \lambda^{- \frac{4}{5(r+1)}} \leq \eta \|w\|_{3(r+1)}^{r+1} + C_\eta \lambda^{-1};
\end{align*}
\begin{align*}
\int_\Omega U_{x, \lambda}^4 |w|^{r} \varphi_{x, \lambda} & \leq \lambda^{-1/2} \|w\|_{3(r+1)}^r \|U_{x, \lambda}\|^4_{4 \cdot \frac{3r+3}{2r+3}} = \lambda^{-\frac{1}{2} - \frac{1}{r+1}} \|w\|_{3(r+1)}^r = \lambda^{- \frac{r+3}{2(r+1)}} \|w\|_{3(r+1)}^r \\
& \leq \eta \|w\|_{3(r+1)}^{r+1} + C_\eta \lambda^{-\frac{r+3}{2}} \,;
\end{align*}
\begin{align*}
\int_\Omega \varphi_{x, \lambda} |w|^{r} \lesssim \lambda^{-\frac12} \|w\|_{3(r+1)}^r
\leq \eta \|w\|_{3(r+1)}^{r+1} + C_\eta \lambda^{-\frac{r+1}2} \,;
\end{align*}
\begin{align*}
\int_\Omega U_{x, \lambda} |w|^{r} \leq \|w\|_{3(r+1)}^r \|U_{x, \lambda}\|_\frac{3r+3}{2r+3} \lesssim \|w\|_{3(r+1)}^r \lambda^{-\frac{1}{2}} \leq \eta \|w\|_{3(r+1)}^{r+1} + C_\eta \lambda^{-\frac{r+1}2}\, ;
\end{align*}
\begin{align*}
\int_\Omega |w|^{r+1} \lesssim \left( \int_\Omega |w|^{5+r} \right)^\frac{r+1}{r+5} \lesssim \| w \|_{3(r+1)}^{\frac{(r+1)^2}{r+5}} \lambda^{-\frac{2(r+1)}{r+5}} \leq \eta \|w\|_{3(r+1)}^{r+1} + C_\eta \lambda^{-\frac{r+1}{2}} \,.
\end{align*}
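In most of the applications of Young's inequality above, the conjugate exponent pair is $\left(\frac{r+1}{r},\, r+1\right)$; for instance, in the first estimate,
\[ \|w\|_{3(r+1)}^{r}\, \lambda^{-\frac{r+3}{2(r+1)}} \leq \eta \left( \|w\|_{3(r+1)}^{r} \right)^{\frac{r+1}{r}} + C_\eta \left( \lambda^{-\frac{r+3}{2(r+1)}} \right)^{r+1} = \eta\, \|w\|_{3(r+1)}^{r+1} + C_\eta\, \lambda^{-\frac{r+3}{2}} . \]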
By choosing $\eta$ small enough (but independent of $\lambda$), we can absorb the term $\eta \|w\|_{3(r+1)}^{r+1}$, as well as the term $\lambda^{-2} \|w\|_{3(r+1)}^{r+1}$, into the left hand side of inequality \eqref{estimate int fw} to get
\begin{align*}
\|w\|^{r+1}_{3(r+1)} &\lesssim \lambda^{-\frac{r+3}{2}} + \lambda^{-1} + \lambda^{-\frac{r+1}{2}} \lesssim \lambda^{-1} \,.
\end{align*}
Since $\lambda^{-1/(r+1)} = o(\lambda^{\frac{4}{r+1} - \frac{3}{2}})$ for $r<7/3$, for such $r$ we indeed get \eqref{w rnorm goal}. As explained before, this concludes the proof of Proposition \ref{proposition infty bound w}.
\end{proof}
\section{Proof of the main results}
\subsection{The behavior of $\phi_a$ near $x_0$}
We are now in a position to complete the proof of Theorem \ref{thm expansion}. Our main remaining goal is to prove
\begin{equation}
\label{varphi_a final }
\phi_a(x) = o(\varepsilon).
\end{equation}
Once this is shown, we will be able to find a relation between $\lambda$ and $\varepsilon$. The proof of \eqref{varphi_a final } (and only this proof) relies on the nondegeneracy of critical points of $\phi_a$.
We already know that $\phi_a(x_0) = 0$ and that $\phi_a(y)\geq 0$ for all $y\in\Omega$, hence $x_0$ is a critical point of $\phi_a$. In this subsection we collect the necessary ingredients which exploit this fact.
\begin{lemma}
\label{lemma C2}
The function $\phi_a$ is of class $C^2$ on $\Omega$.
\end{lemma}
Since we were unable to find a proof for this fact in the literature, we provide one in Appendix \ref{section c2}.
Thus, the following general lemma applies to $\phi_a$.
\begin{lemma}
\label{lemma hessian}
Let $u$ be $C^2$ near the origin and suppose that $u(0)=0$, $\nabla u(0)=0$ and that $\Hess u(0)$ is invertible. Then, as $x \to 0$,
\begin{equation}
\label{hessian general}
u(x) = \frac12 \nabla u(x)\cdot \left( \Hess u(0) \right)^{-1} \nabla u(x) + o(|x|^2) \,.
\end{equation}
Suppose additionally that $\Hess u(0) \geq c$ for some $c > 0$ in the sense of quadratic forms, i.e. the origin is a nondegenerate minimum of $u$. Then, as $x \to 0$,
\begin{equation}
\label{hessian minimum}
u(x) \lesssim |\nabla u(x)|^2.
\end{equation}
\end{lemma}
\begin{proof}
We abbreviate $H(x) = \Hess u(x)$ and make a Taylor expansion around $x$ to get
\begin{equation}
\label{eq:u}
0 = u(0) = u(x) - \nabla u(x)\cdot x + \frac12 x\cdot H(x) x + o(|x|^2)
\end{equation}
and
\begin{equation}
\label{eq:nablau}
0 = \nabla u(0) = \nabla u(x) - H(x) x + o(|x|) \,.
\end{equation}
We infer from \eqref{eq:nablau} and the invertibility of $H(0)$ that
$$
x = H(x)^{-1} \nabla u(x) + o(|x|) \,.
$$
Inserting this into \eqref{eq:u} gives
$$
0 = u(x) - \frac12 \nabla u(x) \cdot H(x)^{-1} \nabla u(x) + o(|x|^2) \,.
$$
Since $H(x)^{-1} = H(0)^{-1} + o(1)$ and $\nabla u(x) = \mathcal O(|x|)$, this yields \eqref{hessian general}.
To prove \eqref{hessian minimum}, note that if $0$ is a nondegenerate minimum, then a Taylor expansion around $0$ shows
\begin{equation} \label{phi-lowerb}
u(x) = \frac{1}{2} x \cdot H(0) x + o(|x|^2) \geq \frac{c}{4}|x|^2
\end{equation}
for small enough $|x|$. Thus the $o(|x|^2)$ in \eqref{hessian general} can be absorbed into the left side, which yields \eqref{hessian minimum}.
\end{proof}
\subsection{Proof of Theorem \ref{thm expansion}}
Equation \eqref{u-eps-final} follows from Proposition \ref{prop first expansion}, together with \eqref{definition q}, \eqref{q-split} and \eqref{eq:sproj}. The facts that $x_0\in\mathcal N_a$ and that $Q_V(x_0)\leq 0$ follow from Corollary \ref{cor-lambda-eps}.
By Lemma \ref{lemma C2} and the assumption that $x_0$ is a nondegenerate minimum of $\phi_a$, we can apply Lemma \ref{lemma hessian} to the function $u(x) := \phi_a(x + x_0)$ to get
\[ \phi_a(x) \lesssim |\nabla \phi_a(x)|^2 \,. \]
Therefore, by the bound on $\nabla\phi_a(x)$ in Proposition \ref{prop second expansion} with some fixed $\mu\in(1/2,1)$, we get
\begin{equation} \label{phi-grad-phi}
\phi_a(x) \lesssim |\nabla \phi_a(x)|^2 = o(\varepsilon) \,.
\end{equation}
This proves \eqref{phi-asymp} and, by non-degeneracy of $x_0$, also \eqref{x-x}. Moreover, inserting \eqref{phi-grad-phi} into the expansion of $\phi_a(x)$ from Proposition \ref{prop second expansion}, we find
\[ 0= a(x) \pi \lambda^{-1} - \frac{\varepsilon}{4\pi}\, Q_V(x) + o(\lambda^{-1}) +o(\varepsilon), \]
that is,
\[ \varepsilon \lambda = 4 \pi^2 \frac{|a(x_0)|+ o(1)}{|Q_V(x_0)|+o(1)} \]
with the understanding that this means $\varepsilon\lambda\to \infty$ if $Q_V(x_0)=0$. This proves \eqref{lim eps lambda}.
The remaining claims in Theorem \ref{thm expansion} follow from Proposition \ref{prop second expansion}.
\subsection{Proof of Theorem \ref{thm BP}}
By Proposition \ref{prop first expansion}, we have $u = \alpha(PU_{x, \lambda} + w)$ with $\alpha = 1 + o(1)$. Moreover, by Proposition \ref{proposition infty bound w}, $\|w\|_\infty = o(\lambda^{1/2})$. On the other hand, by Lemma \ref{lemma PU} we have
\[ \|PU_{x, \lambda}\|_\infty = \|U_{x, \lambda}\|_\infty + \mathcal O(\|\varphi_{x, \lambda}\|_\infty) = \lambda^{1/2} + \mathcal O(\lambda^{-1/2}). \]
Putting these estimates together, we obtain
\[ \varepsilon \|u_\varepsilon \|_\infty^2 = \varepsilon (\lambda^{1/2} + o(\lambda^{1/2}))^2 = \varepsilon \lambda (1+o(1)) = 4 \pi^2 \frac{|a(x_0)|}{|Q_V(x_0)|}(1+ o(1)) \]
by the relationship between $\varepsilon$ and $\lambda$ proved in Theorem \ref{thm expansion}. Moreover, $U_{x, \lambda}(x)=\lambda^{1/2} = \|U_{x, \lambda}\|_\infty$. This finishes the proof of part (a) in Theorem \ref{thm BP}.
The proof of part (b) requires far fewer prerequisites and relies only on the crude expansion of $u$ given in Proposition \ref{prop first expansion}.
By applying $(-\Delta + a)^{-1}$, we write \eqref{equation u} as
\begin{equation}
\label{u greens asympt} u(x) = \frac{3}{4 \pi} \int_\Omega G_a(x,y) u(y)^5 \,dy - \frac{\varepsilon}{4 \pi } \int_\Omega G_a(x,y) V(y) u(y) \,dy \,.
\end{equation}
Since $u = \alpha(PU_{x, \lambda} + w)$ with $\|\nabla w\| = \mathcal O(\lambda^{-1/2})$, it is not hard to see that $\lambda^{1/2} \frac{3}{4 \pi} u^5$ is a delta-sequence. Therefore the first term on the right side of \eqref{u greens asympt} is equal to
\[ \lambda^{-1/2} G_a(x, x_0) + o(\lambda^{-1/2}) \]
uniformly in $x$ away from $x_0$.
Since $V \in L^\infty(\Omega)$, the second term on the right side of \eqref{u greens asympt} is estimated by a constant times
\varepsilon \| G_a(x, \cdot)\|_{3 - \delta} \|u\|_\frac{3-\delta}{2-\delta},
for some fixed $0<\delta<3/2$. Since $\delta>0$, we have $\| G_a(x, \cdot)\|_{3 - \delta} < \infty$ uniformly in $x \in \Omega$, and since $\delta<3/2$, we have
\[ \|u\|_\frac{3-\delta}{2-\delta} \lesssim \|PU_{x, \lambda}\|_\frac{3-\delta}{2-\delta} + \|w\|_\frac{3-\delta}{2-\delta} \lesssim \lambda^{-1/2}. \]
Here we used the bound $\|PU_{x, \lambda}\|_\frac{3-\delta}{2-\delta}\lesssim \lambda^{-1/2}$ from Lemmas \ref{lemma Lq norm of U} and \ref{lemma PU}, as well as the bound $\|\nabla w\| = \mathcal O(\lambda^{-1/2})$ from Proposition \ref{prop first expansion}. This completes the proof of Theorem \ref{thm BP}.
Access Cities
Open Innovation Call in Copenhagen: Air Quality & Urban Heat Island Effect
On a mission to find ideas, tech & approaches to reduce the negative impacts of urban air pollution and the heat island effect in Denmark.
Presentation at C40 World Mayors' Summit
Challenge Overview
"identify and pilot the most promising solution within the City"
The City of Copenhagen, Denmark, is looking for new and innovative ideas, technologies, and approaches to reduce the negative impacts of urban air pollution and the heat island effect. The proposed solutions can approach the challenge from a mitigation perspective (reduce ambient air pollution or temperatures) or from an adaptation perspective (to help the citizens of Copenhagen to cope with the negative effects of air pollution or heat), or both. The goal of the challenge is to identify and pilot the most promising solution within the City in the hopes that it can improve upon the current state of air quality and/or urban heat. The solution, if piloted, will need to be measurable in its effectiveness to do so.
Moss-covered walls can significantly improve air quality and reduce urban heating in Copenhagen
Oct. 25, 2019, 8:17 a.m. PDT by Despina Maliaka
The winner of the Open Innovation Call was announced in Copenhagen during the C40 Mayor's Summit. The solution by the start-up 4MOSST is seeking to cover surfaces in major cities with moss to reduce urban heating and improve air quality.
With the C40 Mayor's Summit as a backdrop, Copenhagen has just found its newest and most innovative solution to air pollution and the urban heat island effect. The start-up company 4MOSST aims to tackle several urban climate problems – including urban heat island effect, air pollution and even mental health. Their solution is to create comfortable and aesthetic urban spaces by covering surfaces like walls with moss.
"Our solution is systemic as it tackles a range of issues at once with a single solution. Cities are lacking this element of urban greening and this solution is very concretely lifting this challenge," says Thimo Hillenius, one of the four partners at 4MOSST.
In August, Access Cities partnered with the City of Copenhagen to solve the pressing issues of air pollution and urban heat island effect in Copenhagen. At the event, where 4MOSST was announced, five finalists delivered their final pitch to the judging panel consisting of high level representatives from the City of Copenhagen and prominent Danish companies.
Open Innovation Calls are a part of ensuring liveability in cities
Besides hosting the finale of the Open Innovation Call, the Access Cities event on Friday focused on solving city challenges in Copenhagen, Singapore, New York City and Aarhus. City representatives from the four cities engaged in speed dating with the participating companies and organisations.
"There are several challenges that cities like Copenhagen and Singapore have in common. Therefore, it has been very relevant for us to get the perspectives from Denmark on these challenges, and experiencing the companies' innovative solutions. I think many of the solutions are definitely applicable to be scaled throughout South East Asia," says Dr. Aravind Muthiah, Strategy Lead of Electric Mobility/Energy Storage Systems at Ecolab.
A similar Open Innovation Call winner was also announced in New York City during the UN Climate Action Week last month. Just as in Copenhagen, the event in New York brought together important leaders from public authorities, companies and organisations from around the world. Both events focused on gathering international cooperation to find the best sustainable solutions to make cities in the 21st century liveable and fulfil the SDGs.
4MOSST creates an affordable and accessible alternative to existing green walls. The start-up is developing a solution called HSMPaint, an innovative self-growing plaster containing moss spores that after application grows into a moss coverage on walls in cities. The product is a non-toxic paste-like structure, containing moss spores, binding material and moisture protection for the wall. It can be directly applied to outdoor walls out of direct sunlight.
Access Cities is a Global Alliance for Sustainable Urban Development. During the C40 Mayor's Summit Access Cities has been broadly involved in the discussions on future sustainable solutions for the world's cities.
Currently running in New York City, Munich, Singapore, and the two Danish cities of Copenhagen and Aarhus, the program aims to solve city problems through multi-stakeholder challenges and open innovation, as a way of accelerating technology and solution knowledge sharing between cities, and to offer better testing opportunities for vendors.
/**
* Cockpit Import module build file
*/
var gulp = require('gulp');
var pkg = require('./package.json');
var path = pkg.config.path;
/**
* Build task
*/
gulp.task('build', function buildTask() {
    // Copy vendor dist files into assets folder
    var files = [
        path.vendor + '/papaparse/papaparse.js',
        path.vendor + '/papaparse/papaparse.min.js',
    ];

    // Return the stream so gulp can signal task completion
    return gulp.src(files)
        .pipe(gulp.dest(path.src + '/assets/3p'));
});
/**
* Default task
*/
gulp.task('default', [
'build'
]);
\section{Introduction}
The question of how an external (constant, uniform) electrical field
influences the electron motion in periodic structures has been of
great interest for decades \cite{Blo,Zen,Wan,Zak}.
Nevertheless, disagreements about the nature of the energy spectrum
persist to this day. Some analytical investigations
\cite{Wan,Kri,Ro2} show that the energy spectrum should be discrete
irrespective of the potential form and consist of the so-called
Wannier-Stark ladders with uniformly spaced levels. But other works
(see \cite{Zak,Ra2,Ao1,Av2} and references therein, including
rigorous mathematical results for smooth potentials) point to the
fact that under certain restrictions on the potential the spectrum is
continuous, and a discrete spectrum may exist only for the periodic
structures consisting of $\delta$-potentials (under certain
conditions) and $\delta^\prime$-potentials (always).
In the simplest model the problem is reduced to solving
the one-dimensional Schr\"{o}dinger equation whose Hamiltonian
includes the periodic potential and the potential of an electric field.
It is known \cite{Kri} that the properties of the equation depend
strongly on the choice of a gauge for the field. When a scalar
potential is used, the Hamiltonian is time independent, as in the
absence of the field (the problem with the zero field will be referred
to as a zero-field problem (ZFP)). But its symmetry is different from
the translational one. In this case it is important to reveal
the changes, in the band-gap energy spectrum of the ZFP, caused by the
field and to find the wave functions satisfying the new conditions of
symmetry (these functions will play the role which is similar to that
of the Bloch functions in the ZFP). A directly opposite situation
arises for a vector representation. Now, switching on the field does
not break the translational symmetry, and the Hamiltonian becomes time
dependent. As a result, the electron energy is no longer a quantum
number and the initial problem can be treated as the one on the Bloch
electron accelerated by the field.
The mathematical difficulties associated with making use of the scalar
potential are well known beginning with the famous paper \cite{Wan} by
Wannier. To overcome them, the author had to treat a finite number
of the Bloch bands. This approximation was rightly disputed later
\cite{Zak,Ra2}. Recent investigations (see for example \cite{Ao1,Av2})
show that the solution of the problem essentially depends on the
alternation order of Bloch bands and gaps in the high energy region in
the spectrum of the ZFP. It is known that for periodic finite-value
potentials the band width increases to infinity with increasing energy,
while the gap width vanishes. Taking into account a finite number of
Bloch bands is equivalent to the fact that the whole high-energy region
is a gap. Such an approximation is sure to result in a discrete
spectrum.
As far as we know, the rigorous analytical solution to the problem with
the scalar potential of general form has not been found. Besides,
the stationary electron states, displaying the symmetry of the problem,
remain to be investigated. In this work we propose an exact analytical
method to find such stationary states. The connection between them and
the Bloch states is discussed here. The energy spectrum of an electron
and the Zener tunneling are also considered.
\section {Symmetry of wave functions}
The basis for our approach is the transfer matrix method (TMM)
\cite{Ch1}, that we have used \cite{Ch2} for solving the ZFP.
We recall that one of the main points of that formalism is the
notion of out-of-barrier regions (OBR), where the total potential is
equal to zero. Here we shall use this notion as well, having made
the necessary generalizations for the problem at hand. This
can be done in two ways. Firstly, one may consider that the
total potential in the OBR coincides with the Stark potential which is
a linear function of $x$ (in this case the treatment should be based on
the Airy-functions formalism). Secondly, one may consider that the
potential in these regions is a constant which depends linearly on the
cell number. The proportionality coefficient depends on the
electrical-field strength. Both the variants can be used in our
approach. However in this work we dwell on the latter because there is
a more evident association with the ZFP in this case.
The stationary Schr\"{o}dinger equation for a structure of $N$
periods (unit cells) may be written as
\begin{equation} \label{1}
\frac{d^2\Psi}{dx^2}+\frac{2m}{\hbar^2}
\bigl(E-V(x)\bigr)\Psi=0,
\end{equation}
\noindent where $E$ is the electron energy; $m$ is its mass;
$V(x)$ is defined by expressions: $V(x)=v(x)-n\Delta$, if $x\in
(a_n,b_{n+1})$ ($n=0,\ldots,N-1$); $V(x)=-n\Delta$, if $x\in (b_n,
a_n)$; $b_n=nD$; $a_n=l+nD$ ($n=0,\ldots,N$); $\Delta=e{\cal E}D$; $e$
is the electron charge (by modulus); $l$ is the OBR width; $D$ is the
structure period; $\cal E$ is the electric-field strength; $v(x)$ is a
bounded $D$-periodic function.
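As a purely illustrative aside (not part of the analysis), the piecewise definition of $V(x)$ can be sketched numerically for a rectangular barrier $v(x)=v_0$; all parameter values below are arbitrary:

```python
def tilted_potential(x, v0=1.0, l=0.2, D=1.0, delta=0.1):
    """V(x) of Eq. (1) for a rectangular barrier v(x) = v0.

    Cell n occupies [n*D, (n+1)*D): its first l is the out-of-barrier
    region (OBR) at the constant -n*delta; the remaining D - l is the
    barrier, shifted down by the field term n*delta (delta = e*E*D).
    """
    n = int(x // D)        # cell number
    xi = x - n * D         # position within the cell
    if xi < l:             # OBR: constant, depends only on the cell number
        return -n * delta
    return v0 - n * delta  # barrier region, shifted by the field term
```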
It should be noted that boundary conditions at the points $x=0$ and
$x=a_N$ are not needed here, for we do not solve the boundary-value
problem. Semi-infinite and infinite structures will be considered
below. Notice also that the parameter $l$ may be equal to zero. For the
OBR always may be included into the initial potential (as the point
with an infinitesimal vicinity) without changing the solution of Eq.
(\ref{1}) (the new and initial potentials are equivalent functions).
Thus the present method can be used for any initial potential.
\newcommand{$\Psi_{\cal E}(x;E)$ }{$\Psi_{\cal E}(x;E)$ }
It is known \cite{Wan} that if the function $\Psi_E(x)$ is a solution
of Eq. (\ref{1}), then $\Psi_{E+\Delta}(x-D)$ is a solution too. On the
basis of this statement one can assume that there are solutions (to be
referred to as $\Psi_{\cal E}(x;E)$ ) among those of Eq. (\ref{1}), satisfying the
condition
\begin{equation} \label{2}
\Psi_{\cal E}(x+D;E)=const\cdot\Psi_{\cal E}(x;E+\Delta),
\end{equation}
where $const$ is a complex value. Our main goal is to find the
functions $\Psi_{\cal E}(x;E)$ and to examine their properties.
The solutions to the Schr\"{o}dinger equation, both with the scalar and
with the vector potentials, are generally found in the form of an
expansion in orthogonal functions (for example, in the Bloch
functions). In this case the required and basis functions are supposed
to belong to the same class of functions. The disadvantage of such an
approach is that, in finding wave functions obeying the symmetry
conditions, it is not always clear to what class of functions they are
to belong. So, for example, in the absence of the field the same
condition of the translational symmetry (it coincides with (\ref{2})
when $\Delta=0$) leads, in the bands, to the Bloch functions bounded
everywhere, but in the gaps it yields functions unbounded at
plus or minus infinity. As will be shown further, the functions $\Psi_{\cal E}(x;E)$ are
unbounded when $x\to -\infty$. Thus, if we attempted to find these
functions in the Bloch or Wannier expansions, we could obtain an
incorrect result, because both sets of functions belong to other
classes. The transfer matrix method is free from this drawback,
since the expansion technique is not used there.
\section {Functional equation for wave functions in the transfer
matrix method}
\newcommand{{\cal A}}{{\cal A}}
The general solution of the Eq. (\ref{1}) in the OBR's is
\begin{equation} \label{3}
\Psi(x;E)=A_n^{(+)}(E)\exp[ik_n(x-b_n)]+A_n^{(-)}(E)
\exp[-ik_n(x-b_n)], \end{equation}
\noindent where $k_n=\sqrt{2m(E+n\Delta)/\hbar^2}$; $n=0,\ldots,N$.
Here the main problem is to find the coefficients $A_n^{(+)}(E)$ and
$A_n^{(-)}(E)$; $n=0,\ldots,N$. Once the coefficients have been
found, the determination of the $\Psi_{\cal E}(x;E)$ in the barrier regions should
present no fundamental problems. In the general case for this purpose one
can use, for example, the numerical technique \cite{Ch3}.
The connection between the coefficients of the solution in the first
two OBR's is given by
\begin{equation} \label{4}
{\cal A}_0(E)=\alpha(E) Y(E)\Gamma(E){\cal A}_1(E);\end{equation}
Here $Y$ is a transfer matrix (see \cite{Ch1}), describing the barrier
at the unit cell $n=0$ (providing that there is no step at the
point $b_1$), and $\alpha\cdot\Gamma$ is a matrix matching the
solutions at the step at $x=b_1$:
\begin{equation} \label{5}
Y=\left(\begin{array}{cc}\tilde{q} & \tilde{p} \\\tilde{p}^* &
\tilde{q}^* \end{array} \right), \hspace{8mm}
\Gamma=\left(\begin{array}{cc}q_s & p_s \\p_s & q_s
\end{array} \right); \hspace{8mm}
{\cal A}_n=\left(\begin{array}{c} A_n^{(+)} \\ A_n^{(-)}
\end{array} \right);
\end{equation}
\[
\tilde{q}=\frac{1}{\sqrt{T}}\exp[-i(J+k_0l)]; \hspace{8mm}
\tilde{p}=\sqrt{\frac{R}{T}}\exp[i(\frac{\pi}{2}+F-k_0l)];
\]
\[
q_s=(\alpha+\alpha^{-1})/2;
\hspace{8mm} p_s=(\alpha^{-1}-\alpha)/2; \hspace{8mm}
\alpha(E)=\sqrt{k_1(E)/k_0(E)};
\]
the phases $J(E)$, $F(E)$ and the transmission coefficient $T(E)$ (see
\cite{Ch1}) describing the barrier in the zero cell are supposed to
be known; $R=1-T$.
Let
\[Z=Y\Gamma =\left(\begin{array}{cc}q & p
\\p^* & q^* \end{array} \right), \]
then the relationship (\ref{4}) can be rewritten as
\begin{equation} \label{8}
{\cal A}_0(E)=\alpha(E)Z(E){\cal A}_1(E),
\end{equation}
and the connection between any two adjacent OBR's will be
determined by
\begin{equation} \label{9}
{\cal A}_n(E)=\alpha(E+n\Delta)Z(E+n\Delta){\cal A}_{n+1}(E); \hspace{8mm}
n=0,1,\ldots,N-1.
\end{equation}
Given (\ref{9}), the connection between the zero and the $N$-th unit cell
can be written in the form
\begin{equation} \label{10}
{\cal A}_0(E)=\alpha_{(1,N)}(E) {\cal Z}_{(1,N)}(E){\cal A}_N(E),
\end{equation}
where
\begin{equation} \label{11}
{\cal Z}_{(1,N)}(E)=Z(E)\cdot\ldots
\cdot Z(E+(N-1)\Delta);
\end{equation}
\[\alpha_{(1,N)}(E)=\prod_{n=0}^{N-1}\alpha(E+n\Delta)=
\sqrt{\frac{k_0(E+N\Delta)}{k_0(E)}}.\]
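Numerically, the ordered product in (\ref{11}) is a simple left-to-right matrix chain over energies shifted by $\Delta$ per cell; the following minimal sketch assumes a user-supplied one-cell matrix $Z(E)$:

```python
import numpy as np

def chain_transfer_matrix(Z, E, delta, N):
    """Ordered product Z(E) Z(E+delta) ... Z(E+(N-1)*delta) of Eq. (11).

    Z is a callable returning the complex 2x2 one-cell matrix at a
    given energy; delta is the Stark shift per cell.
    """
    M = np.eye(2, dtype=complex)
    for n in range(N):
        M = M @ Z(E + n * delta)  # the energy increases by delta per cell
    return M
```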
Defining for all $n$ the vector
\[\tilde{{\cal A}}_n(E)=\alpha_{(1,n)}(E){\cal A}_n(E),\]
we can rewrite Eq. (\ref{10}) as
\begin{equation} \label{12}
{\cal A}_0(E)\equiv\tilde{{\cal A}}_0(E)={\cal Z}_{(1,N)}(E)\tilde{{\cal A}}_N(E).
\end{equation}
Now, by analogy with the ZFP \cite{Ch2} we will attempt to find the
wave functions whose expressions for the extreme OBR's (i.e. for the
zero and $N$-th unit cells) are connected by means of symmetry. For this
purpose we demand that the coefficients of the zero and first
OBR's satisfy the condition
\begin{equation} \label{13}
\tilde{{\cal A}}_1(E)=C(E)\cdot{\cal A}_0(E+\Delta),
\end{equation}
\noindent where $C(E)$ is a complex function. Then, by Eq. (\ref{8}),
${\cal A}_0(E)$ must obey the functional equation
\begin{equation} \label{14}
{\cal A}_0(E)=C(E)Z(E){\cal A}_0(E+\Delta).
\end{equation}
It is easy to check that ${\cal A}_0(E)$ is determined by this equation
only up to a scalar periodic function $\omega (E)$;
$\omega(E+\Delta)=\omega(E)$. Namely, if the function ${\cal A}_0(E)$ is a
solution, so will be the one $\omega(E)\cdot{\cal A}_0(E)$.
Now, taking into account (\ref{13}) and (\ref{14}) in the relation
(\ref{10}) we have
\begin{equation} \label{15}
\tilde{{\cal A}}_N(E)=G_N(E){\cal A}_0(E+N\Delta),
\end{equation}
\noindent where $G_N(E)=\prod_{n=0}^{N-1}C(E+n\Delta)$.
As in the ZFP \cite{Ch2}, Eqs. (\ref{12}) and (\ref{15}) in principle
provide a way of deriving, in explicit form, the expressions for
the $N$-barrier transfer matrix (\ref{11}) in terms of ${\cal A}_0(E)$, i.e.
in terms of unit-cell characteristics. However, as will be seen
below, this offers no advantage in calculating
${\cal Z}_{(1,N)}(E)$.
Considering (\ref{15}) and the relation
$G_{n+1}(E)=C(E)G_n(E+\Delta)$, one can show that
\[
\tilde{{\cal A}}_{n+1}(E)=C(E)\tilde{{\cal A}}_n(E+\Delta).
\]
Such a connection between the coefficients of two adjacent OBR's
provides fulfilling the symmetry condition (\ref{2}). Namely,
\begin{equation} \label{16}
\Psi_{\cal E}(x+D;E)=\alpha^{-1}(E)C(E)\Psi_{\cal E}(x;E+\Delta).
\end{equation}
So, in the TMM the symmetry condition leads to the functional equation
(\ref{14}) for coefficients of the general solution of the Schr\"{o}dinger
equation.
\section{Solutions of the functional equation}
According to the theory of functional equations \cite{Kuc}, in order to
solve Eq. (\ref{14}) one needs to define the auxiliary functions
$\eta_n(E)$, where $n=0,1,\ldots$, with help of the relationships
\begin{equation} \label{17}
\eta_0(E)=C(E)Z(E)\eta_0(E);
\end{equation}
\begin{equation} \label{18}
\eta_n(E)=C(E)Z(E)\eta_{n-1}(E+\Delta).
\end{equation}
Then the solution of Eq. (\ref{14}) can be written, it is easily
checked, in the form
\begin{equation} \label{19}
{\cal A}_0(E)=\lim_{n\to \infty}\eta_n(E).
\end{equation}
In fact, it means that we have to solve the auxiliary equation (\ref{17})
and to prove the existence of the limit (\ref{19}).
Considering Eqs. (\ref{17}) and (\ref{18}), we can write the limit
(\ref{19}) also as
\begin{equation} \label{20}
{\cal A}_0(E)=G_\infty(E){\cal Z}_{(1,\infty)}(E)\cdot\tilde{\eta}_0,
\end{equation}
where $\tilde{\eta_0}=\lim_{n\to \infty}\eta_0(E+n\Delta).$
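Numerically, $\eta_n(E)$ can be obtained by unfolding the recursion (\ref{18}) down to $\eta_0$ at the shifted energy $E+n\Delta$; a minimal sketch with user-supplied $C$, $Z$ and $\eta_0$ is:

```python
import numpy as np

def eta_n(C, Z, eta0, E, delta, n):
    """eta_n(E) obtained by unfolding Eq. (18):
    eta_n(E) = C(E) Z(E) C(E+delta) Z(E+delta) ... eta_0(E + n*delta).
    C is a scalar-valued, Z a matrix-valued, eta0 a vector-valued callable.
    """
    v = np.asarray(eta0(E + n * delta), dtype=complex)
    for k in range(n - 1, -1, -1):  # apply the factors from the inside out
        Ek = E + k * delta
        v = C(Ek) * (np.asarray(Z(Ek), dtype=complex) @ v)
    return v
```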
Finding ${\cal A}_0(E)$ is thus seen to be tied to calculating
the matrix ${\cal Z}_{(1,\infty)}(E)$ for the semi-infinite structure.
This is why deriving expressions for ${\cal Z}_{(1,N)}(E)$
in terms of ${\cal A}_0(E)$ is of no interest in this approach.
Let us begin with solving Eq. (\ref{17}). It can be rewritten as
\begin{equation} \label{21}
\frac{\eta_0^{(-)}}{\eta_0^{(+)}}=\frac{C^{-1}-q}{p}=
\frac{p^*}{C^{-1}-q^*}; \hspace{8mm}
\eta_0=\left(\begin{array}{c} \eta_0^{(+)} \\ \eta_0^{(-)}
\end{array} \right).
\end{equation}
This equation coincides in form with equation (8) (see Ref.
\cite{Ch2}) in the ZFP. The only difference is that the matrix
$Z(E)$ describes the one-cell potential which involves the
electric-field effect. Here the graduation of the energy scale into
Bloch bands (``allowed'' energy regions) and gaps (``forbidden'' energy
ones) arises as well. But such a division does not yield the energy
spectrum of the given problem and has only an auxiliary
significance.
Since $\det Z(E)=1$, the solutions of the characteristic equation
(\ref{21}) (the right equality) are two reciprocal quantities. In
choosing the required root, for any energy region, we must proceed from
the fact that the function $C(E)$ must have a limit as
$E\to\infty$. Otherwise, the limit $\tilde{\eta_0}$ does not exist
either, and hence Exp. (\ref{20}) loses its meaning.
Let us show that the solutions of the auxiliary equation (\ref{21}),
having the properties needed, are expressed by
\[
C_1(E)=\frac{1}{q+y}; \hspace{8mm}
\eta_0^{(+)}|_1=1; \hspace{8mm}
\eta_0^{(-)}|_1=\frac{y}{p};
\]
\[
C_2(E)=\frac{1}{q^*-y}; \hspace{8mm}
\eta_0^{(+)}|_2=-\frac{y}{p^*}; \hspace{8mm}
\eta_0^{(-)}|_2=1; \]
\[C_2=C_1^{-1}, \hspace{8mm} y=-\frac{i |p|^2\cdot
\mathrm{sign}(u)}{|u|+\sqrt{u^2-|p|^2}}, \hspace{8mm} u=\mathrm{Im}(q).\]
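The right equality in (\ref{21}), with $C^{-1} = q + y$, reduces to the quadratic $y^2 + 2i\,\mathrm{Im}(q)\,y - |p|^2 = 0$, and the expression above is one of its roots; a numerical spot check in the band case $|u| > |p|$ (with arbitrarily chosen $q$, $p$ satisfying $|q|^2 - |p|^2 = 1$) is:

```python
import numpy as np

def root_y(q, p):
    """The root y (band case |Im q| > |p|) used in C_1 = 1/(q + y)."""
    u = q.imag
    disc = np.sqrt(u * u - abs(p) ** 2)
    return -1j * abs(p) ** 2 * np.sign(u) / (abs(u) + disc)

# spot check with a unimodular transfer matrix: |q|^2 - |p|^2 = 1
q, p = 0.5 + 1.0j, 0.5 + 0.0j
y = root_y(q, p)
C1 = 1.0 / (q + y)
# both sides of the right equality in Eq. (21)
lhs = (1.0 / C1 - q) / p
rhs = np.conj(p) / (1.0 / C1 - np.conj(q))
```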
\newcommand{\stackrel{<}{\sim}}{\stackrel{<}{\sim}}
\newcommand{E+N\Delta}{E+N\Delta}
\newcommand{S_{E,\Delta}}{S_{E,\Delta}}
\newcommand{S^f_{E,\Delta}}{S^f_{E,\Delta}}
\newcommand{S^a_{E,\Delta}}{S^a_{E,\Delta}}
First of all it should be noted that the limit $\tilde{\eta_0}$
is calculated on the set of the equidistant points $E_n$, where
$E_n=E+n\Delta$; $n=0,1,\ldots$. The set will be denoted by $S_{E,\Delta}$, in
doing so we emphasize its dependency on the parameters $E$ and $\Delta$.
It is supposed that $E$ varies in the interval $(0,\Delta]$. The set
$S_{E,\Delta}$ consists of the two subsets $S^a_{E,\Delta}$ and $S^f_{E,\Delta}$ whose points
belong to the bands ($|u|>|p|$) and gaps ($|u|\le|p|$), respectively.
As will be shown below, the behavior of the vector-function
$\eta_0(E_n)$ on the subsets $S^a_{E,\Delta}$ and $S^f_{E,\Delta}$ differs qualitatively.
Therefore, there exists no limit, $\tilde{\eta_0}$, when both the
subsets are infinite.
It follows from general considerations that the number of
points in $S^a_{E,\Delta}$ and $S^f_{E,\Delta}$ depends on the widths of the bands and gaps
as well as on their location on the energy scale. As will become clear from the
following, it is sufficient to investigate both factors for the
rectangular barrier ($v(x)=v_0$). Using the explicit expressions
for the tunneling parameters of the rectangular barrier (see, for
example, \cite{Ch1}), one can show that in the high-energy region the matrix
element $\tilde{p}$ obeys the inequality $|\tilde{p}|\stackrel{<}{\sim}
(v_0/2) E^{-1}$. The asymptote of the phase $J(E)$ is the
function $k_0(E)d$, where $d=b_1-a_0$ is the barrier width; the larger
the electron energy, the more closely the electron motion resembles that of a
free electron. Thus, in the high-energy region the centres of the
gaps (i.e., the points satisfying the equation $\sin(J(E)+k_0l)=0$)
for periodic structures formed of rectangular
barriers asymptotically coincide on the energy scale with the points
$E_L$, where \[E_L=L^2\epsilon; \hspace{4mm}
\epsilon=\frac{\pi^2\hbar^2}{2m D^2}; \hspace{4mm} L=0,1,\ldots.\] The
distance between the gap centres in the high-energy region is,
consequently, a multiple of the constant $\epsilon$. In this case the
gap widths tend to zero with increasing $E$, while the band widths, on
the contrary, tend to infinity (see \cite{Ra2,Ao1,Av2}). These
findings are not changed by the presence of the step at the right
boundary of the barrier, because the corresponding matrix $\Gamma$ is
real and, besides, $|p_s(E)|\sim E^{-1}$ as for the rectangular
barrier. Namely, for the matrix $Z(E)$ we have \begin{equation}
\label{24} |p(E)|\stackrel{<}{\sim} \frac{v_0}{2E}, \hspace{8mm} \arg(q)\approx
k_0(E)D, \end{equation} where it is also taken into account that
$q=\tilde{q}q_s+\tilde{p}p_s$, $p=\tilde{q}p_s+\tilde{p}q_s$;
$|\tilde{q}|^2-|\tilde{p}|^2=1$, $q_s^2-p_s^2=1$. The asymptotics in
the high-energy region is also unchanged on passing to a
barrier of general form, because in this case the inequality
$|\tilde{p}(E)|\stackrel{<}{\sim} (v_{max}/2)E^{-1}$ holds and the asymptotics of
$p_s(E)$ remains the same; here $v_{max}$ is the maximum of $|v(x)|$.
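For the rectangular barrier the bound $|p(E)|\stackrel{<}{\sim} v_0/(2E)$ can be checked directly: the standard textbook expression for the off-diagonal transfer-matrix element gives $|p| = v_0|\sin(\kappa d)|/(2k\kappa)$ above the barrier, which is below $v_0/(2E)$ up to the factor $\sqrt{E/(E-v_0)}\to 1$. A minimal numerical sketch (function name and the choice of units $\hbar=2m=1$ are ours, not from the text):

```python
import math

def p_abs_rect(E, v0=1.0, d=1.0):
    """|p(E)| for a rectangular barrier of height v0 and width d (units hbar = 2m = 1)."""
    k = math.sqrt(E)           # wavenumber outside the barrier
    kappa = math.sqrt(E - v0)  # wavenumber over the barrier (E > v0)
    return v0 * abs(math.sin(kappa * d)) / (2 * k * kappa)

# |p| <= v0/(2E) up to the factor sqrt(E/(E - v0)), which tends to 1
for E in (100.0, 1000.0, 10000.0):
    assert p_abs_rect(E) <= (1.0 / (2 * E)) * math.sqrt(E / (E - 1.0)) + 1e-15
```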
It follows from the above that the subset $S^a_{E,\Delta}$ is always
infinite, while $S^f_{E,\Delta}$ is infinite only in the exceptional ("resonance")
cases $E=\Delta=r\epsilon$, where $r$ is a rational number. Under
these conditions the limit $\tilde{\eta_0}$ does not exist, for the modulus of
the functions $\eta_0^{(-)}(E)|_1$ and $\eta_0^{(+)}(E)|_2$ equals
unity on $S^f_{E,\Delta}$, whereas on $S^a_{E,\Delta}$ it varies between zero and
unity.
For a given $\Delta$, the set of energy values at which the
"resonance" takes place is at most countable. This is
ultimately connected with the fact that the gap width tends to zero in the
limit $L\to\infty$. An arbitrarily small variation of $E$ removes the
points $E_n$, beginning with some number ${\cal N}$, from the gaps.
Since all these points belong to the subset $S^a_{E,\Delta}$, where the
inequality $|u|>|p|$ holds, there is a $\delta>0$ such that for all
$n>{\cal N}$ the condition $|u(E_n)|\ge|p(E_n)|^{1-\delta}$ is valid
(note that $|p|^2<1$ in the high-energy region). Hence,
at these points we have
\[|y|=\frac{|p|^2}{|u|+\sqrt{u^2-|p|^2}}<
\frac{|p|^2}{|u|}\le|p|^{1+\delta}.\]
And hence
\[|\eta_0^{(-)}(E_n)|_1=\frac{|y|}{|p|}\le|p(E_n)|^\delta\stackrel{<}{\sim}
\gamma E_n^{-\delta},\]
where $\gamma=(v_{max}/2)^\delta$. The same asymptotics holds for
$\eta_0^{(+)}(E_n)|_2$. This means that almost everywhere on the energy
scale
\begin{equation} \label{25}
\tilde{\eta_0}|_1=\left(\begin{array}{c} 1 \\ 0
\end{array} \right); \hspace{8mm}
\tilde{\eta_0}|_2=\left(\begin{array}{c} 0 \\ 1
\end{array} \right).
\end{equation}
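The inequality chain leading to this limit can be spot-checked numerically. The following sketch (function name and test values are ours) takes the borderline case $|u|=|p|^{1-\delta}$:

```python
import math

def y_abs(p, u):
    """|y| = p^2 / (u + sqrt(u^2 - p^2)) for 0 < p < u (i.e. inside a band)."""
    return p ** 2 / (u + math.sqrt(u ** 2 - p ** 2))

delta = 0.3
for p in (1e-2, 1e-3, 1e-4):
    u = p ** (1 - delta)               # borderline case |u| = |p|^{1 - delta}
    assert y_abs(p, u) < p ** 2 / u    # the denominator exceeds u alone
    assert y_abs(p, u) < p ** (1 + delta)
```

As $E_n$ grows, $|p(E_n)|\to 0$, so both bounds force $|\eta_0^{(-)}(E_n)|_1\to 0$.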
Now substituting (\ref{25}) into (\ref{20}) we get the final expressions
for two solutions of functional equation (\ref{14}):
\begin{equation} \label{26}
{\cal A}_0^{(1)}=\left(\begin{array}{c} Q_{(1,\infty)}G_\infty \\
P^*_{(1,\infty)}G_\infty \end{array} \right); \hspace{8mm}
{\cal A}_0^{(2)}=\left(\begin{array}{c} P_{(1,\infty)}G^{-1}_\infty \\
Q^*_{(1,\infty)}G_\infty^{-1} \end{array} \right),
\end{equation}
where $G_\infty=G_\infty^{(1)}=1/G_\infty^{(2)}$; $Q_{(1,\infty)}$
and $P_{(1,\infty)}$ are the elements of ${\cal Z}_{(1,\infty)}$.
Then from (\ref{15}) it follows that
\begin{equation} \label{27}
\tilde{{\cal A}}_\infty^{(1)}(E)=\left(\begin{array}{c}
G_\infty(E) \\ 0 \end{array} \right);
\hspace{8mm} \tilde{{\cal A}}_\infty^{(2)}(E)=\left(\begin{array}{c}
0 \\ G_\infty^{-1}(E)
\end{array} \right).
\end{equation}
Expressions (\ref{3}), (\ref{10}) and (\ref{26}) provide two
independent functions $\Psi^{(1)}_{\cal E}(x;E)$ and $\Psi^{(2)}_{\cal
E}(x;E)$. Both solutions are current-carrying. The corresponding
probability flows, $I_{(1)}(E)$ and $I_{(2)}(E)$, are
\begin{equation} \label{28}
I_{(1)}(E)=\hbar m^{-1}k_0(E)|G_\infty(E)|^2; \hspace{4mm}
I_{(2)}(E)=\hbar m^{-1}k_0(E)|G_\infty(E)|^{-2}.
\end{equation}
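Formula (\ref{28}) is the standard probability current of a plane wave $G\,e^{ik_0x}$. As a sanity check, the current $j=(\hbar/m)\,{\rm Im}(\psi^*\psi')$ can be evaluated by numerical differentiation (the function and test values below are our construction, not from the text):

```python
import cmath

def current(G, k, x=0.37, h=1e-6, hbar=1.0, m=1.0):
    """Probability current j = (hbar/m) * Im(psi* dpsi/dx) for psi = G exp(i k x)."""
    psi = lambda t: G * cmath.exp(1j * k * t)
    dpsi = (psi(x + h) - psi(x - h)) / (2 * h)   # central finite difference
    return (hbar / m) * (psi(x).conjugate() * dpsi).imag

# for a plane wave the current is hbar * k * |G|^2 / m, matching the flux formulas
G, k = 0.8 - 0.6j, 2.5
assert abs(current(G, k) - 2.5 * abs(G) ** 2) < 1e-6
```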
Now we have to prove that the limit in (\ref{19}) exists; otherwise
expressions (\ref{26})--(\ref{28}) are meaningless.
\section{On the existence of the solutions $\Psi_{\cal E}(x;E)$}
For a complex-valued matrix $H$ and vector ${\cal A}$ let us define the
norms
\[\| H\|=\max_j\sqrt{\sum_{i=1}^2|h_{ij}|^2}; \hspace{4mm} j=1,2;
\hspace{4mm} \| {\cal A}\|=|A^{(+)}|+|A^{(-)}|.\]
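The column norm of the transfer matrix can be computed directly. A numerical sketch (the explicit form $Z=\bigl(\begin{smallmatrix}q&p\\ p^*&q^*\end{smallmatrix}\bigr)$ is our assumption, suggested by the pseudo-unitarity $|q|^2-|p|^2=1$):

```python
import cmath

def Z_norm_sq(q, p):
    """||Z||^2 = max over columns j of sum_i |z_ij|^2, for Z = [[q, p], [p*, q*]]."""
    Z = [[q, p], [p.conjugate(), q.conjugate()]]
    return max(sum(abs(Z[i][j]) ** 2 for i in range(2)) for j in range(2))

# pick any p, and any q with |q|^2 - |p|^2 = 1 (pseudo-unitarity)
p = 0.3 + 0.4j
q = cmath.sqrt(1 + abs(p) ** 2) * cmath.exp(0.7j)
assert abs(Z_norm_sq(q, p) - (1 + 2 * abs(p) ** 2)) < 1e-12
```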
In particular, this implies that $\|Z\|^2=1+2|p|^2$. Considering the
first solution, we will prove that for any given $E$ and $\varepsilon>0$
one can find a number $N$ such that
\begin{equation} \label{29}
\|\eta_n(E)-\eta_{n-1}(E)\|<\varepsilon
\end{equation}
for $n>N$. Since \[\eta_n(E)=G_n(E){\cal
Z}_{(1,n)}(E)\eta_0(E_n),\] (here $E_n=E+n\Delta$) we have
\begin{equation} \label{30}
\|\eta_n(E)-\eta_{n-1}(E)\|\le |G_{n-1}(E)| \cdot\|{\cal
Z}_{(1,n-1)}(E)\|\cdot F(E),
\end{equation}
where $F(E)=\|C(E_n)Z(E_n)\eta_0(E_n)-\eta_0(E_{n-1})\|$.
Let us show that the first two norms are bounded as $n\to\infty$.
We have \[|G_\infty(E)|^{-1}=\prod_{n=0}^{\infty}|C(E_n)|^{-1}=
\prod_{n=0}^\infty|q(E_n)+y(E_n)|\le\]
\begin{equation} \label{31}
\le\prod_{n=0}^{\infty}|q(E_n)|\cdot\Biggl(1+\frac{|y(E_n)|}
{|q(E_n)|}\Biggr).
\end{equation}
The convergence of both products in (\ref{31}) is equivalent to the
convergence of the series $\sum_{n=0}^{\infty} n^{-2}$, because $|q|=\sqrt{1+|p|^2}$,
$y\sim |p|^2$ and $|p|\sim n^{-1}$. Since this series converges, the
infinite product $|G_\infty(E)|$ converges as well.
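This convergence can be illustrated with the model decay $|p(E_n)|=C/n$ (an assumption for the sketch; the constant $C$ is arbitrary), for which the logarithm of the product behaves like a constant times $\sum n^{-2}$:

```python
import math

def partial_product(N, C=0.5):
    """Partial product of |q(E_n)| = sqrt(1 + |p_n|^2), with model decay |p_n| = C/n."""
    prod = 1.0
    for n in range(1, N + 1):
        prod *= math.sqrt(1.0 + (C / n) ** 2)
    return prod

# the product converges: doubling the cutoff barely changes the value,
# and log(1+x) <= x bounds it by exp((C^2/2) * pi^2/6)
assert abs(partial_product(20000) - partial_product(10000)) < 1e-4
assert 1.0 < partial_product(20000) < math.exp(0.5 ** 2 / 2 * math.pi ** 2 / 6)
```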
For the matrix describing the semi-infinite structures we have
\[\|{\cal Z}_{(1,\infty)}(E)\|^2\le\prod_{n=0}^{\infty}\|Z(E_n)\|
=\prod_{n=0}^{\infty}(1+2|p(E_n)|^2).\] Obviously, this product
converges for the same reason as in (\ref{31}). In addition, since
\[\|{\cal Z}_{(1,\infty)}(E)\|^2\equiv
1+2\frac{R_{(1,\infty)}(E)}{T_{(1,\infty)}(E)},\]
we have that $T_{(1,\infty)}(E)\ne 0$; that is, the semi-infinite
structure can never be absolutely opaque for an electron.
Now, it remains to be shown that $F$ in (\ref{30}) approaches zero with
increasing $n$. Using (\ref{17}) we have
\begin{equation} \label{32}
F=\|\eta_0(E_n)-\eta_0(E_{n-1})\|\stackrel{<}{\sim} 2\gamma n^{-\delta}.
\end{equation}
Since the norms $|G_\infty(E)|$ and $\|{\cal Z}_{(1,\infty)}(E)\|$
are bounded, the maximum \[\max_j (|G_{j}(E)| \cdot \|{\cal
Z}_{(1,j)}(E)\|), \hspace{4mm} j=1,2,\ldots,\] exists. Together with (\ref{32}) this
ensures that inequality (\ref{29}) is fulfilled, which proves the existence
of the limit in (\ref{19}). For the second solution the arguments are
similar.
\section{Conclusions}
At first glance, the functions $\Psi_{\cal E}(x;E)$ can be calculated by this
method only in the region located to the right of the zero cell.
However, any unit cell of the periodic
structure may be taken as the zero cell. Then, by making use of the
transfer matrix which connects solutions in the zero cell with those in the
regions to its left, one can calculate the functions $\Psi_{\cal E}(x;E)$ on
the whole axis $Ox$.
Since both functions $\Psi_{\cal E}(x;E)$ are current-carrying, their modulus
increases without bound in the classically inaccessible region as $x\to -\infty$,
in accordance with the general properties of the one-dimensional
Schr\"{o}dinger equation. Thus, for infinite structures, the $\Psi_{\cal E}(x;E)$
themselves are not solutions of the problem. However, for any $E$ (excluding a countable
set for certain values of $\Delta$), the (non-degenerate) solution for
the infinite structure can be obtained as a linear combination of these
functions. As a result we arrive at two important conclusions. First,
for bounded periodic potentials the energy spectrum of an
electron in the problem for infinite structures is continuous (so that,
within this model, the Wannier-Stark states may exist only as
quasi-stationary ones). Second, the stationary wave functions of an
electron in the infinite structures, being linear combinations
of the functions $\Psi_{\cal E}(x;E)$, do not satisfy the symmetry condition (\ref{2}).
(There exists a mistaken opinion that the continuity of the spectrum in
this problem is obvious. The following arguments are usually given:
the energy spectrum is continuous since
a) the region of large $x$ is classically accessible for the
electron;
b) the periodic potential is negligible in comparison with the Stark
potential as $x\to \infty$, and, consequently, the electron motion in
this region is of the free-electron type (see, for example, \cite{Wan}).
However, the first statement is valid only if
$V(x)$ remains finite at plus infinity. If $V(x)\to -\infty$ as
$x\to \infty$, then the electron spectrum may be either continuous or
discrete, depending on the monotonicity and the rate of decrease of
$V(x)$ as $x\to \infty$ (see, for example, \cite{Tic}). The
erroneousness of the second argument also follows from this, because on the
whole axis $Ox$ the derivatives of $V(x)$ (and, hence, its
monotonicity) are determined by the periodic component of the
potential.)
It is also interesting to dwell on the connection of the
given problem to the ZFP. We start from the fact that the wave
functions $\Psi_{\cal E}(x;E)$ are defined in terms of the solutions of the auxiliary
equation (\ref{21}), which formally describes the electron motion in the
periodic structure in the absence of an electric field. In addition,
for finite structures the functions $\Psi_{\cal E}(x;E)$ are, by their properties,
close to the solutions of the ZFP if $N\Delta\ll E$ ($N$ is the number
of unit cells in the structure). In particular, if the values of $E$ lie in
a band, then $\Psi_{\cal E}(x;E)$, in the given interval, are close to the
usual Bloch functions. This is the case when the electric field has a
weak effect on an electron with energy $E$. However, there is no
transition from the given problem to the ZFP when the periodic
structure is considered on the whole axis $Ox$: the wave functions $\Psi_{\cal E}(x;E)$
are unbounded as $x\to -\infty$ for any value of the electric-field
strength.
Some comments should also be made about the role of
Zener tunneling (ZT), which has been the subject of great interest (see,
for example, \cite{Ro2} and references therein) since paper
\cite{Zen}. Strictly speaking, this concept denotes electron
transitions between the bands and therefore relates to the
non-stationary case. In the models with a vector potential, Zener
tunneling is caused by the accelerating effect of the field,
with the result that a Bloch electron passes (tunnels) from the
lower bands to the upper ones.
In our approach we investigate stationary states. Nevertheless, we can
draw some conclusions on this question. This is possible because the symmetry
condition (\ref{16}), governing the functions $\Psi_{\cal E}(x;E)$, links their $E$-
and $x$-dependencies. In particular, relationship
(\ref{15}) is valid for $\Psi_{\cal E}(x;E)$. Note also that $\tilde{{\cal A}}_\infty$ (see
(\ref{27})) is a bounded non-zero quantity. This provides the asymptotics
${\cal A}_n\sim n^{-1/4}$ and ${\cal A}_0(E)\sim E^{-1/4}$ (as for the Airy
functions). Thus the probability that the electron is in the $n$-th
unit cell, or that it has the energy $E$, decreases with increasing $n$ or $E$
according to a power law rather than an exponential one. This result
makes the conclusion presented in Ref. \cite{Ao1} more precise.
It also follows from the above that the well-known Bloch
oscillations can exist only as decaying ones. As for the
experimental evidence of long-lived Bloch oscillations and
Wannier-Stark ladders in superlattices, this does not call the
correctness of our approach into question; it implies only that one needs
a mathematical model better suited to the experiments
on superlattices. In the following paper we are going to present such a
model.
\section*{Acknowledgment}
The author thanks Professor G.~F.~Karavaev for useful discussions.
\newpage
# JEE Main Geometric Progression Previous Year Questions With Solutions

Past year JEE Main solved problems on geometric progression are available here. A geometric sequence is a sequence of numbers in which the ratio of consecutive terms is always constant. This constant is called the common ratio, represented as "r". The value of the common ratio determines whether the G.P. is decreasing, increasing, positive or negative. In this section, students can find a list of solved questions asked in previous year JEE Main exams.

### Important Points

- Definition: a geometric progression is a sequence in which each succeeding term is r times the preceding term.
- Representation: a, ar, ar^2, ..., ar^(n-1), ar^n, where a = first term and r = common ratio.
- Formula for the common ratio: r = a_n / a_(n-1)
- General (nth) term: a_n = ar^(n-1)
- Sum of n terms: S_n = a(1 - r^n)/(1 - r) if r < 1, and S_n = a(r^n - 1)/(r - 1) if r > 1

## JEE Main Past Year Questions With Solutions on G.P.

Question 1: a, b, c are in G.P. and a + b + c = bx. Then x cannot be

(a) 2 (b) -2 (c) 3 (d) 4

Solution: Let the terms of the G.P. be b/r, b, br. Then b/r + b + br = bx, so x = 1 + r + 1/r. By the A.M.–G.M. inequality, r + 1/r ≥ 2 or r + 1/r ≤ -2, i.e. x - 1 ≥ 2 or x - 1 ≤ -2, so x ≥ 3 or x ≤ -1. Hence x cannot be 2.

Question 2: Let a_1, a_2, a_3, ... be a G.P. such that a_1 < 0, a_1 + a_2 = 4 and a_3 + a_4 = 16. If $\sum_{i=1}^{9} a_i = 4\lambda$, then λ is equal to

(a) 171 (b) 511/3 (c) -171 (d) -513

Solution: a_1 + a_2 = 4 gives a(1 + r) = 4, and a_3 + a_4 = 16 gives ar^2(1 + r) = 16, so 4r^2 = 16 and r = ±2. If r = 2 then a = 4/3, which is impossible since a_1 < 0; hence r = -2 and a = -4. Then

$\sum_{i=1}^{9} a_i = \frac{a(r^9 - 1)}{r - 1} = \frac{-4((-2)^9 - 1)}{-3} = \frac{4}{3}(-512 - 1) = 4(-171)$,

so λ = -171.

Question 3: Let a_n be the nth term of a G.P. of positive terms. If $\sum_{n=1}^{100} a_{2n+1} = 200$ and $\sum_{n=1}^{100} a_{2n} = 100$, then $\sum_{n=1}^{200} a_n$ is equal to

(a) 300 (b) 175 (c) 225 (d) 150

Solution: Let the G.P. be a, ar, ar^2, ... Then

200 = a_3 + a_5 + ... + a_201 = ar^2 + ar^4 + ... + ar^200 = ar^2(r^200 - 1)/(r^2 - 1) ... (1)

100 = a_2 + a_4 + ... + a_200 = ar + ar^3 + ... + ar^199 = ar(r^200 - 1)/(r^2 - 1) ... (2)

Dividing (1) by (2) gives r = 2. Adding the two sums,

a_2 + a_3 + ... + a_201 = 300 ⇒ r(a_1 + a_2 + ... + a_200) = 300 ⇒ 2(a_1 + a_2 + ... + a_200) = 300,

so $\sum_{n=1}^{200} a_n = 150$.

Question 4: If x, y, z are in G.P. and a^x = b^y = c^z, find the relation between a, b and c.

Solution: Since x, y, z are in G.P., y^2 = xz. Put a^x = b^y = c^z = m; then x log a = y log b = z log c = log m, so x = log_a m, y = log_b m, z = log_c m. Substituting into y^2 = xz:

(log_b m)^2 = log_a m · log_c m ⇒ log_b m / log_a m = log_c m / log_b m ⇒ log_b a = log_c b.

Question 5: Consider an infinite G.P. with first term a and common ratio r; its sum is 4 and its second term is 3/4. Find a and r.

Solution: Here a/(1 - r) = 4 and ar = 3/4. Dividing, r(1 - r) = 3/16, i.e. 16r^2 - 16r + 3 = 0, so (4r - 3)(4r - 1) = 0 and r = 1/4 or 3/4, giving a = 3 or 1 respectively. Thus (a, r) = (3, 1/4) or (1, 3/4).

Question 6: If b is the first term of an infinite G.P. whose sum is 5, then b lies in the interval

(a) (-∞, -10) (b) (10, ∞) (c) (0, 10) (d) (10, 0)

Solution: Let the common ratio be r. For the infinite series to converge, -1 < r < 1, and b/(1 - r) = 5, so b = 5(1 - r). Since 0 < 1 - r < 2, we get 0 < b < 10, i.e. b lies in the interval (0, 10).

Question 7: The fifth term of a G.P. is 2. Find the product of its first nine terms.

Solution: Given a_5 = ar^4 = 2. The product of the nine terms is a · ar · ar^2 · ... · ar^8 = (ar^4)^9 = 2^9 = 512.

Question 8: Let a, b, c be positive integers such that b/a is an integer. If a, b, c are in G.P. and the arithmetic mean of a, b, c is b + 2, then the value of (a^2 + a - 14)/(a + 1) is

(a) 3 (b) 4 (c) 6 (d) 5

Solution: Here b/a = c/b, so c = b^2/a, and (a + b + c)/3 = b + 2 gives a - 2b + c = 6. Substituting the value of c: a - 2b + b^2/a = 6, which can be written as (b/a - 1)^2 = 6/a. Since b/a is an integer, (b/a - 1)^2 is a perfect square, so 6/a must be a perfect square; this forces a = 6. Then (a^2 + a - 14)/(a + 1) = 28/7 = 4.

Question 9: If the 4th, 7th and 10th terms of a G.P. are a, b, c respectively, then the relation between a, b, c is

(a) b = (a + c)/2 (b) a^2 = bc (c) b^2 = ac (d) c^2 = ab

Solution: With first term A and common ratio r: a = T_4 = Ar^3, b = T_7 = Ar^6, c = T_10 = Ar^9. Then b^2 = A^2 r^12 and ac = Ar^3 · Ar^9 = A^2 r^12, so b^2 = ac.

Question 10: The sum of an infinite G.P. is 20 and the sum of the squares of its terms is 100. The common ratio of the G.P. is

(a) 5 (b) 3/5 (c) 8/5 (d) 1/5

Solution: Let the G.P. be a, ar, ar^2, ... The sum of the infinite series is a/(1 - r) = 20, so a = 20(1 - r) ... (i). The sum of the squares is a^2 + a^2 r^2 + a^2 r^4 + ... = a^2/(1 - r^2) = 100, so a^2 = 100(1 - r^2) ... (ii). Substituting (i) into (ii): 400(1 - r)^2 = 100(1 - r)(1 + r), hence 4 - 4r = 1 + r and r = 3/5.
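The arithmetic in the G.P. solutions above (Questions 2 and 10) is easy to verify mechanically. A brute-force check (not part of the original solutions; variable names are ours):

```python
# Question 2: r = -2, a = -4, so the sum of the first nine terms is 4 * (-171)
s9 = sum(-4 * (-2) ** i for i in range(9))
assert s9 == 4 * (-171)

# Question 10: r = 3/5 gives a = 20(1 - r) = 8; both stated sums then check out
r10 = 3 / 5
a10 = 20 * (1 - r10)
assert abs(a10 / (1 - r10) - 20) < 1e-9             # sum of the G.P. is 20
assert abs(a10 ** 2 / (1 - r10 ** 2) - 100) < 1e-9  # sum of the squares is 100
```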
The siege of Leith ended a twelve-year encampment of French troops at Leith, the port near Edinburgh, Scotland. The French troops arrived by invitation in 1548 and left in 1560 after an English force arrived to attempt to assist in removing them from Scotland. The town was not taken by force and the French troops finally left peacefully under the terms of a treaty signed by Scotland, England and France.
Background
The Auld Alliance and Reformation of religion
Scotland and France had long been allies under the "Auld Alliance", first established in the 13th century. However, during the 16th century, divisions appeared between a pro-French faction at Court and Protestant reformers. The Protestants saw the French as a Catholic influence and, when conflict broke out between the two factions, called on English Protestants for assistance in expelling the French from Scotland.
In 1542, King James V of Scotland died, leaving only a week-old daughter who was proclaimed Mary, Queen of Scots. James Hamilton, Earl of Arran, was appointed Regent and agreed to the demand of King Henry VIII of England that the infant Queen should marry his son Edward. This policy was soon reversed, however, through the influence of Mary's mother Mary of Guise and Cardinal Beaton, and Regent Arran rejected the English marriage offer. He then successfully negotiated a marriage between the young Mary and François, Dauphin of France.
War of the Rough Wooing
The English King Henry VIII, angered by the Scots reneging on the initial agreement, made war on Scotland in 1544–1549, a period which the writer Sir Walter Scott later christened the "Rough Wooing". In May 1544 an English army landed at Granton and captured Leith to land heavy artillery for an assault on Edinburgh Castle, but withdrew after burning the town and the Palace of Holyrood over three days. Three years later, following another English invasion and victory at Pinkie Cleugh in 1547, the English attempted to establish a "pale" within Scotland. Leith was of prime strategic importance because of its vital role as Edinburgh's port, handling its foreign trade and essential supplies. The English arrived in Leith on 11 September 1547 and camped on Leith Links. The military engineer Richard Lee scouted around the town on 12 September looking to see if it could be made defensible. On 14 September the English began digging a trench on the south-east side of Leith near the Firth of Forth. William Patten wrote that the work was done as much for exercise as for defence, since the army only stayed for five days.
In response to the English invasion the Scottish Court looked to France for assistance, and on 16 June 1548 the first French troops arrived in Leith, soon to total 8,000 men commanded by André de Montalembert sieur d'Esse. The infant Queen Mary was removed to France the following month and the English cause was effectively lost. Most of their troops had left by the end of 1549.
In the following years the French interest became dominant in Scotland with increasing numbers of French troops concentrated in Haddington, Broughty Castle, and Leith.
From 1548 onwards work began fortifying the port of Leith initially with a bulwark at the Kirkgate and at the chapel by the harbour, perhaps designed by the Italian Migliorino Ubaldini. The rest of the new fortifications were almost certainly designed by another Italian military engineer, Piero di Strozzi, and these represent the earliest use of the trace italienne style of artillery fortification in Britain. In August 1548 Strozzi directed the 300 Scottish workmen from a chair carried by four men because he had been shot in the leg at Haddington. In 1554, Mary of Guise, the Catholic French widow of James V, was appointed Regent in place of the Earl of Arran, who had been made Duke of Châtellerault by Henry II of France. Guise continued the pro-French policy, appointing Frenchmen to key positions. In September 1559 she continued to improve the fortification at Leith with works which were probably designed by Lorenzo Pomarelli, an Italian architect and military engineer.
The Reformation crisis
Meanwhile, the Protestant Scots became increasingly restless, particularly after the marriage of Mary and François in 1558. A group of noblemen, styling themselves the Lords of the Congregation, appointed themselves leaders of the anti-French, Protestant party, aligning themselves with John Knox and other religious reformers. They raised 12,000 troops in an attempt to oust the French from Scotland. Arran changed sides, joining the Lords of the Congregation. Meanwhile, Henry II of France was accidentally killed in a jousting tournament and Mary's husband became King of France on 10 July 1559.
During 1559 the Lords of the Congregation dominated most of central Scotland and entered Edinburgh, forcing Mary of Guise to retreat to Dunbar Castle. However, with the aid of 2,000 French troops, she regained control of the capital in July. A short-lived truce was made with the Articles of Leith on 25 July 1559. Guise received further military aid from France, thanks to the influence of Jacques de la Brosse and the Bishop of Amiens. The Lords considered this assistance a breach of the Leith articles. Châtellerault wrote to summon other Scottish lords at the start of October 1559 to resolve their situation: ...it is not unknawin how the Franchmen hes begun mair nor 20 dayis to fortifie the toun of Leyth, tending thairthrow to expell the inhabitantis thairoff and plant thame selffis, thair wyffis and bairnis thairintill suppressing the libertie of this realme. Mary of Guise responded by making a proclamation on 2 October and writing to Lord Seton, Provost of Edinburgh, that it was well known that Leith was fortified as a response to the Congregation's intent to come in arms to Edinburgh on 8 October 1559, rather than to accommodate French troops and their families. She wrote "we could do no less than provide ourselves with some sure retreat for ourselves and our company if we were pursued." The French, she said, had not brought their families. At least one English soldier, Hector Wentworth, joined the defenders of Leith.
Landowners affected by the new fortification works were compensated, one merchant William Dawson was granted exemption from any future customs duties for the loss of his building in North Leith. The Lords of the Congregation suspended Guise's regency and appealed to the Protestant Queen Elizabeth for English military support. In response to the situation, Elizabeth appointed the Duke of Norfolk to lead an expedition, and he travelled north to meet the Scots leaders at Berwick, and concluded the Treaty of Berwick. By this treaty England now recognised the Lords of the Congregation as a power in Scotland, and safeguards were agreed for an English military intervention against the French in Scotland with provisions for their withdrawal.
The siege
Preparations for the siege
The French army continued to strengthen the fortifications of Leith during late 1559. The defences included eight projecting bastions, including Ramsay's Fort protecting the harbour, "Little london" at the north-east, and the Citadel at the north-west. Within the walls was a raised platform for guns, called a "cavalier" by the anonymous French journalist of the siege.
At the end of January 1560, an English fleet, under the command of William Wynter, arrived in the Firth of Forth, having sailed north from the naval base at Queenborough Castle in the Thames Estuary. English diplomats claimed Wynter's arrival in the Firth was accidental, and Norfolk told Wynter to act as if he was a maverick with no commission. The ships were sent by William Cecil under the authority of Queen Elizabeth. On 2 February, a proclamation was issued in the name of the Queen of Scots to summon the men of Selkirk and Jedburgh to be ready to mobilise against the "wicked doings of the English ships" in Scottish waters, and the intended invasion of the Merse and East Lothian.
After the Treaty of Berwick provided a framework for an English military incursion, the English made plans to bring the army and guns to Leith. Considering the weather and difficulties of the road into Scotland, on 8 February 1559 Thomas Howard, Duke of Norfolk and Lord Grey de Wilton wrote to the Lords of Congregation from Newcastle; "we find greate difficultie of the cariadge of the same by land at this tyme of the yere, as well by reason of the deepe and foule wayes between Barwick and Lythe, as also that for such a number of cariadges and draught horses as the same doth require can not be had in time, and therefore we suppose the same must of necessity be transported by sea, and the number of footmen also appointed for this journey to be set on land as near unto Lythe as may be convenientlie. And in that case, our horsemen to enter by land as soon as we have intelligence of the landing of our footmen."
In the event, an army of around 6,000 English soldiers, under Lord Grey de Wilton marched from Berwick, arriving in early April to join up with the Scottish Lords. Camping for a night at Halidon Hill, then at Dunglass and Lintonbriggs, the English army were at Prestongrange on 4 April where the lighter artillery pieces for the siege were landed from ships at Aitchison's Haven. Just before this English army arrived, the French raided Glasgow and Linlithgow. The French garrison at Haddington had withdrawn to the prepared position at Leith, swelling the number of French troops there to an estimated 3,000.
Meanwhile, Mary of Guise and her advisors stayed secure in Edinburgh Castle from 28 March 1560. The Keeper of the castle, John Erskine declared it neutral, and this was respected by both sides, and the castle played no part in the conflict. Before the English army arrived at Leith, the commander William Grey of Wilton considered that capturing the castle with the Queen Regent might be a better option. However, the Duke of Norfolk advised him against it, as their proper target was the French soldiery in Leith, not Erskine's Scottish garrison.
Battle of Restalrig
When the Duke of Norfolk arrived at Berwick in January 1560, Mary of Guise's military advisor Jacques de la Brosse wrote to him saying he did not believe the rumour in Edinburgh that Norfolk was Elizabeth's lieutenant-general in Scotland, there to attack the French and favour the rebels, against the peace treaty between Scotland and England. However, that was exactly Norfolk's mission. Norfolk remained at Berwick, instructed that Grey of Wilton was to have charge of "martial affairs" in Scotland, as Grey himself wished, while Ralph Sadler, a long-serving diplomat, was to forward a peaceable settlement with Mary of Guise by diplomacy, liaising with the Duke of Châtellerault and his party. Elizabeth appointed James Croft to be Grey of Wilton's deputy. Ralph Sadler was given Grey's border administrative roles at Berwick upon Tweed.
Grey of Wilton set his camp at Restalrig village on 6 April 1560 and twice offered to parley with Mary of Guise and the French military commander Henri Cleutin, Sieur d'Oysel et Villeparisis via the English Berwick Pursuivant. This offer was refused, and the English herald Rouge Croix was sent to demand that the French withdraw from the field into Leith. Cleutin replied that his troops were on his master and mistress's ground.
Soon after this exchange, fighting broke out at Restalrig with casualties on both sides. Some French mounted arquebusiers who pursued an English detachment were killed on the slopes above Leith, or captured by the sea-shore. Over 100 French casualties were reported, with 12 officers killed and a number of prisoners taken. George Buchanan's and John Hayward's 17th-century histories make the point that the French were trying to secure the high ground to the south of Leith: Hawkhill, the crag (at Lochend), and the chapel, which the French journalist of the siege called the "Magdalene Chapel". Hayward and Mary's secretary John Lesley mentioned that George Howard and James Croft were parleying with Mary of Guise at the spur blockhouse of Edinburgh Castle when the fighting started.
Mount Pelham
Norfolk reported that, "Restaricke Deanrie is so sweete, that our campe lyeth not within halfe a myle and more of our trenches." The English began constructing their siege-works against the town in mid-April. There were trenches on the Hawkhill ridge north of Restalrig and towards Lochend Castle. South of Leith Links, at the latter-day site of Hermitage House, below the Magdalen Chapel on the ridge, there was a fortlet, "Mount Pelham" named after the captain of the pioneers, William Pelham. Pelham led a force of 400 pioneers. The fort faced across the present-day Leith Links towards the eastern side of the town and South Leith Church.
Mount Pelham was developed from a trench dug on the night of 12 April and finished 13 days later as a sconce with four corner bastions. An eyewitness, Humfrey Barwick later wrote that he suggested that Pelham should begin his fort "at the fwte of this hill and run straight to yonder hillocke," presumably meaning by hillock the "Giant's Brae" and "Lady Fyfe's Brae" on Leith Links.
Captain Cuthbert Vaughan was the fortlet's commander with 240 men. Five years earlier, Vaughan and James Croft had been imprisoned as supporters of Lady Jane Grey, and they subsequently took part in Wyatt's Rebellion. Vaughan was killed in 1563, at the siege of Newhaven in France. Both Holinshed and George Buchanan mention the fort was too far from Leith for its cannon to have much effect on the town.
While the English were at work in April the French also constructed and manned entrenchments outside of the main walls encircling the town.
The bombardment
The English had brought some small cannon with them. Holinshed records that the carriages and shot for the large siege guns were landed on 10 April and the guns on the next day. The Scottish chronicle, the Diurnal of Occurrents, notes that 27 heavier English artillery pieces were shipped to "Figgate" at Portobello. On 12 April, the French heard a rumour that the English believed they were underequipped, and their response was to give a salvo from their 42 cannon, killing 16 in the English camp. The large guns were ready on Sunday 14 April, Easter day, and the English bombardment began. The cannon were placed in batteries to the west and south of Leith. According to a later chronicle, the History of the Estate of Scotland, the besiegers' guns were placed at the same distance of "twoe fflight shott" from South Leith church as Mount Pelham. The chronicle calls the location "Clayhills". The English plan of Leith, dated 7 July 1560, marks the position of the "first battery" to the south west of the Church, lying in front of the later gun position called "Mount Somerset", at Pilrig. The French journal also mentions Pilrig as well as the entrenchment at Pelham, and a rumour that the newly arrived English great guns would be placed in the trenches on Hawkhill to the south. The French returned fire from cannon on the steeple of St Anthony's church, Logan's Bulwark, and the Sea Bulwark.
John Lesley, Bishop of Ross, wrote that despite the bombardment, the French commanders and Father Andrew Leich celebrated Easter mass in South Leith Parish Church. During the service a cannonball passed harmlessly in through a window and out of the church door, while outside the air was thick with broken stone and plaster. This story was omitted from the contemporary Scots Language manuscript of Lesley's History.
Mount Pelham overwhelmed
The next day, 16 April, according to the French journal of the siege, 60 French cavalry and 1,200 foot soldiers overwhelmed the unfinished English position at Mount Pelham and spiked four cannons, killing 200 men and taking officers as prisoners. Arthur Grey, the son and biographer of Grey de Wilton, who was commander of a company of demi-lance horsemen, was shot twice, but was not in danger of losing his life. The French were repulsed and Norfolk reported 150 killed on both sides. Humfrey Barwick blamed Arthur Grey's injury on William Pelham not securing the position properly while the fortlet was under construction.
According to a poem by Thomas Churchyard, a Scotswoman initiated this attack by signalling an opportunity to the French. She came with Scottish victuallers to the English position and made her sign from a crag where a cannon had been placed. This story may refer to the existing mounds near the site of Mount Pelham called the "Giant's Brae" and "Lady Fyfe's Brae". The Leith historian Alexander Campbell, writing in 1827, regarded the mounds as important monuments of the siege, writing that the eastern mound took its name from "Lady Fife's Well", and children called the larger mound by the Grammar School "the Giant's Brae". This was repeated by D.H. Robertson, and the 1852 Ordnance Survey map marked the Giant's Brae as (the remains of) Somerset's Battery with Lady Fyfe's Brae as (the remains of) Pelham's Battery. A more recent historian, Stuart Harris, dismissed the assertion that these mounds were siegeworks rather than natural hillocks, stating that the belief was a "spurious tradition".
The Diurnal of Occurrents records another attack on the completed Mount Pelham on 18 June by 300 French soldiers who were chased back to Leith by 30 English cavalry. Forty French were killed, seven captured, and the English lost their trumpeter.
Mount Somerset, Mount Falcon, and Byer's Mount
At the end of April the siege works were extended westwards and a new emplacement built in the vicinity of the later Pilrig House, named "Mount Somerset" after Captain Francis Somerset, whom Thomas Churchyard identifies as the brother of the Earl of Worcester. Among the Scots recorded at the siege at this time, on 27 April 1560 with a tent (palzoun) at the Water of Leith, were Robert and John Haldane of Gleneagles.
The works were continued further west and north, across the Water of Leith to Bonnington, where a series of batteries were established. "Mount Falcon" was built after 7 May 1560 and, according to John Lesley, commanded the houses on the Shore quayside. The battery was placed west of a bend in the Water of Leith, near the intersection of South Fort Street and West Bowling Green Street. A position to the north with a single cannon is marked "Byere Mownt" on the Petworth map. Stuart Harris locates the gun's position near the intersection of the present-day Ferry Road and Dudley Avenue South.
The completed emplacements stretched around the fortified town, with six gun sites set back from the Leith ramparts. Mounts Pelham and Somerset, named after their officers, were both large temporary forts with high ramparts. Apart from the foot soldiers, there were, on 25 May, 145 English artillery-men, with 750 English and 300 Scottish pioneers or labourers working on the fortifications, and 468 men looking after work-horses.
7 May – an English defeat
Elizabeth and her secretary William Cecil were exerting pressure on Norfolk for a result at Leith. To show that progress was being made, Norfolk started forwarding Grey's dispatches and apologising for his depute's "humour", asking that Elizabeth should send Grey a letter showing her thanks. Norfolk brought in expert military advisors, Sir Richard Lee and his own cousin Sir George Howard, who Norfolk believed would bring the siege to a rapid conclusion. Norfolk wrote to William Cecil on 27 April that it was a shame to have "to lie so long at a sand wall."
It was planned to storm the town before daybreak on 7 May. In early May cannon were deployed to make a substantial breach in the western ramparts. The assault was to be carried out in two waves, the first at 3.00 am by 3,000 men, the second by 2,240, with a further 2,400 holding back to keep the field. William Winter would wait for a signal to land 500 troops on the quayside of the Water of Leith at the Shore inside the town. As a diversion, Cuthbert Vaughan's 1,200 men with 500 Scotsmen were to attack from the south, crossing Leith Links from Mount Pelham. James Croft's men would assault from the north-west, presumably at low-tide.
There was an accidental fire in Leith on 1 May which burnt in the south-west quarter. The next evening Grey planted his battery against the west walls and started firing before 9.00 am, writing to Norfolk that his gunners had not yet found their mark. The next day, Grey was worried that the French had effected repairs so the town appeared even stronger. He continued with the bombardment and ordered his captains to try small-scale assaults against the walls to gather intelligence. Cuthbert Vaughan measured the ditch and ramparts for making scaling ladders.
The attempt was now scheduled for 4.00 am on Tuesday 7 May and by two hours past daylight the English were defeated. Although there were two breaches, the damage to the walls was insufficient. None of the flanking batteries were disabled, and the scaling ladders were too short. The result was heavy losses estimated at 1000 to 1500 Scots and English. A report by Peter Carew estimated a third of the dead were Scottish. However, Carew's total of six-score dead, which was followed by George Buchanan, is roughly a tenth of the other reports. The accountant and victualler of Berwick, Sir Valentine Browne noted there were 1,688 men unable to serve, still on the payroll, hurt at the assault or at various other times, and now sick or dead. The author of the Diurnal of Occurrents put the total number slain at 400. Humfrey Barwick was told the French collected the top-coats of the English who had reached and died on the walls, and 448 were counted. The French journal claims only 15 defenders were killed. John Knox and the French journal attributed some of the casualties to the women of Leith throwing stones from the ramparts.
According to Knox, Mary of Guise surveyed her victory from the fore-wall of Edinburgh Castle with some pleasure, comparing the English dead laid on the walls of Leith to fair tapestry, laid out to air in the sun:
The Frenche, prowd of the victorie, strypit naikit all the slayne, and laid thair carcassis befoir the hot sune alang thair wall, quhair thay sufferit thame to lye ma dayis nor ane, unto the quhilk, quhen the Quene Regent luikit, for myrth sche happit and said, "Yonder are the fairest tapestrie that I ever saw, I wald that the haill feyldis that is betwix this place and yon war strewit with the same stuiffe."
Knox thought James Croft had not wholeheartedly played his part. Carew heard that Croft should have attacked a breach in the Pale: instead his men "ran up between the Church and the water." Norfolk blamed Croft, who he believed colluded with Guise, later writing, "I thought a man could not have gone nigher a traitor than Sir James, I pray God make him a good man." Richard Lee made a map of Leith, which Norfolk sent to London on 15 May. This map or "platte" was perhaps made as much for the enquiry into the 7 May events as for future works. Elizabeth read Carew and Valentine's reports and sent them to William Cecil with instructions to keep them safe and secret.
Mines and code
Now diplomatic efforts for peace were re-doubled, but the siege was tightened. The English brought specialists from Newcastle upon Tyne to dig mines towards the fortifications. Mary of Guise, who was very ill by this time, wrote a letter to d'Oysel asking him to send her drugs from Leith. This letter was passed to Grey of Wilton who was suspicious because medicines could be easily found in Edinburgh. According to John Knox, he held the letter in the heat of a fire and discovered a message in invisible ink. Grey threw the letter on the fire. The French journal of the siege puts the story on 5 May, and says that Guise required ointment from one Baptiste in Leith, and the secret cipher on the back of the letter was "insert the notice of the English enterprise and other matters." Grey spoilt the letter looking for the secret writing and could not return it to James Drummond, the trumpet messenger.
Coded letters were carried out of Leith by another soldier, a drummer messenger of the Lords of the Congregation. First, Captain Sarlabous got him to take a note to a lady-in-waiting of Mary of Guise which had a secret cipher on the back. On 9 May he took a message with a handkerchief containing information about the English mines.
Mary of Guise sent letters to d'Oysel describing what her spies had found out about these works. On 19 May she wrote in code that the English were mining at the Citadel, St Anthony's Flanker, and the Mill Bulwark. The English were confident that their mines would be deeper than any French counter-mines. Guise now found it more difficult to send her letters into Leith, and this one was captured and deciphered.
The English ambassador in France, Nicholas Throckmorton, discovered that Mary of Guise had obtained details of the plans for the 7 May assault. She had also changed her ciphers. Throckmorton intercepted a letter meant for Jacques de la Brosse from Mary of Guise's brothers. He hoped to infiltrate his agent Ninian Cockburn into Leith posing as the messenger. He gave Ninian, a captain in the Garde Écossaise, the alias "Beaumont".
By 18 June 1560, after Mary of Guise had died, the French at Edinburgh Castle realised their cipher was in English hands, and they advised the Leith garrison to continue to use the code in letters that might be captured, to spread disinformation that would be advantageous in the ongoing peace negotiations. The coded advice letter itself was intercepted by the English and deciphered. It also suggested the use of fire signals to advertise how much longer they could last, as food was short, for the benefit of the French diplomats at Edinburgh Castle. Signal beacons were to be lit on St Anthony's Church or the Citadel or both, half an hour before midnight.
Hunger in Leith
On 8 May, after the assault, Grey sent Francis Killinghale to London carrying a detailed analysis of the situation. Grey was worried about deserters "stealing" back into England, but he thought that with reinforcements he could take the town by storm, or enclose it and starve out the garrison, as there was already "great scarcity" within. Ralph Sadler also wrote of desertions and the weariness of the besiegers. The French continued to make sallies from the town, despite their dwindling provisions. The besiegers, conversely, were supplied with more troops and provisions from England and Scotland.
Grey described his men killing 40 or 50 French soldiers and others who came out of the town to gather cockles and periwinkles on 13 May. The French journalist wrote of the same event, relating that some of the hungry townspeople went out to collect shell-fish and were attacked by the English. A little French boy taken on the shore was brought to Grey of Wilton. When asked if they had enough food for a fortnight, the boy said he had heard the captains say the English would not take the town by famine or force for four or five months yet. Raphael Holinshed puts this event on 4 July, saying that Grey first issued a warning to d'Oysel about the cockle-pickers.
The 17th century writer John Hayward gave a description of famine in the town based on the account of an English prisoner in Leith called Scattergood. He said the inhabitants and troops were reduced to eating horses, dogs, cats and vermin, with leaves, weeds and grass, "seasoned with hunger":
"Hereupon they grewe very short in strength of men, and no lesse short in provision of foode for those men which they had; the one happeninge to tress for them by the force of their enimies, the other either by disabilitie or negligence of their freinds; so, their old stoore beinge spent, they were inforced to make use of every thinge out of which hunger was able to drawe nourishement. The fleshe of horses was then more daintie then ever they esteemed venison before; doggs, catts, and vermine of more vile nature were highelie valued; vines were striped of their leaves and tender stalkes; grasse and weedes were picked up, and, beinge well seasoned with hunger, were reputed amonge (them) for dainties and dilicate dishes."
Holinshed mentions Hayward's source, Scattergood, as a spy who entered Leith pretending to be a fugitive or deserter. Peter Carew reported on 28 May 1560 that the French had no meat or drink except water for three weeks. There was only bread and salted salmon. These were rationed with 126 ounces of bread for a man each day and a salmon between six men each week. There were 2,300 French soldiers in Leith and more than 2,000 others.
After Mary of Guise died, a week's truce was declared on Monday 17 June. On 20 June, French and English soldiers ate together on the beach. Captain Vaughan, Andrew Corbett, Edward Fitton and their men brought beef, bacon, poultry, wine and beer: the French brought cold roast capon, a horse pie and six roast rats. William Cecil and Nicholas Wotton thought reports of a lack of food in Leith were exaggerated. The French had access to fresh fish and had two fishing boats with nets. They had been able to send provisions to Inchkeith. The ordinary townsfolk however had been driven to extremity, forced "to seek their living by cockles and other shellfish upon the sea sands".
Treaty of Edinburgh
After the English defeat on 7 May, peace talks progressed with a dinner at Edinburgh Castle on 12 May for Mary of Guise and the Lords of the Congregation, but negotiations failed the next day when the French commanders in Leith were not permitted to come to the Castle and meet Guise to discuss the proposals. A fresh attempt at negotiations began in June. Commissioners, including the Count of Randon and the Bishop of Valence for the French, and William Cecil and Nicholas Wotton for the English, arrived in Edinburgh, only to find that Mary of Guise, Regent of Scotland, had died at Edinburgh Castle on 11 June.
Her death demoralised the French, and the commissioners agreed a week's armistice on 17 June. This ended on 22 June, but the only further military action was a skirmish on 4 July. Peace was agreed shortly after and proclaimed on 7 July in the names of Elizabeth, Queen of England, and François and Mary, King and Queen of France and Scotland.
The peace became known as the Treaty of Leith or the Treaty of Edinburgh. It secured the withdrawal of both French and English troops from Scotland and effectively dissolved the Auld Alliance. By 17 July the foreign soldiers had left the city. The total number of French evacuated from Scotland to Calais under William Winter's supervision was 3,613 men, 267 women, and 315 children—in all 4,195 with Lord Seton and the Bishop of Glasgow. The terms of the treaty allowed 120 French soldiers to remain at Inchkeith and Dunbar, although the defences of Leith were to be immediately demolished. New outworks at Dunbar Castle, which were still being completed by an Italian military engineer in May, were scheduled for demolition. The defences of Leith were slighted by English soldiers on 15 July and some strong points or bulwarks undermined.
A key term was that François and Mary should cease using the style and arms of the King and Queen of England. As Catholics, they regarded Elizabeth, daughter of Anne Boleyn, as illegitimate, leaving Mary herself as the rightful Queen. Their use of the English royal arms led the French to dub the campaign the "War of the Insignia". Queen Mary never ratified the agreement, since by doing so she would have acknowledged Elizabeth as rightful Queen of England, and she did not wish to relinquish her own claim to the English throne.
Edinburgh's town treasurer paid for the Shore of Leith to be cleaned after the evacuation, and a gun found in the ditches was taken to Edinburgh. A ship scuttled by the French to block the harbour of Newhaven was floated off in September 1560 over two successive high tides by men working from small boats. Two hundred Scottish workmen were working to remove the fortifications.
Legacy
The School of War
As this was the first military conflict of the reign, Elizabethan writers called the siege the "School of War", a title used by Thomas Churchyard for his poem narrating the action of the siege. Churchyard describes a Scottish woman who signalled to the French from a gun emplacement on Hermitage rock before defeat on 7 May 1560.
Among our men, might Scottish vittlers haunt
Who with the French a treason tooke in hand
A wife, a queane, did make the French a grant
Upon this rock in sight of Leith to stand :
And there to make a sign to Dozie's band,
When that the ward were careless and at rest
Which she did keep, her self the same confesssed.
Churchyard also wrote that the French tried to take Pelham's mount disguised as serving women:
By deep foresight, a mount there was devised
Which bare the name of Pelham for the space
I had forgot, how Frenchmen came disguised
In women's weeds, like queanes with muffled face
They did no act, but soon they took the chase.
The 17th-century playwright William Sampson set his The Vow-Breaker, or The Fayre Maid of Clifton around the soldiers recruited for Leith from Nottinghamshire under Captain Jervis Clifton. The Vow-Breaker, published in 1636, contains much historical detail. It is written as if it was performed at Nottingham Castle in September 1562 for a meeting between Elizabeth and Mary, Queen of Scots, which never took place. The 450th anniversary in 2010 saw a celebration of the end of the siege with performances in Leith of a new play telling the story.
Archaeology and fortifications
There is still significant evidence of the fortifications built by the French and batteries built by the English, and new examples were uncovered in 2001, 2002 and 2006. The site of the Mount Falcon battery near Byer's Mount is marked by a plaque, and the two mounds on Leith Links are scheduled monuments.
The ramparts were ordered to be demolished at the conclusion of the siege by Edinburgh townsfolk on the orders of the Lords and Burgh council to, "make blockhouse and curtain equal with the ground." Progress was slower than English observers wanted, and in August 1560 Little London and Loggan's bulwark were still "clean whole".
Some repairs to the walls were made in 1572 during the "War between Leith and Edinburgh" using turf called "faill" in the accounts. In April 1594 supporters of Francis Stewart, 5th Earl of Bothwell, rebels against James VI of Scotland, repaired the fortifications.
On 20 March 1639 Lord Newburgh reported on the activities of the Scottish Covenanters at Leith, where women were working on the walls; "they work hard at their new fortification at Leith, where the ladies and women of all sorts serve with wheelbarrows and baskets". Women were rarely recorded in manual work on Scottish building sites, the other examples are at Dumbarton Castle (1620) and Inchkeith (1555).
A part of the ramparts and the Citadel at the site of St Nicholas's Church at the north-west were reconstructed during the war of the Three Kingdoms in 1649. The master mason John Milne obtained stones from the demolition of houses that were adjacent to the walls of Edinburgh and from the Spur fortification at Edinburgh Castle. The renewed fortifications were held for Charles II, as King of Scots. Leith and the Citadel were bombarded by Rear-Admiral Captain Hall on 29 July 1650 from the Liberty, the Heart frigate, the Garland and the Dolphin.
In the 19th-century Great Junction Street and Constitution Street were laid along the line of the southern and eastern walls respectively.
See also
History of Scotland
Scottish Reformation
Notes
References
Campbell, Alexander, The History of Leith from earliest accounts to the present period, Leith (1827)
Pollard, Tony, 'The Archaeology of the Siege of Leith, 1560,' in Journal of Conflict Archaeology, vol. 4, Numbers 1-2, (2008), pp. 159–188, or, in, Pollard, Tony & Banks, Iain, ed., Bastions and barbed wire, (2009), 159-188
Robertson, David H., The Sculptured Stones of Leith, Edinburgh (1851)
Published primary sources for the siege of Leith and the Scottish reformation include the following:
Dickinson, Gladys, ed., 'A Journal of the Siege of Leith,' in Two Missions of Jacques de la Brosse, SHS (1942)
Bain, Joseph, ed., Calendar of State Papers, Scotland 1547–1563, vol. 1, Edinburgh (1898)
Bruce, John, ed., Annals of the first four years of Queen Elizabeth by Sir John Hayward, Camden Society (1840), pp.44-73
Laing, David, ed., 'John Knox's 'History of the Reformation', Book 3,' The Works of John Knox, vol. 2, Bannatyne Club, Edinburgh (1848)
Stevenson, Joseph, ed., Calendar State Papers Foreign Elizabeth, vol.2, 1559-60, London, Longman (1865)
Stevenson, Joseph, ed., Calendar State Papers Foreign Elizabeth, vol.3, 1560-1, London, Longman (1865)
Haynes, Samuel, ed., A Collection of State Papers left by William Cecil, 1542-1570, London (1740), pp.242-360, calendared in HMC (1883)
Historical Manuscripts Commission, HMC, Manuscripts of the Marquis of Salisbury at Hatfield House, vol. 1, HMSO (1883)
Clifford, Arthur, ed., State Papers and Letters of Sir Ralph Sadler, Edinburgh (1809), 697-732
Forbes, Patrick, ed., A Full View of the Public Transactions of Queen Elizabeth, vol. i, London (1740) correspondence of the English ambassador in Paris
'The Historie of the Estate of Scotland, 1558-60,' in The Wodrow Society Miscellany, vol.1 (1844) pp.49-86
Of these, the eyewitness French journal in Two Missions is essential reading; John Knox's History of the Reformation gives another contrasting contemporary account. Holinshed gives a concise version from an English viewpoint.
Holinshed, Raphael, The Scottish Chronicle, vol. 2, Arbroath (1805), pp.290-309 (double volume)
Holinshed, Raphael, Chronicles of England, Scotland, and Ireland, vol.4, London (1808), pp.189-201
Thomson, Thomas, ed., John Lesley's History of Scotland, from the death of King James I. in the year M.CCCC.XXXVI to the year M.D.LXI, Bannatyne Club (1830)
Thomson, Thomas, ed., A Diurnal of Remarkable Occurrents in Scotland, 1513-1575, (1833)
External links
Map of Leith circa 1681, by Captain Grenville Collins, showing fortifications as renewed in 1649-50, National Library of Scotland map site.
Robert Kirkwood's Ancient Map of Edinburgh & Environs, (1817) shows the site of the walls of Leith as embankments and gardens. NLS maps.
Robert Kirkwood, Map of Edinburgh & Environs, (1817) shows Restalrig, Lochend, Hawkhill & Hermitage locations. NLS maps.
Edinburgh Evening News report on fortifications excavated in Junction Street, 20 April 2012
Q: How could I get the total number inside a Java XML tag? For the XML file:
<?xml version="1.0"?>
<EXAMPLE DATE="20160830">
<SUB NUM="1">
<NAME>Peter</NAME>
</SUB>
<SUB NUM="2">
<NAME>Mary</NAME>
</SUB>
</EXAMPLE>
After I set up a NodeList to check the document,
I want to count the "NAME" tags in each "SUB NUM=[x]" group.
The code I have so far:
NodeList nList = doc.getElementsByTagName("SUB"); // doc has been set correctly and retrieved successfully
nList.getLength() will return "2" because the XML has two tags named "SUB", but I want to check each group on its own.
Any idea how I could get the length like:
SUB NUM [1] Found: [1] Length with tag name: [NAME]
SUB NUM [2] Found: [1] Length with tag name: [NAME]
A: This can be done as follows. The hashmap printing uses Java 8 syntax; if you are not on Java 8, you can iterate the map normally and print.
import java.io.StringReader;
import java.util.HashMap;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;
public class NumCountHandler extends DefaultHandler {
private HashMap<String, Integer> countOfNum = new HashMap<String, Integer>();
@Override
public void startElement(String uri, String localName, String qName, Attributes attributes) throws SAXException {
if (qName.equalsIgnoreCase("SUB")) {
String attributeNum = attributes.getValue("NUM");
// Count each SUB element, keyed by its NUM attribute (Java 8 merge).
countOfNum.merge(attributeNum, 1, Integer::sum);
}
}
public static void main(String[] args) {
try {
String xml = "<EXAMPLE DATE=\"20160830\"> <SUB NUM=\"1\"> <NAME>Peter</NAME> </SUB> <SUB NUM=\"2\"> <NAME>Mary</NAME> </SUB></EXAMPLE>";
InputSource is = new InputSource(new StringReader(xml));
SAXParserFactory factory = SAXParserFactory.newInstance();
SAXParser saxParser = factory.newSAXParser();
NumCountHandler userhandler = new NumCountHandler();
saxParser.parse(is, userhandler);
userhandler.countOfNum
.forEach((k, v) -> System.out.println("SUB NUM [" + k + "] Length with tag name: [" + v + "]"));
} catch (Exception e) {
e.printStackTrace();
}
}
}
Prints
SUB NUM [1] Length with tag name: [1]
SUB NUM [2] Length with tag name: [1]
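For completeness, if you would rather keep the DOM approach from the question instead of switching to SAX, the same per-group count can be had by calling getElementsByTagName("NAME") on each SUB element rather than on the whole document. A minimal self-contained sketch using only the JDK's built-in parser (the class and method names here are my own):

```java
import java.io.StringReader;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class DomNameCounter {

    // Count the NAME elements inside each SUB, keyed by the SUB's NUM attribute.
    static Map<String, Integer> countNamesPerSub(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        Map<String, Integer> counts = new LinkedHashMap<>();
        NodeList subs = doc.getElementsByTagName("SUB");
        for (int i = 0; i < subs.getLength(); i++) {
            Element sub = (Element) subs.item(i);
            // Scoped to this SUB only, so each group is counted separately.
            counts.put(sub.getAttribute("NUM"),
                       sub.getElementsByTagName("NAME").getLength());
        }
        return counts;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<EXAMPLE DATE=\"20160830\">"
                + "<SUB NUM=\"1\"><NAME>Peter</NAME></SUB>"
                + "<SUB NUM=\"2\"><NAME>Mary</NAME></SUB>"
                + "</EXAMPLE>";
        countNamesPerSub(xml).forEach((num, n) -> System.out.println(
                "SUB NUM [" + num + "] Found: [" + n + "] Length with tag name: [NAME]"));
    }
}
```

The key point is that Element.getElementsByTagName searches only the subtree below that element, which gives exactly the per-group scoping the question asks for.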
#ifndef SPECTATORPANEL_H
#define SPECTATORPANEL_H

using namespace vgui;
#define SPECTATOR_PANEL_CMD_NONE 0
#define SPECTATOR_PANEL_CMD_OPTIONS 1
#define SPECTATOR_PANEL_CMD_PREVPLAYER 2
#define SPECTATOR_PANEL_CMD_NEXTPLAYER 3
#define SPECTATOR_PANEL_CMD_HIDEMENU 4
#define SPECTATOR_PANEL_CMD_TOGGLE_INSET 5
#define SPECTATOR_PANEL_CMD_CAMERA 6
#define SPECTATOR_PANEL_CMD_PLAYERS 7
// spectator panel sizes
#define PANEL_HEIGHT 64
#define BANNER_WIDTH 256
#define BANNER_HEIGHT 64
#define OPTIONS_BUTTON_X 96
#define CAMOPTIONS_BUTTON_X 200
#define SEPERATOR_WIDTH 15
#define SEPERATOR_HEIGHT 15
#define TEAM_NUMBER 2
class SpectatorPanel : public Panel //, public vgui::CDefaultInputSignal
{
public:
SpectatorPanel(int x,int y,int wide,int tall);
virtual ~SpectatorPanel();
void ActionSignal(int cmd);
// InputSignal overrides.
public:
void Initialize();
void Update();
public:
void EnableInsetView(bool isEnabled);
void ShowMenu(bool isVisible);
DropDownButton * m_OptionButton;
// CommandButton * m_HideButton;
//ColorButton * m_PrevPlayerButton;
//ColorButton * m_NextPlayerButton;
CImageButton * m_PrevPlayerButton;
CImageButton * m_NextPlayerButton;
DropDownButton * m_CamButton;
CTransparentPanel * m_TopBorder;
CTransparentPanel * m_BottomBorder;
ColorButton *m_InsetViewButton;
DropDownButton *m_BottomMainButton;
CImageLabel *m_TimerImage;
Label *m_BottomMainLabel;
Label *m_CurrentTime;
Label *m_ExtraInfo;
Panel *m_Separator;
Label *m_TeamScores[TEAM_NUMBER];
CImageLabel *m_TopBanner;
bool m_menuVisible;
bool m_insetVisible;
};
class CSpectatorHandler_Command : public ActionSignal
{
private:
SpectatorPanel * m_pFather;
int m_cmd;
public:
CSpectatorHandler_Command( SpectatorPanel * panel, int cmd )
{
m_pFather = panel;
m_cmd = cmd;
}
virtual void actionPerformed( Panel * panel )
{
m_pFather->ActionSignal(m_cmd);
}
};
#endif // !defined SPECTATORPANEL_H
Q: Delete row in other sheets based on 2 values on main file
Is it possible to delete rows in other sheets based on 2 values? Say I have 3 sheets. The main sheet (sheet 1) has 2 columns, Branch and Manager, the same as the remaining sheets.
SAMPLE SPREADSHEET HERE.
Example data:
SHEET 1: (main sheet)
--- BRANCH --- MANAGER ---
California Tom Chang
Brooklyn Jon Sieg
New York Raq Craig
SHEET 2:
--- BRANCH --- MANAGER ---
California Jane Cali
California Tom Chang
San Francisco James Chao
SHEET 3:
--- BRANCH --- MANAGER ---
California Jane Cali
California Tom Chang
New York Daniel Trevor
What should happen is that:
What should happen: Branch values should NOT be duplicated across the sheets. So we need to delete a row on Sheets 2 and 3 if its Branch matches the main sheet (Sheet 1) but its Manager is different. In the given data above, Branch California with Manager Tom Chang exists in all sheets, so it should not be touched. But the California branch is repeated in the other two sheets with a different Manager; therefore, the row California ---- Jane Cali should be deleted on Sheets 2 and 3.
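The rule described above boils down to a small per-row predicate. A minimal sketch in plain JavaScript (sheet data modeled as arrays; `keepRow` and the sample data are illustrative, no Apps Script services assumed):

```javascript
// Minimal model: each sheet is an array of [branch, manager] rows.
const main = [
  ["California", "Tom Chang"],
  ["Brooklyn", "Jon Sieg"],
  ["New York", "Raq Craig"],
];

// Keep a row unless its branch appears in the main sheet under a DIFFERENT manager.
function keepRow(row, mainRows) {
  return !mainRows.some(m => m[0] === row[0] && m[1] !== row[1]);
}

const sheet2 = [
  ["California", "Jane Cali"],
  ["California", "Tom Chang"],
  ["San Francisco", "James Chao"],
];

const filtered = sheet2.filter(row => keepRow(row, main));
// filtered is [["California", "Tom Chang"], ["San Francisco", "James Chao"]]
```

The same predicate can then drive either a filter-and-rewrite approach or row-by-row deletion in the actual spreadsheet.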
I came upon a script borrowed from this post but can't seem to make it work. Here:
function removeDupsInOtherSheets() {
var ss = SpreadsheetApp.getActiveSpreadsheet();
var mainsheet = ss.getSheetByName("Sheet3").getDataRange().getValues();
var sheet2 = ss.getSheetByName("Sheet2").getDataRange().getValues();
var sheet3 = ss.getSheetByName("Sheet3").getDataRange().getValues();
// iterate mainsheet and check in sheet2 & sheet3 if duplicate values exist
var nsheet2 = [];
var nsheet3 = [];
var mainsheetCol1 = [];// data in column1 of main sheet
var mainsheetCol2 = [];// data in column2 of main sheet
for(var n in mainsheet){
mainsheetCol1.push(mainsheet[n][0]); //column1
mainsheetCol2.push(mainsheet[n][3]); //column2
}
for(var n in sheet2){ // iterate sheet2 and test col 1 vs col 1 and co2 1 vs co2 1in sheet2
var noDup1 = checkForDup(sheet2[n],mainsheetCol1,mainsheetCol2)
if(noDup1){nsheet2.push(noDup1)};// if not present in sheet3 then keep
}
for(var n in sheet3){ // iterate sheet3 and test col 1 vs col 1 and co2 1 vs co2 in sheet3
var noDup2 = checkForDup(sheet3[n],mainsheetCol1,mainsheetCol2)
if(noDup2){nsheet3.push(noDup2)};// if not present in sheet3 then keep
}
// view result
Logger.log(nsheet2);
Logger.log(nsheet3);
// clear and update sheets
ss.getSheetByName("Sheet2").getDataRange().clear();
ss.getSheetByName("Sheet3").getDataRange().clear();
ss.getSheetByName("Sheet2").getRange(1,1,nsheet2.length,nsheet2[0].length).setValues(nsheet2);
ss.getSheetByName("Sheet3").getRange(1,1,nsheet3.length,nsheet3[0].length).setValues(nsheet3);
}
//Here can't seem to make it work to check if column 2 is not equal to the other sheets
//item is sheet2[n]
// s is mainsheetCol1
// s2 is mainsheetCol2
function checkForDup(item,s,s2){
Logger.log(s+' = '+item[0]+' ?')
Logger.log(s2+' = '+item[1]+' ?')
if((s.indexOf(item[0])>-1) && (s2.indexOf(item[1])>-1)){
return null;
}
return item;
}
Hoping someone could help/guide me. Thank you!
A: Try this:
function removeDuplicate(){
var ss = SpreadsheetApp.getActiveSpreadsheet();
var mainsheet = ss.getSheetByName("Sheet1");
var sheet2 = ss.getSheetByName("Sheet2");
var sheet3 = ss.getSheetByName("Sheet3");
var masterData = mainsheet.getDataRange().getValues();
var sheetsToCheck = [sheet2,sheet3];
for(var i in sheetsToCheck){
var valuesToCheck = sheetsToCheck[i].getDataRange().getValues();
for(var j=valuesToCheck.length-1;j>=0;j--){ // iterate bottom-up so deletions don't shift the rows still to be checked
for(var k in masterData){
if(masterData[k][0] == valuesToCheck[j][0] && masterData[k][1] != valuesToCheck[j][1]){
sheetsToCheck[i].deleteRow(j+1);
break; // this row is gone; stop comparing it against further master rows
}
}
}
}
}
A: The code given in the post is not working. I have modified it a little to make it work.
Here it is:
function removeDupsInOtherSheets() {
var ss = SpreadsheetApp.getActiveSpreadsheet();
var s1 = ss.getSheetByName("Sheet1").getDataRange().getValues();
var s2 = ss.getSheetByName("Sheet2").getDataRange().getValues();
var s3 = ss.getSheetByName("Sheet3").getDataRange().getValues();
// iterate s3 and check in s1 & s2 if duplicate values exist
var nS1 = [];
var nS2 = [];
var s3Col1 = [];// data in column1 of sheet3
for(var n=0; n<s3.length; ++n){
s3Col1.push(s3[n][0]);
}
for(var n=0; n<s1.length; ++n){ // iterate sheet1 and test col 1 vs col 1 in sheet3
var noDup1 = checkForDup(s1[n],s3Col1)
if(noDup1){nS1.push(noDup1)};// if not present in sheet3 then keep
}
for(var n=0; n<s2.length; ++n){ // iterate sheet2 and test col 1 vs col 1 in sheet3
var noDup2 = checkForDup(s2[n],s3Col1)
if(noDup2){nS2.push(noDup2)};// if not present in sheet3 then keep
}
Logger.log(nS1);// view result
Logger.log(nS2);
ss.getSheetByName("Sheet1").getDataRange().clear();// clear and update sheets
ss.getSheetByName("Sheet2").getDataRange().clear();
// This is the change needed in the code copied from the original post:
// guard against empty results and write the filtered rows back with a correctly sized range
if(nS1.length > 0){
ss.getSheetByName("Sheet1").getRange(1,1,nS1.length,nS1[0].length).setValues(nS1);
}
if(nS2.length > 0){
ss.getSheetByName("Sheet2").getRange(1,1,nS2.length,nS2[0].length).setValues(nS2);
}
}
function checkForDup(item,s){
Logger.log(s+' = '+item[0]+' ?')
if(s.indexOf(item[0])>-1){
return null;
}
return item;
}
# Hi...

#### Marcelo Elias del Valle

Hi all. I am Brazilian, so please forgive any grammar errors. I have just subscribed to this mailing list. At work I mostly use Siemens S7-200 and S7-300 CPUs, with supervisory control via Windows software. I am thinking of developing a commercial SCADA/MMI software for Linux (only), developing some drivers and, after some time, making the drivers GPL. I was thinking of making all the software GPL, but since SCADA control is something that only big enterprises would use, I think there is no sense in making the entire software GPL. However, I have seen that you place some very useful information about PLC programming on Linux on the web page. I would like to know what exactly you intend with LinuxPLC and whether I could help you in some way. In my job I develop supervisory applications and dedicated software together with people who develop PLC programs. I am trying to convince my boss to create a supervisory SCADA/MMI software for Linux using some GPL software libraries and developing part of the software under the GPL. This way I could collaborate with the Linux community and be paid to develop Linux software and applications with this software. I am not new to Linux, but I am new to PLC driver development, and to PLC driver development on Linux in particular. I would welcome any suggestions about what I am doing, and would like to know whether my interests could be shared with the goals of LinuxPLC. Thank you very much. Regards, Marcelo.

_______________________________________________
LinuxPLC mailing list
[email protected]
http://linuxplc.org/mailman/listinfo/linuxplc

#### Curt Wuollet

Hi Marcelo, welcome! What we are thinking of doing with the LPLC is to provide a free, publicly owned, GPL'd control system with PLC functionality and the auxiliary functions needed for complete automation systems, running on Linux and requiring no licensing.

The basic idea is to provide much lower cost and higher reliability than can be had with the expensive, closed, proprietary offerings that exist today. The software will be matched in importance by the proven development advantages of open systems: the tremendous power of sharing, the world-class environment of Linux, and the availability of source code for nearly any need, which prevents reinventing the wheel. Since the entire automation industry is in the business of reinventing the wheel in hundreds of functionally identical proprietary instances, standardizing on one code base and one system that can address those needs in a truly open and extensible fashion will offer tremendous advantages for users and solution providers alike. As enhancements and added features are fed back into the code base, the system can grow to meet more needs, and sharing these is a great increase in productivity compared to supporting all those hundreds of closed products. Education will be targeted as well, since there is no lower-cost way to teach automation, and with the source code a much greater understanding of what is going on can be had.

We also seek to change an industry where competition has failed to provide choice and lower prices; where simple connectors cost $80.00 and a serial card that should be $10.00 is outfitted with non-standard connectors and priced at more than $400.00; where the effort expended on locking in and exploiting customers far exceeds the effort spent advancing the state of the art. This hyper-proprietary and exploitative attitude has resulted in only lateral growth, that is, more solutions that do the same thing and very little effort at better solutions. And this is bad for the industry and the consumer. Our ideas are better ideas for moving forward and getting more people behind a single system.

You can help by writing code for the LPLC and donating code written for Linux. That's how I see it in a nutshell.

Regards,
cww

#### MDV

I think your wish is the same as putting the same engine in all cars, or making all cars the same. Personally I don't like the IEC... programming language; it takes too much time to do something.

Everything that is new does not have to be better...

For example, Step 5 is quick and small (1 MB), Step 7 is slow and big (200 MB).

Regards,

MDV
package org.jetbrains.plugins.scala.refactoring.introduceVariable
import com.intellij.openapi.editor.{Editor, SelectionModel}
import com.intellij.openapi.fileEditor.{FileEditorManager, OpenFileDescriptor}
import com.intellij.openapi.project.Project
import com.intellij.psi.util.PsiTreeUtil.{findCommonParent, getParentOfType}
import com.intellij.psi.{PsiElement, PsiFile}
import org.jetbrains.plugins.scala.extensions.PsiElementExt
import org.jetbrains.plugins.scala.lang.actions.ActionTestBase
import org.jetbrains.plugins.scala.lang.psi.api.ScalaFile
import org.jetbrains.plugins.scala.lang.psi.api.base.types.ScTypeElement
import org.jetbrains.plugins.scala.lang.psi.api.expr.{ScBlock, ScExpression}
import org.jetbrains.plugins.scala.lang.psi.api.toplevel.templates.ScTemplateBody
import org.jetbrains.plugins.scala.lang.refactoring.util.ScalaRefactoringUtil.{getExpression, getTypeElement}
import org.jetbrains.plugins.scala.lang.refactoring.util._
import org.jetbrains.plugins.scala.util.TestUtils._
abstract class AbstractIntroduceVariableValidatorTestBase(kind: String) extends ActionTestBase(
Option(System.getProperty("path")).getOrElse(s"""$getTestDataPath/introduceVariable/validator/$kind""")
) {
protected var myEditor: Editor = _
protected var fileEditorManager: FileEditorManager = _
protected var myFile: PsiFile = _
import AbstractIntroduceVariableValidatorTestBase._
override def transform(testName: String, data: Array[String]): String = {
setSettings()
val fileText = data(0)
val psiFile = createPseudoPhysicalScalaFile(getProject, fileText)
processFile(psiFile)
}
protected def removeAllMarker(text: String): String = {
val index = text.indexOf(ALL_MARKER)
myOffset = index - 1
text.substring(0, index) + text.substring(index + ALL_MARKER.length)
}
private def processFile(file: PsiFile): String = {
var replaceAllOccurrences = false
var fileText = file.getText
var startOffset = fileText.indexOf(BEGIN_MARKER)
if (startOffset < 0) {
startOffset = fileText.indexOf(ALL_MARKER)
replaceAllOccurrences = true
fileText = removeAllMarker(fileText)
}
else {
replaceAllOccurrences = false
fileText = removeBeginMarker(fileText)
}
val endOffset = fileText.indexOf(END_MARKER)
fileText = removeEndMarker(fileText)
myFile = createPseudoPhysicalScalaFile(getProject, fileText)
fileEditorManager = FileEditorManager.getInstance(getProject)
myEditor = fileEditorManager.openTextEditor(new OpenFileDescriptor(getProject, myFile.getVirtualFile, 0), false)
myEditor.getSelectionModel.setSelection(startOffset, endOffset)
try {
val maybeValidator = getValidator(myFile)(getProject, myEditor)
maybeValidator.toSeq
.flatMap(_.findConflicts(getName(fileText), replaceAllOccurrences))
.map(_._2)
.toSet[String]
.mkString("\n")
} finally {
fileEditorManager.closeFile(myFile.getVirtualFile)
myEditor = null
}
}
protected def getName(fileText: String): String
}
object AbstractIntroduceVariableValidatorTestBase {
private val ALL_MARKER = "<all>"
def getValidator(file: PsiFile)
(implicit project: Project, editor: Editor): Option[ScalaValidator] = {
implicit val selectionModel: SelectionModel = editor.getSelectionModel
getParentOfType(file.findElementAt(selectionModel.getSelectionStart), classOf[ScExpression], classOf[ScTypeElement]) match {
case _: ScExpression => getExpression(file).map(getVariableValidator(_, file))
case _: ScTypeElement => getTypeElement(file).map(getTypeValidator(_, file))
case _ => None
}
}
import ScalaRefactoringUtil._
private[this] def getContainerOne(file: PsiFile, length: Int)
(implicit selectionModel: SelectionModel): PsiElement = {
val origin = file.findElementAt(selectionModel.getSelectionStart)
val bound = file.findElementAt(selectionModel.getSelectionEnd - 1)
val commonParentOne = findCommonParent(origin, bound)
val classes = Seq(classOf[ScalaFile], classOf[ScBlock], classOf[ScTemplateBody])
(length match {
case 1 => commonParentOne.parentOfType(classes)
case _ => commonParentOne.nonStrictParentOfType(classes)
}).orNull
}
private[this] def getVariableValidator(expression: ScExpression, file: PsiFile)
(implicit selectionModel: SelectionModel): ScalaVariableValidator = {
val occurrences = getOccurrenceRanges(expression, fileEncloser(file).orNull)
val containerOne = getContainerOne(file, occurrences.length)
val parent = commonParent(file, occurrences)
new ScalaVariableValidator(expression, occurrences.isEmpty, enclosingContainer(parent), containerOne)
}
private[this] def getTypeValidator(typeElement: ScTypeElement, file: PsiFile)
(implicit selectionModel: SelectionModel): ScalaTypeValidator = {
val occurrences = getTypeElementOccurrences(typeElement, fileEncloser(file).orNull)
val containerOne = getContainerOne(file, occurrences.length)
val parent = findCommonParent(occurrences: _*)
new ScalaTypeValidator(typeElement, occurrences.isEmpty, enclosingContainer(parent), containerOne)
}
}
# How do I add spaces in Overleaf?

## How do I add spaces in Overleaf?

There are two commands that insert horizontal blank spaces in this example: `\hspace{1cm}` inserts a horizontal space whose length is 1 cm. Other LaTeX units can be used with this command.

## How do I add a space between a table and caption in LaTeX?

To create a bit more spacing between the caption and the tabular material, load the caption package and specify the desired value for the option `skip`; in the example below, I set `skip=0.5\baselineskip`. Don't use a `center` environment inside a `table`; instead, use the `\centering` macro.

## How do I change line spacing in LaTeX?

How can I change the spacing in my LaTeX document?

- Put `\usepackage{setspace}` after your `\documentclass` line.
- `\doublespacing` will make the text of the whole document double spaced.
- `\onehalfspacing` does the same with one-and-a-half spacing.
- In order to make part of the text of your document single spaced, you can put it inside `\begin{singlespace}` ... `\end{singlespace}`.
- `\setstretch{1.25}` sets a custom stretch factor.

## How do I change the space between words in LaTeX?

If the command produces text and you want a space to follow this text, you cannot just leave a space after the command; that space is treated as the end-of-command signal, and several spaces are equivalent to one in LaTeX. To generate a space after a text-producing command you can use `\space`.

## How do I make 1.5 spacing in LaTeX?

So to answer your question: if you want true 1.5 line spacing, go with `\onehalfspacing`.

## How do you start a new paragraph in Overleaf?

Even though the default formatting in LaTeX is fine, sometimes we need to change some elements. To start a new paragraph in LaTeX, as said before, you must leave a blank line in between. Paragraphs in LaTeX are fully justified, i.e. flush with both the left and right margins.

## How do I keep a vertical space in LaTeX?

The `\vspace` command adds vertical space. The length of the space can be expressed in any terms that LaTeX understands, i.e., points, inches, etc. You can add negative as well as positive space with a `\vspace` command. LaTeX removes vertical space that comes at the end of a page.

## How do I reduce the space between paragraphs in LaTeX?

The length parameter that characterises the paragraph spacing is `\parskip`; this determines the space between a paragraph and the preceding text. In the example, the command `\setlength{\parskip}{1em}` sets the paragraph separation to 1 em.

## How do you start a new line in a paragraph in LaTeX?

There are two forms in which the line breaks in a paragraph while typesetting:

- The `\\` and `\newline` commands break the line at the point of insertion but do not stretch it.
- The `\linebreak` command breaks the line at the point of insertion and stretches the line to make it of the normal width.

## How do you start a new paragraph?

Start a new paragraph for each new point or stage in your writing. When you begin a paragraph you should always be aware of the main idea being expressed in that paragraph. Be alert to digressions or details that belong either in a different paragraph or need a paragraph of their own.

## Can you start a new paragraph with "however"?

Yes, in fact you can start a paragraph with the word "however", because it is a transitional word. For example, it may be used when you are writing an essay contrasting things; when you start a new paragraph you say "however" (it's like saying "on the other hand").

## Do you need to start a new paragraph every time someone speaks?

It's considered normal to start a new paragraph when somebody new speaks; however, it's not essential. Switching to a new paragraph is a stylistic way of indicating that the speaker has changed. But just switching paragraphs may not be enough. For example, there could be more than two characters.

## When should you start a new paragraph?

You should start a new paragraph when you begin a new idea or point. New ideas should always start in new paragraphs. If you have an extended idea that spans multiple paragraphs, each new point within that idea should have its own paragraph.

## How do you separate paragraphs in an essay?

Not all paragraphs indent the first line. If you do not indent the first line, you must skip a line between paragraphs. This is the second way to separate paragraphs. Look at the next paragraph and you will see that there is a space (an empty line) between the two paragraphs.

## What is the symbol for a new paragraph?

| Symbol Name | Image | Meaning |
| --- | --- | --- |
| Pilcrow (Unicode U+00B6) | ¶ | Begin new paragraph |
| Pilcrow (Unicode U+00B6) | ¶ no | Remove paragraph break |
| Caret (Unicode U+2038, U+2041, U+2380) | ‸ ⁁ ⎀ | Insert |
| # | | Insert space |

## What does the T in PETER stand for?

Point, Evidence, Technique, Explain.

## What are PEE paragraphs?

PEE stands for: Point, Evidence, Explanation. Point is a specific argument that you want to make within a paragraph. Evidence is the information you provide that supports the argument, statement or claim that you have made. It could be a quote or a piece of technical data.

## What is the PEE strategy?

What PEE (point-evidence-explanation) and PEA (point-evidence-analysis) mean is that you as the student must make the point that you want to prove or develop and then support it with a specifically chosen piece of evidence (like a data point, statistic, or quotation) and then explain briefly how that particular piece ...

## How do you nail a PETER paragraph?

- You make a clear and suitable point. It refers to the question.
- You choose appropriate evidence. You embed your evidence.
- You use subject terminology.
- You explore at least one effect of the technique.
- You look at the quotation as a whole.
- You suggest how it affects the reading of the text around it.
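The spacing commands discussed above can be combined in one minimal document (the values are illustrative, not from the original answers):

```latex
\documentclass{article}
\usepackage{setspace}      % provides \doublespacing, \onehalfspacing, \setstretch
\onehalfspacing            % 1.5 line spacing for the whole document
\begin{document}
Some text,\hspace{1cm}then more text after a 1\,cm horizontal gap.

\vspace{0.5cm}             % extra vertical space before the next paragraph

\begin{singlespace}
This paragraph is single spaced inside an otherwise
one-and-a-half-spaced document.
\end{singlespace}
\end{document}
```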
#ifndef GENERALIZED_PENALIZED_NO1_H
#define GENERALIZED_PENALIZED_NO1_H
#include <memory>
#include <string>
#include <cmath>
#include "Problem.h"
#include "Configuration.h"
#include "PrototypeManager.h"
#include "Individual.h"
#include "util/math_tool.h"
namespace adef {
/**
@brief GeneralizedPenalizedNo1 function.
@par The configuration
GeneralizedPenalizedNo1 has no extra configurations.@n
It has fixed configurations:
- kind: min
- dimension_of_objective_space: 1
.
It has default configurations:
- dimension_of_decision_space: 30
- lower_bound_of_decision_space: -50.0
- upper_bound_of_decision_space: 50.0
- optimal_solution: 0.0
.
See setup() for the details.
*/
class GeneralizedPenalizedNo1 : public Problem
{
public:
/// @copydoc Problem::Object
using Object = typename Problem::Object;
GeneralizedPenalizedNo1() : Problem("Generalized Penalized No.1")
{
}
GeneralizedPenalizedNo1(const GeneralizedPenalizedNo1& rhs) = default;
/**
@brief Clone the current class.
@sa clone_impl()
*/
std::shared_ptr<GeneralizedPenalizedNo1> clone() const
{
return std::dynamic_pointer_cast<GeneralizedPenalizedNo1>(clone_impl());
}
/**
@brief Set up the internal states.
If GeneralizedPenalizedNo1 has the following configuration:
- dimension_of_decision_space: 30
- lower_bound_of_decision_space: -50.0
- upper_bound_of_decision_space: 50.0
- optimal_solution: 0.0
.
its configuration should be
- JSON configuration
@code
"Problem": {
"classname" : "GeneralizedPenalizedNo1",
"dimension_of_decision_space" : 30,
"lower_bound_of_decision_space" : -50.0,
"upper_bound_of_decision_space" : 50.0,
"optimal_solution" : 0.0
}
@endcode
.
*/
void setup(const Configuration& config, const PrototypeManager& pm) override
{
problem_kind_ = MIN;
auto dim_ds_config = config.get_config("dimension_of_decision_space");
dimension_of_decision_space_ = dim_ds_config.is_null() ?
30 : dim_ds_config.get_uint_value();
auto lb_ds_config = config.get_config("lower_bound_of_decision_space");
auto lb_ds = lb_ds_config.is_null() ?
-50.0 : lb_ds_config.get_value<Object>();
auto ub_ds_config = config.get_config("upper_bound_of_decision_space");
auto ub_ds = ub_ds_config.is_null() ?
50.0 : ub_ds_config.get_value<Object>();
boundaries_of_decision_space_.resize(dimension_of_decision_space_,
Boundary(lb_ds, ub_ds));
dimension_of_objective_space_ = 1;
auto optimal_config = config.get_config("optimal_solution");
optimal_solution_ = optimal_config.is_null() ?
0.0 : optimal_config.get_value<Object>();
}
void evaluation_function(std::shared_ptr<Individual> individual) const override
{
Object inner = 0.0, outer = 0.0;
Object temp_sin = std::sin(pi() * y(individual->variables(0)));
inner += 10.0 * temp_sin * temp_sin;
for (unsigned int idx = 0; idx < dimension_of_decision_space_ - 1; ++idx) {
Object temp = y(individual->variables(idx)) - 1;
Object temp_sin = std::sin(pi() * y(individual->variables(idx+1)));
inner += temp * temp * (1.0 + 10.0 * temp_sin * temp_sin);
}
Object temp = y(individual->variables(dimension_of_decision_space_ - 1)) - 1;
inner += temp * temp;
for (unsigned int idx = 0; idx < dimension_of_decision_space_; ++idx) {
outer += u(individual->variables(idx), 10, 100, 4);
}
Object sum = pi() * inner / dimension_of_decision_space_ + outer;
individual->objectives() = sum;
individual->set_fitness_value(sum);
}
private:
double y(double x) const
{
return 1.0 + (x + 1.0) / 4.0;
}
double u(double x, double a, double k, double m) const
{
if (x > a) {
return k * std::pow(x - a, m);
}
else if (x < 0.0 - a) {
return k * std::pow(0.0 - x - a, m);
}
else {
return 0.0;
}
}
private:
std::shared_ptr<Prototype> clone_impl() const override
{
return std::make_shared<GeneralizedPenalizedNo1>(*this);
}
};
}
#endif
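For reference, the objective implemented by `evaluation_function` above is the standard Generalized Penalized Function No. 1 from the global-optimization benchmark literature; the form below matches the code term by term (same `y` transform and penalty `u`):

```latex
f(\mathbf{x}) = \frac{\pi}{n}\left\{ 10\sin^2(\pi y_1)
  + \sum_{i=1}^{n-1} (y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]
  + (y_n-1)^2 \right\}
  + \sum_{i=1}^{n} u(x_i,10,100,4),
\qquad y_i = 1 + \frac{x_i+1}{4},
```

```latex
u(x,a,k,m)=\begin{cases}
  k(x-a)^m & x>a\\
  0 & -a\le x\le a\\
  k(-x-a)^m & x<-a
\end{cases}
```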
Thanks for posting this up. The gifs kind of slowed down the sequence of the MV, and the first one made me realize that there's a crying scene. COOL~ Our diva is really working it. You know, we should applaud the crew on/off set as well; they are the motors behind the wheels too. 🙂 *claps claps* And… yeah, I agree Hyolee should make a comeback in acting sometime after singing. She probably misses acting too. Of course there have been improvements, but there are still more improvements to be made. Good luck!!
# Steam condensation question

1. May 17, 2017

### pranj5

What I want to have clarified in this thread is the condensation of steam at constant pressure. Suppose an enclosed volume of 1 m³, filled with steam at 100 °C and 1 barA pressure. Now the container is connected to a chamber filled with dry air in such a way that the pressure inside the container cannot change. The temperature is then lowered to 70 °C. The question is: what percentage of the steam will have condensed?

At 100 °C the saturation pressure of steam is 101.32 kPa, while at 70 °C it is 31.15 kPa. Does that mean that {1 - (31.15/101.32)} of the steam, i.e. approximately 69.256 %, has been converted to water?

2. May 17, 2017

Yes.
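To make the arithmetic explicit (a sketch; the ideal-gas assumption and the kelvin temperatures are my additions, not from the thread):

```latex
% Fraction condensed if one simply scales by the saturation pressures
% (the approximation used in the thread):
f \approx 1 - \frac{p_{70}}{p_{100}}
  = 1 - \frac{31.15\ \mathrm{kPa}}{101.32\ \mathrm{kPa}} \approx 0.6926

% With the ideal-gas temperature correction (n \propto pV/RT at constant V),
% the condensed fraction comes out somewhat smaller:
f = 1 - \frac{p_{70}}{p_{100}} \cdot \frac{T_{100}}{T_{70}}
  = 1 - 0.3074 \cdot \frac{373.15\ \mathrm{K}}{343.15\ \mathrm{K}} \approx 0.666
```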
Ralf Schmid (born 1969) is a German pianist, composer, arranger and music producer.
Life
Ralf Schmid grew up in Konstanz and received classical piano lessons. During his school years he played jazz, rock and funk in bands. From 1990 he studied music education at the Hochschule für Musik und Darstellende Kunst Stuttgart, and from 1996 film music at the Filmakademie Ludwigsburg. In 1998 he moved with his family to New York to study jazz piano and composition on a DAAD scholarship. In 1999 he was admitted as a composition student to the Henry Mancini Institute in Los Angeles. Important teachers and mentors during his studies were Bernd Rabe, Horst Jankowski, Richie Beirach and Robert Sadin.
In 1995 Ralf Schmid founded the Tales in Tones Trio (formerly schmid/hübner/krill, with Veit Hübner, bass, and Torsten Krill, drums). The trio won the Hennessy Jazz Search at the Kölner Philharmonie in 1998 and released four CDs, two of them with the trumpeter Joo Kraus.
Together with the music producer and guitarist Michele Locatelli, Ralf Schmid founded the label ObliqSound in New York, for which he produced numerous recordings. After returning from the USA, he established a second office for the label in Munich.
Ralf Schmid has also worked as an arranger and conductor with professional big bands; in concerts, studio recordings, radio and TV broadcasts he has conducted the NDR Bigband, the SWR Big Band, the hr-Bigband, the RIAS Big Band Berlin and the Henry Mancini Institute Orchestra Los Angeles.
From 2002 he produced several albums with Joo Kraus: Public Jazz Lounge (2002), The Ride (2006), Sueño (2008), recorded at the Egrem studio in Havana (Cuba), Songs from Neverland (2010), Painting Pop (2011), Public Jazz Society (2016) and Joo Jazz (2016). The album Public Jazz Lounge received a German Jazz Award Gold in 2015 for 10,000 copies sold in Germany; with Painting Pop, Joo Kraus won an ECHO Jazz.
In 2009, for the 50th anniversary of bossa nova, Schmid produced and arranged the album bossarenova with Paula Morelenbaum and the SWR Big Band, which was nominated for the Brazilian Music Award. Since 2011 he has presented the project in concert with Paula Morelenbaum and Joo Kraus under the name 'bossarenova trio', including in New York, San Francisco, Singapore and numerous European cities.
In March 2013 the album cornucopía with Ivan Lins and the SWR Big Band was released, likewise produced and arranged by Ralf Schmid and recorded in Stuttgart, New York, Rio de Janeiro and Johannesburg. In 2017 Schmid led another project with Ivan Lins in Copenhagen, featuring among others the Danish Radio Big Band and the New York Voices.
In October 2014 Ralf Schmid's music theatre piece "A Distant Drum" was premiered at Carnegie Hall in New York, with Daniel Hope and Jason Marsalis among the performers.
In 2015 Schmid began work on his futuristic piano-electro project PYANOOK, which he recorded in 2016 as an audiovisual production in the KUBUS studio of the ZKM Karlsruhe, using data gloves, among other things, to control the piano sounds in real time. PYANOOK has been performed live since September 2017, including at festivals in Freiburg, Rio de Janeiro and Berlin.
Ralf Schmid has been a professor at the Hochschule für Musik Freiburg since 2002.
Selected discography
As producer/arranger
2002: The Triumph of Time (Tama Waipara, ObliqSound)
2003: hr-Bigband, Ralf Schmid, Martin Fondse: Two Suites: Tribal Dances / Cottacatya! (HR-Music)
2003: Public Jazz Lounge (Joo Kraus & SWR Big Band, skip)
2003: Nana Swings (Nana Mouskouri & Berlin Radio Big Band, Universal)
2004: Different Rooms (Pee Wee Ellis, skip)
2006: The Ride (Joo Kraus, edel)
2008: Sueño (Joo Kraus, edel)
2009: bossarenova (with Paula Morelenbaum & SWR Big Band, skip)
2013: cornucopía (with Ivan Lins & SWR Big Band, moosicus records)
2013: Samba Preludío (with bossarenova trio)
2016: Public Jazz Society (Joo Kraus & SWR Big Band, skip)
As pianist
1998: time makes the tune (mons music)
2002: flügelschlag! (ObliqSound, with Gert Wilden junior and Andy Lutter)
2002: Nachtfarben (Acoustic Music)
2004: Sub Surface (ObliqSound)
2008: MusikFreiZeit (frimfram)
2010: Songs from Neverland (edel)
2011: Painting Pop (edel)
2012: captured for good (edel)
Awards and scholarships
1996: 1st prize, composition competition of Hessischer Rundfunk
1998: DAAD scholarship, New York
1998: 1st prize, Hennessy Jazz Search (with Veit Hübner and Torsten Krill)
1999: Scholarship, Henry Mancini Institute, Los Angeles
2000: Scholarship, Kunststiftung Baden-Württemberg
2001: Landesjazzpreis Baden-Württemberg
2015: German Jazz Award Gold for the production of the album "Public Jazz Lounge" with Joo Kraus
2018: Scholarship of the Reinhold-Schneider-Preis of the City of Freiburg
External links
Website of Ralf Schmid
Website of Pyanook
Interview in O Globo
References
Jazz pianist
Music producer
Arranger
Composer (jazz)
Composer (Germany)
German
Born 1969
Man
Q: Adding Search to Ruby on Rails - Easy Question I am trying to figure out how to add search to my rails application. I am brand new so go slow. I have created a blog and done quite a bit of customizing including adding some AJAX, pretty proud of myself so far. I am having trouble finding any good tutorials about how to add this functionality. Basically I just want to enable a full search to search my posts table. What is the easiest way to do this?
A: There's a great Railscast on Thinking Sphinx, which is my favorite above the other mentioned options. It's fast, simple, and is continually being developed.
There's also SearchLogic which is great if you don't really need full text indexing (you probably don't for a blog). And a Railscast to go along with that as well.
A: Depending on the RDBMS you are using there might be a built-in solution for fulltext search.
Otherwise you might check out Sunspot (and the Rails plugin) which uses Apache Solr for fulltext search and is easy to use. Especially writing queries/searches is much more fun than with the standard acts_as_solr plugin.
Edit Oh, and here is a screencast on Sunspot for the visual people.
A: I'd suggest using the acts_as_solr plugin. I'm just starting with Rails too and that's the search indexing plugin recommended by my professor. It includes the SOLR search engine in the plugin. The site includes installation and usage instructions.
Basically, as the usage shows, you would just include the acts_as_solr tag in whichever models you want to be searchable, and then specify which attributes in your model you want to be indexed for searching on... so you'd do something like:
class Post < ActiveRecord::Base
  acts_as_solr :fields => [:post, :comments, :whatever]
end
And for searching you'd do something like...
Post.find_by_solr(query_string)
A: The easiest way to do this would be to do something like this:
results = Post.find(:all, :conditions => ["post_body LIKE ?", "%#{search_string}%"])
(Use the array form with a bind parameter as above — interpolating the search string directly into the conditions string opens you up to SQL injection.) However, this is pretty limited in that it will only match the exact word or phrase you are looking for. As I mentioned, this is the "easiest" way to do a search, but definitely not the best. I would look into using acts_as_solr if you want to do this seriously.
Rdutów – a former gromada, the smallest unit of territorial division of the Polish People's Republic in the years 1954–1972.
Gromadas, with gromada national councils (GRN) as the lowest-level organs of state authority in the countryside, functioned from the reform reorganizing rural administration carried out in the autumn of 1954 until their abolition on 1 January 1973, thereby supplanting the gmina organization during the years 1954–1972.
History
The gromada of Rdutów, with the seat of its GRN in Rdutów, was created – as one of 8,759 gromadas in Poland – in Kutno County in Łódź Voivodeship, under Resolution No. 30/54 of the Voivodeship National Council (WRN) in Łódź of 4 October 1954. The unit comprised the areas of the former gromadas of Piotrówek, Podgajew, Radzyń, Rdutów, Rdutów Nowy, and Aleksandrów (excluding the P.G.R. Chodów state farm) from the abolished gmina of Czerwonka in the same county. The gromada national council was set at 15 members.
The gromada was abolished on 1 July 1968, and its area was incorporated into the likewise-abolished gromada of Czerwonka in the same county.
See also
gmina Rdutów
References
Rdutów
\section{Introduction}
\IEEEPARstart{T}{his} work addresses the problem of symbol and frame synchronization for signal modulation schemes
that encode information in the position of each pulse in the time domain, under the assumption that the channel is noisy.
The approach taken here is to rethink the problem in a coding theoretic framework
and give a theoretical and general foundation.
Among various forms of information, perhaps the simplest of them is binary information.
In many communications scenarios, however, it is beneficial to transmit data in $M$-ary format with $M > 2$.
For this reason, various modulation techniques have been proposed that support not only binary format but also $M$-ary format with large $M$.
\textit{Pulse position modulation}, or \textit{PPM} for short, is one of the more popular modulation techniques \cite{P}.
In this modulation technique, each symbol occupies a time interval of equal length.
A symbol interval is divided into $Q$ time slots of equal length, where exactly one pulse is transmitted at one of the $Q$ time slots.
Which symbol each interval represents is determined by at which time slot the unique pulse is transmitted.
Because there are $Q$ choices of pulse positions for each symbol interval, PPM offers $M = Q$ distinct symbols.
While PPM is one of the most fundamental forms of signal modulation in use today, there are some inherent drawbacks.
For instance, in a communications system with a severe peak power constraint,
PPM becomes inefficient as the number of symbols increases
because the energy per symbol drops accordingly.
\textit{Multipulse} PPM is a generalization of PPM to mitigate this problem,
where $K$ pulses are sent during each symbol interval by using $K$ out of $Q$ time slots so that $M= {{Q}\choose{K}}$ symbols can be represented \cite{SN}.
Let $\mathcal{P}_K = \left\{\boldsymbol{v}_i \in \mathbb{F}_2^Q \ \middle\vert\ \operatorname{wt}(\boldsymbol{v}_i) = K\right\}$
be the set of all $Q$-dimensional binary vectors $\boldsymbol{v}_i$ of weight $K$.
We let $1$s represent time slots at which single pulses are transmitted and $0$s those at which no pulse is sent.
From the viewpoint of coding theory,
the symbols of PPM can be seen as the binary constant-weight code $\mathcal{P}_1$ of length $Q$, weight one,
and minimum distance two with $Q$ codewords,
whereas the symbols of multipulse PPM can be regarded as the binary constant-weight code $\mathcal{P}_K$
of length $Q$, weight $K$, and minimum distance two with $ {{Q}\choose{K}}$ codewords.
The noncoherent nature of PPM and multipulse PPM makes them attractive to communications systems
in which coherent detection is expensive or impossible, such as optical communications systems \cite{opticalcommunications}.
However, as is evident from the fact that the codes $\mathcal{P}_1$ and $\mathcal{P}_K$ are both of minimum distance two,
one of the major disadvantages of these modulation techniques is that
they are inherently vulnerable to intersymbol interference and natural environmental noise.
Hence, PPM and multipulse PPM require strong error correcting schemes at a higher level when errors are of concern.
\textit{Expurgated} PPM is a recently proposed modulation technique that generalizes PPM
in a way error correction can be provided at the modulation stage while offering the same number $M = Q$ of symbols \cite{NB}.
If we see it from the coding theoretic point of view,
the key idea of expurgated PPM can be understood as using special combinatorial designs called
\textit{symmetric} \textit{designs} to define constant-weight codes with large minimum distances
which allow for simple implementation.
The above three modulation techniques still share the other major disadvantage, namely the susceptibility to loss of synchronization.
As in earlier research on synchronization for this type of modulation \cite{G,G2,PG,CG,VG2,VG},
we assume that slot synchronization is always provided so that the magnitude of misalignment can be expressed as a multiple of the length of a time slot.
Under this assumption, erroneous symbol synchronization means that
the window of the receiver is aligned to the consecutive $Q$ time slots that
consist of the last $i$ time slots of one symbol interval and the first $Q-i$ time slots of the following one for some positive integer $i \leq Q-1$.
Erroneous frame synchronization is understood the same way by regarding a set of consecutive $f$ symbol intervals as a frame of length $f$ with $fQ$ time slots.
In PPM, even if the channel is completely noiseless, erroneous symbol synchronization can not be detected
if there happens to be exactly one pulse within the misaligned window.
Similarly, the receiver can not detect erroneous symbol synchronization under multipulse PPM
if there are exactly $K$ pulses within the misaligned window.
In the case of expurgated PPM, erroneous symbol synchronization will go unnoticed if the $Q$ slots in the misaligned window
form a valid codeword or an invalid one within the decodable range.
Hence, without some sort of synchronization mechanism, these modulation techniques may exhibit a severe error floor.
The known method for alleviating this synchronization problem in the literature is
to periodically insert a synchronization marker that consists of $sQ$ time slots $T_i$, $0 \leq i \leq sQ-1$, for some positive integer $s$
in which for any nonnegative integer $j \leq s-1$ the $Q$ consecutive time slots
$T_{i+jQ}$, $0 \leq i \leq Q-1$, form a valid codeword (see, for example, \cite{G,G2,VG}).
In other words, a certain pattern of consecutive $s$ symbols is periodically inserted to signal the boundaries.
Ideally, the off-peak autocorrelations of the synchronization marker should be as small as possible to suppress
the probability that the receiver misses the marker or is deceived by a false one at an unintended position.
For PPM and multipulse PPM, it is possible to find a synchronization marker that causes no ambiguity as long as the channel is noiseless.
However, such an unambiguous synchronization marker can not be short.
If the channel is noisy, synchronization with this approach becomes increasingly difficult and complicated as well.
The primary purpose of this paper is to provide a unified solution to both error tolerance and synchronization by using coding theory.
We propose a modulation scheme in which reliable symbol and frame synchronization can be achieved under the presence of noise
through the same type of correlation receiver used for expurgated PPM.
The required overhead in terms of the size of a synchronization marker is significantly smaller than the known method.
The signal modulation can be performed separately, so that
the known modulation techniques of PPM kind can be exploited straightforwardly.
We also develop a generalized format of expurgated PPM which has a larger number of symbols and increased minimum distance than
standard PPM and expurgated PPM.
This generalized PPM can also be used for the proposed self-synchronizing modulation scheme
in place of other modulation techniques of PPM kind to offer higher error tolerance and/or better throughput.
\section{The scheme}
We describe our scheme as a special class of constant-weight codes by exploiting the coding theoretic view introduced in the previous section.
For the sake of generality, a simple setting is assumed
where the probability that the receiver fails to correctly decode symbols decreases monotonically as the minimum distance increases.
Thus, for the most part, our framework will focus on the minimum distance
and generally aim for the largest possible codewords for given parameters.
A more detailed analysis of the performance of our proposed modulation technique is briefly discussed at the end of this paper.
This section is divided into two subsections. Subsection \ref{sync} is devoted to our self-synchronization mechanism
that allows for separate implementation of a modulation layer based on pulse positions in the time domain.
Subsection \ref{mod} gives a generalized version of expurgated PPM that can be used both as a stand-alone modulation scheme
with error correction and as the modulation part of our scheme.
\subsection{Synchronization layer}\label{sync}
As is pointed out in the previous section,
PPM and its variations can be interpreted in terms of constant-weight codes.
In this coding theoretic framework,
the number of codewords corresponds to the number of available symbols while
the minimum distance corresponds to the error tolerance capability.
The constraint that the weight of a codeword is constant ensures equal energy across symbols.
Hence, a good signal modulation technique based on the pulse positions within symbol intervals of fixed length
corresponds to a constant-weight code of large minimum distance with many codewords that can be decoded in a certain simple manner.
Since this coding theoretic view offers a clear picture of how the sensitivity to noise may be alleviated,
it would be natural to ask if it is also possible to exploit the framework to overcome the other major weakness, namely the vulnerability to loss of synchronization,
while preserving other major features of existing standard techniques.
This subsection answers this question in the affirmative
by developing a theory of symbol and frame synchronization for signal modulation of PPM type.
In what follows, we use binary codes to represent symbols expressed by pulse positions,
where $0$s in a codeword correspond to time slots with no pulse
while $1$s represent those at which single pulses are sent.
A \textit{self-synchronizing code} $\mathcal{C} \subset \mathbb{F}_2^n$ is a binary block code of length $n$ where the
symbol string formed by an overlapped portion of any two concatenated codewords is not a valid codeword.
In the coding theory literature, self-synchronizing codes are also called \textit{comma-free codes}.
The property that no codeword appears as a substring of two adjacent codewords allows for block synchronization
without any external help as long as synchronization is provided at the bit level.
The key idea of our approach is to use a special self-synchronizing code
that allows for synchronization by observing only part of the window and frees up the rest for modulation.
To obtain constant-weight self-synchronizing codes with desirable properties for our purpose,
we employ combinatorial design theory.
Take a sequence of codewords of a binary block code of length $n$.
A {\it splice} of length $n$ between codeword $\boldsymbol{x} = (x_0, x_1, \dots, x_{n-1})$
and the following codeword $\boldsymbol{y} = (y_0, y_1, \dots, y_{n-1})$ in the codeword sequence is
a concatenated binary sequence $(x_{n-i}, \dots, x_{n-1}, y_0, \dots y_{n-i-1})$ composed of the last $i$ bits of $\boldsymbol{x}$
and the first $n-i$ bits of $\boldsymbol{y}$ for some positive integer $i \leq n-1$.
A binary block code of length $n$ is said to be of \textit{comma-free index} $\rho$ if the Hamming distance between any codeword $\boldsymbol{z}$
and any splice of length $n$ between any two codewords $\boldsymbol{x}, \boldsymbol{y}$ is at least $\rho$.
By definition, a self-synchronizing code is a binary block code of comma-free index at least $1$.
It is straightforward to see that with a hard-decision algorithm a self-synchronizing code of comma-free index $\rho$
assures block synchronization under the presence of up to $\lfloor\frac{\rho-1}{2}\rfloor$ bit flips (or errors) in the received message of length $n$.
In an additive white Gaussian noise (AWGN) channel, for example, one may use a correlation receiver for soft-decision synchronization
to take advantage of the Hamming distance between a valid codeword and a splice.
A \textit{difference system of sets} (DSS) of \textit{index} $\rho$ over ${\textit{\textbf{Z}}}_n$
is a family of disjoint subsets $D_i$ of ${\textit{\textbf{Z}}}_n$ such that
the multi-set
\begin{equation}
\label{equ}
\{ a-b \pmod{n} \ \vert \ a \in D_i, b \in D_j, i \not=j \}
\end{equation}
contains
every $d \in {\textit{\textbf{Z}}}_n\setminus \{0\}$ at least $\rho$ times.
The difference between two elements from different subsets of ${\textit{\textbf{Z}}}_n$ is called an \textit{outer difference}.
A DSS is {\it perfect} if the multi-set defined in (\ref{equ}) contains
every $d \in {\textit{\textbf{Z}}}_n\setminus \{0\}$ exactly $\rho$ times.
A DSS is \textit{regular} if all subsets $D_i$ are of the same size.
For instance, the set $\{\{1,2\}, \{3,5\}\}$ over ${\textit{\textbf{Z}}}_8$ forms a regular DSS of index one
because every nonzero outer difference appears at least once as follows:
\begin{align*}
1 - 3 &\equiv 6 \pmod{8}, \ 3 - 1 \equiv 2 \pmod{8},\\
1 - 5 &\equiv 4 \pmod{8}, \ 5 - 1 \equiv 4 \pmod{8},\\
2 - 3 &\equiv 7 \pmod{8}, \ 3 - 2 \equiv 1 \pmod{8},\\
2 - 5 &\equiv 5 \pmod{8}, \ 5 - 2 \equiv 3 \pmod{8}.
\end{align*}
This DSS is not perfect because $4$ appears twice as an outer difference
while each of the other elements of ${\textit{\textbf{Z}}}_8\setminus \{0\}$ occurs exactly once.
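The defining condition of a DSS can be checked mechanically by counting outer differences. The following Python sketch (illustrative only; the helper names are ours, not the paper's) verifies the example above:

```python
from collections import Counter

def outer_differences(sets, n):
    """Multiset of outer differences a - b (mod n) between distinct sets."""
    return Counter((a - b) % n
                   for i, X in enumerate(sets) for j, Y in enumerate(sets)
                   if i != j for a in X for b in Y)

def dss_index(sets, n):
    """The index rho: minimum multiplicity among all nonzero residues."""
    diffs = outer_differences(sets, n)
    return min(diffs.get(d, 0) for d in range(1, n))

# The example from the text: {1,2} and {3,5} over Z_8.
diffs = outer_differences([{1, 2}, {3, 5}], 8)
print(dss_index([{1, 2}, {3, 5}], 8))  # 1, so the DSS has index one
print(diffs[4])                        # 2: only the difference 4 appears twice
```

Since the difference $4$ occurs twice while every other nonzero residue occurs exactly once, the sketch confirms that this DSS has index one but is not perfect.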
The original motivation of the study of DSSs was to realize self-synchronizing codes as cosets of linear codes
in order to achieve low encoding and decoding complexity \cite{L,L2}.
However, DSSs appear to have far greater potential and can be exploited to provide self-synchronizing codes with various desired properties.
For our purpose, we would like binary constant-weight self-synchronizing codes of sufficiently large comma-free index with a cartain additional property.
We use DSSs with exactly two sets to obtain desirable codes.
\begin{theorem}\label{DSS}
If there exist a DSS $\{D_0, D_1\}$ of index $\rho$ over ${\textit{\textbf{Z}}}_n$ and
a binary constant-weight code $\mathcal{C}$ of length $n - \vert D_0 \vert - \vert D_1 \vert$ and weight $K$,
then there exists a binary constant-weight self-synchronizing code of length $n$, comma-free index $\rho$, and weight $K+\vert D_1\vert$
with $\vert \mathcal{C} \vert$ codewords.
\end{theorem}
\begin{IEEEproof}
Let $\{D_0, D_1\}$ be a DSS of index $\rho$ over ${\textit{\textbf{Z}}}_n$
and $\mathcal{C}$ a binary constant-weight code of length $n - \vert D_0 \vert - \vert D_1 \vert$ and weight $K$.
For every codeword $\boldsymbol{c} \in \mathcal{C}$,
construct the $n$-dimensional vector $\boldsymbol{d}_{\boldsymbol{c}} = (d_0,\dots,d_{n-1})$,
where $d_i = 0$ for all $i \in D_0$, $d_i = 1$ for all $i \in D_1$, and
the $(n - \vert D_0 \vert - \vert D_1 \vert)$-dimensional vector $(d_i)$, $i \not\in D_0 \cup D_1$, forms $\boldsymbol{c}$.
Each of the resulting $\vert\mathcal{C}\vert$ elements of $\mathbb{F}_2^n$ is of weight $K + \vert D_1 \vert$.
It suffices to show that the set $\mathcal{D} = \{\boldsymbol{d}_{\boldsymbol{c}} \ \vert \ \boldsymbol{c} \in \mathcal{C}\}$
of these vectors forms a code of comma-free index $\rho$.
Take a pair of not necessarily distinct codewords from $\mathcal{D}$ and form a splice $\boldsymbol{s}$ of length $n$
by concatenating the last $s$ bits of one codeword and the first $n-s$ bits of the other for some positive integer $s \leq n-1$.
Take another not necessarily distinct codeword $\boldsymbol{d}$ from $\mathcal{D}$.
Because $\{D_0, D_1\}$ forms a DSS of index $\rho$,
there are at least $\rho$ ordered pairs $(a, b)$ such that $a - b \equiv s \pmod{n}$, where $a$ and $b$ belong to different sets of the DSS.
Thus, there are at least $\rho$ discrepancies between $\boldsymbol{s}$ and $\boldsymbol{d}$ within the coordinates $i \in D_0 \cup D_1$.
The proof is complete.
\end{IEEEproof}
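The construction in the proof above is directly executable. The sketch below (an illustration under our own naming, using $4$-ary PPM as the inner constant-weight code) embeds each inner codeword into the free positions left by the DSS $\{\{1,2\},\{3,5\}\}$ over ${\textit{\textbf{Z}}}_8$ and brute-forces the comma-free index:

```python
def build_code(D0, D1, n, inner):
    """Embed inner codewords into the positions not fixed by the DSS."""
    free = [i for i in range(n) if i not in D0 and i not in D1]
    words = []
    for c in inner:
        d = [0] * n
        for i in D1:                  # slots forced to carry a pulse
            d[i] = 1
        for i, bit in zip(free, c):   # payload goes into the free slots
            d[i] = bit
        words.append(tuple(d))
    return words

def comma_free_index(words, n):
    """Minimum Hamming distance between any codeword and any splice."""
    best = n
    for x in words:
        for y in words:
            for s in range(1, n):
                splice = x[n - s:] + y[:n - s]  # last s bits of x, first n-s of y
                for z in words:
                    best = min(best, sum(a != b for a, b in zip(splice, z)))
    return best

# DSS {1,2},{3,5} over Z_8 combined with PPM for Q = 4 (weight-one words).
ppm = [tuple(1 if i == j else 0 for i in range(4)) for j in range(4)]
code = build_code({1, 2}, {3, 5}, 8, ppm)
print(comma_free_index(code, 8))  # at least 1, as the theorem guarantees
```

All four codewords have weight $K + \vert D_1 \vert = 1 + 2 = 3$, so the equal-energy property of the modulation is preserved.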
To make the virtue of the above construction clearer,
take the set $\{\{1,2,3,4,5\}, \{0,6,11,16,21\}\}$, which forms a perfect regular DSS of index two over $\textit{\textbf{Z}}_{26}$.
The two sets of cardinality five specify the positions of $0$s and $1$s respectively as synchronization markers
while the remaining $16$ positions are freely available for signal modulation
by a binary constant-weight code of length $16$ such as multipulse PPM with $Q = 16$.
If we use $\{1,2,3,4,5\}$ for $0$s and $\{0,6,11,16,21\}$ for $1$s,
and write each bit left free for the constant-weight code of length $16$ as $*$,
we obtain the $26$-bit sequence
\[1000001{*}{*}{*}{*}1{*}{*}{*}{*}1{*}{*}{*}{*}1{*}{*}{*}{*}.\]
Because each nonzero outer difference appears twice in the DSS,
regardless of the content of each $*$,
there are at least two discrepancies among the positions $\{1,2,3,4,5\}\cup\{0,6,11,16,21\}$
between any pair of a valid codeword of the resulting self-synchronizing code and a splice.
To give a smaller example,
we can combine the DSS $\{\{1,2\},\{3,5\}\}$ over ${\textit{\textbf{Z}}}_8$ and a binary constant-weight code of length four such as PPM with $Q = 4$ in the same way.
In this case, the coordinates $0$, $4$, $6$, and $7$ correspond to free bits.
Note that in each of these examples the cardinalities $\vert D_0 \vert$ and $\vert D_1\vert$ are the same.
Hence, we may swap the roles of the sets to obtain a binary constant-weight self-synchronizing code of the same length
and comma-free index guaranteed by Theorem \ref{DSS} which has the same number of codewords.
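To see how little of the window the receiver must inspect, the following sketch (ours, for illustration) rebuilds the $26$-slot template and confirms that the marker positions form a perfect DSS of index two:

```python
from collections import Counter

n = 26
D0 = {1, 2, 3, 4, 5}         # slots forced to 0
D1 = {0, 6, 11, 16, 21}      # slots forced to 1 (single pulses)

# '*' marks the 16 slots left free for the modulation layer.
template = ''.join('0' if i in D0 else '1' if i in D1 else '*'
                   for i in range(n))
print(template)  # 1000001****1****1****1****

# A perfect DSS of index 2 hits every nonzero residue exactly twice.
diffs = Counter((a - b) % n
                for X, Y in ((D0, D1), (D1, D0)) for a in X for b in Y)
print(all(diffs[d] == 2 for d in range(1, n)))  # True
```

Whatever the modulation layer writes into the $16$ free slots, any splice therefore disagrees with every codeword in at least two of the ten marker positions.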
A self-synchronizing code constructed by the method given in Theorem \ref{DSS} can be synchronized
by only looking at periodic autocorrelations of the partial window specified by the corresponding DSS.
Hence, the synchronization device on the receiver side only needs the corresponding signals as inputs for synchronization.
If we use symbol intervals of PPM, multipulse PPM or expurgated PPM as the binary constant-weight code $\mathcal{C}$
of length $Q = n - \vert D_0 \vert - \vert D_1 \vert$ in Theorem \ref{DSS},
we achieve symbol synchronization that is securely checked for each symbol interval
while allowing for modulation by pulse positions in the time domain through freely available bits.
Packing $f$ symbol intervals into the freely available part provides frame synchronization that is constantly checked for each frame consisting of $f$ symbol intervals.
If we use a DSS over ${\textit{\textbf{Z}}}_n$ for symbol synchronization and then employ another DSS $\{D_0', D_1'\}$ over ${\textit{\textbf{Z}}}_{n'}$
such that $n' - \vert D_0' \vert - \vert D_1' \vert$ is a multiple $fn$ of $n$, so that each frame accommodates $f$ inner codewords of length $n$,
then the scheme provides both symbol and frame synchronization.
In the remainder of this subsection, we explore properties of DSSs
and demonstrate that our synchronization method is significantly more efficient than the known technique
that uses a sequence of symbols as a synchronization marker.
Because a DSS $\{D_0, D_1\}$ over ${\textit{\textbf{Z}}}_n$ leaves $n - \vert D_0 \vert - \vert D_1 \vert$ bits for the modulation layer,
ceteris paribus, it is desirable for $\vert D_0 \vert + \vert D_1 \vert$ to be small.
This parameter is called the \textit{redundancy} of a DSS.
In our context, redundancy is the parameter that denotes the number of time slots we sacrifice
for synchronization per symbol interval or per frame interval.
We use $r(n,\rho)$ to denote the smallest achievable redundancy for given order $n$ and index $\rho$.
A DSS is \textit{optimal} if its redundancy is $r(n,\rho)$.
The following is a special case of the well-known Levenshtein bound:
\begin{theorem}[\cite{L}]\label{lbound}
For any DSS of index $\rho$ over ${\textit{\textbf{Z}}}_n$ with exactly two sets, it holds that
\[r(n,\rho) \geq \sqrt{2\rho(n-1)}\]
with equality if and only if the DSS is perfect and regular.
\end{theorem}
If we are allowed to have a sufficiently strong signal for each pulse compared to noise,
it may be enough to employ a DSS of index one or two.
In this case, the following classical results give optimal DSSs:
\begin{theorem}[\cite{L,C}]\label{rho1}
For any integer $n \geq 2$,
the pair of sets
\begin{align*}D_0 &= \{i\tau_0+1 \ \vert \ 1 \leq i \leq \tau_1\}\ \text{and}\\
D_1 &= \{i \ \vert \ 1 \leq i \leq \tau_0\}
\end{align*}
form an optimal DSS of index one over ${\textit{\textbf{Z}}}_n$, where
\[\tau_0 = \left\lceil\frac{n-1}{2\tau_1}\right\rceil\ \text{and}\ \tau_1 = \left\lceil\sqrt{\frac{n-1}{2}}\right\rceil.\]
\end{theorem}
Note that when the order of the ring ${\textit{\textbf{Z}}}_n$ is of the form $n = 2m^2+1$ for some positive integer $m$,
the redundancy of the optimal DSS given above achieves the Levenshtein bound with equality.
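The index-one construction above is explicit and easy to instantiate. The sketch below (helper names ours) builds it for several orders and checks the index by exhaustive counting:

```python
import math
from collections import Counter

def dss_index(sets, n):
    """Index of a DSS by exhaustive outer-difference counting."""
    diffs = Counter((a - b) % n
                    for i, X in enumerate(sets) for j, Y in enumerate(sets)
                    if i != j for a in X for b in Y)
    return min(diffs.get(d, 0) for d in range(1, n))

def index_one_dss(n):
    """Optimal index-one DSS over Z_n (Levenshtein's construction)."""
    t1 = math.ceil(math.sqrt((n - 1) / 2))
    t0 = math.ceil((n - 1) / (2 * t1))
    D1 = set(range(1, t0 + 1))
    D0 = {i * t0 + 1 for i in range(1, t1 + 1)}
    return D0, D1

for n in (9, 51, 100):
    D0, D1 = index_one_dss(n)
    assert D0.isdisjoint(D1) and dss_index([D0, D1], n) >= 1
    # For n = 2m^2 + 1 (here 9 and 51) the redundancy |D0| + |D1|
    # meets the Levenshtein bound sqrt(2(n-1)) with equality.
    print(n, len(D0) + len(D1), round(math.sqrt(2 * (n - 1)), 2))
```

For $n = 9$, for instance, the construction returns $D_1 = \{1,2\}$ and $D_0 = \{3,5\}$ with redundancy $4 = \sqrt{2 \cdot 8}$.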
\begin{theorem}[\cite{L}]\label{rho2}
For any integer $n \geq 2$,
the pair of sets
\begin{align*}D_0 &= \{\tau_0+1\}\cup\{n-i\tau_0 \ \vert \ 0 \leq i \leq \tau_1-2\}\ \text{and}\\
D_1 &= \{i \ \vert \ 1 \leq i \leq \tau_0\}
\end{align*}
form an optimal DSS of index two over ${\textit{\textbf{Z}}}_n$, where
\[\tau_0 = \left\lceil\frac{n-1}{\tau_1}\right\rceil\ \text{and}\ \tau_1 = \left\lceil\sqrt{n-1}\right\rceil.\]
\end{theorem}
As in Theorem \ref{rho1}, the redundancies of the optimal DSSs in Theorem \ref{rho2} achieve the lower bound in Theorem \ref{lbound} with equality
when $n = m^2+1$ for some positive integer $m$.
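The index-two construction can be instantiated the same way; for orders $n = m^2 + 1$ the redundancy $2m$ matches $\sqrt{2\rho(n-1)} = \sqrt{4(n-1)}$ exactly (sketch ours):

```python
import math
from collections import Counter

def dss_index(sets, n):
    """Index of a DSS by exhaustive outer-difference counting."""
    diffs = Counter((a - b) % n
                    for i, X in enumerate(sets) for j, Y in enumerate(sets)
                    if i != j for a in X for b in Y)
    return min(diffs.get(d, 0) for d in range(1, n))

def index_two_dss(n):
    """Optimal index-two DSS over Z_n (Levenshtein's construction)."""
    t1 = math.ceil(math.sqrt(n - 1))
    t0 = math.ceil((n - 1) / t1)
    D1 = set(range(1, t0 + 1))
    D0 = {(t0 + 1) % n} | {(n - i * t0) % n for i in range(t1 - 1)}
    return D0, D1

for m in (3, 4, 5):
    n = m * m + 1
    D0, D1 = index_two_dss(n)
    assert dss_index([D0, D1], n) >= 2
    # Redundancy and Levenshtein bound coincide for these orders.
    print(n, len(D0) + len(D1), int(math.sqrt(4 * (n - 1))))
```

Note the reduction modulo $n$ in $D_0$: the element $n - 0 \cdot \tau_0$ is the residue $0$, as in the order-$10$ instance $D_0 = \{0, 4, 7\}$, $D_1 = \{1, 2, 3\}$.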
It is notable that,
in terms of the asymptotic notation,\footnote{Here we use the family of Bachmann-Landau notations
defined in standard textbooks in mathematics and computer science such as \cite[Section 9]{ConMath}.
The Landau symbol $\mathcal{O}(\cdot)$, which is also known as the big-$O$ symbol, is sometimes written
as $O(\cdot)$ with a simple italic $O$ letter in the literature.}
these optimal DSSs only require $\mathcal{O}(n^{\frac{1}{2}})$ bits for synchronization.
For instance, if we use an optimal DSS for symbol synchronization with PPM,
because $n = \mathcal{O}(Q)$, the number of time slots we sacrifice is $\mathcal{O}(Q^{\frac{1}{2}})$ per symbol interval.
Any method that inserts a sequence of valid symbol intervals
must sacrifice $sQ = \Omega(Q)$ time slots for some positive integer $s$,
occupying a significantly larger number of time slots than our method.
It is also worth noting that Theorems \ref{rho1} and \ref{rho2} explicitly give optimal examples for all nontrivial order $n \geq 2$.
If the peak power is severely limited, we may need a DSS of larger index for secure synchronization.
Such DSSs have been studied in various contexts (see, for example, \cite{FV,FL,ZTWY,LF,CLY,D,FMT,CD,T} and references therein).
To study the use of a DSS $\{D_0, D_1\}$ over ${\textit{\textbf{Z}}}_n$ for synchronization in the context of signal modulation by pulse positions,
we use the \textit{redundancy rate} $R = \frac{\vert D_0 \vert + \vert D_1 \vert}{n}$ of the DSS to measure its slot usage.
For instance, if the redundancy rate is half, synchronization and modulation require the same amount of time resources.
The least useful DSSs for our purpose are those of redundancy rate one because there would be no time slots for modulation.
We aim for the smallest possible redundancy rate for given $n$ and $\rho$.
Note that if we define an analogous parameter for a synchronization method that inserts a sequence of valid codewords for symbol synchronization
by taking the fraction of time slots devoted to synchronization among all time slots per symbol,
such a value can never be less than a half because one ought to insert at least one marker symbol per symbol interval.
As we will see next, our method can break this fundamental limit even when the channel is assumed to be noisy.
The lower bound on the achievable redundancy given in Theorem \ref{lbound}
suggests that the number of time slots required for synchronization may still be only $\mathcal{O}(n^{\frac{1}{2}})$ for the case when the index is larger than two.
In fact, there are known classes of optimal DSSs with exactly two sets that meet the Levenshtein bound with equality.
In the remainder of this subsection, we list known optimal DSSs that are useful for our purpose
as well as almost optimal DSSs that are equally of interest but are not found in the literature.
Let $p = eg+1$ be an odd prime for some positive integers $e$ and $g$.
The $e$th \textit{cyclotomic classes} in $\mathbb{F}_p$ are defined as
$C_i^e = \{\alpha^{i+te} \ \vert \ 0 \leq t \leq g-1\}$, where $\alpha$ is a primitive element of $\mathbb{F}_p$
and $0 \leq i \leq e-1$.
The \textit{cyclotomic numbers} $(i,j)_e$ of \textit{order} $e$ are defined as $(i,j)_e = \left\vert (C_i^e+1) \cap C_j^e\right\vert$, where $C_i^e + 1 = \{x + 1 \ \vert \ x \in C_i^e\}$.
Note that the ring ${\textit{\textbf{Z}}}_p$ may be identified with the finite field $\mathbb{F}_p$ when $p$ is prime.
We use the following special case of the construction for DSSs given in \cite{MT}:
\begin{theorem}[\cite{MT}]\label{cyclotomic}
Let $p = 2eh+1$ be an odd prime, where $e$ and $h$ are positive integers.
The set $\{C_{0}^{2e}, C_{e}^{2e}\}$ of two cyclotomic classes in $\mathbb{F}_p$ forms
a regular \textup{DSS} of index $\rho$ over ${\textit{\textbf{Z}}}_p$, where
\[\rho = \min\left\{(i, e)_{2e} + (i+e, e)_{2e}\ \middle\vert \ 0 \leq i \leq e-1\right\}.\]
In particular, if
\[(i, e)_{2e} + (i+e, e)_{2e} = \frac{h}{e}\]
for every $i$, then the regular \textup{DSS} is of index $\frac{h}{e}$, perfect, and hence optimal.
\end{theorem}
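Cyclotomic classes and numbers are straightforward to compute for small primes. The sketch below (ours; it brute-forces a primitive element, which is fine at these sizes) checks the partition property in $\mathbb{F}_{17}$ and the sum condition of the theorem above for $p = 17 = 2eh + 1$ with $e = 2$ and $h = 4$:

```python
def cyclotomic_classes(p, e):
    """The e-th cyclotomic classes C_i^e in F_p, p = eg + 1 prime."""
    alpha = next(a for a in range(2, p)
                 if len({pow(a, k, p) for k in range(p - 1)}) == p - 1)
    g = (p - 1) // e
    return [{pow(alpha, i + t * e, p) for t in range(g)} for i in range(e)]

def cyclotomic_number(p, e, i, j):
    """(i, j)_e = |(C_i^e + 1) intersected with C_j^e|."""
    C = cyclotomic_classes(p, e)
    return len({(x + 1) % p for x in C[i]} & C[j])

# The classes of order 2e = 4 partition F_17 \ {0}.
C = cyclotomic_classes(17, 4)
assert sorted(x for Ci in C for x in Ci) == list(range(1, 17))

# The condition (i, e)_{2e} + (i+e, e)_{2e} = h/e = 2 holds for i = 0.
print(cyclotomic_number(17, 4, 0, 2) + cyclotomic_number(17, 4, 2, 2))  # 2
```

Since the condition holds for every $i$, the pair $\{C_0^4, C_2^4\}$ in $\mathbb{F}_{17}$ yields a perfect DSS of index two, matching the $m = 1$ case of the next theorem.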
The following are the two known classes of optimal DSSs with exactly two sets constructed in this manner:
\begin{theorem}[\cite{FV}]\label{half1}
For every $m$ such that $16m^2+1$ is an odd prime,
the set $\{C_0^4, C_2^4\}$ of two cyclotomic classes in $\mathbb{F}_{16m^2+1}$ forms
a perfect regular \textup{DSS} of index $2m^2$ and redundancy rate $\frac{1}{2}-\frac{1}{32m^2+2}$ over ${\textit{\textbf{Z}}}_{16m^2+1}$.
\end{theorem}
\begin{theorem}[\cite{FV}]\label{third}
For every $m$ such that $108m^2+1$ is an odd prime,
the set $\{C_0^6, C_3^6\}$ of two cyclotomic classes in $\mathbb{F}_{108m^2+1}$ forms
a perfect regular \textup{DSS} of index $6m^2$ and redundancy rate $\frac{1}{3}-\frac{1}{324m^2+3}$ over ${\textit{\textbf{Z}}}_{108m^2+1}$.
\end{theorem}
Because the above optimal DSSs are both perfect and regular at the same time,
their redundancies meet the Levenshtein bound with equality.
For instance, the DSS over ${\textit{\textbf{Z}}}_{16m^2+1}$ in Theorem \ref{half1} uses $8m^2$ time slots for synchronization
and leaves the remaining $8m^2+1$ time slots for modulation.
For comparison to the method that inserts a sequence of symbols,
in the case of symbol synchronization,
this means that the efficiency of the DSSs in Theorem \ref{half1} in terms of slot usage is almost the same as inserting only one symbol per symbol interval.
Any method that inserts a symbol sequence in standard PPM or multipulse PPM
cannot assure synchronization at this redundancy rate even if the channel is almost noiseless,
whereas our method tolerates up to $\lfloor\frac{2m^2-1}{2}\rfloor = m^2-1$ bit flips for hard-decision decoding
and an equivalent level of noise for soft-decision decoding.
By the same token, Theorem \ref{third} gives DSSs in which
the number of time slots per symbol interval for synchronization is only about half the number of time slots for modulation.
As their redundancy rate $R \approx \frac{1}{3}$ suggests, this level of efficiency in terms of the number of sacrificed time slots is fundamentally unachievable
by any method that inserts valid codewords for synchronization.
Table \ref{numerical} lists the perfect regular DSSs obtained by Theorems \ref{half1} and \ref{third} for $m \leq 10$.
\begin{table}
\renewcommand{\arraystretch}{1.3}
\caption{Perfect regular DSSs from cyclotomic constructions for $m \leq 10$}
\label{numerical}
\centering
\begin{tabular}{cccccc}
\hline\hline
$m$ & $n$ & $\vert D_i \vert$ & $\rho$ & $R = \frac{\vert D_0 \vert + \vert D_1 \vert}{n}$ & \bfseries Reference\\
\hline
$1$ & $17$ & $4$ & $2$ & $\frac{8}{17}$ & Theorem \ref{half1}\\
$4$ & $257$ & $64$ & $32$ & $\frac{128}{257}$ & Theorem \ref{half1}\\
$5$ & $401$ & $100$ & $50$ & $\frac{200}{401}$ & Theorem \ref{half1}\\
$6$ & $577$ & $144$ & $72$ & $\frac{288}{577}$ & Theorem \ref{half1}\\
$9$ & $1297$ & $324$ & $162$ & $\frac{648}{1297}$ & Theorem \ref{half1}\\
$10$ & $1601$ & $400$ & $200$ & $\frac{800}{1601}$ & Theorem \ref{half1}\\
\hline
$1$ & $109$ & $18$ & $6$ & $\frac{36}{109}$ & Theorem \ref{third}\\
$2$ & $433$ & $72$ & $24$ & $\frac{144}{433}$ & Theorem \ref{third}\\
$6$ & $3889$ & $648$ & $216$ & $\frac{1296}{3889}$ & Theorem \ref{third}\\
\hline
\hline
\end{tabular}
\end{table}
If we allow the redundancy of a DSS to be slightly above the right-hand side of the lower bound in Theorem \ref{lbound},
we may extend the range of parameters covered by the same construction method while keeping redundancy very low.
This idea was investigated in a more general setting by Mutoh in his unpublished manuscript \cite{Mutoh}.
Here we give two classes of DSSs by simply plugging cyclotomic numbers calculated in \cite{Dickson} into Theorem \ref{cyclotomic}.
For details of the calculations of cyclotomic numbers, we refer the reader to \cite{Dickson,Storer}.
\begin{theorem}\label{4n+1}
Let $n \equiv 1 \pmod{4}$ be a prime and let $n = x^2+4y^2$ with $x \equiv 1 \pmod{4}$ be its decomposition as a binary quadratic form.
Then the set $\{C_0^4, C_2^4\}$ of two cyclotomic classes in $\mathbb{F}_n$ forms
a regular \textup{DSS} of index $\rho$ and redundancy rate $\frac{1}{2}-\frac{1}{2n}$ over ${\textit{\textbf{Z}}}_n$, where
\[\rho =
\begin{cases}
\min\left(\frac{n-3+2x}{8}, \frac{n+1-2x}{8}\right) & \text{if}\ n \equiv 1 \pmod{8},\\
\min\left(\frac{n-3-2x}{8}, \frac{n+1+2x}{8}\right) & \text{otherwise}.
\end{cases}\]
\end{theorem}
\begin{IEEEproof}
Let $n = 4h+1$ be a prime for some integer $h$ and let $n = x^2+4y^2$ with $x \equiv 1 \pmod{4}$ be its decomposition.
Take the set $\{C_0^4, C_2^4\}$ of two cyclotomic classes in $\mathbb{F}_n$.
We compute the index $\rho$ of this set as a DSS over ${\textit{\textbf{Z}}}_{n}$.
By Theorem \ref{cyclotomic}, we have $\rho = \min((0,2)_4+(2,2)_4, (1,2)_4+(3,2)_4)$.
If $n \equiv 1 \pmod{8}$, by plugging the actual values of the cyclotomic numbers of order four \cite{Dickson},
we have
\begin{align*}
(0,2)_4+(2,2)_4 &= 2(0,2)_4\\
&= \frac{n-3+2x}{8}
\end{align*}
and
\begin{align*}
(1,2)_4+(3,2)_4 &= 2(1,2)_4\\
&= \frac{n+1-2x}{8}.
\end{align*}
Similarly, if $n \equiv 5 \pmod{8}$, we have
\[(0,2)_4+(2,2)_4 = \frac{n-3-2x}{8}\]
and
\[(1,2)_4+(3,2)_4 = \frac{n+1+2x}{8}.\]
Each cyclotomic class contains $h = \frac{n-1}{4}$ elements of $\mathbb{F}_n$.
Hence, $\{C_0^4, C_2^4\}$ forms a DSS of desired parameters.
\end{IEEEproof}
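The closed form in Theorem \ref{4n+1} can be checked numerically against a direct computation. The following Python sketch (our illustration; the brute-force search for the decomposition $n = x^2+4y^2$ and all function names are ours) confirms the agreement for the first few admissible primes.

```python
# Illustration only: check the closed form of the index in Theorem 4n+1
# against a brute-force computation for small primes n = 1 (mod 4).

def rho_bruteforce(n):
    # index of the DSS {C_0^4, C_2^4} over Z_n computed from the definition
    g = next(a for a in range(2, n)
             if len({pow(a, i, n) for i in range(n - 1)}) == n - 1)
    C0 = {pow(g, 4 * i, n) for i in range((n - 1) // 4)}
    C2 = {pow(g, 4 * i + 2, n) for i in range((n - 1) // 4)}
    counts = [0] * n
    for a in C0:
        for b in C2:
            counts[(a - b) % n] += 1   # differences a - b
            counts[(b - a) % n] += 1   # and b - a
    return min(counts[1:])

def rho_formula(n):
    # decomposition n = x^2 + 4y^2 with x = 1 (mod 4); unique for prime n
    def is_square(m):
        r = int(m ** 0.5)
        return r * r == m or (r + 1) * (r + 1) == m
    x = next(x for x in range(-n, n)
             if x % 4 == 1 and n - x * x > 0
             and (n - x * x) % 4 == 0 and is_square((n - x * x) // 4))
    if n % 8 == 1:
        return min((n - 3 + 2 * x) // 8, (n + 1 - 2 * x) // 8)
    return min((n - 3 - 2 * x) // 8, (n + 1 + 2 * x) // 8)

for n in [13, 17, 29, 37, 41, 53]:
    assert rho_bruteforce(n) == rho_formula(n)
print("closed form agrees with brute force")
```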
\begin{theorem}\label{6n+1}
Let $n \equiv 1 \pmod{6}$ be a prime and let $n = x^2+3y^2$ with $x \equiv 1 \pmod{3}$ be its decomposition as a binary quadratic form.
Then the set $\{C_0^6, C_3^6\}$ of two cyclotomic classes in $\mathbb{F}_n$
forms a regular \textup{DSS} of index $\rho$ and redundancy rate $\frac{1}{3}-\frac{1}{3n}$ over ${\textit{\textbf{Z}}}_n$, where
\[\rho =
\begin{cases}
\min\left(\frac{n-5+4x}{18}, \frac{n+1-2x}{18}\right)\\ \quad \text{if}\ 2\ \text{is a cubic residue modulo $n$},\\
\min\left(\frac{n-5+4x+6y}{18}, \frac{n+1-2x-12y}{18}, \frac{n+1-2x+6y}{18}\right)\\ \quad \text{otherwise}.
\end{cases}\]
\end{theorem}
\begin{IEEEproof}
Let $n = 6h+1$ be a prime for some integer $h$ and let $n = x^2+3y^2$ with $x \equiv 1 \pmod{3}$ be its decomposition.
Take the set $\{C_0^6, C_3^6\}$ of two cyclotomic classes in $\mathbb{F}_n$
to construct a DSS of index $\rho$ over ${\textit{\textbf{Z}}}_{n}$.
By Theorem \ref{cyclotomic}, we have
\[\rho = \min((0,3)_6+(3,3)_6, (1,3)_6+(4,3)_6, (2,3)_6+(5,3)_6).\]
As in the proof of Theorem \ref{4n+1}, a routine computation shows that
\[
(0,3)_6+(3,3)_6 =
\begin{cases}
\frac{n-5+4x}{18} & \text{if}\ 2\ \text{is a cubic residue},\\
\frac{n-5+4x+6y}{18} & \text{otherwise},
\end{cases}
\]
that
\[
(1,3)_6+(4,3)_6 =
\begin{cases}
\frac{n+1-2x}{18} & \text{if}\ 2\ \text{is a cubic residue},\\
\frac{n+1-2x-12y}{18} & \text{otherwise},
\end{cases}
\]
and that
\[
(2,3)_6+(5,3)_6 =
\begin{cases}
\frac{n+1-2x}{18} & \text{if}\ 2\ \text{is a cubic residue},\\
\frac{n+1-2x+6y}{18} & \text{otherwise}.
\end{cases}
\]
Each cyclotomic class contains $h = \frac{n-1}{6}$ elements of $\mathbb{F}_n$.
Hence, we obtain a DSS of desired parameters.
\end{IEEEproof}
Note that Theorems \ref{half1} and \ref{third} are special cases of these two classes
in which the resulting DSSs are simultaneously perfect and regular.
In general, Theorem \ref{cyclotomic} produces an optimal DSS or one close to optimal
when the cyclotomic classes in $\mathbb{F}_p$ can be taken such that
$(i, e)_{2e} + (i+e, e)_{2e}$ are uniform or almost uniform across $i$.
For instance, take $n = 37 = 1^2+4\cdot3^2$. Then by Theorem \ref{4n+1},
we obtain a DSS of index $4$ and redundancy $18$ over ${\textit{\textbf{Z}}}_{37}$.
By the Levenshtein bound, the redundancy of any DSS of the same index with exactly two sets over ${\textit{\textbf{Z}}}_{37}$
must be at least
\[\left\lceil \sqrt{2\cdot4\cdot(37-1)} \right\rceil = 17,\]
which is very close to $18$.
If lower time slot usage is desirable,
in principle, regular DSSs of better redundancy rate can be obtained in the same way in exchange for poorer indices $\rho$
by applying cyclotomic numbers of higher orders.
If higher indices are required to tolerate a higher noise level,
such perfect DSSs can be constructed, albeit with more complicated computation,
by taking unions of cyclotomic classes and increasing time slot usage accordingly (see \cite{FMY}).
DSSs of index larger than two can be obtained by known recursive constructions as well.
The following is a relevant special case of the recursive constructions given in \cite{FL}.
\begin{theorem}[\cite{FL}]\label{recursive}
Let $n = \frac{q^{t+1}-1}{q-1}$ and $n' = \frac{q^{2t+2}-1}{q-1}$,
where $q$ is a prime power and $t$ a positive integer.
If there exists a \textup{DSS} $\{D_0, D_1\}$ of index $\rho$ over ${\textit{\textbf{Z}}}_n$,
then there exists a \textup{DSS} $\{D'_0, D'_1\}$ of index $\rho'$ over ${\textit{\textbf{Z}}}_{n'}$, where
\[\vert D'_i \vert = q^{t+1}\vert D_i\vert \ \text{for}\ i = 0, 1\]
and
\[\rho' = \min\left(\rho q^{t+1}, 2(q-1)\vert D_0\vert\vert D_1\vert\right),\]
and a \textup{DSS} $\{D''_0, D''_1\}$ of index $\rho''$ over ${\textit{\textbf{Z}}}_{n'}$, where
\[\vert D''_i \vert = q^{t+1}\vert D_i\vert + in \ \text{for}\ i = 0, 1\]
and
\[\rho'' = \min\left(\rho q^{t+1}, 2(q-1)\vert D_0\vert\vert D_1\vert + 2\vert D_0 \vert\right).\]
\end{theorem}
This recursive construction gives DSSs of improved index by increasing the time slot usage for synchronization.
For instance, if we apply the first half of Theorem \ref{recursive} to the optimal DSS of index one over ${\textit{\textbf{Z}}}_n$
with $n = \frac{q^{t+1}-1}{q-1}$ obtained by Theorem \ref{rho1},
the index of the resulting DSS is at least $q^{t+1}-q$ because we have
\begin{align*}
2(q-1)\tau_0\tau_1 &\geq 2(q-1)\tau_1^2\\
&\geq q^{t+1}-q.
\end{align*}
While we have focused on theoretical aspects of DSSs and systematic constructions,
one may also look for optimal DSSs of specific parameters through computer searches.
An algorithm for finding optimal DSSs is proposed in \cite{TW}.
An explicit example of optimal DSSs of index $\rho$ over ${\textit{\textbf{Z}}}_n$ for each $\rho \leq 5$ and $n \leq 30$
can be found in \cite{web}.
The computer search results and the existence of systematic constructions for particular parameters seem to suggest that,
while it is quite difficult to give explicit constructions,
the redundancies of optimal DSSs of index $\rho$ over ${\textit{\textbf{Z}}}_n$ are generally equal or very close to $\sqrt{2\rho(n-1)}$.
\subsection{Modulation layer}\label{mod}
We now turn our attention to the modulation layer of our scheme.
As we have seen in the previous subsection,
we can employ standard PPM and its variations by exploiting the freely available bits given by a DSS.
One major benefit of using a DSS is that it allows for error tolerant synchronization
while still keeping the number of time slots for synchronization per symbol very low.
One might then wish to provide error correction at the modulation stage as well,
to eliminate the need for, or reduce the burden on, error correction at a higher layer.
Expurgated PPM is an error-correcting variant of PPM that can be understood as a binary constant-weight error-correcting code.
To take advantage of our coding theoretic framework, we first briefly review this modulation technique and describe it
in the language of constant-weight codes.
A more general error-correcting variant of expurgated PPM will then be developed to accommodate a larger number of symbols.
Expurgated PPM employs special combinatorial designs with cyclic automorphisms.
A \textit{simple} $2$-\textit{design} of \textit{order} $v$, \textit{block size} $k$, and \textit{index} $\mu$
is an ordered pair $(V, \mathcal{B})$, where $V$ is a finite set of cardinality $v$
and $\mathcal{B}$ is a set of $k$-subsets of $V$ such that each pair of elements of $V$ is included in exactly $\mu$ elements of $\mathcal{B}$.
Elements of $V$ are called \textit{points} while those of $\mathcal{B}$ are \textit{blocks}.
A simple $2$-design $(V, \mathcal{B})$ of order $v$ is said to be \textit{symmetric} if $\vert \mathcal{B} \vert = \vert V \vert = v$.
It is \textit{cyclic} if the cyclic group of order $v$ acts regularly on the points.
A \textit{difference set} of order $v$ and index $\mu$ is a set $B$ of non-negative integers less than $v$
such that every element of $\textit{\textbf{Z}}_v \setminus \{0\}$ appears exactly $\mu$ times
as the difference $a - b \pmod{v}$ between two distinct elements $a, b \in B$.
To avoid the trivial case, we assume that $k = \vert B \vert > \mu$.
Let $\pi$ be the map $B \mapsto B+1 = \{b +1 \pmod{v} \ \vert \ b \in B\}$.
It is straightforward to see that if the orbit $\text{\textit{Orb}}_{\textit{\textbf{Z}}_v}(B) = \bigcup_{i \in \textit{\textbf{Z}}_v}\left\{\pi^i(B)\right\}$ is of length $v$,
then the $v$ subsets of $\textit{\textbf{Z}}_v$ form a cyclic simple $2$-design of order $v$ and index $\mu$ that is symmetric.
Its block size is $\vert B \vert = \frac{1+\sqrt{4(v-1)\mu+1}}{2}$.
Expurgated PPM employs the $v$ blocks of a simple $2$-design constructed from a difference set $B$ of order $v$.
Trivially, the block set $\mathcal{B} = \text{\textit{Orb}}_{\textit{\textbf{Z}}_v}(B)$ of this symmetric design forms the set of supports of the codewords
of a binary constant-weight code of length $v$ and weight $k = \frac{1+\sqrt{4(v-1)\mu+1}}{2}$ with $v$ codewords.
Because every nonzero difference appears exactly $\mu$ times in $B$, the minimum distance is $2(k-\mu)$.
Conversely, it is straightforward to see that a binary constant-weight code of these parameters in which
every cyclic shift of a codeword is also a codeword forms a difference set.
By employing this binary constant-weight code for modulation based on pulse positions as in PPM,
we have $Q = v$ time slots for each symbol interval in which
$K = k = \frac{1+\sqrt{4(v-1)\mu+1}}{2}$ pulses are transmitted to represent $M = v = Q$ symbols.
Expurgated PPM thus supports the same number of symbols as standard PPM and has an increased minimum distance.
The code represented by $\mathcal{B}$ has the property that every cyclic shift of a codeword is also a codeword.
Hence, implementation on the receiver side only requires a simple correlation receiver \cite{NB,NB2}.
The above coding theoretic interpretation may be summarized by the following proposition:
\begin{proposition}\label{prop}
The symbols of expurgated PPM that transmits $K$ pulses in each $Q$-slot interval
are equivalent to a binary constant-weight code of length $Q$, weight $K$, and minimum distance $\frac{2K(Q-K)}{Q-1}$ with $Q$ codewords
in which every cyclic shift of a codeword is also a codeword.
\end{proposition}
\begin{IEEEproof}
The set of the $Q$ symbols of expurgated PPM is a subset $\mathcal{P}'_K$ of
the set $\mathcal{P}_K = \left\{\boldsymbol{v}_i \in \mathbb{F}_2^Q \ \middle\vert\ \operatorname{wt}(\boldsymbol{v}_i) = K\right\}$
of all $Q$-dimensional binary vectors $\boldsymbol{v}_i$ of weight $K$ such that
the set $\mathcal{B} = \{\text{supp}(\boldsymbol{v}_i)\ \vert \ \boldsymbol{v}_i \in \mathcal{P}'_K\}$ of supports of the $Q$-dimensional vectors in $\mathcal{P}'_K$
forms the block set of a cyclic simple $2$-design of order $Q$ and block size $K$ that is symmetric.
Let $\mu$ be the index of this corresponding symmetric $2$-design.
It suffices to show that the minimum distance $2(K-\mu)$ of the corresponding constant-weight code is equal to $\frac{2K(Q-K)}{Q-1}$.
Because every pair of points is included in exactly $\mu$ blocks while each of the $Q$ blocks contains ${{K}\choose{2}}$ pairs,
we have
\[\mu{{Q}\choose{2}} = Q{{K}\choose{2}},\]
which implies that
\[\mu = \frac{K(K-1)}{Q-1}.\]
Hence, we have
\begin{align*}
2(K-\mu) &= 2\left(K-\frac{K(K-1)}{Q-1}\right)\\
&= \frac{2K(Q-K)}{Q-1}
\end{align*}
as desired.
\end{IEEEproof}
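Proposition \ref{prop} can be illustrated concretely. The sketch below (ours; the choice of the $(11,5,2)$ difference set $\{0,2,3,4,8\}$ is one example) generates the $Q = 11$ codewords as cyclic shifts of the difference set and checks the minimum distance $\frac{2K(Q-K)}{Q-1} = 6$.

```python
# Illustration only: the Q = 11 symbols of expurgated PPM obtained from the
# (11, 5, 2) difference set {0, 2, 3, 4, 8}, with the minimum distance of
# the proposition above checked explicitly.
Q, B = 11, frozenset({0, 2, 3, 4, 8})

# the 11 cyclic shifts of B are the supports of the 11 codewords
codewords = [frozenset((b + i) % Q for b in B) for i in range(Q)]

def hamming(S, T):
    # Hamming distance between the indicator vectors of supports S and T
    return len(S ^ T)

K = len(B)
dists = [hamming(S, T) for S in codewords for T in codewords if S != T]
# minimum distance equals 2K(Q - K)/(Q - 1)
print(len(set(codewords)), min(dists), 2 * K * (Q - K) // (Q - 1))  # -> 11 6 6
```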
We generalize expurgated PPM by taking advantage of the above interpretation.
Our approach is to use more general constant-weight codes to realize a variety of parameters of PPM schemes.
For instance, we may increase the number of orbits over $\textit{\textbf{Z}}_v$ while keeping the minimum distance large,
so that the number of codewords becomes a multiple of $v$ rather than exactly $v$.
As we will see in this section, this idea can be formalized through coding theory.
A $(v,k,\lambda)$ \textit{optical orthogonal code}
$\mathcal{C} \subseteq \mathbb{F}_2^v$ of \textit{length} $v$, \textit{weight} $k$, and \textit{index} $\lambda$ is
a set of $v$-dimensional binary vectors of weight $k$ such that
for any $\boldsymbol{c} \in \mathcal{C}$ its off-peak periodic autocorrelations are at most $\lambda$
and for any pair of distinct codewords $\boldsymbol{c}, \boldsymbol{c}' \in \mathcal{C}$ their periodic cross-correlations are at most $\lambda$.
In other words, it is a set of $v$-dimensional vectors with $k$ $1$s and $v-k$ $0$s whose coordinates are indexed by $\textit{\textbf{Z}}_v$ such that
\[\sum_{0 \leq t \leq v-1}c_t c_{t+i} \leq \lambda\]
for any $\boldsymbol{c} = (c_0, c_1, \dots, c_{v-1}) \in \mathcal{C}$ and any nonzero element $i \in \textit{\textbf{Z}}_v$ and such that
\[\sum_{0 \leq t \leq v-1}c_t c'_{t+i} \leq \lambda\]
for any pair of distinct vectors $\boldsymbol{c} = (c_0, c_1, \dots, c_{v-1}), \boldsymbol{c}' = (c'_0, c'_1, \dots, c'_{v-1})\in \mathcal{C}$
and any $i \in \textit{\textbf{Z}}_v$.
We do not consider the trivial case $k = \lambda$ and always assume that $k > \lambda$.
We allow the special case $\vert \mathcal{C} \vert = 1$ as long as the autocorrelation property holds for the unique codeword.
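The defining correlation conditions are easy to verify mechanically for a candidate code given in set representation. The sketch below is our illustration; the $(16,3,1)$ code $\{0,1,3\}, \{0,4,9\}$ and the deliberately failing singleton $\{0,1,2\}$ are our example inputs.

```python
# Illustration only: verify the defining correlation conditions for a
# candidate optical orthogonal code given in set representation.

def max_correlation(v, sets):
    """Largest off-peak periodic auto-/cross-correlation of the code."""
    best = 0
    for a, A in enumerate(sets):
        for b, B in enumerate(sets):
            for i in range(v):
                if a == b and i == 0:
                    continue  # skip only the in-phase autocorrelation peak
                best = max(best, len(A & {(x + i) % v for x in B}))
    return best

# {0,1,3} and {0,4,9} form a (16, 3, 1) optical orthogonal code ...
print(max_correlation(16, [{0, 1, 3}, {0, 4, 9}]))  # -> 1
# ... whereas {0,1,2} alone already violates lambda = 1 by autocorrelation
print(max_correlation(8, [{0, 1, 2}]))              # -> 2
```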
Optical orthogonal codes have been extensively investigated from various viewpoints including
the initial motivation in the context of code-division multiple-access fiber optical communications \cite{CSW}.
A useful observation for our goal is that an optical orthogonal code is equivalent to
a binary constant-weight code in which every cyclic shift of a codeword is also a distinct codeword,
which is the property we would like for our signal modulation purpose.
To see the equivalence, take the union $\mathcal{D}$
of a $(v,k,\lambda)$ optical orthogonal code $\mathcal{C}$ and the set of all $v-1$ distinct cyclic shifts of each codeword.
Then $\mathcal{D}$ forms a binary constant-weight code of length $v$, weight $k$, and minimum distance $2(k-\lambda)$.
Trivially, the converse also holds.
Because we included all cyclic shifts of $\boldsymbol{c} \in \mathcal{C}$ in $\mathcal{D}$,
every cyclic shift of $\boldsymbol{d} \in \mathcal{D}$ is naturally in $\mathcal{D}$ again.
Because $k > \lambda$ is assumed, all these cyclic shifts are distinct,
so the number of codewords of the corresponding binary constant-weight code is $v\vert \mathcal{C} \vert$.
Now by Proposition \ref{prop},
if we look at expurgated PPM with $v$ symbols that uses $k$ pulses per symbol in our coding theoretic framework,
it is a binary constant-weight code of length $v$, weight $k$, and minimum distance $2(k-\lambda)$
with exactly $v$ codewords in which every cyclic shift of a codeword is also a codeword.
In other words, it is simply the union of a special $(v,k,\lambda)$ optical orthogonal code with only one codeword and its $v-1$ cyclic shifts.
Because it is also a symmetric design, we have $\lambda = \frac{k(k-1)}{v-1}$.
Thus, we obtain the following proposition:
\begin{proposition}
The set of symbols of expurgated PPM that transmits $K$ pulses in each $Q$-slot interval is equivalent to
a $(Q, K, \frac{K(K-1)}{Q-1})$ optical orthogonal code with exactly one codeword.
\end{proposition}
Since an optical orthogonal code $\mathcal{C}$ of length $v$ gives rise to a binary constant-weight code with $v\vert\mathcal{C}\vert$ codewords
by joining the $v-1$ distinct cyclic shifts of all $\boldsymbol{c} \in \mathcal{C}$,
it is natural to generalize the PPM method such that modulation exploits an optical orthogonal code with more than one codeword.
With this generalization, a larger number of symbols can be supported compared to expurgated PPM
while still using the same decoder for each orbit and maintaining the error correction mechanism at the modulation stage.
Summarizing the discussion given above in this subsection, we have the following theorem:
\begin{theorem}\label{generalPPM}
The set of the codewords of a $(v,k,\lambda)$ optical orthogonal code $\mathcal{C}$ and all cyclic shifts of each codeword
defines a $(v\vert\mathcal{C}\vert)$-ary signal modulation technique
in which each of the $v\vert\mathcal{C}\vert$ symbols with mutual Hamming distance at least $2(k-\lambda)$
is represented by $k$ single pulses transmitted at $k$ out of $v$ time slots.
\end{theorem}
It is notable that this generalization of expurgated PPM also allows for the case when the code is of size $\vert \mathcal{C} \vert = 1$
but does not form a difference set. An example of this type of optical orthogonal code is the binary vector representation of
an \textit{almost difference set} given in \cite{DHL}.
Because difference sets are known to be difficult to construct,
allowing optical orthogonal codes that are not difference sets
substantially extends the range of possible parameters of modulation schemes of PPM type.
We would like optical orthogonal codes with the largest possible number of codewords for given $v$, $k$, and $\lambda$.
Because an optical orthogonal code $\mathcal{C}$ is already a constant-weight code before joining the cyclic shifts of each codeword,
the Johnson bound gives the upper bound on the number of codewords of $\mathcal{C}$:
\begin{theorem}[\cite{Johnson}]\label{jb}
Let $\mathcal{C}$ be a $(v,k,\lambda)$ optical orthogonal code.
Then it holds that
\[
\vert \mathcal{C} \vert \leq
\left\lfloor\frac{1}{k}\left\lfloor\frac{v-1}{k-1}\left\lfloor\frac{v-2}{k-2}\left\lfloor\cdots\left\lfloor\frac{v-\lambda}{k-\lambda}
\right\rfloor\cdots\right\rfloor\right\rfloor\right\rfloor\right\rfloor.
\]
\end{theorem}
A $(v,k,\lambda)$ optical orthogonal code is \textit{optimal} if the number of codewords attains this upper bound.
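The nested floors of Theorem \ref{jb} evaluate from the inside out, as in the following sketch (our illustration; the sample parameters are ours).

```python
# Illustration only: evaluate the nested floors of the Johnson bound.
def johnson_bound(v, k, lam):
    b = (v - lam) // (k - lam)        # innermost floor
    for i in range(lam - 1, 0, -1):   # remaining floors, inside out
        b = (v - i) * b // (k - i)
    return b // k                     # outermost floor

print(johnson_bound(8, 3, 1))   # -> 1
print(johnson_bound(16, 3, 1))  # -> 2, attained by the code {0,1,3}, {0,4,9}
print(johnson_bound(63, 3, 1))  # -> 10, i.e., floor((63 - 1)/6)
```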
Table \ref{ooctable} lists well-known classes of optimal optical orthogonal codes of index one with more than one codeword that span a variety of lengths and weights.
\begin{table*}
\renewcommand{\arraystretch}{1.6}
\caption{Some classes of optimal optical orthogonal codes of index one with more than one codeword}
\label{ooctable}
\centering
\begin{tabular}{cccccc}
\hline\hline
\bfseries Length $v$ & \bfseries Weight $k$ & \bfseries Index $\lambda$ & \bfseries Number $\vert \mathcal{C} \vert$ of codewords & \bfseries Constraint & \bfseries Reference\\
\hline
$n$ & $3$ & $1$ & $\left\lfloor\frac{n-1}{6}\right\rfloor$ & $n\not\equiv 14, 20 \pmod{24}$\rlap{\textsuperscript{a}} & \cite{Peltesohn,CSW} \\
$n$ & $4$ & $1$ & $\left\lfloor\frac{n-1}{12}\right\rfloor$ & $n \equiv 0, 6, 18 \pmod{24}$ & \cite{GY,CFM,CM} \\
\multirow{3}{*}{$p$} & \multirow{3}{*}{any integer $k$} & \multirow{3}{*}{$1$} & \multirow{3}{*}{$\frac{p-1}{k(k-1)}$} & $p \equiv 1 \pmod{k(k-1)}$ is prime, & \multirow{3}{*}{\cite{WilsonC}} \\
& & & & $p > c_k$, &\\
& & & & $c_k$ is a constant dependent on $k$\rlap{\textsuperscript{b}} &\\
$q^t-1$ & $q$ & $1$ & $\frac{q^{t-1}-1}{q-1}$ & $q$ is a prime power & Affine geometry with origin deleted\rlap{\textsuperscript{c}}\\
$\frac{q^{t+1}-1}{q-1}$ & $q+1$ & $1$ & $\begin{cases}\frac{q^t-1}{q^2-1}, \ t\ \text{even}\\ \frac{q^t-q}{q^2-1}, \ t\ \text{odd}\end{cases}$ & $q$ is a prime power & Projective geometry\\
\hline
\hline
\multicolumn{6}{l}{\scriptsize\textsuperscript{a}
This is a necessary and sufficient condition for the existence (see \cite{AB} for a short proof).
The constraints for the other classes are sufficient conditions.}\vspace{-1.1mm}\\
\multicolumn{6}{l}{\scriptsize\textsuperscript{b} $c_4 = c_5 = 0$ \cite{CZ}.
$c_6 = 61$ \cite{CZ2}. For $k \geq 7$ in general, the best known value is $c_k = {{k}\choose{2}}^{k(k-1)}$ \cite{WilsonC}.}\vspace{-1.1mm}\\
\multicolumn{6}{l}{\scriptsize\textsuperscript{c} The same parameters may be realized as a generalized Bose-Chowla family \cite{MOKL}.}\vspace{2.2mm}
\end{tabular}
\end{table*}
There are numerous other constructions and existence results.
For the case when the index is one, all other known results can be found in \cite{HandbookCD,WC,RR,BT,Momi,YYL} and references therein.
For the latest results on optical orthogonal codes of higher index, we refer the reader to \cite{HandbookCD,FCJ,FCJ2,FM2} and references therein.
In the remainder of this section, we give some examples of our generalized expurgated PPM
to show how our modulation method enriches the family of PPM schemes.
Because typical signal modulation is $2^m$-ary for some positive integer $m$,
we focus on the case when the number $M$ of symbols is a power of $2$.
To present binary constant-weight codes for modulation in a compact way, codewords are given in terms of their supports.
Take a constant-weight code $\mathcal{D} \subseteq \mathbb{F}_2^Q$ of length $Q$, constant weight $K$,
and minimum distance $2(K-\lambda)$ in which every cyclic shift of every codeword is also a codeword.
Let $\mathcal{B} = \{\operatorname{supp}(\boldsymbol{d}) \ \vert \ \boldsymbol{d} \in \mathcal{D}\}$ be the set of supports of all vectors in $\mathcal{D}$.
Then for any $B \in \mathcal{B}$, we have $B + 1 \in \mathcal{B}$.
In other words, $\mathcal{B}$ is a set of subsets of $V = \{0,1,\dots,Q-1\}$ in which the cyclic group of order $Q$ acts regularly on $V$.
Hence, the elements of $\mathcal{B}$ can be partitioned into orbits
$\text{\textit{Orb}}_{\textit{\textbf{Z}}_Q}(B) = \bigcup_{i \in \textit{\textbf{Z}}_Q}\left\{\pi^i(B)\right\}$, $B \in \mathcal{B}$,
where $\pi(B) = B+1$.
As we have seen in this section, if each orbit is of the same size $Q$, a system of representatives of these orbits forms
the set of supports of all codewords of a $(Q,K,\lambda)$ optical orthogonal code.
Trivially, if $K > \lambda$, the converse also holds.
With this relation, optical orthogonal codes can always be written by finite sets.
For instance, an optimal $(8,3,1)$ optical orthogonal code with the single codeword $(1,1,0,1,0,0,0,0)$ can be written as the single set $\{0,1,3\}$
because the codeword has a $1$ at each coordinate $i \in \{0,1,3\}$ and a $0$ elsewhere.
The binary constant-weight code $\mathcal{D}$ for modulation
in which the positions of $1$s of a codeword represent the positions of pulses in the corresponding symbol
is exactly the set
\begin{align*}
\mathcal{D} = \{&(1,1,0,1,0,0,0,0), (0,1,1,0,1,0,0,0),\\ &(0,0,1,1,0,1,0,0), (0,0,0,1,1,0,1,0),\\ &(0,0,0,0,1,1,0,1), (1,0,0,0,0,1,1,0),\\ &(0,1,0,0,0,0,1,1), (1,0,1,0,0,0,0,1)\}
\end{align*}
obtained by joining the cyclic shifts of $(1,1,0,1,0,0,0,0)$.
If there are two or more codewords in an optical orthogonal code,
then each of the corresponding sets obtained by taking the supports forms a distinct orbit in $\mathcal{D}$.
For more mathematical details of the set representation of an optical orthogonal code, we refer the reader to \cite{FM}.
Table \ref{tablePPM} lists some examples of $M$-ary PPM, MPPM, expurgated PPM, and generalized expurgated PPM for $M = 8$, $16$, and $32$.
\begin{table*}
\renewcommand{\arraystretch}{1.6}
\caption{Small $M$-ary PPM for $M = 2^m$}
\label{tablePPM}
\centering
\begin{tabular}{cccccc}
\hline\hline
\bfseries Type\rlap{\textsuperscript{a}} & \bfseries Number $M$ of Symbols & \bfseries Interval Size $Q$ & \bfseries Number $K$ of Pulses & \bfseries Minimum Distance $d$ & \bfseries Optical Orthogonal Code\rlap{\textsuperscript{b}}\\
\hline
PPM & $8$ & $8$ & $1$ & $2$ & N/A\\
GEPPM & $8$ & $8$ & $3$ & $4$ & $\{0,1,3\}$\\
EPPM & $8$ & $11$ & $5$ & $6$ & $\{0,2,3,4,8\}$\\
\hline
PPM & $16$ & $16$ & $1$ & $2$ & N/A\\
AEPPM & $16$ & $11$ & $5$ & $5$ & $\{0,2,3,4,8\}, \{1,5,6,7,9,10\}$\\
GEPPM & $16$ & $16$ & $4$ & $6$ & $\{0,1,3,7\}$\\
GEPPM & $16$ & $16$ & $8$ & $8$ & Almost difference set \cite[Theorem 4]{ADHKM}\rlap{\textsuperscript{c}}\\
EPPM & $16$ & $19$ & $9$ & $10$ & Paley-type difference set \cite{DSref}\rlap{\textsuperscript{d}}\\
\hline
PPM & $32$ & $32$ & $1$ & $2$ & N/A\\
MPPM & $32$ & $7$ & $3$ & $2$ & N/A\\
GEPPM & $32$ & $16$ & $3$ & $4$ & $\{0,1,3\}, \{0,4,9\}$\\
GEPPM & $32$ & $37$ & $10$ & $14$ & Almost difference set \cite[Theorem 3]{DHL}\rlap{\textsuperscript{e}}\\
EPPM & $32$ & $35$ & $17$ & $18$ & Difference set \cite{DSref}\rlap{\textsuperscript{f}}\\
\hline
\hline
\multicolumn{6}{l}{\scriptsize\textsuperscript{a}
This column indicates the type of modulation.
GEPPM stands for our proposed PPM that generalizes expurgated PPM. AEPPM is a variation of EPPM given in \cite{NB}.}\vspace{-1.1mm}\\
\multicolumn{6}{l}{\scriptsize\textsuperscript{b}
Explicit examples are given for small optical orthogonal codes of which the origins are unknown.}\vspace{-1.1mm}\\
\multicolumn{6}{l}{\scriptsize\textsuperscript{c}
The $(n,k,\lambda,t)$ almost difference sets given in \cite[Theorem 4]{ADHKM} are
$(n,k,\lambda+1)$ optical orthogonal codes of size $\vert \mathcal{C}\vert = 1$.}\vspace{-1.1mm}\\
\multicolumn{6}{l}{\scriptsize\textsuperscript{d}
An example of its set representation is $\{0,3,4,5,6,8,10,15,16\}$.}\vspace{-1.1mm}\\
\multicolumn{6}{l}{\scriptsize\textsuperscript{e}
The $(n,k,\lambda)$ almost difference sets given in \cite[Theorem 3]{DHL} are
$(n,k,\lambda+1)$ optical orthogonal codes of size $\vert \mathcal{C}\vert = 1$.}\vspace{-1.1mm}\\
\multicolumn{6}{l}{\scriptsize\textsuperscript{f} An example of its set representation is $\{0,1,3,4,7,9,11,12,13,14,16,17,21,27,28,29,33\}$.}\vspace{5.5mm}
\end{tabular}
\end{table*}
To keep the table concise, the set representations of example optimal optical orthogonal codes
for expurgated PPM and its generalized versions are given only when the origins seem unknown.
For other cases, references are given instead.
As is shown in the table, our modulation scheme greatly widens the range of available parameters of modulation techniques of PPM type.
For instance, compared to standard $8$-ary PPM,
$8$-ary EPPM achieves a large minimum distance of $6$ by increasing the interval size from $8$ to $11$ and the number of pulses from $1$ to $5$.
In other words, $8$-ary EPPM greatly increases the error tolerance
in exchange for increased energy per symbol and a poorer information rate.
Our $8$-ary modulation based on an $(8,3,1)$ optimal optical orthogonal code is a middle ground approach
in that it does not sacrifice the information rate and only slightly increases the required energy per symbol
in order to achieve good minimum distance.
As is also illustrated by the examples for the $16$-ary and $32$-ary cases in the table,
our generalized scheme typically complements PPM, MPPM, and EPPM by offering a solution that lies between these three extremes.
\section{Concluding remarks}
We introduced a coding theoretic framework to the study of signal modulation based on pulse positions in the time domain.
With the further help of combinatorial design theory,
this approach allowed us to develop a self-synchronizing scheme with significantly improved efficiency
while maintaining compatibility with existing modulation techniques such as standard pulse position modulation.
In fact, the number of time slots required per symbol for synchronization is now improved from $\Omega(Q)$ to $\mathcal{O}(Q^{\frac{1}{2}})$ in the asymptotic sense,
breaking the fundamental limit of the previously known synchronization method.
We were also able to generalize the recently introduced error-correcting pulse position modulation technique to realize a larger number of symbols.
This generalization is particularly appealing as the modulation layer for our synchronization method
because this way error correction can be fully supported at the modulation stage.
In the previous section on synchronization and error correction, we placed particular emphasis on generality so as not to unnecessarily limit the potential of our framework.
For this reason, our focus has been on the minimum distance.
However, it would also be of importance to investigate various other aspects of our scheme by assuming a particular context.
In fact, PPM and its variants have extensively been studied with various applications in mind
(see, for example, \cite{CS,ZC} for use in ultra-wideband communications,
\cite{WBCB} for free-space optics scenarios, and \cite{Ohtsuki} for the purpose of atmospheric optical code-division multiple-access).
Among many possible directions of more focused research, it would be of particular interest
to analyze the efficiency in detail and more accurately estimate the synchronization error rate and bit error rate over a reasonably realistic channel.
If one wishes to extract finer structural information than minimum distance for a detailed analysis in a specific context,
the algebraic properties of DSSs and optical orthogonal codes may be effectively exploited.
For instance, it is straightforward to see that if we employ an optical orthogonal code $\mathcal{C}$ of length $v$, weight $k$, and index $1$
to form the binary constant-weight code $\mathcal{D}$ for modulation by joining cyclic shifts,
for any codeword $\boldsymbol{d} \in \mathcal{D}$ the number $n_{\boldsymbol{d}}$
of codewords of Hamming distance $2(k-1)$ from $\boldsymbol{d}$, which are the nearest,
and the number $f_{\boldsymbol{d}}$ of other codewords except $\boldsymbol{d}$, which are all of Hamming distance $2k$ from $\boldsymbol{d}$, are
\[n_{\boldsymbol{d}} = k^2\vert\mathcal{C}\vert - k\]
and
\begin{align*}
f_{\boldsymbol{d}} &= v\vert\mathcal{C}\vert - 1 - n_{\boldsymbol{d}}\\
&= (v-k^2)\vert\mathcal{C}\vert+k-1
\end{align*}
respectively.
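These counts are easy to check programmatically. The sketch below (with arbitrary, purely illustrative parameters) also verifies that $n_{\boldsymbol{d}}$, $f_{\boldsymbol{d}}$, and $\boldsymbol{d}$ itself together account for all $v\vert\mathcal{C}\vert$ codewords of $\mathcal{D}$:

```python
def nearest_counts(v, k, code_size):
    """Counts from the text: n_d codewords at Hamming distance 2(k-1)
    (the nearest) and f_d at distance 2k from a fixed codeword d."""
    n_d = k * k * code_size - k
    f_d = (v - k * k) * code_size + k - 1
    # Together with d itself, these account for every codeword of D.
    assert n_d + f_d + 1 == v * code_size
    return n_d, f_d
```

For example, `nearest_counts(64, 4, 3)` returns `(44, 147)`.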
As an example of using structural information to obtain a context-specific performance estimate,
assume that pulses are transmitted through a typical free-space optical link
approximated by the AWGN channel with power spectral density, say, $\frac{N_0}{2}$.
Supposing that the decoder is optimal
and that all codewords of a $(v,k,\lambda)$ optical orthogonal code $\mathcal{C}$ and their cyclic shifts are used for modulation,
the symbol error probability $P_s$ of our generalized PPM scheme $\mathcal{D}$ can be estimated by the union bound
\begin{align*}
P_s &\leq \frac{1}{2M}\sum_{\substack{\boldsymbol{d}, \boldsymbol{d}'\in\mathcal{D}\\ \boldsymbol{d}\not=\boldsymbol{d}'}}
\operatorname{erfc}\left(\sqrt{\frac{\gamma\operatorname{wt}(\boldsymbol{d}\oplus\boldsymbol{d}')\log{M}}{2Q}}\right)\\
&=\frac{k^2\vert \mathcal{C} \vert-k}{2}\operatorname{erfc}\left(\sqrt{\frac{\gamma(k-1)\log{(v\vert\mathcal{C}\vert)}}{v}}\right)\\
&\quad + \frac{(v-k^2)\vert\mathcal{C}\vert+k-1}{2}\operatorname{erfc}\left(\sqrt{\frac{\gamma k\log{(v\vert\mathcal{C}\vert)}}{v}}\right),
\end{align*}
where $\operatorname{erfc}$ is the complementary error function, $\oplus$ is the bitwise sum modulo $2$, and
$\gamma = \frac{\rho^2P_0^2}{N_0R_b}$ is the signal-to-noise ratio with $\rho$, $P_0$, and $R_b$ being
the photodetector responsivity, peak optical power, and bit-rate respectively (see \cite{Guimaraes}).
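The final form of the bound is straightforward to evaluate numerically. The sketch below is a minimal implementation under two stated assumptions: the logarithm is taken as natural, and the number of slots $Q$ is taken to be $v$, as in the displayed expression; all parameter values in the example are illustrative, not paper-specific.

```python
from math import erfc, log, sqrt

def symbol_error_union_bound(v, k, code_size, gamma):
    """Union bound on P_s using the two distance classes 2(k-1) and 2k."""
    M = v * code_size                          # number of modulation symbols
    n_d = k * k * code_size - k                # nearest codewords
    f_d = (v - k * k) * code_size + k - 1      # all remaining codewords
    near = (n_d / 2) * erfc(sqrt(gamma * (k - 1) * log(M) / v))
    far = (f_d / 2) * erfc(sqrt(gamma * k * log(M) / v))
    return near + far
```

As expected, the bound decreases monotonically in the signal-to-noise ratio $\gamma$.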
Fig.\ \ref{figure} compares the performance of our $16$-ary error-tolerant PPM based on an $(8,4,1)$ optimal optical orthogonal code with PPM and EPPM.
\begin{figure}
\centering
\includegraphics[width=3.3in]{Fig1}
\caption{Estimated symbol error rates of $16$-ary modulation.
PPM, generalized EPPM based on an $(8,4,1)$ optical orthogonal code, and EPPM based on a Paley-type difference set are compared
by the union bound at high SNR.}
\label{figure}
\end{figure}
As expected from their minimum distances, our scheme offers good error tolerance while requiring only a modest amount of energy per symbol.
One may likewise analyze the properties and performance in more detail for a very specific channel.
Another aspect we did not address is the relation of our version of expurgated PPM to error-correcting codes at a higher level.
If we view PPM as a binary constant-weight code, error correction at a higher level is, in one sense, a form of code concatenation.
Various types of code, and their uses in such concatenations, have been proposed
(see, for instance, \cite{McEliece,PB,DVN,TGALF,NL}).
It would be of interest and importance to understand how best to exploit our scheme
along with an error-correcting code at a higher level in a particular communications system.
\section*{Acknowledgment}
Y.F. thanks the three anonymous reviewers
and Associate Editor Robert Fischer for careful reading of the manuscript and constructive suggestions.
Q: What should the relation between two classes be? I have two classes, namely:
--------------      -------------------
   class A               class B
--------------      -------------------
                         int c
--------------      -------------------
Class A is responsible for taking input from the user, and class B is responsible for storing the input taken by class A.
What should the relation be between them?
There are direct relations between them:
*
*A function of class A takes input, and this input is then stored directly in class B.
*One of class A's functions is a friend of class B.
A: There are three possibilities:
*
*A could access B (perhaps via an interface) to store the data it produces;
*B could access A (perhaps via an interface) to fetch the data it stores;
*They could be unrelated, with higher-level business logic fetching data from A and storing it in B.
The third would be my preference, since it makes the objects self-contained and easier to test in isolation, and more flexible since they are not constrained to act together in a particular way.
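To illustrate the third option, here is a minimal Python sketch (all names are placeholders): the reader and the store know nothing about each other, and a thin layer of higher-level logic wires them together.

```python
class InputReader:
    """Plays the role of class A: only knows how to obtain input."""
    def __init__(self, source=input):
        self._source = source        # injectable, which also eases testing

    def read(self):
        return self._source("Enter a value: ")


class Store:
    """Plays the role of class B: only knows how to keep values."""
    def __init__(self):
        self.items = []

    def save(self, value):
        self.items.append(value)


def run_once(reader, store):
    """Higher-level logic: neither class references the other."""
    store.save(reader.read())
```

Because the coupling lives in `run_once`, either class can be swapped out or unit-tested in isolation.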
A: With the limited info provided, I am assuming a scenario here:
Since you want class A to store/set some data residing inside an object of class B, class A would probably need to use a setter method from class B. This is a 'uses' relationship and could be categorized as an association relationship.
If class A is also responsible for creating instances of class B, then the relationship would be aggregation.
\subsection{Proof of Correctness of Top-$K$ Max-Product}
We now consider the top-$K$ max-product algorithm, shown in full generality
in Algo.~\ref{algo:dp:topK:main}. The following proposition proves its correctness.
\begin{proposition} \label{eq:prop:dp:topK_guarantee}
Consider as inputs to Algo.~\ref{algo:dp:topK:main} an augmented score function $\psi(\cdot, \cdot ; \wv)$ defined on
tree structured graph $\mcG$, and an integer $K > 0$. Then, the outputs
of Algo.~\ref{algo:dp:topK:main} satisfy $\psi\pow{k} = \psi(\yv\pow{k}) = \maxK{k}{\yv \in \mcY} \psi(\yv)$.
Moreover, Algo.~\ref{algo:dp:topK:main} runs in time $\bigO(pK\log K \max_{v\in\mcV} \abs{\mcY_v}^2)$
and uses space $\bigO(p K \max_{v\in\mcV} \abs{\mcY_v})$.
\end{proposition}
\begin{proof}
For a node $v \in \mcV$, let $\tau(v)$ denote the sub-tree of $\mcG$ rooted at $v$. Let $\yv_{\tau(v)}$ denote
$\big( y_{v'} \text{ for } v' \in \tau(v) \big)$. Define $\psi_{\tau(v)}$ as follows:
if $v$ is a leaf, $\yv_{\tau(v)} = (y_v)$ and $\psi_{\tau(v)}(\yv_{\tau(v)}) := \psi_v(y_v)$.
For a non-leaf $v$, define recursively
\begin{align} \label{eq:dp:topk_proof:phi_subtree}
\psi_{\tau(v)}(\yv_{\tau(v)}) := \psi_v(y_v) + \sum_{v' \in C(v)} \left[ \psi_{v, v'}(y_v, y_{v'})
+ \psi_{\tau(v')}(\yv_{\tau(v')}) \right] \,.
\end{align}
We will need some identities about choosing the $k$th largest element from a finite collection.
For finite sets $S_1, \cdots, S_n$ and
functions $f_j: S_j \to \reals$, $h: S_1 \times S_2 \to \reals$, we have,
\begin{gather}
\label{eq:dp:topk_proof:bellman1}
\maxK{k}{u_1 \in S_1, \cdots, u_n \in S_n} \left\{ \sum_{j=1}^n f_j(u_j) \right\}
= \quad \maxK{k}{l_1, \cdots, l_n \in [k]} \left\{
\sum_{j=1}^n \maxK{l_j}{u_j \in S_j} f_j(u_j)
\right\} \,, \\
\label{eq:dp:topk_proof:bellman2}
\maxK{k}{u_1 \in S_1, u_2 \in S_2} \{ f_1(u_1) + h(u_1, u_2) \}
= \quad \maxK{k}{u_1 \in S_1, l \in [k]} \left\{
f_1(u_1) + \maxK{l}{u_2 \in S_2} h(u_1, u_2)
\right\} \,.
\end{gather}
The identities above state that for a sum to take its $k$th largest value,
each component of the sum must take one of its $k$ largest values. Indeed, if one of the components
of the sum took its $l$th largest value for $l > k$, replacing it with any of the $k$ largest values cannot
decrease the value of the sum.
Eq.~\eqref{eq:dp:topk_proof:bellman2} is a generalized version of Bellman's principle of
optimality (see \citet[Chap. III.3.]{bellman1957dynamic} or \citet[Vol. I, Chap. 1]{bertsekas1995dynamic}).
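The identity \eqref{eq:dp:topk_proof:bellman1} can be checked numerically for $n = 2$; the score functions and sets below are arbitrary choices for illustration:

```python
import heapq

def kth_largest(values, k):
    """k-th largest element (counted with multiplicity)."""
    return heapq.nlargest(k, values)[k - 1]

def f1(u):                      # arbitrary illustrative score functions
    return 3 * u

def f2(u):
    return u * u - 2 * u

S1, S2, k = [0, 1, 2], [0, 1, 2, 3], 3

# Left-hand side: k-th largest of f1(u1) + f2(u2) over the full product space.
lhs = kth_largest([f1(a) + f2(b) for a in S1 for b in S2], k)

# Right-hand side: restrict each component to its k largest values first.
top1 = heapq.nlargest(k, [f1(a) for a in S1])
top2 = heapq.nlargest(k, [f2(b) for b in S2])
rhs = kth_largest([x + y for x in top1 for y in top2], k)

assert lhs == rhs
```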
For the rest of the proof, $\yv_{\tau(v)} \backslash y_v$ is used as
shorthand for $\{y_{v'} \, | \, v' \in \tau(v) \backslash \{v\} \}$.
Moreover, $\max_{\yv_{\tau(v)}}$ represents maximization over
$\yv_{\tau(v)} \in \bigtimes_{v' \in \tau(v)} \mcY_{v'}$. Likewise for $\max_{\yv_{\tau(v)} \backslash y_v}$.
Now, we shall show by induction that for all $v \in \mcV$, $y_v \in \mcY_v$ and $k = 1,\cdots, K$,
\begin{align} \label{eq:dp:topk_proof:ind_hyp}
\maxK{k}{\yv_{\tau(v)}\backslash y_v} \psi_{\tau(v)}(\yv_{\tau(v)}) = \psi_v(y_v) +
\maxK{k}{} \bigg\{ \sum_{v' \in C(v)} m_{v'}\pow{l_{v'}}(y_{v'})
\bigg| l_{v'} \in [K] \text{ for } v' \in C(v)
\bigg\} \,.
\end{align}
The induction is based on the height of a node. The statement is clearly true for a leaf $v$ since $C(v) = \varnothing$.
Suppose \eqref{eq:dp:topk_proof:ind_hyp} holds for all nodes of height $\le h$. For
a node $v$ of height $h+1$, we observe that $\tau(v) \backslash v$ can be partitioned
into $\{\tau(v') \text{ for } v' \in C(v)\}$ to get,
\begin{gather} \nonumber
\maxK{k}{\yv_{\tau(v)}\backslash y_v} \psi_{\tau(v)}(\yv_{\tau(v)})
- \psi_v(y_v)
\stackrel{\eqref{eq:dp:topk_proof:phi_subtree}}{=} \maxK{k}{\yv_{\tau(v)}\backslash y_v}
\bigg\{ \sum_{v' \in C(v)} \psi_{v, v'}(y_v, y_{v'}) + \psi_{\tau(v')}(\yv_{\tau(v')}) \bigg\} \\
\label{eq:eq:topk_proof:ind_hyp_todo}
\stackrel{\eqref{eq:dp:topk_proof:bellman1}}{=}
\maxK{k}{} \bigg\{
\sum_{v' \in C(v)}
\underbrace{
\maxK{l_{v'}}{\yv_{\tau(v')}}
\{ \psi_{v, v'}(y_v, y_{v'}) + \psi_{\tau(v')}(\yv_{\tau(v')}) \} }_{=:\mcT_{v'}(y_v)}
\, \bigg| \,
l_{v'} \in [K] \text{ for } v' \in C(v)
\bigg\} \,.
\end{gather}
Let us analyze the term in the underbrace, $\mcT_{v'}(y_v)$. We successively deduce,
with the argument $l$ in the maximization below taking values in $\{1, \cdots, K\}$,
\begin{align*}
\mcT_{v'}(y_v)
&\stackrel{\eqref{eq:dp:topk_proof:bellman2}}{=}
\maxK{l_{v'}}{y_{v'}, l} \bigg\{
\psi_{v, v'}(y_v, y_{v'}) + \maxK{l}{\yv_{\tau(v')}\backslash y_{v'}} \psi_{\tau(v')}(\yv_{\tau(v')})
\bigg\} \\
&\stackrel{\eqref{eq:dp:topk_proof:ind_hyp}}{=}
\maxK{l_{v'}}{y_{v'}, l} \bigg\{
\begin{matrix}
\psi_{v'}(y_{v'}) + \psi_{v, v'}(y_v, y_{v'}) + \\
\maxK{l}{} \big\{ \sum_{v'' \in C(v')} m_{v''}\pow{l_{v''}}(y_{v'})
\, | \, l_{v''} \in [K] \text{ for } v'' \in C(v')
\big\}
\end{matrix}
\bigg\} \\
&\stackrel{\eqref{eq:dp:topk_proof:bellman2}}{=}
\maxK{l_{v'}}{} \bigg\{
\begin{matrix}
\psi_{v'}(y_{v'}) + \psi_{v', v}(y_{v'}, y_v) \\
+ \sum_{v'' \in C(v')} m\pow{l_{v''}}_{v''}(y_{v'})
\end{matrix} \,
\bigg| \,
\begin{matrix}
y_{v'} \in \mcY_{v'} \text{ and } \\ l_{v''} \in [K] \text{ for } v'' \in C(v')
\end{matrix}
\bigg\} \\
&\stackrel{\eqref{eq:dp:topk:algo:update}}{=}
m\pow{l_{v'}}_{v'}(y_v) \, .
\end{align*}
Here, the penultimate step followed from applying the identity \eqref{eq:dp:topk_proof:bellman2} in reverse,
with $u_1, u_2$ given by $y_{v'}, \{ l_{v''} \text{ for } v'' \in C(v')\}$ respectively,
and with $f_1$ and $h$ given respectively by $ \psi_{v'}(y_{v'}) + \psi_{v', v}(y_{v'}, y_v)$
and $ \sum_{v''} m\pow{l_{v''}}_{v''}(y_{v'})$.
Plugging this into \eqref{eq:eq:topk_proof:ind_hyp_todo} completes the induction argument.
To complete the proof, we repeat the same argument over the root as follows. We note that
$\tau(r)$ is the entire tree $\mcG$. Therefore, $\yv_{\tau(r)} = \yv$ and $\psi_{\tau(r)} = \psi$.
We now apply the identity \eqref{eq:dp:topk_proof:bellman2} with $u_1$ and $u_2$ being
$y_r$ and $\yv_{\tau(r) \backslash r}$ respectively and $f_1 \equiv 0$ to get
\begin{align*}
\maxK{k}{\yv \in \mcY} \psi(\yv)
& \stackrel{\eqref{eq:dp:topk_proof:bellman2}}{=}
\maxK{k}{y_r, l} \left\{ \maxK{l}{\yv\backslash y_r} \psi(\yv) \right\}
= \maxK{k}{y_r, l} \left\{ \maxK{l}{\yv_{\tau(r)} \backslash y_r} \psi_{\tau(r)}(\yv_{\tau(r)}) \right\} \\
&\stackrel{\eqref{eq:dp:topk_proof:ind_hyp}}{=}
\maxK{k}{y_r, l} \bigg\{
\begin{matrix}
\psi_{r}(y_{r}) +
\maxK{l}{} \big\{ \sum_{v \in C(r)} m_{v}\pow{l_{v}}(y_r)
\, | \, l_{v} \in [K] \text{ for } v \in C(r)
\big\}
\end{matrix}
\bigg\} \\
&\stackrel{\eqref{eq:dp:topk_proof:bellman2}}{=}
\maxK{k}{} \bigg\{
\begin{matrix}
\psi_r(y_r) + \\
\sum_{v \in C(r)} m\pow{l_{v}}_{v}(y_r)
\end{matrix} \,
\bigg| \,
\begin{matrix}
y_{r} \in \mcY_{r} \text{ and } \\ l_{v} \in [K] \text{ for } v \in C(r)
\end{matrix}
\bigg\} \\
&\,= \psi\pow{k} \,,
\end{align*}
where the last equality follows from Line~\ref{line:algo:dp:topk:final_score_k} of Algo.~\ref{algo:dp:topK:main}.
The algorithm requires storage of $m_v\pow{k}$, an array of size $\max_{v \in \mcV} \abs{\mcY_v}$
for each $k = 1,\cdots, K$, and $v \in \mcV$. The backpointers $\delta, \kappa$ are of the same size.
This adds up to a total storage of $\bigO(pK \max_{v} \abs{\mcY_v})$.
To bound the running time, consider Line~\ref{line:algo:dp:topk:message_k} of Algo.~\ref{algo:dp:topK:main}.
For a fixed $v' \in C(v)$, the computation
\begin{align*}
\maxK{k}{y_v, l_{v'} } \left\{
\psi_v(y_v) + \psi_{v, \rho(v)}(y_v, y_{\rho(v)})
+ m_{v'}^{(l_{v'})}(y_v)
\right\}
\end{align*}
for $k= 1, \cdots, K$ takes time $\bigO(K \log K \max_v \abs{\mcY_v})$.
This operation is repeated for each $y_v \in \mcY_v$ and once for every $(v, v')\in \mcE$. Since
$\abs{\mcE} = p-1$, the total running time is $\bigO(p K \log K \max_v \abs{\mcY_v}^2)$.
\end{proof}
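The pruning principle underlying Algo.~\ref{algo:dp:topK:main} can be checked against brute force on a toy two-node model (all scores below are arbitrary illustrative values): keeping only the $K$ best messages per parent label preserves the $K$ best total scores.

```python
import heapq

# Toy two-node model; all scores are arbitrary illustrative values.
labels, K = [0, 1, 2, 3], 3
psi0 = {y: 0.3 * y for y in labels}
psi1 = {y: -0.1 * y * y for y in labels}
psi01 = {(a, b): 0.2 * a * b - 0.15 * (a - b) ** 2
         for a in labels for b in labels}

# Brute force: the K best total scores over all label pairs.
brute = heapq.nlargest(
    K, (psi0[a] + psi01[(a, b)] + psi1[b] for a in labels for b in labels))

# Message passing: per parent label a, keep only the K best messages over b;
# the K best totals survive this pruning (the Bellman-style argument above).
msgs = {a: heapq.nlargest(K, (psi01[(a, b)] + psi1[b] for b in labels))
        for a in labels}
mp = heapq.nlargest(K, (psi0[a] + m for a in labels for m in msgs[a]))

assert all(abs(x - y) < 1e-9 for x, y in zip(brute, mp))
```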
\subsection{Proof of Correctness of Entropy Smoothing of Max-Product}
Next, we consider entropy smoothing.
\begin{proposition}\label{eq:prop:dp:ent_guarantee}
Given an augmented score function $\psi(\cdot, \cdot ; \wv)$ defined on
tree structured graph $\mcG$ and $\mu > 0$
as input, Algo.~\ref{algo:dp:supp_exp} correctly computes
$f_{-\mu H}(\wv)$
and $\grad f_{-\mu H}(\wv)$.
Furthermore, Algo.~\ref{algo:dp:supp_exp} runs in time $\bigO(p \max_{v\in\mcV} \abs{\mcY_v}^2)$
and requires space $\bigO(p \max_{v\in\mcV} \abs{\mcY_v})$.
\end{proposition}
\begin{proof}
The correctness of the function value $f_{- \mu H}$ follows from the identity
$f_{- \mu H}(\wv) = \mu \, A_{\psi/\mu}(\wv)$ (cf. Prop.~\ref{prop:smoothing:exp-crf}),
where Thm.~\ref{thm:pgm:sum-product} establishes the correctness of $A_{\psi / \mu}$.
To show the correctness of the gradient, let $P_{\psi, \mu}$ be
the probability distribution from Lemma~\ref{lemma:smoothing:first-order-oracle}\ref{lem:foo:exp},
and let $P_{\psi, \mu, v}, P_{\psi, \mu, v, v'}$ be its node and edge marginal probabilities respectively, i.e.,
\begin{align*}
P_{\psi, \mu}(\yv ; \wv)
&= \frac{
\exp\left(\tfrac{1}{\mu}\psi(\yv ; \wv)\right)}
{\sum_{\yv' \in \mcY }\exp\left(\tfrac{1}{\mu}\psi(\yv' ; \wv)\right)} \,, \\
P_{\psi, \mu, v}(\overline y_v ; \wv)
&= \sum_{\substack{ \yv \in \mcY \, : \\ y_v = \overline y_v} }P_{\psi, \mu}(\yv ; \wv) \quad
\text{for } \overline y_v \in \mcY_v, v \in \mcV\,, \text{ and, } \\
P_{\psi, \mu, v, v'}(\overline y_v, \overline y_{v'} ; \wv)
&= \sum_{\substack{ \yv \in \mcY : \\ y_v = \overline y_v, \\ y_{v'} = \overline y_{v'} } }P_{\psi, \mu}(\yv ; \wv) \quad \text{for } \overline y_v \in \mcY_v, \overline y_{v'} \in \mcY_{v'}, (v,v') \in \mcE \,.
\end{align*}
Thm.~\ref{thm:pgm:sum-product} again shows that Algo.~\ref{algo:dp:supp_sum-prod} correctly produces
marginals $P_{\psi, \mu, v}$ and $P_{\psi, \mu, v, v'}$.
We now start with Lemma~\ref{lemma:smoothing:first-order-oracle}\ref{lem:foo:exp} and invoke \eqref{eq:smoothing:aug_score_decomp}
to get
\begin{align*}
\grad f_{-\mu H}(\wv) =& \sum_{\yv \in \mcY} P_{\psi, \mu}(\yv ; \wv) \grad \psi(\yv ; \wv) \\
{=}& \sum_{\yv \in \mcY} P_{\psi, \mu}(\yv ; \wv)
\left(
\sum_{v\in \mcV} \grad \psi_v(y_v ; \wv)
+ \sum_{(v, v')\in \mcE} \grad \psi_{v, v'}(y_v, y_{v'} ; \wv)
\right) \,, \\
=& \sum_{v \in \mcV}
\sum_{\yv \in \mcY} P_{\psi, \mu}(\yv ; \wv) \grad \psi_v( y_v ; \wv)
+
\sum_{(v,v') \in \mcE} \sum_{\yv \in \mcY} P_{\psi, \mu}(\yv ; \wv) \grad \psi_{v, v'}(y_v, y_{v'} ; \wv) \\
=& \sum_{v \in \mcV} \sum_{\overline y_v \in \mcY_v}
\sum_{\yv \in \mcY \, : \,y_v = \overline y_v} P_{\psi, \mu}(\yv ; \wv) \grad \psi_v(\overline y_v ; \wv)
\\ &\qquad+
\sum_{(v,v') \in \mcE} \sum_{\overline y_v \in \mcY_v} \sum_{\overline y_{v'} \in \mcY_{v'}}
\sum_{\yv \in \mcY \, :\, \substack{ y_v = \overline y_v \\ y_{v'} = \overline y_{v'} } }
P_{\psi, \mu}(\yv ; \wv) \grad \psi_{v, v'}(\overline y_v, \overline y_{v'} ; \wv) \\
=& \sum_{v \in \mcV} \sum_{\overline y_v \in \mcY_v} P_{\psi, \mu, v}(\overline y_v ; \wv)
\grad \psi_{v}(\overline y_v ; \wv)
\\ &\qquad+
\sum_{(v,v') \in \mcE} \sum_{\overline y_v \in \mcY_v} \sum_{\overline y_{v'} \in \mcY_{v'}}
P_{\psi, \mu, v, v'}(\overline y_v, \overline y_{v'} ; \wv)
\grad \psi_{v, v'}(\overline y_v, \overline y_{v'} ; \wv) \,.
\end{align*}
Here, the penultimate equality followed from breaking the sum over $\yv \in \mcY$ into an outer sum that sums over every
$\overline y_v \in \mcY_v$ and an inner sum over $\yv \in \mcY : y_v = \overline y_v$, and likewise for the edges.
The last equality above followed from the definitions of the marginals.
Therefore, Line~\ref{line:algo:dp:exp:gradient} of Algo.~\ref{algo:dp:supp_exp} correctly computes the gradient.
The storage complexity of the algorithm is $\bigO(p \max_v \abs{\mcY_v})$ provided that the edge marginals $P_{\psi, \mu, v, v'}$
are computed on the fly as needed. The time overhead of Algo.~\ref{algo:dp:supp_exp} after Algo.~\ref{algo:dp:supp_sum-prod}
is $\bigO(p \max_v \abs{\mcY_v}^2)$, by noting that each edge marginal can be computed in constant time
(Remark~\ref{remark:pgm:sum-prod:fast-impl}).
\end{proof}
\begin{algorithm}[ptbh]
\caption{Sum-product algorithm}
\label{algo:dp:supp_sum-prod}
\begin{algorithmic}[1]
\STATE {\bfseries Procedure:} \textsc{SumProduct}
\STATE {\bfseries Input:} Augmented score function $\psi$ defined on
tree structured graph $\mcG$ with root $r \in \mcV$.
\STATE {\bfseries Notation:} Let $N(v) = C(v) \cup \{\rho(v)\}$ denote all the neighbors of
$v \in \mcV$ if the orientation of the edges were ignored.
\STATE {\bfseries Initialize:}
Let $V$ be a list of nodes from $\mcV$ arranged in increasing order of height.
\FOR{$v$ in $V \backslash \{r\}$}
\STATE Set for each $y_{\rho(v)} \in \mcY_{\rho(v)}$: \label{line:dp:exp_dp:update}
\[
m_{v \to \rho(v)}(y_{\rho(v)}) \leftarrow \sum_{y_v \in \mcY_v} \left[ \exp \left(
\psi_v(y_v) + \psi_{v, \rho(v)}(y_v, y_{\rho(v)}) \right) \prod_{v' \in C(v)} m_{v' \to v}(y_v) \right]
\,.
\]
\ENDFOR
\STATE $A \leftarrow \log \sum_{y_r \in \mcY_r} \left[ \exp\left(
\psi_r(y_r) \right) \prod_{v' \in C(r)} m_{v' \to r}(y_r) \right] $.
\label{line:dp:exp_dp:log_part}
\FOR{$v$ in $\mathrm{reverse}(V)$}
\FOR{$v' \in C(v)$}
\STATE Set for each $y_{v'} \in \mcY_{v'}$:
\[
m_{v\to v'}(y_{v'}) = \sum_{y_v \in \mcY_v}\left[
\exp\left(
\psi_v(y_v) + \psi_{v', v}(y_{v'}, y_v)
\right)
\prod_{v'' \in N(v)\backslash\{v'\}} m_{v'' \to v}(y_v)
\right] \,.
\]
\ENDFOR
\ENDFOR
\FOR{ $v$ in $\mcV$}
\STATE Set $P_v(y_v) \leftarrow \exp\left( \psi_v(y_v) - A \right) \prod_{v'' \in N(v)} m_{v''\to v}(y_v)$
for every $y_v \in \mcY_v$.
\ENDFOR
\FOR{$(v, v')$ in $\mcE$}
\STATE For every pair $(y_v, y_{v'}) \in \mcY_v \times \mcY_{v'}$, set \label{line:algo:sum-prod:pair}
\begin{align*}
P_{v, v'}(y_v, y_{v'}) \leftarrow &
\exp\left(\psi_v(y_v) + \psi_{v'}(y_{v'}) + \psi_{v, v'}(y_v, y_{v'}) - A \right) \\&
\prod_{v'' \in N(v) \backslash \{v'\}} m_{v''\to v}(y_v) \prod_{v'' \in N(v') \backslash \{v\}} m_{v''\to v'}(y_{v'}) \,.
\end{align*}
\ENDFOR
\RETURN $A, \{P_v \text{ for } v \in \mcV \}, \{P_{v, v'} \text{ for } (v, v') \in \mcE \}$.
\end{algorithmic}
\end{algorithm}
Given below is the guarantee of the sum-product algorithm (Algo.~\ref{algo:dp:supp_sum-prod}).
See, for instance, \citet[Ch. 10]{koller2009probabilistic} for a proof.
\begin{theorem} \label{thm:pgm:sum-product}
Consider an augmented score function $\psi$ defined over a tree structured graphical model $\mcG$.
Then, the output of Algo.~\ref{algo:dp:supp_sum-prod} satisfies
\begin{align*}
A &= \log \sum_{\yv \in \mcY} \exp(\psi(\yv)) \,, \\
P_v(\overline y_v) &= \sum_{\yv \in \mcY\, :\, y_v = \overline y_v} \exp(\psi(\yv) - A) \,
\quad \text{for all $\overline y_v \in \mcY_v, v \in \mcV$, and, }\, \\
P_{v, v'}(\overline y_v, \overline y_{v'}) &=
\sum_{\yv \in \mcY\, : \, \substack{ y_v = \overline y_v, \\ y_{v'} = \overline y_{v'} } }
\exp(\psi(\yv) - A) \,
\quad \text{for all $\overline y_v \in \mcY_v, \overline y_{v'} \in \mcY_{v'}, (v,v') \in \mcE$.}
\end{align*}
Furthermore, Algo.~\ref{algo:dp:supp_sum-prod} runs in time $\bigO(p \max_{v\in\mcV} \abs{\mcY_v}^2)$
and requires an intermediate storage of $\bigO(p \max_{v\in\mcV} \abs{\mcY_v})$.
\end{theorem}
\begin{remark} \label{remark:pgm:sum-prod:fast-impl}
Line~\ref{line:algo:sum-prod:pair} of Algo.~\ref{algo:dp:supp_sum-prod} can be
implemented in constant time by reusing the node marginals $P_v$ and messages $m_{v \to v'},m_{v' \to v}$ as
\begin{align*}
P_{v, v'}(y_v, y_{v'}) =
\frac{P_v(y_v) P_{v'}(y_{v'}) \exp(\psi_{v, v'}(y_v, y_{v'}) + A)}{m_{v'\to v}(y_v) m_{v\to v'}(y_{v'}) } \,.
\end{align*}
\end{remark}
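As a numerical sanity check of the sum-product recursion, the following sketch builds a hypothetical three-node chain with made-up binary scores and compares the message-passing log-partition against brute-force enumeration:

```python
import itertools
import math

# Hypothetical chain 0 - 1 - 2 with binary labels and made-up scores.
p, labels = 3, [0, 1]
node = {0: [0.5, -0.2], 1: [0.1, 0.3], 2: [-0.4, 0.2]}
edge = {(0, 1): [[0.2, -0.1], [0.0, 0.3]], (1, 2): [[0.1, 0.4], [-0.2, 0.0]]}

def score(y):
    """Augmented score psi(y) decomposed over nodes and edges."""
    return (sum(node[v][y[v]] for v in range(p))
            + sum(edge[(u, w)][y[u]][y[w]] for (u, w) in edge))

# Brute-force log-partition A = log sum_y exp(psi(y)).
A_brute = math.log(sum(math.exp(score(y))
                       for y in itertools.product(labels, repeat=p)))

# Sum-product rooted at node 0: messages flow 2 -> 1 -> 0, each message
# absorbing the child's node score as in the message update above.
m21 = [sum(math.exp(node[2][z] + edge[(1, 2)][y][z]) for z in labels)
       for y in labels]
m10 = [sum(math.exp(node[1][z] + edge[(0, 1)][y][z]) * m21[z] for z in labels)
       for y in labels]
A_mp = math.log(sum(math.exp(node[0][y]) * m10[y] for y in labels))

assert abs(A_brute - A_mp) < 1e-9
```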
\subsection{Review of Best Max-Marginal First} \label{sec:a:bmmf}
If one has access to an algorithm $\mcM$ that can compute max-marginals,
the top-$K$ oracle is easily implemented via the Best Max Marginal First (BMMF) algorithm of \citet{yanover2004finding},
which is recalled in Algo.~\ref{algo:top_k_map:general}.
This algorithm requires computations of
two sets of max-marginals
per iteration, where a {\em set} of max-marginals refers to max-marginals for all variables $y_v$ in $\yv$.
\paragraph{Details}
The algorithm runs by maintaining a partitioning of the search space $\mcY$
and a table $\varphi\pow{k}(v, j)$ that stores the best score
in partition $k$ (defined by constraints $\mcC\pow{k}$)
subject to the additional constraint that $y_v = j$.
In iteration $k$, the algorithm looks at the $k-1$ existing partitions and picks the best
partition $s_k$ (Line~\ref{alg:bmmf:best_part}).
This partition is further divided into two parts:
the max-marginals in the promising partition (corresponding to $y_{v_k} = j_k$)
are computed (Line~\ref{alg:bmmf:line:max-marg})
and decoded (Line~\ref{alg:bmmf:line:decoding}) to yield $k$th best scoring $\yv\pow{k}$.
The scores of the less promising partition are updated via a second round of
max-marginal computations (Line~\ref{alg:bmmf:line:update_score}).
\begin{algorithm}[tb]
\caption{Best Max Marginal First (BMMF)}
\label{algo:top_k_map:general}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Augmented score function $\psi$, parameters $\wv$, non-negative integer $K$,
algorithm $\mcM$ to compute max-marginals of $\psi$.
\STATE {\bfseries Initialization:} $\mcC\pow{1} = \varnothing$ and $\mcU\pow{2} = \varnothing$.
\FOR{$v \in [p]$}
\STATE For $j \in \mcY_v$, set
$\varphi\pow{1}(v, j) = \max \{ \psi(\yv ; \wv) \, | \, \yv \in \mcY \text{ s.t. } y_v = j \}$ using $\mcM$.
\STATE Set $y\pow{1}_v = \argmax_{j \in \mcY_v} \varphi\pow{1}(v, j)$.
\ENDFOR
\FOR{$k = 2, \cdots, K$}
\STATE Define search space
$\mcS\pow{k} = \left\{ (v, j, s) \in [p] \times \mcY_v \times [k-1] \, \big| \,
y\pow{s}_v \neq j, \text{ and } (v, j, s) \notin \mcU\pow{k} \right\}$.
\label{alg:bmmf:part_search}
\STATE Find indices $(v_k, j_k, s_k) = \argmax_{(v, j, s) \in \mcS\pow{k}} \varphi\pow{s}(v,j)$
and set constraints
$\mcC\pow{k} = \mcC\pow{s_k} \cup \{ y_{v_k} = j_k \}$. \label{alg:bmmf:best_part}
\FOR{$v\in[p]$}
\STATE For each $j \in \mcY_v$, use $\mcM$ to set $\varphi\pow{k}(v, j) = \max \left\{
\psi(\yv ; \wv) \, | \, \yv \in \mcY \text{ s.t. constraints }
\mcC\pow{k} \text{ hold and } y_v = j \right\}$. \label{alg:bmmf:line:max-marg}
\STATE Set $y\pow{k}_v = \argmax_{j \in \mcY_v} \varphi\pow{k}(v, j)$. \label{alg:bmmf:line:decoding}
\ENDFOR
\STATE Update $\mcU\pow{k+1} = \mcU\pow{k} \cup \left\{ (v_k, j_k, s_k) \right\}$ and
$\mcC\pow{s_k} = \mcC\pow{s_k} \cup \{ y_{v_k} \neq j_k \}$ and the max-marginal table
$\varphi\pow{s_k}(v, j) = \max_{\yv \in \mcY, \mcC\pow{s_k}, y_v = j} \psi(\yv ; \wv)$ using $\mcM$.
\label{alg:bmmf:line:update_score}
\ENDFOR
\RETURN $\left\{ \left(\psi(\yv\pow{k} ; \wv), \yv\pow{k} \right) \right\}_{k=1}^K$.
\end{algorithmic}
\end{algorithm}
\paragraph{Guarantee}
The following theorem shows that Algo.~\ref{algo:top_k_map:general} provably
implements the top-$K$ oracle as long as the max-marginals can be computed exactly
under the assumption of unambiguity. With approximate max-marginals however,
Algo.~\ref{algo:top_k_map:general} comes with no guarantees.
\begin{theorem}[\citet{yanover2004finding}] \label{thm:inference:topKmm}
Suppose the score function $\psi$ is unambiguous, that is,
$\psi(\yv' ; \wv) \neq \psi(\yv'' ;\wv)$ for all distinct $\yv', \yv'' \in \mcY$.
Given an algorithm $\mcM$ that can compute the max-marginals of $\psi$ exactly,
Algo.~\ref{algo:top_k_map:general} makes at most $2K$ calls to $\mcM$ and its output
satisfies $\psi(\yv\pow{k} ; \wv) = \maxK{k}{\yv \in \mcY} \psi(\yv ; \wv)$.
Thus, the BMMF algorithm followed by a projection onto the simplex
(Algo.~\ref{algo:smoothing:top_K_oracle} in Appendix~\ref{sec:a:smoothing})
is a correct implementation of the top-$K$ oracle.
\end{theorem}
\paragraph{Constrained Max-Marginals}
The algorithm requires computation of max-marginals subject to constraints of the form
$y_v \in Y_v$ for some set $Y_v \subseteq \mcY_v$. This is accomplished by
redefining for a constraint $y_v \in Y_v$:
\[
\overline \psi(\yv) =
\begin{cases}
\psi(\yv), \, \text{ if } y_v \in Y_v \\
-\infty, \, \text{ otherwise}
\end{cases} \,.
\]
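A minimal sketch of this masking construction on a hypothetical two-variable score table (all values below are made up for illustration):

```python
import math

# Hypothetical two-variable model with labels {0, 1, 2}; score given as a table.
scores = {(a, b): 0.5 * a - (a - b) ** 2 for a in range(3) for b in range(3)}

def constrained_max_marginal(scores, var, allowed):
    """Max-marginal table for y_0 under the constraint y_var in allowed,
    implemented by sending disallowed configurations to -infinity."""
    masked = {y: (s if y[var] in allowed else -math.inf)
              for y, s in scores.items()}
    return {j: max(s for y, s in masked.items() if y[0] == j) for j in range(3)}

mm = constrained_max_marginal(scores, var=1, allowed={2})
```

Here `mm` equals `{0: -4.0, 1: -0.5, 2: 1.0}`: the disallowed configurations never win the maximization, so the constraint is enforced without changing the inference routine.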
\subsection{Max-Marginals Using Graph Cuts} \label{sec:a:graph_cuts}
\begin{algorithm}[tb]
\caption{Max-marginal computation via Graph Cuts}
\label{algo:top_k_map:graph_cuts}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Augmented score function $\psi(\cdot, \cdot ; \wv)$ with $\mcY = \{0, 1\}^p$,
constraints $\mcC$ of the form $y_v = b$ for $b \in \{0,1\}$.
\STATE Using artificial source $s$ and sink $t$, set $V' = \mcV \cup \{s, t\}$ and $E' = \varnothing$.
\FOR{$v \in [p]$}
\STATE Add to $E'$ the (edge, cost) pairs $(s \to y_v, \theta_{v; 0})$ and
$(y_v \to t, \theta_{v; 1})$.
\ENDFOR
\FOR{$(v, v') \in \mcE$ with $v < v'$}
\STATE Add to $E'$ the (edge, cost) pairs
$(s \to y_v , \theta_{vv'; 00})$,
$(y_{v'} \to t , \theta_{vv'; 11})$,
$(y_v \to y_{v'} , \theta_{vv'; 10})$,
$(y_{v'} \to y_v , \theta_{vv'; 01} - \theta_{vv' ;00} - \theta_{vv' ; 11})$.
\ENDFOR
\FOR{constraint $y_v = b$ in $\mcC$}
\STATE Add to $E'$ the edge $y_v \to t$ if $b=0$ or edge $s \to y_v$ if $b=1$ with cost $+\infty$.
\label{line:algo:mincut:constr}
\ENDFOR
\STATE Create graph $G'=(V', E')$, where parallel edges are merged by adding weights.
\STATE Compute minimum cost $s, t$-cut of $G'$. Let $C$ be its cost.
\STATE Create $\widehat \yv \in \{0, 1\}^p$ as follows: for each $v \in \mcV$,
set $\widehat y_v = 0$ if the edge $s\to v$ is cut.
Else $\widehat y_v = 1$.
\RETURN $-C, \widehat \yv$.
\end{algorithmic}
\end{algorithm}
This section recalls a simple procedure to compute max-marginals using graph cuts.
Such a construction was used, for instance, by \citet{kolmogorov2004energy}.
\paragraph{Notation}
In the literature on graph cut inference, it is customary to work with the energy function, which is defined as
the negative of the augmented score $-\psi$.
For this section, we also assume that the labels are binary, i.e., $\mcY_v = \{0,1\}$ for each $v\in[p]$.
Recall the decomposition~\eqref{eq:smoothing:aug_score_decomp} of the augmented score function over nodes and edges.
Define a reparameterization
\begin{gather*}
\theta_{v;z}(\wv) = -\psi_v(z ; \wv) \quad \text{for } v\in \mcV,\ z \in \{0,1\} \,, \\
\theta_{vv';zz'}(\wv) = -\psi_{v,v'}(z, z' ;\wv) \quad \text{for } (v, v')\in \mcE,\ (z,z') \in \{0,1\}^2\,.
\end{gather*}
We then get
\begin{gather*}
-\psi(\yv) = \sum_{v=1}^p \sum_{z \in \{0,1\}} \theta_{v; z} \ind(y_v=z)
+ \sum_{v=1}^p \sum_{v'=v+1}^p \sum_{z, z' \in \{0,1\}} \theta_{vv'; z z'} \ind(y_v=z) \ind(y_{v'}=z') \ind((v,v') \in \mcE) \,,
\end{gather*}
where we dropped the dependence on $\wv$ for simplicity.
We require the energies to be submodular, i.e., for every $v, v' \in [p]$, we have that
\begin{align} \label{eq:top_k_map:submodular}
\theta_{vv' ; 00} + \theta_{vv' ; 11} \le \theta_{vv' ; 01} + \theta_{vv' ; 10} \,.
\end{align}
Also, assume without loss of generality that $\theta_{v;z}, \theta_{vv';zz'}$ are non-negative
\citep{kolmogorov2004energy}.
\paragraph{Algorithm and Correctness}
Algo.~\ref{algo:top_k_map:graph_cuts} shows how to compute the max-marginal relative
to a single variable $y_v$. The next theorem shows its correctness.
\begin{theorem}[\citet{kolmogorov2004energy}] \label{thm:top_k_map:graph_cuts}
Given a binary pairwise graphical model with augmented score function $\psi$
which satisfies \eqref{eq:top_k_map:submodular},
and a set of constraints $\mcC$, Algo.~\ref{algo:top_k_map:graph_cuts}
returns $\max_{\yv \in \mcY_\mcC} \psi(\yv ; \wv)$, where $\mcY_\mcC$ denotes the
subset of $\mcY$ that satisfies constraints $\mcC$.
Moreover, Algo.~\ref{algo:top_k_map:graph_cuts} requires one maximum flow computation.
\end{theorem}
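The construction can be sanity-checked on a tiny instance by enumerating all $s,t$-cuts and comparing against the minimum energy found by brute force. All numeric values below are made up, and on realistic instances one would of course run a max-flow solver instead of enumerating cuts:

```python
import itertools

# Two-node toy instance: theta[v][z] are unary energies, pair[(z, z')] the
# pairwise energies of the single edge (1, 2); all values hypothetical.
theta = {1: {0: 1.0, 1: 2.0}, 2: {0: 3.0, 1: 1.0}}
pair = {(0, 0): 0.0, (1, 1): 0.0, (0, 1): 2.0, (1, 0): 1.0}
assert pair[(0, 0)] + pair[(1, 1)] <= pair[(0, 1)] + pair[(1, 0)]  # submodular

# Build the cut graph of the algorithm (parallel edges merged by adding weights).
edges = {}
def add_edge(u, w, c):
    edges[(u, w)] = edges.get((u, w), 0.0) + c

for v in (1, 2):
    add_edge('s', v, theta[v][0])
    add_edge(v, 't', theta[v][1])
add_edge('s', 1, pair[(0, 0)])
add_edge(2, 't', pair[(1, 1)])
add_edge(1, 2, pair[(1, 0)])
add_edge(2, 1, pair[(0, 1)] - pair[(0, 0)] - pair[(1, 1)])

def cut_cost(S):
    """Cost of the s-t cut whose source side is S (with s in S, t outside)."""
    return sum(c for (u, w), c in edges.items() if u in S and w not in S)

# On this tiny instance, enumerate all cuts instead of running max-flow.
min_cut = min(cut_cost({'s'} | set(sub))
              for r in range(3) for sub in itertools.combinations((1, 2), r))

def energy(y1, y2):
    return theta[1][y1] + theta[2][y2] + pair[(y1, y2)]

min_energy = min(energy(a, b) for a in (0, 1) for b in (0, 1))
assert abs(min_cut - min_energy) < 1e-9
```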
\iffalse
\begin{proof}
Suppose first that there are no constraints. We shall use the following two facts
about the graph $G'$ in Algo.~\ref{algo:top_k_map:graph_cuts}.
\begin{fact} \label{fact:graphcuts:fact1}
Every label $\yv \in \mcY$ corresponds to a minimal cut in the graph $G'$,
where the energy of the labeling $-\psi(\yv ; \wv)$ equals the
cost of the cut.
\end{fact}
This shows that there exists a cut that corresponds to the minimum energy label
$\yv^* = \argmax_{\yv \in \mcY} \psi(\yv ; \wv)$.
However, the minimum cut might not correspond to any $\yv \in \mcY$.
\begin{fact} \label{fact:graphcuts:fact2}
The minimum cut of graph $G'$ corresponds to a valid labeling $\widehat \yv \in \mcY$.
This labeling can be decoded as follows: $\widehat y_v = 0$ if the edge $s\to v$ is cut.
Else $\widehat y_v = 1$.
\end{fact}
From Facts~\ref{fact:graphcuts:fact1} and~\ref{fact:graphcuts:fact2}, we deduce
that the minimum cut of $G'$ corresponds to the minimum energy assignment $\yv^*$
and that the cost of the cut equals $\psi(\yv^* ; \wv)$.
When we have constraints, note that the infinite weight edges added in
Line~\ref{line:algo:mincut:constr} of Algo.~\ref{algo:top_k_map:graph_cuts} are never cut,
else the cost of the cut would be $+\infty$.
This enforces the constraints, and the cost of the returned cut is therefore
$\max_{\yv \in \mcY_\mcC} \psi(\yv ; \wv)$.
\end{proof}
Let us now prove Facts~\ref{fact:graphcuts:fact1} and~\ref{fact:graphcuts:fact2}.
\begin{proof}[Fact~\ref{fact:graphcuts:fact1}]
Suppose we have labeling $\yv \in \mcY$.
For each $v \in [p]$, cut $s \to y_v$ if $y_v = 0$; else cut $y_v \to t$.
For edges $vv'$, if $y_v = 0$ and $y_{v'} = 1$, also cut the edge $v' \to v$. Else, if
$y_v = 1$ and $y_{v'} = 0$, cut instead the edge $v \to v'$.
Clearly, this is an $s,t$-cut because every path from $s$ to $t$
has a cut edge.
Moreover, the cut is {\em minimal} in that adding any one cut edge back to the graph
results in a valid $s-t$ path.
The cost of this cut $C$ is now, (each summation on $v, v'$ below runs from $1$ to $p$ unless mentioned otherwise)
\begin{align*}
\mathrm{cost}(C) =&
\sum_{v} \theta_{v ; y_v} +
\sum_{v}\sum_{v' > v}\theta_{vv' ; 00} \ind(y_v = 0) +
\sum_{v}\sum_{v' < v} \theta_{v'v ; 11} \ind(y_v = 1) \\ &+
\sum_{v}\sum_{v' > v} \theta_{vv' ; 10} \ind(y_v = 1) \ind(y_{v'} = 0) +
\sum_{v}\sum_{v' > v} \left( \theta_{vv' ; 01} - \theta_{vv' ; 00} - \theta_{vv' ; 11} \right) \ind(y_v = 0) \ind(y_{v'} = 1) \\
=& \sum_{v} \theta_{v ; y_v} +
\sum_{v} \sum_{v' > v} \big[
\theta_{vv' ; 00} \ind(y_v = 0) \left( 1 - \ind(y_{v'} = 1) \right) +
\theta_{vv' ; 11} \ind(y_{v'} = 1) \left( 1 - \ind(y_v = 0) \right) \\ &+
\theta_{vv' ; 10} \ind(y_v = 1) \ind(y_{v'} = 0) +
\theta_{vv' ; 01} \ind(y_v = 0) \ind(y_{v'} = 1)
\big] \\
=& \sum_{v} \theta_{v ; y_v} +
\sum_{v} \sum_{v' > v} \big[
\theta_{vv' ; 00} \ind(y_v = 0) \ind(y_{v'} = 0) +
\theta_{vv' ; 11} \ind(y_{v'} = 1) \ind(y_v = 1) \\ &+
\theta_{vv' ; 10} \ind(y_v = 1) \ind(y_{v'} = 0) +
\theta_{vv' ; 01} \ind(y_v = 0) \ind(y_{v'} = 1)
\big] \\
=& \sum_{v} \theta_{v ; y_v} + \sum_{v} \sum_{v' > v} \theta_{vv' ; y_v y_{v'}}
= - \psi(\yv ; \wv) \,.
\end{align*}
\end{proof}
Next, let us prove Fact~\ref{fact:graphcuts:fact2}.
\begin{proof}[Fact~\ref{fact:graphcuts:fact2}]
Call the minimum cut $C$.
For every vertex $v$, there exists a path $s \to v \to t$. Therefore,
one of $s \to v$ or $v \to t$ must be cut.
Since each $\theta_{v ; z}$ is assumed to be non-negative,
{\em exactly one} of $s \to v$ or $v \to t$ is cut in the minimum-cost cut.
If $s \to v$ is cut, set $\widehat y_v = 0$; else, set $\widehat y_v = 1$. The
labeling $\widehat \yv$ so obtained corresponds to a minimal cut, say $C'$, by Fact~\ref{fact:graphcuts:fact1}.
By construction, $C$ and $C'$ agree on which $s \to v$ and $v \to t$ edges to cut.
Minimality of $C$ and $C'$ ensures that $C = C'$, up to zero-weight edges.
But zero-weight edges do not affect the cost of the cut, and therefore,
$\mathrm{cost}(C) = \mathrm{cost}(C') = -\psi(\widehat \yv ; \wv)$.
Therefore, the min cut $C$ corresponds to the labeling $\widehat \yv$.
\end{proof}
\fi
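The reduction from energy minimization to a minimum cut can be checked numerically. Below is a minimal, illustrative sketch (all names and potentials are hypothetical): it minimizes a small random submodular pairwise binary energy both by brute-force enumeration and by a minimum cut, using the standard decomposition of each pairwise term into unary terms plus a non-negative weight on the configuration $(y_v, y_{v'}) = (0, 1)$. As in the proof above, cutting $s \to v$ corresponds to $y_v = 0$. We use scipy's max-flow routine, which requires integer capacities.

```python
import itertools
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_flow

rng = np.random.default_rng(0)
p = 5
# Integer potentials; pairwise tables generated to be submodular:
# theta00 + theta11 <= theta01 + theta10.
theta_v = rng.integers(0, 10, size=(p, 2))          # theta_v[v, z]
edges = [(v, w) for v in range(p) for w in range(v + 1, p)]
theta_e = {}
for e in edges:
    t00, t11 = rng.integers(0, 5, size=2)
    t01, t10 = rng.integers(5, 10, size=2)          # forces submodularity here
    theta_e[e] = np.array([[t00, t01], [t10, t11]])

def energy(y):
    return (sum(theta_v[v, y[v]] for v in range(p))
            + sum(theta_e[v, w][y[v], y[w]] for (v, w) in edges))

# Build the s-t graph; a node v ends up on the sink side iff y_v = 0.
s, t = p, p + 1
cap = np.zeros((p + 2, p + 2), dtype=np.int64)
const = 0
u = theta_v.astype(np.int64).copy()                 # running unary costs
for (v, w), th in theta_e.items():
    A, B, C, D = th[0, 0], th[0, 1], th[1, 0], th[1, 1]
    # th(y_v, y_w) = A + (C-A)[y_v=1] + (D-C)[y_w=1] + (B+C-A-D)[y_v=0, y_w=1]
    const += A
    u[v, 1] += C - A
    u[w, 1] += D - C
    cap[w, v] += B + C - A - D                      # cut iff y_w = 1, y_v = 0
for v in range(p):
    shift = min(u[v, 0], u[v, 1])                   # keep capacities non-negative
    const += shift
    cap[s, v] += u[v, 0] - shift                    # cut iff y_v = 0
    cap[v, t] += u[v, 1] - shift                    # cut iff y_v = 1
flow = maximum_flow(csr_matrix(cap.astype(np.int32)), s, t)
min_energy_cut = flow.flow_value + const

min_energy_bf = min(energy(y) for y in itertools.product((0, 1), repeat=p))
print(min_energy_cut, min_energy_bf)                # the two agree
```

Every finite cut of this graph corresponds to a labeling, and its capacity equals the energy of that labeling minus the accumulated constant, so the minimum cut recovers the minimizer.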
\subsection{Max-Marginals Using Graph Matchings} \label{sec:a:graph_matchings}
The alignment problem that we consider in this section is as follows:
given two sets $V, V'$, both of equal size (for simplicity), and a weight function
$\varphi: V \times V' \to \reals$, the task is to find a map $\sigma : V \to V'$
so that each $v \in V$ is mapped to a unique $z \in V'$ and the total weight $\sum_{v \in V} \varphi(v, \sigma(v))$
is maximized.
For example, $V$ and $V'$ might represent two natural language sentences, and the task is to align the two sentences.
\paragraph{Graphical Model}
This problem is framed as a graphical model as follows.
Suppose $V$ and $V'$ are of size $p$. Define $\yv = (y_1, \cdots, y_p)$ so that $y_v$ denotes $\sigma(v)$.
The graph $\mcG = (\mcV, \mcE)$ is constructed as the fully connected graph over $\mcV = \{1, \cdots, p\}$.
The range $\mcY_v$ of each $y_v$ is simply $V'$ in the unconstrained case.
Note that when considering constrained max-marginal computations, $\mcY_v$ might be a subset of $V'$.
The score function $\psi$ is defined via node and edge potentials as in
Eq.~\eqref{eq:smoothing:aug_score_decomp}. Again, we suppress the
dependence of $\psi$ on $\wv$ for simplicity.
Define unary and pairwise scores as
\begin{align*}
\psi_v(y_v) = \varphi(v, y_v) \quad \text{and} \quad
\psi_{v, v'}(y_v, y_{v'}) =
\begin{cases}
0, \text{ if } y_v \neq y_{v'} \\
-\infty, \text{ otherwise }
\end{cases}
\, .
\end{align*}
\paragraph{Max Oracle}
The max oracle with $\psi$ defined as above, or equivalently, the inference problem \eqref{eq:pgm:inference}
(cf. Lemma~\ref{lemma:smoothing:first-order-oracle}\ref{lem:foo:max}) can be cast as a maximum weight bipartite matching,
see e.g., \citet{taskar2005discriminative}.
Define a fully connected bipartite graph $G = (V \cup V', E)$ with partitions $V, V'$, and directed edges from
each $v \in V$ to each vertex $z \in V'$ with weight $\varphi(v, z)$.
The maximum weight bipartite matching in this graph $G$ gives the mapping $\sigma$, and thus implements the max oracle.
It can be written as the following linear program:
\begin{align*}
\max_{\{\theta_{v,z} \text{ for } (v,z) \in E\}} \, & \sum_{(v,z) \in E} \varphi(v, z) \theta_{v, z} \,, \\
\mathrm{s.t.} \quad & 0 \le \theta_{v, z} \le 1 \, \forall (v, z) \in V \times V' \\
& \sum_{v \in V} \theta_{v, z} \le 1 \, \forall z \in V' \\
& \sum_{z \in V'} \theta_{v, z} \le 1 \, \forall v \in V \, .
\end{align*}
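As a concrete illustration (not part of the original development), the max oracle above can be implemented with an off-the-shelf assignment solver; `scipy.optimize.linear_sum_assignment` solves exactly this linear program. The score matrix below is hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical alignment scores phi[v, z] between elements of V and V'.
phi = np.array([[3.0, 1.0, 0.5],
                [0.2, 2.0, 1.5],
                [1.0, 0.3, 4.0]])

# linear_sum_assignment minimizes total cost, so negate phi to maximize.
rows, cols = linear_sum_assignment(-phi)
sigma = dict(zip(rows.tolist(), cols.tolist()))   # the map sigma: V -> V'
max_weight = phi[rows, cols].sum()
print(sigma, max_weight)   # {0: 0, 1: 1, 2: 2} with total weight 9.0
```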
\paragraph{Max-Marginal}
For the graphical model defined above, the max-marginal $\psi_{\bar v ; \bar z}$ is the constrained maximum weight matching in the
graph $G$ defined above subject to the constraint that $\bar v$ is mapped to $\bar z$. The linear program above can be
modified to include the constraint $\theta_{\bar v, \bar z} = 1$:
\begin{align} \label{eq:top_k_map:graph_matchings:max-marg:def}
\begin{aligned}
\max_{\{\theta_{v,z} \text{ for } (v,z) \in E\}} \, & \sum_{(v,z) \in E} \varphi(v, z) \theta_{v, z} \,, \\
\mathrm{s.t.} \quad & 0 \le \theta_{v, z} \le 1 \, \forall (v, z) \in V \times V' \\
& \sum_{v \in V} \theta_{v, z} \le 1 \, \forall z \in V' \\
& \sum_{z \in V'} \theta_{v, z} \le 1 \, \forall v \in V \\
& \theta_{\bar v, \bar z} = 1 \, .
\end{aligned}
\end{align}
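For intuition, a single constrained max-marginal can also be computed directly from its definition \eqref{eq:top_k_map:graph_matchings:max-marg:def}: force the pair $(\bar v, \bar z)$ and optimally match the rest. The sketch below (with hypothetical scores) does this with one assignment solve per query; it costs $\bigO(p^3)$ per max-marginal, whereas Algo.~\ref{algo:top_k_map:graph_matchings} recovers all $p^2$ of them in $\bigO(p^3)$ total.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

phi = np.array([[3.0, 1.0, 0.5],     # hypothetical scores phi[v, z]
                [0.2, 2.0, 1.5],
                [1.0, 0.3, 4.0]])

def max_marginal(phi, v_bar, z_bar):
    """Best total matching weight subject to v_bar being mapped to z_bar:
    fix that edge, then optimally match the remaining vertices."""
    rest = np.delete(np.delete(phi, v_bar, axis=0), z_bar, axis=1)
    r, c = linear_sum_assignment(-rest)           # maximize on the rest
    return phi[v_bar, z_bar] + rest[r, c].sum()

print(max_marginal(phi, 0, 0))   # 9.0: the unconstrained optimum already maps 0 -> 0
print(max_marginal(phi, 0, 1))
```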
\paragraph{Algorithm to Compute Max-Marginals}
Algo.~\ref{algo:top_k_map:graph_matchings}, which shows how to compute max-marginals
is due to \citet{duchi2007using}.
Its running time is as follows: the initial
maximum weight matching computation takes $\bigO(p^3)$ via a maximum-flow computation~\citep[Ch.~10]{schrijver-book}.
Line~\ref{line:top_k_map:graph_matching:all-pairs} of Algo.~\ref{algo:top_k_map:graph_matchings}
can be performed by the all-pairs shortest paths algorithm \citep[Ch.~8.4]{schrijver-book} in time $\bigO(p^3)$.
Its correctness is shown by the following theorem:
\begin{theorem}[\citet{duchi2007using}] \label{thm:top_k_map:graph_matching}
Given a directed bipartite graph $G$ and weights $\varphi: V \times V' \to \reals$,
the outputs $\psi_{v ; z}$ from Algo.~\ref{algo:top_k_map:graph_matchings}
are valid max-marginals, i.e., each $\psi_{v ; z}$ coincides with the optimal value of the linear program
\eqref{eq:top_k_map:graph_matchings:max-marg:def}. Moreover, Algo.~\ref{algo:top_k_map:graph_matchings}
runs in time $\bigO(p^3)$ where $p = \abs{V} = \abs{V'}$.
\end{theorem}
\begin{algorithm}[tb]
\caption{Max-marginal computation via graph matchings}
\label{algo:top_k_map:graph_matchings}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Directed bipartite graph $G=(V \cup V', E)$,
weights $\varphi: V \times V' \to \reals$.
\STATE Find a maximum weight bipartite matching $\sigma^*$ in the graph $G$. Let the maximum weight be $\psi^*$.
\STATE Define a weighted residual bipartite graph $\widehat G = (V \cup V', \widehat E)$,
where the set $\widehat E$ is populated as follows:
for each $(v,z) \in E$ with $\sigma^*(v) \neq z$, add the edge $(v,z)$ to $\widehat E$ with weight $\varphi(v, z)$;
for each matched pair, add the reversed edge $(\sigma^*(v), v)$ to $\widehat E$ with weight $-\varphi(v, \sigma^*(v))$.
\STATE Find the maximum weight path from every vertex $z\in V'$ to every vertex $v \in V$
and denote this by $\Delta(z, v)$. \label{line:top_k_map:graph_matching:all-pairs}
\STATE Assign the max-marginals $\psi_{v ; z} = \psi^* + \ind(\sigma^*(v) \neq z) \, \left( \Delta(z, v) + \varphi(v, z) \right)$
for all $(v, z) \in V \times V'$.
\RETURN Max-marginals $\psi_{v;z}$ for all $(v, z) \in V \times V'$.
\end{algorithmic}
\end{algorithm}
\subsection{Proof of Proposition~\ref{prop:smoothing:max-marg:all}} \label{sec:a:proof-prop}
\begin{proposition_unnumbered}[\ref{prop:smoothing:max-marg:all}]
Consider as inputs an augmented score function $\psi(\cdot, \cdot ; \wv)$,
an integer $K>0$ and a smoothing parameter $\mu > 0$.
Further, suppose that $\psi$ is unambiguous, that is,
$\psi(\yv' ; \wv) \neq \psi(\yv'' ;\wv)$ for all distinct $\yv', \yv'' \in \mcY$.
Consider one of the two settings:
\begin{enumerate}[label={\upshape(\Alph*)}, align=left, leftmargin=*]
\item the output space $\mcY_v = \{0,1\}$ for each $v \in \mcV$, and the function
$-\psi$ is submodular (see Appendix~\ref{sec:a:graph_cuts} and, in particular, \eqref{eq:top_k_map:submodular}
for the precise definition), or,
\item the augmented score corresponds to an alignment task where the
inference problem~\eqref{eq:pgm:inference} corresponds to a
maximum weight bipartite matching (see Appendix~\ref{sec:a:graph_matchings} for a precise definition).
\end{enumerate}
In these cases, we have the following:
\begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=iii, leftmargin=*]
\item The max oracle can be implemented at a
computational complexity of $\bigO(p)$ minimum cut computations in Case~\ref{part:prop:max-marg:cuts},
and in time $\bigO(p^3)$ in Case~\ref{part:prop:max-marg:matching}.
\item The top-$K$ oracle can be implemented at a
computational complexity of $\bigO(pK)$ minimum cut computations in Case~\ref{part:prop:max-marg:cuts},
and in time $\bigO(p^3K)$ in Case~\ref{part:prop:max-marg:matching}.
\item The exp oracle is \#P-complete in both cases.
\end{enumerate}
\end{proposition_unnumbered}
\begin{proof}
A set of max-marginals can be computed by an algorithm $\mcM$ defined as follows:
\begin{itemize}
\item In Case~\ref{part:prop:max-marg:cuts}, invoke Algo.~\ref{algo:top_k_map:graph_cuts} a total of $2p$ times,
once with $y_v =0$ and once with $y_v = 1$ for each $v \in \mcV$. This takes a total of $2p$ min-cut computations.
\item In Case~\ref{part:prop:max-marg:matching}, $\mcM$ is simply Algo.~\ref{algo:top_k_map:graph_matchings}, which takes time
$\bigO(p^3)$.
\end{itemize}
The max oracle can then be implemented by the decoding in Eq.~\eqref{eq:max-marg:defn}, whose correctness is
guaranteed by Thm.~\ref{thm:a:loopy:decoding}.
The top-$K$ oracle is implemented by invoking the BMMF algorithm with $\mcM$ defined above, followed by
a projection onto the simplex (Algo.~\ref{algo:smoothing:top_K_oracle} in Appendix~\ref{sec:a:smoothing})
and its correctness is guaranteed by Thm.~\ref{thm:inference:topKmm}.
Lastly, the result for the exp oracle follows from \citet[Thm. 15]{jerrum1993polynomial} in conjunction with
Prop.~\ref{prop:smoothing:exp-crf}.
\end{proof}
\subsection{Inference using branch and bound search} \label{sec:a:bb_search}
Algo.~\ref{algo:top_k:bb} with the input $K=1$ is the standard best-first branch and bound
search algorithm.
Effectively, the top-$K$ oracle is implemented by simply
continuing the search procedure until $K$ outputs have been produced; compare
Algo.~\ref{algo:top_k:bb} with inputs $K=1$ and $K > 1$. We now prove the correctness guarantee.
\begin{algorithm}[tb]
\caption{Top-$K$ best-first branch and bound search}
\label{algo:top_k:bb}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Augmented score function $\psi(\cdot, \cdot ; \wv)$, integer $K > 0$,
search space $\mcY$, upper bound $\widehat \psi$, split strategy.
\STATE {\bfseries Initialization:} Initialize priority queue with
single entry $\mcY$ with priority $\widehat \psi(\mcY ; \wv)$,
and solution set $\mcS$ as the empty list.
\WHILE{$\abs{\mcS} < K$}
\STATE Pop $\widehat \mcY$ from the priority queue. \label{line:algo:bbtopk:pq}
\IF{${\widehat \mcY} = \{\widehat \yv\}$ is a singleton} \label{line:algo:bbtopk:1}
\STATE Append $( \widehat \yv, \psi(\widehat \yv ; \wv) )$ to $\mcS$.
\ELSE
\STATE $\mcY_1, \mcY_2 \leftarrow \mathrm{split}(\widehat \mcY)$.
\STATE Add $\mcY_1$ with priority $\widehat \psi(\mcY_1 ; \wv)$
and $\mcY_2$ with priority $\widehat \psi(\mcY_2 ; \wv)$ to the priority queue.
\ENDIF
\ENDWHILE
\RETURN $\mcS$.
\end{algorithmic}
\end{algorithm}
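To make the procedure concrete, here is a minimal sketch of Algo.~\ref{algo:top_k:bb} on a toy problem (all numbers hypothetical): $\mcY = \{0,1\}^p$ with a separable score, search nodes given by prefixes of $\yv$, splitting on the next coordinate, and the upper bound obtained by maximizing each free coordinate independently. This bound is finite on every subset, dominates $\psi$, and is tight on singletons.

```python
import heapq
import itertools

theta = [(0.3, 1.0), (2.0, 0.7), (0.1, 0.9), (1.5, 1.6)]   # psi(y) = sum_v theta[v][y_v]
p = len(theta)

def upper_bound(prefix):
    # Fixed coordinates contribute their score; free ones their best case.
    fixed = sum(theta[v][y] for v, y in enumerate(prefix))
    free = sum(max(theta[v]) for v in range(len(prefix), p))
    return fixed + free

def top_k(K):
    heap = [(-upper_bound(()), ())]          # heapq is a min-heap, so negate priorities
    out = []
    while heap and len(out) < K:
        neg_priority, prefix = heapq.heappop(heap)
        if len(prefix) == p:                 # singleton: record it
            out.append((prefix, -neg_priority))
        else:                                # split on the next coordinate
            for y in (0, 1):
                child = prefix + (y,)
                heapq.heappush(heap, (-upper_bound(child), child))
    return out

result = top_k(3)
brute = sorted((sum(th[y] for th, y in zip(theta, ys))
                for ys in itertools.product((0, 1), repeat=p)), reverse=True)[:3]
print([s for _, s in result], brute)         # the two lists of scores agree
```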
\begin{proposition_unnumbered}[\ref{prop:smoothing:bb-search}]
Consider an augmented score function $\psi(\cdot, \cdot, \wv)$,
an integer $K > 0$ and a smoothing parameter $\mu > 0$.
Suppose the upper bound function $\widehat \psi(\cdot, \cdot ; \wv): \mcX \times 2^{\mcY} \to \reals$
satisfies the following properties:
\begin{enumerate}[label=(\alph*), align=left, widest=a, leftmargin=*]
\item $\widehat \psi(\widehat \mcY ; \wv)$ is finite for every $\widehat \mcY \subseteq \mcY$,
\item $\widehat \psi(\widehat \mcY ; \wv) \ge \max_{\yv \in \widehat \mcY} \psi(\yv ; \wv)$
for all $\widehat \mcY \subseteq \mcY$, and,
\item $\widehat \psi(\{\yv\} ; \wv) = \psi(\yv ; \wv)$ for every $\yv \in \mcY$.
\end{enumerate}
Then, we have the following:
\begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=ii, leftmargin=*]
\item Algo.~\ref{algo:top_k:bb} with $K=1$ is a valid implementation of the max oracle.
\item Algo.~\ref{algo:top_k:bb} followed by a projection onto the simplex
(Algo.~\ref{algo:smoothing:top_K_oracle} in Appendix~\ref{sec:a:smoothing}) is a valid implementation of the top-$K$ oracle.
\end{enumerate}
\end{proposition_unnumbered}
\begin{proof}
Suppose at some point during the execution of the algorithm,
we have a $\widehat \mcY = \{\widehat \yv\}$ on Line~\ref{line:algo:bbtopk:1}
and that $\abs{\mcS} = k$ for some $0 \le k < K$.
From the properties of the upper bound $\widehat \psi$,
and using the fact that $\{\widehat \yv\}$ had the highest priority
in the priority queue, whose remaining contents we denote by $\mcP$ (this step is marked $(*)$ below), we get,
\begin{align*}
\psi(\widehat\yv ; \wv) &= \widehat \psi(\{ \widehat \yv\} ; \wv) \\
&\stackrel{(*)}{\ge} \max_{Y \in \mcP} \widehat \psi(Y ; \wv) \\
&\ge \max_{Y \in \mcP} \max_{\yv \in Y} \psi(\yv ; \wv) \\
&\stackrel{(\#)}{=} \max_{\yv \in \mcY - \mcS} \psi(\yv ; \wv) \,,
\end{align*}
where the equality $(\#)$ follows from the fact that
any $\yv \in \mcY$ exits the priority queue only if it is added to $\mcS$.
This shows that if a $\widehat \yv$ is added to $\mcS$, it has a score that is no less than
that of any $\yv \in \mcY - \mcS$. In other words, Algo.~\ref{algo:top_k:bb} returns
the top-$K$ highest scoring $\yv$'s.
\end{proof}
\subsection{Behavior of the Sequence $(\alpha_k)_{k \ge 0}$} \label{sec:a:c_alpha_k}
\begin{lemma_unnumbered}[\ref{lem:c:alpha_k}]
Given a positive, non-decreasing sequence $(\kappa_k)_{k\ge 1}$ and $\lambda \ge 0$,
consider the sequence $(\alpha_k)_{k \ge 0}$ defined by \eqref{eq:c:update_alpha}, where
$\alpha_0 \in (0, 1)$ such that $\alpha_0^2 \ge \lambda / (\lambda + \kappa_1)$.
Then, we have for every $k \ge 1$ that $0< \alpha_k \le \alpha_{k-1}$ and,
$
\alpha_k^2 \ge {\lambda}/({\lambda + \kappa_{k+1}}) \,.
$
\end{lemma_unnumbered}
\begin{proof}
It is clear that \eqref{eq:c:update_alpha} always has a positive root, so the update is well defined.
Define sequences $(c_k)_{k \ge 1}, (d_k)_{k \ge 0}$ as
\begin{align*}
c_k = \frac{\lambda + \kappa_k}{\lambda + \kappa_{k+1}}\,, \quad \mbox{and} \quad
d_k = \frac{\lambda}{\lambda + \kappa_{k+1}} \,.
\end{align*}
By construction, we have that $c_k d_{k-1} = d_k$, $0 < c_k \le 1$ and $0 \le d_k < 1$.
With these in hand, the rule for $\alpha_k$ can be written as
\begin{align} \label{eq:lem:c:alpha_k}
\alpha_k = \frac{ -(c_k \alpha_{k-1}^2 - d_k ) + \sqrt{ (c_k \alpha_{k-1}^2 - d_k )^2 + 4 c_k \alpha_{k-1}^2 }}{2} \,.
\end{align}
We show by induction that $d_k \le \alpha_k^2 < 1$.
The base case holds by assumption. Suppose that $\alpha_{k-1}$ satisfies
the hypothesis for some $k \ge 1$.
Noting that $\alpha_{k-1}^2 \ge d_{k-1}$ is equivalent to $c_k \alpha_{k-1}^2 - d_k \ge 0$, we get that
\begin{align}
\nonumber
\sqrt{ (c_k \alpha_{k-1}^2 - d_k )^2 + 4 c_k \alpha_{k-1}^2 }
&\le
\sqrt{ (c_k \alpha_{k-1}^2 - d_k )^2 + 4 c_k \alpha_{k-1}^2
+ 2 (c_k \alpha_{k-1}^2 - d_k) (2\sqrt{c_k} \alpha_{k-1}) } \\
&= c_k \alpha_{k-1}^2 - d_k + 2\sqrt{c_k} \alpha_{k-1} \,.
\label{eq:lem:c:alpha_k_helper}
\end{align}
We now conclude from \eqref{eq:lem:c:alpha_k} and \eqref{eq:lem:c:alpha_k_helper} that
\begin{align}
\nonumber
\alpha_k &\le \frac{ -(c_k \alpha_{k-1}^2 - d_k ) + (c_k \alpha_{k-1}^2 - d_k + 2\sqrt{c_k} \alpha_{k-1}) }{2} \\
&= \sqrt{c_k}{\alpha_{k-1}} \le \alpha_{k-1} < 1\,,
\label{eq:lem:c:alpha_k_dec}
\end{align}
since $c_k \le 1$ and $\alpha_{k-1} < 1$. To show the other side, we expand out \eqref{eq:lem:c:alpha_k}
and apply \eqref{eq:lem:c:alpha_k_helper} again to get
\begin{align*}
\alpha_k^2 - d_k
&= \frac{1}{2}(c_k \alpha_{k-1}^2 - d_k)^2 + (c_k \alpha_{k-1}^2 - d_k)
- \frac{1}{2}(c_k \alpha_{k-1}^2 - d_k) \sqrt{(c_k \alpha_{k-1}^2 - d_k)^2 + 4 c_k \alpha_{k-1}^2 } \\
&= \frac{1}{2}(c_k \alpha_{k-1}^2 - d_k) \left(2 + (c_k \alpha_{k-1}^2 - d_k)
- \sqrt{(c_k \alpha_{k-1}^2 - d_k)^2 + 4 c_k \alpha_{k-1}^2 }
\right) \\
&\ge \frac{1}{2}(c_k \alpha_{k-1}^2 - d_k) \left(2 + (c_k \alpha_{k-1}^2 - d_k)
- (c_k \alpha_{k-1}^2 - d_k+ 2\sqrt{c_k} \alpha_{k-1})
\right) \\
&= (c_k \alpha_{k-1}^2 - d_k) ( 1- \sqrt{c_k}\alpha_{k-1}) \ge 0 \,.
\end{align*}
The fact that $(\alpha_{k})_{k\ge 0}$ is a non-increasing sequence follows from~\eqref{eq:lem:c:alpha_k_dec}.
\end{proof}
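Both claims of the lemma can be checked numerically. The sketch below (with arbitrary illustrative choices of $\lambda$ and a positive, non-decreasing sequence $\kappa_k$) iterates the closed-form update for $\alpha_k$ derived in the proof and asserts monotonicity as well as the lower bound $\alpha_k^2 \ge \lambda / (\lambda + \kappa_{k+1})$.

```python
import math

lam = 0.5
kappa = [0.1 * (k + 1) for k in range(51)]    # kappa[k] plays the role of kappa_k

def d(k):                                     # d_k = lambda / (lambda + kappa_{k+1})
    return lam / (lam + kappa[k + 1])

def c(k):                                     # c_k = (lambda + kappa_k) / (lambda + kappa_{k+1})
    return (lam + kappa[k]) / (lam + kappa[k + 1])

alpha = math.sqrt(d(0)) + 1e-3                # alpha_0 in (0,1) with alpha_0^2 >= d_0
assert 0 < alpha < 1 and alpha ** 2 >= d(0)
for k in range(1, 50):
    a = c(k) * alpha ** 2 - d(k)              # the closed-form update from the proof
    new_alpha = (-a + math.sqrt(a ** 2 + 4 * c(k) * alpha ** 2)) / 2
    assert 0 < new_alpha <= alpha + 1e-12     # alpha_k <= alpha_{k-1}
    assert new_alpha ** 2 >= d(k) - 1e-12     # alpha_k^2 >= lambda / (lambda + kappa_{k+1})
    alpha = new_alpha
print("lemma verified numerically")
```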
\subsection{Proofs of Corollaries to Theorem~\ref{thm:catalyst:outer}} \label{subsec:c:proofs_missing_cor}
We rewrite \eqref{thm:c:main:main} from Theorem~\ref{thm:catalyst:outer} as follows:
\begin{align} \label{eq:c:app:main}
F&(\wv_k) - F^* \le
\left( \prod_{j=1}^k \frac{1-\alpha_{j-1}}{1-\delta_j} \right)
\left( F(\wv_0) - F^* + \frac{\gamma_0}{2} \normsq{\wv_0 - \wv^*} \right) + \mu_k D_\omega \\
&+
\frac{1}{1-\alpha_k} \left[
\left( \prod_{j=1}^k \frac{1-\alpha_j}{1-\delta_j} \right) (1 + \delta_1) \mu_1 D_\omega +
\sum_{j=2}^k \left( \prod_{i=j}^k \frac{1-\alpha_i}{1-\delta_i} \right)
\left( \mu_{j-1} - (1-\delta_j)\mu_j \right)D_\omega
\right]
\,, \nonumber
\end{align}
Next, we give the proofs of Corollaries~\ref{cor:c:outer_sc} to~\ref{cor:c:outer_smooth_dec_smoothing}.
\begin{corollary_unnumbered}[\ref{cor:c:outer_sc}]
Consider the setting of Thm.~\ref{thm:catalyst:outer}.
Let $q = \frac{\lambda}{\lambda + \kappa}$.
Suppose $\lambda > 0$ and $\mu_k = \mu$, $\kappa_k = \kappa$, for all $k \ge 1$. Choose $\alpha_0 = \sqrt{q}$ and,
$\delta_k = \frac{\sqrt{q}}{2 - \sqrt{q}} \,.$
Then, we have,
\begin{align*}
F(\wv_k) - F^* \le \frac{3 - \sqrt{q}}{1 - \sqrt{q}} \mu D_\omega +
2 \left( 1- \frac{\sqrt q}{2} \right)^k \left( F(\wv_0) - F^* \right) \,.
\end{align*}
\end{corollary_unnumbered}
\begin{proof}
Notice that when $\alpha_0 = \sqrt{q}$, we have, $\alpha_k = \sqrt{q}$ for all $k$. Moreover, for our choice of $\delta_k$,
we get, for all $k, j$, $\frac{1-\alpha_k}{1-\delta_j} = 1 - \frac{\sqrt q}{2}$.
Under this choice of $\alpha_0$, we have, $\gamma_0 = \lambda$. So, we get the dependence on initial conditions as
\begin{align*}
\Delta_0 = F(\wv_0) - F^* + \frac{\lambda}{2} \normsq{\wv_0 - \wv^*} \le 2( F(\wv_0) - F^*) \,,
\end{align*}
by $\lambda$-strong convexity of $F$. Writing $D$ for $D_\omega$, the last term of \eqref{eq:c:app:main} is now
\begin{align*}
\frac{\mu D}{1-\sqrt {q}} \left[ \underbrace{\left( 1 - \frac{\sqrt q}{2} \right)^{k-1} }_{\le 1}
+ \underbrace{\frac{\sqrt q}{2} \sum_{j=2}^k \left( 1 - \frac{\sqrt q}{2} \right)^{k-j}}_{\stackrel{(*)}{\le} 1 }
\right] \le \frac{2 \mu D}{1 - \sqrt q} \, ,
\end{align*}
where $(*)$ holds since
\begin{align*}
\sum_{j=2}^k \left( 1 - \frac{\sqrt q}{2} \right)^{k-j} \le \sum_{j=0}^\infty \left( 1 - \frac{\sqrt q}{2} \right)^{j}
= \frac{2}{\sqrt{q}} \,.
\end{align*}
Combining the above bounds in \eqref{eq:c:app:main} completes the proof.
\end{proof}
\begin{corollary_unnumbered}[\ref{cor:c:outer_sc:decreasing_mu_const_kappa}]
Consider the setting of Thm.~\ref{thm:catalyst:outer}.
Let $q = \frac{\lambda}{\lambda + \kappa}, \eta = 1 - \frac{\sqrt q}{2}$.
Suppose $\lambda > 0$ and
$\kappa_k = \kappa$, for all $k \ge 1$. Choose $\alpha_0 = \sqrt{q}$ and,
the sequences $(\mu_k)_{k \ge 1}$ and $(\delta_k)_{k \ge 1}$ as
\begin{align*}
\mu_k = \mu \eta^{{k}/{2}} \,, \qquad \text{and,} \qquad
\delta_k = \frac{\sqrt{q}}{2 - \sqrt{q}} \,,
\end{align*}
where $\mu > 0$ is any constant.
Then, we have,
\begin{align*}
F(\wv_k) - F^* \le \eta^{{k}/{2}} \left[
2 \left( F(\wv_0) - F^* \right)
+ \frac{\mu D_\omega}{1-\sqrt{q}} \left(2-\sqrt{q} + \frac{\sqrt{q}}{1 - \sqrt \eta} \right)
\right] \, .
\end{align*}
\end{corollary_unnumbered}
\begin{proof}
As previously in Corollary~\ref{cor:c:outer_sc}, notice that under the specific parameter choices here, we have,
$\gamma_0 = \lambda$, $\alpha_k = \sqrt{q}$ for each $k$, and $\frac{1 - \alpha_k}{1 - \delta_k} = 1 - \frac{\sqrt q}{2} = \eta$.
By $\lambda$-strong convexity of $F$ and the fact that $\gamma_0 = \lambda$, the contribution of $\wv_0$ can be upper
bounded by $2(F(\wv_0) - F^*)$. Now, plugging these into \eqref{eq:c:app:main}
and collecting the terms dependent on
$\delta_k$ separately, we get,
\begin{align} \label{eq:c:outer:sc:dec_smoothing}
\nonumber
F(\wv_k) - F^* \le & \underbrace{2 \eta^k (F(\wv_0) - F^*)}_{=: \mcT_1} +
\underbrace{\mu_k D}_{=: \mcT_2} \\ &+
\frac{1}{1 - \sqrt{q}} \left(
\underbrace{\eta^k \mu_1 D}_{=: \mcT_3} +
\underbrace{\sum_{j=2}^k \eta^{k-j+1} (\mu_{j-1} - \mu_j)D}_{=:\mcT_4} +
\underbrace{\sum_{j=1}^k \eta^{k-j+1} \mu_j \delta_j D}_{=: \mcT_5}
\right) \,.
\end{align}
We shall consider each of these terms. Since $\eta^k \le \eta^{k/2}$, we get
$\mcT_1 \le 2\eta^{k/2}(F(\wv_0) - F^*)$ and $\mcT_3 = \eta^k \mu_1 D \le \eta^k \mu D \le \eta^{k/2} \mu D$.
Moreover, $\mcT_2 = \mu_k D = \eta^{k/2} \mu D$.
Next, using $ 1- \sqrt \eta \le 1 - \eta = \frac{\sqrt q}{2}$,
\begin{align*}
\mcT_4 &= \sum_{j=2}^k \eta^{k-j+1}(\mu_{j-1} - \mu_j) D
= \sum_{j=2}^k \eta^{k-j+1} \mu \eta^{\nicefrac{(j-1)}{2}} (1 - \sqrt\eta) D \\
&\le \frac{\sqrt{q}}{2} \mu D \sum_{j=2}^k \eta^{k - \frac{j-1}{2}}
= \frac{\sqrt{q}}{2} \mu D \eta^{\nicefrac{(k+1)}{2}} \sum_{j=0}^{k-2} \eta^{j/2}
\le \frac{\sqrt{q}}{2} \mu D \frac{\eta^{\nicefrac{(k+1)}{2}} }{1- \sqrt\eta} \\
&\le \frac{\sqrt{q}}{2} \mu D \frac{\eta^{\nicefrac{k}{2}} }{1- \sqrt\eta} \, .
\end{align*}
Similarly, using $\delta_j = \nicefrac{\sqrt q}{2\eta}$, we have,
\begin{align*}
\mcT_5 &= \sum_{j=1}^k \eta^{k-j+1} \mu \eta^{j/2} D \frac{\sqrt q}{2\eta}
= \frac{\sqrt{q}}{2} \mu D \, \eta^{\nicefrac{k}{2}} \sum_{j=1}^k \eta^{\nicefrac{(k-j)}{2}}
\le \frac{\sqrt{q}}{2} \mu D \frac{\eta^{\nicefrac{k}{2}} }{1- \sqrt\eta} \, .
\end{align*}
Plugging these into \eqref{eq:c:outer:sc:dec_smoothing} completes the proof.
\end{proof}
\begin{corollary_unnumbered}[\ref{cor:c:outer_smooth}]
Consider the setting of Thm.~\ref{thm:catalyst:outer}. Suppose $\mu_k = \mu$, $\kappa_k = \kappa$, for all $k \ge 1$
and $\lambda = 0$. Choose $\alpha_0 = \frac{\sqrt{5}-1}{2}$ and
$\delta_k = \frac{1}{(1 + k)^2} \,.$
Then, we have,
\begin{align*}
F(\wv_k) - F^* \le \frac{8}{(k+2)^2} \left( F(\wv_0) - F^* + \frac{\kappa}{2} \normasq{2}{\wv_0 - \wv^*} \right)
+ \mu D_\omega\left( 1 + \frac{12}{k+2} + \frac{30}{(k+2)^2} \right) \, .
\end{align*}
\end{corollary_unnumbered}
\begin{proof}
Firstly, note that $\gamma_0 = \kappa \frac{\alpha_0^2}{1-\alpha_0} = \kappa$. Now, define
\begin{align*}
\mcA_k &= \prod_{i=0}^k (1- \alpha_i) \text{, and, }
\mcB_k = \prod_{i=1}^k (1-\delta_i) \, .
\end{align*}
We have,
\begin{align} \label{lem:c:b_k_1}
\mcB_k = \prod_{i=1}^k \left( 1 - \frac{1}{(i+1)^2} \right) = \prod_{i=1}^k \frac{i(i+2)}{(i+1)^2} = \frac{1}{2} + \frac{1}{2(k+1)}\,.
\end{align}
Therefore,
\begin{align*}
F(\wv_k) - F^* \le& \frac{\mcA_{k-1}}{\mcB_k} \left( F(\wv_0) - F^*
+ \frac{\gamma_0}{2} \normsq{\wv_0 - \wv^*} \right)
+ \mu D \\ &+ \frac{\mu D}{1-\alpha_0} \left( \prod_{j=1}^k \frac{1-\alpha_{j-1}}{1-\delta_k} \right) (1 + \delta_1) +
\mu D \sum_{j=2}^k \left( \prod_{i=j}^k \frac{1- \alpha_{i-1}}{1-\delta_i} \right) \frac{\delta_j}{1-\alpha_{j-1}}
\\
\le& \underbrace{\frac{\mcA_{k-1}}{\mcB_k} \left( F(\wv_0) - F^* + \frac{\gamma_0}{2} \normsq{\wv_0 - \wv^*} \right)}_{=:\mcT_1}
+ \mu D \\
&+
\underbrace{ \frac{\tfrac{5}{4}\mu D}{1-\alpha_0} \frac{\mcA_{k-1}}{\mcB_k} }_{=:\mcT_2}+
\underbrace{\mu D \sum_{j=2}^k \frac{ \nicefrac{\mcA_{k-1}} {\mcA_{j-2}}} { \nicefrac{\mcB_k}{\mcB_{j-1}}}
\frac{\delta_j}{1-\alpha_{j-1}}}_{=:\mcT_3} \,.
\end{align*}
From Lemma~\ref{lem:c:A_k:const_kappa}, which analyzes the evolution of $(\alpha_k)$ and $(\mcA_k)$,
we get that $\frac{2}{(k+2)^2} \le \mcA_{k-1} \le \frac{4}{(k+2)^2}$ and $\alpha_k \le \frac{2}{k+3}$ for $k \ge 0$.
Since $\mcB_k \ge \frac{1}{2}$,
\begin{align*}
\mcT_1 \le \frac{8}{(k+2)^2} \left( F(\wv_0) - F^* + \frac{\gamma_0}{2} \normsq{\wv_0 - \wv^*} \right) \,.
\end{align*}
Moreover, since $\alpha_0 \le 2/3$,
\begin{align*}
\mcT_2 \le \frac{30 \, \mu D}{(k+2)^2} \,.
\end{align*}
Lastly, we have,
\begin{align*}
\mcT_3 &\le \mu D \sum_{j=2}^k \frac{4}{(k+2)^2} \times \frac{(j+1)^2}{2} \times
2\left( \frac{1}{2}+ \frac{1}{2j} \right) \times \frac{1}{(j+1)^2} \times \frac{1}{1 - \nicefrac{2}{j+2}} \\
&\le \frac{4 \, \mu D}{(k+2)^2} \sum_{j=2}^k \frac{j+2}{j} \le \frac{4 \, \mu D}{(k+2)^2} \left(k -1 + 2 \log k \right)
\le \frac{12 \, \mu D}{k+2} \, ,
\end{align*}
where we have used the simplifications $\sum_{j=2}^k 1/j \le \log k$ and $ k-1+2\log k \le 3k$.
\end{proof}
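The telescoping identity for $\mcB_k$ in \eqref{lem:c:b_k_1}, used in the proof above, is easy to verify numerically:

```python
# Check B_k = prod_{i=1}^k (1 - 1/(i+1)^2) = 1/2 + 1/(2(k+1)) for a range of k.
for k in range(1, 200):
    prod = 1.0
    for i in range(1, k + 1):
        prod *= 1.0 - 1.0 / (i + 1) ** 2
    assert abs(prod - (0.5 + 0.5 / (k + 1))) < 1e-12
print("B_k identity verified")
```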
\begin{corollary_unnumbered}[\ref{cor:c:outer_smooth_dec_smoothing}]
Consider the setting of Thm.~\ref{thm:catalyst:outer} with $\lambda = 0$.
Choose $\alpha_0 = \frac{\sqrt{5}-1}{2}$, and for some non-negative constants $\kappa, \mu$,
define sequences $(\kappa_k)_{k \ge 1}, (\mu_k)_{k \ge 1}, (\delta_k)_{k \ge 1}$ as
\begin{align*}
\kappa_k = \kappa \, k\,, \quad
\mu_k = \frac{\mu}{k} \quad \text{and,} \quad
\delta_k = \frac{1}{(k + 1)^2} \,.
\end{align*}
Then, for $k \ge 2$, we have,
\begin{align}
F(\wv_k) - F^* \le
\frac{\log(k+1)}{k+1} \left(
2(F(\wv_0) - F^*) + \kappa \normasq{2}{\wv_0 - \wv^*} + 27 \mu D_\omega
\right) \,.
\end{align}
For the first iteration (i.e., $k = 1$), this bound is off by a constant factor $1 / \log2$.
\end{corollary_unnumbered}
\begin{proof}
Notice that $\gamma_0 = \kappa_1 \frac{\alpha_0^2}{1- \alpha_0} = \kappa$.
As in Corollary~\ref{cor:c:outer_smooth}, define
\begin{align*}
\mcA_k &= \prod_{i=0}^k (1- \alpha_i)\,, \quad \text{and,} \quad
\mcB_k = \prod_{i=1}^k (1-\delta_i) \, .
\end{align*}
From Lemma~\ref{lem:c:A_k:inc_kappa} and \eqref{lem:c:b_k_1} respectively, we have for $k \ge 1$,
\begin{align*}
\frac{1- \frac{1}{\sqrt 2}}{k+1} &\le \mcA_{k} \le \frac{1}{k+2}\,, \quad \text{and,} \quad
\frac{1}{2} \le \mcB_{k} \le 1\, .
\end{align*}
Now, invoking Theorem~\ref{thm:catalyst:outer}, we get,
\begin{align} \label{eq:cor:c:nsc:dec_smoothing_eq}
F(\wv_k) - F^* \le& \nonumber
\underbrace{\frac{\mcA_{k-1}}{\mcB_k} \left( F(\wv_0) - F^*
+ \frac{\gamma_0}{2} \normsq{\wv_0 - \wv^*} \right)}_{=:\mcT_1} +
\underbrace{\mu_k D}_{=:\mcT_2} +
\underbrace{\frac{1}{1 - \alpha_0} \frac{\mcA_{k-1}}{\mcB_k} \mu_1 D(1 + \delta_1)}_{=:\mcT_3} + \\
&\underbrace{\sum_{j=2}^k \frac{\mcA_{k-1}/\mcA_{j-1}}{\mcB_k / \mcB_{j-1}} (\mu_{j-1} - \mu_j) D}_{=:\mcT_4} +
\underbrace{\sum_{j=2}^k \frac{\mcA_{k-1}/\mcA_{j-1}}{\mcB_k / \mcB_{j-1}} \delta_j \mu_j D }_{=:\mcT_5} \,.
\end{align}
We shall bound each of these terms as follows.
\begin{gather*}
\mcT_1 = \frac{\mcA_{k-1}}{\mcB_k} \left( F(\wv_0) - F^* + \frac{\gamma_0}{2} \normsq{\wv_0 - \wv^*} \right)
\le \frac{2}{k+1} \left( F(\wv_0) - F^* + \frac{\kappa}{2} \normsq{\wv_0 - \wv^*} \right) \,, \\
\mcT_2 = \mu_k D = \frac{\mu D}{k} \le \frac{2\mu D}{k+1} \, , \\
\mcT_3 = \frac{1}{1 - \alpha_0} \frac{\mcA_{k-1}}{\mcB_k} \mu_1 D(1 + \delta_1)
\le 3 \times \frac{2}{k+1} \times {\mu} \times \frac{5}{4}D = \frac{15}{2} \frac{\mu D}{k+1} \,,
\end{gather*}
where we used the fact that $\alpha_0 \le 2/3$. Next,
using $\sum_{j=2}^k {1}/({j-1}) = 1 + \sum_{j=2}^{k-1} {1}/{j} \le 1 + \int_{1}^{k-1}{dx}/{x} = 1 + \log(k-1)$,
we get,
\begin{align*}
\nonumber
{\mcT_4}
&\le \sum_{j=2}^k \frac{2}{k+1} \cdot \frac{j}{1- \frac{1}{\sqrt 2}} \left(\frac{\mu}{j-1} - \frac{\mu}{j}\right) D
= 2\sqrt2(\sqrt2 + 1) \frac{\mu D}{k+1} \sum_{j=2}^k \frac{1}{j-1} \nonumber \\
&\le 2\sqrt2(\sqrt2 + 1) \mu D \left( \frac{1 + \log(k+1)}{k+1} \right) \,.
\end{align*}
Moreover, from $\sum_{j=2}^k {1}/{(j+1)^2} \le \int_{2}^{k+1} {dx}/{x^2} \le 1/2$, it follows that
\begin{align*}
\mcT_5
\le \sum_{j=2}^k \frac{2}{k+1} \cdot \frac{j}{1- \frac{1}{\sqrt 2}} \frac{\mu}{j} \cdot \frac{1}{(j+1)^2} D
= 2\sqrt2(\sqrt2+1) \frac{\mu D}{k+1} \sum_{j=2}^k \frac{1}{(j+1)^2}
\le \sqrt2(\sqrt2 + 1) \frac{\mu D}{k+1} \, .
\end{align*}
Plugging these back into \eqref{eq:cor:c:nsc:dec_smoothing_eq}, we get
\begin{align*}
F(\wv_k) - F^* \le& \frac{2}{k+1} \left( F(\wv_0) - F^* + \frac{\kappa}{2} \normsq{\wv_0 - \wv^*} \right)+ \\
&\frac{\mu D}{k+1} \left(2 + \frac{15}{2} + \sqrt2(1 + \sqrt2) \right) +
2\sqrt2(1 + \sqrt2)\mu D \frac{1 + \log(k+1)}{k+1} \,.
\end{align*}
To complete the proof, note that $\log(k+1) \ge 1$ for $k\ge 2$ and numerically verify that the coefficient of $\mu D$ is
smaller than 27.
\end{proof}
\subsection{Inner Loop Complexity Analysis for Casimir} \label{sec:c:proofs:inner_compl}
Before proving Prop.~\ref{prop:c:inner_loop_final}, the following lemmas will be helpful.
First, we present a lemma from \citet[Lemma 11]{lin2017catalyst}
about the expected number of iterations that a randomized, linearly convergent first-order method requires
to achieve a given target accuracy.
\begin{lemma}
\label{lem:c:inner_loop}
Let $\mcM$ be a linearly convergent algorithm and $f \in \mcF_{L, \lambda}$.
Define $f^* = \min_{\wv \in \reals^d} f(\wv)$.
Given a starting point $\wv_0$ and a target accuracy $\eps$,
let $(\wv_k)_{k \ge 0}$ be the sequence of iterates generated by $\mcM$.
Define
$T(\eps) = \inf \left\{ k \ge 0 \, | \, f(\wv_k) - f^* \le \eps \right\} \,.$
We then have,
\begin{align}
\expect[T(\eps)] \le \frac{1}{\tau(L, \lambda)} \log \left( \frac{2C(L, \lambda)}
{\tau(L,\lambda)\eps} (f(\wv_0) - f^*) \right) + 1 \,.
\end{align}
\end{lemma}
This next lemma is due to \citet[Lemma 14, Prop.~15]{lin2017catalyst}.
\begin{lemma}
\label{lem:c:inner_loop_restart}
Consider $F_{\mu\omega, \kappa}(\cdot \, ;\zv)$ defined in Eq.~\eqref{eq:prox_point_algo}
and let $\delta \in [0,1)$. Let $\widehat F^* = \min_{\wv \in \reals^d} F_{\mu\omega, \kappa}(\wv ;\zv)$
and $\widehat \wv^* = \argmin_{\wv \in \reals^d} F_{\mu\omega, \kappa}(\wv ;\zv)$.
Further let $F_{\mu\omega}(\cdot \, ;\zv)$ be $L_{\mu\omega}$-smooth.
We then have the following:
\begin{gather*}
F_{\mu\omega, \kappa}(\zv ;\zv) - \widehat F^* \le \frac{L_{\mu\omega} + \kappa}{2} \normasq{2}{\zv - \widehat \wv^*} \,,
\quad \text{and,} \\
F_{\mu\omega, \kappa}(\widehat\wv ;\zv) - \widehat F^* \le \frac{\delta\kappa}{8} \normasq{2}{\zv - \widehat \wv^*}
\, \implies \,
F_{\mu\omega, \kappa}(\widehat\wv ;\zv) - \widehat F^* \le \frac{\delta\kappa}{2} \normasq{2}{\widehat \wv - \zv} \,.
\end{gather*}
\end{lemma}
We now restate and prove Prop.~\ref{prop:c:inner_loop_final}.
\begin{proposition_unnumbered}[\ref{prop:c:inner_loop_final}]
Consider $F_{\mu\omega, \kappa}(\cdot \, ;\zv)$ defined in Eq.~\eqref{eq:prox_point_algo},
and a linearly convergent algorithm $\mcM$ with parameters $C$, $\tau$.
Let $\delta \in [0,1)$. Suppose $F_{\mu\omega}$ is $L_{\mu\omega}$-smooth and
$\lambda$-strongly convex.
Then the expected number of iterations $\expect[\widehat T]$ of $\mcM$ when started at $\zv$
in order to obtain $\widehat \wv \in \reals^d$ that satisfies
\begin{align}\label{eq:inner_stopping_criterion}
F_{\mu\omega, \kappa}(\widehat\wv;\zv) - \min_\wv F_{\mu\omega, \kappa}(\wv;\zv)\leq \tfrac{\delta\kappa}{2} \normasq{2}{\widehat\wv - \zv}
\end{align}
is upper bounded by
\begin{align*}
\expect[\widehat T] \le \frac{1}{\tau(L_{\mu\omega} + \kappa, \lambda + \kappa)} \log\left(
\frac{8 C(L_{\mu\omega} + \kappa, \lambda + \kappa)}{\tau(L_{\mu\omega} + \kappa, \lambda + \kappa)} \cdot
\frac{L_{\mu\omega} + \kappa}{\kappa \delta} \right) + 1 \,.
\end{align*}
\end{proposition_unnumbered}
\begin{proof}
In order to invoke
Lemma~\ref{lem:c:inner_loop}, we must appropriately set $\eps$ for
$\widehat\wv$ to satisfy \eqref{eq:inner_stopping_criterion} and then bound the ratio
$(F_{\mu\omega, \kappa}(\zv ;\zv) - \widehat F^*) / \eps$.
Firstly, Lemma~\ref{lem:c:inner_loop_restart} tells us that choosing
$\eps = \frac{\delta \kappa}{8} \normasq{2}{\zv - \widehat \wv^*}$ guarantees
that the $\widehat \wv$ so obtained satisfies \eqref{eq:inner_stopping_criterion},
where $\widehat \wv^* := \argmin_{\wv \in \reals^d} F_{\mu\omega, \kappa}(\wv ;\zv)$.
Therefore, by the first part of Lemma~\ref{lem:c:inner_loop_restart}, the ratio $(F_{\mu\omega, \kappa}(\zv ;\zv) - \widehat F^*) / \eps$
is bounded from above by ${4(L_{\mu\omega} + \kappa)}/{\kappa \delta}$, and invoking Lemma~\ref{lem:c:inner_loop} with this ratio yields the claim.
\end{proof}
\subsection{Information Based Complexity of {Casimir-SVRG}} \label{sec:c:proofs:total_compl}
Presented below are the proofs of Propositions~\ref{prop:c:total_compl_svrg_sc} to
\ref{prop:c:total_compl_nsc:dec_smoothing} from Section~\ref{sec:catalyst:total_compl}.
We use the following values of $C, \tau$, see e.g., \citet{hofmann2015variance}.
\begin{align*}
\tau(L, \lambda) &= \frac{1}{8 \tfrac{L}{\lambda} + n} \ge \frac{1}{8 \left( \tfrac{L}{\lambda} + n \right)}\\
C(L, \lambda) &= \frac{L}{\lambda} \left( 1 + \frac{n \tfrac{L}{\lambda}}{8 \tfrac{L}{\lambda} + n} \right)\,.
\end{align*}
\begin{proposition_unnumbered}[\ref{prop:c:total_compl_svrg_sc}]
Consider the setting of Thm.~\ref{thm:catalyst:outer} with $\lambda > 0$ and
fix $\eps > 0$.
If we run Algo.~\ref{algo:catalyst} with SVRG as the inner solver with parameters:
$\mu_k = \mu = \eps / (10 D_\omega)$, $\kappa_k = \kappa$ chosen as
\begin{align*}
\kappa =
\begin{cases}
\frac{A}{\mu n} - \lambda \,, \text{ if } \frac{A}{\mu n} > 4 \lambda \\
\lambda \,, \text{ otherwise}
\end{cases} \,,
\end{align*}
$q = {\lambda}/{(\lambda + \kappa)}$, $\alpha_0 = \sqrt{q}$, and
$\delta = {\sqrt{q}}/{(2 - \sqrt{q})}$.
Then, the number of iterations $N$ to obtain $\wv$ such that $F(\wv) - F^* \le \eps$ is
bounded in expectation as
\begin{align*}
\expect[N] \le \widetilde \bigO \left(
n + \sqrt{\frac{A_\omega D_\omega n}{\lambda \eps}}
\right) \,.
\end{align*}
\end{proposition_unnumbered}
\begin{proof}
We use shorthand $A:=A_\omega$, $D := D_\omega$, $L_\mu = \lambda + \nicefrac{A}{\mu}$ and
$\Delta F_0 = F(\wv_0) - F^*$.
Let $C, \tau$ be the linear convergence parameters of SVRG.
From Cor.~\ref{cor:c:outer_sc}, the number of outer iterations $K$ required to obtain
$F(\wv_K) - F^* \le \eps$ satisfies
\begin{align*}
K \le \frac{2}{\sqrt{q}} \log\left(\frac{ 2 \Delta F_0}{\eps - c_q \mu D} \right)\, ,
\end{align*}
where $c_q = (3 - \sqrt q)/(1 - \sqrt q)$.
From Prop.~\ref{prop:c:inner_loop_final} and the choice $\delta_k = {\sqrt q}/({2 - \sqrt{q}})$,
the number $T_k$ of inner iterations for inner loop $k$ satisfies
\begin{align*}
\expect[T_k] &\le \frac{1}{\tau(L_\mu + \kappa, \lambda + \kappa)} \log\left(
\frac{8 C(L_\mu + \kappa, \lambda + \kappa)}{\tau(L_\mu + \kappa, \lambda + \kappa)} \cdot
\frac{L_\mu + \kappa}{\kappa} \cdot \frac{2 - \sqrt{q}}{\sqrt{q}} \right) + 1 \\
&\le \frac{2}{\tau(L_\mu + \kappa, \lambda + \kappa)} \log\left(
\frac{8 C(L_\mu + \kappa, \lambda + \kappa)}{\tau(L_\mu + \kappa, \lambda + \kappa)} \cdot
\frac{L_\mu + \kappa}{\kappa} \cdot \frac{2 - \sqrt{q}}{\sqrt{q}} \right) \,.
\end{align*}
Let $N$ denote the total number of SVRG iterations required to obtain an iterate $\wv$ that satisfies $F(\wv) - F^* \le \eps$.
Next, we upper bound $\expect[N] \le \sum_{k=1}^K \expect[T_k]$ as
\begin{align} \label{eq:c:total_compl_sc}
\expect[N] \le \frac{4}{\sqrt{q} \tau(L_\mu + \kappa, \lambda +\kappa)} \log \left(
\frac{8 C(L_\mu + \kappa, \lambda + \kappa)}{\tau(L_\mu + \kappa, \lambda + \kappa)}
\frac{L_\mu + \kappa}{\kappa} \frac{2 - \sqrt{q}}{\sqrt q} \right)
\log\left( \frac{2(F(\wv_0) - F^*)}{\eps - c_q \mu D} \right)\,.
\end{align}
Next, we shall plug in $C, \tau$ for SVRG in two different cases:
\begin{itemize}
\item Case 1: $A > 4\mu \lambda n$, in which case $\kappa + \lambda = A / (\mu n)$ and $q < 1/4$.
\item Case 2: $A \le 4 \mu \lambda n$, in which case, $\kappa = \lambda$ and $q = 1/2$.
\end{itemize}
We first consider the term outside the logarithm. It is, up to constants,
\begin{align*}
\frac{1}{\sqrt{q}} \left( n + \frac{A}{\mu(\lambda + \kappa)} \right)
= n \sqrt{\frac{\lambda + \kappa}{\lambda}} + \frac{A}{\mu \sqrt{\lambda(\lambda + \kappa)}} \,.
\end{align*}
For Case 1, plug in $\kappa + \lambda = A / (\mu n)$ so this term evaluates to $\sqrt{{ADn}/({\lambda \eps})}$.
For Case 2, we use the fact that $A \le 4 \mu \lambda n$ so that this term can be upper bounded by,
\[
n\left( \sqrt{\frac{\lambda + \kappa}{\lambda}} + 4 \sqrt{ \frac{\lambda}{\lambda + \kappa}} \right) = 3\sqrt{2}n \,,
\]
since we chose $\kappa= \lambda$.
It remains to consider the logarithmic terms. Since $\kappa \ge \lambda$ always holds,
the first logarithmic term of \eqref{eq:c:total_compl_sc} is
logarithmic in the problem parameters.
As for the second logarithmic term, we must evaluate $c_q$. For Case 1, we have that $q < 1/4$ so that $c_q < 5$
and $c_q \mu D < \eps / 2$. For Case 2, we get that $q = 1/2$ and $c_q < 8$ so that $c_q \mu D < 4\eps/5$. Thus, the
second log term of \eqref{eq:c:total_compl_sc} is also logarithmic in problem parameters.
\end{proof}
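The case analysis above can be checked numerically. The following Python sketch (ours; the helper names and sampled values are illustrative) reproduces the choice of $\kappa$ and the bounds $c_q < 5$ (Case 1) and $c_q < 8$ (Case 2).

```python
# Illustrative check (ours) of the two-case choice of kappa from the
# proposition statement and the resulting bounds on
# c_q = (3 - sqrt q) / (1 - sqrt q).
import math

def choose_kappa(A, mu, n, lam):
    # Case 1 if A / (mu n) > 4 lam; otherwise Case 2 with kappa = lam.
    if A / (mu * n) > 4 * lam:
        return A / (mu * n) - lam
    return lam

def c_q(q):
    return (3 - math.sqrt(q)) / (1 - math.sqrt(q))

# Case 1: q = lam / (lam + kappa) < 1/4, hence c_q < 5.
A, mu, n, lam = 100.0, 0.5, 4, 0.5        # A / (mu n) = 50 > 4 lam = 2
kappa = choose_kappa(A, mu, n, lam)       # = 49.5
q = lam / (lam + kappa)
assert q < 0.25 and c_q(q) < 5

# Case 2: kappa = lam gives q = 1/2, hence c_q < 8.
kappa2 = choose_kappa(1.0, 1.0, 10, 1.0)  # A / (mu n) = 0.1 <= 4 lam
q2 = 1.0 / (1.0 + kappa2)
assert q2 == 0.5 and c_q(q2) < 8
```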
\begin{proposition_unnumbered} [\ref{prop:c:total_compl_sc:dec_smoothing_main}]
Consider the setting of Thm.~\ref{thm:catalyst:outer}.
Suppose $\lambda > 0$ and $\kappa_k = \kappa$, for all $k \ge 1$ and
that $\alpha_0$, $(\mu_k)_{k \ge 1}$ and $(\delta_k)_{k \ge 1}$
are chosen as in Cor.~\ref{cor:c:outer_sc:decreasing_mu_const_kappa},
with $q = \lambda/(\lambda + \kappa)$ and $\eta = 1- {\sqrt q}/{2}$.
If we run Algo.~\ref{algo:catalyst} with SVRG as the inner solver with these parameters,
the number of iterations $N$ of SVRG required to obtain $\wv$ such that $F(\wv) - F^* \le \eps$ is
bounded in expectation as
\begin{align*}
\expect[N] \le \widetilde \bigO \left( n
+ \frac{A_\omega}{\mu(\lambda + \kappa)\eps} \left( F(\wv_0) - F^* + \frac{\mu D_\omega}{1-\sqrt{q}} \right)
\right) \,.
\end{align*}
\end{proposition_unnumbered}
\begin{proof}
{
We continue to use shorthand $A:=A_\omega$, $D := D_\omega$.
First, let us consider the minimum number of outer iterations $K$ required to achieve $F(\wv_K) - F^* \le \eps$.
From Cor.~\ref{cor:c:outer_sc:decreasing_mu_const_kappa}, it suffices to have $\eta^{K/2} \Delta_0 \le \eps$, that is,
\[
K \ge K_{\min} := \frac{\log\left( {\Delta_0}/{\eps} \right)}{\log\left({1}/{\sqrt\eta}\right)} \,.
\]
For this smallest value, we have,
\begin{align} \label{eq:c:min_smoother}
\mu_{K_{\min}} = \mu \eta^{K_{\min}/2} = \frac{\mu \eps}{\Delta_0} \,.
\end{align}
Let $C, \tau$ be the linear convergence parameters of SVRG, and
define $L_k := \lambda + {A}/{\mu_k}$ for each $k\ge 1$.
Further, let $\mcT'$ be such that
\[
\mcT' \ge \max_{k\in\{1, \cdots, K_{\min}\}} \log\left( 8
\frac{C(L_k + \kappa, \lambda + \kappa)}{\tau(L_k + \kappa, \lambda+\kappa)} \frac{L_k + \kappa}{\kappa\delta} \right) \,.
\]
Then, the total complexity is, from Prop.~\ref{prop:c:inner_loop_final}, (ignoring absolute constants)
\begin{align}
\nonumber
\expect[N] &\le \sum_{k=1}^{K_{\min}} \left( n + \frac{\lambda + \kappa + \frac{A}{\mu_k}}{\lambda + \kappa} \right) \mcT' \\
\nonumber
&= \sum_{k=1}^{K_{\min}} \left( n+1 + \frac{\nicefrac{A}{\mu}}{\lambda + \kappa} \eta^{-k/2} \right) \mcT' \\
\nonumber
&= \left( K_{\min}(n+1) + \frac{\nicefrac{A}{\mu}}{\lambda + \kappa} \sum_{k=1}^{K_{\min}} \eta^{-k/2} \right) \mcT' \\
\nonumber
&\le \left( K_{\min}(n+1) + \frac{\nicefrac{A}{\mu}}{\lambda + \kappa}
\frac{\eta^{-K_{\min}/2}}{1 - \eta^{1/2} } \right) \mcT' \\
&= \left( (n+1)\frac{\log\left( \frac{\Delta_0}{\eps} \right)}{\log(\nicefrac{1}{\sqrt\eta})}
+ \frac{\nicefrac{A}{\mu}}{\lambda + \kappa} \frac{1}{1 - \sqrt\eta} \frac{\Delta_0}{\eps} \right) \mcT' \,.
\end{align}
It remains to bound $\mcT'$. Here, we use $\lambda + \frac{A}{\mu} \le L_k \le \lambda + \frac{A}{\mu_{K_{\min}}}$ for all $k \le K_{\min}$
together with \eqref{eq:c:min_smoother} to
note that $\mcT'$ is logarithmic in $\Delta_0/\eps, n, AD, \mu, \kappa, \lambda\inv$.
}
\end{proof}
\begin{proposition_unnumbered}[\ref{prop:c:total_compl_svrg_smooth}]
Consider the setting of Thm.~\ref{thm:catalyst:outer} and fix $\eps > 0$.
If we run Algo.~\ref{algo:catalyst} with SVRG as the inner solver with parameters:
$\mu_k = \mu ={\eps}/({20 D_\omega})$, $\alpha_0 = \tfrac{\sqrt{5} - 1}{2}$,
$\delta_k = {1}/{(k+1)^2}$, and $\kappa_k = \kappa = {A_\omega}/({\mu(n+1)})$.
Then, the number of iterations $N$ to get a point $\wv$ such that $F(\wv) - F^* \le \eps$ is
bounded in expectation as
\begin{align*}
\expect[N] \le \widetilde \bigO \left( n\sqrt{\frac{F(\wv_0) - F^*}{\eps}} +
\sqrt{A_\omega D_\omega n} \frac{\norma{2}{\wv_0 - \wv^*}}{\eps} \right) \, .
\end{align*}
\end{proposition_unnumbered}
\begin{proof}
We use shorthand $A:=A_\omega$, $D := D_\omega$, $L_\mu = \nicefrac{A}{\mu}$ and
$\Delta F_0 = F(\wv_0) - F^* + \frac{\kappa}{2} \normsq{\wv_0 -\wv^*}$.
Further, let $C, \tau$ be the linear convergence parameters of SVRG.
In Cor.~\ref{cor:c:outer_smooth}, the fact that $K \ge 1$ allows us to bound the contribution of the
smoothing as $10 \mu D$. So, we get that the number of outer iterations $K$ required to get
$F(\wv_K) - F^* \le \eps$ can be bounded as
\begin{align*}
K+1 \le \sqrt{\frac{8\Delta F_0}{\eps - 10 \mu D}} \,.
\end{align*}
Moreover, from our choice $\delta_k = 1 / (k+1)^2$ and Prop.~\ref{prop:c:inner_loop_final},
the number of inner iterations $T_k$ for inner loop $k$ satisfies
\begin{align*}
\expect[T_k] &\le \frac{1}{\tau(L_\mu + \kappa, \kappa)} \log\left(
\frac{8 C(L_\mu + \kappa, \kappa)}{\tau(L_\mu + \kappa, \kappa)} \cdot
\frac{L_\mu + \kappa}{\kappa} \cdot (k+1)^2 \right) + 1\\
&\le \frac{2}{\tau(L_\mu + \kappa, \kappa)} \log\left(
\frac{8 C(L_\mu + \kappa, \kappa)}{\tau(L_\mu + \kappa, \kappa)} \cdot
\frac{L_\mu + \kappa}{\kappa} \cdot {\frac{8\Delta F_0}{\eps - 10 \mu D}} \right) \,.
\end{align*}
Next, we consider the total number $N$ of SVRG iterations required to obtain an iterate $\wv$ such that
$F(\wv) - F^* \le \eps$. Using the fact that $\expect[N] \le \sum_{k=1}^K \expect[T_k]$, we
bound it as
\begin{align} \label{eq:c:total_compl_smooth}
\expect[N] \le \frac{2}{\tau(L_\mu + \kappa, \kappa)}
\sqrt{\frac{8 \Delta F_0}{\eps - 10\mu D}}
\log \left(
\frac{64 C(L_\mu + \kappa, \kappa)}{\tau(L_\mu + \kappa, \kappa)}
\frac{L_\mu + \kappa}{\kappa} \frac{\Delta F_0}{\eps - 10\mu D} \right)\,.
\end{align}
Now, we plug into \eqref{eq:c:total_compl_smooth} the values of $C, \tau$ for SVRG.
Note that $\kappa = {L_\mu}/({n+1})$. So we have,
\begin{align*}
\frac{1}{\tau(L_\mu + \kappa, \kappa)} &= 8 \frac{L_\mu + \kappa }{\kappa} + n \le 8 \left( \frac{L_\mu + \kappa }{\kappa} + n \right) = 16(n+1) \,, \text{ and } \\
C(L_\mu+\kappa, \kappa) &= \frac{L_\mu + \kappa}{\kappa} \left( 1 + \frac{n \tfrac{L_\mu+\kappa}{\kappa}}{8\tfrac{L_\mu+\kappa}{\kappa} + n} \right)
\le (n+2) \left(1 + \tfrac{n}{8} \right)\, .
\end{align*}
It now remains to assign $\mu = {\eps}/({20D})$ and plug $C, \tau$ from above into \eqref{eq:c:total_compl_smooth},
noting that $\kappa = {20A D}/({\eps(n+1)})$.
\end{proof}
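The algebra used in the last two displays can be verified numerically. The sketch below (ours, illustrative) checks that $\kappa = L_\mu/(n+1)$ gives $(L_\mu+\kappa)/\kappa = n+2$, $1/\tau \le 16(n+1)$, and $C \le (n+2)(1 + n/8)$.

```python
# Illustrative check (ours) of the identities used above when
# kappa = L_mu / (n + 1).

def check_smooth_case(L_mu, n):
    kappa = L_mu / (n + 1)
    ratio = (L_mu + kappa) / kappa           # should equal n + 2
    inv_tau = 8 * ratio + n                  # exact 1/tau for SVRG
    C = ratio * (1 + n * ratio / (8 * ratio + n))
    ok_ratio = abs(ratio - (n + 2)) < 1e-9
    ok_tau = inv_tau <= 16 * (n + 1) + 1e-9
    ok_C = C <= (n + 2) * (1 + n / 8) + 1e-9
    return ok_ratio and ok_tau and ok_C

assert all(
    check_smooth_case(L, n) for L in (0.5, 3.0, 1e4) for n in (1, 50, 1000)
)
```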
\begin{proposition_unnumbered}[\ref{prop:c:total_compl_nsc:dec_smoothing}]
Consider the setting of Thm.~\ref{thm:catalyst:outer}.
Suppose $\lambda = 0$ and that $\alpha_0$, $(\mu_k)_{k\ge 1}$,$ (\kappa_k)_{k\ge 1}$ and $(\delta_k)_{k \ge 1}$
are chosen as in Cor.~\ref{cor:c:outer_smooth_dec_smoothing}.
If we run Algo.~\ref{algo:catalyst} with SVRG as the inner solver with these parameters,
the number of iterations $N$ of SVRG required to obtain $\wv$ such that $F(\wv) - F^* \le \eps$ is
bounded in expectation as
\begin{align*}
\expect[N] \le \widetilde\bigO \left( \frac{1}{\eps}
\left( F(\wv_0) - F^* + \kappa \normasq{2}{\wv_0 - \wv^*} + \mu D \right)
\left( n + \frac{A_\omega}{\mu \kappa} \right)
\right) \,.
\end{align*}
\end{proposition_unnumbered}
\begin{proof}
We use shorthand $A:= A_\omega$, $D := D_\omega$ and
\begin{gather}
\label{eq:c:nsc:dec_smoothing_1}
\Delta_0 := 2(F(\wv_0) - F^*) + \kappa \normsq{\wv_0 - \wv^*} + 27 \mu D \, .
\end{gather}
From Cor.~\ref{cor:c:outer_smooth_dec_smoothing},
to obtain $F(\wv_K) - F^* \le \frac{\log(K+1)}{K+1} \Delta_0 \le \eps$, it suffices to take $K$ satisfying
(see Lemma~\ref{lem:c:helper_logx})
\begin{align} \label{eq:c:nsc:dec_smoothing}
K + 1 = \frac{2\Delta_0}{\eps} \log \frac{2\Delta_0}{\eps} \,.
\end{align}
Let $C, \tau$ be such that SVRG is linearly convergent with parameters $C, \tau$, and
define $L_k := {A}/{\mu_k}$ for each $k \ge 1$.
Further, let $\mcT'$ be such that
\[
\mcT' \ge \max_{k\in\{1, \cdots, K\}} \log\left( 8
\frac{C(L_k + \kappa, \kappa)}{\tau(L_k + \kappa, \kappa)} \frac{L_k + \kappa}{\kappa \delta_k} \right) \, .
\]
Clearly, $\mcT'$ is logarithmic in $K, n, AD, \mu, \kappa $.
From Prop.~\ref{prop:c:inner_loop_final}, the total complexity is (ignoring absolute constants)
\begin{align}
\expect[N] &\le \sum_{k=1}^K \left( n + \frac{\nicefrac{A}{\mu_k} + \kappa_k}{\kappa_k} \right) \mcT' \nonumber \\
&= \sum_{k=1}^K \left( n + 1 + \frac{A}{\mu_k\kappa_k} \right) \mcT' \nonumber \\
&= \sum_{k=1}^K \left( n + 1 + \frac{A}{\mu\kappa} \right) \mcT' \nonumber \\
&\le \left( n+ 1 + \frac{A}{\mu \kappa} \right)K \mcT' \,,
\end{align}
and plugging in $K$ from~\eqref{eq:c:nsc:dec_smoothing} completes the proof.
\end{proof}
\subsection{Prox-Linear Convergence Analysis} \label{sec:c:pl_struct_pred}
We first prove Lemma~\ref{lem:pl:struct_pred}, which verifies the assumption required by the prox-linear algorithm in the case of structured prediction.
\begin{lemma_unnumbered}[\ref{lem:pl:struct_pred}]
Consider the structural hinge loss $f(\wv) = \max_{\yv \in \mcY} \psi(\yv ; \wv) = h\circ \gv(\wv)$
where $h, \gv$ are as defined in \eqref{eq:mapping_def}.
If the mapping $\wv \mapsto \psi(\yv ; \wv)$ is $L$-smooth with respect to $\norma{2}{\cdot}$ for all
$\yv \in \mcY$, then it holds for all $\wv, \zv \in \reals^d$ that
\begin{align*}
|h(\gv(\wv+\zv)) - h(\gv(\wv) + \grad\gv(\wv) \zv)| \le \frac{L}{2}\normasq{2}{\zv}\,.
\end{align*}
\end{lemma_unnumbered}
\begin{proof}
For any $\Am \in \reals^{m \times d}$ and $\wv \in \reals^d$, and $\norma{2,1}{\Am}$ defined in~\eqref{eq:matrix_norm_defn},
notice that
\begin{align} \label{eq:pl-struc-pred-pf:norm}
\norma{\infty}{\Am\wv} \le \norma{2, 1}{\Am} \norma{2}{\wv} \,.
\end{align}
Now using the fact that the max function $h$ satisfies $|h(\uv') - h(\uv)| \le \norma{\infty}{\uv' - \uv}$
and the fundamental theorem of calculus $(*)$, we deduce
\begin{align}
|h(\gv(\wv+\zv)) - h(\gv(\wv) + \grad\gv(\wv) \zv)|
&\le \norma{\infty}{\gv(\wv+\zv)- \left( \gv(\wv) + \grad\gv(\wv) \zv \right) } \nonumber \\
&\stackrel{(*)}{\le} \norm*{\int_0^1 (\grad\gv(\wv + t\zv) - \grad\gv(\wv) )\zv \, dt }_{\infty}
\nonumber \\
&\stackrel{\eqref{eq:pl-struc-pred-pf:norm}}{\le}
\int_0^1 \norma{2,1}{\grad\gv(\wv + t\zv) - \grad\gv(\wv) } \norma{2}{\zv} \, dt \,.
\label{eq:pl-struc-pred-pf:1}
\end{align}
Note that the definition \eqref{eq:matrix_norm_defn} can equivalently be stated as
$\norma{2,1}{\Am} = \max_{\norma{1}{\uv}\le 1} \norma{2}{\Am\T \uv}$.
Given $\uv \in \reals^m$, we index its entries $u_\yv$ by $\yv \in \mcY$. Then, the matrix norm
in \eqref{eq:pl-struc-pred-pf:1} can be simplified as
\begin{align*}
\norma{2,1}{\grad\gv(\wv + t\zv) - \grad\gv(\wv) }
&= \max_{\norma{1}{\uv} \le 1} \bigg\|{\sum_{\yv \in \mcY} u_\yv ( \grad \psi( \yv ; \wv + t\zv)
- \grad \psi( \yv ; \wv)) } \bigg\|_2 \\
&\le \max_{\norma{1}{\uv} \le 1} \sum_{\yv \in \mcY} |u_\yv| \norma{2}{\grad \psi( \yv ; \wv + t\zv)
- \grad \psi( \yv ; \wv)} \\
&\le L t \norma{2}{\zv} \,,
\end{align*}
from the $L$-smoothness of $\psi$.
Plugging this back into \eqref{eq:pl-struc-pred-pf:1} completes the proof. The bound on the smothing approximation holds similarly by noticing that if $h$ is $1$-Lipschitz then $h_{\mu \omega}$ too since $\nabla h_{\mu\omega}(\uv) \in \dom h^*$ for any $\uv \in \dom h$.
\end{proof}
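The norm inequality \eqref{eq:pl-struc-pred-pf:norm} can also be spot-checked numerically. The sketch below (ours, illustrative) uses the observation that, by the equivalent formulation above, $\norma{2,1}{\Am}$ equals the largest Euclidean row norm of $\Am$: the maximum of the convex function $\uv \mapsto \norma{2}{\Am\T\uv}$ over the $\ell_1$ ball is attained at a vertex $\pm\ev_i$.

```python
# Illustrative numeric check (ours) of ||A w||_inf <= ||A||_{2,1} ||w||_2,
# using that ||A||_{2,1} reduces to the largest l2 row norm of A.
import math
import random

def norm_2_1(A):
    # ||A||_{2,1} = max_{||u||_1 <= 1} ||A^T u||_2 = largest l2 row norm.
    return max(math.sqrt(sum(a * a for a in row)) for row in A)

def inequality_holds(m=5, d=4, trials=200, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        A = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(m)]
        w = [rng.uniform(-1, 1) for _ in range(d)]
        Aw_inf = max(abs(sum(a * x for a, x in zip(row, w))) for row in A)
        w_2 = math.sqrt(sum(x * x for x in w))
        if Aw_inf > norm_2_1(A) * w_2 + 1e-12:
            return False
    return True

assert inequality_holds()
```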
\subsection{Information Based Complexity of the Prox-Linear Algorithm with {Casimir-SVRG}} \label{sec:c:pl_proofs}
\begin{proposition_unnumbered}[\ref{prop:pl:total_compl}]
Consider the setting of Thm.~\ref{thm:pl:outer-loop}. Suppose the sequence $\{\eps_k\}_{k\ge 1}$
satisfies $\eps_k = \eps_0 / k$ for some $\eps_0 > 0$ and that
the subproblem of Line~\ref{line:pl:algo:subprob} of Algo.~\ref{algo:prox-linear} is solved using
{Casimir-SVRG}{} with the settings of Prop.~\ref{prop:c:total_compl_svrg_sc}.
Then, the total number of SVRG iterations $N$ required to produce a $\wv$ such that
$\norma{2}{\bm{\varrho}_\eta(\wv)} \le \eps$ is bounded as
\begin{align*}
\expect[N] \le \widetilde\bigO\left(
\frac{n}{\eta \eps^2} \left(F(\wv_0) - F^* + \eps_0 \right) +
\frac{\sqrt{A_\omega D_\omega n \eps_0\inv}}{\eta \eps^3} \left( F(\wv_0) - F^* + \eps_0 \right)^{3/2}
\right) \, .
\end{align*}
\end{proposition_unnumbered}
\begin{proof}
First note that $\sum_{k=1}^{K} \eps_k \le \eps_0 \sum_{k=1}^K k\inv \le 4 \eps_0 \log K$
for $K \ge 2$. Let $\Delta F_0 := F(\wv_0) - F^*$ and use shorthand $A, D$ for $A_\omega, D_\omega$ respectively.
From Thm.~\ref{thm:pl:outer-loop}, the number $K$ of prox-linear iterations required to find a $\wv$ such that
$\norma{2}{\bm{\varrho}_\eta(\wv)} \le \eps$ must satisfy
\begin{align*}
\frac{2}{\eta K} \left( \Delta F_0 + 4\eps_0 \log K \right) \le \eps \,.
\end{align*}
For this, it suffices to have (see e.g., Lemma~\ref{lem:c:helper_logx})
\begin{align*}
K \ge \frac{4(\Delta F_0 + 4 \eps_0)}{\eta\eps^2} \log\left( \frac{4(\Delta F_0 + 4 \eps_0)}{\eta\eps^2} \right) \,.
\end{align*}
Before we can invoke Prop.~\ref{prop:c:total_compl_svrg_sc},
we need to bound the dependence of each inner loop on its warm start:
$F_\eta(\wv_{k-1} ; \wv_{k-1}) - F_\eta(\wv_{k}^* ; \wv_{k-1})$ in terms of problem parameters,
where $\wv_k^* = \argmin_{\wv} F_\eta(\wv ; \wv_{k-1})$
is the result of an exact prox-linear step.
We note that $F_\eta(\wv_{k-1} ; \wv_{k-1}) = F(\wv_{k-1}) \le F(\wv_0)$, by Line~\ref{line:pl:algo:accept}
of Algo.~\ref{algo:prox-linear}.
Moreover, from $\eta \le 1/L$ and Asmp.~\ref{asmp:pl:upper-bound}, we have,
\begin{align*}
F_\eta(\wv_k^* ; \wv_{k-1}) &= \frac{1}{n} \sum_{i=1}^n h \big( \gv\pow{i}(\wv_{k-1}) + \grad \gv\pow{i}(\wv_{k-1})(\wv_k^* - \wv_{k-1}) \big)
+ \frac{\lambda}{2}\normasq{2}{\wv_k^*} + \frac{1}{2\eta} \normasq{2}{\wv_k^* - \wv_{k-1}} \\
&\ge \frac{1}{n} \sum_{i=1}^n h \big( \gv\pow{i}(\wv_{k-1}) + \grad \gv\pow{i}(\wv_{k-1})(\wv_k^* - \wv_{k-1}) \big)
+ \frac{\lambda}{2}\normasq{2}{\wv_k^*} + \frac{L}{2} \normasq{2}{\wv_k^* - \wv_{k-1}} \\
&\ge \frac{1}{n} \sum_{i=1}^n h \big( \gv\pow{i}(\wv_k^*) \big)
+ \frac{\lambda}{2}\normasq{2}{\wv_k^*} \\
&= F(\wv_k^*) \ge F^* \,.
\end{align*}
Thus, we bound $F_\eta(\wv_{k-1} ; \wv_{k-1}) - F_\eta(\wv_{k}^* ; \wv_{k-1}) \le \Delta F_0$.
We now invoke Prop.~\ref{prop:c:total_compl_svrg_sc} and collect all constants and terms logarithmic in
$n$, $\eps\inv, \eps_0 \inv$, $\Delta F_0$, $\eta\inv$, $A_\omega D_\omega$ in $\mcT, \mcT', \mcT''$.
We note that all terms in the logarithm in Prop.~\ref{prop:c:total_compl_svrg_sc} are logarithmic in the problem parameters here.
Letting $N_k$ be the number of
SVRG iterations required for iteration $k$, we get,
\begin{align*}
\expect[N] &= \sum_{k=1}^K \expect[N_k]
\le \sum_{k=1}^K \left( n + \sqrt{\frac{\eta A D n}{\eps_k}} \right) \, \mcT \\
&\le \left[ nK + \sqrt{\frac{\eta A D n}{\eps_0}} \left( \sum_{k=1}^K \sqrt{k} \right) \right] \, \mcT \\
&\le \left[ nK + \sqrt{\frac{\eta A D n}{\eps_0}}\, K^{3/2} \right] \, \mcT' \\
&\le \left[ \frac{n}{\eta \eps^2} (\Delta F_0 + \eps_0)
+ \sqrt{\frac{\eta A D n}{\eps_0}}\, \left( \frac{\Delta F_0 + \eps_0}{\eta \eps^2} \right)^{3/2}
\right] \, \mcT'' \\
&= \left[ \frac{n}{\eta \eps^2} (\Delta F_0 + \eps_0)
+ \frac{\sqrt{ADn}}{\eta \eps^3} \frac{(\Delta F_0 + \eps_0)^{3/2}}{\sqrt{\eps_0}}
\right] \, \mcT'' \,.
\end{align*}
\end{proof}
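Two elementary bounds used in this proof, $\sum_{k=1}^K 1/k \le 4 \log K$ for $K \ge 2$ and $\sum_{k=1}^K \sqrt{k} \le K^{3/2}$, can be spot-checked numerically; the sketch below is ours and purely illustrative.

```python
# Illustrative checks (ours) of two elementary bounds used in the proof:
# sum_{k=1}^K 1/k <= 4 log K for K >= 2, and sum_{k=1}^K sqrt(k) <= K^{3/2}.
import math

def harmonic_bound_holds(K):
    return sum(1.0 / k for k in range(1, K + 1)) <= 4 * math.log(K)

def sqrt_sum_bound_holds(K):
    return sum(math.sqrt(k) for k in range(1, K + 1)) <= K ** 1.5

assert all(harmonic_bound_holds(K) for K in range(2, 2000))
assert all(sqrt_sum_bound_holds(K) for K in range(1, 2000))
```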
\subsection{Some Helper Lemmas} \label{subsec:a:catalyst:helper}
The first lemma is a property of the squared Euclidean norm from \citet[Lemma 5]{lin2017catalyst},
which we restate here.
\begin{lemma}\label{lem:c:helper:quadratic}
For any vectors, $\wv, \zv, \rv \in \reals^d$, we have, for any $\theta > 0$,
\begin{align*}
\normsq{\wv - \zv} \ge (1-\theta) \normsq{\wv - \rv} + \left( 1 - \frac{1}{\theta} \right) \normsq{\rv - \zv} \,.
\end{align*}
\end{lemma}
The next lemmas consider rates of the sequences $(\alpha_k)$ and $(A_k)$ under different recursions.
\begin{lemma} \label{lem:c:A_k:const_kappa}
Define a sequence $(\alpha_k)_{k \ge 0}$ as
\begin{align*}
\alpha_0 &= \frac{\sqrt 5 - 1}{2} \\
\alpha_k^2 &= (1 - \alpha_k) \alpha_{k-1}^2 \,.
\end{align*}
Then this sequence satisfies
\begin{align*}
\frac{\sqrt 2}{k+3} \le \alpha_k \le \frac{2}{k+3} \,.
\end{align*}
Moreover, $A_k := \prod_{j=0}^k (1-\alpha_j)$ satisfies
\begin{align*}
\frac{2}{(k+3)^2} \le A_k \le \frac{4}{(k+3)^2} \,.
\end{align*}
\end{lemma}
\begin{proof}
Notice that $\alpha_0$ satisfies $\alpha_0^2 = 1 - \alpha_0$.
Further, it is clear from the definition that $\alpha_k \in (0, 1)$ for all $k \ge 0$.
Hence, we can define a sequence $(b_k)_{k\ge 0}$ such that $b_k := 1/\alpha_k$.
It satisfies the recurrence, $b_k^2 - b_k = b_{k-1}^2$ for $k \ge 1$,
or in other words, $b_k = \tfrac{1}{2}\left( 1 + \sqrt{1 + 4 b_{k-1}^2} \right)$.
From this we get,
\begin{align*}
b_k &\ge b_{k-1} + \frac{1}{2} \ge b_0 + \frac{k}{2} \ge \frac{3}{2} + \frac{k}{2} \,.
\end{align*}
since $b_0 = \frac{\sqrt 5 + 1}{2}$. This gives us the upper bound on $\alpha_k$.
Moreover, unrolling the recursion,
\begin{align} \label{eq:c:helper:2_}
\alpha_k^2 = (1- \alpha_k) \alpha_{k-1}^2 = A_k \frac{\alpha_0^2}{1 - \alpha_0} = A_k \, .
\end{align}
Since $\alpha_k \le 2/(k+3)$, \eqref{eq:c:helper:2_} yields the upper bound on $A_k$.
The upper bound on $\alpha_k$ again gives us,
\begin{align*}
A_k \ge \prod_{i=0}^k \left( 1 - \frac{2}{i+3} \right) = \frac{2}{(k+2)(k+3)} \ge \frac{2}{(k+3)^2} \,,
\end{align*}
to get the lower bound on $A_k$. Invoking \eqref{eq:c:helper:2_} again to obtain the lower bound on $\alpha_k$
completes the proof.
\end{proof}
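The bounds in Lemma~\ref{lem:c:A_k:const_kappa} can be verified numerically by iterating the recursion; the Python sketch below (ours, illustrative) solves the quadratic defining $\alpha_k$ at each step.

```python
# Illustrative numeric verification (ours) of the bounds
# sqrt(2)/(k+3) <= alpha_k <= 2/(k+3) and 2/(k+3)^2 <= A_k <= 4/(k+3)^2
# for the recursion alpha_k^2 = (1 - alpha_k) alpha_{k-1}^2.
import math

def alpha_sequence(K):
    alphas = [(math.sqrt(5) - 1) / 2]
    for _ in range(K):
        p2 = alphas[-1] ** 2
        # positive root of a^2 + p2 * a - p2 = 0
        alphas.append((-p2 + math.sqrt(p2 * p2 + 4 * p2)) / 2)
    return alphas

def lemma_bounds_hold(K=200):
    A = 1.0
    for k, a in enumerate(alpha_sequence(K)):
        A *= (1 - a)
        if not (math.sqrt(2) / (k + 3) <= a <= 2 / (k + 3)):
            return False
        if not (2 / (k + 3) ** 2 <= A <= 4 / (k + 3) ** 2):
            return False
    return True

assert lemma_bounds_hold()
```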
The next lemma considers the evolution of the sequences $(\alpha_k)$ and $(A_k)$ with a different recursion.
\begin{lemma} \label{lem:c:A_k:inc_kappa}
Consider a sequence $(\alpha_k)_{k\ge 0}$ defined by $\alpha_0 = \frac{\sqrt{5}- 1}{2}$, and,
for $k \ge 1$, $\alpha_{k}$ as the non-negative root of
\begin{align*}
\frac{\alpha_k^2}{1 - \alpha_k} = \alpha_{k-1}^2 \frac{k}{k+1} \,.
\end{align*}
Further, define
\begin{align*}
A_k = \prod_{i=0}^k ( 1- \alpha_i) \, .
\end{align*}
Then, we have for all $k\ge 0$,
\begin{align}
\frac{1}{k+1} \left(1 - \frac{1}{\sqrt2} \right) \le A_k \le \frac{1}{k+2} \, .
\end{align}
\end{lemma}
\begin{proof}
Define a sequence $(b_k)_{k\ge 0}$ such that $b_k = 1/\alpha_k$, for each $k$. This is well-defined because
$\alpha_k \neq 0$, which may be verified by induction. This sequence satisfies the recursion for $k\ge 1$:
$b_k (b_k -1) = \left( \frac{k+1}{k} \right)b_{k-1}^2$.
From this recursion, we get,
\begin{align}
\nonumber
b_k &= \frac{1}{2} \left( 1 + \sqrt{1 + 4 b_{k-1}^2 \left( \frac{k+1}{k} \right)} \right) \\
\nonumber
&\ge \frac{1}{2} + b_{k-1} \sqrt\frac{k+1}{k} \\
\nonumber
&\ge \frac{1}{2}\left( 1 + \sqrt{\frac{k+1}{k}} + \cdots + \sqrt{\frac{k+1}{2}} \right) + b_0\sqrt{k+1} \\
\nonumber
&= \frac{\sqrt{k+1}}{2} \left( 1/\sqrt 2 + \cdots + 1/\sqrt{k+1} \right) + b_0 \sqrt{k+1} \\
&\stackrel{(*)}{\ge} \sqrt{k+1}\left( \sqrt{k+2} + b_0 - \sqrt 2 \right) \,,
\end{align}
where $(*)$ followed from noting that $1/\sqrt{2}+\cdots+1/\sqrt{k+1} \ge \int_2^{k+2} \frac{dx}{\sqrt x}
= 2(\sqrt{k+2}-\sqrt 2)$\,.
Since $b_0 = 1/\alpha_0 = \frac{\sqrt{5} + 1}{2} > \sqrt 2$, we have, for $k \ge 1$,
\begin{align}
\alpha_k \le \frac{1}{\sqrt{k+1}(\sqrt{k+2}+ b_0 - \sqrt{2})} \le \frac{1}{\sqrt{k+1}\sqrt{k+2}} \,.
\end{align}
This relation also clearly holds for $k=0$. Next, we claim that
\begin{align}
A_k = (k+1) \alpha_k^2 \le \frac{k+1}{(\sqrt{k+1}\sqrt{k+2})^2} = \frac{1}{k+2}\, .
\end{align}
Indeed, this is true because
\begin{align*}
\alpha_k^2 = (1 - \alpha_k) \alpha_{k-1}^2 \frac{k}{k+1} = A_k \frac{\alpha_0^2}{1 - \alpha_0} \frac{1}{k+1}
= \frac{A_k}{k+1} \,.
\end{align*}
For the lower bound, we have,
\begin{align*}
A_k = \prod_{i=0}^k (1 - \alpha_i)
\ge \prod_{i=0}^k \left(1 - \frac{1}{\sqrt{i+1}\sqrt{i+2}} \right)
\ge \left( 1 - \frac{1}{\sqrt2} \right) \prod_{i=1}^k \left(1 - \frac{1}{i+1} \right)
= \frac{1 - \frac{1}{\sqrt{2}}}{k+1} \, .
\end{align*}
\end{proof}
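As for the previous lemma, the bounds in Lemma~\ref{lem:c:A_k:inc_kappa} can be verified numerically; the sketch below (ours, illustrative) iterates the recursion and checks the two-sided bound on $A_k$.

```python
# Illustrative numeric verification (ours) of
# (1 - 1/sqrt(2))/(k+1) <= A_k <= 1/(k+2) for the recursion
# alpha_k^2 / (1 - alpha_k) = alpha_{k-1}^2 * k / (k+1).
import math

def A_sequence(K):
    a = (math.sqrt(5) - 1) / 2
    A = 1 - a
    out = [A]
    for k in range(1, K + 1):
        c = a * a * k / (k + 1)            # alpha_k^2 = c (1 - alpha_k)
        a = (-c + math.sqrt(c * c + 4 * c)) / 2
        A *= (1 - a)
        out.append(A)
    return out

def lemma_bounds_hold(K=300):
    lo = 1 - 1 / math.sqrt(2)
    return all(
        lo / (k + 1) <= A <= 1 / (k + 2)
        for k, A in enumerate(A_sequence(K))
    )

assert lemma_bounds_hold()
```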
\begin{lemma} \label{lem:c:helper_logx}
Fix some $\eps > 0$.
If $k \ge \frac{2}{\eps} \log \frac{2}{\eps}$, then we have that
$\frac{\log k}{k} \le \eps$.
\end{lemma}
\begin{proof}
We have, since $x \mapsto \nicefrac{\log x}{x}$ is nonincreasing for $x \ge e$ and $\log x \le x$ for $x > 0$,
\begin{align*}
\frac{\log k }{k} \le \frac{\log\frac{2}{\eps} + \log\log\frac{2}{\eps}}{\frac{2}{\eps} \log\frac{2}{\eps}}
= \frac{\eps}{2} \left( 1 + \frac{\log\log\frac{2}{\eps}}{\log\frac{2}{\eps}} \right) \le \eps \,.
\end{align*}
\end{proof}
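Lemma~\ref{lem:c:helper_logx} admits a direct numeric spot-check at the threshold value of $k$; the sketch below is ours and illustrative.

```python
# Illustrative check (ours) of the lemma: at k = (2/eps) log(2/eps),
# we indeed have log(k)/k <= eps.
import math

def lemma_holds_at_threshold(eps):
    k = (2 / eps) * math.log(2 / eps)
    return math.log(k) / k <= eps

assert all(lemma_holds_at_threshold(e) for e in (0.5, 0.1, 1e-2, 1e-4, 1e-8))
```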
\subsection{Related Work} \label{sec:related_work}
\begin{table*}[t!]
\caption{\small{Convergence rates given in terms of the number of calls to various oracles for different optimization algorithms
on the learning problem~\eqref{eq:c:main:prob} in
case of structural support vector machines~\eqref{eq:pgm:struc_hinge}.
The rates are specified in terms of the target accuracy $\eps$,
the number of training examples $n$, the
regularization $\lambda$, the
size of the label space~$\scriptsize{\abs\mcY}$, the
max feature norm $R=\max_i \norma{2}{\Phi(\xv\pow{i},\yv) - \Phi(\xv\pow{i}, \yv\pow{i})}$ and $\widetilde R \ge R$
(see Remark~\ref{remark:smoothing:l2vsEnt} for explicit form).
The rates are specified up to constants and factors logarithmic in the
problem parameters. The dependence on the initial error is ignored.
* denotes algorithms that make $\bigO(1)$ oracle calls per iteration.
\vspace{2mm}
}}
\label{tab:rates}
\footnotesize\setlength{\tabcolsep}{2pt}
\begin{minipage}{.32\linewidth}
\centering
\begin{adjustbox}{width=1\textwidth}
\begin{tabular}{|c|c|}
\hline
\rule{0pt}{10pt}
\textbf{Algo.} (\textit{exp} oracle) & \textbf{\# Oracle calls} \\[0.45ex] \hline\hline
\rule{0pt}{15pt}
\begin{tabular}{c} Exponentiated \\ gradient* \\ \citep{collins2008exponentiated}\end{tabular} &
$\dfrac{(n + \log |\mcY|) R^2 }{\lambda \eps}$ \\[2.54ex] \hline
\rule{0pt}{15pt}
\begin{tabular}{c} Excessive gap \\ reduction \\ \citep{zhang2014accelerated} \end{tabular} &
$n R \sqrt{\dfrac{\log |\mcY|}{\lambda \eps}}$ \\[2.54ex] \hline
\rule{0pt}{15pt}
\begin{tabular}{c} Prop.~\ref{prop:c:total_compl_svrg_main}*, \\ entropy smoother \end{tabular}
& $\sqrt{\dfrac{nR^2 \log\abs\mcY}{\lambda \eps}}$ \\[2.54ex] \hline
\rule{0pt}{15pt}
\begin{tabular}{c} Prop.~\ref{prop:c:total_compl_sc:dec_smoothing_main}*, \\ entropy smoother \end{tabular}
& $n + {\dfrac{R^2 \log\abs\mcY}{\lambda \eps}}$ \\[2.54ex] \hline
\end{tabular}
\end{adjustbox}
\end{minipage} \hspace{2.6mm}%
\begin{minipage}{.35\linewidth}
\centering
\begin{adjustbox}{width=1\textwidth}
\begin{tabular}{|c|c|}
\hline
\rule{0pt}{10pt}
\textbf{Algo.} (\textit{max} oracle) & \textbf{\# Oracle calls} \\[0.45ex] \hline\hline
\rule{0pt}{15pt}
\begin{tabular}{c} BMRM \\ \citep{teo2009bundle}\end{tabular} &
$\dfrac{n R^2}{\lambda \eps}$ \\[2ex] \hline
\rule{0pt}{15pt}
\begin{tabular}{c} QP 1-slack \\ \citep{joachims2009cutting} \end{tabular}&
$\dfrac{n R^2}{\lambda \eps}$ \\[2ex] \hline
\rule{0pt}{15pt}
\begin{tabular}{c} Stochastic \\ subgradient* \\ \citep{shalev2011pegasos} \end{tabular}&
$\dfrac{R^2}{\lambda \eps}$ \\[2ex] \hline
\rule{0pt}{15pt}
\begin{tabular}{c} Block-Coordinate \\ Frank-Wolfe* \\ \citep{lacoste2012block} \end{tabular} &
$n + \dfrac{R^2}{\lambda \eps}$ \\[2ex] \hline
\end{tabular}
\end{adjustbox}
\end{minipage} \hspace{2.2mm}
\begin{minipage}{.25\linewidth}
\centering
\medskip
\begin{adjustbox}{width=1\textwidth}
\begin{tabular}{|c|c|}
\hline
\rule{0pt}{0pt}
\begin{tabular}{c} \textbf{Algo.} \\ (\textit{top-$K$} oracle) \end{tabular}
& \textbf{\# Oracle calls} \\[0.45pt] \hline\hline
\rule{0pt}{12pt}
\begin{tabular}{c} Prop.~\ref{prop:c:total_compl_svrg_main}*, \\ $\ell_2^2$ smoother \end{tabular}
& $\sqrt{\dfrac{n{\widetilde R}^2}{\lambda \eps}}$ \\[2.54ex] \hline
\rule{0pt}{12pt}
\begin{tabular}{c} Prop.~\ref{prop:c:total_compl_sc:dec_smoothing_main}*, \\ $\ell_2^2$ smoother \end{tabular}
& $n + {\dfrac{{\widetilde R}^2}{\lambda \eps}}$ \\[2.45ex] \hline
\end{tabular}
\end{adjustbox}
\end{minipage}%
\end{table*}
\paragraph{Optimization for Structural Support Vector Machines}
Table~\ref{tab:rates} gives an overview of different optimization algorithms designed for structural
support vector machines.
Early works~\citep{taskar2004max,tsochantaridis2004support,joachims2009cutting,teo2009bundle}
considered batch dual quadratic optimization (QP) algorithms.
The stochastic subgradient method operates directly
on the non-smooth primal formulation~\citep{ratliff2007approximate,shalev2011pegasos}.
More recently,~\citet{lacoste2012block} proposed a block coordinate Frank-Wolfe (BCFW) algorithm to
optimize the dual formulation of structural support vector machines; see also~\citet{osokin2016minding} for variants and extensions.
Saddle-point or primal-dual approaches include
the mirror-prox algorithm~\citep{taskar2006structured,cox2014dual,he2015semi}. \citet{palaniappan2016stochastic} propose an incremental optimization algorithm for saddle-point problems; however, it is unclear how to extend it to the structured prediction problems considered here. Incremental optimization algorithms for conditional random fields were proposed by~\citet{schmidt2015non}.
We focus here on primal optimization algorithms in order to be able to
train structured prediction models with affine or nonlinear mappings
with a unified approach, and on incremental optimization algorithms which can scale to large datasets.
\paragraph{Inference}
The idea of dynamic programming inference in tree-structured graphical models has been around
since the pioneering works of \citet{pearl1988probabilistic} and \citet{dawid1992applications}.
Other techniques emerged based on graph cuts \citep{greig1989exact,ishikawa1998segmentation},
bipartite matchings \citep{cheng1996maximum,taskar2005discriminative} and search
algorithms \citep{daume2005learning,lampert2008beyond,lewis2014ccg,he2017deep}.
For graphical models that do not admit such discrete structure,
techniques based on loopy belief propagation~\citep{mceliece1998turbo,murphy1999loopy},
linear programming (LP)~\citep{schlesinger1976syntactic}, dual decomposition \citep{johnson2008convex}
and variational inference \citep{wainwright2005map,wainwright2008graphical}
gained popularity.
\paragraph{Top-$K$ Inference}
Smooth inference oracles with $\ell_2^2$ smoothing echo older heuristics
in speech and language processing~\citep{jurafsky2014speech}.
Combinatorial algorithms for top-$K$ inference have been studied extensively by the graphical models community under the name
``$M$-best MAP''.
\citet{seroussi1994algorithm} and \citet{nilsson1998efficient}
first considered the problem of finding the $K$ most probable configurations
in a tree structured graphical model.
Later, \citet{yanover2004finding} presented the Best Max-Marginal First algorithm which solves this problem with access only to
an oracle that computes max-marginals.
We also use this algorithm in Sec.~\ref{subsec:smooth_inference_loopy}.
\citet{fromer2009lp} study top-$K$ inference for the LP relaxation, while
\citet{batra2012efficient} consider the dual problem to exploit graph structure.
\citet{flerova2016searching} study top-$K$ extensions of the popular $\text{A}^\star$
and branch and bound search algorithms in the context of graphical models.
Other related approaches include diverse $K$-best solutions \citep{batra2012diverse} and
finding $K$-most probable modes \citep{chen2013computing}.
\paragraph{Smoothing Inference}
Smoothing for inference was used to speed up iterative algorithms for continuous relaxations.
\citet{johnson2008convex} considered smoothing dual decomposition inference using the entropy smoother,
followed by \citet{jojic2010accelerated} and \citet{savchynskyy2011study} who studied its theoretical properties.
\citet{meshi2012convergence} expand on this study to include $\ell_2^2$ smoothing.
Explicitly smoothing discrete inference algorithms in order to smooth the learning problem was considered by
\citet{zhang2014accelerated} and \citet{song2014learning} using the entropy and $\ell_2^2$ smoothers respectively.
The $\ell_2^2$ smoother was also used by \citet{martins2016softmax}.
\citet{hazan2016blending} consider the approach of blending learning and inference, instead of using inference
algorithms as black-box procedures.
Ideas related to ours appear in the independent works~\citep{mensch2018differentiable,niculae2018sparsemap}.
These works partially overlap with ours, but adopt different perspectives,
making them complementary to each other. \citet{mensch2018differentiable} proceed
differently when, e.g., smoothing inference based on dynamic programming.
Moreover, they do not establish complexity bounds
for optimization algorithms making calls to the resulting smooth inference oracles.
We define smooth inference oracles in the context
of black-box first-order optimization and
establish worst-case complexity bounds for incremental optimization algorithms making calls to these oracles.
Indeed, we relate the amount of smoothing, controlled by $\mu$, to the resulting complexity of the optimization algorithms relying on smooth inference oracles.
\paragraph{End-to-end Training of Structured Prediction}
The general framework for global training of structured prediction models was introduced by~\citet{bottou1990framework}
and applied to handwriting recognition by~\citet{bengio1995lerec} and to document processing by~\citet{bottou1997global}.
This approach, now called ``deep structured prediction'', was used, e.g.,
by \citet{collobert2011natural} and \citet{belanger2016structured}.
\subsection{Notation}
Vectors are denoted by bold lowercase characters as $\wv \in \reals^d$ while matrices are denoted by bold uppercase characters as $\Am \in \reals^{d \times n}$.
For a matrix $\Am \in \reals^{m \times n}$, define the norm for $\alpha,\beta \in \{1, 2, \infty\}$,
\begin{align} \label{eq:matrix_norm_defn}
\norma{\beta, \alpha}{\Am} = \max\{ \inp{\yv}{\Am\xv} \, | \, \norma{\alpha}{\yv} \le 1 \, , \, \norma{\beta}{\xv} \le 1 \}
\,.
\end{align}
For any function $f: \reals^d \to \reals \cup \{ +\infty \}$, its convex conjugate $f^*:\reals^d \to \reals \cup \{+\infty\}$ is defined as
\begin{align*}
f^*(\zv) = \sup_{\wv \in \reals^d} \left\{ \inp{\zv}{\wv} - f(\wv) \right\} \, .
\end{align*}
A function $f : \reals^d \to \reals$ is said to be $L$-smooth with respect to an arbitrary norm $\norm{\cdot}$
if it is continuously differentiable and its gradient $\grad f$
is $L$-Lipschitz with respect to $\norm{\cdot}$.
When left unspecified, $\norm{\cdot}$ refers to $\norma{2}{\cdot}$.
Given a continuously differentiable map $\gv : \reals^d \to \reals^m$,
its Jacobian $\grad \gv(\wv) \in \reals^{m \times d}$ at $\wv \in \reals^d$
is defined so that its $ij$th entry is $[\grad \gv(\wv)]_{ij} = \partial g_i(\wv) / \partial w_j$
where $g_i$ is the $i$th element of $\gv$ and $w_j$ is the $j$th element of $\wv$.
The vector valued function $\gv : \reals^d \to \reals^m$ is said to be $L$-smooth with respect to $\norm{\cdot}$
if it is continuously differentiable and
its Jacobian $\grad \gv$ is $L$-Lipschitz with respect to $\norm{\cdot}$.
For a vector $\zv \in \reals^m$, $z_{(1)} \ge \cdots \ge z_{(m)}$ refer to its components enumerated in non-increasing order
where ties are broken arbitrarily.
Further, we let $\zv_{[k]} = (z_{(1)}, \cdots, z_{(k)}) \in \reals^k$ denote the vector of the $k$ largest components of $\zv$.
We denote by $\Delta^{m-1}$ the standard probability simplex in $\reals^{m}$.
When the dimension is clear from the context, we shall simply denote it by $\Delta$.
Moreover, for a positive integer $p$, $[p]$ refers to the set $\{1, \ldots,p\}$.
Lastly, $\widetilde \bigO$ in the big-$\bigO$ notation hides factors logarithmic
in problem parameters.
\subsection{Structural Hinge Loss}
On a given input-output pair $(\xv, \yv)$, the error of the prediction $\yv^*(\xv; \wv)$ made by the inference procedure with
score function $\phi(\cdot, \cdot; \wv)$ is measured by a task loss $\ell \big( \yv, \yv^*(\xv; \wv) \big)$, such as the Hamming loss.
The learning procedure would then aim to find the best parameter $\wv$ that minimizes
the loss on a given dataset of input-output training examples.
However, the resulting problem is piecewise constant and hard to optimize.
Instead, \citet{altun2003hidden,taskar2004max,tsochantaridis2004support} propose to minimize a majorizing surrogate of the task loss,
called the structural hinge loss defined on an input-output pair $(\xv\pow{i}, \yv\pow{i})$ as
\begin{align}\label{eq:pgm:struc_hinge}
f\pow{i}(\wv) = \max_{\yv \in \mcY}
\left\{ \phi(\xv\pow{i}, \yv ; \wv) + \ell(\yv\pow{i}, \yv) \right\}
- \phi(\xv\pow{i}, \yv\pow{i} ; \wv) = \max_{\yv \in \mcY} \psi\pow{i}(\yv ; \wv) \,,
\end{align}
where $\psi\pow{i}(\yv ; \wv) = \phi(\xv\pow{i}, \yv ; \wv) + \ell(\yv\pow{i}, \yv) - \phi(\xv\pow{i}, \yv\pow{i} ; \wv)$ is the augmented score function.
This approach, known as {\em max-margin structured prediction},
builds upon binary and multi-class support vector machines~\citep{crammer2001algorithmic}, where the term $\ell(\yv\pow{i}, \yv)$ inside the maximization in \eqref{eq:pgm:struc_hinge}
generalizes the notion of margin.
The task loss $\ell$ is assumed to possess appropriate structure
so that the maximization inside \eqref{eq:pgm:struc_hinge}, known as {\em loss augmented inference},
is no harder than the inference problem in \eqref{eq:pgm:inference}.
When considering a fixed input-output pair $(\xv\pow{i}, \yv\pow{i})$,
we drop the index with respect to the sample $i$ and consider the
structural hinge loss as
\begin{equation}\label{eq:struct_hinge}
f(\wv) = \max_{\yv \in \mcY} \psi(\yv;\wv) \,.
\end{equation}
When the map $\wv \mapsto \psi(\yv ; \wv)$ is affine,
the structural hinge loss $f$ and the objective $F$ from \eqref{eq:c:main:prob} are both convex;
we refer to this case as the structural support vector machine. When $\wv \mapsto \psi(\yv ; \wv)$
is a nonlinear but smooth map, then the structural hinge loss $f$ and the objective $F$ are nonconvex.
\subsection{Smoothing Strategy}
A convex, non-smooth function $h$ can be smoothed by taking its infimal convolution with a smooth function~\citep{beck2012smoothing}.
We now recall its dual representation, which \citet{nesterov2005smooth}
first used to relate the amount of smoothing to optimal complexity bounds.
\begin{definition} \label{defn:smoothing:inf-conv}
For a given convex function $h:\reals^m \to \reals$, a smoothing function $\omega: \dom h^* \to \reals$ which is
1-strongly convex with respect to $\norma{\alpha}{\cdot}$ (for $\alpha \in \{1,2\}$),
and a parameter $\mu > 0$, define
\begin{align*}
h_{\mu \omega}(\zv) = \max_{\uv \in \dom h^*} \left\{ \inp{\uv}{\zv} - h^*(\uv) - \mu \omega(\uv) \right\}
\end{align*}
as the smoothing of $h$ by $\mu \omega$.
\end{definition}
\noindent
We now state a classical result showing how the parameter $\mu$
controls both the approximation error and the level of the smoothing.
For a proof, see \citet[Thm. 4.1, Lemma 4.2]{beck2012smoothing} or Prop.~\ref{prop:smoothing:difference_of_smoothing}
of Appendix~\ref{sec:a:smoothing}.
\begin{proposition} \label{thm:setting:beck-teboulle}
Consider the setting of Def.~\ref{defn:smoothing:inf-conv}.
The smoothing $h_{\mu \omega}$ is continuously differentiable and its gradient, given by
\[
\grad h_{\mu \omega}(\zv) = \argmax_{\uv \in \dom h^*} \left\{ \inp{\uv}{\zv} - h^*(\uv) - \mu \omega(\uv) \right\}
\]
is $1/\mu$-Lipschitz with respect to $\normad{\alpha}{\cdot}$.
Moreover, letting $h_{\mu \omega} \equiv h$ for $\mu = 0$, the smoothing satisfies, for all $\mu_1 \ge \mu_2 \ge 0$,
\begin{align*}
(\mu_1 - \mu_2) \inf_{\uv \in \dom h^*} \omega(\uv)
\le
h_{\mu_2 \omega}(\zv) - h_{\mu_1 \omega}(\zv)
\le
(\mu_1 - \mu_2) \sup_{\uv \in \dom h^*} \omega(\uv) \,.
\end{align*}
\end{proposition}
\paragraph{Smoothing the Structural Hinge Loss}
We rewrite the structural hinge loss as a composition
\begin{equation}\label{eq:mapping_def}
\gv:\
\begin{cases}
\reals^d &\to \reals^m \\
\wv &\mapsto (\psi(\yv;\wv))_{\yv \in \mcY},
\end{cases} \, \qquad h: \begin{cases}
\reals^{m} &\to \reals \\
\zv &\mapsto \max_{i \in [m]} z_i,
\end{cases}
\end{equation}
where $m= |\mcY|$ so that the structural hinge loss reads
\begin{align} \label{eq:pgm:struc_hinge_vec}
f(\wv) = h \circ \gv(\wv)\,.
\end{align}
We smooth the structural hinge loss~\eqref{eq:pgm:struc_hinge_vec} by simply smoothing the
non-smooth max function $h$ as
\begin{align*}
f_{\mu \omega} = h_{\mu \omega} \circ \gv.
\end{align*}
When $\gv$ is smooth and Lipschitz continuous,
$f_{\mu \omega}$ is a smooth approximation of the structural hinge loss, whose gradient is readily given by the chain-rule.
In particular, when $\gv$ is an affine map $\gv(\wv) = \Am\wv + \bv$,
it follows that
$f_{\mu \omega}$ is $(\normasq{\beta,\alpha}{\Am} / \mu)$-smooth with respect to $\norma{\beta}{\cdot}$
(cf. Lemma~\ref{lemma:smoothing:composition} in Appendix~\ref{sec:a:smoothing}).
Furthermore, for $\mu_1 \ge \mu_2 \ge 0$, we have,
\[
(\mu_1 - \mu_2) \min_{\uv \in \Delta^{m-1}} \omega(\uv) \le f_{\mu_2\omega}(\wv) - f_{\mu_1 \omega}(\wv)
\le (\mu_1 - \mu_2) \max_{\uv \in \Delta^{m-1}} \omega(\uv) \,.
\]
\subsection{Smoothing Variants}
In the context of smoothing the max function, we now describe two popular choices for the smoothing function $\omega$,
followed by computational considerations.
\subsubsection{Entropy and $\ell_2^2$ smoothing}
When $h$ is the max function, the smoothing operation can be computed analytically for
the \emph{entropy} smoother and the $\ell_2^2$ smoother, denoted respectively as
\begin{align*}
-H(\uv) := \inp{\uv}{\log \uv} \qquad \mbox{and} \qquad \ell_2^2(\uv) := \tfrac{1}{2}(\normasq{2}{\uv} - 1) \,.
\end{align*}
These lead respectively to the log-sum-exp function~\citep[Lemma 4]{nesterov2005smooth}
\[
h_{-\mu H}(\zv) = \mu \log\left(\sum_{i=1}^{m}e^{z_i/\mu}\right), \quad \nabla h_{-\mu H}(\zv) = \left[\frac{e^{z_i/\mu}}{\sum_{j=1}^{m}e^{z_j/\mu}}\right]_{i=1,\ldots,m} \,,
\]
and to a smoothing computed via Euclidean projection onto the simplex,
\[
h_{\mu \ell_2^2}(\zv) = \langle \zv, \operatorname{proj}_{\Delta^{m-1}}(\zv/\mu) \rangle
- \tfrac{\mu}{2}\|\operatorname{proj}_{\Delta^{m-1}}(\zv/\mu)\|^2 + \tfrac{\mu}{2},
\quad \nabla h_{\mu \ell_2^2}(\zv) = \operatorname{proj}_{\Delta^{m-1}}(\zv/\mu) \,.
\]
Furthermore, the following holds for all $\mu_1 \ge \mu_2 \ge 0$ from Prop.~\ref{thm:setting:beck-teboulle}:
\[
0 \le h_{-\mu_1 H}(\zv) - h_{-\mu_2 H}(\zv) \le (\mu_1 - \mu_2) \log m, \quad \text{and,} \quad
0 \le h_{\mu_1 \ell_2^2}(\zv) - h_{\mu_2 \ell_2^2}(\zv) \le \tfrac{1}{2}(\mu_1-\mu_2) \,.
\]
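For concreteness, both smoothings and their gradients can be sketched in a few lines of NumPy. The code below is our own illustration (the function names are ours, not from any library) and only relies on the analytic formulas above; the projection routine uses the standard sort-based algorithm.

```python
import numpy as np

def logsumexp_smoothing(z, mu):
    """Entropy smoothing h_{-mu H}(z) = mu * log(sum_i exp(z_i/mu)),
    computed stably; its gradient is the softmax of z / mu."""
    s = z / mu
    m = s.max()
    e = np.exp(s - m)
    return mu * (m + np.log(e.sum())), e / e.sum()

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1] + 1
    lam = (1.0 - css[rho - 1]) / rho
    return np.maximum(v + lam, 0.0)

def l2_smoothing(z, mu):
    """l2^2 smoothing of the max; its gradient is proj_simplex(z / mu)."""
    p = project_simplex(z / mu)
    return z @ p - 0.5 * mu * (p @ p) + 0.5 * mu, p

z = np.array([3.0, 1.0, 0.5])
for mu in (1.0, 0.1, 0.01):
    v_ent, g_ent = logsumexp_smoothing(z, mu)
    v_l2, g_l2 = l2_smoothing(z, mu)
    # both values approach max(z) = 3.0 as mu -> 0, and both
    # gradients lie in the simplex (nonnegative, summing to one)
```

As $\mu$ decreases, both values converge to the max while the gradients concentrate on the maximizing coordinate, in line with the bounds above.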
\subsubsection{Top-$K$ Strategy}
Though the gradient of the composition $f_{\mu \omega} = h_{\mu \omega} \circ \gv$
can be written using the chain rule, its actual computation for structured prediction problems
involves computing $\grad \gv$ over all $m = \abs{\mcY}$ of its components, which may be intractable.
However, in the case of $\ell_2^2$ smoothing, projections onto the simplex are sparse, as pointed out by the following proposition.
\begin{proposition} \label{prop:smoothing:proj-simplex-1}
Consider the Euclidean projection
$\uv^* = \argmin_{\uv \in \Delta^{m-1}}\normasq{2}{\uv -{\zv}/{\mu}} $ of $\zv/\mu \in \reals^m$ onto the simplex,
where $\mu > 0$.
The projection $\uv^*$ has exactly $k \in [m]$ non-zeros if and only if
\begin{align} \label{eq:smooth:proj:simplex_1_statement}
\sum_{i=1}^k \left(z_{(i)} - z_{(k)} \right) < \mu
\le \sum_{i=1}^k \left(z_{(i)} - z_{(k+1)} \right) \,,
\end{align}
where $z_{(1)}\ge \cdots \ge z_{(m)}$ are the components of $\zv$ in non-increasing order
and $z_{(m+1)} := -\infty$.
In this case, $\uv^*$ is given by
\begin{align*}
u_i^* = \max \bigg\{0, \, \tfrac{1}{k\mu }\sum_{j=1}^k \big( z_i - z_{(j)} \big) + \tfrac{1}{k} \bigg\} \,.
\end{align*}
\end{proposition}
\begin{proof}
The projection $\uv^*$ satisfies $u^*_i = (z_i/\mu + \rho^*)_+$,
where $\rho^*$ is the unique solution of $\rho$ in the equation
\begin{align} \label{eq:smooth:proj:simplex_1}
\sum_{i=1}^m \left( \frac{z_i}{\mu} + \rho \right)_+ = 1 \,,
\end{align}
where $\alpha_+ = \max\{0, \alpha\}$. See, e.g.,
\citet{held1974validation} for a proof of this fact.
Note that $z_{(i)}/\mu + \rho^* \le 0$
implies that $z_{(j)}/\mu + \rho^* \le 0$ for all $j \ge i$. Therefore
$\uv^*$ has $k$ non-zeros if and only if $z_{(k)}/\mu + \rho^* > 0$ and $z_{(k+1)}/\mu + \rho^* \le 0$.
Now suppose that $\uv^*$ has exactly $k$ non-zeros; we can then solve \eqref{eq:smooth:proj:simplex_1} to obtain $\rho^* = \varphi_k(\zv/\mu)$, which is defined as
\begin{align} \label{eq:smooth:proj:simplex_1b}
\varphi_k\left( \frac \zv \mu \right) := \frac{1}{k} - \frac{1}{k} \sum_{i=1}^k \frac{z_{(i)}}{\mu} \,.
\end{align}
Plugging in the value of $\rho^*$ in $z_{(k)}/\mu + \rho^* > 0$ gives $\mu > \sum_{i=1}^k \left(z_{(i)} - z_{(k)} \right)$.
Likewise, $z_{(k+1)}/\mu + \rho^* \le 0$ gives $\mu \le \sum_{i=1}^k \left(z_{(i)} - z_{(k+1)} \right)$.
Conversely, assume~\eqref{eq:smooth:proj:simplex_1_statement} and let $\widehat \rho = \varphi_k(\zv/\mu)$.
Eq. \eqref{eq:smooth:proj:simplex_1_statement} can be written as
$z_{(k)}/\mu + \widehat\rho > 0$ and $z_{(k+1)}/\mu + \widehat\rho \le 0$. Furthermore, we verify that
$\widehat\rho$ satisfies Eq.~\eqref{eq:smooth:proj:simplex_1}, and so $\widehat \rho = \rho^*$ is its unique root.
It follows, therefore, that the sparsity of $\uv^*$ is $k$.
\end{proof}
Thus, the projection of $\zv /\mu$ onto the simplex picks out some number $K_{\zv/\mu}$
of the largest entries of $\zv / \mu$; we refer to this number as the sparsity of
$\operatorname{proj}_{\Delta^{m-1}}(\zv/\mu)$.
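The sparsity characterization above is easy to test numerically. The following sketch is our own (the helper names are hypothetical): it projects random vectors onto the simplex and checks condition \eqref{eq:smooth:proj:simplex_1_statement} against the observed number of non-zeros.

```python
import numpy as np

def project_simplex(v):
    """Sort-based Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1] + 1
    lam = (1.0 - css[rho - 1]) / rho
    return np.maximum(v + lam, 0.0)

def sparsity_condition_holds(z, mu):
    """Check that the sparsity k of proj(z/mu) satisfies
    sum_{i<=k}(z_(i) - z_(k)) < mu <= sum_{i<=k}(z_(i) - z_(k+1))."""
    k = int(np.count_nonzero(project_simplex(z / mu) > 1e-12))
    zs = np.sort(z)[::-1]                      # z_(1) >= ... >= z_(m)
    lower = np.sum(zs[:k] - zs[k - 1])
    upper = np.sum(zs[:k] - zs[k]) if k < len(z) else np.inf
    return (lower < mu) and (mu <= upper)

rng = np.random.default_rng(0)
checks = [sparsity_condition_holds(rng.normal(size=6), rng.uniform(0.05, 2.0))
          for _ in range(100)]
# for generic (tie-free) random draws, every check should succeed
```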
This fact motivates the {\em top-$K$ strategy}: given $\mu>0$, fix an integer $K$ {\em a priori} and
consider as surrogates for $h_{\mu\ell_2^2}$ and $\grad h_{\mu\ell_2^2}$ respectively
\[
h_{\mu, K}(\zv) := \max_{\uv \in \Delta^{K-1}} \left\{ \inp*{\zv_{[K]}}{\uv} - \mu \ell_2^2(\uv) \right\}\,,
\quad \text{and,} \quad
\widetilde \grad h_{\mu, K}(\zv) := \Omega_K(\zv)\T\operatorname{proj}_{\Delta^{K-1}}\left( \frac{\zv_{[K]}}{\mu} \right) \,,
\]
where $\zv_{[K]}$ denotes the vector composed of the $K$ largest entries of $\zv$ and
$
\Omega_K : \reals^m \to \{0,1\}^{K \times m}
$
defines their extraction, i.e.,
$\Omega_K(\zv) = (\ev_{j_1}, \ldots, \ev_{j_K})\T \in \{0, 1\}^{K \times m}$
where $j_1, \cdots, j_K$ satisfy $z_{j_1} \ge \cdots \ge z_{j_K}$
such that
$ \zv_{[K]} = \Omega_K(\zv) \zv$ \,.
A surrogate of the $\ell_2^2$ smoothing is then given by
\begin{align} \label{eq:smoothing:fmuK_defn}
f_{\mu, K} := h_{\mu, K} \circ \gv \,,
\quad\text{and,}\quad
\widetilde \grad f_{\mu, K}(\wv) := \grad \gv(\wv)\T \widetilde \grad h_{\mu, K}(\gv(\wv)) \,.
\end{align}
\paragraph{Exactness of Top-$K$ Strategy}
We say that the top-$K$ strategy is {\em exact} at $\zv$ for $\mu>0$ when it recovers the first order information
of $h_{\mu \ell_2^2}$, i.e. when $ h_{\mu \ell_2^2}(\zv) = h_{\mu, K}(\zv)$ and
$\grad h_{\mu \ell_2^2}(\zv) = \widetilde \grad h_{\mu, K}(\zv)$.
The next proposition outlines when this is the case. Note that if the top-$K$ strategy is exact at $\zv$ for
a smoothing parameter $\mu>0$, then it is also exact at $\zv$ for any $0 < \mu' < \mu$.
\begin{proposition} \label{prop:smoothing:proj-simplex-2}
The top-$K$ strategy is exact at $\zv$
for $\mu>0$ if
\begin{align} \label{eq:smooth:proj:simplex_2}
\mu \le \sum_{i=1}^K \left(z_{(i)} - z_{( {\scriptscriptstyle K}+1)} \right) \,.
\end{align}
Moreover, for any fixed $\zv \in \reals^m$ such that the vector $\zv_{[\scriptscriptstyle K+1]} = \Omega_{K+1}(\zv)\zv$
has at least two unique elements, the top-$K$ strategy is exact at $\zv$ for
all $\mu$ satisfying $0 < \mu \le z_{(1)} - z_{({\scriptscriptstyle K}+1)}$.
\end{proposition}
\begin{proof}
First, we note that the top-$K$ strategy is exact when the sparsity $K_{\zv/\mu}$ of the projection
$\operatorname{proj}_{\Delta^{m-1}}(\zv/\mu)$ satisfies $K_{\zv/\mu} \le K$.
From Prop.~\ref{prop:smoothing:proj-simplex-1}, the condition that
$K_{\zv/\mu} \in \{1, 2, \cdots, K\}$ happens when
\begin{align*}
\mu \in
\bigcup_{k=1}^K \left( \sum_{i=1}^k \left(z_{(i)} - z_{(k)} \right), \,
\sum_{i=1}^k \left(z_{(i)} - z_{(k+1)} \right) \right] =
\left( 0 , \sum_{i=1}^K \left(z_{(i)} - z_{({\scriptscriptstyle K}+1)} \right) \right] \,,
\end{align*}
since the intervals in the union are contiguous.
This establishes \eqref{eq:smooth:proj:simplex_2}.
The only case when \eqref{eq:smooth:proj:simplex_2} cannot hold for any value of $\mu > 0$ is when the right-hand side
of \eqref{eq:smooth:proj:simplex_2} is zero. In the opposite case when $\zv_{[{\scriptscriptstyle K} + 1]}$ has at least
two unique components, or equivalently, $z_{(1)} - z_{({\scriptscriptstyle K}+1)} > 0$, the condition
$0 < \mu \le z_{(1)} - z_{({\scriptscriptstyle K}+1)}$ implies \eqref{eq:smooth:proj:simplex_2}.
\end{proof}
If the top-$K$ strategy is exact at $\gv(\wv)$ for $\mu$, then
\[
f_{\mu, K}(\wv) = f_{\mu \ell_2^2}(\wv)
\quad \text{and} \quad
\widetilde \grad f_{\mu, K}(\wv) = \grad f_{\mu \ell_2^2}(\wv) \,,
\]
where the latter follows from the chain rule.
When used instead of $\ell_2^2$ smoothing in the algorithms presented in Sec.~\ref{sec:cvx_opt},
the top-$K$ strategy provides a computationally efficient heuristic to smooth the structural hinge loss.
Though we do not have theoretical guarantees using this surrogate,
experiments presented in Sec.~\ref{sec:expt} show its efficiency and its robustness to the choice of $K$.
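As an illustration of the top-$K$ surrogate, the sketch below (our code; all names are hypothetical) projects only the $K$ largest entries of $\zv/\mu$ and scatters the result back into $\reals^m$. When the exactness condition of Prop.~\ref{prop:smoothing:proj-simplex-2} holds, the surrogate agrees with the full $\ell_2^2$ smoothing.

```python
import numpy as np

def project_simplex(v):
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1] + 1
    return np.maximum(v + (1.0 - css[rho - 1]) / rho, 0.0)

def l2_smoothing(z, mu):
    p = project_simplex(z / mu)
    return z @ p - 0.5 * mu * (p @ p) + 0.5 * mu, p

def topk_smoothing(z, mu, K):
    """Surrogate h_{mu,K} and its gradient proxy: project only the K
    largest entries of z/mu, then scatter back (the Omega_K(z)^T step)."""
    idx = np.argsort(z)[::-1][:K]
    p = project_simplex(z[idx] / mu)
    value = z[idx] @ p - 0.5 * mu * (p @ p) + 0.5 * mu
    grad = np.zeros_like(z)
    grad[idx] = p
    return value, grad

z = np.array([4.0, 3.5, 1.0, 0.2, -2.0])
mu, K = 1.0, 2
# exactness holds here: mu = 1 <= (4 - 1) + (3.5 - 1) = 5.5
v_full, g_full = l2_smoothing(z, mu)
v_topk, g_topk = topk_smoothing(z, mu, K)
```

The per-call cost drops from handling all $m$ entries to handling only the $K$ largest, which is what makes the strategy attractive when $m = \abs{\mcY}$ is exponentially large.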
\subsection{Score Functions} \label{sec:inf_oracles:score_func}
Structured prediction is defined by the structure of the output $\yv$, while the input $\xv \in \mcX$ can be arbitrary.
Each output $\yv \in \mcY$ is composed of $p$ components $y_1, \ldots, y_p$ that are linked through a graphical model
$\mcG = (\mcV, \mcE)$:
the nodes $\mcV=\{1,\cdots,p\}$ represent the components of the output $\yv$, while the edges $\mcE$ encode the
dependencies between the components.
The value of each component $y_v$ for $v \in \mcV$ represents the state of the node $v$ and takes values from a finite set $\mcY_v$.
The set of all output structures $\mcY = \mcY_1 \times \cdots \times \mcY_p$
is then finite yet potentially intractably large.
The structure of the graph (i.e., its edge structure) depends on the task.
For the task of sequence labeling, the graph is a chain,
while for the task of parsing, the graph is a tree. On the other hand, the graph
used in image segmentation is a grid.
For a given input $\xv$ and a score function $\phi(\cdot, \cdot ; \wv)$,
the value $\phi(\xv, \yv; \wv)$ measures the compatibility of the
output $\yv$ for the input $\xv$.
The essential characteristic of the score function is that it decomposes over the nodes and edges of the graph as
\begin{align} \label{eq:setting:score:decomp}
\phi(\xv, \yv ; \wv) = \sum_{v \in \mcV} \phi_v(\xv, y_v; \wv)
+ \sum_{(v,v') \in \mcE} \phi_{v,v'}(\xv, y_v, y_{v'} ; \wv) \,.
\end{align}
For a fixed $\wv$, each input $\xv$ defines a specific compatibility function $\phi(\xv, \cdot\, ; \wv)$.
The nature of the problem and the optimization algorithms we consider hinge upon whether
$\phi$ is an affine function of $\wv$ or not. The two settings studied here are the following:
\begin{description}
\item{\bfseries Pre-defined Feature Map.}
In this structured prediction framework, a pre-specified feature map
$\Phi: \mcX \times \mcY \to \reals^d$ is employed and the score $\phi$ is then defined as the linear function
\begin{equation}\label{eq:pre_spec_feature_map}
\phi(\xv, \yv ; \wv) = \inp{\Phi(\xv, \yv)}{\wv} = \sum_{v \in \mcV} \inp{\Phi_v(\xv, y_v)}{\wv}
+ \sum_{(v,v') \in \mcE} \inp{\Phi_{v,v'}(\xv, y_v, y_{v'})}{\wv}\,.
\end{equation}
\item{\bfseries Learning the Feature Map.}
We also consider the setting where the feature map $\Phi$ is parameterized by $\wv_0$,
for example, using a neural network, and is learned from the data. The score function can then be written as
\begin{equation}\label{eq:deep_setting}
\phi(\xv, \yv ; \wv) = \inp{\Phi(\xv, \yv; \wv_0)}{\wv_1} \,,
\end{equation}
where $\wv = (\wv_0, \wv_1)$ and the scalar product decomposes into nodes and edges as above.
\end{description}
Note that we only need the score function to decompose over the nodes and edges of the graph $\mcG$ as in
Eq.~\eqref{eq:setting:score:decomp}.
In particular, while Eq.~\eqref{eq:deep_setting} is helpful
to understand the use of neural networks in structured prediction,
the optimization algorithms developed in Sec.~\ref{sec:ncvx_opt}
apply to general nonlinear but smooth score functions.
This framework captures both generative probabilistic models, such as Hidden Markov Models (HMMs)
that model the joint distribution of $\xv$ and $\yv$,
and discriminative probabilistic models,
such as conditional random fields~\citep{lafferty2001conditional},
where dependencies among the input variables $\xv$ do not need to be explicitly represented.
In these cases, the log joint and conditional probabilities respectively
play the role of the score $\phi$.
\begin{example}[Sequence Tagging]
\label{example:inf_oracles:viterbi_example}
Consider the task of sequence tagging in natural language processing
where each $\xv = (x_1, \cdots, x_p) \in \mcX$ is a sequence of words and
$\yv = (y_1, \cdots, y_p) \in \mcY$ is a sequence of labels,
both of length $p$. Common examples include part of speech tagging and named entity recognition.
Each word $x_v$ in the sequence $\xv$ comes from a finite dictionary $\mcD$,
and each tag $y_v$ in $\yv$ takes values from a finite set $\mcY_v = \mcY_{\mathrm{tag}}$.
The corresponding graph is simply a linear chain.
The score function measures the compatibility of a sequence $\yv\in\mcY$ for the input $\xv\in\mcX$ using parameters
$\wv = (\wv_{\mathrm{unary}}, \wv_{\mathrm{pair}})$ as, for instance,
\[
\phi(\xv, \yv; \wv) = \sum_{v =1}^p \inp{\Phi_{\mathrm{unary}}(x_v, y_v)}{\wv_{\mathrm{unary}}}
+ \sum_{v=0}^p \inp{\Phi_{\mathrm{pair}}(y_v, y_{v+1})}{\wv_{\mathrm{pair}}}\,,
\]
where, using $\wv_{\mathrm{unary}} \in \reals^{\abs\mcD\abs{\mcY_{\mathrm{tag}}}}$
and $\wv_{\mathrm{pair}} \in \reals^{\abs{\mcY_{\mathrm{tag}}}^2}$ as node and edge weights respectively,
we define for each $v \in [p]$,
\[
\inp{\Phi_{\mathrm{unary}}(x_v, y_v)}{\wv_{\mathrm{unary}}} = \sum_{x \in \mathcal{D},\, j \in \mcY_{\mathrm{tag}}}
w_{\mathrm{unary},\, x, j} \ind(x = x_v) \ind(j = y_v) \,.
\]
The pairwise term $\inp{\Phi_{\mathrm{pair}}(y_v, y_{v+1})}{\wv_{\mathrm{pair}}}$ is analogously defined.
Here, $y_0, y_{p+1}$ are special ``start'' and ``stop'' symbols respectively.
This can be written as a dot product of $\wv$ with a pre-specified feature map as in~\eqref{eq:pre_spec_feature_map},
by defining
\[
\Phi(\xv, \yv) = \big(\sum_{v=1}^p \ev_{x_v} \otimes \ev_{y_v} \big)
\oplus \big(\sum_{v=0}^p \ev_{y_v} \otimes \ev_{y_{v+1}} \big) \,,
\]
where $\ev_{x_v}$ is the unit vector $(\ind(x = x_v))_{x \in \mcD} \in \reals^{\abs\mcD}$,
$ \ev_{y_v}$ is the unit vector $(\ind(j=y_v))_{j \in \mcY_{\mathrm{tag}}} \in \reals^{\abs{\mcY_{\mathrm{tag}}}}$,
$\otimes$ denotes the Kronecker product between vectors and $\oplus$ denotes vector concatenation.
\end{example}
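The feature map of this example can be made concrete with a small sketch. The code below is our own illustration: the dictionary and tag-set sizes are arbitrary toy values, start/stop are encoded as two extra tag indices, and the function names are ours.

```python
import numpy as np

D, T = 4, 3                 # toy dictionary and tag-set sizes
Tb = T + 2                  # tags plus "start" (index T) and "stop" (index T+1)

def feature_map(x, y):
    """Phi(x, y): one-hot unary counts e_{x_v} (x) e_{y_v} plus pairwise
    counts e_{y_v} (x) e_{y_{v+1}}, flattened and concatenated."""
    unary = np.zeros((D, Tb))
    pair = np.zeros((Tb, Tb))
    ys = [T] + list(y) + [T + 1]            # y_0 = start, y_{p+1} = stop
    for xv, yv in zip(x, y):
        unary[xv, yv] += 1.0
    for a, b in zip(ys[:-1], ys[1:]):
        pair[a, b] += 1.0
    return np.concatenate([unary.ravel(), pair.ravel()])

def score(x, y, w):
    """phi(x, y; w) = <Phi(x, y), w>, linear in w."""
    return feature_map(x, y) @ w

x, y = [0, 2, 1], [1, 0, 2]                 # word and tag indices, p = 3
w = np.random.default_rng(0).normal(size=D * Tb + Tb * Tb)
s = score(x, y, w)                          # decomposes over nodes and edges
```

Since $\Phi$ only accumulates counts over nodes and edges, the score automatically decomposes as in Eq.~\eqref{eq:setting:score:decomp}, and linearity in $\wv$ is immediate.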
\subsection{Inference Oracles}
We now define inference oracles as first order oracles in structured prediction.
These are used later
to understand the information-based complexity of optimization algorithms.
\subsubsection{First Order Oracles in Structured Prediction}
A first order oracle for a function $f :\reals^d \to \reals$ is a routine which,
given a point $\wv \in \reals^d$, returns on output a value $f(\wv)$ and
a (sub)gradient $\vv \in \partial f(\wv)$, where $\partial f$ is the
Fr\'echet (or regular) subdifferential
\citep[Def. 8.3]{rockafellar2009variational}.
We now define inference oracles as first order oracles for the structural hinge loss
$f$ and its smoothed variants $f_{\mu \omega}$.
Note that these definitions are independent of the graphical structure.
However, as we shall see, the graphical structure plays a crucial role in the implementation of
the inference oracles.
\begin{definition} \label{defn:inf-oracles-all}
Consider an augmented score function $\psi$,
a level of smoothing $\mu > 0$
and the structural hinge loss $f(\wv) = \max_{\yv \in \mcY} \psi(\yv;\wv)$. For a given $\wv \in \reals^d$,
\begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=iii, leftmargin=*]
\item the {\em max oracle}
returns $f(\wv)$ and $\vv \in \partial f(\wv)$.
\item the {\em exp oracle}
returns $f_{-\mu H}(\wv)$ and $\grad f_{-\mu H}(\wv)$.
\item the {\em top-$K$ oracle}
returns $f_{\mu, K}(\wv)$ and $\widetilde \grad f_{\mu, K}(\wv)$ as surrogates for
$f_{\mu \ell_2^2}(\wv)$ and $\grad f_{\mu \ell_2^2}(\wv)$ respectively.
\end{enumerate}
\end{definition}
\noindent
Note that the exp oracle gets its name since its gradient can be written as an expectation
over all $\yv \in \mcY$, as revealed by the next lemma,
which gives analytical expressions for the gradients returned by the oracles.
\begin{lemma} \label{lemma:smoothing:first-order-oracle}
Consider the setting of Def.~\ref{defn:inf-oracles-all}. We have the following:
\begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=iii, leftmargin=*]
\item \label{lem:foo:max}
For any $\yv^* \in \argmax_{\yv \in \mcY} \psi(\yv;\wv)$, we have that
$\grad_\wv \psi(\yv^* ; \wv) \in \partial f(\wv)$. That is, the max oracle can be implemented
by inference.
\item
The output of the exp oracle satisfies
$\grad f_{-\mu H}(\wv) = \sum_{\yv \in \mcY} P_{\psi, \mu}(\yv ; \wv) \grad \psi(\yv ; \wv)$,
where
\[P_{\psi, \mu}(\yv ; \wv)
= \frac{
\exp\left(\tfrac{1}{\mu}\psi(\yv ; \wv)\right)}
{\sum_{\yv' \in \mcY }\exp\left(\tfrac{1}{\mu}\psi(\yv' ; \wv)\right)} \,.
\]
\label{lem:foo:exp}
\item \label{lem:foo:l2}
The output of the top-$K$ oracle satisfies
$
\widetilde \grad f_{\mu, K}(\wv) = \sum_{i=1}^K u_{\psi, \mu, i}^*(\wv) \grad \psi(\yv_{(i)}; \wv) \,,
$
where $Y_K = \left\{\yv_{(1)}, \cdots, \yv_{(K)} \right\}$ is the set of $K$ largest scoring outputs
satisfying
\[
\psi(\yv_{(1)} ; \wv) \ge \cdots \ge \psi(\yv_{(K)} ; \wv) \ge \max_{\yv \in \mcY \setminus Y_K} \psi(\yv ; \wv)\,,
\]
and $
\uv^*_{\psi, \mu} = \operatorname{proj}_{\Delta^{K-1}} \left( \left[\psi(\yv_{(1)} ; \wv), \cdots,
\psi(\yv_{(K)} ; \wv) \right]\T \right)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Part~\ref{lem:foo:exp} deals with the composition of differentiable
functions, and follows from the chain rule. Part~\ref{lem:foo:l2} follows from the definition in Eq.~\eqref{eq:smoothing:fmuK_defn}.
The proof of Part~\ref{lem:foo:max} follows from the chain rule for Fr\'echet subdifferentials of compositions
\citep[Theorem 10.6]{rockafellar2009variational}
together with the fact that by convexity and
Danskin's theorem \citep[Proposition B.25]{bertsekas1999nonlinear},
the subdifferential of the max function is given by
$\partial h(\zv) = \conv \{ \ev_i \, | \, i \in [m] \text{ such that } z_i = h(\zv) \}$.
\end{proof}
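For an augmented score that is affine in $\wv$, with one coefficient vector per output $\yv$ and a small enumerable $\mcY$, the expectation formula of part (ii) can be checked against finite differences of the log-sum-exp value. The sketch below is our own illustration under that assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
A_feat = rng.normal(size=(5, 3))     # row y encodes psi(y; w) = <A_feat[y], w>
mu = 0.3

def f_ent(w):
    """f_{-mu H}(w) = mu * log sum_y exp(psi(y; w) / mu), computed stably."""
    s = A_feat @ w
    m = s.max()
    return m + mu * np.log(np.sum(np.exp((s - m) / mu)))

w = rng.normal(size=3)
s = A_feat @ w
P = np.exp((s - s.max()) / mu)
P /= P.sum()                         # the Gibbs distribution P_{psi,mu}(y; w)
grad = A_feat.T @ P                  # sum_y P(y) * grad_w psi(y; w)

eps = 1e-6                           # central finite differences of f_ent
fd = np.array([(f_ent(w + eps * e) - f_ent(w - eps * e)) / (2 * eps)
               for e in np.eye(3)])
# grad and fd should agree up to discretization error
```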
\begin{figure*}[!t]
\centering
\begin{subfigure}[b]{0.28\linewidth}
\centering
\adjincludegraphics[width=\textwidth,trim={0.11\width 0.2\height 0.11\width 0.2\height},clip]{fig/viterbi/viterbi-max.pdf}
\caption{\small{Non-smooth.}}
\label{subfig:viterbi:1:max}
\end{subfigure}
\hspace{5mm}%
\begin{subfigure}[b]{0.28\linewidth}
\centering
\adjincludegraphics[width=\textwidth,trim={0.11\width 0.2\height 0.11\width 0.2\height},clip]{fig/viterbi/viterbi-K.pdf}
\caption{\small{$\ell_2^2$ smoothing.}}
\label{subfig:viterbi:1:l2}
\end{subfigure}
\hspace{5mm}%
\begin{subfigure}[b]{0.28\linewidth}
\centering
\adjincludegraphics[width=\textwidth,trim={0.11\width 0.2\height 0.11\width 0.2\height},clip]{fig/viterbi/viterbi-exp.pdf}
\caption{\small{Entropy smoothing.}}
\label{subfig:viterbi:1:ent}
\end{subfigure}
\caption{\small{Viterbi trellis for a chain graph with $p=4$ nodes and 3 labels.
}}
\label{fig:viterbi:1}
\end{figure*}
\begin{example} \label{example:inf_oracles:viterbi_example_2}
Consider the task of sequence tagging from Example~\ref{example:inf_oracles:viterbi_example}.
The inference problem~\eqref{eq:pgm:inference} is a search over all
$\abs{\mcY} = \abs{\mcY_{\mathrm{tag}}}^p$ label sequences. For chain graphs, this is equivalent
to searching for the shortest path in the associated trellis, shown in Fig.~\ref{fig:viterbi:1}.
An efficient dynamic programming approach called the Viterbi algorithm \citep{viterbi1967error}
can solve this problem in space and time polynomial in $p$ and $\abs{\mcY_{\mathrm{tag}}}$.
The structural hinge loss is non-smooth because a small change in $\wv$ might lead to a radical change
in the best scoring path shown in Fig.~\ref{fig:viterbi:1}.
When smoothing $f$ with $\omega = \ell_2^2$,
the smoothed function $f_{\mu \ell_2^2}$ is given by a projection onto the simplex,
which picks out some number $K_{\psi/\mu}$ of the highest scoring outputs $\yv \in \mcY$ or equivalently,
$K_{\psi/\mu}$ shortest paths in the Viterbi trellis (Fig.~\ref{subfig:viterbi:1:l2}).
The top-$K$ oracle then uses the top-$K$ strategy to approximate $f_{\mu \ell_2^2}$ with $f_{\mu, K}$.
On the other hand, with entropy smoothing $\omega = -H$, we get the log-sum-exp function and
its gradient is obtained by averaging
over paths with weights such that
shorter paths have a larger weight (cf. Lemma~\ref{lemma:smoothing:first-order-oracle}\ref{lem:foo:exp}).
This is visualized in Fig.~\ref{subfig:viterbi:1:ent}.
\end{example}
\subsubsection{Exp Oracles and Conditional Random Fields}
Recall that a {\em Conditional Random Field (CRF)}~\citep{lafferty2001conditional}
with augmented score function $\psi$
and parameters $\wv \in \reals^d$ is a probabilistic model that assigns
to output $\yv \in \mcY$ the probability
\begin{align} \label{eq:smoothing:crf:def}
\prob(\yv \mid \psi ; \wv) = \exp\left(\psi(\yv ; \wv) - A_\psi(\wv) \right) \,,
\end{align}
where $A_\psi(\wv)$ is known as the log-partition function,
a normalizer so that the probabilities sum to one.
Gradient-based maximum likelihood learning algorithms for CRFs require computation
of the log-partition function $A_\psi(\wv)$ and its gradient $\grad A_\psi(\wv)$.
The next proposition relates the computational costs of the exp oracle and the log-partition function.
\begin{proposition} \label{prop:smoothing:exp-crf}
The exp oracle for an augmented score function $\psi$ with parameters $\wv \in \reals^d$ is
equivalent in hardness to computing the log-partition function $A_\psi(\wv)$
and its gradient $\grad A_\psi(\wv)$ for a conditional
random field with augmented score function $\psi$.
\end{proposition}
\begin{proof}
Fix a smoothing parameter $\mu > 0$.
Consider a CRF with augmented score function
$\psi'(\yv ; \wv) = \mu\inv \psi(\yv ; \wv)$. Its log-partition function
$A_{\psi'}(\wv)$ satisfies
$\exp(A_{\psi'}(\wv)) = \sum_{\yv \in \mcY} \exp \left( \mu\inv \psi(\yv ; \wv) \right)$.
The claim now follows from the identity $f_{- \mu H}(\wv) = \mu \, A_{\psi'}(\wv)$
relating $f_{-\mu H}$ and $A_{\psi'}$.
\end{proof}
\subsection{Inference Oracles in Trees} \label{subsec:smooth_inference_trees}
We first consider algorithms implementing the inference oracles in trees and examine their computational complexity.
\subsubsection{Implementation of Inference Oracles}
\paragraph{Max Oracle}
In tree structured graphical models, the inference problem~\eqref{eq:pgm:inference}, and thus the max oracle
(cf. Lemma~\ref{lemma:smoothing:first-order-oracle}\ref{lem:foo:max})
can always be solved exactly in polynomial time by the max-product algorithm~\citep{pearl1988probabilistic},
which uses the technique of dynamic programming~\citep{bellman1957dynamic}.
The Viterbi algorithm (Algo.~\ref{algo:dp:max:chain}) for chain graphs from Example~\ref{example:inf_oracles:viterbi_example_2}
is a special case. See Algo.~\ref{algo:dp:supp} in Appendix~\ref{sec:a:dp} for
the max-product algorithm in full generality.
\paragraph{Top-$K$ Oracle}
The top-$K$ oracle uses a generalization of the max-product algorithm that we name top-$K$ max-product algorithm.
Following the work of \citet{seroussi1994algorithm}, it keeps track of the $K$-best intermediate structures while
the max-product algorithm just tracks the single best intermediate structure.
Formally, the $k$th largest value of a function $f$ over a discrete set $S$ is defined as
\begin{align*}
\maxK{k}{x \in S} f(x) =
\begin{cases}
\text{$k$th largest element of $\{f(y)\, |\, y \in S\} $} & k \le |S| \\
-\infty, & k > |S| \,.
\end{cases}
\end{align*}
We present the algorithm in the simple case of chain structured graphical models in
Algo.~\ref{algo:dp:topK:chain}.
The top-$K$ max-product algorithm for general trees is given in
Algo.~\ref{algo:dp:topK:main} in Appendix~\ref{sec:a:dp}.
Note that it requires $\widetilde\bigO(K)$
times the time and space of the max oracle.
\paragraph{Exp Oracle}
The relationship of the exp oracle with CRFs (Prop.~\ref{prop:smoothing:exp-crf})
leads directly to
Algo.~\ref{algo:dp:supp_exp}, which is
based on marginal computations from the sum-product algorithm.
\begin{algorithm}[tb]
\caption{Max-product (Viterbi) algorithm for chain graphs
}
\label{algo:dp:max:chain}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Augmented score function $\psi(\cdot, \cdot; \wv)$ defined on a chain graph $\mcG$.
\STATE Set $\pi_1(y_1) \leftarrow \psi_1(y_1)$ for all $y_1 \in \mcY_1$.
\FOR{$v= 2, \cdots, p$}
\STATE For all $y_v \in \mcY_v$, set
\begin{align} \label{eq:dp:viterbi:update}
\pi_{v}(y_v) \leftarrow \psi_v(y_v) + \max_{y_{v-1} \in \mcY_{v-1}}
\left\{ \pi_{v-1}(y_{v-1}) + \psi_{v, v-1}(y_v, y_{v-1}) \right\} \,.
\end{align}
\STATE Assign to $\delta_v(y_v)$ the $y_{v-1}$
that attains the $\max$ above for each $y_v \in \mcY_v$.
\ENDFOR
\STATE Set $\psi^* \leftarrow \max_{y_p \in \mcY_p} \pi_p(y_p)$
and store the maximizing assignment of $y_p$ in $y_p^*$.
\FOR{$v= p-1, \cdots, 1$}
\STATE Set $y_v^* \leftarrow \delta_{v+1}( y_{v+1})$.
\ENDFOR
\RETURN $\psi^*, \yv^*:=(y_1^*, \cdots, y_p^*) $.
\end{algorithmic}
\end{algorithm}
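To make the recursion concrete, here is a minimal NumPy sketch of Algo.~\ref{algo:dp:max:chain}; the list-of-arrays layout for the score tables, with \texttt{unary[v][y]} $= \psi_{v+1}(y)$ and \texttt{pairwise[v-1][y, y\_prev]} for the edge $(v-1, v)$, is an assumption of this sketch, not notation from the text.

```python
import numpy as np

def viterbi_chain(unary, pairwise):
    # unary[v][y] = psi_v(y); pairwise[v-1][y, y_prev] = psi_{v,v-1}(y, y_prev)
    p = len(unary)
    pi = unary[0].copy()          # pi_1(y_1) = psi_1(y_1)
    back = []                     # backpointers delta_v
    for v in range(1, p):
        scores = pi[None, :] + pairwise[v - 1]   # scores[y, y_prev]
        back.append(scores.argmax(axis=1))       # delta_v(y) = best y_prev
        pi = unary[v] + scores.max(axis=1)       # forward update
    best_score, y = float(pi.max()), int(pi.argmax())
    path = [y]
    for delta in reversed(back):                 # backward pass
        path.append(int(delta[path[-1]]))
    path.reverse()
    return best_score, path
```

With zero pairwise scores the recursion decouples and the returned path is just a per-node argmax of the unary scores.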
\begin{algorithm}[tb]
\caption{Top-$K$ max-product (top-$K$ Viterbi) algorithm for chain graphs}
\label{algo:dp:topK:chain}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Augmented score function $\psi(\cdot, \cdot ; \wv)$ defined on chain graph $\mcG$,
integer $K>0$.
\STATE For $k=1,\cdots, K$, set $\pi_1\pow{k}(y_1) \leftarrow \psi_1(y_1)$ if $k=1$ and $-\infty$ otherwise for all $y_1 \in \mcY_1$.
\FOR{$v= 2, \cdots, p$ and $k=1,\cdots, K$}
\STATE For all $y_v \in \mcY_v$, set
\begin{align} \label{eq:dp:viterbi:topk:update}
\pi_{v}\pow{k}(y_v) \leftarrow \psi_v(y_v) + \maxK{k}{y_{v-1} \in \mcY_{v-1}, \ell \in [K]}
\left\{ \pi_{v-1}\pow{\ell}(y_{v-1}) + \psi_{v, v-1}(y_v, y_{v-1}) \right\} \,.
\end{align}
\STATE Assign to $\delta_v\pow{k}(y_v), \kappa_v\pow{k}(y_v)$ the $y_{v-1}, \ell$
that attain the $\max\pow{k}$ above for each $y_v \in \mcY_v$.
\ENDFOR
\STATE For $k=1,\cdots, K$, set $\psi\pow{k} \leftarrow \maxK{k}{y_p \in \mcY_p, \ell \in [K]} \pi_p\pow{\ell}(y_p)$
and store in $y_p\pow{k}, \ell\pow{k}$ respectively the maximizing assignments of $y_p, \ell$.
\FOR{$v= p-1, \cdots, 1$ and $k=1,\cdots, K$}
\STATE Set $y_v\pow{k} \leftarrow \delta_{v+1}\pow{\ell\pow{k}} \big( y_{v+1}\pow{k} \big)$ and
$\ell\pow{k} \leftarrow \kappa_{v+1}\pow{\ell\pow{k}} \big( y_{v+1}\pow{k} \big)$.
\ENDFOR
\RETURN $\left\{ \psi\pow{k}, \yv\pow{k}:=(y_1\pow{k}, \cdots, y_p\pow{k}) \right\}_{k=1}^K$.
\end{algorithmic}
\end{algorithm}
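A sketch of the top-$K$ recursion \eqref{eq:dp:viterbi:topk:update} in the same hypothetical list-of-arrays score layout as before; the array \texttt{ptrs} plays the role of both backpointers $\delta_v\pow{k}$ and $\kappa_v\pow{k}$.

```python
import numpy as np

def top_k_viterbi_chain(unary, pairwise, K):
    # pi[k, y] tracks the k-th best score of prefixes ending in label y
    p = len(unary)
    pi = np.full((K, len(unary[0])), -np.inf)
    pi[0] = unary[0]
    back = []
    for v in range(1, p):
        nv = len(unary[v])
        new_pi = np.full((K, nv), -np.inf)
        ptrs = np.zeros((K, nv, 2), dtype=int)   # (y_prev, ell) per (k, y)
        for y in range(nv):
            cand = pi + pairwise[v - 1][y][None, :]      # shape (K, n_prev)
            order = np.argsort(cand, axis=None)[::-1][:K]
            for k, idx in enumerate(order):
                ell, y_prev = np.unravel_index(idx, cand.shape)
                new_pi[k, y] = unary[v][y] + cand[ell, y_prev]
                ptrs[k, y] = (y_prev, ell)
        pi, back = new_pi, back + [ptrs]
    order = np.argsort(pi, axis=None)[::-1][:K]          # K best terminal states
    results = []
    for idx in order:
        ell, y = np.unravel_index(idx, pi.shape)
        score, path = float(pi[ell, y]), [int(y)]
        for ptrs in reversed(back):                      # follow (y_prev, ell) pointers
            y_prev, ell = ptrs[ell, path[-1]]
            path.append(int(y_prev))
        path.reverse()
        results.append((score, path))
    return results
```

The per-entry sort over $K \abs{\mcY_{v-1}}$ candidates reflects the $\widetilde\bigO(K)$ overhead over the max oracle noted in the text.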
\begin{algorithm}[tb]
\caption{Entropy smoothed max-product algorithm}
\label{algo:dp:supp_exp}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Augmented score function $\psi(\cdot, \cdot ; \wv)$ defined on
tree structured graph $\mcG$,
$\mu > 0$.
\STATE Compute the log-partition function and marginals using the sum-product algorithm
(Algo.~\ref{algo:dp:supp_sum-prod} in Appendix~\ref{sec:a:dp})
\[
A_{\psi/\mu}, \{P_v \text{ for } v \in \mcV\}, \{ P_{v, v'} \text{ for } (v, v') \in \mcE \}
\leftarrow \textsc{SumProduct}\left( \tfrac{1}{\mu} \psi(\cdot \, ; \wv), \mcG \right) \,.
\]
\STATE Set $f_{-\mu H}(\wv) \leftarrow \mu A_{\psi /\mu}$ and
\[
\grad f_{-\mu H}(\wv) \leftarrow \sum_{v \in \mcV} \sum_{y_v \in \mcY_v} P_v(y_v) \grad \psi_v(y_v ; \wv)
+ \sum_{(v, v') \in \mcE} \sum_{y_v \in \mcY_v} \sum_{y_{v'} \in \mcY_{v'}} P_{v,v'}(y_v, y_{v'})\grad \psi_{v, v'}(y_v, y_{v'} ; \wv) \,.
\]
\label{line:algo:dp:exp:gradient}
\RETURN $f_{-\mu H}(\wv), \grad f_{-\mu H}(\wv)$.
\end{algorithmic}
\end{algorithm}
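For chain graphs, the sum-product computation inside Algo.~\ref{algo:dp:supp_exp} specializes to the forward-backward algorithm. The following sketch, under the same assumed list-of-arrays score layout as in the chain algorithms, returns $\mu A_{\psi/\mu}$ and the unary marginals $P_v$; the pairwise marginals $P_{v,v'}$, also needed for the gradient on Line~\ref{line:algo:dp:exp:gradient}, are omitted for brevity.

```python
import numpy as np

def exp_oracle_chain(unary, pairwise, mu):
    # forward-backward on a chain: returns mu * A_{psi/mu} and unary marginals P_v
    def lse(a, axis):
        m = a.max(axis=axis, keepdims=True)  # stabilized log-sum-exp
        return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)
    p = len(unary)
    alpha = [unary[0] / mu]                       # forward log-messages
    for v in range(1, p):
        alpha.append(unary[v] / mu
                     + lse(alpha[-1][None, :] + pairwise[v - 1] / mu, axis=1))
    beta = [np.zeros_like(u) for u in unary]      # backward log-messages
    for v in range(p - 2, -1, -1):
        beta[v] = lse(beta[v + 1][:, None] + unary[v + 1][:, None] / mu
                      + pairwise[v] / mu, axis=0)
    log_Z = lse(alpha[-1] + beta[-1], axis=0)     # A_{psi/mu}
    marginals = [np.exp(a + b - log_Z) for a, b in zip(alpha, beta)]
    return mu * float(log_Z), marginals
```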
\begin{remark}
We note that clique trees allow the generalization of the
algorithms of this section to general graphs with cycles.
However, the construction of a clique tree requires time and space
exponential in the {\em treewidth} of the graph.
\end{remark}
\begin{example}
Consider the task of sequence tagging from Example~\ref{example:inf_oracles:viterbi_example}.
The Viterbi algorithm (Algo.~\ref{algo:dp:max:chain}) maintains
a table $\pi_v(y_v)$, which stores the score of the best length-$v$ prefix ending in label $y_v$.
On the other hand, the top-$K$ Viterbi algorithm (Algo.~\ref{algo:dp:topK:chain})
must store in $\pi_v\pow{k}(y_v)$ the score of the $k$th best length-$v$ prefix that ends in $y_v$ for each $k \in [K]$.
In the vanilla Viterbi algorithm, the entry $\pi_v(y_v)$ is updated by looking at the previous column
$\pi_{v-1}$
following~\eqref{eq:dp:viterbi:update}.
Compare this to update \eqref{eq:dp:viterbi:topk:update}
of the top-$K$ Viterbi algorithm.
In this case, the exp oracle is implemented by the forward-backward algorithm, a specialization of the
sum-product algorithm to chain graphs.
\end{example}
\subsubsection{Complexity of Inference Oracles}
The next proposition presents the correctness guarantee and complexity of
each of the aforementioned algorithms. Its proof has been placed in Appendix~\ref{sec:a:dp}.
\begin{proposition} \label{prop:dp:main}
Consider as inputs an augmented score function $\psi(\cdot, \cdot ; \wv)$ defined on a tree structured graph $\mcG$,
an integer $K>0$ and a smoothing parameter $\mu > 0$.
\begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=iii, leftmargin=*]
\item The output $(\psi^*, \yv^*)$ of the max-product algorithm
(Algo.~\ref{algo:dp:max:chain} for the special case when $\mcG$ is chain structured
or Algo.~\ref{algo:dp:supp} from Appendix~\ref{sec:a:dp} in general) satisfies
$\psi^* = \psi(\yv^* ; \wv) = \max_{\yv \in \mcY} \psi(\yv ; \wv)$.
Thus, the pair $\big(\psi^*, \grad \psi(\yv^* ; \wv)\big)$ is a correct implementation of the max oracle.
It requires time $\bigO(p \max_{v\in\mcV} \abs{\mcY_v}^2)$
and space $\bigO(p \max_{v\in\mcV} \abs{\mcY_v})$.
\item The output $\{ \psi\pow{k}, \yv\pow{k} \}_{k=1}^K$
of the top-$K$ max-product algorithm
(Algo.~\ref{algo:dp:topK:chain} for the special case when $\mcG$ is chain structured
or Algo.~\ref{algo:dp:topK:main} from Appendix~\ref{sec:a:dp} in general)
satisfies $\psi\pow{k} = \psi(\yv\pow{k} ; \wv) = \max\pow{k}_{\yv \in \mcY} \psi(\yv ; \wv)$.
Thus, the top-$K$ max-product algorithm followed by a projection onto the simplex
(Algo.~\ref{algo:smoothing:top_K_oracle} in Appendix~\ref{sec:a:smoothing})
is a correct implementation of the top-$K$ oracle.
It requires time $\bigO(pK\log K \max_{v\in\mcV} \abs{\mcY_v}^2)$
and space $\bigO(p K \max_{v\in\mcV} \abs{\mcY_v})$.
\label{prop:dp:main:part:topk}
\item Algo.~\ref{algo:dp:supp_exp}
returns $\big(f_{-\mu H}(\wv), \grad f_{-\mu H}(\wv)\big)$.
Thus, Algo.~\ref{algo:dp:supp_exp} is a correct implementation of the exp oracle.
It requires time $\bigO(p \max_{v\in\mcV} \abs{\mcY_v}^2)$
and space $\bigO(p \max_{v\in\mcV} \abs{\mcY_v})$.
\end{enumerate}
\end{proposition}
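The projection onto the simplex invoked in part (ii) is deferred to Algo.~\ref{algo:smoothing:top_K_oracle} in the appendix; assuming it is the usual Euclidean projection, a standard sort-based sketch reads:

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto {x : x >= 0, sum(x) = 1} via the sort-based rule
    u = np.sort(v)[::-1]                       # sorted descending
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    theta = css[rho] / (rho + 1.0)             # threshold to subtract
    return np.maximum(v - theta, 0.0)
```

The $\bigO(K \log K)$ cost of this projection on $K$ scores is dominated by the top-$K$ max-product pass itself.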
\subsection{Inference Oracles in Loopy Graphs} \label{subsec:smooth_inference_loopy}
For general loopy graphs with high treewidth,
the inference problem \eqref{eq:pgm:inference} is NP-hard \citep{cooper1990computational}.
In particular cases, graph cut, matching or search algorithms
can be used for exact inference in dense loopy graphs, and therefore,
to implement the max oracle as well (cf. Lemma~\ref{lemma:smoothing:first-order-oracle}\ref{lem:foo:max}).
In each of these cases,
we find that the top-$K$ oracle can be implemented, but the exp oracle is intractable.
Appendix~\ref{sec:a:smooth:loopy} contains a review of the algorithms and guarantees referenced
in this section.
\subsubsection{Inference Oracles using Max-Marginals}
We now define a {\em max-marginal},
which is a constrained maximum of the augmented score $\psi$.
\begin{definition}
The max-marginal of $\psi$ relative to a variable $y_v$ is defined,
for $j \in \mcY_v$, as
\begin{align}
\psi_{v; j}(\wv) := \max_{\substack{\yv \in \mcY \,: \, y_v = j}} \psi(\yv ; \wv)\, .
\end{align}
\end{definition}
\noindent
In cases where exact inference is tractable using graph cut or matching algorithms,
it is possible to extract max-marginals as well.
This, as we shall see next, allows the implementation of the max and top-$K$ oracles.
When the augmented score function $\psi$ is {\em unambiguous}, i.e.,
no two distinct $\yv_1, \yv_2 \in \mcY$ have the same augmented score,
the output $\yv^*(\wv)$ is unique and can be decoded from the max-marginals as
(see \citet{pearl1988probabilistic,dawid1992applications} or Thm.~\ref{thm:a:loopy:decoding}
in Appendix~\ref{sec:a:smooth:loopy})
\begin{align} \label{eq:max-marg:defn}
y_v^*(\wv) = \argmax_{j \in \mcY_v} \psi_{v ; j}(\wv) \,.
\end{align}
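On a chain, max-marginals can be computed by forward and backward max-sum message passes, after which the decoding rule above is a per-node argmax. A sketch under the same assumed list-of-arrays score layout as in the chain algorithms:

```python
import numpy as np

def max_marginals_chain(unary, pairwise):
    # psi_{v;j} = (best prefix score ending in j) + (best suffix score from j)
    p = len(unary)
    fwd = [unary[0]]
    for v in range(1, p):
        fwd.append(unary[v] + (fwd[-1][None, :] + pairwise[v - 1]).max(axis=1))
    bwd = [np.zeros_like(u) for u in unary]
    for v in range(p - 2, -1, -1):
        bwd[v] = (bwd[v + 1][:, None] + unary[v + 1][:, None]
                  + pairwise[v]).max(axis=0)
    return [f + b for f, b in zip(fwd, bwd)]

def decode_from_max_marginals(mm):
    # valid when psi is unambiguous: y*_v = argmax_j psi_{v;j}
    return [int(np.argmax(m)) for m in mm]
```

Note that every max-marginal table attains the same maximum, namely the optimal score $\max_{\yv} \psi(\yv ; \wv)$.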
If one has access to an algorithm $\mcM$ that can compute max-marginals,
the top-$K$ oracle is also easily implemented via the {\em Best Max-Marginal First (BMMF)}
algorithm of \citet{yanover2004finding}.
This algorithm requires computations of $2K$ sets of max-marginals,
where a {\em set} of max-marginals refers to max-marginals for all $y_v$ in $\yv$.
Therefore,
the BMMF algorithm followed by a projection onto the simplex
(Algo.~\ref{algo:smoothing:top_K_oracle} in Appendix~\ref{sec:a:smoothing})
is a correct implementation of the top-$K$ oracle at a computational cost of
$2K$ sets of max-marginals.
The BMMF algorithm and its guarantee are recalled in Appendix~\ref{sec:a:bmmf} for completeness.
\paragraph{Graph Cut and Matching Inference}
\citet{kolmogorov2004energy} showed that submodular energy functions \citep{lovasz1983submodular}
over binary variables can be efficiently minimized exactly via a minimum cut algorithm.
For a class of alignment problems, e.g., \citet{taskar2005discriminative},
inference amounts to finding the best bipartite matching.
In both these cases, max-marginals can be computed exactly and efficiently
by combinatorial algorithms.
This gives us a way to implement the max and top-$K$ oracles.
However, in both settings,
computing the log-partition function $A_\psi(\wv)$ of a CRF with score $\psi$
is known to be \#P-complete~\citep{jerrum1993polynomial}.
Prop.~\ref{prop:smoothing:exp-crf} immediately extends this result to the exp oracle.
This discussion is summarized by the following proposition, whose proof is provided in Appendix~\ref{sec:a:proof-prop}.
\begin{proposition} \label{prop:smoothing:max-marg:all}
Consider as inputs an augmented score function $\psi(\cdot, \cdot ; \wv)$,
an integer $K>0$ and a smoothing parameter $\mu > 0$.
Further, suppose that $\psi$ is unambiguous, that is,
$\psi(\yv' ; \wv) \neq \psi(\yv'' ;\wv)$ for all distinct $\yv', \yv'' \in \mcY$.
Consider one of the two settings:
\begin{enumerate}[label={\upshape(\Alph*)}, align=left, leftmargin=*]
\item the output space $\mcY_v = \{0,1\}$ for each $v \in \mcV$, and the function
$-\psi$ is submodular (see Appendix~\ref{sec:a:graph_cuts} and, in particular, \eqref{eq:top_k_map:submodular}
for the precise definition), or,
\label{part:prop:max-marg:cuts}
\item the augmented score corresponds to an alignment task where the
inference problem~\eqref{eq:pgm:inference} corresponds to a
maximum weight bipartite matching (see Appendix~\ref{sec:a:graph_matchings} for a precise definition).
\label{part:prop:max-marg:matching}
\end{enumerate}
In these cases, we have the following:
\begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=iii, leftmargin=*]
\item The max oracle can be implemented at a
computational complexity of $\bigO(p)$ minimum cut computations in Case~\ref{part:prop:max-marg:cuts},
and in time $\bigO(p^3)$ in Case~\ref{part:prop:max-marg:matching}.
\item The top-$K$ oracle can be implemented at a
computational complexity of $\bigO(pK)$ minimum cut computations in Case~\ref{part:prop:max-marg:cuts},
and in time $\bigO(p^3K)$ in Case~\ref{part:prop:max-marg:matching}.
\item The exp oracle is \#P-complete in both cases.
\end{enumerate}
\end{proposition}
Prop.~\ref{prop:smoothing:max-marg:all} is loose in that the max oracle can be implemented with just one
minimum cut computation instead of $p$ in Case~\ref{part:prop:max-marg:cuts}~\citep{kolmogorov2004energy}.
\subsubsection{Branch and Bound Search}
Max oracles implemented via search algorithms can often be extended to implement the top-$K$ oracle.
We restrict our attention to best-first branch and bound search such as the
celebrated Efficient Subwindow Search \citep{lampert2008beyond}.
Branch and bound methods partition the search space into disjoint subsets,
while keeping an upper bound
$\widehat \psi: \mcX \times 2^{\mcY} \to \reals$
on the maximal augmented score over each subset $\widehat \mcY \subseteq \mcY$.
Using a best-first strategy, promising parts of the search space are explored first.
Parts of the search space whose upper bound indicates that they cannot contain the maximum
do not have to be examined further.
The top-$K$ oracle is implemented by simply
continuing the search procedure until $K$ outputs have been produced - see
Algo.~\ref{algo:top_k:bb} in Appendix~\ref{sec:a:bb_search}.
Both the max oracle and the top-$K$ oracle
can degenerate to an exhaustive search in the worst case, so we do not have sharp running time
guarantees. However, we have the following correctness guarantee.
\begin{proposition} \label{prop:smoothing:bb-search}
Consider an augmented score function $\psi(\cdot, \cdot ; \wv)$,
an integer $K > 0$ and a smoothing parameter $\mu > 0$.
Suppose the upper bound function $\widehat \psi(\cdot, \cdot ; \wv): \mcX \times 2^{\mcY} \to \reals$
satisfies the following properties:
\begin{enumerate}[label=(\alph*), align=left, widest=a, leftmargin=*]
\item $\widehat \psi(\widehat \mcY ; \wv)$ is finite for every $\widehat \mcY \subseteq \mcY$,
\item $\widehat \psi(\widehat \mcY ; \wv) \ge \max_{\yv \in \widehat \mcY} \psi(\yv ; \wv)$
for all $\widehat \mcY \subseteq \mcY$, and,
\item $\widehat \psi(\{\yv\} ; \wv) = \psi(\yv ; \wv)$ for every $\yv \in \mcY$.
\end{enumerate}
Then, we have the following:
\begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=ii, leftmargin=*]
\item Algo.~\ref{algo:top_k:bb} with $K=1$ is a correct implementation of the max oracle.
\item Algo.~\ref{algo:top_k:bb} followed by a projection onto the simplex
(Algo.~\ref{algo:smoothing:top_K_oracle} in Appendix~\ref{sec:a:smoothing}) is a correct implementation of the top-$K$ oracle.
\end{enumerate}
\end{proposition}
\noindent
See Appendix~\ref{sec:a:bb_search} for a proof.
The discrete structure that allows inference via branch and bound search cannot be
leveraged to implement the exp oracle.
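A generic best-first sketch of this strategy follows, with hypothetical callables \texttt{bound} and \texttt{split} standing in for the problem-specific upper bound $\widehat\psi$ and the partitioning step; the bound is assumed exact on singletons, as in condition (c) of Prop.~\ref{prop:smoothing:bb-search}.

```python
import heapq
import itertools

def branch_and_bound_top_k(root, score, bound, split, K):
    # best-first search: pop the subset with the largest upper bound; a popped
    # singleton is certified at least as good as everything still on the heap
    counter = itertools.count()               # tie-breaker for the heap
    heap = [(-bound(root), next(counter), root)]
    results = []
    while heap and len(results) < K:
        _, _, subset = heapq.heappop(heap)
        if len(subset) == 1:
            y = subset[0]
            results.append((score(y), y))
        else:
            for part in split(subset):
                heapq.heappush(heap, (-bound(part), next(counter), part))
    return results
```

Setting $K=1$ recovers the max oracle; continuing to pop yields the next-best outputs in order, exactly as in the text.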
\subsection{{Casimir}: Catalyst with Smoothing}
The Catalyst~\citep{lin2017catalyst} approach minimizes regularized objectives centered around the current iterate.
The algorithm proceeds by computing approximate proximal point steps instead of the classical (sub)-gradient steps.
A proximal point step from a point $\wv$ with step-size $\kappa^{-1}$ is defined as the minimizer of
\begin{equation}\label{eq:prox_point}
\min_{\zv \in \reals^m} F(\zv) + \frac{\kappa}{2}\normasq{2}{\zv-\wv},
\end{equation}
which can also be seen as a gradient step on the Moreau envelope of $F$ - see \citet{lin2017catalyst} for a detailed discussion.
While solving the subproblem~\eqref{eq:prox_point} might be as hard as the original problem, we only
require an approximate solution returned by a given optimization method $\mathcal{M}$.
The Catalyst approach is then an inexact accelerated proximal point algorithm
that carefully mixes approximate proximal point steps with the extrapolation scheme of \citet{nesterov1983method}.
The {Casimir}{} scheme extends this approach to non-smooth optimization.
For the overall method to be efficient, subproblems~\eqref{eq:prox_point} must have a low complexity.
That is, there must exist an optimization algorithm $\mathcal{M}$ that solves them at a linear rate of convergence.
For the {Casimir}{} approach to handle non-smooth objectives, we need not only to regularize the objective
but also to smooth it. To this end we define
\[
F_{\mu \omega}(\wv) := \frac{1}{n}\sum_{i=1}^n h_{\mu \omega}(\Am\pow{i} \wv + {\bm b}\pow{i}) + \frac{\lambda}{2} \normasq{2}{\wv}
\]
as a smooth approximation of the objective $F$, and,
\[
F_{\mu \omega, \kappa}(\wv; \zv) := \frac{1}{n}\sum_{i=1}^n h_{\mu \omega}(\Am\pow{i} \wv + {\bm b}\pow{i}) + \frac{\lambda}{2} \normasq{2}{\wv} + \frac{\kappa}{2}\normasq{2}{\wv-\zv}
\]
as a smooth and regularized approximation of the objective centered around a given point $\zv \in \reals^d$.
While the original Catalyst algorithm considered a fixed regularization term $\kappa$,
we vary $\kappa$ and $\mu$ along the iterations.
This enables us to get adaptive smoothing strategies.
The overall method is presented in Algo.~\ref{algo:catalyst}. We first analyze in Sec.~\ref{sec:catalyst:analysis} its complexity
for a generic linearly convergent algorithm $\mcM$.
Thereafter, in Sec.~\ref{sec:catalyst:total_compl}, we compute the total complexity with SVRG~\citep{johnson2013accelerating}
as $\mcM$.
Before that, we specify two practical aspects of the implementation: a proper stopping criterion~\eqref{eq:stopping_criterion}
and a good initialization of subproblems (Line~\ref{line:algo:c:prox_point}).
\paragraph{Stopping Criterion}
Following~\citet{lin2017catalyst}, we
solve subproblem $k$ in Line~\ref{line:algo:c:prox_point} to a degree of relative accuracy specified by
$\delta_k \in [0, 1)$.
In view of the $(\lambda+\kappa_k)$-strong convexity of $F_{\mu_k\omega, \kappa_k}(\cdot\,; \zv_{k-1})$, the functional gap can be controlled by the norm of the gradient;
precisely, it can be seen that $\normasq{2}{\grad F_{\mu_k\omega, \kappa_k}(\widehat\wv; \zv_{k-1})}
\le (\lambda+\kappa_k)\delta_k \kappa_k \normasq{2}{\widehat \wv - \zv_{k-1}}$
is a sufficient condition for
the stopping criterion \eqref{eq:stopping_criterion}.
A practical alternate stopping criterion proposed by \citet{lin2017catalyst} is to fix an iteration budget $T_{\mathrm{budget}}$
and run the inner solver $\mcM$ for exactly $T_{\mathrm{budget}}$ steps.
We do not have a theoretical analysis for this scheme but find that it works well in experiments.
\paragraph{Warm Start of Subproblems}
The rate of convergence of first-order optimization algorithms depends on the initialization,
so we must warm start $\mcM$ at an appropriate initial point in order to obtain
the best convergence on subproblem~\eqref{eq:prox_point_algo} in Line~\ref{line:algo:c:prox_point} of Algo.~\ref{algo:catalyst}.
We advocate the use of the prox center $\zv_{k-1}$ in iteration $k$ as the warm start strategy.
We also experiment with other warm start strategies in Section~\ref{sec:expt}.
\begin{algorithm}[tb]
\caption{The {Casimir}{} algorithm}
\label{algo:catalyst}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Smoothable objective $F$ of the form \eqref{eq:cvx_pb} with $h$ simple,
smoothing function $\omega$,
linearly convergent algorithm $\mcM$,
non-negative and non-increasing sequence of smoothing parameters $(\mu_k)_{k \ge 1}$,
positive and non-decreasing sequence of regularization parameters $(\kappa_k)_{k\ge1}$,
non-negative sequence of relative target accuracies $(\delta_k)_{k\ge 1}$ and,
initial point $\wv_0$, $\alpha_0 \in (0, 1)$,
time horizon $K$.
\STATE {\bfseries Initialize:} $\zv_0 = \wv_0$.
\FOR{$k=1$ \TO $K$}
\STATE Using $\mcM$ with $\zv_{k-1}$ as the starting point, find \label{line:algo:c:prox_point}
$\wv_{k} \approx \argmin_{\wv\in\reals^d} F_{\mu_k \omega, \kappa_k}(\wv; \zv_{k-1})$ where
\begin{equation}\label{eq:prox_point_algo}
F_{\mu_k \omega, \kappa_k}(\wv; \zv_{k-1}) := \frac{1}{n}\sum_{i=1}^n h_{\mu_k \omega}(\Am\pow{i} \wv + {\bm b}\pow{i}) + \frac{\lambda}{2} \normasq{2}{\wv} + \frac{\kappa_k}{2}\normasq{2}{\wv- \zv_{k-1}}
\end{equation}
such that
\begin{align}\label{eq:stopping_criterion}
F_{\mu_k\omega, \kappa_k}(\wv_k;\zv_{k-1}) - \min_\wv F_{\mu_k\omega, \kappa_k}(\wv;\zv_{k-1})\leq \tfrac{\delta_k\kappa_k}{2} \normasq{2}{\wv_k - \zv_{k-1}}
\end{align}
\STATE Solve for $\alpha_k \geq 0$
\begin{align} \label{eq:c:update_alpha}
\alpha_k^2 (\kappa_{k+1} + \lambda) = (1-\alpha_k) \alpha_{k-1}^2 (\kappa_k + \lambda) + \alpha_k \lambda.
\end{align}
\STATE Set
\begin{align} \label{eq:c:update_support}
\zv_k = \wv_k + \beta_k (\wv_k - \wv_{k-1}),
\end{align}
where
\begin{align} \label{eq:c:update_beta}
\beta_k = \frac{ \alpha_{k-1}(1-\alpha_{k-1}) (\kappa_k + \lambda) }
{ \alpha_{k-1}^2 (\kappa_k + \lambda) + \alpha_k(\kappa_{k+1} + \lambda) }.
\end{align}
\ENDFOR
\RETURN $\wv_K$.
\end{algorithmic}
\end{algorithm}
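To illustrate the control flow of Algo.~\ref{algo:catalyst}, here is a self-contained sketch in the constant-parameter setting $\mu_k = \mu$, $\kappa_k = \kappa$, $\alpha_0 = \sqrt{q}$ of Cor.~\ref{cor:c:outer_sc}. Plain gradient descent stands in for the linearly convergent solver $\mcM$, a fixed iteration budget replaces the stopping criterion \eqref{eq:stopping_criterion}, and the loss is an entropy-smoothed max of linear functions; all of these concrete choices are illustrative assumptions, not the general method.

```python
import numpy as np

def smoothed_max_grad(A, b, w, mu):
    # gradient of the entropy-smoothed max h_mu(Aw + b): a softmax at temperature mu
    s = (A @ w + b) / mu
    p = np.exp(s - s.max())
    p /= p.sum()
    return A.T @ p

def casimir(A, b, lam, w0, kappa, mu, n_outer, inner_steps=200, lr=0.03):
    q = lam / (lam + kappa)
    alpha = np.sqrt(q)                       # alpha_0 = sqrt(q)
    w = w0.astype(float)
    z = w.copy()
    for _ in range(n_outer):
        # approximate prox step: gradient descent on F_{mu omega, kappa}(.; z),
        # warm started at the prox center z
        wk = z.copy()
        for _ in range(inner_steps):
            g = smoothed_max_grad(A, b, wk, mu) + lam * wk + kappa * (wk - z)
            wk = wk - lr * g
        # solve alpha_k^2 (kappa+lam) = (1-alpha_k) alpha_{k-1}^2 (kappa+lam) + alpha_k lam
        ap = alpha
        aa = kappa + lam
        bb = ap ** 2 * (kappa + lam) - lam
        alpha = (-bb + np.sqrt(bb * bb + 4 * aa * ap ** 2 * (kappa + lam))) / (2 * aa)
        beta = (ap * (1 - ap) * (kappa + lam)
                / (ap ** 2 * (kappa + lam) + alpha * (kappa + lam)))
        z = wk + beta * (wk - w)             # extrapolation step
        w = wk
    return w
```

With $\alpha_0 = \sqrt{q}$ and constant $\kappa$, the quadratic update keeps $\alpha_k = \sqrt{q}$ for all $k$, matching the corollary's setting.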
\subsection{Convergence Analysis of Casimir} \label{sec:catalyst:analysis}
We first state the outer loop complexity results of Algo.~\ref{algo:catalyst} for any generic
linearly convergent algorithm $\mcM$ in Sec.~\ref{sec:catalyst:outer_compl} and prove them in Sec.~\ref{subsec:c:proof}.
Then, we consider the complexity of each inner optimization problem~\eqref{eq:prox_point_algo} in Sec.~\ref{sec:catalyst:inner_compl}
based on properties of $\mcM$.
\subsubsection{Outer Loop Complexity Results} \label{sec:catalyst:outer_compl}
The following theorem states the convergence of the algorithm for a general choice of parameters, where we denote $\wv^* \in \argmin_{\wv\in\reals^d} F(\wv)$ and $F^* = F(\wv^*)$.
\begin{theorem} \label{thm:catalyst:outer}
Consider Problem~\eqref{eq:cvx_pb_finite_sum}.
Suppose
$\delta_k \in [0, 1)$ for all $k \ge 1$, the sequence $(\mu_k)_{k\ge 1}$ is non-negative and non-increasing,
and the sequence $(\kappa_k)_{k \ge 1}$ is strictly positive and non-decreasing.
Further, suppose the smoothing function $\omega: \dom h^* \to \reals$ satisfies
$-D_\omega \le \omega(\uv) \le 0$ for all $\uv \in \dom h^*$ and that
$\alpha_0^2 \ge \lambda / (\lambda + \kappa_1)$.
Then, the sequence $(\alpha_k)_{k \ge 0}$ generated by Algo.~\ref{algo:catalyst}
satisfies $0 < \alpha_k \le \alpha_{k-1} < 1$ for all $k \ge 1$.
Furthermore, the sequence $(\wv_{k})_{k \ge 0}$
of iterates generated by Algo.~\ref{algo:catalyst} satisfies
\begin{align} \label{thm:c:main:main}
F(\wv_k) - F^* \le
\frac{\mcA_0^{k-1}}{\mcB_1^k} \Delta_0 + \mu_k D_\omega
+ \sum_{j=1}^k \frac{\mcA_j^{k-1}}{\mcB_j^k} \left( \mu_{j-1} - (1-\delta_j)\mu_j \right) D_\omega
\,,
\end{align}
where $\mcA_i^j := \prod_{r=i}^j (1-\alpha_r)$,
$\mcB_i^j := \prod_{r=i}^j (1-\delta_r)$,
$\Delta_0 := F(\wv_0) - F^* + \frac{(\kappa_1 + \lambda) \alpha_0^2 - \lambda \alpha_0 } {2(1 - \alpha_0)} \normasq{2}{\wv_0 - \wv^*}$ and
$\mu_0 := 2\mu_1$.
\end{theorem}
Before giving its proof, we present various parameter strategies as corollaries.
Table~\ref{tab:catalyst_corollaries_summary} summarizes the parameter settings and the rates obtained for each setting.
Overall, the target accuracies $\delta_k$ are chosen such that $\mcB_j^k$ is a constant and
the parameters $\mu_k$
and $\kappa_k$ are then carefully chosen
for an almost parameter-free algorithm with the right rate of convergence.
Proofs of these corollaries are provided in Appendix~\ref{subsec:c:proofs_missing_cor}.
The first corollary considers the strongly convex case ($\lambda > 0$) with constant smoothing $\mu_k=\mu$,
assuming that $\eps$ is known {\em a priori}. We note that this is, up to constants, the same complexity obtained by
the original Catalyst scheme on a fixed smooth approximation $F_{\mu\omega}$ with $\mu = \bigO(\eps D_\omega)$.
\begin{corollary} \label{cor:c:outer_sc}
Consider the setting of Thm.~\ref{thm:catalyst:outer}.
Let $q = {\lambda}/(\lambda + \kappa)$.
Suppose $\lambda > 0$ and $\mu_k = \mu$, $\kappa_k = \kappa$, for all $k \ge 1$. Choose $\alpha_0 = \sqrt{q}$ and,
$\delta_k = {\sqrt{q}}/({2 - \sqrt{q}}) \,.$
Then, we have,
\begin{align*}
F(\wv_k) - F^* \le \frac{3 - \sqrt{q}}{1 - \sqrt{q}} \mu D_\omega +
2 \left( 1- \frac{\sqrt q}{2} \right)^k \left( F(\wv_0) - F^* \right) \,.
\end{align*}
\end{corollary}
\noindent
Next, we consider the strongly convex case where the target accuracy $\eps$ is not known in advance.
We let smoothing parameters $( \mu_k )_{k \ge 0}$ decrease over time to obtain an adaptive smoothing scheme
that gives progressively better surrogates of the original objective.
\begin{corollary} \label{cor:c:outer_sc:decreasing_mu_const_kappa}
Consider the setting of Thm.~\ref{thm:catalyst:outer}.
Let $q = {\lambda}/(\lambda + \kappa)$ and $\eta = 1 - {\sqrt q}/{2}$.
Suppose $\lambda > 0$ and
$\kappa_k = \kappa$, for all $k \ge 1$. Choose $\alpha_0 = \sqrt{q}$ and,
the sequences $(\mu_k)_{k \ge 1}$ and $(\delta_k)_{k \ge 1}$ as
\begin{align*}
\mu_k = \mu \eta^{{k}/{2}} \,, \qquad \text{and,} \qquad
\delta_k = \frac{\sqrt{q}}{2 - \sqrt{q}} \,,
\end{align*}
where $\mu > 0$ is any constant.
Then, we have,
\begin{align*}
F(\wv_k) - F^* \le \eta^{{k}/{2}} \left[
2 \left( F(\wv_0) - F^* \right)
+ \frac{\mu D_\omega}{1-\sqrt{q}} \left(2-\sqrt{q} + \frac{\sqrt{q}}{1 - \sqrt \eta} \right)
\right] \, .
\end{align*}
\end{corollary}
\noindent
The next two corollaries consider the unregularized problem, i.e., $\lambda = 0$ with constant and adaptive smoothing respectively.
\begin{corollary} \label{cor:c:outer_smooth}
Consider the setting of Thm.~\ref{thm:catalyst:outer}. Suppose $\mu_k = \mu$, $\kappa_k = \kappa$, for all $k \ge 1$
and $\lambda = 0$. Choose $\alpha_0 = (\sqrt{5}-1)/{2}$ and
$\delta_k = (k+1)^{-2} \,.$
Then, we have,
\begin{align*}
F(\wv_k) - F^* \le \frac{8}{(k+2)^2} \left( F(\wv_0) - F^* + \frac{\kappa}{2} \normasq{2}{\wv_0 - \wv^*} \right)
+ \mu D_\omega\left( 1 + \frac{12}{k+2} + \frac{30}{(k+2)^2} \right) \, .
\end{align*}
\end{corollary}
\begin{corollary} \label{cor:c:outer_smooth_dec_smoothing}
Consider the setting of Thm.~\ref{thm:catalyst:outer} with $\lambda = 0$.
Choose $\alpha_0 = (\sqrt{5}-1)/{2}$, and for some non-negative constants $\kappa, \mu$,
define sequences $(\kappa_k)_{k \ge 1}, (\mu_k)_{k \ge 1}, (\delta_k)_{k \ge 1}$ as
\begin{align*}
\kappa_k = \kappa \, k\,, \quad
\mu_k = \frac{\mu}{k} \quad \text{and,} \quad
\delta_k = \frac{1}{(k + 1)^2} \,.
\end{align*}
Then, for $k \ge 2$, we have,
\begin{align*}
F(\wv_k) - F^* \le
\frac{\log(k+1)}{k+1} \left(
2(F(\wv_0) - F^*) + \kappa \normasq{2}{\wv_0 - \wv^*} + 27 \mu D_\omega
\right) \,.
\end{align*}
For the first iteration (i.e., $k = 1$), this bound is off by a constant factor $1 / \log2$.
\end{corollary}
\begin{table*}[t!]
\caption{\small{Summary of outer iteration complexity for Algorithm~\ref{algo:catalyst}
for different parameter settings. We use shorthand
$\Delta F_0 := F(\wv_0) - F^*$ and {$\Delta_0 = \norma{2}{\wv_0 - \wv^*}$}.
Absolute constants are omitted from the rates.
\vspace{2mm}
}}
\begin{adjustbox}{width=\textwidth}
\label{tab:catalyst_corollaries_summary}
\centering
\begin{tabular}{|c||ccccc|c|c|}
\hline
{Cor.} & $\lambda>0$ & $\kappa_k$ & $\mu_k$ & $\delta_k$ & $\alpha_0$ & $F(\wv_k)-F^*$ & Remark \\ \hline\hline
\ref{cor:c:outer_sc} & Yes &
$\kappa$ & $\mu$ & $\frac{\sqrt{q}}{2-\sqrt{q}}$ & $\sqrt{q}$ &
$\left(1- \frac{\sqrt{q}}{2}\right)^k \Delta F_0 + \frac{\mu D}{1-\sqrt{q}}$
& $q = \frac{\lambda}{\lambda+\kappa}$
\\ \hline
\ref{cor:c:outer_sc:decreasing_mu_const_kappa} & Yes & $\kappa$ &
$\mu \left( 1 - \frac{\sqrt{q}}{2} \right)^{k/2}$ &
$\frac{\sqrt{q}}{2-\sqrt{q}}$ & $\sqrt{q}$ &
$\left(1- \frac{\sqrt{q}}{2}\right)^{k/2} \left( \Delta F_0 + \frac{\mu D}{1-\sqrt{q}} \right)$
& $q = \frac{\lambda}{\lambda+\kappa}$
\\ \hline
\rule{0pt}{12pt}
\ref{cor:c:outer_smooth} & No &
$\kappa$ & $\mu$ & $k^{-2}$ & $c$ &
$\frac{1}{k^2} \left(\Delta F_0 + \kappa \Delta_0^2 \right) + \mu D$
& $c = (\sqrt 5 - 1)/ 2$
\\[3pt]
\hline
\rule{0pt}{12pt}
\ref{cor:c:outer_smooth_dec_smoothing} & No &
$\kappa \, k$ & $\mu /k$ & $k^{-2}$ & $c$ &
$\frac{\log k}{k} (\Delta F_0 + \kappa \Delta_0^2 + \mu D )$
& $c = (\sqrt 5 - 1)/ 2$
\\[3pt]
\hline
\end{tabular}
\end{adjustbox}
\end{table*}
\subsubsection{Outer Loop Convergence Analysis}\label{subsec:c:proof}
We now prove Thm.~\ref{thm:catalyst:outer}.
The proof technique largely follows that of \citet{lin2017catalyst}, with the added challenges of accounting
for smoothing and varying Moreau-Yosida regularization.
We first analyze the sequence $(\alpha_k)_{k \ge 0}$ in the following lemma, whose proof follows from
the algebra of Eq.~\eqref{eq:c:update_alpha}
and is given in Appendix~\ref{sec:a:c_alpha_k}.
\begin{lemma} \label{lem:c:alpha_k}
Given a positive, non-decreasing sequence $(\kappa_k)_{k\ge 1}$ and $\lambda \ge 0$,
consider the sequence $(\alpha_k)_{k \ge 0}$ defined by \eqref{eq:c:update_alpha}, where
$\alpha_0 \in (0, 1)$ such that $\alpha_0^2 \ge \lambda / (\lambda + \kappa_1)$.
Then, we have for every $k \ge 1$ that $0< \alpha_k \le \alpha_{k-1}$ and,
$
\alpha_k^2 \ge {\lambda}/({\lambda + \kappa_{k+1}}) \,.
$
\end{lemma}
\noindent
We now characterize the effect of an approximate proximal point step
on $F_{\mu\omega}$.
\begin{lemma} \label{lem:c:approx_descent}
Suppose $\widehat \wv \in \reals^d$ satisfies
$F_{\mu\omega, \kappa}(\widehat \wv ;\zv) - \min_{\wv \in \reals^d} F_{\mu\omega, \kappa}( \wv ;\zv) \le \widehat\eps$
for some $\widehat \eps > 0$.
Then, for all $0 < \theta < 1$ and all $\wv \in \reals^d$, we have,
\begin{align} \label{eq:c:approx_descent}
F_{\mu\omega}(\widehat\wv) + \frac{\kappa}{2} \normasq{2}{\widehat\wv - \zv}
+ \frac{\kappa + \lambda}{2}(1-\theta) \normasq{2}{\wv - \widehat\wv}
\le F_{\mu\omega}(\wv) + \frac{\kappa}{2} \normasq{2}{\wv - \zv} + \frac{\widehat\eps}{\theta} \,.
\end{align}
\end{lemma}
\begin{proof
Let $\widehat F^* = \min_{\wv \in \reals^d} F_{\mu\omega,\kappa}(\wv ; \zv)$.
Let $\widehat \wv^*$ be the unique minimizer of $F_{\mu\omega,\kappa}(\cdot \,; \zv)$.
We have, from $(\kappa + \lambda)$-strong convexity of $F_{\mu\omega,\kappa}(\cdot \,; \zv)$,
\begin{align*}
F_{\mu\omega,\kappa}(\wv ; \zv) &\ge \widehat F^* +\frac{\kappa +\lambda}{2} \normasq{2}{\wv - \widehat \wv^*} \\
&\ge \left( F_{\mu\omega,\kappa}(\widehat\wv ; \zv) - \widehat\eps \right)
+ \frac{\kappa + \lambda}{2}(1-\theta) \normasq{2}{\wv - \widehat\wv}
- \frac{\kappa + \lambda}{2} \left( \frac{1}{\theta} - 1 \right) \normasq{2}{\widehat\wv - \widehat \wv^*} \,,
\end{align*}
where we used that $\widehat\eps$ bounds the sub-optimality of $\widehat\wv$, together with Lemma~\ref{lem:c:helper:quadratic}
from Appendix~\ref{subsec:a:catalyst:helper}.
From $(\kappa + \lambda)$-strong convexity of $F_{\mu\omega, \kappa}(\cdot ; \zv)$,
we have,
\begin{align*}
\frac{\kappa + \lambda }{2} \normasq{2}{\widehat\wv - \widehat \wv^*} \le
F_{\mu\omega,\kappa}(\widehat\wv ; \zv) - \widehat F^* \le \widehat\eps\,.
\end{align*}
Since $(1/\theta - 1)$ is non-negative,
we can plug this into the previous statement to get,
\begin{align*}
F_{\mu\omega,\kappa}(\wv ; \zv) \ge F_{\mu\omega,\kappa}(\widehat\wv ; \zv)
+ \frac{\kappa + \lambda}{2} (1-\theta) \normasq{2}{\wv - \widehat\wv} - \frac{\widehat\eps}{\theta}\,.
\end{align*}
Substituting the definition of $F_{\mu\omega,\kappa}(\cdot \,; \zv)$
from \eqref{eq:prox_point_algo} completes the proof.
\end{proof}
We now define a few auxiliary sequences integral to the proof.
Define sequences $(\vv_k)_{k \ge 0}$, $(\gamma_k)_{k \ge 0}$, $(\eta_k)_{k \ge 0}$, and $(\rv_k)_{k \ge 1}$ as
\begin{align}
\label{eq:c:v_defn_base}
\vv_0 &= \wv_0 \,, \\
\label{eq:c:v_defn}
\vv_k &= \wv_{k-1} + \frac{1}{\alpha_{k-1}} (\wv_k - \wv_{k-1}) \,, \, k \ge 1 \,, \\
\label{eq:c:gamma_defn_base}
\gamma_0 &= \frac{(\kappa_1 + \lambda) \alpha_0^2 - \lambda \alpha_0 } {1 - \alpha_0} \,, \\
\label{eq:c:gamma_defn}
\gamma_k &= (\kappa_k + \lambda) \alpha_{k-1}^2 \, , \, k \ge 1 \,, \\
\label{eq:c:eta_defn}
\eta_k &= \frac{\alpha_k \gamma_k}{\gamma_{k+1} + \alpha_k \gamma_k} \,, \, k\ge 0 \,, \\
\label{eq:c:ly_vec_defn}
\rv_k &= \alpha_{k-1} \wv^* + ( 1- \alpha_{k-1}) \wv_{k-1} \, , \, k \ge 1\,.
\end{align}
One might recognize $\gamma_k$ and $\vv_k$ from their resemblance to
counterparts from the proof of \citet{nesterov2013introductory}.
Now, we claim some properties of these sequences.
\begin{claim} \label{claim:c:sequences}
For the sequences defined in \eqref{eq:c:v_defn_base}-\eqref{eq:c:ly_vec_defn}, we have,
\begin{align}
\label{eq:c:gamma_defn_2}
\gamma_k &= \frac{(\kappa_{k+1} + \lambda) \alpha_k^2 - \lambda \alpha_k } {1 - \alpha_k} \, , \, k \ge 0\,, \\
\label{eq:c:gamma_defn_3}
\gamma_{k+1} &= (1- \alpha_k) \gamma_k + \lambda \alpha_k \, , \, k \ge 0\,, \\
\label{eq:c:eta_defn_2}
\eta_k &= \frac{\alpha_k \gamma_k}{\gamma_k + \alpha_k \lambda} \, , \, k \ge 0 \,, \\
\label{eq:c:v_defn_2}
\zv_k &= \eta_k \vv_k + (1- \eta_k) \wv_k \, , \, k \ge 0 \,.
\end{align}
\end{claim}
\begin{proof
Eq.~\eqref{eq:c:gamma_defn_2}
follows from plugging in \eqref{eq:c:update_alpha} in \eqref{eq:c:gamma_defn} for $k\ge 1$,
while for $k=0$, it is true by definition.
Eq.~\eqref{eq:c:gamma_defn_3} follows from plugging \eqref{eq:c:gamma_defn} in \eqref{eq:c:gamma_defn_2}.
Eq.~\eqref{eq:c:eta_defn_2} follows from \eqref{eq:c:gamma_defn_3} and \eqref{eq:c:eta_defn}.
Lastly, to show \eqref{eq:c:v_defn_2}, we shall show instead that \eqref{eq:c:v_defn_2} is equivalent
to the update \eqref{eq:c:update_support} for $\zv_k$. We have,
\begin{align*}
\zv_k &\,= \eta_k \vv_k + (1-\eta_k) \wv_k \\
&\stackrel{\eqref{eq:c:v_defn}}{=} \eta_k \left( \wv_{k-1}
+ \frac{1}{\alpha_{k-1}} (\wv_k - \wv_{k-1}) \right) + (1-\eta_k) \wv_k \\
&\,= \wv_k + \eta_k \left(\frac{1}{\alpha_{k-1}} - 1 \right) (\wv_{k} - \wv_{k-1}) \,.
\end{align*}
Now,
\begin{align*}
\eta_k \left(\frac{1}{\alpha_{k-1}} - 1 \right)
&\stackrel{\eqref{eq:c:eta_defn}}{=} \frac{\alpha_k \gamma_k}{\gamma_{k+1} + \alpha_k \gamma_k } \cdot \frac{1-\alpha_{k-1}}{\alpha_{k-1}}
\\& \stackrel{\eqref{eq:c:gamma_defn}}{=} \frac{\alpha_k (\kappa_k + \lambda) \alpha_{k-1}^2 }
{ \alpha_k^2 (\kappa_{k+1} + \lambda) +\alpha_k (\kappa_k + \lambda) \alpha_{k-1}^2 }
\cdot \frac{1-\alpha_{k-1}}{\alpha_{k-1}}
\stackrel{\eqref{eq:c:update_beta}}{=} \beta_k \, ,
\end{align*}
completing the proof.
\end{proof}
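The identities of Claim~\ref{claim:c:sequences} can also be verified numerically. The sketch below assumes a constant $\kappa_k = \kappa$ and takes the update for $\alpha_k$ to be the quadratic obtained by equating \eqref{eq:c:gamma_defn} and \eqref{eq:c:gamma_defn_2}; all numeric values are hypothetical.

```python
import math

# Numeric spot check of the sequence identities, with constant kappa and
# hypothetical values for lambda, kappa, alpha_0. Here alpha_k solves
# (kappa + lam) a^2 = (1 - a)(kappa + lam) alpha_{k-1}^2 + lam a.
lam, kappa = 0.1, 1.0
alpha = [0.5]
for _ in range(8):
    A = kappa + lam
    B = (kappa + lam) * alpha[-1] ** 2 - lam
    C = -(kappa + lam) * alpha[-1] ** 2
    alpha.append((-B + math.sqrt(B * B - 4 * A * C)) / (2 * A))

# gamma_0 from its definition; gamma_k = (kappa + lam) alpha_{k-1}^2 for k >= 1
gamma = [((kappa + lam) * alpha[0] ** 2 - lam * alpha[0]) / (1 - alpha[0])]
gamma += [(kappa + lam) * alpha[k - 1] ** 2 for k in range(1, 9)]

for k in range(8):
    # identity (gamma_defn_2)
    assert math.isclose(
        gamma[k],
        ((kappa + lam) * alpha[k] ** 2 - lam * alpha[k]) / (1 - alpha[k]))
    # identity (gamma_defn_3)
    assert math.isclose(gamma[k + 1], (1 - alpha[k]) * gamma[k] + lam * alpha[k])
    # the two expressions for eta_k agree (eta_defn vs eta_defn_2)
    eta1 = alpha[k] * gamma[k] / (gamma[k + 1] + alpha[k] * gamma[k])
    eta2 = alpha[k] * gamma[k] / (gamma[k] + alpha[k] * lam)
    assert math.isclose(eta1, eta2)
```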
\begin{claim} \label{claim:c:ly_sequence}
The sequence $(\rv_k)_{k \ge 1}$ from \eqref{eq:c:ly_vec_defn} satisfies
\begin{align} \label{eq:c:norm_ly_sequence}
\normasq{2}{\rv_k - \zv_{k-1}} \le \alpha_{k-1} (\alpha_{k-1} - \eta_{k-1}) \normasq{2}{\wv_{k-1} - \wv^*}
+ \alpha_{k-1} \eta_{k-1} \normasq{2}{\vv_{k-1} - \wv^*} \,.
\end{align}
\end{claim}
\begin{proof}
Notice that $\eta_k \stackrel{\eqref{eq:c:eta_defn_2}}{=} \alpha_k \cdot \frac{\gamma_k}{\gamma_k + \alpha_k \lambda} \le \alpha_k$.
Hence, using convexity of the squared Euclidean norm, we get,
\begin{align*}
\normasq{2}{\rv_k - \zv_{k-1}} & \stackrel{\eqref{eq:c:v_defn_2}}{=}
\normasq{2}{ (\alpha_{k-1} - \eta_{k-1})(\wv^* - \wv_{k-1}) + \eta_{k-1}(\wv^* - \vv_{k-1}) } \\
&\,= \alpha_{k-1}^2 \normsq*{ \left(1 - \frac{\eta_{k-1}}{\alpha_{k-1}} \right) (\wv^* - \wv_{k-1})
+ \frac{\eta_{k-1}}{\alpha_{k-1}} (\wv^* - \vv_{k-1}) }_2 \\
&\stackrel{(*)}{\le} \alpha_{k-1}^2 \left(1 - \frac{\eta_{k-1}}{\alpha_{k-1}} \right) \normasq{2}{\wv_{k-1} - \wv^*}
+ \alpha_{k-1}^2 \frac{\eta_{k-1}}{\alpha_{k-1}} \normasq{2}{\vv_{k-1} - \wv^*} \\
&\,= \alpha_{k-1} (\alpha_{k-1} - \eta_{k-1}) \normasq{2}{\wv_{k-1} - \wv^*}
+ \alpha_{k-1} \eta_{k-1} \normasq{2}{\vv_{k-1} - \wv^*} \,.
\end{align*}
\end{proof}
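As a sanity check, the inequality \eqref{eq:c:norm_ly_sequence} can be tested on random instances; the sketch below (all values hypothetical) only uses $0 \le \eta \le \alpha \le 1$ and convexity of the squared Euclidean norm, exactly as in the proof.

```python
import numpy as np

# Random spot check of the claim's inequality: with r = alpha w* + (1 - alpha) w
# and z = eta v + (1 - eta) w where eta <= alpha, we have
# ||r - z||^2 <= alpha (alpha - eta) ||w - w*||^2 + alpha eta ||v - w*||^2.
rng = np.random.default_rng(0)
for _ in range(1000):
    alpha = rng.uniform(0.0, 1.0)
    eta = rng.uniform(0.0, alpha)          # eta <= alpha, as in the claim
    w, v, wstar = rng.standard_normal((3, 5))
    r = alpha * wstar + (1 - alpha) * w
    z = eta * v + (1 - eta) * w
    lhs = np.sum((r - z) ** 2)
    rhs = (alpha * (alpha - eta) * np.sum((w - wstar) ** 2)
           + alpha * eta * np.sum((v - wstar) ** 2))
    assert lhs <= rhs + 1e-10
```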
For all $\mu \ge \mu' \ge 0$, we know from Prop.~\ref{thm:setting:beck-teboulle} that
\begin{align}
0 \le F_{\mu \omega}(\wv) - F_{\mu'\omega}(\wv) \le (\mu - \mu') D_\omega \,.
\label{asmp:c:smoothing:1}
\end{align}
We now define the sequence $( S_k )_{k\ge0}$ to play the role of a
potential function here.
\begin{align}
\label{eq:c:ly_fn_defn}
\begin{split}
S_0 &= (1 - \alpha_0) (F(\wv_0) - F(\wv^*)) + \frac{\alpha_0 \kappa_1 \eta_0}{2} \normasq{2}{\wv_0 - \wv^*}\,, \\
S_k &= (1-\alpha_k) ( F_{\mu_k \omega}(\wv_k) - F_{\mu_k \omega}(\wv^*)) + \frac{\alpha_k \kappa_{k+1} \eta_k}{2} \normasq{2}{\vv_k - \wv^*}\,, \, k\ge 1 \,.
\end{split}
\end{align}
We are now ready to analyze the effect of one outer loop. This lemma is the crux of the analysis.
\begin{lemma} \label{lem:c:one_step_ly}
Suppose $F_{\mu_k\omega, \kappa_k}(\wv_k ;\zv) - \min_{\wv\in\reals^d} F_{\mu_k\omega, \kappa_k}(\wv ; \zv) \le \eps_k$
for some $\eps_k > 0$. The following statement holds for all $0 < \theta_k < 1$:
\begin{align} \label{eq:c:one_step_ly}
\frac{S_k}{1-\alpha_k} \le S_{k-1} + (\mu_{k-1} - \mu_k) D_\omega + \frac{\eps_k}{\theta_k}
- \frac{\kappa_k}{2}\normasq{2}{\wv_k - \zv_{k-1}} + \frac{\kappa_{k+1}\eta_k \alpha_k \theta_k}{2(1-\alpha_k)} \normasq{2}{\vv_k - \wv^*} \,,
\end{align}
where we set $\mu_0 := 2 \mu_1$.
\end{lemma}
\begin{proof}
For ease of notation, let $F_k := F_{\mu_k \omega}$, and $D := D_\omega$.
By $\lambda$-strong convexity of $F_{\mu_k\omega}$, we have,
\begin{align} \label{eq:c:proof:step_sc_r_k}
F_k(\rv_k) \le \alpha_{k-1} F_k(\wv^*) + (1-\alpha_{k-1}) F_k(\wv_{k-1}) -
\frac{\lambda \alpha_{k-1} (1-\alpha_{k-1})}{2} \normasq{2}{\wv_{k-1} - \wv^*} \,.
\end{align}
We now invoke Lemma~\ref{lem:c:approx_descent} on the function $F_{\mu_k \omega, \kappa_k}(\cdot ; \zv_{k-1})$ with
$\widehat \eps = \eps_k$ and $\wv = \rv_k$ to get,
\begin{align} \label{eq:c:proof:main_eq_unsimplified}
F_k(\wv_k) + \frac{\kappa_k}{2} \normasq{2}{\wv_k - \zv_{k-1}} + \frac{\kappa_k + \lambda}{2}(1-\theta_k) \normasq{2}{\rv_k - \wv_k}
\le F_k(\rv_k) + \frac{\kappa_k}{2} \normasq{2}{\rv_k - \zv_{k-1}} + \frac{\eps_k}{\theta_k} \, .
\end{align}
We shall separately manipulate the left and right hand sides of \eqref{eq:c:proof:main_eq_unsimplified},
starting with the right hand side, which we call $\mcR$.
We have, using \eqref{eq:c:proof:step_sc_r_k} and~\eqref{eq:c:norm_ly_sequence},
\begin{align*}
\mcR
\le& \,
(1- \alpha_{k-1}) F_k(\wv_{k-1}) + \alpha_{k-1} F_k(\wv^*)
- \frac{\lambda \alpha_{k-1} (1-\alpha_{k-1})}{2} \normasq{2}{\wv_{k-1} - \wv^*}
\\
&+ \frac{\kappa_k}{2} \alpha_{k-1}(\alpha_{k-1} - \eta_{k-1}) \normasq{2}{\wv_{k-1} - \wv^*}
+ \frac{\kappa_k \alpha_{k-1} \eta_{k-1}}{2} \normasq{2}{\vv_{k-1} - \wv^*} + \frac{\eps_k}{\theta_k} \,.
\end{align*}
We notice now that
\begin{align}
\alpha_{k-1} - \eta_{k-1}
&\stackrel{\eqref{eq:c:eta_defn_2}}{=} \alpha_{k-1} - \frac{\alpha_{k-1}\gamma_{k-1}}{\gamma_k + \alpha_{k-1} \gamma_{k-1}} \nonumber\\
&\,= \alpha_{k-1} \left( \frac{\gamma_k - \gamma_{k-1} ( 1- \alpha_{k-1})}{\gamma_k + \alpha_{k-1} \gamma_{k-1}} \right)
\nonumber\\
&\stackrel{\eqref{eq:c:gamma_defn_3}}{=} \frac{\alpha_{k-1}^2 \lambda}{\gamma_{k-1} + \alpha_{k-1}\lambda}
\nonumber\\
&\stackrel{\eqref{eq:c:gamma_defn_2}}{=} \frac{\alpha_{k-1}^2 \lambda (1-\alpha_{k-1})}
{(\kappa_k + \lambda) \alpha_{k-1}^2 - \lambda \alpha_{k-1} + (1-\alpha_{k-1})\alpha_{k-1}\lambda}
\nonumber\\
&\,= \frac{\lambda}{\kappa_k}(1-\alpha_{k-1}) \,, \label{eq:c:one_step_ly_pf_1}
\end{align}
and hence the terms containing $\normasq{2}{\wv_{k-1} - \wv^*}$ cancel out.
Therefore, we get,
\begin{align} \label{eq:c:proof:main_eq:rhs:simplified}
\mcR
\le
(1 - \alpha_{k-1}) F_k(\wv_{k-1}) + \alpha_{k-1} F_k(\wv^*)
+ \frac{\kappa_k \alpha_{k-1} \eta_{k-1}}{2} \normasq{2}{\vv_{k-1} - \wv^*} + \frac{\eps_k}{\theta_k} \,.
\end{align}
To move on to the left hand side, we note that
\begin{align} \label{eq:c:one_step_ly_proof_prod}
\alpha_k \eta_k \nonumber
&\stackrel{\eqref{eq:c:eta_defn_2}}{=} \frac{\alpha_k^2 \gamma_k}{\gamma_k + \alpha_k \lambda}
\stackrel{\eqref{eq:c:gamma_defn},\eqref{eq:c:gamma_defn_2}}{=} \frac{\alpha_k^2 \alpha_{k-1}^2 (\kappa_k + \lambda)}
{\frac{(\kappa_{k+1} + \lambda) \alpha_k^2 - \lambda \alpha_k}{1-\alpha_k} + \alpha_k \lambda } \\
&\,=\frac{ (1-\alpha_k)(\kappa_k + \lambda) \alpha_{k-1}^2 \alpha_k^2}{(\kappa_{k+1} + \lambda)\alpha_k^2 - \lambda \alpha_k^2}
= (1-\alpha_k) \alpha_{k-1}^2 \frac{\kappa_k + \lambda}{\kappa_{k+1}} \,.
\end{align}
Therefore,
\begin{align} \label{eq:c:one_step_ly_pf_2}
F_k(\wv_k) - F_k(\wv^*) + \frac{\kappa_k + \lambda}{2} \alpha_{k-1}^2 \normasq{2}{\vv_k - \wv^*}
\stackrel{\eqref{eq:c:ly_fn_defn},\eqref{eq:c:one_step_ly_proof_prod}}{=}
\frac{S_k}{1 - \alpha_{k}} \,.
\end{align}
Using
$\rv_k - \wv_k \stackrel{\eqref{eq:c:v_defn}}{=} \alpha_{k-1}(\wv^* - \vv_{k})$,
we simplify the left hand side of \eqref{eq:c:proof:main_eq_unsimplified}, which we call $\mcL$, as
\begin{align} \label{eq:c:proof:main_eq:lhs:simplified}
\nonumber
\mcL &= F_k(\wv_k) - F_k(\wv^*) + \frac{\kappa_k}{2} \normasq{2}{\wv_k - \zv_{k-1}} +
\frac{\kappa_k + \lambda}{2}(1-\theta_k) \alpha_{k-1}^2 \normasq{2}{\vv_k - \wv^*} \\
&\stackrel{\eqref{eq:c:one_step_ly_pf_2}}{=}
\frac{S_k}{1-\alpha_k} + F_k(\wv^*) + \frac{\kappa_k}{2} \normasq{2}{\wv_k - \zv_{k-1}}
- \frac{\kappa_{k+1} \alpha_k \eta_k \theta_k}{2 (1-\alpha_k)} \normasq{2}{\vv_k - \wv^*} \,.
\end{align}
In view of \eqref{eq:c:proof:main_eq:rhs:simplified} and \eqref{eq:c:proof:main_eq:lhs:simplified},
we can simplify \eqref{eq:c:proof:main_eq_unsimplified} as
\begin{align} \label{eq:c:one_step_ly_pf_2_int}
\begin{aligned}
\frac{S_k}{1-\alpha_k} & + \frac{\kappa_k}{2} \normasq{2}{\wv_k - \zv_{k-1}}
- \frac{\kappa_{k+1}\alpha_k\eta_k\theta_k}{2(1-\alpha_k)} \normasq{2}{\vv_k - \wv^*}
\\&\le (1 - \alpha_{k-1})\left( F_k(\wv_{k-1}) - F_k(\wv^*) \right)
+ \frac{\kappa_k \alpha_{k-1} \eta_{k-1}}{2}\normasq{2}{\vv_{k-1} - \wv^*} + \frac{\eps_k}{\theta_k} \,.
\end{aligned}
\end{align}
We make a distinction for $k \ge 2$ and $k=1$ here. For $k \ge 2$,
the condition that $\mu_{k-1} \ge \mu_k$ gives us,
\begin{align} \label{eq:c:one_step_ly_pf_3}
F_k(\wv_{k-1}) - F_k(\wv^*) \stackrel{\eqref{asmp:c:smoothing:1}}{\le}
F_{k-1}(\wv_{k-1}) - F_{k-1}(\wv^*) + (\mu_{k-1} - \mu_{k}) D \, .
\end{align}
The right hand side of \eqref{eq:c:one_step_ly_pf_2_int} can now be upper bounded by
\begin{align*}
(1 - \alpha_{k-1})(\mu_{k-1} - \mu_k) D + S_{k-1} + \frac{\eps_k}{\theta_k} \,,
\end{align*}
and noting that $1-\alpha_{k-1} \le 1$ yields \eqref{eq:c:one_step_ly} for $k \ge 2$.
For $k=1$, we note that $S_{k-1} (= S_0)$ is defined in terms of $F(\wv)$. So we have,
\begin{align*}
F_1(\wv_0) - F_1(\wv^*) \le F(\wv_0) - F(\wv^*) + \mu_1 D = F(\wv_0) - F(\wv^*) + (\mu_0 - \mu_1) D\,,
\end{align*}
where we used $\mu_0 = 2\mu_1$. This is of the same form as \eqref{eq:c:one_step_ly_pf_3}. Therefore,
\eqref{eq:c:one_step_ly} holds for $k=1$ as well.
%
\end{proof}
\noindent
We now prove Thm.~\ref{thm:catalyst:outer}.
\begin{proof} [Proof of Thm.~\ref{thm:catalyst:outer}]
We continue to use shorthand $F_k := F_{\mu_k \omega}$, and $D := D_\omega$.
We now apply Lemma~\ref{lem:c:one_step_ly}.
In order to satisfy the supposition of Lemma~\ref{lem:c:one_step_ly} that $\wv_k$ is $\eps_k$-suboptimal,
we make the choice $\eps_k = \frac{\delta_k \kappa_k}{2} \normasq{2}{\wv_k - \zv_{k-1}}$ (cf.~\eqref{eq:stopping_criterion}).
Plugging this in and setting $\theta_k = \delta_k < 1$, we get from~\eqref{eq:c:one_step_ly},
\begin{align*}
\frac{S_k}{1-\alpha_k} - \frac{\kappa_{k+1} \eta_k \alpha_k \delta_k}{2(1 - \alpha_k)} \normasq{2}{\vv_k - \wv^*}
\le S_{k-1}
+ (\mu_{k-1} - \mu_k) D \,.
\end{align*}
The left hand side simplifies to $ S_k \, ({1- \delta_k})/({1-\alpha_k}) + \delta_k ( F_k(\wv_k) - F_k(\wv^*))$.
Note that $F_k(\wv_k) - F_k(\wv^*) \stackrel{\eqref{asmp:c:smoothing:1}}{\ge} F(\wv_k) - F(\wv^*) - \mu_k D \ge -\mu_k D$.
From this, noting that $\alpha_k \in (0, 1)$ for all $k$, we get,
\begin{align*}
S_k \left(\frac{1-\delta_k}{1-\alpha_k} \right) \le S_{k-1} + \delta_k \mu_k D + (\mu_{k-1} - \mu_k) D\,,
\end{align*}
or equivalently,
\begin{align*}
S_k \le \left( \frac{1-\alpha_k}{1-\delta_k} \right) S_{k-1} +
\left( \frac{1-\alpha_k}{1-\delta_k} \right) (\mu_{k-1} - (1-\delta_k) \mu_k) D\,.
\end{align*}
Unrolling the recursion for $S_k$, we now have,
\begin{align} \label{eq:c:pf_thm_main_1}
S_k \le \left( \prod_{j=1}^k \frac{1-\alpha_j}{1-\delta_j} \right) S_0
+ \sum_{j=1}^k \left( \prod_{i=j}^k \frac{1-\alpha_i}{1-\delta_i} \right) (\mu_{j-1} - (1- \delta_j) \mu_j) D\,.
\end{align}
Now, we need to reason about $S_0$ and $S_k$ to complete the proof. To this end, consider $\eta_0$:
\begin{align}
\eta_0 &\stackrel{\eqref{eq:c:eta_defn}}{=} \frac{\alpha_0 \gamma_0}{\gamma_1 + \alpha_0 \gamma_0}
\nonumber\\
&\stackrel{\eqref{eq:c:gamma_defn_base}}{=} \frac{\alpha_0 \gamma_0}
{(\kappa_1 + \lambda)\alpha_0^2 + \tfrac{\alpha_0}{1-\alpha_0}\left( (\kappa_1 + \lambda)\alpha_0^2 - \lambda \alpha_0 \right)}
\nonumber\\
&\, = \frac{\alpha_0 \gamma_0 (1-\alpha_0)}{(\kappa_1 + \lambda)\alpha_0^2 - \lambda \alpha_0^2}
= (1-\alpha_0) \frac{\gamma_0}{\kappa_1 \alpha_0} \,. \label{eq:c:thm_pf_1}
\end{align}
With this, we can expand out $S_0$ to get
\begin{align*}
S_0 &\stackrel{\eqref{eq:c:ly_fn_defn}}{=}
(1- \alpha_0) \left(F(\wv_0) - F(\wv^*)\right) + \frac{\alpha_0 \kappa_1 \eta_0}{2} \normasq{2}{\wv_0 - \wv^*} \\
&\stackrel{\eqref{eq:c:thm_pf_1}}{=}
(1- \alpha_0) \left( F(\wv_0) - F^* + \frac{\gamma_0}{2}\normasq{2}{\wv_0 - \wv^*} \right) \,.
\end{align*}
Lastly, we reason about $S_k$ for $k \ge 1$ as,
\begin{align*}
S_k \stackrel{\eqref{eq:c:ly_fn_defn}}{\ge}
(1-\alpha_k) \left(F_k(\wv_k) - F_k(\wv^*) \right)
\stackrel{\eqref{asmp:c:smoothing:1}}{\ge}
(1-\alpha_k) \left( F(\wv_k) - F(\wv^*) - \mu_k D \right) \,.
\end{align*}
Plugging this into the left hand side of \eqref{eq:c:pf_thm_main_1} completes the proof.
\end{proof}
\subsubsection{Inner Loop Complexity} \label{sec:catalyst:inner_compl}
Consider a class $\mcF_{L, \lambda}$ of functions defined as
\[
\mcF_{L, \lambda} = \left\{
f : \reals^d \to \reals \text{ such that $f$ is $L$-smooth and $\lambda$-strongly convex}
\right\} \,.
\]
We now formally define a linearly convergent algorithm on this class of functions.
\begin{definition} \label{defn:c:linearly_convergent}
A first order algorithm $\mcM$ is said to be linearly convergent with parameters
$C : \reals_+ \times \reals_+ \to \reals_+$ and $\tau : \reals_+ \times \reals_+ \to (0, 1)$
if the following holds: for all $L \ge \lambda > 0$, and every $f \in \mcF_{L, \lambda}$ and $\wv_0 \in \reals^d$,
$\mcM$ started at $\wv_0$ generates a sequence $(\wv_k)_{k \ge 0}$ that satisfies:
\begin{align} \label{eq:def:linearly_convergent_2}
\expect f(\wv_k) - f^* \le C(L, \lambda) \left( 1 - \tau(L, \lambda) \right)^k \left( f(\wv_0) - f^* \right)\, ,
\end{align}
where $f^* := \min_{\wv\in\reals^d} f(\wv)$ and the expectation is over the randomness of $\mcM$.
\end{definition}
The parameter $\tau$ determines the rate of convergence of the algorithm.
For instance, batch gradient descent is a deterministic linearly convergent algorithm with $\tau(L, \lambda)\inv = L/\lambda$, while
incremental algorithms such as SVRG and SAGA satisfy requirement~\eqref{eq:def:linearly_convergent_2} with
$\tau(L,\lambda)\inv = c(n + \nicefrac{L}{\lambda})$ for some universal constant $c$.
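A minimal instance of Definition~\ref{defn:c:linearly_convergent} is gradient descent with step $1/L$ on a quadratic; the sketch below (problem data hypothetical) checks the per-step contraction $f(\wv_{k+1}) - f^* \le (1 - \lambda/L)\,(f(\wv_k) - f^*)$, i.e., $C = 1$ and $\tau = \lambda/L$.

```python
import numpy as np

# Batch gradient descent with step 1/L on an L-smooth, lambda-strongly convex
# quadratic (hypothetical data): a deterministic linearly convergent method.
rng = np.random.default_rng(0)
d, lam, L = 5, 0.1, 1.0
H = np.diag(np.linspace(lam, L, d))     # Hessian spectrum inside [lam, L]
f = lambda w: 0.5 * w @ H @ w           # minimized at w* = 0 with f* = 0
w = rng.standard_normal(d)
gaps = [f(w)]
for _ in range(50):
    w = w - (1.0 / L) * (H @ w)         # gradient step with step size 1/L
    gaps.append(f(w))
# per-step contraction at rate (1 - tau) with tau = lam / L
assert all(g1 <= (1 - lam / L) * g0 + 1e-12 for g0, g1 in zip(gaps, gaps[1:]))
```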
The warm start strategy in
step $k$ of Algo.~\ref{algo:catalyst} is to initialize $\mcM$ at the prox center $\zv_{k-1}$.
The next proposition, due to \citet[Cor.~16]{lin2017catalyst}, bounds the expected number of iterations of $\mcM$ required to
ensure that $\wv_k$ satisfies \eqref{eq:stopping_criterion}. Its proof is given in Appendix~\ref{sec:c:proofs:inner_compl}
for completeness.
\begin{proposition} \label{prop:c:inner_loop_final}
Consider $F_{\mu\omega, \kappa}(\cdot \, ;\zv)$ defined in Eq.~\eqref{eq:prox_point_algo},
and a linearly convergent algorithm $\mcM$ with parameters $C$, $\tau$.
Let $\delta \in [0,1)$. Suppose $F_{\mu\omega}$ is $L_{\mu\omega}$-smooth and
$\lambda$-strongly convex.
Then the expected number of iterations $\expect[\widehat T]$ of $\mcM$ when started at $\zv$
in order to obtain $\widehat \wv \in \reals^d$ that satisfies
\begin{align*}
F_{\mu\omega, \kappa}(\widehat\wv;\zv) - \min_\wv F_{\mu\omega, \kappa}(\wv;\zv)\leq \tfrac{\delta\kappa}{2} \normasq{2}{\widehat\wv - \zv}
\end{align*}
is upper bounded by
\begin{align*}
\expect[\widehat T] \le \frac{1}{\tau(L_{\mu\omega} + \kappa, \lambda + \kappa)} \log\left(
\frac{8 C(L_{\mu\omega} + \kappa, \lambda + \kappa)}{\tau(L_{\mu\omega} + \kappa, \lambda + \kappa)} \cdot
\frac{L_{\mu\omega} + \kappa}{\kappa \delta} \right) + 1 \,.
\end{align*}
\end{proposition}
\begin{table*}[t!]
\caption{\small{Summary of global complexity of {Casimir-SVRG}, i.e., Algorithm~\ref{algo:catalyst}
with SVRG as the inner solver for various parameter settings.
We show $\expect[N]$, the expected total number of SVRG iterations required to obtain an accuracy $\eps$,
up to constants and factors logarithmic in problem parameters.
We denote
$\Delta F_0 := F(\wv_0) - F^*$ and {$\Delta_0 = \norma{2}{\wv_0 - \wv^*}$}.
Constants $D, A$ are short for $D_\omega, A_\omega$ (see \eqref{eq:c:A_defn}).
\vspace{2mm}
}}
\begin{adjustbox}{width=\textwidth}
\label{tab:sc-svrg_rates_summary}
\centering
\begin{tabular}{|c||cccc|c|c|}
\hline
\rule{0pt}{12pt}
{Prop.} & $\lambda>0$ & $\mu_k$ & $\kappa_k$ & $\delta_k$ & $\expect[N]$ & Remark \\[3pt] \hline\hline
\rule{0pt}{15pt}
\ref{prop:c:total_compl_svrg_main}
& Yes & $\sfrac{\eps}{D}$ &
$\sfrac{A D}{\eps n} - \lambda$ & $\sqrt\frac{\lambda\eps n}{A D}$ &
$n + \sqrt{\frac{A D n}{\lambda \eps}} $ & fix $\eps$ in advance
\\[5pt] \hline
\rule{0pt}{15pt}
\ref{prop:c:total_compl_sc:dec_smoothing_main}
& Yes & $\mu c^k $ & $\lambda$ & $c'$ &
$ n + \frac{A}{\lambda\eps} \frac{\Delta F_0 + \mu D}{\mu}$ &
$c,c'<1$ are universal constants
\\[5pt] \hline
\rule{0pt}{15pt}
\ref{prop:c:total_compl_svrg_smooth_main}
& No & $\sfrac{\eps}{D}$ &
$\sfrac{A D}{\eps n}$ & $1/k^2$ &
$n\sqrt{\frac{\Delta F_0}{\eps}} +
\frac{ \sqrt{A D n} \Delta_0}{\eps} $ & fix $\eps$ in advance
\\[5pt] \hline
\rule{0pt}{15pt}
\ref{prop:c:total_compl_nsc:dec_smoothing_main} & No & $\sfrac{\mu}{k}$ &
$\kappa_0 \, k$ & $1/k^2$ &
$\frac{\widehat\Delta_0}{\eps} \left( n + \frac{A}{\mu \kappa_0} \right) $ &
$\widehat\Delta_0 = \Delta F_0 + \frac{\kappa_0}{2} \Delta_0^2 + \mu D$
\\[5pt] \hline
\end{tabular}
\end{adjustbox}
\end{table*}
\subsection{Casimir with SVRG} \label{sec:catalyst:total_compl}
We now choose SVRG \citep{johnson2013accelerating} to be the linearly convergent algorithm $\mcM$,
resulting in an algorithm called {Casimir-SVRG}{}.
The rest of this section analyzes the total iteration complexity of
{Casimir-SVRG}{} to solve Problem~\eqref{eq:cvx_pb_finite_sum}.
The proofs of the results from this section are calculations
stemming from combining the outer loop complexity from
Cor.~\ref{cor:c:outer_sc} to~\ref{cor:c:outer_smooth_dec_smoothing} with
the inner loop complexity from Prop.~\ref{prop:c:inner_loop_final},
and are relegated to Appendix~\ref{sec:c:proofs:total_compl}.
Table~\ref{tab:sc-svrg_rates_summary} summarizes the results of this section.
Recall that if $\omega$ is 1-strongly convex with respect to $\norma{\alpha}{\cdot}$, then
$h_{\mu\omega}(\Am \wv + \bv)$ is $L_{\mu\omega}$-smooth with respect to $\norma{2}{\cdot}$,
where $L_{\mu\omega} = \normasq{2,\alpha}{\Am} / \mu$.
Therefore, the complexity of solving problem~\eqref{eq:cvx_pb_finite_sum} will depend on
\begin{align} \label{eq:c:A_defn}
A_\omega := \max_{i=1,\cdots,n} \normasq{2, \alpha}{\Am\pow{i}} \,.
\end{align}
\begin{remark} \label{remark:smoothing:l2vsEnt}
We have that $\norma{2,2}{\Am} = \norma{2}{\Am}$ is the spectral norm of $\Am$ and
$\norma{2,1}{\Am} = \max_j \norma{2}{\av_j}$ is the largest row norm, where $\av_j$ is the $j$th row of $\Am$.
Moreover, we have that $\norma{2,2}{\Am} \ge \norma{2,1}{\Am}$.
\end{remark}
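The comparison in Remark~\ref{remark:smoothing:l2vsEnt} is easy to check numerically on random matrices (sizes hypothetical): taking $x = \av_j / \norma{2}{\av_j}$ in the definition of the operator norm shows that the spectral norm dominates every row norm.

```python
import numpy as np

# ||A||_{2,2} (spectral norm) dominates ||A||_{2,1} (largest row 2-norm).
rng = np.random.default_rng(0)
for _ in range(200):
    A = rng.standard_normal((6, 4))
    spectral = np.linalg.norm(A, ord=2)            # ||A||_{2,2}
    max_row = np.linalg.norm(A, axis=1).max()      # ||A||_{2,1}
    assert spectral >= max_row - 1e-12
```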
\noindent
We start with the strongly convex case with constant smoothing.
\begin{proposition} \label{prop:c:total_compl_svrg_sc} \label{prop:c:total_compl_svrg_main}
Consider the setting of Thm.~\ref{thm:catalyst:outer} and
fix $\eps > 0$.
If we run Algo.~\ref{algo:catalyst} with SVRG as the inner solver with parameters:
$\mu_k = \mu = \eps/(10 D_\omega)$, and $\kappa_k = \kappa$ chosen as
\begin{align*}
\kappa =
\begin{cases}
\frac{A}{\mu n} - \lambda \,, \text{ if } \frac{A}{\mu n} > 4 \lambda \\
\lambda \,, \text{ otherwise}
\end{cases} \,,
\end{align*}
$q = {\lambda}/{(\lambda + \kappa)}$, $\alpha_0 = \sqrt{q}$, and
$\delta = {\sqrt{q}}/{(2 - \sqrt{q})}$.
Then, the number of iterations $N$ to obtain $\wv$ such that $F(\wv) - F(\wv^*) \le \eps$ is
bounded in expectation as
\begin{align*}
\expect[N] \le \widetilde \bigO \left(
n + \sqrt{\frac{A_\omega D_\omega n}{\lambda \eps}}
\right) \,.
\end{align*}
\end{proposition}
\noindent
Here, we note that $\kappa$ was chosen to minimize the total complexity (cf. \citet{lin2017catalyst}).
This bound is known to be tight, up to logarithmic factors \citep{woodworth2016tight}.
\noindent
Next, we turn to the strongly convex case with decreasing smoothing.
\begin{proposition} \label{prop:c:total_compl_sc:dec_smoothing} \label{prop:c:total_compl_sc:dec_smoothing_main}
Consider the setting of Thm.~\ref{thm:catalyst:outer}.
Suppose $\lambda > 0$, that $\kappa_k = \kappa$ for all $k \ge 1$, and
that $\alpha_0$, $(\mu_k)_{k \ge 1}$ and $(\delta_k)_{k \ge 1}$
are chosen as in Cor.~\ref{cor:c:outer_sc:decreasing_mu_const_kappa},
with $q = \lambda/(\lambda + \kappa)$ and $\eta = 1- {\sqrt q}/{2}$.
If we run Algo.~\ref{algo:catalyst} with SVRG as the inner solver with these parameters,
the number of iterations $N$ of SVRG required to obtain $\wv$ such that $F(\wv) - F^* \le \eps$ is
bounded in expectation as
\begin{align*}
\expect[N] \le \widetilde \bigO \left( n
+ \frac{A_\omega}{\mu(\lambda + \kappa)\eps} \left( F(\wv_0) - F^* + \frac{\mu D_\omega}{1-\sqrt{q}} \right)
\right) \,.
\end{align*}
\end{proposition}
\noindent
Unlike the previous case, there is no obvious choice of $\kappa$ that minimizes the global complexity.
Notice that we do not get the accelerated rate of Prop.~\ref{prop:c:total_compl_svrg_sc}.
We now turn to the case when $\lambda = 0$ and $\mu_k = \mu$ for all $k$.
\begin{proposition} \label{prop:c:total_compl_svrg_smooth} \label{prop:c:total_compl_svrg_smooth_main}
Consider the setting of Thm.~\ref{thm:catalyst:outer} and fix $\eps > 0$.
If we run Algo.~\ref{algo:catalyst} with SVRG as the inner solver with parameters:
$\mu_k = \mu = \eps/(20 D_\omega)$, $\alpha_0 = (\sqrt{5} - 1)/{2}$,
$\delta_k = 1/(k+1)^2$, and $\kappa_k = \kappa = A_\omega/(\mu(n+1))$.
Then, the number of iterations $N$ to get a point $\wv$ such that $F(\wv) - F^* \le \eps$ is
bounded in expectation as
\begin{align*}
\expect[N] \le \widetilde \bigO \left( n\sqrt{\frac{F(\wv_0) - F^*}{\eps}} +
\sqrt{A_\omega D_\omega n} \frac{\norma{2}{\wv_0 - \wv^*}}{\eps} \right) \, .
\end{align*}
\end{proposition}
\noindent
This rate is tight up to log factors~\citep{woodworth2016tight}.
Lastly, we consider the non-strongly convex case ($\lambda = 0$) together with decreasing smoothing.
As with Prop.~\ref{prop:c:total_compl_sc:dec_smoothing}, we do not obtain an accelerated rate here.
\begin{proposition} \label{prop:c:total_compl_nsc:dec_smoothing} \label{prop:c:total_compl_nsc:dec_smoothing_main}
Consider the setting of Thm.~\ref{thm:catalyst:outer}.
Suppose $\lambda = 0$ and that $\alpha_0$, $(\mu_k)_{k\ge 1}$, $(\kappa_k)_{k\ge 1}$, and $(\delta_k)_{k \ge 1}$
are chosen as in Cor.~\ref{cor:c:outer_smooth_dec_smoothing}.
If we run Algo.~\ref{algo:catalyst} with SVRG as the inner solver with these parameters,
the number of iterations $N$ of SVRG required to obtain $\wv$ such that $F(\wv) - F^* \le \eps$ is
bounded in expectation as
\begin{align*}
\expect[N] \le \widetilde\bigO \left( \frac{1}{\eps}
\left( F(\wv_0) - F^* + \kappa_0 \normasq{2}{\wv_0 - \wv^*} + \mu D_\omega \right)
\left( n + \frac{A_\omega}{\mu \kappa_0} \right)
\right) \,.
\end{align*}
\end{proposition}
\subsection{The Prox-Linear Algorithm} \label{sec:pl:pl-algo}
The exact prox-linear algorithm of \citet{burke1985descent} generalizes the
proximal gradient algorithm (see e.g., \citet{nesterov2013introductory})
to compositions of convex functions with smooth mappings such as~\eqref{eq:n-cvx_pb}.
When given a function $f=h\circ \gv$, the prox-linear algorithm defines a local convex approximation
$f(\cdot \, ; \wv_k)$ about a point $\wv_k \in \reals^d$ by linearizing the smooth map $\gv$ as
$
f(\wv; \wv_k) := h(\gv(\wv_k) + \grad\gv(\wv_k)(\wv - \wv_k)) \, .
$
With this, it builds a convex model $F(\cdot \, ; \wv_k)$ of $F$ about $\wv_k$ as
\[
F(\wv ; \wv_k) := \frac{1}{n}\sum_{i=1}^n h(\gv\pow{i}(\wv_k) + \grad\gv\pow{i}(\wv_k)(\wv - \wv_k))
+ \frac{\lambda}{2} \normasq{2}{\wv}\,.
\]
Given a step length $\eta > 0$, each iteration of the exact prox-linear algorithm
then minimizes the local convex model plus a proximal term as
\begin{align} \label{eq:pl:exact_pl}
\wv_{k+1} = \argmin_{\wv \in \reals^d} \left[ F_{\eta}( \wv ; \wv_k) := F(\wv ; \wv_k) + \frac{1}{2\eta}\normasq{2}{\wv-\wv_k} \right] \,.
\end{align}
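As an illustration, for a scalar toy problem with $h = |\cdot|$, $\lambda = 0$, and a smooth $g$ (all choices hypothetical), the update \eqref{eq:pl:exact_pl} has a closed form via soft-thresholding, and iterating it drives $|g|$ to its minimum.

```python
import numpy as np

# Exact prox-linear steps for f(w) = |g(w)| with g(w) = w^2 - 2 (hypothetical).
# The subproblem min_u |g(w) + g'(w) u| + u^2 / (2 eta) is solved in closed
# form: substituting t = g(w) + g'(w) u reduces it to the prox of |.|.
def soft_threshold(t, s):
    return np.sign(t) * max(abs(t) - s, 0.0)

g = lambda w: w ** 2 - 2.0
dg = lambda w: 2.0 * w
eta, w = 0.1, 3.0            # h is 1-Lipschitz, g is 2-smooth, so eta <= 1/2
for _ in range(50):
    b = dg(w)
    t = soft_threshold(g(w), eta * b * b)
    w = w + (t - g(w)) / b   # exact prox-linear update
assert abs(g(w)) < 1e-8      # w approaches sqrt(2), where |g| attains 0
```

Once the soft-threshold returns zero, the update reduces to a Newton step on $g$, which explains the fast local convergence in this toy run.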
\begin{algorithm}[tb]
\caption{(Inexact) Prox-linear algorithm: outer loop}
\label{algo:prox-linear}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Smoothable objective $F$ of the form \eqref{eq:n-cvx_pb} with $h$ simple,
step length $\eta$,
tolerances $( \epsilon_k )_{k\ge1}$,
initial point $\wv_0$,
non-smooth convex optimization algorithm, $\mcM$,
time horizon $K$
\FOR{$k=1$ \TO $K$}
\STATE Using $\mcM$ with $\wv_{k-1}$ as the starting point, find
\begin{align} \label{eq:pl:algo:update}
\nonumber
\widehat \wv_k \approx \argmin_{\wv} \bigg[
F_\eta(\wv ; \wv_{k-1}) :=
\frac{1}{n} \sum_{i=1}^n & h\big(\gv\pow{i}(\wv_{k-1}) + \grad\gv\pow{i}(\wv_{k-1})(\wv - \wv_{k-1}) \big) \\ &+ \frac{\lambda}{2}\normasq{2}{\wv}
+ \frac{1}{2\eta}\normasq{2}{\wv-\wv_{k-1}}
\bigg] \,,
\end{align}
such that
\begin{align} \label{eq:pl:algo:stop}
F_\eta(\widehat \wv_k ; \wv_{k-1}) - \min_{ \wv \in \reals^d} F_\eta(\wv ; \wv_{k-1}) \le \eps_k \,.
\end{align}
\label{line:pl:algo:subprob}
\STATE Set $\wv_k = \widehat \wv_k$ if $F(\widehat \wv_k) \le F(\wv_{k-1})$, else set
$\wv_k = \wv_{k-1}$.
\label{line:pl:algo:accept}
\ENDFOR
\RETURN $\wv_K$.
\end{algorithmic}
\end{algorithm}
Following \citet{drusvyatskiy2016efficiency},
we consider an inexact prox-linear algorithm, which approximately solves \eqref{eq:pl:exact_pl}
using an iterative algorithm. In particular, since the function to be minimized in \eqref{eq:pl:exact_pl}
is precisely of the form~\eqref{eq:cvx_pb}, we employ the fast convex solvers developed in the previous section
as subroutines. Concretely, the prox-linear outer loop is displayed in Algo.~\ref{algo:prox-linear}.
We now delve into details about the algorithm and convergence guarantees.
\subsubsection{Inexactness Criterion}
As in Section~\ref{sec:cvx_opt}, we must be prudent in choosing when to terminate the inner optimization
(Line~\ref{line:pl:algo:subprob} of Algo.~\ref{algo:prox-linear}).
Function value suboptimality is used as the inexactness criterion here. In particular, for some specified tolerance
$\eps_k > 0$, iteration $k$ of the prox-linear algorithm accepts a solution $\widehat \wv$ that satisfies
$F_\eta(\widehat \wv_k ; \wv_{k-1}) - \min_{ \wv} F_\eta(\wv ; \wv_{k-1}) \le \eps_k$.
\paragraph{Implementation}
In view of the $(\lambda + \eta\inv)$-strong convexity of $F_\eta(\cdot \, ; \wv_{k-1})$,
it suffices to ensure that $\normasq{2}{\vv} \le 2(\lambda + \eta\inv) \eps_k$ for a subgradient
$\vv \in \partial F_\eta(\widehat \wv_k ; \wv_{k-1})$.
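This termination test rests on the standard certificate for a $\mu$-strongly convex function $f$: $f(\wv) - f^* \le \normasq{2}{\vv}/(2\mu)$ for any $\vv \in \partial f(\wv)$, applied with $\mu = \lambda + \eta\inv$. A toy numeric check on a quadratic (hypothetical data):

```python
import numpy as np

# For a mu-strongly convex f, f(w) - f* <= ||grad f(w)||^2 / (2 mu), so a
# small (sub)gradient norm certifies function-value suboptimality.
mu = 2.0
H = np.diag([2.0, 3.0, 7.0])            # eigenvalues >= mu
f = lambda w: 0.5 * w @ H @ w           # f* = 0, attained at w* = 0
grad = lambda w: H @ w
rng = np.random.default_rng(0)
for _ in range(200):
    w = rng.standard_normal(3)
    assert f(w) <= grad(w) @ grad(w) / (2 * mu) + 1e-12
```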
\paragraph{Fixed Iteration Budget}
As in the convex case, we consider as a practical alternative a fixed iteration budget $T_{\mathrm{budget}}$
and optimize $F_\eta(\cdot\, ; \wv_k)$ for exactly $T_{\mathrm{budget}}$ iterations of $\mcM$.
Again, we do not have a theoretical analysis for this scheme but find it to be effective in practice.
\subsubsection{Warm Start of Subproblems}
As in the convex case, we advocate the use of
the prox center $\wv_{k-1}$ to warm start the inner optimization problem in iteration $k$
(Line~\ref{line:pl:algo:subprob} of Algo.~\ref{algo:prox-linear}).
\subsection{Convergence analysis of the prox-linear algorithm} \label{sec:pl:convergence}
We now state the assumptions and the convergence guarantee of the prox-linear algorithm.
\subsubsection{Assumptions}
For the prox-linear algorithm to work, the only requirement is that we minimize an upper model.
The assumption below makes this concrete.
\begin{assumption} \label{asmp:pl:upper-bound}
The map $\gv\pow{i}$ is continuously differentiable everywhere for each $i \in [n]$.
Moreover, there exists a constant $L > 0$ such that for all $\wv, \wv' \in \reals^d$ and $i\in [n]$, it holds that
\begin{align*}
h\big(\gv\pow{i}(\wv') \big) \le
h\big(\gv\pow{i}(\wv) + \grad\gv\pow{i}(\wv) (\wv'-\wv) \big) + \frac{L}{2}\normasq{2}{\wv'-\wv} \,.
\end{align*}
\end{assumption}
\noindent
When $h$ is $G$-Lipschitz and each $\gv\pow{i}$ is $\widetilde L$-smooth, both with respect to
$\norma{2}{\cdot}$, then Assumption~\ref{asmp:pl:upper-bound} holds with $L = G\widetilde L$
\citep{drusvyatskiy2016efficiency}.
In the case of structured prediction,
Assumption~\ref{asmp:pl:upper-bound} holds when
the augmented score $\psi$ as a function of $\wv$ is $L$-smooth.
The next lemma makes this precise and its proof is in Appendix~\ref{sec:c:pl_struct_pred}.
\begin{lemma} \label{lem:pl:struct_pred}
Consider the structural hinge loss $f(\wv) = \max_{\yv \in \mcY} \psi(\yv ; \wv) = h\circ \gv(\wv)$
where $h, \gv$ are as defined in \eqref{eq:mapping_def}.
If the mapping $\wv \mapsto \psi(\yv ; \wv)$ is $L$-smooth with respect to $\norma{2}{\cdot}$ for all
$\yv \in \mcY$, then it holds for all $\wv, \zv \in \reals^d$ that
\begin{align*}
|h(\gv(\wv+\zv)) - h(\gv(\wv) + \grad\gv(\wv) \zv)| \le \frac{L}{2}\normasq{2}{\zv}\,.
\end{align*}
\end{lemma}
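The lemma's bound can be checked numerically for a toy structural hinge loss, with $h = \max$ over finitely many $L$-smooth quadratic scores standing in for $\psi(\yv ; \cdot)$ (all data below is hypothetical):

```python
import numpy as np

# |h(g(w + z)) - h(g(w) + grad g(w) z)| <= (L/2) ||z||^2 when h = max and
# each score psi(y; .) is an L-smooth (here, convex quadratic) function.
rng = np.random.default_rng(0)
d, m, L = 4, 5, 2.0
Q = []
for _ in range(m):
    M = rng.standard_normal((d, d))
    S = M @ M.T
    Q.append(L * S / np.linalg.norm(S, ord=2))   # eigenvalues in [0, L]
b = rng.standard_normal((m, d))
psi = lambda y, w: 0.5 * w @ Q[y] @ w + b[y] @ w
grad_psi = lambda y, w: Q[y] @ w + b[y]
for _ in range(200):
    w, z = rng.standard_normal((2, d))
    exact = max(psi(y, w + z) for y in range(m))
    linearized = max(psi(y, w) + grad_psi(y, w) @ z for y in range(m))
    assert abs(exact - linearized) <= 0.5 * L * (z @ z) + 1e-9
```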
\subsubsection{Convergence Guarantee}
Convergence is measured via the norm of the {\em prox-gradient} $\bm{\varrho}_\eta(\cdot)$,
also known as the {\em gradient mapping}, defined as
\begin{align}
\bm{\varrho}_\eta(\wv) = \frac{1}{\eta} \left( \wv - \argmin_{\zv \in \reals^d} F_\eta(\zv ; \wv) \right) \,.
\end{align}
The measure of stationarity $\norm{\bm{\varrho}_\eta(\wv)}$ turns out to be related
to the norm of the gradient of the Moreau envelope of $F$ under certain conditions; see
\citet[Section 4]{drusvyatskiy2016efficiency} for a discussion.
In particular, a point $\wv$ with small $\norm{\bm{\varrho}_\eta(\wv)}$ means that $\wv$ is close to
$\wv' = \argmin_{\zv \in \reals^d} F_\eta(\zv ; \wv)$, which is nearly stationary for $F$.
The prox-linear outer loop shown in Algo.~\ref{algo:prox-linear} has the following convergence guarantee
\citep[Thm.~5.2]{drusvyatskiy2016efficiency}.
\begin{theorem} \label{thm:pl:outer-loop}
Consider $F$ of the form~\eqref{eq:n-cvx_pb} that satisfies Assumption~\ref{asmp:pl:upper-bound},
a step length $0 < \eta \le 1/L$ and a non-negative sequence $(\eps_k)_{k\ge1}$.
With these inputs, Algo.~\ref{algo:prox-linear} produces a sequence $(\wv_k)_{k \ge 0}$ that satisfies
\begin{align*}
\min_{k=0, \cdots, K-1} \normasq{2}{\bm{\varrho}_\eta(\wv_k)} \le \frac{2}{\eta K} \left( F(\wv_0) - F^* + \sum_{k=1}^{K} \eps_k \right) \,,
\end{align*}
where $F^* = \inf_{\wv \in \reals^d} F(\wv)$.
In addition, we have that the sequence $(F(\wv_k))_{k\ge0}$ is non-increasing.
\end{theorem}
\begin{remark}
Algo.~\ref{algo:prox-linear} accepts an update only if it improves the function value (Line~\ref{line:pl:algo:accept}).
A variant of Algo.~\ref{algo:prox-linear} which always accepts the update has a guarantee identical to
that of Thm.~\ref{thm:pl:outer-loop},
but the sequence $(F(\wv_k))_{k\ge0}$ would not be guaranteed to be non-increasing.
\end{remark}
\subsection{Prox-Linear with {Casimir-SVRG}{}} \label{sec:pl:total-compl}
We now analyze the total complexity of minimizing the finite sum problem~\eqref{eq:n-cvx_pb}
with {Casimir-SVRG}{} to approximately solve the subproblems of Algo.~\ref{algo:prox-linear}.
For the algorithm to converge, the map
$\wv \mapsto \gv\pow{i}(\wv_k) + \grad \gv\pow{i}(\wv_k)(\wv - \wv_k)$ must be Lipschitz for each $i$ and each iterate $\wv_k$.
To be precise, we assume that
\begin{align}
A_\omega := \max_{i=1,\cdots,n} \sup_{\wv \in \reals^d} \normasq{2, \alpha}{\grad \gv\pow{i}(\wv)}
\end{align}
is finite, where $\omega$, the smoothing function, is 1-strongly convex
with respect to $\norma{\alpha}{\cdot}$.
When $\gv\pow{i}$ is the linear map $\wv \mapsto \Am\pow{i}\wv$, this reduces to \eqref{eq:c:A_defn}.
We choose the tolerance $\eps_k$ to decrease as $1/k$.
When using the {Casimir-SVRG}{} algorithm with constant smoothing (Prop.~\ref{prop:c:total_compl_svrg_sc})
as the inner solver, this method effectively smooths the $k$th prox-linear subproblem with a smoothing parameter that decreases as $1/k$.
We have the following rate of convergence for this method, which is proved in Appendix~\ref{sec:c:pl_proofs}.
\begin{proposition} \label{prop:pl:total_compl}
Consider the setting of Thm.~\ref{thm:pl:outer-loop}. Suppose the sequence $(\eps_k)_{k\ge 1}$
satisfies $\eps_k = \eps_0 / k$ for some $\eps_0 > 0$ and that
the subproblem of Line~\ref{line:pl:algo:subprob} of Algo.~\ref{algo:prox-linear} is solved using
{Casimir-SVRG}{} with the settings of Prop.~\ref{prop:c:total_compl_svrg_sc}.
Then, the total number of SVRG iterations $N$ required to produce a $\wv$ such that
$\norma{2}{\bm{\varrho}_\eta(\wv)} \le \eps$ is bounded as
\begin{align*}
\expect[N] \le \widetilde\bigO\left(
\frac{n}{\eta \eps^2} \left(F(\wv_0) - F^* + \eps_0 \right) +
\frac{\sqrt{A_\omega D_\omega n \eps_0\inv}}{\eta \eps^3} \left( F(\wv_0) - F^* + \eps_0 \right)^{3/2}
\right) \, .
\end{align*}
\end{proposition}
\begin{remark} \label{remark:pl:choosing_eps0}
When an estimate or an upper bound $B$ on $F(\wv_0) - F^*$ is available, one could set
$\eps_0 = \bigO(B)$. This is true, for instance, in the structured prediction task where
$F^* \ge 0$ whenever the task loss $\ell$ is non-negative (cf.~\eqref{eq:pgm:struc_hinge}).
\end{remark}
\subsection{Dataset and Task Description} \label{subsec:expt:task_description}
For each of the tasks, we specify below the following:
(a) the dataset $\{ (\xv\pow{i}, \yv\pow{i})\}_{i=1}^n$,
(b) the output structure $\mcY$,
(c) the loss function $\ell$,
(d) the score function $\phi(\xv, \yv ; \wv)$,
(e) implementation of inference oracles, and lastly,
(f) the evaluation metric used to assess the quality of predictions.
\subsubsection{CoNLL 2003: Named Entity Recognition}
Named entities are phrases that contain the names of persons, organizations, locations, etc.,
and the task is to predict the label (tag) of each entity.
Named entity recognition can be formulated as a sequence tagging problem where the set
$\mcY_{\mathrm{tag}}$ of individual tags is of size 7.
Each datapoint $\xv$ is a sequence of words $\xv = (x_1, \cdots, x_p)$,
and the label $\yv = (y_1, \cdots, y_p) \in \mcY(\xv)$ is a sequence of the same length,
where each $y_i \in \mcY_{\mathrm{tag}}$ is a tag.
\paragraph{Loss Function}
The loss function is the Hamming Loss $\ell(\yv, \yv') = \sum_i \ind(y_i \neq y_i')$.
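For concreteness, the Hamming loss amounts to counting position-wise disagreements between the two tag sequences; a minimal Python sketch (a hypothetical helper, not code from the paper) is:

```python
def hamming_loss(y, y_prime):
    """Hamming loss: number of positions where two tag sequences disagree."""
    assert len(y) == len(y_prime), "sequences must have equal length"
    return sum(yi != yj for yi, yj in zip(y, y_prime))
```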
\paragraph{Score Function}
We use a chain graph to represent this task. In other words,
the observation-label dependencies are encoded as a Markov chain of order 1 to enable efficient
inference using the Viterbi algorithm.
We only consider the case of linear score $\phi(\xv, \yv ; \wv) = \inp{\wv}{\Phi(\xv, \yv)}$
for this task. The feature map $\Phi$ here is very similar to that given in
Example~\ref{example:inf_oracles:viterbi_example}.
Following \citet{tkachenko2012named}, we use local context $\Psi_i(\xv)$ around $i$\textsuperscript{th} word $x_i$ of $\xv$.
In particular, define $\Psi_i(\xv) = \ev_{x_{i-2}} \otimes \cdots \otimes \ev_{x_{i+2}}$,
where $\otimes$ denotes the Kronecker product between column vectors,
and $\ev_{x_i}$ denotes a one-hot encoding of word $x_i$, concatenated with the one-hot encodings of its
part-of-speech tag and syntactic chunk tag, which are provided with the input.
Now, we can define the feature map $\Phi$ as
\begin{align*}
\Phi(\xv, \yv) = \left[ \sum_{v=1}^p \Psi_v(\xv) \otimes \ev_{y_v} \right] \oplus
\left[ \sum_{v=0}^p \ev_{y_{v}} \otimes \ev_{y_{v+1}} \right] \,,
\end{align*}
where $\ev_y \in \reals^{\abs{\mcY_{\mathrm{tag}}}}$ is a one hot-encoding of $y \in \mcY_{\mathrm{tag}}$,
and $\oplus$ denotes vector concatenation.
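The structure of $\Phi$ — unary word–tag features plus pairwise tag-transition features, joined by concatenation — can be sketched in a few lines of NumPy. This is a simplified illustration (unigram context only, interior transitions only, made-up dimensions), not the paper's implementation:

```python
import numpy as np

def one_hot(i, d):
    e = np.zeros(d)
    e[i] = 1.0
    return e

def chain_feature_map(x, y, n_words, n_tags):
    """Unary word-tag features plus pairwise tag-transition features,
    concatenated as in the definition of Phi (simplified sketch)."""
    unary = np.zeros(n_words * n_tags)
    for xi, yi in zip(x, y):
        unary += np.kron(one_hot(xi, n_words), one_hot(yi, n_tags))
    pairwise = np.zeros(n_tags * n_tags)
    for a, b in zip(y[:-1], y[1:]):
        pairwise += np.kron(one_hot(a, n_tags), one_hot(b, n_tags))
    return np.concatenate([unary, pairwise])
```

With this map, the linear score $\inp{\wv}{\Phi(\xv, \yv)}$ decomposes over nodes and edges of the chain, which is what makes Viterbi-style inference applicable.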
\paragraph{Inference}
We use the Viterbi algorithm as the max oracle (Algo.~\ref{algo:dp:max:chain})
and top-$K$ Viterbi algorithm (Algo.~\ref{algo:dp:topK:chain}) for the top-$K$ oracle.
\paragraph{Dataset}
The dataset used was CoNLL 2003 \citep{tjong2003introduction},
which contains $\sim 20K$ sentences.
\paragraph{Evaluation Metric}
We follow the official CoNLL metric: the $F_1$ measure excluding the `O' tags.
In addition, we report the objective function value measured on the training set (``train loss'').
\paragraph{Other Implementation Details}
The sparse feature vectors obtained above are hashed onto $2^{16} - 1$ dimensions for efficiency.
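The hashing trick mentioned above maps named sparse features into a fixed number of buckets, with collisions simply adding up. A minimal sketch (hypothetical helper and feature names; the paper does not specify its hash function) is:

```python
import hashlib

DIM = 2**16 - 1  # hash dimension used in the experiments

def hash_features(named_features):
    """Map {feature_name: value} into a DIM-dimensional sparse vector
    by hashing names to bucket indices; collisions add up."""
    hashed = {}
    for name, value in named_features.items():
        bucket = int(hashlib.md5(name.encode()).hexdigest(), 16) % DIM
        hashed[bucket] = hashed.get(bucket, 0.0) + value
    return hashed
```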
\subsubsection{PASCAL VOC 2007: Visual Object Localization}
Given an image and an object of interest, the task is to localize the object in the given image,
i.e., determine the best bounding box around the object. A related, but harder task is object detection,
which requires identifying and localizing any number of objects of interest, if any, in the image.
Here, we restrict ourselves to pure localization with a single instance of each object.
Given an image $\xv \in \mcX$ of size $n_1 \times n_2$, the label $\yv \in \mcY(\xv)$
is a bounding box, where $\mcY(\xv)$ is the set of all bounding boxes in an image of size $n_1 \times n_2$.
Note that $\abs{\mcY(\xv)} = \bigO(n_1^2n_2^2)$.
\paragraph{Loss Function}
The PASCAL IoU metric \citep{everingham2010pascal} is used to measure the quality of localization.
Given bounding boxes $\yv, \yv'$, the IoU is defined as the ratio of the intersection of the
bounding boxes to the union:
\begin{align*}
\mathrm{IoU}(\yv, \yv') = \frac{\mathrm{Area}(\yv \cap \yv')}{\mathrm{Area}(\yv \cup \yv')} \,.
\end{align*}
We then use the $1 - \mathrm{IoU}$ loss defined as $\ell(\yv, \yv') = 1 - \mathrm{IoU}(\yv, \yv')$.
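For axis-aligned boxes, both the IoU and the resulting loss are straightforward to compute; the following sketch (hypothetical helper, boxes given as corner coordinates) illustrates the definitions above:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) with x1<x2, y1<y2."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))  # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))  # intersection height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def iou_loss(box_a, box_b):
    """The 1 - IoU loss used for localization."""
    return 1.0 - iou(box_a, box_b)
```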
\paragraph{Score Function}
The formulation we use is based on the popular R-CNN approach \citep{girshick2014rich}.
We consider two cases: linear score and non-linear score $\phi$, both of which are based on the following
definition of the feature map $\Phi(\xv, \yv)$.
\begin{itemize}
\item Consider a patch $\xv|_\yv$ of image $\xv$ cropped to box $\yv$,
and rescale it to $64\times 64$.
Call this $\Pi(\xv|_\yv)$.
\item Consider a convolutional neural network known as AlexNet \citep{krizhevsky2012imagenet}
pre-trained on ImageNet \citep{ILSVRC15} and
pass $\Pi(\xv|_\yv)$ through it.
Take the output of {\tt conv4}, the penultimate convolutional layer, as the feature map $\Phi(\xv, \yv)$.
It is of size $ 3 \times 3\times 256$.
\end{itemize}
In the case of linear score functions, we take $\phi(\xv, \yv ; \wv) = \inp{\wv}{\Phi(\xv, \yv)}$.
In the case of non-linear score functions, we define the score $\phi$ as the result of
a convolution composed with a non-linearity and followed by a linear map. Concretely,
for $\thetav \in \reals^{H \times W \times C_1}$ and $\wv \in \reals^{C_1 \times C_2}$
let the map $\thetav \mapsto \thetav \star \wv \in \reals^{H \times W \times C_2}$ denote a
two dimensional convolution with stride $1$ and kernel size $1$,
and $\sigma: \reals \to \reals$ denote the exponential linear unit, defined respectively as
\begin{align*}
[\thetav \star \wv]_{ij} = \wv\T [\thetav]_{ij} \quad \text{and}
\quad \sigma(x) = x \, \ind(x \ge 0) + (\exp(x) - 1) \, \ind(x < 0) \,,
\end{align*}
where $[\thetav]_{ij} \in \reals^{C_1}$ is such that its $l$th entry is $\thetav_{ijl}$
and likewise for $[\thetav \star \wv]_{ij}$.
We overload notation to let $\sigma:\reals^d\to \reals^d$ denote the exponential linear unit applied element-wise.
Notice that $\sigma$ is smooth.
The non-linear score function $\phi$ is now defined, with
$\wv_1 \in \reals^{256\times16}, \wv_2 \in \reals^{16\times3\times3}$ and $\wv=(\wv_1, \wv_2)$, as,
\begin{align*}
\phi(\xv, \yv ; \wv) = \inp{\sigma(\Phi(\xv, \yv) \star \wv_1)}{\wv_2} \,.
\end{align*}
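Since the convolution has kernel size $1$ and stride $1$, it is just a per-position matrix multiplication, so the whole score is a few lines of NumPy. The sketch below (hypothetical helper; note the weight layout is chosen for convenience and differs slightly from the paper's $\wv_2 \in \reals^{16\times3\times3}$ ordering) mirrors the definition:

```python
import numpy as np

def elu(x):
    """Exponential linear unit, element-wise; continuous and smooth at 0."""
    return np.where(x >= 0, x, np.exp(x) - 1.0)

def nonlinear_score(Phi, w1, w2):
    """Phi: (H, W, C1) feature map; w1: (C1, C2) 1x1-convolution weights;
    w2: (H, W, C2) final linear weights. Returns the scalar score."""
    conv = Phi @ w1  # 1x1 conv with stride 1 == per-pixel matrix multiply
    return float(np.sum(elu(conv) * w2))
```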
\paragraph{Inference}
For a given input image $\xv$, we follow the R-CNN approach~\citep{girshick2014rich} and use
selective search~\citep{van2011segmentation} to prune the search space.
In particular, for an image $\xv$, we use the selective search implementation provided by OpenCV~\citep{opencv_library}
and take the top 1000 candidates returned to be the set $\widehat{\mcY}(\xv)$,
which we use as a proxy for $\mcY(\xv)$.
The max oracle and the top-$K$ oracle are then implemented as exhaustive searches over
this reduced set $\widehat\mcY(\xv)$.
\paragraph{Dataset}
We use the PASCAL VOC 2007 dataset \citep{everingham2010pascal}, which contains
$\sim 5K$ annotated consumer (real-world) images from the photo-sharing site Flickr,
spanning 20 different object categories.
For each class, we consider all images with only a single occurrence of the object, and train
an independent model for each class.
\paragraph{Evaluation Metric}
We keep track of two metrics.
The first is the localization accuracy, also known as CorLoc (for correct localization),
following \citet{deselaers2010localizing}. A bounding box with IoU $> 0.5$
with the ground truth is considered correct and the localization accuracy is the fraction
of images labeled correctly.
The second metric is average precision (AP), which
requires a confidence score for each prediction.
We use $\phi(\xv, \yv' ; \wv)$ as the confidence score of $\yv'$.
As previously, we also plot the objective function value measured on the training examples.
\paragraph{Other Implementation Details}
For a given input-output pair $(\xv, \yv)$ in the dataset,
we instead use $(\xv, \widehat \yv)$ as a training example, where
$\widehat\yv = \argmax_{\yv' \in \widehat\mcY(\xv)} \mathrm{IoU}(\yv, \yv')$
is the element of $\widehat\mcY(\xv)$ which overlaps the most with the true output $\yv$.
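Building the training pair thus reduces to a maximization of the overlap over the candidate set; a minimal sketch (hypothetical helper, taking the overlap function as an argument) is:

```python
def best_candidate(y_true, candidates, overlap_fn):
    """Replace the ground-truth box by the selective-search candidate
    that overlaps it the most, as done when building training pairs."""
    return max(candidates, key=lambda y: overlap_fn(y_true, y))
```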
\subsection{Methods Compared} \label{subsec:expt:competing_methods}
The experiments compare various convex stochastic and incremental
optimization methods for structured prediction.
\begin{itemize}
\item {\bfseries SGD}: Stochastic subgradient method with a learning rate $\gamma_t = \gamma_0 / (1 + \lfloor t / t_0 \rfloor)$,
where $\gamma_0, t_0$ are tuning parameters. Note that this scheme of learning rates does not have
a theoretical analysis. However, the averaged iterate $\overline \wv_t = {2}/(t^2+t)\sum_{\tau=1}^t \tau \wv_\tau$
obtained from the related scheme
$\gamma_t = 1/(\lambda t)$ was shown to have a convergence rate of $\bigO((\lambda \eps)\inv)$
\citep{shalev2011pegasos,lacoste2012simpler}. It works on the non-smooth formulation directly.
\item {\bfseries {BCFW}}: The block coordinate Frank-Wolfe algorithm of \citet{lacoste2012block}.
We use the version that was found to work best in practice, namely,
one that uses the weighted averaged iterate $\overline \wv_t = {2}/(t^2+t)\sum_{\tau=1}^t \tau \wv_\tau$
(called {\tt bcfw-wavg} by the authors)
with the step size computed by exact line search. This algorithm also works on the non-smooth formulation
and does not require any tuning.
\item {\bfseries {SVRG}}: The SVRG algorithm proposed by \citet{johnson2013accelerating},
with each epoch making one pass through the dataset and using the averaged iterate
to compute the full gradient and restart the next epoch.
This algorithm requires smoothing.
\item {\bfseries {Casimir-SVRG-const}}: Algo.~\ref{algo:catalyst} with SVRG as the inner optimization algorithm.
The parameters $\mu_k$ and $\kappa_k$ are chosen as in
Prop.~\ref{prop:c:total_compl_svrg_sc}, where $\mu$ and $\kappa$ are hyperparameters.
This algorithm requires smoothing.
\item {\bfseries {Casimir-SVRG-adapt}}: Algo.~\ref{algo:catalyst} with SVRG as the inner optimization algorithm.
The parameters $\mu_k$ and $\kappa_k$ are chosen as in
Prop.~\ref{prop:c:total_compl_sc:dec_smoothing}, where $\mu$ and $\kappa$ are hyperparameters.
This algorithm requires smoothing.
\end{itemize}
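Several of the methods above report the weighted averaged iterate $\overline \wv_t = \frac{2}{t^2+t}\sum_{\tau=1}^t \tau \wv_\tau$. This average can be maintained incrementally as a running convex combination rather than by storing all iterates; a small sketch (hypothetical helper, shown on scalars — for vectors, NumPy arrays work identically):

```python
def weighted_average(iterates):
    """Weighted averaged iterate w_bar_t = 2/(t^2+t) * sum_{tau=1..t} tau * w_tau,
    maintained incrementally: w_bar_t = (1 - 2/(t+1)) w_bar_{t-1} + 2/(t+1) w_t."""
    w_bar = 0.0
    for t, w in enumerate(iterates, start=1):
        rho = 2.0 / (t + 1)  # weight of the newest iterate
        w_bar = (1 - rho) * w_bar + rho * w
    return w_bar
```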
On the other hand, for non-convex structured prediction, we only have two methods:
\begin{itemize}
\item {\bfseries SGD}: The stochastic subgradient method \citep{davis2018stochastic}, which we refer to as SGD. This
algorithm works directly on the non-smooth formulation. We try learning rates
$\gamma_t = \gamma_0$, $\gamma_t = \gamma_0 /\sqrt{t}$
and $\gamma_t = \gamma_0 / t$, where $\gamma_0$ is found by grid search in each of these cases.
We use the names SGD-const, SGD-$t^{-1/2}$ and SGD-$t^{-1}$ respectively for these variants.
We note that SGD-$t^{-1}$ does not have any theoretical analysis in the non-convex case.
\item {\bfseries {PL-Casimir-SVRG}}: Algo.~\ref{algo:prox-linear} with {Casimir-SVRG-const}{} as the inner solver using the settings
of Prop.~\ref{prop:pl:total_compl}. This algorithm requires smoothing the inner subproblem.
\end{itemize}
\subsection{Hyperparameters and Variants} \label{subsec:expt:hyperparam}
\paragraph{Smoothing}
In light of the discussion of Sec.~\ref{sec:smooth_oracle_impl},
we use the $\ell_2^2$ smoother $\omega(\uv) = \normasq{2}{\uv} / 2$ and use the top-$K$ strategy
for efficient computation.
We then have $D_\omega = 1/2$.
\paragraph{Regularization}
The regularization coefficient $\lambda$ is chosen as $\nicefrac{c}{n}$,
where $c$ is varied in $\{ 0.01, 0.1, 1, 10\}$.
\paragraph{Choice of $K$}
The experiments use $K = 5$ for named entity recognition, where the top-$K$
oracle is about $K$ times slower than the max oracle,
and $K=10$ for visual object localization, where the running time of the top-$K$ oracle is independent of $K$.
We also present results for other values of $K$ in Fig.~\ref{fig:plot_ner_K} and find that
the performance of the tested algorithms is robust to the value of $K$.
\paragraph{Tuning Criteria}
Some algorithms require tuning one or more hyperparameters such as the learning rate.
We use grid search to find the best choice of the hyperparameters using the following criteria.
For the named entity recognition experiments, the train function value and the validation $F_1$ metric
were only weakly correlated. For instance, among the 3 best learning rates in the grid in terms of $F_1$ score,
the one attaining the best $F_1$ score attained the worst train function value, and vice versa.
Therefore, we choose the value of the tuning parameter that attained the best objective function value within 1\% of the
best validation $F_1$ score in order to measure the optimization performance while still remaining relevant
to the named entity recognition task.
For the visual object localization task,
a wide range of hyperparameter values achieved nearly equal performance in terms of
the best CorLoc over the given time horizon, so we choose
the value of the hyperparameter that achieves the best objective function value within
a given iteration budget.
\subsubsection{Hyperparameters for Convex Optimization}
This corresponds to the setting of Section~\ref{sec:cvx_opt}.
\paragraph{Learning Rate}
The algorithms {SVRG}{} and {Casimir-SVRG-adapt}{} require tuning of a learning rate,
while SGD requires $\gamma_0, t_0$ and
{Casimir-SVRG-const}{} requires tuning of the Lipschitz constant $L$ of $\grad F_{\mu\omega}$,
which determines the learning rate $\gamma = 1/(L + \lambda + \kappa)$.
Therefore, tuning the Lipschitz parameter is similar to tuning the learning rate.
For both the learning rate and Lipschitz parameter, we use grid search on a logarithmic grid,
with consecutive entries chosen a factor of two apart.
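Such a grid is easy to generate programmatically; the sketch below (hypothetical helper, centered at a rough initial guess) produces entries a factor of two apart:

```python
def log_grid(center, n_points):
    """Logarithmic search grid with consecutive entries a factor of two apart,
    centered at a rough initial guess for the parameter."""
    half = n_points // 2
    return [center * 2.0**k for k in range(-half, n_points - half)]
```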
\paragraph{Choice of $\kappa$}
For {Casimir-SVRG-const}{}, with the Lipschitz constant in hand, the parameter $\kappa$
is chosen to minimize the overall complexity as in Prop.~\ref{prop:c:total_compl_svrg_sc}.
For {Casimir-SVRG-adapt}{}, we use $\kappa = \lambda$.
\paragraph{Stopping Criteria}
Following the discussion of Sec.~\ref{sec:cvx_opt}, we use
an iteration budget of $T_{\mathrm{budget}} = n$.
\paragraph{Warm Start}
The warm start criterion determines the starting iterate of an epoch of the inner optimization algorithm.
Recall that we solve the following subproblem using SVRG for the $k$th iterate (cf. \eqref{eq:prox_point_algo}):
\begin{align*}
\wv_k \approx \argmin_{\wv \in \reals^d} F_{\mu_k\omega, \kappa_k}(\wv;\zv_{k-1}) \,.
\end{align*}
Here, we consider the following warm start strategy to choose the initial iterate $\widehat \wv_0$ for this subproblem:
\begin{itemize}
\item {\tt Prox-center}: $\widehat \wv_0 = \zv_{k-1}$.
\end{itemize}
In addition, we also try out the following warm start strategies of \citet{lin2017catalyst}:
\begin{itemize}
\item {\tt Extrapolation}: $\widehat \wv_0 = \wv_{k-1} + c(\zv_{k-1} - \zv_{k-2})$ where $c = \frac{\kappa}{\kappa + \lambda}$.
\item {\tt Prev-iterate}: $\widehat \wv_0 = \wv_{k-1}$.
\end{itemize}
We use the {\tt Prox-center} strategy unless mentioned otherwise.
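The three strategies amount to different choices of the initial point as a function of the recent iterates; a compact sketch (hypothetical helper, shown on scalars) is:

```python
def warm_start(strategy, w_prev, z_prev, z_prev2, kappa, lam):
    """Initial iterate for the k-th subproblem under the three warm-start strategies."""
    if strategy == "prox-center":
        return z_prev                      # z_{k-1}
    if strategy == "prev-iterate":
        return w_prev                      # w_{k-1}
    if strategy == "extrapolation":
        c = kappa / (kappa + lam)
        return w_prev + c * (z_prev - z_prev2)  # w_{k-1} + c (z_{k-1} - z_{k-2})
    raise ValueError(strategy)
```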
\paragraph{Level of Smoothing and Decay Strategy}
For {SVRG}{} and {Casimir-SVRG-const}{} with constant smoothing, we try various values of the smoothing
parameter in a logarithmic grid. On the other hand, {Casimir-SVRG-adapt}{} is more robust to the choice of
the smoothing parameter (Fig.~\ref{fig:plot_ner_smoothing}).
We use the defaults of $\mu = 2$ for named entity recognition and $\mu = 10$ for
visual object localization.
\subsubsection{Hyperparameters for Non-Convex Optimization}
This corresponds to the setting of Section~\ref{sec:ncvx_opt}.
\paragraph{Prox-Linear Learning Rate $\eta$}
We perform grid search in powers of 10 to find the best prox-linear learning rate $\eta$.
We find that the performance of the algorithm is robust to the choice of $\eta$ (Fig.~\ref{fig:ncvx:pl_lr}).
\paragraph{Stopping Criteria}
We used a fixed budget of 5 iterations of {Casimir-SVRG-const}{}.
In Fig.~\ref{fig:ncvx:inner-iter},
we experiment with different iteration budgets.
\paragraph{Level of Smoothing and Decay Strategy}
In order to solve the $k$th prox-linear subproblem with {Casimir-SVRG-const}{},
we must specify the level of smoothing $\mu_k$. We experiment with two schemes,
(a) constant smoothing $\mu_k = \mu$, and (b) adaptive smoothing $\mu_k = \mu / k$.
Here, $\mu$ is a tuning parameter, and the adaptive smoothing scheme is designed
based on Prop.~\ref{prop:pl:total_compl} and Remark~\ref{remark:pl:choosing_eps0}.
We use the adaptive smoothing strategy as a default, but compare the two in Fig.~\ref{fig:ncvx_smoothing}.
\paragraph{Gradient Lipschitz Parameter for Inner Optimization}
The inner optimization algorithm {Casimir-SVRG-const}{} still requires a hyperparameter
$L_k$ to serve as an estimate to the Lipschitz parameter of the gradient
$\grad F_{\eta, \mu_k\omega}(\cdot\,; \wv_k)$. We set this parameter as follows,
based on the smoothing strategy:
(a) $L_k = L_0$ with the constant smoothing strategy, and
(b) $L_k = k\, L_0$ with the adaptive smoothing strategy (cf. Prop.~\ref{thm:setting:beck-teboulle}).
We note that the latter choice has the effect of decaying the learning rate as ${\sim}1/k$
in the $k$th outer iteration.
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{plots/ner_cvx/ner_best.pdf}
\caption{Comparison of convex optimization algorithms for the task of Named Entity Recognition on CoNLL 2003.}\label{fig:plot_all_ner}
\end{figure}
\begin{figure}[!thb]
\centering
\includegraphics[width=0.93\textwidth]{plots/loc_cvx/loc_all_best_1E+01_0.pdf}
\caption{Comparison of convex optimization algorithms
for the task of visual object localization on PASCAL VOC 2007 for $\lambda=10/n$.
Plots for all other classes are in
Appendix~\ref{sec:a:expt}.}\label{fig:plot_all_loc}
\end{figure}
\begin{figure}[!thb]
\centering
\includegraphics[width=0.93\textwidth]{plots/loc_ncvx/loc_best_all_1E+00_0.pdf}
\caption{Comparison of non-convex optimization algorithms
for the task of visual object localization on PASCAL VOC 2007 for $\lambda=1/n$.
Plots for all other classes are in
Appendix~\ref{sec:a:expt}.}\label{fig:plot_ncvx_loc}
\end{figure}
\subsection{Experimental study of different methods} \label{subsec:expt:competing_results}
\paragraph{Convex Optimization}
For the named entity recognition task,
Fig.~\ref{fig:plot_all_ner} plots the performance of various methods on CoNLL 2003.
On the other hand, Fig.~\ref{fig:plot_all_loc} presents
plots for various classes of PASCAL VOC 2007 for visual object localization.
The plots reveal that smoothing-based methods converge faster in terms of training error
while achieving a competitive performance in terms of the performance metric on a held-out set.
Furthermore, BCFW and SGD make twice as many actual passes through the data as SVRG-based algorithms.
\paragraph{Non-Convex Optimization}
Fig.~\ref{fig:plot_ncvx_loc} plots the performance of various algorithms on the task of visual object localization
on PASCAL VOC.
\subsection{Experimental Study of Effect of Hyperparameters: Convex Optimization}
We now study the effects of various hyperparameter choices.
\paragraph{Effect of Smoothing}
Fig.~\ref{fig:plot_ner_smoothing} plots the effect of the level of smoothing for {Casimir-SVRG-const}{}
and {Casimir-SVRG-adapt}{}. The plots reveal that, in general, small values of the smoothing parameter lead
to better optimization performance for {Casimir-SVRG-const}. {Casimir-SVRG-adapt}{} is robust to the choice
of $\mu$.
Fig.~\ref{fig:plot_ner_smoothing-2} shows how the smooth optimization algorithms work when used heuristically on the
non-smooth problem.
\begin{figure}[!thb]
\centering
\begin{subfigure}[b]{0.88\linewidth}
\centering
\includegraphics[width=\textwidth]{plots/ner_cvx/plot_smoother_main.pdf}
\caption{\small{Effect of level of smoothing.}}
\label{fig:plot_ner_smoothing}
\end{subfigure}
\begin{subfigure}[b]{0.88\linewidth}
\centering
\includegraphics[width=\textwidth]{plots/ner_cvx/plot_nonsmooth_svrg_main.pdf}
\caption{\small{Effect of smoothing: use of smooth optimization with smoothing (labeled ``smooth'')
versus the heuristic use of these
algorithms without smoothing (labeled ``non-smooth'') for $\lambda = 0.01/n$.}}
\label{fig:plot_ner_smoothing-2}
\end{subfigure}
\begin{subfigure}[b]{0.88\linewidth}
\centering
\includegraphics[width=\textwidth]{plots/ner_cvx/plot_warm_start.pdf}
\caption{\small{Effect of warm start strategies for $\lambda=0.01/n$ (first row) and $\lambda = 1/n$ (second row).}}
\label{fig:plot_ner_warm-start}
\end{subfigure}
\begin{subfigure}[b]{0.88\linewidth}
\centering
\includegraphics[width=\textwidth]{plots/ner_cvx/plot_K_main.pdf}
\caption{\small{Effect of $K$ in the top-$K$ oracle ($\lambda = 0.01/n$).}}
\label{fig:plot_ner_K}
\end{subfigure}
\caption{Effect of hyperparameters for the task of Named Entity Recognition on CoNLL 2003.
C-SVRG stands for Casimir-SVRG in these plots.}\label{fig:plot_cvx_hyperparam}
\end{figure}
\paragraph{Effect of Warm Start Strategies}
Fig.~\ref{fig:plot_ner_warm-start} plots different warm start strategies for {Casimir-SVRG-const}{}
and {Casimir-SVRG-adapt}.
We find that {Casimir-SVRG-adapt}{} is robust to the choice of the warm start strategy while {Casimir-SVRG-const}{} is not.
For the latter, we observe that {\tt Extrapolation} is less stable (i.e., tends to diverge more) than {\tt Prox-center},
which is in turn less stable than {\tt Prev-iterate}, which always works (cf. Fig.~\ref{fig:plot_ner_warm-start}).
However, when they do work, {\tt Extrapolation} and {\tt Prox-center} provide greater acceleration than {\tt Prev-iterate}.
We use {\tt Prox-center} as the default choice to trade-off between acceleration and applicability.
\paragraph{Effect of $K$}
Fig.~\ref{fig:plot_ner_K} illustrates the robustness of the method to choice of $K$: we observe that
the results are all within one standard deviation of each other.
\subsection{Experimental Study of Effect of Hyperparameters: Non-Convex Optimization}
We now study the effect of various hyperparameters for the non-convex optimization algorithms.
All of these comparisons have been made for $\lambda = 1/n$.
\paragraph{Effect of Smoothing}
Fig.~\ref{fig:ncvx_smoothing:1} compares the adaptive and constant smoothing strategies.
Fig.~\ref{fig:ncvx_smoothing:2} and Fig.~\ref{fig:ncvx_smoothing:3} compare the effect of the level of smoothing on
both of these strategies.
As previously, the adaptive smoothing strategy is more robust to the choice of the smoothing parameter.
\begin{figure}[!thb]
\centering
\begin{subfigure}[b]{\linewidth}
\centering
\includegraphics[width=\textwidth]{plots/loc_ncvx/loc_fixed-vs-adapt-smth_sheep_1E+00.pdf}
\caption{Comparison of adaptive and constant smoothing strategies.}
\label{fig:ncvx_smoothing:1}
\end{subfigure}
\begin{subfigure}[b]{\linewidth}
\centering
\includegraphics[width=\textwidth]{plots/loc_ncvx/loc_smoother-decay_sheep_1E+00.pdf}
\caption{Effect of $\mu$ of the adaptive smoothing strategy.}
\label{fig:ncvx_smoothing:2}
\end{subfigure}
\begin{subfigure}[b]{\linewidth}
\centering
\includegraphics[width=\textwidth]{plots/loc_ncvx/loc_smoother-const_sheep_1E+00.pdf}
\caption{Effect of $\mu$ of the constant smoothing strategy.}
\label{fig:ncvx_smoothing:3}
\end{subfigure}
\caption{Effect of smoothing on {PL-Casimir-SVRG}{} for the task of visual object localization on PASCAL VOC 2007.}\label{fig:ncvx_smoothing}
\end{figure}
\begin{figure}[!thb]
\centering
\begin{subfigure}[b]{\linewidth}
\centering
\includegraphics[width=\textwidth]{plots/loc_ncvx/loc_pl-lr_sheep_1E+00.pdf}
\caption{\small{Effect of the hyperparameter $\eta$.}}\label{fig:ncvx:pl_lr}
\end{subfigure}
\begin{subfigure}[b]{\linewidth}
\centering
\includegraphics[width=\textwidth]{plots/loc_ncvx/loc_inner-iter_sheep_1E+00.pdf}
\caption{\small{Effect of the iteration budget of the inner solver.}}\label{fig:ncvx:inner-iter}
\end{subfigure}
\begin{subfigure}[b]{\linewidth}
\centering
\includegraphics[width=\textwidth]{plots/loc_ncvx/loc_warmstart_sheep_1E+00.pdf}
\caption{\small{Effect of the warm start strategy of the inner {Casimir-SVRG-const}{} algorithm.}}
\label{fig:ncvx:warm-start}
\end{subfigure}
\caption{Effect of hyperparameters on {PL-Casimir-SVRG}{} for the task of visual object localization on PASCAL VOC 2007.}\label{fig:ncvx_hyperparam}
\end{figure}
\paragraph{Effect of Prox-Linear Learning Rate $\eta$}
Fig.~\ref{fig:ncvx:pl_lr} shows the robustness of the proposed method to the choice of $\eta$.
\paragraph{Effect of Iteration Budget}
Fig.~\ref{fig:ncvx:inner-iter} also shows the robustness of the proposed method to the choice of iteration budget of the inner solver, {Casimir-SVRG-const}.
\paragraph{Effect of Warm Start of the Inner Solver}
Fig.~\ref{fig:ncvx:warm-start} studies the effect of the
warm start strategy used within the inner solver {Casimir-SVRG-const}{} in each inner prox-linear iteration. The results are similar to
those obtained in the convex case, with the {\tt Prox-center} choice being the best compromise between acceleration and applicability.
\section{Introduction}
\input{sections/01_intro}
\section{Smooth Structured Prediction} \label{sec:setting}
\input{sections/02_struct_pred}
\section{Inference Oracles} \label{sec:inference_oracles}
\input{sections/03_inf_oracles}
\section{Implementation of Inference Oracles} \label{sec:smooth_oracle_impl}
\input{sections/04_smoothing}
\section{The Casimir Algorithm} \label{sec:cvx_opt}
\input{sections/05_Casimir}
\section{Extension to Non-Convex Optimization} \label{sec:ncvx_opt}
\input{sections/06_proxlin}
\section{Experiments} \label{sec:expt}
\input{sections/08_expt}
\section{Future Directions}
\input{sections/09_conclusion}
\paragraph{Acknowledgments}
This work was supported by NSF Award CCF-1740551, the Washington Research Foundation
for innovation in Data-intensive Discovery, and the program ``Learning in Machines and Brains'' of CIFAR.
\clearpage
\newpage
OVERVIEW: With zombies taking over the cities, a group of humans escapes the carnage by taking a small Coast Guard ship out to sea, but there's no getting away--even in the wide ocean. Original.
With another bleak vision of the zombie apocalypse, Keene makes a triumphant return to the still-thriving subgenre he helped revive with his 2004 debut The Rising (a movie version of which is currently in the works)... Delivering enough shudders and gore to satisfy any fan of the genre, Keene proves he's still a lead player in the zombie horror cavalcade. -- Publishers Weekly.
2007 Delerium Limited Hardcover. Sealed in plastic for complete archival protection upon receipt here.
Júlio César Jacobi (born 2 September 1986 in Guaramirim) is a Brazilian footballer who played as a goalkeeper.
Playing career
He was born on 2 September 1986 in the city of Guaramirim. A product of the youth academies of J. Malucelli and Paraná, he made his senior debut in 2006 with Botafogo, where he spent two seasons and appeared in 20 league matches.
In January 2008, Júlio César moved to Portugal, joining Belenenses. He made his Primeira Liga debut on 23 February in a home game against Marítimo (1:3), replacing forward Weldon in the 29th minute after goalkeeper Paulo Costinha was sent off. The Brazilian quickly became the first-choice goalkeeper and played all 30 of his team's games in the 2008/09 season, but the club finished second-to-last, in 15th place, and was relegated from the top division.
Although the team was subsequently retained in the top flight, Júlio César signed a contract with Lisbon side Benfica, reuniting with head coach Jorge Jesus, who had earlier brought him to Belenenses. In his first year at the new club the Brazilian served as Quim's backup, pushing José Moreira down to third-choice goalkeeper, and featured for the team in UEFA Europa League and Portuguese Cup matches. On 8 April 2010, in the closing minutes of the Europa League quarter-final against Liverpool (1:4), Júlio César suffered a concussion after a collision with Dirk Kuyt and had to be hospitalized. He made a full recovery, appearing in a total of 14 official matches during the season and winning the Portuguese championship, though he did not play in that competition. He made his Primeira Liga debut for Benfica on 28 August 2010 in a match against Vitória de Setúbal (3:0) and played 4 league games that season as the club finished runners-up; he also won the Portuguese League Cup for the second consecutive time without playing a single minute in it.
On 17 August 2011, Júlio César, together with teammates Carlos Martins and Jorge Ribeiro, moved to Spanish club Granada. He made his debut for the Andalusians on 13 December in an away Copa del Rey game against Real Sociedad (1:4); mostly serving as Roberto's backup, he played only 17 games in all competitions that season.
After his loan ended he returned to Benfica but never played another game, and on 1 September 2013 Júlio César terminated his contract with the Lisbon club. The Brazilian then remained without a club for a long time until, on 10 March 2014, he signed a five-month contract with Getafe, which urgently needed a goalkeeper after a serious injury to Miguel Ángel Moyá. Until the end of the season he competed with the club's second goalkeeper, Jordi Codina, playing in 5 La Liga games.
In September 2014, Júlio César returned to his homeland and signed with Fluminense, where he spent the next five years of his playing career, though he became first choice only from 2018, when Diego Cavalieri left the club. In the 2018 season he made several crucial saves to keep the club in the Brazilian top flight, one of them a penalty stopped in the final round against América Mineiro (1:0).
On 3 January 2019, the goalkeeper signed a two-year contract with Grêmio, where he was to replace the club's long-serving first-choice goalkeeper Marcelo Grohe, who had left. He ended his professional playing career with Grêmio in 2021.
Honours and achievements
Rio de Janeiro state championship (1):
Botafogo: 2006
Taça Rio winner (1):
Botafogo: 2006, 2007
Fluminense: 2018
Portuguese champion (1):
Benfica: 2009/10
Portuguese League Cup winner (2):
Benfica: 2009/10, 2010/11
Brazilian Primeira Liga winner (1):
Fluminense: 2017
Taça Guanabara winner (1):
Fluminense: 2017
Rio Grande do Sul state championship (2):
Grêmio: 2019, 2020
Individual
Named to the team of the season of the Rio de Janeiro state championship: 2018
Notes
External links
Brazilian footballers
Football goalkeepers
Botafogo players
Belenenses players
Benfica (Lisbon) players
Granada players
Getafe players
Fluminense players
Grêmio players
Brazilian expatriate footballers
Expatriate footballers in Portugal
Expatriate footballers in Spain
\section{\label{sec:intro} Introduction}
Joint fits of \ensuremath{\nu_\mu \rightarrow \nu_\mu}\xspace disappearance and \ensuremath{\nu_\mu \rightarrow \nu_e}\xspace appearance oscillations in long-baseline neutrino oscillation experiments can provide information on four of the standard neutrino model parameters, $|\dmsq{32}|$, $\theta_{23}$, \ensuremath{\delta_{\rm CP}}\xspace, and the mass hierarchy, when augmented by measurements of the other three parameters, \dmsq{21}, $\theta_{12}$, and $\theta_{13}$, from other experiments \cite{pdg}. Of the four parameters, the first pair are currently most sensitively measured by \ensuremath{\nu_\mu \rightarrow \nu_\mu}\xspace oscillations and the second pair are most sensitively measured by \ensuremath{\nu_\mu \rightarrow \nu_e}\xspace oscillations. However, the precision with which \ensuremath{\nu_\mu \rightarrow \nu_e}\xspace oscillations can measure the second pair of parameters depends on the precision of the measurement of $\theta_{23}$ since that oscillation probability is largely proportional to \sinsqtwo{13}\sinsq{23}.
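For reference, the leading-order vacuum expression behind this statement (a textbook approximation neglecting matter effects and solar-scale terms, not a formula taken from this analysis) is
\begin{align*}
P(\ensuremath{\nu_\mu \rightarrow \nu_e}\xspace) \simeq \sinsqtwo{13}\,\sinsq{23}\,\sin^2\!\left(\frac{\dmsq{32}\,L}{4E}\right),
\end{align*}
so the expected \ensuremath{\nu_\mu \rightarrow \nu_e}\xspace event rate scales directly with \sinsq{23}.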
The quantity $\tan^2\theta_{23}$ gives the ratio of the coupling of the third neutrino mass state to \ensuremath{{\nu}_{\mu}}\xspace and \ensuremath{{\nu}_{\tau}}\xspace. Whether $\theta_{23} < \pi/4$ (lower octant), $\theta_{23} > \pi/4$ (upper octant), or $\theta_{23} = \pi/4$ (maximal mixing) is important for models and symmetries of neutrino mixing~\cite{ref:Altarelli-Feruglio}.
The determination of the neutrino mass hierarchy is important both for grand unified models~\cite{ref:Altarelli-Feruglio2} and for the interpretation of neutrinoless double beta decay experiments~\cite{ref:Pascoli-Petcov}. In long-baseline neutrino experiments, it is measured by observing the effect of coherent forward neutrino scattering from electrons in the earth, which enhances \ensuremath{\nu_\mu \rightarrow \nu_e}\xspace oscillations for the normal mass hierarchy (NH), $\dmsq{32}>0$, and suppresses them for the inverted mass hierarchy (IH), $\dmsq{32}<0$. For the baselines of current experiments and for fixed baseline length to energy ratio, the magnitude of this effect is approximately proportional to the length of the baseline.
The amount of CP violation in the lepton sector is proportional to $|\sin\ensuremath{\delta_{\rm CP}}\xspace|$. For \ensuremath{\delta_{\rm CP}}\xspace in the range 0 to $2\pi$, \ensuremath{\nu_\mu \rightarrow \nu_e}\xspace oscillations are enhanced for $\ensuremath{\delta_{\rm CP}}\xspace > \pi$ and suppressed for $\ensuremath{\delta_{\rm CP}}\xspace < \pi$, with maximal enhancement at $\ensuremath{\delta_{\rm CP}}\xspace = 3\pi/2$ and maximal suppression at $\ensuremath{\delta_{\rm CP}}\xspace = \pi/2$.
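These dependences can be illustrated numerically with the exact three-flavor vacuum oscillation probability. The sketch below is a minimal stand-in, not the analysis code of this paper: it omits the matter effects that drive the hierarchy sensitivity, and the mixing parameters are illustrative round values rather than fit results.

```python
import numpy as np

# Illustrative round parameter values (radians, eV^2), not this paper's fits.
TH12, TH13, TH23 = 0.5843, 0.1496, np.pi / 4
DM2_21, DM2_31 = 7.5e-5, 2.5e-3  # normal hierarchy

def pmns(dcp):
    """Standard-parametrization PMNS matrix (rows: e, mu, tau)."""
    s12, c12 = np.sin(TH12), np.cos(TH12)
    s13, c13 = np.sin(TH13), np.cos(TH13)
    s23, c23 = np.sin(TH23), np.cos(TH23)
    ep, em = np.exp(1j * dcp), np.exp(-1j * dcp)
    return np.array([
        [c12 * c13, s12 * c13, s13 * em],
        [-s12 * c23 - c12 * s23 * s13 * ep,
         c12 * c23 - s12 * s23 * s13 * ep, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ep,
         -c12 * s23 - s12 * c23 * s13 * ep, c23 * c13],
    ])

def prob_mu_e(energy_gev, baseline_km, dcp):
    """Exact three-flavor vacuum P(nu_mu -> nu_e); matter effects omitted."""
    u = pmns(dcp)
    m2 = np.array([0.0, DM2_21, DM2_31])  # only mass-squared splittings matter
    phase = np.exp(-2j * 1.267 * m2 * baseline_km / energy_gev)
    amplitude = np.sum(np.conj(u[1, :]) * u[0, :] * phase)
    return float(np.abs(amplitude) ** 2)
```

Even in vacuum, evaluating this at \unit[1.9]{GeV} and \unit[810]{km} reproduces the pattern above: the appearance probability is larger at $\delta_{\rm CP} = 3\pi/2$ than at $\delta_{\rm CP} = \pi/2$.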
In addition to the NOvA results \cite{nova_joint}, previous joint fits of \ensuremath{\nu_\mu \rightarrow \nu_\mu}\xspace and \ensuremath{\nu_\mu \rightarrow \nu_e}\xspace oscillations in long-baseline experiments have been reported by the MINOS \cite{minos} and T2K \cite{t2k} experiments.
The data reported here correspond to the equivalent of $8.85\times10^{20}$ protons on target (POT) in the full NOvA\xspace Far Detector with a beam line set to focus positively charged mesons, which greatly enhances the neutrino to antineutrino ratio. This represents a 46\% increase in neutrino flux since our last publication \cite{nova_joint}.
These data were taken between February 6, 2014 and February 20, 2017.
Significant improvements have been made to both the simulations and data analysis.
The key updates to the simulations include a new data-driven neutrino flux model, an improved treatment of multi-nucleon interactions, and an improved light model including Cherenkov radiation in the scintillator.
The main improvements in the \ensuremath{{\nu}_{\mu}}\xspace disappearance data analysis are the use of a deep-learning event classifier and the separation of selected events into different samples based on their energy resolution.
The main improvement for the \ensuremath{{\nu}_{e}}\xspace appearance data analysis is the addition of a signal-rich sample that expands the active volume considered.
\section{\label{NOvA} The NO\lowercase{v}A experiment}
NOvA \cite{tdr} is a two-detector, long-baseline neutrino oscillation experiment that samples the Fermilab NuMI neutrino beam \cite{numi} approximately \unit[1]{km} from the source using a Near Detector (ND) and observes the oscillated beam \unit[810]{km} downstream with a Far Detector (FD) near Ash River, MN.
The detectors are functionally identical, scintillating tracker-calorimeters consisting of layered reflective polyvinyl chloride cells filled with a liquid scintillator composed primarily of mineral oil with a 5\% pseudocumene admixture.
These cells are organized into planes alternating in vertical and horizontal orientation.
The net composition of the detectors is 63\% active material by mass.
Light produced within a cell is collected using a loop of wavelength-shifting optical fiber, which is connected to an avalanche photodiode (APD).
The FD cells are \unit[$3.9 \times 6.6$]{$\text{cm}$} in cross section, with the \unit[6.6]{$\text{cm}$} dimension along the beam direction, and \unit[15.5]{m} long \cite{ref:PVC}.
The FD contains 896 planes, leading to a total mass of \unit[14]{kt}.
The majority of ND cells are identical to those of the FD apart from being shorter (\unit[3.9]{m} long instead of \unit[15.5]{m}).
To improve muon containment, the downstream end of the ND is a ``muon catcher'': a stack in which pairs of one vertically oriented and one horizontally oriented scintillator plane alternate with single \unit[10]{cm}-thick steel planes.
There are 11 such pairs of scintillator planes separated by 10 steel planes; the vertical planes in this section are \unit[2.6]{m} high.
The ND consists of 214 planes for a total mass of \unit[290]{ton}.
The FD sits \unit[14.6]{mrad} away from the central axis of the NuMI beam.
This off-axis location results in a neutrino flux with a narrow-band energy spectrum centered around \unit[1.9]{GeV} in the FD.
Such a spectrum emphasizes $\ensuremath{{\nu}_{\mu}}\xspace \rightarrow \ensuremath{{\nu}_{e}}\xspace$ oscillations at this baseline and reduces backgrounds from higher energy neutral current events.
The ND is positioned to maximize the similarity between the neutrino energy spectrum at its location and that expected at the FD in the absence of oscillations.
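The narrow-band spectrum follows from two-body pion decay kinematics: at a fixed off-axis angle the neutrino energy depends only weakly on the parent pion energy. A minimal sketch of this standard relation (the function name is ours; the factor 0.43 is $1 - m_\mu^2/m_\pi^2$):

```python
import numpy as np

M_PI = 0.13957  # charged pion mass, GeV

def off_axis_enu(e_pi, theta_rad):
    """Neutrino energy from pi -> mu nu decay at lab angle theta.
    The factor 0.43 is (1 - m_mu^2 / m_pi^2)."""
    gamma = e_pi / M_PI
    return 0.43 * e_pi / (1.0 + (gamma * theta_rad) ** 2)
```

At \unit[14.6]{mrad} this curve stays near \unit[2]{GeV} over a wide range of pion energies, whereas on axis the neutrino energy grows linearly with pion energy.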
The beam is pulsed at an average rate of \unit[0.75]{Hz}.
All of the APD signals above threshold from a large time window around each \unit[10]{$\mu$s} beam spill are retained.
Because the FD is located on the Earth's surface, it is exposed to a substantial cosmic ray flux, which is only partially mitigated by its overburden of \unit[1.2]{m} of concrete plus \unit[15]{cm} of barite.
Therefore, we also use cosmic data taken from \unit[420]{$\mu$s} surrounding the beam spill within beam triggers to obtain a direct measure of the cosmic background in the FD.
Separate periodic minimum-bias triggers of the same length as the beam trigger allow us to collect high-statistics cosmic data for algorithm training and calibration purposes.
As the ND is \unit[100]{m} underground, the cosmic ray flux there is negligible.
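A minimal sketch of this sideband normalization, assuming a cosmic rate that is uniform in time (the window and spill lengths are those quoted above; the function name is ours):

```python
def cosmic_in_spill(n_sideband, spill_us=10.0, window_us=420.0):
    """Scale cosmic-ray candidates counted in the out-of-time window
    to the expected count inside the beam spill, assuming the cosmic
    rate is uniform in time."""
    return n_sideband * spill_us / window_us
```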
\begin{figure}[tb]
\includegraphics[width=\linewidth]
{nucomponents_nd_fhc_nosim.pdf}
\caption{\label{fig:flux}
Predicted
composition of the NuMI beam at the ND with the horns focusing
positively charged hadrons. Curves from top to bottom: \ensuremath{{\nu}_{\mu}}\xspace, $\bar{\nu}_{\mu}$, \ensuremath{{\nu}_{e}}\xspace, $\bar{\nu}_{e}$. Table~\ref{tab:beam_comp} gives the fractional composition for each neutrino flavor integrated from \unit[1-5]{GeV}.
}
\end{figure}
\section{\label{sim} Simulations}
To assist in calibrating our detectors, determining our analysis criteria, and extracting physical parameters, we rely on predictions generated by a comprehensive simulation suite, which proceeds in stages.

We begin by using \geant/ \cite{geant4} and a detailed model of the beamline geometry to simulate the production of hadrons arising from the collision of the \unit[120]{GeV} primary proton beam with the graphite target \cite{graphite-target}, as well as their subsequent focusing and decay into neutrinos.
The resultant neutrino flux is corrected according to constraints on the hadron spectrum from thin-target hadroproduction data using the \ppfx/ tools developed for the NuMI beam by the MINERvA collaboration \cite{minerva-flux}.
Table~\ref{tab:beam_comp} shows simulated predictions of the beam composition at the Near and Far Detectors in the absence of oscillations; the ND predicted spectra from \unit[0-20]{GeV} are shown in Fig.~\ref{fig:flux}.
\begin{table}[h]
\caption{\label{tab:beam_comp} Predicted beam flux composition in the 1 to 5 GeV neutrino energy region in the absence of oscillations.}
\begin{tabular}{m{0.35\linewidth}cc} \hline \hline
Component & \parbox[t]{0.30\linewidth}{ND (\%)} & \parbox[t]{0.3\linewidth}{FD (\%)} \\ \hline
\ensuremath{{\nu}_{\mu}}\xspace & 93.8 & 94.1 \\
\ensuremath{\bar{\nu}_{\mu}}\xspace & 5.3 & 4.9 \\
\ensuremath{{\nu}_{e}}\xspace and \ensuremath{\bar{\nu}_{e}}\xspace & 0.9 & 1.0 \\
\hline \hline
\end{tabular}
\end{table}
The predicted flux is then used as input to \genie/ \cite{genie-primary,genie-manual}, which simulates neutrino reactions in the variety of materials of which our detectors and their surroundings are composed.
We alter its default interaction model as described below.
Finally, we use a detailed model of our detectors with a combination of \geant/ and custom software to simulate the detectors' photon response to particles outgoing from individual predicted neutrino reactions, including both scintillation and Cherenkov radiation in the active detector materials, as well as the light transport, collection, and digitization processes.
The overall energy scales of both detectors are calibrated using the minimum-ionizing portions of stopping cosmic ray muon tracks.
As in our previous results {\cite{nova_numu,nova_joint,nova_sterile}}, we augment \genie/'s default configuration by enabling its semi-empirical model for Meson Exchange Current (MEC) interactions \cite{katori-empirical-MEC} to account for the likely presence of interactions of neutrinos with nucleon-nucleon pairs in both charged- and neutral-current reactions.
However, in this analysis we no longer reweight the momentum transfer distributions produced by this model, preferring instead to allow fits to the FD data to profile \cite{ref:profile} over the substantially improved systematic uncertainty treatment for this component of the model, as described in Sec.~\ref{systs}. In our central-value prediction we simply increase the rate of MEC interactions by 20\% as suggested by fits to the sample of ND \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace candidate events in our ND data.
In addition, we now reweight the output of the default model for quasielastic production to treat the expected effect of long-range nuclear charge screening according to the Random Phase Approximation (RPA) calculations of J. Nieves and collaborators \cite{Valencia-RPA,Gran-RPA}.
Lastly, we continue to reduce the rate of \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace nonresonant single pion production with invariant hadronic mass $W < \unit[1.7]{GeV}$ to 41\% of \genie/'s nominal value \cite{nonres-1pi}.
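The adjustments above amount to per-event weights applied to the \genie/ output. The sketch below is a simplification: the mode labels are ours, and the kinematics-dependent RPA reweighting of quasielastic events is not reproduced.

```python
def interaction_weight(mode, w_gev, is_numu_cc):
    """Illustrative per-event weight mimicking the model tuning described
    above.  The kinematics-dependent RPA reweighting of quasielastic
    events is omitted for brevity."""
    weight = 1.0
    if mode == "MEC":
        weight *= 1.20  # central-value MEC rate increase from ND fits
    if mode == "nonres-1pi" and is_numu_cc and w_gev < 1.7:
        weight *= 0.41  # suppress nonresonant single-pion production
    return weight
```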
\section{\label{analysis} Data analysis}
In order to infer the oscillation parameters from our data, we compare the spectra observed at the FD with our predictions under various oscillation hypotheses.
This process consists of three steps.
First, we develop selections to retain \ensuremath{{\nu}_{e}}\xspace and \ensuremath{{\nu}_{\mu}}\xspace charged-current (CC) events and to reject neutral-current (NC) events and cosmogenic activity.
Second, we apply the relevant subset of these selections (excluding, e.g., cosmic rejection criteria) to samples observed at the ND, where both \ensuremath{{\nu}_{\mu}}\xspace disappearance and \ensuremath{{\nu}_{e}}\xspace appearance are negligible, to constrain our prediction for the selected sample composition.
Finally, we combine the constrained prediction from the previous step with the predicted ratio of the FD and ND spectra, which accounts for geometric differences between the detectors, the beam dispersion, and the effect of oscillations.
The result is used in fits to the neutrino energy spectra of the candidates observed at the FD.
The following sections discuss how this procedure unfolds for each of the two analyses separately.
\subsection{\label{numu} \ensuremath{{\nu}_{\mu}}\xspace{} disappearance}
\subsubsection{Event selection}
Isolation of samples of candidate events begins with cells whose APD responses are above threshold, known as hits; those neighboring each other in space and time are clustered to produce candidate neutrino events \cite{baird-thesis,slicing}.
We pass hits in event candidates that survive basic quality cuts in timing (relative to the \unit[10]{$\mu$s} beam spill), containment, and contiguity into a deep-learning classifier known as the Convolutional Visual Network (CVN) \cite{cvnpaper}.
CVN applies a series of linear operations, trained over simulated beam and cosmic data event samples, which extract complex, abstract visual features from each event, in a scheme based on techniques from computer vision \cite{szegedy2014googlenet,hinton1986}.
The final step of the classifier is a multilayer perceptron \cite{ref:mlp1,ref:mlp2} that maps the learned features onto a set of normalized classification scores, which range over beam neutrino event hypotheses (\ensuremath{\nu_e\,{\rm CC}}\xspace, \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace, \ensuremath{\nu_{\tau}\,{\rm CC}}\xspace, and NC) and a cosmogenic hypothesis.
We retain events whose CVN score for the \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace hypothesis exceeds a tuned threshold.
To identify the muon in such events, tracks produced by a Kalman filter algorithm \cite{ref:Kalman,raddatz-thesis,ospanov-thesis} are scored by a $k$-nearest neighbor classifier \cite{ref:kNN} over the following variables: likelihoods in $dE/dx$ and scattering constructed from single-particle hypotheses, total track length, and the fraction of planes along the track consistent with having minimum-ionizing-like $dE/dx$.
The most muon-like of these tracks is taken to be the muon candidate.
Events that have no sufficiently muon-like track are rejected.
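The track-level selection logic can be sketched as follows; the scores stand in for the $k$-nearest-neighbor classifier output, and the 0.5 threshold is illustrative rather than the tuned analysis value.

```python
def select_muon_candidate(track_scores, threshold=0.5):
    """Return the index of the most muon-like track, or None to reject
    the event when no track is sufficiently muon-like."""
    if not track_scores:
        return None
    best = max(range(len(track_scores)), key=lambda i: track_scores[i])
    return best if track_scores[best] >= threshold else None
```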
We also discard events where any clusters of activity extend to the edges of the detector or where any track besides the muon candidate penetrates into the muon catcher in the ND.
To avoid being considered as cosmogenic, FD events must furthermore be deemed sufficiently signal-like by a boosted decision tree (BDT) \cite{ref:BDT} trained over simulation and cosmic data that considers the positions, directions, and lengths of tracks, as well as the fraction of the event's total hit count associated with the track and the CVN score for the cosmic hypothesis.
According to our simulation, the FD selection efficiency for our basic quality and containment cuts, relative to all true \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace events within a fiducial volume, is 41.3\%; the efficiency of the CVN and muon identification constraints applied to the quality-and-containment sample is 78.1\%. The final selected sample is 92.7\% \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace.
The predicted composition of the sample at various stages in the selection is given in Table~\ref{tab:numu_cutflow}.
\begin{table*}[ht]
\caption{\label{tab:numu_cutflow} Predicted composition of the \ensuremath{{\nu}_{\mu}}\xspace CC candidate sample in the FD, in event counts, at various stages in the selection process. Oscillation parameters used in the prediction are the best fit values from Sec.~\ref{sec:results}.}
\begin{tabular}{m{0.15\linewidth}S[table-format = 3.1]S[table-format = 3.1]SS[table-format = 3.1]Sc} \hline \hline
Selection &
\parbox[t]{0.13\linewidth}{\ensuremath{\nu_\mu \rightarrow \nu_\mu}\xspace CC} &
\parbox[t]{0.13\linewidth}{NC} &
\parbox[t]{0.13\linewidth}{ \ensuremath{\nu_e\,{\rm CC}}\xspace } &
\parbox[t]{0.13\linewidth} {\ensuremath{\nu_{\tau}\,{\rm CC}}\xspace } &
\parbox[t]{0.13\linewidth}{\ensuremath{\nu_e \rightarrow \nu_\mu}\xspace CC } &
\parbox[t]{0.13\linewidth} {Cosmic} \\ \hline \noalign{\vskip 2pt}
No selection & 963.7 & 612.1 & 126.6 & 9.6 & 0.6 & $4.91 \times 10^{7} $ \\
Containment & 160.8 & 219.9 & 61.5 & 2.4 & 0.3 & $1.95 \times 10^{4} $ \\
CVN & 132.1 & 3.0 & 0.3 & 0.4 & 0.2 & 26.4 \\
Cosmic BDT & 126.1 & 2.5 & 0.3 & 0.4 & 0.2 & 5.8 \\
\hline \hline
\end{tabular}
\end{table*}
\subsubsection{Energy estimation and analysis binning}
We reconstruct each event's neutrino energy $E_{\nu}$ using a function of the muon candidate and hadronic remnant energies, which are estimated separately.
The muon candidate energy $E_{\mu}$ is determined from the range of the track, calibrated to true muon energy in our simulation.
We estimate the energy of the hadronic component with a mapping of observed non-muon energy to true non-muon energy also calibrated with the simulation \cite{lein-thesis}.
The resulting neutrino energy resolution over the whole sample is 9.1\% at the FD (11.8\% at the ND due to the lower active fraction of the muon catcher) for \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace events.
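The combination of the two estimates can be sketched as below; the single `had_scale` factor is a hypothetical stand-in for the simulation-derived mapping from visible to true hadronic energy, which in practice is not a constant.

```python
def reconstruct_enu(e_mu_range, e_had_visible, had_scale=1.6):
    """Illustrative neutrino energy estimate: muon energy from track range
    plus hadronic energy from a visible-to-true mapping.  The constant
    'had_scale' is a hypothetical stand-in for that mapping."""
    return e_mu_range + had_scale * e_had_visible
```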
We employ a variable neutrino energy binning with finer bins near the disappearance maximum (about \unit[1.6]{GeV} at the NOvA baseline) to improve our sensitivity to the effect of \sinsqtwo{23} and coarser bins elsewhere.
We further divide the event populations in each energy bin into four populations in reconstructed hadronic energy fraction, $(E_{\nu}-E_{\mu})/E_{\nu}$, which correspond to regions of different neutrino energy resolution \cite{vinton-thesis}.
These divisions are chosen such that the FD populations are of equal size in the unoscillated simulation; however, the boundaries show little sensitivity to the choice of oscillation parameters.
Grouping events in this manner isolates most of the cosmic and beam NC backgrounds (which are typically mistaken for signal events with energetic hadronic systems), together with the signal events of worst energy resolution, in a different quartile from the best-resolution signal events, where sensitivity to the disappearance is strongest.
The average \ensuremath{{\nu}_{\mu}}\xspace energy resolution in the FD across the whole energy spectrum is estimated to be 6.2\%, 8.2\%, 10.3\%, and 12.3\% for each quartile, respectively.
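A minimal sketch of the quartile construction, with boundaries taken from the unoscillated simulation so that the four populations are equal (function names are ours):

```python
import numpy as np

def quartile_boundaries(had_frac_sim):
    """Hadronic-energy-fraction boundaries giving four equal populations
    in the unoscillated FD simulation."""
    return np.quantile(np.asarray(had_frac_sim), [0.25, 0.50, 0.75])

def quartile_index(had_frac, bounds):
    """Assign an event to its energy-resolution quartile (0 = best)."""
    return int(np.searchsorted(bounds, had_frac, side="right"))
```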
\begin{figure}[ht]
\includegraphics[width=\linewidth]{nd_datamc_allquantiles_testpads.pdf}
\caption{\label{fig:numu_nd}
Comparison of the reconstructed neutrino energy for selected \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace events (black dots) in the ND with area-normalized simulation (red line). Shading represents the bin-to-bin systematic uncertainties. The gray area, which is nearly indistinguishable from the lower figure boundary, shows the simulated background.}
\end{figure}
\begin{figure*}[ht]
\includegraphics[width=0.8\linewidth]{nd_datamc_4pads_testpads.pdf}
\caption{\label{fig:numu_nd_quant} Comparison of selected \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace candidates (black dots) in the ND data to the prediction (red histograms) in the hadronic energy fraction quartiles, where the prediction is absolutely normalized to the data by exposure. The expected background contributions (gray) are smaller in the quartiles with better resolution. The shaded band represents the quadrature sum of all systematic uncertainties. These distributions are the input to the extrapolation procedure described in the text.}
\end{figure*}
Figure~\ref{fig:numu_nd} shows a comparison of the reconstructed neutrino energy for the selected \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace events in the ND with simulation shown area-normalized to the data.
The means of the distributions agree to within \unit[10]{MeV} (0.6\%).
Normalizing the prediction by area removes a 1.3\% normalization difference between the data and the simulation and suppresses 10-20\% absolute normalization uncertainties due primarily to our knowledge of the neutrino flux and normalization offsets from cross-section uncertainties.
The remaining uncertainties arise from shape differences.
The full set of uncertainties that are used to compute the error band is described in Sec.~\ref{systs}.
Figure~\ref{fig:numu_nd_quant} shows the corresponding distributions divided into the quartiles.
\subsubsection{\label{subsec:numu extrap} Constraints from the Near Detector data}
As in our previous work \cite{nova_numu}, we obtain a data-driven estimate for the true neutrino energy spectrum using our observed ND data.
To do so, we reweight the simulation in each reconstructed neutrino energy bin to obtain agreement with the ND data, thus correcting the differences observed in Fig.~\ref{fig:numu_nd_quant}.
After subtracting the expected background, which is minimal, we pass the resulting reconstructed neutrino energy spectrum through the migration matrix between reconstructed and true neutrino energies predicted by our ND simulation.
The corrected prediction is then multiplied by the predicted bin-by-bin ratios of the FD and ND true energy spectra, which includes the effects of differing detector geometries and acceptances, beam divergence, and three-flavor oscillations, to obtain an expected FD true energy spectrum.
The latter is finally converted back to reconstructed energy by way of the analogous FD migration matrix.
This constrained signal prediction is summed together with the cosmic prediction, whose reconstructed energy distribution is extracted using events in the minimum-bias trigger passing all the selection criteria and normalized using the \unit[420]{$\mu$s} window around the beam bunch, and a simulation-based beam background prediction to compare to the observed FD data.
In the current analysis, this extrapolation procedure is performed within each hadronic energy fraction range separately so that neutrino reaction types that favor different regions of the elastic-to-inelastic continuum (and thereby have typically different neutrino energy resolution) can be constrained independently.
We find the total number of events in each of the four quartiles, in order from lowest to highest inelasticity, to be adjusted by $+12\%$, $-13\%$, $-13\%$, and $+4\%$ relative to the nominal simulation by this method.
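Schematically, the extrapolation for a single quartile can be written in terms of the two migration matrices and the predicted Far/Near ratio. The sketch below assumes backgrounds have already been subtracted, and all names and inputs are illustrative.

```python
import numpy as np

def extrapolate_fd(nd_data, nd_pred_reco, nd_reco_to_true, fn_true_ratio,
                   fd_true_to_reco):
    """Data-driven FD prediction for one hadronic-energy-fraction quartile.

    nd_data, nd_pred_reco: ND reconstructed-energy spectra (data, simulation),
        with backgrounds already subtracted.
    nd_reco_to_true: ND migration matrix mapping reco to true energy.
    fn_true_ratio: predicted bin-by-bin FD/ND ratio of true-energy spectra,
        including oscillations and acceptance differences.
    fd_true_to_reco: FD migration matrix mapping true to reco energy.
    """
    weights = nd_data / nd_pred_reco            # bin-by-bin data/MC correction
    nd_true = nd_reco_to_true @ (nd_pred_reco * weights)
    fd_true = fn_true_ratio * nd_true
    return fd_true_to_reco @ fd_true
```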
\subsection{\ensuremath{{\nu}_{e}}\xspace appearance}
\subsubsection{Event selection}
We employ the same hit finding and time clustering as in the \ensuremath{{\nu}_{\mu}}\xspace{} disappearance analysis, and select events whose \ensuremath{\nu_e\,{\rm CC}}\xspace score under the same CVN algorithm exceeds a tuned selection cut.
To further purify the sample of \ensuremath{\nu_e\,{\rm CC}}\xspace candidates, we reconstruct events as follows.
First, we build three-dimensional event vertices using the intersection of lines constructed from Hough transforms applied to each two-dimensional detector view separately \cite{ref:hough,ref:earms}.
Hits in the same view falling roughly along common directions emanating from these vertices are further grouped into ``prongs,'' which are then matched between views based on their extent and energy deposition \cite{ref:fuzzyk,niner-thesis}.
We use these prongs to remove events where the energy of the event is distributed largely transverse to the neutrino beam direction; our simulation and our large sample of cosmic data taken from minimum-bias triggers indicate such events are typically cosmogenic.
We further reject events where the prongs fail containment criteria, where extremely long tracks indicate obvious muons, where there are too many hits for proper reconstruction, or where another event in close proximity in both time and space approaches the top of the detector.
To combat background events from cosmogenic photon showers entering through the back of the detector, where the overburden is thinner, we also cut events where the number of planes without hits in the upstream portion of the event exceeds the number in the downstream portion, the reverse of the expectation for a downstream-directed shower.
Events surviving these selections form our ``core'' sample in both detectors.
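For instance, the backward-entering-photon cut above reduces to a single comparison of hitless plane counts (function and argument names are ours):

```python
def passes_backward_shower_cut(n_empty_upstream, n_empty_downstream):
    """Reject events with more hitless planes in the upstream portion of
    the event than in the downstream portion, the reverse of the
    expectation for a downstream-directed shower."""
    return n_empty_upstream <= n_empty_downstream
```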
The predicted composition of the FD sample at various stages in this selection is given in Table~\ref{tab:nue_core_cutflow}.
\begin{figure*}[htb]
\subfloat{%
\includegraphics[height=0.35\textwidth]{Peripheral_BDTCVN_BeamPurity.pdf}
}
\subfloat{%
\includegraphics[width=0.45\textwidth]{decomp_Peripheral_cosmic_bdt_data.pdf}
}
\caption{\label{fig:nue_bdt} The peripheral sample is a signal-rich subset of \ensuremath{{\nu}_{e}}\xspace FD candidates that failed the core cosmic rejection or containment criteria (see text).
Left: The two-dimensional BDT-CVN space used in the definition of the peripheral sample. The predicted distribution of \ensuremath{{\nu}_{e}}\xspace appearance signal events (boxes) is shown superimposed on the predicted purity in each bin (shaded color). The peripheral sample boundary is chosen at the red line: the majority of signal events lie above and to the right and the sample has little cosmogenic contamination there, while events to the left and below are predominantly cosmogenic and are rejected.
Right: comparison of the observed distribution (black points) of the BDT variable for peripheral events with the prediction (stacked histogram).}
\end{figure*}
\begin{table*}[!ht]
\caption{\label{tab:nue_core_cutflow} Predicted composition of the core \ensuremath{{\nu}_{e}}\xspace CC candidate sample at the FD, in event counts, at various stages in the selection process. Oscillation parameters used in the prediction are the best fit values from Sec.~\ref{sec:results}. These figures do not include the effect of the extrapolation procedure described in Sec.~\ref{subsec:nue ND}.}
\begin{tabular}{m{0.21\linewidth}SSSSc} \hline \hline
Selection & \parbox[t]{0.14\linewidth}{$\ensuremath{{\nu}_{\mu}}\xspace \rightarrow \ensuremath{{\nu}_{e}}\xspace$ CC} & \parbox[t]{0.14\linewidth}{Beam \ensuremath{{\nu}_{e}}\xspace CC} & \parbox[t]{0.14\linewidth}{NC} & \parbox[t]{0.14\linewidth}{\ensuremath{{\nu}_{\mu}}\xspace, \ensuremath{{\nu}_{\tau}}\xspace CC} & \parbox[t]{0.14\linewidth}{Cosmic}\\ \hline\noalign{\vskip 2pt}
No selection & 77.9 & 48.7 & 612.1 & 973.8 & $4.91 \times 10^{7} $ \\
Containment/energy cut & 52.3 & 8.0 & 121.4 & 49.3 & $2.05 \times 10^{4} $ \\
Pre-CVN cosmic rejection & 51.3 & 7.9 & 114.3 & 47.0 & $1.58\times 10^{4} $ \\
CVN & 41.4 & 6.0 & 5.3 & 1.3 & 2.0 \\
\hline \hline
\end{tabular}
\end{table*}
\begin{table*}[!htb]
\caption{\label{tab:nue_periph_cutflow} Predicted composition of the peripheral \ensuremath{{\nu}_{e}}\xspace CC candidate sample, in event counts, at two stages in the selection process. Here ``basic quality'' refers to events that pass beam and detector data quality cuts but fail the core sample containment criteria. Parameters are as in Table~\ref{tab:nue_core_cutflow}.}
\begin{tabular}{m{0.21\linewidth}SSSSc} \hline \hline
Selection & \parbox[t]{0.14\linewidth}{$\ensuremath{{\nu}_{\mu}}\xspace \rightarrow \ensuremath{{\nu}_{e}}\xspace$ CC} & \parbox[t]{0.14\linewidth}{Beam \ensuremath{{\nu}_{e}}\xspace CC} & \parbox[t]{0.14\linewidth}{NC} & \parbox[t]{0.14\linewidth}{\ensuremath{{\nu}_{\mu}}\xspace, \ensuremath{{\nu}_{\tau}}\xspace CC} & \parbox[t]{0.14\linewidth}{Cosmic}\\ \hline\noalign{\vskip 2pt}
Basic quality & 20.4 & 6.6 & 199.9 & 160.9 & $2.79 \times 10^{6} $ \\
CVN + BDT & 5.9 & 1.0 & 0.2 & 0.1 & 2.2 \\
\hline \hline
\end{tabular}
\end{table*}
We also construct a second, ``peripheral'' sample of FD events by considering events that have high scores for the CVN \ensuremath{{\nu}_{e}}\xspace hypothesis but which fail the cosmic rejection or containment criteria.
These are subjected to a more focused BDT (distinct from the one mentioned in Sec.~\ref{numu}) trained over the variables used for the containment and cosmic rejection cuts.
The containment variables include the closest distance to the top of the detector and the closest distance to any other face of the detector.
Variables distinguishing cosmogenic from beam-induced activity include the transverse momentum fraction of the event and the number of hits in the event.
Simulation and our cosmic data sample indicate that events in the signal-like regions of both this BDT and CVN are likely to be signal and not the result of externally-entering activity and are therefore retained.
Distributions for the peripheral sample illustrating the predicted beam and cosmic response in this BDT and the CVN \ensuremath{{\nu}_{e}}\xspace score, as well as comparing the BDT distribution in data and simulation, are given in Fig.~\ref{fig:nue_bdt}.
Because events on the periphery of the detector are not guaranteed to be fully contained, peripheral events are summed together into a single bin instead of dividing them by the neutrino energy estimate as is done for the core sample.
The FD event counts at two stages of the peripheral selection are noted in Table~\ref{tab:nue_periph_cutflow}.
The ND event sample is predicted to consist of 42\% beam \ensuremath{{\nu}_{e}}\xspace, 30\% NC background, and 28\% \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace background.
These predictions include the effect of the data-driven constraints described in Sec.~\ref{subsec:nue ND}.
The simulated FD efficiency for the basic quality and containment cuts used in the combined core and peripheral selections relative to all true \ensuremath{{\nu}_{e}}\xspace CC events within a fiducial volume is 92.6\%.
The remaining core selections, i.e., CVN and cosmic rejection, retain 58.8\% of the true \ensuremath{{\nu}_{e}}\xspace CC events in the quality-and-containment population.
With the addition of the peripheral sample under the combined CVN+BDT criteria, this figure rises to 67.4\%.
Improvements to the selection criteria generate an increase of 6.8\% in effective exposure relative to our previous results, while the efficiency gain due to the addition of the peripheral sample yields a further increase of 17.4\%.
\subsubsection{Energy estimation and binning}
To estimate the neutrino energy in \ensuremath{{\nu}_{e}}\xspace candidate events, we construct a second-order polynomial in two variables: the sum of the calibrated hit energies from prongs identified as electromagnetic activity and the sum of the energies of hits in the event not within those prongs.
The coefficients of this polynomial are fit to minimize the predicted neutrino energy residuals in selected simulated \ensuremath{\nu_e\,{\rm CC}}\xspace events.
Whether a prong is considered electromagnetic or not is determined by a deep learning single particle classifier that utilizes both information from the prong itself and the full event \cite{psihas-thesis}.
This results in an estimator with 11\% resolution for both appearance signal and beam background \ensuremath{\nu_e\,{\rm CC}}\xspace events in both detectors.
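The estimator fit can be sketched as an ordinary least-squares problem over the six monomials of a second-order polynomial in the two energy sums; this is a simplified stand-in for the tuning described above, with all names ours.

```python
import numpy as np

def fit_energy_estimator(e_em, e_had, e_true):
    """Least-squares fit of a second-order polynomial in the
    electromagnetic and hadronic energy sums to the true neutrino
    energy of simulated nu_e CC events (illustrative)."""
    x, y = np.asarray(e_em), np.asarray(e_had)
    design = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    coef, *_ = np.linalg.lstsq(design, np.asarray(e_true), rcond=None)
    return coef

def estimate_energy(coef, e_em, e_had):
    """Evaluate the fitted polynomial for one event."""
    return (coef[0] + coef[1] * e_em + coef[2] * e_had
            + coef[3] * e_em ** 2 + coef[4] * e_em * e_had
            + coef[5] * e_had ** 2)
```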
The expected appearance signal has a narrow peak at the \ensuremath{{\nu}_{\mu}}\xspace disappearance maximum, about \unit[1.6]{GeV}.
Additionally, in this analysis, NC and cosmogenic backgrounds concentrate at low reconstructed energies, and beam \ensuremath{{\nu}_{e}}\xspace backgrounds dominate at high energies.
Based on these considerations, simulation-based figure-of-merit calculations suggest limiting the neutrino energies we consider to \unit[1-4]{GeV} for the FD core sample and \unit[1-4.5]{GeV} for the peripheral sample.
The corresponding core or peripheral range is used for the ND sample when applying the data constraint detailed in Sec.~\ref{subsec:nue ND}.
Each of these is further subdivided into three ranges in the CVN classifier output so as to concentrate the sample of highest purity together.
The peripheral event sample is treated as a fourth bin.
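The resulting flattened binning can be sketched as follows; the number of energy bins per CVN range is illustrative, not the analysis value.

```python
def nue_analysis_bin(sample, cvn_bin, energy_bin, n_energy_bins=6):
    """Flatten the nu_e analysis binning: three CVN purity ranges, each
    subdivided in reconstructed energy, plus one inclusive peripheral
    bin.  The energy bin count here is illustrative."""
    if sample == "peripheral":
        return 3 * n_energy_bins  # single final bin
    return cvn_bin * n_energy_bins + energy_bin
```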
\subsubsection{\label{subsec:nue ND} Near Detector data constraints}
The procedure for using the ND data in the \ensuremath{{\nu}_{e}}\xspace analysis is similar to that used for \ensuremath{{\nu}_{\mu}}\xspace, extended to account for the particular natures of the signal and beam background components.
Appeared electron neutrinos arise from oscillated beam muon neutrinos, so the \ensuremath{{\nu}_{\mu}}\xspace-selected candidates in the ND are used to correct the expected \ensuremath{{\nu}_{e}}\xspace appearance signal with the same procedure detailed in Sec.~\ref{subsec:numu extrap}.
Additionally, the \ensuremath{{\nu}_{\mu}}\xspace-selected events are used to verify the \ensuremath{{\nu}_{e}}\xspace selection efficiency.
From the \ensuremath{{\nu}_{\mu}}\xspace data and simulated samples, we create two subsets where the reconstructed muon track is replaced by a simulated electron shower with the same energy and direction \cite{sachdev-thesis}.
The \ensuremath{{\nu}_{e}}\xspace selection criteria are applied to these electron-inserted samples, and the efficiencies for identifying neutrino events in data and simulation, relative to a loose preselection, are found to match within 2\%.
As there is no signal and cosmogenic activity is negligible at the ND, the \ensuremath{\nu_e\,{\rm CC}}\xspace candidates at the ND consist entirely of beam background events, originating from CC reactions of the intrinsic \ensuremath{{\nu}_{e}}\xspace component in the beam and misidentified NC and \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace events.
As in our last result \cite{nova_joint}, we use a combination of data-driven methods to ``decompose'' the \ensuremath{{\nu}_{e}}\xspace-selected data into these three categories and constrain them independently.
We examine low- and high-energy \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace samples at the ND in order to adjust the yields of the parent hadrons that decay into both \ensuremath{{\nu}_{e}}\xspace and \ensuremath{{\nu}_{\mu}}\xspace, which constrains the \ensuremath{{\nu}_{e}}\xspace beam background.
We also use the observed distributions of time-delayed electrons from stopping $\mu$ decay in each analysis bin to constrain the ratio of \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace and NC interactions.
The resulting decomposition of the selected \ensuremath{{\nu}_{e}}\xspace candidate sample at the ND therefore agrees with the data distribution by construction.
The nominal and constrained predictions are shown compared to the data distribution in Fig.~\ref{fig:nue_decomp}.
\begin{figure}[htb]
\includegraphics[width=\linewidth]{Nue2017_MDCMP_stack.pdf}
\caption{\label{fig:nue_decomp} The effect of the decomposition and constraint procedure on the predicted ND candidate \ensuremath{{\nu}_{e}}\xspace spectrum; the stacked histogram shows corrected backgrounds (from bottom, beam \ensuremath{{\nu}_{e}}\xspace, \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace, NC). The three panels show the results for each of the CVN classifier bins, ranging left to right from lower to higher purity. Predictions for each background class prior to correction are given by the dashed lines. The overall corrections to the normalizations of the yields by category are: beam \ensuremath{\nu_e\,{\rm CC}}\xspace, $+3.0\%$; NC, $+17.0\%$; and \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace, $+18.9\%$.}
\end{figure}
The corrections to the beam \ensuremath{{\nu}_{e}}\xspace, NC and \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace components are extrapolated to the FD core sample using the bin-by-bin ratios of the FD and ND reconstructed energy spectra, for each of the three CVN ranges.
The predicted beam backgrounds in the FD peripheral sample are corrected according to the results of the extrapolation for the highest CVN bin in the core sample (see Fig. \ref{fig:nue_decomp}).
The sum of the final beam-induced background prediction and the extrapolated signal for given oscillation parameters is added to the measured cosmic-induced backgrounds to compare to the observed FD data.
\section{\label{systs} Systematic uncertainties}
We evaluate the effect of potential systematic uncertainties on our results by reweighting or generating new simulated event samples for each source of uncertainty and repeating the entire measurement, including the signal and background constraint procedures, using each modified simulation sample.
The effect of each of these uncertainties on the predicted yields of selected \ensuremath{\nu_e\,{\rm CC}}\xspace candidate events is shown in Table~\ref{tab:nue_syst}.
We estimate the effects on the extracted oscillation parameters \sinsq{23}, \dmsq{32} and \ensuremath{\delta_{\rm CP}}\xspace in the joint fit to be as given in Table~\ref{tab:systs_param}.
These differ negligibly from the uncertainties obtained in a \ensuremath{{\nu}_{\mu}}\xspace-only fit.
\begin{table}[h]
\caption{\label{tab:nue_syst} Effect of $1\sigma$ variations of the
systematic uncertainties on the total \ensuremath{{\nu}_{e}}\xspace
signal and background predictions.
Simulated data were used and oscillated with $\dmsq{32}=2.445 \times 10 ^{-3} \ensuremath{{{\rm eV}^2}/c^4}\xspace$ (NH),
\sinsq{23}= 0.558, \ensuremath{\delta_{\rm CP}}\xspace = 1.21$\pi$. }
\begin{tabular}{m{0.47\linewidth}SS} \hline \hline
Source of uncertainty &
\multicolumn{1}{c}{\parbox[t]{0.20\linewidth}{\ensuremath{{\nu}_{e}}\xspace signal (\%)}} &
\multicolumn{1}{c}{\parbox[t]{0.27\linewidth}{Total beam \\background (\%)}} \\ \hline
Cross sections and FSI & 7.7 & 8.6 \\
Normalization & 3.5 & 3.4 \\
Calibration & 3.2 & 4.3 \\
Detector response & 0.67 & 2.8 \\
Neutrino flux & 0.63 & 0.43 \\
\ensuremath{{\nu}_{e}}\xspace extrapolation & 0.36 & 1.2 \\ \hline
\mbox{Total systematic} uncertainty & \multicolumn{1}{c}{9.2} & \multicolumn{1}{c}{11} \\
Statistical uncertainty & \multicolumn{1}{c}{15} & \multicolumn{1}{c}{22} \\
\hline
Total uncertainty & \multicolumn{1}{c}{18} & \multicolumn{1}{c}{25} \\
\hline \hline
\end{tabular}
\end{table}
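Assuming the per-source entries are combined in quadrature (as stated explicitly for Table~\ref{tab:systs_param}), the totals in Table~\ref{tab:nue_syst} can be reproduced up to the rounding of the individual entries:

```python
import math

# Per-source uncertainties (%) transcribed from the table; because the
# entries are rounded, the recomputed totals are only expected to match
# the quoted 9.2% / 11% / 18% / 25% at the rounding level.
signal_pct = [7.7, 3.5, 3.2, 0.67, 0.63, 0.36]    # nu_e signal column
background_pct = [8.6, 3.4, 4.3, 2.8, 0.43, 1.2]  # beam background column

def quad_sum(terms):
    """Combine independent uncertainties in quadrature."""
    return math.sqrt(sum(t * t for t in terms))

sig_syst = quad_sum(signal_pct)       # ~9.1, quoted as 9.2
bkg_syst = quad_sum(background_pct)   # ~10.7, quoted as 11

# Total = systematic (+) statistical, again in quadrature
sig_total = quad_sum([sig_syst, 15.0])  # ~17.5, quoted as 18
bkg_total = quad_sum([bkg_syst, 22.0])  # ~24.4, quoted as 25
print(round(sig_syst, 1), round(bkg_syst, 1),
      round(sig_total, 1), round(bkg_total, 1))
```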
\begin{table}[h]
\caption{\label{tab:systs_param} Sources of uncertainty
and their estimated average impact on the oscillation parameters
in the joint fit. This impact is quantified using the increase in
the one-dimensional 68\% C.L. interval, relative to the size of
the interval when only statistical uncertainty is included in the
fit. Simulated data were used and oscillated with the same parameters as in Table~\ref{tab:nue_syst}. The total systematic uncertainty is calculated by adding the individual components in quadrature.}
\resizebox{\linewidth}{!}{ \setlength{\tabcolsep}{1pt}
\begin{tabular}{
>{\raggedright}p{0.38\linewidth}
r
S[table-align-uncertainty=true,table-number-alignment = left]
r
S[table-format = 2.1, table-number-alignment = right]
c
S[table-format = 2.1, table-number-alignment = left]
c
}
\hline \hline
{Source of uncertainty}
&\multicolumn{2}{c}{\parbox[t]{0.20\linewidth}{Uncertainty \\in \sinsq{23} ($\times 10^{-3}$)}}
&\multicolumn{4}{c}{\parbox[t]{0.27\linewidth}{Uncertainty \\in \dmsq{32} \small($\times 10^{-6} \ensuremath{{{\rm eV}^2}/c^4}\xspace$)}}
&\multicolumn{1}{c}{\parbox[t]{0.20\linewidth}{Uncertainty \\ in \ensuremath{\delta_{\rm CP}}\xspace }}
\\ \hline
Calibration & $\pm$ & 9.1 & + & 27& /$-$ & 27 & $\pm$ 0.05$\pi$\\
Cross sections and FSI & $\pm$ & 7.8 & + & 14& /$-$ & 19 & $\pm$ 0.08$\pi$\\
Muon energy scale & $\pm$ & 3.2 & + & 8.5& /$-$ & 12 & $\pm$ 0.01$\pi$\\
Normalization & $\pm$ & 4.6 & + & 7.3& /$-$ & 12 & $\pm$ 0.05$\pi$\\
Detector response & $\pm$ & 1 & + & 6.2& /$-$ & 7.7 & $\pm$ 0.01$\pi$\\
Neutrino flux & $\pm$ & 1.5 & + & 4.0& /$-$ & 4.4 & $\pm$ 0.01$\pi$\\
\ensuremath{{\nu}_{e}}\xspace extrapolation & $\pm$ & 0.63 & + & 0.2& /$-$ & 0.7 & $\pm$ 0.01$\pi$\\ \hline
{Total systematic uncertainty} & $\pm$ & 14 & + & 33& /$-$ & 38 & $\pm$ 0.12$\pi$\\
Statistical \nohyphens{uncertainty} & $\pm$ & 80 & + & 75& /$-$ & 84 & $\pm$ 0.66$\pi$\\ \hline
Total uncertainty & $\pm$ & 82 & + & 82& /$-$ & 92 & $\pm$ 0.67$\pi$\\
\hline \hline
\end{tabular}
}
\end{table}
The largest effects on this analysis stem from uncertainty in our calibrations and energy scales, in the cross-section and final-state interaction (FSI) models in \genie/, and in the impact of imperfectly simulated event pileup from the neutrino beam on reconstruction and selection efficiencies at the ND.
\paragraph*{Calibration and energy scale} To evaluate the uncertainty from calibrations and energy scales, which can affect the two detectors differently, we group these uncertainties into absolute (fully positively correlated between detectors) and relative (anticorrelated or uncorrelated) components.
Both absolute and relative muon energy scale uncertainties are $<1\%$ based on a combination of thorough accounting of our detectors' material composition and an examination of the parameters in the Bethe formula for stopping power and the energy-loss model of \geant/.
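For reference, the stopping-power model examined here is the standard Bethe formula; the expression below follows the usual conventions and is quoted as background material, not as a NOvA-specific result:
\begin{equation*}
-\left\langle \frac{dE}{dx} \right\rangle = K z^{2} \frac{Z}{A} \frac{1}{\beta^{2}} \left[ \frac{1}{2} \ln \frac{2 m_{e} c^{2} \beta^{2} \gamma^{2} W_{\rm max}}{I^{2}} - \beta^{2} - \frac{\delta(\beta\gamma)}{2} \right],
\end{equation*}
where $K = 4\pi N_{A} r_{e}^{2} m_{e} c^{2}$, $W_{\rm max}$ is the maximum energy transfer in a single collision, $I$ is the mean excitation energy of the material, and $\delta(\beta\gamma)$ is the density-effect correction; the material-composition accounting above enters through $Z/A$ and $I$.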
The overall energy response uncertainty, on the other hand, is driven by uncertainty in our overall calorimetric energy calibration.
To investigate the response, we compare simulated and measured data distributions of numerous channels including the energy deposits of muons originating from cosmogenic- and beam-related activity, the energy spectra of electrons arising from the decay of stopped muons, the invariant mass spectrum of neutral pion decays into photons, and the proton energy scales in ND quasielastic-like events.
The uncertainty we use is guided by the channel exhibiting the largest differences, the proton energy scale, at 5\%.
We take this 5\% uncertainty as both an absolute energy uncertainty, correlated between the two detectors, and a separate 5\% relative uncertainty, since there are not sufficient quasielastic-like events to perform this check at the FD.
\begin{figure*}[!t]
\includegraphics[width=0.8\textwidth]{datapred_bestfit_4pads_testpads.pdf}
\caption{\label{fig:numu_spectr} Comparison of the reconstructed
energy spectra of selected \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace candidates in FD data (black dots) and
best-fit prediction (purple). The sample is split into four
reconstructed hadronic energy fraction quartiles labeled 1 through
4, where 1 (4) has the best (worst) energy resolution. The
majority of the total background (gray, upper) including the cosmogenic subcomponent
(blue, lower) lies in the fourth quartile.}
\end{figure*}
\paragraph*{Cross sections and FSI} Estimates for the majority of the cross section and FSI uncertainties that we consider are obtained using the event reweighting framework in \genie/ \cite{genie-manual}.
However, ongoing work in the neutrino cross-section community and comparisons with NOvA ND data suggest that some modifications are necessary.
First, we apply additional uncertainty to the energy- and momentum-transfer-dependence of CC quasielastic (CCQE) scattering due to long-range nuclear correlations \cite{Valencia-RPA-unc} according to the prescription in Ref.~\cite{Gran-RPA}.
Second, as the detailed nature of MEC interactions is not well understood, we construct uncertainties for the neutrino energy dependence, energy-transfer dependence, and final-state nucleon-nucleon pair composition based on a survey of available theoretical treatments \cite{Gran-Valencia-2p2h,Martini-2p2h,SuSA-2p2h}.
Third, the inflated value of the axial mass in quasielastic scattering ($M_A^{QE}$) obtained in recent neutrino-nucleus scattering experiments, relative to the light liquid bubble chamber measurements, is now believed to be due to nuclear effects of the kind treated explicitly above \cite{nu-xsec-review}.
We thus reduce \genie/'s uncertainty for $M_A^{QE}$ to $\pm 5\%$ (a conservative estimate of the bubble chamber range \cite{MAQE-BBBA,MAQE-Zexp}) from its default of ${}^{+25\%}_{-15\%}$, while retaining \genie/'s central value $M_{A}^{QE} = \unit[0.99]{GeV/c^{2}}$.
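For context, $M_A^{QE}$ enters the CCQE cross section through the axial form factor, conventionally parametrized by a dipole (the default form in \genie/):
\begin{equation*}
F_{A}(Q^{2}) = \frac{g_{A}}{\left[1 + Q^{2}/\left(M_{A}^{QE}\right)^{2}\right]^{2}},
\end{equation*}
so the $\pm 5\%$ variation above corresponds to a modest, $Q^{2}$-dependent change in the predicted CCQE rate.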
Fourth, we increase the uncertainty applied to nonresonant pion production with three or more pions and invariant hadronic mass of $W<\unit[3]{GeV}$ to 50\% to match the default for 1- and 2-pion cases, based on data-simulation disagreements observed in the ND data.
Fifth, and finally, we introduce two separate 2\% uncertainties on the ratio of \ensuremath{\nu_e\,{\rm CC}}\xspace and \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace cross sections: one to account for potential differences between them due to radiative corrections, and one to consider the possibility of second-class currents in CCQE events \cite{DayMcF-numu-nue-diff,t2k}.
To validate the uncertainties assigned by \genie/ to the NC backgrounds in our analyses, we performed a study within the \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace candidate sample in the ND that measured the rate of neutrons produced at the ends of tracks and subsequently captured, emitting photons.
This rate differs between the track-end particles in \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace reactions, which are mostly $\mu^{-}$, and those in NC reactions, which are mostly $\pi^{\pm}$.
This study suggested that the NC cross-section uncertainties provided by \genie/, combined with the calibration uncertainties mentioned previously, account for any differences between data and simulation.
Therefore we no longer include the \textit{ad hoc} 100\% additional uncertainty on NC backgrounds used in previous results \cite{nova_numu,nova_joint}.
\paragraph*{Normalization} We quantify the uncertainty arising from potential imperfections in the simulation of beam-induced pileup in the ND by overlaying a single extra simulated event onto samples of both simulated and data events.
We then examine the selection efficiency of this extra event and assign the 3.5\% difference between the data and simulation samples as a conservative uncertainty on the normalization of the ND rate. These are added in quadrature with much smaller uncertainties in the detector mass and the total beam exposure to yield an overall normalization systematic.
\paragraph*{Other} Other contributions to our systematic uncertainty budget are associated with the improved \ppfx/ flux prediction and potential differences between the acceptances of the ND \ensuremath{{\nu}_{\mu}}\xspace selection criteria and the FD \ensuremath{{\nu}_{e}}\xspace sample into which the ND corrections are extrapolated in the \ensuremath{{\nu}_{e}}\xspace analysis.
Also substantially reduced are the uncertainties in the light response model used for detector simulation.
Previous fits of the parameters in the Birks model for scintillator quenching with a second-order term \cite{chou-scint}, using proton tracks in candidate ND \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace quasielastic-like events in data, obtained values inconsistent with other measurements of Birks quenching in liquid scintillator \cite{KamLAND-scint,Borexino-scint}.
Previous results therefore computed an uncertainty by varying the model parameters to the other measurements' values.
With the addition of Cherenkov light in scintillator to our detector model, however, we find a best fit at the same values preferred by other experiments.
To quantify any residual uncertainty in the light model, in this analysis we take alternate predictions where we alter the scintillation and Cherenkov photon yields in the model within the tolerance of agreement with the ND data while holding the muon response fixed (since it is set by our calibration procedure).
\section{\label{sec:results} Results}
We performed a blind analysis in which the FD data were analyzed only after all aspects of the analysis had been specified.
An independent implementation of the methods described in Secs.~\ref{analysis}-\ref{systs} for incorporating the Near Detector data constraint and assessing the impact of systematic uncertainties, as well as extracting oscillation parameters via likelihood fitting, was used to check the analysis presented in this paper.
It produced results consistent with those shown in the following sections.
\subsection{\ensuremath{{\nu}_{\mu}}\xspace disappearance data}
After selection, 126 \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace candidates are observed in the FD.
In the absence of oscillations, we would have expected $720.3^{+67.4}_{-47.0} \text{ (syst.)}$ \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace candidates based on the extrapolation from the Near Detector, including an expected background of 5.8 misidentified cosmic rays and 3.4 misidentified neutrino events of other types.
\begin{figure}[htb]
\includegraphics[width=\linewidth]{datapred_bestfit_quant0.pdf}
\caption{\label{fig:numu_combined} Data from
Fig.~\ref{fig:numu_spectr} summed over the four quartiles.}
\end{figure}
Figure~\ref{fig:numu_spectr} shows the observed energy spectrum in each quartile and the corresponding best fit predictions. As noted earlier, most of the predicted background appears in the fourth (worst resolution) quartile. Figure~\ref{fig:numu_combined} shows the data of Fig.~\ref{fig:numu_spectr} summed over all of the quartiles.
The neutrino energy spectrum exhibits a sharp dip at about \unit[1.6]{GeV}. Essentially, \sinsqtwo{23} corresponds to the depth of the dip and \dmsq{32} corresponds to its location. Both of these measurements are sensitive to the energy resolution, so we expect the best measurement in the quartile with best energy resolution.
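The connection between the dip position and \dmsq{32} can be illustrated with the standard two-flavor survival probability $P \approx 1 - \sin^{2}2\theta_{23}\,\sin^{2}(1.27\,\Delta m^{2}_{32} L/E)$ ($\Delta m^{2}$ in $\mathrm{eV}^{2}$, $L$ in km, $E$ in GeV); the first oscillation maximum falls where the $\sin^{2}$ argument equals $\pi/2$. A minimal numerical check with the best-fit $|\dmsq{32}|$ and the 810 km baseline:

```python
# Locate the nu_mu survival-probability minimum in the two-flavor
# approximation P = 1 - sin^2(2 theta_23) sin^2(1.27 dm2 L / E),
# with dm2 in eV^2, L in km, E in GeV (standard conventions).
import math

dm2 = 2.44e-3   # best-fit |dm^2_32|, eV^2
L = 810.0       # NOvA baseline, km

# First oscillation maximum: 1.27 * dm2 * L / E = pi/2
E_dip = 1.27 * dm2 * L / (math.pi / 2)
print(round(E_dip, 2))  # ~1.6 GeV, matching the observed dip position
```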
\subsection{\ensuremath{{\nu}_{e}}\xspace appearance data}
After selection we observe 66 \ensuremath{\nu_e\,{\rm CC}}\xspace candidate events in the FD including an expected background of $20.3\pm 2.0 \text{ (syst.)}$ events. The composition of the expected background is estimated to be 7.3 beam \ensuremath{\nu_e\,{\rm CC}}\xspace events, 6.4 NC events, 1.3 \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace events, 0.4 \ensuremath{\nu_{\tau}\,{\rm CC}}\xspace events, and 4.9 cosmic rays.
Figure~\ref{fig:nue_spectr} shows the distribution of these events as a function of the reconstructed neutrino energy for the three CVN classifier bins and for the peripheral sample, along with the expected background contributions and the best fit predictions.
To give some context to the number of observed \ensuremath{{\nu}_{e}}\xspace events, Fig.~\ref{fig:nue_number} shows the number of events expected for the best fit values of \dmsq{32} and \sinsq{23} as a function of \ensuremath{\delta_{\rm CP}}\xspace, for the two possible mass hierarchies.
\begin{figure}[htb]
\includegraphics[width=\linewidth]{decomp_All_4bin_plot_data__1_.pdf}
\caption{\label{fig:nue_spectr} Comparison of the neutrino energy spectra of selected \ensuremath{\nu_e\,{\rm CC}}\xspace candidates in the FD data (black dots) with the best fit prediction (purple lines) in the three CVN classifier bins and the peripheral sample. The total expected background (gray, upper) and the cosmic component of it (blue, lower) are shown as shaded areas.
The events in the peripheral bin have energies between 1 and 4.5 GeV.}
\end{figure}
\begin{figure}[htb]
\includegraphics[width=\linewidth]{monoprob_2017_data.pdf}
\caption{\label{fig:nue_number}
Total number of \ensuremath{\nu_e\,{\rm CC}}\xspace candidate events observed in the FD (gray) compared to the prediction (color) as a function of \ensuremath{\delta_{\rm CP}}\xspace.
The colored lines correspond to the best fit values of \sinsq{23} and \dmsq{32} from
Table~\ref{tab:best_fits}, with
the upper two curves (blue) representing the two octants in the normal mass hierarchy ($\dmsq{32}>0$) and the lower curve (red) the inverted hierarchy ($\dmsq{32}<0$).
The colored bands correspond to $0.43 \leq \sinsq{23} \leq 0.60$. All other parameters are held fixed at the best-fit values.
}
\end{figure}
\subsection{Joint fit results}
We have performed a simultaneous fit to the binned data shown in Figs.~\ref{fig:numu_spectr} and \ref{fig:nue_spectr}.
Systematic uncertainties are incorporated into the fit as nuisance parameters with Gaussian penalty terms.
Where systematic uncertainties are common between the two data sets, the nuisance parameters associated with the effect are correlated appropriately.
In making these fits and in the contours and significance levels that follow, we used the following values for physics parameters measured by other experiments \cite{pdg}: $\dmsq{21} = (7.53\pm 0.18)\times 10^{-5}\ensuremath{{{\rm eV}^2}/c^4}\xspace$, $\sin^2\theta_{12} = 0.307\substack{+0.013 \\ -0.012}$, $\sin^2\theta_{13} = 0.0210 \pm 0.0011$.
We use a matter density computed for the average depth of the NuMI beam in the Earth's crust for the NOvA baseline of 810 km using the CRUST2.0 model \cite{ref:crust}, $\rho = \unit[2.84]{\mathrm{g/cm^3}}$.
\subsubsection{Best fits}
Table~\ref{tab:best_fits} gives the parameter values at the best fit point in each relevant mass hierarchy and $\theta_{23}$ octant combination. The top line shows the overall best fit, which occurs in the normal mass hierarchy and the upper $\theta_{23}$ octant; the middle line shows best fit in the lower $\theta_{23}$ octant for the normal mass hierarchy, which is only slightly less significant; and the bottom line shows the best fit in the inverted mass hierarchy, which is disfavored largely because it predicts fewer \ensuremath{{\nu}_{e}}\xspace appearance events than are observed. The column labeled $\Delta\chi^2$ represents the difference in $\chi^2$ between the fit and the overall best fit, where $\chi^2$ in this case is $-2{\text {ln}}\mathcal{L}$ with $\mathcal{L}$ being the likelihood function calculated using Poisson statistics plus Gaussian penalty terms for the systematic uncertainties. There are no best fit values in the inverted mass hierarchy and lower $\theta_{23}$ octant because the likelihood has no local maximum in this hierarchy-octant region, as will become clear in Fig.~\ref{fig:th23}. The $\chi^2$ for the overall best fit is 84.6 for 72 degrees of freedom.
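Written out, the test statistic just described takes the standard Poisson likelihood-ratio form (the notation here is ours: $n_{i}$ is the observed and $\mu_{i}$ the predicted count in bin $i$, and $\delta_{j}$ are the systematic nuisance parameters with prior uncertainties $\sigma_{j}$):
\begin{equation*}
\chi^{2} = -2\ln\mathcal{L} = 2\sum_{i}\left[\mu_{i} - n_{i} + n_{i}\ln\frac{n_{i}}{\mu_{i}}\right] + \sum_{j}\frac{\delta_{j}^{2}}{\sigma_{j}^{2}},
\end{equation*}
where terms independent of the parameters have been dropped and bins with $n_{i}=0$ contribute $2\mu_{i}$.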
The precision measurements of \sinsq{23} and \dmsq{32} come from the \ensuremath{{\nu}_{\mu}}\xspace disappearance data. A fit to these data alone gives essentially the same values for these parameters in the normal mass hierarchy. However, the best joint \ensuremath{{\nu}_{\mu}}\xspace{}-\ensuremath{{\nu}_{e}}\xspace{} fit pulls the value of $|\dmsq{32}|$ up by $0.04 \times10 ^{-3} \ensuremath{{{\rm eV}^2}/c^4}\xspace$ from the \ensuremath{{\nu}_{\mu}}\xspace disappearance only fit in the inverted mass hierarchy.
\begin{table}[h]
\caption{\label{tab:best_fits} Best fit values. See text for further explanation.}
\resizebox{\linewidth}{!}{
\begin{tabular}{lcccc}
\hline \hline
Hierarchy/Octant
& {\ensuremath{\delta_{\rm CP}}\xspace ($\pi$)}
& {\sinsq{23}}
& \parbox[t]{0.22\linewidth}{\dmsq{32} ($10^{-3} \ensuremath{{{\rm eV}^2}/c^4}\xspace)$ } &
{$\Delta\chi^2$}
\\ \hline
Normal/Upper & 1.21 & 0.56 & \phantom{$-$}2.44 & 0.00\\
Normal/Lower & 1.46 & 0.47 & \phantom{$-$}2.45 & 0.13\\
Inverted/Upper & 1.46 & 0.56
& $-$2.51 & 2.54 \\
\hline \hline
\end{tabular}
}
\end{table}
\begin{figure}[!htb]
\includegraphics[width=\linewidth,trim=0 45 0 0,clip]{contour_dmsq_NH.pdf}
\vspace{-.2em}
\includegraphics[width=\linewidth,trim=0 0 0 25,clip]{contour_dmsq_IH.pdf}
\caption{\label{fig:dmsqcontour}
Regions of \dmsq{32} vs.~\sinsq{23} parameter space consistent
with the \ensuremath{{\nu}_{e}}\xspace appearance and the \ensuremath{{\nu}_{\mu}}\xspace
disappearance data at various levels of significance. The top panel corresponds to normal
mass hierarchy and the bottom panel to inverted hierarchy.
The color intensity indicates the confidence level at which
particular parameter combinations are allowed.}
\end{figure}
\begin{figure}[htb]
\includegraphics[width=\linewidth]{contours_JointFitFC_withFriends.pdf}
\caption{\label{fig:other_expt} Comparison of measured 90\% confidence level contours for \dmsq{32} vs.~\sinsq{23} for this result (black line; best-fit value, black point), T2K \cite{t2k} (green dashed), MINOS \cite{minos} (red dashed), IceCube \cite{icecube} (blue dotted), and Super-Kamiokande \cite{superk} (purple dash-dotted).}
\end{figure}
\begin{figure}[!htb]
\includegraphics[width=\linewidth,trim=0 45 0 0,clip]{contour_delta_NH.pdf}
\vspace{-.2em}
\includegraphics[width=\linewidth,trim=0 0 0 25,clip]{contour_delta_IH.pdf}
\caption{\label{fig:deltacontour}
Regions of \sinsq{23} vs.~\ensuremath{\delta_{\rm CP}}\xspace parameter space consistent
with the \ensuremath{{\nu}_{e}}\xspace appearance and the \ensuremath{{\nu}_{\mu}}\xspace
disappearance data. The top panel corresponds to normal
mass hierarchy ($\dmsq{32}>0$) and the bottom panel to inverted hierarchy
($\dmsq{32}<0$). The color intensity indicates the confidence level at which
particular parameter combinations are allowed.}
\end{figure}
\begin{figure}[htb]
\includegraphics[width=\linewidth]{slice_dmsq.pdf}
\caption{\label{fig:dmsq}
Significance at which each value of $|\dmsq{32}|$ is disfavored
in the normal (blue, lower) or inverted (red, upper) mass hierarchy.}
\end{figure}
\begin{figure}[htb]
\includegraphics[width=\linewidth]{slice_th23.pdf}
\caption{\label{fig:th23}
Significance at which each value of \sinsq{23} is disfavored
in the normal (blue, lower) or inverted (red, upper) mass hierarchy.
The vertical dotted line indicates the point of maximal mixing.}
\end{figure}
\subsubsection{Two dimensional contours and significance levels of single parameters}
All of the contours and significance levels that follow are constructed following the unified approach of Feldman and Cousins \cite{ref:fc}, profiling over unspecified physics parameters and systematic uncertainties.
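The essence of the Feldman–Cousins construction can be illustrated with a one-parameter toy (a Gaussian measurement with a physical boundary; a pedagogical sketch, not the NOvA implementation): at each parameter point the critical $\Delta\chi^2$ is read off the empirical distribution of $\Delta\chi^2$ in pseudoexperiments generated at that point.

```python
# Toy Feldman-Cousins: empirical critical value of Delta-chi^2 for a
# Gaussian measurement x ~ N(mu, 1) with physical boundary mu >= 0.
# Far from the boundary the 68.27% critical value approaches 1.0.
import random, math

random.seed(42)

def delta_chi2(x, mu):
    """Delta-chi^2 = chi^2(mu) - chi^2 at the best physical mu."""
    mu_best = max(x, 0.0)  # maximum-likelihood mu within mu >= 0
    return (x - mu) ** 2 - (x - mu_best) ** 2

def fc_critical(mu, cl=0.6827, n_toys=20000):
    """Critical Delta-chi^2 at parameter point mu from pseudoexperiments."""
    d = sorted(delta_chi2(random.gauss(mu, 1.0), mu) for _ in range(n_toys))
    return d[int(cl * n_toys)]

print(round(fc_critical(5.0), 2))  # ~1.0: boundary irrelevant at mu = 5
print(round(fc_critical(0.0), 2))  # well below 1.0: the boundary absorbs
                                   # downward fluctuations
```

The smaller critical value at the boundary is what distinguishes the unified approach from fixed $\Delta\chi^2$ thresholds; the NOvA analysis applies the same logic while also profiling over the remaining physics parameters and systematic uncertainties.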
Figure~\ref{fig:dmsqcontour} shows the 1, 2, and 3 $\sigma$ two-dimensional contours for \dmsq{32} and \sinsq{23}, separately for each mass hierarchy.
Figure~\ref{fig:other_expt} shows a comparison of 90\% confidence level contours for these parameters in the normal mass hierarchy for NOvA, T2K \cite{t2k}, MINOS \cite{minos}, IceCube \cite{icecube}, and \mbox{Super-Kamiokande~\cite{superk}.}
All of the experiments have results consistent with maximal mixing. Note that the range 0.4 to 0.6 in \sinsq{23} corresponds to the range 0.96 to 1.00 in \sinsqtwo{23}, which is the variable directly measured in \ensuremath{\nu_\mu \rightarrow \nu_\mu}\xspace oscillations.
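The mapping quoted above follows from $\sin^{2}2\theta_{23} = 4\sin^{2}\theta_{23}\,(1-\sin^{2}\theta_{23})$; a one-line check:

```python
# sin^2(2t) in terms of s2 = sin^2(t): sin^2(2t) = 4 s2 (1 - s2)
def sinsq2theta(s2):
    return 4.0 * s2 * (1.0 - s2)

# The sin^2(theta_23) range 0.4-0.6 maps onto a narrow sin^2(2 theta_23) range
print(round(sinsq2theta(0.4), 2), round(sinsq2theta(0.5), 2),
      round(sinsq2theta(0.6), 2))  # 0.96 1.0 0.96
```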
Figure~\ref{fig:deltacontour} shows the analogous contours to those of Fig.~\ref{fig:dmsqcontour} in \sinsq{23} and \ensuremath{\delta_{\rm CP}}\xspace.
Figures~\ref{fig:dmsq}, \ref{fig:th23}, and \ref{fig:delta} show the significance with which values of $|\dmsq{32}|$, \sinsq{23}, and \ensuremath{\delta_{\rm CP}}\xspace are disfavored in the two mass hierarchies, respectively. The results in Fig.~\ref{fig:th23} differ from the ones previously reported \cite{nova_numu} in that the disfavoring of maximal mixing ($\theta_{23} = \pi/4$) has changed from 2.6 standard deviations ($\sigma$) to \unit[0.8]{$\sigma$} in the present results.
The reason for this change is that the additional \unit[$2.80\times10^{20}$]{POT} of data included here favored maximal disappearance. In addition, all of the improvements to the simulations and analysis moved the best-fit $\theta_{23}$ closer to maximal mixing.
In Fig.~\ref{fig:delta} two curves are shown in the normal mass hierarchy, one for each of the $\theta_{23}$ octants, corresponding to the near degeneracy shown in Fig.~\ref{fig:th23}. Only one curve is shown for the inverted mass hierarchy since there is only one minimum, which occurs in the upper octant. The point of minimum significance in the inverted mass hierarchy differs among the three figures because, although the $\Delta\chi^2$'s are identical (see
Table~\ref{tab:best_fits}), the translation of $\Delta\chi^2$ to significance depends on which oscillation parameters are profiled.
\begin{figure}[!tb]
\includegraphics[width=\linewidth]{slice_delta_3curves.pdf}
\caption{\label{fig:delta}
Significance at which each value of \ensuremath{\delta_{\rm CP}}\xspace is disfavored
in the normal (blue, lower) or inverted (red, upper) mass hierarchy. The normal mass hierarchy is divided into upper (solid) and lower (dashed) $\theta_{23}$ octants corresponding to the near degeneracy in \sinsq{23}.}
\end{figure}
Table~\ref{tab:1sigma limits} shows the \unit[1]{$\sigma$} confidence intervals for \dmsq{32}, \sinsq{23}, and \ensuremath{\delta_{\rm CP}}\xspace in the normal mass hierarchy, corresponding to Figs.~\ref{fig:dmsq}-\ref{fig:delta}. There are no \unit[1]{$\sigma$} confidence intervals in the inverted mass hierarchy.
\begin{table}[htb]
\caption{\label{tab:1sigma limits} 1 $\sigma$ confidence intervals for physics parameters in the normal mass hierarchy.}
\resizebox{\linewidth}{!}{
\begin{tabular}{p{0.35\linewidth}c}
\hline \hline
Parameter (units)
& \parbox[t]{0.50\linewidth}{1 $\sigma$ interval(s)}
\\ \hline \\[-14pt]
\dmsq{32} ($10 ^{-3} \ensuremath{{{\rm eV}^2}/c^4}\xspace$) & [2.37,2.52] \\[1pt]
\sinsq{23} & [0.43, 0.51] and [0.52, 0.60] \\[1pt]
\ensuremath{\delta_{\rm CP}}\xspace ($\pi$) & [0, 0.12] and [0.91, 2] \\[1pt]
\hline \hline
\end{tabular}
}
\end{table}
Finally, we have calculated the significance level for the rejection of the inverted hierarchy using the same procedure as in the above contours and confidence intervals, namely by profiling over all the other physics parameters and the systematic uncertainties. Frequentist coverage was checked following the suggestion of Berger and Boos \cite{ref:BergerBoos} with $\beta = 0.001$. The entire inverted mass hierarchy region is disfavored at the 95\% confidence level.
\begin{acknowledgments}
This work was supported by the US Department of Energy; the US National Science Foundation; the Department of Science and Technology, India; the European Research Council; the MSMT CR, GA UK, Czech Republic; the RAS, RFBR, RMES, RSF and BASIS Foundation, Russia; CNPq and FAPEG, Brazil; and the State and University of Minnesota. We are grateful for the contributions of the staffs at the University of Minnesota module assembly facility and Ash River Laboratory, Argonne National Laboratory, and Fermilab. Fermilab is operated by Fermi Research Alliance, LLC under Contract No.~De-AC02-07CH11359 with the US DOE.
\end{acknowledgments}
\FloatBarrier
\section{\label{sec:intro} Introduction}
Joint fits of \ensuremath{\nu_\mu \rightarrow \nu_\mu}\xspace disappearance and \ensuremath{\nu_\mu \rightarrow \nu_e}\xspace appearance oscillations in long-baseline neutrino oscillation experiments can provide information on four of the standard neutrino model parameters, $|\dmsq{32}|$, $\theta_{23}$, \ensuremath{\delta_{\rm CP}}\xspace, and the mass hierarchy, when augmented by measurements of the other three parameters, \dmsq{21}, $\theta_{12}$, and $\theta_{13}$, from other experiments \cite{pdg}. Of the four parameters, the first pair are currently most sensitively measured by \ensuremath{\nu_\mu \rightarrow \nu_\mu}\xspace oscillations and the second pair are most sensitively measured by \ensuremath{\nu_\mu \rightarrow \nu_e}\xspace oscillations. However, the precision with which \ensuremath{\nu_\mu \rightarrow \nu_e}\xspace oscillations can measure the second pair of parameters depends on the precision of the measurement of $\theta_{23}$ since that oscillation probability is largely proportional to \sinsqtwo{13}\sinsq{23}.
The quantity $\tan^2\theta_{23}$ gives the ratio of the coupling of the third neutrino mass state to \ensuremath{{\nu}_{\mu}}\xspace and \ensuremath{{\nu}_{\tau}}\xspace. Whether $\theta_{23} < \pi/4$ (lower octant), $\theta_{23} > \pi/4$ (upper octant), or $\theta_{23} = \pi/4$ (maximal mixing) is important for models and symmetries of neutrino mixing~\cite{ref:Altarelli-Feruglio}.
The determination of the neutrino mass hierarchy is important both for grand unified models~\cite{ref:Altarelli-Feruglio2} and for the interpretation of neutrinoless double beta decay experiments~\cite{ref:Pascoli-Petcov}. In long-baseline neutrino experiments, it is measured by observing the effect of coherent forward neutrino scattering from electrons in the earth, which enhances \ensuremath{\nu_\mu \rightarrow \nu_e}\xspace oscillations for the normal mass hierarchy (NH), $\dmsq{32}>0$, and suppresses them for the inverted mass hierarchy (IH), $\dmsq{32}<0$. For the baselines of current experiments and for fixed baseline length to energy ratio, the magnitude of this effect is approximately proportional to the length of the baseline.
The amount of CP violation in the lepton sector is proportional to $|\sin\ensuremath{\delta_{\rm CP}}\xspace|$. For \ensuremath{\delta_{\rm CP}}\xspace in the range 0 to $2\pi$, \ensuremath{\nu_\mu \rightarrow \nu_e}\xspace oscillations are enhanced for $\ensuremath{\delta_{\rm CP}}\xspace > \pi$ and suppressed for $\ensuremath{\delta_{\rm CP}}\xspace < \pi$, with maximal enhancement at $\ensuremath{\delta_{\rm CP}}\xspace = 3\pi/2$ and maximal suppression at $\ensuremath{\delta_{\rm CP}}\xspace = \pi/2$.
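These enhancement and suppression patterns, together with the matter effect of the previous paragraph, can be read off a standard approximate appearance probability (one common leading-order expansion from the long-baseline literature; the fits in this paper use exact numerical probabilities):
\begin{align*}
P(\ensuremath{\nu_\mu \rightarrow \nu_e}\xspace) \approx{} & \sin^{2}\theta_{23}\,\sin^{2}2\theta_{13}\,\frac{\sin^{2}[(1-A)\Delta]}{(1-A)^{2}} \\
& + \alpha^{2}\cos^{2}\theta_{23}\,\sin^{2}2\theta_{12}\,\frac{\sin^{2}(A\Delta)}{A^{2}} \\
& + \alpha\,\sin 2\theta_{12}\,\sin 2\theta_{13}\,\sin 2\theta_{23}\,\frac{\sin(A\Delta)}{A}\,\frac{\sin[(1-A)\Delta]}{1-A}\,\cos(\Delta + \ensuremath{\delta_{\rm CP}}\xspace),
\end{align*}
with $\Delta = \Delta m^{2}_{31} L/4E$, $\alpha = \Delta m^{2}_{21}/\Delta m^{2}_{31}$, and $A = 2\sqrt{2}\,G_{F} N_{e} E/\Delta m^{2}_{31}$. The matter term $A$ changes sign with the mass hierarchy, producing the enhancement or suppression described above, while the last term carries the \ensuremath{\delta_{\rm CP}}\xspace dependence.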
In addition to the NOvA results \cite{nova_joint}, previous joint fits of \ensuremath{\nu_\mu \rightarrow \nu_\mu}\xspace and \ensuremath{\nu_\mu \rightarrow \nu_e}\xspace oscillations in long-baseline experiments have been reported by the MINOS \cite{minos} and T2K \cite{t2k} experiments.
The data reported here correspond to the equivalent of $8.85\times10^{20}$ protons on target (POT) in the full NOvA\xspace Far Detector with a beam line set to focus positively charged mesons, which greatly enhances the neutrino to antineutrino ratio. This represents a 46\% increase in neutrino flux since our last publication \cite{nova_joint}.
These data were taken between February 6, 2014 and February 20, 2017.
Significant improvements have been made to both the simulations and data analysis.
The key updates to the simulations include a new data-driven neutrino flux model, an improved treatment of multi-nucleon interactions, and an improved light model including Cherenkov radiation in the scintillator.
The main improvements in the \ensuremath{{\nu}_{\mu}}\xspace disappearance data analysis are the use of a deep-learning event classifier and the separation of selected events into different samples based on their energy resolution.
The main improvement for the \ensuremath{{\nu}_{e}}\xspace appearance data analysis is the addition of a signal-rich sample that expands the active volume considered.
\section{\label{NOvA} The NO\lowercase{v}A experiment}
NOvA \cite{tdr} is a two-detector, long-baseline neutrino oscillation experiment that samples the Fermilab NuMI neutrino beam \cite{numi} approximately \unit[1]{km} from the source using a Near Detector (ND) and observes the oscillated beam \unit[810]{km} downstream with a Far Detector (FD) near Ash River, MN.
The detectors are functionally identical, scintillating tracker-calorimeters consisting of layered reflective polyvinyl chloride cells filled with a liquid scintillator composed primarily of mineral oil with a 5\% pseudocumene admixture.
These cells are organized into planes alternating in vertical and horizontal orientation.
The net composition of the detectors is 63\% active material by mass.
Light produced within a cell is collected using a loop of wavelength-shifting optical fiber, which is connected to an avalanche photodiode (APD).
The FD cells are \unit[$3.9 \times 6.6$]{$\text{cm}$} in cross section, with the \unit[6.6]{$\text{cm}$} dimension along the beam direction, and \unit[15.5]{m} long \cite{ref:PVC}.
The FD contains 896 planes, leading to a total mass of \unit[14]{kt}.
The majority of ND cells are identical to those of the FD apart from being shorter (\unit[3.9]{m} long instead of \unit[15.5]{m}).
To improve muon containment, the downstream end of the ND is a ``muon catcher'': a stack in which pairs of one vertically oriented and one horizontally oriented scintillator plane are interleaved with \unit[10]{cm}-thick steel planes.
There are 11 such scintillator-plane pairs separated by 10 steel planes. The vertical planes in this section are \unit[2.6]{m} high.
The ND consists of 214 planes for a total mass of \unit[290]{ton}.
The FD sits \unit[14.6]{mrad} away from the central axis of the NuMI beam.
This off-axis location results in a neutrino flux with a narrow-band energy spectrum centered around \unit[1.9]{GeV} in the FD.
Such a spectrum emphasizes $\ensuremath{{\nu}_{\mu}}\xspace \rightarrow \ensuremath{{\nu}_{e}}\xspace$ oscillations at this baseline and reduces backgrounds from higher energy neutral current events.
The ND sees a line source and so it receives a much larger spread in off-axis angles than the FD does. The ND is positioned at the same average off-axis angle as the FD to maximize the similarity between the neutrino energy spectrum at its location and that expected at the FD in the absence of oscillations.
The beam is pulsed at an average rate of \unit[0.75]{Hz}.
All of the APD signals above threshold from a large time window around each \unit[10]{$\mu$s} beam spill are retained.
Because the FD is located on the Earth's surface, it is exposed to a substantial cosmic ray flux, which is only partially mitigated by its overburden of \unit[1.2]{m} of concrete plus \unit[15]{cm} of barite.
Therefore, we also use cosmic data taken from the \unit[420]{$\mu$s} window surrounding the beam spill within beam triggers to obtain a direct measure of the cosmic background in the FD.
Separate periodic minimum-bias triggers of the same length as the beam trigger allow us to collect high-statistics cosmic data for algorithm training and calibration purposes.
As the ND is \unit[100]{m} underground, the cosmic ray flux there is negligible.
\begin{figure}[tb]
\includegraphics[width=\linewidth]
{nucomponents_nd_fhc_nosim.pdf}
\caption{\label{fig:flux}
Predicted
composition of the NuMI beam at the ND with the horns focusing
positively charged hadrons. Curves from top to bottom: \ensuremath{{\nu}_{\mu}}\xspace, $\bar{\nu}_{\mu}$, \ensuremath{{\nu}_{e}}\xspace, $\bar{\nu}_{e}$. Table~\ref{tab:beam_comp} gives the fractional composition for each neutrino flavor integrated from \unit[1-5]{GeV}.
}
\end{figure}
\section{\label{sim} Simulations}
To assist in calibrating our detectors, determining our analysis criteria, and extracting physics parameters, we rely on predictions generated by a comprehensive simulation suite, which proceeds in stages.
We begin by using \geant/ \cite{geant4} and a detailed model of the beamline geometry to simulate the production of hadrons arising from the collision of the \unit[120]{GeV} primary proton beam with the graphite target \cite{graphite-target}, as well as their subsequent focusing and decay into neutrinos.
The resultant neutrino flux is corrected according to constraints on the hadron spectrum from thin-target hadroproduction data using the \ppfx/ tools developed for the NuMI beam by the MINERvA collaboration \cite{minerva-flux}. The correction applied to the underlying model used in the simulation (FTFP BERT) is on the order of 7-10\% for both the \ensuremath{{\nu}_{\mu}}\xspace and \ensuremath{{\nu}_{e}}\xspace flux predictions. The uncertainties are on the order of 8\% at the peak.
Table~\ref{tab:beam_comp} shows simulated predictions of the beam composition at the Near and Far Detectors in the absence of oscillations; the ND predicted spectra from \unit[0-20]{GeV} are shown in Fig.~\ref{fig:flux}.
\begin{table}[h]
\caption{\label{tab:beam_comp} Predicted beam flux composition in the 1 to 5 GeV neutrino energy region in the absence of oscillations.}
\begin{tabular}{m{0.35\linewidth}cc} \hline \hline
Component & \parbox[t]{0.30\linewidth}{ND (\%)} & \parbox[t]{0.3\linewidth}{FD (\%)} \\ \hline
\ensuremath{{\nu}_{\mu}}\xspace & 93.8 & 94.1 \\
\ensuremath{\bar{\nu}_{\mu}}\xspace & 5.3 & 4.9 \\
\ensuremath{{\nu}_{e}}\xspace and \ensuremath{\bar{\nu}_{e}}\xspace & 0.9 & 1.0 \\
\hline \hline
\end{tabular}
\end{table}
The predicted flux is then used as input to \genie/ \cite{genie-primary,genie-manual}, which simulates neutrino reactions in the variety of materials of which our detectors and their surroundings are composed.
We alter its default interaction model as described below.
Finally, we use a detailed model of our detectors with a combination of \geant/ and custom software to simulate the detectors' photon response to particles outgoing from individual predicted neutrino reactions, including both scintillation and Cherenkov radiation in the active detector materials, as well as the light transport, collection, and digitization processes.
The overall energy scales of both detectors are calibrated using the minimum-ionizing portions of stopping cosmic ray muon tracks.
As in our previous results {\cite{nova_numu,nova_joint,nova_sterile}}, we augment \genie/'s default configuration by enabling its semi-empirical model for Meson Exchange Current (MEC) interactions \cite{katori-empirical-MEC} to account for the likely presence of interactions of neutrinos with nucleon-nucleon pairs in both charged- and neutral-current reactions.
However, in this analysis we no longer reweight the momentum transfer distributions produced by this model, preferring instead to allow fits to the FD data to profile \cite{ref:profile} over the substantially improved systematic uncertainty treatment for this component of the model, as described in Sec.~\ref{systs}. In our central-value prediction we simply increase the rate of MEC interactions by 20\% as suggested by fits to the sample of ND \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace candidate events in our ND data.
In addition, we now reweight the output of the default model for quasielastic production to treat the expected effect of long-range nuclear charge screening according to the Random Phase Approximation (RPA) calculations of J. Nieves and collaborators \cite{Valencia-RPA,Gran-RPA}.
Lastly, we continue to reduce the rate of \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace nonresonant single pion production with invariant hadronic mass $W < \unit[1.7]{GeV}$ to 41\% of \genie/'s nominal value \cite{nonres-1pi}.
\section{\label{analysis} Data analysis}
In order to infer the oscillation parameters from our data, we compare the spectra observed at the FD with our predictions under various oscillation hypotheses.
This process consists of three steps.
First, we develop selections to retain \ensuremath{{\nu}_{e}}\xspace and \ensuremath{{\nu}_{\mu}}\xspace charged-current (CC) events and to reject neutral-current (NC) events and cosmogenic activity.
Second, we apply the relevant subset of these selections (excluding, e.g., cosmic rejection criteria) to samples observed at the ND, where both \ensuremath{{\nu}_{\mu}}\xspace disappearance and \ensuremath{{\nu}_{e}}\xspace appearance are negligible, to constrain our prediction for the selected sample composition.
Finally, we combine the constrained prediction from the previous step with the predicted ratio of the FD and ND spectra, which accounts for geometric differences between the detectors, the beam dispersion, and the effect of oscillations.
The result is used in fits to the neutrino energy spectra of the candidates observed at the FD.
The following sections discuss how this procedure unfolds for each of the two analyses separately.
\subsection{\label{numu} \ensuremath{{\nu}_{\mu}}\xspace{} disappearance}
\subsubsection{Event selection}
Isolation of samples of candidate events begins with cells whose APD responses are above threshold, known as hits; those neighboring each other in space and time are clustered to produce candidate neutrino events \cite{baird-thesis,slicing}.
The hits in event candidates that survive basic quality cuts on timing (relative to the \unit[10]{$\mu$s} beam spill), containment, and contiguity are passed to a deep-learning classifier known as the Convolutional Visual Network (CVN) \cite{cvnpaper}.
CVN applies a series of linear operations, trained over simulated beam and cosmic data event samples, which extract complex, abstract visual features from each event, in a scheme based on techniques from computer vision \cite{szegedy2014googlenet,hinton1986}.
The final step of the classifier is a multilayer perceptron \cite{ref:mlp1,ref:mlp2} that maps the learned features onto a set of normalized classification scores, which range over beam neutrino event hypotheses (\ensuremath{\nu_e\,{\rm CC}}\xspace, \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace, \ensuremath{\nu_{\tau}\,{\rm CC}}\xspace, and NC) and a cosmogenic hypothesis.
We retain events whose CVN score for the \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace hypothesis exceeds a tuned threshold.
To identify the muon in such events, tracks produced by a Kalman filter algorithm \cite{ref:Kalman,raddatz-thesis,ospanov-thesis} are scored by a $k$-nearest neighbor classifier \cite{ref:kNN} over the following variables: likelihoods in $dE/dx$ and scattering constructed from single-particle hypotheses, total track length, and the fraction of planes along the track consistent with having minimum-ionizing-like $dE/dx$.
The most muon-like of these tracks is taken to be the muon candidate.
Events that have no sufficiently muon-like track are rejected.
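The track-scoring step can be sketched as a k-nearest-neighbor vote over per-track features (a toy illustration with invented feature values and separations; the actual classifier variables, training samples, and choice of $k$ differ):

```python
import numpy as np

# Toy sketch: score tracks by the muon fraction among the k nearest training
# tracks in a feature space of (dE/dx likelihood, scattering likelihood,
# track length, MIP-plane fraction). All values here are invented.
rng = np.random.default_rng(2)
n = 400
muons     = rng.normal([0.8, 0.7, 8.0, 0.9], 0.1, size=(n, 4))
non_muons = rng.normal([0.3, 0.3, 2.0, 0.4], 0.1, size=(n, 4))
train_x = np.vstack([muons, non_muons])
train_y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = muon label

def knn_score(track, k=15):
    """Fraction of the k nearest training tracks that are labeled muons."""
    d = np.linalg.norm(train_x - np.asarray(track), axis=1)
    nearest = np.argsort(d)[:k]
    return train_y[nearest].mean()

# The most muon-like track in an event would be the one with the highest score.
scores = [knn_score(t) for t in ([0.8, 0.7, 8.0, 0.9], [0.3, 0.3, 2.0, 0.4])]
```

In practice the features would be normalized to comparable scales before computing distances; the toy clusters here are separated enough that this step is omitted.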
We also discard events where any clusters of activity extend to the edges of the detector or where any track besides the muon candidate penetrates into the muon catcher in the ND.
To avoid being considered as cosmogenic, FD events must furthermore be deemed sufficiently signal-like by a boosted decision tree (BDT) \cite{ref:BDT} trained over simulation and cosmic data that considers the positions, directions, and lengths of tracks, as well as the fraction of the event's total hit count associated with the track and the CVN score for the cosmic hypothesis.
According to our simulation, the FD selection efficiency for our basic quality and containment cuts, relative to all true \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace events within a fiducial volume, is 41.3\%; the efficiency of the CVN and PID constraints applied to the quality-and-containment sample is 78.1\%. The final selected sample is 92.7\% \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace.
The predicted composition of the sample at various stages in the selection is given in Table~\ref{tab:numu_cutflow}.
\begin{table*}[ht]
\caption{\label{tab:numu_cutflow} Predicted composition of the \ensuremath{{\nu}_{\mu}}\xspace CC candidate sample in the FD, in event counts, at various stages in the selection process. Oscillation parameters used in the prediction are the best fit values from Sec.~\ref{sec:results}.}
\begin{tabular}{m{0.15\linewidth}S[table-format = 3.1]S[table-format = 3.1]SS[table-format = 3.1]Sc} \hline \hline
Selection &
\parbox[t]{0.13\linewidth}{\ensuremath{\nu_\mu \rightarrow \nu_\mu}\xspace CC} &
\parbox[t]{0.13\linewidth}{NC} &
\parbox[t]{0.13\linewidth}{ \ensuremath{\nu_e\,{\rm CC}}\xspace } &
\parbox[t]{0.13\linewidth} {\ensuremath{\nu_{\tau}\,{\rm CC}}\xspace } &
\parbox[t]{0.13\linewidth}{\ensuremath{\nu_e \rightarrow \nu_\mu}\xspace CC } &
\parbox[t]{0.13\linewidth} {Cosmic} \\ \hline \noalign{\vskip 2pt}
No selection & 963.7 & 612.1 & 126.6 & 9.6 & 0.6 & $4.91 \times 10^{7} $ \\
Containment & 160.8 & 219.9 & 61.5 & 2.4 & 0.3 & $1.95 \times 10^{4} $ \\
CVN & 132.1 & 3.0 & 0.3 & 0.4 & 0.2 & 26.4 \\
Cosmic BDT & 126.1 & 2.5 & 0.3 & 0.4 & 0.2 & 5.8 \\
\hline \hline
\end{tabular}
\end{table*}
\subsubsection{Energy estimation and analysis binning}
We reconstruct each event's neutrino energy $E_{\nu}$ using a function of the muon candidate and hadronic remnant energies, which are estimated separately.
The muon candidate energy $E_{\mu}$ is determined from the range of the track, calibrated to true muon energy in our simulation.
We estimate the energy of the hadronic component with a mapping of observed non-muon energy to true non-muon energy also calibrated with the simulation \cite{lein-thesis}.
The resulting neutrino energy resolution over the whole sample is 9.1\% at the FD (11.8\% at the ND due to the lower active fraction of the muon catcher) for \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace events.
The precision with which we can measure \sinsqtwo{23} and \dmsq{32} depends on the \ensuremath{{\nu}_{\mu}}\xspace energy resolution, particularly for events near the disappearance maximum, about \unit[1.6]{GeV} at the NOvA baseline. Accordingly, we optimize the binning in two ways to make the best use of our energy resolution. First, we employ a variable neutrino energy binning with finer bins near the disappearance maximum and coarser bins elsewhere. Second,
we further divide the event populations in each energy bin into four populations in reconstructed hadronic energy fraction, $(E_{\nu}-E_{\mu})/E_{\nu}$, which correspond to regions of different neutrino energy resolution \cite{vinton-thesis}.
These divisions are chosen such that the FD populations are of equal size in the unoscillated simulation; however, the boundaries show little sensitivity to the choice of oscillation parameters.
Grouping in this manner has the additional advantage of isolating most background cosmic and beam NC events (which are typically mistaken for signal events with energetic hadronic systems), along with the events having the worst energy resolution, in a separate quartile from the three quartiles containing the better-resolution signal events.
The average \ensuremath{{\nu}_{\mu}}\xspace energy resolution in the FD across the whole energy spectrum is estimated to be 6.2\%, 8.2\%, 10.3\%, and 12.3\% for each quartile, respectively.
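The equal-population quartile construction can be illustrated with a toy sketch (invented spectra and energy ranges, not the experiment's simulation or code):

```python
import numpy as np

# Toy sketch: choose hadronic-energy-fraction boundaries so that the four
# quartiles are equally populated in an (invented) unoscillated sample.
rng = np.random.default_rng(0)
e_nu = rng.uniform(0.5, 4.0, size=10_000)             # reco neutrino energy (GeV)
e_mu = e_nu * rng.uniform(0.3, 0.95, size=e_nu.size)  # reco muon energy (GeV)

had_frac = (e_nu - e_mu) / e_nu                       # hadronic energy fraction

# Equal-population boundaries from the simulated sample.
boundaries = np.quantile(had_frac, [0.25, 0.50, 0.75])

# Assign each event a quartile index: 0 (most elastic) .. 3 (most inelastic).
quartile = np.searchsorted(boundaries, had_frac)
counts = np.bincount(quartile, minlength=4)
```

Once fixed on simulation, the same boundaries are applied to data; as noted above, they are largely insensitive to the assumed oscillation parameters.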
\begin{figure}[ht]
\includegraphics[width=\linewidth]{nd_datamc_allquantiles_testpads.pdf}
\caption{\label{fig:numu_nd}
Comparison of the reconstructed neutrino energy for selected \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace events (black dots) in the ND with area-normalized simulation (red line). Shading represents the bin-to-bin systematic uncertainties. The gray area, which is nearly indistinguishable from the lower figure boundary, shows the simulated background.}
\end{figure}
\begin{figure*}[ht]
\includegraphics[width=0.8\linewidth]{nd_datamc_4pads_testpads.pdf}
\caption{\label{fig:numu_nd_quant} Comparison of selected \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace candidates (black dots) in the ND data to the prediction (red histograms) in the hadronic energy fraction quartiles, where the prediction is absolutely normalized to the data by exposure. The expected background contributions (gray) are smaller in the quartiles with better resolution. The shaded band represents the quadrature sum of all systematic uncertainties. These distributions are the input to the extrapolation procedure described in the text.}
\end{figure*}
Figure~\ref{fig:numu_nd} shows a comparison of the reconstructed neutrino energy for the selected \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace events in the ND with simulation shown area-normalized to the data.
The means of the distributions agree to within \unit[10]{MeV} (0.6\%).
Normalizing the prediction by area removes a 1.3\% normalization difference between the data and the simulation and suppresses absolute normalization uncertainties of 10-20\%, which arise primarily from the neutrino flux and from normalization offsets due to cross-section uncertainties.
The remaining uncertainties arise from shape differences.
The full set of uncertainties that are used to compute the error band is described in Sec.~\ref{systs}.
Figure~\ref{fig:numu_nd_quant} shows the corresponding distributions divided into the quartiles.
\subsubsection{\label{subsec:numu extrap} Constraints from the Near Detector data}
As in our previous work \cite{nova_numu}, we obtain a data-driven estimate for the true neutrino energy spectrum using our observed ND data.
To do so, we reweight the simulation in each reconstructed neutrino energy bin to obtain agreement with the ND data, thus correcting the differences observed in Fig.~\ref{fig:numu_nd_quant}.
After subtracting the expected background, which is minimal, we pass the resulting reconstructed neutrino energy spectrum through the migration matrix between reconstructed and true neutrino energies predicted by our ND simulation.
The corrected prediction is then multiplied by the predicted bin-by-bin ratios of the FD and ND true energy spectra, which includes the effects of differing detector geometries and acceptances, beam divergence, and three-flavor oscillations, to obtain an expected FD true energy spectrum.
The latter is finally converted back to reconstructed energy by way of the analogous FD migration matrix.
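The chain of corrections described above can be sketched with hypothetical three-bin spectra and migration matrices (illustrative numbers only; the actual binning, matrices, and spectra are those of the analysis):

```python
import numpy as np

# Hypothetical three-bin illustration of the extrapolation, not NOvA's values.
nd_data = np.array([120.0, 200.0, 150.0])     # observed ND reco-energy spectrum
nd_bkg  = np.array([  5.0,   8.0,   6.0])     # small predicted ND background

# ND migration, column i: how reco bin i distributes over true-energy bins.
reco_to_true = np.array([[0.8, 0.2, 0.0],
                         [0.2, 0.6, 0.2],
                         [0.0, 0.2, 0.8]])

# Predicted FD/ND ratio of true-energy spectra, including oscillations,
# acceptance differences, and beam divergence (invented magnitudes).
fn_ratio = np.array([1.1e-3, 0.4e-3, 0.9e-3])

# FD migration, column j: how true bin j distributes over FD reco bins.
true_to_reco_fd = np.array([[0.9, 0.1, 0.0],
                            [0.1, 0.8, 0.1],
                            [0.0, 0.1, 0.9]])

nd_true = reco_to_true @ (nd_data - nd_bkg)   # data-driven ND true spectrum
fd_true = fn_ratio * nd_true                  # apply F/N ratio bin by bin
fd_reco = true_to_reco_fd @ fd_true           # predicted FD reco spectrum
```

Because each migration-matrix column sums to one, the total event count is conserved at each conversion step, as it should be for a pure reshuffling between energy bins.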
This constrained signal prediction is summed with the cosmic prediction and a simulation-based beam background prediction for comparison with the observed FD data. The reconstructed energy distribution of the cosmic prediction is extracted from events in the minimum-bias trigger that pass all the selection criteria and is normalized using the \unit[420]{$\mu$s} window around the beam spill.
In the current analysis, this extrapolation procedure is performed within each hadronic energy fraction range separately so that neutrino reaction types that favor different regions of the elastic-to-inelastic continuum (and thereby have typically different neutrino energy resolution) can be constrained independently.
We find the total number of events in each of the four quartiles, in order from lowest to highest inelasticity, to be adjusted by $+12\%$, $-13\%$, $-13\%$, and $+4\%$ relative to the nominal simulation by this method.
\subsection{\ensuremath{{\nu}_{e}}\xspace appearance}
\subsubsection{Event selection}
We employ the same hit finding and time clustering as in the \ensuremath{{\nu}_{\mu}}\xspace{} disappearance analysis, and select events whose \ensuremath{\nu_e\,{\rm CC}}\xspace score under the same CVN algorithm exceeds a tuned selection cut.
To further purify the sample of \ensuremath{\nu_e\,{\rm CC}}\xspace candidates, we reconstruct events as follows.
First, we build three-dimensional event vertices using the intersection of lines constructed from Hough transforms applied to each two-dimensional detector view separately \cite{ref:hough,ref:earms}.
Hits in the same view falling roughly along common directions emanating from these vertices are further grouped into ``prongs,'' which are then matched between views based on their extent and energy deposition \cite{ref:fuzzyk,niner-thesis}.
We use these prongs to remove events where the energy of the event is distributed largely transverse to the neutrino beam direction; our simulation and our large sample of cosmic data taken from minimum-bias triggers indicate such events are typically cosmogenic.
We further reject events where the prongs fail containment criteria, where extremely long tracks indicate obvious muons, where there are too many hits for proper reconstruction, or where another event in close proximity in both time and space approaches the top of the detector.
To combat background events from cosmogenic photon showers entering through the back of the detector, where the overburden is thinner, we also reject events that appear to point toward Fermilab rather than away from it. Such events are distinguished by having more planes without hits in the portion of the event closest to Fermilab than in the portion farthest from it, the reverse of the expectation for an electromagnetic shower arriving from the neutrino beam direction.
Events surviving these selections form our ``core'' sample in both detectors.
The predicted composition of the FD sample at various stages in this selection is given in Table~\ref{tab:nue_core_cutflow}.
\begin{figure*}[htb]
\subfloat{%
\includegraphics[height=0.35\textwidth]{Peripheral_BDTCVN_BeamPurity.pdf}
}
\subfloat{%
\includegraphics[width=0.45\textwidth]{decomp_Peripheral_cosmic_bdt_data.pdf}
}
\caption{\label{fig:nue_bdt} The peripheral sample is a signal-rich subset of \ensuremath{{\nu}_{e}}\xspace FD candidates that failed the core cosmic rejection or containment criteria (see text).
Left: The two-dimensional BDT-CVN space used in the definition of the peripheral sample. The predicted distribution of \ensuremath{{\nu}_{e}}\xspace appearance signal events (boxes) is shown superimposed on the predicted purity in each bin (shaded color). The peripheral sample boundary is chosen at the red line: the majority of signal events lie above and to the right and the sample has little cosmogenic contamination there, while events to the left and below are predominantly cosmogenic and are rejected.
Right: comparison of the observed distribution (black points) of the BDT variable for peripheral events with the prediction (stacked histogram).}
\end{figure*}
\begin{table*}[!ht]
\caption{\label{tab:nue_core_cutflow} Predicted composition of the core \ensuremath{{\nu}_{e}}\xspace CC candidate sample at the FD, in event counts, at various stages in the selection process. Oscillation parameters used in the prediction are the best fit values from Sec.~\ref{sec:results}. These figures do not include the effect of the extrapolation procedure described in Sec.~\ref{subsec:nue ND}.}
\begin{tabular}{m{0.21\linewidth}SSSSc} \hline \hline
Selection & \parbox[t]{0.14\linewidth}{$\ensuremath{{\nu}_{\mu}}\xspace \rightarrow \ensuremath{{\nu}_{e}}\xspace$ CC} & \parbox[t]{0.14\linewidth}{Beam \ensuremath{{\nu}_{e}}\xspace CC} & \parbox[t]{0.14\linewidth}{NC} & \parbox[t]{0.14\linewidth}{\ensuremath{{\nu}_{\mu}}\xspace, \ensuremath{{\nu}_{\tau}}\xspace CC} & \parbox[t]{0.14\linewidth}{Cosmic}\\ \hline\noalign{\vskip 2pt}
No selection & 77.9 & 48.7 & 612.1 & 973.8 & $4.91 \times 10^{7} $ \\
Containment/energy cut & 52.3 & 8.0 & 121.4 & 49.3 & $2.05 \times 10^{4} $ \\
Pre-CVN cosmic rejection & 51.3 & 7.9 & 114.3 & 47.0 & $1.58\times 10^{4} $ \\
CVN & 41.4 & 6.0 & 5.3 & 1.3 & 2.0 \\
\hline \hline
\end{tabular}
\end{table*}
\begin{table*}[!htb]
\caption{\label{tab:nue_periph_cutflow} Predicted composition of the peripheral \ensuremath{{\nu}_{e}}\xspace CC candidate sample, in event counts, at two stages in the selection process. Here ``basic quality'' refers to events that pass beam and detector data quality cuts but fail the core sample containment criteria. Parameters are as in Table~\ref{tab:nue_core_cutflow}.}
\begin{tabular}{m{0.21\linewidth}SSSSc} \hline \hline
Selection & \parbox[t]{0.14\linewidth}{$\ensuremath{{\nu}_{\mu}}\xspace \rightarrow \ensuremath{{\nu}_{e}}\xspace$ CC} & \parbox[t]{0.14\linewidth}{Beam \ensuremath{{\nu}_{e}}\xspace CC} & \parbox[t]{0.14\linewidth}{NC} & \parbox[t]{0.14\linewidth}{\ensuremath{{\nu}_{\mu}}\xspace, \ensuremath{{\nu}_{\tau}}\xspace CC} & \parbox[t]{0.14\linewidth}{Cosmic}\\ \hline\noalign{\vskip 2pt}
Basic quality & 20.4 & 6.6 & 199.9 & 160.9 & $2.79 \times 10^{6} $ \\
CVN + BDT & 5.9 & 1.0 & 0.2 & 0.1 & 2.2 \\
\hline \hline
\end{tabular}
\end{table*}
We also construct a second, ``peripheral'' sample of FD events by considering events that have high scores for the CVN \ensuremath{{\nu}_{e}}\xspace hypothesis but which fail the cosmic rejection or containment criteria.
These are subjected to a more focused BDT (distinct from the one mentioned in Sec.~\ref{numu}) trained over the variables used for the containment and cosmic rejection cuts.
The containment variables include the closest distance to the top of the detector and the closest distance to any other face of the detector.
Variables distinguishing cosmogenic from beam-induced activity include the transverse momentum fraction of the event and the number of hits in the event.
Simulation and our cosmic data sample indicate that events in the signal-like regions of both this BDT and CVN are likely to be signal and not the result of externally entering activity and are therefore retained.
Distributions for the peripheral sample illustrating the predicted beam and cosmic response in this BDT and the CVN \ensuremath{{\nu}_{e}}\xspace score, as well as comparing the BDT distribution in data and simulation, are given in Fig.~\ref{fig:nue_bdt}.
Because events on the periphery of the detector are not guaranteed to be fully contained, peripheral events are summed together into a single bin instead of dividing them by the neutrino energy estimate as is done for the core sample.
The FD event counts at two stages of the peripheral selection are noted in Table~\ref{tab:nue_periph_cutflow}.
The ND event sample is predicted to consist of 42\% beam \ensuremath{{\nu}_{e}}\xspace, 30\% NC background, and 28\% \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace background.
These predictions include the effect of the data-driven constraints described in Sec.~\ref{subsec:nue ND}.
The simulated FD efficiency for the basic quality and containment cuts used in the combined core and peripheral selections relative to all true \ensuremath{{\nu}_{e}}\xspace CC events within a fiducial volume is 92.6\%.
The remaining core selections, i.e., CVN and cosmic rejection, retain 58.8\% of the true \ensuremath{{\nu}_{e}}\xspace CC events in the quality-and-containment population.
With the addition of the peripheral sample under the combined CVN+BDT criteria, this figure rises to 67.4\%.
Improvements to the selection criteria generate an increase of 6.8\% in effective exposure \cite{eff-exposure} relative to our previous results, while the efficiency gain due to the addition of the peripheral sample yields a further increase of 17.4\%.
\subsubsection{Energy estimation and binning}
To estimate the neutrino energy in \ensuremath{{\nu}_{e}}\xspace candidate events, we construct a second-order polynomial in two variables: the sum of the calibrated hit energies from prongs identified as electromagnetic activity and the sum of the energies of hits in the event not within those prongs.
The coefficients of this polynomial are fit to minimize the predicted neutrino energy residuals in selected simulated \ensuremath{\nu_e\,{\rm CC}}\xspace events.
Whether a prong is considered electromagnetic or not is determined by a deep learning single particle classifier that utilizes both information from the prong itself and the full event \cite{psihas-thesis}.
This results in an estimator with 11\% resolution for both appearance signal and beam background \ensuremath{\nu_e\,{\rm CC}}\xspace events in both detectors.
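The estimator fit can be illustrated with a toy least-squares determination of the polynomial coefficients (invented inputs and detector response, not the actual calibration):

```python
import numpy as np

# Toy sketch: fit a second-order polynomial in (EM energy, hadronic energy)
# to minimize neutrino-energy residuals on an invented simulated sample.
rng = np.random.default_rng(1)
e_em  = rng.uniform(0.5, 3.0, 5000)   # calibrated EM-prong energy sum (GeV)
e_had = rng.uniform(0.0, 1.5, 5000)   # remaining (non-EM) hit energy sum (GeV)

# Invented "true" energy with a mild nonlinearity plus Gaussian smearing.
e_true = 1.05 * e_em + 1.3 * e_had + 0.05 * e_em**2 + rng.normal(0, 0.05, 5000)

# Design matrix for a full second-order polynomial in the two variables.
A = np.column_stack([np.ones_like(e_em), e_em, e_had,
                     e_em**2, e_had**2, e_em * e_had])
coeffs, *_ = np.linalg.lstsq(A, e_true, rcond=None)

e_reco = A @ coeffs
resolution = np.std((e_reco - e_true) / e_true)
```

In the analysis itself the residuals are minimized on selected simulated \ensuremath{\nu_e\,{\rm CC}}\xspace events, yielding the 11\% resolution quoted above; the toy here merely shows the structure of the fit.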
The expected appearance signal has a narrow peak at the \ensuremath{{\nu}_{\mu}}\xspace disappearance maximum, about \unit[1.6]{GeV}.
Additionally, in this analysis, NC and cosmogenic backgrounds concentrate at low reconstructed energies, and beam \ensuremath{{\nu}_{e}}\xspace backgrounds dominate at high energies.
Guided by these considerations and by simulation-based figure-of-merit calculations, we limit the neutrino energies considered to \unit[1-4]{GeV} for the FD core sample and \unit[1-4.5]{GeV} for the peripheral sample.
The corresponding core or peripheral range is used for the ND sample when applying the data constraint detailed in Sec.~\ref{subsec:nue ND}.
Each of these is further subdivided into three ranges in the CVN classifier output so as to concentrate the sample of highest purity together.
The peripheral event sample is treated as a fourth bin.
\subsubsection{\label{subsec:nue ND} Near Detector data constraints}
The procedure for using the ND data in the \ensuremath{{\nu}_{e}}\xspace analysis is similar to that used for \ensuremath{{\nu}_{\mu}}\xspace, extended to account for the particular natures of the signal and beam background components.
The appearance signal arises from oscillated beam muon neutrinos, so the \ensuremath{{\nu}_{\mu}}\xspace-selected candidates in the ND are used to correct the expected \ensuremath{{\nu}_{e}}\xspace appearance signal with the same procedure detailed in Sec.~\ref{subsec:numu extrap}.
Additionally, the \ensuremath{{\nu}_{\mu}}\xspace-selected events are used to verify the \ensuremath{{\nu}_{e}}\xspace selection efficiency.
From the \ensuremath{{\nu}_{\mu}}\xspace data and simulated samples, we create two subsets where the reconstructed muon track is replaced by a simulated electron shower with the same energy and direction \cite{sachdev-thesis}.
The \ensuremath{{\nu}_{e}}\xspace selection criteria are applied to these electron-inserted samples, and the efficiencies for identifying neutrino events in data and simulation, relative to a loose preselection, are found to match within 2\%.
As there is no signal and cosmogenic activity is negligible at the ND, the \ensuremath{\nu_e\,{\rm CC}}\xspace candidates at the ND consist entirely of beam background events, originating from CC reactions of the intrinsic \ensuremath{{\nu}_{e}}\xspace component of the beam and from misidentified NC and \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace events.
As in our last result \cite{nova_joint}, we use a combination of data-driven methods to ``decompose'' the \ensuremath{{\nu}_{e}}\xspace-selected data into these three categories and constrain them independently.
We examine low- and high-energy \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace samples at the ND in order to adjust the yields of the parent hadrons that decay into both \ensuremath{{\nu}_{e}}\xspace and \ensuremath{{\nu}_{\mu}}\xspace, which constrains the \ensuremath{{\nu}_{e}}\xspace beam background.
We also use the observed distributions of time-delayed electrons from stopping $\mu$ decay in each analysis bin to constrain the ratio of \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace and NC interactions.
The resulting decomposition of the selected \ensuremath{{\nu}_{e}}\xspace candidate sample at the ND therefore agrees with the data distribution by construction.
The nominal and constrained predictions are shown compared to the data distribution in Fig.~\ref{fig:nue_decomp}.
\begin{figure}[htb]
\includegraphics[width=\linewidth]{Nue2017_MDCMP_stack.pdf}
\caption{\label{fig:nue_decomp} The effect of the decomposition and constraint procedure on the predicted ND candidate \ensuremath{{\nu}_{e}}\xspace spectrum; the stacked histogram shows corrected backgrounds (from bottom, beam \ensuremath{{\nu}_{e}}\xspace, \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace, NC). The three panels show the results for each of the CVN classifier bins, ranging left to right from lower to higher purity. Predictions for each background class prior to correction are given by the dashed lines. The overall corrections to the normalizations of the yields by category are: beam \ensuremath{\nu_e\,{\rm CC}}\xspace, $+3.0\%$; NC, $+17.0\%$; and \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace, $+18.9\%$.}
\end{figure}
The corrections to the beam \ensuremath{{\nu}_{e}}\xspace, NC and \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace components are extrapolated to the FD core sample using the bin-by-bin ratios of the FD and ND reconstructed energy spectra, for each of the three CVN ranges.
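Schematically, for each background component and reconstructed-energy bin $i$ within a given CVN range, this extrapolation multiplies the data-constrained ND yield by the simulated FD-to-ND ratio,
\begin{equation}
N_i^{\rm FD} = N_i^{\rm ND,\,data}\,\frac{M_i^{\rm FD}}{M_i^{\rm ND}},
\end{equation}
where $M_i^{\rm FD}$ and $M_i^{\rm ND}$ denote the simulated yields in the two detectors.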
The predicted beam backgrounds in the FD peripheral sample are corrected according to the results of the extrapolation for the highest CVN bin in the core sample (see Fig.~\ref{fig:nue_decomp}).
The sum of the final beam-induced background prediction and the extrapolated signal for given oscillation parameters is added to the measured cosmic-induced backgrounds to compare to the observed FD data.
\section{\label{systs} Systematic uncertainties}
We evaluate the effect of potential systematic uncertainties on our results by reweighting or generating new simulated event samples for each source of uncertainty and repeating the entire measurement, including the extraction of signal and background yields, the computation of migration matrices, and the calculation of the ratios of FD to ND expectations using each modified simulation sample and applying our constraint procedures.
The effect of each of these uncertainties
on the predicted yields of selected \ensuremath{\nu_e\,{\rm CC}}\xspace candidate events is contained in
Table~\ref{tab:nue_syst}.
We estimate the effects on the extracted oscillation parameters \sinsq{23}, \dmsq{32} and \ensuremath{\delta_{\rm CP}}\xspace in the joint fit to be as given in Table~\ref{tab:systs_param}.
These are negligibly different from a \ensuremath{{\nu}_{\mu}}\xspace-only fit.
\begin{table}[h]
\caption{\label{tab:nue_syst} Effect of $1\sigma$ variations of the
systematic uncertainties on the total \ensuremath{{\nu}_{e}}\xspace
signal and background predictions.
Simulated data were used and oscillated with $\dmsq{32}=2.445 \times 10^{-3} \ensuremath{{{\rm eV}^2}/c^4}\xspace$ (NH),
\sinsq{23}= 0.558, \ensuremath{\delta_{\rm CP}}\xspace = 1.21$\pi$. }
\begin{tabular}{m{0.47\linewidth}SS} \hline \hline
Source of uncertainty &
\multicolumn{1}{c}{\parbox[t]{0.20\linewidth}{\ensuremath{{\nu}_{e}}\xspace signal (\%)}} &
\multicolumn{1}{c}{\parbox[t]{0.27\linewidth}{Total beam \\background (\%)}} \\ \hline
Cross sections and FSI & 7.7 & 8.6 \\
Normalization & 3.5 & 3.4 \\
Calibration & 3.2 & 4.3 \\
Detector response & 0.67 & 2.8 \\
Neutrino flux & 0.63 & 0.43 \\
\ensuremath{{\nu}_{e}}\xspace extrapolation & 0.36 & 1.2 \\ \hline
\mbox{Total systematic} uncertainty & \multicolumn{1}{c}{9.2} & \multicolumn{1}{c}{11} \\
Statistical uncertainty & \multicolumn{1}{c}{15} & \multicolumn{1}{c}{22} \\
\hline
Total uncertainty & \multicolumn{1}{c}{18} & \multicolumn{1}{c}{25} \\
\hline \hline
\end{tabular}
\end{table}
\begin{table}[h]
\caption{\label{tab:systs_param} Sources of uncertainty and their
estimated average impact on the oscillation parameters in the
joint fit. This impact is quantified using the increase in the
one-dimensional 68\% C.L. interval, relative to the size of the
interval when only statistical uncertainty is included in the fit.
Simulated data were used and oscillated with the same parameters
as in Table~\ref{tab:nue_syst}.
Given the asymmetry of the
\sinsq{23} interval with respect to its best fit value, only the
change in the upper edge is included.
The total systematic
uncertainty is calculated by adding the individual components in
quadrature.} \resizebox{\linewidth}{!}{ \setlength{\tabcolsep}{1pt}
\begin{tabular}{
>{\raggedright}p{0.38\linewidth}
r
S[table-align-uncertainty=true,table-number-alignment = left]
r
S[table-format = 2.1, table-number-alignment = right]
c
S[table-format = 2.1, table-number-alignment = left]
c
}
\hline \hline
{Source of uncertainty}
&\multicolumn{2}{c}{\parbox[t]{0.20\linewidth}{Uncertainty \\in \sinsq{23} ($\times 10^{-3}$)}}
&\multicolumn{4}{c}{\parbox[t]{0.27\linewidth}{Uncertainty \\in \dmsq{32} \small($\times 10^{-6} \ensuremath{{{\rm eV}^2}/c^4}\xspace$)}}
&\multicolumn{1}{c}{\parbox[t]{0.20\linewidth}{Uncertainty \\ in \ensuremath{\delta_{\rm CP}}\xspace }}
\\ \hline
Calibration & $+$ & 7.3 & $+$ & 27& /$-$ & 27 & $\pm$ 0.05$\pi$\\
Cross sections and FSI & $+$ & 6.9 & $+$ & 14& /$-$ & 19 & $\pm$ 0.08$\pi$\\
Muon energy scale & $+$ & 2.4 & $+$ & 8.5& /$-$ & 12 & $\pm$ 0.01$\pi$\\
Normalization & $+$ & 4.4 & $+$ & 7.3& /$-$ & 12 & $\pm$ 0.05$\pi$\\
Detector response & $+$ & 0.8 & $+$ & 6.2& /$-$ & 7.7 & $\pm$ 0.01$\pi$\\
Neutrino flux & $+$ & 1.1 & $+$ & 4.0& /$-$ & 4.4 & $\pm$ 0.01$\pi$\\
\ensuremath{{\nu}_{e}}\xspace extrapolation & $+$ & 0.1 & $+$ & 0.2& /$-$ & 0.7 & $\pm$ 0.01$\pi$\\ \hline
{Total systematic uncertainty} & $+$ & 12 & $+$ & 33& /$-$ & 38 & $\pm$ 0.12$\pi$\\
Statistical \nohyphens{uncertainty} & $+$ & 38 & $+$ & 75& /$-$ & 84 & $\pm$ 0.66$\pi$\\ \hline
Total uncertainty & $+$ & 40 & $+$ & 82& /$-$ & 92 & $\pm$ 0.67$\pi$\\
\hline \hline
\end{tabular}
}
\end{table}
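As an illustrative check of the quadrature combination in Table~\ref{tab:systs_param}, the positive uncertainty in \dmsq{32} follows from its individual components as
\begin{equation}
\sigma_{\rm tot} = \Bigl(\textstyle\sum_i \sigma_i^2\Bigr)^{1/2} = \sqrt{27^2+14^2+8.5^2+7.3^2+6.2^2+4.0^2+0.2^2} \approx 33
\end{equation}
in units of $10^{-6}\,\ensuremath{{{\rm eV}^2}/c^4}\xspace$; the remaining columns combine in the same way, up to rounding of the quoted components.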
The largest effects on this analysis stem from uncertainty in our calibrations and energy scales, in the cross-section and final-state interaction (FSI) models in \genie/, and in the impact of imperfectly simulated event pileup from the neutrino beam on reconstruction and selection efficiencies at the ND.
\paragraph*{Calibration and energy scale} To evaluate the uncertainty from calibrations and energy scales, which can affect the two detectors differently, we group these uncertainties into absolute (fully positively correlated between detectors) and relative (anticorrelated or uncorrelated) components.
Both absolute and relative muon energy scale uncertainties are $<1\%$ based on a combination of thorough accounting of our detectors' material composition and an examination of the parameters in the Bethe formula for stopping power and the energy-loss model of \geant/.
The overall energy response uncertainty, on the other hand, is driven by uncertainty in our overall calorimetric energy calibration.
To investigate the response, we compare simulated and measured data distributions of numerous channels including the energy deposits of muons originating from cosmogenic- and beam-related activity, the energy spectra of electrons arising from the decay of stopped muons, the invariant mass spectrum of neutral pion decays into photons, and the proton energy scales in ND quasielastic-like events.
The uncertainty we use is guided by the channel exhibiting the largest differences, the proton energy scale, at 5\%.
We take this 5\% uncertainty as both an absolute energy uncertainty, correlated between the two detectors, and a separate 5\% relative uncertainty, since there are not sufficient quasielastic-like events to perform this check at the FD.
\begin{figure*}[!t]
\includegraphics[width=0.8\textwidth]{datapred_bestfit_4pads_testpads.pdf}
\caption{\label{fig:numu_spectr} Comparison of the reconstructed
energy spectra of selected \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace candidates in FD data (black dots) and
best-fit prediction (purple). The sample is split into four
reconstructed hadronic energy fraction quartiles labeled 1 through
4, where 1 (4) has the best (worst) energy resolution. The
majority of the total background (gray, upper) including the cosmogenic subcomponent
(blue, lower) lies in the fourth quartile.}
\end{figure*}
\paragraph*{Cross sections and FSI} Estimates for the majority of the cross section and FSI uncertainties that we consider are obtained using the event reweighting framework in \genie/ \cite{genie-manual}.
However, ongoing effort in the neutrino cross section community and the NOvA ND data suggest some modifications are necessary.
First, we apply additional uncertainty to the energy- and momentum-transfer-dependence of CC quasielastic (CCQE) scattering due to long-range nuclear correlations \cite{Valencia-RPA-unc} according to the prescription in Ref.~\cite{Gran-RPA}.
Second, as the detailed nature of MEC interactions is not well understood, we construct uncertainties for the neutrino energy dependence, energy-transfer dependence, and final-state nucleon-nucleon pair composition based on a survey of available theoretical treatments \cite{Gran-Valencia-2p2h,Martini-2p2h,SuSA-2p2h}.
The normalization of the MEC component is recomputed under each of these uncertainties using the same fit procedure used to arrive at the 20\% scale factor for the central value prediction.
Third, it is now believed that the inflated value of the axial mass in quasielastic scattering ($M_A^{QE}$) obtained in recent neutrino-nucleus scattering experiments relative to the light liquid bubble chamber measurements is due to nuclear effects that we are now treating explicitly with the foregoing \cite{nu-xsec-review}.
We thus reduce \genie/'s uncertainty for $M_A^{QE}$ to $\pm 5\%$ (a conservative estimate of the bubble chamber range \cite{MAQE-BBBA,MAQE-Zexp}) from its default of ${}^{+25\%}_{-15\%}$, while retaining \genie/'s central value $M_{A}^{QE} = \unit[0.99]{GeV/c^{2}}$.
Fourth, we increase the uncertainty applied to nonresonant pion production with three or more pions and invariant hadronic mass of $W<\unit[3]{GeV}$ to 50\% to match the default for 1- and 2-pion cases, based on data-simulation disagreements observed in the ND data.
Fifth, and finally, we introduce two separate 2\% uncertainties on the ratio of \ensuremath{\nu_e\,{\rm CC}}\xspace and \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace cross sections: one to account for potential differences between them due to radiative corrections, and one to consider the possibility of second-class currents in CCQE events \cite{DayMcF-numu-nue-diff,t2k}.
To validate the uncertainties assigned by \genie/ to the NC backgrounds in our analyses, we performed a study within the \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace candidate sample in the ND that measured the rates of neutrons that were produced at the ends of tracks and subsequently recaptured, emitting photons. This study was done by investigating time-delayed activity consistent with a neutron capture, taking into account the tail of the Michel electron time spectrum. The neutron rate is different for the mostly $\mu^{-}$ identified in \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace reactions versus the mostly $\pi^{\pm}$ in NC.
This study suggested that the NC cross-section uncertainties provided by \genie/, combined with the calibration uncertainties mentioned previously, account for any differences between data and simulation.
Therefore we no longer include the \textit{ad hoc} 100\% additional uncertainty on NC backgrounds used in previous results \cite{nova_numu,nova_joint}.
\paragraph*{Normalization} We quantify the uncertainty arising from potential imperfections in the simulation of beam-induced pileup in the ND by overlaying a single extra simulated event onto samples of both simulated and data events.
We then examine the selection efficiency of this extra event and assign the 3.5\% difference between the data and simulation samples as a conservative uncertainty on the normalization of the ND rate. This is added in quadrature with much smaller uncertainties in the detector mass and the total beam exposure to yield the overall normalization systematic uncertainty.
\paragraph*{Other} Other contributions to our systematic uncertainty budget are associated with the improved \ppfx/ flux prediction and potential differences between the acceptances of the ND \ensuremath{{\nu}_{\mu}}\xspace selection criteria and the FD \ensuremath{{\nu}_{e}}\xspace sample into which the ND corrections are extrapolated in the \ensuremath{{\nu}_{e}}\xspace analysis.
Also substantially reduced are the uncertainties in the light response model used for detector simulation.
Previous fits of the parameters in the Birks model for scintillator quenching with a second-order term \cite{chou-scint}, using proton tracks in candidate ND \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace quasielastic-like events in data, obtained values inconsistent with other measurements of Birks quenching in liquid scintillator \cite{KamLAND-scint,Borexino-scint}.
Previous results therefore used a variation with the other measurements' values to compute an uncertainty.
With the addition of Cherenkov light in scintillator to our detector model, however, we find a best fit at the same values preferred by other experiments.
To quantify any residual uncertainty in the light model, in this analysis we take alternate predictions where we alter the scintillation and Cherenkov photon yields in the model within the tolerance of agreement with the ND data while holding the muon response fixed (since it is set by our calibration procedure).
\section{\label{sec:results} Results}
We performed a blind analysis in which the FD data were analyzed only after all aspects of the analysis had been specified.
An independent implementation of the methods described in Secs.~\ref{analysis}--\ref{systs} for incorporating the Near Detector data constraint and assessing the impact of systematic uncertainties, as well as extracting oscillation parameters via likelihood fitting, was used to check the analysis presented in this paper.
It produced results consistent with those shown in the following sections.
\subsection{\ensuremath{{\nu}_{\mu}}\xspace disappearance data}
After selection, 126 \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace candidates are observed in the FD.
In the absence of oscillations, we would have expected $720.3^{+67.4}_{-47.0} \text{ (syst.)}$ \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace candidates based on the extrapolation from the Near Detector, including an expected background of 5.8 misidentified cosmic rays and 3.4 misidentified neutrino events of other types.
\begin{figure}[htb]
\includegraphics[width=\linewidth]{datapred_bestfit_quant0.pdf}
\caption{\label{fig:numu_combined} Data from
Fig.~\ref{fig:numu_spectr} summed over the four quartiles.}
\end{figure}
Figure~\ref{fig:numu_spectr} shows the observed energy spectrum in each quartile and the corresponding best fit predictions. As noted earlier, most of the predicted background appears in the fourth (worst resolution) quartile. Figure~\ref{fig:numu_combined} shows the data of Fig.~\ref{fig:numu_spectr} summed over all of the quartiles.
The neutrino energy spectrum exhibits a sharp dip at about \unit[1.6]{GeV}. Essentially, \sinsqtwo{23} corresponds to the depth of the dip and \dmsq{32} corresponds to its location. Both of these measurements are sensitive to the energy resolution, so we expect the best measurement in the quartile with best energy resolution.
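These dependences follow from the approximate two-flavor vacuum survival probability,
\begin{equation}
P(\nu_\mu \rightarrow \nu_\mu) \simeq 1 - \sin^2 2\theta_{23}\,\sin^2\!\left(\frac{1.267\,\Delta m^2_{32}\,L}{E}\right),
\end{equation}
with $L$ in km, $E$ in GeV, and $\Delta m^2_{32}$ in $\ensuremath{{{\rm eV}^2}/c^4}\xspace$: the depth of the dip is governed by $\sin^2 2\theta_{23}$, while its position, $E \simeq 2\times 1.267\,\Delta m^2_{32}\,L/\pi \approx \unit[1.6]{GeV}$ for the \unit[810]{km} baseline, is governed by $\Delta m^2_{32}$.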
\subsection{\ensuremath{{\nu}_{e}}\xspace appearance data}
After selection we observe 66 \ensuremath{\nu_e\,{\rm CC}}\xspace candidate events in the FD including an expected background of $20.3\pm 2.0 \text{ (syst.)}$ events. The composition of the expected background is estimated to be 7.3 beam \ensuremath{\nu_e\,{\rm CC}}\xspace events, 6.4 NC events, 1.3 \ensuremath{\nu_{\mu}\,{\rm CC}}\xspace events, 0.4 \ensuremath{\nu_{\tau}\,{\rm CC}}\xspace events, and 4.9 cosmic rays.
Figure~\ref{fig:nue_spectr} shows the distribution of these events as a function of the reconstructed neutrino energy for the three CVN classifier bins and for the peripheral sample, along with the expected background contributions and the best fit predictions.
To give some context to the number of observed \ensuremath{{\nu}_{e}}\xspace events, Fig.~\ref{fig:nue_number} shows the number of events expected for the best fit values of \dmsq{32} and \sinsq{23} as a function of \ensuremath{\delta_{\rm CP}}\xspace, for the two possible mass hierarchies.
\begin{figure}[htb]
\includegraphics[width=\linewidth]{decomp_All_4bin_plot_data__1_.pdf}
\caption{\label{fig:nue_spectr} Comparison of the neutrino energy spectra of selected \ensuremath{\nu_e\,{\rm CC}}\xspace candidates in the FD data (black dots) with the best fit prediction (purple lines) in the three CVN classifier bins and the peripheral sample. The total expected background (gray, upper) and the cosmic component of it (blue, lower) are shown as shaded areas.
The events in the peripheral bin have energies between 1 and 4.5 GeV.}
\end{figure}
\begin{figure}[htb]
\includegraphics[width=\linewidth]{monoprob_2017_data.pdf}
\caption{\label{fig:nue_number}
Total number of \ensuremath{\nu_e\,{\rm CC}}\xspace candidate events observed in the FD (gray) compared to the prediction (color) as a function of \ensuremath{\delta_{\rm CP}}\xspace.
The colored lines correspond to the best fit values of \sinsq{23} and \dmsq{32} from
Table~\ref{tab:best_fits}, with
the upper two curves (blue) representing two octants in the normal mass hierarchy ($\dmsq{32}>0$) and the lower curve (red) the inverted hierarchy ($\dmsq{32}<0$).
The colored bands correspond to $0.43 \leq \sinsq{23} \leq 0.60$. All other parameters are held fixed at the best-fit values.
}
\end{figure}
\subsection{Joint fit results}
We have performed a simultaneous fit to the binned data shown in Figs.~\ref{fig:numu_spectr} and \ref{fig:nue_spectr}.
Systematic uncertainties are incorporated into the fit as nuisance parameters with Gaussian penalty terms.
Where systematic uncertainties are common between the two data sets, the nuisance parameters associated with the effect are correlated appropriately.
In making these fits and in the contours and significance levels that follow, we used the following values for physics parameters measured by other experiments \cite{pdg}: $\dmsq{21} = (7.53\pm 0.18)\times 10^{-5}\ensuremath{{{\rm eV}^2}/c^4}\xspace$, $\sin^2\theta_{12} = 0.307\substack{+0.013 \\ -0.012}$, $\sin^2\theta_{13} = 0.0210 \pm 0.0011$.
We use a matter density computed for the average depth of the NuMI beam in the Earth's crust for the NOvA baseline of 810 km using the CRUST2.0 model \cite{ref:crust}, $\rho = \unit[2.84]{\mathrm{g/cm^3}}$.
\subsubsection{Best fits}
Table~\ref{tab:best_fits} gives the parameter values at the best fit point in each relevant mass hierarchy and $\theta_{23}$ octant combination. The top line shows the overall best fit, which occurs in the normal mass hierarchy and the upper $\theta_{23}$ octant; the middle line shows the best fit in the lower $\theta_{23}$ octant for the normal mass hierarchy, which is only slightly less significant; and the bottom line shows the best fit in the inverted mass hierarchy, which is disfavored largely because it predicts fewer \ensuremath{{\nu}_{e}}\xspace appearance events than are observed. The column labeled $\Delta\chi^2$ represents the difference in $\chi^2$ between the fit and the overall best fit, where $\chi^2$ in this case is $-2\ln\mathcal{L}$ with $\mathcal{L}$ being the likelihood function calculated using Poisson statistics plus Gaussian penalty terms for the systematic uncertainties. There are no best fit values in the inverted mass hierarchy and lower $\theta_{23}$ octant because the likelihood has no local maximum in this hierarchy-octant region, as will become clear in Fig.~\ref{fig:th23}. The $\chi^2$ for the overall best fit is 84.6 for 72 degrees of freedom.
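For Poisson-distributed bin counts this statistic takes, up to a constant offset, the standard form
\begin{equation}
\chi^2 = 2\sum_i \left[\mu_i - n_i + n_i \ln\frac{n_i}{\mu_i}\right] + \sum_j \frac{\delta_j^2}{\sigma_j^2},
\end{equation}
where $n_i$ and $\mu_i$ are the observed and predicted counts in analysis bin $i$ (the logarithmic term is omitted for empty bins), and the second sum runs over the systematic nuisance parameters $\delta_j$, expressed as shifts from their nominal values, with Gaussian widths $\sigma_j$.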
The precision measurements of \sinsq{23} and \dmsq{32} come from the \ensuremath{{\nu}_{\mu}}\xspace disappearance data. A fit to these data alone gives essentially the same values for these parameters in the normal mass hierarchy. However, the best joint \ensuremath{{\nu}_{\mu}}\xspace{}-\ensuremath{{\nu}_{e}}\xspace{} fit pulls the value of $|\dmsq{32}|$ up by $0.04 \times 10^{-3} \ensuremath{{{\rm eV}^2}/c^4}\xspace$ from the \ensuremath{{\nu}_{\mu}}\xspace disappearance-only fit in the inverted mass hierarchy.
\begin{table}[h]
\caption{\label{tab:best_fits} Best fit values. See text for further explanation.}
\resizebox{\linewidth}{!}{
\begin{tabular}{lcccc}
\hline \hline
Hierarchy/Octant
& {\ensuremath{\delta_{\rm CP}}\xspace ($\pi$)}
& {\sinsq{23}}
& \parbox[t]{0.22\linewidth}{\dmsq{32} ($10^{-3} \ensuremath{{{\rm eV}^2}/c^4}\xspace)$ } &
{$\Delta\chi^2$}
\\ \hline
Normal/Upper & 1.21 & 0.56 & \phantom{$-$}2.44 & 0.00\\
Normal/Lower & 1.46 & 0.47 & \phantom{$-$}2.45 & 0.13\\
Inverted/Upper & 1.46 & 0.56
& $-$2.51 & 2.54 \\
\hline \hline
\end{tabular}
}
\end{table}
\begin{figure}[!htb]
\includegraphics[width=\linewidth,trim=0 45 0 0,clip]{contour_dmsq_NH.pdf}
\vspace{-.2em}
\includegraphics[width=\linewidth,trim=0 0 0 25,clip]{contour_dmsq_IH.pdf}
\caption{\label{fig:dmsqcontour}
Regions of \dmsq{32} vs.~\sinsq{23} parameter space consistent
with the \ensuremath{{\nu}_{e}}\xspace appearance and the \ensuremath{{\nu}_{\mu}}\xspace
disappearance data at various levels of significance. The top panel corresponds to normal
mass hierarchy and the bottom panel to inverted hierarchy.
The color intensity indicates the confidence level at which
particular parameter combinations are allowed.}
\end{figure}
\begin{figure}[htb]
\includegraphics[width=\linewidth]{contours_JointFitFC_withFriends.pdf}
\caption{\label{fig:other_expt} Comparison of measured 90\% confidence level contours for \dmsq{32} vs.~\sinsq{23} for this result (black line; best-fit value, black point), T2K \cite{t2k} (green dashed), MINOS \cite{minos} (red dashed), IceCube \cite{icecube} (blue dotted), and Super-Kamiokande \cite{superk} (purple dash-dotted).}
\end{figure}
\begin{figure}[!htb]
\includegraphics[width=\linewidth,trim=0 45 0 0,clip]{contour_delta_NH.pdf}
\vspace{-.2em}
\includegraphics[width=\linewidth,trim=0 0 0 25,clip]{contour_delta_IH.pdf}
\caption{\label{fig:deltacontour}
Regions of \sinsq{23} vs.~\ensuremath{\delta_{\rm CP}}\xspace parameter space consistent
with the \ensuremath{{\nu}_{e}}\xspace appearance and the \ensuremath{{\nu}_{\mu}}\xspace
disappearance data. The top panel corresponds to normal
mass hierarchy ($\dmsq{32}>0$) and the bottom panel to inverted hierarchy
($\dmsq{32}<0$). The color intensity indicates the confidence level at which
particular parameter combinations are allowed.}
\end{figure}
\begin{figure}[htb]
\includegraphics[width=\linewidth]{slice_dmsq.pdf}
\caption{\label{fig:dmsq}
Significance at which each value of $|\dmsq{32}|$ is disfavored
in the normal (blue, lower) or inverted (red, upper) mass hierarchy.}
\end{figure}
\begin{figure}[htb]
\includegraphics[width=\linewidth]{slice_th23.pdf}
\caption{\label{fig:th23}
Significance at which each value of \sinsq{23} is disfavored
in the normal (blue, lower) or inverted (red, upper) mass hierarchy.
The vertical dotted line indicates the point of maximal mixing.}
\end{figure}
\subsubsection{Two dimensional contours and significance levels of single parameters}
All of the contours and significance levels that follow are constructed following the unified approach of Feldman and Cousins \cite{ref:fc}, profiling over unspecified physics parameters and systematic uncertainties.
Figure~\ref{fig:dmsqcontour} shows the 1, 2, and 3 $\sigma$ two-dimensional contours for \dmsq{32} and \sinsq{23}, separately for each mass hierarchy.
Figure~\ref{fig:other_expt} shows a comparison of 90\% confidence level contours for these parameters in the normal mass hierarchy for NOvA, T2K \cite{t2k}, MINOS \cite{minos}, IceCube \cite{icecube}, and \mbox{Super-Kamiokande~\cite{superk}.}
All of the experiments have results consistent with maximal mixing. Note that the range 0.4 to 0.6 in \sinsq{23} corresponds to the range 0.96 to 1.00 in \sinsqtwo{23}, which is the variable directly measured in \ensuremath{\nu_\mu \rightarrow \nu_\mu}\xspace oscillations.
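The correspondence between the two variables follows from the identity
\begin{equation}
\sin^2 2\theta_{23} = 4\,\sin^2\theta_{23}\left(1-\sin^2\theta_{23}\right),
\end{equation}
which gives 0.96 at $\sin^2\theta_{23}=0.4$ or $0.6$ and a maximum of 1.00 at maximal mixing, $\sin^2\theta_{23}=0.5$.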
Figure~\ref{fig:deltacontour} shows the analogous contours to those of Fig.~\ref{fig:dmsqcontour} in \sinsq{23} and \ensuremath{\delta_{\rm CP}}\xspace.
Figures~\ref{fig:dmsq}, \ref{fig:th23}, and \ref{fig:delta} show the significance with which values of $|\dmsq{32}|$, \sinsq{23}, and \ensuremath{\delta_{\rm CP}}\xspace are disfavored in the two mass hierarchies, respectively. The results in Fig.~\ref{fig:th23} differ from the ones previously reported \cite{nova_numu} in that the disfavoring of maximal mixing ($\theta_{23} = \pi/4$) has changed from 2.6 standard deviations ($\sigma$) to \unit[0.8]{$\sigma$} in the present results.
This shift resulted from three changes, each of which moved $\theta_{23}$ closer to maximal mixing. The largest effect was due to new simulations and calibrations. The two smaller effects were from new selection and analysis procedures and from the additional \unit[$2.80\times10^{20}$]{POT} of data included here. Taken by itself, the additional data favored maximal disappearance.
In Fig.~\ref{fig:delta} two curves are shown in the normal mass hierarchy, one for each of the $\theta_{23}$ octants, corresponding to the near degeneracy shown in Fig.~\ref{fig:th23}. Only one curve is shown for the inverted mass hierarchy since there is only one minimum, which occurs in the upper octant. The point of minimum significance in the inverted mass hierarchy differs among the three figures because, although the $\Delta\chi^2$'s are identical (see
Table~\ref{tab:best_fits}), the translation of $\Delta\chi^2$ to significance depends on which oscillation parameters are profiled.
\begin{figure}[!tb]
\includegraphics[width=\linewidth]{slice_delta_3curves.pdf}
\caption{\label{fig:delta}
Significance at which each value of \ensuremath{\delta_{\rm CP}}\xspace is disfavored
in the normal (blue, lower) or inverted (red, upper) mass hierarchy. The normal mass hierarchy is divided into upper (solid) and lower (dashed) $\theta_{23}$ octants corresponding to the near degeneracy in \sinsq{23}.}
\end{figure}
Table~\ref{tab:1sigma limits} shows the \unit[1]{$\sigma$} confidence intervals for \dmsq{32}, \sinsq{23}, and \ensuremath{\delta_{\rm CP}}\xspace in the normal mass hierarchy, corresponding to Figs.~\ref{fig:dmsq}--\ref{fig:delta}. There are no \unit[1]{$\sigma$} confidence intervals in the inverted mass hierarchy.
\begin{table}[htb]
\caption{\label{tab:1sigma limits} 1 $\sigma$ confidence intervals for physics parameters in the normal mass hierarchy.}
\resizebox{\linewidth}{!}{
\begin{tabular}{p{0.35\linewidth}c}
\hline \hline
Parameter (units)
& \parbox[t]{0.50\linewidth}{1 $\sigma$ interval(s)}
\\ \hline \\[-14pt]
\dmsq{32} ($10^{-3} \ensuremath{{{\rm eV}^2}/c^4}\xspace$) & [2.37, 2.52] \\[1pt]
\sinsq{23} & [0.43, 0.51] and [0.52, 0.60] \\[1pt]
\ensuremath{\delta_{\rm CP}}\xspace ($\pi$) & [0, 0.12] and [0.91, 2] \\[1pt]
\hline \hline
\end{tabular}
}
\end{table}
Finally, we have calculated the significance level for the rejection of the inverted hierarchy using the same procedure as in the above contours and confidence intervals, namely by profiling over all the other physics parameters and the systematic uncertainties. Frequentist coverage was checked following the suggestion of Berger and Boos \cite{ref:BergerBoos}. The entire inverted mass hierarchy region is disfavored at the 95\% confidence level.
\begin{acknowledgments}
This work was supported by the US Department of Energy; the US National Science Foundation; the Department of Science and Technology, India; the European Research Council; the MSMT CR, GA UK, Czech Republic; the RAS, RFBR, RMES, RSF and BASIS Foundation, Russia; CNPq and FAPEG, Brazil; and the State and University of Minnesota. We are grateful for the contributions of the staffs at the University of Minnesota module assembly facility and Ash River Laboratory, Argonne National Laboratory, and Fermilab. Fermilab is operated by Fermi Research Alliance, LLC under Contract No.~DE-AC02-07CH11359 with the US DOE.
\end{acknowledgments}
\FloatBarrier
Q: How to run this code for Magento 2.3? I run the following code in Magento 2.3 but it shows a Class \Mastering\Itdesire\Setup\Mastering\Itdesire\Setup\InstallData does not exist error.
Content of InstallData.php:
<?php
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
namespace Mastering\Itdesire\Setup;
/**
* Description of InstallData
*
* @author pramod
*/
namespace Mastering\Itdesire\Setup;
use Magento\Framework\Setup\InstallDataInterface;
use Magento\Framework\Setup\ModuleContextInterface;
use Magento\Framework\Setup\ModuleDataSetupInterface;
class InstallData implements InstallDataInterface
{
/**
* {@inheritdoc}
*/
public function install(ModuleDataSetupInterface $setup, ModuleContextInterface $context)
{
$setup->startSetup();
$setup->getConnection()->insert(
$setup->getTable('mastering_itdesire_item'),
[
'name' => 'Item 1'
]
);
$setup->getConnection()->insert(
$setup->getTable('mastering_itdesire_item'),
[
'name' => 'Item 2'
]
);
$setup->endSetup();
}
}
A: Try the below code for your InstallData.php file:
<?php
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
namespace Mastering\Itdesire\Setup;
/**
* Description of InstallData
*
* @author pramod
*/
use Magento\Framework\Setup\InstallDataInterface;
use Magento\Framework\Setup\ModuleContextInterface;
use Magento\Framework\Setup\ModuleDataSetupInterface;
class InstallData implements InstallDataInterface
{
/**
* {@inheritdoc}
*/
public function install(ModuleDataSetupInterface $setup, ModuleContextInterface $context)
{
$setup->startSetup();
$setup->getConnection()->insert(
$setup->getTable('mastering_itdesire_item'),
[
'name' => 'Item 1'
]
);
$setup->getConnection()->insert(
$setup->getTable('mastering_itdesire_item'),
[
'name' => 'Item 2'
]
);
$setup->endSetup();
}
}
A: You have declared the namespace twice in your file, like this:
namespace Mastering\Itdesire\Setup;
/**
* Description of InstallData
*
* @author pramod
*/
namespace Mastering\Itdesire\Setup;
Just remove the second namespace declaration and it will work fine.
Keep only the first namespace line:
namespace Mastering\Itdesire\Setup;
Hope it will help you